null
[ "queries", "node-js" ]
[ { "code": "http://localhost:5293/getSomeData\n$ node app.js\nServer started on port 5293\nD:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\utils.js:698\n throw error;\n ^\n\nMongoServerSelectionError: connect ECONNREFUSED ::1:27017\n at Timeout._onTimeout (D:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\core\\sdam\\topology.js:438:30)\n at listOnTimeout (node:internal/timers:569:17)\n at process.processTimers (node:internal/timers:512:7)\nEmitted 'error' event on Database instance at:\n at D:\\...\\myapp\\node_modules\\mongojs\\lib\\database.js:36:16\n at D:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\utils.js:695:9\n at D:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\mongo_client.js:285:23\n at connectCallback (D:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\operations\\connect.js:367:5)\n at D:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\operations\\connect.js:554:14\n at Object.connectHandler [as callback] (D:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\core\\sdam\\topology.js:286:11)\n at Timeout._onTimeout (D:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\core\\sdam\\topology.js:443:25)\n at listOnTimeout (node:internal/timers:569:17)\n at process.processTimers (node:internal/timers:512:7) {\n reason: TopologyDescription {\n type: 'Single',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n address: 'localhost:27017',\n error: Error: connect ECONNREFUSED ::1:27017\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16) {\n name: 'MongoNetworkError'\n },\n roundTripTime: -1,\n lastUpdateTime: 427883836,\n lastWriteDate: null,\n opTime: null,\n type: 'Unknown',\n topologyVersion: undefined,\n minWireVersion: 0,\n maxWireVersion: 0,\n hosts: [],\n passives: [],\n arbiters: [],\n tags: []\n }\n },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: null\n }\n}\n\nNode.js v18.16.0\n\nmongojs: ^3.1.0\nexpress: ^4.18.2\nexpress-session: ^1.17.3\n\ndbAccessRouter.js (local path: myapp\\routers\\):\n// Router: delegates the GET to the db helper.\nconst dbAccess = require('express').Router();\nconst db = require('../db/dbAccess');\n\ndbAccess.get('/', (req, res, next) =>\n{\n db.getData(res, next);\n});\n\nmodule.exports = dbAccess;\n\ndbAccess.js (local path: myapp\\db\\):\nconst db = require('mongojs')('userdata', ['users']);\n\n// Respond with all users, or hand the error / empty result to Express via next().\n// Note: found is undefined when err is set, so only touch it after the check.\nexports.getData = (res, next) =>\n{\n db.users.find({}, (err, found) =>\n {\n\t\tif (err || found.length == 0)\n\t\t{\n\t\t\tnext(err || \"no data found\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\tconsole.log(found.length);\n\t\t\tconsole.log(found);\n\t\t\tres.status(200).json(found);\n\t\t}\n });\n};\n\nindex.js (local path: myapp\\):\n// App entry point: JSON body parsing plus the /fetch router.\nconst express = require('express');\nconst app = express();\n\napp.use(express.json());\nconst dbAccess = require('./routers/dbAccess');\napp.use('/fetch', dbAccess);\n\napp.listen(5293, () =>\n{\n console.log(\"Server started on port 5293\");\n});\n", "text": "I’m having difficulty getting my NodeJS app to connect to my MongoDB instance. I can connect to the database instance (which I am running locally on my computer, where the data is stored on the D: drive) when I use the shell, but it doesn’t connect when I try to connect it via the app. I was able to get my code to work on a different computer, so I know it’s not the application that’s the issue. 
When I try to connect to the server to get a simple fetch request from the browser (using http://localhost:5293/getSomeData), my terminal spits out this error and the app closes: I’ve omitted parts of the filepath for brevity, where there are ellipses (…). I’m not sure what to do, as it works on one Windows machine but not the other. I’m using these modules: The version of Node I’m using is 18.16.0. Any help would be much appreciated! Here is my app that I’m using, if it helps: dbAccessRouter.js ← Local path: myapp\\routers\\; dbAccess.js ← Local path: myapp\\db\\; index.js ← My code starts running from here; local path: myapp\\", "username": "mingtendo_N_A" }, { "code": "mongosh", "text": "Whatever you’re using as a connect string …\nCan you use that same connect string from the same host with mongosh and successfully connect?", "username": "Jack_Woehr" }, { "code": "const db = require('mongojs')('userdata', ['users']);\nmongo\nlocalhost:5293/fetch", "text": "I’m not using a connect string, I think. The closest thing is probably the require('mongojs') line shown above, because I’m using the mongoJS module to connect to my locally hosted server. I can connect to the server using my mongo shell, and it performs CRUD operations that way just fine. I am also able to do the same thing on the other machine where it does work. I’ve mirrored the file structure as closely as I can between the two, and they all have the same modules (and versions of them) installed, so I’m fairly certain it’s not the code itself. The only difference is that the machine where it does work (call it machine A) uses MongoDB version 5.0.9, whereas the machine where it doesn’t work (call it machine B) is using MongoDB version 5.0.18. I’ve tried using different browsers too (such as Firefox and Chrome) but nothing I do seems to make it work on machine B. I try typing in localhost:5293/fetch but the page just times out and the app crashes because it can’t connect to the local server.", "username": "mingtendo_N_A" }, { "code": "const mongojs = require('mongojs')\nconst db = mongojs(connectionString, [collections])\nmongod", "text": "So you use the same connection string for both?\nconst mongojs = require('mongojs')\nconst db = mongojs(connectionString, [collections])\nTaking a wild guess, I’d say your mongod is not listening on the interface through which your code is trying to connect. Check your mongod configuration.", "username": "Jack_Woehr" }, { "code": "C:\nD:\nD:\\data\\db\nC:\nC:\n> db.serverCmdLineOpts()\n{\n \"argv\" : [\n \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\5.0\\\\bin\\\\mongod.exe\",\n \"--config\",\n \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\5.0\\\\bin\\\\mongod.cfg\",\n \"--service\"\n ],\n \"parsed\" : {\n \"config\" : \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\5.0\\\\bin\\\\mongod.cfg\",\n \"net\" : {\n \"bindIp\" : \"127.0.0.1\",\n \"port\" : 27017\n },\n \"service\" : true,\n \"storage\" : {\n \"dbPath\" : \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\5.0\\\\data\",\n \"journal\" : {\n \"enabled\" : true\n }\n },\n \"systemLog\" : {\n \"destination\" : \"file\",\n \"logAppend\" : true,\n \"path\" : \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\5.0\\\\log\\\\mongod.log\"\n }\n },\n \"ok\" : 1\n}\n> db.serverCmdLineOpts()\n{\n \"argv\" : [\n \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\5.0\\\\bin\\\\mongod.exe\"\n ],\n \"parsed\" : {\n \n },\n \"ok\" : 1\n}\nC:\\Program Files\\MongoDB\\Server\\5.0\\data\\\nD:\\data\\db\\", "text": "Yes, the connection string is exactly the same on both machines. 
However, on machine B, the MongoDB libraries are stored on my C: drive, while the data itself is on my D: drive at D:\\data\\db (my C: drive on machine B is quite small unfortunately). On machine A, everything is on the C: drive.I checked my MongoDB configuration, here is what it says for machine B:In contrast to machine A, this is what it says:I suppose this brings me closer to finding out what the problem is (or maybe this is the problem) but I’m not sure what to do to fix it. I don’t want to break MongoDB.Side note: It also seems like the data is being stored (on machine B) at C:\\Program Files\\MongoDB\\Server\\5.0\\data\\? Although, the folder D:\\data\\db\\ isn’t empty either, and their contents look very similar.", "username": "mingtendo_N_A" }, { "code": "", "text": "Does it have anything to do with trying to bind via IPv6? I see the failure talking about ::1", "username": "Jack_Woehr" }, { "code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: C:\\Program Files\\MongoDB\\Server\\5.0\\data\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: C:\\Program Files\\MongoDB\\Server\\5.0\\log\\mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n#processManagement:\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n\n$ mongod\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.441-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.442-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.443-04:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"thread1\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.443-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.444-04:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"thread1\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.445-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"ns\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.445-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"ns\":\"config.tenantMigrationRecipients\"}} \n{\"t\":{\"$date\":\"2023-06-21T17:06:50.445-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.446-04:00\"},\"s\":\"I\", 
\"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":16892,\"port\":27017,\"dbPath\":\"D:/data/db/\",\"architecture\":\"64-bit\",\"host\":\"Infinity\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.446-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.447-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.18\",\"gitVersion\":\"796abe56bfdbca6968ff570311bf72d93632825b\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.447-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 19045)\"}}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.447-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.449-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"D:/data/db/\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.449-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=15806M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.575-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381610:574669][16892:140714226965296], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 10 through 11\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.623-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381610:622685][16892:140714226965296], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 11 through 11\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.681-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381610:681696][16892:140714226965296], txn-recover: [WT_VERB_RECOVERY_ALL] Main recovery \nloop: starting at 10/6400 to 11/256\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.773-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381610:773772][16892:140714226965296], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 10 through 11\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.874-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger 
message\",\"attr\":{\"message\":\"[1687381610:873806][16892:140714226965296], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 11 through 11\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.920-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381610:920697][16892:140714226965296], txn-recover: [WT_VERB_RECOVERY_ALL] Set global recovery timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.921-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381610:920697][16892:140714226965296], txn-recover: [WT_VERB_RECOVERY_ALL] Set global oldest timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:50.923-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381610:922689][16892:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 3181\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.067-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":618}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.067-04:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.071-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.148-04:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.149-04:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.152-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":13,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":13,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.153-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.154-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.493-04:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"D:/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.496-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.498-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:06:51.498-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:07:51.151-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381671:151157][16892:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 3, snapshot max: 3 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 3181\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:08:51.349-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381731:348965][16892:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 6, snapshot max: 6 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 3181\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:09:51.505-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381791:505566][16892:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 8, snapshot max: 8 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 3181\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:10:51.671-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381851:671204][16892:140714226965296], 
WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 10, snapshot max: 10 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 3181\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:11:51.795-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381911:794859][16892:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 12, snapshot max: 12 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 3181\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:12:51.953-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687381971:953133][16892:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 14, snapshot max: 14 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 3181\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:13:52.092-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687382032:92101][16892:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 16, snapshot max: 16 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 3181\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:14:52.299-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687382092:299761][16892:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 18, snapshot max: 18 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 3181\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:15:52.465-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687382152:464879][16892:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 20, snapshot max: 20 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 3181\"}}\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: C:\\Program Files\\MongoDB\\Server\\5.0\\data\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: C:\\Program Files\\MongoDB\\Server\\5.0\\log\\mongod.log\n\n# network interfaces\n\n#processManagement:\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n\ndb.serverCmdLineOpts()127.0.0.127017", "text": "No, I don’t think so. Here’s the contents of the mongod.cfg file:When I boot up the server on machine B, it doesn’t mention anything about IPv6 binding. There’s a single message that says it’s bound to localhost, and that the server is closed to remote connections. 
You can Ctrl+F for the line by typing “This server is bound to localhost” for the exact message.I’ve tried modifying the .cfg file by removing all the networking stuff, so it looks like this:…but it doesn’t seem to do anything, as when I close the server, reboot it, and run the db.serverCmdLineOpts(), it still says that the bound IP address is 127.0.0.1, with port 27017.", "username": "mingtendo_N_A" }, { "code": "", "text": "Nothing funny in the logs?", "username": "Jack_Woehr" }, { "code": "C:\\Program Files\\MongoDB\\Server\\5.0\\log\\{\"t\":{\"$date\":\"2023-06-21T17:48:38.568-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:53174\",\"uuid\":\"ec452de0-f4c0-4a91-94b5-b6614cb2a356\",\"connectionId\":15,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-06-21T17:48:38.569-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn15\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:53174\",\"client\":\"conn15\",\"doc\":{\"application\":{\"name\":\"MongoDB Shell\"},\"driver\":{\"name\":\"MongoDB Internal Client\",\"version\":\"5.0.18\"},\"os\":{\"type\":\"Windows\",\"name\":\"Microsoft Windows 10\",\"architecture\":\"x86_64\",\"version\":\"10.0 (build 19045)\"}}}}\n{\"t\":{\"$date\":\"2023-06-21T17:48:44.980-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687384124:980516][5476:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 10679, snapshot max: 10679 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 5806\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:48:53.200-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn15\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:53174\",\"uuid\":\"ec452de0-f4c0-4a91-94b5-b6614cb2a356\",\"connectionId\":15,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2023-06-21T17:49:45.000-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687384185:11][5476:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 10681, snapshot max: 10681 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 5806\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:50:45.019-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687384245:19677][5476:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 10683, snapshot max: 10683 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 5806\"}}\n{\"t\":{\"$date\":\"2023-06-21T17:51:45.038-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687384305:38178][5476:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 10685, snapshot max: 10685 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 5806\"}}\n", "text": "I hadn’t thought of that. 
Here’s the log from C:\\Program Files\\MongoDB\\Server\\5.0\\log\\ for when I connect to the server using a shell: I just connect to the server, and then exit immediately to get an idea of what it might put in the log. I then try to connect via the app, and this is what got added after the four lines above: just some checkpoints where it saves a snapshot of the server, I assume. So no, I guess there’s nothing funny in the logs. I guess the server isn’t even recognizing that a local client is trying to connect to localhost. The rest of the log file is the same stuff: saving snapshots, and connections to the server by the shell.", "username": "mingtendo_N_A" }, { "code": "db.serverCmdLineOpts()\nlocalhost:5298/fetch\nmongo", "text": "Small update: I was able to check the mongod.cfg on machine A. The configuration file is exactly the same as machine B’s, although when I run db.serverCmdLineOpts() in the shell nothing shows up, as I have shown above. But the CFG is the same, and I am able to connect to it using my app, using a web browser on the same machine (putting localhost:5298/fetch in the URL bar). Strangely, the log file does not log anything, despite my connections to the server, or spinning it up. The last log from the server on machine A was 2 days ago, on 2023-06-19. It notes some activities, like me inserting documents into the database via the mongo shell, but nothing out of the ordinary, and nothing about connecting to the server (I had made the app just yesterday on 2023-06-20, so of course there wouldn’t be any logs about connecting when the last log is from the day before that).", "username": "mingtendo_N_A" }, { "code": "C:\\data\\db", "text": "I’ve tried using a database on the C: drive, but unfortunately that didn’t seem to work either. I think it’s important to mention that when I made a C:\\data\\db folder, MongoDB seems to have been smart enough to clone the database from my D: drive. Yet, for some reason my app still refuses to connect to the database on machine B. 
I feel like this shouldn’t be something difficult to get working, yet it is.", "username": "mingtendo_N_A" }, { "code": "", "text": "You could try a re-install from scratch", "username": "Jack_Woehr" }, { "code": "> db.serverCmdLineOpts()\n{\n \"argv\" : [\n \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\5.0\\\\bin\\\\mongod.exe\"\n ],\n \"parsed\" : {\n\n },\n \"ok\" : 1\n}\nmongo{\"t\":{\"$date\":\"2023-06-22T21:03:15.520-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687482195:519814][5476:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 13947, snapshot max: 13947 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 5806\"}}\n{\"t\":{\"$date\":\"2023-06-22T21:04:15.538-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687482255:537953][5476:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 13949, snapshot max: 13949 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 5806\"}}\n{\"t\":{\"$date\":\"2023-06-22T21:05:15.559-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687482315:558758][5476:140714226965296], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 13951, snapshot max: 13951 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 5806\"}}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.248-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23315, \"ctx\":\"serviceShutdown\",\"msg\":\"Received request from Windows Service Control Manager\",\"attr\":{\"code\":\"SERVICE_CONTROL_STOP\",\"inShutdown\":\"false\"}}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.248-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"serviceShutdown\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.248-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"serviceShutdown\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.248-04:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.248-04:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.249-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784903, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the LogicalSessionCache\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.249-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"-\", 
\"id\":20520, \"ctx\":\"serviceShutdown\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784908, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"serviceShutdown\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"serviceShutdown\",\"msg\":\"Killing all operations for shutdown\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"serviceShutdown\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down all open transactions\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"serviceShutdown\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784916, \"ctx\":\"serviceShutdown\",\"msg\":\"Reacquiring the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784917, \"ctx\":\"serviceShutdown\",\"msg\":\"Attempting to mark clean shutdown\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.250-04:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.251-04:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.251-04:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.251-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.251-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20609, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.251-04:00\"},\"s\":\"I\", 
\"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.251-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.251-04:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684100, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down TTL collection monitor thread\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684101, \"ctx\":\"serviceShutdown\",\"msg\":\"Finished shutting down TTL collection monitor thread\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"serviceShutdown\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784930, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down the storage engine\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"serviceShutdown\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"serviceShutdown\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"serviceShutdown\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22261, \"ctx\":\"serviceShutdown\",\"msg\":\"Timestamp monitor shutting down\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"serviceShutdown\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"serviceShutdown\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.252-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"serviceShutdown\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.253-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"serviceShutdown\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687482331:253275][5476:140714226965296], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 13953, snapshot max: 13953 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 5806\"}}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.285-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"serviceShutdown\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":33}}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.285-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", 
\"id\":22281, \"ctx\":\"serviceShutdown\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.285-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"serviceShutdown\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.285-04:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.285-04:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"serviceShutdown\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n{\"t\":{\"$date\":\"2023-06-22T21:05:31.289-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"serviceShutdown\",\"msg\":\"Now exiting\"}\nD:\\data\\db# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: %MONGO_DATA_PATH%\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: %MONGO_LOG_PATH%\\mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n#processManagement:\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\ndbPathsystemLog.pathC:\\Program Files\\MongoDB\\Server\\5.0\\log\\", "text": "Last time I installed it as a service on machine B. This time I installed it without a service, and I also did not install Mongo Compass. Here is what I got from running the same command to check the configuration options:Okay, that looks good. It certainly looks like what was on machine A. I was also able to start the service on my D: drive, and I could also perform CRUD operations via the mongo shell.Here are the logs:In short, it doesn’t log anything much anymore. The entries in here coincide with me trying to start the server without having created D:\\data\\db first (I deleted it to get as clean a slate as possible). But whenever I log in via the shell, the server doesn’t log it anymore, also like on machine A.Here are also the configuration files.The only thing that seems to have changed between installations is that the dbPath isn’t explicit anymore, and neither is the path for systemLog.path. 
For reference, the path of the log above is C:\\Program Files\\MongoDB\\Server\\5.0\\log\\.Unfortunately, despite reinstalling it, my app still fails to connect to the server.", "username": "mingtendo_N_A" }, { "code": "\\data\\db$ node app.js\nServer started on port 5293\nAttempting GET request\nC:\\Users\\Mingtendo\\Documents\\ExpressCode\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\utils.js:698\n throw error;\n ^\n\nMongoServerSelectionError: connect ECONNREFUSED ::1:27017\n at Timeout._onTimeout (C:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\core\\sdam\\topology.js:438:30)\n at listOnTimeout (node:internal/timers:569:17)\n at process.processTimers (node:internal/timers:512:7)\nEmitted 'error' event on Database instance at:\n at C:\\...\\myapp\\node_modules\\mongojs\\lib\\database.js:36:16\n at C:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\utils.js:695:9\n at C:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\mongo_client.js:285:23\n at connectCallback (C:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\operations\\connect.js:367:5)\n at C:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\operations\\connect.js:554:14\n at Object.connectHandler [as callback] (C:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\core\\sdam\\topology.js:286:11)\n at Timeout._onTimeout (C:\\...\\myapp\\node_modules\\mongojs\\node_modules\\mongodb\\lib\\core\\sdam\\topology.js:443:25)\n at listOnTimeout (node:internal/timers:569:17)\n at process.processTimers (node:internal/timers:512:7) {\n reason: TopologyDescription {\n type: 'Single',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n address: 'localhost:27017',\n error: Error: connect ECONNREFUSED ::1:27017\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16) {\n name: 'MongoNetworkError'\n },\n roundTripTime: -1,\n lastUpdateTime: 2832971,\n lastWriteDate: null,\n opTime: null,\n type: 'Unknown',\n topologyVersion: undefined,\n minWireVersion: 0,\n maxWireVersion: 0,\n hosts: [],\n passives: [],\n arbiters: [],\n tags: []\n }\n },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: null\n }\n}\n\nNode.js v18.16.0\n", "text": "I’ve tried using a database on the C: drive. I made a new \\data\\db folder, and unlike before, it did not seem to copy the databases on my D: drive. That’s interesting. But my app still failed to connect to the server.So then I tried using both a database on the C: drive, and starting the project from the C: drive. I copied all the project files over, and started the server. This did not work either. Here’s the output:I replaced some of the filepath with elipses for brevity. It’s the exact same error. I’m starting to think that perhaps there’s something with my firewall? Or perhaps I still didn’t install it correctly? I don’t understand why it works on machine A, yet not on machine B. Maybe I need to use version 5.0.9, since 5.0.18 has issues?", "username": "mingtendo_N_A" }, { "code": "mongod.exenode.exe", "text": "I installed MongoDB version 5.0.9 from the archives. You have to navigate to the actual path of the executable to run it via the terminal, but it worked, and my old data was preserved. I ran it on my D: drive, but it still wouldn’t connect. 
At this point, I decided to go into my firewall settings. I opened Windows Defender Firewall and added the mongod.exe executable to the list of allowed apps. I also noticed that not all of my node.exe apps were checked? So, just for testing’s sake, I marked them as checked too. Unfortunately, that did not work either. I’m not sure what else I can do at this point to make it work. Perhaps I can just try using the latest stable version of MongoDB. I just wish it wasn’t this difficult.", "username": "mingtendo_N_A" }, { "code": "mongod --ipv6", "text": "Good news! I have found a solution. I found a Reddit page that mentioned trying IPv6. In case the Reddit comment gets deleted, I’ll put this here:\nCrypt0n95 · 11 mo. ago: Are you sure your mongodb is listening on the ipv6 interface of your loopback device? If I remember correctly, mongod is not listening on ipv6 interfaces by default. I am not familiar with the JS driver but trying to connect via IPv6 seems odd to me too.\nTry using the IPv4 address of your loopback device explicitly by changing “localhost” to “127.0.0.1” in your connection string, OR make sure mongod is listening on IPv6 addresses too.\nHowever, this is a workaround, since it works on machine A (although I have no idea how MongoDB was installed on machine A because I didn’t do it myself). Is there a way to set up MongoDB without having to type mongod --ipv6 every time?", "username": "mingtendo_N_A" }, { "code": "", "text": "I had the same issue after I updated to Node.js 18", "username": "fation" }, { "code": "", "text": "@Tarun_Gaur do you have any help on this problem?", "username": "Jack_Woehr" }, { "code": "mongo.cfg:\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: %MONGO_DATA_PATH%\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: %MONGO_LOG_PATH%\\mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1, ::1\n ipv6: true\n\n\n#processManagement:\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n\ntrue\nmongod\nmongod --ipv6\ndb.serverCmdLineOpts()\n> db.serverCmdLineOpts()\n{\n \"argv\" : [\n \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\5.0\\\\bin\\\\mongod.exe\"\n ],\n \"parsed\" : {\n\n },\n \"ok\" : 1\n}\n> exit; \n", "text": "I looked at the documentation and found that you should be able to enable IPv6 support by default by setting the proper options. So, I went into my mongo.cfg file stored in the same place on my C: drive, and put this in it: At least to my knowledge, ::1 is the localhost for IPv6. I also set the ipv6 boolean to true. However, when I start the server by using mongod, I am unable to connect to the server from my app. However, doing mongod --ipv6 still works. I’m confused as to why this didn’t work, because I’m pretty sure I enabled everything I needed to have IPv6 support by ‘default’ without needing to specify any arguments when spinning up the server. I did a db.serverCmdLineOpts() when I started it after the CFG file modification, and it spit out this: …which does look like what was on machine A. To be clear, I did not install MongoDB as a Windows service this time, so that’s probably why, as before (you can check in Post #4) the “argv” had some commands in it relating to service options. 
Perhaps the CFG file is only read if you install MongoDB as a service? Furthermore, on machine A the connection to the server via the app worked just fine without me needing to specify IPv6 support (in the CFG file or as an argument), and I used NodeJS version 18.15 on it. I still want to figure out why MongoDB performs so differently between the two, but right now I at least want the configuration file to work, which it doesn’t.", "username": "mingtendo_N_A" } ]
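A closing note on this thread: two facts explain most of what was observed above. First, starting with Node.js 17 the resolver returns addresses in the order the operating system provides them, so "localhost" can resolve to the IPv6 loopback ::1 first, while this mongod was listening only on the IPv4 loopback 127.0.0.1 — hence "connect ECONNREFUSED ::1:27017" on Node 18 but not on older setups. A minimal client-side sketch of the fix, reusing the thread's own userdata database and users collection, is to name the IPv4 loopback explicitly:

// Connect mongojs to 127.0.0.1 directly so the localhost -> ::1
// resolution order on Node 17+ no longer matters.
const db = require('mongojs')('127.0.0.1/userdata', ['users']);

Second, on the final question: mongod does not read a config file unless one is passed with --config (or -f); the Windows MSI service install registers the service with --config, which is why the file appeared to be read only for the service install. Starting the server with something like mongod --config "C:\Program Files\MongoDB\Server\5.0\bin\mongod.cfg" applies the net.ipv6 and bindIp settings without retyping --ipv6 each time.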
Can't connect to local MongoDB server using MongoJS on Windows
2023-06-21T14:54:49.868Z
Can’t connect to local MongoDB server using MongoJS on Windows
1,207
https://www.mongodb.com/…4_2_1024x313.png
[]
[ { "code": "", "text": "\nScreenshot from 2023-06-22 11-24-39 1514×464 119 KB\n", "username": "Bhooshan_Mate" }, { "code": "", "text": "Here is my mongod.service file\n\nScreenshot from 2023-06-22 11-27-32 747×696 70.5 KB\n", "username": "Bhooshan_Mate" }, { "code": "", "text": "The error indicates the environment line is bad. What version have you installed and what OS?", "username": "chris" }, { "code": "", "text": "Hi @Bhooshan_Mate,\nAfter a quick search on Google, I’ve found a check to do for your error. The fastest way to try to resolve the problem is to reinstall the package, because the user created by the package manager seems to have some problems.\nRegards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to run MongoDB on Ubuntu Jammy
2023-06-22T06:21:57.327Z
Unable to run MongoDB on Ubuntu Jammy
469
null
[ "replication" ]
[ { "code": "", "text": "Hello there,\nI have this situation after I configured a replica set. I don’t know what I did, but the member[n]._id sequence doesn’t match the member[n].host sequence.\n|IP Address |Host name |Member._id |Member[n] |Priority |Hidden |Site |Role|\n|.92.252 |HOST01\t |0\t\t |0\t\t |100\t |FALSE |A |Primary|\n|.95.253 |HOST06\t |5\t\t |1\t\t |0.2 |FALSE |B |Secondary|\n|.95.254 |HOST07\t |6\t\t |2\t\t |0 |TRUE |B |Secondary|\n|.95.252 |HOST05\t |4\t\t |3\t\t |0.3\t |FALSE |B |Secondary|\n|.95.251 |HOST04\t |3\t\t |4\t\t |0.5\t |FALSE |B |Secondary|\n|.92.253 |HOST03\t |2\t\t |5\t\t |40\t |FALSE |A |Secondary|\n|.92.254 |HOST02\t |1\t\t |6\t\t |50\t |FALSE |A |Secondary|\nAs you can see, 3 nodes are in site A and 4 nodes are in a DR site. The customer wants to do the failover to the DR site manually, which I did by changing the priority of the nodes. After the change, HOST04 will have priority 100 and become primary, with the following secondaries: HOST05: 50, HOST06: 40, HOST01: 0.5, HOST02: 0.3, HOST03: 0.2.\nI would like to know whether there could be any problem with replication of the data, or any other kind of problem, considering that members[n]._id doesn’t match the member[n].host order.", "username": "Enrico_Bevilacqua1" }, { "code": "max(_id) +1_id", "text": "No. If you remove a node and add one, its id will be max(_id) + 1.\nJust don’t rely on _id OR the index order of the members array matching a specific host when you’re writing scripts or documentation.", "username": "chris" }, { "code": "", "text": "Thank you for your reply.", "username": "Enrico_Bevilacqua1" } ]
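Since the _id values and the array order can drift apart like this, any scripted priority change should select members by their host value rather than by position. A minimal mongosh sketch of the failover reconfiguration described above (host names abbreviated as in the table; run this against the primary):

// Look the member up by host, never by _id or array index,
// then write the new priority back with rs.reconfig().
cfg = rs.conf()
const i = cfg.members.findIndex(m => m.host.startsWith("HOST04"))
cfg.members[i].priority = 100
rs.reconfig(cfg)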
Replica set configuration
2023-06-23T16:23:19.933Z
Replica set configuration
552
https://www.mongodb.com/…3_2_1024x576.png
[ "aggregation", "jakarta-mug" ]
[ { "code": "Pre-Sales Solutions Architect\nCo-Founder CodePolitan", "text": "\nmug-jakarta 1920×1080 239 KB\nJakarta MongoDB User Group is thrilled to announce our first meetup, in collaboration with Codepolitan, on June 24th. Join us for an informative and interactive event featuring two sessions. The first session, \"See in Action,\" will provide a quick introduction to MongoDB and MongoDB Atlas through live demos. Discover the power of MongoDB Atlas and learn how to set up your own free-forever MongoDB Atlas cluster. In the second session, \"Do and Learn,\" we will guide you through adding a search capability to your app using MongoDB Atlas Search. Watch a live demo where we build a movie finder app with MongoDB, covering essential topics such as MongoDB's document data storage model, aggregation pipeline, and search capabilities. We also have a Networking Time planned, offering you the opportunity to connect with fellow developers, customers, architects, and experts in the region. And of course, there will be trivia, swag, and dinner to make the evening even more enjoyable! Whether you're a beginner or already have some experience with MongoDB, this meetup has something for everyone. Mark your calendars and join us for an insightful and fun-filled event. We can't wait to meet you all! To RSVP - Please click on the \"✓ RSVP\" link at the top of this event page if you plan to attend. The link should change to a green highlighted button if you are going. You need to be signed in to access the button. Event Type: In-Person\nLocation: Ariobimo Sentral Level 8, Jalan H. R. Rasuna Said Kav X-2 No. 5, Kuningan Timur, Setiabudi, Jakarta Selatan 12950\nPre-Sales Solutions Architect\n\nimage 679×679 81.8 KB\nCo-Founder CodePolitan", "username": "Harshit" }, { "code": "", "text": "Would be a great first event!\nThanks for embracing Jakarta, Indonesia, and engaging the Community. Looking forward to it ", "username": "Muhammad_Singgih_Z.A" }, { "code": "", "text": "Hello All,\nWe are excited to see you today! Sessions will start at 10:00 AM. Please make sure to be on time, and get some time to meet the speakers and other attendees. Looking forward to seeing you all at the event today!", "username": "Harshit" } ]
Jakarta MUG: MongoDB Inaugural Meetup with Codepolitan!
2023-06-12T17:12:01.108Z
Jakarta MUG: MongoDB Inaugural Meetup with Codepolitan!
3,593
https://www.mongodb.com/…_2_1024x449.jpeg
[ "replication" ]
[ { "code": "", "text": "Hi,\nI am using MongoDB 4.0.26 with a 3-node replica set that holds nearly 500GB of data. Sometimes one node in particular crashes, and data corruption occurs. An initial sync is not a complete solution, because even if I do an initial sync after deleting the data files, it crashes again one or two months later. My disk is an SSD. When I check the log file, I can see checksum errors. What is the reason, and what is a permanent solution?\nScreen Shot 2023-06-20 at 19.55.14 1920×842 340 KB\n\nScreen Shot 2023-06-20 at 20.05.36 1920×709 272 KB\n", "username": "Yunus_Dal" }, { "code": "", "text": "This is likely a storage issue. Start with a health check on the SSD.", "username": "chris" } ]
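Beyond a hardware health check, the extent of the damage can be confirmed from inside MongoDB before deciding between another initial sync and replacing the disk. A hedged mongosh sketch (the database and collection names are placeholders) using the validate command, which scans a collection's data files and indexes and reports the same class of checksum errors seen in the log:

// Run on the member that logs the WiredTiger checksum errors.
// "full: true" forces a deeper (and slower) structural scan.
db.getSiblingDB("mydb").runCommand({ validate: "mycollection", full: true })

Recurring corruption after clean re-syncs on the same host usually points at the storage stack (disk, controller, or filesystem) rather than at mongod itself, which is why the SSD check comes first.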
Mongodb data corruption
2023-06-23T11:29:53.369Z
Mongodb data corruption
384
null
[ "aggregation", "queries" ]
[ { "code": " \"items\": {\n    \"dynamic\": false,\n    \"fields\": {\n      \"position\": {\n        \"type\": \"stringFacet\"\n      }\n    },\n    \"type\": \"embeddedDocuments\"\n  }\nYour index could not be built: Unexpected error: Field must have either TokenStream, String, Reader or Number value; got SortedSetDocValuesFacetField\n", "text": "I need to facet on fields within a collection of documents. For example, the mappings look like this: When creating these mappings I’m getting this error: Are stringFacets not supported inside embeddedDocuments?", "username": "Luke_Snyder" }, { "code": "FacetField", "text": "This is happening because of the text index defined over the field/document. The error is coming from Lucene. MongoDB Atlas Search is a full-text search feature that is built on top of Lucene.", "username": "Anuj_Garg" }, { "code": "", "text": "Thanks Anuj,\nI’m aware it’s built on Lucene and that’s the source of the underlying error. However, according to the documents this should work. The docs note that numeric and date facets are NOT supported on embeddedDocuments, but do not mention incompatibility with stringFacet.\nI need to know if stringFacet is also NOT supported and is accidentally omitted by the documentation, or if there is something wrong or missing in my syntax to get it working.", "username": "Luke_Snyder" }, { "code": "stringFacet", "text": "The error you are getting is because you are trying to create a stringFacet on a field that is an embedded document. MongoDB Atlas does not support faceting on embedded documents.", "username": "Anuj_Garg" }, { "code": "embeddedDocuments", "text": "I’m attempting the stringFacet on a field within an embeddedDocument.\nThe limitations state:\nBut they fail to mention string faceting, if this is indeed ALSO a limitation.", "username": "Luke_Snyder" }, { "code": "", "text": "As per my understanding, MongoDB Atlas does not support faceting on embedded documents, regardless of the type of facet.", "username": "Anuj_Garg" }, { "code": "", "text": "Gotcha. Then hopefully someone from the Mongo team can hop in here and confirm it is indeed an inaccuracy in the documentation, and that it should state that no faceting of any kind is supported within embeddedDocuments.", "username": "Luke_Snyder" } ]
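For anyone hitting the same build error: faceting does work when the facet field is indexed at the document root, so a common workaround is to copy (or promote) the field out of the embedded documents and facet on the top-level copy. A hedged sketch, assuming a hypothetical index named "default" that maps a root-level position field as stringFacet — these names are illustrative, not from the thread:

// $searchMeta with the facet collector over a top-level stringFacet field.
db.items.aggregate([
  {
    $searchMeta: {
      index: "default",
      facet: {
        operator: { exists: { path: "position" } },
        facets: {
          positionFacet: { type: "string", path: "position" }
        }
      }
    }
  }
])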
Are stringFacets supported inside embeddedDocuments?
2023-06-22T20:25:20.884Z
Are stringFacets supported inside embeddedDocuments?
590
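A hedged workaround sketch for the thread above: since faceting is not supported inside embeddedDocuments, one option is to mirror the embedded values onto a top-level array field and facet on that instead. The collection name, the index name "default", and the mirror field itemPositions are assumptions for illustration, not taken from the thread.

// Assumed index mapping: { "mappings": { "dynamic": false,
//   "fields": { "itemPositions": { "type": "stringFacet" } } } }
db.products.aggregate([
  {
    $searchMeta: {
      index: "default",
      facet: {
        operator: { exists: { path: "itemPositions" } },
        facets: {
          positions: { type: "string", path: "itemPositions", numBuckets: 10 }
        }
      }
    }
  }
])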
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "const customerSchema = new mongoose.Schema({\n name: {\n type: String,\n required: [true, 'Error: Name is required!']\n },\n stores: [\n {\n store: {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'Store'\n },\n points: {\n type: Number\n }\n }\n ]\n})\n", "text": "Hi , I am new to MongoDB, I am using Mongoose to create this schema model:\nI am looking for to write a query that gives me the stores of a customer with customer ID “12345” that has stores points greater than 5.", "username": "Andy_Azma" }, { "code": "Customer.findOne({ _id: customerId })\n .populate({\n path: \"stores.store\",\n match: { points: { $gt: 5 } }\n })\n", "text": "You can refer from here Mongoose v7.3.1: Query Population", "username": "Anuj_Garg" }, { "code": "Customer.findOne({ _id: customerId })\n .populate({\n path: \"stores.store\",\n match: { points: { $gt: 5 } }\n })\nCustomer.findOne({ _id: '64911dbfc087bb01145cd6b3' })\n .populate({\n path: \"stores.store\",\n match: { points: { $gt: 5 } }\n })\n .then(result => console.log(result))\n .catch(e => console.log(e.message))\n{\n _id: new ObjectId(\"64911dbfc087bb01145cd6b3\"),\n name: 'Reza 8',\n stores: [\n {\n store: null,\n points: 3,\n _id: new ObjectId(\"649126d487886c930fd30f6e\")\n },\n {\n store: null,\n points: 10,\n _id: new ObjectId(\"649127244d57657bd134d941\")\n }\n ],\n __v: 0\n}\n", "text": "Hi Anuj,\nThanks for your suggestion, I tried it but it seems it gives me all the stores, not only the one that has more than 5 points. Here is the code I have:and this is what I got as result:it shows the store that has 3 points that is not correct. Do you have any suggestion please? I appreciate it", "username": "Andy_Azma" } ]
Please help me with this query
2023-06-23T04:29:50.866Z
Please help me with this query
352
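A hedged follow-up to the unresolved question above: populate's match filters the referenced Store documents, but points lives on the customer's embedded stores array, which is why every store came back. A minimal sketch that filters the array itself with an aggregation $filter; the Customer model name is taken from the thread, everything else is assumed.

const mongoose = require('mongoose');

// Return the customer with only the stores entries whose points > minPoints
async function storesWithPointsAbove(customerId, minPoints) {
  const [customer] = await Customer.aggregate([
    { $match: { _id: new mongoose.Types.ObjectId(customerId) } },
    {
      $addFields: {
        stores: {
          $filter: {
            input: '$stores',
            as: 's',
            cond: { $gt: ['$$s.points', minPoints] }
          }
        }
      }
    }
  ]);
  return customer;
}

If the referenced Store documents are still needed, Model.populate can typically be applied to the aggregation result afterwards, e.g. Customer.populate(customer, { path: 'stores.store' }).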
null
[ "performance" ]
[ { "code": "", "text": "Hello, Everyone! I just started to learn MongoDB and it’s just been Over A week. When I started to learn development i created Web Designing site first. Now I want to implement MongoDb On that site but had some queries related to it. Kindly Dont Troll me as just I am a newbi.Thanks", "username": "Dr_Developer" }, { "code": "", "text": "The official MongoDB documentation provides comprehensive information about indexing in MongoDB. It explains the benefits of indexing, different types of indexes, index creation, and query optimization. You can refer to the following link: MongoDB Indexing Documentation", "username": "Anuj_Garg" }, { "code": "", "text": "Oh okey\nthank you Anuj_Garg for the documentation. Ill check it outThanks again", "username": "Dr_Developer" } ]
What is an index in MongoDB, and why is it important for performance optimization?
2023-06-22T16:13:20.792Z
What is an index in MongoDB, and why is it important for performance optimization?
619
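To make the linked documentation concrete, a minimal mongosh sketch; the collection and field names are invented for illustration. An index is a sorted lookup structure that lets the query planner avoid scanning every document.

// Create a single-field index on "name"
db.products.createIndex({ name: 1 })

// Compare the query plan: without the index this is a COLLSCAN
// (every document examined); with it, an IXSCAN touching few documents.
db.products.find({ name: "Watermelon" }).explain("executionStats")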
null
[ "node-js", "mongoose-odm" ]
[ { "code": "", "text": "X:\\BackEnd Web Development\\fruits\\node_modules\\mongoose\\lib\\connection.js:755\nerr = new ServerSelectionError();\n^MongooseServerSelectionError: connect ECONNREFUSED ::1:27017\nat _handleConnectionErrors (X:\\BackEnd Web Development\\fruits\\node_modules\\mongoose\\lib\\connection.js:755:11)\nat NativeConnection.openUri (X:\\BackEnd Web Development\\fruits\\node_modules\\mongoose\\lib\\connection.js:730:11) {\nreason: TopologyDescription {\ntype: ‘Unknown’,\nservers: Map(1) {\n‘localhost:27017’ => ServerDescription {\naddress: ‘localhost:27017’,\ntype: ‘Unknown’,\nhosts: ,\npassives: ,\narbiters: ,\ntags: {},\nminWireVersion: 0,\nmaxWireVersion: 0,\nroundTripTime: -1,\nlastUpdateTime: 181899026,\nlastWriteDate: 0,\nerror: MongoNetworkError: connect ECONNREFUSED ::1:27017\nat connectionFailureError (X:\\BackEnd Web Development\\fruits\\node_modules\\mongodb\\lib\\cmap\\connect.js:383:20)\nat Socket. (X:\\BackEnd Web Development\\fruits\\node_modules\\mongodb\\lib\\cmap\\connect.js:307:22)\nat Object.onceWrapper (node:events:628:26)\nat Socket.emit (node:events:513:28)\nat emitErrorNT (node:internal/streams/destroy:151:8)\nat emitErrorCloseNT (node:internal/streams/destroy:116:3)\nat process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\ncause: Error: connect ECONNREFUSED ::1:27017\nat TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16) {\nerrno: -4078,\ncode: ‘ECONNREFUSED’,\nsyscall: ‘connect’,\naddress: ‘::1’,\nport: 27017\n},\n[Symbol(errorLabels)]: Set(1) { ‘ResetPool’ }\n},\ntopologyVersion: null,\nsetName: null,\nsetVersion: null,\nelectionId: null,\nlogicalSessionTimeoutMinutes: null,\nprimary: null,\nme: null,\n‘$clusterTime’: null\n}\n},\nstale: false,\ncompatible: true,\nheartbeatFrequencyMS: 10000,\nlocalThresholdMS: 15,\nsetName: null,\nmaxElectionId: null,\nmaxSetVersion: null,\ncommonWireVersion: 0,\nlogicalSessionTimeoutMinutes: null\n},\ncode: undefined\n}Node.js v18.15.0", "username": "Rahul_pal" }, { "code": "", "text": "Hello @Rahul_palCan you please provide the Node.JS script you’re using?\nAre you using the MongoDB Node.JS Driver? I see you’re using Mongoose via the directory.\nWhat type of app is this? 
What is this running on?", "username": "Brock" }, { "code": "", "text": "Use 127.0.0.1 rather than localhost. The ::1:27017 in the error indicates that localhost resolves to the IPv6 address on your machine.", "username": "steevej" }, { "code": "const mongoose = require('mongoose');\nmongoose.connect(\"mongodb://localhost:<PortNumberHereDoubleCheckPort>/<DatabaseName>\", {useNewUrlParser: true});\nconst <nameOfDbschemahere> = new mongoose.Schema({\n    name: String,\n    rating: String,\n    quantity: Number,\n    someothervalue: String,\n    somevalue2: String,\n});\n\nconst Fruit<Assuming as you call it FruitsDB> = mongoose.model(\"nameOfCollection\" , <nameOfSchemeHere>);\n\nconst fruit = new Fruit<Because FruitsDB calling documents Fruit for this>({\n    name: \"Watermelon\",\n    rating: 10,\n    quantity: 50,\n    someothervalue: \"Pirates love them\",\n    somevalue2: \"They are big\",\n});\nfruit.save();\n\n", "text": "In MongoDB when logged in via terminal:\nshow dbs\nSelect the DB (in this case, whatever you named the database) by typing: use databasename\nshow collections\nYou should then see the fruits collection.\ndb.fruits.find() will then pull up all documents in the fruits collection.", "username": "Brock" }, { "code": "", "text": "tried it but it gets stuck every time", "username": "Rahul_pal" }, { "code": "", "text": "@Brock It's just a 2-line script requiring mongoose and connecting it.\nI just started learning MongoDB and it's my first MongoDB project trying to connect with Node.js", "username": "Rahul_pal" }, { "code": "", "text": "It should work with 127.0.0.1\nWhat error are you getting? A timeout or something else?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "@Ramachandra_Tummala I waited for 5 minutes and it still gave me nothing; it just freezes.", "username": "Rahul_pal" }, { "code": "leCheckPort>/<DatabaseName>\", {useNewUrlParser: true});", "text": "Is your code complete?\nAfter useNewUrlParser: true there is a comma.\nDo you have any other parameter before the closing brackets?\nMaybe it is waiting for more input?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "That's bizarre; it's working fine in my local environment, and there shouldn't be other parameters needed. The comma is because it's continuing with the parser.That said, there's nothing returning because nothing is there, @Rahul_pal. You need to build the schema, the DB, etc.@Rahul_pal, if it's been done correctly, you can do the following to check the work:In MongoDB when logged in via terminal:\nshow dbs\nSelect the DB (in this case, whatever you named the database) by typing: use databasename\nshow collections\nYou should then see the fruits collection.\ndb.fruits.find() will then pull up all documents in the fruits collection.What the freeze is indicating is that there is nothing to CD into; you need to not just "connect" it, but also "build" it.", "username": "Brock" }, { "code": "", "text": "Hey, also make sure that the port is correct in the configs, and use ls to see if you're in the right directory for that project, too.", "username": "Brock" }, { "code": "", "text": "It worked for me, Thanks", "username": "Ehsaan_Mondal" }, { "code": "", "text": "hi Rahul,\nI too am facing the issue that I can't get the data from the local database into the Hyper shell when I console.log. Can you help me with this error? And did your problem get solved?", "username": "chandra_kiran1" } ]
My MongoDB server is running locally, I can execute my commands in the shell, but when I try to connect MongoDB with Node.js it gives me an error
2023-03-23T20:11:09.667Z
My MongoDB server is running locally, I can execute my commands in the shell, but when I try to connect MongoDB with Node.js it gives me an error
2,234
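Condensing the fix that worked in this thread — connect to the IPv4 loopback address explicitly instead of localhost — into a minimal mongoose sketch; the database name fruitsDB is an assumption.

const mongoose = require('mongoose');

// 127.0.0.1 avoids "localhost" resolving to the IPv6 address ::1,
// which the local mongod may not be listening on.
mongoose
  .connect('mongodb://127.0.0.1:27017/fruitsDB')
  .then(() => console.log('Connected to MongoDB'))
  .catch((err) => console.error('Connection failed:', err.message));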
https://www.mongodb.com/…4_2_1024x805.png
[]
[ { "code": "", "text": "so i have my port 27017 and it is secondary (i’ve done something in the past to it but i forgot), i cant change it back to primary. Plz help me\n( I have try rs.stepDown() and reconfig but it always said that cant run those command on secondary must be on primary)\nThis is my rs.status()\n\nimage1780×1400 181 KB\n\nHow can I change the 27017 to Primary.", "username": "Minh_Nguy_n1" }, { "code": "", "text": "In the current state you can’t, and the reason is because you only have 1 of 3 nodes available. When you don’t have a majority of nodes healthy the cluster can’t have a primary. You need to bring up one of the other nodes to a healthy state. Then you will have 2 of 3 nodes up so you will have a Primary and a Secondary.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Thanks, well i guess i have to reinstall it all again.", "username": "Minh_Nguy_n1" }, { "code": "", "text": "Why do you have to reinstall? Can you not just start the mongod processes on the server?", "username": "tapiocaPENGUIN" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Change port 27017 to PRIMARY
2023-06-20T14:06:41.817Z
Change port 27017 to PRIMARY
679
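A small mongosh sketch of the health check implied by the answer above — run it against the surviving member. Once a second mongod is started and reports healthy, the set regains a majority and can elect a primary.

// Summarize each replica set member's state
rs.status().members.forEach(m =>
  print(`${m.name} -> ${m.stateStr} (healthy: ${m.health === 1})`)
)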
null
[]
[ { "code": "{ \"profile\": { \"dept\" : \"Other Healthcare\", \"iid\" : \"ams_sales\", \"title\" : \"Other\" }} _id: 2, Item_qnty: 10{ \"profile\": { \"dept\" : \"Healthcare\", \"iid\" : \"sales\", \"title\" : \"Healthcare\" }db.test.findOne().Item_qnty ", "text": "Hello Team, I have below kind of data:{ _id: 1,\n \"profile\": { \"dept\" : \"Other Healthcare\", \"iid\" : \"ams_sales\", \"title\" : \"Other\" } “Item_qnty”: 0\n} {\n _id: 2, “profile”: { “dept” : “Other Healthcare”, “iid” : “ams_sales”, “title” : “Other” }\n Item_qnty: 10}\n{ _id: 3,\n \"profile\": { \"dept\" : \"Healthcare\", \"iid\" : \"sales\", \"title\" : \"Healthcare\" } Item_qnty: 6\n`}I have to extract only Item_qnty for each document in a loop.\nI tried below statement but as it is findone so it returns only one value:db.test.findOne().Item_qnty 6I have huge volume of records/documents. I need only value from each document in a loop.\n`0, 10, 6", "username": "Shashank_Shekhar5" }, { "code": "", "text": "", "username": "Kobe_W" }, { "code": "", "text": "Thank you… but unfortunately db.collection.find() has no solution. I already mentioned in my queries. though I create a function and got the result, not exactly with the value but by some other way.", "username": "Shashank_Shekhar5" }, { "code": "db.test.findOne().project({Item_qnty: 1, _id: 0});\n", "text": "You will need to understand the concept of projection._id is default to ON so we need to set it OFF, all other fields default OFF so we set Item_qnty to ON.", "username": "Anuj_Garg" }, { "code": "", "text": "Many thanks for your time, contribution and support, but it doesn’t meet the requirement. I am getting below result:\n{\nItem_qnty: 0\n}\n{\nItem_qnty: 10\n}I dont want KEY “Item_qnty” I only want value 0,10,6 nothing else", "username": "Shashank_Shekhar5" }, { "code": "db.collection.aggregate([\n { $group: { _id: null, items: { $push: \"$Item_qnty\" } } }\n])\n", "text": "", "username": "Anuj_Garg" }, { "code": "for (inst in all_insts) \n{\n log.info('enforcing retention for institution %s', inst.id)\n users = User.find_by_iid(inst.id);\n}\n /* // gather up user id's and notify once\n notify_user_ids = set();\n for (user in users) {\n log.info('enforcing retention for user %s', user._id);\n }\n\t*/\n \n\n\n// Perform further operations with the watermark\n// ...\n", "text": "Sir, as I mentioned in earlier reply to someone else that I wrote a function and achieve whatever I want. I didnot get exactly the value of a Key but the end goal has been achieved. I need value to keep in loop and deduct the same from another value. Below code is working fine for me and my requirement: const currDate = new Date();\n// Array of collection names\nconst collectionNames = db.getCollectionNames();\n// Iterate over each collection\nfor (const collectionName of collectionNames)\n{\nconst collection = db.getCollection(collectionName);\n// Get documents with retention_days field\nconst documents = collection.find({ retention_days: { $exists: true } });\n// Iterate over each document\ndocuments.forEach (document =>\n{\nconst retentionDays = document.retention_days;\nconst watermark = new Date(currDate.getTime() - (retentionDays));\nall_insts = [collectionName] + list(collectionName.descendants)});\n}your code also gave me nearly requested output as below:\n{\n_id: null,\nretention_days: [\n10,\n0\n]\n}", "username": "Shashank_Shekhar5" } ]
Get value from a dictionary using MongoDB
2023-06-22T07:38:50.107Z
Get value from a dictionary using MongoDB
501
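For readers with the same need — the bare values without their key — a minimal mongosh sketch against the thread's db.test collection:

// Print each Item_qnty value on its own line: 0, 10, 6
db.test.find({}, { Item_qnty: 1, _id: 0 })
  .forEach(doc => print(doc.Item_qnty))

// Or collect the values into a plain array for further arithmetic
const quantities = db.test
  .find({}, { Item_qnty: 1, _id: 0 })
  .toArray()
  .map(d => d.Item_qnty)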
null
[ "compass", "atlas-cluster" ]
[ { "code": "", "text": "Hello team, when I try to use vscode to connect Altas always appear errors below:\n“Unable to connect: connection to 34.238.182.221:27017 closed”. I do no know how to check the detail errors. But when I tried to use Compass ,it is ok. Why this happened? And my connect link is :“mongodb+srv://myAtlasDBUser:[email protected]”. is it the same as Compass ,right? Thanks for your help~", "username": "lqjyxy_andy" }, { "code": "", "text": "Yes, the connection string is the same one you’d use in Compass. Are you able to connect with Compass?", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Thanks Massimiliano, the compass is still good on windows 11.Then I think it need to install on Ubuntu on my WSL first, then run the below command is working. (mongosh -u myAtlasDBUser -p XXXXX mongodb+srv://myAtlasDBUser:[email protected]/sample_airbnb). I finally replacement the pure windows ENV to Linux ", "username": "lqjyxy_andy" } ]
Vscode connect MongoDB Atlas
2023-06-19T22:41:17.428Z
Vscode connect MongoDB Atlas
511
null
[ "replication" ]
[ { "code": "2023-06-16T11:44:28.459+05:30 INFO 68004 --- [primary-1:27017] org.mongodb.driver.cluster : Rediscovering type of existing primary mongo-stack-secondary-1:27017\n2023-06-16T11:44:28.459+05:30 INFO 68004 --- [primary-1:27017] org.mongodb.driver.cluster : Discovered replica set primary mongo-stack-primary-1:27017 with max election id 7fffffff0000000000000060 and max set version 2\n2023-06-16T11:44:38.068+05:30 INFO 68004 --- [arbitar-1:27017] org.mongodb.driver.cluster : Rediscovering type of existing primary mongo-stack-primary-1:27017\n2023-06-16T11:44:38.069+05:30 INFO 68004 --- [arbitar-1:27017] org.mongodb.driver.cluster : Discovered replica set primary mongo-stack-arbitar-1:27017 with max election id 7fffffff0000000000000060 and max set version 2\n2023-06-16T11:44:38.069+05:30 INFO 68004 --- [condary-1:27017] org.mongodb.driver.cluster : Rediscovering type of existing primary mongo-stack-arbitar-1:27017\n2023-06-16T11:44:38.069+05:30 INFO 68004 --- [condary-1:27017] org.mongodb.driver.cluster : Discovered replica set primary mongo-stack-secondary-1:27017 with max election id 7fffffff0000000000000060 and max set version 2\n", "text": "Hello Everyone,Could someone explain these logs? I see these logs every sec.", "username": "Raghu_Kiran_Koduri" }, { "code": "", "text": "Hello @Raghu_Kiran_KoduriI believe those are informational messages showing the monitoring logs for replica set. Please refer below links for more informationI see these logs every secAs per the logs shared, it seems like these logs are generated after 10 seconds, is something a miss here?\nAre you facing any issues?Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "Rediscovering type of existing primary mongo-stack-primary-1:27017\n2023-06-16T11:44:38.069+05:30 INFO 68004 --- [arbitar-1:27017] org.mongodb.driver.cluster : Discovered replica set primary mongo-stack-arbitar-1:27017 with max election id 7fffffff0000000000000060 and max set version 2\n", "text": "Thanks for the reply @Tarun_Gaur If these logs are expected, I think these logs statements should be at debug level as they are occurring every 10 secs. Or, since these logs are occurring very frequently is there anything which I need to worry about my replica set configuration or my cluster setup.The below logs are from arbitar. “Discovered replica set primary mongo-stack-arbitar-1” how come arbitar become primary.From the logs I am under an impression that primary node thinks the primary is “mongo-stack-primary-1” and arbitar node thinks the primary is \" mongo-stack-arbitar-1\" and secondary node thinks the primary is “mongo-stack-secondary-1” all having the same election id “7fffffff0000000000000060”", "username": "Raghu_Kiran_Koduri" }, { "code": "", "text": "As long as you don’t face any issues with your cluster or see any error in the logs, you should not worry as these messages are informational and in case something happens, these informational logs could help in determining the root case of the issue.I think these logs are from Java driver, so you can check if there are any setting that might work for you.arbitar node thinks the primary is \" mongo-stack-arbitar-1\"Are you sure you are using PSA and not PSS as arbiter node cannot become primary, can you please share output of rs.status() and rs.conf()?", "username": "Tarun_Gaur" } ]
Need help with these replicaSet cluster logs
2023-06-16T06:20:49.379Z
Need help with these replicaSet cluster logs
783
null
[]
[ { "code": "", "text": "Dear Team,I am a new member and I just start to play around with mongoDB.\nI have one question about permission in mongoDB.\nAfter I enable security:authorization: “enabled”, I cannot use db.serverCmdLineOpts() to check dbpath.\nThe question is what role should I grant to my user to be able to use command db.serverCmdLineOpts() ?Thanks,\nNara", "username": "Nara_Apple" }, { "code": "", "text": "Hi @Nara_Apple welcome to the community!For the serverCmdLineOpts command, you’ll need the clusterAdmin or clusterMonitor role for the user.You can search the linked page above for the commands you’re interested in. The page should list all commands available in MongoDB, along with the associated privilege required to run it.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hello Sir,It works now, thank you so much for your help.Best regards,\nNara", "username": "Nara_Apple" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
db.serverCmdLineOpts()
2023-06-21T04:14:59.101Z
db.serverCmdLineOpts()
300
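For reference, granting one of the roles Kevin mentions looks roughly like this in mongosh; the username nara is a placeholder:

// clusterMonitor is enough for read-only diagnostics like serverCmdLineOpts
db.getSiblingDB('admin').grantRolesToUser('nara', [
  { role: 'clusterMonitor', db: 'admin' }
])

// Reconnect as that user, then verify:
db.serverCmdLineOpts()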
null
[ "kotlin" ]
[ { "code": "", "text": "Hi all,In case you missed it – today, we announced the release of a Kotlin driver for server-side development with MongoDB. In addition to many of the features that you’d expect in a language driver, the driver supports some of the features we know Kotlin users benefit from, including coroutines and data classes. Check out the docs here!After you give the driver a try, let us know what you think. We always want to hear feedback from our users.", "username": "Ashni_Mehta" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Now available: Server-side Kotlin driver for MongoDB!
2023-06-23T02:08:27.472Z
Now available: Server-side Kotlin driver for MongoDB!
622
null
[]
[ { "code": "", "text": "Hi Team,I have migrated AWS RDS MySQL database to AWS Document DB, data were successfully migrated, But when i find the document db collections id shows like 1,2,3,…. etc. I want to generate auto generated random id. is this possible? if yes how to do that.I have refer on blog it says it’s not possible to generate object id, Please refer the below link.azure - Identity Column in DocumentDB - Stack OverflowHow to achieve that someone give me hand to over come the issue.", "username": "KRISHNAKUMAR_K" }, { "code": "", "text": "Hi @KRISHNAKUMAR_K,Seems your question is related to Document DB and not MongoDB. I would recommend asking this on Stack Overflow or an AWS product community.Regards,\nJason", "username": "Jason_Tran" } ]
How to auto-generate ObjectId in DocumentDB?
2023-06-22T15:05:51.014Z
How to auto-generate ObjectId in DocumentDB?
556
null
[]
[ { "code": "{\n \"title\": \"locations\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"boundary\": {\n \"title\": \"points\",\n \"bsonType\": \"array\",\n \"uniqueItems\": false,\n \"items\": {\n \"title\": \"xy\",\n \"bsonType\": \"array\",\n \"uniqueItems\": false,\n \"items\": {\n \"bsonType\": \"double\"\n }\n }\n }\n }\n}\n", "text": "Hi there,I have a field which is a two dimensional array that I am trying to build into a schema. The field is a set of x,y coordinates, e.g. [[1.0,1.0],[2.0,2.0],[3.0,3.0]].The schema I am trying to apply isThis validates fine, but the error I am getting from the client is ‘Client query is invalid/malformed (IDENT, QUERY)’.Is what I am trying to do possible? Any guidance anyone can give would be really appreciated!Thanks.", "username": "Alec_Seddon" }, { "code": "", "text": "Hi, Realm DB does not support storing lists of lists unfortunately at the moment. Using that JSON Schema will actually make the collection by “sync invalid”. We are currently in the process of trying to remove this limitation and have better support for geospatial data, so stay tuned for announcements on that.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks Tyler,That makes sense, appreciate the response.Look forward to future updates!Regards,Alec.", "username": "Alec_Seddon" }, { "code": "", "text": "Any update on this issue?", "username": "Jon_Shipman" }, { "code": "", "text": "Hi, I have no immediate update to share, but we have a project currently underway that should solve this issue in a more general way. Stay tuned for more!", "username": "Tyler_Kaye" }, { "code": "geometrygeometry{ coodinates: JSON.stringify(item.coordinates) }geometryStringgeometry{geometryString: JSON.stringify(item.geometry)}geometryString{geometry: JSON.parse(item.geometryString)}", "text": "In the interim, here is the solution we’re deploying:Hope this helps anyone still waiting on MongoDB to implement nested arrays!", "username": "Jon_Shipman" }, { "code": "", "text": "It looks like you’re using GraphQL. I believe nested arrays work with App Services and GraphQL, just not (yet) with Atlas Device Sync (which is what the original question was asking about).", "username": "Sudarshan_Muralidhar" }, { "code": "{\n\t\"geometry\": {\n\t\t\"bsonType\": \"object\",\n\t\t\"properties\": {\n\t\t\t\"coordinates\": {\n\t\t\t\t\"type\": \"array\",\n\t\t\t\t\"items\": {\n\t\t\t\t\t\"type\": \"array\",\n\t\t\t\t\t\"items\": {\n\t\t\t\t\t\t\"type\": \"array\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"type\": {\n\t\t\t\t\"type\": \"string\"\n\t\t\t}\n\t\t}\n\t}\n}\n{\n\t\"geometry\": {\n\t\t\"bsonType\": \"object\",\n\t\t\"properties\": {\n\t\t\t\"coordinates\": {\n\t\t\t\t\"type\": \"array\",\n\t\t\t\t\"items\": {\n\t\t\t\t\t\"type\": \"array\",\n\t\t\t\t\t\"items\": {\n\t\t\t\t\t\t\"type\": \"array\",\n\t\t\t\t\t\t\"items\": {\n\t\t\t\t\t\t\t\"bsonType\": \"double\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"type\": {\n\t\t\t\t\"type\": \"string\"\n\t\t\t}\n\t\t}\n\t}\n}\n{\n\t\"geometry\": {\n\t\t\"type\": \"Polygon\",\n\t\t\"coordinates\": [\n\t\t\t[\n\t\t\t\t[-96.5699999, 33.21],\n\t\t\t\t[-96.58, 32.9799999],\n\t\t\t\t[-96.92999999999999, 32.99],\n\t\t\t\t[-96.91999999999999, 33.21],\n\t\t\t\t[-96.5699999, 33.21]\n\t\t\t]\n\t\t]\n\t}\n}\n", "text": "They have not worked for me. 
You can generate a schema with GeoJSON data and it will generate a broken schema (that will have no error but will be unavailable to query).Example of a broken generated schema:If you try and fix it manually, it won’t validate.If you change it to this it will not deployAn example of GeoJSON data", "username": "Jon_Shipman" } ]
Nested arrays in schema
2023-02-13T09:35:26.613Z
Nested arrays in schema
1,383
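A hedged sketch of the trigger half of Jon's workaround above — a database trigger that re-materializes the stringified geometry on a mirror collection. The data source, database, and collection names are assumptions; treat it as illustrative rather than a drop-in App Services function:

// App Services database trigger on the synced collection
exports = async function (changeEvent) {
  const doc = changeEvent.fullDocument;
  if (!doc || !doc.geometryString) return;

  const mirror = context.services
    .get('mongodb-atlas')      // assumed data source name
    .db('geo')                 // assumed database name
    .collection('features');   // assumed mirror collection name

  // Parse the synced string back into a real GeoJSON object
  await mirror.updateOne(
    { _id: doc._id },
    { $set: { geometry: JSON.parse(doc.geometryString) } },
    { upsert: true }
  );
};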
null
[]
[ { "code": "{\n \"error\": \"failed to ping MongoDB: server selection error: context deadline exceeded, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: live-shard-00-00.ryh7x.mongodb.net:27017, Type: Unknown }, { Addr: live-shard-00-01.ryh7x.mongodb.net:27017, Type: Unknown }, { Addr: live-shard-00-02.ryh7x.mongodb.net:27017, Type: Unknown }, ] }\",\n \"level\": \"error\",\n \"msg\": \"failed to connect to database\",\n \"time\": \"2023-06-20T13:54:38Z\"\n}\n", "text": "Hello, Is there any news?\nI’m having the same issue with my lambdaSome details about my lambda\ngo 1.19\ngo.mongodb.org/mongo-driver v1.11.4and my DB\nVersion: 5.0.18\nRegion: AWS Ireland (eu-west-1)\nCluster tier: M10 (General)I have two stage test and prod. The test stage is working correctly, but the prod stage is throwing an error, even though both stages have the same configuration. I also added the IP of my VPC into IP Access List for two stages.", "username": "Thuc_NGUYEN1" }, { "code": "", "text": "Hi @Thuc_NGUYEN1,Hello, Is there any news?\nI’m having the same issue with my lambdaI’ve moved this to a new topic because it is difficult to confirm whether this is exactly the same issue as the original post. For example, the error you’ve included slightly differs to the original post.The test stage is working correctly, but the prod stage is throwing an error, even though both stages have the same configuration.Are both environments connecting to the same cluster? Can you also advise how often this is happening?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "@Jason_Tran I figured out the problem. It’s my fault, I forgot an outbound rule in the security group of my VPC, I added and it works like a charm.Thanks anywayRegards,\nThuc", "username": "Thuc_NGUYEN1" }, { "code": "", "text": "Awesome. Glad to hear you got it solved and thanks for updating the post with those details.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
M10 and AWS Lambda connection issue
2023-06-21T05:46:08.662Z
M10 and AWS Lambda connection issue
690
https://www.mongodb.com/…ab_2_1024x40.png
[ "database-tools", "backup" ]
[ { "code": "", "text": "Hi Team,I have dropped a collection from my TestDB (from both Stage and Prod Server), and I wanted to restore it back using OplogReplay by taking backup of oplog.rs collection. But it is throwing the below error.\nCan you please help?Note: I am using AKS cluster.Syntax:\nmongorestore --host=xxxxxxxx --port 27017 --oplogReplay --oplogLimit 1644392307 --authenticationDatabase admin -u xxxxxx -p xxxx /var/lib/mongodb/local/oplog.rs.bsonStage Server Error:Failed: restore error: error applying oplog: applyOps: (Unauthorized) not authorized on admin to execute command { applyOps: [ { ts: Timestamp(1641288700, 2), t: 1, h: null, v: 2, op: “c”, ns: “config.$cmd”, o: { create: “system.indexBuilds”, idIndex: { v: 2, key: { _id: 1 }, name: “id” } } } ], lsid: { id: UUID(“a3a053a6-ef03-44e9-ad18-69d44040f577”) }, $clusterTime: { clusterTime: Timestamp(1644407312, 1), signature: { hash: BinData(0, 5219E1B914EF211962681D46B336D3864642C59F), keyId: 7049281289794355202 } }, $db: “admin”, $readPreference: { mode: “primaryPreferred” } }\n2022-02-09T11:48:32.786+0000 0 document(s) restored successfully. 0 document(s) failed to restore.Production Server Error:\nimage1914×76 70.1 KB\n", "username": "Rana_Saha" }, { "code": "", "text": "Now we are getting error in Stage server, I am not sure why Duplicate Key error is coming for a table which is dropped.2022-02-09T12:22:03.295+0000 skipping applying the config.system.sessions namespace in applyOps\n2022-02-09T12:22:03.296+0000 skipping applying the config.system.sessions namespace in applyOps\n2022-02-09T12:22:03.296+0000 skipping applying the config.transactions namespace in applyOps\n2022-02-09T12:22:03.398+0000 oplog 694MB\n2022-02-09T12:22:03.403+0000 Failed: restore error: error handling transaction oplog entry: error applying transaction op: applyOps: (DuplicateKey) E11000 duplicate key error collection: DataUniverseStg.Hierarchy index: HierachyIDandDUPKI dup key: { HierarchyID: 1343, DU_PKI: 15 }\n2022-02-09T12:22:03.403+0000 0 document(s) restored successfully. 0 document(s) failed to restore.", "username": "Rana_Saha" }, { "code": "", "text": "Hi Team,Any suggestion?Thanks,\nRana", "username": "Rana_Saha" }, { "code": "", "text": "Check this jira ticket.It suggests to use additional parameter\nhttps://jira.mongodb.org/browse/TOOLS-2041", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Similar error I am also getting. during restore - with latest version of mongodb tools 100.6.1\nmongorestore --authenticationDatabase admin --port 27017 -u admin --oplogReplay /tmp/new/logs_0/20230208015236_20230208015524/local/oplog.rs.bson --oplogLimit 1675821494:0 -p\n2023-02-08T07:29:32.814+0000\tchecking for collection data in /tmp/new/logs_0/20230208015236_20230208015524/local/oplog.rs.bson\n2023-02-08T07:29:32.814+0000\treplaying oplog\n2023-02-08T07:29:32.815+0000\tFailed: restore error: error applying oplog: applyOps: (Unauthorized) not authorized on admin to execute command { applyOps: [ { ts: Timestamp(1675821194, 1), t: 1, h: null, v: 2, op: “c”, ns: “config.$cmd”, o: { create: “system.sessions”, idIndex: { v: 2, key: { _id: 1 }, name: “id” } } } ], lsid: { id: UUID(“1da48df1-7268-4bf2-a1bb-6455005d68f3”) }, $clusterTime: { clusterTime: Timestamp(1675841367, 2), signature: { hash: BinData(0, 23BDCA774453874BA680500BE149D0764CC5811C), keyId: 7197629778025775108 } }, $db: “admin”, $readPreference: { mode: “primaryPreferred” } }\n2023-02-08T07:29:32.815+0000\t0 document(s) restored successfully. 
0 document(s) failed to restore.", "username": "Balram_Parmar" }, { "code": "", "text": "Appears to be a privileges issue.\nWhat role does your user have?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "My user has the root role -855WB43P44:PRIMARY> show users\n{\n"_id" : "admin.admin",\n"userId" : UUID("cc00c914-0020-47f4-8ca4-d1e0fe11f227"),\n"user" : "admin",\n"db" : "admin",\n"roles" : [\n{\n"role" : "root",\n"db" : "admin"\n}\n],", "username": "Balram_Parmar" }, { "code": "", "text": "Root does not have privileges to run this command.\napplyOps is an internal command.\nYou have to create a custom role.\nCheck these links:MongoDB Manual 3.2 applyOps", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks for sharing, let me try it out. However, to me the 'root' role is the superuser role, right? We should not need any additional role.\nTo me this looks to be a bug, in extension to a similar issue raised earlier - https://jira.mongodb.org/browse/TOOLS-2952", "username": "Balram_Parmar" }, { "code": "applyOps--oplogReplayanyActionmongorestore--oplogReplay", "text": "the 'root' role is the superuser role, right?It kind of is, if you're dealing with databases and collections. However, applyOps is an internal MongoDB command, and thus requires a special system-level privilege.From https://www.mongodb.com/docs/database-tools/mongorestore/#required-accessTo run with --oplogReplay, create a user-defined role that has anyAction on anyResource.Grant only to users who must run mongorestore with --oplogReplay.Hope this helpsBest regards\nKevin", "username": "kevinadi" }, { "code": "rootreadWriteAnyDatabasedbAdminAnyDatabaseuserAdminAnyDatabaseclusterAdminrestorebackup", "text": "As per the document, the superuser 'root' role has the combined privileges of the roles below, which includes restore as well -root Provides access to the operations and all the resources of the following roles combined:", "username": "Balram_Parmar" }, { "code": "", "text": "Please refer to the link shared by Kevin.Restore has access to all non-system-related objects, but when you are accessing system-related objects or running internal commands, additional privileges need to be granted.\nThat's why you need to create a custom role with access to any object and grant it to the user who is performing the restore.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I agree the restore role alone might need additional privileges; however, this is about the superuser root role, which should be able to do everything without any restriction. Did you check this link -https://www.mongodb.com/docs/manual/reference/built-in-roles/#mongodb-authrole-rootDoes it mention there that with the root role you also need extra privileges?", "username": "Balram_Parmar" }, { "code": "rootrootrootreadWriteAnyDatabasedbAdminAnyDatabaseuserAdminAnyDatabaseclusterAdminrestorebackup__systemmongosapplyOpsanyActionanyResourceFailed: restore error: error applying oplog: applyOps: (Unauthorized) not authorized on admin to execute command\napplyOps--oplogReplayanyActionmongorestore--oplogReplay", "text": "this is about the superuser root role, which should be able to do everything without any restrictionThe role called root in MongoDB terms is not the same as root in UNIX terms. 
In MongoDB, it is not “superuser”.See https://www.mongodb.com/docs/manual/reference/built-in-roles/root Provides access to the operations and all the resources of the following roles combined:It’s basically allows you to do data operations across all databases and all collections, but not system objects and system operations. Thus it’s not a superuser in the traditional UNIX convention.There is another role that’s basically superuser, but no user should be given this role according to the documentation:__system MongoDB assigns this role to user objects that represent cluster members, such as replica set members and mongos instances. The role entitles its holder to take any action against any object in the database.Do not assign this role to user objects representing applications or human administrators, other than in exceptional circumstances.If you need access to all actions on all resources, for example to run applyOps commands, do not assign this role. Instead, create a user-defined role that grants anyAction on anyResource and ensure that only the users who need access to these operations have this access.I’d like to go back to the earlier error that is being discussed:The error is saying that you need applyOps privilege to execute an oplog apply operation. This was answered earlier:From https://www.mongodb.com/docs/database-tools/mongorestore/#required-access To run with --oplogReplay, create a user-defined role that has anyAction on anyResource.Grant only to users who must run mongorestore with --oplogReplay.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks for detailed clarification , so we can be assured that with ‘anyAction’ on ‘anyResource’, we wont hit any further issues like below -2023-02-09T17:56:49.621+0000 Failed: restore error: error applying oplog: applyOps: (Unauthorized) not authorized on admin to execute command { applyOps: [ { ts: Timestamp(1675883519, 1), t: 11, h: null, v: 2, op: “c”, ns: “admin.$cmd”, o: { create: “system.roles”, idIndex: { v: 2, key:And we can be assured that with these roles - we wont hit issue which were reported in this bug# and there are few open items still still as followup to this bug\nhttps://jira.mongodb.org/browse/TOOLS-3203Or do you suggest to use - __system role to resolve every issues universally.\nThanks again for helping on this.", "username": "Balram_Parmar" }, { "code": "mongodumpmongorestoreapplyOpsanyActionanyResourcerootchmod 777 *", "text": "https://jira.mongodb.org/browse/TOOLS-3203This is a totally different issue, as far as I can tell. Firstly, it’s a mongodump issue, not mongorestore, and secondly it concerns the config database of a sharded cluster.The applyOps permission issue you’re seeing is about executing a system level command that manipulates the oplog: a dangerous and potentially irreversible destructive operation if done by accident, hence the need for a special permission.Or do you suggest to use - __system role to resolve every issues universally.No, I would follow the documentation’s recommendation to not use this role and instead create a new user-defined role (anyAction on anyResource). 
Similar to UNIX, running everything as root and doing chmod 777 * when you're seeing permission issues is usually not the right answer. Hope this helpsBest regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Yes, I agree with that input.\nI feel this Jira ticket https://jira.mongodb.org/browse/TOOLS-2952 and the 'anyAction' on 'anyResource' privileges are tightly coupled.\nIn the Jira we are skipping system collections like session/cache/transaction at both mongodump and mongorestore, as per the code changes.\nAt the same time we are also asking users to grant additional privileges for any system collections that are not skipped in the above Jira.\nAnyway - we will go with the document you shared, where the additional privileges to be used for restore are mentioned.\nThanks again for all your input; we can close this thread. Also, if you agree that the skipping Jira and the additional permissions are related, further follow-up can be done with the dev team.", "username": "Balram_Parmar" }, { "code": "", "text": "I have an admin user with the privilege anyAction on anyResource, and restore is still failing with the error below -replaying oplog\n2023-06-22T08:15:12.152-0700 Failed: restore error: error applying oplog: applyOps: (Location40528) Direct writes against config.transactions cannot be performed using a transaction or on a session.I am on MongoDB version 4.4.22.\nThis was never the case earlier while using the mongo shell; recently we started using mongosh.", "username": "Balram_Parmar" } ]
Mongorestore using OplogReplay
2022-02-09T12:00:26.173Z
Mongorestore using OplogReplay
4,750
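For completeness, the user-defined role the documentation calls for can be created roughly as follows in mongosh; the role and user names are placeholders:

const admin = db.getSiblingDB('admin')

// anyAction on anyResource, as required for mongorestore --oplogReplay
admin.createRole({
  role: 'oplogReplayer',
  privileges: [{ resource: { anyResource: true }, actions: ['anyAction'] }],
  roles: []
})

admin.grantRolesToUser('restoreUser', [{ role: 'oplogReplayer', db: 'admin' }])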
https://www.mongodb.com/…2_2_1024x618.png
[ "queries", "java", "spring-data-odm" ]
[ { "code": "ReactiveMongoTemplateminPoolSizemaxPoolSizeimport com.myCorp.model.MediaStatusRepository;\nimport com.myCorp.model.PushStatus;\nimport org.springframework.data.mongodb.core.ReactiveMongoTemplate;\nimport org.springframework.data.mongodb.core.query.Criteria;\nimport org.springframework.data.mongodb.core.query.Query;\nimport org.springframework.stereotype.Component;\nimport reactor.core.publisher.Flux;\nimport reactor.core.publisher.Mono;\n\nimport javax.inject.Inject;\nimport javax.inject.Named;\n\n@Component\n@Named(\"myRepository\")\npublic class MyMongoMongoRepository {\n\n private static final String COLLECTION_NAME = \"my_collection\";\n private static final int LIMIT = 50;\n\n @Inject\n private ReactiveMongoTemplate mongoTemplate;\n\n\n public Mono<PushStatus> save(PushStatus pushStatus) {\n return mongoTemplate.insert(pushStatus, COLLECTION_NAME)\n .doOnSuccess(p -> System.out.println(\"saved\"))\n .doOnError(t -> System.out.println(\"error\"));\n\n }\n\n public Flux<PushStatus> find(String myKey, String myRef) {\n Query query = new Query();\n query.addCriteria(Criteria.where(\"myKey\").is(myKey));\n query.addCriteria(Criteria.where(\"myRef\").is(myRef));\n query.limit(LIMIT);\n return mongoTemplate.find(query, PushStatus.class, COLLECTION_NAME);\n }\n}\n\n", "text": "Hi,I migrate my application from Jboss to Spring Boot. I use the spring ReactiveMongoTemplate (in both jboss and Springboot) bean that requires a new version of mongodb-driver-core dependency.FYI, I dont have any problem with the Jboss version but it seems a memory leak with the Spring Boot versionI have tried to play with minPoolSize and maxPoolSize but no matter the configuration, I always have always the same problem with my Spring Boot app.Here is the difference:mongodb-driver-core:3.12.2 with jboss (no problem after injecting 50 query/sec during 10 minutes)mongodb-driver-core:4.6.1 witj spring boot(Java heap space after injecting 50 query/sec during 2 minutes)I notice that the async code has a lot of change beetween the two versions.3.12.2: https://github.com/mongodb/mongo-java-driver/blob/r3.12.2/driver-core/src/main/com/mongodb/internal/connection/DefaultConnectionPool.java4.6.1: https://github.com/mongodb/mongo-java-driver/blob/r4.6.1/driver-core/src/main/com/mongodb/internal/connection/DefaultConnectionPool.javaIn my heap dump i see a lot of ``LinkedBlockingQueue` instances.Leak SuspectsOne instance of “com.mongodb.internal.connection.DefaultConnectionPool$AsyncWorkManager” loaded by “jdk.internal.loader.ClassLoaders$AppClassLoader @ 0xe085d958” occupies 295?880?152 (59,69 %) bytes.Keywords\ncom.mongodb.internal.connection.DefaultConnectionPool$AsyncWorkManager\njdk.internal.loader.ClassLoaders$AppClassLoader @ 0xe085d958image1289×778 90.5 KBIs there a bug in the driver or do I need to add configuration to support the same load as on my jboss instance?Here is the application code sample (same on jboss and springboot version):Thanks", "username": "drexlbob_Julien42" }, { "code": "MongoWaitQueueFullExceptionMongoWaitQueueFullException", "text": "It’s hard to tell. The only possibly relevant change I’m aware of is in the 4.0 upgrade notes:The connection pool no longer enforces any restrictions on the size of the wait queue of threads or asynchronous tasks that require a connection to MongoDB. 
It is up to the application to throttle requests sufficiently rather than rely on the driver to throw a MongoWaitQueueFullException.But if you weren’t getting any MongoWaitQueueFullException exceptions thrown from the 3.12 driver then that’s probably not it.It might help the diagnosis along if you could reproduce this in a standalone application.", "username": "Jeffrey_Yemin" }, { "code": "MongoWaitQueueFullException", "text": "Hi @Jeffrey_Yemin ,Thanks for your answer.How the driver can throw a MongoWaitQueueFullException if in v4 the the wait queue size is not configurable in the connnection string? How to regulate it from client?", "username": "drexlbob_Julien42" }, { "code": "", "text": "Maybe the problem comme from the asyncWorker (the worker does not exists on v3)", "username": "drexlbob_Julien42" }, { "code": "", "text": "Hi @Jeffrey_Yemin ,Here are the results of the bench I did locally.Bench made with gatling which represents 100 writes/sec and 500 reads/sec during five minutes for the two use cases.Detail sample of the mongostat command:insert query update delete getmore command flushes mapped vsize res faults qrw arw net_in net_out conn time\n102 510 *0 *0 0 110|0 0 0B 2.20G 664M 0 0|0 0|0 244k 8.67m 229 Jun 22 16:17:49.931\n100 504 *0 *0 0 101|0 0 0B 2.20G 664M 0 0|0 0|0 241k 8.58m 229 Jun 22 16:17:50.931\n98 490 *0 *0 0 100|0 0 0B 2.20G 664M 0 0|0 0|0 234k 8.33m 229 Jun 22 16:17:51.931\n102 511 *0 *0 0 103|0 0 0B 2.20G 664M 0 0|0 0|0 243k 8.67m 229 Jun 22 16:17:52.931\n97 489 *0 *0 0 98|0 0 0B 2.20G 664M 0 0|0 0|0 233k 8.28m 229 Jun 22 16:17:53.930\n103 510 *0 *0 0 105|0 0 0B 2.20G 664M 0 0|0 0|0 244k 8.72m 229 Jun 22 16:17:54.930\n100 503 *0 *0 0 102|0 0 0B 2.20G 664M 0 0|0 0|0 240k 8.56m 229 Jun 22 16:17:55.931\n100 501 *0 *0 0 104|0 0 0B 2.20G 664M 0 0|0 0|0 239k 8.52m 229 Jun 22 16:17:56.931\n97 489 *0 *0 0 100|0 0 0B 2.20G 664M 0 0|0 0|0 234k 8.33m 229 Jun 22 16:17:57.931\n99 500 *0 *0 0 103|0 0 0B 2.20G 664M 0 0|0 0|0 239k 8.50m 229 Jun 22 16:17:58.931We find that the read/writes are on the whole linear.There is a very rapid increase in the number of tasks to be processed (which causes the Java Heap Space after about 61000 waiting queues)Detail of the mongostat command:insert query update delete getmore command flushes mapped vsize res faults qrw arw net_in net_out conn time\n74 362 *0 *0 168 259|0 0 0B 2.15G 634M 0 0|0 0|0 264k 5.12m 169 Jun 22 16:03:34.017\n73 363 *0 *0 243 332|0 0 0B 2.15G 634M 0 0|0 0|0 301k 5.59m 177 Jun 22 16:03:35.013\n74 376 *0 *0 296 389|0 0 0B 2.16G 634M 0 0|0 0|0 334k 6.12m 183 Jun 22 16:03:36.014\n76 392 *0 *0 405 494|0 0 0B 2.17G 634M 0 0|0 0|0 395k 6.76m 191 Jun 22 16:03:37.016\n75 364 *0 *0 264 363|0 0 0B 2.17G 634M 0 0|0 0|0 314k 5.93m 196 Jun 22 16:03:38.014\n93 470 *0 *0 331 433|0 0 0B 2.18G 634M 0 0|0 0|0 393k 7.31m 202 Jun 22 16:03:39.018\n97 490 *0 *0 356 468|0 0 0B 2.18G 634M 0 0|0 0|0 417k 7.75m 208 Jun 22 16:03:40.014\n90 468 *0 *0 361 467|0 0 0B 2.19G 634M 0 0|0 0|0 409k 7.40m 212 Jun 22 16:03:41.015\n87 438 *0 *0 345 446|0 0 0B 2.19G 634M 0 0|0 0|0 387k 7.06m 218 Jun 22 16:03:42.015\n100 486 *0 *0 412 497|0 0 0B 2.20G 634M 0 0|0 0|0 436k 8.14m 222 Jun 22 16:03:43.015We find that the read/writes are not linear, hence the reason for the constantly increasing queue.Can you help me please? 
If you want, I can share the 2 projects with the Gatling scenario as an attachment to reproduce the problem.Thanks,\nJulien", "username": "drexlbob_Julien42" }, { "code": "", "text": "Have you tried simply limiting the number of concurrent (but still asynchronous) operations to 500 (using a Semaphore with 500 permits, for example)?I wonder if that will keep things steady.Jeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Out of memory: Java Heap Space with mongodb-driver-core:4.6.1 with ReactiveMongoTemplate
2023-06-16T12:59:03.492Z
Out of memory: Java Heap Space with mongodb-driver-core:4.6.1 with ReactiveMongoTemplate
1,469
https://www.mongodb.com/…5ea35b54a4a5.png
[ "data-modeling", "graphql" ]
[ { "code": "", "text": "I try to create a schema without predefined names. I found the “dictionary” option on documentation.\n\nimage823×476 40.6 KB\nbut my schema dont work:\n“paymentObject”: {\n“additionalProperties”: {\n“bsonType”: “string”\n},\n“bsonType”: “object”\n},and show the message\n\nimage1311×139 8.91 KB\nWhat am I doing wrong?", "username": "Edir_Dumaszak" }, { "code": "properties", "text": "Hi @Edir_Dumaszak,The top-level schema should follow the format defined here: https://www.mongodb.com/docs/atlas/app-services/schemas/#define-a-schema. To make your schema valid, the object you shared should be an entry in a top-level properties key.", "username": "Kiro_Morkos" }, { "code": "{\n \"tile\": \"payment\",\n \"additionalProperties\": true,\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"createdAt\": {\n \"bsonType\": \"date\"\n },\n \"createdBy\": {\n \"bsonType\": \"string\"\n },\n\n \"organization\": {\n \"bsonType\": \"objectId\"\n },\n \"paymentStatus\": {\n \"bsonType\": \"string\"\n },\n \"paymentUpdateAt\": {\n \"bsonType\": \"date\"\n },\n \"status\": {\n \"bsonType\": \"string\"\n },\n \"totalAmount\": {\n \"bsonType\": \"number\"\n },\n \"type\": {\n \"bsonType\": \"string\"\n },\n \"paymentObject\": {\n \"bsonType\": \"object\",\n \"additionalProperties\": {\n \"bsonType\": \"string\"\n } \n \n }\n }\n}\n", "text": "Hi @Kiro_MorkosI need populate the field `paymentObjetct with a graphql mutation, but is seem impossible.\nThis field no has fixed fields.\nI tries several ways without success.Can you help me?below my schema:", "username": "Edir_Dumaszak" }, { "code": "\"paymentObject\": {\n \"bsonType\": \"object\",\n \"additionalProperties\": {\n \"bsonType\": \"string\"\n } \n \n }\n\"paymentObject\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"key\": {\n \"bsonType\": \"string\"\n },\n \"value\": {\n \"bsonType\": \"string\"\n }\n }\n }\n }\n", "text": "You wouldn’t be able to have a dictionary-like object here. But if an array of each item as key-value pair works for you. You can do that by", "username": "Anuj_Garg" }, { "code": "", "text": "Thank you @Anuj_Garg\nThis solve my problem.But I still no know why the dictionary no works. Can you explaim to me?", "username": "Edir_Dumaszak" }, { "code": "", "text": "why the dictionary no worksDictionary doesn’t fit in schema structure available.If you field type to Object, you will be expected to set keys and their value types within schema definition. Otherwise schema would not compile.", "username": "Anuj_Garg" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Schema not working as described in documentation
2023-06-21T17:11:28.095Z
Schema not working as described in documentation
701
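A small client-side sketch of Anuj's suggestion: reshape a plain object into the key/value array the schema expects before sending it through the GraphQL mutation, and back again when reading. The helper names are made up:

// { pix: "abc", card: "xyz" } -> [{ key: "pix", value: "abc" }, ...]
function toKeyValueArray(obj) {
  return Object.entries(obj).map(([key, value]) => ({ key, value: String(value) }));
}

// [{ key, value }, ...] -> plain object
function fromKeyValueArray(pairs) {
  return Object.fromEntries(pairs.map(({ key, value }) => [key, value]));
}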
null
[ "node-js", "replication", "migration" ]
[ { "code": "", "text": "Hi all ,Currently i have an production live standalone mongodb server which is connected to an express nodejs application , now i went ahead and setup a replica set (3 machines) . Now how do i migrate data from this live DB to my replica set without any loss of data .", "username": "Stuart_S" }, { "code": "mongoexportmongoimport", "text": "Hi @Stuart_S and welcome to the MongoDB Community forum!!There could be two different ways to migrate from a standalone to a replica set.Case 1: If you have no data in your database and you only have a deploymentThe steps for the following are:Case 2: If you have a large collection in your database in the stand alone deployment:The MongoDB tools mongoexport will convert the collection into the format of the choice and further the mongoimport.\nFurther, you can also use mongodump and mongorestore for the process.However, please note that, mongoexport and mongoimport does not import and export the indexes in MongoDB. Hence you would be required to create index if using the former method.Also, please refer to the documentation on How to Convert a standalone to replica set for further information.Let us know if you have any further queries.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "But this page says the instance can just be restarted wish a replica set name , no mentioning about data import/export at all.", "username": "Kobe_W" }, { "code": "", "text": "Thanks @Aasawari for the answer\nI do have the Case 2 to do , but for live data and data that will be coming in continously, mongoexport or mongodump which is better?", "username": "Stuart_S" }, { "code": "", "text": "Hi @Stuart_SFor live data migration, you need to convert the stand alone replica set to a single node replica set and then pull the data to Atlas.Please note that, as per the documentation, you cannot user M0/M2/M5 shared tier clusters as source or destination for live migration.To add more details to the above information,Please note that, the process can potentially be disruptive, hence the recommendation is to test the workload before performing on the production environment.Let us know of you have further questions.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Another alternative:\nJust export the data to JSON, indexes as JSON, etc. and so on, and then just upload the JSON documents to the new MongoDB Cluster.If it’s large amounts of data, just export to JSON files in batches of appropriate sizes, and then import the JSON files into the new DB. I literally migrated a 600GB 8 Node Cluster running 4.2 on premise to an 10 Node cluster running 6.0 for a friends engineering firm doing this.0 Down time, zero shut downs, zero data lost, it was practically brainless. After confirmation all the aggregations and indexes/queries etc. were all there, we connected the servers to the new cluster, and nukes the Kubernetes containers running the 4.2. 100% successful migration in 20GB batches, and it only took 3 hours.You don’t have to get extreme and complicated for processes that can just be easy if you want them to.EDIT:\nThe 3 hours wasn’t because it takes 3 hours to export 600GB of data and configs, that only took 40 minutes. It took 3 hours because of having to troubleshoot Kubernetes and Docker issues, and the replication needing to happen on a backup location in Europe, had to wait for the data to upload to that cluster as well over a slow VPN. 
Otherwise the entire process wouldn't have even taken 1.5 hours, if that.EDIT:\nAnd with this exact method, I'm literally volunteering my time to oversee 6.3TB of a migration this weekend. We're going to export the JSON files to an external SSD, and just upload the JSON from the exact same SSDs to the new clusters. Upgrade/Migration. Going from several 4.0 and 4.2's to several different 6.0 clusters (breaking it up into smaller 3-node sharded clusters instead of giant monolithic clusters). Zero downtime planned, zero production disruption, and then we're nuking the entire old server rack — and it's pending whether or not I can take the rack and old Dell PowerEdges for my lab.Don't overthink things, don't push for complications; if there's an easy, very safe method that meets your needs, and you're willing to do it, then just do it. But this is just another alternative way @Stuart_S", "username": "Brock" }, { "code": "", "text": "@Stuart_SOther options:\nGraphQL API (my personal favorite, and it's stupid easy: if you're on premise, install the Apollo GraphQL server with your on-premise MongoDB cluster and you can route Apollo GraphQL to Atlas, unless the Atlas GraphQL API is broken). Or if it's Atlas to Atlas, GraphQL to GraphQL via building an API with it to just send and receive the data.Atlas Functions HTTP service (REST API)There are a bunch of other options to do this; just listing a couple more for you.", "username": "Brock" }, { "code": "", "text": "@Brock\nCould you please explain how to export and then upload the data?\nDo you mean the mongoexport and mongoimport tools?\nAre there any compatibility issues when exporting a 4.4 database to 6.0?\nThanks.", "username": "VSH" } ]
Database migration from standalone machine to cluster
2023-03-06T09:15:27.187Z
Database migration from standalone machine to cluster
1,535
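For Case 1 above (converting the standalone in place rather than copying data), the mongosh side looks roughly like this once mongod has been restarted with a --replSet name; the hostnames and the set name rs0 are placeholders:

// Run once against the restarted former standalone
rs.initiate({
  _id: 'rs0',
  members: [{ _id: 0, host: 'mongo1.example.net:27017' }]
})

// After the two new machines are up, add them; they initial-sync automatically
rs.add('mongo2.example.net:27017')
rs.add('mongo3.example.net:27017')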
null
[]
[ { "code": "", "text": "Hi, I am trying to run some triggers over a DB that has an analytics node. I would like to force to use this node to not overload the primary and secondary ones. I have been researching on how to do it. But I did not find any information about it. Is it possible and how can I select this node on the context.services.get(…) function?Thanks for all.", "username": "Christian_Jodra" }, { "code": "context.services.get", "text": "Hi @Christian_Jodra,You can specify read preference tags on the data source to target your analytics nodes - https://www.mongodb.com/docs/atlas/app-services/mongodb/read-preference/#procedureThis will be used for all operations on the data source, so if you want to use the analytics node for only a subset of operations you’ll want to create a separate data source and use that one in the context.services.get call.", "username": "Kiro_Morkos" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Run trigger over analytics node
2023-06-22T15:08:46.227Z
Run trigger over analytics node
606
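Putting Kiro's suggestion into trigger-function terms: with a second data source (hypothetically named mongodb-atlas-analytics) configured with read preference secondary and the read preference tag nodeType: ANALYTICS, a function can route only its heavy reads there. The names below are assumptions:

exports = async function () {
  // Reads through this data source are served by the analytics node
  const events = context.services
    .get('mongodb-atlas-analytics') // assumed second data source
    .db('reporting')
    .collection('events');

  return events
    .aggregate([{ $match: { type: 'click' } }, { $count: 'clicks' }])
    .toArray();
};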
null
[ "node-js", "indexes" ]
[ { "code": "", "text": "Dear community\nContext: Atlas M20 cluster, a collection with unexpected about 4 millions documents, still counting.\nI am about to add a new field to my documents (nodejs migration script). This field will be a string to be considered as an enum (<5 different values).\nThe value will be taken from an existing field on which I am currently querying with regex.\nI am expecting a performance improvement.\nIn addition I am considering adding an index on this new field.\nHere my questions:", "username": "Frederic_Klein" }, { "code": "null", "text": "can I expect an even bigger performance boost when querying on this newly indexed field (equality only) ?If the field(or the one you are sourcing from) was not indexed, then yes you will see a performance increase, as the query will perform an index scan instead of a collection scan.Hovever the low cardinality of the index won’t be as performant as one with higher cardinality.should I add the index before migrating the documents with new field value or should I first migrate all documents with the new field and then create the index?Its is probably more efficient to do it afterwards. If you create it beforehand there will be an additional 4M index entries to update when you run your migration, the non existing field is indexed as null. But with this number of documents you are unlikely to notice the difference.how long will the collection be locked by the index creation process preventing writes?Later versions of MongoDB(4.2+) only lock the collection at the start and end of the build. In my experience this is a very short period.If you have a Pre-Prod cluster test there beforehand to see how you applications perform.Wuth dedicated Atlas tiers (M10+) you can use a rolling index build to avoid any performance impact from building an index. However there will be a step-down of the primary when it comes to build on that member. I just mention this for completeness not that I think you need it.", "username": "chris" }, { "code": "", "text": "Thanks a lot @chris for these answers.\nI am running version 5.0.18 on GCP.When you say “a very short period”, what should I understand: a couple of seconds (2, 5, 10) or minutes?\nWhat will happen to writes (inserts) during that time?In your experience, how long should I expect the index to build for e.g. 6M entries by next Sunday (lower load on that day)?And after the index is built, is it a good idea to let my migration script run at full speed, possibly hammering hard on the opcounters or should I rather throttle it?In addition, I noticed that, although I am querying an aggregation pipeline having as first stage a $match on a range of a non indexed Date (timestamp) field and as second stage a $match ($ne) on another non indexed String field that can hold 3 different values, the Performance Advisor still does not suggest any index.\nAny idea why not?\nIs it considering that the potential performance improvement would not compensate the cost of the index?", "username": "Frederic_Klein" } ]
Create index on 4 million documents
2023-06-21T16:42:13.265Z
Create index on 4 million documents
587
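A mongosh sketch of the migrate-then-index order chris recommends; the collection name, field names, and enum value are illustrative, not from the thread:

// 1. Backfill the new enum-like field from the existing free-text field,
//    one enum value at a time (the regex stands in for the OP's query).
db.readings.updateMany(
  { category: { $exists: false }, rawText: /pending/i },
  { $set: { category: "PENDING" } }
);

// 2. Once every document carries the new field, build the index. On
//    MongoDB 4.2+ the collection is locked only briefly at the start
//    and end of the build, so writes continue during most of it.
db.readings.createIndex({ category: 1 });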
https://www.mongodb.com/…c_2_1024x399.png
[ "aggregation", "atlas", "change-streams" ]
[ { "code": "_gtins_catalog\"originatingCommand\": {\n \"aggregate\": \"241e8df1-c055-424b-9e37-37012aa61f4e_gtins_catalog\",\n \"cursor\": {\n \"batchSize\": 0\n },\n \"pipeline\": [\n {\n \"$changeStream\": {\n \"startAfter\": {\n \"_data\": \"826470CC57000000012B0229296E04\"\n },\n \"fullDocument\": \"updateLookup\",\n \"showMigrationEvents\": true\n }\n }\n ],\n \"$db\": \"prod_sm\",\n...\n}\n", "text": "If I visit MongoDB Atlas Profiler, I can observe on each and every day a similar pattern of slow queries. It looks like this:\nimage1703×665 67.3 KB\nIf ordered by Mean or Sum of operation execution time, the top 30 or more are collections with suffix _gtins_catalog that are empty, created when the user is creating an account.Now, when I am investigating the queries for any of these collections they are usually several of them\nand all of them have pipeline like this:And the Operation Execution Time can even reach up to 2 min.And now is my question. I can’t figure out what cause this pipelines to be executed, especially on empty collections, and why they are this slow. Is this some internal operations that Mongo have to perform?\nIt would be very helpful If someone could cast a bit of light on this.", "username": "Grzegorz_Skupiewski" }, { "code": "", "text": "Hi @Grzegorz_SkupiewskiThanks for being a part of MongoDB’s developer community! I’d like to take a closer look and see what may be causing this rogue command. At first glance, this does not look like an internal operation that MongoDB is running.If you can click on one of the commands and investigate the query details, you should be able to see the “appName” that is calling this command. Would you mind checking on this and sharing what you find?Thanks,\nFrank", "username": "Frank_Sun" }, { "code": "\"appName\": \"mongot steady state\"\n", "text": "Hi @Frank_Sun\nThank you for the reply, this is what you’ve asked for:Edit: Now as it came to my mind I probably should have mentioned, that even it these collections are empty, they have an Atlas Search Index created on them.", "username": "Grzegorz_Skupiewski" }, { "code": "", "text": "Hey @Grzegorz_Skupiewski - PM from Atlas Search here - can you share what cluster tier you are on?", "username": "Elle_Shwer" }, { "code": "M30 (General)", "text": "Hi @Elle_Shwer, it is M30 (General).", "username": "Grzegorz_Skupiewski" }, { "code": "", "text": "I see, any chance you have a support package where you can submit a ticket about this?", "username": "Elle_Shwer" }, { "code": "", "text": "Unfortunately, that is not an option for us. Any suspicions what might be a cause or hints where to look further?", "username": "Grzegorz_Skupiewski" }, { "code": "", "text": "Without much more info, the only interpretation I heard was: The query itself looks like the changestream mongot/AtlasSearch follows for steady state replication. It is possible for changestream queries to be slow on a collection without any writes due to writes on neighboring collections. But there is no precedent for this / it’s not something we’ve seen before. It’s hard to know beyond that.", "username": "Elle_Shwer" } ]
MongoDB Atlas Profiler slow queries each day on empty collections
2023-06-15T08:30:31.261Z
MongoDB Atlas Profiler slow queries each day on empty collections
1,026
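For reference, the appName Frank asks about can also be read with mongosh on deployments where the database profiler writes to system.profile; this is a hedged sketch (Atlas surfaces the same field through its Profiler UI), with the database name taken from the command above:

// Last five profiled operations against the *_gtins_catalog collections,
// showing which application issued them. Assumes profiling is enabled.
db.getSiblingDB("prod_sm")
  .system.profile.find(
    { ns: /gtins_catalog/ },
    { ts: 1, appName: 1, millis: 1 }
  )
  .sort({ ts: -1 })
  .limit(5);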
null
[]
[ { "code": "", "text": "I noticed that a Dedicated cloud cluster will be around $900/year to have vs a shared cloud cluster is free.\nIn the early stages I think the shared cloud cluster is fine and I’ve set it up (a free cloud cluster) to only accept communications from my ip address.\nIs that enough to be really really sure that no one else has access to my database?", "username": "Keith_Pittner" }, { "code": "", "text": "Not a mongodb employee. But i believe the answer is yes, otherwise it’s a security violation and no one will want to used the cheaper shared tier. (unless someone knows your login credential)", "username": "Kobe_W" }, { "code": "", "text": "Hello Kobe,thanks fo rthe reply. I think (and hope) you’re right. I’ll wait a little while to see if any mongodb employee confirms this.\n", "username": "Keith_Pittner" }, { "code": "", "text": "Hello @Keith_Pittner ,@Kobe_W is right and in addition to that.MongoDB takes security very seriously and only you or the users you added to your cluster with credentials can access the data and it does not matter if the cluster is free cluster(M0), shared cluster(M2, M5) or dedicated cluster(M10+).For more information about security, compliance and the standards implemented in Atlas, please seeMongoDB is dedicated to securing and protecting your data – with strong technical controls, regulatory compliance, organizational standards, and processes.This document conveys the depth of our commitment to customer trust by providing a detailed understanding of MongoDB Atlas security controls and features.Disclaimer: This does not prevent anybody with access to your or your user’s credentials to be able to connect to your database, so please make sure to have strong passwords, keeping it safe and not share credentials. Every user should have their own credentials with relevant roles for connecting to the cluster.Best Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Awesome!Thanks so much ", "username": "Keith_Pittner" }, { "code": "", "text": "Is that enough to be really really sure that no one else has access to my database?From the security perspective, the M0 is okay but from a performance perspective, M0 can have not-so-visible limitations.During costly operations like Geo-Indexing your instance can become non-responding or crash.", "username": "Anuj_Garg" }, { "code": "", "text": "that’s good to know. Thanks!", "username": "Keith_Pittner" } ]
How private is a shared server?
2023-06-20T15:40:05.019Z
How private is a shared server?
740
null
[ "mongodb-shell" ]
[ { "code": "", "text": "I aim to capture all user-executed commands by leveraging “Retrieve Shell Logs functionality”. Although I attempted profiling but i don’t want to go with this approach, I require an alternative approach. Additionally, I am using the mongoDB community edition not the enterprise edition.", "username": "Himanshu_Sharma9" }, { "code": "", "text": "Logs for the MongoDB Shell are stored on a file in the filesystem, one file per shell session, as described here: https://www.mongodb.com/docs/mongodb-shell/logs/. This is valid both for community as well as for enterprise.If you are looking for something different, it would be helpful if you expanded a bit on what you are currently doing as well as what you are looking for.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "So i need all the details like executed commands by all users in mongoDB, can you tell me the how can i achieve this, by mongosh i can see all the commands and logs but i need to see in UI. Any suggestions ?", "username": "Himanshu_Sharma9" }, { "code": "", "text": "Ok, then probably looking at mongosh or other logs on the client side is not the best way. I would suggest looking into MongoDB logging (https://www.mongodb.com/docs/manual/reference/log-messages/). For Community that is probably your best option.\nWith MongoDB Enterprise you’d also have the auditing option: https://www.mongodb.com/docs/manual/core/auditing/.", "username": "Massimiliano_Marcon" } ]
Retrieve Shell Logs
2023-06-19T10:40:04.034Z
Retrieve Shell Logs
480
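For the server-side view suggested in the last reply, recent log events can also be pulled over the wire with the getLog command; a small mongosh sketch:

// Returns the most recent entries from the server's in-memory log buffer.
db.adminCommand({ getLog: "global" });

// Lists the available log filters ("global", "startupWarnings", ...).
db.adminCommand({ getLog: "*" });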
null
[ "android", "kotlin" ]
[ { "code": "", "text": "I’ve been trying to securely encrypt the data inside my local MongoDB Realm, but I’m having troubles, because I’m not sure how should I safely store the key inside the Android Keystore System, and pass that generated key as a bytearray to the MongoDB configuration?As far as I know, we are not able to retrieve the bytearray of a generated key from the Android Keystore. So how are we supposed to safely and securely generate the encryption key and encrypt our local MongoDB Realm?I’ve been searching for a solution for days, but haven’t been able to find any article/guide on this one.\nI would appreciate any help! Thank you!", "username": "111757" }, { "code": "", "text": "Encrypting data in a local MongoDB Realm database on Android involves generating and securely storing an encryption key. While Android Keystore provides a secure mechanism for generating and storing keys, it does not directly allow retrieval of the key as a bytearray. Instead, you can retrieve the key as a Key object and use it to perform encryption and decryption operations.", "username": "Anuj_Garg" } ]
MongoDB Realm Encryption with Android Keystore System?
2023-02-26T06:55:34.454Z
MongoDB Realm Encryption with Android Keystore System?
1,046
null
[]
[ { "code": "", "text": "Hello,I’ve been building my first MERN app and I’m thinking about incorporating change streams and socket io for certain aspects of the project. My initial setup just used change streams where the user would create a request object via an http request, the change stream is created and there is a timeout after which the change stream is closed.After researching socket io, and reading this tutorial, I’m considering switching to a more active system that uses both change streams and socket io. I can see a lot of benefits to this approach. However, I’m unsure of the way the developer in the tutorial goes about not closing his change streams. Here is the server file on the github repo for the project. Watcher function example. I don’t see him closing the change streams in his watcher functions (there are two but I can’t include more links), the server file, or anywhere else (unless I’m missing something).I thought not explicitly closing change streams was bad practice, however, I think it might be a better solution than opening a large number of separate change streams per each http request (per user) even if those are being closed. After-all, only two change streams are being opened in the linked example in total.Can someone with solid experience please weigh in here? And if it’s okay to not close the change stream, is there a way you would recommend improving upon what this developer did, e.g., in the event the change stream fails? I suppose resuming could be implemented. Any assistance with this is much appreciated.", "username": "John_Weathers" }, { "code": "", "text": "I use change stream with stream pipeline, in node.js pipeline function quit on error or exhaustion.\nI use health check API to make k8s in this case to restart the service and the change stream.\nso the infinite run of a change stream is bit of a challenge but you can solve it anyway you think serve the need.in your case (I don’t know socket.io so much) I would maybe identify an error or just catch the “close” event on a cursor and make the change stream part re-runnable for these cases with some retry limit.", "username": "Shay_I" } ]
Change Streams with Socket io (questions about MongoDB developer approach in tutorial)
2023-03-16T22:20:54.480Z
Change Streams with Socket io (questions about MongoDB developer approach in tutorial)
459
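A rough Node.js sketch of the long-lived watcher pattern discussed above: one change stream per collection, broadcast to clients over Socket.IO, with a simple error handler that reopens the stream from a resume token. Collection and event names are illustrative, not the tutorial's:

const { MongoClient } = require("mongodb");
const { Server } = require("socket.io");

const io = new Server(3000);
const client = new MongoClient(process.env.MONGODB_URI);

function watchPosts(resumeToken) {
  const posts = client.db("blog").collection("posts");
  const changeStream = posts.watch(
    [],
    resumeToken ? { resumeAfter: resumeToken } : {}
  );

  changeStream.on("change", (change) => {
    resumeToken = change._id; // remember where we are
    io.emit("postChanged", change); // push to every connected client
  });

  changeStream.on("error", (err) => {
    // Reopen from the last token instead of leaving the stream dead.
    console.error("change stream error, reopening:", err.message);
    changeStream.close().catch(() => {});
    setTimeout(() => watchPosts(resumeToken), 1000);
  });
}

client.connect().then(() => watchPosts());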
null
[ "node-js", "connecting", "change-streams" ]
[ { "code": "function closeChangeStream(timeInMs = 60000, changeStream) ", "text": "Hi!When I am opening a connection for a change stream do I really have to set a timeout for it?\nwhy do the latest tutorials suggest setting a timeout as some kind of best practice?see: function closeChangeStream(timeInMs = 60000, changeStream) Discover how to react to changes in your MongoDB database using change streams implemented in Node.js and Atlas triggers.", "username": "Shay_I" }, { "code": "", "text": "Hi @Shay_I ,Having a no timeout operations in programming is generally a bad habit. You need to secure your code with good resume operations and code to cover failure. Therefore indefent operations are not advised.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "pipeline", "text": "I think I’ve got to a good balance with not setting timeout:\nI use node.js stream API and the method pipeline helps to identify reading or writing errors, once the pipeline is close/exhausted/has error the pipeline command continue to the next line, then I rerun the pipeline or in case of an error it’s shuts off the health check API which causes k8s in this case to restart.\nsometimes the errors comes in case of primary switch so after few seconds the service stabilize again", "username": "Shay_I" } ]
Change Stream: Timeout Connection Or Open indefinitely?
2022-01-12T11:31:04.194Z
Change Stream: Timeout Connection Or Open indefinitely?
4,012
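A condensed sketch of the stream-pipeline approach Shay_I describes, assuming Node's stream.pipeline: when the pipeline exits on error or exhaustion, the code either retries or fails so the orchestrator (k8s in his setup) restarts the service. All names are illustrative:

const { pipeline, Writable } = require("stream");
const { MongoClient } = require("mongodb");

const client = new MongoClient(process.env.MONGODB_URI);

async function run(retriesLeft = 5) {
  await client.connect();
  const changeStream = client.db("app").collection("events").watch();

  const sink = new Writable({
    objectMode: true,
    write(change, _enc, done) {
      console.log("change:", change.operationType); // handle the event
      done();
    },
  });

  // pipeline() invokes the callback when either side errors or closes.
  pipeline(changeStream.stream(), sink, (err) => {
    if (err && retriesLeft > 0) {
      console.error("pipeline ended, retrying:", err.message);
      setTimeout(() => run(retriesLeft - 1), 2000);
    } else if (err) {
      process.exitCode = 1; // let the orchestrator restart us
    }
  });
}

run();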
null
[ "ops-manager" ]
[ { "code": "", "text": "Is there any API available to restart Opsmanager service in version 5 and above ?", "username": "Murali_patibandla1" }, { "code": "", "text": "Hey @Murali_patibandla1,Apologize for the delayed response.Is there any API available to restart the Opsmanager service in version 5 and above?To my knowledge, there is no API available that can issue a restart command to the Ops Manager HTTP server process. You must restart it solely through the command line interface.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
Is there any API available to restart the Ops Manager service
2023-05-11T02:44:46.034Z
Is there any API available to restart the Ops Manager service
670
null
[ "production", "ruby" ]
[ { "code": "Queryable Encryption <https://www.mongodb.com/docs/upcoming/core/queryable-encryption/queryable-encryption/>crypt_shared <https://www.mongodb.com/docs/manual/core/queryable-encryption/reference/shared-library/#download-the-automatic-encryption-shared-library>mongocryptd", "text": "We are pleased to announce the 2.19.0 release of MongoDB’s Ruby Driver. This release adds support for MongoDB 7.0, as well as the following new features:See the Release Notes for a high level summary of what’s new and improved.Thank you to everyone who contributed to this release!", "username": "Dmitry_Rybakov" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Ruby driver 2.19.0 Released
2023-06-22T08:35:36.501Z
Ruby driver 2.19.0 Released
634
null
[ "node-js" ]
[ { "code": "", "text": "Title: Urgent Issue: Courses Disappeared and Exam Rescheduling ProblemHello everyone,I encountered a critical issue with my online courses and exam schedule, and I’m seeking assistance to resolve the matter promptly. Yesterday, everything seemed normal as I accessed all my courses, including Examity, a platform for scheduling exams. Consequently, I scheduled two exams: MongoDB Associate Developer Exam (Node.js) and DBA.To my dismay, today, on the day of the exam, I logged into my dashboard only to find that my courses were missing. The DBA course has vanished completely, and the other course now requires me to repay, making it impossible for me to access Examity and proceed with my exam. Unfortunately, my scheduled exam time has already passed, and I need a solution to reschedule my exam.I kindly request your assistance in resolving this matter as soon as possible. Any guidance or advice on how to regain access to my courses and reschedule the exams would be greatly appreciated.Thank you in advance for your help!sohaib", "username": "SOHAIBE_sohaib" }, { "code": "", "text": "Hey @SOHAIBE_sohaib,Thanks for reaching out to the MongoDB Community forums I noticed that you have already raised a ticket with the MongoDB Certification Team, and they are currently assisting you. Rest assured, the team will follow up with you once they are back online later today. Please note that the MongoDB certification team operates on weekdays from 9 am to 6 pm EST.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Urgent Issue: Courses Disappeared and Exam Rescheduling Problem
2023-06-21T20:24:17.764Z
Urgent Issue: Courses Disappeared and Exam Rescheduling Problem
700
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "A few months ago, I had an active MongoDB database, but for other reasons, I had to terminate it. However, it was a Dedicated tier db so I was able to take an Atlas snapshot of everything inside before it was deleted. The Atlas snapshot gave me a compressed .tar.gz with a restore folder that contained various collection and index .wt files.Fast forward to now, I’m hoping to be able to access the information within these files for a project I want to do, but I’m incredibly inexperienced with Mongo. Does anyone know how I can access the data within these .wt files without deploying a paid Atlas cluster?I tried the following:This sort of worked, but all I got was a single folder titled ‘admin’ that had two .BSON files in it:I would greatly appreciate any help! Thank you so much.", "username": "David_Zhang" }, { "code": "mongod", "text": "Hey @David_Zhang - Welcome to the community Sounds like most of the steps are fine. You don’t necessarily have to copy and paste all the .wt files into your local dbPath. Have you tried starting the mongod instance using the extracted data file directory instead? Example detailed in step 4 of the Restore from a Locally-Downloaded Snapshot documentation.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "I tried that just now, but mongod eventually just exits out with an exitCode:62. I’m actually not sure why that isn’t workingmongod --dbpath C:\\Users\\slash\\Desktop\\extracted_data\\restore-63b11574dbc7f453dc3733f4", "username": "David_Zhang" }, { "code": "mongod", "text": "Are you using the same mongod version as the backup which was downloaded?", "username": "Jason_Tran" }, { "code": "", "text": "Oh I didn’t know that was a factor. Do you know how I can find out?", "username": "David_Zhang" }, { "code": "", "text": "I can’t recall any quick way off the top of my head. Perhaps you can log into your Atlas account and check the project activity feed to see what was the last version of your cluster.", "username": "Jason_Tran" }, { "code": "", "text": "It looks like I took and downloaded this snapshot on the 1st of January, 2023. Does this mean I should use mongod version 4.2.23 (https://www.mongodb.com/docs/manual/release-notes/4.2/)?", "username": "David_Zhang" }, { "code": "", "text": "I really appreciate your help by the way. I’m completely new to mongo and I’ve been trying to get this data out for like two days now haha", "username": "David_Zhang" }, { "code": "mongod", "text": "No worries - Happy to help It looks like I took and downloaded this snapshot on the 1st of January, 2023. Does this mean I should use mongod version 4.2.23 (https://www.mongodb.com/docs/manual/release-notes/4.2/)?I don’t believe thats a direct indicator of what version was downloaded from your Atlas cluster. You can try a few out and let me know how you go (for e.g. starting with version 4.2 mongod and trying versions before and after).", "username": "Jason_Tran" }, { "code": "", "text": "I tried every available version of MongoDB from 3.6.23 to 5.0.18, but all of them gave me errors. 4.2.24, which was the closest thing to the version I suspected I needed (4.2.23), was available and even that didn’t work.However, I was so desperate that I ended up going on a whim and editing the download link so that it was representative of 4.2.23. I didn’t expect much, but that actually ended up working! I was able to download and use 4.2.23 mongod.Tried what I was doing before and the .BSON dump finally executed properly.Thank you for your help! 
Have a nice day.", "username": "David_Zhang" }, { "code": "", "text": "Thanks for posting that detailed solution for your scenario David. Glad it ended up working again!Have a good one.", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Accessing data from Atlas snapshot .wt files
2023-06-22T03:44:34.961Z
Accessing data from Atlas snapshot .wt files
1,166
https://www.mongodb.com/…9_2_1024x576.png
[ "backup" ]
[ { "code": "", "text": "Hello Community,Please review my blog below and let me know for any enhancements/ issues.    mongodump and mongorestore are command-line tools provided by MongoDB to create backups of databases using the BSON data format (Binary JSON). These tools allow you to export and import data from MongoDB instances. mongodump: mongodump...", "username": "Srinivas_Mutyala" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Backup Strategies
2023-06-22T03:51:45.962Z
MongoDB Backup Strategies
620
null
[ "queries", "node-js" ]
[ { "code": "{\n _timestamp: {\n $gt: {\n $date: \"2023-06-18T00:00:00.0000Z\",\n },\n },\n}\n", "text": "I run the following in Atlas:And it happily returns what i want.\nWhen I run the same inside a collection.find() inside of Node, i get no results in my cursor. I have run without the filter to confirm everything is working. I’ve copied the query directly from Node to Atlas as well to confirm there are no differences and experimented quite a bit to get to the above after reading many forum posts that didn’t work (e.g. using new Date()). The data I am querying against is a date field not a string (and hence it works fine from Atlas). Any ideas? I’m sure I’m doing something stupid, but I’m not sure what stupid thing I’m doing.", "username": "michael_hyman1" }, { "code": "", "text": "The stupid thing most often made is using the wrong server, the wrong database or the wrong collection.", "username": "steevej" }, { "code": "{\n _timestamp: {\n $gt: {\n $date: \"2023-06-18T00:00:00.0000Z\",\n },\n },\n}\n", "text": "I appreciate that, but already checked that a few times. I can confirm it is the same database, same collection, when i remove the filter all of the data flows as expected, if i copy the filter over to the mongo ui and apply it filters properly/if i do it in node it returns no results.", "username": "michael_hyman1" }, { "code": "", "text": "Please share the code you triedusing new Date()", "username": "steevej" }, { "code": "", "text": "Hah, son of a gun. Now it is working with the new Date() syntax. Sorry for the bother", "username": "michael_hyman1" }, { "code": "$date: <ISODate>new Date(<ISODate>)$date{$date: <ISODate>}", "text": "Hi @michael_hyman1Glad you resolved the issue.I may have an idea why it’s working. In Atlas, it recognizes the extended JSON date format (the one using $date: <ISODate> in your earlier example). However Node doesn’t recognize this, and the only way to create a datetime variable is to use new Date(<ISODate>).I think this is why the earlier extended JSON $date doesn’t work in Node. It was literally searching for sub-documents that looks like {$date: <ISODate>} instead of a datetime datatype.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Fascinating. That makes a ton of sense and it feels good to have a sense of the reason.\nI have to say, Mongo has been a blast to work with even though I’m just touching the surface.", "username": "michael_hyman1" } ]
Node.js date filtering in collection.find()
2023-06-19T20:39:23.433Z
Node.js date filtering in collection.find()
685
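Kevin's explanation condensed into a short Node.js sketch; the field name comes from the thread, while the URI and namespace are placeholders:

const { MongoClient } = require("mongodb");

async function main() {
  const client = await MongoClient.connect(process.env.MONGODB_URI);
  const coll = client.db("mydb").collection("readings");

  // Works: the Node driver needs a real JavaScript Date object.
  const docs = await coll
    .find({ _timestamp: { $gt: new Date("2023-06-18T00:00:00.000Z") } })
    .toArray();

  // Returns nothing in Node: this matches literal { $date: ... }
  // subdocuments rather than datetimes, because the extended JSON form
  // is only parsed by tools like the Atlas UI.
  // coll.find({ _timestamp: { $gt: { $date: "2023-06-18T00:00:00.0000Z" } } });

  console.log(docs.length);
  await client.close();
}

main();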
null
[ "data-modeling" ]
[ { "code": " {\n _id: 23, // postId in postgres\n comments: [\n {\n _id: 34, // commentId in postgres\n text: \"This is comment 34\",\n }, \n {\n _id: 67,\n text: \"This is comment 67\",\n }, \n ]\n }\n {\n _id: 23, // postId in postgres\n comments: {\n 34: {\n _id: 34, // commentId in postgres\n text: \"This is comment 34\",\n }, \n 67: {\n _id: 67,\n text: \"This is comment 67\",\n }, \n }\n }\n", "text": "I am using mongodb to store one-to-many relationship (similar to https://docs.mongodb.com/manual/tutorial/model-embedded-one-to-many-relationships-between-documents/)Say a collection of blog posts with comments embedded in it. Which of the below approaches is recommended?Query Patttern:Write Pattern:Note:Approach 1: (array of embedded documents)Approach 2: (map of embedded documents)", "username": "Hasan_Kumar" }, { "code": " {\"$project\" {comments.100 1\n comments.20 1}\n ....\n }\n", "text": "The normal way for mongodb is to store data(unknown values) to arrays,\nand keys to be metadata(the known schema)The reasons for that is\n1)unknown fields cannot be indexed\nin your first approach if you want to index the comment_id,you can do it with\na multikey index,but in second how to make index on comment_id?\n2)Dynamic embeded documents are not well supported\nFor example we cant do get($$k)/remove($$k)/add($$k,$$v) in a fast way.\n(if key is a mongodb variable)If we have constant keys,meaning that we know them at the time we send the\nquery(not keys in mongodb variables),we can use pipeline stage operators,or even\nconstruct, query parts,but those are weird methods.Query pattern\nFor a given blog post, return comments with ids in a given array (around 1000 per second)Query patttern,first approach\n1)find post\n2)filter comments you need($filter on the comments map)\n*if you have like 10 average it will be very fastQuery pattern,second approach\n1)find post\n2)how you get the comments?\nyou contruct on driver something like the bellow,and you send the query after?This is weird wayI cant be sure the first is always the best,but i think in mongodb,data should go\ninto arrays,and schema should go into keys.", "username": "Takis" }, { "code": "", "text": "How to make write pattern (Update/insert a comment with given id) work in case of array of embedded dcouments?\nGiven I always fetch comments by id, I am assuming Approach 2 will be more efficient than Approach 1 as I don’t have to search through all documents (for large documents)", "username": "Hasan_Kumar" }, { "code": "", "text": "Its easy to insert/or update a member(here a document) into an array.Query operators\nInsert new → $push\nUpdate existing → update a memberOr you can do a pipeline update\nInsert new → $concat\nUpdate existing → $mapYou dont need a pipeline update really,its for more complicated updates,query operators are fine.\nSee also thisI guess you know the post_id parent,of the comment that will be updated/insert,make an index\non post_id,and it will be very fast,you dont have like too many comments/post.If you dont know the post_id and you have only the comment_id that you want to update,you\ncan make a multikey index on comments._id,to find the post document fast.", "username": "Takis" }, { "code": "", "text": "From what I read from MongoDB - Safely Upserting Subdocuments Into Arrays there is no single operator to atomically upsert into array of embedded documents.\nIs it still the case?I am worried about ending up with duplicate comments if two parallel threads try to insert($push) simulataneously", "username": 
"Hasan_Kumar" }, { "code": "", "text": "MongoDB updates in 1 document they are atomic,you dont have to worry about anything.\n(all the changes that you do in 1 document,with 1 update,will be done all or nothing)MongoDB after >= 4.0 , we are now in MongoDB 5 supports trascactions so you dont have\nto worry about multi-document operations also.Also mongodb supports left joins with $lookup >= 3.2.This presentation is probably older,but the last slide shows how to push into an array,\nwith safely.You dont need transactions here or $lookup.See the above links also.\nIf you are new to MongoDB see the mongodb uni\nTo see all the videos you just enroll in the course,its free,and there is no need to pass the course,you can re-take the course if you want another time.", "username": "Takis" }, { "code": "", "text": "I was not in favour of using the solution in presentation as it involves making multiple queries for making an upsert. Was wondering if there is a native way to do that in a single statement without encountering race conditions.I understand changes to a document are atomic in general. But was worried about ending up with two simultaneous $push resulting in two duplicate subdocuments in array.The stack overflow answer linked also does it in two steps (first steps removes the record from array if it exists and second step inserts). But still feel it is prone to race conditions.\nThe second solution does something similar but uses the aggregation pipeline. Do you know what lock is acquired by the read statement in a pipeline? i.e, if two servers have sent the same query to mongodb, does mongodb serialize the full pipeline or just the write parts of the pipeline?", "username": "Hasan_Kumar" }, { "code": "", "text": "HelloWhen you send 1 query,you can change many things in the document and change will be\natomic.MongoDB offers pipeline updates also,that can do all sorts of complicated updates\nin 1 document.(you can send multiple queries also,but you need transactions to make them atomic)For an example of how to update an array with members documents see this queryIts atomic and its fast.It checks if new member => update,else add in the end of the array.\nIn your case if you already know if new or existing comment,you can make it even faster,\nand avoid the check.", "username": "Takis" }, { "code": "", "text": "Hey Hasan. I’m running into the same thing now re: deciding between structuring as nested objects vs. arrays. What did you end up going with? What do you recommend after dealing with this?", "username": "Ryan_Murphy" }, { "code": "", "text": "After considering both options thoroughly, we decided to using map, because there is no Safe Way to Upsert a series of objects without causing duplications using array", "username": "mart_q" } ]
Use map vs array of embedded documents?
2021-07-26T19:04:41.152Z
Use map vs array of embedded documents?
13,775
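For reference, the safe upsert-into-array pattern the thread links to can be written as one atomic pipeline update on MongoDB 4.2+; a sketch using the post/comment shapes from the question (it assumes the comments array already exists on the post):

// Replaces the comment whose _id matches, otherwise appends it; the
// whole update is atomic within the single post document.
const newComment = { _id: 34, text: "Edited comment 34" };

db.posts.updateOne({ _id: 23 }, [
  {
    $set: {
      comments: {
        $cond: [
          { $in: [newComment._id, "$comments._id"] },
          {
            // Existing comment: rewrite it in place.
            $map: {
              input: "$comments",
              in: {
                $cond: [
                  { $eq: ["$$this._id", newComment._id] },
                  newComment,
                  "$$this",
                ],
              },
            },
          },
          // New comment: append to the end of the array.
          { $concatArrays: ["$comments", [newComment]] },
        ],
      },
    },
  },
]);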
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to release version 1.12.0 of the MongoDB Go Driver.This release adds support for MongoDB 7.0, including production-ready support for Queryable Encryption. It also adds a new logging interface and configuration API improvements. For more information please see the 1.12.0 release notes.You can obtain the driver source from GitHub under the v1.12.0 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,\nThe Go Driver Team", "username": "Matt_Dale" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.12.0 Released
2023-06-22T00:37:08.041Z
MongoDB Go Driver 1.12.0 Released
702
null
[ "python", "production" ]
[ { "code": "", "text": "We are pleased to announce the 1.0.0 release of PyMongoArrow - a PyMongo extension containing tools for loading MongoDB query result sets as Apache Arrow tables, Pandas and NumPy arrays.This is a major release that support for:See the 1.0.0 release notes in JIRA for the complete list of resolved issues.Documentation: [PyMongoArrow 1.0.0 Documentation])(PyMongoArrow 1.0.0 Documentation — PyMongoArrow 1.0.0 documentation)\nSource: [GitHub](Release 1.0.0 · mongodb-labs/mongo-arrow · GitHubThank you to everyone who contributed to this release!", "username": "Steve_Silvester" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
PyMongoArrow 1.0.0 Released
2023-06-21T23:37:42.269Z
PyMongoArrow 1.0.0 Released
617
null
[ "atlas-functions" ]
[ { "code": "process.versionv10.18.1", "text": "Hello, a process.version shows the current version of node is v10.18.1.\nIs there any way we can change this? i could not find anything about this topic.Thank You", "username": "Georges_Jamous" }, { "code": "", "text": "i wonder too, if i could switch nodejs version used in realm function runtime.\ncurrently, i couldn’t update third party module cause nodejs version is low", "username": "11115_1" }, { "code": "", "text": "Hello,This is also a major issue for us. I’ve also reported my concerns to the premium support and one of their product manager. They are aware of this limitation.I’ve also created this feedback:It would be great if you can upgrade the version of Node.js used in Realm Functions (currently in version 10.18.1). Version 10.x, which was an LTS, has been unmaintained for many years. The end of life of Node v12 is scheduled for April 2022.\n\nIs it...Please upvote", "username": "GuillaumeAp" }, { "code": "new Date('2022-04-14 16:46:08 (CET)')", "text": "Upvoted!This is important. Using old Node v10 syntax and limitations is not good for productivity.I’m mainly missing two things:", "username": "Mikael_Gurenius" }, { "code": "", "text": "I was hoping that MongoDB 6.0 would bring an update to the Node engine, but no.When can we expect an update?", "username": "Mikael_Gurenius" }, { "code": "", "text": "Wow, this is disappointing. There should be a huge warning in the docs about how outdated node is for functions.", "username": "Jan_Schwenzien" }, { "code": "", "text": "Fully agree. We have started moving away from functions except the super simple ones.", "username": "Mikael_Gurenius" }, { "code": "", "text": "Have you changed your node version in atlas app services? If you did then please tell me that how can I update node version?", "username": "Shahzad_Safdar" }, { "code": "", "text": "Hey everyone thanks for all the feedback! Our version of Node is actually custom so we are able to support some recent dependencies, it was based off of v10 but we are migrating to newer versions with the ability to support multiple ones as well.Regarding syntax support I’d recommend to run your function with modern syntax, every function is transpiled so we are able to support newer ES features. If you have specific requests or have tried an npm package that doesn’t work please let me know and we will make sure to add support for it.", "username": "Gabriele_Cimato" }, { "code": "console.log(JSON.stringify(process.versions)) // {\"node\":\"10.18.1\"}\n\nconst saml = require('saml'); // {}\n", "text": "Thank you @Gabriele_CimatoI would like support for SAML. It requires Node v12 or later.PS: When I read documentation at https://www.mongodb.com/docs/manual/core/server-side-javascript/ I had the impression that runtime was improved at 6.0:MongoDB 6.0 upgrades the internal JavaScript engine used for server-side JavaScript, $accumulator, $function, and $where expressions and from MozJS-60 to MozJS-91.", "username": "Mikael_Gurenius" }, { "code": "", "text": "Hey @Milosz_Kowalski I was able to use the “saml” package in functions, so I’d recommend to try doing what you wanted to do with saml and let me know if you have specific errors coming up. I don’t have valid saml options to test this out fully though. 
There’s also @node-saml/passport-saml which might also help as an alternative if you end up being blocked with what you want to accomplish now.Regarding the link that’s the drivers doc not app services which doesn’t use a custom Node engine like we do for App Services Functions.", "username": "Gabriele_Cimato" }, { "code": "saml", "text": "Well @Gabriele_Cimato, it’s the signing part that doesn’t work with saml package. I would be happy if this one can be supported!Simple example:\n\nimage1202×1022 100 KB\n", "username": "Mikael_Gurenius" }, { "code": "", "text": "I looked into this and it seems like something is being corrupted during transpilation. If you try to upload a compressed archive like so:\nScreenshot 2023-05-08 at 2.16.42 PM1188×1356 96.7 KB\ninstead of installing the dependency I noticed that it works as expected. Give it a try to see if you can get unblocked while I keep investigating. I hope this helps!", "username": "Gabriele_Cimato" }, { "code": "node_modules", "text": "Not really.I uploaded a node_modules archive and dependencies was installed. However, trying to use the library yields “Cannot found module saml”. It looks to be installed but isn’t.More in next reply…\nimage726×554 28.9 KB\n\nimage1268×1346 59.7 KB\n", "username": "Mikael_Gurenius" }, { "code": "3.0.1exports = async function(){\n \n const saml = require('saml').Saml20; \n\n const options = {\n cert: \"-----BEGIN CERTIFICATE-----MIIDQTCCAiigAwIBAgIBADANBgkqhkiG9w0BAQ0FADA6MQswCQYDVQQGEwJ1czELMAkGA1UECAwCQ0ExDjAMBgNVBAoMBU1vbmdvMQ4wDAYDVQQDDAVyZWFsbTAeFw0yMzA1MTcxNjMyMzhaFw0yNDA1MTYxNjMyMzhaMDoxCzAJBgNVBAYTAnVzMQswCQYDVQQIDAJDQTEOMAwGA1UECgwFTW9uZ28xDjAMBgNVBAMMBXJlYWxtMIIBIzANBgkqhkiG9w0BAQEFAAOCARAAMIIBCwKCAQIA36ruOTlsZvICTs9ve0Wc0fUe32wxrFTcrd+Y7ykMgSF2Ykyl+PYFgHUF6WkgdEeXQ23PfCNa+kGQvXM7wY9bQqWUa0Aiac07iArU7XZQYiOFmbcUWSPaGiOeTRvFKRR86ecnA/Faog880KzAYRf5g4E99RVeii0FtQwoYhO4VawoPtAMb2UEZZq/ByUX6zguGsBAQhFBi6I8ifK47I5kLbsYosQNJrhiTSEyb6nHW//k7N7C8NlmHLfzz+2bRmq+zh29FznrwPN/i/pGVnKhElm2R/wvaR+YhpUzyfNX/5M7HgH8mlCNfFMFeh9kbhCUc745nsauhkSkJNVqsppqLoUCAwEAAaNQME4wHQYDVR0OBBYEFDW1+Br2+ROw7Fr1e1W6kv8KrW58MB8GA1UdIwQYMBaAFDW1+Br2+ROw7Fr1e1W6kv8KrW58MAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQENBQADggECAEokh67pGDITXtx0f6H9D22LGNqqGsjHt+DHWk9Kjf7AY7i7qZ1yN5TyWRsrspWsJOfrTh/dOne1aHyHcLPPoY2aRzKaF7h0SV9QbjjURHKn4GlY7Xks3xQ4VxsxBTszUN8aqqLRuPcH3ulEeq6AqGRPO0fYLmtlAJ0DFKxalw5SIZXjwlo3Vqui/ufiPlk+0cqKcS8YRRPX5wmcr9BJRSGyaGy0Zw92LyX0IUeP6qi2wT9GirF0gUrbEY+TfARYUy3eDHVdSV3S5foEcRaA0WbPxplC90Cd5JEo64YkdxS0wasQHV+YeRiF6yk6Bms6aRiBTqWYZ5hA6KGZCqYPGw6j-----END CERTIFICATE-----\",\n key: \"-----BEGIN PRIVATE 
KEY-----MIIEwQIBADANBgkqhkiG9w0BAQEFAASCBKswggSnAgEAAoIBAgDfqu45OWxm8gJOz297RZzR9R7fbDGsVNyt35jvKQyBIXZiTKX49gWAdQXpaSB0R5dDbc98I1r6QZC9czvBj1tCpZRrQCJpzTuICtTtdlBiI4WZtxRZI9oaI55NG8UpFHzp5ycD8VqiDzzQrMBhF/mDgT31FV6KLQW1DChiE7hVrCg+0AxvZQRlmr8HJRfrOC4awEBCEUGLojyJ8rjsjmQtuxiixA0muGJNITJvqcdb/+Ts3sLw2WYct/PP7ZtGar7OHb0XOevA83+L+kZWcqESWbZH/C9pH5iGlTPJ81f/kzseAfyaUI18UwV6H2RuEJRzvjmexq6GRKQk1WqymmouhQIDAQABAoIBAUPK81m05f5t6/UnOosKlnWs7iaaDJRHRHwPAbO7pWaeVduFj+jd6Nz+m0Qb8RJNgLOXXQQrUy/3H3/MpZgNc4PH3CyFy+h1pE2futoeuk6EpcHpk+lQzJKPqTOF70R8SUA8J78yMF5eb/hv4/+J3L7XNYhLadRHwSsW/EZ946lT8+sAyZaR5BKBN/NXKHvttGEpDUJfJ8bBsAEmTOH3B2C+ILjdh7nTPgcms/yOmD1TixoOBdGJ38DNgaEwi+x1qb87RyHbcCbtJ8IJRsZgD7GrVZqo7KIAyk4DHDmCrl4vzXokw5a7CnWGTMf0jlwmz5joWn7Jc55qXp0Wuw6kMW8hAoGBD7mfgexvYFdJmNseipsw6888LT+Tmf+H/xcThC/HYVQQaGP+JwyYdIb6zIIYloJYxVTBC8odmvAJy3V+qKfyJfUqweqzssPXY9M+1K4/+LQrxlBc5RFhXGCVqO+PevjitFDnPFXe+leOnj8L1eYtLXDSsSNAnAGqlRmS32LN8HYbAoGBDjk/H0tyQYmpKnGG93eKFxw5wjzKBlL3M54X1Xf1Y6pFu841jLjCUJ37kh3JAoHdc8Fx8140tu6mFZsMusbdwyivQ8tEUKvmnaOvrhQo41HErjbx/JEZu7MtSlr4o8s5S2PPNQktnacTwuj9njFsRbagOblvxYDQHzziwurE8rffAoGBDDuzwzdUTfaZ4rhUQjAJFunZPro+8YbBHcmt/R/OVAE54nwns+kwkTaQ1Zg/2Jb+yETvCWTrMyWZ+RYmur4suyrHYKRdt6xzW81zC7GjQq+nflf2bJ3gyCS7SPlU/a2xb+WgfmevV8HVyXXylyzB6J/kyLlMAhGpyuRiRjZvT6oRAoGBC8VOOny6OghNKWW39rVDXuqp1ddurJseHeZX/P1/4pG3kbsNz73aeNK0rO/fOCb2d+P/hBJS94w5f6nHeA442Vei6yydBVGs0In0SdA/Ihe59x5bVdNSg2W9Nkpgd1Qnvv8DK/XDfTMWBHfB4pct7ec7Y2nVWJHIKgG9+uZERETrAoGBDuWOSauuHAXMW77e/4HMLrVMP7IuaXjWqudp2SRtQ49FIhgcnltU9f8e1OPQSD5fQrAOKjH34u39vpuMn6mI/wsWI44X234e9DcVIkmD4+pgtMRP7fk9IEUh9uM0h91FJ3+y9sp/5JZnqNqdHFlPYnsW1ePCE/f94tF3wK8dlFo/-----END PRIVATE KEY-----\",\n issuer: 'urn:issuer',\n lifetimeInSeconds: 600,\n audiences: 'urn:myapp',\n attributes: { 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress': '[email protected]', 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name': 'Foo Bar' },\n nameIdentifier: 'foo',\n sessionIndex: '_faed468a-15a0-4668-aed6-3d9c478cc8fa'\n };\n \n return {\n signedAssertion: saml.create(options),\n assertion: saml.createUnsignedAssertion(options)\n };\n}\n", "text": "Now, installing library as intended, it looks like this. Same 3.0.1 version. The library is available and usable. Except for the signed part… Try for yourself, I’m adding the code below.\nimage1224×642 31 KB\n\nimage958×1124 116 KB\n", "username": "Mikael_Gurenius" }, { "code": "", "text": "Hey @Mikael_Gurenius I think I identified the issue, we’re currently working on it and I’m hoping to get a fix out before EOW. I will update you here once the fix is out!", "username": "Gabriele_Cimato" }, { "code": "", "text": "@Mikael_Gurenius I just deployed a fix, would you mind giving it another try? No need to upload dependencies you should be able to do it by adding “SAML” through the UI. Just to be safe try it with a new app and let me know!", "username": "Gabriele_Cimato" }, { "code": "", "text": "@Gabriele_Cimato, thank you! I can confirm that the code snippet I posted above is now working! What was the issue?", "username": "Mikael_Gurenius" }, { "code": "", "text": "…but is awfully slow. Testing a few times is 1.5 - 2.0 seconds for the execution. 
Whereby my current implementation is stable in the 0.5 - 0.7 region doing an external HTTP call to a node server with one task: return the signedAssertion.Is there anything to be done with the performance?\nimage1896×1032 196 KB\n", "username": "Mikael_Gurenius" }, { "code": "", "text": "When you will update the current Node version from V10.18.1 to latest version??", "username": "Shahzad_Safdar" } ]
Change function runtime node version
2022-03-02T21:36:58.063Z
Change function runtime node version
5,814
null
[ "python", "production", "motor-driver" ]
[ { "code": "", "text": "We are pleased to announce the 3.2.0 release of Motor - MongoDB’s Asynchronous Python Driver. This release brings support for MongoDB version 7.0See the changelog for a high-level summary of what is in this release or see the Motor 3.2.0 release notes in JIRA for the complete list of resolved issues.Documentation: Motor: Asynchronous Python driver for MongoDB — Motor 3.2.0 documentation\nChangelog: Changelog — Motor 3.2.0 documentation\nSource: GitHub - mongodb/motor at 3.2.0Thank you to everyone who contributed to this release!", "username": "Steve_Silvester" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Motor 3.2.0 Released
2023-06-21T21:28:41.552Z
Motor 3.2.0 Released
802
null
[ "atlas-cluster", "android" ]
[ { "code": "", "text": "Hello mongodb team hope you’re doing well.Please can you help me solving this problem. I met it when trying to login to my app with my android phone.I’ve got this messagerecoverable event subscription error encountered: error getting new mongo client while creating pbs app translator: error connecting to MongoDB service cluster: failed to ping: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: realmcluster-shard-00-00.n3so8.mesh.mongodb.net:30454, Type: Unknown, Last error: remote error: tls: internal error }, { Addr: realmcluster-shard-00-01.n3so8.mesh.mongodb.net:30454, Type: Unknown, Last error: remote error: tls: internal error }, { Addr: realmcluster-shard-00-02.n3so8.mesh.mongodb.net:30454, Type: RSSecondary, Tag sets: provider=AWS,nodeType=ELECTABLE,region=US_EAST_1,workloadType=OPERATIONAL, Average RTT: 2642731 }, ] }Thank.", "username": "Ody" }, { "code": "", "text": "Hi, this is a transient issue with connecting to MongoDB. You can see it eventually gets over it and successfully connects here: App ServicesI will add this as something we will no longer log for customers since this should not be anything to be worried about.If you go to the Cluster > Metrics tab you should see that the cluster was experiencing some issues during this time. I would recommend upgrading to a dedicated tier for less instability and more visibility if this becomes a persistent issue.Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Thank you, everything is working now.", "username": "Ody" } ]
Fatal error when connecting
2023-06-14T16:16:25.743Z
Fatal error when connecting
738
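Not from the thread, but one way to ride out short ReplicaSetNoPrimary windows like the one logged above is to raise the server selection timeout and retry the initial connection; a minimal Node.js sketch:

const { MongoClient } = require("mongodb");

async function connectWithRetry(uri, attempts = 3) {
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 60000 });
  for (let i = 1; i <= attempts; i++) {
    try {
      await client.connect();
      await client.db("admin").command({ ping: 1 }); // verify reachability
      return client;
    } catch (err) {
      console.error(`connect attempt ${i} failed:`, err.message);
      if (i === attempts) throw err;
      await new Promise((r) => setTimeout(r, 2000 * i)); // back off
    }
  }
}

connectWithRetry(process.env.MONGODB_URI).then(() => console.log("connected"));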
null
[ "dot-net", "production" ]
[ { "code": "", "text": "This is the general availability release for the 2.20.0 version of the driver.The main new features in 2.20.0 include:The full list of issues resolved in this release is available at CSHARP JIRA project.Documentation on the .NET driver can be found here.", "username": "Oleksandr_Poliakov" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
.NET Driver 2.20.0 Released
2023-06-21T18:52:59.801Z
.NET Driver 2.20.0 Released
840
null
[ "node-js", "mongoose-odm" ]
[ { "code": "type:String,\n\nrequired:true,\n\nunique:true,\ntype:String,\n\nrequired:true,\ntype:String,\n\nrequired:false,\n type:String,\n\n required:true,\n type:Array,\n\n required:false\n\"code\": 11000,\n\n\"keyPattern\": {\n\n \"email\": 1\n\n},\n\n\"keyValue\": {\n\n \"email\": null\n\n}\n", "text": "I have 3 collections in my database-- Users, Posts, type.\nUser model–const userSchema = new Schema( {\nusername:{\ntype:String,\nrequired:true,\nunique:true,\n},\nemail:{\ntype:String,\nrequired:true},\npassword:{\ntype:String,\nrequired:true,\n},\nprofilePic:{\ntype:String,\ndefault:“”,\n}},\n{timestamps:true}\n);const User = mongoose.model(‘User’, userSchema);Post Model–\nconst postSchema = new Schema( {title:{},description:{},photo:{},username:{},categories:{},},{timestamps:true});const Post = mongoose.model(‘Post’, postSchema);Create Post worked once from postman. Now it gives me the following error.Postman request–\n{\n“username”:“Mouse”,\n“title”:“Cruise along”,\n“description”:“The Island”\n}Response----“index”: 0,}I am not sure what I am doing wrong!?", "username": "Pranoti_Savadi" }, { "code": "", "text": "Hi @Pranoti_Savadi ,The 11000 code is a duplicate key error. This means that one of the uniquely defined keys were duplicated by a document insert attempt.Perhaps the title you are trying to create already exists in the database.Maybe also the email is defined required but you maybe insert a null… not very familiar with mongoose error types.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "The error message is saying that there is already a record with null as the email. If a document does not have a value for the indexed field in a unique index, the index will save a null value for this document. Because of the unique feature, MongoDB will only permit one document that lacks the indexed field. So just remove that null document and it will work.Or Simply… Just remove the collection (“users”) and insert the data it will work.", "username": "Chandresh_Kesri" }, { "code": "", "text": "Since you recommendSo just remove that null document and it will work.orJust remove the collection (“users”) and insert the data it will work.for an issue about multiple null value with a unique index, perhaps you are not aware about partial indexes.", "username": "steevej" }, { "code": "", "text": "My problem is a bit more complicated, I use idNumber as unique but sometimes I need to use null value, in these cases it shouldn’t check for uniqueness and unfortunately sparse:true doesn’t solve my problem.", "username": "Ali_KAYA1" } ]
Code -11000 --"keyPattern": { "email": 1}, "keyValue": { "email": null }
2022-05-12T01:23:06.658Z
Code -11000 &ndash;&ldquo;keyPattern&rdquo;: { &ldquo;email&rdquo;: 1}, &ldquo;keyValue&rdquo;: { &ldquo;email&rdquo;: null }
7,775
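The partial-index approach steevej points to (which also covers Ali's case of many null idNumber values) looks like this in mongosh; field names are illustrative:

// Uniqueness is enforced only for documents where email is a string,
// so any number of documents may omit the field or hold null in it.
db.users.createIndex(
  { email: 1 },
  { unique: true, partialFilterExpression: { email: { $type: "string" } } }
);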
null
[ "connector-for-bi" ]
[ { "code": "", "text": "OS: Windows 11 Pro\nPBI: Version: 2.117.984.0 64-bit (May 2023)\nODBC SQL Atlas Interface: V.0.1.4\nPower Bi Connector: V.0.1.4Currently I have configured the SQL atlas Interface beta connector, I read and connect data from a federated database, in all the collections of the database, the load from PBI works fine, however I have 2 collections that give me this error,No se han podido guardar las modificaciones en el servidor. Error devuelto: 'OLE DB or ODBC error: [Expression.Error] Data source error occurred. SQLSTATE: HY000 NativeError: 96 Error message: ODBC: ERROR [HY000] [MongoDB][Core] Trying to execute query failed with error: Kind: Command failed: Error code 96 (OperationFailed): failed getting result set schema: translator error: project fields may not be empty, contain dots, or start with dollars, correlationID = 176a2801865a657aba0d5641, labels: {}. '.I don’t know if it has something to do with the flexibility of the collection, for example, some documents have a string that comes with integers, and other documents this array does not exist… Does that influence the load? I was also unable to load this collection using the Mongo BI connector.", "username": "Adolfo_Adrian" }, { "code": "", "text": "Hi @Adolfo_Adrian Thanks so much for your question and welcome to the community! Sometimes when a collection(s) can’t be loaded or previewed within Power Query it is because of a SQL schema issue. We do a very quick scan and sample 1 document to build the SQL schema so this could be problematic when you have polymorphic data throughout your source collection. You can fix this by going into MongoDB Shell and running a command to generate the SQL Schema, with a larger sample size. You can do this for each collection or for all collections within your federated database. Here is our docs page for this info as well: https://www.mongodb.com/docs/atlas/data-federation/query/sql/schema-management/Here are some instructions to help guide you:In the below command, datalake = the name of my virtual database and testDL is the viratul collection. You could also generate a SQL Schema for all collections within a Federated DB instance virtual database with a wildcard:But this error has me thinking that your collection name(s) may contain a period or dollar sign? Can you see if that is the case? If that is the case, we are looking into a fix that may correct this soon, but for now a work around would be to use a manually created Federated DB (not a SQL Quickstart) and when creating the virtual database and collection names within the Federated DB instance make sure to avoid periods/dots and dollar signs.Let me know if you need any assistance.Best,\nAlexi", "username": "Alexi_Antonino" }, { "code": "", "text": "Thanks for the answer, I’ll try and I’ll be leaving the results this way. On the other hand, the collections that do load successfully, how can I segment their load, in the case of SQL Atlas Interface (via federated data bases) does not allow me to write a query to segment by date, for example, as is the case with mongo BI connector.Could you help me with this while I verify the issue.", "username": "Adolfo_Adrian" }, { "code": "", "text": "I am happy to tell you that it was not necessary to carry out your instructions, although it is good to know.The error certainly came from naming the virtual collections in the federated database with a point in between. 
When renaming it gave no error.On the other hand, you could help me with the question of segmentation in the load.", "username": "Adolfo_Adrian" }, { "code": "", "text": "Hello Adolfo - I am happy to hear that these 2 collections/tables loaded for you within Power BI now. Per your other question, when you say “segmentation” I believe you are asking how to filter by a piece of data to narrow down results? There are multiple ways to narrow the data. First, if you always wanted the data set to be narrowed, you could created a view within Data Federation using something like $match to filter. But if you want your data set to be larger and used for many different purposes once in Power BI there are 2 methods I am aware of. Within Power Query, you can use the column menu to filter or you can use SQL within the Power Query formula bar.\n\nScreenshot 2023-06-21 at 8.31.52 AM571×626 24.6 KB\n\n\nScreenshot 2023-06-21 at 8.34.24 AM1176×663 169 KB\nLet me know if this is the functionality you were looking for, or if I have misunderstood.Best to you!", "username": "Alexi_Antonino" }, { "code": "", "text": "It works wonderfully, I just couldn’t figure out how to apply various unwins combined with flattenI tried this one, it worked perfect for meSELECT *\nFROM FLATTEN(\nUNWIND(Flexiweb_Production.POSVersion\nWITH PATH => downloadHistory))However, there are several objectives that I want to treat something like this,SELECT *\nFROM FLATTEN(UNWIND(Flexiweb_Production.POSVersion\nWITH PATH => downloadHistory),UNWIND(Flexiweb_Production.POSVersion\nWITH PATH => Statistics))but it gives me intanxis errors in the documentation there is no example with several unwind https://www.mongodb.com/docs/atlas/data-federation/query/sql/reference/#std-label-sql-limitations", "username": "Adolfo_Adrian" }, { "code": "", "text": "Another question, such as the sistansis to add a where on a date field, intnetn with cast, concvert, but they all give me an error. I want to do something likeSELECT *\nFROM Flexiweb_Production.Batch\nWHERE (date as date)=“2023-01-03 12:35:55.000”", "username": "Adolfo_Adrian" }, { "code": "", "text": "Yay - I am so happy you got things working. Here is some SQL syntax/examples that may help you.This example uses dot notation to unwind an array\nSelect CAST(_id as String),purchaseMethod, customer.age, items.quantity, items.price from UNWIND(Sales WITH PATH=> Sales.items) Where items.quantity = 2\n\nScreenshot 2023-06-21 at 12.44.50 PM1605×337 28 KB\nHere is where I use a cast to extract the year and filter on year extraction:\n= MongoDBAtlasODBC.Query(“mongodb://asql-rotpc.a.query.mongodb.net/Supplies?ssl=true&authSource=admin”,“Supplies”,“Select CAST(_id as String),purchaseMethod, customer.age, CAST(EXTRACT(YEAR FROM saleDate)as integer) as SalesDate, items.quantity, Cast(items.price as varchar(20)) as price from UNWIND(Sales WITH PATH=> Sales.items) where CAST(EXTRACT(YEAR FROM saleDate)as integer)>2015”)If you would be so kind to send me your email address, I will provide you with a pdf that has all kinds of mongosql examples like these 2 above.Here is my email: [email protected]", "username": "Alexi_Antonino" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SQL Atlas Interface Error Connect Power BI
2023-06-19T20:11:46.083Z
SQL Atlas Interface Error Connect Power BI
1,401
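The schema-generation step Alexi outlines can be issued from mongosh while connected to the Federated Database instance; a sketch, with the namespace and sample size as placeholders:

// Rebuilds the SQL schema for one virtual collection from a larger
// sample; a wildcard such as "Flexiweb_Production.*" covers them all.
db.runCommand({
  sqlGenerateSchema: 1,
  sampleNamespaces: ["Flexiweb_Production.POSVersion"],
  sampleSize: 1000,
  setSchemas: true,
});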
null
[ "aggregation", "queries", "compass" ]
[ { "code": "\n[\n {\n $match:\n /**\n * query: The query in MQL.\n */\n {\n League: \"English Premier League\",\n Pos: {\n $in: [\"MF\", \"MFFW\"],\n },\n Min: {\n $gt: 0,\n },\n Gls: {\n $gt: 0,\n },\n },\n },\n {\n $sort:\n /**\n * Provide any number of field/order pairs.\n */\n {\n Gls: -1,\n },\n },\n {\n $limit:\n /**\n * Provide the number of documents to limit.\n */\n 100,\n },\n {\n $sort:\n /**\n * Provide any number of field/order pairs.\n */\n {\n Gls: 1,\n Gls_90: 1,\n Ast: 1,\n },\n },\n {\n $setWindowFields:\n /**\n * partitionBy: partitioning of data.\n * sortBy: fields to sort by.\n * output: {\n * path: {\n * function: The window function to compute over the given window.\n * window: {\n * documents: A number of documents before and after the current document.\n * range: A range of possible values around the value in the current document's sortBy field.\n * unit: Specifies the units for the window bounds.\n * }\n * }\n * }\n */\n\n {\n sortBy: {\n Gls: 1,\n },\n output: {\n rank: {\n $documentNumber: {},\n },\n },\n },\n },\n {\n $project:\n /**\n * specifications: The fields to\n * include or exclude.\n */\n {\n Player: \"$Player\",\n Club: \"$Club\",\n League: \"$League\",\n Season: \"$Season\",\n Gls_score: \"$rank\",\n },\n },\n {\n $out:\n /**\n * Provide the name of the output collection.\n */\n \"Rankings\",\n },\n]\n", "text": "I have the following code from Compass on an pipeline aggregation that I’m looking to create. I would like to be able to rank by goals scored, but want to sort the data so that the best possible stats appear in a rank (1 being the lowest and somewhere near 100 being the highest) for players. The issue is, with the code below, the $setWindowFields stage needs the sortby command in order to get ranking and it basically throws away my previous sorting for the final 100 documents. Is there a way to get the ranking I need by more than 1 field?", "username": "Steve_Gilliard" }, { "code": "", "text": "Hi @Steve_Gilliard and welcome to MongoDB community forums!!As mentioned in the MongoDB documentations for $setWindowFields, multiple fields could be used for the sortBy attribute, however, to understand your concern better and suggest you with another aggregation pipeline or help you with a possible solution, it important for us to understand a few things. It would be great if you could share information like:Regards\nAasawari", "username": "Aasawari" }, { "code": "[{\n \"Player\": \"Player 1\",\n \"Pos\": \"FW\",\n \"Gls\": 29,\n \"Ast\": 5,\n \"Gls_90\": {\n \"$numberDecimal\": \"0.93\"\n },\n \"Club\": \"Club 1\",\n \"Season\": \"22/23\"\n},\n{\n \"Player\": \"Player 2\",\n \"Pos\": \"FW\",\n \"Gls\": 3,\n \"Ast\": 4,\n \"Gls_90\": {\n \"$numberDecimal\": \"0.23\"\n },\n \"Club\": \"Club 2\",\n \"Season\": \"22/23\"\n},\n{\n \"Player\": \"Player 3\",\n \"Pos\": \"FW\",\n \"Gls\": 25,\n \"Ast\": 2,\n \"Gls_90\": {\n \"$numberDecimal\": \"0.85\"\n },\n \"Club\": \"Club 3\",\n \"Season\": \"22/23\"\n},\n{\n \"Player\": \"Player 4\",\n \"Pos\": \"FW\",\n \"Gls\": 18,\n \"Ast\": 7,\n \"Gls_90\": {\n \"$numberDecimal\": \".70\"\n },\n \"Club\": \"Club 4\",\n \"Season\": \"22/23\"\n},\n]\n", "text": "I’m looking to sort on Gls, then on Ast and finally Gls_90. This would give me the order that I’d like to have the data sorted by that would give me the best ranking results.Currently, I have this indexed on Player and Club. 
The version of mongodb I’m using is : “version”: “6.0.6”Thank you for any help you can provide,Steve", "username": "Steve_Gilliard" }, { "code": "[{\n \"Player\": \"Player 1\",\n \"Pos\": \"FW\",\n \"Gls\": 29,\n \"Ast\": 5,\n \"Gls_90\": {\n \"$numberDecimal\": \"0.93\"\n },\n \"Club\": \"Club 1\",\n \"Season\": \"22/23\",\n \"Rank\": 1,\n},\n{\n \"Player\": \"Player 2\",\n \"Pos\": \"FW\",\n \"Gls\": 3,\n \"Ast\": 4,\n \"Gls_90\": {\n \"$numberDecimal\": \"0.23\"\n },\n \"Club\": \"Club 2\",\n \"Season\": \"22/23\",\n \"Rank\": 4,\n},\n{\n \"Player\": \"Player 3\",\n \"Pos\": \"FW\",\n \"Gls\": 25,\n \"Ast\": 2,\n \"Gls_90\": {\n \"$numberDecimal\": \"0.85\"\n },\n \"Club\": \"Club 3\",\n \"Season\": \"22/23\",\n \"Rank\": 2,\n},\n{\n \"Player\": \"Player 4\",\n \"Pos\": \"FW\",\n \"Gls\": 18,\n \"Ast\": 7,\n \"Gls_90\": {\n \"$numberDecimal\": \".70\"\n },\n \"Club\": \"Club 4\",\n \"Season\": \"22/23\",\n \"Rank\": 3,\n},\n]\n", "text": "Also, for the desired results, I’m looking for something similar to:", "username": "Steve_Gilliard" }, { "code": "{\n combinedRankingField: {\n $add: [\"$Gls\", \"$Ast\", \"$G+A_90\"],\n },\n}\n{\n sortBy: {\n combinedRankingField: 1,\n },\n output: {\n rank: {\n $documentNumber: {},\n },\n },\n}\n{\n Player: \"$Player\",\n Club: \"$Club\",\n Pos: \"$Pos\",\n League: \"$League\",\n Season: \"$Season\",\n Gls: \"$Gls\",\n Gls_90: \"$Gls_90\",\n Ast: \"$Ast\",\n \"G+A_90\": \"$G+A_90\",\n Score: \"$combinedRankingField\",\n Scoring_Rank: \"$rank\",\n}\n", "text": "I figured this out to get my desired solution:Instead of sortby being 1 field, I combined fields that gave me the number I needed to rank the players. There are numerous ways to do this, but I added the Gls, Ast and Gls_90, to get the order I needed them by. From this, I was able to create a combinedRankField of the 3 and use that as the sortby field. $addfields:$setWindowFields:$project:", "username": "Steve_Gilliard" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Ranking data efficiently
2023-06-14T19:56:51.403Z
Ranking data efficiently
580
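Worth noting for readers of this thread: since sortBy in $setWindowFields accepts multiple field/order pairs, the same ranking can be produced without the combined field. A minimal mongosh sketch, assuming the sample documents above live in a collection named players (the collection name is a placeholder):

    db.players.aggregate([
      { $match: { League: "English Premier League", Gls: { $gt: 0 } } },
      {
        $setWindowFields: {
          // sortBy takes several field/order pairs, so ties on Gls are
          // broken by Ast and then Gls_90 before the rank is assigned
          sortBy: { Gls: 1, Ast: 1, Gls_90: 1 },
          output: { rank: { $documentNumber: {} } }
        }
      },
      { $project: { _id: 0, Player: 1, Club: 1, Season: 1, Gls_score: "$rank" } }
    ])

This keeps the poster's convention of rank 1 being the lowest, since all three sorts are ascending.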
null
[ "replication", "atlas-cluster", "performance", "atlas-search" ]
[ { "code": "$search", "text": "We run an M20 replica set with a primary and a couple secondaries. We run an M10 Analytics node in order to help facilitate heavy, ad-hoc real-time reads (non $search).A couple weeks ago we started using Atlas Search and didn’t realize that the Analytics node would be used. With the setup that we have, any heavy Search activity/new indexing/etc. has a detrimental effect on our analytics node (if you look at our metrics, the node is on fire).Is there any way to drop Atlas Search activity on Analytics nodes?", "username": "randytarampi" }, { "code": "", "text": "Hey @randytarampi – question for you, would you have any interest in isolating your search workload entirely? We are about to announce an Private Preview Program around Dedicated Search Nodes, which may just off the bat resolve this issue for you?", "username": "Elle_Shwer" }, { "code": "", "text": "We are about to announce an Early Access Program around Dedicated Search Nodes, which may just off the bat resolve this issue for you?Hey @Elle_Shwer – thanks for the quick reply. I’m definitely interested! How can we learn more?", "username": "randytarampi" }, { "code": "", "text": "DM me your email and I can send you a form to fill out and link you with the Product Manager covering this.", "username": "Elle_Shwer" } ]
Can we disable Atlas Search on Analytics nodes?
2023-06-21T14:38:49.854Z
Can we disable Atlas Search on Analytics nodes?
685
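For context on how reads end up on analytics nodes in the first place: Atlas generally routes a query to an analytics node only when the connection string requests it through replica set tags, roughly like the following (host and credentials are placeholders):

    mongodb+srv://user:pass@cluster0.example.mongodb.net/?readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS

That controls where queries execute, but, as this thread surfaced, the mongot search process still runs alongside the analytics node itself, which is the problem the Dedicated Search Nodes program mentioned above is meant to solve.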
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 6.0.7-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 6.0.6. The next stable release 6.0.7 will be a recommended upgrade for all 6.0 users.Fixed in this release:6.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team", "username": "Britt_Snyman" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 6.0.7-rc0 is released
2023-06-21T15:36:17.884Z
MongoDB 6.0.7-rc0 is released
650
null
[ "production", "cxx" ]
[ { "code": "", "text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.8.0.This release provides support for new features in MongoDB 6.0 and MongoDB 7.0.Please note that this version of mongocxx requires MongoDB C Driver 1.24.0 or higher.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.NOTE: The mongocxx 3.8.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team", "username": "Kevin_Albertson" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB C++11 Driver 3.8.0 Released
2023-06-21T15:10:29.977Z
MongoDB C++11 Driver 3.8.0 Released
664
null
[ "node-js", "atlas-functions", "app-services-user-auth" ]
[ { "code": "exports = async function (payload) {\n const { token } = payload;\n await context.http.get({\n url: 'https://graph.microsoft.com/v1.0/me',\n headers: {\n Authorization: \"Bearer \" + token,\n \"Content-Type\": \"application/json\",\n },\n });\n};\nawait axios({ method: 'GET', url: 'https://graph.microsoft.com/v1.0/me', headers: { Authorization: 'Bearer ' + token, },await axios.get('https://graph.microsoft.com/v1.0/me', { headers: { Authorization: 'Bearer ' + token } });", "text": "I am attempting to implement a custom authentication function on an app service. This involves verifying the existence of a user using the Graph API in order to authenticate the client.Code:Also tried using other dependencies but getting error as below:Error : Failed to validate token: Not a function: [object Object]\nawait axios({ method: 'GET', url: 'https://graph.microsoft.com/v1.0/me', headers: { Authorization: 'Bearer ' + token, },Error: Failed to validate token: ‘get’ is not a function\nawait axios.get('https://graph.microsoft.com/v1.0/me', { headers: { Authorization: 'Bearer ' + token } });", "username": "Rajan_Braiya" }, { "code": "headersexports = async function (payload) {\n const { token } = payload;\n await context.http.get({\n url: 'https://graph.microsoft.com/v1.0/me',\n headers: {\n Authorization: [ \"Bearer \" + token ],\n \"Content-Type\": [ \"application/json\" ],\n },\n });\n};\n", "text": "Hi @Rajan_Braiya,The values in the headers documents must be arrays. See https://www.mongodb.com/docs/atlas/app-services/services/http-actions/http.get/#parameters for reference. So the correct syntax would be", "username": "Kiro_Morkos" }, { "code": "\n> result: \n{\n \"$undefined\": true\n}\n> result (JavaScript): \nEJSON.parse('{\"$undefined\":true}')\nconst checkToken = async (req, res, next) => {\n try {\n const { data } = await axios.get(\"https://graph.microsoft.com/v1.0/me\", {\n headers: {\n Authorization: `Bearer ${bearerToken}`,\n },\n });\n req.user = data;\n next();\n } catch (error) {\n console.error(error);\n res.status(500).json({ error: \"Error while retrieving user information\" });\n }\n};\n", "text": "Hi @Kiro_Morkos, Thank you very much for your reply. I am first time using mongoDB function,using your code solve that error. but still don’t know why its not returning the data, current received output:I am already using this function in express and working fine. node-js Code:", "username": "Rajan_Braiya" }, { "code": "const res = await context.http.get( ... );\nconst body = JSON.parse(res.body.text());\nreturn body;\ntrycatch", "text": "Given the snippet that you shared in the original post, it appears that you’re not returning anything from the function. If you want to return the result of the HTTP request, you could do something like the following:You’ll also want to ensure you implement proper error handling (wrap the request in try/catch, check the returned status code, etc.)", "username": "Kiro_Morkos" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Http request: "headers" argument must be a object containing only string keys and string array values
2023-06-21T13:38:42.996Z
Http request: “headers” argument must be a object containing only string keys and string array values
986
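Pulling the pieces of this thread together, a sketch of the full custom-auth function with array-valued headers, a status check, and body parsing; treating user.id as the stable identifier is an assumption about the Graph API response:

    exports = async function (payload) {
      const { token } = payload;
      const res = await context.http.get({
        url: "https://graph.microsoft.com/v1.0/me",
        headers: {
          // header values must be arrays of strings
          Authorization: ["Bearer " + token],
          "Content-Type": ["application/json"],
        },
      });
      if (res.statusCode !== 200) {
        throw new Error("Failed to validate token: HTTP " + res.statusCode);
      }
      // the body comes back as binary; decode and parse it before use
      const user = JSON.parse(res.body.text());
      return user.id; // custom auth must return a unique, stable id
    };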
null
[]
[ { "code": "", "text": "I have an organization with a few projects and need enhanced pro support only for the production projects, not the development ones.\nby now I have managed to add support plans only by organization level and as this is charged by a percent of the entire bill it has a high impact on my costs.\nThe question is if there is an option to set the support plans per group.", "username": "Yehsuf" }, { "code": "", "text": "Hello @Yehsuf ,Welcome to The MongoDB Community Forums! Please contact the Atlas support team via the in-app chat to investigate any operational and billing issues or queries related to your Atlas account. You can additionally raise a support case if you have a support subscription. The community forums are for public discussion and we cannot help with service or account / billing enquiries.Best Regards,\nTarun", "username": "Tarun_Gaur" } ]
MongoDB support plan subscription on project level
2023-06-19T15:31:18.867Z
MongoDB support plan subscription on project level
522
null
[ "replication" ]
[ { "code": "", "text": "Hi Team,I am using MongoDB 5.0 Version for my project and using Ubuntu 20.04 LTS, The issue I am facing in Replication of MongoDB.Basically I have 2 Ubuntu Servers instead of 3 servers ,with 3 mongod services. In Primary one mongo which is running on 27017, and In secondary it’s using 27017 & 27018, So totally 3 mongod services are running. All are connected in the same network, services also running fine.I’ve initiated MongoDB replica from primary server with high priority value 3, and secodary servers mongod having priority 2, 1 respectively.Now I’ll explain the scenario, I am testing automatic failover. When both the servers are UP, I am manually shutting DOWN primery server, after 2 to 4 secs secondary server changing his mode to primary and data also inserting into db, here no issues.But, When both the servers are UP, I am shutting DOWN secondary server, In this test case, after few seconds primary server changing his behavior to SECONDARY, I am clueless now. When I power on secondary server, Primary changing to Primary DB. I don’t know whats happening and I am stuck with this now.Kindly help me to solve this scenario, Please let me know for further info.Expecting positive response, thanks in advance.", "username": "Vignesh_Ganesan" }, { "code": "", "text": "am shutting DOWN secondary serverare you shutting down the ubuntu server or one of the mongod processes?", "username": "Kobe_W" }, { "code": "", "text": "From the observed behaviour, I suspect the same as you suspect. That is, he is shutting down the second Ubuntu machine, so he is shutting down 2 mongod processes, so he does not have majority, so he does not have a primary.", "username": "steevej" }, { "code": "", "text": "Apologize for the confusion, here I testing with two scenarios. One is I am powering of the secondary server, another one is just unplugging the ethernet cable of secondary server.", "username": "Vignesh_Ganesan" }, { "code": "", "text": "Exactly, but the thing is I am not touching primary sever, while primary server is UP, I am powering off / unplugging the ethernet.", "username": "Vignesh_Ganesan" }, { "code": "", "text": "Read about replica set election to understand why the single mongod running on your main Ubuntu server cannot be PRIMARY once it cannot reach the 2 other mongod instances running on the second Ubuntu server what ever you shut it down, you disconnect it from the network or it crashes.You will also understand why one of the mongod running on the second Ubuntu server becomes the PRIMARY when you unplug the network cable.Apologize for the confusionYes confusion because you did not mentionedjust unplugging the ethernet cable of secondary serverin the original post.", "username": "steevej" }, { "code": "", "text": "So, Is it not possible to implement Replica using two servers? Even though MongDB official doc mentioned need three servers, but I am using two servers. Sorry for too many questions.", "username": "Vignesh_Ganesan" }, { "code": "", "text": "Is it not possible to implement Replica using two servers?It is NOT what I wrote. Documentation says three and you try with two. It works but NOT in the way you think it does. It works like it is documented. If there is a majority then you have a primary. If 1 instance on one server cannot read any of the other two instances, it has not majority and does not become primary. The other 2 instances being on the same hardware have majority and one will become primary.", "username": "steevej" } ]
Database Replication Failing
2023-06-20T16:37:47.294Z
Database Replication Failing
641
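To make the election math in this thread concrete: with three voting members spread across three machines, any single machine can fail and the two survivors still form a majority. A sketch of such a configuration in mongosh, matching the priorities described above (hostnames are placeholders):

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "ubuntu-a:27017", priority: 3 },
        { _id: 1, host: "ubuntu-b:27017", priority: 2 },
        { _id: 2, host: "ubuntu-c:27017", priority: 1 }
      ]
    })
    // A primary needs a majority (2 of 3) of voting members reachable.
    // Two mongod processes on one machine fail together, which is why the
    // two-server layout in this thread loses its primary the way it does.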
null
[ "python", "monitoring", "storage" ]
[ { "code": "serverStatus'storageEngine': {\n 'name': 'wiredTiger',\n 'supportsCommittedReads': True,\n ...\n}\n", "text": "Hello Team,\nI am using MongoDB Atlas 6, and the storage engine is wiredTiger.I am using a Python driver for MongoDB and on running the serverStatus command, the result has a storage engine field but not a wiredTiger field, also inside the storageEngine field, the name field does mention that wiredTiger engine is being used -I wanted to access the wiredTiger field to check the available read and write tickets.\nCan anyone help me with this? Thanks", "username": "Sarthak_Girotra" }, { "code": "serverStatus", "text": "Some commands are limited or unsupported in Atlas, this can also vary between the free/shared and dedicated tiers. There are also other limitations between the free/shared and dedicated tiers that you may want to be aware of, links below for both.serverStatus is limited in its response fields in the free and shared tiers.", "username": "chris" }, { "code": "", "text": "Thanks a lot Chris! I was using the M0 cluster which does not have the wiredTiger field.", "username": "Sarthak_Girotra" }, { "code": "", "text": "I get that serverStatus commands has limitations on the M0 cluster, but is there any other way i can access the available read and write tickets?", "username": "Sarthak_Girotra" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
db.serverStatus().wiredTiger not shown but db.serverStatus().storageEngine is shown and has name as wiredTiger
2023-06-20T07:28:49.605Z
db.serverStatus().wiredTiger not shown but db.serverStatus().storageEngine is shown and has name as wiredTiger
642
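For reference, on a dedicated tier (M10+) or a self-managed MongoDB 6.0 deployment the field is present, so a check along these lines works in mongosh (the numbers are illustrative):

    const status = db.serverStatus()
    status.wiredTiger.concurrentTransactions
    // {
    //   write: { out: 0, available: 128, totalTickets: 128 },
    //   read:  { out: 1, available: 127, totalTickets: 128 }
    // }

On the shared tiers the wiredTiger section is omitted from the serverStatus response, so ticket counts are simply not exposed there.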
https://www.mongodb.com/…6_2_1024x576.png
[ "leeds-mug" ]
[ { "code": "Head of Architecture, Universal Credit DWP DigitalSenior Solutions Architect", "text": "_London MUG - Design (1)1920×1080 215 KBJoin us for a meet-up on Thursday 30th November at Hippo Digital to network with other developers over food and drinks.We will kick off the event with some networking and fun! Next, you will hear from Sam Redman, Solutions Architect at MongoDB in his session “Lacking Trust? Use Client Side Field Level Encryption” to learn about how to use Client Side Field Level Encryption to encrypt data between the server and the application.Whether you have regulatory requirements or need the added layer of security, don’t make your application go through a breakup. Let us show you how to use Client Side Field Level Encryption to encrypt data between the server and the application.We’ll also then have a session from Paul Brennan, Head of Architecture, Universal Credit DWP Digital titled: ‘Tech Evolution at Scale - a look inside Universal Credit.’ Paul will provide insights into how Universal Credit has leveraged MongoDB to improve its digital infrastructure and modernize its technology stack.We will close the event with a quick fun trivia and MongoDB Swags to win!To RSVP - Please click on the “✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green highlighted button if you are going. You need to be signed in to access the button.You could RSVP for the event on meetup.com as well here Data Localisation and Client Side Field Level Encryption, Thu, Nov 30, 2023, 6:00 PM | MeetupEvent Type: In-Person\nLocation: Hippo Digital, 24-26 Aire Street · LeedsHead of Architecture, Universal Credit DWP Digital–Screenshot 2023-10-27 at 12.30.19 (1)924×918 74.7 KBSenior Solutions Architect", "username": "Harshit" }, { "code": "", "text": "Hi All,Unfortunately, we have decided to postpone this meet-up until later in the year after the summer holiday period. We really appreciate your interest and hope you’ll be able to make the new date.", "username": "Harshit" } ]
MUG Leeds: Data Localisation and Client Side Field Level Encryption
2023-06-14T20:04:35.966Z
MUG Leeds: Data Localisation and Client Side Field Level Encryption
3,193
null
[ "flutter" ]
[ { "code": "", "text": "Is it a bad idea to open the realm when I enter a certain screen (where it is actually used) and close it when the screen is popped? Is there any performance implications other than the latency of opening the realm on entering the screen? I must mention that I have the realm encrypted too.In another note, What if I did that in another isolate? What is the best practice to using realms in isolates?", "username": "Ahmed_Aboelyazeed" }, { "code": "realm.write()", "text": "You can open the realm as needed in a specific screen, but you are right that keeping it open and using it will lead to a better performance. Keeping the realm open is preferred. Al;so when a realm.write() call completes your objects are already persisted so you don’t need to open/close the realm for that.We advise to use the main isolate when using Realm. Realm is designed to be used in the main isolate with same for using the Realm objects where needed. You can work with Realm from any isolate but this means you can not work with the Realm objects in the main isolate and you can not get notification changes etc.", "username": "Lyubomir_Blagoev" }, { "code": "", "text": "Thanks for your answer! I’m actually concerned with memory usage. Do you have any advice for me? Do you think I should make the trade off (close the realm and release memory fast I mean)? I could just warm up (open the realm just before it is used to minimize opening time).", "username": "Ahmed_Aboelyazeed" }, { "code": "realm.write", "text": "Realm is designed to be used on mobile devices with as little memory as possible. All object properties are lazily read and not kept in memory. Just use the Realm in the main isolate , preferably opening it only once. You can pass around Realm objects and if you want to update them just wrap the setters in a realm.write call. This is the most optimal way of using Realm.\nClosing eagerly the Realm will not lead to saving much memory, if at all.", "username": "Lyubomir_Blagoev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is the best practice regarding opening, closing, and creating realms?
2023-06-21T10:52:28.021Z
What is the best practice regarding opening, closing, and creating realms?
736
null
[ "android" ]
[ { "code": "", "text": "We have around 4 to 15 devices running at each location and when we save, we want it distributed across those devices and NOT sync to the cloud especially if the site internet WAN is out. We want to sync with cloud sometimes but more likely want to control it ourselves.QUESTION:\nDoes realm support write/read to distributed android devices?\nDoes ream allow us to not sync with cloud and do this ourselves?\nIf we turn on sync with cloud, how much control do we have?thanks,\nDean", "username": "Dean_Hiller" }, { "code": "", "text": "Hi @Dean_Hiller,Does realm support write/read to distributed android devices?YesDoes ream allow us to not sync with cloud and do this ourselves?Not at this time: keep an eye on tomorrow’s .local announcements, however, as there’s been work done in that area that may be of interest.", "username": "Paolo_Manna" }, { "code": "", "text": "Hi @Dean_Hiller ,Apparently the functionality is already in Private Preview, and you can register your interest!", "username": "Paolo_Manna" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does realm have device to device synchronization?
2023-06-21T05:18:27.733Z
Does realm have device to device synchronization?
670
null
[ "connector-for-bi" ]
[ { "code": "", "text": "Hello, in the official documentation it says that the MongoDB Connector for BI is free for evaluation. What does that mean exactly?, am I able to connect to SSRS through ODBC just by downloading the free version to test it out? or am I limited?. Basically what are \"for evaluation’s limitations?.\nThanks in advance.\n-Mike.", "username": "Mike289" }, { "code": "", "text": "Hi @Mike289 Welcome to the community. I am the product manager for the BI Connector and the new Atlas SQL Interface. I am unaware of an evaluation version. But I will try to assist you. First, is your database in Atlas or On-Premise?\nAlso, can you point me to the documentation you saw that made this statement? I went through our tech docs but didn’t see this line. https://www.mongodb.com/docs/bi-connector/current/what-is-the-bi-connector/Thanks in advance,\nAlexi Antonino", "username": "Alexi_Antonino" } ]
MongoDB BI Connectors
2023-06-15T05:31:01.021Z
MongoDB BI Connectors
725
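For anyone evaluating the on-premise route discussed here: the BI Connector runs as a separate mongosqld process that exposes a MySQL-wire endpoint, and the ODBC driver (and hence SSRS) connects to that endpoint rather than to mongod directly. A rough local invocation, assuming default settings:

    mongosqld --mongo-uri "mongodb://localhost:27017"
    # mongosqld listens on 127.0.0.1:3307 by default; point the MongoDB
    # ODBC driver's DSN at that host and port.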
null
[ "production", "c-driver" ]
[ { "code": "mongoc_bulk_operation_new", "text": "Announcing 1.24.1 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.No changes since 1.24.0. Version incremented to match the libmongoc version.Fixes:Thanks to everyone who contributed to this release.", "username": "Kevin_Albertson" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB C Driver 1.24.1 Released
2023-06-21T12:39:12.270Z
MongoDB C Driver 1.24.1 Released
565
null
[ "aggregation", "time-series" ]
[ { "code": "TimeStamp:2023-05-10T22:34:52.875+00:00\nHeater2DutyC:0\nCellPressure:0\n_id: ObjectID('645c1e066b393fd1d7f1e9f8')\netc\n{\n \"$bucket\": {\n \"groupBy\": \"$TimeStamp\",\n \"boundaries\": [\n ISODate(\"2023-05-24T21:44:09.222Z\"),\n ISODate(\"2023-05-24T21:44:10.222Z\"),\n ISODate(\"2023-05-24T21:44:11.222Z\"),\n ISODate(\"2023-05-24T21:44:12.222Z\"),\n ISODate(\"2023-05-24T21:44:13.222Z\"),\n ISODate(\"2023-05-24T21:44:14.222Z\"),\n ISODate(\"2023-05-24T21:44:15.222Z\"),\n ISODate(\"2023-05-24T21:44:16.222Z\"),\n ISODate(\"2023-05-24T21:44:17.222Z\"),\n ISODate(\"2023-05-24T21:44:18.222Z\"),\n ISODate(\"2023-05-24T21:44:19.222Z\"),\n ISODate(\"2023-05-24T21:44:20.222Z\"),\n ISODate(\"2023-05-24T21:44:21.222Z\"),\n ISODate(\"2023-05-24T21:44:22.222Z\"),\n ISODate(\"2023-05-24T21:44:23.222Z\"),\n ISODate(\"2023-05-24T21:44:24.222Z\"),\n ISODate(\"2023-05-24T21:44:25.222Z\"),\n ISODate(\"2023-05-24T21:44:26.222Z\"),\n ISODate(\"2023-05-24T21:44:27.222Z\"),\n ISODate(\"2023-05-24T21:44:28.222Z\"),\n ISODate(\"2023-05-24T21:44:29.222Z\"),\n ISODate(\"2023-05-24T21:44:30.222Z\"),\n ISODate(\"2023-05-24T21:44:31.222Z\"),\n ISODate(\"2023-05-24T21:44:32.222Z\"),\n ISODate(\"2023-05-24T21:44:33.222Z\"),\n ISODate(\"2023-05-24T21:44:34.222Z\"),\n ISODate(\"2023-05-24T21:44:35.222Z\"),\n ISODate(\"2023-05-24T21:44:36.222Z\"),\n ISODate(\"2023-05-24T21:44:37.222Z\"),\n ISODate(\"2023-05-24T21:44:38.222Z\"),\n ISODate(\"2023-05-24T21:44:39.222Z\"),\n ISODate(\"2023-05-24T21:44:40.222Z\"),\n ISODate(\"2023-05-24T21:44:41.222Z\"),\n ISODate(\"2023-05-24T21:44:42.222Z\"),\n ISODate(\"2023-05-24T21:44:43.222Z\"),\n ISODate(\"2023-05-24T21:44:44.222Z\"),\n ISODate(\"2023-05-24T21:44:45.222Z\"),\n ISODate(\"2023-05-24T21:44:46.222Z\"),\n ISODate(\"2023-05-24T21:44:47.222Z\"),\n ISODate(\"2023-05-24T21:44:48.222Z\"),\n ISODate(\"2023-05-24T21:44:49.222Z\"),\n ISODate(\"2023-05-24T21:44:50.222Z\"),\n ISODate(\"2023-05-24T21:44:51.222Z\"),\n ISODate(\"2023-05-24T21:44:52.222Z\"),\n ISODate(\"2023-05-24T21:44:53.222Z\"),\n ISODate(\"2023-05-24T21:44:54.222Z\"),\n ISODate(\"2023-05-24T21:44:55.222Z\"),\n ISODate(\"2023-05-24T21:44:56.222Z\"),\n ISODate(\"2023-05-24T21:44:57.222Z\"),\n ISODate(\"2023-05-24T21:44:58.222Z\"),\n ISODate(\"2023-05-24T21:44:59.222Z\"),\n ISODate(\"2023-05-24T21:45:00.222Z\"),\n ISODate(\"2023-05-24T21:45:01.222Z\"),\n ISODate(\"2023-05-24T21:45:02.222Z\"),\n ISODate(\"2023-05-24T21:45:03.222Z\"),\n ISODate(\"2023-05-24T21:45:04.222Z\"),\n ISODate(\"2023-05-24T21:45:05.222Z\"),\n ISODate(\"2023-05-24T21:45:06.222Z\"),\n ISODate(\"2023-05-24T21:45:07.222Z\"),\n ISODate(\"2023-05-24T21:45:08.222Z\"),\n ISODate(\"2023-05-24T21:45:09.222Z\"),\n ISODate(\"2023-05-24T21:45:10.222Z\")\n ],\n \"default\": \"overflow\",\n \"output\": {\n \"count\": {\n \"$sum\": 1\n },\n \"Average\": {\n \"$avg\": \"$ConcentrationNO2\"\n },\n \"TimeStamp\": {\n \"$push\": \"$TimeStamp\"\n }\n }\n }\n}\n\n{\n \"name\": \"crdsperiodic\",\n \"type\": \"timeseries\",\n \"options\": {\n \"expireAfterSeconds\": 15552000,\n \"timeseries\": {\n \"timeField\": \"TimeStamp\",\n \"granularity\": \"seconds\",\n \"bucketMaxSpanSeconds\": 3600\n }\n }\n}\n", "text": "I have per second data in a time series collection like this:I’m pulling data out using aggregating bucket like this:BsonElementExcept I can have up to 400 aggregations.The problem is that in tests (I’m hoping to switch from MS SQL Server) the data retrieval is slower in MongoDB than in SQL Server. 
This is especially noticeable as the amount of aggregation increases.My MongoDB Collection is defined as a time series with second intervals (data is in second intervals ± a few ms).Any idea what’s going on here?Chris", "username": "Chris_Swainson" }, { "code": "", "text": "Hello @Chris_Swainson,Thank you for reaching out to the MongoDB Community forums!Based on the aggregation pipeline you shared, it seems that you are trying to bucket the data with consecutive second intervals. However, to better understand the issue, could you please provide more details about your requirements and the expected output result?It would be helpful if you could share the expected output example document. Additionally, please let us know which version of MongoDB you are using and the deployment environment (e.g., on-prem, MongoDB Atlas, or local).Except I can have up to 400 aggregations.Regarding your statement about having up to 400 aggregations, could you please clarify what exactly you mean? Any additional context or examples would be helpful.This is especially noticeable as the amount of aggregation increases.Could you please give an example of the scenario that you have in mind?Also, it would be helpful to know the volume of data you are working with and any indexes you have set up.To read more about the Time Series, please refer to the following resources:Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "{\n version: '6.0.5',\n gitVersion: 'c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d',\n targetMinOS: 'Windows 7/Windows Server 2008 R2',\n modules: [],\n allocator: 'tcmalloc',\n javascriptEngine: 'mozjs',\n sysInfo: 'deprecated',\n versionArray: [ 6, 0, 5, 0 ],\n openssl: { running: 'Windows SChannel' },\n buildEnvironment: {\n distmod: 'windows',\n distarch: 'x86_64',\n cc: 'cl: Microsoft (R) C/C++ Optimizing Compiler Version 19.31.31107 for x64',\n ccflags: '/nologo /WX /FImongo/platform/basic.h /fp:strict /EHsc /W3 /wd4068 /wd4244 /wd4267 /wd4290 /wd4351 /wd4355 /wd4373 /wd4800 /wd4251 /wd4291 /we4013 /we4099 /we4930 /errorReport:none /MD /O2 /Oy- /bigobj /utf-8 /permissive- /Zc:__cplusplus /Zc:sizedDealloc /volatile:iso /diagnostics:caret /std:c++17 /Gw /Gy /Zc:inline',\n cxx: 'cl: Microsoft (R) C/C++ Optimizing Compiler Version 19.31.31107 for x64',\n cxxflags: '/TP',\n linkflags: '/nologo /DEBUG /INCREMENTAL:NO /LARGEADDRESSAWARE /OPT:REF',\n target_arch: 'x86_64',\n target_os: 'windows',\n cppdefines: 'SAFEINT_USE_INTRINSICS 0 PCRE_STATIC NDEBUG BOOST_ALL_NO_LIB _UNICODE UNICODE _SILENCE_CXX17_ALLOCATOR_VOID_DEPRECATION_WARNING _SILENCE_CXX17_OLD_ALLOCATOR_MEMBERS_DEPRECATION_WARNING _SILENCE_CXX17_CODECVT_HEADER_DEPRECATION_WARNING _SILENCE_ALL_CXX20_DEPRECATION_WARNINGS _CONSOLE _CRT_SECURE_NO_WARNINGS _ENABLE_EXTENDED_ALIGNED_STORAGE _SCL_SECURE_NO_WARNINGS _WIN32_WINNT 0x0A00 BOOST_USE_WINAPI_VERSION 0x0A00 NTDDI_VERSION 0x0A000000 BOOST_THREAD_VERSION 5 BOOST_THREAD_USES_DATETIME BOOST_SYSTEM_NO_DEPRECATED BOOST_MATH_NO_LONG_DOUBLE_MATH_FUNCTIONS BOOST_ENABLE_ASSERT_DEBUG_HANDLER BOOST_LOG_NO_SHORTHAND_NAMES BOOST_LOG_USE_NATIVE_SYSLOG BOOST_LOG_WITHOUT_THREAD_ATTR ABSL_FORCE_ALIGNED_ACCESS'\n },\n bits: 64,\n debug: false,\n maxBsonObjectSize: 16777216,\n storageEngines: [ 'devnull', 'ephemeralForTest', 'wiredTiger' ],\n ok: 1\n}\n\"timeseries\": {\n \"timeField\": \"TimeStamp\",\n", "text": "Hi Kushagra,Thanks for getting back to me promptly.Version:It’s running locally. Same machine as SQL.Basically I’m storing per second time series data. 
I have to pull out data for different time periods. Sometimes up to one week but normally just a few hours. However only ever require a maximum of 400 data points. Thus if the timespan is 4000 seconds each retrieved value becomes an average of 10 seconds. If the timespan is 40000 seconds then each data point becomes an average of 100 seconds etc. Note that if the timespan is 1 minute then 60 per second data points are pulled out (as it’s 400 max). In SQL I had SQL code that did all these averaging aggregations. Also note that in SQL Server I was storing everything in one table.As for indexes. This is all I have:I’ve looked through the documentation but nothing really strikes me as being helpful.I hope that further info helps.Chris", "username": "Chris_Swainson" }, { "code": "", "text": "Hello @Chris_Swainson,Thank you for providing the information about your current setup. However, to better understand the problem and provide relevant suggestions, It would be helpful if you could share the schema or table structure of your SQL database, as well as some example data since apparently you’re more familiar with the SQL solution and less familiar with MongoDB.Additionally, it would be helpful if you could provide the SQL query you are currently using to retrieve the desired results.Having this information will allow us to understand your issue more effectively and provide you with appropriate recommendations.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "CREATE TABLE [dbo].[HighFreq](\n\t[TimeStamp] [datetime] NOT NULL,\n\t[Heater1Temp] [float] NULL,\n\t[Heater2Temp] [float] NULL,\n\t[Heater1DutyC] [float] NULL,\n\t[Heater2DutyC] [float] NULL,\n\t[ChassisPressure] [float] NULL\n\t--more variables here following same pattern\nPRIMARY KEY CLUSTERED \n(\n\t[TimeStamp] ASC\n)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]\n) ON [PRIMARY]\nGO\nSQL = \"SELECT Average,Timestamp FROM( \" +\n \"SELECT \" +\n \"ROW_NUMBER() OVER( \" +\n \"ORDER BY [TimeStamp]) as Rownum, \" +\n $\"AVG({sVariable}) \" +\n $\"OVER(ORDER BY[TimeStamp] ROWS BETWEEN {mod / 2} PRECEDING AND {mod / 2} FOLLOWING) \" +\n \"AS Average, [TimeStamp] FROM[Aether].[dbo].[HighFreq] WHERE[TimeStamp] \" +\n $\">= '{tmpStartDate}' AND[TimeStamp] <= '{tmpEndDate}' \" +\n \") as\" +\n \"firstQuery \" +\n \"WHERE \" +\n $\"Rownum % {mod} = 0 \" +\n \"ORDER BY TimeStamp DESC\";\nvar matchStage = new BsonDocument\n{\n {\n \"$match\", new BsonDocument\n {\n {\"TimeStamp\", new BsonDocument\n {\n {\"$gte\", new DateTime(2023, 1, 1, 0, 0, 0, DateTimeKind.Utc)},\n {\"$lte\", new DateTime(2023, 3, 31, 23, 59, 59, DateTimeKind.Utc)}\n }\n }\n }\n }\n};\n\nvar sortStage = new BsonDocument\n{\n {\n \"$sort\", new BsonDocument\n {\n {\"TimeStamp\", 1}\n }\n }\n};\n\nvar bucketStage = new BsonDocument\n{\n {\n \"$bucket\", new BsonDocument\n {\n {\"groupBy\", \"$TimeStamp\"},\n {\"boundaries\", new BsonArray(GetBoundaries(X))},\n {\"default\", \"overflow\"},\n {\n \"output\", new BsonDocument\n {\n {\"count\", new BsonDocument {{\"$sum\", 1}}},\n {\"Average\", new BsonDocument {{\"$avg\", \"$Heater1Temp\"}}},\n {\"TimeStamp\", new BsonDocument {{\"$push\", \"$TimeStamp\"}}}\n }\n }\n }\n }\n};\n\nvar projectStage = new BsonDocument\n{\n {\n \"$project\", new BsonDocument\n {\n {\"_id\", 0},\n {\"TimeStamp\", new BsonDocument {{\"$first\", \"$TimeStamp\"}}},\n {\"Average\", 1}\n }\n }\n};\n\nvar finalSortStage = 
new BsonDocument\n{\n {\n \"$sort\", new BsonDocument\n {\n {\"TimeStamp\", -1}\n }\n }\n};\n\nvar pipeline = new[]\n{\n matchStage,\n sortStage,\n bucketStage,\n projectStage,\n finalSortStage\n};\n\nvar result = collection.Aggregate<BsonDocument>(pipeline).ToList();\n", "text": "Sure, here is the table:This is a SQL query that does the averaging I require.Also I should note in Mongo I’m doing two sortings. I’m wondering on time series data stored sequentially if these are really required:Hope that helps.Chris", "username": "Chris_Swainson" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Slow, Poor performance
2023-05-24T22:00:40.846Z
Slow, Poor performance
943
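One idiomatic alternative on MongoDB 5.0+ to enumerating hundreds of $bucket boundaries is $dateTrunc with binSize, which lets the server compute each document's bin; a sketch against the collection above, assuming a 10-second bin (the span divided by 400, rounded up):

    const start = ISODate("2023-05-24T21:44:09.222Z")
    const end = ISODate("2023-05-24T22:50:49.222Z")
    db.crdsperiodic.aggregate([
      { $match: { TimeStamp: { $gte: start, $lte: end } } },
      {
        $group: {
          // truncate each timestamp down to its 10-second bin
          _id: { $dateTrunc: { date: "$TimeStamp", unit: "second", binSize: 10 } },
          Average: { $avg: "$Heater1Temp" },
          count: { $sum: 1 }
        }
      },
      { $sort: { _id: -1 } }
    ])

This also avoids the two $sort stages over the raw documents that the poster wondered about, since $group does not depend on input order.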
null
[ "aggregation", "java", "compass", "atlas-cluster" ]
[ { "code": "Connection refused' on server cluster0-shard-00-01.sbboc.mongodb.net:27017. The full response is { \"ok\" : 0.0, \"errmsg\" : \"PlanExecutor error during aggregation :: caused by :: Remote error from mongot :: caused by :: Error connecting to localhost:28000 (127.0.0.1:28000) :: caused by :: Connection refused\", \"code\" : 6, \"codeName\" : \"HostUnreachable\", \"$clusterTime\" : { \"clusterTime\" : { \"$timestamp\" : { \"t\" : 1686741884, \"i\" : 9 } }, \"signature\" : { \"hash\" : { \"$binary\" : \"1nec9xxjW91akgRA3Q0fSBeIxoE=\", \"$type\" : \"00\" }, \"keyId\" : { \"$numberLong\" : \"7212675207377453057\" } } }, \"operationTime\" : { \"$timestamp\" : { \"t\" : 1686741884, \"i\" : 9 } } }\n at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:179)\n", "text": "Getting following error when run query on index pipeline, although it works from Compass", "username": "Shopi_Ads" }, { "code": "$search$searchmongotMongoError: Remote error from mongot :: caused by :: Error connecting to localhost:28000.\nmongot", "text": "Hello @Shopi_Ads ,Getting following error when run query on index pipeline, although it works from CompassI note you have written that it works on Compass. Can you confirm you’re running the exact same $search query for both?As per this documentation on Troubleshooting Search.The following error is returned if you run $search queries when the Atlas Search mongot process isn’t installed or running:The mongot process is installed only when the first Atlas Search index is defined. If you don’t have any Atlas Search index in your Atlas cluster, create at least one Atlas Search index to resolve this error.To learn how to create Atlas Search Index, please referGet started quickly with Atlas Search by first creating an Atlas Search index using the Atlas UI, Atlas Search API, or Atlas CLI.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "thanks its resolved. developer was connecting to different cluster.", "username": "Shopi_Ads" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
connection-refused-from-Spring-Data-when-using-Atlas-search
2023-06-14T14:50:13.950Z
connection-refused-from-Spring-Data-when-using-Atlas-search
666
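For anyone hitting the same HostUnreachable on localhost:28000 against the intended cluster: a quick sanity check is a trivial $search on a collection that has at least one Atlas Search index defined (collection and index names below are placeholders):

    db.movies.aggregate([
      {
        $search: {
          index: "default",
          text: { query: "test", path: { wildcard: "*" } }
        }
      },
      { $limit: 1 }
    ])

If this also fails, either no search index exists on that cluster yet, or, as here, the client is pointed at a different cluster than the one holding the index.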
null
[ "aggregation" ]
[ { "code": "Document 1\n{\n _id:\n {\n \"$oid\": \"64678a20d22348271c05f102\"\n },\n taskPredictions: [\n {\n id: 2900868,\n predictedCompletionStatus: \"Early\",\n priority: 1,\n },\n {\n id: 2900488,\n predictedCompletionStatus: \"OnTimeInFull\",\n priority: 4,\n },\n ],\n linkId:\n {\n \"$oid\": \"64678a20d22348271c05f101\"\n },\n}\n\nDocument 2\n{\n _id:\n {\n \"$oid\": \"64678a27d22348271c05f10a\"\n },\n tasks: [\n {\n id: 2900868,\n name: \"Task 1\",\n\n },\n {\n id: 2900488,\n name: \"Task 2\",\n\n },\n ],\n\n \"linkId\":\n {\n \"$oid\": \"64678a20d22348271c05f101\"\n }\n}\n{\n tasks: [\n {\n id: 2900868,\n name: \"Task 1\",\n predictedCompletionStatus: \"Early\",\n priority: 1, \n },\n {\n id: 2900488,\n name: \"Task 2\",\n predictedCompletionStatus: \"OnTimeInFull\",\n priority: 4, \n },\n ]\n}\n", "text": "Hi,I have two documents which I can join using $lookup using field linkId. Once joined I needed to be able to join the netsed array from both documents on the id field of the items in the array. Is it possible to do this?the output I’m trying to achieve is like thisIf someone could point me in the right direction I’d appreciate it.", "username": "mc_m0ng0" }, { "code": "\"collection\"[\n { '$unwind': '$tasks' },\n {\n '$lookup': {\n from: 'collection',\n as: 'test',\n pipeline: [ { '$unwind': '$taskPredictions' } ],\n foreignField: 'linkId',\n localField: 'linkId'\n }\n },\n {\n '$addFields': {\n combinedTaskDetails: {\n '$map': {\n input: '$test',\n as: 'test',\n in: {\n '$cond': {\n if: { '$eq': [ '$$test.taskPredictions.id', '$tasks.id' ] },\n then: {\n tasksv2: {\n id: '$tasks.id',\n name: '$tasks.name',\n predictedCompletionStatus: '$$test.taskPredictions.predictedCompletionStatus',\n priority: '$$test.taskPredictions.priority'\n }\n },\n else: null\n }\n }\n }\n }\n }\n },\n {\n '$group': {\n _id: '$linkId',\n tasks: { '$push': '$combinedTaskDetails.tasksv2' }\n }\n }\n]\n[\n {\n _id: ObjectId(\"64678a20d22348271c05f101\"),\n tasks: [\n [\n {\n id: 2900868,\n name: 'Task 1',\n predictedCompletionStatus: 'Early',\n priority: 1\n }\n ],\n [\n {\n id: 2900488,\n name: 'Task 2',\n predictedCompletionStatus: 'OnTimeInFull',\n priority: 4\n }\n ]\n ]\n }\n]\n$match$lookup", "text": "Hi @mc_m0ng0,I have two documents which I can join using $lookup using field linkId.In my below example, I assume they belong to the same collection \"collection\" but you can alter and test it accordingly.Pipeline:Output (based on the sample documents you provided):Is it possible to do this?It seems it is possible based off the sample documents you provided. However, although the above may give you your desired output, it is far from optimal in terms of performance. You could possibly try adding in $match stage at the start with index usage which could help reduce the amount of documents processed to help out to some degree.In saying so, if you’re joining these using $lookup initially, is there a reason you don’t just insert these into a single collection in the desired format in the first place?There could be another way to get the desired output but will let other community members chime in as well.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thanks. The documents are stored separately as one is calculated from the other and this data is not always needed. The data I provided is just a small sample similar to the structure of my problem data. If I try to store all the data in one document I will potentially hit the 16MB limit. 
I use match and filter to reduce the data sets before processing.Your solution has given me some ideas but the $map only seems to work if both nested arrays are sorted on the same field. Any ideas how to work around this as I can’t seem to work out how to sort the arrays on the field I want to join them on. Really what I want in this case is an inner join on the nested arrays.", "username": "mc_m0ng0" } ]
Join documents and then join arrays in documents
2023-06-19T13:00:33.946Z
Join documents and then join arrays in documents
374
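On the open question above about joining the nested arrays without pre-sorting them: a $filter inside $map matches each task to its prediction by id regardless of array order, so neither array needs sorting. A hedged sketch, assuming the two documents live in collections named tasks and predictions:

    db.tasks.aggregate([
      {
        $lookup: {
          from: "predictions",      // assumed collection name
          localField: "linkId",
          foreignField: "linkId",
          as: "pred"
        }
      },
      // take the matched document's taskPredictions array (or [] if none)
      { $set: { pred: { $ifNull: [{ $first: "$pred.taskPredictions" }, []] } } },
      {
        $project: {
          _id: 0,
          tasks: {
            $map: {
              input: "$tasks",
              as: "t",
              in: {
                $mergeObjects: [
                  "$$t",
                  // inner join by id; the order of either array is irrelevant
                  { $first: { $filter: { input: "$pred", cond: { $eq: ["$$this.id", "$$t.id"] } } } }
                ]
              }
            }
          }
        }
      }
    ])

$mergeObjects keeps each task's id and name and overlays predictedCompletionStatus and priority from the matching prediction; tasks with no match pass through unchanged.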
null
[]
[ { "code": "atlas clusters loadSampleData myAtlasClusterEDUroot@mongodb:~# atlas clusters loadSampleData myAtlasClusterEDU Command \"loadSampleData\" is deprecated, use 'atlas clusters sampleData load' instead Error: https://cloud.mongodb.com/api/atlas/v2/groups/6492b7aa34ee970ad0e652a0/sampleDatasetLoad/myAtlasClusterEDU POST: HTTP 400 (Error code: \"SAMPLE_DATASET_LOAD_IN_PROGRESS\") Detail: A sample dataset load is already in progress for cluster myAtlasClusterEDU in group 6492b7aa34ee970ad0e652a0. Reason: Bad Request. Params: [myAtlasClusterEDU 6492b7aa34ee970ad0e652a0]root@mongodb:~# atlas clusters sampleData load myAtlasClusterEDU Sample Data Job 6492b8e18e114c2f1f3c51f9 created.", "text": "Hi all, just wanted to let you know about this lab.At one point it says to run this command\natlas clusters loadSampleData myAtlasClusterEDUwhich results inroot@mongodb:~# atlas clusters loadSampleData myAtlasClusterEDU Command \"loadSampleData\" is deprecated, use 'atlas clusters sampleData load' instead Error: https://cloud.mongodb.com/api/atlas/v2/groups/6492b7aa34ee970ad0e652a0/sampleDatasetLoad/myAtlasClusterEDU POST: HTTP 400 (Error code: \"SAMPLE_DATASET_LOAD_IN_PROGRESS\") Detail: A sample dataset load is already in progress for cluster myAtlasClusterEDU in group 6492b7aa34ee970ad0e652a0. Reason: Bad Request. Params: [myAtlasClusterEDU 6492b7aa34ee970ad0e652a0]All good if I use the suggested commandroot@mongodb:~# atlas clusters sampleData load myAtlasClusterEDU Sample Data Job 6492b8e18e114c2f1f3c51f9 created.", "username": "Alexandru_47292" }, { "code": "", "text": "Hey @Alexandru_47292,Thank you for surfacing it. We will notify the concerned team about it.If you have any other concerns or questions, please don’t hesitate to reach out.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Outdated command in "Load the Sample Dataset Into Your Atlas Cluster"
2023-06-21T08:51:20.942Z
Outdated command in “Load the Sample Dataset Into Your Atlas Cluster”
797
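As a follow-up, the load is asynchronous: the non-deprecated command returns a job id, and the CLI has sibling subcommands to inspect it, roughly as below (the job id is the one from this thread; check atlas clusters sampleData --help for the exact options on your CLI version):

    atlas clusters sampleData load myAtlasClusterEDU
    atlas clusters sampleData describe 6492b8e18e114c2f1f3c51f9
    atlas clusters sampleData watch 6492b8e18e114c2f1f3c51f9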
null
[ "compass", "swift" ]
[ { "code": "CLLocationManagerCLLocationManagerimport CoreLocation\n\nclass ViewController: UIViewController, CLLocationManagerDelegate {\n\n let locationManager = CLLocationManager()\n\n override func viewDidLoad() {\n super.viewDidLoad()\n\n // Request authorization to use location services\n locationManager.requestWhenInUseAuthorization()\n\n // Set up location manager\n locationManager.delegate = self\n locationManager.desiredAccuracy = kCLLocationAccuracyBestForNavigation\n locationManager.startUpdatingLocation()\n locationManager.startUpdatingHeading()\n }\n\n func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {\n // Get the current location\n guard let currentLocation = locations.last else { return }\n\n // Calculate the distance and bearing to the target location\n let targetLocation = CLLocation(latitude: 37.7749, longitude: -122.4194) // target location (e.g., moving object)\n let distance = currentLocation.distance(from: targetLocation)\n let bearing = currentLocation.bearing(to: targetLocation)\n\n // Rotate the view to point in the direction of the target location\n // (assuming that the view is a compass or map view)\n let heading = locationManager.heading?.trueHeading ?? 0.0 // get the current heading\n let relativeBearing = bearing - heading // calculate the relative bearing\n // rotate the view by relativeBearing degrees\n // (assuming that the view's orientation is measured in degrees clockwise from true north)\n }\n}\n\nCLLocationManagerselflocationManager(_:didUpdateLocations:)locations", "text": "I’m writing this for a local university with students who are working on an iOS project to track the locations of an object at a fixed location. I advised them to ensure that the data models input into Realm match those in the script and be cautioned that further work will be required for the mobile app, such as recalculating things as your device location changes in relation to the object you’re tracking.To track and follow a moving object using Swift, you can use the Core Location framework, specifically the CLLocationManager class. Here’s an example code snippet that shows how to track and follow a moving object using the CLLocationManager:In this example, we create a CLLocationManager object and request authorization to use location services. We set the delegate to self and configure the location manager to use the best available accuracy for navigation. We start updating the location and heading of the device.In the locationManager(_:didUpdateLocations:) method, we get the current location from the locations array. We then calculate the distance and bearing to the target location, which in this case is a fixed location that represents the moving object.We calculate the relative bearing by subtracting the current heading from the bearing to the target location. We then rotate the view by the relative bearing to point in the direction of the target location.Note that this is a simplified example and assumes that the target location is a fixed location. If the target location is a moving object, you’ll need to update its location periodically and recalculate the distance and bearing to it. 
You can do this using a timer or by subscribing to location updates from a remote API.", "username": "Brock" }, { "code": "@objcMembers import RealmSwift\n\n @objcMembers\n class Object: Object {\n dynamic var id = UUID().uuidString\n dynamic var name = \"\"\n dynamic var latitude = 0.0\n dynamic var longitude = 0.0\n }\n\n let config = Realm.Configuration(schemaVersion: 1)\n Realm.Configuration.defaultConfiguration = config\n let realm = try! Realm()\n\n try! realm.write {\n let object1 = Object()\n object1.name = \"Object 1\"\n object1.latitude = 37.7749\n object1.longitude = -122.4194\n realm.add(object1)\n\n let object2 = Object()\n object2.name = \"Object 2\"\n object2.latitude = 37.773972\n object2.longitude = -122.431297\n realm.add(object2)\n }\n let objects = realm.objects(Object.self)\n\n let myLocation = CLLocation(latitude: 37.7749, longitude: -122.4194)\n for object in objects {\n let objectLocation = CLLocation(latitude: object.latitude, longitude: object.longitude)\n let distance = myLocation.distance(from: objectLocation)\n let bearing = myLocation.bearing(to: objectLocation)\n print(\"Object: \\(object.name), Distance: \\(distance)m, Bearing: \\(bearing)°\")\n }\nCLLocationCLLocationdistance(from:)bearing(to:)bearing(to:)", "text": "Just leaving this hereTo use Swift with MongoDB Realm to calculate the locations of objects relative to your location, you can follow these general steps:Set up a MongoDB Realm app and configure it to work with your Swift app.Create a data model in Swift that represents the objects you want to track. You can use the @objcMembers attribute to expose the Swift properties to Objective-C.In this example, we create a CLLocation object for our location and loop through each object in the Realm database. We create a CLLocation object for each object’s location and use the distance(from:) method to calculate the distance between the two locations. We use the bearing(to:) method to calculate the bearing from our location to the object’s location.Note that the bearing(to:) method returns the bearing in degrees clockwise from true north, so you may need to convert it to a different format depending on your use case.Update the UI to display the calculated distances and bearings.You can update the UI of your app to display the calculated distances and bearings. You may also want to add functionality to dynamically update the locations of the objects in the database, as well as your own location, so that the distances and bearings are continually updated in real time.", "username": "Brock" } ]
Location Tracking with Swift
2023-04-11T17:56:58.762Z
Location Tracking with Swift
1,209
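If the tracked object locations are also synced up to Atlas, the proximity part of the posts above can be pushed server-side with a geospatial query; a mongosh sketch, with the collection name assumed and the coordinates being the sample San Francisco point used above (GeoJSON order is [longitude, latitude]):

    db.objects.createIndex({ location: "2dsphere" })
    db.objects.aggregate([
      {
        // $geoNear must be the first stage and needs the 2dsphere index
        $geoNear: {
          near: { type: "Point", coordinates: [-122.4194, 37.7749] },
          distanceField: "distanceMeters",
          spherical: true
        }
      }
    ])

This returns objects sorted nearest-first; bearing calculations would still happen client-side as in the Swift snippets.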
null
[ "replication", "mongodb-shell" ]
[ { "code": "mongo -u admin --authenticationDatabase admin mongosh \"mongodb://pmarbiter1a:27017/admin\" -u admin -p", "text": "Good morning\nI configured a replica set with 3 nodes and 1 arbiter. I’m using mongodb 5\nIf I connect from the linux machine where mongo is installed with the arbiter via command\nmongo -u admin --authenticationDatabase admin\ni can do the rs.status command\nwhile if I connect with the command\n mongosh \"mongodb://pmarbiter1a:27017/admin\" -u admin -p (or from any other client) that command doesn’t work and gives me errorMongoServerError: command replSetGetStatus requires authenticationbut I am authenticated.Where could the problem be and how can it be solved?Thanks Alessio", "username": "Alessio_Rossato" }, { "code": "compass at mongodb dot com", "text": "Hi Alessio.Could you please share your mongosh log file? Please double-check that it does not contain any sensitive information. If there is info in the log file that you’d rather not share here, email compass at mongodb dot com, attach the file there and reference this thread.", "username": "Massimiliano_Marcon" }, { "code": "tail -n 50 /mongodb/log/mongod.log\n{\"t\":{\"$date\":\"2023-06-19T13:27:18.153+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174038:153735][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 8994, snapshot max: 8994 snapshot count: 0, oldest timestamp: (1687174015, 1) , meta checkpoint timestamp: (1687174015, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:27:40.600+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1127\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.1.219:58140\",\"client\":\"conn1127\",\"doc\":{\"driver\":{\"name\":\"mongo-java-driver|mongo-scala-driver\",\"version\":\"unknown|2.3.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"amd64\",\"version\":\"3.10.0-957.27.2.el7.x86_64\"},\"platform\":\"Java/Oracle Corporation/1.8.0_151-b12|Scala/2.12.6\"}}}\n{\"t\":{\"$date\":\"2023-06-19T13:27:40.602+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1128\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.1.219:58142\",\"client\":\"conn1128\",\"doc\":{\"driver\":{\"name\":\"mongo-java-driver|mongo-scala-driver\",\"version\":\"unknown|2.3.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"amd64\",\"version\":\"3.10.0-957.27.2.el7.x86_64\"},\"platform\":\"Java/Oracle Corporation/1.8.0_151-b12|Scala/2.12.6\"}}}\n{\"t\":{\"$date\":\"2023-06-19T13:27:40.605+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1129\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.1.219:58144\",\"client\":\"conn1129\",\"doc\":{\"driver\":{\"name\":\"mongo-java-driver|mongo-scala-driver\",\"version\":\"unknown|2.3.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"amd64\",\"version\":\"3.10.0-957.27.2.el7.x86_64\"},\"platform\":\"Java/Oracle Corporation/1.8.0_151-b12|Scala/2.12.6\"}}}\n{\"t\":{\"$date\":\"2023-06-19T13:28:18.156+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174098:156445][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 8996, snapshot max: 8996 snapshot count: 0, oldest timestamp: (1687174069, 5) , meta checkpoint timestamp: 
(1687174069, 5) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:29:18.158+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174158:158883][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 8998, snapshot max: 8998 snapshot count: 0, oldest timestamp: (1687174135, 1) , meta checkpoint timestamp: (1687174135, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:30:18.161+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174218:161389][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9000, snapshot max: 9000 snapshot count: 0, oldest timestamp: (1687174195, 1) , meta checkpoint timestamp: (1687174195, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:31:18.163+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174278:163953][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9002, snapshot max: 9002 snapshot count: 0, oldest timestamp: (1687174255, 1) , meta checkpoint timestamp: (1687174255, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:32:18.167+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174338:167381][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9004, snapshot max: 9004 snapshot count: 0, oldest timestamp: (1687174307, 2) , meta checkpoint timestamp: (1687174307, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:33:18.170+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174398:170481][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9006, snapshot max: 9006 snapshot count: 0, oldest timestamp: (1687174383, 2) , meta checkpoint timestamp: (1687174383, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:34:18.173+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174458:173267][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9008, snapshot max: 9008 snapshot count: 0, oldest timestamp: (1687174438, 2) , meta checkpoint timestamp: (1687174438, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:35:18.176+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174518:176214][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9010, snapshot max: 9010 snapshot count: 0, oldest timestamp: (1687174495, 1) , meta checkpoint timestamp: (1687174495, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:36:18.178+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174578:178701][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9012, 
snapshot max: 9012 snapshot count: 0, oldest timestamp: (1687174555, 1) , meta checkpoint timestamp: (1687174555, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:37:18.181+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174638:181108][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9014, snapshot max: 9014 snapshot count: 0, oldest timestamp: (1687174615, 1) , meta checkpoint timestamp: (1687174615, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:38:18.183+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174698:183795][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9016, snapshot max: 9016 snapshot count: 0, oldest timestamp: (1687174679, 2) , meta checkpoint timestamp: (1687174679, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:39:18.186+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174758:186412][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9018, snapshot max: 9018 snapshot count: 0, oldest timestamp: (1687174734, 2) , meta checkpoint timestamp: (1687174734, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:40:18.189+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174818:189018][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9020, snapshot max: 9020 snapshot count: 0, oldest timestamp: (1687174798, 2) , meta checkpoint timestamp: (1687174798, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:41:18.192+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174878:192064][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9022, snapshot max: 9022 snapshot count: 0, oldest timestamp: (1687174851, 2) , meta checkpoint timestamp: (1687174851, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:42:18.194+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174938:194623][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9024, snapshot max: 9024 snapshot count: 0, oldest timestamp: (1687174914, 1) , meta checkpoint timestamp: (1687174914, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:43:18.197+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687174998:197359][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9026, snapshot max: 9026 snapshot count: 0, oldest timestamp: (1687174969, 5) , meta checkpoint timestamp: (1687174969, 5) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:43:40.592+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1130\",\"msg\":\"client 
metadata\",\"attr\":{\"remote\":\"10.214.1.219:58488\",\"client\":\"conn1130\",\"doc\":{\"driver\":{\"name\":\"mongo-java-driver|mongo-scala-driver\",\"version\":\"unknown|2.3.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"amd64\",\"version\":\"3.10.0-957.27.2.el7.x86_64\"},\"platform\":\"Java/Oracle Corporation/1.8.0_151-b12|Scala/2.12.6\"}}}\n{\"t\":{\"$date\":\"2023-06-19T13:43:40.594+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1131\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.1.219:58490\",\"client\":\"conn1131\",\"doc\":{\"driver\":{\"name\":\"mongo-java-driver|mongo-scala-driver\",\"version\":\"unknown|2.3.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"amd64\",\"version\":\"3.10.0-957.27.2.el7.x86_64\"},\"platform\":\"Java/Oracle Corporation/1.8.0_151-b12|Scala/2.12.6\"}}}\n{\"t\":{\"$date\":\"2023-06-19T13:43:40.596+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1132\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.1.219:58492\",\"client\":\"conn1132\",\"doc\":{\"driver\":{\"name\":\"mongo-java-driver|mongo-scala-driver\",\"version\":\"unknown|2.3.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"amd64\",\"version\":\"3.10.0-957.27.2.el7.x86_64\"},\"platform\":\"Java/Oracle Corporation/1.8.0_151-b12|Scala/2.12.6\"}}}\n{\"t\":{\"$date\":\"2023-06-19T13:44:18.200+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175058:200252][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9028, snapshot max: 9028 snapshot count: 0, oldest timestamp: (1687175029, 2) , meta checkpoint timestamp: (1687175029, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:45:18.202+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175118:202631][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9030, snapshot max: 9030 snapshot count: 0, oldest timestamp: (1687175102, 2) , meta checkpoint timestamp: (1687175102, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:46:18.205+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175178:205476][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9032, snapshot max: 9032 snapshot count: 0, oldest timestamp: (1687175147, 2) , meta checkpoint timestamp: (1687175147, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:47:18.207+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175238:207945][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9034, snapshot max: 9034 snapshot count: 0, oldest timestamp: (1687175215, 1) , meta checkpoint timestamp: (1687175215, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:48:18.211+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175298:211092][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9036, snapshot max: 9036 snapshot 
count: 0, oldest timestamp: (1687175269, 7) , meta checkpoint timestamp: (1687175269, 7) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:49:18.213+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175358:213716][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9038, snapshot max: 9038 snapshot count: 0, oldest timestamp: (1687175335, 1) , meta checkpoint timestamp: (1687175335, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:50:18.216+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175418:216104][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9040, snapshot max: 9040 snapshot count: 0, oldest timestamp: (1687175393, 2) , meta checkpoint timestamp: (1687175393, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:51:18.218+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175478:218486][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9042, snapshot max: 9042 snapshot count: 0, oldest timestamp: (1687175446, 2) , meta checkpoint timestamp: (1687175446, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:52:18.221+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175538:221206][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9044, snapshot max: 9044 snapshot count: 0, oldest timestamp: (1687175515, 1) , meta checkpoint timestamp: (1687175515, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:53:18.224+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175598:224014][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9046, snapshot max: 9046 snapshot count: 0, oldest timestamp: (1687175573, 2) , meta checkpoint timestamp: (1687175573, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:54:18.226+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175658:226806][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9048, snapshot max: 9048 snapshot count: 0, oldest timestamp: (1687175635, 1) , meta checkpoint timestamp: (1687175635, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:55:18.229+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175718:229328][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9050, snapshot max: 9050 snapshot count: 0, oldest timestamp: (1687175696, 2) , meta checkpoint timestamp: (1687175696, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:56:18.231+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175778:231698][3635:0x7f896e3e0700], 
WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9052, snapshot max: 9052 snapshot count: 0, oldest timestamp: (1687175760, 2) , meta checkpoint timestamp: (1687175760, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:56:39.817+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn1112\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":8277079}}\n{\"t\":{\"$date\":\"2023-06-19T13:57:18.234+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175838:234214][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9054, snapshot max: 9054 snapshot count: 0, oldest timestamp: (1687175815, 1) , meta checkpoint timestamp: (1687175815, 1) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:58:18.236+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175898:236535][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9056, snapshot max: 9056 snapshot count: 0, oldest timestamp: (1687175874, 2) , meta checkpoint timestamp: (1687175874, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:58:28.319+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1133\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.3.145:39278\",\"client\":\"conn1133\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"3.10.0-1160.88.1.el7.x86_64\"},\"platform\":\"Node.js v16.19.1, LE (unified)\",\"version\":\"5.1.0|1.8.0\",\"application\":{\"name\":\"mongosh 1.8.0\"}}}}\n{\"t\":{\"$date\":\"2023-06-19T13:58:28.327+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1134\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.3.145:39280\",\"client\":\"conn1134\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"3.10.0-1160.88.1.el7.x86_64\"},\"platform\":\"Node.js v16.19.1, LE (unified)\",\"version\":\"5.1.0|1.8.0\",\"application\":{\"name\":\"mongosh 1.8.0\"}}}}\n{\"t\":{\"$date\":\"2023-06-19T13:58:28.343+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1135\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.3.145:39282\",\"client\":\"conn1135\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"3.10.0-1160.88.1.el7.x86_64\"},\"platform\":\"Node.js v16.19.1, LE (unified)\",\"version\":\"5.1.0|1.8.0\",\"application\":{\"name\":\"mongosh 1.8.0\"}}}}\n{\"t\":{\"$date\":\"2023-06-19T13:58:28.343+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1136\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.3.145:39284\",\"client\":\"conn1136\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"3.10.0-1160.88.1.el7.x86_64\"},\"platform\":\"Node.js v16.19.1, LE (unified)\",\"version\":\"5.1.0|1.8.0\",\"application\":{\"name\":\"mongosh 1.8.0\"}}}}\n{\"t\":{\"$date\":\"2023-06-19T13:58:36.978+02:00\"},\"s\":\"I\", \"c\":\"-\", 
\"id\":20883, \"ctx\":\"conn1133\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":8280485}}\n{\"t\":{\"$date\":\"2023-06-19T13:58:48.357+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1137\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:53442\",\"client\":\"conn1137\",\"doc\":{\"application\":{\"name\":\"MongoDB Shell\"},\"driver\":{\"name\":\"MongoDB Internal Client\",\"version\":\"5.0.15\"},\"os\":{\"type\":\"Linux\",\"name\":\"CentOS Linux release 7.9.2009 (Core)\",\"architecture\":\"x86_64\",\"version\":\"Kernel 3.10.0-1160.88.1.el7.x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-06-19T13:59:18.239+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687175958:239604][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9058, snapshot max: 9058 snapshot count: 0, oldest timestamp: (1687175938, 2) , meta checkpoint timestamp: (1687175938, 2) base write gen: 452500\"}}\n{\"t\":{\"$date\":\"2023-06-19T13:59:40.584+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1138\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.1.219:58824\",\"client\":\"conn1138\",\"doc\":{\"driver\":{\"name\":\"mongo-java-driver|mongo-scala-driver\",\"version\":\"unknown|2.3.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"amd64\",\"version\":\"3.10.0-957.27.2.el7.x86_64\"},\"platform\":\"Java/Oracle Corporation/1.8.0_151-b12|Scala/2.12.6\"}}}\n{\"t\":{\"$date\":\"2023-06-19T13:59:40.587+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1139\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.1.219:58826\",\"client\":\"conn1139\",\"doc\":{\"driver\":{\"name\":\"mongo-java-driver|mongo-scala-driver\",\"version\":\"unknown|2.3.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"amd64\",\"version\":\"3.10.0-957.27.2.el7.x86_64\"},\"platform\":\"Java/Oracle Corporation/1.8.0_151-b12|Scala/2.12.6\"}}}\n{\"t\":{\"$date\":\"2023-06-19T13:59:40.589+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1140\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.214.1.219:58828\",\"client\":\"conn1140\",\"doc\":{\"driver\":{\"name\":\"mongo-java-driver|mongo-scala-driver\",\"version\":\"unknown|2.3.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"amd64\",\"version\":\"3.10.0-957.27.2.el7.x86_64\"},\"platform\":\"Java/Oracle Corporation/1.8.0_151-b12|Scala/2.12.6\"}}}\n{\"t\":{\"$date\":\"2023-06-19T14:00:18.242+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1687176018:242184][3635:0x7f896e3e0700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 9060, snapshot max: 9060 snapshot count: 0, oldest timestamp: (1687175995, 1) , meta checkpoint timestamp: (1687175995, 1) base write gen: 452500\"}}\n", "text": "", "username": "Alessio_Rossato" }, { "code": "", "text": "@Alessio_Rossato that’s the mongod log. 
Can you share the mongosh log (https://www.mongodb.com/docs/mongodb-shell/logs/) for a session where the connection to the arbiter fails?", "username": "Massimiliano_Marcon" }, { "code": "{\"t\":{\"$date\":\"2023-06-21T09:03:29.839Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000000,\"ctx\":\"log\",\"msg\":\"Starting log\",\"attr\":{\"execPath\":\"/usr/bin/mongosh\",\"envInfo\":{\"EDITOR\":null,\"NODE_OPTIONS\":null,\"TERM\":\"xterm\"},\"version\":\"1.8.0\",\"distributionKind\":\"compiled\",\"buildArch\":\"x64\",\"buildPlatform\":\"linux\",\"buildTarget\":\"linux-x64\",\"buildTime\":\"2023-02-28T13:57:25.355Z\",\"gitVersion\":\"9cf53bc336c79e505cf034bf5e6f3b3b3796cf25\",\"nodeVersion\":\"v16.19.1\",\"opensslVersion\":\"1.1.1t+quic\",\"sharedOpenssl\":false}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.843Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000005,\"ctx\":\"config\",\"msg\":\"User updated\"}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.843Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000048,\"ctx\":\"config\",\"msg\":\"Loading global configuration file\",\"attr\":{\"filename\":\"/etc/mongosh.conf\",\"found\":false}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.868Z\"},\"s\":\"I\",\"c\":\"DEVTOOLS-CONNECT\",\"id\":1000000042,\"ctx\":\"mongosh-connect\",\"msg\":\"Initiating connection attempt\",\"attr\":{\"uri\":\"mongodb://<credentials>@127.0.0.1:27017/admin?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.8.0\",\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"5.1.0\"},\"devtoolsConnectVersion\":\"1.4.4\",\"host\":\"127.0.0.1:27017\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.875Z\"},\"s\":\"I\",\"c\":\"DEVTOOLS-CONNECT\",\"id\":1000000035,\"ctx\":\"mongosh-connect\",\"msg\":\"Server heartbeat succeeded\",\"attr\":{\"connectionId\":\"127.0.0.1:27017\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.881Z\"},\"s\":\"I\",\"c\":\"DEVTOOLS-CONNECT\",\"id\":1000000037,\"ctx\":\"mongosh-connect\",\"msg\":\"Connection attempt finished\"}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.903Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000004,\"ctx\":\"connect\",\"msg\":\"Connecting to server\",\"attr\":{\"session_id\":\"6492bce1d996ea5d77380059\",\"userId\":null,\"telemetryAnonymousId\":\"642438a6fba198fe090614bf\",\"connectionUri\":\"<mongodb uri><ip address>:27017/admin?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.8.0\",\"is_atlas\":false,\"is_localhost\":true,\"is_do\":false,\"server_version\":\"5.0.15\",\"node_version\":\"v16.19.1\",\"mongosh_version\":\"1.8.0\",\"server_os\":\"linux\",\"server_arch\":\"x86_64\",\"is_enterprise\":false,\"auth_type\":\"DEFAULT\",\"is_data_federation\":false,\"dl_version\":null,\"atlas_version\":null,\"is_genuine\":true,\"non_genuine_server_name\":\"mongodb\",\"fcv\":null,\"api_version\":null,\"api_strict\":null,\"api_deprecation_errors\":null}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.907Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000011,\"ctx\":\"shell-api\",\"msg\":\"Performed API call\",\"attr\":{\"method\":\"adminCommand\",\"class\":\"Database\",\"db\":\"admin\",\"arguments\":{\"cmd\":{\"ping\":1}}}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.908Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000011,\"ctx\":\"shell-api\",\"msg\":\"Performed API call\",\"attr\":{\"method\":\"getSiblingDB\",\"class\":\"Database\",\"db\":\"admin\",\"arguments\":{\"db\":\"admin\"}}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.962Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000010,\"ctx\":\"shell-api\",\"msg\":\"Initialized 
context\",\"attr\":{\"method\":\"setCtx\",\"arguments\":{}}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.965Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000009,\"ctx\":\"shell-api\",\"msg\":\"Used \\\"show\\\" command\",\"attr\":{\"method\":\"show startupWarnings\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.966Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000011,\"ctx\":\"shell-api\",\"msg\":\"Performed API call\",\"attr\":{\"method\":\"adminCommand\",\"class\":\"Database\",\"db\":\"admin\",\"arguments\":{\"cmd\":{\"getLog\":\"startupWarnings\"}}}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.967Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000011,\"ctx\":\"shell-api\",\"msg\":\"Performed API call\",\"attr\":{\"method\":\"getSiblingDB\",\"class\":\"Database\",\"db\":\"admin\",\"arguments\":{\"db\":\"admin\"}}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.968Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000009,\"ctx\":\"shell-api\",\"msg\":\"Used \\\"show\\\" command\",\"attr\":{\"method\":\"show freeMonitoring\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.968Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000011,\"ctx\":\"shell-api\",\"msg\":\"Performed API call\",\"attr\":{\"method\":\"adminCommand\",\"class\":\"Database\",\"db\":\"admin\",\"arguments\":{\"cmd\":{\"getFreeMonitoringStatus\":1}}}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.969Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000011,\"ctx\":\"shell-api\",\"msg\":\"Performed API call\",\"attr\":{\"method\":\"getSiblingDB\",\"class\":\"Database\",\"db\":\"admin\",\"arguments\":{\"db\":\"admin\"}}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.969Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000009,\"ctx\":\"shell-api\",\"msg\":\"Used \\\"show\\\" command\",\"attr\":{\"method\":\"show automationNotices\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.970Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000011,\"ctx\":\"shell-api\",\"msg\":\"Performed API call\",\"attr\":{\"method\":\"hello\",\"class\":\"Database\",\"db\":\"admin\",\"arguments\":{}}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.985Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000009,\"ctx\":\"shell-api\",\"msg\":\"Used \\\"show\\\" command\",\"attr\":{\"method\":\"show nonGenuineMongoDBCheck\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.985Z\"},\"s\":\"E\",\"c\":\"MONGOSH\",\"id\":1000000006,\"ctx\":\"shell-api\",\"msg\":\"MongoServerError: command getLog requires authentication\",\"attr\":{\"stack\":\"MongoServerError: command getLog requires authentication\\n at Connection.onMessage (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1267623)\\n at MessageStream.<anonymous> (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1265503)\\n at MessageStream.emit (node:events:513:28)\\n at MessageStream.emit (node:domain:489:12)\\n at p (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1287145)\\n at MessageStream._write (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1285766)\\n at writeOrBuffer (node:internal/streams/writable:391:12)\\n at _write (node:internal/streams/writable:332:10)\\n at MessageStream.Writable.write (node:internal/streams/writable:336:10)\\n at Socket.ondata (node:internal/streams/readable:754:22)\",\"name\":\"MongoServerError\",\"message\":\"command getLog requires authentication\",\"code\":13,\"ok\":0,\"codeName\":\"Unauthorized\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.986Z\"},\"s\":\"E\",\"c\":\"MONGOSH\",\"id\":1000000006,\"ctx\":\"shell-api\",\"msg\":\"MongoServerError: command getFreeMonitoringStatus requires 
authentication\",\"attr\":{\"stack\":\"MongoServerError: command getFreeMonitoringStatus requires authentication\\n at Connection.onMessage (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1267623)\\n at MessageStream.<anonymous> (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1265503)\\n at MessageStream.emit (node:events:513:28)\\n at MessageStream.emit (node:domain:489:12)\\n at p (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1287145)\\n at MessageStream._write (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1285766)\\n at writeOrBuffer (node:internal/streams/writable:391:12)\\n at _write (node:internal/streams/writable:332:10)\\n at MessageStream.Writable.write (node:internal/streams/writable:336:10)\\n at Socket.ondata (node:internal/streams/readable:754:22)\",\"name\":\"MongoServerError\",\"message\":\"command getFreeMonitoringStatus requires authentication\",\"code\":13,\"ok\":0,\"codeName\":\"Unauthorized\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.987Z\"},\"s\":\"I\",\"c\":\"MONGOSH-SNIPPETS\",\"id\":1000000024,\"ctx\":\"snippets\",\"msg\":\"Fetching snippet index\",\"attr\":{\"refreshMode\":\"allow-cached\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.987Z\"},\"s\":\"I\",\"c\":\"MONGOSH-SNIPPETS\",\"id\":1000000019,\"ctx\":\"snippets\",\"msg\":\"Loaded snippets\",\"attr\":{\"installdir\":\"/root/.mongodb/mongosh/snippets\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:29.987Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000015,\"ctx\":\"repl\",\"msg\":\"Warning about .mongorc.js/.mongoshrc.js mismatch\"}\n{\"t\":{\"$date\":\"2023-06-21T09:03:30Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000002,\"ctx\":\"repl\",\"msg\":\"Started REPL\",\"attr\":{\"version\":\"1.8.0\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:34.385Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000007,\"ctx\":\"repl\",\"msg\":\"Evaluating input\",\"attr\":{\"input\":\"rs.status()\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:34.470Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000011,\"ctx\":\"shell-api\",\"msg\":\"Performed API call\",\"attr\":{\"method\":\"status\",\"class\":\"ReplicaSet\",\"arguments\":{}}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:34.482Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000011,\"ctx\":\"shell-api\",\"msg\":\"Performed API call\",\"attr\":{\"method\":\"getSiblingDB\",\"class\":\"Database\",\"db\":\"admin\",\"arguments\":{\"db\":\"admin\"}}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:34.517Z\"},\"s\":\"E\",\"c\":\"MONGOSH\",\"id\":1000000006,\"ctx\":\"repl\",\"msg\":\"MongoServerError: command replSetGetStatus requires authentication\",\"attr\":{\"ok\":0,\"code\":13,\"codeName\":\"Unauthorized\",\"message\":\"command replSetGetStatus requires authentication\",\"name\":\"MongoServerError\",\"stack\":\"MongoServerError: command replSetGetStatus requires authentication\\n at Connection.onMessage (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1267623)\\n at MessageStream.<anonymous> (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1265503)\\n at MessageStream.emit (node:events:513:28)\\n at MessageStream.emit (node:domain:489:12)\\n at p (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1287145)\\n at MessageStream._write (/tmp/m/boxednode/mongosh/node-v16.19.1/out/Release/node:3902:1285766)\\n at writeOrBuffer (node:internal/streams/writable:391:12)\\n at _write (node:internal/streams/writable:332:10)\\n at MessageStream.Writable.write (node:internal/streams/writable:336:10)\\n at Socket.ondata 
(node:internal/streams/readable:754:22)\"}}\n{\"t\":{\"$date\":\"2023-06-21T09:03:37.525Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000045,\"ctx\":\"analytics\",\"msg\":\"Flushed outstanding data\",\"attr\":{\"flushError\":\"read ECONNRESET\",\"flushDuration\":350}}\n", "text": "This is the mongosh log", "username": "Alessio_Rossato" } ]
Connect arbiter mongosh
2023-06-19T10:28:00.202Z
Connect arbiter mongosh
676
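For context on the Unauthorized errors in the mongosh session above: an arbiter holds no data, including user and role information, so commands that require authentication (replSetGetStatus, getLog, getFreeMonitoringStatus) generally cannot be run against it directly. A minimal sketch of a direct arbiter connection for troubleshooting; the host name is a placeholder, not a value from the thread:

mongosh "mongodb://<arbiter-host>:27017/?directConnection=true&serverSelectionTimeoutMS=2000"
// Once connected, the unauthenticated handshake still works and shows the member's role:
db.hello()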
https://www.mongodb.com/…7_2_1024x576.png
[]
[ { "code": "", "text": "I am starting the Mongodb DBA course and I’m trying to verify my account in the Atlas CLI. I am following the instructions to the letter but can not log in. Here is a screenshot of my issue\n\nScreenshot (11)1920×1080 439 KB\n", "username": "Lamar_Wells" }, { "code": "atlas auth registerCheck", "text": "Hey @Lamar_Wells,Welcome to the MongoDB Community Forums! Once the Atlas registration is complete, please open the lab. Kindly ensure that you have registered using an email address associated with the new Project and not using any previous Atlas Project. After completing this step, please follow the instructions below:In the lab, type the atlas auth register command as given in the lab.The command line would display a link. You should click on the link.Since you’re is already registered, please click on Have an account? Log in nowOnce you log in, it will ask you to enter the code that the lab displays. You would be able to enter it there. As soon as one enters the code, the lab would display something like this: Successfully logged in as [email protected] can then click on Check to complete the lab.Hope this helps. Please feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help with Mongodb university tutorial login
2023-06-16T21:49:48.105Z
Help with Mongodb university tutorial login
803
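A short sketch of the Atlas CLI commands behind the steps above; flag names may vary by CLI version, so check atlas auth --help if these differ:

atlas auth register   # first-time registration; prints a one-time code and a login URL
atlas auth login      # use this instead if the account already exists
atlas auth whoami     # confirm which account the CLI is now authenticated as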
null
[ "sharding" ]
[ { "code": "", "text": "We are currently experiencing an issue with chunk balancing, we scheduled a chunk balancingonce a day during periods of low traffic. When chunk balancing takes place, we notice a significant increase in the number of dirty pages and dirty cache. Based on my understanding, this could be due to the movement of data chunks between the shards. However, it seems unusual because we are only inserting less than 1 GB of data per day, yet we observe nearly 2 GB of dirty cache during the chunk balancing process. Consequently, this high disk utilization during chunk balancing negatively affects the performance of our application server.To provide some context, we are operating a 3-shard cluster, and our shard key is a compound key composed of three fields: “a,” “b,” and “c.” Both “b” and “c” are UUIDs, while “a” is a string with only a few unique values. Could the selection of this shard key, with its specific combination of fields, be contributing to the problem we are experiencing?", "username": "Kiran_Sunkari" }, { "code": "sh.status()", "text": "Hey @Kiran_Sunkari,Thank you for reaching out to the MongoDB Community forums.We are currently experiencing an issue with chunk balancing, we scheduled a chunk balancingonce a day during periods of low traffic.Have you observed whether the balancing window is long enough to achieve a balanced state for the cluster every day? If not, it is possible that the balancer is unable to keep up with the balancing work and, as a result, the cluster will never reach a balanced state.When chunk balancing takes place, we notice a significant increase in the number of dirty pages and dirty cache.If you have a substantial amount of data to balance, it could lead to a significant accumulation of dirty pages and cache as the process involves writing to the disk, which can result in additional storage usage.However, it seems unusual because we are only inserting less than 1 GB of data per dayYou mentioned that you are inserting less than 1 GB of data per day. Could you clarify if this is the data size per shard?yet we observe nearly 2 GB of dirty cache during the chunk-balancing process.Could you provide details on where you see this number? Additionally, is this a 2 GB number per shard or for the entire deployment? Note that a large dirty cache doesn’t imply any issues. Furthermore, as workload balancing involves data migration to different shards, dirtying the cache is a natural part of the process when there is a significant amount of data to be moved.To provide some context, we are operating a 3-shard clusterTo better understand your deployment environment, please provide the following additional information:Feel free to provide any further details related to your deployment so that we can assist you more effectively.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
During chunk balancing, we are seeing high dirty pages and high dirty cache
2023-06-21T02:31:14.354Z
During chunk balancing, we are seeing high dirty pages and high dirty cache
629
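For reference, a minimal mongosh sketch of scheduling a daily balancing window like the one described above; the window times are placeholders, not values from the thread:

use config
db.settings.updateOne(
  { _id: 'balancer' },
  { $set: { activeWindow: { start: '02:00', stop: '06:00' } } },
  { upsert: true }
)
sh.status()  // confirm balancer state and whether chunks remain unbalanced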
null
[ "sharding", "change-streams" ]
[ { "code": "", "text": "Hey guys!Our use case is pretty simple - we have a sharded MongoDB cluster with replicas and multiple shards. Currently, we are watching the changes (by using .watch() and connecting to the mongos). These changes are streamed into other parts of our data pipeline.We are using MongoDB 4.2 community.When we added a new shard (because our data grows up), I saw an error “Error on remote shard mongoprodnew:27020 :: caused by :: Resume of change stream was not possible, as the resume point may no longer be in the oplog.le, as the resu…” (I guess the last was truncated). And our replication script crashed, as well as the whole feature.I tried both resumeAfter and startAtOperationTime params to set the starting point. Both caused that error, “Resume of change stream was not possible” - but hey I don’t need to resume, just re-create it for me please?So whenever we need to add/replace a shard now, we have to completely stop the whole logical replication process, add a shard, wait until it fetches the data chunks, and then start the replication again. What’s even worse, we can’t really write anything into the DB because the changes will be lost - we won’t be able to resume from the point that’s in the past, before the shard is really up and running.Is there any way to do that without such an unpleasant downtime?Thanks", "username": "Andrey_N" }, { "code": "", "text": "Hi @Andrey_N ,\nI noticed this in my application as well.\nDid you find any fix for this?", "username": "Oded_Raiches" }, { "code": "", "text": "SERVER-42232 should have fixed this issue on MongoDB 4.2.", "username": "Garaudy_Etienne" }, { "code": "", "text": "Thanks @Garaudy_Etienne for the response,\nI am seeing this issue again in MongoDB 6.0.\nOpened a different thread with the info:", "username": "Oded_Raiches" } ]
Resume/Restart a change stream after adding a new shard
2021-07-09T15:56:20.915Z
Resume/Restart a change stream after adding a new shard
3,601
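A rough Node.js sketch of the fallback discussed in this thread: attempt to resume from a saved token, and if the resume point is no longer available (for example after adding a shard), reopen the stream without a token, accepting that events in the gap are lost. The client, database name, and savedToken are assumptions, not values from the thread:

const watchWithFallback = (client, savedToken) => {
  const db = client.db('mydb');
  let stream = db.watch([], savedToken ? { resumeAfter: savedToken } : {});
  stream.on('error', (err) => {
    // Error shapes differ across server versions; match defensively.
    if (/Resume of change stream was not possible/.test(err.message)) {
      stream = db.watch([]); // re-create from "now" instead of resuming
    }
  });
  return stream;
};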
https://www.mongodb.com/…1_2_1024x657.png
[ "react-native" ]
[ { "code": "type UserProfileData = {\n id: string\n name: string\n email: string\n companies: Array<{ _id: int; name: string }>\n}\nconst ItemSchema = {\n name: 'Item',\n properties: {\n _id: { type: 'objectId' },\n company_id: { type: 'int' },\n name: { type: 'string' }\n },\n primaryKey: '_id',\n}\ncompany_id: 1company_id = 1company_id", "text": "I’m working on a React Native project which uses Realm + Flexible Sync to sync data between Atlas and the app. The app is used by multiple companies, and a user could potentially be part of multiple companies.I’m trying to set up a filter so that once a user logs in, the local Realm can only sync data that belongs to companies they’re part of.When logging in, I store an array of companies in the Atuh provider custom data:In Atlas I also have a collection of Items:To keep things simple, I’ve set up an Atlas App with anonymous authentication, as well as a filter on the Item collection to only return items that have a company_id: 1:\nimage1664×1068 96 KB\nHowever, when running the app, Atlas is not applying the filter when the Realm is first initialised. I’ve created a simple app to demonstrate what it looks like in this gist, it also includes a screenshot showing that Realm is pulling through all the items, and not just the ones for company_id = 1.Of course, I could programmatically set a filter by company_id from within my React Native code, but in terms of security setting a filter from within Atlas is a much better approach.", "username": "one_abdullah" }, { "code": "document_filters\"document_filters\": {\n \"read\": { \"company_id\": { \"$in\": \"%%user.custom_data.companies\" } },\n \"write\": { \"company_id\": { \"$in\": \"%%user.custom_data.companies\" } }\n},\ncompanies_idscompany_id", "text": "Hi @one_abdullah,(Request-level) filters (the list of objects in the “filters” blob) are unsupported in flexible sync. I would recommend trying to use the document_filters field in the role configuration instead to achieve the permissions scheme that you are describing above:Side note - I believe this suggestion will only work if the companies array in the custom user data is an array of _ids as opposed to an array of objects. Also, this will require adding company_id as a queryable field in the sync configuration.Let me know if you have any other questions,\nJonathan", "username": "Jonathan_Lee" }, { "code": "company_id: 99", "text": "Hi Jonathan,Thanks for your reply, I’ve tried your suggestion though it looks like it’s still not working. I’ve set the filter to company_id: 99 but it’s still returning all the items.\nScreenshot 2023-06-20 at 16.30.00676×852 49.5 KB\n", "username": "one_abdullah" }, { "code": "\"document_filters.write\": true\"document_filters.write\"\"company_id\": 99\"document_filters\": {\n \"read\": { \"company_id\": 99 },\n \"write\": { \"company_id\": 99 }\n},\n", "text": "In the Atlas App Services rules system, write access implies read access. So in this role configuration, the \"document_filters.write\": true implies both read and write access at the document-level for all items. If you want to restrict both write and read access to only certain companies, you’ll need update the \"document_filters.write\" expression as well. 
For the example of \"company_id\": 99, it should look something like:\nLet me know if that works for you,\nJonathan", "username": "Jonathan_Lee" }, { "code": "", "text": "Thank you very much Jonathan, that did the trick!", "username": "one_abdullah" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filters defined in Atlas are not being applied from Realm subscriptions
2023-06-20T14:26:05.600Z
Filters defined in Atlas are not being applied from Realm subscriptions
702
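A rough Realm JS (React Native) sketch of the client side of this setup: subscribe to the user's companies while the server-side document_filters remain the real security boundary. The realm instance, subscription name, and companyIds array are assumptions, not values from the thread:

await realm.subscriptions.update((mutableSubs) => {
  mutableSubs.add(
    realm.objects('Item').filtered('company_id IN $0', companyIds),
    { name: 'companyItems' }
  );
});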
null
[ "replication" ]
[ { "code": "", "text": "Hi everyone.Mongodb Replica set(PSA) my secondary node one database size is 560 GB and same database in primary node 500 GB… please help out !", "username": "sindhu_K" }, { "code": "", "text": "storage size can differ (e.g. due to how space is managed by mongodb), what about number of docs?", "username": "Kobe_W" }, { "code": "", "text": "Hi Kobe\ndatabase contain 48 billon doc.", "username": "sindhu_K" } ]
MongoDB Replica Set
2023-06-20T13:30:54.515Z
MongoDB Replica Set
636
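A small mongosh sketch for comparing the two nodes discussed above; connect to each member with directConnection=true, and note that a matching dataSize with differing storageSize usually points to reusable space inside the WiredTiger files rather than missing documents. The database name is a placeholder:

const s = db.getSiblingDB('mydb').stats()
printjson({ objects: s.objects, dataSize: s.dataSize, storageSize: s.storageSize, indexSize: s.indexSize })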
null
[ "node-js", "atlas-cluster", "serverless" ]
[ { "code": "// marcador de posición de código\nconst { MongoClient } = require('mongodb')\nconst URI = process.env.MONGO_URI\nlet cachedDb = null\nconst connect = async () => { const payload = { success: false, data: null, message: [], errors: [] } try { if (!cachedDb) { // Si no está conectado, establece la conexión const client = await MongoClient.connect(URI) cachedDb = client.db() payload.success = true payload.message.push('Mongo connection established successfully') payload.data = cachedDb } return payload } catch (error) { console.error('Error connecting with database:', error) payload.success = false payload.data = null payload.errors.push('Uncontrolled error in mongoConnection.service.connect') return payload }}\nmodule.exports = { connect }\n// marcador de posición de código\n2023-05-22T22:30:55.830Z b4f960ec-6c55-43d6-b6a0-b4ea59a63538 ERROR Error conecting with database: MongoServerSelectionError: Server selection timed out after 30000 ms at Timeout._onTimeout (/var/task/node_modules/mongodb/lib/sdam/topology.js:278:38) at listOnTimeout (internal/timers.js:557:17) at processTimers (internal/timers.js:500:7) { reason: TopologyDescription { type: 'ReplicaSetNoPrimary', servers: Map(3) { 'plata-dev-shard-00-01.w4zuh.mongodb.net:27017' => [ServerDescription], 'plata-dev-shard-00-02.w4zuh.mongodb.net:27017' => [ServerDescription], 'plata-dev-shard-00-00.w4zuh.mongodb.net:27017' => [ServerDescription] }, stale: false, compatible: true, heartbeatFrequencyMS: 10000, localThresholdMS: 15, setName: 'atlas-yu8x8e-shard-0', maxElectionId: null, maxSetVersion: null, commonWireVersion: 0, logicalSessionTimeoutMinutes: null }, code: undefined, [Symbol(errorLabels)]: Set(0) {} }\n", "text": "What problem are you facing?\nhaving issue trying to connect with mongo atlas using aws lambda.What driver and relevant dependency versions are you using?\nnodejs 14.x\nmongodb 5.5 ( mongo client )\nmongo atlas 6.0.6Steps to reproduce?\nthis is my code when try to connect:then when I try to run it in aws lambda, serverless.com report me this error (from aws)—I already peering with my VPC, allow public connection, and did all the tips that I found in internet, but I don’t know why it still failling", "username": "francisco_Innocenti" }, { "code": "pingtelnet", "text": "Thanks for providing those details @francisco_Innocenti.To try narrow down what the issue could be, please confirm / provide the following information:I already peering with my VPC, allow public connection,Additionally, please review the following articles which may be of use:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hello @Jason_Tran ,sure, here is more details:Also I am following los post about AWS Lambda and MongoDB. there is some post that are contradictory, making some confusion of what is right:Second one explaining what the problem that we are having with the “Connection pool”,After Apply second one, we realize that is some functions in our lambda that are running issues with “connection pool”, going deep in the code we check that we are using a iteration and making queries in parallel. Could be that an problem with “connection pool” ?", "username": "francisco_Innocenti" }, { "code": "M0M2M5", "text": "Based off your answers it seems the cluster can be connected to but from other instances (as opposed to AWS lambda). 
However, since you’ve also stated it’s an M0 tier cluster, it won’t be able to utilise the network peering connection as per the Set Up a Network Peering Connection documentation:\nCan you advise the output of the networking tests from the client you tested with that was in the same VPC as the AWS Lambda instance?\nThe second one explains the problem that we are having with the “Connection pool”.\nAfter applying the second one, we realized that some functions in our Lambda are running into issues with the “connection pool”. Going deeper into the code, we found that we are using an iteration and making queries in parallel. Could that be a problem with the “connection pool”?\nIt is difficult to say at this moment whether or not this would be related to the connection issue. Can you further explain here? Are you having a scenario where you’re maxing out your connections?\nRegards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "I am using M0 for testing, but we are having the same issue with M30.\nThe peering instance is not the problem; this error started some weeks after we deployed our first Lambda.\nWhen we deployed our first Lambda functions, everything was working well. THEN, after some weeks, these issues started to appear, and I can find no reason or explanation for why. SOMETIMES it connects to the M30 instance and other times NOT.\nThis is a serious issue that is putting our business at risk, because we cannot connect to MongoDB Atlas. This is terrible!\nWe need to fix this AS SOON AS POSSIBLE.\nI found that many other users have been having the same issues, but NO response on why that is happening: even the post doesn’t seem to have a solution", "username": "francisco_Innocenti" }, { "code": "MongoServerSelectionError: connection <monitor> to <redacted>:27017 closed\n at Timeout._onTimeout (/home/ubuntu/tour/node_modules/mongodb/lib/sdam/topology.js:293:38)\n at listOnTimeout (node:internal/timers:559:17)\n at processTimers (node:internal/timers:502:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'ac-<redacted>-shard-00-01.qemgxcq.mongodb.net:27017' => [ServerDescription],\n 'ac-<redacted>-shard-00-02.qemgxcq.mongodb.net:27017' => [ServerDescription],\n 'ac-<redacted>-shard-00-00.qemgxcq.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-<redacted>-shard-0',\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\n", "text": "I found that many other users have been having the same issues, but NO response on why that is happening: even the post doesn’t seem to have a solution\nThe community forums can be a starting point for discussion on development or product questions if you do not have a paid support plan. There is no SLA (or guarantee) around responses, but anyone in the community is encouraged to share suggestions or experience, so you should get more eyes on your posts. Our engineering and product teams also look for community discussions where we can help, but have to balance availability with development and product priorities. If this project is of the utmost importance to you, then perhaps raising a support case with an agreed SLA is the best option. 
Details on support plans are available through the UI as part of the procedure to change your support plan, or by contacting MongoDB.\nRegarding the post that you’ve linked, although the error is the same, I do not believe that alone can be a direct indicator that the issue / root cause is the same. For example, I have a test environment in which I’ve removed all Network Access List entries, tried connecting, and got the same error:\nNote: The test client is using the MongoDB NodeJS Driver version 4.11.0 for the below test.\nHowever, with the above example and as stated previously, the error may be the same but the root cause may differ. In this example, it was due to the client’s IP not being on the Network Access List for my test environment.\nIn saying the above, this is probably not the case for you, as you’ve allowed all IP entries and you’re stating that the connection issue is intermittent. Assuming there are no cluster issues (resource exhaustion, outages, etc.), and although I understand it is not ideal, for the purposes of troubleshooting you can try connecting from a non-AWS-Lambda instance and see if the same intermittent connection issue is also happening. This could determine if the issue is from AWS Lambda or Atlas.\nYou can get this client to connect during the same periods to see if the timeout error occurs as well when connecting to the same cluster, ensuring most, if not all, other variables are the same (driver, driver version, etc.).\nThis will be one step to help narrow down what the root cause could be. On top of this, to ensure we cover as many possibilities as we can, you can also consider contacting AWS support to see if there is anything Lambda-specific that may cause the intermittent timeouts.\nRegards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "A post was split to a new topic: M10 and AWS Lambda connection issue", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Error in Topology: ReplicaSetNoPrimary - AWS Lambda
2023-05-23T15:28:46.561Z
Error in Topology: ReplicaSetNoPrimary - AWS Lambda
1,678
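For reference, a hedged sketch of the connection pattern MongoDB's AWS Lambda guidance (linked in this thread) recommends: construct the client once, outside the handler, so warm invocations reuse the existing pool instead of reconnecting on every call. The collection name and query are placeholders:

const { MongoClient } = require('mongodb');

// Created once per container, not per invocation
const client = new MongoClient(process.env.MONGO_URI);
const clientPromise = client.connect();

exports.handler = async (event) => {
  const db = (await clientPromise).db();
  return db.collection('items').findOne({});
};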
null
[ "data-modeling" ]
[ { "code": "OneToOne\nCourses -> Course_Details\n\nOneToOne\nCourse_Details -> Course_Content\n\nOneToMany \nCourse_Content -> Videos\n", "text": "Hello everyone, glad to be here.\nI have some experience with mongo but maybe not enough, which is maybe why I think my use case is better suited for SQL. However I am not sure, so I am seeking some advice.I have a rather simple data structureWith SQL this is rather simple, but could this be something I could also in mongo implement?Of course we would later have session management, user login token etc. This is where I start to think postgres better fits my use case.Can anyone point me in the right direction? If I am being too vague, please let me know. I am also trying to avoid a TLDR post.", "username": "dj2108" }, { "code": "{\n \"course_id\": \"MTH101-23\",\n \"course_name\": \"Math 101\",\n \"course_description\": \"Learn the basics of math\",\n \"teacher\": \"Bob Smith\",\n \"students\": [\"student1\", \"student2\", \"student3\"],\n \"videos\": [\n {\n \"video_name\": \"1. Introduction\",\n \"video_description\": \"introduction video\",\n \"video_location (url)\": \"URL of video\"\n \n },\n {\n \"video_name\": \"2. Lesson 1\",\n \"video_description\": \"Lesson one\",\n \"video_location (url)\": \"URL of video\"\n \n }\n ]\n}\n", "text": "You could do this with MongoDB, something like this…create a collection called courses and have a document that represents one courseWith this model all you have to search by the course_id and you get all the information about the course. The members, teach, date, content links (videos, papers, etc).", "username": "tapiocaPENGUIN" }, { "code": "", "text": "I this looks like it could work. I will give it a shot, thanks!", "username": "dj2108" } ]
Hello everyone - Should I use MongoDB for this?
2023-06-19T19:38:27.809Z
Hello everyone - Should I use MongoDB for this?
669
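A couple of mongosh queries sketching how the embedded model above is read back; the projection syntax assumes mongosh, where findOne takes the projection as the second argument:

db.courses.findOne({ course_id: 'MTH101-23' })                          // the whole course
db.courses.findOne({ course_id: 'MTH101-23' }, { videos: 1, _id: 0 })   // just the video list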
null
[]
[ { "code": " {\n \"_id\": \"6466279bec6576a00b527434\",\n \"brand\": \"SS\",\n \"product_name\": \"Gunther \",\n \"gst\": 0\n }\nvar product = productColl.findOne({\"_id\": new BSON.ObjectId(\"6466279bec6576a00b527434\") })\n const product_item = EJSON.parse(product);\n \n var orderObj\n orderObj = order_items.map((item, idx) => {\n return {\n ...item,\n brand: product_item.brand,\n product_name: product_item.product_name\n }\n})\n", "text": "I have a collection order and in order collection I am having a document order ItemsI using findOne in mongodb atlas function and then trying to access the brand name but unable to parse and get the brand value, I have tried EJSON.parse(item).What I am trying to achieve is", "username": "Zubair_Rajput" }, { "code": "exports = async function(arg){\n // Find the name of the MongoDB service you want to use (see \"Linked Data Sources\" tab)\n var serviceName = \"mongodb-atlas\";\n\n // Update these to reflect your db/collection\n var dbName = \"db_name\";\n var collName = \"collection_name\";\n\n // Get a collection from the context\n var productColl = context.services.get(serviceName).db(dbName).collection(collName);\n\n var product = await productColl.findOne({ \"_id\": \"6466279bec6576a00b527434\" });\n const product_item = JSON.parse(EJSON.stringify(product));\n \n if (product_item) {\n const brand = product_item.brand;\n return brand;\n }\n return null; // Return null if the product_item is not found\n}\n> result: \n\"SS\"\n> result (JavaScript): \nEJSON.parse('\"SS\"')\n", "text": "Hey @Zubair_Rajput,Thanks for reaching out to the MongoDB Community forums then trying to access the brand nameBased on the sample documents you shared, I’ve written a JavaScript code snippet that retrieves the brand name when executed in a MongoDB Atlas function. Sharing the code snippet for your reference:It will return the following output:Please let us know if you have any further questions.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "\"mrp\": {\n \"$numberInt\": \"160000\"\n },\n", "text": "Thanks for the fast reply, yea I am getting result.\nBut one more query I am getting mrp like thisNow I want to access 16000 and do some calculation.\nThanks again bro", "username": "Zubair_Rajput" }, { "code": "", "text": "Hello @Zubair_Rajput,I noticed you asked a similar question on another thread. If the answer in that thread solved your issue, I will close this thread. If not, let’s continue the discussion in that thread to keep all the information in one place.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
How to get the value of a document in a MongoDB Atlas function
2023-06-19T04:14:07.595Z
How to get the value of a document in a MongoDB Atlas function
813
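A small sketch of handling the EJSON number shape shown at the end of this thread inside an Atlas function; whether a field arrives as a plain number or as { "$numberInt": "..." } depends on where it was serialized, so checking both is a defensive assumption, and the 18% GST rate is only an example:

const raw = product_item.mrp;
// Accept either a plain number or the EJSON extended form
const mrp = (raw && typeof raw === 'object') ? parseInt(raw.$numberInt, 10) : raw;
const priceWithGst = mrp * 1.18; // example calculation with an assumed rate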
null
[ "crud" ]
[ { "code": "", "text": "Hello Team,I’m trying to insert document(s) in the collection through Data API (insertOne/insertMany). But each time I’m getting an error“Failed to insert document: FunctionError: Failed to insert documents: bulk write exception: write errors: [Document failed validation]”But the same document I’m able to insert through Add Data Functionality (both through JSON file and insert document) . But after inserting the document, when i fetch it through Data API, same data gets failed while insert action through Data API.Also logs for Data API are not giving details about this error. Is there a way we can check what causing Data API insert errors in logs ?Thanks\nNikhil", "username": "Nikhil_Chawla" }, { "code": "", "text": "Hi @Nikhil_Chawla,Can you provide some sample code snippets and steps to reproduce this behaviour?But the same document I’m able to insert through Add Data Functionality (both through JSON file and insert document)In addition to that, could you clarify what you mean by “Add Data Functionality”?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hello @Jason_Tran ,Add Data Functionality I’m referring to here through the Compass tool. Please see below image for references.\nimage1448×598 24.6 KB\n\nimage850×513 7.22 KB\nIf you see second image, all 28 documents were inserted fine. But when these documents are inserted through Data API insert operation then they get failed because of below error :“Failed to insert document: FunctionError: Failed to insert documents: bulk write exception: write errors: [Document failed validation]”Thanks\nNikhil", "username": "Nikhil_Chawla" }, { "code": "db.getCollectionInfos()", "text": "Thanks for clarifying. Do you have the Data API insert operation details you could provide? You can copy and paste the request here whilst redacting any personal or sensitive information.In addition to that, could you provide the output of db.getCollectionInfos()?I’m trying to replicate this behaviour but it would be helpful if you can provide some exact steps to try reproduce this. If you are able to share any sample documents as well that you have attempted to insert via the Data API, this would be useful as well.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" } ]
MongoDB Data API for insert
2023-06-15T15:25:20.891Z
MongoDB Data API for insert
853
null
[]
[ { "code": "", "text": "I can’t able to see the result", "username": "SUGAPRIYAN_N_A" }, { "code": "", "text": "Hey @SUGAPRIYAN_N_A,If you haven’t received your result yet, kindly mail our Certification Team at [email protected] to request it and they’ll provide it to you.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "I tried but there is no reply status: open more than a day.", "username": "SUGAPRIYAN_N_A" }, { "code": "", "text": "Thank you for your patience. I’m glad that we were able to resolve your issue and provide you with your score report. As a note, the MongoDB certification team operates weekdays from 9am-6pm EST. Our team is located in the US and yesterday was a US holiday.To anyone else who is curious about the score report for certification exams, a breakdown of the exam categories and your score (as a percentage) in each of them is provided immediately after you’ve taken your exam. This screen also tells you whether or not you passed the exam. You will be sent a copy of this score report automatically upon completion. You can also request a copy by reaching out to the certification team.", "username": "Aiyana_McConnell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I get the result breakdown percentage?
2023-06-18T13:46:48.519Z
How can I get the result breakdown percentage?
772
null
[]
[ { "code": "", "text": "Hello, I’m trying to get 50$ credit using my code provided with GitHub Student Pack, but I’m getting the error I mentioned in the title. Is there anyone who can assist? This code may be out of date, but I don’t remember using it.", "username": "enes" }, { "code": "", "text": "Hi Enes, I’m going to reach out to you via DMs to try to resolve this issue.", "username": "Aiyana_McConnell" }, { "code": "", "text": "I have the same problem can you help me by DM @Aiyana_McConnell ?", "username": "Nicolas_Lavanderos" }, { "code": "", "text": "Hi Nicolas, sorry you’re experiencing this issue. Reaching out to you via DM right now.", "username": "Aiyana_McConnell" }, { "code": "", "text": "I also have the same problem, moderators told me to contact you, can you help me in DM’s @Aiyana_McConnell", "username": "mqrkelich_N_A" }, { "code": "", "text": "Hi there,Welcome to the forums! Yes, I’ll reach out to you via DM.", "username": "Aiyana_McConnell" }, { "code": "", "text": "As an update to anyone who might be experiencing this issue: Likely what is happening is the Atlas promo code has expired.To avoid this issue, do not generate your Atlas code on your MongoDB for Students account until you are ready to apply it to your Atlas instance.If you’re encountering this issue, feel free to post in this thread or create a topic in the forums with your issue and we will reach out to you to help resolve it.", "username": "Aiyana_McConnell" }, { "code": "", "text": "Bruh how did it expire? Doesnt it last like github student pack? And my pack didnt expire yet", "username": "mqrkelich_N_A" }, { "code": "", "text": "Unfortunately not. You typically have 6 months to apply the Atlas promotional code before it will expire. If you’ve already applied it to an Atlas instance, you have 12 months to use the credits. I will reach out to you about getting your code extended.", "username": "Aiyana_McConnell" }, { "code": "", "text": "Same here, just generated a coupon but the same message, if you can contact me @Aiyana_McConnell ! ", "username": "Kilian_Pichard1" }, { "code": "", "text": "Yes, reaching out now.", "username": "Aiyana_McConnell" }, { "code": "", "text": "Hey Aiyana, I’m facing the same problem, can you please help me out?", "username": "Atiq_Israk" }, { "code": "", "text": "Indeed! I will reach out to via DM in a moment.", "username": "Aiyana_McConnell" }, { "code": "", "text": "Hi Aiyana, I have the same issue with my coupon, if possible could you help me out?", "username": "David_Dickinson" }, { "code": "", "text": "Hi David. Yes, I will DM you shortly.", "username": "Aiyana_McConnell" }, { "code": "", "text": "Hi Aiyana - I’m experincing the same trouble with my coupon, please can you assist?", "username": "Muhammed_Bulbulia" }, { "code": "", "text": "Hi Muhammed,Yes, DMing you shortly.", "username": "Aiyana_McConnell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
The coupon is either applied before start date or after end date
2023-03-13T15:21:52.696Z
The coupon is either applied before start date or after end date
2,192
null
[ "replication" ]
[ { "code": "db.serverStatus().connections\n{\n current: 49,\n available: 51151,\n totalCreated: 407521,\n active: 16,\n exhaustIsMaster: 3,\n exhaustHello: 12,\n awaitingTopologyChanges: 203193\n}\nhelloisMaster", "text": "Hello,I’m running a PSA replica set with mongod instances v4.4.5. The Primary and Arbiter instances are on the same server, all running on Ubuntu 20.04.4 .It’s come to my attention that when I check the connections from server status on both the Primary and Secondary instances, the value of connections awaiting topology changes is disproportionally large.I understood that this number reflectsThe number of clients currently waiting in a hello or isMaster request for a topology change.I have quite a few questions about this.", "username": "Max_Hermez" }, { "code": "", "text": "Did you find response for your questions? I have the exact questions.", "username": "Fabio_Santos" } ]
Mongod instance reports a high number of "awaitingTopologyChanges" connections
2022-08-16T22:24:03.154Z
Mongod instance reports a high number of “awaitingTopologyChanges” connections
1,924
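A minimal mongosh sketch for watching the counters from this thread over a few samples, to see whether awaitingTopologyChanges keeps climbing (this is an illustrative sketch; it assumes a user allowed to run serverStatus, and the sample count and 10-second interval are arbitrary):

for (let i = 0; i < 6; i++) {
  // connections is the same sub-document quoted in the thread above.
  const c = db.serverStatus().connections;
  print(`current=${c.current} active=${c.active} awaitingTopologyChanges=${c.awaitingTopologyChanges}`);
  sleep(10 * 1000); // sleep() is built into the shell; wait 10s between samples
}

Comparing successive samples shows whether the number is still growing or has plateaued, which is more informative than any single reading.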
null
[ "java" ]
[ { "code": "", "text": "I’m trying to connect my java application to my cloud cluster but I’m having connection timeout issues so it won’t make the connection. It looks like it’s security related to the connection handshake possibly.\nI don’t use package managers - I’m just trying to locate the jar file for the latest mongodb java driver.\nAnyone know where I can find this?", "username": "Keith_Pittner" }, { "code": "", "text": "The jar file for the latest driver release can be found on Maven: https://mvnrepository.com/artifact/org.mongodb/mongodb-driver-sync.Once you choose your desired version, there is a “Files” section which will contain the jar file for that version of the driver. Hope this helps!", "username": "Ashni_Mehta" }, { "code": "", "text": "Hello! thanks for the answer!I’ve downloaded the latest jar", "username": "Keith_Pittner" } ]
Java driver jar file
2023-06-20T13:55:48.722Z
Java driver jar file
717
null
[ "spring-data-odm" ]
[ { "code": "", "text": "I am trying to setup a project with spring to use mongodb.\nI have previously used nodejs to connect to mongodb and I can see my updates in my cluster but when I try to use spring, I dont see my documents.\nIf i query the db it returns what I added locally which means data is not replicated to the cluster.\nI see this prompt from spring and not sure why it is connecting locally. → “Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017”\nCan somebody help with this?\nAny pointers or assistance will be appreciatedMy URI is defined correclty\nspring:\ndata:\nmongodb:\nuri : mongodb+srv://username:[email protected]/?retryWrites=true&w=majority\ndatabase : Tasks", "username": "Enahoro_Oriero" }, { "code": "", "text": "Hello @Enahoro_Oriero ,Welcome to The MongoDB Community Forums! I see this prompt from spring and not sure why it is connecting locally.Spring Boot has a feature called “auto configuration”. It could be that the MongoAutoConfiguration is activated with default values, which point to localhost:27017. If you don’t want that behaviour, you can either configure the properties for MongoDB (see Spring Boot Reference Documentation for valid property keys) or disable the MongoAutoConfiguration.For more information, please refer Connection Guide for Java.You can also refer below tutorial for setup and configurationIn this tutorial, we demonstrate how Spring Boot provides for integration with MongoDB, connect to an Atlas cluster, and perform simple CRUD examples.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Spring writes to local mongodb but not cluster
2023-06-18T01:48:41.942Z
Spring writes to local mongodb but not cluster
661
https://www.mongodb.com/…d783b57c8603.png
[ "cloud-manager" ]
[ { "code": "", "text": "Hello MongoDB experts,Please, are there any plans to release MongoDB Agent binary also for Ubuntu 20.04 on ARM architecture?\nAccording to requirements, it is an unsupported combination of OS and architecture:(Possibly if there is any public road map which I can check?)We have our own AWS EC2 deployment with an installed MongoDB Community Edition using Ubuntu on ARM architecture to save some money and we don’t want to switch to Amazon Linux 2 or RHEL/Centos 8 just because of the Agent.Is there something we can do to help with this release? I suppose MongoDB Agent is not open sourced, so it cannot be compiled …", "username": "Tomas_65355" }, { "code": "", "text": "Hello @Tomas_65355 ,Welcome to The MongoDB Community Forums! Cloud Manager is a part of Enterprise Advanced and is needed to be setup for every node on the replica set.A competitive alternate for this can be using MongoDB Atlas as this can also provide monitoring. There are different tiers to choose from, some cheaper than others and one needs to decide according to the requirements if a shared tier will be enough or a dedicated tier is required and how much storage/ram and other resources are required for each node.Let me know, in case you have any further questions, I will be happy to help you! Regards,\nTarun", "username": "Tarun_Gaur" } ]
MongoDB Agent for Ubuntu 20.04 on ARM architecture
2023-06-12T17:01:46.768Z
MongoDB Agent for Ubuntu 20.04 on ARM architecture
704
null
[ "production", "c-driver" ]
[ { "code": "bson_array_as_canonical_extended_jsonbson_array_as_relaxed_extended_jsonmongoc_client_encryption_create_encrypted_collectionmongoc_collection_create_indexes_with_optsmongoc-statENABLE_BSONENABLE_BSON=SYSTEMUSE_SYSTEM_LIBBSON=TRUEENABLE_BSON=SYSTEMUSE_SYSTEM_LIBBSON=TRUE", "text": "Announcing 1.24.0 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.New Features:Language Standard Support:Platform Support:New Features:Fixes:Language Standard Support:Platform Support:Build Configuration:Thanks to everyone who contributed to this release.", "username": "Kevin_Albertson" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB C Driver 1.24.0 Released
2023-06-20T13:14:34.824Z
MongoDB C Driver 1.24.0 Released
798
null
[ "java" ]
[ { "code": "connector.class=com.mongodb.kafka.connect.MongoSourceConnector\nconnection.uri=mongodb+srv://mytestuser:[email protected]/?connectTimeoutMS=30000\nstartup.mode=latest\ntasks.max=1\ncollection=MyCollection\ndatabase=MyDatabase\npublish.full.document.only=true\noutput.schema.key={\"type\":\"record\",\"name\":\"keySchema\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}\noutput.format.key=schema\n\n2023-06-19T13:04:44.000+02:00\t[Worker-<XXX>] [2023-06-19 11:04:44,041] INFO Adding discovered server <XXX>:27017 to client view of cluster (org.mongodb.driver.cluster:71)\n\n2023-06-19T13:04:44.000+02:00\t[Worker-<XXX>] [2023-06-19 11:04:44,045] INFO Adding discovered server <XXX>:27017 to client view of cluster (org.mongodb.driver.cluster:71)\n\n2023-06-19T13:04:44.000+02:00\t[Worker-<XXX>] [2023-06-19 11:04:44,051] INFO Adding discovered server <XXX>:27017 to client view of cluster (org.mongodb.driver.cluster:71)\n\n2023-06-19T13:04:48.000+02:00\t[Worker-<XXX>] [2023-06-19 11:04:48,293] INFO Exception in monitor thread while connecting to server <XXX>:27017 (org.mongodb.driver.cluster:76)\n\n2023-06-19T13:04:48.000+02:00\t[Worker-<XXX>] com.mongodb.MongoSocketOpenException: Exception opening socket\n\n2023-06-19T13:04:48.000+02:00\t[Worker-<XXX>] at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70)\n\n2023-06-19T13:04:48.000+02:00\t[Worker-<XXX>] at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:180)\n\n2023-06-19T13:04:48.000+02:00\t[Worker-<XXX>] at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:193)\n\n2023-06-19T13:04:48.000+02:00\t[Worker-<XXX>] at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:157)\n\n2023-06-19T13:04:48.000+02:00\t[Worker-<XXX>] at java.base/java.lang.Thread.run(Thread.java:829)\n\n2023-06-19T13:04:48.000+02:00\t[Worker-<XXX>] Caused by: java.net.SocketTimeoutException: connect timed out\n", "text": "Hi,I’m experimenting with the official Mongo Kafka Source Connector. The Kafka cluster runs in AWS MSK, and so does the Connector. Mongo Cluster runs in Mongo Atlas version 6.06 and the connector mongo-kafka-connect-1.10.1-all.jar.Strange thing is that It has successfully connected a few times… But mostly it gets error (see logs below)Here is the connector cofig:From logs in CloudWatch:", "username": "Kristoffer_Almas" }, { "code": "", "text": "Make sure you connect over Private Link to MongoDB from MSK. 
See the steps in the blog: In this article, learn how to set up Amazon MSK, configure the MongoDB Connector for Apache Kafka, and how it can be used as both a source and sink for data integration with MongoDB Atlas running in AWS.", "username": "igor_alekseev" }, { "code": "2023-06-20T08:47:11.000+02:00 [Worker-XXX] [2023-06-20 06:47:11,487] INFO Adding discovered server pl-0-eu-west-1.XXX.mongodb.net:1025 to client view of cluster (org.mongodb.driver.cluster:71)

2023-06-20T08:47:11.000+02:00 [Worker-XXX] [2023-06-20 06:47:11,488] INFO Adding discovered server pl-0-eu-west-1.XXX.mongodb.net:1024 to client view of cluster (org.mongodb.driver.cluster:71)

2023-06-20T08:47:11.000+02:00 [Worker-XXX] [2023-06-20 06:47:11,491] INFO Adding discovered server pl-0-eu-west-1.XXX.mongodb.net:1026 to client view of cluster (org.mongodb.driver.cluster:71)

2023-06-20T08:47:26.000+02:00 [Worker-XXX] [2023-06-20 06:47:26,985] INFO AbstractConfig values:

2023-06-20T08:47:26.000+02:00 [Worker-XXX] (org.apache.kafka.common.config.AbstractConfig:361)
", "text": "Thanks. I tried this now and connected with the new private endpoint connection string from MSK Connect. It’s still not able to connect to the Atlas cluster, though. Logs: ", "username": "Kristoffer_Almas" }, { "code": "", "text": "OK - the security group for the VPC endpoint interface was missing inbound rules, so now it works.I would like to get it working with VPC Peering, though.", "username": "Kristoffer_Almas" }, { "code": "", "text": "Oh, great. I was about to suggest the reachability analyzer. Any particular reason for VPC peering? PrivateLink should be the first choice normally.", "username": "igor_alekseev" }, { "code": "", "text": "Just that it has a lower cost ", "username": "Kristoffer_Almas" } ]
Source Connector - Connectivity issues
2023-06-19T11:34:52.840Z
Source Connector - Connectivity issues
589
https://www.mongodb.com/…_2_502x1023.jpeg
[ "crud", "indexes" ]
[ { "code": "", "text": "When I update an indexed field, the index size grows so much. What’s the reason behind this? (Values of the indexed fields were ‘deneme’ in all the documents before the update.)\n2023-02-13_14-51-14 (2)765×1559 109 KB\n", "username": "Samet_Turgut" }, { "code": "", "text": "One thing i can think of is that old index tree is still there and yet to be deleted. Try running the updateMany a few more times and see if it keeps growing.", "username": "Kobe_W" }, { "code": "", "text": "yes, it keeps growing.", "username": "Samet_Turgut" }, { "code": "", "text": "If theValues of the indexed fields were ‘deneme’ in all the documents before the update.I would expectthe index size grows so muchBefore you had 1 key in your index that was pointing to all documents.After you have many keys in your index, each key pointing to a subset of documents.More keys in an index necessarily imply a bigger index size.", "username": "steevej" }, { "code": "", "text": "I got same problem, I find the index size keep growing in fast", "username": "JerryGss" }, { "code": "", "text": "I see the same thing. frequently updated index grows from 150 Mb to over 5 Gb in size. I update record timestamps, but some records can live for longer time. I guess this keeps index pages used and mongodb cannot release it. don’t think there is easy way out, apart for limiting retention period (I delete records after some time). otherwise some option to rebalance indices in mongo would be nice. not sure if doable.", "username": "Goran_Sliskovic" } ]
Index Size Growth on updateMany()
2023-02-13T17:02:05.735Z
Index Size Growth on updateMany()
1,369
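The key-count effect described above is easy to see with a rough mongosh sketch (the collection and field names here are invented for illustration; exact sizes will vary, and WiredTiger reclaims space for old index entries lazily, which is the other effect mentioned in the thread):

const docs = [];
for (let i = 0; i < 10000; i++) docs.push({ status: "deneme", n: i });
db.growthdemo.insertMany(docs); // every document shares one indexed value
db.growthdemo.createIndex({ status: 1 });
print("before:", JSON.stringify(db.growthdemo.stats().indexSizes));

// Give every document a distinct indexed value: the index now needs one key
// per document instead of a single key pointing at all of them.
db.growthdemo.updateMany({}, [ { $set: { status: { $toString: "$n" } } } ]);
print("after:", JSON.stringify(db.growthdemo.stats().indexSizes));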
null
[ "node-js", "crud" ]
[ { "code": "// Query for a movie that has the title 'Back to the Future'\n// const query = { title: 'Back to the Future' };\n// const fruit = await fruits.findOne(query);\nawait fruits.insertMany([\n {\n name: \"Apple\",\n score: 8,\n review: \"Great fruit\"\n },\n\n {\n name: \"Orange\",\n score: 6,\n review: \"Kinda Sour\"\n },\n \n {\n name: \"Banana\",\n score: 9,\n review: \"Great stuff!\"\n }\n \n])\nconsole.log(fruit);\n", "text": "const { MongoClient } = require(“mongodb”);\n//const assert = require(‘assert’);\n// Replace the uri string with your connection string.\nconst url = \" mongodb://localhost:27017\";const client = new MongoClient(url);//const dbName= ‘myproject’;async function run() {\ntry {\nconst database = client.db(‘fruitsDB’);\nconst fruits = database.collection(‘fruits’);} finally {\n// Ensures that the client will close when you finish/error\nawait client.close();\n}\n}\nrun().catch(console.dir);Error :\nC:\\Users\\USER\\Desktop\\fruitsDb\\node_modules\\mongodb-connection-string-url\\lib\\index.js:86\nthrow new MongoParseError(‘Invalid scheme, expected connection string to start with “mongodb://” or “mongodb+srv://”’);\n^MongoParseError: Invalid scheme, expected connection string to start with “mongodb://” or “mongodb+srv://”\nat new ConnectionString (C:\\Users\\USER\\Desktop\\fruitsDb\\node_modules\\mongodb-connection-string-url\\lib\\index.js:86:19)\nat parseOptions (C:\\Users\\USER\\Desktop\\fruitsDb\\node_modules\\mongodb\\lib\\connection_string.js:191:17)\nat new MongoClient (C:\\Users\\USER\\Desktop\\fruitsDb\\node_modules\\mongodb\\lib\\mongo_client.js:48:63)\nat Object. (C:\\Users\\USER\\Desktop\\fruitsDb\\app.js:6:16)\nat Module._compile (node:internal/modules/cjs/loader:1254:14)\nat Module._extensions…js (node:internal/modules/cjs/loader:1308:10)\nat Module.load (node:internal/modules/cjs/loader:1117:32)\nat Module._load (node:internal/modules/cjs/loader:958:12)\nat Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)\nat node:internal/main/run_main_module:23:47Node.js v18.16.0\n[nodemon] app crashed - waiting for file changes before starting…", "username": "Satvikmittal_Mittal" }, { "code": "const url = \" mongodb://localhost:27017\";const url = \"mongodb://localhost:27017\";", "text": "Please remove the space before “mongodb://” in the line of code const url = \" mongodb://localhost:27017\"; . It should be const url = \"mongodb://localhost:27017\"; . This correction should resolve the issue. If you have any further questions or encounter any other queries related to this, please let me know.", "username": "R_Hasan" }, { "code": "", "text": "It is not showing error but the database which I have created in Nodejs is not showing in mongodb", "username": "Satvikmittal_Mittal" } ]
I am getting an error connecting Node.js with MongoDB
2023-06-18T19:38:25.967Z
I am getting an error connecting Node.js with MongoDB
689
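On the last question in the thread (the database not appearing): MongoDB creates a database and its collections lazily, on the first write, and the run() shown above closes the client before anything is inserted inside the try block. A rough sketch of run() with the insert awaited before closing, reusing the names from the thread:

async function run() {
  try {
    const database = client.db('fruitsDB');
    const fruits = database.collection('fruits');
    // The database only shows up (e.g. in `show dbs`) once data is written.
    const result = await fruits.insertMany([
      { name: "Apple", score: 8, review: "Great fruit" },
      { name: "Orange", score: 6, review: "Kinda Sour" },
      { name: "Banana", score: 9, review: "Great stuff!" }
    ]);
    console.log(`Inserted ${result.insertedCount} documents`);
  } finally {
    // Close only after the awaited insert has finished.
    await client.close();
  }
}
run().catch(console.dir);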
null
[ "swift" ]
[ { "code": "The primary key property on a synchronized Realm must be named '_id' but found 'id'", "text": "Hello everyone,I have the following situation: I have a Swift project in which I want to integrate a bundled realm with static data from a package locally and a synced realm next to it. Both are independent of each other.When I use only the bundled realm in a non-synced test project, everything works fine.But as soon as I use the bundled realm in the project, in which the synced one is, there are problems with the schema of the attached bundled realm models. It seems that this one is checked like the synced one (rasing a schema error: The primary key property on a synchronized Realm must be named '_id' but found 'id') for the non-synced realm - and if you enable development mode on the server - it also wants to upload the model-schema of the non-synced realm…How can I prevent that the bundled realm is treated like the synced one in the realm client app? How do you manage that these two realms and their data model (schema) remain independent of each other for the realm app and the synced model-schema?", "username": "Dan_Ivan" }, { "code": "", "text": "Huh - that’s an interesting issue. I would guess it a result of how Realm is being accessed; sync vs bundled. Also, is this flex or partition sync.If it’s flex, you must also define what models are sync’ing. Can you include sample code how you’re accessing the local vs the sync realm, and also include the code for the flex sync subscriptions? Perhaps including the non-sync vs sync models may give us some insight.", "username": "Jay" }, { "code": " var settingsConf = new RealmConfiguration(\"Settings\");\n settingsConf.ObjectClasses = new Type[] {\n typeof(Settings),\n typeof(UserDetails),\n typeof(AppointmentCategoryTypes)\n };\n settingsConf.SchemaVersion = 5;\n", "text": "Hey @Dan_Ivan did you explicitly specify which classes are used in each Realm?As soon as you’re opening two Realms, all your declarations are assumed to be in both, unless you manually specify this in the config. This is compilation level - it doesn’t know which classes you will use per-Realm unless you say so. There’s no runtime analysis to determine which schemas need including.That would explain your error. eg:", "username": "Andy_Dent" }, { "code": "", "text": "It would be a nice extension to be able to do this by exclusion - assuming a small subset of classes used in one Realm (maybe bundled) alongside a main Sync realm. Otherwise you have to repeat the explicit setup above for each and every Realm.", "username": "Andy_Dent" }, { "code": "", "text": "@Andy_Dent : Thanks very much! That’s exactly what I was looking for!", "username": "Dan_Ivan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Problems with a bundled realm to be used alongside a synced one
2023-06-19T06:22:30.771Z
Problems with a bundled realm to be used alongside a synced one
643
null
[ "queries", "node-js", "crud" ]
[ { "code": "router.put('/api/v3/app/events/:id', (req, res) => {\n\n const filter = { _id: new ObjectId(req.params.id) };\n\n const options = { upsert: true };\n\n const updatedDoc = {\n $set: {\n name: req.params.body.name, tagline: req.params.body.tagline,\n schedule: req.params.body.schedule, description: req.params.body.description, files: req.params.body.files,\n moderator: req.params.body.moderator, category: req.params.body.category,\n sub_category: req.params.body.sub_category, rigor_rank: parseInt(req.params.body.rigor_rank)\n },\n }\n\n updateData(filter, updatedDoc, options)\n .then((result) => {\n console.log(${result.matchedCount} document(s) matched the filter, updated ${result.modifiedCount} document(s))\n res.status(200).end()\n\n })\n .catch((err) => {\n console.log(err)\n res.status(500).end()\n })\n}) const updateData = (filter,body,options) => {\n const collection = db.collection('events')\n const result = collection.updateOne(filter, body, options)\n return result;\n}\n", "text": "Hi All! I am learning how to fetch data from MongoDB using Express (Node). I am unable to perform updateOne() function as I am getting the error - TypeError: Cannot read property ‘name’ of undefined. Here’s the code snippet:", "username": "Geethika_S" }, { "code": " name: req.params.body.name, tagline: req.params.body.tagline,\n schedule: req.params.body.schedule, description: req.params.body.description, files: req.params.body.files,\n moderator: req.params.body.moderator, category: req.params.body.category,\n sub_category: req.params.body.sub_category\nreq.params.body.namereq.body.nameparamsbody// Route handler\napp.put('/api/v3/app/events/:id', async (req, res) => {\n try {\n const filter = { _id: new ObjectId(req.params.id) };\n const options = { upsert: true };\n\n const updatedDoc = {\n $set: {\n name: req.body.name,\n tagline: req.body.tagline,\n schedule: req.body.schedule,\n description: req.body.description,\n files: req.body.files,\n moderator: req.body.moderator,\n category: req.body.category,\n sub_category: req.body.sub_category,\n rigor_rank: parseInt(req.body.rigor_rank)\n },\n };\n\n const result = await updateData(filter, updatedDoc, options);\n console.log(`${result.matchedCount} document(s) matched the filter, updated ${result.modifiedCount} document(s)`);\n res.status(200).end();\n } catch (err) {\n console.log(err);\n res.status(500).end();\n }\n});\n\nconst updateData = async (filter, body, options) => {\n const collection = db.collection('events');\n const result = await collection.updateOne(filter, body, options);\n return result;\n};\n", "text": "Hey @Geethika_S,I guess instead of req.params.body.name, you should use req.body.name to access the properties of the request body. The params object is used specifically for extracting route parameters, while the body object is used to access the data sent in the request body.Here is the updated code snippet for your reference:Please let us know if you have any further questions.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error when updating record in MongoDB through Node.js
2023-06-19T09:43:15.100Z
Error when updating record in MongoDB through Node.js
504
https://www.mongodb.com/…e_2_1024x481.png
[ "sharding", "ops-manager" ]
[ { "code": "", "text": "Im deploying shard cluster using config as follows:apiVersion: mongodb.com/v1\nkind: MongoDB\nmetadata:\nname: mongo-shard\nspec:\nshardCount: 2\nmongodsPerShardCount: 3\nmongosCount: 1\nconfigServerCount: 3\nversion: “4.2.2-ent”\nopsManager:\nconfigMapRef:\nname: mongodb-project\ncredentials: mongo-api-keys\ntype: ShardedCluster\npersistent: trueBut status of my deployment is:\n$ kubectl get pods -n mongodb\n\nimage1082×509 9.2 KB\n$ kubectl describe pod/mongo-shard-0-0 -n mongodb\nType Reason Age From MessageWarning Unhealthy 2m1s (x8190 over 10h) kubelet Readiness probe failed:$ kubectl describe pod/mongo-shard-mongos-0 -n mongodb\nEvents:\nType Reason Age From MessageWarning Unhealthy 3m8s (x8180 over 10h) kubelet Readiness probe failed:Someone please help ", "username": "krishna_shedbalkar" }, { "code": "", "text": "Hi @krishna_shedbalkar and welcome to the MongoDB Community forum!!As mentioned in the Kubernetes documentations:Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup, or depend on external services after startup. In such cases, you don’t want to kill the application, but you don’t want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.The Readiness probe failure could possibly be resolved by increasing the readiness timeout value set for the pods in the deployment.yaml files.\nSo, if you could increase the value to higher value and see if the nodes/pods come up and start.However, to understand the issue in detail, could you help me with some information regarding the deployment:Lastly, I would recommend you to check the resource utilisation of the pods to ensure that they have enough resources allocated to them. 
If the pods are running out of memory or CPU, they may not be able to respond to readiness probes.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi Aasawari ,We are also facing same readiness probe problem .Below is the yaml file used for cluster deplpyment apiVersion: mongodb.com/v1\nkind: MongoDB\nmetadata:\nname: nwcc-sharded-cluster\nspec:\nshardCount: 1\nmongodsPerShardCount: 3\nmongosCount: 2\nconfigServerCount: 3\nversion: “5.0.7-ent”\nopsManager:\nconfigMapRef:\nname: nwcc-lab\ncredentials: nwcc-organization-secret\ntype: ShardedCluster\npersistent: truemongosPodSpec:\nnodeAffinity:\nrequiredDuringSchedulingIgnoredDuringExecution:\nnodeSelectorTerms:\n- matchExpressions:\n- key: node-role.kubernetes.io/mongodb\noperator: In\nvalues:\n- nwcc\npodTemplate:\nspec:\ncontainers:\n- name: mongodb-enterprise-database\nresources:\nlimits:\nmemory: 16G\npodAntiAffinityTopologyKey: “kubernetes.io/hostname”shardPodSpec:\nnodeAffinity:\nrequiredDuringSchedulingIgnoredDuringExecution:\nnodeSelectorTerms:\n- matchExpressions:\n- key: node-role.kubernetes.io/mongodb\noperator: In\nvalues:\n- nwcc\npodTemplate:\nspec:\ncontainers:\n- name: mongodb-enterprise-database\nresources:\nlimits:\nmemory: 64G\npodAntiAffinityTopologyKey: “kubernetes.io/hostname”configSrvPodSpec:\nnodeAffinity:\nrequiredDuringSchedulingIgnoredDuringExecution:\nnodeSelectorTerms:\n- matchExpressions:\n- key: node-role.kubernetes.io/mongodb\noperator: In\nvalues:\n- nwcc\npodTemplate:\nspec:\ncontainers:\n- name: mongodb-enterprise-database\nresources:\nlimits:\nmemory: 16G\npodAntiAffinityTopologyKey: “kubernetes.io/hostname”==================================================Below is the POD description:-[root@nwcc1-servicenode1 mongodb-procedure-manifests]# oc describe pod nwcc-sharded-cluster-config-0\nName: nwcc-sharded-cluster-config-0\nNamespace: mongodb\nPriority: 0\nNode: nwcc1-worker5.nwcc-wifi-analytics.wifi-analytics.singnet.com.sg/172.16.10.9\nStart Time: Mon, 12 Jun 2023 18:58:45 +0800\nLabels: app=nwcc-sharded-cluster-cs\ncontroller=mongodb-enterprise-operator\ncontroller-revision-hash=nwcc-sharded-cluster-config-6ccffbc774\npod-anti-affinity=nwcc-sharded-cluster-config\nstatefulset.kubernetes.io/pod-name=nwcc-sharded-cluster-config-0\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n[{\n“name”: “openshift-sdn”,\n“interface”: “eth0”,\n“ips”: [\n“10.131.1.191”\n],\n“default”: true,\n“dns”: {}\n}]\nk8s.v1.cni.cncf.io/networks-status:\n[{\n“name”: “openshift-sdn”,\n“interface”: “eth0”,\n“ips”: [\n“10.131.1.191”\n],\n“default”: true,\n“dns”: {}\n}]\nopenshift.io/scc: restricted-v2\nseccomp.security.alpha.kubernetes.io/pod: runtime/default\nStatus: Running\nIP: 10.131.1.191\nIPs:\nIP: 10.131.1.191\nControlled By: StatefulSet/nwcc-sharded-cluster-config\nInit Containers:\nmongodb-enterprise-init-database:\nContainer ID: cri-o://56abcca8beaa4dcb09a1ac266af32b06379bb15f651d4ceecba5b325fa1f27f6\nImage: registry.nwcc-wifi-analytics.wifi-analytics.singnet.com.sg:5000/mongodb/mongodb-enterprise-init-database-ubi:1.0.15\nImage ID: registry.nwcc-wifi-analytics.wifi-analytics.singnet.com.sg:5000/mongodb/mongodb-enterprise-init-database-ubi@sha256:50ad43c3172b335148ff9174426ccb20a8452e3bc8347ea419a3015fa65f390a\nPort: \nHost Port: \nState: Terminated\nReason: Completed\nExit Code: 0\nStarted: Mon, 12 Jun 2023 18:58:58 +0800\nFinished: Mon, 12 Jun 2023 18:58:58 +0800\nReady: True\nRestart Count: 0\nEnvironment: \nMounts:\n/opt/scripts from database-scripts (rw)\n/var/run/secrets/kubernetes.io/serviceaccount 
from kube-api-access-lx8q9 (ro)\nContainers:\nmongodb-enterprise-database:\nContainer ID: cri-o://4b647be008a6242df1cf61c66b9f4ff208081f6d041c78c530a0df3c5520e7d7\nImage: registry.nwcc-wifi-analytics.wifi-analytics.singnet.com.sg:5000/mongodb/mongodb-enterprise-database-ubi:2.0.2\nImage ID: registry.nwcc-wifi-analytics.wifi-analytics.singnet.com.sg:5000/mongodb/mongodb-enterprise-database-ubi@sha256:10eda5c39dda93a2d00ebbfd28e2c3cc5ea5e92337bd8ad539795affaad16d82\nPort: 27017/TCP\nHost Port: 0/TCP\nCommand:\n/opt/scripts/agent-launcher.sh\nState: Running\nStarted: Mon, 12 Jun 2023 18:58:59 +0800\nReady: False\nRestart Count: 0\nLimits:\nmemory: 16G\nRequests:\nmemory: 16G\nLiveness: exec [/opt/scripts/probe.sh] delay=10s timeout=30s period=30s #success=1 #failure=6\nReadiness: exec [/opt/scripts/readinessprobe] delay=5s timeout=1s period=5s #success=1 #failure=4\nStartup: exec [/opt/scripts/probe.sh] delay=1s timeout=30s period=20s #success=1 #failure=10\nEnvironment:\nAGENT_FLAGS: -logFile,/var/log/mongodb-mms-automation/automation-agent.log,\nBASE_URL: http://nwcc-opsmanager-svc.mongodb.svc.cluster.local:8080\nGROUP_ID: 6483123e672c203f9e99fa42\nLOG_LEVEL:\nMULTI_CLUSTER_MODE: false\nSSL_REQUIRE_VALID_MMS_CERTIFICATES: true\nUSER_LOGIN: yidelwie\nMounts:\n/data from data (rw,path=“data”)\n/journal from data (rw,path=“journal”)\n/mongodb-automation from agent (rw,path=“mongodb-automation”)\n/mongodb-automation/agent-api-key from agent-api-key (rw)\n/opt/scripts from database-scripts (ro)\n/tmp from agent (rw,path=“tmp”)\n/var/lib/mongodb-mms-automation from agent (rw,path=“mongodb-mms-automation”)\n/var/log/mongodb-mms-automation from data (rw,path=“logs”)\n/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lx8q9 (ro)\nConditions:\nType Status\nInitialized True\nReady False\nContainersReady False\nPodScheduled True\nVolumes:\ndata:\nType: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\nClaimName: data-nwcc-sharded-cluster-config-0\nReadOnly: false\nagent:\nType: EmptyDir (a temporary directory that shares a pod’s lifetime)\nMedium:\nSizeLimit: \nagent-api-key:\nType: Secret (a volume populated by a Secret)\nSecretName: 6483123e672c203f9e99fa42-group-secret\nOptional: false\ndatabase-scripts:\nType: EmptyDir (a temporary directory that shares a pod’s lifetime)\nMedium:\nSizeLimit: \nkube-api-access-lx8q9:\nType: Projected (a volume that contains injected data from multiple sources)\nTokenExpirationSeconds: 3607\nConfigMapName: kube-root-ca.crt\nConfigMapOptional: \nDownwardAPI: true\nConfigMapName: openshift-service-ca.crt\nConfigMapOptional: \nQoS Class: Burstable\nNode-Selectors: \nTolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists\nnode.kubernetes.io/not-ready:NoExecute op=Exists for 300s\nnode.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\nType Reason Age From MessageFollowing error received in pod logs in next comment :-Please suggest a work around", "username": "Piyush_Harshwal" }, { "code": "", "text": "{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:22.279+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:22.279] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:22.380+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:22.380] Server at 
nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:22.381+0000] [.warn] [metrics/collector/util.go:getPingStatus:84] [06:38:22.380] Failed to fetch replStatus for nwcc-sharded-cluster-config-0 : [06:38:22.380] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:23.301+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:23.301] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:23.402+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:23.402] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:23.402+0000] [.warn] [metrics/collector/util.go:getPingStatus:84] [06:38:23.402] Failed to fetch replStatus for nwcc-sharded-cluster-config-0 : [06:38:23.402] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:24.292+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:24.292] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:24.394+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:24.394] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:24.394+0000] [.warn] [metrics/collector/util.go:getPingStatus:84] [06:38:24.394] Failed to fetch replStatus for nwcc-sharded-cluster-config-0 : [06:38:24.394] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.293+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:25.293] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.395+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:25.395] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.395+0000] [.warn] [metrics/collector/util.go:getPingStatus:84] [06:38:25.395] Failed to fetch replStatus for nwcc-sharded-cluster-config-0 : [06:38:25.395] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.648+0000] [.info] [src/director/director.go:computePlan:278] [06:38:25.648] … process has a plan : Download,DownloadMongosh,Start,WaitAllRsMembersUp,RsInit,WaitFeatureCompatibilityVersionCorrect”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.649+0000] [.info] [src/director/director.go:tracef:806] [06:38:25.649] Running step: 
‘Download’ of move ‘Download’”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.649+0000] [.info] [src/director/director.go:tracef:806] [06:38:25.649] because “}\n{“logType”:“automation-agent-verbose”,“contents”:”[‘desiredState.FullVersion’ is not a member of ‘currentState.VersionsOnDisk’ (‘desiredState.FullVersion’={\"trueName\":\"5.0.7-ent\",\"gitVersion\":\"b977129dc70eed766cbee7e412d901ee213acbda\",\"modules\":[\"enterprise\"],\"major\":5,\"minor\":0,\"patch\":7}, ‘currentState.VersionsOnDisk’=)]”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.649+0000] [.info] [src/action/helpers.go:touchMarkerFile:793] [06:38:25.649] Marker file /var/lib/mongodb-mms-automation created”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.649+0000] [.info] [src/action/downloadmongo.go:downloadUngzipUntarMongoDb:294] [06:38:25.649] Starting to download and extract http://172.16.10.28:8080/mongodb/linux/mongodb-linux-x86_64-enterprise-rhel80-5.0.7.tgz into /var/lib/mongodb-mms-automation”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.653+0000] [.error] [src/util/download.go:downloadCustomClient:272] [06:38:25.653] Got 404 status code for url=http://172.16.10.28:8080/mongodb/linux/mongodb-linux-x86_64-enterprise-rhel80-5.0.7.tgz.”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.653+0000] [.error] [src/action/downloadmongo.go:downloadUngzipUntarMongoDb:313] [06:38:25.653] Error downloading url=http://172.16.10.28:8080/mongodb/linux/mongodb-linux-x86_64-enterprise-rhel80-5.0.7.tgz to /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-5.0.7-ent : [06:38:25.653] Got 404 status code for url=http://172.16.10.28:8080/mongodb/linux/mongodb-linux-x86_64-enterprise-rhel80-5.0.7.tgz.”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.653+0000] [.info] [src/action/downloadmongo.go:downloadMongoBinary:241] [06:38:25.653] Error downloading http://172.16.10.28:8080/mongodb/linux/mongodb-linux-x86_64-enterprise-rhel80-5.0.7.tgz : sleeping for 30 seconds and trying the download again.”}\n{“logType”:“automation-agent-verbose”,“contents”:“err = [06:38:25.653] Error downloading url=http://172.16.10.28:8080/mongodb/linux/mongodb-linux-x86_64-enterprise-rhel80-5.0.7.tgz to /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-5.0.7-ent : [06:38:25.653] Got 404 status code for url=http://172.16.10.28:8080/mongodb/linux/mongodb-linux-x86_64-enterprise-rhel80-5.0.7.tgz.”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.953+0000] [.info] [src/config/config.go:ReadClusterConfig:440] [06:38:25.953] Retrieving cluster config from http://nwcc-opsmanager-svc.mongodb.svc.cluster.local:8080/agents/api/automation/conf/v1/6483123e672c203f9e99fa42?av=12.0.14.7630&aos=linux&aa=x86_64&ab=64&ad=rhel83&ah=nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local&ahs=nwcc-sharded-cluster-config-0&at=1686567541084…”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:25.991+0000] [.info] [main/components/agent.go:LoadClusterConfig:277] [06:38:25.991] clusterConfig unchanged”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:26.293+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:26.293] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:26.394+0000] [.error] 
[src/mongoctl/processctl.go:RunCommand:1105] [06:38:26.394] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:26.394+0000] [.warn] [metrics/collector/util.go:getPingStatus:84] [06:38:26.394] Failed to fetch replStatus for nwcc-sharded-cluster-config-0 : [06:38:26.394] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:27.026+0000] [.info] [src/config/config.go:ReadClusterConfig:440] [06:38:27.026] Retrieving cluster config from http://nwcc-opsmanager-svc.mongodb.svc.cluster.local:8080/agents/api/automation/conf/v1/6483123e672c203f9e99fa42?av=12.0.14.7630&aos=linux&aa=x86_64&ab=64&ad=rhel83&ah=nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local&ahs=nwcc-sharded-cluster-config-0&at=1686567541084…”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:27.034+0000] [.info] [main/components/agent.go:LoadClusterConfig:277] [06:38:27.034] clusterConfig unchanged”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:27.293+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:27.293] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:27.395+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:27.395] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:27.395+0000] [.warn] [metrics/collector/util.go:getPingStatus:84] [06:38:27.395] Failed to fetch replStatus for nwcc-sharded-cluster-config-0 : [06:38:27.395] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:28.045+0000] [.info] [src/config/config.go:ReadClusterConfig:440] [06:38:28.045] Retrieving cluster config from http://nwcc-opsmanager-svc.mongodb.svc.cluster.local:8080/agents/api/automation/conf/v1/6483123e672c203f9e99fa42?av=12.0.14.7630&aos=linux&aa=x86_64&ab=64&ad=rhel83&ah=nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local&ahs=nwcc-sharded-cluster-config-0&at=1686567541084…”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:28.052+0000] [.info] [main/components/agent.go:LoadClusterConfig:277] [06:38:28.052] clusterConfig unchanged”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:28.294+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:28.294] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:28.396+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:28.396] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:28.396+0000] [.warn] [metrics/collector/util.go:getPingStatus:84] [06:38:28.396] Failed to fetch replStatus for nwcc-sharded-cluster-config-0 : [06:38:28.396] Server at 
nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:29.054+0000] [.info] [src/config/config.go:ReadClusterConfig:440] [06:38:29.054] Retrieving cluster config from http://nwcc-opsmanager-svc.mongodb.svc.cluster.local:8080/agents/api/automation/conf/v1/6483123e672c203f9e99fa42?av=12.0.14.7630&aos=linux&aa=x86_64&ab=64&ad=rhel83&ah=nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local&ahs=nwcc-sharded-cluster-config-0&at=1686567541084…”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:29.061+0000] [.info] [main/components/agent.go:LoadClusterConfig:277] [06:38:29.061] clusterConfig unchanged”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:29.295+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:29.295] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:29.396+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:29.396] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:29.396+0000] [.warn] [metrics/collector/util.go:getPingStatus:84] [06:38:29.396] Failed to fetch replStatus for nwcc-sharded-cluster-config-0 : [06:38:29.396] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:30.082+0000] [.info] [src/config/config.go:ReadClusterConfig:440] [06:38:30.082] Retrieving cluster config from http://nwcc-opsmanager-svc.mongodb.svc.cluster.local:8080/agents/api/automation/conf/v1/6483123e672c203f9e99fa42?av=12.0.14.7630&aos=linux&aa=x86_64&ab=64&ad=rhel83&ah=nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local&ahs=nwcc-sharded-cluster-config-0&at=1686567541084…”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:30.091+0000] [.info] [main/components/agent.go:LoadClusterConfig:277] [06:38:30.091] clusterConfig unchanged”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:30.295+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:30.295] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:30.396+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:30.396] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:30.396+0000] [.warn] [metrics/collector/util.go:getPingStatus:84] [06:38:30.396] Failed to fetch replStatus for nwcc-sharded-cluster-config-0 : [06:38:30.396] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:31.123+0000] [.info] [src/config/config.go:ReadClusterConfig:440] [06:38:31.122] Retrieving cluster config from 
http://nwcc-opsmanager-svc.mongodb.svc.cluster.local:8080/agents/api/automation/conf/v1/6483123e672c203f9e99fa42?av=12.0.14.7630&aos=linux&aa=x86_64&ab=64&ad=rhel83&ah=nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local&ahs=nwcc-sharded-cluster-config-0&at=1686567541084…”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:31.160+0000] [.info] [main/components/agent.go:LoadClusterConfig:277] [06:38:31.160] clusterConfig unchanged”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:31.296+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:31.296] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:31.397+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:31.397] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:31.397+0000] [.warn] [metrics/collector/util.go:getPingStatus:84] [06:38:31.397] Failed to fetch replStatus for nwcc-sharded-cluster-config-0 : [06:38:31.397] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:32.163+0000] [.info] [src/config/config.go:ReadClusterConfig:440] [06:38:32.163] Retrieving cluster config from http://nwcc-opsmanager-svc.mongodb.svc.cluster.local:8080/agents/api/automation/conf/v1/6483123e672c203f9e99fa42?av=12.0.14.7630&aos=linux&aa=x86_64&ab=64&ad=rhel83&ah=nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local&ahs=nwcc-sharded-cluster-config-0&at=1686567541084…”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:32.171+0000] [.info] [main/components/agent.go:LoadClusterConfig:277] [06:38:32.171] clusterConfig unchanged”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:32.297+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:32.297] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:32.398+0000] [.error] [src/mongoctl/processctl.go:RunCommand:1105] [06:38:32.398] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:32.398+0000] [.warn] [metrics/collector/util.go:getPingStatus:84] [06:38:32.398] Failed to fetch replStatus for nwcc-sharded-cluster-config-0 : [06:38:32.398] Server at nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local:27017 (local=false) is down”}\n{“logType”:“automation-agent-verbose”,“contents”:“[2023-06-13T06:38:33.217+0000] [.info] [src/config/config.go:ReadClusterConfig:440] [06:38:33.217] Retrieving cluster config from http://nwcc-opsmanager-svc.mongodb.svc.cluster.local:8080/agents/api/automation/conf/v1/6483123e672c203f9e99fa42?av=12.0.14.7630&aos=linux&aa=x86_64&ab=64&ad=rhel83&ah=nwcc-sharded-cluster-config-0.nwcc-sharded-cluster-cs.mongodb.svc.cluster.local&ahs=nwcc-sharded-cluster-config-0&at=1686567541084…”}", "username": "Piyush_Harshwal" }, { "code": "", "text": " Hi @Piyush_HarshwalIn general it is preferable to start a new discussion to keep the 
details of different environments/questions separate and improve the visibility of new discussions. That will also allow you to mark your topic as “Solved” when you resolve any outstanding questions. Mentioning the URL of an existing discussion on the forum will automatically create links between related discussions for other users to follow. Please have a look at How to write a good post/question for some ideas on best practices. I also recommend reading Getting Started with the MongoDB Community: README.1ST for some tips to help improve your community outcomes.Regards,\nAasawari", "username": "Aasawari" } ]
Sharded cluster deployment pods in unhealthy state forever
2023-04-02T06:48:20.346Z
Sharded cluster deployment pods in unhealthy state forever
1,622
null
[ "java", "crud", "transactions", "spring-data-odm" ]
[ { "code": "BulkOperations bulkOperations = mongoTemplate.bulkOps(BulkOperations.BulkMode.ORDERED, CollectionConstants.COLLECTION_NAME_EQD_FIXING_AUDIT);\nbulkOperations.updateOne(queryUpdates);\nbulkOperations.execute();\n", "text": "Can someone help on the below issue. While doing BulkOperations update, mongoDB throws WriteConflict error.Code:Note: Here queryUpdates are List<Pair<Query, Update>>. In the Query object passing “_id” i.e ObjectId and all the 30K Query updates has unique ObjectId but still getting WriteConflict error. Also no other transaction is happening.Error:Could not commit Mongo transaction for session [ClientSessionImpl@d9d19bd id = {“id”: {“$binary”:{“base64”: “vNL284T+Smeq2jcthOulSA==”, “subType”: “04”}}}, causallyConsistent = true, txActive = false, txNumber = 11, error = d != java.lang.Boolean].; nested exception is com.mongodb.MongoCommandException: Command failed with error 112 (WriteConflict): ‘WiredTigerRecordStore::insertRecord :: caused by :: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.’ on server x01btsypdb1a:27017. The full response is {“errorLabels”: [“TransientTransactionError”], “ok”: 0.0, “errmsg”: “WiredTigerRecordStore::insertRecord :: caused by :: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.”, “code”: 112, “codeName”: “WriteConflict”, “$clusterTime”: {“clusterTime”: {“$timestamp”: {“t”: 1686888951, “i”: 22173}}, “signature”: {“hash”: {“$binary”: {“base64”: “sIYH92YWxYoaLCjcH8CyjZiHV08=”, “subType”: “00”}}, “keyId”: 7189425660345974791}}, “operationTime”: {“$timestamp”:{“t”: 1686888951, “i”: 1}}}", "username": "JayaprakashNarayanan_Ramanathan" }, { "code": "", "text": "from the error it looks like all the 30k operations are happening inside the same transaction, and another concurrent transaction modifies something overlapping with this one, so you get an exception.otherwise if the 30k are each a separate transaction (and their _id are all unique) you should never get a write conflict.I’m not sure why that’s the behaviour. Maybe something related on the driver side. (perhaps that’s expected with “ordered”? no idea).", "username": "Kobe_W" }, { "code": "", "text": "I used @Transactional. All 30K updates within single transaction. Got error for both ORDERED and UNORDERED. Is there any tool or options to monitor transactions hitting DB?", "username": "JayaprakashNarayanan_Ramanathan" }, { "code": "", "text": "", "username": "Kobe_W" }, { "code": "", "text": "Thanks for the info. I will go through it.", "username": "JayaprakashNarayanan_Ramanathan" } ]
WriteConflict Error while doing BulkOperations update using Spring mongodb
2023-06-16T10:32:11.291Z
WriteConflict Error while doing BulkOperations update using Spring mongodb
824
null
[ "python" ]
[ { "code": "expiresAfterSeconds", "text": "I’m using the latest MongoDB community version. Using Pymongo I want to set the password expiration for the given user. I tried using the expiresAfterSeconds parameter in the command function with the updateUser query. But it gives an error that this parameter is unknown. I have to use Username/password auth mechanism.\nHow can I achieve this? please provide any pointers to the relevant doc.", "username": "Shaktisinh_Jhala" }, { "code": "", "text": "Hi @Shaktisinh_Jhala, the updateUser command does not give the option of setting a password expiration. Here is a relevant answer on the subject: Set user password expiry every 30 days - #5 by Stennie_X.", "username": "Steve_Silvester" }, { "code": "", "text": "@Steve_Silvester Thank you for the answer.My actual purpose is to deactivate the user for a defined timeframe and activate again after some time. Basically, I have created one password rotation script where I’m using two users to switch while changing other users’ passwords to avoid downtime. I want to restrict customers to use the older user once we create the alternate user. But this action should be performed after some time only. In short, I’m looking for a similar kind of functionality like ValidUntil clause of PostgresDBPlease let me know if we have any similar functionality to this in MongoDB", "username": "Shaktisinh_Jhala" }, { "code": "", "text": "That functionality was proposed in https://jira.mongodb.org/browse/SERVER-3197, but ultimately not implemented. Unfortunately you’d have to do the password rotation manually, using a CRON job or some other mechanism, unless you use LDAP or Kerberos to manage passwords externally.", "username": "Steve_Silvester" }, { "code": "", "text": "Have a look at https://www.vaultproject.io/ it offers some interesting credential rotation.Here is a quick tutorial on it:Use Vault's database secrets engine to dynamically generate, manage, and revoke MongoDB credentials for each application and user.", "username": "chris" } ]
Password expiration settings in community version using PyMongo
2023-06-15T11:30:26.426Z
Password expiration settings in community version using PyMongo
666
null
[ "crud" ]
[ { "code": "{\n _id: ....,\n members: [\n { \n _id: ....,\n email_statuses: [\n {\n _id: ....,\n prevalent_status: 'SENDING',\n statuses: {\n sent_at: ''\n }\n }\n ]\n }\n ]\n}\ndb.collection.updateOne({\n \"_id\": parentID\n},\n{\n$set: {\n\t\"members.$[e1].email_status.$[e2].statuses.sent_at\": \"2023-05-22 16:00:00\",\n\t\"members.$[e1].email_status.$[e2].prevalent_status\": {\n\t\t$cond: {\n\t\t\tif: {\n\t\t\t\t\"$members.$[e1].email_status.$[e2].prevalent_status\": {\n\t\t\t\t\t$nin: [\n\t\t\t\t\t\t\"ACCEPTED\",\n\t\t\t\t\t\t\"DELIVERED\",\n\t\t\t\t\t\t\"VIEWED\",\n\t\t\t\t\t\t\"VISITED\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t},\n\t\t\tthen: \"SENT\",\n\t\t\telse: \"$members.$[e1].email_status.$[e2].prevalent_status\"\n\t\t}\n\t}\n}\n},\n{\n\"arrayFilters\": [\n\t{\n\t\t\"e1._id\": memberID\n\t},\n\t{\n\t\t\"e2._id\": email_statusID\n\t}\n]\n})\n", "text": "II am trying to update specific fields in multilevel array entries that match specific ids. One of the changes has to based on a condition. I have managed to update the static value field correctly but I am not able to apply the conditional change.the collection items contain a members array, who’s members contain an email_status array, matching the dictionary id, member id, and email_status id, i need to update 2 fields in that specific email_status entry.document:With the curent code all the $cond block is set as the value of “prevalent_status”", "username": "Dario_Grilli" }, { "code": " email_statuses:$setemail_statusemail_statuses", "text": "Hi @Dario_Grilli,I have managed to update the static value field correctly but I am not able to apply the conditional change.To better understand what you are after, can you provide the following information:I also noted the following: email_statuses:The $set you provided references email_status as opposed to email_statuses. Is this a typo?Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" } ]
Nested array conditional update
2023-06-15T13:51:29.206Z
Nested array conditional update
494
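For what it’s worth, one way to express the conditional change is an update with an aggregation pipeline (MongoDB 4.2+), since $cond is an aggregation operator and is not evaluated inside a plain $set update document, and arrayFilters cannot carry conditions like this. A rough, untested sketch using the field names from the document above (parentID, memberID and emailStatusID are placeholders for the actual ids):

db.collection.updateOne(
  { _id: parentID },
  [
    {
      $set: {
        members: {
          $map: {
            input: "$members",
            as: "m",
            in: {
              $cond: [
                { $ne: [ "$$m._id", memberID ] },
                "$$m", // leave other members untouched
                { $mergeObjects: [ "$$m", {
                  email_statuses: {
                    $map: {
                      input: "$$m.email_statuses",
                      as: "s",
                      in: {
                        $cond: [
                          { $ne: [ "$$s._id", emailStatusID ] },
                          "$$s", // leave other statuses untouched
                          { $mergeObjects: [ "$$s", {
                            statuses: { $mergeObjects: [ "$$s.statuses", { sent_at: "2023-05-22 16:00:00" } ] },
                            prevalent_status: {
                              $cond: [
                                { $in: [ "$$s.prevalent_status", [ "ACCEPTED", "DELIVERED", "VIEWED", "VISITED" ] ] },
                                "$$s.prevalent_status", // keep the stronger status
                                "SENT"
                              ]
                            }
                          } ] }
                        ]
                      }
                    }
                  }
                } ] }
              ]
            }
          }
        }
      }
    }
  ]
)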
null
[ "aggregation" ]
[ { "code": "$lat$lngdb.dataofweek9of2023.aggregate([\n {\n $match: {\n \"data.featureName\": \"TripsBoard\",\n \"data.lat\": { $ne: null },\n \"data.lng\": { $ne: null }\n }\n },\n {\n $limit: 1\n },\n {\n $addFields: {\n lat: { $toDouble: \"$data.lat\" },\n lng: { $toDouble: \"$data.lng\" }\n }\n },\n {\n $lookup: {\n from: \"bangladesh_geojson\",\n let: {\n lat: \"$lat\",\n lng: \"$lng\"\n },\n pipeline: [\n {\n $unwind: \"$features\"\n },\n {\n $match: {\n \"features.geometry\": {\n $geoIntersects: {\n $geometry: {\n type: \"Point\",\n coordinates: [ 90.416, 23.7935 ]\n }\n }\n }\n }\n },\n {\n $project: {\n _id: 0,\n district: \"$features.properties.shapeName\"\n }\n }\n ],\n as: \"districts\"\n }\n }\n])\ndb.dataofweek9of2023.aggregate([\n {\n $match: {\n \"data.featureName\": \"TripsBoard\",\n \"data.lat\": { $ne: null },\n \"data.lng\": { $ne: null }\n }\n },\n {\n $limit: 1\n },\n {\n $addFields: {\n lat: { $toDouble: \"$data.lat\" },\n lng: { $toDouble: \"$data.lng\" }\n }\n },\n {\n $lookup: {\n from: \"bangladesh_geojson\",\n let: {\n lat: \"$lat\",\n lng: \"$lng\"\n },\n pipeline: [\n {\n $unwind: \"$features\"\n },\n {\n $match: {\n \"features.geometry\": {\n $geoIntersects: {\n $geometry: {\n type: \"Point\",\n coordinates: [ \"$$lat\", \"$$lng\" ]\n }\n }\n }\n }\n },\n {\n $project: {\n _id: 0,\n district: \"$features.properties.shapeName\"\n }\n }\n ],\n as: \"districts\"\n }\n }\n])\n{\n\t\"message\" : \"Point must only contain numeric elements\",\n\t\"ok\" : 0,\n\t\"code\" : 2,\n\t\"codeName\" : \"BadValue\"\n}\n", "text": "When attempting to use dynamic coordinates ($lat and $lng ) in a MongoDB aggregation pipeline for geospatial queries, I encountered the error message “Point must only contain numeric elements.” This error suggests that the dynamic values are not being recognized as numeric elements within the geospatial query. As a result, the query fails to execute properly. I am seeking a solution to resolve this issue and successfully use dynamic values in geospatial queries in MongoDB.The working solution\"The non working solution:Error log:", "username": "R_Hasan" }, { "code": "$matchpipeline$lookup$expr$lookuplet$lookuppipeline$match$expr$expr$match$match$expr$match$lookuplatlong", "text": "Hi @R_Hasan,The $match stage you’ve provided in the pipeline for the $lookup won’t make use of the variables as it is requires use of the $expr operator. As per the $lookup documentation, specific to the let field:The let variables can be accessed by the stages in the pipeline, including additional $lookup stages nested in the pipeline.I’m not sure if this particular feedback post relates to what you’re after but could possibly help with the $match stage you’ve provided in the pipeline.You could also consider performing the query in 2 parts if that works for your use case(s). Off the top of my head, one example is to perform the stages prior to the $lookup as the first aggregation and then use the lat and long values retrieved from the initial aggregation to perform another query with those values with a second aggregation query. Although this example is based off what I assume the resulting fields would appear like since I am not sure what the input documents actually look like.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you, @Jason_Tran for your response. I appreciate your suggestion of using two separate aggregations to achieve the desired result. 
While that approach works well with MongoDB clients like NoSQLBooster or similar tools, my specific use case requires the use of Metabase as the MongoDB client.In Metabase, I don’t have the flexibility to write separate queries or utilize JavaScript-like operations to retrieve results from one query and use them in another. Instead, I need to find a solution that can be implemented within the constraints of the Metabase platform.I understand that your proposed solution involving separate queries may not be applicable in this scenario. I will continue to focus on finding a solution that works seamlessly within Metabase’s capabilities.If you have any further suggestions or recommendations specific to using Metabase as a MongoDB client, I would greatly appreciate your insights.Thank you once again for your understanding and assistance.Best regards,\nR_Hasan", "username": "R_Hasan" }, { "code": "$expr$match$expr$match$expr$match", "text": "Hello again, @Jason_Tran,After considering your feedback and suggestions, I wanted to ask if you could provide guidance regarding the use of the $expr operator inside the $match stage. As you mentioned, the $expr operator allows the use of aggregation expressions within the $match syntax, and it is necessary to access the variables defined in the let field.Since my use case involves working with Metabase as the MongoDB client, I am looking for a solution that can be implemented within the limitations of the Metabase platform. It would be immensely helpful if you could provide an example or further information on utilizing the $expr operator inside the $match stage.Thank you in advance for your assistance and support. I truly appreciate your expertise and willingness to help.Here is the link for the geoJson file I’ve imported as the collection named bangladesh_geojson Bangladesh GeoJSON", "username": "R_Hasan" }, { "code": "$expr$match$expr$matchpipeline$lookuplet{ order_item: \"$item\", order_qty: \"$ordered\" }$match", "text": "Hi @R_Hasan,It would be immensely helpful if you could provide an example or further information on utilizing the $expr operator inside the $match stage.There is an example of $expr used within the $match stage of the pipeline field in the $lookup documented here. You can see from that particular example, the let field variables: { order_item: \"$item\", order_qty: \"$ordered\" } are used in the $match stage.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
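For reference, here is a minimal sketch of the documented $lookup pattern that the reply above points to, in which the let variables are consumed through $expr inside the pipeline's $match. The orders/warehouses collections and their fields come from the $lookup documentation example, not from this thread. Note that geospatial query operators such as $geoIntersects are not aggregation expressions, so they cannot reference the $$ variables through $expr; only expression operators like $eq and $gte can.

db.orders.aggregate([
  {
    $lookup: {
      from: "warehouses",
      let: { order_item: "$item", order_qty: "$ordered" },
      pipeline: [
        {
          $match: {
            // $expr is what makes the let variables visible to this stage
            $expr: {
              $and: [
                { $eq: ["$stock_item", "$$order_item"] },
                { $gte: ["$instock", "$$order_qty"] }
              ]
            }
          }
        },
        { $project: { stock_item: 0, _id: 0 } }
      ],
      as: "stockdata"
    }
  }
])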
Error when using dynamic coordinates in MongoDB aggregation pipeline for geospatial queries
2023-06-18T20:06:18.601Z
Error when using dynamic coordinates in MongoDB aggregation pipeline for geospatial queries
387
null
[ "cxx", "c-driver" ]
[ { "code": "mongo-c-driver.1.23.5~/mongo-c-driver-1.23.5/cmake-buildC:\\msys64\\home\\Mingtendo\\mongo-c-driver-1.23.5C:\\msys64\\usr\\bin\\bash.execmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF-- Building for: Ninja\n-- The C compiler identification is GNU 13.1.0\n-- Detecting C compiler ABI info\n-- Detecting C compiler ABI info - done\n-- Check for working C compiler: C:/msys64/mingw64/bin/cc.exe - skipped\n-- Detecting C compile features\n-- Detecting C compile features - done\n-- Looking for a CXX compiler\n-- Looking for a CXX compiler - C:/msys64/mingw64/bin/c++.exe\n-- The CXX compiler identification is GNU 13.1.0\n-- Detecting CXX compiler ABI info\n-- Detecting CXX compiler ABI info - done\n-- Check for working CXX compiler: C:/msys64/mingw64/bin/c++.exe - skipped\n-- Detecting CXX compile features\n-- Detecting CXX compile features - done\n-- No CMAKE_BUILD_TYPE selected, defaulting to RelWithDebInfo\n-- Performing Test HAVE_LLD_LINKER_SUPPORT\n-- Performing Test HAVE_LLD_LINKER_SUPPORT - Failed\nfile VERSION_CURRENT contained BUILD_VERSION 1.23.5\n-- Build and install static libraries\n -- Using bundled libbson\nlibbson version (from VERSION_CURRENT file): 1.23.5\n-- Looking for snprintf\n-- Looking for snprintf - found\n-- Performing Test BSON_HAVE_TIMESPEC\n-- Performing Test BSON_HAVE_TIMESPEC - Success\n-- struct timespec found\n-- Looking for gmtime_r\n-- Looking for gmtime_r - not found\n-- Looking for rand_r\n-- Looking for rand_r - not found\n-- Looking for strings.h\n-- Looking for strings.h - found\n-- Looking for strlcpy\n-- Looking for strlcpy - not found\n-- Looking for stdbool.h\n-- Looking for stdbool.h - found\n-- Looking for clock_gettime\n-- Looking for clock_gettime - found\n-- Looking for strnlen\n-- Looking for strnlen - found\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success\n-- Found Threads: TRUE \nlibmongoc version (from VERSION_CURRENT file): 1.23.5\n-- Searching for zlib CMake packages\n-- Found ZLIB: C:/msys64/mingw64/lib/libz.dll.a (found version \"1.2.13\") \n-- zlib found version \"1.2.13\"\n-- zlib include path \"C:/msys64/mingw64/include\"\n-- zlib libraries \"C:/msys64/mingw64/lib/libz.dll.a\"\n-- Looking for include file unistd.h\n-- Looking for include file unistd.h - found\n-- Looking for include file stdarg.h\n-- Looking for include file stdarg.h - found\n-- Searching for compression library zstd\n-- Found PkgConfig: C:/msys64/mingw64/bin/pkg-config.exe (found version \"1.8.0\") \n-- Checking for module 'libzstd'\n-- Found libzstd, version 1.5.5\n-- Found zstd version 1.5.5 in \n-- Looking for sys/types.h\n-- Looking for sys/types.h - found\n-- Looking for stdint.h\n-- Looking for stdint.h - found\n-- Looking for stddef.h\n-- Looking for stddef.h - found\n-- Check size of socklen_t\n-- Check size of socklen_t - done\n-- Looking for sched_getcpu\n-- Looking for sched_getcpu - not found\n-- Searching for compression library header snappy-c.h\n-- Not found (specify -DCMAKE_INCLUDE_PATH=/path/to/snappy/include for Snappy compression)\n-- No ICU library found, SASLPrep disabled for SCRAM-SHA-256 authentication.\n-- If ICU is installed in a non-standard directory, define ICU_ROOT as the ICU installation path.\nSearching for libmongocrypt\n-- libmongocrypt not found. 
Configuring without Client-Side Field Level Encryption support.\n-- Performing Test MONGOC_HAVE_SS_FAMILY\n-- Performing Test MONGOC_HAVE_SS_FAMILY - Failed\n-- Compiling against Secure Channel\n-- Compiling against Windows SSPI\n-- Building with MONGODB-AWS auth support\n-- Build files generated for:\n-- build system: Ninja\n-- Configuring done (14.4s)\n-- Generating done (0.4s)\n-- Build files have been written to: C:/msys64/home/Mingtendo/mongo-c-driver-1.23.5/cmake-build\nsudo cmake --build . --target install$ sudo cmake --build . --target install\nFailed to create ConsoleBuf!\nsetActiveInputCodepage failed!\nFailed to create ConsoleBuf!\nsetActiveInputCodepage failed!\n[1/505] Building C object src/libbson/CMakeFiles/bson_shared.dir/src/bson/bson-error.c.obj\n[2/505] Building C object src/libbson/CMakeFiles/bson_shared.dir/src/bson/bson-md5.c.obj\n[3/505] Building C object src/libbson/CMakeFiles/bson_shared.dir/src/bson/bson-clock.c.obj\n[4/505] Building C object src/libbson/CMakeFiles/bson_shared.dir/src/bson/bson-memory.c.obj\n[5/505] Building C object src/libbson/CMakeFiles/bson_shared.dir/__/common/common-thread.c.obj\n\n-- 500 more lines of building files --\n\n[504/505] Install the project...\n-- Install configuration: \"RelWithDebInfo\"\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/share/mongo-c-driver/COPYING\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/share/mongo-c-driver/NEWS\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/share/mongo-c-driver/README.rst\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/share/mongo-c-driver/THIRD_PARTY_NOTICES\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/libbson-1.0.dll.a\n-- Installing: C:/Program Files (x86)/mongo-c-driver/bin/libbson-1.0.dll\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/libbson-static-1.0.a\n-- Installing: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-config.h\n-- Installing: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-version.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bcon.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-atomic.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-clock.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-cmp.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-compat.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-context.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-decimal128.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-endian.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-error.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-iter.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-json.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-keys.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-macros.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-md5.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-memory.h\n-- Up-to-date: C:/Program Files 
(x86)/mongo-c-driver/include/libbson-1.0/bson/bson-oid.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-prelude.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-reader.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-string.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-types.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-utf8.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-value.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-version-functions.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson/bson-writer.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libbson-1.0/bson.h\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/pkgconfig/libbson-1.0.pc\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/pkgconfig/libbson-static-1.0.pc\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/bson-1.0/bson-targets.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/bson-1.0/bson-targets-relwithdebinfo.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/bson-1.0/bson-1.0-config.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/bson-1.0/bson-1.0-config-version.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/libbson-1.0/libbson-1.0-config.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/libbson-1.0/libbson-1.0-config-version.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/libbson-static-1.0/libbson-static-1.0-config.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/libbson-static-1.0/libbson-static-1.0-config-version.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/libmongoc-1.0.dll.a\n-- Installing: C:/Program Files (x86)/mongo-c-driver/bin/libmongoc-1.0.dll\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/libmongoc-static-1.0.a\n-- Installing: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-config.h\n-- Installing: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-version.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-apm.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-bulk-operation.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-change-stream.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-client.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-client-pool.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-client-side-encryption.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-collection.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-cursor.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-database.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-error.h\n-- Up-to-date: C:/Program Files 
(x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-flags.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-find-and-modify.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-gridfs.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-gridfs-bucket.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-gridfs-file.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-gridfs-file-page.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-gridfs-file-list.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-handshake.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-host-list.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-init.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-index.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-iovec.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-log.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-macros.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-matcher.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-opcode.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-optional.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-prelude.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-read-concern.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-read-prefs.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-server-api.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-server-description.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-client-session.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-socket.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-tls-libressl.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-tls-openssl.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-buffered.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-file.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-gridfs.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-socket.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-topology-description.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-uri.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-version-functions.h\n-- Up-to-date: C:/Program Files 
(x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-write-concern.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-rand.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-tls.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-ssl.h\n-- Up-to-date: C:/Program Files (x86)/mongo-c-driver/include/libmongoc-1.0/mongoc.h\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/pkgconfig/libmongoc-1.0.pc\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/pkgconfig/libmongoc-static-1.0.pc\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/pkgconfig/libmongoc-ssl-1.0.pc\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/mongoc-1.0/mongoc-targets.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/mongoc-1.0/mongoc-targets-relwithdebinfo.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/mongoc-1.0/mongoc-1.0-config.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/mongoc-1.0/mongoc-1.0-config-version.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/libmongoc-1.0/libmongoc-1.0-config.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/libmongoc-1.0/libmongoc-1.0-config-version.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/libmongoc-static-1.0/libmongoc-static-1.0-config.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/libmongoc-static-1.0/libmongoc-static-1.0-config-version.cmake\n-- Installing: C:/Program Files (x86)/mongo-c-driver/lib/cmake/libmongoc-static-1.0/libmongoc-static-1.0-config-version.cmake\nCMake Warning (dev) at generate_uninstall/cmake_install.cmake:55:\n Syntax Warning in cmake code at column 36\n\n Argument not separated from preceding token by whitespace.\nCall Stack (most recent call first):\n cmake_install.cmake:57 (include)\nThis warning is for project developers. Use -Wno-dev to suppress it.\n\nmongo-c-driver was unexpected at this time.\n-- Installing: C:/Program Files (x86)/mongo-c-driver/share/mongo-c-driver/uninstall.cmd\nC:\\msys64\\home\\Mingtendo$ cmake -DBOOST_ROOT='C:\\boost_1_82_0' -DCMAKE_PREFIX_PATH='C:/Program Files (x86)/mongo-c-driver/' -DCMAKE_INSTALL_PREFIX='C:\\Program Files\\mongo-cxx-driver' ..\n-- No build type selected, default is Release\n-- Auto-configuring bsoncxx to use MNMLSTC for polyfills since C++17 is inactive\nbsoncxx version: 3.7.2\nfound libbson version 1.23.5\nmongocxx version: 3.7.2\nfound libmongoc version 1.23.5\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success\n-- Found Threads: TRUE \n-- Build files generated for:\n-- build system: Ninja\n-- Configuring done (1.4s)\n-- Generating done (0.5s)\nCMake Warning (dev):\n Policy CMP0058 is not set: Ninja requires custom command byproducts to be\n explicit. Run \"cmake --help-policy CMP0058\" for policy details. 
Use the\n cmake_policy command to set the policy and suppress this warning.\n\n This project specifies custom command DEPENDS on files in the build tree\n that are not specified as the OUTPUT or BYPRODUCTS of any\n add_custom_command or add_custom_target:\n\n .gitignore\n VERSION_CURRENT\n\n For compatibility with versions of CMake that did not have the BYPRODUCTS\n option, CMake is generating phony rules for such files to convince 'ninja'\n to build.\n\n Project authors should add the missing BYPRODUCTS or OUTPUT options to the\n custom commands that produce these files.\nThis warning is for project developers. Use -Wno-dev to suppress it.\n\n-- Build files have been written to: C:/msys64/home/Mingtendo/mongo-cxx-driver-r3.7.2/build\ncmake --build . --target install$ cmake --build . --target install\n[1/390] Performing install step for 'EP_mnmlstc_core'\nFAILED: src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-install C:/msys64/home/Mingtendo/mongo-cxx-driver-r3.7.2/build/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-install \ncmd.exe /C \"cd /D C:\\msys64\\home\\Mingtendo\\mongo-cxx-driver-r3.7.2\\build\\src\\bsoncxx\\third_party\\EP_mnmlstc_core-prefix\\src\\EP_mnmlstc_core-build && C:\\msys64\\mingw64\\bin\\cmake.exe -P C:/msys64/home/Mingtendo/mongo-cxx-driver-r3.7.2/build/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-install-Release.cmake && C:\\msys64\\mingw64\\bin\\cmake.exe -E touch C:/msys64/home/Mingtendo/mongo-cxx-driver-r3.7.2/build/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-install\"\nCMake Error at C:/msys64/home/Mingtendo/mongo-cxx-driver-r3.7.2/build/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-install-Release.cmake:49 (message):\n Command failed: 1\n\n 'C:/msys64/mingw64/bin/cmake.exe' '--build' '.' '--target' 'install'\n\n See also\n\n C:/msys64/home/Mingtendo/mongo-cxx-driver-r3.7.2/build/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-install-*.log\n\n\nninja: build stopped: subcommand failed.\n$ cat EP_mnmlstc_core-install-*.log\nCMake Error at cmake_install.cmake:41 (file):\n file cannot create directory: C:/Program\n Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx/third_party/mnmlstc/share/cmake/core.\n Maybe need administrative privileges.\n\n\n[0/1] Install the project...\n-- Install configuration: \"Release\"\nFAILED: CMakeFiles/install.util\ncmd.exe /C \"cd /D C:\\msys64\\home\\Mingtendo\\mongo-cxx-driver-r3.7.2\\build\\src\\bsoncxx\\third_party\\EP_mnmlstc_core-prefix\\src\\EP_mnmlstc_core-build && C:\\msys64\\mingw64\\bin\\cmake.exe -P cmake_install.cmake\"\nninja: build stopped: subcommand failed.\n$ sudo cmake --build . 
--target install\nFailed to create ConsoleBuf!\nsetActiveInputCodepage failed!\nFailed to create ConsoleBuf!\nsetActiveInputCodepage failed!\n[1/390] Performing install step for 'EP_mnmlstc_core'\n[2/390] Performing fix-includes step for 'EP_mnmlstc_core'\n[3/390] Completed 'EP_mnmlstc_core'\n[4/390] Building CXX object src/bsoncxx/CMakeFiles/bsoncxx_shared.dir/private/itoa.cpp.obj\n[5/390] Building CXX object src/bsoncxx/CMakeFiles/bsoncxx_testing.dir/private/itoa.cpp.obj\n\n-- 385 more lines of building and linking files --\n\n[389/390] Install the project...\n-- Install configuration: \"Release\"\n-- Installing: C:/Program Files/mongo-cxx-driver/share/mongo-cxx-driver/LICENSE\n-- Installing: C:/Program Files/mongo-cxx-driver/share/mongo-cxx-driver/README.md\n-- Installing: C:/Program Files/mongo-cxx-driver/share/mongo-cxx-driver/THIRD-PARTY-NOTICES\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx\n-- Installing: C:/Program Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx/array\n-- Installing: C:/Program Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx/array/element.hpp\n-- Installing: C:/Program Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx/array/value.hpp\n-- Installing: C:/Program Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx/array/view.hpp\n\n-- Couple hundred more lines of installing stuff --\n\n-- Installing: C:/Program Files/mongo-cxx-driver/include/mongocxx/v_noabi/mongocxx/write_type.hpp\n-- Installing: C:/Program Files/mongo-cxx-driver/include/mongocxx/v_noabi/mongocxx/config/export.hpp\n-- Installing: C:/Program Files/mongo-cxx-driver/lib/cmake/libmongocxx-3.7.2/libmongocxx-config.cmake\n-- Installing: C:/Program Files/mongo-cxx-driver/lib/cmake/libmongocxx-3.7.2/libmongocxx-config-version.cmake\n-- Installing: C:/Program Files/mongo-cxx-driver/lib/libmongocxx.dll.a\n-- Installing: C:/Program Files/mongo-cxx-driver/bin/libmongocxx.dll\n-- Installing: C:/Program Files/mongo-cxx-driver/lib/cmake/mongocxx-3.7.2/mongocxx_targets.cmake\n-- Installing: C:/Program Files/mongo-cxx-driver/lib/cmake/mongocxx-3.7.2/mongocxx_targets-release.cmake\n-- Installing: C:/Program Files/mongo-cxx-driver/lib/cmake/mongocxx-3.7.2/mongocxx-config-version.cmake\n-- Installing: C:/Program Files/mongo-cxx-driver/lib/cmake/mongocxx-3.7.2/mongocxx-config.cmake\n-- Installing: C:/Program Files/mongo-cxx-driver/include/mongocxx/v_noabi/mongocxx/config/config.hpp\n-- Installing: C:/Program Files/mongo-cxx-driver/include/mongocxx/v_noabi/mongocxx/config/version.hpp\n-- Installing: C:/Program Files/mongo-cxx-driver/lib/pkgconfig/libmongocxx.pc\n****** B A T C H R E C U R S I O N exceeds STACK limits ******\nRecursion Count=289, Stack Usage=90 percent\n****** B A T C H PROCESSING IS A B O R T E D ******\n-- Installing: C:/Program Files/mongo-cxx-driver/share/mongo-cxx-driver/uninstall.cmd\nC:\\Program Files\\mongo-cxx-driver$ sudo cmake --build . 
--target install\nFailed to create ConsoleBuf!\nsetActiveInputCodepage failed!\nFailed to create ConsoleBuf!\nsetActiveInputCodepage failed!\n[0/1] Install the project...\n-- Install configuration: \"Release\"\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/share/mongo-cxx-driver/LICENSE\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/share/mongo-cxx-driver/README.md\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/share/mongo-cxx-driver/THIRD-PARTY-NOTICES\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx/array\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx/array/element.hpp\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx/array/value.hpp\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx/array/view.hpp\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/bsoncxx/v_noabi/bsoncxx/array/view_or_value.hpp\n\n-- Ditto for 200+ lines --\n\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/mongocxx/v_noabi/mongocxx/write_concern.hpp\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/mongocxx/v_noabi/mongocxx/write_type.hpp\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/mongocxx/v_noabi/mongocxx/config/export.hpp\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/lib/cmake/libmongocxx-3.7.2/libmongocxx-config.cmake\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/lib/cmake/libmongocxx-3.7.2/libmongocxx-config-version.cmake\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/lib/libmongocxx.dll.a\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/bin/libmongocxx.dll\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/lib/cmake/mongocxx-3.7.2/mongocxx_targets.cmake\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/lib/cmake/mongocxx-3.7.2/mongocxx_targets-release.cmake\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/lib/cmake/mongocxx-3.7.2/mongocxx-config-version.cmake\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/lib/cmake/mongocxx-3.7.2/mongocxx-config.cmake\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/mongocxx/v_noabi/mongocxx/config/config.hpp\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/include/mongocxx/v_noabi/mongocxx/config/version.hpp\n-- Up-to-date: C:/Program Files/mongo-cxx-driver/lib/pkgconfig/libmongocxx.pc\n****** B A T C H R E C U R S I O N exceeds STACK limits ******\nRecursion Count=288, Stack Usage=90 percent\n****** B A T C H PROCESSING IS A B O R T E D ******\n-- Installing: C:/Program Files/mongo-cxx-driver/share/mongo-cxx-driver/uninstall.cmd\n", "text": "Hello, I’m new to MongoDB, and I’m trying to install the C++ drivers to use for educational purposes. I’m trying to build the C drivers from mongo-c-driver.1.23.5, so that I can then build the C++ drivers on top I’m having issues doing both. I’m on Windows 10, 64-bit AMD with GNU 13.1.0, which I installed using MSYS2. By default, running cmake builds files for Ninja. Here is what I’ve tried so far.The last line looks like what I’m supposed to get, so despite some of the include files not being found I just push on. (Side note: I do have Visual Studio 17 2022 installed, and its generators for cmake via MSYS2, but attempting to build and install both C and C++ drivers didn’t seem to work either).I wind up with this. I’m not sure what this means, but it doesn’t sound good. 
Nevertheless, when I check that directory, it does seem like all the files are there on my C: drive. So to me it seems to have worked. I’m not sure if this is where things went wrong, but it could be. The guide doesn’t tell me what sort of output to expect for a successful build.This seems to have gone well, so time to install it.It fails spectacularly, but fortunately there’s a log file, so I check it out:So I run the command again, this time using sudo (reminder: I’m on Windows 10, and I have win-sudo installed):It says the batch processing is aborted, but when I check the destination folder, C:\\Program Files\\mongo-cxx-driver, it seems to be filled with the necessary .h/.hpp/.cpp files. So I run it again, maybe that will help?At this point I’m not sure what to do. Is my driver installed or not? Is it safe to use? I have no idea, because the guide doesn’t tell me what I should expect as the output of a successful operation. Any help would be much appreciated, and I apologize if this post was too long, since I’m not sure where things went wrong.", "username": "mingtendo_N_A" }, { "code": "", "text": "If you find the library installed at the given path (C:/Program Files/mongo-cxx-driver) as per your log, i.e. the include, bin and lib folders, it means the driver has been installed.\nYou can also look at this tutorial for reference - Getting Started with MongoDB and C++ | MongoDB", "username": "Rishabh_Bisht" }, { "code": "", "text": "This was exactly the sort of thing I was looking for to help me, but it didn’t seem to pop up in my Google searches anywhere. My steps shown here were basically hacked together from various pages of information on the drivers and how to build them from source. Thanks so much! This page should be more visible and easier to find.", "username": "mingtendo_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
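For readers hitting the same "Maybe need administrative privileges" failure: one way to sidestep it without sudo is to configure both builds with a user-writable install prefix. This is only a sketch; the paths are placeholders, not the ones used in the thread.

# from the C driver build directory
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX="$HOME/opt/mongo-c-driver" ..
cmake --build . --target install

# then point the C++ driver build at that prefix
cmake -DCMAKE_PREFIX_PATH="$HOME/opt/mongo-c-driver" -DCMAKE_INSTALL_PREFIX="$HOME/opt/mongo-cxx-driver" ..
cmake --build . --target install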
Difficulty installing mongo-c-driver and mongo-cxx-driver
2023-06-16T17:20:58.320Z
Difficulty installing mongo-c-driver and mongo-cxx-driver
1,040
null
[ "atlas-functions" ]
[ { "code": "", "text": "I have a function. Within that function, I call another function, but the function I call will not produce a log. I need this as I’m receiving an ERROR from the main function, but the ERROR is not from the main function, it’s from the one I called.That’s an example. When you’re calling multiple different function within one function, this becomes a nightmare to troubleshoot.", "username": "Protrakit_Support" }, { "code": "", "text": "I know it’s been a whole year since you had this issue, but I was wondering if you ever found a solution to this? I am having the same problem when using context.functions.execute()", "username": "Israel_Davila" } ]
Context.functions are not being sent to the Log after running
2022-02-16T15:29:01.033Z
Context.functions are not being sent to the Log after running
2,371
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 5.0.18 is out and is ready for production deployment. This release contains only fixes since 5.0.17, and is a recommended upgrade for all 5.0 users.Fixed in this release:5.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team", "username": "Britt_Snyman" }, { "code": "", "text": "What is the end of life date for this version i.e., MongoDB 5.0.18?", "username": "Sushil_Chandra" }, { "code": "", "text": "\nimage766×537 11.7 KB\n", "username": "tapiocaPENGUIN" }, { "code": "", "text": "As reference for future readers: Here you can find details about the MongoDB lifecycle schedules including EOL Dates .Best,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 5.0.18 is released
2023-05-18T20:17:06.357Z
MongoDB 5.0.18 is released
1,329
null
[]
[ { "code": "{\"t\":{\"$date\":\"2023-06-10T13:33:05.288+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22572, \"ctx\":\"MirrorMaestro\",\"msg\":\"Dropping all pooled connections\",\"attr\":{\"hostAndPort\":\"11.1.1.1:27017\",\"error\":\"HostUnreachable: Error connecting to 11.1.1.1:27017 :: caused by :: Too many open files\"}}ulimits;\n\ncore file size (blocks, -c) unlimited\ndata seg size (kbytes, -d) unlimited\nscheduling priority (-e) 0\nfile size (blocks, -f) unlimited\npending signals (-i) 337612\nmax locked memory (kbytes, -l) unlimited\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 200000\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nreal-time priority (-r) 0\nstack size (kbytes, -s) 8192\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 200000\nvirtual memory (kbytes, -v) unlimited\nfile locks (-x) unlimited\n", "text": "I have 3 node replica. One of them has crashed. Start to initial sync by deleting data directory. After 7-8 hours later primary mongo was restarted because of Too many open files.Two times initial sync failed. Is there a solution ? How many limit should I set ?{\"t\":{\"$date\":\"2023-06-10T13:33:05.288+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22572, \"ctx\":\"MirrorMaestro\",\"msg\":\"Dropping all pooled connections\",\"attr\":{\"hostAndPort\":\"11.1.1.1:27017\",\"error\":\"HostUnreachable: Error connecting to 11.1.1.1:27017 :: caused by :: Too many open files\"}}", "username": "Yunus_Dal" }, { "code": "", "text": "Hi @Yunus_Dal,recommended settings are:As mentioned from documentazione:", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "I have tried to set 64K but it is not work. if I set 64K, primary mongodb reach this limit in a hour and fire Too Many Open Files error.", "username": "Yunus_Dal" }, { "code": "", "text": "Hi @Yunus_Dal ,\nCheck that indeed the values are set correctly, under /proc/pid_process_mongo/limits.BR", "username": "Fabio_Ramohitaj" }, { "code": "open files (-n) 200000\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nreal-time priority (-r) 0\nstack size (kbytes, -s) 8192\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 200000\n", "text": "What are these? 200,000 is already larger than recommended value (65535) for both open file and max user processes.", "username": "Kobe_W" }, { "code": "", "text": "I increased limit. İt worked.core file size (blocks, -c) unlimited\ndata seg size (kbytes, -d) unlimited\nscheduling priority (-e) 0\nfile size (blocks, -f) unlimited\npending signals (-i) 337612\nmax locked memory (kbytes, -l) unlimited\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 900000\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nreal-time priority (-r) 0\nstack size (kbytes, -s) 8192\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 900000\nvirtual memory (kbytes, -v) unlimited\nfile locks (-x) unlimited", "username": "Yunus_Dal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Initial sync failed because of Too many open files
2023-06-10T14:37:55.225Z
Initial sync failed because of Too many open files
603
https://www.mongodb.com/…4_2_1024x570.png
[ "graphql", "schema-validation" ]
[ { "code": "{\n \"type\": \"array\",\n \"title\": \"agentsWithAds\",\n \"items\": {\n \"anyOf\": [\n {\n \"bsonType\": \"object\",\n \"properties\": { \n \"_id\": { \"bsonType\": \"objectId\" },\n \"advertiser\": { \"bsonType\": \"string\" }\n }\n },\n {\n \"bsonType\": \"object\",\n \"properties\": { \n \"_id\": { \"bsonType\": \"objectId\" },\n \"agent\": { \"bsonType\": \"string\"}\n }\n }\n ]\n }\n}\narrayanyOf", "text": "I have an Atlas Function which is returning an array of two different objects (each one is from a different Collection and has a different schema). I want to use the Realm GraphQL API to return results so I’m creating a custom resolver. The problem I’m having is in defining the Payload Type for that resolver. JSON Schema would normally allow for anyOf to allow for multiple acceptable types but this doesn’t seem to be accepted in Atlas Services.Here is a stripped down version of the two objects and the schema that I attempt to copy into the Payload Type field. (They are just simple strings for the example but in my actual use case they contain many other fields):The type is array and I want it to accept any of the stated object types .I receive the following error which prevents me from saving the custom resolver:\nmongodb-custom-resolver-payload-error2002×1116 41.7 KB\nHow can I get the GraphQL API to serve up results from two different object types?Is there a different syntax to use for the Payload Type which would allow me to do the same thing? Or is there another way altogether that will allow me to return two different object types from one resolver?ReferenceI’ve read through this topic in depth which suggests using the anyOf syntax but only on a field. The original poster didn’t seem to be able to get that to work either. I also didn’t understand what was meant by:The generated schema will ignore this field, but it can still be accessed via the custom resolver, (likely by having to define two different types).", "username": "Ian" }, { "code": "{\n \"bsonType\": \"object\",\n \"properties\": { \n \"_id\": { \"bsonType\": \"objectId\" },\n \"advertiser\": { \"bsonType\": \"string\" },\n \"agent\": { \"bsonType\": \"string\" }\n }\n}\n{ \"collectionName\": {\"bsonType\": \"string\"} }", "text": "I have to return an unique node for each type.So you’d need to return aI’ll use a field to write out the type, so something like { \"collectionName\": {\"bsonType\": \"string\"} } with either advertiser or agent strings. And yes, I’ve also returned a JSON.stringified object and parsed on the client.In all honesty, the features available in app services’ GraphQL leave much desired.", "username": "Jon_Shipman" } ]
GraphQL Custom Resolver: Custom Payload Type for different object types in an array
2023-01-21T13:22:25.598Z
GraphQL Custom Resolver: Custom Payload Type for different object types in an array
1,703
null
[ "change-streams" ]
[ { "code": "", "text": "I am working on some realtime data update application so in that I am thinking to use change stream to get the realtime db updates but while setting up change stream, I am unable to find any way by which I can set timeout for the change stream. I need timeout because if there will be no update in db then it will keep blocking further execution of the code. Is there any way by which this can be achieved?Thanks in advance!!", "username": "Ronak_Mangal" }, { "code": "", "text": "Drivers may support timeout feature, here’s one example for nodejs", "username": "Kobe_W" }, { "code": "", "text": "Thanks for you help @Kobe_W. I am using Python for my implementation. I thought of one solution by using thread to start change stream in background and set timeout for that thread, so once the thread timed out, change stream will not block further code. I am not sure whether this is good implementation or not. It will be a great help if you put some light on it.", "username": "Ronak_Mangal" }, { "code": "", "text": "https://pymongo.readthedocs.io/en/4.3.3/api/pymongo/change_stream.html", "username": "Kobe_W" } ]
How to set timeout in change stream
2023-06-18T12:51:55.427Z
How to set timeout in change stream
947
https://www.mongodb.com/…b_2_1024x581.png
[ "queries", "node-js", "mongoose-odm", "compass" ]
[ { "code": "findAndDelete", "text": "HiWe created a new collection and we inserted documents in batches (500 documents) with consecutive numbers (10000, 10001, 10002,… 10XXX). We checked on Mongo Atlas and Mongo Compass and the documents were inserted in order. I also checked the order during the next few days and the order was fine. However, I checked them this morning and the order is wrong.Screenshot 2023-06-19 at 10.46.021362×773 57 KBThe documents with the numbers 10217, 10218, 10219 and 10220 already exist but they are several pages after. As I said, we confirmed that the documents were sorted before. The field contactNumber is an index.Do you know if MongoDB “re-sorts” the documents in the collections (for performance purposes or whatever)? I’m looking at the Mongo documentation and Google, but it’s not clear.We are using Nodejs & Mongoose (5.12.3), and MongoDB 5.0.18.Our backend logic is querying this collection by findAndDelete to pick the first document with the first number available, so currently is not picking the lowest number available. We can add a sort by contactNumber to fix this but first, we want to understand why this “re-sort” happened.Thank you.", "username": "Jose_Antonio_Herrera" }, { "code": "sort()findAndDeletefindOneAndDelete()", "text": "Hey @Jose_Antonio_Herrera,Thank you for reaching out to the MongoDB Community forums.We checked on Mongo Atlas and Mongo Compass and the documents were inserted in order. I also checked the order during the next few days and the order was fine. However, I checked them this morning and the order is wrong.It’s worth noting that the order of storing documents in MongoDB is not guaranteed, nor is the view. It can change depending on the query you perform.Do you know if MongoDB “re-sorts” the documents in the collections (for performance purposes or whatever)? I’m looking at the Mongo documentation and Google, but it’s not clear.By default, MongoDB does not enforce any specific order on the documents within a collection. It’s important to note that without an explicit sort, the order in which documents are returned follows the natural order. The natural order is not guaranteed to match the insertion order, except in the special case of capped collections (which have significant usage restrictions ).The natural order is not a stable sort order; it is determined “as documents are found”:This ordering is an internal implementation feature, and you should not rely on any particular ordering of the documents.If you require a predictable sort order for the retrieved documents, you must include an explicit sort() in your query and ensure unique values for the sort key. The natural order is described as an internal implementation detail because the storage engine decides how to store data most efficiently and it may not correspond to the order of insertion.Our backend logic is querying this collection by findAndDelete to pick the first document with the first number available, so currently is not picking the lowest number available.Therefore, I recommend not depending on the way the documents are viewed on your end, as the order in which the documents are returned could be different. Please refer to the findOneAndDelete() behavior to learn more.I hope this clarifies your questions! Let us know if you need further help.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "findOneAndDelete()findOneAndDelete({}, {sort: {contactNumber: 1}})sort", "text": "Thank you! We won’t depends on the way the documents are viewed. 
Anyway, I’m still surprised that the “view ordering” suddenly changed from one day to the next.We launched a benchmark comparing findOneAndDelete() and findOneAndDelete({}, {sort: {contactNumber: 1}}) and the difference is minimal; it is even slightly faster with the sort, since contactNumber is an index.", "username": "Jose_Antonio_Herrera" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
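For completeness, here is the explicit-sort form discussed in this thread, sketched with the Node.js driver; the collection name is a placeholder, while the contactNumber field comes from the posts above.

// always claims the document with the lowest contactNumber;
// the existing index on contactNumber keeps the sort cheap
const result = await db.collection("contacts").findOneAndDelete(
  {},
  { sort: { contactNumber: 1 } }
);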
Does MongoDB re-sort the collections?
2023-06-19T09:49:14.745Z
Does MongoDB re-sort the collections?
439
null
[ "graphql" ]
[ { "code": "{\n\"properties\": {\n\"_id\": {\n \"bsonType\": \"objectId\"\n},\n\"prescriptions\": {\n \"bsonType\": \"array\",\n \"items\": [\n {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"left_eye\": {\n \"bsonType\": \"string\"\n }\n }\n }\n ]\n }\n }\n", "text": "I have the following schemaWhen I try to query the nested “prescriptions” object using GraphQL I get the following error:\n“Unknown argument “limit” on field “prescriptions” of type “Client””Is there anything I can do so that i can query/filter the nested array? So that I limit to get just 1 entry and order it by date added", "username": "Paula_farrugia" }, { "code": "", "text": "Sounds like you need a custom resolver. You can create a custom resolver and have it be a child of that schema. So you can add a field called “prescriptionsFilter” or something along that idea.", "username": "Jon_Shipman" } ]
How do you filter nested objects using the autogenerated GraphQL schema?
2021-08-02T05:20:09.430Z
How do you filter nested objects using the autogenerated GraphQL schema?
4,739
null
[ "atlas-search", "on-premises" ]
[ { "code": "", "text": "Wondering if the full text search on Atlas will remain as Atlas platform only facility or is it planned or already possible to use the full text search API in privately hosted clusters on Open stack cloud ?Thanks\nPronab", "username": "Pronab_Pal" }, { "code": "", "text": "Can you tell us more about your use case and your plans? We do not plan to offer Lucene on-premise at this time because of complexities associated with managing it. Is there a reason you require it on-premise?", "username": "Marcus" }, { "code": "", "text": "Can you confirm that the Atlas Search feature will never be available on-premise, or at least not in a near future ? Our business environment has very strict security constraints, which imposes self hosting solutions, even if Atlas provides data encryption solutions.\nIn the feature request page, this possibility is marked as ‘under review’. But your last message implies that Atlas Search will never be fully available locally, is it right ? If so we have to find other solutions.", "username": "Mounir_Farhat" }, { "code": "", "text": "I’m also wondering if Atlas Search will become available for deployment on other platforms than Atlas.\nLet’s say we are using a managed MongoDB service delivered by another cloud provider. Will this other cloud provider be able to ever provide a managed MongoDB Search similar to or the same as Atlas Search?\nWill you make it possible for other providers to host “Atlas Search” together with MongoDB?", "username": "Bobby_Nielsen" }, { "code": "", "text": "Hi Bobby, I’d never say never: it’s a question of how we can continue the velocity that comes from our ability to uniquely deliver and iterate rapidly on evolving Atlas Search in Atlas; the game of shrink-wrap software requires a ton of ongoing focus to get right that would be a distraction from our focus to deliver value for where most of our customers are as efficiently as possible. On the flip side, we are committed to bringing the power of search to more of the SDLC and are open minded/aiming to be creative here. Care to share more details about where you’d like to run?", "username": "Andrew_Davidson" }, { "code": "", "text": "Since we are in 2023… Is there any plan to offer to on-premisse? 90% of our customer are using in cloud with Atlas, but I can’t choose to use Atlas Search and ignore the 10% on-promisse.", "username": "Cleiton_dos_Santos_Garcia" }, { "code": "", "text": "Hello everyone,I’d like to join the ongoing discussion and express my interest in having the option to use Atlas Search on-premise.In our organization, we operate a number of applications based on MongoDB, running on our own hosting. The primary reason for this is strict data privacy and security policies we must adhere to, necessitating control over our data and where it is stored.I wish to emphasize that the possibility of using Atlas Search on-premise holds great significance for us. Atlas Search offers a range of features that would be very beneficial to our requirements, such as full-text search, numeric range search, geographic search, and faceted search. It significantly enhances the capabilities of data querying, and in our specific case, it could provide high value.I understand and respect the complexities involved in providing Atlas Search on-premise, but I’m convinced that there are organizations like ours that would be ready and capable of tackling this challenge. 
It would be wonderful if MongoDB could provide support or even a separate solution for on-premise scenarios.I hope this is taken into consideration, as Atlas Search on-premise, in our opinion, would bring substantial value to MongoDB and its users.Thank you for your attention and for your continuous work on improving MongoDB.Best Regards,\nElian", "username": "Elian_N_A" } ]
Atlas Search on Non-Atlas Hosting?
2021-05-01T15:09:12.695Z
Atlas Search on Non-Atlas Hosting?
4,940
null
[ "dot-net" ]
[ { "code": " [PrimaryKey]\n private ObjectId _id { get; } = ObjectId.GenerateNewId();\nprivate set;set;{ get; } [PrimaryKey]\n private ObjectId _id { get; set; } = ObjectId.GenerateNewId();\n [PrimaryKey]\n public ObjectId _id { get; private set; } = ObjectId.GenerateNewId();\n", "text": "I had a property defined aswhich caused the compilation warning:SharedWith.cs(12,26,12,30): warning : Fody/Realm: SharedWith.Id has [MapTo], [PrimaryKey] applied, but it’s not persisted, so those attributes will be ignored.It took me an annoyingly long time to realise what was wrong with the property declaration as I was writing it thinking as a C# developer - this should not be settable. I’d started with it as a public property with private set; and, for some reason, instead of specifying set; when I made it private, just left it as { get; }It took a long time looking at this declaration vs other working code to realise the important semantic difference.I’m adding this topic because I couldn’t find any discussion of the message elsewhere.The correct declaration in full:or could useIn case anyone’s still lost - Realm requires the object always be at least privately settable even though it has an initialiser.", "username": "Andy_Dent" }, { "code": "", "text": "Hi @Andy_Dent, thanks for your post.\nI agree that the error message is not very clear, we should probably improve it. I’ve created a ticket about it: Improve warning message for ignored property with attribute · Issue #3352 · realm/realm-dotnet · GitHub", "username": "papafe" } ]
Confusing error "it's not persisted, so those attributes will be ignored" in Realm dotnet SDK
2023-06-16T02:13:21.719Z
Confusing error “it’s not persisted, so those attributes will be ignored” in Realm dotnet SDK
506