image_url: string (lengths 113-131)
tags: sequence
discussion: list
title: string (lengths 8-254)
created_at: string (length 24)
fancy_title: string (lengths 8-396)
views: int64 (range 73-422k)
null
[ "queries" ]
[ { "code": "", "text": "I’m new to MongoDB. I’ve tried to add a new entry and I get this error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)", "username": "Egor_Gerimo" }, { "code": "new MongoDB\\Client($mongodb_uri, ['tls' => true, 'tlsCAFile' => $mongodb_cert_path]);", "text": "Are you getting this in Compass, mongosh, or in a program?\nIf your installation has a self-signed certificate, you have to tell MongoDB tooling where to get a copy of the acceptable certificate.\nE.g., in PHP, you’d connect with something like\nnew MongoDB\\Client($mongodb_uri, ['tls' => true, 'tlsCAFile' => $mongodb_cert_path]);", "username": "Jack_Woehr" }, { "code": "/Applications/Python 3.10/Install Certificates.command\n", "text": "I saw an answer that worked for me. It appears I had not yet installed the Python certificates on my Mac, so from the following path I went and installed them. Only change the version of your Python; after that everything worked fine for me.\nPS: I had been trying to solve the problem for half a day; I even asked ChatGPT.", "username": "Andres_Castaneda" }, { "code": "", "text": "Unfortunately that didn’t help", "username": "Egor_Gerimo" }, { "code": "from pymongo import MongoClient\ncluster = MongoClient('mongodb+srv://pystudy:<password>@cluster0.fyglnvy.mongodb.net/?retryWrites=true&w=majority')\ndb = cluster['test']\ncollection = db['test']\n\npost = {'_id': 0, 'name': 'Jeff', 'score': 9}\ncollection.insert_one(post)\n", "text": "Using PyCharm, macOS Ventura 13.1. Full code:", "username": "Egor_Gerimo" }, { "code": "mkdir test && cd test\npython3 -m venv .venv\n.venv/Scripts/activate\npip3 install pymongo\npip3 list\npython3", "text": "Open a new terminal and follow these steps (use “python” and “pip” if “python3” and “pip3” do not work):\nPackage Version\ndnspython 2.3.0\npip 22.2.2\npymongo 4.3.3\nsetuptools 63.2.0\nIf your code works this way, then the installation PyCharm uses might be broken and need a repair or a full reinstallation (or PyCharm itself is broken).\nIf not, then either your system has a problem or you changed some settings in Atlas (I can’t point to any in particular).", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This worked for me.\nThank you mate!", "username": "Joe_Montanari" }, { "code": "", "text": "I had similar problems and it felt like I tried everything. macOS Ventura 13.2.1, PyCharm 2022.3.3 Professional, Python 3.11.2.\nThe final fix… I rebooted! So, some of these changes don’t take effect until a complete reboot.", "username": "Jim_Olivi" }, { "code": "", "text": "For me - after a few hours I found this thread, which said that I need to open the client using certifi.\nAble to use Compass with no problems and the Atlas UI all looks good, but I couldn’t connect using Python.\nUsing Python 3.11, pymongo==3.11, Mac Monterey, VS Code.\nAs a newbie I am disappointed that I had to spend a lot of time on ChatGPT and Google to resolve this.\nNot a fantastic first impression, but happy to blame myself for something I missed - maybe others will miss whatever I missed too…", "username": "Info_Pixienetwork" }, { "code": "", "text": "Thank you mate! This worked for me.", "username": "suzyy_tom" }, { "code": "", "text": "I had similar problems and it felt like I tried everything...", "username": "suzyy_tom" }, { "code": "", "text": "Thanks, worked for me!", "username": "Adil_Azeez" } ]
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
2023-01-16T12:57:46.747Z
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
23,122
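A minimal sketch of the certifi-based fix mentioned in the thread above, assuming pymongo and certifi are installed (pip install pymongo certifi); the connection string below is a placeholder, not the poster's actual URI:

import certifi
from pymongo import MongoClient

# Placeholder Atlas URI; substitute your own cluster address and credentials.
uri = "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/?retryWrites=true&w=majority"

# Point TLS verification at certifi's CA bundle instead of relying on the
# (possibly missing) local certificate store that triggers CERTIFICATE_VERIFY_FAILED.
client = MongoClient(uri, tlsCAFile=certifi.where())

collection = client["test"]["test"]
collection.insert_one({"_id": 0, "name": "Jeff", "score": 9})

Running the macOS "Install Certificates.command" script mentioned above, or passing tlsCAFile as shown here, are two ways of giving Python a usable set of root certificates.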
null
[ "aggregation", "atlas-triggers" ]
[ { "code": "", "text": "I recently created a database trigger that executes on insert operations on a particular collection. However, when I checked the real-time metrics in my MongoDB Atlas dashboard, I noticed that the collection associated with the trigger is listed as the slowest operation with an “aggregate” action. The slowest operation has an execution time that starts at 999ms and decreases gradually.\nI am confused as to why this is happening and whether it’s related to the trigger I created. Could someone please explain what might be causing this issue and how to resolve it?", "username": "Alexander_Henry_Obispo_Buendia" }, { "code": "", "text": "We also have the same concern. When we asked this question at one of the Dev summits (MongoDB Pune, India, May 23), they said it’s a bug and will be rectified soon.", "username": "shashank-agrawal" } ]
Slowest operation appears in real-time metrics after creating database triggers
2023-04-20T16:00:54.855Z
Slowest operation appears in real-time metrics after creating database triggers
846
null
[ "queries" ]
[ { "code": "", "text": "Hey Team,\nI am thinking of ignoring updates that arrive with earlier timestamps. For example, I have a field called dateUpdated, which is a DateTime / Timestamp, and if I get concurrent updates I would like to keep the data with the latest timestamp.\nFor this, I would be using Mongo compare-and-set atomic operations, and ultimately I would be relying on Mongo clocks and my system clocks as well.\nMy question here: is it good practice to go with clock comparison, or shall I go with versioning?\nWhat are the Mongo recommendations on that?\nThanks,\nGheri Rupchandani", "username": "Gheri_Rupchandani1" }, { "code": "", "text": "It’s not a MongoDB-specific question. The rule is that the wall clock is not reliable, so you have to use your own “ordered” sequence to make decisions - something like a “Lamport clock”. This is the same question as “how to find out which write is the last one in a last-write-wins strategy?”", "username": "Kobe_W" } ]
Should we rely on timestamps or clocks?
2023-06-30T09:20:35.740Z
Should we rely on timestamps or clocks?
251
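A minimal sketch of the versioning alternative raised in the thread above: optimistic concurrency with a monotonically increasing version field, using an atomic compare-and-set instead of comparing wall-clock timestamps. It assumes pymongo, and the collection and field names are illustrative only:

from pymongo import MongoClient

# Illustrative names; any collection with a numeric "version" field works the same way.
collection = MongoClient()["appdb"]["documents"]

def update_if_unchanged(doc_id, expected_version, new_fields):
    # The filter matches only while the stored version is still the one this writer read,
    # so the first concurrent update wins and later ones simply match nothing.
    result = collection.update_one(
        {"_id": doc_id, "version": expected_version},
        {"$set": new_fields, "$inc": {"version": 1}},
    )
    return result.modified_count == 1  # False: another writer got there first; re-read and retry.

The ordering here comes from a counter the application controls, so correctness does not depend on clock synchronization between clients and servers.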
null
[ "transactions", "change-streams" ]
[ { "code": "", "text": "Hello,\nI need to know whether change stream events get generated as part of the original multi-document transaction issued by the business application code. My understanding is that they are not, since only committed transactions generate change stream events; that is, change stream events are generated only after the business transaction has been committed.\nCan somebody please help verify whether my understanding is correct? It’ll be helpful to point to the MongoDB documentation on this aspect.\nThanks!", "username": "Linda_Peng" }, { "code": "", "text": "I don’t recall this being explicitly mentioned in the manual, but I suppose it’s true. Nobody wants to get notified about an event that may never happen, so notify-after-commit is the way to go.", "username": "Kobe_W" } ]
Are change stream events generated as part of the same multi-document transaction that inserts/updates data?
2023-06-29T21:46:27.080Z
Are change stream events generated as part of the same multi-document transaction that inserts/updates data?
627
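A small sketch illustrating the notify-after-commit behaviour described in the thread above, assuming pymongo connected to a replica set; the host, database, and collection names are illustrative:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # transactions require a replica set
coll = client["test"]["orders"]

with coll.watch() as stream:
    with client.start_session() as session:
        with session.start_transaction():
            coll.insert_one({"item": "abc"}, session=session)
            # Nothing is visible on the change stream yet; the insert is still uncommitted.
        # The transaction commits when this block exits without error.
    event = stream.try_next()  # The insert event only becomes available after the commit.
    print(event)

try_next() may still return None if the event has not been delivered yet; next(stream) would block until it arrives.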
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 6.0.8-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 6.0.7. The next stable release 6.0.8 will be a recommended upgrade for all 6.0 users.\nFixed in this release:\n6.0 Release Notes | All Issues | All Downloads\nAs always, please let us know of any issues.\n– The MongoDB Team", "username": "Britt_Snyman" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 6.0.8-rc0 is released
2023-06-30T22:39:55.201Z
MongoDB 6.0.8-rc0 is released
697
null
[ "server", "transactions", "installation", "field-encryption", "storage" ]
[ { "code": "MacBook-Air-77:db username$ brew services start [email protected]\nError: Permission denied @ rb_sysopen - /Users/username/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\nMacBook-Air-77:~ username$ ls -alh /opt/homebrew/var/mongodb/*\n-rw------- 1 username admin 50B 30 Jun 13:36 /opt/homebrew/var/mongodb/WiredTiger\n-rw------- 1 username admin 21B 30 Jun 13:36 /opt/homebrew/var/mongodb/WiredTiger.lock\n-rw------- 1 username admin 1.4K 30 Jun 13:42 /opt/homebrew/var/mongodb/WiredTiger.turtle\n-rw------- 1 username admin 68K 30 Jun 13:42 /opt/homebrew/var/mongodb/WiredTiger.wt\n-rw------- 1 username admin 4.0K 30 Jun 13:42 /opt/homebrew/var/mongodb/WiredTigerHS.wt\n-rw------- 1 username admin 20K 30 Jun 13:42 /opt/homebrew/var/mongodb/_mdb_catalog.wt\n-rw------- 1 username admin 20K 30 Jun 13:42 /opt/homebrew/var/mongodb/collection-0--8131925775124359905.wt\n-rw------- 1 username admin 20K 30 Jun 13:42 /opt/homebrew/var/mongodb/collection-2--8131925775124359905.wt\n-rw------- 1 username admin 4.0K 30 Jun 13:42 /opt/homebrew/var/mongodb/collection-4--8131925775124359905.wt\n-rw------- 1 username admin 20K 30 Jun 13:42 /opt/homebrew/var/mongodb/index-1--8131925775124359905.wt\n-rw------- 1 username admin 20K 30 Jun 13:42 /opt/homebrew/var/mongodb/index-3--8131925775124359905.wt\n-rw------- 1 username admin 4.0K 30 Jun 13:42 /opt/homebrew/var/mongodb/index-5--8131925775124359905.wt\n-rw------- 1 username admin 4.0K 30 Jun 13:42 /opt/homebrew/var/mongodb/index-6--8131925775124359905.wt\n-rw------- 1 username admin 0B 30 Jun 13:42 /opt/homebrew/var/mongodb/mongod.lock\n-rw------- 1 username admin 20K 30 Jun 13:42 /opt/homebrew/var/mongodb/sizeStorer.wt\n-rw------- 1 username admin 114B 30 Jun 13:36 /opt/homebrew/var/mongodb/storage.bson\n\n/opt/homebrew/var/mongodb/diagnostic.data:\ntotal 72\ndrwx------ 3 username admin 96B 30 Jun 13:42 .\ndrwxr-xr-x 20 username admin 640B 30 Jun 13:42 ..\n-rw------- 1 username admin 32K 30 Jun 13:42 metrics.2023-06-30T17-36-47Z-00000\n\n/opt/homebrew/var/mongodb/journal:\ntotal 80\ndrwx------ 5 username admin 160B 30 Jun 13:36 .\ndrwxr-xr-x 20 username admin 640B 30 Jun 13:42 ..\n-rw------- 1 username admin 100M 30 Jun 13:42 WiredTigerLog.0000000001\n-rw------- 1 username admin 100M 30 Jun 13:36 WiredTigerPreplog.0000000001\n-rw------- 1 username admin 100M 30 Jun 13:36 WiredTigerPreplog.0000000002\nMacBook-Air-77:~ username$ \nsystemLog:\n destination: file\n path: /opt/homebrew/var/log/mongodb/mongo.log\n logAppend: true\nstorage:\n dbPath: /Users/username/data/db\nnet:\n bindIp: 127.0.0.1, ::1\n ipv6: true\nMacBook-Air-77:~ username$ ls -alh /Users/username/data/*\ntotal 496\ndrwxrwxrwx 20 username staff 640B 30 Jun 15:47 .\ndrwxrwxrwx 3 username staff 96B 30 Jun 14:56 ..\n-rwxrwxrwx 1 username staff 50B 30 Jun 14:58 WiredTiger\n-rwxrwxrwx 1 username staff 21B 30 Jun 14:58 WiredTiger.lock\n-rwxrwxrwx 1 username staff 1.4K 30 Jun 15:47 WiredTiger.turtle\n-rwxrwxrwx 1 username staff 48K 30 Jun 15:47 WiredTiger.wt\n-rwxrwxrwx 1 username staff 4.0K 30 Jun 15:47 WiredTigerHS.wt\n-rwxrwxrwx 1 username staff 20K 30 Jun 15:47 _mdb_catalog.wt\n-rwxrwxrwx 1 username staff 20K 30 Jun 15:47 collection-0--6682170025943106396.wt\n-rwxrwxrwx 1 username staff 36K 30 Jun 15:47 collection-2--6682170025943106396.wt\n-rwxrwxrwx 1 username staff 4.0K 30 Jun 14:59 collection-4--6682170025943106396.wt\ndrwxrwxrwx 11 username staff 352B 30 Jun 15:47 diagnostic.data\n-rwxrwxrwx 1 username staff 20K 30 Jun 15:47 index-1--6682170025943106396.wt\n-rwxrwxrwx 1 
username staff 36K 30 Jun 15:47 index-3--6682170025943106396.wt\n-rwxrwxrwx 1 username staff 4.0K 30 Jun 14:59 index-5--6682170025943106396.wt\n-rwxrwxrwx 1 username staff 4.0K 30 Jun 15:41 index-6--6682170025943106396.wt\ndrwxrwxrwx 5 username staff 160B 30 Jun 15:47 journal\n-rwxrwxrwx 1 username staff 0B 30 Jun 15:47 mongod.lock\n-rwxrwxrwx 1 username staff 36K 30 Jun 15:47 sizeStorer.wt\n-rwxrwxrwx 1 username staff 114B 30 Jun 14:58 storage.bson\nMacBook-Air-77:~ username$ \nMacBook-Air-77:~ username$ mongod --dbpath ~/data/db\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.907-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.925-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.925-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.930-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.930-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.930-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.930-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.930-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":99021,\"port\":27017,\"dbPath\":\"/Users/username/data/db\",\"architecture\":\"64-bit\",\"host\":\"MacBook-Air-77\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.930-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23352, \"ctx\":\"initandlisten\",\"msg\":\"Unable to resolve sysctl {sysctlName} (number) \",\"attr\":{\"sysctlName\":\"hw.cpufrequency\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.930-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23351, \"ctx\":\"initandlisten\",\"msg\":\"{sysctlName} unavailable\",\"attr\":{\"sysctlName\":\"machdep.cpu.features\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.930-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.6\",\"gitVersion\":\"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.930-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, 
\"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.6.0\"}}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.930-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"storage\":{\"dbPath\":\"/Users/username/data/db\"}}}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.932-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.932-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/Users/username/data/db\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:49.933-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7680M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.894-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":961}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.894-04:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.952-04:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.952-04:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.952-04:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22184, \"ctx\":\"initandlisten\",\"msg\":\"Soft rlimits for open file descriptors too low\",\"attr\":{\"currentValue\":2560,\"recommendedMinimum\":64000},\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.972-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.972-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.972-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.975-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.975-04:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/Users/username/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.984-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.984-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.984-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.984-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:50.984-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n^C{\"t\":{\"$date\":\"2023-06-30T16:05:56.817-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23377, \"ctx\":\"SignalHandler\",\"msg\":\"Received signal\",\"attr\":{\"signal\":2,\"error\":\"Interrupt: 2\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.817-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23381, \"ctx\":\"SignalHandler\",\"msg\":\"will terminate after current cmd ends\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.818-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"SignalHandler\",\"msg\":\"Stepping down the ReplicationCoordinator for 
shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.821-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"SignalHandler\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.821-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.821-04:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.821-04:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.822-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784903, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the LogicalSessionCache\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.822-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"SignalHandler\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.822-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23017, \"ctx\":\"listener\",\"msg\":\"removing socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.822-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.822-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.822-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.822-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784908, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"SignalHandler\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"SignalHandler\",\"msg\":\"Killing all operations for shutdown\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"SignalHandler\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all open transactions\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the 
ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20609, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684100, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down TTL collection monitor thread\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684101, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down TTL collection monitor thread\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784930, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the storage engine\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"SignalHandler\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.823-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22261, 
\"ctx\":\"SignalHandler\",\"msg\":\"Timestamp monitor shutting down\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.824-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.824-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.824-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:56.842-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"SignalHandler\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:57.076-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":234}}\n{\"t\":{\"$date\":\"2023-06-30T16:05:57.076-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"SignalHandler\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:57.076-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"SignalHandler\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:57.076-04:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:57.078-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"SignalHandler\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-06-30T16:05:57.079-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\nMacBook-Air-77:~ username$ \n\n", "text": "Tried to start mongodb community, but failed due to Permission denied.I already granted all permission to non-root user ‘username’Also, I set up dbpath to ~/data/db\nmongod.conf:Permission and privilege of ~data/dbHere is the log after executing mongod --dbpath ~/data/dbThanks", "username": "John_Au" }, { "code": "", "text": "Consider going back and reading the installation instructions and following them closely.", "username": "Jack_Woehr" } ]
Unable to start mongodb-community
2023-06-30T20:07:06.422Z
Unable to start mongodb-community
740
null
[ "node-js", "mongoose-odm", "mongodb-shell" ]
[ { "code": "", "text": "Hi, I want to ask why I am able to connect to my MongoDB Atlas database through the mongosh option but not able to do that with the Drivers option. Can anybody tell me? By the way, I am using Mongoose to connect to MongoDB Atlas in Node.js.", "username": "Guardian_Tech" }, { "code": "", "text": "If you are using the same URI from the same location it should work in both.\nIt is absolutely impossible for us to know what you are doing wrong and why you are “able to connect to mongodb atlas database through mongosh option but not able to do that with Drivers”.\nYou will need to share with us how you try to connect in both. You will need to share with us any error message you get.", "username": "steevej" } ]
Mongodb Atlas Connection Connects Through mongosh but not with Drivers option
2023-06-30T11:17:12.721Z
Mongodb Atlas Connection Connects Through mongosh but not with Drivers option
385
null
[ "java", "spring-data-odm" ]
[ { "code": "", "text": "Hello,\nI saw there are some Liquibase plugins for MongoDB, such as MongoDB - contribute.liquibase.com or GitHub - liquibase/liquibase-mongodb: MongoDB extension for Liquibase.\nAre they a good approach? Which one should I choose? Does someone use this kind of extension? How do they integrate?\nFor information, we have a Java Spring Boot application.\nThank you", "username": "Veronique_Loutelier" }, { "code": "- PJ", "text": "Hi @Veronique_Loutelier,\nI am the Developer Advocate at Liquibase and wanted to help with your question. There are two options within the Liquibase ecosystem:\nFor the Community extension, you found the right information, but here’s a better link for the tutorial:\nhttps://contribute.liquibase.com/extensions-integrations/directory/tutorials/mongodb\nAs of Liquibase 4.21+ with Liquibase Pro, you also have the option to use the MongoDB Pro Extension, which is officially supported by Liquibase. One of the key highlights is native support for MongoSH. Like Liquibase’s support for embedded SQL scripts, it now supports embedded MongoSH scripts. This means seamless developer adoption. New teams get the power of Liquibase using existing MongoSH scripts – no changes needed.\nYou can see more about the Liquibase MongoDB Pro Extension here:\nhttps://docs.liquibase.com/start/tutorials/mongodb-pro.html\nhttps://www.liquibase.com/videos/automate-mongodb-build-successful-cicd-in-the-cloud\nI hope that’s helpful. Feel free to ping me if you have any questions.\n- PJ\nDeveloper Advocate\npj at liquibase dot com", "username": "PJ_Julius" }, { "code": "Mongodb cannot execute liquibase.statement.core.CreateTableStatement\nUnable to locate Executor mongosh for changeset", "text": "Hi @PJ_Julius,\nI’m trying your Liquibase tutorial for MongoDB and I’m facing an error:\nMongodb cannot execute liquibase.statement.core.CreateTableStatement\nFor my second test I’m trying to use mongosh, and there I also face an error:\nUnable to locate Executor mongosh for changeset\neven though I followed the tutorial and added a Liquibase Pro licence.\nOut of curiosity, I also have a question: since when does Liquibase support MongoDB and also allow mongosh?\nThank you", "username": "Ayoub_AKIRA" } ]
What is the best liquibase plugin for mongo
2023-02-14T10:21:05.829Z
What is the best liquibase plugin for mongo
2,539
null
[ "aggregation" ]
[ { "code": "db.getCollection('room').aggregate(\n[\n {\n $lookup: {\n from: \"messages\",\n localField: \"_id\",\n foreignField: \"room.id\",\n as: \"firstMessage\",\n pipeline: [\n {\n $sort: {\n \"createdAt\": 1\n },\n },\n {\n $limit: 1\n },\n {\n $project:{\n _id: 0,\n createdAt: 1\n }\n }\n ]\n }\n },\n {\n $lookup: {\n from: \"messages\",\n localField: \"_id\",\n foreignField: \"room.id\",\n as: \"lastMessage\",\n pipeline: [\n {\n $sort: {\n \"createdAt\": -1\n },\n },\n {\n $limit: 1\n },\n {\n $project: {\n _id: 0,\n createdAt: 1\n }\n }\n ]\n }\n },\n {\n $unwind: \"$firstMessage\"\n },\n {\n $unwind: \"$lastMessage\"\n },\n {\n $project: {\n _id:1,\n firstMessage: \"$firstMessage.createdAt\",\n lastMessage: \"$lastMessage.createdAt\",\n roomTime: {\n $dateDiff: {\n startDate: \"$firstMessage.createdAt\",\n endDate: \"$lastMessage.createdAt\",\n unit: 'second'\n }\n }\n }\n },\n {\n $group: {\n _id: null,\n totalRoomTime: {\n $avg: \"$roomTime\"\n }\n }\n }\n]\n)\n", "text": "Here’s my aggregation looks like", "username": "T_RAZIN" }, { "code": "db.collection.stats()db.collection.explain('executionStats').aggregate(...)", "text": "Hey @T_RAZIN,Thank you for reaching out to the MongoDB Community forums.To better understand the issue, could you please share the following information:Looking forward to hearing from you.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "[\n {\n \"avgObjSize\": 341,\n \"capped\": false,\n \"count\": 4716,\n \"freeStorageSize\": 192512,\n \"indexBuilds\": [],\n \"indexSizes\": {\n \"_id_\": 151552,\n \"createdAt_1\": 86016,\n \"createdAt_-1\": 98304\n },\n \"nindexes\": 3,\n \"ns\": \"chat.room\",\n \"numOrphanDocs\": 0,\n \"ok\": 1,\n \"scaleFactor\": 1,\n \"size\": 1609816,\n \"storageSize\": 626688,\n \"totalIndexSize\": 335872,\n \"totalSize\": 962560,\n \"wiredTiger\": {\n \"metadata\": {\n \"formatVersion\": 1\n },\n \"creationString\": \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=u,verbose=[],write_timestamp_usage=none\",\n \"type\": \"file\",\n \"uri\": \"statistics:table:collection-29--6292082429505695878\",\n \"LSM\": {\n \"bloom filter false positives\": 0,\n \"bloom filter hits\": 0,\n \"bloom filter misses\": 0,\n \"bloom filter pages evicted from 
cache\": 0,\n \"bloom filter pages read into cache\": 0,\n \"bloom filters in the LSM tree\": 0,\n \"chunks in the LSM tree\": 0,\n \"highest merge generation in the LSM tree\": 0,\n \"queries that could have benefited from a Bloom filter that did not exist\": 0,\n \"sleep for LSM checkpoint throttle\": 0,\n \"sleep for LSM merge throttle\": 0,\n \"total size of bloom filters\": 0\n },\n \"autocommit\": {\n \"retries for readonly operations\": 0,\n \"retries for update operations\": 0\n },\n \"block-manager\": {\n \"allocations requiring file extension\": 152,\n \"blocks allocated\": 6975,\n \"blocks freed\": 4491,\n \"checkpoint size\": 417792,\n \"file allocation unit size\": 4096,\n \"file bytes available for reuse\": 192512,\n \"file magic number\": 120897,\n \"file major version number\": 1,\n \"file size in bytes\": 626688,\n \"minor version number\": 0\n },\n \"btree\": {\n \"btree checkpoint generation\": 3560,\n \"btree clean tree checkpoint expiration time\": 9223372036854775807,\n \"btree compact pages reviewed\": 0,\n \"btree compact pages rewritten\": 0,\n \"btree compact pages skipped\": 0,\n \"btree skipped by compaction as process would not reduce size\": 0,\n \"column-store fixed-size leaf pages\": 0,\n \"column-store fixed-size time windows\": 0,\n \"column-store internal pages\": 0,\n \"column-store variable-size RLE encoded values\": 0,\n \"column-store variable-size deleted values\": 0,\n \"column-store variable-size leaf pages\": 0,\n \"fixed-record size\": 0,\n \"maximum internal page size\": 4096,\n \"maximum leaf page key size\": 2867,\n \"maximum leaf page size\": 32768,\n \"maximum leaf page value size\": 67108864,\n \"maximum tree depth\": 3,\n \"number of key/value pairs\": 0,\n \"overflow pages\": 0,\n \"row-store empty values\": 0,\n \"row-store internal pages\": 0,\n \"row-store leaf pages\": 0\n },\n \"cache\": {\n \"bytes currently in the cache\": 1907737,\n \"bytes dirty in the cache cumulative\": 649287004,\n \"bytes read into cache\": 0,\n \"bytes written from cache\": 481554088,\n \"checkpoint blocked page eviction\": 0,\n \"checkpoint of history store file blocked non-history store page eviction\": 0,\n \"data source pages selected for eviction unable to be evicted\": 0,\n \"eviction gave up due to detecting an out of order on disk value behind the last update on the chain\": 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update\": 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain\": 0,\n \"eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update\": 0,\n \"eviction gave up due to needing to remove a record from the history store but checkpoint is running\": 0,\n \"eviction walk passes of a file\": 28,\n \"eviction walk target pages histogram - 0-9\": 0,\n \"eviction walk target pages histogram - 10-31\": 28,\n \"eviction walk target pages histogram - 128 and higher\": 0,\n \"eviction walk target pages histogram - 32-63\": 0,\n \"eviction walk target pages histogram - 64-128\": 0,\n \"eviction walk target pages reduced due to history store cache pressure\": 0,\n \"eviction walks abandoned\": 0,\n \"eviction walks gave up because they restarted their walk twice\": 28,\n \"eviction walks gave up because they saw too many pages and found no candidates\": 0,\n \"eviction walks gave up because they saw too many pages and found too few candidates\": 0,\n \"eviction 
walks reached end of tree\": 56,\n \"eviction walks restarted\": 0,\n \"eviction walks started from root of tree\": 28,\n \"eviction walks started from saved location in tree\": 0,\n \"hazard pointer blocked page eviction\": 0,\n \"history store table insert calls\": 0,\n \"history store table insert calls that returned restart\": 0,\n \"history store table out-of-order resolved updates that lose their durable timestamp\": 0,\n \"history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp\": 0,\n \"history store table reads\": 0,\n \"history store table reads missed\": 0,\n \"history store table reads requiring squashed modifies\": 0,\n \"history store table truncation by rollback to stable to remove an unstable update\": 0,\n \"history store table truncation by rollback to stable to remove an update\": 0,\n \"history store table truncation to remove an update\": 0,\n \"history store table truncation to remove range of updates due to key being removed from the data page during reconciliation\": 0,\n \"history store table truncation to remove range of updates due to out-of-order timestamp update on data page\": 0,\n \"history store table writes requiring squashed modifies\": 0,\n \"in-memory page passed criteria to be split\": 0,\n \"in-memory page splits\": 0,\n \"internal pages evicted\": 0,\n \"internal pages split during eviction\": 0,\n \"leaf pages split during eviction\": 1,\n \"modified pages evicted\": 1,\n \"overflow pages read into cache\": 0,\n \"page split during eviction deepened the tree\": 0,\n \"page written requiring history store records\": 0,\n \"pages read into cache\": 0,\n \"pages read into cache after truncate\": 1,\n \"pages read into cache after truncate in prepare state\": 0,\n \"pages requested from the cache\": 606110,\n \"pages seen by eviction walk\": 67,\n \"pages written from cache\": 5329,\n \"pages written requiring in-memory restoration\": 1,\n \"the number of times full update inserted to history store\": 0,\n \"the number of times reverse modify inserted to history store\": 0,\n \"tracked dirty bytes in the cache\": 0,\n \"unmodified pages evicted\": 0\n },\n \"cache_walk\": {\n \"Average difference between current eviction generation when the page was last considered\": 0,\n \"Average on-disk page image size seen\": 0,\n \"Average time in cache for pages that have been visited by the eviction server\": 0,\n \"Average time in cache for pages that have not been visited by the eviction server\": 0,\n \"Clean pages currently in cache\": 0,\n \"Current eviction generation\": 0,\n \"Dirty pages currently in cache\": 0,\n \"Entries in the root page\": 0,\n \"Internal pages currently in cache\": 0,\n \"Leaf pages currently in cache\": 0,\n \"Maximum difference between current eviction generation when the page was last considered\": 0,\n \"Maximum page size seen\": 0,\n \"Minimum on-disk page image size seen\": 0,\n \"Number of pages never visited by eviction server\": 0,\n \"On-disk page image sizes smaller than a single allocation unit\": 0,\n \"Pages created in memory and never written\": 0,\n \"Pages currently queued for eviction\": 0,\n \"Pages that could not be queued for eviction\": 0,\n \"Refs skipped during cache traversal\": 0,\n \"Size of the root page\": 0,\n \"Total number of pages currently in cache\": 0\n },\n \"checkpoint-cleanup\": {\n \"pages added for eviction\": 0,\n \"pages removed\": 0,\n \"pages skipped during tree walk\": 0,\n \"pages visited\": 6873\n },\n \"compression\": {\n \"compressed page 
maximum internal page size prior to compression\": 4096,\n \"compressed page maximum leaf page size prior to compression \": 117968,\n \"compressed pages read\": 0,\n \"compressed pages written\": 4506,\n \"number of blocks with compress ratio greater than 64\": 0,\n \"number of blocks with compress ratio smaller than 16\": 0,\n \"number of blocks with compress ratio smaller than 2\": 0,\n \"number of blocks with compress ratio smaller than 32\": 0,\n \"number of blocks with compress ratio smaller than 4\": 0,\n \"number of blocks with compress ratio smaller than 64\": 0,\n \"number of blocks with compress ratio smaller than 8\": 0,\n \"page written failed to compress\": 0,\n \"page written was too small to compress\": 823\n },\n \"cursor\": {\n \"Total number of entries skipped by cursor next calls\": 0,\n \"Total number of entries skipped by cursor prev calls\": 0,\n \"Total number of entries skipped to position the history store cursor\": 0,\n \"Total number of times a search near has exited due to prefix config\": 0,\n \"bulk loaded cursor insert calls\": 0,\n \"cache cursors reuse count\": 72665,\n \"close calls that result in cache\": 72666,\n \"create calls\": 113,\n \"cursor next calls that skip due to a globally visible history store tombstone\": 0,\n \"cursor next calls that skip greater than or equal to 100 entries\": 0,\n \"cursor next calls that skip less than 100 entries\": 198586487,\n \"cursor prev calls that skip due to a globally visible history store tombstone\": 0,\n \"cursor prev calls that skip greater than or equal to 100 entries\": 0,\n \"cursor prev calls that skip less than 100 entries\": 1,\n \"insert calls\": 4716,\n \"insert key and value bytes\": 1619185,\n \"modify\": 0,\n \"modify key and value bytes affected\": 0,\n \"modify value bytes modified\": 0,\n \"next calls\": 198586487,\n \"open cursor count\": 0,\n \"operation restarted\": 0,\n \"prev calls\": 1,\n \"remove calls\": 0,\n \"remove key bytes removed\": 0,\n \"reserve calls\": 0,\n \"reset calls\": 320801,\n \"search calls\": 20609,\n \"search history store calls\": 0,\n \"search near calls\": 175446,\n \"truncate calls\": 0,\n \"update calls\": 0,\n \"update key and value bytes\": 0,\n \"update value size change\": 0\n },\n \"reconciliation\": {\n \"approximate byte size of timestamps in pages written\": 0,\n \"approximate byte size of transaction IDs in pages written\": 0,\n \"dictionary matches\": 0,\n \"fast-path pages deleted\": 0,\n \"internal page key bytes discarded using suffix compression\": 7827,\n \"internal page multi-block writes\": 0,\n \"leaf page key bytes discarded using prefix compression\": 0,\n \"leaf page multi-block writes\": 796,\n \"leaf-page overflow keys\": 0,\n \"maximum blocks required for a page\": 1,\n \"overflow values written\": 0,\n \"page checksum matches\": 0,\n \"page reconciliation calls\": 1647,\n \"page reconciliation calls for eviction\": 1,\n \"pages deleted\": 0,\n \"pages written including an aggregated newest start durable timestamp \": 0,\n \"pages written including an aggregated newest stop durable timestamp \": 0,\n \"pages written including an aggregated newest stop timestamp \": 0,\n \"pages written including an aggregated newest stop transaction ID\": 0,\n \"pages written including an aggregated newest transaction ID \": 0,\n \"pages written including an aggregated oldest start timestamp \": 0,\n \"pages written including an aggregated prepare\": 0,\n \"pages written including at least one prepare\": 0,\n \"pages written including at least one start 
durable timestamp\": 0,\n \"pages written including at least one start timestamp\": 0,\n \"pages written including at least one start transaction ID\": 0,\n \"pages written including at least one stop durable timestamp\": 0,\n \"pages written including at least one stop timestamp\": 0,\n \"pages written including at least one stop transaction ID\": 0,\n \"records written including a prepare\": 0,\n \"records written including a start durable timestamp\": 0,\n \"records written including a start timestamp\": 0,\n \"records written including a start transaction ID\": 0,\n \"records written including a stop durable timestamp\": 0,\n \"records written including a stop timestamp\": 0,\n \"records written including a stop transaction ID\": 0\n },\n \"session\": {\n \"object compaction\": 0,\n \"tiered operations dequeued and processed\": 0,\n \"tiered operations scheduled\": 0,\n \"tiered storage local retention time (secs)\": 0\n },\n \"transaction\": {\n \"race to read prepared update retry\": 0,\n \"rollback to stable history store records with stop timestamps older than newer records\": 0,\n \"rollback to stable inconsistent checkpoint\": 0,\n \"rollback to stable keys removed\": 0,\n \"rollback to stable keys restored\": 0,\n \"rollback to stable restored tombstones from history store\": 0,\n \"rollback to stable restored updates from history store\": 0,\n \"rollback to stable skipping delete rle\": 0,\n \"rollback to stable skipping stable rle\": 0,\n \"rollback to stable sweeping history store keys\": 0,\n \"rollback to stable updates removed from history store\": 0,\n \"transaction checkpoints due to obsolete pages\": 0,\n \"update conflicts\": 0\n }\n }\n }\n]\n[\n {\n \"avgObjSize\": 651,\n \"capped\": false,\n \"count\": 64467,\n \"freeStorageSize\": 1216512,\n \"indexBuilds\": [],\n \"indexSizes\": {\n \"_id_\": 1671168,\n \"IDX_6ce6acdb0801254590f8a78c08\": 995328,\n \"createdAt_-1\": 729088,\n \"room.id_text\": 8192,\n \"room.createdAt_1\": 708608,\n \"room.createdAt_-1\": 729088\n },\n \"nindexes\": 6,\n \"ns\": \"chat.messages\",\n \"numOrphanDocs\": 0,\n \"ok\": 1,\n \"scaleFactor\": 1,\n \"size\": 42022512,\n \"storageSize\": 11235328,\n \"totalIndexSize\": 4841472,\n \"totalSize\": 16076800,\n \"wiredTiger\": {\n \"metadata\": {\n \"formatVersion\": 1\n },\n \"creationString\": 
\"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none,write_timestamp=off),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,import=(compare_timestamp=oldest_timestamp,enabled=false,file_metadata=,metadata_file=,repair=false),internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,readonly=false,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,tiered_object=false,tiered_storage=(auth_token=,bucket=,bucket_prefix=,cache_directory=,local_retention=300,name=,object_target_size=0),type=file,value_format=u,verbose=[],write_timestamp_usage=none\",\n \"type\": \"file\",\n \"uri\": \"statistics:table:collection-27--6292082429505695878\",\n \"LSM\": {\n \"bloom filter false positives\": 0,\n \"bloom filter hits\": 0,\n \"bloom filter misses\": 0,\n \"bloom filter pages evicted from cache\": 0,\n \"bloom filter pages read into cache\": 0,\n \"bloom filters in the LSM tree\": 0,\n \"chunks in the LSM tree\": 0,\n \"highest merge generation in the LSM tree\": 0,\n \"queries that could have benefited from a Bloom filter that did not exist\": 0,\n \"sleep for LSM checkpoint throttle\": 0,\n \"sleep for LSM merge throttle\": 0,\n \"total size of bloom filters\": 0\n },\n \"autocommit\": {\n \"retries for readonly operations\": 0,\n \"retries for update operations\": 0\n },\n \"block-manager\": {\n \"allocations requiring file extension\": 476,\n \"blocks allocated\": 47742,\n \"blocks freed\": 42795,\n \"checkpoint size\": 10002432,\n \"file allocation unit size\": 4096,\n \"file bytes available for reuse\": 1216512,\n \"file magic number\": 120897,\n \"file major version number\": 1,\n \"file size in bytes\": 11235328,\n \"minor version number\": 0\n },\n \"btree\": {\n \"btree checkpoint generation\": 3563,\n \"btree clean tree checkpoint expiration time\": 9223372036854775807,\n \"btree compact pages reviewed\": 0,\n \"btree compact pages rewritten\": 0,\n \"btree compact pages skipped\": 0,\n \"btree skipped by compaction as process would not reduce size\": 0,\n \"column-store fixed-size leaf pages\": 0,\n \"column-store fixed-size time windows\": 0,\n \"column-store internal pages\": 0,\n \"column-store variable-size RLE encoded values\": 0,\n \"column-store variable-size deleted values\": 0,\n \"column-store variable-size leaf pages\": 0,\n \"fixed-record size\": 0,\n \"maximum internal page size\": 4096,\n \"maximum leaf page key size\": 2867,\n \"maximum leaf page size\": 32768,\n \"maximum leaf page value size\": 67108864,\n \"maximum tree depth\": 3,\n \"number of key/value pairs\": 0,\n \"overflow pages\": 0,\n \"row-store empty values\": 0,\n \"row-store internal pages\": 0,\n \"row-store leaf pages\": 0\n },\n \"cache\": {\n \"bytes currently in the cache\": 
47959188,\n \"bytes dirty in the cache cumulative\": 5802493187,\n \"bytes read into cache\": 29819304,\n \"bytes written from cache\": 4818012955,\n \"checkpoint blocked page eviction\": 0,\n \"checkpoint of history store file blocked non-history store page eviction\": 0,\n \"data source pages selected for eviction unable to be evicted\": 0,\n \"eviction gave up due to detecting an out of order on disk value behind the last update on the chain\": 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update\": 0,\n \"eviction gave up due to detecting an out of order tombstone ahead of the selected on disk update after validating the update chain\": 0,\n \"eviction gave up due to detecting out of order timestamps on the update chain after the selected on disk update\": 0,\n \"eviction gave up due to needing to remove a record from the history store but checkpoint is running\": 0,\n \"eviction walk passes of a file\": 28,\n \"eviction walk target pages histogram - 0-9\": 0,\n \"eviction walk target pages histogram - 10-31\": 2,\n \"eviction walk target pages histogram - 128 and higher\": 0,\n \"eviction walk target pages histogram - 32-63\": 25,\n \"eviction walk target pages histogram - 64-128\": 1,\n \"eviction walk target pages reduced due to history store cache pressure\": 0,\n \"eviction walks abandoned\": 0,\n \"eviction walks gave up because they restarted their walk twice\": 28,\n \"eviction walks gave up because they saw too many pages and found no candidates\": 0,\n \"eviction walks gave up because they saw too many pages and found too few candidates\": 0,\n \"eviction walks reached end of tree\": 56,\n \"eviction walks restarted\": 0,\n \"eviction walks started from root of tree\": 28,\n \"eviction walks started from saved location in tree\": 0,\n \"hazard pointer blocked page eviction\": 0,\n \"history store table insert calls\": 0,\n \"history store table insert calls that returned restart\": 0,\n \"history store table out-of-order resolved updates that lose their durable timestamp\": 0,\n \"history store table out-of-order updates that were fixed up by reinserting with the fixed timestamp\": 0,\n \"history store table reads\": 0,\n \"history store table reads missed\": 0,\n \"history store table reads requiring squashed modifies\": 0,\n \"history store table truncation by rollback to stable to remove an unstable update\": 0,\n \"history store table truncation by rollback to stable to remove an update\": 0,\n \"history store table truncation to remove an update\": 0,\n \"history store table truncation to remove range of updates due to key being removed from the data page during reconciliation\": 0,\n \"history store table truncation to remove range of updates due to out-of-order timestamp update on data page\": 0,\n \"history store table writes requiring squashed modifies\": 0,\n \"in-memory page passed criteria to be split\": 10,\n \"in-memory page splits\": 5,\n \"internal pages evicted\": 0,\n \"internal pages split during eviction\": 0,\n \"leaf pages split during eviction\": 5,\n \"modified pages evicted\": 5,\n \"overflow pages read into cache\": 0,\n \"page split during eviction deepened the tree\": 0,\n \"page written requiring history store records\": 0,\n \"pages read into cache\": 253,\n \"pages read into cache after truncate\": 1,\n \"pages read into cache after truncate in prepare state\": 0,\n \"pages requested from the cache\": 79364709,\n \"pages seen by eviction walk\": 275,\n \"pages written from cache\": 44686,\n 
\"pages written requiring in-memory restoration\": 1,\n \"the number of times full update inserted to history store\": 0,\n \"the number of times reverse modify inserted to history store\": 0,\n \"tracked dirty bytes in the cache\": 0,\n \"unmodified pages evicted\": 0\n },\n \"cache_walk\": {\n \"Average difference between current eviction generation when the page was last considered\": 0,\n \"Average on-disk page image size seen\": 0,\n \"Average time in cache for pages that have been visited by the eviction server\": 0,\n \"Average time in cache for pages that have not been visited by the eviction server\": 0,\n \"Clean pages currently in cache\": 0,\n \"Current eviction generation\": 0,\n \"Dirty pages currently in cache\": 0,\n \"Entries in the root page\": 0,\n \"Internal pages currently in cache\": 0,\n \"Leaf pages currently in cache\": 0,\n \"Maximum difference between current eviction generation when the page was last considered\": 0,\n \"Maximum page size seen\": 0,\n \"Minimum on-disk page image size seen\": 0,\n \"Number of pages never visited by eviction server\": 0,\n \"On-disk page image sizes smaller than a single allocation unit\": 0,\n \"Pages created in memory and never written\": 0,\n \"Pages currently queued for eviction\": 0,\n \"Pages that could not be queued for eviction\": 0,\n \"Refs skipped during cache traversal\": 0,\n \"Size of the root page\": 0,\n \"Total number of pages currently in cache\": 0\n },\n \"checkpoint-cleanup\": {\n \"pages added for eviction\": 0,\n \"pages removed\": 0,\n \"pages skipped during tree walk\": 0,\n \"pages visited\": 166507\n },\n \"compression\": {\n \"compressed page maximum internal page size prior to compression\": 4096,\n \"compressed page maximum leaf page size prior to compression \": 131072,\n \"compressed pages read\": 253,\n \"compressed pages written\": 41232,\n \"number of blocks with compress ratio greater than 64\": 0,\n \"number of blocks with compress ratio smaller than 16\": 0,\n \"number of blocks with compress ratio smaller than 2\": 0,\n \"number of blocks with compress ratio smaller than 32\": 0,\n \"number of blocks with compress ratio smaller than 4\": 40,\n \"number of blocks with compress ratio smaller than 64\": 0,\n \"number of blocks with compress ratio smaller than 8\": 213,\n \"page written failed to compress\": 0,\n \"page written was too small to compress\": 3454\n },\n \"cursor\": {\n \"Total number of entries skipped by cursor next calls\": 129756,\n \"Total number of entries skipped by cursor prev calls\": 0,\n \"Total number of entries skipped to position the history store cursor\": 0,\n \"Total number of times a search near has exited due to prefix config\": 0,\n \"bulk loaded cursor insert calls\": 0,\n \"cache cursors reuse count\": 227645,\n \"close calls that result in cache\": 227646,\n \"create calls\": 164,\n \"cursor next calls that skip due to a globally visible history store tombstone\": 0,\n \"cursor next calls that skip greater than or equal to 100 entries\": 0,\n \"cursor next calls that skip less than 100 entries\": 10713915444,\n \"cursor prev calls that skip due to a globally visible history store tombstone\": 0,\n \"cursor prev calls that skip greater than or equal to 100 entries\": 0,\n \"cursor prev calls that skip less than 100 entries\": 1,\n \"insert calls\": 64468,\n \"insert key and value bytes\": 42207910,\n \"modify\": 0,\n \"modify key and value bytes affected\": 0,\n \"modify value bytes modified\": 0,\n \"next calls\": 10713915444,\n \"open cursor count\": 0,\n 
\"operation restarted\": 1,\n \"prev calls\": 1,\n \"remove calls\": 0,\n \"remove key bytes removed\": 0,\n \"reserve calls\": 0,\n \"reset calls\": 17113357,\n \"search calls\": 5285704847,\n \"search history store calls\": 0,\n \"search near calls\": 11000725,\n \"truncate calls\": 0,\n \"update calls\": 0,\n \"update key and value bytes\": 0,\n \"update value size change\": 0\n },\n \"reconciliation\": {\n \"approximate byte size of timestamps in pages written\": 0,\n \"approximate byte size of transaction IDs in pages written\": 0,\n \"dictionary matches\": 0,\n \"fast-path pages deleted\": 0,\n \"internal page key bytes discarded using suffix compression\": 80959,\n \"internal page multi-block writes\": 963,\n \"leaf page key bytes discarded using prefix compression\": 0,\n \"leaf page multi-block writes\": 1520,\n \"leaf-page overflow keys\": 0,\n \"maximum blocks required for a page\": 1,\n \"overflow values written\": 0,\n \"page checksum matches\": 0,\n \"page reconciliation calls\": 4025,\n \"page reconciliation calls for eviction\": 1,\n \"pages deleted\": 0,\n \"pages written including an aggregated newest start durable timestamp \": 0,\n \"pages written including an aggregated newest stop durable timestamp \": 0,\n \"pages written including an aggregated newest stop timestamp \": 0,\n \"pages written including an aggregated newest stop transaction ID\": 0,\n \"pages written including an aggregated newest transaction ID \": 1,\n \"pages written including an aggregated oldest start timestamp \": 0,\n \"pages written including an aggregated prepare\": 0,\n \"pages written including at least one prepare\": 0,\n \"pages written including at least one start durable timestamp\": 0,\n \"pages written including at least one start timestamp\": 0,\n \"pages written including at least one start transaction ID\": 0,\n \"pages written including at least one stop durable timestamp\": 0,\n \"pages written including at least one stop timestamp\": 0,\n \"pages written including at least one stop transaction ID\": 0,\n \"records written including a prepare\": 0,\n \"records written including a start durable timestamp\": 0,\n \"records written including a start timestamp\": 0,\n \"records written including a start transaction ID\": 0,\n \"records written including a stop durable timestamp\": 0,\n \"records written including a stop timestamp\": 0,\n \"records written including a stop transaction ID\": 0\n },\n \"session\": {\n \"object compaction\": 0,\n \"tiered operations dequeued and processed\": 0,\n \"tiered operations scheduled\": 0,\n \"tiered storage local retention time (secs)\": 0\n },\n \"transaction\": {\n \"race to read prepared update retry\": 0,\n \"rollback to stable history store records with stop timestamps older than newer records\": 0,\n \"rollback to stable inconsistent checkpoint\": 0,\n \"rollback to stable keys removed\": 0,\n \"rollback to stable keys restored\": 0,\n \"rollback to stable restored tombstones from history store\": 0,\n \"rollback to stable restored updates from history store\": 0,\n \"rollback to stable skipping delete rle\": 0,\n \"rollback to stable skipping stable rle\": 0,\n \"rollback to stable sweeping history store keys\": 0,\n \"rollback to stable updates removed from history store\": 0,\n \"transaction checkpoints due to obsolete pages\": 0,\n \"update conflicts\": 0\n }\n }\n }\n]\n", "text": "Hi @Kushagra_Kesav, Thanks for responding\nI’m using MongoDB version v6.0.6\nroom statsmessages stats", "username": "T_RAZIN" }, { "code": "[\n {\n 
\"command\": {\n \"aggregate\": \"room\",\n \"pipeline\": [\n {\n \"$lookup\": {\n \"from\": \"messages\",\n \"localField\": \"_id\",\n \"foreignField\": \"room.id\",\n \"as\": \"firstMessage\",\n \"pipeline\": [\n {\n \"$sort\": {\n \"createdAt\": 1\n }\n },\n {\n \"$limit\": 1\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"createdAt\": 1\n }\n }\n ]\n }\n },\n {\n \"$lookup\": {\n \"from\": \"messages\",\n \"localField\": \"_id\",\n \"foreignField\": \"room.id\",\n \"as\": \"lastMessage\",\n \"pipeline\": [\n {\n \"$sort\": {\n \"createdAt\": -1\n }\n },\n {\n \"$limit\": 1\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"createdAt\": 1\n }\n }\n ]\n }\n },\n {\n \"$unwind\": \"$firstMessage\"\n },\n {\n \"$unwind\": \"$lastMessage\"\n },\n {\n \"$project\": {\n \"_id\": 1,\n \"firstMessage\": \"$firstMessage.createdAt\",\n \"lastMessage\": \"$lastMessage.createdAt\",\n \"roomTime\": {\n \"$dateDiff\": {\n \"startDate\": \"$firstMessage.createdAt\",\n \"endDate\": \"$lastMessage.createdAt\",\n \"unit\": \"second\"\n }\n }\n }\n },\n {\n \"$group\": {\n \"_id\": null,\n \"totalRoomTime\": {\n \"$avg\": \"$roomTime\"\n }\n }\n }\n ],\n \"explain\": true,\n \"$db\": \"chat\",\n \"lsid\": {\n \"id\": {\"$binary\": {\"base64\": \"JgYIiH8eQZ2zdGWV78w5hw==\", \"subType\": \"04\"}}\n }\n },\n \"explainVersion\": \"1\",\n \"ok\": 1,\n \"serverInfo\": {\n \"host\": \"chat-engin-db\",\n \"port\": 27017,\n \"version\": \"6.0.6\",\n \"gitVersion\": \"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"namespace\": \"chat.room\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n },\n \"queryHash\": \"180FC727\",\n \"planCacheKey\": \"180FC727\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"transformBy\": {\n \"_id\": 1,\n \"firstMessage\": 1,\n \"lastMessage\": 1\n },\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"direction\": \"forward\"\n }\n },\n \"rejectedPlans\": []\n }\n }\n },\n {\n \"$lookup\": {\n \"from\": \"messages\",\n \"as\": \"firstMessage\",\n \"localField\": \"_id\",\n \"foreignField\": \"room.id\",\n \"let\": {\n },\n \"pipeline\": [\n {\n \"$sort\": {\n \"createdAt\": 1\n }\n },\n {\n \"$limit\": 1\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"createdAt\": 1\n }\n }\n ]\n }\n },\n {\n \"$lookup\": {\n \"from\": \"messages\",\n \"as\": \"lastMessage\",\n \"localField\": \"_id\",\n \"foreignField\": \"room.id\",\n \"let\": {\n },\n \"pipeline\": [\n {\n \"$sort\": {\n \"createdAt\": -1\n }\n },\n {\n \"$limit\": 1\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"createdAt\": 1\n }\n }\n ]\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$firstMessage\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$lastMessage\"\n }\n },\n {\n \"$project\": {\n \"_id\": true,\n \"firstMessage\": \"$firstMessage.createdAt\",\n \"lastMessage\": \"$lastMessage.createdAt\",\n \"roomTime\": {\n \"$dateDiff\": {\n 
\"startDate\": \"$firstMessage.createdAt\",\n \"endDate\": \"$lastMessage.createdAt\",\n \"unit\": {\n \"$const\": \"second\"\n }\n }\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"$const\": null\n },\n \"totalRoomTime\": {\n \"$avg\": \"$roomTime\"\n }\n }\n }\n ]\n }\n]\n", "text": "Here’s db.room.explain(‘executionStats’).aggregate(…) result", "username": "T_RAZIN" }, { "code": "{\n \"_id\": {\n \"$oid\": \"64819ba918af3d0059e059e5\"\n },\n \"userOne\": {\n \"id\": {\n \"$oid\": \"64255842bf3c9200517015e2\"\n },\n \"name\": \"...\",\n \"email\": \"...\",\n \"userId\": \"6c3ca9cc-ab0b-4816-9575-1e20c8c0d606\",\n \"createdAt\": {\n \"$date\": \"2023-03-30T09:37:06.507Z\"\n },\n \"updatedAt\": {\n \"$date\": \"2023-03-30T09:37:06.507Z\"\n }\n },\n \"userTwo\": {\n \"id\": {\n \"$oid\": \"640386d6cb61685af3798355\"\n },\n \"name\": \"...\",\n \"email\": \"...\",\n \"createdAt\": {\n \"$date\": \"2023-03-04T17:58:46.602Z\"\n },\n \"updatedAt\": {\n \"$date\": \"2023-03-04T17:58:46.602Z\"\n }\n },\n \"createdAt\": {\n \"$date\": \"2023-06-08T09:13:13.968Z\"\n },\n \"updatedAt\": {\n \"$date\": \"2023-06-08T09:13:13.968Z\"\n },\n \"ratings\": 4,\n \"feedbackMessage\": \"Good\"\n}\n{\n \"_id\": {\n \"$oid\": \"6481d2f218af3d0059e059ea\"\n },\n \"isSeen\": true,\n \"message\": \"Hi Nesya\",\n \"room\": {\n \"id\": {\n \"$oid\": \"6481d2f118af3d0059e059e9\"\n },\n \"userOne\": {\n \"name\": \"...\",\n \"email\": \"...\",\n \"createdAt\": {\n \"$date\": \"2023-03-30T09:37:06.507Z\"\n },\n \"updatedAt\": {\n \"$date\": \"2023-03-30T09:37:06.507Z\"\n },\n \"id\": [\n {\n \"$oid\": \"64255842bf3c9200517015e2\"\n }\n ]\n },\n \"userTwo\": {\n \"name\": \"...\",\n \"email\": \"...\",\n \"createdAt\": {\n \"$date\": \"2023-03-04T17:58:46.602Z\"\n },\n \"updatedAt\": {\n \"$date\": \"2023-03-04T17:58:46.602Z\"\n },\n \"id\": [\n {\n \"$oid\": \"640386d6cb61685af3798355\"\n }\n ]\n },\n \"createdAt\": {\n \"$date\": \"2023-06-08T13:09:05.791Z\"\n },\n \"updatedAt\": {\n \"$date\": \"2023-06-08T13:09:05.791Z\"\n }\n },\n \"author\": {\n \"id\": {\n \"$oid\": \"64255842bf3c9200517015e2\"\n },\n \"name\": \"...\",\n \"email\": \"...\",\n \"userId\": \"6c3ca9cc-ab0b-4816-9575-1e20c8c0d606\",\n \"createdAt\": {\n \"$date\": \"2023-03-30T09:37:06.507Z\"\n },\n \"updatedAt\": {\n \"$date\": \"2023-03-30T09:37:06.507Z\"\n }\n },\n \"createdAt\": {\n \"$date\": \"2023-06-08T13:09:06.452Z\"\n },\n \"updatedAt\": {\n \"$date\": \"2023-06-08T13:09:06.452Z\"\n }\n}\n", "text": "sample document roomsample document messages", "username": "T_RAZIN" }, { "code": "", "text": "Here’s indexes in messages\n\nScreen Shot 2023-06-30 at 00.45.432134×602 106 KB\n", "username": "T_RAZIN" }, { "code": "", "text": "indexes in room\n\nScreen Shot 2023-06-30 at 00.46.532322×634 85.4 KB\n", "username": "T_RAZIN" }, { "code": "{ \"room.id\" : 1 , \"createdAt\" : 1 }\n", "text": "I noticed that you $lookup twice in messages to get the first and last createdAt. 
You could achieve the same with a single $lookup if you $group with _id:null and use the $first and $last accumulators. Since you $lookup on room.id and $sort on createdAt, you definitely need a compound index in messages like the one shown in the code above. Also, I do not think you will get totalRoomTime if you accumulate using $avg.", "username": "steevej" }, { "code": "", "text": "One thing I forgot to mention. If the query is a frequent use-case, it might be worth the effort to update your model and logic to store the lastMessageCreatedAt and firstMessageCreatedAt directly in the room documents. You simply update them when you successfully insert a new document in messages.", "username": "steevej" }, { "code": "", "text": "Hi @steevej, thanks for your advice, it solved my problem. I’m new to MongoDB and didn’t know much about compound indexes. Thanks", "username": "T_RAZIN" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
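For anyone landing on this thread: a minimal mongosh sketch of the single-$lookup approach suggested above, using the collection and field names from this thread (room, messages, room.id, createdAt). It uses $min/$max instead of $first/$last so no $sort is needed inside the $lookup, and the final $avg yields an average (use $sum instead if you want a total):

    // Compound index so the $lookup can match on room.id and read createdAt efficiently:
    db.messages.createIndex({ "room.id": 1, "createdAt": 1 })

    // One $lookup returning both boundary timestamps per room:
    db.room.aggregate([
      { $lookup: {
          from: "messages",
          localField: "_id",
          foreignField: "room.id",
          as: "bounds",
          pipeline: [
            { $group: {
                _id: null,
                firstMessage: { $min: "$createdAt" },
                lastMessage: { $max: "$createdAt" }
            } }
          ]
      } },
      { $unwind: "$bounds" },   // drops rooms with no messages, like the original pipeline
      { $project: {
          roomTime: { $dateDiff: {
            startDate: "$bounds.firstMessage",
            endDate: "$bounds.lastMessage",
            unit: "second"
          } }
      } },
      { $group: { _id: null, avgRoomTime: { $avg: "$roomTime" } } }
    ])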
Very Slow aggregation with lookup
2023-06-28T09:35:10.187Z
Very Slow aggregation with lookup
563
null
[ "ops-manager" ]
[ { "code": "", "text": "I use MMS to monitor MongoDB clusters (both sharded clusters and replica sets).\nSeveral instances on one host belong to different replica sets and different projects, and monitoring does not work correctly for some of those projects.\nThe mmsGroupId is the same, which would mean every instance monitored by the MMS agent has to be in the same project; do I misunderstand that? I want to start multiple agents, but the lock file’s path is the same and cannot be defined in the config, so the lock files are mutually exclusive.\nIs there any way to resolve this, or am I misunderstanding something? Thanks!", "username": "feng_deng" }, { "code": "", "text": "Hi @feng_deng,Thank you for your question! Can you help me better understand your situation? Ops Manager does support several projects running on the same host; however, it will require separate agents and separate config files for each project. If you’re saying that the mmsGroupId is the same, that may be a misconfiguration. Thanks,\nFrank", "username": "Frank_Sun" } ]
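A hedged illustration of the "separate agents, separate config files" point above. The install paths are hypothetical and the key names should be checked against your agent version's documentation; the idea is simply that each agent installation points at its own project and keeps its own config, log, and lock/pid paths:

    # hypothetical install for project A
    # /opt/mongodb-agent-projectA/automation-agent.config
    mmsGroupId=<project-A-group-id>
    mmsApiKey=<project-A-agent-api-key>
    mmsBaseUrl=https://ops-manager.example.com:8443

    # hypothetical install for project B (separate binaries, config, logs and lock/pid files)
    # /opt/mongodb-agent-projectB/automation-agent.config
    mmsGroupId=<project-B-group-id>
    mmsApiKey=<project-B-agent-api-key>
    mmsBaseUrl=https://ops-manager.example.com:8443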
Mongodb ops manager don't monitor several instances from different project at the same hosts correctlly
2023-03-31T06:03:14.974Z
Mongodb ops manager don&rsquo;t monitor several instances from different project at the same hosts correctlly
842
null
[ "transactions", "database-tools", "containers", "kafka-connector", "data-api" ]
[ { "code": "{\n \"schema\": {\n \"type\": \"struct\",\n \"fields\": [\n {\n \"type\": \"string\",\n \"optional\": true,\n \"name\": \"io.debezium.data.Json\",\n \"version\": 1,\n \"field\": \"before\"\n },\n {\n \"type\": \"string\",\n \"optional\": true,\n \"name\": \"io.debezium.data.Json\",\n \"version\": 1,\n \"field\": \"after\"\n },\n {\n \"type\": \"string\",\n \"optional\": true,\n \"name\": \"io.debezium.data.Json\",\n \"version\": 1,\n \"field\": \"patch\"\n },\n {\n \"type\": \"string\",\n \"optional\": true,\n \"name\": \"io.debezium.data.Json\",\n \"version\": 1,\n \"field\": \"filter\"\n },\n {\n \"type\": \"struct\",\n \"fields\": [\n {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"string\",\n \"optional\": false\n },\n \"optional\": true,\n \"field\": \"removedFields\"\n },\n {\n \"type\": \"string\",\n \"optional\": true,\n \"name\": \"io.debezium.data.Json\",\n \"version\": 1,\n \"field\": \"updatedFields\"\n },\n {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"struct\",\n \"fields\": [\n {\n \"type\": \"string\",\n \"optional\": false,\n \"field\": \"field\"\n },\n {\n \"type\": \"int32\",\n \"optional\": false,\n \"field\": \"size\"\n }\n ],\n \"optional\": false,\n \"name\": \"io.debezium.connector.mongodb.changestream.truncatedarray\",\n \"version\": 1\n },\n \"optional\": true,\n \"field\": \"truncatedArrays\"\n }\n ],\n \"optional\": true,\n \"name\": \"io.debezium.connector.mongodb.changestream.updatedescription\",\n \"version\": 1,\n \"field\": \"updateDescription\"\n },\n {\n \"type\": \"struct\",\n \"fields\": [\n {\n \"type\": \"string\",\n \"optional\": false,\n \"field\": \"version\"\n },\n {\n \"type\": \"string\",\n \"optional\": false,\n \"field\": \"connector\"\n },\n {\n \"type\": \"string\",\n \"optional\": false,\n \"field\": \"name\"\n },\n {\n \"type\": \"int64\",\n \"optional\": false,\n \"field\": \"ts_ms\"\n },\n {\n \"type\": \"string\",\n \"optional\": true,\n \"name\": \"io.debezium.data.Enum\",\n \"version\": 1,\n \"parameters\": {\n \"allowed\": \"true,last,false,incremental\"\n },\n \"default\": \"false\",\n \"field\": \"snapshot\"\n },\n {\n \"type\": \"string\",\n \"optional\": false,\n \"field\": \"db\"\n },\n {\n \"type\": \"string\",\n \"optional\": true,\n \"field\": \"sequence\"\n },\n {\n \"type\": \"string\",\n \"optional\": false,\n \"field\": \"rs\"\n },\n {\n \"type\": \"string\",\n \"optional\": false,\n \"field\": \"collection\"\n },\n {\n \"type\": \"int32\",\n \"optional\": false,\n \"field\": \"ord\"\n },\n {\n \"type\": \"string\",\n \"optional\": true,\n \"field\": \"lsid\"\n },\n {\n \"type\": \"int64\",\n \"optional\": true,\n \"field\": \"txnNumber\"\n }\n ],\n \"optional\": false,\n \"name\": \"io.debezium.connector.mongo.Source\",\n \"field\": \"source\"\n },\n {\n \"type\": \"string\",\n \"optional\": true,\n \"field\": \"op\"\n },\n {\n \"type\": \"int64\",\n \"optional\": true,\n \"field\": \"ts_ms\"\n },\n {\n \"type\": \"struct\",\n \"fields\": [\n {\n \"type\": \"string\",\n \"optional\": false,\n \"field\": \"id\"\n },\n {\n \"type\": \"int64\",\n \"optional\": false,\n \"field\": \"total_order\"\n },\n {\n \"type\": \"int64\",\n \"optional\": false,\n \"field\": \"data_collection_order\"\n }\n ],\n \"optional\": true,\n \"name\": \"event.block\",\n \"version\": 1,\n \"field\": \"transaction\"\n }\n ],\n \"optional\": false,\n \"name\": \"src.metrics.customers.Envelope\"\n },\n \"payload\": {\n \"before\": null,\n \"after\": \"{\\\"_id\\\": {\\\"$numberLong\\\": \\\"1001\\\"},\\\"first_name\\\": 
\\\"Sallyddf\\\",\\\"last_name\\\": \\\"Thomas\\\",\\\"email\\\": \\\"[email protected]\\\"}\",\n \"patch\": null,\n \"filter\": null,\n \"updateDescription\": {\n \"removedFields\": null,\n \"updatedFields\": \"{\\\"first_name\\\": \\\"Sallyddf\\\"}\",\n \"truncatedArrays\": null\n },\n \"source\": {\n \"version\": \"2.0.0.Final\",\n \"connector\": \"mongodb\",\n \"name\": \"src\",\n \"ts_ms\": 1669244642000,\n \"snapshot\": \"false\",\n \"db\": \"metrics\",\n \"sequence\": null,\n \"rs\": \"rs0\",\n \"collection\": \"customers\",\n \"ord\": 2,\n \"lsid\": null,\n \"txnNumber\": null\n },\n \"op\": \"u\",\n \"ts_ms\": 1669244642381,\n \"transaction\": null\n }\n}\nERROR Unable to process record SinkRecord{kafkaOffset=4, timestampType=CreateTime} ConnectRecord{topic='src.metrics.customers', kafkaPartition=0, key={id=1001}, keySchema=null, value=Struct{after={\"_id\": {\"$numberLong\": \"1001\"},\"first_name\": \"Sallyddf\",\"last_name\": \"Thomas\",\"email\": \"[email protected]\"},updateDescription=Struct{updatedFields={\"first_name\": \"Sallyddf\"}},source=Struct{version=2.0.0.Final,connector=mongodb,name=src,ts_ms=1669244642000,snapshot=false,db=metrics,rs=rs0,collection=customers,ord=2},op=u,ts_ms=1669244642381}, valueSchema=Schema{src.metrics.customers.Envelope:STRUCT}, timestamp=1669244642856, headers=ConnectHeaders(headers=)} (com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData)\nmetrics-sink-connect | org.apache.kafka.connect.errors.DataException: Value expected to be of type STRING is of unexpected type NULL\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbUpdate.perform(MongoDbUpdate.java:69)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbHandler.handle(MongoDbHandler.java:82)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModelCDC$3(MongoProcessedSinkRecordData.java:99)\nmetrics-sink-connect | at java.base/java.util.Optional.flatMap(Optional.java:294)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModelCDC$4(MongoProcessedSinkRecordData.java:99)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.tryProcess(MongoProcessedSinkRecordData.java:105)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.buildWriteModelCDC(MongoProcessedSinkRecordData.java:98)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.createWriteModel(MongoProcessedSinkRecordData.java:81)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.<init>(MongoProcessedSinkRecordData.java:51)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoSinkRecordProcessor.orderedGroupByTopicAndNamespace(MongoSinkRecordProcessor.java:45)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.StartedMongoSinkTask.put(StartedMongoSinkTask.java:101)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoSinkTask.put(MongoSinkTask.java:90)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)\nmetrics-sink-connect | at 
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)\nmetrics-sink-connect | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\nmetrics-sink-connect | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\nmetrics-sink-connect | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\nmetrics-sink-connect | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\nmetrics-sink-connect | at java.base/java.lang.Thread.run(Thread.java:829)\nmetrics-sink-connect | Caused by: org.bson.BsonInvalidOperationException: Value expected to be of type STRING is of unexpected type NULL\nmetrics-sink-connect | at org.bson.BsonValue.throwIfInvalidType(BsonValue.java:419)\nmetrics-sink-connect | at org.bson.BsonValue.asString(BsonValue.java:69)\nmetrics-sink-connect | at org.bson.BsonDocument.getString(BsonDocument.java:252)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbUpdate.handleOplogEvent(MongoDbUpdate.java:80)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbUpdate.perform(MongoDbUpdate.java:61)\nmetrics-sink-connect | ... 22 more\nmetrics-sink-connect | [2022-11-23 23:04:02,876] ERROR WorkerSinkTask{id=metrics-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: Value expected to be of type STRING is of unexpected type NULL (org.apache.kafka.connect.runtime.WorkerSinkTask)\nmetrics-sink-connect | org.apache.kafka.connect.errors.DataException: Value expected to be of type STRING is of unexpected type NULL\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbUpdate.perform(MongoDbUpdate.java:69)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbHandler.handle(MongoDbHandler.java:82)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModelCDC$3(MongoProcessedSinkRecordData.java:99)\nmetrics-sink-connect | at java.base/java.util.Optional.flatMap(Optional.java:294)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModelCDC$4(MongoProcessedSinkRecordData.java:99)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.tryProcess(MongoProcessedSinkRecordData.java:105)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.buildWriteModelCDC(MongoProcessedSinkRecordData.java:98)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.createWriteModel(MongoProcessedSinkRecordData.java:81)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.<init>(MongoProcessedSinkRecordData.java:51)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoSinkRecordProcessor.orderedGroupByTopicAndNamespace(MongoSinkRecordProcessor.java:45)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.StartedMongoSinkTask.put(StartedMongoSinkTask.java:101)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoSinkTask.put(MongoSinkTask.java:90)\nmetrics-sink-connect | at 
org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)\nmetrics-sink-connect | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\nmetrics-sink-connect | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\nmetrics-sink-connect | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\nmetrics-sink-connect | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\nmetrics-sink-connect | at java.base/java.lang.Thread.run(Thread.java:829)\nmetrics-sink-connect | Caused by: org.bson.BsonInvalidOperationException: Value expected to be of type STRING is of unexpected type NULL\nmetrics-sink-connect | at org.bson.BsonValue.throwIfInvalidType(BsonValue.java:419)\nmetrics-sink-connect | at org.bson.BsonValue.asString(BsonValue.java:69)\nmetrics-sink-connect | at org.bson.BsonDocument.getString(BsonDocument.java:252)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbUpdate.handleOplogEvent(MongoDbUpdate.java:80)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbUpdate.perform(MongoDbUpdate.java:61)\nmetrics-sink-connect | ... 22 more\nmetrics-sink-connect | [2022-11-23 23:04:02,878] ERROR WorkerSinkTask{id=metrics-0} Task threw an uncaught and unrecoverable exception. 
Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)\nmetrics-sink-connect | org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:611)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)\nmetrics-sink-connect | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\nmetrics-sink-connect | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\nmetrics-sink-connect | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\nmetrics-sink-connect | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\nmetrics-sink-connect | at java.base/java.lang.Thread.run(Thread.java:829)\nmetrics-sink-connect | Caused by: org.apache.kafka.connect.errors.DataException: Value expected to be of type STRING is of unexpected type NULL\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbUpdate.perform(MongoDbUpdate.java:69)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbHandler.handle(MongoDbHandler.java:82)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModelCDC$3(MongoProcessedSinkRecordData.java:99)\nmetrics-sink-connect | at java.base/java.util.Optional.flatMap(Optional.java:294)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModelCDC$4(MongoProcessedSinkRecordData.java:99)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.tryProcess(MongoProcessedSinkRecordData.java:105)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.buildWriteModelCDC(MongoProcessedSinkRecordData.java:98)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.createWriteModel(MongoProcessedSinkRecordData.java:81)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.<init>(MongoProcessedSinkRecordData.java:51)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoSinkRecordProcessor.orderedGroupByTopicAndNamespace(MongoSinkRecordProcessor.java:45)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.StartedMongoSinkTask.put(StartedMongoSinkTask.java:101)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.MongoSinkTask.put(MongoSinkTask.java:90)\nmetrics-sink-connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581)\nmetrics-sink-connect | ... 
10 more\nmetrics-sink-connect | Caused by: org.bson.BsonInvalidOperationException: Value expected to be of type STRING is of unexpected type NULL\nmetrics-sink-connect | at org.bson.BsonValue.throwIfInvalidType(BsonValue.java:419)\nmetrics-sink-connect | at org.bson.BsonValue.asString(BsonValue.java:69)\nmetrics-sink-connect | at org.bson.BsonDocument.getString(BsonDocument.java:252)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbUpdate.handleOplogEvent(MongoDbUpdate.java:80)\nmetrics-sink-connect | at com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbUpdate.perform(MongoDbUpdate.java:61)\nmetrics-sink-connect | ... 22 more\nFROM quay.io/debezium/connect:2.0\nENV KAFKA_CONNECT_MONGODB_DIR=$KAFKA_CONNECT_PLUGINS_DIR/kafka-connect-mongodb\n\nUSER root\nRUN microdnf -y install git maven java-11-openjdk-devel && microdnf clean all\n\nUSER kafka\n\n# Deploy MongoDB Sink Connector\nRUN mkdir -p $KAFKA_CONNECT_MONGODB_DIR && cd $KAFKA_CONNECT_MONGODB_DIR && \\\n git clone https://github.com/hpgrahsl/kafka-connect-mongodb.git && \\\n cd kafka-connect-mongodb && \\\n git fetch --tags && \\\n git checkout tags/v1.2.0 && \\\n mvn clean package -DskipTests=true -DskipITs=true && \\\n mv target/kafka-connect-mongodb/kafka-connect-mongodb-1.2.0-jar-with-dependencies.jar $KAFKA_CONNECT_MONGODB_DIR && \\\n cd .. && rm -rf $KAFKA_CONNECT_MONGODB_DIR/kafka-connect-mongodb\nFROM confluentinc/cp-kafka-connect:7.2.2\nRUN confluent-hub install --no-prompt mongodb/kafka-connect-mongodb:1.8.0\nENV CONNECT_PLUGIN_PATH=\"/usr/share/java,/usr/share/confluent-hub-components\"\n{\n \"name\": \"metrics\",\n \"config\": {\n \"connector.class\": \"io.debezium.connector.mongodb.MongoDbConnector\",\n \"mongodb.name\": \"metrics-src\",\n \"mongodb.user\": \"admin\",\n \"mongodb.password\": \"admin\",\n \"mongodb.authsource\": \"admin\",\n \"mongodb.hosts\": \"rs0/metrics-src:27017\",\n \"topic.prefix\": \"src\",\n \"database.include.list\": \"metrics\"\n }\n}\n{\n \"name\": \"metrics\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"change.data.capture.handler\": \"com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbHandler\",\n \"connection.uri\": \"mongodb://metrics-sink:27017/metrics\",\n \"database\": \"metrics\",\n \"collection\": \"metrics\",\n \"topics\": \"src.metrics.customers\"\n }\n}\nversion: '3.4'\n\nservices:\n zookeeper:\n image: confluentinc/cp-zookeeper:7.0.1\n container_name: zookeeper\n restart: always\n networks:\n - sync-network\n environment:\n ZOOKEEPER_CLIENT_PORT: 2181\n ZOOKEEPER_TICK_TIME: 2000\n ZOO_4LW_COMMANDS_WHITELIST: \"*\"\n KAFKA_OPTS: \"-Dzookeeper.4lw.commands.whitelist=ruok\"\n healthcheck:\n test: nc -z localhost 2181 || exit -1\n interval: 10s\n timeout: 5s\n retries: 3\n start_period: 10s\n extra_hosts:\n - \"moby:127.0.0.1\"\n\n broker:\n image: confluentinc/cp-kafka:7.0.1\n container_name: broker\n restart: always\n networks:\n - sync-network\n ports:\n - \"9092:9092\"\n - \"39092:39092\"\n depends_on:\n zookeeper:\n condition: service_healthy\n environment:\n KAFKA_BROKER_ID: 1\n KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'\n KAFKA_LISTENERS: DOCKER_LISTENER://broker:9092,HOST_LISTENER://broker:19092,EXTERNAL_LISTENER://0.0.0.0:39092\n KAFKA_ADVERTISED_LISTENERS: DOCKER_LISTENER://broker:9092,HOST_LISTENER://localhost:19092,EXTERNAL_LISTENER://150.230.85.73:39092\n KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: DOCKER_LISTENER:PLAINTEXT,HOST_LISTENER:PLAINTEXT,EXTERNAL_LISTENER:PLAINTEXT\n 
KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER_LISTENER\n KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1\n KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1\n KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1\n extra_hosts:\n - \"moby:127.0.0.1\"\n healthcheck:\n test: echo \"ruok\" | timeout 2 nc -w 2 zookeeper 2181 | grep imok\n interval: 10s\n timeout: 5s\n retries: 3\n\n kafdrop:\n image: obsidiandynamics/kafdrop:latest\n container_name: kafdrop\n # network_mode: host\n ports:\n - 9000:9000\n networks:\n - sync-network\n depends_on:\n broker:\n condition: service_healthy\n environment:\n KAFKA_BROKERCONNECT: broker:9092\n\n metrics-src:\n image: mongo:5.0.5\n hostname: metrics-src\n restart: always\n container_name: metrics-src\n ports:\n - 27040:27017\n networks:\n - sync-network\n environment:\n MONGO_INITDB_DATABASE: metrics\n volumes:\n - ./scripts:/scripts\n healthcheck:\n test: test $$(echo \"rs.initiate().ok || rs.status().ok\" | mongo -u admin -p admin --quiet) -eq 1\n interval: 10s\n start_period: 30s\n command: --replSet rs0 --bind_ip_all\n\n metrics-sink:\n image: mongo:5.0.5\n hostname: metrics-sink\n restart: always\n container_name: metrics-sink\n ports:\n - 27020:27017\n networks:\n - sync-network\n environment:\n MONGO_INITDB_DATABASE: metrics\n volumes:\n - ./scripts:/scripts\n healthcheck:\n test: test $$(echo \"rs.initiate().ok || rs.status().ok\" | mongo -u admin -p admin --quiet) -eq 1\n interval: 10s\n start_period: 30s\n command: --replSet rs0 --bind_ip_all\n\n metrics-src-connect:\n image: quay.io/debezium/connect:2.0\n container_name: metrics-connect\n ports:\n - 8083:8083\n links:\n - broker\n - metrics-src\n networks:\n - sync-network\n volumes:\n - kafka-src-config:/kafka/config\n environment:\n - BOOTSTRAP_SERVERS=broker:9092\n - REST_HOST_NAME=0.0.0.0\n - GROUP_ID=1\n - CONFIG_STORAGE_TOPIC=metrics_src_connect_configs\n - OFFSET_STORAGE_TOPIC=metrics_src_connect_offsets\n - STATUS_STORAGE_TOPIC=metrics_src_connect_status\n - CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE=false\n - CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter\n - CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE=true\n - CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter\n\n # container with mongo kafka plugins\n metrics-sink-connect:\n image: confluentinc/cp-kafka-connect-base:7.2.2\n build:\n context: ./mongodb-kafka-connect\n ports:\n - \"8084:8083\"\n hostname: metrics-sink-connect\n container_name: metrics-sink-connect\n depends_on:\n - zookeeper\n - broker\n networks:\n - sync-network\n volumes:\n - kafka-sink-config:/kafka/config\n environment:\n KAFKA_JMX_PORT: 35000\n KAFKA_JMX_HOSTNAME: localhost\n CONNECT_BOOTSTRAP_SERVERS: \"broker:9092\"\n CONNECT_REST_ADVERTISED_HOST_NAME: metrics-sink-connect\n CONNECT_REST_PORT: 8083\n CONNECT_GROUP_ID: connect-cluster-group\n CONNECT_CONFIG_STORAGE_TOPIC: metrics_sink_connect_configs\n CONNECT_OFFSET_STORAGE_TOPIC: metrics_sink_connect_offsets\n CONNECT_STATUS_STORAGE_TOPIC: metrics_sink_connect_status\n CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1\n CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1\n CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1\n CONNECT_METADATA_MAX_AGE_MS: 180000\n CONNECT_CONNECTIONS_MAX_IDLE_MS: 180000\n CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000\n CONNECT_ZOOKEEPER_CONNECT: \"zookeeper:2181\"\n CONNECT_PLUGIN_PATH: \"/usr/share/java,/usr/share/confluent-hub-components\"\n CONNECT_AUTO_CREATE_TOPICS_ENABLE: \"true\"\n CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: \"false\"\n CONNECT_KEY_CONVERTER: 
org.apache.kafka.connect.json.JsonConverter\n CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: \"true\"\n CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter\n\nnetworks:\n sync-network:\n driver: bridge\n\nvolumes:\n kafka-sink-config:\n driver: local\n driver_opts:\n type: none\n o: bind\n device: ./kafka/sink-config\n\n kafka-src-config:\n driver: local\n driver_opts:\n type: none\n o: bind\n device: ./kafka/src-config\n", "text": "Hi everyone,I am trying to set up a CDC replication pipeline with the Debezium and Kafka Sink Connector but I’am having problems with Update Operations.In on hand, I have a MongoDB source database configured as a single node replica set. Connected to the source DB, I have the Debezium source connector that is streaming all CDC events to a Kafka Topic.On the other hand, I have a MongoDb acting as a sink database. The sink databased is feed by the MongoDb Sink Connector with the Debezium MongoDB CDC Handler.The source data is properly replicated into the sink only in insertion and deletion operations. If I try to update a document in the source collection, the sink connector will raise the following exception for this CDC event:DEBEZIUM CDC UPDATE EVENTSink Connector Exception:I followed all the examples and documentation from Debezium and MondoDb Sink Connector and I still have no clue why this is happening.Please find below the dockerfiles and my configurations:Debezium Sink Connector DockefilePlease find below the dockerfiles and my configurations:MongoDB Kafka Sink Connector DockerfileDebezium Source Connector ConfigurationMongoDb Sink Configuration with CDC HandlerDocker Compose FileCould someone help figure what I could be possibly missing in the configuration?Best regards,\nPaulo", "username": "Paulo_Henrique_Favero_Pereira" }, { "code": "", "text": "Hi @Paulo_Henrique_Favero_PereiraFirst of all thanks for your detailed information around your question / challenge. Based on the things I know, I can tell you right away that there doesn’t seem to be a “straight-forward one sentence solution” However I try to highlight a few things that caught my eye and point you to some potential workarounds:From what I can tell you are using the debezium 2.0 mongo source connector. Under the covers this connector uses mongodb’s changestreams feature as well. The problem is that debezium changed the actual CDC event payload format which was in fact a “breaking change” when you’d compare it with the CDC payload format used until version 1.7 of the connector.That being said the MongoDB sink connector is currently not prepared to properly deal with this new event payload format from Debezium, neither the official connector from MongoDB, nor my community sink connector which was integrated into the official at some point in the past (back then feature parity). Even if it doesn’t fix the issue, I’d highly recommend you switch to the official MongoDB connector in your docker file instead of using my community sink connector - what’s even more strange to me is the fact that if you want to rely on my community version, you shouldn’t use tag 1.2.0 which points to an even older version. 
The latest version of my community sink was tagged 1.4.0.Coming back to the actual problem and the breaking change in the event payload format you might have the following options:a) you could try to move away from DBZ source and maybe find a way to get your use case working based on the official source connector - I can’t tell if that will work because I don’t know enough details about your use case / requirements. It could be that you need tombstones events which depending on the capture mode aren’t supported in the official mongo source if I’m not mistaken.b) if you want to stick to debezium source connector, you might get away with using version 1.9 which still allows to configure the “legacy oplog” based CDC and which produces the “old” and AFAIK still compatible CDC event payloads for the sink connector.c) if neither a) nor b) work for your case and you want to continue using the DBZ 2.0 source connector I’m afraid you need to take some of the following actions to get this solved:Anyway, I think it’s good that you reported this issue and raised awareness. Since it’s not trivial to work around this problem I hope that someone will update the MongoDB sink connector’s CDC handler for Debezium MongoDB so that it is capable to process the new event payload format.I hope this helps you. Feel free to comment or ask again if anything is unclear.THX!", "username": "hpgrahsl" }, { "code": "", "text": "Hi @hpgrahsl,Thanks for your swift reply.That being said the MongoDB sink connector is currently not prepared to properly deal with this new event payload format from Debezium, neither the official connector from MongoDB nor my community sink connector which was integrated into the official at some point in the past (back then feature parity)Later on, I found this issue that states a similar problem regarding update operations with the Debezium CDC Handler. Unfortunately, it does not seem to be receiving proper attention. Would be nice to have documented, in the MongoDB sink connector, a version compatibility table between MongoDB, Debezium, or any other CDC.I tried to use your lib when I exhausted all other options and I didn’t notice that I was using version 1.2.0 .Moving on to the presented options…I tried option “b” before but I wasn’t aware of the “legacy oplog” config so it didn’t work. I managed to get it working using the MongoDB Source Connector instead of the Debezium Connector as you mentioned in option “a” and I even got a step forward:In my use case, I have local DBS with capped collections in multiple clients. I want to synchronize all the data into a global database. The global DB does not have any capped collection. What was happening was that the source connector was generating “delete” events when the capped collection was full.I checked out the MongoDB Kafka Connector Repository and I modified the code to create my custom connector for capped collections. I created a sink and a CDC Change Stream Connector that do not process delete operations. I don’t know if there were any other ways to solve this but it’s working smoothly as it should be .Thanks for your input. It helped a lot to expand the horizon of possibilities. 
I didn’t know that it was possible to solve my issue in the way you mentioned in option “c”. I hope that the MongoDB team solves the issue regarding the sink connector soon. Thanks.", "username": "Paulo_Henrique_Favero_Pereira" }, { "code": "", "text": "Hi @Paulo_Henrique_Favero_Pereira and @hpgrahsl, is there any update on this issue for replicating updates using CDC with Kafka and Debezium?", "username": "siva_ganaesh" } ]
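For later readers who need option “b” above: a hedged sketch of a Debezium MongoDB source connector 1.9 configuration that keeps the legacy oplog capture mode, so the sink connector’s Debezium CDC handler still receives the older event format. The capture.mode property and its oplog value are assumptions to verify against the Debezium 1.9 documentation; host, credentials and database names mirror the configs posted earlier in this thread:

    {
      "name": "metrics",
      "config": {
        "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
        "mongodb.name": "src",
        "mongodb.hosts": "rs0/metrics-src:27017",
        "mongodb.user": "admin",
        "mongodb.password": "admin",
        "mongodb.authsource": "admin",
        "database.include.list": "metrics",
        "capture.mode": "oplog"
      }
    }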
MongoDB Kafka Sink Connector w/ Debezium CDC Handler Fails on Update Operations
2022-11-24T12:39:04.851Z
MongoDB Kafka Sink Connector w/ Debezium CDC Handler Fails on Update Operations
4,560
null
[]
[ { "code": "", "text": "We suspend some of our Atlas clusters overnight when not in use, as they are quite large and there is no point keeping some of the testing environment active when all the developers are offline.\nThe issue we have is that the triggers fail when this happens. We have the advanced option set to Auto Resume the trigger, but I believe this only applies if the resume token fails, as opposed to the whole data connection going away. Currently we need to manually resume the trigger after the cluster resume has taken place in the morning.\nWhat’s the best solution to this? Get the DBA script to pause the triggers before the cluster is suspended and then resume them after? Is there an alternative setting that could be set to cope with this a touch more gracefully? Many thanks,\nJohn", "username": "John_Sewell" }, { "code": "", "text": "Hi, triggers under the hood are just a Change Stream that we operate for you. Therefore, when you pause your cluster, the trigger attempts to connect to the cluster and open a change stream (it retries this for a while) and ultimately errors and enters the failed state. We have some customers that do similar things to you, and the best solution is to do the following: you can hit the App Services Admin API to pause/resume triggers. Please see here: MongoDB Atlas App Services Admin API. Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks Tyler, I suspected that was the approach to take. Our DBA team already use the API for pausing the cluster so I’ll get them to add that call to their scripts.", "username": "John_Sewell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
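A hedged shell sketch of the pause-before-suspend / resume-after approach discussed above. The endpoint paths and the "disabled" field are assumptions to verify against the App Services Admin API reference linked in the thread; GROUP_ID, APP_ID, TRIGGER_ID and ACCESS_TOKEN are placeholders (the token comes from an API-key login against the Admin API):

    BASE="https://realm.mongodb.com/api/admin/v3.0/groups/$GROUP_ID/apps/$APP_ID"
    AUTH="Authorization: Bearer $ACCESS_TOKEN"

    # Before suspending the cluster: fetch the trigger config and re-PUT it with
    # "disabled": true (assumed field name) so the trigger is cleanly stopped.
    curl -s -H "$AUTH" "$BASE/triggers/$TRIGGER_ID" | jq '.disabled = true' > trigger.json
    curl -s -X PUT -H "$AUTH" -H "Content-Type: application/json" \
         --data @trigger.json "$BASE/triggers/$TRIGGER_ID"

    # After resuming the cluster: re-enable the trigger, and call the resume endpoint
    # in case it was suspended with an invalid resume token.
    jq '.disabled = false' trigger.json > trigger-enabled.json
    curl -s -X PUT -H "$AUTH" -H "Content-Type: application/json" \
         --data @trigger-enabled.json "$BASE/triggers/$TRIGGER_ID"
    curl -s -X PUT -H "$AUTH" "$BASE/triggers/$TRIGGER_ID/resume"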
Triggers failing when cluster suspended overnight
2023-06-30T10:58:07.528Z
Triggers failing when cluster suspended overnight
632
null
[ "sharding", "transactions", "database-tools", "backup" ]
[ { "code": "mongodumpmongorestoremongodump", "text": "Hello, I would like to confirm the statement below: mongodump and mongorestore cannot be part of a backup strategy for 4.2+ sharded clusters that have sharded transactions in progress, as backups created with mongodump do not maintain the atomicity guarantees of transactions across shards. Are we not allowed to use mongodump and restore on a running MongoDB database in a production environment?", "username": "Ralph_Anthony_Plante" }, { "code": "", "text": "Hi @Ralph_Anthony_Plante,\nAre we not allowed to use mongodump and restore on a running MongoDB database in a production environment? You definitely can use mongodump & mongorestore in production. The statement was saying that if there are sharded transactions in progress, then the backup will contain things that are in flight inside the transaction, and thus the backup does not provide a consistent view of the database. Having said that, it’s best if you stop all writes while mongodump is in progress anyway, just to be sure that you’re not backing up an inconsistent view of the database. Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Can we know what strategy Ops Manager and Cloud Manager use in this case?", "username": "Balram_Parmar" }, { "code": "", "text": "I guess it uses some internal clock/timestamp and also has to coordinate with the transaction mechanism.", "username": "Kobe_W" }, { "code": "", "text": "@Balram_Parmar in both Ops Manager and Cloud Manager, we do not use mongodump to back up the database. We developed a backup system specifically for Ops Manager and Cloud Manager that is designed to ensure consistency and correctness for both replica sets and sharded clusters, while the database is still fully operational. This ability is unique to Ops Manager and Cloud Manager due to the capabilities of the agent and other features designed specifically to ensure accurate backups for all deployment types. If you are interested, you can learn more here: Backup Process — MongoDB Ops Manager 6.0. Best Regards,\nEvin", "username": "Evin_Roesle" } ]
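To make the "consistent view" caveat above concrete for a plain replica set (this does not solve the cross-shard transaction problem the quoted statement is about): mongodump's --oplog flag captures oplog entries written during the dump, and mongorestore's --oplogReplay replays them to reach a consistent point in time. The URIs, credentials and paths below are placeholders:

    # dump the whole replica set member, including the oplog slice taken during the dump
    mongodump --uri="mongodb://backupUser:secret@rs0-node1:27017/?replicaSet=rs0" \
              --oplog --gzip --archive=/backups/rs0-$(date +%F).archive.gz

    # restore and replay the captured oplog to reach the consistent snapshot point
    mongorestore --uri="mongodb://admin:secret@target-host:27017" \
                 --oplogReplay --gzip --archive=/backups/rs0-2024-01-01.archive.gz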
Backup and Restore in MongoDB 4.2 Sharded Cluster
2022-03-04T01:44:41.352Z
Backup and Restore in MongoDB 4.2 Sharded Cluster
2,335
null
[ "node-js", "field-encryption" ]
[ { "code": "MongoServerError: Expected a value for eccCollection\n at CryptoConnection.onMessage (C:\\Users\\Desktop\\docs-in-use-encryption-examples\\queryable-encryption\\node\\local\\reader\\node_modules\\mongodb\\lib\\cmap\\connection.js:231:30)\n at MessageStream.<anonymous> (C:\\Users\\Desktop\\docs-in-use-encryption-examples\\queryable-encryption\\node\\local\\reader\\node_modules\\mongodb\\lib\\cmap\\connection.js:61:60)\n at MessageStream.emit (node:events:513:28)\n at processIncomingData (C:\\Users\\Desktop\\docs-in-use-encryption-examples\\queryable-encryption\\node\\local\\reader\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:125:16)\n at MessageStream._write (C:\\Users\\Desktop\\docs-in-use-encryption-examples\\queryable-encryption\\node\\local\\reader\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:33:9)\n at writeOrBuffer (node:internal/streams/writable:391:12)\n at _write (node:internal/streams/writable:332:10)\n at MessageStream.Writable.write (node:internal/streams/writable:336:10)\n at TLSSocket.ondata (node:internal/streams/readable:754:22)\n at TLSSocket.emit (node:events:513:28) {\n ok: 0,\n code: 6371206,\n codeName: 'Location6371206',\n '$clusterTime': {\n clusterTime: Timestamp { low: 5, high: 1687931276, unsigned: true },\n signature: { hash: [Binary], keyId: [Long] }\n },\n operationTime: Timestamp { low: 5, high: 1687931276, unsigned: true },\n [Symbol(errorLabels)]: Set(0) {}\n}\n", "text": "While working with queryable encryption i am facing with this issue:", "username": "Ronak_Patel1" }, { "code": "MongoServerError: Expected a value for eccCollection\neccCollection", "text": "Hey @Ronak_Patel1,Welcome to the MongoDB Community!The eccCollection is a metadata collection that is created when you create an encrypted collection using Queryable Encryption. To read more, please refer to Encrypted Collection Management documentation.However, could you share the code snippet you are executing and the versions of MongoDB and Node.js? Additionally, can you confirm if you are following any specific documentation or article to implement this?Also, was it working previously? 
If yes, could you share if anything has changed recently?Looking forward to hearing from you.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "const { MongoClient, Binary } = require(\"mongodb\");\n\nconst { getCredentials } = require(\"./your_credentials\");\nconst credentials = getCredentials();\n\n// start-key-vault\nconst eDB = \"encryption\";\nconst eKV = \"__keyVault\";\nconst keyVaultNamespace = `${eDB}.${eKV}`;\n// end-key-vault\n\n// start-kmsproviders\nconst fs = require(\"fs\");\nconst provider = \"local\";\nconst path = \"./master-key.txt\";\n// WARNING: Do not use a local key file in a production application\nconst localMasterKey = fs.readFileSync(path);\nconst kmsProviders = {\n local: {\n key: localMasterKey,\n },\n};\n// end-kmsproviders\n\nasync function run() {\n // start-schema\n const uri = credentials.MONGODB_URI;\n const unencryptedClient = new MongoClient(uri);\n await unencryptedClient.connect();\n const keyVaultClient = unencryptedClient.db(eDB).collection(eKV);\n\n const dek1 = await keyVaultClient.findOne({ keyAltNames: \"dataKey1\" });\n const dek2 = await keyVaultClient.findOne({ keyAltNames: \"dataKey2\" });\n const dek3 = await keyVaultClient.findOne({ keyAltNames: \"dataKey3\" });\n const dek4 = await keyVaultClient.findOne({ keyAltNames: \"dataKey4\" });\n const secretDB = \"medicalRecords\";\n const secretCollection = \"patients\";\n\n const encryptedFieldsMap = {\n [`${secretDB}.${secretCollection}`]: {\n fields: [\n {\n keyId: dek1._id,\n path: \"patientId\",\n bsonType: \"int\",\n queries: { queryType: \"equality\" },\n },\n {\n keyId: dek2._id,\n path: \"medications\",\n bsonType: \"array\",\n },\n {\n keyId: dek3._id,\n path: \"patientRecord.ssn\",\n bsonType: \"string\",\n queries: { queryType: \"equality\" },\n },\n {\n keyId: dek4._id,\n path: \"patientRecord.billing\",\n bsonType: \"object\",\n },\n ],\n },\n };\n // end-schema\n console.log(\"dekq\",encryptedFieldsMap)\n\n // start-extra-options\n const extraOptions = {\n cryptSharedLibPath: credentials[\"SHARED_LIB_PATH\"],\n };\n // end-extra-options\n\n // start-client\n const encryptedClient = new MongoClient(uri, {\n autoEncryption: {\n keyVaultNamespace:keyVaultNamespace ,\n kmsProviders :kmsProviders,\n extraOptions : extraOptions,\n encryptedFieldsMap:encryptedFieldsMap,\n },\n });\n await encryptedClient.connect().then(async()=>{\n try {\n const unencryptedColl = unencryptedClient\n .db(secretDB)\n .collection(secretCollection);\n // start-insert\n const encryptedColl = await encryptedClient.db(secretDB).collection(secretCollection);\n \n console.log(\"encryptedColl\",encryptedColl)\n await encryptedColl.insertOne({\n firstName: \"Jon\",\n lastName: \"Doe\",\n patientId: 12345678,\n address: \"157 Electric Ave.\",\n patientRecord: {\n ssn: \"987-65-4320\",\n billing: {\n type: \"Visa\",\n number: \"4111111111111111\",\n },\n },\n // medications: [\"Atorvastatin\", \"Levothyroxine\"],\n },\n );\n // end-insert\n // start-find\n console.log(\"Finding a document with regular (non-encrypted) client.\");\n // console.log(await unencryptedColl.findOne({ firstName: /Jon/ }));\n console.log(\n \"Finding a document with encrypted client, searching on an encrypted field\"\n );\n // console.log(\n // await encryptedColl.findOne({ \"patientRecord.ssn\": \"987-65-4320\" })\n // );\n // end-find\n } finally {\n await unencryptedClient.close();\n await encryptedClient.close();\n }\n })\n // end-client\n \n}\n\nrun().catch(console.dir)\n", "text": "@Kushagra_Kesav Thanks for the reply!I am 
using MongoDB enterprise 6.0.6 and Node v16.17.1Yes, I am following Quick Start — MongoDB Manual to implement this.I am attempting it for first time.", "username": "Ronak_Patel1" }, { "code": "", "text": "Hello, @Kushagra_Kesav! We are currently in the early stages of implementing this feature, and we greatly value your assistance and time. We are diligently working to navigate through this process and make necessary adjustments. Your support is sincerely appreciated as we strive to optimize and refine this feature at our end. Thank you!", "username": "Pranav_Tiwari" }, { "code": "", "text": "Hey @Pranav_Tiwari/@Ronak_Patel1,Thanks for sharing the code snippet, and I’d be glad to help.Could you please confirm that you have installed all the required packages listed in the Installation Requirements including the mongodb-client-encryption - npm package?Also, have you checked the GitHub repository for the Node.js Queryable Encryption? It provides a helpful resource for setting up and testing your requirements.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hello @Ronak_Patel1 and @Pranav_Tiwari,We’re excited that you are going to be implementing Queryable Encryption. As our docs say, the 6.0 server version is Preview only and not to be used in Production deployments. When the 7.0 version GA’s later this summer it will be production ready. Please note that we have made breaking changes between 6.0 and 7.0, which is why you are getting the error you see, and 6.0 should not be used. There is currently a 7.0 rc release available for testing only (this should also not be used for production but is great for testing), on Atlas Dedicated or Enterprise Advanced deployments if you have either of those.Thank you,Cynthia", "username": "Cynthia_Braund" }, { "code": "", "text": "Hey @Cynthia_Braund and @Kushagra_Kesav ,We are extremely grateful for your time and support in guiding us on our queryable encryption journey. We are delighted to share that we have successfully implemented this feature at the local level, and it is functioning seamlessly with Enterprise Advanced (7.0 rc) and the crypt_shared library (7.0). Your assistance has been invaluable, and we truly appreciate your efforts.Thank You,\nRonak", "username": "Ronak_Patel1" } ]
MongoServerError: Expected a value for eccCollection in Node.js
2023-06-28T05:57:31.401Z
MongoServerError: Expected a value for eccCollection in Node.js
751
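A minimal sketch related to the thread above, assuming a plain Node.js driver connection (the URI below is a placeholder): since the error came down to running the 6.0 Queryable Encryption preview instead of the 7.0 GA release, checking the connected server's version up front makes that mismatch obvious before any auto-encryption code runs.

const { MongoClient } = require("mongodb");

async function checkServerVersion() {
  // Placeholder URI - replace with the deployment being tested.
  const client = new MongoClient("mongodb://localhost:27017");
  try {
    await client.connect();
    const info = await client.db("admin").command({ buildInfo: 1 });
    console.log("Connected to MongoDB", info.version);
    const major = Number(info.version.split(".")[0]);
    if (major < 7) {
      console.warn("Queryable Encryption is GA on 7.0+; this server is older.");
    }
  } finally {
    await client.close();
  }
}

checkServerVersion().catch(console.dir);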
null
[ "aggregation", "production", "time-series", "ruby", "mongoid-odm" ]
[ { "code": "config.load_defaults.", "text": "Mongoid 8.1.0 is a feature release in 8.x series with the following significant new functionality:The following issues were fixed:The following additional improvements were made:", "username": "Dmitry_Rybakov" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Mongoid 8.1 released
2023-06-30T09:37:58.633Z
Mongoid 8.1 released
637
null
[ "swift" ]
[ { "code": "", "text": "I’ve just completed an article on how to integrate Realm and Realm Sync into an iOS chat app. It was timed to coincide with the GA of MongoDB Realm Sync.Realm is used for both persisting data on the iOS device and synchronizing the data between instances of the mobile app.The app is currently iOS-only (using SwiftUI), but we plan on building an Android version soon. One of the nice things about Realm Sync is that there’s no extra work needed to map between operating systems and languages when syncing data between iOS and Android.That data is also synced to MongoDB Atlas and so can be accessed from web or other kinds of apps too.The data stored and synced covers everything in the app:You can download all of the code from the GitHub repo.Checkout Building a Mobile Chat App Using Realm – Integrating Realm into Your App for all of the details.Also, if you want to learn more about how the app was built and ask some questions then I’ll be speaking at a virtual meetup on 17th Feb.", "username": "Andrew_Morgan" }, { "code": "", "text": "Is it possible to view these articles somewhere? I see the new posts for SwiftUI but all of the links to the old blog posts using UIKit now return 404 errors. I use UIKit for my app so it would be helpful to see those", "username": "Campbell_Affleck" }, { "code": "", "text": "I’ve updated the link, but it’s quite an old post now. This article brings the app up to date to use the newer sync features: Using Realm Flexible Sync in Your App—an iOS Tutorial | MongoDBNote that both posts use SwiftUI rather than UIKit.Andrew.", "username": "Andrew_Morgan" } ]
Building a Mobile Chat App Using Realm – Integrating with Realm
2021-02-08T11:10:51.015Z
Building a Mobile Chat App Using Realm – Integrating with Realm
2517
null
[ "compass", "server" ]
[ { "code": "bindIpmongod.cfg# network interfaces\nnet:\n port: 27017\n bindIp: 10.xx.xx.xx\nmongodb://10.xx.xx.xx:27017bindIpStart parameters", "text": "Good day MongoDB Community,I have setup a Windows MongoDB server as a service and another PC in the same network going to connect using Compass.I have updated the bindIp in mongod.cfg as such:I tried connecting via Compass mongodb://10.xx.xx.xx:27017 but connection timed out.\nI even added bindIp in Start parameters when running the service but still same result.\nEvery update, I made sure service is stopped and restarted.Are there other steps I need to perform to allow the on-premise MongoDB server to be accessed via Compass?", "username": "joseph-d-p" }, { "code": "# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0\n", "text": "Hi @joseph-d-p ,\nFrom the documentation:\n“Make sure that your mongod and mongos instances are only accessible on trusted networks. If your system has more than one network interface, bind MongoDB programs to the private or internal network interface.”\nhttps://www.mongodb.com/docs/manual/core/security-mongodb-configuration/#:~:text=Make%20sure%20that%20your%20mongod%20and%20mongos%20instances%20are%20only%20accessible%20on%20trusted%20networks.%20If%20your%20system%20has%20more%20than%20one%20network%20interface%2C%20bind%20MongoDB%20programs%20to%20the%20private%20or%20internal%20network%20interface.So, you’ve bind the correct nic in your server?\nIf you haven’ t particular requirement, set the bind ip in this way:Regards", "username": "Fabio_Ramohitaj" } ]
Unable to access Windows MongoDB server via Compass: Connection Timed Out
2023-06-30T07:33:59.508Z
Unable to access Windows MongoDB server via Compass: Connection Timed Out
514
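A small sketch to go with the thread above, assuming the Node.js driver is available on a client machine in the same network (the address is the placeholder from the post): a fail-fast ping separates Compass/driver issues from plain network reachability, since a timeout at this level usually points at bindIp, a host firewall rule on the Windows server, or routing rather than at Compass itself.

const { MongoClient } = require("mongodb");

async function probe() {
  const client = new MongoClient("mongodb://10.xx.xx.xx:27017", {
    serverSelectionTimeoutMS: 3000, // fail fast instead of hanging
  });
  try {
    await client.connect();
    const ping = await client.db("admin").command({ ping: 1 });
    console.log("Reachable:", ping);
  } catch (err) {
    console.error("Not reachable:", err.message);
  } finally {
    await client.close();
  }
}

probe();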
https://www.mongodb.com/…f01922cb3c26.png
[ "node-js", "crud", "mongoose-odm" ]
[ { "code": "import pkg from \"mongoose\";\n\nconst { Schema, model, models } = pkg;\n\nconst child = new Schema({ name: String, studentid: Number});\nconst schema = new Schema({ name: String, age: Number, children: [child] });\n\nconst Test = model(\"Test\", schema) \n\nexport default Test;\nagestudentidimport { connectToDB } from \"./utils/database.js\";\nimport Test from \"./models/test.js\";\n\nasync function updateData() {\n await connectToDB();\n\n const res = Test.updateMany(\n {\n age: 34,\n name: \"name2\",\n },\n {\n $set: {\n \"children.$[element]\": {\n name: \"updatedname\",\n studentid: 123456789,\n },\n },\n },\n {\n arrayFilters: [\n {\n element: {\n name: \"childr344en1\",\n studentid: 137,\n },\n },\n ],\n },\n {upsert: true}\n );\n console.log(res)\n}\n\nawait updateData();\n", "text": "My schema:Data sample:My update function, I wish to update the entire object inside the array base on age and studentid:After executing the update function, the data didn’t update. May I ask why? Where am I doing wrong?", "username": "WONG_TUNG_TUNG" }, { "code": "db.getCollection(\"Test\").updateMany(\n {\n age: 34,\n name: \"name2\",\n },\n {\n $set:{\n 'children.$[element]':{\n name: \"updatedname\",\n studentid: 123456789,\n }\n }\n },\n {\n arrayFilters:[\n {\n element:{\n name: \"childr344en1\",\n studentid: 137, \n }\n }\n ]\n },\n {upsert:true}\n)\n", "text": "I tried to replicate what you’re doing in a local mongo instance and it seemed to work, unless anyone with better eyes than me can see a difference?Mongo playground: a simple sandbox to test and share MongoDB queries onlineWhat’s in the return object from the server? What does it report as the matched / modified counts?", "username": "John_Sewell" }, { "code": "", "text": "What’s in the return object from the server? What does it report as the matched / modified counts?Do you mean “res”, what key and value inside “res” should I look for? How can I get the matched/modified counts in above code? Thank you.", "username": "WONG_TUNG_TUNG" }, { "code": " const res = await Test.updateMany(\n\n", "text": "Actually you’re not awaiting the results of the execution, can you add an await:", "username": "John_Sewell1" }, { "code": "", "text": "It works, can you explain why not adding await will lead to fail update?", "username": "WONG_TUNG_TUNG" }, { "code": "const mongoCon = new mongoClient();\nawait mongoCon.open()\nmongoCon.db.col.insertOnce({'a':'b'})\nmongoCon.close()\n\nconst mongoCon = new mongoClient();\nawait mongoCon.open()\nconst retVal = await mongoCon.db.col.insertOnce({'a':'b'})\nmongoCon.close()\n\n", "text": "The call to mongo is async so it’ll exit the function without waiting for it to even be sent to the server, I imagine that the connection or client is disposed of at some point which happens before the query has been executed.\nSo if you had something like this (pseudocode):The connection would close before the statement was executed, if you’re not seeing an error in the output it’s being thrown away somewhere.Doing this would cause the statement to be executed though:", "username": "John_Sewell" } ]
Can not update the object inside array
2023-06-29T09:34:37.399Z
Can not update the object inside array
505
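For reference against the thread above, this is the corrected shape of the call once the missing await is added (inside the async updateData() from the post), with the result counts printed so an unmatched arrayFilter shows up as 0 matched. The filter and update values are the ones from the original question; any extra options such as upsert belong in the same single options object as arrayFilters.

const res = await Test.updateMany(
  { age: 34, name: "name2" },
  {
    $set: {
      "children.$[element]": { name: "updatedname", studentid: 123456789 },
    },
  },
  {
    // all options go in this one object
    arrayFilters: [{ element: { name: "childr344en1", studentid: 137 } }],
  }
);
// Mongoose 6+/driver 4+ expose these counts; older versions typically report n/nModified.
console.log(res.matchedCount, "matched,", res.modifiedCount, "modified");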
https://www.mongodb.com/…8_2_1024x576.png
[ "london-mug" ]
[ { "code": "Founder, HaibridSoftware Engineer at DWPData Management Practice Lead for UKI, Google CloudSenior Solutions Architect, MongoDB", "text": "\n_London MUG - Design (5)1920×1080 236 KB\nLondon MUG is excited to announce its next meetup on June 29th at Google Cloud Office. The meetup will feature two engaging presentations, fun tech games, pizzas, and the opportunity to win some exciting swag and free passes to MongoDB .local! The event will begin with an introduction from Svitlana Gavrylova, the Data Management Practice Lead for UKI, discussing how MongoDB and Google Cloud collaborate to create even better solutions for your next big project.The introduction will be followed by two sessions. The first session by Sam Brown will focus on Data Localisation in MongoDB, which is a crucial aspect of data management in today’s global business landscape. Sam will discuss how you can use scale MongoDB globally and still provide localized access for either performance or data residency concerns.In the next session, Thomas Chamberlain, a Software Engineer at DWP, will discuss the process of creating a job search engine microservice at DWP. This involved developing a new set of microservices using NextJs, Spring, and MongoDB to build a platform for claimants on Universal Credit. The presentation will highlight the effective approaches used in building a continuous deployment pipeline.We invite you to join us for an evening filled with learning and networking! With MongoDB.local London a few weeks after the event, we will be awarding some free passes at the event. You could also register right now with coupon code MUG50 and stack it with Sani10 to get 60% off.Detailed agenda with the duration and activities will be available soon!Event Type: In-Person\nLocation: Google UK, Belgrave House, 76 Buckingham Palace Road, Victoria, LONDON ,SW1W 9TQFounder, Haibrid–\nTom Chamberlain800×800 103 KB\nSoftware Engineer at DWP–\n\nimage800×800 74.6 KB\nData Management Practice Lead for UKI, Google Cloud,\n–\nSam Brown (headshot) (1)512×512 300 KB\nSenior Solutions Architect, MongoDB", "username": "Sani_Yusuf" }, { "code": "", "text": "@Sani_YusufGood morning,On the email I received yesterday, it mentioned a government issued ids. I have taken that to mean Driving License, etc?ThanksNeil", "username": "NeilM" }, { "code": "", "text": "Yes, @NeilM - Driving License, etc would work ", "username": "Harshit" }, { "code": "", "text": "Thanks for last night, to Google for hosting and the speakers.", "username": "NeilM" } ]
MUG London: Data Localisation and Tech Evolution at Scale with MongoDB
2023-05-05T14:31:56.718Z
MUG London: Data Localisation and Tech Evolution at Scale with MongoDB
2908
null
[ "kubernetes-operator" ]
[ { "code": "", "text": "If I install the kubernetes operator on a namespace that already has MongoDB installed, will operator be managing the MongoDB?", "username": "Tommy_Park" }, { "code": "", "text": "Hi @Tommy_Park , do you mean installing the Operator in a namespace that contains an install of MongoDB that was not created using our Operator?If so, no, the Operator will not automatically start managing that. We don’t have a supported way to make the Operator take ownership of an existing MongoDB deployment. Any deployments managed by the Operator need to have been created using it, by creating the corresponding yaml Custom Resource.", "username": "Dan_Mckean" }, { "code": "", "text": "Thanks for the reply. \nI needed to hear that.", "username": "Tommy_Park" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Curiosity.. kubernetes-operator behavior
2023-06-30T05:43:39.567Z
Curiosity.. kubernetes-operator behavior
518
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "const userSchema = new mongoose.Schema({\n weekPlan: {\n type: [{\n title: { type: String, required: true },\n color: { type: String, required: true },\n recipeID: { type: mongoose.ObjectId, required: true },\n date: { type: Date, required: true }\n }]\n },\n});\nconst query = User.findOne({\n _id: req.user,\n weekPlan: { $elemMatch: { date: { $gte: new Date(2023, 5, 10, 0, 0, 0), $lte: new Date() } } }\n },\n { _id: 0, weekPlan: 1 },\n )\n\n// Seccond try\n const query = User.aggregate(\n { $match: { _id: req.user } },\n { $unwind: \"$weekPlan\" },\n {\n $match: {\n \"weekPlan.dates\": { $gte: new Date(2023, 5, 10, 0, 0, 0), $lte: new Date() }\n }\n }\n ) \n", "text": "My data set looks like this.I try to get back from the array weekPlan only the elements that are between the two dates. However, I get either no result at all or all results unfiltered where it is clear that the date is not in the range.I have tried the following two queries (js)", "username": "Marvin_N_A" }, { "code": "", "text": "Your second query match looks wrong. Do the two checks in an $and block, one for gte and one for le etc.", "username": "John_Sewell" }, { "code": " const query = User.aggregate(\n { $match: { _id: req.user } },\n { $unwind: \"$weekPlan\" },\n {\n $match: {\n \"weekPlan.dates\": { $and: [{ $gte: new Date(2023, 5, 10, 0, 0, 0) }, { $lte: new Date() }] }\n }\n }\n )\n", "text": "No result with this changes", "username": "Marvin_N_A" }, { "code": "db.collection.aggregate([\n {\n $match: {\n $and: [\n {\n \"type.date\": {\n $gt: 0\n }\n },\n {\n \"type.date\": {\n $lt: 3\n }\n },\n \n ]\n }\n }\n])\n", "text": "My personal account was limited to 8 posts a day and I seem to have blown that so replying on my work account but see below:Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "John_Sewell1" }, { "code": "$match: { \"type.date\" : { $gt: 0 ,$lt: 3 } }\n", "text": "The explicit $and is not required. The following $match will return the same documents.But I think the use-case is about only returning the matching element in the array for matching documents. I think that for this you need a $filter in a $project stage.The $unwind version is also promising since an $unwind and $match is almost like a $filter.", "username": "steevej" }, { "code": "", "text": "Ha, that’s something I’ve done forever, thanks for that reply! Probably as at some point in the past I added two filters against the same field and spend ages working out why only one of the conditions was picking up!\nShall be making use of that!", "username": "John_Sewell" }, { "code": "", "text": "The $unwind is on the wrong field given the schema isn’t it? “type” is the array and not the weekPlan.", "username": "John_Sewell" } ]
Find all elements in an array between two dates
2023-06-28T21:24:24.106Z
Find all elements in an array between two dates
844
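A sketch of the $filter approach suggested at the end of the thread, using the field names from the schema above; it returns only the weekPlan elements inside the range without unwinding. Note that Mongoose does not cast values inside aggregation pipelines, so _id may need an explicit ObjectId conversion depending on what req.user holds.

const from = new Date(2023, 5, 10, 0, 0, 0);
const to = new Date();

const result = await User.aggregate([
  { $match: { _id: req.user } }, // may need new mongoose.Types.ObjectId(req.user)
  {
    $project: {
      _id: 0,
      weekPlan: {
        $filter: {
          input: "$weekPlan",
          as: "entry",
          cond: {
            $and: [
              { $gte: ["$$entry.date", from] },
              { $lte: ["$$entry.date", to] },
            ],
          },
        },
      },
    },
  },
]);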
null
[ "aggregation", "connector-for-bi" ]
[ { "code": "", "text": "Hi all,at my job, we’ve been evaluating the MongoDB Connector for BI for our visualization project. Unfortunately there are some problems we have noticed, especially in the interplay with Power BI and the Power Query editor. If we cannot resolve them, we will have to change our planned software stack and remove the BI Connector from the equation.Concerns our largest collection:After storing about ~20m documents, performance gets severely degraded. Most times an update of Power BI reports is possible after 1 or 2 retries. We are looking to move to higher Power BI tiers anyway, where we’d do refresh partitions to avoid re-importing the whole DB on every report refresh.However, the really big problem is that the Power Query editor stops working. Various error messages emerge but no real indication of what is wrong. Through the MySQL endpoint I usually got an error code somewhere very close to Int32.MinValue, if that means anything to anyone.Things we tried:Things not yet tried:So, we currently persist an older subset of the data from xyz_collection to xyz_collection_archive and then delete them from the source collection. Hardly ideal, as we were hoping to visualize without a time limit.", "username": "Fabian_Schneider" }, { "code": "", "text": "Sorry to hear about the problem", "username": "Verten_Saltan" }, { "code": "", "text": "Hello @Fabian_Schneider My name is Alexi Antonino and I am the Product Manger for the BI Connector and Atlas SQL. First up, I am sorry the BI Connector with Power BI is not giving you the results necessary.My team is currently working on making Atlas SQL generally available (expected June of 2023). Atlas SQL will eventually replace the Atlas BI Connector. Atlas SQL is built upon a SQL-92 compatible dialog (not MySQL) and uses our Atlas Data Federation for a query engine allowing us to limit namespaces. Also, we are running an Private Preview program now for an Atlas SQL Power BI Connector - it is a custom connector build specifically for MongoDB. This first version of the connector does not offer support for Direct Query (which I think you might benefit from based on your data volumes). We plan to release a version with DQ later this summer though. The version today, while it only supports the import mode (which can limit data based on memory), might still be ok if we are able to limit your namespaces or data using Data Federation and views. If your MongoDB instance is in Atlas - let’s connect right away and I can show you this new connector and share with you the roadmap. I can even show you this if you are using an On-Premise version of MongoDB because while it is very future looking, we still plan to offer an on-prem SQL interface as well (but we are in the early phases of scoping this out).\nIf you are interesting in seeing a demo of Atlas SQL +Power BI, or just hearing/seeing the roadmap for the MongoDB SQL Interface, please email me and we can schedule some time.\nAlso, if the BI Connector is not going to work for you I can provide you with a list of some 3rd party connectors that might suit your needs (MongoDB+Power BI with DQ support). This might be a path to consider if the Atlas SQL connectors and their timeline don’t align with yours.Please email me - [email protected],\nAlexi", "username": "Alexi_Antonino" }, { "code": "", "text": "Dealing with performance issues when working with large datasets can be a real pain. From what you’ve described, it seems like you’ve tried a lot of things already, but still no luck. 
Have you considered looking into some power bi courses that might help you optimize your workflow and get the most out of your data? As for your specific question about image format and social media, it depends on the platform you’re using. For example, Instagram tends to prefer square images with a resolution of 1080x1080 pixels. Facebook, on the other hand, allows for a wider variety of image sizes and resolutions. In general, it’s a good idea to aim for high-quality images that are optimized for the specific platform you’re using.", "username": "Celik_Dewen" }, { "code": "", "text": "Hi!\nJust found this topic through Google and I’m experience the same problems with the performance.\nSince you said that June 2023 will be probably the first release date I’m wondering if this is still the case and when we can expect the first version?", "username": "DHNiek" }, { "code": "", "text": "Try using atlas SQL interface, performance improves, you should evaluate its use with federated databases", "username": "Adolfo_Adrian" } ]
Connector for BI to Power BI performance problems with very large collections
2022-08-31T16:37:36.887Z
Connector for BI to Power BI performance problems with very large collections
3685
null
[ "node-js", "atlas-cluster" ]
[ { "code": "\"mongodb+srv://<username>:<password>@crdata.mg0nclu.mongodb.net/courseLZ?retryWrites=true&w=majority\"", "text": "I have developed a web-app using Express (Node.js/JavaScript) web frameworks, initially following a MDN tutorial but flying solo now. There is an MongoDB Atlas database, for the data, connected to the app and working very well. On deploying the app via Railway, without a database service, I expected to be able to connect to the same Atlas database using the same connection script \"mongodb+srv://<username>:<password>@crdata.mg0nclu.mongodb.net/courseLZ?retryWrites=true&w=majority\". This failed to connect even though I had a white-listed IP (0.0.0.0/0).Many hours of reading and experimenting followed including replacing the Atlas DB with a MongoDB service in the Railway container. Long story short over 24 hours after having added a white-list IP to my Atlas account the app on Railway connected to the Atlas database!! Clearly my problem is resolved but I thought it might be a good discussion topic for my first post on this forum. Thanks for “listening”.", "username": "Jim_Rublee" }, { "code": "", "text": "Hey, where do you find your Railway outbound IP’s to add to the Atlas database?", "username": "Cre8tive_Studio" } ]
Connecting the same MongoDB Atlas database to a local app and a railway deployed app
2023-03-16T18:43:48.223Z
Connecting the same MongoDB Atlas database to a local app and a railway deployed app
1613
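A hedged sketch loosely related to the thread above (Railway specifics such as outbound IP ranges are not covered here): keeping the Atlas URI in an environment variable set on the deployment platform and failing fast makes allow-list problems visible immediately instead of as a long hang. MONGO_URL is an illustrative variable name.

const { MongoClient } = require("mongodb");

const uri = process.env.MONGO_URL; // set in the platform's dashboard, never hard-coded
const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });

client
  .connect()
  .then(() => console.log("Connected to Atlas"))
  .catch((err) => {
    // A server-selection timeout here often means the deployment's outbound IP
    // is not yet in the Atlas Network Access list.
    console.error("Atlas connection failed:", err.message);
    process.exit(1);
  });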
null
[ "dot-net", "change-streams" ]
[ { "code": "", "text": "Using C# driver 2.20.0 and subscribing for changed documents with MongoClient.WatchAsync() (i.e. subscribing for Inserted documents for the whole deployment, all databases, all collections.\nThis works just fine, but I read in the documentation that a ChangeStreamOperationType.Invalidate will be the consequence of e.g. a renamed or dropped collection if the stream is setup on the collection, ditto if database, but nothing is stated if the stream is setup on MongoClient.Is ChangeStreamOperationType.Invalidate at all possible for a stream setup on a MongoClient?\nIf yes, what can I do with the deployment to test for it?", "username": "Magne_Edvardsdal_Ryholt" }, { "code": "", "text": "setup on the collection vs setup on the MongoClientwhat you mean by these two ?", "username": "Kobe_W" }, { "code": "", "text": "We can get a cursor by using one of the IMongoCollection. Watch() methods, one of the IMongoDatabase. Watch() methods or IMongoClient. Watch() methods.These have different “scopes”, IMongoCollection. Watch() reurns a cursor to get change document from that specific collection only while IMongoClient.Watch returns a cursor to get change documents from all collections in all databases (the whole “deployment”)", "username": "Magne_Edvardsdal_Ryholt" }, { "code": "invalidateinvalidate", "text": "Change streams opened on collections raise an invalidate event when a drop, rename, or dropDatabase operation occurs that affects the watched collection.Change streams opened on databases raise an invalidate event when a dropDatabase event occurs that affects the watched database.in official manual, it doesn’t say if a “mongodb client” stream can raise invalidate event or not. So only the implementer can give an answer here.", "username": "Kobe_W" } ]
Invalidate event in ChangeStream on MongoClient
2023-06-28T18:29:42.736Z
Invalidate event in ChangeStream on MongoClient
635
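The documentation quoted above is silent on client-level streams, so one practical option is to observe the behaviour empirically. The sketch below is Node.js purely for illustration (the thread concerns the C# driver): open a deployment-wide stream, then rename or drop a collection from another shell and watch which operationType values arrive. It assumes a replica set, which change streams require; the URI is a placeholder.

const { MongoClient } = require("mongodb");

async function watchDeployment() {
  const client = new MongoClient("mongodb://localhost:27017/?replicaSet=rs0");
  await client.connect();
  const stream = client.watch(); // deployment-wide: all databases, all collections
  stream.on("change", (change) => {
    console.log(change.operationType, JSON.stringify(change.ns || {}));
  });
  stream.on("error", (err) => console.error("stream error:", err));
}

watchDeployment().catch(console.dir);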
null
[ "aggregation", "queries", "sharding" ]
[ { "code": "", "text": "Pretty new to Mongo and have a question or two around slow performance, i have followed the tips\nusing Mongo 3.7, the following query took about 84 seconds though if Mongos stats are correct the\ntime to read from storage was 803946 micros (should i believe this?). If the storage figures are to be believed then it took 82 seconds or so to parse the rturned docs in memory ?\nIt’s no small chunk at 1,729,965,030 bytes im wondering if even that was the compressed size.Im wondering if the replication locking is normal in a standalone env i searched and could not find much:\nReplicationStateTransition: { acquireCount: { w: 72619 } } and could this cause a delay in processing?\nOther than that any hints on speeding this up would be greatly appreciated !command: aggregate { aggregate: “query.hourly”, pipeline: [ { $match: { endTime: { $gt: new Date(1686355200000) } } }, { $group: { _id: “$contextId” } }, { $project: { contextId: “$_id”, _id: 0 } }, { $sort: { contextId: 1 } }, { $skip: 0 }, { $limit: 10000 } ], allowDiskUse: true, cursor: { batchSize: 2147483647 }, $db: “presidio”, $readPreference: { mode: “primaryPreferred” } } planSummary: IXSCAN { endTime: 1 } keysExamined:9182654 docsExamined:9182654 hasSortStage:1 usedDisk:1 cursorExhausted:1 numYields:71985 nreturned:10000 queryHash:96AD493D planCacheKey:5A080EEB reslen:629010 locks:{ ReplicationStateTransition: { acquireCount: { w: 72619 } }, Global: { acquireCount: { r: 72619 } }, Database: { acquireCount: { r: 72619 } }, Collection: { acquireCount: { r: 72619 } }, Mutex: { acquireCount: { r: 634 } } } storage:{ data: { bytesRead: 1729965030, timeReadingMicros: 803946 } } protocol:op_msg 83971ms", "username": "Con_O_Donnell" }, { "code": "", "text": "i have followed the tips\nusing Mongo 3.7,MongoDB 3.7 is not a version of MongoDB so what do you mean by this?Mongos stats are correctm wondering if the replication locking is normal in a standaloneAre you using a standalone or a sharded cluster?Could you provide some clarity of what type of deployment and the mongodb version you are using? 
Also if you provide the query with the .explain(“executionStats”) output that would be helpful.", "username": "tapiocaPENGUIN" }, { "code": "Mongo db version v4.2.17\n\nHere are my execution stats:\n\nexecution stats\n\n\n`> db.getCollection(\"test\").explain(\"executionStats\").aggregate([ { $match: { endTime: { $gt: new Date(1686355200000) } } },{ $group: { _id: \"$contextId\" }}, { $project: { contextId: \"$_id\", _id: 0 } }], {\"allowDiskUse\" : true})\n{\n \"stages\" : [\n {\n \"$cursor\" : {\n \"query\" : {\n \"endTime\" : {\n \"$gt\" : ISODate(\"2023-06-10T00:00:00Z\")\n }\n },\n \"fields\" : {\n \"contextId\" : 1,\n \"_id\" : 0\n },\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"mydb.test\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"endTime\" : {\n \"$gt\" : ISODate(\"2023-06-10T00:00:00Z\")\n }\n },\n \"queryHash\" : \"96AD493D\",\n \"planCacheKey\" : \"5A080EEB\",\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"endTime\" : 1\n },\n \"indexName\" : \"end\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"endTime\" : [ ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"endTime\" : [\n \"(new Date(1686355200000), new Date(9223372036854775807)]\"\n ]\n }\n }\n },\n \"rejectedPlans\" : [ ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 10227381,\n \"executionTimeMillis\" : 91474,\n \"totalKeysExamined\" : 10227381,\n \"totalDocsExamined\" : 10227381,\n \"executionStages\" : {\n \"stage\" : \"FETCH\",\n \"nReturned\" : 10227381,\n \"executionTimeMillisEstimate\" : 3196,\n \"works\" : 10227382,\n \"advanced\" : 10227381,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 80872,\n \"restoreState\" : 80872,\n \"isEOF\" : 1,\n \"docsExamined\" : 10227381,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 10227381,\n \"executionTimeMillisEstimate\" : 1560,\n \"works\" : 10227382,\n \"advanced\" : 10227381,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 80872,\n \"restoreState\" : 80872,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"endTime\" : 1\n },\n \"indexName\" : \"end\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"endTime\" : [ ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"endTime\" : [\n \"(new Date(1686355200000), new Date(9223372036854775807)]\"\n ]\n },\n \"keysExamined\" : 10227381,\n \"seeks\" : 1,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n }\n }\n },\n {\n \"$group\" : {\n \"_id\" : \"$contextId\"\n }\n },\n {\n \"$project\" : {\n \"_id\" : false,\n \"contextId\" : \"$_id\"\n }\n }\n ],\n \"serverInfo\" : {\n \"host\" : \"MY-DB-HOST\",\n \"port\" : 27017,\n \"version\" : \"4.2.17\",\n \"gitVersion\" : \"be089838c55d33b6f6039c4219896ee4a3cd704f\"\n },\n \"ok\" : 1\n}\n>\n `\n\nand my collection stats\n`db.getCollection(\"test\").stats()\n{\n \"ns\" : \"mydb.test\",\n \"size\" : 4002064919,\n \"count\" : 10227409,\n \"avgObjSize\" : 391,\n \"storageSize\" : 1337253888,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : 
\"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:collection-1560--5498070625033782530\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 0,\n \"blocks allocated\" : 1960,\n \"blocks freed\" : 1122,\n \"checkpoint size\" : 710680576,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 626548736,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 1337253888,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 323,\n \"btree clean tree checkpoint expiration time\" : NumberLong(\"9223372036854775807\"),\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 5,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 4519654858,\n \"bytes dirty in the cache cumulative\" : 131064624,\n \"bytes read into cache\" : 10085864857,\n \"bytes written from cache\" : 178306687,\n \"checkpoint blocked page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 114,\n \"eviction walk passes of a file\" : 19408,\n \"eviction walk target pages histogram - 0-9\" : 7309,\n \"eviction walk target pages histogram - 10-31\" : 9991,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n 
\"eviction walk target pages histogram - 32-63\" : 2108,\n \"eviction walk target pages histogram - 64-128\" : 0,\n \"eviction walks abandoned\" : 1811,\n \"eviction walks gave up because they restarted their walk twice\" : 3062,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 2340,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 233,\n \"eviction walks reached end of tree\" : 9774,\n \"eviction walks started from root of tree\" : 7451,\n \"eviction walks started from saved location in tree\" : 11957,\n \"hazard pointer blocked page eviction\" : 6,\n \"in-memory page passed criteria to be split\" : 18,\n \"in-memory page splits\" : 9,\n \"internal pages evicted\" : 2752,\n \"internal pages split during eviction\" : 0,\n \"leaf pages split during eviction\" : 10,\n \"modified pages evicted\" : 10,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 0,\n \"page written requiring cache overflow records\" : 0,\n \"pages read into cache\" : 88141,\n \"pages read into cache after truncate\" : 0,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages read into cache requiring cache overflow entries\" : 0,\n \"pages requested from the cache\" : 27656375,\n \"pages seen by eviction walk\" : 422674,\n \"pages written from cache\" : 1798,\n \"pages written requiring in-memory restoration\" : 0,\n \"tracked dirty bytes in the cache\" : 1483332,\n \"unmodified pages evicted\" : 53517\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 131072,\n \"compressed pages read\" : 85133,\n \"compressed pages written\" : 1551,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 247\n },\n \"cursor\" : {\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 177959,\n \"close calls that result in cache\" : 0,\n \"create calls\" : 22,\n \"insert calls\" : 172003,\n \"insert key and value bytes\" : 68688868,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 202,\n \"open cursor count\" : 1,\n \"operation restarted\" : 0,\n \"prev calls\" : 
1,\n \"remove calls\" : 1093,\n \"remove key bytes removed\" : 5465,\n \"reserve calls\" : 0,\n \"reset calls\" : 6710744,\n \"search calls\" : 803663619,\n \"search near calls\" : 0,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" : {\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 2404,\n \"internal page multi-block writes\" : 5,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 15,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 1,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 144,\n \"page reconciliation calls\" : 1211,\n \"page reconciliation calls for eviction\" : 3,\n \"pages deleted\" : 1\n },\n \"session\" : {\n \"object compaction\" : 0\n },\n \"transaction\" : {\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 5,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 2992058368,\n \"indexSizes\" : {\n \"_id_\" : 224673792,\n \"sessionEnd\" : 125272064,\n \"end\" : 86822912,\n \"sessionCtxEnd\" : 1283137536,\n \"ctxEnd\" : 1272152064\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1\n}\n\nThanks !\n", "text": "", "username": "Con_O_Donnell" }, { "code": "", "text": "also It’s a standalone non-sharded setup running with 128GB memory", "username": "Con_O_Donnell" } ]
Mongodb replication locks and general slowness
2023-06-27T20:20:02.916Z
Mongodb replication locks and general slowness
523
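A suggestion not raised in the thread itself: the explain output shows an IXSCAN on endTime followed by a FETCH of roughly 10M documents just to read contextId, so a compound index covering both fields can often let the planner answer the $match + $group from the index alone and skip the document fetch (worth verifying with explain("executionStats") afterwards). Collection and field names are the ones from the post, in mongosh syntax.

// Build once; endTime range scans that only need contextId can then be index-only.
db.getCollection("test").createIndex({ endTime: 1, contextId: 1 });

db.getCollection("test").aggregate(
  [
    { $match: { endTime: { $gt: new Date(1686355200000) } } },
    { $group: { _id: "$contextId" } },
    { $project: { contextId: "$_id", _id: 0 } },
  ],
  { allowDiskUse: true }
);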
null
[]
[ { "code": "", "text": "Hi. I’m new here.Silly question: does this MongoDB Developer Community site use MongoDB on the back end? I have seen this type of forum software on other sites before. What software is it?", "username": "Versie_Boer" }, { "code": "/** @license React v17.0.2\n * react-jsx-runtime.production.min.js\n *\n * Copyright (c) Facebook, Inc. and its affiliates.\n *\n * This source code is licensed under the MIT license found in the\n * LICENSE file in the root directory of this source tree.\n", "text": "If you View Source for this page, at the bottom you will see:So one infers that it’s React backed by the house vintage, i.e., MongoDB.", "username": "Jack_Woehr" }, { "code": "", "text": "And then there is React | MongoDB", "username": "Jack_Woehr" }, { "code": "", "text": "So one infers that it’s React backed by the house vintage, i.e., MongoDB.Thanks @Jack_Woehr. Hey, fyi, I noticed your https is broken on your site. You might want to look into that.\n\nScreenshot from 2023-06-27 04-39-421052×1067 59.8 KB\n", "username": "Versie_Boer" }, { "code": "", "text": "What software is it?Probably DiscourseEdit: Yes, if you search Discourse you’ll come up with a few threads.", "username": "chris" }, { "code": "", "text": "https not broken. Just not present! Thanks ", "username": "Jack_Woehr" }, { "code": "", "text": "not presentJust curious, why not?", "username": "Versie_Boer" }, { "code": "", "text": "Not needed. All static content.", "username": "Jack_Woehr" } ]
Does this MongoDB Developer Community site use MongoDB on the back end?
2023-06-26T12:24:10.114Z
Does this MongoDB Developer Community site use MongoDB on the back end?
732
null
[ "node-js", "student-developer-pack" ]
[ { "code": "", "text": "I’m taking the node.js developer learning path for MongoDB and I would like to attempt the associate developer exam course soon. Do I need knowledge of the courses with the ‘elective’ tag to pass the exam or can I skip those courses and still pass ?", "username": "David_Adeyemi" }, { "code": "", "text": "Hey @David_Adeyemi,Thank you for reaching out to the MongoDB Community forums.Do I need knowledge of the courses with the ‘elective’ tag to pass the exam or can I skip those courses and still pass?Electives are not mandatory for exam coverage, but rather optional.In case of any further questions feel free to reach out.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can I skip Electives on MongoDB developer learning Path and still pass the associate developer exam?
2023-06-27T12:07:20.655Z
Can I skip Electives on MongoDB developer learning Path and still pass the associate developer exam?
684
null
[ "kafka-connector" ]
[ { "code": "", "text": "Hey,I’ve been using the MongoDB Connector for Kafka Connect for a while on a Kubernetes cluster (using the Strimzi operator for deployment/config). Until now all seems to have been working perfectly well… tbh it still is working well until I hit very high load. In this case I am seeing the distribution of messages across topic partitions to be uneven. I would say that 50% of the partitions are not really being utilised.According the the Kafka Connect docs it is down to the producer to define the partitioner in use and I do not see a place this could be configured with the MongoDB connector.So my question is this… is the connector using the DefaultPartitioner, and if so is it possible to force round-robin behaviour?Thanks. ", "username": "Crispy_N_A" }, { "code": "partitioner.class", "text": "As an update I figured out that the normal Kafka producer config can be passed down to the connector specifying the desired partitioner (partitioner.class). However I am still seeing half of the partitions for a topic unused. As a test I manually published to one of the unused partitions which worked fine.So the question remains… why would half of the partitions for a perfectly valid Kafka topic not be published to by the Kafka Connect connector?", "username": "Crispy_N_A" } ]
MongoDB Connector (source)
2023-06-26T14:35:21.234Z
MongoDB Connector (source)
675
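A hedged configuration sketch, assuming the connector runs under a standard Kafka Connect worker: per-connector producer overrides (KIP-458) are the usual place to set the partitioner, and they only take effect if the worker permits client overrides. Whether round-robin actually evens things out depends on the records themselves - keyed records go through key hashing by default, so skewed keys alone can leave partitions idle. The connector name below is illustrative.

# worker config - permit per-connector client overrides
connector.client.config.override.policy=All

# connector config (JSON body posted to the Connect REST API)
{
  "name": "mongo-source",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "producer.override.partitioner.class": "org.apache.kafka.clients.producer.RoundRobinPartitioner"
  }
}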
null
[ "dot-net", "transactions" ]
[ { "code": "", "text": "Hello to all,This is my first post here since I am new in mongodb We are building a system using C# that collects transactions from an external provider by calling their APIs.\nThe transactions are stored in a collection in our DB.\nThe transactions model from the provider is like this:\n{\nid: int,\ndate:datetime\namount:decimal,\nvalidated:true/false\n}We call this endpoint daily and we get back the daily transactions and we store them to our collection with the same values except the id where we use mongodb’s default one (objectid).When we call the external endpoint we get always the same data with a change on the validated field when the transaction has validated.So, is there a way to update all the existing documents at once without looping on the stored DB transactions and compare them with the incoming ones?Thanks for your help\nYannis", "username": "Yannis_Ragkavas" }, { "code": "", "text": "Do you retain the original ID from the API when you save to Mongo? If so you could get all the transactions on the API call, and put them in a working collection then run an aggregation with a $merge into the existing collection with the $merge matching on the ID that identifies the transaction, this would then be one server call to update all records and the server would do the heavy lifting.\nJust need to make sure indexes are in place for fast matching.MongoDB on-demand materialized view, SELECT INTOYou can even specify when match or not, so you can insert a new item or update the existing one and have different logic applied.Then when done, clear the working collection.", "username": "John_Sewell" }, { "code": "", "text": "Thanks a lot for your reply! I will test your solution and come back to you", "username": "Yannis_Ragkavas" }, { "code": "db.pendingTransactions.aggregate([\n {\n $merge:\n \n {\n into: \"transactions\",\n on: [\"transactionId\"],\n whenMatched: \"replace\",\n whenNotMatched: \"insert\",\n },\n },\n]);\n", "text": "I ve tried your proposal. But I have this error and I cannot resolve it.\n“Cannot find index to verify that join fields will be unique”So, I have a pendingTransactions collection and a transactions collection\nI store data to pendingTransactions from calling the external API\nthen I am runningtransactionId field belongs to a unique index on the transactions collection. But when I run the aggregation then I see this error.Any ideas on that?—Update —I found it. I was using mongo compass to create the indexes and for some reason although the index was present gave this error.\nAnyway, I created the index though mongosh commands db.transactions.createIndex and workedThanks again\nYannis", "username": "Yannis_Ragkavas" }, { "code": "db.getCollection(\"PendingTransactions\").aggregate([\n{\n $project:{\n _id:0,\n transactionId:'$_id',\n 'Approved':1\n }\n},\n{\n $merge:{\n into:'Transactions',\n on:'transactionId',\n }\n}\n])\n", "text": "I had a play and you’re right, it throws an issue, I got around it with two things:You can play about with the merge options to work out if you want to replace or merge objects if they already exist…if you’ve updated some to add new fields etc, you may not want to splat them with all the data in the incoming pending transaction.", "username": "John_Sewell" }, { "code": "", "text": "I found it. 
I was using mongo compass to create the indexes and for some reason although the index was present gave this error.\nAnyway, I created the index though mongosh commands db.transactions.createIndex and workedThanks again\nYannis", "username": "Yannis_Ragkavas" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Best way to update multiple documents
2023-06-28T13:08:36.315Z
Best way to update multiple documents
601
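For completeness alongside the thread above: the unique index that backs $merge's on field can be created explicitly first (this is the step that resolved the "Cannot find index to verify that join fields will be unique" error), after which the daily sync stays a single server-side call, shown here in mongosh syntax with the names from the thread.

db.transactions.createIndex({ transactionId: 1 }, { unique: true });

db.pendingTransactions.aggregate([
  {
    $merge: {
      into: "transactions",
      on: "transactionId",
      whenMatched: "replace",
      whenNotMatched: "insert",
    },
  },
]);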
null
[ "aggregation" ]
[ { "code": "[\n {\n $lookup: {\n as: \"resultcheck\",\n foreignField: \"checkId\",\n from: \"checks\",\n localField: \"checkId\",\n },\n },\n {\n $lookup: {\n as: \"resultDepartment\",\n foreignField: \"DepId\",\n from: \"department\",\n localField: \"DepartmentId\",\n },\n },\n {\n $unwind: {\n path: \"$X\",\n },\n },\n {\n $unwind: {\n path: \"$X.X\",\n },\n },\n {\n $unwind: {\n path: \"$resultDepartment\",\n },\n },\n {\n $unwind: {\n path: \"$resultcheck\",\n },\n },\n {\n $match: {\n $and: [\n {\n \"X.X.TotalType\":\n \"Forecast\",\n \"X.X.FiscalYear\":\n {\n $ne: null,\n },\n },\n ],\n },\n },\n {\n $project: {\n _id: true,\n number: \"$Number\",\n responsible: \"$rResponsible\",\n calendarMonth: \"$Period\",\n department: \"$resultDepartment.Name\",\n finishDate: \"$FinishDate\",\n xYear: \"$X.X.Year\",\n year: \"$Year\",\n level: \"$Level\",\n check: \"$resultcheck.check\",\n period: {\n $switch: {\n branches: [\n {\n case: {\n $eq: [\"$Period\", 10],\n },\n then: 1,\n },\n {\n case: {\n $eq: [\"$Period\", 11],\n },\n then: 2,\n },\n {\n case: {\n $eq: [\"$Period\", 12],\n },\n then: 3,\n },\n {\n case: {\n $eq: [\"$Period\", 1],\n },\n then: 4,\n },\n {\n case: {\n $eq: [\"$Period\", 2],\n },\n then: 5,\n },\n {\n case: {\n $eq: [\"$Period\", 3],\n },\n then: 6,\n },\n {\n case: {\n $eq: [\"$Period\", 4],\n },\n then: 7,\n },\n {\n case: {\n $eq: [\"$Period\", 5],\n },\n then: 8,\n },\n {\n case: {\n $eq: [\"$Period\", 6],\n },\n then: 9,\n },\n {\n case: {\n $eq: [\"$Period\", 7],\n },\n then: 10,\n },\n {\n case: {\n $eq: [\"$Period\", 8],\n },\n then: 11,\n },\n {\n case: {\n $eq: [\"$Period\", 9],\n },\n then: 12,\n },\n ],\n },\n },\n title: \"$Title\",\n type: \"$X.X.TotalType\",\n value:\n \"$X.X.Value\",\n },\n },\n]\n", "text": "hi,\ndoes anyone have an idea, what’s the reason for missing data in my aggregation?\nThis is my pipeline:i’ve got from over 17k documents only 16k.\nThe original document contains a lot of objects with arrays inside the objects, which i resolve with unwind i.e.Can someone help?Kind Regards\nLJPS: i have anonymized some parts of this code", "username": "Mighty_Musterfrau" }, { "code": "", "text": "Project should not limit documents output, have you commenting out all stages and gradually running and counting document with each stage gradually introduced?", "username": "John_Sewell" }, { "code": "", "text": "almost done. If all is commented out i’ve got all docs.\nI also checked more than one time the lookups, to be sure that nothin is happen there. Could not find any issues after lookup or match", "username": "Mighty_Musterfrau" }, { "code": "db.getCollection(\"numbers\").aggregate([\n{\n $addFields:{\n newNum:{\n $add:[\n {\n $mod:[\n {$add:['$num', 2]},\n 12\n ]\n },\n 1\n ]\n } \n }\n}\n])\n", "text": "Your switch statement could also be simplified using the modulus operator:Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "John_Sewell" }, { "code": "", "text": "Check the following $unwind option:", "username": "steevej" }, { "code": "{\n$unwind: {\npath: \"myPath\",\nincludeArrayIndex: \"string\",\npreserveNullAndEmptyArrays: true,\n},\n}\n", "text": "It seems to be an issue with the X.X Values which where needed.i changed the $unwind to:, changed the match statement to 2 single statements and the missing data popped up and where available now…Thy very much ", "username": "Mighty_Musterfrau" }, { "code": "", "text": "i will try, thy for the tip", "username": "Mighty_Musterfrau" } ]
Why do i miss data when i use $project
2023-06-29T07:14:51.075Z
Why do i miss data when i use $project
276
null
[ "replication", "containers", "kubernetes-operator" ]
[ { "code": "", "text": "Hi,I currently have MongoDB as a replica on 3 Docker containers on Machine_1. All is running great.\nBut, I wish to create a Production environment running on Machine_2. It will be the same architecture, so using a replica of MongoDB and running in 3 containers. It will be my master DB.So Machine_1 with its replica will remain a copy to Machine_2 (Production).I would love to have both replicas to be synchronised (Machine_1 replica and Machine_2 replica).\nMy Machine_1 is my laptop where I develop an App, so it updates the MongoDB replica on that machine (1) as a local DB.\nOn Machine_2, there is a production App running, which will also update the local MongoDB replica on that machine (2)\nI wish them to sync if connected in the same network or if seeing each other.What would you suggest doing?\nRun Kubernetes Cluster with two nodes (Machine_1, Machine_2), and in each node runs MongoDB replicas.\nBut then, how do they sync?The IP of the machines can change; there is no DNS as of now.Thanks, Jakub", "username": "Jakub_Polec" }, { "code": "", "text": "Hi @Jakub_Polec and welcome to MongoDB community forums!!As I understand from your requirements, you have two mongoDB deployments in your local and in a production environment and you wish to sync the two databases.If yes, could you help me understand a few things regarding the deployments.So Machine_1 with its replica will remain a copy to Machine_2 (Production).By replica, are you referring to a replica set ? or is the local database a copy of production database in terms of schema and architecture?I would love to have both replicas to be synchronised (Machine_1 replica and Machine_2 replica).Could you help me understand why would you want the local database where development is ongoing to be synced with the production environment?In addition, you could also explore the Cluster to Cluster Sync feature of MongoDB if the databases are deployed on Atlas with 6.0 version or above.Regards\nAasawari", "username": "Aasawari" } ]
Mongodb in replica / kubernetes
2023-06-29T04:56:08.037Z
Mongodb in replica / kubernetes
563
null
[ "aggregation", "data-modeling" ]
[ { "code": "", "text": "from postman send a message to a mongodb collection, it creates the id and returns ok, but does not save the object properties with their value", "username": "FRANKLIN_EDUARDO_MARTINEZ_AVILA" }, { "code": "", "text": "Hey @FRANKLIN_EDUARDO_MARTINEZ_AVILA,Thank you for reaching out to the MongoDB Community forums.from postman send a message to a mongodb collection, it creates the id and returns ok, but does not save the object properties with their valueTo assist you better in troubleshooting the issue, could you please share the code snippet and the workflow you are following to send the POST request to MongoDB? Additionally, it would be helpful if you could provide details about the body parameter you are passing in the POST request.Could you also confirm whether you are performing any schema validation?Furthermore, please provide the version of MongoDB and the framework you are using. Also, kindly confirm the type of MongoDB deployment, whether it is on-premises or MongoDB Atlas.I look forward to hearing from you.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Mongo creates id does not save properties
2023-06-25T01:02:14.414Z
Mongo creates id does not save properties
573
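A heavily hedged sketch, since the thread never identifies the stack: a frequent cause of "an _id is created but the fields are empty" is that the request body never reaches the insert, for example when JSON body parsing is missing or Postman is not sending Content-Type: application/json. The code below assumes an Express + Node.js driver backend purely for illustration; route, database, and variable names are all placeholders.

const express = require("express");
const { MongoClient } = require("mongodb");

const app = express();
app.use(express.json()); // without this, req.body is empty for JSON POSTs

const client = new MongoClient(process.env.MONGO_URL); // placeholder URI variable

app.post("/items", async (req, res) => {
  console.log("received body:", req.body); // should show the Postman payload
  const result = await client.db("test").collection("items").insertOne(req.body);
  res.json({ insertedId: result.insertedId });
});

client.connect().then(() => app.listen(3000));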
https://www.mongodb.com/…7_2_888x1024.png
[ "compass" ]
[ { "code": "", "text": "Hello,We are currently facing difficulties connecting to Atlas using MongoDB Compass via SSH with the identity file option. We are encountering an error stating “All configured authentication methods failed.”In our troubleshooting process, we have already converted our RSA key to a .pem file format. Additionally, we have whitelisted the IP address of our Baiston server in the Atlas network access settings.Interestingly, we are able to establish a connection by using the CLI by SSH to our Baiston server, followed by the ‘mongo connect’ command.To assist us in resolving this issue, we would appreciate any further details or error messages you can provide.\nScreenshot 2023-06-14 at 6.11.53 PM1484×1710 201 KB\n", "username": "Rahul_Sawaria" }, { "code": "", "text": "Hi @Rahul_Sawaria and welcome to MongoDB community forums!!In order to debug the issue, it would be helpful if you could share the below details which would help me reproduce the issue:In the mean while, you can take a look at the post that mentions a similar issue and has solution to the same.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @Aasawari\nThanks for reaching out.1.) I have followed this doc to generate the ssh key\n2.) Mongo db compass Version -1.37.0 (1.37.0)\n3.) Macbook pro with M1 chip (Os - ventura 13.2.1)Thanks & Regards!", "username": "Rahul_Sawaria" } ]
Unable to connect to atlas using compass via ssh with identity file
2023-06-14T12:46:13.799Z
Unable to connect to atlas using compass via ssh with identity file
996
null
[ "aggregation" ]
[ { "code": "{\n \"conditions\": {\n \"category\": \"d10\",\n \"brand\": \"BMW\"\n },\n \"result\": {\n \"isTransportable\": false\n }\n }\n{\n \"conditions\": {\n \"brand\": \"Ferrari\"\n },\n \"result\": {\n \"isPricey\": true\n }\n }\nisPriceyisTransportable{\n \"conditions\": {\n \"category\": \"d10\",\n \"brand\": \"BMW\"\n },\n \"result\": {\n \"isTransportable\": false\n }\n }\n", "text": "I have two collections: vehicles and settersSetters\nA setter document has two main fields: conditions and result. It’s used to store some rules like “if Category = d10 and Brand = BMW then isTransportable = false”. The way we store this in the setters collections is :Another example: “if brand = Ferrari then isPricey = true” is storedRemarks:Vehicles:\nA vehicle document has a flat structure, is has the fields Category, brand, isTransportable, isPricey.What I am trying to do\nI want to update the values of vehicles according to the rules.\nIf I have a vehicle that have “category” =“d10” and “brand” =“BMW”, I want to update its isTransportable value using the result of the setter document:So it will be set the vehicle will have isTransportable equal to false…However, if my vehicle have category = “d10” and brand != “bmw” then I will not use that rule.\nIs there a way to do this using the aggregates, without having to fetch vehicles and setters", "username": "Philippe_Lafourcade" }, { "code": "db.getCollection(\"vehicles\").aggregate([\n{\n $lookup:{\n from:'conditions',\n let:{\n 'brandMatch':'$brand',\n 'categoryMatch':'$category',\n },\n pipeline:[\n {\n $match:{\n $and:[\n {\n $expr:{\n $or:[\n { \n $eq:[\n '$conditions.brand',\n '$$brandMatch'\n ]\n \n },\n {\n $eq:[\n '$conditions.brand', null\n ]\n }\n ]\n }\n }, \n {\n $expr:{\n $or:[\n { \n $eq:[\n '$conditions.category',\n '$$categoryMatch'\n ]\n \n },\n {\n $eq:[\n '$conditions.category', null\n ]\n }\n ]\n }\n }, \n ]\n }\n }\n ],\n as: 'matchedConditions'\n }\n},\n{\n $match:{\n \"matchedConditions.0\":{$exists:true}\n }\n},\n{\n $unwind:'$matchedConditions'\n},\n{\n $project:{\n _id:1,\n result:'$matchedConditions.result' \n }\n},\n{\n $addFields:{\n 'result._id':'$_id'\n }\n},\n{\n $replaceRoot:{\n newRoot:'$result'\n }\n},\n{\n $merge:{\n into:'vehicles',\n on:'_id',\n whenMatched:'merge'\n }\n}\n])\n\n", "text": "I had a play and came up with something like this:This is what it looks like:I’ve not played with this on a large scale of document and the downer is that you need to update the query for each condition you add, I guess you could create a script that pulls the unique ones out first and THEN build this based on that list…\nI’ve also set it so that it updates if a condition matches or the condition does not exist, you may want to tweak that to perform as you want…I’m not sure this is the best way, but it’s a way!", "username": "John_Sewell" } ]
Is there a way to do this using aggregation?
2023-06-28T16:59:14.283Z
Is there a way to do this using aggregation?
254
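An extension sketch of the answer above (not from the thread): since each new condition key currently means editing the $lookup by hand, the per-field "matches or is absent" clauses can be generated from a list of keys gathered beforehand from the setters. Collection and field names are illustrative and mirror the original question (setters / vehicles).

const conditionKeys = ["brand", "category"]; // e.g. collected up front from the setters

const clauses = conditionKeys.map((key) => ({
  $expr: {
    $or: [
      { $eq: [`$conditions.${key}`, `$$vehicle.${key}`] },
      { $eq: [`$conditions.${key}`, null] }, // rule does not constrain this field
    ],
  },
}));

const pipeline = [
  {
    $lookup: {
      from: "setters",
      let: { vehicle: "$$ROOT" },
      pipeline: [{ $match: { $and: clauses } }],
      as: "matchedConditions",
    },
  },
  // ...continue with the $match / $unwind / $merge stages from the answer above
];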
null
[ "queries", "node-js", "performance" ]
[ { "code": "MONGO_URL = 'mongodb://xxxx:[email protected]/zzzz';\nconst { MongoClient } = require('mongodb');\n(async () => {\n const client = await MongoClient.connect(MONGO_URL, { useNewUrlParser: true });\n const db = await client.db();\n const audit = db.collection('Audit');\n const start = Date.now();\n const count = (await audit.find().toArray()).length;\n console.log(Date.now() - start, 'ms', count, 'recs');\n})();\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.189+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.17.0.1:34096\",\"uuid\":\"77810513-8218-4bd2-8c76-7f6dc494e062\",\"connectionId\":11,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.191+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn11\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.17.0.1:34096\",\"client\":\"conn11\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.2.7\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"arm64\",\"version\":\"5.15.49-linuxkit\"},\"platform\":\"Node.js v18.13.0, LE, mongodb-core: 3.2.7\"}}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.191+00:00\"},\"s\":\"D1\", \"c\":\"ACCESS\", \"id\":20226, \"ctx\":\"conn11\",\"msg\":\"Returning user from cache\",\"attr\":{\"user\":{\"user\":\"peek\",\"db\":\"peek\"}}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.194+00:00\"},\"s\":\"D1\", \"c\":\"ACCESS\", \"id\":20226, \"ctx\":\"conn11\",\"msg\":\"Returning user from cache\",\"attr\":{\"user\":{\"user\":\"peek\",\"db\":\"peek\"}}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.194+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.$cmd\",\"command\":{\"saslStart\":1,\"mechanism\":\"SCRAM-SHA-256\",\"payload\":\"xxx\",\"autoAuthorize\":1,\"$db\":\"peek\"},\"numYields\":0,\"reslen\":211,\"locks\":{},\"authorization\":{\"startedUserCacheAcquisitionAttempts\":1,\"completedUserCacheAcquisitionAttempts\":1,\"userCacheWaitTimeMicros\":2},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_query\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.198+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.$cmd\",\"command\":{\"saslContinue\":1,\"conversationId\":1,\"payload\":\"xxx\",\"$db\":\"peek\"},\"numYields\":0,\"reslen\":140,\"locks\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_query\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.199+00:00\"},\"s\":\"D1\", \"c\":\"ACCESS\", \"id\":20226, \"ctx\":\"conn11\",\"msg\":\"Returning user from cache\",\"attr\":{\"user\":{\"user\":\"peek\",\"db\":\"peek\"}}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.199+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn11\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"peek\",\"authenticationDatabase\":\"peek\",\"remote\":\"172.17.0.1:34096\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.199+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.$cmd\",\"command\":{\"saslContinue\":1,\"conversationId\":1,\"payload\":\"xxx\",\"$db\":\"peek\"},\"numYields\":0,\"reslen\":94,\"locks\":{},\"authorization\":{\"startedUserCacheAcquisitionAttempts\":1,\"completedUserCacheAcquisitionAttempts\":1,\"userCacheWaitTimeMicros\":2},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_query\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.204+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":101,\"numYields\":0,\"nreturned\":101,\"queryHash\":\"17830885\",\"queryFramework\":\"classic\",\"reslen\":37956,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":1}},\"Global\":{\"acquireCount\":{\"r\":1}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.208+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":442742,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.227+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":463903,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.237+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":443412,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.255+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":482736,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.266+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":485793,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.276+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":475154,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.286+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":516914,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.299+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":526351,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.309+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":525263,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.321+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":508402,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.331+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":452847,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:11:16.351+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn11\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":7259635789622636012,\"collection\":\"Audit\",\"batchSize\":1000,\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"returnKey\":false,\"showRecordId\":false,\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":7259635789622636012,\"keysExamined\":0,\"docsExamined\":95,\"cursorExhausted\":true,\"numYields\":0,\"nreturned\":95,\"queryHash\":\"17830885\",\"reslen\":41793,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":1}},\"Global\":{\"acquireCount\":{\"r\":1}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:34096\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.374+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.17.0.1:36362\",\"uuid\":\"6d25fb37-6878-4ea0-a0ed-e6496619c659\",\"connectionId\":2,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.376+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn2\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.17.0.1:36362\",\"client\":\"conn2\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.5.0\"},\"platform\":\"Node.js v18.13.0, LE\",\"os\":{\"name\":\"linux\",\"architecture\":\"arm64\",\"version\":\"5.15.49-linuxkit\",\"type\":\"Linux\"}}}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.379+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.17.0.1:36368\",\"uuid\":\"c97a3041-4b94-45a1-ae8e-a2d74d80acaf\",\"connectionId\":3,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.381+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn3\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.17.0.1:36368\",\"client\":\"conn3\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.5.0\"},\"platform\":\"Node.js v18.13.0, LE\",\"os\":{\"name\":\"linux\",\"architecture\":\"arm64\",\"version\":\"5.15.49-linuxkit\",\"type\":\"Linux\"}}}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.381+00:00\"},\"s\":\"D1\", \"c\":\"ACCESS\", \"id\":20226, \"ctx\":\"conn3\",\"msg\":\"Returning user from cache\",\"attr\":{\"user\":{\"user\":\"peek\",\"db\":\"peek\"}}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.381+00:00\"},\"s\":\"D1\", \"c\":\"ACCESS\", \"id\":20226, \"ctx\":\"conn3\",\"msg\":\"Returning user from cache\",\"attr\":{\"user\":{\"user\":\"peek\",\"db\":\"peek\"}}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.387+00:00\"},\"s\":\"D1\", \"c\":\"ACCESS\", \"id\":20226, 
\"ctx\":\"conn3\",\"msg\":\"Returning user from cache\",\"attr\":{\"user\":{\"user\":\"peek\",\"db\":\"peek\"}}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.387+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn3\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"peek\",\"authenticationDatabase\":\"peek\",\"remote\":\"172.17.0.1:36368\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.387+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.$cmd\",\"command\":{\"saslContinue\":1,\"conversationId\":1,\"payload\":\"xxx\",\"$db\":\"peek\"},\"numYields\":0,\"reslen\":125,\"locks\":{},\"authorization\":{\"startedUserCacheAcquisitionAttempts\":1,\"completedUserCacheAcquisitionAttempts\":1,\"userCacheWaitTimeMicros\":2},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.392+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":101,\"numYields\":0,\"nreturned\":101,\"queryHash\":\"17830885\",\"queryFramework\":\"classic\",\"reslen\":37956,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":1}},\"Global\":{\"acquireCount\":{\"r\":1}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.399+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":442742,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.430+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":463903,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.459+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":443412,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.478+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":482736,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.495+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":485793,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.512+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":475154,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.534+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":516914,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.555+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":526351,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.581+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":525263,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.602+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":508402,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.619+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":1000,\"numYields\":1,\"nreturned\":1000,\"queryHash\":\"17830885\",\"reslen\":452847,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-19T18:05:08.636+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"peek.Audit\",\"command\":{\"getMore\":104885741916121744,\"collection\":\"Audit\",\"batchSize\":1000,\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"originatingCommand\":{\"find\":\"Audit\",\"filter\":{},\"lsid\":{\"id\":{\"$uuid\":\"2673876e-e4b2-4c6b-bb05-4fcb06845ab3\"}},\"$db\":\"peek\"},\"planSummary\":\"COLLSCAN\",\"cursorid\":104885741916121744,\"keysExamined\":0,\"docsExamined\":95,\"cursorExhausted\":true,\"numYields\":0,\"nreturned\":95,\"queryHash\":\"17830885\",\"reslen\":41793,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":1}},\"Global\":{\"acquireCount\":{\"r\":1}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:36368\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n", "text": "Hello there. I am upgrading from node connector 3.2.7 to 5.0.x.\nMy mongo server is on 6.0.4. My node version is v18.13.0.On a moderately large query, the new connector is considerably slower.Below I have include…Is there some new connection or query parameters that I should be using, perhaps?\nMany thanks!CodeV3.2.7 Results\n154 ms 11196 recsV5.0.1 Results\n248 ms 11196 recsV3.2.7 logsV5.0.1 logs", "username": "Richard_Evans" }, { "code": "", "text": "The behaviour is not improved by upgrade to 5.5.0", "username": "Richard_Evans" }, { "code": "", "text": "The biggest drop in performance I have found is with the introduction of V4/", "username": "Richard_Evans" }, { "code": "Node JS version 3.2.7: ~10 ms\nNode JS version 5.0.1: ~9.48 ms\nNode JS version 5.5.0: ~9.48 ms\n", "text": "Hi @Richard_Evans and welcome to MongoDB community forums!!Based on the above sample code shared, I tried to replicate in my local environment by switching between versions 3.3.7, 5.0.0 and 5.5.0.Below are the results for different connector versions:The above execution time is calculated based on multiple (~20 ) times execution of the similar query to read the data from the database and tried tested with multiple connector versions.In order to understand concern more clearly and help you with relevant solutions if possible, could you help me with some details regarding the deployment.On a moderately large query,When you mention this, what is meant by large query? Are you targeting large number of collections or the query involves multiple complicated stages?\n4. Is there index defined in the for the documents ?\n5. Can you share the explain output for the query in all different versions?\n6. 
Can you also confirm, if with this upgrade process, there was some change made in the application code which would impact the performance.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Thank you very much for looking into this, Aasawari!My database server is running locally. Both node client and mongodb server are running in docker containers on my MacBook Pro. The difference has been reproduced on a colleague’s machine with a similar setup.I notice your timings are very small, suggesting a small dataset. It material that the query returns a fairly large amount of data (a few thousand documents). If I look at the mongo logs, the v5 client calls getMore every 20ms whereas the V3 client calls more or less twice as fast.I confess I have NOT checked whether the complexity of the documents is material. My documents have a nested structure averaging around 0.5k each.Thanks again,\nRichard", "username": "Richard_Evans" }, { "code": "", "text": "Hello again, Aasawari.The collection has indices but the query is {} - so no predicates.\nThe query and the explain are in the logs above (line 10 of each version)\nThere was no change in the code, which is included in my original query.Thanks!", "username": "Richard_Evans" }, { "code": "", "text": "Hello Aasawari. Were you able to make any progress with this?", "username": "Richard_Evans" }, { "code": "", "text": "Hi @Richard_EvansApologies for the delay in response.It would be helpful if you share some additional details for the deployment.Regards\nAasawari", "username": "Aasawari" }, { "code": "const count = (await audit.find().toArray()).length;", "text": "I just saw this thread.The following is absolutely the worst way to get the number of documents in a collection.const count = (await audit.find().toArray()).length;Can’t you use count() or countDocuments(), an aggregation with $count? If not, you could at least use a projection to only send the _id field over the wire.", "username": "steevej" }, { "code": "", "text": "Thanks @steevej . Sure. I realise that. The point of the ticket is that the paging of the documents is much slower in the current connector than it used to be so I need to retrieve the documents to show this. Audit.find().toArray() was simple code to reproduce the problem. I output the length just as evidence of the size of the retrieval.@Aasawari - Sorry - I had not seen your response, thanks. Some of the information you have asked for is in the original ticket audit trail (the query was {} and the explain is in the verbose log). I will come back to this later with a line or two to populate the collection.)", "username": "Richard_Evans" }, { "code": "MONGO_URL = 'mongodb://insert-your-url';\nconst { MongoClient } = require('mongodb');\nconst collectionName = 'peekTestData';\nconst count = 10000;\n\n(async () => {\n console.log('CONNECTING...');\n const client = await MongoClient.connect(MONGO_URL, { useNewUrlParser: true });\n console.log('CONNECTED');\n const db = await client.db();\n const collection = db.collection(collectionName);\n\n try {\n await collection.drop();\n } catch (e) {\n }\n\n let _id = 0;\n await collection.insertMany(new Array(count).fill({}).map(() => ({ _id: _id++, text })));\n const start = Date.now();\n const found = (await collection.find().toArray()).length;\n console.log(Date.now() - start, found);\n process.exit(0);\n})();\n\nconst text = `\"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce quis tristique lacus, nec dictum sapien. Nunc rutrum ligula a efficitur lacinia. 
Sed sit amet pulvinar sapien, ac feugiat enim. Aliquam ultrices lectus vitae sollicitudin tincidunt. Nulla facilisi. Morbi condimentum ipsum et tortor commodo, sed convallis purus efficitur. Sed tristique pellentesque eros, eu interdum elit vestibulum eu. Praesent varius velit vel ex varius, sed auctor justo elementum. Cras tempor lectus eu risus finibus, eget auctor metus ullamcorper. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Proin facilisis risus ut justo rutrum, vitae maximus massa hendrerit. Aenean non arcu nec justo interdum cursus a a sem. Curabitur tristique nisl at dolor scelerisque, eu venenatis velit varius. Fusce vel lorem ac massa dictum malesuada. Suspendisse potenti.\nSed eu gravida velit, sit amet auctor lorem. Nam sagittis tincidunt orci, vitae hendrerit leo dictum non. Mauris vitae lacinia orci, eu tincidunt ex. Integer sed interdum quam. Suspendisse a risus feugiat, commodo orci id, lobortis lacus. Donec vestibulum auctor semper. Sed ut urna id lacus elementum viverra eu quis sapien. Suspendisse rhoncus ante vitae vulputate rutrum. Etiam sed tempor libero, sed lobortis justo. Vestibulum accumsan condimentum odio, non condimentum lectus pharetra id. Morbi nec malesuada sapien. Fusce sodales mauris eu eros dignissim finibus. Aenean ac orci ullamcorper, commodo tortor eget, consectetur mauris. Nullam vehicula augue sit amet tellus elementum, at dapibus ipsum dapibus. Morbi pulvinar libero vel eros fringilla, eget tincidunt nisl luctus. Ut ullamcorper mi sed est congue, sit amet hendrerit velit venenatis.\nVivamus in leo sed massa interdum finibus. Nunc vitae ligula fringilla, cursus turpis nec, lacinia mi. Mauris interdum nibh ut dui laoreet, id eleifend risus volutpat. Integer rhoncus justo et nisi efficitur, ut tempus neque ultrices. Aliquam in est sit amet tellus hendrerit gravida vitae et massa. Phasellus sed justo eget diam varius viverra. Sed sed neque tristique, hendrerit mauris eget, placerat elit. Integer bibendum justo vitae fermentum consequat. Pellentesque nec ex tincidunt, consequat turpis id, lacinia dolor. Sed hendrerit turpis eu dui fermentum, id semper arcu interdum. Nulla facilisi. Suspendisse quis tellus quis felis gravida tempor.\nPellentesque nec feugiat diam. Suspendisse potenti. Morbi ac iaculis odio. Sed lacinia posuere odio, at dignissim elit auctor in. Mauris consequat velit eu velit feugiat condimentum. Sed non ne`;\n\n", "text": "Hello again. Here this is a slightly longer script which generates its own data in a collection called peekTestData. Note the collection is new and has no indices (other than the default _id). The documents are large but very simple.All I do is find() with no constraint so retrieve all documents. The same v6.04 database is used in both tests. On my box the query runs slower with the new driver. It really does look as if the new driver just pages through data significantly more slowlyFor V3.2.7 ~ 77 ms.\nFor V5.0.1 ~ 148 ms.", "username": "Richard_Evans" }, { "code": "(await collection.find().toArray()).length", "text": "Like I already mentioned(await collection.find().toArray()).lengthis the worst way to get document counts. It is a waste of time to try to understand why it is slower.If the issue is yourpaging of the documents is much slower in the current connectorthen it is the code you need to share. Both your paging and collection.find.toArray might be slower but it does not mean it is for the same reason. 
And the solution is probably different.", "username": "steevej" }, { "code": "", "text": "Thanks for chipping in here. @steevej . As I said, I am very aware that this is not a sensible way to do a count. No need to explain! However my question is about the fact that “await collection.find().toArray()” is much slower on the new connector. Just ignore the .length if it helps. In real life I’d do something useful with the results but that would distract from this question.Under the scenes the connector retrieves a large query in batches. If you look at the log for my original ticket, you can see this happening. The frequency of the paging has become much lower since 3.2.7. Put another way, queries for large datasets are much slower in the current version of the connector.", "username": "Richard_Evans" } ]
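Editor's note: one variable worth isolating here, independent of the driver version, is the cursor batch size. The logs above show a getMore call per ~1000 documents, so requesting larger batches reduces the number of round trips and may narrow down whether the regression is in per-batch overhead. The sketch below reuses the thread's Audit collection with a placeholder connection string; batchSize is a standard find() option, but the value 10000 is arbitrary and only meant for comparison.

```javascript
// Sketch only: same timing harness as above, but with a larger cursor batch size
// so the driver issues fewer getMore round trips. The URL is a placeholder.
const { MongoClient } = require('mongodb');
const MONGO_URL = 'mongodb://user:pass@localhost:27017/peek';

(async () => {
  const client = await MongoClient.connect(MONGO_URL);
  const audit = client.db().collection('Audit');

  const start = Date.now();
  const docs = await audit.find({}, { batchSize: 10000 }).toArray();
  console.log(Date.now() - start, 'ms', docs.length, 'recs');

  await client.close();
})();
```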
Node connector v5 much slower than v3.2.7
2023-05-19T18:23:30.166Z
Node connector v5 much slower than v3.2.7
1,139
https://www.mongodb.com/…4_2_1024x512.png
[ "php" ]
[ { "code": "Warning: file_exists(): Unable to find the wrapper \"channel\" - did you forget to enable it when you configured PHP? in PEAR\\Downloader\\Package.php on line 1511\n\nWarning: is_file(): Unable to find the wrapper \"channel\" - did you forget to enable it when you configured PHP? in PEAR\\Downloader\\Package.php on line 1521\n\nWarning: Trying to access array offset on value of type bool in PEAR\\REST.php on line 186\n\nWarning: Trying to access array offset on value of type bool in D:\\work\\Softwares\\xampp\\php\\pear\\PEAR\\REST.php on line 186\n\nWarning: fsockopen(): Unable to connect to ssl://pecl.php.net:443 (Unable to find the socket transport \"ssl\" - did you forget to enable it when you configured PHP?) in PEAR\\REST.php on line 432\nNo releases available for package \"pecl.php.net/mongodb\"\ninstall failed\n", "text": "hi,\nbeen trying to follow the guide on site for integrating mongodb to laravel on a windows 10 machine\nbut the problem is it is written to php 7.4\nonce working with php 8^ we get a whole host of problems most of which are fixable since they are deprecations or function arguments errors.\nthe big problem is the following errors i get when running “pecl install mongodb”:don’t know how to solve them and have no leads in google.\nany help will be much appreciated and also an updated guideHow to build APIs and web applications integrating MongoDB and the Laravel PHP framework.", "username": "nadav.siv" }, { "code": "", "text": "On Windows, you’ll have to grab the correct DLL for the driver, which are attached to each GitHub release. As documented in the Windows Installation Instructions, you’ll have to ensure to get the correct version for your system.", "username": "Andreas_Braun" } ]
Cant install mongodb with laravel 10 and php 8^
2023-06-28T21:02:23.134Z
Cant install mongodb with laravel 10 and php 8^
1,279
null
[ "cxx", "field-encryption", "c-driver" ]
[ { "code": " No build type selected, default is Release\n bsoncxx version: 3.8.0\n CMake Error at src/bsoncxx/CMakeLists.txt:114 (find_package):\n Could not find a configuration file for package \"libbson-1.0\" that is\n compatible with requested version \"1.24.0\".\n\n The following configuration files were considered but not accepted:\n\n /usr/local/lib/cmake/libbson-1.0/libbson-1.0-config.cmake, version: 1.0.0\n\n\n Configuring incomplete, errors occurred!\n", "text": "Hi, all, I still met the following ERROR while building mongo-cxx-driver today.I’m building mongo-cxx-driver under Ubuntu 22.04, and I’ve already successfully built and installed mongo-c-driver and libmongocrypt from source.By the way, libbson was automatically installed together with libmongoc while I installed mongo-c-driver.Did anybody meet the same issue?Cheers", "username": "Pei_JIA" }, { "code": "/usr/local/lib/cmake/libbson-1.0/libbson-1.0-config.cmake\n/usr/local/lib/cmake/libbson-1.0/libbson-1.0-config-version.cmake\n/usr/local/lib/cmake/libmongoc-1.0/libmongoc-1.0-config.cmake\n/usr/local/lib/cmake/libmongoc-1.0/libmongoc-1.0-config-version.cmake\n", "text": "Problem solved by modifying the versions in the following 4 files:Have to manually sync all versions… Looks like 1.24.0 or 1.24.1 is the newest version, but NEVER being shown in the above configuration files.", "username": "Pei_JIA" }, { "code": "", "text": "Hi @Pei_JIA, I am curious to know the C driver version that you built & installed? Was it not 1.24 or 1.24.1?", "username": "Rishabh_Bisht" }, { "code": "BUILD_VERSION0.0.01.24.1✗ find . -depth -iname \"*.so\" \n./src/libbson/libbson-1.0.so\n./src/libmongoc/libmongoc-1.0.so\n", "text": "It is 1.24.1. But whether I leave BUILD_VERSION as its original value 0.0.0, or I modify it into 1.24.1, what I built out by make is ALWAYS version *1.0.What can I say??? Why is it so??", "username": "Pei_JIA" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Could not find a configuration file for package "libbson-1.0" that is compatible with requested version "1.24.0"
2023-06-29T00:11:53.801Z
Could not find a configuration file for package “libbson-1.0” that is compatible with requested version “1.24.0”
1,180
null
[ "aggregation" ]
[ { "code": "[\n {\n $group: {\n _id: { Field1: \"$Field1\", Field2: \"$Field2\" },\n Field1: {\n $first: \"$Field1\",\n },\n Field2: {\n $first: \"$Field2\",\n },\n Field3: {\n $push: {\n Value: \"$Value\",\n Date: \"$Date\",\n },\n },\n },\n },\n]\n[\n {\n $group: {\n _id: [\"$Field1\", \"$Field2\"],\n Field1: {\n $first: \"$Field1\",\n },\n Field2: {\n $first: \"$Field2\",\n },\n Field3: {\n $push: {\n Value: \"$Value\",\n Date: \"$Date\",\n },\n },\n },\n },\n]\n", "text": "Hello,I stumbled into this yesterday and couldn’t find anything about it online. An identical aggregation became much faster than switching the object key to array. Examples:Slow:Fast:Any clues as to why this is? I assume it has something to do with how MongoDB treats the keys internally and I also noticed that the execution plan changed (the one with object using slot-based, the other not).I haven’t found any examples using an array online, but it yields an identical resultset in our case, just much faster.Thanks!", "username": "alexwchr" }, { "code": "", "text": "Hi @alexwchr and welcome to MongoDB community forums!!Ideally, the group stage would treat both the fields in the similar way. However, to understand more, could you help me with some details to replicate the same in my local environment:Regards\nAasawari", "username": "Aasawari" }, { "code": "explainVersion \"1\"\nexplainVersion \"2\"\n", "text": "assume it has something to do with how MongoDB treats the keys internally and I also noticed that the execution plan changed (the one with object using slot-based, the other not).I haven’t found any examples using an arHi,MongoDB version is 6.0.2. Collection contains 20M docs. Before the group stage we run a $match that filters out approx 1M of these docs.I am not able to provide you with the other details you requested on a public forum since it’s too cumbersome to anonymize the schema details. Is there a way for me to e-mail these to you?What I can say is that with the array I getand with the objectIndicating slot vs non-slot based execution.", "username": "alexwchr" }, { "code": "db.collection.explain('executionStats').aggregate(...)db.collection.getIndexes()db.collection.stats()", "text": "Hi @alexwchrThank you for sharing the details.I am not able to provide you with the other details you requested on a public forum since it’s too cumbersome to anonymize the schema details. Is there a way for me to e-mail these to you?Unfortunately without some document example and explain plan output, it’s difficult to determine what’s really going on. This is because MongoDB query planner could tailor its approach depending on how the documents are structured, indexes on the collection, and other information.Unlike SQL databases with rigid schema, MongoDB does not have the luxury of knowing beforehand the content and datatype of each collection.It will be helpful if you can provide an example document, the output of db.collection.explain('executionStats').aggregate(...) for both queries, the output of db.collection.getIndexes() , and db.collection.stats().I don’t think any of those commands will print an actual document in their output, only some statistics about the collection and the aggregation’s execution. However we do need an example document to recreate what you’re seeing.Regards\nAasawari", "username": "Aasawari" } ]
Grouping on multiple fields faster when using an array as key?
2023-05-23T08:30:41.891Z
Grouping on multiple fields faster when using an array as key?
626
null
[]
[ { "code": "", "text": "I don’t understand why when I connect my application locally in mongodb atlas it works but when I conncete the ip address of the site it doesn’t work.\nI need your help", "username": "Ousmane_DIAW" }, { "code": "270152701727017mongoshping/// example\nping cluster0-shard-00-00.ze4xc.mongodb.net\ntelnet27017/// example\ntelnet cluster0-shard-00-00.ze4cx.mongodb.net 27017\n", "text": "Hi @Ousmane_DIAW - Welcome to the community!I don’t understand why when I connect my application locally in mongodb atlas it works but when I conncete the ip address of the site it doesn’t work.If you’re able to connect to the Atlas cluster locally but not from cPanel, it may be possible that there is a cPanel configuration issue. However to confirm this, we’ll require more information which I have requested below.I’ve not used cPanel in the past but it appears it is a hosting platform which means you may require them to allow outbound traffic to ports 27015 to 27017 as noted here. Additionally, you can see from the following post here that it appears the cPanel support (Namecheap?) unblocked port 27017 which resolved the issue. Hopefully this helps in your case. You may also want to ensure the client attempting to connect from the cPanel end is on the Network Access List for the project.If further assistance is required after confirming the ports are not blocked, please provide the following:Note: You can find the hostname in the metrics page of your clusterRegards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi\nI followed your instructions: port 27017 is not blocked and the ip address is on the network access list.Here is the information you requested:Thanks in advance", "username": "Ousmane_DIAW" }, { "code": "pingtelnetPRIMARYping senechoicetours-shard-00-01.ovhwy.mongodb.net\ntelnet senechoicetours-shard-00-01.ovhwy.mongodb.net 27017\n", "text": "I followed your instructions: port 27017 is not blocked and the ip address is on the network access list.Thanks for providing those details. Could you provide the output from the ping and telnet commands from that client? I cannot see the output from either from the screenshot you attached? This will at least help confirm some level of connectivity from the cPanel client to the Atlas cluster’s node(s).If the hostname:port of the PRIMARY is the value you provided here, then the commands that should be run from the cPanel client are:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi\nI have also problem with connect mongo db to cpanel.\nI have project that run locally but not work on c panel ,\nWhat can I do that its work ?\nThank", "username": "T_B1" }, { "code": "", "text": "Hi @T_B1 - Welcome to the community.I would advise taking a look at the following post’s discussion that also mentions cPanel connection attempts and the associated solutions involving port opening from the cPanel side. 
I presume since you are able to connect locally, that there may be configuration from cPanel’s end that needs to be done for connectivity.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank\nBut can u tel me what is Namecheap that they spoke about it?\nI didnt understand how it connect to C panel\nThank\n\nimage1299×439 42.8 KB\n", "username": "T_B1" }, { "code": "", "text": "PING cluster0-shard-00-00.ze4xc.mongodb.net.secureclouddns.net (51.210.156.16) 56(84) bytes of data.\n64 bytes from server61.secureclouddns.net (51.210.156.16): icmp_seq=1 ttl=54 time=219 ms", "username": "Aditya_Sharma8" }, { "code": "", "text": "telnet: command not found", "username": "Aditya_Sharma8" }, { "code": "monogosh", "text": "@Aditya_Sharma8,I would try connecting from a client hosted your own machine that has access to the cluster over the public internet first. Make sure you’ve allowed the IP address so that you can connect from your own machine and then use something like monogosh or MongoDB Compass. If you can connect it would generally mean the Atlas cluster is fine in which case you might want to contact the cPanel support team with regards to why the client(s) there are unable to connect.", "username": "Jason_Tran" }, { "code": "err = new ServerSelectionError();\n^\nMongooseServerSelectionError: Could not connect to any servers in your MongoDB A tlas cluster. One common reason is that you're trying to access the database fro m an IP that isn't whitelisted. Make sure your current IP address is on your Atl as cluster's IP whitelist: https://www.mongodb.com/docs/atlas/security-whitelist /\nat _handleConnectionErrors (/home/marshaltrans/marshal_app/node_modules/mong oose/lib/connection.js:792:11)\nat NativeConnection.openUri (/home/marshaltrans/marshal_app/node_modules/mon goose/lib/connection.js:767:11) {\nreason: TopologyDescription {\ntype: 'ReplicaSetNoPrimary',\nservers: Map(3) {\n'ac-dp3uge5-shard-00-00.nerrton.mongodb.net:27017' => ServerDescription {\naddress: 'ac-dp3uge5-shard-00-00.nerrton.mongodb.net:27017',\ntype: 'Unknown',\nhosts: [],\npassives: [],\narbiters: [],\ntags: {},\nminWireVersion: 0,\nmaxWireVersion: 0,\nroundTripTime: -1,\nlastUpdateTime: 111847082664,\nlastWriteDate: 0,\nerror: MongoNetworkError: unable to get local issuer certificate\n", "text": "I have the same problem with Cpanel\nEverything is working on localhost and cyclic.sh.\nBut, if I try to run the same simple app it doesn’t connect.I’ve opened the access to cluster.\nHosting support checked connection, and it’s ok.\n\nscreen1980×293 204 KB\n", "username": "Ivan_Rocket" }, { "code": "", "text": "I’ve bought VPS, and all troubles has gone.", "username": "Ivan_Rocket" }, { "code": "", "text": "@Ivan_Rocket Yes we have also brought the VPS", "username": "Aditya_Sharma8" }, { "code": "", "text": "Thanks for the help @Jason_Tran", "username": "Aditya_Sharma8" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
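Editor's note: when testing from a restricted host such as cPanel, it can also help to fail fast and print the underlying driver error rather than waiting out the default 30-second server selection timeout. This is a minimal Node.js sketch with a placeholder URI, not the poster's actual application code.

```javascript
// Minimal connectivity check (Node.js driver). The URI below is a placeholder;
// substitute your own Atlas connection string.
const { MongoClient } = require('mongodb');
const uri = 'mongodb+srv://user:<password>@cluster0.example.mongodb.net/test';

(async () => {
  // A short serverSelectionTimeoutMS surfaces blocked ports or IP allow-list
  // problems within seconds instead of hanging for the default 30s.
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });
  try {
    await client.connect();
    console.log('Connected:', await client.db('admin').command({ ping: 1 }));
  } catch (err) {
    console.error('Connection failed:', err.message);
  } finally {
    await client.close();
  }
})();
```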
Problem connecting my database with my ip address (cpanel)
2022-04-26T17:23:15.139Z
Problem connecting my database with my ip address (cpanel)
4,525
null
[]
[ { "code": "", "text": "No matter what I do I get a SHA256 error when installing through homebrew", "username": "Conrad_Berganza" }, { "code": "", "text": "Hello,Welcome to The MongoDB Community Forums! Could you please share a few more details to help me understand the issue?Lastly, you can follow below documentation to install MongoDB to your machine via Homebrew.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Can't install locally
2023-06-27T22:43:06.663Z
Can&rsquo;t install locally
422
https://www.mongodb.com/…a13b878581cc.png
[ "atlas-search" ]
[ { "code": "{\n\t\"mappings\": {\n\t\t\"dynamic\": false,\n\t\t\"fields\": {\n\t\t\t\"title\": {\n\t\t\t\t\"fields\": {\n\t\t\t\t\t\"english\": {\n\t\t\t\t\t\t\"type\": \"string\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"type\": \"document\"\n\t\t\t}\n\t\t}\n\t},\n\t\"synonyms\": [{\n\t\t\"analyzer\": \"lucene.standard\",\n\t\t\"name\": \"synonym_mapping\",\n\t\t\"source\": {\n\t\t\t\"collection\": \"animeSynCollection\"\n\t\t}\n\t}],\n\t\"storedSource\": {\n\t\t\"include\": [\n\t\t\t\"title.english\",\n\t\t\t\"bannerImage\"\n\t\t]\n\t}\n}\n", "text": "Hi,I was recently working on mongo atlas free tier (M0 cluster) to create a search index for a POC. I needed to include synonyms in the search index to make it easier to search for similar words. But when I try to create the atlas index I encounter the error below.\nmongo948×552 18.2 KB\nThe error doesn’t provide any extra information on where exactly my document is wrong.I followed the documentation and my synonyms collection follows the format (I even queried to check if there were any invalid docs).I also reduced the collection limit to 10000 as mentioned but it didn’t work either.Here is the JSON for the index configuration:", "username": "Ashik_N_A" }, { "code": "", "text": "Hi @Ashik_N_A - Welcome to the community Could you clarify on the query used to verify if there were any invalid docs?Additionally, could you provide the source synonym collection (redacting any personal or sensitive information) a long with a few sample documents that are being searched on so that I can try reproduce this behaviour?Regards,\nJason", "username": "Jason_Tran" }, { "code": "{'_id': ObjectId('6496c27616a431fd95eba537'), 'mappingType': 'equivalent', 'synonyms': ['∞']}\n", "text": "Hey, thanks for the response. I figured out where the problem was. The query I made only checked the format of synonyms (array), and the string (empty strings) but I forgot to consider symbols.The document that caused the index build to fail looked like this:But still, it would be nice if Atlas provided a detailed error, suggesting which document was causing the build to fail (at least the document _id).Regards,\nAshik", "username": "Ashik_N_A" }, { "code": "_id", "text": "Glad to hear you found the document causing the error.But still, it would be nice if Atlas provided a detailed error, suggesting which document was causing the build to fail (at least the document _id).Thanks for the feedback above. I agree that it would be helpful if some further information (e.g. _id value could be returned in the error) to help locate the document(s) causing errors. I’ll raise this with the team internally as feedback.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Search Index creation fails when synonyms list are provided
2023-06-24T10:37:17.091Z
Search Index creation fails when synonyms list are provided
663
null
[]
[ { "code": "", "text": "Hello, I have one question about the online practice quiz.\nHow representative is of the original examination? Is the original test include a timer in the same website or different?Let me know whatever is possible, or if I need to download any additional software on my system.", "username": "crhjik_dejsk" }, { "code": "", "text": "Hello @crhjik_dejsk, Welcome to the MongoDB community forum,If you are talking about practice quizzes then there is no time limit.But in your real exam, you will get instructions when you register for the exam, and also you will get reminders before the exam.You can read the basic instructions for “Online Proctoring Set-up” in this guide,Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.", "username": "turivishal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
About: Practice Quiz
2023-06-28T20:18:25.617Z
About: Practice Quiz
662
null
[]
[ { "code": "", "text": "Hi I am not able to run the db..getSearchIndexes(“default”) it says\nTypeError: db.<>.getSearchIndexes is not a function\nTypeError: db.assetItem.getSearchIndexes is not a function\nif (_fs === “returned”) return _srv;else if (_fs === “threw”) throw _srv;\n^TypeError: db.assetItem.getSearchIndexes is not a function\nat evalmachine.:34:196\nat evalmachine.:48:5\nat evalmachine.:53:3\nat Script.runInContext (vm.js:143:18)\nat Object.runInContext (vm.js:294:6)", "username": "VIKASH_RANJAN1" }, { "code": "mongosh$listSearchIndexes", "text": "Hi @VIKASH_RANJAN1,How are you running the command? From mongosh or through a driver?If through a driver, then please take a look at the $listSearchIndexes documentation.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
db.collection.getSearchIndexes() not work
2023-06-28T05:46:33.846Z
db.collection.getSearchIndexes() not work
272
null
[]
[ { "code": "", "text": "Hello, I hope I’m in the right place with my postI created my own community server.\nIt works well so far and I can also connect\nBut I would like to be able to connect to +srv\nSRV record was created but I still can’t connect.\nAnyone have any ideas what else I might have forgotten?\nOr whether the SRV record is correct at all?Thanks for your answers!", "username": "Tobias_Horacek" }, { "code": "authSourcereplicaSettls=false", "text": "Hi @Tobias_HoracekIn addition to SRV records you can create a TXT record for the connection options such as authSource replicaSet among others.A mongodb+srv uri implicitly enables TLS on the client so ensure your server is setup with TLS or add tls=false to the connection optionsAn example of your working uri, the SRV, TXT records as well as any error messages when connecting will help us assist you with this.", "username": "chris" }, { "code": "SRV\n_mongodb._tcp.mongo.ns.company.com\n0 0 30001 mongo_1.mongo.ns.company.com\n0 0 30002 mongo_2.mongo.ns.company.com\n0 0 30003 mongo_3.mongo.ns.company.com\nTXT\nmongo.ns.company.com\nA\nmongo_1.mongo.ns.company.com 10.0.1.21\nmongo_2.mongo.ns.company.com 10.0.1.22\nmongo_3.mongo.ns.company.com 10.0.1.23\n", "text": "Hi @Tobias_Horacek,I asume you have a replicaset so, you say you already create SRV record but, just like @chris comment, I was wondering if you have created the TXT record and the A records, one per each server in your cluster.This is an example of my config:", "username": "Alejandro_Nino" } ]
MongoDB Community Server URI
2023-06-25T17:17:37.256Z
MongoDB Community Server URI
369
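A small dnspython sketch (assuming the `dnspython` package and the placeholder hostnames from this thread) that resolves the same SRV and TXT records a driver reads for a `mongodb+srv://` URI, which can help confirm the records before looking at TLS or server settings.

```python
# Minimal sketch: check the DNS records a driver reads for a
# mongodb+srv:// URI. Hostname is the placeholder from this thread.
import dns.resolver

host = "mongo.ns.company.com"

# The driver looks up _mongodb._tcp.<host> SRV records for the member list ...
for rr in dns.resolver.resolve("_mongodb._tcp." + host, "SRV"):
    print("member:", rr.target.to_text(), "port:", rr.port)

# ... and an optional TXT record on <host> for extra URI options
# (for example replicaSet=... or authSource=...).
try:
    for rr in dns.resolver.resolve(host, "TXT"):
        print("options:", b"".join(rr.strings).decode())
except dns.resolver.NoAnswer:
    print("no TXT record (that is fine, it is optional)")
```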
null
[ "mongodb-shell", "database-tools", "backup" ]
[ { "code": "", "text": "Hi all,mongodump tool offers --query parameter which allows to filter data to be dumped.\nWhen using mongorestore thereafter, in some cases some data could be still there (causing duplicate key error) and it’s a good practice to clean them up first, before restoring from the dump. But, mongorestore only offers to drop the whole database before restoring from the dump.\nGiven I used --query in mongodump, I would also need to clean up data using the same query, before running mongorestore.\nMeaning, I can’t drop the whole database, I only want to delete data given the query (for a specific date).What is the easiest/simplest/most straightforward way to do this clean up?I used to use:\nmongo myhost:myport/myDb --eval “db.myCollection.remove({date: ISODate(\"${ISODATE}\")})”but the mongo shell is deprecated now, so maybe I need to move to mongosh now?Thank you for your ideas.Ivan", "username": "Ivan_Pilis" }, { "code": "", "text": "both mongo and mongosh should work. mongo is deprecated, but it is still supported.So just use the clients to remote those data manually before restore.", "username": "Kobe_W" } ]
Partial data removal before using mongorestore
2023-06-28T09:36:28.741Z
Partial data removal before using mongorestore
605
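A minimal pymongo sketch of the clean-up step discussed above: delete only the documents matching the same filter that was passed to `mongodump --query`, then run `mongorestore`. Host, database, collection, and date are placeholders.

```python
# Minimal sketch: remove only the documents that match the same filter
# used with `mongodump --query` before running mongorestore, instead of
# dropping the whole database. Names and the date are placeholders.
import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://myhost:27017/")  # placeholder host
coll = client["myDb"]["myCollection"]

iso_date = datetime.datetime(2023, 6, 1)  # same value passed to --query
result = coll.delete_many({"date": iso_date})
print("removed", result.deleted_count, "documents before restore")
```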
null
[ "replication" ]
[ { "code": "", "text": "I have a mongodb instance with a database and collection running on my Windows server. If I wanted to have some sort of continuous replication of the data to a cloud hosted instance (either Azure or Atlas) of mongodb, where should I start in the docs?", "username": "Adam_Cook" }, { "code": "", "text": "Hi @Adam_CookCheck out: https://www.mongodb.com/docs/cluster-to-cluster-sync/current/", "username": "chris" }, { "code": "", "text": "Thanks.\nWhile reading the limitations of mongosync, it seems serverless or shared clusters are not supported in Atlas - this is a shame.", "username": "Adam_Cook" }, { "code": "", "text": "Another option is to start a node in the cloud and add that to the replica set as a hidden member.", "username": "chris" }, { "code": "", "text": "Oh I see.I don’t currently have a cluster now, literally just installed mongodb on my server to use while I’m developing locally. So I guess I would create a cluster, where my local one is the primary, and maybe use a cloud node as a secondary node to the cluster. I think I also need some kind of witness or something, because I won’t have three nodes, right?", "username": "Adam_Cook" }, { "code": "", "text": "You can set up a single node replicaSet, and when you add the cloud node set it up as a hidden member.Primary would always be your local server.", "username": "chris" } ]
Replicate local mongodb to a cloud instance
2023-06-22T08:05:47.666Z
Replicate local mongodb to a cloud instance
668
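A hedged pymongo sketch of the hidden-member approach suggested above: fetch the replica set config, append the cloud node with `hidden: true` and `priority: 0`, and reconfigure. Hostnames are placeholders, and a real reconfiguration should follow the usual caveats (run it against the primary and add one member at a time).

```python
# Minimal sketch: add a cloud node as a hidden, non-electable member of an
# existing replica set. Hostnames are placeholders; run against the PRIMARY
# with a user allowed to run replSetReconfig.
from pymongo import MongoClient

client = MongoClient("mongodb://primary-host:27017/?replicaSet=rs0")  # placeholder

cfg = client.admin.command("replSetGetConfig")["config"]
cfg["version"] += 1
cfg["members"].append({
    "_id": max(m["_id"] for m in cfg["members"]) + 1,
    "host": "cloud-node.example.net:27017",  # placeholder cloud host
    "hidden": True,      # never served to clients
    "priority": 0,       # can never become primary (required for hidden)
    "votes": 0,          # optional: keep elections local to the on-prem node
})
client.admin.command({"replSetReconfig": cfg})
```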
null
[]
[ { "code": "{\n \"roles\": [\n {\n \"name\": \"Admin\",\n \"apply_when\": {\n \"%%user.custom_data.custom_data.role\": \"admin\"\n },\n \"document_filters\": {\n \"write\": true,\n \"read\": true\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n },\n {\n \"name\": \"user-read-write\",\n \"apply_when\": {\n \"%%user.identities.provider_type\": \"custom-token\"\n },\n \"document_filters\": {\n \"read\": {\n \"customer_id\": \"%%user.id\"\n }\n },\n \"read\": true,\n \"write\": true,\n \"insert\":{\n \"%%user.identities.provider_type\": \"custom-token\"\n },\n \"delete\": false,\n \"search\": true\n }\n ]\n}\n\nHow to set user permission so the registered user can insert their own document?", "text": "I am building an app and I have a collection Order and I want to set permission,\nit’s a multi hierarchy app which have Admin and Registered User and Anonymous userNow I want to set roles like this,What I have achieved till isAlthough I have tried and partly successful to set permission but still not enable to set permission for User,I am sharing what I have doneUsing this permission admin is enable to read and create document\nUser is also read their own document but user is unable to insert document", "username": "Zubair_Rajput" }, { "code": "{\n \"name\": \"user-read-write\",\n \"apply_when\": {\n \"%%user.identities.provider_type\": \"custom-token\"\n },\n \"document_filters\": {\n \"read\": {\n \"customer_id\": \"%%user.id\"\n },\n \"write\": true, // Note this is the change\n },\n \"read\": true,\n \"write\": true,\n \"insert\":{\n \"%%user.identities.provider_type\": \"custom-token\"\n },\n \"delete\": false,\n \"search\": true\n }\n", "text": "Hi, I suspect your issue might be that the document_filters.write is missing. Try updating it to this:Not specifying a permissions defaults to effectively setting it to “false”.Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi Tyler thanks for the quick response I have tried that and after add write to true in filter,\nUser is enable to read as I already mention but not enable to insert\nYou can see the screenshot of the postman client. Please help out on this.\n\nCapture855×219 16.1 KB\n", "username": "Zubair_Rajput" }, { "code": "\"%%user.identities.provider_type\": \"custom-token\"", "text": "Hi, 2 things come to mind looking more at this:Having your insert permission as \"%%user.identities.provider_type\": \"custom-token\" is effectively always true given the apply_when is the same. This means that implicitly if this role is selected then insert is always true.Write access implies read access. 
So I suspect you might also want to set “write” to be the same as “read” in the document_filters section.Lastly, can you share with me the document you are trying to insert and possibly a link to the log in the UI?Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "{\n \"customerId\": \"649bc2a58679469c1fed2bf4\",\n \"createdBy\": \"649bc2a58679469c1fed2bf4\",\n \"firstName\": \"zubair\",\n \"lastName\": \"khan\",\n \"customerEmail\": \"[email protected]\",\n \"customerWhatsappNo\": \"+9198999999999\",\n \"orderAmount\": 123376,\n \"status\": \"pending\",\n \"message\": \"this is a message\",\n \"orderDate\": \"2023-04-09\",\n \"lastUpdatedOn\": \"2023-04-09\",\n \"order_items\": [\n {\n \"_id\": \"6466279bec6576a00b5274a0\",\n \"order_qty\": 3\n },\n {\n \"_id\": \"6466279bec6576a00b5274a1\",\n \"order_qty\": 4\n }\n ] \n}\n", "text": "This is the document to be inserted by the Registered User or Admin, anonymous user can’t do any operation on order collection. I have tried and changes so many things in permission object\nbut nothing work and I am reading fundamentals of permission from last three days, spending days and nights.I hope you understand the problem\n\nCapture11090×460 16.1 KB\n", "username": "Zubair_Rajput" }, { "code": "", "text": "After doing this change registered user can insert but now they can read other user’s document too", "username": "Zubair_Rajput" }, { "code": "", "text": "Hi, sorry you are not having a great experience. It would be nice if we provided better details about why permissions are rejecting a change, but that is technically speaking not something permissioning systems should do from a security perspective.I notice that your rule references customer_id but your document has customerId. My hunch is that you need to modify your rule to reference customerId.Let me know if that works.", "username": "Tyler_Kaye" }, { "code": "{\n \"roles\": [\n {\n \"name\": \"Admin\",\n \"apply_when\": {\n \"%%user.custom_data.custom_data.role\": \"admin\"\n },\n \"document_filters\": {\n \"write\": true,\n \"read\": true\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n },\n {\n \"name\": \"user-read-write\",\n \"apply_when\": {\n \"%%user.custom_data.custom_data.role\": \"customer\"\n },\n \"document_filters\": {\n \"read\": {\n \"customer_id\": \"%%user.id\"\n },\n \"write\": {\n \"%%user.custom_data.custom_data.role\": \"customer\"\n }\n },\n \"read\": {\n \"%%user.custom_data.custom_data.role\": \"customer\"\n },\n \"write\": {\n \"%%user.custom_data.custom_data.role\": \"customer\"\n },\n \"insert\":{\n \"%%user.custom_data.custom_data.role\": \"customer\"\n },\n \"delete\": false,\n \"search\": true\n }\n ]\n}\n", "text": "Hello Tyler thanks for being in touch\nI am modifying and doing changes and what I learned I am just experimenting it.\nPlease take a look on this now and I also change customerId to customer_idBut getting the same result Register User can now insert but they can also read other Registered User documents which I don’t let them do.\nI hope we will solve this puzzle", "username": "Zubair_Rajput" }, { "code": "", "text": "Hi Tyler I found an interesting thing I am getting 4 order out of 5 which is correct in UI or User Panel\nbut in postman I am still getting 5 order.\nI think it is User Management or token related problem, but I am sending the same access token in postman.\nYou can see in the screenshot of Registered User Panel, Admin Panel and Postman Client\ncan put some light on this why is 
this happening, what point I am getting now is User Management Related issue, I still don’t know will this work full proof or not\n11353×297 25.7 KB\n\n\n21354×289 35.2 KB\n\n\n3928×563 39.6 KB\n", "username": "Zubair_Rajput" }, { "code": "{\n \"roles\": [\n {\n \"name\": \"Admin\",\n \"apply_when\": {\n \"%%user.custom_data.custom_data.role\": \"admin\"\n },\n \"document_filters\": {\n \"write\": true,\n \"read\": true\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n },\n {\n \"name\": \"user-read-write\",\n \"apply_when\": {\n \"%%user.custom_data.custom_data.role\": \"customer\"\n },\n \"document_filters\": {\n \"read\": {\n \"customer_id\": \"%%user.id\"\n },\n \"write\": \n \"customer_id\": \"%%user.id\"\n },\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": false,\n \"search\": true\n }\n ]\n}\n", "text": "Hi, can you try changing your role to this, I suspect this is more in line with what you are trying to do:Note, that it is not possible with our rules system currently to allow inserts but not updates. Inserts are a higher-priority operation so you can allow writes and not inserts, but you cannot do the opposite.", "username": "Tyler_Kaye" }, { "code": "", "text": "Yuppp it’s working, but how??\nCould you please give me some resource or article for better fundamentals and understanding.\nAll operation working fine.I think there would not be any problem with update on both Admin and RegisterI want to understand the basis of this situation and I also read read flow permission flowcharts\nin the documentation", "username": "Zubair_Rajput" }, { "code": "", "text": "The flow charts defined here should ideally make it clearer (though it sounds like you have seen them): https://www.mongodb.com/docs/atlas/app-services/rules/roles/#write-permissions-flowchartIf you have any feedback for what can be clearer I would be happy to pass it along to the documentation team. Unfortunately in order to be as expressive as they are, rules can sometimes be a bit difficult to fully parse and understand.", "username": "Tyler_Kaye" }, { "code": "{\n \"error\": \"uncaught promise rejection: update not permitted\",\n \"error_code\": \"UncaughtPromiseRejection\",\n \"link\": \"https://realm.mongodb.com/groups/62e61c902459d97e2adf92d1/apps/644fd75ceff19ed45a851cfc/logs?co_id=649c710be40e79c34f34dc09\"\n}\n", "text": "Thanks for the supports, it’s working fine now but\nit stuck in the update query as Register User needs to update the documentIn the current Permission Admin can update the document but again when Register User\ntry to update it return into error.\nI think it should work with the given permission but not working.Please again help me on this too. I think this will solve many problems of mine and other developers too.", "username": "Zubair_Rajput" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to set multiple hierarchy permission to document
2023-06-28T14:12:14.877Z
How to set multiple hierarchy permission to document
574
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 6.0.7 is out and is ready for production deployment. This release contains only fixes since 6.0.6, and is a recommended upgrade for all 6.0 users.Fixed in this release:6.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team", "username": "Britt_Snyman" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 6.0.7 is released
2023-06-28T18:27:32.680Z
MongoDB 6.0.7 is released
1,097
null
[ "realm-web" ]
[ { "code": "", "text": "“Error: Cannot Access Closed Realm Instance”\nHello community members, I encountered an unexpected error while attempting to open a Realm instance. The error message states: “Cannot access realm that has been closed.” This issue is perplexing because the application has been working flawlessly until today. I am seeking assistance from the community to help me resolve this error.", "username": "Mohammed_Kapadia" }, { "code": "", "text": "Cross post over to StackOverflow where more info is needed.", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
**"Error: Cannot Access Closed Realm Instance"**
2023-06-27T06:44:55.529Z
**&rdquo;Error: Cannot Access Closed Realm Instance&rdquo;**
780
null
[ "production", "ruby", "mongoid-odm" ]
[ { "code": "", "text": "Mongoid 7.5.3 is a patch release in 7.5 series with a few bug fixes:The following additional improvements were made:", "username": "Dmitry_Rybakov" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Mongoid 7.5.3 released
2023-06-28T16:39:19.130Z
Mongoid 7.5.3 released
569
null
[ "aggregation", "data-modeling", "transactions", "connector-for-bi" ]
[ { "code": "", "text": "In the world of databases, SQL (Structured Query Language) has long been the standard for querying and manipulating data. However, over the past few years, a new type of database has gained significant popularity among developers and businesses alike: NoSQL databases. MongoDB, in particular, has emerged as a leading NoSQL database, offering a flexible and scalable solution for managing data.If you’re considering making the switch from SQL to MongoDB, there are a few things you should know. In this blog, we’ll take a closer look at the key differences between these two types of databases and provide some tips for making the transition.Before we dive into the differences between SQL and MongoDB, let’s take a quick look at what each of these databases is all about.SQL databases are based on the relational data model, which means that data is organized into tables with predefined relationships between them. This makes SQL databases ideal for managing structured data, such as financial records, customer information, and inventory data. SQL is a standardized language that can be used across many different database platforms, including MySQL, Oracle, and Microsoft SQL Server.On the other hand, MongoDB is a NoSQL database that uses a document-based data model. Instead of tables, data is stored in JSON-like documents with dynamic schemas. This makes MongoDB more flexible and scalable than SQL databases, as it can easily handle unstructured or semi-structured data, such as social media posts, sensor data, and user-generated content.Now that we have a basic understanding of what SQL and MongoDB are, let’s take a closer look at the key differences between these two types of databases:As we mentioned earlier, SQL databases use a relational data model, which means that data is organized into tables with predefined relationships between them. In contrast, MongoDB uses a document-based data model, where data is stored in JSON-like documents with dynamic schemas. This makes MongoDB more flexible and scalable than SQL databases, as it can easily handle unstructured or semi-structured data.SQL is a standardized language that can be used across many different database platforms. It uses a structured query language to retrieve and manipulate data, and supports a wide range of operations such as SELECT, INSERT, UPDATE, and DELETE. MongoDB, on the other hand, uses a query language that is based on JavaScript syntax. It supports a similar range of operations to SQL, but with some key differences in syntax and behavior.SQL databases are generally designed to handle a limited number of concurrent users and transactions. This can make them less suitable for large-scale applications that require high availability and scalability. MongoDB, on the other hand, is designed to be highly scalable and can easily handle large volumes of data and high levels of concurrent traffic.The performance of SQL and MongoDB databases can vary depending on the specific use case and workload. In general, SQL databases are optimized for complex queries that involve multiple tables and joins. MongoDB, on the other hand, is optimized for simple, fast queries on large volumes of data.If you’re considering making the switch from SQL to MongoDB, there are a few things you should keep in mind. Here are some tips to help you make the transition:One of the biggest differences between SQL and MongoDB is the data model. 
Before you start migrating your data, you should take the time to understand how MongoDB’s document-based data model works, and how it differs from the relational data model used by SQL databases.There are a number of tools and utilities available to help you migrate your data from SQL to MongoDB. Some popular options include the MongoDB Connector for BI, which allows you to use SQL.When it comes to NoSQL databases, there are many different options to choose from. Each database has its own strengths and weaknesses, and choosing the right one for your project depends on several factors, such as the type and size of data you need to store, your performance and scalability requirements, and your development team’s skills and expertise.Here are some key factors to consider when choosing a NoSQL database for your project:Different NoSQL databases have different data models, and the choice of data model depends on the type of data you need to store and how you plan to use it. For example, document databases like MongoDB are well-suited for managing unstructured and semi-structured data, while key-value stores like Redis are ideal for managing high-volume data that can be accessed quickly.One of the biggest advantages of NoSQL databases is their ability to scale horizontally, meaning they can easily handle large volumes of data and high levels of traffic. However, the performance and scalability of each database can vary depending on the specific use case and workload. For example, some databases are better suited for read-heavy workloads, while others are better suited for write-heavy workloads.NoSQL databases use different consistency models to ensure data consistency across distributed systems. Strong consistency models, like those used by relational databases, ensure that data is always consistent across all nodes in the system. Eventual consistency models, on the other hand, allow for some degree of inconsistency between nodes, which can improve performance and scalability but may not be suitable for all use cases.NoSQL databases often require different development skills than traditional relational databases. For example, some databases use specialized query languages or APIs, which may require developers to learn new skills. When choosing a NoSQL database, it’s important to consider the skills and expertise of your development team and choose a database that is well-suited to their strengths.Finally, cost is an important factor to consider when choosing a NoSQL database. Some databases are open source and free to use, while others require paid licenses or subscriptions. Additionally, the cost of running and scaling a NoSQL database can vary depending on the specific platform and infrastructure you choose.In conclusion, choosing the right NoSQL database for your project requires careful consideration of several factors, including data model, performance and scalability, consistency model, development skills, and cost. By taking the time to evaluate your needs and priorities, you can choose a database that is well-suited to your project’s goals and requirements.Benefits:Special use cases:Benefits:Special use cases:Benefits:Special use cases:Benefits:Special use cases:Benefits:Special use cases:These are just a few examples of the benefits and special use cases for popular NoSQL databases. 
The right choice of database for your project depends on your specific requirements and use case.There are several reasons why MongoDB may be a good choice for your project:Overall, MongoDB’s flexibility, scalability, rich query language, and community support make it a strong choice for modern applications that require flexible and scalable data storage and retrieval.In conclusion, transitioning from SQL to MongoDB can open up new possibilities for managing and manipulating data in your projects. While SQL databases excel in structured data management, MongoDB’s NoSQL approach with its document-based data model and flexible schema provides greater agility and scalability for handling unstructured and semi-structured data.Remember to carefully consider your project’s requirements and goals when choosing a NoSQL database. Each database, whether it’s MongoDB, Cassandra, Redis, Couchbase, Neo4j, or others, has its unique strengths and special use cases. Understanding these strengths and aligning them with your specific needs will help you make an informed decision.Whether you’re working on a high-traffic web application, real-time analytics platform, or data-intensive project, the world of NoSQL databases provides a wide range of options to suit your needs. Embracing the flexibility, scalability, and performance advantages of NoSQL can help you build robust and efficient applications that meet the demands of today’s data-driven world.So, dive into the world of NoSQL databases, explore their features, and unleash the potential of your data-driven projects. With the right choice of database and a solid understanding of its capabilities, you can take your data management and application development to new heights.Happy NoSQL journey!Hemant Sachdeva\nAssociate Software Engineer\nH & R BlockFeel free to contact me, you can find my handles on HemantSachdeva.dev", "username": "HemantSachdeva" }, { "code": "", "text": "Moving from SQL to MongoDB involves a significant shift in mindset. SQL is a relational database, whereas MongoDB is a NoSQL, document-oriented database. To make the transition, you will need to learn about the MongoDB data model, which is based on collections of documents. Documents in MongoDB are similar to JSON objects and are stored in a binary format called BSON. You will also need to learn MongoDB’s query language, which is based on JSON-like syntax. Migrating data from SQL to MongoDB can be done using various tools and libraries, such as the MongoDB Connector for BI.\nLearn more about Top 17 Emerging Databases to Use in 2023", "username": "Jacelyn_Sia" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can one switch from SQL to MongoDB?
2023-06-17T13:31:31.710Z
How can one switch from SQL to MongoDB?
1,082
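A minimal pymongo sketch of the document model described in the post above: one order stored as a single document with an embedded array, plus the rough equivalent of a SQL SELECT. All names and the connection string are placeholders.

```python
# Minimal sketch of the document model: one order as a single document
# instead of rows joined across several SQL tables. Names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
orders = client["shop"]["orders"]

orders.insert_one({
    "customer": {"name": "Ada", "email": "[email protected]"},
    "items": [
        {"sku": "A-1", "qty": 2, "price": 9.99},
        {"sku": "B-7", "qty": 1, "price": 24.50},
    ],
    "status": "pending",
})

# Roughly: SELECT customer_name, sku FROM orders ... WHERE status = 'pending'
for doc in orders.find({"status": "pending"}, {"customer.name": 1, "items.sku": 1}):
    print(doc)
```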
null
[ "aggregation", "production", "ruby", "mongoid-odm" ]
[ { "code": "", "text": "Mongoid 8.0.4 is a patch release in 8.0 series with a few bug fixes:The following additional improvements were made:", "username": "Dmitry_Rybakov" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Mongoid 8.0.4 released
2023-06-28T13:55:33.697Z
Mongoid 8.0.4 released
590
https://www.mongodb.com/…1_2_1024x576.png
[ "node-js", "mongoose-odm" ]
[ { "code": "", "text": "image1920×1080 246 KB", "username": "Hung_Viet" }, { "code": "", "text": "downgrade the version to 6.10.0", "username": "Divide_And_Conquer_N_A" } ]
Throw new MongooseError('Mongoose.prototype.connect() no longer accepts a callback');
2023-05-26T11:12:26.855Z
Throw new MongooseError(&lsquo;Mongoose.prototype.connect() no longer accepts a callback&rsquo;);
2,842
null
[]
[ { "code": "", "text": "Hello,We are facing issues in integrating LDAP with MongoDB Enterprise trial version, facing issues in binding mongo with LDAP server.\nCan anyone please provide some insights on binding MongoDB with LDAP server.Thanks\nKarthicK", "username": "DreamSKY_CreationS" }, { "code": "", "text": "Hello @DreamSKY_CreationS ,Welcome to The MongoDB Community Forums! Could you please confirm if you are still facing issue with this?\nIf yes, kindly send me a DM here in forums and I’ll get you in touch with the relevant team.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hi @Tarun_Gaur,Yes, we are still facing the issue with binding.\nWe are planning to upgrade to enterprise edition in our organization . But before that we are trying out LDAP integration with Trial Enterprise image.\nCould you please help on this.Thanks,\nKarthicK", "username": "DreamSKY_CreationS" }, { "code": "", "text": "Hey KarthicK,Unfortunately, ldap is not one of my strong suit. There are resources for this atIn case, if these documentations did not solve your issue, please DM me your contact details so I can notify the relevant teams. They should be contacting you shortly.Feel free to reach out again for any help required.Tarun", "username": "Tarun_Gaur" }, { "code": "**mongoldap --user <user_name> --password <password> -f <mongo.conf file path>**\nsecurity:\n authorization: \"enabled\"\n ldap:\n servers: \"activedirectory.example.net\"\n bind:\n queryUser: \"[email protected]\"\n queryPassword: \"secret123\"\n userToDNMapping:\n '[\n {\n match: \"(.+)\",\n ldapQuery: \"DC=example,DC=com??sub?(userPrincipalName={0})\"\n }\n ]'\n authz:\n queryTemplate: \"{USER}?memberOf?base\"\nsetParameter:\n authenticationMechanisms: \"PLAIN\"\n", "text": "Hi Tarun,Thanks for your response.Since we are from the organization, we can’t share official contact details and can’t connect with it. Hence explaining the issues here.We are getting the below errors while trying to execute the Mongoldap command as below.[FAIL] Attempted to bind to LDAP server without TLS with a plaintext password.\n* Sending a password over a network in plaintext is insecure.\n* To fix this issue, enable TLS or switch to a different LDAP bind mechanism.We are following the below configurations (with our LDAP server details )from MongoDB’s official documentation.Source: https://www.mongodb.com/docs/manual/tutorial/authenticate-nativeldap-activedirectory/So how we can implement LDAP bind mechanism to overcome these errors and connect to the LDAP server?Thanks,\nKarthicK", "username": "DreamSKY_CreationS" }, { "code": "", "text": "Sadly, I won’t be able to help you with this error as I do not have experience with LDAP, I would recommend you contact the support via Contact Us | MongoDB and you can also use the in-app chat support by clicking on chat icon on bottom right of the same support page.", "username": "Tarun_Gaur" } ]
Integrate LDAP authentication with MongoDB Enterprise failing with Bind Errors
2023-06-21T07:54:05.204Z
Integrate LDAP authentication with MongoDB Enterprise failing with Bind Errors
751
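The `[FAIL]` quoted above is mongoldap refusing to send the bind password over an unencrypted LDAP connection, so it usually clears once the directory traffic itself is wrapped in TLS (ldaps). As a separate sanity check, a short sketch using the third-party `ldap3` package (an assumption, not part of MongoDB) can confirm that the query user binds over LDAPS at all, using the placeholder host and credentials from the thread.

```python
# Minimal sketch: confirm the bind (query) user can authenticate to the
# directory over LDAPS, independent of mongod/mongoldap. Uses the third-
# party `ldap3` package; host and credentials are the thread's placeholders.
from ldap3 import Server, Connection

server = Server("activedirectory.example.net", port=636, use_ssl=True)
conn = Connection(server,
                  user="[email protected]",
                  password="secret123")

if conn.bind():
    print("bind OK over TLS")
else:
    print("bind failed:", conn.result)
conn.unbind()
```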
null
[]
[ { "code": "", "text": "Hii~! I’m copyleft researcher from Finland Planning to attend MongoDB.local Stockholm this November so I figured I’d come here and check out the community.I actually have some questions from the get go ", "username": "akira" }, { "code": "", "text": "Hello @akira ,Welcome to The MongoDB Community Forums! Who should I contact about the bug in the front page of ‘Manage Cookies’ button not working?Could you elaborate on this issue you’re seeing? Could you either point to the page in question, or provide a screenshot?Should I create an instant-messaging group for the attendees of the .local Stockholm if someone wants to hang out afterwards or ask some ad-hoc questions?The event includes Lunch and Happy Hour time where you could meet and interact with other attendees and staff after the event. You can also join our User Group in Stockholm to engage in discussions and events (even outside the cadence of .local).Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Thanks I’ll definitely join the User Group in Stockholm!The bug happens in https://www.mongodb.com/. Here’s a screenshot of the aforementioned buggy button ‘Manage Cookies’. It seems to point to MongoDB: The Developer Data Platform | MongoDB which, at least in my browser, doesn’t point really anywhere and doesn’t reload the page because it’s an anchor.\n\nimage950×525 17.4 KB\n", "username": "akira" }, { "code": "", "text": "Hi @akira ,I hope you are doing well!I wanted to thank you for reporting this, it has been forwarded to the relevant team.\nFeel free to explore the forums and be in touch!Cheers! ", "username": "Tarun_Gaur" } ]
Hello from Finland
2023-06-16T17:50:20.050Z
Hello from Finland
892
https://www.mongodb.com/…592a19df4b9f.png
[ "swift", "react-native", "cxx", "flutter", "dart" ]
[ { "code": "useAuthuseEmailPasswordAuthuseQuery RealmProvideruseQueryuseObjectuseRealm@realm/reactcreateRealmContext", "text": "What’s happening in the Realm Community? Here’s a quick recap:The Realm JS team has recently released a new version of the Realm React library. This includes the following features to make integrating Realm in your React Native application even easier:For more details, check out the team blog.Realm Flutter 1.3.0 was released on pub.dev. This release includes support for full-text search, decimal 128, and raw binary, as well as numerous other features, fixes and performance improvements. Find out more at Realm product team blog by our Flutter SDK engineer Kasper Nielson.The Realm C++ SDK has been updated to preview that includes support for windows, and schema declarations via macro. The C++ team has curated some examples to try out and is looking for your feedback to further enhance the SDK.Our Sync Engineering Team leader Tyler Kaye has written an article that describes performance of Atlas Device Sync and created scripts to help our developers run their own tests to monitor performance of device sync for their application. Find more details on Team Medium Blog.Kudos to our community member @Rossicler_Junior for finding a solution on sync session reset after changes to custom user data.Thanks to our community member @Josh_Whitehouse for reporting the issue with Flutter SDK and mentioning steps to recreate the issue where server schema is not populated on running the local application and leads to client-reset. The flutter team is working on finding the solution as soon as possible.Henna SinghCommunity Manager\nMongoDB Community Team\nRealm Community ForumKnow someone who might be interested in this newsletter? Share it with them. Subscribe here to receive a copy in your inbox.", "username": "henna.s" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Realm React, Flutter & C++ SDK | Community Update June 2023
2023-06-28T12:14:54.641Z
Realm React, Flutter &amp; C++ SDK | Community Update June 2023
728
null
[ "time-series" ]
[ { "code": "", "text": "Hello everyone,I’ve been using influxdb for time series for the last past year but I wanted to try mongo time series recently. and I’ve been doing some tests however I couldnt find out why it allows duplicate values and a way to prevent it. Basically I’m writing data in bulk every few minutes and usually 1 minute of data is duplicate in the batch. with the same metadata and everything. I expect mongodb to skip these duplicate values. How can I achive this without creating an _id field out of metadataThanks", "username": "Gorkem_Erdogan" }, { "code": "", "text": "oh man, I just noticed even generating _id out of metadata on the application side doesn’t work. It allows duplicate _id fields. I hope there is a fix for this issue.", "username": "Gorkem_Erdogan" }, { "code": "", "text": "Hi @Gorkem_ErdoganIf you’re using MongoDB 5.0.0, 5.0.1, or 5.0.2, please upgrade to the latest version (5.0.4 currently). The same applies if you’re using the 4.4 series. Latest is 4.4.10 in the 4.4 series.There was an issue that was detected in early versions of the 5.0 series that allows duplicate unique index entries (see SERVER-58936).Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hello,I use serverless mongodb version 5.1.0", "username": "Gorkem_Erdogan" }, { "code": "", "text": "Hi @Gorkem_ErdoganThat is interesting. The duplicate _id should be fixed in 5.1.0. Do you mind providing more information about this?Best regards\nKevin", "username": "kevinadi" }, { "code": "{\n \"timestamp\": {\n \"$date\": \"2021-12-01T18:58:00.000Z\"\n },\n \"metadata\": {\n \"chart\": \"candles\",\n \"interval\": \"1min\",\n \"market\": \"USD\",\n \"symbol\": \"REQ\"\n },\n \"open\": 0.7027,\n \"low\": 0.7015,\n \"high\": 0.7083,\n \"close\": 0.7074,\n \"volume\": 163137\n}\n{\n \"timestamp\": {\n \"$date\": \"2021-12-01T18:58:00.000Z\"\n },\n \"metadata\": {\n \"chart\": \"candles\",\n \"interval\": \"1min\",\n \"market\": \"USD\",\n \"symbol\": \"REQ\"\n },\n \"open\": 0.7027,\n \"_id\": \"REQ-USD-1min-candles-1638385080\",\n \"low\": 0.7015,\n \"high\": 0.7083,\n \"close\": 0.7074,\n \"volume\": 163137\n}\n", "text": "Hello,Well it is nothing really special. I’ve been testing it on a jupyter notebook using pymongo Version: 4.0At first, I let mongodb to handle _id field thinking it could handle duplicate data with the same metadata and timestamp. and I think this is really important for a timeseries database. It should not let insert if the metadata and the timestamp are the same. we shouldnt be dealing with generating _id field for this purpose…using insertMany, inserted couple of hundred documents. Below is one of the documents;then noticed it does not handle duplicate data so I decided to generate _id field on my application side. 
_id generated using metadata field and timestamp as below;but still, duplicates were allowed… here is a screenshot\nScreenshot 2021-12-02 144234747×716 27.6 KB\n", "username": "Gorkem_Erdogan" }, { "code": "> db.createCollection(\"test\", { timeseries: { timeField: \"timestamp\" } } )\n> doc = {_id:0, timestamp: new Date()}\n> db.test.insertOne(doc)\n> db.test.insertOne(doc)\n> db.test.insertOne(doc)\n> db.test.find()\n[\n { timestamp: ISODate(\"2021-12-03T08:43:50.503Z\"), _id: 0 },\n { timestamp: ISODate(\"2021-12-03T08:43:50.503Z\"), _id: 0 },\n { timestamp: ISODate(\"2021-12-03T08:43:50.503Z\"), _id: 0 }\n]\n> show collections\ntest [time-series]\nsystem.buckets.test\nsystem.views\ntest> db.system.buckets.test.find()\n[\n {\n _id: ObjectId(\"61a9d8947dfd3e5b32de6144\"),\n control: {\n version: 1,\n min: { _id: 0, timestamp: ISODate(\"2021-12-03T08:43:00.000Z\") },\n max: { _id: 0, timestamp: ISODate(\"2021-12-03T08:43:50.503Z\") }\n },\n data: {\n _id: { '0': 0, '1': 0, '2': 0 },\n timestamp: {\n '0': ISODate(\"2021-12-03T08:43:50.503Z\"),\n '1': ISODate(\"2021-12-03T08:43:50.503Z\"),\n '2': ISODate(\"2021-12-03T08:43:50.503Z\")\n }\n }\n }\n]\ntestsystem.buckets.test", "text": "Hi @Gorkem_ErdoganA timeseries collection is quite different from a normal MongoDB collection. This is because although superficially it behaves like a normal collection, MongoDB treats time series collections as writable non-materialized views on internal collections that automatically organize time series data into an optimized storage format on insert (see Time Series Collections: Behavior).For this reason, indexing a time series collection involves creating an index in the underlying internal collection, instead of creating it on the visible collection. There are index types that are unsupported at this time: TTL, partial, and unique (see Time Series Collection Limitations: Secondary Indexes).For example, let’s create a new timeseries collection:then create a document to insert:and let’s insert three of those into the collection:if you then see the content of the collection, all three documents with identical content will be present:however, if you check the collection list, there is a mystery collection there:if you delve into the mystery collection, you’ll see how the test collection is actually stored:so the test collection is just a view to the actual system.buckets.test. Inside the actual underlying collection, the three documents are stored in a single “bucket”. This is why as it currently stands, you cannot create a unique index on timeseries data.In conclusion, timeseries collection is a special collection type that is basically a view into a special underlying collection, thus it behaves differently from a normal MongoDB collection. This is done to allow MongoDB-managed storage of timeseries documents that is otherwise quite expensive to do if it’s done using a regular MongoDB document. However, having this capability also comes with some caveats, namely the unique index limitation that you came across.Having said that, if you feel that having a secondary unique index is a must, you can create the collection in the normal manner, but lose the compactness of the timeseries collection storage. 
I suggest to benchmark your workload, and check if you can manage with a normal collection to store your data if the features you lose by using timeseries are important to your use case.Hopefully this is useful.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks for your details explanations @kevinadi\nLike @Gorkem_Erdogan I was assuming timeseries collections would have a mechanisms to ensure uniqueness of timestamp/sensor data.\nIt’s clearly a must have features for time series as network unstable connectivity may cause data to be sent multiple times.I think I’ll go with this suggestions of querying data only through aggregation pipeline with a group stage to remove duplicates.\nBut it has impact in performance and complexity of development…", "username": "Sylvain_GOUMY" }, { "code": "", "text": "Any updates to this issue? We are building a time series collection for sensor data that might send duplicates and tried to require uniqueness on the timestamp but it still allows duplicates. I’d love to leverage the performance improvements of the collection type but that a deal breaker if we can’t prevent duplication.\nthanks,", "username": "Tyler_Hudgins" }, { "code": "", "text": "Same here. Planned to use time series collections, yet not being able to enforce uniqueness is a show stopper for us.", "username": "Benjamin_Behringer" }, { "code": "", "text": "Not a blocker for us, but I do wish we had at least a way to insert conditional greater than last timestamp. That would allow us to ingest with redundancy and make our pipeline more reliable. With the new index based time sorted queries for time series in 6.0, I’d think this may not be too expensive to add?", "username": "Christian_Rorvik" }, { "code": "", "text": "It would be helpful by introducing the optional add on functionality of overwriting the document if it is already exists for an insert operation based on “_id” field for only timeseries collection in upcoming releases. Coz if we have huge amount of sensors then data should be blindly written into timeseries collection without querying for duplicate check (Conditional inserts will definitely impact the performance).", "username": "Sainath_Kotapati" } ]
Duplicate Data Issue
2021-12-01T19:24:49.769Z
Duplicate Data Issue
13,447
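A minimal pymongo sketch of the workaround described above: since time series collections do not support unique indexes, keep a regular collection with a unique compound index and let duplicate inserts be skipped. The URI, names, and sample documents are placeholders modelled on the thread.

```python
# Minimal sketch: a regular collection with a unique compound index, so
# duplicate (symbol, interval, timestamp) inserts are dropped instead of
# stored. Time series collections cannot enforce this today.
import datetime
from pymongo import MongoClient, ASCENDING
from pymongo.errors import BulkWriteError

client = MongoClient("mongodb://localhost:27017/")  # placeholder URI
candles = client["market"]["candles"]

candles.create_index(
    [("metadata.symbol", ASCENDING),
     ("metadata.interval", ASCENDING),
     ("timestamp", ASCENDING)],
    unique=True,
)

ts = datetime.datetime(2021, 12, 1, 18, 58)
doc = {"timestamp": ts,
       "metadata": {"symbol": "REQ", "interval": "1min"},
       "open": 0.7027, "close": 0.7074}
batch = [doc, dict(doc)]  # second entry is a deliberate duplicate

try:
    # ordered=False keeps inserting after a duplicate-key error,
    # so only the duplicates are dropped.
    candles.insert_many(batch, ordered=False)
except BulkWriteError as err:
    dupes = [e for e in err.details["writeErrors"] if e["code"] == 11000]
    print(f"skipped {len(dupes)} duplicate documents")
```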
https://www.mongodb.com/…6_2_1024x150.png
[ "realm-web" ]
[ { "code": "", "text": "Hi, I have upgraded our cluster from Shared to the M10. However, our web application no longer connects. I do see a set of errors on the Realm Logs.\nScreenshot 2022-08-11 at 11.42.112904×426 96.3 KB\n\n\nScreenshot 2022-08-11 at 11.42.281740×1154 259 KB\nI assumed a migration to a dedicated cluster should be seamless and our app should have connected without issues. Now we have clients not able to see their data and we are not sure what to do. Any help would be appreciated.", "username": "Sani_Yusuf" }, { "code": "", "text": "Hi @Sani_Yusuf ,I assumed a migration to a dedicated cluster should be seamlessNo, updating from a shared to a dedicated cluster isn’t seamless, and has indeed a documented procedure to go through. Shared and dedicated clusters have a completely different structure, and some processes need to be redirected.If you have Device Sync active, Terminate and re-Enable it, and try to re-assign the Data Source. If that still doesn’t work, we’d have to look into the setup, so we’d need to know some more details (like, app ID and such): if you have a Support contract, opening a case would speed up resolution.", "username": "Paolo_Manna" }, { "code": "", "text": "Hi,\nI dont have SYNC. I only have a realm-web application. The documented procedure is similar to what I followed, as the SYNC side doesn’t apply to me. Any Ideas?", "username": "Sani_Yusuf" }, { "code": "", "text": "Hi @Sani_YusufAny Ideas?Yes, plenty of possibilities there, but we’d need to have a look, a screenshot of a list isn’t much to work on…Again, any further detail about your app is needed here: I can understand you don’t want to put too much on a public forum (hence the suggestion to open a Support case, if you can), but we’ve to start from somewhere…", "username": "Paolo_Manna" }, { "code": "", "text": "Never mind, I’ve been able to find the app, looking into it now: is there any additional detail you can share?", "username": "Paolo_Manna" }, { "code": "", "text": "Hi Paolo, this is all I have is this as its the only error. I have clients unable to use the app and its scary because this migration should have been seamless but on this occasion, I think I may have found a bug. All I can tell you is the Cluster was upgraded from Shared M1 to M10 hosted on Amazon in London.", "username": "Sani_Yusuf" }, { "code": "", "text": "Have you tried what I suggested, i.e. changing/re-assigning the Data Source? At first, you can do a simple change (like, toggle the MongoDB Connection String and save the change), and if that doesn’t work, deleting and re-assigning it from scratch.", "username": "Paolo_Manna" }, { "code": "", "text": "Hi Paolo, Looks like enabling the String makes everything work. I think this should either be documented or be escalated as a bug which should be fixed as it has terminal consequences for applications using realm-web. . Super thanks", "username": "Sani_Yusuf" }, { "code": "", "text": "That’s great @Sani_Yusuf ,Glad I could be of help.No, the Connection String change was a workaround to force the setup to re-evaluate the Data Source the app wasn’t able to connect to, you can toggle it back now.Yes, the problem may well be a bug, I’ve opened an internal ticket for that, thanks for your collaboration, hopefully it helps others that may incur in the same issue (better for it not to happen anymore, obviously)", "username": "Paolo_Manna" }, { "code": "", "text": "Hi @Paolo_Manna Please help Me.\nAlmost the same thing here. 
We upgraded from M2 to M10 and now nothing is connecting.\nWe are running everything in Atlas.\nThe error I get in Compass:\n“Hostname/IP does not match certificate’s altnames: Host: xxxx-shard-00-00.jrraz.mongodb.net. is not in the cert’s altnames: DNS:*.mongodb.net, DNS:mongodb.net”", "username": "wiliam_buzatto" }, { "code": "", "text": "Update! As we are using Kubernetes, I just had to redeploy our services and everything went back to working.", "username": "wiliam_buzatto" } ]
MongoDBError Error After Upgrading To M10 From Shared
2022-08-11T10:43:56.724Z
MongoDBError Error After Upgrading To M10 From Shared
3,711
null
[ "aggregation", "time-series", "views" ]
[ { "code": "exports = async function (request, response) {\n const mongo = context.services.get(\"PDCCluster\");\n collection = mongo.db(\"PDC\").collection(\"Events_TS\");\n const pipeline = [{\n \"$group\": {\n \"_id\": {\n \"ProblemCorrelation\": \"$ProblemCorrelation\",\n \"pyTextValue(1)\": {\n \"$substr\": [\n {\n \"$cond\": [\n {\n \"$ne\": [\"$GeneratedDateTime\", null]\n },\n {\n \"$toString\": \"$GeneratedDateTime\"\n },\n \"\"\n ]\n },\n 0,\n 13\n ]\n },\n \"RuleApplication\": \"$RuleApplication\",\n \"pyTextValue(2)\": {\n \"$substr\": [\"$requestorid\", 0, 1]\n },\n \"MsgID\": \"$MsgID\",\n \"ClusterName\": \"$ClusterName\"\n },\n \"pySummaryValue(1)\": {\n \"$sum\": \"$KPIValue\"\n },\n \"pySummaryCount(1)\": {\n \"$sum\": 1\n }\n }\n },\n {\n \"$sort\": {\n \"pyTextValue(1)\": 1,\n \"_id.ProblemCorrelation\": 1,\n \"_id.pyTextValue(2)\": 1,\n \"_id.RuleApplication\": 1,\n \"_id.MsgID\": 1,\n \"_id.ClusterName\": 1\n }\n }\n ] ;\n documents = await collection.aggregate(pipeline).toArray();\n return {documents}}\nhttps://ap-south-1.aws.data.mongodb-api.com/app/data/endpoint/PDCAggregate\n{\n\"collection\":\"Events_TS\",\n\"database\":\"PDC\",\n\"dataSource\":\"PDCCluster\",\n\"pipeline\": [\n{\n \"$match\": {\n \"$and\": [\n {\n \"GeneratedDateTime\": {\n \"$gte\": \"2023-06-01T07:00:20.435Z\",\n \"$lt\": \"2023-06-20T07:20:20.435Z\"\n }\n }\n ]\n }\n },\n{ \"$merge\" : { \"into\" : { \"coll\": \"Aggreated_Events\" }, \"on\": \"_id\", \"whenNotMatched\": \"insert\" } }]}\n", "text": "I am using M0 free tier shared cluster for some Mongo DB POC. I am trying to aggregate data from a Time Series collection (Events_TS )and putting the output of the aggregation pipeline into another collection(Aggreated_Events). For that, I found that The Data API needs to hit a custom HTTPS endpoint as a System user. So I have created a custom HTTPS endpoint with a function. The function looks like belowFunction:-URL which I am trying to hit from Data API:-The JSON which I am trying to pass thru Data APi as requested:-But on sending the above JSON as a request to the URL I am getting 204 no content response. Please help me out here.", "username": "Priyabrata_Nath" }, { "code": "Respond With Result", "text": "Hey @Priyabrata_Nath,Welcome to the MongoDB Community!I am getting 204 no-content response.The 204 status code is not an error. It indicates that the server successfully processed the request, but is not returning anything.Could you please confirm if the “Respond With Result” option is enabled in your HTTPS endpoints? This could be the reason why it is not providing any response and returning a 204 No Content status.\nimage2970×692 118 KB\nAlso, please ensure that you test the functionality of your Atlas function to verify if it is working properly and returning the output results.Looking forward to hearing from you.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Data API using $merge/$out is getting 204 no content response
2023-06-20T05:53:03.970Z
Data API using $merge/$out is getting 204 no content response
726
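A minimal pymongo sketch that runs the same `$match` + `$merge` pipeline directly against the cluster, which makes it easier to verify the merge output independently of the HTTPS endpoint response. It assumes `GeneratedDateTime` is stored as a BSON date, so the bounds are datetimes rather than the strings used in the request body; the connection string is a placeholder.

```python
# Minimal sketch: run the $match + $merge pipeline with a driver so the
# merged output is easy to verify. Assumes GeneratedDateTime is a BSON date.
import datetime
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:[email protected]/")  # placeholder
events = client["PDC"]["Events_TS"]

pipeline = [
    {"$match": {"GeneratedDateTime": {
        "$gte": datetime.datetime(2023, 6, 1, 7, 0, 20),
        "$lt": datetime.datetime(2023, 6, 20, 7, 20, 20),
    }}},
    {"$merge": {"into": "Aggreated_Events", "on": "_id",
                "whenMatched": "replace", "whenNotMatched": "insert"}},
]

events.aggregate(pipeline)  # $merge pipelines return no documents
print(client["PDC"]["Aggreated_Events"].estimated_document_count(), "docs in target")
```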
https://www.mongodb.com/…7_2_1023x449.png
[ "atlas-cluster" ]
[ { "code": "", "text": "I upgraded from free 500MB tier to M10. CLuster Upgrade has been stuck for more than 3 hours.\nNot sure what to do further. Can some one help me here ?Our 500 MB is full and our Production is down.Stuck in this forever.\n\nScreenshot 2023-06-28 at 12.06.39 PM3188×1398 286 KB\n", "username": "Rajath_S_K" }, { "code": "", "text": "Hey @Rajath_S_KCLuster Upgrade has been stuck for more than 3 hours.\nNot sure what to do further. Can some one help me here ?Our 500 MB is full and our Production is down.Could I ask that you contact the in-app chat support as soon as possible regarding this issue? The in-app chat support does not require any payment to use and can be found at the bottom right corner of the Atlas UI:Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "HI Kushagra,Thanks for response. I contacted support via email and its sorted now.\nBut its strange that this happens.Regards,\nRajath", "username": "Rajath_S_K" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Upgrade from Free to AWS Cluster is stuck for hours
2023-06-28T06:01:32.538Z
Upgrade from Free to AWS Cluster is stuck for hours
468
null
[ "cxx" ]
[ { "code": "MONGOCXX_EXPORTS create_collection_deprecated& max(std::int64_t max_documents)\nError (active)\tE0106\tinvalid type for a bit field\tDataBase\t103\t\tlibs\\mongo-cxx-driver-r3.8.0\\src\\mongocxx\\options\\create_collection.hpp\n find& max(bsoncxx::document::view_or_value max)\nError\tC2146\tsyntax error: missing ')' before identifier 'max'\tDataBase\t301\n\\libs\\mongo-cxx-driver-r3.8.0\\src\\mongocxx\\options\\find.hpp\n", "text": "HiI followed Getting Started with MongoDB and C++ | MongoDB\nI have built the following correctly on windows 10.and I have all libs available now,Now I need to use Mongo DB c++ in the VS2019 x64 DLL project, I have all C++ include paths and lib paths and libs in linker when I include #include <mongocxx/client.hpp>\nin my cpp file and building the project, I get many errors. look like this is only for a DLL project. is there any macro to define in the preprocessor? I already tried MONGOCXX_EXPORTS but still, the same errors appeared …very 1st errorI get errors inI guess the same issue is mentioned on another topic Can build console app with mongocxx but not DLL project VS2019", "username": "Galaxy_Core" }, { "code": "#define NOMINMAX#include <windows.h>", "text": "Hi @Galaxy_Core,You may be colliding with the windows min and max macros.\nWhat happens if you add #define NOMINMAX before #include <windows.h> in your code? Does the problem go away?See this article for reference - The min/max problem in C++ and Windows", "username": "Rishabh_Bisht" } ]
VS2019 x64 dll project build issues with mongo-cxx-driver-r3.8.0
2023-06-25T19:08:28.721Z
VS2019 x64 dll project build issues with mongo-cxx-driver-r3.8.0
600
null
[ "queries", "performance" ]
[ { "code": "", "text": "My max time for queries is 2secs (2000ms).However when I try to manually query the script via Studio3t, there is nothing wrong with the query as it finishes executing after 0.002 secs.I am running on Azure VM Ubuntu 18.04 2vCPU 8GB RAMMy wired tiger cache is set to 5GBI’ve checked my server resources that timeframe but its usage seems normal.Here is a sample of my error log.{“t”:{\"$date\":“2022-09-20T12:27:31.486+08:00”},“s”:“W”, “c”:“QUERY”, “id”:23798, “ctx”:“conn225631”,“msg”:“Plan executor error during find command”,“attr”:{“error”:{“code”:50,“codeName”:“MaxTimeMSExpired”,“errmsg”:“operation exceeded time limit”},“stats”:{“stage”:“PROJECTION_SIMPLE”,“nReturned”:1,“works”:1,“advanced”:1,“needTime”:0,“needYield”:0,“saveState”:1,“restoreState”:0,“isEOF”:1,“transformBy”:{},“inputStage”:{“stage”:“IDHACK”,“nReturned”:1,“works”:1,“advanced”:1,“needTime”:0,“needYield”:0,“saveState”:1,“restoreState”:0,“isEOF”:1,“keysExamined”:1,“docsExamined”:1}},“cmd”:{“find”:“UserApiToken”,“filter”:{\"_id\":“324018-1608032429-lQcadddddDdzicbLCBfKs7HW5VNFz8h-kXoDjXVgcMP-igd”},“projection”:{“UserId”:1,\"_id\":1,“DeviceId”:1,“RequestComingFromType”:1,“DeviceName”:1,“IpAddress”:1,“CountryCode”:1,“ExpiredDate”:1,“DefaultExchange”:1},“limit”:1,“maxTimeMS”:2000,\"$db\":“igd_db”,“lsid”:{“id”:{\"$uuid\":“c18bc29f-81a2-45f8-8b88-c0dafd7ec56w”}},\"$clusterTime\":{“clusterTime”:{\"$timestamp\":{“t”:1663648049,“i”:507}},“signature”:{“hash”:{\"$binary\":{“base64”:“2H3SxchRR2H3I8S/caW1Xg4Qyco=”,“subType”:“0”}},“keyId”:709645542335163252}}}}}", "username": "jca" }, { "code": "", "text": "@jca any luck in findong out whats happening here? I’m observing the same behavior in repl mongo 5. In profiler i do see these queries taking over 30mins. Maybe its a driver bug?", "username": "Nicholas_Galantowicz" }, { "code": "", "text": "MaxTimeMSExpiredthe query doesn’t finish within time limit.", "username": "Kobe_W" } ]
I'm getting random "Plan executor error during find command" errors
2022-09-20T06:28:16.830Z
I&rsquo;m getting random &ldquo;Plan executor error during find command&rdquo; errors
2,395
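A minimal pymongo sketch showing where the 2000 ms limit in that log line comes from: it is the `maxTimeMS` the client attaches to the query, so it is raised (or removed) in the application code that issues the find. The URI and `_id` value are placeholders.

```python
# Minimal sketch: the 2000 ms limit in the log is the maxTimeMS the client
# sends with the query; it is adjusted on the application side.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # placeholder URI
tokens = client["igd_db"]["UserApiToken"]

cursor = tokens.find(
    {"_id": "324018-...-token-id"},            # placeholder _id from the log
    {"UserId": 1, "DeviceId": 1, "ExpiredDate": 1},
).limit(1).max_time_ms(10000)                  # allow 10 s instead of 2 s

for doc in cursor:
    print(doc)
```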
null
[ "golang" ]
[ { "code": "MainRepSet:PRIMARY> db.getUser(\"ingestion_user\", {\n... showCredentials: true\n... });\n{\n\t\"_id\" : \"admin.inge_user\",\n\t\"userId\" : UUID(\"2202a545-f284-48c3-a185-58a7fd355c3c\"),\n\t\"user\" : \"ingestion_user\",\n\t\"db\" : \"admin\",\n\t\"credentials\" : {\n\t\t\"SCRAM-SHA-1\" : {\n\t\t\t\"iterationCount\" : 10000,\n\t\t\t\"salt\" : \"salt1\",\n\t\t\t\"storedKey\" : \" storedkey11dummy\",\n\t\t\t\"serverKey\" : \" serverKey2somedummy\"\n\t\t}\n\t},\n\t\"roles\" : [\n\t\t{\n\t\t\t\"role\" : \"readWrite\",\n\t\t\t\"db\" : \"ads\"\n\t\t}\n\t],\n\t\"mechanisms\" : [\n\t\t\"SCRAM-SHA-1\"\n\t]\n}\npayloadsaltstoredKeyserver key", "text": "Here is my use case:I would like to fetch a user’s password from a secret manager service and check that password against the MongoDB user’s password. when the password is not matching, I should update it on the MongoDB side.\nI ran the below query which gives the credentials responsewithout using a connection to the respective user, how can I validate my plain password against the above credentials payload, salt, storedKey, server key? I would like to validate the logic using Golang.\nPlease, let me know if there is any algorithm for how the plain password can be validated.", "username": "ganesh_rs" }, { "code": "", "text": "Hey @ganesh_rs,Welcome to the MongoDB Community!I would like to fetch a user’s password from a secret manager service and check that password against the MongoDB user’s password. when the password is not matching, I should update it on the MongoDB side.Based on the details you’ve shared I think that you’re looking for a way to compare a plaintext password stored in some system against the SCRAM-SHA-1 password hash stored in MongoDB. Is this correct?It’s important to note two things:Further, if you need to implement federated login functionality, it is best to leverage established protocols like Kerberos. These protocols provide secure authentication and single sign-on capabilities.However, feel free to reach out, in case you have any further questions.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
How to validate a plain password (clear text) against credentials ("SCRAM-SHA-1" type)
2023-06-15T07:05:14.516Z
How to validate a plain password (clear text) against credentials (“SCRAM-SHA-1” type)
828
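For illustration only, and with the caveat from the reply above that comparing credential hashes is not a recommended authentication flow: a sketch of how MongoDB derives the SCRAM-SHA-1 `storedKey` from a plaintext password (PBKDF2 over an MD5 pre-hash of `user:mongo:password`, as the drivers implement it). The credential values below are placeholders, not the dummy strings from the thread.

```python
# Illustration only: derive a SCRAM-SHA-1 storedKey the way the server and
# drivers do, then compare it to the value shown by db.getUser(). Not a
# recommended way to manage credentials; rotating the password is simpler.
import base64, hashlib, hmac

def scram_sha1_stored_key(username: str, password: str,
                          salt_b64: str, iterations: int) -> str:
    # MongoDB's SCRAM-SHA-1 first hashes the password MONGODB-CR style.
    hashed_pwd = hashlib.md5(f"{username}:mongo:{password}".encode()).hexdigest()
    salted = hashlib.pbkdf2_hmac("sha1", hashed_pwd.encode(),
                                 base64.b64decode(salt_b64), iterations, dklen=20)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha1).digest()
    stored_key = hashlib.sha1(client_key).digest()
    return base64.b64encode(stored_key).decode()

# Placeholder credentials; use the real salt/iterationCount/storedKey
# returned by db.getUser(..., { showCredentials: true }).
creds = {
    "salt": base64.b64encode(b"0123456789abcdef").decode(),
    "iterationCount": 10000,
    "storedKey": "replace-with-storedKey-from-db.getUser",
}
candidate = scram_sha1_stored_key("ingestion_user", "plain-password-to-check",
                                  creds["salt"], creds["iterationCount"])
print("password matches" if hmac.compare_digest(candidate, creds["storedKey"])
      else "password differs -> update it")
```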
null
[ "queries", "crud" ]
[ { "code": "[\n {\n name: \"foo\",\n equipment: {\n hand: { name: \"Broken Sword\" }\n gloves: null,\n boots: null,\n }\n inventory: {\n { name: \"some item\" }, null, null, null,...\n }\n },\n ...\n]\n// unequip item\ndb.characters.updateOne(\n // an item is equipped in the hand slot, and there is at least one inventory space \n { name: \"foo\", \"equipment.hand\": { $ne: null }, inventory: { $all: [null] } },\n [\n { $set: \n {\n \"inventory.1\": \"$equipment.hand\",\n \"equipment.hand\": null\n } \n }\n ]\n)\n", "text": "I have a ‘characters’ collection that looks like this:My question is how can I update this document to have equipment.hand moved to first available index of inventory (equal to null), and equipment.hand set to null afterwards?I tried the following to update index 1, but it sets ALL inventory slots to equipment.hand item.\n(I also need it to find the first available inventory slot automatically)", "username": "falekdev" }, { "code": "", "text": "That is overly complicated.Get rid of all the nulls and just $push and $pull from inventory with $set and $unset from equipment.", "username": "steevej" }, { "code": "", "text": "It is a hard requirement that items in the inventory can be moved to any position, not just one after another.\nSo the inventory might look like this:\n[ null, null, { name: “item 1” }, null, { name: “item 2” } ]", "username": "falekdev" }, { "code": "[\n { name : \"item 1\" , inventory : 2 } ,\n { name : \"item 2\" , inventory : 4 } ,\n { name : \"Broken Sword\" , equipment : \"hand\" }\n]\n[\n { name : \"item 1\" , inventory : 2 } ,\n { name : \"item 2\" , inventory : 4 } ,\n { name : \"Broken Sword\" , inventory : 1 }\n]\n", "text": "This hard requirement looks more like a presentation layer requirement than a data storage requirement.If position is important, position should be a field and the same can be achieve with the following that would allows simpler code:And moving the Broken Sword from the hand to inventory gives:If you limit your data storage model to what you want at the presentation layer you deprive your self of having the most efficient storage and processing.", "username": "steevej" } ]
Move value from a field to first index that equals null in a different field in the same document
2023-06-21T20:57:10.056Z
Move value from a field to first index that equals null in a different field in the same document
512
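A pipeline-update sketch for the original document shape, assuming inventory really is an array of item-or-null slots as in the question: compute the first free slot with $indexOfArray, rebuild the array with $map so only that slot changes, and clear equipment.hand in a later stage so the earlier stage still reads the old value.

```js
db.characters.updateOne(
  { name: "foo", "equipment.hand": { $ne: null }, inventory: { $all: [null] } },
  [
    // first free slot (the filter guarantees at least one null element exists)
    { $set: { firstFree: { $indexOfArray: ["$inventory", null] } } },
    // rebuild inventory, replacing only that slot with the equipped item
    { $set: {
        inventory: {
          $map: {
            input: { $range: [0, { $size: "$inventory" }] },
            as: "i",
            in: {
              $cond: [
                { $eq: ["$$i", "$firstFree"] },
                "$equipment.hand",
                { $arrayElemAt: ["$inventory", "$$i"] }
              ]
            }
          }
        }
    } },
    // clear the hand slot and drop the temporary field
    { $set: { "equipment.hand": null } },
    { $unset: "firstFree" }
  ]
)
```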
null
[ "app-services-cli" ]
[ { "code": "realm-cli push 'path/to//functions/folder'realm-cli function", "text": "Does anyone on MongoDB Atlas team know how I can push only a specific folder of a Realm App project and not the entire App, using realm cli? Meaning, if I am making changes to realm function, I would like to upload only those changes and not anything else within the Realm App Project folder.For example, I would like to run this command:\nrealm-cli push 'path/to//functions/folder' the push command though, pushes all changes and will remove any functions created using the Realm App Web UI that it does not find in your project.realm-cli function unfortunately only runs the function (you can’t update the source code, or view it)", "username": "Rolando_Carias" }, { "code": "", "text": "Hi Rolando,Unfortunately, the full realm app will get updated at the moment. Is there a use-case where your local directory contains a different set of functions than the ones that are deployed and shown on the Cloud UI?", "username": "Sumedha_Mehta1" }, { "code": "", "text": "I think that there is a common use case on where more than one developer is working in the same app and they don’t want to overwrite the work that other developers are doing in different artifact. By pushing a single artifact I can warrant to not impact the app state my team is expecting to work with.", "username": "Demian_Caldelas" }, { "code": "", "text": "I agree with Demian, in addition I have environment specific values and would like to promote changes from dev to staging without updating the values.", "username": "Russ_Decker" }, { "code": "", "text": "when can we expect it ready?", "username": "Christopher_Eavestone" } ]
Realm CLI push only specific changes
2022-01-30T23:07:28.012Z
Realm CLI push only specific changes
3,967
null
[ "python", "compass" ]
[ { "code": "Traceback (most recent call last):\n File \"/Users/Jayson/opt/anaconda3/envs/colibriapi/lib/python3.10/site-packages/pymongo/srv_resolver.py\", line 89, in _resolve_uri\n results = _resolve(\n File \"/Users/Jayson/opt/anaconda3/envs/colibriapi/lib/python3.10/site-packages/pymongo/srv_resolver.py\", line 43, in _resolve\n return resolver.resolve(*args, **kwargs)\n File \"/Users/Jayson/opt/anaconda3/envs/colibriapi/lib/python3.10/site-packages/dns/resolver.py\", line 1193, in resolve\n return get_default_resolver().resolve(qname, rdtype, rdclass, tcp, source,\n File \"/Users/Jayson/opt/anaconda3/envs/colibriapi/lib/python3.10/site-packages/dns/resolver.py\", line 1066, in resolve\n timeout = self._compute_timeout(start, lifetime,\n File \"/Users/Jayson/opt/anaconda3/envs/colibriapi/lib/python3.10/site-packages/dns/resolver.py\", line 879, in _compute_timeout\n raise LifetimeTimeout(timeout=duration, errors=errors)\ndns.resolver.LifetimeTimeout: The resolution lifetime expired after 21.612 seconds: Server 132.167.198.4 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.5 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.4 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.5 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.4 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.5 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.4 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.5 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.4 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.5 UDP port 53 answered The DNS operation timed out.\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/Users/Jayson/Documents/CEA/gitlab/Colibri/Colibri_v2/colibri_flask_api/app/server.py\", line 1, in <module>\n from api_folder.controller.main import app\n File \"/Users/Jayson/Documents/CEA/gitlab/Colibri/Colibri_v2/colibri_flask_api/app/api_folder/__init__.py\", line 54, in <module>\n from api_folder.models.known_sources import *\n File \"/Users/Jayson/Documents/CEA/gitlab/Colibri/Colibri_v2/colibri_flask_api/app/api_folder/models/known_sources.py\", line 6, in <module>\n db_mongo = Get_MongoDB()\n File \"/Users/Jayson/Documents/CEA/gitlab/Colibri/Colibri_v2/colibri_flask_api/app/api_folder/config/mongoDB.py\", line 18, in Get_MongoDB\n client = pymongo.MongoClient(Params)\n File \"/Users/Jayson/opt/anaconda3/envs/colibriapi/lib/python3.10/site-packages/pymongo/mongo_client.py\", line 736, in __init__\n res = uri_parser.parse_uri(\n File \"/Users/Jayson/opt/anaconda3/envs/colibriapi/lib/python3.10/site-packages/pymongo/uri_parser.py\", line 542, in parse_uri\n nodes = dns_resolver.get_hosts()\n File \"/Users/Jayson/opt/anaconda3/envs/colibriapi/lib/python3.10/site-packages/pymongo/srv_resolver.py\", line 121, in get_hosts\n _, nodes = self._get_srv_response_and_hosts(True)\n File \"/Users/Jayson/opt/anaconda3/envs/colibriapi/lib/python3.10/site-packages/pymongo/srv_resolver.py\", line 101, in _get_srv_response_and_hosts\n results = self._resolve_uri(encapsulate_errors)\n File \"/Users/Jayson/opt/anaconda3/envs/colibriapi/lib/python3.10/site-packages/pymongo/srv_resolver.py\", line 97, in _resolve_uri\n raise ConfigurationError(str(exc))\npymongo.errors.ConfigurationError: The resolution lifetime expired after 21.612 seconds: Server 132.167.198.4 UDP port 53 
answered The DNS operation timed out.; Server 132.167.198.5 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.4 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.5 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.4 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.5 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.4 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.5 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.4 UDP port 53 answered The DNS operation timed out.; Server 132.167.198.5 UDP port 53 answered The DNS operation timed out.\nfrom api_folder.controller.main import app\n\nif __name__ == \"__main__\":\n app.run(debug=True, host='0.0.0.0', port=8080)\n", "text": "Hello,I’m encountering an issue while trying to launch my Flask server, specifically with the connection to MongoDB. I’m receiving the following error message:I have tried connecting to MongoDB using MongoDB Compass, and it works perfectly fine. However, when I launch the Flask server, I encounter this error. Here’s how I am starting the server:I would appreciate any assistance or insights into resolving this issue. Thank you in advance for your help![EDIT]\npymongo version: 4.3.3\npymongo.MongoClient(“mongodb+srv://user:[email protected]/myFirstDatabase?retryWrites=true&w=majority”)", "username": "Jayson_Mourier" }, { "code": "", "text": "Python version: 3.9.12\nOS version: macOS Ventura 13.2.1", "username": "Jayson_Mourier" } ]
Flask Server Connection Issue - Resolution Lifetime Expired
2023-06-27T13:48:24.513Z
Flask Server Connection Issue - Resolution Lifetime Expired
705
null
[]
[ { "code": "", "text": "Hello everyone!I’ve recently completed some courses with MongoDB community and I’m quite fond of the format y’all are teaching these subjects.Since I’d like to stay up to date on the University topics, is there a chance to subscribe to news from MongoDB University? I.e. Email notifications whenever a new course launches or an existing course is being revised? Maybe as weekly digest, if you’re releasing daily.Thanks in advance!\nMax", "username": "MaxR" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Subscribe to updates on new or revised content
2023-06-27T15:24:02.302Z
Subscribe to updates on new or revised content
564
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "await colllections.findOne({}, { sort: { timestamp: -1} })\n\nawait colllections.findOne({}).({ sort: { timestamp: -1} })\n", "text": "both methods are not working with findOne :", "username": "Shafa_vp" }, { "code": "", "text": "The function findOne returns only one document. Why do you want to sort a single document?", "username": "steevej" }, { "code": "findOne({}, null, { sort: { timestamp: -1} })", "text": "I faced the same issue. There might be multiple documents that match find operator so I need to sort them to prevent ambiguity.\nI found that solution: findOne({}, null, { sort: { timestamp: -1} })", "username": "Anton_P" }, { "code": "", "text": "findOne( ) returns the first document found with the given criteria. The criteria could match multiple documents before that “first” is returned.\nSo, it is important to be able to pre-sort in some situations, to get a predictable result.In any case @Anton_P 's solution addresses this, so we’re good.", "username": "Wojtek_Tomalik" }, { "code": "", "text": "it is important to be able to pre-sort in some situations, to get a predictable resultIt makes a lot of sense. Thanks for the clarification.", "username": "steevej" }, { "code": "db.collection.find(filter).sort(xxx).limit(1)", "text": "An alternative is probably usingdb.collection.find(filter).sort(xxx).limit(1)", "username": "Kobe_W" }, { "code": "db.foo.find().sort({_id:-1}).limit(x);\n", "text": "If you want to just find the latest inserted document as well, the ID field created by MongoDB actually has date associated with it, so you can sort for the latest N documents:", "username": "tapiocaPENGUIN" }, { "code": "", "text": "If you use this approach, make sure that all you IDs were generated by Mongo. Custom IDs are possible, including arbitrary objects IDs that look just like auto generated ones.", "username": "Wojtek_Tomalik" }, { "code": "", "text": "Since the _id is generated by the driver it is possible to have an “older” _id from a more recent document created by a driver running of a different machine where the clock is off.The order might also be off if 2 clients create 2 documents with the same timestamp when the driver with lowest (5 bytes random value) creates the document after.To really sort the document on the creation time, the safest way is to use a field initialized with $currentTime because $currentTime is evaluated by the server.", "username": "steevej" }, { "code": "sort()lean()findOne()exec()awaitfindOne()findOne()awaitsort()lean()", "text": "actually findOne gives a promise . When you chain methods like sort() and lean() after findOne() , you are building up the query object with additional instructions.\nIn the case of Mongoose queries, the promise is resolved internally by Mongoose when you either call an executor method like exec() or use await to await the query. The findOne() method itself returns a Query object, which is a thenable object representing a pending operation.To summarize, the promise returned by the findOne() method is resolved by calling an executor method or using await to await the query. The sort() and lean() methods help build the query but do not directly resolve the promise.", "username": "Mohammad_Shariar_Parvez" } ]
How can I use the sort function with findOne()?
2023-01-05T13:21:13.189Z
How can I use the sort function with findOne()?
8,817
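For reference, the same idea in both APIs mentioned in this thread (Model and collection are placeholder names): Mongoose's findOne() returns a chainable Query, while the native Node.js driver's findOne() accepts a sort option directly.

```js
// Mongoose: findOne() returns a Query, so sort() can be chained before it runs
const latest = await Model.findOne({}).sort({ timestamp: -1 });

// Native Node.js driver: the second argument is an options object that accepts sort
const latestNative = await collection.findOne({}, { sort: { timestamp: -1 } });
```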
null
[ "database-tools" ]
[ { "code": "database-tools", "text": "We are pleased to announce version 100.7.0 of the MongoDB Database Tools.This release adds tests against MongoDB 6.3. Highlights include new tests for Column Store Indexes, updating the minimum Go version to 1.19, fixing a bug that caused the Tools to ignore a password supplied via a prompt. Several build failures are also fixed in this version.The Database Tools are available on the MongoDB Download Center.\nInstallation instructions and documentation can be found on docs.mongodb.com/database-tools.\nQuestions and inquiries can be asked on the MongoDB Developer Community Forum.\nPlease make sure to tag forum posts with database-tools.\nBugs and feature requests can be reported in the Database Tools Jira where a list of current issues can be found.", "username": "Jian_Guan1" }, { "code": "", "text": "does database tools version 100.7.0 supports the dumpa and restore with mongodb 3.0 version?", "username": "Bhagyashree_Patil" }, { "code": "", "text": "Hi @Bhagyashree_Patil, No it doesn’t support 3.0. You can learn more about Database Tools compatibility here: https://www.mongodb.com/docs/database-tools/installation/installation/#compatibilityThe End of Life date for MongoDB 3.0 was 5 years ago. We highly recommend you upgrade your deployment to a newer version.", "username": "Tim_Fogarty" } ]
Database Tools 100.7.0 Released
2023-03-01T18:12:37.773Z
Database Tools 100.7.0 Released
1,364
https://www.mongodb.com/…c_2_1024x639.png
[ "node-js", "mongodb-shell" ]
[ { "code": "", "text": "\nimage1349×843 15.6 KB\nHow to solve this?", "username": "Shahriar_Shatil" }, { "code": "", "text": "Hi @Shahriar_Shatil,I suspect that the inactivity for a certain time period caused this to happen. Could you please try refreshing the page and see if it resolves the issue? If not, can you try opening the lab in a different tab or browser from the MongoDB course page?Feel free to reach out in case the issue persists.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connection Closed on MongoDB Uni Lab
2023-06-27T06:55:38.500Z
Connection Closed on MongoDB Uni Lab
580
null
[ "aggregation" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"649a8ce117954132f6fbbb07\"\n },\n \"type\": \"123\",\n \"desc\": \"...\",\n \"from\": \"A\"\n},\n{\n \"_id\": {\n \"$oid\": \"649a8d4c17954132f6fbbb0b\"\n },\n \"type\": \"456\",\n \"desc\": \"...\",\n \"from\": \"B\"\n}...\n{\n \"_id\": {\n \"$oid\": \"649a8e0b17954132f6fbbb0e\"\n },\n \"name\": \"xxx\",\n \"ary\": [\n {\n \"type\": \"123\",\n \"from\": \"B\"\n },\n {\n \"type\": \"456\",\n \"from\": \"B\"\n }\n ]\n}...\n[{\n $lookup: {\n from: 'test_1',\n 'let': {\n type: '$ary.type',\n from: '$ary.from'\n },\n pipeline: [{\n $match: {\n $expr: {\n $and: [{\n $in: [\n '$type',\n '$$type'\n ]\n },\n {\n $in: [\n '$from',\n '$$from'\n ]\n }\n ]\n }\n }\n }],\n as: 'result'\n }\n}]\n{ \"type\": 1, \"from\": 1 },\n{ \"type\": 1 },\n{ \"from\": 1 }\n", "text": "Hello,I got a questions on index using lookup with $in operator.\nthe data structure look like below:Collection test_1Collection test_2Aggregation plpelineIndex created on test_1For other reason it is not possible to embed the test_1 into test_2.When performing the lookup, a collection scan is done. Are there anyway to improve the performance on this lookup?Thanks.", "username": "Yuk_Po_Tse" }, { "code": "$lookup: {\n from: 'test_1',\n localField: 'ary.type', /* $lookup is smart enough to have an array for localField */\n foreignField: 'type' ,\n 'let': {\n from: '$ary.from'\n },\n pipeline: [{\n $match: {\n $expr: { $in: [ '$from', '$$from' ] }\n }\n }],\n as: 'result'\n }\n", "text": "I would try using a mix of localField/foreignField with $match. Something like:", "username": "steevej" }, { "code": "", "text": "Thanks for reply. This could use the index for “type”.\nSo can conclude that compound indexes in lookup sub-pipeline $expr would not be supported in current version of MongoDB?", "username": "Yuk_Po_Tse" }, { "code": "{\n \"_id\": {\n \"$oid\": \"649a8ce117954132f6fbbb07\"\n },\n source:{\"type\": \"123\", \"from\": \"A\"},\n \"desc\": \"...\"\n},\n{\n \"_id\": {\n \"$oid\": \"649a8d4c17954132f6fbbb0b\"\n },\n source:{\"type\": \"456\", \"from\": \"B\"},\n \"desc\": \"...\"\n}\n$lookup: {\n from: 'test_1',\n localField: 'ary',\n foreignField: 'source' ,\n as: 'result'\n }\n", "text": "So can conclude that compound indexes in lookup sub-pipeline $expr would not be supported in current version of MongoDB?I cannot really comment on what is supported or not by the current version of MongoDB.If your explain plan indicated a collection scan then it meant that your query could not leverage any of the index you have in the version of MongoDB you are using.An other thing you may try to further optimize the use-case. You could swap and try to use from as the localField/foreignField and $match using type. Depending of the granularity of both fields one might be more efficient.Another idea is to modify test_1 to an object field that contains both type and from and have the index on that top object. LikeIndex on test_1 would be { “source” : 1 } and then the lookup would be:", "username": "steevej" }, { "code": "", "text": "Understood.\nThank you very much!", "username": "Yuk_Po_Tse" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Lookup sub-pipeline is not using index with the $in operator
2023-06-27T07:45:02.876Z
Lookup sub-pipeline is not using index with the $in operator
370
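One way to check whether the suggested localField/foreignField form actually reaches the { type: 1, from: 1 } index is to explain the aggregation. How much of the inner-side plan the output exposes varies by server version, so treat this as a quick sanity check rather than a definitive answer.

```js
db.test_2.explain("executionStats").aggregate([
  { $lookup: {
      from: "test_1",
      localField: "ary.type",
      foreignField: "type",
      let: { from: "$ary.from" },
      pipeline: [ { $match: { $expr: { $in: ["$from", "$$from"] } } } ],
      as: "result"
  } }
])
// look for IXSCAN (rather than COLLSCAN) in the $lookup section of the output
```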
null
[]
[ { "code": "", "text": "i am pasting my connection link (Use this connection string in your application) in the mongodb shell after pasting its asking for password but when i am trying to paste or type its not taking any input what should i do?", "username": "Aditya_Kulkarni" }, { "code": "the blinking cursor will be invisible, so don't worry)", "text": "Hey @Aditya_Kulkarni,Thank you for reaching out to the MongoDB Community forums.its asking for password but when i am trying to paste or type its not taking any input what should i do?After pasting the connection string, hit the enter/return button and input the password ( the blinking cursor will be invisible, so don't worry) then press enter/return button to connect to your MongoDB Atlas cloud cluster.In case of any issues, feel free to reach out.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
I am not able to enter password in mongodb shell after pasting connection link
2023-06-27T14:12:59.238Z
I am not able to enter password in mongodb shell after pasting connection link
324
https://www.mongodb.com/…_2_1024x576.jpeg
[ "telaviv-mug" ]
[ { "code": "Principal Engineer, MongoDB, New YorkEnterprise Account Executive, MongoDB, Tel AvivLead Developer Advocate, MongoDB, Tel Aviv", "text": "\nMUG - Tel Aviv Template (1)1920×1080 116 KB\nTo RSVP - Please click on the “✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green highlighted button if you are going. You need to be signed in to access the button.MongoDB Team is extremely pleased to host a “Data Modeling - Ask US Anything” event with a Special World Expert guest as part of the aspiring MongoDB User Group meetups. Come to learn and expand your knowledge with the worlds best NoSQL database and cloud platforms.This is a teaser event for the yearly MongoDB .local Tel Aviv event happening on Sept 5th 2023. Get yourself registered today!Event Type: In-Person\nLocation: Ermetic Office, Derech Menachem Begin 144 , EY Building, 7th Floor - Tel AvivPrincipal Engineer, MongoDB, New YorkAsya is a world known expert for building scalable and performant MongoDB solutions.\n1637334908900800×800 126 KB\nEnterprise Account Executive, MongoDB, Tel AvivNitzan is a leading Enterprise Account Executive in Tel Aviv’s Growth team.\navatar1024×1024 137 KB\nLead Developer Advocate, MongoDB, Tel AvivPavel is a lead developer advocate, part of the strategic account team in MongoDB world wide.", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny\nHow do i register to the even ? I couldn’t find any form to fill in", "username": "Alexander_Izrailov" }, { "code": "", "text": "@Alexander_Izrailov and anyone else:To RSVP - Please click on the “✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green highlighted button if you are going. You need to be signed in to access the button.", "username": "Pavel_Duchovny" } ]
Tel Aviv MUG : Data Modeling - Ask US Anything (Special Guest)
2023-06-19T08:01:56.707Z
Tel Aviv MUG : Data Modeling - Ask US Anything (Special Guest)
1,510
null
[ "replication", "java", "atlas-cluster", "morphia-odm" ]
[ { "code": "java.lang.IllegalAccessError: failed to access class org.bson.BSON from class com.mongodb.DBObjectCodec (org.bson.BSON and com.mongodb.DBObjectCodec are in unnamed module of loader org.eclipse.jetty.webapp.WebAppClassLoader @7434ee13)\nfinal String database = \"<database name>\";\n \nfinal MongoClientURI databaseURI = new MongoClientURI(\"<connection string>\");\n\nMorphia morphia = new Morphia();\n\nfinal Datastore datastore = morphia.createDatastore(new MongoClient(databaseURI), database);\n\nModel model = new Model();\n\nmodel.setCostumer(\"TestCostumer\");\n\ndatastore.save(model);\n", "text": "I’m trying to make an API using RESTeasy and jetty, and I’m trying to use Morphia to do my database operations. However I’m running into the following error message:I’m using Java 11 and for Morphia I’m using 1.3.2 (I know this is an older version but the newer versions seem to not even get to this point and crash upon creating a datastore.The code that is generating this error is as follows:If further info is required I’ll do my best to provide it.", "username": "Diego_Bencherif" }, { "code": "", "text": "My first guess is that your application is pulling in an incompatible version of the MongoDB Java driver (on which Morphia depends). Check your dependencies to rule that out. (From morphia/gradle.properties at r1.3.2 · MorphiaOrg/morphia · GitHub it looks like you need the 3.4 driver)Good luck", "username": "Jeffrey_Yemin" }, { "code": "<dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongo-java-driver</artifactId>\n <version>3.12.14</version>\n</dependency>\n<dependency>\n <groupId>org.mongodb.morphia</groupId>\n <artifactId>morphia</artifactId>\n <version>1.3.2</version>\n</dependency>\n", "text": "Many thanks for your reply.Whilst the version you named was not the correct version you did put me on the right track.\nI ended up using the following MondoDB and Morphia dependencies in my pomI’m listing these in case someone else might run into the same issue I ran into.", "username": "Diego_Bencherif" }, { "code": "", "text": "I’d love to see the stacktrace from that crash. That’s a fundamental method in Morphia so crashing should “never” happen. I have a feeling it’s likely a version mismatch somewhere but if you’re up for it, I’d love to try and diagnose that with you so you can move to a newer Morphia. 1.3.2 isn’t just old. It’s practically paleolithic. ", "username": "Justin_Lee" }, { "code": "", "text": "I’ll try looking up the stacktrace later. I’m currently busy.", "username": "Diego_bencherif1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Failed to access class org.bson.BSON from class com.mongodb.DBObjectCodec
2023-06-21T08:44:19.622Z
Failed to access class org.bson.BSON from class com.mongodb.DBObjectCodec
1,102
null
[ "java", "atlas-cluster" ]
[ { "code": "", "text": "I can’t find any reason why I am unable to connect to the shared cloud database. I’ve pretty much copied and pasted the code from Atlas connection code and it keeps throwing back this error:\nTimed out after 30000 ms while waiting to connect.\nClient view of cluster state is {\ntype=UNKNOWN,\nservers=[{address:27017=clusterfortesting.9hpj0y7.mongodb.net, type=UNKNOWN, state=CONNECTING,\nexception={com.mongodb.MongoSocketException: clusterfortesting.9hpj0y7.mongodb.net},\ncaused by {java.net.UnknownHostException: clusterfortesting.9hpj0y7.mongodb.net}}]If I’m reading this right, I shouldn’t be getting an unknown host exception if I’m using the exact connection string that Atlas is telling me to use.thanks in advance", "username": "Keith_Pittner" }, { "code": "% nslookup -q=SRV clusterfortesting.9hpj0y7.mongodb.net\nServer:\t\t127.0.0.1\nAddress:\t127.0.0.1#53\n\nNon-authoritative answer:\nclusterfortesting.9hpj0y7.mongodb.net\tservice = 0 0 27017 ac-96gp9vs-shard-00-00.9hpj0y7.mongodb.net.\nclusterfortesting.9hpj0y7.mongodb.net\tservice = 0 0 27017 ac-96gp9vs-shard-00-01.9hpj0y7.mongodb.net.\nclusterfortesting.9hpj0y7.mongodb.net\tservice = 0 0 27017 ac-96gp9vs-shard-00-02.9hpj0y7.mongodb.net.\n", "text": "clusterfortesting.9hpj0y7.mongodb.net looks like it’s an SRV record:So make sure you’re using a mongodb+srv connection string, e.g. “mongodb+srv://@clusterfortesting.9hpj0y7.mongodb.net”.You said that you “pretty much copied and pasted the code from Atlas connection code”. Can you paste your actual connection code in a reply? I’m curious whether you’re using a connection string at all, or MongoClientSettings directly.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": " String _connection_string = \"mongodb://pittner:<my_password>@clusterfortesting.9hpj0y7.mongodb.net/Events_Database?retryWrites=true&w=majority\";\n\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(_connection_string))\n .build();\n MongoClient _mongo_client = MongoClients.create(settings);\n\n try {\n // Send a ping to confirm a successful connection\n MongoDatabase database = _mongo_client.getDatabase(\"Events_Database\");\n for (String collectionName : database.listCollectionNames()) {\n System.out.println(\" collectionName : \"+ collectionName);\n }\n\n } catch (MongoException e) {\n e.printStackTrace();\n }\n\n _mongo_client.close();\n", "text": "GM Jeffrey,Thanks for the reply. This example gives the error I previously mentioned :however, if I use the +srv in the connection string\nString _connection_string = “mongodb+srv://pittner:<my_password>@clusterfortesting.9hpj0y7.mongodb.net/Events_Database?retryWrites=true&w=majority”;I get a different error :\ncaused by {sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested targetThanks for helping me with this!Keith", "username": "Keith_Pittner" }, { "code": "mongodb+srv://sudo update-ca-certificates", "text": "An SRV connection mongodb+srv:// is what you should be using.caused by {sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested targetThis indicates java has no CA that can verify the server certificate. You might have to update your java installation. Some linux OS using a packaged java install you might be able to use sudo update-ca-certificatesAlso see:\nconfigure-the-jvm-trust-store", "username": "chris" }, { "code": "", "text": "Thank you very much Chris! 
I will certainly look into that. Keith", "username": "Keith_Pittner" } ]
Using Java library to connect to cloud cluster
2023-06-20T16:53:20.065Z
Using Java library to connect to cloud cluster
880
null
[ "indexes" ]
[ { "code": "", "text": "I have large number off data in single collection approx 25M. Now always we are doing search on category user have to select at least 1 category and perform the search action. We have 30 category of data in collection now user select 20+ category sometime at that time query not getting fast result and taking more than 40 or 45 sec. so which index I need to use for that category column. in category column we have inserted category id.Thanks", "username": "Sanjay_Patel" }, { "code": "", "text": "what is your query like?", "username": "Kobe_W" }, { "code": "db.table.count({\"$and\":[{\"category_id\":{\"$in\":[1,2,3,4,5,7,8,9,10,11,12,14,16,17,18,19,18775,23,24,25,26]}},{\"report_status\":\"passed\"},{\"$and\":[{\"is_branded\":0},{\"avg\":{\"$gt\":0}}]}]})", "text": "db.table.count({\"$and\":[{\"category_id\":{\"$in\":[1,2,3,4,5,7,8,9,10,11,12,14,16,17,18,19,18775,23,24,25,26]}},{\"report_status\":\"passed\"},{\"$and\":[{\"is_branded\":0},{\"avg\":{\"$gt\":0}}]}]})user have to select category compulsory so multiple catergory_id is always in $and condition.", "username": "Sanjay_Patel" }, { "code": "", "text": "Check ESR rule.Explain output is also a good start.", "username": "Kobe_W" } ]
Need help choosing the best index when always using an $in query on a field
2023-06-23T17:41:06.136Z
Need help choosing the best index when always using an $in query on a field
703
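A sketch of one reasonable index for the query in this thread, following the ESR guideline (plain equality fields first, the multi-valued $in next, the range predicate last). The field and collection names are taken from the query above; it is worth comparing explain output against alternative orderings on your own data.

```js
db.table.createIndex({ report_status: 1, is_branded: 1, category_id: 1, avg: 1 })

// confirm the plan: an IXSCAN / COUNT_SCAN here is what you want, not a COLLSCAN
db.table.explain("executionStats").count({
  report_status: "passed",
  is_branded: 0,
  avg: { $gt: 0 },
  category_id: { $in: [1, 2, 3, 4, 5] }
})
```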
null
[ "replication" ]
[ { "code": "1.) MongoNode1:27017 (Primary) IP: 10.10.15.100\nCFG File: bindIp: 127.0.0.1,MongoNode1,10.10.15.100\n2.) MongoNode2:27017 (Secondery) IP: 10.10.15.101\nCFG File: bindIp: 127.0.0.1,MongoNode2,10.10.15.101\n3.) MongoNode3:27017 (Arbitar) IP: 10.10.15.102\nCFG File: bindIp: 127.0.0.1,MongoNode3,10.10.15.102\n1.) MongoNode4:27017 (Primary) IP: 10.10.15.100\nCFG File: bindIp: 127.0.0.1,MongoNode4,10.10.15.100\n2.) MongoNode1:27017 (Secondery) IP: 10.10.15.103\nCFG File: bindIp: 127.0.0.1,MongoNode1,10.10.15.103\n3.) MongoNode2:27017 (Secondery) IP: 10.10.15.101\nCFG File: bindIp: 127.0.0.1,MongoNode2,10.10.15.101\n4.) MongoNode3:27017 (Arbitar) IP: 10.10.15.102\nCFG File: bindIp: 127.0.0.1,MongoNode3,10.10.15.102\n{\"t\":{\"$date\":\"2023-06-22T07:46:08.622+03:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"The requested address is not valid in its context.\"}}}\n", "text": "Hello Everyone, Hope all are doing well,I will share an stranger issue regarding the Replica Set and hope to figure an solution for it.I have a three nodes Replica Set:Everything was grate until we decided to buy a new server to work as primary instead of MongoNode1But We forced to assign the MongoNode1 IP to The new PC, So the structure converted to the following:In this case we just replaced the local IP address between 2 machines the new primary and the old primary and for sure we rebuild the replica set again but there are a stranger issue happened, the service is not running automatically in the new primary with the following error:We tried everything even we tried to reinstall the new primary machine OS, also tried to reset the network many-times from the routers and PC.Note: we can start the mongo manually from the services after we login to windows but it cannot start automatically never if we restarted the primary pc.so what is the advice in this this case?Thank you.", "username": "Mina_Ezeet" }, { "code": "The requested address is not valid in its context.ping -c 2 MongoNode1\nping -c 2 MongoNode2\nping -c 2 MongoNode3\nping -c 2 MongoNode4\n", "text": "The requested address is not valid in its context.What ever IP address you using in the configuration for bindIp, it is not valid for this machine. You do not need to bindIp to both the host name, like MongoNode3 and its IP, like 10.10.15.102. 
From all nodes, share the output of the 4 commands:I asked because you might have name resolver misconfiguration.", "username": "steevej" }, { "code": "C:\\Windows\\system32>ping MongoNode1\n\nPinging MongoNode1 [10.10.15.103] with 32 bytes of data:\nReply from 10.10.15.103: bytes=32 time=1ms TTL=128\nReply from 10.10.15.103: bytes=32 time=1ms TTL=128\nReply from 10.10.15.103: bytes=32 time=1ms TTL=128\nReply from 10.10.15.103: bytes=32 time=1ms TTL=128\n\nPing statistics for 10.10.15.103:\n Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),\nApproximate round trip times in milli-seconds:\n Minimum = 1ms, Maximum = 1ms, Average = 1ms\nC:\\Windows\\system32>ping MongoNode2\n\nPinging MongoNode2 [10.10.15.101] with 32 bytes of data:\nReply from 10.10.15.101: bytes=32 time=1ms TTL=128\nReply from 10.10.15.101: bytes=32 time=1ms TTL=128\nReply from 10.10.15.101: bytes=32 time=1ms TTL=128\nReply from 10.10.15.101: bytes=32 time=2ms TTL=128\n\nPing statistics for 10.10.15.101:\n Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),\nApproximate round trip times in milli-seconds:\n Minimum = 1ms, Maximum = 2ms, Average = 1ms\nC:\\Windows\\system32>ping MongoNode3\n\nPinging MongoNode3 [10.10.15.102] with 32 bytes of data:\nReply from 10.10.15.102: bytes=32 time=1ms TTL=128\nReply from 10.10.15.102: bytes=32 time=1ms TTL=128\nReply from 10.10.15.102: bytes=32 time=1ms TTL=128\nReply from 10.10.15.102: bytes=32 time=2ms TTL=128\n\nPing statistics for 10.10.15.102:\n Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),\nApproximate round trip times in milli-seconds:\n Minimum = 1ms, Maximum = 2ms, Average = 1ms\nC:\\Windows\\system32>ping MongoNode4\n\nPinging MongoNode4 [::1] with 32 bytes of data:\nReply from ::1: time<1ms\nReply from ::1: time<1ms\nReply from ::1: time<1ms\nReply from ::1: time<1ms\n\nPing statistics for ::1:\n Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),\nApproximate round trip times in milli-seconds:\n Minimum = 0ms, Maximum = 0ms, Average = 0ms\n", "text": "Thank you for your reply, I already tried to use host names only without adding local ip(s) but I got an error during the replica set setup: node not found during rs.add()Anyway here you go the output for every ping command:Note: All of the following ping CMD are made in the MongoNode4 (New Master PC)Note: Kindly note that the mongo is running very good after I log into the windows and start it manually from the windows services, I am wondering why it is not working automatically after startup Why I am needing to login to windows?Thank you,", "username": "Mina_Ezeet" }, { "code": "C:\\Windows\\system32>ping MongoNode4\n\nPinging MongoNode4 [::1] with 32 bytes of data:\nReply from ::1: time<1ms\nReply from ::1: time<1ms\nReply from ::1: time<1ms\nReply from ::1: time<1ms\n\nPing statistics for ::1:\n Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),\nApproximate round trip times in milli-seconds:\n Minimum = 0ms, Maximum = 0ms, Average = 0ms\n", "text": "You need to run the 4 ping commands from all the machines.So\nFrom MongoNode1, you run and share the result of the 4 pings.\nFrom MongoNode2, you run and share the result of the 4 pings\nFrom MongoNode3, you run and share the result of the 4 pings\nFrom MongoNode4, you run and share the result of the 4 pingsBut already we see that your networking has issues because MongoNode4 does not respond to ping:You need to fix networking issues first.I am wondering why it is not working automatically after startupIt will be hard for use to tell if you do not share the error 
message you get and the server logs.", "username": "steevej" }, { "code": "", "text": "Dear steevej ,\nThank you for trying to help me to solve this issue,\nI guessed that there are something wrong with the new instance LAN Card, So I tried something new,\nI forced the MongoDB Service to run as Automatic Delayed, in this case all the Windows OS services including The Lan/DHCP etc, Services will run at the first then the MongoDB Service will start later as (automatic delayed), and it is worked fine now and the issue has been resolved,Note: I know it is takes around 2-3 mins more than usual to be ready but it is finally working as an automatic service.Sorry for my bad english Thanks.", "username": "Mina_Ezeet" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Replica Set Network Issue - Windows OS
2023-06-22T05:15:50.189Z
Replica Set Network Issue - Windows OS
432
null
[ "queries", "change-streams" ]
[ { "code": "insert/delete/updateoplog/ChangeStreamdb.getUsers()db.runCommand( { usersInfo: 1 } )", "text": "Hi Team,How can I get the details of the user that has performed DML operations like insert/delete/update to a collection?The document in the oplog/ChangeStream doesn’t have any information on the user that has performed the operation.I see db.getUsers() or db.runCommand( { usersInfo: 1 } ) to fetch the user details but not sure how to associate these user details with the document that has been inserted.Any help on this?", "username": "Sabareesh_Babu" }, { "code": "", "text": "Hi @Sabareesh_BabuI think you’re looking for Auditing. However this feature is part of the MongoDB Enterprise Server, which requires an Enterprise Advanced subscription if you’re looking for an on-prem deployment of this feature.Alternatively I believe you can do the same using Atlas M10+ deployments (i.e. not available for free & shared tier clusters): https://www.mongodb.com/docs/atlas/database-auditing/(Yet) another idea is to create an API layer in front of the database server. This custom layer can then log anything you need.Best regards\nKevin", "username": "kevinadi" } ]
How to get user information of the document that has been inserted
2023-06-26T05:52:35.052Z
How to get user information of the document that has been inserted
634
null
[ "data-modeling", "swift", "flexible-sync" ]
[ { "code": "ServiceisArchivedServiceAppointmentServiceafterResetafterAppointmentisArchivedfunc handleClientReset() {\n // Report the client reset error to the user, or do some custom logic.\n}\nSyncSession.immediatelyHandleErrorafterResetafterbeforeschemaVersionmigrationBlockwriteCopybeforeAll client changes must be integrated in server before writing copy", "text": "Hello, I’m using flexible sync with dev mode off and I added a required bool to the Service object: isArchived, in the iOS app and in the schema.\nAfter the automatic client reset, the new realm has some issues: all Service objects have new ids and the Appointment objects, which hold an optional reference to a Service object, now have null instead.How can I recover the user data? (I have a backup of the realm file)I would also like to understand how this whole schema change and client reset process works, to avoid future issues. I read all the documentation, many times actually, but I still have some open questions:What exactly happens during an automatic client reset? I looked at the afterReset callback and the after Realm has some data but different ids, which in my opinion beats the purpose of recovery since the Appointment objects are now unusable with the null references.Why is there a need for a client reset at all when adding a new property with a default value? And why can’t I do a simple migration like with a local Realm?I tried to remove the isArchived property and I got a client reset error. In the documentation there is no guidance (or example) on what to do in this situation to recover the data. There is just this code sample:In the docs I’m instructed to call SyncSession.immediatelyHandleError after handling the client reset but I don’t understand what this does exactly besides making a copy of the old Realm file and creating a new, empty one.I tried to use the afterReset callback to delete everything from the after Realm and copy everything from before, but then I got another sync error. Is this a valid approach?Are schemaVersion and migrationBlock used at all for synced Realms?I tried to writeCopy on the before realm but I got this error: All client changes must be integrated in server before writing copy.How can I avoid issues like this which lead to data loss in production if I make simple changes to the schema? I read about the partner collection strategy but it didn’t seem necessary at this point when I also updated the iOS client with the new Bool.Please help me with these questions, it’s a very serious issue and I already spent two days on it trying everything I could.", "username": "Madalin_Sava" }, { "code": "beforeafterafterfilePath", "text": "Also, what is the relationship between the before and after realms with the realm I open initially? Is after the same as the original one? What happens if I try to add objects to it? 
And how come the filePath is the same?", "username": "Madalin_Sava" }, { "code": "before2023-06-20 14:19:02.576016+0300 BeePalApp[47583:35385783] Sync: Connection[5]: Session[5]: A previous reset was detected of type: 'Recover' at: 2023-06-20 11:18:07\n2023-06-20 14:19:02.579105+0300 BeePalApp[47583:35385783] Sync: Connection[5]: Session[5]: A fatal error occured during client reset: 'A previous 'Recover' mode reset from 2023-06-20 11:18:07 did not succeed, giving up on 'Recover' mode to prevent a cycle'\nError Domain=io.realm.sync Code=7 \"A fatal error occured during client reset: 'A previous 'Recover' mode reset from 2023-06-20 11:18:07 did not succeed, giving up on 'Recover' mode to prevent a cycle'\" UserInfo={error_action_token=<RLMSyncErrorActionToken: 0x60000089cbd0>, NSLocalizedDescription=A fatal error occured during client reset: 'A previous 'Recover' mode reset from 2023-06-20 11:18:07 did not succeed, giving up on 'Recover' mode to prevent a cycle', recovered_realm_location_path=/Users/madalin/Library/Developer/CoreSimulator/Devices/22B46ACA-DB24-43FF-9C1F-5E599264E822/data/Containers/Data/Application/B11DB35D-D738-41CC-BA7B-F02186FD8466/Documents/mongodb-realm/billy-jgeoz/recovered-realms/recovered_realm-20230620-141902-dW0mnokH} Optional(<RLMSyncSession: 0x6000006a9180> {\n\tstate = 1;\n\tconnectionState = 0;\n\trealmURL = wss://ws.eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/billy-jgeoz/realm-sync;\n\tuser = 64917413171d5b33b3615b0c;\n})\n2023-06-20 14:19:02.586948+0300 BeePalApp[47583:35385783] Sync: Connection[5]: Disconnected\n2023-06-20 16:17:06.429906+0300 BeePalApp[60784:35596271] Sync: Connection[2]: Session[2]: Received: ERROR \"Bad client file identifier (IDENT)\" (error_code=208, try_again=false, error_action=ClientReset)\n2023-06-20 16:17:06.446290+0300 BeePalApp[60784:35596271] Sync: Connection[2]: Disconnected\nError Domain=io.realm.sync Code=7 \"Bad client file identifier (IDENT)\" UserInfo={Server Log URL=https://realm.mongodb.com/groups/64591410ffb83f492c3916c7/apps/645915213a82d1d7fbafefc9/logs?co_id=6491a6d29a5ae54fdc0db8b6, recovered_realm_location_path=/Users/madalin/Library/Developer/CoreSimulator/Devices/22B46ACA-DB24-43FF-9C1F-5E599264E822/data/Containers/Data/Application/FCF45155-184D-401F-890D-FB42DE2F941F/Documents/mongodb-realm/billy-jgeoz/recovered-realms/recovered_realm-20230620-161706-s0XP5egR, error_action_token=<RLMSyncErrorActionToken: 0x6000026ab9f0>, NSLocalizedDescription=Bad client file identifier (IDENT)} Optional(<RLMSyncSession: 0x60000289bbc0> {\n\tstate = 1;\n\tconnectionState = 0;\n\trealmURL = wss://ws.eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/billy-jgeoz/realm-sync;\n\tuser = 64917413171d5b33b3615b0c;\n})\n", "text": "Update: I tried to drop the database then copy all objects from the before realm (using the backup file) to the newly created realm, and after a few seconds I’m getting this error:I also tried the following:", "username": "Madalin_Sava" }, { "code": "isArchivedbeforeafterafter", "text": "Adding properties does not require a client reset.Did you add the isArchived field to all of your documents on the server prior to adding it as a required field in the server-side schema? 
Objects disappearing as part of the initial client reset with recovery means that the object creations had been synchronized to the server, but the documents for them either no longer exist or are unsyncable - such as if they’re missing a required field.Automatic client resets work by downloading a fresh copy of the Realm, and then modifying the existing file to make it compatible with the server-side state. The before Realm passed to the callback is the Realm frozen before making these changes, while the after Realm is a live view of the same file. It should normally not be neccesary to do anything in the callback and it’s mostly just informational.Schema versions and migration blocks are not used for synchronized Realms.Deleting everything in the after Realm and attempting to copy over the data is an extremely bad idea. At best it’s deleting all objects created by other clients which had not yet been synchronized to the current one and result in a ton of extra network usage.If your current state is that you have a Realm file which you need to recover data from and are okay with discarding all server-side state, then you should do each of the steps you did (drop database, restart sync, delete local synced Realm file), but instead of trying to copy the backup file into place, you’ll need to open the backup file in read-only mode (with no sync configuration) and then copy the objects from it into a newly created synchronized Realm.", "username": "Thomas_Goyne" }, { "code": "ClientisArchivedbeforeResetafterResetafterClientClientisArchivedSyncSession.immediatelyHandleError", "text": "Thank you for the explanations, this helped. In the end that worked (drop db, restart sync etc) but is this a good practice for a production app? If I drop the database then all the data is distributed on the client apps, which might get deleted and clients won’t be able to recover their data on reinstall/login.Is there a better way to handle breaking schema changes in production? I’m not sure what “perform client reset” means in the documentation.Right now I created another app+cluster to isolate some of these use cases and test how to best handle these scenarios.\nI have a Client object and I added a new required parameter isArchived to the backend schema. After restarting sync and the client app, the beforeReset and afterReset callbacks were called and the after realm was missing the Client object I had created earlier. I don’t this this should be happening, right?\nMaybe the reason is (as you mentioned) that I didn’t add the required field to all documents in the database beforehand. Is there an easy way to do that? And what is the order of operations which also minimizes downtime? For example, it could be: terminate sync, run the pipeline to add the new property to all documents, deploy schema change, start sync. Or maybe I could do all these things in a single deployment using a draft from CLI.\nIt still seems weird since now I can create Client documents from the iOS app without adding the isArchived property to the Swift Object. Why is this working but the client reset doesn’t?Lastly, can you give me some details on the following?", "username": "Madalin_Sava" }, { "code": "SyncSession.immediatelyHandleErrorSyncSession.immediatelyHandleError ", "text": "Dropping the database and restarting sync will generally result in all data being lost, so it is not something I would particularly recommend doing.When adding properties to the schema you should not be terminating or restarting sync. 
That is the step which causes client resets, and it is not required when adding properties. You should run a pipeline to add the required field to the documents* and then deploy the schema change without ever terminating sync, and no downtime at all is required.If you don’t restart sync but add a required field which isn’t present in your documents you’ll see similar behavior, just without the client reset. The documents missing the field will be marked as unsyncable and the client-side objects corresponding to them will be deleted. After the server-side schema change has been synced to the client, objects created by the client will contain the new field even without any updates to the client app (with a default value of zero/nil/false/empty as appropriate for the type).* There is a race condition here in active production usage: between when you run the pipeline and when the schema change is completed new documents may be created by clients which are missing the new field. If a temporary bit of weirdness is acceptable (some objects may disappear and then reappear on the client) you can rerun the pipeline after changing the schema. If not, you can use triggers to add the field to newly created documents during the migration period.Lastly, can you give me some details on the following?That function is for manual client resets and you shouldn’t call it from the automatic client reset handler. The documentation for manual client resets is somewhat vague because it proved to be something which was very difficult to implement in apps, and even more difficult to provide any sort of general guidance on how to implement them. The flow for manual resets roughly is:Step 2 requires that the existing Realm be closed, which is a somewhat complicated thing (we can’t just unilaterally close the file as you may be reading from it on another thread). As a result, we normally do most of this the next time the Realm is opened rather than immediately upon receiving the error. SyncSession.immediatelyHandleError tells us to instead do it right away. This requires that you’ve ensured that you no longer have any references to the Realm, which in practice is a rather difficult thing to do.", "username": "Thomas_Goyne" }, { "code": "_idafterbefore", "text": "This makes sense, so basically whenever I make a schema change that might mess with existing documents, I should have a pipeline and a trigger in place to make sure there is no data loss.\nNow if I want to remove a required property, can I avoid a client reset error and the need for manual handling it by running a pipeline to update the documents to match the schema? Or is it better to employ the partner collection strategy?Can you also tell me if there is a possibility of automatic client reset creating the corresponding objects in the new realm with different _id values? This happened during my previous tests but I’m not sure if I did something wrong of this is a valid scenario. Basically, the objects that had the schema change had different ids in the after realm compared to before, and also the objects that were directly referencing them had null values instead.", "username": "Madalin_Sava" }, { "code": "_id", "text": "Removing a required field is currently not supported and is where you’d need the partner collection approach. 
We’re currently working on adding support for removing fields and switching fields between optional and required without restarting sync, but no ETA on that being released.The concept of a “corresponding object” with a different _id value doesn’t make much sense. The _id field is the objects’ unique identifiers, so two objects with different _ids are just two unrelated objects. Are you doing some sort of data initialization where you check if any objects exist, and if none do create a set of expected objects? If so then what you saw happen could make sense: Client A creates objects, and then some other objects linking to those objects. Schema is updated on the server without adding the required field to the existing documents, so all existing documents are marked as non-syncable. Client B starts and creates a new set of objects. Client A starts and gets a client reset due to sync being restarted. The fresh data downloaded doesn’t contain any of the objects it created (as they’re unsyncable) but does have the objects Client B created after the schema change. All of the links are pointing to null because the objects they linked to have been deleted.", "username": "Thomas_Goyne" } ]
Client reset issues
2023-06-15T12:18:21.351Z
Client reset issues
988
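A minimal sketch of the backfill step described above, run from mongosh against the synced cluster before deploying the schema change. It assumes the collection backing the Service model is named "Service" and that false is an acceptable default for the new field.

```js
// give every existing document the new required field so none become unsyncable
db.Service.updateMany(
  { isArchived: { $exists: false } },
  { $set: { isArchived: false } }
)
```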
https://www.mongodb.com/…3_2_759x1024.png
[ "performance", "atlas" ]
[ { "code": "", "text": "Dear Community\nI have noticed repeated spikes in the performance monitor of my Atlas M20 5.0.18 on GCPThe regularity is noteworthy.\nI have no such scheduled logical operation on my database (and no deletes at all).Any idea where these com from?\nimage860×1160 53.7 KB\n", "username": "Frederic_Klein" }, { "code": "", "text": "This looks like you might have a TTL index that does automatic delete on expired documents.", "username": "steevej" } ]
High CPU every 15 minutes and high ops every 5 minutes
2023-06-24T12:12:15.051Z
High CPU every 15 minutes and high ops every 5 minutes
640
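If the spikes do come from a TTL index as suggested, this mongosh loop will find it: TTL indexes are the ones that carry an expireAfterSeconds value.

```js
db.getCollectionNames().forEach(function (name) {
  db.getCollection(name).getIndexes()
    .filter(function (ix) { return ix.expireAfterSeconds !== undefined; })
    .forEach(function (ix) { print(name, "->", JSON.stringify(ix)); });
});
```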
null
[]
[ { "code": "exports = async function() {\n \n // Forneça projectID e clusterNames...\n const projectID = 'XXXXXXXXXXXXXXXXXXXXX';\n const clusterName = 'cloud-prod';\n\n // Obter credenciais armazenadas...\n const username = context.values.get(\"AtlasPublicKey\");\n const password = context.values.get(\"AtlasPrivadeKeySecret\");\n\n // Defina o tamanho de instância desejado...\n const body = {\n \"providerSettings\" : {\n \"providerName\" : \"AWS\",\n \"instanceSizeName\" : \"M50\"\n }\n };\n \n result = await context.functions.execute('modifyCluster', username, password, projectID, clusterName, body);\n console.log(EJSON.stringify(result));\n \n if (result.error) {\n return result;\n }\n\n return clusterName + \" scaled down\"; \n};\n", "text": "Hello,I made a function to use in APP Services, where they run Scale UP and Scale Down instances in the mongodb cluster, but when Triggers fire and execute the function, for example:\nM60 (Low-CPU) it migrates to the M50 it stays as General does not maintain the (Low-CPU).I tried to find something in this documentation but I couldn’t find it:\nhttps://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Multi-Cloud-Clusters/operation/updateClusterMy Function is this:", "username": "Edson_Fernandes_Cunha" }, { "code": "", "text": "I found this doc where the available instances are mentioned, I changed it to R (Low-CPU):Cluster Tier & API Naming Conventions\nFor purposes of management with the Atlas Administration API, cluster tier names that are prepended with R instead of an M (R40 for example) run a low-CPU version of the cluster. When creating or modifying a cluster with the API, be sure to specify your desired cluster class by name with the providerSettings.instanceSizeName attribute.", "username": "Edson_Fernandes_Cunha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Problem in the migration of instances by the APP Service
2023-06-22T12:56:22.007Z
Problem in the migration of instances by the APP Service
336
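For reference, the only change needed in the function above is the instance-size name; per the documentation quoted in the thread, the R-prefixed name requests the Low-CPU class.

```js
// "M50" selects the General class; "R50" keeps the Low-CPU class
const body = {
  providerSettings: {
    providerName: "AWS",
    instanceSizeName: "R50"
  }
};
```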
null
[ "time-series", "schema-validation" ]
[ { "code": "ExpireAfterSecondsListCollections{{ \"name\" : \"system.buckets.test_document\", \"type\" : \"collection\", \"options\" : { \"validator\" : { \"$jsonSchema\" : { \"bsonType\" : \"object\", \"required\" : [\"_id\", \"control\", \"data\"], \"properties\" : { \"_id\" : { \"bsonType\" : \"objectId\" }, \"control\" : { \"bsonType\" : \"object\", \"required\" : [\"version\", \"min\", \"max\"], \"properties\" : { \"version\" : { \"bsonType\" : \"number\" }, \"min\" : { \"bsonType\" : \"object\", \"required\" : [\"Timestamp\"], \"properties\" : { \"Timestamp\" : { \"bsonType\" : \"date\" } } }, \"max\" : { \"bsonType\" : \"object\", \"required\" : [\"Timestamp\"], \"properties\" : { \"Timestamp\" : { \"bsonType\" : \"date\" } } }, \"closed\" : { \"bsonType\" : \"bool\" } } }, \"data\" : { \"bsonType\" : \"object\" }, \"meta\" : { } }, \"additionalProperties\" : false } }, \"clusteredIndex\" : true, **\"expireAfterSeconds\" : NumberLong(604801)**, \"timeseries\" : { \"timeField\" : \"Timestamp\", \"metaField\" : \"Hash\", \"granularity\" : \"hours\", \"bucketMaxSpanSeconds\" : 2592000 } }, \"info\" : { \"readOnly\" : false, \"uuid\" : CSUUID(\"id\") } }}\n{{ \"name\" : \"test_document\", \"type\" : \"timeseries\", \"options\" : { **\"expireAfterSeconds\" : NumberLong(604801)**, \"timeseries\" : { \"timeField\" : \"Timestamp\", \"metaField\" : \"Hash\", \"granularity\" : \"hours\", \"bucketMaxSpanSeconds\" : 2592000 } }, \"info\" : { \"readOnly\" : false } }}\nexpireAfterSeconds", "text": "Hi, I have a time-series collection with granularity set to 1 hour. Originally I had set ExpireAfterSeconds to 6 months. However, we had a design change and now we would like to store them for only 7 days. I have modified the value according to the documentation and can see the values got changed when I do ListCollections. Here is what is being returned:Both the actual document and the system bucket have the correct expireAfterSeconds but I can still see records older than 7 days even after 12 hours of modifying this value. Any help would be appreciated.", "username": "Ege_Dey-Aydin" }, { "code": "", "text": "Did you ever check this.", "username": "Kobe_W" }, { "code": "", "text": "Yes, I did, multiple times actually. I don’t understand what you mean. It’s been days since I made the change and the old records are still there. It says it runs every 60 seconds and there could be delays. The delay is taking days?", "username": "Ege_Dey-Aydin" } ]
Documents are not being expired after modifying ExpireAfterSeconds
2023-06-22T20:08:48.950Z
Documents are not being expired after modifying ExpireAfterSeconds
674
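One thing worth checking before assuming the TTL monitor is broken: as far as the time-series documentation goes, expiration is applied per bucket, and a bucket only becomes eligible once its newest measurement is past the TTL window. With granularity "hours" (bucketMaxSpanSeconds 2592000 in the listCollections output above, i.e. up to 30 days per bucket), older measurements can outlive the 7-day setting by a wide margin. Inspecting the bucket bounds makes this visible; the collection name is the one from the thread.

```js
// oldest buckets first; a bucket is only removable once control.max.Timestamp
// is older than now minus expireAfterSeconds
db.getCollection("system.buckets.test_document")
  .find({}, { "control.min.Timestamp": 1, "control.max.Timestamp": 1 })
  .sort({ "control.max.Timestamp": 1 })
  .limit(5)
```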
null
[ "aggregation", "queries" ]
[ { "code": "{\n\t\"_id\": \"61e7e78372d3221d2c5fb242\",\n\t\"name\": \"First Task\",\n\t\"desc\": \"This is description\",\n\t\"status\": \"completed\",\n\t\"done\": 100,\n\t\"level\": \"medium\",\n\t\"company\": \"BBB\",\n\t\"project\": \"AAA\",\n\t\"from_date\": \"2022/01/01 00:00:00\",\n\t\"to_date\": \"2022/01/01 00:00:00\",\n\t\"todos\": [\n\t\t{\n\t\t\t\"name\": \"sub1\",\n\t\t\t\"desc\": \"desc1\",\n\t\t\t\"active\": true,\n\t\t\t\"owner\": \"61e6125db0f102060951aa53\"\n\t\t},\n\t\t{\n\t\t\t\"name\": \"sub2\",\n\t\t\t\"desc\": \"desc2\",\n\t\t\t\"active\": true,\n\t\t\t\"owner\": \"61e6125db0f102060951aa53\"\n\t\t}\n\t],\n\t\"owner\": \"61e6125db0f102060951aa53\"\n}\n\"todos\": [\n\t\t{\n\t\t\t\"0\": {\n\t\t\t\t\"active\": false\n\t\t\t},\n\t\t\t\"name\": \"sub1\",\n\t\t\t\"desc\": \"desc1\",\n\t\t\t\"active\": true,\n\t\t\t\"owner\": \"61e6125db0f102060951aa53\"\n\t\t},\n\t\t{\n\t\t\t\"0\": {\n\t\t\t\t\"active\": false\n\t\t\t},\n\t\t\t\"name\": \"sub2\",\n\t\t\t\"desc\": \"desc2\",\n\t\t\t\"active\": true,\n\t\t\t\"owner\": \"61e6125db0f102060951aa53\"\n\t\t}\n\t]\ndb.ecom.updateOne(\n\t{_id: ObjectId('61e7e78372d3221d2c5fb242')},\n\t[\n\t\t{$set: {\"todos.0.active\": false}}\n\t]\n)\ndb.ecom.updateOne(\n\t{_id: ObjectId('61e7e78372d3221d2c5fb242')},\n\t[\n\t\t{$set: {\"todos.$[0].active\": false}} // error with $\n\t]\n)\ndb.ecom.updateOne(\n\t{_id: ObjectId('61e7e78372d3221d2c5fb242')},\n\t[\n\t\t{$set: {\"todos\": {\n\t\t\t\"$map\": function(item, index) {\n\t\t\t\tif ([0, 1].inArray(index)) {\n\t\t\t\t\titem.active = false;\n\t\t\t\t}\n\t\t\t\treturn item;\n\t\t\t}\n\t\t}}}\n\t]\n)\n", "text": "I have a document like that:I want to update “todos” array with rule: update sub item by index like “todos.0.active”: false.\nWhen I try it every sub item are updated like:0 doesn’t mean index (0, 1, 2…), it means key in every items. Im only need to update active to false with index in [0,1,…].\nI think about map operator like javascript but I dont know how to do. My full query is:and:I want to code like:", "username": "MAY_CHEAPER" }, { "code": "db.collection.aggregate([\n {\n \"$addFields\": {\n \"todos\": {\n \"$function\": {\n \"body\": function(todos){\n \n const el1 = todos[1];\n \n if(el1.active===false){ todos[1].active=true }\n \n return todos\n },\n \"args\": [\"$todos\"],\n \"lang\": \"js\"\n }\n }\n }\n }\n])\n$todos", "text": "Hi @MAY_CHEAPER,I’m not very advanced so this is more an idea… there should be a built in way to do this. Anyways, this is the query, you should test with update before doing it in the actual collection:May not be a good prettify as it is from ffox browser console.Basically, it adds the field “todos” by reading the current $todos and setting an element with index 1 to a particular value. 
Feel free to extend this function, for example, you could set up an array of indexes and then iterate, etc.", "username": "santimir" }, { "code": "", "text": "@santimir it’s amazing when I can code like this, thank you so much.", "username": "MAY_CHEAPER" }, { "code": "{\n\t\"_id\": \"61e7e78372d3221d2c5fb242\",\n\t\"name\": \"First Task\",\n\t\"desc\": \"This is description\",\n\t\"status\": \"completed\",\n\t\"done\": 100,\n\t\"level\": \"medium\",\n\t\"company\": \"BBB\",\n\t\"project\": \"AAA\",\n\t\"from_date\": \"2022/01/01 00:00:00\",\n\t\"to_date\": \"2022/01/01 00:00:00\",\n\t\"todos\": [\n\t\t{\n\t\t\t\"name\": \"sub1\",\n\t\t\t\"desc\": \"desc1\",\n\t\t\t\"active\": true,\n\t\t\t\"owner\": \"61e6125db0f102060951aa53\"\n\t\t},\n\t\t{\n\t\t\t\"name\": \"sub2\",\n\t\t\t\"desc\": \"desc2\",\n\t\t\t\"active\": true,\n\t\t\t\"owner\": \"61e6125db0f102060951aa53\"\n\t\t}\n\t],\n\t\"owner\": \"61e6125db0f102060951aa53\"\n}\n", "text": "If you only want to update position 0 of the array in every document, it is enough to run\ndb.pruebacomunity.updateMany({},{$set:{\"todos.0.active\":false}})", "username": "Sergio_Alfredo_Flores_Alfonso" } ]
How to update an sub array with map operation by index in array?
2022-01-21T11:52:03.867Z
How to update an sub array with map operation by index in array?
5,156
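A note on the root cause discussed above: the extra "0" key appears because the update uses the aggregation-pipeline form (the [ ... ] wrapper), where "todos.0.active" is not interpreted as an array index. A plain update document does accept numeric indexes in dot notation; a minimal sketch reusing the _id from the thread:

db.ecom.updateOne(
  { _id: ObjectId('61e7e78372d3221d2c5fb242') },
  { $set: { "todos.0.active": false, "todos.1.active": false } }   // no pipeline brackets, so 0 and 1 are array indexes
)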
null
[ "transactions", "flutter" ]
[ { "code": "object.property = 'abc';realm!.write(() { object.property = 'abc' });", "text": "I’m trying to update a property from a object with this code:\nobject.property = 'abc';\nBut i’m getting this error: Error: RealmException: Error code: 2005 . Message: Trying to modify database while in read transaction)\nI know if i use the below code it will work:\nrealm!.write(() { object.property = 'abc' });But the problem is that i dont want it to be synced with the database at this moment, i’ll sync it with the database later when/if the user clicks on the save button.Is there any way to copy this object without the realm reference or detach it from realm?\nThis way i would be able to do that, or do you guys have any other sugestion?", "username": "Rodrigo_Real1" }, { "code": "", "text": "Hi,\nYou could use async transactions to achieve what you describe. You can open an async transaction, allow each property of this object to be changed and commit the transaction on Save or revert the transaction on cancel.Update: I mean you can use beginWrite/Transaction.commit APIs", "username": "Lyubomir_Blagoev" }, { "code": "", "text": "We run into this all the time where the user has the ability to edit components of an object and then it should only be written out when the Save button is clicked.There are two strategies that we useSeparate the UI from the Model - e.g. pass in the object to the view (sheet/window/etc) and populate the UI from it, allowing the user to edit at will.\n• Once the user clicks Save then within a write transaction, update the properties on the object from the UIUpon opening the view (sheet/window/etc) make a copy of the object and update the object copy as the user makes changes.\n• Once the user clicks Save, then within a write transaction, write out the object via it’s primary key and the fields will be updated accordingly.Jay", "username": "Jay" } ]
Update object property without synchronizing it to the database
2023-06-26T04:58:23.769Z
Update object property without synchronizing it to the database
1,043
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Inside of mongosh shell I can do this fine…use admin\ndb.createUser({user: “admin”, pwd: “admin”, roles: [{role: “root”, db: “admin”}]})Now I need to do it in a script with a one liner but it doesn’t seem to work. It doesn’t\ngive any errors. However, the user will NOT be created?Here is what I’ve tried…echo ‘use admin ; db.createUser({user: “admin”, pwd: “admin”, roles: [{role: “root”, db: “admin”}]}) ; exit;’ | mongoshAlso:mongosh --eval ‘use admin ; db.createUser({user: “admin”, pwd: “admin”, roles: [{role: “root”, db: “admin”}]}) ; exit;’Is there a better way to create user in a script with a one liner?Thanks,", "username": "Christian_Seberino" }, { "code": "--eval \"db.getSiblingDB('admin').createUser(...)\"", "text": "Hi @Christian_SeberinoTry this form:--eval \"db.getSiblingDB('admin').createUser(...)\"", "username": "chris" }, { "code": "", "text": "Thanks. I figured it out. Apparently my problem was trying to switch databases inside my mongosh code.In other words, this doesn’t work:mongosh --eval ‘use admin ; db.createUser({user: “admin”, pwd: “admin”, roles: [{role: “root”, db: “admin”}]}) ;’However, if you remove the ‘use admin;’ and switch to admin another way, then it works…mongosh admin --eval ‘db.createUser({user: “admin”, pwd: “admin”, roles: [{role: “root”, db: “admin”}]}) ;’", "username": "Christian_Seberino" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trouble creating users IN A SCRIPT with a one liner. How do that?
2023-06-23T21:00:31.769Z
Trouble creating users IN A SCRIPT with a one liner. How do that?
453
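Putting the accepted answer together, a single-command sketch; the user name, password, and roles are the throwaway values from the thread, so substitute your own and add a connection string if the server is not local:

mongosh --eval 'db.getSiblingDB("admin").createUser({user: "admin", pwd: "admin", roles: [{role: "root", db: "admin"}]})'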
null
[ "replication", "java" ]
[ { "code": " HighLevel Problem statement: We are getting the exception \"com.mongodb.MongoClientException: Sessions are not supported by the MongoDB\" while creating the session from the MongoClient API when using ReactiveStreams MongoDB Java driver version 3.6.0.\n\n Recently we have migrated from MongoDB version 2.2 to 3.6.\n We used ReactiveStreams MongoDB Java driver version 3.6.0\n", "text": "Hi,", "username": "Sabareesh_Babu" }, { "code": "", "text": "A search on this error brings up this relevant post suggesting that the upgrade from 3.4 to 3.6 was not completed correctly.", "username": "chris" } ]
Unable to create session after MongoDB migration from version 2.2 to 3.6 while using ReactiveStreams Java driver
2023-06-26T05:35:51.737Z
Unable to create session after MongoDB migration from version 2.2 to 3.6 while using ReactiveStreams Java driver
424
null
[ "node-js" ]
[ { "code": "ObjectId.isValid()mongodb", "text": "How can I determine that a mongodb Object ID is valid? Using the ObjectId.isValid() method from the mongodb nodejs package isn’t enough, it returns true for any string that contains 12 characters. More so, I think this is a bug so how can I report this to the mongodb maintainers? The github repo has disabled issues.", "username": "Uchechukwu_Ozoemena" }, { "code": "", "text": "Hi @Uchechukwu_Ozoemena,Did you end up reporting an issue for this?Per the Bugs/Feature Requests section of the README in this GitHub repo, the NODE project in MongoDB JIra is the appropriate place to report issues directly to the library maintainers. It would be helpful to include more details on your use case, the level of validation you are expecting, and the version of the library you are using.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi, I have faced similar issue in my production environment. My Mongo Client version is 3.3.4 and my MongoDB Server Version is 4.2.14.", "username": "Arvin_Mathias" } ]
ObjectId.isValid() returns true for non-object IDs
2022-05-17T19:42:13.401Z
ObjectId.isValid() returns true for non-object IDs
8,760
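Until the library behaviour changes, a common workaround is to validate the string form yourself before constructing an ObjectId. The helper below is a sketch for the Node.js driver, and the function name is mine, not part of the driver:

const { ObjectId } = require('mongodb');

// Accept only 24-character hex strings instead of any 12-character string.
function isHexObjectId(value) {
  return typeof value === 'string' && /^[0-9a-fA-F]{24}$/.test(value);
}

isHexObjectId('649627038375fa0beb5d8102');  // true
isHexObjectId('twelve chars');              // false, although ObjectId.isValid() reports true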
null
[ "realm-web" ]
[ { "code": "", "text": "This article says to use createIndex to set the TTL, but realm-web did not have it. How can I do this? If not, how can I do the same thing using Triggers, etc.?", "username": "_8888" }, { "code": "mongoshcreateIndex", "text": "Hi @_8888,Those instructions are for creating a TTL index on your MongoDB collections, which is not possible via the Realm SDKs. You can instead connect to your cluster using mongosh and run the createIndex command there. See https://www.mongodb.com/docs/atlas/tutorial/connect-to-your-cluster-v2/ for more instructions on connecting to your Atlas cluster from the shell.", "username": "Kiro_Morkos" } ]
How do I set TTL for a document using realm-web?
2023-06-26T07:28:11.461Z
How do I set TTL for a document using realm-web?
596
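For reference, creating a TTL index from mongosh looks roughly like this; the database, collection, and field names are placeholders, and the indexed field must hold BSON dates:

db.getSiblingDB("myDatabase").events.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })   // documents expire one hour after their createdAt date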
null
[ "queries", "node-js", "crud", "mongoose-odm", "server" ]
[ { "code": "/Users/FruitsProjectMongoose/node_modules/mongoose/lib/model.js:3519\n \n for (let i = 0; i < error.writeErrors.length; ++i) {\n ^\n\nTypeError: Cannot read properties of undefined (reading 'length')\n at /Users/FruitsProjectMongoose/node_modules/mongoose/lib/model.js:3519:47\n at collectionOperationCallback (/Users/FruitsProjectMongoose/node_modules/mongoose/lib/drivers/node-mongodb-native/collection.js:194:24)\n at /Users/FruitsProjectMongoose/node_modules/mongodb/lib/utils.js:349:66\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\nconst mongoose = require(\"mongoose\");\nmongoose.set('strictQuery', false);\n\n// Connect to MongoDB by port and catch errors.\nmain().catch(err => console.log(err));\n\nasync function main() {\n await mongoose.connect('mongodb://127.0.0.1:27017/fruitsDB')\n .then(() => console.log('Connected!'));\n\n\n // Defining a Model Schema.\n const Schema = mongoose.Schema;\n const fruitSchema = new Schema({\n name: {\n type: String,\n require: true\n },\n rating: {\n type: Number,\n require: true\n },\n review: {\n type: String,\n require: true\n }\n });\n\n const peopleSchema = new Schema({\n name: String,\n age: Number\n });\n\n\n // Create a Model.\n const Fruit = new mongoose.model(\"Fruit\", fruitSchema);\n const People = new mongoose.model(\"People\", peopleSchema);\n\n\n // Create & Save a Document.\n const fruit = new Fruit({\n name: \"Banana\",\n rating: 10,\n review: \"Perfection!\"\n });\n // await fruit.save();\n\n const people = new People({\n name: \"Eduard\",\n age: 25\n });\n // await people.save();\n\n\n // Create & Save docs. in Bulk.\n const kiwi = new Fruit({\n name: \"Kiwi\",\n rating: 9,\n review: \"Great, kinda expensive!\"\n });\n \n const orange = new Fruit({\n name: \"Orange\",\n rating: 6,\n review: \"Too sweet.\"\n });\n\n const apple = new Fruit({\n name: \"Apple\",\n rating: 7,\n review: \"Great fruit!\"\n });\n\n Fruit.insertMany([kiwi, orange, apple], function(err) {\n if (err) {\n console.log(err);\n } else {\n console.log(\"Succesfully saved to fruitsDB\");\n }\n });\n\n\n mongoose.connection.close();\n};\nbrew services start mongodb-community", "text": "I’ve looked everywhere and I cannot figure out why I get this error while I try to create and save multiple documents with Mongoose.It is working to save individual documents, but when I run the script to add multiple documents with .insertMany() I get the following message in terminal and I have no clue what to do with it.It would be an understatement to say that I have tried everything that I could think of/find across the web. I’ve messaged a few devs. and they’ve recommended me to try some things but no luck. I really need some help with this. 
I start to think it might be something wrong with my system.I’ve installed MongoDB through HomeBrew and Mongoose through npm in the past two days so everything is up-to-date.Here is my simple JS script:MongoDB server is running on brew services start mongodb-community .", "username": "Eduard_Radd" }, { "code": "", "text": "I encountered the same issue,E:\\Web Development Bootcamp\\Projects\\todolist-v2\\node_modules\\mongoose\\lib\\model.js:3519\nfor (let i = 0; i < error.writeErrors.length; ++i) {\n^TypeError: Cannot read properties of undefined (reading ‘length’)\nat E:\\Web Development Bootcamp\\Projects\\todolist-v2\\node_modules\\mongoose\\lib\\model.js:3519:47\nat collectionOperationCallback (E:\\Web Development Bootcamp\\Projects\\todolist-v2\\node_modules\\mongoose\\lib\\drivers\\node-mongodb-native\\collection.js:150:26)\nat Timeout. (E:\\Web Development Bootcamp\\Projects\\todolist-v2\\node_modules\\mongoose\\lib\\drivers\\node-mongodb-native\\collection.js:177:11)\nat listOnTimeout (node:internal/timers:564:17)\nat process.processTimers (node:internal/timers:507:7)Node.js v18.13.0when i change the localhost to 127.0.0.1 in the connection → It worked.\nThis may help you to narrow down the issue.", "username": "G_G1" }, { "code": "app.listen(3000, function() {\n console.log(\"Server started on port 3000\");\n});\n", "text": "Can you send through an example of where you changed localhost to 127.0.0.1 in the connection? I’ve tried within app.listen and it didn’t work.", "username": "Lauren_Robinette" }, { "code": "mongoose.connect(\"mongodb://localhost:27017/todolistDB\", {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nmongoose.connect(\"mongodb://127.0.0.1:27017/todolistDB\", {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\n", "text": "I have facing this same issue. But after i change the localhost connection path with replacing localhost to\n127.0.0.1 It works…", "username": "Jaydip_Baraiya" }, { "code": "", "text": "I was stuck on this for a couple of hours. I tried multiple methods from stack overflow, and mongodb. I got it to work, not sure how accurate this is but this is the script i used. From my understanding else/if can no longer be used with insertMany. I believe that the ‘then’, and ‘catch’ solved this.const mongoose = require(‘mongoose’);mongoose.connect(“mongodb://127.0.0.1:27017/fruitsDB”)const fruitSchema = new mongoose.Schema({\nname: String,\nrating: Number,\nreview: String\n});const Fruit = mongoose.model(“Fruit”, fruitSchema);const fruit = new Fruit({\nname: “apple”,\nrating: 7,\nreview: “Pretty solid as a fruit.”\n});fruit.save()const personSchema = new mongoose.Schema({\nname: String,\nage: Number\n});const Person = mongoose.model(“Person”, personSchema);const person = new Person({\nname: “John”,\nage: 37\n});//person.save();const kiwi = new Fruit({\nname: “Kiwi”,\nrating: 9,\nreview: “Great, kinda expensive!”\n});const orange = new Fruit({\nname: “Orange”,\nrating: 6,\nreview: “Too sweet.”\n});const apple = new Fruit({\nname: “Apple”,\nrating: 7,\nreview: “Great fruit!”\n});const newFruits = [kiwi, orange, apple];Fruit.insertMany(newFruits)\n.then(function () {\nconsole.log(“Successfully saved defult items to DB”);\n})\n.catch(function (err) {\nconsole.log(err);\n});", "username": "Alberto_Camacho" } ]
Mongoose Error on .insertMany()
2023-01-25T22:35:04.010Z
Mongoose Error on .insertMany()
6,311
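Summarising the fixes that worked for posters here, plus one more possible contributor: the original script closes the connection immediately after the callback-style insertMany, so the write may still be in flight when the connection drops. A sketch of the relevant lines, assuming the promise style and the 127.0.0.1 host (newer Node releases can resolve localhost to IPv6 first):

await mongoose.connect('mongodb://127.0.0.1:27017/fruitsDB');
// ...
await Fruit.insertMany([kiwi, orange, apple]);   // resolves once the documents are written
console.log("Successfully saved to fruitsDB");
await mongoose.connection.close();               // close only after the insert has completed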
null
[]
[ { "code": "", "text": "W: GPG error: MongoDB Repositories bionic/mongodb-org/4.0 Release: The following signatures were invalid: EXPKEYSIG 68818C72E52529D4 MongoDB 4.0 Release Signing Key [email protected]\nE: The repository ‘MongoDB Repositories bionic/mongodb-org/4.0 Release’ is not signed.\nN: Updating from such a repository can’t be done securely, and is therefore disabled by default.As we need to upgrade from 3.6 to 5.0 in ubuntu 18", "username": "Pramod_Prajapat" }, { "code": "", "text": "were you able to get the updated key?", "username": "Shivam_Tewari" }, { "code": "", "text": "Any update on this issue ?", "username": "Roland_Bole1" }, { "code": "", "text": "this issue ?No update provide by yet from Mongo Community", "username": "Pramod_Prajapat" }, { "code": "", "text": "Answered: Mongo db 4.0 GPG key expired for ubuntu 18.04 - #2 by chris", "username": "chris" } ]
MongoDB 4.0 Debian GPG key expired few days ago for ubuntu 18
2023-05-08T07:40:57.633Z
MongoDB 4.0 Debian GPG key expired few days ago for ubuntu 18
2,146
null
[ "aggregation", "queries", "crud" ]
[ { "code": "$sumneg-bigneg-biggerpos-bigpos-biggersmallnumbertest> db.test.insertMany(\n[\n { _id: 'number', val: 41.13 },\n { _id: 'small', val: 5e-324 },\n { _id: 'pos-big', val: 9223372036854776000 },\n { _id: 'neg-big', val: -9223372036854776000 },\n { _id: 'pos-bigger', val: 9223372036854778000 },\n { _id: 'neg-bigger', val: -9223372036854778000 }\n]\n)\n$sort_idtest> db.test.aggregate([ {$sort: {_id: 1} } ])\n[\n { _id: 'neg-big', val: -9223372036854776000 },\n { _id: 'neg-bigger', val: -9223372036854778000 },\n { _id: 'number', val: 41.13 },\n { _id: 'pos-big', val: 9223372036854776000 },\n { _id: 'pos-bigger', val: 9223372036854778000 },\n { _id: 'small', val: 5e-324 }\n]\ntest> db.test.aggregate([{$sort: {_id: -1} } ])\n[\n { _id: 'small', val: 5e-324 },\n { _id: 'pos-bigger', val: 9223372036854778000 },\n { _id: 'pos-big', val: 9223372036854776000 },\n { _id: 'number', val: 41.13 },\n { _id: 'neg-bigger', val: -9223372036854778000 },\n { _id: 'neg-big', val: -9223372036854776000 }\n]\n$group$sum41.13000000000011test> db.test.aggregate([ {$sort: {_id: 1} }, { $group: { _id: null, total: { $sum: \"$val\" } } } ])\n[ { _id: null, total: 41.13000000000011 } ]\n0test> db.test.aggregate([{$sort: {_id: -1} }, { $group: { _id: null, total: { $sum: \"$val\" } } } ])\n[ { _id: null, total: 0 } ]\n41.13000000000011number41.1341.1300000000001110", "text": "I would like to understand how aggregation $sum accumulator works for big numbers and small numbers. In this example, I’m using neg-big, neg-bigger, pos-big and pos-bigger for doubles that are big enough to lose precision, a very small number small and a standard double number.Below shows $sort responses for applying ascending sort and descending sort on _id.When I apply $group’s $sum accumulator after ascending sort, it returns 41.13000000000011.While if I apply it after descending sort, it returns 0.How does ascending sort sums up to41.13000000000011. I think number contributes to 41.13 but where does the rest of it comes from? I would understand if there is lost precision resulting in such as 41.1300000000001, but double 11 at the end is mystery to me.Also how does the descending sort result sums up to 0?I would like to understand them so I know how I could handle large numbers.", "username": "KaiJ" }, { "code": "", "text": "Is anyone able to help me on this?I would love to figure out how I can handle big numbers.", "username": "KaiJ" } ]
Using aggregation group `$sum` accumulator on large and small doubles
2023-06-15T09:43:12.193Z
Using aggregation group `$sum` accumulator on large and small doubles
574
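The behaviour comes down to IEEE 754 doubles: addition is order dependent, and near 2^63 the gap between adjacent doubles is 2048, so adding 41.13 to such a value changes nothing, while different summation orders accumulate and cancel differently, which is why the two sorts give different totals. If exact totals matter, one option is to sum as Decimal128; the pipeline below is a sketch and assumes a server version that supports $toDecimal:

9223372036854775808 + 41.13 === 9223372036854775808   // true in JavaScript: 41.13 is below half the spacing between doubles at 2^63

db.test.aggregate([
  { $group: { _id: null, total: { $sum: { $toDecimal: "$val" } } } }
])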
https://www.mongodb.com/…5_2_1024x762.png
[ "queries", "node-js" ]
[ { "code": "", "text": "\nScreenshot 2023-06-26 1548051061×790 34.5 KB\nLab URL : MongoDB Courses and Trainings | MongoDB University", "username": "Shahriar_Shatil" }, { "code": "", "text": "Hey @Shahriar_Shatil,Thank you for surfacing it. We are aware of this issue and will update you once it has been resolved.If you have any other concerns or questions, please don’t hesitate to reach out.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi @Shahriar_Shatil,\nYou’re trying to execute MongoDB command in the bash Shell, so before you need to connect to the cluster with mongo or mongosh utility and the run this commands.Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
MongoDB Uni Lab Shell Environment not working
2023-06-26T09:48:49.680Z
MongoDB Uni Lab Shell Environment not working
794
https://www.mongodb.com/…cb039fb683ee.png
[ "queries", "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "//Use batch object to retrieve multiple objects in one operation\napp.get('/users/batchquery', passport.authenticate('jwt', { session: false }), async (req, res) => {\n const { userIds } = req.query;\n\n try {\n // Split the userIds into an array\n const ids = userIds.split(',');\n\n // Perform the query using the $in operator to fetch multiple users at once\n const users = await Users.find({ _id: { $in: ids } });\n\n // Send the response with the retrieved users\n res.json(users);\n } catch (error) {\n // Handle errors\n res.status(500).json({ error: 'An error occurred' });\n }\n});\n", "text": "I’m getting this error in my API using NodeJS, Express, and MongoDB when doing a batch query (GET). I am able to POST, PUT, and DELETE multiple documents by _id but for some reason I get this error when doing this query:GET: http://localhost:3001/users/batchquery?userIds=649627038375fa0beb5d8102,6496209b9fa3f73cf1885dfa,648e8afe2ad965a4222f5afeimage954×282 25.4 KBHere is my API:Have been at this for hours and so far have not found any fix online. Would appreciate any advice Thanks!", "username": "D_M2" }, { "code": "/users/batchquery/users/:userId/users/batchquery/users/:userId", "text": "Found the fix (hope this helps someone else )The route /users/batchquery was conflicting with the route parameter /users/:userId that handles individual user retrieval.What I did was modified the order of the route handlers so that the /users/batchquery route is defined before the /users/:userId rout-- and it worked! ", "username": "D_M2" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error: Cast to ObjectId failed for value \"batchquery\" (type string) at path \"_id\" for model \"Users\"
2023-06-26T07:22:25.744Z
Error: Cast to ObjectId failed for value \&rdquo;batchquery\&rdquo; (type string) at path \&rdquo;_id\&rdquo; for model \&rdquo;Users\&rdquo;
2,184
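The fix generalises: Express matches routes in the order they are registered, so a literal path has to be registered before a parameterised path that would otherwise capture it. A stripped-down sketch with hypothetical handler names:

// '/users/batchquery' must be registered first, otherwise '/users/:userId'
// matches the request and 'batchquery' is cast to an ObjectId.
app.get('/users/batchquery', handleBatchQuery);
app.get('/users/:userId', handleSingleUser);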
https://www.mongodb.com/…8_2_1023x512.png
[ "mongodb-shell" ]
[ { "code": "", "text": "As title, trying to do the deleteOne and deleteMany labs but the labs don’t have a mongosh connection.\nScreenshot 2023-06-25 at 14.50.401882×942 89.7 KB\n", "username": "Alexandru_47292" }, { "code": "", "text": "Your mongod should be up & running before you can connect with mongosh\nIs your lab exercise done on your cluster which is on cloud or local installation\nIf it is your own sandbox cluster on cloud you need to connect to it using the connect string", "username": "Ramachandra_Tummala" }, { "code": "", "text": "This is a lab where it says there’s no need to install anything and it opens a tab with the instructions on the left hand side and a terminal on the right", "username": "Alexandru_47292" }, { "code": "", "text": "Check your lab instructions and previous lessons\nIf you issue just mongosh it looks for locally running mongod\nYou need a connect string to connect to your Atlas mongodb", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hey @Alexandru_47292,Thanks for reaching out to the MongoDB Community forums!As title, trying to do the deleteOne and deleteMany labs but the labs don’t have a mongosh connection.We have checked on our end and encountered the same issue with the labs. It seems that there is currently no connection established with the MongoDB Atlas cluster. Please allow us some time, and we will update you once it is resolved.Thank you for your patience.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
No mongosh connection for delete labs
2023-06-25T13:52:35.905Z
No mongosh connection for delete labs
917
null
[ "charts" ]
[ { "code": "", "text": "Hi. We are able to render the dashboard url into an iframe. Is there any way the url can be protected as now, anyone with the url can access the information.\nWe want to make sure the data is safe and not being able to access by any unauthorised personnel with the link.", "username": "Gokul_Raj" }, { "code": "getUserToken", "text": "Hi @Gokul_Raj -Yes you can do this, but you need to embed using the Embedding SDK, not the iframe method. The basic approach is follows:For details and examples see Configure Embedding Authentication Providers — MongoDB ChartsTom", "username": "tomhollander" } ]
Protecting the dashboard URL
2023-06-22T00:01:40.372Z
Protecting the dashboard URL
642
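For orientation, the SDK-based flow looks roughly like the sketch below. Treat it as an outline rather than exact API usage and check the current Charts embedding documentation for option names; the base URL, dashboard id, and getJwtForCurrentUser() are placeholders.

import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";

const sdk = new ChartsEmbedSDK({
  baseUrl: "https://charts.mongodb.com/charts-yourproject-abcde",   // placeholder Charts base URL
  getUserToken: () => getJwtForCurrentUser()                        // placeholder: return the signed-in user's token
});

const dashboard = sdk.createDashboard({ dashboardId: "your-dashboard-id" });   // placeholder id
await dashboard.render(document.getElementById("dashboard"));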
https://www.mongodb.com/…a_2_1024x677.png
[]
[ { "code": "", "text": "Hi,I ran out of space on my database and clicked to upgrade from to M2.\nI clicked continue and everything and then I continued and all of my databases are gone.\nOn a production server.Did I do something wrong?\nimage1600×1058 55.9 KB\n", "username": "Michael_Tarnorutsky1" }, { "code": "", "text": "Hi @Michael_Tarnorutsky1I clicked continue and everything and then I continued and all of my databases are gone.I hope you take regular backups as you may need that to recover. Your first step will be to get in touch with MongoDB support.If you have a backup I’d suggest creating a new Cluster and restore to that and leave your existing one to determine what the issue was upgrading tier.You can use the in app chat support Mon-Fri. Or you can sign up for the free trial support.\nimage1669×736 84.6 KB\n", "username": "chris" }, { "code": "", "text": "Well, I texted them 3 hours ago and no reply since.\nI also can’t purchase a new plan since I don’t have any database deployments since they were all deleted and I am afraid of making new ones.I have a backup from a week ago thankfully but imagine the reliability of the cloud service if it can get deleted on upgrade like this.", "username": "Michael_Tarnorutsky1" }, { "code": "", "text": "Well, I texted them 3 hours ago and no reply since.In App support Mon - Fri response time may vary. It’s only just coming up to Monday in APAC, apply patience.If you get past your fear of making a new cluster your could sign up for support and ‘start the timer’ for a response.", "username": "chris" } ]
Databases got deleted after an upgrade
2023-06-25T15:18:43.501Z
Databases got deleted after an upgrade
327
null
[ "aggregation" ]
[ { "code": "$trim:{input: $causeofdeath}/**\n * specifications: The fields to\n * include or exclude.\n */\n{\n cemetery: {\n $trim: {\n input: \"$cemetery\",\n },\n },\n lname: {\n $trim: {\n input: \"$lname\",\n },\n },\n fname: {\n $trim: {\n input: \"$fname\",\n },\n },\n causeofdeath: {\n $trim: {\n input: \"$causeofdeath\",\n },\n },\n ddate: 1,\n bdate: 1,\n}\n", "text": "Would it be possible to use $ifNull to conditionally pass a field through an aggregation pipeline based on whether or not data is present in the field? I’m using an $project after an $sort, but I want to pass $trim:{input: $causeofdeath} through if data is in the field.Can I do that, or do I have to pass the field through all the time?", "username": "Douglas_Carmichael" }, { "code": "$trim {\n $project: {\n causeofdeath: {\n $ifNull: [\n { $trim: { input: \"$causeofdeath\" } },\n \"$$REMOVE\"\n ]\n }\n }\n }\n$$REMOVE", "text": "Hello @Douglas_Carmichael, Welcome to the MongoDB community forum,The $trim will return null if the field does not exist, or it is null, are you saying that you need to exclude the field in the result if it is null?If yes then try this:The $$REMOVE will remove the property if the field’s value is null.Out of the question, I would suggest you insert a trimmed value in the database, so you can avoid these extra operations when you retrieve the data from the database.", "username": "turivishal" }, { "code": "", "text": "Thanks! Unfortunately, I’m working with a “dirty” data set that has been mostly maintained by inexperienced people up to this point.Would csvkit (csvkit 1.1.1 documentation) be able to trim the values on the CSV before I import them with mongoimport, or would I have to write my own script?", "username": "Douglas_Carmichael" }, { "code": "updateMany()", "text": "I don’t know more about csvkit, but if you google it you will find plenty of resources/tools to trim the CSV values.If you want to update your existing document’s value then you can use update with aggregation pipeline with updateMany() method and the solution that I have provided will work in update query.", "username": "turivishal" }, { "code": "updateMany()", "text": "How would I create the updateMany() query? Could I use Compass?", "username": "Douglas_Carmichael" }, { "code": "", "text": "Are you asking about how to create a query or where to execute a query?You can execute it in the Compass shell and Mongo shell as well.", "username": "turivishal" }, { "code": "", "text": "How to create a query.", "username": "Douglas_Carmichael" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can you use $ifNull to conditionally pass a field through an aggregation pipeline?
2023-06-24T22:08:20.864Z
Can you use $ifNull to conditionally pass a field through an aggregation pipeline?
325
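The final question in the thread (how to write the updateMany) was left open; an update with an aggregation pipeline, available on MongoDB 4.2 and later, can trim values in place. A sketch with an assumed collection name; repeat the $set entry for the other string fields:

db.deaths.updateMany(
  { causeofdeath: { $type: "string" } },
  [ { $set: { causeofdeath: { $trim: { input: "$causeofdeath" } } } } ]
)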
null
[ "node-js", "data-modeling", "crud", "serverless" ]
[ { "code": "", "text": "We are developing a website using Angular and Node.js, utilizing a cluster on the free shared tier. The total storage size of the cluster is approximately 3.0 MB, spread across six collections. One collection stands out as it has a significant storage size of around 2000 KB and contains 957 documents.Although the current sizes are relatively small, we are experiencing slowdowns in API response, particularly when making a ‘get-all’ request on the dominant collection. It takes up to 1 minute to receive a response, and we are concerned about the application’s speed as we anticipate a 10x to 20x increase in storage size, especially for the dominant collection, in the future.We have the following questions:", "username": "Rajan_Braiya" }, { "code": "", "text": "Hey @Rajan_Braiya,Welcome to the MongoDB Community forums Although the current sizes are relatively small, we are experiencing slowdowns in API response, particularly when making a ‘get-all’ request on the dominant collection.To better understand the scenario, could you please share the workflow you followed to run the query, the explain output of the same, and the sample document?It depends on various factors, including your schema design and the complexity of the query you are planning to execute.If we decide to go with a subscription plan, should we choose a serverless or dedicated option?Serverless is more suitable for workloads that are not consistent. However, if you are expecting a consistent workload, then a dedicated server would be a good choice. It depends on your preference and workload. To learn more about serverless and dedicated servers, please refer to the FAQ - Atlas Serverless Instances.If we opt for a dedicated server, how can we determine the CPU and RAM requirements based on the collection size and number of documents?As per my knowledge, it doesn’t depend on the collection size; it depends on the working set. It could be the case that you have a huge dataset, but the working set could be small. Also, MongoDB Atlas provides an auto-scaling feature that you can configure, and MongoDB Atlas will automatically scale your cluster tier, storage capacity, or both in response to cluster usage.Look forward to hearing from you.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
How can I determine the CPU and RAM requirements based on data size?
2023-06-15T15:19:49.964Z
How can I determine the CPU and RAM requirements based on data size?
934
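While sizing depends mostly on the working set rather than raw collection size, the one-minute get-all response is worth profiling before buying a bigger tier. The lines below are a sketch; the collection, model, and field names are placeholders:

db.items.find({}).explain("executionStats")             // see where the time goes for the slow query

// In the API (Mongoose): avoid returning every field of every document at once.
const docs = await Item.find({}, { largePayload: 0 })   // exclude a hypothetical large field
  .limit(100)
  .lean();                                              // plain objects, skips document hydration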
null
[]
[ { "code": "", "text": "cannot see the databases in the cluster(created in Atlas) i have created in VS Code after connecting with atlas", "username": "supra_sarkar" }, { "code": "", "text": "How are you verifying the dbs/collections?From vcs code or some other tool?\nCan you check those from shell", "username": "Ramachandra_Tummala" }, { "code": "", "text": "thanks for taking time to answer.\nI went to database access in atlas, clicked on edit. then chaned my roles to admin and clusterMonitor . And now i can see the databases in VS Code.", "username": "supra_sarkar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot see the databases in the cluster(created in Atlas) i have created in VS Code
2023-06-24T19:31:38.739Z
Cannot see the databases in the cluster(created in Atlas) i have created in VS Code
428
null
[ "node-js" ]
[ { "code": "", "text": "Incorrect solution 5/5\nfailed to sampleData load for cluster ‘myAtlasClusterEDU’Lab URL: MongoDB Courses and Trainings | MongoDB University", "username": "Shahriar_Shatil" }, { "code": "", "text": "Check this thread", "username": "Ramachandra_Tummala" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connecting to MongoDB using NodeJS
2023-06-24T06:52:33.086Z
Connecting to MongoDB using NodeJS
837