Columns: image_url (string, length 113-131), tags (list), discussion (list), title (string, length 8-254), created_at (string, length 24), fancy_title (string, length 8-396), views (int64, range 73-422k)
null
[ "compass", "indexes" ]
[ { "code": "", "text": "Hi Team,I have a dev env database with a collection size of 2.5 Billion records. The are 5 existing regular indexes. I am trying to create a new compound index with 2 fields using the Mongo Compass tool.I have tried with more than 5 times and it’s failing. Each time it takes more than 8 hours and fails. How can I debug it? Where can I find the error logs?I am new MongoDb so please let me know what else info I can provide. appreciate the help.Other existing 5 indexes size:\n_id: 100.4 GB\nindex_1: 78 GB\nindex:_2: 100.4 GB\nindex_3: 78 GB\nindex_4: 95.8 GB", "username": "Ahmad_Sayeed" }, { "code": "/var/log/mongo/mongod.log db.serverCmdLineOpts()\"c\":\"INDEX\"", "text": "Logs are your best bet. Usually logs will be found in /var/log/mongo/mongod.log on a linux system.Check the mongod configuration file to be certain or you could run db.serverCmdLineOpts()Another limited option is getLogIn either case a good starting point will be to filter on \"c\":\"INDEX\"", "username": "chris" } ]
Compound index creation failed and no error logs
2023-01-01T10:02:18.156Z
Compound index creation failed and no error logs
1,782
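Following the pointers in the thread above, here is a minimal sketch of pulling the in-memory server log over a driver connection and filtering it down to index-build messages, for cases where the log file itself is not reachable. The connection URI is a placeholder, and the structured (JSON) log format assumes MongoDB 4.4 or newer.

```javascript
const { MongoClient } = require("mongodb");

async function recentIndexLogEntries(uri) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    // getLog returns the in-memory tail of the server log (recent entries only).
    const { log } = await client.db("admin").command({ getLog: "global" });
    // Each entry is a JSON string; keep only index-related messages ("c":"INDEX").
    return log
      .map((line) => JSON.parse(line))
      .filter((entry) => entry.c === "INDEX");
  } finally {
    await client.close();
  }
}

// Placeholder URI; point it at the server doing the index build.
recentIndexLogEntries("mongodb://localhost:27017")
  .then((entries) => entries.forEach((e) => console.log(e.t?.$date, e.msg)))
  .catch(console.error);
```

Since getLog only returns the most recent entries, the log file on disk remains the authoritative source for a build that failed hours earlier.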
null
[]
[ { "code": " const client = await MongoClient.connect(uri, {\n useNewUrlParser: true,\n ssl: true,\n sslValidate: true,\n sslCert: fs.readFileSync(\"./rootCA.pem\"),\n });\n", "text": "error: ENAMETOOLONG: name too long", "username": "Zil_D" }, { "code": "", "text": "sslCert should be the path to the certificate, you’re passing the contents of the file.", "username": "chris" } ]
SSL certificate file "name too long" error during connection setup
2023-01-02T15:45:58.950Z
SSL certificate file "name too long" error during connection setup
1,582
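The fix suggested above amounts to handing the driver a file path instead of the file contents. A minimal sketch, assuming a 4.x-or-newer Node.js driver where the TLS options accept paths directly; the URI and certificate path are placeholders.

```javascript
const { MongoClient } = require("mongodb");

// Placeholder URI and certificate path; substitute your own values.
const client = new MongoClient("mongodb://localhost:27017", {
  tls: true,
  // Pass the *path* to the CA file, not the result of fs.readFileSync().
  tlsCAFile: "./rootCA.pem",
});

async function main() {
  await client.connect();
  console.log("connected");
  await client.close();
}

main().catch(console.error);
```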
null
[ "aggregation", "crud" ]
[ { "code": "$map", "text": "Given a collection of documents, and a list of documents that need to be updated in the collection, I’m struggling to understand whether in a single query I can execute a single operation to upsert each of the items in the list.For example, if I have a list of 10 items, I’d like to:I tried looking at updateMany but it looks like it is built to update each item the same way rather than process a list of single items against the same rule.I considered generating an aggregation pipeline via $map to do this, but I thought that maybe I was overcomplicating the issue. What is the idiomatic way to handle this scenario? Thanks!", "username": "Brian_Sump" }, { "code": "const documentsToUpsert = [\n {\n updateOne: {\n filter: { _id: 1 },\n update: { $set: { field1: \"value1\" } },\n upsert: true\n }\n },\n {\n updateOne: {\n filter: { _id: 2 },\n update: { $set: { field2: \"value2\" } },\n upsert: true\n }\n },\n {\n updateOne: {\n filter: { _id: 3 },\n update: { $set: { field3: \"value3\" } },\n upsert: true\n }\n }\n];\n\nawait collection.bulkWrite(documentsToUpsert);\n", "text": "Hi @Brian_Sump ,You can use the bulkWrite method to achieve this behavior. The bulkWrite method allows you to perform a number of different write operations in a single command, including update and insert operations.Here’s an example of how you could use bulkWrite to upsert a list of documents:This will update the documents with the matching _id field in the filter clause, or insert a new document if no matching documents were found. The upsert option is set to true to specify that an upsert should be performed.You can also use the bulkWrite method to perform other types of write operations, such as delete operations.Will that work?Thanks", "username": "Pavel_Duchovny" }, { "code": "", "text": "This should work - thanks!", "username": "Brian_Sump" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upserting List of Documents
2023-01-02T05:16:48.539Z
Upserting List of Documents
1,232
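As a follow-up to the accepted bulkWrite answer above, a hedged sketch of building the operations from an arbitrary in-memory list and checking what happened. The `items` shape and the `collection` handle are assumptions, and the snippet is meant to run inside an async function.

```javascript
// Hypothetical list of items to upsert; each carries its own _id.
const items = [
  { _id: 1, field1: "value1" },
  { _id: 2, field2: "value2" },
];

// Exclude _id from $set; it is only used to match the document.
const ops = items.map(({ _id, ...fields }) => ({
  updateOne: {
    filter: { _id },
    update: { $set: fields },
    upsert: true,
  },
}));

// ordered: false lets independent upserts continue even if one of them fails.
const result = await collection.bulkWrite(ops, { ordered: false });
console.log(result.modifiedCount, "updated,", result.upsertedCount, "inserted");
```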
null
[ "python", "connecting", "mongodb-shell", "pymodm-odm" ]
[ { "code": "pymongo.errors.ServerSelectionTimeoutError: Connection refused\nimport\n\nimport pandas as pd\n\nfrom pymongo import MongoClient\n\nclient = MongoClient('1xx.xx.xx.1:27017')\n\ndb = client ['(practice_12_29)img_to_text-001_new']\n\ncollection = db ['img_to_text_listOfElems']\n\ndata = pd.read_csv('file_listOfElems.csv',encoding = 'UTF-8')\n\ndata_json = json.loads(data.to_json(orient='records'))\n\ncollection.insert(data_json)\n\n\njetson@jetson-desktop:~/Desktop/test_12.26$ python3 csv_to_mongoDB.py\n\nTraceback (most recent call last):\n\n File \"csv_to_mongoDB.py\", line 13, in <module>\n\n collection.insert(data_json)\n\n File \"/home/jetson/.local/lib/python3.6/site-packages/pymongo/collection.py\", line 3182, in insert\n\n check_keys, manipulate, write_concern)\n\n File \"/home/jetson/.local/lib/python3.6/site-packages/pymongo/collection.py\", line 646, in _insert\n\n blk.execute(write_concern, session=session)\n\n File \"/home/jetson/.local/lib/python3.6/site-packages/pymongo/bulk.py\", line 511, in execute\n\n return self.execute_command(generator, write_concern, session)\n\n File \"/home/jetson/.local/lib/python3.6/site-packages/pymongo/bulk.py\", line 344, in execute_command\n\n with client._tmp_session(session) as s:\n\n File \"/usr/lib/python3.6/contextlib.py\", line 81, in __enter__\n\n return next(self.gen)\n\n File \"/home/jetson/.local/lib/python3.6/site-packages/pymongo/mongo_client.py\", line 1820, in _tmp_session\n\n s = self._ensure_session(session)\n\n File \"/home/jetson/.local/lib/python3.6/site-packages/pymongo/mongo_client.py\", line 1807, in _ensure_session\n\n return self.__start_session(True, causal_consistency=False)\n\n File \"/home/jetson/.local/lib/python3.6/site-packages/pymongo/mongo_client.py\", line 1760, in __start_session\n\n server_session = self._get_server_session()\n\n File \"/home/jetson/.local/lib/python3.6/site-packages/pymongo/mongo_client.py\", line 1793, in _get_server_session\n\n return self._topology.get_server_session()\n\n File \"/home/jetson/.local/lib/python3.6/site-packages/pymongo/topology.py\", line 477, in get_server_session\n\n None)\n\n File \"/home/jetson/.local/lib/python3.6/site-packages/pymongo/topology.py\", line 205, in _select_servers_loop\n\n self._error_message(selector))\n\npymongo.errors.ServerSelectionTimeoutError: 1xx.xx.xx.1:27017: [Errno 111] Connection refused\n\n\njetson@jetson-desktop:~/Desktop/test_12.26$ sudo rm /var/lib/mongodb/mongod.lock\n\nrm: cannot remove '/var/lib/mongodb/mongod.lock': No such file or directory\n\n<username><password>cluster-details\n# this worked fine, that I don't remember I put user name and password \n\nimport pandas as pd\n\nfrom pymongo import MongoClient\n\naaa = pd.read_excel(\"T1_new.xls\")\n\nprint(aaa.head)\n\nclient = MongoClient('1xx.xx.xx.1:27017')\n\ndb = client['sample_data_in_DB']\n\ncollection = db['sample_collection']\n\ncollection.insert_many(aaa.to_dict('records'))\n\n", "text": "I’m running the script wish to save the csv file to MOngoDB, and face pymongo.errors.ServerSelectionTimeoutError: Connection refused(ps. 
‘1xx.xx.xx.1:27017’ is correct mongoDB ip)I tried one of the similar issue’s solution,but is not work tooI also find this python - Mongodb pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused, Timeout: 30s, - Stack Overflowbut how do I know the <username> , <password> and cluster-details , my last time experience with other computer can just upload excel with below codeif any idea just let me know,thanks", "username": "j_ton" }, { "code": "", "text": "The error Connection Refused means one and only one thing. It means your client, that is your python code, cannot reach the server specified by the URI.If your client cannot reach the server then\n1 - there is not server listening at the given address port\nor\n2 - you have a firewall, vpn or other security measures stopping you from accessing the server", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
CSV file to MongoDB, with `pymongo.errors.ServerSelectionTimeoutError: Connection refused`
2022-12-29T08:27:27.089Z
CSV file to MongoDB, with `pymongo.errors.ServerSelectionTimeoutError: Connection refused`
2,995
null
[ "aggregation", "node-js", "mongoose-odm", "indexes", "performance" ]
[ { "code": "{\n commonFilter: {\n type: mongoose.Schema.Types.Mixed,\n },\n requestid: String,\n userData: mongoose.Schema.Types.Mixed,\n dateTime: {\n type: Date,\n default: Date.now,\n }\n}\ncommonFilter: {\n field1: \"as\",\n field2: 23,\n field3: [\"sa\", \"re\"],\n field4: [{\n as: \"as\",\n ASA: \"as\"\n }]\n}\n", "text": "so i have a collection schema as followand following is an example of commonfilter data,I am using an aggregation query that searches based dateTime, requestid, and fields1-3 from common filter,", "username": "Sagar_Agrawal" }, { "code": "", "text": "Hi Sagar,\nWere you able to get an answer on this? I have run into a similar problem and am very much interested in understanding how/if you have been able to move forward.Thanks!", "username": "Prasad_Kini" }, { "code": "", "text": "Not yet,Planning to do some experiments this weekend if I don’t get any response from others.", "username": "Sagar_Agrawal" } ]
Can I index a mixed type in Mongo?
2022-12-31T05:38:23.245Z
Can I index a mixed type in Mongo?
2,154
null
[ "node-js", "data-modeling", "connecting", "sharding", "next-js" ]
[ { "code": "", "text": "Hello,I’m designing a multi-tenant database for a SaaS application. Each tenant gets their own NextJS application which is hosted in Vercel. All of these NextJS applications can have up to hundreds of concurrent users. This causes a connection limit problem.How should I design the database so that there won’t be too many simultaneous connections? My initial plan was to use a single ServerlessCluster in which each tenant would get their own database containing relevant data coming mainly from the NextJS application’s users. The problem with this approach seems to be that the ServerlessCluster has a limit of 500 simultaneous connections, which basically means that there cannot be even 500 concurrent users interacting with the NextJS applications. I don’t expect more than 20 tenants, which means that the peak connections should not get over 4000 as 500 connections per tenant concurrently is likely to be enough.I’ve tried to look into connection pooling, but I haven’t figured a way to make it work with many different databases. So, in short: how can this connection limit problem be solved?", "username": "7be1cbd7e8b42024c9c2ca2990fc7cb" }, { "code": "", "text": "@7be1cbd7e8b42024c9c2ca2990fc7cb\nI have the almost same issue that you faced here. can you help me with how did you overpass the problem?\nI found some pool solutions on the internet but I am not satisfied with that solutions.", "username": "oguzhan_atasever" }, { "code": "", "text": "The best solution I could find is to use a separate Express server hosted in Heroku for handling all database interactions. This way the serverless functions don’t create new connection pools themselves, and the server has a single connection pool per tenant database. In case of a large amount of tenants, it is then possible to manually disconnect and reconnect", "username": "7be1cbd7e8b42024c9c2ca2990fc7cb" } ]
MongoDB Atlas Multi-Tenant Architecture Without Exceeding Connection Limit
2022-12-27T19:46:34.187Z
MongoDB Atlas Multi-Tenant Architecture Without Exceeding Connection Limit
2,274
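The last reply in the thread above describes routing all database access through one long-running server. A minimal sketch of that idea in Node.js: one lazily created MongoClient (and therefore one connection pool) per tenant, reused across requests. The environment-variable name, pool size, and database-name convention are assumptions.

```javascript
const { MongoClient } = require("mongodb");

// One client (and connection pool) per tenant, created lazily and reused.
const clients = new Map();

function getTenantClient(tenantId) {
  if (!clients.has(tenantId)) {
    // Hypothetical URI; cap the pool so ~20 tenants stay well under the limit.
    const client = new MongoClient(process.env.MONGODB_URI, { maxPoolSize: 20 });
    // Store the connect() promise so concurrent callers share the same client.
    clients.set(tenantId, client.connect());
  }
  return clients.get(tenantId);
}

async function getTenantDb(tenantId) {
  const client = await getTenantClient(tenantId);
  // Each tenant gets its own database on the shared cluster (assumed naming).
  return client.db(`tenant_${tenantId}`);
}
```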
null
[]
[ { "code": "{\n \"id\": 1,\n \"foo\": {\n \"id\": 11,\n \"barList\": [\n {\n \"id\": 111,\n \"name\": \"Nested Object\",\n \"otherProps\": \"Continue here\"\n },\n {\n \"id\": 112,\n \"name\": \"Nested Object\",\n \"otherProps\": \"Continue here\"\n },\n {\n \"id\": 113,\n \"name\": \"Nested Object\",\n \"otherProps\": \"Continue here\"\n }\n ]\n }\n}\nbarListupsertBar(docId, newBar)newBar{\n \"id\": 113,\n \"name\": \"Nested Object Modified\",\n}\n{\n \"id\": 1,\n \"foo\": {\n \"id\": 11,\n \"barList\": [\n {\n \"id\": 111,\n \"name\": \"Nested Object\",\n \"otherProps\": \"Continue here\"\n },\n {\n \"id\": 112,\n \"name\": \"Nested Object\",\n \"otherProps\": \"Continue here\"\n },\n {\n \"id\": 113,\n \"name\": \"Nested Object Modified\",\n \"otherProps\": \"Continue here\"\n }\n ]\n }\n}\nnewBar{\n \"id\": 114,\n \"name\": \"New Fourth Bar\",\n \"otherProps\": \"Continue here\"\n}\n{\n \"id\": 1,\n \"foo\": {\n \"id\": 11,\n \"barList\": [\n {\n \"id\": 111,\n \"name\": \"Nested Object\",\n \"otherProps\": \"Continue here\"\n },\n {\n \"id\": 112,\n \"name\": \"Nested Object\",\n \"otherProps\": \"Continue here\"\n },\n {\n \"id\": 113,\n \"name\": \"Nested Object\",\n \"otherProps\": \"Continue here\"\n },\n {\n \"id\": 114,\n \"name\": \"New Fourth Bar\",\n \"otherProps\": \"Continue here\"\n }\n ]\n }\n}\n", "text": "I have a collection, where each document roughly takes the following pattern:If possible, in a single operation, I’d like to do an upsert on the array barList. For example,\nif I have a function upsertBar(docId, newBar), here is how I’d like it to be have.Given newBar as:I’d like the collection to be modified as:Given newBar asI’d like the collection to return:How would I do this with MongoDb?", "username": "Brian_Sump" }, { "code": "", "text": "Hello @Brian_Sump, Welcome to the MongoDB community forum,Refer to this similar topic,You can also refer to this answer as well,", "username": "turivishal" }, { "code": "", "text": "Thanks - examining.This is unexpectedly complex for an upsert operation…", "username": "Brian_Sump" }, { "code": "db.collection.update(\n { _id: _id },\n [{\n $set: {\n myarray: {\n $cond: [\n { $in: [updateDoc.userId, \"$myarray.userId\"] },\n {\n $map: {\n input: \"$myarray\",\n in: {\n $mergeObjects: [\n \"$$this\",\n {\n $cond: [\n { $eq: [\"$$this.userId\", updateDoc.userId] },\n uodateDoc,\n {}\n ]\n }\n ]\n }\n }\n },\n { $concatArrays: [\"$myarray\", [uodateDoc]] }\n ]\n }\n }\n }]\n)\n", "text": "Looking at this solution from the Stack Overflow Example:Is this performant? Am I rewriting the entire array every time?", "username": "Brian_Sump" }, { "code": "", "text": "Am I rewriting the entire array every time?Yes, you are, and i think there is no better option to do this operation.", "username": "turivishal" }, { "code": "", "text": "In addition, documents are completely rewritten to permanent storage when updated, so rewriting the entire array is not a major overhead. Unless you are implementing the massive array anti-pattern.", "username": "steevej" }, { "code": "", "text": "Thanks - good to know. In my use case, it’s an array of small objects (3 or 4 attributes with short strings), though there could be around 50 - 100 elements. I’m presuming that this is less than “massive”, and access patterns are such that it really fits in this collection.", "username": "Brian_Sump" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upserting Nested Document
2022-12-29T12:47:17.055Z
Upserting Nested Document
2,628
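Adapting the pattern from the linked answers to the `foo.barList` shape in this thread, here is a hedged `upsertBar` sketch using an update pipeline (MongoDB 4.2+). It assumes `newBar` always carries an `id` and that `foo.barList` already exists on the document.

```javascript
async function upsertBar(collection, docId, newBar) {
  return collection.updateOne({ id: docId }, [
    {
      $set: {
        "foo.barList": {
          $cond: [
            // Is there already an element with this id?
            { $in: [newBar.id, "$foo.barList.id"] },
            // Yes: merge newBar into the matching element, keep the rest as-is.
            {
              $map: {
                input: "$foo.barList",
                in: {
                  $mergeObjects: [
                    "$$this",
                    { $cond: [{ $eq: ["$$this.id", newBar.id] }, newBar, {}] },
                  ],
                },
              },
            },
            // No: append newBar to the array.
            { $concatArrays: ["$foo.barList", [newBar]] },
          ],
        },
      },
    },
  ]);
}
```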
null
[ "crud" ]
[ { "code": "updateOneupdateOneupdateMany`Widget.updateOne({\n where: {_id: widget.id},\n update: {\n {gizmo.color: red WHERE gizmoId: gizmo[0].id},\n {gizmo.color: blue WHERE gizmoId: gizmo[1].id}\n }\n})`\n", "text": "Use case:\nA Widget has many Gizmos. I’m using the Embedded Document pattern.I need to update (potentially) each Gizmo. Can I do this with one updateOne? I do understand that I need updateOne instead of updateMany since I’m working within ONE document in a collection.I’m thinking that this won’t be possible because the query selector would need to specify which embedded documents to update.Essentially what I’m looking for is something like:", "username": "Michael_Jay2" }, { "code": "{ _id: ObjectId(\"639dd1cf0305687fe1a600b8\"),\n gizmos: [ { id: 1, color: 2 }, { id: 3, color: 4 } ] }\nfilter1 = { \"filter1.id\" : 1 }\nfilter3 = { \"filter3.id\" : 3 }\nupdate1 = { 'gizmos.$[filter1].color': red }\nupdate3 = { 'gizmos.$[filter3].color': blue }\nc.updateOne( {} ,\n { \"$set\" : { ...update1 , ...update2 } } ,\n { arrayFilters : [ filter1 , filter3 ] } )\n", "text": "This has been in my bookmarks for a long time.The following seems to work.Starting with the collection:You may update using multiple arrayFilters such as:", "username": "steevej" } ]
How to update multiple embedded documents in an array
2022-12-15T16:52:45.830Z
How to update multiple embedded documents in an array
1,338
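The same arrayFilters idea written out as a single runnable call, with the colour values and the parent-document filter as placeholders. Each `$[identifier]` in the update path is matched against its own entry in `arrayFilters`, and each gizmo id is assumed to appear at most once.

```javascript
await db.collection("widgets").updateOne(
  { _id: widgetId }, // hypothetical filter for the parent document
  {
    $set: {
      "gizmos.$[g1].color": "red",
      "gizmos.$[g2].color": "blue",
    },
  },
  {
    // Each $[...] identifier above is resolved against its matching filter here.
    arrayFilters: [{ "g1.id": 1 }, { "g2.id": 3 }],
  }
);
```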
null
[ "node-js", "transactions", "field-encryption", "storage" ]
[ { "code": "\"t\":{\"$date\":\"2022-12-24T23:10:01.743+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.743+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.743+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.744+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.747+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.747+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.747+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.747+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.747+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":56021,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"C27630\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.747+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.3\",\"gitVersion\":\"f803681c3ae19817d31958965850193de067c516\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.747+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.747+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.748+00:00\"},\"s\":\"I\", 
\"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.748+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-12-24T23:10:01.748+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=31590M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.031+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":283}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.031+00:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.054+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. 
Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.054+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":5123300, \"ctx\":\"initandlisten\",\"msg\":\"vm.max_map_count is too low\",\"attr\":{\"currentValue\":65530,\"recommendedMinimum\":102400,\"maxConns\":51200},\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.056+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.056+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.056+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.063+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.063+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/var/lib/mongodb/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.068+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.068+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.069+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.070+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.070+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.683+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:42540\",\"uuid\":\"4d794336-e9ca-4ab1-8716-a8362b7326ff\",\"connectionId\":1,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.684+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1\",\"msg\":\"client 
metadata\",\"attr\":{\"remote\":\"127.0.0.1:42540\",\"client\":\"conn1\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.6.4\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.4.0-135-generic\"},\"platform\":\"'Node.js v10.24.1, LE (legacy)\"}}}\n{\"t\":{\"$date\":\"2022-12-24T23:10:02.690+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn1\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"xminder\",\"authenticationDatabase\":\"notitia\",\"remote\":\"127.0.0.1:42540\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.716+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23377, \"ctx\":\"SignalHandler\",\"msg\":\"Received signal\",\"attr\":{\"signal\":15,\"error\":\"Terminated\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.716+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23378, \"ctx\":\"SignalHandler\",\"msg\":\"Signal was sent by kill(2)\",\"attr\":{\"pid\":1,\"uid\":0}}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.716+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23381, \"ctx\":\"SignalHandler\",\"msg\":\"will terminate after current cmd ends\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.716+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"SignalHandler\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.716+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"SignalHandler\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.716+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.716+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.716+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.716+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784903, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the LogicalSessionCache\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"SignalHandler\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23017, \"ctx\":\"listener\",\"msg\":\"removing socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784908, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, 
\"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"SignalHandler\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"SignalHandler\",\"msg\":\"Killing all operations for shutdown\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"SignalHandler\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all open transactions\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.717+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20609, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684100, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down TTL collection monitor thread\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684101, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down TTL collection monitor thread\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the Change 
Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784930, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the storage engine\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.718+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.719+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"SignalHandler\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.719+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22261, \"ctx\":\"SignalHandler\",\"msg\":\"Timestamp monitor shutting down\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.719+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.719+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.719+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.719+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"SignalHandler\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.745+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":26}}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.745+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"SignalHandler\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.745+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"SignalHandler\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.745+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.756+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"SignalHandler\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-12-24T23:20:01.757+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\n", "text": "Hi, my mongodb is continuously getting shutdown, and restarting automatically. While going through the logs only thing which I came across was that it received signal 15 and got terminated. I am not sure of the root cause. If there is something which I am missing please do let me know. 
Below are the logs…", "username": "Syed_Ahsan_Hasan_Khan" }, { "code": "Signal 15SIGTERMmongodhow to change systemd service timeout value", "text": "Hello @Syed_Ahsan_Hasan_Khan ,“Received signal”,“attr”:{“signal”:15,“error”:“Terminated”}}As discussed in this thread. Signal 15 also known as SIGTERM is sent to terminate a program, and is relatively normal behaviour. This indicates system has delivered a SIGTERM to the process. This is usually at the request of some other process but could also be sent by your process to itself.Without knowing any details about your deployment, one avenue you can investigate is checking the default service timeout, maybe it is set lower than the startup time required due to unclean shutdown of your mongod process. You might need a larger timeout similar to this discussion. Also you can refer to this thread as this might have some helpful pointers on how to change systemd service timeout value .Hope this helps! Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb Receiving Signal 15 and Restarting
2022-12-24T23:23:41.043Z
Mongodb Receiving Signal 15 and Restarting
5,434
null
[ "aggregation", "queries", "node-js" ]
[ { "code": " \"_id\": \"63b14c1852f8a7e6c15efd06\",\n \"productName\": \"xxxxxxxxxx\",\n \"playerID\": \"xxxxxxxxxxxxx\",\n \"requestEmail\": \"[email protected]\",\n \"timestampDetails\": {\n \"dayName\": \"Sun\",\n \"day\": \"01\",\n \"monthName\": \"Jan\",\n \"year\": \"2023\"\n },\n \"date\": \"Sun, 01 Jan 2023 09:02:14 GMT\",\n \"orderId\": 9,\n \"paid\": true,\n \"status\": \"পরিশোধ করা হয়েছে\",\n \"status\": \"Paid\",\n \"createdAt\": \"2023-01-01T09:02:16.265Z\",\n \"updatedAt\": \"2023-01-01T09:04:58.334Z\",\n \"__v\": 0\n },\n", "text": "Hello,\nI have some data in DB like this:\n{now I need to find data using month name and year under timestampDetails.\nI tried to find those data like this:\n<> const result = await OrderSchema.aggregate({\ntimestampDetails: {\nmonthName: “Jan”,\nyear: “2023”,}\n,status: “Paid”, }).sort({createdAt: -1,}); </>but it doesn’t work. I’m not expert in mongodb/mongosse. Could you someone help me?", "username": "Saiful_Islam_Shakil" }, { "code": "timestampDetails: {\n monthName: \"Jan\",\n year: \"2023\" }\n{\n monthName: \"Jan\",\n year: \"2023\" }\n\"timestampDetails.monthName\" : \"Jan\" ,\n\"timestampDetails.year\" : \"2023\" ,\n\"status\" , \"Paid\"\n", "text": "When you writeyou ask for the field timestampDetails to be equal to the objectIn, JS and MongoDB, objects are equals when they have the same fields and the same values in the same order. In your timestampDetails you have extra fields that are not specified in your query so the objects are not equal. To do what you want you need to use the dot notation to specify a field to field query such as:", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trying to find data using a condition that is not at the root level
2023-01-01T10:07:24.856Z
Trying to find data using a condition that is not at the root level
535
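Putting the dot-notation fields back into the original call: a plain `find()` is enough here, so a hedged sketch against the same Mongoose model used in the question.

```javascript
// Match on nested fields with dot notation instead of whole-object equality.
const result = await OrderSchema.find({
  "timestampDetails.monthName": "Jan",
  "timestampDetails.year": "2023",
  status: "Paid",
}).sort({ createdAt: -1 });
```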
https://www.mongodb.com/…_2_1024x811.jpeg
[]
[ { "code": "", "text": "Hi,My proof of completion for M001 and M121 is missing in the new MongoDB University. Please see the screenshots below for your reference. Kindly help in resolving this issue at the earliest.\nM001_proof_of_completion1135×900 49.6 KB\n", "username": "Gaurav_Sahu1" }, { "code": "", "text": "What do you see under my completed courses?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hey @Gaurav_Sahu1,Welcome to the MongoDB Community Forums! You should be able to see your Completion Certificates under the Proof of Completion tab in My Dashboard in the new LMS.Kindly let us know if you are still facing some issues accessing your completion proofs. Please feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "I see only one proof of completion (M201: MongoDB Performance). However, I’ve completed 3 courses (M001, M121 and M201)\nimage3548×1226 201 KB\n", "username": "Gaurav_Sahu1" }, { "code": "", "text": "Hey @Gaurav_Sahu1,I checked and found that you completed M001 and M121 recently this month. You should be able to see your proofs of completion post Dec 1, when we fully migrate to the new LMS. In the meanwhile, you can access your course completion certificates from the old University platform. Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "Hi @Satyam ,I can say the old \"proof of completion\"s can be seen under the new Learner Dashboard (generic link) (at least in my account, it shows)I don’t know how it will be after full migration, but I hope that also includes having a link to them under the course pages themselves.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thank you @Satyam. I’ll wait for the migration to complete and check again later.", "username": "Gaurav_Sahu1" }, { "code": "", "text": "@Satyam I hope the migration would have been completed by now. However, I still do not see all 3 Proof of Completions here. Also, the ‘Completed’ tab shows only one course (M201) instead of 3.\n\nimage3582×1706 272 KB\n", "username": "Gaurav_Sahu1" }, { "code": "", "text": "Hey @Gaurav_Sahu1,I have raised this issue with the concerned team. Will update you once I hear back from them. Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "Hi @Satyam\nAny update on this thread? Wondering when would this issue be resolved. It’s been almost a month since this issue was first reported.", "username": "Gaurav_Sahu1" }, { "code": "", "text": "Hey @Gaurav_Sahu1,You should have received an email from one of our team members ([email protected]). Kindly respond to that mail if the issue still persists.Please feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "Thanks @Satyam. The issue is now resolved.", "username": "Gaurav_Sahu1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Missing Proof of Completion in new university
2022-11-27T13:44:44.560Z
Missing Proof of Completion in new university
4,060
null
[ "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "try{\n\n await mongoose.connect(db, {\n\n useNewUrlParser: true,\n\n });\n\n console.log(\"Mongo DB is connected\")\n\n}catch(err){\n\n console.error(err.message);\n\n process.exit(1);\n\n}\n\n", "text": "Here is the Error…> querySrv ECONNREFUSED _mongodb._tcp.nutritionalera.g9ughbo.mongodb.netHere is the connection code:`const mongoose = require(‘mongoose’);const config = require(‘config’);const db = config.get(‘mongoURI’);// mongoose.connect(db);const connectDB = async () => {}module.exports = connectDB;`“mongoURI”:“mongodb+srv://:nutritionalera.g9ughbo.mongodb.net/?retryWrites=true&w=majority”,", "username": "Ali_Hassan4" }, { "code": "", "text": "See the possible answer here, and the link therein.It asks you to go to Atlas and get the long connection string. Seems to work as people can normally connect afterwards (also indicated here Atlas Troubleshooting guide)If that doesn’t work, continue with this suggestions.", "username": "santimir" } ]
Error in Database Connection
2023-01-01T07:30:37.750Z
Error in Database Connection
1,370
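The linked workaround boils down to using the long, pre-3.6 (non-SRV) connection string from the Atlas UI when DNS SRV lookups fail. A hedged sketch with entirely placeholder hosts, replica-set name, and credentials; copy the real values from the Atlas connect dialog.

```javascript
// All hosts, the replica-set name, and the credentials below are placeholders.
// No SRV (DNS) resolution is needed for this connection-string form.
const uri =
  "mongodb://user:password@" +
  "host1.example.mongodb.net:27017," +
  "host2.example.mongodb.net:27017," +
  "host3.example.mongodb.net:27017" +
  "/?ssl=true&replicaSet=atlas-xxxx-shard-0&authSource=admin&retryWrites=true&w=majority";

await mongoose.connect(uri, { useNewUrlParser: true });
```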
null
[]
[ { "code": "{\n \"ok\": 0,\n \"errmsg\": \"cannot find user account after reload\",\n \"code\": 8000,\n \"codeName\": \"AtlasError\"\n}\n", "text": "Hello.\nI’m getting this error when accessing Mongo Atlas from my Spring Boot app:I couldn’t find any topic about this exact error message. Does anybody know, what does it mean?", "username": "Tomas_Laubr" }, { "code": "", "text": "Hi Tomáš, did you figure this out? Spring Boot is widely used with MongoDB Atlas: I wonder if you’re using a version of the driver that’s potentially older and doesn’t support MongoDB Atlas’ short SRV connection string? (If so you can get the long legacy connection string in the Atlas UI). However this may be a red herring, inspired by a similar issue reported in java - Command failed with error 8000 (AtlasError) when try to insert data into collection on Atlast server - Stack Overflow", "username": "Andrew_Davidson" }, { "code": "", "text": "Hi.\nThank you.\nNo, the link you post seems to be a different error. Before, our application has been running about a month without any error. Now, after restart, the error also disappeared. However I have filled a bug and waiting for an answer. See Connection to MongoDB Atlas lost, error \"cannot find user account after reload\" · Issue #3584 · spring-projects/spring-data-mongodb · GitHub", "username": "Tomas_Laubr" }, { "code": "", "text": "Its basically You need to go to Database Access Priviledges —> build in roles and select the Select one [built-in role] - Admin,read and write thats it.", "username": "Nirupam_Barman" } ]
MongoDB Atlas returns error "cannot find user account after reload"
2021-03-09T11:22:05.001Z
MongoDB Atlas returns error “cannot find user account after reload”
3,067
null
[ "queries", "app-services-data-access" ]
[ { "code": "db.collection('todos').find({userId:'123')}", "text": "Hi how do you set the user context to execute a database call as a specific user as my current rules are preventing the documents i’m querying for from being return?I’m using realm sdk in cloudflare worker runtime environment using API key authenication.the rule i have for the collection is a simple owner role rule : { “userId”: “%%user.id”}when i try to do a db.collection('todos').find({userId:'123')} the result is empty.", "username": "clueless_dev" }, { "code": "", "text": "Solved my issue by adding another rule that allowed reading the collection for the given API key. Took solution from Define rule with API Key", "username": "clueless_dev" } ]
How to set user context to execute database function as specific user on server side
2022-12-29T01:35:31.082Z
How to set user context to execute database function as specific user on server side
1,790
null
[ "transactions", "database-tools", "migration" ]
[ { "code": "mongoimport --db=crypto --collection=t --type=csv \\\n --columnsHaveTypes \\\n --fields=\"timestamp.date(), transaction_type.string(), token.string(), amount.double()\" \\\n --file=\"text.csv\"\nFailed: type coercion failure in document #1 for column 'timestamp', could not parse token '1571967208' to type date\n", "text": "Not able to migrate data from csv file include unix timestampcsv sample ;\ntimestamp,transaction_type,token,amount\n1571967208,DEPOSIT,BTC,0.298660\n1571967200,DEPOSIT,ETH,0.683640\n1571967189,WITHDRAWAL,ETH,0.493839error output", "username": "Shafa_vp" }, { "code": "date_ms(yyyy-MM-dd H:mm:ss)awk -F, '{OFS=\",\" ;$1=strftime(\"%Y-%m-%d %H:%M:%S\", $1); print $0}' import.csv | \\\nmongoimport --db=database --collection=collection --type csv --drop --columnsHaveTypes \\\n--fields=\"timestamp.date_ms(yyyy-MM-dd H:mm:ss), transaction_type.string(), token.string(), amount.double()\"\n", "text": "It does not appear the the unix timestamp is supported with csv mongoimport.Assuming you don’t have control over the csv export process you can preprocess the file before or during import.Here awk is being used to process the the first field to a timestamp. The mongoimport field is updated to date_ms(yyyy-MM-dd H:mm:ss) and will consume the output of awk.You may need to use a different awk program(depending on version your have) or another program entirely.", "username": "chris" } ]
Not able to migrate data from a CSV file that includes Unix timestamps
2022-12-31T07:50:27.869Z
Not able to migrate data from a CSV file that includes Unix timestamps
1,782
null
[ "aggregation", "queries", "data-modeling", "indexes" ]
[ { "code": " \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 16,\n \"executionTimeMillis\": 552,\n \"totalKeysExamined\": 12114,\n \"totalDocsExamined\": 2045,\ntotalKeysExamined / nReturned = 757\ntotalDocsExamined / nReturned = 128\n", "text": "Here is an example of a query run.My Database has a collection with 100K documents totalling 300MB and 3 compound indexes totalling 200mb.Are these numbers scalable?\nIf this query represents the most intensive and common query of my APP, how far can it scale?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "", "text": "Are you sure you aren’t wasting more time asking the same question again and again and again…than you would dedicating a few days to make a good analysis of the situation yourself, and finally asking a single, detailed, and useful question?Just a suggestion.", "username": "santimir" } ]
What range of doc scanned to doc returned ratio will be scalable?
2022-12-31T04:52:57.508Z
What range of doc scanned to doc returned ratio will be scalable?
1,004
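For anyone trying to reproduce or monitor numbers like the ones quoted above, a short sketch of how they are typically obtained in mongosh: run the query with executionStats and compute the two ratios. The collection name and query are placeholders.

```javascript
// mongosh: run the query with execution statistics and compute the ratios.
const stats = db.mycollection
  .find({ /* the app's most common query goes here */ })
  .explain("executionStats").executionStats;

const keysPerReturned = stats.totalKeysExamined / stats.nReturned;
const docsPerReturned = stats.totalDocsExamined / stats.nReturned;
print(`keys/returned: ${keysPerReturned}, docs/returned: ${docsPerReturned}`);
```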
null
[ "storage" ]
[ { "code": "> db.serverStatus().wiredTiger.cache[\"maximum bytes configured\"]/1024/1024/1024storage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n wiredTiger:\n engineConfig:\n configString: cache_size=600M\n", "text": "Hello ,I am trying to set the mongo cache size via mongoshell but the setting is not persisting across restart.\[email protected]#mongo -u admin -p xxx\nMongoDB shell version v4.4.13\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nMongoDB server version: 4.4.13\n…db.serverStatus().wiredTiger.cache[“maximum bytes configured”]/1024/1024/1024\n7.2060546875\ndb.adminCommand( { “setParameter”: 1, “wiredTigerEngineRuntimeConfig”: “cache_size=512M”})\n{ “was” : “”, “ok” : 1 }\ndb.serverStatus().wiredTiger.cache[“maximum bytes configured”]/1024/1024/1024\n0.5\nexit\nbye\[email protected]#systemctl restart mongod\[email protected]#mongo -u admin -p xxx\n…\n> db.serverStatus().wiredTiger.cache[\"maximum bytes configured\"]/1024/1024/1024\n7.2060546875BTW, I tried this setting via /etc/mongod.conf but restart is not working due to some error? I also need help with this syntax\nI have this snippet:I also tried several other variants (above) but no luck\nthanks", "username": "Sriram_Bhamidipati" }, { "code": "", "text": "What error are you getting with the config file?\nIndentation should be correct for YAML\nUse space bar not tab after colon(\":\") while inserting values", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi Ram,\nthanks for your quick reply. There is no TAB anywhere in the config file\nsystemctl restart mongod\nJob for mongod.service failed because the control process exited with error code. See “systemctl status mongod.service” and “journalctl -xe” for details.This is journalctl -xe output––\n– The result is dependency.\nDec 30 21:24:01 xxxx.lan systemd[1]: Job [email protected]/start failed with result ‘dependency’.\nDec 30 21:24:01 xxxx.lan systemd[1]: Unit mongod.service entered failed state.\nDec 30 21:24:01 xxxx.lan systemd[1]: mongod.service failed.\nDec 30 21:24:01 xxxx.lan polkitd[12362]: Unregistered Authentication Agent for unix-process:12331:173162 (system bus name :1.154, object path /org/frDo you need any thing else to help me figure out the issue?\nthanks\nSriram", "username": "Sriram_Bhamidipati" }, { "code": "storage:\n wiredTiger:\n engineConfig: \n cacheSizeGB: 0.6\n", "text": "The correct configuration would be:", "username": "chris" }, { "code": "db.adminCommand( { “setParameter”: 1, “wiredTigerEngineRuntimeConfig”: “cache_size=512M”})\n{ “was” : “”, “ok” : 1 }\n", "text": "Thanks, Chris! I tried this as well (earlier) but mongo wouldnt start unless I comment out these lines\nDoes indenting play a role? Are there any other ways (aparf from config file) ? I tried setting in mongoshell (see below)but its not persistent when service restarts", "username": "Sriram_Bhamidipati" }, { "code": "", "text": "Yes indenting is important, the configuration file is yaml. Also be aware that the parameter is cacheSizeGB not MB. Its possible in your config some of these sections exist, in that case nest the required options appropriately.It can be configured on the command line, but you’d have to update the systemd unit file. Really this works in the configuration file.but mongo wouldnt start unless I comment out these linesLooking into the error returned and correcting that will be a better path to resolution.", "username": "chris" } ]
wiredTiger cacheSize setting not persistent across restart
2022-12-30T04:24:30.864Z
wiredTiger cacheSize setting not persistent across restart
2,606
null
[ "node-js", "java" ]
[ { "code": "", "text": "Hi, I am super green and a nubie. So please be kind. I am having issues connecting my Mongodb. I am studying Java from a great teacher, however when I follow his direction I am getting stuck with the following error message in my command promp:Node.js v18.12.1\n[nodemon] app crashed - waiting for file changes before\nstarting…Any help is greatly appreciated. I cannot move onto m next lesson. SO I am stuck Please help", "username": "Graeme_Cohen" }, { "code": "", "text": "Is there an associated error?\nHave you sourced your env file\nIs your uri correct\nCan you connect to your db using shell?", "username": "Ramachandra_Tummala" } ]
Connecting MongoDB Issues
2022-12-31T01:55:10.444Z
Connecting MongoDB Issues
1,170
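Ramachandra's checklist above, turned into a minimal standalone connection test that prints the underlying error instead of letting nodemon hide it. The environment-variable name and the use of dotenv are assumptions.

```javascript
// test-connection.js: run with `node test-connection.js`
require("dotenv").config(); // loads MONGODB_URI from a .env file (assumed name)
const { MongoClient } = require("mongodb");

async function main() {
  const uri = process.env.MONGODB_URI;
  if (!uri) throw new Error("MONGODB_URI is not set; check your .env file");
  const client = new MongoClient(uri);
  await client.connect();
  await client.db("admin").command({ ping: 1 });
  console.log("Connected successfully");
  await client.close();
}

main().catch((err) => {
  // Print the full error so the real cause (DNS, auth, firewall) is visible.
  console.error(err);
  process.exit(1);
});
```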
null
[]
[ { "code": "", "text": "Hi,Where are the other parts of Tutorial: Build a Movie Search Application Using Atlas Search?Especially Part 2:Make it even easier for our users by building more advanced search queries with fuzzy matching and wildcard paths to forgive them for fat fingers and misspellings. We’ll introduce custom score modifiers to allow us to influence our movie results.", "username": "MBee" }, { "code": "", "text": "Hi MBee! Thank you for the interest in the tutorial. I am updating it to be hosted on a website this month, but in the meantime, you should check out this video: MongoDB Atlas Search: The Restaurant Finder Demo App - YouTube which explains the features, how to add them to your search stage, as well as point you to an interactive website where you can see the code being built out as you interact with the UI: www.atlassearchrestaurants.com.", "username": "Karen_Huaulme" }, { "code": "", "text": "Super. Thanks. Will do.", "username": "MBee" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Where are other parts of Tutorial: Build a Movie Search Application Using Atlas Search
2022-12-27T22:19:57.778Z
Where are other parts of Tutorial: Build a Movie Search Application Using Atlas Search
1,174
null
[ "replication", "containers" ]
[ { "code": "version: '3.1'\n\nservices:\n\n mongo:\n image: mongo\n restart: always\n entrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\n ports:\n - 27017:27017\n environment:\n MONGO_INITDB_ROOT_USERNAME: admin\n MONGO_INITDB_ROOT_PASSWORD: ridiculouslydifficulypassword\n volumes:\n - type: bind\n source: ./data\n target: /data/db%\nmongodb://admin: ridiculouslydifficulypassword@PRIMARY_IP:27027/?authMechanism=DEFAULTbind", "text": "In a DEV environment I have a replica set with 3 nodes, 2 are residing at the same external IP, 1 is physically separate (and on another IP).My docker-compose file for all three (with minor, unrelated variations):All 3 instances share this INITDB_ROOT USERNAME/PASSWORD.Connection string to PRIMARY:\nmongodb://admin: ridiculouslydifficulypassword@PRIMARY_IP:27027/?authMechanism=DEFAULTNow, I got the “READ_ME” blablablabla db + document stating my data was captured, however it wasn’t deleted?My questions:", "username": "Sander_de_Ruiter" }, { "code": "--authcommand:bind--auth", "text": "In general you have misused the container image.Fatally you have overridden the entrypoint and have not specificed --auth. The entrypoint is where the environment variables are used to set the ROOT username and password and more importantly, detect they are configured and enable authentication.So essentially the mongo container is running without authentication.https://hub.docker.com/_/mongo has fairly good instructions on how to use the container image correctly. Most additional options should be passed as values to the command: key in the compose-file.I get that --bind-ip-all is bad, although I still don’t understand what bind means? If I want to restrict access to Mongo by limiting the IP address that can connect, is this what bind is for?This is actually the default for the container image.Restricting access by IP address should be done via your firewall/security groups.AFAIK I have a user account with long enough password to not be cracked. How come an attacker is still able to create a db in my instance?Addressed above, you’re running without authorization enabled because the entrypoint was overridden and --auth was not specified in the replacement.", "username": "chris" }, { "code": "entrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--auth\", \"--replSet\", \"rs0\" ]\n", "text": "Thank you for this response. My takeaway from your message and reading the information on the docker page, is that I should (at least) change the entry point line to:And this would enable authentication (hopefully with the given root username/password combo?", "username": "Sander_de_Ruiter" }, { "code": "command", "text": "No. That is the wrong takeaway.I’d still put the options on the command section. The container entrypoint is very good and it is is how the container is designed to be used.However you will still be missing a couple of things that should be on any installation. Cluster auth (keyfile or x509) will be needed for replicaset members to connect to each other when auth is enabled. And TLS should be enabled.There are additional steps to take to have a correctly configured mongodb:\nProduction Notes\nOperations ChecklistYou can also upskill at MongoDB UniversityMuch of this can be avoided by using MongoDB Atlas.", "username": "chris" }, { "code": "", "text": "Ok, thank you. I’ll be reading up on MongoDB university. 
On Atlas, I get that, but the pricing just isn't there for the project I'm working on, with at least 50-75 GB of data, so I'm stuck using a local version.", "username": "Sander_de_Ruiter" } ]
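Following the advice in this thread, a minimal compose sketch of the corrected setup: the image's default entrypoint is left in place so the MONGO_INITDB_ROOT_* variables take effect and authentication is enabled, and the extra mongod flags move to the command key. The keyfile path and replica set name are placeholders; the keyfile must be identical on every member and readable only by the mongod user (e.g. mode 400), which may need an ownership tweak when bind-mounting.

version: '3.1'
services:
  mongo:
    image: mongo
    restart: always
    # No custom entrypoint: the stock entrypoint enables auth when the
    # root username/password variables are present.
    command: ["--replSet", "rs0", "--keyFile", "/etc/mongo-keyfile"]
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: ridiculouslydifficulypassword
    volumes:
      - ./data:/data/db
      - ./mongo-keyfile:/etc/mongo-keyfile:ro   # same keyfile on all three members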
Adequate security, got hit on DEV box with ransom
2022-12-30T11:22:50.264Z
Adequate security, got hit on DEV box with ransom
1,682
null
[ "aggregation", "queries", "node-js" ]
[ { "code": " .find(\n { _id: objectId },\n {\n limit: 1,\n projection: { gizmos: 1, ownerId: 1, type: 1, _id: 0 },\n sort: [['gizmos.order', 'asc']],\n },\n );\n", "text": "I thought I had the syntax right for sorting an embedded documents array - but I can see now why what I was trying doesn’t work.As you can see, what I’m hoping for is to sort the document’s embedded Gizmos document array by order. Reading more carefully, it looks like the sort findOption is only for sorting the target documents themselves - not the target documents’ embedded documents array.From the reading I’ve done, it looks like I’m going to need to use the aggregation pipeline. Is that correct? In my dev environment, I’m on the community DB, so I think I may not be able to use aggregation functionality.Any tips? Thanks.Edit to add:\nThe sort property is also optional - in case that matters as far as tips go.", "username": "Michael_Jay2" }, { "code": "", "text": "If running version 5.2 and up you might be able to $sortArray in your projection.", "username": "steevej" }, { "code": "", "text": "Thanks Steeve. I’m going to give it a try. I’m not super hopeful this will work in my dev environment, unless something has changed. The last time I checked, the free tier DB was ineligible for using the aggregation pipeline. But that was a year or more ago so I’m not sure if that’s changed. Will report back.", "username": "Michael_Jay2" } ]
Sorting embedded documents array?
2022-12-29T17:33:55.862Z
Sorting embedded documents array?
1,318
null
[ "kafka-connector" ]
[ { "code": "}\n", "text": "Update or Delete not working with MongodbSink connector… Can you please help… com.mongodb.kafka.connect.MongoSinkConnector.\nIam using below source connector and sink connector…Source connector:\n{\n“name”: “inventory-connector”,\n“config”: {\n“connector.class” : “io.debezium.connector.mongodb.MongoDbConnector”,\n“errors.tolerance”: “all”,\n“errors.log.enable”: “true”,\n“tasks.max” : “1”,\n“topic.prefix” : “dbserver1”,\n“mongodb.hosts” : “rs0/mongodb:27017”,\n“mongodb.user” : “debezium”,\n“mongodb.password” : “dbz”,\n“database.include.list”: “inventory”,\n“database.history.kafka.bootstrap.servers”: “kafka:29092”,\n“database.history.kafka.topic”: “schema-changes.inventory”,\n“collection.include.list”:“inventory.customers”,\n“key.converter.schemas.enable”: false,\n“value.converter.schemas.enable”: false,\n“key.converter”: “org.apache.kafka.connect.json.JsonConverter”,\n“value.converter”: “org.apache.kafka.connect.json.JsonConverter”,\n“tombstones.on.delete”: “false”,\n“transforms”: “route”,\n“transforms.route.type” : “org.apache.kafka.connect.transforms.RegexRouter”,\n“transforms.route.regex” : “([^.]+)\\.([^.]+)\\.([^.]+)”,\n“transforms.route.replacement” : “$3”}Sink connector:\n{\n“name”: “mongodb-sink”,\n“config”: {\n“connection.uri”: “mongodb+srv:///?retryWrites=true&w=majority”,\n“connector.class” : “com.mongodb.kafka.connect.MongoSinkConnector”,\n“errors.tolerance”: “all”,\n“errors.log.enable”: “true”,\n“tasks.max”: “1”,\n“key.ignore”: “true”,\n“database”:“iradev”,\n“collection”:“customers_copy”,\n“topics”:“customers”,\n“key.converter”: “org.apache.kafka.connect.storage.StringConverter”,\n“value.converter”: “org.apache.kafka.connect.json.JsonConverter”,\n“value.converter.schemas.enable”: false,\n“change.data.capture.handler”: “com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbHandler”,\n“mongo.errors.tolerance”: “all”,\n“mongo.errors.log.enable”: “true”,\n“insert.mode”: “insert”,\n“auto.create”: “true”,\n“auto.evolve”: “true”\n}\n}any missing config for MongodbSinkConnector to take the update/delete transactions", "username": "Lakshminarayana_U" }, { "code": "", "text": "We do have a ticket https://jira.mongodb.org/browse/KAFKA-299 for supporting Debezium as a source for update operations. Can you use MongoDB connector as a source for now ?", "username": "Robert_Walters" } ]
Update or delete not working for com.mongodb.kafka.connect.MongoSinkConnector
2022-12-26T14:30:04.760Z
Update or delete not working for com.mongodb.kafka.connect.MongoSinkConnector
2,344
null
[ "aggregation", "queries", "data-modeling", "time-series" ]
[ { "code": "", "text": "I am trying to model data that is both geo-spatial and time-variant. The time-variant part lends itself to a time-series collection. Geo-spatial is less clear, as the data is actually a grid of sample values (at least 1000x1000) that map over a region. The queries performed on the collection fundamentally consist of aggregating the samples both over a polygon (intersect) and over time. I am unsure as to the best way of modelling the grid of samples. The sample values per cell could heavily correlate over time (e.g. weather data).Options considered:Any thoughts/guidance is appreciated.", "username": "Richard_Hannah" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Modelling geospatial grids with time
2022-12-30T15:28:01.448Z
Modelling geospatial grids with time
1,484
null
[ "aggregation" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"6013838305cbb735a07481be\"\n },\n \"schemaVersion\": \"1.0.0\",\n \"releaseType\": \"Stable\",\n \"version\": \"0.6.2\",\n \"createdAt\": \"2021-01-14T21:34:39.838Z\",\n \"updatedAt\": {\n \"$date\": \"2021-09-14T02:59:57.605Z\"\n },\n \"availability\": \"discontinue\",\n \"default\": false\n}\n{\n \"_id\": {\n \"$oid\": \"6068b22556a8375b842af8e4\"\n },\n \"server\": {\n \"licenseValid\": true,\n \"allowedStart\": true,\n \"restorePoint\": {\n \"$date\": \"2022-09-29T19:33:16.889Z\"\n },\n \"licenseKey\": \"DEMO ACCOUNT\",\n \"releaseVersion\": {\n \"$oid\": \"6013838305cbb735a07481be\"\n }\n },\n \n}\n{\n version: \"0.6.2\",\n count: 1\n}\n", "text": "I have two collections, users and versions. Here is a sample of the version collectionHere is a record from the users that references the version collection.I am trying to create an aggregation that the end result looks like:I want to query all the versions and then count how many users are using that version. I have tried all kinds of permutations and just can’t seem to get it. In the old mysql days… this was pretty easy. Any help would be greatly appreciated.", "username": "Brad_Knorr" }, { "code": "$lookup$group", "text": "Did you try anything?From the text, it seems you will need:This wont get just one but all the versions.I think it will benefit you to try and share it. Use https://mongoplayground.net if you like.", "username": "santimir" } ]
How to count $lookup data
2022-12-29T22:34:19.575Z
How to count $lookup data
2,051
null
[ "dot-net" ]
[ { "code": "", "text": "I can not seem (despite lots of searching) to find any help with deleting a key from a Realm dictionary (along with value) with the .NET Realm SDK. What I have managed is to delete the value using realm.Remove(), but it leaves a key with the value as null in the Realm collection. What I am currently doing …\n’\nrealm.Write(() =>\n{\nvar dict = realm.All<my_dict>();\nvar my_val = dict[my_key];\nrealm.Remove(my_val);\n}which leaves the key still in the dictionary, and I would also like the key removed. I feel a little silly, like it is staring me in the face! The closest I could find searching was the $unset keyword, but do not see an equivalent in the .NET Realm SDK.Thanks in advance for helping me with this silly question!Josh", "username": "Josh_Whitehouse" }, { "code": "dict.Remove(my_key)var val = dict[my_key];\nrealm.Remove(my_val);\ndict.Remove(my_key);\n", "text": "You can call dict.Remove(my_key). If the value of the dictionary is an object (as it seems to be in your case), that’ll not delete the object from the Realm, just from the dictionary. If you want to remove it both from the dictionary and the Realm, you can either do something like:or use embedded objects for the dictionary value.", "username": "nirinchev" }, { "code": "", "text": "Thanks for the prompt response, doing dict.Remove(my_key) worked fine. I am using embedded objects, and the embedded object and key were removed from the dictionary. I knew this answer was simple! Thanks again! Josh", "username": "Josh_Whitehouse" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
C# .Net SDK - how to remove a key from a dictionary along with the value
2022-12-29T20:04:43.271Z
C# .Net SDK - how to remove a key from a dictionary along with the value
1,829
null
[]
[ { "code": "(name:'İstanbul') <-- problem big İ\nfilter: { 'Adi' :/is/i} { locale: 'tr', strength: 1 }\n", "text": "I am insert dataresult: not found…\nhow can I solve the problem?", "username": "suleyman_yalcin" }, { "code": "$regexnames_id: ObjectId('6350d8ccb0ec2b79cdd7576c')\nname: \"İstanbul\"\n{\n \"analyzer\": \"lucene.turkish\",\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"name\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"type\": \"autocomplete\"\n }\n ]\n }\n }\n}\ndb.names.aggregate([\n {\n $search: {\n index: 'name_index',\n autocomplete: {\n query: 'is',\n path: 'name'\n }\n }\n}\n ])\nOutput:\n[ { _id: ObjectId(\"6350d8ccb0ec2b79cdd7576c\"), name: 'İstanbul' } ]\n", "text": "Hi @suleyman_yalcin,Welcome to the MongoDB Community forums If you refer to the MongoDB documentation for $regex hereIt states:Case-insensitive regular expression queries generally cannot use indexes effectively. The $regex implementation is not collation-aware and is unable to utilize case-insensitive indexes.So, in this case, I’ll suggest you use MongoDB Atlas Search.For example, consider the following sample data collection called names:You can create an Atlas search index from the MongoDB Atlas dashboard:\nMongoDB Atlas Search1678×536 48.1 KB\nThe index will look something like this in JSON format:After that, you can run the query using $search:It will return the output as follows:For more info refer to the Atlas search documentation hereI hope it helps!Thanks,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "let text = searchText;\n // Replace special characters\n text = text.replace(/[-\\/\\\\^$*+?.()|[\\]{}]/g, '')\n let array = text.split('')\n let newArray = array.map((char: any) => {\n if (char === 'i' || char === 'I' || char === 'ı' || char === 'İ' || char === 'İ') {\n char = '(ı|i|İ|I|İ)'\n return char\n }\n else if (char === 'g' || char === 'G' || char === 'ğ' || char === 'Ğ') {\n char = '(ğ|g|Ğ|G)'\n return char\n }\n else if (char === 'u' || char === 'U' || char === 'ü' || char === 'Ü') {\n char = '(ü|u|Ü|U)'\n return char\n }\n else if (char === 's' || char === 'S' || char === 'ş' || char === 'Ş') {\n char = '(ş|s|Ş|S)'\n return char\n }\n else if (char === 'o' || char === 'O' || char === 'ö' || char === 'Ö') {\n char = '(ö|o|Ö|O)'\n return char\n }\n else if (char === 'c' || char === 'C' || char === 'ç' || char === 'Ç') {\n char = '(ç|c|Ç|C)'\n return char\n }\n else {\n return char\n }\n })\n\n // Array values are joined with no spaces\n text = newArray.join('')\nfilename: new RegExp('(.*)' + text + '(.*)', \"ig\")\nİstanbul/(.*)(ı|i|İ|I|İ)(ş|s|Ş|S)tanb(ü|u|Ü|U)l(.*)/gi", "text": "I solved this problem by tweaking the regex patterns a bit. I hope it helps you tooWhen search İstanbul its converted to /(.*)(ı|i|İ|I|İ)(ş|s|Ş|S)tanb(ü|u|Ü|U)l(.*)/giThis result worked great for me.", "username": "sahinersever" } ]
Mongodb Turkish character set problem
2022-10-13T20:10:44.271Z
Mongodb Turkish character set problem
1,966
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "document constructed by $facet is 104857724 bytes, which exceeds the limit of 104857600 bytes$facet$addFields$lookup$match$addFields$sort$project$addFields$sort$project[\n { $match: my_global_query },\n {\n $facet: {\n outputField1: [\n { $match: my_specific_query },\n { $unwind: { path: \"array\" } },\n { $addFields: {} }, // Miraculous $addField (or $sort or $project)\n {\n $lookup: {\n from: \"collection\",\n localField: \"local_field\",\n foreignField: \"foreign_field\",\n as: \"output\",\n },\n },\n ],\n },\n },\n];\n", "text": "Hello,I’m using MongoDB 4.2 for now and use it with a NodeJs application with Mongoose 6.0.14.I’ve created an aggregate (see below) to retrieve some data and it has a strange behavior. With the base aggregate too many documents are filtered and I receive an error 4031700 (document constructed by $facet is 104857724 bytes, which exceeds the limit of 104857600 bytes) on the $facet stage.\nBut if I add an empty $addFields stage before my $lookup, there is no more error. In fact, I can retrieve about 10 times more documents with the base query filter in the $match stage.\nThis behavior is the same when replacing the $addFields by a $sort or a $project with all fields within documents.So my question is : Do aggregates have a specific behavior with $addFields, $sort or $project concerning the way the data is stored for later stages ?My aggregate :", "username": "Axel_Morvan" }, { "code": "", "text": "Your pipeline seems to be highly redacted so it is hard to make a real assessment of what is going on.It is really hard to imagine that increasing the data would reduce the data of the $facet. May be some detrimental optimization is performed without the empty $addFields. A $project that eliminates data, yes. A $sort may be, as some data might be piped to the next stage faster.Do you get the error if you only $match and $unwind? That is without any other stages in the $facet pipeline.Doing $unwind inside the $facet might be useless and the culprit as it multiplies the data. You do not need to $unwind even if localField refers to the array. The $lookup is smart enough to get all the elements of the array.Having $facet with only 1 field is useless, but again you might have more but the redacted pipeline you shared stops us from doing a real assessment.", "username": "steevej" }, { "code": "[\n { $match: my_global_query },\n {\n $facet: {\n outputField1: [\n { $match: my_specific_query },\n { $unwind: { path: \"array\" } },\n { $addFields: {} }, // Miraculous $addField (or $sort or $project)\n {\n $lookup: {\n from: \"collection\",\n localField: \"local_field\",\n foreignField: \"foreign_field\",\n as: \"output\",\n },\n },\n { $unwind: { path: \"second_array\" } }\n ],\n },\n },\n];\n", "text": "Thank you for response.I understand that not having a lot of details about the aggregate make it hard to understand, but I’m using it in a professional context and so I cannot shared sensitive information, including the schemas. However, I’ve just noticed that I forgot a stage in the upper aggregate (another $unwind) and it should looks like :With only the $match and first $unwind, I have no issue. I think this is the second $unwind which create a lot of data (all of them are unique). The fact is even if I $project all existing fields (so there is theoretically no eliminated data) or I use $sort or $addFields, I have no longer any problem. 
Maybe MongoDB is doing some strange operations to transform the data and changing the way it is stored in memory in one case; what do you think? About the $facet with only one field: I omitted the second one to avoid confusion, as it is similar to the first one (just with a single $unwind). The complete aggregation has hundreds of lines, to avoid computing in JavaScript (maybe not the best idea we had, but we do not have the time to rewrite it for now). So I only focused on the problematic stage.", "username": "Axel_Morvan" } ]
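A hedged sketch of the suggestion above, dropping the first $unwind so the documents are not multiplied before the $lookup; it assumes local_field lives inside the unwound array (adjust the path if not), and keeps the same placeholder names as the redacted pipeline:

[
  { $match: my_global_query },
  {
    $facet: {
      outputField1: [
        { $match: my_specific_query },
        // no $unwind here: $lookup matches every element of the array field itself
        {
          $lookup: {
            from: "collection",
            localField: "array.local_field",
            foreignField: "foreign_field",
            as: "output",
          },
        },
        { $unwind: { path: "$second_array" } },
      ],
    },
  },
];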
Aggregate stage lighter with a $addField before
2022-12-28T10:37:23.351Z
Aggregate stage lighter with a $addField before
1,296
null
[ "aggregation" ]
[ { "code": "", "text": "HelloWhat are the implications of enabling public read access to my charts dashboard and the relevant data sources?Will this give public users access over the entire collection?I would like to expose a charts dashboard to our player base. The chart will have some ‘fun’ stats like the number of matches today or the hourly active player count. (Similar to something like https://steamcharts.com/)But I don’t want to give access to the entire collection publicly. I just want to expose the charts themselves.So I guess my question is… unauthenticated access is enabled does this mean a malicious user could read the data and perform their own aggregation/look ups on our data much like the dashboard can? Or is the ‘unathenticated access’ limited to the queries the charts need to execute to populate?", "username": "Tim_Aksu" }, { "code": "", "text": "Unauthenticated users can only access the data that is visible on the charts. They cannot get access to the raw data in the collection.", "username": "tomhollander" } ]
Public charts and data access
2022-12-30T07:47:49.610Z
Public charts and data access
1,467
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi ,We are planning to create a multi tenant application in node js using mongodb as database .\nWhat will be the database architecture ? we are planning to create databases per tenant. How to connect using moongoose ODM to implement this multi database system in mongodb?\nPlease suggest.Please find the below details for your reference.How many tenants are there - Expecting 200 to 300 tenants within 2 years.Are they all have the same sizes or differ drastically - Differ size.Is there query pattern different or alike - query pattern\nWhat is the expected data size - May be drastically increase based on their data.What is the expected growth over next 2 years - Medium level\nAre all tenants and application are in a single dc or multiple - singleWhat are the security considerations? Can developers see different tenant data? Can tenants see different tenanat data - Developers can see different tenants data but tenants can’t see other tenants data. Data sensitivity and encryption need to add.What MongoDB version are you expect to use - 6.0Thanks\nHemanth", "username": "Developer_Testing" }, { "code": "db.model(modelName).find()\n", "text": "hi @Developer_Testing,\nI have almost same the problem. I found some solutions like this Node.js MongoDB - multi-tenant app by example - DEV Community 👩‍💻👨‍💻 but I don’t want to use\nmodelsI want to use ModelName.find() to better development experience", "username": "oguzhan_atasever" } ]
Multi tenant SAAS application
2022-08-28T06:04:28.047Z
Multi tenant SAAS application
2,268
null
[ "aggregation", "queries" ]
[ { "code": "{\n \"_id\" : ObjectId(\"6399260eaa30a55748d5d\"),\n \"name\": \"name product\",\n \"size\": \"size\",\n\"sku\" : [\n {\n \"ean\" : \"100\",\n \"sku\" : \"12\"\n },\n {\n \"ean\" : \"101\",\n \"sku\" : \"13\"\n }\n ]\n ...\n}\n{\n \"_id\" : ObjectId(\"63756659ee48ff285373939c\"),\n \"sku\" : \"12\", \n \"ff\" : 11,\n **\"id_product\" : ObjectId(\"6399260eaa30a55748d5d\") ---> _id from collection 1**\n}\n{\n \"_id\" : ObjectId(\"63756659ee48ff28537393op\"),\n \"sku\" : \"13\",\n \"ff\" : 12,\n **\"id_product\" : ObjectId(\"6399260eaa30a55748d5d\") _id from collection 1**\n}\n{\n\nall the data of the collection 1\n.\n.\n.\n\"sku\": [\n\t{\n \"ean\" : \"100\",\n \"sku\" : \"12\",\n **\"ff\" : 11, ---> field from collections 2**\n },\n {\n \"ean\" : \"101\",\n \"sku\" : \"13\",\n **\"ff\" : 12 ---> field from collections 2**\n }\n]\n\n}\n", "text": "Hello, I am a beginner in mongodb, I have the following problem:\nI want to join two collections as follows:collection 1:productcollection 2:properties:data expected after join:Thanks for the suggestions…", "username": "Ruben_Quiroz" }, { "code": "$lookup$lookup", "text": "Hello @Ruben_Quiroz ,Welcome to The MongoDB Community Forums! Instead of Join, MongoDB has $lookup, it performs a left outer join to a collection in the same database to filter in documents from the “joined” collection for processing. The $lookup stage adds a new array field to each input document. The new array field contains the matching documents from the “joined” collection.Let us know in case you need more help or have any other queries, I’ll be happy to help you!Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Join collections
2022-12-20T01:04:12.808Z
Join collections
1,087
https://www.mongodb.com/…2_2_1024x449.png
[ "aggregation", "queries" ]
[ { "code": "> db.commands.find({authorID: {$ne: 127251834153861120}}).limit(2)\n{ \"_id\" : ObjectId(\"63aa8a72dc9c7ee5bab2e274\"), \"authorID\" : NumberLong(\"174370290145427457\"), \"guildID\" : NumberLong(\"1054865604584144946\"), \"channelID\" : NumberLong(\"1054882549404549152\"), \"command\" : \"info\", \"appCommand\" : false, \"timestamp\" : ISODate(\"2022-12-27T06:02:24.825Z\") }\n{ \"_id\" : ObjectId(\"63ab7857b478b29cfea6cd53\"), \"authorID\" : NumberLong(\"266511987738017792\"), \"guildID\" : NumberLong(\"1054865604584144946\"), \"channelID\" : NumberLong(\"1054882576164208720\"), \"command\" : \"personalizar myanimelist\", \"appCommand\" : true, \"timestamp\" : ISODate(\"2022-12-27T22:57:26.003Z\") }\n> db.commands.find({\"authorID\": 266511987738017792}).limit(2) /*This returns data*/\n> db.commands.find({\"authorID\": 174370290145427457}).limit(2) /*This does not return data*/\n> db.commands.aggregate([{$group: {_id: \"$authorID\", count:{$sum:1}}}])\n{ \"_id\" : NumberLong(\"266511987738017792\"), \"count\" : 10 }\n{ \"_id\" : NumberLong(\"127251834153861120\"), \"count\" : 55 }\n{ \"_id\" : NumberLong(\"174370290145427457\"), \"count\" : 2 }\n> db.commands.find({$and: [ { authorID: { $nin: [127251834153861120, 174370290145427457, 266511987738017792] } } ] }).limit(2)\n{ \"_id\" : ObjectId(\"63aa8a72dc9c7ee5bab2e274\"), \"authorID\" : NumberLong(\"174370290145427457\"), \"guildID\" : NumberLong(\"1054865604584144946\"), \"channelID\" : NumberLong(\"1054882549404549152\"), \"command\" : \"info\", \"appCommand\" : false, \"timestamp\" : ISODate(\"2022-12-27T06:02:24.825Z\") }\n{ \"_id\" : ObjectId(\"63ae12b716c1064a474e3b43\"), \"authorID\" : NumberLong(\"174370290145427457\"), \"guildID\" : NumberLong(\"1054865604584144946\"), \"channelID\" : NumberLong(\"1054882618346328074\"), \"command\" : \"estadísticas\", \"appCommand\" : false, \"timestamp\" : ISODate(\"2022-12-29T22:20:39.436Z\") }\n", "text": "Hi all,I have the following example documents in my DB:When I try to execute a simple find with one of the authorID’s it returns data:But if I use the other authorID, it does not return any record, even though I see them in the DB:Something very interesting is that, when I try to group the data by authorID, it does return the mentioned ID:But If I do a $nin filter, all the ids, except for that specific ID, is not filtered:\nimage1040×457 41.6 KB\nI really don’t understand what is happening here, any guidance would be appreciated.Thanks in advance.", "username": "Aguileitus" }, { "code": "NumberLong()$nindb.collection.find({\n \"authorID\": {\n \"$nin\": [\n NumberLong(174370290145427457)\n ]\n }\n})\n", "text": "Hi @Aguileitus,You should add NumberLong() to your filter when you specify the $nin items:Working example", "username": "NeNaD" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filters not working with a specific ID
2022-12-30T00:12:12.484Z
Filters not working with a specific ID
768
null
[ "time-series" ]
[ { "code": "sensorId: \"SXT001\" // is a metafield, \"meta.sensorId\"Jan/01Jan/31db.getCollection(\"timeseries\").deleteMany({\n \"meta.sensorId\": \"SXT001\",\n \"ts\": {\n \"$gte\": ISODate(\"2022-01-01\"),\n \"$lte\": ISODate(\"2022-01-31\")\n }\n})\nMongoServerError: Cannot perform an update or delete on a time-series collection when querying on a field that is not the metaField 'meta'", "text": "I would like to delete from my time series collection all the documents related to thesensorId: \"SXT001\" // is a metafield, \"meta.sensorId\"from Jan/01 to Jan/31, because I need to replace that range of values (previous imported data for that month are wrong).I’m trying to execute the querybut I get this error:MongoServerError: Cannot perform an update or delete on a time-series collection when querying on a field that is not the metaField 'meta'", "username": "Maurizio_Merli" }, { "code": "tsmeta", "text": "Hello @Maurizio_Merli!I’m not sure this is a bug, since you are also matching on ts which isn’t from meta?Delete commands must meet the following requirements: Time Series Collection LimitationsIs the issue you need to delete just the data in that date range, but data newer or older that is fine?There is a suggestion to use a TTL index to remove data that is older, but I’m not sure if there is a easy way to remove just a range, at this time.", "username": "Justin_Jenkins" }, { "code": "", "text": "You are right! I read better the doc now.\nAnd I change the title of the thread from BUG… to FEATURE REQUEST…\nI’m surprised to not be able to delete a portion of time series.\nI know that internally samples are clustered in document based on meta field.\nBut be able to fix a portion of a timeseries can be very useful.", "username": "Maurizio_Merli" } ]
FEATURE REQUEST: Timeserie and deleteMany operation
2022-12-30T05:20:20.480Z
FEATURE REQUEST: Timeserie and deleteMany operation
1,784
null
[ "kotlin" ]
[ { "code": "kotlin-coroutinesexecuteTransactionAwaitwithContext(Dispatchers.IO) {\n\topenRealm().use { realm ->\n\t\trealm.executeTransactionAwait {\n\t\t\trunTransaction()\n\t\t}\n\t}\n\temitUpdate(value)\n}\n", "text": "Hi,Could you please describe how to use (or ideas behind of) kotlin-coroutines extensions?In particular, I open realm and run executeTransactionAwait on IO dispatcher, and it throwsjava.lang.IllegalStateException: Realm access from incorrect thread. Realm objects can only be accessed on the thread they were createdThe code is similar to followingPS. We’re using MongoDB Realm 10.3.1 in the way that we don’t keep Realm opened, but instead for each transaction:\nOpen realm -> execute transaction -> close realm", "username": "Kirill_Zotin" }, { "code": "", "text": "Was you able to fix it? i have the same doubt", "username": "Juan_Silvestre_Ramir" }, { "code": "", "text": "No, we have abandoned built-in extensions and use custom code", "username": "Kirill_Zotin" }, { "code": "executeTransactionAwait", "text": "executeTransactionAwaitexecuteTransactionAwait will call Realm.refresh() on current CoroutineContext to ensure current realm get latest changed,\nso use withContext(Dispatchers.Main) or withContext(SingleThreadExecutor().asCoroutineDispatcher())", "username": "lotosbin" } ]
Not clear how to use coroutines extensions (executeTransactionAwait)
2021-02-12T18:27:54.984Z
Not clear how to use coroutines extensions (executeTransactionAwait)
4,821
null
[ "replication", "containers" ]
[ { "code": "{\"t\":{\"$date\":\"2022-12-24T11:00:54.895+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4712102, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Host failed in replica set\",\"attr\":{\"replicaSet\":\"{Replset_name}\",\"host\":\"{VPS_IP}:27019\",\"error\":{\"code\":18,\"codeName\":\"AuthenticationFailed\",\"errmsg\":\"Authentication failed.\"},\"action\":{\"dropConnections\":false,\"requestImmediateCheck\n\"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"10.5.0.11(staticIP-container1):27017\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 258904,\n \"optime\" : {\n \"ts\" : Timestamp(1672026076, 1),\n \"t\" : NumberLong(67)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1672026076, 1),\n \"t\" : NumberLong(67)\n },\n \"optimeDate\" : ISODate(\"2022-12-26T03:41:16Z\"),\n \"optimeDurableDate\" : ISODate(\"2022-12-26T03:41:16Z\"),\n \"lastAppliedWallTime\" : ISODate(\"2022-12-26T03:41:16.739Z\"),\n \"lastDurableWallTime\" : ISODate(\"2022-12-26T03:41:16.739Z\"),\n \"lastHeartbeat\" : ISODate(\"2022-12-26T03:41:17.962Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2022-12-26T03:41:18.521Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncSourceHost\" : \"10.5.0.12:27017\",\n \"syncSourceId\" : 1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 17,\n \"configTerm\" : 67\n },\n {\n \"_id\" : 1,\n \"name\" : \"10.5.0.12(staticIP-container2):27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 589529,\n \"optime\" : {\n \"ts\" : Timestamp(1672026076, 1),\n \"t\" : NumberLong(67)\n },\n \"optimeDate\" : ISODate(\"2022-12-26T03:41:16Z\"),\n \"lastAppliedWallTime\" : ISODate(\"2022-12-26T03:41:16.739Z\"),\n \"lastDurableWallTime\" : ISODate(\"2022-12-26T03:41:16.739Z\"),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1671767185, 1),\n \"electionDate\" : ISODate(\"2022-12-23T03:46:25Z\"),\n \"configVersion\" : 17,\n \"configTerm\" : 67,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 2,\n \"name\" : \"178.128.xx.xxx(IP-VPS):27019\",\n \"health\" : 0,\n \"state\" : 6,\n \"stateStr\" : \"(not reachable/healthy)\",\n \"uptime\" : 0,\n \"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastAppliedWallTime\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastDurableWallTime\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastHeartbeat\" : ISODate(\"2022-12-26T03:41:17.427Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"authenticated\" : false,\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -1,\n \"configTerm\" : -1\n }\n ],\n", "text": "Hi everyone,I received the error below:That error is shown in mongod.log when I add Secondary in the existing replica set MongoDB between different docker containers in 2 server machines.My replica set structure includes the following:Details in rs.status()And I ensure that I follow some rules:Many thanks !!", "username": "Khiem_Nguy_n" }, { "code": "net.bindIp: addresses0.0.0.0net.bindIpAll:true", "text": "Thanks for pinging the other post, I would not see this otherwise.I suspect your servers start without proper IP 
whitelisting. Instead of net.bindIp: addresses, either use 0.0.0.0 or use net.bindIpAll:true. as primary may change anytime, apply this to all and restart.\nConfiguration File Options — MongoDB Manualif this change solves the issue, then you need to set a proper IP list.if not, then share your config file here (remove sensitive parts)", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Its seem to my PRIMARY is running on subnet with static IP and the SECONDARY_02 on another VPS cant ping to that. ChatGPT advised me set up a overlay network between 2 VPS to direct comunicate each other containers. But could I set up direct without network overlay?", "username": "Khiem_Nguy_n" }, { "code": "10.5.0.0 or 10.5.0.110.5.0.1/8", "text": "direct communication needs you open ports on each VPS, forward these ports to containers, allow containers to access outside network ( network type, so not just containers’ localhost resources), and set MongoDB to also listen to the IP of your VPSs.you will need to set it to listen to localhost (127.0.0.1), local docker network (10.xx.xx.xx, 172.xx.xx.xx ), vps network (192.168.xx.xx).you can listen to all IPs in a network with a single entry but I don’t know (for now) what to use. it might be to end address with 0 or 1, 10.5.0.0 or 10.5.0.1 or maybe it is like 10.5.0.1/8. I haven’t tried this many variations and it is a chance to try it out on your side if you don’t already know the answer ", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Does your idea is set up a VPN?If no, in secondary02 I exposed container 0.0.0.0:27017:27019 to outside VPS2. And I can ping directly from container docker in VPS1 (subnet 10.5.0.1/24) with static IP 10.5.0.11 to VPS2 (178.128.xx.xxx):27019 that was exposed from container:27017.But the opposite, I worried that Secondary can’t ping back to PrimaryPlease clarify your idea, many thanks!", "username": "Khiem_Nguy_n" }, { "code": "0.0.0.0", "text": "You have not given the result of having 0.0.0.0 in your config files. This is an important step to identify possible problems.and a TL:DR for my above post would be if you need to connect from multiple networks, mongodb server should be set to listen to:otherwise, the server will just reject all incoming connections that are not included in the whitelist.you seem to have done most of the job but seeing your config file would really help to find a solution faster.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "in other words, to make a replica set in different networks, say A and B, you need a two-way connection between them. Both A-to-B and B-to-A connections must be clear.other than knowing IP addresses and ports, you also need to have them in the config so that when “mongod” starts, it will allow connection from them.PS: by the way, what I am saying is not VPN. VPN takes time to setup, but it gives a single IP range to containers. it would then be easier to have simpler mongo config. but again, setting up vpn has its own overhead. 
the decision is your.", "username": "Yilmaz_Durmaz" }, { "code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /data/db\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n # fork: true # fork and run in background\n # pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n\nsecurity:\n authorization: \"enabled\"\n keyFile: /etc/secret.kf\n#operationProfiling:\n\nreplication:\n replSetName: \"marketplace_nft\"\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n\n", "text": "This is my simple file configAnd I think don’t need to add IP to the whitelist, A and B in the replica set just need to share the same key file for authentication. And then, just config the whitelist for safety IP’s services using this DB (if need).By the way, I think A (static IP - subnet container) can ping to B (public IP - be forwarded outside) but B can’t ping back to A.", "username": "Khiem_Nguy_n" }, { "code": "version: \"3.9\"\nservices:\n mongodb:\n image: mongodb_local:latest\n container_name: stag_marketplace_nft_mongodb01\n ports:\n - \"27019:27017\"\n networks:\n outer:\n ipv4_address: \"10.5.0.11\"\n env_file:\n - .env\n volumes:\n - \"./mongodb_configuration/:/docker-entrypoint-initdb.d/:ro\"\n - \"./mongodb_configuration/init-mongodb.sh:/docker-entrypoint-initdb.d/init-mongodb.sh:ro\"\n - \"./config/mongod.conf:/data/configdb/mongod.conf:ro\"\n - \"./config/mongod.conf:/etc/mongod.conf.orig:ro\"\n - \"./config/secret.kf:/etc/secret.kf:ro\"\n - \"./data:/data/db\"\n - \"./log:/var/log/mongodb\"\n command: [\"/usr/bin/mongod\",\"-f\",\"/data/configdb/mongod.conf\"]\n restart: on-failure\nvolumes:\n log: null\nnetworks:\n outer:\n external:\n name: marketplace\n", "text": "Moreover, this is my docker-compose.yaml file config. I run the container based on the custom image built from mongo:5.0.6 (mongodb_local:latest)", "username": "Khiem_Nguy_n" }, { "code": "", "text": "that makes the whitelist’s possibility eliminated. do you have the nerves to try some more possibilities? (if not, you may try VPN option. setting it may come difficult but should just work, can’t say it performs better or worse)there can be more indicators in the mongod log file if this relates to the server. login to container on VPS2, stop the server, remove the mongod log file, restart the server, wait for a while like 30 seconds (3 times of heartbeat timeout should be enough). check if you can read it if errors are present, or try to identify if it has sensitive information, make redactions, and share the log file here so we can check here.another possible cause is the firewall settings on those VPSs that prohibits these ports to outside sources. I believe you have admin control over their exposed ports to the outside world. can you try connecting to all servers from the outside world, preferably your own pc if it sits outside the VPSs. 
Because of this problem I believe you don’t have valued data yet, so remove authentication from the config and restart all containers but do not initiate the replicaset yet (rebuild images if your customization requires it), then try connecting from outside with mongo shell or Compass.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "In order to resemble more of the actual network communication, you should instead use telnet or netcat to see if you can access the listening port from each host to others.\nPing only proof that the ICMP communication works between thoses hosts, but doesn’t proof that you can access the service listening on that specified port. If this is a firewall issue, for example, the firewall might allow ICMP but not the TCP service.", "username": "Daniel_Baktiar1" }, { "code": "", "text": "Was there any OS user password change done.\nTry to do ssh from mongod user.", "username": "Prince_Das" }, { "code": "", "text": "Whenever I see replication error due to bad auth it’s because the keyfile is not the same on each server. The keyFile has to match on each of the replica set members exactly. I had this issue previously and this resolved the issue (I had a copy error).I would double check the Keyfile and if you see they don’t match make sure they do and then restart any node you had to update it on and check again.They keyfile is how they nodes authenticate to each other internally so seeing a bad auth on replication is the reason I suspect this could be an issue.****I usuall use mongodb on VMs and I see this is in Docker but I would assume the same is true.", "username": "tapiocaPENGUIN" }, { "code": "{\"t\":{\"$date\":\"2022-12-24T11:00:54.895+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4712102, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Host failed in replica set\",\"attr\":{\"replicaSet\":\"{Replset_name}\",\"host\":\"{VPS_IP}:27019\",\"error\":{\"code\":18,\"codeName\":\"AuthenticationFailed\",\"errmsg\":\"Authentication failed.\"},\"action\":{\"dropConnections\":false,\"requestImmediateCheck", "text": "Hi @Khiem_Nguy_n,looking at {\"t\":{\"$date\":\"2022-12-24T11:00:54.895+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4712102, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Host failed in replica set\",\"attr\":{\"replicaSet\":\"{Replset_name}\",\"host\":\"{VPS_IP}:27019\",\"error\":{\"code\":18,\"codeName\":\"AuthenticationFailed\",\"errmsg\":\"Authentication failed.\"},\"action\":{\"dropConnections\":false,\"requestImmediateCheckIt seems like the {Replset_name} and {VPS_IP} variables were not expanded to the actual variable values.\nPlease check if your setting on the parameterization are setup correctly.", "username": "Daniel_Baktiar1" }, { "code": "", "text": "Yeah that’s my real intent. It’s log file and I hide sensitive variables.", "username": "Khiem_Nguy_n" } ]
MongoDB replicaSet error AuthenticationFailed (code 18)
2022-12-26T03:34:26.093Z
MongoDB replicaSet error AuthenticationFailed (code 18)
7,125
null
[ "aggregation", "data-modeling" ]
[ { "code": "{\n\tclient_id: 123,\n\tproduct_id: 456,\n\tcategory: \"Home\",\n\tproduct_name: \"Television\",\n\tlisting_date: \"2022-12-28\",\n\tquantity: 97\n}\n{\n\tclient_id: 123,\n\tproduct_id: 456,\n\tfields:[\n\t\t{\n\t\t\tkey: \"category\",\n\t\t\tvalue: \"Home\"\n\t\t},\n\t\t{\n\t\t\tkey: \"product_name\",\n\t\t\tvalue: \"Television\"\n\t\t},\n\t\t{\n\t\t\tkey: \"listing_date\",\n\t\t\tvalue: \"2022-12-28\"\n\t\t},\n\t\t{\n\t\t\tkey: \"quantity\",\n\t\t\tvalue: 97\n\t\t}\n\t]\n}\n", "text": "Hello,\nI am very new to Mongo and I am trying to get an understanding on best practices for modeling the data.\nThese are my requirements:I have tried out a couple of models so far.Are there any other options for modeling such data?\nPlease note that the number of documents in the collection per client will run into 100s of thousands and the sorting, grouping etc need to be highly performant.Thanks in advance for the guidance!", "username": "Prasad_Kini" }, { "code": "", "text": "What you describe is polymorphic behavior and is well suited for document databases, and MongoDB. as long as you have a discriminator for documents ( this belongs to client A ), and have few static fields (at least an “_id” and “version/owner”) you can store any kind of document in a single collection. you can even add relations inside the same collection with different keys.the downside will be creating indexes as each client would require a different kind of fields to index, but that is a story for another time.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi Yilmaz,\nThanks for the quick response!Glad to know that Mongo will be able to meet my requirements.My understanding is that each collection cannot have more than 64 indexes. We want to build a generic solution that’d avoid having to create unique indexes per client. Moreover, having different indexes for each client would mean that I would have to build some module that’d create them at runtime based on some data analytics or some other logic.Which of the options that I have listed will work best for the expected functionality? Is there a third option?Thanks again!", "username": "Prasad_Kini" }, { "code": "", "text": "let me ask you a few important questions before trying any further:because there is a border where you need to decide between using a database and implementing whole database servicing.Honestly, my opinion is to implement an administration interface (if it is not Atlas), then interact with the database to create users, create databases and assign them to users, set other security measures, and let users create their collections and fill documents as as they like, and also let them create their own indexes since they own the database, and charge them for their usage (disk, cpu, indexing etc.). This is similar to what Atlas does for shared clusters. (you have more control and resources on private paid clusters)By the way, since you said being new, here is an important recap:Databases and Collections — MongoDB Manualyour design does actually belong to the database and server layer. so trying to give a solid answer depends on your own resources", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi Yilmaz,\nAnswers to your questions below:Hope this answers your questions. Thanks again for indulging me Regards,\nPrasad", "username": "Prasad_Kini" }, { "code": "mongod", "text": "Please bear with me,I forgot to mention the 5th layer that has a name easily confused: server itself. 
I mean the host pc so there is a real/virtual host server pc and there are database server programs. we can have only 1, or as many as we need, mongod (the program) instance on a single host pc. the same holds true if zoom out our sight: a data center can have a single powerful data hosting pc, or as many as needed.The reason I raise this ordered architecture is about making decisions about where exactly our single document should belong. If your design to keep client data in a single collection proves to be hard to implement, you need to consider having a collection for each user which will eliminate the indexing problem. If a single client’s data exceeds a certain amount, you need to consider having a database (not the server, naming can also be confusing here). And If you want to serve a bigger degree of data, you get the idea, go one layer up.In all of these possibilities, if you design carefully, your clients would not notice any difference if you choose one or another. they would not even notice if you change to some other database other SQL/NoSQL server other than MongoDB. in fact, you can leverage cooperation between them to cover their weak sides. None of these would be noticable by clients if your design is good.Now back to your “document” focused design. If you try an all-free approach, the indexing will become bloated and thus performance will degrade. I think you need to pour some thinking into field names and types to guide your clients. for example, try preventing them to name the field “my_age” and enter their pet’s name (a string) as the value.The second approach from your first post has an advantage over the other: you can have a “search index” over the key field. but you still have to deal with the array structure.“Third option” as you asked, along with the first and second, need a longer time to discuss than we have here because this will be the heart of your design. Considering “100s of thousands” documents per client, I would go up on the layers and settle on a collection or database per client. think of it as folder tree structure; a folder per client. you are required to implement functions to switch between “folders” for each client, but the logic you end up with for the “document” does not need to change, and it will be much easier to administer each client plus better performance.By the way, I am sorry for the long lines to read. Model designing sounds like an easy thing, but might be the hardest because there are too many things to consider. But again, as long as you keep backup data, you can craft a whole new model and apply it without clients ever noticing. So, decide on one model and start developing so you can actually test things on the way if you prefer hands-on experience.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi Yilmaz,\nThank you so much for your pointers. This is a new db that I am looking to migrate to from SQL server and I want the foundation to be laid right so that we don’t run into fundamental issues later.It seems like at this point option 2 (key value pairs) and 3 (different collection/client) are the only viable options so far. Seems like I have a lot of thinking to do.Does option 3 require any design considerations on Atlas right away? 
Would it cause space/memory requirements to grow faster than option 2?Thanks again!", "username": "Prasad_Kini" }, { "code": "", "text": "you may want to check this post before moving on:\nCreating a new database vs a new collection vs a new cluster - Ops and Admin / Installation & Upgrades - MongoDB Developer Community Forumscollection per client will give flexibility for indexes plus queries will be faster as you will be searching only on that client’s collection. Yet there is a limit (about 10000) on the total collections you can have in one database. so if the number of clients you will have may go above that, consider the next layer.database per client will give more power if you want to allow clients to have still-relational data as the tables in an xyzSQL server corresponds to the collections in MongoDB.database is the highest level in a mongod instance, and instead of scaling database your collection resides, you can create new clusters if the number of clients starts increasing. you can even group client types into clusters.things to consider can really be overwhelming. but again, take your time. as long as you keep a backup, feel free to experiment with ideas.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi Yilmaz, thanks for sharing the post on db vs collection vs cluster. I will go through it carefully and let you know if I need further help.Thank you so much for all your inputs!", "username": "Prasad_Kini" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
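For what it's worth, a sketch of how the two candidate models are usually indexed so the 64-index limit never comes into play; the collection name items is illustrative. Option 2 is essentially the attribute pattern, where one compound multikey index serves every client-defined field; option 1 can instead be paired with a wildcard index.

// Option 2 (key/value pairs): one index covers all dynamic fields for all clients
db.items.createIndex({ client_id: 1, "fields.key": 1, "fields.value": 1 })

// "category = Home" for client 123, answered by the index above
db.items.find({
  client_id: 123,
  fields: { $elemMatch: { key: "category", value: "Home" } }
})

// Option 1 (free-form top-level fields): a wildcard index (4.2+) indexes
// whatever field names each client happens to use
db.items.createIndex({ "$**": 1 })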
Mongo model for dynamic fields
2022-12-28T15:48:52.654Z
Mongo model for dynamic fields
7,222
null
[ "aggregation", "queries", "java", "android", "app-services-data-access" ]
[ { "code": "", "text": "Hello community,I have made a function wich works well in Atlas App Services but after integrated in my android app using the same method from https://www.mongodb.com/docs/realm/sdk/java/examples/call-a-function/#std-label-java-call-a-function (that works pretty well with other of my functions), I have got that error:E/EXAMPLE: failed to call function with: MONGODB_ERROR(realm::app::ServiceError:12): (AtlasError) $ifNull is not allowed or the syntax is incorrect, see the Atlas documentation for more informationIn that function, I don’t use $cond or $ifnull statement inside find query or aggregation pipeline. I only use if/else statement from javascript language outside my querys, because I need to teste some arguments before doing my querys.I use a simple Atlas M0 free cluster and i didn’t find any limitations from Atlas M0 (Free Cluster), M2, and M5 Limitations — MongoDB Atlas.Could someone help me with that issue?Damien.", "username": "Damien" }, { "code": "", "text": "I have found the error which was from the fact that I didn’t deploy my function. So basically, I used my old functions in the app, note the last one which was in draft mode. I also managed some other errors.", "username": "Damien" } ]
MONGODB_ERROR(realm::app::ServiceError:12): (AtlasError) $ifNull is not allowed or the syntax is incorrect, see the Atlas documentation for more information
2022-12-27T17:44:48.941Z
MONGODB_ERROR(realm::app::ServiceError:12): (AtlasError) $ifNull is not allowed or the syntax is incorrect, see the Atlas documentation for more information
1,963
null
[ "aggregation", "queries", "node-js", "crud", "compass" ]
[ { "code": "\"empty\" const query = await db.collection('events').updateOne({\n _id: new ObjectId(eventId),\n createdBy: new ObjectId(createdBy),\n \"weights.weight\": weight\n },\n {\n $set: {\n \"weights.$.spotsAvailable.$[el2]\": {\n \"name\": applicantName,\n \"userId\": new ObjectId(applicantId)\n }\n }\n },\n {\n arrayFilters: [\n {\n \"el2.userId\": \"empty\"\n }\n ]\n })\nnew ObjectIdnew ObjectIdconst acceptOrRemoveApplicant = async (eventId: ObjectId, createdBy: ObjectId, applicantId: ObjectId, applicantName: string, boolean: boolean, weight: number): Promise<boolean | undefined> => {\n console.log({ eventId, createdBy, applicantId, applicantName, boolean, weight })\n if (boolean == true) {\n try {\n /*\n * Requires the MongoDB Node.js Driver\n * https://mongodb.github.io/node-mongodb-native\n */\n\n const agg = [\n {\n '$match': {\n '_id': new ObjectId('6398c34ca67dbe3286452f23'),\n 'createdBy': new ObjectId('636c1778f1d09191074f9690')\n }\n }, {\n '$unwind': {\n 'path': '$weights'\n }\n }, {\n '$unwind': {\n 'path': '$weights.spotsAvailable'\n }\n }, {\n '$match': {\n 'weights.spotsAvailable.name': 'empty',\n 'weights.weight': 15\n }\n }, {\n '$limit': 1\n }, {\n '$set': {\n 'weights.spotsAvailable.name': 'Wayen',\n 'weights.spotsAvailable.userId': '123456'\n }\n }\n ]\n\n const client = await clientPromise;\n const db = client.db();\n const query = db.collection('events').aggregate(agg);\n\n\n // const query = await db.collection('events').updateOne({\n // _id: new ObjectId(eventId),\n // createdBy: new ObjectId(createdBy),\n // \"weights.weight\": weight\n // },\n // {\n // $set: {\n // \"weights.$.spotsAvailable.$[el2]\": {\n // \"name\": applicantName,\n // \"userId\": new ObjectId(applicantId)\n // }\n // }\n // },\n // {\n // arrayFilters: [\n // {\n // \"el2.userId\": \"empty\"\n // }\n // ]\n // })\n\n if (query) {\n console.log(\"we queried\")\n console.log({ query })\n return true\n } else {\n throw new Error(\"User not added to event\")\n }\n\n } catch (e) {\n console.error(e);\n }\n", "text": "I need to only update one document in a nested array of subdocuments. My previous query was updating all matching documents which is no good Example Below. So I decided to use aggregation so that I could add a limit stage so that I could only update one item, but I cannot get the update to happen through node and I am not even getting errors.This query updates all documents that match the shape of userId: \"empty\"I have tested the aggregation in the MongoDB compass aggregation builder and it works fine.\nBut in the actual node code no luckI have tried:", "username": "Wayne_Barker" }, { "code": "\"arrayFilters\" : [ { \"el2.userId\": \"empty\" } ]\n\"arrayFilters\" : [ { \"el2\" : { \"userId\": \"empty\" } } ]\n", "text": "I have tested the aggregation in the MongoDB compass aggregation builder and it works fine.\nBut in the actual node code no luckIf the code works in one place it should at the other.I am suspicion aboutIt does not seem to match the documented syntax. From what I read it should beHow is the actual result differs from the expected modification? 
Share the UpdateResult?Please share some sample documents.", "username": "steevej" }, { "code": "{\n \"_id\": {\n \"$oid\": \"6398c34ca67dbe3286452f23\"\n },\n \"name\": \"test\",\n \"createdBy\": {\n \"$oid\": \"636c1778f1d09191074f9690\"\n },\n \"description\": \"testing\",\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1645488000000\"\n }\n },\n \"location\": {\n \"type\": \"Point\",\n \"coordinates\": [\n 0,\n 0\n ]\n },\n \"weights\": [\n {\n \"spotsAvailable\": [\n {\n \"name\": \"empty\",\n \"userId\": \"empty\"\n },\n {\n \"name\": \"empty\",\n \"userId\": \"empty\"\n },\n {\n \"name\": \"empty\",\n \"userId\": \"empty\"\n }\n ],\n \"weight\": 12\n },\n {\n \"spotsAvailable\": [\n {\n \"name\": \"empty\",\n \"userId\": \"empty\"\n },\n {\n \"name\": \"empty\",\n \"userId\": \"empty\"\n }\n ],\n \"weight\": 15\n }\n ],\n \"eventApplicants\": [\n {\n \"userId\": {\n \"$oid\": \"636c1778f1d09191074f9690\"\n },\n \"name\": \"Wayne Wrestler\",\n \"weight\": 15\n }\n ]\n}\n", "text": "", "username": "Wayne_Barker" }, { "code": "el2.userId: \"empty\"el2: {userId:\"empty\"}", "text": "Thank you so much for taking the time to help me with my question I really appreciate it. How do you know when you should use the el2.userId: \"empty\" or the el2: {userId:\"empty\"} syntax. Also, do you have any idea why my aggregation isn’t working? I copied the exact code from the mongo compass aggregation builder that worked as I wanted, but my code isn’t working in node driver.", "username": "Wayne_Barker" }, { "code": " query: {\n acknowledged: true,\n modifiedCount: 0,\n upsertedId: null,\n upsertedCount: 0,\n matchedCount: 1\n }\n", "text": "Also for whatever reason Your array filter returns the following query object without updating the document:", "username": "Wayne_Barker" }, { "code": "el2.userId: \"empty\"el2: {userId:\"empty\"}<identifier> : <expression> matchedCount: 1 _id: new ObjectId(eventId),\n createdBy: new ObjectId(createdBy),\n \"weights.weight\": weight\n modifiedCount: 0,", "text": "How do you know when you should use the el2.userId: \"empty\" or the el2: {userId:\"empty\"} syntaxIt is always, as documented:<identifier> : <expression>The identifier being the name you use withing the square brakets, el2 in your case. 
And the expression, the query to perform on the array element, userId:empty in your case.As for matchedCount: 1it means that 1 document matches the query part, that is:and modifiedCount: 0,means that no array element matched the array filter, so there was nothing to update.If you think it should, please share the exact input document and the exact updateOne you used.", "username": "steevej" }, { "code": "{\n \"_id\": {\n \"$oid\": \"6398c34ca67dbe3286452f23\"\n },\n \"name\": \"test\",\n \"createdBy\": {\n \"$oid\": \"636c1778f1d09191074f9690\"\n },\n \"description\": \"testing\",\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1645488000000\"\n }\n },\n \"location\": {\n \"type\": \"Point\",\n \"coordinates\": [\n 0,\n 0\n ]\n },\n \"weights\": [\n {\n \"spotsAvailable\": [\n {\n \"name\": \"empty\",\n \"userId\": \"empty\"\n },\n {\n \"name\": \"empty\",\n \"userId\": \"empty\"\n },\n {\n \"name\": \"empty\",\n \"userId\": \"empty\"\n }\n ],\n \"weight\": 12\n },\n {\n \"spotsAvailable\": [\n {\n \"name\": \"Wayne Wrestler\",\n \"userId\": {\n \"$oid\": \"636c1778f1d09191074f9690\"\n }\n },\n {\n \"name\": \"Wayne Wrestler\",\n \"userId\": {\n \"$oid\": \"636c1778f1d09191074f9690\"\n }\n }\n ],\n \"weight\": 15\n }\n ],\n \"eventApplicants\": [\n {\n \"userId\": {\n \"$oid\": \"636c1778f1d09191074f9690\"\n },\n \"name\": \"Wayne Wrestler\",\n \"weight\": 12\n }\n ]\n}\n{\n \"_id\": {\n \"$oid\": \"636c1778f1d09191074f9690\"\n },\n \"name\": \"Wayne Wrestler\",\n \"email\": \"[email protected]\",\n \"image\": \"https://lh3.googleusercontent.com/a/ALm5wu32gXjDIRxncjjQA9I4Yl-sjFH5EWsTlmvdM_0kiw=s96-c\",\n \"emailVerified\": {\n \"$date\": {\n \"$numberLong\": \"1670864727212\"\n }\n },\n \"createdEvents\": [\n {\n \"createdEventName\": \"test\",\n \"createdEventDate\": {\n \"$date\": {\n \"$numberLong\": \"1645488000000\"\n }\n },\n \"createdEventDescription\": \"testing\",\n \"createdEventWeights\": [\n {\n \"weight\": \"12\",\n \"filled\": [\n false,\n false,\n false\n ]\n },\n {\n \"weight\": \"15\",\n \"filled\": [\n false,\n false\n ]\n }\n ],\n \"createdEventId\": {\n \"$oid\": \"6398c34ca67dbe3286452f23\"\n }\n }\n ],\n \"userSignedUpEvents\": [],\n \"availableWeights\": [\n 1,\n 123\n ],\n \"signedUpEvents\": [\n {\n \"eventId\": {\n \"$oid\": \"636c722f67642c30dc5ffc30\"\n },\n \"eventName\": \"Utah\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1667913330000\"\n }\n },\n \"accepted\": false\n },\n {\n \"eventId\": {\n \"$oid\": \"636c722f67642c30dc5ffc30\"\n },\n \"eventName\": \"Utah\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1667913330000\"\n }\n },\n \"accepted\": false\n },\n {\n \"eventId\": {\n \"$oid\": \"637ec484ac2d675b30590b47\"\n },\n \"eventName\": \"Maybe?\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1672272000000\"\n }\n },\n \"accepted\": false\n },\n {\n \"eventId\": {\n \"$oid\": \"636c722f67642c30dc5ffc30\"\n },\n \"eventName\": \"Utah\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1667913330000\"\n }\n },\n \"accepted\": false\n },\n {\n \"eventId\": {\n \"$oid\": \"638d5274628db2a7bf61df49\"\n },\n \"eventName\": \"Eva's\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1698019200000\"\n }\n },\n \"accepted\": false\n },\n {\n \"eventId\": {\n \"$oid\": \"636c722f67642c30dc5ffc30\"\n },\n \"eventName\": \"Utah\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1667913330000\"\n }\n },\n \"accepted\": false\n },\n {\n \"eventId\": {\n \"$oid\": \"6398a922abb5c168ede595fb\"\n },\n \"eventName\": 
\"Nikko's event\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1670976000000\"\n }\n },\n \"accepted\": false\n },\n {\n \"eventId\": {\n \"$oid\": \"6398a922abb5c168ede595fb\"\n },\n \"eventName\": \"Nikko's event\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1670976000000\"\n }\n },\n \"accepted\": false\n },\n {\n \"eventId\": {\n \"$oid\": \"6398c34ca67dbe3286452f23\"\n },\n \"eventName\": \"test\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1645488000000\"\n }\n },\n \"accepted\": false\n },\n {\n \"eventId\": {\n \"$oid\": \"6398c34ca67dbe3286452f23\"\n },\n \"eventName\": \"test\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1645488000000\"\n }\n },\n \"accepted\": false\n },\n {\n \"eventId\": {\n \"$oid\": \"6398c34ca67dbe3286452f23\"\n },\n \"eventName\": \"test\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1645488000000\"\n }\n },\n \"accepted\": false\n },\n {\n \"eventId\": {\n \"$oid\": \"6398c34ca67dbe3286452f23\"\n },\n \"eventName\": \"test\",\n \"eventDate\": {\n \"$date\": {\n \"$numberLong\": \"1645488000000\"\n }\n },\n \"accepted\": false\n }\n ]\n}\n const query = await db.collection('events').updateOne({\n _id: new ObjectId(\"6398c34ca67dbe3286452f23\"),\n createdBy: new ObjectId(\"636c1778f1d09191074f9690\"),\n \"weights.weight\": 12\n },\n {\n $set: {\n \"weights.$.spotsAvailable.$[el2]\": {\n \"name\": \"Wayne Wrestler\",\n \"userId\": new ObjectId(\"636c1778f1d09191074f9690\")\n }\n }\n },\n {\n arrayFilters: [{ \"el2\": { \"userId\": \"empty\" } }]\n })\n\n if (query) {\n console.log(\"we queried\")\n console.log({ query })\n return true\n } else {\n throw new Error(\"User not added to event\")\n }\n", "text": "", "username": "Wayne_Barker" }, { "code": "", "text": "Don’t feel bad you are taking the time to help me and I really appreciate it. In the above, I meant to say schema validation. Not form I hope that didn’t make me sound like I don’t know anything", "username": "Wayne_Barker" }, { "code": "", "text": "This is still in my bookmarks.", "username": "steevej" }, { "code": " arrayFilters: [ { \"el2.userId\": \"empty\" } ]\n/* fields not related to the use case have been edited out */\n{\n \"_id\" : 1 ,\n \"weights\": [\n {\n \"spots\": [ ] ,\n \"weight\": 12 ,\n \"spotsAvailable\" : 3\n },\n {\n \"spots\": [ ] ,\n \"weight\": 15 ,\n \"spotsAvailable\" : 2\n }\n ]\n}\nquery = {\n \"_id\" : 1 ,\n \"weights\" : { \"$elemMatch\" : { \"weight\" : 12 ,\"spotsAvailable\" : { \"$gt\" : 0 } }\n}\nupdate = {\n \"$inc\" : { \"weights.$.spotAvailable\" : -1} ,\n \"$push\" : { \"weights.$.spots\" : { name : \"steevej\" , \"userId\" : 369 } }\n}\ndb.events.updateOne( query , update )\n", "text": "I feel really really bad.You had the correct the correct syntax withAnd I sent you in the wrong direction. You had the correct syntax, I am not sure why you hadthe actual node code no luckIt does work in mongosh with your syntax but it does not do what I think you want it to do. I think you want to assign the applicant to an available spot. With arrayFilters like this all elements matching are updated so in your case the applicant will be assign all available spots.I understand that you pre-fill the spotsAvailable array with spots to fill but with the arrayFilters issue above I do not think that this could work without complex $map and/or $filter. 
I would consider toTo resume I would start with a document that looks like:And the the update query", "username": "steevej" }, { "code": "", "text": "But my aggregation pipeline limits to one before it runs the $set. So shouldn’t it only be updating one? And even if my aggregation is updating all spots I can’t even get that behavior to work. When I run my aggregation pipeline nothing happens and I don’t even get an error", "username": "Wayne_Barker" }, { "code": "{ \"el2\" : { \"userId\" : \"empty\" , \"name\" : \"empty\" } }\n{\n \"_id\" : 1 ,\n \"weights\": [\n {\n \"spots\": [ ] ,\n \"weight\": 12 ,\n \"spotsAvailable\" : 3\n },\n {\n \"spots\": [ ] ,\n \"weight\": 15 ,\n \"spotsAvailable\" : 2\n }\n ]\n}\nquery = {\n \"_id\" : 1 ,\n \"weights\" : { \"$elemMatch\" : { \"weight\" : 12 ,\"spotsAvailable\" : { \"$gt\" : 0 } }\n}\nupdate = {\n \"$inc\" : { \"weights.$.spotAvailable\" : -1} ,\n \"$push\" : { \"weights.$.spots\" : { name : \"steevej\" , \"userId\" : 369 } }\n}\ndb.events.updateOne( query , update )\n", "text": "So shouldn’t it only be updating one?An aggregation does not update the original document and since you are calling updateOne, I assume that you want to update the original document. So whatever what you end up doing in your aggregation with limit:1 or not, your original document will not be updated.I was able to update all wieghts.$.spotsAvailable by using the arrayFilters:I still do not know how to set up a filter that test only one of the field of a sub-object.I still do not know how to limit the update to a single element using arrayFilters.The only way I could make work what I think you want to achieve (assign an available spot to a participant) is by modifying your model toand update with", "username": "steevej" }, { "code": "", "text": "Thank you I didn’t realized that the aggregation pipeline only updates a newly created document and not the original. Lesson learned, I will definitely have to update my database structer.", "username": "Wayne_Barker" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why is my MongoDB aggregation pipeline updateOne() working in Compass, but not working in my Node driver
2022-12-14T01:53:26.965Z
Why is my MongoDB aggregation pipeline updateOne() working in Compass, but not working in my Node driver
4,231
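A minimal Node.js sketch of the approach steevej outlines at the end of the thread above (a spotsAvailable counter plus a spots array, filled with one atomic update). The collection name, field names, and the claimSpot helper are assumptions based on the thread, not a confirmed implementation.

```js
// Minimal sketch (Node.js driver), assuming events documents shaped like
// { _id, weights: [ { weight: 12, spotsAvailable: 3, spots: [] }, ... ] }
const { ObjectId } = require("mongodb");

async function claimSpot(db, eventId, weight, applicant) {
  // Match the event and the weight class that still has an open spot,
  // then decrement the counter and push the applicant in one atomic update.
  return db.collection("events").updateOne(
    {
      _id: new ObjectId(eventId),
      weights: { $elemMatch: { weight: weight, spotsAvailable: { $gt: 0 } } },
    },
    {
      $inc: { "weights.$.spotsAvailable": -1 },
      $push: { "weights.$.spots": applicant },
    }
  );
}
```

With this shape, a modifiedCount of 0 simply means no weight class with a free spot matched, which doubles as the "event is full" check.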
https://www.mongodb.com/…_2_1024x446.jpeg
[ "atlas-functions", "graphql" ]
[ { "code": "[Function logging](https://www.mongodb.com/docs/atlas/app-services/logs/#log-lines)", "text": "New to functions. Confused about the conditions under which logging works–especially for functions.The documentation says that “console.log” will create an App Services log entry: [Function logging](https://www.mongodb.com/docs/atlas/app-services/logs/#log-lines)Within the Atlas web IDE, my function works. Console.log messages are displayed within the “Result” tab.\n(Tried putting IDE image here, but multiple images are not allowed.)I have configured the function as a GraphQL custom resolver, and it is returning the expected results.Through all of this, I can find no logging within the App Services logging. There is nothing from the custom resolver function. I see that there are previous log entries for GraphQL, and I’ve been running lots of GraphQL queries both with and without the custom resolver over the last couple of days. I’m seeing nothing. (I have played with, cleared log filters, with no luck.)\n\nimage1920×837 80.8 KB\nUnder what conditions does App Services (not) create log entries? How can one log information from a custom resolver function? Is there clear documentation for this somewhere (a link)?", "username": "John_Huschka" }, { "code": "LogsLogs:\n\n[\n \"[object Object]\"\n]\nJSON.stringify(JSON.parse(obj))", "text": "Hi there,App Services stores each output as a single string in the log entry’s Logs field.I tested whether that appears in the logs by query against endpoint and console logging the request itself and it works fine, if you press on the arrow under the status tab you find:If you want the whole object there are tricks out there (maybe JSON.stringify(JSON.parse(obj))) , otherwise you can still log values.@John_Huschka", "username": "santimir" }, { "code": "", "text": "@santimir Thanks for your response. Can you give me a few more details? Query against which endpoint? The app services logging endpoint? “Status” tab where?", "username": "John_Huschka" }, { "code": "", "text": "You are welcome.", "username": "santimir" }, { "code": "", "text": "@santimir Thanks for your response. Ok–gotcha on the status, chevron, and endpoint. However, I simply do not have any events on which to click. I have been working all day today (Dec 28) with my custom resolver function (presumably doing its console.logs), and I don’t have a single entry in the App Service log (“Logs” menu option) for today.", "username": "John_Huschka" }, { "code": "", "text": "Im afraid I dont know much more (only exploring just as you do.)But hopefully some more experienced users will come for rescue ", "username": "santimir" }, { "code": "", "text": "Submitted support ticket to MongoDB. Received response from David Griffith:When running functions from the Atlas > App Services > Functions Page UI, logs are not written to the App Services Logs, but instead they write directly to the console on the Functions Page UI. However, when running functions “normally”, for instance, from a trigger, html endpoint, or via an SDK, logs are written to the App Services Logs.Likewise, when running Custom Resolvers from the Atlas > App Services > GraphQL Page UI, logs are not written to the App Services Logs. This is presumably because these are tests and you can see the results in the GraphQL Page UI. 
However, when running the Custom Resolver normally, a log entry is written to the App Services Logs.I have verified that calling from Postman produces the expected log entries.To me, the essential conclusion is “To test GraphQL custom resolver functions using the log, you must call the GraphQL endpoint with Postman or similar tool. You cannot use the Atlas GraphQL UI.”", "username": "John_Huschka" } ]
Atlas custom function logging not working
2022-12-28T17:45:13.102Z
Atlas custom function logging not working
1,986
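A small hedged sketch of the logging pattern discussed in the thread above, for an App Services function used as a GraphQL custom resolver. The function body and field names are illustrative only.

```js
// Atlas App Services function (e.g. a GraphQL custom resolver) — sketch only.
exports = async function (input) {
  // console.log stores each argument as a string in the App Services log entry,
  // so stringify objects to avoid seeing "[object Object]" in the Logs page.
  console.log(JSON.stringify({ stage: "customResolver", input: input }));
  return { ok: true };
};
```

As noted in the thread, the entry only appears under Logs when the resolver is invoked through the GraphQL endpoint (Postman, an SDK, etc.), not when it is run from the Functions or GraphQL test UIs.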
https://www.mongodb.com/…f351b62d8f19.png
[ "database-tools" ]
[ { "code": "", "text": "Hello! I have one MongoDB Server 4.2.1 installed on RedHat and I need install the new Database Tools (Version > 100.0.0 and started in MongoDB 4.4).ASK: Have one way to install this Database Tools ( > 100) in the same machine of the MongoDB server 4.2.1 without have conflits? (I know that this Database Tools started in 4.4)Command: sudo yum install -y --bugfix mongodb-database-tools-rhel70-x86_64-100.6.1.rpm\nimage851×36 2.32 KB\nMy problem is that have the 4.2.1 MongoDB Server installed in the same package of Tools earlier 4.4 MongoDB. If not is possiblem install in the same machine, I think install in other server without MondoDB, only to execute my jobs with new features of Database Tools in a 4.2 MongoDB.\n\nimage757×223 4.64 KB\nThanks,\nHenrique,", "username": "Henrique_Souza" }, { "code": "", "text": "I’d recommend uninstalling the package Database Tools you installed with 4.2 and then install the v100.6.1.", "username": "chris" }, { "code": "", "text": "But this is the problem, the packge is the same of MongoDB Server, only on Database Tools the packges were divided (Born on MongoDB 4.4):Is possiblem uninstall only the Tools on 4.2.1 MongoDB Server?", "username": "Henrique_Souza" }, { "code": "mongodb-org-toolsmongodb-orgmongodb-org-server", "text": "They have always been a separate package . Only with 4.4 did they become separate from the server package and start their own versioning.Yes you can safely remove mongodb-org-tools it is install as a dependency of the meta-package mongodb-org the server package(mongod) is actually mongodb-org-server.", "username": "chris" }, { "code": "", "text": "Please, if you have the procedure to do this, i will be greatful. Look my try do remove packge mongo-org-tools by my MongoDB 4.2.1 server:\nimage846×577 94.8 KB\nThanks,\nHenrique.", "username": "Henrique_Souza" }, { "code": "yum", "text": "I thought you were on redhat as you had yum in your first post. I can post something tomorrow unless someone beats me to it.What Ubuntu version?", "username": "chris" }, { "code": "", "text": "I’m sorry, this is my LAB on Ubuntu 18.04. This is a long history, because DEV and HML use MongoDB Standalone on Ubuntu 18.04 and PRD runs on RedHat (CentOS 7) Replicaset PSA.But in this case of tests is Ubuntu 18.04.Thanks,\nHenrique.", "username": "Henrique_Souza" }, { "code": "", "text": "Thanks for all Chris!I found the problem: Only remove the Hold, but not only mongodb-org-tools but too mongodb-org:\nimage1176×891 178 KB\nThanks!\nHenrique.", "username": "Henrique_Souza" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Install new Database Tools version > 100.0.0 on 4.2 MongoDB Server
2022-12-27T20:27:53.110Z
Install new Database Tools version > 100.0.0 on 4.2 MongoDB Server
2,363
null
[]
[ { "code": "com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message\n\tat com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:630)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveMessageWithAdditionalTimeout(InternalStreamConnection.java:515)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:355)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receive(InternalStreamConnection.java:315)\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:215)\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:144)\n\tat java.base/java.lang.Thread.run(Thread.java:832)\nCaused by: java.net.SocketTimeoutException: Read timed out\n\tat java.base/sun.nio.ch.NioSocketImpl.timedRead(NioSocketImpl.java:283)\n\tat java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:309)\n\tat java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:350)\n\tat java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:803)\n\tat java.base/java.net.Socket$SocketInputStream.read(Socket.java:981)\n\tat java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:478)\n\tat java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:472)\n\tat java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)\n\tat java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1434)\n\tat java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1038)\n\tat com.mongodb.internal.connection.SocketStream.read(SocketStream.java:109)\n\tat com.mongodb.internal.connection.SocketStream.read(SocketStream.java:131)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:647)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveMessageWithAdditionalTimeout(InternalStreamConnection.java:512)\n", "text": "Hi guys,I am experiencing an issue with intermittent (every few hours) timeouts between out java springboot app and Mongo Atlas.We do not see the same error with our node.js app.Any ideas on what we should be doing to prevent such errors? 
Or ideas on what we could be doing wrong?Many thanksJava Application error log:", "username": "we_eatbricks" }, { "code": "", "text": "Hi @we_eatbricks, have you resolved the issue?I have also encountered the same problem (working app with intermittent MongoSocketReadTimeoutException every few hours) with the same tech stack (Atlas, Spring Boot, Google Cloud Run).Many Thanks", "username": "Dales" }, { "code": "", "text": "@Dales @we_eatbricksI am also facing same problem with the same tech stack .\nHave you resolved the issue ?", "username": "Prabhat_Kumar2" }, { "code": "", "text": "Hi @Prabhat_Kumar2, unfortunately our logs indicate that this is still occurring (since the original post we have also migrated our tech stack to new versions → including spring boot to 3.0 and our MongoDB instance to a serverless, but with no joy).If anybody can shed any more light on this, it would be much appreciated.", "username": "Dale_Southall" }, { "code": "", "text": "Hi there, would you be able to share which driver version you’re using?", "username": "Ashni_Mehta" }, { "code": "", "text": "spring-boot-starter-data-mongodb : 2.6.8\nmongodb-driver-sync: 4.4.2\nmongodb-driver-core: 4.4.2", "username": "Prabhat_Kumar2" }, { "code": "Exception in monitor thread while connecting to server XXX-develop-qa-shard-00-00-pri.XXX.mongodb.net:XXXX\" \n----\ncom.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message\n\tat com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:701)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveMessageWithAdditionalTimeout(InternalStreamConnection.java:579)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:415)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receive(InternalStreamConnection.java:374)\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:216)\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:152)\n\tat java.base/java.lang.Thread.run(Unknown Source)\nCaused by: java.net.SocketTimeoutException: Read timed out\n\tat java.base/java.net.SocketInputStream.socketRead0(Native Method)\n\tat java.base/java.net.SocketInputStream.socketRead(Unknown Source)\n\tat java.base/java.net.SocketInputStream.read(Unknown Source)\n\tat java.base/java.net.SocketInputStream.read(Unknown Source)\n\tat java.base/sun.security.ssl.SSLSocketInputRecord.read(Unknown Source)\n\tat java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(Unknown Source)\n\tat java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(Unknown Source)\n\tat java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(Unknown Source)\n\tat java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(Unknown Source)\n\tat com.mongodb.internal.connection.SocketStream.read(SocketStream.java:109)\n\tat com.mongodb.internal.connection.SocketStream.read(SocketStream.java:131)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:718)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveMessageWithAdditionalTimeout(InternalStreamConnection.java:576)\n\t... 
5 common frames omitted\n", "text": "May be this will help", "username": "Prabhat_Kumar2" }, { "code": "", "text": "We had the issue with various versions of spring boot 2 (as per Prabhats), but have recently migrated to Spring 3.0, which pulls in version 4.8.0 of the following:\nmongodb-driver-core\nmongodb-driver-legacy\nmongodb-driver-reactivestreams\nmongodb-driver-syncWe pretty much see this exception every day (completely intermittent amongst thousands of transactions).2022-12-28 06:09:03.334 GMTCaused by: com.mongodb.MongoSocketWriteException: Exception sending message2022-12-28 06:09:03.334 GMTat com.mongodb.internal.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:687) ~[mongodb-driver-core-4.8.0.jar!/:na]2022-12-28 06:09:03.334 GMTat com.mongodb.internal.connection.InternalStreamConnection.access$700(InternalStreamConnection.java:89) ~[mongodb-driver-core-4.8.0.jar!/:na]2022-12-28 06:09:03.334 GMTat com.mongodb.internal.connection.InternalStreamConnection$3.failed(InternalStreamConnection.java:604) ~[mongodb-driver-core-4.8.0.jar!/:na]2022-12-28 06:09:03.334 GMTat com.mongodb.connection.netty.NettyStream$2.operationComplete(NettyStream.java:256) ~[mongodb-driver-core-4.8.0.jar!/:na]2022-12-28 06:09:03.335 GMTCaused by: io.netty.channel.StacklessClosedChannelException: null2022-12-28 06:09:03.335 GMTat io.netty.channel.AbstractChannel$AbstractUnsafe.write(Object, ChannelPromise)(Unknown Source) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]2022-12-28 06:09:03.335 GMTCaused by: java.io.IOException: Broken pipe2022-12-28 06:09:03.335 GMTat java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:na]2022-12-28 06:09:03.335 GMTat java.base/sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:62) ~[na:na]2022-12-28 06:09:03.335 GMTat java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:137) ~[na:na]2022-12-28 06:09:03.335 GMTat java.base/sun.nio.ch.IOUtil.write(IOUtil.java:81) ~[na:na]", "username": "Dale_Southall" }, { "code": "", "text": "Appreciate you sharing this. From the stack trace, it looks like this might be monitoring related. 
Are you seeing exceptions thrown from application threads as well?", "username": "Ashni_Mehta" }, { "code": "", "text": "Not in main thread for now", "username": "Prabhat_Kumar2" }, { "code": "at io.netty.handler.ssl.SslHandler.exceptionCaught(SslHandler.java:1105) ~[netty-handler-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:317) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:346) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:177) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\n*__checkpoint ⇢ org.springframework.security.web.server.csrf.CsrfWebFilter [DefaultWebFilterChain]\n*__checkpoint ⇢ org.springframework.security.web.server.header.HttpHeaderWriterWebFilter [DefaultWebFilterChain]\n*__checkpoint ⇢ org.springframework.security.config.web.server.ServerHttpSecurity$ServerWebExchangeReactorContextWebFilter [DefaultWebFilterChain]\n*__checkpoint ⇢ org.springframework.security.web.server.WebFilterChainProxy [DefaultWebFilterChain]\nOriginal Stack Trace:\nat org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:81) ~[spring-data-mongodb-4.0.0.jar!/:4.0.0]\nat org.springframework.data.mongodb.core.ReactiveMongoTemplate.potentiallyConvertRuntimeException(ReactiveMongoTemplate.java:2574) ~[spring-data-mongodb-4.0.0.jar!/:4.0.0]\nat org.springframework.data.mongodb.core.ReactiveMongoTemplate.lambda$translateException$93(ReactiveMongoTemplate.java:2557) ~[spring-data-mongodb-4.0.0.jar!/:4.0.0]\nat reactor.core.publisher.Flux.lambda$onErrorMap$27(Flux.java:7088) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat 
reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onError(MonoFlatMapMany.java:255) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.maybeOnError(FluxConcatMapNoPrefetch.java:326) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.onError(FluxConcatMapNoPrefetch.java:220) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.FluxCreate$BaseSink.error(FluxCreate.java:474) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:802) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.FluxCreate$BufferAsyncSink.error(FluxCreate.java:747) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.FluxCreate$SerializedFluxSink.drainLoop(FluxCreate.java:237) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.FluxCreate$SerializedFluxSink.drain(FluxCreate.java:213) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.FluxCreate$SerializedFluxSink.error(FluxCreate.java:189) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.LambdaMonoSubscriber.doError(LambdaMonoSubscriber.java:155) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.LambdaMonoSubscriber.onError(LambdaMonoSubscriber.java:150) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.FluxMap$MapSubscriber.onError(FluxMap.java:134) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.MonoNext$NextSubscriber.onError(MonoNext.java:93) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.MonoNext$NextSubscriber.onError(MonoNext.java:93) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.MonoFlatMap$FlatMapMain.secondError(MonoFlatMap.java:241) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.MonoFlatMap$FlatMapInner.onError(MonoFlatMap.java:315) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onError(MonoPeekTerminal.java:258) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat reactor.core.publisher.MonoCreate$DefaultMonoSink.error(MonoCreate.java:201) ~[reactor-core-3.5.0.jar!/:3.5.0]\nat com.mongodb.reactivestreams.client.internal.MongoOperationPublisher.lambda$sinkToCallback$31(MongoOperationPublisher.java:573) ~[mongodb-driver-reactivestreams-4.8.0.jar!/:na]\nat com.mongodb.reactivestreams.client.internal.OperationExecutorImpl.lambda$execute$2(OperationExecutorImpl.java:94) ~[mongodb-driver-reactivestreams-4.8.0.jar!/:na]\nat com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.async.function.AsyncCallbackSupplier.lambda$whenComplete$1(AsyncCallbackSupplier.java:97) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.async.function.RetryingAsyncCallbackSupplier$RetryingCallback.onResult(RetryingAsyncCallbackSupplier.java:111) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.async.function.AsyncCallbackSupplier.lambda$whenComplete$1(AsyncCallbackSupplier.java:97) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat 
com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.async.function.AsyncCallbackSupplier.lambda$whenComplete$1(AsyncCallbackSupplier.java:97) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.operation.FindOperation$1.onResult(FindOperation.java:376) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.operation.CommandOperationHelper.lambda$transformingReadCallback$10(CommandOperationHelper.java:323) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.connection.LoadBalancedServer$LoadBalancedServerProtocolExecutor.lambda$executeAsync$0(LoadBalancedServer.java:182) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.connection.CommandProtocolImpl$1.onResult(CommandProtocolImpl.java:82) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection$1.onResult(DefaultConnectionPool.java:683) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.connection.UsageTrackingInternalConnection$2.onResult(UsageTrackingInternalConnection.java:159) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection$2.onResult(InternalStreamConnection.java:496) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection$2.onResult(InternalStreamConnection.java:490) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:46) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection$3.failed(InternalStreamConnection.java:604) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.connection.netty.NettyStream$2.operationComplete(NettyStream.java:256) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.connection.netty.NettyStream$2.operationComplete(NettyStream.java:252) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:999) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:860) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat 
io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:877) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:940) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1247) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]\nCaused by: com.mongodb.MongoSocketWriteException: Exception sending message\nat com.mongodb.internal.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:687) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection.access$700(InternalStreamConnection.java:89) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection$3.failed(InternalStreamConnection.java:604) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.connection.netty.NettyStream$2.operationComplete(NettyStream.java:256) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat com.mongodb.connection.netty.NettyStream$2.operationComplete(NettyStream.java:252) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:999) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:860) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat 
io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:877) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:940) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1247) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]\nCaused by: io.netty.channel.StacklessClosedChannelException: null\nat io.netty.channel.AbstractChannel$AbstractUnsafe.write(Object, ChannelPromise)(Unknown Source) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nCaused by: java.io.IOException: Broken pipe\nat java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:na]\nat java.base/sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:62) ~[na:na]\nat java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:137) ~[na:na]\nat java.base/sun.nio.ch.IOUtil.write(IOUtil.java:81) ~[na:na]\nat java.base/sun.nio.ch.IOUtil.write(IOUtil.java:58) ~[na:na]\nat java.base/sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:532) ~[na:na]\nat io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:415) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:931) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:354) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:921) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:907) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:893) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat 
io.netty.handler.ssl.SslHandler.forceFlush(SslHandler.java:2138) ~[netty-handler-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:803) ~[netty-handler-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.handler.ssl.SslHandler.flush(SslHandler.java:780) ~[netty-handler-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.handler.ssl.SslHandler.flush(SslHandler.java:1972) ~[netty-handler-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.handler.ssl.SslHandler.closeOutboundAndChannel(SslHandler.java:1941) ~[netty-handler-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.handler.ssl.SslHandler.close(SslHandler.java:731) ~[netty-handler-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:753) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:727) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:560) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat com.mongodb.connection.netty.NettyStream$InboundBufferHandler.exceptionCaught(NettyStream.java:431) ~[mongodb-driver-core-4.8.0.jar!/:na]\nat io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:346) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:317) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.handler.ssl.SslHandler.exceptionCaught(SslHandler.java:1105) ~[netty-handler-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:317) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:346) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:177) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) 
~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[netty-transport-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\nat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.85.Final.jar!/:4.1.85.Final]\n", "text": "Hi Ashni, I see this intermittent exception with the reactive spring boot 3.0 stack, which appears to be on the main thread (unsure whether it is related to the original exception reported on this post, but it has very similar characteristics, besides not directly being caused by a SocketTimeoutException):", "username": "Dale_Southall" } ]
Intermittent timeouts between Mongo Atlas and Java Spring Boot app running on GCP Cloud Run container
2021-12-08T21:21:10.991Z
Intermittent timeouts between Mongo Atlas and Java Spring Boot app running on GCP Cloud Run container
5,107
null
[ "transactions", "monitoring" ]
[ { "code": "", "text": "Hi Team,We need db.current(true ) output send to mail if any query more than 1 second run send to mail else do not send to mailabove condition how to write shell script so any body please help me script full detail.", "username": "hari_dba" }, { "code": "db.currentOp(true)db.setProfilingLevel(1, { slowms: 1000 })system.profile{\n op: 'insert',\n ns: 'test.users',\n command: {\n insert: 'users',\n documents: [{\n name: 'Max',\n _id: ObjectId(\"62262bf2036c3d3f383580d6\")\n }],\n ordered: true,\n lsid: {\n id: UUID(\"e6760d4b-7f0b-4b30-809c-5febe4c27a3b\")\n },\n txnNumber: Long(\"10\"),\n '$clusterTime': {\n clusterTime: Timestamp({\n t: 1646668785,\n i: 3\n }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n '$db': 'test'\n },\n ninserted: 1,\n keysInserted: 1,\n numYield: 0,\n locks: {\n ParallelBatchWriterMode: {\n acquireCount: {\n r: Long(\"2\")\n }\n },\n ReplicationStateTransition: {\n acquireCount: {\n w: Long(\"5\")\n }\n },\n Global: {\n acquireCount: {\n r: Long(\"2\"),\n w: Long(\"2\")\n }\n },\n Database: {\n acquireCount: {\n w: Long(\"2\")\n }\n },\n Collection: {\n acquireCount: {\n w: Long(\"2\")\n }\n },\n Mutex: {\n acquireCount: {\n r: Long(\"2\")\n }\n }\n },\n flowControl: {\n acquireCount: Long(\"1\"),\n timeAcquiringMicros: Long(\"1\")\n },\n readConcern: {\n level: 'local',\n provenance: 'implicitDefault'\n },\n writeConcern: {\n w: 'majority',\n wtimeout: 0,\n provenance: 'implicitDefault'\n },\n storage: {},\n responseLength: 230,\n protocol: 'op_msg',\n millis: 6,\n ts: ISODate(\"2022-03-07T15:59:46.175Z\"),\n client: '127.0.0.1',\n appName: 'mongosh 1.1.9',\n allUsers: [],\n user: ''\n}\nsystem.profile", "text": "Hi @hari_dba,Firstly, I assume you mean db.currentOp(true)?\nSecondly, I think I would use the profiler db.setProfilingLevel(1, { slowms: 1000 }). This will add slow queries in the collection system.profile and provide some information about them.Exemple:Sadly, you can’t use Change Streams on system collections in MongoDB so using a Change Stream on system.profile would have been cool, but it’s not possible.So I would use CRON in this case and run a query every X minutes to see if a new slow query was added in the past X minutes (based on the “ts” field) in this collection and then send an email if something was found.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Yes correct i am looking \" db.currentOp(true) \"I need script and run in CRON. So please send me script details.", "username": "hari_dba" }, { "code": "", "text": "I never wrote a script like this and I don’t have time to write it. But please share it with the community when you come up with something that the community could reuse.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi Team,Any body have script please share me this is important my project.\nMy team was created one script below as mentioned what we are excepted that out put not generated. at least could you guide below script are we modify anything ?MONGO_HOME=/hom/bin$MONGO_HOME/mongo --host mongodb:27017 -u ‘hari’ -p ‘xxxx’\n–authenticationDatabase admin --quiet < perf.js > perf.txt\nif [ $? 
-eq 0 ]\nthen\nmailx -s “QUERY TAKING MORE THAN 1 SECOND” [email protected] < perf.txt\nelse\necho “Unable to connect to MongoDB” | mail -s “Issue while connecting to ADB DB” [email protected]\nfi======================================================================================================$ cat perf.js\nvar result=db.currentOp({“active” : true,“secs_running” : { “$gt” : 1 }})\nprintjson(result)My management excepted conditions like this :How to setup that script and deploy ?", "username": "hari_dba" }, { "code": "", "text": "Hi Team,Please any suggestion …", "username": "hari_dba" }, { "code": "", "text": "I need schedule wise out put of the db.currentOp({“active” : true,“secs_running” : { “$gt” : 1 }})", "username": "hari_dba" }, { "code": "\n", "text": "MONGO_HOME=/hom/bin$MONGO_HOME/mongo --host mongodb:27017 -u ‘hari’ -p ‘xxxx’\n–authenticationDatabase admin --quiet < perf.js > perf.txt\nif [ $? -eq 0 ]\nthen\nmailx -s “QUERY TAKING MORE THAN 1 SECOND” [email protected] < perf.txt\nelse\necho “Unable to connect to MongoDB” | mail -s “Issue while connecting to ADB DB” [email protected]\nfi======================================================================================================$ cat perf.js\nvar result=db.currentOp({“active” : true,“secs_running” : { “$gt” : 1 }})\nprintjson(result)It works for me, i found special characters i your command. the double quote is the culprit. try this,var result=db.currentOp({“active” : true,“secs_running” : { “$gt” : 1 }});\nprintjson(result);", "username": "Balk" } ]
Performance transaction script
2022-03-07T14:09:23.484Z
Performance transaction script
4,247
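The thread above never reaches a working script, so here is a hedged sketch of the perf.js half, written for mongosh. The one-second threshold and the NO_SLOW_QUERIES sentinel are assumptions; the cron/mailx wrapper from the thread would grep for that sentinel to decide whether to send mail.

```js
// perf.js — run as: mongosh --quiet "mongodb://host:27017/admin" perf.js > perf.txt
const result = db.currentOp({ active: true, secs_running: { $gt: 1 } });

if (result.inprog && result.inprog.length > 0) {
  result.inprog.forEach(function (op) {
    // One summary line per slow operation, easy to read in an email body.
    print("opid=" + op.opid + " secs_running=" + op.secs_running + " ns=" + op.ns);
    printjson(op.command);
  });
} else {
  // Sentinel the shell wrapper can grep for to skip sending the mail.
  print("NO_SLOW_QUERIES");
}
```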
https://www.mongodb.com/…5256346428ba.png
[]
[ { "code": "", "text": "upper image, my collection data.I want to update “infos”.const updatequery = [{\nuid = 4, count = 12, }, {uid = 3, count = 100}];Actually, I can change it by calling updateOne three times. But I don’t want to.how to update array?let me help me.", "username": "DEV_JUNGLE" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and post sample documents in textual format so that we can cut-n-paste into our systems for experimentation.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to update array form schema #2?
2022-12-29T07:55:26.694Z
How to update array form schema #2?
2,215
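This thread closed without an answer. One common way to apply several per-element updates in a single round trip — not taken from the thread itself — is bulkWrite with arrayFilters; the collection name and the shape of the infos elements below are guesses based on the question.

```js
// Sketch only — assumes documents shaped like { infos: [ { uid: 4, count: 0 }, { uid: 3, count: 0 } ] }.
const updates = [{ uid: 4, count: 12 }, { uid: 3, count: 100 }];

await db.collection("mycoll").bulkWrite(
  updates.map((u) => ({
    updateOne: {
      filter: { "infos.uid": u.uid },
      update: { $set: { "infos.$[el].count": u.count } },
      arrayFilters: [{ "el.uid": u.uid }],
    },
  }))
);
```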
null
[ "backup" ]
[ { "code": "", "text": "Does Cloud Mgr dedupe/compress backup data for storage benefits with cloud-native snapshots?", "username": "niko_belic" }, { "code": "", "text": "HI @niko_belic ,\nYes, Cloud Manager does utilize deduplication and compression when storing the backup data in the cloud.Just to be clear, Cloud Manager does not utilize Cloud Provider Snapshots. Only our Atlas offerings utilize Cloud Provider Snapshots. For more information on Cloud Manager backups and how they work, you can see our documentation here.Thank you,\nEvin", "username": "Evin_Roesle" } ]
Does Cloud Mgr dedupe/compress backup data for storage benefits?
2022-12-02T12:53:22.377Z
Does Cloud Mgr dedupe/compress backup data for storage benefits?
1,459
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "Hi,i need to upgrade mongodb from 4.2.20 version to 6.0.3. just set up new instance cause require windows 2019.\ndo i need upgrade step by step following major version like 4.2 > 4.4 or i can just use mongodump in last version 4.2.X then restore it in new instance using mongorestore in mongo version 6.0.3?", "username": "Sulton_Fadlillah" }, { "code": "", "text": "Step by step upgrades are the tested and supported method.Some people have success with a dump/restore but it is not a supported method.", "username": "chris" }, { "code": "", "text": "btw recommended using mongodump/mongorestore version 4 in new instance installed ver 6 or using latest mongodump in mongo database tools to dump from old version and restore it in new instance?", "username": "Sulton_Fadlillah" }, { "code": "", "text": "Use the latest, 100.6.1, good luck.Command line tools available for working with MongoDB deployments. Tools include mongodump, mongorestore, mongoimport, and more. Download now.", "username": "chris" }, { "code": "", "text": "Oh no, i just restore it using mongorestore ver 4, do u know how to drop all database from last restore? or any command i can use for that operation so i can restore again using dump & restore latest version 100.6.1", "username": "Sulton_Fadlillah" }, { "code": "mongorestore--drop", "text": "If it worked and looks good, don’t bother.mongorestore has a --drop option to drop any existing collection before restoring it.", "username": "chris" } ]
Upgrade mongo version from 4.2.20 to 6.0.3
2022-12-28T02:45:09.089Z
Upgrade mongo version from 4.2.20 to 6.0.3
2,177
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "const userSchema=new mongoose.Schema({\n\n username:{\n\n type:String,\n\n required:true\n\n },\n\n email:{\n\n type:String,\n\n required:true\n\n },\n\n password:{\n\n type:String,\n\n required:true\n\n },\n\n status:{\n\n type:Boolean,\n\n default:false\n\n },\n\n cards:{\n\n type:[cardSchema],\n\n default:[]\n\n }\n\n})\nconst cardSchema=new mongoose.Schema({\n name:{\n type:String,\n required:true,\n },\n startTime:{\n type:Date,\n required:true\n },\n endTime:{\n type:Date,\n required:true\n },\n description:{\n type:String,\n }\n})\n", "text": "This is my user a user has an array of card objectsThis is my cardWhen i delete a particular card from card collection its not deleted from that user’s array how to solve this i want that the a particular card be flushed out of whole db including the user’s array which contains that card", "username": "Assassin_N_A" }, { "code": "", "text": "Hello @Assassin_N_A, Welcome to the MongoDB community forum,When i delete a particular card from card collection its not deleted from that user’s arrayCould you please share what query you tried and not deleting the elements?", "username": "turivishal" }, { "code": "", "text": "I haven’t tried any query i used went to atlas to delete a particular card from card table just to check if the same card is also deleted from array within user’s card", "username": "Assassin_N_A" }, { "code": "cards$[<identifier>]", "text": "I haven’t tried any query i used went to atlas to delete a particular card from card table just to checkAs I can see in your schema, there is only one parent users schema, and cards is an array that is sub schema, so there should be one collection users, how cards is a different collection?Your question is still not clear to me, if you want to delete elements from cards then you can use update methods (updateOne, updateMany) query with $pull operator and positional operators ($ positional, and $[<identifier>] positional filtered).", "username": "turivishal" }, { "code": "", "text": "Apart from the previous advice which is correct you can test queries in https://mongoplayground.netYou can also share a link to the playground for anything you need help with.Pd: remember that Cards is not a collection, it is just a shape for the elements in the array. I wonder if you are trying to store a reference to a different collection instead.", "username": "santimir" }, { "code": "", "text": "Well when i ran my code i’ve sent above two tables were created(sorry i dont know terminologies that well i’m more familiar with SQL terms) and this is what i have tried in atlas i expect the card inside user’s array to be deleted as well but it is still there\nhere’s what i tried in atlas @santimirGoogle Drive file.", "username": "Assassin_N_A" } ]
Deleting document inside other document
2022-12-28T10:50:05.876Z
Deleting document inside other document
1,598
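A hedged Mongoose sketch of the $pull approach turivishal points to above. It assumes the embedded card copies were stored with the same _id as the standalone card document — if they were not, match on another unique field instead.

```js
// Sketch only — assumes User and Card models built from the schemas in the thread.
async function deleteCardEverywhere(cardId) {
  // Remove the standalone card document (if cards are also kept in their own collection).
  await Card.deleteOne({ _id: cardId });
  // Pull the matching embedded copy out of every user's cards array.
  await User.updateMany({}, { $pull: { cards: { _id: cardId } } });
}
```

Because the embedded copies are duplicates rather than references, deleting from the cards collection alone never touches them; both writes are needed (or a single source of truth using refs and populate).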
null
[]
[ { "code": "", "text": "Transcripts are printed with a huge empty cover page. Even though the page itself shows a firm page where there is no such gap, when I attempt to print the page, there comes this block of emptiness.\nMongoDB External TranscriptI wanted to upload this transcript to my LinkedIn account. It takes a snapshot of the first page to display. Unfortunately, this cover page provides no valuable information other than my name and MDBU logo, and that snapshot image can be interpreted as if I have nothing to show despite having a 3-pages worth of courses.please fix this printing issue for the transcripts.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi @Yilmaz_Durmaz ,Thank you for reporting this issue!\nI have forwarded this platform bug to the concerned team and will keep you updated on the resolution.\nNote that there might be some delays due to the holiday season. We regret the inconvenience caused.Kind Regards,\nSonali", "username": "Sonali_Mamgain" }, { "code": "", "text": "This topic was automatically closed after 60 days. New replies are no longer allowed.", "username": "system" } ]
MDBU transcript is, unlike its web page, printed with an almost empty cover page
2022-12-28T09:32:46.501Z
MDBU transcript is, unlike its web page, printed with an almost empty cover page
1,400
null
[ "replication" ]
[ { "code": "", "text": "I am using mongodb atlas replica set, recently I deleted a lot of data from my db to reduce cost.\nHowever after deleting and running compact on secondary nodes, the free space reserved is still huge. For context my collection’s storage size is about 580G and the free space is about 350G out of 580G.I am reading around the forum and it looks like I have to contact the support team to use rolling maintenance to re-sync one replica set member at a time.My question is where I can do this on cloud.mongodb.com? Do I have to register for at least “Developer” support plan to start talking to them?", "username": "Trung_Ha_Tuan" }, { "code": "", "text": "From your Atlas account you can contact support\nUnder Organization’s you will see support tab\nYou can choose basic support,paid etc", "username": "Ramachandra_Tummala" } ]
How to contact support team to re-sync my replica set
2022-12-28T06:13:41.443Z
How to contact support team to re-sync my replica set
851
null
[ "queries", "php" ]
[ { "code": "<?php\nrequire 'vendor/autoload.php';\n\ntry {\n $mng = new MongoDB\\Driver\\Manager(\"mongodb://localhost:27017/dbname\");\n $bucket = new MongoDB\\GridFS\\Bucket($mng, 'dbname');\n $query = new MongoDB\\Driver\\Query([]);\n $r = $mng->executeQuery(\"dbname.dbname\", $query);\n\n if (!empty($_POST[\"date\"])){\n $document = array(\n \"num_id\" => $_POST[\"num_id\"],\n \"name\" => $_POST[\"name\"],\n \"date\" => $_POST[\"date\"],\n \"detail\" => $_POST[\"detail\"],\n );\n if ($_FILES['pdf']) {\n $stream = $bucket->openUploadStream($_FILES['pdf']['tmp_name']);\n $document['pdf'] = $bucket->uploadFromStream($_FILES['pdf']['name'], $stream);\n }\n $single_insert = new MongoDB\\Driver\\BulkWrite();\n $single_insert->insert($document);\n $mng->executeBulkWrite(\"rs.rs\", $single_insert);\n } \n} \ncatch (Exception $e) {\n echo 'Exception reçue : ',$e->getMessage(),\"\\n\";\n}\n?>\n\n<table>\n <thead>\n //Columns titles for data\n </thead>\n <tbody>\n <?php foreach ($r as $document):\n $bson = MongoDB\\BSON\\fromPHP($document);\n $json = json_decode(MongoDB\\BSON\\toJSON($bson));\n ?>\n <tr>\n <td><?php echo date('d-m-Y',strtotime($json->{'date'})) ?></td>\n <td><?php echo $json->{'num_id'} ?></td>\n <td><?php echo $json->{'name'} ?></td>\n <td><?php echo $json->{'detail'} ?></td>\n <td><?php if (!empty($json->{'pdf'})) {\n echo '<a href=\"file.php?id=' . $document->pdf . '\">File</a>';\n }\n else {\n echo \"\";\n } ?></td>\n </tr>\n <?php endforeach; ?>\n </tbody>\n</table>\n<form action=\"\" enctype=\"multipart/form-data\" method=\"POST\">\n //Other inserts for the main document(date, name, detail,..)\n <p>\n <label for=\"pdf\">Document</label><br>\n <input type=\"file\" id=\"pdf\" name=\"pdf\">\n </p>\n <p>\n <input type=\"submit\" id=\"addform\" value=\"Add\">\n </p>\n</form>\n<?php\nrequire 'vendor/autoload.php';\n\ntry {\n $mng = new MongoDB\\Driver\\Manager(\"mongodb://localhost:27017/dbname\");\n $bucket = (new MongoDB\\Client)->dbname->selectGridFSBucket();\n $fileId = new MongoDB\\BSON\\ObjectID($_GET['id']);\n $stream = $bucket->openDownloadStream($fileId);\n $contents = stream_get_contents($stream);\n} \ncatch (Exception $e) {\n echo 'Exception reçue : ',$e->getMessage(),\"\\n\";\n}\n?>\n", "text": "For a project, I’m want to be able to download a PDF file that was previously saved in a MongoDB document using GridFS.I’m a new developer and this is my first time using the Upload/Download and streams notions so I might be going to it completely wrong and missing something obvious but I can’t see where it is that I might be going at it wrong.\nFor all I know maybe my Upload is bad to start and that’s why I can’t donwload anything !I have been looking at this MongoDB doc to try and get the Download working, but to no effect.I have a page displaying all my MongoDB data in a Table (each row = one document) and in those rows a link to open a new page to Download/Consult the PDF file previously uploaded to that specific document.The problem is that when I access that link, well nothing happens I have a blank page.Here is the page where I display data and have a form to upload new data with a file upload if needed.And here is my file.php where the User is sent after clicking a link in the Table and where I try to access the data stored by GridFSAnd with this code. 
Even though the $GET[‘id’] is indeed the _id of a document in dbname.fs.files I get nothing in return, not even an error.\nHere is how the structure of the documents looks like using MongoDB Compass.I hope I was clear enough and thank you for reading me thus far.", "username": "Axel_JARNIGON" }, { "code": "", "text": "Hello Axel, did you later solve this issue, I’m currently facing the exact same issue and I’m struggling to figure it out", "username": "Ayomide_Adekoya" }, { "code": "fs.fileslength$ echo -n \"\" | md5sum\nd41d8cd98f00b204e9800998ecf8427e -\n$contextContent-Typeif ($_FILES['pdf']) {\n $stream = $bucket->openUploadStream($_FILES['pdf']['tmp_name']);\n $document['pdf'] = $bucket->uploadFromStream($_FILES['pdf']['name'], $stream);\n}\nopenUploadStreamuploadFromStreamopenUploadStream()fwrite()uploadFromStream()fopen()", "text": "And with this code. Even though the $GET[‘id’] is indeed the _id of a document in dbname.fs.files I get nothing in return, not even an error. Here is how the structure of the documents looks like using MongoDB Compass.Based on the screenshot you shared, the fs.files document exists but contains no data. We can confirm this via the length field, which is zero, and the MD5 checksum corresponding to that of an empty string:In the second script you shared, you only assign the GridFS stream’s contents to a $context variable, which is not printed. But assuming that was just an example and you’re actually printing the string, the output would still be empty since the GridFS file itself is empty.On a related note, remember that you’ll likely need to emit a Content-Type header for the image before outputting its binary data in a web response.If the GridFS file is empty, the root cause is likely how you’re attempting to upload it in the first place. The openUploadStream and uploadFromStream methods used here are mutually exclusive.openUploadStream() returns a stream that you then write (e.g. fwrite()). Upon closing the stream, all of its chunk data will have been written to GridFS and its metadata document (with the length, checksum, etc.) will be created.uploadFromStream() operates inversely. You provide it a stream (e.g. calling fopen() on the temp file). The PHP library then reads that stream in its entirety and writes the contents to GridFS (chunk(s) and metadata).Uploading Files with Writable Streams in the PHP library’s GridFS tutorial includes code examples for both of these APIs.", "username": "jmikola" } ]
Accessing GridFS file stored / PHP
2021-09-09T15:43:53.531Z
Accessing GridFS file stored / PHP
4,566
https://www.mongodb.com/…8defa01c17d9.png
[]
[ { "code": "", "text": "Hi, The PDF exam guide for C100DEV does not include Replication, Sharding, Storage Engines for C100DEV but the online “Cloud: MongoDB Cloud” does.\nCould you confirm if these topics are part of the C100DEV syllabus?", "username": "Nandhini_Madanagopal" }, { "code": "", "text": "Hi @Nandhini_Madanagopal,MongoDB University has recently re-launched the MongoDB Associate Certification Exam. You can find the details of the exam on this Certification about page: learn.mongodb.comYou can also refer to the Associate Developer Exam Study Guide to understand the syllabus of the exam with percentage weightage of each exam section.Please feel free to reach out if you have any other questions.Kind Regards,\nSonali", "username": "Sonali_Mamgain" }, { "code": "", "text": "Thank you!! I cleared the exam yesterday! The exam guide was very helpful and is all-encompassing of the topics covered in the exam.", "username": "Nandhini_Madanagopal" }, { "code": "", "text": "That is great news, congratulations on getting MongoDB Associate Developer Certified @Nandhini_Madanagopal !!I am glad you found the Associate Developer Exam Study Guide helpful. We look forward to having you on the Community forums and sharing your knowledge and experience with other learners who are in early stage of certification exam preparation. Kind Regards,\nSonali", "username": "Sonali_Mamgain" }, { "code": "", "text": "", "username": "Sonali_Mamgain" } ]
Are Replication, Sharding, Storage Engines required for C100DEV Developer Exam preparation?
2022-12-23T17:29:39.414Z
Are Replication, Sharding, Storage Engines required for C100DEV Developer Exam preparation?
2,397
https://www.mongodb.com/…e4ee5c831bf.jpeg
[]
[ { "code": "", "text": "I took the SI Associate program , and after i finished the course , i wanted to take the exam for it i had a problem with the internet , and i downloaded the page twice , so i found the exam closed without even submitting the answer form\n\nIMG-20221225-WA0001800×449 40.3 KB\n", "username": "Rawnaa_Fawzi" }, { "code": "", "text": "Hi @Rawnaa_Fawzi ,Please email your issue to [email protected] and the team will look into this.Kind Regards,\nSonali", "username": "Sonali_Mamgain" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SI Associate quiz
2022-12-25T13:21:01.493Z
SI Associate quiz
2,188
null
[ "replication" ]
[ { "code": "", "text": "Hi,\nI am currently running mongod 4.4 in CentOs 7. I first installed mongod and run with command\nsystemctl start monogd. After that to test replica set cluster I run mongod instance with config file with the command “mongod -f /etc/mongod.conf”. Now when I try to kill the running process and start mongd with the first option i.e. simply mongo and not with mongod -f /etc/mongod.conf, I am unable to do so.\nHow do I run mongo the mongo without config file?\nThanks", "username": "Ravindra_Pandey" }, { "code": "", "text": "Why would you want to do so? There are enough options that you need a configuration file to reasonably start the system.", "username": "Jack_Woehr" }, { "code": "", "text": "And when you use systemctl to start mongod it’s being started with a configuration file.\nSo if it used to start that way and no longer does so, you must have changed something.\nOr you’re not really stopping mongod when you think you have it stopped.", "username": "Jack_Woehr" }, { "code": "mongodfork", "text": "for testing purposes before moving to a config file, you can pass almost all settings as parameters to mongod, including a test config file:\nmongod — MongoDB Manualyou can use the same data folder (and other files) or create a new test folder for everything you need, and then you are good to go. do not use fork option if you want the server to stop when you close the terminal (or ctrl+C) else you have to repeat the sequence of killing from terminal or shutting down by logging into the server. to create multiple instances for the replica set, either use multiple terminals, or use the fork parameter but then do not forget to kill/shutdown them every time you need to refresh. (else you will have hard time to find out why it won’t start, and got frustrated when you realize the ports are in use by those forgotten forked instances).PS: and you need to also remember to have the correct port when you run mongo shell, else you will have hard time to find out why your activity in the shell did not recorded in the database before realizing the activity was on the wrong instance", "username": "Yilmaz_Durmaz" }, { "code": "kill -9SIGKILLmongod\n", "text": "Now when I try to kill the running processYou should not do that. There are proper ways to terminate mongod. In there, pay particular attention to the sentence:Never use kill -9 (i.e. SIGKILL ) to terminate a mongod instance.You probably now have file permissions or ownership issues that you will have to clean-up manually.Share the output of running the commandAlternatively, you may shared the content of the log file.After that we might be able to find out what you have to clean.", "username": "steevej" }, { "code": "systemctlmongodmongod--port--dbpath--logpath", "text": "You probably now have file permissions or ownership issuesAlright, I missed that part. systemctl runs mongod with its own user and group permission. this means “default” paths and files used in the config file are all set unreachable to other users in the system other than root. trying to start the server just by a single mongod command, if you are not root, or don’t use sudo, or don’t switch user to mongod, then your server will fail to start because of the permissions on those default resources.for basic operations, you need to set --port, --dbpath and --logpath , and they should be different for each member of a replica set. 
create a test folder in your home and create the remaining paths in it.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thank you all for your response. The problem is solved and I am running my replica set with config option and init script", "username": "Ravindra_Pandey" }, { "code": "", "text": "The problem is solvedFor the benefit of all users of this forum, could you please share what was the underlying issue and how it was solve. This way others that stumble on a similar issue might be able to use your insight.", "username": "steevej" }, { "code": "", "text": "Actually there seems to be no problem at all. I just wanted my mongod to auto initiate whenever the server reboots or restarts. I am setting replica set cluster in my local environment using config file so whenever I reboot the system I had to manually start my mongod manually. Now using init script I am able to auto initiate mongod instance after the init script", "username": "Ravindra_Pandey" } ]
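A quick sketch of the kind of throwaway test member described above; the folder, port, and replica set name are placeholders, not values from the thread:

mkdir -p ~/rs-test/db1
mongod --replSet rs0 --port 27018 --dbpath ~/rs-test/db1 --logpath ~/rs-test/db1/mongod.log

Repeat with a different port and dbpath for each additional member, and point mongosh at the matching port before initiating the replica set.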
Run mongo normally without configuration file
2022-12-27T04:16:55.547Z
Run mongo normally without configuration file
2,287
null
[ "python", "time-series" ]
[ { "code": "Timestamp('2022-12-27 00:00:00-0500', tz='America/New_York')\n", "text": "Hello,I am trying to save data from yfinance to mongodb in a Timeseries db that I created following the instructions in the documentation. Is there an easy way of converting yfinance data to BSON UTC format? I need to write and read datetimes. I work with pymongo and pandas dataframes.example timestamp:thank you!Timestamp(‘2021-12-28 00:00:00-0500’, tz=‘America/New_York’)", "username": "Yannis_Antypas" }, { "code": "datetime.datetime()", "text": "Use Python datetime.datetime() to instance the insertable value.", "username": "Jack_Woehr" }, { "code": "", "text": "Hello @Jack_WoehrMerry XMas. Could you please provide an example? I’m quite an aspiring dev and I dont fully understan what you mean.", "username": "Yannis_Antypas" }, { "code": "datetime.datetime(year, month, day, 0, 0, 0, 0)year month dayinsert()insert_many()", "text": "In the field where you want a date, put the following in that field:datetime.datetime(year, month, day, 0, 0, 0, 0)\nassuming year month day are all instanced variablesthe pymongo driver will correctly translate this during your insert() or insert_many()", "username": "Jack_Woehr" }, { "code": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nmariadb_to_mongo.py\n\nConvert table from MySQL/MariaDB to mongodb.\nUses the mariadb driver and pymongo\n\nCreated on Sat Aug 8 21:00:44 2020\n\n@author: jwoehr\nCopyright 2020, 2022 Jack Woehr [email protected] PO Box 82, Beulah, CO 81023-0082.\nApache-2 -- See LICENSE which you should have received with this code.\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nWITHOUT ANY EXPRESS OR IMPLIED WARRANTIES.\n\"\"\"\n\nimport argparse\nimport decimal\nimport datetime\n", "text": "@Yannis_Antypas I just remembered that a while ago I published a complete script that handles the date.\nIt converts a mariadb table to mongodb and does the date thing as part of that. 
Hope this helps.", "username": "Jack_Woehr" }, { "code": "import pymongo\nimport yfinance as yf\nimport pandas as pd\nfrom pymongo import MongoClient\nimport pytz\nimport datetime\nfrom datetime import datetime, timezone\n\n# Connect to the database\nclient = pymongo.MongoClient(\"mongodb://localhost:27017/\")\ndb = client[\"stocks\"]\n\n# Get the \"OHLCV\" collection\nohlc_collection = db[\"OHLCV\"]\n\n# List of ticker symbols\nticker_symbols = [\"AAPL\",\"GOOG\"]\n\n# Create an empty list to store the data for each ticker\nticker_data_list = []\n\n# Loop through the ticker symbols\nfor ticker in ticker_symbols:\n try:\n # Retrieve the stock data for the current ticker\n ticker_data = yf.Ticker(ticker).history(period=\"1y\")\n \n # Add a column with the ticker symbol to the data\n ticker_data.insert(0, \"Ticker\", ticker)\n \n # Append the data to the list\n ticker_data_list.append(ticker_data)\n \n except Exception as e:\n print(f\"An error occurred while retrieving data for {ticker}: {e}\")\n\n# Concatenate the data for each ticker into a single DataFrame\ndf = pd.concat(ticker_data_list).reset_index()\n\n# Insert the 'timestamp' column in the first position\ndf.insert(0, 'timestamp', None)\n\n# Iterate over the rows of the DataFrame\nfor index, row in df.iterrows():\n # Parse the timestamp string\n timestamp = row['Date']\n timestamp_str = str(timestamp)\n dt = datetime.strptime(timestamp_str, '%Y-%m-%d %H:%M:%S%z')\n\n # Convert the timestamp to UTC\n dt_utc = dt.astimezone(timezone.utc)\n\n # Use the strftime method to format the datetime object as a string in the desired format\n formatted_string = dt_utc.strftime(\"%Y-%m-%dT%H:%M:%SZ\")\n dt = datetime.strptime(formatted_string, '%Y-%m-%dT%H:%M:%SZ')\n\n # Store the formatted string in the new column\n df.at[index, 'timestamp'] = dt\n \n# Drop the 'Date' column\ndf = df.drop(columns=['Date'])\n\n# Get the data as a list of dictionaries\ndata_dict = df.to_dict(orient=\"records\")\n\ntry:\n # Insert the data into the collection\n result = ohlc_collection.insert_many(data_dict,\n ordered=True)\n \n # Print a message indicating the number of documents inserted and the current time\n print(f\"Inserted {len(result.inserted_ids)} documents into the collection\")\n print(f\"last modified {datetime.utcnow()}\")\nexcept Exception as e:\n print(f\"An error occurred while inserting data into the collection: {e}\")\nfinally:\n # Close the connection to the database\n client.close()\n", "text": "Hey @Jack_Woehr ,thank you for your replies.I wrote some code - its not the most efficient - but it works.", "username": "Yannis_Antypas" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do i send Timestamp data to db using python?
2022-12-28T00:16:17.215Z
How do i send Timestamp data to db using python?
4,412
null
[]
[ { "code": "", "text": "Hi , i am using free version of Mongodb and even my site is development mode and only used by me during the test but i am getting connection reached limit. Sorry, but am i missing something if i need to close connection. Please can you explain me.\nI am using Mongodb with nextjs.", "username": "Waqas_R" }, { "code": "", "text": "Hi @Waqas_R - Welcome to the community.Please refer to the MongoDB Atlas - Fix Connection Issues documentation which includes some possible ways of an immediate fix and details for implementing a more long term solution.Regards,\nJason", "username": "Jason_Tran" } ]
You're nearing the maximum connections threshold
2022-12-23T09:50:11.735Z
You&rsquo;re nearing the maximum connections threshold
1,245
null
[ "security" ]
[ { "code": "NSAppTransportSecurityNSAllowsArbitraryLoads", "text": "We’re getting more and more reports from user experiencing this issue in the app in production. The app is using MongoDB Realm and the users are on iOS.We’ve already tried to change the iOS specific settings regarding SSL, i.e. setting NSAppTransportSecurity > NSAllowsArbitraryLoads to true, but that didn’t change anything.This error happens in various calls to the MongoDB Realm backend: logging in, logging out, client reset, etc…Any idea on how to solve that?", "username": "Jean-Baptiste_Beau" }, { "code": "2022-12-28 11:12:28.158932-0600 MyAwesomeRealmApp[86624:2498121] [SceneConfiguration] Info.plist contained no UIScene configuration dictionary (looking for configuration named \"(no name)\")\n2022-12-28 11:12:28.159059-0600 MyAwesomeRealmApp[86624:2498121] [SceneConfiguration] Info.plist contained no UIScene configuration dictionary (looking for configuration named \"(no name)\")\n2022-12-28 11:12:28.162612-0600 MyAwesomeRealmApp[86624:2498121] You've implemented -[<UIApplicationDelegate> application:performFetchWithCompletionHandler:], but you still need to add \"fetch\" to the list of your supported UIBackgroundModes in your Info.plist.\n2022-12-28 11:12:28.162683-0600 MyAwesomeRealmApp[86624:2498121] You've implemented -[<UIApplicationDelegate> application:didReceiveRemoteNotification:fetchCompletionHandler:], but you still need to add \"remote-notification\" to the list of your supported UIBackgroundModes in your Info.plist.\n2022-12-28 11:12:28.226074-0600 MyAwesomeRealmApp[86624:2498121] [native] Running application main ({\n initialProps = {\n };\n rootTag = 1;\n})\nThread Performance Checker: Thread running at QOS_CLASS_USER_INTERACTIVE waiting on a lower QoS thread running at QOS_CLASS_DEFAULT. Investigate ways to avoid priority inversions", "text": "Also gettings this from the sample app created by following this tutorial:Sample repo:Contribute to lecksfrawen/poc-realm development by creating an account on GitHub.Output I’m getting:\n", "username": "Hector_DD" } ]
SSL server certificate rejected ("An SSL error has occurred and a secure connection to the server cannot be made.")
2022-03-29T14:46:15.787Z
SSL server certificate rejected (&ldquo;An SSL error has occurred and a secure connection to the server cannot be made.&rdquo;)
3,729
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi folks, we have a collection of Pages, and we want to capture changes made to each page. Our system is not yet ready for a full CQRS solution, but we wanted to keep a historical track of changes for each page.Every time a document changes (any of its properties) we want to save a copy of the current version. However there’s two approaches for that:This data would mostly be used for audit, and a restore if needed, from a collection performance perspective does one approach is better than the other? Each page is about 600 bytes in size, there are about 500k pages, and we should probably have about 5-6 revisions for each page.Thanks for your inputs", "username": "Vinicius_Carvalho" }, { "code": "", "text": "Hello @Vinicius_Carvalho\nI’d suggest to move all older revisions to a second collection. A document from the “revisions collection” references to its “top level” page document in a “top level collection”. Every document in the revisions collection gets a version. In other words you apply the document version pattern. By using a second collection you keep the top level collection small (the less data you work on the faster you are)… Retrieving a/the versions of a single page will be a specific read. Indexing will be a little bit use case dependent.\nRegards,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Thanks @michael_hoeller That is exactly what I was looking for, I already have a PageRevision collection, the recent page version lives on the Pages collection, upon any updates I copy the old revision to the PageRevisions. The only concern I had is whether I’d keep one single item, with several revisions inside, or one document per revision. The good thing I believe the nested model would bring is that I could just use the same objectId as key for both collections. But I can always use the original page id as an index on the revisions I guess.", "username": "Vinicius_Carvalho" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Change Data modeling
2022-12-22T23:28:26.137Z
Change Data modeling
1,039
null
[ "upgrading" ]
[ { "code": "", "text": "Hello, We are planning to upgrade mongdb on various servers. Few questions.", "username": "Ana" }, { "code": "", "text": "When upgrading you can’t skip major versions you have to go from 4.2 → 4.4 → 5.0 → etc.\nhttps://www.mongodb.com/docs/manual/release-notes/6.0-upgrade-replica-set/\n\nimage826×191 8.64 KB\nHere are the release notes for 6.0 and it will mention in a version if there are any issues that would make it not recommended for prod. https://www.mongodb.com/docs/manual/release-notes/6.0/#std-label-release-notes-6.0Make sure to validate your application driver is compatible with the version of MongoDB you are using\nhttps://www.mongodb.com/docs/drivers/driver-compatibility-reference/When you say backup and restores which technology are you referencing?Here is the page that each version of MongoDB and the OS version that is supported\nhttps://www.mongodb.com/docs/manual/administration/production-notes/#platform-support-matrix", "username": "tapiocaPENGUIN" }, { "code": "", "text": "I use mongodump in batch files for daily backps.", "username": "Ana" }, { "code": "", "text": "Thanks alot for your detailed reply", "username": "Ana" }, { "code": "", "text": "It that case I would look into the mongodump/restore docs as it says to restore/dump to the same major version of MongoDB.\nimage815×274 21.5 KB\n", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Thank you. I was able to upgrade successfully until 5. Looka like mongo.exe is not available at 6. How do I start mongoshell with 6?", "username": "Ana" }, { "code": "", "text": "Hi @AnaMongosh is available from the download center, link is in the instructions below.", "username": "chris" }, { "code": "", "text": "Thanks, appreciate it. I am able to u[grade to 6 but it is asking for driver update when I was testing application. Any help on this?", "username": "Ana" }, { "code": "", "text": "What programming language and driver are you using in your application?", "username": "tapiocaPENGUIN" } ]
MongoDB Upgrade from 4.2 and 4.4 to 6.0
2022-12-20T14:37:13.231Z
MongoDB Upgrade from 4.2 and 4.4 to 6.0
11,820
null
[]
[ { "code": "\"MongoUser\": [\n {\n \"name\": \"read-only\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": false,\n \"fields\": {\n \"first_name\": {\n \"read\": true,\n \"write\": false\n },\n \"last_name\": {\n \"read\": {\n \"_id\": {\n \"$in\": \"%%user.custom_data.contacts\"\n }\n },\n \"write\": false\n },\n }\n]\n", "text": "In one query, I’m trying to return different fields based on what permissions someone has but it doesn’t seem possible based on the current setup. For example, if someone is in someones friend list return their first name and last name, else just return their first name. I thought this would be possible with multiple rules but given it only takes one rule per query it seems tricky.I ideally want something like this:Is this possible in some way that I’m missing, or is this a feature request? When i try and implement something like the above I get the error that the field permission has to be a Bool not an Object.", "username": "Ryan_Lindsey" }, { "code": "", "text": "You should be able to overcome that error as I have seen examples comparing a value to an array of values within the custom user data. Is .contacts a list of objects or a list of id values themselves?A long note: the documentation states that custom user data should not contain ids for controlling access to individual documents. The docs clearly state tho that custom data granting access to groups of documents are ok. While you are not controlling access to an individual document specifically in your custom data, and you are granting access to a group of document fields instead, one caveat came to mind that I wanted to share with you. The docs mention that anytime custom user data changes that affect permissions, a client reset will occur. This means anytime the user’s contact list changes, you will experience a client reset.Wanted to pass that along.", "username": "Joseph_Bittman" }, { "code": "", "text": "Thanks for messaging and yeah in their docs they say not to reference set documents, although referencing users they don’t seem to be opposed to.\nContacts is an array yes. And it does work when referencing and will return if I put it in the applyWhen, or Read part of the json. However it doesn’t seem to control what field can be returned on a document by document basis. I don’t think what I want to do is possible at the moment, but I realllly wish it was. Hopefully someone see’s this and can correct me. ", "username": "Ryan_Lindsey" }, { "code": "", "text": "@Ryan_Lindsey Field Level Permissions for sync currently only support boolean values. So you can only set“write”: falseor“read”: true,For example.Is your example just an example or do you actually want to hide the lastName of users. Sync applies a user a single role per collection on connection. So what you could do is split out the data into another top-level collection and apply FLP there.Another option is to have add a friends array to the User document. If the userId of the user is in the friends array - then apply the friends role, which allows the user to read the lastName. Otherwise do not allow them to read the last name.Hope this helps", "username": "Ian_Ward" }, { "code": "", "text": "Hey Ian, thanks for following up. Our real world use case is for meeting up with people in person through events so we need to be very careful with what data we expose to the client given that we don’t want to allow strangers to be able to see other people’s location. 
Although if you’re someone’s friend you can see if they are nearby.What we currently do is have a users collection, and then when we want to fetch which users are in an event, we currently have an an API which checks if they are friends or not. If they are not friends, we only return their name, user id, and phone hash. Whereas if they are friends, we return those fields, their current location, and a couple of other secure fields.We want to start using Device sync so we can have more real-time data as opposed to using our APIs, and also that would remove the amount of code we’d have to manage. However if we just have to end up duplicating and manage that data to work with sync then the trade off doesn’t make too much sense for us.I set up a ‘contacts’ array in the users custom_data and in my real world example there are:What I’d like to happen is when I run the query that is on the user collection I only get the location field back for that one user.As only one rule can be picked per session per collection, having field level permissions using an expression seems to be a way this would work perfect for our use case. Given that technically everyone can read the document, but only some can read set fields.I hope that gives you a bit more context on what we are trying to achieve and why. Obviously there are hacky ways around it but the above solution I think would be our perfect use case.", "username": "Ryan_Lindsey" }, { "code": "", "text": "@Ryan_Lindsey unfortunately, the only way to support this at this time is split any fields or data out of that document and into a separate collection which you can then apply different field level permissions to based on if they are a matched friend or not.", "username": "Ian_Ward" }, { "code": "", "text": "Okay thanks Ian, good to know I wasn’t missing anything. Is this a use case you’ve seen often and are likely to add support for in the next year? Or is this likely something that is unlikely to exist in the future, or if so, be very far in the future?", "username": "Ryan_Lindsey" }, { "code": "", "text": "@Ian_Ward Like Ryan asked, any chance additional functionality could be coming soon?", "username": "Joseph_Bittman" }, { "code": "", "text": "@Ian_Ward will field level permissions be supported on sync once device sync permissions are merged with the rules UI in app services?", "username": "Tyler_Collins" } ]
Device sync: granular field permissions
2022-09-08T03:09:06.021Z
Device sync: granular field permissions
2,531
null
[ "queries" ]
[ { "code": "", "text": "Hi All,\nPlease help me out by letting me know how to write the syntax for joining two tables in mongodb. I want all the entries in the left table. If there is no matching entry in the right table then it should be populated as blank field.", "username": "Abhishek_Jain3" }, { "code": "", "text": "In mongo parlance, it is collections rather than tables, it is documents rather than entries. The operation is $lookup rather than join.If you want more details you will have to provide more details by sharing samble documents from both collections. Sample resulting documents are also needed for all use cases. Also dhare what you have tried and explain how it fails to produce the desired output.", "username": "steevej" }, { "code": "", "text": "Hi Steeve,\nThanks for replying. I wrote the following statements.[{\"$project\":{“column names”}},\n{ “$lookup”:{\n“from”:“sample1”,\n“localField”: “xyz”,\n“foreignField”: “abc”,\n“as”:“sample” }\n}]So lets take previously I had 100 entries with xyz as my parent column and sample1 had 50 entries with abc as the foreign column. Now when I am applying $lookup, I am getting only 50 entries which are there in both the tables but I want all the 100 entries where for the 50 matching documents in main collection, documents from sample1 will come and populate in the output section and for the remaining 50 documents it should populate blank documents.for example:\nMain collection is like as following:\nxyz\n1\n2\n3\n4\n5sample 1:\nabc\n1\n2\n3After $lookup I am getting following result:\nxyz abc\n1 1\n2 2\n3 3but I want following:\nxyz abc\n1 1\n2 2\n3 3\n4\n5I hope I have made my doubt clear. Let me know if you need more information for clearing my doubt.", "username": "Abhishek_Jain3" }, { "code": "", "text": "please post real sample documents and real results from running the code you shared. the result you posted are not consistent with the code. ee need real json documents, not tabular data that cannot be used with editing.", "username": "steevej" }, { "code": "", "text": "Hi Steeve,\n[{$project:{“orderRetrieveId”:1}},\n{\"$lookup\":{\n“from”: “sample2”,\n“localField”:“orderRetrieveId”,\n“foreignField”:“orderRetrieveId”,\n“as”: “mainsample”}\n},\n{\"$unwind\":\"$rawfile\"},\n{\"$project\":{“orderRetrieveID”:1,“rawfile.orderRetrieveId”:1}}\n]PFA sample documents.Now in my first file there are 47 entries and in 2nd one there are 46 entries, now after $lookup I am getting 46 entries only which are matching in both files. But i want all the 47 entries with the 2nd column entry as empty for the non-matching entry.", "username": "Abhishek_Jain3" }, { "code": "", "text": "I am not allowed to upload the documents.", "username": "Abhishek_Jain3" }, { "code": "", "text": "Why are you doing the following?{\"$unwind\":\"$rawfile\"}There is no field rawfile. After the $project you only have orderRetrieveId and _id. After the $lookup you now have the fields _id, orderRetreveId and mainsample. That is where you should stop your aggregation.I am not allowed to upload the documents.What do you mean by that? You really cannot cut-n-paste real documents?", "username": "steevej" }, { "code": "", "text": "Hi Steeve,{\"$unwind\":\"$rawfile\"}I mistyped it, in actual I have written {\"$unwind\":\"$mainsample\"}“There is no field rawfile . After the $project you only have orderRetrieveId and _id. After the $lookup you now have the fields _id, orderRetreveId and mainsample. 
That is where you should stop your aggregation.”Okay I will try this.Thanks for your help.", "username": "Abhishek_Jain3" }, { "code": "", "text": "Hi @Abhishek_Jain3 , I have the same issue as you were, have you found any solution?", "username": "Anonumose_Jack" } ]
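For readers with the same question: $lookup already behaves as a left outer join, and the unmatched documents disappear because a plain $unwind drops documents whose joined array is empty. A sketch using the field names from the thread (the main collection name is a placeholder), with preserveNullAndEmptyArrays keeping the non-matching entries:

db.main.aggregate([
  { $lookup: {
      from: "sample2",
      localField: "orderRetrieveId",
      foreignField: "orderRetrieveId",
      as: "mainsample"
  } },
  // Keep documents that had no match instead of discarding them:
  { $unwind: { path: "$mainsample", preserveNullAndEmptyArrays: true } },
  { $project: { orderRetrieveId: 1, "mainsample.orderRetrieveId": 1 } }
])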
Join two tables in mongo even if there is no entry in the right table
2022-06-05T08:59:53.719Z
Join two tables in mongo even if there is no entry in the right table
4,254
null
[ "java", "connecting" ]
[ { "code": "021-02-20 16:37:56.238 INFO 19488 --- [169.4.200:30510] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server 192.169.4.200:30510\n\ncom.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message\nat com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:562) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:447) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:298) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:258) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:103) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:60) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:131) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat java.lang.Thread.run(Thread.java:748) [na:1.8.0_282]\nCaused by: java.net.SocketTimeoutException: Read timed out\nat java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.8.0_282]\nat java.net.SocketInputStream.socketRead(SocketInputStream.java:116) ~[na:1.8.0_282]\nat java.net.SocketInputStream.read(SocketInputStream.java:171) ~[na:1.8.0_282]\nat java.net.SocketInputStream.read(SocketInputStream.java:141) ~[na:1.8.0_282]\nat com.mongodb.internal.connection.SocketStream.read(SocketStream.java:109) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:579) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:444) ~[mongodb-driver-core-4.0.5.jar!/:na]\n... 9 common frames omitted\n\n2021-02-20 16:37:58.970 WARN 19488 --- [onPool-worker-4] org.mongodb.driver.connection : Got socket exception on connection [connectionId{localValue:5, serverValue:104}] to 192.169.4.200:30510. 
All connections to 192.169.4.200:30510 will be closed.\n2021-02-20 16:37:58.993 INFO 19488 --- [onPool-worker-4] org.mongodb.driver.connection : Closed connection [connectionId{localValue:5, serverValue:104}] to 192.169.4.200:30510 because there was a socket exception raised by this connection.\n2021-02-20 16:37:58.996 INFO 19488 --- [169.4.200:30510] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server 192.169.4.200:30510\n\ncom.mongodb.MongoSocketOpenException: Exception opening socket\nat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:127) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat java.lang.Thread.run(Thread.java:748) [na:1.8.0_282]\nCaused by: java.net.ConnectException: 拒绝连接 (Connection refused)\nat java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_282]\nat java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_282]\nat java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[na:1.8.0_282]\nat java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_282]\nat java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_282]\nat java.net.Socket.connect(Socket.java:607) ~[na:1.8.0_282]\nat com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:63) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-4.0.5.jar!/:na]\nat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-4.0.5.jar!/:na]\n... 3 common frames omitted\n\n2021-02-20 16:37:59.036 INFO 19488 --- [ main] ConditionEvaluationReportLoggingListener : \n\nError starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.\n2021-02-20 16:37:59.115 ERROR 19488 --- [ main] o.s.boot.SpringApplication : Application run failed\n\njava.lang.IllegalStateException: Failed to execute ApplicationRunner\nat org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:789) ~[spring-boot-2.3.3.RELEASE.jar!/:2.3.3.RELEASE]\nat org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:776) ~[spring-boot-2.3.3.RELEASE.jar!/:2.3.3.RELEASE]\nat org.springframework.boot.SpringApplication.run(SpringApplication.java:322) ~[spring-boot-2.3.3.RELEASE.jar!/:2.3.3.RELEASE]\nat ai.plantdata.graph.excel.ExcelApplication.main(ExcelApplication.java:59) [classes!/:1.4.2]\nat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_282]\nat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_282]\nat sun.refle\n", "text": "加载更多", "username": "jiawei_chen" }, { "code": "", "text": "any solution for this ?", "username": "Harshana_Samaranayak" }, { "code": "", "text": "Hi! Welcome to the forums! So, we would need more clarifying information to help with this. What versions are you using of the Java driver, MongoDB, and Java? What is causing the timeout? Do you have any code to repro steps? And can you be more specific with the question? 
Thank you.", "username": "Karen_Huaulme" }, { "code": "", "text": "Any solution for this yet . I am also facing same issue.Java 11\nspring-boot-starter-data-mongodb : 2.6.8\nmongodb-driver-sync: 4.4.2\nmongodb-driver-core: 4.4.2MongoAtlasSteps to reproduce:\nCreate simple spring boot application , use spring-boot-starter-data-mongodb dependency.\nAdd mongodtlas connection string in application.properties.\nStart application and leave it running for a while.\nYou can see error in logs.", "username": "Prabhat_Kumar2" } ]
Org.mongodb.driver.cluster: Exception in monitor thread while connecting to server 1
2021-02-20T10:58:50.710Z
Org.mongodb.driver.cluster: Exception in monitor thread while connecting to server 1
24,198
https://www.mongodb.com/…4_2_1024x362.png
[ "atlas-device-sync", "monitoring" ]
[ { "code": "", "text": "\nCaptura de Tela 2020-12-17 às 16.50.371595×565 49.7 KB\nThe screenshot shows a period of 0 users on an Atlas Cluster linked with a MongoDB Realm app. As you can see, the second shard shows ~3 commands and ~2 getmores per second with ~60 active connections, and the other two shards maintain the same amount of commands and active connections as the second one, even though they have 0 of any other type of operation. This behavior is constant, the command operation never gets to 0.Is this considered normal behavior for a cluster linked with Realm?\nCaptura de Tela 2020-12-17 às 17.28.501595×565 62.4 KB\nNow, this second image shows a period when some users (less than 3) start using the cluster by a MongoDB Realm Sync .NET client. Should the getmore operation go to ~5.5 per second and stay there constantly? And should the active connections go from 47 to 70 with 3 users connecting?", "username": "Luccas_Clezar" }, { "code": "", "text": "Did you ever gain any insight to this? I’m seeing similar things – a very large number of connections with minimal to no known active use.", "username": "Eve_Ragins" }, { "code": "", "text": "Hi @Eve_Ragins, after I created this topic I did a lot of testing and got in contact with someone from the support team. The answer is not really straightforward.This is what the support said:Connections from a Realm to the Realm servers from a client, including multiple connections are combined in to a single connection, meaning you could have multiple connections from a device/client and that this would not translate into the same number of connections to Atlas.Further new devices connecting to Realm may not result in a new connection being made to Atlas from Realm Servers, it may re-use a connection.\nThe way to look at it would be.Multiple/Single Realms/Devices make a connection to the Realm Servers, these connections are bundled in to a singular connection from Realm Servers to Atlas to service all requests.To minimize the number of concurrent listening clients and open change streams:So the connections count is not 1:1 based on either the number of clients or the number of open Realms. Unused connections can stay active on Atlas and later be reused by another Realm client.A connection is always active for each Atlas collection too, so if you have an Atlas database with 10 synced Realm collections, there will be at least 10 active connections.", "username": "Luccas_Clezar" }, { "code": "", "text": "Thank you for sharing!Unfortunately, that doesn’t seem to align with what I’m seeing…\nI noticed something earlier today about growing connections which I put into this post: Realm does not seem to close mongodb connections on app redeploymentI’ll probably end up opening a support ticket and will share what I learn.", "username": "Eve_Ragins" }, { "code": "", "text": "@Eve_RaginsDo you have any updates about the connections? I’m having something very similar on my App. Every time I run some function on the realm app, I’m observing that it increases the number of connections; a single client is consuming 140 connections.If you have some updates on this, please share, thanks =)", "username": "Loe_Lobo" }, { "code": "", "text": "@Loe_LoboThe response I received on August 9th was:We believe that this issue might be related to changes that were introduced last week in MongoDB Realm. 
We cache MongoDB client connections in a resource pool on each of our servers (to prevent the creation of new connections for every DB request), we made some internal adjustments to how we cache these connections, like when they are evicted, etc. that resulted in additional connections being created unnecessarily.Later on the 12th I received an update:Thank you for your patience. It seems that the latest MongoDB Realm has incorporated the changes required to close idle connections.My connections have dropped from the peak of 700s down to around 178 now. 178 is still extremely high for the number of clients I have (including triggers); but it seems to be holding steady and isn’t a red alert issue for me anymore.Maybe Realm just likes to keep a connection pool of ~150+? I’m pretty new to mongo so I don’t really know how it works or optimizes that – I did see one stack overflow post that implied 1 connection really turns into 4 connections on a standard 3-database replica set. ", "username": "Eve_Ragins" }, { "code": "", "text": "It definitely looks like this is still an issue. One of my realm apps is now consuming 400+ connections, and it keeps increasing with every new deployment via github.\nScreen Shot 2021-08-30 at 14.18.383584×922 242 KB\n", "username": "Adam_Holt" }, { "code": "", "text": "Same issues here…", "username": "ProTrackIT_Support" }, { "code": "", "text": "Hello everybody! I’m very excited to be of assistance!I’m Brock, I am one of the Technical Service Engineers for MongoDB Realm. Please allow me to explain what in particular you’re seeing if I may.Short summary:\nSo simply put, the connections themselves aren’t client connections that you’re seeing. What you’re seeing is a series of redundant connection pathways that are waiting on more and more clients. As you get more clients the connections will begin to even out and disperse to balance the loads of an ever increasing number of clients. (Clients = Users/Devices/Instances of your app)General points:In long summary:\nIf you are using Realm Triggers/functions/Sync, connection count in Atlas connections will increase but that should stabilise after some time(10-15 minutes) of no activity.If you continue to experience any further issues with the connections not being released by the Realm application please let us know Realm app link and we would be happy to look into this further.Realm internally proxies connection by default and keeps them open across multiple invocations so this will definitely be more efficient with connection utilization and may actually be more performant on average (less of a latency hit for opening new connections). Realm generally opens a single connection pool per Realm host and manages connections at the Realm host-level.Further new devices connecting to Realm may not result in a new connection being made to Atlas from Realm Servers, it may re-use a connection.So in another way to explain in short summary, connections count is not directly proportional on either the number of clients or the number of currently open Realms. 
Some unused connections can stay active on Atlas and later be reused by another Realm client.To minimize the number of concurrent listening clients and open change streams:", "username": "Brock_GL" }, { "code": "", "text": "Brock,Thank you for the quick detailed response.I completely understand the architect behind how the connections get made and thanks for clearing that side of it up.This is the part I don’t understand.What you’re seeing is a series of redundant connection pathways that are waiting on more and more clients.Example:Hopefully, this cleared up what’s going on. The additional connections made are understood, so for example, I log in to my Realm app as ONE client, but it actually makes 10 connections. Understood.But over a little bit of time, those connections that aren’t being used OR if the exact same client re-connects, connections should NOT just keep piling up. Either the existing connections that have been created should be used OR those connections should be dropped and new ones created for this instance.As for the below, I’m not using Watch or Sync.To minimize the number of concurrent listening clients and open change streams:", "username": "ProTrackIT_Support" }, { "code": "", "text": "I’m having what seems to be the same issue. I’m relatively new to Realm and MongoDB and still learning the platform. As such, I only have one or two test clients running at a time and the app has only 3 very small collections to sync with.Today, I received numerous email alerts for getting close to the 500 connection limit when I know there hasn’t been a connected client in over 12 hours. Currently it’s showing 400+ connections and I haven’t connected an app at all day. All of this seems like something new since I’ve been playing with Realm for weeks up until without receiving these alerts. At any rate, the # of connections never seems to really go down.", "username": "Shane_Bridges" }, { "code": "", "text": "I’m seeing this issue myself, and have two Realm apps eating up about 450/500 of my connections – at least, that’s what I’m thinking since it seems to hover at that level even with a solid hour of having nothing set in the NACL.@Brock_GL Are there any quick fixes here? Will these connections be released if I delete my Realm applications?", "username": "randytarampi" }, { "code": "", "text": "Same issue here and I reported it to support who opened an internal ticket. What I did in the meantime was to create a new Cluster and restore the most recent database snapshot back to the new cluster. I then switched my Realm app to the new cluster. I still get a jump of 50+ connections every time I do a deployment but it will keep me going until the problem gets solved properly.That was all fine on my dev environment but I can see the same problem building up on my prod environment where I’ve temporarily stopped doing deployments for fear of pushing the system over the connection limit and bringing the website down.", "username": "ConstantSphere" }, { "code": "", "text": "I deleted the application, waited 10 minutes, the connections were still there. The temporary solution was to do as @ConstantSphere said: a new cluster and application with limited operations on it… I hope it is an issue from server side, because I’m feeling hopeless…", "username": "LRsoft" }, { "code": "", "text": "Besides, the operation count are also non zero when no users are connected. Somehow, it seems each user opens a number of connections that are kept active and asking for data each second. 
Maybe that’s why the connections are not being released (just a guess).", "username": "LRsoft" }, { "code": "", "text": "I see this morning that Enable Automatic Deployments has been disabled on my Realm app both in development and production and I can’t switch it back on. I tried updating a function via the UI and deploying the changes (in dev) and I still got a big jump in connections (despite it not linking back to GitHub).", "username": "ConstantSphere" }, { "code": "", "text": "About 15 hours ago the number connections on my dev environment fell significantly. I did a deployment to it earlier and it jumped up again so I’m not sure if the issue is completely fixed but I’ve now got head room on my production system to resume deployments again. ", "username": "ConstantSphere" }, { "code": "", "text": "I just did another deployment to my dev environment and unfortunately the number of connections went up again so I don’t think the issue is completely resolved. Perhaps they’ll all get closed down again tonight?", "username": "ConstantSphere" }, { "code": "", "text": "Hello, I am also experiencing the same issue as everyone else. I have made a realm application before about a year ago and did not encounter anything like this before. The connections graph would drop down once a user had closed the application (once you see the realm disconnected message in the console output).2021-09-19 09:56:16.234516-0400 APP_NAME[10099:675941] Sync: Connection[2]: DisconnectedIn the application I am currently developing I am using more functions and triggers which could be the cause of this problem. Maybe when you open a MongoDB Function/Trigger, the connection from there does not get closed and stays active? The reason I say this is because I was just using the MongoDB Realm Functions page for editing and testing a single function and my connections skyrocketed! Its like the function was creating connections that never get disconnected.\nScreen Shot 2021-09-19 at 11.39.54 AM2452×988 188 KB\nYou can see the connections at the beginning of the project are super low because I was not using any functions or triggers at the time and the only connection was the realm connection coming from the device (which got disconnected after use). But once I started to implement functions and triggers, the connections went through the roof. Im assuming they go down after a day or two, but a day or two just to disconnect the connection from a function/trigger (I may be completely mistaken on how that works) ???As I am just in development mode, is there a way to disconnect all connections from the database?", "username": "seby_gadz" }, { "code": "", "text": "I deleted a single trigger, and POOF, ALL CONNECTIONS disappeared! So that must means the trigger thats calls the function is not letting go of the connection towards the atlas cluster. So I would suggest people having this problem to ensure they are closing the connection to the database or collections in there functions because apparently the MongoDB function does not do that…", "username": "seby_gadz" } ]
High number of connections and opcounters without anyone using the cluster
2020-12-17T21:29:58.699Z
High number of connections and opcounters without anyone using the cluster
26,504
null
[ "aggregation", "queries", "indexes" ]
[ { "code": "user$match", "text": "My app has a user collection that has 100 fields plus another field that is an array of subdocuments. Each subdocument will again have 100 fields. Each user has up to 100 of these subdocuments.Each time a user logs in, a query must be run against this collection, where in the $match stage, potentially 200 filter conditions may be specified to query the 200 fields of each document (including the fields of the subdocument). It is not predictable what combination of these conditions will be used on each query.All fields are combination of text, number, and boolean.", "username": "Big_Cat_Public_Safety_Act" }, { "code": "", "text": "This looks more or less like", "username": "steevej" }, { "code": "", "text": "These kinds of questions are impossible to answer. What you’re describing is far too vague and has a huge amount of unpredictable variables.Ultimately, I would say, anything is possible, but as an app grows and user numbers grow, you will always have to adapt. Twitter, Facebook, etc. were not built from the start to handle the amount of data they handle today.Do some testing with fake data to see how long your queries take to run on one type of instance. Experiment with different indexes. Try different instance types (e.g. on MongoDB Atlas). Think about when do these queries really have to run. Consider running long queries ahead of time (e.g. daily) and cache the result so it is instantly available when the user logs in.I’d say, if your data schema fits well into MongoDB, go for it. A great advantage is that you can make changes to your schema easily at any time.", "username": "Nick" }, { "code": "", "text": "Hi @Big_Cat_Public_Safety_Act,As noted in earlier replies, this seems related to some of your other discussion topics although you have extra questions here.Scalability and feasibility will depend on many factors including your schema design, application design, indexes, deployment resources, workload, performance expectations, and funding. The best way to estimate would be generating some data and workload in a representative test environment.There are different dimensions to scaling (performance scale, cluster scale, data scale) and you can see some examples at Do Things Big with MongoDB at Scale.As @Nick notes, Twitter and Facebook weren’t built from the start to handle the user base they have today. Both have evolved into very large application platforms and companies with 1000s of engineers and millions or billions of users.As per #1, any estimate is going to depend on many factors and this question isn’t directly answerable. The estimated number of users will also vary depending on what those users are doing, and when. 
An application with 10,000 daily users distributed globally could mean anywhere from 10s to 100s or 1000s of concurrent users depending on session durations, time zones, and how they interact with your app.I recommend reviewing the MongoDB Schema Design Patterns to see which might apply to your application and use cases.For example, the Attribute Pattern would be helpful for the variety of fields you are planning, including unpredictable field names.If you have more ambitious search requirements, Atlas Search has a rich set of search features and operators.If you are looking to optimise some specific use cases, I suggest starting a discussion with more concrete details including example documents with your proposed schema, common queries, and any concerns or findings you have so far.Regards,\nStennie", "username": "Stennie_X" } ]
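A brief sketch of the Attribute Pattern mentioned above, which is one way to keep many ad-hoc filterable fields indexable with a single multikey index; the field names and values here are invented for illustration:

// Store the variable fields as key/value pairs:
db.users.insertOne({
  _id: 1,
  attributes: [
    { k: "age", v: 34 },
    { k: "verified", v: true },
    { k: "city", v: "Austin" }
  ]
})

// One compound multikey index serves queries on any attribute:
db.users.createIndex({ "attributes.k": 1, "attributes.v": 1 })

// Combining two of the unpredictable filter conditions:
db.users.find({
  attributes: {
    $all: [
      { $elemMatch: { k: "verified", v: true } },
      { $elemMatch: { k: "age", v: { $gt: 30 } } }
    ]
  }
})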
At how many filter conditions in the $match stage will a query become unfeasible?
2022-12-23T19:43:47.061Z
At how many filter conditions in the $match stage will a query become unfeasible?
1,740
null
[ "queries", "indexes" ]
[ { "code": "{\n field_1: \"string\" // can only have the value of \"A\" or \"B\",\n field_2: \"numeric\",\n}\n{\n field_1: 1,\n field_2: 1\n}\n\ndb.col.find( { field_2: { $gt: 100 } } )\nfield_1db.col.find( { field_1: { $in: [\"A\", \"B\"] }, field_2: { $gt: 100 } } )\n", "text": "The above is the schema for my collection.The following compound index exists:The query in question is below:This query skips the prefix field_1. Hence MongoDB does not use the compound index.So in order to get it to use the compound index, I change the query to this:", "username": "Big_Cat_Public_Safety_Act" }, { "code": "explain()field_1field_2field_2", "text": "Hi,Yes, since your query includes all fields (or a prefix) of the compound index.You can verify index selection using the query explain() feature.Would there be any performance benefits either way?Your compound index doesn’t get selected in the first query because all values of field_1 need to be scanned, which will likely have a high ratio of index keys read compared to results returned.Maintaining and scanning extra index keys will have negative performance impact, but the observed outcome will depend on your environment and resources. There’s some more background in the Unnecessary Indexes schema design anti-pattern.If you sometimes need to query on both fields but mostly query on field_2, I recommend reversing the order of fields in your compound index so your first query will match the index prefix. Alternatively you could create an index on just field_2.Regards,\nStennie", "username": "Stennie_X" }, { "code": "Your compound index doesn’t get selected in the first query because all values of field_1 need to be scanned, which will likely have a high ratio of index keys read compared to results returned.\nfield_1field_1field_1", "text": "I am confused about the above statement.", "username": "Big_Cat_Public_Safety_Act" } ]
Compound index - skipping a prefix vs selecting all values of a prefix
2022-12-28T03:51:30.734Z
Compound index - skipping a prefix vs selecting all values of a prefix
1,432
https://www.mongodb.com/…33152545f5c9.png
[]
[ { "code": "", "text": "Upper Image, My Collection data.I want to change info array data update.\nBut not all data in info array,\njust data of uid : 5 wants to change (count, level, type).How to update array value?I try to update but only one change data in array.", "username": "DEV_JUNGLE" }, { "code": "$$[<identifier>]", "text": "Hi,The positional $ operator will update only the first matching element from an array,You can use $[<identifier>] filtered positional operator to update multiple matching elements in an array,", "username": "turivishal" }, { "code": "", "text": "Tanks, I solved too.very complicated.", "username": "DEV_JUNGLE" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to update array in schema?
2022-12-28T04:51:20.584Z
How to update array in schema?
932
https://www.mongodb.com/…3689fca6f57b.png
[ "next-js" ]
[ { "code": "npx create-next-app --example with-mongodb mflix\ncd mflix\nnpm run build\n", "text": "I’ve just been going through the article \" How to Integrate MongoDB Into Your Next.js App\". Everything worked OK when I used ‘npm run dev’. But when I got near the end where it says to use ‘npm run build’, it gave this error:Then I deleted the mflix directory and ran only these commands:But the same thing happened again.\nI’m using Node v17.0.0 on Windows 10.", "username": "Denis_Vulinovich1" }, { "code": "export async function getServerSideProps(context: any)", "text": "It works on Ubuntu using Node v16.15.1, as long as I change (context) to (context: any):export async function getServerSideProps(context: any)I originally tried that on Windows, but it didn’t work. So maybe it’s a problem with TypeScript on Windows?", "username": "Denis_Vulinovich1" } ]
Integrate MongoDB Into Your Next.js App - build error
2022-12-22T08:58:32.344Z
Integrate MongoDB Into Your Next.js App - build error
1,529
null
[ "aggregation" ]
[ { "code": "", "text": "I just logged back into MongoDB University after a 60 days and it appears all my progress in “Introduction to MongoDB” is erased. I only have a button to “Register” for the course which I am now afraid to click if there is any hope of getting my previous progress back (I had completed all the way through and including aggregation). Is there anyway to get my progress back and would clicking the “Register” button make that harder or easier to resurrect?", "username": "MichaelB" }, { "code": "", "text": "please make sure you are “logged in”. this following link to the course shows, for me, “Register now” button on a private browsing tab, and “Continue” button when logged in.Introduction to MongoDB Course | MongoDB Universityyour browser might also have a fat cache, so try also clearing the browser itself. try not to clear your passwords and other important data, only the cache. you may try this by opening developer tools (menu or F12 or right-click and inspect) then right-clicking on the “refresh button” to get “hard reset” and “empty cache and hard reset” options.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "We have recently moved to a new LMS at learn.mongodb.com. Where you will find all of our courses, plus new ones. Transcripts have been transferred, and you can use your same login, however, if courses were not complete prior to December 2nd they were not transferred. Take a moment to explore our new LMS and be on the lookout for new courses coming soon.", "username": "Grace_Filkins1" }, { "code": "", "text": "This topic was automatically closed after 60 days. New replies are no longer allowed.", "username": "system" } ]
University Progress Erased?
2022-12-27T22:36:21.571Z
University Progress Erased?
1,464
null
[]
[ { "code": "", "text": "I just completed M001: Introduction to MongoDB and have already started the Data modelling course. However, I a concerned that I might end up forgetting what I learnt in the Introduction to MongoDB.How do you recommend I practice what I learnt in the course? Are their sites that allow practising using MongoDB by solving questions and writing queries for MongoDB?I’d be happy to receive any suggestions on this matter.\nThanks in advance", "username": "Collins_Kariuki" }, { "code": "", "text": "Hey @Collins_Kariuki,There are plenty of ways to increase your knowledge and practice what you learned in the course. One of the ways, of course, is to try to do a personal project using MongoDB. Our Developer Center has a wide range of tutorials, videos and code examples that you can go through and build something with MongoDB.Additionally, we also have a MongoDB Bytes section in forums that you can go through to practice basic and advanced MQL, Aggregations, and other concepts through articles and quizzes. Linking a few quizzes for your reference from the MongoDB Bytes that you can go through and practice:\nPractice Basic MQL\nAggregation ChallengeThere are other resources too that you can refer to increase your knowledge of MongoDB like reading the official documentation, or trying to complete one of the developer paths in our MongoDB University. They contain labs and quizzes that you can try to further increase your knowledge.Please let us know if there is anything else you need suggestions or help with. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "ObjectId(\"someid\"){\"$oid\":\"someid\"}", "text": "There are a vast amounts of resources on MongoDB alone, so take your time to absorb them, starting with what @Satyam gave.meanwhile, there is a pretty nice website where you can try ideas fast and also share:Mongo playgroundcreating small datasets can become tedious on real servers and client applications (let alone setting them), and this site helps on trying many aspects quickly. you just need to be a bit familiar with JSON (everything except numbers and true/false is quoted) and eJSON/BSON (extended types are mainly functions but have also string representations, ObjectId(\"someid\") and {\"$oid\":\"someid\"} ).", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Practice working with MongoDB
2022-12-20T03:53:54.213Z
Practice working with MongoDB
4,356
null
[ "aggregation", "queries", "swift", "atlas-functions", "graphql" ]
[ { "code": "", "text": "In multiple places of our app there are complex tasks, such as big aggregation pipelines gathering data with $lookup (using GraphQL), processing this data e.g. by calculating something and writing to the database again and more. At the moment most of this work is done on the client, sometimes sending multiple network requests to the server (read with aggregation pipeline, …, write to the server, …)I realised this could also be implemented using the server functions that are called by the client, and receiving on the client only the final result through a callback for example.Now I am wondering what method is advised? I imagine there is a difference in performance, price, …? (due to calculation times, amount of requests, …)\nI also read that it is advised to let the client do as much work as possible, since it is “free” in terms of server load. Our app will have potentially 1000s of users, making these requests and calculations simultaneously…P.S. In one case the client writes something to the DB, then we have a trigger performing some function. By instead directly calling a function from the client, writing within that function to the DB, we would have the benefit of receiving a callback once the function has finished running, which is not the case when using a trigger…\nP.P.S. We are developing natively for iOS (Swift)", "username": "David_Kessler" }, { "code": "", "text": "This is a good question and the answer will really depend on the overall app design, dataset size, what type of tasks they are and about 10 other variables.In one aspect, the more you do on the client, the lesser the cost. It also may be ‘faster’ since the resulting data is local and more convenient as results can also be processed while offline.On the other hand, servers are really good at processing massive datasets - for example suppose there are 100 million objects and you want some subset or some single result. Well, storing that much data on the device is probably a bad idea; storing it on the server however is ‘easy’ and processing through all of that on the server offloads that task from the device.So - great question but probably unaswerable without really digging deeply into the use case.", "username": "Jay" } ]
Using server function or doing work on client
2022-12-27T11:43:07.837Z
Using server function or doing work on client
1,357
https://www.mongodb.com/…2_2_1024x173.png
[ "containers" ]
[ { "code": "{\n system: {\n currentTime: 2022-11-28T10:23:25.772Z,\n hostname: 'mongo',\n cpuAddrSize: 64,\n memSizeMB: Long(\"31304\"),\n memLimitMB: Long(\"31304\"),\n numCores: 8,\n cpuArch: 'x86_64',\n numaEnabled: false\n }\n", "text": "I use MongoDB v5.0.3 via docker on a single AWS instance. The container kept crashing on long queries exactly after hitting 100% memory.\nThen I noticed this:\nThis is the docker stat. You can notice that the memory is limited to 4 GB.\n\nimage1231×208 17.2 KB\nThis is the MongoDB host info:You can see that the memLimitMB is set to 31 GB.I rebuilt the container several times, but it still has the same issue.What could be the reason?", "username": "Maulin_Tolia" }, { "code": "", "text": "See my answer inHello guys, I have a MongoDB Operator deployed in HA, and I'm facing this proble…m that even limiting the amount of memory that mongod must use, this limit is not being respected, which ends up taking a high memory consumption and consequently the pod is dropped.\n\nThe node itself has 32gb of ram and we are reserving 30gb for the mongod container and limiting consumption to 23GB\n\nI have here some information from Grafana, showing the consumption of the container before a failure.\n\n![memory](https://user-images.githubusercontent.com/20954739/161768821-f190f044-8fe2-432f-ac6d-c047db488f42.png)\n\nJust like the mongodb CRD yaml.\n\n```yaml\napiVersion: v1\nitems:\n- apiVersion: mongodbcommunity.mongodb.com/v1\n kind: MongoDBCommunity\n metadata:\n annotations:\n mongodb.com/v1.lastAppliedMongoDBVersion: 5.0.6\n generation: 9\n name: prd-mongodb\n namespace: shared-database\n spec:\n additionalMongodConfig:\n storage.wiredTiger.engineConfig.cacheSizeGB: 23\n storage.wiredTiger.engineConfig.journalCompressor: zlib\n members: 3\n security:\n authentication:\n modes:\n - SCRAM\n statefulSet:\n spec:\n template:\n spec:\n containers:\n - env:\n - name: MANAGED_SECURITY_CONTEX\n value: \"true\"\n name: mongod\n resources:\n limits:\n cpu: \"3.5\"\n memory: 30Gi\n requests:\n cpu: \"2\"\n memory: 14Gi\n - env:\n - name: MANAGED_SECURITY_CONTEX\n value: \"true\"\n name: mongodb-agent\n resources:\n limits:\n cpu: \"0.5\"\n memory: 512M\n requests:\n cpu: \"0.2\"\n memory: 200M\n volumeClaimTemplates:\n - metadata:\n name: data-volume\n spec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 50Gi\n storageClassName: managed-premium\n - metadata:\n name: logs-volume\n spec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi\n storageClassName: managed-premium\n type: ReplicaSet\n users:\n - db: admin\n name: prd-mongodb-user\n passwordSecretRef:\n name: prd-mongodb-user-password\n roles:\n - db: admin\n name: clusterAdmin\n - db: admin\n name: userAdminAnyDatabase\n - db: admin\n name: root\n scramCredentialsSecretName: my-scram\n version: 5.0.6\n status:\n currentMongoDBMembers: 3\n currentStatefulSetReplicas: 3\n mongoUri: <HIDE>\n phase: Running\nkind: List\nmetadata:\n resourceVersion: \"\"\n selfLink: \"\"\n```\nThank you so much everyone for your attention.", "username": "John_Moser1" } ]
MongoDB does not respect docker compose limits
2022-11-28T11:50:38.157Z
MongoDB does not respect docker compose limits
2,210
null
[ "aggregation" ]
[ { "code": "or the same I had try that\n", "text": "Retrieve by RevisionNum but return all array element.{\n_id: ObjectId(‘613f16eda156d84fd428510f’),\nTemplates: [\n{\nRevisionNum: “210719”,\nEffectiveDate: 2021-07-19T17:43:06.693+00:00,\nHardwareVer: “A”,\nSoftwareVer: “1.0”,\nWorkTasks: [ “ObjectId(‘60f5b97d60762e6a10002820’)”, “ObjectId(‘60f5b98760762e6a10002822’)” ],\nHasTraveller: true\n},\n{\nRevisionNum: “220104”,\nEffectiveDate: 2022-04-01T17:43:06.693+00:00, HardwareVer: “B”,\nSoftwareVer: “1.5”,\nWorkTasks: [ “ObjectId(‘60f5b97d60762e6a10002820’)” ],\nHasTraveller: false\n}\n}\nI use the query methodMainAssyTemplate.aggregate([\n{\n$match: {\n// _id: req.query.productId,\n“Templates.RevisionNum”: “220104”,\n},\n},MainAssyTemplate.aggregate([\n{\n$match: {\n// _id: req.query.productId,\nTemplates: { $elemMatch: { RevisionNum: “220104” } },\n},\n},How can retrieve another way to get one templates element collection.?", "username": "Min_Thein_Win" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and update your sample documents so that we can cut-n-paste into our system.This is the third time I write you the above. Help us help you by providing your document in a usable form. Editing documents that are badly formatted is time consuming. It is easier to help others that have well formatted documents so we answer them faster. It is really really easy for you to supply documents that are easy to cut-n-paste into our systems.", "username": "steevej" }, { "code": "", "text": "Off-topic: Maybe there should be more users allowed to edit posts, or even some of the staff.Also most of these users can’t really un-notice the markdown issues, (I mean, if you can write 2 lines of code…) so I’d put the posts “on hold” until they are ready to be answered.May be a topic worth discussing at a higher level, by moderators or employees.( Have a nice 24th! )", "username": "santimir" }, { "code": "", "text": "Maybe there should be more users allowed to edit posts, or even some of the staff.NO. People have to learn how to use their tools otherwise they keep repeating the same pattern. Just like @Min_Thein_Win just did in Retrieve data by date and time inside of array element.Doing it for them will just feed them for the day.", "username": "steevej" }, { "code": "", "text": "True. And some kind of putting the post on hold and alert the user ?I see that devs at SO or other pop forums dont have much of a better solution…", "username": "santimir" } ]
Aggregate - return the whole array if query match one element in the array
2022-12-24T15:56:37.020Z
Aggregate - return the whole array if query match one element in the array
2,555
https://www.mongodb.com/…01d86a78afee.png
[ "aggregation" ]
[ { "code": " {\n $group: {\n _id: {\n item: \"$item\",\n status: \"$status\"\n },\n itemCount: {\n $sum: 1\n }\n }\n }\n])\n\ni got this result \n\n[\n {\n \"_id\": {\n \"item\": \"I1\",\n \"status\": \"active\"\n },\n \"itemCount\": 1\n },\n {\n \"_id\": {\n \"item\": \"I5\",\n \"status\": \"disable\"\n },\n \"itemCount\": 2\n },\n {\n \"_id\": {\n \"item\": \"I4\",\n \"status\": \"unactive\"\n },\n \"itemCount\": 1\n },\n {\n \"_id\": {\n \"item\": \"I2\",\n \"status\": \"active\"\n },\n \"itemCount\": 1\n },\n {\n \"_id\": {\n \"item\": \"I4\",\n \"status\": \"active\"\n },\n \"itemCount\": 1\n },\n {\n \"_id\": {\n \"item\": \"I3\",\n \"status\": \"active\"\n },\n \"itemCount\": 1\n },\n {\n \"_id\": {\n \"item\": \"I2\",\n \"status\": \"un-active\"\n },\n \"itemCount\": 1\n },\n {\n \"_id\": {\n \"item\": \"I2\",\n \"status\": \"disable\"\n },\n \"itemCount\": 1\n }\n]```\n\nwhich is fine but i want it to be in different way like this ..\n \n {\n \"_id\": {\n \"item\": \"I1\",\n \"active\": {\"itemCount\": 1},\n \"disable\":{\"itemCount:3\"},\n \"un-active\":{\"item:4\"},\n \"total\":{itemCount:8}\n },\n },\n {\n \"_id\": {\n \"item\": \"I2\",\n \"active\": {\"itemCount\": 1},\n \"disable\":{\"itemCount:3\"},\n \"un-active\":{\"item:4\"},\n \"total\":{itemCount:8}\n },\n },\n ]```\n", "text": "i have a document structure like thisand i tried this queryKindly help …", "username": "M.Mujeeb_Alam" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and publish your sample document in textual JSON so that we can cut-n-paste them into our system.", "username": "steevej" }, { "code": "", "text": "But for starters, you may play with this code in the link belowMongo playground: a simple sandbox to test and share MongoDB queries online", "username": "santimir" } ]
Group by and count
2022-12-27T07:03:43.471Z
Group by and count
2,094
null
[ "queries", "node-js" ]
[ { "code": "", "text": "hello every one ,first time here.\ni was working on FCC project and i needed to use mongodb atlas for the database and when i try to submit a form data to the DB i was having the error on the console:-\nFailed to load resource: the server responded with a status of 500 (Internal Server Error).\ncan any body tell me what is happening and how this issue arise?how to fix it?thanks in advance. ", "username": "ayne_abreham" }, { "code": "", "text": "Hello @ayne_abreham, Welcome to the MongoDB community forum Can you please share more details, about how you implemented the code or anything that we can debug the error?", "username": "turivishal" } ]
Internal Server Error
2022-12-25T22:51:43.871Z
Internal Server Error
1,339
null
[ "aggregation", "crud" ]
[ { "code": "db.profiles.updateMany(\n { $or : [ { \"contacts.phone_number\":{$exists : 1} },{ \"contacts.address\":{$exists : 1} },{ \"contacts.email\":{$exists : 1} } ] },\n { $set: { \n \"contacts.phone_number.$[element].type\" :\"Work\",\n \"contacts.address.$[element].type\" :\"Work\",\n \"contacts.email.$[element].type\" :\"Work\",\n \"contacts.phone_number.$[element].is_enabled\" : 1,\n \"contacts.address.$[element].is_enabled\" : 1,\n \"contacts.email.$[element].is_enabled\" : 1,\n }\n }, \n {arrayFilters: [ { \"element.type\": {$exists : 0} ,\"element.is_enabled\": {$exists : 0} }] }\n)\n", "text": "Hi i was trying to update some data in a nested array and for multiple documents in a collection.\nbut i have some issue with updating multiple fields by using the following scriptcan anyone suggest me a better way to do this ?", "username": "Idayachelvan_Balakrishnan" }, { "code": "", "text": "Hello @Idayachelvan_Balakrishnan, Welcome to the MongoDB community forum i have some issue with updating multiple fields by using the following scriptCould you please share some example documents and explain a bit what is the exact issue you are facing?", "username": "turivishal" } ]
Update nested array for multiple documents in a collection
2022-12-27T04:02:46.923Z
Update nested array for multiple documents in a collection
1,082
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "{\n ...\n department: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Department\",\n required: true,\n },\n category: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Category\",\n required: true,\n },\n variations: [\n {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Variation\",\n },\n ],\n ...\n}\n{\n color: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Color\",\n required: true,\n },\n size: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Size\",\n required: true,\n },\n ...\n}\nfind(){ variations: { '$elemMatch': { color: '639b75a799cf2ed9eb2802ec' } } }{ \"variations.color\": \"639b75a799cf2ed9eb2802ec\" }", "text": "Product Model:Variation Model:I tried to find() using mongodb with both of the filters below:{ variations: { '$elemMatch': { color: '639b75a799cf2ed9eb2802ec' } } }\n{ \"variations.color\": \"639b75a799cf2ed9eb2802ec\" }neither worked. any idea?", "username": "Emile_Ibrahim" }, { "code": "", "text": "Because your types are ObjectId and‘639b75a799cf2ed9eb2802ec’is a string. Types and values must match.", "username": "steevej" }, { "code": "mongoose.Types.ObjectId(\"639b75a799cf2ed9eb2802ec\")", "text": "mongoose.Types.ObjectId(\"639b75a799cf2ed9eb2802ec\") didn’t work either", "username": "Emile_Ibrahim" }, { "code": "", "text": "Please share sample documents from both collection.Share exactly the code that you have tried.", "username": "steevej" }, { "code": "{\n \"_id\": {\n \"$oid\": \"63a48b89cd827b16f31ac9b2\"\n },\n \"name\": \"Ali's Boxers\",\n \"description\": \"Boxers made specifically for ALi\",\n \"department\": {\n \"$oid\": \"6398fbf11fc0f2835e898aa8\"\n },\n \"category\": {\n \"$oid\": \"6398fbfd1fc0f2835e898aae\"\n },\n \"variations\": [\n {\n \"$oid\": \"63a48b89cd827b16f31ac9b9\"\n },\n {\n \"$oid\": \"63a48b8acd827b16f31ac9c1\"\n }\n ],\n \"price\": 50,\n \"discount\": 0,\n \"identifier\": \"A\",\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1671728010297\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1671728010297\"\n }\n },\n \"__v\": 0\n}\n63a48b89cd827b16f31ac9b9{\n \"_id\": {\n \"$oid\": \"63a48b89cd827b16f31ac9b9\"\n },\n \"color\": {\n \"$oid\": \"6398fbdb1fc0f2835e898a91\"\n },\n \"size\": {\n \"$oid\": \"6398fbe61fc0f2835e898aa1\"\n },\n \"stock\": [\n {\n \"$oid\": \"63a48b89cd827b16f31ac9b5\"\n }\n ],\n \"images\": {\n \"$oid\": \"63a48b89cd827b16f31ac9b3\"\n },\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1671728009397\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1671728009397\"\n }\n },\n \"__v\": 0\n}\n6398fbdb1fc0f2835e898a91{\n \"_id\": {\n \"$oid\": \"6398fbdb1fc0f2835e898a91\"\n },\n \"name\": \"red\",\n \"code\": \"#d44a4a\",\n \"skuCode\": \"001\",\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1670970331140\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1671006396023\"\n }\n },\n \"__v\": 0\n}\nexports.getAll = async (req, res) => {\n try {\n const { offset, limit } = req.query;\n const offsetInt = parseInt(offset) || 0;\n const limitInt = parseInt(limit) || 15;\n\n const queryObj = buildProductQuery(req.query);\n\n const products = await Product.find(queryObj)\n .limit(limitInt)\n .skip(offsetInt)\n .populate(populateQuery);\n\n const totalProductsCount = await Product.countDocuments(queryObj);\n const totalPages = Math.ceil(totalProductsCount / limitInt);\n const currentPage = Math.ceil(offsetInt / limitInt) + 1;\n\n return Response.success(res, {\n products,\n pagination: {\n total: totalProductsCount,\n pages: 
totalPages,\n current: currentPage,\n },\n });\n } catch (err) {\n return Response.serverError(res, err.message);\n }\n};\n\nconst buildProductQuery = (query) => {\n const { department, category, color, size } = query;\n\n const queryObj = {\n variations: { $ne: [] },\n\n ...(department && { department }),\n ...(category && { category }),\n\n // I tried 2 ways of filtering, you can uncomment the below 2 lines and comment the next block to switch between the 2 (neither worked)\n // I have also tried casting to `mongoose.Types.ObjectId` on both options.\n\n // ...(color && { \"variations.color\": color }),\n // ...(size && { \"variations.size\": size }),\n\n ...((color || size) && {\n variations: {\n $elemMatch: {\n ...(color && { color: mongoose.Types.ObjectId(color) }),\n ...(size && { size: mongoose.Types.ObjectId(size) }),\n },\n },\n }),\n };\n\n console.log(queryObj);\n return queryObj;\n};\n", "text": "Product Document:Variation with id 63a48b89cd827b16f31ac9b9 document:Color with id 6398fbdb1fc0f2835e898a91 document:My API:", "username": "Emile_Ibrahim" }, { "code": "", "text": "I have just realized that you are using mongoose. I guess I do not really look at the post tags.Sorry, but I do not know how mongoose work. Hopefully someone with knowledge in this area will pick up.", "username": "steevej" }, { "code": "Product.findOne({})populateQueryProduct.findOne({}).populate(populateQuery)", "text": "Hi @Emile_Ibrahim , There is no problem with how you convert ObjectId. The problem is probably about how you use them throughout your document.It is possible documents are not populated. can you give output for Product.findOne({})?also, what is populateQuery and the result for Product.findOne({}).populate(populateQuery)?", "username": "Yilmaz_Durmaz" }, { "code": "const populateQuery = [\n {\n path: \"department\",\n model: Department,\n },\n {\n path: \"category\",\n model: Category,\n },\n {\n path: \"variations\",\n model: Variation,\n populate: [\n {\n path: \"color\",\n model: Color,\n },\n {\n path: \"size\",\n model: Size,\n },\n {\n path: \"stock\",\n model: Stock,\n populate: [\n {\n path: \"materials.material\",\n model: Material,\n },\n {\n path: \"labourCost\",\n model: LabourCost,\n },\n ],\n },\n {\n path: \"images\",\n model: ProductImages,\n populate: [\n {\n path: \"images\",\n model: \"Image\",\n },\n ],\n },\n ],\n },\n];\n{\n \"_id\":\"63a5bcc57433d072405cde86\",\n \"name\":\"Ali's Boxers\",\n \"description\":\"Boxers made specifically for ALi\",\n \"department\":\"6398fbf11fc0f2835e898aa8\",\n \"category\":\"6398fbfd1fc0f2835e898aae\",\n \"variations\":[\n \"63a5bcc57433d072405cde8d\",\n \"63a5bcc67433d072405cde95\"\n ],\n \"price\":50,\n \"discount\":0,\n \"identifier\":\"A\",\n \"createdAt\":\"2022-12-23T14:35:50.556Z\",\n \"updatedAt\":\"2022-12-23T14:35:50.556Z\",\n \"__v\":0\n}\n{\n \"_id\":\"63a5bcc57433d072405cde86\",\n \"name\":\"Ali's Boxers\",\n \"description\":\"Boxers made specifically for ALi\",\n \"department\":{\n \"_id\":\"6398fbf11fc0f2835e898aa8\",\n \"name\":\"male\",\n \"description\":\"male\",\n \"skuCode\":\"M\",\n \"createdAt\":\"2022-12-13T22:25:53.549Z\",\n \"updatedAt\":\"2022-12-13T22:25:53.549Z\",\n \"__v\":0\n },\n \"category\":{\n \"_id\":\"6398fbfd1fc0f2835e898aae\",\n \"name\":\"main\",\n \"title\":\"main\",\n \"description\":\"main\",\n \"skuCode\":\"M\",\n \"parent\":null,\n \"createdAt\":\"2022-12-13T22:26:05.996Z\",\n \"updatedAt\":\"2022-12-13T22:26:05.996Z\",\n \"__v\":0\n },\n \"variations\":[\n {\n 
\"_id\":\"63a5bcc57433d072405cde8d\",\n \"color\":{\n \"_id\":\"6398fbdb1fc0f2835e898a91\",\n \"name\":\"red\",\n \"code\":\"#d44a4a\",\n \"skuCode\":\"001\",\n \"createdAt\":\"2022-12-13T22:25:31.140Z\",\n \"updatedAt\":\"2022-12-14T08:26:36.023Z\",\n \"__v\":0\n },\n \"size\":{\n \"_id\":\"6398fbe61fc0f2835e898aa1\",\n \"name\":\"small\",\n \"skuCode\":\"Y\",\n \"createdAt\":\"2022-12-13T22:25:42.795Z\",\n \"updatedAt\":\"2022-12-13T22:25:42.795Z\",\n \"__v\":0\n },\n \"stock\":[\n {\n \"_id\":\"63a5bcc57433d072405cde89\",\n \"available\":2,\n \"reserved\":0,\n \"store\":0,\n \"labourCost\":{\n \"_id\":\"63a1deb1d5890e2103410bf4\",\n \"title\":\"Ali Boxers Maker\",\n \"description\":\"Ali needs his boxers hand made\",\n \"amount\":20,\n \"createdAt\":\"2022-12-20T16:11:29.690Z\",\n \"updatedAt\":\"2022-12-22T17:06:37.667Z\",\n \"__v\":0\n },\n \"materials\":[\n {\n \"amount\":1,\n \"material\":null,\n \"_id\":\"63a5bcc57433d072405cde8a\"\n },\n {\n \"amount\":1,\n \"material\":{\n \"_id\":\"639a251596ae4b5c1af788c0\",\n \"name\":\"mamam\",\n \"description\":\"2121\",\n \"stock\":121195,\n \"supplier\":\"test\",\n \"costPerUnit\":1,\n \"fabric\":\"polyester\",\n \"createdAt\":\"2022-12-14T19:33:41.047Z\",\n \"updatedAt\":\"2022-12-25T01:40:56.383Z\",\n \"__v\":0\n },\n \"_id\":\"63a5bcc57433d072405cde8b\"\n }\n ],\n \"createdAt\":\"2022-12-23T14:35:49.735Z\",\n \"updatedAt\":\"2022-12-23T14:58:06.156Z\",\n \"__v\":0\n }\n ],\n \"images\":{\n \"_id\":\"63a5bcc57433d072405cde87\",\n \"product\":\"63a5bcc57433d072405cde86\",\n \"color\":\"6398fbdb1fc0f2835e898a91\",\n \"images\":[\n {\n \"_id\":\"6375c3d97d5dc191a77ca7ac\",\n \"name\":\"boutiqueb-hahhaha-1668662233138.jpeg\",\n \"displayName\":\"hahhaha\",\n \"createdAt\":\"2022-11-17T05:17:13.159Z\",\n \"updatedAt\":\"2022-11-17T05:17:13.159Z\",\n \"__v\":0\n },\n {\n \"_id\":\"6375c40d7d5dc191a77ca7c2\",\n \"name\":\"boutiqueb-paypal-1668662285552.jpeg\",\n \"displayName\":\"paypal\",\n \"createdAt\":\"2022-11-17T05:18:05.570Z\",\n \"updatedAt\":\"2022-11-17T05:18:05.570Z\",\n \"__v\":0\n }\n ],\n \"createdAt\":\"2022-12-23T14:35:49.561Z\",\n \"updatedAt\":\"2022-12-23T14:35:49.561Z\",\n \"__v\":0\n },\n \"createdAt\":\"2022-12-23T14:35:49.897Z\",\n \"updatedAt\":\"2022-12-23T14:35:49.897Z\",\n \"__v\":0\n },\n {\n \"_id\":\"63a5bcc67433d072405cde95\",\n \"color\":{\n \"_id\":\"639b75a799cf2ed9eb2802ec\",\n \"name\":\"dr\",\n \"code\":\"#301313\",\n \"skuCode\":\"002\",\n \"createdAt\":\"2022-12-15T19:29:43.798Z\",\n \"updatedAt\":\"2022-12-15T19:29:43.798Z\",\n \"__v\":0\n },\n \"size\":{\n \"_id\":\"639b759899cf2ed9eb2802e4\",\n \"name\":\"medium\",\n \"skuCode\":\"Z\",\n \"createdAt\":\"2022-12-15T19:29:28.047Z\",\n \"updatedAt\":\"2022-12-15T19:29:28.047Z\",\n \"__v\":0\n },\n \"stock\":[\n {\n \"_id\":\"63a5bcc67433d072405cde91\",\n \"available\":2,\n \"reserved\":0,\n \"store\":0,\n \"labourCost\":{\n \"_id\":\"63a1deb1d5890e2103410bf4\",\n \"title\":\"Ali Boxers Maker\",\n \"description\":\"Ali needs his boxers hand made\",\n \"amount\":20,\n \"createdAt\":\"2022-12-20T16:11:29.690Z\",\n \"updatedAt\":\"2022-12-22T17:06:37.667Z\",\n \"__v\":0\n },\n \"materials\":[\n {\n \"amount\":1,\n \"material\":null,\n \"_id\":\"63a5bcc67433d072405cde92\"\n },\n {\n \"amount\":1,\n \"material\":{\n \"_id\":\"639a44fc5a053c1a05e60b44\",\n \"name\":\"test emile\",\n \"description\":\"test test\",\n \"stock\":2,\n \"supplier\":\"emile\",\n \"costPerUnit\":1,\n \"fabric\":\"cotton\",\n \"createdAt\":\"2022-12-14T21:49:48.322Z\",\n 
\"updatedAt\":\"2022-12-25T15:46:03.105Z\",\n \"__v\":0,\n \"image\":\"63a8703adb0f557f329cabfa\"\n },\n \"_id\":\"63a5bcc67433d072405cde93\"\n }\n ],\n \"createdAt\":\"2022-12-23T14:35:50.229Z\",\n \"updatedAt\":\"2022-12-23T14:35:50.229Z\",\n \"__v\":0\n }\n ],\n \"images\":{\n \"_id\":\"63a5bcc67433d072405cde8f\",\n \"product\":\"63a5bcc57433d072405cde86\",\n \"color\":\"639b75a799cf2ed9eb2802ec\",\n \"images\":[\n {\n \"_id\":\"6375c3d97d5dc191a77ca7ac\",\n \"name\":\"boutiqueb-hahhaha-1668662233138.jpeg\",\n \"displayName\":\"hahhaha\",\n \"createdAt\":\"2022-11-17T05:17:13.159Z\",\n \"updatedAt\":\"2022-11-17T05:17:13.159Z\",\n \"__v\":0\n },\n {\n \"_id\":\"6375c40d7d5dc191a77ca7c2\",\n \"name\":\"boutiqueb-paypal-1668662285552.jpeg\",\n \"displayName\":\"paypal\",\n \"createdAt\":\"2022-11-17T05:18:05.570Z\",\n \"updatedAt\":\"2022-11-17T05:18:05.570Z\",\n \"__v\":0\n }\n ],\n \"createdAt\":\"2022-12-23T14:35:50.067Z\",\n \"updatedAt\":\"2022-12-23T14:35:50.067Z\",\n \"__v\":0\n },\n \"createdAt\":\"2022-12-23T14:35:50.395Z\",\n \"updatedAt\":\"2022-12-23T14:35:50.395Z\",\n \"__v\":0\n }\n ],\n \"price\":50,\n \"discount\":0,\n \"identifier\":\"A\",\n \"createdAt\":\"2022-12-23T14:35:50.556Z\",\n \"updatedAt\":\"2022-12-23T14:35:50.556Z\",\n \"__v\":0\n}\n", "text": "Hello sorry for the late response. Happy Holidays!My populate query is:Product.findOne({}) is:Product.findOne({}).populate(populateQuery) is:", "username": "Emile_Ibrahim" }, { "code": "_id$oid__v:1", "text": "your values in your documents are no longer a collection of ids after population. they now are objects with their own _id fields and you now need to query them accordingly.by the way, it is weird your id values seem to be stored as mere strings, not as $oid as you showed previously. are you using versioning in models and queries? try to get also another sample with __v:1 for example and see how they differ.one more thing about formatting your posts. they are long and makes it hard to follow when someone new comes in. from the icon in the editor, select “Hide Details” and put your “code” sections in it and edit the text it show to reflect the content. chek the below one for example.this way it will be easier to read when your sample code starts to be lengthy", "username": "Yilmaz_Durmaz" }, { "code": "$oidfindOne()_id", "text": "it appears as $oid when I copy it from my document itself. But what I sent was the result of my findOne() and in this case it appears like that.department an category were working. I didn’t need to add _id for it to work. In fact, when I tried it with them, it stopped working.As for color and size, it didn’t work this way either.EDIT: I removed all versionKeys from my models, still didn’t work.", "username": "Emile_Ibrahim" }, { "code": "buildProductQuery{ department, category, color, size } = query", "text": "I might have a mistake as I failed to find my reference line about this: if you use “exec”, previous stages were first combined and then executed. I had the idea that documents are first populated and find operation was applied. I could even misunderstand that line. I will check this later on.anyways, for now, let’s try to check your use of the code. unlike for color and size, you don’t seem to convert any id field for department and category. 
so, how do you use your buildProductQuery function and what are the values of these variables: { department, category, color, size } = query (edit sensitive information)", "username": "Yilmaz_Durmaz" }, { "code": "6398fbf11fc0f2835e898aa86398fbfd1fc0f2835e898aae639b75a799cf2ed9eb2802ec639b759899cf2ed9eb2802e4{\n variations: { '$ne': [] },\n 'variations.color': '639b75a799cf2ed9eb2802ec'\n}\n", "text": "department: 6398fbf11fc0f2835e898aa8\ncategory: 6398fbfd1fc0f2835e898aaeWhen I try to filter with any (or both) of the above, it’s working perfectly.color: 639b75a799cf2ed9eb2802ec\nsize: 639b759899cf2ed9eb2802e4When I try to filter with any (or both) of the above, it’s not working.\nHere is a query example for when I filter with color:", "username": "Emile_Ibrahim" }, { "code": "await Variation.find({\"color\":\"639b75a799cf2ed9eb2802ec\"}).populate(\"color\")\n[\n {\n _id: new ObjectId(\"63a5bcc67433d072405cde95\"),\n color: { _id: new ObjectId(\"639b75a799cf2ed9eb2802ec\"), name: 'dr' },\n size: new ObjectId(\"639b759899cf2ed9eb2802e4\")\n }\n]\nlet found = await Product.find()\n .populate({\n path: \"variations\",\n model: this.Variation,\n match:{ \"color\": \"639b75a799cf2ed9eb2802ec\" },\n populate: [\n {\n path: \"color\",\n model: this.Color,\n },\n {\n path: \"size\",\n model: this.Size,\n }\n ]\n })\n{\n \"_id\": \"63a5bcc57433d072405cde86\",\n \"name\": \"Ali's Boxers\",\n \"description\": \"Boxers made specifically for ALi\",\n \"department\": \"6398fbf11fc0f2835e898aa8\",\n \"category\": \"6398fbfd1fc0f2835e898aae\",\n \"variations\": [\n {\n \"_id\": \"63a5bcc67433d072405cde95\",\n \"color\": {\n \"_id\": \"639b75a799cf2ed9eb2802ec\",\n \"name\": \"dr\"\n },\n \"size\": {\n \"_id\": \"639b759899cf2ed9eb2802e4\",\n \"name\": \"medium\"\n }\n }\n ]\n}\nbuildProductQueryJSON.stringfy// actual object{\n _id: new ObjectId(\"63a5bcc57433d072405cde86\"),\n name: \"Ali's Boxers\",\n description: 'Boxers made specifically for ALi',\n department: new ObjectId(\"6398fbf11fc0f2835e898aa8\"),\n category: new ObjectId(\"6398fbfd1fc0f2835e898aae\"),\n variations: [\n new ObjectId(\"63a5bcc57433d072405cde8d\"),\n new ObjectId(\"63a5bcc67433d072405cde95\")\n ]\n}\n// JSON.stringify applied\n{\n \"_id\": \"63a5bcc57433d072405cde86\",\n \"name\": \"Ali's Boxers\",\n \"description\": \"Boxers made specifically for ALi\",\n \"department\": \"6398fbf11fc0f2835e898aa8\",\n \"category\": \"6398fbfd1fc0f2835e898aae\",\n \"variations\": [\"63a5bcc57433d072405cde8d\", \"63a5bcc67433d072405cde95\"]\n}\n", "text": "to play with, I modified/minified your model, query, populate query, and data. I kept only id fields and a few keys such as names.It seems mongoose takes string representation of ObjectId as input and converts it internally depending on the model properties. for example the following query on variations collection gets correct color object then populates it.its output isthis explains why the department and category of your query work without manually converting them to ObjectId.as for the sub-fields, I found out that a matching query is possible by the populate method itself. 
the following link\nMongoose v6.8.1: Query (mongoosejs.com)here, I embed the match query for the color into the populate query:giving the following result:This means removing color and size from your buildProductQuery method and creating a new populate query builder to use them.I don’t know if any other method exists but this seems pretty easy to implement.by the way, I don’t know if it is mongoose or JSON.stringfy but I now can see why your sample data had strings in place of ObjectIds:", "username": "Yilmaz_Durmaz" }, { "code": "buildProductQueryvariations: { $ne: [] }countDocuments(queryObj)buildProductPopulateQuery(req.query)exports.getAll = async (req, res) => {\n try {\n const { offset, limit } = req.query;\n const offsetInt = parseInt(offset) || 0;\n const limitInt = parseInt(limit) || 15;\n\n const queryObj = buildProductQuery(req.query);\n const populateQuery = buildProductPopulateQuery(req.query);\n\n console.log(JSON.stringify(populateQuery));\n\n const products = await Product.find(queryObj)\n .limit(limitInt)\n .skip(offsetInt)\n .populate(populateQuery);\n\n const totalProductsCount = await Product.countDocuments(queryObj);\n const totalPages = Math.ceil(totalProductsCount / limitInt);\n const currentPage = Math.ceil(offsetInt / limitInt) + 1;\n\n return Response.success(res, {\n products,\n pagination: {\n total: totalProductsCount,\n pages: totalPages,\n current: currentPage,\n },\n });\n } catch (err) {\n return Response.serverError(res, err.message);\n }\n};\n", "text": "This means removing color and size from your buildProductQuery method and creating a new populate query builder to use them.I don’t know if any other method exists but this seems pretty easy to implement.That seems like a good idea. However if a color is not found in a variations array, it will still return an empty array, which defies the whole point of variations: { $ne: [] } in my filters, and I do not with for this to happen.Moreover, I am using countDocuments(queryObj) to count how many items with a certain filter I have so I can create a pagination. Using your suggestedway kinda ruins it? Unless there is something I am missing.Here is my function, with the way you suggested, using buildProductPopulateQuery(req.query), in case you need to take a look:", "username": "Emile_Ibrahim" }, { "code": "aggregate()exports.getAll = async (req, res) => {\n\n try {\n\n const { offset, limit } = req.query;\n\n const offsetInt = parseInt(offset) || 0;\n\n const limitInt = parseInt(limit) || 15;\n\n const query = {\n\n departments: req.query.departments\n\n ? JSON.parse(req.query.departments)\n\n : null,\n\n categories: req.query.categories\n\n ? JSON.parse(req.query.categories)\n\n : null,\n\n colors: req.query.colors ? JSON.parse(req.query.colors) : null,\n\n sizes: req.query.sizes ? 
JSON.parse(req.query.sizes) : null,\n\n };\n\n const pipeline = buildProductPipeline(query);\n\n const countPipeline = buildCountProductPipeline(query);\n\n const products = await Product.aggregate(pipeline)\n\n .skip(offsetInt)\n\n .limit(limitInt);\n\n const totalProductsCount =\n\n (await Product.aggregate(countPipeline))[0]?.count || 0;\n\n const totalPages = Math.ceil(totalProductsCount / limitInt);\n\n const currentPage = Math.ceil(offsetInt / limitInt) + 1;\n\n return Response.success(res, {\n\n products,\n\n pagination: {\n\n total: totalProductsCount,\n\n pages: totalPages,\n\n current: currentPage,\n\n },\n\n });\n\n } catch (err) {\n\n return Response.serverError(res, err.message);\n\n }\n\n};\nconst buildProductPipeline = (query) => {\n const { departments, categories, colors, sizes } = query;\n\n return [\n {\n $match: {\n ...(departments && {\n department: {\n $in: departments.map((d) => mongoose.Types.ObjectId(d)),\n },\n }),\n ...(categories && {\n category: { $in: categories.map((c) => mongoose.Types.ObjectId(c)) },\n }),\n },\n },\n {\n $lookup: {\n from: \"variations\",\n localField: \"variations\",\n foreignField: \"_id\",\n as: \"variations\",\n pipeline: [\n {\n $match: {\n ...(colors && {\n color: {\n $in: colors.map((c) => mongoose.Types.ObjectId(c)),\n },\n }),\n ...(sizes && {\n size: {\n $in: sizes.map((s) => mongoose.Types.ObjectId(s)),\n },\n }),\n },\n },\n {\n $lookup: {\n from: \"colors\",\n localField: \"color\",\n foreignField: \"_id\",\n as: \"color\",\n },\n },\n {\n $unwind: {\n path: \"$color\",\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $lookup: {\n from: \"sizes\",\n localField: \"size\",\n foreignField: \"_id\",\n as: \"size\",\n },\n },\n {\n $unwind: {\n path: \"$size\",\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $lookup: {\n from: \"stocks\",\n localField: \"stock\",\n foreignField: \"_id\",\n as: \"stock\",\n pipeline: [\n {\n $lookup: {\n from: \"labourcosts\",\n localField: \"labourCost\",\n foreignField: \"_id\",\n as: \"labourCost\",\n },\n },\n {\n $unwind: {\n path: \"$labourCost\",\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $lookup: {\n from: \"materials\",\n as: \"materials\",\n pipeline: [\n {\n $lookup: {\n from: \"materials\",\n localField: \"material\",\n foreignField: \"_id\",\n as: \"material\",\n },\n },\n {\n $unwind: {\n path: \"$material\",\n preserveNullAndEmptyArrays: true,\n },\n },\n ],\n },\n },\n ],\n },\n },\n ],\n },\n },\n {\n $match: {\n variations: { $ne: [] },\n },\n },\n {\n $lookup: {\n from: \"departments\",\n localField: \"department\",\n foreignField: \"_id\",\n as: \"department\",\n },\n },\n {\n $unwind: {\n path: \"$department\",\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $lookup: {\n from: \"categories\",\n localField: \"category\",\n foreignField: \"_id\",\n as: \"category\",\n },\n },\n {\n $unwind: {\n path: \"$category\",\n preserveNullAndEmptyArrays: true,\n },\n },\n ];\n};\nconst buildCountProductPipeline = (query) => {\n const { departments, categories, colors, sizes } = query;\n\n return [\n {\n $match: {\n ...(departments && {\n department: {\n $in: departments.map((d) => mongoose.Types.ObjectId(d)),\n },\n }),\n ...(categories && {\n category: { $in: categories.map((c) => mongoose.Types.ObjectId(c)) },\n }),\n },\n },\n {\n $lookup: {\n from: \"variations\",\n localField: \"variations\",\n foreignField: \"_id\",\n as: \"variations\",\n },\n },\n {\n $match: {\n ...(colors && {\n \"variations.color\": {\n $in: colors.map((c) => mongoose.Types.ObjectId(c)),\n 
},\n }),\n ...(sizes && {\n \"variations.size\": {\n $in: sizes.map((s) => mongoose.Types.ObjectId(s)),\n },\n }),\n },\n },\n {\n $group: {\n _id: \"$_id\",\n count: { $sum: 1 },\n },\n },\n {\n $count: \"count\",\n },\n ];\n};\n", "text": "Alright I figured out a way using aggregate()", "username": "Emile_Ibrahim" }, { "code": "variations: { $ne: [] } products = (await Product.find({\"variations\":{\"$ne\":[]}}).populate(\"variations\"))\n .filter(function (product) { return product[\"variations\"].length>0; })\n.lean()$lookupModel.aggregate", "text": "lingering references to non-existence variations are not good. this need for checking empty variations shows signs of bad design, either on the creation of a product or accessing the database outside the client app. if you have authority over the design you may consider refactoring your design.anyways, it seems you may have two occurrences where you will have an empty variations array. the array can be empty at creation (or later updated), and “populate” may result in an empty array. for the first one include variations: { $ne: [] } in “buildProductQuery” and then use vanilla js filtering on the result for the second case:however, keep in mind that this works on the app’s host’s memory so use .lean() to reduce memory usage. And again, I still don’t know if any other method exists. Until we find a better way (if any), this one handles the jobs pretty well.One way might be to leverage the $lookup of mongodb through Model.aggregate where you have more (or total) control over your query to run on the database, but you need to spend some time learning it as it gets complicated at times. you can also leverage the counting documents and creating pagination on the database server itself. the bad news is, I use aggregation, but I haven’t used it with mongoose, so that beats me here.", "username": "Yilmaz_Durmaz" }, { "code": "{\n product: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Product\",\n required: true,\n },\n variation: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Variation\",\n required: true,\n },\n}\n", "text": "lingering references to non-existence variations are not good. this need for checking empty variations shows signs of bad design, either on the creation of a product or accessing the database outside the client app. if you have authority over the design you may consider refactoring your design.We do not want to delete a product if it has no variations, as the admins can always add new ones. what causes the variations to be empty is when they get archived and moved to the archive table. In that canse, we want the product to still be available so we reference to it by id.", "username": "Emile_Ibrahim" }, { "code": "", "text": "I did not notice your second post came while I was writing you have a nice pipeline there. I don’t know how mongoose glues “models” after using aggregation, but if it does the job let it do the job, and if so, then you should mark it as “the accepted answer” so the community can see this problem is solved ", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Went ahead and did just that.Thanks again for your help, really appreciate your time.", "username": "Emile_Ibrahim" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filtering multiple nested ObjectId's
2022-12-22T10:14:32.835Z
Filtering multiple nested ObjectId&rsquo;s
10,059
null
[ "aggregation", "performance" ]
[ { "code": "{\n \"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"plannerVersion\": 1,\n \"namespace\": \"test.bets\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"$and\": [\n {\n \"timestamp\": {\n \"$lt\": 1672012800000\n }\n },\n {\n \"timestamp\": {\n \"$gte\": 1671408000000\n }\n }\n ]\n },\n \"queryHash\": \"81EC0C6F\",\n \"planCacheKey\": \"AC9017EC\",\n \"winningPlan\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"transformBy\": {\n \"bet\": 1,\n \"timestamp\": 1,\n \"userId\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"timestamp\": 1\n },\n \"indexName\": \"timestamp_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"timestamp\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"timestamp\": [\n \"[1671408000000.0, 1672012800000.0)\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 114289,\n \"executionTimeMillis\": 2004,\n \"totalKeysExamined\": 114289,\n \"totalDocsExamined\": 114289,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 114289,\n \"executionTimeMillisEstimate\": 1249,\n \"works\": 114290,\n \"advanced\": 114289,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 147,\n \"restoreState\": 147,\n \"isEOF\": 1,\n \"transformBy\": {\n \"bet\": 1,\n \"timestamp\": 1,\n \"userId\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 114289,\n \"executionTimeMillisEstimate\": 1157,\n \"works\": 114290,\n \"advanced\": 114289,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 147,\n \"restoreState\": 147,\n \"isEOF\": 1,\n \"docsExamined\": 114289,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 114289,\n \"executionTimeMillisEstimate\": 682,\n \"works\": 114290,\n \"advanced\": 114289,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 147,\n \"restoreState\": 147,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"timestamp\": 1\n },\n \"indexName\": \"timestamp_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"timestamp\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"timestamp\": [\n \"[1671408000000.0, 1672012800000.0)\"\n ]\n },\n \"keysExamined\": 114289,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n }\n }\n },\n \"nReturned\": 114289,\n \"executionTimeMillisEstimate\": 1678\n },\n {\n \"$group\": {\n \"_id\": \"$userId\",\n \"points\": {\n \"$sum\": \"$bet\"\n },\n \"lastBet\": {\n \"$max\": \"$timestamp\"\n }\n },\n \"nReturned\": 73,\n \"executionTimeMillisEstimate\": 1992\n },\n {\n \"$sort\": {\n \"sortKey\": {\n \"points\": -1,\n \"lastBet\": 1\n },\n \"limit\": 10\n },\n \"nReturned\": 10,\n \"executionTimeMillisEstimate\": 1992\n }\n ],\n \"serverInfo\": {\n \"host\": \"test\",\n \"port\": 27017,\n \"version\": \"4.4.17\",\n \"gitVersion\": \"85de0cc83f4dc64dbbac7fe028a4866228c1b5d1\"\n },\n \"ok\": 1\n}\n", "text": "I ran the explain for my pipeline. It’s currently taking a few seconds to run this query. 
I need to be able to 100x the amount of data, so I’m hoping you can help me figure out what I can do to speed it up?I am trying to get the top 10 list of users by sum of bets made during a certain time period.I have indexes on userId and timestamp in the bets table. Not sure if I’m missing any others?I would appreciate any pointers.Cheers and Merry Christmas!", "username": "uzgvan" }, { "code": "{\nuserId : 1, \nbet: 1, \ntimestamp : -1\n}\n", "text": "Hi @uzgvan ,It looks like index is only present on timestamp, since you aggregation is in userId , bets and timestamp , I would suggest the following index:Thanks\nPavel", "username": "Pavel_Duchovny" } ]
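A covering variant of that idea (a sketch to benchmark, not a verified result): putting timestamp first keeps the range scan tight, and including userId and bet means the $group can be fed from index keys alone, so the FETCH stage should disappear from the explain output:

```javascript
db.bets.createIndex({ timestamp: 1, userId: 1, bet: 1 });

// Re-run the explain to confirm an IXSCAN with no FETCH stage
db.bets.explain("executionStats").aggregate([
  { $match: { timestamp: { $gte: 1671408000000, $lt: 1672012800000 } } },
  { $group: { _id: "$userId", points: { $sum: "$bet" }, lastBet: { $max: "$timestamp" } } },
  { $sort: { points: -1, lastBet: 1 } },
  { $limit: 10 }
]);
```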
Help with aggregation pipeline performance
2022-12-25T10:32:15.590Z
Help with aggregation pipeline performance
1,438
https://www.mongodb.com/…e_2_1024x457.png
[ "security" ]
[ { "code": "db.createUser(\n{\n user: \"user1\",\n pwd: \"user1@ynu!@#\",\n roles: [{\n role: \"readWrite\",\n db: \"user1\"\n },\n {\n role: \"read\",\n db: \"test\"\n }\n ]\n}\n)\n", "text": "Hi, I am new to mongodb, and I want to create a user and grant some permission for it, for example, I create a user1 and allow it to read test and read/write user1 databases, but the created user can manipulate any other database like user2 acturally.The command I use for creating user1.\nmongo-user11915×855 29.6 KB\n", "username": "liudonghua_N_A" }, { "code": "use admindb.createUser(...)", "text": "I presume to first created the “admin user” as adviced here.So currently your video is probably showing this user which is allowed to do anything.But to understand how more restricted users will actuate on the database, you have to connect and identify as the specific user.For example,", "username": "santimir" }, { "code": "db.createUser(\n{\n user: \"user1\",\n pwd: \"user1@ynu!@#\",\n roles: [{\n role: \"readWrite\",\n db: \"user1\"\n },\n {\n role: \"read\",\n db: \"test\"\n }\n ]\n}\n)\nuser1security: authorization: enableduse admin\ndb.createUser(\n {\n user: \"admin\",\n pwd: \"xxx\",\n roles: [ { role: \"userAdminAnyDatabase\", db: \"admin\" } ]\n }\n)\n# configure security: authorization: enabled and restart mongodb\nmongo -u \"admin\" -p 'xxx' --authenticationDatabase \"admin\"\nuse user1\ndb.createUser(\n{\n user: \"user1\",\n pwd: \"xxx\",\n roles: [{\n role: \"readWrite\",\n db: \"user1\"\n },\n {\n role: \"read\",\n db: \"test\"\n }\n ]\n}\n)\nmongo -u \"user1\" -p 'xxx' --authenticationDatabase \"user1\"\n", "text": "Thanks, I have created the normal user1 successfully now.\nI created a admin user firstly, then configure security: authorization: enabled in /etc/mongod.conf, execute service mongod restart.\nThen log into mongodb use admin, then create other normal users like user1 above.see also MongoDB: Server has startup warnings ‘‘Access control is not enabled for the database’’ - Stack Overflow.", "username": "liudonghua_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
User created via db.createUser can manipulate other database
2022-12-26T07:52:34.801Z
User created via db.createUser can manipulate other database
2,739
null
[ "node-js" ]
[ { "code": "", "text": "I’m working with mongodb 4.2.0 driver in node.js. My goal is to use client sessions (https://docs.mongodb.com/manual/reference/method/Session/) to enable causal consistency and ensure that my application reads back its own writes, despite having global read nodes and a single write region. After turning on client sessions and switching connection from primary to nearest node, we noticed some writes silently failed (despite receiving “OK”). Others failed with an explicit error like “Retryable write with txnNumber 1 is prohibited on session because a newer retryable write with txnNumber 2 has already started on this session.”. So to address this, we tried to automatically batch all of our writes (via bulkWrite), but still noticed the error happening. We also noticed in the documentation (https://docs.mongodb.com/manual/core/read-isolation-consistency-recency/#client-sessions-and-causal-consistency-guarantees) that “Applications must ensure that only one thread at a time executes these operations in a client session.”. Since node is single-threaded I assumed this would be fine, but I’m wondering if async is counting as “more than one thread”?Any advice on getting client sessions with causal consistency working on Node? I can try disabling retryable writes, but just wondering how many more undocumented things like this I might run into. For example, can I issue reads in parallel safely in a client session?", "username": "Alex_Coleman" }, { "code": "\"majority\"\"majority\"", "text": "Hi there, have you found a solution?I was checking the documentation and noticed 2 important lines:Causally consistent sessions can only guarantee causal consistency for reads with \"majority\" read concern and writes with \"majority\" write concern.To provide causal consistency … Applications must ensure that only one thread at a time executes these operations in a client sessiondo these lines apply to your situation?", "username": "Yilmaz_Durmaz" } ]
Parallel Writes/Reads in a client Session in Node
2022-03-22T17:30:49.341Z
Parallel Writes/Reads in a client Session in Node
1,630
https://www.mongodb.com/…b_2_1024x605.png
[ "node-js", "mongoose-odm", "connecting" ]
[ { "code": "", "text": "I have 4 VM where Docker containers are running for Node js services which are connecting with MongoDB which are running in a docker container on other 3 VM. when we deploy the service sometimes the only service from 1st VM get connected to mongo sometimes other VM NodeJs service shows the server selection error Shown in the image and all ports are allowed to connect internally within All 7 VM Mongo Version 4.2.8 Node j\nScreenshot 2021-07-02 at 3.54.38 PM1120×662 379 KB\ns Version 14.2.0 Mongoose Version 5.9.6", "username": "Nitish_Yadav" }, { "code": "", "text": "me also can you help me in this error", "username": "Nitu_Sinha" }, { "code": "", "text": "this worked for me:within your mongoDB atlas…\nin Network Access… edit IP access list\nswitch it to “allow access from anywhere”\nimage858×512 17.9 KB\n", "username": "Rachit_Bharadwaj" }, { "code": "", "text": "this worked for meIt did worked for you because you are connecting to an Atlas cluster. The other person is not using Atlas.", "username": "steevej" }, { "code": "", "text": "replace local host with 127.0.0.1 it work every time", "username": "Jaskaran_Singh2" }, { "code": "", "text": "thank you so much !!", "username": "rew" }, { "code": "net.maxIncomingConnections", "text": "Hi all,This kind of error has many root causes but its occurring “sometimes” might be related to the number of connections opened to DB servers. the default number is 64K (65536) and can be changed in the config file under net.maxIncomingConnectionsConfiguration File Options — MongoDB ManualIf your app does not use a singleton client or does not close connections after querying, at some point total connections to the database will accumulate and reach a point where it won’t accept new connections anymore until old ones are closed/expired.for example, if the driver opens 100 connections for each request by default, then only about 655 of them will be served and the 656th connection (and over) will be refused as it can serve only 36 more (for default).", "username": "Yilmaz_Durmaz" } ]
Mongoose Server selection Error
2021-07-05T05:06:13.073Z
Mongoose Server selection Error
11,942
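If the failures come and go with load, connection pile-up as described in the last reply is a plausible culprit. A rough sketch of a singleton connection module for Node.js/Mongoose follows; the URI and option values are illustrative, and on the Mongoose 5.x line the pool option is called poolSize and may require useUnifiedTopology: true.

    // db.js: create the Mongoose connection once and reuse it from every module
    const mongoose = require('mongoose');

    let connectionPromise = null;

    function getConnection() {
      if (!connectionPromise) {
        connectionPromise = mongoose.connect('mongodb://10.0.0.5:27017/appdb', {
          serverSelectionTimeoutMS: 5000, // fail fast instead of hanging for 30 seconds
          maxPoolSize: 20                 // cap the connections this service can open
        });
      }
      return connectionPromise;
    }

    module.exports = { getConnection };

Requiring this module everywhere means each container holds one pool instead of opening fresh connections per request, which keeps the total well below the server-side connection limit.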
null
[ "queries" ]
[ { "code": "usersschema_version: 1schema_version: 2users = [\n {\n _id: ObjectId(\"1\"),\n full_name: \"Jhon Doe\",\n },\n {\n _id: ObjectId(\"2\"),\n first_name: \"Jane\",\n last_name: \"Doe\",\n schema_version: 2\n }\n]\nfullNamedb.collection('users').find({ fullName: \"Jane Doe\" })\ndb.collection('users').find({ \n $or: [ \n { full_name: \"Jane Doe\" }, \n { first_name: \"Jane\", last_name: \"Doe\" } \n ] \n})\n", "text": "Hi,I want to follow schema versioning pattern in my MongoDB schema. Let’s say I have a collection users which represents my users in the app. This collection contains multiple documents with different versions (e.g. schema_version: 1, schema_version: 2 etc.). The different versions of the document have different fields. Take a look at the example belowLet’s say I need to find a user by a name.Without different schemas I would created a single index on field fullName and use query likeBut, how I can achieve the same result considering multiple schemas in the collection?Does it mean that my query should consider both (or any) existing versions of the document in the collection, likeIf yes, does it mean that I need to create two indexes (each per schema type) to make query performant?Thanks,\nRoman", "username": "Roman_Mahotskyi" }, { "code": "", "text": "Hi @Roman_MahotskyiYes, it does mean so. A simple index for v1 and a compound index for v2. I think they should be sparse indexes, but am no expert.", "username": "santimir" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to find documents by field taking into account different "schema_version"?
2022-12-26T15:21:30.372Z
How to find documents by field taking into account different “schema_version”?
815
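The "one index per schema version" suggestion from that thread can be expressed with the sparse indexes the answer mentions, so each index only carries the documents that actually have its fields. A mongosh sketch using the field names from the example:

    // v1 documents: sparse index on the old combined field
    db.users.createIndex({ full_name: 1 }, { sparse: true })

    // v2 documents: sparse compound index on the split name fields
    db.users.createIndex({ first_name: 1, last_name: 1 }, { sparse: true })

    // Each branch of the $or can then be answered by its own index
    db.users.find({
      $or: [
        { full_name: "Jane Doe" },
        { first_name: "Jane", last_name: "Doe" }
      ]
    })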
null
[ "aggregation", "queries", "node-js", "indexes" ]
[ { "code": "", "text": "For example, the client sends a request to the API server to retrieve documents / records matching certain filters. Of the matching records, there will be additional processing such as transformations, sorting, etc.Is it a common design in web development to have the API only return the unprocessed documents / records and then have the client perform the additional processing so that the load on the API is lessen?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "", "text": "It will be decent that you provide closure on all your other open posts.We would be more incline to help you further.", "username": "steevej" } ]
Is it common to have the client process data in order to lessen the load of the API?
2022-12-26T09:49:04.745Z
Is it common to have the client process data in order to lessen the load of the API?
871
null
[ "schema-validation" ]
[ { "code": "", "text": "Hi!\nI’ve been doing some research and I’m having trouble finding information on the relationship between schema validation and performance. I’m specifically interested in whether implementing schema validation for a high-traffic collection, like one serving a busy mobile app with lots of writes, can have a negative impact on performance. Do you have any idea if this is the case? If so, could you give me an idea of how significant the impact might be? Thanks so much for your help.", "username": "yaron_levi" }, { "code": "", "text": "Like anything there is not such thing as no cost computing.Very personal opinion.Schema validation add computing complexity. How much? I do not know. What I know is that when your code is well tested, you do not need schema validation since your code will create and update the data correctly. You do not need this extra layer of protection. Italicized protection or obstruction. Or obstruction, yes. In the field, despite code that is tested, you might need to patch thing and sometimes schema validation is on the way because to make things works you have to break the schema.Yes it is good to prevent human from entering bad data. But my point is validate the data as early as possible because the closer it is to the user, it is more responsive and naturally distributed.", "username": "steevej" }, { "code": "", "text": "@steevej is right about cost, but I’d like to offer a differing perspective.I’m a “belt-and-suspenders” designer. Surely you should validate input at the user interface. But the fallback is validation. It’s not usually so costly in execution, and it takes one more step to guarantee the integrity of the data.I use MongoDB professionally and like it, but there are some odd notions going around. One cannot reasonably hold in one’s head both notions that:So I use validation. It doesn’t cost much and it gives me a warm, safe feeling.", "username": "Jack_Woehr" }, { "code": "", "text": "@steevej @Jack_Woehr\nThank you for the detailed input!So I use validation. It doesn’t cost much and it gives me a warm, safe feeling.I completely agree about the safe warm feeling (-:\nThe question is again, how much overhead are we talking about?\nTo give more context, our application is more “tall” than “wide”. So it’s not that high on features and business entities and sprawl of domain logic (what you might call an “enterprise app”). But it has very high live traffic coming currently from mobile devices.\nAny idea where to dig in order to get a more technical answer on that?", "username": "yaron_levi" }, { "code": "", "text": "I’ve never seen any figures on that.I suppose one might have to set up an experiment and measure it.", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does Schema Validation Have a Performance Penalty?
2022-12-22T09:55:00.202Z
Does Schema Validation Have a Performance Penalty?
2,517
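One low-risk way to answer the "how much overhead" question for a specific workload is to attach the validator in warn mode first and compare write latencies before enforcing it. A mongosh sketch with an illustrative schema:

    db.runCommand({
      collMod: "events",                 // an existing high-traffic collection
      validator: {
        $jsonSchema: {
          bsonType: "object",
          required: ["userId", "createdAt"],
          properties: {
            userId: { bsonType: "objectId" },
            createdAt: { bsonType: "date" },
            payload: { bsonType: "object" }
          }
        }
      },
      validationLevel: "moderate",   // only check inserts and updates to already-valid docs
      validationAction: "warn"       // log violations instead of rejecting writes
    })

Switching validationAction to "error" later turns the same rules into hard enforcement once the measured cost is acceptable.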
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "*await mongoose.connection.db.collection('users').aggregate([\n { $lookup: { from: `education`, localField: `user_id`, foreignField: `user`, as: `education` } },\n { $unwind: { path: `$education`, preserveNullAndEmptyArrays: true } },\n { $lookup: { from: `userinfo`, localField: `user_id`, foreignField: `user`, as: `userinfo` } },\n { $unwind: { path: `$userinfo`, preserveNullAndEmptyArrays: true } },\n { $match: { is_deleted: false } },\n { $match: criteria },\n {\n `$facet`: {\n `totalLocation`: [\n { $match: criteria },\n { `$count`: `count` },\n ],\n }\n },\n {\n `$project`: {\n `totalLocation`: { `$arrayElemAt`: [`$totalLocation.count`, 0] },\n }\n }\n], { allowDiskUse: true, collation: { locale: 'en_US', alternate: `shifted` } }).toArray();\n\nThis query works completely fine and return data as expected. But now as data are growing so this query became slower and we would make it faster. one of solution which I found that to create an index in a way so we can have faster result. I have tried but it doesn't works for me*\n", "text": "", "username": "Vishal_Patel1" }, { "code": "$match$match:{ is_deleted:false, ...criteria }\n$facet$project$facet$groupdb.collection('users').explain().aggregate(yourPipeline)", "text": "Why are those $match stages separated? They could be collapsed to a single one:And I don’t think you need a $facet, do you?As it is written the $project and $facet can be removed and replaced by $group if I get it correctly.You can use db.collection('users').explain().aggregate(yourPipeline) to get some statistics and details about the execution.", "username": "santimir" }, { "code": "", "text": "In addition, is_deleted is a field from documents of the users collections so you should $match this first. What you do right now is that you do 2 $lookup and 2 $unwind on documents that you gonna eliminate anyway. You are better off eliminating them right at the start.The $lookup of userinfo is not dependant on the result of $unwind education. So you should do the $lookup of userinfo before the $unwind of education.Hopefully you have indexes user in both education and userinfo collections.", "username": "steevej" } ]
Mongo aggregation query takes more time
2022-12-26T10:35:24.270Z
Mongo aggregation query takes more time
1,385
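Putting the advice from that thread together, and assuming criteria only references fields that live on the users documents themselves, the pipeline could look roughly like this in mongosh:

    // Supporting indexes for the two lookups
    db.education.createIndex({ user: 1 })
    db.userinfo.createIndex({ user: 1 })

    const criteria = { /* the dynamic filter built by the application */ };

    db.users.aggregate([
      // filter users first, so the lookups only run on documents we keep
      { $match: { is_deleted: false, ...criteria } },

      { $lookup: { from: "education", localField: "user_id", foreignField: "user", as: "education" } },
      { $lookup: { from: "userinfo",  localField: "user_id", foreignField: "user", as: "userinfo" } },
      { $unwind: { path: "$education", preserveNullAndEmptyArrays: true } },
      { $unwind: { path: "$userinfo",  preserveNullAndEmptyArrays: true } },

      // a single $count replaces the $facet/$project pair when only the total is needed
      { $count: "totalLocation" }
    ], { allowDiskUse: true })

If the count is all that is needed and criteria never touches the joined fields, the $lookup and $unwind stages can be dropped entirely, which is usually the biggest win.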
null
[ "react-native" ]
[ { "code": "", "text": "HeyI was trying to implement Authentication using realm-js.\nMy past experience was with realm-web where currentUser would have a method called refreshAccessToken()my issue is that the currentUser that realm-js creates has no such method … so how do i refresh the token when it expires?thanks", "username": "rares.lupascu" }, { "code": "\n \n }\n // Update the access token\n this.accessToken = response.accessToken;\n // Refresh the profile to include the new identity\n await this.refreshProfile();\n }\n \n /**\n * Request a new access token, using the refresh token.\n */\n public async refreshAccessToken(): Promise<void> {\n const response = await this.fetcher.fetchJSON({\n method: \"POST\",\n path: routes.api().auth().session().path,\n tokenType: \"refresh\",\n });\n const { access_token: accessToken } = response as Record<string, unknown>;\n if (typeof accessToken === \"string\") {\n this.accessToken = accessToken;\n } else {\n throw new Error(\"Expected an 'access_token' in the response\");\n \n await (app.currentUser as any).refreshAccessToken();", "text": "Hey! Are you using typescript? I think I had the same issue. Sometimes typings break and cause such issues even though the sdk itself still constains the function.As we can see in here the sdk should contain this function.So we can cheese it for nowawait (app.currentUser as any).refreshAccessToken();", "username": "Jakke_Korpelainen" } ]
Realm-js refresh access token
2022-10-24T22:04:44.875Z
Realm-js refresh access token
2,007
null
[]
[ { "code": "", "text": "This is more a discussion, because I am interested in the following question:I have an application with event sourcing and a common query is the following“Give me every event since [POSITION]”. So position is an value that indicates the position of the event in a global event stream. In the SQL world you would use an auto-incremented integer for that to have an increasing position, in MongoDB this is of course not possible.You cannot use a normal client side generated timestamp because you have no guarantee that the document A has has a lower timestamp than document B is actually written before B.Therefore I use BsonTimestamp which is generated on the database server and fulfills this purpose. But it happens very regularly that after you queried some document from the event collection another event is written that has a smaller timestamp than what you actually received from the server. So the insert order is actually not consistent with the bson timestamps.I wonder how you have solved that with oplogs, because you must have the same issue. My current solution is to have an overlapping window, so instead of “Give me every event since [POSITION]” I ask “Give me every event since [POSITION] and a little bit before that”, but there is no guarantee that it works all the time.", "username": "Sebastian_Stehle" }, { "code": "", "text": "Have you made any progress on investigating this issue?", "username": "Peter_Huang" } ]
BsonTimestamp guarantees
2022-07-29T11:48:25.767Z
BsonTimestamp guarantees
1,308
https://www.mongodb.com/…c6c4e47b8a3b.gif
[ "swift", "developer-hub" ]
[ { "code": "", "text": "I wanted to build an app that I could use at events to demonstrate Realm Sync. It needed to be fun to interact with, and so a multiplayer game made sense. Tic-tac-toe is too simple to get excited about. I’m not a game developer and so Call Of Duty wasn’t an option. Then I remembered Microsoft’s Minesweeper.Minesweeper was a Windows fixture from 1990 until Windows 8 relegated it to the app store in 2012. It was a single-player game, but it struck me as something that could be a lot of fun to play with others. Some family beta-testing of my first version while waiting for a ferry proved that it did get people to interact with each other (even if most interactions involved shouting, “Which of you muppets clicked on that mine?!”).I’ve described how this app is built in Building a collaborative iOS Minesweeper game with Realm.", "username": "Andrew_Morgan" }, { "code": "", "text": "Hello!\nI am playing this game, which is very good. Can you tell me how you create this type of game? Because I am interested to building a game. waiting response. Thanks in advance.", "username": "Richard_Gravener1" }, { "code": "", "text": "Welcome to the forums @Richard_Gravener1Can you tell me how you create this type of game?Yes we can - it’s a great example app and @Andrew_Morgan created a really good article about the process and even included the code as well in the repo.There’s a link to that article in the above post but here it is again. Give it a read!Build a multiplayer iOS Minesweeper game with SwiftUI and Realm", "username": "Jay" }, { "code": "", "text": "Hey there!\nThanks for providing me a link where I learn how to build this type of game. I solute you. Thanks again.", "username": "Richard_Gravener1" }, { "code": "", "text": "Hello!\nI read a post which you provide this link. I am not understand properly. Can any video tutorial is available. If yes please send me video. Because I want to build my game quickly. Waiting response. Thanks", "username": "Richard_Gravener1" }, { "code": "", "text": "@Richard_Gravener1 I am not sure what that link is as it’s broken but if you need introductory course on Swift Programming, there’s a number of them available on the internet and Youtube. I am a big fan of Kodek for learning to code from just getting started to advanced topics.They even have courses on Unity and the Unreal engine!Good stuff - take a look.", "username": "Jay" }, { "code": "", "text": "Hello all!\nMy name is Philip Salt, and I am a beginner developer. I was creating an article builder game that is called Article Tool. But I am facing some issues. When gI click on the open game option so I am redirect backword where I start. If anyone expert here so please tell me what is the problem and how to fix.\nI am waiting for your response.\nThanks in advance", "username": "godaxi_3427" }, { "code": "onDisappear", "text": "If you’re using flexible (rather than partitioned) sync, then this might because your clearing the subscription in your onDisappear function.Are you able to share (simplified if need be) code for the views in question?", "username": "Andrew_Morgan" } ]
New article: Building a collaborative iOS Minesweeper game with Realm
2022-03-14T16:53:55.932Z
New article: Building a collaborative iOS Minesweeper game with Realm
4,356
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Hello,I’m having a difficult time getting sign with apple to work with Realm. I followed the guide but I guess I’m still a bit confused:My main issue with the guide is that upon generating a jwt per step 4 of the guide (https://docs.mongodb.com/realm/authentication/apple/), I create a secret in Realm under values but it’s complaining about the length of the secret value: clientSecret length should be less than or equal to 500Any help would be appreciate.\nThanks.", "username": "Trunks99" }, { "code": "", "text": "Hi @Trunks99Have many characters is the script generating? I remember an issue where there was a garbage character at the end and I just had to delete those characters and it worked.", "username": "Lee_Maguire1" }, { "code": "2961 client_secret.txt", "text": "Hi @Lee_Maguire1 ,Thqnks for the prompt response.When I create a serviceID, it’s asking me for a domain. That is what confuses me – why would I need a backend myself?I will take a look.The script is generating over 2900 characters which seems excessive:\n2961 client_secret.txt", "username": "Trunks99" }, { "code": "", "text": "Hey what have u entered in the “domain” and “return Url” field?", "username": "Jannis_Gunther" }, { "code": "", "text": "Hi Jannis,I left those fields empty.I figured I don’t need them if I handle the binding logic in the client. Basically, follow the Sign In With Apple tutorial provided by Apple (ignore Realm’s sample code for this part). Once you get the token back, feed it to realm’s sign in with Apple function.I can provide more specifics next week if you are interested, as I don’t have my laptop with me right now.", "username": "Trunks99" }, { "code": "", "text": "Hey Trunks,im very interested. I still couldn’t figure it out.Thanks!", "username": "Jannis_Gunther" }, { "code": " @objc\nfunc handleAuthorizationAppleIDButtonPress() {\n let appleIDProvider = ASAuthorizationAppleIDProvider()\n let request = appleIDProvider.createRequest()\n request.requestedScopes = [.fullName, .email]\n \n let authorizationController = ASAuthorizationController(authorizationRequests: [request])\n authorizationController.delegate = self\n authorizationController.presentationContextProvider = self\n authorizationController.performRequests()\n}\n\n@available(iOS 13.0, *)\nfunc presentationAnchor(for controller: ASAuthorizationController) -> ASPresentationAnchor {\n return self.view.window!\n}\n\n@available(iOS 13.0, *)\nfunc authorizationController(controller: ASAuthorizationController, didCompleteWithError error: Error) {\n print(\"Something bad happen, \\(error)\")\n}\n\n@available(iOS 13.0, *)\nfunc authorizationController(controller: ASAuthorizationController, didCompleteWithAuthorization authorization: ASAuthorization) {\n \n switch authorization.credential {\n case let appleIDCredential as ASAuthorizationAppleIDCredential:\n \n // Create an account in your system.\n let userIdentifier = appleIDCredential.user\n let firstName = appleIDCredential.fullName?.givenName ?? \"\"\n let lastName = appleIDCredential.fullName?.familyName ?? \"\"\n let fullName = appleIDCredential.fullName\n let email = appleIDCredential.email\n \n let identityToken = String(data: appleIDCredential.identityToken ?? Data(), encoding: .utf8)\n \n let app = App(id: \"your-app-id\")\n \n // Fetch IDToken via the Apple SDK\n let credentials = Credentials.apple(idToken: identityToken ?? 
\"\")\n app.login(credentials: credentials) { (result) in\n switch result {\n case .failure(let error):\n print(\"Login failed: \\(error.localizedDescription)\")\n case .success(let user):\n print(\"Successfully logged in as user \\(user)\")\n }\n } \n case let passwordCredential as ASPasswordCredential:\n break\n default:\n break\n }\n}", "text": "Hey Jannis,No need for redirect URIs if you do it as follows:", "username": "Trunks99" }, { "code": "", "text": "Could you make it work without a server?I’m pretty sure Realm asks for a backend because it needs to verify the token you provide I believe.It would be great if Realm Sync could handle Apple SIWA only with its userId instead of idToken.", "username": "Jerome_Pasquier" }, { "code": "", "text": "Yes – unless I’m missing something. I posted the code above.First, I generate the token using Sign in with Apple, then I feed it to Realm and I’m assuming that Realm does the verification and talks to the Apple servers for that. Once that process is done, I receive a response and log my user in.", "username": "Trunks99" }, { "code": "", "text": "Hello, I’m trying to figure out how to make a services ID for an iOS App. you cannot leave the fields empty anymore. What can I do for the domain and return url?", "username": "Timothy_Tati" } ]
Issues configuring Sign in with Apple
2021-07-18T17:23:18.724Z
Issues configuring Sign in with Apple
4,887
null
[ "aggregation", "queries", "mongoose-odm" ]
[ { "code": "cashierBoxesallowDiskUse(true)\nawait StoreModel.aggregate([\n {\n $match: {\n _id: Types.ObjectId(storeId)\n }\n },\n {\n $unwind: {\n path: '$cashierBoxes',\n }\n },\n {\n $match: {\n \"cashierBoxes._id\": Types.ObjectId(id)\n }\n },\n {\n $unwind: {\n path: '$cashierBoxes.inOutMovements'\n }\n },\n {\n $lookup: {\n from: \"user\",\n localField: \"cashierBoxes.inOutMovements.createdBy\",\n foreignField: \"_id\",\n as: \"cashierBoxes.inOutMovements.createdBy\",\n pipeline: [\n {\n $project: {\n _id: 1,\n name: 1\n }\n }\n ]\n },\n },\n {\n $unwind: {\n path: \"$cashierBoxes.inOutMovements.createdBy\",\n preserveNullAndEmptyArrays: true,\n },\n },\n { $sort: { \"cashierBoxes.inOutMovements.date\": -1 } },\n { $skip: +pageNum * +limit },\n { $limit: +limit },\n {\n $group:\n {\n _id: {\n _id: \"$cashierBoxes._id\",\n name: \"$cashierBoxes.name\",\n },\n inOutMovements: {\n $push: {\n $cond: [\n { $gte: [\"$cashierBoxes.inOutMovements.amount\", 0] },\n '$cashierBoxes.inOutMovements',\n \"$$REMOVE\"\n ]\n }\n },\n }\n },\n ]).allowDiskUse(true);\nconst StoreSchema = new Schema({\n...\n cashierBoxes: {\n type: [CashierBoxSchema],\n required: true,\n default: []\n },\n...\n})\nStoreSchema.index({\n \"_id\": 1,\n \"cashierBoxes._id\": 1,\n \"cashierBoxes.inOutMovements.date\": - 1\n})\n\nconst CashierBoxSchema = new Schema({\n...\n createdBy: {\n ref: 'user',\n type: Schema.Types.ObjectId,\n },\n inOutMovements: {\n type: [CashierBoxActivity],\n required: false,\n default: []\n },\n...\n})\n\nconst CashierBoxActivity = new Schema({\n...\n date: {\n type: Date,\n required: true\n },\n createdBy: {\n ref: 'user',\n type: Schema.Types.ObjectId,\n required: false\n }\n...\n});\n", "text": "I have this stores collection with embedded docs cashierBoxes and cashierBoxes has embedded docs “inOutMovements”I have been trying to make a query to get the inOutMovements sorted by date (descending) with pagination using ($sort, $skip and $limit) but I’m getting the error “Sort exceeded memory limit of 33554432 bytes”, I already added allowDiskUse(true) and an index to my store schema.Am I doing something in the wrong order during aggregate?", "username": "Rene_Mercado" }, { "code": "allowDiskUsetrue", "text": "Hello @Rene_Mercado ,Welcome to The MongoDB Community Forums! error “Sort exceeded memory limit of 33554432 bytes”Can you please confirm if your index is in accordance to the sort required?\nThis is because, while performing a sort, MongoDB first attempts to fetch documents using order specified in the index. When no index is available it will try to load the documents into memory and sort them. By default this sort memory limit is 32 MB and if it reaches this limit, above error is thrown.Below are some ways to solve this.A look at how the Bucket Pattern can speed up paging for usersA look at how to speed up the Bucket PatternRegards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Suggestions - Aggregation query for nested embedded documents
2022-12-19T16:15:59.374Z
Suggestions - Aggregation query for nested embedded documents
1,196
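For reference, the bucket-pattern advice in that thread usually plays out as moving the movements out of the deeply nested array so an index can serve the filter and sort directly. A hypothetical restructuring, with collection and field names made up for illustration:

    // One document per in/out movement, referencing its store and cashier box
    db.cashierBoxMovements.insertOne({
      storeId: ObjectId(),        // placeholder ids; use the real references in practice
      cashierBoxId: ObjectId(),
      date: new Date(),
      amount: 125.5,
      createdBy: ObjectId()
    })

    // Index that matches the filter and the sort, so no in-memory sort is needed
    db.cashierBoxMovements.createIndex({ cashierBoxId: 1, date: -1 })

    // Pagination becomes a plain indexed query
    const boxId = ObjectId()      // placeholder
    const pageNum = 0, limit = 20
    db.cashierBoxMovements
      .find({ cashierBoxId: boxId, amount: { $gte: 0 } })
      .sort({ date: -1 })
      .skip(pageNum * limit)
      .limit(limit)

With the movements in their own documents, the 32 MB in-memory sort limit stops being a factor because the sort is satisfied by the index.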
null
[ "queries", "python", "atlas-cluster" ]
[ { "code": "def submit(self, obj):\n emp_document = DATA_COLLECTION.find()\n self.layout.clear_widgets()\n\n for doc in emp_document: \n print(doc['empn'], doc['mobile'], doc['message'], doc['msg_ent_dttm'])\n self.empn = TextInput(text=str(doc['empn']))\n self.mobile_no = TextInput(text=str(doc['mobile']))\n self.message = TextInput(text=str(doc['message']))\n\n self.layout.add_widget(self.empn)\n self.layout.add_widget(self.mobile_no)\n self.layout.add_widget(self.message)\n return self.layout\n", "text": "i have used pymongo in my Python coded programming , the entire program is pasted belowfrom pymongo import MongoClient\nfrom kivy.app import App\nfrom kivy.uix.label import Label\nfrom kivy.uix.gridlayout import GridLayout\nfrom kivy.uix.textinput import TextInput\nfrom kivy.uix.button import Buttondb_handle = “mongodb+srv://kcpcloudempn:[email protected]/test”\ndb_client = MongoClient(db_handle)\nDB_NAME = db_client[‘db_notify’]\nDATA_COLLECTION = DB_NAME[‘data_emp_notify’]class MainApp(App):\ndef build(self):\nself.layout = GridLayout(cols=1, row_force_default=True,row_default_height=50,)\nsubmit = Button(text=“InBox”, on_press=self.submit)\nself.layout.add_widget(submit)\nreturn self.layoutMainApp().run()this is running very fine on desktop, when convert this program into .apk file using google colab the app is crashing,.Please help with solution or with suitable references.", "username": "U_V_V_Narasimha_Chary" }, { "code": "", "text": "I was facing same problem with my apk website. Thank you for this guidance.", "username": "Sean_kapri" } ]
Making .apk file with pymongodb, python and google colab
2022-11-30T09:55:59.275Z
Making .apk file with pymongodb, python and google colab
1,739
null
[ "aggregation", "queries", "java", "indexes" ]
[ { "code": "", "text": "Hi Team,Regards,\nRama", "username": "Laks" }, { "code": "", "text": "Hi Team,Can you please provide pointers for the above two queries.Regards,\nLaks", "username": "Laks" } ]
Updates with aggregation pipeline
2022-12-23T10:21:01.508Z
Updates with aggregation pipeline
1,025
null
[ "compass" ]
[ { "code": "", "text": "Hey,I am new to mongoDB and I have a Spark App that writes about 115 M Documents to my MongoDB, that is running on t3.2xl machine (8 cores, 32 Gb memory, gp3-EBS volume with 3000 IOPS baseline).My Spark app is running on an EMR cluster with 4 workers (r6g.16xl: 64 cores, 488 Gb memory) and reads the data from S3, does some minor transformations and then writes to my MongoDB. The storage size of the collection is in MongoDB about 15Gb, thats about 1Kb per doc, in raw JSON format it’s about 80 Gb I think.\nThe data insertion takes about 9 to 10 mins and the cpu usage of my EMR cluster is less than 40% on each node. I also did some test runs with just 2 workers, the cpu usage was a bit higher, but still took bout 10 mins. So I am pretty sure, that the issue is with MongoDB.\nThe MongoDB’s CPU usage is about 60-70% while insertion and the IOPS at max 500. But in MongoDB Compass I can see at slowest operation section that some ops have a waiting time bout 20000ms regularly, that looks like a problem but I dont see what is limiting my MongoDB instance …\nthe average operation per sec is bout 400.0k.do u have any suggestions? thanks!", "username": "Elias_Strauss" }, { "code": "", "text": "Is your MongoDB a single node? Given the small size of your documents, is your schema such that you have one document per data point ?", "username": "Robert_Walters" }, { "code": "", "text": "yes, single node. the schema is the same for every document. the data is basically an activity log, so each data point is one document, which represents an action.", "username": "Elias_Strauss" }, { "code": "", "text": "This is where the issue is. MongoDB documents are not really a one to one with a row in a relational database. Here is a good youtube video describing the differences MongoDB Schema Design Best Practices - YouTubeThat said, if you have data that can’t be bucketed such as IoT/Time-series data you can create a time-series collection. https://www.mongodb.com/docs/v5.0/core/timeseries-collections/ This has some restrictions noted in the documentation.Also note that MongoDB scales horizontal through Sharding. If you’ve bucketed your data and are hitting insertion issues consider sharding. https://www.mongodb.com/docs/manual/sharding/", "username": "Robert_Walters" }, { "code": "", "text": "Alright, I will check this out. Thanks for the help. Happy Holidays!", "username": "Elias_Strauss" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Data Insertion is slow (115M docs, about 80Gb raw JSON, 15GB in MongoDB)
2022-12-21T12:38:48.649Z
Data Insertion is slow (115M docs, about 80Gb raw JSON, 15GB in MongoDB)
1,975
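If the data really is one document per log event and cannot be bucketed by hand, the time-series collection route suggested in that thread looks roughly like this on MongoDB 5.0 and newer; field names are illustrative:

    db.createCollection("activity_events", {
      timeseries: {
        timeField: "timestamp",   // when the event happened (required)
        metaField: "source",      // per-series metadata, e.g. the user or device
        granularity: "seconds"
      }
    })

    db.activity_events.insertOne({
      timestamp: new Date(),
      source: { userId: 42, app: "mobile" },
      action: "login"
    })

MongoDB buckets the documents internally, which typically shrinks both the storage footprint and the per-document insert overhead compared to 115M standalone documents.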
null
[]
[ { "code": "", "text": "{\n“_id” : 1,\n“student” : “Maya”,\n“homework” : [ 10, 5, 10 ],\n“quiz” : [ 10, 8 ],\n“extraCredit” : 0,\n“totalHomework” : 25,\n“totalQuiz” : 18,\n“totalScore” : 43\n}\n{\n“_id” : 2,\n“student” : “Ryan”,\n“homework” : [ 5, 6, 5 ],\n“quiz” : [ 8, 8 ],\n“extraCredit” : 8,\n“totalHomework” : 16,\n“totalQuiz” : 16,\n“totalScore” : 40\n}how to add new fields with array\nstudent element value add into one array like this.studentArray : [Maya, Ryan]", "username": "Min_Thein_Win" }, { "code": "", "text": "Hi there,Remember that not getting what you want is sometimes a wonderful stroke of luck.\nDalai LamaWhat did you try?", "username": "santimir" }, { "code": "", "text": "I want try retrieve by Id example _id = 2{\n“_id” : 2,\n“student” : “Ryan”,\n“homework” : [ 5, 6, 5 ],\n“quiz” : [ 8, 8 ],\n“extraCredit” : 8,\n“totalHomework” : 16,\n“totalQuiz” : 16,\n“totalScore” : 40,\n“studentArray” : [Maya, Ryan], //add new/create field for combine two element value from id 1 and 2\n}", "username": "Min_Thein_Win" }, { "code": "async function myAsyncCall(){\n const first = await db.collection.find({_id:2})\n const result = await db.collection.updateOne( \n {_id:1}, \n {$push:{studentArray:first.student}}\n )\n console.log(result)\n}\n\nmyAsyncCall()\n", "text": "Interesting question. I would do it in two steps, for example, maybe something like this:I am not sure how easy is to run this in MDB Charts.", "username": "santimir" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and update your sample documents so that we can cut-n-paste into our system.", "username": "steevej" } ]
Add value to new AddFields Array
2022-12-23T09:53:28.057Z
Add value to new AddFields Array
2,535
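The two-step read-then-update in that thread works, but the combined array can also be produced in one aggregation. A sketch assuming the documents live in a collection called scores:

    db.scores.aggregate([
      { $match: { _id: { $in: [1, 2] } } },
      { $sort: { _id: 1 } },                       // make the array order deterministic
      // collect the full documents and every student name
      { $group: { _id: null, docs: { $push: "$$ROOT" }, studentArray: { $push: "$student" } } },
      { $unwind: "$docs" },
      { $match: { "docs._id": 2 } },               // keep only the document we asked for
      // return that document with the combined array attached
      { $replaceRoot: { newRoot: { $mergeObjects: ["$docs", { studentArray: "$studentArray" }] } } }
    ])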
null
[ "aggregation", "queries", "node-js" ]
[ { "code": " {\n _id: \"Unit123\",\n name: \"Test Unit\",\n \"sections\": [\n {\n \"_id\": \"63925553eeb147dc9bd894e1\",\n \"name\": \"TestSection 1\",\n \"contents\": [\n {\n _id: \"1\",\n \"type\": \"TEXT_PAGE\",\n \"pageTitle\": \"Lorem ipsum dolor sit amet, consectetur adipiscing elit..\",\n \"pageContent\": \"Lorem ipsum dolor sit amet, consectetur adipiscing elit.\"\n },\n {\n _id: \"2\",\n \"type\": \"TEXT_PAGE\",\n \"pageTitle\": \"Lorem ipsum dolor sit amet, consectetur adipiscing elit..\",\n \"pageContent\": \"Lorem ipsum dolor sit amet, consectetur adipiscing elit.\"\n }\n ],\n },\n ],\n },\n {\n \"_id\": \"63925553eeb147dc9bd894e3\",\n unit: \"Unit123\",\n user: 3,\n lastContent: \"2\",\n progressions: [\n {\n _id: \"1\",\n isCompleted: true,\n },\n {\n _id: \"2\",\n isCompleted: true,\n },\n ]\n },\n {\n _id: \"Unit123\",\n name: \"Test Unit\",\n \"sections\": [\n {\n \"_id\": \"63925553eeb147dc9bd894e1\",\n \"name\": \"TestSection 1\",\n \"contents\": [\n {\n _id: \"1\",\n \"type\": \"TEXT_PAGE\",\n \"pageTitle\": \"Lorem ipsum dolor sit amet, consectetur adipiscing elit..\",\n \"pageContent\": \"Lorem ipsum dolor sit amet, consectetur adipiscing elit.\",\n \"contentProgression\": {\n _id: \"1\",\n isCompleted: true,\n },\n },\n {\n _id: \"2\",\n \"type\": \"TEXT_PAGE\",\n \"pageTitle\": \"Lorem ipsum dolor sit amet, consectetur adipiscing elit..\",\n \"pageContent\": \"Lorem ipsum dolor sit amet, consectetur adipiscing elit.\",\n \"contentProgression\": {\n _id: \"2\",\n isCompleted: true,\n },\n }\n ],\n },\n ],\n },\ndb.units.aggregate([\n {\n \"$unwind\": \"$sections\"\n },\n {\n \"$unwind\": \"$sections.contents\"\n },\n {\n \"$lookup\": {\n \"from\": \"unitProgressions\",\n let: {\n contentsId: \"$sections.contents._id\"\n },\n pipeline: [\n {\n \"$unwind\": \"$progressions\"\n },\n {\n $match: {\n $expr: {\n $eq: [\n \"$$contentsId\",\n \"$progressions._id\"\n ]\n }\n }\n },\n \n ],\n \"as\": \"sections.contents.unitProgression\"\n }\n },\n {\n \"$unwind\": {\n path: \"$sections.contents.unitProgression\",\n \"preserveNullAndEmptyArrays\": true\n }\n },\n \n])\n", "text": "Hi!I have this scenario:Doc01:Doc02:I want to achieve this structure:I’m stucked in turning the sections and contents a group after the lookup.\nThis is my query:I tried to use group but it only groups the parent fields.\nIs it possible to build this structure? Is there a better way to organize this schemas? It feels that I’ll have problems with this query speed.MongoDB PlaygroundThanks!", "username": "Rafael_Alencar1" }, { "code": "", "text": "I don’t really understand your $lookup. It looks wrong.I do not see anywhere how you know which top level document from unitProgressions to use based on the top level document from units. From the sample documents it would appear that the field unit of unitProgressions, value Unit123 in the sample, is related to the _id of the document in the units collection. This is nowhere to be seen in your pipeline. You do match the _id of the sub-documents, but we do not know which top level document.You also have a collection named users and from the fields of unitProgressions, it looks that a unitProgressions is specific to a given user. I do not see that in your lookup.So what is the use-case exactly? I suspect that you want the progression of a specific user for a specific unit in a single document.", "username": "steevej" }, { "code": "", "text": "Inside the unit I have the sections and contents of it.\nI want to connect the sections content with the progressions. 
Inside the progressions is only the contents progression.\nYou’re right the use case is: Get the Users progression of a Unit based on its contents.", "username": "Rafael_Alencar1" }, { "code": "", "text": "It is still not clear. In Doc01, the field sections is an array. In your sample you only have 1 section but being an array I suspect that you might have more than one. Otherwise having an array is useless. It looks like in Doc02, you are missing a field that indicates which sections from Doc01.", "username": "steevej" } ]
Lookup in two collections with nested documents
2022-12-15T19:26:53.094Z
Lookup in two collections with nested documents
1,822
null
[ "installation", "storage" ]
[ { "code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n authorization: 'enabled'\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n", "text": "This is my mongod.conf file, don’t know what’s happening can’t able to start mongo on AWS ec2, but the same process worked fine on my linux machine.", "username": "ajayjb_N_A" }, { "code": "path: /var/log/mongodb/mongod.log\n", "text": "This is not enough information for us to be able to help you.How do you start mongod?The purpose ofis to be able to investigate what is happening. So it will be very useful to share it.", "username": "steevej" }, { "code": "mongod/var/lib/mongodb/var/log/mongodb/mongod.log", "text": "make sure you have attached a disk large enough for data and that these two paths are accessible for mongod process (or change those paths suitably):", "username": "Yilmaz_Durmaz" } ]
Mongod is not starting, everything looks fine
2022-12-23T19:07:57.350Z
Mongod is not starting, everything looks fine
1,862
null
[ "crud", "mongodb-shell" ]
[ { "code": "db.users.updateMany(\n {},\n { $set: { \"customer_file_settings.custom_fields\": \"$custom_fields\" } }\n)\n", "text": "Hey there!I have a users collection. In this users collection, an array “custom_fields” is contained, which contains further array objects. Now, I would like to move this array “custom_fields” to a new object in the users collection called “customer_file_settings”.Like: “user.custom_fields” to “user.customer_file_settings.custom_fields”.I tried the following code:Unfortunately, this did not work. $custom_fields is not presented as the array, but as exactly this plain string.Do you have any other suggestions?Thank you! ", "username": "Malte_Hoch" }, { "code": "", "text": "You need to use update with aggregation.", "username": "steevej" }, { "code": "var users = db.users.find({});\n\nusers.forEach(function(user) {\n db.users.updateOne(\n { _id: user._id },\n { $set: { \"customer_file_settings.custom_fields\": user.custom_fields } }\n );\n});\n", "text": "Thank you! I found another solution via JS:", "username": "Malte_Hoch" }, { "code": "", "text": "It is certainly slow as you do updateOne on all documents. A bulk write would be more efficient.You run the risk to have concurrent modification issues since you do the update in a second steps with data that you read in the first step. If user.custom_fields is updated by another process between the time you get the documents in find() and the time you issue the updateOne, you will set customer_file_settings.custom_fields with stale data.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongosh: Copy object to another, nested object
2022-12-22T13:42:53.419Z
Mongosh: Copy object to another, nested object
2,228
null
[ "queries", "dot-net" ]
[ { "code": " SP500;US23331A1097;D.R. HORTON\n DowJones;US4592001014;IBM\n SP500;US0200021014;ALLSTATE\n SP500;US4165151048;HARTFORD FINANCIAL SERVICES GROUP\n SP500;US42824C1099;HEWLETT PACKARD ENTERPRISE\n SP500;US5260571048;LENNAR\n SP500;US9113631090;UNITED RENTALS\n Nikkei225;JP3726800000;JAPAN TOBACCO\n Nikkei225;JP3942800008;YAMAHA MOTOR\n Nasdaq100;US3755581036;GILEAD SCIENCES\n Nasdaq100;US60770K1079;MODERNA\n SP500;US0010551028;AFLAC\n SP500;US03076C1062;AMERIPRISE FINANCIAL\n SP500;US09062X1037;BIOGEN\n SP500;US3024913036;FMC\n SP500;US3755581036;GILEAD SCIENCES\n SP500;US6703461052;NUCOR\n SP500;US74834L1008;QUEST DIAGNOSTICS\n SP500;US8760301072;TAPESTRY\n SP500;US9139031002;UNIVERSAL HEALTH SERVICES\n private async Task<List<T>> Paginate<T>(IMongoQueryable<T> query, PaginationData paginationData )\n {\n var list = await query.Skip(paginationData .PageNumber * paginationData .PageSize)\n .Take(paginationData .PageSize)\n .ToListAsync();\n return list;\n }\n ####### Pagenumber 0; Pagesize: 10\n SP500;US23331A1097;D.R. HORTON\n Nikkei225;JP3726800000;JAPAN TOBACCO\n DowJones;US4592001014;IBM\n SP500;US5260571048;LENNAR\n SP500;US0200021014;ALLSTATE\n SP500;US9113631090;UNITED RENTALS\n Nikkei225;JP3942800008;YAMAHA MOTOR\n SP500;US42824C1099;HEWLETT PACKARD ENTERPRISE\n SP500;US4165151048;HARTFORD FINANCIAL SERVICES GROUP\n **SP500;US09062X1037;BIOGEN**\n ####### Pagenumber 1; Pagesize: 10\n SP500;US6703461052;NUCOR\n Nasdaq100;US60770K1079;MODERNA\n SP500;US0010551028;AFLAC\n SP500;US8760301072;TAPESTRY\n SP500;US9139031002;UNIVERSAL HEALTH SERVICES\n SP500;US3024913036;FMC\n **SP500;US09062X1037;BIOGEN**\n Nikkei225;JP3436100006;SOFTBANK\n Nasdaq100;US3755581036;GILEAD SCIENCES\n SP500;US3755581036;GILEAD SCIENCES\n private async Task<List<T>> Paginate<T>(IMongoQueryable<T> query, PaginationData paginationData )\n {\n var result = await query.ToListAsync();\n var list = result.Skip(paginationData .PageNumber * paginationData .PageSize)\n .Take(paginationData .PageSize)\n .ToList();\n return list;\n }\n ####### Pagenumber 0; Pagesize: 10\n SP500;US23331A1097;D.R. HORTON\n DowJones;US4592001014;IBM\n SP500;US0200021014;ALLSTATE\n SP500;US4165151048;HARTFORD FINANCIAL SERVICES GROUP\n SP500;US42824C1099;HEWLETT PACKARD ENTERPRISE\n SP500;US5260571048;LENNAR\n SP500;US9113631090;UNITED RENTALS\n Nikkei225;JP3726800000;JAPAN TOBACCO\n Nikkei225;JP3942800008;YAMAHA MOTOR\n Nasdaq100;US3755581036;GILEAD SCIENCES\n ####### Pagenumber 1; Pagesize: 10\n Nasdaq100;US60770K1079;MODERNA\n SP500;US0010551028;AFLAC\n SP500;US03076C1062;AMERIPRISE FINANCIAL\n SP500;US09062X1037;BIOGEN\n SP500;US3024913036;FMC\n SP500;US3755581036;GILEAD SCIENCES\n SP500;US6703461052;NUCOR\n SP500;US74834L1008;QUEST DIAGNOSTICS\n SP500;US8760301072;TAPESTRY\n SP500;US9139031002;UNIVERSAL HEALTH SERVICES\n", "text": "Hello,\ni have big problems with pagination with the C# Driver (Driver Version 2.18.0 MongoDb Version 5.0.5). I tried a way with the build in Linq (MongoDB.Driver.Linq) and the way over the Fluent-Framework.The Problem is, after some invokes of my pagination method (see below) with the MongoDb-Linq, i got some duplicated values back. The duplication is not in the database itself, and also does not occure in a single invoke of the method. Before i invoke the pagination-method i do some sorting and filtering.It seems the Problem lies in the Take Method, skip works as expected. 
I tried the examples also with the fluent-framework and got exact the same strange behaviour (limit instead of take, makes the problem).Example Output without pagination and ordering and filtering (var result = await query.ToListAsync();):MongoDB-Driver LINQ: → Works NOT as expectedPagination Output: → BIOGEN is duplicated and the order is wrong compared to output aboveMy Workaround, which is pretty slow but gives the right results:Standard LINQ: → Works as expectedPagination Output: → No Duplicated Values → Output is like Output above.Does anyone have an idea what the problem here is? Is it a bug in the driver?Thanks in advance!", "username": "kyi87_N_A" }, { "code": "{\n \"aggregate\": \"LevermannScoreCollection\",\n \"pipeline\": [\n {\n \"$match\": {\n \"Date\": \"ISODate('2022-12-22T00:00:00Z')\"\n }\n },\n {\n \"$project\": {\n \"_v\": \"$Score\",\n \"_id\": 0\n }\n },\n {\n \"$unwind\": \"$_v\"\n },\n {\n \"$project\": {\n \"Isin\": \"$_v.Isin\",\n \"IndexType\": \"$_v.StockIndexType\",\n \"Result\": \"$$ROOT\",\n \"NameLower\": {\n \"$toLower\": \"$_v.StockInformation.Name\"\n },\n \"_id\": 0\n }\n },\n {\n \"$sort\": {\n \"Result._v.TotalScore\": -1\n } \n }\n ],\n \"cursor\": {},\n \"$db\": \"StockAnalysisDB\",\n \"lsid\": {\n \"id\": \"CSUUID('2624c3dd-f150-4b71-bfad-8853aef347c4')\"\n }\n}\n{\n \"aggregate\": \"LevermannScoreCollection\",\n \"pipeline\": [\n {\n \"$match\": {\n \"Date\": \"ISODate('2022-12-22T00:00:00Z')\"\n }\n },\n {\n \"$project\": {\n \"_v\": \"$Score\",\n \"_id\": 0\n }\n },\n {\n \"$unwind\": \"$_v\"\n },\n {\n \"$project\": {\n \"Isin\": \"$_v.Isin\",\n \"IndexType\": \"$_v.StockIndexType\",\n \"Result\": \"$$ROOT\",\n \"NameLower\": {\n \"$toLower\": \"$_v.StockInformation.Name\"\n },\n \"_id\": 0\n }\n },\n {\n \"$sort\": {\n \"Result._v.TotalScore\": -1\n }\n },\n {\n \"$skip\": \"NumberLong(0)\"\n },\n {\n \"$limit\": \"NumberLong(10)\"\n }\n ],\n \"cursor\": {},\n \"$db\": \"StockAnalysisDB\",\n \"lsid\": {\n \"id\": \"CSUUID('2624c3dd-f150-4b71-bfad-8853aef347c4')\"\n \n}\n", "text": "I logged the queries to see what really goes to MongoDB.This is the query which gives the right order:This is the query with pagination which gives the incorrect element order back:I dont get it… Seems okay", "username": "kyi87_N_A" } ]
Pagination with C# Driver not working as expected
2022-12-23T10:16:48.330Z
Pagination with C# Driver not working as expected
2,576
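One thing worth checking in that thread, since the logged pipelines look fine: the sort key Result._v.TotalScore is not unique, and MongoDB gives no stable order for tied values, so two otherwise identical queries can interleave ties differently, which is exactly how skip/limit pagination ends up repeating some rows and dropping others. A hedged sketch of the fix in mongosh, following the logged pipeline and assuming Isin is unique per element:

    db.LevermannScoreCollection.aggregate([
      { $match: { Date: ISODate("2022-12-22T00:00:00Z") } },
      { $unwind: "$Score" },
      // a unique tie-breaker after the real sort key makes the ordering deterministic,
      // so every page request sees the same global order
      { $sort: { "Score.TotalScore": -1, "Score.Isin": 1 } },
      { $skip: 0 },
      { $limit: 10 }
    ])

In LINQ terms that is an OrderByDescending on the score followed by a ThenBy on a unique field before the Skip/Take.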
null
[ "monitoring" ]
[ { "code": "", "text": "Hello Team,Recently in our production environment we are getting the following alert “Query Targeting: Scanned Objects / Returned has gone above 1000” continuously at least twice a week but when our team checks the profiler during the time of alert all the queries execution time was not more than 50ms and also all the queries which were running during the time of alert was insert queries.\nEven in performance advisor I didn’t get any suggestion to add new index based on the generated alerts. Can you please help me with this since it is production platform we are worried about this alert", "username": "Vignesh_Ramesh" }, { "code": "", "text": "Hi @Vignesh_Ramesh,Welcome to MongoDB community.I think the best for you to contact our Atlas support to look into your production specific workload.High query targeting usually indicate on suboptimal queries as more documents are scanned vs returned. However, if you don’t see ant impact in actual performance this alert might not be critical for you and you may ignore it or tune the threshold to fit your monitoring needs.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "How I can tune it so that it works properly? via indexing?", "username": "Chris_Heffernan" }, { "code": "", "text": "@Chris_Heffernan Usually yes, but it depends on the specific env.Look into performance advisor to see if anything is suggested.", "username": "Pavel_Duchovny" }, { "code": "", "text": "Any ideas if index creation will send Query Targeting: Scanned Objects / Returned alert?Thanks!", "username": "Kothan_Jayagopal1" }, { "code": "", "text": "It is possible to receive a Query Targeting alert for an inefficient query without receiving index suggestions from the Performance Advisor if the query exceeds the slow query threshold and the ratio of scanned to returned documents is greater than the threshold specified in the alert.I see the above Note from the below link:\nFix Query Issues — MongoDB AtlasWe are getting slammed with the above alerts without Index creation recommendations.Thanks\nKothan.", "username": "Kothan_Jayagopal1" } ]
Query Targeting: Scanned Objects / Returned has gone above 1000
2021-02-23T18:38:49.576Z
Query Targeting: Scanned Objects / Returned has gone above 1000
13,084
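To find which operation trips the alert without waiting for the Performance Advisor, comparing documents examined against documents returned for a suspect query reproduces the same ratio. A mongosh sketch with illustrative names:

    const stats = db.orders
      .find({ status: "open", region: "EU" })
      .explain("executionStats")
      .executionStats

    // the alert is driven by queries whose examined / returned ratio exceeds the threshold
    print("docs examined:", stats.totalDocsExamined,
          "docs returned:", stats.nReturned)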
null
[ "aggregation", "queries" ]
[ { "code": "{\n A: \"number\",\n B: [{C: \"number\"}, etc,]\n}\nC", "text": "The above is the schema of my documents.How to write an aggregation pipeline to find all documents where:", "username": "Big_Cat_Public_Safety_Act" }, { "code": "C100db.question.find(\n { \"A\": { $gt: 100 } },\n { \"B\": { \"$elemMatch\": { \"C\": { $gt: 100 } } } }\n)\n[\n { _id: ObjectId(\"63a6180ee1fc395f0e1e131a\"), B: [ { C: 101 } ] }\n]\n", "text": "Hello @Big_Cat_Public_Safety_Act what is the “maximum” of C ?Assuming you want that to be greater than 100 as well, you can use a query like this (you don’t need aggregation proper)…", "username": "Justin_Jenkins" } ]
MongoDB query to filter documents and subdocuments
2022-12-23T06:03:14.459Z
MongoDB query to filter documents and subdocuments
890
https://www.mongodb.com/…e_2_1024x512.png
[ "swift" ]
[ { "code": "", "text": "I have registered a notification handler on a collection as described herePer the documentation “This RealmCollectionChange resolves to an array of index paths that you can pass to a UITableView’s batch update methods.”I my use-case, I am not using a UITableView and the indexPaths have no relevance. Is there any way to use the indexPaths to identify the objects in the collection?", "username": "David_Morgereth" }, { "code": "", "text": "It believe a similar question was also asked on StackOverflow Getting Object Instead of IndexSee if that helps and if not, let us know.", "username": "Jay" }, { "code": "", "text": "Hi Jay - yeah thats my question :-). Thought I’d put a more than one line in the water :-). Thanks for your help", "username": "David_Morgereth" } ]
When using Realm change notifications is it possible to identify the object that was changed?
2022-12-22T20:28:58.021Z
When using Realm change notifications is it possible to identify the object that was changed?
1,043
null
[ "replication", "backup" ]
[ { "code": "", "text": "Hi Everyone!I have a couple of questions regarding the restore scenarios of MongoDB and hope that someone from the Community can share his experience.Since I’m just starting to work with Mongo and planning my environment, I want to deploy a production site and a DR one to which I’ll be performing test restores or where I restore everything in case of a disaster with my production site. The questions are:Many thanks in advance!Best regards,\nPetr", "username": "Petr_Makarov" }, { "code": "", "text": "First off, good on you for thinking about this ahead of time, and also thinking about how you’d proactively test your restore plans. That is a really great practice!There are a number of ways to handle this but one of the easier ways might be to use a member of the replica set themselves.For example, take one of the members offline (from the set) and then test your restore plan, or do any others testing you want, etc. Then you are both testing from the exact same setup and on real data. You don’t necessarily need to have a separate environment for doing this … and you can recycle that member back into the set when you are done.How often do you restore individual collections, databases, or documents? Or do you restore the entire replica set/shareded cluster?The frequency is driven more by your particular needs, and as to “what” exactly you restore, well that will have a lot to with how your data is setup.Generally I’d say restoring by database might be the easiest, but it really depends a lot on how exactly you plan to backup. Do you know?", "username": "Justin_Jenkins" }, { "code": "", "text": "Hi Justin!Many thanks for your reply!Nevertheless, I’m curious to know all pitfalls with the restore of the entire deployment. Let’s say I have a shared cluster on prod, each shard has 5 nodes. I also have Test/Dev site with a reduced capacity, I simply cannot deploy so many nodes/members there. Do I understand correctly that if restore my prod cluster to Test/Dev, I need to have 5 shards on the target site, but for example, each shard can consist of 3 members, not of 5?And the same about replica set: 5 members on prod. How many do I need to have on the target site before doing restore? I guess there is no strict requirement, 3 would be enough.I understand that the frequency is driven by my needs, just wondering what use cases exist and how other users do it. If you could share your experience, it would be awesome.Not sure about backup so far. First of all, I want to understand how will I restore. After that, I’ll decide on the best backup strategy that covers my needs.Best regards,\nPetr", "username": "Petr_Makarov" } ]
MongoDB restore scenarios
2022-12-21T14:38:57.375Z
MongoDB restore scenarios
1,292
null
[ "queries", "python" ]
[ { "code": "validate_collectionCollectionInvalidOperationFailure try:\n client = pymongo.MongoClient(...)\n db = client.get_default_database()\n db.validate_collection(wrong_name)\n except OperationFailure as e:\n print(e.code) # <-- reaches\n except CollectionInvalid as e:\n print(\"Invalid\") # <-- does not reach\n", "text": "database.validate_collectionThe functionvalidate_collection is supposed to raise CollectionInvalid when collection name is invalid as per documentation. However,I when try to catch exception, it does not reach that exception block. Instead I need to trap OperationFailure error object and use code=26 determine invalid collection name.Is this expected behavior?PyMongo : v3.11.3\nPytthon : v3.8.0", "username": "Harshith_JV" }, { "code": "", "text": "If the collection does not exist it cannot be validated, hence the OperationFailure.Collection validation is NOT whether or not the collection name is correct.", "username": "chris" } ]
Not able to catch CollectionInvalid exception
2022-12-23T08:23:27.010Z
Not able to catch CollectionInvalid exception
1,089
null
[ "atlas-data-lake" ]
[ { "code": "ObjectId(\"...", "text": "Hello guys, I’m unable to query json file from s3 bucket, that I have integrated through Data Federation. Getting this errorparse error on JSON document 1 from “s3://Drip/DripJson.json?delimiter=%2F&region=”: parse error at ObjectId(\"...: invalid character, correlationID =How to resolve this? Can’t we upload json with ObjectId to s3 and access through federated queries?", "username": "P_Vivek" }, { "code": "ObjectId(\"xxxxx\")\"_id\":{\"$oid\":\"5ca4bbc7a2dd94ee5816238c\"}", "text": "ObjectId(\"xxxxx\") is not JSON. For a proper MongoDB extended JSON with ObjectID I’d expect to see:\"_id\":{\"$oid\":\"5ca4bbc7a2dd94ee5816238c\"}How are you generating your files ?", "username": "chris" }, { "code": "", "text": "Hi thanks for ur reply. Have exported the json file from Studio3T from the existing mongodb collection we already have and uploaded it to S3. It usually exports as _id:ObjectId(\"\"), and long datatype as “age”: NumberLong(10) etc… Even from Mongodb compass it exports in the same way.", "username": "P_Vivek" }, { "code": "", "text": "My Compass install (1.33.1) and mongodb-database-tools(100.6.0) exporting as expectedWhat OSes, MongoDB version,compass and Studio3T are you using?", "username": "chris" }, { "code": "", "text": "Hi @P_Vivek, there is an unfortunate misunderstanding about JSON file structure versus json content we use in mongodb queries. It is actually eJSON when we want to use ObjectId, NumberLong, and other data types.MongoDB Extended JSON (v2) — MongoDB ManualI don’t know how S3 sends the file to your program, but when parsing you need to use EJSON parser.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi Chris, I’m using Mongodb V5, compass 1.34.2, Windows 11, Studio3T latest.", "username": "P_Vivek" }, { "code": "", "text": "Hi @Yilmaz_Durmaz thanks for ur clarification. The issues is studio3t is generating ejson while exporting, I’m uploading that json file to s3, when reading that as s3 object I’m getting above error.", "username": "P_Vivek" }, { "code": "", "text": "Hi @chris there is another option in studio3t to export the json like u mentioned, tried that now. Will check and get back.", "username": "P_Vivek" }, { "code": "", "text": "Hey @P_Vivek , I’m not sure that’s exactly what’s happening here, can you share the full error with the correlationID value?Atlas Data Federation can read both Extended JSON and JSON, I am expecting that Studio3t is doing something unusual and maybe exporting incorrect extended json somehow.", "username": "Benjamin_Flast" }, { "code": "", "text": "Hi Chris, Studio3T’s has options to export in the format you have mentioned. It is working now. 
Thanks for the great support.", "username": "P_Vivek" }, { "code": "{ \n \"_id\" : ObjectId(\"xxxxxxxxxxxxxxxxxx\"), \n \"sequenceId\" : NumberInt(xxxx), \n \"type\" : \"ONE_WAY\", \n \"area\" : {\n \"name\" : \"xxxxxxxxxx\"\n }, \n \"cardQuantity\" : 9000.0, \n \"startTime\" : ISODate(\"2022-10-01T00:12:10.000+0000\"), \n \"endTime\" : ISODate(\"2022-10-01T00:28:12.000+0000\"), \n \"status\" : \"COMPLETED\", \n \"billingStatus\" : \"UNAPPROVED\", \n \"unitId\" : NumberInt(xxx), \n \"uId\" : \"xxxxxx\", \n \"timezone\" : \"Asia/Kolkata\", \n \"minimumQuantityPercentage\" : 15.0, \n \"isFetched\" : false, \n \"resendCommandCount\" : NumberInt(x), \n \"createdAt\" : ISODate(\"2022-09-30T23:58:07.484+0000\"), \n \"createdBy\" : \"fsauser\", \n \"active\" : true, \n \"_class\" : \"com.lantrasoft.iot.api.fillingstation.model.Trip\", \n \"balanceVolume\" : 4064.465, \n \"dispensedQuantity\" : 5409.5689999999995, \n}\n", "text": "Hi @Benjamin_Flast Atlas Data Federation is not reading Extended Json. Studio 3T has another option to export usual Json format it is working fine in Data Federation.\nPls let me know how to make extended Json to work in Data Federation.\nBelow is the extended Json studio3T is exporting…", "username": "P_Vivek" }, { "code": "jq", "text": "That’s not extended json, its not even json. This is the format for mongosh/studio3t. If you’re using the tool as documented and not getting the correct results use their support/raise a bug, its what you paid for.Once your tools is exporting correct json or you switch to another tool that does then the Data Federation will work with its output.Tip: If you cannot parse your export with something like jq its not json.", "username": "chris" }, { "code": "{}[]\"\"{ \"key\": { \"just_a_sub_key\": \"value\" } }\n{ \"key\": { \"ejson_sub_key\": \"value\" } }\n$oid$date", "text": "TL:DR; good thing you got your problem fixed Hi again @P_Vivek , Let me extend (pun intended ) my last statement about the misunderstanding of JSON/eJSON with an example.JSON format has only 5 value types: numbers, true/false, objects enclosed in {}, arrays enclosed in [] and everything else is string enclosed in quotes\"\". and then all keys are strings again. oh, and its root is also an object.eJSON is a perfect JSON file with a trick: sub-keys in sub-objects can have a meaning.$oid and $date are two of those meaningful sub keys. if you read with a JSON reader they are just some subkeys, but if you read with an eJSON parser, the “values” they refer to will be converted to ObjectId and Date objects further down your program.a quick warning here is this: if you see any non-quoted variable name (other than true/false and numbers) then that file is not a valid JSON/eJSON file. you may say they are actually partial JavaScript files and are meant to be read without any parsing, mostly by the programs they are written for.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to read json from S3. "Invalid Json...ObjectId" error
2022-12-17T12:02:43.493Z
Unable to read json from S3. “Invalid Json…ObjectId” error
3,883
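For anyone hitting the same thing: the first export in that thread is mongosh/shell notation, not JSON. The same fields written as valid MongoDB Extended JSON, which Data Federation and any JSON parser accept, use $-prefixed sub-keys instead of constructors; the values below are illustrative:

    {
      "_id": { "$oid": "5ca4bbc7a2dd94ee5816238c" },
      "sequenceId": { "$numberInt": "1234" },
      "type": "ONE_WAY",
      "cardQuantity": { "$numberDouble": "9000.0" },
      "startTime": { "$date": "2022-10-01T00:12:10Z" },
      "active": true
    }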