image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"field-encryption"
]
| [
{
"code": "",
"text": "I have downloaded the libmongocrypt (for Client-Side field level encryption) zip file for windows using the link given on the Link page. I am trying to figure out what to do after the download. How to install it on the system?\nCan anyone help me?",
"username": "Kartik_Saini1"
},
{
"code": "",
"text": "Yes doc does not give any steps for Windows.It must be a simple install of that dll\nDid you try to unzip/extract that downloaded file?\nI think it creates a directory and extracts the file under whichever path you saved it\nAfter install you have to update your path and run test as per steps in the doc",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I did unzip the file. It contains 2 folders - bin and includes. The content of these folders are uploaded in the links below:Image ss hosted in ImgBBImage ss1 hosted in ImgBBIt is still not clear to me how do I install the library using these contents.\nNothing, in the case of installation in windows, is given in the docs as well.Can you give me step by step instructions to do so?\nThanks.",
"username": "Kartik_Saini1"
},
{
"code": "",
"text": "Sorry I don’t have detailed steps\nMay be you have to write c program to call this include file?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @Kartik_Saini1 !Can you provide more details on what you are trying to do:Based on the documentation link you referenced, I assume you may be trying to use the preview of Queryable Encryption. If so, you will need to meet all of the Queryable Encryption Installation Requirements.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I am trying to implement client-side field level encryption using a windows OS. I have already downloaded libmongocrypt zip file for windows 11 (from the Link) and unzipped it. I do not know how to install the libmongocrypt package from that extracted folder.",
"username": "Kartik_Saini1"
}
]
| Install libmongocrypt on windows 11 | 2022-09-13T08:52:31.293Z | Install libmongocrypt on windows 11 | 2,551 |
null | []
| [
{
"code": "",
"text": "as mentioned in topic when i visit MongoDB Student Pack I can see \" Claim Your Free MongoDB Certification\" on the page when clicked on “Start Learning with MongoDB University” it sends me to another page account.mongodb.com where i have to login with my github profile.but website gives me an error “We were unable to log you in with that login method. Ensure that you have a public verified email address set on your GitHub account.”My github accounts email setting is (unchecked) turned off “Keep my email addresses private” and showing my emails are visible.Thank you.",
"username": "Sumit_kumar8"
},
{
"code": "",
"text": "Note:\ni have verified github students accounti have made this current account with gmail idk if i will be eligible for certifications if i use another account for learning and github for certification.kindly help me on following as well.",
"username": "Sumit_kumar8"
},
{
"code": "",
"text": "Hi @Sumit_kumar8Welcome to the forums. Could it be that you may have unchecked the Keep my email addresses private ” without actually designating a public email address? Please see these steps in our docs here.",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "Thank you turned out that solution worked now i can continue with my education.",
"username": "Sumit_kumar8"
},
{
"code": "",
"text": "Hi can you help me how to earn free DBA voucher. what courses should I do to earn free DBA voucher.",
"username": "kalle_preetham"
},
{
"code": "",
"text": "Hi I have been verified for github student account. So to get free certification should I complete learning paths\nor I will get direct like voucher to my mail.kindly help me in this issue.",
"username": "kalle_preetham"
},
{
"code": "",
"text": "Hi @kalle_preethamIn order to be eligible for the DBA voucher, you’ll have to complete the DBA learning path.Once you’ve completed the learning path, you can follow the instructions for a voucher on your personal profile page at MongoDB Student Pack.Good luck!",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "",
"username": "Lieke_Boon"
}
]
| Cannot access MongoDB University with Github student account | 2022-08-31T20:01:16.507Z | Cannot access MongoDB University with Github student account | 5,700 |
null | []
| [
{
"code": "_id_directoryperdb=true",
"text": "Hi all,I updated a system from MongoDB 4 to MongoDB 5 and observe much larger index sizes in MongoDB 5 than in MongoDB 4. I now run two systems in parallel (one with MongoDB 4 and one with MongoDB 5) and these are my observations for a collection with lots of data:Other indexes also increase, maybe because of the increased space for the _id_ field. What can be the cause of the index growth? The only change in configuration I did was to seperate the database to an own directry (directoryperdb=true).Records are inserted to the collection with a rate of about 1000 docs/second and get removed by a TTL-index.Thanks for any hint\nFranzPS: The update was not a real update. I could just install the new system with MongoDB 5 and start with a fresh database.",
"username": "Franz_van_Betteraey"
},
{
"code": "_id",
"text": "Hi @Franz_van_Betteraey,That’s weird and most probably unexpected.\nAre the documents exactly the same? (i.e content of the _id identical?)\nDoes the _id contain the default ObjectId or something else? If it’s something else, does its size varies?\nWhich versions of MongoDB are you using exactly? This could help to track a ticket eventually.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "idorg.bson.types.ObjectId \"_id\" : ObjectId(\"623f2e0f200e061cb71ca9ae\")\n",
"text": "Hi @MaBeuLux88,the versions are 4.2.19 and 5.0.8. The document content is generated (test data), thus not exactly the same but comparable. My client is a SpringBoot Application using the Spring Data MongoDB framework. The id is of the org.bson.types.ObjectId type. I do not set the id myself. This is done by MongoDB (or the Spring Framework). The ids look ‘normal’ like this (in both server versions):With the server update I also updated the client to use the java driver version 4.6.0 instead of 3.11.2. I have also observed a drop in performance here (in my use case). I cannot say whether this is connected to the larger index. It could also be due to connection pooling or something else. When I use the old driver version (also with the new MongoDB 5 version), I do not observe any performance loss. Therefore I think the problem is more on the client side.Thank you for your efforts\nFranz",
"username": "FrVaBe"
},
{
"code": "_id",
"text": "The first thing that comes to mind when I see these indexes is to understand how the index was built and how the collection lived so far.When an index is freshly built (as it’s _id in this case, when the collection was created) it’s very compact and optimized. But as docs are added, removed, added, removed and updated, the entries start to spread and keep space in between.If you rebuild an index it will be very compact but also have no space in it, if you then add things to it that are spread through the index it can rapidly grow as it has to split every block to make room for new entries.So depending if the collection are freshly loaded or used for years, this can make a huge difference. It doesn’t mean that this makes the index less efficient though. Performance issue could be related to a bunch or other reasons.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "The observations were made on a fresh collection. But it is good to know that there is no fundamental changes expected here.\nI will try to test this again in isolation, so that it can be reproduced in case of doubt. I still need time for that though. Thanks for now!",
"username": "Franz_van_Betteraey"
},
{
"code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: c:\\Progs\\mongodb-win32-x86_64-2012plus-4.2.22\\data\\\n journal:\n enabled: true\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: c:\\Progs\\mongodb-win32-x86_64-2012plus-4.2.22\\log\\mongod.log\n\n# network interfaces\nnet:\n port: 27020\n bindIp: 127.0.0.1\n",
"text": "Hi @MaBeuLux88 and others,I have to come back to this issue because it looks like my observation was not accidental. To proof my observation I now did a dedicted test with the following test stepsTest-Result:Do you know what causes this difference? I can provide the test data and also the collection stats() reports if you would like to reproduce the behaviour.Kind regards\nFranzTest environmentMongoDB Configuration",
"username": "Franz_van_Betteraey"
},
{
"code": "",
"text": "I did further research and observed the following:I would appreciate it very much if my observations here were confirmed by official side. Then I could exclude a cause for this observation on my side (which I do not see).",
"username": "Franz_van_Betteraey"
},
{
"code": "\t_id index size vs. 4.2.22\t\n4.2.22\t2088960\t100.00%\n4.4.16\t2805760\t134.31%\n5.0.12\t2809856\t134.51%\n6.0.1\t2650112\t126.86%\n\t\t\n\tsecondary index size vs. 4.2.22\t\n4.2.22\t1277952\t100.00%\n4.4.16\t1417216\t110.90%\n5.0.12\t1417216\t110.90%\n6.0.1\t1417216\t110.90%\n_id_id",
"text": "Hi @Franz_van_BetteraeyI did a similar repro using a similar document sizes, 100,000 of them, using the procedure you described and came across these results:Before taking the size of each collection, I executed db.adminCommand({fsync:1}) to ensure that WiredTiger does a checkpoint. This will make the sizes consistent as written on disk. Without fsync, you might find that the sizes keeps fluctuating before it settles after a minute (WiredTiger does a checkpoint every minute).In addition to the _id index, I also created a secondary index just to double check.What I found is that secondary index sizes are quite consistent from 4.4 to 6.0, with 4.2 being the odd one out. With regard to _id, 4.4 to 6.0 are about 130% larger than 4.2.I believe what you’re seeing was caused by the new-ish (from MongoDB 5.0) WiredTiger feature of Snapshot History Retention. The introduction of this feature changes a lot of WiredTiger internals, and this is one of the side effect of the change. To be complete, this issue was known, and was mentioned in SERVER-47652, WT-6082, and WT-6251.Hope this explains what you’re seeing here Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @kevinadi,that’s just the information I was looking for. Big thanks! I also posted this question on SQ here. Feel feel to give your answer also on this site and I will be glad to accept ist.Kind regars\nFranz",
"username": "Franz_van_Betteraey"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Larger indexes in MongoDB 5 compared to MongoDB 4 | 2022-05-30T13:09:28.749Z | Larger indexes in MongoDB 5 compared to MongoDB 4 | 3,928 |
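A minimal mongosh sketch of the size-comparison procedure described in the thread above (the database and collection names are placeholders; run the same steps on each server version being compared):

```js
// Hypothetical database/collection names used for illustration only.
const coll = db.getSiblingDB("test").coll;

// Force a WiredTiger checkpoint so the on-disk sizes have settled, as noted above.
db.adminCommand({ fsync: 1 });

// indexSizes reports the on-disk size of each index in bytes.
const stats = coll.stats();
printjson(stats.indexSizes);
print("total index size (bytes):", stats.totalIndexSize);
```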
null | [
"python",
"atlas-cluster"
]
| [
{
"code": "",
"text": "Here is the error code;[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed:\ncertificate has expired (_ssl.c:997)')>, <ServerDescription (‘ac-hokxrtg-shard-00-02.kegkbmg.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘ac-hokxrtg-shard-00-02.kegkbmg.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:997)’)>]>I already installed certifi, pymongo[srv], dnspython and python-dotenv.I checked for TSL 1.2 in the internet properties and its checked. Also tried disabling 1.0 but it didnt work. Also tried downloading a security certificate that worked for someone else. The certificate was posted under a similar issue as this one.",
"username": "Edwin_Garcia"
},
{
"code": "",
"text": "Found the solution, no worries! Thank you for whoever reads this",
"username": "Edwin_Garcia"
},
{
"code": "",
"text": "https://www.mongodb.com/community/forums/t/m220-ticket-connection-ssl-certificate-verify-failed-certificate-has-expired-on-mongo-atlas/177826?u=edwin_garcia",
"username": "Edwin_Garcia"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| ServerSelectionTimeoutError; SSL: CERTIFICATE_VERIFY_FAILED | 2022-09-14T03:44:00.367Z | ServerSelectionTimeoutError; SSL: CERTIFICATE_VERIFY_FAILED | 3,319 |
null | [
"cluster-to-cluster-sync"
]
| [
{
"code": "",
"text": "I have two Mo Primary clusters which are free clusters and if I would like to sync them using Mongosync the clusters are not responding and Since they are in 5.0.12 the process is not smooth, how to upgarde primary cluster form version 5.0.12 to 6 without upgrading it to dedicated cluster.",
"username": "Ch_Sadvik"
},
{
"code": "mongosync",
"text": "Hi @Ch_Sadvik welcome to the community!Unfortunately the shared tier clusters (M0, M2, and M5) are not supported by mongosync at this moment. There are more limitations as per the page mongosync limitations; please check out the linked page for more information.Also, it is currently not possible to upgrade the version a shared tier cluster is running. For this, please check out Atlas M0 (Free Cluster), M2, and M5 LimitationsBest regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How MongoSync can be done if I want to Sync two M0primary atlas clusters which are of Mongo 5.0.12 | 2022-09-12T10:08:19.201Z | How MongoSync can be done if I want to Sync two M0primary atlas clusters which are of Mongo 5.0.12 | 1,878 |
null | [
"production",
"c-driver"
]
| [
{
"code": "",
"text": "Announcing 1.22.0 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.Bug fixes:Bug fixes:Improvements:Features:Thanks to everyone who contributed to this release.",
"username": "Kevin_Albertson"
},
{
"code": "mongoc_client_encryption_rewrap_many_datakey",
"text": "The C driver 1.22.0 release has a known possible data corruption bug in mongoc_client_encryption_rewrap_many_datakey when using libmongocrypt versions less than 1.5.2. Please upgrade version 1.22.1 or higher.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB C Driver 1.22.0 Released | 2022-06-29T16:49:21.910Z | MongoDB C Driver 1.22.0 Released | 2,531 |
null | [
"compass"
]
| [
{
"code": "",
"text": "While inserting Json data into Mongodb using Insert doc using MongoDB Compass v1.33\nIt shows “\\n” and “” for the fields\" as shown below\\n “item”: [\\n {\\n “name”: “Introduction to hpin”,\\n “item”: [\\n {\\n “name”: “hpind Overview”,\\nit looks fine in the text document json document\nany suggestions ?",
"username": "Girish_V"
},
{
"code": "",
"text": "Hi @Girish_V , this behavior is not necessarily expected–I am looking into this but could not reproduce the error. Would you mind providing a screenshot that demonstrates this behavior? Feel free to message me directly if you would not like to post the screenshot publicly.",
"username": "Julia_Oppenheim"
}
]
| While adding json document to using MongoDB Compass it adds "/n" in the document | 2022-09-13T11:35:47.294Z | While adding json document to using MongoDB Compass it adds “/n” in the document | 1,321 |
null | [
"python",
"production",
"motor-driver"
]
| [
{
"code": "",
"text": "We are pleased to announce the 3.1 release of Motor - a coroutine-based API for non-blocking access to MongoDB in Python. Motor 3.1 brings support for MongoDB 6.0.\nFor more information, see the full changelog .See the Motor 3.1 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!",
"username": "Steve_Silvester"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Motor 3.1.0 Released | 2022-09-13T19:41:09.261Z | MongoDB Motor 3.1.0 Released | 2,465 |
[
"queries",
"node-js"
]
| [
{
"code": "const {MongoClient} = require('mongodb');\nconst uri = \"mongodb://localhost:27017\";\nconst database = 'ecommerce'\n\nconst client = new MongoClient(uri);\n\nasync function getData(){\n let result = await client.connect();\n let db = result.db(database);\n let collection = db.collection('phones');\n let response = await collection.find({}).toArray();\n console.log(response);\n\n}\n\ngetData();\n\n",
"text": "\n\nimage1558×848 69 KB\n",
"username": "Satyam_Chaudhary"
},
{
"code": "",
"text": "ECONNREFUSED indicates that there is not server listening at the given address/port.Make sure mongod is running and listening on the appropriate address and port.Try with 127.0.0.1 rather than localhost. The part of the error ::1:27017 seems to indicate that localhost resolve to IPv6.",
"username": "steevej"
},
{
"code": "",
"text": "Thanku so muchhhhhhhh sir",
"username": "Satyam_Chaudhary"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Trying to connect mongodb with nodejs | 2022-09-13T10:02:17.183Z | Trying to connect mongodb with nodejs | 1,713 |
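A minimal sketch of the fix suggested in the thread above, using the same database and collection names as the original snippet and connecting to 127.0.0.1 explicitly so that "localhost" is not resolved to the IPv6 address ::1:

```js
const { MongoClient } = require('mongodb');

// Use the IPv4 loopback address explicitly instead of "localhost".
const uri = 'mongodb://127.0.0.1:27017';
const client = new MongoClient(uri);

async function getData() {
  try {
    await client.connect();
    const response = await client.db('ecommerce').collection('phones').find({}).toArray();
    console.log(response);
  } finally {
    await client.close();
  }
}

getData();
```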
null | [
"queries",
"change-streams",
"realm-web"
]
| [
{
"code": "private async initWatcher() {\n\n if (this.debug) console.log(\"[SiteService] - initWatcher - checking existent watcher...\");\n // closing the existing watcher to avoid memory leaks\n if (this.watcher) this.watcher.return(undefined);\n if (this.debug) console.log(\"[SiteService] - initWatcher - defining new watcher...\");\n\n // creating the new watcher \n this.watcher = this._sitesCollection.watch();\n\n // listening for changes\n for await (const change of this.watcher) {\n\n if (this.debug) console.log(\"[SiteService] - initWatcher - watch operationType: \", change.operationType);\n await this.fetchSites();\n await this.fetchDashboardSites();\n }\n }\n",
"text": "Hi there,I have an issue with watch() function fo realm web sdk collections.When I insert a new document in the collection, the change event get trapped and I make a\ncollection.find() to update the list of document. The issue is that the newly created document\nis not present in the find() result.I think that the issue is related to Atlas cluster default read and write concerns…\ni have a mongodb 5.0 cluster M10 and the default readConcern is local while the\ndefault write concern is majority. I was unable to modify the default readConcern and\nI have not found anything about specifyng readConcern during find() operation (web sdk).here is a snippet of code to clarify what I’m tryng to do:",
"username": "Armando_Marra"
},
{
"code": "\nawait this._realmSvc.currentUser.refreshAccessToken();\n\n\n",
"text": "I think that I filnally managed to solve the issue.The problem was due to the sync functionality paired with the WEB SDK use of watch().I think that the WEB SDK uses the GraphQL API or the Data API under the hood, so when a call is done by the client, a JWT token is passed in the bearer of the http call.\nI noticed that the JWT token includes the custom user data, writePartition array included, that is the list of partitions the user have access to. This token remains the same until it expires.\nEvery time I create a new site, I generate a new partition and push that partition to the writePartitions array of the user’s custom data collections, but the JWT token stored in localstorage of the client does not have this newly created partition in his writePartitions array.\nThis cause the subsequent http calls to transfer a wrong list of partition to the server that returns a wrong list of sites inside the watch() functions.I solved this adding a forced refresh of the access token before the find() call:after this, all became working like a charm.",
"username": "Armando_Marra"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Realm WEB SDK collection watch and readConcern | 2022-09-07T10:08:26.165Z | Realm WEB SDK collection watch and readConcern | 2,737 |
[]
| [
{
"code": "",
"text": "I have done all things. I set the path variable to the bin and deleted and reinstalled MongoDB multiple times. but mongobd/mongo command is not working in my cmd.\nrather mongod work perfectly. please help me to fix it.\n",
"username": "Satyam_Chaudhary"
},
{
"code": "mongomongosh",
"text": "The mongo command line tool is no longer shipped with MongoDB version 6.0. This tool is deprecated and replaced by the new mongosh shell. This tool may have been installed depending on how you installed the MongoDB package. If it was not you can always download it.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongo/mongodb command not working and environment variable path is also set | 2022-09-13T09:34:37.268Z | Mongo/mongodb command not working and environment variable path is also set | 25,399 |
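As a quick follow-up to the mongosh suggestion above, a minimal check (assuming mongosh has been installed, either with the server package or separately):

```js
// Start the new shell from cmd with:   mongosh "mongodb://127.0.0.1:27017"
// Then, inside mongosh, a simple liveness check against the server:
db.runCommand({ ping: 1 });   // expected result: { ok: 1 }
```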
null | [
"compass"
]
| [
{
"code": "connection <monitor> to 13.245.226.234:27017 closed\n",
"text": "this is the error messagehow do i fix it?",
"username": "Josephine_Adigwe"
},
{
"code": "",
"text": "Have you whitelisted your IP?\nSomething is blocking your connection\nDid you try from another location/network or your mobile hotspot\nAre you using any VPN,proxy,firewall etc?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I did not whitelist my IP, and I don’t understand what you mean by if I tried another location. Lastly I’m not using any vpn or proxy",
"username": "Josephine_Adigwe"
},
{
"code": "",
"text": "eitherTo check Atlas side,If you still cannot connect, then try a different PC or different internet.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I allowed access fro anywhere and it worked",
"username": "Josephine_Adigwe"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| I cannot connect my cluster to mongoDB compass | 2022-09-05T14:57:11.633Z | I cannot connect my cluster to mongoDB compass | 3,700 |
[]
| [
{
"code": "",
"text": "I have done all things. I set the path variable to the bin and deleted and reinstalled MongoDB multiple times. but mongobd/mongo command is not working in my cmd.\nrather mongod work perfectly. please help me to fix it.\n",
"username": "Satyam_Chaudhary"
},
{
"code": "",
"text": "What is your mongodb version?\nmongo is not packaged with latest versions as it is deprecated\nYou have to use mongosh which must be available in your bin\nJust issue mongosh and see if you can connect",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I’m using latesst version of mongodb(6.0.1)\n\nimage610×517 11.2 KB\n\n\nimage726×793 42.3 KB\n",
"username": "Satyam_Chaudhary"
},
{
"code": "",
"text": "please watch my reply",
"username": "Satyam_Chaudhary"
},
{
"code": "",
"text": "Please check contents of your mongodb/bin directory\nDid you try mongosh?\nIf mongosh shell is not in your bin you can always install it separately and try to work with mongodb",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongo/mongodb command not working even environment variable path is set | 2022-09-13T09:30:13.314Z | Mongo/mongodb command not working even environment variable path is set | 5,428 |
null | [
"data-modeling"
]
| [
{
"code": "ApplicationsRunsRunApplication",
"text": "Hello everyone,So here is the issue:\nWe have two collection : Applications and Runs, and each Run document represents an execution of a specific Application (also a document).Our UI lets users filter the Runs and next to each run we wish to show the application name, and application labels. e.g. Run number 1234 app_name: “hello_world_app” app_labels: [‘latests’ ,‘test’].My problem is that Mongo is slow at “joins” (lookups) so referencing an external applciation is not an option, therfore I have to keep application info embeded in the Run document - the downside of this approach is that applications may be renamed, but usually it’s labels are being editited, so emeddeing will keep an “old” version of that application.I will be very happy to get an idea. (by the way just finished the M320 online course)",
"username": "Uri_Grinberg"
},
{
"code": "",
"text": "If you have the appropriate indexes the following is not usually an issue.Mongo is slow at “joins” (lookups)I am not too sure about the use cases but I would believe that the embedded document is best. I would not like to see an old Run with the new Application name. Keeping the historical name might be better. Also if the renaming is not so frequent, you might update both collections to keep things in sync.",
"username": "steevej"
}
]
| Data modeling issue - Application and Runs | 2022-09-13T10:25:02.533Z | Data modeling issue - Application and Runs | 1,133 |
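For the referencing alternative weighed in the thread above, a hedged sketch of what the indexed $lookup could look like (the field names and the UI filter are assumptions, not taken from the original schema; since the join key is the referenced _id, it is covered by the default index):

```js
// Assumption: each Run stores the _id of its Application in "application_id".
db.runs.aggregate([
  { $match: { status: "finished" } },      // hypothetical filter coming from the UI
  { $lookup: {
      from: "applications",
      localField: "application_id",
      foreignField: "_id",                 // _id is always indexed, so the join stays cheap
      as: "application"
  } },
  { $unwind: "$application" },
  { $project: { "application.name": 1, "application.labels": 1 } }
]);
```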
null | [
"serverless"
]
| [
{
"code": "",
"text": "Hi all,We’ve got datadog integration working for MongoDB Atlas Cluster, works great, but we’re moving over to Atlas Serverless and I was wondering whether to ever expect monitoring integrations for this product such as datadog ? How are other companies monitoring Serverless ?Mike",
"username": "Mike_Corlett"
},
{
"code": "",
"text": "Hi @Mike_Corlett - Welcome to the community Datadog integration with serverless instances is not currently available and is not mentioned in the serverless limitations documentation (no “Coming Soon” check). If you would like this feature to be added Serverless, I would suggest you to file a feature request via feedback.mongodb.com. From that platform, you will be able to interact with our Product Management team, keep track of the progress of your request, and make this visible to other users.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "thanks. Have done what you said here ! : Add Datadog integration to Atlas Serverless – MongoDB Feedback Engine",
"username": "Mike_Corlett"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Will there ever be datadog support for Atlas Serverless? | 2022-09-05T16:04:59.227Z | Will there ever be datadog support for Atlas Serverless? | 1,762 |
[]
| [
{
"code": "{\n \"recipient\": {\n \"name\": \"Silvia de Silva\"\n },\n \"address\": {\n \"placeName\": \"Some Str\",\n \"buildingNumber\": \"20\",\n \"zipcode\": \"xxxxx\",\n \"city\": \"City\",\n \"country\": \"Country\",\n \"isValidated\": false\n },\n \"shipment\": \"CXE-43556546\",\n \"expirationDate\": \"2022-09-25T20:19:42.001Z\",\n},\n",
"text": "Hello, seeing how we plan to host our database on MongoDB it seemed a logical step for us to take from it as much as possible and thus we would like to embrace Atlas Search as the main workhorse powering our search heavy systems.I have tried playing with it for the past few days, and while I find the initial setup very easy to do, I have faced some problems/ questions.Considering the following document:And the following Atlas Search setup:\nimage1075×720 17.9 KB\nProblem number 1:Searching for this documented by address, entering Some Str, 20 will successfully find it, however as secondary findings it will also list unrelated documents, only because of the matching street number, for example:Completely Unrelated Str, 20\nUnrelated Square, 20\netc.Could you tell me please what I am doing wrong here, and how could I finetune this?Problem number 2:As you can see, there are more fields set up that I would like to search this collection by. However, for some reason I am only able to search the documents by the recipient.name and address. If I were to search by expirationDate, be it an exactly precise value, e.g. 2022-09-25T20:19:42.001Z or just 2022, I get 0 results found. The same happens if I attempt to search by shipment ID, for example “CXE-43556546”.It puzzles me, because as you can see, the fields are inserted into the index and I was not able to find the solution on my own to this problem online, therefore I would like to kindly ask you if you could tell me what I am doing wrong in this case.Problem number 3:Finally, I have noticed that Mongo product managers frequently visit these forums, so I would like to report a translation problem regarding the pricing page on your website.Since I need to present MongoDB Atlas to several people I opted for your webpage in Italian, and most of it is translated. However, the most important section, that is the plan comparison table, still appears in English, the currency is presented in dollars, feature comparisons and descriptions are written in English. It is a jarring experience having to present your product to my peers in such a half-localized fashion.Grateful,Ben",
"username": "RENOVATIO"
},
{
"code": "",
"text": "Hey there! Thanks for these great questions! Do you mind sharing the query you were testing out for Problem 1&2? That would certainly help me diagnose / provide ideas.",
"username": "Elle_Shwer"
},
{
"code": "[\n {\n $search: {\n index: 'products',\n text: {\n query: 'Via Something',\n path: {\n 'wildcard': '*'\n }\n }\n }\n }\n]\n",
"text": "Hello there, thank you so much for your reply!As requested, here is the query:I would like to expand upon the problem #1 as I have done some more testing in the past few days and I have found another issue with how I query it:In Italy street names are preceded by street type names to indicate their size and type (boulevard, street, alley, square etc.). It is the same as everywhere else, it is just it goes upfront. Now, with the current search query as it is, if I search for any street, say Via Something and Via Completely Else, both will be found and listed because both contain “Via” in the beginning.",
"username": "RENOVATIO"
},
{
"code": "db.street.aggregate([\n {\n $search: {\n \"compound\": {\n \"must\": [{\n \"text\": {\n \"query\": \"via something\",\n \"path\": \"placeName\"\n }\n }],\n \"should\": [{\n \"text\": {\n \"query\": \"20\",\n \"path\": \"buildingNumber\"\n }\n }]\n }\n }\n }\n]) \nexpirationDate",
"text": "Hey @RENOVATIO, I have a few ideas…Have you tested running this against a compound query? This may help you be more specific, especially given today the documents are in different fields.For question #2, I was thinking - which Index analyzer are you using? (It’s not included in your screenshot). It may be better to specify your expirationDate as Keyword Analyzer.Also this blog has some more ideas for how to achieve exact matching.",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| 2 questions about Atlas Search | 2022-09-07T09:13:38.935Z | 2 questions about Atlas Search | 1,719 |
null | [
"data-modeling"
]
| [
{
"code": "users:\n user_id\n display_name\n avatars\n age\n gender\n\n matches: [user1, user2...]\n my_blockeds: [user1, user2...]\n blockeds: [user1, user2...]\n likes: [user1, user2...]\n dislikes: [user1, user2...]\nconst logged_id = \"...\";\nconst user_ids = [user1, user2...];\n\nconst min_age = 18;\nconst max_age = 30;\nconst gender_preference = 'male';\n\nconst users = await User.find({\n _id: { $in: user_ids },\n\n matches: { $ne: logged_id },\n my_blockeds: { $ne: logged_id },\n blockeds: { $ne: logged_id },\n\n likes: { $ne: logged_id },\n dislikes: { $ne: logged_id },\n\n age: { $gte: min_age, $lte: max_age },\n gender: { $eq: gender_preference },\n\n })\n .limit(10)\n .select('display_name avatars age')\n .lean();\n",
"text": "My question is as follows. I will bring up 10 available contacts from the patron list for a user. However, it has to go through some filtering. These are as follows:They are not matched anyway, the user is not already swiping it to the right or left, the user is not blocking it or it is not blocking it.andI thought of a model and query as above. It works fine now, but for some users I am afraid the list will grow significantly. I thought of using Outlier Pattern for this, but I think it is not suitable for this model. How do you think I should go about it? How can I create a model? Thank you from now.",
"username": "Enes_Karadeniz"
},
{
"code": "",
"text": "Hello Enes,I have thought about such a model as well, and my tests shown that you need to be careful with the arrays.\nBecause, onde the arrays go past the 10000 element size, things start to be extremely difficult to maintain.\nand in your data-model, the likes and dislikes (for example) will grow to an unmanageable size, after a few days of using the app.the blocks list not so much (I imagine).From my considerations, one idea, is to (assuming you use geofencing and most recently active), would be to use a TTL-indexed collection with “active profiles”, get those, query a collection for your “likes” and “dislikes” and so on (the ones to be excluded) (these would be single documents for each), and then in the application subtract the latest to the first.Because otherwise, you will always need to deal with arrays (that will be enormous) or define a bucketing parameter that may not be flexible or fitting to your needs.This is using Mongodb/document db.Using an RDBMS (hope I am not being blasphemous) you would have a “recently_online” table , get the ids and picture and so on, and just subtract from the “already_voted_on” table the relevant ids.These are some ideas. Hope it helped in some way.Thanks,\nJP",
"username": "Joao_Pinela"
},
{
"code": "",
"text": "need to be careful with the arrays.Yes. Interesting read:https://www.mongodb.com/article/schema-design-anti-pattern-massive-arrays/",
"username": "steevej"
},
{
"code": "",
"text": "Using an RDBMS (hope I am not being blasphemous) you would have a “recently_online” table , get the ids and picture and so on, and just subtract from the “already_voted_on” table the relevant ids.These are some ideas. Hope it helped in some way.Thanks,\nJPI think using RDBMS is not suitable for something like this. Because it will be Big Data and SQL cannot remove something like this. Tinder etc. applications can do with a NoSQL database such as MongoDB. I did some research and thought I could do it using Bloom Filter. Sample topic is: Scaling a data model using bloom filtersDoes this make sense or can you suggest something else? How can I use it if it makes sense?@steevej I am already using this. I have collections of “blockeds, matches, likes, dislikes” but in addition, I have to keep them from a single collection and write a query accordingly.",
"username": "Enes_Karadeniz"
},
{
"code": "",
"text": "Hello @Enes_KaradenizI actually don’t agree that a tinder-like app is a big data problem: it is a lot of data problem (number), but not a big data (type, diversity, variety, velocity, source) problem. It is a very structured data model. Fixed and stable.\nBecause you can run 2 extremely quick queries, by index, just subtracting the like/disliked profile ids by a given user from the currently online ids (from a given geo area). The online table (if geofenced and including recent activity) is relatively small, compared to the number of likes/dislikes.\nI ran some tests on postgres in a simple VM, and it worked well (but I didn’t create 100M records… true).That being said…\ncertainly there is a solution in mongodb, but using arrays will eventually hit a brickwall because they have performance limits when they go past the 10ths of thousands.\nIt is more likely just insert one document per like/dislike and recent activity, get all of those by index quickly, and let the code in the app sort out the needed and not needed.also, you just need to get some results, not all, to the user quickly. While the user selects, clicks or not on some profiles, you can get some more behind the scenes br,\nJP",
"username": "Joao_Pinela"
},
{
"code": "",
"text": "@Enes_Karadeniz Did you solve this problem? I’m facing it now.",
"username": "Tarcisio_Melo"
},
{
"code": "",
"text": "Hmm. Did your issue solve?",
"username": "Leo_French"
}
]
| Data modeling for Tinder app | 2021-05-02T00:20:07.570Z | Data modeling for Tinder app | 9,081 |
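A hedged mongosh sketch of the "one document per swipe" approach discussed in the thread above (collection and field names, and the variables loggedId / otherUserId / userIds, are assumptions used for illustration; blocks and matches could be modeled the same way):

```js
// One small document per swipe instead of ever-growing arrays on the user document.
db.swipes.createIndex({ from: 1, to: 1 }, { unique: true });
db.swipes.insertOne({ from: loggedId, to: otherUserId, kind: "like", at: new Date() });

// Candidate query: exclude everyone the logged-in user has already swiped on.
const alreadySwiped = db.swipes
  .find({ from: loggedId }, { _id: 0, to: 1 })
  .toArray()
  .map(d => d.to);

db.users.find({
  _id: { $in: userIds, $nin: alreadySwiped },   // userIds = pre-filtered candidate pool
  age: { $gte: 18, $lte: 30 },
  gender: "male"
}).limit(10);
```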
null | [
"app-services-user-auth",
"realm-web"
]
| [
{
"code": "",
"text": "My Realm App gets SPAMMED with anonymous users because apparently, MongoDB creates a new Realm App User every time someone logs in anonymously. I don’t want it to store the anonymous users at all! OR at least delete it when the user closes the app. How can I do this?I already looked at this thread but it’s not a great solution since it happens only every x minutes and needs TONS of computation time.\nI’d like the app to just not store any anonymous user at all.",
"username": "SirSwagon_N_A"
},
{
"code": "app.currentUserlogin()",
"text": "Hi @SirSwagon_N_A ,The behaviour is exactly as designed: anonymous logins create new users, that, most of the times, would need to have limited persistence (and app.currentUser, instead of login(), will re-connect the same anonymous user without creating a new one), but are automatically deleted by the system at some point (at this time, 90 days).What’s the use case for having anonymous users that don’t persist any data outside the single session? If you don’t need any persistence whatsoever, wouldn’t, for example, a local, in-memory realm serve you better?",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "@Paola_MannaHi, thanks so much for the quick answer.\nMy reason is very simple: I need users to be able to see the data (READ ONLY). It’s like a “guest” mode where they can look around the app, as guests. As far as I know this can only be accomplished by allowing anonymous login. Or maybe there is another solution to make data publicly readable?",
"username": "SirSwagon_N_A"
},
{
"code": "",
"text": "I need users to be able to see the data (READ ONLY). It’s like a “guest” mode where they can look around the app, as guests.You could define an API Key user for all guests, and give it the read-only access you want for them to have.The permissions would be needed also for anonymous users, anyway, if you don’t want everyone to be able to modify data in the DB…",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Realm App: don't store anonymous users! / delete anonymous users after log out | 2022-09-12T10:23:01.557Z | Realm App: don’t store anonymous users! / delete anonymous users after log out | 2,748 |
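A minimal Realm Web SDK sketch of the API-key "guest" approach suggested in the thread above (the App ID, key value, and database/collection names are placeholders; the key would be created in the App Services UI with read-only rules):

```js
import * as Realm from "realm-web";

const app = new Realm.App({ id: "<your-app-id>" });   // placeholder App ID

async function browseAsGuest() {
  // Every guest shares one server-side API key user, so no anonymous users pile up.
  const guest = await app.logIn(Realm.Credentials.apiKey("<read-only-guest-key>"));

  // Reads go through the usual MongoDB service client; writes are blocked by the rules.
  return guest
    .mongoClient("mongodb-atlas")
    .db("mydb")                                        // placeholder names
    .collection("items")
    .find({});
}
```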
null | []
| [
{
"code": "",
"text": "Hi Everyone,\nSani Yusuf here and I am pleased to meet you all. I help organise community events for the London group. Feel free to get in touch.",
"username": "Sani_Yusuf"
},
{
"code": "",
"text": "Welcome to the MongoDB Community forums @Sani_Yusuf! We’re pleased to have you here.",
"username": "Doug_Duncan"
}
]
| Hello World, Im Sani Mongo London Community | 2022-09-06T20:09:23.030Z | Hello World, Im Sani Mongo London Community | 1,855 |
null | [
"containers",
"atlas"
]
| [
{
"code": " mongodb:\n image: mongo:latest\n ports:\n - 27017:27017\n networks:\n mynetwork:\n aliases:\n - mongodb-service\n volumes:\n - vol-mongo-db:/mongo/db\n - vol-mongo-configdb:/mongo/configdb\n",
"text": "I am using docker-compose to create applications and mongodb containers .The below line of code creates a mongoDB container and I am referring the service name “mongodb-service” in application containers to connect to mongoDB .I am doing A POC , where I need to connect to mongoDB atlas not to mongoDB container.\nHow I can connect to mongoDB atlas as a service , how to define HOST URL and user name password in that service .",
"username": "shirish_sahu"
},
{
"code": "mongodb-service",
"text": "Hi @shirish_sahu and welcome to the community!!I am using docker-compose to create applications and mongodb containers .As I understand the above statement correctly, the application is also using docker and is deployed using docker compose.\nCan you please help with the connection string that the application is using currently to connect the application with the database?\nIf the URI is pointing towards mongodb-service, you can change the current URI with the Atlas URI instead.Please let us know if you have any further questions.Thanks\nAasawari",
"username": "Aasawari"
}
]
| Docker-compose with atlas connection | 2022-08-25T22:27:56.321Z | Docker-compose with atlas connection | 5,060 |
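One way to follow the advice above is to drop the local mongodb service and pass the Atlas connection string to the application container as an environment variable; a hedged sketch of how the application could pick it up (assuming a Node.js application for illustration — the variable name MONGODB_URI and the placeholders are assumptions):

```js
const { MongoClient } = require('mongodb');

// Set in docker-compose under the app service, e.g.  environment: MONGODB_URI=<Atlas SRV URI>
const uri = process.env.MONGODB_URI ??
  'mongodb+srv://<user>:<password>@<cluster>.mongodb.net/?retryWrites=true&w=majority';

// The app now connects straight to Atlas; no local mongodb container is needed.
const client = new MongoClient(uri);
```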
null | [
"node-js",
"mongoose-odm",
"database-tools",
"atlas",
"serverless"
]
| [
{
"code": "Failed: (AtlasError) collation not allowed in this atlas tier",
"text": "I used mongoexport to export a collection from a local database, and I’m trying to use mongoimport to add the collection to a serverless Atlas database.The import runs for some time, but then fails with this error:Failed: (AtlasError) collation not allowed in this atlas tierI’m not sure why it’s erroring about collation. Is it because the schema for the collection embeds other schemas? I used nodeJs mongoose to create the collection.Thanks for any help",
"username": "Tracy_Collins"
},
{
"code": "",
"text": "May be BUG with serverless instance\nAre you using any case sensitive userid/password\nDid you try with another user\nWhat is your cluster type free or paid?\nFor free tiers there are some restrictions",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "thanks for your help",
"username": "Tracy_Collins"
},
{
"code": "",
"text": "Check this link",
"username": "Ramachandra_Tummala"
}
]
| Atlas mongoimport Failed: (AtlasError) collation not allowed in this atlas tier | 2022-09-12T23:09:57.967Z | Atlas mongoimport Failed: (AtlasError) collation not allowed in this atlas tier | 3,076 |
null | [
"atlas-cli"
]
| [
{
"code": "brew install mongodb-atlas",
"text": "The MongoDB Atlas CLI is the fastest way to create and manage an Atlas database, automate ongoing operations, and scale your deployment for the full application development lifecycle. You can programmatically manage clusters, automate user creation, control network access, and much more.Since the announcement of the Atlas CLI at MongoDB World, we’ve continued to add new capabilities to the Atlas CLI. You can now upgrade from shared to dedicated clusters and access advanced cluster and project settings from the command line. You can also modify settings, jobs, and schedules for backing up to Amazon S3 buckets. Previously available via Homebrew, Apt, and Yum, the Atlas CLI is also now downloadable through the Chocolatey package manager for Windows.Download the Atlas CLI today with brew install mongodb-atlas in Homebrew or via one of the other installation options. For more information on Atlas CLI releases you can review the full changelog.",
"username": "Shelby_Carpenter"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Atlas CLI 1.15 Updates | 2022-09-12T23:26:23.522Z | MongoDB Atlas CLI 1.15 Updates | 2,033 |
null | [
"node-js",
"compass"
]
| [
{
"code": "",
"text": "Hello All,I’m using Mongodb local for a MERN stack application which is hosted on our server . However when Im trying to view it in compass its throwing ECONN Refused error and I’m not sure what to do in this case.Can someone help me on how can I access it through compass?",
"username": "priyatham_ik"
},
{
"code": "",
"text": "It looks like mongod is not running on your server. Are you windows, linux or mac?Share the connection string you are using.A screenshot of Compass might also help.Can you connect with mongosh?",
"username": "steevej"
},
{
"code": "",
"text": "Hello steevej,The mongodb is running on ubuntu 20.04 and When I see the status using service mongodb status its says:\nmongodb.service - An object/document-oriented database\nLoaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor prese>\nActive: active (running) since Tue 2022-01-11My connection string is : mongodb://serverip:27017/?readPreference=primary&appname=MongoDB%20Compass&ssl=false",
"username": "priyatham_ik"
},
{
"code": "",
"text": "serverip in the connection string is basically instead of localhost Im inputting server id when Im trying to connect through compass on my machine,FYI.",
"username": "priyatham_ik"
},
{
"code": "bindIp127.0.0.1net:\n port: 27017\n bindIp: 127.0.0.1\n127.0.0.1",
"text": "In your MongoDB configuration file, is the bindIp address set to include the IP address of the machine? Most likely this only has 127.0.0.1 which means it only listens on the localhost address.If you see something like the above, you would need to add the other IP address after the 127.0.0.1 address separated by a comma.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Yes I did added the ipaddress of my machine",
"username": "priyatham_ik"
},
{
"code": "mongodmongod",
"text": "Are you trying to connect locally from the machine running the mongod process or from a remote server?If you’re trying to connect remotely, does the machine running the mongod process have a firewall in place that is blocking the traffic?",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "I’m trying to connect remotely, Basically a db hosted on prod server and I’m sure there is a firewall but not quite sure if its blocking me anyway I’m on the same internal network as my server",
"username": "priyatham_ik"
},
{
"code": "",
"text": "The IP address of your machine or the IP address of the server?Did you restarted mongod after adding the address?The status you shared indicate that mongod has been running since 2022-01-11. Did you add the address back then or just now?Share the configuration file.",
"username": "steevej"
},
{
"code": "",
"text": "I forgot to restart and I just retstarted now but unfortunately Im unable to get it back started its says it exited with status code 48 and here is the error from the logsPrefix: “/run/mongodb” } }, storage: { dbPath: “/var/lib/mongodb”, journal: { enabled: true } }, systemLog: { destination: “file”, logAppend: true, path: “/var/log/mongodb/mongodb.log” } }\n2022-09-12T11:58:36.717-0300 E STORAGE [initandlisten] Failed to set up listener: SocketException: Cannot assign requested address\n2022-09-12T11:58:36.718-0300 I CONTROL [initandlisten] now exiting\n2022-09-12T11:58:36.718-0300 I CONTROL [initandlisten] shutting down with code:48",
"username": "priyatham_ik"
},
{
"code": "",
"text": "It fixed by chaning the BindIp in config to 127.0.0.1\nhowever I cant access through my remote machine via compass and Im not sure why adding 0.0.0.0 is not working and throwing errorEdit: Looks like having both 127.0.0.1,0.0.0.0 is throwing Eadress already in use may be it’s because 0. 0.0.0 basically means all ip and its somehow conflicting and making bindip only to 0.0.0.0 works fine and I 'm able to access the db via remote host however worried if it has any security concerns",
"username": "priyatham_ik"
},
{
"code": "mongodmognodbindIp0.0.0.0mongod0.0.0.0bindIp127.0.0.1,10.x.x.x127.0.0.110.x.x.x0.0.0.0",
"text": "2022-09-12T11:58:36.717-0300 E STORAGE [initandlisten] Failed to set up listener: SocketException: Cannot assign requested address\n2022-09-12T11:58:36.718-0300 I CONTROL [initandlisten] now exiting\n2022-09-12T11:58:36.718-0300 I CONTROL [initandlisten] shutting down with code:48This means that there is already a service listening on the server/port combination. Most likely this is another instance of the mongod process.I forgot to restart and I just retstarted nowWhen you restarted, did you reboot the machine or just restart the mognod process? If you restarted the process, how did you do it?making bindip only to 0.0.0.0 works fine and I 'm able to access the db via remote host however worried if it has any security concernsSetting bindIp to 0.0.0.0 just means that mongod will listen on all network interfaces. If your host has a network interface that allows incoming traffic and you don’t have firewalls in place blocking public traffic to port 27017, then you have every right to be worried about security concerns. If your server has two network cards in it, one for external and one for internal traffic, then 0.0.0.0 will bind to both of these which is not good. External network traffic does not need to connect directly to your MongoDB servers.I would set my bindIp up to be 127.0.0.1,10.x.x.x. The 127.0.0.1 address is your localhost address so can connect locally from the machine without it going over the network. If you don’t want people to connect from the MongoDB host, you can leave this IP address out. The 10.x.x.x would ideally be an internal only interface that does not allow for outside your network traffic. I only recommend using 0.0.0.0 for testing purposes and then recommend changing back to more restrictive IPs.The configuration docs have a small section on security considerations. But it’s definitely worth making sure your database server is properly secured.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Thanks for this Info.Yes I have restarted the mongodb process using systemctl restart mongodb and I tried using 127.0.0.1 , 10.x.x.x however it throwed error in starting mongodb service with status code 48 saying this -Failed to set up listener: SocketException: Cannot assign requested address\n2022-09-12T11:58:36.718-0300 I CONTROL [initandlisten] now exiting\n2022-09-12T11:58:36.718-0300 I CONTROL [initandlisten] shutting down with code:48",
"username": "priyatham_ik"
},
{
"code": "ps aux | grep mongodsudo systemctl start mongodbmongod",
"text": "Run ps aux | grep mongod and see if you have anoter instance of MongoDB running. If you do, kill that instance and then try your sudo systemctl start mongodb command.As the error states, the requested address cannot be assigned, and exit code 48 is thrown when there is already something listening on the port that mongod is trying to listen on.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Cannot assign requested addressIt sometimes means that you are trying to listen on an IP address that is not valid for your machine.Is 10.x.x.x a valid IP for this machine?If it is, it is possible that the network interface is not up yet. For example, VPNs are started after and your 10.x.x.x is not valid when the mongod service start. If that is the case, you may add a depency in your mongod service file to ensure what ever make 10.x.x.x valid is ran first.",
"username": "steevej"
},
{
"code": "Address already in useCannot assign requested addresserrorexit",
"text": "Thanks @steevej I see you are correct. An incorrect IP address will give that message. I don’t think that I’ve come across that in the past.I just associate error 48 with a second instance trying to run on the same machine with the same host/port, but the error message in that case is Address already in use and I obviously didn’t pay attention to the error message in this case which is Cannot assign requested address. :sad:For testing purposes here are the log entries (filtering only the lines with error or exit in them). The first one is trying to start an instance up when I’ve already got one running on the defaults, and the second one when trying to bind to an IP address that is not assigned to my NIC:\nimage1669×312 86.1 KB\nYou’ve just taught me something once more. @priyatham_ik can you verify, as Steeve mentions, that your IP address is correct?",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "You’ve just taught me something once more.I have stopped counting how many times learned from you.",
"username": "steevej"
},
{
"code": "",
"text": "Sorry for the late reply, basically I have given it like 10.10.0.0. to make it allow anything that matches this pattern and my machine IP was something like 10.10.x.x .Am I wrong with this kind of attempt in allowing all internal IP addresses?",
"username": "priyatham_ik"
},
{
"code": "",
"text": "It did not work. So yes you werewrong with this kind of attempt in allowing all internal IP addressesthe way you did it.Do do that you have to use 0.0.0.0 or bindIpAll as documented.",
"username": "steevej"
}
]
| Unable to connect to mongodb hosted locally on our server through Compass | 2022-09-08T12:58:58.523Z | Unable to connect to mongodb hosted locally on our server through Compass | 9,521 |
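A quick mongosh check, run locally on the server after editing the config and restarting, to confirm which addresses mongod actually bound to (a sketch assuming the default port; the example output is illustrative):

```js
// getCmdLineOpts returns the options mongod was started with, including net.bindIp.
const opts = db.adminCommand({ getCmdLineOpts: 1 });
printjson(opts.parsed.net);   // e.g. { bindIp: "127.0.0.1,10.x.x.x", port: 27017 }
```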
null | []
| [
{
"code": "EventEventPlanEvent: {\n _id: String\n plan: EventPlan\n}\n\nEventPlan: {\n _id: String\n blocks: EventBlock[]\n}\n\nEventBlock: {\n _id: String\n items: EventItem[]\n}\nEventPlanEventBlockEventBlockEventItem_idEventBlockEventPlan.blocks/action/findOne_id/action/find",
"text": "I’ve got some apps that use Atlas App Services for mobile, but I’m attempting to create a static website to view a bit of data. Basically, there is an Event object with an EventPlan object. I’m able to connect and query for these objects and display their data.But then I run into issues. I’ve got 2 levels of linked object lists.My simplified model looks like this:The EventPlan object has an array of EventBlock objects.\nThe EventBlock objects have arrays of EventItem objects.\nIt’s those arrays of objects that are causing me grief.I can see the _id for each EventBlock in the EventPlan.blocks array, but the only way I have found to retrieve all of the objects in the list are to hit the /action/findOne endpoint, but that seems like an extremely inefficient way to do it and is basically unusable with all of the back and forth.I’ve experimented with trying to create a filter string with the _ids to get all of them from /action/find endpoint to no avail, and honestly, that seems like I’m doing it wrong.Is there a clean way to get all of the linked objects with the Data API endpoints as they currently work?",
"username": "Kurt_Libby1"
},
{
"code": "/action/aggregatefilterpipeline let eventBlockIds = eventPlan.eventBlocks;\n let pipeline = [{\n $match: {\n \"_id\": {\n $in: ebs, \n },\n },\n }]\n",
"text": "Hey, just going to post this here (probably for when I google it in the future ), but I was able to solve this after a very helpful call with @Sumedha_Mehta1.The answer is in calling /action/aggregateInstead of including a filter, you build a simple pipeline with all of the primary key ids from the parent object.For instance, in my situation I had an EventPlan object with the blocks list of EventBlock objects. Realm/App Services create an array of Primary Keys.So I simply created a string with the keys like this:When passing that into pipeline like is mentioned in the DataAPI docs, it worked as expected.\n",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Data API and Linked Objects | 2022-09-01T21:47:29.948Z | Data API and Linked Objects | 2,323 |
null | [
"aggregation"
]
| [
{
"code": "[{\n_id: 1235678\nprice: 50000\ncategory: mycategory\n},\n{\n_id: 12356781\nprice: 65000\ncategory: mycategory\n},\n{\n_id: 12356781\nprice: 123000\ncategory: mycategory\n},\n{\n_id: 12356781\nprice: 40000\ncategory: mycategory\n}\n]\n{\n_id: 12345\nminPrice: 5000\nmaxPrice: 12000\n}\n$lookup: {\n from: 'listings',\n let: {\n minPrice: '$minPrice',\n maxPrice: '$maxPrice',\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n\n {\n $lte: [\n '$price',\n '$$minPrice'\n ]\n },\n {\n $gte: [\n '$price',\n '$$MaxPrice'\n ]\n },\n \n ]\n }\n }\n },\n {\n $project: {\n _id: 0\n }\n }\n ],\n as: 'matchingProperties'\n}\nif $minPrice != 0 { $lte: [ '$price', '$$minPrice'\n ]}\n else: {}\n",
"text": "Hi,I’m pretty new to MongoDB and I have a bit of a problem.I have the following documents in a collectionthen I have the following document in another collectionwhat I need to do is to $lookup all the data in the first collection and return the documents that have the “price” $gte than “minPrice” and $lte “maxPrice” … wich I do, but I encount a problem… if the value in “minPrice” and / or “maxPrice” is 0, I want to ignore it and move forward with the query and return documents based on the other valueshere is what I managed to do, and it works, but if the value of “minPrice” and / or “maxPrice” is 0, it returns nothing from the db.Is there and if else like statement to check if the value in a field or let is equal to 0 and if it is to ignore that field in $expr ?something like(pseudo code of what I want)Thank you for you help!",
"username": "Mingo"
},
{
"code": "maxPriceMaxPrice",
"text": "You definemaxPricebut test withMaxPriceYou could add a clause to test maxPrice $ne 0 and minPrice $ne 0, then add an outer $or for your else part.",
"username": "steevej"
},
{
"code": "",
"text": "How can I do that?Thanks!",
"username": "Mingo"
},
{
"code": "min_or_max_is_0 = { \"$or\" : [ { \"minPrice\" : 0 } , { \"maxPrice\" : 0 } ] }\n\nmin_and_max_are_not_0 = { \"$not\" : min_or_max_is_0 }\n\nelse_query = { }\n\nprice_lte_min = { \"$lte\" : [ '$price' , '$minPrice' ] }\n\nprice_gte_max = { \"$gte\" : [ '$price' , '$maxPrice' ] }\n\nquery = { \"$or\" :\n [\n { \"$and\" : [ min_and_max_are_not_0 , price_lte_min , price_gte_max ] } ,\n { \"$and\" : [ min_or_max_is_0 , else_query ] }\n ]\n}\n\n",
"text": "",
"username": "steevej"
}
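To make the fragments above easier to reuse, here is one way (a sketch only) to fold the "ignore a bound when it is 0" idea directly into the $lookup sub-pipeline using let variables. The outer collection name is a placeholder, and the comparison direction follows the prose of the question (price >= minPrice and price <= maxPrice):

```js
// Sketch: $lookup that skips a price bound whenever it is 0.
db.searchCriteria.aggregate([
  {
    $lookup: {
      from: "listings",
      let: { minPrice: "$minPrice", maxPrice: "$maxPrice" },
      pipeline: [
        {
          $match: {
            $expr: {
              $and: [
                // ignore the lower bound when minPrice is 0
                {
                  $or: [
                    { $eq: ["$$minPrice", 0] },
                    { $gte: ["$price", "$$minPrice"] },
                  ],
                },
                // ignore the upper bound when maxPrice is 0
                {
                  $or: [
                    { $eq: ["$$maxPrice", 0] },
                    { $lte: ["$price", "$$maxPrice"] },
                  ],
                },
              ],
            },
          },
        },
        { $project: { _id: 0 } },
      ],
      as: "matchingProperties",
    },
  },
]);
```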
]
| $lookup pipeline ignore if field value is 0 | 2022-09-08T19:12:49.811Z | $lookup pipeline ignore if field value is 0 | 2,042 |
null | [
"swift",
"transactions"
]
| [
{
"code": "import SwiftUI\nimport RealmSwift\n\n/// Show a detail view of a task Item. User can edit the summary or mark the Item complete.\nstruct ItemDetail: View {\n // This property wrapper observes the Item object and\n // invalidates the view when the Item object changes.\n @ObservedRealmObject var item: Item\n\n var body: some View {\n Form {\n Section(header: Text(\"Edit Item Summary\")) {\n // Accessing the observed item object lets us update the live object\n // No need to explicitly update the object in a write transaction\n TextField(\"Summary\", text: $item.title)\n }\n }\n .navigationBarTitle(\"Update Item\", displayMode: .inline)\n .onAppear(perform: {\n $item.views.wrappedValue += 1\n })\n }\n}\n",
"text": "I have been playing around with the test application (Flexible Sync) for Swift (https://github.com/mongodb-university/realm-template-apps/tree/main/swiftui) and found that the UI can be quite laggy. For example below I added a view var to the Item Object to count the number of times the Item has been viewed. But on opening and closing the detailView the UI is very laggy. Is this because the write transaction is happening on the main thread? If so how can I do it on the background in the exact scenario as below? I tried using writeAsync but this did not solve the problem.TLDR how do I perform tasks on the background while making use of the ObservedRealmObject like shown below?",
"username": "Jesse_van_der_Voorn"
},
{
"code": "Itemviews",
"text": "detailView the UI is very laggyCan you elaborate a bit on that? I copied and pasted your code and it’s very fluid for me. Tried it in both a macOS App as well as iOS.Also, it appears the Item object was modified with a views property. Can we see what that object looks like?",
"username": "Jay"
},
{
"code": "class Item: Object, ObjectKeyIdentifiable {\n\n@Persisted(primaryKey: true ) var _id: ObjectId\n@Persisted var title: String\n@Persisted var picture: Data\n@Persisted var about: String\n@Persisted var price: Double\n@Persisted var condition: String\n@Persisted var createdAt = NSDate().timeIntervalSince1970\n@Persisted var favorites: MutableSet<String>\n@Persisted var likes = 0\n@Persisted var views = 0\n// :state-start: flexible-sync\n@Persisted var owner_id: String\n// :state-end:\n}\n",
"text": "Hello,Thanks for your reply , yeah I added some properties this is what it looks like now:I am not too familiar yet, but the UI freezes/lags on every transaction that’s happening. I also have this like button on the items in the rowView and when I tap it it increments the ‘like’ properties with 1, but it also freezes the UI for around 0.5-1 second. Same thing happens when I tap to go to the detailView where the view increment happens onAppear and also when I go back to the listView. Even just scrolling through the list can be quite laggy. Opening the tab where the itemsView is displayed also causes the whole UI to freeze up for around 1.5 seconds before it actually switches tabs to the itemsView.I was also wondering if it’s possible to get the loading state of the ObservedResults collection. So I can show a progressView while the collection is loading.So basically my problem here is that every transaction just freezes up the UI and I think should be moved to a background thread, but I am not sure how . I have tried stuff with writeAsync and read the whole documentation but I have not seen any cases where they do this with ObservedResults/ObservedRealmObjects.",
"username": "Jesse_van_der_Voorn"
},
{
"code": "$item.views.wrappedValue += 1",
"text": "I am working with your code and your object in a small test project and I am just not able to duplicate the issue. Honestly, this code$item.views.wrappedValue += 1is such a tiny amount of data, even if it was being written on the main UI thread, you’d likely never notice it. That being said, it appears you’re using @ObservedRealmObject, and connecting asynchronously so you’re already on a background thread with those writes. Right?",
"username": "Jay"
}
]
| ObservedRealmObject View Counter Lags | 2022-09-09T11:57:14.980Z | ObservedRealmObject View Counter Lags | 1,817 |
null | []
| [
{
"code": "",
"text": "Could You tell me how to install Mongo DB on my own compuyter locally, please? The progress bar is continuously empty all the time during installation proccess. I don’t know what I do incorrectly.",
"username": "Dariusz_Jenek"
},
{
"code": "",
"text": "Hi @Dariusz_Jenek and welcome to the MongoDB Community forums.It’s hard to know what’s going on with your system so we can’t really tell you what’s happening or not.Have you followed along with the installation documentation for your OS?If you’re following that documentation, then you will need to provide more information for us to help:",
"username": "Doug_Duncan"
}
]
| Installation of MongoDB | 2022-09-12T18:37:37.976Z | Installation of MongoDB | 1,294 |
[
"unity"
]
| [
{
"code": "",
"text": "How can I connect to the Atlas cluster Database using Realm in Unity?I have referenced the youtube tutorial but it does not work!https://www.youtube.com/watch?v=aIU20Cufd-o&ab_channel=MongoDBI always got an error at “_realm = await Realm.GetInstanceAsync(new PartitionSyncConfiguration(“[email protected]”, _realmUser));” (See it in Screen Capture)\nI have tried it for a week, but it still does not work!!!\nq2882×1796 366 KB\n",
"username": "Yin_Ar"
},
{
"code": "",
"text": "Hi @Yin_Ar and welcome to the MongoDB Community forums.I haven’t worked with Unity before, but I wonder if things have changed in the past year since that video was recorded.I did see that there is a section for Unity over at the MongoDB Developers site.Question about MongoDB and Unity? Look no further. MongoDB Developer Center has articles, videos, podcasts, and more to help you get the most from your dataI wonder if that might have some helpful tutorials and posts for you.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "@Yin_Ar You may want to move over to forums.realm.io for this question - there is support for building a Unity app, with our Realm .NET SDK, and using Atlas Device Sync to sync to Atlas. There are some docs here -\nhttps://www.mongodb.com/docs/realm/sdk/dotnet/unity/But the error you are seeing is that you are trying to connect with a partition-based sync API from the client but your server-side Atlas App Services is configured with Flexible Sync - you’ll want to change it on one side or the other but I’d recommend Flexible Sync as that is the future of Device Sync at MongoDB.",
"username": "Ian_Ward"
}
]
| Cannot Connect to my Atlas cluster in Unity | 2022-09-12T10:40:16.053Z | Cannot Connect to my Atlas cluster in Unity | 2,484 |
|
null | []
| [
{
"code": "",
"text": "At the access log there are randomly error messages: BadValue: SCRAM-SHA-256 authentication is disabled.\nThis happens 100% of the time with mms-automation from localhost and occasionally from regular remote client even the connection string is always the same.SCRAM-SHA is not disabled and this seems like an Atlas bug.",
"username": "Danwiu"
},
{
"code": "",
"text": "Did you solve this? Have the same problem.",
"username": "Stefan_Verhagen"
},
{
"code": "SCRAM-SHA-256SCRAM-SHA-1SHA-1mms-automationmms-automationSCRAM-SHA-256mms-automation",
"text": "Hi @Danwiu,SCRAM-SHA is not disabled and this seems like an Atlas bug.Currently, Atlas does not support SCRAM-SHA-256, but does support SCRAM-SHA-1. Notably, MongoDB authentication protocols do not use SHA-1 as a raw hash function for passwords or digital signatures, but rather as an HMAC construction in, e.g., SASL SCRAM-SHA-1. While many common uses of SHA-1 have been deprecated or sunset by standards organizations, these do not typically apply to HMAC functions.At the access log there are randomly error messages: BadValue: SCRAM-SHA-256 authentication is disabled.Just to clarify, is the above message you’re seeing within the Database Access History section?This happens 100% of the time with mms-automation from localhostThe mms-automation user is used for Atlas internal automation tasks including monitoring. The source of this message is that mms-automation user initially attempts authentication using SCRAM-SHA-256 which Atlas doesn’t support, causing the “BadValue: SCRAM-SHA-256 authentication is disabled” message, before falling back to SCRAM-SHA-1. Note that there is no detrimental effect to the operation of the database, and this informational message is provided for your own auditing purposes.occasionally from regular remote client even the connection string is always the same.Other than the mms-automation user, what other application(s) from your environment are causing the same “BadValue: SCRAM-SHA-256 authentication is disabled.” message? Please provide the following details about those application(s):Regards,\nJason",
"username": "Jason_Tran"
},
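If the informational message coming from your own applications is bothersome, one option (a sketch, not an official recommendation; host and credentials are placeholders) is to pin the authentication mechanism in the driver's connection string so the client does not attempt SCRAM-SHA-256 first. This does not affect the internal mms-automation user.

```js
// Sketch: pinning the auth mechanism in a Node.js driver connection string.
const { MongoClient } = require("mongodb");

const uri =
  "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/?authMechanism=SCRAM-SHA-1&retryWrites=true&w=majority";

async function run() {
  const client = new MongoClient(uri);
  await client.connect();
  // ... use the client ...
  await client.close();
}
run();
```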
{
"code": "SCRAM-SHA-256SCRAM-SHA-1mongod",
"text": "Hi @Stefan_Verhagen - Welcome to the community As noted above in my previous response to Danwiu, Currently, Atlas does not support SCRAM-SHA-256, but does support SCRAM-SHA-1. Hopefully the previous response provides more details you were after.However, could you clarify what problem you are seeing exactly? Please provide the following so we are able to assist with narrowing down what the particular issue could be:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you for your quick response Jason, indeed it is the mms-automation user creating the culprit.",
"username": "Stefan_Verhagen"
},
{
"code": "",
"text": "It seems to me a bug for Atlas to report a problem for a problem Atlas caused.",
"username": "Steve_Hand1"
},
{
"code": "",
"text": "A post was split to a new topic: “BadValue: SCRAM-SHA-256 authentication is disabled”",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| BadValue: SCRAM-SHA-256 authentication is disabled | 2022-01-22T12:13:43.239Z | BadValue: SCRAM-SHA-256 authentication is disabled | 12,938 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "I am new MongoDB. I have a table like this below.I would like to get the list of product categories who has minimum price within that category and their count.ProdName PrdCategory PrdPricePrdB 34 8\nPrdA 34 50\nPrdC 134 49\nPrdD 134 50\nPrdE 34 8My SQL works fine as shown below.select PrdCategory, min(PrdPrice), count(*) from ProdTable\ngroup by PrdCategory\norder by PrdPrice, PrdCategoryMy answer at SQL was:PrdCategory min(PrdPrice) count\n34 8 2\n134 49 1How will I get the same results with MongoDB, with a collection similar to ProdTable?",
"username": "Venkat_Swamy"
},
{
"code": "db.products.aggregate([{\n $group: {\n _id: \"$ProdCategory\",\n minPrice: {\n $min: \"$ProdPrice\"\n },\n prices: {\n $push: \"$ProdPrice\"\n }\n }\n },\n {\n $unwind: \"$prices\"\n\n },\n {\n $match: {\n $expr: {\n $eq: [\"$prices\", \"$minPrice\"]\n }\n }\n },\n {\n $group: {\n _id: \"$_id\",\n minPrice: {\n $min: \"$minPrice\"\n },\n count: {\n $sum: 1\n }\n }\n }\n])\n[\n { _id: 134, minPrice: 49, count: 1 },\n { _id: 34, minPrice: 8, count: 2 }\n]\n$project",
"text": "Hello @Venkat_Swamy and welcome to the MongoDB community.The following command will work but it’s messy:For your sample documents it returns the results of:You could add a final $project stage to rename the fields if you wanted to.There’s probably a better solution out there that I’m too tired to see right now, but the above will give you something to play around with.As always, this query works, but may not be efficient at higher amounts of data. Always test in your environment with production level data to see how it works for you. Filter out as as much as you can early in the pipeline so you don’t do unnecessary work in the pipeline.",
"username": "Doug_Duncan"
},
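For completeness, the final $project stage mentioned above could look roughly like this (output field names are purely illustrative):

```js
// Possible final stage appended to the pipeline above to rename the fields.
db.products.aggregate([
  // ...the stages shown above...
  {
    $project: {
      _id: 0,
      PrdCategory: "$_id",
      minPrice: 1,
      count: 1
    }
  }
])
```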
{
"code": "",
"text": "Thank you Duncan. This works like a charm. Really appreciate your quick answer. Kudos to you!",
"username": "Venkat_Swamy"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Aggregate Group by using minimum value and display count for each group | 2022-09-11T19:28:30.592Z | Aggregate Group by using minimum value and display count for each group | 1,337 |
[
"node-js",
"data-modeling",
"crud",
"mongoose-odm"
]
| [
{
"code": "const Product = require(\"../models/Product\");\nconst router = require(\"express\").Router();\n\n// CREATE PRODUCT\nrouter.post(\"/\", verifyTokenAndAdmin, async (req, res) => {\n const newProduct = new Product(req.body);\n\n try {\n const savedProduct = await newProduct.save();\n res.status(200).json(savedProduct);\n } catch (err) {\n res.status(500).json(err);\n }\n});\nconst mongoose = require(\"mongoose\");\n\nconst ProductSchema = new mongoose.Schema(\n {\n prodId: {\n type: String,\n required: true, \n },\n prodName: {\n type: String,\n required: true,\n },\n brandName: {\n type: String,\n required: true,\n },\n img: {\n type: Array,\n },\n color: {\n type: Array,\n },\n size: {\n type: Object,\n },\n fabricSpecs: {\n type: String,\n },\n model: {\n type: String,\n },\n descDetail: {\n type: String,\n },\n price: {\n type: Number,\n required: true\n },\n discount: {\n type: Boolean\n },\n discountAmount: {\n type: Number\n },\n rating: {\n type: String\n },\n review: {\n type: Number\n },\n soldOut: {\n type: Boolean\n },\n category: {\n type: String,\n },\n type: {\n type: String,\n }\n },\n {\n timestamps: true,\n }\n);\n\nmodule.exports = mongoose.model(\"Product\", ProductSchema);\n",
"text": "Hi everyone,I hope this is the right place to discuss the CRUD issues. So I’m building a MERN e-commerce app, where I created the mongoose schema and connected with MongoDB to store the products & users. To test my schema and routes I used Postman, and while other requests related to users were working as usual I faced a weird error in the case of adding new products since this is the most important feature.I’m not sure what is this error and why is this error occurring.\nThis is my POST request -The verifyToken is a JWT token.Here is the SchemaHere is the error shown in the Postman while adding creating another product\n",
"username": "sujoy_dutta"
},
{
"code": "",
"text": "Even doing so with mongo shell I’m getting “duplicate key error”\n\nimage963×919 42.2 KB\n",
"username": "sujoy_dutta"
},
{
"code": "titleproductsnullnulltitle",
"text": "Hi @sujoy_dutta and welcome to the MongoDB Community forums. What I am seeing is that you have a unique index on the title field for the products collection. This is not getting populated so a null value is getting passed to the document and there is already a document with a null value in the collection.If you are not using title as a field, you can drop that index.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hey @Doug_Duncan thanks for the reply and it’s nice to be here.I checked my schema and the JSON object that I’m passing none of them has the title field, I’m not sure how it is getting included since MongoDB inserts _id as default",
"username": "sujoy_dutta"
},
{
"code": "",
"text": "FYI, the error only occurs from the second insert onwards the first insert works well as expected. Also I forgot to mention I don’t have any unique index in the schema.",
"username": "sujoy_dutta"
},
{
"code": "titledb.products.getIndexes()db.products.dropIndex({\"title\": 1})",
"text": "FYI, the error only occurs from the second insert onwards the first insert works well as expected.This is to be expected as the first insert would not violate a uniqueness constraint.Also I forgot to mention I don’t have any unique index in the schema.You have a unique index on title based on the error you’re getting. Run db.products.getIndexes() and you should see the index. It should look similar to the following:Again, if this index is not needed you can safely drop it by running db.products.dropIndex({\"title\": 1}).",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "And, if I may add, if the index is needed and it has to be unique, you may make it a partial index that exclude null value on title. This way if title is specified uniqueness will be enforced but you will be able to have multiple documents with null title.",
"username": "steevej"
},
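A sketch of what such a partial unique index could look like in mongosh, assuming title is a string whenever it is present:

```js
// Unique index on "title" that only applies to documents where "title" is a string,
// so documents without a title (or with a null title) do not collide.
db.products.createIndex(
  { title: 1 },
  {
    unique: true,
    partialFilterExpression: { title: { $type: "string" } }
  }
);
```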
{
"code": "",
"text": "Hey thanks a lot @Doug_Duncan I really appreciate your help, your fix worked and now I’m able to add products to the DB. May I ask why these indexes are hidden from the actual JSON object?It seems like if the error occurs again I will have to drop those indexes.",
"username": "sujoy_dutta"
},
{
"code": "",
"text": "May I ask why these indexes are hidden from the actual JSON object?I’m not sure what you mean here. The indexes are hidden from what JSON object?",
"username": "Doug_Duncan"
},
{
"code": "{\n \"prodId\": \"A0WE194R5V\",\n \"prodName\": \"Quilted Straight Leg Dungarees\",\n \"brandName\": \"Andrea Bogosian\",\n \"img\": [\n \"https://i.ibb.co/BPFvmv4/Womens-Dungarees-0006-3-quilted-straight-leg-dungarees.jpg\",\n \"https://i.ibb.co/LxQc77d/Womens-Dungarees-0007-3a-quilted-straight-leg-dungarees.jpg\",\n \"https://i.ibb.co/zZF7gjC/Womens-Dungarees-0008-3b-quilted-straight-leg-dungarees.jpg\"\n ],\n \"color\": [\n {\n \"hexcode\": \"#38576c\",\n \"value\": \"Blue\"\n },\n {\n \"hexcode\": \"#b9c6ca\",\n \"value\": \"Silver\"\n },\n {\n \"hexcode\": \"#798ea4\",\n \"value\": \"Blue\"\n }\n ],\n \"size\": {\n \"Standard\": [\n \"XXS\",\n \"XS\",\n \"S\",\n \"M\",\n \"L\",\n \"XL\",\n \"XXL\"\n ],\n \"UKsize\": [\n \"6\",\n \"8\",\n \"10\",\n \"12\",\n \"14\",\n \"16\",\n \"18\"\n ],\n \"Italysize\": [\n \"38\",\n \"40\",\n \"42\",\n \"44\",\n \"46\",\n \"48\",\n \"50\"\n ]\n },\n \"fabricSpecs\": \"Cotton 100% | logo plaque | adjustable shoulder straps | straight leg | Made in Italy\",\n \"model\": \"The model is 1.76 m wearing size 27 (Waist)\",\n \"descDetail\": \"Put a spin on those plain dungarees with this pair by Andrea Bogosian. Is it just us, or does the chain strap and quilted flap pocket combo remind you of Chanel’s coveted 2.55 bag?\",\n \"price\": 692.45,\n \"discount\": true,\n \"discountAmount\": 10,\n \"rating\": \"4.9\",\n \"review\": 4806,\n \"soldOut\": false,\n \"category\": \"Denim\",\n \"prodType\": \"Dungarees\",\n \"_id\": \"631ce678f6ed781219fea5dd\",\n \"createdAt\": \"2022-09-10T19:33:12.927Z\",\n \"updatedAt\": \"2022-09-10T19:33:12.927Z\",\n \"__v\": 0\n}\n",
"text": "Ahh sorry, I mean in Postman 200 response message I couldn’t find the title field.",
"username": "sujoy_dutta"
},
{
"code": "titletitle",
"text": "I couldn’t find the title field.MongoDB will only return fields that the document has. If there is no title field in the document then there is nothing to show. It’s not hidden, it just simply isn’t there.MongoDB will allow you to create indexes on fields that don’t exist. It seems that somehow a unique index got created on the title field at some time even though no documents would contain that field. Now that you’ve dropped that index everything should be good and you will be able to insert more than a single document without the duplicate key violation.",
"username": "Doug_Duncan"
},
{
"code": "title",
"text": "MongoDB will only return fields that the document has. If there is no title field in the document then there is nothing to show. It’s not hidden, it just simply isn’t there.Thanks a lot @Doug_Duncan for clarifying I know it might be a dumb question to ask but I had this weird confusion.And again thanks, I’m glad I learned something new",
"username": "sujoy_dutta"
},
{
"code": "",
"text": "know it might be a dumb question to ask but I had this weird confusion.You had a question and you asked. Nothing dumb about that. You had the courage to ask, which can help others out should they have similar questions around this. In the end, you not only helped yourself understand something, but have helped others out as well.I’m glad I learned something newThis is what the forums are all about. Learning and sharing knowledge.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Failed to create product document | 2022-09-10T11:01:51.051Z | Failed to create product document | 3,454 |
|
null | [
"queries",
"node-js",
"crud"
]
| [
{
"code": "[\n {\n \t\"cityId\": NumberInt(1),\n \t\"stateId\": NumberInt(2)\n },\n {\n \t\"cityId\": NumberInt(9),\n \t\"stateId\": NumberInt(8)\n }\n]\n[\n {\n \t\"cityId\": NumberInt(1),\n \t\"stateId\": NumberInt(2),\n \"id\": ObjectId(\"631ee4b7e996a64a4a791a85\")\n },\n {\n \t\"cityId\": NumberInt(9),\n \t\"stateId\": NumberInt(8),\n \"id\": ObjectId(\"60473638ccdab2446cd656e3\")\n }\n]\n",
"text": "hihow in one mongo query i can do the next > itrate on the document and insert a new field called id with a unique mongo objectIDBefore:After:",
"username": "Gal_Ben-Evgi"
},
{
"code": "_idid_iddb.collection.update({},\n[\n {\n \"$set\": {\n \"id\": \"$_id\"\n }\n }\n],\n{\n \"multi\": true\n})\n",
"text": "Hi,You can leverage the fact that Mongo will automatically create unique ObjectId for the property _id when the document is created. So, you can just add new property called id that will have the same value as existing _id property. You can do it like this:Working example",
"username": "NeNaD"
}
]
| Insert new field called id in one mongo query | 2022-09-12T08:00:45.843Z | Insert new field called id in one mongo query | 2,183 |
null | [
"storage"
]
| [
{
"code": "",
"text": "Does wiredtiger has a capability to encrypt / descrypt data at rest using apis that my enterprise exposes ?\nOr may be calling HSM ?",
"username": "Vidyasagar_Gayakwad"
},
{
"code": "",
"text": "Hi @Vidyasagar_Gayakwad welcome to the community!In short, no. WiredTiger can encrypt data at rest natively (i.e. not configurable for calling an API) but this feature is limited to the MongoDB Enterprise Server, which requires the Enterprise Advanced subscription.Alternatively, you can use Client-Side Field Level Encryption that works with MongoDB Community Server. The only difference between Community & Enterprise editions is that the Enterprise edition allows you to use automatic encryption:Otherwise both editions are equally secure.If you can use Atlas, it provides encryption-at-rest by default, and you can also manage your own keys to do so.Best regards\nKevin",
"username": "kevinadi"
},
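For reference, a rough outline of explicit (manual) client-side field level encryption with the Node.js driver is sketched below. The package/export location of ClientEncryption varies by driver version, and the database, collection, and key handling here are placeholders, not a production recipe.

```js
const { MongoClient, ClientEncryption } = require("mongodb"); // in older drivers: require("mongodb-client-encryption")
const crypto = require("crypto");

async function run() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();

  const clientEncryption = new ClientEncryption(client, {
    keyVaultNamespace: "encryption.__keyVault",
    // throwaway local key for illustration only; persist and protect a real key
    kmsProviders: { local: { key: crypto.randomBytes(96) } },
  });

  // Create a data encryption key stored in the key vault collection.
  const keyId = await clientEncryption.createDataKey("local");

  // Explicitly encrypt a value before writing it.
  const encryptedSsn = await clientEncryption.encrypt("123-45-6789", {
    keyId,
    algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
  });

  await client.db("hr").collection("people").insertOne({ ssn: encryptedSsn });
  await client.close();
}
run();
```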
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Wiredtiger storage engine to use apis to encrypt / decrypt data at rest? | 2022-09-09T07:53:45.995Z | Wiredtiger storage engine to use apis to encrypt / decrypt data at rest? | 2,186 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "[{\n \"id\" : 1,\n \"name\": \"test\",\n \"phase\": \"2\",\n \"grades\" : [\n\t { \"grade\" : 80, \"mean\" : 75, \"std\" : 6 },\n\t { \"grade\" : 85, \"mean\" : 90, \"std\" : 4 }\n ]\n},{\n \"id\" : 2,\n \"name\": \"test\",\n \"phase\": \"2\",\n \"grades\" : [\n\t { \"grade\" : 90, \"mean\" : 75, \"std\" : 6 },\n\t { \"grade\" : 87, \"mean\" : 90, \"std\" : 3 },\n\t { \"grade\" : 91, \"mean\" : 85, \"std\" : 4 }\n ]\n}]\n[{\n \"id\" : 1,\n \"name\": \"test\",\n \"phase\": \"2\",\n \"grades\" : [\n\t { \"grade\" : 80, \"mean\" : 75, \"std\" : 6 },\n\t { \"grade\" : 85, \"mean\" : 90, \"std\" : 4 }\n\t { \"grade\" : 90, \"mean\" : 75, \"std\" : 6 },\n\t { \"grade\" : 87, \"mean\" : 90, \"std\" : 3 },\n\t { \"grade\" : 91, \"mean\" : 85, \"std\" : 4 }\n ]\n}]\ndb.collection.aggregate([{\n $unwind: \"$grades\"\n },{\n $match: {\n \"grades.grade\": {\n $gte: 90\n }\n }\n },{\n $replaceRoot: {\n newRoot: \"$grades\"\n }\n }])\n",
"text": "Hi,I have a collection that contains multiple documents, like this:Is there a way to return one document with the array (grades) aggregated from all documents? as following:The returned document has the shared attributes between all documents.I used unwind and replaceRoot, as following:but it returned the array of grades only, I need to put also the document information in the result in addition to the array that holds all elements of arrays from all documents.Thanks",
"username": "Rami_Khal"
},
{
"code": "",
"text": "in your sample input docs name and phase are the same but id is different and you only have id:1 in your sample result. so it is not clear what you want to do with id. what about other documents where name and phase are different?",
"username": "steevej"
},
{
"code": "",
"text": "All attributes are the same in all documents, except the id. The id in returned document can be the id of the first document, or any other document.\nI’m interest in returning one document, with the array value aggregated from all documents.",
"username": "Rami_Khal"
},
{
"code": "db.grades.aggregate([{\n $unwind: \"$grades\"\n}, {\n $match: {\n \"grades.grade\": {\n $gte: 90\n }\n }\n}, {\n $group: {\n _id: {\n name: \"$name\",\n phase: \"$phase\"\n },\n grades: {\n $push: \"$grades\"\n }\n }\n}, {\n $project: {\n name: \"$_id.name\",\n phase: \"$_id.phase\",\n grades: \"$grades\",\n _id: 0\n }\n}])\n[\n {\n name: 'test',\n phase: '2',\n grades: [\n { grade: 90, mean: 75, std: 6 },\n { grade: 91, mean: 85, std: 4 }\n ]\n }\n]\nnamephase$match[\n {\n name: 'test',\n phase: '3',\n grades: [\n { grade: 96, mean: 95, std: 1 },\n { grade: 95, mean: 92, std: 2 }\n ]\n },\n {\n name: 'test',\n phase: '2',\n grades: [\n { grade: 90, mean: 75, std: 6 },\n { grade: 91, mean: 85, std: 4 }\n ]\n },\n {\n name: 'test',\n phase: '4',\n grades: [ { grade: 99, mean: 92, std: 4 } ]\n }\n]\n",
"text": "Are you looking for something like the following?Which returns the following output based on the two sample documents you provided:NOTE: Your sample query filters out grades less than 90, but your sample output showed them. I left the query the same so this will need to be tweaked to suit your needs.Should you have various name/phase combinations in your data set and you don’t perform a $match for a certain combination, this will return all combinations as follows:Let us know if this is what you’re looking for, or if I missed what you were asking. There are undoubtedly other ways to do this as well, but I just went the easy route and tweaked what you had started with.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Yes, this is exactly what I’m looking for.\nThanks a lot Doug!Actually I’m newbie to mongodb, and I have question about the performance of the query. Do you think this grouping and projections wont impact the performance of the query in case I have 500K elements in the grades array distributed among multiple documents, and of course I will use pagination in returning data from grades array in chunks?Thanks",
"username": "Rami_Khal"
},
{
"code": "gradegradesgrades.gradedb.grades.aggregate(\n [{\n $match: {\n \"grades.grade\": {\n \"$gte\": 90\n }\n }\n },\n {\n $addFields: {\n grades: {\n $filter: {\n input: \"$grades\",\n as: \"grade\",\n cond: {\n $gte: [\"$$grade.grade\", 90]\n }\n }\n },\n }\n },\n {\n $unwind: \"$grades\"\n },\n {\n $group: {\n _id: {\n name: \"$name\",\n phase: \"$phase\"\n },\n grades: {\n $push: \"$grades\"\n }\n }\n }\n ])\nexplain",
"text": "As with any query, you will want to test things to see how it impacts the system and how performant it is. You will want to filter out as many documents as you can early in the process so you’re sending the least amount of data through the pipeline as possible.Not knowing how your data is distributed, and how many elements are in each grades array, it’s hard to make any assumptions about performance. Add on top of that, the hardware and resources available also have impact on how the query will run. Another thing to take into account is how often to you expect this query to run? If it’s infrequent, then it could be a little slower and not cause much in the way of problem for day-to-day operations. However if it’s running frequently you will definitely want to make sure you get it as performant as possible.Here is another version that will do the same thing for you. This one filters out any documents that don’t have a grade of 90 or greater in the grades array and should be able to take advantage of an index on grades.grade.I haven’t take the time to build up a larger dataset for testing to see which might be better, but even if I did, that would just be with dummy data that may not even be close to what you have in reality. Take some time to play with both queries. Tweak as necessary. Look at the results of running an aggregate query with the explain option and look at the results to see you can get things to run optimally.Being new to a technology can be frustrating at times because there’s so much to learn and you don’t always know where to start, but at the same time you have the enjoyment of learning new things and the sense of accomplishment that brings when you figure something out.The documentation is a great place to learn. If you have yet come across it yet, MongoDB University offers free online courses and they have both admin geared courses and those that are geared to developers. And always feel free to stop by here to ask questions. The community is great and lots of people around helping out our fellow travelers on the path to MongoDB mastery.",
"username": "Doug_Duncan"
},
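A quick way to follow the suggestion above and compare the two pipelines is to run them with execution statistics (collection name as in the examples above):

```js
// Compare the pipelines by looking at execution statistics.
db.grades.explain("executionStats").aggregate([
  { $match: { "grades.grade": { $gte: 90 } } },
  // ...rest of the pipeline under test...
]);
```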
{
"code": "",
"text": "Very helpful. Thank you very much for your insights Doug!",
"username": "Rami_Khal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Query to return one document with array attribute aggregated from different documents | 2022-09-11T12:42:29.297Z | Query to return one document with array attribute aggregated from different documents | 2,989 |
null | [
"aggregation",
"data-modeling"
]
| [
{
"code": "{\n \"customer\": \"62f75f6204a24bb48edae723\",\n \"product\": \"62cd46a3b325452b3efc6dd3\",\n \"downPayment\": 140,\n \"planOfInstallment\": 12,\n \"moneyRequiredToPay\": 629,\n \"contractInitiated\": false,\n \"contractStatus\": \"Normal\",\n \"moneyRecieved\": 0,\n \"investor\": [\n {\n \"investorDetail\": \"62f7542289326e783ae7feba\",\n \"money\": 200,\n\"moneyRecived\": 10,\n\"totalEarning\": 250,\n\"monthlyEarning: 10,\n \"_id\": \"630e87abf5c87d202c27a2f8\"\n },\n {\n \"investorDetail\": \"62f7542289326e783ae7feba\",\n \"money\": 170,\n\"moneyRecived\": 8,\n\"totalEarning\": 210,\n\"monthlyEarning: 8,\n \"_id\": \"630e87abf5c87d202c27a2f9\"\n }\n ],\n \"createdDate\": \"2022-08-30T21:55:18.917Z\",\n \"_id\": \"630e87abf5c87d202c27a2f7\",\n \"paymentschedule\": [\n {\n \"monthName\": \"September\",\n \"dateToPay\": \"2022-09-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a2fa\"\n },\n {\n \"monthName\": \"October\",\n \"dateToPay\": \"2022-10-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a2fb\"\n },\n {\n \"monthName\": \"November\",\n \"dateToPay\": \"2022-11-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a2fc\"\n },\n {\n \"monthName\": \"December\",\n \"dateToPay\": \"2022-12-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a2fd\"\n },\n {\n \"monthName\": \"January\",\n \"dateToPay\": \"2023-01-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a2fe\"\n },\n {\n \"monthName\": \"February\",\n \"dateToPay\": \"2023-02-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a2ff\"\n },\n {\n \"monthName\": \"March\",\n \"dateToPay\": \"2023-03-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a300\"\n },\n {\n \"monthName\": \"April\",\n \"dateToPay\": \"2023-04-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a301\"\n },\n {\n \"monthName\": \"May\",\n \"dateToPay\": \"2023-05-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a302\"\n },\n {\n \"monthName\": \"June\",\n \"dateToPay\": \"2023-06-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a303\"\n },\n {\n \"monthName\": \"July\",\n \"dateToPay\": \"2023-07-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a304\"\n },\n {\n \"monthName\": \"August\",\n \"dateToPay\": \"2023-08-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"630e87abf5c87d202c27a305\"\n }\n ],\n \"documentContract\": [],\n \"__v\": 0,\n \"id\": \"630e87abf5c87d202c27a2f7\"\n}\n",
"text": "This is my structure for example the person pay his monthly installment of _id: 630e87abf5c87d202c27a2fa and payment paid status is true so then i need that in investor array very object has monthlyEarning is added to moneyRecieved\nI need api for that?",
"username": "arbabmuhammad_ramzan"
},
{
"code": "\"paid\": true",
"text": "Hello @arbabmuhammad_ramzan ,Please correct me if my understanding of this use-case is not correct. when you get an update of \"paid\": true and some integer value in payment. You want to make an automatic update to some other field values in the same document?To understand your user-case better, could you please share below details:Regards,\nTarun",
"username": "Tarun_Gaur"
}
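While the requirements are still being clarified, one possible sketch of the update itself — adding each investor's monthlyEarning to its moneyRecived once an installment is marked paid — uses an update with an aggregation pipeline. The collection name is a placeholder and the field names (including their spellings) follow the posted document:

```js
// Sketch: when an installment is marked paid, add each investor's monthlyEarning
// to that investor's moneyRecived in the same document (MongoDB 4.2+ pipeline update).
db.contracts.updateOne(
  { _id: ObjectId("630e87abf5c87d202c27a2f7") },
  [
    {
      $set: {
        investor: {
          $map: {
            input: "$investor",
            as: "inv",
            in: {
              $mergeObjects: [
                "$$inv",
                { moneyRecived: { $add: ["$$inv.moneyRecived", "$$inv.monthlyEarning"] } },
              ],
            },
          },
        },
      },
    },
  ]
);
```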
]
| I want when one field is updated so then in the same schema there is an array who value should also be updated? | 2022-08-31T07:08:26.221Z | I want when one field is updated so then in the same schema there is an array who value should also be updated? | 1,250 |
null | [
"atlas-online-archive"
]
| [
{
"code": "",
"text": "Hello,\nI have a mongo collection of about 6B rows that weights about 1.2Tb\nI turned Online Archive for this collection.\nI waited about 10 days to archive the appropriate data. The status was ‘Archiving’.\nNow I have about 600Gb archived data. The status is ‘Idle’.\nBut the problem is that the original collection wasn’t changed as the data wasn’t deleted.\nI’ve been waiting for about a week and nothing changes.\nHow can I check the exact status of Online Archive?",
"username": "Igor_Kazak"
},
{
"code": "",
"text": "Hi @Igor_Kazak welcome to the community!It is possible that the data wasn’t archived due to them not fulfilling the requirements. You might want to inspect the archiving rules (see Edit an Archiving Rule) and double check that everything is set up as expected.If everything seems to be in order, you might want to contact Atlas support since they will have better visibility into your deployment and will be able to help you further.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi, thank you.\nThe data looks good, the only problem is that it exists twice (both db and archive)\nI’ve contacted atlas support few days ago and still waiting for response…",
"username": "Igor_Kazak"
}
]
| Online archive doesn't delete rows | 2022-09-11T14:50:29.806Z | Online archive doesn’t delete rows | 2,123 |
null | [
"database-tools"
]
| [
{
"code": "mongoexport2022-09-04T23:00:53.215Z2022-09-04T23:00:53.000Z2022-09-04T23:00:53Z2022-09-04T23:00:53.000+0000",
"text": "Hello!\nI’m exporting my collections data to a cloud storage for later querying (on Dremio). For that, I’m using mongoexport, as I need the data in JSON format.\nThe problem is on my datetime fields. They are ISODates on database and were exported on this format: 2022-09-04T23:00:53.215Z. That would be very nice if all dates were exported this way… but when a datetime has 0 milisseconds (like 2022-09-04T23:00:53.000Z on db), I got this on exported JSON: 2022-09-04T23:00:53Z (without the 000 at the end). It gaves me format inconsistency when reading on Dremio.\nOther tools as Studio3T export the same data on another pattern, like 2022-09-04T23:00:53.000+0000, and that works very fine… but I couldn’t mimic this kind of export in bash.\nHow could I change this datetime export behavior (maybe forcing al the fields having the milisseconds part)?P.S.: I could convert my date fields to string, but then I’ll lose my date index.Thanks!",
"username": "almirb"
},
{
"code": "mongoexportmongoexportmongoexport",
"text": "After a lot of research, I ended up solving the problem. I have seen that there have been several changes to the mongoexport date format standard since the first versions of Mongo.\nThe version of mongoexport that comes with MongoDB 4.0.27 exports dates in the correct format, keeping the 000 Z if the date is not milliseconds long and maintaining a consistent pattern throughout the file. In version 4.2.21 mongoexport, the zeros are removed, where instead of 2022-09-04T23:00:53.000Z, the date appears as 2022-09-04T23:00:53Z.\nObs.: now Dremio reads the files without problem. Even gzipped.",
"username": "almirb"
},
{
"code": "canonical json output--jsonFormat=canonicalrelaxed",
"text": "Hello @almirb,Welcome to the MongoDB community! Happy to know you found a solution to this, to add a bit more from my side, as per this documentation of MongoDB Extended JSON (v2) - The date/time has a maximum time precision of milliseconds:If this is an issue for your application, one can try one of below:By using canonical json output by adding --jsonFormat=canonical in your mongoexport query. By default this is relaxed.By Exporting in CSV Format instead of Json as it does not omit the milliseconds.Regards,\nTarun Gaur",
"username": "Tarun_Gaur"
},
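The first suggestion could look roughly like this on the command line (connection string, database, and collection are placeholders). Note that canonical output also changes the shape of dates to {"$date": ...} wrapper objects, so the downstream reader has to account for that:

```sh
# Sketch: export with canonical Extended JSON so dates keep full type information.
mongoexport --uri="mongodb+srv://<user>:<password>@<cluster>.mongodb.net/<db>" \
  --collection=<collection> \
  --jsonFormat=canonical \
  --out=export.json
```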
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongoexport - inconsistent date format in JSON | 2022-09-06T16:32:54.960Z | Mongoexport - inconsistent date format in JSON | 3,366 |
null | [
"aggregation",
"node-js",
"mongoose-odm"
]
| [
{
"code": "$sortArray[\n {\n bandName: \"The Beatles\",\n tours: [\n {\n date: \"2022-09-10\"\n location: \"San Francisco\"\n },\n {\n date: \"2022-09-15\"\n location: \"Seatle\"\n }\n ]\n },\n...\n const rs = await this.bands\n .aggregate([\n filter,\n {\n $set: {\n tours: {\n $sortArray: {\n input: '$tours',\n sortBy: { 'date': -1 }\n }\n }\n }\n },\n {\n $sort: {\n bandName: 1\n }\n }\n]\n\n",
"text": "I’m trying to use $sortArray to sort a collection by a field (‘bandName’), and within each document, sort the embedded array ‘tours’ by date.In the official example (linked above) sortArray is used in the projection stage. In my case I want keep the shape of the document.CodeError: MongoServerError: Invalid $set :: caused by :: Unrecognized expression '$sortArrayMongo server version: 6\nMongoose: 6.4.7\nNode: 16.4",
"username": "V11"
},
{
"code": "",
"text": "I misstated the version. We’re on v5.0.8 which doesn’t have $sortArray. It’s a managed service so upgrade is beyond our control.",
"username": "V11"
},
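For anyone stuck on 5.0 (where $sortArray is unavailable), one common workaround is to unwind the array, sort, and rebuild it. A mongosh-style sketch using the collection and field names from the question (note that documents with an empty tours array would be dropped by $unwind unless preserveNullAndEmptyArrays is used):

```js
// Pre-5.2 workaround: unwind the embedded array, sort, then rebuild it per document.
db.bands.aggregate([
  { $unwind: "$tours" },
  { $sort: { _id: 1, "tours.date": -1 } },
  {
    $group: {
      _id: "$_id",
      bandName: { $first: "$bandName" },
      tours: { $push: "$tours" },
    },
  },
  { $sort: { bandName: 1 } },
]);
```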
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to sort embedded array in-place with $sortArray in Mongoose | 2022-09-11T08:14:00.957Z | How to sort embedded array in-place with $sortArray in Mongoose | 3,581 |
null | [
"containers",
"storage"
]
| [
{
"code": "--quietSep 09 10:30:38Z split-serving/omnicoder-84f66bbf7-78vqw mongodb {\"t\":{\"$date\":\"2022-09-09T10:30:38.100+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1662719438:100728][7:0x7f22cd832700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 111, snapshot max: 111 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1\"}}\n~/Documents/GitHub/omnicoder/k8s master ≡\n❯ docker run --rm mongo:5.0.9 --quiet --quiet\nError parsing command line: Multiple occurrences of option \"--quiet\"\ntry 'mongod --help' for more information\n\n~/Documents/GitHub/omnicoder/k8s master ≡\n❯ docker run --rm mongo:5.0.9 --quiet=2\nError parsing command line: option '--quiet' does not take any arguments\ntry 'mongod --help' for more information\n",
"text": "Starting a local dev instance with --quiet still emits a lot of redundant logs on the formSince these are supposed to be informative only I would like to turn them off.I only want logs about two things:I do not want information of “everything is fine this is what I am doing right now”–quiet does not turn this off. How can I turn it off?",
"username": "Henrik_Holst"
},
{
"code": "",
"text": "Hi @Henrik_HolstI think what you’re looking for is changing the log verbosity setting. You can change this using the command db.setLogLevel() or set it in the config file.Out of curiousity, the logs are supposed to help with troubleshooting since it records what the server is doing at what particular time. Most times, you don’t need to look at them (a lot of applications simply redirect them to a file), and if you’re interested in some part of it, you can filter out the rest of them. Is there a specific reason why you don’t want these informations to be recorded?Best regards\nKevin",
"username": "kevinadi"
}
]
| Recurring log "saving checkpoint snapshot" | 2022-09-10T15:52:45.649Z | Recurring log “saving checkpoint snapshot” | 2,721 |
null | [
"queries",
"node-js",
"mongoose-odm"
]
| [
{
"code": "RangeError [ERR_OUT_OF_RANGE]: The value of \"offset\" is out of range. It must be >= 0 && <= 17825792. Received 17825801\n at Buffer.write (buffer.js:1052:5)\n...\n...\nLeadDeliveryLog.find(query)\n .populate('lead', '-_id fname lname email phone city state zip ip')\n .lean()\n .then(leads => {\n console.log('Done', leads.length);\n })\n .catch(e => {\n console.log('---------------ERROR--------------');\n console.log(e);\n })\n",
"text": "I am running a find command (using node) against my mongodb Atlas cluster. The query is supposed to return in the neighborhood of 1.3 mil records. But I get this error:Here is my code:From what I understand, that error is thrown when you try to insert a document larger than 16M into your database. But I am not doing that. I am just running a find command. What am I missing here?",
"username": "Zarif_Alimov"
},
{
"code": "query",
"text": "Hi @Zarif_Alimov,Just wondering if you’re still having issues with this? If so, could you share further details regarding the query portion of your code snippet as well as the MongoDB version in use? (Redact any personal or sensitive information before posting here)Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": ".aggregate.find",
"text": "Changing the code to use .aggregate instead of .find fixed the issue. No clue why to be honest.",
"username": "Zarif_Alimov"
},
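If anyone hits the same error with a result set too large to buffer into a single array, another option worth trying (a sketch, assuming the model and query from the snippet above) is to stream the results with Mongoose's Query#cursor() instead of resolving everything at once:

```js
// Sketch: stream the results instead of buffering ~1.3M documents into one array.
async function processLeads() {
  const cursor = LeadDeliveryLog.find(query)
    .populate("lead", "-_id fname lname email phone city state zip ip")
    .lean()
    .cursor();

  let count = 0;
  for await (const doc of cursor) {
    // process each lead here instead of holding them all in memory
    count += 1;
  }
  console.log("Done", count);
}
```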
{
"code": "query",
"text": "Thanks for advising the fix and glad to hear it was resolved Zarif. Would you be able to provide the query value of the code? I would like to see if I could replicate the error.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| RangeError [ERR_OUT_OF_RANGE]: The value of "offset" is out of range on a "find" query | 2022-08-03T02:02:25.738Z | RangeError [ERR_OUT_OF_RANGE]: The value of “offset” is out of range on a “find” query | 5,316 |
null | [
"compass",
"charts"
]
| [
{
"code": "",
"text": "Is it possible to use a dataset stored locally (where I’m using Compass) to Charts? I like using it in a personal atlas instance I have but I have company data hosted locally I’d like to be able to use it with. I haven’t been able to find any documentation on it so not sure if I’m searching the wrong items or if it’s not possible.\nThank you\nPaul",
"username": "paul_carson"
},
{
"code": "",
"text": "Hi @paul_carson,My interpretation of the question is that you’re wanting to use MongoDB Charts in Atlas with a Data Source that is stored locally on your machine which contains company data (which cannot be hosted in Atlas I presume). Please correct me if I am wrong here.Is it possible to use a dataset stored locally (where I’m using Compass) to Charts?If so, as mentioned in more detail in this post, MongoDB Charts on-premises was discontinued in Sept 2021 and is no longer supported. So to answer your question regarding using a data set stored locally with use in MongoDB Charts (Atlas) - > This unfortunately cannot be done currently.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Charts and Compass | 2022-09-06T17:07:51.180Z | Charts and Compass | 2,531 |
null | [
"vscode"
]
| [
{
"code": "",
"text": "I want to try to import these two sources into MongoDB for VSCode, but I don’t know how to import them, which folder to import them to, and how to connect them to the HTML source codes.The sources in question: https://raw.githubusercontent.com/mongodb/docs-assets/geospatial/restaurants.json\nhttps://raw.githubusercontent.com/mongodb/docs-assets/geospatial/neighborhoods.jsonAny help would be greatly appreciated!",
"username": "Ali_Codes"
},
{
"code": "",
"text": "You have to download the json file to some directory and then use mongoimport to load the data\nPlease refer to mongo documentation for exact syntax/options for mongoimport",
"username": "Ramachandra_Tummala"
},
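A typical mongoimport invocation for one of those files might look like this (connection string and database name are placeholders; run it from the folder where the file was saved):

```sh
# Sketch: import the downloaded file into a "restaurants" collection.
mongoimport --uri="mongodb+srv://<user>:<password>@<cluster>.mongodb.net/<db>" \
  --collection=restaurants \
  --file=restaurants.json
```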
{
"code": "",
"text": "So this also applies to VSCode?\nSorry, I’m just new to using MongoDB in VSCode in general-",
"username": "Ali_Codes"
},
{
"code": "",
"text": "Yes applies to VSCode too\nI don’t use VSCode but what i understand is it can be used as IDE with Mongodb\nYou have to use inbuilt tools to load the data\nFrom view->command palette you can access mongodb playground or mongo shell\nFor inserting few records you can use mongodb playground but for more data like your json files you have to use mongoimport\nMay be others with VSCode knowledge can help more on this",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Ah, got it! Thanks for the tips + help!",
"username": "Ali_Codes"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Inputting MondoDB Databases into VScode | 2022-09-10T15:58:18.833Z | Inputting MondoDB Databases into VScode | 2,836 |
null | []
| [
{
"code": "",
"text": "Hello, I’ve deployed the NestJs app to Heroku and it’s connected to MongoDB when run locally it connects without issues and even shows data. I’ve added a var config to Heroku as well. Does anyone know what could be the cause of it?",
"username": "Adam_Ondrejkovic"
},
{
"code": "",
"text": "Is your code referring to the correct collection name?\nIf it is working locally it should work with app too",
"username": "Ramachandra_Tummala"
}
]
| MongoDB returning [] on Heroku | 2022-09-11T12:29:57.229Z | MongoDB returning [] on Heroku | 1,029 |
null | []
| [
{
"code": "",
"text": "I had a MySQL database populated with around 1000+ rows and are some empty values in particular rows.\nNow, I want to synchronise the MySQL database with MongoDB, in such a way that if any data is added to the MySQL database, the same should be copied to the MongoDB. In this migration process there should not be any change is datatypes of particular fields.Can anyone suggest to build the best solution for this?",
"username": "Bhavesh_Asanabada"
},
{
"code": "",
"text": "You could use Kafka and a combination of the Debezium MySQL source connector and the MongoDB Connector for Apache Kafka as a sink.Here are the docs for the sink perspectiveHere are the docs for Debezium MySQL source\nhttps://debezium.io/documentation/reference/stable/connectors/mysql.html",
"username": "Robert_Walters"
}
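For illustration only, a sink-side configuration for the MongoDB Kafka connector could look roughly like this; the topic name, connection string, and database/collection are placeholders that depend on how the Debezium source is configured:

```json
{
  "name": "mongo-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "mysql-server.inventory.customers",
    "connection.uri": "mongodb+srv://<user>:<password>@<cluster>.mongodb.net",
    "database": "inventory",
    "collection": "customers"
  }
}
```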
]
| Synchronise the MySQL Database with MongoDB | 2022-09-09T10:24:41.178Z | Synchronise the MySQL Database with MongoDB | 1,859 |
null | [
"data-modeling"
]
| [
{
"code": "",
"text": "Hello DevsI’m new to mongodb and I have a few questions if someone can help me1- Can Mongodb handle inserting a medium amount of data every minute as I’m building an application that needs to update and insert data every minute2- Is there a specific schema design I must use for such an app?Thank you",
"username": "Seif_Omran"
},
{
"code": "",
"text": "Hi @Seif_Omran ,MongoDB is a good fit for data ingested over time. We do have some specific solutions for storing timeseries data which sounds like what you are talking about where data is ingested over time:Time series, IOT, time series analysis, time series data, time series dbI would start by looking into using timeseries collections for your schema.On another note I suggest to read the following articles for our patterns and anti patternsThanks\nPavel",
"username": "Pavel_Duchovny"
}
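As a starting point, creating and writing to a time series collection (MongoDB 5.0+) could look like this; the collection and field names are illustrative:

```js
// Sketch: a time series collection for per-minute measurements.
db.createCollection("measurements", {
  timeseries: {
    timeField: "timestamp",
    metaField: "source",
    granularity: "minutes",
  },
});

db.measurements.insertOne({
  timestamp: new Date(),
  source: { sensorId: 1 },
  value: 42,
});
```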
]
| Insertion every one minute | 2022-09-09T22:16:33.156Z | Insertion every one minute | 1,063 |
[]
| [
{
"code": "",
"text": "How do I change my currently used namespace to be firebase\ntest is the default one, and now I need to use the firebase as I’m integrating firebase to the project\nso that whenever I send data to the database from my server it goes to the firebase namespace instead to the test namespaceI am connected to my node project with a connection stringI remember seeing it on a youtube video and just by changing something in the connection string but I cant find it again, also tried searching on google but results are different on what I am finding for, so maybe my term is wrong but I hope I explained it clearly here.",
"username": "Kyle_Atienza"
},
{
"code": "",
"text": "What db name you used in your connect string?\nReplace that with DBname you want to connect/insert data\nIf you did not use any DBname it defaults to test",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi Ramachandra thanks for the response!\nhere is my connection string and I don’t specified any DBname so I don’t know at what part to add /replace the DBname\nmongodb+srv://kyle-sproutit:[email protected]/?retryWrites=true&w=majorityat what part should I add it?",
"username": "Kyle_Atienza"
},
{
"code": "",
"text": "Add after mongodb.net/dbname",
"username": "Ramachandra_Tummala"
},
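Concretely, with placeholder credentials and host, the connection string would then look like this so that writes default to the firebase database:

```
mongodb+srv://<user>:<password>@<cluster>.mongodb.net/firebase?retryWrites=true&w=majority
```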
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Changing currently used namespace | 2022-09-10T23:24:15.925Z | Changing currently used namespace | 1,797 |
|
null | [
"aggregation",
"compass"
]
| [
{
"code": "{\n \"id\": 12,\n ...\n}\n{\n \"id\" : ...\n \"matchResults\" : [ \n {\n \"player1\": 12,\n \"player1\": 13,\n ....\n }, \n {\n \"player1\": 14,\n \"player1\": 12,\n ...\n },\n ....\n ]\n}\neventmatchResultsplayerplayer1player2$in requires an array as a second argument, found: missing",
"text": "I’m pretty new to mongo and this is causing me quite some confusion. I’ve seen other posts describing how to do this, but it doesn’t seem to work in my case. I’d appreciate any guidance on my data model as well, it maybe that it isn’t appropriate.player collectionevents collectionBasically an event will have many matchResults, which may or may not be related to a specific player, and the player could appear in field player1 or player2.I’d like an aggregation that returns the player with only their matches from all event documents.\nFrom what I can see some kind of aggregation pipeline using $lookup and $in is the way to go, but I couldn’t get compass to go beyond complaining about $in requires an array as a second argument, found: missing",
"username": "jimmyb"
},
{
"code": "{\n from: 'event',\n let: {'id':\"$matchResults.player1\"},\n pipeline: [{\n $match : {\n $expr : {\n $in : ['$id', \"$$id\"]\n }\n } \n }],\n as: 'matches'\n}\n",
"text": "This is as far as I got with the aggregation. I just focused on matching to player1 initially, but that still doesn’t work.",
"username": "jimmyb"
},
{
"code": "$match$or$filterdb.collection.aggregate([\n {\n \"$match\": {\n \"$or\": [\n {\n \"matchResults.player1\": 1\n },\n {\n \"matchResults.player2\": 1\n }\n ]\n }\n },\n {\n \"$project\": {\n \"matchResults\": {\n \"$filter\": {\n \"input\": \"$matchResults\",\n \"cond\": {\n \"$or\": [\n {\n \"$eq\": [\n \"$$this.player1\",\n 1\n ]\n },\n {\n \"$eq\": [\n \"$$this.player2\",\n 1\n ]\n }\n ]\n }\n }\n }\n }\n }\n])\n",
"text": "Hi @jimmyb ,You can do it like this:$match with $or - to find all documents that has at least one event where requested player was either player1 or player2.$filter - to filter out only events where the requested player was either player1 or player2.Working Example",
"username": "NeNaD"
},
{
"code": "",
"text": "Thanks. I found a solution before the reply, but it ended up being fairly similar.",
"username": "jimmyb"
}
]
| $lookup on array of objects | 2022-08-25T01:29:28.429Z | $lookup on array of objects | 1,774 |
null | [
"python"
]
| [
{
"code": "",
"text": "As there is not an official python driver for Realm, is there a way to use Realm API auth using pymongo which would allow me to use the defined Realm collection rules?Thank you",
"username": "Tyler_Collins"
},
{
"code": "",
"text": "Looking to find an answer for this as well…",
"username": "Carl_Castillo"
},
{
"code": "",
"text": "Same here. How to call Atlas functions (former REalm) from Python?",
"username": "Sergio_M"
}
]
| Use Realm API auth and rules with python | 2022-03-31T19:10:55.915Z | Use Realm API auth and rules with python | 2,633 |
null | [
"python",
"compass",
"storage"
]
| [
{
"code": "storage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\ncacheSizeGBstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n wiredTiger:\n engineConfig:\n cacheSizeGB: 48\nconnect ECONNREFUSED [127.0.0.1:27017](http://127.0.0.1:27017)pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno\n111] Connection refused, Timeout: 30s, Topology Description:\n<TopologyDescription id: 631a26da7c16647798b4ac71, topology_type:\nUnknown, servers: [<ServerDescription ('localhost', 27017)\nserver_type: Unknown, rtt: None, error=AutoReconnect('localhost:27017:\n[Errno 111] Connection refused')>]>\n",
"text": "Ubuntu 22.04.1\nMongoDB 6.0.1With default mongod.conf the connection is okay.With custom cacheSizeGB it is not possible to connect.Compass output:\nconnect ECONNREFUSED [127.0.0.1:27017](http://127.0.0.1:27017)PyMongo output:",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "ECONNREFUSED means mongod is not running.You probably got errors when you restarted with the new configuration. Share those errors. You should use a log file as it simplifies trouble shooting.Do you have at least 48GB of RAM?",
"username": "steevej"
},
{
"code": "cacheSizeGBmongod.conf",
"text": "No, there are 64 GB. According to htop it is 62.6 GB. My goal is to allow MongoDB to use 56 GB. The inability to connect is observed with different cacheSizeGB values.The error was observed both immediately after saving the changes in mongod.conf, and after rebooting the OS.",
"username": "Platon_workaccount"
},
{
"code": "cacheSizeGB",
"text": "The mongod.log stops filling up with new lines after adding the cacheSizeGB parameter to the config.",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "Depending on how you installed mongod, rebooting the OS might be insufficient to restart mongod.Make sure it is running.",
"username": "steevej"
},
{
"code": "sudo systemctl stop mongodsudo systemctl enable mongod",
"text": "To disable MongoDB I apply sudo systemctl stop mongod or reboot OS, to enable - sudo systemctl enable mongod. None of these things work.",
"username": "Platon_workaccount"
},
{
"code": "sudo systemctl start mongod\nsudo systemctl status mongod\n",
"text": "TryAnd provide output ofNote that it is possible that the service is named mongodb rather than mongod.",
"username": "steevej"
},
{
"code": "sudo systemctl start mongodsudo systemctl status mongod× mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled;\nvendor preset: enabled)\n Active: failed (Result: exit-code) since Sat 2022-09-10 22:47:59\nMSK; 16s ago\n Docs: https://docs.mongodb.org/manual\n Process: 5069 ExecStart=/usr/bin/mongod --config /etc/mongod.conf\n(code=exited, status=2)\n Main PID: 5069 (code=exited, status=2)\n CPU: 20ms\n\nсен 10 22:47:59 platon-MS-7D42 systemd[1]: Started MongoDB Database Server.\nсен 10 22:47:59 platon-MS-7D42 mongod[5069]: Unrecognized option:\nstorage.wiredTiger\nсен 10 22:47:59 platon-MS-7D42 mongod[5069]: try '/usr/bin/mongod\n--help' for more information\nсен 10 22:47:59 platon-MS-7D42 systemd[1]: mongod.service: Main\nprocess exited, code=exited, status=2/INVALIDARGUMENT\nсен 10 22:47:59 platon-MS-7D42 systemd[1]: mongod.service: Failed with\nresult 'exit-code'.\n",
"text": "sudo systemctl start mongod\nsudo systemctl status mongod",
"username": "Platon_workaccount"
},
{
"code": "Unrecognized option:\nstorage.wiredTiger\n",
"text": "This confirms mongod is not running. It also indicates that the configuration you specified is wrong.Check https://www.mongodb.com/docs/v5.0/reference/configuration-options/#storage-options for proper options. May be you have tabs rather than spaces.",
"username": "steevej"
},
{
"code": "# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n wiredTiger:\n engineConfig:\n cacheSizeGB: 56\n",
"text": "Following the pattern of default mongod.conf, I use a multiple of two spaces for indentation.Could you please clarify in which exact place of the code below there is an error?",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "May be you have tabs or spaces after the :.",
"username": "steevej"
},
{
"code": "",
"text": "Try removing the # engine",
"username": "steevej"
},
{
"code": "# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n wiredTiger:\n engineConfig:\n cacheSizeGB: 56\nmongodstorage.wiredTigermongod --versionmongod",
"text": "I just tested locally with this in my config file and mongod starts up just fine with MongoDB 6.0.1. This config even works with version 3.4.23, so storage.wiredTiger is a valid key.I know you said that you’re running MongoDB 6.0.1, but can you post the results of mongod --version just for sanity. It’s almost like you have a really old version of mongod running.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unable connecting to locally installed MongoDB if custom cacheSizeGB is specified | 2022-09-08T17:44:43.572Z | Unable connecting to locally installed MongoDB if custom cacheSizeGB is specified | 3,819 |
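A quick follow-up sketch for the thread above: once the unrecognized-option problem is solved and mongod starts, the effective WiredTiger cache size can be confirmed from mongosh. This is a minimal illustration; the 48G figure is only an example, and the runtime resize assumes the host actually has that much free RAM.

```javascript
// Check the cache size mongod is actually using.
const cache = db.serverStatus().wiredTiger.cache;
print("cache GB:", cache["maximum bytes configured"] / (1024 * 1024 * 1024));

// The cache can also be resized at runtime without editing mongod.conf.
db.adminCommand({ setParameter: 1, wiredTigerEngineRuntimeConfig: "cache_size=48G" });
```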
null | [
"indexes"
]
| [
{
"code": "",
"text": "Hello there,A pretty lame question, sorry about that, but I have been looking for the answer everywhere here and on the Internet and failed to find the answer.\nI am trying to count the number of documents in a partial index (whole index or part of it). Not in a whole collection.\nThe only recommendation I found is to perform a find with the index hint or with a filter that matches the index, and count the number of documents returned. This looks like O(N), which is unreasonable. There should be a way to obtain that information as O(1) if the whole index count is requested, and at worst O(log(N)) in case counting the number of documents is operating on a sub-index.\nI am using the MongoDB C driver, so the solution I need must fit within the API provided by it. I see only collection-level functions (mongoc_collection_count_documents, mongoc_collection_estimated_document_count).Thanks in advance!",
"username": "Vincent_Lextrait"
},
{
"code": "",
"text": "I do not know if you can get what you want exactly but the following is the closest I could find.",
"username": "steevej"
},
{
"code": "collStatsmongod",
"text": "One thing to note is that collStats counters are reset to 0 if the mongod process is restarted, so not a reliable source.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Ah yes, then not a solution indeed.",
"username": "Vincent_Lextrait"
}
]
| Counting the number of documents in a partial index | 2022-09-08T14:30:45.637Z | Counting the number of documents in a partial index | 1,765 |
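On the thread above: MongoDB does not maintain a per-index document count, so there is no O(1) answer, but a count whose predicate is fully satisfied by the partial index can be answered from the index keys (a COUNT_SCAN) rather than by fetching documents. A mongosh sketch follows; the orders collection, the status/active fields and the index name are hypothetical.

```javascript
// Hypothetical partial index: only documents with active: true are indexed.
db.orders.createIndex(
  { status: 1 },
  { partialFilterExpression: { active: true } }
);

// A count whose filter implies the partial filter can be satisfied by the
// index alone; hinting the index makes the intent explicit.
db.orders.countDocuments(
  { status: "shipped", active: true },
  { hint: "status_1" }
);
```

On the C driver side, mongoc_collection_count_documents accepts an opts document, which should be able to carry the same hint alongside the filter.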
null | []
| [
{
"code": "",
"text": "Hey guys,I have a problem with MongoDB. It’s been taking this snapshot for days. Is there anything I can do to disable it? If it would be possible can you send me the command. And one more question\nSnapshot: hastebinI have linked my CollectionAPI here, which I programmed in Java and I wanted to ask if it is so good or if an improvement is urgently needed. We make a Minecraft server with MongoDB and we have MongoDB problems, sometimes we just time out from the server and I assume that the connection to the MongoDB server is broken.CollectionAPI: hastebin",
"username": "ItsKnxck_N_A"
},
{
"code": "",
"text": "Am only new to the forum, if I have a wrong category, please excuse me",
"username": "ItsKnxck_N_A"
},
{
"code": "",
"text": "Hi @ItsKnxck_N_A welcome to the community!Snapshots and checkpoints are internal WiredTiger methods to persist data to disk. See the Snapshots and Checkpoints section in the manual page for more details. Thus you cannot turn them off, since then you’ll have no data persisted. Those log lines are for informative purposes only, and do not indicate that there is any issue.Regarding the Java code you posted, I believe you might have better response on specific Java-related site, such as StackOverflow or CodeReview StackExchange, since this forum is specific to MongoDB.Best regards,\nKevin",
"username": "kevinadi"
}
]
| MongoDB makes checkpoints and snapshot all the time | 2021-05-08T13:05:27.329Z | MongoDB makes checkpoints and snapshot all the time | 2,652 |
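For the thread above: checkpoints cannot be disabled, but their activity can be observed from mongosh if the log lines are a concern. A small sketch is below; the exact counter names under serverStatus().wiredTiger differ between server versions, so the key filter is only an assumption about where the checkpoint counters live.

```javascript
// List whatever checkpoint-related counters this server version exposes.
const txn = db.serverStatus().wiredTiger.transaction;
Object.keys(txn)
  .filter((k) => k.includes("checkpoint"))
  .forEach((k) => print(k, "=", txn[k]));
```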
[
"node-js",
"crud"
]
| [
{
"code": "",
"text": "Hey, I’m building a GraphQL API with NodeJS and MongoDB. Everything was going smooth until i tried saving data to the database and it didn’t return any OPS RESULT in the terminal. Please have a look at the code for better understanding and guide me on where did I go wrong? Thank you in advance!\n\nnodemon1042×593 29.2 KB\n",
"username": "run_x_todo"
},
{
"code": "insert()InsertManyResultacknowledgedinsertedCountinsertedIdsinsert()insertOne()insertMany()bulkWrite()",
"text": "Welcome to the MongoDB Community Forums @run_x_todo !Can you provide more information to help understand your issue:In the MongoDB Node.js 4.0 driver, insert() returns an InsertManyResult which has the 3 properties in your screenshot: acknowledged, insertedCount, and insertedIds.Since the insert() method is deprecated (per the warning in your screenshot), I’d recommend using insertOne(), insertMany(), or bulkWrite() to ensure you get the expected result object returned.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hey, thank you for your response.I want to return “RESULT” to see inserted data but I’m not being able to do it, can you please help me with how I can do that? how do in return it? I’m losing all of my brain cells lolbefore I’d just add;\nconsole.log(result); terminal would show result with an array OPS then id add;\nreturn result.ops[0], before highlighted code and it’d return the inserted data result//see attached screenshot for better understanding// Thanks in advance!",
"username": "run_x_todo"
},
{
"code": "",
"text": "how should i return this below attached code if i’m not getting array OPS after console.log(result);?",
"username": "run_x_todo"
},
{
"code": "",
"text": "@Stennie_X\n",
"username": "run_x_todo"
},
{
"code": "",
"text": "Ever figure this out? I’m having the exact same issue, not sure how to get the result object show the ops with graphql and nodejs.",
"username": "Paul_Jreij"
},
{
"code": "",
"text": "any help would be greatly appreciated ",
"username": "Paul_Jreij"
},
{
"code": "const result = await db.collection('Users').insertOne(newUser);\n const someId = result.insertedId;\nconst actualResult = await db\n .collection('Users')\n .findOne({ _id: someId });\n",
"text": "Hey Man it’s because of Mongo 4.x update where we no longer have ops instead of that we can do something like this after insert.InsertId and acknowledgement are the only response we get from the db.you have to run another db query hereand if you do console log you will get the inserted document Happy Coding",
"username": "sandeep_krishna"
},
{
"code": "",
"text": "Sen varya adamsın adam <3. Daşşaklarını yapmaya beton yetmez !\nThank you so much bro. You are my hero ",
"username": "Diyar_Can"
}
]
| The Result Object OPS not Showing in GraphQL with nodejs | 2021-08-04T10:22:30.397Z | The Result Object OPS not Showing in GraphQL with nodejs | 8,109 |
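For the thread above: with driver 4.x, insertOne resolves to { acknowledged, insertedId } and there is no ops array, so a resolver has to either re-read the document or build the return value itself. A minimal sketch follows; the Users collection name and the db handle are taken from the snippets in the thread, and createUser itself is a hypothetical helper name.

```javascript
// Resolver-style helper for MongoDB Node.js driver 4.x.
async function createUser(db, newUser) {
  const { acknowledged, insertedId } = await db
    .collection("Users")
    .insertOne(newUser);
  if (!acknowledged) throw new Error("insert was not acknowledged");

  // Option 1: re-read the stored document (as suggested in the thread).
  // return db.collection("Users").findOne({ _id: insertedId });

  // Option 2: skip the extra round trip and assemble the result directly.
  return { ...newUser, _id: insertedId };
}
```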
|
null | [
"atlas-device-sync"
]
| [
{
"code": "realmApp.syncManager.errorHandler = { error, session in\n let syncError = error as! SyncError\n \n switch syncError.code {\n \n case .clientResetError:\n guard let (path, clientResetToken) = syncError.clientResetInfo(),\n let realmFileURL = getClientResetRealmFileURL() else { return }\n \n DispatchQueue.main.async {\n let data = SyncErrorData(clientResetToken: clientResetToken, autoBackupPath: path, realmFileURL: realmFileURL, syncManager: realmApp.syncManager)\n AppDelegate.shared.rootViewController.handleClientResetError(data: data)\n }\n \n case .clientUserError: // \"expired refresh token\" error, after 30 days\n DispatchQueue.main.async {\n AppDelegate.shared.rootViewController.handleSyncError(.expiredRefreshToken)\n }\n\n // In case of other errors, we do the same as for expired refresh token errors: log out the user\n default:\n AppDelegate.shared.rootViewController.handleSyncError(.unknown(syncError))\n }\n}\nError:\n\nending session with error: integrating 1 changesets failed after 1 attempts in 7.540692071s: could not complete upload integration as this connection no longer owns the file ident; no action is needed, as the client has already established a new connection to the sync server to complete its upload (ProtocolErrorCode=201)\n\nLogs:\n\n[ \"Session was active for: 10s\" ]\n\nPartition:\n\nPUBLIC\n\nSession Metrics:\n\n{ \"uploads\": 1, \"downloads\": 1 }\n\nRemote IP Address:\n\n81.102.24.215\n\nSDK:\n\nRealm Cocoa v10.15.1\n\nPlatform Version:\n\nVersion 15.1 (Build 19B74)\ncould not complete upload integration as this connection no longer owns the file ident",
"text": "Some users of my production app consistently get “Bad changeset (DOWNLOAD)” errors when using Realm Sync, even after re-installing the app.The code of the sync error handler is the following (Swift):When getting this error, or any other kind of sync error, the user is logged out. But for this error, when users try to login again, they get the same error (even if they uninstall and re-install the app).The error on the server is:The important part (I guess) is could not complete upload integration as this connection no longer owns the file ident, but I don’t understand what it means. The part that troubles me is “no action is needed, as the client has already established a new connection to the sync server to complete its upload”, which seems to indicate that this shouldn’t be an error.It’s also worth noting that before getting this specific error, users got another error for a few months, “Bad progress information (DOWNLOAD)”, which also prevented them from using the app.I’m guessing the source of the problem is a schema inconsistency, however this should be resolved by now. My concern is that such errors should be solved by doing a client reset, which is not the case here.How to get sync to work again for those users?",
"username": "Jean-Baptiste_Beau"
},
{
"code": "historyupdatedeleteid",
"text": "Hello @Jean-Baptiste_BeauMy name is Josman and I am happy to assist you with this issue. Usually, a BADCHANGESET (DOWNLOAD) error is an error related to the history for a said partition. These types of errors can occur due to actions that are not permitted like making changes to schemas with development mode Off, setting incorrect partition value, syncing to/from incorrect partition, etc.In your particular scenario, it appears that the client is unable to rebuild the history for a particular partition due to two (or more) operations that are not congruent with each other, i.e. an update after a delete related to the same id of a document.Currently, there is only two possible solution to this error:Please let me know if you have any additional questions or concerns regarding the details above.Kind Regards,\nJosman",
"username": "Josman_Perez_Exposit"
},
{
"code": "",
"text": "Hi @Josman_Perez_Exposit,Thank you for your answer.Terminating Sync is a very risky and troublesome process, as you pointed out, and the last times I had to do it were absolute nightmares, causing the app in production to be down for a few days.Isn’t it possible to restart sync (or something similar) only for some users? I find it very surprising that to solve the problem of one user (or a few), I have to put the whole thing down for everyone.Thanks,\nJB",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "Hello @Jean-Baptiste_BeauIsn’t it possible to restart sync (or something similar) only for some users? I find it very surprising that to solve the problem of one user (or a few), I have to put the whole thing down for everyone.Unfortunately no, terminating Sync is, as you said, the last resource to fix this kind of issue and it cannot be performed for only some users. The badchangeset problem you are experiencing for that partition is making the client not be able to restore the history for the same. If you could open a support case, we would be able to help you in the best way possible and investigate further if we could solve this without terminating Sync.Please let me know if you have any additional questions or concerns regarding the details above.Kind Regards,\nJosman",
"username": "Josman_Perez_Exposit"
},
{
"code": "message handler failed with error: error handling \"upload\" message: could not complete upload integration as this connection no longer owns the file ident; no action is needed, as the client has already established a new connection to the sync server to complete its upload",
"text": "I have a similar issue with the Sync. The clients reported that the app is “down”. Not sure if that’s user-related or time-related though. The error message is the same message handler failed with error: error handling \"upload\" message: could not complete upload integration as this connection no longer owns the file ident; no action is needed, as the client has already established a new connection to the sync server to complete its upload\n\nimage2520×1702 376 KB\nMight be related to Error Invalid Session on Old Client 10.3 or 10.4",
"username": "Anton_P"
},
{
"code": "",
"text": "Might be related to the fact that async open took 4 minutes to complete and we allow to reconnect after it takes more than 2 minutesThe whole uncompressed database size is ~7mb and it even less for a specific user due to partition so it’s quite strange especially because some users were from the USA and we have AWS N. Virginia (us-east-1) M10 there.",
"username": "Anton_P"
},
{
"code": "01.10 20:38:13.168 - The app tries to async open two Realms\n01.10 20:38:14.630 - The smaller one (user) successfully opened\n01.10 20:40:36.307 - The bigger one (public) download start\n01.10 20:40:46.171 - The bigger one (public) download finish\n4.4.104.4.11AWS Ireland (eu-west-1) M0AWS N. Virginia (us-east-1)",
"text": "It is indeed related to the Error Invalid Session on Old Client 10.3 or 10.4 as symptoms are the same but might not be related to this topic even though the error seems the same.It is happening right now but was fine during morning-day. It takes more than 2 mins during Realm async open to actually start downloading any data. Moreover, it isn’t the first launch. It takes 4-5 mins to async open Realm on app restart.So it takes 140 seconds to start the download and just 10 seconds to finish downloading. That’s beyond any expectations. Especially because all the data should already be available locally but it looks like it redownloading everything after the app restart.UPDATE: It looks like MongoDB was just updated from the 4.4.10 to the 4.4.11 and it works just fine after restart. Though, not sure if that’s a restart or version update that helped.UPDATE 2: This is happening again right now so it was not fixed . The strange thing it is happening 3rd day in a row at the same time. It took 8 minutes today to startup the app. The server load is just zero but it works like under 1000% load\n\nimage1280×222 64.3 KB\nUPDATE 3: It works fine on the AWS Ireland (eu-west-1) M0 machine but for all our AWS N. Virginia (us-east-1) projects it just doesn’t work. I think we will try to migrate Is it possible to migrate MongoDB together with Realm and Sync to different cloud provider or region?",
"username": "Anton_P"
},
{
"code": "",
"text": "Terminate Sync, wait 10 minutes and re-enable Sync.Why do we need to wait 10 mins? I tried Terminated sync and re-enable. It also seems to work fine.",
"username": "NightNight"
},
{
"code": "",
"text": "Hello @NightNight , sorry for the long reply on this. Usually, it is a good practice to wait a minimum of 10 minutes before terminating and reenabling Sync to allow the Sync process to prune all pending operations.Note that the Realm database on the server is common for all your Realm applications within the same project. When you terminate a Realm application, the server process must delete all metadata associated with that application. So, although you can terminate and start it without any problem, depending on the size of the stored data, it is recommended (as a good practice) to wait a minimum amount of time to ensure that there will be no problems when restarting Atlas Device Sync again.Please let me know if you have any additional questions or concerns regarding the details above.Kind Regards,\nJosman",
"username": "Josman_Perez_Exposit"
},
{
"code": "Sync is currently terminating... Please wait for sync to finish terminating before enabling again.",
"text": "I had to terminate sync and there is a message Sync is currently terminating... Please wait for sync to finish terminating before enabling again. that persists for 30 minutes already and does not allow to enable sync back. I doubt there is work that requires so much time since there is only 2MB of data.",
"username": "Anton_P"
},
{
"code": "",
"text": "Hello @Anton_P ,Could you please share with me by private message the URL of your App Services Project? I would want to see what could be the issue you are facing.Looking forward to your responsePlease let me know if you have any additional questions or concerns regarding the details above.Kind Regards,\nJosman",
"username": "Josman_Perez_Exposit"
},
{
"code": "",
"text": "Hello @Anton_PAs we have been talking privately, I confirm that the problem with your application is now fixed.Sorry for the inconvenience and thank you for your patience.Please let me know if you have any additional questions or concerns regarding the details above.Kind Regards,\nJosman",
"username": "Josman_Perez_Exposit"
}
]
| Keep getting Bad changeset (DOWNLOAD), even after re-installing app | 2022-01-03T16:55:26.534Z | Keep getting Bad changeset (DOWNLOAD), even after re-installing app | 6,447 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "aggregate([\n {\n \"$lookup\": {\n \"from\": UserModel.collection.name,\n \"localField\": \"threat.lastModifiedBy\",\n \"foreignField\": \"username\",\n \"as\": \"inventory_docs\"\n }\n },\n {\"$project\":{\n inventory_docs:1,\n project:1,\n }},\n {\n $set: {\n \"threat.$.reviewedBy\": {\n $arrayElemAt: [\n \"$inventory_docs._id\",\n 0\n ]\n }\n }\n }\n ])\n",
"text": "What I need to do is to search the whole collection and check the threats array of objectsIn that array currently there is a “lastModifiedBy” field that contains a usernameI need to get the userName and search the user db and instead set the user Id on that fieldI know that in a normal update you can use the $. to update an array of object but that does not seem to work in aggregation",
"username": "AbdulRahman_Riyaz"
},
{
"code": "",
"text": "Take a look at $map. It is used to modify array.",
"username": "steevej"
}
]
| How to update a single field in an array of objects using aggregation | 2022-09-07T18:42:03.435Z | How to update a single field in an array of objects using aggregation | 1,990 |
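On the thread above, one way to combine the $lookup from the question with the $map that steevej suggests is sketched below. The cases and users collection names are assumptions, and unmatched usernames would need extra handling, since $indexOfArray returns -1 for them.

```javascript
db.cases.aggregate([
  { $lookup: {
      from: "users",
      localField: "threat.lastModifiedBy",
      foreignField: "username",
      as: "matchedUsers"
  } },
  { $set: {
      // rewrite each threat element, swapping the username for the user's _id
      threat: { $map: {
          input: "$threat",
          as: "t",
          in: { $mergeObjects: ["$$t", {
              lastModifiedBy: { $arrayElemAt: [
                  "$matchedUsers._id",
                  { $indexOfArray: ["$matchedUsers.username", "$$t.lastModifiedBy"] }
              ] }
          }] }
      } }
  } },
  { $unset: "matchedUsers" }
])
```

Because $lookup is not allowed inside an update pipeline, persisting the result means ending the aggregation with $merge back into the collection, or writing the documents back from the application.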
null | [
"aggregation",
"queries",
"data-modeling"
]
| [
{
"code": "db.case_details.aggregate( [\n {\n $facet: {\n \"status_counts\": [{\"$group\" : {_id:{source:\"$assigned_to_ref.$id\",status:\"$status\"}, count:{$sum:1}}}],\n \"Priority_counts\": [{$match:{\"status\":{$ne:\"Closed\"}}},{\"$group\" : {_id:{agent_id:\"$assigned_to_ref.$id\",Priority:\"$priority\"}, count:{$sum:1}}}]\n }\n }\n ])\n{\n \"status_counts\" : [\n {\n \"_id\" : {\n \"agent_id\" : \"61a4740aaa59a81392928a7d\",\n \"status\" : \"Closed\"\n },\n \"count\" : 30.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4740aaa59a81392928a7d\",\n \"status\" : \"Pending\"\n },\n \"count\" : 88.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4732eaa59a81392928a7c\",\n \"status\" : \"Pending\"\n },\n \"count\" : 20.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"6184fdcc68b51a7ad0cff7a9\",\n \"status\" : \"Pending\"\n },\n \"count\" : 1.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4780baa59a81392928a89\",\n \"status\" : \"Closed\"\n },\n \"count\" : 94.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47cbeaa59a81392928a8e\",\n \"status\" : \"Closed\"\n },\n \"count\" : 143.0\n },\n {\n \"_id\" : {\n \"status\" : \"Open\"\n },\n \"count\" : 222.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47d13aa59a81392928a8f\",\n \"status\" : \"Closed\"\n },\n \"count\" : 120.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47e69aa59a81392928a92\",\n \"status\" : \"Closed\"\n },\n \"count\" : 97.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47696aa59a81392928a85\",\n \"status\" : \"Pending\"\n },\n \"count\" : 146.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47d13aa59a81392928a8f\",\n \"status\" : \"Pending\"\n },\n \"count\" : 154.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47836aa59a81392928a8a\",\n \"status\" : \"Closed\"\n },\n \"count\" : 29.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4732eaa59a81392928a7c\",\n \"status\" : \"Closed\"\n },\n \"count\" : 11.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a474f7aa59a81392928a80\",\n \"status\" : \"Open\"\n },\n \"count\" : 1.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a475d8aa59a81392928a82\",\n \"status\" : \"Pending\"\n },\n \"count\" : 79.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a0ad9b8fc5a9742c9cf619\",\n \"status\" : \"Open\"\n },\n \"count\" : 2.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47659aa59a81392928a84\",\n \"status\" : \"Pending\"\n },\n \"count\" : 155.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"619b71ec6ec78556320d8e27\",\n \"status\" : \"Open\"\n },\n \"count\" : 9.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47696aa59a81392928a85\",\n \"status\" : \"Closed\"\n },\n \"count\" : 93.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a452f467dacd4a141d195e\",\n \"status\" : \"Pending\"\n },\n \"count\" : 87.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4533d67dacd4a141d195f\",\n \"status\" : \"Closed\"\n },\n \"count\" : 149.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47468aa59a81392928a7e\",\n \"status\" : \"Closed\"\n },\n \"count\" : 130.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a474abaa59a81392928a7f\",\n \"status\" : \"Closed\"\n },\n \"count\" : 26.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4533d67dacd4a141d195f\",\n \"status\" : \"Pending\"\n },\n \"count\" : 158.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a476e1aa59a81392928a86\",\n \"status\" : \"Open\"\n },\n \"count\" : 1.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a452f467dacd4a141d195e\",\n \"status\" : \"Closed\"\n },\n \"count\" : 152.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47c5aaa59a81392928a8d\",\n \"status\" : \"Closed\"\n },\n \"count\" : 
220.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47c5aaa59a81392928a8d\",\n \"status\" : \"Pending\"\n },\n \"count\" : 103.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47dc0aa59a81392928a90\",\n \"status\" : \"Pending\"\n },\n \"count\" : 99.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4780baa59a81392928a89\",\n \"status\" : \"Pending\"\n },\n \"count\" : 158.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a477bfaa59a81392928a88\",\n \"status\" : \"Pending\"\n },\n \"count\" : 283.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a8fcdc6e4376427f0ccb1b\",\n \"status\" : \"Pending\"\n },\n \"count\" : 2.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47747aa59a81392928a87\",\n \"status\" : \"Pending\"\n },\n \"count\" : 5.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47747aa59a81392928a87\",\n \"status\" : \"Closed\"\n },\n \"count\" : 162.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47659aa59a81392928a84\",\n \"status\" : \"Closed\"\n },\n \"count\" : 169.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47468aa59a81392928a7e\",\n \"status\" : \"Pending\"\n },\n \"count\" : 113.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a476e1aa59a81392928a86\",\n \"status\" : \"Pending\"\n },\n \"count\" : 192.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47c5aaa59a81392928a8d\",\n \"status\" : \"Open\"\n },\n \"count\" : 33.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47564aa59a81392928a81\",\n \"status\" : \"Closed\"\n },\n \"count\" : 80.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a474abaa59a81392928a7f\",\n \"status\" : \"Open\"\n },\n \"count\" : 2.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"6184fe3068b51a7ad0cff7aa\",\n \"status\" : \"Closed\"\n },\n \"count\" : 3.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47e13aa59a81392928a91\",\n \"status\" : \"Pending\"\n },\n \"count\" : 128.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a475d8aa59a81392928a82\",\n \"status\" : \"Closed\"\n },\n \"count\" : 263.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47e69aa59a81392928a92\",\n \"status\" : \"Pending\"\n },\n \"count\" : 54.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47dc0aa59a81392928a90\",\n \"status\" : \"Closed\"\n },\n \"count\" : 160.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47dc0aa59a81392928a90\",\n \"status\" : \"Open\"\n },\n \"count\" : 31.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47468aa59a81392928a7e\",\n \"status\" : \"Open\"\n },\n \"count\" : 1.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4761faa59a81392928a83\",\n \"status\" : \"Pending\"\n },\n \"count\" : 135.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4761faa59a81392928a83\",\n \"status\" : \"Closed\"\n },\n \"count\" : 55.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47836aa59a81392928a8a\",\n \"status\" : \"Pending\"\n },\n \"count\" : 167.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47564aa59a81392928a81\",\n \"status\" : \"Pending\"\n },\n \"count\" : 186.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"619b71ec6ec78556320d8e27\",\n \"status\" : \"Closed\"\n },\n \"count\" : 68.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a474abaa59a81392928a7f\",\n \"status\" : \"Pending\"\n },\n \"count\" : 21.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47e13aa59a81392928a91\",\n \"status\" : \"Closed\"\n },\n \"count\" : 87.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a476e1aa59a81392928a86\",\n \"status\" : \"Closed\"\n },\n \"count\" : 14.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"6184fdcc68b51a7ad0cff7a9\",\n \"status\" : \"Closed\"\n },\n \"count\" : 55793.0\n },\n {\n \"_id\" : {\n \"agent_id\" : 
\"61a47cbeaa59a81392928a8e\",\n \"status\" : \"Pending\"\n },\n \"count\" : 159.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"619b71ec6ec78556320d8e27\",\n \"status\" : \"Pending\"\n },\n \"count\" : 194.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a477bfaa59a81392928a88\",\n \"status\" : \"Closed\"\n },\n \"count\" : 45.0\n }\n ],\n \"Priority_counts\" : [\n {\n \"_id\" : {\n \"agent_id\" : \"61a47e69aa59a81392928a92\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 54.0\n },\n {\n \"_id\" : {\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 202.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a475d8aa59a81392928a82\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 79.0\n },\n {\n \"_id\" : {\n \"Priority\" : \"High\"\n },\n \"count\" : 5.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47dc0aa59a81392928a90\",\n \"Priority\" : \"Low\"\n },\n \"count\" : 5.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47836aa59a81392928a8a\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 167.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"6184fdcc68b51a7ad0cff7a9\",\n \"Priority\" : \"High\"\n },\n \"count\" : 1.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4761faa59a81392928a83\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 135.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47dc0aa59a81392928a90\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 124.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47dc0aa59a81392928a90\",\n \"Priority\" : \"Medium\"\n },\n \"count\" : 1.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4533d67dacd4a141d195f\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 158.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47e13aa59a81392928a91\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 128.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4780baa59a81392928a89\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 158.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47c5aaa59a81392928a8d\",\n \"Priority\" : \"Medium\"\n },\n \"count\" : 1.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a8fcdc6e4376427f0ccb1b\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 2.0\n },\n {\n \"_id\" : {\n \"Priority\" : \"Medium\"\n },\n \"count\" : 7.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47cbeaa59a81392928a8e\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 159.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a474f7aa59a81392928a80\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 1.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a452f467dacd4a141d195e\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 87.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47c5aaa59a81392928a8d\",\n \"Priority\" : \"Low\"\n },\n \"count\" : 1.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47747aa59a81392928a87\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 5.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a477bfaa59a81392928a88\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 283.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4740aaa59a81392928a7d\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 88.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a0ad9b8fc5a9742c9cf619\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 2.0\n },\n {\n \"_id\" : {\n \"Priority\" : \"Low\"\n },\n \"count\" : 8.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a476e1aa59a81392928a86\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 193.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47696aa59a81392928a85\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 146.0\n },\n {\n \"_id\" : {\n \"agent_id\" : 
\"619b71ec6ec78556320d8e27\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 203.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a4732eaa59a81392928a7c\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 20.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47468aa59a81392928a7e\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 114.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a474abaa59a81392928a7f\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 23.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47659aa59a81392928a84\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 155.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47d13aa59a81392928a8f\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 154.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47564aa59a81392928a81\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 186.0\n },\n {\n \"_id\" : {\n \"agent_id\" : \"61a47c5aaa59a81392928a8d\",\n \"Priority\" : \"Escalated\"\n },\n \"count\" : 134.0\n }\n ]\n}\n\n>{\n \"agent_id\" : \"61a47c5aaa59a81392928a8d\",\n Total : 62, \n Open: 20 ,\n Pending: 12,\n Closed: 30,\n Low : 15,\n High: 10,\n Medium : 3,\n Escalated : 4 \n},\n{\n \"agent_id\" : \"61a47c5aaa59a81392928a8c\",\n Total : 62, \n Open: 20 ,\n Pending: 12,\n Closed: 30,\n Low : 15,\n High: 10,\n Medium : 3,\n Escalated : 4\n }\n",
"text": "Hi Team,\nI am so worried about getting individual agent counts from collections in that collections there is huge documents, each document has status , priority from this collection How to get each agent_id had how many cases are in the status of Open, Closed, Pending . And Priority has Low , Medium, High, Escalated counts (Which not in Closed status) of each individual agent_id … I was tried below queryGetting result from the above queryBut I am expecting Each Agent_id counts at one place…\nExpected Output:Please help me on this …",
"username": "Lokesh_Reddy1"
},
{
"code": "sourceagent_idsource",
"text": "sourceForgot change as agent_id insted of source",
"username": "Lokesh_Reddy1"
},
{
"code": "",
"text": "To get one document per agent_id you need a new $group with _id:$_id.agent_id. You would use the $push accumulator with { status:$_id.status , count:$count}.The format will not be exactly what you want but the data will grouped like you wish. You could use $project with $arrayToObject to get your final format. Personally, I do this type of data cosmetics on the application side.",
"username": "steevej"
}
]
| How to get individual counts when grouping two fields at the time? | 2022-09-09T06:54:29.177Z | How to get individual counts when grouping two fields at the time? | 1,510 |
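Spelling out steevej's suggestion from the thread above: append a second $group keyed on the agent alone and then reshape with $arrayToObject. A sketch of those extra stages, meant to follow the { _id: { agent_id, status }, count } grouping from the question, is below; $replaceWith requires MongoDB 4.2+.

```javascript
// Extra stages appended to the status_counts pipeline from the question.
[
  { $group: {
      _id: "$_id.agent_id",
      TOTAL: { $sum: "$count" },
      counts: { $push: { k: "$_id.status", v: "$count" } }
  } },
  { $replaceWith: { $mergeObjects: [
      { agent_id: "$_id", TOTAL: "$TOTAL" },
      { $arrayToObject: "$counts" }
  ] } }
]
```

The Priority_counts facet can be collapsed the same way by pushing { k: "$_id.Priority", v: "$count" } instead.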
null | [
"sharding",
"containers"
]
| [
{
"code": "",
"text": "Basically we had a mongodb sharded cluster setup and running inside docker container. After several weeks, the container used all host memory and caused other containers crashed. To counter this, we config its memory limit so that it used 1/6 the host ram’s capacity. I assume it would lower mongodb performance, or at least make the contaner crash quicker. But it seems to run fine this time. What is the reason behind this? Can someone explain? Thank you",
"username": "Nam_Le"
},
{
"code": "",
"text": "It shows a lot of warnings like these. Should I be worried?\n\nScreen Shot 2022-09-06 at 08.49.441875×539 19.1 KB\n",
"username": "Nam_Le"
},
{
"code": "currentOp",
"text": "Hi @Nam_Le and welcome to the community!!For better understanding of above, it would be helpful if you could share the following details:What difference do you observe in terms of performance by lowering the RAM utilised by the container.Are there any errors seen due to lowering the memory limit?For how long was the container working smoothly before the container crash happened?The MongoDB version you are usingMongoDB runtime setting files(e.g config files, command line options etc)The topology for the sharded deployment and if the shards are running on the same machine or different.After several weeks, the container used all host memory and caused other containers crashed.Could you also explain the above statement.we config its memory limit so that it used 1/6 the host ram’s capacity. I assume it would lower mongodb performance, or at least make the contaner crash quicker.Can you also confirm that after modifying down the memory limit, everything works well?Could you also confirm the origin of currentOp as this seems to be truncated in the logs screenshot shared. This warning implies that the output would be truncated in order to fit in 16MB document limit.P.S.: please refer to the following documentation on Resource Constraint in Docker for more understanding.Best regards\nAasawari",
"username": "Aasawari"
}
]
| What would happen if we set memory-limit to mongodb docker container? | 2022-09-05T11:22:43.238Z | What would happen if we set memory-limit to mongodb docker container? | 4,244 |
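A small check that fits the thread above: when mongod runs under a container memory limit, it is worth confirming what limit the process actually sees and how large the WiredTiger cache ended up. A mongosh sketch is below; field names can differ slightly between server versions, and memLimitMB is only reported when a cgroup limit is detected.

```javascript
// Memory the server detects (hostInfo reports the cgroup limit when present).
const sys = db.hostInfo().system;
print("memSizeMB:", sys.memSizeMB, "memLimitMB:", sys.memLimitMB);

// Size of the WiredTiger cache that was derived from (or configured for) it.
const cacheBytes = db.serverStatus().wiredTiger.cache["maximum bytes configured"];
print("WiredTiger cache GB:", cacheBytes / (1024 * 1024 * 1024));
```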
null | [
"atlas-search"
]
| [
{
"code": "gauss$search{\n \"dummy\": {\n \"type\": \"number\"\n }\n}\ngauss{\n \"function\": {\n \"gauss\": {\n \"path\": {\n \"value\": \"dummy\",\n \"undefined\": 1895\n },\n \"origin\": 295,\n \"scale\": 500\n }\n }\n}\n",
"text": "I’m looking for a way to get the gauss score of a constant value as part of a $search query.I’ve found a workaround where you can define a dummy field in the search index that will never be populated likeand then you pass that to your gauss function likeWhile this does work, it feels really hacky to do and I’m wondering if there’s a better, more “correct” way to do this.",
"username": "Lucas_Burns"
},
{
"code": "gauss{\n \"compound\": {\n \"must\": [\n {\n \"near\": {\n \"path\": \"fieldA\",\n \"origin\": 0,\n \"pivot\": 20,\n \"score\": {\n \"path\": {\n \"value\": \"fieldA\"\n }\n }\n }\n },\n {\n \"near\": {\n \"path\": \"fieldB\",\n \"origin\": 0,\n \"pivot\": 20,\n \"score\": {\n \"path\": {\n \"value\": \"fieldB\"\n }\n }\n }\n }\n ],\n \"score\": {\n \"function\": {\n \"gauss\": {\n /* uses relevance score from this block */\n }\n }\n }\n }\n}\ngauss",
"text": "Even better would be if I could feed the relevance score of a query into a gauss:which (in this case) is a really inelegant version of “add two numbers together and get the gauss of the sum” but this would also be a really nice thing to be able to do.",
"username": "Lucas_Burns"
},
{
"code": "gaussmustgaussgaussgauss",
"text": "Can you provide us more information on what you seek to accomplish with this query?What you describe, though I’m not completely sure I understood you correctly, sounds somewhat similar to how gauss works in Atlas Search. In the case of the first two must clauses, the contributions to the relevance score are added up. The gauss function will then impact the relevance score of those two docs from the point in the origin that you elect to pivot on and at the rate dictated by the decay function or gauss parameters.For more actionable assistance, please provide:Thanks for the question! I feel it bear a fruitful discussion.",
"username": "Marcus"
},
{
"code": "loglog1p",
"text": "Have you explored log and log1p?",
"username": "Marcus"
}
]
| Can Atlas gaussian score modifier accept arbitrary numeric inputs like log? | 2022-09-01T21:05:17.246Z | Can Atlas gaussian score modifier accept arbitrary numeric inputs like log? | 1,435 |
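Relating to the thread above: gauss only accepts a field path, which is why the dummy-field trick is needed, but the log/log1p expressions Marcus mentions do accept nested expressions, so a sum of fields can be scored without a dummy field. The snippet below is only an assumption-level illustration of that function-score syntax (fieldA and fieldB are placeholders), not a drop-in replacement for a gauss decay.

```javascript
// Hypothetical $search stage using a function score over the sum of two fields.
{
  $search: {
    compound: { /* ...must/should clauses... */ },
    score: {
      function: {
        log1p: {
          add: [
            { path: { value: "fieldA", undefined: 0 } },
            { path: { value: "fieldB", undefined: 0 } }
          ]
        }
      }
    }
  }
}
```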
[
"atlas-device-sync"
]
| [
{
"code": "false",
"text": "Hello.I have explicitly set the Read permission to False.\nAnd I also use a function that returns False.\nDev mode is OFF\nI also restarted sync.\nUsing single partition based sync.But the object is still being updated and the logs are all successful.\nI am using iOS Realm SDK for writing to the Realm.\nI will attach some images of my configuration.\nScreen Shot 2022-09-09 at 7.49.32 PM1262×1190 67 KB\n\n\nScreen Shot 2022-09-09 at 7.50.00 PM1350×1222 92.4 KB\n[Update1]I have also returned false for both Read and Write, but the sync still works normally.\nI get a print in logs regarding permissions, but the Object is Fetched and Updated normally.\n(I have checked the database, the objects are being updated)\nI am not sure what’s happening\nScreen Shot 2022-09-09 at 10.35.58 PM1348×2280 294 KB\n[Update 2]Well not it seems that Read is working if I do a clean iOS app install.\nIt will not fetch the object, so Read permission is denied.BUT, if R & W is allowed, I fetch the object, then I Write access is denied. (R=True. Write=False)\nThe user can still update the object even if Write is false.",
"username": "Georges_Jamous"
},
{
"code": "",
"text": "Well it looks like Sync does not call the permission script that often.\nSo, there was a bit of delay (about 6hours) before the permissions were revalidated.\nSo in the end, it worked.",
"username": "Georges_Jamous"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Sync permission not working as expected | 2022-09-09T18:00:02.423Z | Sync permission not working as expected | 1,612 |
|
[
"production",
"php"
]
| [
{
"code": "composer require mongodb/mongodb:1.13.1\nmongodb",
"text": "The PHP team is happy to announce that version 1.13.1 of the MongoDB PHP library is now available.Release HighlightsA complete list of resolved issues in this release may be found at:\nhttps://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12483&version=34481DocumentationDocumentation for this library may be found at:InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.",
"username": "jmikola"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB PHP Library 1.13.1 Released | 2022-09-09T23:30:30.704Z | MongoDB PHP Library 1.13.1 Released | 2,286 |
|
null | [
"aggregation",
"dot-net",
"change-streams"
]
| [
{
"code": "ongoDB.Driver.MongoCommandException: Command aggregate failed: PlanExecutor error during aggregation :: caused by :: BSONObj size: 29749890 (0x1C5F282) is invalid. Size must be between 0 and 16793600(16MB) First element: _id: { _data: \"82631AFEEF000000032B022C0100296E5A1004602205B9156343C2AADFC6E485D4A71446645F69640064631A0BEE6B2B2CADFE192D310004\" }.\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.ProcessResponse(ConnectionId connectionId, CommandMessage responseMessage)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.ExecuteAsync(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocolAsync[TResult](IWireProtocol`1 protocol, ICoreSession session, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadOperationExecutor.ExecuteAsync[TResult](IRetryableReadOperation`1 operation, RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.ReadCommandOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.AggregateOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.ChangeStreamOperation`1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\nResumeToken._data",
"text": "We have an Audit log implementation (C#) that uses a change stream with various filters to watch change & replace operations. The audit log uses pre images to generate a diff of an update & persist.While testing the resistance of the implementation I encounter a scenario that fails at the driver level.When updating a document to close to the 16mb Bson threshold, the update is accepted but the resulting change stream object surpasses the limit with the various addition fields included. This is resulting in the driver throwing an exception when attempting to process the message:As the change stream implementation is watching changes, we ideally do not want to miss any updates. We, therefore, use ResumeAfter with the ResumeToken._data field of the last successfully processed event. As the event cannot be deserialised by the driver, we can never get the bad messages resumption token to skip over it causing continuous failure.Are there any ways to get a change stream event resume token without loading the message? (Perhaps we could use an aggregate to project just the token?)",
"username": "Anthony_Halliday"
},
{
"code": "ChangeStreamDocument<T>_idcursor.GetResumeToken()$projectchange.ClusterTimeBsonTimestampChangeStreamOptions.StartAtOperationTimeBsonTimestampBsonTimestampIncrementTimestamp",
"text": "Hi, @Anthony_Halliday,I understand that you’re having a problem with returned ChangeStreamDocument<T> objects exceeding 16MB. This is a limitation of change streams because the returned document is a BSON document itself and must fit within the 16MB BSON limit. This limit can be encountered more frequently when requesting pre-/post-images of the affected document in the change stream.Since the change stream document cannot be parsed, you cannot access its _id, which is the resume token. You can however call cursor.GetResumeToken() even after the change stream throws.Note that the resume token returned is probably the change stream event that exceeded 16 MB. Thus you would need some additional logic to skip over the offending change stream document, restart the change stream without pre-/post-images, or include a pipeline that performs a $project to omit fields and reduce the change stream event to less than 16 MB.Another potential solution would involve tracking the last successful change.ClusterTime - which is a BsonTimestamp - and using that with ChangeStreamOptions.StartAtOperationTime after incrementing the BsonTimestamp enough to avoid the >16MB change stream document. BsonTimestamp consists of an Increment (monotonically increasing counter of operations within a single second) and a Timestamp (seconds since Unix epoch).Hopefully that provides you with some ideas of how to work around this issue.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to recover change stream from document that wont deserialize | 2022-09-09T09:08:38.224Z | How to recover change stream from document that wont deserialize | 2,379 |
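Translated to shell terms, the two workarounds James describes in the thread above look roughly like the sketch below. Here lastToken stands for the token the application captured before the oversized event (for example via the cursor's resume token after the failure), and the audited collection name is a placeholder.

```javascript
// Re-open the stream but project the event down to the lightweight fields, so
// an oversized document can no longer push the event past the 16 MB BSON limit.
const cursor = db.audited.watch(
  [
    { $project: { operationType: 1, documentKey: 1, clusterTime: 1, updateDescription: 1 } }
  ],
  { resumeAfter: lastToken }
  // Alternatively, track the last good event's clusterTime and pass a slightly
  // later Timestamp via startAtOperationTime to step over the bad event.
);
```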
null | [
"backup"
]
| [
{
"code": "",
"text": "I searched on YT on how to backup MongoDB data, and I had a Dev tell me that it’s not efficient backing up a dataset that is large but he doesn’t know the set number, I was wondering what will be the “limit”. The reason why I’m backing up data is bc the VPS is unstable and I might lose data but safe than sorry.",
"username": "Shaughn_De_Sousa"
},
{
"code": "mongodumpmongorestoremongodumpmongorestoremongodmongodumpmongorestore",
"text": "Welcome to the MongoDB Community @Shaughn_De_Sousa !A backup strategy (which includes testing the restore process) is essential if your data is important. You can use any of the MongoDB Backup Methods applicable to your deployment type and MongoDB server version.I suspect the inefficient backup approach was referring to mongodump and mongorestore. Per the Performance Consderations section of the documentation for Backup and Restore with MongoDB Tools:Because mongodump and mongorestore operate by interacting with a running mongod instance, they can impact the performance of your running database. Not only do the tools create traffic for a running database instance, they also force the database to read all data through memory. When MongoDB reads infrequently used data, it can evict more frequently accessed data, causing a deterioration in performance for the database’s regular workload.I was wondering what will be the “limit”There is no specific limit, but if your data is significantly larger than available memory there will be more overhead for reading data with a mongodump backup and more time to recreate your deployment with mongorestore. The timing and performance impact of backups will vary depending on your backup approach, deployment resources, workload, and backup frequency.The MongoDB server documentation includes considerations and procedures for supported backup methods.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I see thank you for the response, since I’m quite new to the backuping MongoDB Data, I was hoping to back data up via code itself and set a regular interval around everyday and back data up, I’ll check the MongoDB backup methods themselves and see if I can understand them and use it to my advantage!",
"username": "Shaughn_De_Sousa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Backingup MongoDB Data "Limit" | 2022-09-07T21:48:07.566Z | Backingup MongoDB Data “Limit” | 1,570 |
[
"queries",
"atlas-cluster"
]
| [
{
"code": "ATLAS_URI=mongodb+srv://allyson:*********@cluster0.87dg5hc.mongodb.net/?retryWrites=true&w=majority\nPORT=4000\nconst express = require(\"express\");\n\n// recordRoutes is an instance of the express router.\n// We use it to define our routes.\n// The router will be added as a middleware and will take control of requests starting with path /plants.\nconst plantRoutes = express.Router();\n\n// This will help us connect to the database\nconst dbo = require(\"../db/conn\");\n\n// This help convert the id from string to ObjectId for the _id.\nconst ObjectId = require(\"mongodb\").ObjectId;\n\n// This section will help you get a list of all the records.\nplantRoutes.route(\"/plant\").get(function (req, res) {\n let db_connect = dbo.getDb(\"plant-babies-data\");\n db_connect\n .collection(\"plants\")\n .find({})\n .toArray(function (err, result) {\n if (err) throw err;\n res.json(result);\n });\n});\n",
"text": "Hi, I’m trying out my first MERN app and I’m running into trouble. I’ve been following the MongoDB documentation to set up my project and for the routes. The server is running and I’m connected to MongoDB but when I go to browser (http://localhost:4000/plant) to test that I’m able to get the data I just get an empty array. So I’m thinking I’m using an incorrect database or collection name or something? I would love an extra set of eyes if possible.\nScreen Shot 2022-09-08 at 11.34.51 AM1191×860 95.2 KB\nAttached is a pic of my database page and I used the given connection URI to my cluster:Here is a section of my routes file:Thanks so much for any help!!",
"username": "Allyson_Smith"
},
{
"code": "",
"text": "I think I found my error- I had used the collection name instead of the database name when connecting to MongoDB. I can now see all my data objects in the browser.",
"username": "Allyson_Smith"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Getting empty array in browser when testing database connection | 2022-09-08T15:56:39.847Z | Getting empty array in browser when testing database connection | 3,365 |
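The fix in the thread above was passing the database name, not the collection name, when opening the connection. A minimal Node.js-style sketch of that distinction is below; the database and collection names follow the snippet in the thread, while listPlants is just a hypothetical helper.

```javascript
const { MongoClient } = require("mongodb");

async function listPlants() {
  const client = new MongoClient(process.env.ATLAS_URI);
  await client.connect();

  const db = client.db("plant-babies-data"); // database name
  return db.collection("plants")             // collection name
    .find({})
    .toArray();
}
```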
|
[
"aggregation"
]
| [
{
"code": "db.provisioningOrderInfo.aggregate([\n\t\t{$match:{$and : [{\"tenantId\":\"XXXXXX\"},{\"orderDate\":{$gte:\"2022-08-03\",$lte:\"2022-08-25\"}}]}},\n\t\t{$group:{\n\t\t\t\"_id\" :\"$orderID\",\n\t\t\t\"lastDoc\" : { \"$last\" : \"$$ROOT\"}\n\t\t}},\n\t\t{$group:{\n\t\t\t\"_id\":null,\n\t\t\t\"TOTAL_ORDERS\" : {$sum:1},\n\t\t\t\"SUCCESS\":{\"$sum\": { \"$cond\":[ { \"$eq\": [\"$lastDoc.orderStatus\", \"Success\"] } , 1, 0 ] }},\n\t\t\t\"FAILED\":{\"$sum\": { \"$cond\":[ { \"$eq\": [\"$lastDoc.orderStatus\", \"Failed\"] } , 1, 0 ] }},\n\t\t\t\"PARTIALLYFULFILLED\":{\"$sum\": { \"$cond\":[ { \"$eq\": [\"$lastDoc.orderStatus\", \"PartiallyFulfilled\"] } , 1, 0 ] }},\n\t\t\t\"PENDING\":{\"$sum\": { \"$cond\":[ { \"$eq\": [\"$lastDoc.orderStatus\", \"Pending\"] } , 1, 0 ] }}\n\t\t\t}}\n])\n",
"text": "\nScreenshot from 2022-09-08 18-10-50834×325 70.5 KB\nHI Guys… Following is the usecase for which I have to write a query. I’m actually not able to get proper approach or Im not sure even if we can achieve. Kindly help with any possible approach for the usecase.From the screenshot, Same OrderID is having multiple orderNumbers and their corresponding orderStatus.The screenshot has data for only one orderID. But the actual data will have multiple orderID also.First I want to group for each orderID. Within each orderID, I want to group by orderNumber and from each orderNumber, I want to pull the last record and generate the count for each status.However I was able to achieve grouping with just ordeID as shown belowThis is giving me the response properly. However, Im getting confused on introducing the second group by i.e, group by orderNumber.Kindly help me if this kind of usecase can be tackled by aggregation query ?",
"username": "Dilip_D"
},
{
"code": "",
"text": "Please provide your sample documents in textual JSON format so that we can cut-n-paste into our system.",
"username": "steevej"
},
{
"code": "",
"text": "In addition to the sample docs that Steeve asked for, we would need to know what the expected output should look like. This saves time with going back and forth tweaking the query.",
"username": "Doug_Duncan"
}
]
| Aggreagation with multiple groups stage | 2022-09-08T12:56:25.661Z | Aggreagation with multiple groups stage | 1,074 |
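For the thread above, the second grouping the poster asks about can be layered on top of the existing one by first taking the last record per (orderID, orderNumber) and then rolling those up per orderID. A sketch follows; it reuses the field names from the question, assumes orderDate sorts correctly as stored, and shows only the status counts (the same pattern extends to the other facets).

```javascript
db.provisioningOrderInfo.aggregate([
  { $match: { tenantId: "XXXXXX",
              orderDate: { $gte: "2022-08-03", $lte: "2022-08-25" } } },
  { $sort: { orderDate: 1 } },
  // last record per orderID + orderNumber pair
  { $group: { _id: { orderID: "$orderID", orderNumber: "$orderNumber" },
              lastDoc: { $last: "$$ROOT" } } },
  // one result document per orderID, counting the latest status per orderNumber
  { $group: {
      _id: "$_id.orderID",
      TOTAL_ORDERS: { $sum: 1 },
      SUCCESS: { $sum: { $cond: [{ $eq: ["$lastDoc.orderStatus", "Success"] }, 1, 0] } },
      FAILED:  { $sum: { $cond: [{ $eq: ["$lastDoc.orderStatus", "Failed"] }, 1, 0] } },
      PARTIALLYFULFILLED: { $sum: { $cond: [{ $eq: ["$lastDoc.orderStatus", "PartiallyFulfilled"] }, 1, 0] } },
      PENDING: { $sum: { $cond: [{ $eq: ["$lastDoc.orderStatus", "Pending"] }, 1, 0] } }
  } }
])
```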
|
null | [
"aggregation"
]
| [
{
"code": "db.temp_log.aggregate([{$match: {\n $and: [\n {\n $or: [\n {\n sId: {\n $in: [\n 'A',\n 'B',\n 'C',\n 'E'\n ]\n }\n },\n {\n tmp: {\n $gte: 10\n }\n },\n {\n temperatureLogTime: {\n $gte: ISODate('2022-07-01T00:00:00.000Z')\n }\n },\n {\n temperatureLogTime: {\n $lte: ISODate('2022-07-31T00:00:00.000Z')\n }\n },\n {\n sId: {\n $in: [\n 'D'\n ]\n }\n },\n {\n tmp: {\n $gte: 50\n }\n },\n {\n temperatureLogTime: {\n $gte: ISODate('2022-07-01T00:00:00.000Z')\n }\n },\n {\n temperatureLogTime: {\n $lte: ISODate('2022-07-31T00:00:00.000Z')\n }\n }\n ]\n }\n ]\n}}, {$group: {\n _id: {\n sId: '$sId'\n },\n count: {\n $push: '$tmp'\n }\n}}])\n**1)**\n1. id\n\n:\n\nObject\n\n 1. sId\n\n:\n\n\"D\"\n2. count\n\n:\n\nArray\n\n 1. 0\n\n:\n\n-124.255\n\n 2. 1\n\n:\n\n-126.255\n\n 3. 2\n\n:\n\n-124.255\n\n 4. 3\n\n:\n\n-126.255\n\n 5. 4\n\n:\n\n-126.255\n\n 6. 5\n\n:\n\n-126.255\n\n 7. 6\n\n:\n\n-126.255\n\n 8. 7\n\n:\n\n-126.255\n\n 9. 8\n\n:\n\n-124.255\n\n 10. 9\n\n:\n\n-95.119\n\n 11. 10\n\n:\n\n-126.255\n\n**2)**\n\n1. _id\n\n:\n\nObject\n\n 1. sId\n\n:\n\n\"B\"\n2. count\n\n:\n\nArray\n\n 1. 0\n\n:\n\n-126.255\n\n 2. 1\n\n:\n\n45\n\n 3. 2\n\n:\n\n45\n\n 4. 3\n\n:\n\n45\n\n 5. 4\n\n:\n\n45\n\n 6. 5\n\n:\n\n45\n\n 7. 6\n\n:\n\n45\n\n 8. 7\n\n:\n\n45\n\n 9. 8\n\n:\n\n45\n\n 10. 9\n\n:\n\n45\n\n 11. 10\n\n:\n\n45\n",
"text": "Hi ,\nI am using the following query to get field sId:“A”,“B”,“C”,“E” which contains tmp:$gte:10.0, and the field sId:“D”, which contains the tmp:$gte:50.0, from the same collection. i want to get the sId:a,b,c,e and there tmp values are greater than 10.0 and sid:d and the tmp value are greater than 50.0 in the same object. Following queryey i am executing to get the results, but not getting accurate results, Kindly do needfull in this matter.Result Set:",
"username": "MERUGUPALA_RAMES"
},
{
"code": "",
"text": "Hi @MERUGUPALA_RAMES can you share some documents that we can work with while testing? Also it would be nice to see the desired output you are expecting.",
"username": "Doug_Duncan"
},
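A note on the question above while sample data is being gathered: the flat $or in the posted query lists all eight conditions side by side, so matching any single one is enough and the per-sensor thresholds never apply together. Grouping the conditions as two AND branches inside the $or expresses the stated intent; this sketch reuses the field names from the sample documents.

```javascript
db.temp_log.aggregate([
  { $match: {
      temperatureLogTime: { $gte: ISODate("2022-07-01T00:00:00Z"),
                            $lte: ISODate("2022-07-31T00:00:00Z") },
      $or: [
        { sId: { $in: ["A", "B", "C", "E"] }, tmp: { $gte: 10 } },
        { sId: "D", tmp: { $gte: 50 } }
      ]
  } },
  { $group: { _id: "$sId", readings: { $push: "$tmp" } } }
])
```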
{
"code": "/* 1 */\n{\n \"_id\" : ObjectId(\"61ea5d6e2bf8817538c81cfc\"), \n \"sId\" : \"B\",\n \"time\" : NumberLong(1642748640),\n \"typ\" : 0,\n \"tmp\" : 28.2,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:04:00.000Z\"),\n \"temperatureLogid\" : \"61ea5d6e2bf8817538c81cfb\",\n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 2 */\n{\n \"_id\" : ObjectId(\"61ea5d6e2bf8817538c81cfd\"),\n \"sId\" : \"C\",\n \"time\" : NumberLong(1642748640),\n \"typ\" : 0,\n \"tmp\" : 28.4,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:04:00.000Z\"),\n \"temperatureLogid\" : \"61ea5d6e2bf8817538c81cfb\",\n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 3 */\n{\n \"_id\" : ObjectId(\"61ea5d6e2bf8817538c81cfe\"), \n \"sId\" : \"D\",\n \"time\" : NumberLong(1642748640),\n \"typ\" : 0,\n \"tmp\" : 28.4,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:04:00.000Z\"),\n \"temperatureLogid\" : \"61ea5d6e2bf8817538c81cfb\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 4 */\n{\n \"_id\" : ObjectId(\"61ea5d6e2bf8817538c81cff\"), \n \"sId\" : \"E\",\n \"time\" : NumberLong(1642748640),\n \"typ\" : 0,\n \"tmp\" : 28.5,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:04:00.000Z\"),\n \"temperatureLogid\" : \"61ea5d6e2bf8817538c81cfb\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 5 */\n{\n \"_id\" : ObjectId(\"61ea5d6e2bf8817538c81d00\"), \n \"sId\" : \"B\",\n \"time\" : NumberLong(1642748700),\n \"typ\" : 0,\n \"tmp\" : 28.2,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:05:00.000Z\"),\n \"temperatureLogid\" : \"61ea5d6e2bf8817538c81cfb\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 6 */\n{\n \"_id\" : ObjectId(\"61ea5d6e2bf8817538c81d01\"), \n \"sId\" : \"C\",\n \"time\" : NumberLong(1642748700),\n \"typ\" : 0,\n \"tmp\" : 28.4,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:05:00.000Z\"),\n \"temperatureLogid\" : \"61ea5d6e2bf8817538c81cfb\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 7 */\n{\n \"_id\" : ObjectId(\"61ea5d6e2bf8817538c81d02\"), \n \"sId\" : \"D\",\n \"time\" : NumberLong(1642748700),\n \"typ\" : 0,\n \"tmp\" : 28.4,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:05:00.000Z\"),\n \"temperatureLogid\" : \"61ea5d6e2bf8817538c81cfb\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 8 */\n{\n \"_id\" : ObjectId(\"61ea5d6e2bf8817538c81d03\"), \n \"sId\" : \"E\",\n \"time\" : NumberLong(1642748700),\n \"typ\" : 0,\n \"tmp\" : 28.5,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:05:00.000Z\"),\n \"temperatureLogid\" : \"61ea5d6e2bf8817538c81cfb\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 9 */\n{\n \"_id\" : ObjectId(\"61ea5dc72bf8817538c81d05\"), \n \"sId\" : \"B\",\n \"time\" : NumberLong(1642748760),\n \"typ\" : 0,\n \"tmp\" : 28.2,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:06:00.000Z\"),\n \"temperatureLogid\" : \"61ea5dc72bf8817538c81d04\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 10 */\n{\n \"_id\" : ObjectId(\"61ea5dc72bf8817538c81d06\"), \n \"sId\" : \"C\",\n \"time\" : NumberLong(1642748760),\n \"typ\" : 0,\n \"tmp\" : 28.4,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:06:00.000Z\"),\n \"temperatureLogid\" : \"61ea5dc72bf8817538c81d04\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 11 */\n{\n \"_id\" : ObjectId(\"61ea5dc72bf8817538c81d07\"), \n \"sId\" : \"D\",\n \"time\" : 
NumberLong(1642748760),\n \"typ\" : 0,\n \"tmp\" : 28.4,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:06:00.000Z\"),\n \"temperatureLogid\" : \"61ea5dc72bf8817538c81d04\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 12 */\n{\n \"_id\" : ObjectId(\"61ea5dc72bf8817538c81d08\"), \n \"sId\" : \"E\",\n \"time\" : NumberLong(1642748760),\n \"typ\" : 0,\n \"tmp\" : 28.5,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:06:00.000Z\"),\n \"temperatureLogid\" : \"61ea5dc72bf8817538c81d04\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 13 */\n{\n \"_id\" : ObjectId(\"61ea5dc72bf8817538c81d09\"), \n \"sId\" : \"B\",\n \"time\" : NumberLong(1642748820),\n \"typ\" : 0,\n \"tmp\" : 28.2,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:07:00.000Z\"),\n \"temperatureLogid\" : \"61ea5dc72bf8817538c81d04\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 14 */\n{\n \"_id\" : ObjectId(\"61ea5dc72bf8817538c81d0a\"), \n \"sId\" : \"C\",\n \"time\" : NumberLong(1642748820),\n \"typ\" : 0,\n \"tmp\" : 28.4,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:07:00.000Z\"),\n \"temperatureLogid\" : \"61ea5dc72bf8817538c81d04\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 15 */\n{\n \"_id\" : ObjectId(\"61ea5dc72bf8817538c81d0b\"), \n \"sId\" : \"D\",\n \"time\" : NumberLong(1642748820),\n \"typ\" : 0,\n \"tmp\" : 28.4,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:07:00.000Z\"),\n \"temperatureLogid\" : \"61ea5dc72bf8817538c81d04\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 16 */\n{\n \"_id\" : ObjectId(\"61ea5dc72bf8817538c81d0c\"), \n \"sId\" : \"E\",\n \"time\" : NumberLong(1642748820),\n \"typ\" : 0,\n \"tmp\" : 28.5,\n \"temperatureLogTime\" : ISODate(\"2022-01-21T07:07:00.000Z\"),\n \"temperatureLogid\" : \"61ea5dc72bf8817538c81d04\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 17 */\n{\n \"_id\" : ObjectId(\"61e67d390fcdd25fa73aa080\"), \n \"sId\" : \"A\",\n \"time\" : NumberLong(1642486290),\n \"typ\" : 0,\n \"tmp\" : 7.0,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:11:30.000Z\"),\n \"temperatureLogid\" : \"61e67d390fcdd25fa73aa07f\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 18 */\n{\n \"_id\" : ObjectId(\"61e67d390fcdd25fa73aa081\"), \n \"sId\" : \"C\",\n \"time\" : NumberLong(1642486290),\n \"typ\" : 0,\n \"tmp\" : 25.9,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:11:30.000Z\"),\n \"temperatureLogid\" : \"61e67d390fcdd25fa73aa07f\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 19 */\n{\n \"_id\" : ObjectId(\"61e67d390fcdd25fa73aa082\"), \n \"sId\" : \"D\",\n \"time\" : NumberLong(1642486290),\n \"typ\" : 0,\n \"tmp\" : 26.3,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:11:30.000Z\"),\n \"temperatureLogid\" : \"61e67d390fcdd25fa73aa07f\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 20 */\n{\n \"_id\" : ObjectId(\"61e67d390fcdd25fa73aa083\"), \n \"sId\" : \"E\",\n \"time\" : NumberLong(1642486290),\n \"typ\" : 0,\n \"tmp\" : 26.0,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:11:30.000Z\"),\n \"temperatureLogid\" : \"61e67d390fcdd25fa73aa07f\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 21 */\n{\n \"_id\" : ObjectId(\"61e67d470fcdd25fa73aa085\"), \n \"sId\" : \"A\",\n \"time\" : NumberLong(1642486451),\n \"typ\" : 0,\n \"tmp\" : 7.0,\n \"temperatureLogTime\" : 
ISODate(\"2022-01-18T06:14:11.000Z\"),\n \"temperatureLogid\" : \"61e67d470fcdd25fa73aa084\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 22 */\n{\n \"_id\" : ObjectId(\"61e67d470fcdd25fa73aa086\"), \n \"sId\" : \"C\",\n \"time\" : NumberLong(1642486451),\n \"typ\" : 0,\n \"tmp\" : 25.9,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:14:11.000Z\"),\n \"temperatureLogid\" : \"61e67d470fcdd25fa73aa084\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 23 */\n{\n \"_id\" : ObjectId(\"61e67d470fcdd25fa73aa087\"), \n \"sId\" : \"D\",\n \"time\" : NumberLong(1642486451),\n \"typ\" : 0,\n \"tmp\" : 26.3,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:14:11.000Z\"),\n \"temperatureLogid\" : \"61e67d470fcdd25fa73aa084\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 24 */\n{\n \"_id\" : ObjectId(\"61e67d470fcdd25fa73aa088\"), \n \"sId\" : \"E\",\n \"time\" : NumberLong(1642486451),\n \"typ\" : 0,\n \"tmp\" : 26.0,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:14:11.000Z\"),\n \"temperatureLogid\" : \"61e67d470fcdd25fa73aa084\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 25 */\n{\n \"_id\" : ObjectId(\"61e67d530fcdd25fa73aa08a\"), \n \"sId\" : \"A\",\n \"time\" : NumberLong(1642486610),\n \"typ\" : 0,\n \"tmp\" : 26.0,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:16:50.000Z\"),\n \"temperatureLogid\" : \"61e67d530fcdd25fa73aa089\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 26 */\n{\n \"_id\" : ObjectId(\"61e67d530fcdd25fa73aa08b\"), \n \"sId\" : \"C\",\n \"time\" : NumberLong(1642486610),\n \"typ\" : 0,\n \"tmp\" : 25.9,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:16:50.000Z\"),\n \"temperatureLogid\" : \"61e67d530fcdd25fa73aa089\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 27 */\n{\n \"_id\" : ObjectId(\"61e67d530fcdd25fa73aa08c\"), \n \"sId\" : \"D\",\n \"time\" : NumberLong(1642486610),\n \"typ\" : 0,\n \"tmp\" : 26.3,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:16:50.000Z\"),\n \"temperatureLogid\" : \"61e67d530fcdd25fa73aa089\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 28 */\n{\n \"_id\" : ObjectId(\"61e67d530fcdd25fa73aa08d\"), \n \"sId\" : \"E\",\n \"time\" : NumberLong(1642486610),\n \"typ\" : 0,\n \"tmp\" : 26.0,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:16:50.000Z\"),\n \"temperatureLogid\" : \"61e67d530fcdd25fa73aa089\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 29 */\n{\n \"_id\" : ObjectId(\"61e67d600fcdd25fa73aa08f\"), \n \"sId\" : \"A\",\n \"time\" : NumberLong(1642486774),\n \"typ\" : 0,\n \"tmp\" : 26.0,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:19:34.000Z\"),\n \"temperatureLogid\" : \"61e67d600fcdd25fa73aa08e\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n\n/* 30 */\n{\n \"_id\" : ObjectId(\"61e67d600fcdd25fa73aa090\"), \n \"sId\" : \"C\",\n \"time\" : NumberLong(1642486774),\n \"typ\" : 0,\n \"tmp\" : 25.9,\n \"temperatureLogTime\" : ISODate(\"2022-01-18T06:19:34.000Z\"),\n \"temperatureLogid\" : \"61e67d600fcdd25fa73aa08e\", \n \"_class\" : \"com.dipl.assets.entity.TemperatureLogDetails\"\n}\n",
"text": "Hi Doug_Duncan,\ni am having a data in this format in my collectionWhere my requirement is to get the records which contains the fields for “sId”:[“A”,“B”,“C”,“E”] and there field “tmp” values should be greater than 10.0 and for “sId”:“D” the “tmp” values which contains greater than 50.0. In the above aggregation query i am able to get the results but it is not accurage, For “sId”:“A”,“B”,“C”,“E” the tmp values i am able to get minus values and for “sId”:“D” also getting the minus values in the Result set. I hope this will helpfull to understand my requirement.",
"username": "MERUGUPALA_RAMES"
},
{
"code": "db.temp_log.aggregate(\n [\n {\n $match: {\n $and: [{\n temperatureLogTime: {\n $gte: ISODate('2022-01-19T00:00:00.000Z'),\n $lte: ISODate('2022-01-29T00:00:00.000Z')\n }\n },\n {\n $or: [\n {\n sId: {\n $in: ['A', 'B', 'C', 'E']\n },\n tmp: {\n $gte: 10\n },\n },\n {\n sId: 'D',\n tmp: {\n $gte: 28\n },\n }\n ]\n }\n ]\n }\n },\n {\n $group: {\n _id: {\n sId: '$sId'\n },\n count: {\n $push: '$tmp'\n }\n }\n }\n ]\n)\n[\n { _id: { sId: 'B' }, count: [ 28.2, 28.2, 28.2, 28.2 ] },\n { _id: { sId: 'C' }, count: [ 28.4, 28.4, 28.4, 28.4 ] },\n { _id: { sId: 'E' }, count: [ 28.5, 28.5, 28.5, 28.5 ] },\n { _id: { sId: 'D' }, count: [ 28.4, 28.4, 28.4, 28.4 ] }\n]\ntmp",
"text": "Give this a try:The above returns the following results:Note I modified the dates and tmp values to fit the sample data set you provided (thanks for giving a lot of data to test with).Let us know if you have any questions.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Thanks for the Update Doug_Duncan,\ni had executed the above query which you have provided, but unfortunatelly i am getting only “sId”:“D” and their tmp values in the result set. Kindly provide me another approach if possible, so that it will helpfull for me to get the accurate rusults.",
"username": "MERUGUPALA_RAMES"
},
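One way to narrow down why only “sId”:“D” comes back is to count each branch of the $or on its own. This is a minimal mongosh sketch assuming the same collection name (temp_log), date range and thresholds used in the query above; if one count is 0, that branch's filter is the one excluding the documents.

```js
// Branch 1: sensors A, B, C, E with tmp >= 10 in the date range
db.temp_log.countDocuments({
  temperatureLogTime: {
    $gte: ISODate("2022-01-19T00:00:00.000Z"),
    $lte: ISODate("2022-01-29T00:00:00.000Z")
  },
  sId: { $in: ["A", "B", "C", "E"] },
  tmp: { $gte: 10 }
})

// Branch 2: sensor D with tmp >= 28 in the same date range
db.temp_log.countDocuments({
  temperatureLogTime: {
    $gte: ISODate("2022-01-19T00:00:00.000Z"),
    $lte: ISODate("2022-01-29T00:00:00.000Z")
  },
  sId: "D",
  tmp: { $gte: 28 }
})
```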
{
"code": "let letters = 'ABCDEFG';\nlet startDate = new Date('2022-01-01T00:00:00.000');\nlet maxTemp = 0;\nlet msinday = 24 * 60 * 60 * 1000\n\nfor (let i = 0; i < 10000; i++) {\n\tlet sid = letters.charAt(Math.floor(Math.random() * letters.length));\n\tif (sid == 'D') {\n\t\tmaxTemp = 75;\n\t}\n\telse {\n\t\tmaxTemp = 15;\n\t}\n\t\n\tdb.temp_log.insert({\n\t\t\"sId\": sid,\n\t\t\"tmp\": Math.floor(Math.random() * maxTemp),\n\t\t\"temperatureLogTime\": new Date(+startDate + Math.floor(Math.random() * 365) * msinday)\n\t})\n}\n[\n {\n _id: { sId: 'A' },\n count: [\n 14, 12, 11, 13, 13, 10, 13, 13, 11,\n 10, 14, 14, 12, 10, 10, 10, 14, 13,\n 11, 14, 12, 12, 14, 12, 11, 11, 14,\n 14, 14, 11, 14, 14, 11, 11, 14, 14\n ]\n },\n {\n _id: { sId: 'C' },\n count: [\n 14, 10, 12, 12, 12, 12, 14, 11, 10,\n 12, 11, 12, 12, 13, 13, 14, 14, 14,\n 11, 14, 14, 11, 12, 10, 13, 14, 10,\n 11, 11, 11, 11, 14, 11\n ]\n },\n {\n _id: { sId: 'B' },\n count: [\n 13, 14, 13, 10, 10, 11, 10, 13, 11, 13,\n 14, 14, 13, 11, 13, 11, 12, 10, 11, 11,\n 12, 11, 10, 13, 13, 14, 11, 11, 13, 14,\n 12, 13, 14, 10, 10, 11, 11\n ]\n },\n {\n _id: { sId: 'D' },\n count: [\n 55, 66, 56, 58, 66, 71, 64, 64, 58, 54,\n 72, 69, 63, 67, 53, 51, 65, 56, 62, 71,\n 63, 67, 63, 60, 62, 73, 59, 71, 51, 61,\n 72, 73, 50, 50, 74, 55, 73, 61, 73, 65\n ]\n },\n {\n _id: { sId: 'E' },\n count: [\n 10, 12, 13, 14, 11, 14, 12, 14, 12,\n 10, 10, 14, 12, 14, 13, 10, 10, 11,\n 12, 10, 10, 11, 14, 12, 12, 12, 14,\n 10, 14, 11, 13, 11, 10, 10, 12, 11\n ]\n }\n]\n",
"text": "I am not sure why the code does not work for you. Have you done any work to troubleshoot it? The code I provided worked on the test set you gave me as evidenced by the results I provided.Kindly provide me another approach if possible, so that it will helpfull for me to get the accurate rusults.I am just a community member like you with a job, family and friends that take up most of my time. I try to spend some of the free time I have left over trying to help fellow MongoDB users out with their issues, but I don’t have unlimited time to solve all their problems.To give me a bigger test set, I ran the following Javascript code that put in 10,000 documents with random values:I then ran the code I sent yesterday against this larger set and I got the following results:This tells me that based on the parameters you provided that things are working from what I can see.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Need to get All sId:"A","B","C","E", tmp:$gte:10 and sId:"D",tmp:$gte:50 from the same collection | 2022-09-08T05:12:02.175Z | Need to get All sId:”A”,”B”,”C”,”E”, tmp:$gte:10 and sId:”D”,tmp:$gte:50 from the same collection | 1,702 |
null | [
"change-streams"
]
| [
{
"code": "",
"text": "I’m listening to a collection on mongoDB using change streams. I update a document in the collection using the $set query. However, all the arguments in the query don’t actually update the document (some of them just try to set the same value as in the existing document). Now, when I receive the change event for this change in my change stream service, will I see all the arguments I have passed in the query in the updated fields, or will I only see the fields that have actually been modified as a result of executing the query?",
"username": "Siddarth_Sukameti"
},
{
"code": "",
"text": "Since you alreadylistening to a collectionyou propably can answer your own question by printing on the listening side.But what I can say, is that change streams uses the replication oplog. Hopefully, it is optimized NOT to send unchanged data for replication purpose. My guess is that you do not get unchanged fields. You may ask for the whole document if you absolutly need the unchanged fields.",
"username": "steevej"
}
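A small Node.js sketch of the suggestion above, requesting the full post-update document from a change stream; the connection string, database and collection names are placeholders.

```js
const { MongoClient } = require("mongodb");

async function watchItems() {
  const client = await MongoClient.connect("mongodb://localhost:27017"); // placeholder URI
  const collection = client.db("test").collection("items");              // placeholder names

  // updateDescription.updatedFields only contains what was recorded for the update,
  // while fullDocument: "updateLookup" also fetches the current complete document.
  const changeStream = collection.watch([], { fullDocument: "updateLookup" });

  changeStream.on("change", (event) => {
    if (event.operationType === "update") {
      console.log("updated fields:", event.updateDescription.updatedFields);
      console.log("full document :", event.fullDocument);
    }
  });
}
```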
]
| Update Description in mongodb 5.0 change events | 2022-09-08T12:37:36.642Z | Update Description in mongodb 5.0 change events | 1,854 |
null | [
"aggregation"
]
| [
{
"code": "cument for example: \ndevice: \nid: \"7863\",\ndeviceSN: \"378623\", \n\n \"gameSessions\": [\n {\n \"deviceSN\": \"180322905602120\",\n \"endTime\": 1633520201,\n \"gameId\": \"5cf6426984c42d6c2775b6be\",\n \"gameName\": \"Lumberjacks\",\n \"playmode\": \"Wall\",\n \"gameEvents\": [],\n \"points\": [\n {\n \"count\": 134,\n \"GridColumn\": 0,\n \"GridRow\": 0\n },\n {\n \"count\": 29,\n \"GridColumn\": 1,\n \"GridRow\": 0\n },\n {\n \"count\": 24,\n \"GridColumn\": 2,\n \"GridRow\": 0\n },\n {\n \"count\": 0,\n \"GridColumn\": 0,\n \"GridRow\": 1\n },\n {\n \"count\": 88,\n \"GridColumn\": 1,\n \"GridRow\": 1\n },\n {\n \"count\": 44,\n \"GridColumn\": 2,\n \"GridRow\": 1\n },\n {\n \"count\": 0,\n \"GridColumn\": 0,\n \"GridRow\": 2\n },\n {\n \"count\": 14,\n \"GridColumn\": 1,\n \"GridRow\": 2\n },\n {\n \"count\": 3,\n \"GridColumn\": 2,\n \"GridRow\": 2\n }\n ],\n \"profile\": {\n \"name\": \"General\",\n \"uuid\": \"\"\n },\n \"startTime\": 1633519873\n }\n}\n\n\n.aggregate([\n { $addFields: { \n PreviousDate: { $subtract: [ new Date(), (1000*60*60*24*30) ] } \n }\n },\n {$match:{ deviceSN: \"180733444101861\",\n createdAt:{$gte:new Date(\"07-08-2021\") , $lt: new Date(\"$PreviousDate\")}}},\n { $unwind: \"$gameSessions\" },\n { $group:\n {\n _id: \"$gameSessions.gameId\", // Group key\n hoursPlayed: {$sum: { $dateDiff:{ \n startDate: '$startTime',\n endDate: '$endTime',\n unit: 'hour'}}},\n }\n }\n\n ]).toArray()\n\n",
"text": "Hi there,\nI am trying to create an aggregation to sum actual play time from documents that have a startTime and an endTime.the data is inside an array inside a document.what makes it tricky is that I am calculating sum of game Sessions for a few different games so I need to group by gameName/Id and produce total playtime for each game.is this achievable ?there are more fields to the object but I just need to display top 5 played games from the gameSessions array.here is what i have so fat that produces an empty array",
"username": "Yarden_Ish-Shalom"
},
{
"code": "",
"text": "You should $match first. This will ensure you use an appropriate index. The $addFields before the $match is potentially applied to all documents, even the ones that do not match. Yes, you use PreviousDate from your $addFields in your $match but you could do the test without having it as a field.In your $match, you test createdAt, but I do not see this field in your sample documents. This might explain the empty array. May be the field is named differently.",
"username": "steevej"
},
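Following that advice, a rough mongosh sketch with the $match first, then $unwind and $group; the collection name is a placeholder, field names come from the sample document, and startTime/endTime are assumed to be epoch seconds (so dividing by 3600 gives hours).

```js
db.devices.aggregate([
  // Match first so an index can be used before unwinding
  { $match: { "gameSessions.deviceSN": "180733444101861" } },
  { $unwind: "$gameSessions" },
  { $group: {
      _id: "$gameSessions.gameId",
      gameName: { $first: "$gameSessions.gameName" },
      hoursPlayed: { $sum: {
        $divide: [ { $subtract: ["$gameSessions.endTime", "$gameSessions.startTime"] }, 3600 ]
      } }
  } },
  { $sort: { hoursPlayed: -1 } },
  { $limit: 5 }
])
```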
{
"code": "",
"text": "This topic was automatically closed after 60 days. New replies are no longer allowed.",
"username": "system"
}
]
| Querying a nested array from a document to calculate play time in hours | 2022-09-07T11:05:17.822Z | Querying a nested array from a document to calculate play time in hours | 993 |
null | [
"transactions",
"flutter",
"flexible-sync"
]
| [
{
"code": "userProfile adduserProfile(userProfile newuserProfile) {\n final userQuery = _realm.query<userProfile>(\"_id == \\$0\", [newuserProfile.id]);\n userProfile? userProfile;\n _realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.add<userProfile>(userQuery, name: \"add-user-query\", update: true);\n userProfile = _realm.write((){\n return _realm.add(newuserProfile);\n });\n return userProfile!;\n return _realm.write(() {\n return _realm.add(newuserProfile);\n });\n }\n",
"text": "Hey, we are building an App on flutter, dart and realm. And we are trying to sync the data to the database using Flexible sync. But we are not able to progress further as There are no logs generated on the backend or on the Flutter dev tools for what’s happening.\nThe thread just halts with no error.\nThe transaction is not being started and no further code execution takes place,\nthe app freezes when we try to start the write transaction\nAlso the reading is done but gets stuck while writing.\nHere is my write code.",
"username": "Deepankshu_Shewarey"
},
{
"code": "update",
"text": "Hm… this looks like a bug on our end that we don’t throw the correct error here. The issue is you’re starting a transaction inside the subscription update block. That is not allowed. Can you move the write outside of the update block and see if that resolves the issue.",
"username": "nirinchev"
},
{
"code": "userProfile adduserProfile(userProfile newuserProfile) {\n final userQuery = _realm.query<userProfile>(\"_id == \\$0\", [newuserProfile.id]);\n userProfile? userProfile;\n _realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.add<userProfile>(userQuery, name: \"add-user-query\", update: true);\n });\n return userProfile!;\n return _realm.write(() {\n return _realm.add(newuserProfile);\n });\n }\nflutter: [ERROR] Realm: SyncError message: error category: SyncErrorCategory.unknown code: 1011",
"text": "Thanks Nikola, As suggested we tried the following codeand getting this error\nflutter: [ERROR] Realm: SyncError message: error category: SyncErrorCategory.unknown code: 1011",
"username": "Deepankshu_Shewarey"
},
{
"code": "",
"text": "That sounds like a server error - can you try checking your server logs to see if there’s anything in there that may point us to what the issue is?",
"username": "nirinchev"
}
]
| In flexible sync read code is working but write code is not working | 2022-09-05T19:29:22.726Z | In flexible sync read code is working but write code is not working | 2,706 |
null | [
"node-js"
]
| [
{
"code": "async function myAsyncFunc(monthlyPeriodsArr, monthly_periods_results_arr, monthly_periods_plot_arr) \n{\nfor (let index = 0; index < monthlyPeriodsArr.length; index++) {\nconst elementArr = monthlyPeriodsArr[index];\nconsole.log(index+\": element: \"+elementArr);\n\nvar PromiseResult = await \nCallMongoDBWithPeriodResolution.executeMongoDBAggregateQueryDates(elementArr[0], elementArr[1]);\nconsole.log(\"PR: \"+PromiseResult);\nconsole.log(\"PromiseResult: \"+PromiseResult.then((result) => {\nconsole.log(\"Success\", result);\n\n}).catch((error) => {\nconsole.log(\"Error\", error);\n}));\n\n}\n\n}\n",
"text": "In the above code CallMongoDBWithPeriodResolution.executeMongoDBAggregateQueryDates(start_date, end_date) returns undefined.\nBut the query works in the function callback that connects to the database.\nI can do any and all things with the data in the callback but returning an array or a promise always returns undefined. How can I correct this?\nI have tried promises and await and async. I have been attempting regular functions while waiting for the callback.\nReturning myresultsArray that is visible inside the callback returns undefined in all cases.Cheers,\nDaniel",
"username": "Daniel_Stege_Lindsjo"
},
{
"code": "",
"text": "I have solved the issue using async, await and try/catch blocks.\nBut I am glad to use this forum and will ask again if the need arises. Happy coding to you all,\nDaniel",
"username": "Daniel_Stege_Lindsjo"
},
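The solution itself is not shown in the thread, so here is a minimal sketch of the usual async/await pattern with the Node.js driver: return the awaited array instead of reading results inside a callback. The URI, database, collection and pipeline are placeholders.

```js
const { MongoClient } = require("mongodb");

async function executeMongoDBAggregateQueryDates(startDate, endDate) {
  const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
  try {
    await client.connect();
    const collection = client.db("mydb").collection("mycollection"); // placeholder names
    // toArray() resolves to the documents, so callers can simply `await` this function
    return await collection
      .aggregate([{ $match: { createdAt: { $gte: startDate, $lt: endDate } } }])
      .toArray();
  } finally {
    await client.close();
  }
}

// usage: const results = await executeMongoDBAggregateQueryDates(start, end);
```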
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How do I get data back from MongoDB callback in NodeJS? | 2022-09-08T12:54:23.229Z | How do I get data back from MongoDB callback in NodeJS? | 2,293 |
null | [
"connecting",
"sharding"
]
| [
{
"code": "{\"find\":\"shards\", \"readConcern\":{\"level\":\"majority\"}}Command timed out waiting for read concern to be satisfied",
"text": "( question also asked at cluster - MongoS refusing connections during network split - Server Fault )I am exploring sharded MongoDB clusters, and I have trouble understanding MongoS’s behavior during a network split.My test environment works as expected until I simulate a network split.\nIn the “smallest” network partition, I have:In this partition:Do you know why? I must be missing something obvious.\nAs far as I can tell from startup logs, MongoS sends queries like {\"find\":\"shards\", \"readConcern\":{\"level\":\"majority\"}} to the config server, which seems guaranteed to fail during a network split (Command timed out waiting for read concern to be satisfied).Thanks for any help",
"username": "Nick_Turner"
},
{
"code": "",
"text": "I reproduced this behavior when following https://www.mongodb.com/docs/manual/tutorial/deploy-shard-cluster/ (using MongoDB 6.0.1) :I expected MongoS to start and accept connections, even when the configuration replica set is in degraded mode (and only one SECONDARY configuration server is reachable by MongoS).Is that expectation wrong?",
"username": "Nick_Turner"
},
{
"code": "",
"text": "Hey @Nick_Turner,Can I ask why you’re skipping the shard replica set creation in the above step? It sounds like you’re deliberately creating a mangled sharded cluster. In this state, I don’t think we can guarantee certain expected behaviour.If you’re intending to simulate how a sharded cluster behave in a degraded state (which I think is a valid goal), I believe you need to have a working sharded cluster first, then use a tool like toxiproxy to simulate certain network conditions. I’m not sure if turning off part of the cluster will give you accurate information.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "My initial test had a functioning sharded cluster (2 shards, each one on a 3-server replica set). That’s when I noticed this MongoS behavior when the cluster is in a degraded state.I did a second test just to try and isolate the issue (reproducing the same behavior with fewer servers, while following the tutorial as much as possible). That’s why I skipped the shard replica set creation: even without it, I observed the same behavior. When my config replica set is healthy, MongoS accepts connections. But when the config replica set is in degraded state and the remaining servers are rebooted, MongoS refuses connections until the config replica set becomes healthy again.Thanks for the toxiproxy recommendation, I’ll look into it. But I feel like turning off servers is also a valid test case: in case of datacenter emergencies, we may have to power off some servers quickly - and even in this state, I expect the remaining servers to still accept connections.",
"username": "Nick_Turner"
}
]
| MongoS refusing connections during network split | 2022-09-04T07:45:28.515Z | MongoS refusing connections during network split | 2,067 |
null | [
"queries",
"database-tools"
]
| [
{
"code": "datequery.shstart= sort -n /ipath/date.txt | tail -1\nend= date --utc \"+%FT%T.%3NZ\"\necho '{\"memberCard.cardMbrDtlsCreatedTms\": { \"$gte\": { \"$date\": \"$start\"}, \"$lt\": {\"$date\": \"$end\"}}}'\n",
"text": "This is my mongoexportmongoexport --ssl --sslCAFile $sslCAFile --host $host -u $username -p $password --collection $collectionMember --db $database --limit 5 --query “’./datequery.sh’” --out $outputdatequery.sherror validating settings: query ‘[39 46 47 100 97 116 101 113 117 101 114 121 46 115 104 39]’ is not valid JSON: json: cannot unmarshal string into Go value of type map[string]interface {}",
"username": "Yugandhar_Prathi"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and update the code and command you published.The way it is formatted it is not clear if the syntax is wrong or simply the formatting.",
"username": "steevej"
},
{
"code": "--querymongoexport./datequery.sh$(command)./datequery.sh./datequery.sh --query=\"$(./datequery.sh)\"\n --query=\"`./datequery.sh`\"\n$",
"text": "Welcome to the MongoDB Community @Yugandhar_Prathi !–query “’./datequery.sh’”The --query parameter for mongoexport expects a query as a JSON document enclosed in quotes.You can use shell command substitution to replace ./datequery.sh with the output of running that command, but the supported syntax is either single backticks (`command`) or $(command).Currently you have used single quotes ('), which (in a shell context) will preserve the literal value of each character within quotes. The \"39 47 … \" sequence is ./datequery.sh converted to the decimal value of each character.Either of these should work assuming ./datequery.sh is executable:I think the first form is more readable and consistent with the other variable substitutions using $.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks @Stennie_X when I ran the job with first mentioned option\nI am getting this error\nerror validating settings: query ‘[50 48 50 50 45 48 49 45 50 49 84 48 48 58 48 48 58 48 48 46 48 48 49 90 10 50 48 50 50 45 48 57 45 48 57 84 48 55 58 50 53 58 49 54 46 55 52 48 90 10 123 34 109 101 109 98 101 114 67 97 114 100 46 99 97 114 100 77 98 114 68 116 108 115 67 114 101 97 116 101 100 84 109 115 34 58 32 123 32 34 36 103 116 101 34 58 32 123 32 34 36 100 97 116 101 34 58 32 34 36 115 116 97 114 116 34 125 44 32 34 36 108 116 34 58 32 123 34 36 100 97 116 101 34 58 32 34 36 101 110 100 34 125 125 125]’ is not valid JSON: invalid character ‘-’ after top-level value",
"username": "Yugandhar_Prathi"
},
{
"code": "",
"text": "I changed my entire shell script and passing query\nsource /path/config.shstart_dt= sort -n /path/date.txt | tail -1end_dt= date --utc “+%FT%T.%3NZ”mongoexport --ssl --sslCAFile $sslCAFile --host $host -u $username -p $password --collection $collectionName --db $database --limit 5 --query ‘{“xxxx.cxxxCreatedTms”: { “$gte”: { “$date”: “$start”}, “$lt”: {\"$date\": “$end”}}}’ --out $output",
"username": "Yugandhar_Prathi"
},
{
"code": "",
"text": "While I ran the above script I am facing this error2022-01-21T00:00:00.001Z\n2022-09-09T07:31:18.924Z\n2022-09-09T03:31:19.023-0400 error validating settings: parsing time “$start” as “2006-01-02T15:04:05Z07:00”: cannot parse “$start” as “2006”\n2022-09-09T03:31:19.023-0400 try ‘mongoexport --help’ for more information",
"username": "Yugandhar_Prathi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| While I want to fetch data from mongocollection based on updated time in shell I'm facing this error. Where I'm passing the date in other sh file | 2022-09-08T07:03:13.609Z | While I want to fetch data from mongocollection based on updated time in shell I’m facing this error. Where I’m passing the date in other sh file | 3,613 |
null | [
"queries"
]
| [
{
"code": "",
"text": "I am currently using\ndb.<collection_name>.find({}).sort({\"_id\":-1})Is this the right way to go ?",
"username": "Stuart_S"
},
{
"code": "_id",
"text": "Hi @Stuart_S,Yes, you can do it like that. _id filed is of the type ObjectId, which includes a timestamp.",
"username": "NeNaD"
},
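A short mongosh illustration of the point above. Because an auto-generated ObjectId embeds its creation time, sorting on _id descending returns the most recently inserted documents first (this only holds when _id values are driver-generated ObjectIds, not client-supplied keys):

```js
// Most recently inserted documents first, limited to 10
db.collection.find({}).sort({ _id: -1 }).limit(10)

// The embedded creation time can be read directly from an ObjectId
ObjectId("61ea5d6e2bf8817538c81cfc").getTimestamp() // returns the creation time as a Date
```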
{
"code": "",
"text": "Thanks Nenad,!!!",
"username": "Stuart_S"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is this the right way to get documents in descending order meaning last inserted first and so on with the first inserted in the end? | 2022-09-09T05:20:33.441Z | Is this the right way to get documents in descending order meaning last inserted first and so on with the first inserted in the end? | 1,419 |
[
"data-modeling",
"transactions"
]
| [
{
"code": "WalletTransactionsWalletTransactionLinesWalletTransactionsWalletTransactionLinesWalletTransactionsWalletTransactionLinesWalletTransactionsWalletTransactionReceiptWalletTransactionsWalletTransactionReceiptWalletTransactionsWalletTransactionLinesWalletTransactionLines",
"text": "I will be using MongoDB (which is NoSQL) and MSSQL at the same time to store data (that’s why there are Foreign Keys)Each account will be assigned a Wallet.A Wallet can have thousands of transactions. Each Wallet Transaction should implement double entry accounting - which means for every transaction, there will be a credit and debit document. To implement double entry accounting, there is a WalletTransactions (which stores common fields) and a WalletTransactionLines (which stores actual details i.e. which wallet is credited, debited) collection.Every WalletTransactions document will contain an array of WalletTransactionLines . WalletTransactions will have at least 2 WalletTransactionLines documents, one for credit, one for debit, always.Every WalletTransactions document will have a WalletTransactionReceipt object. Every WalletTransactions document may or may not have value for each field of WalletTransactionReceipt object.An account can also Top-up to their own wallet using i.e. Apple Pay. This action is not a wallet-to-wallet transaction, but an unknown-to-wallet transaction. This means that no wallet is debited, and only the account’s wallet is credited (meaning for one WalletTransactions, there’s only one WalletTransactionLines)Some comments:\nScreen Shot 2022-09-09 at 2.53.37 PM911×446 46.6 KB\n",
"username": "ABC"
},
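To make the double-entry shape concrete, here is one possible mongosh sketch of a wallet-to-wallet transfer under this model. All field names here are illustrative assumptions, not the actual schema from the post: a WalletTransactions header with the embedded receipt, plus one debit line and one credit line in WalletTransactionLines referencing it.

```js
const txnId = ObjectId();

// Header document with the embedded WalletTransactionReceipt object
db.WalletTransactions.insertOne({
  _id: txnId,
  createdAt: new Date(),
  receipt: { reference: "R-0001", provider: null } // hypothetical receipt fields
});

// Double entry: exactly one debit and one credit line for a wallet-to-wallet transfer
db.WalletTransactionLines.insertMany([
  { transactionId: txnId, walletId: "wallet-A", direction: "debit",  amount: 100 },
  { transactionId: txnId, walletId: "wallet-B", direction: "credit", amount: 100 }
]);

// A top-up (e.g. Apple Pay) would insert a single credit line only, since no wallet is debited.
```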
{
"code": "WalletTransactionReceiptWalletsWalletTransactionsWalletTransactionLines",
"text": "editPlease note that WalletTransactionReceipt is an embedded object. Wallets, WalletTransactions, and WalletTransactionLines are all separate collections.The one-to-many lines are just for visual representation.",
"username": "ABC"
}
]
| Feedback for e-wallet data model | 2022-09-09T06:54:35.812Z | Feedback for e-wallet data model | 2,365 |
|
null | [
"aggregation",
"queries",
"node-js"
]
| [
{
"code": "Users.find({ _id: { $gt: \"uid50\", $lt: \"uid100\" } })",
"text": "if we have a query like Users.find({ _id: { $gt: \"uid50\", $lt: \"uid100\" } })\nso how can we use the same filter with the MongoDB atlas search without using this in the match pipeline?",
"username": "sajan_kumar"
},
{
"code": "rangeint32int64doubledaterange",
"text": "Hi @sajan_kumar,Could you provide the following information:Atlas search has the range operator which can be used to perform a search over:However, it seems the values you’ve provided in the original query is of string type where as the range operator.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
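For reference, this is the general shape of the range operator described above. It works over numeric and date paths, so it only applies here if the bounds are stored as numbers or dates rather than strings like “uid50”; the index name, collection and field are placeholders.

```js
db.users.aggregate([
  {
    $search: {
      index: "default",        // placeholder index name
      range: {
        path: "userNumber",    // a numeric field, not a string _id
        gte: 50,
        lt: 100
      }
    }
  }
])
```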
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to Run Atlas Search Queries with a object id Range Filter? | 2022-08-30T11:12:46.917Z | How to Run Atlas Search Queries with a object id Range Filter? | 1,800 |
null | [
"data-modeling"
]
| [
{
"code": "",
"text": "How will you build an Instagram story like schema which will be deleted after 24hrs and also contain list of users that viewed the story and time they viewed the story. Also users can add multiple stories.On the part of deleting I thought about using TTL index and a trigger for delete to get the data so I can delete the file from s3. But the delete doesn’t return document data.I really need help and suggestions for this.",
"username": "martin_daniels"
},
{
"code": "{\n_id : \"doc1\", \nstoryId : 'xxx' ,\nstoryCreateDate : ISODate(...),\nS3Url : \" .... \",\nusers : [ { \"userId\" : \"embeeded1\", \"avatar\" : \"...\" , dateViewed : ... } ... { \"id\" : \"embeededN\" } ],\noverFlowIndex: 1,\ntotalViewed : 300,\nhasOverflow : true\n}\n...\n{\n_id : \"doc2\", \nstoryId : 'xxx' ,\nstoryCreateDate : ISODate(...),\nS3Url : \" .... \",\nusers : [ { \"userId\" : \"embeededN+1\", \"avatar\" : \"...\" , dateViewed : ... } ... { \"id\" : \"embeededN+M\" } ] ,\noverFlowIndex: 2,\nhasOverflow : false\n}\n{storyId : 1, \"users.userId\" : 1}xxxdb.collection.find({\"storyId\" : \"xxx\" , overFlowIndex : { $gt : 0} }\ndb.collection.find({\"storyId\" : \"xxx\" , overFlowIndex : { $gt : 0} }.sort({ overFlowIndex : 1})\n{\"storyId\" : 1, \"overFlowIndex\" : 1}",
"text": "Hi @martin_daniels ,To me it sounds like the outlier pattern with story documents is the way to go.The Outlier Pattern helps when there's exceptionally large records occasionally occurring in your data setThe idea is that you have each story as a document in the story collection, with its data and embedded array of users that viewed the post :When a specific post gets more than “N” number of distinct views (lets say ~200) we open a new document and paging those viewers. You can index {storyId : 1, \"users.userId\" : 1} to use it for a query to determine whether a user has already viewed that story or is it a new user…Now there will be a TTL index on the “storyCreateDate” and therefore all the documents will be deleted at the same cycle. Having said that the total views for a story will be calculated from a sum of views on each document or maintained in the “overFlowIndex : 1” document and incremented every update.Now to get all the documents of story xxx I need to query:If you need to sort the documents based on insert order:Now when indexing {\"storyId\" : 1, \"overFlowIndex\" : 1} you will get an indexed query to get all overflow documents.Regarding the trigger to delete the S3 files there are 2 solutions:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
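For the 24-hour expiry mentioned above, the TTL index on the story documents is a one-liner (the collection name is a placeholder):

```js
// Story documents are removed roughly 24 hours after their storyCreateDate
db.stories.createIndex(
  { storyCreateDate: 1 },
  { expireAfterSeconds: 60 * 60 * 24 }
)
```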
{
"code": "",
"text": "S3 has way of deleting file older than some days. i think this will solve my problems. Thank you so much.",
"username": "martin_daniels"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Instagram Story Model | 2022-09-08T11:41:23.436Z | Instagram Story Model | 2,006 |
null | []
| [
{
"code": "{\n \"filter\": {\n \"createdAt\": {\n \"$gte\": {\n \"$date\": {\n \"$numberLong\": \"1657839600000\"\n }\n },\n \"$lt\": {\n \"$date\": {\n \"$numberLong\": \"1657926000000\"\n }\n }\n }\n }\n}\n",
"text": "Hi,\nI just had a problem with querying with the date range, which seems to be not working. however, the general way of querying is working fine for me.",
"username": "Davod_Mozafari"
},
{
"code": "mongosh",
"text": "Hi @Davod_Mozafari - Welcome to the community!Can you provide further details regarding the issue? I.e. If you’re receiving any errors, unexpected document(s) being returned, full request details after redacting any credentials, API keys and sensitive information.however, the general way of querying is working fine for me.Could you also clarify this? My interpretation is that querying perhaps through mongosh, Compass or Data Explorer is working as per normal but you are only encountering issues only when using the Data API. However, please confirm.Regards,\nJason",
"username": "Jason_Tran"
}
]
| Date range filter in Data API | 2022-08-16T16:56:50.353Z | Date range filter in Data API | 2,723 |
null | []
| [
{
"code": "\n{\n \"dataSource\":\"Cluster0\",\n \"database\":\"v3\",\n \"collection\":\"allPersons\",\n \"filter\":{\n \"$and\":[\n {\n \"host\":\"host-sandbox\"\n },\n {\n \"company.name\":{\n \"$regex\":\"name2\",\n \"$options\":\"i\"\n }\n },\n {\n \"company.pipedrive.orgId\":{\n \"$ne\":null\n }\n },\n {\n \"company.pipedrive.pipedrive.orgId\":{\n \"$ne\":\"\"\n }\n }\n ]\n }\n}\n{\n \"_id\": {\n \"$oid\": \"630ec5d1840fae32479fbc72\"\n },\n \"host\": \"host-sandbox\",\n \"person\": {\n \"fullName\": \"Placeholder\",\n },\n \"company\": [\n {\n \"name\": \"name\",\n \"size\": \"9\",\n \"email\": \"email\",\n \"website\": \"website\",\n \"gpStatus\": \"valid\",\n \"industry\": \"Law Practice\",\n \"position\": \"Senior Counsel \",\n \"companyOrder\": \"1\",\n \"verifyStatus\": \"success\",\n \"pipedrive\": {\n \"sync\": \"ready\",\n \"orgId\": \"1\"\n },\n \"verifyProcessed\": true,\n \"verifyResults\": \"valid\"\n },\n {\n \"name\": \"name2\",\n \"size\": \"9\",\n \"email\": \"email2\",\n \"website\": \"website2\",\n \"gpStatus\": \"valid\",\n \"industry\": \"Law Practice\",\n \"position\": \"Senior Counsel \",\n \"companyOrder\": \"1\",\n \"verifyStatus\": \"success\",\n \"pipedrive\": {\n \"sync\": \"ready\",\n \"orgId\": \"2\"\n },\n \"verifyProcessed\": true,\n \"verifyResults\": \"valid\"\n }\n ],\n \"importName\": \"testing_import\"\n}\n {\n \"name\": \"name2\",\n \"size\": \"9\",\n \"email\": \"email2\",\n \"website\": \"website2\",\n \"gpStatus\": \"valid\",\n \"industry\": \"Law Practice\",\n \"position\": \"Senior Counsel \",\n \"companyOrder\": \"1\",\n \"verifyStatus\": \"success\",\n \"pipedrive\": {\n \"sync\": \"ready\",\n \"orgId\": \"2\"\n }\n",
"text": "Hi, I can’t find any documentation on how to return only the matching arrays using the data-api endpoint /action/findOne.This is the query I’m usingThis is what my document scheme looks like:this is what I would like the query to return:Thank you so much for your help!",
"username": "spencerm"
},
{
"code": "aggregate$filter\"company\"\"name\"\"name2\"curl --location --request POST 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/aggregate' \\\n--header 'Content-Type: application/json' \\\n--header 'api-key: <MY_API_KEY>' \\\n--data-raw '{\n \"collection\":\"data\",\n \"database\":\"myFirstDatabase\",\n \"dataSource\":\"Cluster0\",\n \"pipeline\": [\n {\n \"$addFields\": {\n \"filteredArray\":{\n \"$filter\":{\n \"input\":\"$company\",\n \"cond\":{\"$eq\":[\"$$this.name\",\"name2\"]}\n }\n }\n }\n },\n {\n \"$project\": {\"_id\":0,\"filteredArray\":1}\n }\n ]\n}'\n{\n \"documents\":[\n {\n \"filteredArray\":[\n {\n \"name\":\"name2\",\n \"size\":\"9\",\n \"email\":\"email2\",\n \"website\":\"website2\",\n \"gpStatus\":\"valid\",\n \"industry\":\"Law Practice\",\n \"position\":\"Senior Counsel \",\n \"companyOrder\":\"1\",\n \"verifyStatus\":\"success\",\n \"pipedrive\":{\n \"sync\":\"ready\",\n \"orgId\":\"2\"\n },\n \"verifyProcessed\":true,\n \"verifyResults\":\"valid\"\n }\n ]\n }\n ]\n}\n",
"text": "Hi @spencerm,I believe you can use the aggregate action for the API request to achieve a similar output with the $filter operator. I’ve made a very simple example based off the sample document you have provided which only filters based off the objects within the \"company\" array and if those object’s have a \"name\" field value of \"name2\". You can alter the pipeline and condition accordingly based off your use case and requirements.Request:Response (formatted):Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Return only matching array via Data API | 2022-09-01T03:44:51.332Z | Return only matching array via Data API | 2,465 |
[
"atlas-functions"
]
| [
{
"code": "",
"text": "Hello i have connected rest api service mongodb to my application. But i got problem with the http endpoints function. App that i talk about basically like digital form. after we fill in and click submit the data will be exported to database. But, While i do export data from my app form it works but only the id(autogenerated) that has been inserted to the database. The parameters such as namegoods, quantity and the rest is missing or not recorded.Someone can explain this problem? did i declare a correct function to insert data from another app form?\nMy Function like picture below:\nimage2560×1131 118 KB\n",
"username": "Faiq_Anargya"
},
{
"code": "// This function is the endpoint's request handler.\nexports = async function({ query, headers, body}, response) {\n // Data can be extracted from the request as follows:\n\n // Recieved data from the post request in \"data\" object\n const data = JSON.parse(body.text());\n\n const good = {\n _id : data._id,\n namegoods: data.namegoods,\n quantity: data.quantity,\n condition: data.condition,\n barcode : data.barcode\n }\n\n const collection = await context.services.get(\"Syndes\").db(\"Soti\").collection(\"newgoods\")\n \n const result = await collection.insertOne(good);\n\n return result;\n};\n",
"text": "Hi @Faiq_Anargya ,I am not sure what you are trying to achieve here and the insertOne method you wrote looks something like a mongoose schema defenition.If I am reading correctly, your application will call an http endpoint passing the required new “goods” object as a body to that endpoint and the endpoint should store it in the database?Correct me if I am wrong, in that case the function should like something like:Perhaps also consider to use the Atlas Data API instead of defining your own endpoints if the need is just to perform simple CRUD:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "\"error\": \"{\\\"message\\\":\\\"Cannot access member 'text' of undefined\\\",\\\"name\\\":\\\"TypeError\\\"}\",\n \"error_code\": \"FunctionExecutionError\",\n \"link\": \"https://realm.mongodb.com/groups/630d71853be32c526f524ebe/apps/630d7789b5b628f76fcf2edd/logs?co_id=631764f0d84017ad42b4c8a3\"\n}\n",
"text": "Yeah sure, i want to insert a new data from my app and store it in the database.i have tried your function but it shows the error:when i try to test in the postman",
"username": "Faiq_Anargya"
},
{
"code": "",
"text": "Can you show me the postman screen shot?Also send me a link to your function in the cloud…",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "here it’s the postman screenshoot\nimage1926×1480 129 KB\nand the link to of http ednpoints\nhttps://data.mongodb-api.com/app/data-htvfp/endpoint/insert",
"username": "Faiq_Anargya"
},
{
"code": "",
"text": "Are you sending anything in the body?That should be the document you want to insert …\nI suspect that you have no body in that request and therefore the body.text() errors out…And.by a link to the function I meant going in the web browser to the function page and copy paste here the URL that is in the browser when you are on that page. This way I can have a look…",
"username": "Pavel_Duchovny"
},
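To illustrate the point about the request body, here is a sketch of calling the endpoint with a JSON body whose keys match the fields the function reads (namegoods, quantity, condition, barcode); the values are made up and the URL is the one shared earlier in the thread.

```js
// Node 18+ (built-in fetch) or browser
async function insertGoods() {
  const response = await fetch("https://data.mongodb-api.com/app/data-htvfp/endpoint/insert", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      _id: "goods-001",        // example values only
      namegoods: "Scanner",
      quantity: 3,
      condition: "new",
      barcode: "1234567890"
    })
  });
  console.log(await response.json());
}
```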
{
"code": "",
"text": "Thank you very much for your help, it works when i try again. Actually, i’m very new with mongodb and this kind of things so i need help for some of the features and how it works",
"username": "Faiq_Anargya"
},
{
"code": "",
"text": "Sure no problem.A good start will be to read some of the tutorials on our developer center:In general browse through:Code, content, tutorials, programs and community to enable developers of all skill levels on the MongoDB Data Platform. Join or follow us here to learn more!Thanks!",
"username": "Pavel_Duchovny"
}
]
| Insert data from another app to mongodb | 2022-09-06T04:11:26.116Z | Insert data from another app to mongodb | 3,188 |
|
null | [
"replication"
]
| [
{
"code": " \"errmsg\" : \"replSetInitiate quorum check failed because not all proposed set members responded affirmatively: poc-mongo-02:27017 failed with Error connecting to poc-mongo-02:27017 :: caused by :: Could not find address for poc-mongo-02:27017: SocketException: Host not found (authoritative), poc-mongo-03:27017 failed with Error connecting to poc-mongo-03:27017 :: caused by :: Could not find address for poc-mongo-03:27017: SocketException: Host not found (authoritative)\",*\n \"code\" : 74,*\n \"codeName\" : \"NodeNotFound\"*\n",
"text": "Hi,I am trying to initiate 3 node replica set, but it is failing with below error. Can anyone please suggest.“ok” : 0,Thanks & Regards,\nPradeep Kumar P",
"username": "Pradeep_Kumar_Paravastu"
},
{
"code": "",
"text": "Are all 3 nodes up & running?\nCan you connect to each one\nCan they communicate with each other?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Are all 3 nodes up & running?\nYes.Can you connect to each one\nYes. Able to connect mongo instance on all the 3 nodes.Can they communicate with each other?\nYes. Can ping each other on all the 3 nodes.",
"username": "Pradeep_Kumar_Paravastu"
},
{
"code": "",
"text": "Along with pinging can they communicate via the MongoDB port? You can do telnet to check if the FW is open. You need to make sure all hosts can communicate via the MongoDB port to all the other hosts.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "I think it is the firewall issue. Working with system admin to check.",
"username": "Pradeep_Kumar_Paravastu"
},
{
"code": "Could not find address for poc-mongo-03:27017: SocketException: Host not found",
"text": "The error below seems to indicate DNS configuration error for the host poc-mongo-03.Could not find address for poc-mongo-03:27017: SocketException: Host not foundMake sure all nodes from your replica set can resolve the host name poc-mongo-03 to the same IP address.I have some doubts aboutYes. Can ping each other on all the 3 nodes.Please share a screenshot of the ping test.A firewall issue will not generate a HostNotFound error.",
"username": "steevej"
},
{
"code": " \"errmsg\" : \"replSetInitiate quorum check failed because not all proposed set members responded affirmatively: poc-mongo-03:27017 failed with Error connecting to poc-mongo-03:27017 (xxx.xx.xxx.xx:27017) :: caused by :: No route to host, poc-mongo-02:27017 failed with Error connecting to poc-mongo-02:27017 (xxx.xx.xxx.xx:27017) :: caused by :: No route to host\",*\n \"code\" : 74,*\n \"codeName\" : \"NodeNotFound\"*",
"text": "We got the host names and IP’s added to the /etc/hosts on each node. After reattempt of the rs.initiate. Now it is giving different error.Earlier it was “Could not find address for – SocketException: Host not found (authoritative)” & Now it is “failed with Error connecting to – :: caused by :: No route to host”\"“ok” : 0,",
"username": "Pradeep_Kumar_Paravastu"
},
{
"code": "",
"text": "Can you share whole command rs.initiate commad.\nRegards\nPrince",
"username": "Prince_Das"
},
{
"code": "No route to host, poc-mongo-02",
"text": "This new error message:No route to host, poc-mongo-02Confirms my doubts aboutYes. Can ping each other on all the 3 nodes.So, again, pleaseshare a screenshot of the ping test.using the host names you use in your replica set configuration. Make sure you ping all hosts from all hosts, even same host to same host and from the client you try to connect.",
"username": "steevej"
},
{
"code": "",
"text": "Ping was working on all the hosts from all the hosts. But the bidirectional communication between the hosts was failing. The mongo basic connectivity check was failing with the same error from one host to other host, error - \"No route to host”. After investigation it has been identified that the port 27017 is behind the firewall. A new rule was added to allow the ports to open. Now the rsinitiate is completed and replicaset was configured successfully. Thank you all for the help.mongo --host m3.example.net --port 27017",
"username": "Pradeep_Kumar_Paravastu"
}
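Once every host resolves the others' names and port 27017 is open between them, the initiate call itself looks like this; it is run once from a single node, using the same hostnames in each member entry (the replica set name and the first host name are assumptions):

```js
rs.initiate({
  _id: "rs0", // must match the replSetName configured on every node
  members: [
    { _id: 0, host: "poc-mongo-01:27017" },
    { _id: 1, host: "poc-mongo-02:27017" },
    { _id: 2, host: "poc-mongo-03:27017" }
  ]
})

rs.status() // verify every member reaches PRIMARY or SECONDARY
```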
]
| Rs.initiate is failing | 2022-09-07T12:01:17.486Z | Rs.initiate is failing | 4,399 |
null | [
"time-series"
]
| [
{
"code": "",
"text": "Have triggers enabled in other apps - all good, but today spun up a v6 instance and created a time series collection, with a simple trigger that simply does an http post.When I had the trigger setup with include full document - get errorerror issuing collMod command for UnityPricing.price_events: (InvalidOptions) option not supported on a time-series collection: changeStreamPreAndPostImageschanging the trigger to not include full document - get error(CommandNotSupportedOnView) Namespace UnityPricing.price_events is a timeseries collectionAt a bit of a loss, reviewing the docs can’t find anything that excludes time series from having a trigger - What am I missing?Thanks",
"username": "Garry_Black"
},
{
"code": "",
"text": "further searching reveals the answer - unsupported:Time Series, IOT",
"username": "Garry_Black"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Insert trigger for time series collection getting errors | 2022-09-08T11:15:14.103Z | Insert trigger for time series collection getting errors | 1,941 |
null | []
| [
{
"code": "const sortedTasks = realmRef?.objects('Task').sorted('@id');\nsetTasks([...sortedTasks]);\n",
"text": "HelloWill this cause lazy load to fail?And then using tasks inside a flat list. Will this cause all to be rendered?\nAnd i read in other SDK’s that there is a limit function.Because we need to render data based on user input.",
"username": "Dion_Grendelman"
},
{
"code": "",
"text": "Hello,Does anyone have a update on this?",
"username": "Dion_Grendelman"
},
{
"code": "sortedTasks",
"text": "sortedTasks will be a lazily loaded results object. Why not use that directly?",
"username": "Jay"
},
{
"code": " const realmRef = useRealm();\n const tasks = realmRef.objects('Task');\n\nreturn (\n<View>\n <FlatList\n data={tasks.sorted('@id')}\n renderItem={renderItem}\n keyExtractor={(item) => item.id}\n />\n</View>\n",
"text": "### How frequently does the bug occur?\n\nAll the time\n\n### Description\n\nHello!\n\n…\nI read realmjs is lazy loaded so using Object.(... for example in a flatlist i causing it to be lazy loaded.\nBut i had a few questions\n\n1. Is it actually laze loaded into flat list? \n2. When i use a function to get all tasks sorted by date and put it inside a react.useState, is it still lazy loaded? \n3. Is there a limit option in react native? I couldt find anything in dogs, because otherwise i would implement a onEndReached function to load on demand.\n\n\n\n### Stacktrace & log output\n\n```shell\nN/A\n```\n\n\n### Can you reproduce the bug?\n\nYes, always\n\n### Reproduction Steps\n\nN/A\n\n### Version\n\nNewest\n\n### What SDK flavour are you using?\n\nLocal Database only\n\n### Are you using encryption?\n\nNo, not using encryption\n\n### Platform OS and version(s)\n\nN/A\n\n### Build environment\n\nWhich debugger for React Native: ..\nN/A\nHello i already stated this in my github thread but.Even when doing this. Its still laggy. With 1200 taks this is so laggy. 400mb ram usage, 30 fps on scroll. And initial it console.log all.\nInstead of loggin “ön lazy load”",
"username": "Dion_Grendelman"
}
]
| Lazy loading/Pagination | 2021-10-05T15:59:14.536Z | Lazy loading/Pagination | 5,150 |
null | []
| [
{
"code": "",
"text": "I understand $search can be applied to single collection in MongoDB Atlas search. What can be a design approach for performing Atlas search over multiple collections in MongoDB. Should I create a single collection which has all attributes of Search terms? I would end up having a bloated document scenario and large size of index data.What is the best solution for this?",
"username": "syed_hakeem_basha_kaja_mohideen"
},
{
"code": "",
"text": "You are right. creating a single search collection is the best approach for predictable results. While we do plan to introduce searching across multiple collections, the recommended approach would still be to aggregate all documents in a sing me search collection. Relevance scores are calculated based on documents relationship to the entire corpus.You can use Realm Triggers and $merge to create a materialized view or you can create your own process for moving data to a single collection.",
"username": "Marcus"
},
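A rough sketch of the $merge approach mentioned above: copy (and later refresh) documents from a source collection into the single search collection, either on a schedule or from a trigger. Collection and field names are placeholders.

```js
db.products.aggregate([
  { $project: { name: 1, description: 1, source: { $literal: "products" } } },
  { $merge: {
      into: "search_collection",
      on: "_id",
      whenMatched: "replace",
      whenNotMatched: "insert"
  } }
])
```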
{
"code": "",
"text": "Thanks for the quick response Marcus.I also felt the design in similar line.If I create a single collection for search from base collections, I will end up having another collection which may occupy more storage and additional billing.Alternative is to go with external reference pattern for child document collection (One collection with external reference document (Search Attributes) to base document collection. In this way, I would having similar code logic to Approach 1(Single collection for Search) for keeping external reference document up to date with base collection document. But this will not introduce additional collection for Search functionality. But there would be impact to CRUD operations as $merge need to be performed for multiple documents(even if one attribute of base document gets changed - in case a base document is referred in 20K Child documents).What will be the performance impact of REALM trigger and $merge as we will create a trigger for - INSERT, UPDATE and DELETE scenarios?Depending on above scenarios, what would be your recommendation? We may have 25M records each in both Child document and base document collections.",
"username": "syed_hakeem_basha_kaja_mohideen"
},
{
"code": "",
"text": "I have not not tested this to a point where I had any challenges with performance. If you push the limits, please let me know. We do have customers doing so that have not complained but I don’t know how big your application is today.",
"username": "Marcus"
},
{
"code": "",
"text": "I am just reopening this discussion again. As I already mentioned, single collection has been created for search purpose. But I have multiple attributes on that single collection and we have a requirement to search over a single or multiple attributes in the same collection.We currently have one search index covering for all attributes. What is the best practice for this scenario?",
"username": "syed_hakeem_basha_kaja_mohideen"
},
{
"code": "$search: {\n compound: {\n should: [\n {\n autocomplete: {\n query:'hammer',\n path: 'title',\n },\n },\n {\n autocomplete: {\n query:'hammer',\n path: 'plot',\n },\n },\n ],\n },\n},\n",
"text": "If you are trying to match on multiple attributes, use should: in compound operatorExample : - create autocomplete search index on both fields: title and plot",
"username": "Akash_Kumar6"
}
]
| How to apply $search pipeline for multiple collections in MongoDB Atlas Search? | 2021-10-31T20:28:02.941Z | How to apply $search pipeline for multiple collections in MongoDB Atlas Search? | 6,970 |
null | [
"swift",
"transactions"
]
| [
{
"code": "final class Household: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: String = UUID().uuidString\n\n @Persisted var name: String\n \n @Persisted var users = RealmSwift.List<String>()\n \n @Persisted var lists = RealmSwift.List<List>()\n \n @Persisted var itemTypes = RealmSwift.List<ItemType>()\n}\nfinal class List: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: String = UUID().uuidString\n\n @Persisted var name: String\n \n @Persisted var listItems = RealmSwift.List<ListItem>()\n}\nfinal class ItemType: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: String = UUID().uuidString\n\n @Persisted var name: String\n \n @Persisted var categories = RealmSwift.List<String>()\n}\nfinal class ListItem: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var _id: String = UUID().uuidString\n\n @Persisted var itemType: ItemType?\n \n @Persisted var quantity: String?\n \n @Persisted var note: String?\n}\nstruct ListsView: View {\n @StateRealmObject var household: Household #This is passed in from a parent view where the user selects a household. At the moment it only shows 1 household (since household ID is the partition key, but I will fix this up later once the rest of the app is working as intended)\n\n var body: some View {\n VStack {\n SwiftUI.List {\n ForEach(household.lists) { list in\n NavigationLink(destination: ListDetailView(household: household, list: list)) {\n Text(list.name)\n }.font(.body)\n }\n }\n .navigationBarTitle(\"Your Lists\")\n }\n }\n}\n\nstruct ListDetailsView: View {\n\n @StateRealmObject var household: Household\n @StateRealmObject var list: List\n\n @State var itemName: String = \"\"\n\n var body: some View {\n VStack {\n SwiftUI.List {\n ForEach(list.listItems, id: \\.id) { item in\n if let itemType = item.itemType {\n ListItemView(listItem: item, itemType: itemType)\n }\n }.onDelete(perform: $list.listItems.remove)\n }\n Spacer()\n HStack {\n TextField(\"Enter an item name\", text: $itemName)\n TsButton(\n action: {\n onInsertItem(name: itemName)\n },\n text: \"add item\"\n )\n }\n }\n .navigationTitle(list.name)\n }\n\n /**\n When a user inserts an item, check if an itemType already exists for the given name.\n If an itemType with this name already exists, assign that itemType to a new listItem\n and assign that listItem to the currently viewed list.\n\n If an itemType does not exist with a given name, create a new one, assign the new itemType\n to a new listItem and assign that listItem to the currently viewed list.\n */\n func onInsertItem(name: String) {\n if let itemType = household.itemTypes.first(where: { other in other.name == name }) {\n // Item type with this name already exists\n\n let listItem = ListItem()\n listItem.itemType = itemType\n\n $list.listItems.append(listItem)\n } else {\n // No item type exists for this name, so create a new one\n let itemType = ItemType();\n itemType.name = name\n $household.itemTypes.append(itemType)\n\n\n let listItem = ListItem()\n listItem.itemType = itemType\n $list.listItems.append(listItem)\n }\n }\n}\nObject is already managed by another Realm. Use create instead to copy it into this Realm.Cannot modify managed RLMArray outside of a write transaction. let listItem = ListItem()\n listItem.itemType = itemType\n\n $list.listItems.wrappedValue.append(listItem)\n",
"text": "Hey all,I’m having some issues with Realm + SwiftUI.Models\nMy app has 4 models:HouseholdListItemTypeListItemEssentially I have set it up so that a user belongs to a household (the household ID is the partition key), each household has 1 or more lists and 1 or more itemTypes. A list contains a collection of listItems and each listItem is assigned an item type.I’ve chosen this model so that when a user updates the name or categories associated with an itemType, all listItems assigned that itemType will also be updated.My View\nHere is a simplified version of my views:ListsViewListDetailViewProblem\nI am getting the following error whenever I try to add a new listItem for an itemType that already exists:\nObject is already managed by another Realm. Use create instead to copy it into this Realm.. If I add a listItem for a new itemType, it works as expected.I have tried the following and the error becomes Cannot modify managed RLMArray outside of a write transaction.:I have also tried opening a write transaction against the realm referenced by the list, and I get an error about the primary key of the itemType not being unique.When I inspect the itemType and list they both contain a reference to the same realm object. So I am not sure what may be going wrong.Does anyone have any ideas?",
"username": "Campbell_Bartlett"
},
{
"code": "func onInsertItem(name: String, listPrimaryKey: String) {\n let realm = try! Realm()\n guard let list = realm.object(ofType: List.self, forPrimaryKey: listPrimaryKey) else {\n return\n }\n\n let listItem = ListItem()\n let arrayItem = realm.objects(Household.self).compactMap { $0.itemTypes }.flatMap { $0 }\n if let itemType = arrayItem.first(where: { $0.name == name }) {\n listItem.itemType = itemType\n } else {\n // No item type exists for this name, so create a new one\n let itemType = ItemType()\n itemType.name = name\n listItem.itemType = itemType\n }\n\n try! realm.write({\n list.listItems.append(listItem)\n })\n }\n",
"text": "The error is correct, because how our SwiftUI Helpers are implemented we use a different instance of Realm (not different realm file) to do any operation and in this case the error shows that you are trying to copy a managed object by other Realm (ItemType) which is been used by other realm and this is true.\nI recommend to do everything on a realm write transaction.\nI tested the following code and it is working.",
"username": "Diana_Maria_Perez_Af"
},
{
"code": "@Environment(\\.realm) var realm: Realm HouseholdsView()\n .environment(\\.realm, realm) // Where realm is coming from autoOpen\nfunc onInsertItem(name: String, listId: String) {\n guard let realm = list.realm?.thaw() else {\n return\n }\n\n guard let list = realm.object(ofType: List.self, forPrimaryKey: listId) else {\n print(\"No list found with PK \\(listId)\")\n return\n }\n\n let listItem = ListItem()\n let arrayItem = realm.objects(Household.self).compactMap { $0.itemTypes }.flatMap { $0 }\n if let itemType = arrayItem.first(where: { $0.name == name }) {\n listItem.itemType = itemType\n } else {\n // item type does not exist\n let itemType = ItemType();\n itemType.name = name\n listItem.itemType = itemType\n }\n\n try! realm.write {\n list.listItems.append(listItem)\n }\n }\n",
"text": "Thanks Diana,That solution only works for an on device Realm right? I am trying to use a synced realm.I’ve tried accessing the realm via @Environment(\\.realm) var realm: Realm but all my queries are returning zero results.I am injecting the realm into the environment via the standard process:I’ve also tried this:and it almost works. I can insert as many items as I want into my list, but it always falls into the “item type does not exist branch”. I’m assuming this is because the Realm I’m using is attached to the list object, so it doesn’t contain the itemTypes which are in a different instance of Realm?",
"username": "Campbell_Bartlett"
},
{
"code": "$groups.append(Group())$groups.append(Group(name: \"Group Name\"))..environment(\\.realmConfiguration, app.currentUser!.configuration(partitionValue: \"user=\\(app.currentUser!.id)\"))_partition",
"text": "I’m experiencing this issue as well.The guide here says to append a new item to the Observed results like:$groups.append(Group())The issue seems to be that if you want to initiate any values, there isn’t a functioning way to do that ( or at least the documentation / example is not complete).I’ve tried to pass in params for initiation like:$groups.append(Group(name: \"Group Name\"))but that is not possible.Alternatively I have tried to initiate the object first, set the initial parameters and then append, but that gives the error @Campbell_Bartlett has described here.It’s as if initiating the object is automatically assigning it to some local Realm even when using a synced Realm, so then it can’t append it as demonstrated in the example.Even if I pass in the correct environment in SwiftUI to the view with a ..environment(\\.realmConfiguration, app.currentUser!.configuration(partitionValue: \"user=\\(app.currentUser!.id)\")) and establish the correct default _partition in the Realm Model, it is still creating the object in some other Realm and throwing the error.I know there has got to be a best practice for creating new objects in SwiftUI that allows for initial parameters to be passed in, it is just very unclear how to do this.",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "@Kurt_Libby1 I cannot reproduce your issue. Passing params to a constructor on an object should not involve a Realm unless there is one involved inside the constructor. Could you open a GitHub issue showing your usage?",
"username": "Lee_Maguire"
},
{
"code": "@ObservedResults(Plan.self) var plans\n@StateRealmObject var location: Location\n...\nif let plan = plans.filter(\"location._id = %@\", location._id).first {\n // show plan\n} else {\n // add location to plan like this:\n $plans.append(Plan(_id: UUID(), _partition: \"user=\\(app.currentUser!.id)\",location: self.location))\n\n // because if I do it as prescribed like this:\n $plans.append(Plan())\n // it works, but there is no way to add the location, so there's just an empty Plan object added to the database\n}\nlocationvar@ObservedRealmObject@StateRealmObject",
"text": "Hey Lee.I’m probably saying it wrong if you think it shouldn’t involve a Realm.I’m talking about setting properties for a Realm object and then adding it in the prescribed way.Your example is about adding a Group Realm object by appending Group() with the @ObservedResults property wrapper and the using .append()Your example is extremely basic. For anything more complex, there is no example and that was my comment – either the example is incomplete or there isn’t a good way to do this.In my app I am checking to see if there is a plan with a location like this:This doesn’t work.I’ve tried to pass in the location as a var, as @ObservedRealmObject, and as @StateRealmObject as shown above. No matter what it always shows the error:Object is already managed by another Realm. Use create…I’m not super interested in opening a GitHub issue. I’m more interested in documentation and/or examples that goes beyond the most basic usage. Again, surely there is a way to do this, but either I’m misunderstanding how to do this in Realm, or I’m not saying it correctly. Either are entirely possible.",
"username": "Kurt_Libby1"
},
{
"code": "Realm.create(_, value:, update:)let realm = plans.thaw()!.realm!\ntry! realm.write {\n $plans.append(realm.create(Plan.self, value: Plan(_id: UUID(), _partition: \"user=\\(app.currentUser!.id)\",location: self.location), update: .modified)\n}\n\n",
"text": "@Kurt_Libby1 I see what you mean, this is for sure something we can improve on. My first idea to resolve the issue would be to use the Realm.create(_, value:, update:) API. The usage would be like so:",
"username": "Lee_Maguire"
},
{
"code": "update: .modified@ObservedResults(Tag.self) var tags\n@ObservedRealmObject var room: Room\n\n// Room() has a property defined as @Persisted var tags = RealmSwift.List<Tag>()\n\nForEach(tags) { tag in\n Text(tag.name)\n .onTapGesture {\n $room.tags.append(tag) \n }\n}\nuser=\\(app.currentUser!.id)",
"text": "I solved my issue by adding the plan to the location when the location was created.( I would love to know where in the documentation your approach is presented because I’ve never seen this update: .modified before. )Also, I’m again running into the same error today in another view where I’m loading a bunch of tags and trying to add them to a list of tags on another object.Simple example is like this:I’m getting this same error:Object is already managed by another realm.What I don’t understand is this:I only want to use one realm. It’s synced. It’s a partition set with the logged in user’s id as user=\\(app.currentUser!.id). And every interaction with realm should be with this realm. Every part of my object model has this _partition set in the definition of the model.So why are there other realms? How can I make it so that there are no other realms and every CRUD is always on this one realm with this one partition?And again, I can’t find any documentation that helps explain this in a clear enough way. For the sake of presenting realm as simple, it seems like anything beyond the examples just break the functionality.I know this probably isn’t the case, but that is how it seems.Any insight and links to additional documentation would be super helpful.Thanks.",
"username": "Kurt_Libby1"
},
{
"code": "@ObservedResults(Tag.self) var tags@ObservedRealmObject var room: RoomForEach(tags) { tag in\n Text(tag.name)\n .onTapGesture {\n $room.tags.append(room.realm.objects(Tag.self).first(where: {$0._id == tag._id}) \n }\n}\n",
"text": "Hi,I think the issue you are having is caused by @ObservedResults(Tag.self) var tags realm instance being different from @ObservedRealmObject var room: Room realm. I had the same problem and something like this worked from me:I hope this will help you.Best regards,\nHoratiu",
"username": "horatiu_anghel"
},
{
"code": "$room.tags.append(room.realm!.objects(Tag.self).first(where: {$0._id == tag._id})!)\n",
"text": "Thanks Hortiu,But that didn’t work.I had to add some bangs for the optionals, but I got the same “Object is already managed by another Realm” error.Here is the code with the bangs to make the complier happy:So this doesn’t work, AND it still doesn’t explain my main questions.Why is there another realm? How can I make sure that there is only ever one realm?I want my users to log in and always have the same realm. Always.I have other apps where I go back and forth between multiple realm partitions, and I though this implementation would be easy, but it is definitely not.",
"username": "Kurt_Libby1"
},
{
"code": "$room.tags.append(room.realm?.objects(Tag.self).first(where: {$0._id == tag._id}) ?? tag)",
"text": "Does your Tag type has other realm properties? Realm raises the same exception when you append a tag object that has other realm objects as its properties. It also sees those nested objects coming from a different realm.Regarding the different realms, I don’t know why. I faced the same problems and I noticed that, in my case, realm instances were different.About the realm query, you could let swift do the unwrapping:$room.tags.append(room.realm?.objects(Tag.self).first(where: {$0._id == tag._id}) ?? tag)Horatiu",
"username": "horatiu_anghel"
},
{
"code": " let thawedRoom = room.thaw()!\n let realm = thawedRoom.realm!\n let thawedTag = tag.thaw()!\n \n do {\n try! realm.write {\n thawedRoom.tags.append(thawedTag)\n }\n }\n",
"text": "My Tag type did have other Realm properties. So this morning I restructured the app so that it doesn’t, but that didn’t solve my issue. Still got the same Object is already managed by another Realm error.Spent a couple of hours trying different things and stripping it down to a basic app that you can find here.Ultimately I discovered that the object with the list as well as the object being added to the list both have to be thawed before it works:Hopefully the Realm team can figure out a way for this to be clearer and for the implicit writes to work beyond extremely basic examples. And then also add to the documentation how to get this to work with only one Realm so we can stop running into these “another Realm” errors when 99% of the time we only ever want to deal with one Realm.",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "I agree with @Kurt_Libby1It is a bit frustrating this confusion about “managed by another realm”.The default behavior must be a “single realm”. I don’t want another Realm. Please “Mr. realm swift” , just use the single existing Realm.“let realm = try! Realm()”Why should I care about “another realms” if I just declared a single realm instance ?Can someone from mongo staff give some guidance ?",
"username": "Robson_Tenorio"
},
{
"code": "let realm = try! Realm()",
"text": "@Robson_TenorioDo you have a specific use case? This:let realm = try! Realm()Doesn’t create “another realm” it accesses the same Realm that everything else on that thread accesses. If you’re on a different thread however, say a background thread, then it would (could) be “another realm”.This is especially try if you’re using sync’ing and are writing on the UI thread as Sync writes on a background thread so there would be “another realm” in that case.Got some a troublesome code example you can post?",
"username": "Jay"
},
{
"code": "",
"text": "While Robson may have some troublesome code to post, I will say that the continued frustration with using any synced Realm is just how hard it is to do exactly what the complaint/agreement is here.Rather than posting “troublesome code”, it would be infinitely more beneficial for MDB to provide code examples of how to easily create one synced realm and always use that.The back and forth on every screen in every way that I handle writing and reading from Realm seems to behave in completely unpredictible ways as I’m constantly thawing or needing to find clever places to add writes just to “make sure” the object isn’t managed by another Realm.As I shared before and was not provided an answer, I will share again:I want my users to log in and always have the same realm. Always.There is no clean example of this that I know of in the documentation, nor as a response to my earlier post six months ago. Rather than seeing an answer to my or Robson’s troublesome code, I would sincerely love a straightforward answer about how the above quoted line should be achieved.As SJ would have said, maybe we’re just “holding it wrong.”That’s fine. I’ll admit to that. I just want to be shown how to hold it.",
"username": "Kurt_Libby1"
},
{
"code": "“let realm = try! Realm()”",
"text": "@Kurt_Libby1I think you missed my point, or perhaps it was too vague. The post referenced “a single realm” and the code presented“let realm = try! Realm()”does exactly that - a single realm. If you call that 900000 times in your app, you’re still going to have “a single realm”When there isn’t a “single realm” is when a Realm is instantiated on different threads, say in the UI thread and then again in a background thread - and then objects are addressed from those “different realms”. I provided an example of that in my response above.My query was in an effort to assist - requesting a short verifiable example to see what that user is experiencing to try to match it up with your initial code is not unreasonable.If you review the TaskTracker Sync sample app, you’ll find that when a users logs in they always have the same Realm. Always.The goal is to identify why your code behaves differently and IMO more data is needed.",
"username": "Jay"
},
{
"code": "final class ListItem: Object, ObjectKeyIdentifiable {\n @Persisted var _id: String = UUID().uuidString\n @Persisted var itemType: ItemType?\n @Persisted var quantity: String?\n @Persisted var note: String?\n}\nObject is already managed by another Realm. Use create instead to copy it into this Realm.let listItem = ListItem() // Create unmanaged object\nlistItem.itemType = itemType // We are setting a managed object to a property of an unmanaged object\n$list.listItems.append(listItem) // We append the unmanaged object\nlet realm = list.thaw()!.realm!\ntry! realm.write {\n let listItem = realm.create(ListItem.self, value: [\"itemType\": itemType], update: .modified)\n $list.listItems.append(listItem)\n}\n",
"text": "So the issue here is when we append an object with a nested managed object, for example for the following model from the initial post.we get an Object is already managed by another Realm. Use create instead to copy it into this Realm. error for the following operationThe reason for this, is that when the write is done, we only check if TopLevel object has a realm associated but because in this case it doesn’t (is unmanaged) we don’t thaw it to use the same realm as the write, we don’t check any nested if any nested properties has a managed object.\nFor this to work, we would have to verify not only if the TopLevel Object has an associated realm, but if any set property has it, and make sure they use the same realm.\nIn conclusion, we will have to do a recursive check for sub-objects when users try to add an unmanaged object.There is a workaround for this, you can use create, like the following code.This is something we clearly have been taking a look, and I just opened a GitHub issue to track any progress SwiftUI Property Wrappers Improvement · Issue #7942 · realm/realm-swift · GitHub on this.",
"username": "Diana_Maria_Perez_Af"
},
{
"code": "",
"text": "Thanks Diana.This is how I do it as well. I have that thaw workaround all over my apps to account for this bug. I’ll watch the issue you shared for sure. I think the frustrating thing here is what you pointed out at the beginning of your post and still is the spirit of this discussion:Once a user logs in, 99% of the time we only ever want to use their realm. So instead of creating “unmanaged” objects, I would love it if every newly created object was managed by that user’s realm unless explicitly stated otherwise. We’re adding the same bit of code all over the app for all different kinds of writes and it’s exactly the clunky boilerplate that is antithesis to the spirit of Realm.Very pleased to know that this is being addressed and will be resolved soon. --KurtPS: I do have one app where I switch the Realm environments by passing it in the .environment() and I have always assumed that passing the Realm in the environment will make any objects created in that View managed by that Realm, but it does not seem to be the case. This has added to the confusion as well.",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Hey Diana,Thank you for getting back to me, I am glad the team is looking to simplify things so others don’t run into the same problem.I appreciate that you have taken the time to update me about such an old issue!",
"username": "Campbell_Bartlett"
},
{
"code": "",
"text": "Is this specifically an issue in SwiftUI only?It seems to work in Swift without that error message (if I am duplicating the issue correctly, which is questionable lol)",
"username": "Jay"
}
]
| Realm Sync + SwiftUI: Object is already managed by another Realm. Use create instead to copy it into this Realm | 2022-02-18T08:50:20.354Z | Realm Sync + SwiftUI: Object is already managed by another Realm. Use create instead to copy it into this Realm | 8,933 |
null | [
"queries"
]
| [
{
"code": "{\n _id: \"631835657700f008636fe395\",\n nameEn: \"new make 1 en\",\n nameAr: \"new make 1 ar\",\n makeLogo: \"new make 1 logo\",\n models: [\n {\n _id: \"81c69655-06f9-4cb8-a5da-6e401e30dd54\",\n nameEn: \"new model 3 en\",\n nameAr: \"new model 3 ar\",\n modelHide: true,\n trims: [\n {\n _id: \"710ad638-de13-4aec-9397-922415c6016c\",\n nameEn: \"new trim 3 en\",\n nameAr: \"new trim 3 ar\",\n bodyID: [\"6235832dde5b632b481db50b\"],\n engineSize: [3000, 5000],\n },\n{\n _id: \"710ad638-de13-4aec-9397-922415c60121c\",\n nameEn: \"new trim 3 en\",\n nameAr: \"new trim 3 ar\",\n bodyID: [\"6235832dde5b632b481db50b\"],\n engineSize: [3000, 5000],\n },\n ],\n },\n ],\n};\n",
"text": "",
"username": "Mehmood_Zubair"
},
{
"code": "",
"text": "Please provide an updated document based on the sample input document you shared. Do you want to update all trims? Or a single trim, in this case we need to know how you identify the trim to update, by its _id, its nameEn… We also need to know how you identify the containing model.Without knowing the exact update you want to perform it is hard to help.If you tried anything, please share what you tried and indicate how it fails. This way we will avoid pursuing a solution that you already rejected. It might also be a good starting point.",
"username": "steevej"
}
]
| Update the nested array object i want to update trim data | 2022-09-07T11:45:51.249Z | Update the nested array object i want to update trim data | 970 |
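The thread above never converged on a concrete query. One possible way to update a single nested trim, assuming the documents live in a collection named makes and the target model and trim are identified by their _id values (neither the collection name nor the matching criteria are stated by the original poster), is a filtered positional update in mongosh:

```js
// Hypothetical sketch: set new names on one trim, matched by its _id,
// inside the models array of a single make document.
db.makes.updateOne(
  { _id: "631835657700f008636fe395" },
  {
    $set: {
      "models.$[m].trims.$[t].nameEn": "updated trim en",
      "models.$[m].trims.$[t].nameAr": "updated trim ar"
    }
  },
  {
    arrayFilters: [
      { "m._id": "81c69655-06f9-4cb8-a5da-6e401e30dd54" },
      { "t._id": "710ad638-de13-4aec-9397-922415c6016c" }
    ]
  }
)
```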
null | [
"react-native"
]
| [
{
"code": "Error:\n\nClient attempted a write that is outside of permissions or query filters; it has been reverted (ProtocolErrorCode=231)\nDetails:\n{\n \"Contact\": {\n \"0.8426483083351229_synced\": \"cannot write to table \\\"Contact\\\" before opening a subscription on it\"\n }\n}\nFunction Call Location:\nIE\nRemote IP Address:\n213.35.181.75\nSDK:\nios vRealmJS/10.19.1\nPlatform Version:\n15.4\n",
"text": "This is a follow-up to this github thread: Realm with sync enabled CRASHES the whole app when \"path\" is specified · Issue #4659 · realm/realm-js · GitHubLong story short, we have an app where user can create data before they’ve actually signed in to their account. On signing in, this locally created data has to sync with the server, unless the user has specified that they don’t want the data to sync. In that case, the data should be stored locally, until said otherwise.In Github, a solution was offered for switching out the realms with writeCopyTo and conditional config for RealmProvider. Unfortunately, I am running into problems with setting up a subscription. This is the error that I’m getting on logging in and switching realms:It seems that the subscription doesn’t exist at the time and so the write gets reverted. Can you help us by pointing out a way to get past this?Also, is there any way to achieve what we’re trying to achieve - partially syncing data from realm to cloud? While the user has turned off data syncing, we still want to sync some information about the account.My code demo: GitHub - soliloquyx/realm-sync-demo-1",
"username": "Olaf_Kraas"
},
{
"code": "",
"text": "Unfortunately it is currently not possible to convert a local Realm to a Flexible Sync Realm (see the Note in our documentation). You will have to copy the objects manually - or always start with a Flexible Sync Realm.",
"username": "Kenneth_Geisshirt"
},
{
"code": "",
"text": "Thanks for the quick reply!What about syncing data partially, based on users selection to turn off cloud syncing in our app? Can this be achieved with flexible sync permissions/rules or do we need to have 2 realm instances (local/synced) in use?",
"username": "Olaf_Kraas"
}
]
| Switching from local realm to a synced realm on login | 2022-09-08T07:45:11.205Z | Switching from local realm to a synced realm on login | 1,991 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "HI Friends. I need so simple mongodb query. I would be happy if you help:I have collection:name, salary\njohn 2000\njack 1900\nphill 2000\nlamar 1500I need that result :name, salary\njohn 2000\nphill 2000Simply list collections where salary is equal max.",
"username": "Bago_Bago"
},
{
"code": "",
"text": "I am not sure this is the best approach but I would start experimenting with",
"username": "steevej"
},
{
"code": "db.employees.aggregate(\n [{\n $facet: {\n salary: [{\n $group: {\n _id: null,\n max: {\n $max: \"$salary\"\n }\n }\n }],\n employees: [{\n $match: {}\n }]\n }\n },\n {\n $project: {\n match: {\n $filter: {\n input: \"$employees\",\n as: \"employees\",\n cond: {\n $eq: [\"$$employees.salary\", {\n $first: \"$salary.max\"\n }]\n }\n }\n }\n }\n }\n ]\n)\nemployees",
"text": "Something like the following would work, but probably not the most efficient way to do things:You will want to adjust the employees facet to filter out any records you don’t want to work with so as to not push a lot of unnecessary data through the pipeline.",
"username": "Doug_Duncan"
},
{
"code": "salary$group$maxmongosh// Set up test data\ndb.salaries.insert([\n\t{name:'john', salary: 2000},\n\t{name:'jack', salary: 1900},\n\t{name:'phill', salary: 2000},\n\t{name:'lamar', salary: 1500}\n])\n// Add index\ndb.salaries.createIndex({salary:1})\n// Find the max salary value\nvar maxSalary = db.salaries.find(\n {}, // query matches all documents\n {_id:0, salary:1} // projection limited to the indexed field to make this a covered query\n).sort({salary:-1}).limit(1).next()\n.explain(\"executionStats\")find()> maxSalary\n{ salary: 2000 }\n// Find all documents with max salary value\n> db.salaries.find(maxSalary)\n[\n {\n _id: ObjectId(\"631962b06fc808d36caace54\"),\n name: 'john',\n salary: 2000\n },\n {\n _id: ObjectId(\"631962b06fc808d36caace56\"),\n name: 'phill',\n salary: 2000\n }\n]\n.explain(\"executionStats\") nReturned: 2,\n executionTimeMillis: 0,\n totalKeysExamined: 2,\n totalDocsExamined: 2,\n$sort$limit$lookup",
"text": "Welcome to the MongoDB Community @Bago_Bago !The maximum salary value can be efficiently found by sorting in descending order using an index, which would be a better approach than iterating the whole collection to find a max value via aggregation accumulators like $group or $max.You can explain your queries to see how they are processed including the ratio of index keys and documents examined compared to results returned.Following is an example in the MongoDB shell (mongosh):Excerpt from .explain(\"executionStats\") output for this query:nReturned: 1,\nexecutionTimeMillis: 0,\ntotalKeysExamined: 1,\ntotalDocsExamined: 0,This query only had to check a single index key (the max value based on the index order) and returned the matching value from the index without fetching any documents (which is called a covered query).A quick check to confirm the value returned is suitable to be passed as the query parameter for find()Excerpt from .explain(\"executionStats\") output for this query:The ratio of index keys and documents examined is 1:1 with the number of result documents returned.You could implement the same pattern in an aggregation query with $sort followed by $limit to find the maximum value and then a $lookup to fetch the matching documents, but will likely have to add some additional aggregation stages to get the desired result document format.Regards,\nStennie",
"username": "Stennie_X"
}
]
| List collections having max value in field | 2022-09-07T09:13:00.408Z | List collections having max value in field | 2,697 |
null | [
"python",
"crud"
]
| [
{
"code": "_id:630e026211cb34947c85a7a5\nuser:\"akhil\"\nstatus:null\nold_data:null\nnew_data:2022-08-30T17:58:18.551+00:00\n\n\n_id:630e026211cb34947c85a7a6\nuser:\"sandeep\"\nstatus:null\nold_data:null\nnew_data:2022-08-30T17:58:18.551+00:00\n_id:630e026211cb34947c85a7a6\nuser:\"sandeep\"\nstatus:modified\nold_data:migration was scheduled at 2022-08-30T17:58:18.551+00:00\nnew_data:migration now scheduled at \"2022-08-26 16:26:00\"\n audit_details.update_many({ \"user\": \"sandeep\" },\n {\n \"$currentDate\": {\n \"last_modified\": True,\n },\n \"$set\": {\n \"status\": \"modified\",\n \"old_data\": {\"$concat\":[\"Migration Task was scheduled at \", \"$new_data\"]},\n \"new_data\" : {\"$concat\": [\"Migration Task now scheduled at \", request.data['updated_time']]}\n\n\n }\n\n }\n\n)\n{\n \"_id\": {\n \"$oid\": \"630e026211cb34947c85a7a5\"\n },\n \"user\": \"sandeep\",\n \"status\": \"modified\",\n \"old_data\": {\n \"$concat\": [\n \"Migration Task for\",\n \"$new_data\"\n ]\n },\n \"new_data\": {\n \"$concat\": [\n \"Migration Task for\",\n \"2022-08-26 16:26:00\"\n ]\n },\n \"last_modified\": {\n \"$date\": {\n \"$numberLong\": \"1661917054163\"\n }\n }\n}\n",
"text": "I have created a mongodb collection in python django.The collection name is audit_details and it consists of following datanow,i want the new_data value to be assigned to “old_data” and the “new_data” will take the “updated_time” value taken from API post request(I am using django restframework and the data i am sending as post request is {“action”:“modify”,“user”:\"sandeep,“updated_time”:“2022-08-26 16:26:00”})i want the output to be like belowI have tried the following:my output is like below,which is not correct:",
"username": "sai_sankalp"
},
{
"code": "newValuefilter = { \"user\": \"sandeep\" }\n\nnewValue = [ { \"$set\":{\"old_data\":'$new_data', \"new_data\": datetime.datetime.now(), \"status\": \"modified\"} } ]\n\ncollection.update_one(filter, newValue)\n \nfor record in cursor:\n print (record)\n{'_id': ObjectId('631982b43831f60dfdcaaee3'), 'user': 'sandeep', 'status': 'modified', 'old_data': datetime.datetime(2022, 9, 8, 11, 19, 19, 356000), 'new_data': datetime.datetime(2022, 9, 8, 11, 55, 0, 291000)}\n\n{'_id': ObjectId('631982c13831f60dfdcaaee4'), 'user': 'akhil', 'status': None, 'old_data': None, 'new_data': datetime.datetime(2022, 9, 8, 11, 19, 37, 149000)}\nnew_dataold_data",
"text": "Hi @sai_sankalp and welcome to the community!!I believe it’s possible to achieve what you need by using the aggregation pipeline as the update operator using update_one, update_many, or find_one_and_update. This feature was added in MongoDB 4.2.Note that newValue is a list instead of a dictionary, which signifies that this is an aggregation pipeline instead of a simple update document.Please refer to the documentation for parameters of update query. which explains the same.As an example, using pymongo:\nThe query created a filter and the newValue basically contains the aggregation stage which sets the updated values.The output for the above query would look like:One suggestion I would like to make from the example you posted is that you might want to rethink the use of a string combined with date datatypes in your new_data and old_data fields. If these two fields are supposed to describe dates, I would recommend you to keep the date datatype, which will make it easier to manipulate in the future.Please let me know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi @Aasawari\nThanks for the detailed explanation. It was really helpful",
"username": "sai_sankalp"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Assigning a field value to another field in a document | 2022-08-31T04:03:22.068Z | Assigning a field value to another field in a document | 4,642 |
null | []
| [
{
"code": "{acc_id:'A23BCXY',\nacc_open_date: '2022-05-21',\nacc_close_date:'9999-12-31'}\nflag: 'N'{acc_id:'A23BCXY_20220521',\nacc_open_date:'2022-05-21',\nacc_close_date:'2022-6-30',\nflag:'N'}\n}\n{flag:'N'}acc_id(A23BCXY_20220521->A23BCXY)acc_idacc_id",
"text": "Hi all,I have a collection where the key field isonce the account is closed acc_id is getting appended with acc open date(not with close date) and there is flag: 'N' added when the account is closed.\nThe above account once closed will look like this:I want to find all closed accounts and their corresponding open account if any (consider case of a recycled account or maybe after the account is closed and opened again by mistake)Now I want to loop through all closed accounts {flag:'N'}, when I will get acc_id, will use substring to remove appended dates (A23BCXY_20220521->A23BCXY). Now I will have only acc_id without an appended date.\nHow I can use this acc_id to loop the whole collection again and check if an open account still exists?",
"username": "College_days_N_A"
},
{
"code": "acc_idflag:Nacc_id_<closed_date>acc_id_idacc_idacc_idflag:Nacc_idacc_iddb.test.aggregate([\n {$addFields:{\n id:{$regexFind:{\n input:'$acc_id',\n regex:'[^_]+'\n }}\n }},\n {$group:{\n _id:'$id.match',\n acc_id:{$push:'$acc_id'},\n count:{$count:{}}\n }}\n])\n",
"text": "Hello @College_days_N_A ,I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, then there are a couple of things that I may need clarification on:I believe the mutation is problematic, since it’s adding redundancy, making it very hard to check for existing acc_id, and you may get duplicate acc_id which is not ideal if it’s supposed to be unique.Have you considered:In case you cannot improve your schema, the query below might be helpful to achieve your goal.\nNote that this is untested so you might want to do your own testing to ensure it works with your data.However I would lean toward modifying the workflow to make the data easier to work with in the future.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Querying whole collection again | 2022-08-18T16:17:19.331Z | Querying whole collection again | 963 |
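As a sketch of how the closed accounts above could be paired with any still-open account in one pass, assuming the documents live in a collection named accounts and that an open account keeps the bare acc_id with no date suffix (both assumptions based on the description, not stated explicitly):

```js
db.accounts.aggregate([
  { $match: { flag: "N" } },                                          // closed accounts only
  { $addFields: {
      baseId: { $arrayElemAt: [ { $split: [ "$acc_id", "_" ] }, 0 ] } // strip the appended date
  } },
  { $lookup: {                                                        // self-join on the bare acc_id
      from: "accounts",
      localField: "baseId",
      foreignField: "acc_id",
      as: "openAccount"
  } },
  { $match: { "openAccount.0": { $exists: true } } }                  // keep closed accounts that still have an open one
])
```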
null | [
"aggregation",
"queries"
]
| [
{
"code": "db.customer.aggregate()\n .match\n (\n {\n $and:\n [\n {\"audiences\":{$size:1}},\n { $$NOW: \n {\n lt:\n [\n {\n $addDate: \n { \n startDate: \"audiences.startDate\" ,\n unit: \"day\",\n amount: \"audiences.audience.membershipDuration\"\n }\n }\n ]\n }\n }\n ]\n }\n )\n .limit(100)\n{\n \"_id\":{\n \"$oid\":\"62e932661c0f2e018fe6be44\"\n },\n \"buCode\":\"LMIT\",\n \"clientNumber\":\"222222\",\n \"audiences\":[\n {\n \"audience\":{\n \"audienceId\":\"45379\",\n \"name\":\"OFFER_Marketing_Cible_Parcours_Confort_SCORE_1\",\n \"createdDate\":null,\n \"updatedDate\":null,\n \"maximumNumberOfUseDuringMembership\":1,\n \"membershipDuration\":1,\n \"category\":\"LOYALTY\",\n \"description\":\"test longeurs chaines hsdfsqdfsqfdhsqkdjf sdfsjdfsqdfkl sdfSDFSDF SDFSQDFSQ sdfsqdfsq\",\n \"buCode\":\"LMIT\"\n },\n \"startDate\":{\n \"$date\":\"2022-07-28T11:59:33.000Z\"\n },\n \"remainingNumberOfUse\":1,\n \"lastEntryDate\":{\n \"$date\":\"2022-08-02T14:19:19.106Z\"\n }\n }\n ]\n}\n",
"text": "Hi There,I want to create a pipeline aggregate by comparing current date with some date.\nBut my need is to operate addDate (day) to my date in datebase.Here y aggregate pipelineAnd here the data stored in mongodbThanks\nMo",
"username": "Mohamed_DIABY"
},
{
"code": "MongoServerError: PlanExecutor error during aggregation :: caused by :: $dateAdd requires startDate to be convertible to a date\n$unwindaudiences$dateAdd",
"text": "Hi Mo,I’m assuming you’re having an error, but you don’t state what that was. Based on playing around with the code you posted above I was getting the following error:If that’s what you’re running into, I was able to resolve that by $unwinding the audiences array. It seem like the array is causing issues in the $dateAdd call.If you’re having other issues let us know what they are so we can more quickly help you out.",
"username": "Doug_Duncan"
},
{
"code": "{\n\t\"message\" : \"unknown top level operator: $$NOW. If you have a field name that starts with a '$' symbol, consider using $getField or $setField.\",\n\t\"ok\" : 0,\n\t\"code\" : 2,\n\t\"codeName\" : \"BadValue\",\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1660636531, 8),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : \"TglED2lvQswahQMyU5G7aIEsQBA=\",\n\t\t\t\"keyId\" : NumberLong(\"7067609856571604994\")\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1660636531, 8)\n}\n",
"text": "Hi Doug_Duncan,In my mongodb collection, startDate is a date.\nHere’s the message error in my case:",
"username": "Mohamed_DIABY"
},
{
"code": "startDate: \"$audiences.startDate\"\nstartDate: \"audiences.startDate\"\nfieldName : ValueOrComparator\n",
"text": "If you look at the documentation of $addDate you will see that both startDate and amount requires an <expression>. As such to access your field values you need to use the dollar sign. So it should berather thanThe syntax for $match isThe $$NOW variable is not a field name. I am not too sure about this one but I think you need to be inside $expr to be able to use it.",
"username": "steevej"
},
{
"code": "$addFields$$NOW[\n {\n \"$addFields\": {\"currentTime\": \"$$NOW\"}\n }\n]\n$$NOW",
"text": "The $$NOW variable is not a field name. I am not too sure about this one but I think you need to be inside $expr to be able to use it.You could also use $addFields and set a new field to the value of $$NOW:Which one is the best depends on if the value of $$NOW is needed only once in the pipeline.",
"username": "Doug_Duncan"
},
{
"code": "[\n {\n \"$addFields\": {\"currentTime\": \"$$NOW\"}\n }\n]\ndb.customer.aggregate()\n .match\n (\n {\n $and:\n [\n {\"audiences\":{$size:1}},\n { $$NOW: \n {\n lt:\n [\n {\n $addDate: \n { \n startDate: \"audiences.startDate\" ,\n unit: \"day\",\n amount: \"audiences.audience.membershipDuration\"\n }\n }\n ]\n }\n }\n ]\n }\n )\n .limit(100)\n",
"text": "Hi Doug,Thanks for the reply.\nbut i don’t know the correct syntax to use in my query belowHere’s my queryI need help\nThanks",
"username": "Mohamed_DIABY"
},
{
"code": "$dateAdd$matchdb.customer.aggregate(\n [\n {\n $unwind: \"$audiences\"\n },\n {\n $addFields: {\n qualify: {\n $cond: {\n if: {\n $gte: [\n {\n $dateAdd: {\n startDate: \"$audiences.startDate\",\n unit: \"day\",\n amount: \"$audiences.audience.membershipDuration\"\n }\n },\n \"$$NOW\"\n ]\n },\n then: true,\n else: false\n }\n }\n }\n },\n {\n $match: {\n qualify: true\n }\n }\n ]\n)\n$match",
"text": "I was not able to get $dateAdd to work in a $match stage, but I could have been doing something wrong.Something like this should work, or at least be a starting place for you:Note that you will want to put a $match to filter out any documents that you don’t want to work with to lessen the load on the server.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Thank you to much @Doug_Duncan",
"username": "Mohamed_DIABY"
}
]
| Query with Date operation | 2022-08-12T08:40:43.506Z | Query with Date operation | 2,561 |
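For reference, on MongoDB 5.0 or newer the same comparison can also be expressed directly in a $match stage with $expr, assuming the audiences array has been unwound first. This is a sketch only, not tested against the original data:

```js
db.customer.aggregate([
  { $unwind: "$audiences" },
  { $match: {
      $expr: {
        $gte: [
          { $dateAdd: {
              startDate: "$audiences.startDate",
              unit: "day",
              amount: "$audiences.audience.membershipDuration"
          } },
          "$$NOW"                                  // keep docs whose membership has not yet expired
        ]
      }
  } },
  { $limit: 100 }
])
```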
null | []
| [
{
"code": "",
"text": "Hello, I’ve just started to explore Mongodb. I haven’t installed it yer, I’m just using the Mongo atlas cluster through the online shell.Please help we clear out a few questions:",
"username": "Praxiee"
},
{
"code": "",
"text": "With GridFS you may get any file type inside MongoDB.I do not know about Zoho.",
"username": "steevej"
},
{
"code": "mongoimportopenpyxl",
"text": "Welcome to the MongoDB community @Praxiee !As @steevej noted, you can use the GridFS specification to store files in MongoDB, but file contents will be opaque binary blobs as far as MongoDB drivers and server are concerned.For your specific questions:You can store any file type as a binary blob in MongoDB using GridFS and add optional metadata that might be useful for queries.MongoDB drivers and server do not have support for interpreting arbitrary file types or extracting metadata, but you should be able to find suitable libraries for common file formats in your favourite programming language.If your general use case is storing and interpreting the contents of binary files, you may also want to consider using an existing Document Management System which is designed for that purpose rather than building a bespoke solution.Official MongoDB tools like mongoimport support common text-based data exchange formats like JSON, CSV (comma delimited), and TSV (tab delimited).For other file formats, you should be able to find libraries and tools with read or write support. For example, openpyxl is a Python library to read/write Excel 2010 xlsx/xlsm/xltx/xltm files. Excel files can include more complex content like formulas and references, so I recommend exporting to a text-based format like CSV if you want to transfer data between environments.That depends on the sort of data you are trying to export from Zoho Analytics and how (or if) you plan to automate any integration.General approaches might be:export from Zoho Analytics to CSVscript something to sync data using the Zoho Analytics APIask for suggestions in the Zoho Analytics CommunityRegards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Importing datafiles | 2022-08-04T00:26:15.104Z | Importing datafiles | 1,449 |
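A minimal Node.js sketch of the GridFS approach mentioned above; the connection string, database name, and file path are placeholders, and the metadata fields are only an example:

```js
const fs = require("fs");
const { MongoClient, GridFSBucket } = require("mongodb");

async function uploadFile() {
  const client = await MongoClient.connect("mongodb+srv://<user>:<password>@<cluster>/");
  const db = client.db("files_demo");
  const bucket = new GridFSBucket(db, { bucketName: "documents" });

  // Stream any binary file (docx, xlsx, pdf, ...) into GridFS with optional metadata
  fs.createReadStream("./report.xlsx")
    .pipe(bucket.openUploadStream("report.xlsx", {
      metadata: { source: "zoho-analytics-export", type: "xlsx" }
    }))
    .on("finish", () => client.close());
}

uploadFile();
```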
null | []
| [
{
"code": "",
"text": "I have a question about sharding the collection.\nCurrently, tens of thousands of data are stored in the collection.\nI want to change this collection to shard collection.\nI wonder if the previously stored data is automatically distributed and stored if I change it to Shard collection.\nAnd if you change the collection where approximately 100GB of data is stored to shard, approximately how long will sharding be completed?",
"username": "rinjyu_N_A"
},
{
"code": "",
"text": "That is impossible to tell.It all depends on your configuration; how many machine, how many shards, network connections between machines, storage type.",
"username": "steevej"
}
]
| Shard Collection Settings | 2022-09-06T12:31:21.096Z | Shard Collection Settings | 1,483 |
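For anyone looking for the mechanics rather than the timing: an existing collection can be sharded in place, and the balancer then migrates existing data across shards in the background. A hedged mongosh sketch, with the database, collection, and shard key names invented purely for illustration:

```js
// Run against a mongos of the sharded cluster
sh.enableSharding("mydb")                                        // allow sharding for the database
db.getSiblingDB("mydb").orders.createIndex({ customerId: 1 })    // the shard key needs a supporting index
sh.shardCollection("mydb.orders", { customerId: 1 })             // existing documents are rebalanced over time
```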
null | [
"connecting",
"atlas-cluster"
]
| [
{
"code": "Can someone help? I am getting this error.\n\n\nMay Node me with you\nlistening on 3000\nMongoServerSelectionError: Server selection timed out after 30000 ms\n at Timeout._onTimeout (/Users/melle/Desktop/crud for demo/node_modules/mongodb/lib/sdam/topology.js:293:38)\n at listOnTimeout (node:internal/timers:559:17)\n at processTimers (node:internal/timers:502:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'ac-r8gp4rs-shard-00-01.adm9hgw.mongodb.net:27017' => [ServerDescription],\n 'ac-r8gp4rs-shard-00-00.adm9hgw.mongodb.net:27017' => [ServerDescription],\n 'ac-r8gp4rs-shard-00-02.adm9hgw.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-wpmlmp-shard-0',\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\n",
"text": "",
"username": "Jamelle_Mobley"
},
{
"code": "mongosh",
"text": "Hi @Jamelle_Mobley and welcome to the MongoDB Community forums! It looks like you’re attempting to connect to an Atlas cluster using code written in Node.js and are getting a timeout. Have you made sure that you have allowed access to the Atlas cluster from the machine that’s running the code? Can you connect via the mongosh command line tool?",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "I recreated everything, even a new account just incase I did something wrong. I even tried compass and still get the same problem. I added and deleted the IP address more than once and it is still not working.",
"username": "Jamelle_Mobley"
},
{
"code": "mongosh0.0.0.0/00.0.0.0/0",
"text": "Were you able to connect to your cluster via the mongosh command line tool? Did you temporarily allow access to 0.0.0.0/0 which will not block any access for testing? If you are not able to connect when 0.0.0.0/0, then it would seem that you would have a VPN or firewall blocking your access to the Atlas server.I can verify that the connection gets closed after 30 seconds from my machine, which is what I would expect as my IP address is not in the allow list either explicitly or in a range of allowed IPs. My test was against the old cluster which is still running, because you didn’t give any information about the newly rebuilt one.",
"username": "Doug_Duncan"
},
{
"code": "MongoServerSelectionError: Server selection timed out after 30000 ms\n at Timeout._onTimeout (/Users/melle/Desktop/crud for demo/node_modules/mongodb/lib/sdam/topology.js:293:38)\n at listOnTimeout (node:internal/timers:559:17)\n at processTimers (node:internal/timers:502:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'ac-jas6yzg-shard-00-02.3anzuip.mongodb.net:27017' => [ServerDescription],\n 'ac-jas6yzg-shard-00-01.3anzuip.mongodb.net:27017' => [ServerDescription],\n 'ac-jas6yzg-shard-00-00.3anzuip.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-2hvnvb-shard-0',\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\n",
"text": "It did not work with 0.0.0.0/0, but it did work with my iPhone, I used it as a hot spot. As of now I am not connected with a vpn, I guess it could be my router causing it. Because I have the IP address from my router.",
"username": "Jamelle_Mobley"
}
]
| Mongo server selection error | 2022-09-04T15:37:23.330Z | Mongo server selection error | 7,226 |
null | [
"crud",
"mongoose-odm"
]
| [
{
"code": "const DB_URL = 'mongodb+srv://*****:********@cluster0.ptvwi.mongodb.net/*****?retryWrites=true&w=majority'\nconst mongo = require('mongoose');\nmongo.connect(DB_URL);\n\nvar User = mongo.model('User', {\n\tid: Number,\n\tbalance: Number,\n\ttrees: Array,\n});\nuser.trees.push({ id: tree.id, poken: 0});\n",
"text": "",
"username": "qwert_yuiop"
},
{
"code": "",
"text": "Hello @qwert_yuiop, Welcome to MongoDB community forum,You can use such array operators as defined in the below docs,And you can follow the update query instruction in below docs,for a more specific solution please share the exact use case and requirement with details.",
"username": "turivishal"
},
{
"code": "Database_name.Collection_name.trees\n{\n \"0\":\"Object\" {\n \"id\": 0,\n \"poken\": 5\n },\n \"1\":\"Object\" {\n \"id\": 1,\n \"poken\":7\n }\n}\n",
"text": "Here I would like to change it to poken: 7 in the object with poken: 5",
"username": "qwert_yuiop"
},
{
"code": "",
"text": "It is not clear to me, could you please explain more, if possible, please add a valid JSON document.",
"username": "turivishal"
},
{
"code": "var doc = {\n \"_id\": 1,\n \"balance\": 100,\n \"trees\": [\n { \"id\": 0, \"poken\": 5},\n { \"id\": 1, \"poken\": 7}\n ]\n}\n$elemMatch$test> db.data.insertOne(doc)\n{ acknowledged: true, insertedId: 1 }\n\ntest> db.data.findOne()\n{\n _id: 1,\n balance: 100,\n trees: [ { id: 0, poken: 5 }, { id: 1, poken: 7 } ]\n}\n\ntest> db.data.updateOne(\n\n\t// $elemMatch finds docs containing an array with a matching element\n\t{\n\t\t\"trees\": { \"$elemMatch\": { \"poken\": 5 }}\n\t},\n\n\t// Positional operator $ is a placeholder for the first matching array element\n\t{\n\t\t\"$set\": { \"trees.$.poken\": 7 }\n\t}\n);\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\n\ntest> db.data.findOne()\n{\n _id: 1,\n balance: 100,\n trees: [ { id: 0, poken: 7 }, { id: 1, poken: 7 } ]\n}\n",
"text": "Hi @qwert_yuiop,If I understand correctly your example doc should look like:I think what you are looking for is the $elemMatch query operator which can be used to find the first matching element of an array and can be combined with the positional operator ($) in an update query.For example, using the MongoDB Shell:If that isn’t what you’re trying to achieve, please provide more specific details including a valid document to test with and an example of the desired result.The Array Update Operators documentation linked in @turivishal’s comment covers other available operators for updating arrays.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": " \t// $elemMatch finds docs containing an array with a matching element\n \t{\n \t\t\"trees\": { \"$elemMatch\": { \"poken\": 5 }}\n \t},\n \n \t// Positional operator $ is a placeholder for the first matching array element\n \t{\n \t\t\"$set\": { \"trees.$.poken\": 7 }\n \t}\n );\n",
"text": "Maybe a mistake somewhere, but it didn’t work\nmessage.user.updateOne(You are accessing db.data, and I somehow also need to access the database?",
"username": "qwert_yuiop"
},
{
"code": "<ref *1> Mongoose {\n connections: [\n NativeConnection {\n base: [Circular *1],\n collections: [Object],\n models: [Object],\n config: [Object],\n replica: false,\n options: null,\n otherDbs: [],\n relatedDbs: {},\n states: [Object: null prototype],\n _readyState: 1,\n _closeCalled: false,\n _hasOpened: true,\n plugins: [],\n _listening: false,\n _connectionOptions: [Object],\n client: [MongoClient],\n '$initialConnection': [Promise],\n name: 'admin',\n host: 'cluster0-shard-00-02.ptvwi.mongodb.net',\n port: 27017,\n user: 'itest**********',\n pass: '********',\n db: [Db]\n }\n ],\n models: {\n User: Model { User },\n },\n modelSchemas: {\n User: Schema {\n obj: [Object],\n paths: [Object],\n aliases: {},\n subpaths: [Object],\n virtuals: {},\n singleNestedPaths: {},\n nested: {},\n inherits: {},\n callQueue: [],\n _indexes: [],\n methods: {},\n methodOptions: {},\n statics: {},\n tree: [Object],\n query: {},\n childSchemas: [],\n plugins: [Array],\n '$id': 1,\n s: [Object],\n _userProvidedOptions: {},\n options: [Object],\n '$globalPluginsApplied': true,\n _requiredpaths: [],\n _indexedpaths: []\n },\n },\n options: { pluralization: true },\n _pluralize: [Function: pluralize],\n Schema: [Function: Schema] {\n reserved: [Object: null prototype] {\n populated: 1,\n remove: 1,\n validate: 1,\n toObject: 1,\n schema: 1,\n save: 1,\n modelName: 1,\n get: 1,\n isNew: 1,\n isModified: 1,\n init: 1,\n errors: 1,\n db: 1,\n collection: 1,\n removeListener: 1,\n listeners: 1,\n once: 1,\n on: 1,\n emit: 1,\n prototype: 1\n },\n Types: {\n String: [Function],\n Number: [Function],\n Boolean: [Function],\n DocumentArray: [Function],\n Embedded: [Function: SingleNestedPath],\n Array: [Function],\n Buffer: [Function],\n Date: [Function],\n ObjectId: [Function],\n Mixed: [Function],\n Decimal: [Function],\n Decimal128: [Function],\n Map: [class Map extends SchemaType],\n Oid: [Function],\n Object: [Function],\n Bool: [Function],\n ObjectID: [Function]\n },\n ObjectId: [Function: ObjectId] {\n schemaName: 'ObjectId',\n get: [Function (anonymous)],\n _checkRequired: [Function (anonymous)],\n _cast: [Function: castObjectId],\n cast: [Function: cast],\n checkRequired: [Function (anonymous)]\n }\n },\n model: [Function (anonymous)],\n plugins: [\n [ [Function (anonymous)], [Object] ],\n [ [Function (anonymous)], [Object] ],\n [ [Function], [Object] ],\n [ [Function (anonymous)], [Object] ]\n ]\n}\n",
"text": "It contains only the id of the users of my telegram bot\nmaybe it will help?",
"username": "qwert_yuiop"
},
{
"code": "async/awaitasync function yourMethodName() {\n await message.user.updateOne(...\n}\n",
"text": "Make sure you have wrapped the query in async/await method, to wait for the response.",
"username": "turivishal"
},
{
"code": "async function yourMethodName() {\n await message.user.updateOne(...\n}\n \t// $elemMatch finds docs containing an array with a matching element\n \t{\n \t\t\"trees\": { \"$elemMatch\": { \"poken\": 5 }}\n \t},\n \n \t// Positional operator $ is a placeholder for the first matching array element\n \t{\n \t\t\"$set\": { \"trees.$.poken\": 7 }\n \t}\n );}\n yourMethodName()\n",
"text": "async function yourMethodName() {\nawait message.user.updateOne (Got an error\nThe dollar ($) prefixed field ‘$elemMatch’ in ‘trees.0.$elemMatch’ is not valid for storage.",
"username": "qwert_yuiop"
},
{
"code": "",
"text": "Maybe it will help\ntrees : Array",
"username": "qwert_yuiop"
},
{
"code": "",
"text": "There is something wrong in your implemented code, look at the working query in playground as on your input document,Please share more details:",
"username": "turivishal"
},
{
"code": "",
"text": "I don’t know how to look at it\nHow do I know which version my hosting uses?\nI think I use, but what should the scheme / model look like?\n“implemented controller code” I don’t really understand what you are talking about (",
"username": "qwert_yuiop"
},
{
"code": "",
"text": "None of these examples work.",
"username": "Timur_Mukhamedov"
},
{
"code": "",
"text": "Please elaborate on what you tried exactly and explain how it failed. We cannot help you with the level of details you shared.",
"username": "steevej"
}
]
| How do I update the data in an object that is in an array? | 2021-10-13T23:08:55.786Z | How do I update the data in an object that is in an array? | 34,509 |
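An alternative to the positional-$ approach shown in the thread is to use arrayFilters, so a specific array element can be targeted by the tree's id. This sketch is written against the Mongoose model from the question, with the function name and filter values hypothetical:

```js
// Set poken to 7 on the trees element whose id matches treeId
async function updatePoken(userId, treeId) {
  await User.updateOne(
    { id: userId },
    { $set: { "trees.$[t].poken": 7 } },
    { arrayFilters: [ { "t.id": treeId } ] }   // only elements matching this filter are updated
  );
}
```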
null | [
"aggregation",
"node-js",
"mongoose-odm"
]
| [
{
"code": "const productSchema = new mongoose.Schema({\n\n name: {\n\n type: String,\n trim: true,\n required : [true, 'Please add a product Name'],\n maxlength: 32\n },\n\n description: {\n type: String,\n trim: true,\n required : [true, 'Please add a product Description'],\n maxlength: 2000,\n },\n\n price: {\n type: Number,\n trim: true,\n required : [true, 'Product must have a price'],\n maxlength: 32\n\n },\n\n image: {\n\n public_id: {\n type: String,\n required: true\n },\n\n url: {\n type: String,\n required: true\n }\n\n },\n\n category: {\n type: ObjectId,\n ref: \"Category\",\n required : [true, 'Product must belong to a category'],\n\n },\n\n\n}, {timestamps: true});\n\nmodule.exports = mongoose.model(\"Product\", productSchema);\n\n// category model\nconst categorySchema = new mongoose.Schema({\n\n name: {\n type: String,\n trim: true,\n required : [true, 'Please add a category Name'], \n },\n\n}, {timestamps: true});\n",
"text": "",
"username": "Emmanuel_Francois"
},
{
"code": "",
"text": "we can ignore the question i solved it.",
"username": "Emmanuel_Francois"
}
]
| According to this schema how to count product category in each category. For ex. Electronics(2), Accessories(10). Help plz | 2022-09-07T02:02:13.478Z | According to this schema how to count product category in each category. For ex. Electronics(2), Accessories(10). Help plz | 2,599 |
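Since the solution was never posted in the thread, here is one hedged way to get per-category counts with the schemas above, assuming Mongoose's default collection names products and categories (assumptions, not confirmed by the original poster):

```js
db.products.aggregate([
  { $group: { _id: "$category", count: { $sum: 1 } } },   // count products per category ObjectId
  { $lookup: {                                            // pull in the category document for its name
      from: "categories",
      localField: "_id",
      foreignField: "_id",
      as: "category"
  } },
  { $unwind: "$category" },
  { $project: { _id: 0, name: "$category.name", count: 1 } }
])
```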
null | [
"aggregation",
"queries",
"data-modeling"
]
| [
{
"code": "db.case_details.aggregate( [\n {\n $facet: {\n \"Totalcount\":[{$count:\"count\"},{$project:{_id:0,Total_cases:\"$count\"}}],\n \"status_counts\": [{\"$group\" : {_id:\"$status\", count:{$sum:1}}},{$project:{_id:0,count:1,status:\"$_id\"}}],\n \"Priority_counts\": [{$match:{\"status\":{$ne:\"Closed\"}}},{\"$group\" : {_id:\"$priority\", count:{$sum:1}}},{$project:{_id:0,priority:\"$_id\",count:1}}] ,\n \"assigned\":[{$match:{\"status\":{$ne:\"Closed\"}}},{\"$group\":{_id:\"$is_assigned\",count:{$sum:1}}},{$project:{_id:0,assigned:\"$_id\",count:1}}],\n \"transfered\":[{$match:{\"status\":{$ne:\"Closed\"},\"is_transfer\":true}},{\"$group\":{_id:\"$is_transfer\",count:{$sum:1}}},{$project:{_id:0,transfered:\"$_id\",count:1}}],\n \"cc\":[{$match:{\"status\":{$ne:\"Closed\"},\"iscccase\":true}},{\"$group\":{_id:\"$iscccase\",count:{$sum:1}}},{$project:{_id:0,cccase:\"$_id\",count:1}}] \n }\n },{$addFields : {created_date : new Date()}}\n ])\n/* 1 */\n{\n \"Totalcount\" : [ \n {\n \"Total_cases\" : 64397\n }\n ],\n \"status_counts\" : [ \n {\n \"count\" : 696.0,\n \"status\" : \"Open\"\n }, \n {\n \"count\" : 59662.0,\n \"status\" : \"Closed\"\n }, \n {\n \"count\" : 4039.0,\n \"status\" : \"Pending\"\n }\n ],\n \"Priority_counts\" : [ \n {\n \"count\" : 2.0,\n \"priority\" : \"High\"\n }, \n {\n \"count\" : 4722.0,\n \"priority\" : \"Escalated\"\n }, \n {\n \"count\" : 11.0,\n \"priority\" : \"Medium\"\n }\n ],\n \"assigned\" : [ \n {\n \"count\" : 4351.0,\n \"assigned\" : true\n }, \n {\n \"count\" : 384.0,\n \"assigned\" : false\n }\n ],\n \"transfered\" : [ \n {\n \"count\" : 245.0,\n \"transfered\" : true\n }\n ],\n \"cc\" : [ \n {\n \"count\" : 4.0,\n \"cccase\" : true\n }\n ],\n \"created_date\" : ISODate(\"2022-09-07T07:29:07.735Z\")\n}\n",
"text": "from the above query I got result as like below:can we get result like:{Total:57875, Open:696, Pending:4039 , Closed:59662, Low:11, Escalated:4722, Medium:1, High:1, Assigned: 4351, Unassigned : 384, Transfered: 245, cc :4}",
"username": "Lokesh_Reddy1"
},
{
"code": "db.case_details.aggregate( [\n {\n $facet: {\n \"Totalcount\":[{$count:\"count\"},{$project:{_id:0,Total_cases:\"$count\"}}],\n \"status_counts\": [{\"$group\" : {_id:\"$status\", count:{$sum:1}}},{$project:{_id:0,count:1,status:\"$_id\"}}],\n \"Priority_counts\": [{$match:{\"status\":{$ne:\"Closed\"}}},{\"$group\" : {_id:\"$priority\", count:{$sum:1}}},{$project:{_id:0,priority:\"$_id\",count:1}}] ,\n \"assigned\":[{$match:{\"status\":{$ne:\"Closed\"}}},{\"$group\":{_id:\"$is_assigned\",count:{$sum:1}}},{$project:{_id:0,assigned:\"$_id\",count:1}}],\n \"transfered\":[{$match:{\"status\":{$ne:\"Closed\"},\"is_transfer\":true}},{\"$group\":{_id:\"$is_transfer\",count:{$sum:1}}},{$project:{_id:0,transfered:\"$_id\",count:1}}],\n \"cc\":[{$match:{\"status\":{$ne:\"Closed\"},\"iscccase\":true}},{\"$group\":{_id:\"$iscccase\",count:{$sum:1}}},{$project:{_id:0,cccase:\"$_id\",count:1}}] \n }\n },{$addFields : {created_date : new Date()}},{\n $project: {\n Total: {\n $first: '$Totalcount.Total_cases'\n },\n Open: {\n $ifNull: [\n {\n $first: {\n $filter: {\n input: '$status_counts',\n cond: {\n $eq: [\n '$$this.status',\n 'Open'\n ]\n }\n }\n }\n },\n {\n count: 0\n }\n ]\n },\n Pending: {\n $ifNull: [\n {\n $first: {\n $filter: {\n input: '$status_counts',\n cond: {\n $eq: [\n '$$this.status',\n 'Pending'\n ]\n }\n }\n }\n },\n {\n count: 0\n }\n ]\n },\n Closed: {\n $ifNull: [\n {\n $first: {\n $filter: {\n input: '$status_counts',\n cond: {\n $eq: [\n '$$this.status',\n 'Closed'\n ]\n }\n }\n }\n },\n {\n count: 0\n }\n ]\n },\n Low: {\n $ifNull: [\n {\n $first: {\n $filter: {\n input: '$Priority_counts',\n cond: {\n $eq: [\n '$$this.priority',\n 'Low'\n ]\n }\n }\n }\n },\n {\n count: 0\n }\n ]\n },\n Escalated: {\n $ifNull: [\n {\n $first: {\n $filter: {\n input: '$Priority_counts',\n cond: {\n $eq: [\n '$$this.priority',\n 'Escalated'\n ]\n }\n }\n }\n },\n {\n count: 0\n }\n ]\n },\n Medium: {\n $ifNull: [\n {\n $first: {\n $filter: {\n input: '$Priority_counts',\n cond: {\n $eq: [\n '$$this.priority',\n 'Medium'\n ]\n }\n }\n }\n },\n {\n count: 0\n }\n ]\n },\n High: {\n $ifNull: [\n {\n $first: {\n $filter: {\n input: '$Priority_counts',\n cond: {\n $eq: [\n '$$this.priority',\n 'High'\n ]\n }\n }\n }\n },\n {\n count: 0\n }\n ]\n },\n Assigned: {\n $ifNull: [\n {\n $first: {\n $filter: {\n input: '$assigned',\n cond: {\n $eq: [\n '$$this.assigned',\n true\n ]\n }\n }\n }\n },\n {\n count: 0\n }\n ]\n },\n Unassigned: {\n $ifNull: [\n {\n $first: {\n $filter: {\n input: '$assigned',\n cond: {\n $eq: [\n '$$this.assigned',\n false\n ]\n }\n }\n }\n },\n {\n count: 0\n }\n ]\n },\n Transfered: {\n $ifNull: [\n {\n $first: {\n $filter: {\n input: '$transfered',\n cond: {\n $eq: [\n '$$this.transfered',\n true\n ]\n }\n }\n }\n },\n {\n count: 0\n }\n ]\n },\n cc: {\n $ifNull: [\n {\n $first: {\n $filter: {\n input: '$cc',\n cond: {\n $eq: [\n '$$this.cccase',\n true\n ]\n }\n }\n }\n },\n {\n count: 0\n }\n ]\n }\n }\n}, {\n $project: {\n Total: 1,\n Open: '$Open.count',\n Pending: '$Pending.count',\n Closed: '$Closed.count',\n Low: '$Low.count',\n Escalated: '$Escalated.count',\n Medium: '$Medium.count',\n High: '$High.count',\n Assigned: '$Assigned.count',\n Unassigned: '$Unassigned.count',\n Transfered: '$Transfered.count',\n cc: '$cc.count'\n }\n}]);\n",
"text": "Hi @Lokesh_Reddy1 ,Its just a matter of playing with the projection, not sure if the complication of the query worth it ",
"username": "Pavel_Duchovny"
},
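A possibly simpler variant of the same idea, offered only as an untested sketch for the status and priority facets: emit k/v pairs from each facet and flatten them with $arrayToObject instead of a long chain of $filter/$ifNull projections. It assumes the same case_details collection and field names as the pipeline above.

```javascript
// Hypothetical simplification, shown only for the status and priority facets.
// Unlike the pipeline above it does NOT fill in 0 for values that never occur
// (e.g. a missing "Medium" priority simply won't appear in the output).
db.case_details.aggregate([
  { $facet: {
      total: [ { $count: "Total" } ],
      status_counts: [
        { $group: { _id: "$status", count: { $sum: 1 } } },
        { $project: { _id: 0, k: "$_id", v: "$count" } }
      ],
      priority_counts: [
        { $match: { status: { $ne: "Closed" } } },
        { $group: { _id: "$priority", count: { $sum: 1 } } },
        { $project: { _id: 0, k: "$_id", v: "$count" } }
      ]
  } },
  // Flatten each k/v facet into plain fields and merge everything into one
  // document such as { Total: 57875, Open: 696, Pending: 4039, ..., Low: 11 }.
  { $replaceWith: {
      $mergeObjects: [
        { Total: { $first: "$total.Total" } },
        { $arrayToObject: "$status_counts" },
        { $arrayToObject: "$priority_counts" }
      ]
  } }
])
```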
{
"code": "",
"text": "Hi @Pavel_Duchovny I am very excited for your response its working fine… Thank so much ",
"username": "Lokesh_Reddy1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to get counts from multiple field keys and values with simple result? | 2022-09-07T07:35:44.350Z | How to get counts from multiple field keys and values with simple result? | 2,983 |
[
"100daysofcode"
]
| [
{
"code": "",
"text": "Hi everybody,\nI have heard a lot about #100daysof code. Yesterday was the 100th & final day for my colleagues(@Kushagra_Kesav & @henna.s).\nAfter seeing them making consistent progress and learning while they were doing the challenge, I am quite impressed and motivated to do the same.\nHence, I have decided that I will take the baton from here and will start my journey of 100daysofcode from today.I will be sharing my daily updates in the form of a Medium blog summarizing the things that I have learned during the day.\nimage3842×2162 460 KB\nWish me luck .Regards,\nSourabh Bagrecha,\nLinkedIn | Twitter",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today, I learned how to configure Email Password Authentication on MongoDB Realm.In less than 6 small steps, learn how to set up Authentication and allow your users to Login and Signup to your app without writing a…\nReading time: 4 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today, I learned about how to implement Full-Stack Authentication in a React App using MongoDB Realm without worrying about servers at all.Build a full-stack app using MongoDB Realm GraphQL (without worrying about servers at all)\nReading time: 3 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today, I learned about how to Configure MongoDB Realm to perform CRUD operations in our app using GraphQL.Build a full-stack app using MongoDB Atlas App Services without worrying about servers at all\nReading time: 6 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today, I learned about how to set up our React app for CRUD operations using MongoDB Realm GraphQL. CRUD is a shorthand for Create, Read, Update and Delete.A MongoDB Realm GraphQL Tutorial explaining how to perform Create and Read operations in a React.js app — P\nReading time: 3 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today, I learned about how to set up our React app to perform Update operations using MongoDB Realm GraphQL.A MongoDB Realm GraphQL Tutorial explaining how to implement Update operation in a React.js app\nReading time: 3 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today, I learned about how to utilize Custom Resolvers for GraphQL in MongoDB Realm to perform advanced analytics on the data stored in our database that was beyond the capabilities of default GraphQL API provided by MongoDB Realm.MongoDB Atlas GraphQL API provides a lot of features out of the box. In the previous parts of this blog series, we implemented all the CRUD…\nReading time: 6 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today, I learned about how to implement an analytics dashboard in React.js using the data we fetched from MongoDB Realm GraphQL.A React.js tutorial explaining how to implement charts to show insights based on user data.\nReading time: 3 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today, I learned about how to deploy/host a Website, React App, or any Single Page Application using MongoDB Realm.Atlas App Services Tutorial showing how to Host Websites and Single Page Applications like React.js in just 5 steps.\nReading time: 4 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Yesterday, we had a MongoDB User Group hosted at our MongoDB India Office.\ne0162670-21a1-4cd9-adc2-c47bef7365d51200×1600 331 KB\nWhile I was summarizing all the things I learned in the past 8 days I fell asleep midway because I was very tired after all the fun we had, so here I am, approximately 17(I usually post around 2-3 am) hours late to the party.\nBut, better late than never, here’s the summary of all the things I learned in the past 8 days.Learn how to perform Authentication, Authorization, CRUD operations, Analytics & Web App Deployment by building an Expense Manager in…\nReading time: 6 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "No worries, you are initiator of the party.\nNevertheless, this is helpful.",
"username": "Shreya_Bhanot"
},
{
"code": "",
"text": "Today, I learned how to integrate React Query in a GraphQL-based React App.Replace React’s useEffect & useState hooks with react query’s useQuery hook to simplify the process of fetching and managing the data via…\nReading time: 3 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today, I learned How to use React Query to perform Create, Edit, and Delete Mutation for GraphQL based Web Apps.Learn how to use the useMutation hook provided by React-Query to perform CRUD operations on a GraphQL based Web App\nReading time: 3 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today I learned How to handle error 401: unauthorized request in React Query? And how we can create custom hooks to reuse the logic across different components throughout our app.Learn how to create a reusable custom React Hook to detect users’ expired sessions(access tokens) and refresh them accordingly and fix the…\nReading time: 3 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today I learned How to create MongoDB charts and how to perform multistep calculations and data processing using an aggregation pipeline to feed them to the Charts.I will post a consolidated blog tomorrow with the steps to",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today I learned how to pre-process our collections’ data before feeding it to a MongoDB Chart, I also learned how we can use different Chart types like Circular Donut Charts and Histograms to make the best use of visualizations and tell a story to our audience because a picture worth a thousand words.Create real-time interactive analytics dashboards to convert raw data into meaningful insights by importing a Kaggle Dataset in MongoDB…\nReading time: 7 min read\n",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today I learned about:I will extend my expengo web app to Mobile devices as well. Using Atlas Device Sync I will try to provide a smooth user experience by creating a RN app that will work even when the device is not connected to the internet.I will try to finish building this app by the weekend, and hopefully, we will have a brand new blog by then.",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Today I learned about how to integrate React Navigation for Stack Navigator in a React Native App. I also grouped and separated the authenticated and unauthenticated screens for a smooth user experience. I added the login screen and implemented Email Password authentication using Atlas App Services as well.",
"username": "SourabhBagrecha"
},
{
"code": "Expense schema not found",
"text": "Weekend finally! Today I finished watching the second season of Panchayat, the last episode was very emotional, but overall it was so fun, really enjoyed it.In terms of the things I learned today, I made some progress on connecting the Local Realm on React Native with MongoDB’s Atlas Device Sync. Got some errors due to Expense schema not found, I guess I may have made an incorrect configuration therefore I will be making the required changes on the MongoDB Cloud tomorrow.",
"username": "SourabhBagrecha"
},
{
"code": "Expense schema not found",
"text": "Today I finally fixed the following error:Got some errors due to Expense schema not foundFigured out it was a very silly mistake on my end, I added the schema array to the sync config options object but it has to be a key of the parent config options object. I also had to verify my schema from the Realm SDK tab from the left panel of the MongoDB App Services Dashboard.\nNow I am finally able to read and sync all of my data from the MongoDB Cloud to my iOS Simulator.\nI am currently struggling to find a good React Native UI library that can provide components and other UI elements out of the box so that I don’t have to write CSS on my own.\nI also added the CreateExpense form and page and configured the same in ReactNavigation.",
"username": "SourabhBagrecha"
}
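For readers hitting the same "Expense schema not found" error, here is a minimal sketch of the corrected configuration described above. It assumes the Realm JavaScript SDK with partition-based sync; the Expense schema fields, the partition value, and the helper name are illustrative placeholders, not the author's actual code.

```javascript
import Realm from "realm";

// Illustrative object schema; the real Expense model may differ.
const ExpenseSchema = {
  name: "Expense",
  primaryKey: "_id",
  properties: {
    _id: "objectId",
    title: "string",
    amount: "double",
  },
};

async function openSyncedRealm(user) {
  const config = {
    // The fix described above: `schema` is a key of the top-level config object...
    schema: [ExpenseSchema],
    sync: {
      user,
      partitionValue: user.id, // placeholder partition key
      // ...not a key of these sync options, where it is ignored and the
      // "Expense schema not found" error shows up at query time.
    },
  };
  return Realm.open(config);
}
```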
]
| The Journey of #100DaysOfCode (@sourabhbagrecha) | 2022-05-18T18:26:46.379Z | The Journey of #100DaysOfCode (@sourabhbagrecha) | 19,350 |
|
null | [
"node-js",
"mongoose-odm"
]
| [
{
"code": "",
"text": "Such situation.There is a collection with documents in the database,\nwe created Schema and created Model.\nAt the same time, not all fields of documents are described in the schema\ni.e. some documents have fields not described in the schema.\nNow we execute the model.deleteMany({x:}) where “x” is not in the schema\nThis call removes ALL documents in the collection!\nThis behavior was not in version 5This behavior is not described in Mongoose v7.6.3: Migrating to Mongoose 6\nand it looks like it’s just an error.",
"username": "Vadim_Shumilin"
},
{
"code": "strictQuerydeleteMany()strictQueryfalse",
"text": "Welcome to the MongoDB community @Vadim_Shumilin !I suspect this is another consequence of Mongoose 6 changing the strictQuery behaviour so query filter properties that are not in the schema will now be filtered out by default .This is unexpected behaviour that is inconsistent with the official MongoDB Node.js driver behaviour (related discussion: How to avoid accidentally returning an arbitrary document when using findOne with a non-existing field in Mongoose? - #7 by Stennie).Mongoose is an open source community project, so the best channel to search or create Mongoose bug reports is the GitHub issue queue: Issues · Automattic/mongoose · GitHub.I see you have already created Mongoose #12389 for this deleteMany() issue. I think a key upstream issue to watch/upvote is Mongoose #11861: Make strictQuery false by default again.Regards,\nStennie",
"username": "Stennie_X"
},
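To make the behaviour change concrete, here is a small hedged sketch; the Ward model and the x filter value are made-up placeholders. It shows how to opt back into the Mongoose 5 behaviour so that filters on undeclared fields are sent to the server instead of being silently stripped.

```javascript
const mongoose = require("mongoose");

// Option 1: restore the old behaviour globally, before compiling models.
mongoose.set("strictQuery", false);

// Option 2: per schema.
const wardSchema = new mongoose.Schema(
  { name: String },              // note: "x" is deliberately not declared
  { strictQuery: false }
);
const Ward = mongoose.model("Ward", wardSchema);

async function removeByUndeclaredField() {
  // With Mongoose 6's default (strictQuery: true) the filter { x: "foo" } is
  // reduced to {}, so this call would delete EVERY document in the collection.
  // With strictQuery: false the filter is passed through unchanged and only
  // documents where x === "foo" are deleted.
  return Ward.deleteMany({ x: "foo" });
}
```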
{
"code": "",
"text": "@Stennie_X\nI only recently work with mongoose, and when I came across this error - I was very surprised by this behavior, and began to look for where to report it. I found this community , but after two days I did not find the reaction, and further search led to GitHub issue.\nNow the situation is clear to me, I still think that such behavior is very dangerous, and at least it should have been written about separately in Migrating to Mongoose 6 document.\nRegards,\nVadim",
"username": "Vadim_Shumilin"
}
]
| Very-very wrong behavior of mongoose version 6 function deleteMany | 2022-09-02T08:03:54.113Z | Very-very wrong behavior of mongoose version 6 function deleteMany | 3,756 |
[
"aggregation"
]
| [
{
"code": "",
"text": "i have a table which is ward table, and another table which is levels.\nfor wards table, there is column with currentLevel which is associated to Id on levels table.\n . This is wards table.and this is levels table\n\n. Looking at this screenshots, if level id is associated with any ward, that ward is automatically in that class e.g basic 1, Basic 2 and so on.so i want to promote all the ward in that particular class (from basic 1 to basic 2) automatically using another currentLevel (id) which will be promoted to another class i.e from currentLevel (id) (basic 1) to currentLevel(id) basic 2. Please help me. its urgent",
"username": "Gbade_Francis"
},
{
"code": "wardslevelsdb.wards.updateMany(\n\t{\n\t\t// _id for a level with name \"basic 1\"\n\t\tcurrentLevel: ObjectId(...)\n\t},\n\t{\n\t\t$set: {\n\t\t\t// _id for a level with name \"basic 2\"\n\t\t\tcurrentLevel: ObjectId(...)\n\t\t}\n\t}\n)\n",
"text": "Hey @Gbade_Francis,Can you please explain this a bit more:using another currentLevel (id) which will be promoted to another class i.e from currentLevel (id) (basic 1) to currentLevel(id) basic 2As per my understanding, you have references using ObjectID( which is currentLevel) in the wards collection that would not match the ObjectIDs in levels (type mismatch). It would be good to fix this type mismatch and then you can try to do an approach like this:If this isn’t the case, provide sample documents in text format instead of screenshots along with the output you are expecting in order for us to better understand the problem statement. Regards,\nSatyam",
"username": "Satyam"
},
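If hard-coding the two ObjectIds is inconvenient, the same promotion can be driven by the level names. This is a rough mongosh sketch that assumes the levels documents carry a name field like "basic 1"/"basic 2" (the field name is a guess based on the screenshots) and that wards.currentLevel stores the level's _id with a matching type, i.e. the string-vs-ObjectId mismatch noted above has been fixed first.

```javascript
// Look up the two level documents by name (field name assumed, adjust as needed).
const fromLevel = db.levels.findOne({ name: "basic 1" });
const toLevel   = db.levels.findOne({ name: "basic 2" });

// Promote every ward currently in "basic 1" to "basic 2".
db.wards.updateMany(
  { currentLevel: fromLevel._id },
  { $set: { currentLevel: toLevel._id } }
);
```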
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Updating column using agreegate | 2022-09-05T19:21:25.125Z | Updating column using agreegate | 1,125 |
|
null | [
"queries"
]
| [
{
"code": "",
"text": "Hello all,I am new to MongoDB and like to know how to do a query because I am using Wekan board application that is using MongoDB.I’ve learned that with the following query I can find boards where I am not an admindb.getCollection(‘boards’).find({members: {$elemMatch: { userId: “USER-ID-HERE”, isAdmin: false} } })The query I am looking for is how I can update all boards I am not the admin of, so set it from false to trueThanks for helping.",
"username": "jurjendevries"
},
{
"code": "",
"text": "Please provide sample documents from you collection and the expected results.Coming up with documents that match your use-case in order to experiment is time consuming.Please share anything you tried and indicate how it fails so that we do not experiment and propose a solution that you already rejected.As a starting point look at updateMany because that will be the method to use rather than find().The query parameter of updateMany() will be the same as your find().The update parameter will be $set.Of particular interest for your array update is https://www.mongodb.com/docs/manual/reference/method/db.collection.updateMany/#specify-arrayfilters-for-an-array-update-operations. However $ positional update might be sufficient in your case.",
"username": "steevej"
},
{
"code": "",
"text": "Hello @steevej\nThank you for helping meI am not sure what you mean with sample documents? Do you mean a sample of the Wekan application? If so the developer provided one at https://boards.wekan.team/b/D2SzJKZDS4Z48yeQH/wekan-open-source-kanban-board-with-mit-license .This is not the Wekan instance I am trying to change because that is not an open instance.\nSo I am trying to change all boards I am member of, but don’t have admin.\nI didn’t tried any update / manyupdate query yet because I am too new to it and afraid I am doing something wrong. But if I do understand you well I can do:db.getCollection(‘boards’).updateMany({members: {$elemMatch: { userId: “USER-ID-HERE”, isAdmin: false} } })\nAnd need to change USER-ID-HERE with my user id\nBut where and how exactly in this line should I add the $set for isAdmin true ?Best regards,\nJurjen",
"username": "jurjendevries"
},
{
"code": "",
"text": "sample documents?You are trying to update documents from the collection boards. We need to see the structure of those documents. And we need sample documents that we can cut-n-paste into out system to play with.Thanks for the link but most of us have to time to go investigate how a third party software you want to use integrate with the database. If you know that documents from the boards collection have an array named members which contains object with the fields userId and isAdmin you must have seen at least one document.Just doingdb.getCollection(‘boards’).find({members: {$elemMatch: { userId: “USER-ID-HERE”, isAdmin: false} } })will provide sample documents.",
"username": "steevej"
},
{
"code": "",
"text": "Hello @steevejSorry I misunderstood what you meant with sample documents. I only know the term records for what is returned by a query into databases.This is a paste of 2 rows records/documents with the query. I changed the userids because I am not sure if it is sensitive information for a public forum.So in this records/documents and all others that I got from the query should the isAdmin: false be updated where userId is xxx\nI like to know how to build that query in MongoDB.Thanks again db.getCollection(‘boards’).find({members: {$elemMatch: { userId: “xxx”, isAdmin: false} } })\n{ “_id” : “L4iFMFcj2QSLM8aJM”, “title” : “EU epics”, “permission” : “private”, “sort” : 11, “slug” : “eu-epics”, “archived” : true, “createdAt” : ISODate(“2021-02-24T14:07:33.070Z”), “modifiedAt” : ISODate(“2021-02-24T14:09:29.773Z”), “stars” : 0, “labels” : [ { “color” : “green”, “_id” : “v4HA7q”, “name” : “” }, { “color” : “yellow”, “_id” : “cPCsBe”, “name” : “” }, { “color” : “orange”, “_id” : “bYLMsW”, “name” : “” }, { “color” : “red”, “_id” : “yBfFjM”, “name” : “” }, { “color” : “purple”, “_id” : “EWiMFB”, “name” : “” }, { “color” : “blue”, “_id” : “bhpW2h”, “name” : “” } ], “members” : [ { “userId” : “other1”, “isAdmin” : true, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “xxx”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “ytdWuptdA2aJEpSqw”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false } ], “color” : “belize”, “subtasksDefaultBoardId” : null, “subtasksDefaultListId” : null, “dateSettingsDefaultBoardId” : null, “dateSettingsDefaultListId” : null, “allowsSubtasks” : true, “allowsAttachments” : true, “allowsChecklists” : true, “allowsComments” : true, “allowsDescriptionTitle” : true, “allowsDescriptionText” : true, “allowsActivities” : true, “allowsLabels” : true, “allowsAssignee” : true, “allowsMembers” : true, “allowsRequestedBy” : true, “allowsAssignedBy” : true, “allowsReceivedDate” : true, “allowsStartDate” : true, “allowsEndDate” : true, “allowsDueDate” : true, “presentParentTask” : “no-parent”, “isOvertime” : false, “type” : “board”, “archivedAt” : ISODate(“2021-02-24T14:09:29.684Z”), “allowsCardNumber” : false, “allowsShowLists” : true }\n{ “_id” : “GJXMEnZqrTPuAkyvz”, “title” : “Top Priorities”, “permission” : “private”, “sort” : 71, “slug” : “top-priorities”, “archived” : false, “createdAt” : ISODate(“2021-11-25T11:17:40.437Z”), “modifiedAt” : ISODate(“2022-04-29T08:30:06.646Z”), “stars” : 1, “members” : [ { “userId” : “other2”, “isAdmin” : true, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other3”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other4”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “xxx”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other5”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other6”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other7”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, 
“isWorker” : false }, { “userId” : “other8”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other9”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other10”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false } ], “color” : “belize”, “subtasksDefaultBoardId” : null, “subtasksDefaultListId” : null, “dateSettingsDefaultBoardId” : null, “dateSettingsDefaultListId” : null, “allowsSubtasks” : true, “allowsAttachments” : true, “allowsChecklists” : true, “allowsComments” : true, “allowsDescriptionTitle” : true, “allowsDescriptionText” : true, “allowsCardNumber” : false, “allowsActivities” : true, “allowsLabels” : true, “allowsCreator” : false, “allowsAssignee” : true, “allowsMembers” : false, “allowsRequestedBy” : true, “allowsCardSortingByNumber” : true, “allowsAssignedBy” : false, “allowsReceivedDate” : true, “allowsStartDate” : true, “allowsEndDate” : true, “allowsDueDate” : true, “presentParentTask” : “no-parent”, “isOvertime” : false, “type” : “board”, “allowsShowLists” : true }",
"username": "jurjendevries"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and update the same documents (records) so that we can cut-n-paste into our system.",
"username": "steevej"
},
{
"code": "{ “_id” : “L4iFMFcj2QSLM8aJM”, “title” : “EU epics”, “permission” : “private”, “sort” : 11, “slug” : “eu-epics”, “archived” : true, “createdAt” : ISODate(“2021-02-24T14:07:33.070Z”), “modifiedAt” : ISODate(“2021-02-24T14:09:29.773Z”), “stars” : 0, “labels” : [ { “color” : “green”, “_id” : “v4HA7q”, “name” : “” }, { “color” : “yellow”, “_id” : “cPCsBe”, “name” : “” }, { “color” : “orange”, “_id” : “bYLMsW”, “name” : “” }, { “color” : “red”, “_id” : “yBfFjM”, “name” : “” }, { “color” : “purple”, “_id” : “EWiMFB”, “name” : “” }, { “color” : “blue”, “_id” : “bhpW2h”, “name” : “” } ], “members” : [ { “userId” : “other1”, “isAdmin” : true, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “xxx”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “ytdWuptdA2aJEpSqw”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false } ], “color” : “belize”, “subtasksDefaultBoardId” : null, “subtasksDefaultListId” : null, “dateSettingsDefaultBoardId” : null, “dateSettingsDefaultListId” : null, “allowsSubtasks” : true, “allowsAttachments” : true, “allowsChecklists” : true, “allowsComments” : true, “allowsDescriptionTitle” : true, “allowsDescriptionText” : true, “allowsActivities” : true, “allowsLabels” : true, “allowsAssignee” : true, “allowsMembers” : true, “allowsRequestedBy” : true, “allowsAssignedBy” : true, “allowsReceivedDate” : true, “allowsStartDate” : true, “allowsEndDate” : true, “allowsDueDate” : true, “presentParentTask” : “no-parent”, “isOvertime” : false, “type” : “board”, “archivedAt” : ISODate(“2021-02-24T14:09:29.684Z”), “allowsCardNumber” : false, “allowsShowLists” : true }\n{ “_id” : “GJXMEnZqrTPuAkyvz”, “title” : “Top Priorities”, “permission” : “private”, “sort” : 71, “slug” : “top-priorities”, “archived” : false, “createdAt” : ISODate(“2021-11-25T11:17:40.437Z”), “modifiedAt” : ISODate(“2022-04-29T08:30:06.646Z”), “stars” : 1, “members” : [ { “userId” : “other2”, “isAdmin” : true, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other3”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other4”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “xxx”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other5”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other6”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other7”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other8”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other9”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false }, { “userId” : “other10”, “isAdmin” : false, “isActive” : true, “isNoComments” : false, “isCommentOnly” : false, “isWorker” : false } ], “color” : “belize”, “subtasksDefaultBoardId” : null, “subtasksDefaultListId” : null, “dateSettingsDefaultBoardId” : null, “dateSettingsDefaultListId” : null, “allowsSubtasks” : 
true, “allowsAttachments” : true, “allowsChecklists” : true, “allowsComments” : true, “allowsDescriptionTitle” : true, “allowsDescriptionText” : true, “allowsCardNumber” : false, “allowsActivities” : true, “allowsLabels” : true, “allowsCreator” : false, “allowsAssignee” : true, “allowsMembers” : false, “allowsRequestedBy” : true, “allowsCardSortingByNumber” : true, “allowsAssignedBy” : false, “allowsReceivedDate” : true, “allowsStartDate” : true, “allowsEndDate” : true, “allowsDueDate” : true, “presentParentTask” : “no-parent”, “isOvertime” : false, “type” : “board”, “allowsShowLists” : true }```",
"text": "Like this?",
"username": "jurjendevries"
},
{
"code": "SyntaxError: Unexpected character '“'. (1:6)\n",
"text": "No not like this.Most likely you cut-n-paste from your previous post, so you got the wrong quotes. Exactly, what the triple dots fencing is supposed to prevent.To know if it is correct or not. You should try to copy the result of your editing, the part in the right pane when you edit post, and paste it into mongosh. If it works for you it will work for us, if it does not for you it will not for us.What we get when we try to cut-n-paste is:",
"username": "steevej"
},
{
"code": "",
"text": "Sorry for my very late response. Today I had a friend Rafid helping me with this. For who is in need for a similar query, here is the one that worked for me.db.boards.updateMany(\n{ members: { $elemMatch: { userId: “USER-ID-HERE”, isAdmin: false } } },\n{\n$set: { “members.$.isAdmin”: true },\n}\n);",
"username": "jurjendevries"
},
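For completeness, here is the same update with straight quotes so it can be pasted into mongosh directly, plus the arrayFilters form mentioned earlier in the thread. "USER-ID-HERE" is still a placeholder for the real userId.

```javascript
// Positional ($) form of the accepted answer.
db.boards.updateMany(
  { members: { $elemMatch: { userId: "USER-ID-HERE", isAdmin: false } } },
  { $set: { "members.$.isAdmin": true } }
);

// Equivalent form using arrayFilters, as suggested earlier in the thread.
db.boards.updateMany(
  { "members.userId": "USER-ID-HERE" },
  { $set: { "members.$[m].isAdmin": true } },
  { arrayFilters: [ { "m.userId": "USER-ID-HERE", "m.isAdmin": false } ] }
);
```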
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Beginner question, query to update multiple records | 2022-05-14T07:18:53.723Z | Beginner question, query to update multiple records | 4,881 |
null | []
| [
{
"code": "",
"text": "Hello!\nI’m trying to understand the course of actions I need to take in order to create a Project + Cluster in Atlas using Terraform (more like CDK TF).The latest development is me not getting the right configuration for IP Policy so that I can trigger the creation of resources from a GitHub Action runner (upon pushing a commit to a certain repo) - getting the error “IP_ADDRESS_NOT_ON_ACCESS_LIST”\nOne thing I will try is adding all the GitHub Runners IPs as entries, but even if it works, I’m not sure it’s the correct thing to do.\nAny help would be appreciated \nThanks!",
"username": "Sharon_Grossman"
},
{
"code": "",
"text": "Hi @Sharon_Grossman - Welcome to the community forums!It’s a bit of a longer read but perhaps some of the information on the following post may help. The user on that post was also utilising GitHub Action runner and terraform with Atlas .Let me know if there is any confusion although I must note that I am not too familiar with the GitHub Action runner side of things!Cheers,\nJason",
"username": "Jason_Tran"
},
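For anyone landing here with the same error: as far as I can tell, IP_ADDRESS_NOT_ON_ACCESS_LIST is typically returned by the Atlas Admin API when the programmatic API key used by the provider has an access list that does not include the runner's IP, which is what the linked post walks through. Separately, the database-side allowlist for a project can be managed in Terraform itself. A minimal hedged sketch follows; the project ID variable and CIDR block are placeholders, and 0.0.0.0/0 should only ever be a temporary measure.

```hcl
# Hypothetical example using the mongodbatlas provider's project IP access list.
resource "mongodbatlas_project_ip_access_list" "ci_runner" {
  project_id = var.atlas_project_id
  cidr_block = "0.0.0.0/0" # or the GitHub-hosted runner egress range; tighten for production
  comment    = "Temporary access for GitHub Actions runners"
}
```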
{
"code": "",
"text": "@Sharon_Grossman just checking if you were able to resolve this issue? If not, feel free to share any specific output/errors you are getting and we’re happy to help troubleshoot.Kind regards,\nZuhair",
"username": "Zuhair_Ahmed"
},
{
"code": "",
"text": "Yes thanks, the post in Jason’s answer clarified it ",
"username": "Sharon_Grossman"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Atlas Cluster provisioning with Terraform (CDKTF) | 2022-08-01T13:55:18.482Z | Atlas Cluster provisioning with Terraform (CDKTF) | 2,568 |