image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"atlas-device-sync",
"flexible-sync"
] | [
{
"code": "",
"text": "Hi,We have an app that was built 2 years ago using Partition Sync. We see that Flexible Sync is now recommended and out of beta.We also read there might be a tool that would allow in-place migrations to flexible sync without needing to create a separate app. Is there an ETA on this tool before we embark on a manual migration?Thanks!",
"username": "Thompson"
},
{
"code": "",
"text": "Hi @Thompson, this feature is in active development and we hope to release it in the coming quarter.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | ETA on Migration Tool from Partition Sync to Flexible Sync | 2023-01-23T20:44:52.788Z | ETA on Migration Tool from Partition Sync to Flexible Sync | 1,219 |
null | [
"android"
] | [
{
"code": "",
"text": "Is there any listener for the data that is being pushed or pulled in android in real time? To check on collection level or document level listeners while syncing the data.",
"username": "Shubham_Sharma9"
},
{
"code": "",
"text": "Hi, can you be more specific about what you are trying to do? It sounds like you are looking for Change Listeners: https://www.mongodb.com/docs/realm/sdk/java/examples/react-to-changes/",
"username": "Tyler_Kaye"
}
] | Data push and pull listener on collection/document basis | 2023-01-24T04:10:55.368Z | Data push and pull listener on collection/document basis | 1,175 |
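The native answer to the thread above is the Realm SDK change listener Tyler links to. Purely as an illustration of the same react-to-changes idea, and not the Realm Java API itself, the sketch below watches a MongoDB collection with a change stream in Python; the connection string and namespace are placeholders, and change streams require a replica set or Atlas cluster.

```python
# Illustrative only: a MongoDB change stream reacting to inserts/updates/deletes.
# This is not the Realm Java SDK change listener from the linked docs; it shows
# the same "react to changes" concept against a MongoDB deployment.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
collection = client["app"]["items"]                # placeholder namespace

# watch() blocks and yields one event per change on the collection
with collection.watch(full_document="updateLookup") as stream:
    for change in stream:
        # operationType is "insert", "update", "delete", ...
        print(change["operationType"], change.get("fullDocument"))
```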
null | [
"data-modeling"
] | [
{
"code": "books(_id, title)\nauthors(_id, firstName, lastName)\nbooksAndAuthors(_id, *book_id*, *author_id*)\n",
"text": "In relational databases normalization is paramount. Intermediary tables are therefore used to create many-to-many relationships between entities. In document oriented databases some denormalization is allowed to eliminate these intermediary tables to simplify and reduce the number of queries.An example of a fully normalized many-to-many relationship:So, in MongoDb this is not the normal way of creating a many-to-many relationship because searching for books by author for example would require more than one query.My question is: Are there any examples of situations where this design would be better than the normal way of creating many-to-many relationships? Is it a ‘sin’ to treat MongoDb like a relational database with full normalization? Or are there situations where the benefits of full normalization would outweigh the drawbacks of the increased number of queries required?",
"username": "Max7741"
},
{
"code": "books_writtenauthors",
"text": "Hey @Max7741,A general rule of thumb while doing schema design in MongoDB is that you should design your database in a way that the most common queries can be satisfied by querying a single collection, even when this means that you will have some redundancy in your database. A good way to model this is by embedding. For example, in the use case you provided, if one has to regularly search for all the books by a particular author, then it would be much more useful to add an array field of books_written in the authors collection itself.There are situations where full normalization may be beneficial in MongoDB, such as when dealing with very large datasets and a high volume of concurrent transactions. In these cases, the added complexity of multiple queries may be worth it to ensure data integrity and consistency.\nAnother example could be if you are dealing with sensitive data and need to ensure strict security and compliance requirements. The added complexity of multiple queries may be necessary to meet these requirements.\nHowever, in most cases, the benefits of denormalization in MongoDB outweigh the drawbacks of increased complexity and query overhead. The flexibility and scalability of MongoDB’s document-oriented model make it well-suited for many-to-many relationships and denormalization can greatly simplify data modeling and querying.Is it a ‘sin’ to treat MongoDb like a relational database with full normalization?It is not necessarily a “sin” to treat MongoDB like a relational database with full normalization, but it may not be the most efficient or effective approach for most use cases. It ultimately depends on the specific requirements and constraints of your project.In general, favor denormalization when:and favor normalization when:Regarding document growth, note that MongoDB has a hard limit of 16MB per document, thus any schema design that can have a document growing indefinitely will hit this 16MB limit sooner or later.\nYou can further read the following documentation to further cement your knowledge of Schema Design in MongoDB.\nData Model Design\nFactors to consider when data modeling in MongoDBNote that these points are just general ideas and not a strict rule. I’m sure there are exceptions and counter examples to any of the points above, but generally it’s more about designing the schema according what will suit the use case best (i.e. how the data will be queried and/or updated), and not how the data is stored (unlike in most tabular databases where 3NF is considered “best” for most things).Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "and favor denormalization when:I believe you meant the other way ",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hey @Yilmaz_Durmaz,Thanks for pointing it out! Edited my answer. Thanks,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Schema design: Many-to-many relationships and normalization | 2023-01-19T13:11:02.779Z | Schema design: Many-to-many relationships and normalization | 4,790 |
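A minimal PyMongo sketch of the embedding approach described in the thread above: each author document carries a books_written array, so the common query touches a single collection. Database, collection, and field values are illustrative, not taken from a real schema.

```python
# Sketch of the embedded (denormalized) design: "all books by an author"
# becomes a single-collection query with no intermediary table or $lookup.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["library"]                             # placeholder database

db.authors.insert_one({
    "firstName": "Jane",
    "lastName": "Doe",
    # duplicated from the books data on purpose, to serve the common read path
    "books_written": [
        {"title": "First Book"},
        {"title": "Second Book"},
    ],
})

# One query answers the common question
author = db.authors.find_one({"lastName": "Doe"}, {"books_written": 1, "_id": 0})
print(author["books_written"])
```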
[] | [
{
"code": "",
"text": "Hi Mongodb OPS and Sysad,I’m new to mongodb and currently working on a virtual machine installed with redhat. it happens that we tried to change the permission of the following while adding the keyfile on the security option under /etc/mongod.confthen we restart the mongodb using the command systemctl restart mongodhere’s the error:● mongod.service - MongoDB Database Server\nLoaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor pres>\nActive: failed (Result: core-dump) since Wed 2021-02-24 13:41:34 PST; 1 day >\nDocs: https://docs.mongodb.org/manual\nProcess: 1279 ExecStart=/usr/bin/mongod $OPTIONS (code=dumped, signal=ABRT)\nProcess: 1277 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited,>\nProcess: 1274 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (cod>\nProcess: 1271 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, s>mongod[1279]: mongod(_ZN5mongo11Initial>\nmongod[1279]: mongod(_ZN5mongo21runGlob>\nmongod[1279]: mongod(+0xD9DF2A) [0x560b>\nmongod[1279]: mongod(+0xD1F719) [0x560b>\nmongod[1279]: libc.so.6(__libc_start_ma>\nmongod[1279]: mongod(_start+0x2E) [0x56>\nmongod[1279]: ----- END BACKTRACE -----\nsystemd[1]: mongod.service: Control proc>\nsystemd[1]: mongod.service: Failed with >\nsystemd[1]: Failed to start MongoDB Data>I tried to uninstall using this command below:sudo service mongod stop\nsudo apt-get purge mongodb-org*Remove Data Directories.\nRemove MongoDB databases and log files.sudo rm -r /var/log/mongodb\nsudo rm -r /var/lib/mongodbAnd fresh install mongodb to another version same thing happened error message didn’t disappeared and mongodb service failed to start.Mongodb Current Version: MongoDB shell version v4.2.8\nOS: Red Hat Enterprise Linux release 8.2 (Ootpa)Hoping from this forum someone can help me on this issue and able to share some ideas to fix it. Thank you in advance.",
"username": "Ian_Sherwin_Canaya"
},
{
"code": "",
"text": "@Ian_Sherwin_Canaya Do you get the solution for this issue ? I have also this issue i can’t get solution yet…if you get the solution please give the method…",
"username": "ajay_ps"
},
{
"code": "Active: failed (Result: core-dump)Active: failed (Result: exit-code)",
"text": "Hi @Ian_Sherwin_Canaya and @ajay_ps, even I’m new to MongoDB and I too had this error or whatever we can call this. I searched a ton of communities and was unable to find a fix and at the end came to a decision of reinstalling MongoDB from scratch by first removing all the packages and directories like log, database etc. After reinstalling it works fine.About the error:\nI have encountered two types of similar errors so far. One is similar to the one talked about in this post Active: failed (Result: core-dump) for which only fix I could find was to reinstall from scratch. I read somewhere that this is a possible consequence of crashed or corrupted database which can be caused by terminating the database in a way which shouldn’t be used ( I have no idea what way is it).Another similar error is Active: failed (Result: exit-code). This can be caused by various correctable reasons like ownership (chown) or rwx (chmod) permissions of the log file and dbpath, the mongod service not running, etc.I hope someone would find this information useful ",
"username": "Yashvander_Bamel"
},
{
"code": "",
"text": "@Yashvander_Bamel I reinstalled many times after deleting all packages. But No solution. and i changed the user in ubuntu and again installed but no solutions.\nChecking time “sudo systemsctl start mongod” and after “mongo” command then get the error. Illegal instruction (core dumped)… this is the main issue\nI can’t get any solution yet…",
"username": "ajay_ps"
},
{
"code": "",
"text": "Did mongodb work on your system even once? Did you get this error when you installed mongodb for the first time ever? If that’s the case I can not comment anything.But if it did run once and then crashed, you can remove all the packages and use steps mentioned here and delete all the files associated with mongodb. And then try to reinstall. Note that this would work only if you could run mongodb atleast once on your system as far as I can tell. But you can try in the other case as well.And once you reinstall mongodb again, do not forget to change the ownership and permissions of the log and data folders",
"username": "Yashvander_Bamel"
},
{
"code": "",
"text": "Found this on StackOverflow, very underrated answer. The KVM processor used in most Linux VM environments is NOT COMPATIBLE with MongoDB, try to change it to something else. I use Proxmox, and when I changed from KVM processors to EPYC it worked right away, no changes.",
"username": "Clemence_Yi"
},
{
"code": "",
"text": "how do you change it? can you tell me?",
"username": "Gether_Medel"
},
{
"code": "",
"text": "I installed pve on a host with an Intel core i7-2600 CPU.\nI changed the processor type from Default KVM64 to Sandy-Bridge and then restart VM and mongodb service. It’s worked !",
"username": "Jaw_Ming_Luo"
}
] | mongod.service Failed with result core-dump | 2021-02-26T05:20:50.959Z | mongod.service Failed with result core-dump | 28,781 |
null | [
"atlas-search"
] | [
{
"code": "{\n knnBeta: {\n vector: [-0.08771466463804245,0.040466126054525375,...],\n path: \"embedding\",\n \"filter\":{\n \"range\":{\n \"path\":\"meta.duration\",\n \"gt\":2,\n \"lte\":10\n }\n },\n k: 10,\n },\n}\n",
"text": "Hi, I have a collection where each document has a vector embedding, a “type”, and some other metadata.\nI use the knnBeta operator in a $search to get the documents with the closest embedding I input, but I also need these documents to be filtered by some fields in their metadata.\nThe issue is that using the filter field in the knnBeta operator, I can only use each filter operator once. For example, if I want to filter all the documents where “meta.duration” is between 2 and 10 I can do this.But this way I won’t be able to filter using the range operator on another field at the same time.\nI tried to use an array of filters instead but I get the following error: “knnBeta.filter” must be a document.\nIs there a way I could get this to work ?",
"username": "Paul_Bourcereau"
},
{
"code": "compoundknnBeta.filter \"query\": {\n \"knnBeta\": {\n \"path\": \"description\",\n \"k\": 10,\n \"vector\": [\n 8,\n 8,\n 8\n ],\n \"filter\": {\n \"compound\": {\n \"mustNot\": {\n \"text\": {\n \"path\": \"title\",\n \"query\": \"world\"\n }\n },\n \"must\": [\n {\n \"text\": {\n \"path\": \"title\",\n \"query\": \"hello\"\n }\n }\n ]\n }\n }\n }\n },\n",
"text": "You should be able to put a compound within the knnBeta.filter field.Here is an example:",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "Works perfectly, thanks ",
"username": "Paul_Bourcereau"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Multiple filters in knnBeta $search | 2023-01-20T10:45:01.051Z | Multiple filters in knnBeta $search | 2,517 |
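For anyone driving the accepted answer above from a script rather than the Atlas UI, here is a hedged PyMongo sketch of the same knnBeta search with a compound filter. The index name, field paths, vector, and the second range clause are placeholders, not values from the original thread.

```python
# Running a $search stage with knnBeta and a compound filter from Python.
# Index name, paths and the query vector are assumptions for illustration.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://cluster.example.mongodb.net")  # placeholder URI
collection = client["mydb"]["items"]                               # placeholder namespace

pipeline = [
    {"$search": {
        "index": "default",                        # assumed search index name
        "knnBeta": {
            "path": "embedding",
            "k": 10,
            "vector": [-0.0877, 0.0404, 0.1234],   # shortened example vector
            "filter": {
                "compound": {
                    "must": [
                        {"range": {"path": "meta.duration", "gt": 2, "lte": 10}},
                        {"range": {"path": "meta.size", "gte": 1, "lt": 100}},
                    ]
                }
            },
        },
    }},
    {"$limit": 10},
]

for doc in collection.aggregate(pipeline):
    print(doc["_id"])
```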
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 6.0.3 is out and is ready for production deployment. This release contains only fixes since 6.0.2, and is a recommended upgrade for all 6.0 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "",
"text": "@Aaron_MorandThe documentation (Supported Platforms, RedHat - Platform Support, Ubuntu - Platform Support)is not reflecting the supported platform changes from:\nhttps://jira.mongodb.org/browse/SERVER-62300\nhttps://jira.mongodb.org/browse/SERVER-62302People have been rabidly waiting for it.",
"username": "chris"
},
{
"code": "",
"text": "Just switch back to ubuntu 20.04 or some fedora or arch. Or learn and use docker instead. because this is only one package(mongoDB). God knows how many more softwares are not supported in 22.04 (They have done many critical upgrades like wayland, libssl1 → libssl3, etc. Wait a year or so for everything get supported in ubuntu 22.04 and based distros( like popos, mint, distros).\nHappy coding fella:)",
"username": "mkbhru"
},
{
"code": "",
"text": "@mkbhru , please do not copy-paste and paste the same response everywhere you see this topic. that is not much different than spam posts.\nalthough you have some points, you also seem you did not fully read the topics you posted into. we already have discussed how we solved in some posts or given links to them in others.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Sorry hacker @Yilmaz_Durmaz , I am quite newbie at the time. I just did the above trick to gain maximum responses for my query. I wasn’t aware this is illegal or so.\n",
"username": "mkbhru"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 6.0.3 is released | 2022-11-21T17:25:12.961Z | MongoDB 6.0.3 is released | 3,914 |
null | [] | [
{
"code": "",
"text": "On https://jira.mongodb.org, when I login, then click “Create Ticket”, fill in the form and save, the JIRA server gives a 200 response but my ticket is not actually created.Commenting on existing tickets is working fine. Please help.",
"username": "Johnny_Shields"
},
{
"code": "",
"text": "Hi @Johnny_Shields,Which Jira project(s) are you trying to create a new issue in? I see you have created some past issues in MONGOID, DOCS, SERVER, and RUBY.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Yep, those ones and also DRIVERS. It doesn’t seem to work for any project.",
"username": "Johnny_Shields"
},
{
"code": "",
"text": "Hi @Johnny_Shields,The drivers project is for meta-issues like driver specs so I would not be surprised if access to create issues in this project is limited to the driver team.Can you try creating an issue in one of the projects you’ve previously used (such as RUBY), instead? It would also be worth trying to clear cookies and log in again if that doesn’t work.Note: It looks like you may have multiple accounts in Jira, but I’m assuming you’re trying with the same login linked to your forum account.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi, I’ve tried with two browsers and two accounts, including MONGOID which I could raise tickets for 1 week ago, so I’m thinking the issue is not on my side.",
"username": "Johnny_Shields"
},
{
"code": "",
"text": "Added:\nI finally figured out that I had to login to JIRA using the same credentials as for the MongoDB website. I successfully submitted my bug report.Original:\nI can’t figure out how to create an Issue, either here or in JIRA. I want to submit a bug report on the MongoDB Installer. Can someone please give me detailed instructions? The tools are too complicated for me to understand. In JIRA there is no button to create a new bug Issue. Help!",
"username": "David_Spector"
}
] | Help! I can't create new tickets on MongoDB JIRA | 2021-08-22T13:25:27.904Z | Help! I can’t create new tickets on MongoDB JIRA | 3,351 |
null | [
"swift"
] | [
{
"code": "please_report_this_issue_in_github_realm_realm_core_v_12_6_0",
"text": "A crash report from one of our customers hit the following at the bottom of the stack trace:please_report_this_issue_in_github_realm_realm_core_v_12_6_0 (terminate.cpp:65)I thought I’d check before blundering in with error reports - does that mean this event should be reported as an issue here? https://github.com/realm/realm-core/issues",
"username": "Ralph_Wessel"
},
{
"code": "#### SDK and version\nSDK : ? (Cocoa, Java, etc)\nVersion: ?\n\n#### Observations\n* How frequent do the crash occur?\n* Does it happen in production or during dev/test?\n* Can the crash be reproduced by you?\n* Can you provide instructions for how we can reproduce it?\n\n#### Crash log / stacktrace\n<!-- The full stack trace. -->\n\n#### Steps & Code to Reproduce\n<!-- What steps/operations resulted in the crash? Please show any relevant code or steps that WE can\n<!-- use to reproduce it. Even better is a full sample project that can reproduce the crash. -->\n<!-- Code and files can be shared privately at [email protected] if needed. -->\n",
"text": "Hi @Ralph_Wessel,Yes, that is the correct GItHub repo for Realm core issues. If you are signed into GitHub and create a new issue there is a template for Crash Reports: Sign in to GitHub · GitHubYou should see something similar to the following:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Crash reporting? | 2023-01-24T11:57:46.382Z | Crash reporting? | 983 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": " {\n \"_id\": \"61bb24d6aaee34d23dcf9782\",\n \"price\": 10250,\n \"brand\": \"audi\",\n \"model\": \"a4\",\n \"fuel\": \"diesel\",\n \"power\": 150,\n \"type\": \"sedan\",\n \"gearbox\": \"manual\",\n \"color\": \"blue\",\n },\n {\n \"_id\": \"62178eb749256d30c944f3cf\",\n \"price\": 15500,\n \"brand\": \"toyota\",\n \"model\": \"rav4\",\n \"fuel\": \"gas\",\n \"power\": 150,\n \"type\": \"suv\",\n \"gearbox\": \"automatic\",\n \"color\": \"red\",\n },\n {\n \"_id\": \"61bb3b73aaee34d23dcf9d82\",\n \"price\": 11500,\n \"brand\": \"bmw\",\n \"model\": \"318\",\n \"fuel\": \"diesel\",\n \"mileage\": 224999,\n \"power\": 143,\n \"type\": \"van\",\n \"gearbox\": \"automatic\",\n \"color\": \"yellow\",\n },\n {\n \"_id\": \"61bb24d6aaee34d23dcf9782\",\n \"price\": 10250,\n \"brand\": \"audi\",\n \"model\": \"a4\",\n \"fuel\": \"diesel\",\n \"power\": 150,\n \"type\": \"sedan\",\n \"gearbox\": \"manual\",\n \"color\": \"blue\",\n },\n {\n \"_id\": \"62178eb749256d30c944f3cf\",\n \"price\": 15500,\n \"brand\": \"toyota\",\n \"model\": \"rav4\",\n \"fuel\": \"gas\",\n \"power\": 150,\n \"type\": \"suv\",\n \"gearbox\": \"automatic\",\n \"color\": \"red\",\n },\n {\n \"_id\": \"61bb3b73aaee34d23dcf9d82\",\n \"price\": 11500,\n \"brand\": \"bmw\",\n \"model\": \"318\",\n \"fuel\": \"diesel\",\n \"mileage\": 224999,\n \"power\": 143,\n \"type\": \"van\",\n \"gearbox\": \"automatic\",\n \"color\": \"yellow\",\n },\n{\n \"brand\": [audi,toyota,bmw],\n \"fuel\":[diesel,gas],\n \"type\":[sedan,suv,van],\n \"gearbox\":[manual,automatic],\n \"color\": [blue,red,yellow],\n}\n",
"text": "I’m implementing a proof of concept were I have a collection with objects that have different schemas, but for example in a query I get these results using the $match pipeline stage :how can I get MongoDB to return all those documents plus an object that has all the distinct qualitative (not quantitative) keys and for each key, all its possible values (of the current objects) I would like the output to be like this:I’ve reed de docs from the beginning to end, and it seems that the aggregation pipeline does not have any stage that enables this kind of behavior, I’m I right? Or there’s some way to get it?Thanks",
"username": "Luis_Figueira"
},
{
"code": "$group$push$addToSet{\n \"brand\": [audi,toyota,bmw],\n \"fuel\":[diesel,gas],\n \"type\":[sedan,suv,van],\n \"gearbox\":[manual,automatic],\n \"color\": [blue,red,yellow],\n}\n",
"text": "Hello @Luis_Figueira, welcome to the forum!You can use the aggregation $group stage to get the following output. You can apply the Accumulator Operators within the $group; for example, $push and $addToSet are useful.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks @Prasad_Saya,I’ll explore and try the resources you mentioned and will give you some feedback afterwards.",
"username": "Luis_Figueira"
}
] | How to add a document, with all the diferent keys, to the returned documents | 2022-05-08T22:19:12.779Z | How to add a document, with all the diferent keys, to the returned documents | 1,461 |
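A sketch of the $group stage suggested above, written with PyMongo; the collection name and the $match stage are placeholders standing in for whatever filter produced the sample documents. If the matched documents and the distinct values are needed in one round trip, the same $group can also sit inside a $facet stage next to the main results.

```python
# Collect the distinct values of each qualitative field with $group + $addToSet.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
cars = client["mydb"]["cars"]                        # assumed collection name

pipeline = [
    {"$match": {"fuel": {"$in": ["diesel", "gas"]}}},   # reuse the main query's filter
    {"$group": {
        "_id": None,
        "brand":   {"$addToSet": "$brand"},
        "fuel":    {"$addToSet": "$fuel"},
        "type":    {"$addToSet": "$type"},
        "gearbox": {"$addToSet": "$gearbox"},
        "color":   {"$addToSet": "$color"},
    }},
    {"$project": {"_id": 0}},
]

print(list(cars.aggregate(pipeline)))
# e.g. [{"brand": ["audi", "toyota", "bmw"], "fuel": ["diesel", "gas"], ...}]
```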
null | [
"database-tools"
] | [
{
"code": "",
"text": "when I run mongodump against one database, it failed a few times, not always.\nthe primary node mongod stopped. I added readpreferrence to secondary but it failed sometimes too.\nerror message like following:\n“error creating intents to dump: error creating intents for database, io_service: connection() error occured during connection handshake: context deadline exceeded”\nanyone had the same experience? mongodb ent 4.4.3. mongodump version 100.2.1\nthanks,",
"username": "Hank_Su"
},
{
"code": "",
"text": "I am facing the same problem, the backup on my local hosted database is working, but when it comes to mongo atlas, it is failing with that error sometimes returning partial data.\n\nimage770×79 6.89 KB\n",
"username": "Fabrice_Hafashimana"
},
{
"code": "connection() error occurred during connection handshake: context deadline exceeded\n",
"text": "I am facing the same issue but to run mongorestore",
"username": "Rafael_Martins"
},
{
"code": "",
"text": "Hey guys, I solved the problem! It is a connection issue, I changed my connection and it works!",
"username": "Rafael_Martins"
}
] | Mongodump failed with error : connection handshake: context deadline exceeded | 2021-11-24T17:02:05.921Z | Mongodump failed with error : connection handshake: context deadline exceeded | 4,736 |
[] | [
{
"code": "",
"text": "Greetings.I want to install mongoDB but I cannot. I tried it at Oracle cloud, Ubuntu 22.04 LTS. I tried to follow this, but error occurs in last command(sudo apt install mongodb-org).\n\n스크린샷 2022-08-16 14-35-17866×614 109 KB\n\nI tried to sudo apt install libcurl1.1, but it occurs Error: E: Package ‘libssl1.1’ has no installation candidate.How can I install MongoDB in Ubuntu 22.04 LTS? I cannot downgrade my instance’s OS.",
"username": "Kyungsun_Ha"
},
{
"code": "",
"text": "PS. I founded ‘E: Unable to correct problems, you have held broken packages.’ and why this error occurs.I tried to sudo apt install libcurl1.1, but it occurs Error: E: Package ‘libssl1.1’ has no installation candidate.this is that’s solution.",
"username": "Kyungsun_Ha"
},
{
"code": "",
"text": "Check this link",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Just switch back to ubuntu 20.04 or some fedora or arch. Or learn and use docker instead. because this is only one package(mongoDB). God knows how many more softwares are not supported in 22.04 (They have done many critical upgrades like wayland, libssl1 → libssl3, etc. Wait a year or so for everything get supported in ubuntu 22.04 and based distros( like popos, mint, distros).\nI now understand why even arch exits at first place. And why fedora is is becoming popular\nHappy coding fella:)",
"username": "mkbhru"
}
] | I cannot MongoDB on Ubuntu 22.04 LTS | 2022-08-16T05:48:45.581Z | I cannot MongoDB on Ubuntu 22.04 LTS | 12,542 |
null | [
"queries",
"mongodb-shell"
] | [
{
"code": "",
"text": "I AM GETTING BELOW ERROR WHILE INSTALLING Mongodb on ubuntu 18.04dpkg: dependency problems prevent configuration of mongodb-org: mongodb-org depends on mongodb-mongosh; however: Package mongodb-mongosh is not configured yet.dpkg: error processing package mongodb-org (–configure): dependency problems - leaving unconfigured Setting up mongodb-org-tools (6.0.3) … Setting up mongodb-org-server (6.0.3) … System has not been booted with systemd as init system (PID 1). Can’t operate. Setting up mongodb-org-database (6.0.3) … Processing triggers for man-db (2.8.3-2ubuntu0.1) … Errors were encountered while processing: mongodb-org E: Sub-process /usr/bin/dpkg returned an error code (1)",
"username": "sonali_padhi"
},
{
"code": "",
"text": "Hello @sonali_padhi ,Welcome to The MongoDB Community Forums! Have you followed the installation notes for putting MongoDB on Ubuntu ?System has not been booted with systemd as init system (PID 1)This does not look like a MongoDB error but a more general linux error, could you please confirm if you are using WSL/WSL2 and any other environment details, which might help us understanding the context of these error?dpkg: dependency problems prevent configuration of mongodb-org: mongodb-org depends on mongodb-mongosh; however: Package mongodb-mongosh is not configured yet.To troubleshoot any errors encountered during installation because of dpkg error, please follow this troubleshooting guideLet me know if you get any additional errors with steps to reproduce the error, would love to help! Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Errors were encountered while processing: mongodb-org | 2023-01-17T07:30:03.028Z | Errors were encountered while processing: mongodb-org | 2,295 |
null | [] | [
{
"code": "",
"text": "Hi there, I’m new to the Mongo space and am a database administrator.\nWe have 4 clusters, DEV, SIT, UAT and PROD.\nI want to grant a developerCan’t seem to find a way to apply read to PROD and UAT cluster only.Can anyone point me in the right direction?Kind Regards\nRod",
"username": "Rod_West"
},
{
"code": "developer_readwritedeveloper_readdeveloper_prod_read",
"text": "Hey @Rod_West,Welcome to the MongoDB Community Forums! I add it to UAT and PROD it would give the developer read/write over those clusters which I don’t wantThis looks like 4 separate systems, and thus would have 4 different users that are not connected to one another even though their username might be identical.A way to give permissions to developers across different environments in your case is to create separate roles for the developer in each cluster. For example, you can create a role called developer_readwrite in DEV and SIT, a role called developer_read in UAT, and a role called developer_prod_read in PROD. You can then grant the developer the appropriate role in each cluster. This way, they will only have read/write access in DEV and SIT, and read access in UAT, and in PROD. This approach allows for more granular control over the developer’s access permissions in each cluster.You can read more about managing users and roles here: Manage Users and Roles\nMore on Built-In RolesPlease let us know if this helps or if this is not what you’re looking for, then it would be great if you can share the process of how you created the roles and are granting the accesses to your developers. Are you creating these roles on mongo shell or using some other software to do so?Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Hi Satyam, thankyou for the considered response, very helpful. I’m using the cloud.mongodb.com interface, in the database access tab under security, I see the developers username, and it has been selected for access on 2 (DEV/SIT) of the 4 clusters with read and readwrite MongoDB Roles to the user database.\nSo the only option through this interface seems to be to add the extra clusters which would give him the same read/write throughout the UAT/Prod clusters, or login to the UAT/Prod clusters and add his username in manually through mongo shell to the read role in UAT and Prod.\nDoes this sound correct?\nThe web interface isn’t very helpful for managing this access across multiple clusters for this reason.",
"username": "Rod_West"
},
{
"code": "",
"text": "Hey @Rod_West,Thanks for clarifying on how you are granting permissions to the users. Unfortunately, Atlas currently does not support the kind of granularity that you intend to have for your clusters in the same project. It is currently not possible to grant more granular permissions at the project/cluster level for database users. What you’re after is similar to what is described in the following feedback post: More granular user privileges for Database Users in the same project.You can in the meanwhile, have individual logins for each environment. Having different projects for your different clusters would also grant you more control over what your users can access and not access. You can migrate a cluster from one project to another using the live migration option from the UI, as described in the live import documentation. Additionally, you can also restore a cluster from a backup onto a cluster in a different project. You can read more about Projects and Accesses here: Manage Project Access. However it is important to note when moving a cluster’s data to another project, Atlas does not migrate any project settings (e.g. Database Access settings, Network Access Lists, etc.). You will need to create these again specifically for the new project.Please let us know if this helps your problem or not. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Different permissions for developers across different cluster environments | 2023-01-10T05:02:03.589Z | Different permissions for developers across different cluster environments | 1,141 |
null | [
"queries"
] | [
{
"code": "",
"text": "Since I haven’t looked at the source code and don’t know the specific implementation of mongo’s bottom layer, won’t mongo delete the corresponding index record after deleting the document? or Is the B-tree not self-balancing? Otherwise, why would there be slow checks for 30,000 pieces of data?",
"username": "xiongyan_zhong"
},
{
"code": "executionStats",
"text": "Hello @xiongyan_zhong ,Welcome to The MongoDB Community Forums! There could be several reason to queries responding slow, most common reasons are resource crunch and in-efficient indexing. Please take a look at Analyze Slow Queries to make sure you are following the best practices for faster query processing. To learn more about your use case, can you please share below details?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "mongos> db.Roles.find({\"_id\" : ObjectId(\"5a6c9920991a709c50833b63\")}).explain()\n{\n \"queryPlanner\" : {\n \"mongosPlannerVersion\" : 1,\n \"winningPlan\" : {\n \"stage\" : \"SINGLE_SHARD\",\n \"shards\" : [\n {\n \"shardName\" : \"mongo1\",\n \"connectionString\" : \"xxx\",\n \"serverInfo\" : {\n \"host\" : \"xxx\",\n \"port\" : 27017,\n \"version\" : \"3.2.18\",\n \"gitVersion\" : \"4c1bae566c0c00f996a2feb16febf84936ecaf6f\"\n },\n \"plannerVersion\" : 1,\n \"namespace\" : \"db101.Roles\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"_id\" : {\n \"$eq\" : ObjectId(\"5a6c9920991a709c50833b63\")\n }\n },\n \"winningPlan\" : {\n \"stage\" : \"SHARDING_FILTER\",\n \"inputStage\" : {\n \"stage\" : \"IDHACK\"\n }\n },\n \"rejectedPlans\" : [ ]\n }\n ]\n }\n },\n \"ok\" : 1\n}\nmongos> db.Roles.stats()\n{\n \"sharded\" : true,\n \"capped\" : false,\n \"ns\" : \"db101.Roles\",\n \"count\" : 41063,\n \"size\" : 17369891292,\n \"storageSize\" : 114935259136,\n \"totalIndexSize\" : 194359296,\n \"indexSizes\" : {\n \"_id_\" : 73748480,\n \"_id_hashed\" : 37478400,\n \"attr.lv_1\" : 8916992,\n \"attr.lf_1\" : 974848,\n \"attr.nin_1\" : 42893312,\n \"attr.nmi_1\" : 30347264\n },\n \"avgObjSize\" : 423005.90049436234,\n \"nindexes\" : 6,\n \"nchunks\" : 13679,\n \"shards\" : {\n \"mongo1\" : {\n \"ns\" : \"db101.Roles\",\n \"count\" : 12803,\n \"size\" : 5369869548,\n \"avgObjSize\" : 419422,\n \"storageSize\" : 36037365760,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption\n[details=\"Summary\"]\n[spoiler]This text will be hidden[/spoiler]\n[/details]\n=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:db101/collection/13330-7644487786444373533\",\n \"LSM\" : {\n ...\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 707593,\n \"blocks allocated\" : 3041856,\n \"blocks freed\" : 2774511,\n \"checkpoint size\" : 1913065472,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 34124185600,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 36037365760,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 1586318,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n 
\"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 4,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 122620208,\n \"bytes read into cache\" : NumberLong(\"1799809388338\"),\n \"bytes written from cache\" : NumberLong(\"1356585890194\"),\n \"checkpoint blocked page eviction\" : 5,\n \"data source pages selected for eviction unable to be evicted\" : 272026,\n \"hazard pointer blocked page eviction\" : 1960,\n \"in-memory page passed criteria to be split\" : 33477,\n \"in-memory page splits\" : 16611,\n \"internal pages evicted\" : 423515,\n \"internal pages split during eviction\" : 70,\n \"leaf pages split during eviction\" : 19359,\n \"modified pages evicted\" : 1823107,\n \"overflow pages read into cache\" : 0,\n \"overflow values cached in memory\" : 0,\n \"page split during eviction deepened the tree\" : 1,\n \"page written requiring lookaside records\" : 0,\n \"pages read into cache\" : 3120642,\n \"pages read into cache requiring lookaside entries\" : 0,\n \"pages requested from the cache\" : 16970641,\n \"pages written from cache\" : 2873107,\n \"pages written requiring in-memory restoration\" : 30,\n \"tracked dirty bytes in the cache\" : 4909962,\n \"unmodified pages evicted\" : 1994134\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"compression\" : {\n \"compressed pages read\" : 2703425,\n \"compressed pages written\" : 1760856,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 1112245,\n \"raw compression call failed, additional data available\" : 0,\n \"raw compression call failed, no additional data available\" : 0,\n \"raw compression call succeeded\" : 0\n },\n \"cursor\" : {\n \"bulk-loaded cursor-insert calls\" : 0,\n \"create calls\" : 208437,\n \"cursor-insert key and value bytes inserted\" : NumberLong(\"1361073168445\"),\n \"cursor-remove key bytes removed\" : 2867883,\n \"cursor-update value bytes updated\" : 0,\n \"insert calls\" : 1765355,\n \"next calls\" : 11847,\n \"prev calls\" : 1,\n \"remove calls\" : 737555,\n \"reset calls\" : 9858337,\n \"restarted searches\" : 25836,\n \"search calls\" : 6567165,\n \"search near calls\" : 759,\n \"truncate calls\" : 0,\n \"update calls\" : 0\n },\n \"reconciliation\" : {\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 
745691,\n \"internal page multi-block writes\" : 86808,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 20356,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 2,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 1472652,\n \"page reconciliation calls\" : 2821143,\n \"page reconciliation calls for eviction\" : 170333,\n \"pages deleted\" : 745720\n },\n \"session\" : {\n \"object compaction\" : 0,\n \"open cursor count\" : 165\n },\n \"transaction\" : {\n \"update conflicts\" : 412\n }\n },\n \"nindexes\" : 6,\n \"totalIndexSize\" : 56500224,\n \"indexSizes\" : {\n \"_id_\" : 27062272,\n \"_id_hashed\" : 9093120,\n \"base.level_1\" : 2260992,\n \"base.nickname_1\" : 8638464,\n \"base.number_id_1\" : 9129984,\n \"base.login_flag_1\" : 315392\n },\n \"ok\" : 1\n },\n",
"text": "Hi, @Tarun_Gaurexplain:stats:",
"username": "xiongyan_zhong"
},
{
"code": "\"version\" : \"3.2.18\",executionStatsdb.Roles.find({\"_id\" : ObjectId(\"5a6c9920991a709c50833b63\")})",
"text": "\"version\" : \"3.2.18\",MongoDB v3.2 reached end of life in September 2018, I would recommend you to update to MongoDB v4.2 at-least which will reach end of life in April 2023. The latest releases includes many improvements including performance improvements.Cloud you please provide me with more information to check if I can replicate the issue and figure out the reason for the same?Please share output of your query with explain in executionStats mode (e.g. `db.collection.explain(‘executionStats’).aggregate(…)), this will help us understand the execution plan being used by the queryPlanner and other relevant parameters.When you say slow are you talking about db.Roles.find({\"_id\" : ObjectId(\"5a6c9920991a709c50833b63\")}) query or other queries? How much slowness are you seeing? Have you experienced it just now may be after changing something or are you facing it from the start?Full output of your db.collection.stats(), as it appears that the output you posted is truncated.Topology of your cluster and the hardware configuration of a shard?what is the Shard key for the collection in question?Also please provide full output of sh.status().",
"username": "Tarun_Gaur"
}
] | In order to improve the slow query situation, after deleting 98% of the useless data in the 2 million data, it was found that the size of the collection index did not change, and the slow query did not improve | 2023-01-13T08:04:06.907Z | In order to improve the slow query situation, after deleting 98% of the useless data in the 2 million data, it was found that the size of the collection index did not change, and the slow query did not improve | 980 |
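To gather the executionStats output requested above without mongosh, the explain command can be sent from PyMongo as in this sketch; the connection string is a placeholder and the namespace is taken from the thread.

```python
# Fetch an executionStats explain plan for the slow find() via the explain command.
import pprint
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI to the mongos
db = client["db101"]                                 # database named in the thread

explain_output = db.command({
    "explain": {
        "find": "Roles",
        "filter": {"_id": ObjectId("5a6c9920991a709c50833b63")},
    },
    "verbosity": "executionStats",
})

# totalKeysExamined / totalDocsExamined / executionTimeMillis are the usual suspects
pprint.pprint(explain_output.get("executionStats", explain_output))
```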
[] | [
{
"code": "",
"text": "I made a 3 node setup with 2 servers only.\nThe auto server switching seems good.\nDo you think if there is any issue with this kind of setup?\n\nimage700×400 37.2 KB\n",
"username": "shuryanc"
},
{
"code": "",
"text": "1 of the issue that I found is that:\nIf Primary is down and something is wrote in to Secondary, then if Secondary is down and Primary is up later, then something is wrote in the Primary, the data we wrote to Secondary will be lost after Secondary is up again.If Secondary is down and something is wrote into Primary, then if Primary is down and Secondary is up later, then something is wrote in the Secondary, the data we wrote to Primary will be lost after Primary is up again.I think its expected since the last timestamp one will be adopted for such 2 server setup.",
"username": "shuryanc"
},
{
"code": "",
"text": "Hey @shuryanc,Welcome to the MongoDB Community Forums! In addition to the problem you pointed out, there are always issues with having multiple arbiters in a replica set and you should be sure that you need two arbiters in your current replica set instead of adding another secondary. My recommendation would be to replace the arbiter with another data-bearing member. You can read more about the issues from the documentation: Concerns with Multiple Arbiters.\nYou can also check out this forum post on the consequences of having arbiters in a replica set, see: Replica set with 3 DB Nodes and 1 Arbiter - #8 by StenniePlease let us know if this helps or not. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | 2 Server replication setup | 2023-01-17T03:42:34.272Z | 2 Server replication setup | 1,066 |
null | [] | [
{
"code": "{\n \"_id\": \"637c648fb8fcfb2bc3071bb9\",\n \"consultant_name\": \"Sam smith\",\n \"consultantUsername\": \"sam\",\n \"consultant_Password\": \"123\",\n \"type\": \"consultant\",\n \"clients\": [\n {\n \"client_name\": \"john\",\n \"client_Username\": \"mouh\",\n \"client_Password\": \"123\",\n \"type\": \"client\",\n \"documents\": [\n {\n \"name\": \"Passeport\",\n \"description\": \"copie conforme certifié de tout le passeport\",\n \"doc_upload\": \"Capture dâeÌcran, le 2022-11-28 aÌ 23.01.12.png\",\n \"_id\": \"637c648fb8fcfb2bc3071bbb\"\n },\n {\n \"name\": \"Acte de naissance\",\n \"description\": \"Pour prouver la filiation\",\n \"doc_upload\": \"_Abstract Aesthetic CD Album Cover Art.png 637c648fb8fcfb2bc3071bbc\",\n \"_id\": \"637c648fb8fcfb2bc3071bbc\"\n }\n ],\n \"_id\": \"637c648fb8fcfb2bc3071bba\"\n },\n\nrouter.put(\"/:id\", async(req,res)=>{\n \n try {\n const updateUser = await User.findByIdAndUpdate(req.params.id, {\n \n $push: {\n clients:{ \n client_name: req.body.client_name,\n client_Username: req.body.client_Username,\n client_Password: req.body.client_Password,\n//documents not updating even with one single entry\n documents : [ \n {\n name : req.body.docName,\n description : req.body.docDescription,\n doc_upload : req.body.doc_upload,\n }\n ]\n\n \n \n }\n }\n\n },{new:true});\n res.status(200).json(updateUser);\n \n }\n catch(err) {\n res.status(500).json(err);\n }\n });\n\n",
"text": "I have a db that looks like this :As you can see, Sam smith has different clients and each of those clients have different documents. My goal is to be able to allow Sam to add one more client to his portfolio AND specify the documents (as many as he wants) of that new user created all at once (when Sam creates a new user in the db). The process I used is to update his client list by creating a new client.Here’s the code to add a client (that works), please note that the document part doesn’t get updated :So my instinct here would be to push documents using the $each operator, but because the document update doesn’t work I’m kind of stuck. In an ideal world if you have the answer/reflexion to be able to update the document part with multiple values it would be appreciated. Any idea on what to do or where should I look first ?Thank you",
"username": "Elias_Mhamdi"
},
{
"code": "req.params.id",
"text": "Most likelythe document update doesn’t workbecausereq.params.idis a string and your User document _id is an ObjectId.",
"username": "steevej"
}
] | Update a nested array with many elements in a nested array | 2023-01-23T01:59:52.260Z | Update a nested array with many elements in a nested array | 541 |
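The thread above is Node/Mongoose, but the two points raised (the route parameter is a string while _id is an ObjectId, and $each is what pushes several array elements in one update) look the same in any driver. A PyMongo sketch follows; all field values are placeholders standing in for req.body, and the collection name is assumed.

```python
# Cast the string id to an ObjectId, then push a new client whose documents
# array is populated in the same update. Values mimic the sample data above.
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
users = client["mydb"]["users"]                      # assumed collection name

consultant_id = "637c648fb8fcfb2bc3071bb9"           # the :id route param (a string)

users.update_one(
    {"_id": ObjectId(consultant_id)},                # cast, otherwise nothing matches
    {"$push": {"clients": {
        "_id": ObjectId(),
        "client_name": "john",
        "client_Username": "mouh",
        "documents": [
            {"_id": ObjectId(), "name": "Passeport", "description": "copie certifiée"},
            {"_id": ObjectId(), "name": "Acte de naissance", "description": "filiation"},
        ],
    }}},
)

# Later, append several documents to an existing client's list in one update.
users.update_one(
    {"_id": ObjectId(consultant_id), "clients.client_Username": "mouh"},
    {"$push": {"clients.$.documents": {"$each": [
        {"_id": ObjectId(), "name": "Permis", "description": "copie"},
        {"_id": ObjectId(), "name": "Diplôme", "description": "copie"},
    ]}}},
)
```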
null | [] | [
{
"code": "",
"text": "Hello, my name is Néstor, a software engineer based in Montreal. I am excited to join the community as leader of the Montréal MUG! Really looking forward to grow this space and to get to know fellow coders in the area.Let’s get the party started at the Montreal, CA: MongoDB User Group!When I’m not working on code, I enjoy experiment with new recipes in the kitchen while watching soccer Please feel free to contact me in English, Spanish or French (still working on it, would be great to improve it talking about all things MongoDB).",
"username": "Nestor_Daza"
},
{
"code": "",
"text": "Can’t wait to partner with you to Néstor!",
"username": "Veronica_Cooley-Perry"
},
{
"code": "",
"text": "Hi NestorEnchanté ",
"username": "Elias_Mhamdi"
},
{
"code": "",
"text": "Hello @Nestor_DazaAwesome to see another MUG starting up. Looks like you have a good addition to the starting cohort with @steevej!",
"username": "chris"
}
] | Hello/Hola/Bonjour from beautiful Montréal | 2023-01-20T19:56:00.522Z | Hello/Hola/Bonjour from beautiful Montréal | 1,236 |
null | [
"replication",
"database-tools",
"backup"
] | [
{
"code": "Failed: restore error: error running merge command: (Unauthorized) not authorized on admin to execute command \n{ \n _mergeAuthzCollections: 1, \n tempUsersCollection: \"admin.tempusers\", \n drop: true, \n db: \"\", \n writeConcern: { \n w: \"majority\" \n }, \n lsid: { \n id: UUID(\"e6ae539f-7131-45ce-a48e-03d0410ca759\") \n }, \n $clusterTime: { \n clusterTime: Timestamp(1674472038, 57), \n signature: { \n hash: BinData(0, 3C92A7B78CAF115DCE80D2876EAB92DEDA0F0C6A), \n keyId: 7191796349149380612 \n } \n }, \n $db: \"admin\", \n $readPreference: { \n mode: \"primaryPreferred\" \n } \n}\n2023-01-23T10:45:35.606+0000\t73761 document(s) restored successfully. 0 document(s) failed to restore.\n",
"text": "Hi there,I’m restoring a replica set from local to Atlas. It looks like the restore completes but getting the below error. The DB user has full permissions on the cluster, is this error something to be concerned about?Thanks.",
"username": "Niamh_Gibbons"
},
{
"code": "adminadmin--nsExclude 'admin.*'",
"text": "Hi @Niamh_GibbonsAs Atlas manages the database users you do not have access to some collection, admin is one of them.You should exclude the admin namespace when doing the restore:\n--nsExclude 'admin.*'https://www.mongodb.com/docs/atlas/import/mongorestore/#cluster-security",
"username": "chris"
}
] | Mongorestore to Atlas - merge error | 2023-01-23T11:17:40.832Z | Mongorestore to Atlas - merge error | 1,196 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi there,The document for using geo coordinates for Mongo queries are really very helpful!On top of that, I am wondering usually how we could transform location, say we collect from customer in city-state-country, to the geo coordinates that could be recognized by Mongo? and vice versa.I am new to this, and thank you for any help with this!",
"username": "williamwjs"
},
{
"code": "",
"text": "Hi @williamwjs ,Usually you will need an external provider to do this translation , there are plenty of those.The one which is super popular is google geocodesGeocoding converts addresses into geographic coordinates to be placed on a map. Reverse Geocoding finds an address based on geographic coordinates or place IDs.Once you get the results from google with the coordinates in mongo with a geo index you good to goPavel",
"username": "Pavel_Duchovny"
},
{
"code": "{\n \"name\": \"New York City\",\n \"location\": {\n \"type\": \"Point\",\n \"coordinates\": [-74.0060, 40.7128]\n }\n}\n\n",
"text": "The document for using geo coordinates for Mongo queries are really very helpful!On top of that, I am wondering usually how we could transform location, say we collect from customer in city-state-country, to the geo coordinates that could be recognized by Mongo? and vice versa.Hello @williamwjs\nIn MongoDB, you can store location data as GeoJSON data type, which supports various geometry types such as Point, LineString, Polygon, etc. To store a location as a Point in MongoDB, you can create a document with a field containing a GeoJSON Point object that specifies the longitude and latitude coordinates of the location.For example, to store the location of “New York City, NY, USA” as a Point, you can create a document like this:To convert location in city-state-country to geo coordinates that can be recognized by MongoDB, you can use a geocoding service as I mentioned before and store the result in the same format as above.On the other hand, to convert geo coordinates stored in MongoDB to a city, state, and country, you can use reverse geocoding as I’ve mentioned before, after that you can store the result in your desired format.It is important to note that MongoDB also supports spatial indexes such as 2d and 2dsphere indexes, which enable efficient querying of GeoJSON data for proximity and location-based queries",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny @Sumanta_Mukhopadhyay , thank you for your reply and suggestions!!!A question related to this - I used Google geocoding to format “CA”, and it returned a json with geometry bounds, including only two pairs of coordinates - northeast and southwest (similar to the example json response in Geocoding request and response | Geocoding API | Google Developers). However, the mongodb Polygon needs at least four coordinate pairs, which specify the same position as the first and last coordinates. I guess the question would be whether there’s a library I could use to transform between the Google geocoding and the mongodb GeoJson?Thank you!",
"username": "williamwjs"
},
{
"code": "# Retrieve coordinates of city from Google Geocoding API\nlatitude, longitude = get_coordinates_from_google(city)\n\n# Create list of 4 points in the format [longitude, latitude]\npolygon = [[longitude, latitude], [longitude-0.1, latitude-0.1], [longitude+0.1, latitude-0.1], [longitude, latitude]]\n\n# Use $geoWithin operator in MongoDB query with $polygon operator\nresults = collection.find({\"location\": {\"$geoWithin\": {\"$polygon\": polygon}}})\n\n",
"text": "Hi @williamwjs ,Perhaps the best way is to try and assume a reasonable polygon in the middle coordinates. Why do you need a polygon and a geoNear is not enough:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Got it! Thank you!!!",
"username": "williamwjs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to transform a location (city-state-country) to geoJson | 2023-01-19T02:04:06.958Z | How to transform a location (city-state-country) to geoJson | 1,551 |
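A small PyMongo sketch tying the thread above together: store the GeoJSON Point from the example, create the 2dsphere index the replies mention, and run the proximity ($near) query that was suggested as an alternative to a polygon. The collection name and the 10 km radius are illustrative.

```python
# Store a GeoJSON Point, index it with 2dsphere, and query by distance.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")    # placeholder URI
places = client["mydb"]["places"]                     # assumed collection name

places.create_index([("location", "2dsphere")])       # required for $near / $geoWithin

places.insert_one({
    "name": "New York City",
    "location": {"type": "Point", "coordinates": [-74.0060, 40.7128]},  # [lng, lat]
})

# Documents within ~10 km of a customer's geocoded coordinates
nearby = places.find({
    "location": {"$near": {
        "$geometry": {"type": "Point", "coordinates": [-73.99, 40.73]},
        "$maxDistance": 10_000,    # metres
    }}
})
print([p["name"] for p in nearby])
```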
null | [
"queries"
] | [
{
"code": "userModel.find({\n categories: {\n $elemMatch: {\n activestatus: 'active',\n category: inquiryObject.category,\n answers: { $elemMatch: { question: {$in:questions}, answers: {$in:answers} }},\n location:{$near:{\n $geometry: {\n type: \"Point\",\n coordinates: [inquiryObject.location.coordinates[0], inquiryObject.location.coordinates[1]]\n },\n $maxDistance:'categories.servingradius',\n }}\n }\n },\n })\n",
"text": "i am using this querynow categories is array and in categories there is every object have servingradius i want to find the user whitch have that category and question answers within the user’s serving raduis how can i do that",
"username": "Muhammad_Hasnat_Shabir"
},
{
"code": "",
"text": "Hello @Muhammad_Hasnat_Shabir ,Welcome to The MongoDB Community Forums! Could you please provide additional details for us to understand your use-case better?Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Getting data from geospatial points and matching | 2023-01-20T09:26:55.372Z | Getting data from geospatial points and matching | 474 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "strictQueryfalsemongoose.set('strictQuery', false);mongoose.set('strictQuery', true);",
"text": "hi\ni am reciving thiss error while i try to connect to mongo db using mongooseBlockquote\n(node:20404) [MONGOOSE] DeprecationWarning: Mongoose: the strictQuery option will be switched back to false by default in Mongoose 7. Use mongoose.set('strictQuery', false); if you want to prepare for this change. Or use mongoose.set('strictQuery', true); to suppress this warning.i need to know what are the strictQuery and why i am getting this error",
"username": "mina_remon"
},
{
"code": "setnew Schema({..}, options);\n\n// or\n\nconst schema = new Schema({..});\nschema.set(option, value);\nstrictstrictQuery",
"text": "Hello @mina_remon ,Welcome to The MongoDB Community Forums! Schemas have a few configurable options which can be passed to the constructor or to the set method:When strict option is set to true , Mongoose will ensure that only the fields that are specified in your Schema will be saved in the database, and all other fields will not be saved (if some other fields are sent). In simple term, the strict option, ensures that values passed to our model constructor that were not specified in our schema do not get saved to the db. Mongoose supports a separate strictQuery option to avoid strict mode for query filters. This is because empty query filters cause Mongoose to return all documents in the model, which can cause issues.To learn more about these options, please go through Mongoose v6.9.0: Schemas.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | DeprecationWarning: Mongoose: the `strictQuery` | 2023-01-21T09:59:55.053Z | DeprecationWarning: Mongoose: the `strictQuery` | 20,340 |
null | [
"indexes"
] | [
{
"code": "\"type\"{ \"type\": \"house\" }{ \"type\": \"car\" }{ \"type\": \"person\" }",
"text": "I’m trying to create a Search Index for one of the collections in my database that is in a Shared Cluster, but it fails because this collection has more than 300 fields limitation for Shared Clusters (specifically: 332). Creating dynamic mappings is critical for us, we don’t want to limit the number of mapped fields. To bypass this limitation, I’m thinking of creating multiple indexes for this collection, each index based on a different key/value pair that all documents in this collection have. For example, let’s imagine all documents in this collection have the path \"type\", which can have one of three values: “house”, “car”, or “person”. Then I would like to create three Search Indexes, one for documents with { \"type\": \"house\" }, another for documents with { \"type\": \"car\" }, and a last one for documents with { \"type\": \"person\" }. Can this be done and, if so, how do I specify the key/value pair condition?",
"username": "Beltran_Figueroa"
},
{
"code": "",
"text": "Could you specific based on the field? Therefore using field mappings?",
"username": "Elle_Shwer"
}
] | Creating different Search Indexes based on key/value pair within collection | 2023-01-18T23:55:26.251Z | Creating different Search Indexes based on key/value pair within collection | 797 |
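For the Atlas Search question above, the field-mapping suggestion can be expressed as a static index definition that only maps the fields you list, which keeps the mapped-field count under the shared-tier limit. A minimal sketch, assuming hypothetical field names; the printed JSON can be pasted into the Atlas Search index JSON editor:

```python
import json

# Hypothetical static mapping: only the listed fields are indexed,
# instead of dynamically mapping all 332 fields in the collection.
search_index_definition = {
    "mappings": {
        "dynamic": False,
        "fields": {
            "type": {"type": "string"},
            "name": {"type": "string"},
            "description": {"type": "string"},
        },
    }
}

# Print as JSON so it can be pasted into the Atlas Search index editor.
print(json.dumps(search_index_definition, indent=2))
```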
null | [
"aggregation",
"python"
] | [
{
"code": "",
"text": "Please provide samples to pass variable from python to mongo aggregate",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "",
"text": "Please see the following thread.",
"username": "steevej"
},
{
"code": "",
"text": "My python code is as below:pipeline= … “_id” : “$var_id”collection.aggregate(pipeline)Here, how to pass value for $var_id?Pipeline query comes from mongo…it has got more stages… i want to pass value for 5 variables at different srages…Please advise…",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "pipeline= … \"_id\" : var_id\n",
"text": "If var_id is a python variable, then you just use it as a python variable.With not quotes and no dollar sign:",
"username": "steevej"
},
{
"code": "",
"text": "This works only if i pass mongo query directly to aggregate…Whereas my requirement is i am fetching query from mongo…and it is string…when i pass string to aggregate it throws error as aggregate accept only list…so i am using json.loads. and it expects double quotes for variable…So how to pass variable from python to mongo pipeline…\nMy pipeline varrable has big mongo query with many parameters to be pssespipeline= ($match: var_id), ($match: date: todasdate)\ncollection.aggregate(pipeline)",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "",
"text": "Could you please me solution for this?",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "pymongopymongopythonpipelinepipeline = [{\"$match\": {\"_id\": var_id, \"date\": todaysdate}}]\n",
"text": "@Krishnamoorthy_Kalidoss I assume you are using pymongo as your MongoDB driver.pymongo shows a few aggregation examples in their documentation that may be helpful.Without more details about your python code, etc., it’s difficult to be fully confident answering you, but perhaps the pipeline assignment you are looking for is:",
"username": "Cast_Away"
},
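Building on the pipeline assignment above, a minimal self-contained PyMongo sketch (connection string, database, collection and field names are placeholders): constructing the pipeline as native Python dicts avoids string substitution and json.loads entirely:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Placeholder connection details - replace with your own.
client = MongoClient("mongodb://localhost:27017")
collection = client["mydb"]["mycollection"]

var_id = "XYZ-1"
todaysdate = datetime.now(timezone.utc)

# Build the pipeline as Python dicts; PyMongo converts them to BSON,
# so dates can be passed as datetime objects and need no ISODate() wrapper.
pipeline = [
    {"$match": {"_id": var_id, "date": todaysdate}},
    # ... further stages can be appended here ...
]

for doc in collection.aggregate(pipeline):
    print(doc)
```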
{
"code": "",
"text": "Share a few examples of the strings you get from reading the query.Share the results you get running json.loads on your sample query strings.Share the variables you need to inject into your queries.",
"username": "steevej"
},
{
"code": "",
"text": "i have pipeline in mongodb as below\npipeline = [{\"$match\": {\"_id\": “${var_id}”, “date”: ${“todaysdate}” }]in python i have retrieved the above pipeline from mongo and stored in variableNow, srcQuerry=[{\"$match\": {\"_id\": “${var_id}”, “date”: ${“todaysdate}” }]now i am using aggreagateresult = collection.aggregate(srcQuery)how to pass value to var_id and date from srcQuery??",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and reformat your code appropriately.Check your quotes, they are not consistents.It is not clear if srcQuery is an array with an object or if it is still a string. Share the code you use tk read the string from mongo and that parse it into an array.Is itvar_id and dateyou want to inject or var_id and todaysdate?Where did you find that you could use ${var_id} to substitute variables in a string? Any reference to python API that mentioned that? The dollar syntax is more a shell thing rather than python. May be what you want is more like",
"username": "steevej"
},
{
"code": "",
"text": "i have tried the below steps:stored the below pipeline in mongo db\n[{\"$match\": {\"_id\": var_id, “date”: todaysdate}}]retrieved the pipeline from mongo using python and stored in variable as below\npipeline= [($match: var_id), ($match: date: todasdate)]\nhere pipeline is string.collection.aggregate(pipeline) // This line throws error as aggregate accepts only list not string.so i have used json.loads(pipeline)again now i got error as json expects double quotes for var_id… if i use double quote for var_id , value will not get assigned…Hope this helps …Please help here",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "",
"text": "Your code is still not formatted as recommended in the code snippet link I provided.Your stored query is still not using the % syntax as shown in the stackoverflow post I shared.You have to replace the variables in the string using the % syntax before you call json.loads().",
"username": "steevej"
},
{
"code": "\n \"_id\" : %s,\n \"my_dt\" : ISODate(%s)\n\n srcQueryToAggregate = srcQuery % ('\"'+id+'\"','\"'+myDate+'\"')\n print(srcQueryToAggregate) \n pipeline = json.loads(srcQueryToAggregate) \n\n```\n\n \"_id\" : \"XYZ-1\"\n\t \"my_dt\" : ISODate(\"2022-10-31T04:00:00Z\") \n```\n",
"text": "Thanks Steeve!!I have tried as below:in Mongo DB i have below pipeline:in Python i have as below:_id and my_dt from mongo db substituted with id and myDate variables in python. srcQuery has pipeline fetched from Mongodb.It prints as below:Although printing is good, i m getting below error for date at json.loads … (\"_id\" working fine.)file “C:\\Program Files\\Python38\\lib\\json\\decoder.py” line 2, in raw_decode\nraise JSONDecodeError(“Expecting Value”, s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value : line 2 column 24(char)i have date value… but still it throws error as Expecting value for my_dt. … Please advise",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "\"my_dt\" : ISODate(%s)\"my_dt\" : %s\"_id\" : \"%s\" ,\n\"_id\" : %s,",
"text": "Rather than\"my_dt\" : ISODate(%s)you need to use\"my_dt\" : %sand evaluate ISODate before doing the substitution.Also since _id is a string, it needs to be quoted in the substitution. It has to be something likerather than\"_id\" : %s,This is outside the scope of this forum since it is strictly Python knowledge. And I am not a Python programmer, I actually do not like Python, since it uses a non-visible character for indentation, the same error make’s authors did a long time ago. Stackoverflow might be better for a followup unless a user of this forum more fluent with Python takes over.",
"username": "steevej"
},
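A small sketch of the substitution approach discussed above, assuming a hypothetical stored query template: the _id placeholder is quoted, the ISODate() wrapper is dropped so the substituted text stays valid JSON, and the date string is converted to a datetime only after json.loads:

```python
import json
from datetime import datetime

# Hypothetical pipeline template as it could be stored in MongoDB:
# plain JSON, with %s placeholders and no ISODate() wrapper.
src_query = '[{"$match": {"_id": %s, "my_dt": %s}}]'

doc_id = "XYZ-1"
my_date = "2022-10-31T04:00:00Z"

# Quote both values so the substituted text remains valid JSON.
src_query_to_aggregate = src_query % ('"' + doc_id + '"', '"' + my_date + '"')
pipeline = json.loads(src_query_to_aggregate)

# If the field is stored as a BSON date, convert the string to a datetime
# before running the aggregation.
pipeline[0]["$match"]["my_dt"] = datetime.fromisoformat(my_date.replace("Z", "+00:00"))

# collection.aggregate(pipeline) can now be called as usual.
print(pipeline)
```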
{
"code": "",
"text": "Hi Krishna,You may want to checkout this library GitHub - VianneyMI/monggregate: Open souce project to make MongoDB aggregation framework easy to use in python to ease the process of building new pipelines. It will guide you along the way.",
"username": "Vianney_Mixtur"
}
] | How to pass variable to mongo pipeline from python | 2022-09-24T14:38:56.837Z | How to pass variable to mongo pipeline from python | 7,330 |
null | [
"replication",
"java",
"atlas-cluster",
"serverless",
"spring-data-odm"
] | [
{
"code": "Caused by: com.mongodb.MongoConfigurationException: A TXT record is only permitted to contain the keys [authsource, replicaset], but the TXT record for 'domain.mongodb.net' contains the keys [loadbalanced, authsource]\n",
"text": "Hi there,\nI’m currently experiencing some issue trying to connect to my Mongo Atlas Serverless Cluster through a Spring Boot application.\nI’ve seen that the version 2.5 of spring boot was not compatible due to outdated mongo-java driver.\nTherefore I upgraded the project to version 2.7.3 hopping that would solve the connection issue, but it seems that the problem persists.\nIn the maven central repository there is not other version of the mongo-java-driver that 3.11.12 Maven Central Repository SearchThe error message reported is the following oneThanks in advance for your guidance \nCheers!",
"username": "Juba_Saadi"
},
{
"code": "",
"text": "Hi! The most recent Java driver release is 4.8.2. You can find more details here. This section of the documentation may prove useful to you as well.",
"username": "Ashni_Mehta"
},
{
"code": "org.mongodb:mongo-java-driverorg.mongodb:mongodb-driver-sync",
"text": "Thank you @Ashni_Mehta, I’ve also realise that I was using the legacy mongo java driver dependency org.mongodb:mongo-java-driver and should have switched to org.mongodb:mongodb-driver-sync\nBest,\nJuba",
"username": "Juba_Saadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo Atlas Serverless and SpringBoot v2.7.3 | 2023-01-20T16:34:34.703Z | Mongo Atlas Serverless and SpringBoot v2.7.3 | 1,642 |
null | [] | [
{
"code": "public static func startRealm() {\n guard let fileUrl = FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: UserDefaults.mySharedAppGroupID)?.appendingPathComponent(\"default.realm\") else { return }\n let config = Realm.Configuration(\n fileURL: fileUrl,\n schemaVersion: 46,\n migrationBlock: { migration, oldSchemaVersion in\n /* all the various migration logic */\n },\n deleteRealmIfMigrationNeeded: false\n )\n Realm.Configuration.defaultConfiguration = config\n\n do {\n let _ = try Realm()\n } catch let error {\n print(\"error: \\(error)\")\n }\n}\ndo {\n let realm = try Realm(fileURL: fileURL)\n} catch let error {\n print(error)\n}\n",
"text": "Hey folks, I’ve got an iOS app using RealmSwift. I’m trying to set things up to access the default realm from both the main app, as well as a new Widget I’m trying to make for iOS 14. I’ve successfully got the realm migrated to a shared AppGroup container, and can successfully get the read access I need from the Widget. The problem I can’t seem to work around is Schema migrations…The only way I can get the Widget process to successfully retrieve realm data, is by initially running an identical launch configuration from both the Widget, and from the parent app’s didFinishLaunching (which requires setting the most current schema version):The above func gets called from both the Widget and the parent app’s didFinishLaunching. The problem is that if a migration is necessary, it is very inconsistent whether it actually runs or not. When the app is launched, it seems as though the Widget process is making this call for realm access before the parent app’s didFinishLaunching. Therefore, the schema version is being bumped to the current version via the Widget process, which is only sometimes (and randomly) running the migration (8 out of 10 times, it’s not running the migration and just bumping the schemaversion up anyways), and by the time the parent app calls this from didFinishLaunching, the current schema version has already been bumped by the Widget, so the parent app’s didFinishLaunching also never triggers the migrationBlock.I’ve tried attempting to initialize the realm inside the Widget, using just the fileURL:But it just complains that the schemaVersion (0) is less than the current version. Which brings me back to the root problem where the Widget is forcing the update to the latest schema version, but usually just skipping the migration block.Is there a trick I’m not doing correctly, or best practices to both access a shared realm from an AppGroup, while also being able to run schema migrations?Also important to point out that I am not using a synced realm or encrypted realm. I’ve read that multi-process realm access is not supported with sycned or encrypted, but I am not using either.Totally stumped on this, would appreciate any insight or guidance!",
"username": "DiscoPixel"
},
{
"code": "",
"text": "Hello. Have you found a solution? Could you answer, please.",
"username": "111134"
},
{
"code": "",
"text": "Yes, I have really the same problem. Did you find the solution there?",
"username": "Jiri_Ostatnicky"
},
{
"code": "",
"text": "Ok, as @DiscoPixel mentioned, the realm is starting in Widget first. And the migration block is called there. Because the schema is the same I’m making migration changes in Widget code.Or if you comment realm initialization in Widget you will get migration block in main app and you can debug it there.",
"username": "Jiri_Ostatnicky"
},
{
"code": "",
"text": "We are also finding the same issue. Posted bug report in github. Have found a probable solution also. Could check that if works.",
"username": "Shreesha_Kedlaya"
}
] | Access realm in iOS 14 Widget via shared AppGroup while also still reliably running schema migrations? RealmSwift | 2020-08-25T20:53:09.578Z | Access realm in iOS 14 Widget via shared AppGroup while also still reliably running schema migrations? RealmSwift | 3,539 |
null | [
"queries",
"crud"
] | [
{
"code": "Atlas atlas-nmat34-shard-0 [primary] bird_data> db.birds.updateOne({\"_id\":ObjectId(\"6268471e613e55b82d7065d7\")},\n... {$push:{\"diet\": {$each:[\"newts\", \"opossum\", \"skunks\", \"squirrels\"]}}})\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\nAtlas atlas-nmat34-shard-0 [primary] bird_data> db.birds.findOne({\"_id\": ObjectId(\"6268471e613e55b82d7065d7\")})\n{\n _id: ObjectId(\"6268471e613e55b82d7065d7\"),\n common_name: 'Great Horned Owl',\n scientific_name: 'Bubo virginianus',\n wingspan_cm: 111.76,\n habitat: [ 'grasslands', 'farmland', 'tall forest' ],\n diet: [ 'mice', 'small mammals', 'rabbits', 'shrews' ],\n last_seen: ISODate(\"2022-05-19T20:20:44.083Z\")\n}\n",
"text": "I was doing the lesson 2 lab in new learn.mongodb.com “CRUD Operations” course and encountered something strange.I ran the command to push the new values into the array and it successfully completed but when I went to query the document, the new values were not there. I then ran the same update again and query again and they were there after a second update.Here is the output from the first update that apparently did nothing even though the output says otherwise. I’ve moved on in the course but this was bugging me since I didn’t understand why this would happen. Is this just some lab environment/timing bug/issue or is this something may have procedural done wrong?Thanks, -michael",
"username": "MichaelB"
},
{
"code": "",
"text": "Hey @MichaelB,Thanks for bringing this to our attention. I’ll raise this issue with the concerned team who can further evaluate on why this problem is happening.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Had a lab update that didn't update | 2023-01-22T16:56:07.562Z | Had a lab update that didn’t update | 1,180 |
null | [
"dot-net",
"flexible-sync"
] | [
{
"code": " {\n \"identities\": [\n {\n \"providerType\": \"anonymous\"\n }\n ]\n }\n",
"text": "I am trying to apply a read only role for anonymous users like this:But it doesn’t seem to work, I am still able to write anything so I want to ascertain that the role applied to the user, but how can I do that?Also, if the role is applied, will it work for the offline database as well as for the synced version of it?I am using the latest .NET implementation, thank you",
"username": "Movsar_Bekaev"
},
{
"code": "",
"text": "This is mostly an atlas question. What you’re looking for is setting sync permission. Have you tried already setting those? If yes, then we need to see those in order to help you.",
"username": "Andrea_Catalini"
}
] | How to get current user's role? | 2023-01-23T04:02:44.344Z | How to get current user’s role? | 1,171 |
null | [
"dot-net"
] | [
{
"code": " var config = new RealmConfiguration\n {\n SchemaVersion = CurrentVersionDB,\n MigrationCallback = MigrationCallBack()\n ,\n };\n public static async System.Threading.Tasks.Task<Realm> SyncRealm(Realm existingRealm)\n {\n try\n {\n var myRealmAppId = \"mytutorialapp-XXX\";\n var app = Realms.Sync.App.Create(myRealmAppId);\n var user = await app.LogInAsync(Realms.Sync.Credentials.EmailPassword(\"XXX@level28\", \"PASSWORT\"));\n\n var syncConfig = new FlexibleSyncConfiguration(app.CurrentUser);\n existingRealm.WriteCopy(syncConfig);\n syncConfig.Schema = new[] { typeof(DBImage) };\n var sRealm = Realm.GetInstance(syncConfig);\n sRealm.Subscriptions.Update(() =>\n {\n var myImages = sRealm.All<DBImage>();\n sRealm.Subscriptions.Add(myImages);\n });\n await sRealm.Subscriptions.WaitForSynchronizationAsync();\n return sRealm;\n }\n catch (Exception ex)\n {\n var x = ex.Message;\n }\n return null;\n }\n",
"text": "we have an existing realm db which in the beginning of the app will be offline and filled with data.\nthe realm gets intialized with :to a certain point we want to now add sync for a few (not all ) RealmObjects with our Mongo Atlas. The understanding is that we can set up now certain objects being synced with :When we run this we get a error on all other RealmObjects we did not add as a Subscription “encountered error when flushing batch: error in onBatchComplete callback: error updating resumable progress: timed out while checking out a connection from connection pool: context canceled; maxPoolSize: 20 …”Basically we are looking for a solutions where we can on demand sync certain RealmObjects with our MongoAtlas and keep most of the data offline. Is that possible with one Realm or is it better to set up a second realm which is handling the sync with Mongo?",
"username": "developer_level28"
},
{
"code": "writeCopywriteCopy",
"text": "Hi @developer_level28 , welcome!Unfortunately writeCopy does not work with flexible sync, and I think this is the cause of your issues. It should have raised an exception, but it didn’t, so I opened an issue. Unfortunately you can’t just convert your local realm to one that uses flexible sync, so you’ll need to implement the copy yourself.If you plan to keep your data offline, and then sync only a few things sparsely, I think it would make sense to have two separate realms, one local, and one with flexible sync. Apart from writeCopy, your code makes sense, and you are using subscriptions correctly. If you want, you don’t need to specify the schema for the flexible sync necessarily.\nIf you are using a synced realm, then you cannot specify what needs to be kept local and what synchronised, as only objects that adhere to the subscription queries will be ever persisted in realm, while the rest will be deleted. So with flexible sync if the data is persisted, is synchronised and viceversa.I hope I’ve been clear enough!",
"username": "papafe"
}
] | How to configure only few objects to be synced | 2023-01-21T18:44:47.040Z | How to configure only few objects to be synced | 797 |
null | [] | [
{
"code": "",
"text": "Hi there,I recently wrote about problems with the new university. This one is an outlier so this post.When the lab started, I followed instructions to fill 2-3 pages worth of output. It is about first getting “_id” of a document, then using it to “replace” with another content, then “find” the new document to see if the content has changed.I did not memorize the original document so I wanted to compare it by \"scroll\"ing the terminal. But as I noted in the title, it was not scrollable.I closed and re-open the lab, tried clearing the cache and re-open, hard refreshed through dev tools, and even open in a private browsing tab. none has changed the result. However, \"exit\"ing the shell and reconnecting to the cluster through the files in the folder (which might be a security problem in the future), both console and shell becomes scrollable again.I take the lab about 10 hours ago, and in between, I restarted my laptop. I used the history to open that lab (unfortunately, I had clicked the “check” button) and now could compare two labs. The next lab opens fine and the shell is scrollable in it. but this one still open without the scrolling ability.Can you please check what is wrong with this lab and find a way to check if any other has the issue?PS: if you are a community member to test this many times, do not click the “check” button or the link will be removed from the page. using browser history is an option but is also a pain.The problem is in this course/lab:\nCourse: MongoDB CRUD Operations: Replace and Delete Documents\nLab: Lesson 1 - Replacing a Document in MongoDB / Practice",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi @Yilmaz_Durmaz,Thank you for attempting the labs on the new platform and sharing your feedback. I’ve raised this with the concerned team and will keep you updated.Regards,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hi @Yilmaz_Durmaz,Your feedback has helped us in improving our platform, thank you!!\nThis is to update that the issue has been fixed from our end.Kind Regards,\nSonali",
"username": "Sonali_Mamgain"
},
{
"code": "",
"text": "",
"username": "Sonali_Mamgain"
}
] | Mongo shell opens up un-scrollable in new labs (instruqt) | 2022-12-08T16:32:43.961Z | Mongo shell opens up un-scrollable in new labs (instruqt) | 1,963 |
null | [
"python"
] | [
{
"code": "",
"text": "Hi MongoDB team,I am preparing for the MongoDB Developer Exam- Python. I am following this guide to prepare for the exam which is given below[link1] :Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.However, when I read this guide[link2] → https://10geneducation.zendesk.com/hc/en-us/articles/213550903-What-topics-are-tested-on-the-C100DEV-MongoDB-Certified-Developer-Associate-Exam-I found that some topics that are not given in link1 when compared with link2 are not same, such as sharding and replication. so, which link should I follow?\nand also can you please share the exam guide that I should follow to prepare for the exam because there is so much inconsistency online?",
"username": "Ishan_Anand"
},
{
"code": "",
"text": "Hi @Ishan_Anand,Welcome to the MongoDB Community forums [link1] :MongoDB Courses and Trainings | MongoDB UniversityRecently we launched a new version of the MongoDB University, along with a new certification program. In order to become certified, please follow the updated resources and information for Associate DEV Exam.If you have any doubts, please feel free to reach out to us.Thanks,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "I also had the same question. So, per the new exam guide, there is no replication, sharding related words. Is it safe to exclude them from the preparation? Please confirm. Thanks.",
"username": "Nandhini_Madanagopal"
},
{
"code": "",
"text": "Hi @Nandhini_Madanagopal,As per new Associate Developer Exam study guide, you can safely skip replication and sharding from your preparation for the MongoDB Associate Developer Exam.Kind Regards,\nSonali",
"username": "Sonali_Mamgain"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Associate Developer Python Exam Topics | 2022-12-13T10:29:51.266Z | MongoDB Associate Developer Python Exam Topics | 2,655 |
[
"installation"
] | [
{
"code": "root@mongoserver:~# ps -aef | grep [m]ongod\nroot@mongoserver:~# ss -tlnp\nState Recv-Q Send-Q Local Address:Port Peer Address:Port Process\nLISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:((\"nginx\",pid=506,fd=7),(\"nginx\",pid=505,fd=7),(\"nginx\",pid=504,fd=7))\nLISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:((\"sshd\",pid=499,fd=3))\nLISTEN 0 128 [::]:22 [::]:* users:((\"sshd\",pid=499,fd=4))\nroot@mongoserver:~#\n\n",
"text": "Goodmorning everybody,\nI try to install MongoDB on Debian11 by following this topic:\nhttps://www.mongodb.com/docs/v5.0/tutorial/install-mongodb-on-debian/\nBut i have got the error error: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017 when i try to start the mongo.service\nhere is my mongod.conf\n\nmongodb558×625 10.9 KB\n\nand the result of the commands:Anyone can helping me, i’m newbie with mongo.\nRegards",
"username": "Arnaud_OCP"
},
{
"code": "",
"text": "Your mongod is not up\nCheck status of your service\nAlso check mongod.log for errors",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thanks a lot, finally i uninstall and reinstall the package and my server is up.\nRegards",
"username": "Arnaud_OCP"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017 | 2023-01-20T08:36:48.213Z | Error: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017 | 2,311 |
|
null | [
"python"
] | [
{
"code": "",
"text": "I was enrolled in M220P course, now it does exist on my account.\nWhy it does not visible on my account ?Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.",
"username": "faizan_fareed"
},
{
"code": "",
"text": "Hey @faizan_fareed,Welcome to the MongoDB Community Forums! When did you enroll in the course? Since the M220P course was outdated, it has been retired and a new Python Course has been launched in the new LMS. It is recommended that you take this new course since the content is up-to-date and hence will benefit you more taking up this course.Please let us know if there’s anything else you need to know. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | M220P course not visible in MongoDB University | 2023-01-20T16:08:18.467Z | M220P course not visible in MongoDB University | 1,268 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "Not sure what happened, but I have custom user data enabled in the Admin UI and user documents are no longer being saved to the users DB/collection. I recently made a PATCH API call to the below custom user data method in order set the “on_user_creation_function_id” value. Sometime around then, user documents stopped being saved. I’ve tried to create a new collection/re-point to the new collection and still no documents are saved.Any thoughts here?https://www.mongodb.com/docs/atlas/app-services/admin/api/v3/#tag/custom-user-data/operation/adminSetCustomUserDataConfig",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "Hello,Thanks for posting on the community!We would need to look into your App Services app to investigate this further and identify if this is due to the process that is meant to upkeep the collection storing the custom user data or something else.Please raise a support case with our team.Regards\nManny",
"username": "Mansoor_Omar"
}
] | Custom User Data no longer saving | 2023-01-18T15:27:59.162Z | Custom User Data no longer saving | 1,171 |
null | [
"crud"
] | [
{
"code": "filter: { $or: _ids.map(_id => ({ _id: { $oid: _id } })) },filter: { _id: { $in: { $map: { input: _ids, as: \"$oid\" } } } },\nfilter: { _id: { $in: { $oid: _ids } } },\n",
"text": "I’m trying to run an updateMany command with the data api, but I can’t figure out how to use $oid against an array of _ids. The workaround I landed on wasfilter: { $or: _ids.map(_id => ({ _id: { $oid: _id } })) },Is there a way to get a variation on one of these to work, though?",
"username": "Adam_Rackis"
},
{
"code": "curl --request POST \\\n 'https://data.mongodb-api.com/app/<Data API App ID>/endpoint/data/v1/action/updateMany' \\\n --header 'Content-Type: application/json' \\\n --header 'api-key: <Data API Key>' \\\n --data-raw '{\n \"dataSource\": \"Cluster0\",\n \"database\": \"todo\",\n \"collection\": \"tasks\",\n \"filter\": { \"_id\": {\"$in\" : [{ \"$oid\": \"6193ebd53821e5ec5b4f6c3b\" },{ \"$oid\": \"6193ebd53821e5ec5b4f6c3c\" }] } },\n \"update\": {\n \"$set\": {\n \"status\": \"updated\"\n }\n }\n }'\nfilter$oids \"filter\": { \"_id\": {\"$in\" : [{ \"$oid\": \"6193ebd53821e5ec5b4f6c3b\" },{ \"$oid\": \"6193ebd53821e5ec5b4f6c3c\" }] } }\nupdateMany[\n { _id: ObjectId(\"6193ebd53821e5ec5b4f6c3b\"), status: 'updated' },\n { _id: ObjectId(\"6193ebd53821e5ec5b4f6c3c\"), status: 'updated' }\n]\nstatus",
"text": "Hi @Adam_Rackis,Not too sure if this is what you are after but I managed to update 2 documents with the following:More specifically, the filter value used was (against an array of $oids):The documents after the updateMany Data API request:Note: Prior to the Data API request, the documents did not have a status fieldRegards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do I updateMany with the data api, and $oid | 2022-12-21T23:40:07.984Z | How do I updateMany with the data api, and $oid | 1,358 |
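The accepted curl request above can also be issued from Python. A minimal sketch using the requests library, with the App ID, API key and namespace values left as placeholders exactly as in the curl example:

```python
import requests

# Placeholder values - substitute your own Data API App ID, key and namespace.
url = "https://data.mongodb-api.com/app/<Data API App ID>/endpoint/data/v1/action/updateMany"
headers = {"Content-Type": "application/json", "api-key": "<Data API Key>"}

ids = ["6193ebd53821e5ec5b4f6c3b", "6193ebd53821e5ec5b4f6c3c"]

payload = {
    "dataSource": "Cluster0",
    "database": "todo",
    "collection": "tasks",
    # $in over a list of extended-JSON $oid values, built from plain strings.
    "filter": {"_id": {"$in": [{"$oid": _id} for _id in ids]}},
    "update": {"$set": {"status": "updated"}},
}

response = requests.post(url, json=payload, headers=headers)
print(response.json())
```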
null | [] | [
{
"code": "",
"text": "I am trying to understand the breakdown of pricing for mongodb atlas cloud backups. It is confusing,. Can anyone shed some light on the below please,In the pricing I see the following what do these mean?Atlas Backup Snapshot Export Download VM M50 - server hours\nAtlas Backup Snapshot Export Restore Storage - GB Hours\nAtlas Backup Snapshot Export Upload - AWS S3\nAtlas Cloud Backup Storage - AWS - GB Days\nAtlas Continuous Cloud Backup Storage - GB Days",
"username": "Raj_V"
},
{
"code": "",
"text": "Hi @Raj_V - Welcome to the community.I am trying to understand the breakdown of pricing for mongodb atlas cloud backups. It is confusing,. Can anyone shed some light on the below please,In the pricing I see the following what do these mean?Please contact the Atlas support team via the in-app chat to investigate any operational and billing issues related to your Atlas account. You can additionally raise a support case if you have a support subscription. The community forums are for public discussion and we cannot help with service or account / billing enquiries.Some examples of when to contact the Atlas support team:The Atlas support team should be able to provide more details into the billing lines you provided.Best Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Mongodb atlas cloud backup pricing breakdown querstion | 2023-01-21T19:33:42.000Z | Mongodb atlas cloud backup pricing breakdown querstion | 651 |
null | [
"node-js",
"app-services-cli",
"react-js"
] | [
{
"code": "// This line of code does not cause an error\nimport Realm from 'realm';\n\n// This line of code causes the error\nconst realm = new Realm.App({id: `${app_id_here}`})\n[0] **ERROR** in **./node_modules/bindings/bindings.js** **5:9-22**\n\n[0] **Module** **not found** **:** **Error** **: Can't resolve 'fs' in '{path_to_project}/node_modules/bindings'**\n\n[0]\n\n[0] **ERROR** in **./node_modules/bindings/bindings.js** **6:9-24**\n\n[0] **Module** **not found** **:** **Error** **: Can't resolve 'path' in '{path_to_project}/node_modules/bindings'**\n\n[0]\n\n[0] **BREAKING CHANGE** **: webpack < 5 used to include polyfills for node.js core modules by default.**\n\n[0] This is no longer the case. Verify if you need this module and configure a polyfill for it.\n\n[0]\n\n[0] If you want to include a polyfill, you need to:\n\n[0] - add a fallback 'resolve.fallback: { \"path\": require.resolve(\"path-browserify\") }'\n\n[0] - install 'path-browserify'\n\n[0] If you don't want to include a polyfill, you can use an empty module like this:\n\n[0] resolve.fallback: { \"path\": false }\n\n[0]\n\n[0] **ERROR** in **./node_modules/file-uri-to-path/index.js** **5:10-29**\n\n[0] **Module** **not found** **:** **Error** **: Can't resolve 'path' in '{path_to_project}/node_modules/file-uri-to-path'**\n\n[0]\n\n[0] **BREAKING CHANGE** **: webpack < 5 used to include polyfills for node.js core modules by default.**\n\n[0] This is no longer the case. Verify if you need this module and configure a polyfill for it.\n\n[0]\n\n[0] If you want to include a polyfill, you need to:\n\n[0] - add a fallback 'resolve.fallback: { \"path\": require.resolve(\"path-browserify\") }'\n\n[0] - install 'path-browserify'\n\n[0] If you don't want to include a polyfill, you can use an empty module like this:\n\n[0] resolve.fallback: { \"path\": false }\n\n[0]\n\n[0] webpack compiled with **3 errors** and **2 warnings**\nimport * as Realm from 'realm-web';\n\nconst realm = new Realm.App({id: `${app_id_here}`})\n{\n nodeIntegration: true,\n contextIsolation: false\n}\n",
"text": "Hello, I’m running into issues integrating Realm into an Electron-React application.I tried following this guide: https://www.mongodb.com/docs/realm/sdk/node/integrations/electron-cra/ but am running into several error messages when running the following lines of code:The complete error messages:Note that I do not receive an error when I run this code:I need this application to work offline-first, however, and so realm-web will not suit all of my needs.For context, I run into these errors with Realm v.11.3.2 as well as the latest versions of React and Electron as well as the recommended versions in the guide (e.g. Electron v. 13.2.x).Lastly, I have the following webPreferences for electron:as well as the recommended craco.config.js settings from the guide.Please advise!",
"username": "Alexander_Ye"
},
{
"code": "**BREAKING CHANGE** **: webpack < 5 used to include polyfills for node.js core modules by default.**\n\n[0] This is no longer the case. Verify if you need this module and configure a polyfill for it.\npath-browserify[0] If you want to include a polyfill, you need to:\n\n[0] - add a fallback 'resolve.fallback: { \"path\": require.resolve(\"path-browserify\") }'\n\n[0] - install 'path-browserify'\n",
"text": "I guess “create-react-app@4” was using webpack 4.x and now it is highly possibly you have “create-react-app@5” with webpack 5.x. The error you get complaints about “fs” and “path” modules of nodejs and the part I quoted tells they are not included by default in the app’s packaging.The error message also suggests you a solution in later lines, with path-browserify. please first try that and see if the error goes away.",
"username": "Yilmaz_Durmaz"
},
{
"code": "package.json\"scripts\": {\n \"start\": \"craco start\",\n \"build\": \"craco build\",\n \"electron-dev\": \"electron .\",\n \"electron\": \"wait-on tcp:3000 && electron .\",\n \"dev\": \"concurrently -k \\\"BROWSER=none npm start\\\" \\\"npm:electron\\\"\"\n}\nstartcraco startreact-scripts startcraco.config.jsconst nodeExternals = require(\"webpack-node-externals\");\nmodule.exports = {\n webpack: {\n configure: {\n resolve: {\n fallback: {\n \"path\": require.resolve(\"path-browserify\"),\n \"fs\": false,\n \"os\": false\n }\n },\n target: \"electron-renderer\",\n externals: [\n nodeExternals({\n allowlist: [/webpack(\\/.*)?/],\n }),\n ],\n },\n },\n};\nfallbackrealmelectronReactnodeIntegration: truecontextIsolation: false",
"text": "Hey all, so I got it to work by doing the following.In package.json, I have these scripts:Of importance here is start—we use craco start instead of react-scripts start, which is the default.My craco.config.js file now looks like this:The fallback suggestion taken from @Yilmaz_Durmaz (thank you ).Combining realm and electron with React right now does not follow electron’s security recommendations—in particular, it doesn’t use ipcRenderer with Context Isolation. But unfortunately nodeIntegration: true and contextIsolation: false is the only way that works, but hopefully that will change in the near future.Regardless, this was helpful—thanks!",
"username": "Alexander_Ye"
}
] | [Realm with Electron Using React] **Module** **not found** Error | 2023-01-22T13:17:36.234Z | [Realm with Electron Using React] **Module** **not found** Error | 2,887 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "I need a help to write a query and I would like aggregate function.\nThe scenario I’m giving is an example but it is very similar to my real scenario.I have a collection whose name is “corporate_composition”.\nFollow same data examples:{\nid: 1,\nassociate_name: partner_a,\ncorporate_name: Coca-cola\n}\n{\nid: 2,\nassociate_name: partner_b,\ncorporate_name: Coca-cola\n}\n{\nid: 3,\nassociate_name: partner_c,\ncorporate_name: Ford\n}And I have another collection whose name is “corporate_goods”.\nFollow same data examples:{\nid: 1,\ncomposition: 100% for Coca-cola,\nname_goods: build_1\n}\n{\nid: 2,\ncomposition: 59% for Coca-cola and 41% for Ford,\nname_goods: build_2\n}\n{\nid: 3,\ncomposition: 100% for Ford,\nname_goods: build_3\n}How I made a query with aggregate to get goods using the field “corporate_composition.corporate_name”\nand “corporate_goods.composition”?Using SQL is like that:SELECT corporate_goods.*\nFROM corporate_goods, corporate_composition\nWHERE corporate_goods.composition LIKE ‘%’||corporate_composition.corporate_name||’%’I tried to use lookup operator but the clausule ‘localField’ and ‘foreignField’ need to have the same value.\nI saw that have the clausule ‘let’ for create variables to clausule ‘pipeline’ but I can’t use it.Somebody can to write a exemple to resolve my question?Thanks",
"username": "Rafael_de_PauliBaptista"
},
{
"code": "{ id:1 ,\n composition : [\n { corporate : Coca-cola , percent : 100 }\n ]\n name_goods : ...\n}\n{ id:2 ,\n composition : [\n { corporate : Coca-cola , percent : 59 } ,\n { corporate : Ford , percent : 41 }\n ]\n name_goods : ...\n}\n...\n",
"text": "Please read Formatting code and log snippets in posts and then update your sample documents so that we can cut-n-paste them into our system and experiment potential solution to your issue.You are right about not being able to use localField and foreignField since your LIKE %…% and your free form text of the field composition will require $regex. Regex are slower than direct comparison. In your case a simple schema change might be in order.I would make the free form text field composition an array. Each element would become a tuple of corporate name and part percentage. So the corporate_goods collection could look likeNo regex and probably can use localField,foreignField",
"username": "steevej"
}
] | How to make a join between two string fields with contains method? | 2023-01-21T18:58:31.597Z | How to make a join between two string fields with contains method? | 1,831 |
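To illustrate the suggested schema change above: once composition is an array of { corporate, percent } sub-documents, the join can use a plain equality $lookup instead of a regex. A minimal PyMongo sketch, assuming placeholder connection details and the collection/field names from the example:

```python
from pymongo import MongoClient

# Placeholder connection - replace with your own.
db = MongoClient("mongodb://localhost:27017")["mydb"]

pipeline = [
    {
        # Join each good to the partners of the corporates it is composed of.
        "$lookup": {
            "from": "corporate_composition",
            "localField": "composition.corporate",
            "foreignField": "corporate_name",
            "as": "associates",
        }
    },
]

for good in db["corporate_goods"].aggregate(pipeline):
    print(good["name_goods"], [a["associate_name"] for a in good["associates"]])
```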
[
"aggregation",
"queries"
] | [
{
"code": "name:\"vikash\"attendance:\"present\"objectid:\"63cbb3b15cd59c7810ab89a2app.get('/stud/:id', (req, res) => {\n ProjectSchema.find({\"projectmembers._id\":req.params._id}, (err, data) => {\n if (err) return res.status(500).send(err);\n res.status(200).send(data);\n });\n})\n ProjectSchema.aggregate([\n {\n $unwind:\"$projectmembers\"\n },\n {$match : {\n \"projectmembers._id\" : req.params._id\n }}\n ], (err, data) => {\n if (err) return res.status(500).send(err);\n res.status(200).send(data);\n });\n})\n",
"text": "I am working in a MERN project and i want to fetch data from inside the array of objects through the inside objectid\nMy model viewI want to fetch name:\"vikash\" and attendance:\"present\" through the objectid:\"63cbb3b15cd59c7810ab89a2for this i had tried two codeOutput of both code is []",
"username": "priyanshu_singh2"
},
{
"code": "\"projectmembers._id\": ObjectId(req.params._id)\"projectmembers._id\":{\"$oid\": req.params._id}",
"text": "“_id” field’s type is “ObjectId” whereas “req.params._id” is transferred as a string in the request. try one of these, or check documentation (for mongoose?) how you do that:",
"username": "Yilmaz_Durmaz"
},
{
"code": "{\n \"stringValue\": \"\\\"{ 'projectmembers._id': { '$oid': undefined } }\\\"\",\n \"valueType\": \"Object\",\n \"kind\": \"ObjectId\",\n \"value\": {\n \"projectmembers._id\": {}\n },\n \"path\": \"_id\",\n \"reason\": {},\n \"name\": \"CastError\",\n \"message\": \"Cast to ObjectId failed for value \\\"{ 'projectmembers._id': { '$oid': undefined } }\\\" (type Object) at path \\\"_id\\\" for model \\\"Project\\\"\"\n}\n",
"text": "after trying one of these…output is",
"username": "priyanshu_singh2"
},
{
"code": "{ '$oid': undefined }/stud/:idreq.params._id",
"text": "{ '$oid': undefined }The undefined in the error above means that you do not pass a valid value. It looks like a bug in your code. You defined your route as/stud/:idbut you access the route’s id parameter asreq.params._idI don’t do React but I suspect that it should be req.params.id.",
"username": "steevej"
},
{
"code": "ObjectId()$oidreq.paramsreq.body/stud/:idreq.params.id",
"text": "@steevej , i missed that both ObjectId() and $oid are doomed to fail here as they are passed an “undefined” value.req.params and req.body are general names but mostly point to an “express.js” server for javascript. As steve noted, you have to use the exact name you gave in your resources; here you have /stud/:id so it has to be req.params.idby the way, if you use mongoose, you may not need to convert these manually. check the use of a model schema for passing id around.",
"username": "Yilmaz_Durmaz"
}
] | How to get data of inside the array of objects in mongo db | 2023-01-22T07:33:58.523Z | How to get data of inside the array of objects in mongo db | 2,815 |
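The core fix discussed above, converting the incoming id string to an ObjectId before matching it inside the embedded array, illustrated here as a hedged PyMongo sketch (the thread itself uses Express and Mongoose; collection and field names are taken from the posts, and the connection string is a placeholder):

```python
from bson import ObjectId
from pymongo import MongoClient

# Placeholder connection - replace with your own.
collection = MongoClient("mongodb://localhost:27017")["mydb"]["projects"]

member_id = "63cbb3b15cd59c7810ab89a2"  # arrives as a plain string, e.g. from a URL

# Convert the string to ObjectId so it matches the BSON type stored in the array.
pipeline = [
    {"$unwind": "$projectmembers"},
    {"$match": {"projectmembers._id": ObjectId(member_id)}},
    {"$project": {"_id": 0, "name": "$projectmembers.name",
                  "attendance": "$projectmembers.attendance"}},
]

for member in collection.aggregate(pipeline):
    print(member)
```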
|
null | [] | [
{
"code": "user_id{\n \"title\": \"DFFilter\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"user_id\",\n \"name\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"user_id\": {\n \"bsonType\": \"string\"\n },\n \"name\": {\n \"bsonType\": \"string\"\n },\n \"edits\": {\n \"bsonType\": \"objectId\"\n }\n }\n}\n@objc public class DFFilter : Object {\n @objc dynamic public var _id: ObjectId = ObjectId.generate()\n \n // Partition Key\n @objc dynamic var user_id: String = \"\"\n\n @objc dynamic public var name: String = \"\"\n @objc dynamic public var edits: DFEdits? = nil\n \n override public static func primaryKey() -> String? {\n return \"_id\"\n }\n // ...\n let collection = database.collection(withName: \"DFFilter\")\n\n let filterDocument : Document = [\n \"user_id\": [\n \"$ne\": AnyBSON(forUser.id)\n ]\n ]\n\n collection.find(filter: filterDocument) { result in\n switch result {\n case .failure(let error):\n print(error)\n case .success(let documents):\n print(documents)\n completion(documents.map({ document in\n let dfFilter = DFFilter(value: document)\n print (dfFilter)\n return \"foo\" // just a placeholder for now\n }))\n }\n }\nTerminating app due to uncaught exception 'RLMException', reason: 'Invalid value 'RealmSwift.AnyBSON.objectId(6089b6e38c3fafc3e01654b1)' of type '__SwiftValue' for 'object id' property 'DFFilter._id'.' (lldb) po document\n▿ 4 elements\n ▿ 0 : 2 elements\n - key : \"_id\"\n ▿ value : Optional<AnyBSON>\n ▿ some : AnyBSON\n - objectId : 6089b6e38c3fafc3e01654b1\n ▿ 1 : 2 elements\n - key : \"name\"\n ▿ value : Optional<AnyBSON>\n ▿ some : AnyBSON\n - string : \"Hey Hey\"\n ▿ 2 : 2 elements\n - key : \"user_id\"\n ▿ value : Optional<AnyBSON>\n ▿ some : AnyBSON\n - string : \"6089b62f9c0f6a24a1a5794b\"\n ▿ 3 : 2 elements\n - key : \"edits\"\n ▿ value : Optional<AnyBSON>\n ▿ some : AnyBSON\n - objectId : 6089b6e38c3fafc3e01654b2\neditsDFFilterDFEdits",
"text": "This is a repost of a question I asked in the wrong forum. I’m reposting here since it has much more to do with the Swift SDK than MongoDB Realm in general.I’m working on a Swift iOS app using Realm Sync and MongoDB Atlas. It’s a photo editing app, and I want people to be able to create filters that they have write access to, and be able to share them, so that other users can have read-only access to them to download them on their phone.I’m able to sign in, open a realm, create filters, store them, and access them.However, I’d like to run a query for all filters available to download (i.e. those which aren’t owned by me). My data is partitioned by the user_id property.Here is the schema for my filters:And here is my equivalent swift Object:And here is how I’m performing the query:However, the initializer is failing to create a local unmanaged DFFilter object from the BSON Document I’m getting from Realm:Terminating app due to uncaught exception 'RLMException', reason: 'Invalid value 'RealmSwift.AnyBSON.objectId(6089b6e38c3fafc3e01654b1)' of type '__SwiftValue' for 'object id' property 'DFFilter._id'.' Here’s what the BSON document looks like when I print it in the console:I’ve tried search around for answers but I’m coming up blank. This indicates to me that potentially my whole approach to this problem might be mistaken?It is worth pointing out that the edits property of DFFilter which you see in the schema referenced by an object_id is a different object of type DFEdits. I’m also not sure how I can query an object and its dependencies at once? Does MongoDB resolve these automatically?It seems like it might even be easier to just write a GraphQL query directly and decode the response into my local object types?Just not really sure which direction to head.",
"username": "Majd_Taby"
},
{
"code": "",
"text": "Are you using RealmSwift, the Swift SDK?",
"username": "Jay"
},
{
"code": "@import Realm;import RealmSwift",
"text": "I’m using @import Realm; in my objective-c code which is consuming my persistence-layer API.Within the persistence-layer, where all the aforementioned code snippets are, and which is built in Swift, I’m using import RealmSwiftHere is what my SPM include looks like:\nCleanShot 2021-04-29 at 09.21.27@2x1244×460 33.8 KB",
"username": "Majd_Taby"
},
{
"code": "let collection = database.collection(withName: \"DFFilter\")let results = realm.objects(DFFilter.self)@objc dynamic public var _id: ObjectId = ObjectId.generate()",
"text": "Ok. Just needed clarification so we answer in the right language; your coding in ObjC, not Swift.Objc\nlet collection = database.collection(withName: \"DFFilter\")Swift\nlet results = realm.objects(DFFilter.self)Oh, just a fyi, and this isn’t valid in Swift as Realm doesn’t have public swift vars, just vars and private vars@objc dynamic public var _id: ObjectId = ObjectId.generate()",
"username": "Jay"
},
{
"code": "database.collectionrealm.objects(DFFilter.self)realm.objects(DFFilter.self) guard let realm = inRealm.realm else { return }\n \n let results = realm.objects(DFFilter.self)\n let filteredResults = results.filter(\"user_id != '\\(forUser.id)'\")\n print(filteredResults.count) // Printed 0 in the console (Incorrect)\n \n \n //==============================\n \n let client = forUser.mongoClient(\"mongodb-atlas\")\n let database = client.database(named: \"Darkroom\")\n let collection = database.collection(withName: \"DFFilter\")\n\n let filterDocument : Document = [\n \"user_id\": [\n \"$ne\": AnyBSON(forUser.id)\n ]\n ]\n\n collection.find(filter: filterDocument) { result in\n switch result {\n case .failure(let error):\n print(error)\n case .success(let documents):\n print(documents.count) // Printed 3 in the console (Correct)\n }\n }\n",
"text": "I’m not sure I understand your objc/swift distinction. Are you suggesting that I’m using the RealmObjC SDK when I use the database.collection API, and I’m using the RealmSwift SDK when using realm.objects(DFFilter.self)Per my reading of the docs, realm.objects(DFFilter.self) will return all the DFFilter objects in that realm, which is partitioned to my user_id, so this realm isn’t aware of publicly-available filters, not owned by me (I.e. do not share my partition key).As @Andrew_Morgan suggested in the original thread, I could download all the publicly available filters in my system to each client device but that’s not really applicable here, there’s going to be an ever-growing volume of those, and it doesn’t seem memory nor disk nor bandwidth efficient to download my entire backend to each client device.I just need to query my backend for some data, and generate local unmanaged versions of that download. It’s sounding more and more like I need to just process the BSON as if it were coming from a generic backend, and populate unmanaged DFFilter objects manually?Just really curious why a Document backed by a schema, doesn’t allow me to generate a local object that defined that schema.Apologies if I asked the same question again, I just want to be extra explicit to make sure we don’t talk past each other.Appreciate the help, thank you so much.",
"username": "Majd_Taby"
},
{
"code": "database.collectionrealm.objects(DFFilter.self)@objc public class DFFilter : Object {\n @objc dynamic public var _id: ObjectId = ObjectId.generate()\n ...\n",
"text": "I’m not sure I understand your objc/swift distinction. Are you suggesting that I’m using the RealmObjC SDK when I use the database.collection API, and I’m using the RealmSwift SDK when using realm.objects(DFFilter.self)Correct. Just wanted to establish which SDK was being used as your coding platform so if we answered with code it would be applicable as the two SDK’s are quite different.I could download all the publicly available filters in my system to each client device but that’s not really applicable here, there’s going to be an ever-growing volume of those, and it doesn’t seem memory nor disk nor bandwidth efficient to download my entire backend to each client device.As far as the SDK’s go, when you touch a partition, everything in that partition is sync’d. So if you want to query for your DFFilter objects, you will have to open Realm (a partition) that has those filters and they will all be sync’d to your device. Query based (partial) syncs are not currently available (as they were in the prior Realm). I other words, for the SDK’s there’s no “server data” and “client data”; only “sync’d data” which exists on both the server and the client. When a query is run, no data is ‘pulled’ from the server, it runs against the local data (which is why it’s fast as it’s not waiting on the server or the internet’I just need to query my backend for some data, and generate local unmanaged versions of that download.You can’t query (only) the backend for data (from the SDK) because it will exist locally but you can do the latter. It’s the same process - to query against a Realm you have to open that Realm (partition), which then sync’s everything in that partition.Just really curious why a Document backed by a schema, doesn’t allow me to generate a local object that defined that schema.I am curious - your question has a local Realm Object that defines the schema of the object. Right? What does the ‘generate a local object’ part of that mean? - like if you create an object in the the Realm console an object is generated in code? (that would be cool).I do the opposite, or at least I think I am doing the opposite lol. I create my objects in code and that in turn generates/creates those objects in the Realm Console (Atlas) when it syncs.That being said, you CAN store data on the server that’s independent of the client. As long as the client doesn’t touch that data then it won’t sync (download). You could then access that via REST calls or server-side functions. That really sounds like a solution - keep your filters on the server only, make a server side function call and then parse the local data and instantiate ObjC or Swift objects to hold the data.",
"username": "Jay"
},
{
"code": "Document",
"text": "I appreciate your responses, Jay, thanks again for taking the time. I think I understand the nuance between Realms being synced automatically to their partitioned data in a collection, so let me take some time to share some diagrams that might shed some light on what I’m trying to do, and where I’m struggling.CleanShot 2021-04-29 at 15.54.18@2x2270×816 233 KBIn this scenario, there are two users: A, and B. Each has their own Realm in an iOS app. Their realms are initialized with their User ID as their partition value. They automatically get their own documents synced to the client. This is all working as expected.If User B shares a filter, it should create a new Document in the cluster which references user B, but is accessible by anyone.When a filter is shared, it will create a link that includes the ID of the shared filter (in this digram, 5)User A can use that link to install the filter. I would like to get the filter details, and create a filter, owned by User A, with the data of the shared filter.My question is: How do I get the details of that shared file in my swift app, since I can use the realm to do it. I tried querying it directly via the collection, and that worked, but I’m getting a Document type response, which I can’t initialize a Filter object from.I have seen the suggestion that I open a second realm on User A’s app, which can sync all the shared filters, and access it that way, but I don’t want to sync every shared filter in the cluster, it’d be wasteful and it will keep growing over time.It sounds like I need to just fetch the shared filter from the collection by its id (like I’m doing in my snippet), and then instantiate a filter object and populate it with the values in the Document BSON manually?",
"username": "Majd_Taby"
},
{
"code": "",
"text": "I was able to just use the GraphQL API to access the object in the Atlas cluster directly including its nested objects. Then I can construct filter objects and add them to my realm manually. Seems like that the safest/easiest way to go",
"username": "Majd_Taby"
},
{
"code": "class DFFilter : Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var _partitionKey = \"\"\n @objc dynamic var owner: UserClass? //the owner of the object\n\n let userList = List<UserClass>() //users who can access it\n //or\n let uidList = List<String>() //a list of user id string that can access this object\n",
"text": "If User B shares a filter,Unfortunately, that won’t work. From the SDK, user B won’t have access to the Shared Filters if they are only located on the server. All objects that are sync’d must contain a partition key. For user B, or any user, to access the filters they must access the filters partition, which means the filters sync to all devices.It’s a little unclear why, when a user wants to share a filter, additional data is created. Perhaps having all filters stored in one partition, and then setting up permissions that define which users can access which filters. Then sharing would just be a matter of adding the users uid to the filter. At a high level, something like this exampleI don’t want to sync every shared filter in the cluster,If you want to sync an object in a partition (a Realm), all objects in that partition are sync’d.",
"username": "Jay"
},
{
"code": "",
"text": "When a user shares B a filter with user A, they’re not sharing access to the same document. If that were the case, then any changes either user made to the document would be synced. That is not the sharing model we’re building.In our sharing model, when user B shares a filter, they create an immutable copy. Otherwise, you might install a filter that increases your saturation, then try it again the next day, and it might make your photo black & white.I’m coming around to the notion that our sharing model needs to work outside of the scope of MongoDB Realm Sync.I’m coming to the conclusion that when a user shares a filter (and “share” in this context is used in the UX, app-based sense, not the Realm partition-sharing sense), I can run a GraphQL mutation to create a new document on the server, and then use GraphQL to access that document when installing that filter, and reconstruct a new DFFilter object, owned by the user who installed that filter.",
"username": "Majd_Taby"
},
{
"code": "",
"text": "I want to circle back around on this as I was assuming you were using the RealmSwift Sync SDK, but you’re not; your also using MongoDB Remote Access - iOS SDK - not sure how I overlooked that.So flipping my suggestion around, if you are NOT sync’ing any of the Filter data, then that part of the SDK should enable you to do what you want. It may require a server side function as well - have to give that some thought.",
"username": "Jay"
},
{
"code": "",
"text": "I’m finding myself quite confused by the various SDKs to be honest:All I enabled was add Realm as a swift package manager, I’m not making an explicit decision on which SDK to use, and the boundaries between these aren’t clear to me. I’m simply using the APIs available to me.CleanShot 2021-05-04 at 14.00.41@2x1472×410 26.4 KB\n",
"username": "Majd_Taby"
},
{
"code": "",
"text": "@Majd_Taby Let me see if I can help. In the RealmSwift SDK that you have installed there are two methods for accessing data from MongoDB AtlasUse Realm Sync, this will automatically translate MongoDB documents into Realm Objects and store them locally in a Sync Realm. You fetch these documents by opening sync realms with a partitionKey, which is a field you define on the Atlas document - you pass in a value and Realm Sync grabs these documents and translates them to Realm Objects - syncing any deltas on the documents back and forth between Atlas and the synced Realm you just opened - https://docs.mongodb.com/realm/sdk/ios/examples/sync-changes-between-devices/You can fetch documents using RealmSwift’s MongoDB Remote Access APIs - this will fetch documents from Atlas based on a MQL query. The results will be returned to you as JSON document just as it would if you were using a MongoDB driver. Think of this as a REST API request-response. If you want to store them in a realm you will need to convert the documents into Realm objects in a separate non-syncing local realm. We do not provide this mapping for you because well, that’s what sync does. There are two types of realms, syncing and non-syncing - you would open separate realms (separate variables with separate configurations) and access and store data in them indepedently.\nhttps://docs.mongodb.com/realm/sdk/ios/examples/mongodb-remote-access/I hope this helps. In general, I recommend using the Realm Sync APIs as it simplifies the architecture. But there are some use cases where using the Remote Data Access APIs might be useful. Depends on your usecase",
"username": "Ian_Ward"
},
{
"code": "",
"text": "To go along with @Ian_Ward excellent explanationMuch (all?) of Realm is based in ObjC and when Swift came out a Swift interface was added to enable Swift Developers to interact with Realm objects in a more Swifty way.If you are coding in ObjC, the ObjC SDK is for you. If you are a Swift developer, RealmSwift is for you.MongoDB Remote access is built into both and enables interaction with the Realm server in a less structured way.Both ObjC and RealmSwift SDK’s rely on your custom objects whereas the Remote Access aspect doesn’t have custom objects at all and uses BSON Documents to interact with your server data.The ObjC and RealmSwift SDK’s require far less code to retrieve, update and write data. The Remote Access aspect (to me) is much more raw and low level.For your situation, combining the two I think may be a solution as with Remote Access you can get and write data without sync’ing everything in a partition. So, for example, you could download a filter, do something with it in code and write it to the server without ever sync’ing it to the device. I am pretty sure using that process, you could write it to a partition that would then sync to another user or enable shared access in some other creative way.",
"username": "Jay"
},
{
"code": "",
"text": "Ah. Thank you @Jay and @Ian_Ward for your explanations.I must admit I have never investigated Realm before the MongoDB acquisition so the historical context was missing and this context helps me follow your earlier responses.Yes based on the sharing model we’re pursuing, using a synced Realm for user-data makes sense, and using raw queries using Remote Access for downloading publicly-available documents makes sense too. I suppose the missing part in my approach was the expectation that I ought to be able to decode a Mongo DB DFEdits BSON Document to a Swift DFEdits object directly, but I can also just map the fields manually. That’s not an issue.And I presume that read-access restrictions would have to be implemented in a server-side function that ensures for only shared filters are accessible via Remote Access?",
"username": "Majd_Taby"
},
{
"code": "",
"text": "And I presume that read-access restrictions would have to be implemented in a server-side function that ensures for only shared filters are accessible via Remote Access?You can apply permissions by using Realm Rules - https://docs.mongodb.com/realm/mongodb/define-roles-and-permissions/I’d also say that you can have read-only sync realms, this is a common use case for public data for mobile apps, for instance, a catalog for an inventory or shopping app. See here:\nhttps://docs.mongodb.com/realm/sync/partitioning/#partition-strategiesAlso, a new partitioning guide is here -\nhttps://www.mongodb.com/how-to/realm-partitioning-strategies/",
"username": "Ian_Ward"
},
{
"code": "Decodable",
"text": "I know this thread is a bit old, but since I am having essentially the exact same issue, I will try my luck. Mapping this manually turn out as being pretty inefficient for me (a query for around 10 objects taking 3s+ to initialise the swift objects). Is there a way to use Decodable or some other efficient method? Also, why is RealmSwift not allowing the conversion? Since I imagine this has to be done somehow under the hood for the synced Realms…\n@Ian_Ward @Jay",
"username": "David_Kessler"
},
{
"code": "",
"text": "@David_Kessler Trying to keep all of the data in one spot. Please see the StackOverflow questions as there may be more to this part of the question:a query for around 10 objects taking 3s+ to initialise the swift objects",
"username": "Jay"
}
] | Generating Unmanaged Realm Objects from Equivalent MongoDB Atlas BSON Documents (Cross-Post) | 2021-04-29T15:26:38.291Z | Generating Unmanaged Realm Objects from Equivalent MongoDB Atlas BSON Documents (Cross-Post) | 4,672 |
null | [] | [
{
"code": "exports = function(arg){\n\n console.log(\"Start\");\n var cloudinary = require('cloudinary').v2\n\n return {arg: arg};\n};``` \n\nHere is the error: \nat require (native)\nat node_modules/fs-extra/lib/index.js:12:9(19)\n\nat require (native)\nat node_modules/get-uri/dist/file.js:57:26(48)\n\nat require (native)\nat node_modules/get-uri/dist/index.js:24:38(43)\n\nat require (native)\nat node_modules/pac-proxy-agent/dist/index.js:19:41(24)\n\nat require (native)\nat node_modules/proxy-agent/index.js:30:29(68)\n\nat optionalRequire (node_modules/cloudinary/lib/utils/index.js:1858:11(21))\nat node_modules/cloudinary/lib/uploader.js:78:39(164)\n\nat require (native)\nat node_modules/cloudinary/lib/cloudinary.js:27:28(44)\n\nat require (native)\nat node_modules/cloudinary/cloudinary.js:18:28(45)\n",
"text": "Hi team,I am trying to create a function to control files in Cloudinary.\nI tried to add the dependency both ways: Using package name / Upload .zip folder.\nI got the same error when I init cloudinary in the function. I am not sure if I am doing something wrong… ran at 1649305391531\ntook\nerror:\nfailed to execute source for ‘node_modules/cloudinary/cloudinary.js’: FunctionError: failed to execute source for ‘node_modules/cloudinary/lib/cloudinary.js’: FunctionError: failed to execute source for ‘node_modules/cloudinary/lib/uploader.js’: FunctionError: failed to execute source for ‘node_modules/proxy-agent/index.js’: FunctionError: failed to execute source for ‘node_modules/pac-proxy-agent/dist/index.js’: FunctionError: failed to execute source for ‘node_modules/get-uri/dist/index.js’: FunctionError: failed to execute source for ‘node_modules/get-uri/dist/file.js’: FunctionError: failed to execute source for ‘node_modules/fs-extra/lib/index.js’: FunctionError: failed to execute source for ‘node_modules/fs-extra/lib/fs/index.js’: TypeError: Cannot access member ‘native’ of undefined\nat node_modules/fs-extra/lib/fs/index.js:92:12(123)",
"username": "Brett_Huang"
},
{
"code": "",
"text": "Can anyone help with this question ? ",
"username": "Brett_Huang"
},
{
"code": "",
"text": "I’m facing the same issue.\n@Brett_Huang were you able to fix it? or any alternative?",
"username": "Pankaj_Patidar"
}
] | Init Cloudinary in Function failed | 2022-04-07T04:29:34.601Z | Init Cloudinary in Function failed | 1,983 |
null | [
"python"
] | [
{
"code": "mongodb / mongoexport CC=egccpython buildscripts/scons.py install-mongod --disable-warnings-as-errors/usr/local/bin/egcc/usr/include/openssl/ssl.h",
"text": "I am building mongodb / mongo on OpenBSD 7.0.I’ve tried various environment variable tricks (export CC=egcc, etc.), various pathing tricks, but python buildscripts/scons.py install-mongod --disable-warnings-as-errors in my virtual environment won’t find gcc 8.4.0 (/usr/local/bin/egcc) and it won’t find ssl.h (/usr/include/openssl/ssl.h).Any tips, please?",
"username": "Jack_Woehr"
},
{
"code": "CCCXXpython buildscripts/scons.py install-mongod --disable-warnings-as-errors CC=/usr/local/bin/egcc CXX=/usr/local/bin/egc++/usr/local/config.logVERBOSE=1 --config=force",
"text": "Hi @Jack_Woehr -In general, SCons ignores the shell environment in the interest of reproducible builds. If you wish to customize CC and CXX when invoking SCons, you must pass them as command line arguments to SCons. Try the following instead:python buildscripts/scons.py install-mongod --disable-warnings-as-errors CC=/usr/local/bin/egcc CXX=/usr/local/bin/egc++(Note: I’m guessing on that CXX value, I’m not very familiar with OpenBSD’s compiler setup).One more thing I’ll note regarding toolchain is that it looks like your toolchain is in /usr/local/. Will your toolchain automatically arrange for binaries to search for the C++ runtime in the right place? If not, some additional flags may be needed to setup an additional runpath.The issue OpenSSL is more surprising. Can you provide more details on what is going wrong? Is it failing the configure check? If so, you can look in the config.log, a path to which is printed out after a failed config, and it will often show something helpful. It might be worth building with VERBOSE=1 --config=force to make sure you aren’t just getting echoed back a cached failure.Also, it would be helpful if you could let me know what branch / version of MongoDB you are trying to build.Thanks,\nAndrew",
"username": "Andrew_Morrow"
},
{
"code": "",
"text": "Thanks, Andrew … trying this now …",
"username": "Jack_Woehr"
},
{
"code": "master f7e3b602cf",
"text": "Trying to build master f7e3b602cf",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Living on the edge!Please let me know how it works out.",
"username": "Andrew_Morrow"
},
{
"code": "CXX=/usr/local/bin/eg++scons: Configure: Checking for SSL_version(NULL) in C library ssl... \nbuild/scons/opt/sconf_temp/conftest_42d562287bca6fadd943ac430ed30054_0.c <-\n |\n |\n |#include \"openssl/ssl.h\"\n |int\n |main() {\n | SSL_version(NULL);\n |return 0;\n |}\n |\n/usr/local/bin/egcc -o build/scons/opt/sconf_temp/conftest_42d562287bca6fadd943ac430ed30054_0.o -c -std=c11 -Werror -fasynchronous-unwind-tables -ggdb -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -fno-omit-frame-pointer -fno-strict-aliasing -O2 -march=sandybridge -mtune=generic -mprefer-vector-width=128 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-const-variable -Wno-unused-but-set-variable -Wno-missing-braces -fstack-protector-strong -fno-builtin-memcmp -fPIE -DNDEBUG build/scons/opt/sconf_temp/conftest_42d562287bca6fadd943ac430ed30054_0.c\n/usr/local/bin/egcc -o build/scons/opt/sconf_temp/conftest_42d562287bca6fadd943ac430ed30054_0_a1ba5b757e2b0ce3219a399f52ae31c0 -Wl,--fatal-warnings -pthread -Wl,-z,now -fstack-protector-strong -Wl,--build-id -Wl,--hash-style=gnu -Wl,-z,noexecstack -Wl,--warn-execstack -Wl,-z,relro -Wl,--compress-debug-sections=none -pie -rdynamic build/scons/opt/sconf_temp/conftest_42d562287bca6fadd943ac430ed30054_0.o -lkvm -lcrypto -lssl -lcrypto -ldl\nld: error: unable to find library -ldl\ncollect2: error: ld returned 1 exit status\nscons: Configure: no\nlibdldlopen()libc",
"text": "Okay @Andrew_Morrow, that got me past the compiler error (BTW for Gnu it’s CXX=/usr/local/bin/eg++) but stuck with openssl apparently because a dependent library is wanted:There is no libdl on my system … if it’s looking for dlopen() that should be in libc on OpenBSD.",
"username": "Jack_Woehr"
},
{
"code": "\n \n conf = Configure(myenv, custom_tests = {\n 'CheckBoostMinVersion': CheckBoostMinVersion,\n })\n \n libdeps.setup_conftests(conf)\n \n ### --ssl checks\n def checkOpenSSL(conf):\n sslLibName = \"ssl\"\n cryptoLibName = \"crypto\"\n sslLinkDependencies = [\"crypto\", \"dl\"]\n if conf.env.TargetOSIs('freebsd'):\n sslLinkDependencies = [\"crypto\"]\n \n if conf.env.TargetOSIs('windows'):\n sslLibName = \"ssleay32\"\n cryptoLibName = \"libeay32\"\n sslLinkDependencies = [\"libeay32\"]\n \n # Used to import system certificate keychains\n if conf.env.TargetOSIs('darwin'):\n \n ",
"text": "I believe this is happening due to this logic in the build system:MongoDB currently does no testing on any BSDs, so it isn’t surprising to me that there are some incorrect assumptions. Unfortunately, you will need to make a local fix to work past this. It is reasonable to assume that you will find other issues like this. In particular, you may find that several of the vendored third party libraries (MozJS, tcmalloc, and libunwind all come to mind) lack configurations for your platforms. You may need to disable features or build against a system version in order to work past those.",
"username": "Andrew_Morrow"
},
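A minimal sketch of the kind of local fix being suggested, based on the SConstruct excerpt above: treat OpenBSD like the existing FreeBSD case so that "dl" is not linked (dlopen() lives in libc on both systems, as noted later in the thread). Whether the build system actually recognizes an 'openbsd' target name is an assumption; if it does not, removing "dl" from the list locally has the same effect:

```python
# hypothetical local patch inside checkOpenSSL() in the top-level SConstruct
sslLinkDependencies = ["crypto", "dl"]
# OpenBSD, like FreeBSD, has no libdl; dlopen() is provided by libc
if conf.env.TargetOSIs('freebsd') or conf.env.TargetOSIs('openbsd'):
    sslLinkDependencies = ["crypto"]
```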
{
"code": "",
"text": "Thanks, Andrew … I didn’t think it was going to be easy ",
"username": "Jack_Woehr"
},
{
"code": "arm_neon.hscons: Configure: Checking for C header file arm_neon.h... \nbuild/scons/opt/sconf_temp/conftest_ca1da64614a9ba196b2237c2c671e2f3_0.c <-\n |\n |#include \"arm_neon.h\"\n",
"text": "Well, I made some progress but now it has gone bonkers and is looking for arm_neon.h",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "If I remember right, that is a misleading bit of information that the build spits out. The check for that is unconditional, but failing it doesn’t stop your build. It just happens to be the last check that gets run, so it looks suspicious when you see the “failed” result. I suspect there is another error happening elsewhere.",
"username": "Andrew_Morrow"
},
{
"code": "stdoutCompiling build/opt/mongo/db/ttl.o\n/tmp//cch4k6rb.s: Assembler messages:\n/tmp//cch4k6rb.s:3658: Error: no such instruction: `vzeroupper'\n/tmp//cch4k6rb.s:3668: Error: no such instruction: `vzeroupper'\n/tmp//cch4k6rb.s:5245: Error: no such instruction: `vzeroupper'\n/tmp//cch4k6rb.s:5266: Error: no such instruction: `vzeroupper'\n/tmp//cch4k6rb.s:5275: Error: no such instruction: `vzeroupper'\n/tmp//cch4k6rb.s:5470: Error: no such instruction: `vzeroupper'\n/tmp//cch4k6rb.s:5491: Error: no such instruction: `vzeroupper'\n/tmp//cch4k6rb.s:5500: Error: no such instruction: `vzeroupper'\n/tmp//cch4k6rb.s:5820: Error: no such instruction: `vmovdqu 16(%rdx),%xmm0'\n/tmp//cch4k6rb.s:5821: Error: no such instruction: `vmovaps %xmm0,-96(%rbp)'\n/tmp//cch4k6rb.s:6000: Error: no such instruction: `vmovdqa -96(%rbp),%xmm1'\n/tmp//cch4k6rb.s:6001: Error: no such instruction: `vmovaps %xmm1,-64(%rbp)'\n",
"text": "Yes, @Andrew_Morrow , I’m sure you’re right about that.It doesn’t seem to make it into the failure log, but watching stdout I see what’s happening.\nISTR speed-reading some MongoDB docs that later versions of the x86_64 port require advanced instructions.Well.Here’s some output from the oldish computer on which I have OpenBSD installed that leads me to believe that the immediate problem is that the Intel(R) Core™2 Duo CPU T6500 is unsupported for MongoDB 5:… and so on for a page or two.",
"username": "Jack_Woehr"
},
{
"code": "-march=sandybridge-marchCCFLAGSscons ... CCFLAGS=-march=core2\n",
"text": "Yeah, that looks like your assembler doesn’t like the codegen from the compiler. However you can override our default targeting of -march=sandybridge by providing your own -march argument to GCC, again on the command line, via CCFLAGS.",
"username": "Andrew_Morrow"
},
{
"code": "Compiling build/opt/mongo/db/user_write_block_mode_op_observer.o\nvirtual memory exhausted: Cannot allocate memory\n",
"text": "That got me farther. Now …which is funny because I don’t see it hitting swap. It just exhausts free physical RAM.",
"username": "Jack_Woehr"
},
{
"code": "psutil-j NNmongod--link-model=dynamicmongodCCFLAGS=-gsplit-dwarfCCFLAGSscons ... CCFLAGS=\"-march=core2 -gsplit-dwarf\"-gsplit-dwarf--link-model=dynamic--install-action=hardlinkDESTDIR",
"text": "@Jack_Woehr -Building the server sources is fairly resource intensive. I recommend using the most powerful machine you can. But there are a few things you can try to reduce various sorts of resource constraints:By default the build will use all available cores as found by the python psutil library. But you can reduce that on the command line by passing an explicit -j N argument. Cutting that N value back from using all your local cores should reduce the memory pressure during compilation.Linking a static mongod binary can take a lot of memory. If you don’t need a production quality binary, you can build with --link-model=dynamic, which will instead build a mongod that links tons of little shared libraries. That makes the final link much less resource intensive.If you must go with a static build but don’t need debug info after development, you can try building with CCFLAGS=-gsplit-dwarf, which should reduce the amount of memory and disk used to manage debug info, but the resulting build is really only appropriate for development use since you can’t take the debug info with you easily. Note that you are already passing an argument to CCFLAGS, so the syntax for passing multiple values is a space separated string: scons ... CCFLAGS=\"-march=core2 -gsplit-dwarf\". There isn’t much use to -gsplit-dwarf for a --link-model=dynamic build, in our experience.You can save a disk space on the installation by building with --install-action=hardlink, which will hardlink files into the installation directory, rather than copying them from the build directory. This will only work as long as your build directory and installation directory are on the same filesystem. If you have customized DESTDIR or similar, that may not be true.All of the above comes with the caveat that all our developer experience with these flags is from macOS and Linux. So, just like with the code itself, your mileage may vary on BSDs.Andrew",
"username": "Andrew_Morrow"
},
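Pulling the flags discussed in this thread together, a resource-constrained OpenBSD development build might be invoked roughly like this (the -j value and the exact flag combination are illustrative, not a tested recipe):

```sh
# development-only build: dynamic link model, reduced parallelism, split debug info
python buildscripts/scons.py install-mongod --disable-warnings-as-errors \
    --link-model=dynamic --install-action=hardlink -j 2 \
    CC=/usr/local/bin/egcc CXX=/usr/local/bin/eg++ \
    CCFLAGS="-march=core2 -gsplit-dwarf"
```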
{
"code": "--link-model=dynamic-link-model=dynamic",
"text": "Thanks for all the tips, @Andrew_Morrow … Will work with this today.You’re right, it’s pretty silly to throw this little old laptop at this big compilation problem.Of course I’m really using MongoDB on a mixture of more advanced platorms, Ubuntu and Fedora.In the present case, I’m just trying to see the viability of OpenBSD as a modern MongoDB platform and had installed the latest OBSD cut on a little laptop. Thanks for your patience with my experiment!PS I couldn’t make it accept --link-model=dynamic so I tried -link-model=dynamic (1 dash) and that is running now and has already gotten further than I got before …",
"username": "Jack_Woehr"
},
{
"code": "CCFLAGS=\"-march=core2 -g -link-model=dynamic\"-gboost::optional<long long> _N;",
"text": "Using CCFLAGS=\"-march=core2 -g -link-model=dynamic\" (-g strips everything ) I was able to get far enough to run into errors that are based on the code … now I have to do some real work … My system doesn’t like boost::optional<long long> _N; …",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "@Jack_Woehr - That sounds like you are up and running on a working environment for doing the necessary porting. Please reach out of if you have further questions.",
"username": "Andrew_Morrow"
},
{
"code": "CXXFLAGS=\"-O2 -Wall -DNDEBUG\"",
"text": "Got past the boost problem with this article … I added CXXFLAGS=\"-O2 -Wall -DNDEBUG\"",
"username": "Jack_Woehr"
},
{
"code": "-O2-Wall-NDEBUG--dbgSConstruct",
"text": "I’d be careful there. We already manage optimization flags like -O2 and we always build with -Wall. The setting -NDEBUG is also something we already manage for you, based on whether the build has the --dbg flag.I definitely recommend hunting around in the top level SConstruct for these sorts of flags before introducing them via the command line. Yes, the file is huge and complex, but there is a lot of accumulated knowledge baked in there too.",
"username": "Andrew_Morrow"
},
{
"code": "",
"text": "Hi,Sorry for the slightly OT question, but which debugging options are included within the --dbg flag?",
"username": "Zikker"
}
] | Build mongo current on OpenBSD : find egcc, ssl.h | 2022-03-17T15:03:47.794Z | Build mongo current on OpenBSD : find egcc, ssl.h | 6,943 |
null | [] | [
{
"code": "",
"text": "While the navigation bar on the mongodb jumpstart start series uses realm the web uses app services. I want to add a collection to my app but I can’t see the option. How do I add a collection to my app?",
"username": "leo_adigwe"
},
{
"code": "",
"text": "I believe that “Realm” is transitioning to “Atlas App Services” but the rest should be exactly the same. If you already have a cluster setup then you can access your databases and collections via the “Data Services” tab along the top, choose your cluster and click on “Browse Collections” and hover over your database name to add a collection.\nmongodb-collections1556×1388 68.8 KB\n",
"username": "Ian"
},
{
"code": "...",
"text": "Hi @leo_adigweName changes are inevitable over time, but most of the other things that make it whole stays the same.Realm is now a name bigger than App Services, or smaller depending on where you look. Realm is the name for the small-scale mongodb server that is NoSQL equivalent of Sqlite where you can even run on mobile. App services are the hub where you can make data sync and more for your Realms, or things without Realms.On the menu panel to the left, you will see that when he clicks on the “get started” button and adds a collection, the “rule” is highlighted. And just before that, you see “cluster0 linked” in the box to the left before clicking. you can say you are basically linking a database to an app project and then defining rules, auth etc on it. You do not actually manage data but are still able to add a database/collection here without switching back and forth.So to continue, click on the “Data access → Rules” on the left, click on the ellipses (three dots, ...) of the “mongodb-atlas” in the middle section under collections, select “create collection” and create collection (in new database if you want), or directly select a collection in the list and create new rule on the right panel. To actually work on the data itself, switch to “Data Services” as @Ian noted above.If you see more differences, try checking the menu on the left for the names you see in the tutorials.",
"username": "Yilmaz_Durmaz"
}
] | Some screens on the mongodb jumpstart series are different from the mongoldb.com website. this is very odd | 2023-01-21T03:37:35.200Z | Some screens on the mongodb jumpstart series are different from the mongoldb.com website. this is very odd | 550 |
[
"graphql"
] | [
{
"code": "{\n \"name\": \"This is a text string\",\n \"results\": [\n { \"type\": \"count\", \"value\": 12, \"unit\": \"items\" },\n { \"type\": \"comment\", \"value\": \"A text comment\", \"unit\": \"n/a\" },\n { \"type\": \"weight\", \"value\": 56.69, \"unit\": \"kg\" }\n ]\n}\n{\n $jsonSchema: {\n bsonType: \"object\",\n properties: {\n _id: { bsonType: \"objectId\" },\n name: { bsonType: \"string\" },\n results: {\n bsonType: \"array\",\n items: {\n bsonType: \"object\",\n properties: {\n type: { bsonType: \"string\" },\n value: { bsonType: [ \"string\" , \"int\", \"double\" ] },\n unit: { bsonType: \"string\" }\n }\n }\n }\n }\n }\n}\n{\n $jsonSchema: {\n bsonType: \"object\",\n properties: {\n _id: { bsonType: \"objectId\" },\n name: { bsonType: \"string\" },\n results: {\n bsonType: \"array\",\n items: {\n anyOf: [\n {\n bsonType: \"object\",\n properties: { type: { bsonType: \"string\" }, value: { bsonType: \"string\" }, unit: { bsonType: \"string\" } }\n },\n {\n bsonType: \"object\",\n properties: { type: { bsonType: \"string\" }, value: { bsonType: \"int\" }, unit: { bsonType: \"string\" } }\n },\n {\n bsonType: \"object\",\n properties: { type: { bsonType: \"string\" }, value: { bsonType: \"double\" }, unit: { bsonType: \"string\" } }\n }\n ]\n }\n }\n }\n }\n}\n{\n \"title\": \"example\",\n \"properties\": {\n \"_id\": { \"bsonType\": \"objectId\" },\n \"name\": { \"bsonType\": \"string\" },\n \"results\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"type\": { \"bsonType\": \"string\" },\n \"unit\": { \"bsonType\": \"string\" },\n \"value\": { \"bsonType\": \"int\" }\n }\n }\n }\n }\n}\nquery { example { _id name results { type unit value } } }\n{\n \"data\": {\n \"example\": {\n \"_id\": \"60755d9c0000b6ffe15cf906\",\n \"name\": \"This is a text string\",\n \"results\": [\n { \"type\": \"count\", \"unit\": \"items\", \"value\": 12 },\n { \"type\": \"comment\", \"unit\": \"n/a\", \"value\": null },\n { \"type\": \"weight\", \"unit\": \"kg\", \"value\": 56 }\n ]\n }\n }\n}\n ...\n \"bsonType\": \"<BSON Type>\" | [\"<BSON Type>\", ...],\n ...\n{\n \"title\": \"example\",\n \"properties\": {\n \"_id\": { \"bsonType\": \"objectId\" },\n \"name\": { \"bsonType\": \"string\" },\n \"results\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"type\": { \"bsonType\": \"string\" },\n \"unit\": { \"bsonType\": \"string\" },\n \"value\": { \"bsonType\": [ \"string\", \"int\", \"double\" ] }\n }\n }\n }\n }\n}\nquery { example { _id name results { type unit value } } }\n{\n \"data\": null,\n \"errors\": [\n {\n \"message\": \"Cannot query field \\\"value\\\" on type \\\"ExampleResult\\\".\",\n ...\n }\n ]\n}\ntype CustomTypeComment { type: String, value: String, unit: String }\ntype CustomTypeCount { type: String, value: Int, unit: String }\ntype CustomTypeWeight { type: String, value: Double, unit: String }\nunion CustomResultType = CustomTypeComment | CustomTypeCount | CustomTypeWeight\n\ntype Event { id: String!, name: String, results: [CustomResultType] }\n[\n { \"_id\": \"...\", \"multi\": \"A string\" },\n { \"_id\": \"...\", \"multi\": 10.25 },\n]\n...\nfield: Int | String\n...\ntype CustomTypeOne { id: Int, value: String }\ntype CustomTypeTwo { id: Int, value: Int }\nunion CustomType = CustomTypeOne | CustomTypeTwo\n\ntype Event { id: Int!, results: [CustomType] }\n{\n \"name\": \"This is a text string\",\n \"results\": [\n { \"type\": \"count\", \"count\": 12, \"unit\": \"items\" },\n 
{ \"type\": \"comment\", \"comment\": \"A text comment\" },\n { \"type\": \"time\", \"time\": 5600, \"unit\": \"s\" }\n ]\n}\n",
"text": "HiEDIT: For clarity I have explained this as different object types in array instead of multiple types per field.NOTE: In the case of different scalar types in the same field there is an outstanding issue with GraphQL (not Realm GraphQL) to have this supported in GraphQL schemas see Additional Note 1 below.Environment:MongoDB Atlas cluster running MongoDB 4.4\nMongoDB Realm connected to clusterIssue:I am trying to configure a Realm schema and Realm GraphQL (and later Sync) to support data where an array of objects can contain different objects, regardless of what I try I am unable to generate/create a valid schema that reflects the data and also works for Realm GraphQL.I am investigating and asking for help in case anyone else has a solution or potential solution that could be implemented in Realm GraphQL.What I am trying to achieve:The problem:Here is a simplified sample of the data I am trying to model, specifically the “results” array should be capable of accepting different “result” objects, and ideally each “result” object should be unique with respect to “type”. (see Additional Note 2 below for alternate data structure)What I have tried so far:1 - I have inserted the typed data into a MongoDB collection:\ndata1390×560 61.2 KB\n2 - I have written a MongoDB collection validation using json schema which validates the data, note the use of an array of bsonType for “value”:The following MongoDB collection validation also validates, note the use of anyOf to validate each “results” item:3 - I then use Realm to generate a schema from the data (Realm → Schema → select collection → Schema tab → Generate Schema), Realm suggests the following schema - note that the bsonType suggested is the first type in the array - in this case int, but if a different type was first in the array the generator would suggest that but not multiple types.\nNOTE: you can set the bsonType to string which will allow a Realm GraphQL query to return the data as strings but not the type, this also means that mutations are not strongly typed and do not accept int or double values as desired.4 - This schema is accepted by Realm GraphQL (no warnings) but when a query is run the results are not correct - note that the string value is returned as null and the double value has been truncated:GraphQL query run in GraphiQL:Result:5 - If I try to modify the Realm schema to match the data using an array of bsonTypes as suggested by the Realm documentation specifically in the note:The fields available in a JSON schema object depends on the type of value that the schema defines. 
See the document schema types reference page for details on all of the available schema types.that links to this documentation and shows “The following fields are available for all schema types” should accept an array for bsonType:The Realm schema should become:however this Realm schema causes Realm GraphQL to show a warning:results.value\tInvalidType\terror processing “type” property in JSON Schemaand trying to run a query returns an error:GraphQL query run in GraphiQL:Result (error):6 - I have also tried generating a valid JSON Schema using online tools and by manually constructing a schema, in these cases the tools sometimes suggest the use of “anyOf” however in each instance even if the schema is accepted by Realm it still shows the same GraphQL Schema generation warning.Latest / TLDR:Based on my investigation and the information about GraphQL union types described in the GraphQL GitHub issues it should be possible to use a union type in the Realm GraphQL schema to handle different objects nested in arrays.In the GraphQL schema:However I cannot see any way to customise the Realm GraphQL schema - there is an open issue to allow custom Realm GraphQL schema - please vote for this:Dear MongoDb GraphQl Stitch developers,\n\nIs it possible to add an ability to modify the final GraphQl schema (in order to remove unnecessary for my API stuff)\nOR\nBuild it based on existed roles?\n\nFor example, Im building a \"read only\" GraphQl API and...\n.\n.Additional:Additional Note 1:I have found there is a GraphQL proposal to support multiple scalar types in fields, for example using a string or double for the same field: Proposal: Support union scalar types #215This is so that for the data:you could specify a GraphQL schema:Currently it seems a workaround is to define a ‘union’ type in GraphQL and then use that union type rather than scalar types:For example in GraphQL:Additional Note 2:If possible each “result” object should contain similar field names, however if this causes problems\nthe data could be modelled with each object having different field names, for example:",
"username": "bolokos_bolokos"
},
{
"code": "",
"text": "Did you found any solution for this ? I am also facing the same issue with Realm.",
"username": "Dhananjay_Puglia"
},
{
"code": "",
"text": "Edit - details moved to original post.",
"username": "bolokos_bolokos"
},
{
"code": "{ \"field\": { \"anyOf\": [{ \"type\": \"string\" }, { \"type\": \"null\" }] } }",
"text": "We don’t currently support a union of scalar types in our default GraphQL service to stay as close to the spec as possible. That being said, there is a workaround to achieve something similar by setting the schema of the field like so:{ \"field\": { \"anyOf\": [{ \"type\": \"string\" }, { \"type\": \"null\" }] } }The generated schema will ignore this field, but it can still be accessed via the custom resolver, (likely by having to define two different types).",
"username": "Sumedha_Mehta1"
},
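For context, a custom resolver is backed by an App Services function that can read the collection directly, bypassing the generated GraphQL types. A minimal sketch (the "mongodb-atlas" service name is the default linked-cluster name, and the database/collection names are placeholders, not taken from this thread):

```js
// Function bound to a custom resolver; "input" comes from the GraphQL query arguments
exports = async function (input) {
  const coll = context.services
    .get("mongodb-atlas") // default name of the linked data source
    .db("mydb")
    .collection("example");

  // Return the raw document; the resolver's payload type decides how "value" is exposed
  return await coll.findOne({ _id: BSON.ObjectId(input.id) });
};
```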
{
"code": "{\n \"title\": \"example\",\n \"properties\": {\n \"_id\": { \"bsonType\": \"objectId\" },\n \"name\": { \"bsonType\": \"string\" },\n \"results\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"type\": { \"bsonType\": \"string\" },\n \"value\": { \"anyOf\": [{ \"type\": \"string\" }, { \"type\": \"null\" } ] },\n \"unit\": { \"bsonType\": \"string\" }\n }\n }\n }\n }\n}\nField Path Error Code Message\nresults.value MissingSchemaType error processing \"type\" property in JSON Schema\n",
"text": "Hi SumedhaI tried using anyOf in my Realm schema you suggested:As expected the Realm GraphQL still gives me a schema error and ignores the field:Can you explain how I can access the collection using a custom resolver / define two different types",
"username": "bolokos_bolokos"
},
{
"code": "{\n \"title\": \"BuildingEquipment\",\n \"properties\": {\n \"_id\": { \"bsonType\": \"objectId\"},\n \"configs\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"field\": {\"bsonType\": \"string\"},\n \"value\": { \"anyOf\": [{\"bsonType\": \"string\"}, {\"bsonType\": \"double\"}]}\n }\n }\n }\n }\n}\n",
"text": "{ “anyOf”: [{ “type”: “string” }, { “type”: “null” }] }Hi @Sumedha_Mehta1. I tried your recommendation but it doesn’t seems to be working. Below is the schema that I am using:And following is the error I am getting when generating Realm Data ModelsScreenshot 2021-04-13 at 9.25.29 PM1864×644 65.8 KB",
"username": "Dhananjay_Puglia"
},
{
"code": "",
"text": "@Dhananjay_PugliaAre you using Sync in this application? That error refers to that service specifically, not GraphQL. GraphQL will just ignore that field in your schema.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "@Sumedha_Mehta1 Yes I am using Sync in this application. Will this solution not working with Sync? If no then is there any way where I can sync multiple type of data for a single field ?",
"username": "Dhananjay_Puglia"
},
{
"code": "",
"text": "We are working on a way to to introducing this for Sync pretty soon (~1 mo) - you can stay up to date on new data types by subscribing here: Realm Community Projects | Realm.io",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Great. Thanks a lot @Sumedha_Mehta1 for the update.",
"username": "Dhananjay_Puglia"
},
{
"code": "",
"text": "I see that the “mixed” type has now been added to Realm DB / Sync – I presume this would now enable you to implement union types in GraphQL? Or has it already been done? @Sumedha_Mehta1",
"username": "Nikolaj_Selvik"
},
{
"code": "",
"text": "Is there any progress on this subject?",
"username": "Mike_Notta"
},
{
"code": "",
"text": "You can apply multiple types to a field in the schema like this [“string”, “null”].\nRefer: https://docs.mongodb.com/realm/schemas/types/\nimg: https://i.imgur.com/vulUoem.pngBut those type will be ignored by the graphql schema generation.\nRefer: https://docs.mongodb.com/realm/graphql/types-and-resolvers/\nimg: https://i.imgur.com/eXxqljz.pngcurrently I cant find a workaround for this.",
"username": "Dines_Patrick"
},
{
"code": "anyOf",
"text": "Has anyone managed to get multiple types to be accepted at all? Using anyOf in the Payload Type has been mentioned as a way but this hasn’t worked for me (and seemingly for others on this thread).I’d really like to know if this can actually be achieved and see a working example.cc @bolokos_bolokos",
"username": "Ian"
}
] | Realm schema for different object types in array | 2021-04-12T13:13:34.387Z | Realm schema for different object types in array | 14,338 |
|
null | [
"aggregation",
"queries"
] | [
{
"code": "[\n {\n \"_id\": \"Id\",\n \"name\": \"Product Name\",\n \"Departments\": [\n {\n \"_id\": \"63c0faf4752a6a7cfd169602\",\n \"name\": \"Audio\",\n \"parentId\": null\n },\n {\n \"_id\": \"63c0faf4752a6a7cfd16960c\",\n \"name\": \"Home Audio & Speakers\",\n \"parentId\": \"63c0faf4752a6a7cfd169602\"\n },\n {\n \"_id\": \"63c0faf4752a6a7cfd16960d\",\n \"name\": \"Speakers\",\n \"parentId\": \"63c0faf4752a6a7cfd16960c\"\n }\n ],\n },\n {\n \"_id\": \"Id\",\n \"name\": \"Product Name\",\n \"Departments\": [\n {\n \"_id\": \"63c0faf4752a6a7cfd169957\",\n \"name\": \"Computers & Accessories\",\n \"parentId\": null\n },\n {\n \"_id\": \"63c0faf4752a6a7cfd16996c\",\n \"name\": \"Data Storage\",\n \"parentId\": \"63c0faf4752a6a7cfd169957\"\n },\n {\n \"_id\": \"63c0faf4752a6a7cfd16996e\",\n \"name\": \"SSD\",\n \"parentId\": \"63c0faf4752a6a7cfd16996c\"\n }\n ],\n },\n {\n \"_id\": \"Id\",\n \"name\": \"Product Name\",\n \"Departments\": [\n {\n \"_id\": \"63c0faf4752a6a7cfd169602\",\n \"name\": \"Audio\",\n \"parentId\": null\n },\n {\n \"_id\": \"63c0faf4752a6a7cfd16960c\",\n \"name\": \"Home Audio & Speakers\",\n \"parentId\": \"63c0faf4752a6a7cfd169602\"\n },\n {\n \"_id\": \"63c0faf4752a6a7cfd16960d\",\n \"name\": \"AV Receivers\",\n \"parentId\": \"63c0faf4752a6a7cfd16960c\"\n }\n ],\n]\n{\n\tShopDepartments: [\n\t\t{\n\t\t\t\"_id\": \"63c0faf4752a6a7cfd16960c\",\n\t\t\t\"name\": \"Home Audio & Speakers\",\n\t\t\t\"parentId\": \"63c0faf4752a6a7cfd169602\"\n },\n\t\t{\n\t\t\t\"_id\": \"63c0faf4752a6a7cfd16996c\",\n\t\t\t\"name\": \"Data Storage\",\n\t\t\t\"parentId\": \"63c0faf4752a6a7cfd169957\"\n\t\t},\n\t]\n\n}\n",
"text": "How do I group nested arrays like this.Into this. I want them to be grouped by their department names and getting only the 2nd item of the array. I tried doing different approaches and I still can’t manage to do it. Can anyone help me with this.",
"username": "Fhilip_Jhune_Fernandez"
},
{
"code": "",
"text": "You will need to use a MongoDB Search aggregation pipeline and use facets inside the $searchMeta pipeline stage to group your results.Have a look at this page on facets and in particular the examples which help to demonstrate. There is also a helpful tutorial to walk you through an example.",
"username": "Ian"
},
{
"code": "set_second_element = { $set : {\n second_element : { $arrayElemAt : [ \"$Departments\" , 1 ] }\n} }\ngroup_second_element = { $group : {\n _id : \"$second_element.name\" ,\n department : { $first : \"$second_element\" }\n} }\n{ _id: 'Home Audio & Speakers',\n department: \n { _id: '63c0faf4752a6a7cfd16960c',\n name: 'Home Audio & Speakers',\n parentId: '63c0faf4752a6a7cfd169602' } }\n{ _id: 'Data Storage',\n department: \n { _id: '63c0faf4752a6a7cfd16996c',\n name: 'Data Storage',\n parentId: '63c0faf4752a6a7cfd169957' } }\n",
"text": "I want them to be grouped by their department names and getting only the 2nd item of the arrayWith the example result and sample documents, what I understand you want is that for each input documents, you want to find the unique list of the 2nd elements of the Departments array.You extract the 2nd element with a $set stage like:Then you $group by using second_element as the group _id like:The aggregation pipeline [ set_second_element , group_second_element ] will produceWhich matches the data of the expected result. I leave as an exercise to the reader a final cosmetic $project to get the data into the exact expected format.When you redact documents for publishing please ensure we can cut-n-paste them and insert them into our system without having us to edit them. The last sample document was not terminated correctly. The \"_\"id:“Id” produce duplicate errors.",
"username": "steevej"
},
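One possible version of the final cosmetic stages left as an exercise above, collecting the grouped departments into the single ShopDepartments array shown in the expected output (a sketch; the collection name is assumed, not given in the thread):

```js
collect_departments = { $group: {
  _id: null,
  ShopDepartments: { $push: "$department" }
} }

format_output = { $project: { _id: 0, ShopDepartments: 1 } }

// full pipeline, continuing from the two stages defined above
db.products.aggregate([
  set_second_element,
  group_second_element,
  collect_departments,
  format_output
])
```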
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Group By Nested Array Aggregation | 2023-01-21T11:31:02.199Z | Group By Nested Array Aggregation | 1,150 |
null | [] | [
{
"code": "",
"text": "We are not able to install mongodb on amazon linux 2022(al2022).\nal2022 comes with default openssl3.\npackages available for al2022 MongoDB Repositoriesafter installing rpm. we are not able to start the service. Service is not found.Please advice.",
"username": "ranjini_ganesh"
},
{
"code": "",
"text": "al2022 is not on the Supported Platforms. So this would be a ‘good luck’ installation.Have you tried the Amazon Linux 2 install instructions?",
"username": "chris"
}
] | Please share the Steps to install mongodb on amazon linux 2022? | 2023-01-20T04:59:15.137Z | Please share the Steps to install mongodb on amazon linux 2022? | 970 |
null | [
"aggregation"
] | [
{
"code": "{\n \"points\": [\n {\n \"geometry\": {\n \"coordinates\": [\n {\n \"$numberDouble\": \"10.1\"\n },\n {\n \"$numberDouble\": \"20.2\"\n }\n ]\n }\n },\n {\n \"geometry\": {\n \"coordinates\": [\n {\n \"$numberDouble\": \"10.1\"\n },\n {\n \"$numberDouble\": \"20.2\"\n }\n ]\n }\n }\n ]\n}\n{\n \"points\": [\n {\n \"geometry\": {\n \"coordinates\": [10.1, 20.2]\n }\n },\n {\n \"geometry\": {\n \"coordinates\": [10.1, 20.2]\n }\n }\n ]\n}\n{\n \"$set\": {\n \"$points.geometry.coordinates\": {\n \"$toDecimal\": \"$points.$.geometry.coordinates\"\n }\n }\n}\n",
"text": "Hello,I have saved some data containing numbers as a Double in mongodb. After requesting my data in Realm with an aggregation, I get my data in a format like this:But I would like to get these in a JS-friedly format like this:How can I achieve this? I’m thinking of something like this:But maybe the solution is more complicated…Thanks in advance,\nCarsten",
"username": "Carsten_Bottcher"
},
{
"code": "{\n \"geometry\": {\n \"coordinates\": [\n {\n \"$toDouble\": \"10.1\"\n },\n {\n \"$toDouble\": \"20.2\"\n }\n ]\n }\n}\n",
"text": "Hi,You can use $toDouble operator:",
"username": "NeNaD"
},
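The $numberDouble wrappers in the question look like MongoDB Extended JSON rather than something an aggregation stage can change. If the values are already stored as doubles, one way to get plain JS numbers is to parse the response with the EJSON helpers on the client. A sketch assuming a Node.js client with the bson package (this is an assumption, not something confirmed in the thread):

```js
const { EJSON } = require("bson");

// "response" is the Extended JSON payload returned to the client
const doc = EJSON.parse(JSON.stringify(response), { relaxed: true });
// relaxed mode turns {"$numberDouble": "10.1"} into the plain number 10.1
console.log(doc.points[0].geometry.coordinates); // [10.1, 20.2]
```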
{
"code": "",
"text": "Hi NeNaD,thanks, but my problem is the other way round. After I requested the data from mongodb, they are not in the JS consumable format. Instead of the first output, I would like to have them in the second output format.",
"username": "Carsten_Bottcher"
},
{
"code": "",
"text": "I am also facing similar issue. Did you find any solution?",
"username": "Durga_Prasad_Gembali1"
}
] | How to convert double type in an aggregation | 2022-10-09T19:38:25.671Z | How to convert double type in an aggregation | 1,193 |
null | [
"flutter"
] | [
{
"code": "",
"text": "Hi, I have a basic conceptual question about using relam with local and flexibleSync usage.Starting point:\nMy requirement is that I want to offer the user only a local realm database in the Free version. However, if the customer decides to upgrade to a Pro version, I want them to be able to share the data with other users via flexibleSync. Btw. if relevant the app is developed based on Flutter.Issue:\nMy issue is that I cannot “simply” convert a local relam database to flexibleSync. Instead, I need to create a flexibleSync database and transfer the data from the local database to the flexibleSync database. Also if the user decides to go back to free version of the app I need to transfer the data back.My question about the concept:\nIs it possible and reasonable to do without the local Relam database and use only the flexibleSync database from the beginning, but with the setting that no data is synchronised in the background. Only when the user activates the Pro version are rules created that define that certain data should be synchronised. It is clear to me that I need a user for flexibleSync, but that user can be anonymous.Is it feasible concept from realm point of view? From my point of view it makes sense in any case, because I can avoid the implementation of two realm databases (local and flexibleSync) and the architecture of the application is simplified considerably as a result.",
"username": "Konst"
},
{
"code": "",
"text": "Hi,\nWe don’t support converting local to flexible sync realms.\nIn fact convert support is coming in the next Realm Flutter/Dart release, but the local to flx sync support will not be there.\nWhat you can do, if and only if you expect people to be using the local realms for short period of time with a limited data operations like add and remove is, you can use Configuration.disconnectedSync.\nThis configuration can be used to open a synced realm locally without sync enabled. But there is a drawback that it will persist any data operations that are made in the realm and that will constantly increase the realm size cause it will keep history records of how data was changed. This history will not be pruned. This is done so on the next Configuration.flexibleSync open, the data can be synced correctly.\nSo in order for this to work you can open a disconnectedSync realm and afterwards just change the configuration you are using to flexibleSync, to enable syncing of the data.The more robust approach currently, that will not increase the file size dramatically over time, is to just use local realm and transfer the data manually into a synced realm when the user activates the Pro version.We have plans on supporting what you describe in the future, but I can’t comment when it may be available.Cheers",
"username": "Lyubomir_Blagoev"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Concept on realm - with local and flexibleSync usage | 2023-01-20T16:18:02.580Z | Concept on realm - with local and flexibleSync usage | 997 |
null | [
"react-native"
] | [
{
"code": "",
"text": "Hi allI am trying to achieve peer-to-peer offline sync for React Native devices.While it is possible to have a sync while online, I’m trying to figure out, when there is no internet connection. The devices are connected through wifi, but the main network can sporadically lose internet for hours. The devices should still be able to sync their realmdb.Are there ideas how to achieve this?Happy to open-source an adapter/layer for it.Cheers",
"username": "Lucky_Bassi"
},
{
"code": "",
"text": "Welcome to the MongoDB community @Lucky_Bassi !Atlas Device Sync currently only supports online sync with Atlas’ cloud backend. Applications using a Realm SDK write data locally on the device first, and will sync with Atlas when network access is available. Peer-to-peer offline sync is not a current feature of the Realm SDKs, but you could raise this as a suggestion on the MongoDB Feedback Engine.Related discussion: Any known issues with manually syncing offline devices through a physical server?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_XMany thanks for your reply!I would really like to use RealmDB for this use-case as we have great experiences with it in the past.Do you think it’s feasible to manually implement such a logic?I am thinking of creating a separate collection, where:unsynced data is captured for later sync to onlinewe use revisions similar to pouched and sync this over wifi to other clientsOr we could use a field within each item, where we store the sync status.Thanks!",
"username": "Lucky_Bassi"
}
] | Peer-to-peer Offline Sync for Realm | 2023-01-19T22:17:59.211Z | Peer-to-peer Offline Sync for Realm | 1,344 |
null | [] | [
{
"code": "{\"error\":\"NotWritablePrimary: Not-primary error while processing 'find' operation on 'rcs' database via <mark>fire</mark>-and-forget command execution.\"}}",
"text": "Hi,\nWe’ve upgraded our stack of clusters to MongoDB Community 6.0.3.\nOn our test phases, we noticed that, when a node was restarting after its DBPath was cleaned, during the sync phase (STARTUP2), it throws these errors :{\"error\":\"NotWritablePrimary: Not-primary error while processing 'find' operation on 'rcs' database via <mark>fire</mark>-and-forget command execution.\"}}After some research, it is an error that is raised because recovering nodes may be part of an election (as per this SERVER-70510). This topic seems to mention there is a choice that need to be made.We found out that these errors were masked in 6.2.0rc0 (see SERVER-60553), but only “masked”.We don’t see any of these errors on our previous stacks (4 and 5)\nDo these errors mean there is somewhere a client that fails to read because it was instructed to read on a recovering node ? Or were they just extra logs on an existing behaviour ?\nMaybe the same occur on 4 / 5 but we just don’t see it ?Thanks !",
"username": "MBO"
},
{
"code": "",
"text": "What is the topology of your cluster, how many nodes are in the replica set etc, it sounds like you don’t have an active primary. Did you check all the nodes to make sure you do have a healthy primary?",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Hello\nthe RS is a 3-node. At the recovery time, there is a master (we monitor the RS.status() through Grafana).\nThe error is thrown on a secondary node while being in recovery, hence the weirdness.",
"username": "MBO"
},
{
"code": "",
"text": "Hi @MBO, great research on the error. Looking at the jira it appears the fire-and-forget operation are due to mirrored reads.As these are primarily to have a partially warm cache on primary candidates the responses(or lack of in this case) are never waited for and won’t impact your actual clients.Maybe the same occur on 4 / 5 but we just don’t see it ?Mirrored reads have been in since 4.4. Also this comment mentions 4.4 so just hadn’t been seen I guess.",
"username": "chris"
},
{
"code": "",
"text": "Hi Chris\nthanks a lot. Indeed, I misunderstood this mirrored read feature. Now that makes a whole lot of sense. The issue (non critical) aims at not considering recovering nodes as eligible, in order to avoid being mirrored by the primary as they are not ready for this “wam-up”.\nWill try with mirrorRead off, just to check the issues disappear, but in any case, these are, as you mentioned, harmless warnings.PS : This cluster was previously on 4.0, therefore, no mirrored read support. But, on the other hand, one of our clusters was on 5.0, but we never saw these errors (and we still don’t…). Maybe this cluster is much less active, and there is not enough time during the recovery process for the primary to send these fire-and-forget ?",
"username": "MBO"
},
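For reference, the mirrored-reads sampling rate is a runtime server parameter, so turning it off for a test can be done from the shell without a restart (a sketch; run against each mongod under test):

```js
// check the current sampling rate (defaults to 0.01, i.e. 1% of eligible reads are mirrored)
db.adminCommand({ getParameter: 1, mirrorReads: 1 })

// disable mirrored reads by setting the sampling rate to 0
db.adminCommand({ setParameter: 1, mirrorReads: { samplingRate: 0.0 } })
```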
{
"code": "",
"text": "By the way, in a setup where we use secondaryRead preferences, does it benefit from mirrored reads, as they should be warming up data already ?",
"username": "MBO"
},
{
"code": "",
"text": "The default is to mirror 0.01. So there could be some benefit. But that would depend if there are overlaps on the reads on the primary and the one occurring on the secondaries.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | In MongoDB 6.0.3, multiple "Not-primary error while processing 'find' operation on 'XXX' database via fire-and-forget command execution." | 2023-01-19T09:05:06.137Z | In MongoDB 6.0.3, multiple “Not-primary error while processing ‘find’ operation on ‘XXX’ database via fire-and-forget command execution.” | 2,486 |
null | [
"aggregation",
"serverless"
] | [
{
"code": "{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"indexFilterSet\": false,\n \"parsedQuery\": {},\n \"queryHash\": \"7023421D\",\n \"planCacheKey\": \"737D18C4\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"transformBy\": {\n \"issue_date\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"direction\": \"forward\"\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 111579,\n \"executionTimeMillis\": 181,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 111579,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 111579,\n \"executionTimeMillisEstimate\": 53,\n \"works\": 111581,\n \"advanced\": 111579,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 116,\n \"restoreState\": 116,\n \"isEOF\": 1,\n \"transformBy\": {\n \"issue_date\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"nReturned\": 111579,\n \"executionTimeMillisEstimate\": 21,\n \"works\": 111581,\n \"advanced\": 111579,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 116,\n \"restoreState\": 116,\n \"isEOF\": 1,\n \"direction\": \"forward\",\n \"docsExamined\": 111579\n }\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 111579,\n \"executionTimeMillisEstimate\": 109\n },\n {\n \"$group\": {\n \"_id\": {\n \"year\": {\n \"$year\": {\n \"date\": \"$issue_date\"\n }\n }\n },\n \"total\": {\n \"$sum\": {\n \"$const\": 1\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"total\": 1872\n },\n \"totalOutputDataSizeBytes\": 11908,\n \"usedDisk\": false,\n \"nReturned\": 26,\n \"executionTimeMillisEstimate\": 178\n },\n {\n \"$project\": {\n \"_id\": true,\n \"year\": \"$_id.year\",\n \"total\": \"$total\"\n },\n \"nReturned\": 26,\n \"executionTimeMillisEstimate\": 178\n },\n {\n \"$sort\": {\n \"sortKey\": {\n \"year\": 1\n }\n },\n \"totalDataSizeSortedBytesEstimate\": 13156,\n \"usedDisk\": false,\n \"nReturned\": 26,\n \"executionTimeMillisEstimate\": 178\n }\n ],\n \"serverInfo\": {\n \n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 16793600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 33554432,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"command\": {\n \"aggregate\": \"Pages\",\n \"pipeline\": [\n {\n \"$group\": {\n \"_id\": {\n \"year\": {\n \"$year\": \"$issue_date\"\n }\n },\n \"total\": {\n \"$count\": {}\n }\n }\n },\n {\n \"$project\": {\n \"year\": \"$_id.year\",\n \"total\": \"$total\"\n }\n },\n {\n \"$sort\": {\n \"year\": 1\n }\n }\n ],\n \"allowDiskUse\": true,\n \"cursor\": {},\n \"maxTimeMS\": 60000,\n\n },\n \"ok\": 1,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1674088070,\n \"i\": 22\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"A/J23GkvPsdMBnzWui0R7/rMKGk=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": {\n \"$numberLong\": \"7150657730754117634\"\n }\n }\n },\n \"operationTime\": {\n \"$timestamp\": {\n \"t\": 1674088070,\n \"i\": 22\n }\n }\n}\n",
"text": "Hi folks, about a year ago I asked something similar when serverless was brand new, but seems that things have evolved.\nI’m still debating whether I migrate my m2 cluster to serverless.One question I have is regarding the RPUs. If I have a query with skip(20).limit(20) from the explain plan I see I get 20 results, with 40 scanned indexes. What is the RPU for such query?The reason for this ask, is that I expect that a lot of queries from my app will rely on pagination using limit/skip (I know I should probably be using a timestamp field, but this is my reality at the moment).Another one is regarding aggregations, specially with groups where the entire collection sometimes needs to be scanned. Will I be billed for the size of my collection in RPUs once aggregations are executed?Thank youFor instance take this aggregation I run on a 110k docs collection. The first stage traverses all documents, but only 26 documents are returned from the aggregation, what is the RPU cost here?",
"username": "Vinicius_Carvalho"
},
{
"code": "COLLSCANStorage",
"text": "Hi @Vinicius_Carvalho,One question I have is regarding the RPUs. If I have a query with skip(20).limit(20) from the explain plan I see I get 20 results, with 40 scanned indexes. What is the RPU for such query?Unfortunately I cannot provide the RPU costs for the queries you have provided since it depends on the exact situation but hopefully the below details may help with understanding how the RPU’s are calculated.I’m still debating whether I migrate my m2 cluster to serverless.The best database deployment for you depends on your use case and feature requirements. You may choose a serverless instance if you want to:To help you to choose the best database deployment type, you can find more details about use cases, feature support, and comparisons in our choosing database deployment type documentation.For instance take this aggregation I run on a 110k docs collection. The first stage traverses all documents, but only 26 documents are returned from the aggregation, what is the RPU cost here?Read Processing Units are accrued per operation and Atlas charges one RPU for each document read (up to 4 KB) or for each index read (up to 256 bytes) when covered queries (can be satisfied by an index). For more information, please see the Serverless Instance Costs documentation.Please note that a single read operation often does not equate to 1 RPU. One read operation can result in many RPUs depending on how the query is structured, how your data is structured, how large the documents are, among other things. In the most basic terms, RPUs represent how much work the server needs to do to fulfill your query (more on this below). Atlas meters based on the units of documents and index data read that meet the criteria outlined on our Serverless Instance Costs linked above.During a query’s execution it may need to read a lot more data to determine what documents to return. For example, If a query involves a COLLSCAN (Collection Scan), then it can charge many RPUs as it reads all document bytes in a collection regardless of how many documents are returned.Another one is regarding aggregations, specially with groups where the entire collection sometimes needs to be scanned. Will I be billed for the size of my collection in RPUs once aggregations are executed?In aggregation pipelines, all necessary documents scanned to fulfill the query contribute to the RPUs. This can include documents scanned, documents sorted, documents grouped in aggregation, and documents returned.For guidance on optimizing aggregation pipeline performance review the following documentation links:The Storage in the serverless instances includes the logical document and index storage. This value includes the number of bytes of all uncompressed BSON documents stored in all collections, plus the bytes stored in their associated indexes.The best practice to reduce RPUs is to use indexes. Indexes reduce the ratio of the number of documents scanned to the number of documents returned by queries.Using indexes improves your search queries and reduce the RPUs performed on your Atlas instance. If your application is running infrequent loads but you notice that you still incur high RPU costs, this is an indication that your queries are not optimized or your application is not using efficient indexes. See the Indexing Strategies documentation for improving the use of your indexes.For more reading and examples about Serverless costs, see the blog post Serverless Instances Billing 101: How to Optimize Your Bill with Indexing.Regards,\nJason",
"username": "Jason_Tran"
},
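A minimal sketch of the indexing advice above (the collection and field names are hypothetical, not taken from this thread): an index on the queried fields lets the server read index keys instead of scanning every document, and a projection limited to indexed fields makes the query covered, which keeps the RPU count close to the number of documents actually returned.

    // hypothetical collection and fields; the index supports both the filter and the sort
    db.orders.createIndex({ customerId: 1, updatedAt: -1 })

    // covered query: filter, sort and projection are all satisfied by the index
    db.orders.find(
      { customerId: 42 },
      { _id: 0, customerId: 1, updatedAt: 1 }
    ).sort({ updatedAt: -1 })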
{
"code": "",
"text": "Thanks for the detailed explanation. I guess my concerns have been confirmed. The model is very similar to other vendors such as google firebase/datastore, which made me reconsider them in first place.I think I may end up with a hefty bill at the end of the month, even if I only have a few thousand queries per day, I may end up having several million RPUs per day. I guess I’ll stick with the M2 instance and see if it can hold its breath, as we get more traffic, we move to larger clusters.Thank you",
"username": "Vinicius_Carvalho"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Serverless RPU model | 2023-01-19T00:26:53.459Z | Serverless RPU model | 1,843 |
null | [
"queries",
"crud",
"mongodb-shell"
] | [
{
"code": "",
"text": "My shell script is :#! /bin/sh\nmongo --host 127.0.0.1 << EOF\nuse data\ndb.agent.updateMany({},{$set: {“incoming”:-99}})\nexit\nEOFOn executing this script I get the following error:E QUERY [thread1] SyntaxError: invalid property id @(shell):1:32The same commands when run individually on mongo shell execute successfully with the following output\n{ “acknowledged” : true, “matchedCount” : 5, “modifiedCount” : 1 }",
"username": "Munish_K"
},
{
"code": "",
"text": "The shell /bin/sh probably tries to expand $set as a shell variable.Try to put it inside single quotes.",
"username": "steevej"
},
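A minimal sketch of that fix applied to the script above: quoting the heredoc delimiter (or escaping it as \$set) stops /bin/sh from expanding $set before mongo ever sees it.

    #! /bin/sh
    # quoting the delimiter ('EOF') disables variable expansion inside the heredoc
    mongo --host 127.0.0.1 << 'EOF'
    use data
    db.agent.updateMany({}, {$set: {"incoming": -99}})
    exit
    EOF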
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | updateMany shows syntax error when running from shell script | 2023-01-20T07:51:22.978Z | updateMany shows syntax error when running from shell script | 878 |
null | [
"queries",
"node-js"
] | [
{
"code": "{\n \"items\": [\n {\n \"product\": \"63c66986e9e1ab5801914215\"\n \"variant\": \"63c66986e9e1ab5801914216\",\n \"locations\": [\n {\n \"quantity\": 1,\n \"location\": \"63c66733e3e9f11b2e8f8947\"\n }\n ],\n },\n {\n \"product\": \"63c66776a6d77f32e2776c63\",\n \"variant\": null,\n \"locations\": [\n {\n \"quantity\": 6,\n \"location\": \"63c66733e3e9f11b2e8f8947\",\n },\n {\n \"quantity\": 7,\n \"location\": \"63c66733e3e9f11b2e8f8948\",\n }\n ]\n }\n ],\n \"paymentStatus\": \"PAID\",\n \"fulfilledStatus\": \"PARTIALLY-FULFILLED\",\n \"orderId\": \"O-001\",\n}\n \"items\": [\n {\n \"product\": \"63c66986e9e1ab5801914215\"\n \"variant\": \"63c66986e9e1ab5801914216\",\n \"locations\": [\n {\n \"quantity\": 1,\n \"location\": \"63c66733e3e9f11b2e8f8947\",\n \"fulfilledStatus\": \"FULFILLED\"\n }\n ],\n },\n {\n \"product\": {\n \"63c66776a6d77f32e2776c63\"\n },\n \"variant\": null,\n \"locations\": [\n {\n \"quantity\": 6,\n \"location\": \"63c66733e3e9f11b2e8f8947\",\n \"fulfilledStatus\": \"FULFILLED\"\n },\n {\n \"quantity\": 7,\n \"location\": \"63c66733e3e9f11b2e8f8948\",\n }\n ]\n }\n ]\n",
"text": "Hi there, I want to update every sub-element in sub of sub-array\nThis is my Schema:So here I want to add a new field “fulfilledStatus” in the locations array for every item in the items array that got the location = “63c66733e3e9f11b2e8f8947”so the items should like this after updateHow can i do that",
"username": "med_amine_fh"
},
{
"code": "",
"text": "It looks like what you want to do is",
"username": "steevej"
},
{
"code": "db.collection.updateOne(\n {\"items.locations.location\" : \"63c66733e3e9f11b2e8f8947\"},\n {$set: {\n 'items.$[].locations.$[x].fulfilledStatus': \"FULFILLED\"\n }},\n {arrayFilters: [\n {\"x.location\": \"63c66733e3e9f11b2e8f8947\"}\n ]}\n)\n$[<identifier>]arrayFiltersarrayFiltersarrayFilters",
"text": "Hello @med_amine_fh ,I believe @steevej’s answer is correct. To expand a little, you can try below query and update it according to your requirementsBelow is the explanation of the query.Let me know if you have any more questions. Happy to help! Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Thanks for help @steevej and @Tarun_Gaur",
"username": "med_amine_fh"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update every sub document in sub sub array | 2023-01-18T09:01:10.682Z | Update every sub document in sub sub array | 1,460 |
[
"aggregation",
"java",
"spring-data-odm"
] | [
{
"code": "{ // \"$match\" omitted\n \"value.timestampWithoutTimezoneValue.value\" : {\n $gt : ISODate('Sat Jan 14 11:29:29 CET 2023')\n }\n}\n{\n \"$match\":{\n \"value.timestampWithoutTimezonevalue.value\":{\n \"$gt\":{\n \"$date\":\"2023-01-14T10:29:29.499Z\"\n }\n }\n }\n}\n case GREATER_THAN -> {\n // LocalDateTime: 2023-01-14T11:29:29.499999999\n Date out = Date.from(((LocalDateTime) filter.getValue()).atZone(ZoneId.of(\"UTC\")).toInstant()); // out = Sat Jan 14 12:29:29 CET 2023 (I assume it's +0)\n yield Criteria.where(\"value.timestampWithoutTimezonevalue.value\").gt(out); // See below\n }\n{\n ...\n \"pipeline\":[\n ...\n {\n \"$addFields\":{\n \"value.timestampWithoutTimezoneValue.value\":{\n \"$toDate\":\"$value.timestampWithoutTimezoneValue.value\"\n }\n }\n },\n {\n \"$match\":{\n \"value.timestampWithoutTimezonevalue.value\":{\n \"$gt\":{\n \"$date\":\"2023-01-14T11:29:29.499Z\"\n }\n }\n }\n }\n\t ...\n ],\n ...\n}\n[{\n \"value\": {\n \"timestampWithoutTimezoneValue\": {\n \"value\": \"2023-01-07T12:30:30.500\"\n }\n }\n},{\n \"value\": {\n \"timestampWithoutTimezoneValue\": {\n \"value\": \"2023-01-08T12:30:30.500\"\n }\n }\n},{\n \"value\": {\n \"timestampWithoutTimezoneValue\": {\n \"value\": \"2023-01-09T12:30:30.500\"\n }\n }\n},{\n \"value\": {\n \"timestampWithoutTimezoneValue\": {\n \"value\": \"2023-01-10T12:30:30.500\"\n }\n }\n},{\n \"value\": {\n \"timestampWithoutTimezoneValue\": {\n \"value\": \"2023-01-11T12:30:30.500\"\n }\n }\n},{\n \"value\": {\n \"timestampWithoutTimezoneValue\": {\n \"value\": \"2023-01-12T12:30:30.500\"\n }\n }\n},{\n \"value\": {\n \"timestampWithoutTimezoneValue\": {\n \"value\": \"2023-01-13T12:30:30.500\"\n }\n }\n},{\n \"value\": {\n \"timestampWithoutTimezoneValue\": {\n \"value\": \"2023-01-14T12:30:30.500\"\n }\n }\n},{\n \"value\": {\n \"timestampWithoutTimezoneValue\": {\n \"value\": \"2023-01-15T12:30:30.500\"\n }\n }\n},{\n \"value\": {\n \"timestampWithoutTimezoneValue\": {\n \"value\": \"2023-01-16T12:30:30.500\"\n }\n }\n}]\n",
"text": "Context:TLDRI’d like to generate an aggregation step like this (tested with MongoCompass and works):but this is generated when I use MongoTemplate instead:I can’t really find the reason for this and I get zero results.N.B.: I know that this is not the Java MongoDB library but rather something working on top of it, but I still believe I might get good feedback or ideas here.Debug detailsThis is where I generate the Criteria:EDIT: I noticed that the conversion to Date caused a +1h instead of a +0h, but it’s not relevant for this thread’s sake. Keep reading and you’ll understand.This is the aggregation pipeline that I submit to MongoTemplate (I omitted the rest because the rest works as intended):Note that:For completion’s sake I share the small testing dataset I’m using (only the relevant field):Expectation of the query result:Return documents with dates:Actual Result: Zero resultsConclusions:This is one of many attempts by the way… I tried to:None of the above worked and my impression is that my code is ok. The problem I’m facing is that my aggregation pipeline isn’t interpreted the way I expect to (structure-wise) and I can’t find a way to make it work out of MongoCompass.",
"username": "Capitano_Giovarco"
},
{
"code": "$date{\"$date\":\"2022-12-21T23:00:00.000+0000\"}$match$dateMongoDB Enterprise myReplSet:PRIMARY> db.tst.find({\"created_at\" : { \"$date\" : \"2022-11-21T23:00:00.000+0000\" } } )\nError: error: {\n\t\"operationTime\" : Timestamp(1673260574, 1),\n\t\"ok\" : 0,\n\t\"errmsg\" : \"unknown operator: $date\",\n\t\"code\" : 2,\n\t\"codeName\" : \"BadValue\",\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1673260574, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t}\n}\ndb.tst.aggregate([\n... {\n... $match: {\n... \"account_id\": 10, \n... created_at:{\n... $lte:new Date(\"2023-01-14T11:29:29.499Z\")\n... }\n... }\n... }])\n{ \"_id\" : ObjectId(\"63a4645c660455aea91e63db\"), \"account_id\" : 10, \"created_at\" : ISODate(\"2022-11-20T00:00:00Z\") }\n{ \"_id\" : ObjectId(\"63a46461660455aea91e63dc\"), \"account_id\" : 10, \"created_at\" : ISODate(\"2022-11-21T00:00:00Z\") }\n{ \"_id\" : ObjectId(\"63a46464660455aea91e63dd\"), \"account_id\" : 10, \"created_at\" : ISODate(\"2022-11-22T00:00:00Z\") }\n{ \"_id\" : ObjectId(\"63a46469660455aea91e63de\"), \"account_id\" : 10, \"created_at\" : ISODate(\"2022-12-21T00:00:00Z\") }\n{ \"_id\" : ObjectId(\"63a4646d660455aea91e63df\"), \"account_id\" : 10, \"created_at\" : ISODate(\"2022-12-22T00:00:00Z\") }\n{ \"_id\" : ObjectId(\"63a46471660455aea91e63e0\"), \"account_id\" : 10, \"created_at\" : ISODate(\"2022-12-20T00:00:00Z\") }\n// Define the date range\nSimpleDateFormat sdf = new SimpleDateFormat(\"yyyy-MM-dd\");\nDate startDate = sdf.parse(\"2022-01-01\");\nDate endDate = sdf.parse(\"2022-06-30\");\n\n// Find documents within the date range\nIterable<Document> documents = collection.find(Filters.and(Filters.gte(\"date\", startDate), Filters.lt(\"date\", endDate)));\n",
"text": "Hello Capitano,Thank you for the detailed problem description.I would like to clear the confusion about $date as this is not a filter operator but rather a MongoDB extended JSON representation.While date objects return to the driver with the representation of {\"$date\":\"2022-12-21T23:00:00.000+0000\"}, you cannot use it in a find operation or in an aggregation’s $match stage.In fact, if you tried to filter in a find operation using $date , you will notice it’s not a valid operator:In order to pass the date correctly you should pass it as a date object, from the mongo shell I can use the function Date()Similarly, from the driver you are using you should pass a date object in your code.In Java, it should be something like the below:I hope you find the above helpful.Regards,\nMohamed Elshafey",
"username": "Mohamed_Elshafey"
},
{
"code": "SimpleDateFormat sdf = new SimpleDateFormat(\"yyyy-MM-dd\");\nDate startDate = sdf.parse(\"2022-01-01\");\nDate endDate = sdf.parse(\"2022-06-30\");\n SimpleDateFormat sdf = new SimpleDateFormat(\"yyyy-MM-dd\");\n Date endDate = sdf.parse(\"2023-01-14\");\n aggregations.add(\n match(\n Criteria.where(\"value.timestampWithoutTimezonevalue.value\").gt(endDate)\n )\n );\n",
"text": "Hi Mohamed,Thanks for the comment.This thing of the MongoDB extended JSON looks a bit confusing, but regardless I tried to change my code to comply with your example:I still get no results out of the aggregation. And if I remove this $match, and therefore there are no matching conditions, then I get all the documents. That’s why I’m saying that something stinky is happening here…",
"username": "Capitano_Giovarco"
},
{
"code": "MongoClient mongoClient = MongoClients.create(\"mongodb://localhost:27017\");\nMongoTemplate mongoTemplate = new MongoTemplate(new SimpleMongoClientDbFactory(mongoClient, \"testdb\"));\nSampleDocument doc1 = new SampleDocument(\"John Doe\", new SimpleDateFormat(\"yyyy-MM-dd\").parse(\"2022-01-01\"), 30);\nSampleDocument doc2 = new SampleDocument(\"Jane Smith\", new SimpleDateFormat(\"yyyy-MM-dd\").parse(\"2022-03-01\"), 25);\nSampleDocument doc3 = new SampleDocument(\"Bob Lee\", new SimpleDateFormat(\"yyyy-MM-dd\").parse(\"2022-05-01\"), 35);\nSampleDocument doc4 = new SampleDocument(\"Amy Williams\", new SimpleDateFormat(\"yyyy-MM-dd\").parse(\"2022-07-01\"), 40);\nmongoTemplate.insert(doc1);\nmongoTemplate.insert(doc2);\nmongoTemplate.insert(doc3);\nmongoTemplate.insert(doc4);\n// Specify the start and end dates for the range\nSimpleDateFormat formatter = new SimpleDateFormat(\"yyyy-MM-dd\");\nDate startDate = formatter.parse(\"2022-05-01\");\n// Create the aggregation pipeline\nMatchOperation match = Aggregation.match(new Criteria(\"date\").gte(startDate));\nProjectionOperation projectStage = Aggregation.project(\"name\",\"age\", \"date\");\nAggregation aggregation = Aggregation.newAggregation(match,projectStage); \nList<SampleDocument> result = mongoTemplate.aggregate(aggregation, \"sampleDocument\", SampleDocument.class).getMappedResults();\nresult.forEach(System.out::println);\nSystem.exit(0);\nSampleDocument{name='Bob Lee', date=Sun May 01 00:00:00 IST 2022, age=35}\nSampleDocument{name='Amy Williams', date=Fri Jul 01 00:00:00 IST 2022, age=40}\n",
"text": "Hello Capitano,I tried to reproduce the example in my testing environment and I see it’s working as expected.Here is the snippet that inserts 4 records, I executed an aggregation that filters on the date field:And here is the output:Regards,\nMohamed Elshafey",
"username": "Mohamed_Elshafey"
}
] | Querying Dates using MongoTemplate and aggregations results in a broken query | 2023-01-16T09:01:00.882Z | Querying Dates using MongoTemplate and aggregations results in a broken query | 5,030 |
|
null | [
"aggregation",
"graphql",
"views"
] | [
{
"code": "",
"text": "I have two collections (Collection A and B) in one database. Collection A can be considered the main content and Collection B can be considered advertising which just needs to be mixed into the main feed every nth position. I want to make that single result set available through the GraphQL API which will receive search queries for the data (Collection A) and return the mixed result set (Collection A + B mixed in).Can anyone advise where and how it is best to perform that merge?I don’t want to $merge results into a new collection. That would just duplicate data and be something to keep in sync. I also need users to be able to search for pure content (Collection A only).I also don’t want to $lookup. I’m not performing a join as there is nothing to join on. The two datasets are not related in any way. It’s just a case of placing adverts (Collection B) into a content stream (Collection A).Would it be advisable to do this in a function for a custom resolver?Any pointers appreciated.",
"username": "Ian"
},
{
"code": "",
"text": "Just to be clear, I’m mostly looking for advice as to the best place and the best time to do this rather than the code to do it.Is this best performed after conducting the search in Collection A and then manipulating the results by combining Collection B before returning it back to the user through the GraphQL API? Or is it better to do this in the client or elsewhere?",
"username": "Ian"
}
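One possible shape for the custom-resolver idea raised above, as a rough sketch only (the data source, database, collection names, query argument and the every-4th interleave position are assumptions, not from this thread): the resolver reads both collections and splices the advertising documents into the content results before returning them, so searches against pure content can still query Collection A directly.

    exports = async function (input) {
      const db = context.services.get("mongodb-atlas").db("mydb");
      const content = await db.collection("contentA").find(input.query || {}).limit(40).toArray();
      const ads = await db.collection("adsB").find({}).limit(10).toArray();

      // splice one advert after every 4th content document
      const mixed = [];
      content.forEach((doc, i) => {
        mixed.push(doc);
        if ((i + 1) % 4 === 0 && ads.length) mixed.push(ads.shift());
      });
      return mixed;
    };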
] | How to merge results from one collection in with another unrelated collection | 2023-01-18T21:54:54.848Z | How to merge results from one collection in with another unrelated collection | 1,392 |
null | [
"transactions",
"field-encryption",
"storage"
] | [
{
"code": "mongod --version\ndb version v6.0.0\nBuild Info: {\n \"version\": \"6.0.0\",\n \"gitVersion\": \"e61bf27c2f6a83fed36e5a13c008a32d563babe2\",\n \"modules\": [],\n \"allocator\": \"system\",\n \"environment\": {\n \"distarch\": \"aarch64\",\n \"target_arch\": \"aarch64\"\n }\n}\nmongod --dbpath restore-63c6be85f42181731c9ea998/ --port 58109\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.462+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.463+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.465+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.466+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.466+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.466+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.466+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.466+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":53549,\"port\":58109,\"dbPath\":\"restore-63c6be85f42181731c9ea998/\",\"architecture\":\"64-bit\",\"host\":\"Alexs-Mac-Studio.local\"}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.466+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23352, \"ctx\":\"initandlisten\",\"msg\":\"Unable to resolve sysctl {sysctlName} (number) \",\"attr\":{\"sysctlName\":\"hw.cpufrequency\"}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.466+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23351, \"ctx\":\"initandlisten\",\"msg\":\"{sysctlName} unavailable\",\"attr\":{\"sysctlName\":\"machdep.cpu.features\"}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.466+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.0\",\"gitVersion\":\"e61bf27c2f6a83fed36e5a13c008a32d563babe2\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.466+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"22.2.0\"}}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.466+01:00\"},\"s\":\"I\", 
\"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"net\":{\"port\":58109},\"storage\":{\"dbPath\":\"restore-63c6be85f42181731c9ea998/\"}}}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.467+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:29.467+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=32256M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.247+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":780}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.247+01:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":1673969429,\"i\":30}}}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.247+01:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":5380106, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger oldestTimestamp\",\"attr\":{\"oldestTimestamp\":{\"$timestamp\":{\"t\":1673969129,\"i\":30}}}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.251+01:00\"},\"s\":\"I\", \"c\":\"WT\", \"id\":4366406, \"ctx\":\"initandlisten\",\"msg\":\"Modifying the table logging settings for all existing WiredTiger tables\",\"attr\":{\"loggingEnabled\":true}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.281+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22383, \"ctx\":\"initandlisten\",\"msg\":\"The size storer reports that the oplog contains\",\"attr\":{\"numRecords\":4397065,\"dataSize\":5373326204}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.281+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22386, \"ctx\":\"initandlisten\",\"msg\":\"Sampling the oplog to determine where to place markers for truncation\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.281+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22389, \"ctx\":\"initandlisten\",\"msg\":\"Sampling from the oplog to determine where to place markers for truncation\",\"attr\":{\"from\":{\"$timestamp\":{\"t\":1664745097,\"i\":1}},\"to\":{\"$timestamp\":{\"t\":1673969466,\"i\":1}}}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.281+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22390, \"ctx\":\"initandlisten\",\"msg\":\"Taking samples and assuming each oplog section contains\",\"attr\":{\"numSamples\":102,\"minBytesPerStone\":524288000,\"containsNumRecords\":429032,\"containsNumBytes\":524288107}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.289+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22393, 
\"ctx\":\"initandlisten\",\"msg\":\"Oplog sampling complete\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.289+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22382, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger record store oplog processing finished\",\"attr\":{\"durationMillis\":8}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.846+01:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.846+01:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20573, \"ctx\":\"initandlisten\",\"msg\":\"Wrong mongod version\",\"attr\":{\"error\":\"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \\\"featureCompatibilityVersion\\\", version: \\\"6.1\\\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures.\"}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, 
\"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784908, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"initandlisten\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"initandlisten\",\"msg\":\"Killing all operations for shutdown\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"initandlisten\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down all open transactions\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.901+01:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, 
\"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784930, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the storage engine\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"initandlisten\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22372, \"ctx\":\"OplogVisibilityThread\",\"msg\":\"Oplog visibility thread shutting down.\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"initandlisten\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.902+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"initandlisten\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.941+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":39}}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.941+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"initandlisten\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.941+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.941+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-01-17T21:35:30.941+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":62}}\n",
"text": "In Atlas we are running MongoDB 6.1.1I have downloaded a backup, and unzip it like usual. On my local computer I have mongod community installed:When I now try to run the downloaded backup with mongod like this:I get the following error:This used to be such a convenient way of running MongoDB backups - but is it impossible now? Please advice ",
"username": "Alex_Bjorlig"
},
{
"code": "Wrong mongod version\n...\nadmin.system.version: { _id: \\\"featureCompatibilityVersion\\\", version: \\\"6.1\\\" }\n...\nInvalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.\n...\n",
"text": "it seems 6.1 came with an incompatible feature set. if appropriate, upgrade your local version to 6.1 to use that data.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I tried following instructions online on how to install MongoDB tools 6.1 on a Mac - but could not find a good resource.(I found this guide, but seems to be version 6.0 only)",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "bad news: Install MongoDB — MongoDB ManualMongoDB 6.1 is a rapid release and is only supported for MongoDB Atlas.by the way, I am guessing you have backed up all databases including “admin.system”. you may try use to exclude this, for example. (I admit I don’t know how to isolate).or you may use import/export tools mentioned in list to the left here: mongodump — MongoDB Database Tools",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "@Alex_Bjorlig , I just wanted to confirm what @Yilmaz_Durmaz posted in the last comment. Rapid Releases are only available in MongoDB Atlas so that is why your downloaded 6.1 backup will not work with 6.0.If you are looking for a way to still get this data into your mongod community, ou can use either mongodump or Export Cloud Backup Snapshot which will give you files in JSON format. Note: there could be some inconsistencies using this approach going from 6.1 to 6.0 so this should not be used for any production use.",
"username": "Evin_Roesle"
},
{
"code": "",
"text": "Hi @Evin_Roesle - thanks for tuning in.I’m all about production - running a business here Correct me if I’m wrong - but how would mongodump work? Can I tell mongodump to download a cloud back up from Atlas?If I should use the Export Cloud Backup Snapshot method, can I tell the tool to export a given cloud backup?",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "@Alex_Bjorlig if you are looking for production use cases then I would only recommend restoring the 6.1 backup to another 6.1 instance. As the rapid releases (6.1) are only available on Atlas, this means that you should restore to another 6.1 Atlas instance. Once you have created a new 6.1 cluster, you can easily choose this cluster as your restore destination in the restore process.Other users sometimes want to restore in similar situation but only for testing or investigation purposes. This is where mongodump or export cloud backup methods may be helpful but as I mentioned last time, this is not a supported restore path so there maybe unexpected consistencies.mongodump creates a binary export of a database’s current contents so this would not be from a specific backup. You can read more about mongodump here.For Export Cloud Backup Snapshots, you can choose a specific snapshot to export (although it must be exported to a S3 bucket). This documentation page talks about the Export process but specifically how to set up a specific export as well.",
"username": "Evin_Roesle"
}
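A rough sketch of the mongodump route mentioned above (the URI and paths are placeholders; this dumps the cluster's current contents, not a specific snapshot, and restoring 6.1 data into 6.0 remains unsupported, so treat it as test/investigation use only):

    # dump the current data from the Atlas cluster
    mongodump --uri "mongodb+srv://user:password@cluster0.example.mongodb.net/mydb" --out ./dump

    # restore into the local mongod
    mongorestore --host 127.0.0.1 --port 27017 ./dump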
] | Problems running a downloaded backup from Atlas; possible version mismatch | 2023-01-17T20:41:17.687Z | Problems running a downloaded backup from Atlas; possible version mismatch | 1,759 |
[
"node-js",
"crud"
] | [
{
"code": "await user.findOneAndUpdate(\n {\n name: \"sss\",\n id: \"test1\",\n decibelHistory: {\n $elemMatch: { config: { min: 10 } },\n },\n },\n {\n $set: { \"config.$.min\": 1 },\n }\n );\n{\n \"_id\": {\n \"$oid\": \"638b39c2d96a4ac3ebb33c6b\"\n },\n \"name\": \"sss\",\n \"password\": \"sss\",\n \"decibelHistory\": [\n {\n \"id\": \"test1\",\n \"config\": [\n {\n \"max\": 90,\n \"min\": 10,\n \"avg\": 35\n }\n ]\n }\n ],\n \"timeLapse\": 1200,\n \"__v\": 0\n}\n",
"text": "i have a problem that i cannot resolved by myself. i have a data in mongo db and i want to update specific value i show the code and images what i want is to update the specific object (the min propetry) how i can update it?Document\nScreenshot_20230111_223849_Chrome1080×2283 132 KB\n",
"username": "Lior_aharon"
},
{
"code": "mindb.collection.updateOne(\n {\"name\" : \"sss\", \"decibelHistory.id\": \"test1\"},\n {$set: {\n 'decibelHistory.$[].config.$[x].min': 999\n }},\n {arrayFilters: [\n {\"x.min\": 10}\n ]}\n)\n$[<identifier>]arrayFiltersarrayFiltersarrayFilters",
"text": "Hello @Lior_aharon ,Welcome to The MongoDB Community forums! I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, you can try using below query to update the values of field min.Below is the explanation of the query.Note that the code example above seems to work correctly with the example document you posted, thus you may need to modify the example code to suit your use case betterLet me know if you have any more questions. Happy to help! Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb update a value in array of object of array | 2023-01-11T20:41:12.842Z | Mongodb update a value in array of object of array | 8,242 |
|
[
"node-js",
"mongoose-odm"
] | [
{
"code": "const express = require(\"express\");\nconst app = express();\nconst mongoose = require(\"mongoose\");\nconst bodyParser = require(\"body-parser\");\nconst crypto = require(\"crypto\");\n\napp.use(bodyParser.urlencoded({extended: true}));\n\nmongoose.connect(\"/*.....*/\");\n\n\nconst usersSchema = {\n\temail: String,\n\tpassword: String\n}\n\nconst User = mongoose.model(\"User\", usersSchema);\n\napp.get(\"/\", function(req, res) {\n\tres.sendFile(__dirname + \"/example_form.html\");\n})\n\napp.post(\"/\", function(req, res) {\n\tUser.findOne({ email: req.body.email}).then(user => {\n \tif (user) {\n \t\t//after process\n \t} else {\n \t\tres.send(\"Your email is invalid.\");\n\t\t}\n\t})\t\n})\napp.post(\"/\", function(req, res) {\n\tUser.findOne({ email: req.body.email}).then(user => {\n \tif (user) {\n \t\t/*\n \t\t\tencryption process\n\t\t\t*/\n\n\t\t\tvar crypted = cipher.update(userStr, 'utf8', 'hex');\n\t\t\tcrypted += cipher.final('hex');\n\t\t\tconst constid = req.body.email;\n\n\t\t\tUser.findOne({ email: req.body.email, password: crypted}).then(user => {\n\t\t\t\tif(user) {\n\t\t\t\t\tres.send(\"Success.\");\n\t\t\t\t} else {\n\t\t \t\tres.send(\"Email or password is wrong.\");\n\t\t\t\t}\n\t\t\t})\n \t} else {\n \t\tres.send(\"Your email is invalid.\");\n\t\t}\n\t})\t\n})\n",
"text": "Hi there,For example, suppose there are data like the following, the user’s name is “John Smith” and There are Email and encrypted password.Screenshot 2023-01-16 at 10.58.38 AM3914×665 159 KBI have an input form in HTML where the user enters their Email and password, and the password is encrypted and checked to make sure it matches the password in the database.I know how to make sure an Email exists.And I also know how to make sure the Password and Email match.btw for this situation, how can I get a name value like “John Smith” by entered Email and Password value?",
"username": "YumaSan"
},
{
"code": "User.findOne({ email: req.body.email, password: crypted}).then(user => {\n\t\t\t\tif(user) {\n\t\t\t\t\tres.send(\"Success.\");\n\t\t\t\t} else {\n\t\t \t\tres.send(\"Email or password is wrong.\");\n\t\t\t\t}\nconst filter = {\n 'email': req.body.email\n};\nconst projection = {\n '_id': 0, 'name': 1\n};\n\nconst cursor = coll.find(filter, { projection });\nconst name = await cursor.toArray();\n",
"text": "Hi @YumaSan and welcome to the MongoDB community forum!!To find the name for a valid email id and password, you could directly use email to find the user and project only the username.This would give an output of the username which matches with the correct email id and the password.However I would have to mention that storing encrypted password is inherently insecure. If the primary need for this code is for authentication, I would recommend you to check out passport.js or similar.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
}
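A minimal sketch of that projection in the Mongoose style used in the question (this assumes the documents also contain a name field, as in the screenshot):

    // find the user by email + encrypted password and return only the name field
    User.findOne({ email: req.body.email, password: crypted }, "name").then(user => {
      if (user) {
        res.send("Welcome, " + user.name);   // e.g. "John Smith"
      } else {
        res.send("Email or password is wrong.");
      }
    });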
] | How can I use Node.js to get some specific value? | 2023-01-16T02:08:16.891Z | How can I use Node.js to get some specific value? | 1,976 |
|
null | [
"mongodb-shell"
] | [
{
"code": "● mongodb.service - An object/document-oriented database\n Loaded: loaded (/usr/lib/systemd/system/mongodb.service; enabled; vendor preset: disabled)\n Active: failed (Result: exit-code) since Tue 2023-01-17 19:27:24 -03; 4min 18s ago\n Docs: man:mongod(1)\n Process: 2205 ExecStart=/usr/bin/mongod --quiet --config /etc/mongodb.conf (code=exited, status=217/USER)\n Main PID: 2205 (code=exited, status=217/USER)\n\nJan 17 19:27:24 localhost.localdomain systemd[1]: Started An object/document-oriented database.\nJan 17 19:27:24 localhost.localdomain systemd[2205]: Failed at step USER spawning /usr/bin/mongod: No such process\nJan 17 19:27:24 localhost.localdomain systemd[1]: mongodb.service: main process exited, code=exited, status=217/USER\nJan 17 19:27:24 localhost.localdomain systemd[1]: Unit mongodb.service entered failed state.\nJan 17 19:27:24 localhost.localdomain systemd[1]: mongodb.service failed.\n[root@localhost /]# systemctl daemon-reload\n[root@localhost /]# systemctl status mongodb.service\n● mongodb.service - An object/document-oriented database\n Loaded: loaded (/usr/lib/systemd/system/mongodb.service; enabled; vendor preset: disabled)\n Active: failed (Result: exit-code) since Tue 2023-01-17 19:27:24 -03; 5min ago\n Docs: man:mongod(1)\n Main PID: 2205 (code=exited, status=217/USER)\n\nJan 17 19:27:24 localhost.localdomain systemd[1]: Started An object/document-oriented database.\nJan 17 19:27:24 localhost.localdomain systemd[2205]: Failed at step USER spawning /usr/bin/mongod: No such process\nJan 17 19:27:24 localhost.localdomain systemd[1]: mongodb.service: main process exited, code=exited, status=217/USER\nJan 17 19:27:24 localhost.localdomain systemd[1]: Unit mongodb.service entered failed state.\nJan 17 19:27:24 localhost.localdomain systemd[1]: mongodb.service failed.\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.304-03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.305-03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.306-03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.308-03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.308-03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.308-03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.309-03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.309-03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":2248,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"localhost.localdomain\"}}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.309-03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.3\",\"gitVersion\":\"f803681c3ae19817d31958965850193de067c516\",\"openSSLVersion\":\"OpenSSL 1.0.1e-fips 11 Feb 2013\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"rhel70\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.309-03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"CentOS Linux release 7.9.2009 (Core)\",\"version\":\"Kernel 3.10.0-1160.81.1.el7.x86_64\"}}}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.309-03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.310-03:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Operation not permitted\"}}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.310-03:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1125}}\n{\"t\":{\"$date\":\"2023-01-17T19:46:09.310-03:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n-bash: mongo: command not found \n",
"text": "Good night usingcentos 7 with mongodb 6.\nI can’t get it to work at all.\nCan you help me?\nI’m a beginner so please be patientsystemctl status mongodb.servicemongodmongo",
"username": "William_Guimaraes"
},
{
"code": "",
"text": "You have to use mongosh with latest versions of mongodb\nJust issue mongosh and see if you can connect(assuming you already have a mongod up & running)\nRegarding errors while starting mongod it appears to be permissions issues\nAlso your prompt indicates you tried to start it as root\nDo not start mongod as root.Bring it up as normal user and use sudo privileges wherever you need root privs like to start service,create dir etc\nAlso check ownership/permissions on that TMP sock file",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "How do I check ownership/permissions of that TMP sock file?Now it’s like thismongosh\nCurrent Mongosh Log ID:\t63c8617aa251ba8aa3ee901a\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.2\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017sudo service mongod start\nRedirecting to /bin/systemctl start mongod.service\nJob for mongod.service failed because the control process exited with error code. See “systemctl status mongod.service” and “journalctl -xe” for details.I’m trying to access via ssh\nmongodb on centOs server 7\nme on pc fedora 37 with vscode",
"username": "William_Guimaraes"
},
{
"code": "",
"text": "On your Centos machine check this\nls -lrt /tmp/mongodb-27017.sock",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "ls -lrt /tmp/mongodb-27017.sock\nsrwx------. 1 william william 0 Jan 18 19:07 /tmp/mongodb-27017.sock",
"username": "William_Guimaraes"
},
{
"code": "",
"text": "/var/lib/mongo ls -l\ndrwxr-xr-x. 2 mongod mongod 6 Nov 14 16:44 mongosudo tail /var/log/mongodb/mongod.log\n{“t”:{\"$date\":“2023-01-19T09:07:50.054-03:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“namespace”:“config.tenantMigrationRecipients”}}\n{“t”:{\"$date\":“2023-01-19T09:07:50.054-03:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“ShardSplitDonorService”,“namespace”:“config.tenantSplitDonors”}}\n{“t”:{\"$date\":“2023-01-19T09:07:50.054-03:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{\"$date\":“2023-01-19T09:07:50.054-03:00”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:2124,“port”:27017,“dbPath”:\"/var/lib/mongo\",“architecture”:“64-bit”,“host”:“localhost.localdomain”}}\n{“t”:{\"$date\":“2023-01-19T09:07:50.054-03:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“6.0.3”,“gitVersion”:“f803681c3ae19817d31958965850193de067c516”,“openSSLVersion”:“OpenSSL 1.0.1e-fips 11 Feb 2013”,“modules”:,“allocator”:“tcmalloc”,“environment”:{“distmod”:“rhel70”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{\"$date\":“2023-01-19T09:07:50.054-03:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“CentOS Linux release 7.9.2009 (Core)”,“version”:“Kernel 3.10.0-1160.81.1.el7.x86_64”}}}\n{“t”:{\"$date\":“2023-01-19T09:07:50.054-03:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“config”:\"/etc/mongod.conf\",“net”:{“bindIp”:“127.0.0.1”,“port”:27017},“processManagement”:{“fork”:true,“pidFilePath”:\"/var/run/mongodb/mongod.pid\",“timeZoneInfo”:\"/usr/share/zoneinfo\"},“storage”:{“dbPath”:\"/var/lib/mongo\",“journal”:{“enabled”:true}},“systemLog”:{“destination”:“file”,“logAppend”:true,“path”:\"/var/log/mongodb/mongod.log\"}}}}\n{“t”:{\"$date\":“2023-01-19T09:07:50.055-03:00”},“s”:“E”, “c”:“NETWORK”, “id”:23024, “ctx”:“initandlisten”,“msg”:“Failed to unlink socket file”,“attr”:{“path”:\"/tmp/mongodb-27017.sock\",“error”:“Operation not permitted”}}\n{“t”:{\"$date\":“2023-01-19T09:07:50.055-03:00”},“s”:“F”, “c”:“ASSERT”, “id”:23091, “ctx”:“initandlisten”,“msg”:“Fatal assertion”,“attr”:{“msgid”:40486,“file”:“src/mongo/transport/transport_layer_asio.cpp”,“line”:1125}}\n{“t”:{\"$date\":“2023-01-19T09:07:50.055-03:00”},“s”:“F”, “c”:“ASSERT”, “id”:23092, “ctx”:“initandlisten”,“msg”:\"\\n\\n***aborting after fassert() failure\\n\\n\"}",
"username": "William_Guimaraes"
},
{
"code": "",
"text": "Remove TMP file and try to start mongod again\nsudo systemctl start mongod\nCheck mongodb documentation for Centos",
"username": "Ramachandra_Tummala"
}
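Putting the advice from this thread together as one possible cleanup sequence (the paths come from the log output above; run on the CentOS host):

    # remove the stale socket file left behind by the root-started mongod
    sudo rm /tmp/mongodb-27017.sock

    # make sure the data and log directories belong to the mongod service user
    sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb

    # start the service (not mongod directly, and never as root), then connect
    sudo systemctl start mongod
    sudo systemctl status mongod
    mongosh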
] | Mongodb 6 does not start service on centOs 7 | 2023-01-17T22:52:12.440Z | Mongodb 6 does not start service on centOs 7 | 1,872 |
null | [
"aggregation",
"queries"
] | [
{
"code": "QUERY:\ndb.books.aggregate([{\n $lookup: {\n from: \"series\",\n localField: \"series\",\n foreignField: \"name\",\n as: \"series_lookup\"\n }\n },\n {\n $group: {\n '_id': '$series',\n 'count': { '$sum': 1 }\n }\n }, {\n $match: { \n series: { $exists: false } \n }\n }\n]).toArray()\n\nDESIRED RESULTS:\n[{\n \"_id\":\"orphan_series\",\n \"count\": 2\n},{\n \"_id\":\"another_orphan\",\n \"count\": 3\n}]\n\nACTUAL RESULTS:\n[{\n \"_id\" : \"currie_odyssey_one\",\n \"count\" : 7\n},\n{\n \"_id\" : \"currie_archangel_one\",\n \"count\" : 3\n},\n{\n \"_id\" : \"kloos_frontlines\",\n \"count\" : 10\n},\n{\n \"_id\" : \"currie_holy_ground\",\n \"count\" : 1\n},\n{\n \"_id\" : \"currie_star_rogue\",\n \"count\" : 1\n},\n{\n \"_id\" : \"king_legend_of_zero\",\n \"count\" : 7\n}]\n\nSERIES COLLECTION SAMPLE:\n{\n \"_id\": \"orphan_series\",\n \"name\": \"Not used in collections\",\n}\n\nBOOKS COLLECTION SAMPLE:\n{\n \"_id\": \"kloostermsofenlistment\",\n \"order\": 1,\n \"title\": \"Terms of Enlistment\",\n \"author\": {\n \"name\": \"Marko Kloos\",\n \"_id\": \"marko_kloos\"\n },\n \"series\": \"kloos_frontlines\"\n}\n",
"text": "I’m working on an aggregation to locate unused series in a collection of books. The series are it’s own collection and the books reference the series _id. I can get close to what I’m looking for but can’t get a set of series and counts that are not used in books. I’m getting the opposite, actually. I feel like it’s a small thing I’m missing… but I just can’t see it. Still learning aggs and the ins & outs so I appreciate the help.",
"username": "Mike_E"
},
{
"code": "\ndb.series.aggregate(([{\n $lookup: {\n from: \"books\",\n localField: \"name\",\n foreignField: \"series\",\n as: \"book_lookup\"\n }\n },\n{\n $match: { \n $eq: [0, {$size: \"$book_lookup\"} ]\n }\n },\n\n {\n $group: {\n '_id': '$series',\n 'count': { '$sum': 1 }\n }\n }\n])\n",
"text": "Hi @Mike_E ,So the simplified lookup syntax is design to only match documents between collections.If I understood correctly you want the opesite to get all the series documents that do not match…This can be achieved by running on series collection and looking up on the books:I haven’t run the aggregation but the concept is to lookup the series with books. A series with an empty “books_lookup” is an orphan one (size = 0).Only then count them Hopefully my aggregation is not buggy .Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you, Pavel. That helps greatly. I didn’t think of the lookup having a zero count in that way. That solved my issue.",
"username": "Mike_E"
}
] | Finding count of unused series in a collection of books via aggregation and grouping | 2023-01-16T15:26:17.126Z | Finding count of unused series in a collection of books via aggregation and grouping | 583 |
null | [
"aggregation"
] | [
{
"code": "const pipeline = [\n {\n $match: {\n \"updatedAt\": {\n $gt: start_date\n }\n \n }\n }, {\n \"$out\": {\n \"s3\": {\n \"bucket\": \"bucket_name\",\n \"region\": \"us-west-2\",\n \"filename\": filename,\n \"format\": {\n \"name\": \"parquet\",\n \"maxFileSize\": \"10GB\",\n \"maxRowGroupSize\": \"100MB\"\n }\n }\n }\n }\n ];\n\n return events.aggregate(pipeline);\n};\n",
"text": "I am trying to write a function for a scheduled trigger that sends all documents from a collection that have been updated since the trigger was last executed. Is there a system variable or somewhere in the context object that would return this timestamp? I’ve searched extensively and found no examples. Another option may be to write the execution time to a collection at the end of the function, then retrieve that date at the beginning of each function call. If anyone has any similar examples, I would appreciate it. My pipeline definition looks like the following. I just want to be able to set start_date=‘trigger last execution date/time’:",
"username": "Greg_Olson"
},
{
"code": "exports = async function () {\n\n const service = context.services.get(\"my_federated_db_name\");\n const db = service.db(\"VirtualDatabase0\")\n const events = db.collection(\"VirtualCollection0\");\n\n const current_timestamp = new Date(Date.now()).toISOString();\n const date_str = current_timestamp.split('T')[0];\n //console.log(date_str);\n const epoch_date=new Date().getTime() / 1000;\n const epoch_date_str=epoch_date.toString();\n\n //generate unique file name including partition\n const filename=\"customers/file_date=\".concat(date_str).concat(\"/\").concat(\"customers\").concat(\"_\").concat(epoch_date_str);\n console.log('filename: ',filename);\n\n const event_service=context.services.get(\"change-stream-poc\");\n const event_db=event_service.db(\"trigger_execution\");\n const event_collection=event_db.collection(\"events\");\n\n //retrieve last execution date from collection\n const start_date_array= await event_collection.find({type: \"execution\"}).sort( { timestamp: -1 } ).limit(1).toArray(); //Works!\n const start_date_str=start_date_array[0].timestamp;\n const start_date=new Date(start_date_str);\n console.log(JSON.stringify(start_date_array[0].timestamp, null, 4));\n\n const query_results = await events.find({updatedAt: {$gt: start_date}}).toArray();\n console.log('query results: ',JSON.stringify(query_results, null, 4));\n\n //const start_date = new Date(\"2023-01-19T00:00:00.0Z\");\n\n\n\n\n const pipeline = [\n {\n $match: {\n \"updatedAt\": {\n $gt: start_date\n }\n\n }\n }, {\n \"$out\": {\n \"s3\": {\n \"bucket\": \"mybucket\",\n \"region\": \"us-west-2\",\n \"filename\": filename,\n \"format\": {\n \"name\": \"parquet\",\n \"maxFileSize\": \"10GB\",\n \"maxRowGroupSize\": \"100MB\"\n }\n }\n }\n }\n ];\n\n const response= events.aggregate(pipeline);\n\n //insert execution timestamp into collection\n event_collection.insertOne( { type: \"execution\", timestamp: current_timestamp.toString() } );\n return response\n};\n",
"text": "Was able to get this working with the following solution, tracking each execution time in a collection:",
"username": "Greg_Olson"
}
] | Trigger Function - Reference last_execution date / time of trigger | 2023-01-12T23:35:34.923Z | Trigger Function - Reference last_execution date / time of trigger | 1,062 |
null | [
"java",
"morphia-odm"
] | [
{
"code": "import com.mongodb.ConnectionString;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport lombok.Getter;\nimport org.mongodb.morphia.Datastore;\nimport org.mongodb.morphia.Morphia;\npublic PlayerDataManager() {\n\n //Setup the MongoDB Connection\n ConnectionString connectionString = new ConnectionString(\"xxxxxxxxx\"); //This does work btw, have tested\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyConnectionString(connectionString)\n .build();\n try {\n client = MongoClients.create(settings);\n\n morphia = new Morphia();\n morphia.map(PlayerData.class);\n playerDataStore = morphia.createDatastore(client, \"playerdata\");\n playerDataStore.ensureIndexes();\n\n playerDataDAO = new PlayerDataDAO(PlayerData.class, playerDataStore);\n }\n catch (NoClassDefFoundError e) {\n e.printStackTrace();\n }\n }\n",
"text": "So I’m trying to setup MongoDB with Morphia for Java, but when I try to pass in the MongoClient class, it says I am not using the correct version of this class.I have tried editing the import to use the other class (com.mongodb.MongoClient instead of com.mongodb.client.MongoClient) but that breaks the rest of the code, and casting throws an error as well. Also can’t find this error anywhere online.Does anyone have any ideas on what could be going on here",
"username": "Porkchop123"
},
{
"code": "",
"text": "How did you solve this?",
"username": "Duarte_Carvalho"
}
] | MongoClient Class not working with Morphia? (Java) | 2022-04-07T16:57:40.600Z | MongoClient Class not working with Morphia? (Java) | 3,538 |
null | [
"dot-net",
"field-encryption",
"unity"
] | [
{
"code": "",
"text": "Hi,\nI am trying to get the C# NuGet driver to work with Unity.\nSo I used NuGet (for Unity) to load and install the driver.\nThere are two issues with the package:the package MongoDB.Libmongocrypt.1.6.0 contains a number older versions of the library\nunder a subfolder “…content files/any”. Since those folders contain files with the same name Unity complains about the file names not being unique. This can be mended by just deleting that folder.\nIt is anoying however and left me wondering and researching some time as to which I should use and whether I could just delete the ones I don’t need. Maybe you could make a package without the other versions.the package also depends on Snappier 1.0.0.\nFor this Unity logs this error:\n“Assembly ‘Assets/Packages/Snappier.1.0.0/lib/netstandard2.1/Snappier.dll’ will not be loaded due to errors:\nSnappier references strong named System.Runtime.CompilerServices.Unsafe Assembly references: 4.0.6.0 Found in project: 5.0.0.0.\nAssembly Version Validation can be disabled in Player Settings “Assembly Version Validation””So there seems to be a mismatch in versions and I don’t know how to remedy that. I definitely do not want to deactivate version validation!\nAny help would be deeply appreciated!Regards\nScheuni",
"username": "Scheuni"
},
{
"code": "",
"text": "We ran into similar issues You can find our solution at then of this thread:Hope that helps.",
"username": "Martin_Raue"
},
{
"code": "",
"text": "Hi Martin,\nmany thx fr your reply. Good to know that other people are facing similar problems (is it just me or…). \nI am working under Win10 though using Unity 2021.3.16f1 (LTS)\nAfter finding some feedback on Stackoverflow I managed to pull this off (similar to your approach):It seems to be working but now Unity is crashing quite often so I am not all that content yet. I will have to investigate the cause from the logs - will report if anything seems connected to the driver.Many thx again\nScheuni",
"username": "Scheuni"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Issues in Unity with NuGet C# Driver Package | 2023-01-15T15:03:14.182Z | Issues in Unity with NuGet C# Driver Package | 2,497 |
null | [
"node-js",
"time-series"
] | [
{
"code": "",
"text": "Hi!\nInitially, our backend was connected to a local MongoDB instance where we used timeseries and common collections, and everything was working great. However, now that we have transferred the data to Atlas, the microservice that works with timeseries is unable to retrieve any data from Atlas after a few requests (it starts working again after a restart, but eventually reaches a “limit” of requests). On the other hand, the microservice that works with common collections does not have any problems. Are there any restrictions from Atlas regarding request frequencies or any other issues that could be causing these data gathering problems?\nBest Regards",
"username": "Anatoly_Zimin"
},
{
"code": "",
"text": "Initially, our backend was connected to a local MongoDB instance where we used timeseries and common collections, and everything was working great. However, now that we have transferred the data to Atlas, the microservice that works with timeseries is unable to retrieve any data from Atlas after a few requests (it starts working again after a restart, but eventually reaches a “limit” of requests). On the other hand, the microservice that works with common collections does not have any problems. Are there any restrictions from Atlas regarding request frequencies or any other issues that could be causing these data gathering problems?MongoDB Atlas, the cloud-hosted version of MongoDB, places certain limitations on usage to ensure that all customers have a fair and consistent experience. These limitations include a maximum number of operations per second (OPS) that can be performed on a cluster, as well as a maximum number of connections that can be open at one time. If your microservice is making a large number of requests in a short period of time, it could be hitting these limits, which would cause the requests to fail.Additionally, if your microservice is using a lot of connections, it could be running out of available connections, which would also cause requests to fail. To avoid these issues, you may want to consider implementing connection pooling, which allows a limited number of connections to be reused for multiple requests, rather than opening a new connection for each request.Other possible issues that could cause the data gathering problem you’re experiencing include network latency, poor performance of the MongoDB cluster, or issues with the specific query or index used by the microservice.\nIt’s recommended to investigate the performance metrics on your cluster, check the logs, and work with MongoDB support team, to help you troubleshoot the issue.",
"username": "Sumanta_Mukhopadhyay"
},
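As an illustration of the pooling advice above, the Node.js driver lets you cap and reuse connections through client options. A minimal sketch — the values are placeholders to tune for your workload, not recommendations:

```js
const { MongoClient } = require("mongodb");

// One shared client for the whole microservice, created once at startup.
const client = new MongoClient(process.env.MONGODB_URI, {
  maxPoolSize: 50,                // upper bound on pooled connections
  minPoolSize: 5,                 // keep a few warm connections around
  serverSelectionTimeoutMS: 5000, // fail fast instead of hanging
});

module.exports = client; // reuse this everywhere instead of reconnecting per request
```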
{
"code": "",
"text": "Thank you, but I haven’t encountered any limitations and the issue occurs randomly. Additionally, as I mentioned earlier, another microservice works perfectly fine with the same cluster, and the problem only seems to occur with the time series collection.",
"username": "Anatoly_Zimin"
}
] | Unexpected behaviour with nestjs and Atlas data gathering | 2023-01-19T09:00:34.120Z | Unexpected behaviour with nestjs and Atlas data gathering | 1,018 |
null | [
"aggregation",
"queries"
] | [
{
"code": "\"textValuesTr\": [\n {\n \"_id\": {\n \"$oid\": \"63be50667a6c91dc6bf00e97\"\n },\n \"textId\": \"29563\",\n \"localeCode\": \"en-ZZ\",\n \"textValues\": [\n {\n \"textPart\": \"PTYPE-001\",\n \"status\": \"source\"\n },\n {\n \"textPart\": \"PTYPE-002\",\n \"status\": \"source\"\n },\n {\n \"textPart\": \"PTYPE-003\",\n \"status\": \"source\"\n },\n {\n \"textPart\": \"PTYPE-004\",\n \"status\": \"source\"\n }\n ]\n },\n {\n \"_id\": {\n \"$oid\": \"63be54a6dc5b5823d4b2458b\"\n },\n \"textId\": \"29563\",\n \"localeCode\": \"es-ES\",\n \"textValues\": [\n {\n \"textPart\": \"PTYPE-001\",\n \"status\": \"Translated\"\n },\n {\n \"textPart\": \"PTYPE-002\",\n \"status\": \"Translated\"\n },\n {\n \"textPart\": \"PTYPE-003\",\n \"status\": \"Translated\"\n },\n {\n \"textPart\": \"PTYPE-004\",\n \"status\": \"Translated\"\n\n }\n ]\n }\n]\n \"textId\": \"0049912\",\n \"localeCode\": \"es-ES\",\n \"textValues\": [\n {\n \"status\": \"Translated\"\n }\n ]\n }\n]\n{\n \"_id\": [\n \"Translated\"\n ],\n \"count1\": 1\n},\n{\n \"_id\": [\n \"Translated\",\n \"Translated\",\n \"Translated\",\n \"Translated\"\n ],\n \"count1\": 1\n}\n",
"text": "Hi,\ngiven below is my document structure after a $lookup stage.doc1:{\n“_id”: {\n“$oid”: “63be50667a6c91dc6bf00e97”\n},\n“textId”: “29563”,}doc2:{\n“_id”: {\n“$oid”: “63be506d7a6c91dc6bf072fe”\n},\n“textId”: “0049912”,\n“textValuesTr”: [\n{\n“_id”: {\n“$oid”: “63be506d7a6c91dc6bf072fe”\n},\n“textId”: “0049912”,\n“localeCode”: “en-ZZ”,\n“textValues”: [\n{\n“status”: “source”\n}\n]\n},\n{\n“_id”: {\n“$oid”: “63be54badc5b5823d4b2a564”\n},}I want to calculate the count of translated documents and count of user\nI tried using $group aggregation inside &facet but it is not giving the expected output.aggregation tried for status count is{$facet:{\nstatus_cnt: [\n{\n$group: {\n_id: {\n$let: {\nvars: {\nitem: {\n$arrayElemAt: [\n“$textValuesTr.textValues”,\n1,\n],\n},\n},\nin: “$$item.status”,\n},\n},\ncount1: {\n$sum: 1,\n},\n},\n},\n],\n}\n}It is giving the Output as:-\n[{\n“status_cnt”: []\n}]But I want the output in the below format[{\n“status_cnt”: [\n{\n“_id”: [\n“Translated”\n],\n“count1”: 2\n},\n]\n}]Please help me.Regards,\nVishnupriya.",
"username": "vishnupriya_d"
},
{
"code": "",
"text": "Thank God! I got a solution that best matches my requirement.I just used two $project stages to manipulate the nested array field using $arrayElemAt before $facet stage. Currently it is working fine for me.Thank you.",
"username": "vishnupriya_d"
},
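A rough sketch of the approach described above ($arrayElemAt in a $project before the $facet), with field names taken from the earlier posts — treat it as illustrative rather than the exact pipeline used; the collection name is a placeholder:

```js
db.texts.aggregate([
  // Pull the second element (the translated locale) out of the looked-up array.
  { $project: { statusArray: { $arrayElemAt: ["$textValuesTr.textValues", 1] } } },
  // Reduce it to just the list of status values.
  { $project: { statuses: "$statusArray.status" } },
  {
    $facet: {
      status_cnt: [
        { $group: { _id: "$statuses", count1: { $sum: 1 } } }
      ]
    }
  }
]);
```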
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Getting the count of nested array field in Mongo DB | 2023-01-17T11:05:53.780Z | Getting the count of nested array field in Mongo DB | 1,150 |
[
"london-mug"
] | [
{
"code": "Staff Developer Advocate at MongoDBCreator Of Beltstripe",
"text": "Finally, our first London meetup is happening on the 26th of October at Metro Bank London in Holborn.\n_London MUG - Design1920×1080 125 KB\nEvent Type: In Person\nLocation: Metro Bank, 1 Southampton Row, London WC1B 5HA\nVideo Conferencing URLAndrew Morgan\nStaff Developer Advocate at MongoDB\nTwitter - @andrewmorganSani Yusuf\nCreator Of Beltstripe\nTwitter - @saniyusuf",
"username": "Sani_Yusuf"
},
{
"code": "",
"text": "Thanks for the invite , looking forward to see you",
"username": "Anil_Kumar_Senapati"
},
{
"code": "",
"text": "Hello Everyone,We are excited to see you today evening at Metro Bank for our London group launch. Location Metro Bank, 1 Southampton Row, LondonFew things to note:Please reply to this thread if there are any questions.Looking forward to seeing you today at the event.Cheers, ",
"username": "henna.s"
},
{
"code": "",
"text": "Here’s a photo from the event! <3\n\nIMG_23931920×1440 296 KB\n",
"username": "Harshit"
}
] | London MUG: MongoDB London Launch Meetup | 2022-10-13T16:07:05.703Z | London MUG: MongoDB London Launch Meetup | 5,266 |
|
null | [
"aggregation",
"next-js"
] | [
{
"code": "_id:\"ABC\"\ndate:\"2022-10-01\"\npowerIn:30\n",
"text": "Hello, I’m working on aggregation and bar charts using next.js.\nI’m recording my data daily. When the first day of the month has started, I would like to make all dates from 1st to 30th(/31st). Since I’m using bar charts, if there’s only one data, it only shows one bar chart.\nAnd sometimes there’s no data for the day, and I want to fill the gap setting the value of powerIn as 0.\nI found $fill and $densify but those won’t work. And I already looked into another question similar as mine that someone recommending making a mock collection or doing it on server side, I found this tricker. Could anyone give me solution on this?",
"username": "Chloe_Gwon"
},
{
"code": "",
"text": "You may be able to use $range in an aggregation to generate the missing days.See the following thread for examples specific to using dates.",
"username": "steevej"
}
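If the cluster is recent enough (roughly MongoDB 5.1 for $densify and 5.3 for $fill), the gap-filling can also be expressed directly in the pipeline. A hedged sketch — the collection name is a placeholder, and note the date field must be a real BSON date, not the string shown in the sample document, so it is converted first:

```js
db.power.aggregate([
  { $set: { date: { $toDate: "$date" } } },           // strings like "2022-10-01" -> BSON dates
  {
    $densify: {
      field: "date",
      range: { step: 1, unit: "day", bounds: "full" } // create one document per missing day
    }
  },
  { $fill: { output: { powerIn: { value: 0 } } } }    // default powerIn to 0 on generated days
]);
```

Explicit bounds (for example the first and last day of the month) can be passed instead of "full" to force the whole month to appear even when data only exists for part of it.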
] | Finding missing date using aggregation | 2023-01-18T01:06:24.665Z | Finding missing date using aggregation | 1,857 |
[] | [
{
"code": "",
"text": "\ndate836×354 20.5 KB\n\nI want to change the date type from dateString to date field ( createdAt to updatedAt) how to do it ???when I go to compas, createdAt is a date string, when I manually change to date and export the record, I see that the updatedAt field is a date NumberLong.how to change the type of createdAt on all my collection ??",
"username": "tim_Ran"
},
{
"code": "createdAtdb.collection.updateMany(\n {\"createdAt\": \n {$type:\"string\"}\n },\n [{$set: \n {\"createdAt\": \n {$toDate : \"$createdAt\"}\n }}]\n)\nString$set$toDate$toDate$toDate",
"text": "Hello @tim_Ran ,Welcome to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, I assume you wanted to convert the datatype of the field createdAt from string to ISODate, you can update below query as per your use case.Explanation:Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "createdAt$convertupdateMany()$convertcreatedAtdb.collection.updateMany(\n {},\n { $convert: { input: \"$createdAt\", to: \"date\" } }\n);\n\ncollectioncreatedAtcreatedAtupdatedAtdb.collection.updateMany(\n {},\n [\n {\n $convert: {\n input: \"$createdAt\",\n to: \"date\"\n }\n },\n { $rename: { \"createdAt\": \"updatedAt\" } }\n ]\n);\n\n",
"text": "sume you wanted to convert the datHello @tim_Ran\nTo change the data type of the createdAt field from a string to a date in MongoDB, you can use the $convert operator in the updateMany() method. This operator allows you to convert a field to a specified data type using an expression.Here’s an example of how you can use the $convert operator to change the createdAt field to a date type:This will update all documents in the collection collection and set the createdAt field to a date type.It’s important to note that you should always make a backup of your data before making any changes. Also, you should test your queries on a small subset of data before running it on your entire collection, to ensure that it behaves as expected.If you want to rename createdAt to updatedAt in the same operation you can use the $rename operatorThis will update all documents, change the createdAt field to a date and rename it to updatedAtIt’s important to note that this will change the type of the field in the documents but it won’t change the index type, if you have any index on that field, you will need to remove it and re-create it. Also, if you’re using any code that depends on the original type of the field, you’ll need to update it to reflect the new data type.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "Thanks for your feeldback @Tarun_Gaur\ni want to convert date string to date Long (Number long) not to ISO date",
"username": "tim_Ran"
},
{
"code": "mongoexportdb.foo.drop();\ndb.foo.insertOne({createdAt:\"2023-01-19T08:24:14-05:00\"})\ndb.foo.updateMany({createdAt:{$type: \"string\"}},[{$set:{createdAt:{$toDate:\"$createdAt\"}}}])\nmongoexport -d test -c foo --quiet --pretty -jsonFormat=canonical\n{\n\t\"_id\": {\n\t\t\"$oid\": \"63c948e23db2dd3bf3ec900c\"\n\t},\n\t\"createdAt\": {\n\t\t\"$date\": {\n\t\t\t\"$numberLong\": \"1674134654000\"\n\t\t}\n\t}\n}\n\n",
"text": "If you do it the way @Tarun_Gaur suggests and then do the mongoexport you will end up with what you expect.Example:",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo date format | 2023-01-13T06:01:22.276Z | Mongo date format | 5,596 |
|
null | [
"java"
] | [
{
"code": "",
"text": "Hi there,I’m working on a project and I want to use the last driver version for mongoDB.\nI found, on maven central, the Uber JAR MongoDB Java Driver, but I don’t get what version of driver is used, 4.8?Thanks in advance",
"username": "Luciano_Bigiotti"
},
{
"code": "mongo-java-drivermongodb-drivermongodb-driver-sync",
"text": "Uber Jar, mongo-java-driver, was legacy even back when the driver was version 3.4 And mongodb-driver is also kinda legacy since version 3.7. Suggested driver is mongodb-driver-sync since then.check different versions here: MongoDB Java Driverscheck this one if you want the latest, currently 4.8.2, with a quick start: Quick Start — Java Sync (mongodb.com)",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Driver Version MongoDB | 2023-01-19T11:52:02.574Z | Driver Version MongoDB | 991 |
null | [
"node-js",
"containers"
] | [
{
"code": "",
"text": "I am upgrading mongo db driver from 2.3 to 4.13 and using mongo db and a node app with docker containers.\nDocker-compose.yml\nimage: mongo:latest\nand image for node is 18.12.1After upgrading the driver i get error\nMongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017\nat Timeout._onTimeout (/srv/deployments/profile-service/node_modules/mongodb/lib/sdam/topology.js:285:38)\nat listOnTimeout (node:internal/timers:564:17)\nat process.processTimers (node:internal/timers:507:7) {\nreason: TopologyDescription{ type: ‘Unknown’, servers: Map(1) { ‘localhost:27017’ => [ServerDescription] }\n,\nstale: false,\ncompatible: true,\nheartbeatFrequencyMS: 10000,\nlocalThresholdMS: 15,\nsetName: null,\nmaxElectionId: null,\nmaxSetVersion: null,\ncommonWireVersion: 0,\nlogicalSessionTimeoutMinutes: null\n},\ncode: undefined,\n[Symbol(errorLabels)]: Set(0) {}\n}Please note it works fine outside docker container",
"username": "sandeep_rao"
},
{
"code": "mongodb://mongoversion: '3'\nservices:\n mongo:\n image: mongo:6.0\n ... \n ...\n\n nodeapp:\n image: nodeapp\n ...\n ...\n",
"text": "Each container has its own network namespace. Using 127.0.0.1 as the mongo address is directing node to connect to the node container!Given the compose file below the mongodb uri can be mongodb://mongo",
"username": "chris"
},
{
"code": "version: '3.7'\nservices:\n mongo:\n image: mongo:latest\n networks:\n - Test\n ports:\n - \"27017:27017\"\n volumes: \n - mongo_data:/data/db\nmain-api:\n restart: always\n build:\n context: /node-service\n dockerfile: /node-service/dev.Dockerfile\n args: \n NPM_TOKEN: ${NPM_TOKEN}\n # command: npm start\n image: node-service-local\n environment:\n - NODE_ENV=qa\n - GRP_ID=${GRP_ID}\n ports:\n - \"30002:30002\"\n networks: \n - Test\n volumes: \n - ${HOME}/node-service:/srv/deployments/node-service\n depends_on:\n - mongo\n",
"text": "Hi,\nI am using mongo:latest image and for connection i am using the service name. Please find below my docker-compose fileMy connection URL is mongodb://mongo:27017/db_name?directConnection=true&retryWrites=true&w=majority",
"username": "sandeep_rao"
},
{
"code": "",
"text": "I created a sample app that was connecting to mongo on 27017 since I exposed it. I did not use docker and directly run the app and everything works fine. Only when I run the app inside docker container, I am getting this error",
"username": "sandeep_rao"
},
{
"code": "directConnectiondirectConnection",
"text": "Is your mongo container a single node replica set? You can use the directConnection option to the driver.Given your driver upgrade and use of the service name in the mongo uri this make me think the auto discovery and unified topology in the 4.0 is detecting the replicaSet member hostname and then attempting connection.If the directConnection works for you that may be satisfactory. The actual solution would be to update the replicaSet member to use the servicename.",
"username": "chris"
},
{
"code": "You can use the directConnection option to the",
"text": "Hi Chris,\nIt is a single node replica. What do you mean by You can use the directConnection option to the. I am passing this as part of connection string",
"username": "sandeep_rao"
},
{
"code": "",
"text": "So it is, something wrong with my eyes today! ",
"username": "chris"
},
{
"code": "const { MongoClient } = require(\"mongodb\");\n\nconst uri = process.env.MONGO_URI;\nconst client = new MongoClient(uri);\n\nasync function run() {\n try {\n const fooColl = await client.db(\"test\").collection(\"foo\");\n const cur = fooColl.find({}).limit(10);\n for await (const doc of cur) {\n console.log(doc);\n }\n } finally {\n await client.close();\n }\n}\n\nrun().catch(console.dir);\nMONGO_URI=\"mongodb://mongo/?serverSelectionTimeoutMS=3000&directConnection=true\" node connectandprint.js \n{ _id: 0 }\nMONGO_URI=\"mongodb://mongo/?serverSelectionTimeoutMS=3000\" node connectandprint.js \nMongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017\n at Timeout._onTimeout (/home/node/app/node_modules/mongodb/lib/sdam/topology.js:285:38)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(1) { '127.0.0.1:27017' => [ServerDescription] },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 's0',\n maxElectionId: ObjectId { [Symbol(id)]: [Buffer [Uint8Array]] },\n maxSetVersion: 5,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\n\n",
"text": "That should be working, and I just tried it myself on a single node replicaSet.Is your code consuming the uri as-is or doing some processing on it first ?With directConnect:Without directConnect:",
"username": "chris"
},
{
"code": "client.db('sample_mflix');Unhandled rejection MongoError: Unsupported OP_QUERY command: find. The client driver may require an upgrade. For more details see https://dochub.mongodb.org/core/legacy-opcode-removal ",
"text": "Hi Chris,\nThanks for the code. I was missing await keyword.\nAs per doc which is https://www.mongodb.com/docs/drivers/node/current/quick-start/client.db('sample_mflix'); was missing await keyword. after adding await, I am able to get the connection.\nBut facing another issue which is\nUnhandled rejection MongoError: Unsupported OP_QUERY command: find. The client driver may require an upgrade. For more details see https://dochub.mongodb.org/core/legacy-opcode-removal Please note i am using the latest mongo driver which is 4.13.",
"username": "sandeep_rao"
},
{
"code": " const database = client.db('sample_mflix');\n const movies = database.collection('movies');\nawait const movie = await movies.find(query);\n",
"text": "try separating database and collection assignments:and you are using await in the wrong place. these two lines prepare the connection but do not use the connection until you execute commands/queries on the collection. that part is the one that needs it.",
"username": "Yilmaz_Durmaz"
},
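To make the await point above concrete, here is a small sketch of where the driver actually needs it — db() and collection() are synchronous and only the operations that hit the server return promises (the database, collection and query used here are illustrative):

```js
const database = client.db("sample_mflix");    // no await needed: returns a Db handle
const movies = database.collection("movies");  // no await needed: returns a Collection handle

const movie = await movies.findOne({ title: "Back to the Future" }); // await the actual query
const firstTen = await movies.find({}).limit(10).toArray();          // find() gives a cursor; await toArray()
```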
{
"code": "",
"text": "Hi Yilmaz,\nError was resolved when started using await client.db(‘test’). If I do not use the await keyword, I get above mentioned error. I know that even mongo doc does not recommend using await but not sure what to do to resolve the issue.",
"username": "sandeep_rao"
},
{
"code": "docker compose start mongo\n# and after 10 seconds for example\ndocker compose start nodeapp\n",
"text": "Error was resolved when started using await client.db(‘test’).I do not think this solves the problem because there is another problem masking its correctness.By the way, I am now beginning to suspect that it was related to your docker setup all this time. You see, “depends_on” is mainly to check if the container is booted but you need to also wait for the mongodb to fully start up listening on the port. depending on the data size in its data volume, this will take longer than you expected thus refusing any connection. (or the host might be slow to boot up mongodb container).revert to the previous version and try starting your containers separately:and stop/remove only the application container, not the database, while you are still developing. (unless you need to reset it too)you can later write scripts to tap into the container and check the health of the mongod process, so that the “depends_on” would correctly identify the readiness of it. check this thread:mongodb - Why does the docker-compose healthcheck of my mongo container always fail? - Stack Overflow",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi Yilmaz,\nSo i was using nodemon which detects any changes and restarts the node server. Now in my case when mongo was up and I tried restarting the node server using some file change, I was still getting this. Now I have not modified anything but used await and everything works fine. I agree that we should check for mongo db health before we bring up the node container",
"username": "sandeep_rao"
},
{
"code": "version: '3.7'\nname: \"mongo-node-docker\"\nservices:\n mongo:\n image: mongo:latest\n ports:\n - \"27017:27017\"\n volumes: \n - ./mongo_data:/data/db\n environment:\n MONGO_INITDB_ROOT_USERNAME: root\n MONGO_INITDB_ROOT_PASSWORD: example\n nodeapp:\n image: node:19-alpine\n working_dir: /app\n command: \"npm start\" # \"npm install\" then \"npm start\"\n ports:\n - \"8080:8080\"\n volumes: \n - ./nodeapp:/app\n depends_on:\n - mongo\n environment:\n MONGODB_ADMINUSERNAME: root\n MONGODB_ADMINPASSWORD: example\n MONGODB_URL: mongodb://root:example@mongo:27017/\n{\n \"name\": \"nodeapp\",\n \"version\": \"1.0.0\",\n \"description\": \"\",\n \"main\": \"index.js\",\n \"scripts\": {\n \"start\": \"node index.js\"\n },\n \"keywords\": [],\n \"author\": \"\",\n \"license\": \"ISC\",\n \"dependencies\": {\n \"mongodb\": \"^4.13.0\"\n }\n}\nconst { MongoClient } = require(\"mongodb\");\n\nlet uri = process.env.MONGODB_URL;\n\nconst client = new MongoClient(uri);\n\nasync function run() {\n try {\n const conn = await client.connect()\n console.log(\"from: \",conn.s.url)\n\n const database = client.db(\"testdb\");\n const coll = database.collection('testcoll');\n \n await coll.insertOne({\"name\":\"testname\"})\n\n const data = await coll.findOne({});\n console.log(data);\n\n } finally {\n await client.close();\n }\n}\nrun().catch(console.dir);\ndocker compose run -d mongodocker compose run --rm nodeappcommand: \"npm install\"from: mongodb://root:example@mongo:27017/\n{ _id: new ObjectId(\"63c5dbffaef20bb1535ec6c1\"), name: 'testname' }\nnpm installnpm start",
"text": "let me take you basics for the trio, mongodb, nodejs and docker (compose). please setup the following simple system, inspect, and try to check which part of your code deviates, then let me know the result.1- create a new test folder, copy the following compose content to a “compose.yml” file2- create “mongo_data” folder to bind to mongodb data folder. binds are faster to remove than a volumes3- create “nodeapp” folder. inside create 2 files, “package.json” and “index.js”, then copy following contents4- run database server first, as a service: docker compose run -d mongo5- then run app in foreground: docker compose run --rm nodeapp6- you should see the following output (different id, though)you will immediately notice the use of await keyword; when I make the open queries. and app works fine.if you followed all steps, now compare your app and compose file and see if you can find what differs in these basic steps.EDIT: I forgot a step in running the app. since I don’t use an image creation step, you need to first run it with having command to be npm install then second time with npm start.",
"username": "Yilmaz_Durmaz"
},
{
"code": "OP_QUERYnpm install",
"text": "Hi again, regarding the OP_QUERY error, I have a resolution on it. I tried nodejs drivers 2.2.x, 3.0.0, 3.1.0 and above. this is related to versions before 3.1.0 and mongodb 5.1 and newer.I have a bit longer answer about this here:in case you are sure you tried to update to 4.13 and got this error, then be sure it did not. you need to rebuild the image but first remove its traces from your docker cache. docker records commands as layers, and if a command is not changed, here npm install, it may not be run again, and previous 2.3 driver layer might be in use.",
"username": "Yilmaz_Durmaz"
},
{
"code": " await client.connect()\n console.log(\"from: \",conn.s.url)\n db = client.db();\n",
"text": "Hi,For some reason, if i haveThis works fine with a mongo db URL managed by mongo server which has conn URL mongo+srv/…\nBut same does not work when I run mongo inside docker container and try to connect\nError isMongoServerSelectionError: connection timed out\nmain-api_1 | at Timeout._onTimeout (/srv/deployments/profile-service/node_modules/mongodb/src/sdam/topology.ts:582:30)\nmain-api_1 | at listOnTimeout (node:internal/timers:564:17)\nmain-api_1 | at processTimers (node:internal/timers:507:7) {\nmain-api_1 | reason: TopologyDescription {\nmain-api_1 | type: ‘Unknown’,\nmain-api_1 | servers: Map(1) { ‘mongo:27017’ => [ServerDescription] },\nmain-api_1 | stale: false,\nmain-api_1 | compatible: true,\nmain-api_1 | heartbeatFrequencyMS: 10000,\nmain-api_1 | localThresholdMS: 15,\nmain-api_1 | setName: null,\nmain-api_1 | maxElectionId: null,\nmain-api_1 | maxSetVersion: null,\nmain-api_1 | commonWireVersion: 0,\nmain-api_1 | logicalSessionTimeoutMinutes: null\nmain-api_1 | },\nmain-api_1 | code: undefined,\nmain-api_1 | [Symbol(errorLabels)]: Set(0) {}\nmain-api_1 | }",
"username": "sandeep_rao"
},
{
"code": "mongomongo:27017mongonodeappmain-api",
"text": "the mongo name in the connection mongo:27017 is the name of the service. I used mongo and nodeapp names in my example compose file.you seem to change the app service’s name to main-api, so strongly possible you also changed the name of the database service. use that name in your connection string instead. (timeout occurs when a given name does not resolve to an address or the resolved address is not mapped to a running host. that is the difference from a refused connection)If you later happen to completely separate services into two files, that may require extra work because docker may start them in separate networks. so heads up. (though it will be similar to connecting outside database if you choose this path)",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I thought about editing my above response, but you might miss then if you already are around and read that.The other possibility is just an honest mistake that happens to any of us: you forgot to start the database service. If your compose file still has both services in it, then this is the strongest possibility for the timeout. check the 4th step.",
"username": "Yilmaz_Durmaz"
}
] | MongoServerSelectionError: connect ECONNREFUSED insdie docker container | 2023-01-11T09:38:22.760Z | MongoServerSelectionError: connect ECONNREFUSED insdie docker container | 7,815 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "As per my use case, I need to restrict my end-user to login from single device. I want to implement any one of the following. Assume end-user is already logged in from one device. When user tries to login from second deviceAny of these approach will help me to implement my use case.Thanks",
"username": "Sudarshan_Roy"
},
{
"code": "",
"text": "Hey Sudarshan -The general approach would be:a) Store the user’s “currentDeviceId” every time the user logs in with Custom Function Authentication and Custom User Datab) The next time the user logs in, revoke all sessions from the Admin API in your custom function, before logging them again and setting the user’s new Device ID - this way you get to add your session expiration logic before the user has actually logged in on another device and provided another session token.c) Add client code to handle invalid session requests and take user to logout screenIn practice, this would look like:\ndevice a → logged in\ndevice b → calls login function → -> revokes all sessions and invalidates device a → logs user in from device b with new session → user is logged in on device b successfullyany subsequent request will fail, client code handles invalid session and takes user to login screenIf you want to request more session/token configuration options - you can add a request here. We use items here to influence our roadmap on Realm.Sumedha",
"username": "Sumedha_Mehta1"
},
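A rough sketch of step (a) as an App Services custom-function-auth handler — the payload shape, database/collection names and returned ID are assumptions for illustration, and the Admin API call for revoking the old device's sessions is only indicated in a comment, not implemented:

```js
exports = async function (loginPayload) {
  const { email, deviceId } = loginPayload; // assumed fields sent by the client

  const users = context.services
    .get("mongodb-atlas")                   // default linked data source name
    .db("auth")
    .collection("app_users");

  // Record which device is now considered the "current" one for this user.
  const user = await users.findOneAndUpdate(
    { email },
    { $set: { currentDeviceId: deviceId, lastLogin: new Date() } },
    { upsert: true, returnNewDocument: true }
  );

  // This is where you would call the App Services Admin API to revoke the
  // previous device's sessions before returning, as described above.

  return user._id.toString();               // unique external ID for the authenticated user
};
```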
{
"code": "loginPayload",
"text": "Thanks for such a detailed approach.Can you please let me know the structure of the parameter, i.e., loginPayload of Custom Function Authentication.",
"username": "Sudarshan_Roy"
},
{
"code": "",
"text": "The function payload will be whatever custom credentials you want to pass in for authentication (e.g. email, password, device Id, etc)There is an example on one of our DevHub posts on how to use Custom Function Auth",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Hi Sumedha ,\nMentioned DevHub posts \nis not available (404-Not Found). Can please check it once.",
"username": "Animesh_Dey"
},
{
"code": "",
"text": "A quick search on the MongoDB Developer site returns this article has has the same title. I’m pretty sure that MongoDB changed their site around a bit not that long ago so it makes sense that articles were moved to the new location.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "currentDeviceIdHow can I get the ‘currentDeviceId’ is it inside the authEvent? and can I do this within the authentication trigger?",
"username": "Superday_Studios"
}
] | Restricting user to login from multiple devices | 2021-04-20T21:22:24.551Z | Restricting user to login from multiple devices | 12,521 |
null | [
"python",
"compass",
"php",
"cxx"
] | [
{
"code": "",
"text": "I think that MongoDB meets my needs for tracking business clients and their orders, with fast development time. But I have hit a roadblock in the documentation and can’t find what I need. It seems to focus on data and queries, but not programming.Can anyone please advise me how to run full-scale JavaScript programs that interface with my MongoDB, running only locally on a Windows server? Is there a Compass command or other command to run such programs? Is there a menu system for running such programs? How do I debug such programs, given that they are not running in a browser, so there are no Developer Tools and not even a Console?Or would I be better off using PHP and the MongoDB library code? No, I don’t want to run using Smalltalk, Ruby on Rails, React, C++, Python, or any other language or framework. Just JavaScript, please. With some sort of debugging tools, please.Also, can Mongo/JavaScript do asynchronous programming? Can I “await” a Mongo Promise to do a query or update while my program does something else until the Promise resolves?",
"username": "David_Spector"
},
{
"code": "",
"text": "Look at https://learn.mongodb.com/ and follow the developer path of the language of your choice.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you for taking the time to answer, but these course cover only Node, Python, C#, and Java.",
"username": "David_Spector"
},
{
"code": "",
"text": "And what is Node if not JavaScript?",
"username": "steevej"
},
{
"code": "",
"text": "Node is a little operating system with many dependencies that I don’t want on my computer. I especially don’t want NPM, which is an enormous library of software I don’t need.",
"username": "David_Spector"
},
{
"code": "",
"text": "Compass includes MONGOSH, which is all of standard JavaScript .",
"username": "David_Spector"
},
{
"code": "",
"text": "How do you do JS (except for web browser) without node?I am pretty sure you may use the JS driver without node just list any JS package.",
"username": "steevej"
},
{
"code": "",
"text": "Yes. You may write scripts and use them in mongosh. See",
"username": "steevej"
},
{
"code": "",
"text": "load(‘test.js’)MongoshUnimplementedError: [COMMON-90002] load is not currently implemented for this platform",
"username": "David_Spector"
},
{
"code": "",
"text": "How do you do JS (except for web browser) without node?Compass contains a complete Mozilla implementation of JavaScript, supposedly. I don’t want node on my computer, I want to develop with JavaScript, including debugging, preferably inside a browser so I have the Developer Tools available.",
"username": "David_Spector"
},
{
"code": "",
"text": "Yep. The Compass version of mongosh does not support load. You need to use the command line version.",
"username": "steevej"
},
{
"code": "",
"text": "I will try. But is there any way to get access to MongoDB, community version, from the JavaScript running in Firefox or Chrome? It would be ideal to do my development locally in a browser, so I have access to Console and other Developer Tools.",
"username": "David_Spector"
},
{
"code": "",
"text": "I tried to install the MongoDB Shell (mongosh) using the instructions at instructions and failed. First, I downloaded the zip file for Windows 64. This expanded into a folder containing a file mongosh.1.gz and a few other files. The instructions say to expand this .gz file, so I used 7-zip to expand it. The file inside, mongosh.1, was not of any known file type, so 7zip could not expand it further. Windows does not know what to do with such a file.I can understand why a Windows installation would not be as easy as a Linux installation, given that few developers work on Windows. But this installation really does seem to fail.",
"username": "David_Spector"
},
{
"code": "mongosh",
"text": "Any reasons why you decided to follow the complicated way?From the instructions link you provided follow the instructions:1Open the MongoDB Download Center.2345",
"username": "steevej"
},
{
"code": "",
"text": "No, no particular reason. When I looked there, I only saw a .zip file download. Thanks for this.ADDED: Ah, I see the problem. There are several selection boxes. The last one is Package. If it is set to Zip it only offers Zip as a choice. It is necessary to open the Platform box, and near the end of the list is Windows/msi, which is the installation option that works. The way the download dialog works could be improved a bit.",
"username": "David_Spector"
},
{
"code": "alert('ok');\ntest> load('test.js')\nReferenceError: alert is not defined\n",
"text": "I wrote the program test.js as follows:The result of loading this program into mongosh:Also, as I predicted, Developer Tools (including Console) are not available in mongosh.I would like to write complete programs to make use of the MongoDB, but this does not seem easy in mongosh. Any suggestions?Can I connect to MongoDB from regular JavaScript or PHP in a browser?",
"username": "David_Spector"
},
{
"code": "",
"text": "I do not agree withthis does not seem easy in mongosh. Any suggestions?Because it is easy. But don’t expect to use browser specific features. For example, there is no DOM outside the browser.If you use features specific to the browser do not expect to run your code anywhere else then in the browser. The main issue here is that you wantto write complete programsbut want to stick to the browser and not use npm.I really cannot help further so this will be my last reply on this subject.",
"username": "steevej"
},
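For anyone following along, a script along these lines does run under the command-line mongosh and load() — it sticks to the shell's own globals (db, print/console.log) instead of browser APIs like alert. The database and collection names are placeholders:

```js
// test.js -- from inside mongosh run:  load('test.js')
db = db.getSiblingDB("mydb");               // pick the database to work against

db.clients.insertOne({ name: "Acme", createdAt: new Date() });

const clients = db.clients.find({}).toArray();
print(`found ${clients.length} client(s)`);  // print()/console.log() replace alert()
clients.forEach(c => printjson(c));
```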
{
"code": "mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.2\n",
"text": "I really cannot help further so this will be my last reply on this subject.Thank you so much for your help, bye!So, I definitely need to run in a browser, since I want access to the DOM so that I can interface with myself as the user of the database via forms.I’m looking for a way to interface with MongoDB from JavaScript running in a browser, A starting point is the command reported by standalone mongosh:I can certainly run this from PHP code running in a local browser using either GET or PUT, so I should be able to connect to the DB this way. Now, how do I retrieve a connection resource/pointer/object and use HTTP commands to do DB operations? Must I use the PHP library for MongoDB, or can I connect directly?",
"username": "David_Spector"
},
{
"code": "",
"text": "Take a look at https://www.mongodb.com/docs/atlas/api/data-api/.Note that the js driver is a library, if somehow you are able to put the library where the browser can see it you might be able to import it with a <script> tag.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you for continuing to reply even though you cannot help me.The page you linked to makes it clear that the data api is normally disabled, and can only be enabled for Atlas, which is MongoDB in the cloud, whose data and programs are owned by whoever maintains MongoDB. As I’ve said above, I’m using the Community MongoDB, not Atlas. The reason is that I want to guarantee my clients security by not putting their data on the Web, in the cloud, or under the protection of a third party. Instead, I run the current mySQL database and its programs on a local Windows server having no access from outside of my local wireless network. I want to do the same with MongoDB and switch from relational records to JSON.So, I still need someone to help me understand how to access MongoDB using REST (HTTP commands). I do not need all the fancy caching and other features provided by the various libraries for MongoDB, as my data is simple, and limited to just a few thousand records with no concurrent access needed. I want to use MongoDB under the assumption that it stores the data more efficiently than I could store it using my own hashing algorithms.",
"username": "David_Spector"
}
] | (Newcomer) How do I write real JavaScript programs for Mongo/Compass? | 2023-01-09T22:31:57.441Z | (Newcomer) How do I write real JavaScript programs for Mongo/Compass? | 3,071 |
null | [
"transactions",
"change-streams"
] | [
{
"code": "",
"text": "Are the change events of a transaction always guaranteed to appear in the order in which they are executed? For example, if two concurrent transactions are being executed, the below sequence shows the order of the operations being maintained even if the transactions themselves are not grouped together.txn1 - operation1\ntxn1 - operation2\ntxn2 - operation1\ntxn1 - operation3\ntxn1 - operation4\ntxn2 - operation2\ntxn2 - operation3\ntxn2 - operation4\nIs there a possibility of the change events appearing as:txn1 - operation1\ntxn1 - operation2\ntxn2 - operation1\ntxn1 - operation4\ntxn1 - operation3\ntxn2 - operation4\ntxn2 - operation3\ntxn2 - operation2",
"username": "Kiruphasankaran_Nataraj"
},
{
"code": "",
"text": "@Kiruphasankaran_Nataraj\nIn MongoDB, the order of operations within a single transaction is guaranteed to be maintained. However, the order of operations across multiple concurrent transactions is not guaranteed to be maintained. In other words, if two transactions are executed concurrently, the order in which the change events for each transaction appear in the database may not match the order in which the operations were executed. This is because MongoDB uses a multi-version concurrency control (MVCC) model which allows multiple transactions to execute concurrently and make changes to the same data, and the resulting order of operations may depend on the specific timing and order of those changes.",
"username": "Sumanta_Mukhopadhyay"
}
] | Are the change events of a transaction always guaranteed to appear in the order in which they are executed? | 2023-01-19T06:41:01.759Z | Are the change events of a transaction always guaranteed to appear in the order in which they are executed? | 1,261 |
null | [] | [
{
"code": "",
"text": "What is the best way to find information on how to build a MEVN (Mongo, Express, Vue, Node) stack.",
"username": "ofeyofey_N_A"
},
{
"code": "",
"text": "Hi @ofeyofey_N_A ,There are several guides, we do have an official tutorial on MERN which is a good starting point switching react for vue.This tutorial will show you how to build a full-stack MERN application -- in this case an employee database -- with the most current tools available.You can then use other guides like :In this beginner-friendly tutorial, we will create a simple CRUD To Do application using the popular MEVN stack. Users can use the end application to create, read, update, and delete data...To connect the dots",
"username": "Pavel_Duchovny"
}
] | Best way to build a MEVN stack | 2023-01-18T17:31:28.798Z | Best way to build a MEVN stack | 1,669 |
null | [
"aggregation",
"queries"
] | [
{
"code": "[primary] distapp> db.sellers.find({ code: 'MATS' }, { code: 1, name: 1, \"settings.constraints\": 1 })\n[\n {\n _id: ObjectId(\"5e73e8ec6294db00038b4f1c\"),\n settings: {\n constraints: {\n 'Texture:Type': {\n regex: 'soft|hard|matty|tough|stainresist|adhesive'\n },\n '%or': [\n { 'thickness:ratio': { '%gte': 700 } },\n {\n 'thickness:ratio': { '%gte': 600, '%lte': 699 },\n TRA: { '%max': 1.3 }\n },\n {\n 'thickness:ratio': { '%gte': 580, '%lte': 599 },\n TRA: { '%max': 1.2 }\n },\n {\n 'thickness:ratio': { '%gte': 540, '%lte': 579 },\n TRA: { '%max': 1.1 }\n },\n { TRA: { '%max': 1 }, 'thickness:ratio': { '%eq': null } }\n ]\n }\n },\n code: 'DELTA',\n name: 'DELTA pads'\n }\n]\n[primary] distapp> db.sellers.find({ code: 'MATS' }, { code: 1, name: 1, \"settings.constraints\": 1 })\n[\n {\n _id: ObjectId(\"5e73e8ec6294db00038b4f1c\"),\n settings: {\n constraints: {\n 'Texture:Type': {\n regex: 'soft|hard|matty|tough|stainresist|adhesive'\n },\n '%or': [\n { 'thickness:ratio': { '%gte': 700 } },\n {\n 'thickness:ratio': { '%gte': 600, '%lte': 699 },\n TRA: { '%max': 1.3 }\n },\n {\n 'thickness:ratio': { '%gte': 580, '%lte': 599 },\n TRA: { '%max': 1.2 }\n },\n {\n 'thickness:ratio': { '%gte': 540, '%lte': 579 },\n TRA: { '%max': 1.1 }\n },\n { TRA: { '%max': 1 }, 'thickness:ratio': { '%eq': null } }\n ]\n }\n },\n code: 'DELTA',\n name: 'DELTA pads'\n }\n]\nCode: \"$code\"\nName: \"$name\"\n\"$settings.constraints.Texture:Type.regex\"\n\"$settings.constraints.%or.0.thickness:ratio.%gte\"\n\"$settings.constraints.%or.1.thickness:ratio.%gte\"\n\"$settings.constraints.%or.1.thickness:ratio.%lte\"\n\"$settings.constraints.%or.1.TRA.%max\"\n\"$settings.constraints.%or.2.thickness:ratio.%gte\"\n\"$settings.constraints.%or.2.thickness:ratio.%lte\"\n\"$settings.constraints.%or.2.TRA.%max\"\n\"$settings.constraints.%or.3.thickness:ratio.%gte\"\n\"$settings.constraints.%or.3.thickness:ratio.%lte\"\n\n",
"text": "Hello Everyone,\nI have this collection:How can i generate the following fields from the settings.constraints array, this is required for csv export.Thanks",
"username": "Ashok_Kumar10"
},
{
"code": "mongoexport --collection=sample --db=test --type=csv --fields='code', 'name', 'settings.constraints.Texture:Type.regex', 'settings.constraints.%or.0.thickness:ratio.%gte','settings.constraints.%or.1.thickness:ratio.%gte','settings.constraints.%or.1.thickness:ratio.%lte','settings.constraints.%or.1.TRA.%max' --out=events.csv\ncode,name,settings.constraints.Texture:Type.regex,settings.constraints.%or.0.thickness:ratio.%gte,settings.constraints.%or.1.thickness:ratio.%gte,settings.constraints.%or.1.thickness:ratio.%lte,settings.constraints.%or.1.TRA.%max\nDELTA,DELTA pads,soft|hard|matty|tough|stainresist|adhesive,700,600,699,1.3\n\n",
"text": "Hey @Ashok_Kumar10,Welcome to the MongoDB Community Forums! this is required for csv exportSince your end goal is to export the required fields to CSV, you can try and use the mongoexport command. To test this, I created a document from the sample you shared, and used mongoexport command to get a CSV for the first seven fields you mentioned:This is the CSV snippet:You can mention all the fields that you want to display in your CSV and can get the required CSV file. You can also, create a view of all the fields that you want from the collection and then export it using mongoexport.Please let us know if this helps or not. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
}
] | Aggregate pipleline to generate new fields from the array | 2023-01-10T04:38:56.660Z | Aggregate pipleline to generate new fields from the array | 574 |
null | [
"aggregation",
"node-js"
] | [
{
"code": "\n[\n{\n unique_key:\"1\",\n user_form:\n { \n user_details:\n { \n first_name:\"Tely\",\n last_name:\"Zed\"\n }\n }\n},\n{\n unique_key:\"2\",\n user_form:\n { \n user_details:\n { \n first_name:\"Rock\",\n last_name:\"Monty\"\n }\n }\n}\n]\nUser.aggregate([\n {\n $match: {\n user_form:\n {\n user_details:\n {\n first_name: req.body.user_form.user_details.first_name\n }\n }\n }\n }\n \n])\n.exec(function(err,filteredUsers){ \n if(filteredUsers) {\n console.log(filteredUsers);\n res.json({\n \"msg\": \"Successfully gets the filtered users\",\n \"data\": filteredUsers,\n \"success\": true\n });\n } if(err){\n res.json({\n \"msg\":\"Error occured while filtering users\",\n \"error\": err ,\n \"success\":false\n });\n }\n})\n{\n user_form:\n { \n user_details:\n { \n first_name:\"Rock\"\n }\n }\n}\n\n{\n\"msg\": \"Successfully gets the filtered users\",\n\"data\": [],\n\"success\": true\n}\n",
"text": "I’m building REST APIs in node.js using express.js library. For the embedded data stored in mongodb as json object, I’m trying to get specific data by filtering, thereby using the aggregate match to achieve so.For the users collection, I’ve the data as belowI want to get data of user by searching name. So I’m using aggregate method and then using match operator to achieve it. User is the name of the model. For filtering I’m using below aggregate method on User model.Now when I’m posting request from postman with searching with first_name as Rock in postman //body as shown below.So upon hitting send, I’m getting below response in postman, so I’m not able to get filtered //data instead getting an empty array.So please tell what should I do to filter data .",
"username": "R_V"
},
{
"code": "db.array.find().pretty()\n{\n \"_id\" : ObjectId(\"63c01a42232f174edd983fbe\"),\n \"unique_key\" : \"1\",\n \"user_form\" : {\n \"user_details\" : {\n \"first_name\" : \"Tely\",\n \"last_name\" : \"Zed\"\n }\n }\n}\n{\n \"_id\" : ObjectId(\"63c01a42232f174edd983fbf\"),\n \"unique_key\" : \"2\",\n \"user_form\" : {\n \"user_details\" : {\n \"first_name\" : \"Rock\",\n \"last_name\" : \"Monty\"\n }\n }\n}\n> db.array.aggregate([{$match:{\"user_form.user_details.first_name\":\"Rock\"}}]).pretty()\n{\n \"_id\" : ObjectId(\"63c01a42232f174edd983fbf\"),\n \"unique_key\" : \"2\",\n \"user_form\" : {\n \"user_details\" : {\n \"first_name\" : \"Rock\",\n \"last_name\" : \"Monty\"\n }\n }\n}\n",
"text": "Hi @R_V ,\nyou need to use the dot notation for query a subdocument, so in this esample:Hoping is it useful!!Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Hi @Fabio_Ramohitaj , I’ve tried doing it by dot notation too but it still giving the same empty array.",
"username": "R_V"
},
{
"code": "\"first_name\"\"first_name \"",
"text": "Hi @Fabio_Ramohitaj ,\nThe mistake I did was I kept a space in \"first_name\" like \"first_name \" while inserting the document, so that’s why I wasn’t able to get filtered data by dot notation method which I tried earlier as well ,so later upon finding the mistake I get the answer. Thank you.",
"username": "R_V"
},
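A quick way to spot documents written with the accidental trailing space in the key, in case others hit the same thing — the collection name follows the model used earlier in the thread:

```js
// Documents whose nested key was saved as "first_name " (with a trailing space)
db.users.find({ "user_form.user_details.first_name ": { $exists: true } });
```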
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Match query for json object mongodb data returning empty array | 2023-01-12T14:29:09.812Z | Match query for json object mongodb data returning empty array | 1,818 |
null | [
"atlas",
"data-api"
] | [
{
"code": "",
"text": "Heyis there any way to enable Data API via atlas cli or curl ?Arek",
"username": "Arkadiusz_Borucki"
},
{
"code": "\"data-api\"\"product\"{appid}{groupid}Data ServicesData API \"Enabled\"",
"text": "Hi @Arkadiusz_Borucki,When enabling the Data API through the UI, An Atlas App Services App is created (for the Data API).In saying so, you can try using Create Application API specifying the value \"data-api\" for the query parameter \"product\". Once this is created, you can get and use the {appid} (from the Data API app created previously) value along with {groupid} to enable the Data API.However, I have encountered a bug in the UI with the above steps where the Data API is enabled but the Data Services → Data API screen does not represent the correct state (screenshot below). I have reported this behaviour which is being worked on.You can see that the Data API is \"Enabled\" yet the UI from this section is still asking for the Data API to be enabled\nimage2516×1320 227 KB\nUpon checking if the data Data API is enabled in the app itself:\nimage2014×1104 201 KB\nWith the above, perhaps it may be best to go through the UI at this stage to enable the Data API. I’ll update this topic once this unexpected behaviour is resolved.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Enable Data API via atlas cli / curl | 2023-01-06T18:08:49.317Z | Enable Data API via atlas cli / curl | 1,585 |
null | [
"aggregation",
"node-js",
"python",
"mongoose-odm",
"indexes"
] | [
{
"code": "",
"text": "Hi Community Members,We are trying to create the text indexes on our already created collections. We have text based search and aggregate takes 3 seconds to fetch just a set of 10 records.\nWe tried creating indexes with below MongoDB compass, nodejs (mongoose), python(pymongo) but in all the cases, index is not getting created. MongoDB deployed on AWS, collection is having only one document, but still index is not created.Your inputs will be appreciated.\nThanks in advance for the support!",
"username": "Bharat_Gupta2"
},
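Since the post above doesn't show the exact command used, here is the baseline shell syntax for a text index worth comparing against — the field and collection names are placeholders, and note that a collection can only have one text index:

```js
// Create a single compound text index over the searchable fields
db.articles.createIndex(
  { title: "text", body: "text" },
  { name: "articles_text_idx" }
);

// Confirm it exists
db.articles.getIndexes();

// Query it
db.articles.find({ $text: { $search: "mongodb indexes" } }).limit(10);
```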
{
"code": "",
"text": "Hi Community Members,Your responses will be much appreciated on this.Regards,\nBharat",
"username": "Bharat_Gupta2"
},
{
"code": "db.collection.getIndexes()",
"text": "Hi @Bharat_Gupta2,We tried creating indexes with below MongoDB compass, nodejs (mongoose), python(pymongo) but in all the cases, index is not getting created. MongoDB deployed on AWS, collection is having only one document, but still index is not created.There is not enough information here to determine or narrow down what may be causing the issue you have described. Can you supply the following details:We have text based search and aggregate takes 3 seconds to fetch just a set of 10 records.In addition to the above information, can you also elaborate on the text based search / aggregation? Please provide the operation used in this case.Regards,\nJason",
"username": "Jason_Tran"
}
] | MongoDB Text Index are not getting created | 2023-01-10T13:14:16.771Z | MongoDB Text Index are not getting created | 1,255 |
null | [] | [
{
"code": "",
"text": "Hello so I have a vps that Im using to deploy my apps, today I want to deplooy my app that needs Mongodb, on local machine there isn’t any issues but when deploying it looks like the vps can’t connect to the databse, even though i already allowed all ip adresses!,the vps that im using only support ipv6",
"username": "mr_gamers"
},
{
"code": "",
"text": "Hi @mr_gamers - Welcome to the community even though i already allowed all ip adresses!Could you clarify if the MongoDB deployment you’re attempting to connect to is self hosted or on MongoDB Atlas?the vps that im using only support ipv6In the case you are attempting to connect to an Atlas deployment then this won’t be possible at the time of this message as Atlas only allows client connections to the database deployment from entries in the project’s IP access list which currently only allows IPv4 addresses.You can vote for the related feedback post about IPv6 support if the above details relate to your use case.Regards,\nJason",
"username": "Jason_Tran"
}
] | Connectivity Issues when connect to mongodb from vps | 2023-01-18T04:24:12.916Z | Connectivity Issues when connect to mongodb from vps | 656 |
[
"queries",
"server"
] | [
{
"code": "",
"text": "\nScreenshot 2022-12-03 at 8.18.07 AM2380×430 102 KB\n\nI am using mac M1 and I tried to lower the mongodb-community version below 5 but the service status is showing error, only version 6 is running fine.",
"username": "Hemant_Kumar3"
},
{
"code": "mongod --version",
"text": "Are you trying to connect to a local MongoDB instance? if so what is the result of mongod --version?\nIf it is an Atlas cluster, is it managed by Atlas, or by you? I don’t know if it is possible to manage the version. It is probably version 6. tell if otherwise.\nLast question is which package is it and which version? mongodb or mongoose? by the way, at least from the error message, you need to upgrade to a higher version then the current one you installed.",
"username": "Yilmaz_Durmaz"
},
{
"code": "OP_QUERYMongoError: Unsupported OP_QUERY command: find. The client driver may require an upgrade.\n For more details see https://dochub.mongodb.org/core/legacy-opcode-removal\nnode_modulesnpm install",
"text": "Hi again @Hemant_Kumar3 , can you please, next time, copy-paste error messages so we can find a solution faster!?OP_QUERY is a legacy opcode and was removed from mongodb v5.1 and onward. Legacy Opcodes — MongoDB Manualthe nodejs driver for mongodb applied this change on 3.1.0 and onward.If someone gets this error, then that means the driver in use older than v3.1.0 and the connected database is 5.1 and higher.If you tried to update the driver, yet still getting the same error, then delete node_modules folder and re-install packages, npm install. if it is a container image, then it needs to be rebuilt (removing from the cache, if needed, as commands are recorded as layers, and layer may not be replaced if command is the same)@Stennie_X , thanks for the ping edit. I came across this error in another post, and now I found out an explanation ",
"username": "Yilmaz_Durmaz"
}
] | I am getting this error while use of parse query during login verification | 2022-12-03T03:00:27.230Z | I am getting this error while use of parse query during login verification | 3,453 |
|
null | [
"python",
"atlas-cluster"
] | [
{
"code": "",
"text": "Hello. I’m connecting for the first time with Python on Ubuntu 22.04 to the Atlas mongodb. However, I am getting the following error regarding DNS.I saw some similar threads, but I couldn’t solve my problem. Could you help me please?from pymongo import MongoClient\nimport dns.resolver\ndef get_database():CONNECTION_STRING = “mongodb+srv://projegui:@clusterak.mongodb.net/myFirstDatabase”client = MongoClient(CONNECTION_STRING)return client[‘myFirstDatabase’]if name == “main”:dbname = get_database()ERROR:\n/usr/bin/python3.10 /home/rodrigo/opt/gui/pymongo_get_database.py\nTraceback (most recent call last):\nFile “/usr/local/lib/python3.10/dist-packages/pymongo/srv_resolver.py”, line 89, in _resolve_uri\nresults = _resolve(\nFile “/usr/local/lib/python3.10/dist-packages/pymongo/srv_resolver.py”, line 43, in _resolve\nreturn resolver.resolve(*args, **kwargs)\nFile “/usr/local/lib/python3.10/dist-packages/dns/resolver.py”, line 1368, in resolve\nreturn get_default_resolver().resolve(\nFile “/usr/local/lib/python3.10/dist-packages/dns/resolver.py”, line 1190, in resolve\n(request, answer) = resolution.next_request()\nFile “/usr/local/lib/python3.10/dist-packages/dns/resolver.py”, line 691, in next_request\nraise NXDOMAIN(qnames=self.qnames_to_try, responses=self.nxdomain_responses)\ndns.resolver.NXDOMAIN: The DNS query name does not exist: _mongodb._tcp.clusterak.mongodb.net.During handling of the above exception, another exception occurred:Traceback (most recent call last):\nFile “/home/rodrigo/opt/gui/pymongo_get_database.py”, line 19, in \ndbname = get_database()\nFile “/home/rodrigo/opt/gui/pymongo_get_database.py”, line 10, in get_database\nclient = MongoClient(CONNECTION_STRING)\nFile “/usr/local/lib/python3.10/dist-packages/pymongo/mongo_client.py”, line 736, in init\nres = uri_parser.parse_uri(\nFile “/usr/local/lib/python3.10/dist-packages/pymongo/uri_parser.py”, line 542, in parse_uri\nnodes = dns_resolver.get_hosts()\nFile “/usr/local/lib/python3.10/dist-packages/pymongo/srv_resolver.py”, line 121, in get_hosts\n_, nodes = self._get_srv_response_and_hosts(True)\nFile “/usr/local/lib/python3.10/dist-packages/pymongo/srv_resolver.py”, line 101, in _get_srv_response_and_hosts\nresults = self._resolve_uri(encapsulate_errors)\nFile “/usr/local/lib/python3.10/dist-packages/pymongo/srv_resolver.py”, line 97, in _resolve_uri\nraise ConfigurationError(str(exc))\npymongo.errors.ConfigurationError: The DNS query name does not exist: _mongodb._tcp.clusterak.mongodb.net.Process finished with exit code 1",
"username": "Rodrigo_A_Kartcheski"
},
{
"code": "from pymongo import MongoClient\n\nclient = MongoClient(\"mongodb+srv://cluster0.sqm88.mongodb.net/test\")\ndb = client.get_database('mydatabase')\nmongoshcompass",
"text": "Hi @Rodrigo_A_Kartcheski and welcome to the MongoDB community forum!!I tried the following code in my local environment with Atlas URI and a localhost URIand it works perfectly fine.\nTo further understand the error messages , could you help me with a few details:Let us know if you have further queries .Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "clusterkmongodb.netsqm88cluster0.sqm88.mongodb.netsqm88",
"text": "Verify you cluster URI since the following is incompleteclusterak.mongodb.netYou are missing a few letters and/or numbers between clusterk and mongodb.net like sqm88 incluster0.sqm88.mongodb.netBut yours will be different from sqm88.",
"username": "steevej"
},
{
"code": "from pymongo import MongoClient\n\nclient = MongoClient(\"mongodb+srv://projegui:[email protected]/test\")\ndb = client.get_database('myFirstDatabase')\n",
"text": "Hello, Aasawari. I’m trying to access mongodb Atlas to create some databases and collections. Basic Python code operationsMy pymongo version: pymongo==4.3.3By mongodb Atlas I was able to connect perfectly.I tried the following exact string through Pycharm. I received the message “Process terminated with exit code 0”",
"username": "Rodrigo_A_Kartcheski"
}
] | The DNS query name does not exist | 2023-01-17T04:30:12.767Z | The DNS query name does not exist | 5,420 |
null | [
"swift"
] | [
{
"code": "maximumNumberOfActiveVersions66",
"text": "I can see when maximumNumberOfActiveVersions is incremented but I can’t find when this value is decremented. We set 66 for production and it is not enough",
"username": "Ned"
},
{
"code": "maximumNumberOfActiveVersions",
"text": "Can you be a bit more specific? Where can you see maximumNumberOfActiveVersions - in code? In a Function? Endpoint call? Can you clarify what’s being asked and include some example code if possible?",
"username": "Jay"
},
{
"code": "Realmvar config = Realm.Configuration(\n fileURL: fileURL,\n schemaVersion: 0,\n deleteRealmIfMigrationNeeded: false,\n objectTypes: ...\n )\n config.maximumNumberOfActiveVersions = 66\nmaximumNumberOfActiveVersionsNumber of active versions (66) in the Realm exceeded the limit 67",
"text": "In realm swift you use config object to instantiate Realm:One of the parameters is maximumNumberOfActiveVersions. At some point Realm is crashing with next message Number of active versions (66) in the Realm exceeded the limit 67. We can see in which cases this counter is increasing but it is not clear how to make Realm decrease it.",
"username": "Ned"
},
{
"code": "",
"text": "Ah, I see. So you’re using a local only Realm (no sync) and (maybe?) during a migration.That value is set by the developer so Realm will ‘crash’ instead of creating a ginormous file that cannot be opened. If it’s hitting that limit you set, there’s something else likely going wrong, as that’s a big number.Are you processing a huge dataset on a background thread perhaps? One of our apps, cranks through 2Gb of data and we never had that issue.I would think doing some troubleshooting may help - if there are a lot of threads active at once or if frozen objects are left frozen; maybe a class is not being deallocated? Maybe a runaway process? Checking memory with XCode may lead to why that’s happening.Wish I had more precise info but it seems that issue is caused by something else in the app.",
"username": "Jay"
},
{
"code": "let realm = ...\n\n// Create on main thread\nrealm.objects(Entity.self)\n .collectionPublisher\n .sink { completion in } receiveValue: { value in }\n .store(in: &subscriptions)\n\nDispatchQueue.background.async {\n (0..<1_000).forEach {\n let realmBackground = ... \n try? realmBackground.write(...)\n }\n}\nmaximumNumberOfActiveVersionsmaximumNumberOfActiveVersions\n \n [realm beginWriteTransaction];\n [IntObject createInRealm:realm withValue:@[@0]];\n XCTAssertFalse([realm refresh]);\n [realm cancelWriteTransaction];\n }\n \n - (void)testCancelWriteWhenNotInWrite {\n XCTAssertThrows([RLMRealm.defaultRealm cancelWriteTransaction]);\n }\n \n - (void)testActiveVersionLimit {\n RLMRealmConfiguration *config = RLMRealmConfiguration.defaultConfiguration;\n config.maximumNumberOfActiveVersions = 3;\n RLMRealm *realm = [RLMRealm realmWithConfiguration:config error:nil];\n \n // Pin this version\n __attribute((objc_precise_lifetime)) RLMRealm *frozen = [realm freeze];\n \n // First 3 should work\n [realm transactionWithBlock:^{ }];\n [realm transactionWithBlock:^{ }];\n \n ",
"text": "Yes, only local realm DB, no sync and crash is happening during normal read/write operations not a migration.The way how I was managed to reproduce it:If I move writing to main thread maximumNumberOfActiveVersions will never raise an exception.if there are a lot of threads active at once or if frozen objects are left frozen; maybe a class is not being deallocatedThat is why I want to understand in which case Realm decrement maximumNumberOfActiveVersions \nI found a unit test which demonstrate when it is incremented but I can’t find unit test or explanation in documentation when it is decremented",
"username": "Ned"
},
{
"code": "",
"text": "I am not sure I am clear on the code. What does the first section have to do with the second?Then in the second, it looks like 1001 Realms are being created with each one written to. While technically they allocated and deallocated that doesn’t seem like a good thing.Are you using an autorelease pool anywhere? See Avoid Pinning Transactions",
"username": "Jay"
},
{
"code": "final class DBClient {\n private let dispatchQueue: DispatchQueue = .init(label: \"db\", qos: .utility)\n\n func store(report: Report) {\n dispatchQueue.async {\n let realmBackground = ... \n try? realmBackground.write(report)\n }\n }\n\n func observeReports() -> AnyPublisher<[Report], Never> {\n let realm = ...\n realm.objects(Report.self)\n .collectionPublisher \n ...\n }\n}\nstoreOlder data is only garbage collected when it is no longer referenced or actively in use by a client application.",
"text": "Ok, let me try to explain what is happening. I have next class to work with Realm DBI think I’ve read that creating new instance of realm is cheap operation. That is why I create a new instance on each call to avoid accessing realm from different threads.In my case store functions is executed very often and at some point it leads to a crash, because we reached maximum number of active version. From the documentation I can see that Older data is only garbage collected when it is no longer referenced or actively in use by a client application. But in my case for some reason doesn’t happen.",
"username": "Ned"
},
{
"code": "store",
"text": "That is why I create a new instance on each call to avoid accessing realm from different threads.If you’re writing 1000 objects to realm, there’s no reason to create 1000 realm instances to do that; create one and do your write, ensuring it’s on a background thread to avoid blocking the UI. For a great option see Run A Transaction.store functions is executed very often and at some point it leads to a crashThat store function, as is, does not lead to a crash for us, nor cause any issues with maximumNumberOfActiveVersions.To verify, we created 100,000 objects and called that function to write them to realm. There was no issue or crash. My guess is, the issue is caused by something else other than that specific code.Older data is only garbage collected when it is no longer referencedThat is correct and if you’re working with larger datasets, it’s a good idea to wrap it in an autorelease pool so they are disposed of as soon as possible once they go out of scope. Here’s some additional reading that may helpSee this answer on StackOverflow along with this one and maybe this one too.The question about maximumNumberOfActiveVersions;from my understanding this is set to 0 by default and then you, as the developer can set it to a non-zero number. e.g. it doesn’t change itself - you have to do that. So there isn’t a way to decrement that number; that being said it can be handy to determine if you’ve got runaway processes, memory issues etc. Setting it to a small number will cause a app crash in instead of creating a crazy large datafile that’s corrupted or to large to be opened.That helps find issues before they are bigger issues.",
"username": "Jay"
},
{
"code": "Number of active versions (66) in the Realm exceeded the limit 67maximumNumberOfActiveVersions",
"text": "If you’re writing 1000 objects to realm, there’s no reason to create 1000 realm instances to do that; create one and do your write, ensuring it’s on a background thread to avoid blocking the UIAlso one of the reasons I created realm instance for each call was to “avoid” holding realm instances. When I saw first time exception Number of active versions (66) in the Realm exceeded the limit 67 I thought that this is because I’m not releasing realm object. But from what I can see it should be OK to keep 2 Selma instances: one to observe and one to write in background thread.See this answer on StackOverflow along with this one and maybe this one too.thanks, I’m already using autoreleasepoolSo there isn’t a way to decrement that number;Yes it is clear, that I can’t manually decrement this value, but I’m wondering how it is implemented in Realm, I mean when realm decides that maximumNumberOfActiveVersions should be decremented. When all realm DB instances released or etc",
"username": "Ned"
}
] | When maximumNumberOfActiveVersions is decremented? | 2023-01-16T09:01:54.331Z | When maximumNumberOfActiveVersions is decremented? | 1,046 |
null | [
"security",
"configuration"
] | [
{
"code": "net:\n port: 27017\n bindIp: \"127.0.0.1\" # bindIpAll: true work well too\nnet:\n port: 27017\n bindIp: \"127.0.0.1, 19.0.18.101\"\n Process: 28566 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=48)\n Process: 28563 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 28561 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 28559 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\n Main PID: 27861 (code=exited, status=0/SUCCESS)\n inet 19.0.18.100 netmask 255.255.255.0 broadcast 19.0.18.255\n inet 127.0.0.1 netmask 255.0.0.0\n",
"text": "Hello,I’ve got an error while trying to bing IP with the config file.The server (19.0.18.100 on CentOS 7) and client (19.0.18.101) are WM on the same subnetwork.The config file was auto created by mongoDB installation, all work except when I change the IP binding line:No problem with:With BindIpAll I can reach the server with the client.Fail with:I’ve read other post with some IP binding problem which talk about network config.\nIf need, here ifconfig -a | grep “inet”:I really appreciate some help.Have a good DayBenoit",
"username": "benoit_pont"
},
{
"code": "",
"text": "Are you using correct IP?\nWhat error you see in the mongod.log\nUse server IP instead of client IP you added",
"username": "Ramachandra_Tummala"
},
{
"code": " bindIp: \"127.0.0.1, 19.0.18.101\"",
"text": "Use server IP instead of client IP you addedThe above is the solution. You wroteThe server (19.0.18.100 on CentOS 7)and you try to bind with bindIp: \"127.0.0.1, 19.0.18.101\"which is theclient (19.0.18.101)",
"username": "steevej"
},
{
"code": "",
"text": "Hello,Thanks a lot for the answers.I thought that bindIp is to autorize the client IP (external) to connect to the MongoDB server. My MongoDB server is on the 19.0.18.100 and my client is on 19.0.18.101.\nSo I autorize to connect locally plus an external client (19.0.18.101).Could you explain me why autorizing the mongoDB server IP (19.0.18.100) will autorize an external connection for a client (19.0.18.101)?Should I bind: 127.0.0.1, 19.0.18.100 and 19.0.18.101 ?Thanks",
"username": "benoit_pont"
},
{
"code": "",
"text": "You are misunderstanding the meaning of bindIp.To restrict access based on IP you have to setup a firewall.",
"username": "steevej"
},
{
"code": "",
"text": "Ok I undestand, thanks a lot. Could you give my a real world usage of bindIp in order to understand it?",
"username": "benoit_pont"
},
{
"code": "",
"text": "There are examples in the links I provided.If you are planning to run your own server I strongly recommend you take M103 from https://learn.mongodb.com/. Otherwise Atlas might be a better choice.",
"username": "steevej"
},
{
"code": "mongodlocalhostnet.bindIP19.0.18.101mongod$ ifconfig | grep \"inet \"\n\tinet 127.0.0.1 netmask 0xff000000\n\tinet 192.168.1.100 netmask 0xffffff00 broadcast 192.168.1.255\n",
"text": "Could you give my a real world usage of bindIp in order to understand it?Hi @benoit_pont,By default the mongod process only binds to the localhost (127.0.0.1) loopback IP address which limits connections to those originating from the same host. The net.bindIP configuration value enables the process to bind to one or more local network interfaces.Your real world use case is adding the 19.0.18.100 address to allow non-localhost connections.Should I bind: 127.0.0.1, 19.0.18.100 and 19.0.18.101 ?You cannot bind to the external 19.0.18.101 IP address; this will result in a startup error for mongod similar to:Failed to set up listener: SocketException: Can’t assign requested addressThe only valid bind IPs are addresses for local network interfaces. For example:Could you explain me why autorizing the mongoDB server IP (19.0.18.100) will autorize an external connection for a client (19.0.18.101)?Listening to 19.0.18.100 allows any client with an open route to this IP address and port combination to connect.As @steevej noted, you need to configure a firewall to restrict remote access based on client IPs.I strongly recommend configuring (and testing) role-based access control and network encryption before opening your deployment to broader network exposure. For more information on available security measures, please review the MongoDB Security Checklist.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello,Thanks a lot. I really appreciate your precise reply.Have a nice day",
"username": "benoit_pont"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB server 6 on CentOS: Can't add IP bind address | 2022-12-30T08:14:25.898Z | MongoDB server 6 on CentOS: Can’t add IP bind address | 2,992 |
null | [
"aggregation",
"indexes"
] | [
{
"code": "db.orders.aggregate( [ { $indexStats: { } } ] )since",
"text": "Hi community,I’m trying to manage indexes for my collections, and running this command db.orders.aggregate( [ { $indexStats: { } } ] ) to get the index info, from the result of the index stats, it has an accesses prop, and within the accesses prop, it has a count and since prop available for me to check out how many times this index has been used since a date.is there any chance to change the retention date for this since prop, so it’s longer? from my observation, it only captures the latest 7-10 days, how can I capture index usage for the latest 30 days or 60 days for example.Any insights?\nThanks",
"username": "Oliver_Weng"
},
{
"code": "indexStatsmongod",
"text": "I believe that the indexStats command does not accept any parameters and there’s no way to configure the retention period for the data it collects. It’s also reset on the restart of a mongod process or similar things like that.Though if what you want to do a in-depth performance investigation, MongoDB FTDC might hold more information for you to use - https://www.mongodb.com/docs/manual/administration/analyzing-mongodb-performance/.Even MongoDB Atlas’ Performance Advisor seems to be only able to go back as far as 7 days.",
"username": "Luis_Osta1"
}
] | Indexes management | 2023-01-18T16:35:58.755Z | Indexes management | 893 |
null | [
"dot-net"
] | [
{
"code": "[BsonIgnoreExtraElements]\n class ListingAndReview\n {\n [BsonId]\n public long Id { get; set; }\n\n [BsonElement(\"name\")]\n public String Name { get; set; }\n\n [BsonElement(\"room_type\")]\n public String RoomType { get; set; }\n\n [BsonElement(\"property_type\")]\n public String PropertyType { get; set; }\n\n [BsonElement(\"amenities\")]\n public String[] Amenities { get; set; }\n/*\n [BsonElement(\"host\")]\n [BsonIgnoreIfNull]\n public IEnumerable<Host> Host { get; set; }\n*/\n }\n",
"text": "How would I use BsonElement syntax to map the array of reviews in the sample_airbnd.listingsAndReviews collection into a class for the document?\nThis isn’t working",
"username": "Ben_Hunsberger"
},
{
"code": "namespace MongoDbPlay\n{\n [BsonIgnoreExtraElements]\n class Host\n {\n [BsonElement(\"host_id\")] public long HostId { get; set; }\n [BsonElement(\"host_url\")] public string HostUrl { get; set; }\n [BsonElement(\"host_name\")] public string HostName { get; set; }\n }\n\n [BsonIgnoreExtraElements]\n class Review\n {\n [BsonElement(\"_id\")] public long Id { get; set; }\n [BsonElement(\"date\")] public DateTime Date { get; set; }\n [BsonElement(\"comments\")] public string Comments { get; set; }\n }\n\n [BsonIgnoreExtraElements]\n class ListingAndReview\n {\n [BsonId] public long Id { get; set; }\n [BsonElement(\"name\")] public string Name { get; set; } = null!;\n [BsonElement(\"room_type\")] public string RoomType { get; set; } = null!;\n [BsonElement(\"property_type\")] public string PropertyType { get; set; } = null!;\n [BsonElement(\"amenities\")] public List<string> Amenities { get; set; }\n [BsonElement(\"host\")] [BsonIgnoreIfNull] public Host Host { get; set; }\n [BsonElement(\"reviews\")][BsonIgnoreIfNull] public List<Review> Reviews { get; set; }\n }\n}\n",
"text": "OK, lots of embarrassment for asking this question now.\nBelow is a sample Mapping Class for the listingsAndReviews collection:My issue was a error message being thrown when just trying to work with the host object and in newbie fashion had [BsonElement(“host_name”)] public long HostName { get; set; } instead of string.\nAfter fixing that it was all “as expected” to arrive at the above sample",
"username": "Ben_Hunsberger"
}
] | BsonElement mapping for Objects and Arrays of Objects | 2023-01-12T21:49:42.874Z | BsonElement mapping for Objects and Arrays of Objects | 761 |
null | [
"charts"
] | [
{
"code": "<script src=\"https://unpkg.com/@mongodb-js/charts-embed-dom\"></script>\n\n<script>\n const ChartsEmbedSDK = window.ChartsEmbedSDK;\n\n const sdk = new ChartsEmbedSDK({\n baseUrl: \"https://charts.mongodb.com/charts-webscraping-ciaso-ercot-wnjhz\",\n });\n\n const chart = sdk.createChart({\n chartId: \"62d7224c-79ab-41ca-83f6-690f4ab86869\", \n });\n\n chart.render(document.getElementById('chart'));\n\n</script>\n</body>\n<script type=module> /*did that to avoid error cannot import outside a module*/\n\n import ChartsEmbedSDK from \"@mongodb-js/charts-embed-dom\"; //error: relative references must begin with /, ./, or ../\n\n //import ChartsEmbedSDK from \"https://unpkg.com/@mongodb-js/charts-embed-dom\" // Ambiguous indirect export: default \n\n // import ChartsEmbedSDK from \"/unpkg.com/@mongodb-js/charts-embed-dom\" //error not found.\n\n //import 'https://unpkg.com/@mongodb-js/charts-embed-dom'; // TypeError: t is null (same as other method)\n\n\n\n const sdk = new ChartsEmbedSDK({\n baseUrl: \"https://charts.mongodb.com/charts-webscraping-ciaso-ercot-wnjhz\", \n });\n\n const chart = sdk.createChart({\n chartId: \"62d7224c-79ab-41ca-83f6-690f4ab86869\", \n height: \"700px\",\n // Additional options go here\n });\n\n chart.render(document.getElementById(\"chart\"));\n </script>\n",
"text": "METHOD 1I am trying to embed a chart from MongoDB using the following code, which was adapted from documentation from MongoDB and NPM.The error I’m getting is “TypeError t is null”a picture of the errorNear as I can tell that might mean that whatever is supposed to be imported from https://unpkg.com/@mongodb-js/charts-embed-dom is not coming through so the sdk and the chart aren’t getting created properly. Hence why the chart comes up null when it trys to getElementById.METHOD 2I also tried a different method. What you see below is directly copied from Mongo’s documentation. I got an error that “relative references must begin with /, ./, or …/”.You can see I also tried a few other things (commented out).I think that for method 2 a possible reason it’s not working is that I wasn’t able to install the @mongodb-js/charts-embed-dom package correctly. When I tried to install using npm this is what I saw: screenshot of error with npmI did some looking into this problem but was never able to resolve it.Overall it seems like I’m not able to properly import the charts-embed-dom. It seems to me like method 1 only has one problem to fix, whereas method 2 has possibly 2 or more layers of problems, so I’m hoping there is a relatively simple solution to method 1.I know another solution would be to use an iframe. I’ve gotten that to work, but it just doesn’t seem to be versatile enough to do what I need (drop down boxes, dynamic filtering)",
"username": "Elizabeth_Leeser"
},
{
"code": "type=module<script src=\"https://unpkg.com/@mongodb-js/charts-embed-dom\"></script>\n\n<script type=module>\n const ChartsEmbedSDK = window.ChartsEmbedSDK;\n\n const sdk = new ChartsEmbedSDK({\n baseUrl: \"https://charts.mongodb.com/charts-webscraping-ciaso-ercot-wnjhz\",\n });\n\n const chart = sdk.createChart({\n chartId: \"62d7224c-79ab-41ca-83f6-690f4ab86869\", \n });\n\n chart.render(document.getElementById('chart'));\n\n</script>\n</body>\n",
"text": "Hi,From your initial error, you have 2 scripts, only add type=module to the second one.",
"username": "Najm"
},
{
"code": "",
"text": "Thank you! I made that change and unfortunately still get the same error. “Uncaught (in promise) TypeError: t is null”, referring to the line “chart.render(document.getElementById(‘chart’));”",
"username": "Elizabeth_Leeser"
},
{
"code": "<html>\n <script src=\"https://unpkg.com/@mongodb-js/charts-embed-dom\"></script>\n <body>\n <div id=\"chart\"></div>\n </body>\n\n <script>\n const ChartsEmbedSDK = window.ChartsEmbedSDK;\n\n const sdk = new ChartsEmbedSDK({\n baseUrl:\n \"https://charts.mongodb.com/charts-webscraping-ciaso-ercot-wnjhz\",\n });\n\n const chart = sdk.createChart({\n chartId: \"62d7224c-79ab-41ca-83f6-690f4ab86869\",\n });\n\n chart.render(document.getElementById(\"chart\"));\n </script>\n</html>\n\n",
"text": "Hi Elizabeth,Thanks for raising the issue.I have tried with code below for method 1 and was able to render the embed chart without an error. I loaded the html in Google Chrome browser. Would be good if you could give this a try. If this also works for you then it would be a good starting point to figure out what issue you are having. If the html code does not contain sensitive information and is sharable here that might help if you could share the full code here, and also what browser did you use.",
"username": "James_Wang1"
},
{
"code": "<!DOCTYPE html>\n<html>\n<script src=\"https://unpkg.com/@mongodb-js/charts-embed-dom\"></script>\n\n<head>\n <title>LMP Test Data</title>\n <meta name=\"csrf-token\" content=\"{{ csrf_token() }}\">\n</head>\n\n<body>\n <div id=\"chart\"></div>\n</body>\n\n<script>\n const ChartsEmbedSDK = window.ChartsEmbedSDK;\n\n const sdk = new ChartsEmbedSDK({\n baseUrl: \"https://charts.mongodb.com/charts-webscraping-ciaso-ercot-wnjhz\",\n });\n\n const chart = sdk.createChart({\n chartId: \"62d7224c-79ab-41ca-83f6-690f4ab86869\", \n //height \"7000px\"\n // Additional options go here\n });\n\n chart.render(document.getElementById('chart'));\n </script>\n\n</html>\n",
"text": "Hello,Thank you for your response! This definitely helped. I am now seeing different problems though. On the webpage it looks like the chart is loading, but only half visible and then when it stops loading I can’t see it. It also says there is a breakpoint at the line const ChartsEmbedSDK = window.ChartsEmbedSDK. And perhaps there are issues with cookies? A picture of what I’m seeing is attached.\n\nmaybe chart1576×806 56.9 KB\nI am using the Laravel framework and my code is a part of that, but here is everything in my html file:",
"username": "Elizabeth_Leeser"
},
{
"code": "",
"text": "Hi Elizabeth,The breakpoint is from the chrome dev tool, I can see in the screenshot line 13 is highlighted by with a blue arrow, you can simply click on the blue arrow to remove the breakpoint: Pause your code with breakpoints - Chrome DevelopersI would try:\nScreen Shot 2023-01-17 at 10.07.11 pm1886×588 96.8 KB\n",
"username": "James_Wang1"
},
{
"code": "",
"text": "I ended up also having to specify the height of the chart, and that worked! Thank you so much!",
"username": "Elizabeth_Leeser"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Problems with using @mongodb-js/charts-embed-dom to embed a chart from MongoDB | 2023-01-12T01:10:40.386Z | Problems with using @mongodb-js/charts-embed-dom to embed a chart from MongoDB | 1,932 |
null | [
"atlas-search"
] | [
{
"code": "",
"text": "Hello,\nDoes $search support querying for array size?e.g. countries is an array field in collection products - retrieve all retrieve all products that are sold in more than 3 countries.Thanks!",
"username": "Prasad_Kini"
},
{
"code": "",
"text": "Hi @Prasad_Kini and welcome to the MongoDB community forum!!Does $search support querying for array size?In today’s date, this is not possible to do it using the $search operator. However, to query the array size, you can use the $size operator in the stage after $search stage of the pipline.I also see a feedback suggestion at our feedback engine. Linking it here for the future reference and upvoting for the feature request.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thanks Aasawari. Running $size after $search takes a big hit on performance and is unusable.I was the one to post the feedback suggestion a few days back on this ",
"username": "Prasad_Kini"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas Search Array Size | 2023-01-15T17:13:50.411Z | Atlas Search Array Size | 856 |
null | [] | [
{
"code": "",
"text": "Hey people,It’s really an honour and all the more exciting to be selected as the lead of MUG Patna , India.I belong from the city of Bhagalpur and currently in my 3rd year of studies at NIT Patna. I have always been into community driven clubs and events because I feel that this is what bring the best out of us as a developer, i.e, to learn from your peers and get to know the latest trends in the industry.Mongo DB being one such widely used technology that has changed the way we interact with data and secure out storage. Owing to it’s flexibility I feel that it would be a great tool to help aspiring developers accelarate their career and I will always ensure that this is the goal towards which MUG Patna works.",
"username": "Shivam_Jha1"
},
{
"code": "",
"text": "Hey Shivam,\nWelcome to the MongoDB Community!It’s a pleasure to have you here and we look forward to your contributions to the community!",
"username": "Harshit"
},
{
"code": "",
"text": "Hi Shivam,Welcome to the MongoDB Community!!We’re excited to have you here and congratulations on becoming the Patna MUG leader. We wish you a fantastic year with plenty of success and growth. Cheers,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Many Congratulations Shivam!!",
"username": "Chandan_Jha"
}
] | Hello everyone, it's Shivam from MUG Patna | 2023-01-15T16:15:56.744Z | Hello everyone, it’s Shivam from MUG Patna | 1,266 |
null | [
"node-js"
] | [
{
"code": "",
"text": "TypeError: Object prototype may only be an Object or null: [email protected] is simply running the nodejs quickstart. but replacing the uri with localhost uri, i.e “http://localhost:27017”. This has nothing to do with ipv6.",
"username": "tyson_N_A"
},
{
"code": "",
"text": "Hi, @tyson_N_A . that example you tried to use dates back to 2 years and has this in its README file:“this branch uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4”Using old examples with new drivers is always expected to give rise to problems. And since newer drivers use TypeScript, you should take extra care. check other version references and API documents here: MongoDB Node.js Driver",
"username": "Yilmaz_Durmaz"
},
{
"code": "mongodb://mongodb+srv://uri = \"mongodb://localhost:27017\"ECONNREFUSED ::1:27017hosts127.0.0.1 localhost\n::1 localhost\n",
"text": "hey again @tyson_N_A , I went ahead and run the code from that quickstart. Seems the code in it is still applicable and runs fine. So I urge you to check how you used that example. I am surprised you didn’t get “Invalid scheme” error for your URI as the driver gives error and asks for it to be of the form mongodb:// or mongodb+srv:// (for the localhost it is uri = \"mongodb://localhost:27017\").when it is an IPv4 versus IPv6 issue, the error we get is very distinctive compared to other errors: ECONNREFUSED ::1:27017. and there are two things to cause this:",
"username": "Yilmaz_Durmaz"
}
] | TypeError: Object prototype may only be an Object or null: undefined | 2022-12-07T05:52:56.453Z | TypeError: Object prototype may only be an Object or null: undefined | 2,669 |
[
"connecting",
"mongodb-shell",
"configuration"
] | [
{
"code": "",
"text": "I’m creating a Mongo peering connection using terraform and want to automate the process in CICD using GitHub Actions. Since the GitHub runners don’t have a specific IP Address, the approach I went with was to use a proxy VM where all the network traffic from the GitHub runner in the CICD is passed through it and whitelisted its IP address on Mongo Atlas. However, when I run the CICD pipeline the pipeline fails with the title error but when I ssh into the proxy VM and connect to Mongo Atlas using mongosh the connection is established. Any ideas what the issue could be? or a different approach connecting to Mongo Atlas in a CICD pipeline?\nConnection Error1920×1101 135 KB\n",
"username": "Kevin_Karobia"
},
{
"code": "",
"text": "Hi @Kevin_Karobia - Welcome to the community.I’m hoping maybe details on this post will help narrow down or even resolve the issue. However, as per the Required for Select Resources: API Resource Request Access Lists documentation:tlas allows your API key to make requests from any address on the internet. Atlas has some exceptions to this rule. These exceptions limit which resources an API key can use without location-based limits defined in an API access list.\nTo add these location-based limits to your API key, create an API access list. This list limits the internet addresses from which a specific API key can make API requests.\nAny API keys with an API access list require all API requests to come from an IP address on that list. Your API access list must include entries for all clients that use the API.However, when I run the CICD pipeline the pipeline fails with the title error but when I ssh into the proxy VM and connect to Mongo Atlas using mongosh the connection is established.The API access list associated with the API key is different from the Atlas Project Network Access list. I.e. You can still connect to the Atlas instance(s) within a project from a IP that is on the Atlas Project Network Access List and not on the IP Acess List associated with the API Key(s) for that project.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Required for Select Resources: API Resource Request Access ListsAdding the IP address to the API access list solved the issue for me.Thanks",
"username": "Kevin_Karobia"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error 403 IP_ADDRESS_NOT_ON_ACCESS_LIST | 2023-01-17T09:18:59.134Z | Error 403 IP_ADDRESS_NOT_ON_ACCESS_LIST | 2,320 |
|
null | [
"queries"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF609C4FFF3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/index/index_access_method.cpp\",\"line\":383,\"s\":\"mongo::SortedDataIndexAccessMethod::newCursor\",\"s+\":\"13\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF609B16873\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/index_scan.cpp\",\"line\":93,\"s\":\"mongo::IndexScan::initIndexScan\",\"s+\":\"43\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF609B15D0A\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/index_scan.cpp\",\"line\":146,\"s\":\"mongo::IndexScan::doWork\",\"s+\":\"2EA\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF609ABF9CB\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/plan_stage.h\",\"line\":207,\"s\":\"mongo::PlanStage::work\",\"s+\":\"8B\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF609B1CF60\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/fetch.cpp\",\"line\":91,\"s\":\"mongo::FetchStage::doWork\",\"s+\":\"70\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF609ABF9CB\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/plan_stage.h\",\"line\":207,\"s\":\"mongo::PlanStage::work\",\"s+\":\"8B\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF609B1F286\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/delete_stage.cpp\",\"line\":128,\"s\":\"mongo::DeleteStage::doWork\",\"s+\":\"116\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF609ABF9CB\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/plan_stage.h\",\"line\":207,\"s\":\"mongo::PlanStage::work\",\"s+\":\"8B\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF609ABDC15\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/query/plan_executor_impl.cpp\",\"line\":369,\"s\":\"mongo::PlanExecutorImpl::_getNextImpl\",\"s+\":\"2B5\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF609ABD8BF\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/query/plan_executor_impl.cpp\",\"line\":489,\"s\":\"mongo::PlanExecutorImpl::_executePlan\",\"s+\":\"4F\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, 
\"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF609ABE70E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/query/plan_executor_impl.cpp\",\"line\":559,\"s\":\"mongo::PlanExecutorImpl::executeDelete\",\"s+\":\"E\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF608EA14A5\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ttl.cpp\",\"line\":476,\"s\":\"mongo::TTLMonitor::deleteExpiredWithIndex\",\"s+\":\"DA5\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF608E9FCA6\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ttl.cpp\",\"line\":346,\"s\":\"mongo::TTLMonitor::deleteExpired\",\"s+\":\"996\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF608EA1E35\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ttl.cpp\",\"line\":231,\"s\":\"mongo::TTLMonitor::doTTLPass\",\"s+\":\"375\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF608EA3A56\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ttl.cpp\",\"line\":155,\"s\":\"mongo::TTLMonitor::run\",\"s+\":\"846\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF60AB57F01\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/background.cpp\",\"line\":163,\"s\":\"mongo::BackgroundJob::jobBody\",\"s+\":\"181\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF60AB56D4C\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files/Microsoft Visual Studio/2022/Professional/VC/Tools/MSVC/14.31.31103/include/thread\",\"line\":55,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_0d448e6a4a71ef01c6d7e83cfb041340> >,0>\",\"s+\":\"2C\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFCFEE16B4C\",\"module\":\"ucrtbase.dll\",\"s\":\"recalloc\",\"s+\":\"5C\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"TTLMonitor\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFD00054ED0\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"10\"}}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.679+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23131, \"ctx\":\"TTLMonitor\",\"msg\":\"Failed to open minidump file\",\"attr\":{\"dumpName\":\"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\6.0\\\\bin\\\\mongod.2023-01-13T03-26-17.mdmp\",\"error\":\"\\ufffdܾ\\ufffd\\ufffd\\ufffd\\ufffdʡ\\ufffd\"}}\n{\"t\":{\"$date\":\"2023-01-13T11:26:17.680+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23137, \"ctx\":\"TTLMonitor\",\"msg\":\"*** immediate exit due to unhandled exception\"}\n\n",
"text": "i’m running mongodb server in Windows Server 2022 Standeard.\nThe mongodb server will be shutdown after few minutes.\ni can’t find any problem,please help me !",
"username": "Evan_Pang"
},
{
"code": "",
"text": "How are you starting your mongod?\nWhy do many mongod.exes in the log?\nIf you installed it as service it should be up & running\nCheck mongod daemon service\nin your taskmanager",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "i try to start mongodb service so many time.\nuse “net start Mongodb”",
"username": "Evan_Pang"
}
] | I'm startting the mongodb server in windows server 2022 standeard.The mongodb server will be shutdown after few minutes | 2023-01-13T03:45:19.711Z | I’m startting the mongodb server in windows server 2022 standeard.The mongodb server will be shutdown after few minutes | 1,029 |
null | [
"mongodb-shell",
"atlas-cluster"
] | [
{
"code": "",
"text": "I set up connection between Atlas MongoDB and our VPC using AWS Peering. Configuration went smoothly but I do not know how to connect to my cluster from EC2 in our subnet. I do not know how to find IP MongoDB cluster ? When I try generate snippet on Mongodb page it generates something like this:mongosh “mongodb+srv://something.mongodb.net/myFirstDatabase” --apiVersion 1 --username ouruserI have the impression that it should provide an private IP but which one?",
"username": "44d9553e6cb393d2d61a92e9df8d493"
},
{
"code": "ping",
"text": "Hi @44d9553e6cb393d2d61a92e9df8d493,When I try generate snippet on Mongodb page it generates something like this:mongosh “mongodb+srv://something.mongodb.net/myFirstDatabase” --apiVersion 1 --username ouruserAs per the DNS configuration documentation specific for AWS:DNS resolves the cluster’s hostnames to their public IP address rather than their internal IP address if:One method you can follow to try verify if the hostnames are resolving to a private IP is to perform the following from a client within a subnet associated with the VPC peering:At step 2, the hostname(s) should resolve to a private IP assuming the VPC peering and DNS configuration are both set up appropriately. (AWS) Clients connecting from outside the VPC peering connection can use the same connection string but will connect over the public internet (assuming their IP is on the Network Access List).If you’re still having trouble with VPC peering setup, you can try contacting the in-app Atlas chat support however this may only be useful if you’re having issues setting the VPC peering connection up from the Atlas end. There can be some configurations / cases where the DNS configuration on the AWS’s client side (some mentioned above) which cause the SRV record to resolve to public IP addresses rather than internal IP addresses.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "It is appeared that we forgot add rule for peering in route table attached to subnet (by accident we added only rule to default route table for VPC )",
"username": "44d9553e6cb393d2d61a92e9df8d493"
},
{
"code": "",
"text": "Awesome sounds like you got it sorted from that ",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Problem with Atlas MongoDB and AWS Peering | 2023-01-13T19:10:11.171Z | Problem with Atlas MongoDB and AWS Peering | 2,230 |
null | [] | [
{
"code": "",
"text": "Hi,We are using WC:majority with our applications, do we need to use read-concern majority as well.\nHow read will work in this case.",
"username": "Aayushi_Mangal"
},
{
"code": "\"Read concern majority\"\"read concern majority\"\"WC: majority.\"",
"text": "Hi @Aayushi_Mangal,We are using WC:majority with our applications, do we need to use read-concern majority as well.It depends on the specific requirements of your application.\"Read concern majority\" is a read concern that ensures that a read operation will only return data that has been written to a majority of replica set members. By default, MongoDB reads from a single replica set member, so using “read concern majority” will result in slower read performance, but will ensure that the data returned is consistent across the majority of replica set members.If consistency is important for your application, it may be necessary to use \"read concern majority\" in addition to \"WC: majority.\" However, if read performance is more important, you may choose not to use “read concern majority” and instead rely on the built-in replication in MongoDB to eventually propagate the data to all replica set members.Visit docs to read more about it:Please let us know if you have any follow-up questions on this, and share the use case and the goal.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thank @Kushagra_Kesav , I had read somewhere that the secondary reads provide the latest Oplog, but the fact is it get reads from the nearest/fastest (may or may not get stale)So to get latest document we need to go for write Concern Majority along with read concern level majority, is that correct?",
"username": "Aayushi_Mangal"
},
{
"code": "",
"text": "Hi @Aayushi_Mangal,I had read somewhere that the secondary reads provide the latest Oplog, but the fact is it gets reads from the nearest/fastest (may or may not get stale)It will be more helpful if we understand your use cases and it’s not recommended to read oplogs directly, primary, or secondary since it’s for MongoDB internal use. If the app needs to follow a collection/database/server changes, then using change-stream is the supported approach.So to get the latest document we need to go for write Concern Majority along with read concern level majority, is that correct?Not 100% correct, since there are nuances to this. Write majority implies that the document has propagated to the majority of voting nodes. Read majority means return the latest document that is majority committed.Those two documents may or may not be the same document due to the nature of a distributed system. Again what is the use case? If it’s read-your-writes then you need causal consistency. Write majority + read majority is not a direct replacement for that.Read more about Read Concern “Majority”Please let us know if you have any follow-up questions.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Majority write concern with respect to read concern | 2023-01-13T08:10:17.106Z | Majority write concern with respect to read concern | 874 |
null | [
"data-modeling"
] | [
{
"code": "const handler = async (req, res) => {\n try {\n const { videoId } = req.query;\n async function getComments(cursor, commentId) {\n let returnMsg = 'dog'\n if(cursor) {\n return fetch)\n .then(data => data.json())\n .then(async (data) => {\n\n const mapped = data[0].data.video.comments.edges.map((comment) => {\n let msg = \"\";\n for (let i = 0; i < comment.node.message.fragments.length; i++) {\n msg += comment.node.message.fragments[i].text;\n }\n return {\n contentOffsetSeconds: comment.node.contentOffsetSeconds,\n msg: msg,\n };\n });\n \n const addTag = await prisma.Video.update({\n where: {\n id: commentId,\n },\n data: {\n comments: {\n push: mapped\n },\n },\n })\n const hasNextPage = data[0].data.video.comments.pageInfo.hasNextPage\n const second = data[0].data.video.comments.edges[data[0].data.video.comments.edges.length - 1].contentOffsetSeconds\n if(hasNextPage || second < 500) {\n getComments(cursor, commentId)\n } else {\n return res.status(200).send({status: 'complete'})\n }\n })\n } else {\n return fetch()\n .then((data) => data.json())\n .then(async (data) => {\n const mapped = data[0].data.video.comments.edges.map((comment) => {\n let msg = \"\";\n for (let i = 0; i < comment.node.message.fragments.length; i++) {\n msg += comment.node.message.fragments[i].text;\n }\n return {\n contentOffsetSeconds: comment.node.contentOffsetSeconds,\n msg: msg,\n };\n });\n const video = {\n videoId: +videoId,\n comments: mapped,\n };\n const id = await prisma.Video.findMany({\n where: {\n videoId: +videoId\n }\n })\n \n if(id.length < 1){\n \n const comment = await prisma.Video.create({ data: video })\n \n getComments( data[0].data.video.comments.edges[data[0].data.video.comments.edges.length - 1].cursor, comment.id )\n }\n \n returnMsg = 'save'\n });\n }\n return returnMsg\n }\n console.log(await getComments())\n\n } catch (error) {\n return res.status(500).json({ error: error.message });\n }\n};\n\nexport default handler;\n",
"text": "I am creating a web app using next 13. A component that has a useEffect hook sends a request to my api. This api has a function that fetches data from a 3rd party api. I recursively run this fetch function until the data I receive says it does not have another page of data. Does anyone have any tips on how I should be doing this. I am going to be saving hundreds of thousands of comments to each Video.A big issue im running into is how I will tell my user it is currently saving comments and how I can make a progress bar for that. If I could send a response everytime I am about to recursively fetch the next list of comments I could send what second in the video the last comment I fetched was and create a progress bar.I also am having trouble sending a response once my else statment activates meaning their are no more comments. this if/else is inside of the fetch when i try to use res.send it gives error. I have to stick res.send out side of my function and it sends a response the second the api gets hit. I want it to send a response once the api has been hit and the getComments function is complete which will take minutes.",
"username": "Matthew_Wardlow"
},
{
"code": "const handler = async (req, res) => {\n try {\n const { videoId } = req.query;\n async function getComments(cursor, commentId) {\n if(cursor) {\n return fetch(\n .then(data => data.json())\n .then(async (data) => {\n const hasNextPage = data[0].data.video.comments.pageInfo.hasNextPage\n const second = data[0].data.video.comments.edges[data[0].data.video.comments.edges.length - 1].node.contentOffsetSeconds\n console.log(second)\n const cursor = data[0].data.video.comments.edges[data[0].data.video.comments.edges.length - 1].cursor\n const mapped = data[0].data.video.comments.edges.map((comment) => {\n let msg = \"\";\n for (let i = 0; i < comment.node.message.fragments.length; i++) {\n msg += comment.node.message.fragments[i].text;\n }\n return {\n contentOffsetSeconds: comment.node.contentOffsetSeconds,\n msg: msg,\n };\n });\n const entry = {\n contentOffsetSeconds: second,\n messages: mapped\n }\n const addTag = await prisma.Video.update({\n where: {\n id: commentId,\n },\n data: {\n comments: {\n push: entry\n },\n },\n })\n \n if(hasNextPage && second < 5000) {\n getComments(cursor, commentId) \n res.status(200).send({status: second})\n } else {\n return res.status(200).send({status: 'done'})\n }\n })\n } else {\n return fetch()\n .then((data) => data.json())\n .then(async (data) => {\n const second = data[0].data.video.comments.edges[data[0].data.video.comments.edges.length - 1].node.contentOffsetSeconds\n const mapped = data[0].data.video.comments.edges.map((comment) => {\n let msg = \"\";\n for (let i = 0; i < comment.node.message.fragments.length; i++) {\n msg += comment.node.message.fragments[i].text;\n }\n return {\n contentOffsetSeconds: comment.node.contentOffsetSeconds,\n msg: msg,\n };\n });\n \n const entry = {\n contentOffsetSeconds: second,\n messages: mapped\n }\n const video = {\n videoId: +videoId,\n comments: entry,\n };\n const id = await prisma.Video.findMany({\n where: {\n videoId: +videoId\n }\n })\n \n if(id.length < 1){\n const comment = await prisma.Video.create({ data: video })\n return getComments( data[0].data.video.comments.edges[data[0].data.video.comments.edges.length - 1].cursor, comment.id )\n } else{\n return res.status(200).send({status: 'saved'})\n }\n });\n }\n }\n getComments()\n \n } catch (error) {\n return res.status(500).json({ error: error.message });\n }\n};\n\nexport default handler;\nmodel Video {\n id String @id @default(auto()) @map(\"_id\") @db.ObjectId\n videoId Int @unique\n comments Comment[]\n}\n\ntype Comment {\n contentOffsetSeconds Int\n messages Message[]\n}\n\ntype Message {\n contentOffsetSeconds Int\n msg String\n}\n",
"text": "updated way of saving my messages. I basically have a array filled with arrays of 60 messages. before my messages array would have 100,000 messages in it. now it will have 1666 arrays filled with 60 messages each. Is this better or the same? my end goal is looking for special messages out of hundreds of thousands of them.I figured out how to send response if saved or if completed downloading. the issue is sending multiple responses because the headers dont exist because only one is requested.",
"username": "Matthew_Wardlow"
}
] | Best way to save hundreds of thousands of comments | 2023-01-18T04:09:36.982Z | Best way to save hundreds of thousands of comments | 793 |
null | [
"aggregation",
"golang"
] | [
{
"code": "func (in *Instance) ListProjects(ctx context.Context, orgID string, filters []*Filter, offset int32, limit int32) ([]*model.Project, int, error) {\n\tin.logger.Info(\"list_projects called with project_name\")\n\n\tfilter := ParseFilters(filters, in.config.Filters.AuthorizedFields)\n\toff := int64(offset)\n\tlim := int64(limit)\n\n\tpipeline := mongo.Pipeline{\n\t\t{{Key: \"$match\", Value: filter}},\n\t\t{{Key: \"$facet\", Value: bson.M{\n\t\t\t\"total\": []bson.M{{\"$count\": \"total\"}},\n\t\t\t\"results\": []bson.M{\n\t\t\t\t{\"$skip\": off},\n\t\t\t\t{\"$limit\": lim},\n\t\t\t},\n\t\t}}},\n\t}\n\tcur, err := in.db.\n\t\tDatabase(in.config.Database.Name).\n\t\tCollection(\"projects\").\n\t\tAggregate(ctx, pipeline)\n\tif err != nil {\n\t\treturn nil, 0, err\n\t}\n\tdefer cur.Close(context.TODO())\n\n\tprojects := make([]*model.Project, 0)\n\ttotal := 0\n\tfor cur.Next(context.TODO()) {\n\t var result bson.M\n\t err := cur.Decode(&result)\n\t if err != nil {\n\t in.logger.Error(err)\n\t continue\n\t }\n\t for _, v := range result {\n\t switch vv := v.(type) {\n\t case bson.M:\n\t for key, val := range vv {\n\t if key == \"total\" {\n\t total = int(val.(float64))\n\t } else if key == \"results\" {\n\t results := val.([]interface{})\n\t for _, item := range results {\n\t data := item.(bson.M)\n\t dataD, err := bson.Marshal(data)\n\t if err != nil {\n\t in.logger.Error(err)\n\t continue\n\t }\n\t var project model.Project\n\t err = bson.Unmarshal(dataD, &project)\n\t if err != nil {\n\t in.logger.Error(err)\n\t continue\n\t }\n\t projects = append(projects, &project)\n\t }\n\t }\n\t }\n\t }\n\t }\n\t}\n\treturn projects, total, nil\n}\n",
"text": "I am working in golang and the mongo driver. I am having a tough time understanding the proper way to go about the followingI find that my below code seems a bit hacked. Is this the expected path to work with mongodb aggreegates?",
"username": "Zachary_Schulze"
},
{
"code": "",
"text": "Hi @Zachary_Schulze ,What is the specific issues you experience the code seems to use facet for one key. Fir results the other for counts.Overall seems correctThanks\nPavel",
"username": "Pavel_Duchovny"
}
] | Mongo Aggregate with total and results, converting to struct | 2023-01-17T21:28:42.578Z | Mongo Aggregate with total and results, converting to struct | 1,403 |
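One thing worth noting in addition to the reply above: $facet returns a single document whose top-level fields are arrays, which in the Go driver decode as primitive.A rather than bson.M, so the type switch in the posted code may never take the bson.M branch. Decoding into a typed struct (for example one with a Total slice and a Results slice of projects) avoids the manual unwrapping. A hedged mongosh sketch of the output shape - collection name and paging values are placeholders:

```javascript
// Hedged sketch (mongosh): shape of the $facet output the Go code has to decode.
db.projects.aggregate([
  { $match: {} },                                  // placeholder for the parsed filter
  { $facet: {
      total:   [ { $count: "total" } ],            // -> [ { total: <n> } ]
      results: [ { $skip: 0 }, { $limit: 20 } ]    // -> [ { ...project }, ... ]
  } }
])
// Returns exactly one document, e.g.:
// { total: [ { total: 57 } ], results: [ { _id: ..., name: ... }, ... ] }
```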
null | [
"aggregation"
] | [
{
"code": "{\n \"_id\" : 1.0,\n \"data\" : [10.42, 40.07, 98.565, 69.8]\n}\ndb.test_data.aggregate(\n [\n { \"$addFields\" : {\n \"sum1\" : { \"$sum\" : \"$data\" },\n \"sum2\" : {\n \"$reduce\" : {\n \"input\" : \"$data\",\n \"initialValue\" : 0.0,\n \"in\" : { \"$add\" : [\"$$value\",\"$$this\"]}\n }\n }\n }\n }\n ]\n);\n{\n \"_id\" : 1.0,\n \"data\" : [\n 10.42,\n 40.07,\n 98.565,\n 69.8\n ],\n \"sum1\" : 218.855,\n \"sum2\" : 218.85500000000002\n}\n",
"text": "Collection has only one document :Make and run such aggregate:Results:It is clear that the internal representation of floating numbers can result in sum2,\nbut why are the results different at all?Comments?",
"username": "Vadim_Shumilin"
},
{
"code": "$sum$adddatatestdb> db.sumcoll.aggregate({\n '$addFields': {\n sumField: { '$sum': [ 10.42, 40.07, 98.565, 69.8 ] },\n addField: { '$add': [ 10.42, 40.07, 98.565, 69.8 ] }\n }\n})\n[\n {\n _id: ObjectId(\"63c76ab383424604d536bf3b\"),\n data: [ 10.42, 40.07, 98.565, 69.8 ],\n sumField: 218.855,\n addField: 218.855\n }\n]\n",
"text": "Interesting find @Vadim_Shumilin!I believe what you have seen in your results is a side effect of binary rounding errors. I believe this is a universal behaviour, so it’s not limited to MongoDB. In this particular case I believe it’s a direct result of two things:There are more details regarding the ordering of the sum in the stack overflow post for example. For reference as well, I did the $sum and $add as seperate fields for the same data array which both result in the same value:Hope the above helps or sheds some light on what has happened.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Strange different results in aggregate summing through $sum vs $reduce | 2023-01-13T13:49:51.713Z | Strange different results in aggregate summing through $sum vs $reduce | 695 |
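The ordering effect described above can be reproduced outside MongoDB with plain JavaScript numbers, since both use IEEE-754 doubles for these values - a small sketch:

```javascript
// Left-to-right fold, i.e. what $reduce with initialValue 0.0 computes.
const data = [10.42, 40.07, 98.565, 69.8];
const leftFold = data.reduce((acc, x) => acc + x, 0.0);
console.log(leftFold); // expected to match the $reduce result: 218.85500000000002

// The exact decimal sum is 218.855. Whether a given summation order rounds to
// 218.855 or 218.85500000000002 depends on the intermediate roundings, which is
// why $sum and $reduce can disagree in the last bits.
```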
null | [
"api"
] | [
{
"code": "",
"text": "It is possible from an AWS account connected to MongoDB atlas via PrivateLink to get the list of names of the clusters available over the PrivateLink?",
"username": "Kristian_Whittick"
},
{
"code": "",
"text": "Hi @Kristian_Whittick - Welcome to the community It is possible from an AWS account connected to MongoDB atlas via PrivateLink to get the list of names of the clusters available over the PrivateLink?I’m not entirely sure I understand this question - Do you mean you want a list of all clusters (for e.g. “Prod-cluster” (M20), “Dev-Cluster” (M10), etc) that are associated with a privatelink connection?Or are you after the hostnames of the nodes within the cluster? If the latter, then one way you can obtain the hostnames in the metrics page.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "After more checking, what I need is the sub-domain not the cluster name.\nE.g. I need “crt7mqe” from the URL.\nmongodb+srv://cluster0.crt7mqe.mongodb.net/myFirstDatabaseI need to get this information via API from the external AWS account that is connected via the PrivateLink.\nSo, I think I need to use the Atlas API key, which according to the documentation does not go via the PrivateLink, but only via the internet.Therefore, I think I have answered my own question, “It is not possible!”, unless there is a way to get DNS information from the PrivateLink endpoint.",
"username": "Kristian_Whittick"
},
{
"code": "",
"text": "Hi Kristian,I need to get this information via API from the external AWS account that is connected via the PrivateLink.\nSo, I think I need to use the Atlas API key, which according to the documentation does not go via the PrivateLink, but only via the internet.Therefore, I think I have answered my own question, “It is not possible!”, unless there is a way to get DNS information from the PrivateLink endpoint.Yes - you are correct as you can access the Atlas Administration API servers through the public internet only. The Atlas Administration API is not available over connections that use network peering or private endpoints.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Can you get the cluster names through AWS Private Link? | 2023-01-12T17:01:19.196Z | Can you get the cluster names through AWS Private Link? | 1,356 |
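A small follow-up sketch for the second half of the problem: once a cluster's SRV connection string has been obtained (for example from the Atlas Administration API's clusters listing, which is reachable only over the public internet; the exact response field, such as srvAddress or connectionStrings.standardSrv, depends on the API version and is an assumption here), extracting the per-project subdomain is plain string handling. The example string is the one from the thread.

```javascript
// Hypothetical helper: extract the per-project subdomain ("crt7mqe") from an
// SRV connection string. Plain string handling, no driver required.
function clusterSubdomain(srvUri) {
  const host = srvUri.split("//")[1].split("/")[0]; // "cluster0.crt7mqe.mongodb.net"
  return host.split(".")[1];                        // "crt7mqe"
}

console.log(clusterSubdomain("mongodb+srv://cluster0.crt7mqe.mongodb.net/myFirstDatabase"));
// -> "crt7mqe"
```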
[
"compass"
] | [
{
"code": "",
"text": "Hi All,I am unable to use MongoDb Compass on MacOS Monterey With Following Error. I tried every possible solution to resolve it.reinstallation, Restart, Open Anyway Option, Control + click and open from FInder. still MAc is not allowing to launch it and says“This software needs to be updated. Contact the developer for more information.”\nScreen Shot 2023-01-17 at 4.11.33 PM488×604 54.9 KB\nERROR: “MongoDB Compass” can’t be opened because Apple cannot check it for malicious software.",
"username": "Arjunsingh_Rathod"
},
{
"code": "System PreferencesSecurity & PrivacyAllow apps downloaded from:",
"text": "Hi Arjun,Please navigate to System Preferences> Security & Privacy > Allow apps downloaded from:\nAllow the same and you will be able to install.I hope it helps!Regards,\nJanpreet",
"username": "Janpreet_Singh"
}
] | Unable to launch MongoDB On MacOS Monterey | Error MongoDB Compass” can’t be opened because Apple cannot check it for malicious software | 2023-01-17T22:26:05.694Z | Unable to launch MongoDB On MacOS Monterey | Error MongoDB Compass” can’t be opened because Apple cannot check it for malicious software | 3,425 |
null | [
"connecting",
"mongodb-shell",
"containers"
] | [
{
"code": "",
"text": "Inside AKS Cluster:kubectl get pods\nNAME READY STATUS RESTARTS AGE\nmongodb-5b8d4fc596-XXXXXX 2/1 Running 0 126mInstalled by:helm upgrade mongodb bitnami/mongodb \n–install \n–create-namespace \n–namespace mongodb \\export MONGODB_ROOT_PASSWORD=$(kubectl get secret \n–namespace mongodb mongodb \n-o jsonpath=“{.data.mongodb-root-password}” | base64 -d)kubectl run \n–namespace mongodb mongodb-client \n–rm --tty -i \n–restart=‘Never’ \n–env=“MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD” \n–image Docker: Accelerated, Containerized Application Development \n–command \n– bashInside pod:mongosh admin --host “mongodb” --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORDWhy this is failing with the following?Current Mongosh Log ID: 63c7362a08XXXXXXXXX\nConnecting to: mongodb://@mongodb:27017/admin?directConnection=true&authSource=admin&appName=mongosh+1.6.2\nMongoNetworkError: connect ECONNREFUSED 172.16.147.83:27017It was working yesterday? I am simply following the output instructions, but did notice yesterday the image was–image Docker: Accelerated, Containerized Application Developmentr9now–image Docker: Accelerated, Containerized Application Developmentr20This shouldn’t affect how it is connected internally within the AKS clusters?helm version\nversion.BuildInfo{Version:“v3.10.3”kubectl version --client --short\nClient Version: v1.24.0\nKustomize Version: v4.5.4",
"username": "Sebastian_Cheung1"
},
{
"code": "",
"text": "Hi Sebastian,I see that you have connection refused. Please confirm the following:Regards,\nJanpreet",
"username": "Janpreet_Singh"
}
] | MongoServerError: Authentication failed when trying to connect via mongosh | 2023-01-18T00:01:32.142Z | MongoServerError: Authentication failed when trying to connect via mongosh | 1,547 |
null | [] | [
{
"code": "",
"text": "I have installed the MongoDB for vscode extension. I want to run the default playground template, so I run the program and I am asked to save and I save it as playground.mongodb and when I run it next I get an error saying “you don’t have an extension for debugging mongodb should we find one at the marketplace?”",
"username": "leo_adigwe"
},
{
"code": "",
"text": "so I was able to get the results from playground by pressing on play button on the tab bar.",
"username": "leo_adigwe"
}
] | How to save the mongoDB playground on vscode | 2023-01-17T15:35:16.437Z | How to save the mongoDB playground on vscode | 596 |
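For reference, a minimal playground sketch - the database and collection names are placeholders. Playgrounds are executed with the extension's play button (as found above), not with VS Code's Run and Debug command, which is what produces the "find a debugger" prompt; saving with a .mongodb or .mongodb.js extension keeps the file recognised as a playground.

```javascript
// Minimal MongoDB playground sketch - database and collection names are placeholders.
// Save as e.g. playground-1.mongodb.js and run it with the extension's play button.
use("sample_db");

db.getCollection("movies")
  .find({ year: { $gte: 2020 } })
  .limit(5);
```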
null | [
"aggregation",
"queries"
] | [
{
"code": "date: \"2022-10-18 00:00:00\"\npowerIn: 30\npowerOut: 10\n_id: \"ABC0123\"\n {\n $setWindowFields: {\n sortBy: { date: 1 },\n output: {\n totalPowerIn: {\n $sum: \"$powerIn\",\n window: { documents: [\"unbounded\", \"current\"] }\n },\n totalPowerOut: {\n $sum: \"$powerOut\",\n window: { documents: [\"unbounded\", \"current\"] }\n }\n }\n }\n",
"text": "Hello, I’m working on SetWindowsField query. My original mongodb data isI have a date range from 2022-01-01 00:00:00 to today which is recorded by an hour.\nI want to save cumulative value of powerIn and powerOut, and want to reset every first day of month.\nSo for example, 2022-01-01~2022-01-31 totalPowerIn and totalPowerOut will be cumulative. And 2022-02-01 will set as 0. But I’m having a trouble of setting documents bound. Can I add if statement so that when the date becomes ‘01’, set totalPowerIn and totalPowerOut value as 0?",
"username": "Chloe_Gwon"
},
{
"code": "",
"text": "It looks like your date field is of type string. Date fields should use the date data type. It takes less space, it is faster to compare and provides a rich API.If I understand correctly what you are missing is the partitionBy parameter of $setWindowFields. Your <expression> will need to use $substr to extract the first 7 first characters, YYYY-MM, of your date string.",
"username": "steevej"
},
{
"code": "",
"text": "I really appreciate it! It works perfectly now.\nCan you have only year and month (or only year) in the format of date? If I parse “2022-11”, its date is set to “2022-11-01” which I didn’t intend to tho.We save a date data in the format of “2022-11-18 00:00”, “2022-11-18”, “2022-11”, “2022” which is hourly, daily, monthly, yearly. And since hourly and monthly datas are the majority, do you think it’s better to have hourly and monthly as a date data type and monthly and yearly as a string? Or just keep it all string for consistency?I am very new to mongodb and learning, all feedback is appreciated! Many thanks to you",
"username": "Chloe_Gwon"
},
{
"code": "",
"text": "Can you have only year and month (or only year) in the format of date?No. A date is a date.Or just keep it all string for consistency?As I mentioned dates should be stored using the Date data type. It takes less space, it is faster and a rich date specific API exists.I am not too sure what is the best for a scenario like:date data in the format of “2022-11-18 00:00”, “2022-11-18”, “2022-11”, “2022” which is hourly, daily, monthly, yearly.The issue with string is that they take more spaces and are slower to compare. Numbers would be better than string if date data type cannot be used. For example, if your smallest granularity is hourly, I would keep year data as yyyymmddhh and use 99 or 00 as a marker of the granularity. 2022999999 would indicate a a yearly data for 2022.But I am not a big fan of data mangling. If parsing the string 2022-10 gives a date data value of 2022-10-01 I would store that and a extra field to indicate yearly data.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for your advice! It really helps",
"username": "Chloe_Gwon"
}
] | SetWindowsField Query | 2023-01-15T01:32:49.569Z | SetWindowsField Query | 503 |
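Putting the suggestion above together with the pipeline from the question - a hedged sketch (the collection name is a placeholder): partitioning by the "YYYY-MM" prefix makes the running totals restart at each month, while sortBy still orders documents within each month.

```javascript
// Hedged sketch (mongosh): monthly-resetting running totals over string dates.
// Collection name "power" is a placeholder; field names follow the thread.
db.power.aggregate([
  {
    $setWindowFields: {
      partitionBy: { $substr: ["$date", 0, 7] },   // "2022-01", "2022-02", ...
      sortBy: { date: 1 },
      output: {
        totalPowerIn:  { $sum: "$powerIn",  window: { documents: ["unbounded", "current"] } },
        totalPowerOut: { $sum: "$powerOut", window: { documents: ["unbounded", "current"] } }
      }
    }
  }
])
```

If the date field is later converted to the Date type, the same idea should work with partitionBy using $dateTrunc (unit: "month") instead of the $substr prefix.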