Dataset schema (one record per thread):
image_url — string (length 113–131) or null
tags — list
discussion — list
title — string (length 8–254)
created_at — string (length 24)
fancy_title — string (length 8–396)
views — int64 (range 73–422k)
null
[ "aggregation" ]
[ { "code": " {\n \"group\": \"A\",\n \"subgroup\": \"A1\",\n \"name\": \"Abby\"\n },\n {\n \"group\": \"A\",\n \"subgroup\": \"A2\",\n \"name\": \"Andy\"\n },\n {\n \"group\": \"A\",\n \"subgroup\": \"A2\",\n \"name\": \"Amber\"\n },\n {\n \"group\": \"B\",\n \"subgroup\": \"B1\",\n \"name\": \"Bart\"\n }\ngroupsubgroupnamescount {\n \"_id\": \"B\",\n \"count\": 1,\n \"subgroup\": [\n {\n \"_id\": \"B1\",\n \"count\": 1,\n \"names\": [\"Bart\"]\n }\n ]\n },\n {\n \"_id\": \"A\",\n \"count\": 3,\n \"subgroup\": [\n {\n \"_id\": \"A1\",\n \"count\": 1,\n \"names\":[ \"Abby\"]\n },\n {\n \"_id\": \"A2\",\n \"count\": 2,\n \"names\": [\"Amber\", \"Andy\"]\n }\n ]\n }\n", "text": "Good day MongoDB Community,I would like to ask for your help in creating the correct pipeline for my data:I want to group by group first, then for each group, group by subgroup.\nThe names will also go to their respective subgroup and the count is showing the actual count.My expected output is as follows:The aggregation and actual output can be seen in the playground:Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "joseph-d-p" }, { "code": "db.nestedGroup.aggregate([\n {\n $group: {\n _id: \"$subgroup\",\n topGroup: {$first: \"$group\"},\n names: {\n $push: \"$name\"\n }, \n count: {\n $sum: 1\n }\n }\n }, \n {\n $group: {\n _id: \"$topGroup\", \n subGroup: {$push: \"$$ROOT\"}, \n count: {$sum: \"$count\"}\n }\n },\n {\n $project: {\n count: 1,\n \"subGroup.names\": 1,\n \"subGroup.count\": 1,\n \"subGroup._id\": 1,\n }\n }\n])\nB1A2AB", "text": "Hi @joseph-d-p, welcome back to the community.\nBased on the expected output and input provided by you, you can use a combination of two group stages to achieve this.\nTake a look at the following pipeline:Here, the first group stage is to group the subgroups like B1, A2 etc, and collect the names in an array.\nThe second group stage is there to group those subgroups on the basis of their respective topGroup values like: A, B etc while simultaneously summing up the total count.Please note that this is untested, so you might want to do your own testing to ensure it works with your data and all the edge cases are handled correctly .If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
2-level group by for objects in an array
2022-10-03T05:53:17.004Z
2-level group by for objects in an array
1,946
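To try the accepted two-stage pipeline end to end, here is a minimal mongosh sketch (untested; it seeds the `nestedGroup` collection named in the answer with the sample documents from the question):

```javascript
// Seed the sample documents from the question
db.nestedGroup.insertMany([
  { group: "A", subgroup: "A1", name: "Abby" },
  { group: "A", subgroup: "A2", name: "Andy" },
  { group: "A", subgroup: "A2", name: "Amber" },
  { group: "B", subgroup: "B1", name: "Bart" }
]);

// First $group collapses each subgroup; second $group rolls the subgroups
// up into their top-level group while summing the counts
db.nestedGroup.aggregate([
  { $group: {
      _id: "$subgroup",
      topGroup: { $first: "$group" },
      names: { $push: "$name" },
      count: { $sum: 1 }
  } },
  { $group: {
      _id: "$topGroup",
      subgroup: { $push: { _id: "$_id", count: "$count", names: "$names" } },
      count: { $sum: "$count" }
  } }
]);
```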
null
[ "data-modeling", "storage" ]
[ { "code": "", "text": "Hi!I´m currently developing a scanning software, where the user can upload his code (with up too about 500-800 lines of code - sometimes only 20 lines of code) for a scan.\nI´m storing the content in my db and am wondering, if it´s mroe efficient to store potentially “huge” files as a string or file in my database.\nI couldn´t really find a specific case for my question, because it´s really flexible and so on.So yeah, there can and will be those files with hundreds of lines of code. What would you say is the more efficient way to store those?Cheers!", "username": "Daniel_Mielke" }, { "code": "", "text": "Welcome to the MongoDB Community @Daniel_Mielke!You can store up to 16MB in a single MongoDB document, so hundreds of lines of code isn’t particularly huge. Taking 100 characters as a (probably large) average line length, 800 lines of code would still be less than 100KB of text.For comparison, the Complete Works of William Shakespeare is 5.5MB in plain text format.What would you say is the more efficient way to store those?The most efficient approach depends on your use case.How do you plan to access the code contents? If you will be scanning, searching, or indexing in some way and need to retain the code contents, then it would make sense to save the metadata (and perhaps the contents) in MongoDB. I assume your use case may have several steps of scanning and processing the code.If your scanning software only needs the file contents transiently, it may be more efficient to save to temporary files on disk and remove after processing.Regards,\nStennie", "username": "Stennie_X" } ]
Store data with up to 800 lines of code as strings or files?
2022-09-30T15:21:32.913Z
Store data with up to 800 lines of code as strings or files?
2,404
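To make the advice above concrete, a hedged sketch of storing an uploaded file's code as a plain string plus metadata in one document (all field names here are hypothetical):

```javascript
// A few hundred lines of code is roughly 10-100KB, far below the 16MB document limit
db.uploads.insertOne({
  userId: "user-42",                 // hypothetical metadata fields
  filename: "scan_target.js",
  lineCount: 800,
  uploadedAt: new Date(),
  contents: "function main() {\n  // ...uploaded source code...\n}\n"
});
```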
null
[ "replication", "server", "storage" ]
[ { "code": "homebrewmongodhomebrew.mxcl.mongodb-community.plisthomebrew.mxcl.mongodb-community-rs1-0.plisthomebrew.mxcl.mongodb-community-rs1-1.plisthomebrew.mxcl.mongodb-community-rs1-2.plistProgrammArgumentsconf#!/bin/zsh\n\n# Start the mongodb replica set as services\nbrew services run mongodb-community --file=~/services/mongodb/runtime-config/homebrew.mxcl.mongodb-community-rs0.plist\nbrew services run mongodb-community --file=~/services/mongodb/runtime-config/homebrew.mxcl.mongodb-community-rs1.plist\nbrew services run mongodb-community --file=~/services/mongodb/runtime-config/homebrew.mxcl.mongodb-community-rs2.plist\nmongodb-communitybrew service run mongodb-community-rs10 --file....brew service run mongodb-community-rs11 --file....[mongodb] ./startMongoDbReplicaSet.zsh\n==> Successfully ran `mongodb-community` (label: homebrew.mxcl.mongodb-community)\nService `mongodb-community` already running, use `brew services restart mongodb-community` to restart.\nService `mongodb-community` already running, use `brew services restart mongodb-community` to restart.\nmongod27017plist<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">\n<plist version=\"1.0\">\n<dict>\n <key>Label</key>\n <string>homebrew.mxcl.mongodb-community</string>\n <key>ProgramArguments</key>\n <array>\n <string>/opt/homebrew/opt/mongodb-community/bin/mongod</string>\n <string>--config</string>\n <string>~/services/mongodb/mongod_rs1-0.conf</string>\n </array>\n <key>RunAtLoad</key>\n <true/>\n <key>KeepAlive</key>\n <false/>\n <key>WorkingDirectory</key>\n <string>/opt/homebrew</string>\n <key>StandardErrorPath</key>\n <string>/opt/homebrew/var/log/mongodb/output.log</string>\n <key>StandardOutPath</key>\n <string>/opt/homebrew/var/log/mongodb/output.log</string>\n <key>HardResourceLimits</key>\n <dict>\n <key>NumberOfFiles</key>\n <integer>64000</integer>\n </dict>\n <key>SoftResourceLimits</key>\n <dict>\n <key>NumberOfFiles</key>\n <integer>64000</integer>\n </dict>\n</dict>\n</plist>\n.conf<string>~/services/mongodb/mongod_rs1-0.conf</string>\n<string>~/services/mongodb/mongod_rs1-1.conf</string>\n<string>~/services/mongodb/mongod_rs1-2.conf</string>\n.conf# mongod.conf suitable for MACOSX on APPLE SILICON machines only\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n logRotate: reopen\n path: /opt/homebrew/var/log/mongodb/mongod_rs0-0.log\n\n# Where and how to store data.\nstorage:\n dbPath: /opt/homebrew/var/mongodb/mongod_rs0-0\n journal:\n enabled: true\n commitIntervalMs: 100\n directoryPerDB: true\n engine: \"wiredTiger\"\n wiredTiger:\n engineConfig:\n cacheSizeGB: 1\n journalCompressor: none\n directoryForIndexes: false\n collectionConfig:\n blockCompressor: none\n indexConfig:\n prefixCompression: false\n\n# mmapv1:\n\n# how the process runs\n#processManagement:\n# fork: true # fork and run in background\n# pidFilePath: /var/run/mongodb/mongod_rs0-0.pid # location of pidfile\n# timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 37017\n bindIp: 0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n#security:\n\n#operationProfiling:\n\nreplication:\n replSetName: \"dev_pinklady_rs0\"\n oplogSizeMB: 128\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n\n#snmp:\nmongod", 
"text": "I like to install a MongoDB ReplicaSet on my MacBook Air M2 for development purpose only. I run a similar setup on a MacBook Pro with Intel processor. But there I have 64 GB RAM and run a Linux VM for that. However, I like to get rid of virtualization and run all my stuff directly on the Mac. This saves me a lot of time to maintain linux VMs etc.Unfortunately I cannot get this thing going with homebrew. There seem to be several problems:What I have done so far:The problem is, that this script starts the same formulae mongodb-community and I have no idea how I can give these different names . I should start them like brew service run mongodb-community-rs10 --file.... , brew service run mongodb-community-rs11 --file.... , etc.The result of the above script is:And the only started mongod instance is running on port 27017 and therefore completely ignoring my parameter file.So here is one of my plist files as a sample:The three files only differ on line 11 where I load different .conf files for each instance (these are the config file parameters, each line in ONE of the three files) :And here is a sample of a .conf file:These config files differ in the following values:Can anybody help me with that or point me to some documentation how to install three mongod instances to run a ReplicaSet on one single Mac computer with Apple Silicon chip?", "username": "Thomas_Bednarz" }, { "code": "mlaunch$ mlaunch --repl\nlaunching: \"mongod\" on port 27017\nlaunching: \"mongod\" on port 27018\nlaunching: \"mongod\" on port 27019\nreplica set 'replset' initialized.\n$ mlaunch list\n\nPROCESS PORT STATUS PID\n\nmongod 27017 running 43667\nmongod 27018 running 43672\nmongod 27019 running 43682\n$ mlaunch stop\nsent signal Signals.SIGTERM to 3 processes.\n$ mlaunch start\nlaunching: \"mongod\" on port 27017\nlaunching: \"mongod\" on port 27018\nlaunching: \"mongod\" on port 27019\nmlaunchPATH--binarypathmlaunch", "text": "Hi @Thomas_Bednarz ,I’m not sure about the specifics of setting up a replica set to run via Homebrew services, but for development convenience I use mlaunch (part of the mtools Python scripts collection) to manage local deployments.For example:mlaunch uses MongoDB binaries found via your default PATH environment variable or a provided --binarypath parameter, so you can use this with binaries installed via Homebrew as well as alternative approaches.Note: I also happen to be the maintainer for mtools in my spare time :). I use mlaunch with both Intel and Apple Silicon.Regards,\nStennie", "username": "Stennie_X" }, { "code": "mtoolsmlaunch", "text": "Hi @Stennie_XThank you very much, that sounds like an interesting alternative! I am not familiar with py but I think I get this one done. Just figured out, that the problem is NOT only with MongoDB but also with Redis, which I use as simple cache.To me it looks like brew services is doing a lot of magic behind the scene and is taking the configurations from several places. The bad thing is, that it is different on Intel and Apple Silicon Macs. What I need is a shell script that starts all my services and another one which stops them correctly (especially the ReplicaSet). 
I need to be able to change configuration quickly and then restart again to test very specific things.So I will give mtools with mlaunch a try later today.Regards, Tom", "username": "Thomas_Bednarz" }, { "code": "initstopstartmongoshmlaunch init --replicaset --nodes 3 --name \"dev_pinklady_rs0\" --port 37017YAMLmlauch init", "text": "Hi @Stennie_XSorry, I did not have the time to come back to this last week.But today I played around a bit. I was able to init a RS, to stop, start etc and to connect to it using mongosh and Studio3T. However I do not understand how I can pass config files, and I do not know where the log files are etc.So to test, I initialized with mlaunch init --replicaset --nodes 3 --name \"dev_pinklady_rs0\" --port 37017. That works perfect, but as you can see in my initial message, I have config files with a bunch of additional parameters in a YAML conf file.Is it possible to pass such a file to mlauch init and if yes, do I have to pass ONE file for EACH node?Thanks a lot for your help.Regards, Tom", "username": "Thomas_Bednarz" }, { "code": "mlauch init-f /path/to/config.mlaunch_startupmlaunchstartup_info", "text": "Is it possible to pass such a file to mlauch init and if yes, do I have to pass ONE file for EACH node?Hi @Thomas_Bednarz,You can use -f /path/to/config to pass in the same config file for every replica set member.If you want to manually tweak parameters after init, these are saved in an .mlaunch_startup JSON file within the top-level of a directory where you init a deployment with mlaunch. The startup_info key has the command link for starting a process.Regards,\nStennie", "username": "Stennie_X" }, { "code": "mtools", "text": "Hi @Stennie_XThanks a lot, just what I was looking for.I just wanted install mtools on my Intel MacBookPro. I notices that all links on the GitHub Page are dead. Looks like the site https://blog.rueckstiess.com/ does not exist anymore, at least I cannot reach it at all.", "username": "Thomas_Bednarz" }, { "code": "mtools/doc", "text": "I notices that all links on the GitHub Page are dead. Looks like the site https://blog.rueckstiess.com/ does not exist anymore, at least I cannot reach it at all.Hi @Thomas_Bednarz,I’ll get in touch with the domain owner (and original mtools author) to find out what happened with the domain set up.In the interim, you can find the same documentation in GitHub under the mtools/doc directory: mtools/mlaunch.rst at develop · rueckstiess/mtools · GitHubOr via the archive on the Wayback Machine.Regards,\nStennie", "username": "Stennie_X" } ]
Run a mongodb ReplicaSet on a Mac using `brew services`
2022-09-20T15:51:07.038Z
Run a mongodb ReplicaSet on a Mac using `brew services`
5,799
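However the three mongod processes end up being started (launchd plists, a shell script, or mlaunch), the replica set itself is initiated once from mongosh. A sketch using the set name and base port from the question (the two additional ports are assumptions):

```javascript
// Connect to one member first: mongosh --port 37017
rs.initiate({
  _id: "dev_pinklady_rs0",
  members: [
    { _id: 0, host: "localhost:37017" },
    { _id: 1, host: "localhost:37018" },
    { _id: 2, host: "localhost:37019" }
  ]
});
rs.status();  // confirm one PRIMARY and two SECONDARY members
```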
null
[ "ops-manager" ]
[ { "code": "", "text": "Hi Team I’m working on installing opsmanager and i need to convert the url from http to https .I could use the updated https url .And when i try the reconfigure the agent (by following the steps generated in UI page) im getting error like certificate CN doesnot match with host name .Because of this only server metrics is getting displayed DB stats are missing in the portalThanks,\nRS~", "username": "Talla_Rishitham" }, { "code": "", "text": "Any help on this topic", "username": "Talla_Rishitham" }, { "code": "", "text": "Welcome to the MongoDB Community @Talla_Rishitham!getting error like certificate CN doesnot match with host nameThis error indicates that the Common Name (CN) on the TLS certificate you have used does not match the Ops Manager hostname, so a secure connection cannot be established.You need to change either the certificate or configured hostname so the details match.Regards,\nStennie", "username": "Stennie_X" } ]
Ops Manager: I need to convert the URL from HTTP to HTTPS
2022-09-29T11:35:43.635Z
Ops Manager: I need to convert the URL from HTTP to HTTPS
2,252
null
[]
[ { "code": "", "text": "Hello everyone.I would like to know some of the best practices for storing custom log data in mongodb.I want to be able to log user’s activities in my client-side application. e.g) which page the person entered and how long the person stayed in the page and so forth.Create a document that comprises detailed activities(log data) per each user once a day.This seems like not a scalable solution. If the number of users grew, so is a number documents that track users’ activities.If there are more experienced people out there, what are your take on this type of situation? I would love to have some discussions.Thanks.", "username": "Suk_Yu" }, { "code": "", "text": "Welcome to the MongoDB Community @Suk_Yu!It sounds like you are describing a Web Analytics solution that could either be part of your custom application code or use a third party service like Google Analytics.This seems like not a scalable solution. If the number of users grew, so is a number documents that track users’ activities.If you would like to track daily user activity on a detailed level, it is expected that the number of documents will grow with your user base.Perhaps you would want to roll up some of the user log activities into summary documents using Time Series Collections or Materialised Views. If you are using MongoDB Atlas, you can also take advantage of features like Online Archive to move infrequently accessed data to a read-only archive using cloud object storage for cost savings.If you don’t have custom requirements and the analytics are for your own purposes rather than displaying to end users, I would be inclined to use an existing third party web analytics solution.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Handling Custom Log Data in MongoDB
2022-09-29T06:43:17.622Z
Handling Custom Log Data in MongoDB
1,197
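As a sketch of the time series suggestion above (MongoDB 5.0+, with hypothetical field names), per-user activity events could be stored like this:

```javascript
// Bucketed storage keyed on the user; one document per page-view event
db.createCollection("pageViews", {
  timeseries: { timeField: "ts", metaField: "userId", granularity: "minutes" }
});

db.pageViews.insertOne({
  ts: new Date(),
  userId: "user123",
  page: "/home",
  durationMs: 5400
});
```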
null
[ "swift", "app-services-user-auth" ]
[ { "code": "async function adminLogInresponse: {\"error\":\"failed to authenticate with MongoDB Cloud API: You are not authorized for this resource.\",\"error_code\":\"InvalidSession\",\"error_details\":{\"error\":401,\"reason\":\"Unauthorized\"}}adminApiPublicKeyadminApiPrivateKeyconst projectId = \"9ff9fkdfkj98sdjf90jklkls0ap\";\nconst appId = \"myApp-xxyyz\";\nconst apiUrl = \"https://realm.mongodb.com/api/admin/v3.0/groups/\" + projectId + \"/apps/\" + appId;\n\nexports = async function(uid) {\n \n async function adminLogIn() {\n console.log(\"async attempting to login as admoi\")\n \n // const username = context.values.get(\"adminApiPublicKey\");\n // const apiKey = context.values.get(\"adminApiPrivateKey\");\n // let email = username.email\n // console.log(email, apiKey)\n \n const adminKey = context.values.get(\"adminApiPublicKey\");\n let username = adminKey.email\n let apiKey = context.values.get(\"adminApiPrivateKey\");\n \n try {\n const response = await context.http.post({\n url: \"https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login\",\n body: {username, apiKey},\n encodeBodyAsJSON: true,\n });\n console.log(\"response: \" + response.body.text())\n const body = EJSON.parse(response.body.text());\n console.log(\"got token: \" + body.access_token)\n return body.access_token;\n } catch (err) {\n console.error(\"caught err \" + err.message)\n }\n }\n\n const token = await adminLogIn();\n\n async function deleteUser(uid) {\n console.log(\"async attempting to delete uid: \" + uid)\n \n try {\n await context.http.delete({\n url: `${apiUrl}/users/${uid}`,\n headers: {\"Authorization\": [`Bearer ${token}`]}\n });\n console.error(\"got an error: \" + uid)\n return uid;\n } catch (err) {\n console.error(\"ERROR! \" + err.message)\n }\n }\n\n return deleteUser(uid);\n};\n", "text": "I may this this question tagged incorrectly.The goal is to delete a user; but another user, not the logged in user.The use case is a manager with employees and as the employees come and go the manager needs to be able to add them, and then when they leave delete them. Ignoring for the moment the employees data, the goal is to call an Atlas function from the Swift SDK to delete another user.I have referenced this Delete User Using a Custom Function and have been working with the Values and Secrets but one issue I am running into is understanding what an ‘Admin User’ is because apparently that’s need to run the function in the above link.I have modified the code (Value) in the async function adminLogIn use my authentication but it throws this errorresponse: {\"error\":\"failed to authenticate with MongoDB Cloud API: You are not authorized for this resource.\",\"error_code\":\"InvalidSession\",\"error_details\":{\"error\":401,\"reason\":\"Unauthorized\"}}How does one become an admin user? And where do I get the two Values for adminApiPublicKey and adminApiPrivateKeyCode below is a copy/paste from the link.Is the question too vague?Jay", "username": "Jay" }, { "code": "your-app-xxxxxx", "text": "After a LOT of trial and error, got the info needed.First the Admin User (options) can be found in the console; upper right corner where you can select the organization. 
Then in the organization list, in the project name row, click Users; then you can select whichever user you want to make an Admin and change their privileges.\n\nThen I discovered this:\n\nFor Admin API requests, your Application ID is the ObjectId value in the _id field, not the client_app_id\n\nThis refers to a list retrieved from a function call that gets all of the apps in the project. E.g., to run admin tasks, the app id you would normally use (your-app-xxxxxx) is not the right one. You will need to discover the application id, not the app ID. It's probably somewhere obvious, but I did a little function to list all of the apps within the project and got it from there.", "username": "Jay" } ]
API Key For User Delete
2022-10-01T22:16:44.529Z
API Key For User Delete
1,895
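For anyone else hunting for that internal application id, a hedged sketch of the kind of listing function Jay describes, reusing `projectId` and the token from the thread's adminLogIn(); the Admin API's list-apps endpoint returns both the client_app_id and the _id value the other endpoints expect:

```javascript
// Assumes `projectId` and a valid `token` as obtained in the function above
async function listApps(token) {
  const response = await context.http.get({
    url: `https://realm.mongodb.com/api/admin/v3.0/groups/${projectId}/apps`,
    headers: { "Authorization": [`Bearer ${token}`] }
  });
  // Each entry carries client_app_id ("myApp-xxyyz") and _id (use this one)
  return EJSON.parse(response.body.text());
}
```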
null
[ "swift" ]
[ { "code": "struct ConversationDetail: View {\n @Environment(\\.presentationMode) var presentationMode\n @ObservedRealmObject var conversation: Conversation\n \n @ObservedResults(Message.self, where: ({ $0.conversation == \"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXX\"}), sortDescriptor: SortDescriptor(keyPath: \"timestamp\", ascending: true)) var messages\n \n var groupedByDate: [String: [Message]] {\n Dictionary(grouping: messages, by: {$0.timestamp.toLocalTime(format: \"EEEE\")})\n }\n var headers: [String] {\n groupedByDate.map({ $0.key }).sorted()\n }\n\n var body: some View { MyView }\n", "text": "I have this piece of code which sorts all the messages by date of a certain conversation which allows me to show users per day an overview of the messages that has been sent. This all works perfect, but maybe you already noticed that I hard coded in one of the conversation ID’s to test if it works. Now my problem is that I get this error when I try to use instead of the hardcoded id: conversation._idThis is the error: Cannot use instance member ‘conversation’ within property initializer; property initializers run before ‘self’ is availableI know about initializing and have been struggling with it for quite some time, but I just don’t get it to work for this scenario where I want to use an property of an ObservedRealmObject in an ObservedResults query.Any help is greatly appreciated!", "username": "Jesse_van_der_Voorn" }, { "code": "", "text": "hey @Jesse_van_der_Voorn Are you using the Realm SDKs with Swift or is this just a general question on Swift? We do have a SwiftUI guide here for Realm -If you are using Realm then likely you’ll get a better answer on our Realm forums over here -Atlas Device Sync is a device-to-cloud synchronization service designed to help address mobile app data challenges like connectivity, real-time performance, and multi-device collaboration.", "username": "Ian_Ward" } ]
Use ObservedRealmObject property in ObservedResults query
2022-09-18T12:48:08.033Z
Use ObservedRealmObject property in ObservedResults query
1,472
null
[ "time-series" ]
[ { "code": "metadata {\n timestamp: ISODate(\"2022-10-02T10:17:28.453Z\"),\n metadata: {\n innerId: 'an-id2',\n isAffectingAccount: true,\n tags: [ { name: 'currency', value: 'USD' } ]\n },\n amount: 15,\n _id: ObjectId(\"63396861d2cbcc6033eea58e\")\n }\ndb.testts.deleteMany({ 'metadata.tags': {$in: [{name: 'currency', value: 'USD']}]}})\nMongoServerError: Cannot perform an update or delete on a time-series collection when querying on a field that is not the metaField 'metadata'\n", "text": "Hello,env details:\nMongo atlas version 6.I have a timeseries collection with the metadata field set as the metadata field.example of document is:I’m trying to delete documents using the following command:Unfourtently getting the following error:", "username": "Shai_Katz" }, { "code": "elemMatch$in", "text": "Just found it work if using elemMatch instead of $in.", "username": "Shai_Katz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to delete documents from timeseries collection by nested array field
2022-10-02T11:32:03.895Z
Unable to delete documents from timeseries collection by nested array field
2,880
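Written out, the fix from the accepted reply looks like this; $elemMatch keeps the whole predicate on the metaField, which is why the server accepts it:

```javascript
// Works on a time series collection because only 'metadata' is queried
db.testts.deleteMany({
  "metadata.tags": { $elemMatch: { name: "currency", value: "USD" } }
});
```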
null
[ "graphql" ]
[ { "code": "", "text": "I am using the MongoDB GraphQL endpoint.\nI’ve setup several custom functions as graphQL mutations.Now I want to have the registration (1) available to everyone. Anonymously. But all the other mutations shall only be able for registered users.How to I configure that in MongoDB GraphQL?Can I use “Custom Function Authentication”? The UI sais “all requests to GraphQL must be authenticated”. Can I create an exception?Thank you for any support or hints.", "username": "Robert_Rackl1" }, { "code": "", "text": "@Robert_Rackl1 ,I would go the direction of having a “registration” api key and the module for registration will use this key to do any query/registration mutation. Any other calls will require a user authentication…Does that sound good to you?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Yes that sounds fine. Just one detail question. What do you mean by “module” ? As far as I can see there is only one MongoDB GraphQL endpoint in my mongodb atlas app that has ONE configuratioon for authentication.", "username": "Robert_Rackl1" }, { "code": "", "text": "I have another follow up questins: Ok let’s say I have a way how new user’s can register. The call something anonymously (eg. an HTTP endpoint).=> Then how do I create a user in a MongoDB atlas function?Use Case FlowBut now the new user is still not registered as a user in MongoDB Atlas! How to do that??? The doc sais that a user will automatically be created once he makes a call with his JWT. https://www.mongodb.com/docs/atlas/app-services/users/create/", "username": "Robert_Rackl1" }, { "code": "", "text": "Found another related topic. Create/register user on backend with Realm function - #3 by Mohammed_RamadanThee Pavel wrote: \" there isn’t a registration process separate from the login process, you call the same function every time,\" Mmmhh ok, I have to look into that …And here Register User in Web via Endpoint - #2 by nirinchev nirinchev mentions a not documented /register endpoint. Let’s see …", "username": "Robert_Rackl1" }, { "code": "", "text": "Hi @Robert_Rackl1 ,I think there is an option to automatically create user upon login with thr correct credentials.Let me know if that helps?Thanks\npavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Yes that seems to be the only way. It just feels a bit weired. It’s a workaround.Resgistration and login are two different things. For example at registration the client normally passes more information, such as additonal metadata. For login the client only passis the most basic data (eg. username and password, no additonal name, postal adress or mobile phone number).Would be nice if MongoDB Atlas would offer a way to actually register a user. 
(of course this depends on the authentication provide that you use.)", "username": "Robert_Rackl1" }, { "code": "/**\n * Create a new team and a mongoDB Atlas User\n * @param {Object} GraphQL TeamInsertInput { teamName admins { name email mobilephone } }\n * @return {Object} error or { team user jwt }\n *\n * This function uses some `config` variables read from mongodb app values\n */\nexports = async (newTeam) => {\n console.log(\"CreateNewTeam (context.user\", JSON.stringify(context.user))\n \n // ===== Insert newTeam into DB\n var collection = context.services.get(\"mongodb-atlas\").db(config.dbName).collection(config.collection)\n let insertResult = await collection.insertOne(newTeam)\n\n // ===== Query team (returned document contains _id !)\n let team = await collection.findOne({_id: insertResult.insertedId})\n\t\n // ====== Create a **new** JWT for the new user\n // .... some code with https://www.mongodb.com/docs/atlas/app-services/functions/globals/#std-label-function-utils-jwt\n // Keep in mind that the JWT must exactly match you configuration !\n // eg. issuer (\"iss\" claim) must be set correctly if you configured that\n // and of course use the correct secret to sign the key\n let token = /* ... create JWT ... *//\n\n // Make an authenticated call to Atlas App, so that MongoDB Atlas User is created (automatically)\n // https://www.mongodb.com/docs/atlas/app-services/users/create/\n let postRes = await context.http.post({ \n url: config.API_URL,\n headers: {\n // THIS IS IMPORTANT !!! Need to set the \"jwtTokenString\" header!\n //. \"Authentication: Bearer .....\" DOES NOT WORK HERE !!\n \"jwtTokenString\": [ token ] \n },\n body: { query: \"{ team { _id teamName} }\" }, // just query anything\n encodeBodyAsJSON: true\n })\n // The response body is a BSON.Binary object. Parse it and return.\n console.log(\"response of post: \"+ JSON.stringify(postRes))\n\n // Now the new user has also been created in MongoDB Atlas internal user list. \n\n return {\n team: team,\n user: team.admins[0],\n jwt: token\n }\n}\n\n\n", "text": "Ok, I got it working. Maybe a workaround. But it does work. HEre is my source code:", "username": "Robert_Rackl1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to have ONE anonymous GraphQL mutation, but all the others secured
2022-09-30T15:50:29.372Z
How to have ONE anonymous GraphQL mutation, but all the others secured
2,756
null
[]
[ { "code": "", "text": "hi,how do i sendFile html file or return a redirect url like expressjs redirect via mongodb realm function?", "username": "James_Tan1" }, { "code": "{ response: \"This is a JSON response from your function\",\n htmlSource: \"<html><h1>Funny page</h2></html\">\n", "text": "I think MongoDB Alas Functions must return JSON.\nYou might wann try to fiddle around with the returned “Content-Type” in the HTML response header, but I’d assume this will not be allowed.You could return HTML inside JSON.But of course this does not work, when you call your function from a browser. <= And that’s exactly the point: Functions are not meant to be called from browsers. They are an API. Meant to be called from clients (eg. a node script or a mobile app)", "username": "Robert_Rackl1" } ]
Render html content with function
2021-10-05T09:50:33.647Z
Render html content with function
2,232
null
[ "replication" ]
[ { "code": "", "text": "Hello Everyone,\nIs there any way to copy all collections into another database in the same server in MongoDB 4.4? The size of database is around 150GB, and it is also a replica set.\nSo it’s not efficient way that clone a collection, then delete all documents in the new collection. Do you have any suggestion about copying collections without its documents?\nThanks in advance.\nAhmet", "username": "Ahmet_Gunata" }, { "code": "", "text": "Check these links.May helpSimple tool to clone metadata for mongodb collections - GitHub - JonnoFTW/mongo-schema-export: Simple tool to clone metadata for mongodb collections\nAbove link discusses about cloning only one collection", "username": "Ramachandra_Tummala" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cloning a database without its data
2022-09-30T10:56:14.480Z
Cloning a database without its data
2,115
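Besides mongodump/mongorestore, one hedged mongosh sketch of the same idea: recreate each collection's options and indexes in a new database without copying any documents (database names here are hypothetical, and index options beyond the name would need copying too):

```javascript
const src = db.getSiblingDB("mydb");
const dst = db.getSiblingDB("mydb_structure_only");

src.getCollectionInfos({ type: "collection" }).forEach(function (info) {
  // Recreate the collection with its original options (validators, capped, ...)
  dst.createCollection(info.name, info.options);

  // Recreate every index except the automatic _id index
  src.getCollection(info.name).getIndexes().forEach(function (idx) {
    if (idx.name !== "_id_") {
      dst.getCollection(info.name).createIndex(idx.key, { name: idx.name });
    }
  });
});
```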
null
[ "cxx", "time-series" ]
[ { "code": "", "text": "Hi team! I want to try the time series collection features of MongoDB 5.0 and above. My program is written in C++, but the latest version of the driver mongocxx3.6.7 does not support the features of mongodb 4.4 and above. Are there any plans to continue to support subsequent versions? Thanks!", "username": "Tong_Zhang2" }, { "code": "", "text": "Hi @Tong_Zhang2,MongoDB 5.0 support in C++ driver is definitely something on the near term roadmap and the dev team is actively working towards it.", "username": "Rishabh_Bisht" }, { "code": "", "text": "Hi @Tong_Zhang2,I’m pleased to inform that C++ Driver with 5.0 support is now available - MongoDB C++11 Driver 3.7.0 Released", "username": "Rishabh_Bisht" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does the MongoDB C++ driver plan to support new features above 5.0?
2022-08-25T05:45:29.127Z
Does the MongoDB C++ driver plan to support new features above 5.0?
2,678
null
[ "golang" ]
[ { "code": "", "text": "Same Query response differ in golang and node js, WHY?", "username": "Aman_Saxena" }, { "code": "", "text": "Welcome to the MongoDB community @Aman_Saxena !Please provide some more details for this issue:Thanks,\nStennie", "username": "Stennie_X" }, { "code": "Query := {\n\t\t\"_id.src\": {\n\t\t\t\"$in\": fids,\n\t\t},\n\t\t\"_id.trg\":{\n\t\t\t\"$in\": bids,\n\t\t},\n\t}\n", "text": "id is object of trg(int64) anf src(int64)\nmongoDB Driver golang = mongo-driver v1.10.1\nmongoDB driver in nodejs latest version 4.10.0server version is 4.4.1\n@Stennie_X\nSo ordering of data which I am getting is different?", "username": "Aman_Saxena" }, { "code": "", "text": "If I understand correctly the followingSo ordering of data which I am getting is different?it means you get the same documents but not in the same order. If you want a deterministic order in your documents you must sort.", "username": "steevej" }, { "code": "", "text": "Ok, I got it but is there any predefined order of documents for different drivers?", "username": "Aman_Saxena" }, { "code": "", "text": "There is no predetermined order of documents returned. Even using the same driver documents could be returned in a different order on consecutive calls.As @steevej mentions, if you want the documents returned in the same order everywhere, then provide a sort. One thing to note here is if the sort does not provide unique values to be sorted on, you could still see documents returned in different ordering for the values that are the same.", "username": "Doug_Duncan" }, { "code": "", "text": "you could still see documents returned in different ordering for the values that are the sameAs a work around you may always add _id:1 as the last multi-key sort to ensure order when sort prefix has same values. _id is unique so final order will be deterministic.", "username": "steevej" } ]
Order of results differs for same query in golang and node js
2022-09-29T11:33:07.161Z
Order of results differs for same query in golang and node js
2,217
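In mongosh terms, the deterministic-ordering advice boils down to adding a sort; since _id is unique (here an embedded document of src and trg), sorting on it makes the order identical across drivers and across repeated calls. A sketch with the filter from the question:

```javascript
db.collection.find({
  "_id.src": { $in: fids },   // fids/bids as in the question's Go snippet
  "_id.trg": { $in: bids }
}).sort({ _id: 1 });          // unique key, so the order is fully deterministic
```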
null
[ "queries", "dot-net" ]
[ { "code": "var found = await (from f in _shiftCollection.AsQueryable()\n where f.ScheduleStart.Date == currentDate.Date\n select f).CountAsync();\n", "text": "Hello Everyone,I have just started learning and developing my first Mongo driven application. I am getting a serialization error\nwhen trying to compare date part of a datetime property in a collection.\nBoth _shiftCollection.Scheduled Start and currentDate are in UTC format.\nI have found numerous ways using Mongo Query, but I prefer my logic to be\nstrongly typed.[What I have tried so far]{document}{ScheduleStart}.Date is not supported.Any help are greatly appreciated. TIA.", "username": "Eugene_Magno" }, { "code": "", "text": "Interested in the same thing", "username": "Artur_A" }, { "code": "", "text": "I have the same problem, did you manage to solve it?\nCan you show me how?", "username": "DaniloBBezerra" }, { "code": "", "text": "Hello,Even I’m having the same issue when comparing DateTimeOffset fields. I want to compare only the date part from the DateTimeOffset. TimeSheet.FromDate is DateTimeOffset field and I’m comparing it with the DateTimeOffset field received from the user. This will have only the date. Please help.FilterDefinition filter = Builders.Filter.Where(x => x.DocType == nameof(TimeSheetInfo)\n&& x.TimeSheetInfo.UserID == userId\n&&x.TimeSheetInfo.FromDate.Date == fromDate.Date\n&& x.TimeSheetInfo.ToDate.Date == toDate.Date);", "username": "kusuma_srikar" } ]
Compare Date Part only of BSON using LINQ
2020-09-14T05:14:26.046Z
Compare Date Part only of BSON using LINQ
6,427
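The thread was never resolved, but the usual workaround (hedged) is to avoid .Date entirely and compare against a half-open day range, which every driver can translate; the server-side shape of that query is below, and the same two comparisons (>= start of day, < start of next day) can be written directly in LINQ:

```javascript
// All documents whose ScheduleStart falls on 2020-09-14 (UTC);
// "shifts" is a hypothetical collection name standing in for _shiftCollection
db.shifts.find({
  ScheduleStart: {
    $gte: ISODate("2020-09-14T00:00:00Z"),  // midnight that day
    $lt:  ISODate("2020-09-15T00:00:00Z")   // midnight the next day
  }
});
```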
null
[ "python", "crud" ]
[ { "code": "{ \"_id\" : 23, \"local_id\" : 1234, \"global_id\" : [ \"P123\", \"P345\" ] }", "text": "Hi, I have a document like this{ \"_id\" : 23, \"local_id\" : 1234, \"global_id\" : [ \"P123\", \"P345\" ] }If I want to $push new value to the array has the key “global_id” then I can do thiscollection.update_one({‘local_id’: l_pid}, {’$push’: {‘global_id’: global_id}})and the document looks sth like this , for example : (push P678 to the array){ “_id” : 23, “local_id” : 1234, “global_id” : [ “P123”, “P345”, “P678” ] }But next time when the same key of “global_id” comes in it keeps appending to the end of array like : (this time the same P678 comes in){ “_id” : 23, “local_id” : 1234, “global_id” : [ “P123”, “P345”, “P678” , “P678”] }I want it to overwrite to existing value, and the array has to have unique value, the value can’t be the same.How can I do it?\nThanks", "username": "Huy_Hoang1" }, { "code": "", "text": "HELPP. Is anyone knows", "username": "Huy_Hoang1" }, { "code": "", "text": "use $addToSet rather than $push", "username": "steevej" }, { "code": "", "text": "Thanks, thats what I want", "username": "Huy_Hoang1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to use $push overwrites existing value rather than adds to the end of an array within a document
2022-10-01T09:45:05.779Z
How to use $push overwrites existing value rather than adds to the end of an array within a document
1,766
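The accepted one-liner, spelled out as a mongosh sketch ($addToSet behaves identically when called from pymongo's update_one):

```javascript
// $addToSet appends the value only if it is not already present in the array
db.collection.updateOne(
  { local_id: 1234 },
  { $addToSet: { global_id: "P678" } }
);
// Running this twice still yields: global_id: [ "P123", "P345", "P678" ]
```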
null
[ "queries" ]
[ { "code": "", "text": "I have a collection called “Corpus”.In that collection, I have three records. The “field” for the data is called, “data”.The _id for the field I want to update is: “62c84c6cecede206139d725e”.This update statement fails and just prints “WriteResult({”db.corpus.update({_id:“62c84c6cecede206139d725e”},{$set:{\n** “data”: “who are you???”}});**I’ve tried switching to single quotes, unquoting the ID, but I can’t get it to update this “field”.What am I doing wrong?Thanks,", "username": "c51a1608bd6dd1713da7ee458e75803" }, { "code": "", "text": "You cannot update _id field directly with update\nIt is a primary key in mongodb\nCheck dba stackexchange forum where they discuss adding new row with your required set value and deleting old row", "username": "Ramachandra_Tummala" } ]
Basic Update Fails
2022-09-30T11:02:34.681Z
Basic Update Fails
1,080
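One likely cause worth checking in this thread (hedged): the stored _id values are probably ObjectIds, while the filter passes a plain string, so the update matches zero documents rather than failing outright. The corrected call would look like:

```javascript
// Wrap the hex string in ObjectId() so the filter type matches the stored _id
db.corpus.update(
  { _id: ObjectId("62c84c6cecede206139d725e") },
  { $set: { data: "who are you???" } }
);
```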
null
[ "queries", "crud" ]
[ { "code": "await realmapp.emailPasswordAuth.registerUser({ email, password });\n\n//Immediately login the new user\n//(to go past 'pending' in realm console)\nconst newUsercredentials = Realm.Credentials.emailPassword(email, password);\nconst loggedInNewUser = await realmapp.logIn(newUsercredentials);\n\nconsole.log('new\nuser just registered : ', loggedInNewUser);\n\nconst newUser = await users.insertOne({ id:\nloggedInNewUser.id });\n\nawait users.updateOne({ id:\nloggedInNewUser.id },\n\n{ $set:\n{\n\"custom_data.active\": true,\n\"custom_data.description\":{level: '',\ncomment: ''\n},\n\"custom_data.alternate_email\": \"\",\n\"custom_data.nickname\": \"\",\n\"custom_data.ownerOf\": [],\n\"custom_data.memberOf\": []\n}\n});\n63366590c70e5740bff597faerror \"invalid session: No user found for session for user id '633664ccc59dec10f25094ef 'error\t\"invalid session: access token expired\"\n\nerror_code \"InvalidSession\"\n", "text": "I’m trying to run functions insertOne, updateOne from web client (so can pass registration variables like nickname and comments), not via a registerUser (create) realm server trigger with this code:In http headers:\nRequest id for the newly created user is correct:63366590c70e5740bff597fabut I get Response:error \"invalid session: No user found for session for user id '633664ccc59dec10f25094ef 'ori.e. response indicates wrong user id or access/refresh token expiredWhy is this happening? What can I do to ensure a valid user session and complete the creation of the necessary user document and associated custom_data without using server side triggers that depend on server side functions? thanks …", "username": "freeross" }, { "code": "sessionStorage.setItem('realm-web:app(<my_app_id>):user(' + loggedInNewUser.id + '):refreshToken:', loggedInNewUser.refreshToken);", "text": "SOLUTION:\nI had to manually assign the refresh token to the session storage with the logged in user’s id:\nsessionStorage.setItem('realm-web:app(<my_app_id>):user(' + loggedInNewUser.id + '):refreshToken:', loggedInNewUser.refreshToken);\nThis is not mentioned in the docs as far as I could see. If there is a better way, please let me know here.", "username": "freeross" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Create a new user with custom_data from the client (Web sdk)
2022-09-30T04:42:32.881Z
Create a new user with custom_data from the client (Web sdk)
1,284
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "HelloI currently have a database on cluster 0 configured that is my Production DataI would like to copy this database and use the copied version as a test database to do some testing with my application.My question is, is it best to create a NEW cluster to do this? Or should I simply just do a mongodump/mongorestore to a new database name.Again, I just want this database to be like a sandbox where I can read/write/edit/delete some collections(sorry in advanced as I am VERY new to mongo)", "username": "Ahmed_Chaarani" }, { "code": "database name", "text": "Hi @Ahmed_Chaarani,My question is, is it best to create a NEW cluster to do this?Ideally, you should keep your production and test database in different database deployments because of the following reasons:If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Test Database Best Practice, new Cluster or New Database on existing Cluster?
2022-09-27T14:37:36.106Z
Test Database Best Practice, new Cluster or New Database on existing Cluster?
1,776
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "let logs = await this.profileModel.aggregate([\n {\n // finish here date\n // finish settlement\n // finish logReport\n $match: {\n bindedSuperAdmin: name,\n // transactionDate: { $gte: startDate, $lt: endDate },\n },\n },\n {\n $lookup: {\n from: 'tpes',\n localField: 'nameUser',\n foreignField: 'merchantName',\n as: 'tpesBySite',\n },\n },\n {\n $lookup: {\n from: 'logs',\n localField: 'tpesBySite.terminalId',\n foreignField: 'terminalId',\n // as: 'logsByTpes',\n pipeline: [\n {\n $match: {\n \n transactionDate: { $gte: startDate, $lte: endDate },\n // transactionDate: { $in: [startDate, endDate] },\n },\n },\n ],\n as: 'logsByTpes',\n },\n },\n\n { $unwind: '$tpesBySite' },\n\n { $unwind: '$logsByTpes' },\n {\n $project: {\n // bindedSuperAdmin: '$bindedSuperAdmin',\n // bindedBanque: '$bindedBanque',\n // bindedClient: '$bindedClient',\n\n snTpe: '$tpesBySite.sn',\n terminalId: '$tpesBySite.terminalId',\n\n transactionDate: '$logsByTpes.transactionDate',\n transactionTime: '$logsByTpes.transactionTime',\n\n outcome: '$logsByTpes.outcome',\n },\n },\n {\n $group: {\n _id: { bank: '$outcome' },\n count: { $sum: 1 },\n },\n },\n ]);\n\n return logs;\n", "text": "I want to get data between two dates if I give date 01-01-2022 and date 09-10-2022 I want to display all data between these two dates I did this example bellow:It matches only the given dates but I need to gt all data between these two dates I really get stuck how can I Fix It Please", "username": "skander_lassoued" }, { "code": "", "text": "It should work.Most likely some of your dates are not Date, otherwise you would NOT have specified your dates asif I give date 01-01-2022 and date 09-10-2022You cannot do $gt and $lt on dates are with this format because it makes the dates unordered.Share some input documents.", "username": "steevej" }, { "code": "db.users.find({\"lastLogin\":{$gte: ISODate(\"2022-08-01T00:00:00.000Z\"), $lte: ISODate(\"2022-08-31T00:00:00.000Z\")}})\n", "text": "This is probably more simple than requested but it will show you how you can find between two dates (example of login dates):", "username": "tapiocaPENGUIN" }, { "code": "", "text": "I do not understand how a simple find() can be the solution to an aggregation that has 2 $lookup stages, some $unwind, a $project and the $group.", "username": "steevej" }, { "code": "", "text": "How to make a lookup with just simple find() ? ", "username": "skander_lassoued" }, { "code": "", "text": "I think that was the point @steevej was trying to make, you can’t. You need to use the aggregation pipeline. The example I posted was an example of finding something between two dates but your aggregation was much more complex with other stages.", "username": "tapiocaPENGUIN" }, { "code": "{\n \"lastLogin\":{\n $gte: ISODate(\"2022-08-01T00:00:00.000Z\"),\n $lte: ISODate(\"2022-08-31T00:00:00.000Z\")\n }\n}\nfind()$matchaggregate()$lookup", "text": "You can take theof the find() query and put that in a $match stage in your aggregate() command and then from there do your $lookup stages.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I match all data between two dates
2022-09-26T10:35:02.162Z
How can I match all data between two dates
7,901
null
[ "backup", "data-recovery" ]
[ { "code": "", "text": "Hello there,\nI had a collection with more than 7k data and because of some uncertain reason my all data is lost now, my collection doesn’t have a single record, and it all happens with my live project.Please suggest to me any idea to retrieve my collection’s data.Thanks in advance.", "username": "Praduman_Tiwari" }, { "code": "", "text": "Hi @Praduman_Tiwari, and welcome to the MongoDB Community forums! I’m afraid if you don’t have a backup of the data, then recovering the data will most likely not be possible. If this is a member of a replica set, you might be able to recover some of the data by using data in the oplog, but that would be a manual process.", "username": "Doug_Duncan" } ]
I want to recover my deleted collection's records
2022-09-30T11:44:12.534Z
I want to recover my deleted collection's records
1,858
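For context on the oplog route Doug mentions, a hedged sketch of inspecting recent insert entries on a replica set member (the namespace is hypothetical, and the oplog is a capped collection, so older entries may already have rolled off):

```javascript
// Insert operations appear as { op: "i", ns: "<db>.<collection>", o: <document> }
db.getSiblingDB("local").oplog.rs
  .find({ op: "i", ns: "mydb.mycollection" })
  .sort({ $natural: -1 })   // newest first
  .limit(5);
```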
null
[ "kafka-connector" ]
[ { "code": "", "text": "HI,\nI am using mongodb kafka connector as source connector. I am not sure how to increase the replication factor of the created topic.\nI tried adding these configs:", "username": "alaa_halawani" }, { "code": "", "text": "Are you trying to set topic replication factor for kafka connect service itself or override the value just for the MongoDB source connector?", "username": "Robert_Walters" }, { "code": "", "text": "Thanks for replying @Robert_Walters.both are ok for me as I am using mongo connector only with kafka connect service.\nI tried passing this “confluent.topic.replication.factor” : “3” as part of mongo connector config and i also tried passing environment variable for kafka connect service and in both ways the replication factor in kafka hasn’t changed.Could you please guide me how to do that either from kafka connect service or MongoDB source connector ?", "username": "alaa_halawani" }, { "code": "", "text": "I’m interested in the answer to this too. I think the op is trying to figure out how to set the replication factor for topics that are auto-created by the mongodb source connector.", "username": "valeeum" } ]
How to modify the replication factor of Kafka topics created by the MongoDB connector
2020-09-28T22:58:09.563Z
How to modify the replication factor of Kafka topics created by the MongoDB connector
3,662
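One avenue worth trying (an untested sketch): Kafka Connect 2.6+ supports per-connector overrides for the topics it auto-creates via the topic.creation.* properties, which belong in the source connector's own config rather than the worker config, and only take effect when the worker has topic creation enabled:

```json
{
  "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
  "topic.creation.default.replication.factor": 3,
  "topic.creation.default.partitions": 3
}
```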
https://www.mongodb.com/…12e8cf1aad7a.png
[ "queries", "replication", "mongodb-shell" ]
[ { "code": "", "text": "Hi everybody! All ok?I have a ReplicaSet PSA version 4.2 and i need get 3 informations about space of all databases, datas bellow:\ndb.stats().dataSize, db.stats().storageSize and db.stats().indexSizeI write one script in mongo shell js file with read listDatabases and execute while to change databases with db.getSiblingDB, but in second loop of while i have this message of error:[js] Error: [] is not a valid database name :Obs.: In first loop execute no mistakes. I execute 2 times (db.getSiblingDB commented and uncommented) and send for you the script files and log files. Please, if you have one idea how do i solve this, I’ll be very grateful.Thanks for your help!COMMENTED:\n\nScript JS with comment724×519 11.4 KB\n\nUNCOMMENTED:\n\nScript JS uncomment726×518 11.3 KB\n\n\nResult uncomment704×214 5.6 KB\n", "username": "Henrique_Souza" }, { "code": "", "text": "Thanks, but i solve this with other getSiblingDB:\nimage720×552 10.9 KB\n\n\nimage370×573 8.93 KB\n", "username": "Henrique_Souza" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Problem with loop script (mongo with js file)
2022-09-30T04:43:22.354Z
Problem with loop script (mongo with js file)
2,013
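Since the working script only survives as screenshots, here is a hedged reconstruction of the loop in shell JavaScript, collecting the three sizes per database:

```javascript
// Iterate every database and print dataSize / storageSize / indexSize
db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
  var s = db.getSiblingDB(d.name).stats();
  print(d.name + " dataSize=" + s.dataSize +
        " storageSize=" + s.storageSize +
        " indexSize=" + s.indexSize);
});
```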
null
[ "python" ]
[ { "code": "sudo python3 scons.py install-all-meta\nChecking required python packages...\nRequirements list:\n Cheetah3<=3.2.6.post1\n PyYAML<=6.0.0,>=3.0.0\n cryptography==2.3; platform_machine == \"s390x\" or platform_machine == \"ppc64le\"\n cryptography==36.0.2; platform_machine != \"s390x\" and platform_machine != \"ppc64le\"\n packaging<=21.3\n psutil<=5.8.0\n pymongo<4.0,>=3.9\n pypiwin32>=223; sys_platform == \"win32\" and python_version > \"3\"\n pywin32>=225; sys_platform == \"win32\" and python_version > \"3\"\n regex<=2021.11.10\n requests<=2.26.0,>=2.0.0\n requirements_parser<=0.3.1\n setuptools\n types-PyYAML~=6.0.5\n typing-extensions>=3.7.4\nResolved to these distributions:\n certifi 2022.6.15\n cffi 1.15.1\n charset-normalizer 2.0.12\n cheetah3 3.2.6.post1\n cryptography 36.0.2\n idna 3.3\n packaging 21.3\n psutil 5.8.0\n pycparser 2.21\n pymongo 3.12.3\n pyparsing 3.0.9\n pyyaml 6.0\n regex 2021.11.10\n requests 2.26.0\n requirements-parser 0.3.1\n setuptools 58.1.0\n types-pyyaml 6.0.11\n types-setuptools 57.4.18\n typing-extensions 4.3.0\n urllib3 1.26.9\n\nscons: *** No SConstruct file found.\nFile \"/home/dorianrosse/programs/mongo/src/third_party/scons-3.1.2/scons-local-3.1.2/SCons/Script/Main.py\", line 940, in _main\n\n", "text": "hello,when i run the line of command below it ask a file who miss in the github mongo :thanks you in advance to help myself fully install the fork mongo,Regards.Dorian ROSSE.", "username": "Dorian_ROSSE" }, { "code": "sudo", "text": "That’s an odd error for sure. May I ask first, why are you running under sudo? Does it work if you don’t do that?", "username": "Andrew_Morrow" } ]
No SConstruct file found
2022-08-08T20:25:10.286Z
No SConstruct file found
2,589
null
[]
[ { "code": "python3 buildscripts/scons.py mongod --disable-warnings-as-errors**g++:** **fatal error:** Killed signal terminated program cc1plus\n\ncompilation terminated.\n\nscons: *** [build/opt/mongo/db/db.o] Error 1\n\nscons: building terminated because of errors.\n\nbuild/opt/mongo/db/db.o failed: Error 1\n", "text": "Hi,As there is no available binary for Debian 10 arm64, I am trying to build 4.2.12 using the instructions on mongo/building.md at v4.2.12 · mongodb/mongo · GitHubWhen I do:\npython3 buildscripts/scons.py mongod --disable-warnings-as-errorsI get the following error:\nCompiling build/opt/mongo/db/commands/find_cmd.oNot sure where to go from here. Would really appreciate a direction on this thanks.", "username": "Hamza_Afridi" }, { "code": "#11 8.128 g++ -o build/opt/mongo/db/s/database_sharding_state.o -c -Woverloaded-virtual -Wno-maybe-uninitialized -fsized-deallocation -std=c++17 -fno-omit-frame-pointer -fno-strict-aliasing -ggdb -pthread -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -O2 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-const-variable -Wno-unused-but-set-variable -Wno-missing-braces -fstack-protector-strong -fno-builtin-memcmp -fPIE -DSAFEINT_USE_INTRINSICS=0 -DPCRE_STATIC -DNDEBUG -D_XOPEN_SOURCE=700 -D_GNU_SOURCE -D_FORTIFY_SOURCE=2 -DBOOST_SYSTEM_NO_DEPRECATED -DBOOST_MATH_NO_LONG_DOUBLE_MATH_FUNCTIONS -DBOOST_ENABLE_ASSERT_DEBUG_HANDLER -DABSL_FORCE_ALIGNED_ACCESS -Isrc/third_party/s2 -Isrc/third_party/SafeInt -Isrc/third_party/pcre-8.42 -Isrc/third_party/fmt/dist/include -Isrc/third_party/boost-1.70.0 -Isrc/third_party/abseil-cpp-master/abseil-cpp -Ibuild/opt -Isrc src/mongo/db/s/database_sharding_state.cpp\n#11 21.95 g++: fatal error: Killed signal terminated program cc1plus\n#11 21.95 compilation terminated.\n#11 21.96 scons: *** [build/opt/mongo/s/query/document_source_update_on_add_shard.o] Error 1\n#11 23.71 scons: building terminated because of errors.\n#11 23.73 build/opt/mongo/s/query/document_source_update_on_add_shard.o failed: Error 1\n", "text": "I get something similar, but at a slightly different stage:Perhaps @andrew_morrow can help, as I found his answers on Add MongoDB 4.2 ARM64 builds for Raspberry Pi OS 64 bit (Debian Buster) - #9 by Andrew_Morrow before finding this page?", "username": "Oliver_Lockwood" }, { "code": "", "text": "I should have said: I’m trying to build 4.2.17, rather than 4.2.12.", "username": "Oliver_Lockwood" }, { "code": "-j", "text": "Hi @Oliver_Lockwood - That looks to me like you are running out of memory and the compiler is getting OOM killed. Either build with lower concurrency (smaller argument to -j), or do a cross compile. The latter would be my recommendation, because you can then do the build on a heftier x86 machine. The post you linked to includes cross build instructions.", "username": "Andrew_Morrow" } ]
Mongodb 4.2.12 build failure on debian arm64
2021-11-28T09:27:45.516Z
Mongodb 4.2.12 build failure on debian arm64
3,321
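Concretely, Andrew's first suggestion is just a smaller -j value; as a rough, hedged rule, each concurrent C++ compile job for these translation units wants 1-2 GB of RAM:

```sh
# Limit scons to two parallel compile jobs to stay within available memory
python3 buildscripts/scons.py -j2 --disable-warnings-as-errors mongod
```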
null
[ "production", "cxx" ]
[ { "code": "", "text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.7.0.This release provides support for new features in MongoDB 5.0.Please note that this version of mongocxx requires MongoDB C Driver 1.22.1 or higher.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.NOTE: The mongocxx 3.7.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team", "username": "eramongodb" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB C++11 Driver 3.7.0 Released
2022-09-30T17:00:00.938Z
MongoDB C++11 Driver 3.7.0 Released
2,038
null
[ "security", "field-encryption" ]
[ { "code": "", "text": "I am studying about “Queryable Encryption” and “Client-Side Field Level Encryption”.\nBut to me, these two look so similar.Queryable Encryption\nQueryable Encryption is a feature of MongoDB that enables a client application to encrypt data before transporting it over the network using fully randomized encryption, while maintaining queryability. Sensitive data is transparently encrypted and decrypted by the client and only communicated to and from the server in encrypted form. The security guarantees for sensitive fields containing both low cardinality (low-frequency) data and high cardinality data are identicalClient-Side Field Level Encryption\nClient-Side Field Level Encryption (CSFLE) is a feature of MongoDB that enables a client application to encrypt data before transporting it over the network. Sensitive data is transparently encrypted and decrypted by the client and only communicated to and from the server in encrypted form.I think the difference between the two is that Queryable Encryption maintains queryable and uses randomized encryption, and Client-Side Field Level Encryption uses a set encryption method(deterministic encryption).Is this correct?And what is Queryable?", "username": "Kim_Hakseon" }, { "code": "", "text": "Hi Kim and thank you for your question.You are correct that in the current Public Preview the difference between the two is how the data is encrypted. To make a field queryable on an exact match in Client-Side Field Level Encryption we use deterministic encryption. Deterministic Encryption is strong encryption for most data but if you have fields that are low cardinality, meaning very few values possible, it can be subject to inference attacks. With Queryable Encryption the data is always encrypted randomly so not subject to those same inference attacks on low cardinality data and you can still run an exact match query. The Public Preview of Queryable Encryption is our first release and only supports exact matches right now but additional querying capability - ranges, prefix, suffix and substring - are planned in the near future.So to recap -I hope that helps and keep an eye out for announcements of those new features.Cynthia", "username": "Cynthia_Braund" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Queryable Encryption & Client-Side Field Level Encryption
2022-08-08T02:06:17.497Z
Queryable Encryption &amp; Client-Side Field Level Encryption
3,061
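For readers who want to see the distinction from the thread above in code, below is a minimal, untested PyMongo sketch of classic CSFLE explicit encryption, contrasting the deterministic algorithm (equality-queryable, but weaker for low-cardinality fields) with the randomized one. The connection string, key vault namespace, and sample value are assumptions, and a demo-only in-memory local master key is used.

```python
# Hedged sketch: explicit CSFLE encryption with PyMongo (requires pymongocrypt).
import os

from bson.binary import STANDARD
from bson.codec_options import CodecOptions
from pymongo import MongoClient
from pymongo.encryption import Algorithm, ClientEncryption

kms_providers = {"local": {"key": os.urandom(96)}}  # demo-only 96-byte master key
client = MongoClient("mongodb://localhost:27017")   # assumed connection string
ce = ClientEncryption(kms_providers, "encryption.__keyVault", client,
                      CodecOptions(uuid_representation=STANDARD))
key_id = ce.create_data_key("local")

# Deterministic: identical plaintexts encrypt identically, so equality queries
# work, but low-cardinality fields are exposed to frequency/inference analysis.
det = ce.encrypt("555-55-5555",
                 Algorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic,
                 key_id=key_id)

# Random: strongest guarantee; under classic CSFLE this field is not queryable.
rnd = ce.encrypt("555-55-5555",
                 Algorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Random,
                 key_id=key_id)
```

Queryable Encryption removes this trade-off by making randomly encrypted fields queryable on exact matches, as described in the reply above.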
null
[ "aggregation", "compass", "transactions", "schema-validation" ]
[ { "code": "", "text": "Using MongoDB shell version v5.0.6 and MongoDB compass Version 1.32.5 (1.32.5) (Im not sure if its relevant for the problem tho) .Im not able to edit Schema validations on any collection in my DB, tried to do that on dbAdmin and readWrite role. what am i doing wrong?Error when on readWrite role:not authorized on eyal to execute command { collMod: “transaction”, validator: { $jsonSchema: { required: [ “clientId”], properties: { clientID: { bsonType: “objectId”, description: “Must be a objectId and is required” }, description: } } } }, validationAction: “error”, validationLevel: “strict”, lsid: { id: UUID(“3e1XCJKS-7cd3-4aa9-aac5-89234hdu3983”) }, $clusterTime: { clusterTime: Timestamp(1664562563, 1), signature: { hash: BinData(0, F13A7FB1E1SDFHJKSDFJH2936B435D5DD6C), keyId: 7137234432545796 } }, $db: “eyal” }When when on DBAdmin role:not authorized on eyal to execute command { aggregate: “transaction”, pipeline: [ { $match: { $jsonSchema: { required: [ “clientId”], properties: { clientID: { bsonType: “objectId”, description: “Must be a objectId and is required” }} } } } }, { $group: { _id: 1, n: { $sum: 1 } } } ], cursor: {}, lsid: { id: UUID(“ajkshd87921-3a82-4a5f-9e04-0af89jgu2940vb”) }, $clusterTime: { clusterTime: Timestamp(1553478349, 1), signature: { hash: BinData(0, 8F91FSD2234SFJG3455KKNA5B403E969E), keyId: 71376458048657765396 } }, $db: “eyal” }Thanks in advanced.", "username": "Eyal_Tamsot" }, { "code": "dbAdminreadWrite", "text": "Hi @Eyal_Tamsot and welcome to the community!!Based on the above informations shared, I tried to the Schema update documentation to reproduce the issue that you are seeing. Unfortunately, I am not observing the issue that you have been seeing.To help out further reproduce the issue in my local environment, could you help with the following details for the issue being seen:Thanks\nAasawari", "username": "Aasawari" } ]
Schema validation is disabled on readWrite and dbAdmin roles
2022-09-28T08:16:27.278Z
Schema validation is disabled on readWrite and dbAdmin roles
2,068
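One plausible reading of the two errors in the thread above: readWrite does not include the collMod action, while dbAdmin includes collMod but not the find action that Compass's validation tab needs for its preview aggregation. A hedged PyMongo sketch granting both roles and updating the validator directly; user names, passwords, and hosts are placeholders.

```python
# Hypothetical sketch: combine readWrite + dbAdmin, then run collMod directly.
from pymongo import MongoClient

admin = MongoClient("mongodb://admin:secret@localhost:27017/?authSource=admin")
admin.eyal.command("createUser", "validatorEditor", pwd="changeme",
                   roles=[{"role": "readWrite", "db": "eyal"},
                          {"role": "dbAdmin", "db": "eyal"}])

editor = MongoClient(
    "mongodb://validatorEditor:changeme@localhost:27017/?authSource=eyal")
editor.eyal.command(
    "collMod", "transaction",
    validator={"$jsonSchema": {
        "required": ["clientId"],
        "properties": {"clientId": {
            "bsonType": "objectId",
            "description": "Must be an objectId and is required"}}}},
    validationAction="error", validationLevel="strict")
```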
null
[]
[ { "code": "", "text": "Hi TeamI want to know how to update the document atomically. My Scenario is, I have a spring application (with 2 instances) polling from mongodb and update the documents. In some situations 2 instances are picking the same document and overwrite the other instance update. I have to prevent this overwrite operation.When I google the solution, I saw findAndModify is doing atomic update. I just want to understand how findAndModify works when 2 instances are trying to update the same document at the same time. Please suggest me if there are any better solution.Thanks\nPrabaharan Kathiresan", "username": "Prabaharan_Kathiresa" }, { "code": "", "text": "Hi @Prabaharan_Kathiresa,See my answer hereIf you still have questions let me knowPavle", "username": "Pavel_Duchovny" }, { "code": "findAndModify", "text": "@Pavel_Duchovny what will happen when service1 & service2 fired up the findAndModify at the same time?Will below process happen?does this happen? or something else?", "username": "Divine_Cutler" }, { "code": "", "text": "@Divine_Cutler,Service one acquire the lock and change the status field so service2 will wait and eventually will not get this specific document as its status has changed. So this read and update is atomic.Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny ,\nI have another usecase, but wanted to know till when does service2 wait before giving exception if service one has not released the lock?Regards,\nPranit", "username": "Pranit_Chugh" }, { "code": "", "text": "I presume the write timeout that you specify…", "username": "Pavel_Duchovny" } ]
MongoDB atomic update on document
2020-09-07T14:56:00.478Z
MongoDB atomic update on document
4,033
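To make the claim pattern discussed above concrete, here is a small, untested PyMongo sketch; the collection name, status values, and worker id are illustrative assumptions. Because the filter and the update execute as one atomic operation on the matched document, only one of two racing instances can flip the status.

```python
# Minimal sketch of the "claim" pattern with find_one_and_update.
from datetime import datetime, timezone

from pymongo import MongoClient, ReturnDocument

coll = MongoClient("mongodb://localhost:27017").queue.jobs  # assumed namespace

job = coll.find_one_and_update(
    {"status": "PENDING"},                       # only unclaimed documents match
    {"$set": {"status": "PROCESSING",
              "claimedBy": "worker-1",           # hypothetical worker id
              "claimedAt": datetime.now(timezone.utc)}},
    return_document=ReturnDocument.AFTER)

if job is None:
    pass  # another instance claimed it first; poll again later
```

Note that findAndModify does not make the second caller block: the losing instance simply no longer matches the filter and receives a different document (or nothing).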
null
[ "java", "compass" ]
[ { "code": " <dependencies>\n <dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongodb-driver-sync</artifactId>\n <version>4.6.0</version>\n </dependency>\n\n <dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongodb-jdbc</artifactId>\n <version>2.0.0</version>\n </dependency>\n </dependencies>\n try\n {\n Class.forName(\"com.mongodb.jdbc.MongoDriver\");\n \n Connection connection = DriverManager.getConnection(\n \"mongodb://corpus:passpass@localhost:27017/?authMechanism=DEFAULT&authSource=corpus_test\");\n\n.\n.\n.\n\n", "text": "I’m trying to use a simple Java program to connect to a MongoDB hosted on my Mac accessible by Compass using the authentication user and password I set up.However, in the JDBC code, I get the error 18 (authentication failure). I have no idea why. Same connection string as in Compass.My two dependencies in pom.xml are:The main method is utterly simple but failsI’ve tried many variations on the connection string. None have worked.Here is the error I cannot get past: “Command failed with error 18 (AuthenticationFailed): ‘Authentication failed.’ on server localhost:27017. The full response is {“ok”: 0.0, “errmsg”: “Authentication failed.”, “code”: 18, “codeName”: “AuthenticationFailed”}”Would appreciate any suggestions on how to tackle this issue.Thanks", "username": "c51a1608bd6dd1713da7ee458e75803" }, { "code": "", "text": "Most likely your authSource is wrong.Try to simply remove it and the default admin will be used.It it stiĺl does not work share the user data.", "username": "steevej" }, { "code": " Class.forName(\"com.mongodb.jdbc.MongoDriver\");\n\n String url = \"jdbc:mongodb://localhost:27017\";\n Properties properties = new Properties();\n properties.put(\"database\", \"corpus_test\");\n properties.put(\"user\", \"corpus\");\n properties.put(\"password\", \"the_password\");\n\n Connection connection = DriverManager.getConnection(url, properties);\n\nCaused by: com.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): 'Authentication failed.' on server localhost:27017. The full response is {\"ok\": 0.0, \"errmsg\": \"Authentication failed.\", \"code\": 18, \"codeName\": \"AuthenticationFailed\"}\n\tat com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:302)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:258)\n\tat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)\n\tat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)\n\tat com.mongodb.internal.connection.SaslAuthenticator.sendSaslStart(SaslAuthenticator.java:158)\n\tat com.mongodb.internal.connection.SaslAuthenticator.access$100(SaslAuthenticator.java:40)\n\tat com.mongodb.internal.connection.SaslAuthenticator$1.run(SaslAuthenticator.java:54)\n\t... 30 more\n", "text": "I tried to remove that auth string, but it doesn’t make any difference.Is JDBC a good way to work with MongoDB? I’m really having trouble using the standard Mongo API and even getting a basic JDBC example to work. Also, can’t find any books that contain JDBC for Mongo code. Stuff online is either out of date or just snippets with no complete examples. 
Difficult to get started…Here’s my revised connection code:", "username": "c51a1608bd6dd1713da7ee458e75803" }, { "code": "", "text": "I do not use JDBC as I avoid abstraction layers.But Authentication Error means wrong user name, wrong password or wrong authentication database.It it stiĺl does not work share the user data.", "username": "steevej" }, { "code": "", "text": "The problem could be that there is no user created in the corpus_test database.In the admin database, I see this:\ndbdata1176×1116 154 KB\nDoes this help?Thanks,", "username": "c51a1608bd6dd1713da7ee458e75803" }, { "code": "", "text": "See my posting (sorry, was posting when you wrote your reply) I added. I can connect, using Compass using these credentials.\nCompas Auth Works1502×1298 152 KB\nWhen I click “Connect”, it connects with no problems.(The actual password is “passpass” for testing.)Does this help?Thanks,", "username": "c51a1608bd6dd1713da7ee458e75803" }, { "code": "", "text": "The only thing I can say is that your JDBC abstraction layer is your obstruction layer because it does not generates the appropriate URI using the properties you set.I would first create my user in the admin database rather than corpus_test. Hopefully JDBC can use the default authentication database correctly. I suspect that the property database is not to specify the authSource. May be there is another one for that purpose.Hopefully, the driver uses user and password correctly, but you could put them yourself in the URL and not setting the properties.", "username": "steevej" }, { "code": " Connection connection = DriverManager.getConnection(\n \"jdbc:mongodb://localhost:27017/corpus_test\");\n", "text": "If you check the screenshot I posted above, you see that I did, in fact, create the user in the corpus_test database. Other than that, I have no idea what to try. I’ve used every variation on the JDBC string I can think of.If I try something simple like:I get:java.sql.SQLException: There are issues with your connection settings : Mandatory property ‘database’ is missing.But, adding the “database” as a property to the other JDBC connection attempt didn’t work either.Thanks,", "username": "c51a1608bd6dd1713da7ee458e75803" }, { "code": "", "text": " → Oops, sorry I meant the user was created in the admin database.See screenshot above.", "username": "c51a1608bd6dd1713da7ee458e75803" }, { "code": "", "text": "the user was created in the admin database.Apparently not, the evidences say otherwise, as the _id is corpus_test.corpus and you can conncect in Compass with authSource corpus_test.When publishing code or documents please do so in text rather than screenshot. It helps replying as we can cut-n-paste rather than typing.", "username": "steevej" }, { "code": "use admin\n'switched to db admin'\ndb.system.users.find()\n{ _id: 'corpus_test.corpus',\n userId: UUID(\"3732b335-8734-4c15-b2f8-db2b2f0abb41\"),\n user: 'corpus',\n **db: 'corpus_test',**\n credentials:\n { 'SCRAM-SHA-1':\n { iterationCount: 10000,\n salt: '9yIjqaRMSG3umZbPXkvCEw==',\n storedKey: 'meLH8bvOfPXA1LjcjCEX4Lt/Ecw=',\n serverKey: 'k9dBvlJYm24BzAGX8s4Ugk30fwM=' },\n 'SCRAM-SHA-256':\n { iterationCount: 15000,\n salt: 'vWw3BFDN88idZqGnnZYNDLFq4Y03Y1GUe4hU+A==',\n storedKey: 'b1HixboG/j34sPbvl5+mfFHsQofCHM1vH/XfQZ1XXrM=',\n serverKey: 'J1dRZvofxEzX81Y1SlfI8eQQcjKrniDvSwiQ2uUfrjw=' } },\n roles: [ { role: 'readWrite', db: 'corpus_test' } ] }\n", "text": "OK, you’ve lost me. Here’s the text version of the screenshot you requested. 
I tried to create the user in the corpus_test database, but it shows up here:If I go to the corpus_test database and try to create the user there, I get an incomprehensible message:‘switched to db corpus_test’\ndb.createUser(\n{\nuser: “corpus”,\npwd: “passpass”,\nroles: [ { role: “readWrite”, db: “corpus_test” }]}Error: clone(t={}){const r=t.loc||{};return e({loc:new Position(\"line\"in r?r.line:this.loc.line,\"column\"in r?r.column:……)} could not be cloned.\nat Object.serialize (v8.js:256:7)\nat u (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:593159)\nat postMessage (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:593767)\nat i (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:598575)", "username": "c51a1608bd6dd1713da7ee458e75803" }, { "code": "", "text": "Hi @c51a1608bd6dd1713da7ee458e75803First of all, I’m not an expert in Java nor JDBC but I’ll try to answer your question. Is this the JDBC driver you’re using? GitHub - mongodb/mongo-jdbc-driver: JDBC Driver for MongoDB Atlas SQL interfaceIf yes, I think that driver is meant for connecting to a specific setup in Atlas and not to locally deployed instances, as mentioned in the Readme file there:The MongoDB Atlas SQL JDBC Driver provides SQL connectivity to MongoDB Atlas for client applications developed in Java.\nSee the Atlas SQL Documentation for more information.This is the diagram taken from the linked page https://www.mongodb.com/docs/atlas/data-federation/query/query-with-sql/\nimage1710×536 70.6 KB\nSo in the diagram, there’s Atlas → Atlas SQL Interface → JDBC driverIn my limited knowledge, JDBC is a method for Java to connect to SQL/tabular databases. MongoDB is definitely not an SQL database. If you’re looking to connect to MongoDB from Java, I think you’re looking for the MongoBD Java driver, along with the free MongoDB University course M220J MongoDB for Java DevelopersBest regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "create the user in the corpus_test databaseYou did indeed created it in the corpus_test. This we know for sure as you could connect with Compass using corpus_test as authenticatin source. It is possible thzt JDBC driver cannot use an authenticaton source other than admin. So you should notgo to the corpus_test database and try to create the user therebecause you already created there and it does not work with JDBC. You have to create it in the admin database.", "username": "steevej" }, { "code": "Error: clone(t={}){const r=t.loc||{};return e({loc:new Position(\"line\"in r?r.line:this.loc.line,\"column\"in r?r.column:……)} could not be cloned.\nat Object.serialize (v8.js:256:7)\nat u (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:593159)\nat postMessage (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:593767)\nat i (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:598575)\n", "text": "This is very confusing. 
I did create the login in the admin database, but that login (user) refers to the corpus_test database. Trying to create the login in the corpus_test database throws that error I posted above:", "username": "c51a1608bd6dd1713da7ee458e75803" }, { "code": "", "text": "What is confusing is that I wrote many times that the user has NOT been created in the admin database:Apparently not, the evidences say otherwise, as the _id is corpus_test.corpus and you can conncect in Compass with authSource corpus_testYou did indeed created it in the corpus_test.I also wrote:You have to create it in the admin database.but you try to create it again in corpus_test.", "username": "steevej" }, { "code": "", "text": "Doing a “db.system.users.find()” in the corpus_test database shows no users. Nothing displays.If I try to create a user there, I get the errors I posted above.If I go to the admin database, db.system.users.find() shows the corpus_test user.If that is not correct, what database and what create user command should I use?Thanks,", "username": "c51a1608bd6dd1713da7ee458e75803" }, { "code": "\"mongodb://corpus:passpass@localhost:27000/corpus_test?authMechanism=DEFAULT&authSource=corpus_test\"corpus_testtest\"mongodb://corpus:passpass@localhost:27000/corpus_test?authMechanism=DEFAULT&authSource=admin\"corpusadminsecurity.authorization: \"enabled\"/\"disabled\"admincorpusadmincorpus_testdb.dropUser(\"corpus\")", "text": "if these two does not work,", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "I’m able to log in with no problems in Compass using authentication. Thanks.", "username": "c51a1608bd6dd1713da7ee458e75803" }, { "code": "mongsh/corpus_testcorpus_test", "text": "my bad … forgot the mongsh part while copy-pasting. remove it and use the rest of the string as your URL. I will edit my above post. the point is to have /corpus_test after the host name.the rest of those I suggested are because of the error you get for trying to add user to corpus_test db.", "username": "Yilmaz_Durmaz" }, { "code": "import java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.SQLException;\nimport java.sql.Statement;\npublic class SimpleTest {\n private static Connection con;\n private static final String urlWithAuth = \"jdbc:mongodb://corpus:passpass@localhost:27017/corpus_test?authSource=corpus_test\";\n public static void main(String args[]) throws ClassNotFoundException, SQLException {\n Class.forName(\"com.wisecoders.dbschema.mongodb.JdbcDriver\");\n Connection connection = DriverManager.getConnection( urlWithAuth, null, null);\n Statement stmt=connection.createStatement();\n //stmt.execute(\"use corpus_test;\"); // use this or embed after hostname \"localhost:27017/corpus_test\"\n System.out.println( \"\\n\" );\n stmt.executeQuery(\"corpus_test.listCollectionNames()\");\n stmt.close();\n }\n}\nSimpleTest.javaMongoDbJdbcDriverjavac SimpleTest.javajava -cp MongoDbJdbcDriver/mongojdbc4.1.jar;. SimpleTestadmincorpus_teststmt.execute(\"use corpus_test;\");DbSchema", "text": "TL;DR : you need to hard check your URL or give us which driver and version you use, where you downloaded it from, how you use it. 
because I tried this and had absolutely no problem connecting once the credentials had no typos.I have this working file I extracted and adapted from one of the test files in the source of wise-coders/mongodb-jdbc-driver: MongoDB JDBC Driver | DbSchema MongoDB Designer (github.com)After testing many wrong credentials, as soon as I had them correct there was only one error (of type 13), for trying to use admin without permission, and it was fixed as easily as adding corpus_test after the host:port portion, or running stmt.execute(\"use corpus_test;\"); before any further queries.I have also tried DbSchema and got an auth error a few times, but it fixed itself somehow after a few connection attempts. Maybe the driver it downloaded was not yet fully downloaded, or the connection pooling failed due to my low connection limit of 10; I don't know. But in the end there was not a single leftover connection producing the error you experienced.So please be clearer and more explicit this time. Examine your URL carefully, and if that still doesn't solve it, give us more: which driver and version it is, where you got it, how you use it, how you import the driver, and how you compile your source.", "username": "Yilmaz_Durmaz" } ]
Cannot connect to MongoDB on localhost using JDBC
2022-07-09T14:14:48.117Z
Cannot connect to MongoDB on localhost using JDBC
18,936
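The recurring lesson in the thread above is that authSource must name the database where the user was created. A short sketch of the same check in PyMongo (the rule is identical for any driver, JDBC wrappers included); the host, credentials, and database mirror the thread's examples.

```python
# The user "corpus" was created in corpus_test, so authenticate against it.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://corpus:passpass@localhost:27017/corpus_test"
    "?authSource=corpus_test")

# A successful listing confirms both authentication and authorization.
print(client.corpus_test.list_collection_names())
```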
null
[ "sharding", "security" ]
[ { "code": "", "text": "From IT Security I got the CIS (Center of Internet Security) benchmark: CIS_MongoDB_5_Benchmark_v1.0.0_PDF.pdf (CIS MongoDB Benchmarks) with the question whether MongoDB Atlas meets this Benchmark. Unfortunately there exists no dedicated MongoDB Atlas Benchmark and therefore while I could answer most questions with yes there still exist some open questions that require access to the underlying server configuration and maybe someone in this community can answer these questions:2.1 Ensure Authentication is configured\ncat /etc/mongod.conf | grep “authorization”\nThe value for authorization must be set to enabled.2.2 Ensure that MongoDB does not bypass authentication via the localhost exception\ncat /etc/mongod.conf |grep “enableLocalhostAuthBypass”\nThe value for enableLocalhostAuthBypass must be false.3.3 Ensure that MongoDB is run using a non-privileged, dedicated service account\nRun the following command to get listing of all mongo instances, the PID number, and the PID owner.\nps -ef | grep -E “mongos|mongod”4.4 Ensure Federal Information Processing Standard (FIPS) is enabled\nOn Ubuntu: To verify that the server uses FIPS Mode (net.tls.FIPSMode value set to true), run following commands:\nmongod --config /etc/mongod.conf\nnet: tls: FIPSMode: true\nOr To verify FIPS mode is running, check the server log file for a message that FIPS is active:\nFIPS 140-2 mode activated5.1 Ensure that system activity is audited\nTo verify that system activity is being audited for MongoDB, run the following command to confirm the auditLog.destination value is set correctly: On Ubuntu:\ncat /etc/mongod.conf |grep –A4 “auditLog” | grep “destination”5.3 Ensure that logging captures as much information as possible\nTo verify that the SystemLog: quiet=false option is disabled (value of false), run the following command: On Ubuntu:\ncat /etc/mongod.conf |grep “quiet”5.4 Ensure that new entries are appended to the end of the log file\nTo verify that new log entries will be appended to the end of the log file after a restart (systemLog: logAppend: true value set to true), run the following command: On Ubuntu:\ncat /etc/mongod.conf | grep “logAppend”6.1 Ensure that MongoDB uses a non-default port\nTo verify the port number used by MongoDB, execute the following command and ensure that the port number is not 27017: On Ubuntu:\ncat /etc/mongod.conf |grep “port”6.2 Ensure that operating system resource limits are set for MongoDB\nTo verify the resource limits set for MongoDB, run the following commands. Extract the process ID for MongoDB:\nps -ef | grep mongod7.1 Ensure appropriate key file permissions are set\nFind the location of certificate/keyfile using the following commands: On Ubuntu:\ncat /etc/mongod.conf | grep “keyFile:” cat /etc/mongod.conf | grep “PEMKeyFile:” cat /etc/mongod.conf | grep “CAFile:”7.2 Ensure appropriate database file permissions are set.\nFind out the database location using the following command: On Ubuntu:\ncat /etc/mongod.conf |grep “dbpath” or cat /etc/mongod.conf | grep “dbPath”", "username": "Raoul_Becke1" }, { "code": "", "text": "Welcome to the MongoDB community @Raoul_Becke1 !MongoDB Atlas follows best practices in security with preconfigured security features for authentication, authorization, encryption, and more. Atlas encrypts all cluster storage and snapshot volumes at rest by default, and dedicated instances have additional options like Encryption at Rest using Customer Key Management. 
There is also an independent MongoDB Atlas for Government environment which is FedRAMP ready.Please see MongoDB Atlas Security and the MongoDB Trust Center for more details on security and privacy compliance for MongoDB Cloud Services.Per the Trust Center:MongoDB Atlas undergoes independent verification of platform security, privacy, and compliance controls. Our strong and growing focus on standards conformance and compliance will help you meet your regulatory and policy objectives.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X, Thank you for your feedback.\nBut unfortunately I could not find answers to my questions above (respective the ones from the CIS Benchmark). The only indirect answer I can interprete is that regarding “4.4 Ensure Federal Information Processing Standard (FIPS) is enabled” this is not enabled on MongoDB Atlas in general but only for Government environments. Is there a way to access the MongoDB Atlas Cluster and verify the server configuration regarding the questions above?", "username": "Raoul_Becke1" }, { "code": "mongod", "text": "Hi @Raoul_Becke1,MongoDB Atlas meets higher standards of security than the CIS benchmark, including regular independent third party verification for compliance with multiple international security and privacy standards.The MongoDB Atlas Security link shared in my earlier reply also includes a white paper on MongoDB Atlas Security Controls which goes into publicly available security details.Quoting an earlier discussion on FIPS:FIPS mode cannot be enabled in Atlas directly. However, MongoDB Atlas is FIPS-compatible and supports connections from applications using FIPS 140-2 validated cryptographic modules. The Atlas service offers NIST-approved FIPS 140-2 encryption modes in network transport, API access, and key generation & delegation. Data at rest encryption at the storage volume level uses FIPS validated hardware security module and software components. Web service endpoints for Atlas are compatible with applications configured to communicate in FIPS mode over TLS .FIPS mode is only supported by MongoDB Enterprise Advanced server, which all Atlas clusters run.I believe your other points are covered in the documentation and white paper details, but most of those are at a lower level than end users have access to. Atlas is a fully managed data service so end users do not have direct access to make changes to the MongoDB server configuration file or the backing instances for an Atlas cluster. All common features can be managed via the Atlas UI/API. You can also discuss special requests for a dedicated cluster with the Atlas support team.Referencing your other points (please see the resource links and white paper for details):The MongoDB Atlas for Government service I mentioned is specifically designed for US government needs, and verified via FedRamp (Federal Risk and Authorization Management Program). 
I included this for completeness, but if your questions do not relate to a US government entity, this service is not applicable.If this information does not cover your concerns, I suggest contacting the MongoDB sales team to discuss your security and compliance requirements in more detail.Regards,\nStennie", "username": "Stennie_X" }, { "code": "SCRAM-SHA-256SCRAM-SHA-1", "text": "Hi @Stennie_X ,Thank you for your feedback.Regarding: MongoDB Atlas meets higher standards of security than the CIS benchmark, including regular independent third party verification for compliance with multiple international security and privacy standards .\nHow do you know/compare (without knowing the configuration details)? Maybe one could argument that the underlying server configuration is same for all Atlas Clusters and is believed to be secure because it has been hardened and penetration tested in different standards and certifications by independent third-parties …Yes I’ve read the white paper (and most of the other atlas documentation) some time ago and I’ve seen the discussion on FIPS. (And yes maybe I need to contact Sales and/or Support to get exact answers respective configuration details.)FIPS: I don’t understand why FIPS cannot be enabled respective what “FIPS Compatible” means. The only thing I found is “https://www.mongodb.com/docs/manual/tutorial/configure-fips/”: \" Starting in MongoDB 5.1, instances running in FIPS mode have the SCRAM-SHA-1 authentication mechanism disabled by default.\" VS “BadValue: SCRAM-SHA-256 authentication is disabled”: \" Currently, Atlas does not support SCRAM-SHA-256 , but does support SCRAM-SHA-1 .\" …Trying to match my questions to your answers:\n2.1 Ensure Authentication is configured\n2.2 Ensure that MongoDB does not bypass authentication via the localhost exception\n•\t2.1, 2.2: Authentication, TLS, and IP Access Lists are always enabled\nI guess 2.1 is true based on the security white paper “For the MongoDB Atlas Cluster, authentication is automatically enabled by default via SCRAM to help ensure a secure system out of the box.” And I’ve seen no switch to disable authentication.\nRegarding 2.2 I’m not sure when support accesses the server via SSH whether it can bypass authentication via the localhost exception3.3 Ensure that MongoDB is run using a non-privileged, dedicated service account\n•\t3.3: Each cluster is deployed within a VPC configuration that allows no inbound access by details.\nThis does not answer the question …5.1 Ensure that system activity is audited\n•\t5.1: System activity is audited and there are further options for database auditing\nI guess you’ve checked in the configuration and/or in the documentation – can you maybe add the reference?5.4 Ensure that new entries are appended to the end of the log file\n•\t5.4: All logs (including infrastructure, UI, mongod, …) have documented log retention policies\nDo you know the log retention? I guess this indirectly answers the question i.e. you could not guarantee log retention if append to the end of log file would be set to false …Thank you and kind regards\nRaoul", "username": "Raoul_Becke1" } ]
Does MongoDB Atlas meet the CIS (Center of Internet Security) Benchmark?
2022-09-23T12:40:24.587Z
Does MongoDB Atlas meet the CIS (Center of Internet Security) Benchmark?
4,032
null
[ "dot-net", "crud", "time-series" ]
[ { "code": "", "text": "How can I create a timeseries collection in c# client?After inspecting resul in MongoDb Compass this code does not seem to actually create a timeseries collection but only a simple collection:await eveDb.CreateCollectionAsync(\n“datapoints”,\nnew CreateCollectionOptions { TimeSeriesOptions = new TimeSeriesOptions(“timestamp”) }).ConfigureAwait(false);Moreover I fail to insert data when I create a time series in MongoDb Compass beforehand:\nongoDB.Driver.MongoBulkWriteException`1[InforsHT.Genesis.Core.Domain.Experiments.Entities.DataPoint] : A bulk write operation resulted in one or more errors. WriteErrors: [ { Category : “Uncategorized”, Code : 2, Message : “‘timestamp’ must be present and contain a valid BSON UTC datetime value” } ].My C# class contains an element:\npublic BsonDateTime Timestamp { get; set; }so I don’t get the problem…", "username": "Kay_Zander" }, { "code": "", "text": "just double checked - it is not a time series collection when creating using c# driver:\n{{ “name” : “datapoints”, “type” : “collection”, “options” : { }, “info” : { “readOnly” : false, “uuid” : CSUUID(“3cfb7158-9a28-4245-a45d-f37f7f34f133”) }, “idIndex” : { “v” : 2, “key” : { “_id” : 1 }, “name” : “id” } }}\napparently C# driver does cannot create timeseries? at least there is no documentation at all.\nterm “timeseries” does not even appear in documentation.", "username": "Kay_Zander" }, { "code": " if(!CollectionExists(\"datapoints\"))\n _database.CreateCollection(\"datapoints\",\n new CreateCollectionOptions { TimeSeriesOptions = new TimeSeriesOptions(\"timestamp\") });\n _TrendReadingsCollection2 = _database.GetCollection<ExperimentTrendReadings>(\"datapoints\");", "text": "This is what I do, and it works.", "username": "Aleksandr_Kratshteyn" }, { "code": "", "text": "Thank you for your comment. I’ve tried it and I like it.You could add an example of inserting and another of obtaining time series.Thank you very much!!", "username": "Neboki_Domains_DataBase" } ]
How to create a timeseries collection (C# driver)
2021-08-02T15:47:57.588Z
How to create a timeseries collection (C# driver)
7,542
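For comparison, a hedged PyMongo sketch of the same two steps; the database name and measurement field are assumptions. One likely cause of the insert error in the thread (a guess, not confirmed by the poster) is a field-name mismatch: a C# property named Timestamp serializes as "Timestamp" by default, while the collection was created with timeField "timestamp".

```python
# Create a time series collection once, then insert datetime-valued documents.
from datetime import datetime, timezone

from pymongo import MongoClient
from pymongo.errors import CollectionInvalid

db = MongoClient("mongodb://localhost:27017").eve  # assumed database name

try:
    db.create_collection("datapoints", timeseries={"timeField": "timestamp"})
except CollectionInvalid:
    pass  # the collection already exists

# The timeField must be a real datetime, and its name must match exactly.
db.datapoints.insert_one({"timestamp": datetime.now(timezone.utc),
                          "value": 42.0})
```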
null
[ "aggregation", "views" ]
[ { "code": "", "text": "We have a requirement where we want to use an “aggregation pipeline” which will include operations over 2 or 3 “source collections” and at the final stage of the pipeline the results has to be added to a “Target Collection”.\nNow, the data in the source collections can change on a daily basis and therefore we would want the data inside “Target Collection” also to be Refreshed periodically. On checking the article On-Demand Materialized Views — MongoDB Manual it is mentioned that we can use a function to trigger the pipeline again, however there is no info on how this refresh can be scheduled. So we have the following questions :MongoDB version : 4.2.17\nHosted on: AWS EC2\nConfig: Standalone 3 node replica setThanks in advance.", "username": "Rai_Deepak" }, { "code": "", "text": "Can anyone provide their guidance.", "username": "Rai_Deepak" }, { "code": "mongod", "text": "how we can schedule the refresh of Target collection on a periodic basis using standard MongoDB instanceYou could write a script and then invoke it with a cron job (assuming Linux OS).If some JSON records insert/updates to the Target Collection are rejected during Aggregation Pipeline run then how can we get a summary of this information at end of run?Depends what you mean by “rejected” - if there is an error when inserting or updating then the aggregation will error out with an error message. If the aggregation does not error then all the documents were processed according to the pipeline directive - unfortunately we don’t make the summary of how many documents that was available except potentially in mongod logs.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "You could write a script and then invoke it with a cron job (assuming Linux OS).We tried out this option however due to organization policies we do not have permissions to schedule a cron job on the server. So is there any alternative to cron jobs? The MongoDB instance is hosted on AWS EC2 so is there any AWS service that might help us in scheduling that you are aware of? Thanks in advance.", "username": "Rai_Deepak" }, { "code": "mongod", "text": "The cron job does not have to run on the server that mongod is running on. You can literally run it on any server, in the cloud or on prem. It just needs to be able to connect to the cluster.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Unfortunately we do not have permission to create cron jobs on any server. Are there any other ways for this?", "username": "Rai_Deepak" }, { "code": "Hosted on: AWS EC2", "text": "Have you looked at Atlas Triggers?I just realized you already said you’re Hosted on: AWS EC2 - maybe there is an equivalent to Atlas Triggers in AWS…", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Schedule Aggregation pipeline MV Refresh
2022-09-13T07:38:08.184Z
Schedule Aggregation pipeline MV Refresh
2,924
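A minimal, untested sketch of such a refresh job in Python: the pipeline joins a second source collection and upserts the results into the target with $merge, so any external scheduler that can reach the cluster (cron on another host, a Kubernetes CronJob, Airflow, and so on) can run it. All collection and field names are assumptions.

```python
# Hedged sketch: refresh an on-demand materialized view with $merge.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").reporting  # assumed names

db.source_a.aggregate([
    {"$lookup": {"from": "source_b", "localField": "key",
                 "foreignField": "key", "as": "joined"}},
    # Upsert into the target: replace matched documents, insert new ones.
    {"$merge": {"into": "target_collection",
                "on": "_id",
                "whenMatched": "replace",
                "whenNotMatched": "insert"}},
])
```

Consistent with the reply above, if a document cannot be merged (for example a duplicate key on the "on" fields) the aggregation stops with an error; there is no per-document rejection summary, so inputs should be validated upstream if that matters.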
null
[]
[ { "code": "[{\n \"userId\": \"494df021-0a37-444c-8e1f-92bb4a35574e\",\n \"createdAt\": \"2022-08-06T06:26:27.926Z\"\n}]\n[{\n \"userId\": \"494df021-0a37-444c-8e1f-92bb4a35574e\",\n \"openedAt\": \"2022-08-06T06:26:27.926Z\"\n},\n{\n \"userId\": \"494df021-0a37-444c-8e1f-92bb4a35574e\",\n \"openedAt\": \"2022-09-06T06:26:27.926Z\"\n}]\n", "text": "I have 2 collections,I want to show Rolling 30 day growth rate in mongo chart, in other words\nNumber of users signed up during last T-30 days / Number of active users T-30 days agowhere my X axis will be date and Y axis will be Number of users signed up during last T-30 days / Number of active users T-30 days ago ,How can i achieve this? I cant think of a way to do this.", "username": "MAHENDRA_HEGDE" }, { "code": "", "text": "Number of active users T-30 days agoHow do you count this? So as of date X how do you count how many active users there are? Or is this just the number of unique users who had some activity during the previous 30 days?Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Hello,Yes, it is just the number of unique users who had some activity during the previous 30 days.", "username": "MAHENDRA_HEGDE" }, { "code": "$unionWithsignupsvar start_date = ISODate(\"2022-08-01T06:26:27.926Z\");\ndb.signups.aggregate([\n{$set:{num:1}}, /* count up each sign-up this many times */\n{$unionWith:{ /* generate a document for each date over the last \n 30 days with 0 signups to ensure no dates are missed */\n pipeline:[\n {$documents:{$map:{\n input:{$range:[0,30]},\n in:{ createdAt:{$dateAdd:{\n unit:\"day\", \n amount:\"$$this\",\n startDate: start_date\n }}}\n }}}\n]}},\n{$group:{\n _id:{ $dateTrunc:{date:\"$createdAt\",unit:\"day\"}},\n count:{$sum:\"$num\"}\n}},\n{$setWindowFields:{ /* count cumulative signups over last 30 days \"window\" */\n sortBy:{_id:1}, \n output:{signups:{\n $sum:\"$count\", \n window: {range:[-30,\"current\"], \n unit:\"day\"\n}}}}}, \n{$lookup:{\n from:\"active_history\", \n let:{date:\"$_id\",date30:{$dateSubtract:{startDate:\"$_id\",amount:30, unit:\"day\"}}},\n as:\"activity\", \n pipeline:[ \n {$match:{$expr:{$and:[{$gte:[\"$openedAt\",\"$$date30\"]},{$lte:[\"$openedAt\",\"$$date\"]}]}}},\n {$group:{_id:\"$userId\"}},\n {$count:\"activeUsers\"}]}},\n{$unwind:\"$activity\"})\n{ \"_id\" : ISODate(\"2022-08-02T00:00:00Z\"), \"count\" : 0, \"signups\" : 0, \"activity\" : { \"activeUsers\" : 1 } }\n{ \"_id\" : ISODate(\"2022-08-03T00:00:00Z\"), \"count\" : 0, \"signups\" : 0, \"activity\" : { \"activeUsers\" : 1 } }\n{ \"_id\" : ISODate(\"2022-08-04T00:00:00Z\"), \"count\" : 0, \"signups\" : 0, \"activity\" : { \"activeUsers\" : 1 } }\n{ \"_id\" : ISODate(\"2022-08-05T00:00:00Z\"), \"count\" : 0, \"signups\" : 0, \"activity\" : { \"activeUsers\" : 1 } }\n{ \"_id\" : ISODate(\"2022-08-06T00:00:00Z\"), \"count\" : 2, \"signups\" : 2, \"activity\" : { \"activeUsers\" : 1 } }\n{ \"_id\" : ISODate(\"2022-08-07T00:00:00Z\"), \"count\" : 1, \"signups\" : 3, \"activity\" : { \"activeUsers\" : 3 } }\n{ \"_id\" : ISODate(\"2022-08-08T00:00:00Z\"), \"count\" : 2, \"signups\" : 5, \"activity\" : { \"activeUsers\" : 3 } }\n{ \"_id\" : ISODate(\"2022-08-09T00:00:00Z\"), \"count\" : 3, \"signups\" : 8, \"activity\" : { \"activeUsers\" : 3 } }\nISODate", "text": "Ok, since I couldn’t assume that there would be some sign-ups every day, I put a $unionWith towards the front to make sure every date over the last 30 was represented, combining it with signupsWith a small amount of sample data the output would look something 
like:So for each day you have number of users who signed up that day (count), cumulative over previous 30 days (signups), and unique users who had activity over previous 30 days. Should be easy to transform that to whatever format you want.Asya\nP.S. I assumed all dates would be ISODate types.", "username": "Asya_Kamsky" }, { "code": "$unionWith pipeline:[\n {$documents:{$map:{\n", "text": "Thanks for the answer suites for my use case .However I’m getting$documents’ is not allowed in user requestsError at $unionWith pipeline stage, could not find any reference to the error in the docs.PS: I’m trying this in compas shell.", "username": "MAHENDRA_HEGDE" }, { "code": "$documents$map$range$unwind", "text": "Ah, $documents is new in version 6.0 - are you using an older version? The same thing can be done without 6.0 by generating an array of dates with same $map+$range and then an $unwind…", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Chart from different collections
2022-09-22T14:16:29.585Z
Chart from different collections
2,597
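For anyone running the accepted pipeline from application code rather than the shell, here is a hedged PyMongo sketch of its windowed-count core (MongoDB 5.0+ for $dateTrunc and $setWindowFields); the database name is an assumption, and the gap-filling $unionWith stage from the answer is omitted for brevity.

```python
# Rolling 30-day signup count per day, following the thread's field names.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").analytics  # assumed database

rolling = list(db.signups.aggregate([
    # Daily signup totals.
    {"$group": {"_id": {"$dateTrunc": {"date": "$createdAt", "unit": "day"}},
                "count": {"$sum": 1}}},
    # Cumulative total over a trailing 30-day window.
    {"$setWindowFields": {
        "sortBy": {"_id": 1},
        "output": {"signups30d": {
            "$sum": "$count",
            "window": {"range": [-30, "current"], "unit": "day"}}}}},
]))
```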
null
[]
[ { "code": "{\"t\":{\"$date\":\"2022-01-01T16:13:21.271+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.272+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.275+05:30\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.275+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.276+05:30\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.276+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"ns\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.276+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"ns\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.276+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.276+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":8149,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"Sais-MacBook-Pro.local\"}}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.276+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.4\",\"gitVersion\":\"62a84ede3cc9a334e8bc82160714df71e7d3a29e\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.276+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.2.0\"}}}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.276+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.279+05:30\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Address already in use\"}}}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.279+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.280+05:30\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, 
\"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.280+05:30\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.280+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.280+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.280+05:30\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.281+05:30\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.281+05:30\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.281+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.281+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.281+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.281+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.281+05:30\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.281+05:30\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.281+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-01-01T16:13:21.281+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":48}}\n", "text": "Hello I have installed Mongo DB in M1 Mac as mentioned in the docs Install MongoDB Community Edition on macOS — MongoDB Manual\nBut when I try to run mongod command it gives me these errors.", "username": "Sai_Balaji1" }, { "code": "msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Address already in use\"}}}", "text": "msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Address already in use\"}}}Do you have another mongod already running?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I’m not sure how can I check it?", "username": "Sai_Balaji1" }, { "code": "", "text": "As per manual it is recommended to start mongod as serviceIf you have run brew services start there is no need to start mongod manually.Just issue mongo to connect to your instancecheck brew services list", "username": "Ramachandra_Tummala" }, { "code": "ps -aef | grep [m]ongod\nss 
-tlnp\n", "text": "I’m not sure how can I check it?With the shell commands", "username": "steevej" }, { "code": "", "text": "is your issue resolved?? i’m facing the same issue can you help me…!", "username": "THE_ULTIMATE_GAMER" }, { "code": "", "text": "Same issue means error 48?\nPlease show a screenshot or error logs\nAre you trying to start a mongod when there is already a mongod up and running?\nYou can check with commands given in previous\npost above", "username": "Ramachandra_Tummala" }, { "code": "", "text": "i used mongo to connect instance now its working fine… but i’m getting an error can you help me outcode: db.second.insertone({_id:1,name:“hero”})\nerreor : uncaught exception: TypeError: db.second.insertone is not a function :", "username": "THE_ULTIMATE_GAMER" }, { "code": "", "text": "my bad insert\"O\"ne. wrong spell, btw how can i connect u through social media…?", "username": "THE_ULTIMATE_GAMER" }, { "code": "", "text": "I also facing same issue.I’m totally new to mongodb and has no knowledge about it at all.can anybody please help me…\nthis is the error when i run mongod\nScreenshot 2022-06-24 at 11.58.57 AM1920×447 165 KB\n", "username": "Anupama_Aanzz" }, { "code": "", "text": "Check if your mongod is already up as service\nJust issue mongo and see if you can connect\nAlso check ps -ef | mongod if any mongod is up started manually\nAlso check permissions on that TMP .sock file", "username": "Ramachandra_Tummala" }, { "code": "", "text": "can you explain it in a simpler manner", "username": "Gagan_Baghel" }, { "code": "", "text": "Show us output of\nps -ef|grep mongod and\nls -lrt /tmp/mongod-27017.sock", "username": "Ramachandra_Tummala" }, { "code": "", "text": "s -lrt /tmp/mongod-27017.sockThis is the response i am getting\n\nimage2734×194 68.3 KB\n", "username": "Sashank_Gunda" }, { "code": "", "text": "No mongod running in your case\nHave you started the service?\nIf yes check service status and mongod.log on why it is failing", "username": "Ramachandra_Tummala" } ]
Help: mongod command not working on M1 Mac
2022-01-01T10:57:25.415Z
Help: mongod command not working on M1 Mac
12,483
null
[ "connecting" ]
[ { "code": "{ \"errorType\": \"Runtime.UnhandledPromiseRejection\", \"errorMessage\": \"MongoServerSelectionError: Server selection timed out after 30000 ms\", \"reason\": { \"errorType\": \"MongoServerSelectionError\", \"errorMessage\": \"Server selection timed out after 30000 ms\", \"reason\": { \"type\": \"ReplicaSetNoPrimary\", \"servers\": {}, \"stale\": false, \"compatible\": true, \"heartbeatFrequencyMS\": 10000, \"localThresholdMS\": 15, \"setName\": \"atlas-cnjwgm-shard-0\" }, \"stack\": [ \"MongoServerSelectionError: Server selection timed out after 30000 ms\", \" at Timeout._onTimeout (/var/task/index.js:2:464693)\", \" at listOnTimeout (internal/timers.js:554:17)\", \" at processTimers (internal/timers.js:497:7)\" ] }, \"promise\": {}, \"stack\": [ \"Runtime.UnhandledPromiseRejection: MongoServerSelectionError: Server selection timed out after 30000 ms\", \" at process.<anonymous> (/var/runtime/index.js:35:15)\", \" at process.emit (events.js:314:20)\", \" at processPromiseRejections (internal/process/promises.js:209:33)\", \" at processTicksAndRejections (internal/process/task_queues.js:98:32)\" ] }", "text": "I have an AWS lambda in my VPC which, using VPC peering, talks to my Atlas cluster.Some of the time I am seeing the following error: { \"errorType\": \"Runtime.UnhandledPromiseRejection\", \"errorMessage\": \"MongoServerSelectionError: Server selection timed out after 30000 ms\", \"reason\": { \"errorType\": \"MongoServerSelectionError\", \"errorMessage\": \"Server selection timed out after 30000 ms\", \"reason\": { \"type\": \"ReplicaSetNoPrimary\", \"servers\": {}, \"stale\": false, \"compatible\": true, \"heartbeatFrequencyMS\": 10000, \"localThresholdMS\": 15, \"setName\": \"atlas-cnjwgm-shard-0\" }, \"stack\": [ \"MongoServerSelectionError: Server selection timed out after 30000 ms\", \" at Timeout._onTimeout (/var/task/index.js:2:464693)\", \" at listOnTimeout (internal/timers.js:554:17)\", \" at processTimers (internal/timers.js:497:7)\" ] }, \"promise\": {}, \"stack\": [ \"Runtime.UnhandledPromiseRejection: MongoServerSelectionError: Server selection timed out after 30000 ms\", \" at process.<anonymous> (/var/runtime/index.js:35:15)\", \" at process.emit (events.js:314:20)\", \" at processPromiseRejections (internal/process/promises.js:209:33)\", \" at processTicksAndRejections (internal/process/task_queues.js:98:32)\" ] }I have not been able to find any solutions to this issue. Any help or pointers would be appreciated.", "username": "Edouard_Finet" }, { "code": "Some of the time", "text": "Hello @Edouard_Finet ,Regarding connections into Atlas from AWS Lambda, the page Manage connections with AWS Lambda might be of interest to you. Also the blog post Write A Serverless Function with AWS Lambda and MongoDB | MongoDB might be useful.Some of the time I am seeing the following errorCould you please provide below details for us to check on the error you are seeing:If this issue persists and all network settings should be allowing the connection, have you contacted Atlas support to look into this issue? They would have better visibility into the state of your deployment and thus may be able to help you further.Regards,\nTarun", "username": "Tarun_Gaur" } ]
I am getting "MongoServerSelectionError: Server selection timed out after 30000 ms" when my lambda is connecting to my database
2022-09-27T12:32:21.785Z
I am getting &ldquo;MongoServerSelectionError: Server selection timed out after 30000 ms&rdquo; when my lambda is connecting to my database
2,690
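Intermittent server-selection timeouts from Lambda are often a connection-reuse problem rather than a networking one. A hedged Python-handler sketch of the pattern recommended in the guides linked above: construct the client once per container, outside the handler, so warm invocations reuse pooled connections; the environment variable, namespace, and timeout value are placeholders.

```python
# Hedged sketch: reuse one MongoClient across warm Lambda invocations.
import os

from pymongo import MongoClient

# Created at import time, i.e. once per container, not once per invocation.
client = MongoClient(os.environ["MONGODB_URI"],
                     serverSelectionTimeoutMS=5000)  # fail fast, not 30 s

def handler(event, context):
    doc = client.mydb.mycoll.find_one({"_id": event["id"]})
    return {"found": doc is not None}
```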
null
[ "sharding" ]
[ { "code": "", "text": "I am developing an application that uses a sharded mongo cluster… All writes use a {w: majority, j: true}.At some point, our cluster auto-scaled to increase disk space and several of our nodes experienced rollback events.How can a rollback take place when all writes use {w: majority, j: true}?", "username": "Hudson_Clark" }, { "code": "", "text": "My current best guess of what could have happened is:", "username": "Hudson_Clark" } ]
Rollback with {w: majority, j: true}
2022-09-30T12:48:50.097Z
Rollback with {w: majority, j: true}
1,284
null
[ "python", "field-encryption" ]
[ { "code": "provider = 'local'\n\npath = './master-key.txt'\n\nwith open(path, 'rb') as f:\n\n local_master_key = f.read()\n\nprint(local_master_key) #Famous print bdebug method\n\nkms_providers = {\n\n 'local': {\n\n 'key': local_master_key\n\n },\n\n}\n\nclient = MongoClient(connection_string)\n\nclient_encryption = ClientEncryption(\n\n kms_providers,\n\n key_vault_namespace,\n\n client,\n\n CodecOptions(uuid_representation=STANDARD) \n\n)\nTraceback (most recent call last):\n File \"c:~\\mongodb_test\\pymongo-fastapi-crud\\make_data_key.py\", line 43, in <module>\n client_encryption = ClientEncryption(\n File \"C:~\\mongodb_test\\env-pymongo-fastapi-crud\\lib\\site-packages\\pymongo\\encryption.py\", line 537, in __init__\n self._encryption = ExplicitEncrypter(\n File \"C:~\\mongodb_test\\env-pymongo-fastapi-crud\\lib\\site-packages\\pymongocrypt\\explicit_encrypter.py\", line 130, in __init__\n self.mongocrypt = MongoCrypt(mongo_crypt_opts, callback)\n File \"C:~\\mongodb_test\\env-pymongo-fastapi-crud\\lib\\site-packages\\pymongocrypt\\mongocrypt.py\", line 179, in __init__\n self.__init()\n File \"C:~\\mongodb_test\\env-pymongo-fastapi-crud\\lib\\site-packages\\pymongocrypt\\mongocrypt.py\", line 203, in __init\n self.__raise_from_status()\n File \"C:~\\mongodb_test\\env-pymongo-fastapi-crud\\lib\\site-packages\\pymongocrypt\\mongocrypt.py\", line 259, in __raise_from_status\n raise exc\npymongocrypt.errors.MongoCryptError: local key must be 96 bytes\n", "text": "I’m trying to mess a bit around with Queryable Enryption (Python!) but i can’t seem to make it work.I’ve followed the guide step-by-step and the local key i’ve generated with “openssl rand 96 > master-key.txt” is not accepted by pymongocrypt for some reason, and i can’t figure out why.Here’s the code i’ve used to read the .txt file and setup the ClientEncryptionTraceback ->>I’ve tried generating the key and pasting the string directly into the local.key field. I’ve tried encrypting it to base64 and many other things, but i’m still new to this encryption thing so i am really lost", "username": "Nicklas_Wurtz" }, { "code": "import os\n\npath = \"master-key.txt\"\nfile_bytes = os.urandom(96)\nwith open(path, \"wb\") as f:\n f.write(file_bytes)\n", "text": "Found a way to do it without OpenSSL Made a new .py which made the master-key instead of using openssl to do so", "username": "Nicklas_Wurtz" } ]
Queryable Encryption "Local Key Must Be 69 Bytes"
2022-09-30T11:39:13.046Z
Queryable Encryption &ldquo;Local Key Must Be 69 Bytes&rdquo;
1,817
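One hedged guess at why the openssl-generated file was rejected: on Windows, shell redirection can write in text mode or with an unexpected encoding, so the file read back is not exactly 96 raw bytes. A small sketch that generates and validates the key entirely in Python, along the lines of the fix the poster found:

```python
# Generate the local master key in binary mode and verify its length before
# handing it to ClientEncryption.
import os

PATH = "master-key.txt"

if not os.path.exists(PATH):
    with open(PATH, "wb") as f:          # binary mode avoids newline mangling
        f.write(os.urandom(96))

with open(PATH, "rb") as f:
    local_master_key = f.read()

assert len(local_master_key) == 96, (
    f"expected 96 bytes, got {len(local_master_key)}")
```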
null
[ "aggregation" ]
[ { "code": "", "text": "I have a mongo db collection as described below:\n({ ‘name’ : ‘rohan’, ‘dept’: ‘cse’ } ,\n{ ‘name’ : ‘aakash’, ‘dept’: ‘mech’ },\n{ ‘name’ : ‘kiara’, ‘dept’: ‘cse’ } )Query: I have a list x = [mech, cse] which means that I want to see all dept=mech first and then cse and so on. How can I sort my collection based on the values ?Output: My collection should look like:\n({ ‘name’ : ‘aakash’, ‘dept’: ‘mech’ },\n{ ‘name’ : ‘rohan’, ‘dept’: ‘cse’ } ,\n{ ‘name’ : ‘kiara’, ‘dept’: ‘cse’ } )", "username": "nancy_tyagi" }, { "code": "[\"x\", \"y\"][\"y\",\"x\"]$addFields :{ placement : \n{ $indexOfArray: [ [\"mech\", \"cse\"] , \"$x\" ] }}\n", "text": "Hi @nancy_tyagi ,It I understand correctly the challenge is thet you need to respect the order of the values in the array but you don’t know the values before hand and cannot sort the input array.For example you can have [\"x\", \"y\"] or [\"y\",\"x\"] and based on the order you need to sort.In the case I can give you a high level guidance and maybe later on I can give you an example.You should have a pipeline in aggregation that first match the array values to get all the related documents, then you need to use $addFields + $indexOfArray for each document “$x” in the input array to get a new field with the index number. Final stage should be sort based on that newly added placement field to get the order you want…Hope you find this helpfulThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny ,You understood the problem statement correctly. I need the sort based on the elements in the list. So if “x” comes first then I need all the documents with the value “x” first.But since I am new to mongo db, Can you help me with the exact statement to be implemented?Thanks a lot!!! ", "username": "nancy_tyagi" }, { "code": "db.collection.aggregate([\n{\n $match : { x : [\"mech\", \"cse\"]}\n},\n{\n$addFields :{ placement : \n{ $indexOfArray: [ [\"mech\", \"cse\"] , \"$x\" ] }}\n},\n{$sort : { placement : 1 } }\n// Optimally add : , {$project : { placement : 0 }}\n]\n", "text": "Hi @nancy_tyagi ,I believe it should look something of the following sort, but I haven’t tested it:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_DuchovnyI tried to test the solution, but it seems the sort is not happening. After I implemented your solution, my collection still looks the same.INPUT COLLECTION:\n({ ‘name’ : ‘rohan’, ‘dept’: ‘cse’ } ,\n{ ‘name’ : ‘aakash’, ‘dept’: ‘mech’ },\n{ ‘name’ : ‘kiara’, ‘dept’: ‘cse’ } )and even the output collection is same. Can you please help me with it?", "username": "nancy_tyagi" }, { "code": "db.collection.insertMany([{ \"name\" : \"rohan\", \"dept\": \"cse\" } ,\n{ \"name\" : \"aakash\", \"dept\": \"mech\" },\n{ \"name\" : \"kiara\", \"dept\": \"cse\" }]);\n\ndb.collection.aggregate([\n{\n $match : { dept : { $in : [\"mech\", \"cse\"]}}\n},\n{\n$addFields :{ \"placement\" : { $indexOfArray: [ [\"mech\", \"cse\"] , \"$dept\" ] }}\n},\n{$sort : { placement : 1 } }\n]);\n\n\n", "text": "Hi @nancy_tyagi ,Sorry mistakenly used some wrong field names. The following works for me:The result ireturned by query is as follows:\n\nScreenshot 2022-02-07 at 09.06.311302×722 53.6 KB\n", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_DuchovnyYou have been a life saviour. Thanks a lot for your help!!! 
", "username": "nancy_tyagi" }, { "code": "", "text": "Any document reference on how to add fields (with indexOfArray expression)like this while using mongo-java driver.", "username": "Madhav_kumar_Jha" } ]
Sort the mongo db collection based on values
2022-02-07T05:07:55.415Z
Sort the mongo db collection based on values
6,048
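The last reply asks how to build the same $addFields + $indexOfArray pipeline with the mongo-java driver. A minimal, hedged sketch using the synchronous Java driver - connection string, database, and collection names are placeholders; only the pipeline shape is taken from the thread:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.Arrays;
import java.util.List;

public class SortByListOrder {
    public static void main(String[] args) {
        List<String> order = Arrays.asList("mech", "cse"); // desired dept order
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> coll =
                    client.getDatabase("test").getCollection("collection");
            coll.aggregate(Arrays.asList(
                    // keep only documents whose dept is in the list
                    new Document("$match", new Document("dept", new Document("$in", order))),
                    // placement = position of $dept inside the ordered list
                    new Document("$addFields", new Document("placement",
                            new Document("$indexOfArray", Arrays.asList(order, "$dept")))),
                    // sort by that position
                    new Document("$sort", new Document("placement", 1))
            )).forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```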
null
[ "node-js", "app-services-cli" ]
[ { "code": "", "text": "Hi, I’m following the tutorial in the documentation to install Realm-CLI.\nI installed Node.js version 16.14.2 and npm version 8.5.5Still the terminal failes to execute:\nnpm install -g mongodb-realm-clithrowing this error and freezing afterwards:\nnpm WARN deprecated [email protected]: this library is no longer supported\nnpm WARN deprecated [email protected]: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See There’s Math.random(), and then there’s Math.random() · V8 for details.\nnpm WARN deprecated [email protected]: request has been deprecated, see h", "username": "Francesco_Pace" }, { "code": "", "text": "Did you ever figure out the issue?", "username": "Teelon_Mitchell" } ]
Unable To Install Realm-CLI
2022-03-27T13:06:33.746Z
Unable To Install Realm-CLI
4,369
null
[ "graphql" ]
[ { "code": "Authentication: Bearer xxxxx...xxxxxjwtTokenStringjwtTokenString", "text": "When you want to send a GraphQL request to your mongodb app\nand you configured CustomJWT authentication,then one would expect that it necesssay to send the JWT token in the Authentication: Bearer xxxxx...xxxxx header as normal.But this is not possible. One must send the token in a custom jwtTokenString Header. Even if you set both, the default Authentication and the jwtTokenString header, then it does not work. You must only set the custom jwtTokenString Header.Yes ok, I admint this is documented. If you manage to find it. At the very bottom of this page: https://www.mongodb.com/docs/atlas/app-services/graphql/authenticate/But this is unexpected behaviour. Also the customJWT authentication should use the default Header. Also for GraphQL requests. Just as any other request out there too.", "username": "Robert_Rackl1" }, { "code": "curl --location --request POST 'https://eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/<app-id>/graphql' \\\n --header 'Authentication: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI......' \\\n --header 'Content-Type: application/json' \\\n --data-raw '{ \"query\" : \"{ team { _id teamName} }\" }'\ncurl --location --request POST 'https://eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/<app-id>/graphql' \\\n --header 'jwtTokenString eyJhbGciOiJIUzI1NiIsInR5cCI......' \\\n --header 'Content-Type: application/json' \\\n --data-raw '{ \"query\" : \"{ team { _id teamName} }\" }'\n", "text": "Example curl request as one would expect. And as specified by the GraphQL standards (This request does not work!)Example request that does work - but is very uncommon:", "username": "Robert_Rackl1" } ]
GraphQL Custom JWT Authentication
2022-09-30T08:37:30.674Z
GraphQL Custom JWT Authentication
2,507
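For completeness, the same working request from a Node.js client rather than curl - a hedged sketch assuming Node 18+ (global fetch) and the endpoint, header name, and query shown in the thread; the token variable is a placeholder:

```js
// Hypothetical helper: POST a GraphQL query to the App Services endpoint.
async function queryTeams(token) {
  const res = await fetch(
    "https://eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/<app-id>/graphql",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Custom JWT auth: the token goes in this non-standard header.
        jwtTokenString: token,
      },
      body: JSON.stringify({ query: "{ team { _id teamName } }" }),
    }
  );
  return res.json();
}
```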
null
[ "node-js", "mongoose-odm" ]
[ { "code": "const fs = require('fs')\nconst path = require('path')\nconst db = require('../server/ModelHandler')\nconst mongoose = require('mongoose')\n\nmodule.exports = class Seeder {\n static async seed() {\n await db.connect()\n await mongoose.connection.db.dropDatabase()\n console.log('\\n\\nSEEDING DB\\n' + '-'.repeat(60) + '\\n')\n await this.insertData()\n console.log('\\n\\nAll done!\\n');\n process.exit();\n }\n\n static async insertData() {\n let allCustomers = []\n let data = this.readJsonFiles('./', 'data')\n \n for (let [route] of data) {\n let model = db.modelsByApiRoute[route]._model;\n if (!model) {\n throw (new Error(`Model for collection ${route} not found.`));\n }\n if (route === 'customers') {\n console.log('customers working')\n rows.forEach(x =>\n x.customerId = x.customerId === null ? null : allCustomers[x.customerId - 1]); \n } \n \n console.log(`Inserted data in the ${route} collection.`);\n }\n }\n}``` \n\nModelHandler.js / seems to format the json to schema via a template. \nglobal.db = this\n\nawait mongoose.connect(require('./secrets/dbCredentials.json'))\n\nthis.getAllModels()\nlet modelsPath = path.join(__dirname, 'models')\n\nfs.readdirSync(modelsPath).sort()\n\n .filter(x => x.slice(-3) === '.js')\n\n .map(x => path.join(modelsPath, x))\n\n .forEach(x => require(x))\n\nthis.finalizeSchemasAndModels()\nthis.modelsByApiRoute[settings.apiRoute] = settings\n\nthis.modelsByName[settings.model] = settings\nfor (let x of Object.values(this.modelsByName)) {\n\n let { collection } = x;\n\n let schemaProps = x.schemaProperties\n\n || this.modelsByName[x.schemaPropertiesFrom].schemaProperties;\n\n let schema = new mongoose.Schema(schemaProps, { collection });\n\n x.addHooks && x.addHooks(schema);\n\n let model = mongoose.model(x.model, schema);\n\n console.log('Model ' + x.model + ' created');\n\n x._model = model;\n\n}\n\nThe data itself \n\njson - > template \n\n\"customerName\": \"[email protected]\",\n\n\"email\": \"Arleta\",\n\n\"telephoneNumber\": \"16\"\n\"customerName\": \"[email protected]\",\n\n\"email\": \"Arleta\",\n\n\"telephoneNumber\": \"16\"\n\"customerName\": \"[email protected]\",\n\n\"email\": \"Arleta\",\n\n\"telephoneNumber\": \"6\"\n\nthe template \ncustomer for example \n\ncustomerName: { type: String, required: true },\n\nemail: { type: String, required: true },\n\ntelephoneNumber: {type: String, required: true}\n", "text": "HelloIm current in fullstack course and Im having problem appropriating a framework for seeding data inside collections. It creates consumer and tickets collections however json to schema data does not get sent / get reject. 
Im quite green.Seeder.jsconst fs = require(‘fs’)const path = require(‘path’)const mongoose = require(‘mongoose’)module.exports = class ModelHandler {static types = mongoose.Schema.Typesstatic modelsByApiRoute = {}static modelsByName = {}static async connect() {}static getAllModels() {}static registerModel(settings) {}static finalizeSchemasAndModels() {}}[{},{},{}]db.registerModel({model: ‘Customer’,collection: ‘customers’,apiRoute: ‘customers’,readOnly: false,schemaProperties: {},addHooks(schema) { }})", "username": "Tim_Hogklint" }, { "code": "", "text": "Oh by the way - this post is horrible formatted; excuse me while I try to figure out how to edit my post !Sorry !", "username": "Tim_Hogklint" }, { "code": "Seeder.js\nconst fs = require('fs')\nconst path = require('path')\nconst db = require('../server/ModelHandler')\nconst mongoose = require('mongoose')\n\nmodule.exports = class Seeder {\n static async seed() {\n await db.connect()\n await mongoose.connection.db.dropDatabase()\n console.log('\\n\\nSEEDING DB\\n' + '-'.repeat(60) + '\\n')\n await this.insertData()\n console.log('\\n\\nAll done!\\n');\n process.exit();\n }\n\n static async insertData() {\n let allCustomers = []\n let data = this.readJsonFiles('./', 'data')\n \n for (let [route] of data) {\n let model = db.modelsByApiRoute[route]._model;\n if (!model) {\n throw (new Error(`Model for collection ${route} not found.`));\n }\n if (route === 'customers') {\n console.log('customers working')\n rows.forEach(x =>\n x.customerId = x.customerId === null ? null : allCustomers[x.customerId - 1]); \n } \n \n console.log(`Inserted data in the ${route} collection.`);\n }\n }\n}\n\n/// ---------\nModelHandler.js \n\nconst fs = require('fs')\nconst path = require('path')\nconst mongoose = require('mongoose')\n\nmodule.exports = class ModelHandler {\n \n static types = mongoose.Schema.Types\n static modelsByApiRoute = {}\n static modelsByName = {}\n\n static async connect() {\n global.db = this\n await mongoose.connect(require('./secrets/dbCredentials.json'))\n this.getAllModels()\n }\n\n static getAllModels() {\n let modelsPath = path.join(__dirname, 'models')\n fs.readdirSync(modelsPath).sort()\n .filter(x => x.slice(-3) === '.js')\n .map(x => path.join(modelsPath, x))\n .forEach(x => require(x))\n this.finalizeSchemasAndModels()\n }\n\n static registerModel(settings) {\n this.modelsByApiRoute[settings.apiRoute] = settings\n this.modelsByName[settings.model] = settings\n }\n\n static finalizeSchemasAndModels() {\n for (let x of Object.values(this.modelsByName)) {\n let { collection } = x;\n let schemaProps = x.schemaProperties\n || this.modelsByName[x.schemaPropertiesFrom].schemaProperties;\n let schema = new mongoose.Schema(schemaProps, { collection });\n x.addHooks && x.addHooks(schema);\n let model = mongoose.model(x.model, schema);\n console.log('Model ' + x.model + ' created');\n x._model = model;\n }\n }\n}\n\n/// ------ The data \n\ncustomers.json \n\n[\n {\n \"customerName\": \"[email protected]\",\n \"email\": \"Arleta\",\n \"telephoneNumber\": \"16\"\n },\n {\n \"customerName\": \"[email protected]\",\n \"email\": \"Arleta\",\n \"telephoneNumber\": \"16\"\n },\n {\n \"customerName\": \"[email protected]\",\n \"email\": \"Arleta\",\n \"telephoneNumber\": \"6\"\n }\n]\n\n/// ----- The customer template (unsure if this is the correct technical term ) Im trying !\n\ndb.registerModel({\n model: 'Customer',\n collection: 'customers',\n apiRoute: 'customers',\n readOnly: false,\n schemaProperties: {\n customerName: { type: String, 
required: true },\n email: { type: String, required: true },\n telephoneNumber: {type: String, required: true}\n },\n addHooks(schema) { }\n}) \n\n", "text": "Well this is certainly not the first impression I wanted to give - but I cant edit my first post.Im abit afraid to post now but - here is the related code I have to illustrate my problem.It does create the collections , but for costumers where there should be data - there is none.\n\npicprob1762×340 23.1 KB\n", "username": "Tim_Hogklint" }, { "code": "", "text": "Here is some additional information / development on the problem.\npicprob21202×731 46.7 KB\nExcuse the self replies !", "username": "Tim_Hogklint" }, { "code": "customers.jsonseeder.jssave()save()", "text": "Hi @Tim_Hogklint welcome to the community!If I understand correctly, the issue you’re having is that you’re trying to seed the database with the example data you have (marked as customers.json there) but you’re not seeing the data in the collection. Is this correct?Does this mean that the file seeder.js is not doing what it’s supposed to do? In mongoose, typically you call the save() method to save the data in the database. However in the code you posted, I see none of them. Without save() or any other method that does a similar thing, definitely there won’t be any data in the database Having said that, it’s a bit difficult to confirm what you’re seeing since I think the code you posted forms part of a larger code. I suggest you trim the code down to something that can be executed as-is without requiring any other infrastructure. This is so that: 1) Anyone can copy-paste the code and run it, 2) Proof that it’s not caused by any misconfiguration on the infrastructure part of the code, and 3) Illustrates the problem clearly and succintly.Do you mind trimming the code down so it can achieve all three points above?Best regards\nKevin", "username": "kevinadi" }, { "code": "const fs = require('fs')\nconst path = require('path')\nconst mongoose = require('mongoose')\n\nmodule.exports = class ModelHandler {\n \n static types = mongoose.Schema.Types\n static modelsByApiRoute = {}\n static modelsByName = {}\n\n static async connect() {\n global.db = this;\n await mongoose.connect(require('./secrets/dbCredentials.json'))\n this.getAllModels()\n }\n\n static getAllModels() {\n let modelsPath = path.join(__dirname, 'models')\n fs.readdirSync(modelsPath).sort()\n .filter(x => x.slice(-3) === '.js')\n .map(x => path.join(modelsPath, x))\n .forEach(x => require(x))\n this.finalizeSchemasAndModels()\n }\n\n static registerModel(settings) {\n this.modelsByApiRoute[settings.apiRoute] = settings\n this.modelsByName[settings.model] = settings\n }\n\n static finalizeSchemasAndModels() {\n for (let x of Object.values(this.modelsByName)) {\n let { collection } = x;\n let schemaProps = x.schemaProperties\n || this.modelsByName[x.schemaPropertiesFrom].schemaProperties;\n let schema = new mongoose.Schema(schemaProps, { collection });\n x.addHooks && x.addHooks(schema);\n let model = mongoose.model(x.model, schema);\n console.log('Model ' + x.model + ' created');\n x._model = model;\n }\n }\n}\n", "text": "Thanks for taking the time to try and help me !There has been an update ; I can via this set data in the customer collection via vs code tool\n\nexample950×780 45.5 KB\nNow I looking to transfer that to our actual rest api if that makes senseI dont really understand how the “data” goes fromHope this illustrates the problem ?\nOtherwise I have the example the teacher gave us where its isolated to 
just the\nbare component - he was not done with the REST API, I guess; I’m not sure how to\ncontinue “the work”.\nIsolated code zip download ", "username": "Tim_Hogklint" } ]
Filling collection with schema data problem
2022-09-29T12:06:33.521Z
Filling collection with schema data problem
1,781
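The insertData() in the thread reads the JSON files but never actually writes the rows, which would explain the empty customers collection. A stripped-down, hedged sketch of the missing step with Mongoose - the schema and data shape are taken from the thread, while the connection URI and file path are placeholders:

```js
const mongoose = require('mongoose');

const Customer = mongoose.model('Customer', new mongoose.Schema({
  customerName: { type: String, required: true },
  email: { type: String, required: true },
  telephoneNumber: { type: String, required: true },
}, { collection: 'customers' }));

async function seed() {
  await mongoose.connect('mongodb://localhost:27017/seedDemo'); // placeholder URI
  await mongoose.connection.db.dropDatabase();
  const rows = require('./data/customers.json'); // the JSON shown in the thread
  await Customer.insertMany(rows);               // the write step missing from insertData()
  console.log(`Inserted ${rows.length} customers.`);
  await mongoose.disconnect();
}

seed().catch(console.error);
```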
null
[ "crud", "change-streams" ]
[ { "code": "", "text": "Hi! Is there a mechanism like Hooks (also known as lifecycle events), are which are called before and after calls in mongodb are executed. For example, if you want to escape all string in the JSON document then what would be the preferred mechanism to do this in MongoDB?", "username": "Rolu" }, { "code": "", "text": "Welcome to the MongoDB Community @Rolu !There are several different ways to approach this:Lifecycle events via your MongoDB driver or library. For example, the Mongoose Object-Data Mapper (ODM) framework for Node.js allows registration of middleware functions that can handle document, model, aggregation, and query events (pre and post).NOTE: This is closest to the use case you describe of cleaning up documents prior to saving them in your MongoDB deployment.Tracking and reacting to real-time data changes for a MongoDB deployment via the Change Streams feature.Executing Atlas Database Triggers for a MongoDB Atlas deployment. Atlas Triggers use change streams to implement data-driven interactions with server-side JavaScript functions that can be executed whenever a relevant document is added, updated, or removed in a linked Atlas cluster.what would be the preferred mechanism to do this in MongoDB?As noted above, I think you are looking for client-side middleware include pre and post event hooks. Those are typically added by frameworks that build on MongoDB drivers rather than being part of the core driver functionality, so I suggest looking into options for your language implementation of choice.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mechanism like Hooks (also known as lifecycle events)
2022-09-29T09:44:38.995Z
Mechanism like Hooks (also known as lifecycle events)
4,311
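To make the Mongoose middleware option concrete, a small hedged sketch of pre/post save hooks that normalize string fields before they reach the database - the schema and field names are invented for illustration:

```js
const mongoose = require('mongoose');

const noteSchema = new mongoose.Schema({ title: String, body: String });

// Runs before every save(): a natural place to escape/clean string fields.
noteSchema.pre('save', function (next) {
  if (this.title) this.title = this.title.trim();
  next();
});

// Runs after the document has been persisted.
noteSchema.post('save', function (doc) {
  console.log('Saved note', doc._id);
});

const Note = mongoose.model('Note', noteSchema);
```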
null
[ "queries", "java", "crud", "spring-data-odm" ]
[ { "code": "db.messages.updateMany(, [ $set: \nrecipientEmail: \n $toLower: '$recipientEmail'\n },\nsenderEmail: \n $toLower: '$senderEmail'\n }\n], multi: true );\n", "text": "Hello, I was finding on the internet how to update all the document field values with lowercase.\nI luckily found a query which I modified as per my requirement and it is working correctly.But now I am trying to convert this query into Java code, I am not able to convert it.\nI again started looking into the internet, but couldn’t find any code.\nSo, can anyone help me convert this query to Java code so that I can use it in my Spring Boot application?\nThanks in advance.", "username": "Shrey_Soni" }, { "code": "Arrays.asList(new Document(\"$set\", \n new Document(\"recipientEmail\", \n new Document(\"$toLower\", \"$recipientEmail\"))\n .append(\"senderEmail\", \n new Document(\"$toLower\", \"$senderEmail\"))))\n", "text": "Hi @Shrey_Soni and welcome to community forum!!Please use the below code snipped to use the $set operator:The following documentation on Update Many using Java might also be helpful in this case.Please note that MongoDB Compass and Atlas provides with the feature to export the aggregation pipelines to language specific codes.\nPlease refer to the documentation for Export Pipeline to Specific Language in compass and Atlas for trying out.Let us know if you have any further questions.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @Aasawari , can u tell me how I use the list of documents in mongoTemplate. I don’t see any method accepting List. Also the query that I have share I found it from internet, it is aggregation update query, right now I am not using that version. so is there any way to write java code with version 2.7.1 to set field value to lower case.Thanks in advance.", "username": "Shrey_Soni" }, { "code": "$merge db.test.find()\n[\n {\n _id: 0,\n recipientEmail: '[email protected]',\n senderEmail: '[email protected]'\n }\n]\n\n db.test.aggregate([\n { $addFields: { recipientEmail: { $toLower: '$recipientEmail'}, senderEmail:{ $toLower: '$senderEmail'}}},\n { $merge: { into: 'test', on: '_id', whenMatched: 'replace', whenNotMatched: 'insert' } }\n])\n db.test.find()\n[\n {\n _id: 0,\n recipientEmail: '[email protected]',\n senderEmail: '[email protected]'\n }\n]\n", "text": "Hi @Shrey_SoniI’m not sure I understand the goal you’re trying to achieve. I think you are trying to convert some part of your document to lower case, and save the result back in the database. Is this correct? Could you post some document example and the result you’re trying to get?so is there any way to write java code with version 2.7.1 to set field value to lower case.Could you also help us in understanding if Java is a requirement for the application. If not, $merge would help.\nElse, please help with the driver and MongoDB versions for the application.\nPlease refer to the below snippet for further understanding.Also, please note that, $merge is available after MongoDB version 4.2 or newer.Let us know if you have further questionsBest Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @Aasawari , the example that you added is correct, we have documents where recipient/sender emails value are in upper case and we won’t to update them in lower case.I will check how I can use merge from java side.Thanks for sharing the code.", "username": "Shrey_Soni" } ]
How to convert this custom update query to Java code
2022-09-22T11:38:48.123Z
How to convert this custom update query to Java code
3,899
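Putting Aasawari's $set pipeline into a complete driver call might look like the sketch below - hedged: it assumes the plain synchronous Java driver (not mongoTemplate), MongoDB 4.2+ for pipeline-style updates, and placeholder connection details:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.result.UpdateResult;
import org.bson.Document;
import java.util.Arrays;

public class LowercaseEmails {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> messages =
                    client.getDatabase("mydb").getCollection("messages");
            // Pipeline-style update: requires MongoDB 4.2 or newer.
            UpdateResult result = messages.updateMany(
                    new Document(), // empty filter: every document
                    Arrays.asList(new Document("$set",
                            new Document("recipientEmail",
                                    new Document("$toLower", "$recipientEmail"))
                                .append("senderEmail",
                                    new Document("$toLower", "$senderEmail")))));
            System.out.println("Modified: " + result.getModifiedCount());
        }
    }
}
```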
null
[ "swift" ]
[ { "code": "", "text": "Id like to talk with an expert about Mongo DB integration with Swift using Vapor, I have 0 experience into handling back-end except Firebase, that as you know its very easy to implement in Xcode projects, id like to see a sample application such as a Note app that performs simple queries to Mongo DB Atlas, using vapor as middle layer language, but configured starting by zero, step by step UI and backend, maybe a payed video course or something like that, im gonna be the first to buy, im struggling for weeks into finding a similar tutorial about that but found nothing of detailed and clear. I need your help ", "username": "Marco_Vastolo" }, { "code": "mongodb+srv://<user>:<password>@project.5m93ijs.mongodb.net/<databaseName>\ntry app.databases.use(.mongo(\n connectionString: Environment.get(\"DATABASE_URL\") ?? \"mongodb+srv://userOfAtlas : [email protected]/databaseName\"\n ), as: .mongo)\n", "text": "You need to use this estructure:you can generate this link in ‘Connect using VS Code’ instead ‘Connect your application’.The database name can be any name, and it will be created when your program connect to mongo.Inside of Configure.swift:", "username": "RigelCarbajal" } ]
Swift --> Vapor --> Mongo DB --> AWS
2021-04-08T21:36:27.077Z
Swift --> Vapor --> Mongo DB --> AWS
2,710
null
[ "aggregation", "java" ]
[ { "code": "", "text": "I have a query that works in MQL. I need to translate it into Java. The query in MQL looks like thisdb..aggregate( [\n{\n$project: {\n“MonitoringLocationIdentifier”: 1,\nepochTimes: {\n$filter: {\ninput: “$epochTimes”,\nas: “epochTime”,\ncond: { $and: [ {$gte: [ “$$epochTime”, NumberLong(“0”) ]}, {$lte: [\"$$epochTime\", NumberLong(“558268020000”)]} ]}\n}\n},\n}\n}\n] )The contains documents that look like this\n{ “_id” : ObjectId(“633218dfec534a6fe90106b8”), “MonitoringLocationIdentifier”: “Site1”, “epochTimes” : [ NumberLong(“451058760000”), NumberLong(“558189720000”), NumberLong(“516460860000”) ] }I am trying to get all the documents in the collection but filter the “epochTimes” for every document by a min/max.Any Java Driver wizards out there?", "username": "Matt_Young" }, { "code": "", "text": "I would copy your aggregation into Compass and export as Java.", "username": "steevej" }, { "code": "import static com.mongodb.client.model.Aggregates.project;\nimport static com.mongodb.client.model.Projections.computed;\nimport static com.mongodb.client.model.Projections.fields;\n...\nproject(fields(\n computed(\"MonitoringLocationIdentifier\", 1),\n computed(\"epochTimes\", new Document(\"$filter\",\n new Document(\"input\", \"$epochTimes\")\n .append(\"as\", \"epochTime\")\n .append(\"cond\", new Document(\"$and\", Arrays.asList(\n new Document(\"$gte\", Arrays.asList(\"$$epochTime\", 0L)),\n new Document(\"$lte\", Arrays.asList(\"$$epochTime\", 558268020000L))\n )))\n ))\n))\nDocument", "text": "Hi Matt, the Java translation would be:The Java driver offers static builder methods for some but not all parts of your MQL. Where no builder yet exists, I’ve used Document.", "username": "Maxim_Katcharov" } ]
Using $filter in MongoDB Java Driver
2022-09-27T23:04:21.276Z
Using $filter in MongoDB Java Driver
1,574
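A hedged sketch of running Maxim's stage end to end, with the min/max bounds pulled out as parameters - database, collection, and connection details are placeholders:

```java
import static com.mongodb.client.model.Aggregates.project;
import static com.mongodb.client.model.Projections.computed;
import static com.mongodb.client.model.Projections.fields;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.Arrays;

public class FilterEpochTimes {
    public static void main(String[] args) {
        long minEpoch = 0L;              // lower bound from the question
        long maxEpoch = 558268020000L;   // upper bound from the question
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> coll =
                    client.getDatabase("test").getCollection("sites");
            coll.aggregate(Arrays.asList(project(fields(
                    computed("MonitoringLocationIdentifier", 1),
                    computed("epochTimes", new Document("$filter",
                            new Document("input", "$epochTimes")
                                .append("as", "epochTime")
                                .append("cond", new Document("$and", Arrays.asList(
                                    new Document("$gte", Arrays.asList("$$epochTime", minEpoch)),
                                    new Document("$lte", Arrays.asList("$$epochTime", maxEpoch)))))))
            )))).forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```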
null
[ "aggregation", "data-modeling", "python" ]
[ { "code": "[\n {\n \"$facet\": {\n \"query_a\": [\n {\n \"$match\": {\n ...\n }\n },\n {\n \"$project\": {\n \"ID\": \"...\",\n \"date_a\": \"...\"\n }\n }\n ],\n \"query_b\": [\n {\n \"$match\": {\n ...\n }\n },\n {\n \"$project\": {\n \"ID\": \"...\",\n \"date_b\": \"...\"\n }\n }\n ]\n }\n },\n {\n \"$project\": {\n \"union\": {\n \"$setUnion\": [\n \"$query_a\",\n \"$query_b\"\n ]\n },\n \"query_a\": 1,\n \"query_b\": 1\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"data\": {\n \"$map\": {\n \"input\": \"$union\",\n \"in\": {\n \"union\": \"$$this\",\n \"query_a\": {\n \"$first\": {\n \"$filter\": {\n \"input\": \"$query_a\",\n \"as\": \"item\",\n \"cond\": {\n \"$eq\": [\n \"$$item.ID\",\n \"$$this\"\n ]\n }\n }\n }\n },\n \"query_b\": {\n \"$first\": {\n \"$filter\": {\n \"input\": \"$query_b\",\n \"as\": \"item\",\n \"cond\": {\n \"$eq\": [\n \"$$item.ID\",\n \"$$this\"\n ]\n }\n }\n }\n }\n }\n }\n }\n }\n },\n {\n \"$unwind\": \"$data\"\n },\n {\n \"$replaceRoot\": {\n \"newRoot\": \"$data\"\n }\n },\n {\n \"$project\": {\n \"union\": 1,\n }\n }\n]\n{'union': {\"ID\": \"c80ea2cb-3272-77ae-8f46-d95de600c5bf\", \"date_a\": \"1\"}}\n{'union': {\"ID\": \"c80ea2cb-3272-77ae-8f46-d95de600c5bf\", \"date_a\": \"2\"}}\n{'union': {\"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\", \"date_a\": \"3\"}}\n{'union': {\"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\", \"date_a\": \"4\"}}\n{'union': {\"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\", \"date_a\": \"5\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"6\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"7\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"8\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"9\"}}\n{'union': {\"ID\": \"c80ea2cb-3272-77ae-8f46-d95de600c5bf\", \"date_b\": \"10\"}}\n{'union': {\"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\", \"date_b\": \"11\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_b\": \"12\"}}\n{'union': {\"ID\": \"c80ea2cb-3272-77ae-8f46-d95de600c5bf\", \"date_a\": \"1\"}}\n{'union': {\"ID\": \"c80ea2cb-3272-77ae-8f46-d95de600c5bf\", \"date_a\": \"2\"}}\n{'union': {\"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\", \"date_a\": \"3\"}}\n{'union': {\"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\", \"date_a\": \"4\"}}\n{'union': {\"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\", \"date_a\": \"5\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"6\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"7\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"8\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"9\"}}\n{'union': {\"ID\": \"c80ea2cb-3272-77ae-8f46-d95de600c5bf\", \"date_b\": \"10\"}}\n{'union': {\"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\", \"date_b\": \"11\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_b\": \"12\"}}\n{'union': {\"ID\": \"c80ea2cb-3272-77ae-8f46-d95de600c5bf\", \"date_a\": \"1\", \"date_b\": \"10\"}}\n{'union': {\"ID\": \"c80ea2cb-3272-77ae-8f46-d95de600c5bf\", \"date_a\": \"2\", \"date_b\": \"10\"}}\n{'union': {\"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\", \"date_a\": \"3\", \"date_b\": \"11\"}}\n{'union': {\"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\", \"date_a\": \"4\", \"date_b\": \"11\"}}\n{'union': {\"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\", \"date_a\": \"5\", \"date_b\": 
\"11\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"6\", \"date_b\": \"12\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"7\", \"date_b\": \"12\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"8\", \"date_b\": \"12\"}}\n{'union': {\"ID\": \"a4ece1ba-42ae-e735-17b0-f619daa506f9\", \"date_a\": \"9\", \"date_b\": \"12\"}}\n", "text": "I have the following aggregation pipeline running in the latest version of mongoDB and pymongo:I am trying to get a Union on the field “ID” and I get the following example ouput:This from query_a:This from query_b:I’m trying to achieve the following output matched on “ID”:Your suggestions are appreciated!", "username": "J_P2" }, { "code": "", "text": "Here is a playground of my example: Mongo playground", "username": "J_P2" }, { "code": "db.collection.aggregate([\n {\n \"$project\": {\n \"union\": {\n \"$setUnion\": [\n \"$query_a\",\n \"$query_b\"\n ]\n }\n }\n },\n {\n \"$unwind\": \"$union\"\n },\n {\n \"$group\": {\n \"_id\": \"$union.ID\",\n \"date_a\": {\n \"$addToSet\": \"$union.date_a\"\n },\n \"date_b\": {\n \"$addToSet\": \"$union.date_b\"\n }\n }\n },\n {\n \"$unwind\": \"$date_a\"\n },\n {\n \"$unwind\": \"$date_b\"\n }\n])\n", "text": "Hi,You can do it like this:Working example", "username": "NeNaD" }, { "code": "", "text": "Thank you again! I saw your answer on stackoverflow too.", "username": "J_P2" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Union of 2 pipelines with the same collection
2022-09-29T13:51:06.092Z
Union of 2 pipelines with the same collection
1,720
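Since the question mentions pymongo, the accepted stages translate almost one-for-one - a hedged sketch with placeholder connection details; in the original pipeline these stages would follow the $facet stage from the question:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["test"]["collection"]

pipeline = [
    # ... the $facet stage from the question would go here ...
    {"$project": {"union": {"$setUnion": ["$query_a", "$query_b"]}}},
    {"$unwind": "$union"},
    {"$group": {
        "_id": "$union.ID",
        "date_a": {"$addToSet": "$union.date_a"},
        "date_b": {"$addToSet": "$union.date_b"},
    }},
    {"$unwind": "$date_a"},
    {"$unwind": "$date_b"},
]

for doc in coll.aggregate(pipeline):
    print(doc)
```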
null
[ "node-js", "mongoose-odm", "compass" ]
[ { "code": "", "text": "I have discovered through MongoDB University that I can set indexes on my Atlas database using Compass. Does this mean I can just do this instead of settings indexes in my Mongoose schemas?", "username": "Black_Sulfur" }, { "code": "", "text": "Hello @Black_Sulfur, and welcome to the MongoDB Community forums! Indexes can be created with any tool that can connect to the database in question, as long as you’re using an authenticated user with the proper permissions.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Creating indexes in mongoose schemas vs Compass
2022-09-29T14:24:49.204Z
Creating indexes in mongoose schemas vs Compass
1,501
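Either way it is the same server-side operation: an index created from mongosh (collection and field names below are illustrative) is indistinguishable from one created by Compass or declared in a Mongoose schema:

```js
// mongosh, connected with a user that has index-creation privileges
db.profiles.createIndex({ email: 1 }, { unique: true })
db.profiles.getIndexes() // shows the index regardless of which tool created it
```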
null
[ "database-tools" ]
[ { "code": "mongoimport --host=myRemoteHost.cloud --port=30107 --username=my_cloud_user\n --collection=profiles --db=controlpanel --file=/Users/myLocal/Documents/language profiles/languageprofiles.json\n--collection", "text": "prompts me the following message:zsh: command not found: --collection=profilesNot sure what I am missing here. As per what I read at the documentation, --collection is valid.", "username": "Wesley_Melis" }, { "code": "zsh: command not found --collection=profilesmongoimport --host=myRemoteHost.cloud --port=30107 --username=my_cloud_user \\\n--collection=profiles --db=controlpanel --file=/Users/myLocal/Documents/language\\ profiles/languageprofiles.json\n\\\\", "text": "Hello @Wesley_Melis, and welcome to the MongoDB community forums! I get the same result as you if I try to run the command as you have entered it here in the forums. As you can see from the last line zsh: command not found --collection=profiles, this means you’re trying to run the command split over two lines withe continuation character. You either need to put that on a single line, or write the command as follows:Notice the \\ at the end of the first line which means that the command continues on the next line.Also note that your file path has a space in it which means you’'ll get another error, so you will want escape the space with another \\ character as I have done in the command above.Doing both of those things should allow you to import your file.", "username": "Doug_Duncan" }, { "code": "2022-09-29T09:42:22.173-0300\terror connecting to host: could not connect to server: server selection error: server selection timeout, current topology: { Type: Single, Servers: [{ Addr: myRemoteHost.cloud:30107, Type: Unknown, Last error: connection() error occurred during connection handshake: connection(myRemoteHost.cloud:30107[-64]) socket was unexpectedly closed: EOF }, ] }\n", "text": "Hello @Doug_DuncanYour suggestions had some effect. I added the “” to continue the command at the nextline and fixed my file path too.Now I run into a different issue.Don’t know if this eof either suggests something wrong with the connection string itself or my .json file. Or if I should use --uri instead of --host", "username": "Wesley_Melis" }, { "code": "mongodb://mongoimport mongodb://[email protected]:30107,myRemoteHost.cloud:30107,myRemoteHost.cloud:30107/clouddb?authSource=admin&replicaSet=replset --collection=profiles --db=myDatabase --mode=upsert --file=/Users/wesley/Documents/UAT28SEP/profilesprod.json\n[1] + exit 1 mongoimport \nzsh: command not found: --collection=profiles\n", "text": "I changed my approach a little bit by issuing our cloud mongodb:// stringI think I’m missing the \"\"again. Tried to inser it at different positions, but still I get a", "username": "Wesley_Melis" }, { "code": "?mongoimport \"mongodb://[email protected]:30107,myRemoteHost.cloud:30107,myRemoteHost.cloud:30107/clouddb?authSource=admin&replicaSet=replset\" --collection=profiles --db=myDatabase --mode=upsert --file=/Users/wesley/Documents/UAT28SEP/profilesprod.json\n2022-09-29T09:12:03.566-0600 error parsing command line options: Invalid Options: Cannot specify different database in connection URI and command-line option (\"clouddb\" was specified in the URI and \"myDatabase\" was specified in the --db option)\n2022-09-29T09:12:03.567-0600 try 'mongoimport --help' for more information\nmongo --host=myRemoteHost.cloud --port=30107 --username=my_cloud_user -p\n", "text": "The ? 
character is a special character in Linux shells, so that needs to be escaped. Using the URI format, it’s best just to put that string in quotes so you don’t have to worry about escaping the characters.The updated command would look like this:Now having said that you’ll still get an error:To answer your original question, it looks like there was a timeout connecting to the server. Can you connect using:", "username": "Doug_Duncan" } ]
'no collection specified' error when trying to import from local into remotehost
2022-09-28T22:30:31.883Z
'no collection specified' error when trying to import from local into remotehost
2,197
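Following Doug's last error message to its conclusion, the command that should finally work is the quoted-URI form with the conflicting --db option removed, since the URI already names clouddb - a hedged guess based only on what the thread shows:

```sh
mongoimport "mongodb://[email protected]:30107,myRemoteHost.cloud:30107,myRemoteHost.cloud:30107/clouddb?authSource=admin&replicaSet=replset" \
  --collection=profiles --mode=upsert \
  --file=/Users/wesley/Documents/UAT28SEP/profilesprod.json
```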
null
[]
[ { "code": "{ \n \"_id\" : ObjectId(\"5e36d904618c0ea59f1eb04f\"), \n \"gps\" : { \"lat\" : 50.073288, \"lon\" : 14.43979 }, \n \"timeAdded\" : ISODate(\"2020-02-02T15:13:22.096Z\") \n}\n{\n \"_id\" : ObjectId(\"5e49a469afae4a11c4ff3cf7\"), \n \"type\" : \"Feature\", \n \"geometry\" : { \n \"type\" : \"Polygon\", \n \"coordinates\" : [[ \n [ 14.609644294937652, 50.02081908813608 ],\n [ 14.609640493282695, 50.02078482460302 ],\n [ 14.609603249828798, 50.020776412254065 ],\n [ 14.609548954187744, 50.02072977072492 ],\n ... \n [ 14.609433408712134, 50.0208718692152 ],\n [ 14.609644294937652, 50.02081908813608 ]\n ]] \n }, \n \"properties\" : { \n \"Name\" : \"Region 1\" \n } \n}\ndb.points.aggregate([\n {$project: {\n coordinates: [\"$gps.lon\", \"$gps.lat\"]\n }}, \n {$lookup: {\n from: \"regions\", pipeline: [\n {$match: {\n coordinates: {\n $geoWithin: {\n $geometry: {\n type: \"Polygon\", \n coordinates: \"$geometry.coordinates\"\n }\n }\n }\n }}\n ], \n as: \"district\"\n }}\n])\n \"ok\" : 0,\n \"errmsg\" : \"Polygon coordinates must be an array\",\n \"code\" : 2,\n \"codeName\" : \"BadValue\"\ndb.points.aggregate([\n {$project: {\n coordinates: [\"$gps.lon\", \"$gps.lat\"]\n }}, \n {$lookup: {\n from: \"regions\", pipeline: [\n {$match: {\n coordinates: {\n $geoWithin: \"$geometry.coordinates\"\n }\n }}\n ], \n as: \"district\"\n }}\n])\n", "text": "I’m trying to match points in one collection with regions stored in another collection.\nHere are examples of documents.Points:Regions:And the query I’m trying to construct is something like this:I’m getting an error:assert: command failed: {} : aggregate failedI’ve noticed the structure of $geoWithin document is same as structure of one I have for each region. So I tried such query:The error was same.I looked up for geoqueries but surprisingly all found mentions had static region document instead of one taken from a collection. So I’m wondering - is it ever possible to map points with regions having that both document collections aren’t static and taken from DB?", "username": "Mychajlo_Chodorev" }, { "code": "$match$geoWithinPointsRegions", "text": "Hi @Mychajlo_Chodorev, welcome!I’m trying to match points in one collection with regions stored in another collection.In order to use expressive $lookup, the $match value in the pipeline requires to be an expression. Currently it’s not possible to use $geoWithin as an expression, there’s an open ticket for this SERVER-34766. Please feel free to watch/up-vote the ticket to receive update on it.One way to achieve what you’ve described is to use a client application to query from Points collection then programmatically build a geo query for Regions collection .Regards,\nWan.", "username": "wan" }, { "code": "", "text": "hi guys, I need to do something similar, on the one hand I have a collection of provinces with multiPolygon geometry, on the other hand I have a collection of geometry point (lon, lat), and I need to return in a query the points that fell in a given province, I am within the same error?query simple thats work:\ndb.provinciasArgentinas.find({\n“geometry”: {\n“$geoIntersects”: {\n“$geometry”: {\n“type”: “Point”,\n“coordinates”: [\n-58.443580877868406, -34.62635604171383\n]\n}\n}\n}\n})Thanks!", "username": "Marcelo_Laglaive" }, { "code": "", "text": "Hello!I’m trying to do something similar to that (matching points from one collection with polygons from another). Is there any updates that support this?Thanks!", "username": "LAG" } ]
Matching points from one collection with polygons from another
2020-03-07T21:38:24.335Z
Matching points from one collection with polygons from another
3,582
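Until SERVER-34766 lands, Wan's client-side workaround can be sketched in mongosh roughly as below - hedged: the collection and field names are the ones from the original question, and this issues one query per point, which may be slow for large collections:

```js
// For each point, find the region polygons that contain it.
db.points.find().forEach(point => {
  const regions = db.regions.find({
    geometry: {
      $geoIntersects: {
        $geometry: { type: "Point", coordinates: [point.gps.lon, point.gps.lat] }
      }
    }
  }).toArray();
  printjson({ point: point._id, regions: regions.map(r => r.properties.Name) });
});
```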
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Hello ,\nI would like to introduce in the query field the like system as in the sql to be able to search for a telephone number starting with the middle example: +3367854456\nmy search is done like this: 67854456 without the telephone code\ndo you have any idea how to do it please?\nCordially.", "username": "Mahdi_HAMMOU" }, { "code": "$regexdb.collection.find({\n \"phone\": {\n \"$regex\": \"67854456\"\n }\n})\n", "text": "Hi,You can use $regex operator:Working example", "username": "NeNaD" }, { "code": "{\n autocomplete: {\n query: `%${phoneSearch}%`,\n path: 'phone',\n },\n},\n", "text": "Hi NeNad ,Thanx for your response but unfortunately I avoid the regex because it slows down the call to the api I would rather something like this :if it exists ?Cordialy", "username": "Mahdi_HAMMOU" }, { "code": "", "text": "the response of my question → https://www.mongodb.com/docs/atlas/atlas-search/define-field-mappings/#std-label-bson-data-types-autocomplete", "username": "Mahdi_HAMMOU" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Auto-complete : do a middle search
2022-09-28T17:20:34.668Z
Auto-complete : do a middle search
1,228
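The linked page boils down to tokenizing the field with nGrams so that a middle-of-the-string match works without % wildcards. A hedged sketch - the collection name, gram sizes, and field mapping below are illustrative:

```js
// Atlas Search index definition (JSON, pasted in the Atlas UI):
// {
//   "mappings": { "dynamic": false, "fields": {
//     "phone": { "type": "autocomplete", "tokenization": "nGram",
//                "minGrams": 3, "maxGrams": 8 }
//   } }
// }

// Query: nGram tokenization lets this match mid-string.
db.contacts.aggregate([
  { $search: { autocomplete: { query: "67854456", path: "phone" } } }
])
```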
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "I have configured my app in app services to use custom JWT authentication. I have provided the JWK uri (https://test.stytch.com/v1/sessions/jwks/project-test-39e8d76b-25c1-4426-b6cb-515aca0b30e0) but I am receiving the following error when I try to login from my node app:Error:failed to fetch JWK from URI: failed to extract from map: failed to construct key from map: failed to extract key from map: failed to extract header values: failed to set value for key x5c: invalid value for x5c key: failed to parse certificate: x509: malformed serial numberAny advice?", "username": "Chris_Lawrence" }, { "code": "", "text": "Hi @Chris_Lawrence,\nCould you provide some more details:", "username": "Desislava_St_Stefanova" }, { "code": "", "text": "Thanks for the quick reply @Desislava_St_StefanovaHere is more info:I’m using the Node sdkThe JWT is coming directly from our authentication provider–Stytch. The JWK URI I’m using and posted a link to in the OP is an endpoint to obtain their JWK that is used to validate the token.I put the JWT in jwt.io and receive the following head information:\n\nimage888×394 21.2 KB\n", "username": "Chris_Lawrence" }, { "code": " const app = new Realm.App({ id: process.env.REALM_APP_ID! })\n const creds = Realm.Credentials.jwt(\"JWT_FROM_AUTH_PROVIDER\")\n app\n ?.logIn(credentials)\n .then((user) => {\n // Handle authenticated user...\n })\n .catch((err) => {\n logger.error('Error during realm app authentication', err)\n })\n", "text": "Here is a snippet of my code I’m using to auth in case this is any help", "username": "Chris_Lawrence" }, { "code": "", "text": "Hi @Chris_Lawrence,\nI have forwarded your question to the App Services team, but there seems to be some issue with the serial number of the certificate that is used to sign the JWT.\nThe only thing I can suggest is that you can check whether you get the same error if you use public key verification. You can convert your JWK to public key using some of the available “JWK to PEM converter”, then you can configure the Atlas App Service using the “Manually specify signing keys” option.\nLooking forward your response.", "username": "Desislava_St_Stefanova" }, { "code": "", "text": "Hey @Desislava_St_StefanovaI have some good news! I brought this same issue up with Stytch and they were able to reproduce it within Mongo. They believe it is an error with the way they are formatting their JWK. They should have a fix out soon.I was able to convert the JWK into PEM format and submit that to my app as a work around. Authentication is now working.I appreciated the help with this issue.", "username": "Chris_Lawrence" }, { "code": "", "text": "I’m glad to hear that.\nIt’s nice that the workaround works.\nGood luck with your project, @Chris_Lawrence!", "username": "Desislava_St_Stefanova" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error when using Custom JWT Authentication
2022-09-28T14:01:36.230Z
Error when using Custom JWT Authentication
3,736
null
[ "queries", "security", "atlas-data-lake" ]
[ { "code": "", "text": "We have read access on underlying Clusters’s db and datalake’s DB . access runs fine on underlying clusters but getting an error ( not authorized on DB to execute command) while retriving the data from dataLake ?", "username": "Dasharath_Dixit" }, { "code": "", "text": "Hey @Dasharath_Dixit, thanks for reaching out and apologies for the delayed response.When you say “Data Lake” here, are you referring to creating a “Pipeline” to make a copy of your cluster data for analytic purposes or are you referring to our Federated Query Engine product now called Atlas Data Lake. This transition was made at MongoDB World in June.For Atlas Data Lake your “Atlas User” needs to have Project Admin in order to create a pipeline.For Data Federation, your “Database User” needs to have access to the database and collection name in the Federated Database Instance just as it would in an Atlas Cluster.A couple of things to check if you are getting an unauthorized error querying in Shell or Compass would be if the Database user is “Scoped” to only have access to the cluster even though it has the same overall role. In Atlas there is a scoping section where you can restrict which resources (i.e. Clusters or Federated Database Instances) a “Database User” has access to.If this doesn’t help, feel free to reach out to me at [email protected], I’m happy to setup some time to quickly identify the issue here.Best,", "username": "Benjamin_Flast" }, { "code": "", "text": "|### Dasharath Dixit|2:37 PM (0 minutes ago)|||Hello Benjamin,I was referring the Federated Query Engine here. let me check with an option which you mention and get back to you.Regards\nDasharath", "username": "Dasharath_Dixit" }, { "code": "", "text": "Hey Dasharath,I wanted to follow up here and see if you were able to resolve this issue?Best,\nBen", "username": "Benjamin_Flast" }, { "code": "", "text": "hey @Benjamin_Flastthis issue got resolve for me after we provide the readanydatabase on admin DB underneath collections’s node .not sure how it related", "username": "Dasharath_Dixit" } ]
Least permission for DataLake user
2022-07-07T08:55:39.518Z
Least permission for DataLake user
2,959
null
[ "node-js", "compass", "atlas-cluster", "atlas" ]
[ { "code": "error: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017mongodb+srv://<dbuser>:<password>@noisycart-1.eoquk.mongodb.net/?retryWrites=true&w=majorityDeploy MongoDB Atlas with VPC peering into a new VPC", "text": "Hi there, I have been trying to connect my Node.js application in AWS EB to my AWS Atlas cluster via peering connection but I keep running into this error: error: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017. I read that it meant the Mongodb is not running but I am able to connect to it via Compass. In my Node.js application I am using the following connection string mongodb+srv://<dbuser>:<password>@noisycart-1.eoquk.mongodb.net/?retryWrites=true&w=majorityI used this quickstart guide MongoDB Atlas on the AWS Cloud, choosed the Deploy MongoDB Atlas with VPC peering into a new VPC template and some tips from this old video MongoDB (Atlas) VPC Peering with AWS Tutorial - Jay Gordon - YouTube to figure out the routing table stuff. The Atlas cluster was created in a new project and I have added the Atlas Cluster’s CIDR to the two private subnets created by the template in the host VPC. On Mongo end, I have also whitelisted the VPC CIDR and also the EC2 instance’s public IP.What am I missing? Thanks in advance for the assistance.", "username": "Justin_Cook" }, { "code": "", "text": "After some sleep and contemplation, I realized that with a connection string, it shouldn’t be hitting the localhost. Checked my code and voila, the mongoose.connect was permanently set to the localhost. The second piece that got it working was help from AWS saying that I also need to set the routes on the public subnet that my EB instance was deployed to. I only set the routes for the private subnet. Hopefully this can help anyone following the quickstart. Cheers.", "username": "Justin_Cook" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trying to connect to Mongo Atlas AWS VPC from Node.js application in Beanstalk of another VPC
2022-09-29T03:12:24.831Z
Trying to connect to Mongo Atlas AWS VPC from Node.js application in Beanstalk of another VPC
2,560
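The first fix in that self-answer - not hard-coding localhost - can be sketched like this; hedged, with the environment variable name invented and the SRV string being the one from the question:

```js
const mongoose = require('mongoose');

// Read the Atlas SRV string from the environment instead of hard-coding localhost.
const uri = process.env.MONGODB_URI ||
  'mongodb+srv://<dbuser>:<password>@noisycart-1.eoquk.mongodb.net/?retryWrites=true&w=majority';

mongoose.connect(uri)
  .then(() => console.log('Connected to Atlas'))
  .catch(err => console.error('Connection error', err));
```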
null
[ "node-js", "replication", "mongoose-odm", "connecting", "atlas-cluster" ]
[ { "code": "", "text": "I’m getting the following random message on my Mongo Atlas “Access History” view:“FAILED BadValue: SCRAM-SHA-256 authentication is disabled”This is attached to my application IP hosted at AWS. I basically got this message first and then the successfully message.", "username": "Thiago_Scodeler" }, { "code": "", "text": "Hi @Thiago_Scodeler - Welcome to the community.I basically got this message first and then the successfully message.Just for clarification, are you able to connect successfully but are just curious for the source / reason for the “FAILED BadValue: SCRAM-SHA-256 authentication is disabled” message? If so, does the following post reply help clarify this?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran thanks for your answer.\nYes, i’m able to connect successfully to the database. The main reason of my question is if this “error” generates any side-effect or bad usage of my database, such as two connections attempts, e.g. first connecting action failing and reconnecting with SHA-1 success.As per your shared post, this is just a auditing message, correct?", "username": "Thiago_Scodeler" }, { "code": "", "text": "I have the same question, there are double the connections, got here cuz I was troubleshooting a high connections utilization. Shall we configure mongoose to use sha1?", "username": "Ed_Durguti" }, { "code": "DEFAULTauthMechanism", "text": "Hi Ed and Thiago,The main reason of my question is if this “error” generates any side-effect or bad usage of my database, such as two connections attempts, e.g. first connecting action failing and reconnecting with SHA-1 success.I have the same question, there are double the connections, got here cuz I was troubleshooting a high connections utilization. Shall we configure mongoose to use sha1?I believe this depends on the driver in use, but I will refer the details of the Node.JS driver Authentication Mechanism documentation here just in regards to the connection issue raised.If the DEFAULT mechanism is used, before the connection is established, the authentication methods are tried in the order specified in the documentation. If all fail, then the connection is refused. However, once authentication is successful, the connection is accepted.I have the same question, there are double the connections, got here cuz I was troubleshooting a high connections utilization.@Ed_Durguti Do you have more details regarding the above? I.e. Do you see in your logs that the connection is accepted for both the failure and success for each authMechanism attempted?Additionally, do you have some more details about the Atlas tier in use?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you @Jason_Tran . Based on that information, i’ll force SHA-1 on my string URI connection, right now it is using the DEFAULT order.", "username": "Thiago_Scodeler" } ]
Random Message: "BadValue: SCRAM-SHA-256 authentication is disabled"
2022-09-27T15:47:49.766Z
Random Message: "BadValue: SCRAM-SHA-256 authentication is disabled"
2,647
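Pinning the mechanism in the URI, as the last reply proposes, looks roughly like this - hedged: placeholders for user, password, and cluster. It simply skips the SCRAM-SHA-256 attempt, so the failed-then-succeeded audit pairs should disappear:

```
mongodb+srv://<user>:<password>@<cluster>.mongodb.net/?authMechanism=SCRAM-SHA-1&retryWrites=true&w=majority
```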
https://www.mongodb.com/…f_2_1024x629.png
[ "data-modeling" ]
[ { "code": "", "text": "Hello, I am trying to build an app with “Database per service - Microservice Architecture Pattern”. I have 3 major services,Both internal and external forum services have a dependency on the user service. What approach should I use to provide the user information of createdBy & moderators with the topics get API?Should I need to store the reference in the respective services and retrieve them with events when it is needed for the view as demonstrated below diagram?\nScreenshot 2022-09-29 at 1.04.10 PM2082×1280 323 KB\nIf so duplicating the same data across different databases is not an issue?Please help me resolve this riddle. Thanks in advance.", "username": "Prakash_Chokalingam" }, { "code": "", "text": "Pls find the normalized data flow chart here\nScreenshot 2022-09-29 at 1.16.25 PM2024×1314 321 KB\n", "username": "Prakash_Chokalingam" } ]
Normalized data duplication is fine? or should I use events between services for the Database per service - Microservice Architecture
2022-09-29T09:37:49.599Z
Normalized data duplication is fine? or should I use events between services for the Database per service - Microservice Architecture
1,329
null
[]
[ { "code": "{\ncode: 'US-NY',\nname: 'New York',\ncities: [ ]\n},\netc ..\ncities:[\n {\n code: 'new-york',\n name:'New York',\n },\n {\n code: 'bufalo',\n name:'Búfalo',\n },\n]\n", "text": "Hello Everybody i’m so sorry to ask but i’ve been trying to look for an answer for MONTHS, and im already desperate because i can’t seem to find what i need till the point that i’m almost giving up with NoSQL.So here’s my Schema.\nI have States of a Country and they have their non-repeatable ISO CODE,… however inside the state (nested objects) we have cities:So when i add a document, i want to avoid duplicated nested objects (cities) with the same code (example: ‘bufalo’) within the same document (state).\nBUT i could use ‘bufalo’ in another document (state).When i set an index to cities.code to be Unique, it applies in the whole collection, and i want to use that code in another document.\nI would like to repeat ‘bufalo’ city code in another document (state), but i dont want it to be repeated in the same document.How could i archive this? Thank you so much for your kind help, i will be checking this post desperately.ThanksAlan D.", "username": "AlanDanielx_N_A" }, { "code": "\"cities\"$addToSet$addToSet$addToSet$addToSet", "text": "Hi @AlanDanielx_N_A,When i set an index to cities.code to be Unique , it applies in the whole collection, and i want to use that code in another document.I’d like to clarify that currently indexes work on the collection level so this behaviour is expected. The reason being is that indexes are used to select documents to return (as a subset of the collection) so this will not work inside a single document.So when i add a document , i want to avoid duplicated nested objects (cities) with the same code (example: ‘bufalo’) within the same document (state).\nBUT i could use ‘bufalo’ in another document (state).Looking at the \"cities\" array field, perhaps utilising $addToSet may suit your use case. As per the $addToSet documentation:The $addToSet operator adds a value to an array unless the value is already present, in which case $addToSet does nothing to that array.Regards,\nJason", "username": "Jason_Tran" }, { "code": "{\n code: 'US-NY',\n name: 'New York',\n cities: [\n {\n code: 'new-york',\n name:'New York',\n },\n {\n code: 'bufalo',\n name:'Búfalo',\n } ]\n}\n code: 'US-NY',\n name: 'New York',\n cities: \n {\n 'new-york': { \n code: 'new-york',\n name:'New York',\n },\n 'bufalo': {\n code: 'bufalo',\n name:'Búfalo',\n } \n }\n}\n$objectToArray$exists$unset", "text": "What I’ve done once, is dropping array for subobject instead. So instead ofyou getthis of cause change the way you can work with the document, but it might suit your need. Or it might not. It depends Anyway - there are some nice operators that can help you out if you change the schema.\nLike the aggregation operator $objectToArray that can convert the object back into an array.\nAnd checking if a city exist becomes super simple with the $exists operator. Deleting with the $unset operator etc.But again - you might have other criteria’s that demands an array.", "username": "Vegar_Vikan" } ]
Im so frustrated. Avoid duplicated nested objects within the same document ONLY
2022-09-27T00:29:19.520Z
Im so frustrated. Avoid duplicated nested objects within the same document ONLY
1,630
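A hedged sketch of Jason's $addToSet suggestion (the collection name is invented). Note that $addToSet compares the whole subdocument, so a second variant using a $ne guard on cities.code is stricter when only the code must be unique:

```js
// Adds the city only if an identical subdocument is not already present.
db.states.updateOne(
  { code: "US-NY" },
  { $addToSet: { cities: { code: "bufalo", name: "Búfalo" } } }
)

// Stricter: only push when no city with that code exists in this document.
db.states.updateOne(
  { code: "US-NY", "cities.code": { $ne: "bufalo" } },
  { $push: { cities: { code: "bufalo", name: "Búfalo" } } }
)
```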
null
[ "sharding" ]
[ { "code": "", "text": "I use MongoDB 4.2 ver.I created a situation where a large amount of CRUD was always coming from the application, and I made it run chunk migration periodically.But one day, there was a serious delay in one of Shard’s CRUD operations. This also caused a service failure. I think a lot of query load and chunk migration is the cause.Q1. So I looked at the log, and I think the log below is the log for this phenomenon, so what is the meaning of this log?Date~~ W STORAGE [FlowControlRefresher] Flow control is engaged and the sustainer point is not moving. Please check the health of all secondaries.Q2. To what level does the chunk migration take?Q3. Currently, I am using the read preference as the primary, but if I change it to secondary, can I solve the delay for the read job during the chunk migration? And is there another solution?", "username": "Kim_Hakseon" }, { "code": "flowControlTargetLagSecondsw:2", "text": "Hi @Kim_Hakseon,Q1. So I looked at the log, and I think the log below is the log for this phenomenon, so what is the meaning of this log?Flow control is a feature in MongoDB 4.2+ (enabled by default) that attempts to limit write throughput for a replica set primary in order to keep the replication lag for majority committed data under the flowControlTargetLagSeconds. Normal flow resumes as the replica set catches up on the backlog of writes and advances the majority commit point.Q2. To what level does the chunk migration take?The Chunk Migration Procedure includes reading documents from a source shard, replicating them to a destination shard, and deleting those documents from the source shard at a critical step when the migration completes. MongoDB can perform parallel chunk migrations, but a shard can participate in at most one migration at a time.If any of the shard replica sets in your deployment are lagging enough to trigger flow control limitations, this can also affect chunk migrations to/from the affected shard.Q3. Currently, I am using the read preference as the primary, but if I change it to secondary, can I solve the delay for the read job during the chunk migration? And is there another solution?Flow control is related to writing data to a majority of replica set members. I expect secondary read preferences won’t be helpful as you’ll be pushing more work to secondaries which already are unable to keep up with replicating writes from the primary.To manage the impact of chunk migrations you could:Avoid using arbiters. Arbiters cannot acknowledge writes and will introduce scenarios with additional replication lag and memory pressure with if there is a voting majority to sustain a primary without a majority of data bearing replica set members available to acknowledge writes. See Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie for more info.Schedule the Balancing Window so chunk migrations happen during an off-peak period with less contention for other CRUD activity.Enable Secondary Throttle so chunk migrations wait for acknowledgement (equivalent to w:2) for each document move.Set Wait for Delete so the final delete phase of a migration is a blocking phase before the next chunk migration. The default is for deletions to happen asychronously.Tune or disable flow control.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "hi, @Stennie_X and Thank you Can I ask you again about your answers?Q2-1. 
I expect a lock to be taken during chunk migration, but is this lock collection-level or document-level?\n(And I think this lock is an intent lock.)“MongoDB can perform parallel chunk migrations, but a shard can participate in at most one migration at a time.”\nQ2-2. In a 3-shard sharded cluster, are you saying that only 2 shards can participate in 1 chunk migration?\n(1 → 2 or 2 → 3 or 3 → 1 chunk migrating… and the other shard sits out.)Q3-1. I understood that write operations are controlled by flow control during chunk migration. If so, is there no relationship between chunk migration and read operations?", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Q. about Sharded Cluster Chunk Migration
2022-09-26T09:25:58.183Z
Q. about Sharded Cluster Chunk Migration
2,447
null
[ "mongodb-shell" ]
[ { "code": "$ ./mongosh --tls --tlsAllowInvalidCertificates --tlsAllowInvalidHostnames --tlsCertificateKeyFile=/home/test/root.pem --authenticationMechanism=MONGODB-X509 --authenticationDatabase='$external' --username='CN=root,OU=user,O=TestCompany,L=New York,ST=New York,C=US' localhost:27017/admin\nEnter password: # <- would like to get rid of this prompt\nCurrent Mongosh Log ID: xxxxxxxxxxxxxxxxxxxxxxxx\n", "text": "Hi, is there any way to avoid getting a password prompt via the mongosh when authMechanism=MONGODB-X509 is passed?", "username": "shun" }, { "code": "mongodmongos", "text": "Hi @shun and welcome to the MongoDB community!!I tried to reproduce the issue in my local environment using the documentations available for a test environment, however I was not successful in reproducing the password prompt that you are seeing. That is, using authMechanism=MONGODB-X509 does not produce a password prompt for me. I am using MongoDB 6.0.1.\nCould you help me with the steps or the documentations you are following and observing the same.Also, it would be great if you could help with the following details:Please refer to the following documentations for reference:Also, to create some test certificates, please check below links:Also note that these are only for test environments and not recommended for production.Let us know if you have further queries.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "$externaltest_dev_rs0 [direct: primary] $external> db.getUsers()\n{\n users: [\n {\n _id: '$external.CN=root,OU=user,O=TestCompany,L=New York,ST=New York,C=US',\n user: 'CN=root,OU=user,O=TestCompany,L=New York,ST=New York,C=US',\n db: '$external',\n roles: [ { role: 'root', db: 'admin' } ],\n mechanisms: [ 'external' ]\n }\n ],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1664431086, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"ae15871b21094ce7cc8883ea2a85eef07f02cce9\", \"hex\"), 0),\n keyId: Long(\"7120435454641438721\")\n }\n },\n operationTime: Timestamp({ t: 1664431086, i: 1 })\n}\n--authenticationMechanism=MONGODB-X509--username='CN=root,OU=user,O=TestCompany,L=New York,ST=New York,C=US'--username--username--authenticationMechanism--username", "text": "Hi @AasawariThank you for your information.Using MongoDB: 6.0.1\nUsing Mongosh: 1.5.4Then, I think I found the root cause. I was passing both --authenticationMechanism=MONGODB-X509 and --username='CN=root,OU=user,O=TestCompany,L=New York,ST=New York,C=US' then --username part expected to come with password as well. After removing the --username part, I’m no longer seeing the password prompt!Maybe would it be helpful if mongosh errors out if it gets both --authenticationMechanism and --username?Thanks,\nShun", "username": "shun" } ]
Can mongosh stop prompting password when authMechanism=MONGODB-X509 is passed?
2022-09-22T08:59:50.184Z
Can mongosh stop prompting password when authMechanism=MONGODB-X509 is passed?
1,300
https://www.mongodb.com/…e_2_1024x491.png
[]
[ { "code": "", "text": "\nimage1355×651 87.3 KB\n\ntrying to get mongodb on my chrombook which uses debian 11- keep getting core dump, would love some help.", "username": "Alexander_Estrella-Martinez" }, { "code": "", "text": "You received SIGILL (Illegal Instruction) because your CPU does not support this version.", "username": "steevej" }, { "code": "/proc/cpuinfomongod", "text": "Welcome to the MongoDB Community @Alexander_Estrella-Martinez !As @steevej suggested, an illegal instruction error means the binary you have installed is using instructions not supported by your CPU or virtualisation environment. The x86_64 microarchitecture requirements for MongoDB 5.0 precompiled binaries include CPUs that have AVX Extensions available.Please confirm:version of MongoDB server you have installedoutput of /proc/cpuinfowhether mongod is running inside a virtualised or container environmentIf your CPU is not compatible with the available 5.0+ server binaries, your options are generally:Regards,\nStennie", "username": "Stennie_X" } ]
Debian 11 - Core dump - help please
2022-09-28T14:38:04.953Z
Debian 11 - Core dump - help please
2,275
null
[]
[ { "code": "", "text": "Hi all.\nWe are faced with replicating data between several AWS regions task. We need to replicate any user profile changes across all regions. At the same time, profile data can change in any of the regions in which the user is currently located.For example we have 3 regionsI would like to know who had a similar problem and how it can be solved in the best way)", "username": "Alexey_D" }, { "code": "", "text": "Hey @Alexey_D,Welcome to the MongoDB Community Forums! From what you mentioned, it seems you are looking for a multi-master setup, where any data can be changed anywhere and be propagated to all clusters. In an Atlas replica set deployment, members can be hosted in different regions. You can use Atlas Global Clusters for your use case. I’m attaching a few documentation links for you to go over.\nManage Global Clusters\nSegmenting Data by LocationPlease let us know if there’s any more information needed around this. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "Hey Satyam,\nUnfortunately, we don’t use MongoDB Atlas. We chose the MongoDB Multi-Master deployment on AWS. Each our region will has own MongoDB cluster. And use the MongoDB Kafka Connector as the replication mechanism.\nBut also we need to integrate with AWS DocumentDB database and stream changes to MongoDB. We will use Kafka as a message broker.\nCan we use MongoDB Kafka Connector as a Sink/Source for AWS DocumentDB ?", "username": "Alexey_D" }, { "code": "", "text": "Hey @Alexey_D,But also we need to integrate with AWS DocumentDB database and stream changes to MongoDB. We will use Kafka as a message broker.\nCan we use MongoDB Kafka Connector as a Sink/Source for AWS DocumentDB ?Our Kafka connector was not designed to work with products other than genuine MongoDB servers. There is no guarantee that it can work correctly, or at all, with other databases.Also, since you mentioned Multi-Master deployment, I came across a blog on our developer centre that should be of use to you. It talks about deploying an application across multiple data centers where application servers in all data centers are simultaneously processing requests, something that you are trying to do as well.\nActive Active Architectures in MongoDBRegards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB on AWS. Sync data between regions
2022-09-19T09:38:48.606Z
MongoDB on AWS. Sync data between regions
2,496
null
[]
[ { "code": "{\"t\":{\"$date\":\"2022-09-27T06:40:07.049+02:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20573, \"ctx\":\"initandlisten\",\"msg\":\"Wrong mongod version\",\"attr\":\n{\"error\":\"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document \n(ERROR: BadValue: Invalid value for version, found 4.0, expected '4.4' or '4.2'. Contents of featureCompatibilityVersion document in admin.system.version: \n{ _id: \\\"featureCompatibilityVersion\\\", version: \\\"4.0\\\" }. See https://docs.mongodb.com/master/release-notes/4.4-compatibility/#feature-compatibility.). \nIf the current featureCompatibilityVersion is below 4.2, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/4.4/#upgrade-procedures.\"}}\n", "text": "Hello There,I have upgraded mongo from 4.2 to 4.4 and got this error which means earlier no one has done upgrade properlywhen checked it was having below\necho “db.adminCommand({getParameter: 1,featureCompatibilityVersion: 1})” | mongoecho “db.adminCommand({getParameter: 1,featureCompatibilityVersion: 1})” | mongo admin\nMongoDB shell version v4.4.12\nconnecting to: mongodb://127.0.0.1:27017/admin?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(“a0ba77a8-af3e-44e0-8c10-b0928495d05b”) }\nMongoDB server version: 4.4.12\n{ “featureCompatibilityVersion” : { “version” : “4.0” }, “ok” : 1 }\nbyeso what I did launched mongo in 4.2 version and ran below command\ndb.adminCommand({setFeatureCompatibilityVersion: “4.2”})and restarted the mongo service, and it started working. I can see below\necho “db.adminCommand({getParameter: 1,featureCompatibilityVersion: 1})” | mongoecho “db.adminCommand({getParameter: 1,featureCompatibilityVersion: 1})” | mongo admin\nMongoDB shell version v4.4.12\nconnecting to: mongodb://127.0.0.1:27017/admin?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(“a0ba77a8-af3e-44e0-8c10-b0928495d05b”) }\nMongoDB server version: 4.4.12\n{ “featureCompatibilityVersion” : { “version” : “4.2” }, “ok” : 1 }\nbyeI have query here, my mongodb is up and running is that mean I dont need to do any dump or restoration , I am new to mongodb and not sure if the current running mongo instance is correct since I didnt see any issues after that,can someone guide what had happened hereRegards\nSAM", "username": "sameer_khamkar" }, { "code": "", "text": "Hi @sameer_khamkar, and welcome to the MongoDB Community forums! Since the database is running under 4.4.x after you updated the FCV to 4.2, you should be good to continue using the database without further work. What you did is the correct thing.I would always recommend taking a backup of the database before doing an upgrade, so something to remember for the future. You want to do that just in case something happens during the upgrade.I would make sure your application has no issues with the database for a couple of weeks and if all looks good, then I would upgrade the FCV to 4.4. Waiting for a few weeks allows for an easier migration back to the 4.2 version if necessary. 
The FCV does not change by design on an upgrade. The following tip is taken from the upgrade notes for 6.0, although other versions have a similar note:Enabling these backwards-incompatible features can complicate the downgrade process since you must remove any persisted backwards-incompatible features before you downgrade.It is recommended that after upgrading, you allow your deployment to run without enabling these features for a burn-in period to ensure the likelihood of downgrade is minimal. When you are confident that the likelihood of downgrade is minimal, enable these features.", "username": "Doug_Duncan" }, { "code": "", "text": "Thank you Doug for your time and support, thanks for the encouragement ", "username": "sameer_khamkar" }, { "code": "featureCompatibilityVersiondb.adminCommand( { setFeatureCompatibilityVersion: \"4.4\" } ) \n", "text": "I have query here, my mongodb is up and running is that mean I dont need to do any dump or restoration , I am new to mongodb and not sure if the current running mongo instance is correct since I didnt see any issues after that,Welcome to the MongoDB community @sameer_khamkar!It sounds like you missed setting the prerequisite 4.2 featureCompatibilityVersion (fCV) before upgrading from MongoDB 4.2 to 4.4, but were able to fix that by downgrading to 4.2 and finishing that step before continuing with the 4.4 binary upgrade.If there are no further errors after your upgrade to MongoDB 4.4, you should be set.Note: The last step in a major version upgrade is setting fCV so you can use newer features that persist backward-incompatible changes. As @Doug_Duncan suggested, you may want to wait if a downgrade to 4.2 may still be likely.Otherwise the final step of your 4.4 upgrade should be completed:For full details on what that fCV change enables, please see Compatibility Changes in MongoDB 4.4.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
setFeatureCompatibilityVersion
2022-09-27T10:00:43.229Z
setFeatureCompatibilityVersion
2,414
null
[]
[ { "code": "Incident Identifier: \t\tC2C21955-D527-4C63-88D6-A500403D8776\nCrashReporter Key: \tdbe7dfb8fae1aa676c65abdfa2877b59fee4135a\nHardware Model: \tiPhone12,3\nProcess: \t\tXXXXXXXX\nPath: \t\t/private/var/containers/Bundle/Application/CC4E18A3-730F-49D4-8690-DB4629D75186/XXXXXX.app/xxxxxx\nIdentifier: \t\tXXXXXXXX\nVersion: \t\t1 (2.0.2)\nCode Type: \t\tARM-64 (Native)\nRole: \t\tForeground\nParent Process: \t\tlaunchd [1]\nCoalition: \t\tXXXXXXXXX\n\n\nDate/Time: \t\t2021-11-19 11:24:47.6376 +0200\nLaunch Time: \t\t2021-11-18 19:33:58.7292 +0200\nOS Version: \t\tiPhone OS 14.8.1 (18H107)\nRelease Type: \t\tUser\nBaseband Version: \t2.06.00\nReport Version: \t\t104\n\nException Type: \t\tEXC_CRASH (SIGABRT)\nException Codes: \t\t0x0000000000000000, 0x0000000000000000\nException Note: \t\tEXC_CORPSE_NOTIFY\nTriggered by Thread: \t3\n\nX\nX\nX\n\nThread 3 Crashed:\n0 libsystem_kernel.dylib \t0x00000001b7507334 __pthread_kill + 8\n1 libsystem_pthread.dylib 0x00000001d4f8da9c pthread_kill + 272\n2 libsystem_c.dylib \t0x000000019268eb84 abort + 124\n3 libc++abi.dylib \t0x000000019df04bb8 __cxxabiv1::__aligned_malloc_with_fallback+ 80824 (unsigned long) + 0\n4 libc++abi.dylib \t0x000000019def5eb0 demangling_terminate_handler+ 20144 () + 284\n5 libobjc.A.dylib \t0x000000019de0206c _objc_terminate+ 28780 () + 160\n6 libc++abi.dylib \t0x000000019df03fa0 std::__terminate(void (*)+ 77728 ()) + 20\n7 libc++abi.dylib \t0x000000019df06c0c __cxa_get_exception_ptr + 0\n8 libc++abi.dylib \t0x000000019df06bb8 __cxxabiv1::exception_cleanup_func+ 89016 (_Unwind_Reason_Code, _Unwind_Exception*) + 0\n9 Realm \t0x0000000106462f6c 0x106120000 + 3420012\n10 Realm \t0x0000000106601340 0x106120000 + 5116736\n11 Realm \t0x00000001066660ac 0x106120000 + 5529772\n12 Realm \t0x00000001065fd3ec 0x106120000 + 5100524\n13 Realm \t0x000000010677331c 0x106120000 + 6632220\n14 libsystem_pthread.dylib 0x00000001d4f8cbfc _pthread_start + 320\n15 libsystem_pthread.dylib 0x00000001d4f95758 thread_start + 8\n", "text": "Hello,I’m occasionally seeing on MongoDB Realm logs this Error: error loading dependencies for app. Newest incident: Nov 19 11:24:47+02:00Seems to pop up when we have had for some time idle on MongoDB realm and our existing user tries to log in with custom JWT. When logging in again everything works normally and both from IOS app and MongoDB Realm point of view.Our app also crashes at the same time. (See crash report below) Unfortunately this was a non release build w/ debug information w/o dSYM and standalone use. Crash report shows that it’s an app side Realm related crash as can be seen from report:Going through app code application is written so that it should recover from failing login attempt cracefully. Kind of hinting that issue lies in app.login(credentials:compeletionHandler:) which should in completion handler return error if something goes wrong. Unfortunately without dSYM file I can’t verify that.Anyhow, could some check what causes this MongoDB Realm dependency issue and what happens when it’s triggered. I also have now made a build with dSYM activated for debug thus when this happens again hopefully I can really see what happens on app itself. Unfortunately this phenomenon does not happen if there has been some activity on MongoDB realm side thus making debugging really difficult.-janne", "username": "Janne_Jussila" }, { "code": "", "text": "We also have this issue sporadically\n\nimage964×1224 65.8 KB\n", "username": "Anton_P" }, { "code": "", "text": "Has anyone resolved this? 
I am now having this issue for existing users and the app immediately crashes at login.", "username": "Jason_Tulloch1" }, { "code": "", "text": "I’m also experiencing this error–has anyone made headway?", "username": "Harry_Heffernan" }, { "code": "", "text": "Hi @Harry_Heffernan (and anyone else experiencing an error with App Services dependencies),Please start a new discussion topic with details around your own environment and any errors encountered. This will help focus on different problems which may manifest with similar error messages.Details on the dependencies you are using, versions of Realm SDK (if applicable to reproducing the error), and how this error message presents for your use case would be helpful.Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
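Several replies report the app crashing on a failed login rather than recovering. The thread is about the iOS SDK, but the same defensive pattern is easiest to show in the Realm JS SDK, which the analogous Swift completion handler mirrors. A minimal sketch; the app ID and token are placeholders, and this only catches a failed login on the client, it does not fix the server-side dependency error:

```js
const Realm = require("realm");

const app = new Realm.App({ id: "your-app-id" }); // placeholder App Services app ID

async function login(jwtToken) {
  try {
    // Custom JWT login, matching the flow described in the first post.
    const user = await app.logIn(Realm.Credentials.jwt(jwtToken));
    return user;
  } catch (err) {
    // Surface the failure instead of letting it propagate and crash the app;
    // the caller can retry or fall back to a login screen.
    console.error(`Login failed: ${err.message}`);
    return null;
  }
}
```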
MongoDB Realm: Error: error loading dependencies for app
2021-11-19T11:08:18.815Z
MongoDB Realm: Error: error loading dependencies for app
2,816
null
[ "security" ]
[ { "code": "PS C:\\Program Files\\MongoDB\\Server\\3.2\\bin> ./mongo\nMongoDB shell version: 3.2.19-24-g70faecde6a\nconnecting to: test\n\nshow dbs\n2022-09-28T09:07:27.049-0700 E QUERY [thread1] Error: listDatabases failed:{\n\"ok\" : 0,\n\"errmsg\" : \"not authorized on admin to execute command { listDatabases: 1.0 }\",\n\"code\" : 13\n} :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nMongo.prototype.getDBs@src/mongo/shell/mongo.js:62:1\nshellHelper.show@src/mongo/shell/utils.js:781:19\nshellHelper@src/mongo/shell/utils.js:671:15\n@(shellhelp2):1:1\n\nexit\n", "text": "Hi All,I need some urgent help — how do I log in as a super user to a 3.2 mongodb instance running locally on a Windows 2012 host?I was recently told we have a mongodb instance 3.2 running on a local windows 2012, it is not on Atlas. I need to create a user for development team so they can investigate a production issue.I can run a command window as an administrator on the host (they just gave me this access). see below:What I need to do is be able to log in as a super user and run below to create a new read only user:db.createUser({user: “newreadonlyuser”, pwd: “newpassword”, roles: [{role: “readAnyDatabase”, db: “ourdb”}]})PS. I was given a user/pass that I can use to login to make changes on the above database “ourdb”, but it appears it has no privilege to create a new user.Any help would be highly appreciated!!Thanks!Erin", "username": "Erin_Bao" }, { "code": "mongodmongod--authmongod--auth", "text": "Hi @Erin_Bao welcome to the community!In most cases if you have admin access to the machine the mongod process is running at, you can run mongod without the --auth parameter to basically make the server run without enforcing auth. In this state, you should be able to create the user, or do any other administration task.Please make sure that while the mongod is in this no-auth state, it’s safe from any network exposure or any other potential intrusion, since anyone can connect to it and be an admin in this state. Once you’re done, you should re-enable --auth again as soon as possible.Before doing this I would recommend you to take a backup, and consider upgrading to a supported version (MongoDB 3.2 series is out of support since September 2018).Best regards\nKevin", "username": "kevinadi" } ]
How to log in to a local mongodb 3.2 running on Windows 2012 as super user to create a new user?
2022-09-28T16:56:54.879Z
How to log in to a local mongodb 3.2 running on Windows 2012 as super user to create a new user?
1,986
null
[ "queries" ]
[ { "code": "", "text": "I have a question about setting the connection pool and timeout.\nIs there a way to calculate and set the optimal value?", "username": "rinjyu_N_A" }, { "code": "", "text": "Hi @rinjyu_N_AThis may be able to help you: Tuning Your Connection Pool SettingsThe optimal values for those settings would be dependent on your use case and you specific deployment situation, so unfortunately there is no single easy answer for this.However, are you seeing some error/issue that you think can be fixed by tuning those settings? Are the defaults not working out for you?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hello.\nI need to set the connection pool and timeout, but I wanted to get know-how because I didn’t have enough experience.\nI will refer to the link you sent me.\nThanks for your help. ", "username": "rinjyu_N_A" } ]
Connection pool & timeout settings
2022-09-28T04:39:40.139Z
Connection pool &amp; timeout settings
1,201
null
[ "queries" ]
[ { "code": "", "text": "The question I had is posted in stackoverflow. If somebody in this forum knows the answer kindly post solution.", "username": "Prabhu_P" }, { "code": "", "text": "Kindly re-post the question here so that we do not have to navigate into 2 different sites.", "username": "steevej" }, { "code": "\n\t[{\n\t\t\"day\": 10,\n\t\t\"orders\": [{\n\t\t\t\"id\": 1,\n\t\t\t\"items\": {\n\t\t\t\t\"uuid1\": {\n\t\t\t\t\t\"name\": \"item1\",\n\t\t\t\t\t\"status\": false\n\t\t\t\t},\n\t\t\t\t\"uuid2\": {\n\t\t\t\t\t\"name\": \"item2\",\n\t\t\t\t\t\"status\": true\n\t\t\t\t},\n\t\t\t\t\"uuid3\": {\n\t\t\t\t\t\"name\": \"item2\",\n\t\t\t\t\t\"status\": false\n\t\t\t\t}\n\t\t\t}\n\t\t}]\n\t},\n\t{\n\t\t\"day\": 11,\n\t\t\"orders\": [{\n\t\t\t\"id\": 1,\n\t\t\t\"items\": {\n\t\t\t\t\"uuid1\": {\n\t\t\t\t\t\"name\": \"item1\",\n\t\t\t\t\t\"status\": false\n\t\t\t\t},\n\t\t\t\t\"uuid2\": {\n\t\t\t\t\t\"name\": \"item2\",\n\t\t\t\t\t\"status\": true\n\t\t\t\t},\n\t\t\t\t\"uuid3\": {\n\t\t\t\t\t\"name\": \"item2\",\n\t\t\t\t\t\"status\": false\n\t\t\t\t}\n\t\t\t}\n\t\t}]\n\t}]\n\t\n db.<collection>.update (\n {day: 10},\n { $pull: { \n orders: {items: { status: 'true'} } }\n },\n {multi: true}\n );\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 0 })\n", "text": "The collection I have is like:I would like to delete items with status marked as true for day 10.\nTried the one below:The result is:and so it doesn’t delete.How should we rewrite this update and pull query so that the items of day 10 marked with status as true are deleted?", "username": "Prabhu_P" }, { "code": "// Object uuid2 removed because status:true\n{\n\t\t\"day\": 10,\n\t\t\"orders\": [{\n\t\t\t\"id\": 1,\n\t\t\t\"items\": {\n\t\t\t\t\"uuid1\": {\n\t\t\t\t\t\"name\": \"item1\",\n\t\t\t\t\t\"status\": false\n\t\t\t\t},\n\t\t\t\t\"uuid3\": {\n\t\t\t\t\t\"name\": \"item2\",\n\t\t\t\t\t\"status\": false\n\t\t\t\t}\n\t\t\t}\n\t\t}]\n\t}\n\n// or object with id:1 removed from array orders because one uuid has status:true\n\n{\n\t\t\"day\": 10,\n\t\t\"orders\": [ ]\n\t},\n", "text": "Few things first:I am not sure of what is the expected results.Do you want the document to become?If uuid2 to be remove then $pull cannot be used because uuid2 is a field inside an object.If not to late in you model life cycle, I would take a look at Building with Patterns: The Attribute Pattern | MongoDB Blog because the dynamic nature of uuid1, uuid2, … complicates your life.", "username": "steevej" }, { "code": "", "text": "Thanks Steevej.The document should become like the first one you listed i.e. “uuid2” removed.\nIts a legacy data and I found pull cannot be used but the other option is “$unset” and its very hard to frame a query that can match the value of the key and delete the key.", "username": "Prabhu_P" } ]
Delete a nested object from a collection
2021-11-22T20:09:04.410Z
Delete a nested object from a collection
7,355
null
[ "installation" ]
[ { "code": "", "text": "Hello,there’s got no repository for Debian 11 (bullseye), the newest is for Debian 10 (buster).I think as Debian 11 is fledging, it’s time to honor this, so installing and updating MongoDB will be possible the standard debian way. Many thanks in advance.", "username": "Berthold_Humkamp" }, { "code": "testingdocs/build.md", "text": "Welcome to the MongoDB Community Forums @Berthold_Humkamp!Packaging and support for O/S releases is currently addressed after those releases become Generally Available (GA). Unstable/development/testing releases can be moving targets since dependencies and behaviour may change during the O/S development cycle. Creating production-ready MongoDB packages for a new O/S release also involves standing up the testing infrastructure to run continuous integration and performance tests to try to detect unexpected behaviour or regressions. Debian 11 is currently the testing branch for Debian – a target release date has not been announced yet but is expected later in 2021.If you want to work with Debian 11 while it is in development, you can either compile MongoDB from source (following the docs/build.md instructions for the specific release version you want to build on GitHub) or try to use the Debian 10 packages (which is possibly an option if the system libraries are still compatible).If you don’t have a strict requirement to run MongoDB server processes in a local Debian 11 environment, you could also run your application code locally and connect to a remote or cloud-hosted MongoDB deployment running on a supported O/S platform.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" }, { "code": "", "text": "Noting for the archives: Debian 11 is supported for MongoDB 5.08 or newer per the Platform Support Matrix.Regards,\nStennie", "username": "Stennie_X" } ]
Debian 11 repository
2021-06-12T09:15:33.343Z
Debian 11 repository
8,553
null
[]
[ { "code": "", "text": "What is the status of Debian 11 support with debian 11 binaries? (see Debian 11 repository)Debian 11 Bullseye is GA in the meantime. It would be great, if there are official packages for it.", "username": "Alexander_Meindl" }, { "code": "", "text": "Noting for the archives: Debian 11 is supported for MongoDB 5.08 or newer per the Platform Support Matrix.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Debian 11 support
2021-10-22T18:08:13.280Z
Debian 11 support
2,571
null
[ "aggregation", "queries", "node-js", "crud" ]
[ { "code": "{\n \"_id\":{\n \"$oid\":\"62e932661c0f2e018fe6be47\"\n },\n \"buCode\":\"toto\",\n \"clientNumber\":\"111111\",\n \"audiences\":[\n {\n \"audience\":{\n \"audienceId\":\"2\",\n \"name\":\"test mdia\",\n \"createdDate\":null,\n \"updatedDate\":null,\n \"maximumNumberOfUseDuringMembership\":1,\n \"membershipDuration\":20,\n \"category\":\"LOYALTY\",\n \"description\":\"\",\n \"buCode\":\"titi\"\n },\n \"startDate\":{\n \"$date\":\"2022-08-01T10:15:10.000Z\"\n },\n \"remainingNumberOfUse\":1,\n \"lastEntryDate\":{\n \"$date\":\"2022-08-02T14:19:19.106Z\"\n }\n }\n ]\n}\n-----------------------------------------------------------\n{\n \"_id\":{\n \"$oid\":\"6321ac546402366af853592c\"\n },\n \"buCode\":\"toto\",\n \"clientNumber\":\"22222\",\n \"audiences\":[\n {\n \"audience\":{\n \"_id\":{\n \"$oid\":\"62c7f6a1b5824a6a312ed5e6\"\n },\n \"buCode\":\"toto\",\n \"category\":\"LOYALTY\",\n \"audienceId\":\"2\",\n \"name\":\"Test MDIA\",\n \"description\":\"prova1\",\n \"maximumNumberOfUseDuringMembership\":1,\n \"membershipDuration\":30,\n \"updatedDate\":{\n \"$date\":\"2022-08-23T13:46:53.763Z\"\n }\n },\n \"startDate\":{\n \"$date\":\"2022-08-01T00:00:00.000Z\"\n },\n \"remainingNumberOfUse\":1,\n \"lastEntryDate\":{\n \"$date\":\"2022-09-13T09:19:35.897Z\"\n }\n },\n {\n \"audience\":{\n \"buCode\":\"toto\",\n \"category\":\"LOYALTY\",\n \"audienceId\":\"2\",\n \"name\":\"Test MDIA 2\",\n \"maximumNumberOfUseDuringMembership\":1,\n \"membershipDuration\":30,\n \"createdDate\":{\n \"$date\":\"2022-09-08T14:52:46.000Z\"\n },\n \"updatedDate\":{\n \"$date\":\"2022-09-08T10:04:19.167Z\"\n }\n },\n \"startDate\":{\n \"$date\":\"2022-09-13T00:00:00.000Z\"\n },\n \"lastEntryDate\":{\n \"$date\":\"2022-09-13T12:32:44.791Z\"\n },\n \"remainingNumberOfUse\":1\n },\n {\n \"audience\":{\n \"buCode\":\"toto\",\n \"category\":\"LOYALTY\",\n \"audienceId\":\"3\",\n \"name\":\"Test MDIA 3\",\n \"maximumNumberOfUseDuringMembership\":1,\n \"membershipDuration\":30,\n \"createdDate\":{\n \"$date\":\"2022-09-09T14:52:46.000Z\"\n },\n \"updatedDate\":{\n \"$date\":\"2022-09-09T10:04:19.167Z\"\n }\n },\n \"startDate\":{\n \"$date\":\"2022-09-13T00:00:00.000Z\"\n },\n \"lastEntryDate\":{\n \"$date\":\"2022-09-13T12:32:44.791Z\"\n },\n \"remainingNumberOfUse\":1\n },\n {\n \"audience\":{\n \"buCode\":\"toto\",\n \"category\":\"LOYALTY\",\n \"audienceId\":\"4\",\n \"name\":\"Test MDIA 4\",\n \"maximumNumberOfUseDuringMembership\":1,\n \"membershipDuration\":30,\n \"createdDate\":{\n \"$date\":\"2022-09-09T14:52:46.000Z\"\n },\n \"updatedDate\":{\n \"$date\":\"2022-09-09T10:04:19.167Z\"\n }\n },\n \"startDate\":{\n \"$date\":\"2022-09-13T00:00:00.000Z\"\n },\n \"lastEntryDate\":{\n \"$date\":\"2022-09-13T12:32:44.791Z\"\n },\n \"remainingNumberOfUse\":1\n },\n {\n \"audience\":{\n \"buCode\":\"toto\",\n \"category\":\"LOYALTY\",\n \"audienceId\":\"5\",\n \"name\":\"Test MDIA 5\",\n \"maximumNumberOfUseDuringMembership\":1,\n \"membershipDuration\":60,\n \"createdDate\":{\n \"$date\":\"2022-09-09T14:52:46.000Z\"\n },\n \"updatedDate\":{\n \"$date\":\"2022-09-09T10:04:19.167Z\"\n }\n },\n \"startDate\":{\n \"$date\":\"2022-09-13T00:00:00.000Z\"\n },\n \"lastEntryDate\":{\n \"$date\":\"2022-09-13T12:32:44.791Z\"\n },\n \"remainingNumberOfUse\":1\n }\n ]\n}\n{\n \"_id\":{\n \"$oid\":\"62e932661c0f2e018fe6be47\"\n },\n \"buCode\":\"LMIT\",\n \"clientNumber\":\"92111403\",\n \"audiences\":[\n {\n \"audience\":{\n \"audienceId\":\"42003\",\n \"name\":\"OFFER_MDIA LMIT\",\n \"createdDate\":null,\n 
\"updatedDate\":null,\n \"maximumNumberOfUseDuringMembership\":1,\n \"membershipDuration\":20,\n \"category\":\"LOYALTY\",\n \"description\":\"\",\n \"buCode\":\"LMIT\"\n },\n \"startDate\":{\n \"$date\":\"2022-08-01T10:15:10.000Z\"\n },\n \"remainingNumberOfUse\":1,\n \"lastEntryDate\":{\n \"$date\":\"2022-08-02T14:19:19.106Z\"\n },\n \"deletedDate\":{\n \"$date\":\"2022-09-19T13:57:46.666Z\"\n },\n \"isDeleted\":true\n }\n ]\n}\nvar dbo = db.db(\"myDatabase\");\n\ndbo.collection(\"customer\").aggregate(\n [\n {\n $unwind: \"$audiences\"\n },\n {\n $addFields: {\n qualify: {\n $cond: {\n if: {\n $lte: [\n {\n $dateAdd: {\n startDate: \"$audiences.startDate\",\n unit: \"day\",\n amount: \"$audiences.audience.membershipDuration\"\n }\n },\n \"$$NOW\"\n ]\n },\n then: true,\n else: false\n }\n }\n }\n },\n { $match: \n {$and: [\n {qualify: true}\n ]\n }\n }\n ]\n)\n.forEach(function(doc) {\n dbo.collection(\"customer\").updateOne({ _id: doc._id, audiences : doc.audiences }, \n { $set: { 'audiences.$[0].isDeleted':true, 'audiences.$[].deletedDate':new Date() } }, \n { multi: true },\n function(err, obj){\n console.log(JSON.stringify(doc.audiences))\n db.close();\n });\n });\n});\n", "text": "Hi There,I need help to update specific object in array.I have two documents in my mongodbThe first document content one audience but the second has 4 audiences.\nAnd i want to add these fields (isdeleted = true and deletedDate = new Date()) in audiences array.I create a script in nodejs like thisbut when the audiences array content 2 items, the update is not working.\nSomeone can help me please ?Mohamed", "username": "Mohamed_DIABY" }, { "code": "", "text": "A few things.1 - You should try to $match first before altering the documents. You have better chance to hit an index if any exists.2 - I understand your $match involves a computed field from $addFields, but you should compute that field directly in your $match without altering the documents.3 - You can $match into an array without $unwind.4 - Rather than calling updateOne for each documents you should use bulkWrite to reduce the round trips to the server.5 - You could leverage the flexible schema nature of mongo by combining isDeleted and deletedDate into a single field. deletedDate null or missing means it is not deleted, that would save storage.6 - Take a look at $map which allows you to modify all elements of an array. You would need an expression that uses $mergeObjects. 
The last stage of your aggregation could then be a $merge to write back the documents into the original collection without even downloading any documents on the client.", "username": "steevej" }, { "code": " var dbo = db.db(\"audienceSharing\");\n \n \ndbo.collection(\"customer\").aggregate(\n [\n {\n $match:\n {\n \"clientNumber\":\"111111\"\n }\n },\n {\n $unwind: \"$audiences\"\n },\n {\n $addFields: {\n qualify: {\n $cond: {\n if: {\n $lte: [\n {\n $dateAdd: {\n startDate: \"$audiences.startDate\",\n unit: \"day\",\n amount: \"$audiences.audience.membershipDuration\"\n }\n },\n \"$$NOW\"\n ]\n },\n then: true,\n else: false\n }\n }\n }\n },\n { $match: \n {$and: [\n {qualify: true}\n ]\n }\n }\n ]\n)\n.forEach(function(doc){\n dbo.collection('customer').bulkWrite(\n [\n {\n updateOne:\n {\n filter: doc._id,\n update: \n {\n $set: \n { \n 'audiences.isDeleted':true, \n 'audiences.deletedDate':new Date() \n } \n },\n \"upsert\": false\n //arrayFilters:doc.audiences\n }\n }\n ],\n function(err,obj){\n console.log(JSON.stringify(doc));\n db.close();\n }\n )\n})});\n\n", "text": "Hi @steevej,thanks for your reply.\nI tried with bulkwrite, map and mergeObjects is not workingCan you help me please, i’m new in mongodb", "username": "Mohamed_DIABY" }, { "code": "// Connection URI\nconst url = process.env.uri;\n\nMongoClient.connect(url, function(err, db) {\n if (err) throw err;\n var dbo = db.db(\"audienceSharing\");\n \n \ndbo.collection(\"customer\").aggregate(\n [\n {$match:{\"clientNumber\":\"22222\"}},\n {\n $unwind: \"$audiences\"\n },\n {\n $addFields: {\n qualify: {\n $cond: {\n if: {\n $lte: [\n {\n $dateAdd: {\n startDate: \"$audiences.startDate\",\n unit: \"day\",\n amount: \"$audiences.audience.membershipDuration\"\n }\n },\n \"$$NOW\"\n ]\n },\n then: true,\n else: false\n }\n }\n }\n },\n { $match: \n {$and: [\n {qualify: true}\n ]\n }\n }\n ]\n)\n.forEach(function(doc){\n dbo.collection('customer').updateOne(\n {\n _id:doc._id, audiences:doc.audiences\n },\n [\n {\n $set:\n {\n \"audiences\":\n {\n $map:\n {\n \"input\":\"$audiences\",\n as:\"audience\",\n in:\n /*{\n $cond:\n [\n {\n $eq: [ \"$$audience\", doc.audiences]\n },*/\n {\n $mergeObjects:\n [\n \"$$audience\",\n {\n \"isDeleted\" : true,\n \"deletedDate\": new Date()\n }\n ]\n }\n //]\n //}\n }\n }\n }\n }\n ],\n function(err, obj){\n console.log(doc.audiences)\n }\n )\n})\n})\n\n", "text": "I used $map and $mergeObject is not working when the arrays contents more one items.I need your help", "username": "Mohamed_DIABY" }, { "code": " \"isDeleted\" : true,\n \"deletedDate\": new Date()\n", "text": "You still $unwind despite3 - You can $match into an array without $unwind.You still $addFields despite2 - I understand your $match involves a computed field from $addFields, but you should compute that field directly in your $match without altering the documents.You still usedespite5 - You could leverage the flexible schema nature of mongo by combining isDeleted and deletedDate into a single field. deletedDate null or missing means it is not deleted, that would save storage.", "username": "steevej" } ]
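Combining the points above into one pipeline (no $unwind, no client-side loop, a single deletedDate field instead of isDeleted plus deletedDate, and $merge to write back server-side) might look like the following sketch. It assumes MongoDB 5.0+ for $dateAdd, as the thread's own code already does:

```js
db.customer.aggregate([
  {
    $set: {
      audiences: {
        $map: {
          input: "$audiences",
          as: "a",
          in: {
            $cond: [
              {
                $lte: [
                  {
                    $dateAdd: {
                      startDate: "$$a.startDate",
                      unit: "day",
                      amount: "$$a.audience.membershipDuration"
                    }
                  },
                  "$$NOW"
                ]
              },
              // expired: stamp a single deletedDate field
              { $mergeObjects: ["$$a", { deletedDate: "$$NOW" }] },
              // still active: leave the element untouched
              "$$a"
            ]
          }
        }
      }
    }
  },
  { $merge: { into: "customer", on: "_id", whenMatched: "merge", whenNotMatched: "discard" } }
])
```

Because $merge runs on the server, no documents travel to the client at all, and every element of every audiences array is updated in one pass.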
Update specific object in array document mongodb
2022-09-20T08:36:05.410Z
Update specific object in array document mongodb
1,643
null
[]
[ { "code": "mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: failed (Result: exit-code) since Wed 2022-09-28 13:14:07 EDT; 1min 22s ago\n Docs: https://docs.mongodb.org/manual\n Process: 183848 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=14)\n Process: 183845 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 183842 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 183839 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\n Main PID: 150176 (code=exited, status=0/SUCCESS)\n\nSep 28 13:14:06 <server> systemd[1]: Starting MongoDB Database Server...\nSep 28 13:14:06 <server> mongod[183848]: about to fork child process, waiting until server is ready for connections.\nSep 28 13:14:06 <server> mongod[183848]: forked process: 183851\nSep 28 13:14:07 <server> mongod[183848]: ERROR: child process failed, exited with 14\nSep 28 13:14:07 <server> mongod[183848]: To see additional information in this output, start without the \"--fork\" option.\nSep 28 13:14:07 <server> systemd[1]: mongod.service: control process exited, code=exited status=14\nSep 28 13:14:07 <server> systemd[1]: Failed to start MongoDB Database Server.\nSep 28 13:14:07 <server> systemd[1]: Unit mongod.service entered failed state.\nSep 28 13:14:07 <server> systemd[1]: mongod.service failed.\n-- Unit mongod.service has begun starting up.\nSep 28 13:16:07 <server> mongod[184281]: about to fork child process, waiting until server is ready for connections.\nSep 28 13:16:07 <server> mongod[184281]: forked process: 184284\nSep 28 13:16:08 <server> mongod[184281]: ERROR: child process failed, exited with 14\nSep 28 13:16:08 <server> mongod[184281]: To see additional information in this output, start without the \"--fork\" option.\nSep 28 13:16:08 <server> systemd[1]: mongod.service: control process exited, code=exited status=14\nSep 28 13:16:08 <server> systemd[1]: Failed to start MongoDB Database Server.\n-- Subject: Unit mongod.service has failed\n-- Defined-By: systemd\n-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel\n--\n-- Unit mongod.service has failed.\n--\n-- The result is failed.\nSep 28 13:16:08 <server> systemd[1]: Unit mongod.service entered failed state.\nSep 28 13:16:08 <server> systemd[1]: mongod.service failed.\nSep 28 13:16:08 <server> python3[1035]: Job for mongod.service failed because the control process exited with error code. 
See \"systemctl status mongod.service\" and \"journalctl -xe\" f\nSep 28 13:16:17 <server> sshd[132898]: pam_unix(sshd:session): session closed for user dnc23\nSep 28 13:16:17 <server> sudo[150323]: pam_unix(sudo:session): session closed for user root\nSep 28 13:17:34 <server> omiserver[184545]: pam_unix(omi:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost= user=svc_scom\nSep 28 13:17:34 <server> omiserver[184545]: pam_krb5[184545]: TGT verified using key for 'host/<server>@DHE.DUKE.EDU'\nSep 28 13:17:34 <server> omiserver[184545]: pam_krb5[184545]: authentication succeeds for 'svc_scom' ([email protected])\nSep 28 13:17:34 <server> sudo[184550]: svc_scom : TTY=unknown ; PWD=/var/opt/microsoft/scx/tmp ; USER=root ; COMMAND=/bin/sh -c df -kP | awk '{print $6, $3}' | grep -v /dev/\nSep 28 13:17:34 <server> sudo[184550]: pam_unix(sudo:session): session opened for user root by (uid=0)\nSep 28 13:17:34 <server> sudo[184550]: pam_unix(sudo:session): session closed for user root\nSep 28 13:17:37 <server> sudo[184558]: svc_scom : TTY=unknown ; PWD=/var/opt/omi/run ; USER=root ; COMMAND=/opt/microsoft/scx/bin/scxlogfilereader -p\nSep 28 13:17:37 <server> sudo[184558]: pam_unix(sudo:session): session opened for user root by (uid=0)\nSep 28 13:17:37 <server> sudo[184558]: pam_unix(sudo:session): session closed for user root\nSep 28 13:18:08 <server> systemd[1]: Starting MongoDB Database Server...\n-- Subject: Unit mongod.service has begun start-up\n-- Defined-By: systemd\n-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel\n--\n-- Unit mongod.service has begun starting up.\nSep 28 13:18:08 <server> mongod[184692]: about to fork child process, waiting until server is ready for connections.\nSep 28 13:18:08 <server> mongod[184692]: forked process: 184695\nSep 28 13:18:09 <server> mongod[184692]: ERROR: child process failed, exited with 14\nSep 28 13:18:09 <server> mongod[184692]: To see additional information in this output, start without the \"--fork\" option.\nSep 28 13:18:09 <server> systemd[1]: mongod.service: control process exited, code=exited status=14\nSep 28 13:18:09 <server> systemd[1]: Failed to start MongoDB Database Server.\n-- Subject: Unit mongod.service has failed\n-- Defined-By: systemd\n-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel\n--\n-- Unit mongod.service has failed.\n--\n-- The result is failed.\nSep 28 13:18:09 <server> systemd[1]: Unit mongod.service entered failed state.\nSep 28 13:18:09 <server> systemd[1]: mongod.service failed.\nSep 28 13:18:09 <server> python3[1035]: Job for mongod.service failed because the control process exited with error code. 
See \"systemctl status mongod.service\" and \"journalctl -xe\" f\nlines 1708-1757/1757 (END)\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.072-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.073-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.076-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.083-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.106-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.106-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.106-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.107-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.107-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":144512,\"port\":27017,\"dbPath\":\"/data/db_storage/mongo\",\"architecture\":\"64-bit\",\"host\":\"vlt-metis-01.dhe.duke.edu\"}}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.107-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"openSSLVersion\":\"OpenSSL 1.0.1e-fips 11 Feb 2013\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"rhel70\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.107-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Red Hat Enterprise Linux Server release 7.9 (Maipo)\",\"version\":\"Kernel 3.10.0-1160.76.1.el7.x86_64\"}}}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.107-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command 
line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"0.0.0.0\",\"port\":27017},\"processManagement\":{\"fork\":true,\"pidFilePath\":\"/var/run/mongodb/mongod.pid\",\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"security\":{\"authorization\":\"enabled\"},\"storage\":{\"dbPath\":\"/data/db_storage/mongo\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.108-04:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Operation not permitted\"}}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.108-04:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1120}}\n\n{\"t\":{\"$date\":\"2022-09-28T09:57:38.108-04:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n", "text": "I upgraded our MongoDB server from 4.4 to 6.0 following the instructions to first upgrade to 5.0 and then to 6.0. All seemed to go well. However, I am facing a couple issues.First, the preferred approach to starting mongod doesn’t work. My server is RHEL 7.9. The preferred startup approach is:sudo systemctl start mongodThis command results in the following:Job for mongod.service failed because the control process exited with error code. See “systemctl status mongod.service” and “journalctl -xe” for details.I looked at systemctl status and journalctl but didn’t see any obvious. The output from systemctl status is:I don’t know how to start without the “–fork” option. So, I have not tried that option.The output from journalctl is longer but also not very enlightening:NOTE: I replaced my actual server name with in this output to comply with my organization’s security requirements.Following this attempt to start MongoDB, I used the following command:mongod --config ~/mongod.confWith this command the server starts and runs - for about half an hour. Then it disappears. Looking at the log file doesn’t really offer an explanation. The log shows that the server starts and runs. Then after a short time a series of the following block of messages appears. Each block begins with a SERVER RESTARTED message:So, at the moment I am dead in the water. This is the community edition that we continue to evaluate for purposes. In general, we like MongoDB. However, it actually has to work to be useable. Any insights to address our server crashing will be most appreciated.", "username": "David_Cox" }, { "code": "root", "text": "Hi David, I noticed this line in the log file you pasted:{“t”:{\"$date\":“2022-09-28T09:57:38.108-04:00”},“s”:“E”, “c”:“NETWORK”, “id”:23024, “ctx”:“initandlisten”,“msg”:“Failed to unlink socket file”,“attr”:{“path”:\"/tmp/mongodb-27017.sock\",“error”:“Operation not permitted”}}The socket file might be owned by the root user instead of the user that’s trying to run the service. You can delete this file and it will be recreated on the next start of the process.", "username": "Doug_Duncan" }, { "code": "", "text": "Thanks, I will give that a try.", "username": "David_Cox" }, { "code": "", "text": "Quick question. When starting mongod, should I start it as root? 
The instructions don’t say.", "username": "David_Cox" }, { "code": "mongodrootmongodmongodbsudo systemctl start mongod", "text": "When starting mongod, should I start it as root?You should never start the mongod process as the root user. The package you installed should have created a mongod (or mongodb) user, and that is the user that the service will use when you run sudo systemctl start mongod.", "username": "Doug_Duncan" }, { "code": "", "text": "Thanks.\nI just got out of a meeting where we discussed this issue. Unknown to me, there was another (home grown) process running in the background that monitors mongodb and restarts it if it detects that mongodb has stopped.We stopped that process and now MongoDB seems to be working fine. Will take a closer look at that monitoring process. But I think the preferred approach is to use replication to ensure that the system continues working.", "username": "David_Cox" } ]
MongoDB Crashes after Upgrade to 6.0
2022-09-28T17:27:52.880Z
MongoDB Crashes after Upgrade to 6.0
2,952
https://www.mongodb.com/…9_2_1024x237.png
[ "aggregation" ]
[ { "code": "", "text": "Hello all,I have the following data structure:\nimage1367×317 38.9 KB\nAnd I have lots of such locations with lots of such nested events.I am trying to figure out how to get every event name, transform it with a JS function that I have, and write it back into new slug field inside the same event?Every bit of help would be much appreciated!So far I’ve tried with aggregation but I had no success. I’ve tried also getting each location, extracting the events, getting their names and transforming them into slugs. For each location I end up having an array of slugs for which I cannot figure out how to write them back inside the corresponding event.Thank you!Regards", "username": "Kaloyan_Hristov" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and update your sample documents so that we can cut-n-paste into our system.Also share the expected result.Share what you tried and how it failed so that we can avoid experimenting in a direction you already know that fails.", "username": "steevej" }, { "code": "{\n \"_id\": { \"$oid\": \"61fe9b6a390bd807cb3f8258\" },\n \"name\": \"Spectrum Center\",\n \"coords\": {\n \"lat\": { \"$numberDouble\": \"35.225088\" },\n \"long\": { \"$numberDouble\": \"-80.8397\" }\n },\n \"address\": \"333 E Trade St\",\n \"country\": \"USA\",\n \"city\": \"Charlotte\",\n \"category\": \"concert\",\n \"events\": [\n {\n \"name\": \"ANUEL AA: LEGENDS NEVER DIE WORLD TOUR\",\n \"image\": \"\",\n \"link\": \"\",\n \"date\": \"2022-10-08\",\n \"time\": \"20:00\",\n \"expiresAt\": { \"$numberDouble\": \"1.6653456000000E+12\" },\n \"_location\": { \"$oid\": \"61fe9b6a390bd807cb3f8258\" },\n \"_id\": { \"$oid\": \"624ef91f5030c078c684e572\" }\n },\n {\n \"name\": \"Greta Van Fleet - Dreams In Gold Tour 2022\",\n \"image\": \"\",\n \"link\": \"\",\n \"date\": \"2022-10-28\",\n \"time\": \"19:00\",\n \"expiresAt\": { \"$numberDouble\": \"1.6670700000000E+12\" },\n \"_location\": { \"$oid\": \"61fe9b6a390bd807cb3f8258\" },\n \"_id\": { \"$oid\": \"62703fd9e9212167a21e153c\" }\n },\n {\n \"name\": \"Daddy Yankee La Última Vuelta World Tour\",\n \"image\": \"\",\n \"link\": \"\",\n \"date\": \"2022-12-06\",\n \"time\": \"20:00\",\n \"expiresAt\": { \"$numberDouble\": \"1.6704432000000E+12\" },\n \"_location\": { \"$oid\": \"61fe9b6a390bd807cb3f8258\" },\n \"_id\": { \"$oid\": \"628ea28b50b698ff61909c13\" }\n },\n {\n \"name\": \"Lizzo: The Special Tour\",\n \"image\": \"\",\n \"link\": \"\",\n \"date\": \"2022-10-20\",\n \"time\": \"20:00\",\n \"expiresAt\": { \"$numberDouble\": \"1.6663824000000E+12\" },\n \"_location\": { \"$oid\": \"61fe9b6a390bd807cb3f8258\" },\n \"_id\": { \"$oid\": \"628ea2c150b698ff61909c24\" }\n }\n ],\n \"__v\": { \"$numberInt\": \"0\" },\n \"slug\": \"spectrum-center\"\n}\n\n\n{\n \"_id\": { \"$oid\": \"61fe9b6a390bd807cb3f8258\" },\n \"name\": \"Spectrum Center\",\n \"coords\": {\n \"lat\": { \"$numberDouble\": \"35.225088\" },\n \"long\": { \"$numberDouble\": \"-80.8397\" }\n },\n \"address\": \"333 E Trade St\",\n \"country\": \"USA\",\n \"city\": \"Charlotte\",\n \"category\": \"concert\",\n \"events\": [\n {\n \"name\": \"ANUEL AA: LEGENDS NEVER DIE WORLD TOUR\",\n \"image\": \"\",\n \"link\": \"\",\n \"date\": \"2022-10-08\",\n \"time\": \"20:00\",\n \"slug\": \"anuel-aa-legends-never-die-world-tour\",\n \"expiresAt\": { \"$numberDouble\": \"1.6653456000000E+12\" },\n \"_location\": { \"$oid\": \"61fe9b6a390bd807cb3f8258\" },\n \"_id\": { \"$oid\": \"624ef91f5030c078c684e572\" }\n },\n {\n \"name\": 
\"Greta Van Fleet - Dreams In Gold Tour 2022\",\n \"image\": \"\",\n \"link\": \"\",\n \"date\": \"2022-10-28\",\n \"time\": \"19:00\",\n \"slug\": \"greta-van-fleet-dreams-in-gold-tour-2022\",\n \"expiresAt\": { \"$numberDouble\": \"1.6670700000000E+12\" },\n \"_location\": { \"$oid\": \"61fe9b6a390bd807cb3f8258\" },\n \"_id\": { \"$oid\": \"62703fd9e9212167a21e153c\" }\n },\n {\n \"name\": \"Daddy Yankee La Última Vuelta World Tour\",\n \"image\": \"\",\n \"link\": \"\",\n \"date\": \"2022-12-06\",\n \"time\": \"20:00\",\n \"slug\": \"daddy-yankee-la-ultima-vuelta-world-tour\",\n \"expiresAt\": { \"$numberDouble\": \"1.6704432000000E+12\" },\n \"_location\": { \"$oid\": \"61fe9b6a390bd807cb3f8258\" },\n \"_id\": { \"$oid\": \"628ea28b50b698ff61909c13\" }\n },\n {\n \"name\": \"Lizzo: The Special Tour\",\n \"image\": \"\",\n \"link\": \"\",\n \"date\": \"2022-10-20\",\n \"time\": \"20:00\",\n \"slug\": \"lizzo-the-special-tour\",\n \"expiresAt\": { \"$numberDouble\": \"1.6663824000000E+12\" },\n \"_location\": { \"$oid\": \"61fe9b6a390bd807cb3f8258\" },\n \"_id\": { \"$oid\": \"628ea2c150b698ff61909c24\" }\n }\n ],\n \"__v\": { \"$numberInt\": \"0\" },\n \"slug\": \"spectrum-center\"\n}\nlocations.aggregate([\n {\n $unwind: \"$events\",\n },\n {\n $addFields: {\n \"events.slug\": {\n $function: { body: string_to_slug, args: \"$events.name\", lang: \"js\" },\n },\n },\n },\n { $out: \"events_with_slugs\" }\n ]);\n locations.distinct(\"name\", function (err, docs) {\n docs.map(async (doc) => {\n const location = await locations.findOne({ name: doc });\n\n const location_events = location.events;\n const names = location_events.map(({ name }) => {\n return name;\n });\n const slugs = names.map((name) => {\n return string_to_slug(name);\n });\n\n //-------------------------\n //here I do not know how to insert the slugs into the respective event\n });\n });\n", "text": "This is an example location from my collection.What I am trying to do is transforming it into:Notice how for every event there is a slug added, which is the name of the event transformed into slug.I have a JS function that can transform the event name strings into the event slug strings, however I do not know how to access every event name separately, transform it with my function and add the output slug string inside the event object.Here is also a screenshot of my db, collections, etc.:\nimage1235×690 73.6 KB\nSo far I’ve tried following approaches:Here the string_to_slug function is my JS function which transforms an event name string into event slug string. But this approach is not working. 
Or at least I do not know how to group it all.The other thing that I’ve tried was to get all event names as an array and transform them into array of slugs … but I do not know how to insert them into the events afterwards:I hope with this code snippets and the screenshot of my data it is bit clearer what I am trying to achieve.Regards", "username": "Kaloyan_Hristov" }, { "code": "", "text": "Thanks, we can definitively experiment with this.", "username": "steevej" }, { "code": "locations.aggregate([\n {\n $unwind: \"$events\",\n },\n {\n $addFields: {\n \"events.slug\": {\n $function: {\n body: string_to_slug,\n args: [\"$events.name\"],\n lang: \"js\",\n },\n },\n },\n },\n {\n $group: {\n _id: \"$_id\",\n name: {\n $first: \"$name\",\n },\n address: {\n $first: \"$address\",\n },\n coords: {\n $first: \"$coords\",\n },\n city: {\n $first: \"$city\",\n },\n country: {\n $first: \"$country\",\n },\n category: {\n $first: \"$category\",\n },\n slug: {\n $first: \"$slug\",\n },\n events: {\n $push: \"$events\",\n },\n },\n },\n {\n $out: \"events_with_slugs\",\n },\n ]);\n", "text": "Hi,I think I have some progress. I’ve tried:It seems to work, but the amount of the output documents is far smaller than the original amount. This is maybe because there are lots of locations with no events inside.The question now is how to get also those in the final result?", "username": "Kaloyan_Hristov" } ]
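A way to get the same slugs without dropping locations whose events array is empty is to skip $unwind/$group entirely and map over the array in place. A sketch reusing the thread's string_to_slug function and locations handle ($function requires MongoDB 4.4+):

```js
locations.aggregate([
  {
    $set: {
      events: {
        $map: {
          // $ifNull keeps documents whose events field is missing or empty
          input: { $ifNull: ["$events", []] },
          as: "ev",
          in: {
            $mergeObjects: [
              "$$ev",
              {
                slug: {
                  $function: { body: string_to_slug, args: ["$$ev.name"], lang: "js" }
                }
              }
            ]
          }
        }
      }
    }
  },
  { $out: "events_with_slugs" }
]);
```

Every location flows through unchanged apart from the added slug fields, so the output collection has the same document count as the input, including locations with zero events.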
Get nested document field values, transform them and write them as a new field in the nested document
2022-09-25T13:12:51.861Z
Get nested document field values, transform them and write them as a new field in the nested document
3,327
https://www.mongodb.com/…e_2_1024x512.png
[ "aggregation", "queries", "node-js", "compass", "atlas-data-lake" ]
[ { "code": "{\n $lookup: {\n from: { db: \"DataFederationDbName\", coll: \"CollectionNameInDataFederation\" },\n localField: \"address\",\n foreignField: \"address\",\n as: \"label\"\n }\n }\n{from: {db:<>, coll:<>}, ...}", "text": "I’d like to create a view from one of my DBs which use $lookup to join data from a Data Federation, I tried using the following tutorial and this example:I got the following error:\nMongoServerError: $lookup with syntax {from: {db:<>, coll:<>}, ...} is not supported for db: DataFederationDbName and coll: CollectionNameInDataFederationI’ve tried creating the view with Mongo Compass and via js package MongoClient, both displayed similar errors.\nWhat am I doing wrong?Thanks!", "username": "Tom_Keidar1" }, { "code": "", "text": "Hey Tom,When using Data Federation you are creating virtual databases and virtual collections that can source data from any of the supported sources. You can then do a $lookup using those virtual collections and databases, and it’s true you can do a cross DB lookup with the Virtual Dbs and Virtual Collections.Based on the error message you’re getting I’m guessing that you’re actually connecting to the cluster itself (not the Federated Database Instance and Virtual DBs and Collections), and the cluster itself does not support cross DB $lookup. Can you double check that you’re using the connection string provided when you click connect on the Federated Database Instance?Best,\nBen", "username": "Benjamin_Flast" }, { "code": "", "text": "Thanks @Benjamin_Flast, indeed I was using the cluster DB.\nI’ll try again and will update you here.Does it mean I will have to add to the data federation each and every collection I’d like to join (cross DBs)?", "username": "Tom_Keidar1" }, { "code": "\"databases\" : [\n {\n \"name\" : \"*\",\n \"collections\" : [\n {\n \"name\" : \"*\",\n \"dataSources\" : [\n {\n \"storeName\" : \"<atlas-store-name>\",\n }\n ]\n }\n ]\n }\n]\n", "text": "Yes exactly, you would need to add each.But if you just want to have every Database and Collection from your cluster represented in your Federated Database instance we actually have a short hand for that, you just put in your storage configuration (via the UI editor) something like this:", "username": "Benjamin_Flast" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to use $lookup from DB to Data Federation?
2022-09-25T09:13:55.757Z
How to use $lookup from DB to Data Federation?
2,957
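A hedged illustration of Ben's point for anyone hitting the same error: the cross-database $lookup syntax only works when the client connects to the Federated Database Instance endpoint, not the cluster. The URI, database, and collection names below are placeholders, not real values.

```javascript
// All names and the URI are placeholders; the key point is connecting to the
// Federated Database Instance endpoint, not the cluster.
const { MongoClient } = require("mongodb");

async function run() {
  const client = new MongoClient(
    "mongodb://user:[email protected]/?ssl=true&authSource=admin"
  );
  await client.connect();
  const docs = await client
    .db("VirtualDatabase0")
    .collection("VirtualCollection0")
    .aggregate([
      {
        $lookup: {
          // the cross-database form works here because both sides are virtual
          from: { db: "VirtualDatabase1", coll: "VirtualCollection1" },
          localField: "address",
          foreignField: "address",
          as: "label"
        }
      }
    ])
    .toArray();
  console.log(docs.length);
  await client.close();
}
run().catch(console.error);
```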
https://www.mongodb.com/…40a9622f2d6d.png
[ "atlas", "atlas-data-lake" ]
[ { "code": "", "text": "I am unable to create a Data Lake Pipeline which I believe is due to having many tenanted databases and the dropdowns failing to fetch the data to populate them.Sometimes I can select a Cluster, sometimes it times out. But I am unable to select a Database - this one times out every time I try.\nimage947×807 36.3 KB\nI’ve spoken with somebody on Live Chat last week but I’ve had no response as of yet.Does anybody know of a way around this?", "username": "Gareth_Lowrie" }, { "code": "", "text": "Hey Gareth,Thanks for reaching out. Do you know if support filed a ticket? This looks like a bug to me, and I think we’ll need to make a UI improvement to get around this limitation.There will be a workaround shortly as we’ll be releasing support for creating Data Lake Pipelines via the Public API, it should be available within the next few weeks.Best,\nBenP.S. I’m the PM for Atlas Data Lake and would be interested in getting any feedback (aside from this blocker) as well as learning more about your use case. My calendly link is here: Calendly - Benjamin Flast", "username": "Benjamin_Flast" }, { "code": "", "text": "Hey Benjamin,Support haven’t been replying to me so I don’t think so.Happy to jump on a call to tell you more about our use case - will get something booked in!Thanks,\nGareth", "username": "Gareth_Lowrie" }, { "code": "", "text": "Hey Gareth, sorry about the delayed response here and I’m sorry to hear that you’ve not been hearing back from support.Please do schedule some time with me here so I can help get this sorted out: Calendly - Benjamin FlastBest,\nBen", "username": "Benjamin_Flast" } ]
Unable to create Data Lake Pipeline
2022-09-05T09:30:42.482Z
Unable to create Data Lake Pipeline
2,485
null
[ "java", "change-streams", "spring-data-odm" ]
[ { "code": "", "text": "Using java Spring Boot framework. MongoDB Java drive 4.2.\nI want to save the resume token and timestamp to a collection and when app starts up, if I have a resume token or timestamp, saved, I want to resume from that token, or timestamp.My issue is I cannot successfully save and then retrieve the resume token from my collection. I have tried defining as BsonDocument in my class. That results in an error upon retrieval stating cannot convert from string to bson object. If I define and save as a string it fails upon retrieval stating cannot convert from bson document to string.Has anyone successfully stored and then retrieved a resume token from your own collection? If so, how did you define it in your class? Can you share the java code?The purpose is when we release a new version of the app, or if server failed, when the app restarts I want it to be able to resume where it left off, or close to it. I plan to save the last resume token every so many inserts.Other idea is to bypass resume token and store the last action timestamp from my latest document and write my own query process to catchup all documents between that last action timestamp and the last action timestamp retrieved off first document captured from the change stream. This is probably needed in case oplog has rolled anyway.", "username": "Thomas_Bishop" }, { "code": "private BsonString token;\nprivate BsonTimestamp startAtOperationTime;\nBsonTimestamp startAtOperationTime = changeStreamDocument.getClusterTime().asTimestamp();\nBsonDocument resumeToken = changeStreamDocument.getResumeToken().asDocument();\nBsonString token = resumeToken.getString(\"_data\");\nBsonTimestamp startAtOperationTime = cdrConfig.getStartAtOperationTime();\nBsonString bsonString = cdrConfig.getToken();\nBsonDocument resumeToken2.put(\"_data\", bsonString);\nif (resumeToken2 != null) {\n\tchangeStream = claims.watch(pipeline).resumeAfter(resumeToken2).fullDocument(FullDocument.UPDATE_LOOKUP);\n\t} else {\n\t\tif (startAtOperationTime != null) {\n\t\t\tchangeStream = claims.watch(pipeline).startAtOperationTime(startAtOperationTime).fullDocument(FullDocument.UPDATE_LOOKUP);\n\t\t} else {\n\t\t\tchangeStream = claims.watch(pipeline).fullDocument(FullDocument.UPDATE_LOOKUP);\n\t\t\t}\n}\n", "text": "I finally figured out a solution. Might not be the most straight forward but it worked.I can restart with either the timestamp or the token.This is all POC code. Not yet ready for production.How I defined in my cdrConfig.class.How I captured the 2 values to insert into my collection.How I built the resumeToken after receiving data back from my collection.How I started the changestream watch.", "username": "Thomas_Bishop" }, { "code": "", "text": "Struggling with the same issue, this is very helpful, thanks. Curious if you ever found a “cleaner” method to do this.", "username": "Chris_Stromberger" }, { "code": "", "text": "I am Stuck with the same problem can you share the exact code for this, if possible? Thanks in advance", "username": "Manikandan_V" }, { "code": "", "text": "Did you try the code I posted above?", "username": "Thomas_Bishop" }, { "code": "", "text": "Yes, I have tried but in my case, it didn’t work.", "username": "Manikandan_V" }, { "code": "", "text": "Is your changestream code working if you ignore my example code? If not, you need to get that working first. If you do have the Changestream working, what part of the above code is not working for you? What error do you see? 
To me, the only part that could cause an issue is the claims.watch(pipeline), if you didn't build the "pipeline" correctly or if a newer driver wants something different.\nI posted that code well over a year ago and it is possible the example needs tweaking for a newer MongoDB Java driver. We have not yet moved forward with using ChangeStreams; the project is delayed until 2023. The above code is for use after you get your ChangeStream process working and decide you want to store the resume token and/or the resume timestamp for the last document read from the ChangeStream process. You can use the stored token or timestamp to restart your change stream at that specific point, if the document is still in the oplog.", "username": "Thomas_Bishop" }, { "code": "", "text": "In getting the resume token from the last document, I didn't get that one. Also, I need to store the last resume token in a file or somewhere I can access when the app closes. How do I do those things?", "username": "Manikandan_V" }, { "code": "", "text": "I show how to retrieve the token and timestamp in the code I posted above.\nLook under the section "How I captured the 2 values to insert into my collection". Then I stored the values in a collection using the data type described under the section "How I defined in my cdrConfig.class". I named my storage collection "cdrConfig"; you name yours whatever fits your environment.", "username": "Thomas_Bishop" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Saving change stream resume token
2021-03-25T18:41:34.887Z
Saving change stream resume token
9,159
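For anyone who wants to prototype the same restart logic outside Java first, here is a minimal, untested mongosh sketch of the pattern. The "claims" and "cdrConfig" names mirror the thread, and the resume only works while the token's entry is still in the oplog.

```javascript
// Untested mongosh sketch: persist the resume token in a collection and
// resume from it on restart. event._id *is* the resume token.
const saved = db.cdrConfig.findOne({ _id: "claims-watcher" });
const opts = saved ? { resumeAfter: saved.token } : {};

const stream = db.claims.watch([], opts);
while (stream.hasNext()) {
  const event = stream.next();
  // ... process the event ...
  // Persist the token and cluster time of the last event we handled.
  db.cdrConfig.updateOne(
    { _id: "claims-watcher" },
    { $set: { token: event._id, clusterTime: event.clusterTime } },
    { upsert: true }
  );
}
```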
https://www.mongodb.com/…e_2_1024x512.png
[ "python", "change-streams", "cxx" ]
[ { "code": "", "text": "Hi!\nI’ve long waited for mongo 6.0 release to start using the pre-image feature in change streams.\nI upgraded my mongo version but I noticed that the mongocxx does not support this new feature yet.\nIs there a workaround to still read the pre-image data using mongocxx 3.6.6 version?Also, I tried doing a POC with pymongo and got into the same issue - 4.2.0 should support it but I can’t seem to be able to upgrade to it (currently latest I can install is pymongo 4.1.1)MongoDB triggers, change streams, database triggers, real time", "username": "Oded_Raiches" }, { "code": "", "text": "What is stopping you from upgrading PyMongo to 4.2?", "username": "Bernie_Hackett" }, { "code": "python3 -m pip install --upgrade pymongo==4.2.0ERROR: Could not find a version that satisfies the requirement pymongo==4.2.0 (from versions: 0.1rc0, 0.1.1rc0, 0.1.2rc0, 0.2rc0, 0.3rc0, 0.3.1rc0, 0.4rc0, 0.5rc0, 0.5.1rc0, 0.5.2rc0, 0.5.3rc0, 0.6, 0.7, 0.7.1, 0.7.2, 0.8, 0.8.1, 0.9, 0.9.1, 0.9.2, 0.9.3, 0.9.4, 0.9.5, 0.9.6, 0.9.7, 0.10, 0.10.1, 0.10.2, 0.10.3, 0.11, 0.11.1, 0.11.2, 0.11.3, 0.12, 0.13, 0.14, 0.14.1, 0.14.2, 0.15, 0.15.1, 0.15.2, 0.16, 1.0, 1.1, 1.1.1, 1.1.2, 1.2, 1.2.1, 1.3, 1.4, 1.5, 1.5.1, 1.5.2, 1.6, 1.7, 1.8, 1.8.1, 1.9, 1.10, 1.10.1, 1.11, 2.0, 2.0.1, 2.1, 2.1.1, 2.2, 2.2.1, 2.3, 2.4, 2.4.1, 2.4.2, 2.5, 2.5.1, 2.5.2, 2.6, 2.6.1, 2.6.2, 2.6.3, 2.7, 2.7.1, 2.7.2, 2.8, 2.8.1, 2.9, 2.9.1, 2.9.2, 2.9.3, 2.9.4, 2.9.5, 3.0, 3.0.1, 3.0.2, 3.0.3, 3.1, 3.1.1, 3.2, 3.2.1, 3.2.2, 3.3.0, 3.3.1, 3.4.0, 3.5.0, 3.5.1, 3.6.0, 3.6.1, 3.7.0, 3.7.1, 3.7.2, 3.8.0, 3.9.0, 3.10.0, 3.10.1, 3.11.0, 3.11.1, 3.11.2, 3.11.3, 3.11.4, 3.12.0, 3.12.1, 3.12.2, 3.12.3, 4.0, 4.0.1, 4.0.2, 4.1.0, 4.1.1)\nERROR: No matching distribution found for pymongo==4.2.0\n", "text": "Hi @Bernie_Hackett, thanks for the reply!\nwhen running:\npython3 -m pip install --upgrade pymongo==4.2.0\nI get:Anyway, I was able to do a POC with mongosh, but I still want to have this working with mongocxx - any idea would be appreciated.", "username": "Oded_Raiches" }, { "code": "sigstop@devbox ~ $ python -m venv installtest\nsigstop@devbox ~ $ source installtest/bin/activate\n(installtest) sigstop@devbox ~ $ which python3\n/home/sigstop/installtest/bin/python3\n(installtest) sigstop@devbox ~ $ python3 -m pip install --upgrade --no-cache-dir pymongo==4.2.0\nCollecting pymongo==4.2.0\n Downloading pymongo-4.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (479 kB)\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 479.9/479.9 kB 6.9 MB/s eta 0:00:00\nInstalling collected packages: pymongo\nSuccessfully installed pymongo-4.2.0\n(installtest) sigstop@devbox ~ $ python3 --version\nPython 3.10.7\n", "text": "That’s odd. Installation works fine for me locally:What version of Python are you using on which operating system?", "username": "Bernie_Hackett" }, { "code": "", "text": "For the C++ driver, this feature has been implemented and committed to the master branch of the driver. It will be released in version 3.7.0, which will be out soon.", "username": "Bernie_Hackett" }, { "code": "python3 -m pip --versionpython3 -m pip install --upgrade pip", "text": "You may be running into a bug in pip. 
It would be helpful to post your pip version: python3 -m pip --version. Please upgrade to the latest pip and then try to install pymongo 4.2 again: python3 -m pip install --upgrade pip", "username": "Shane" }, { "code": "python3 -m pip --version
pip 21.3.1 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)
", "text": "Thanks for the replies!\n@Bernie_Hackett\nDo you have a time estimate for the 3.7.0 release?@Shane this is the output:", "username": "Oded_Raiches" }, { "code": "", "text": "Aha. PyMongo 4.2.0 only supports Python 3.7+. We hope to release C++ driver 3.7.0 in the next week or so.", "username": "Bernie_Hackett" }, { "code": "", "text": "Thanks for the reply!\nLooking forward to using the new 3.7.0 C++ driver ", "username": "Oded_Raiches" } ]
Pre/Post images processing using mongocxx driver
2022-09-19T08:15:13.510Z
Pre/Post images processing using mongocxx driver
3,552
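Since the thread never shows the server-side setup, here is a short, hedged mongosh sketch of the MongoDB 6.0 pre-/post-image workflow that the C++ and Python drivers expose; the collection name "mycoll" is a placeholder.

```javascript
// Hedged mongosh sketch (MongoDB 6.0+); "mycoll" is a placeholder name.
// Pre-/post-images must first be enabled on the collection:
db.runCommand({
  collMod: "mycoll",
  changeStreamPreAndPostImages: { enabled: true }
});

const stream = db.mycoll.watch([], {
  fullDocument: "whenAvailable",             // post-image
  fullDocumentBeforeChange: "whenAvailable"  // pre-image
});
while (stream.hasNext()) {
  const ev = stream.next();
  printjson(ev.fullDocumentBeforeChange);    // the document before the change
}
```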
null
[ "node-js", "data-modeling", "crud", "mongoose-odm", "transactions" ]
[ { "code": "{\n roundId: { type: Number, required: true, index: true },\n dateCreated: { type: Date, required: true, default: new Date() },\n gameState: { type: String, enum: ['open', 'closed'], required: true },\n winner: { type: String }, // only set after the game is complete\n}\n{\n roundId: { type: Number, required: true, index: true },\n username: { type: String, required: true },\n joins: [{\n xpWagered: { type: Number, required: true },\n joinTime: { type: Date, required: true, default: new Date() },\n }],\n}\n{\n username: { type: String, required: true, unique: true },\n totalXp: { type: Number, required: true, default: 0 },\n}\n/join_game/close_gamegameStateclosed/join_gametotalXproundIdclosedgameState/join_game.find().findOneAndUpdate()findOne()join_gamefindOneAndUpdate/close_game/close_game.find()/join_game", "text": "TLDR: Do the locks created by operations that occur within mongodb transactions unlock immediately after the operation is complete or after the transaction is complete (committed/aborted)?The background:I’m using mongo to model the lobby state of a game round that clients are able to join and I would just like some clarification on how the use of transactions and correct data modeling can allow me to handle a high amount of concurrent requests while eliminating the risk of race conditions and maintaining game rules that are established.My solution:I created two collection schemas (using NodeJS’s mongoose) to handle the game scenario (one for the game itself and another for game participants) as well as a general player schema that stores all player data.A simplified version of a game round looks something like:a simplified version of a game participant looks something like:and lastly, a simplified version of a player looks like this:The game participants collection is used to prevent frequent write locks when users join from being solely allocated to the game collection, hence why I did not include an array field of game participants within the game collection itself. Instead, if a player joins the game, then only their game participant document will lock.Within the game a player can join a game round (via the /join_game endpoint) up to four times and each time they join, we validate whether they have the xp to join and if they do it is decremented from their total xp. If a second unique player joins the game, a cron job starts and kicks off the game join timer loop and at any point in time and other participant can join the game lobby. Once the cron job is finished, it calls a /close_game endpoint which essentially sets the gameState inside the game round model to closed and no other players can join.The db reads/writes within the /join_game endpoint occur within a mongo transaction. These operations include:and if any of these fails, mongo will abort the transaction.My questions/concerns:\nI’m aware that mongo transactions are atomic in the sense that all transactions will either commit or roll back (which it’s why it’s useful to use the gameState lock/semaphore check at the end), but I was wondering if all documents within the transaction are locked for the duration of the semaphore? According to this article:MongoDB does lock a document that is being modified by a transaction. However, other sessions that attempt to modify that document do not block. 
Rather, their transaction is aborted, and they are required to retry the transaction. So if a document is locked for the entire time of the transaction until it is committed/aborted and multiple people are attempting to hit the /join_game endpoint at once, then won't concurrent transactions continuously lock each other out and constantly have to retry their transactions, because the read lock of the .find() of all the current game participants will conflict with the write lock of the .findOneAndUpdate() of the individual game participant? And therefore, this should lead to some latency if many concurrent users are attempting to join. Also, won't the findOne() of the current round document within the join_game endpoint therefore conflict with the findOneAndUpdate of the /close_game endpoint that sets the round to be closed? Could this potentially cause the /close_game endpoint to hang for a bit? If this wasn't the case (and operations within transactions only locked for the duration of the operation), then won't there be race-condition issues for the .find() number-of-unique-participants check, where many concurrent calls to the /join_game endpoint can lead to an inaccurate start of the cron job (ex: multiple concurrent reads with the result of one user existing in the game round will result in the cron job being called multiple times)? I seem to be a little bit confused with how transaction locking exactly works, so any advice or information would be greatly appreciated. Thanks in advance!", "username": "Tom_S" }, { "code": "", "text": "Hi @Tom_S and welcome to the community!! Firstly, I would really appreciate you spending time on posts with so much detail. However, to answer your question:Do the locks created by operations that occur within mongodb transactions unlock immediately after the operation is complete or after the transaction is complete (committed/aborted)?The locks in MongoDB are acquired at the transaction level. However, only write operations will lock the associated document. Read operations will not lock them. Hence the locks will be released only when the transaction is completed.\nFor more reference on how transactions and locks function in MongoDB, I would recommend the following documentation:In terms of your concern about latency, I think it depends on the size of the workload you're expecting, and the hardware that executes this workload. I agree though that using transactions will definitely have a performance impact, but depending on your workflow, it might be unavoidable (especially if you're depending on mutating the state of more than one document where all mutations must succeed to be considered successful). In general, though, MongoDB works fastest with operations involving single documents.Best Regards\nAasawari", "username": "Aasawari" } ]
Understanding locking within transactions (and how it deals with high concurrency)
2022-09-25T16:22:52.492Z
Understanding locking within transactions (and how it deals with high concurrency)
8,374
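To make the retry behaviour concrete, below is an untested mongosh sketch of the join flow inside a transaction. Database, collection, and field names are assumptions based on the schemas above, and withTransaction() already retries the callback when the server aborts a conflicting transaction (TransientTransactionError), which is the abort-and-retry behaviour the quoted article describes.

```javascript
// Untested mongosh sketch; db/collection/field names are assumptions based
// on the schemas above.
const session = db.getMongo().startSession();
try {
  session.withTransaction(() => {
    const games = session.getDatabase("app").getCollection("games");
    const players = session.getDatabase("app").getCollection("players");

    // Semaphore check: abort (throw) if the round has already been closed.
    const game = games.findOne({ roundId: 42, gameState: "open" });
    if (!game) throw new Error("round is closed");

    // Decrement XP only if the player can still afford the wager.
    const res = players.updateOne(
      { username: "alice", totalXp: { $gte: 100 } },
      { $inc: { totalXp: -100 } }
    );
    if (res.modifiedCount !== 1) throw new Error("not enough XP");
  });
} finally {
  session.endSession();
}
```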
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Suppose i have a collection with some data, with a user activities and points for the each activity. I want to calculate the sum of all the activity points for each number and return then result as height to lowest points. What is the best way to do that. Doing sorting is not good because data with the calculated scored will be unindexed, and maxN operator is also not available as i am using mongo 5.0. so how can I solve this problem in an efficient way?", "username": "sajan_kumar" }, { "code": "$merge$out", "text": "Hello @sajan_kumar ,Please correct me if my understanding of your use case is wrong, you want to do a sum of values based on specific groupings, and then sort the results from high to low (descending order)?To understand your use case better could you provide below details:Doing sorting is not good because data with the calculated scored will be unindexedYou are right on this as in memory sorting can not use indexes. Since an index entry points to a specific physical document, you cannot use an index to sort something that has no physical presence. So if an index use is desired then it is required to actually create a physical document for the computed field, which the index can now point to. MongoDB provides a Materialized View feature, which is typically the result of a $merge or $out stage. You can create indexes directly on on-demand materialized views because they are stored on disk. It could provide a performance boost at the cost of extra storage space, and you will also need a way to trigger the data refresh so that the view remains up to date.Regards,\nTarun", "username": "Tarun_Gaur" } ]
What is the best way to calculate a number of values and get the sorted result on the basis of calculated values?
2022-09-19T17:45:49.577Z
What is the best way to calculate a number of values and get the sorted result on the basis of calculated values?
1,332
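As a concrete, hedged sketch of the materialized-view approach Tarun describes (the collection and field names are guesses at the schema described above):

```javascript
// Hedged sketch; "activities"/"userId"/"points" are assumed names.
// Aggregate per-user totals into a real collection with $merge:
db.activities.aggregate([
  { $group: { _id: "$userId", totalPoints: { $sum: "$points" } } },
  { $merge: { into: "leaderboard", whenMatched: "replace" } }
]);

// The materialized collection is on disk, so the sort can use an index
// instead of an in-memory sort:
db.leaderboard.createIndex({ totalPoints: -1 });
db.leaderboard.find().sort({ totalPoints: -1 }).limit(10);
```

The pipeline needs to be re-run (or triggered on a schedule) to keep the materialized collection current, which is the "data refresh" trade-off mentioned in the reply.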
null
[ "aggregation", "queries", "crud", "transactions" ]
[ { "code": "[{\n \"id\": 45842,\n \"ownerId\": 45842,\n \"processing\": true\n},{\n \"id\": 45850,\n \"ownerId\": 45842,\n \"processing\": false\n},{\n \"id\": 46095,\n \"ownerId\": 45842,\n \"processing\": true\n},{\n \"id\": 46096,\n \"ownerId\": 45842,\n \"processing\": false\n}]\ndb[\"azure-objects\"].updateOne(\n { id: 45842 },\n [\n { $set: { processing: db['azure-objects'].aggregate(\n [\n { $match: { ownerId: 45842, id: { $ne: 45842 } } },\n { $group: { _id: \"$ownerId\", processing: { $max: \"$processing\" } } },\n { $project: { \"_id\": 0 } }\n ]\n ).next()[\"processing\"] } }\n ]\n);\n", "text": "Hello,\nI have a collection with parent objects and children objects. The relationships are defined by the “ownerId” field, that links children with their parent.\nFor example the collection structure could be:I need to update the derived “processing” field of the parent object based on the “processing” field of its children: if all cildren have “processing” false then the parent “processing” field must be false, otherwise must be true.\nWe have written this:The question is: is it atomic? Can i improve it? Or i need to wrap it in a transaction?Thanks in advance.\nStefano", "username": "Stefano_Agostini" }, { "code": "", "text": "The reason transactions exist as a design pattern since the 1960’s or 1970’s is that figuring out all the ways a sequence of database operations can unexpectedly become non-atomic requires knowledge of (often undocumented) internals. If your question is, “How to I guarantee that this compound operation will be atomic?” then the answer is, “make it a transaction.”", "username": "Jack_Woehr" }, { "code": "", "text": "Thanks for the reply! I figured the most immediate solution is to use a transaction but, since I’m using camel-mongodb, if it was possible to use a single atomic operation for me it would have been easier.", "username": "Stefano_Agostini" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update a document based on other documents in atomically
2022-09-27T16:00:17.115Z
Update a document based on other documents in atomically
1,470
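For readers weighing Jack's answer, here is an untested mongosh sketch of the same two steps wrapped in a transaction, so the read of the children and the write of the parent are atomic together; "test" is a placeholder database name.

```javascript
// Untested mongosh sketch of the transactional version.
const session = db.getMongo().startSession();
try {
  session.withTransaction(() => {
    const coll = session.getDatabase("test").getCollection("azure-objects");
    // The parent is "processing" if at least one child still is.
    const busyChild = coll.findOne({
      ownerId: 45842,
      id: { $ne: 45842 },
      processing: true
    });
    coll.updateOne({ id: 45842 }, { $set: { processing: busyChild !== null } });
  });
} finally {
  session.endSession();
}
```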
null
[ "replication", "sharding", "transactions" ]
[ { "code": "", "text": "Hi Team,We have a use case in which we are considering MongoDB as a key value store and a RDBMS (we are considering PostgreSQL as RDBMS) as a read view. We have an event driven system which will process the event only if the event id of the event is not found in the Key Value store. Once the event is processed, we need to add the event ID to the key value store and the result of the process to the RDBMS in one transaction. We wanted the transaction to be atomic such that if any one of the process (storing event id in MongoDB and storing a read view in RDBMS) fails, we will abort the transaction and rollback any commit. I have read about the “session” and transactions in MongoDB, but I have only found the ACID transaction support at the level of replica set and sharded cluster.Is it possible to achieve the global transaction with MongoDB as the key-value store?", "username": "Aman_Lonare" }, { "code": "", "text": "Hi @Aman_Lonare ,The real question is why do you need postgress? MongoDB is a general puprose database meaning that it can support Key-Value workloads as well as a read based view (fully indexable and filtering on various levels).I wonder if doing all of this complex cross database transaction is just an overhead you can avoid by doing a correct MongoDB schema design that will support both you reads/writes. Then you will not need a transaction outside of MongoDB and moreover might not need transactions at all.Can you describe your data concerns/design so I can help you use only MongoDB for this application?Otherwise, the only method is to manage a two-phase commit when you write data to MongoDB then stream it to Postgress and only once they are both commited you mark them with a flag of data being consistent.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Global distributed transaction involving MongoDB and a RDBMS
2022-09-28T04:08:33.592Z
Global distributed transaction involving MongoDB and a RDBMS
1,360
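A very rough sketch (mongosh, illustrative names only) of the two-phase "consistency flag" Pavel describes: the MongoDB side writes a pending marker, which is flipped only after the RDBMS commit succeeds.

```javascript
// Rough sketch, illustrative names only. The event id is treated as
// "processed" only once both stores have committed.
const eventId = "evt-123"; // example id from the event stream
db.processedEvents.insertOne({ _id: eventId, state: "pending" });

// ... run the PostgreSQL transaction for the read view here ...
// If the RDBMS commit fails, delete the pending marker instead.

db.processedEvents.updateOne({ _id: eventId }, { $set: { state: "committed" } });

// Consumers skip an event only when a *committed* marker exists:
const alreadyDone = db.processedEvents.findOne({ _id: eventId, state: "committed" });
```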
null
[ "aggregation", "queries" ]
[ { "code": "{\n \"_id\": 123 // id\n \"product_id\": ObjectId ('427875465348') // object id of a product\n \"quantity\": 50 // quantity added in the stock\n}\n{\n \"_id\": \"63327c0ba1d239a0dcec7d87\", // id used in stock collection\n \"name\": \"Rice\", // name of product\n \"category\": \"Vegetables\", \n \"life\": 20,\n \"price\": 22.9\n}\n{\n \"_id\": \"63327c0ba1d239a0dcec7d87\", // id used in stock collection\n \"name\": \"Rice\", // name of product\n \"category\": \"Vegetables\", \n \"life\": 20,\n \"price\": 22.9\n \"quantity\" : // sum of all the quantity of the document in stock relating to this product using \n product_id\n}\n", "text": "In my code, I have a collection called stock and another called product.Stock holds the data in this formatAnd product Id stores in this format:Every time a stock is added a new record will be created in stocks table. The stock table can have multiple document with same product id.I want to display all record of product in this way:How I do that?", "username": "Sahil_Shrestha" }, { "code": "db.product.aggregate( [\n { \"$match\" : { \"product_id\" : CurrentProduct } } ,\n { \"$lookup\" : {\n \"from\" : \"stock\" ,\n \"localField\" : \"_id\" ,\n \"foreignField\" : \"product_id\" ,\n \"pipeline\" : [ { \"$group\" : { \"_id\":null , \"quantity\" : { \"$sum\" : \"quantity\" } } } ] ,\n \"as\" : \"result\" \n } }\n] )\n", "text": "Can you provide a sample stock document that refers to your product?The product_id:ObjectId(427875465348) does not match your _id from product.Basically, you do the following UNTESTED aggregation", "username": "steevej" }, { "code": "db.products.aggregate([{\n $match: {\n _id: '63327c0ba1d239a0dcec7d87'\n }\n}, {\n $lookup: {\n from: 'stock',\n localField: '_id',\n foreignField: 'product_id',\n pipeline: [\n {\n $group: {\n _id: null,\n quantity: {\n $sum: '$quantity'\n }\n }\n }\n ],\n as: 'quantity'\n }\n}, {\n $addFields: {\n quantity: {\n $first: '$quantity.quantity'\n }\n }\n}]) \n", "text": "Hi @Sahil_Shrestha ,A small correction to @steevej great pipeline example:This will allow you to reshape the data as wanted.", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to display sum of field from another document
2022-09-27T05:24:17.951Z
How to display sum of field from another document
973
null
[ "queries", "time-series", "data-api" ]
[ { "code": "{\n \"time\": \"2022-01-05T09:34:00.000Z\",\n \"coin\": \"avax\",\n \"_id\": \"61d566111c94337a77a8bdbb\",\n \"value\": 3.64,\n \"currency\": \"usd\"\n}\nconst result = await coins.find({\n coin: coin,\n time: {\n \"$gte\": new Date('2022-01-04')\n }\n }).toArray();\n\nexport const fetchCall = async (type, filter) => {\n let data = {\n \"collection\": \"Coins\",\n \"database\": \"Prices\",\n \"dataSource\": \"DataAPITestCluster\",\n \"filter\": {\n time: {\n \"$gt\": new Date('2022-01-04')\n }\n }\n };\n let config = {\n method: 'post',\n url: `https://data.mongodb-api.com/app/data-ycafa/endpoint/data/beta/action/${type}`,\n headers: {\n 'Content-Type': 'application/json',\n 'Access-Control-Request-Headers': '*',\n 'api-key': key\n },\n data: data\n };\n return axios(config)\n}\n", "text": "I currently have a time series collection and I am exposing it with the Atlas Data API.This is the general schema of my documentsNow I am able to filter my query with the ‘time’ field whenMongoDB NodeJS ExampleSame Example using the Data APIIs this an issue with the Data API not recognizing ‘date’ types?", "username": "Salman_Alam" }, { "code": "let data = JSON.stringify( {\n \"collection\": \"Coins\",\n \"database\": \"Prices\",\n \"dataSource\": \"DataAPITestCluster\",\n \"filter\": {\n time: {\n \"$gt\": new Date('2022-01-04')\n }\n }\n });\n", "text": "Hi @Salman_Alam ,I believe according to documentation you suppose to stringify the data object before passing it to data api:You will see this in every node js test command provided by the UI examples…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny Thanks for the response. I tried “stringifying” but it still didn’t work.", "username": "Salman_Alam" }, { "code": "", "text": "What is the error you get?", "username": "Pavel_Duchovny" }, { "code": "", "text": "Oh I see the problem.You must specify dates in extended json format with a timestamp only :The above link has an example.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "export const fetchCall = async (type, filter) => {\n let data = {\n \"collection\": \"Coins\",\n \"database\": \"Prices\",\n \"dataSource\": \"DataAPITestCluster\",\n \"filter\": {\n time: {\n $gte: Date(\"2022-08-07\"),\n $lte: Date(\"2022-09-14\")\n }\n }\n };\n let config = {\n method: 'post',\n url: `https://data.mongodb-api.com/app/data-ycafa/endpoint/data/beta/action/${type}`,\n headers: {\n 'Content-Type': 'application/json',\n 'Access-Control-Request-Headers': '*',\n 'api-key': key\n },\n data: data\n };\n return axios(config)\n}\n", "text": "Hi Pavel,Could you please provide an syntax example of what you mean? I’m having the same issue and am struggling to make it work.How exactly does the following original poster’s code need to be modified so that a range of documents in between the two dates is returned ??Please advise.", "username": "Josh_Wipf" }, { "code": " \"filter\": {\n time: {\n $gte: {\"$date\": \"2022-08-07T00:00:00.000Z\"},\n $lte: {\"$date\": \"2022-09-14T00:00:00.000Z\"}\n }\n }\n", "text": "Hi @Josh_WipfPlease try:", "username": "Pavel_Duchovny" } ]
Find() with Data API and a Timeseries Collection [NOT WORKING]
2022-01-08T06:34:05.249Z
Find() with Data API and a Timeseries Collection [NOT WORKING]
5,476
null
[ "security" ]
[ { "code": "", "text": "Is it possible to enable FIPS support in Cloud Atlas?I’ve inspected my logs but don’t see the “FIPS 140-2 mode activated” message in my Atlas DBs logs as I do if I enable FIPS on a local instance.If it’s possible, how can I enable it as I don’t have full access to the DBs config file?", "username": "Jason_Craig" }, { "code": "", "text": "It is available only in Entp. edition\nYou have to edit config file,add FIPS parameter and restart mongod to enable it\nSo access to configfile/mongod is a must", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks,Just wanted to confirm it isn’t enable-able in Cloud Atlas: )", "username": "Jason_Craig" }, { "code": "", "text": "Hi @Jason_Craig - Welcome to the MongoDB Community Is it possible to enable FIPS support in Cloud Atlas?FIPS mode cannot be enabled in Atlas directly. However, MongoDB Atlas is FIPS-compatible and supports connections from applications using FIPS 140-2 validated cryptographic modules. The Atlas service offers NIST-approved FIPS 140-2 encryption modes in network transport, API access, and key generation & delegation. Data at rest encryption at the storage volume level uses FIPS validated hardware security module and software components. Web service endpoints for Atlas are compatible with applications configured to communicate in FIPS mode over TLS.Additionally, you may also find viewing the Atlas security whitepaper useful.Hope this helps.Best Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
FIPS support in Cloud Atlas
2021-11-23T05:36:42.751Z
FIPS support in Cloud Atlas
4,896
null
[ "replication" ]
[ { "code": "", "text": "Hi Team,\nUnable to run a query because of the following error.MongoServerError: Unrecognized expression ‘$bsonSize’", "username": "Vishnu_G_Singhal" }, { "code": "", "text": "Hi @Vishnu_G_Singhal, and welcome to the MongoDB Community forums! Can you please let us know what version of MongoDB you’re running, and show us the query and full error that you’re getting? Are you running the query via the shell or through a language driver?", "username": "Doug_Duncan" } ]
MongoServerError: Unrecognized expression '$bsonSize'
2022-09-28T03:20:42.840Z
MongoServerError: Unrecognized expression '$bsonSize'
1,239
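One likely cause worth noting here: $bsonSize was added in MongoDB 4.4, so "Unrecognized expression" usually means some node handling the query is older than 4.4.

```javascript
// Check the server version first:
db.version();

// Once on 4.4+, a typical use looks like this (collection name is a placeholder):
db.collection.aggregate([
  { $project: { docSize: { $bsonSize: "$$ROOT" } } }
]);
```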
null
[]
[ { "code": "", "text": "Let’s say I have an application that is going to allow users and organizations worldwide to collaborate on tasks for different projects. It allows for recording tasks that are assigned to different users, organizations and projects. I need to store potentially tens of millions of users, tens of thousands of organizations, tens of millions of projects and hundreds of millions of tasks (eventually). The application needs to be able to retrieve data from any perspective, so it may need to show all tasks for a given user, organization or project, and will also need to be able to show all users, organizations and projects for a given task. And really, it may need to also show all projects a user has participated in or all users for a particular organization.Is this kind of data problem appropriate for Mongo or should I really be looking at a relational DB? It would be nice to take advantage of the benefits of a document storage system in terms of these individual data entities (collections), along with the flexibility of Mongo, but the need to relate these different data entities as many-to-many in multiple ways is giving me pause. On the other hand, these kinds of joins don’t go deep, so perhaps not an issue?", "username": "Koranke" }, { "code": "", "text": "Is this kind of data problem appropriate for MongoYes, given enough hardware resources.", "username": "steevej" } ]
Mongo for many many-to-many relationships?
2022-09-25T21:43:42.507Z
Mongo for many many-to-many relationships?
1,379
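For what it's worth, a common (hedged, names invented here) shape for this kind of many-to-many data in MongoDB is an edge collection indexed from both directions, which keeps each of the "joins" shallow, exactly as the post anticipates.

```javascript
// Hedged sketch, invented names: one document per task/user/org/project link.
db.assignments.insertOne({
  taskId: ObjectId(),    // stand-ins for real _ids
  userId: ObjectId(),
  orgId: ObjectId(),
  projectId: ObjectId()
});

// Index both directions so "tasks for a user" and "users for a task" are cheap.
db.assignments.createIndex({ userId: 1, taskId: 1 });
db.assignments.createIndex({ taskId: 1, userId: 1 });

// "All tasks for a user" is then a single shallow $lookup:
const someUserId = ObjectId(); // stand-in for a real user _id
db.assignments.aggregate([
  { $match: { userId: someUserId } },
  { $lookup: { from: "tasks", localField: "taskId",
               foreignField: "_id", as: "task" } }
]);
```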
null
[ "aggregation", "queries" ]
[ { "code": "I am currently working to write an aggregation query to pull data. We have a collection that has a subfield that can group multiple records together. An example would be we have a list of users that all have a unique \"_id\" field. All those users can be grouped into an organization and each organization has a unique id for it as well '_organization.id'.\n", "text": "Good afternoon,I am working to write an aggregation query that takes all the distinct ‘_organization.id’ records that are missing a specific field we will say ‘email’. and seeing which ‘_organization_id’ records have more then one user linked to that ‘_organization.id’I am having issues writing this up utilizing the $group statement.Any help would be greatly appreciated!", "username": "Josh_Barnes" }, { "code": "", "text": "Hey Josh, it would be beneficial to see sample documents and the expected output that you are looking for. With that we can better help write a query to do what you want. Without it, we’re just making assumptions which is generally not a good thing to do. ", "username": "Doug_Duncan" }, { "code": "", "text": "Thank you for the advice. So what I am looking for is a list of all ‘_organization.id’ that have more then one user linked to it.Example of data:\n{\n“_id” : “4ba107b0-9703-11e7-93f0-3780a90495c6”,\n“provider” : “local”,\n“hashedPassword” : “”,\n“salt” : “xxxxxxxxxxxxxxxxxxxxxxxx”,\n“_organization” : {\n“id” : “364b5970-783f-11e7-9303-378fa672143b”,\n“type” : “Organization”\n},\n“phone” : “555-555-5555”,\n“email” : “[email protected]”,\n“name” : “Test Record”,\n“verification_code” : “xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx”,\n“type” : “User”,\n“created” : ISODate(“2017-09-11T15:10:06.763Z”),\n“role” : “admin”,\n“crm_id” : null,\n“is_verified” : false,\n“is_active” : true,\n“__v” : 0\n}", "username": "Josh_Barnes" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and update your sample documents.", "username": "steevej" } ]
Need help with getting a Group by to work with data I get from a Distinct statement
2022-09-22T15:26:21.642Z
Need help with getting a Group by to work with data I get from a Distinct statement
2,982
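Based on the sample document, one hedged reading of the requirement as a pipeline follows; the collection name "users" is a guess, and "missing email" is taken to mean the field being absent from the document.

```javascript
// Hedged sketch: users missing "email", grouped by organization, keeping
// only organizations with more than one such user.
db.users.aggregate([
  { $match: { email: { $exists: false } } },
  { $group: { _id: "$_organization.id", count: { $sum: 1 } } },
  { $match: { count: { $gt: 1 } } },
  { $sort: { count: -1 } }
]);
```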
null
[ "aggregation", "crud" ]
[ { "code": "$cond$nedb.collection.updateMany({\n \"basicData.owners.relatedJson.basicData.devices.equipmentID\": {\n $exists: true\n }\n },\n [\n {\n $set: {\n \"basicData.owners\": {\n\n $map: {\n input: \"$basicData.owners\",\n in: {\n\n $mergeObjects: [\n \"$$this\",\n {\n $cond: [\n {\n $ne: [\n \"$$this.relatedJson\",\n undefined\n ]\n },\n {\n \"relatedJson\": {\n $mergeObjects: [\n \"$$this.relatedJson\",\n {\n $cond: [\n {\n $ne: [\n \"$$this.relatedJson.basicData\",\n undefined\n ]\n },\n {\n \"basicData\": {\n $mergeObjects: [\n \"$$this.relatedJson.basicData\",\n {\n $cond: [\n {\n $ne: [\n \"$$this.relatedJson.basicData.devices\",\n undefined\n ]\n },\n {\n \"devices\": {\n $map: {\n input: \"$$this.relatedJson.basicData.devices\",\n in: {\n $mergeObjects: [\n \"$$this\",\n {\n equipmentId: \"$$this.equipmentID\",\n\n }\n ]\n }\n }\n }\n },\n {},\n ]\n }\n ]\n }\n },\n {},\n ]\n }\n ]\n }\n },\n {},\n ]\n }\n ]\n }\n }\n }\n }\n },\n {\n $unset: \"basicData.owners.relatedJson.basicData.devices.equipmentID\"\n }\n ])\nnullrelatedJsonrelatedJsonnull$condundefinednull$in", "text": "Hi all,I have the following query, which has multiple $cond and $ne functions:This query works mostly as expected. However, I noticed that if an object exists but has a value of null, say relatedJson in the above example then this update will create an empty relatedJson value instead of leaving it null.\nThis situation can arise if for example there is one owner in the owners list that satisfies the query filter, but there are others that don’t.\nHow can I modify the $cond functions to exclude undefined and null objects?\nI was thinking I could use the $in function somehow, but I can’t get it to work.\nAnyone got any ideas?Many thanks,Paul.", "username": "Paul_Mallon" }, { "code": "", "text": "", "username": "Jack_Woehr" } ]
How can I exclude null AND undefined objects from an update?
2022-09-27T12:27:22.193Z
How can I exclude null AND undefined objects from an update?
1,837
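A note that may resolve the question: in aggregation expressions there is no real "undefined" literal, and a $ne comparison against it does not skip null values, which is why the update creates empty objects. The aggregation $type operator distinguishes absent fields ("missing") from null fields ("null").

```javascript
// Minimal demo on a hypothetical "demo" collection.
db.demo.insertMany([{ a: { b: 2 } }, { a: null }, {}]);
db.demo.aggregate([
  { $set: { aIsUsable: { $not: { $in: [{ $type: "$a" }, ["missing", "null"]] } } } }
]);
// Swapping each { $ne: [..., undefined] } test in the update above for this
// $type-based check should leave null/absent objects untouched instead of
// creating empty ones.
```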
null
[]
[ { "code": "", "text": "Hi, I am new to Mongo, just trying to find my way around here", "username": "WhiteUnicorn" }, { "code": "", "text": " Hello @WhiteUnicorn, and welcome to the MongoDB Community forums!Once you’ve found your way around, you might want to check out the courses offered on MongoDB University. The documentation is well written and a good place to find information. And always free free to ask any question you have as you progress on your journey with MongoDB here in the forums. There are a lot of knowledgable people around (both community members like yourself and those employed by MongoDB) that love helping answer other’s questions.", "username": "Doug_Duncan" } ]
Hello community
2022-09-27T09:13:44.589Z
Hello community
1,804
null
[ "aggregation" ]
[ { "code": "{\n _id: \"123456\",\n values: [\n {\n \"value\": \"A\",\n \"begin\":0,\n \"end\":1,\n }\n {\n \"value\": \"B\",\n \"begin\":1,\n \"end\":2,\n }\n {\n \"value\": \"C\",\n \"begin\":3,\n \"end\":7,\n }\n ],\n \"name\": \"test\"\n}\ndb.collection.aggregate([\n {$unwind: \"$values\"},\n {$group: {_id: \"$values.value\", count: {$sum: 1}}}\n])\n", "text": "im having some trouble with my MongoDB aggregation. I have documents like:And i want to count only the “value” in “values”. With some help i got the following aggregation:The problem is: it takes me about 20 seconds to get the result for 6k documents. Is there anything i can do for optimication?Greetings", "username": "Nikolai_Klingeln" }, { "code": "$unwindvalues", "text": "Hi @Nikolai_Klingeln and welcome to the MongoDB Community forums! I have a couple questions for you:All of these could have impacts on MongoDB and the processing of this query.", "username": "Doug_Duncan" }, { "code": "", "text": "Hey, thanks for the welcome \nThe 6k Documents are before the unwind, after the unwind i have a total of 32 million documents.\nI use MongoDB with java and i have there the latest version of MongoDB.\nI have 16GB of RAM and a ryzen 4500u (6 CPUs ~2.4GHz).\nI have some other programs running but only low weight applications. Before i start the java program i have only 5% cpu usage and i only use 7GB of ram.", "username": "Nikolai_Klingeln" }, { "code": "db.collection.aggregate([\n {\n $project: {\n values: {\n $map: {\n input: { $setUnion: \"$values.value\" },\n as: \"value\",\n in: {\n value: \"$value\",\n count: {\n $size: {\n $filter: {\n input: \"$values\",\n cond: { $eq: [\"$this.value\", \"$value\"] },\n }\n }\n }\n }\n }\n }\n }\n },\n { $unwind: \"$values\" },\n { $group: { _id: \"$values.value\", count: { $sum: \"$values.count\" } } }\n])\n", "text": "I forgot the mention it.I used this with the hope it would create less documents but it got even worse The project and unwind operations was fast but the group operation was very slow and i had to wait 4 minutes for the aggregation. (It was before 20 seconds)", "username": "Nikolai_Klingeln" }, { "code": "", "text": "Could indexes help me with my problem? Im at the point where im trying everything ", "username": "Nikolai_Klingeln" }, { "code": "$unwind", "text": "Sorry for the delayed response. I was sick over the weekend and just got back to this thread.If you have 6k document before the $unwind and then 32M after, that means your values arrays have 5,333 items on average. sending 32M documents to a group stage is going to take time.How often does this query run? If not that often I would just let it be. 20 seconds to process 32M documents doesn’t seem like a lot of time to me. It’s rare that I ever want to do an aggregation over my entire dataset, but then I don’t know your data or requirements of the query you’re running so this might be perfectly normal in this case.You probably won’t get much help from an index since you’re unwinding an array, but you could tray. MongoDB does find ways of surprising me with what it can do.", "username": "Doug_Duncan" } ]
MongoDB slow $unwind and $group aggregation
2022-09-23T14:41:43.323Z
MongoDB slow $unwind and $group aggregation
3,055
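Two low-risk things worth trying here before restructuring the data, sketched below (untested, names from the thread): project away everything except the grouped value before the $unwind so the intermediate documents are smaller, and let $group spill to disk.

```javascript
// Untested sketch: smaller documents through $unwind, plus external sorting.
db.collection.aggregate(
  [
    { $project: { _id: 0, v: "$values.value" } }, // just the array of values
    { $unwind: "$v" },
    { $group: { _id: "$v", count: { $sum: 1 } } }
  ],
  { allowDiskUse: true }
);
```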
null
[ "node-js", "connecting" ]
[ { "code": "2021-09-07T02:33:33.338615670Z 2021-09-07 1zz4:33:33:337NZ [ERROR][Application]: Error: ERROR:Exec error resulting in state FAILURE :: caused by :: operation was interrupted\n2021-09-07T02:33:33.338645798Z STACK:MongoError: Exec error resulting in state FAILURE :: caused by :: operation was interrupted\n2021-09-07T02:34:15.975509928Z 2021-09-07 14:34:15:974NZ [ERROR][Application]: Error: ERROR:Exec error resulting in state FAILURE :: caused by :: operation was interrupted\n2021-09-07T02:34:15.975539600Z STACK:MongoError: Exec error resulting in state FAILURE :: caused by :: operation was interrupted\n2021-09-07T02:35:32.557043399Z 2021-09-07 14:35:32:556NZ [ERROR][Application]: Error: ERROR:Exec error resulting in state FAILURE :: caused by :: operation was interrupted\n2021-09-07T02:35:32.557073176Z STACK:MongoError: Exec error resulting in state FAILURE :: caused by :: operation was interrupted\n\n\n2021-09-07T00:03:15.557242233Z 2021-09-07 12:03:15:556NZ [ERROR][Application]: Error: ERROR:a stepdown process started, can't checkout sessions except for killing\n2021-09-07T00:03:15.557306312Z STACK:MongoError: a stepdown process started, can't checkout sessions except for killing\n2021-09-07T00:03:15.561886825Z 2021-09-07 12:03:15:561NZ [ERROR][Application]: Error: ERROR:a stepdown process started, can't checkout sessions except for killing\n2021-09-07T00:03:15.561903059Z STACK:MongoError: a stepdown process started, can't checkout sessions except for killing\n2021-09-07T02:35:54.927173707Z 2021-09-07 14:35:54:916NZ [ERROR][Application]: Error: ERROR:a stepdown process started, can't checkout sessions except for killing\n2021-09-07T02:35:54.927206107Z STACK:MongoError: a stepdown process started, can't checkout sessions except for killing\n", "text": "Yesterday we received a wave of bug reports our Node.js MongoDB driver with which we are connected to CloudAtlas.We’ve never come across such errors. Anyone have ideas what they mean?", "username": "Sandor_Vasas" }, { "code": "", "text": "Same here!\nIt started just after we enabled auto scaling.", "username": "IDWise_N_A" }, { "code": "", "text": "Did you manage to fix it? We are receiving the same, together with “pool was force destroyed”.", "username": "Sebastia_Guisasola" }, { "code": "", "text": "Same here, it happened during the auto scaling from M10 to M20 on a private cluster, we lost a lot of queries during a massive insert process. We cannot find documentation on this issue outside of this Thread which no one responds to.", "username": "Simone_Desantis" }, { "code": "serverSelectionTryOnce", "text": "It is still happening, this time i tracked the following errors on a costant polling test:\nin order i get:then 3 of:After that all is gone ok.\nHope someone could solve this.", "username": "Simone_Desantis" } ]
Stepdown process started, can't checkout sessions except for killing
2021-09-07T11:28:11.756Z
Stepdown process started, can't checkout sessions except for killing
4,399
null
[ "node-js", "connecting" ]
[ { "code": "Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\n at NativeConnection.Connection.openUri (/home/paciojgq/nodevenv/loan/12/lib/node_modules/mongoose/lib/connection.js:830:32)\n at Mongoose.connect (/home/paciojgq/nodevenv/loan/12/lib/node_modules/mongoose/lib/index.js:335:15)\n at Object.<anonymous> (/home/paciojgq/loan/models/db.js:30:10)\n at Module._compile (internal/modules/cjs/loader.js:936:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:947:10)\n at Module.load (internal/modules/cjs/loader.js:790:32)\n at Function.Module._load (internal/modules/cjs/loader.js:703:12)\n at Module.require (internal/modules/cjs/loader.js:830:19)\n at require (internal/modules/cjs/helpers.js:68:18)\n at Object.<anonymous> (/home/paciojgq/loan/app.js:2:1)\n at Module._compile (internal/modules/cjs/loader.js:936:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:947:10)\n at Module.load (internal/modules/cjs/loader.js:790:32)\n at Function.Module._load (internal/modules/cjs/loader.js:703:12)\n at Function.Module.runMain (internal/modules/cjs/loader.js:999:10)\n at internal/main/run_main_module.js:17:11\n(node:74432) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)\n(node:74432) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.\n", "text": "Please help! My NodeJS application I hosted on Namecheap cannot connect to MongoDB Atlas.\nI get the following error:", "username": "Onyejiaku_Theodore_K" }, { "code": "", "text": "As the error message indicateOne common reason is that you’re trying to access the database from an IP that isn’t whitelisted.but in this case the recommendationMake sure your current IP address is on your Atlas cluster’s IP whitelist:is misleading. What needs to be white listed is the IP of your app running on Namecheap.", "username": "steevej" }, { "code": "", "text": "Please how do I know the ip of my app running on Namecheap?\nThanks!", "username": "Onyejiaku_Theodore_K" }, { "code": "", "text": "I do not know that service so I do not know what tools you can use. The best is to contact technical support of this provider.", "username": "steevej" }, { "code": "", "text": "\nAlright\nThanks", "username": "Onyejiaku_Theodore_K" }, { "code": "", "text": "I agree with everyone’s suggestions. The problem is that the IP address isn’t Whitelisted on Atlas (this is a security feature to make sure bad actors do not try to access your MongoDB Database). Getting the IP address depends on the kind of hosting you have from NameCheap. You might have a shared or dedicated IP address depending on your plan. It appears that you can contact NameCheap support to request your IP Address.Learn more about How do I order a dedicated IP?. Find your answers at Namecheap Knowledge Base.Let me know if you have any questions or if you were able to resolve your issue.", "username": "JoeKarlsson" }, { "code": "", "text": "I seem to have a similar issue. I whitelisted my IP but still, it wont work. 
I was also in contact with the Namecheap support but they could not find the problem. It works fine when I whitelist all IPs on MongoDB, but adding the correct IP from my shared Namecheap server won't work. And I also checked the DNS A record; it's pointed to the correct IP.\nSince it still works when all IPs are whitelisted, is there a way to analyze (check) the IP from which a database was accessed? This way I could access it and then check what IP data was received by MongoDB. Is this possible?", "username": "Julian_von_Gebhardi" }, { "code": "", "text": "Try to go to https://www.whatismyip.com/ from the machine where your app is running. It should give the public IP address seen by the cluster.", "username": "steevej" }, { "code": "", "text": "Thanks a lot! But it's the same IP, which I already whitelisted. Still, it's not working. Therefore, I have to give access to every IP. It's a little strange. Some rule of MongoDB seems to reject it even though it's the correct IP.", "username": "Julian_von_Gebhardi" }, { "code": "", "text": "Share the exact error message you get when you only allow access from the whatismyip result.", "username": "steevej" }, { "code": "", "text": "I cannot do it right now, but what basically happens is that it seems to try to connect for around 20 seconds and then it throws an error like: "make sure you have whitelisted the IP from where you are accessing". That is why it would be more helpful to analyze it backwards. Of course, normally it should be the same as from "whatismyip", but it would be interesting to check exactly this.", "username": "Julian_von_Gebhardi" } ]
Namecheap Node.js app cannot connect to Atlas
2020-06-27T13:09:10.880Z
Namecheap Node.js app cannot connect to Atlas
6,536
null
[ "aggregation" ]
[ { "code": "", "text": "i am trying to perform a “count” of a \"$group. It all works fine if the total count items are less than 100k items.When its beyound 100k items, it will return results, but wrong results because if you total them manually, it will be sumed to 100k items.Is there any way around this?Thanks", "username": "zhen_shin_kong" }, { "code": "> db.test.find()\n[\n { _id: 1 }, { _id: 2 },\n { _id: 3 }, { _id: 4 },\n { _id: 5 }, { _id: 6 },\n { _id: 7 }, { _id: 8 },\n { _id: 9 }, { _id: 10 },\n ...\n {_id: 200000}\n\n> db.test.aggregate([ {$group:{_id:null, sum:{$sum:'$_id'}, count:{$sum:1}}} ])\n[ { _id: null, sum: Long(\"20000100000\"), count: 200000 } ]\ndb.version()db.collection.stats()", "text": "Hi @zhen_shin_kong welcome to the community!When its beyound 100k items, it will return results, but wrong results because if you total them manually, it will be sumed to 100k items.I tried a quick repro in MongoDB 6.0.1 using 200,000 documents and don’t see this issue:It appears to correctly sums all the numbers together, and also indicates that there are 200,000 documents in the collection.Could you post a small self-contained code that reproduces this, and also post:Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Here, i am matching 1 weeks worth of data. There is 800k entry.\n\nimage865×403 6.33 KB\n", "username": "zhen_shin_kong" }, { "code": "", "text": "If i execute the same query in mongo-client. its fine.\n\nimage709×693 8.7 KB\n", "username": "zhen_shin_kong" }, { "code": "", "text": "When i execute it via Mongo Atlas Web UI. there is where i see the limit.\nimage780×553 24.3 KB\n", "username": "zhen_shin_kong" }, { "code": "", "text": "Running version.\nMy appolgize, i can only post 1 uploads per reply (so have to split the response)", "username": "zhen_shin_kong" }, { "code": "sum : { $sum : 1 }\n", "text": "An _id when it is an ObjectId is not a number so you cannot $sum. If you want to count using sum you need to useNext time you publish code sample please post it using text format rather than a image. This way we can cut-n-paste into our system for experimenting and cut-n-paste into our replies.", "username": "steevej" }, { "code": "When Sample Mode is enabled, specifies the number of documents passed to $group, $bucket, and $bucketAuto stages. Lower limits improve pipeline running time, but may miss documents.", "text": "When i execute it via Mongo Atlas Web UI. there is where i see the limit.I believe this is expected, since the Atlas aggregation pipeline builder is primarily designed for building pipelines, rather than executing them. That is, the Atlas pipeline builder allows you to easily plan & design your aggregation, and once you finished designing them, you should be exporting the finished pipeline to be used in a driver. The pipeline builder provides an easy way to export your pipeline.In fact, the aggregation pipeline builder you see in Atlas is the same one that you see in MongoDB Compass. However the pipeline builder in Compass has a configurable limit which defaults to 100,000 documents (the same limit as you observed in Atlas). From the description of this setting in Compass:When Sample Mode is enabled, specifies the number of documents passed to $group, $bucket, and $bucketAuto stages. 
Lower limits improve pipeline running time, but may miss documents.But while this limit is configurable in Compass, presently I don’t think it is configurable in Atlas.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregation Group by Count ($sum) maxed at 100k count
2022-09-26T11:48:53.442Z
Aggregation Group by Count ($sum) maxed at 100k count
2,800
null
[]
[ { "code": "", "text": "I have a collection where I have a timestamp field and several records. Now I am trying to make a single query where I put a request time as input param it should just return me the record which is immediately before and after that request time.For example, say there is 1 record for each day from 13-SEP till 22-SEP and I pass 18-SEP as request date then it should return me 17-SEP and 19-SEP records only.Is it possible to do in MongoDB and if so how??", "username": "Sourav_Mehra" }, { "code": "", "text": "My approach would be to", "username": "steevej" } ]
How to find Mongo records before and after a specific time in a single query
2022-09-23T22:12:50.011Z
How to find Mongo records before and after a specific time in a single query
1,432
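One way to express the "one record on each side" idea as a single query is a $facet with one branch per direction; the collection and field names below are assumptions.

```javascript
// Hedged sketch: nearest record before and after a request time.
db.records.aggregate([
  {
    $facet: {
      before: [
        { $match: { ts: { $lt: ISODate("2022-09-18") } } },
        { $sort: { ts: -1 } },
        { $limit: 1 }
      ],
      after: [
        { $match: { ts: { $gt: ISODate("2022-09-18") } } },
        { $sort: { ts: 1 } },
        { $limit: 1 }
      ]
    }
  }
]);
```

One caveat: sub-pipelines inside $facet cannot use indexes, so on a large collection two separate indexed find() queries (sorted descending/ascending with limit 1) may be faster.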
null
[]
[ { "code": "collection1collection2", "text": "I have a mongo collection collection1 with ~ 1000 million records. Now I want to use value of certain fields from the documents of this collection, apply some transformation and insert a new document into another collection collection2.What is the recommended way to do this (considering the scale) while having some kind of observability to the process? If it helps, this is on mongo atlas.", "username": "Ashwin_Bhaskar" }, { "code": "", "text": "You write a aggregation pipeline that", "username": "steevej" } ]
Inserting documents into a collection from another collection
2022-09-26T11:48:59.243Z
Inserting documents into a collection from another collection
2,101
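A sketch of the outline steevej is describing, run entirely server-side with $merge; the stage bodies are placeholders for the real transformation.

```javascript
// Hedged sketch: read from collection1, transform, write into collection2.
db.collection1.aggregate(
  [
    { $match: {} },                                 // optionally restrict the batch
    { $project: { fieldA: 1, fieldB: 1 } },         // keep only the needed fields
    { $set: { fieldC: { $toUpper: "$fieldA" } } },  // example transformation
    { $merge: { into: "collection2",
                whenMatched: "merge", whenNotMatched: "insert" } }
  ],
  { allowDiskUse: true }
);
```

For observability at this scale, one option is to run it in batches (for example, a $match on an indexed _id range) and track progress between batches with db.collection2.estimatedDocumentCount().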
https://www.mongodb.com/…b_2_1024x364.png
[ "atlas-cluster", "database-tools" ]
[ { "code": "", "text": "Hi All, I am newly learning Mangodb. I am following https://university.mongodb.com/\nI try to do mangodb dump as reference of below screen. But I am getting confuse where can I execute below comment .\nimage1412×502 94.9 KB\nI executed this comment via Mango db terminal.mongoexport --uri=“mongodb+srv://m001-student:[email protected]/sample_supplies”", "username": "Bharathi_raja" }, { "code": "", "text": "Also,I tried mangodb terminal its shown erro\nmongoexport --uri=“mongodb+srv://m001-student:[email protected]/sample_supplies”its showing below error\n\nimage1221×167 6.11 KB\n", "username": "Bharathi_raja" }, { "code": "", "text": "Also this sample_supplies db availble in my Atlas window.\n\nimage1596×445 26.9 KB\nwhat is the issue? How can I do mangodb dump. Any one help me. Thanks", "username": "Bharathi_raja" }, { "code": "", "text": "Run the command from os prompt.You are running it at mongo prompt.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "\nimage1046×591 18.4 KB\nRam Actually this is my exact question, I want to do get answer for this question. So I want get response from my Atals. So I dnt know exactly , I need to execute above mentioned import/export operation then find answer for this question.", "username": "Bharathi_raja" }, { "code": "", "text": "HI Now I understand Ram. I got the exact place . Thanks\n\nimage1172×746 60.1 KB\n", "username": "Bharathi_raja" } ]
How and where to do a MongoDB dump
2022-09-27T12:01:17.004Z
How and where to do a MongoDB dump
1,872
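For reference, the resolution above amounts to running the tool from the operating-system shell rather than inside mongosh, and mongoexport also needs a collection and output file. Placeholders stand in for the real credentials and host:

```sh
# Run from the OS shell, not at the mongosh/mongo prompt.
mongoexport --uri="mongodb+srv://m001-student:<password>@<cluster-host>/sample_supplies" \
  --collection=sales \
  --out=sales.json
```

The collection name here is an assumption based on the course's sample_supplies dataset.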
null
[ "aggregation" ]
[ { "code": "db.sales.aggregate(\n [\n { $sort: { item: 1, date: 1 } },\n {\n $group:\n {\n _id: \"$item\",\n firstSalesDate: { $first: \"$date\" }\n }\n }\n ]\n)\nMongoServerError: PlanExecutor error during aggregation :: caused by :: Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting.\n", "text": "Hi community,I have followed this tutorial code. In order to have the first sales date, the sorting is absolutely necessary. But what if there are 500 million sales and the total number of sales exceeds the maximum sort size?This is the error I’d get if the resultset is too largeAny ideas how to solve this?", "username": "derjanni" }, { "code": "", "text": "If you sort using the those keys often you must have an index.", "username": "steevej" } ]
Sort exceeded memory limit: using $group and $first in large result sets
2022-09-27T13:01:20.333Z
Sort exceeded memory limit: using $group and $first in large result sets
2,310
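Two standard remedies for this error, both documented: give the $sort an index it can walk, or opt in to external sorting so the stage may spill to disk:

```javascript
// Option 1: a compound index matching the sort keys avoids the in-memory sort.
db.sales.createIndex({ item: 1, date: 1 })

// Option 2: allow the pipeline to use temporary disk files past the 32MB limit.
db.sales.aggregate(
  [
    { $sort: { item: 1, date: 1 } },
    { $group: { _id: "$item", firstSalesDate: { $first: "$date" } } }
  ],
  { allowDiskUse: true }
)
```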
null
[ "mongodb-shell" ]
[ { "code": "", "text": "installed mongodb and mongoshell, running mongod\nWhen I try to start mongosh I get the error econnrefused\nHow can I fix this error?", "username": "WhiteUnicorn" }, { "code": "", "text": "If your mongod is up you should be able to connect\nIs your mongod running as service or you started it manually from command line?\nCheck mongod.log", "username": "Ramachandra_Tummala" }, { "code": "", "text": "It was looking in wrong location for the db, fixed that and all works now. Thank you.", "username": "WhiteUnicorn" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Get econnrefused error on mongosh
2022-09-27T09:27:14.763Z
Get econnrefused error on mongosh
896
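A sketch of the fix described in the last reply, pointing mongod at the intended data directory before connecting; the path is an assumption:

```sh
# Start mongod against the correct data directory (path assumed).
mongod --dbpath /data/db --port 27017

# Then, in another terminal, connect:
mongosh --port 27017
```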
null
[ "queries", "java", "connecting", "sharding" ]
[ { "code": "Exception sending message; nested exception is com.mongodb.MongoSocketWriteException: Exception sending message\n", "text": "MongoDB is composed of mongos, replica-set, and consists of four shards. (version 4.2.19)\nData is being loaded in real time on MongoDB.\nIn this situation, I would like to know the settings that recommend connectTimeoutMS, socketTimeoutMS, maxTimeMS, and KeepAlive.\nI am changing the setting value and monitoring it, but the error below occurs frequently.", "username": "rinjyu_N_A" }, { "code": "", "text": "Hi @rinjyu_N_A and welcome to the MongoDB community!!It would be very helpful to understand the issue if you could help us with the following details :Also, there has been a community forum post related to a similar issue, would suggest if you could use the solution Exception sending message, MongoSocketWriteException.P.S. 4.2.19 is an older MongoDB version. Please upgrade to 4.2.22 for bug fixes and new features introduced.Thanks\nAasawari", "username": "Aasawari" }, { "code": "case 1 :\nException sending message; nested exception is com.mongodb.MongoSocketWriteException\n\ncase 2 :\nPrematurely reached end of stream; nested exception is com.mongodb.MongoSocketReadException\nspring:\n data:\n mongodb:\n uri: mongodb://test:test@ip1:port1,ip2:port2,ip3:port3,ip4:port4/database?authSource=database&maxIdleTimeMS=360000&maxLifeTimeMS=360000\n", "text": "Hello.\nThank you for your reply.I use spring-data-mongo 3.0.9 version and mongo driver 4.6 version.\nThere are two intermittent errors when saving data to mongodb in the spring boot application.The application uses the save method in mongoTemplate when saving data.The mongodb setting in the application.yml of the spring boot application is shown below.The error occurs regardless of whether maxIdleTimeMS and maxLifeTimeMS are set in the mongodb setting.I tried to apply the MongoClient setting that you linked to the spring boot application, but the error was not resolved.Is there any additional opinion that will help me?", "username": "rinjyu_N_A" }, { "code": "", "text": "Hi @rinjyu_N_A and thank you for sharing the above detailsThe above issue might have occurred due to message being transmitted over a closed socket.\nCan you also confirm if there is pattern observed for the above problem. For example, if the app was running for a while?Is there any additional opinion that will help me?The following post on stackoverflow might have a possible solution.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "I will refer to your opinion.\nThanks for your help. ", "username": "rinjyu_N_A" } ]
MongoDB timeout settings
2022-09-21T13:37:41.370Z
MongoDB timeout settings
6,720
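For illustration only: the timeout options under discussion are ordinary connection-string parameters, so they can be tried directly in the Spring URI. The values below are placeholders to tune against your own monitoring, not recommendations:

```yaml
spring:
  data:
    mongodb:
      # Placeholder timeout values; adjust based on observed latency.
      uri: mongodb://test:test@ip1:port1,ip2:port2,ip3:port3,ip4:port4/database?authSource=database&connectTimeoutMS=10000&socketTimeoutMS=300000&maxIdleTimeMS=360000
```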
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "I have two collections:Collection 1:\nmail | value\na 2\nb 3\nc 4Collection 2:mail | value\na 2\nc 4Desired result:\nmail | value\nb 3How can I do this with the aggregation pipeline? Any help would be appreciated, I am just a begginer and I tried myself but with no success…", "username": "A_ML" }, { "code": "$lookuupdb.coll1.aggregate([\n {\n $lookup: {\n from: \"coll2\",\n localField: \"mail\",\n foreignField: \"mail\",\n as: \"joined\"\n }\n }\n])\n[\n {\n _id: ObjectId(\"633219622627d7ac551adc7f\"),\n mail: 'a',\n value: 2,\n joined: [\n {\n _id: ObjectId(\"633219742627d7ac551adc82\"),\n mail: 'a',\n value: 2\n }\n ]\n },\n {\n _id: ObjectId(\"633219622627d7ac551adc80\"),\n mail: 'b',\n value: 3,\n joined: []\n },\n {\n _id: ObjectId(\"633219622627d7ac551adc81\"),\n mail: 'c',\n value: 4,\n joined: [\n {\n _id: ObjectId(\"633219742627d7ac551adc83\"),\n mail: 'c',\n value: 4\n }\n ]\n }\n]\n$matchjoineddb.coll1.aggregate([\n {\n $lookup: {\n from: \"coll2\",\n localField: \"mail\",\n foreignField: \"mail\",\n as: \"joined\"\n }\n },\n {\n $match: {\n joined: []\n }\n }\n])\n[\n {\n _id: ObjectId(\"633219622627d7ac551adc80\"),\n mail: 'b',\n value: 3,\n joined: []\n }\n]\n", "text": "Hi @A_ML, and welcome to the MongoDB Community forums!In MongoDB the $lookuup operator does a left outer join of two collections.This will give results similar to the following based on your example:However it looks you want only those documents that exist in the left side and not the right side. You can add a $match to the above to return only where the joined field is an empty array (i.e. no matched document on the right side:Which returns:From there you can do what you need with the documents that remain in the pipeline.Note that this works just fine with the very small sample data, but you will want to test with larger amounts of production style data and make sure to filter out any unneeded documents before performing the lookup and make sure that the necessary indexes are in place.", "username": "Doug_Duncan" }, { "code": "{\n $match: {\n joined: []\n }\n }\n", "text": "Hi, thanks for your reply! I will take it into consideration. However, I would like to know how to do it taking two indexes at a time.\nFor example, not just looking at the column “mail”, but taking into account “mail” and “values” at the same time, since the same “mail” could have multiple records too. Is it possible with some kind of concatenation? In that case, is it possible to do it at runtime with a query?Thanks", "username": "A_ML" }, { "code": "$lookupdb.coll1.aggregate([{\n $lookup: {\n from: \"coll2\",\n let: {\n coll1_mail: \"$mail\",\n coll1_value: \"$value\"\n },\n pipeline: [{\n $match: {\n $expr: {\n $and: {\n $eq: [\"$$coll1_mail\", \"$mail\"],\n $eq: [\"$$coll1_value\", \"$value\"]\n }\n }\n }\n }],\n as: \"joined\"\n }\n },\n {\n $match: {\n joined: []\n }\n }\n])\n$lookup", "text": "The documentation covers multi field joins as well. You would use a form of the $lookup similar to the following:Hopefully the updated makes sense. Basically you create variables from the left collection fields to be used later in the pipeline. The pipeline then does the join (and could do other things as well if needed). Finally outside of the $lookup, you have you match to return only the documents that are not in the right collection.", "username": "Doug_Duncan" }, { "code": "", "text": "Thanks for your help. 
Great explanation", "username": "A_ML" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I do a LEFT OUTER JOIN in mongodb?
2022-09-26T17:41:01.271Z
How can I do a LEFT OUTER JOIN in mongodb?
13,285
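Following the note about indexes at the end of the accepted answer, a supporting index on the right-side collection's join keys keeps each per-document lookup from scanning coll2:

```javascript
// Equality-match keys used by the $lookup sub-pipeline.
db.coll2.createIndex({ mail: 1, value: 1 })
```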
null
[ "node-js", "replication" ]
[ { "code": "", "text": "When trying to connect from our node.js web application server to mongodb, we use a single line connection string. Which looks like below:“mongodb://{userid}:{password}@mongodb1:27017,mongodb2:27017,mongodb3:27017/xxx-tenant-master?replicaSet=mongors&authSource=admin”This does not connect from the application but have no issues connecting from mongo-cli terminalAbove line can be used to connect to mongo replicaset with mongo-cli. and you will run it as below in terminal:mongo mongodb://{userid}:{password}@mongodb1:27017,mongodb2:27017,mongodb3:27017/xxx-tenant-master?replicaSet=mongors&authSource=admin", "username": "Rekha_Jadala" }, { "code": "mongodmongos", "text": "Hello @Rekha_Jadala ,Can I confirm that you’re trying to enable TLS/SSL in your deployment?\nIt should be enabled on the server side and is not activated through a connection string.I would recommend you to go through below documentation for more clarification:Also, to create some test certificates, please check below links:Note these certificates are for testing purpose only and for production deployments it’s best to obtain them from a reputable vendor.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
We want to know what connection string parameter should be updated in the application server to enable ssl
2022-09-19T19:13:06.677Z
We want to know what connection string parameter should be updated in the application server to enable ssl
1,313
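As a sketch of "enable TLS on the server side" from the accepted answer, with certificate paths as placeholders; only after the servers are configured do the client-side URI options come into play:

```yaml
# mongod configuration on each replica-set member (paths are placeholders)
net:
  port: 27017
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem
```

The application's connection string would then add the matching client options, e.g. &tls=true&tlsCAFile=/etc/ssl/ca.pem.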
null
[ "aggregation", "mongoose-odm" ]
[ { "code": "[\n {\n title: {\n de: \"Test\",\n en: \"test\"\n },\n infos: [\n {\n label: {\n de: \"Test\",\n en: \"test\"\n },\n data: [\n {\n label: {\n de: \"Test\",\n en: \"test\"\n }\n },\n \n ],\n \n }\n ],\n \n }\n]\n title: `$title.${languageCode}`,\n description: `$description.${languageCode}`,\n infos: {\n $map: {\n input: '$infos',\n in: {\n $mergeObjects: [\n '$$this',\n {\n label: `$$this.label.${languageCode}`,\n },\n {\n data: {\n $map: {\n input: '$$this.data',\n as: 'data',\n in: {\n $mergeObjects: [\n '$$data',\n {\n label: `$$data.label.${languageCode}`,\n },\n {\n description: `$$data.description.${languageCode}`,\n },\n ],\n },\n },\n },\n },\n ],\n },\n },\n },\n }```\n\nThe response looks as expected like that:\n\ntitle 'Test'\n....\n\nSo I merge the right language and only display that string.\n\nNow to my actual problem. When I'm using deep population, so:\n\n\n {\n path: 'exercises.exercise',\n select: ExerciseProtection.DEFAULT(languageCode),\n populate: ExercisePopulate.DEFAULT(languageCode), -> includes the projection showing above\n },\nThe \"infos\" were merged correctly but title and description are not anymore in the results.\nIf I change the projection: title: '$title.en' to someOtherTile: '$title.en' the \"someOtherTitle\" is displayed in the results.\n\nDoes someone has an idea why when I use title it is not displayed in the results but when using someOtherTitle it is in the results?\n\nThanks!", "text": "Hey there,I have a problem with merging objects to the root level while using deep population.\nMy schema looks like that:When using the aggregation function everything working fine with this syntax. I’m getting the following response when I execute:", "username": "Tobias_Duelli" }, { "code": "select: ExerciseProtection.DEFAULT(languageCode)ExerciseProtection.DEFAULT(languageCode)", "text": "Hi @Tobias_Duelli, welcome to the community.\nCan you please mention the reason behind this configuration:select: ExerciseProtection.DEFAULT(languageCode)And what is the value of the ExerciseProtection.DEFAULT(languageCode) variable?The select option in your populate configuration will limit the results to just contain the fields that are mentioned there. I’d recommend you to just remove the select option from your config, validate the output of the populate command and then you can start mentioning the keys in the configuration.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Merge objects in projections is not working correctly for deep populate
2022-09-22T13:37:21.067Z
Merge objects in projections is not working correctly for deep populate
1,415
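A minimal way to act on the suggestion above is to drop the select option first and inspect what populate returns on its own; the model name here is hypothetical, while the path and helper names come from the thread:

```javascript
// Hypothetical model "Workout"; validate the output without `select` first.
const doc = await Workout.findById(id)
  .populate({
    path: 'exercises.exercise',
    populate: ExercisePopulate.DEFAULT(languageCode), // projection from the thread
  })
  .lean();
console.log(doc); // then reintroduce `select` one field at a time
```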
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "When I run mongodump I get an error: Failed: error writing data for collection…\nI tried advise to use --forceTableScan. However, the error continues.\nI am at a total loss for getting this to work.", "username": "David_Cox" }, { "code": "", "text": "What is the associated error?Like unable to write to disk,host unreachable etc\nShare the mongodump command\nWhat is your mongodump version?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I’ve tried various options. For example:mongodump --uri=“mongodb://username:password@:27017/?authSource=admin”\nmongodump --uri=“mongodb://username:password@:27017/?authSource=admin” --forceTableScanThe version of mongodump is 100.6.0The error is Failed: error writing data for collection ‘COLLECTION’ to disk: error writing to file: write dump/COLLECTION.bson: input/output error", "username": "David_Cox" }, { "code": "", "text": "What happens is that it writes for a while and then stops with this error.", "username": "David_Cox" }, { "code": "", "text": "I’ve checked the usual suspects such as disk space. I have plenty. I’ve checked the log file and nothing there to explain why mongodump stopped.", "username": "David_Cox" }, { "code": "", "text": "Appears to be os issue\nWhere is your dump dir located\nWhat is your os? Try to give a different path than default dump dir with --out option", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I am looking into the possibility that there is an issue with our network drive.", "username": "David_Cox" }, { "code": "", "text": "Hi,input/output error generally happens because of disk corruption.\nIf you do fsck to your disk, I think your problem will solve.Thanks.", "username": "Kadir_USTUN" } ]
Mongodump: Failed: error writing data for collection
2022-09-26T03:16:25.294Z
Mongodump: Failed: error writing data for collection
5,391
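A quick test of the network-drive hypothesis from the end of the thread is to point --out at a local filesystem path; host and path here are placeholders:

```sh
# Dump to local disk instead of the network share.
mongodump --uri="mongodb://username:password@<host>:27017/?authSource=admin" \
  --out=/tmp/mongodump-test
```

If this completes, the input/output error points at the network mount rather than mongodump itself.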