image_url: string (length 113-131)
tags: list
discussion: list
title: string (length 8-254)
created_at: string (length 24)
fancy_title: string (length 8-396)
views: int64 (range 73-422k)
null
[ "golang" ]
[ { "code": "", "text": "About me I’m a backend developer working for Dell technologies , For our product we are currently using the globalsign package , But we are planning to migrate the mongoDB code to mongo package - go.mongodb.org/mongo-driver/mongo - Go Packages (go mongo driver) since globalsign doesn’t have the support for mongo 5.X as mentioned in the documentation .MongoDB 4.0 is currently experimental - we would happily accept PRs to help improve support!Questions:Any inputs will be of great help for us , We are having a hard time migrating the session to the client as per the official go mongo driver.", "username": "karthick_d" }, { "code": "golanggolang", "text": "Welcome to the MongoDB community @karthick_d !I don’t know what the gaps are in terms of mgo support for modern MongoDB versions, but since the last release was more than 4 years ago (predating all current non-EOL versions of MongoDB server), the least risky path would be moving to the official MongoDB Go driver. You could test mgo to see if there are any obvious discoverable issues with your application and MongoDB 5.0, but if you encounter any problems there are unfortunately no active maintainers to help investigate and resolve.If no workarounds , is there any document that helps in migrating the code from globalsing to official go mongo driver ?There is a Go Migration Guide and usage examples in the Go Driver Documentation that may be helpful.You can also post specific questions in the community forums (I recommend tagging with golang for visibility) or search past golang discussions in the Drivers & ODMs category.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Need support for MongoDB Go migration
2022-10-09T09:25:54.314Z
Need support for MongoDB Go migration
1,827
null
[ "queries", "node-js" ]
[ { "code": "db.find_listing_by_id(\"6342e1a22d04b5dcc47d6e0c\");\nconst { MongoClient } = require('mongodb');\nconst client = new MongoClient(uri);\n\n\nasync find_listing_by_id(_player_id) {\n let result = null;\n try {\n // Connect the client to the server\n await client.connect();\n\n // Establish and verify connection\n await client.db(\"admin\").command({ ping: 1 });\n\n result = await client.db(\"game_data\").collection(\"players\").find({ _id: _player_id }).toArray();\n console.log(result);\n if (result) {\n console.log(`Player as found with the id: ${_player_id}`);\n console.log(result);\n }\n else {\n console.log(`Player as NOT found with the id: ${_player_id}`);\n }\n }\n catch (e) {\n console.log(e);\n }\n finally {\n // Ensures that the client will close when you finish/error\n await client.close();\n console.log(\"Disconnected successfully to server\");\n }\n\n return result;\n }\n", "text": "I was trying to get the contents of a document by specifying its _id in the search.\nThis is my current code:I ran into some problems:", "username": "Vallo" }, { "code": "", "text": "Documentation for mongodbIf I don’t put “toArray ()” at the end of my search query, I get a “FindCursor” instead of documents. It’s normal?If you look at the finely written documentation, you will find that indeed this is normal.With my current code, where I search via id, I am returned an empty array.Most likely, because the value type of your variable _player_id does not match the value type of the field _id. This or _player_id is really not present in your collection.", "username": "steevej" }, { "code": "db.find_listing_by_id(ObjectId(\"6342e1a22d04b5dcc47d6e0c\"));", "text": "Ok, i solved using db.find_listing_by_id(ObjectId(\"6342e1a22d04b5dcc47d6e0c\"));.", "username": "Vallo" }, { "code": "", "text": "So it was the case thatthe value type of your variable _player_id does not match the value type of the field _id", "username": "steevej" }, { "code": "", "text": "Exactly. Unfortunately I’m new to JS and MongoDB , and hadn’t been careful about using the variable type.\nThanks for your help!", "username": "Vallo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Find() return value problem on NodeJS
2022-10-09T15:51:53.174Z
Find() return value problem on NodeJS
2,968
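The fix above (wrapping the id string in ObjectId) applies in every driver, not only the Node.js one. A minimal PyMongo sketch of the same lookup; the uri, database and collection names are assumptions taken from the thread:

# Sketch (PyMongo): a plain string does not match an ObjectId-typed _id,
# so the query value must be wrapped in bson.ObjectId.
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient(uri)  # uri is assumed to be defined elsewhere
players = client["game_data"]["players"]

# find() returns a cursor; find_one() returns one document (or None), which
# avoids the "FindCursor instead of documents" surprise from the thread.
player = players.find_one({"_id": ObjectId("6342e1a22d04b5dcc47d6e0c")})
print(player if player else "no player with that _id")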
null
[ "aggregation", "queries", "python" ]
[ { "code": "{ \"_id\" : ObjectId(\"62XXXXXX\"), \"res\" : 12, ... }\n{ \"_id\" : ObjectId(\"63XXXXXX\"), \"res\" : 23, ... }\n{ \"_id\" : ObjectId(\"64XXXXXX\"), \"res\" : 78, ... }\n", "text": "In my mongodb collection documents are stored in the following format:I need to extract id’s for the document for which the value of “res” is outlier (i.e. value < Q1 - 1.5 * IQR or value > Q3 + 1.5 * IQR (Q1, Q3 are percentiles)). I have done this using pandas functionality by retrieving all documents from the collection, which may become slow if the number of documents in collection become too big.Is there a way to do this using mongodb aggregation pipeline (or just calculating percentiles)?", "username": "Vahe_Sahakyan" }, { "code": "", "text": "May be, just may be, you may use:\nor", "username": "steevej" } ]
Mongodb aggregation to find outliers
2022-10-06T10:33:28.855Z
Mongodb aggregation to find outliers
1,237
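The replies above point at aggregation options without spelling them out. One low-transfer approach is to project only the res field, compute Q1/Q3 client-side, and let the server return just the outlier ids; on MongoDB 7.0+ the $percentile accumulator could move the quartile step into the pipeline as well. A sketch, assuming the database and collection names:

# Sketch (PyMongo): transfer only "res" values, compute the IQR bounds locally,
# then ask the server for outlier _ids only.
from statistics import quantiles
from pymongo import MongoClient

coll = MongoClient()["mydb"]["results"]  # names are assumptions

res_values = [d["res"] for d in coll.find({"res": {"$type": "number"}}, {"res": 1, "_id": 0})]
q1, _, q3 = quantiles(res_values, n=4)  # Python 3.8+
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outlier_ids = [d["_id"] for d in coll.find(
    {"$or": [{"res": {"$lt": low}}, {"res": {"$gt": high}}]},
    {"_id": 1},
)]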
null
[ "node-js", "atlas-cluster" ]
[ { "code": "", "text": "Help Please:MONGODB_URI seems invalid: querySrv ENOTFOUND _mongodb._tcp.cluster0.xxxxx.mongodb.net", "username": "Leonor_Lopes" }, { "code": "", "text": "Hi @Leonor_Lopes, and welcome to the MongoDB Community forums! I can get a similar error if I have a typo in my server name. Have you verified that you’re typing in the correct URI?Another thing to check for is to make sure you don’t have a firewall blocking traffic.", "username": "Doug_Duncan" } ]
Mongodb_URI seems invalid
2022-10-09T13:50:11.407Z
Mongodb_URI seems invalid
930
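querySrv ENOTFOUND means the DNS SRV lookup for the mongodb+srv host failed, which usually comes down to a typo in the host name or blocked DNS/egress, as noted above. A quick way to surface this is to ping with a short server-selection timeout; the URI below is a placeholder:

# Sketch (PyMongo): fail fast if the SRV host name is wrong or DNS is blocked.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

uri = "mongodb+srv://user:[email protected]/?retryWrites=true"  # placeholder
try:
    client = MongoClient(uri, serverSelectionTimeoutMS=5000)
    client.admin.command("ping")
    print("connected")
except PyMongoError as exc:
    print("connection failed:", exc)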
null
[ "aggregation" ]
[ { "code": "{ \n \"_id\" : \"627jhjhghgf5251c87klkjkj601aee\", \n \"creationDate\" : \"2022-05-12T17:16:31.788+0000\", \n \"modifiedDate\" : \"2022-05-12T17:16:43.062+0000\", \n \"claimNumber\" : \"Q030NEE01932\", \n \"xCorrelationId\" : \"b25a19ad-7816-4288-a0d9-becc0ac0a5db\", \n \"modifiedBy\" : \"12311684\", \n \"events\" : [{\n \"abc\":1\n }\n ], \n \"_class\" : \"com.xyaxdd\"\n},\n{ \n \"_id\" : \"627d40f05251c87c3f601aee\", \n \"creationDate\" : \"2022-09-12T17:16:31.788+0000\", \n \"modifiedDate\" : \"2022-09-12T17:16:43.062+0000\", \n \"claimNumber\" : \"Q030NEE0jhjhj\", \n \"xCorrelationId\" : \"b25a19ad-7816-4288-a0d9-becc0ac0a5db\", \n \"modifiedBy\" : \"456311684\", \n \"events\" : [{\n \"xyz\":3455\n }\n ], \n \"_class\" : \"com.AuditRecord\"\n}\ndb.audit.aggregate([\n {$group : { \"_id\": \"$xCorrelationId\", \"count\": { $sum: 1 },\n events: {$push: \"$events\"},\n creationDate: { $min: \"$creationDate\" }, modifiedDate: { $max: \"$modifiedDate\" },claimNumber:{$first: \"$claimNumber\"}, _class:{$first: \"$_class\"}\n },\n \n },\n {$match: {\"_id\" :{ $ne : null } , \"count\" : {$gt: 1} } },\n {$sort: {count: -1}},\n {$project: {\"xCorrelationId\" : \"$_id\", \"_id\" : 0, count: 1,\n events:{\n $reduce: {\n input: '$events',\n initialValue: [],\n in: {$concatArrays: ['$value', '$this']}\n }\n },\n creationDate: 1, modifiedDate: 1, claimNumber: 1, _class: 1\n } },\n \n],\n{allowDiskUse:true});\nSELECT modifiedBy\nFROM dpr_audit_record s\n JOIN (SELECT MAX(modifiedDate) AS md, _id FROM dpr_audit_record GROUP BY xCorrelationId) max\n ON s._id = max._id\n", "text": "I am trying to find duplicated data based on xCorrelationID in a collection as mentioned below:I have the following query which works fine but The piece where I am stuck is on modifiedBy field. The value of modifiedBy should correspond for the document with max(modifiedDate) within a group of related xCorrelationID. For example, if there are 10 documents within a group of related xCorrelationId and 5th document has max date for modifiedDate then I need to take modifiedBy value for that 5th documentmodifiedBy can be achieved in SQL DB with following query.Can anybody help me to transform it in mongodb? I am trying to use $lookup after $group in above query but failing.\nPS: Mongodb version - 4.4.15", "username": "Poonam_Gupta" }, { "code": "events: {$push: \"$events\"}", "text": "My approach would be a little different.events: {$push: \"$events\"}I will keep the $match as it itI don’t think you really need to $sort on count:1I would $lookup into audit $match-ing xCorrelationId (_id after $group) and modifiedDate (which is the $max in the $group) to find modifiedBy.", "username": "steevej" } ]
Self-join in MongoDb
2022-10-05T16:02:40.083Z
Self-join in MongoDb
1,778
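An alternative to the $lookup suggested above, and one that works on 4.4, is to $sort by modifiedDate before the $group so that $first picks modifiedBy from the most recently modified document in each xCorrelationId group. A PyMongo sketch against the same audit collection (db is assumed, and the events handling is simplified):

# Sketch (PyMongo, MongoDB 4.4-compatible): sort first, then $first inside $group
# takes its values from the latest-modified document of each group.
pipeline = [
    {"$match": {"xCorrelationId": {"$ne": None}}},
    {"$sort": {"modifiedDate": -1}},
    {"$group": {
        "_id": "$xCorrelationId",
        "count": {"$sum": 1},
        "modifiedBy": {"$first": "$modifiedBy"},      # from the max-modifiedDate doc
        "modifiedDate": {"$first": "$modifiedDate"},  # i.e. the max
        "creationDate": {"$min": "$creationDate"},
        "claimNumber": {"$first": "$claimNumber"},
        "events": {"$push": "$events"},
    }},
    {"$match": {"count": {"$gt": 1}}},
]
dups = list(db.audit.aggregate(pipeline, allowDiskUse=True))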
null
[ "data-modeling" ]
[ { "code": "1. collection users: \n - userid \n - username \n - userpassword \n - other info user \n2. collection following: \n - userid \n - [array of followingIds] //productIds \n3. collection followed (watchlist): \n - productId \n - [array of followeeIds] \n1. collection users: \n - userid - username \n - userpassword \n - other info user \n - followeeIds: [] //ids of products user follows \n2. collection followed (watchlist) \n - userId \n - reference to product or embedded product \n\n1. collection users: \n - userid - username \n - userpassword \n - other user specific info user \n2. collection following: \n - userid \n - [array of followingIds] //productIds (no reference as it would never grow as large as userIds) \n3. collection followed: \n - productId \n - followee: {\n type: Schema.Types.ObjectId, \n ref: 'user' \n } //here instead of having array of userIds, I reference user\n1. collection users: \n - userid - username \n - userpassword \n - other info user \n2. collection follow \n - userid (the follower)\n -productId (followee)\n3. collection Watchlist (this is to show users watchlist, not for sending notifications to all followers of product xyz\n - userId\n -embedded or reference to product\n", "text": "There have been many similar questions but they are all about followers/followees where each other is following the other (bot sides could potentially grow very large)my case: I have users, each user can follow products. Each user has a watchlist (summary of all watched items). I want to be able to send notification to all users who follow a spec. product when something about the product has changed. So I need to query all followers of a specific product (there will be many products but they will never grow as big as the document size limit would be reached)first approachpro: only one lookup to get followerIds (lookup of collection followed)con: 16mb per document limit would be potentially reached because followeeIds in followed could grow very bigsecond approachpro: 16mb per document limit would not be reachedcon: would be 2 queries as I understand -have to search through all users + followeeIds to see who is follower3rd approach:I would reference users in my followee collection. Im not sure if this would lead to less queries/better performance4th approach: here I would not use arrays so lets say user 1 follows 5 products, there will be 5 documents belonging to collection Follow.", "username": "Anna_N_A" }, { "code": "collection users: \n - userid - username \n - type : \"main\"/\"overflow\"\n - userpassword \n - other info user \n - followeeIds: [{productId, name ...}]\n - hasOverflow : true/false\ncollection products:\n- any additional products info\n", "text": "Hi @Anna_N_A,I think you haven’t considered a hibread solution where you would embedded a portion of users following products in its main document and have any overflow products in an “overflow” document.So whenever your wish list is growing lets say beyond 200 products you open an overflow document in the same collection. You index the user id and also index the product id. When you search for a product id to notify you will get the user information from any document as you will duplicate that into each overflow document.Or if you search for the user entire list you will get all of its documents based on the user index. 
All done with one query no need for lookupsThis is called outlier pattern:The Outlier Pattern helps when there's exceptionally large records occasionally occurring in your data setThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks a lot for your answer. Based on the article you shared, the outlier pattern is used more in exceptional cases, where certain documents could become too large (the example of the harry potter book for example). Should this pattern also be applied, when its likely that many (not most but neither only a few) documents will be affected and outlier pattern will be applied? Also, what number approx. would you recommend to start applying that pattern?", "username": "Anna_N_A" }, { "code": "", "text": "Hi @Anna_N_A ,Having large arrays you use for query or write operations is a place to be cautious about.Therefore if your arrays cross the houndreds i would split them into outlier pattern.Its also good for ui pagination if you think about it.My place to start for small to medium (few kb ) elements is around 500 (max 1000) per document/bucket…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "collection globalNotifications (notifications send to all users)\nto: [{ //ObjectId's indexed\n type: mongoose.Schema.Types.ObjectId,\n ref: 'User'\n }],\n readAt: {\n type: Date\n },\n hasOverflow : true\n...\ncollection globalNotifications (notifications send to all users)\nto: { //ObjectId indexed\n type: mongoose.Schema.Types.ObjectId,\n ref: 'User'\n },\n readAt: {\n type: Date\n },\n...\n", "text": "Just one last question. Would you also apply the outlier pattern to global notifications, where notifications are send to all users? something like this:or would you in this case because all users will be affected still choose this:", "username": "Anna_N_A" }, { "code": "", "text": "Please share the nature of a notification?Is it added when something change or a user login?", "username": "Pavel_Duchovny" }, { "code": "", "text": "I have 2 types of notifications.so far i have 3 schemas for notifications: userNotification, globalNotifications and notificationContent (which I want to share among global and userNotification)", "username": "Anna_N_A" }, { "code": "notification.find({timestamp : {$gt : <my last timestamp>}}).sort({timestamp : -1})\n", "text": "Ok @Anna_N_A ,So it sounds like the global notifications should have a collection of its own and every user should periodically or on login , query this collection for unread messages. Then the user can specify onit own object that up until x timestamp all global notifications received/read.On the product based notifications you can have a change stream or something of that nature Getting a product change and its Id and then query user collection to any user with an element of this product subscribed.Then you can have a document per user with the notification written or a bucket of latest x notifications Its up to you.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo schema for many to few relationship - users following few products - approach?
2022-10-06T16:58:21.935Z
Mongo schema for many to few relationship - users following few products - approach?
2,671
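A sketch of the write path for the bucket/overflow approach described above, in PyMongo; the 500-element cap and the field names userId, followeeIds and count are assumptions:

# Sketch (PyMongo): push into a bucket that still has room; when every bucket
# for the user is full, the upsert creates a new "overflow" document.
def follow_product(col, user_id, product_id):
    col.update_one(
        {"userId": user_id, "count": {"$lt": 500}},
        {"$push": {"followeeIds": product_id}, "$inc": {"count": 1}},
        upsert=True,
    )

# Notifying followers of a product stays a single indexed query:
# col.find({"followeeIds": product_id}, {"userId": 1})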
https://www.mongodb.com/…b_2_1024x155.png
[ "server" ]
[ { "code": "brew services start mongodb-communitybrew services restart mongodb-communitybrew services start [email protected] services listName Status User Filemongodb-community error 3584 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist/bin/launchctl bootstrap gui/501 /Users/macbook/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist/bin/launchctl bootstrap system /Library/LaunchDaemons/homebrew.mxcl.mongodb-community.plist", "text": "Hello,I tried every solution here but I still get the same error when I try to start mongodb-community edition through brew services start mongodb-community\nor\nbrew services restart mongodb-community\nor\nbrew services start [email protected]\nEkran Resmi 2022-10-04 18.37.301202×182 37.2 KB\nI deleted the 27017.sock file and tried these things again. At some point I always get the same error when I\nbrew services listName Status User File\nmongodb-community error 3584 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistAnd this error while trying to restart.\nError: Failure while executing; /bin/launchctl bootstrap gui/501 /Users/macbook/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist exited with 5.When I tried to start it “sudo”, I get the error below;\nError: Failure while executing; /bin/launchctl bootstrap system /Library/LaunchDaemons/homebrew.mxcl.mongodb-community.plist exited with 37.Can anyone help me on this? This really got my days and I’m frustrated.", "username": "Samed_Torun" }, { "code": "rootsudoroot", "text": "Have you looked at the MongoDB log file to see what’s being written to it? That should have the error that’s happening. If you can post the log showing the most recent attempt to start the process we can use that to help you out.Also note that you shouldn’t ever run the MongoDB process as the root user (running with sudo) as this sets permission on files/directories that cause issues when you try to run as your normal user. 
Running as root can also lead to escalation of privileges and allow for bad things to happen.", "username": "Doug_Duncan" }, { "code": "{\"t\":{\"$date\":\"2022-10-04T18:29:00.238+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/Users/macbook/Desktop/mongodatabasepathtest\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-10-04T18:29:00.238+03:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22302, \"ctx\":\"initandlisten\",\"msg\":\"Recovering data from the last clean checkpoint.\"}\n{\"t\":{\"$date\":\"2022-10-04T18:29:00.238+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3584M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-10-04T18:29:00.948+03:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":1,\"message\":\"[1664897340:946133][2690:0x11404f600], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /Users/macbook/Desktop/mongodatabasepathtest/WiredTiger.turtle: handle-open: open: Operation not permitted\"}}\n{\"t\":{\"$date\":\"2022-10-04T18:29:00.951+03:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":1,\"message\":\"[1664897340:951390][2690:0x11404f600], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /Users/macbook/Desktop/mongodatabasepathtest/WiredTiger.turtle: handle-open: open: Operation not permitted\"}}\n{\"t\":{\"$date\":\"2022-10-04T18:29:00.952+03:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":1,\"message\":\"[1664897340:952344][2690:0x11404f600], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /Users/macbook/Desktop/mongodatabasepathtest/WiredTiger.turtle: handle-open: open: Operation not permitted\"}}\n{\"t\":{\"$date\":\"2022-10-04T18:29:00.952+03:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. 
This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2022-10-04T18:29:00.952+03:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"1: Operation not permitted\"}}\n{\"t\":{\"$date\":\"2022-10-04T18:29:00.952+03:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":702}}\n{\"t\":{\"$date\":\"2022-10-04T18:29:00.952+03:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\ntail $(brew --prefix)/var/log/mongodb/mongo.log", "text": "Is it something like this? I used\ntail $(brew --prefix)/var/log/mongodb/mongo.logto get this.", "username": "Samed_Torun" }, { "code": "\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"dbPathmongoddbPathbrew services restart ...", "text": "The following sticks out:\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"It seems like there are data files in the dbPath that are not compatible with the version of MongoDB that you are running. You can either delete all the files in this path if the data is not needed. If you need/want to keep this data, then you can move them off in a new location and then try to figure out what version of MongoDB those files were created with and start that version of the mongod process at a later time. Once you have a clean dbPath directory, try running your brew services restart ... command again and see if you can get the process started.", "username": "Doug_Duncan" }, { "code": "systemLog:\n destination: file\n path: /usr/local/var/log/mongodb/mongo.log\n logAppend: true\nstorage:\n dbPath:\nnet:\n bindIp: 127.0.0.1, ::1\n ipv6: true\nName Status User File\nmongodb-community error 25600 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\ndbPath: /usr/local/var/mongodbName Status User File\nmongodb-community error 3584 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\nThe file /Users/macbook/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist does not exist.\nmacbook@SamsMacbook ~ % tail $(brew --prefix)/var/log/mongodb/mongo.log \n{\"t\":{\"$date\":\"2022-10-07T19:11:33.261+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-10-07T19:11:33.261+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-10-07T19:11:33.261+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-10-07T19:11:33.261+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":31154,\"port\":27017,\"dbPath\":\"\",\"architecture\":\"64-bit\",\"host\":\"SamsMacbook.local\"}}\n{\"t\":{\"$date\":\"2022-10-07T19:11:33.261+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, 
\"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-10-07T19:11:33.261+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.3.0\"}}}\n{\"t\":{\"$date\":\"2022-10-07T19:11:33.261+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/usr/local/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1, ::1\",\"ipv6\":true},\"storage\":{\"dbPath\":true},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/usr/local/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-10-07T19:11:33.262+03:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Permission denied\"}}\n{\"t\":{\"$date\":\"2022-10-07T19:11:33.262+03:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1120}}\n{\"t\":{\"$date\":\"2022-10-07T19:11:33.262+03:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n", "text": "Hello,I have changed my config file. And the dbPath part is empty now. This is how my config file looks right now.However, this time I get another error when I try to restart my mongo community. This is the error right now.After this error I checked this linkAnd I changed dbPath line to:dbPath: /usr/local/var/mongodbHowever, I get another error which has the sam error number like the first one (3584), like the one below:When I try to open the file in the error, I get an error like this.Because I have deleted that file formerly.This is the latest log.Can you guys help me please? I’ve been struggling for days. Thank you.", "username": "Samed_Torun" }, { "code": "", "text": "It says failed to unlink tmp .socket file.Check permissions/ownership of this file\nMost likely owned by root from your previous run\nYou have to remove this file and start service again\nAlso make sure your dbpath directory is empty and has appropriate permissions on it for mongod to write onto it", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Most likely owned by root from your previous runWhat file should I remove? How can I change the root? How do I change the permissions mongod to write?", "username": "Samed_Torun" }, { "code": "", "text": "Check\nls -lrt /tmp/mongod-27017.sock\nls -lrt /usr/…/…/mongodb", "username": "Ramachandra_Tummala" }, { "code": "rootsudo rm /tmp/mongod-27017.sockbrew services listsmongodb-communityrootrootroot", "text": "As Ramacchandra states, the socket file is most likely owned by the root user and can safely be removed by running sudo rm /tmp/mongod-27017.sock.One thing I notice is that in your brew services lists output it shows the user of mongodb-community as root. You should be installing and running MongoDB as your regular user. 
The MongoDB process should never be run as the root user unless you are directed to by MongoDB support team for troubleshooting, and even then only if you understand the implications of what happens when you run the service as the root user. This could lead to privilege escalation which could then lead to bad things happening.", "username": "Doug_Duncan" }, { "code": "macbook@SamsMacbook ~ % sudo rm /tmp/mongod-27017.sock.\nPassword:\nrm: /tmp/mongod-27017.sock.: No such file or directory\nmacbook@SamsMacbook ~ % ls -lrt /usr/local/var/mongodb \ntotal 0\nmacbook@SamsMacbook Library % ls -l /tmp/mongodb-27017.sock\nsrwx------ 1 root wheel 0 7 Eki 18:44 /tmp/mongodb-27017.sock\nmacbook@SamsMacbook ~ % ls -l /tmp/mongodb-27017.sock\nsrwx------ 1 root wheel 0 7 Oct 18:44 /tmp/mongodb-27017.sock\nsudo rmmacbook@SamsMacbook ~ % brew services list\nName Status User File\nmongodb-community error 25600 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\nmacbook@SamsMacbook /tmp % brew services list \nName Status User File\nmongodb-community error 25600 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\nmacbook@SamsMacbook /tmp % tail $(brew --prefix)/var/log/mongodb/mongo.log\n{\"t\":{\"$date\":\"2022-10-08T16:45:24.399+03:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-08T16:45:24.399+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-10-08T16:45:24.399+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-10-08T16:45:24.399+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-10-08T16:45:24.399+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2022-10-08T16:45:24.400+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2022-10-08T16:45:24.400+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-10-08T16:45:24.400+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-10-08T16:45:24.400+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-10-08T16:45:24.400+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n", "text": "I get this when I write something like @Ramachandra_Tummala said.I don’t know what installing as a regular user means. I deleted the mongo and reinstalled it many times. Nothing works.This is the situation right now. I thought I have removed the file but whenever I write this command, the result show up.UPDATEI found the .sock file and deleted it with sudo rmHowever this time I get the error below;And now this is the log;I just dont even know what is going on at this point. All I ever wanted to do is setting up an environment an start my backend tutorial. 
I dont know if this is normal or not but I’m not sure if I ever use Mongo again.Do you guys know why doesn’t anything work? Or any suggestion?", "username": "Samed_Torun" }, { "code": "Killing all outstanding egress activity.\"macbook@SamsMacbook ~ % cd /tmp\nmacbook@SamsMacbook /tmp % ls -a\nmacbook@SamsMacbook /tmp % sudo rm mongodb-27017.sock\nmacbook@SamsMacbook ~ % brew services restart mongodb-communitymacbook@SamsMacbook ~ % brew services list\nName Status User File\nmongodb-community error 25600 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\nmacbook@SamsMacbook ~ % open /usr/local/etc/mongod.confsystemLog:\n destination: file\n path: /usr/local/var/log/mongodb/mongo.log\n logAppend: true\nstorage:\n dbPath: /usr/local/var/mongodb\nnet:\n bindIp: 127.0.0.1, ::1\n ipv6: true\n dbPath: /usr/local/var/mongodbbrew services restart mongodb-community", "text": "Killing all outstanding egress activity.\"Hello guys,I have solved the problem.When the error was error 3584, what I did was to delete the 27017.sock file as you have said before.\nI did this using these commands one by one; (I’m sure there are easier ways but I wanted to see everything clearly.)And then restarted the mongo.macbook@SamsMacbook ~ % brew services restart mongodb-communityI saw this error when I check if it is working properly or not.I have learned that this is caused because of the .conf file details. How to reach it? Easy.macbook@SamsMacbook ~ % open /usr/local/etc/mongod.confThe file includes some code like this;See if there is different than the one I shared above. My dbPath was empty. I changed it to the default value I found in the internet. dbPath: /usr/local/var/mongodbAnd then, we commandbrew services restart mongodb-communityThat’s all folks! These are the solutions for error 3584 and error 25600 in Mongo.Thanks @Doug_Duncan and @Ramachandra_Tummala", "username": "Samed_Torun" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot Start mongodb-community
2022-10-04T15:40:21.324Z
Cannot Start mongodb-community
15,244
https://www.mongodb.com/…6_2_1023x147.png
[ "swift" ]
[ { "code": "⚠️ ld: ignoring file /Users/jenkins/builds/gvx3QBW3/0/gc.com/odyssey/DerivedData/Build/Products/Debug-iphonesimulator/XCFrameworkIntermediates/Realm/librealm-monorepo.a, building for iOS Simulator-x86_64 but attempting to link with file built for unknown-unsupported file format ( 0x76 0x65 0x72 0x73 0x69 0x6F 0x6E 0x20 0x68 0x74 0x74 0x70 0x73 0x3A 0x2F 0x2F )\n❌ Undefined symbols for architecture x86_64\n> Symbol: realm::ArrayMixed::init_from_mem(realm::MemRef)\n> Referenced from: realm::BPlusTree<realm::Mixed>::cache_leaf(realm::MemRef) in RLMQueryUtil.o\n❌ ld: symbol(s) not found for architecture x86_64\n❌ clang: error: linker command failed with exit code 1 (use -v to see invocation)\ngit diffbundle exec pod install", "text": "I tried to upgrade the version of RealmSwift in a project from 4.4.1 to 10.30.0 on my Intel mac.It compiles locally on my machine, but on a CI machine that is also an M1 mac, I get this error:However, doing the same upgrade on an M1 mac and pushing to CI makes it compile successfully in CI.I did a git diff against two branches where the upgrades were done on separate machines, and I see this:\nScreen Shot 2022-10-06 at 3.45.29 PM3188×458 139 KB\nThis seems to be telling me that the binaries are different. I’m curious why this is happening and/or if I’m misinterpreting what’s happening. Are the binaries actually different? Why? And if so, what is CocoaPods doing differently in the bundle exec pod install when it’s done on an Intel vs M1 mac?", "username": "atecle" }, { "code": "", "text": "@atecle Welcome to the forums…Stretching my memory here a bit but I don’t think you can go from 4.x to 10.x directly. The underlying file format was changed after 5 (?) and you’ll need to let Realm Studio upgrade the file(s).Yes, the compiled binaries are different - are you attempting to build for the same iPhone on both the Intel as M1 Macs? Do they both have the current version of cocoapods installed? 1.11.3 at the moment.What does your podfile look like?", "username": "Jay" }, { "code": "pod installtarget 'main' do \n# realmswift and other pods\n target 'main_tests' do \n inherit_search_paths!\n some_test_pods\n end\n\n## not exactly the source, there are some conditions i'm not including\n post_install do |installer|\n installer.pods_project.targets.each do |target|\n target.build_configurations.each do |config|\n config.build_settings['ALWAYS_EMBED_SWIFT_STANDARD_LIBRARIES'] = 'YES'\n config.build_settings['ENABLE_BITCODE'] = 'NO'\n config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '14.0'\n end\n end\n\n installer.generated_projects.each do |project|\n project.targets.each do |target|\n target.build_configurations.each do |config|\n config.build_settings['DEVELOPMENT_TEAM'] = 'dev_team'\n end\n end\n end\n end\nend\n\n# a bunch of abstract targets\n", "text": "Hi @Jay - thanks for the reply and for the welcome Stretching my memory here a bit but I don’t think you can go from 4.x to 10.x directly. The underlying file format was changed after 5 (?) and you’ll need to let Realm Studio upgrade the file(s).I actually didn’t have any apparent issue with my Realm file created in 4.4.1 and migrating to 10.31.0. If there was a file format related issue, should I expect a crash on start up? Or would there be other issues? Everything seemed peachy - so not sure if there’s some failure path I’m missing.Yes, the compiled binaries are different - are you attempting to build for the same iPhone on both the Intel as M1 Macs? 
Do they both have the current version of cocoapods installed? 1.11.3 at the moment.I can confirm the same Cocoapods version was used , 1.11.3, but I think I was targeting a different simulator. On the Intel mac it was an iPhone 13 Pro on iOS 16 I believe, and the on the M1 it was an iPhone 12 running iOS 15.2.Could you help me understand why those compiled binaries are different? The Realm library ships with those compiled binaries, so I was curious how those differences come to be? I guess I might be operating on the wrong assumption that what you get from a pod install doesn’t depend on the architecture of your system/what iPhone you’re building to - but just the version of the library you’re installing?Not sure how helpful this is, but here’s a sketch of what the Podfile looks like (not sharing in full since it’s an employer’s), let me know if there’s anything in particular I could share that would be helpful:", "username": "atecle" }, { "code": "platform :ios, 'xx.0'\ntarget 'MyRealmProject' do\n use_frameworks!\n pod 'RealmSwift', '~>10'\nend\nrm -rf ~/Library/Developer/Xcode/DerivedData", "text": "Hmm. I was expecting a podfile more close to the documentation - was looking for other dependencies and pods/versionsThere were a LOT of changes going from 4.x to 10.x, with many depreciations and implementation differences. I am surprised it went smoothy, but that’s good to hear!It seems like there’s some kind of version difference - are you using the same verisons of XCode? Also, that derived data looks wonky… did you try removing it?rm -rf ~/Library/Developer/Xcode/DerivedDataAlso, have you done any changes to the build-in Ruby? Probably not related but more data will help narrow the issue.", "username": "Jay" } ]
Installing RealmSwift on an Intel mac installs different librealm-monorepo.a binary than when installed on an M1 Mac
2022-10-06T19:46:54.469Z
Installing RealmSwift on an Intel mac installs different librealm-monorepo.a binary than when installed on an M1 Mac
2,196
https://www.mongodb.com/…a_2_1024x464.png
[ "dot-net" ]
[ { "code": "", "text": "The MongoDB driver for C# automatically serializes all of the properties for the given document class. Every property with “BsonElement” gets serialized. One of the fields contains a JSON property bag. The Mongo driver actually serializes it into an object.Is it possible to somehow serialize all the fields as normal but then keep the field with the JSON property bag intact so that the JSON gets stored to MongoDB as JSON rather than some object. Ideally, I’d like to store it with the JSON intact:\n\nMongoSerialization1313×595 198 KB\n", "username": "Sam_Lanza" }, { "code": "BsonClassMap<T>IBsonSerializer<T>[BsonExtraElements]BsonDocumentIDictionary<string, object>IDictionary<string, object>IBsonSerializer<T>", "text": "Hi, @Sam_Lanza,Welcome to the MongoDB Community Forums. I understand that you’re trying to store a JSON property bag (contained in a C# object) in MongoDB as JSON.One point of confusion that I should clear up first. MongoDB stores BSON or “binary JSON”. BSON offers additional datatypes over traditional JSON and a more compact serialization format. In order to store the JSON property bag in MongoDB, it will have to be serialized to BSON.In this case the .NET/C# MongoDB Driver is serializing your JSON property bag to a BSON subdocument, which is then sent over the wire (as BSON) to the MongoDB server where it is stored as BSON. The .NET/C# Driver provides a variety of ways to control the serialization process including configuration via BSON attributes, BsonClassMap<T>, and implementing your own IBsonSerializer<T>.For example, you can apply [BsonExtraElements] attribute to a property of type BsonDocument or IDictionary<string, object> to hold any elements present in the database but without a corresponding C# property.Since a JSON property bag can be thought of as a dictionary of string/object pairs, you could represent it in your C# object model as an IDictionary<string, object>. You can then control its serialization through Diciontary Serialization Options.Lastly you can take full control of serialization/deserialization for a type by writing your own IBsonSerializer<T>. See Specifying the Serializer for how to configure a custom serializer.Hopefully this provides you with some ideas of how to control the serialization of your JSON property bag and allows you to achieve your desired database format.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it possible to configure Mongo driver to keep JSON format during serialization?
2022-10-07T17:25:49.535Z
Is it possible to configure Mongo driver to keep JSON format during serialization?
4,769
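The thread above is about the .NET driver, but the underlying choice is driver-agnostic: store the property bag as a parsed subdocument (queryable, but re-serialized as BSON) or as an opaque JSON string (preserved verbatim, but not queryable by key). A small PyMongo illustration; the collection and field names are assumptions:

# Illustration (PyMongo) of the two storage options discussed above.
import json

bag = {"color": "red", "sizes": [1, 2, 3]}

# Option 1: store it as a subdocument (BSON); individual keys become queryable,
# but the exact JSON text is not preserved.
coll.insert_one({"sku": "A-1", "properties": bag})

# Option 2: store the original JSON text verbatim as a string field.
coll.insert_one({"sku": "A-2", "properties_json": json.dumps(bag)})
# Reading it back: json.loads(doc["properties_json"])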
null
[ "queries", "python" ]
[ { "code": " {\n \"view.k1\": kwargs[\"taaaa\"],\n \"view.k2\": kwargs[\"jjjj\"],\n \"view.k3\": kwargs[\"jkjjkjk\"],\n \"view.k4\": True,\n \"view.k5\": {\"$ne\": kwargs[\"tktteam\"]},\n \"view.k6\"\": True,\n \"view.k7: {\"$in\": checklist}\n }\n", "text": "Hi Team,\nSo we have scenario which is taking long time as normal and now we want to tune that. So the issues is as follows:\nwe are querying mongo with huge array size (around 1k) in size and doing find on single collection. Our collection holds just 200 documents but query payload will keep on increasing . Its like mapping REAL time events to some queues so we prepare filters on the fly and hit find.Sample filter will have data like below.db.test.find({filter1},{filter2},{filter3},{filter4}, {fiter5},…{filter1000})Since we are sending huge payload over network due to which also lag will appear. Now I am looking for best way to handle this scenario. I don’t know much about mongo but I think caching shall work for me or may be break queries and do find concurrently.We are using PyMongo for now and this application is written very badly so we need this mapping to be very fast.Can you suggest best approach to handle this.", "username": "Anoop_Butola" }, { "code": "", "text": "The question is not clear for me, so what you want to do is search on collection which hold 200 documents several time? maybe sample document structure and sample synatx will help to understand more clearly", "username": "Kyaw_Zayar_Tun" }, { "code": "", "text": "I wanted to know is there any better way to query where we have many filters and each filter contains many fields.", "username": "Anoop_Butola" } ]
Best way to get all matching documents with huge JSON array as payload
2022-08-31T17:25:36.650Z
Best way to get all matching documents with huge JSON array as payload
1,953
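One way to keep individual requests small when the filter list keeps growing is to split the $or clauses into chunks and run the chunks concurrently, merging by _id. A sketch, assuming coll is the collection and filters is the list of filter documents; this is one possible mitigation, not the only one:

# Sketch (PyMongo): run chunks of a very large $or list in parallel threads.
from concurrent.futures import ThreadPoolExecutor

def chunked(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def find_all(coll, filters, chunk_size=100, workers=4):
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for docs in pool.map(lambda c: list(coll.find({"$or": c})),
                             chunked(filters, chunk_size)):
            for doc in docs:
                results[doc["_id"]] = doc  # de-duplicate across chunks
    return list(results.values())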
null
[ "java" ]
[ { "code": "ChangeStreamIterable<Document> changes =\n client.getDatabase(<DBNAME>)\n .watch(Collections.singletonList(\n Aggregates.match(Filters.in(\"ns.coll\", Arrays.asList(WATCHED_COLLECTIONS)))))\n .fullDocument(FullDocument.UPDATE_LOOKUP);\n", "text": "I’m watching 3 collections in a DB using the Java driver. Each of those collections has one document only, each of which have embedded documents. My client code looks like this:where the variable “WATCHED_COLLECTIONS” is an array of the 3 collection names that I want to watch.Since I’ve used the “match” stage, this filtering should be happening at the server side right?\nDespite that, in the mongo logs, I can see that ‘docsExamined’ is very high!Why would that be happening, since there’s only one document in each collection? Even if we count all the embedded documents it doesn’t come up to 11000 documents.Excerpt from mongo log below:COMMAND [conn20161] command DBNAME.$cmd command: getMore { getMore: 1760441711222280319, collection: “$cmd.aggregate”, $db: “DBNAME”, $clusterTime: { clusterTime: Timestamp(1590477125, 7396), signature: { hash: BinData(0, 17B8B1B3ADE3FEFC381F56E9201694DC9509BC38), keyId: 6829683829607759874 } }, lsid: { id: UUID(“f88e3593-bec6-47cc-a067-6042f36aa1a3”) } } originatingCommand: { aggregate: 1, pipeline: [ { $changeStream: { fullDocument: “updateLookup” } }, { $match: { ns.coll: { $in: [ “COLLECTION1”, “COLLECTION2”, “COLLECTION3” ] } } } ], cursor: {}, $db: “DBNAME”, $clusterTime: { clusterTime: Timestamp(1590160602, 2), signature: { hash: BinData(0, 39A22239ED8BA07ED1E8B710D4212AE8CDB52663), keyId: 6829683829607759874 } }, lsid: { id: UUID(“f88e3593-bec6-47cc-a067-6042f36aa1a3”) } } planSummary: COLLSCAN cursorid:1760441711222280319 keysExamined:0 docsExamined:11890 numYields:7138 nreturned:0 reslen:305 locks:{ ReplicationStateTransition: { acquireCount: { w: 7141 } }, Global: { acquireCount: { r: 7141 } }, Database: { acquireCount: { r: 7141 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 7141 } } } storage:{ data: { bytesRead: 14 } } protocol:op_msg 351ms", "username": "Murali_Rao" }, { "code": "", "text": "Trying to bounce this topic up in case anybody has any insights…", "username": "Murali_Rao" }, { "code": "", "text": "@Murali_Rao sorry for the late reply, but to help clarify this behavior Change Streams are special cursors that are opened against the oplog.As such, when the cursor is being advanced it is scanning the documents in the oplog for matches; not the collections themselves.", "username": "alexbevi" }, { "code": "", "text": "HI ,I am using .net drivers. I have few questions on the same topic.because in my case i am just watching static/master data collection to refresh cache. i dont want it to scan all the oplog when opened.thanks.", "username": "Sahi_kakkar" } ]
Change Stream Java driver COLLSCAN
2020-06-02T05:01:58.492Z
Change Stream Java driver COLLSCAN
3,453
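The options used in the Java snippet map one-to-one onto other drivers. A PyMongo sketch of the same database-level watch: with no resume or start token the stream begins at the current time rather than replaying old oplog entries, and persisting the resume token lets a restarted watcher continue where it left off. Collection names, db and the handle() callback are assumptions:

# Sketch (PyMongo) of the same change-stream filter as the Java code above.
WATCHED_COLLECTIONS = ["COLLECTION1", "COLLECTION2", "COLLECTION3"]
pipeline = [{"$match": {"ns.coll": {"$in": WATCHED_COLLECTIONS}}}]

resume_token = None
with db.watch(pipeline, full_document="updateLookup") as stream:
    for change in stream:
        resume_token = change["_id"]  # persist this to resume after a restart
        handle(change)                # application callback, assumed

# Later: db.watch(pipeline, full_document="updateLookup", resume_after=resume_token)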
null
[ "python", "field-encryption" ]
[ { "code": "Traceback (most recent call last):\n File \"D:\\Python\\project\\MongoDBtest\\make_data_key.py\", line 119, in <module>\n encrypted_db.create_collection(encrypted_coll_name)\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\_csot.py\", line 105, in csot_wrapper\n return func(self, *args, **kwargs)\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\database.py\", line 448, in create_collection\n return Collection(\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\collection.py\", line 232, in __init__\n self.__create(name, kwargs, collation, session, encrypted_fields=encrypted_fields)\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\collection.py\", line 313, in __create\n self._command(\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\collection.py\", line 285, in _command\n return sock_info.command(\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\pool.py\", line 766, in command\n return command(\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\network.py\", line 166, in command\n helpers._check_command_response(\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\helpers.py\", line 181, in _check_command_response\n raise OperationFailure(errmsg, code, response, max_wire_version)\npymongo.errors.OperationFailure: **Encrypted collections are not supported on standalone**, full error: {'ok': 0.0, 'errmsg': 'Encrypted collections are not supported on standalone', 'code': 6346402, 'codeName': 'Location6346402'}\n", "text": "I tried to use the queryable encryption function of MongoDB according to the official documents(Quick Start of Queryable Encryption), and copied the Complete Python Application example to run on my local computer.However, an error occurred during the compilation of the make_data_key.py.The error message is as follows:I wonder how to solve it.", "username": "yuhan_cai" }, { "code": "Traceback (most recent call last):\n File \"D:\\Python\\project\\MongoDBtest\\make_data_key.py\", line 122, in <module>\n encrypted_db.create_collection(encrypted_coll_name)\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\_csot.py\", line 105, in csot_wrapper\n return func(self, *args, **kwargs)\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\database.py\", line 448, in create_collection\n return Collection(\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\collection.py\", line 229, in __init__\n self.__create(_esc_coll_name(encrypted_fields, name), opts, None, session)\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\collection.py\", line 313, in __create\n self._command(\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\collection.py\", line 285, in _command\n return sock_info.command(\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\pool.py\", line 766, in command\n return command(\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\network.py\", line 166, in command\n helpers._check_command_response(\n File \"D:\\Python\\Python\\lib\\site-packages\\pymongo\\helpers.py\", line 181, in _check_command_response\n raise OperationFailure(errmsg, code, response, max_wire_version)\npymongo.errors.OperationFailure: BSON field 'create.clusteredIndex' is the wrong type 'object', expected types '[bool, long, int, decimal, double'], full error: {'ok': 0.0, 'errmsg': \"BSON field 'create.clusteredIndex' is the wrong type 'object', expected types '[bool, long, int, decimal, double']\", 'code': 14, 'codeName': 'TypeMismatch', '$clusterTime': {'clusterTime': 
Timestamp(1663769177, 6), 'signature': {'hash': b'\\xfa\\xb3_\\xbf^N\\xfd\\x80\\xc4\\x87\\xe4/\\xc5:\\xc7\\xf4\\xd7\\xb8\\xd2\\xc9', 'keyId': 7108772092992028684}}, 'operationTime': Timestamp(1663769177, 6)}\n", "text": "The previous problem has been solved, and now a new problem has been discovered.The error message is as follows:", "username": "yuhan_cai" }, { "code": "", "text": "I have the exact same error, have you figured out a solution to the problem yet?", "username": "Nicklas_Wurtz" }, { "code": "", "text": "Hello yuhan_cai (and Nicklas_Wurtz) and welcome to the Community!Queryable Encryption is not supported on standalones so you have to use either a replica set or sharded clusters, but it sounds like you got that part solved. The error you are seeing seems like it is coming from a server version that does not support clusteredIndex. What server version are you using?", "username": "Cynthia_Braund" }, { "code": "", "text": "Thank you Cynthia_Braund!I am using an Atlas Cluster\n\nimage989×80 3.95 KB\n", "username": "Nicklas_Wurtz" }, { "code": "", "text": "I’ve read a bit more about Queryable Encryption on MongoDB and figured out that since I’m using the free version which unly supports version 5.0 of MongoDB, I will not be able to use Queryable Encryption. It seems that the only supported version of MongoDB which can use the feature is 6.0+.It might be related to the error we are encounting, but I’m not sure", "username": "Nicklas_Wurtz" }, { "code": "", "text": "Hi Nicklas,Yes, that is the problem. The free tier will have support for 6.0 in the coming weeks. In the meantiime, you can use the Community edition of MongoDB if you want to test it out but you won’t be able to use Automatic Encryption.", "username": "Cynthia_Braund" }, { "code": "", "text": "Hello again Nickals,I forgot to include a link to the docs that have tutorials on how to use Explicit Encryption, which is what you’d be using for the Community edition if that you want to explore that. https://www.mongodb.com/docs/manual/core/queryable-encryption/fundamentals/manual-encryption/.Thanks,Cynthia", "username": "Cynthia_Braund" } ]
I found an error when trying to use Mongodb's queryable encryption
2022-09-20T06:04:06.710Z
I found an error when trying to use Mongodb&rsquo;s queryable encryption
3,621
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "{\n status : String,\n campaignId : mongooseId,\n}\n{\n audience : [String],\n status: String\n}\n", "text": "So, I have 2 queries and both of them are dependent on each other, let me explain:\nWhen I send a campaign I want the campaign to be marked as send and get added to queue.Schemas:\nQueue:Campaign:Problem:If I am saving the queue first and check for its successful execution then after that I updated the campaign status to “ongoing” to mark it as added to queue, if for some reason the campaign update query get failed I will be left with a document wasting memory.If I go the other way I may end up marking the campaign as “ongoing” but without any queue.I have encountered this problem so many times at so many different queries, please help me on how to overcome it.", "username": "Areeb_Ahmad" }, { "code": "", "text": "Hi @Areeb_Ahmad,If you have multi-document updates that depend on each other, you can use Transactions to ensure updates are consistent. I recommend reviewing the Production Considerations documentation for further information.Note: transactions require a replica set or sharded cluster deployment. You can create a single member replica set for testing in development, but I would recommend a full replica set deployment (three members) for a production deployment. You can also test transactions with a MongoDB Atlas Free Tier Cluster.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you so much, I just really can’t tell how much it helped, I was at a dead end when encountered this problem", "username": "Areeb_Ahmad" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to ensure 2 queries that depend on one another
2022-09-22T10:36:28.000Z
How to ensure 2 queries that depend on one another
1,670
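The thread above uses Mongoose, but the transaction shape is the same in any driver. A PyMongo sketch of the queue insert plus campaign update done atomically; database, collection and field names are assumptions, and a replica set or sharded cluster is required:

# Sketch (PyMongo): both writes commit or neither does.
def send_campaign(client, campaign_id):
    queue = client.mydb.queue
    campaigns = client.mydb.campaigns

    def callback(session):
        queue.insert_one({"campaignId": campaign_id, "status": "pending"},
                         session=session)
        campaigns.update_one({"_id": campaign_id},
                             {"$set": {"status": "ongoing"}},
                             session=session)

    with client.start_session() as session:
        session.with_transaction(callback)  # retries transient errors, then commits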
null
[ "queries", "text-search" ]
[ { "code": "", "text": "Hi everyone!Now i am working in a search bar where i have products with brands and models:query = { country, published: true, $or: [{product.brand.name}: this.req.query]}\nallpublications = this.publicationmodel.find(query)How can i perform it to make a “fuzzy query” in case someone misspell a word? Now i improve it with regex, but i don’t know if i can insert $text into it", "username": "Felipe_Lisperguer" }, { "code": "textfuzzytext", "text": "Hi @Felipe_Lisperguer thanks for the question and welcome to the MongoDB community!Supporting misspelled words is a great use case for Atlas Search. Among other relevance-based search features, the $search stage has a text operator which offers a fuzzy parameter that allows you to define how many character edits and different word variations to consider.To get started with Atlas Search, you’ll have to create a *search index*. Once you have created your search index, you can construct a $search query using the text operator with fuzzy enabled.You can also read more about the $search stage here.I hope this helps! Please do not hesitate to reply if you have any other questions.", "username": "amyjian" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it posible a "fuzzy find"?
2022-10-03T22:11:57.771Z
Is it posible a &ldquo;fuzzy find&rdquo;?
5,286
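A sketch of the $search query described above, written with PyMongo. It assumes an Atlas Search index named default that covers product.brand.name; user_input, country and the publications collection are placeholders:

# Sketch (PyMongo): fuzzy matching with Atlas Search; maxEdits allows up to
# two character edits per term, which tolerates common misspellings.
pipeline = [
    {"$search": {
        "index": "default",
        "text": {
            "query": user_input,                 # possibly misspelled brand/model
            "path": "product.brand.name",
            "fuzzy": {"maxEdits": 2, "prefixLength": 1},
        },
    }},
    {"$match": {"country": country, "published": True}},
    {"$limit": 20},
]
results = list(db.publications.aggregate(pipeline))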
null
[ "field-encryption" ]
[ { "code": "", "text": "I’m trying to understand Queryable Encryption.How will the equality operator work if the encryption generates random ciphertext each time?Or will the value get decrypted and stored in-memory for querying?", "username": "Raghu_c" }, { "code": "", "text": "Hi @Raghu_cOr will the value get decrypted and stored in-memory for querying?No value gets decrypted in the server or in transit at any time. It only gets decrypted in the client.To learn how it works, you might be interested in:Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks for the links. More things to learn.", "username": "steevej" } ]
How does equality operator in queryable encryption?
2022-10-04T07:42:39.816Z
How does equality operator in queryable encryption?
1,850
null
[ "aggregation", "queries", "node-js", "transactions" ]
[ { "code": "", "text": "I’m reading the docs about Transactions to make sure I don’t actually let any inconsistencies slip into my production data. Right now, I’m reading the section on Transactions and Read Concern. If I understand it correctly, the read concern defaults to “local” if I don’t set anything explicitly.What implications does this “local” read concern have for the transaction and consistency of my data? What I want to know specifically is if there are circumstances where a transaction could lead to partial updates or inconsistent data even though all the involved database operations are part of the same transaction/session. I imagine this could be caused because I am reading data that has actually been rolled back and execute my transactions with this wrong data.Are my worries here justified or do I understand the read concern wrong?", "username": "Florian_Walther" }, { "code": "", "text": "Can anyone help me? Maybe someone from the staff? This is important for my production database.", "username": "Florian_Walther" }, { "code": "\"local\"", "text": "Hi @Florian_WaltherWhat I want to know specifically is if there are circumstances where a transaction could lead to partial updates or inconsistent data even though all the involved database operations are part of the same transaction/session . I imagine this could be caused because I am reading data that has actually been rolled back and execute my transactions with this wrong data.Read concern is all about replica set, \"local\" and roll back refers to concepts in that area, not in transactions.MongoDB transaction follows the ACID property so partial uncommitted writes and inconsistent data should not be visible outside of that uncommitted transaction.Please also refer to the following documentation which may be useful:Can anyone help me? Maybe someone from the staff? This is important for my production database.The community forum users will try help as best as they can but please note that there is no SLA for responses. If it is urgent and/or affecting production heavily it may be better to subscribe to a support plan and create a support case which has dedicated SLAs (I have linked this to Cloud support as opposed to on-prem support as I believe you are utilising Atlas based off previous posts but please let me know if otherwise).Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "@Jason_TranThank you for your response. I wrote the second comment to avoid that the thread automatically gets closed.You said that the read concern is not related to transactions, then why does the documentation specifically talk about “Transactions and Read Concern”?For situations that require atomicity of reads and writes to multiple documents (in a single or multiple collections), MongoDB supports multi-document transactions.", "username": "Florian_Walther" }, { "code": "find()find()", "text": "Hi @Florian_Walther,What I want to know specifically is if there are circumstances where a transaction could lead to partial updates or inconsistent dataOutside of the transaction, you will not see an inconsistent state of the database as the transaction is done in an “all-or-nothing” manner.then why does the documentation specifically talk about “Transactions and Read Concern”?The Session.startTransaction() documentation contains further details specifically regarding readConcern:Optional. 
A document that specifies the read concern for all operations in the transaction, overriding operation-specific read concern.In addition to the above, the same page also describes atomicity with regards to transactions, perhaps more specific to your example:When a transaction commits, all data changes made in the transaction are saved and visible outside the transaction. That is, a transaction will not commit some of its changes while rolling back others.\nUntil a transaction commits, the data changes made in the transaction are not visible outside the transaction.In short, the options discussed changes how individual statements behave within the transaction. They do not affect statements outside the transaction. Due to the ACID guarantees of the transaction, statements outside the transaction would never see the database in an inconsistent state, no matter what read concern you’re using.However if you’re using the find() statement inside the transaction and the rest of your transaction makes decision based on the result of that find(), you might want to take a look at the blog post How To SELECT ... FOR UPDATE inside MongoDB Transactions | MongoDB Blog to see if this answers your concerns.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you for the article, I will read it!\nDo these guarantees work the same if my database was sharded?", "username": "Florian_Walther" }, { "code": "", "text": "Hi @Florian_Walther,Do these guarantees work the same if my database was sharded?Yep, the transaction guarantees work the same way for replica sets and sharded clusters.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you for your answers!", "username": "Florian_Walther" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
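For reference, the guarantees discussed in this thread look roughly like the following in the Node.js driver. This is a minimal sketch, not the poster's application code: the database, collection and field names ("bank", "accounts", "balance") are invented, and the read/write concern values are just the explicit equivalents of what the reply describes.

```js
const { MongoClient } = require('mongodb');

async function transferFunds(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const accounts = client.db('bank').collection('accounts');
      // Both writes commit together or not at all; readers outside this
      // session never observe only one of them.
      await accounts.updateOne({ _id: 'A' }, { $inc: { balance: -100 } }, { session });
      await accounts.updateOne({ _id: 'B' }, { $inc: { balance: 100 } }, { session });
    }, {
      readConcern: { level: 'snapshot' },   // overrides the "local" default for statements in the txn
      writeConcern: { w: 'majority' },
      readPreference: 'primary'
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}
```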
What implications does "local" read concern have for multi-document transactions?
2022-09-24T15:06:40.445Z
What implications does &ldquo;local&rdquo; read concern have for multi-document transactions?
2,839
null
[]
[ { "code": "logIn(credentials)Realm.handleAuthRedirect(); var user = await App.logIn(Realm.Credentials.apiKey(token););\n var client = user.mongoClient('mongodb-atlas');\n.currentUser.logOut();", "text": "Hi folksI need some help with setting up auth in my Realm app. I have gne through the available docs but they do not have what i am looking for.I am using Cloudflare Workers for serving the web app (its a simple forum like web app) - with Realm + Atlas for the database and Auth.My app needs to have Google & Apple as the signin methods.Based on all that i have read, this is my current understanding of doing auth;to fetch the data.\n6. To logout, I should call .currentUser.logOut(); on the clientside html/js.Is this overall flow correct?\nCan I redirect users on the same page instead of opening a new window?\nHow do I refresh tokens?", "username": "Rishav_Sharan" }, { "code": "", "text": "I am not able to edit my post, so adding one more thought here -\nit would be great if a sample app using cloudflare workers were there, which shows how authenticaion can be done for the normal web apps with html/js client code. The sample app\ncloudflare-worker-rest-api-atlas/index.ts at main · mongodb-developer/cloudflare-worker-rest-api-atlas · GitHub\nshows the api driven auth only, which is really not adequate for most use cases.", "username": "Rishav_Sharan" } ]
Need help with 3rd party auth on Realm app
2022-10-07T08:56:47.962Z
Need help with 3rd party auth on Realm app
1,009
null
[ "replication", "transactions", "containers" ]
[ { "code": "{\"t\":{\"$date\":\"2022-05-19T21:31:57.421+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":9200}}\nconst rsconf = {\n _id: \"rs0\",\n members: [\n {\n _id: 0,\n host: '127.0.0.1:27017',\n },\n ]\n};\n\nrs.initiate(rsconf);\nFROM mongo:latest\nCOPY mongo/mongo-conf/mongodb.key /mongodb.key\nRUN chmod 600 /mongodb.key\nRUN chown 999:999 /mongodb.key\n\n# To solve \"Unable to open() file /home/mongodb/.dbshell - source: https://github.com/docker-library/mongo/issues/323#issuecomment-494648458\nRUN mkdir \"/home/mongodb\"\nRUN touch \"/home/mongodb/.dbshell\"\nRUN chown -R 999:999 /home/mongodb\n\nENTRYPOINT [\"docker-entrypoint.sh\"]\nversion: '3.8'\nservices:\n # Mongodb database\n mongo:\n image: mongo-custom\n container_name: mongo\n command: --bind_ip_all --replSet rs0 --keyFile /mongodb.key\n # restart: unless-stopped\n environment:\n MONGO_INITDB_ROOT_USERNAME: mogoadm\n MONGO_INITDB_ROOT_PASSWORD: mogopwd\n ports:\n - \"27017:27017\"\n volumes:\n - ./mongo/database/mongodb/db:/data/db\n - ./mongo/mongo-init:/docker-entrypoint-initdb.d:ro\n", "text": "Hello,I’m trying to setup a mongodb replicaset as a single node in a docker container because I need to use the transactions to watch over changes in collections.The issue I’m having is that the replicaset seems to never finish to instanciate and the loog display the following line every seconds:I’ve tried every tutorials I’ve found, every stackoverflow issue, they all say the same thing, the configuration should be fairly simple and work with what I have now.Here’s my configurations:mongo-init.js (I removed the db and collection creation in hope to make it work)Dockerfile (I use a custom image because I’m on docker windows and I can’t set proper file permission on a file if I mount it in the container from the host. 
It’s a POC security is not a concern anyway)docker-compose.ymlI did read the official documentation and I see no functionnal difference, I’m really banging my head against a wall here trying to figure out what I did wrong?", "username": "Junn_Sorran" }, { "code": "", "text": "It is failing at rs.initiate step\nCheck your syntax.I see additional commas in your rsconfSample:\nconfig={\"_id\":“rs0”, “members”:[{\"_id\":0,“host”:“myhost:27017”}]}\nrs.initiate(config)", "username": "Ramachandra_Tummala" }, { "code": "const rsconf = {\n _id: \"rs0\",\n members: [\n {\n _id: 0,\n host: \"127.0.0.1:27017\"\n }\n ]\n}\nconst rsconf = {\n \"_id\": \"rs0\",\n \"members\": [\n {\n \"_id\": 0,\n \"host\": \"127.0.0.1:27017\"\n }\n ]\n}\nrs.initiate( {\n _id : \"rs0\",\n members: [\n { _id: 0, host: \"mongodb0.example.net:27017\" },\n { _id: 1, host: \"mongodb1.example.net:27017\" },\n { _id: 2, host: \"mongodb2.example.net:27017\" }\n ]\n})\n", "text": "Hello, thank for your anwser, as per your sample I removed the extra commas and tried with and without double quotes on the keys but this resulted in no changes in the error unfortunately.I tried:and :to no result.This is the sample config for the official documentation, I’m strugling to see what is wrong with my syntax compared to this:", "username": "Junn_Sorran" }, { "code": "", "text": "mongo-init.jsHow exactly this mongo-init.js is run?\nI have read in one of the post which uses docker setup there should be a sleep after running the rs.initiate().There should be a gap before running next scripts like connecting to mongo.Check our forum threadsAlso why don’t you try just rs.initiate() without any cfg variable\nIt will take default configuration", "username": "Ramachandra_Tummala" }, { "code": "/docker-entrypoint-initdb.d{\"t\":{\"$date\":\"2022-05-21T16:15:18.997+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:18.997+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:18.999+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"thread1\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:18.999+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:19.075+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"thread1\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:19.075+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"thread1\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:19.075+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"ns\":\"config.tenantMigrationDonors\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:19.076+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"ns\":\"config.tenantMigrationRecipients\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:19.076+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:19.077+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"mongo-sitac\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:19.077+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.8\",\"gitVersion\":\"c87e1c23421bf79614baf500fda6622bd90f674e\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:19.077+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:19.077+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"net\":{\"bindIp\":\"*\"},\"replication\":{\"replSet\":\"rs0\"},\"security\":{\"keyFile\":\"/mongodb.key\"},\"storage\":{\"dbPath\":\"/data/db\",\"journal\":{\"enabled\":true}}}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:19.083+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=5855M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.240+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1653149724:240959][1:0x7fc3a4f91c80], txn-recover: [WT_VERB_RECOVERY_ALL] Set global recovery timestamp: (0, 0)\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.241+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, 
\"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1653149724:241029][1:0x7fc3a4f91c80], txn-recover: [WT_VERB_RECOVERY_ALL] Set global oldest timestamp: (0, 0)\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.255+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":5172}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.255+00:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.279+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4366408, \"ctx\":\"initandlisten\",\"msg\":\"No table logging settings modifications are required for existing WiredTiger tables\",\"attr\":{\"loggingEnabled\":false}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.279+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.298+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22138, \"ctx\":\"initandlisten\",\"msg\":\"You are running this process as the root user, which is not recommended\",\"tags\":[\"startupWarnings\"]}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.298+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22178, \"ctx\":\"initandlisten\",\"msg\":\"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'\",\"tags\":[\"startupWarnings\"]}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.300+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.301+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.302+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":20997, \"ctx\":\"initandlisten\",\"msg\":\"Refreshed RWC defaults\",\"attr\":{\"newDefaults\":{}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.303+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/data/db/diagnostic.data\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.303+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.startup_log\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"fc624045-215e-4036-986b-91064e9f63b5\"}},\"options\":{\"capped\":true,\"size\":10485760}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.326+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"local.startup_log\",\"index\":\"_id_\",\"commitTimestamp\":null}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.327+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigStartingUp\",\"oldState\":\"ConfigPreStart\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.327+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not 
possible.\",\"nextWakeupMillis\":200}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.327+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280500, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to create internal replication collections\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.327+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.replset.oplogTruncateAfterPoint\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"d2489be1-fa14-4def-8a30-7b93eb5416a6\"}},\"options\":{}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.363+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"local.replset.oplogTruncateAfterPoint\",\"index\":\"_id_\",\"commitTimestamp\":null}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.363+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.replset.minvalid\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"b80857d3-271d-45ac-bdab-e5178caf07c5\"}},\"options\":{}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.393+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"local.replset.minvalid\",\"index\":\"_id_\",\"commitTimestamp\":null}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.394+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.replset.election\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"fb02e9bd-0ddd-4590-80ed-b65d641bc8dd\"}},\"options\":{}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.445+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"local.replset.election\",\"index\":\"_id_\",\"commitTimestamp\":null}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.445+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280501, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to load local voted for document\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.445+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21311, \"ctx\":\"initandlisten\",\"msg\":\"Did not find local initialized voted for document at startup\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.445+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280502, \"ctx\":\"initandlisten\",\"msg\":\"Searching for local Rollback ID document\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.445+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21312, \"ctx\":\"initandlisten\",\"msg\":\"Did not find local Rollback ID document at startup. 
Creating one\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.445+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.system.rollback.id\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"d2872c85-c392-4bf3-ab91-f2c13bbd1309\"}},\"options\":{}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.480+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"local.system.rollback.id\",\"index\":\"_id_\",\"commitTimestamp\":null}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.480+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21531, \"ctx\":\"initandlisten\",\"msg\":\"Initialized the rollback ID\",\"attr\":{\"rbid\":1}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.480+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21313, \"ctx\":\"initandlisten\",\"msg\":\"Did not find local replica set configuration document at startup\",\"attr\":{\"error\":{\"code\":47,\"codeName\":\"NoMatchingDocument\",\"errmsg\":\"Did not find replica set configuration document in local.system.replset\"}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.480+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigUninitialized\",\"oldState\":\"ConfigStartingUp\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.481+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.system.views\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"4243d9cb-9af0-48fc-8dc2-187c55ed53d8\"}},\"options\":{}}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.514+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"local.system.views\",\"index\":\"_id_\",\"commitTimestamp\":null}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.517+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20714, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Failed to refresh session cache, will try again at the next refresh interval\",\"attr\":{\"error\":\"NotYetInitialized: Replication has not yet been configured\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.517+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40440, \"ctx\":\"initandlisten\",\"msg\":\"Starting the TopologyVersionObserver\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.517+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20712, \"ctx\":\"LogicalSessionCacheReap\",\"msg\":\"Sessions collection is not set up; waiting until next sessions reap interval\",\"attr\":{\"error\":\"NamespaceNotFound: config.system.sessions does not exist\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.517+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40445, \"ctx\":\"TopologyVersionObserver\",\"msg\":\"Started TopologyVersionObserver\"}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.518+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.518+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"0.0.0.0\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.518+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for 
connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.528+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":400}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:24.928+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":600}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:25.528+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":800}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:26.329+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":1000}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:27.330+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":1200}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:28.532+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":1400}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:29.933+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":1600}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:31.535+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":1800}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:33.337+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":2000}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:35.339+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":2200}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:37.542+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until 
replica set is finished initializing.\",\"nextWakeupMillis\":2400}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:39.945+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":2600}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:42.548+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":2800}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:45.351+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":3000}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:48.354+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":3200}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:51.558+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":3400}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:54.962+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":3600}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:15:58.565+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":3800}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:16:02.370+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":4000}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:16:06.373+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":4200}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:16:10.578+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":4400}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:16:14.983+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read 
concern until replica set is finished initializing.\",\"nextWakeupMillis\":4600}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:16:19.588+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":4800}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:16:24.306+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1653149784:306812][1:0x7fc39c77f700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 55, snapshot max: 55 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1\"}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:16:24.393+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":5000}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:16:29.399+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":5200}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:16:34.605+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":5400}}\n\n{\"t\":{\"$date\":\"2022-05-21T16:16:40.010+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":5600}}\n", "text": "The mongo-init.js is ran by being mounted into /docker-entrypoint-initdb.d in the mongo container.\nI belive it’s automatically executed at database init when the container is started.I know it works because I use it for the database and collection creation with no issues (I commented thoses out for replicaset debug)I just tryed to put a sleep(10000) just after the rs.initiate(), but the container init didn’t even reach it, I belive the rs.initiate never ended.I tried the rs.initiate() without any configuration, unfortunatelly that didn’t change anything. (I also tried to put the config directly in the rs.initiate, in a const, var… All with no changes)Here’s the full log I get when I start the container:After that the mongo just log the same error and increase the time between retries.", "username": "Junn_Sorran" }, { "code": "", "text": "same error for me and i am struggling to find answer. 
It runs on Jenkins, but there is no way to get past this step even if you use &.", "username": "2ce61facf8ec21247c52628dc4a112e" }, { "code": "", "text": "Well, I never did find an answer to this. I’m almost convinced it was a configuration issue, since I managed to run a cluster with another docker compose file I found on GitHub (but with a different config, so it didn’t fit the project; the problem came back when I modified the config with what I wanted). Since it was a student project, I ended up using an online Atlas cluster instead and got no problems with it.", "username": "Junn_Sorran" } ]
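One way to narrow down issues like the one in this thread is to initiate and inspect the replica set by hand from inside the running container, using the root credentials from the compose file shown above. This is a hedged troubleshooting sketch, not a confirmed fix.

```sh
docker exec -it mongo mongosh -u mogoadm -p mogopwd --authenticationDatabase admin \
  --eval 'try { rs.status() } catch (e) { rs.initiate() }'

# Once rs.status() reports a PRIMARY, the "Cannot use non-local read concern"
# messages stop. If rs.initiate() hangs, check that the host in the config
# (127.0.0.1:27017 here) is reachable from inside the container.
docker exec -it mongo mongosh -u mogoadm -p mogopwd --authenticationDatabase admin \
  --eval 'rs.status().members.map(m => m.stateStr)'
```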
Single node replica set never finishes instantiating; Error: "Cannot use non-local read concern until replica set is finished initializing"
2022-05-19T21:58:29.492Z
Single node replica set never finishes instantiating; Error: &ldquo;Cannot use non-local read concern until replica set is finished initializing&rdquo;
16,918
null
[ "replication" ]
[ { "code": "", "text": "I have a question about the fail-over of the replica set.\nReplica set consists of primary (db1)-secondary (db2)-arbiter(db3).It makes sense that primary (db1) goes down and secondary (db2) switches to new primary. But when I start the old primary (db1), it joins the replica-set as the secondary, and then automatically switches to the primary. (of course, the new primary(db2) became secondary again)Is this a normal behavior?", "username": "minjeong.bahk" }, { "code": "db1priorityrs.conf()db2db1", "text": "Hi @minjeong.bahk,In a default configuration all electable members of the replica set have the same priority so a former primary rejoining a replica set will not trigger an extra election after it has caught up.It sounds like you have configured a higher priority for db1, which would result in the behaviour you observed. Have a look at the priority values set in your rs.conf() to confirm.Another possibility would be coincidental timing of db2 being unavailable which would also lead to db1 becoming primary again. If this is the case you should be able to search your MongoDB server logs for REPL log lines that will confirm the events from the perspective of each replica set member.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you for your help!\nIt was confirmed that each priority was 1 and 0.5. When I changed the priority and tested it again, it remained secondary!!", "username": "minjeong.bahk" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Automatically switch to primary on fail-back
2022-10-07T06:00:02.326Z
Automatically switch to primary on fail-back
1,741
null
[ "server" ]
[ { "code": "", "text": "hi there!\nI’ve an issue while installing MongoDB in my Mac m1chip 2020.I run:\nbrew services start mongodb-community\nand got error:\nPermission denied @ rb_sysopen - /Users/pg/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistI try to fixed by:\nsudo chown $(whoami) ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistbut this is return:\nchown: /Users/pg/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist: No such file or directorycan anyone please help me with this.", "username": "Jettapat_Thongsima" }, { "code": "If MongoDB is installed on macOS via Homebrew, the default data directory depends on the type of processor in the system./opt/homebrew/var/mongodb", "text": "Where is your data directory set ?\nIf MongoDB is installed on macOS via Homebrew, the default data directory depends on the type of processor in the system. For your Apple M1 Processor you should change ownership of the /opt/homebrew/var/mongodb directory. Or review which permissions are currently set.", "username": "Tin_Cvitkovic" } ]
Permission denied @ rb_sysopen when start MongoDB
2022-10-07T07:35:01.818Z
Permission denied @ rb_sysopen when start MongoDB
2,453
null
[ "connector-for-bi" ]
[ { "code": " \"recordData\" : {\n \t\t\"subject\" : {\n \t\t\t\"txtFirstName\" : \"Harley\",\n \t\t\t\"txtLastName\" : \"Holland\",\n \t\t\t\"dtBirthDate\" : \"1942-06-14\",\n \t\t\t\"txtCity\" : \"Bedrock\",\n \t\t\t\"txtState\" : \"Idaho\",\n \t\t\t\"selGender\" : \"Male\"\n \t\t}\n }\n- Name: recordData\n MongoType: bson.document\n SqlName: recordData\n SqlType: varchar\n", "text": "Is it possible to configure a drdl file so that mongosqld will convert an object to JSON and present it as varchar?For example if this name/value is part of a document in Mongo:I’d like to create a drdl entry that would convert the whole objecte to a json string within one sql column.I can’t find a list of supported types in drdl. In the example below I’ve tried both object and string in place of bson.document and none work.Any pointers would help.", "username": "Mike_Kinney" }, { "code": "$objectToArray$map\"key\":\"value\"$reduce{$set: {recordData: {$concat: [\n '{\"subject\":{',\n {$reduce: {\n input: {$objectToArray: '$recordData.subject'},\n initialValue: '',\n in: {$concat: [\n '$$value',\n {$cond: {if: {$eq: ['$$value', '']}, then: '', else: ','}},\n '\"', '$$this.k' ,'\":\"', {$toString: '$$this.v'}, '\"'\n ]}\n }},\n '}}'\n]}}}\n$switch$type", "text": "You can apply aggregate on the field using $objectToArray then $map each records into \"key\":\"value\" string then $reduce them into one big string.There are more universal processing by using $switch, $type, etc… But I will skip them in order to make this answer short.", "username": "Billy_Bui" } ]
MongoSQLd - DRDL to convert object to json string in SQL column
2021-02-06T00:36:04.922Z
MongoSQLd - DRDL to convert object to json string in SQL column
4,101
null
[]
[ { "code": "", "text": "Update is not supported on fields with special characters in Data Explorer.Can anyone help?", "username": "fa26aaebc63537e60602cc6be13297d" }, { "code": "mongosh", "text": "Hi @fa26aaebc63537e60602cc6be13297d - Welcome to the community.Can you provide some steps to reproduce this issue? Please include any example documents and the type of update you were trying to perform.Based off the error message, you can try performing the update via mongosh or MongoDB Compass to see if the same update works on there.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Using MongoDB AtlasIf I create a new document with string: string (e.g. [email protected]: hello) and try to update this document, I get that error.", "username": "fa26aaebc63537e60602cc6be13297d" }, { "code": "", "text": "Update: Just realised this issue is with the PERIOD symbol (.)\nAny field created with it, I am unable to update that fieldStill testing on Atlas", "username": "fa26aaebc63537e60602cc6be13297d" }, { "code": "", "text": "Further update: This issue does not happen when using MongoDB Compass", "username": "fa26aaebc63537e60602cc6be13297d" }, { "code": "", "text": "Hi @fa26aaebc63537e60602cc6be13297d,I tried to reproduce this error but could not. Could you list step by step how this error is generated?Please see my reproduction steps below which did not generate the error:Empty collection:\n\nimage1824×706 45.3 KB\nInsert the document with the string you had provided:\n\nimage1150×1010 28.8 KB\n\nimage1812×720 59.8 KB\nUpdate the document:\n\nimage1830×360 34.5 KB\nDocument successfully updated:\n\nimage1864×370 20.7 KB\nNote: I tried to leave the period in the document to try generate the error on updateRegards,\nJason", "username": "Jason_Tran" } ]
Error with MongoDB Atlas
2022-10-04T16:53:22.021Z
Error with MongoDB Atlas
2,014
null
[ "node-js", "mongoose-odm" ]
[ { "code": "const Form = new Schema({\n schema: { type: Object, required: true }, // This is a form schema unrelated to mongo\n title: { type: String, required: false },\n subtitle: { type: String, required: false },\n acceptingResponses: { type: Boolean, required: false },\n userSubmissions: [ // Ends up being an array with objects each containing a key and following this schema\n new Schema({\n status: { type: String, required: true },\n response: { type: Object, required: true },\n respondant: { type: Object, required: true },\n },{\n _id: true,\n required: true\n })\n ]\n},{\n strict: false\n});\n{ type: Array, required: true }", "text": "I have spent the past hour or two puzzled over why my data doesn’t seem to be coming from the database. I’ve narrowed it down to being the result of the schema, but I’m stumped as to how I get around that.Here’s what happens:\nWhen I specify the schema for an array of objects (may be doing this wrong), that particular key and all the data with it get stripped from any query result. When I remove this from the schema, it suddenly works again.Here’s why I can’t just keep it out: I need an ObjectId generated for all objects in the array, and only way I can do that without manually supplying one is by using a subdocument schema. I’ve tried reading into documentation, looking this up various ways, looking at examples, and none of it seems to be giving me the solution I’m looking for. I can insert data just fine and validation passes, but it just gets removed when trying to read it.Schema:I’ve also tried setting userSubmissions to { type: Array, required: true } with the same issue.", "username": "Randall_Barker" }, { "code": "userSubmissions\"it\"Form", "text": "Hi @Randall_Barker,\nI tried to reproduce your issue, but the validation got failed while inserting can you help me by sharing the following details in order to better diagnose the issue?I can insert data just fine and validation passes, but it just gets removed when trying to read it.I am assuming you mean the array of userSubmissions by \"it\" here, is that correct?\nCan you also share the commands you use to create and query the Form documents or a self-contained code example that reproduces this issue?If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "models.Forms.updateOne({_id: formId},{$push: {userSubmissions: responseData}}).then(resolve).catch(reject);\n\"userSubmissions\": [{\n \"respondant\": {\n \"id\": \"62436ac8a19e9b08e18dab88\",\n \"userId\": 12345,\n \"username\": \"######\"\n },\n \"status\": \"pending\",\n \"response\": {\n \"question1\": [\"answer3\",\"answer1\",\"answer7\"],\n \"question2\": \"answer4\",\n \"question3\": \"being translucent\"\n }\n }],\n", "text": "I may have left this detail out that may have been very important. Insertion is done on a push of data that was manually entered, not entered through mongoose. Data is pushed into the userSubmissions array which may be skipped on validation, unsure of this behavior.After one submission, the array looks as follows:", "username": "Randall_Barker" }, { "code": "", "text": "For whatever reason, I cannot edit my reply.\nMy wording may have been slightly confusing. The document was created through manual entry into MongoDB Compass. Responses are pushed into the array. 
If this isn’t validated when being pushed in, that may very well explain why I’m not receiving any error.", "username": "Randall_Barker" }, { "code": "\"schema\"\"schema\"\"schema\"\"schema\"\"formSchema\"\"Schema\"", "text": "Hi @Randall_Barker,\nI have reproduced the issue and it seems like mongoose indeed doesn’t return the array when a key named \"schema\" exists in the document.\nI have raised a similar issue a few days back about the \"schema\" key causing failed validations even with valid documents. And this issue has already been added to v6.6.6 milestones, check out the following to learn more:### Prerequisites\n\n- [X] I have written a descriptive issue title\n- [X] I hav…e searched existing issues to ensure the bug has not already been reported\n\n\n### Mongoose version\n\n6.2.2\n\n### Node.js version\n\n16.14.2\n\n### MongoDB server version\n\n4.4.6\n\n### Description\n\nHi Team,\nI have observed a bug in the schema class/validator provided by mongoose. When a schema definition contains a key named: `schema` and another key containing an array of documents, the validation gets failed even with completely valid documents.\n\n\n### Steps to Reproduce\n\n#### Step 1. Defining a schema that contains a key named `schema` and another key that embeds an array.\n```javascript\nconst AuthorSchema = new Schema({\n fullName: { type: \"String\", required: true },\n});\n\nconst BookSchema = new Schema({\n schema: { type: \"String\", required: true },\n title: { type: \"String\", required: true },\n authors: [AuthorSchema],\n});\n\nconst Book = model(\"book\", BookSchema);\n```\n\n#### Step 2. Insert a document in the collection\n```javascript\n const book = await Book.create({\n schema: \"design\",\n authors: [{ fullName: \"Sourabh Bagrecha\" }],\n title: \"The power of JavaScript\",\n });\n```\n\n#### Step 3. Which throws the following error:\n```bash\nmongoose/node_modules/mongoose/lib/document.js:3055\n this.$__.validationError = new ValidationError(this);\n ^\n\nValidationError: book validation failed: authors: Cast to Array failed for value \"[ { fullName: 'Sourabh Bagrecha' } ]\" (type Array) at path \"authors\" because of \"TypeError\"\n at model.Document.invalidate (/Users/sourabh/Work/repro/mongoose/node_modules/mongoose/lib/document.js:3055:32)\n ...call stack clipped...\n {\n errors: {\n authors: CastError: Cast to Array failed for value \"[ { fullName: 'Sourabh Bagrecha' } ]\" (type Array) at path \"authors\" because of \"TypeError\"\n at model.$set (/Users/sourabh/Work/repro/mongoose/node_modules/mongoose/lib/document.js:1417:9)\n ...call stack clipped... \n {\n stringValue: `\"[ { fullName: 'Sourabh Bagrecha' } ]\"`,\n messageFormat: undefined,\n kind: 'Array',\n value: [ { fullName: 'Sourabh Bagrecha' } ],\n path: 'authors',\n reason: TypeError: doc.schema.path is not a function\n at new MongooseDocumentArray (/Users/sourabh/Work/repro/mongoose/node_modules/mongoose/lib/types/DocumentArray/index.js:60:47)\n ...call stack clipped...\n valueType: 'Array'\n }\n },\n _message: 'book validation failed'\n}\n```\nNote that the validation error does not occur when:\n- The field schema is removed\n- The field schema is renamed to Schema (uppercase S)\n- The array field is removed\n\nThe validation error occurs only when both the schema field AND the array field exist in the schema\n\n#### Step 4. 
Now if we update the schema by uppercasing the key `Schema` from lowercased `schema`\n```javascript\n const BookSchema = new Schema({\n Schema: { type: \"String\", required: true },\n// ^^^^^^^\n title: { type: \"String\", required: true },\n authors: [AuthorSchema],\n });\n\n const book = await Book.create({\n Schema: \"design\",\n// ^^^^^^^\n authors: [{ fullName: \"Sourabh Bagrecha\" }],\n title: \"The power of JavaScript\",\n });\n```\n\n#### Step 5. It will work completely fine if schema was spelled differently (e.g. with an uppercase S, i.e.: `Schema`)\n```javascript\n{\n \"Schema\": \"design\",\n \"title\": \"The power of JavaScript\",\n \"authors\": [\n {\n \"fullName\": \"Sourabh Bagrecha\",\n \"_id\": new ObjectId(\"633343a2396c4534d5a70e19\")\n }\n ],\n \"_id\": new ObjectId(\"633343a2396c4534d5a70e18\"),\n \"__v\": 0\n}\n```\n\n\n### Expected Behavior\n\nI understand that `schema` is a reserved keyword that can be used to fetch the document's schema.\n<img width=\"200\" alt=\"image\" src=\"https://user-images.githubusercontent.com/27056663/192612592-e80a56b5-4a42-4a1b-8d72-db6ca2d67432.png\">\n\nBut in that case, it should throw a warning(compile-time) while we are specifying the `schema` key in our schema definition. Also, the validation is only throwing the error, when we are specifying an embedded array of documents in our definition. In all the other cases, having the `schema` key in our definition doesn't throw any validation error at the time of insert/update.\n\nThe expected behavior IMO would be a consistent error irrespective of the embedded array.However, when I change the key name: \"schema\" to something else, it starts working perfectly fine.\nTherefore, as a quick fix if you could change the key name \"schema\" to \"formSchema\" or uppercased \"Schema\", it will work as expected.Also, please avoid skipping mongoose’s schema validation process, and don’t insert documents manually unless that’s really needed.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
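A sketch of the quick fix suggested at the end of this thread, reusing the thread's own definition with the conflicting key renamed; only the formSchema rename is the illustrative part.

```js
const { Schema } = require('mongoose');

const Form = new Schema({
  formSchema: { type: Object, required: true },   // was: schema
  title: { type: String },
  subtitle: { type: String },
  acceptingResponses: { type: Boolean },
  userSubmissions: [
    new Schema(
      {
        status: { type: String, required: true },
        response: { type: Object, required: true },
        respondant: { type: Object, required: true },
      },
      { _id: true }   // each pushed submission gets its own ObjectId
    ),
  ],
}, { strict: false });
```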
Why does Mongoose remove the key when specified in the schema?
2022-09-25T05:20:05.412Z
Why does Mongoose remove the key when specified in the schema?
4,880
null
[ "queries", "atlas-search" ]
[ { "code": "", "text": "HelloI have a collection named Projects, with an integer field Details.Situation. I was performing queries using the find operator, but, since we needed to improve our search mechanisms, we’re trying to migrate to Atlas Search instead.\nOne of the remaining problems, which I didn’t figure out how to solve it, is how to perform the query “Details.Situation”: { “$in”: [ 0, 1, 3, 5, 6] } in the $search pipeline.I’ve already tried to create the index as number, and string data type (with lucene.keyword analyzers)Does anyone have the idea how to perform this query?About converting this field, this collection contains 25M objects, so, I’d prefer to avoid it…Thanks for your attention\nJeferson Luis Soares", "username": "Jeferson_Soares" }, { "code": "Details.SituationDetails.Situationcompound.(filter|should).rangeDetails.Situation$search.compound.filter.range(options, path: \"Details.Situation\")$in", "text": "Hi @Jeferson_Soares thanks for the question. Depending on the details, it could be your lucky day. To help, we may need information like your index definition and a sample document. I will do my best here to answer with limited back-and-forth.Situation A: Details.Situation is a single numeric value, but you want to check to see if any of the integers exists as a value for Details.Situation. If that’s the case, you can use compound.(filter|should).range to match on these criteria.Situation B: Details.Situation is a multi-value numeric field. In other words, it’s an array/list of numeric values and you want to match if one number in the query appears in documents.We will begin rolling out the ability to index numbers in arrays this week or next at the latest (barring disaster). If you could send an email to [email protected], we will add you to the list of the first customers to get access to the capability.Then, for your reference, you will want to use (pseudo-code) $search.compound.filter.range(options, path: \"Details.Situation\") to get exactly what you are looking for in terms of $in functionality.I am curious about why casting to a string did not work as well. In any event, an index definition and sample doc could be helpful.Here’s the docs for this issue: https://www.mongodb.com/docs/atlas/atlas-search/compound/#mongodb-data-filter", "username": "Marcus" }, { "code": "{\n \"_id\" : NUUID(\"0a2ac404-f46b-4111-a857-0106a4791233\"),\n \"Details\" : {\n \"Customer\" : {\n \"Name\" : \"Fake Customer\",\n \"Id\" : NUUID(\"98561cfc-7096-48c7-aecf-7462630c007d\")\n },\n \"ModifiedOn\" : ISODate(\"2020-04-14T13:27:59.738Z\"),\n \"Name\" : \"Kitchen design\",\n \"Responsible\" : {\n \"Name\" : \"Fake Responsible\",\n \"Id\" : \"123456789\"\n },\n \"Situation\" : 0,\n \"Visibility\" : 1\n },\n \"AccountId\" : \"0001\",\n}\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"AccountId\": {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"Details\": {\n \"fields\": {\n \"ModifiedOn\": {\n \"type\": \"date\"\n },\n \"Name\": {\n \"maxGrams\": 10,\n \"minGrams\": 3,\n \"type\": \"autocomplete\"\n },\n \"Situation\": {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"Visibility\": {\n \"representation\": \"int64\",\n \"type\": \"number\"\n }\n },\n \"type\": \"document\"\n },\n }\n }\n}\n", "text": "Hi @MarcusFirst of all, thanks for your answer. 
it came sooner than expected!Related to my question, the scenario is the one you described on Situation A, and here comes a sample document:I didn’t get how to use the range filter, since the possible operators are lt, lte, gt and gte, and we do not have the selected values in a range - sometimes some values are skipped.Here is a sample of the search index:I’ve already used the definition of the field Details.Situation as defined to the Details.Visibility field.", "username": "Jeferson_Soares" }, { "code": "ltegtegte:3lte:3range/* never use eval in JS did they remove that yet? */\nfunction evaluate(input){\n if( input <= 3 && input >=3){\n return true;\n } else {\n return false;\n }\n}\n\nevaluate(3)\n// would return true\nevaluate(1)\n// would return false\nevaluate(5)\n// would return \n{\n \"$search\": {\n \"index\": \"faking_it\", \n \"range\": {\n \"path\": \"Details.Situation\",\n \"gte\": 3,\n \"lte\": 3\n }\n }\n}\n", "text": "Ahh, that makes sense. The API could improve there, for sure. It will soon.The easiest way to do equality today with range is to have a combination of lte and gte for a given value. For example, if you want to match 3, consider adding gte:3 and lte:3 as parameters to your range query. In this example, the only possible number is 3.Here’s some code for illustrative purposes in JavaScript of how it would behave under the hood so you can test it out if you’d like:for an example from our docs and API consider:Let me know if this helps.", "username": "Marcus" }, { "code": " \"should\": [\n { \"range\": {\"path\": \"Details.Situation\", \"gte\": 1, \"lte\": 1 } },\n { \"range\": {\"path\": \"Details.Situation\", \"gte\": 3, \"lte\": 3 } },\n { \"range\": {\"path\": \"Details.Situation\", \"gte\": 5, \"lte\": 5 } },\n { \"range\": {\"path\": \"Details.Situation\", \"gte\": 6, \"lte\": 6 } }\n ],\n \"minimumShouldMatch\": 1\n", "text": "Hey @MarcusI’ve written the query this way:It worked, but I was looking for something more elegant.Is this the current way of doing this query? Or the way I’d written it could be improved?Please let me know if there are plans of developing a better way.Thanks again", "username": "Jeferson_Soares" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
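Putting the accepted workaround from this thread into a complete pipeline for reference; the index name "default" is an assumption, and Details.Situation must be indexed as a number for range clauses to apply.

```js
db.Projects.aggregate([
  {
    $search: {
      index: "default",               // hypothetical index name
      compound: {
        should: [
          { range: { path: "Details.Situation", gte: 0, lte: 0 } },
          { range: { path: "Details.Situation", gte: 1, lte: 1 } },
          { range: { path: "Details.Situation", gte: 3, lte: 3 } },
          { range: { path: "Details.Situation", gte: 5, lte: 5 } },
          { range: { path: "Details.Situation", gte: 6, lte: 6 } }
        ],
        minimumShouldMatch: 1
      }
    }
  },
  { $limit: 10 }
])
```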
How to perform an $in query in an integer field inside a $search stage
2022-10-03T20:00:21.321Z
How to perform an $in query in an integer field inside a $search stage
3,259
null
[ "aggregation", "mongoose-odm" ]
[ { "code": "let logs = await this.profileModel.aggregate([\n {\n $match: {\n bindedBanque: name,\n transactionDate: { $gte: startDate, $lt: endDate },\n },\n },\n {\n $lookup: {\n from: 'tpes',\n localField: 'nameUser',\n foreignField: 'merchantName',\n as: 'tpesBySite',\n },\n },\n {\n $lookup: {\n from: 'logs',\n localField: 'tpesBySite.terminalId',\n foreignField: 'terminalId',\n as: 'logsByTpes',\n },\n },\n { $unwind: '$tpesBySite' },\n\n { $unwind: '$logsByTpes' },\n {\n $project: {\n // bindedSuperAdmin: '$bindedSuperAdmin',\n // bindedBanque: '$bindedBanque',\n // bindedClient: '$bindedClient',\n\n snTpe: '$tpesBySite.sn',\n terminalId: '$tpesBySite.terminalId',\n\n transactionDate: '$logsByTpes.transactionDate',\n transactionTime: '$logsByTpes.transactionTime',\n\n outcome: '$logsByTpes.outcome',\n },\n },\n {\n $group: {\n _id: { bank: '$logsByTpes.outcome' },\n count: { $sum: 1 },\n },\n },\n ]);\n console.log(logs);\n\n return logs;\n async getLogsByDate(startDate, endDate) {\n let data = await this.logModel.aggregate([\n { $match: { transactionDate: { $gte: startDate, $lt: endDate } } },\n {\n $group: {\n _id: { _id: '$outcome' },\n count: { $sum: 1 },\n },\n },\n ]);\n\n const computedValue = data.map((data) => {\n return { name: data._id._id, value: data.count };\n });\n console.log('computedValue', computedValue);\n\n return computedValue;\n }\n\n", "text": "I’m working with Nestjs graphqlI checked the data type of the field was Date and the input was DateBy the way it worked with the function belowI really stuck any one could help me", "username": "skander_lassoued" }, { "code": "startDateendDateconsole.log(typeof startDate)console.log(typeof endDate)", "text": "Hi @skander_lassoued,\nA similar issue was fixed by wrapping the input date into a JavaScript Date Object like this:But since you mentioned the following already:I checked the data type of the field was Date and the input was DateIn order to better analyze your situation can you provide some sample documents from the profile collection?\nAlso, can you please provide the startDate and endDate values with their actual formats and values by console logging their types using console.log(typeof startDate) & console.log(typeof endDate)?If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I want to get data between two dates with mongoose
2022-09-26T00:21:30.605Z
I want to get data between two dates with mongoose
1,714
https://www.mongodb.com/…_2_1024x768.jpeg
[ "sharding" ]
[ { "code": "", "text": "0I want to shard a collection with data. When I try with sh.shardCollection(“myDb.myCollection”, {id:“hashed”}) then this collection shard but it’s not spread to the whole shards. only spread to the primary shard. for example,\n\ngh1920×1440 121 KB\nIn this picture when I shard the empty collection it’s split into 4 chunks. But the previously created collection, when going to shard it, is split into one chunk(primary shard)My question is how correctly shard a collection with data in MongoDB. Have any other alternative way?", "username": "Lakshan_Amal" }, { "code": "Sharding a Populated Collectionmongosmongoddb.getSiblingDB(\"myDb\").myCollection.getShardDistribution()sh.enableSharding(\"myDb\")\nsh.shardCollection(\"myDb.myCollection\", { _id : 1 })\n", "text": "Hi @Lakshan_Amal ,Welcome to The MongoDB Community Forums! As per this documentation on Sharding a Populated CollectionIf you shard a populated collection using a hashed shard key:Your collection only has 1 chunk, and therefore can’t be divided among the shards. The Default maximum size for a chunk is 64MB. Depending on the version of MongoDB either the mongos or the mongod will call for a split when it realizes that a significant fraction of the maximum chunk size has been written to a chunk or you can manually split the chunk using sh.splitAt.You can also run below query to get information about the number and size of documents/chunks on each shard.db.getSiblingDB(\"myDb\").myCollection.getShardDistribution()Also if you are using any version of MongoDB below MongoDB 6.0 then you must enable sharding on database and on collection level:Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Thank you for your answer. I got your point. Very thankful to you.", "username": "Lakshan_Amal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to shard a collection with data inside a sharded MongoDB cluster
2022-10-03T05:38:10.060Z
How to shard a collection with data inside a sharded MongoDB cluster
1,931
null
[ "replication" ]
[ { "code": "", "text": "I’m currently working on a replica set user-managed MongoDB for Alteryx software. The current replica set up is:", "username": "Flouncy_Mgoo" }, { "code": "", "text": "Welcome to the MongoDB community @Flouncy_Mgoo !A replica set member that is stale no longer has a common oplog point that would allow it to automatically sync from another member of the replica set.You will have to Re-sync the stale replica set member to fix this issue.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Secondary node of replica set has become stale Mongo 4.0
2022-10-06T14:47:46.727Z
Secondary node of replica set has become stale Mongo 4.0
1,028
null
[ "replication", "cloud-manager" ]
[ { "code": "", "text": "I’m trying to determine the best way to perform maintenance on the underlying system that my replica set members are running on. I’m defining “best” as minimal downtime while also minimizing unnecessary complications.My current setup is four mongods total, three of which are active replica set members and one of which is a hidden secondary (votes 0, priority 0). All are running on EC2 instances. They are all managed by Cloud Manager.In order to perform system upgrades to the operating system (Ubuntu 18.04), it will be necessary to reboot each instance. This is the procedure I have used in the past without issue:I am doing things this way in order to keep the number of voting nodes to an odd number (each replica set node has votes: 1, priority: 1, except the hidden secondary, which is normally votes: 0, priority: 0) during any time that a server will be offline.My coworker feels that this procedure is overly complicated, and that we can just do the following:I’m concerned with issues that may arise from the number of votes being an even number during the time when a replica set member is offline, but maybe I’m just being overly cautious.What’s the best procedure here? If it isn’t one of the above options, how do you handle system updates with your replica set?", "username": "laser" }, { "code": "", "text": "Hi @laser,Your coworker’s procedure is the recommended approach. Reconfiguring for maintenance is unnecessary as long as you ensure you always have a majority of voting members available. This general approach is described in Your Ultimate Guide to Rolling Upgrades | MongoDB Blog.If you have a MongoDB 4.4+ deployment, you may also want to look into adjusting Mirrored Reads to help reduce the performance impact of restarting the primary during planned maintenance. Mirrored reads pre-warm the caches of electable secondary replica set members by sending a configurable sample of supported query operations from the primary.I’m concerned with issues that may arise from the number of votes being an even number during the time when a replica set member is offline, but maybe I’m just being overly cautious.When a replica set member is offline, the number of configured voting members does not change. The motivation for having an odd number of voting members is to increase fault tolerance during periods where 1 (or possibly more) replica set member are not available due to maintenance, connectivity, or other scenarios.Your reconfiguration from 3 voting members to 2 voting members for maintenance would be unnecessary as the majority votes required to elect or sustain a primary would be 2 in both cases.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Best way to perform system updates on replica set hosts
2022-10-06T18:45:13.248Z
Best way to perform system updates on replica set hosts
1,976
null
[ "security" ]
[ { "code": "", "text": "While Connecting to MongoDB using tls certificates we are facing the issueCommand issued:sudo mongo --tls --host 127.0.0.1 --tlsCAFile /etc/ssl/self/root_self_CA.pem --tlsCertificateKeyFile /etc/ssl/self/mongodb_client.pem --tlsCertificateKeyFilePassword admin@123 --tlsAllowInvalidCertificates -u mongouser -p --authenticationDatabase adminError Log:{“t”:{\"$date\":“2022-10-06T04:23:17.891-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:22988, “ctx”:“conn59”,“msg”:“Error receiving request from client. Ending connection from remote”,“attr”:{“error”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate validation failed: self signed certificate”},“remote”:“192.168.0.117:51786”,“connectionId”:59}}\n{“t”:{\"$date\":“2022-10-06T04:23:17.891-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn59”,“msg”:“Connection ended”,“attr”:{“remote”:“192.168.0.117:51786”,“uuid”:“5f3c1381-8f37-446a-b43c-2aa7a42e0859”,“connectionId”:59,“connectionCount”:0}}Please help me on this", "username": "bala_subramanian" }, { "code": "", "text": "It could be due to bindIp parameter\nWhat is the value you set it to?\nWhile connecting you are using localhost but it seems you are connecting remotely\nDid you try with actual hostname instead of localhost/127.0.0.1\nWhat is your os?\nFor Mac & Windows additional param like certificateselector is available", "username": "Ramachandra_Tummala" } ]
Securing MongoDB with TLS Authentication
2022-10-06T17:49:03.302Z
Securing MongoDB with TLS Authentication
1,961
null
[ "queries", "java", "mongodb-shell" ]
[ { "code": "\n db.collection.update({\n pollID: 123\n},\n{\n \"$inc\": {\n \"answerAnalytics.$[element].selectCount\": 1\n }\n},\n{\n \"arrayFilters\": [\n {\n \"$or\": [\n {\n \"element.option\": \"1\"\n },\n {\n \"element.option\": \"2\"\n }\n ]\n }\n ],\n \"multi\": true\n})\n\n DBCollectionFindAndModifyOptions dbCollectionFindAndModifyOptions = (new DBCollectionFindAndModifyOptions()).projection((DBObject) null).sort((DBObject) null).remove(false).update(incrObj).returnNew(true).upsert(false).bypassDocumentValidation(true).maxTime(0L, TimeUnit.MILLISECONDS).writeConcern(writeConcern);\n\n \nDBObject dbObject = dbCollection.findAndModify(queryDocument, dbCollectionFindAndModifyOptions);\n", "text": "Demo Mongo playground.I found examples to do with spring-mongodb. But unable to find any working way to do this with mongodb driver based code.Query:I have tried doing something like this", "username": "Manish_sharma9" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to create a Java mongodb driver query that is identical to this mongodb update query with arrayfilter
2022-10-06T20:50:43.957Z
How to create a Java mongodb driver query that is identical to this mongodb update query with arrayfilter
1,158
null
[]
[ { "code": "", "text": "Is there any way to get updates for these threads?", "username": "Basavaraj_KM1" }, { "code": "", "text": "Hi @Basavaraj_KM1,Is there any way to get updates for these threads?As long as you are logged into the site, you can set notification options for individual topics via the selection list below the topic discussion:You can also set notification options for categories and tags of interest: Managing and subscribing to notificationsRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there any way to get updates for topics
2022-10-06T14:24:13.185Z
Is there any way to get updates for topics
2,012
null
[ "java" ]
[ { "code": "{\n \"rules\": {\n \"AnalysisModel\": [\n {\n \"name\": \"anyperson\",\n \"applyWhen\": {},\n \"read\": false,\n \"write\": true\n }\n ],\n \"CoordinatesModel\": [\n {\n \"name\": \"anyperson\",\n \"applyWhen\": {},\n \"read\": false,\n \"write\": true\n }\n ],\n \"UserModel\": [\n {\n \"name\": \"anyperson\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": true\n }\n ]\n },\n \"defaultRoles\": [\n {\n \"name\": \"read-write\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": true\n }\n ]\n}\nCredentials credentials = Credentials.anonymous();\n\n User userSync = app.login(credentials);\n\n SyncConfiguration config = new SyncConfiguration.Builder(userSync)\n .initialSubscriptions(new SyncConfiguration.InitialFlexibleSyncSubscriptions() {\n @Override\n public void configure(Realm realm, MutableSubscriptionSet subscriptions) {\n\n subscriptions.addOrUpdate(Subscription.create(\"anyperson\",realm.where(UserModel.class)));\n\n }\n })\n .allowQueriesOnUiThread(true)\n .allowWritesOnUiThread(true)\n .modules(new ModuleUserAndAnalysis())\n .build();\n Realm.getInstanceAsync(config, new Realm.Callback() {\n @Override\n public void onSuccess(Realm realm) {\n Log.v(\"EXAMPLE\", \"Successfully opened a realm.\");\n }\n });\n\n realmConfig = config;\n\n return Realm.getInstance(realmConfig);\nsignature\n", "text": "I would like to know how to limit a collection to read-only and another to write-only, I made a rule in flexible but I am not able to implement a signature that follows these rules can anyone help me?rule implemented in atlas", "username": "multiface_biometria" }, { "code": "{\n \"name\": \"readonly\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": false\n}", "text": "@multiface_biometria I’m not sure exactly what you are trying to do but perhaps Asymmetric Sync is what you are looking for?Another option would be to have a read-only role?\nsomething like:", "username": "Ian_Ward" }, { "code": "", "text": "Thanks for the return.What I really wanted is a rule and a signature to be read only in one collection and write only in another.", "username": "multiface_biometria" }, { "code": "", "text": "{\n“rules”: {\n“Collection1l”: [\n{\n“name”: “only-write”,\n“applyWhen”: {},\n“read”: false,\n“write”: true\n}\n],\n“Collection2”: [\n{\n“name”: “read-write”,\n“applyWhen”: {},\n“read”: true,\n“write”: true\n}\n]\n}I would like to know what signature I would use for collection 1 .\nwhere is only written.collection2 is using subscriptions.addOrUpdate(Subscription.create(“read-write”,realm.where(collection2.class)));", "username": "multiface_biometria" }, { "code": "", "text": "Write permissions require and imply read permissions, so unfortunately it’s not possible to make a rule with write-only (and not read) permissions.Take a look at the docs on permissions: https://www.mongodb.com/docs/atlas/app-services/sync/data-access-patterns/permissions/#write-permissions", "username": "Sudarshan_Muralidhar" }, { "code": "", "text": "Thanks for the return.Is there any other way to implement a write-only collection?The asymmetric mode for example?", "username": "multiface_biometria" }, { "code": "", "text": "Asymmetric sync is write-only in the sense that noone can actually sync it down. It is ideal for things like metrics, logging, IoT measurements, etc. 
I suspect it might be what you are looking for, but I am curious why exactly you wany write-only permissions since it does seem like a bit of an anti-pattern to let someone write something that they are not allowed to read.", "username": "Tyler_Kaye" }, { "code": "", "text": "Thank you very much for the feedback.You made everything more understandable.The ideal for me is to record a route taking the coordinate data and saving it directly in the mongo atlas, so if I cleaned the local data I would still have the atlas for consultation that would be used by an admin login.in my app we don’t need to have the data on the device only in mongo atlas so I didn’t want to read anything from the atlas.", "username": "multiface_biometria" }, { "code": "", "text": "If you never want your app to read any data locally / from atlas then Asymmetric sync is exactly what you want. It will essentially guarantee that everything you ever write will make it to Atlas (even if your device does not have service). The one caveat is that it is insert-only, meaning that you cant “update” objects but that makes sense considering that you cant “read” anything to update in the first place!Excited for you to try it out and let us know if you have any other questions.Thanks,\nTyler", "username": "Tyler_Kaye" } ]
How to create read-only permission in one collection and write-only permission in another collection on flexible sync? (JAVA SDK)
2022-09-29T10:37:08.895Z
How to create read-only permission in one collection and write-only permission in another collection on flexible sync? (JAVA SDK)
3,036
null
[ "queries", "indexes" ]
[ { "code": "", "text": "I have a clustered collection that uses custom ids of unique integers. When I query using a sort by _id, the result is not sorted and sometimes returns different results.This collection has a secondary index, but that index is not usable. When explaining the query plan it seems that no index is used, as indexFilterSet is set to false.Is it possible to sort by _id descending in a clustered index? I cannot find documentation detailing that this is not possible, but it doesn’t seem to work.This is the type of query I am attempting:\ndb.collection.find({}).sort({ _id: -1}).limit(5)", "username": "A_O2" }, { "code": "1-1", "text": "Hi @A_O2, and welcome to the MongoDB Community forums! MongoDB should have no issues sorting by an index in either ascending (1) or descending (-1) order. Can you post a screen shot where this is not happening?", "username": "Doug_Duncan" } ]
Unable to sort by a Clustered Index
2022-10-06T15:52:34.738Z
Unable to sort by a Clustered Index
1,171
null
[ "serverless" ]
[ { "code": "", "text": "Hello community,I have a multi-tenant SaaS application and would want to create a serverless instance for each tenant I have. Is there any limitation on how many serverless instances I can have in a single project?Many thanks in advance!", "username": "derjanni" }, { "code": "", "text": "Hi,\nWe limit to 50 serverless instances. You can check all the serverless limitations here: https://www.mongodb.com/docs/atlas/reference/serverless-instance-limitations/Sincerely,\nVishal\nMongoDB Atlas PM - Serverless", "username": "Vishal_Dhiman" }, { "code": "", "text": "We limit to 50 serverless instances.Any chance to get this limit increased to at least 5,000?", "username": "derjanni" }, { "code": "", "text": "Hi,\nI wanted to clarify here. For serverless, here are the current limits:\nMax number of instances per project - 25\nMax number of databases per instance - 50So in total you can have 25x50=1250 databases per project. You can also create many projects per organization. To get to 5000, you can create 4 projects.Also please note that Atlas dedicated clusters allow more databases depending upon which size you pick. I’d love to chat with you to understand your use case in more depth. I’ll DM you my calendly link.Sincerely,\nVishal\nMongoDB Atlas Serverless PM team", "username": "Vishal_Dhiman" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How many serverless instances can I have in a project?
2022-10-04T10:33:19.037Z
How many serverless instances can I have in a project?
2,288
null
[ "swift", "android" ]
[ { "code": "", "text": "In iOS Reading a JSON file and inserting to Realm database with below piece of code is taking 3236.17 ms . File size is 7.6MB with 21837 recordsvar config = Realm.Configuration.defaultConfigurationconfig.fileURL!.deleteLastPathComponent()config.fileURL!.appendPathComponent(“drugs”)config.fileURL!.appendPathExtension(“realm”)var defaultRealm = try! Realm(configuration: config)let fileUrl = FileManager.default.epocPrivateDataURL.appendingPathComponent(“pill_propertiesV2.json”)do {let data = try Data(contentsOf: fileUrl)let pillProperties = try JSONDecoder().decode(RLMPillProperties.self, from: data)try? defaultRealm.write {realm.add(pillProperties)}} catch {print(“Error reading monograph (error)”)}Same operation in Android it is taking 224ms with single API createObjectFromJson .Any reason why the performance is low in iOS? Is there any such kind of API at RealmSDK level similar to Android one?", "username": "Basavaraj_KM1" }, { "code": "", "text": "I ran your code with 10,000 objects and it took about a second. I am running it on iMac 2017.", "username": "Jay" }, { "code": "", "text": "I ran this in MacBook Pro (15-inch, 2019) and This is the structure of single record in JSON. Tried the same in multiple machine the result is same.{\n“drugId”: 1713,\n“ifile”: “LLY07151”,\n“color”: “white”,\n“shape”: “”,\n“score”: “”,\n“clarity”: “cloudy”,\n“coating”: “”,\n“subshape”: “”,\n“subcolor”: “milky white”,\n“flavor”: “”,\n“imprint1”: “”,\n“imprint2”: “”,\n“formulation”: “human recombinant 70 units-30 units/mL”,\n“drugForm”: “suspension”,\n“genericName”: “insulin isophane-insulin regular”\n}", "username": "Basavaraj_KM1" }, { "code": "class Thing: Object {\n @Persisted var drugId = 1713\n @Persisted var ifile = \"LLY07151\"\n @Persisted var color = \"white\"\n @Persisted var shape = \"\"\n @Persisted var score = \"\"\n @Persisted var clarity = \"cloudy\"\n @Persisted var coating = \"\"\n @Persisted var subshape = \"\"\n @Persisted var subcolor = \"milky white\"\n @Persisted var flavor = \"\"\n @Persisted var imprint1 = \"\"\n @Persisted var imprint2 = \"\"\n @Persisted var formulation = \"human recombinant 70 units-30 units/mL\"\n @Persisted var drugForm = \"suspension\"\n @Persisted var genericName = \"insulin isophane-insulin regular\"\n}\nvar startTime = Date()func writeABunchOfThings() {\n let realm = //your realm\n\n var myThingArray = [Thing]()\n\n for _ in 0...9999 { // <-10,000 objects\n let myThing = Thing()\n myThingArray.append(myThing)\n }\n\n self.startTime = Date()\n realm.writeAsync {\n realm.add(myThingArray)\n } onComplete: { error in\n let elapsed = Date().timeIntervalSince(self.startTime)\n print(\"done inserting. Elapsed time: \\(elapsed)\")\n }\n}\ndone inserting. Elapsed time: 0.5658090114593506\ndone inserting. Elapsed time: 0.5397540330886841\ndone inserting. Elapsed time: 0.5507090091705322\nself.startTime = Date()\n\nrealm.writeAsync {\n for _ in 0...9999 {\n realm.create(Thing.self)\n }\n} onComplete: { error in\n let elapsed = Date().timeIntervalSince(self.startTime)\n print(\"done inserting. Elapsed time: \\(elapsed)\")\n}\ndone inserting. Elapsed time: 0.9166730642318726\ndone inserting. Elapsed time: 0.9176139831542969\ndone inserting. Elapsed time: 0.911078929901123\n", "text": "Perhaps I am not fully understanding. 
Let me share my testing code:For simplicity I have Thing object that just has default values based on your JSONI then have a class var that stores the start time so we can determine the write timevar startTime = Date()and then a function that creates 10,000 Thing objects and stores them in an array, once completed the startTime is populated and then that array is written to Realm. When the write completes, the endTime is calculated and printed to console.If I run that code three times, deleting the Realm file in between runs, I get this output in consoleI believe the issue with your code is writing each object separately within a tight loop on the main thread- if you need that functionality (which may be more memory safe) this would work noting it’s an asynchronous write.and the outputSo not quite as fast but still within reason.Let me know if that helps.Jay", "username": "Jay" }, { "code": "", "text": "I didn’t see anything like realm.writeAsync in SDK… Please confirm.", "username": "Basavaraj_KM1" }, { "code": " try? realm.writeAsync {\n realm.create(SwiftStringObject.self, value: [\"string\"])\n } onComplete: { error in\n // optional handling on write complete\n }\n\n try? realm.beginAsyncWrite {\n realm.create(SwiftStringObject.self, value: [\"string\"])\n realm.commitAsyncWrite()\n }\n\n let asyncTransactionId = try? realm.beginAsyncWrite {\n // ...\n }\n try! realm.cancelAsyncWrite(asyncTransactionId)\n", "text": "@Basavaraj_KM1Confirmed! There are a bunch of ways to code that - I was just using the latest syntax from the SDK but the functionality is consistent.Xcode 13.1 is now the minimum supported version of Xcode, as Apple no longer allows submitting to the app store with Xcode 12.\nEnhancements\n\nAdd Xcode 13.4 binaries to the release package.\nAdd Swif...Add Swift API for asynchronous transactions", "username": "Jay" }, { "code": "", "text": "I am using pod ‘RealmSwift’, ‘~>10’… Any specific version I need to refer here?", "username": "Basavaraj_KM1" }, { "code": "pod update", "text": "@Basavaraj_KM1Any specific version I need to refer here?Well not really. If you’re using less than 10.26 it would probably be a good idea to update to 10.26pod updatebut if want to use and older SDK you just have to code it without the spiffy new syntax.Oh, and the new syntax in 10.26 is actually covered in the current documentation (GO REALM TEAM!)", "username": "Jay" }, { "code": "", "text": "Getting error saying \"CocoaPods could not find compatible versions for pod “RealmSwift”:\nIn Podfile:\nRealmSwift (~> 10.26)Could you please help me on how to get this?", "username": "Basavaraj_KM1" }, { "code": "project 'My Great Realm App'\ntarget 'My Great Realm App' do\n use_frameworks!\n platform :osx, '10.15'\n pod 'RealmSwift', '~>10'\nend\n", "text": "Sure!If we knew what version of CocoaPods you were using (pod --version) and could see your entire podfile we may be able to spot the issue.Here’s mine (for macOS)There’s an installation guide right on the Realm site as well for reference", "username": "Jay" }, { "code": "", "text": "Thanks Jay.Started working after pod update. But I didn’t seen any difference in performance with writeAsync and write. Still getting same results like as mentioned in the beginning", "username": "Basavaraj_KM1" }, { "code": "", "text": "@Basavaraj_KM1Well, it’s working great for me. I would propose it’s time for a split-half search to eliminate some variables. 
Please create a brand new macOS XCode project using my podfile from above (changing the project name)Then copy and paste my Thing object from above and then the code (func writeABunchOfThings) in the new project. I would add a button to run the code itself.Run the project.If it works and behaves correctly, something in your original project is causing the issueIf it does not work, then you’ve got an environment issue; maybe a config issue or some other issue - possibly event corrupted files or a hardware problem.I have followed those steps on three different Macs; a 2019 iMac, 2017 iMac and a 2013 Macbook Pro 15\". The results are roughly the same.", "username": "Jay" }, { "code": "realm.create([RLMIngredient].self, value: jsonObject, update: .modified)if let jsonObject = try? JSONSerialization.jsonObject(\n with: data,\n options: []\n ) as? [[String: AnyObject]] {\n try? realm.writeAsync({\n realm.create(RLMIngredient.self, value: jsonObject, update: .modified)\n }, onComplete: { error in\n print(error)\n print(\"Realm onComplete - Time taken to Read JSON file and insert ingredients \\(Date().timeIntervalSince(start) * 1000)\")\n })\n }\n", "text": "Here jsonObject we are getting array of dictionary object. Each dictionary represents one record in RLMIngredient collection.With this statement realm.create([RLMIngredient].self, value: jsonObject, update: .modified) I am getting compiler error.What is the procedure to insert array of json object to realm here other than for loop. Below snippet is not working", "username": "Basavaraj_KM1" }, { "code": ".create.create([RLMIngredient].self[].selfdatadata", "text": "My apologies but I am struggling trying to understand what you’re attempting to do with that code.The .create function creates a single realm object so this.create([RLMIngredient].selfwon’t work as [].self is an array and not a Realm object. It’s unclear what the purpose of that would be even if it did work.Here jsonObject we are getting array of dictionary object. Each dictionary represents one record in RLMIngredient collection.Why? Why do you need an array of dictionaries for? Where does data come from? Is it encoded? Why not just pass data to a Realm object when it’s instantiated, let the object decode it and assign properties and then write it out to Realm?Your new question isWhat is the procedure to insert array of json object to realmAnd the answer is you cannot insert and array of json objects to Realm. You can only insert Realm objects into Realm.I feel the thread is getting off topic.The initial issue was Realm write performance. Using my examples above, we’re not able to replicate that issue which would indicate the problem lies elsewhere in your code that was not included.Working with JSONSerialization, arrays etc are off topic for this discussion and to prevent the thread from being a mile-long, it should probably be posed as another question (it’s really a separate topic)Happy to help but it’s important to keep threads on a single topic so it may help future readers that are looking for help about this particular subject matter.So, post a separate question with the code you’re stuck on an we’ll take a look!", "username": "Jay" }, { "code": "", "text": "A post was split to a new topic: Is there any way to get updates for topics", "username": "Stennie_X" } ]
Performance issue during insertion of document to collection between Android and iOS
2022-05-18T10:02:48.682Z
Performance issue during insertion of document to collection between Android and iOS
5,214
null
[]
[ { "code": "", "text": "The below message appears when trying to create a cluster in a new project:Sorry, we’re deploying some big changes of our own and are temporarily unable to process your request. We expect to be back up and running very soon.", "username": "benjamin_danis" }, { "code": "", "text": "I get the same. I tried to delete my cluster, but it seem hung up on “Your cluster is shutting down” and never gets past this. Something is definitely wrong.", "username": "Francis_Vachon" }, { "code": "", "text": "Hi @benjamin_danis,I hope everything is fine and working now!You can refer to the status page of Atlas:\nhttps://status.cloud.mongodb.com/.The Atlas Web UI issue is resolved and all systems are operational.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to Create Clusters After Initial Project Creation: 'unable to process your request'
2022-10-05T16:55:16.518Z
Unable to Create Clusters After Initial Project Creation: ‘unable to process your request’
1,511
null
[]
[ { "code": "", "text": "Hello All,I had an application which was communicating with my database properly and without any problem. But when I set a password for the database, it stopped working.\nNow, I getQuery failed with error code 13 with name ‘Unauthorized’error when I run my Springboot apllication.\nHowever, using the shell, I can connect to the database, see the collections, etc.\nMy MongoDB version is 4.2.0 and I run this command to connect to the database in my shell:mongo --port 27017 -u “myUsername” -p “myPassword” --authenticationDatabase “admin”In my Springboot application I have the following URI:data:\nmongodb:\nuri: mongodb://myUsername:myPassword@localhost:27017/mainDatabase?authSource=admin&retryWrites=true&w=majorityBut it doesn’t work.\nCan anyone help me please?", "username": "Ana_Ha" }, { "code": "", "text": "If it works with shell it should work with your application too.\nCan you connect from shell using the exact connect string from your uri?\nWhat is mainDatabase?Is it actual name redacted\nMake sure name is matching with that from show dbs", "username": "Ramachandra_Tummala" }, { "code": "", "text": "For anyone who faces the same problem in the future.\nI had actually exactly the same settings (passwords, username, etc.) on the local and on the server but it didn’t work. Finally, I updated my MongoDB to the latest version (1.6) and my Spring Boot Application to the latest version (2.7.4 I think) and it worked!", "username": "Ana_Ha" } ]
'Unauthorized' error when connecting to MongoDB using Springboot application
2022-09-26T07:38:00.124Z
‘Unauthorized’ error when connecting to MongoDB using Springboot application
3,488
null
[]
[ { "code": "", "text": "Hi .\nI have issues with creating a new database , everytime i create a database its deleted and i dont know what is the problem.I could use some help please.", "username": "avishay_avraham" }, { "code": "db.runCommand( { dropDatabse: 1 } )db.dropDatabase()", "text": "Hello @avishay_avraham, and welcome to the MongoDB Community forums! This is not normal. MongoDB will not automatically delete a database without command like db.runCommand( { dropDatabse: 1 } ) or db.dropDatabase() being run.Can you explain the process where you see this happening?", "username": "Doug_Duncan" }, { "code": "", "text": "Welcome to the MongoDB Community @avishay_avraham !As @Doug_Duncan mentioned, databases do not delete themselves automatically.If you can provide more details we should be able to help you work out what is happening:version of MongoDB server?is this server exposed to the internet?do you have password authentication and access control enabled? (see MongoDB Security Checklist)how are you creating the new database and what is the tool or driver version used?how are you confirming the database has been deleted and what is the tool or driver version used?A few likely things to check:are you connecting to the same deployment and databaseare you using compatible tool/driver versions to connect to your deployment to read the datais your deployment open to remote connections and accidentally unsecuredRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "One other thing I just thought of is that if all the collections in the database are dropped, then the database itself is removed from the system. Are you by chance dropping collections?", "username": "Doug_Duncan" }, { "code": "", "text": "Database is automatically created when you issueuse database", "username": "Tin_Cvitkovic" }, { "code": "mongosh> show dbs\ndata 73.7 kB\nequipoule 21.7 MB\nforums 41 kB\nnota_bene_test 115 kB\npoem 41 kB\nprototype 582 kB\nsalon 22.7 MB\ntest 213 kB\nzoo 111 kB\nadmin 348 kB\nlocal 3.13 GB\nmongosh> use Tin_Cvitkovic\n'switched to db Tin_Cvitkovic'\nmongosh> show dbs\ndata 73.7 kB\nequipoule 21.7 MB\nforums 41 kB\nnota_bene_test 115 kB\npoem 41 kB\nprototype 582 kB\nsalon 22.7 MB\ntest 213 kB\nzoo 111 kB\nadmin 348 kB\nlocal 3.13 GB\nmongosh> db.test.insertOne( { _id : 1})\n{ acknowledged: true, insertedId: 1 }\nmongosh> show dbs\nTin_Cvitkovic 8.19 kB\ndata 73.7 kB\nequipoule 21.7 MB\nforums 41 kB\nnota_bene_test 115 kB\npoem 41 kB\nprototype 582 kB\nsalon 22.7 MB\ntest 213 kB\nzoo 111 kB\nadmin 348 kB\nlocal 3.13 GB\n", "text": "I am not too sure aboutDatabase is automatically created when you issueuse databaseHowever it does get created if you insert into a collection.", "username": "steevej" }, { "code": "", "text": "That’s what I meant, you dont need a special “CREATE DATABASE” command, maybe he justed switched but didn’t insert anything hence thinking that newly created database has been removed. Thats was my 0.02 $ ", "username": "Tin_Cvitkovic" } ]
Database delete automatic
2022-10-02T18:01:00.807Z
Database delete automatic
2,481
null
[ "node-js", "indexes" ]
[ { "code": "", "text": "I have customer collection with a field hospital id, bvn and phone.I want to build an index such that if I try to insert a new document and the hospital ids match it should apply a unique constraint to the bvn and phone to ensure the same hospital does not create duplicate data.\nexample :\ndoc1 (in db) = {hospital_id: 1, bvn: 1, phone: 1};\nif I try to insert a new doc {hospital_id: 1, bvn: 2, phone: 1} or {hospital_id: 1, bvn: 1, phone: 2}; it should flag as duplicate since they have the same hospital_id and therefore the unique attributes have been applied to those fields.\nif i try to insert a doc with {hospital_id: 2, bvn: 1, phone: 1}; or {hospital_id: 1, bvn: 2, phone: 2}; it should be successful since the hospital ids are unique or the fields are not duplicated with the same hospital_id as exists on the db.I tried to do this with compound index but it does not work\nhospitalSchema.index({ hospital_id: 1, bvn: 1, phone: 1 }, { key: unique: true });then with partial index\nhospitalSchema.index({ hospital_id: 1, bvn: 1, phone: 1 }, { unique: true, partialFilterExpression: { $eq: this.hospital_id} });Please help, thank you.", "username": "John_Kennedy_Kalu" }, { "code": "replset [direct: primary] test> db.test.createIndex({hospital:1, bvn:1}, {unique:true})\nhospital_1_bvn_1\n\nreplset [direct: primary] test> db.test.createIndex({hospital:1, phone:1}, {unique:true})\nhospital_1_phone_1\nhospital_id", "text": "Hi @John_Kennedy_Kalu and welcome to the MongoDB community!!If I understand your question correctly, what you need is a unique combination of bvn and phone per hospital_id. Please let me know if my understanding is correct here.As per the above assumption, you can create unique compound indexes on hospital_id and bvn and hospital_id and phone.This would restrict inserts that have existing bvn or phone within the same hospital_idPlease refer to the documentations for Unique Compound Index for further understanding.Let us know if you have any further queries.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "replset [direct: primary] test> db.test.createIndex({ bvn:1, hospital:1 }, {unique: true})\nbvn_1_hospital_1\n\nreplset [direct: primary] test> db.test.createIndex({ phone:1, hospital:1 }, {unique: true})\nphone_1_hospital_1\n", "text": "Hi @Aasawari and thank you for your response.I tried your solution however, mongodb applied the unique to the hospital_id so I flipped them around and it worked perfectly.Thank you so much!!", "username": "John_Kennedy_Kalu" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Create a partial index if one key is equal to key in db
2022-10-04T00:00:36.256Z
Create a partial index if one key is equal to key in db
1,288
https://www.mongodb.com/…1_2_1024x625.png
[ "aggregation" ]
[ { "code": " const door4 = ObjectId(\"someId\");\n const door3 = ObjectId(\"someId\");\n const Automatic = ObjectId(\"someId\");\n\n const filterPosts = await VFSPostViewSchema.aggregate([\n {\n $match: {\n $and: [\n { \"filterItem.id\": { $in: [door4, Automatic] } },\n { \"filterItem.id\": { $nin: [door3] } },\n ],\n },\n },\n ]);\n", "text": "\nScreen Shot 2022-09-28 at 12.52.41 PM2782×1700 400 KB\nits my code i have condition in which i have two arrays includes and excludes\nincludes array = [“id1”,“id2”,“id3”] this will be the excat match i need that same data which match these all ids and dont return that if any one of that match i want to get those document which have all of thisexclude. array = [“id4”,“id5”,“id6”] i need to exclude all that document which has that ids", "username": "Mehmood_Zubair" }, { "code": "", "text": "Hi @Mehmood_Zubair and welcome to the community!!To understand the problem in a better way, it would be very helpful if you could help with some details on the topic:Regards\nAasawari", "username": "Aasawari" } ]
Filter data from multiple strings array from object and exclude from array of strings
2022-09-28T09:04:12.380Z
Filter data from multiple strings array from object and exclude from array of strings
1,555
null
[ "aggregation" ]
[ { "code": "{ _id: ObjectId('00093eae20bde40e986354ad'), link: 'aaaaaaaa', dataRequests: { id: [ 'a', 'b'], docs: { a: { id: 'a', sampleValue: 'one' }, b: { id: 'b', sampleValue: 'two' }, c: { id: 'c', sampleValue: 'three' } } } }[ { $addFields: { lastDrIndex: { $arrayElemAt:[\"$dataRequests.ids\", -1] } } }, { $addFields: { lastDr: \"$dataRequests.docs[$lastDrIndex]\" } } ]", "text": "I am trying to query a collection that has a difficult structure to work with.\nHere is a sample document with the relevant fields.{ _id: ObjectId('00093eae20bde40e986354ad'), link: 'aaaaaaaa', dataRequests: { id: [ 'a', 'b'], docs: { a: { id: 'a', sampleValue: 'one' }, b: { id: 'b', sampleValue: 'two' }, c: { id: 'c', sampleValue: 'three' } } } }I want to add a field in an aggregate that grabs the object in dataRequests.docs where the field is the last value in dataRequests.ids.My aggregate pipeline looks like this.[ { $addFields: { lastDrIndex: { $arrayElemAt:[\"$dataRequests.ids\", -1] } } }, { $addFields: { lastDr: \"$dataRequests.docs[$lastDrIndex]\" } } ]The first part works as expected and adds a field “lastDrIndex”: “b”.\nThe second part must not be the correct syntax, because it does not add anything to the results.\nI would like the second part to take the value of lastDrIndex and use it to access the dataRequests.docs object at that position.So in this example it would add a field\n“lastDr”: { id: ‘b’, sampleValue: ‘two’ }How do I fix the second part of the aggregate pipeline to use one field as an accessor to another object ?\nOr can I somehow combine the two, since I don’t actually need the lastDrIndex and only need lastDr ?", "username": "Charles_Haines" }, { "code": "Sorry for the formatting, I hope this will fix it.\n\nSample Document\n\n{\n _id: ObjectId('00093eae20bde40e986354ad'),\n link: 'aaaaaaaa',\n dataRequests: {\n id: [ 'a', 'b'],\n docs: {\n a: { id: 'a', sampleValue: 'one' },\n b: { id: 'b', sampleValue: 'two' },\n c: { id: 'c', sampleValue: 'three' }\n }\n }\n}\n\nAggregate Pipeline\n\n[\n {\n $addFields: { lastDrIndex: { $arrayElemAt:[\"$dataRequests.ids\", -1] } }\n },\n {\n $addFields: { lastDr: \"$dataRequests.docs[$lastDrIndex]\" }\n }\n]\n", "text": "", "username": "Charles_Haines" }, { "code": "", "text": "First, you a typo in your first $addFields. Your field is dataRequests.id but you use dataRequests.ids.Since docs is not an array, I am not too sure you could use the [ ] syntax. Even when I hard code the value b inside the 2nd $addFields it does not work.I think you would need to use $objectToArray of dataRequests.docs with a $reduce to find the appropriate sampleValue. If this is a frequent use-case you might want to look at the attribute pattern. 
This would eliminate the need to use $objectToArray since your data will already be an array.", "username": "steevej" }, { "code": "", "text": "I did notice my typo, but I cannot edit the post.\nUnfortunately, I cannot feasibly change the way the data is stored.\nI would normally choose to use a much more standard practice like you suggested, or normalize the data into other collections.I can look at the ObjectToArray and reduce functions, thank you.", "username": "Charles_Haines" }, { "code": "$objectToArray$arrayToObjectdb.test.aggregate([\n {$addFields: {\n lastDr: {\n $arrayToObject: {\n $filter: {\n input: {$objectToArray: \"$dataRequests.docs\"},\n cond: {$eq: [\"$$this.k\", {$arrayElemAt:[\"$dataRequests.id\", -1]}]}\n }\n }\n }\n }}\n])\n[\n {\n _id: ObjectId(\"00093eae20bde40e986354ad\"),\n link: 'aaaaaaaa',\n dataRequests: {\n id: [ 'a', 'b' ],\n docs: {\n a: { id: 'a', sampleValue: 'one' },\n b: { id: 'b', sampleValue: 'two' },\n c: { id: 'c', sampleValue: 'three' }\n }\n },\n lastDr: { b: { id: 'b', sampleValue: 'two' } }\n }\n]\n", "text": "Hello @Charles_Haines ,Welcome to The MongoDB Community Forums! In addition to @steevej’s reply, below is an example of how you can use $objectToArray and $arrayToObject to achieve your desired output.Output is:Note that this is an untested query so you might want to do your own testing to ensure it works with your data.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Access a variable field of an object
2022-09-30T17:07:50.234Z
Access a variable field of an object
4,335
null
[ "replication" ]
[ { "code": "", "text": "Hi,I’m using mongodb 5.0.6 version with 3 members (PSA) replica set. When electric is gone suddenly, mongodb generally corrupted. After electric is come, mongodb primary and secondary members is not working.Centos 7.9\nMongodb 5.0.6 with replica set (PSA)\nFile system ext4Could you please help me ?", "username": "Kadir_USTUN" }, { "code": "", "text": "So after power came back what is the status of your mongods\nHow are they configured?Do they start auto on reboot or you start them manually?\nIs this prod or development?\nDo you have backups configured\nHow you came to conclusion it is corrupted\nWhat exactly you mean by not working?\nWhat does mongod.logs show?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi Ramachandra\nIt’s configured with Primary - Secondary - Arbiter. Mongodb is starting with service. So starting auto\nIt will be prod in 1 month. So it’s critical for us.\nYes we have a backup strategy.\nI saw some “database corruption” error in the logs file.\nWhen i run “systemctl start mongod” command, I’m taking below error and mongodb doesn’t open.mongod.log files is below for 2 data bearing nodes.\nI only changed our hostname to “hostname”Node1:{“t”:{“$date”:“2022-09-26T08:36:01.858+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:20698, “ctx”:“-”,“msg”:“***** SERVER RESTARTED **“}\n{“t”:{”$date\":“2022-09-26T08:36:01.858+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“-”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{“$date”:“2022-09-26T08:36:01.863+03:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“main”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{“$date”:“2022-09-26T08:36:01.863+03:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-09-26T08:36:01.863+03:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{“$date”:“2022-09-26T08:36:01.938+03:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-09-26T08:36:01.938+03:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-09-26T08:36:01.938+03:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“ns”:“config.tenantMigrationDonors”}}\n{“t”:{“$date”:“2022-09-26T08:36:01.939+03:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“ns”:“config.tenantMigrationRecipients”}}\n{“t”:{“$date”:“2022-09-26T08:36:01.939+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{“$date”:“2022-09-26T08:36:01.939+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:94507,“port”:27017,“dbPath”:“/opt/mongo”,“architecture”:“64-bit”,“host”:“hostname”}}\n{“t”:{“$date”:“2022-09-26T08:36:01.939+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“5.0.6”,“gitVersion”:“212a8dbb47f07427dae194a9c75baec1d81d9259”,“openSSLVersion”:“OpenSSL 1.0.1e-fips 11 Feb 2013”,“modules”:[],“allocator”:“tcmalloc”,“environment”:{“distmod”:“rhel70”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{“$date”:“2022-09-26T08:36:01.939+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“CentOS Linux release 7.9.2009 (Core)”,“version”:“Kernel 3.10.0-1160.el7.x86_64”}}}\n{“t”:{“$date”:“2022-09-26T08:36:01.939+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“config”:“/etc/mongod.conf”,“net”:{“bindIp”:“127.0.0.1,hostname”,“port”:27017},“processManagement”:{“fork”:true,“pidFilePath”:“/var/run/mongodb/mongod.pid”,“timeZoneInfo”:“/usr/share/zoneinfo”},“replication”:{“replSetName”:“eirs”},“security”:{“authorization”:“enabled”,“keyFile”:“/opt/mongo/mongokeyfile”},“storage”:{“dbPath”:“/opt/mongo”,“journal”:{“enabled”:true}},“systemLog”:{“destination”:“file”,“logAppend”:true,“path”:“/var/log/mongodb/mongod.log”}}}}\n{“t”:{“$date”:“2022-09-26T08:36:01.940+03:00”},“s”:“W”, “c”:“STORAGE”, “id”:22271, “ctx”:“initandlisten”,“msg”:“Detected unclean shutdown - Lock file is not empty”,“attr”:{“lockFile”:“/opt/mongo/mongod.lock”}}\n{“t”:{“$date”:“2022-09-26T08:36:01.941+03:00”},“s”:“I”, “c”:“STORAGE”, “id”:22270, “ctx”:“initandlisten”,“msg”:“Storage engine to use detected by data files”,“attr”:{“dbpath”:“/opt/mongo”,“storageEngine”:“wiredTiger”}}\n{“t”:{“$date”:“2022-09-26T08:36:01.941+03:00”},“s”:“W”, “c”:“STORAGE”, “id”:22302, “ctx”:“initandlisten”,“msg”:“Recovering data from the last clean checkpoint.”}\n{“t”:{“$date”:“2022-09-26T08:36:01.941+03:00”},“s”:“I”, “c”:“STORAGE”, “id”:22297, “ctx”:“initandlisten”,“msg”:“Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem\",“tags”:[\"startupWarnings”]}\n{“t”:{“$date”:“2022-09-26T08:36:01.941+03:00”},“s”:“I”, “c”:“STORAGE”, “id”:22315, “ctx”:“initandlisten”,“msg”:“Opening WiredTiger”,“attr”:{“config”:“create,cache_size=15527M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],”}}\n{“t”:{“$date”:“2022-09-26T08:36:02.439+03:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:-31809,“message”:“[1664170562:439217][94507:0x7f741c10abc0], connection: __wt_turtle_read, 391: WiredTiger.turtle: fatal turtle file read error: WT_TRY_SALVAGE: database corruption detected”}}\n{“t”:{“$date”:“2022-09-26T08:36:02.439+03:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:-31804,“message”:“[1664170562:439270][94507:0x7f741c10abc0], connection: __wt_turtle_read, 391: the process must exit and restart: WT_PANIC: WiredTiger library panic”}}\n{“t”:{“$date”:“2022-09-26T08:36:02.439+03:00”},“s”:“F”, “c”:“-”, “id”:23089, “ctx”:“initandlisten”,“msg”:“Fatal assertion”,“attr”:{“msgid”:50853,“file”:“src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp”,“line”:538}}\n{“t”:{“$date”:“2022-09-26T08:36:02.439+03:00”},“s”:“F”, “c”:“-”, “id”:23090, “ctx”:“initandlisten”,“msg”:\"\\n\\naborting after fassert() failure\\n\\n”}\n{“t”:{“$date”:“2022-09-26T08:36:02.439+03:00”},“s”:“F”, “c”:“CONTROL”, “id”:4757800, “ctx”:“initandlisten”,“msg”:“Writing fatal message”,“attr”:{“message”:“Got signal: 6 (Aborted).\\n”}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31380, 
“ctx”:“initandlisten”,“msg”:“BACKTRACE”,“attr”:{“bt”:{“backtrace”:[{“a”:“55B2A3A23FA5”,“b”:“55B29FBA3000”,“o”:“3E80FA5”,“s”:“_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.357”,“s+”:“215”},{“a”:“55B2A3A26A39”,“b”:“55B29FBA3000”,“o”:“3E83A39”,“s”:“_ZN5mongo15printStackTraceEv”,“s+”:“29”},{“a”:“55B2A3A1F076”,“b”:“55B29FBA3000”,“o”:“3E7C076”,“s”:“abruptQuit”,“s+”:“66”},{“a”:“7F741A65E630”,“b”:“7F741A64F000”,“o”:“F630”,“s”:“_L_unlock_13”,“s+”:“34”},{“a”:“7F741A2B7387”,“b”:“7F741A281000”,“o”:“36387”,“s”:“gsignal”,“s+”:“37”},{“a”:“7F741A2B8A78”,“b”:“7F741A281000”,“o”:“37A78”,“s”:“abort”,“s+”:“148”},{“a”:“55B2A0F5FBAB”,“b”:“55B29FBA3000”,“o”:“13BCBAB”,“s”:“_ZN5mongo25fassertFailedWithLocationEiPKcj”,“s+”:“F6”},{“a”:“55B2A0A584AC”,“b”:“55B29FBA3000”,“o”:“EB54AC”,“s”:“_ZN5mongo12_GLOBAL__N_141mdb_handle_error_with_startup_suppressionEP18__wt_event_handlerP12__wt_sessioniPKc.cold.1216”,“s+”:“16”},{“a”:“55B2A1260B63”,“b”:“55B29FBA3000”,“o”:“16BDB63”,“s”:“__eventv”,“s+”:“403”},{“a”:“55B2A0A6AD89”,“b”:“55B29FBA3000”,“o”:“EC7D89”,“s”:“__wt_panic_func”,“s+”:“114”},{“a”:“55B2A0A6487D”,“b”:“55B29FBA3000”,“o”:“EC187D”,“s”:“__wt_turtle_read.cold.7”,“s+”:“4C”},{“a”:“55B2A1228B24”,“b”:“55B29FBA3000”,“o”:“1685B24”,“s”:“__wt_turtle_validate_version”,“s+”:“234”},{“a”:“55B2A11DC49D”,“b”:“55B29FBA3000”,“o”:“163949D”,“s”:“wiredtiger_open”,“s+”:“2B9D”},{“a”:“55B2A11875A9”,“b”:“55B29FBA3000”,“o”:“15E45A9”,“s”:“ZN5mongo18WiredTigerKVEngine15_openWiredTigerERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8”,“s+”:“B9”},{“a”:“55B2A1192AA8”,“b”:“55B29FBA3000”,“o”:“15EFAA8”,“s”:“_ZN5mongo18WiredTigerKVEngineC2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mmbbbb”,“s+”:“1138”},{“a”:“55B2A11694C1”,“b”:“55B29FBA3000”,“o”:“15C64C1”,“s”:“_ZNK5mongo12_GLOBAL__N_117WiredTigerFactory6createEPNS_16OperationContextERKNS_19StorageGlobalParamsEPKNS_21StorageEngineLockFileE”,“s+”:“171”},{“a”:“55B2A1F3AE59”,“b”:“55B29FBA3000”,“o”:“2397E59”,“s”:“_ZN5mongo23initializeStorageEngineEPNS_16OperationContextENS_22StorageEngineInitFlagsE”,“s+”:“419”},{“a”:“55B2A10D2CCD”,“b”:“55B29FBA3000”,“o”:“152FCCD”,“s”:“_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi.isra.1896”,“s+”:“47D”},{“a”:“55B2A10D564F”,“b”:“55B29FBA3000”,“o”:“153264F”,“s”:“_ZN5mongo11mongod_mainEiPPc”,“s+”:“CDF”},{“a”:“55B2A0F72F0E”,“b”:“55B29FBA3000”,“o”:“13CFF0E”,“s”:“main”,“s+”:“E”},{“a”:“7F741A2A3555”,“b”:“7F741A281000”,“o”:“22555”,“s”:“__libc_start_main”,“s+”:“F5”},{“a”:“55B2A10CFB3E”,“b”:“55B29FBA3000”,“o”:“152CB3E”,“s”:“_start”,“s+”:“29”}],“processInfo”:{“mongodbVersion”:“5.0.6”,“gitVersion”:“212a8dbb47f07427dae194a9c75baec1d81d9259”,“compiledModules”:,“uname”:{“sysname”:“Linux”,“release”:“3.10.0-1160.el7.x86_64”,“version”:“#1 SMP Mon Oct 19 16:18:59 UTC 2020”,“machine”:“x86_64”},“somap”:[{“b”:“55B29FBA3000”,“elfType”:3,“buildId”:“6B144064C4AA51D5B9894904F22879DB438E9C3B”},{“b”:“7F741A64F000”,“path”:“/lib64/libpthread.so.0”,“elfType”:3,“buildId”:“2B482B3BAE79DEF4E5BC9791BC6BBDAE0E93E359”},{“b”:“7F741A281000”,“path”:“/lib64/libc.so.6”,“elfType”:3,“buildId”:“F9FAFDE281E0E0E2AF45911AD0FA115B64C2CEA8”}]}}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, 
“ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A3A23FA5”,“b”:“55B29FBA3000”,“o”:“3E80FA5”,“s”:“_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.357”,“s+”:“215”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A3A26A39”,“b”:“55B29FBA3000”,“o”:“3E83A39”,“s”:“_ZN5mongo15printStackTraceEv”,“s+”:“29”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A3A1F076”,“b”:“55B29FBA3000”,“o”:“3E7C076”,“s”:“abruptQuit”,“s+”:“66”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7F741A65E630”,“b”:“7F741A64F000”,“o”:“F630”,“s”:“_L_unlock_13”,“s+”:“34”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7F741A2B7387”,“b”:“7F741A281000”,“o”:“36387”,“s”:“gsignal”,“s+”:“37”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7F741A2B8A78”,“b”:“7F741A281000”,“o”:“37A78”,“s”:“abort”,“s+”:“148”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A0F5FBAB”,“b”:“55B29FBA3000”,“o”:“13BCBAB”,“s”:“_ZN5mongo25fassertFailedWithLocationEiPKcj”,“s+”:“F6”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A0A584AC”,“b”:“55B29FBA3000”,“o”:“EB54AC”,“s”:“_ZN5mongo12_GLOBAL__N_141mdb_handle_error_with_startup_suppressionEP18__wt_event_handlerP12__wt_sessioniPKc.cold.1216”,“s+”:“16”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A1260B63”,“b”:“55B29FBA3000”,“o”:“16BDB63”,“s”:“__eventv”,“s+”:“403”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A0A6AD89”,“b”:“55B29FBA3000”,“o”:“EC7D89”,“s”:“__wt_panic_func”,“s+”:“114”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A0A6487D”,“b”:“55B29FBA3000”,“o”:“EC187D”,“s”:“__wt_turtle_read.cold.7”,“s+”:“4C”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A1228B24”,“b”:“55B29FBA3000”,“o”:“1685B24”,“s”:“__wt_turtle_validate_version”,“s+”:“234”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A11DC49D”,“b”:“55B29FBA3000”,“o”:“163949D”,“s”:“wiredtiger_open”,“s+”:“2B9D”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A11875A9”,“b”:“55B29FBA3000”,“o”:“15E45A9”,“s”:“ZN5mongo18WiredTigerKVEngine15_openWiredTigerERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8”,“s+”:“B9”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, 
“ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A1192AA8”,“b”:“55B29FBA3000”,“o”:“15EFAA8”,“s”:“_ZN5mongo18WiredTigerKVEngineC2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mmbbbb”,“s+”:“1138”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A11694C1”,“b”:“55B29FBA3000”,“o”:“15C64C1”,“s”:“_ZNK5mongo12_GLOBAL__N_117WiredTigerFactory6createEPNS_16OperationContextERKNS_19StorageGlobalParamsEPKNS_21StorageEngineLockFileE”,“s+”:“171”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A1F3AE59”,“b”:“55B29FBA3000”,“o”:“2397E59”,“s”:“_ZN5mongo23initializeStorageEngineEPNS_16OperationContextENS_22StorageEngineInitFlagsE”,“s+”:“419”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A10D2CCD”,“b”:“55B29FBA3000”,“o”:“152FCCD”,“s”:“_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi.isra.1896”,“s+”:“47D”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A10D564F”,“b”:“55B29FBA3000”,“o”:“153264F”,“s”:“_ZN5mongo11mongod_mainEiPPc”,“s+”:“CDF”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A0F72F0E”,“b”:“55B29FBA3000”,“o”:“13CFF0E”,“s”:“main”,“s+”:“E”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7F741A2A3555”,“b”:“7F741A281000”,“o”:“22555”,“s”:“__libc_start_main”,“s+”:“F5”}}}\n{“t”:{“$date”:“2022-09-26T08:36:02.570+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55B2A10CFB3E”,“b”:“55B29FBA3000”,“o”:“152CB3E”,“s”:“_start”,“s+”:“29”}}}node2:{“t”:{“$date”:“2022-09-26T08:37:05.088+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:20698, “ctx”:“-”,“msg”:“***** SERVER RESTARTED **“}\n{“t”:{”$date\":“2022-09-26T08:37:05.089+03:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“main”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.090+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“main”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{“$date”:“2022-09-26T08:37:05.093+03:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-09-26T08:37:05.093+03:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{“$date”:“2022-09-26T08:37:05.168+03:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-09-26T08:37:05.169+03:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-09-26T08:37:05.169+03:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“ns”:“config.tenantMigrationDonors”}}\n{“t”:{“$date”:“2022-09-26T08:37:05.169+03:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“ns”:“config.tenantMigrationRecipients”}}\n{“t”:{“$date”:“2022-09-26T08:37:05.169+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{“$date”:“2022-09-26T08:37:05.169+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:122834,“port”:27017,“dbPath”:“/opt/mongo”,“architecture”:“64-bit”,“host”:“hostname”}}\n{“t”:{“$date”:“2022-09-26T08:37:05.169+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“5.0.6”,“gitVersion”:“212a8dbb47f07427dae194a9c75baec1d81d9259”,“openSSLVersion”:“OpenSSL 1.0.1e-fips 11 Feb 2013”,“modules”:[],“allocator”:“tcmalloc”,“environment”:{“distmod”:“rhel70”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.169+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“CentOS Linux release 7.9.2009 (Core)”,“version”:“Kernel 3.10.0-1160.el7.x86_64”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.169+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“config”:“/etc/mongod.conf”,“net”:{“bindIp”:“127.0.0.1,hostname”,“port”:27017},“processManagement”:{“fork”:true,“pidFilePath”:“/var/run/mongodb/mongod.pid”,“timeZoneInfo”:“/usr/share/zoneinfo”},“replication”:{“replSetName”:“eirs”},“security”:{“authorization”:“enabled”,“keyFile”:“/opt/mongo/mongokeyfile”},“storage”:{“dbPath”:“/opt/mongo”,“journal”:{“enabled”:true}},“systemLog”:{“destination”:“file”,“logAppend”:true,“path”:“/var/log/mongodb/mongod.log”}}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.170+03:00”},“s”:“W”, “c”:“STORAGE”, “id”:22271, “ctx”:“initandlisten”,“msg”:“Detected unclean shutdown - Lock file is not empty”,“attr”:{“lockFile”:“/opt/mongo/mongod.lock”}}\n{“t”:{“$date”:“2022-09-26T08:37:05.170+03:00”},“s”:“I”, “c”:“STORAGE”, “id”:22270, “ctx”:“initandlisten”,“msg”:“Storage engine to use detected by data files”,“attr”:{“dbpath”:“/opt/mongo”,“storageEngine”:“wiredTiger”}}\n{“t”:{“$date”:“2022-09-26T08:37:05.170+03:00”},“s”:“W”, “c”:“STORAGE”, “id”:22302, “ctx”:“initandlisten”,“msg”:“Recovering data from the last clean checkpoint.”}\n{“t”:{“$date”:“2022-09-26T08:37:05.170+03:00”},“s”:“I”, “c”:“STORAGE”, “id”:22297, “ctx”:“initandlisten”,“msg”:“Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem\",“tags”:[\"startupWarnings”]}\n{“t”:{“$date”:“2022-09-26T08:37:05.170+03:00”},“s”:“I”, “c”:“STORAGE”, “id”:22315, “ctx”:“initandlisten”,“msg”:“Opening WiredTiger”,“attr”:{“config”:“create,cache_size=15527M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],”}}\n{“t”:{“$date”:“2022-09-26T08:37:05.674+03:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:-31809,“message”:“[1664170625:674753][122834:0x7f77fd5a8bc0], connection: __wt_turtle_read, 391: WiredTiger.turtle: fatal turtle file read error: WT_TRY_SALVAGE: database corruption detected”}}\n{“t”:{“$date”:“2022-09-26T08:37:05.674+03:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:-31804,“message”:“[1664170625:674804][122834:0x7f77fd5a8bc0], connection: __wt_turtle_read, 391: the process must exit and restart: WT_PANIC: WiredTiger library panic”}}\n{“t”:{“$date”:“2022-09-26T08:37:05.674+03:00”},“s”:“F”, “c”:“-”, “id”:23089, “ctx”:“initandlisten”,“msg”:“Fatal assertion”,“attr”:{“msgid”:50853,“file”:“src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp”,“line”:538}}\n{“t”:{“$date”:“2022-09-26T08:37:05.674+03:00”},“s”:“F”, “c”:“-”, “id”:23090, “ctx”:“initandlisten”,“msg”:\"\\n\\naborting after fassert() failure\\n\\n”}\n{“t”:{“$date”:“2022-09-26T08:37:05.674+03:00”},“s”:“F”, “c”:“CONTROL”, “id”:4757800, “ctx”:“initandlisten”,“msg”:“Writing fatal message”,“attr”:{“message”:“Got signal: 6 (Aborted).\\n”}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31380, 
“ctx”:“initandlisten”,“msg”:“BACKTRACE”,“attr”:{“bt”:{“backtrace”:[{“a”:“55FBFEAF2FA5”,“b”:“55FBFAC72000”,“o”:“3E80FA5”,“s”:“_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.357”,“s+”:“215”},{“a”:“55FBFEAF5A39”,“b”:“55FBFAC72000”,“o”:“3E83A39”,“s”:“_ZN5mongo15printStackTraceEv”,“s+”:“29”},{“a”:“55FBFEAEE076”,“b”:“55FBFAC72000”,“o”:“3E7C076”,“s”:“abruptQuit”,“s+”:“66”},{“a”:“7F77FBAFC630”,“b”:“7F77FBAED000”,“o”:“F630”,“s”:“_L_unlock_13”,“s+”:“34”},{“a”:“7F77FB755387”,“b”:“7F77FB71F000”,“o”:“36387”,“s”:“gsignal”,“s+”:“37”},{“a”:“7F77FB756A78”,“b”:“7F77FB71F000”,“o”:“37A78”,“s”:“abort”,“s+”:“148”},{“a”:“55FBFC02EBAB”,“b”:“55FBFAC72000”,“o”:“13BCBAB”,“s”:“_ZN5mongo25fassertFailedWithLocationEiPKcj”,“s+”:“F6”},{“a”:“55FBFBB274AC”,“b”:“55FBFAC72000”,“o”:“EB54AC”,“s”:“_ZN5mongo12_GLOBAL__N_141mdb_handle_error_with_startup_suppressionEP18__wt_event_handlerP12__wt_sessioniPKc.cold.1216”,“s+”:“16”},{“a”:“55FBFC32FB63”,“b”:“55FBFAC72000”,“o”:“16BDB63”,“s”:“__eventv”,“s+”:“403”},{“a”:“55FBFBB39D89”,“b”:“55FBFAC72000”,“o”:“EC7D89”,“s”:“__wt_panic_func”,“s+”:“114”},{“a”:“55FBFBB3387D”,“b”:“55FBFAC72000”,“o”:“EC187D”,“s”:“__wt_turtle_read.cold.7”,“s+”:“4C”},{“a”:“55FBFC2F7B24”,“b”:“55FBFAC72000”,“o”:“1685B24”,“s”:“__wt_turtle_validate_version”,“s+”:“234”},{“a”:“55FBFC2AB49D”,“b”:“55FBFAC72000”,“o”:“163949D”,“s”:“wiredtiger_open”,“s+”:“2B9D”},{“a”:“55FBFC2565A9”,“b”:“55FBFAC72000”,“o”:“15E45A9”,“s”:“ZN5mongo18WiredTigerKVEngine15_openWiredTigerERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8”,“s+”:“B9”},{“a”:“55FBFC261AA8”,“b”:“55FBFAC72000”,“o”:“15EFAA8”,“s”:“_ZN5mongo18WiredTigerKVEngineC2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mmbbbb”,“s+”:“1138”},{“a”:“55FBFC2384C1”,“b”:“55FBFAC72000”,“o”:“15C64C1”,“s”:“_ZNK5mongo12_GLOBAL__N_117WiredTigerFactory6createEPNS_16OperationContextERKNS_19StorageGlobalParamsEPKNS_21StorageEngineLockFileE”,“s+”:“171”},{“a”:“55FBFD009E59”,“b”:“55FBFAC72000”,“o”:“2397E59”,“s”:“_ZN5mongo23initializeStorageEngineEPNS_16OperationContextENS_22StorageEngineInitFlagsE”,“s+”:“419”},{“a”:“55FBFC1A1CCD”,“b”:“55FBFAC72000”,“o”:“152FCCD”,“s”:“_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi.isra.1896”,“s+”:“47D”},{“a”:“55FBFC1A464F”,“b”:“55FBFAC72000”,“o”:“153264F”,“s”:“_ZN5mongo11mongod_mainEiPPc”,“s+”:“CDF”},{“a”:“55FBFC041F0E”,“b”:“55FBFAC72000”,“o”:“13CFF0E”,“s”:“main”,“s+”:“E”},{“a”:“7F77FB741555”,“b”:“7F77FB71F000”,“o”:“22555”,“s”:“__libc_start_main”,“s+”:“F5”},{“a”:“55FBFC19EB3E”,“b”:“55FBFAC72000”,“o”:“152CB3E”,“s”:“_start”,“s+”:“29”}],“processInfo”:{“mongodbVersion”:“5.0.6”,“gitVersion”:“212a8dbb47f07427dae194a9c75baec1d81d9259”,“compiledModules”:,“uname”:{“sysname”:“Linux”,“release”:“3.10.0-1160.el7.x86_64”,“version”:“#1 SMP Mon Oct 19 16:18:59 UTC 2020”,“machine”:“x86_64”},“somap”:[{“b”:“55FBFAC72000”,“elfType”:3,“buildId”:“6B144064C4AA51D5B9894904F22879DB438E9C3B”},{“b”:“7F77FBAED000”,“path”:“/lib64/libpthread.so.0”,“elfType”:3,“buildId”:“2B482B3BAE79DEF4E5BC9791BC6BBDAE0E93E359”},{“b”:“7F77FB71F000”,“path”:“/lib64/libc.so.6”,“elfType”:3,“buildId”:“F9FAFDE281E0E0E2AF45911AD0FA115B64C2CEA8”}]}}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, 
“ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFEAF2FA5”,“b”:“55FBFAC72000”,“o”:“3E80FA5”,“s”:“_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.357”,“s+”:“215”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFEAF5A39”,“b”:“55FBFAC72000”,“o”:“3E83A39”,“s”:“_ZN5mongo15printStackTraceEv”,“s+”:“29”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFEAEE076”,“b”:“55FBFAC72000”,“o”:“3E7C076”,“s”:“abruptQuit”,“s+”:“66”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7F77FBAFC630”,“b”:“7F77FBAED000”,“o”:“F630”,“s”:“_L_unlock_13”,“s+”:“34”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7F77FB755387”,“b”:“7F77FB71F000”,“o”:“36387”,“s”:“gsignal”,“s+”:“37”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7F77FB756A78”,“b”:“7F77FB71F000”,“o”:“37A78”,“s”:“abort”,“s+”:“148”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFC02EBAB”,“b”:“55FBFAC72000”,“o”:“13BCBAB”,“s”:“_ZN5mongo25fassertFailedWithLocationEiPKcj”,“s+”:“F6”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFBB274AC”,“b”:“55FBFAC72000”,“o”:“EB54AC”,“s”:“_ZN5mongo12_GLOBAL__N_141mdb_handle_error_with_startup_suppressionEP18__wt_event_handlerP12__wt_sessioniPKc.cold.1216”,“s+”:“16”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFC32FB63”,“b”:“55FBFAC72000”,“o”:“16BDB63”,“s”:“__eventv”,“s+”:“403”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFBB39D89”,“b”:“55FBFAC72000”,“o”:“EC7D89”,“s”:“__wt_panic_func”,“s+”:“114”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFBB3387D”,“b”:“55FBFAC72000”,“o”:“EC187D”,“s”:“__wt_turtle_read.cold.7”,“s+”:“4C”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFC2F7B24”,“b”:“55FBFAC72000”,“o”:“1685B24”,“s”:“__wt_turtle_validate_version”,“s+”:“234”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFC2AB49D”,“b”:“55FBFAC72000”,“o”:“163949D”,“s”:“wiredtiger_open”,“s+”:“2B9D”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFC2565A9”,“b”:“55FBFAC72000”,“o”:“15E45A9”,“s”:“ZN5mongo18WiredTigerKVEngine15_openWiredTigerERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8”,“s+”:“B9”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, 
“ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFC261AA8”,“b”:“55FBFAC72000”,“o”:“15EFAA8”,“s”:“_ZN5mongo18WiredTigerKVEngineC2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mmbbbb”,“s+”:“1138”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFC2384C1”,“b”:“55FBFAC72000”,“o”:“15C64C1”,“s”:“_ZNK5mongo12_GLOBAL__N_117WiredTigerFactory6createEPNS_16OperationContextERKNS_19StorageGlobalParamsEPKNS_21StorageEngineLockFileE”,“s+”:“171”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFD009E59”,“b”:“55FBFAC72000”,“o”:“2397E59”,“s”:“_ZN5mongo23initializeStorageEngineEPNS_16OperationContextENS_22StorageEngineInitFlagsE”,“s+”:“419”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFC1A1CCD”,“b”:“55FBFAC72000”,“o”:“152FCCD”,“s”:“_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi.isra.1896”,“s+”:“47D”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFC1A464F”,“b”:“55FBFAC72000”,“o”:“153264F”,“s”:“_ZN5mongo11mongod_mainEiPPc”,“s+”:“CDF”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFC041F0E”,“b”:“55FBFAC72000”,“o”:“13CFF0E”,“s”:“main”,“s+”:“E”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7F77FB741555”,“b”:“7F77FB71F000”,“o”:“22555”,“s”:“__libc_start_main”,“s+”:“F5”}}}\n{“t”:{“$date”:“2022-09-26T08:37:05.806+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“initandlisten”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“55FBFC19EB3E”,“b”:“55FBFAC72000”,“o”:“152CB3E”,“s”:“_start”,“s+”:“29”}}}", "username": "Kadir_USTUN" }, { "code": "", "text": "For a corrupted replica restore is the only option\nPlease take advice from Stennie and other experts before you do anything\nAlso check this link", "username": "Ramachandra_Tummala" }, { "code": "{“t”:{\"$date\":“2022-09-26T08:36:02.439+03:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:-31809,“message”:\"[1664170562:439217][94507:0x7f741c10abc0], connection: __wt_turtle_read, 391: WiredTiger.turtle: fatal turtle file read error: WT_TRY_SALVAGE: database corruption detected\"}}\n{“t”:{\"$date\":“2022-09-26T08:36:02.439+03:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:-31804,“message”:\"[1664170562:439270][94507:0x7f741c10abc0], connection: __wt_turtle_read, 391: the process must exit and restart: WT_PANIC: WiredTiger library panic\"}}\n{“t”:{\"$date\":“2022-09-26T08:37:05.674+03:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:-31809,“message”:\"[1664170625:674753][122834:0x7f77fd5a8bc0], connection: __wt_turtle_read, 391: WiredTiger.turtle: fatal turtle file read error: WT_TRY_SALVAGE: database corruption detected\"}}\n{“t”:{\"$date\":“2022-09-26T08:37:05.674+03:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:-31804,“message”:\"[1664170625:674804][122834:0x7f77fd5a8bc0], connection: __wt_turtle_read, 391: 
the process must exit and restart: WT_PANIC: WiredTiger library panic\"}}\nWiredTiger.turtle: fatal turtle file read error: WT_TRY_SALVAGE: database corruption detected\nWiredTiger.turtlemongod ", "text": "Hi @Kadir_USTUNI agree with @Ramachandra_Tummala 's assessment that restoring from backup is probably the best way forward.However I’m curious about one thing. Here’s the error message from node 1:and here’s the error message from node 2:It strikes me as odd that both of them seem to have an identical error:I understand that this is a PSA setup, but I noticed that the two secondaries have the exact same error. Note that the file WiredTiger.turtle is a vital file, so WiredTiger is very, very careful in handling this file in particular.To have the same error of this magnitude at the same time on two different nodes is so highly unlikely that there may be other reason behind this. How are you deploying the mongod processes? Are they sharing disk, CPU, or something? What’s the spec of the deployment hardware/architecture?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin,We have the same thoughts. 2 mongod process working with 2 seperate rugged pc. And 2 rugged pc has own local disc. I suspect discs. Discs, CPU or everything are seperate. The corruption on the 1st server somehow breaks the 2nd server as well.I agree with @Ramachandra_Tummala 's assessment that restoring from backup is probably the best way forward.This is how we are going now.\nCorruption happens to often \nI tried --repair option before open a ticket here, but i didn’t found any solution.Out disc is ext4 and mongodb is strongly recommended to use xfs file system. You think could this be the problem?I’m working a lot of project with mongo. We are using virtual machine for mongo and the disc coming from storage. I didn’t have any problems with those projects.there may be other reason behind thisYes agree but what is the problem? I need to find problem. Maybe hardware problem. If I can prove it’s a hardware problem, I can request a hardware replacement.Do you have any idea ?\nThanks.", "username": "Kadir_USTUN" }, { "code": "", "text": "The corruption on the 1st server somehow breaks the 2nd server as well.Yes this is a strange issue. MongoDB replication works logically instead of physically, and each node manage their own storage, so any physical level corruption won’t be replicated to the other nodes. For this to happen to two separate nodes at the same time on the same error is highly unlikely, and I would perhaps consider a hardware issue is at play here.Out disc is ext4 and mongodb is strongly recommended to use xfs file system. You think could this be the problem?As far as I am aware, Ext4 has issues with performance in the early days, but not corruption. However at this point I think there’s no harm in trying XFS since you’re restoring from backup anyway In the meantime, you might want to double check that your deployment follows the recommendations in the production notes and operations checklist.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin,Actually i read this articles \nBtw I was using mongodb 4.4.6 version and i had same problem. But in the mongodb releases notes for 4.4.6 , theew was an error like belowMongoDB version 4.4.6 is not recommended for production use due to critical issue WT-7995, fixed in later versions. 
Use the latest available patch release version.So I upgraded my MongoDB version to 5.0.6 and thought I had resolved the problem.https://jira.mongodb.org/browse/WT-7995Thanks.", "username": "Kadir_USTUN" }, { "code": "", "text": "@kevinadi do you have any idea?", "username": "Kadir_USTUN" }, { "code": "WiredTiger.turtle", "text": "Hi @Kadir_USTUN,Yes, MongoDB 4.4.2 - 4.4.8 is no longer recommended for production usage due to the issues you mentioned. However, I don’t think this is the cause of the WiredTiger.turtle error you’re seeing. The turtle file is metadata for the main WiredTiger metadata file (see WiredTiger: Metadata for a complete explanation of what the turtle file contains).In short, it’s very peculiar that two nodes have the exact same complaint about turtle file corruption. Typically these errors are generated if the hardware is having issues, but there’s not much more knowledge we can get from the logs themselves.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi,I tried this scenario on another workstation and some VMs (HP Z4). Like you said, we didn’t see any error when the power was cut. After restarting the machines, mongod started automatically and without error.But when we cut power to our rugged PC, the 2 mongod instances couldn’t start.\nSo we decided that the problem is with our rugged PC. We will try to change our hard disk.Thank you.", "username": "Kadir_USTUN" } ]
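For reference, a rough outline of the usual “wipe and initial sync” recovery for a single corrupted secondary, assuming at least one healthy member still holds the data. The paths and service name below follow the mongod.conf shown in the logs (dbPath /opt/mongo, keyFile inside it); treat this as an illustrative sketch, not a verified procedure for this deployment:

```sh
# On the corrupted member only – never on the last healthy copy of the data.
sudo systemctl stop mongod

# Move the damaged dbPath aside instead of deleting it outright.
sudo mv /opt/mongo /opt/mongo.corrupt.$(date +%F)
sudo mkdir /opt/mongo
sudo chown mongod:mongod /opt/mongo

# In this config the keyFile lives inside the dbPath, so copy it back.
sudo cp /opt/mongo.corrupt.*/mongokeyfile /opt/mongo/
sudo chown mongod:mongod /opt/mongo/mongokeyfile
sudo chmod 400 /opt/mongo/mongokeyfile

# Restart; the member re-joins the replica set and performs an initial sync
# from a healthy member (or restore a recent backup into /opt/mongo first).
sudo systemctl start mongod
```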
Mongodb replicaset corruption when electric is gone
2022-09-23T15:22:25.033Z
Mongodb replicaset corruption when electric is gone
3,253
null
[ "aggregation", "queries", "crud" ]
[ { "code": "{\n \"_id\": \"633360536b4cab132e2fc218\",\n \"username\": \"Alberto Silva\",\n \"role\": \"player\",\n \"questionList\": [\n {\n \"questionid\": \"A0\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"not-attempted\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n {\n \"questionid\": \"A1\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"not-attempted\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n {\n \"questionid\": \"A2\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"not-attempted\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n ],\n \"team\": \"0\"\n}\ndb.users.updateMany(\n { \n\t\trole: \"player\"\n\t}, \n [\n\t{ \n\t\t$set: {\"questionList.$[tocount].availablequestions\": { $count: {\"$questionList.$[tocount].questionList\" : \"not-attempted\"} }} \n\t},\n\t{ \n\t\t$set: {\"questionList.$[index].status\": \"assigned\"} \n\t},\n\t],\n\t{\n arrayFilters: [\n {\n \"elem.tocount\": {\n $eq: \"not-attempted\"\n },\n\t\t \"index._(somehow_get_one_pipelinearray_index_randomly)\": {\n $floor: { \n\t\t\t\t\t\t$multiply: [ { $rand: {} }, \"$questionList.availablequestions\" ] \n\t\t\t\t} \n },\n } \n ],\n }\n\t\n)\n", "text": "Hi everyone,I’m fascinated, studying MongoDB. But still struggling with query sintaxes and pipelines. I’m implementing it a project. Almost everything I was able to solve by myself, but for a harder task, tried for many hour to code the following query, but with no success. So I came here to ask for help. I will give the details:When I began to build this query, it was working, but when I try to use any operator or subfunction in pipeline(count,sort,rand,sample etc), it returns me a sort of errors. Maybe syntax, or maybe there a simpler way to do this, idk…Here is a document structure:And here is the current buggy / incomplete query:Can anyone give me a light on how to proceed on it?\nIf someone could give me insights on how to do that, It would be appreciated.Thanks", "username": "Fabio_Iwano" }, { "code": "[\n {\n \"_id\": \"633360536b4cab132e2fc277\",\n \"username\": \"Alberto Silva\",\n \"role\": \"player\",\n \"questionList\": [\n {\n \"questionid\": \"A0\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"not-attempted\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n {\n \"questionid\": \"A1\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"not-attempted\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n {\n \"questionid\": \"A2\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"not-attempted\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n \n ],\n \"team\": \"3\"\n },\n {\n \"_id\": \"633360536b4cab132e2fc218\",\n \"username\": \"Joana Oliveira\",\n \"role\": \"player\",\n \"questionList\": [\n {\n \"questionid\": \"A0\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"not-attempted\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n {\n \"questionid\": \"A1\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"not-attempted\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n {\n \"questionid\": \"A2\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"not-attempted\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n \n ],\n \"team\": \"0\"\n },\n {\n \"_id\": \"633360536b4cab132e2fc215\",\n \"username\": \"Renato Silvestre\",\n 
\"role\": \"player\",\n \"questionList\": [\n {\n \"questionid\": \"A0\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"not-attempted\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n {\n \"questionid\": \"A1\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"not-attempted\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n {\n \"questionid\": \"A2\",\n \"time\": null,\n \"answered\": null,\n \"score\": null,\n \"status\": \"failed\",\n \"startDate\": null,\n \"respondedDate\": null\n },\n \n ],\n \"team\": \"0\"\n },\n \n]\ndb.collection.aggregate([\n {\n \"$addFields\": {\n \"availableQuestions\": \"$questionList\"\n }\n },\n {\n $unwind: \"$availableQuestions\"\n },\n {\n $group: {\n _id: \"$_id\",\n name: {\n $first: \"$username\"\n },\n availableQuestions: {\n $push: {\n $cond: {\n if: {\n $eq: [\n \"$availableQuestions.status\",\n \"not-attempted\"\n ]\n },\n then: \"$availableQuestions.questionid\",\n else: \"$$REMOVE\"\n },\n \n }\n },\n availableCount: {\n $sum: {\n $cond: {\n if: {\n $eq: [\n \"$questionList.status\",\n \"not-attempted\"\n ]\n },\n then: 1,\n else: \"$$REMOVE\"\n }\n }\n },\n \n },\n \n },\n {\n $group: {\n _id: \"$_id\",\n name: {\n $first: \"$name\"\n },\n questionList: {\n $push: {\n questionid: {\n $arrayElemAt: [\n \"$availableQuestions\",\n {\n $round: {\n $multiply: [\n {\n $rand: {}\n },\n {\n $subtract: [\n {\n $add: \"$availableCount\"\n },\n 1\n ]\n }\n ]\n }\n }\n ]\n },\n status: \"assigned\"\n }\n }\n }\n }\n])\n", "text": "Hi,Some update here,\nI’ve changed my strategy a little, by trying to do this task with aggregate() instead of updateMany() & pipelines. By grouping, isolating and randomizing these nested data, temporary adding fields and then choosing one “questionid” for each player and then, merge it’s output with questionList.Well, I guess there are more effective / clean ways to do that, but here it goes:Sample data:And here it’s my current progress:Test link hereLast part I’m working on is to merge that output in questionList, to update this field to each user without screw up current data.If someone could give me a light on that, or have a more effective way to do this, I would appreciate it.", "username": "Fabio_Iwano" }, { "code": "db.test.aggregate([\n {$set: {\n questionList: {\n $function: {\n body: function(items) {\n let filtered = items.filter(x => x.status == 'not-attempted')\n if (filtered.length == 0) { return items }\n let picked = Math.floor(Math.random()*filtered.length);\n let idx = items.findIndex(z => z.questionid == filtered[picked]['questionid'])\n items[idx]['status'] = 'assigned'\n return items\n },\n args: ['$questionList'],\n lang: 'js'\n }\n }\n }}\n])\n$sample", "text": "Hi @Fabio_Iwano and welcome to the community!!The feature to randomly select an array element in a document is currently not available in MongoDB. 
However, you can select a document using the $sample.Currently, to select a random element, one way would be to use $function in the following way:Please note that this code is untested and serves as an illustration only, so it may not do what you need it to do.However, the easiest way to do this currently is perhaps to do the operation on the application side and push the resulting changes to the database.I would also like to mention that if the primary purpose of the collection is to select a random question, perhaps modifying the schema design to be one question per document would be easier in the long run, since you can use $sample to do the random selection, and you would not need to maintain a complex aggregation pipeline.\nThat would let $sample handle the selection, and the readability and maintenance burden of the complex aggregation could be avoided.Let us know if you have any thoughts on the same.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "I would like to propose something completely different. Something that does not answer your question. But something that might simplify your problem.From the description of your use case, my solution would be that, rather than selecting a random question for each player from its unanswered questions every day, you: 1. create the list randomly when the user is created\n2. keep 2 arrays, asked and unanswered\n3. every day you move the first unanswered question into asked, and this is the question to ask for the dayThis way the order is randomly predetermined for each player when the player is created, so only once. Selecting the question of the day becomes trivial as you simply $pull and $push for each player rather than doing some complicated aggregation every day for each player. The complicated stuff is all done at the beginning and only once.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
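A minimal mongosh sketch of the asked/unanswered approach described above — illustrative only, since the `asked`/`unanswered` field names and the pre-shuffled list are assumptions rather than part of the original schema:

```js
// Pick the next pre-shuffled question for one player and move it from
// "unanswered" to "asked" (field names are hypothetical).
const playerId = "633360536b4cab132e2fc218"; // an _id from the sample data above

const player = db.users.findOne(
  { _id: playerId },
  { unanswered: { $slice: 1 } } // only need the first pending question
);
const next = player.unanswered[0];

db.users.updateOne(
  { _id: playerId },
  {
    $pull: { unanswered: next }, // remove it from the pending list
    $push: { asked: next }       // record it as today's question
  }
);
```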
Updatemany for each document, but for each document, set value in just one nested object randomly
2022-09-27T20:49:52.822Z
Updatemany for each document, but for each document, set value in just one nested object randomly
1,629
null
[ "aggregation", "java" ]
[ { "code": "", "text": "db.collection.aggregate([\n{\n$match : { filterQuery}\n},\n{\n$addFields :{ “customGradeOrder” : { $indexOfArray: [ [“Gold”, “Silver”, “Bronze”] , “$grade” ] }}\n},{$sort : { customGradeOrder : 1 } }\n]);Looking for reference to mongo java driver which allows to create add fields for preceding expression.", "username": "Madhav_kumar_Jha" }, { "code": "addFields = new Document( \"$addFields\" ,\n new Document( \"customeGradeOrder\" ,\n new Document( \"$indexOfArray\" , Arrays.asList( Arrays.asList( \"Gold\", \"Silver\", \"Bronze\" ) , \"$grade\" ) ) ) ) ;\n", "text": "I think you can simply do:", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregate add fields with array expression in mongo java driver
2022-10-03T06:35:37.345Z
Aggregate add fields with array expression in mongo java driver
1,889
null
[ "atlas-functions" ]
[ { "code": "", "text": "hiI like to move my stripe-webhook to mongodb. Currently I have a nodejs server where I can get the raw body from the request like (because stripe needs the raw request body)app.post(’/webhook’, bodyParser.raw({ type: ‘application/json’ }), (request, response)Is it possible to get the raw body request somehow in a mongodb webhook?thx", "username": "rouuuge" }, { "code": "", "text": "Do these docs help - https://docs.mongodb.com/realm/functions/json-and-bson/ ?\nSounds like you may want to use one of the parse functions.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "hm sadly not. Non of them were successfully.the raw body on my nodejs application was like:<Buffer 7b 0a 20 20 22 63 72 65 61 74 65 64 22 3a 20 31 33 32 36 38 35 33 34 37 38 2c 0a 20 20 22…and stripe error is like:“Webhook Error: No signatures found matching the expected signature for payload. Are you passing the raw request body you received from Stripe? GitHub - stripe/stripe-node: Node.js library for the Stripe API.”", "username": "rouuuge" }, { "code": "", "text": "Hi @rouuuge.For some general examples of working with Stripe from Realm. you could take a look at this eCommerce app that I built a while back: GitHub - mongodb-appeng/eCommerce-Realm: The backend portion of the MongoDB eCommerce reference app For example, this function creates a Stripe checkout session: eCommerce-Realm/source.js at main · mongodb-appeng/eCommerce-Realm · GitHubThe frontend (Vue.js) app is here: GitHub - mongodb-appeng/eCommerce: An example eCommerce store built on MongoDB Atlas and MongoDB Stitch", "username": "Andrew_Morgan" }, { "code": "", "text": "hi andrew, thx for the example! Good to know that at least the stripe api seems to work! But for me its not really a option, because I use the UI-Forms directly from Stripe directly: Stripe Checkout | Stripe Documentationto get data into mongodb I need their webhook. Of course I could run a own nodejs server like now. But I would like ot move as much code as possible to the same place.", "username": "rouuuge" }, { "code": "", "text": "Did anyone know how to get the raw request body?Stripe Webhook does need it, I have the same error as @rouuuge“Webhook Error: No signatures found matching the expected signature for payload. Are you passing the raw request body you received from Stripe? https://github.com/stripe/stripe-node#webhook-signing ”", "username": "andrefelipe" }, { "code": "app.use(express.json());\napp.use(express.urlencoded());\nexpress.json()express.raw({ type: 'application/json'})/webhook/webhookexpress.json()app.post('/webhook', express.raw({ type: 'application/json' }), (req, res) => {\n ...\n});\n\napp.use(express.json());\napp.use(express.urlencoded());\n", "text": "Hi,You need to parse the Stripe webhook event as raw data. Usually, everyone have JSON parser before any router defined:Express executes code from top to bottom. That means that express.json() will be called before express.raw({ type: 'application/json'}) defined in the /webhook endpoint.So, all you have to do is to move /webhook endpoint before defining express.json() parser.", "username": "NeNaD" }, { "code": "", "text": "Thanks, that works for NodeJS directly, sorry I meant on Realm Functions. There looks like no way to get the raw response data on Realm Functions.", "username": "andrefelipe" }, { "code": "", "text": "", "username": "Surender_Kumar" }, { "code": "", "text": "", "username": "clueless_dev" }, { "code": "", "text": "Hi… im having the same problem. 
Were you able to solve this using realm functions?", "username": "Mariano_Cano" } ]
Setup Stripe webhook
2021-02-28T22:40:10.310Z
Setup Stripe webhook
6,803
null
[ "aggregation", "queries", "transactions" ]
[ { "code": "db.transaction.aggregate(\n [{ $match: { \n $and: [ {\n createdAt: { $gte: ISODate('2022-09-15'), $lt:\n ('2022-09-16') } },\n { type: \"CASH_OUT\"}]}},\n {\n $group:\n {\n _id: {createdAt: {$last: \"$createdAt\"}},\n totalAmount: { $sum: \"$postBalance\" },\n \n }\n }\n \n ]\n)\n", "text": "I have list of records with the following fields - postBalance, agentId, createdAt, type. I want to filter by “type” and date. After this is done I want to get the $last postBalance for each agent based on the filter and sum up the postBalance. I have been struggling with this using this.An empty array is returned instead", "username": "Ojo_Ilesanmi" }, { "code": "", "text": "Hi @Ojo_Ilesanmi, and welcome to the MongoDB Community forums! Can you please post some sample documents? This makes it easier for the community members to help you out. Without this we could make assumptions and provide a solution that doesn’t work for you. It’s also helpful to see the output you’re looking for.", "username": "Doug_Duncan" }, { "code": "{\n \"_id\": {\n \"$oid\": \"6334cefd0048787d5535ff16\"\n },\n \"userID\": {\n \"$oid\": \"6307baab9f51747015fdb981\"\n },\n \"aggregatorID\": \"0000375\",\n \"firstName\": \"damola\",\n \"lastName\": \"akinkunmi\",\n \"ref\": \"80573e71-38c3-4243-8660-f6dc8f988f6a\",\n \"transactionID\": \"CLV000AB-2033HRNU-092822472300\",\n \"totalAmount\": {\n \"$numberDecimal\": \"5100.0\"\n },\n \"transactionAmount\": {\n \"$numberDecimal\": \"5100.0\"\n },\n \"transactionFee\": {\n \"$numberDecimal\": \"25.5\"\n },\n \"actionableAmount\": {\n \"$numberDecimal\": \"5074.5\"\n },\n \"aggregatorCut\": {\n \"$numberDecimal\": \"5.1000000000000005\"\n },\n \"cleverCut\": {\n \"$numberDecimal\": \"9.434999999999999\"\n },\n \"type\": \"CASH_OUT\",\n \"responseCode\": \"00\",\n \"responseMessage\": \"APPROVED\",\n \"preBalance\": {\n \"$numberDecimal\": \"18213.125\"\n },\n \"postBalance\": {\n \"$numberDecimal\": \"23287.625\"\n },\n \"walletHistoryID\": 613261,\n \"walletID\": 1809,\n \"walletActionAt\": {\n \"$date\": {\n \"$numberLong\": \"1664405248000\"\n }\n },\n \"provider\": \"xxxxxx\",\n \"slug\": \"/cashout/api/v1/transactions/6334cefd0048787d5535ff16\",\n \"transactionDateTime\": {\n \"$date\": {\n \"$numberLong\": \"1664405248000\"\n }\n },\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1664405245000\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1664405245000\"\n }\n },\n \"businessManager\": \"002\",\n \"rrn\": \"092822472300\",\n \"pan\": \"539983******3741\",\n \"terminalID\": \"CLV000AB\",\n \"agentID\": \"0001787\",\n \"status\": \"COMPLETED\",\n \"debited\": true,\n \"tracked\": false\n}\ndate : 2022-10-09\nCASHOUT : 897663,088,\nFUNDS_TRANSFER: 8900877,\nWALLET_TOP_UP: 8890000\n", "text": "Thanks. Here is is a sample document.I want my output to like thisI look forward to getting help.", "username": "Ojo_Ilesanmi" }, { "code": "FUNDS_TRANSFERWALLET_TOP_UPtypeCASH_OUTpost_balance", "text": "You have only provided a single document example, but from what I can see it doesn’t line up with the data you expect in your output.It would be really useful if you provided multiple documents (only include the fields that are necessary) that cover several groupings with output showing the actual values you expect from the sample documents for the output. From the single document provided and the sample output I can’t figure out where FUNDS_TRANSFER and WALLET_TOP_UP come from. 
You have a value for type that is CASH_OUT, but your output has a field with a similar name, I assume that this field value contains the sum of the post_balance field that you mentioned in the original post, but again us making assumptions leads to frustrations on your part that things don’t work as expected.", "username": "Doug_Duncan" }, { "code": "{\n \"_id\": {\n \"$oid\": \"6334d632eb511a7240a338fc\"\n },\n \"userID\": {\n \"$oid\": \"62580d9fe057e46b9184bbd9\"\n },\n \"aggregatorID\": \"0000231\",\n\n \"type\": \"FUNDS_TRANSFER\",\n \"responseCode\": \"00\",\n \"responseMessage\": \"Successful\",\n \"preBalance\": {\n \"$numberDecimal\": \"112586.39\"\n },\n \"postBalance\": {\n \"$numberDecimal\": \"36566.39\"\n },\n \n \"transactionDateTime\": {\n \"$date\": {\n \"$numberLong\": \"1664410689000\"\n }\n },\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1664407090000\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1664407680000\"\n }\n },\n \"businessManager\": \"\",\n \"agentID\": \"0000665\",\n\n}\n{\n \"_id\": {\n \"$oid\": \"6334d438c1ab8a577677cbf3\"\n },\n \"userID\": {\n \"$oid\": \"62f27bc29f51747015fdb941\"\n },\n \"aggregatorID\": \"0000116\",\n \n \"transactionFee\": {\n \"$numberDecimal\": \"0.0\"\n },\n\n \"type\": \"AIRTIME_VTU\",\n \"postBalance\": {\n \"$numberDecimal\": \"2114.675\"\n },\n \"walletHistoryID\": 613266,\n \"walletID\": 1720,\n \"walletActionAt\": {\n \"$date\": {\n \"$numberLong\": \"1664406584000\"\n }\n },\n\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1664406584000\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1664406584000\"\n }\n },\n\n\n}\n", "text": "type or paste code hereAIRTIME_VTU is same as WALLET_TOP_UP", "username": "Ojo_Ilesanmi" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
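One likely reason for the empty result in the original pipeline is that the $lt bound is a plain string ('2022-09-16') rather than an ISODate, so the range match never succeeds against date-typed createdAt values. Below is an untested sketch of the per-agent "last postBalance" sum being asked about, with field names following the sample document above:

```js
db.transaction.aggregate([
  { $match: {
      type: "CASH_OUT",
      createdAt: { $gte: ISODate("2022-09-15"), $lt: ISODate("2022-09-16") }
  } },
  // Order each agent's transactions so $last really is the latest one.
  { $sort: { agentID: 1, createdAt: 1 } },
  { $group: { _id: "$agentID", lastPostBalance: { $last: "$postBalance" } } },
  // Sum the per-agent closing balances into a single figure.
  { $group: { _id: null, totalAmount: { $sum: "$lastPostBalance" } } }
])
```

Grouping on both agentID and type in the first $group (and on type in the second) would produce one total per transaction type, which is closer to the desired output shown earlier.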
How to use multiple conditions in $match and sum in $group
2022-10-05T17:28:44.246Z
How to use multiple conditions in $match and sum in $group
6,354
null
[ "queries", "connecting", "flexible-sync" ]
[ { "code": "", "text": "Hello everyone,Please don’t bash me for this but, is it possible to sync data between a local device realm and an Amazon Dynamo DB database. I am working on my first major project that is starting as my senior project and planning on ending it as an official product and I don’t know much about databases. I need a fast and reliable local database to keep relevant information for offline situations but need it to be able to efficiently upload any changes when online. If there is a way to sync a realm with a dynamo DB database in a clean and concise fashion, please let me know.Also… If this post is not configured with the correct tags, please net me know. In that case, it is not me purposefully trying todo that but rather me not knowing where and what since this is my first day and first posting.", "username": "Matt_Clark1" }, { "code": "", "text": "Welcome to the MongoDB Community @Matt_Clark1 !Currently the only supported sync solution for Realm is Atlas Device Sync, which integrate with MongoDB Atlas including Atlas App Services like authentication providers, serverless functions, and triggers.Atlas has a free tier providing 512MB of storage and App Services free tier thresholds.As a student you can get some additional resources (including Atlas credits) by signing up for MongoDB for Students which is part of the GitHub Student Developer Pack.Regards,\nStennie", "username": "Stennie_X" } ]
Is it possible to sync data between an offline RealmDB with an AWS DB? If so, how would that work?
2022-10-05T13:32:21.231Z
Is it possible to sync data between an offline RealmDB with an AWS DB? If so, how would that work?
2,211
null
[ "connecting", "atlas-cluster", "php" ]
[ { "code": "<?php\n\nrequire_once dirname( __FILE__ ) . '/vendor/autoload.php';\n\n$client = new MongoDB\\Client(\n 'mongodb+srv://<my-database>:<my-password>@<my-cluster>.lkffbtl.mongodb.net/?retryWrites=true&w=majority');\n\nvar_dump( $client );\n public function __construct($uri = 'mongodb://127.0.0.1/', array $uriOptions = [], array $driverOptions = [])\n {\n $driverOptions += ['typeMap' => self::$defaultTypeMap];\n\n if (! is_array($driverOptions['typeMap'])) {\n throw InvalidArgumentException::invalidType('\"typeMap\" driver option', $driverOptions['typeMap'], 'array');\n }\n\n if (isset($driverOptions['autoEncryption']['keyVaultClient'])) {\n if ($driverOptions['autoEncryption']['keyVaultClient'] instanceof self) {\n $driverOptions['autoEncryption']['keyVaultClient'] = $driverOptions['autoEncryption']['keyVaultClient']->manager;\n } elseif (! $driverOptions['autoEncryption']['keyVaultClient'] instanceof Manager) {\n throw InvalidArgumentException::invalidType('\"keyVaultClient\" autoEncryption option', $driverOptions['autoEncryption']['keyVaultClient'], [self::class, Manager::class]);\n }\n }\n\n $driverOptions['driver'] = $this->mergeDriverInfo($driverOptions['driver'] ?? []);\n\n $this->uri = (string) $uri;\n $this->typeMap = $driverOptions['typeMap'] ?? null;\n\n unset($driverOptions['typeMap']);\n\n $this->manager = new Manager($uri, $uriOptions, $driverOptions);\n $this->readConcern = $this->manager->getReadConcern();\n $this->readPreference = $this->manager->getReadPreference();\n $this->writeConcern = $this->manager->getWriteConcern();\n }\n$this->manager = new Manager($uri, $uriOptions, $driverOptions);", "text": "Hello, I am trying to connect via the PHP driver. I have followed the installation procedures from MongoDB’s PHP Driver documentation regarding pecl install for the extension and composer install of mongodb into the root directory.I have used the below code to launch the php driver and establish the connection to the database (note that I did change the inputs for db, pass, and cluster in the actual code).However, when I run this, I receive the following fatal error:Fatal error : Uncaught Error: Class ‘MongoDB\\Driver\\Manager’ not found in /home3/coradase/public_html/cora-staging/wp-content/plugins/MongoDB/vendor/mongodb/mongodb/src/Client.php:124 Stack trace: #0 /home3/coradase/public_html/cora-staging/wp-content/plugins/MongoDB/conf.php(6): MongoDB\\Client->__construct(‘mongodb+srv://c…’) #1 /home3/coradase/public_html/cora-staging/wp-content/plugins/MongoDB/mongodb.php(28): require_once(’/home3/coradase…’) #2 /home3/coradase/public_html/cora-staging/wp-includes/class-wp-hook.php(307): cora_mongodb_admin_page(’’) #3 /home3/coradase/public_html/cora-staging/wp-includes/class-wp-hook.php(331): WP_Hook->apply_filters(’’, Array) #4 /home3/coradase/public_html/cora-staging/wp-includes/plugin.php(476): WP_Hook->do_action(Array) #5 /home3/coradase/public_html/cora-staging/wp-admin/admin.php(259): do_action(‘toplevel_page_c…’) #6 {main} thrown in /home3/coradase/public_html/cora-staging/wp-content/plugins/MongoDB/vendor/mongodb/mongodb/src/Client.php on line 124Client.php (the file referenced in the error code) is a default file that came with the Composer installation. I have not edited the file. 
The function in Client.php that contains line 124 (referenced in the error) is shown below:For reference, line 124 is:\n$this->manager = new Manager($uri, $uriOptions, $driverOptions);Again, this code comes directly from the composer installation of mongodb, and has not been edited at all.I appreciate any insight form the team or anyone who has had this same problem in trying to debug so that the code will properly establish the connection to the database.Thank you!", "username": "michael_demiceli" }, { "code": "extension=mongodb.so/etc/php/8.1/cli/php.ini", "text": "Did you remember to add the line extension=mongodb.so to the end of your /etc/php/8.1/cli/php.ini or whatever ini file is appropriate to your php installation?", "username": "Jack_Woehr" }, { "code": "", "text": "Hi Jack - yes, I did add the extension line to php.ini.For reference, I am trying to connect MongoDB to a web app running on Wordpress.", "username": "michael_demiceli" }, { "code": "$ php\n<?php phpinfo(); ?>\nmongodb\n\nMongoDB support => enabled\nMongoDB extension version => 1.14.0\nMongoDB extension stability => stable\nlibbson bundled version => 1.22.0\nlibmongoc bundled version => 1.22.0\nlibmongoc SSL => enabled\nlibmongoc SSL library => OpenSSL\nlibmongoc crypto => enabled\nlibmongoc crypto library => libcrypto\nlibmongoc crypto system profile => disabled\nlibmongoc SASL => disabled\nlibmongoc ICU => enabled\nlibmongoc compression => enabled\nlibmongoc compression snappy => disabled\nlibmongoc compression zlib => enabled\nlibmongoc compression zstd => disabled\nlibmongocrypt bundled version => 1.5.0\nlibmongocrypt crypto => enabled\nlibmongocrypt crypto library => libcrypto\n", "text": "Well, what’s happening is that the classes built into the mongodb.so are not being found. Whatever the reason. Try loading php at the command line …Then ctl-D to exit and PHP should spew a lot of configuration info.\nLook for lines like:and if you don’t find them, then the extension is not being loaded.", "username": "Jack_Woehr" }, { "code": "", "text": "Hi\nMy extension have loaded, but get the same error. 
Also, I want to connect to a local db (it seems to be running on 127.0.1.1) - what connection string should I use?", "username": "New_Triangle" }, { "code": "mongodb://user:password", "text": "what connection string should I use?mongodb://user:password should be good enough assuming you really mean 127.0.0.1", "username": "Jack_Woehr" }, { "code": "s connecting without prompt and also located in 127.0.1.1 (I use command mongodb --host 127.0.1.1) And in php file it", "text": "Thanks for the reply, but the problem is still here.\nI use mongodb from another application and it’s connecting without a prompt, also on 127.0.1.1 (I use the command mongodb --host 127.0.1.1). And in the php file it’s:\n$client = new MongoDB\Driver\Manager(‘mongodb://127.0.1.1’);\nI tried to change the file Client.php by manually adding “127.0.1.1”, but it doesn’t work either", "username": "New_Triangle" }, { "code": "$serverApi = new ServerApi(ServerApi::V1); $client = new MongoDB\\Client( 'mongodb+srv://user:<password>@<url>.mongodb.net/?retryWrites=true&w=majority', [], ['serverApi' => $serverApi]); $db = $client->test;", "text": "I’ve just tried to create a free DB on Atlas and I’ve used the string from the docs$serverApi = new ServerApi(ServerApi::V1); $client = new MongoDB\\Client( 'mongodb+srv://user:<password>@<url>.mongodb.net/?retryWrites=true&w=majority', [], ['serverApi' => $serverApi]); $db = $client->test;I have the same problem, so it’s something with the mongodb/compose deployment", "username": "New_Triangle" }, { "code": "", "text": "Just try to reboot the server - it was the solution for me", "username": "New_Triangle" } ]
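A quick, illustrative sanity check that can save time with this class-not-found error: confirm the mongodb extension is actually loaded for the PHP SAPI that runs the failing code, since the web server and the CLI often read different php.ini files. For example:

```php
<?php
// Illustrative check only – run this through the same SAPI (web server or CLI)
// that produces the "Class 'MongoDB\Driver\Manager' not found" error.
var_dump(extension_loaded('mongodb')); // should print bool(true)
var_dump(phpversion('mongodb'));       // extension version, or false if missing
echo php_ini_loaded_file(), PHP_EOL;   // which php.ini this SAPI actually reads
```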
Fatal error: Uncaught Error: Class ‘MongoDB\Driver\Manager’
2022-09-27T16:47:43.871Z
Fatal error: Uncaught Error: Class ‘MongoDB\Driver\Manager’
13,439
null
[ "queries" ]
[ { "code": "db.getCollection('users').createIndex({ email: 1 },{ unique: true, name: \"case_insensitive_email\", partialFilterExpression: { email: {$exists: true} }, collation: { locale: \"en\", strength: 1 } })E11000 duplicate key error collection: userDb.users index: case_insensitive_email collation: { locale: \"en\", caseLevel: false, caseFirst: \"off\", strength: 4, numericOrdering: false, alternate: \"non-ignorable\", maxVariable: \"punct\", normalization: false, backwards: false, version: \"57.1\" } dup key: { email: \"0x3b552943351216140a7a512d4b08312f510114011401\" }", "text": "Good day everyone, I’m facing a problem I hope you guys can help me solve, I have a User collection that has 2 kinds of identifiers, one is email, that users use to connect to our platform and the other is a uniqueid, used for internal purposes, we had a problem of users with duplicated emails because the case was different, we have a unique index for this but it doesn’t guard against the same email with a different case, so I’m trying to implement a unique case insensitive index, so we don’t have this problem anymore, the issue is that for some reason the DB doesn’t let me create the index this is the command I’m usingdb.getCollection('users').createIndex({ email: 1 },{ unique: true, name: \"case_insensitive_email\", partialFilterExpression: { email: {$exists: true} }, collation: { locale: \"en\", strength: 1 } })but when I try to create the index I get this error\nE11000 duplicate key error collection: userDb.users index: case_insensitive_email collation: { locale: \"en\", caseLevel: false, caseFirst: \"off\", strength: 4, numericOrdering: false, alternate: \"non-ignorable\", maxVariable: \"punct\", normalization: false, backwards: false, version: \"57.1\" } dup key: { email: \"0x3b552943351216140a7a512d4b08312f510114011401\" }I’ve checked the email field for duplicates and haven’t found anything, the dup key the error returns doesn’t seem to be in the DB and I haven’t found relevant information about this specific error, if I don’t add the collation option I can create the index but then I’ll still have the issue about email with different case added, can’t work it out in the application layer because a lot of different systems can connect to the DB.\nI hope you can help me and thank you for your time.", "username": "gapinzon" }, { "code": "{ email: \"0x3b552943351216140a7a512d4b08312f510114011401\" }", "text": "If I understand correctly what you wrote, you DO NOT HAVE any document in the collection userDb.users with{ email: \"0x3b552943351216140a7a512d4b08312f510114011401\" }It is strange because it really looks like this is what the error message indicate.It is just a hunch, but if you already have an index on email you might want to drop it before trying to create another one with different options. But I guess that you would get a different error message if that was the case. 
But the new index name will be email_1, which will be a duplicate of an existing index on email.", "username": "steevej" }, { "code": "", "text": "Hey, thanks for your reply. I do have documents in the collection, a lot of them in fact, but I’ve searched through all of them for duplicates and can’t find any; that’s why I don’t understand why I can’t create the index.", "username": "gapinzon" }, { "code": "", "text": "I also deleted every other index that had email as a key, and even then I couldn’t create it.", "username": "gapinzon" }, { "code": "mongodumpmongorestoreemail_1", "text": "Hey, any update on this issue?I tried mongodumping a collection, editing the metadata to use a collation, then mongorestoring it, but it says: “error creating collection DB.users: error running create command: (BadValue) ‘idIndex’ must have the same collation as the collection.”. I even tried deleting the other indexes like email_1", "username": "Remy_Machado" } ]
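One way to hunt for the offending documents is to group on a lowercased copy of the field — an untested sketch, and only an approximation of the strength-1 collation (which also ignores diacritics, not just case):

```js
db.getCollection('users').aggregate([
  { $match: { email: { $exists: true } } },
  // Group case-insensitively and keep the original spellings for inspection.
  { $group: {
      _id: { $toLower: "$email" },
      count: { $sum: 1 },
      variants: { $addToSet: "$email" }
  } },
  { $match: { count: { $gt: 1 } } }
])
```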
Trouble creating a case-insensitive index for a non mandatory email field
2021-11-19T15:28:05.079Z
Trouble creating a case-insensitive index for a non mandatory email field
2,418
null
[ "dot-net", "atlas-device-sync" ]
[ { "code": "", "text": "Hi,Here is the case:\nUser makes a login → Updates the database every 1-2 minutes (4 ints and 6 strings) during one day → Next day, user decides to login from different device → app tries to sync and it takes from 2-20 minutes.Is there a way to optimize it? From logs, it is clear that it tries to download all the previous ChangeSets. Is there a way to change the Compact algorithm on the server side? Or when a user is trying to login, to take only the last data (without history/changeset) ?Thanks", "username": "Andrei_Gusan" }, { "code": "", "text": "Hello!\nAre you using flexible or partition-based sync? If you were to use flexible sync, you’ll find the that the bootstrapping period (when the device fetches all previous changesets) takes less time.If the user is performing a write-only workflow and doesn’t need to receive any of the changesets, you could consider using Asymmetric Sync instead.", "username": "Sudarshan_Muralidhar" } ]
Sync time is taking a lot of time
2021-12-06T17:13:18.108Z
Sync time is taking a lot of time
2,577
null
[ "golang", "monitoring" ]
[ { "code": "MongoMaxConnIdleTimeMins = 5 // The default is 0 (indefinite)\nMongoMaxConnecting = 0 // The default is 2\nMongoMaxPoolSize = 200 // The default is 100\nMongoMinPoolSize = 10 // The default is 0\ncursors: 0\ntransactions: 0\nother operations: 200\n", "text": "G’day,Not sure if I should open a support ticket, use the community, so I will try here first, so hopefully others can search and find this. Thanks in advance.The main question is how can we monitor within a Golang process the number of MongoDB connections please?The reason is that I would like to be able to monitor and alarm before we reach maxPoolSize.\nIdeally, we could add a flag to enable Prometheus metrics, but otherwise if we could somehow query the mongo client to find out how many it has.We are using the go.mongodb.org/mongo-driver v1.10.1, and recently I’ve been increasing the maxPoolSize which has helped performance a lot.Current config we’re using is:However, recently we started seeing error messages like this.rpc error: code = DeadlineExceeded desc = timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 200, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 200.( We do have known issues with our Mongo DB performance, which we are working on. )It would be awesome if we could monitor these x3 numbers in the error message.Also, what is “other operations”, and how can I debug that to find out more?Thanks,\nDave", "username": "Dave_Seddon" }, { "code": "PoolMonitordb_client.gomongo.Client// db_client.go\npackage main\n\nimport (\n \"context\"\n \"fmt\"\n\n \"go.mongodb.org/mongo-driver/bson/primitive\"\n \"go.mongodb.org/mongo-driver/event\"\n \"go.mongodb.org/mongo-driver/mongo\"\n \"go.mongodb.org/mongo-driver/mongo/options\"\n \"go.mongodb.org/mongo-driver/mongo/readconcern\"\n \"go.mongodb.org/mongo-driver/mongo/readpref\"\n)\n\ntype dbClient struct {\n ID primitive.ObjectID // the Client ID\n client *mongo.Client\n ConnectionCreated int\n ConnectionPoolCreated int\n ConnectionClosed int\n ConnectionReady int\n ConnectionCheckOutFailed int\n ConnectionCheckedOut int\n ConnectionCheckedIn int\n ConnectionPoolCleared int\n ConnectionPoolClosed int\n checkedOut []uint64\n}\n\nfunc newDbClient(ctx context.Context, uri string) (*dbClient, error) {\n newClient := &dbClient{\n ID: primitive.NewObjectID(),\n }\n\n monitor := &event.PoolMonitor{\n Event: newClient.HandlePoolEvent,\n }\n\n // set additional options (read preference, read concern, etc) as needed\n opts := options.Client().ApplyURI(uri).SetPoolMonitor(monitor)\n var err error\n newClient.client, err = mongo.Connect(ctx, opts)\n if err != nil {\n return nil, err\n }\n _ = newClient.client.Ping(ctx, readpref.Nearest())\n return newClient, nil\n}\n\nfunc (d *dbClient) HandlePoolEvent(evt *event.PoolEvent) {\n switch evt.Type {\n case event.ConnectionCreated:\n d.ConnectionCreated++\n case event.PoolCreated:\n d.ConnectionPoolCreated++\n case event.ConnectionClosed:\n d.ConnectionClosed++\n case event.ConnectionReady:\n d.ConnectionReady++\n case event.GetFailed:\n d.ConnectionCheckOutFailed++\n case event.GetSucceeded:\n d.ConnectionCheckedOut++\n d.checkedOut = append(d.checkedOut, evt.ConnectionID)\n case event.ConnectionReturned:\n d.ConnectionCheckedIn++\n case event.PoolCleared:\n d.ConnectionPoolCleared++\n case event.PoolClosedEvent:\n d.ConnectionPoolClosed++\n }\n}\n\nfunc (d *dbClient) Close(ctx context.Context) {\n _ = 
d.client.Disconnect(ctx)\n}\n\nfunc (d *dbClient) UniqueConnections() int {\n u := 0\n m := make(map[uint64]bool)\n\n for _, val := range d.checkedOut {\n if _, ok := m[val]; !ok {\n m[val] = true\n u++\n }\n }\n\n return u\n}\n\nfunc (d *dbClient) PrintStats(section string) {\n fmt.Printf(\"-- %s --\\n\", section)\n fmt.Printf(\"Pools: Created[%d] Cleared[%d] Closed[%d]\\n\", d.ConnectionPoolCreated, d.ConnectionPoolCleared, d.ConnectionPoolClosed)\n fmt.Printf(\"Conns: Created[%d] Ready[%d] Ch-in[%d] Ch-out[%d] Ch-out-fail[%d] Ch-out-uniq [%d] Closed[%d]\\n\", d.ConnectionCreated, d.ConnectionReady, d.ConnectionCheckedIn, d.ConnectionCheckedOut, d.ConnectionCheckOutFailed, d.UniqueConnections(), d.ConnectionClosed)\n fmt.Printf(\"---------------\\n\")\n}\nPrintStats(\"...\")URI := \"mongodb://.../test?...&minPoolSize=1maxPoolSize=100\"\nctx := context.Background()\n\nmongoClient, err := newDbClient(ctx, URI)\nif err != nil {\n panic(err)\n}\n\ndefer func() {\n mongoClient.Close(ctx)\n mongoClient.PrintStats(\"Closed\")\n}()\n", "text": "Hi @Dave_Seddon,All official MongoDB Drivers (include the Golang Driver) implement the Connection Monitoring and Pooling specification which defines the various events that should be raised during the operational lifecycle of a connection pool.I have a short post that relates to MongoDB Go monitoring, however the pool counters are not exposed publicly which may make the type of reporting you’re trying to do a little more difficult.Creating a PoolMonitor with some custom counter tracking however should enable you to do the type of reporting you are after.For example, below in db_client.go we define a structure that contains a mongo.Client instance and some counters that are managed by connection pool events:This wrapper can be used to print out the internal counters at any point by calling PrintStats(\"...\"):Hopefully the above example helps illustrate one possible approach and enables you to move forward with a solution appropriate for you use case.", "username": "alexbevi" }, { "code": "mongo/driver/topology/errors.gototalConnectionCount - PinnedCursorConnections - PinnedTransactionConnections", "text": "Also, what is “other operations”, and how can I debug that to find out more?From mongo/driver/topology/errors.go it appears this is the result of totalConnectionCount - PinnedCursorConnections - PinnedTransactionConnections", "username": "alexbevi" }, { "code": "HandlePoolEventimport (\n hmmm this forum thingy won't let me post links\n)\n\nvar (\n\tpC = promauto.NewCounterVec(\n\t\tprometheus.CounterOpts{\n\t\t\tSubsystem: \"mongo_counters\",\n\t\t\tName: \"my_service\",\n\t\t\tHelp: \"my_service mongo_counters counts\",\n\t\t},\n\t\t[]string{\"event\"},\n\t)\n\n\nfunc (d *dbClient) HandlePoolEvent(evt *event.PoolEvent) {\n\tpC.WithLabelValues(evt.Type).Inc()\n}\n\n\nAlthough it would be kind of nice to have increments and decrements, so we know the current number\n\n\tpG = promauto.NewGauge(\n\t\tprometheus.GaugeOpts{\n\t\t\tSubsystem: \"connections_gauge\",\n\t\t\tName: \"my_service\",\n\t\t\tHelp: \"my_service connection gauge\",\n\t\t},\n\t)\n\nfunc (d *dbClient) HandlePoolEvent(evt *event.PoolEvent) {\n\tpC.WithLabelValues(evt.Type).Inc()\n\tswitch evt.Type {\n\tcase event.ConnectionCreated:\n\t\tpG.Inc()\n\tcase event.ConnectionClosed:\n\t\tpG.Dec()\n", "text": "HandlePoolEvent@alexbevi Thanks for the reply and for your great blogs!This looks like a reasonable approach, although I can’t help but feel that these counters must already exist within 
the “driver”, so it’s double handling.Are those increments in HandlePoolEvent concurrency safe? I would have thought atomic increments are required ( Not sure if you’ve seen this talk Bjorn Rabenstein - Prometheus: Designing and Implementing a Modern Monitoring Solution in G - YouTube ). We might try using prometheus counters I guess. Something like:I will play around and see what I can come up with.Thanks again!", "username": "Dave_Seddon" }, { "code": "", "text": "@Dave_Seddon I just wanted to close the loop on this question. Based on the conversation in this thread our Go Driver team has filed GODRIVER-2566 to expose durations to connection pool events.Tracking the timing of these events may be more insightful than just the counters themselves (see the linked ticket for more details).", "username": "alexbevi" } ]
Golang maxPoolSize monitoring?
2022-09-16T15:13:22.625Z
Golang maxPoolSize monitoring?
4,579
null
[ "swift", "transactions" ]
[ { "code": "final class Parent: Object\n{\n @Persisted var children: List<Child>\n @Persisted var hasFlaggedChildren: Bool\n}\n\n\nfinal class Child: Object\n{\n @Persisted var flags: MutableSet<String>\n}\nfunc update(children: [Child], newFlags: [String], removedFlags: [String])\n{\n try someRealm.write {\n \n // Loop over `children` and, for each, insert all `newFlags` \n // and remove all `removedFlags`.\n\n // To update the `hasFlaggedChildren` property on `Parent`, \n // can I do this in the same write transaction?\n let parents: Results<Parent> = someRealm.objects(ofType: Parent.self)\n\n for parent: Parent in parents\n {\n let flaggedKids: Results<Child> = parent.children.where({ $0.flags.count > 0 })\n parent.hasFlaggedChildren = (flaggedKids.isEmpty) ? false : true\n }\n }\n}\nChildflagstrueflags", "text": "Suppose I have two objects, like this:Is it valid/safe to use a query inside a write transaction where I’m updating the property being queried, like this:I’m worried that because the write transaction has not been committed when I query for Child objects with an empty flags set, the query will return “stale” results. Is that the case?I have behavior in my app where the “hasFlaggedChildren” property is “out of sync” (it’s true even though all children have empty flags sets) and I believe the explanation might be this query-inside-the-write-transaction.Thanks!", "username": "Bryan_Jones" }, { "code": ".objects(ofType: Parent.self).objects(Parent.self): Results<Child>parent.hasFlaggedChildren...try! realm.write {\n parent.hasFlaggedChildren\n}\n", "text": "There are a couple of typo’s in the code .objects(ofType: Parent.self) should be .objects(Parent.self) for example, but other than that it’s works as is. note this : Results<Child> is not neededIn general the only task that must be within a write is when a managed object is modified so this is the only line that does thatparent.hasFlaggedChildren...Technically you could encapsulate just that line within a writeBut that leads to the next question; do you have any other code attempting to modify those objects after the query (the read?)Results objects reflect the current state of those objects - if they are modified elsewhere it will be reflected here so that could be a factor.", "username": "Jay" }, { "code": "", "text": "@jay Thanks. When does the Results collection reflect the new changes: IMMEDIATELY or when this write transaction is closed and committed?Note: adding/removing strings to the “flags” set is a change that must occur in the write transaction, so it can’t be narrowed down as you propose. I simply omitted that part for brevity.", "username": "Bryan_Jones" }, { "code": "writePersonClassnamedescdescJays Descdo {\n try realm.write {\n let people = realm.objects(PersonClass.self)\n let jayBefore = people.where { $0.name == \"Jay\" }.first!\n print(jayBefore.name, jayBefore.desc)\n jayBefore.desc = \"Hello, World\" //update the description\n let jayAfter = people.where { $0.name == \"Jay\" }.first!\n print(jayAfter.name, jayAfter.desc)\n throw \"Throwing\"\n }\n} catch let err as NSError {\n print(err.localizedDescription)\n}\nJay Jays desc //this is before the update\nJay Hello, World //this is after the update\nThe operation couldn’t be completed. (Swift.String error 1.) //throw causing the transaction to cancel\nJay Jays desc //back to it's original value", "text": "This may help - from the docsThe Swift SDK represents each transaction as a callback function that contains zero or more read and write operations. 
To run a transaction, define a transaction callback and pass it to the realm’s write method. Within this callback, you are free to create, read, update, and delete on the realm. If the code in the callback throws an exception when Realm runs it, Realm cancels the transaction. Otherwise, Realm commits the transaction immediately after the callback.So that boils down to all or none. It either all passes or all fails as it’s “one thing”. Whatever happens in the transaction, stays in the transaction.The process can be illustrated by some example code.In this case, I have a PersonClass object and each person has a name and desc property. The code loads them in, queries for me (Jay) with the property desc set to Jays Desc and then updates my description to “Hello, World”.The fetched Jay is printed before, and then after the update (you’ll see it’s updated) but then throws an exception so nothing was committed.and the outputIf we then retrieve Jay again, it’s unchanged.Jay Jays desc //back to it's original valueSo - the data within the write block is scoped to that block and only changes within that block.Does that clarify it?", "username": "Jay" }, { "code": "ListMutableSet.where({ $0.collectionProperty.count == 0 })\n", "text": "@Jay Right. That’s all straightforward. But I’m not sure the same applies to collection properties on an Object (List, MutableSet, etc.).If we change the contents of those collection properties during the write transaction, is an immediate query against the collection property while still within the open write transaction, such as:going to work with the collection property as it exists inside the open write transaction (with changes), or does it query against the “old” version of the collection property that hasn’t been updated in the database yet because the write hasn’t been committed?I’m looking for a definitive answer because, in testing with a live app that’s using Realm Sync, the answer seems to be a race condition of sorts.", "username": "Bryan_Jones" }, { "code": "jayjayBefore.desc = \"Hello, World\" //update the description.descdogListDogClassjay.dogList.removeAll() //delete all dogsdogListdogList", "text": "is an immediate query against the collection property while still within the open write transactionThat was the point of my example. Making a change to an object in any fashion is scoped only within the transaction. I changed a property on the jay objectjayBefore.desc = \"Hello, World\" //update the descriptionbut .desc property could be any property. For example suppose my PersonClass had a dogList property which is a List of DogClass objects. If this is done within the writejay.dogList.removeAll() //delete all dogsthe dogList property after that call will contain 0 entries (as long as we are within the write). If the write fails the dogList will still contain the original dogs. If it completes, it will contain 0 dogs.", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Are Queries Permitted/Valid Inside a Write Transaction?
2022-10-04T07:51:03.628Z
Are Queries Permitted/Valid Inside a Write Transaction?
1,777
null
[]
[ { "code": "● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: active (running) since Mon 2022-10-03 19:47:05 EDT; 3s ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 12681 (mongod)\n Memory: 157.3M\n CGroup: /system.slice/mongod.service\n └─12681 /usr/bin/mongod --config /etc/mongod.conf\nMongoDB shell version v4.4.17\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection timed out :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\n", "text": "I’ve spent hours trying to get MongoDB to work.My server was hacked after I reset the firewall and forgot to disable the port. After, Mongo wasn’t starting and I chose to reinstall. So I’ve tried to completely uninstall everything many times, and time after time it just doesn’t work!When I try to install latest/6.0, it just fails to start with the error:\n“mongod.service: Failed with result ‘core-dump’…”\n“mongod.service Failed with result core-dump / Mongodb stop working: Aborted (core dumped)”So alright, I looked up some guides and apparently that was an issue with versions 5.0+. So I installed 4.4.Now, when starting I either get the error:\n“mongod.service: Main process exited, code=exited, status=14/n/a”But now, I’m just getting NO connection! I just cannot connect to it. Through localhost, externally (setting bind port to 0.0.0.0 & allowing firewall access to my ip), or any other way. Keep in mind, I’ve completely removed any SHRED of a trace of mongo from my machine with find, before reinstalling at least 7 times now.I’m just about done, I cannot get a connection. ‘mongo’ just times out, every time. I’ve fiddled with the bindIp option in the setting then restarting the service, or even the machine to no success.Starting mongodb (systemctl start mongod && systemctl status mongod):mongo:cat /var/log/mongodb/mongod.log:0bin is a client-side-encrypted alternative pastebin. You can store code/text/images online for a set period of time and share with the world. Featuring burn after reading, history, clipboard.Any help would be much appreciated!", "username": "Cooper" }, { "code": "", "text": "cat /var/log/mongodb/mongod.log:Mirror link", "username": "Cooper" }, { "code": "{\"t\":{\"$date\":\"2022-10-03T19:49:05.268-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n127.0.0.127017{\"t\":{\"$date\":\"2022-10-03T19:49:04.959-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\nmongomongodtelnet 127.0.0.1 27017netstat -na | grep 27017", "text": "Hi @Cooper, and welcome to the MongoDB Community forums! 
Sorry to hear you’re having so much trouble in getting MongoDB up and running once more on your system.From the log file we can see on line 27 that the server is up and listening:On line 6 we can see that the config file used to start MongoDB up is binding to only IP 127.0.0.1 on port 27017:What I don’t understand is why you are not able to connect from that machine using mongo.Can you try running the following commands from the machine that is running the mongod process and paste the results here:", "username": "Doug_Duncan" }, { "code": "telnet 127.0.0.1 27017telnet: could not resolve 127.0.0.1/27017:: Servname not supported for ai_socktypetelnet 127.0.0.1 27017tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN \nunix 2 [ ACC ] STREAM LISTENING 42857 /tmp/mongodb-27017.sock\n", "text": "Hey, thank you so much for the reply!\nThe output from the commands are:telnet 127.0.0.1 27017:\nThis command didn’t work with a hostname error (telnet: could not resolve 127.0.0.1/27017:: Servname not supported for ai_socktype), but trying telnet 127.0.0.1 27017 resulted in no response.netstat -na | grep 27017:I’m equally as confused, as clearly Mongo did bind from what I’ve looked at, yet no response. I’m wondering if somehow the firewall is messing with localhost connections? I have somewhat messed it up trying to restore it. Thanks again for the response.", "username": "Cooper" }, { "code": "netstat -na | grep 27017localhost270170.0.0.0:*mongomongoshmongod", "text": "Thanks for posting the results Cooper.The results from netstat -na | grep 27017 does indeed show that there is a service listening on localhost port 27017. It is shows that it’s accepting requests from any host/port combination (the 0.0.0.0:* part).At this time I would say look at the firewall as you proposed. MongoDB doesn’t appear to be the problem, so it’s the connection between the client (mongo / mongosh) and the server (mongod). You could temporarily disable your firewall for testing purposes. If you are able to connect with the firewall disabled, then you can be reenable it and try to figure to the right set of rules to keep unwanted traffic out, but allow the connection to the MongoDB instance.", "username": "Doug_Duncan" }, { "code": "", "text": "Disabling the firewall did work! I cannot believe I didn’t think of trying that. I guess I will try to wipe my firewall rules and start again.Thank you so much for the help!", "username": "Cooper" }, { "code": "iptables", "text": "Hi @Cooper,The MongoDB manual has some information on Configuring Linux iptables Firewall for MongoDB that may be a helpful starting point.I also recommend reviewing the MongoDB Security Checklist for common security measures.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for the iptables link.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot Get Install To Work
2022-10-03T23:50:21.213Z
Cannot Get Install To Work
2,695
null
[ "replication", "sharding", "database-tools", "backup", "migration" ]
[ { "code": "", "text": "Hello,Is there a best practice way to migrate data from a replicaset cluster to a new sharding cluster in poduction environments? (Within minimum downtime for APIs.) .When we use mongodump/mongorestore tools to move all data (about 400GB), it takes a lot of time to complete. It doesn’t seem like the best way for a production environment migration.Is there a step-by-step scenario to consider like point-in-time backup and restore to migrate data from a replica set to a new sharding cluster?I found the following implementation, but it says it won’t be useful in sharding clusters: How to manually perform a point in time restore in MongoDBThanks,", "username": "serhat_yarat" }, { "code": "", "text": "Is there a reason you’re performing a dump/restore? Have you seen the documentation on the MongoDB site on how to Convert a Replica Set to a Sharded Cluster? Following these steps does not require a dump/restore.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi @Doug_Duncan,Yes, I checked that. We cant use the convert way because there is no network connection between these clusters, we won’t use the oldest node (replica set cluster nodes) when this migration is completed.Thanks,", "username": "serhat_yarat" } ]
Mongodb migration data from a replicaset cluster (4.2 V) to a new sharding cluster (4.2 V)
2022-10-03T11:47:40.522Z
Mongodb migration data from a replicaset cluster (4.2 V) to a new sharding cluster (4.2 V)
2,242
null
[ "compass", "kafka-connector" ]
[ { "code": "db.runCommand({connectionStatus: 1, showPrivileges: true});\n{\n authInfo: {\n authenticatedUsers: [],\n authenticatedUserRoles: [],\n authenticatedUserPrivileges: []\n },\n ok: 1\n}\n", "text": "Hi,\nI’m trying to use the Kafka Connect MongoDB connector to sink data to CosmosDB with Mongo API.When I create the connector, Connect returns an error immediately because the user doesn’t have the necessary privileges (insert, update, delete). However, it does have them, it’s just that CosmosDB seems to do something “peculiar” with Mongo RABC. For example, when running from Compass this:The result I’m getting is:This is happening despite having activated the newly created (Role-based access control in Azure Cosmos DB API for MongoDB: Now in preview - Azure Cosmos DB Blog).Does anybody have any idea or suggestion to make this work? I’m thinking if a small contribution to the mongo-kafka project adding a config setting that allowed disabling this check would be considered at all.Thanks.", "username": "Javier_Holguera" }, { "code": "", "text": "The CosmosDB API for MongoDB supports only a fraction of the available APIs that MongoDB does. Comparing Microsoft Cosmos DB And MongoDB | MongoDB. If you must use CosmosDB it might be better to ask the question on a Microsoft forum. If you’d like a self-hosted MongoDB database try MongoDB Atlas. MongoDB Atlas Database | Multi-Cloud Database Service | MongoDB. It works in Azure too.", "username": "Robert_Walters" }, { "code": "", "text": "Hi Robert,I’m aware of that post, and also the fact that it is not up to date; CosmosDB launched support for Mongo API 4.2 in Feb '22 and the post is based on the feature set in August '21. Also, the post doesn’t mention anything about RABC. In any case, I do need to use CosmosDB so it doesn’t really matter.Regarding asking a Microsoft forum, AFAIK it is the MongoDB community building the Kafka Connect connector, not Microsoft. I’m used to address open-source developers directly in Github and/or specific chats instead of general forums. I assume tor developers might monitor this forum, since it’s linked in the Github readme.Thanks.", "username": "Javier_Holguera" }, { "code": "", "text": "You are correct in asking on this forum as the engineers for the connector are here however the connector is designed and tested against a MongoDB instance not CosmosDB or any other third party MongoDB API. Can you use the native CosmosDB connector for Kafka in your solution ?", "username": "Robert_Walters" }, { "code": "", "text": "Hi Robert,We tried that connector first. As you mentioned, being a native connector made it the perfect candidate. However, it is a bit more immature than we were hoping.For example, I personally fixed a bug where empty collections would cause it to fail: Fixes parsing for empty arrays by javierholguera · Pull Request #466 · microsoft/kafka-connect-cosmosdb · GitHubConsidering how basic the scenario is, I didn’t feel me with confidence. I thought that MongoDB Connector would have seen more usage, be more polished and, assuming CosmosDB Mongo API lived up to its compatibility promise, a viable alternative.", "username": "Javier_Holguera" }, { "code": "", "text": "assuming CosmosDB Mongo API lived up to its compatibility promiseHi @Javier_Holguera,To help set your expectations correctly: the Cosmos DB API for MongoDB is an independent implementation emulating a subset of MongoDB features for the associated server version. 
Cosmos’ native interface is their SQL/Core API, and there are emulated APIs supporting wire protocols and approximate feature mapping for MongoDB, Cassandra, and Gremlin.There are some differences in behaviour including Cosmos-specific Request Units (RUs), rate limiting, and error codes. Official MongoDB drivers and connectors are not currently tested against emulated APIs like Cosmos and there is quite a gap in core compatibility.These compatibility caveats may be fine for your use case, but you should not expect full compatibility as these are different codebases and underlying implementations.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Javier_Holguera, have you solved this issue? I’m running into the exact same thing with the insert, update, delete privileges.", "username": "jordan_palamos" }, { "code": "connectionStatus", "text": "I found the answer in case anyone else comes across it. The problem is that the CosmosDB API for MongoDB does not support the connectionStatus command that Javier has shown: https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/feature-support-42This same command is used by the Kafka connector sink code to validate the connection on startup. You can edit the connector to simply skip the user validation step if you want this to work. Change would go here → https://github.com/mongodb/mongo-kafka/blob/ee5edf317508da42a62917d32fb21c2d46660991/src/main/java/com/mongodb/kafka/connect/MongoSinkConnector.java#L105I don’t know if the maintainers would be open to making the user validation step optional and exposing that in the configs?", "username": "jordan_palamos" }, { "code": "skip.user.action.validationconnectionStatus", "text": "Hi,Sorry for the late reply, was a few days off.I’ve created a ticket to raise awareness about the issue: https://jira.mongodb.org/browse/KAFKA-332I’ve also opened a PR that implements a config entry (skip.user.action.validation) so a developer can consciously force the connector (sink or source) to skip that check.I’ve tested it with CosmosDB Mongo API and it works just fine. The fact that the connectionStatus command doesn’t return the user permissions doesn’t change that the user indeed has them and all reads/writes work. This is tested with the sink connector, though. I don’t have a use case for the source connector at the moment, but I would expect the same behaviour.I hope that, even if this is not the main MongoDB backed by Atlas, since this connector is open-source, the contributors optimised for supporting as many people as possible and accept this contribution (happy to do corrections, my first contribution to the project).", "username": "Javier_Holguera" }, { "code": "", "text": "Link to the PR: Adds config 'skip.user.action.validation' to skip user permissions check by javierholguera · Pull Request #120 · mongodb/mongo-kafka · GitHub", "username": "Javier_Holguera" } ]
Kafka Connect mongo connector with CosmosDB Mongo API
2022-08-25T20:15:17.908Z
Kafka Connect mongo connector with CosmosDB Mongo API
4,829
null
[]
[ { "code": "volumes: - mongodata:/data/dbError receiving request from client. Ending connection from remote", "text": "Hello community,\nMy data is deleted every day. The mongo server run under docker, and i never encounter this issue on my local computer.\nOn the docker side:", "username": "rem_zy_flex" }, { "code": "", "text": "I had the same issue, deploying it on a server without authorization enabled. So it appeared that all data was wiped because of hack. My all dbs all removed but had only db named “read_me_to_recover_your_data”. So the solution was running mongodb with authorization enabled", "username": "Daniyar_Gilimov" } ]
Help: My data wiped at least once a day (docker instance)
2022-01-12T08:52:29.837Z
Help: My data wiped at least once a day (docker instance)
3,091
null
[ "installation" ]
[ { "code": "", "text": "We are currently running version 5.0 of mongoDB community edition and when viewing this through Cloud Manager is appears as this. We need to downgrade back to 4.4. I have been trying to follow this formal guide: https://www.mongodb.com/docs/manual/release-notes/5.0-downgrade-replica-set/ When I have completed all the steps and restart the mongodb node and services, once the Cloud Manager automation agent gets the server running again, it still appears as version 5.0.Am I missing some kind of config that needs to be set. Furthermore, when I have downloaded and installed the mongo binaries, running a mongod --version typically shows me 4.4?At this step in the documentation, they talk about replacing the 5.0 binary with the 4.4 binary, how can I actually check that I have done this correctly? REF: https://www.mongodb.com/docs/manual/release-notes/5.0-downgrade-replica-set/#downgrade-secondary-members-of-the-replica-set", "username": "Alex_Meyer1" }, { "code": "db.version()db.version()", "text": "When I have completed all the steps and restart the mongodb node and services, once the Cloud Manager automation agent gets the server running again, it still appears as version 5.0.After you downgrade the server, without involving automation, are you seeing the desired version in db.version()?If yes, it might be that Cloud Manager automation is not aware of the downgrade. Note that the downgrade procedure you linked doesn’t involve automation, so this is a possibility.If you’re using Cloud Manager, then the page Change the Version of MongoDB — MongoDB Cloud Manager might be more relevant.However, if this is not solving your issue, please log into your MongoDB Cloud account and open a support ticket.At this step in the documentation, they talk about replacing the 5.0 binary with the 4.4 binary, how can I actually check that I have done this correctly?If you connect to the server using mongosh and execute db.version() it should return the server version that is running.Best regards\nKevin", "username": "kevinadi" } ]
Downgrade Mongo not working for Cloud Automation
2022-09-30T20:45:16.123Z
Downgrade Mongo not working for Cloud Automation
2,125
null
[ "connecting" ]
[ { "code": "", "text": "Hi all,First of all, if this is not the right category, please let me know which one is.I’m new to MongoDB. Made a local deploy and it’s running fine. Now I’m trying to set up an Atlas instance, but can’t seem to be able to connect to it.\nI’m trying to use the stringmongodb+srv://pdantas:@cluster0.n2hlh.mongodb.net/testas suggested in the “connect” session in Atlas console, however, when I enter this string in Compass I get the errorquerySrv ENOTFOUND _mongodb._tcp.cluster0.n2hlh.mongodb.netI’ve also tried connecting through mongosh and get the same errorC:\\Users\\OS16S8898>mongosh “mongodb+srv://cluster0.c0tbu.mongodb.net/myFirstDatabase” --username pdantas\nEnter password: *******\nCurrent Mongosh Log ID: 6165e8a56936fb42f0cbf229\nConnecting to: mongodb+srv://cluster0.c0tbu.mongodb.net/myFirstDatabase\nError: querySrv ENOTFOUND _mongodb._tcp.cluster0.c0tbu.mongodb.netI’ve whitelisted my IP, and even added 0.0.0.0/0 to the IP access list, to no avail. Can someone please help me find out what is it that I’m doing wrong?Thanks!", "username": "Pedro_Dantas" }, { "code": "", "text": "Hi @Pedro_Dantas,Which versions are you using?Also, this topic had a solution: https://www.mongodb.com/community/forums/t/readme-running-application-error-querysrv-enotfound-solved/85196/3Does it work for you too?", "username": "MaBeuLux88" }, { "code": "", "text": "Try using google’s DNS 8.8.8.8 and 8.8.4.4.See Public DNS  |  Google Developers for more details.", "username": "steevej" }, { "code": "C:\\Users\\OS16S8898>mongosh mongodb://pdantas:[email protected]:27017,cluster0-shard-00-01.c0tbu.mongodb.net:27017,cluster0-shard-00-02.c0tbu.mongodb.net:27017/test\nCurrent Mongosh Log ID: 6166f8446f2747e87b767609\nConnecting to: mongodb://<credentials>@cluster0-shard-00-00.c0tbu.mongodb.net:27017,cluster0-shard-00-01.c0tbu.mongodb.net:27017,cluster0-shard-00-02.c0tbu.mongodb.net:27017/test\nMongoServerSelectionError: connection <monitor> to 108.129.24.16:27017 closed\n", "text": "Hi @MaBeuLux88.Those were the latest versions that I could find.About the solution detailed in the link, I tried formatting the string as instructed (using the mongodb protocol and a lista of the URLs for each shard separated by a comma) but still can’t get a connection (it does seem to be able to resolve the IP address, though).Am I still missing something?Thanks.", "username": "Pedro_Dantas" }, { "code": "", "text": "Hi @steevej I added those IP’s to the whitelist but nothing happened, was I supposed to do something else?Thanks.", "username": "Pedro_Dantas" }, { "code": "", "text": "I added those IP’s to the whitelistThe 2 IPs I provided are not at all related to the white list of your cluster. It has to do with host name resolution, aka DNS. 
In the link I provided you will find instructions forConfigure your network settings to use the IP addresses 8.8.8.8 and 8.8.4.4 as your DNS servers.Your ISP or VPN provider probably uses old DNS software that cannot resolve SRV connection strings.I have just notice that your original post contains 2 different connection strings, one with c0tbu and one with n2hlh.", "username": "steevej" }, { "code": "mongodb+srv://readonly:[email protected]/test\n/etc/resolv.confnameserver 8.8.8.8\n<keep the former DNS line in here>\n", "text": "Make sure to retrieve the Compass connection string from the MongoDB Atlas UI like this:\nimage605×725 43.8 KB\n\nimage1356×1180 113 KB\n\nimage1357×1146 101 KB\nOf course make sure to change the values for the username and password.\nThey come from this menu. It’s not the Atlas user/pwd.\nimage312×504 17.2 KB\nIn the end, it should look like this:Try to connect using the above connection string in Compass to see if it works. This is the public cluster where I’m hosting the Open Data COVID-19 data set.If that doesn’t work - then you have a DNS problem like @steevej explained and you have to add 8.8.8.8 and/or 8.8.4.4 in your DNS list.For me on linux, I would have to add on the first line of /etc/resolv.conf:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "SSID:\tPivonet 5G\nProtocol:\tWi-Fi 5 (802.11ac)\nSecurity type:\tWPA2-Personal\nNetwork band:\t5 GHz\nNetwork channel:\t100\nIPv6 address:\t2001:818:e348:c700:2093:b4f9:eef:80a8\nLink-local IPv6 address:\tfe80::2093:b4f9:eef:80a8%19\nIPv6 DNS servers:\t2001:4860:4860::8888\n2001:4860:4860::8844\nIPv4 address:\t192.168.1.143\nIPv4 DNS servers:\t8.8.8.8\n8.8.4.4\nManufacturer:\tIntel Corporation\nDescription:\tIntel(R) Wireless-AC 9560 160MHz\nDriver version:\t21.120.0.9\nPhysical address (MAC):\t84-C5-A6-6A-70-6B\nC:\\Users\\OS16S8898>mongosh mongodb+srv://readonly:[email protected]/covid19\nCurrent Mongosh Log ID: 61680c1143666d0fa2a10a77\nConnecting to: mongodb+srv://<credentials>@covid-19.hip2i.mongodb.net/covid19\nUsing MongoDB: 4.4.9\nUsing Mongosh: 1.1.0\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\nAtlas covid-19-shard-0 [primary] covid19>\n", "text": "Hi guys,I was already using the connection string provided by the Atlas console, to no avail.\nI changed the DNS to Google’sbut it’s still not working\nimage692×177 9.08 KB\nI also tried to connect to the Covid19 DB as suggested by Maxime but get the same error.However, I found out that I do can connect through mongoshell So I’m guessing I can work through here, but if you have any further idea on why the connection though Compass isn’t working I’d love to hear, as I’d rather work that way if possible.Thanks for the assistance so far PD", "username": "Pedro_Dantas" }, { "code": "mongodb+srv://readonly:[email protected]/test\n", "text": "Can you connect toUsing MongoDB Compass?", "username": "MaBeuLux88" }, { "code": "", "text": "@MaBeuLux88 No, as I said I get the same error\nimage710×179 8.86 KB\n", "username": "Pedro_Dantas" }, { "code": "", "text": "What’s your OS & are you up-to-date ? Looks like there is definitely something wrong with your Internet connection. Are you behind a VPN maybe? Firewall? Or maybe it’s just a DNS issue as @steevej suggested.\nCan you try a different internet connection & disable everything than can cause connection issues? Antivirus software can block connections sometimes as well. 
If it works reactive them one by one and eliminate the troublemaker !Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "mongodb+srv://readonly:[email protected]/test", "text": "mongodb+srv://readonly:[email protected]/testOn my side I was having the same issue and by disabling my VPN, the connection worked.Due to work, I need to use a VPN so how can we fix this issue other than disabling the VPN?", "username": "Ivan_Escalante" }, { "code": "", "text": "Your work probably disabled connections to port 27017 or maybe they have even more restrictive rules in place. You need to talk to your IT support. Pity that your work system & network doesn’t allow you to work . I guess they are just missing an exception on this port in their gateway rules. But I’m not an expert .", "username": "MaBeuLux88" }, { "code": "", "text": "Your work probably disabled connections to port 27017Try http://portquiz.net:27017/ to see if it is the above.", "username": "steevej" }, { "code": "telnet portquiz.net 27017generateResolvConf = true", "text": "This is happening to me now after Windows automatically updated to version 10.0.19042 Build 19042.Last night things were working fine, got up today and my machine had restarted & updated. Skipped Windows 11 spam at startup, started up my dev environment in WSL1 like always, and now I can’t work. Anyone else experiencing this?I can connect with Compass just fine.I’m not connected to VPN (which does mess up connections to Atlas, although with a different error). Until today I have never seen the error in OP’s post.telnet portquiz.net 27017 is failing, so I’m guessing something in the Windows update affected Windows Defender.Couldn’t ping google.com either.[SOLVED]This worked for me: edit /etc/wsl.conf and set generateResolvConf = true<!--\n🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨\n\nI ACKNOWLEDGE THE FOLLOWING BEFORE PROCEEDING:\n1. If I dele…te this entire template and go my own path, the core team may close my issue without further explanation or engagement.\n2. If I list multiple bugs/concerns in this one issue, the core team may close my issue without further explanation or engagement.\n3. If I write an issue that has many duplicates, the core team may close my issue without further explanation or engagement (and without necessarily spending time to find the exact duplicate ID number).\n4. If I leave the title incomplete when filing the issue, the core team may close my issue without further explanation or engagement.\n5. If I file something completely blank in the body, the core team may close my issue without further explanation or engagement.\n6. If I file an issue without collecting logs, the WSL team may close my issue without further explanation or engagement. \n\nAll good? Then proceed!\n-->\n\n<!--\nThis bug tracker is monitored by Windows Subsystem for Linux development team and other technical folks.\n\nImportant: When reporting BSODs or security issues, DO NOT attach memory dumps, logs, or traces to Github issues.\nInstead, send dumps/traces to [email protected], referencing this GitHub issue. 
Ideally, please configure your machine to capture minidumps, repro the issue, and send the minidump from \"C:\\Windows\\minidump\\\".\nYou can find instructions to do that here: https://support.microsoft.com/en-us/help/315263/how-to-read-the-small-memory-dump-file-that-is-created-by-windows-if-a\n\nIf this is a console issue (a problem with layout, rendering, colors, etc.), please post the issue to the Terminal tracker: https://github.com/microsoft/terminal/issues\nFor documentation improvements, please post to the documentation tracker: https://github.com/MicrosoftDocs/WSL/issues\nFor any other questions on contributing please see our contribution guidelines: https://github.com/Microsoft/WSL/blob/master/CONTRIBUTING.md\n\nPlease fill out the items below.\n-->\n\n# Environment\n\n```none\nWindows build number: Version 10.0.18363.1256\nYour Distribution version: Both Ubuntu 18.04 and Ubuntu 20.04\nWhether the issue is on WSL 2 and/or WSL 1: WSL 2\n```\n\n# Steps to reproduce\nInstall WSL2 with any Ubuntu distro(1804/2004) .Ping google.com or any other website. Tried on with/without VPN(Big Edge IP client).\n\n\n\n\n<!-- \nIf you'd like to provide logs you can provide an `strace(1)` log of the failing command (if `some_command` is failing, then run `strace -o some_command.strace -f some_command some_args`, and link the contents of `some_command.strace` in a gist. \nMore info on `strace` can be found here: https://www.man7.org/linux/man-pages/man1/strace.1.html\nYou can use Github gists to share the output: https://gist.github.com/\n-->\n\n<!--\nCollect WSL logs by following these instructions: https://github.com/Microsoft/WSL/blob/master/CONTRIBUTING.md#8-detailed-logs \n-->\n**WSL logs**: \n**ping google.com**\nTemporary failure in name resolution\n\nFor **sudo apt update**\n\nErr:1 http://archive.ubuntu.com/ubuntu bionic InRelease\n Temporary failure resolving 'archive.ubuntu.com'\nErr:2 http://security.ubuntu.com/ubuntu bionic-security InRelease\n Temporary failure resolving 'security.ubuntu.com'\nErr:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease\n Temporary failure resolving 'archive.ubuntu.com'\nErr:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease\n Temporary failure resolving 'archive.ubuntu.com'\nReading package lists... Done\nBuilding dependency tree\nReading state information... Done\nAll packages are up to date.\nW: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/bionic/InRelease Temporary failure resolving 'archive.ubuntu.com'\nW: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/bionic-updates/InRelease Temporary failure resolving 'archive.ubuntu.com'\nW: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/bionic-backports/InRelease Temporary failure resolving 'archive.ubuntu.com'\nW: Failed to fetch http://security.ubuntu.com/ubuntu/dists/bionic-security/InRelease Temporary failure resolving 'security.ubuntu.com'\nW: Some index files failed to download. They have been ignored, or old ones used instead.\n\n# Expected behavior\n\nI should get reply to the ping. And Internet should be easily accessible from within WSL.\n\nWSL 1 has no problem. I am able to connect to internet. When I try to use docker on WSL , it asks to upgrade to WSL2. So now i have no option. Just have to use WSL2 for docker where there is no internet connectivity\n\n\n\n# Actual behavior\n\n![image](https://user-images.githubusercontent.com/63879567/103714165-9f7cd300-4f83-11eb-906f-d2faf0e7f071.png)\n\n\nHave looked at the solutions from other issues. 
None seems to works. Follows some that I have tried but it seems not to work.\n\n1. Added (_[network]generateResolvConf = false_) to **wsl.conf** And add _nameserver 8.8.8.8_ in **resolv.conf**\n\n2. Run the following powershell script \n\n```\n`echo \"Restarting WSL Service\"\nRestart-Service LxssManager\necho \"Restarting Host Network Service\"\nStop-Service -name \"hns\"\nStart-Service -name \"hns\"\necho \"Restarting Hyper-V adapters\"\nGet-NetAdapter -IncludeHidden | Where-Object `\n {$_.InterfaceD```\nescription.StartsWith('Hyper-V Virtual Switch Extension Adapter')} `\n | Disable-NetAdapter -Confirm:$False\nGet-NetAdapter -IncludeHidden | Where-Object `\n {$_.InterfaceDescription.StartsWith('Hyper-V Virtual Switch Extension Adapter')} `\n | Enable-NetAdapter -Confirm:$False`\n```\n\n\nPlease let me know how can i fix this issue, so i can connect to internet from my WSL\n\nFYI Antivirus - McAffe and Windows Defender is enabled.", "username": "Warren_Wonderhaus" }, { "code": "", "text": "Thanks, here it worked.", "username": "Marco_Aurelio_De_Araujo_Jesus" }, { "code": "", "text": "I have the same problem because I was connected to a VPN, when I disconnect it it works immediately.", "username": "JOSE_FRANCISCO_HERNANDEZ_CRUZ" }, { "code": "", "text": "Having run into a host of issues using private endpoints, some or most self inflicted, I have come the assumption that the “Error: querySrv ENOTFOUND” has to do with the inability of the driver to resolve a FQDN of the “mongodb+srv” to names of the replica set members.", "username": "Steve_Hand1" }, { "code": "_mongodb._tcp.nslookup -type=SRV _mongodb._tcp.cluster0.k0ke7qa.mongodb.net\n<REDACTED INFO>\n\nNon-authoritative answer:\n_mongodb._tcp.cluster0.k0ke7qa.mongodb.net\tservice = 0 0 27017 ac-omkxrjv-shard-00-00.k0ke7qa.mongodb.net.\n_mongodb._tcp.cluster0.k0ke7qa.mongodb.net\tservice = 0 0 27017 ac-omkxrjv-shard-00-01.k0ke7qa.mongodb.net.\n_mongodb._tcp.cluster0.k0ke7qa.mongodb.net\tservice = 0 0 27017 ac-omkxrjv-shard-00-02.k0ke7qa.mongodb.net.\nnslookup -type=SRV _mongodb._tcp.cluster0.abcdefg.mongodb.net\n<REDACTED INFO>\n\n** server can't find _mongodb._tcp.cluster0.abcdefg.mongodb.net: NXDOMAIN\nquerySrv ENOTFOUNDquerySrv ENOTFOUNDquerySRV", "text": "The error generally indicates the FQDN for the associated host(s) for the SRV record were not able to be resolved by the DNS in use.As of the time of this message, the SRV record prefix for Atlas clusters is _mongodb._tcp.Below is a reproduction that shows this error. Let’s say I have a cluster which has an associated SRV record which resolves to the following hostnames:Although not exactly the case for all similar errors generated and only for demonstration purposes, I will pass through an invalid SRV record (to some degree attempt to mimic the DNS being unable to resolve the SRV records):Using the DNS seed list connection format with the invalid SRV record above in Compass results in the querySrv ENOTFOUND error:\nimage1242×624 52.5 KB\nIn conclusion, the error querySrv ENOTFOUND was caused by DNS resolving issues, likely from the client side. Also as several people have noted in this thread, if you’re using VPN and encountering this issue only on the VPN (i.e. Connection works fine without querySRV error outside of the VPN connection) then it is probably due to how the VPN is configured. 
To my knowledge there are some VPN’s which assign a new DNS for the VPN session which could result in this error.", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Can't connect to MongoDB Atlas - querySrv ENOTFOUND
2021-10-12T20:05:33.940Z
Can’t connect to MongoDB Atlas - querySrv ENOTFOUND
50,375
https://www.mongodb.com/…6_2_1024x237.png
[ "replication", "cluster-to-cluster-sync" ]
[ { "code": "", "text": "Hello,I was following the article: https://www.mongodb.com/docs/cluster-to-cluster-sync/current/connecting/onprem-to-onprem/#connect-two-self-managed-clusters\nto connect two self managed cluster and sync data from source cluster to destination. I am running the utility on my local machine and the source and destination mongoDB clusters are in AKS.I am stuck at resolving the source and destination cluster IP addresses.\nimage1915×444 63.5 KB\nNote: I have port forwarded using kubectl in this manner\nlocalhost:27016 <-> my-db-mongodb-0.my-db-mongodb-headless.default.svc.cluster.local (Source AKS cluster)\nlocalhost:27017 <-> my-db-mongodb-1.my-db-mongodb-headless.default.svc.cluster.local (Source AKS cluster)localhost:27018 <-> my-db-mongodb-0.my-db-mongodb-headless.default.svc.cluster.local (Destination AKS cluster)\nlocalhost:27019 <-> my-db-mongodb-1.my-db-mongodb-headless.default.svc.cluster.local (Distination AKS cluster)Why is the utility considering these cluster address, even when I am mentioning the port forwarded address as commandline argument.", "username": "Ashok_manojPhilip" }, { "code": "mongod", "text": "Hi @Ashok_manojPhilip welcome to the community!Why is the utility considering these cluster address, even when I am mentioning the port forwarded address as commandline argument.This is sort of alluded to in the connection string section in the page you linked to:Specify the hostnames of the mongod instances the same way that they are listed in your replica set configuration.Since mongosync uses standard MongoDB driver to perform its function, it needs to connect to all parts of the deployment as specified in the monitoring spec. It takes the list of servers to connect to from the replica set configuration, which is how all supported drivers works currently.In short, all the nodes in the replica set needs to be reachable by their addresses in the replica set config by mongosync for it to work.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongosync not able to resolve source and destination mongoDB cluster
2022-10-04T11:36:09.620Z
Mongosync not able to resolve source and destination mongoDB cluster
2,167
null
[ "react-js", "app-services-hosting" ]
[ { "code": "", "text": "Hello! I am trying to serve gzipped files from Mongodb Realm Hosting. I am wondering if anyone has tried doing this before? I also would be interested in serving brotli compressed files as well.I can’t select the index.html.gz file as the SPA file in Hosting Settings.Has anyone tried to do this? Let me know. Thanks!", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Bumping this! If anyone has any ideas or any input from the Realm product/engineering team that would be awesome.", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Gonna try this and report back: https://www.mongodb.com/docs/atlas/app-services/hosting/file-metadata-attributes/", "username": "Lukas_deConantseszn1" } ]
Serving Gzipped Files from Mongodb Realm Hosting
2021-09-15T00:21:18.289Z
Serving Gzipped Files from Mongodb Realm Hosting
3,769
null
[]
[ { "code": "", "text": "Hi, I was wondering if anyone has experience with running MongoDB with data stored on an encrypted filesystem vs using the MongoDB Enterprise encrypted storage engine. I’m particularly interested in any performance or operations pros/cons. Would they both be expected to have similar performance overhead?", "username": "AmitG" }, { "code": "", "text": "Hi @AmitG,I don’t have a performance comparison to offer based on direct experience (and I expect actual outcomes would be heavily influenced by your workload and deployment resources), however some general points to consider are:LUKS is full disk encryption, so will add overhead for all file access on encrypted volumes.MongoDB Enterprise’s encrypted storage engine only affect data files used by MongoDB processes.MongoDB data files using the default storage engine will not by encrypted if copied from a LUKS volume to another standard volume (eg for backup).MongoDB data files encrypted by the MongoDB Encrypted Storage Engine will always remain encrypted.Encryption at rest is only one of the recommended security measures – see the MongoDB Security Checklist for more recommendations. MongoDB Enterprise Advanced includes additional security features (auditing, Kerberos/LDAP auth, support for automatic Queryable Encryption, …) as well as operational tools like Ops Manager.I suspect targetted in-process encryption with the MongoDB Encrypted Storage Engine will be more efficient than LUKS, but for either approach you can address deployment resources needed for your performance targets as part of your capacity planning.Regards,\nStennie", "username": "Stennie_X" } ]
LUKS vs Encrypted Storage Engine performance
2022-10-04T15:34:04.483Z
LUKS vs Encrypted Storage Engine performance
992
https://www.mongodb.com/…b5905bf5c231.png
[ "aggregation" ]
[ { "code": "pendingdone{ match: {status: \"pending\"}}[{\n id: \"Ali\",\n pendingOrder: 11, //status = \"pending\"\n doneOrder: 10, // status = \"done\"\n },\n {\n id:\"Henry\"\n pendingOrder: 12,\n doneOrder: 20\n },\n ...\n]\n", "text": "I’ve a collection like this:\nI want to group them by name, and i want get the size of the document for both status pending and done . I able to get only pending one using { match: {status: \"pending\"}} .I want to get the result like this:Is it possible to do that in one aggregation?", "username": "elss" }, { "code": "$match:{$in:[\"Pending\",\"Done\"]}$group : {\n _id : { name : \"$Name\" , \"status\" , \"$Status\" } \n count : { \"$sum\" : 1 }\n}\n$group : {\n _id : \"$_id.Name\" ,\n counts : { $push : { \"Status\" : \"_$id.Status\" , count : \"$count\" } }\n}\n", "text": "Look at $group.First do not $match, or$match:{$in:[\"Pending\",\"Done\"]}Your first $group would look like:Then a second $group likeThe result will not be in the exact format you wish but close enough. May be $arrayToObject can do the final transformation. But I prefer to do this data cosmetic in the application rather than the data server.", "username": "steevej" }, { "code": "", "text": "Hi @elssOther than @steevej suggestion, you also might want to have a look at $facet to see if it satisfies your requirements. However I would suggest that you use the method that you’re more comfortable with and can maintain easily.Best regards\nKevin", "username": "kevinadi" } ]
Can i get result of different query in one aggregation?
2022-10-04T03:33:04.359Z
Can i get result of different query in one aggregation?
1,256
null
[ "node-js" ]
[ { "code": " const reqBody = body.text();\n const jsonRequest = JSON.parse(reqBody);\n var myObj = jsonRequest.myObj;\n \n myObj.updatedAt = new Date().getTime();\n let id = myObj._id;\n delete myObj._id;\n \n const options = { \"upsert\": false };\n\n response.myObj= await context.services.get(\"myInstance\")\n .db(\"myDB\")\n .collection(\"myCollection\")\n .replaceOne({\"_id\": BSON.ObjectId(id) }, myObj, options);\n \n", "text": "I have a large document (~20KB) and through an AWS lambda i modify it. Almost all fields are updated during that process and through an http Request i want to replace the document with the new object that the lambda sends.The process below achieves that, i was just wondering whether there’s another (better) way to do that since it feels weird to delete the _id field from my object (otherwise i get the error of trying to modify the _id field which is immutable) in order to replace it.I will appreciate any advice. Thanks in advance!", "username": "Panagiotis_Milios" }, { "code": " const reqBody = body.text();\n const jsonRequest = JSON.parse(reqBody);\n var myObj = jsonRequest.myObj;\n \n myObj.updatedAt = new Date().getTime();\n\n const options = { \"upsert\": false };\n\n response.myObj= await context.services.get(\"myInstance\")\n .db(\"myDB\")\n .collection(\"myCollection\")\n .replaceOne({\"_id\": myObj._id }, myObj, options);\n_id", "text": "Hi @Panagiotis_Milios welcome to the community!I think you can get away with:Please note that this is untested code.replaceOne should not complain about modifying _id since I don’t think you’re changing it. If this doesn’t work for you, please post the original document, the document you’re trying to replace it with, and the error message you’re seeing.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is this a solid practise?
2022-09-29T08:36:18.965Z
Is this a solid practise?
1,335
null
[ "ops-manager" ]
[ { "code": "", "text": "We have mongo db software installed and our security team is asking us to remediate the log4j from the binaries. Please let us know the process to upgrade the log4j to latest secure version./mongo/opsmanager/lib/log4j-1.2.15.jar", "username": "Bharat_Kilaru" }, { "code": "", "text": "Hi @Bharat_Kilaru welcome to the community!Since Ops Manager is part of the Enterprise Advanced subscription and is not community supported, you might want to contact sales so you can have the appropriate remediation for this.In the meantime, this was discussed in the thread Update on Log4Shell Vulnerability (CVE-2021-44228) and also the blog post Log4Shell Vulnerability (CVE-2021-44228, CVE-2021-45046 and CVE-2021-45105) and MongoDB | MongoDB BlogBest regards\nKevin", "username": "kevinadi" } ]
Remediate log4j Vulnerabilities
2022-10-04T19:40:29.983Z
Remediate log4j Vulnerabilities
2,691
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to release version 1.9.2 of the MongoDB Go Driver.This release contains a bugfix. For more information please see the 1.9.2 release notes.You can obtain the driver source from GitHub under the v1.9.2 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,`The Go Driver Team", "username": "benjirewis" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.9.2 Released
2022-10-04T22:57:57.881Z
MongoDB Go Driver 1.9.2 Released
1,602
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to release version 1.10.3 of the MongoDB Go Driver.This release contains several bugfixes. For more information please see the 1.10.3 release notes.You can obtain the driver source from GitHub under the v1.10.3 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team", "username": "benjirewis" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.10.3 Released
2022-10-04T22:54:56.718Z
MongoDB Go Driver 1.10.3 Released
1,813
null
[ "dot-net", "transactions" ]
[ { "code": "[\n {\n \"id\": 1,\n\t\"bookcode\" : \"A001\"\n \"name\": \"Crime and punishment\"\n },\n {\n \"id\": 2,\n\t\"bookcode\" : \"A002\"\n \"name\": \"Atomic habits\"\n },\n {\n \"id\": 3,\n\t\"bookcode\" : \"A003\"\n \"name\": \"Demons\"\n },\n {\n \"id\": 4,\n\t\"bookcode\" : \"A004\"\n \"name\": \"C# for beginners\"\n }\n]\n[\n {\n \"id\": 1,\n \"userId\": 75,\n \"books\": [\n {\n \"book\": \"A001\",\n \"price\": 50\n },\n {\n \"book\": \"A002\",\n \"price\": 20\n }\n ]\n },\n {\n \"id\": 2,\n \"userId\": 184,\n \"books\": [\n {\n \"book\": \"A003\",\n \"price\": 10\n },\n {\n \"book\": \"A004\",\n \"price\": 99\n }\n ]\n }\n]\n\n[\n {\n \"id\": 1,\n \"userId\": 75,\n \"books\": [\n {\n \"book\": \"Crime and punishment\",\n \"price\": 50,\n\t\t\n },\n {\n \"book\": \"Atomic habits\",\n \"price\": 20,\n\t\t\n }\n ]\n },\n {\n \"id\": 2,\n \"userId\": 184,\n \"books\": [\n {\n \"book\": \"Demons\",\n \"price\": 10,\n\t\t\n },\n {\n \"book\": \"C# for beginners\",\n \"price\": 99,\n\t\t \n }\n ]\n }\n]\n", "text": "Collection of booksshoppinglist(The book in books is the bookcode of the book in books master collection)Desired Resultset to get a matching book’s name in shoppinglist after joining books.bookcode = shoppinglist.books.book for shopping list collection", "username": "Sandeep_B" }, { "code": "db.shoppingList.aggregate([{\n $unwind: \"$books\"\n },\n {\n $lookup: {\n from: \"books\",\n localField: \"books.book\",\n foreignField: \"bookcode\",\n as: \"booklist\"\n }\n },\n {\n $unwind: \"$booklist\"\n },\n {\n $group: {\n _id: \"$id\",\n userId: {\n $first: \"$userId\"\n },\n books: {\n $addToSet: {\n book: \"$booklist.name\",\n price: \"$books.price\"\n }\n }\n }\n },\n])\n[\n {\n _id: 1,\n userId: 75,\n books: [\n { book: 'Crime and punishment', price: 50 },\n { book: 'Atomic habits', price: 20 }\n ]\n },\n {\n _id: 2,\n userId: 184,\n books: [\n { book: 'Demons', price: 10 },\n { book: 'C# for beginners', price: 99 }\n ]\n }\n]\n", "text": "Hi @Sandeep_B, the below should allow you to get started:This gives the results you were looking for:", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to join two collections (master and transaction) in mongoDB based on a foreign key relation
2022-10-04T10:30:10.701Z
How to join two collections (master and transaction) in mongoDB based on a foreign key relation
1,842
null
[]
[ { "code": "", "text": "Hi,\nI want to create an offline-first app where the user can create blog posts offline and when online is available the data will be sent to Atlas. The offline post needs to be sent to Atlas, but the post in Atlas does not send back to the mobile app.Is Realm the right tool for this?", "username": "Argenis_Leon1" }, { "code": "", "text": "Welcome @Argenis_Leon1 to the forumsThe question kind of describes how Realm works by default, so the answer is, generally speaking, yes.Realm is an offline first database - once data is written locally, at some point in time in the near future - usually within milliseconds, that data is 'sync’ed to the server.At that time though, if there is fresh data on the server that has not been sync’d to the client, that will happen. But in your case if no other data exists to be sync’d then nothing will be sent to the client.You have control over what is sync’d and what isn’t as well via partition based sync’d and flexible sync.There are additional options which involve using the Swift SDK to write data directly to Atlas or Asymmetric objects.All of this is covered in the documentation.", "username": "Jay" } ]
Send offline post to Atlas
2022-10-04T02:31:53.323Z
Send offline post to Atlas
1,183
null
[]
[ { "code": "", "text": "I am trying to call a second function from within another app service function. I do a findOneAndReplace in my current function and get the document back. By default, it is in EJSON. I then pass this as a param to my second function via context.functions.execute(‘second function’, doc). Inside of this second function I use the http service to send a POST request to an external service. No matter what I do, JSON.stringify(), parse, or any combination the payload I sent in the POST is ALWAYS in EJSON and causes errors in the downstream system since it has no idea what the types are.Is it not possible to convert the EJSON to a pure JSON object? The docs mention using stringify, which yes if you log it out in the function the stringified result looks correct, but I cannot get it converted to a JSON object even running JSON.parse(JSON.stringify(doc)). ALWAYS EJSON.I always have encodeBodyAsJSON set on the http POST", "username": "Luke_Snyder" }, { "code": "", "text": "Are you able to log what is being received down stream?I don’t think you would easily be able to convert things with just JSON stringify and parse… I think you’ll have to manually convert things if I had to best guess. This is one of the annoying things about query docs in these functions ", "username": "Lukas_deConantseszn1" }, { "code": "JSON.stringify()", "text": "It should be that simple, from their docs on Atlas FunctionsTo return a value as standard JSON, call JSON.stringify() on the value and then return the stringified resultI can confirm that when stringified and logged out, the document has all types removed, but when I JSON.parse it and send the request, the payload is still in EJSON format", "username": "Luke_Snyder" }, { "code": "", "text": "I figured out the issue, pretty ridiculous to be honest. First of all this is documented nowhere, I had to read the code and stumbled upon the comments within the code. This is the description of the encodeBodyAsJson param you can pass to the http client within the stitch function:Sets whether or not the included body should be encoded as extended JSON when sent to the url in this request. Defaults to false.As far as I can tell this is document nowhere this explanation. So if you thought a param called “Encode body as JSON” encoded the body as JSON you would be a fool. It encodes it as EJSON", "username": "Luke_Snyder" } ]
Can't call http service with JSON as payload, always sends as EJSON
2022-10-03T23:18:41.098Z
Can't call http service with JSON as payload, always sends as EJSON
1,611
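Editor's note on the thread above ("Can't call http service with JSON as payload"): one way to sidestep the EJSON encoding described is to serialise the body yourself and pass it as a string, leaving `encodeBodyAsJSON` unset. This is a sketch under the assumption that the downstream endpoint accepts a pre-stringified body; the URL and header values are placeholders.

```javascript
// Sketch (assumes the http service accepts a string body; values are placeholders)
const plain = JSON.parse(JSON.stringify(doc));   // strip BSON types into plain JSON values
const response = await context.http.post({
  url: "https://example.com/webhook",            // hypothetical endpoint
  headers: { "Content-Type": ["application/json"] },
  body: JSON.stringify(plain)                    // sent as-is, not re-encoded as EJSON
});
```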
null
[ "atlas-search" ]
[ { "code": "", "text": "My use case is a bit strange. I’d like to use Atlas search (full text search) because I want the ability to filter on arbitrary columns (the collection has a dynamic Atlas search that covers all columns). I also want to have the search result sorted by a given column.Here are my findings:It seems the “$search” operator in the aggregation pipeline doesn’t support sorting. If I use “$sort” outside of “$search”, sorting basically happens in memory after search is done. With a large collection (eg: hundreds of millions), this would be very slow well.The closest I can find is the “$near” operator (with a heavy weight), which can be used to mimic sorting because results will be returned by how close they are to the column (to which $near is applied). One limitation of $near is that it only supports “number” and “date” columns.Elastic Search has no trouble sorting results based on any column of any type as far as I can tell.Is my understanding correct? Thanks for your expert input!", "username": "Jingjing_Duan" }, { "code": "", "text": "Would love to get an answer on how to best $sort after $search. I have a $search that uses synonyms which I believe isn’t support by fuzzy matching.", "username": "Tom_Cernera" }, { "code": "", "text": "Hi there! We have an early version of a solution to address this. Feel free to vote on this feedback item to be updated on availability. Thanks!", "username": "Elle_Shwer" }, { "code": "", "text": "Thanks for the response but I am not seeing anywhere that actually addresses a solution, just that there might be an implementation at some point.", "username": "Tom_Cernera" }, { "code": "", "text": "Any update about this? we are in the same spot, search go fast but we can’t order, drops the performance a lot.", "username": "icp" }, { "code": "", "text": "Hi, the suggestion is to use “stored source” if you plan to use $sort. This should help your performance incrementally. We are working on a more performant solution. If you could fill this form out, that would be incredibly helpful in assuring we are meeting your needs.", "username": "Elle_Shwer" }, { "code": "", "text": "The form ask for permission, seems that I dont have it", "username": "icp" }, { "code": "", "text": "Hi @icp the form should be fixed now. (https://forms.gle/HeSDMxFHxjhugQJU6)", "username": "Elle_Shwer" } ]
How to sort Atlas search results?
2021-11-10T00:43:09.075Z
How to sort Atlas search results?
4,367
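Editor's note on the thread above ("How to sort Atlas search results?"): the work-around discussed looks roughly like the sketch below — an ordinary `$sort` placed after `$search` — with the caveat the thread already states: the sort happens in memory after the search, so it degrades on large result sets. The index, field names, and query are placeholders.

```javascript
// Placeholder index/field names; the sort is applied outside the $search stage.
db.products.aggregate([
  { $search: { index: "default", text: { query: "coffee", path: "name" } } },
  { $sort: { price: -1 } },     // performed in memory on the matched documents
  { $limit: 20 }
]);
```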
null
[ "aggregation", "queries", "mongoose-odm" ]
[ { "code": "[\n {\n \"_id\": \"63364bd1c141203b1744aca9\",\n \"name\": \"First\",\n \"desc\": \"FP\",\n \"date\": \"Fri Sep 30 2022 07:22:17 GMT+0530 (India Standard Time)\",\n \"timer\": \"00:00:09\",\n \"start\": \"Fri Sep 30 2022 07:22:08 GMT+0530 (India Standard Time)\",\n \"user\": \"6335094bb6c467c3eb7c2534\",\n \"__v\": 0,\n \"id\": \"63364bd1c141203b1744aca9\"\n },\n {\n \"_id\": \"63364bf6c141203b1744acac\",\n \"name\": \"Second\",\n \"desc\": \"Second FP\",\n \"date\": \"Fri Sep 30 2022 07:22:54 GMT+0530 (India Standard Time)\",\n \"timer\": \"00:00:18\",\n \"start\": \"Fri Sep 30 2022 07:22:35 GMT+0530 (India Standard Time)\",\n \"user\": \"6335094bb6c467c3eb7c2534\",\n \"__v\": 0,\n \"id\": \"63364bf6c141203b1744acac\"\n },\n {\n \"_id\": \"63364c1ac141203b1744acb2\",\n \"name\": \"honnda\",\n \"desc\": \"honda Project\",\n \"date\": \"Fri Sep 30 2022 07:23:30 GMT+0530 (India Standard Time)\",\n \"timer\": \"00:00:10\",\n \"start\": \"Fri Sep 30 2022 07:23:19 GMT+0530 (India Standard Time)\",\n \"user\": \"633509deb6c467c3eb7c253c\",\n \"__v\": 0,\n \"id\": \"63364c1ac141203b1744acb2\"\n },\n {\n \"_id\": \"63365d1ffb94d401b4bf993a\",\n \"name\": \"Next Task\",\n \"desc\": \"New task\",\n \"date\": \"Fri Sep 30 2022 08:36:07 GMT+0530 (India Standard Time)\",\n \"timer\": \"00:00:07\",\n \"start\": \"Fri Sep 30 2022 08:36:00 GMT+0530 (India Standard Time)\",\n \"user\": \"6335094bb6c467c3eb7c2534\",\n \"__v\": 0,\n \"id\": \"63365d1ffb94d401b4bf993a\"\n },\n]\n[\n {\n _id: \"63364bd1c141203b1744aca9\",\n name: \"First\",\n desc: \"FP\",\n date: \"Fri Sep 30 2022 07:22:17 GMT+0530 (India Standard Time)\",\n timer: \"00:00:09\",\n start: \"Fri Sep 30 2022 07:22:08 GMT+0530 (India Standard Time)\",\n user: [\n {\n _id: \"6335094bb6c467c3eb7c2534\",\n name: \"sam\",\n email: \"[email protected]\",\n passwordHash:\n \"$2a$10$NjeQ0wn6sSkzTGwbFyxb2exA1XfoGvEQuQ7ZnkD2MVYR1tyOfC0ja\",\n isAdmin: true,\n __v: 0,\n },\n ],\n __v: 0,\n },\n {\n _id: \"63364bf6c141203b1744acac\",\n name: \"Second\",\n desc: \"Second FP\",\n date: \"Fri Sep 30 2022 07:22:54 GMT+0530 (India Standard Time)\",\n timer: \"00:00:18\",\n start: \"Fri Sep 30 2022 07:22:35 GMT+0530 (India Standard Time)\",\n user: [\n {\n _id: \"6335094bb6c467c3eb7c2534\",\n name: \"sam\",\n email: \"[email protected]\",\n passwordHash:\n \"$2a$10$NjeQ0wn6sSkzTGwbFyxb2exA1XfoGvEQuQ7ZnkD2MVYR1tyOfC0ja\",\n isAdmin: true,\n __v: 0,\n },\n ],\n __v: 0,\n },\n {\n _id: \"63364c1ac141203b1744acb2\",\n name: \"honnda\",\n desc: \"honda Project\",\n date: \"Fri Sep 30 2022 07:23:30 GMT+0530 (India Standard Time)\",\n timer: \"00:00:10\",\n start: \"Fri Sep 30 2022 07:23:19 GMT+0530 (India Standard Time)\",\n user: [\n {\n _id: \"633509deb6c467c3eb7c253c\",\n name: \"VJsam\",\n email: \"[email protected]\",\n passwordHash:\n \"$2a$10$.GYqYFho5bJzRrcx90CzJe..GAN86VSfJc.WT19.qeHFugCtYSwyG\",\n isAdmin: false,\n __v: 0,\n },\n ],\n __v: 0,\n },\n {\n _id: \"63365d1ffb94d401b4bf993a\",\n name: \"Next Task\",\n desc: \"New task\",\n date: \"Fri Sep 30 2022 08:36:07 GMT+0530 (India Standard Time)\",\n timer: \"00:00:07\",\n start: \"Fri Sep 30 2022 08:36:00 GMT+0530 (India Standard Time)\",\n user: [\n {\n _id: \"6335094bb6c467c3eb7c2534\",\n name: \"sam\",\n email: \"[email protected]\",\n passwordHash:\n \"$2a$10$NjeQ0wn6sSkzTGwbFyxb2exA1XfoGvEQuQ7ZnkD2MVYR1tyOfC0ja\",\n isAdmin: true,\n __v: 0,\n },\n ],\n __v: 0,\n },\n];\n[{ name: VJsam ,email: [email protected], totalTasks: 1, totaltime: sum of timer (to mins, like 120mins)},\n{ name: sam 
,email: [email protected], totalTasks: 3, totaltime: sum of timer (to mins, like 120mins)}]\n", "text": "Hi, newbie here, So I have got this data here and I want to populate the user field and aggregrate.This is the JSON data I have,I would like to populate the user filed and perform aggregration likeI would then like to aggregrate this like,", "username": "Abhijith_JB" }, { "code": "", "text": "I would like to populate the user filed and perform aggregration likeYou share a user array that has some values like email::[email protected] but you do not tell us where those values comes from.Are these documents from another collection?Are these objects you have in your code?", "username": "steevej" } ]
How to populate object before aggregation?
2022-10-01T06:45:24.463Z
How to populate object before aggregation?
1,268
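Editor's note on the thread above ("How to populate object before aggregation?"): assuming the user details live in a separate `users` collection and that the `user` field and `users._id` share the same type (both ObjectId or both strings), the per-user summary could be sketched with `$lookup` and `$group` as below. The collection names and the timer parsing are illustrative, not taken from the original post.

```javascript
// Sketch only: "tasks" and "users" are assumed collection names.
db.tasks.aggregate([
  { $lookup: { from: "users", localField: "user", foreignField: "_id", as: "user" } },
  { $unwind: "$user" },
  { $set: {                                       // turn "HH:MM:SS" into seconds
      seconds: { $let: {
        vars: { p: { $split: ["$timer", ":"] } },
        in: { $add: [
          { $multiply: [ { $toInt: { $arrayElemAt: ["$$p", 0] } }, 3600 ] },
          { $multiply: [ { $toInt: { $arrayElemAt: ["$$p", 1] } }, 60 ] },
          { $toInt: { $arrayElemAt: ["$$p", 2] } }
        ] }
      } }
  } },
  { $group: {
      _id: "$user._id",
      name: { $first: "$user.name" },
      email: { $first: "$user.email" },
      totalTasks: { $sum: 1 },
      totalMinutes: { $sum: { $divide: ["$seconds", 60] } }
  } }
]);
```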
null
[ "aggregation", "queries", "node-js", "transactions" ]
[ { "code": " let logs = await this.profileModel.aggregate([\n {\n $match: {\n bindedClient: name,\n },\n },\n {\n $lookup: {\n from: 'tpes',\n localField: 'nameUser',\n foreignField: 'merchantName',\n as: 'tpesBySite',\n },\n },\n {\n $lookup: {\n from: 'logs',\n localField: 'tpesBySite.terminalId',\n foreignField: 'terminalId',\n as: 'logsByTpes',\n },\n },\n\n { $unwind: '$tpesBySite' },\n\n { $unwind: '$logsByTpes' },\n {\n $project: {\n // bindedSuperAdmin: '$bindedSuperAdmin',\n // bindedBanque: '$bindedBanque',\n // bindedClient: '$bindedClient',\n uniqueID: '$logsByTpes.uniqueID',\n sn: '$logsByTpes.sn',\n terminalId: '$logsByTpes.terminalId',\n transactionAmount: '$logsByTpes.transactionAmount',\n currencyCode: '$logsByTpes.currencyCode',\n transactionDate: '$logsByTpes.transactionDate',\n transactionTime: '$logsByTpes.transactionTime',\n transactionType: '$logsByTpes.transactionType',\n cardPAN_PCI: '$logsByTpes.cardPAN_PCI',\n onlineRetrievalReferenceNumber:\n '$logsByTpes.onlineRetrievalReferenceNumber',\n outcome: '$logsByTpes.outcome',\n encryptionKeyKCV: '$logsByTpes.encryptionKeyKCV',\n transactionEncrypted: '$logsByTpes.transactionEncrypted',\n },\n },\n ]);\n return logs;\n", "text": "I want to optimize the time of lookup query or find() for more than 1000 doc ,\nI implemented this query to make join between 3 collection as shown bellowIt take more than 5 sec just for 100 docs it’s kinda weird what if I work with more than 1000 , I’m sure there is a solution for this problem ,\nI think for pagination but I’m working with ng2-smart-table I didn’t know what can get the indexes of every pages it seems every think behind the scene which I can’t handle\nPLEASE SOME ONE HELP ME I’M STUCK", "username": "skander_lassoued" }, { "code": "", "text": "The first step is to have the following indexes:", "username": "steevej" } ]
How can I optimize a query?
2022-09-30T13:40:51.995Z
How can I optimize a query?
1,378
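Editor's note on the thread above ("How can I optimize a query?"): the indexes implied by the answer would look something like the following — one for the initial `$match` and one per `$lookup` join key. The name of the collection behind `profileModel` is a guess.

```javascript
// "profiles" is an assumed collection name for profileModel
db.profiles.createIndex({ bindedClient: 1 });   // supports the initial $match
db.tpes.createIndex({ merchantName: 1 });       // supports the first $lookup's foreignField
db.logs.createIndex({ terminalId: 1 });         // supports the second $lookup's foreignField
```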
null
[ "queries", "node-js" ]
[ { "code": "{\n _id: '1234',\n name: 'name1',\n age: 32\n}\n{\n _id: '1235',\n name: 'name1',\n age: 34\n}\n", "text": "I have records on my table likeSo I want to update both or many records together with age: 34 for 1234 and 36 for 125. Is this possible to update these records in bulk?", "username": "Manish_Kumar20" }, { "code": "{ updateOne: {\n filter: { id: \"1234\" },\n update: { $set: { age: 34 } }\n } }\n", "text": "You use bulkWrite(), creating one updateOne for each of the id you want to update specifying the value you want to set.Each updateOne would look like:", "username": "steevej" } ]
How to update bulk record in mongo DB with different values?
2022-09-30T08:49:08.964Z
How to update bulk record in mongo DB with different values?
1,392
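Editor's note on the thread above ("How to update bulk record in mongo DB with different values?"): putting the answer together for the two documents in the question, a complete call might look like this in shell / Node driver syntax. The collection name is a placeholder.

```javascript
db.people.bulkWrite([                                       // "people" is a placeholder name
  { updateOne: { filter: { _id: "1234" }, update: { $set: { age: 34 } } } },
  { updateOne: { filter: { _id: "1235" }, update: { $set: { age: 36 } } } }
]);
```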
https://www.mongodb.com/…98cbbbbcbaee.png
[ "indexes" ]
[ { "code": "pushs?author.slug=${this.user.slug}&_sort=createdAt:desc&_start=${this.userPush.paginate}&_limit=5\ndb.pushs.createIndex({\"author.slug\": 1})\n", "text": "Hello ! i’m a beginner with this subject, and i didn’t find the solution for my problem :I got a query that get post (named pushs in my project) by author’s slug, and by chronological order, with a limit of 5.When i call this query, in production environment / database, this query is taking so long time, like 8-10s. I did my research and it seems it’s because i have 800+ documents in my collection, and mongo is doing a “collectionScan” to get the push with the good authors.so i’ve read that to get better performance, i have to create an indexe. So to me, it’s relevant to make an indexe by “author.slug”, because it’s how i call the collection, with the query above . i created it with this commandBut with this one, nothing change, the query is taking the same time? what did i wrong ?Here what a “pushs” looks like :", "username": "ImJustLucas" }, { "code": "__v:0author.slugpopulate()", "text": "Hi @ImJustLucas and welcome to the community!!To have better understanding on the issue being seen, could you please confirm with the following details which would help in reproducing the issue in localAlso, seeing the field __v:0, are you using mongoose by any chance? I’m curious if the author.slug field is the result of a mongoose populate() call ?Best Regards\nAasawari", "username": "Aasawari" }, { "code": "author.slugbecause it seems the ", "text": "Hi @Aasawari , thanks you for the answer, and sorry about this late response (i was in school week )1- here a screenshot of a author document (it is called users)\n\nSans titre1537×714 85.7 KB\nfor the rest, i’m using strapi for my backend, so idk where i can find this informations, it seems it do it for me but there is my question, can i create a index of the pushs collection, order by author.slug, where author is a another collection and slug a fields of a user ?if yes, how ? because it seems the db.pushs.createIndex({“author.slug”: 1})` don’t work, don’t increase performance on my query ", "username": "ImJustLucas" } ]
Which indexes can I create for relational fields
2022-09-21T10:02:31.073Z
Which indexes can I create for relational fields
1,909
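Editor's note on the thread above ("Which indexes can i create for relational fields"): if `author.slug` really is embedded in each `pushs` document (which the replies are still trying to confirm), a compound index covering both the filter and the sort would normally serve the query better than a single-field one, and `explain` shows whether it is picked up. This is a sketch, not a confirmed fix.

```javascript
// Compound index: equality field first, then the sort field
db.pushs.createIndex({ "author.slug": 1, createdAt: -1 });

// Verify the winning plan is an index scan rather than a collection scan
db.pushs.find({ "author.slug": "some-author" })
        .sort({ createdAt: -1 })
        .limit(5)
        .explain("executionStats");
```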
null
[ "aggregation", "atlas-search", "text-search" ]
[ { "code": "RuosiamasiRuošiamasi", "text": "Hello,my task is simple:\nto find results based on query with skipping diacritic sensitive.What problems I am facing:\n$text indes is not supporting partial text search: https://jira.mongodb.org/browse/SERVER-15090\n$search - when using together with Atlas search index is not suporting diacritic - meaning if i search for world Ruosiamasi it will not find world Ruošiamasi.What should I do? I never thought that mongo has such limitation!\nI am stuck. Please advice.Thanks", "username": "Vytautas_Pranskunas" }, { "code": "", "text": "Hello @Vytautas_Pranskunas ,You can use a custom analyzer and run a diacritic-insensitive query. I would recommend the following documentation on How to Define a Custom Analyzer and Run a Diacritic-Insensitive Query which may suit your use case.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hi, i will check custom analyzers but i think this is a common use car for all non English alphabets so what about supporting partial text search on $text index?", "username": "Vytautas_Pranskunas" }, { "code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"title\": {\n \"type\": \"document\",\n \"fields\": {\n \"lt\": {\n \"analyzer\": \"diacriticFolder\",\n \"searchAnalyzer\": \"diacriticFolder\",\n \"type\": \"string\"\n },\n \"en\": {\n \"analyzer\": \"diacriticFolder\",\n \"searchAnalyzer\": \"diacriticFolder\",\n \"type\": \"string\"\n }\n }\n }\n }\n", "text": "Ok i was able to do my indexes following your totorial but i have few more questions:is there any way to dynamic to sub field that all nested fields are indexed? because if at some point we decide to add new language we will have to not forget to create new index.", "username": "Vytautas_Pranskunas" }, { "code": "", "text": "", "username": "Vytautas_Pranskunas" } ]
Partial text search or Atlas search index with diacriticSensitive are not working
2022-10-01T14:47:05.227Z
Partial text search or Atlas search index with diacriticSensitive are not working
3,028
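Editor's note on the thread above ("Partial text search or Atlas search index with diacriticSensitive are not working"): the custom analyzer referenced there is defined inside the search index itself; a minimal shape is sketched below. Setting the analyzer at the top level together with `"dynamic": true` should apply it to dynamically mapped string fields, which addresses the "new language" concern — but this is written from memory, so please verify the exact fields against the current Atlas Search documentation.

```json
{
  "analyzer": "diacriticFolder",
  "mappings": { "dynamic": true },
  "analyzers": [
    {
      "name": "diacriticFolder",
      "tokenizer": { "type": "standard" },
      "tokenFilters": [ { "type": "icuFolding" } ]
    }
  ]
}
```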
null
[ "replication", "python" ]
[ { "code": "stream {\n server {\n listen 27018;\n proxy_pass 10.1.3.108:27017;\n }\n}\nself.client = MongoClient(\n host=Bastion's public IP,\n port=27018,\n username=settings.MONGO_USERNAME,\n password=settings.MONGO_PASSWORD,\n authSource=settings.MONGO_AUTH_SOURCE,\n authMechanism=settings.MONGO_AUTH_MECHANISM,\n serverSelectionTimeoutMS=10000\n )\nself.client = MongoSession(\n host=Bastion's public IP,\n port=22,\n user='ec2-user',\n key=key_path,\n to_port=27017,\n to_host='10.1.3.108'\n )\n10.1.3.108:56424: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 6333c051c87d04dc997fdc49, topology_type: Unknown, servers: [<ServerDescription ('10.1.3.108', 56424) server_type: Unknown, rtt: None, error=NetworkTimeout('10.1.3.108:56424: timed out')>]>\n\n", "text": "HII want to ask how I solve this problem.currently I run mongoDB on AWS (EC2) and I configured replica set(which means I have 3 instances, primary, secondary, abiter, actually the configuration was successfullet me describe our AWS architectureWe have a Bastion instance on public that facing global internet(just for ssh tunneling)and as I mentioned We have 3 EC2 instances what running mongoDB replica on private subnetso, when I tried to connect from local(which means from global) I have to go to Bastion instance first and then connect to private MongoDB instancebut, \"No replica set members found yet, Timeout: 10.0s, Topology Description: <TopologyDescription id: 6333aa0a4f0ddb9c168e0143, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription (‘10.1.3.108’, 27017) server_type: Unknown, rtt: None>, <ServerDescription (‘10.1.3.43’, 27017) server_type: Unknown, rtt: None>, <ServerDescription (‘10.1.3.49’, 27017) server_type: Unknown, rtt: None>]>\n\"\nthis error message came out firstbecause I connected to mongoDB by using nginx reverse proxyIn Bastion that facing public,\n/etc/nginx/nginx.confI add this to redirect to actual mongoDB instance(primary)and then In applicatonat the log I found It reached successfuly first,I think after first touch, the primary mongodb instance return replica member’s name which I configured first to clientafter first touch client tried to connect private IP, but cilent couldn’t reach to private IPthat’s why this situation happened I guessand then I tried to connect via ssh tunneling In python appbut this time It didn’t find internal host ‘10.1.3.108’actually It doesn’t affect to actual service operation,this problem makes coworker couldn’t test on local.I know our test environment sucksBut I can’t helpplz give me solutionthanksP.S It didn’t any happen when we use mongoDB standalone", "username": "williams3443" }, { "code": "self.client = MongoSession(\n host=bastion's public IP,\n port=22,\n user='ec2-user',\n key=key_path,\n to_port=27018,\n to_host='127.0.0.1'\n )\n10.1.3.49:27017: timed out,10.1.3.108:27017: timed out,10.1.3.43:27017: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 633414a21bc35067764db51b, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('10.1.3.108', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('10.1.3.108:27017: timed out')>, <ServerDescription ('10.1.3.43', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('10.1.3.43:27017: timed out')>, <ServerDescription ('10.1.3.49', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('10.1.3.49:27017: timed out')>]>\n{\"t\":{\"$date\":\"2022-09-28T09:32:18.758+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, 
\"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.1.1.99:60790\",\"uuid\":\"54bf27c6-99ef-4d5f-afda-53e4187f1e68\",\"connectionId\":101,\"connectionCount\":15}}\n{\"t\":{\"$date\":\"2022-09-28T09:32:18.768+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn101\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.1.1.99:60790\",\"client\":\"conn101\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"4.1.1\"},\"os\":{\"type\":\"Darwin\",\"name\":\"Darwin\",\"architecture\":\"x86_64\",\"version\":\"10.16\"},\"platform\":\"CPython 3.7.9.final.0\"}}}\n{\"t\":{\"$date\":\"2022-09-28T09:32:18.779+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn101\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.1.1.99:60790\",\"uuid\":\"54bf27c6-99ef-4d5f-afda-53e4187f1e68\",\"connectionId\":101,\"connectionCount\":14}}\n{\"t\":{\"$date\":\"2022-09-28T09:32:18.788+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.1.1.99:60794\",\"uuid\":\"cba8ccb4-1353-4b41-9679-795484e7b062\",\"connectionId\":102,\"connectionCount\":15}}\n{\"t\":{\"$date\":\"2022-09-28T09:32:18.796+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn102\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.1.1.99:60794\",\"uuid\":\"cba8ccb4-1353-4b41-9679-795484e7b062\",\"connectionId\":102,\"connectionCount\":14}}\n", "text": "even I modify like thisI gotthisand I could seethis log,\nIt reached primary Mongodb instance which is located in private network once\n(10.1.1.99 < this is bastion private ip located In public)", "username": "williams3443" }, { "code": "", "text": "Hello @williams3443 ,Welcome to The MongoDB Community Forums!If I understand correctly, the bastion connection was setup to allow a connection only to 10.1.3.108:27017 however the replica set are using 10.1.3.49:27017, 10.1.3.108:27017, 10.1.3.43:27017 is this correct? Have you been successful in making the expected connections before, or it never succeed due to the bastion setup?MongoDB official drivers follow this spec for monitoring the state of all nodes in the replica set. This means that the driver must be able to connect to all nodes in the replica set. Connecting and monitoring to all nodes in a replica set is a necessity, since a replica set provides high availability. If the primary goes down, the driver needs to be able to automatically switch to the new primary. This would not be possible unless the driver can connect to all members.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "First thank you for replying @Tarun_Gaur ,actually I already solved this problem by using VPN service,this problem was a little complicatedafter first touch to primary via bastion host, It returned replica’s private IPthen client( in this case my local) tried to connect private IP again So I couldn’t reachbut after using VPN. client(local) can connect to private VPC subnetanyway really thanks to your reply,Regards,\nYoungHoon", "username": "williams3443" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connect issue from local to private MongoDB replica instances via public bastion
2022-09-28T03:38:50.547Z
Connect issue from local to private MongoDB replica instances via public bastion
2,644
null
[]
[ { "code": "import nc from 'next-connect';\nimport db from '../../../utils/db';\nimport User from '../../../models/User';\nimport bcrypt from 'bcryptjs/dist/bcrypt';\nimport { signToken } from '../../../utils/auth';\n\nconst handler = nc();\n\nhandler.post(async (req, res) => {\n await db.connect();\n var isSupplier=false;\n var zipcode=false;\n if(req.body.isSupplier!=undefined)\n {\n isSupplier= true;\n }\n else{\n isSupplier=false;\n }\n if(req.body.zipcode!=undefined)\n {\n \n zipcode=req.body.zipcode;\n }\n else{\n zipcode=\"\";\n }\n const newUser = new User(\n {\n firstName: req.body.firstName,\n lastName: req.body.lastName,\n email: req.body.email,\n password: bcrypt.hashSync(req.body.password),\n isAdmin: false,\n isSupplier:isSupplier,\n zipcode:zipcode\n \n }\n );\n \n try {\n const user = await newUser.save();\n \n const token = signToken(user);\n res.send(\n {\n token,\n _id: user._id,\n firstName: user.firstName,\n lastName: user.lastName,\n email: user.email,\n isAdmin: user.isAdmin,\n isSupplier: user.isSupplier,\n zipcode: user.zipcode\n }\n );\n }\n catch(err) {\n const {code} = err\n var message;\n console.log(err);\n if (code === 11000) {\n message=\"Email already exists\"\n }\n else\n {\n message=\"An error occured\"\n }\n res.status(500).json({\n message: message,\n error: err\n });\n }\n await db.disconnect();\n \n \n \n \n \n});\n\nexport default handler;\n", "text": "i have deployed my Next js app on versel and using mongodb altas for database, I sometimes get this error “mongodb uust be connected to perform this operation”. This error never occurs on locahost it occurs on versel and it occurs sometimes not everytime.Here is my code", "username": "Faiza_Bashir" }, { "code": "", "text": "Excuse me. Could you find the error? I have exactly the same problem. Thank you", "username": "Camilo_Valenzuela" } ]
Error mongodb must be connected to perform this action
2022-06-22T22:09:26.866Z
Error mongodb must be connected to perform this action
2,813
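Editor's note on the thread above ("Error mongodb must be connected to perform this action"): no resolution was posted, but connecting and disconnecting inside every handler is a common culprit in serverless setups. A frequently used pattern is to cache a single mongoose connection across invocations, roughly as sketched below; this is a general pattern, not a verified fix for this specific app.

```javascript
// Common serverless pattern (sketch): reuse one mongoose connection per lambda instance
import mongoose from 'mongoose';

let cached = global._mongooseConn || (global._mongooseConn = { conn: null, promise: null });

export default async function dbConnect() {
  if (cached.conn) return cached.conn;
  if (!cached.promise) {
    cached.promise = mongoose.connect(process.env.MONGODB_URI); // no disconnect per request
  }
  cached.conn = await cached.promise;
  return cached.conn;
}
```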
null
[ "replication", "transactions", "containers" ]
[ { "code": "", "text": "Hi folks,we are running MongoDB on premise with docker compose. In the development environment we run a single node replica set because we use multi document transactions. In production we run usually three nodes in the replica set. A customer doesn’t care about hot backup and has only one machine. Now, the MongoDB manual says: “Use standalone instances for testing and development, but always use replica sets in production.”Long story short: Is it recommended to run a single node replica set in production environment (apart of the obvious fact that if this node fails everything fails - it’s a single PC system, so, if this PC fails, everything fails)?Thanks, cheers, Daniel", "username": "Daniel_Camarena" }, { "code": "", "text": "It is not all recommended for prod\nYou should have minimum 3 node cluster with each node in a different data center for maximum availability/fault tolerance", "username": "Ramachandra_Tummala" }, { "code": "mongod", "text": "Hi @Daniel_Camarena ,As per @Ramachandra_Tummala (and the MongoDB server manual), the strong recommendation is to deploy a minimum of a three node replica set for production use cases as this provides:If all replica set members are on the same physical host you will have a single point of failure if the host server goes down, but can still realise some of the benefits of multiple copies of data and multiple redundant mongod processes. A more ideal deployment would have replica set members on separate physical hosts, as these processes will also be competing for the same system resources on a shared host.However, If those benefits are not important for your customer’s use case you can of course choose to deploy a more minimal configuration with operational risks.I would make sure you have an appropriate (and tested!) plan for backup and recovery for any production deployments. Single points of failure will usually adversely affect SLAs and Recovery Time Objectives if a deployment needs to be completely rebuilt after catastrophic failure.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Dear @Ramachandra_Tummala,thank you for your suggestions. As I mentioned:Thank you for your statements, regards, Daniel", "username": "Daniel_Camarena" }, { "code": "", "text": "Dear @Stennie_X,thank you for the explication. But exactly there is my doubt: The MongoDB doesn’t state clearly that I should use 3 nodes for production. It states that I should use a replica set for production, which could be perfectly a single node replica set.Data redundancy, high availability, failover and administrative convenience for upgrades without downtime are strong reasons for having three nodes. I have only one.As the web service which is accessing the data is running on the same host, it doesn’t matter if the DB isn’t available anymore when the host fails.Your point regarding system resources is exactly the point I’m aiming to: Why should I have 3 nodes competing for the same resources on the same physical machine, if one node is doing the same job perfectly?Your point regarding backup is much more important in a single node replica set like this. There are backups made every x hours and copied to other machine for cold backup / disaster recovery.Thank you, regards, Daniel", "username": "Daniel_Camarena" }, { "code": "mongod", "text": "Hi @Daniel_Camarena,thank you for the explication. But exactly there is my doubt: The MongoDB doesn’t state clearly that I should use 3 nodes for production. 
It states that I should use a replica set for production, which could be perfectly a single node replica set.You should read “replica set” as “minimum three member replica set” unless otherwise specified. The MongoDB manual does not encourage creating single member replica sets and always describes these as a group of mongod processes. The design intent of replica set deployments is to achieve all of the benefits I mentioned.Per Replica Set Members:The minimum recommended configuration for a replica set is a three member replica set with three data-bearing members: one primary and two secondary members.There are other suboptimal replica set configurations like adding an arbiter (see Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie for some caveats) or having less than three members.Why should I have 3 nodes competing for the same resources on the same physical machine, if one node is doing the same job perfectly?A single node replica set does not provide the same benefits as three members, but you can choose this deployment if the caveats are acceptable for your use case.The difference is a recommended configuration for general production use cases versus a possible configuration for your specific requirements. It sounds like you are fine with a single node replica set.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Dear @Stennie_X,thank you for your reply. It gives a good impression of where to focus on.It sounds like you are fine with a single node replica set.Yes, I am - if not, more than fine because with a single node replica set I gain a couple of advantages - not only disadvantages. If a single node replica set does the job in the same way than a 3 node replica set does, despite the obvious disadvantages of not having 3 nodes, I’m fine with that.Just to sum up the pros and cons to use a single node replica set:The last answer of @Stennie_X gives a complete impression. Read it!Thank you, regards, Daniel", "username": "Daniel_Camarena" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Should I use single node replica set for production?
2022-09-30T12:15:45.413Z
Should I use single node replica set for production?
4,861
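Editor's note on the thread above ("Should I use single node replica set for production?"): for anyone wanting to reproduce the single-node replica set discussed, the usual steps are sketched below (the set name is a placeholder); the trade-offs discussed in the thread still apply.

```javascript
// 1. Start mongod with a replica set name, e.g. in mongod.conf:
//      replication:
//        replSetName: rs0
// 2. Then initiate the one-member set once from mongosh:
rs.initiate();                   // defaults to a single member pointing at this node
rs.status().members.length;      // -> 1
```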
null
[ "upgrading" ]
[ { "code": "", "text": "I’m a non tech founder and I’m asking for urgent help!\nYesterday, I just upgraded my free mongo db account to dedicated m10 but as soon as my upgradation was complete my app signup/login stopped working.\nBut my database is working like I can see the users signing up in the document but they in the app they can’t login or signup.Do I need to make some changes in my backend? or is it something else I need to do", "username": "ajay_singh4" }, { "code": "", "text": "Hello @ajay_singh4,Welcome to The MongoDB Community Forums! It seems like you are facing a similar issue mentioned in this thread. Could you check if the solution provided in that thread solves your issue?Regards,\nTarun", "username": "Tarun_Gaur" } ]
After upgrading from free to m10 server my app is not working
2022-10-04T03:16:49.174Z
After upgrading from free to m10 server my app is not working
2,205
null
[ "aggregation" ]
[ { "code": "{\n $lookup: {\n from: 'some_table',\n let: { productId: '$_id' },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n { $in: ['$$productId', '$products._id'] },\n {\n $eq: ['$_id', aTypeIOfId)],\n },\n ],\n },\n },\n },\n ],\n as: 'another_new_table',\n },\n }\n", "text": "as I’m probably familiar with another variation of $lookup (with localField, foreignField), can you explain to me what this does? Also I’m familiar with $match that has some field…", "username": "Florentino_Tuason" }, { "code": "{\n $lookup:\n {\n from: <joined collection>,\n let: { <var_1>: <expression>, …, <var_n>: <expression> },\n pipeline: [ <pipeline to run on joined collection> ],\n as: <output array field>\n }\n}\npipelinepipelinepipeline[]Join Conditions and Subqueries on a Joined Collection", "text": "Hello @Florentino_Tuason ,Welcome to The MongoDB Community Forums! MongoDB supports:In your Query, your $lookup stage is using below syntaxHere,from - Specifies the collection in the same database to perform the join operation.let (Optional) - Specifies variables to use in the pipeline stages.pipeline - Specifies the pipeline to run on the joined collection. The pipeline determines the resulting documents from the joined collection. To return all documents, specify an empty pipeline [] .as - Specifies the name of the new array field to add to the joined documents.To learn more about this, please refer Join Conditions and Subqueries on a Joined Collection.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can you tell me what is the meaning of this query? (this is part of an aggregation pipeline)
2022-09-29T16:06:59.349Z
Can you tell me what is the meaning of this query? (this is part of an aggregation pipeline)
1,186
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Hi Everyone,We work with mobile POS devices and the traffic is allowed only for the URLs that our app consumes over sim cards.Now we started work with Realm MongoDB Sync feature and the sync doesn’t work when we’re connected using the sim card.I need allow the traffic from the sim cards of my devices but I have no ideia what domain, port and protocol the Realm Syncronization are using. Is there nay place inside Atlas that I can check this ?Best Regards\nBrainer Konno", "username": "Brainer_Konno" }, { "code": "*.realm.mongodb.com", "text": "Welcome to the MongoDB Community @Brainer_Konno !For more information about Atlas security, please refer to Atlas App Servies - Application Security.I believe the information you are looking for is:When you use Device Sync, you can use DNS filtering to allow connections from the Sync client to the Sync server. Using DNS filtering, you can access *.realm.mongodb.com via HTTPS or port 443.Regards,\nStennie", "username": "Stennie_X" } ]
Realm Sync / Allow Traffic SIM CARDs
2022-10-03T19:16:03.461Z
Realm Sync / Allow Traffic SIM CARDs
1,614
null
[ "containers" ]
[ { "code": "WARNING: MongoDB 5.0+ requires ARMv8.2-A or higher, and your current system does not appear to implement any of the common features for that!\n see https://jira.mongodb.org/browse/SERVER-55178\n see also https://en.wikichip.org/wiki/arm/armv8#ARMv8_Extensions_and_Processor_Features\n see also https://github.com/docker-library/mongo/issues/485#issuecomment-970864306\n", "text": "Hi,as it stands, there is no current (precompiled and packaged) mongodb server for Raspberry Pi OS.Yes, it’s possible to compile the community edition, but then it’s missing systemd control files. And it’s tedious to keep it up to date.Then there is the Ubuntu version - which has a bad taste on Raspberry Pi OS (which is Debian), but seems to work as of server 4.4, while 5.0 crashs with “illegal instruction”.Finally, there’s the Docker image, which also doesn’t run in 5.0, but warns:which is more helpful than the “Illegal instruction”, but doesn’t work like that.So what does the roadmap say?", "username": "uj_r" }, { "code": "debian", "text": "Hi @uj_r,Official packaged binaries for MongoDB target server-class environments which unfortunately have more modern microarchitecture requirements in MongoDB 5.0+ than the current generation of Raspberry Pi CPUs (per the information you’ve found).Your options are as per Core dump on MongoDB 5.0 on RPi 4 - #14 by StennieI’m not aware of any current plans to add newer binary packages targeting Raspberry Pi microarchitecture, so using MongoDB 4.4 binaries would be the most straightforward solution.Yes, it’s possible to compile the community edition, but then it’s missing systemd control files.Debian packaging files are available in the debian directory of the MongoDB source code on GitHub:master/debianThe MongoDB Database. Contribute to mongodb/mongo development by creating an account on GitHub.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Will there be server and tools for Raspberry 4 - Raspberry Pi OS (arm64)?
2022-05-27T14:59:40.719Z
Will there be server and tools for Raspberry 4 - Raspberry Pi OS (arm64)?
4,067
null
[ "queries", "node-js", "crud", "server" ]
[ { "code": "const defaultUrl = 'mongodb://localhost:27017'\nconst defaultName = 'tumbleweed'\n\nclass DataBaseConfig {\n\n #dbUrl\n #dbName\n\n constructor() {\n this.#dbUrl = this.dbConnectUrl\n this.#dbName = this.dbConnectName\n }\n\n get dbConnectUrl() {\n return this.#dbUrl === undefined ? defaultUrl : this.#dbUrl\n }\n\n set dbConnectUrl(value) {\n this.#dbUrl = value === undefined ? defaultUrl : value\n }\n\n get dbConnectName() {\n return this.#dbName === undefined ? defaultName : this.#dbName\n }\n\n set dbConnectName(value) {\n this.#dbName = value === undefined ? defaultName : value\n }\n\n}\n\nconst DataBaseShareConfig = new DataBaseConfig()\nexport default DataBaseShareConfig\nimport { MongoClient } from \"mongodb\";\nimport DataBaseShareConfig from \"./db_config.js\";\n\nclass DataBase {\n\n #db\n\n constructor() {\n this.#db = null\n }\n\n async #connect() {\n return new Promise(async (resolve, reject)=> {\n try {\n console.log(`begain to connecting: ${DataBaseShareConfig.dbConnectUrl}`)\n const client = await MongoClient.connect(DataBaseShareConfig.dbConnectUrl)\n this.#db = client.db(DataBaseShareConfig.dbConnectName)\n console.log(`db: ${DataBaseShareConfig.dbConnectName} connected succeed`)\n resolve(this.#db)\n } catch (error) {\n reject(error)\n }\n })\n }\n\n async find(collectionName, json) {\n console.log(\"begain to find...\")\n return new Promise(async (resolve, reject)=> {\n try {\n if(!this.#db) {\n await this.#connect()\n const collection = this.#db.collection(collectionName)\n const result = await collection.find(json).toArray()\n resolve(result)\n } else {\n const collection = this.#db.collection(collectionName)\n const result = await collection.find(json).toArray()\n resolve(result)\n }\n } catch (error) {\n reject(error)\n }\n })\n }\n\n}\n\nconst DataBaseShareInstance = new DataBase()\nexport default DataBaseShareInstance\nimport DataBaseShareInstance from \"./db/db.js\"\nimport DataBaseShareConfig from \"./db/db_config.js\"\n\nDataBaseShareConfig.dbConnectUrl = 'mongodb://localhost:27017'\nDataBaseShareConfig.dbConnectName = 'tumbleweed'\n\n\nconst main = (function () {\n\n DataBaseShareInstance.find(\"users\", {name: 'fq'}).then(result => {\n console.log(result)\n }).catch(error => {\n console.log(error)\n })\n\n})()\n", "text": "Hello everyone, I am using the official mongodb driver package to develop some interfaces for operating the database, but I found that even if I use try and catch, I still cannot catch some errors. I use the windows operating system for development. If I want to connect to my database, I have to start the mongdb service first, but when I do not start the service and then try to use the official mongodb driver package to connect to the database, this npm package will not throw any error, this is just one of the cases where no error is thrown. Does this mongodb npm package provide any other way for users to catch errors?This is the npm package I use: mongodbThis is my code:db_config.jsdb.jsmain.js", "username": "Arrose_Chen" }, { "code": "", "text": "Welcome to the MongoDB Community @Arrose_Chen !Were you able to find an answer for this question?If not, can you clarify which errors you expect to catch but that are not caught in your current code?Per the MongoDB Server Discovery and Monitoring (SDAM) specification, clients do not perform any I/O in their constructor so perhaps you are trying catch some exceptions earlier than they will be thrown.Regards,\nStennie", "username": "Stennie_X" } ]
Why does mongodb's official npm package(mongodb) can not catch some errors?
2022-05-22T12:56:46.162Z
Why does mongodb's official npm package(mongodb) can not catch some errors?
2,863
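Editor's note on the thread above ("Why does mongodb's official npm package(mongodb) can not catch some errors?"): building on the SDAM point — the constructor does no I/O, so a failure only surfaces once an operation forces server selection, after `serverSelectionTimeoutMS` (30 s by default). A stripped-down sketch of where the error actually becomes catchable:

```javascript
const { MongoClient } = require('mongodb');

async function main() {
  // 3 s instead of the default 30 s so a down server fails fast (value is illustrative)
  const client = new MongoClient('mongodb://localhost:27017', { serverSelectionTimeoutMS: 3000 });
  try {
    await client.connect();                       // nothing is thrown earlier, in the constructor
    const users = await client.db('tumbleweed').collection('users').find({ name: 'fq' }).toArray();
    console.log(users);
  } catch (err) {
    // e.g. a server-selection error when the mongod service is not running
    console.error('could not reach MongoDB:', err.message);
  } finally {
    await client.close();
  }
}

main();
```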
null
[ "security" ]
[ { "code": "", "text": "If I modify the security lines of the mongod.conf file, I can log in without security and see the database, how could I avoid that?", "username": "ivan_mg" }, { "code": "", "text": "After modifying mongod.conf you have to stop & start mongod (assuming you edited the file while mongod was up)\nPlease explain what have you tried and what is not working", "username": "Ramachandra_Tummala" }, { "code": "", "text": "first start the service with authorization enabled then I created a new user, forgot the password and could not enter so I modified mongod.conf and in the authorization line I put disabled, and you can enter without problem, my question if mongodb allows a file of configuration allow access because it does not have some security I mean the mongod.conf file?Or could I create another file mongod.conf and have as path another database created by another configuration file for example mongod2.conf?thank you very much for your reply", "username": "ivan_mg" }, { "code": "", "text": "Or could I create another file mongod.conf and have as path another database created by another configuration file for example mongod2.conf?Definitively.You should take M103 to know more about this.", "username": "steevej" }, { "code": "", "text": "Thanks for your answer, I will follow your advice", "username": "ivan_mg" }, { "code": "", "text": "Welcome to the MongoDB Community @ivan_mg !if mongodb allows a file of configuration allow access because it does not have some security I mean the mongod.conf file?Access to the MongoDB server configuration file (and other files in your host O/S environment) is determined by your O/S security (firewalls, remote access, user account restrictions, etc). An administrator will full access to the host environment can stop, start, and reconfigure processes.Users connecting to your MongoDB deployment do not need access to the host environment and cannot reconfigure process-level options like enabling or disabling security for a MongoDB deployment.For more information on securing your MongoDB deployment, please review the MongoDB Security Checklist.For information on improving your O/S security, try searching for articles mentioning “securing” or “hardening” with your O/S version, for example: “security hardening Ubuntu 20.04” or “security hardening windows server 2019”.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to take care of the mongod.conf file?
2022-05-22T03:38:30.160Z
How to take care of the mongod.conf file?
2,625
null
[ "ops-manager" ]
[ { "code": "", "text": "Hello Community ,\nI need to setup one OpsManager to send events to external monitoring then raise tickets . We plan to use API for this external monitoring. But i do know now and cannot find anything about this : What user do i need to create and what access has to be granted ? Do i need to use different port for connection and so on. If someone has experience whit such or know how to setup i will be greatful!\nThank you in advance.", "username": "valenetin_bahchevanov" }, { "code": "", "text": "Hi @valenetin_bahchevanov!I assume you probably already found an answer for this question (and would have access to Commercial Support If you are using Ops Manager for a production environment).However, you can find more information in the documentation on Third-Party Service Integrations — MongoDB Ops Manager 6.0.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
OPsManager with API for external Monitoring
2022-04-21T05:41:41.069Z
OPsManager with API for external Monitoring
2,688
null
[ "transactions" ]
[ { "code": "", "text": "How can I make other transactions wait till the current transaction finishes. I do not want a write conflict instead I other transactions to wait till the ongoing transaction finishes. Is it possible or even efficient to do so?\nps- I am updating multiple documents and hence transactions are necessary for me in this case", "username": "Adhiraj_Kinlekar" }, { "code": "", "text": "Hi @Adhiraj_Kinlekar,A write conflict means that another concurrent update has affected a document used in the current transaction, so it is expected that the transaction should re-read the state of documents before continuing. You would not want a transaction to continue with an update based on stale data.What problem are you trying to solve by ignoring write conflicts? You could implement some sort of semaphore logic to prevent multiple transactions from starting until an in-progress transaction has completed, but that seems inefficient and limiting compared to correctly handing write conflicts from concurrent updates.The Production Considerations sections of the Transactions documentation has some information which may be helpful including In-progress Transactions and Write Conflicts and In-progress Transactions and Stale Reads.Regards,\nStennie", "username": "Stennie_X" } ]
Make other transaction wait till the current transaction finishes
2022-04-19T10:42:28.562Z
Make other transaction wait till the current transaction finishes
2,738
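Editor's note on the thread above ("Make other transaction wait till the current transaction finishes"): rather than serialising transactions, the usual pattern is to let the driver retry on the transient error label that a write conflict produces; `withTransaction` does this automatically in the official drivers. Collection and field names below are invented for illustration.

```javascript
// Sketch: withTransaction re-runs the callback on TransientTransactionError
// (the label an in-transaction write conflict receives), so each retry re-reads current data.
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    const db = client.db('shop');                              // hypothetical names
    await db.collection('stock').updateOne(
      { _id: itemId }, { $inc: { qty: -1 } }, { session });
    await db.collection('orders').insertOne(
      { item: itemId, at: new Date() }, { session });
  });
} finally {
  await session.endSession();
}
```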
null
[ "aggregation", "queries", "python" ]
[ { "code": "[\n {\n \"$project\": {\n \"union\": {\n \"$setUnion\": [\n \"$query_a\",\n \"$query_b\"\n ]\n }\n }\n },\n {\n \"$unwind\": \"$union\"\n },\n {\n \"$group\": {\n \"_id\": \"$union.ID\",\n \"date_a\": {\n \"$addToSet\": \"$union.date_a\"\n },\n \"date_b\": {\n \"$addToSet\": \"$union.date_b\"\n }\n }\n },\n {\n \"$unwind\": \"$date_a\"\n },\n {\n \"$unwind\": \"$date_b\"\n },\n {\n \"$project\": {\n \"_id\": 1,\n \"date_a\": \"$date_a\",\n \"date_b\": \"date_b\",\n \"diff\": {\n \"$subtract\": [\n {\n \"$toInt\": \"$date_b\"\n },\n {\n \"$toInt\": \"$date_a\"\n }\n ]\n }\n }\n },\n {\n \"$match\": {\n \"diff\": {\n \"$gt\": 0,\n \"$lte\": 20\n }\n }\n },\n \n]\n [\n {\n \"ID\": \"c80ea2cb-3272-77ae-8f46-d95de600c5bf\",\n \n },\n {\n \"ID\": \"cdbcc129-548a-9d51-895a-1538200664e6\",\n }\n ]\n", "text": "I have the following aggregation pipeline running in the latest version of mongoDB and pymongo:This gives the union of the 2 pipelines query_a and query_b. After this union I want to get an intersection on ID with the pipeline query_c: (query_a UNION query_b) INTERSECTION query_c.For this playground example the desired output would be:", "username": "J_P2" }, { "code": "$projectquery_c$setquery_aquery_bquery_c{\n \"$project\": {\n \"union\": {\n \"$setUnion\": [\n \"$query_a\",\n \"$query_b\"\n ]\n },\n \"query_c\": {\n \"$map\": {\n \"input\": \"$query_c\",\n \"in\": \"$$this.ID\"\n }\n }\n }\n},\n{\n \"$set\": {\n \"union\": {\n \"$filter\": {\n \"input\": \"$union\",\n \"cond\": {\n \"$in\": [\n \"$$this.ID\",\n \"$query_c\"\n ]\n }\n }\n }\n }\n},\n", "text": "Hi,You can do it with:Updating first $project stage to also project an array of IDs from query_c.Using $set as a second stage where you would filter out all items from the union of query_a and query_b, that does not have ID that’s in query_c.You can do it like this:The rest of your Aggregation pipeline can remain the same.Working example", "username": "NeNaD" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Combining Union and Intersection in the same pipeline
2022-10-03T21:38:35.363Z
Combining Union and Intersection in the same pipeline
1,342
null
[]
[ { "code": "", "text": "Initially limit was 4MB, then it was raised to 16MB in 2009.\nWe are in 2021, we have better hardware, better network, bigger documents and competitors without 16MB limit.\nThere are a ton of use cases where this limit is too small nowadays, sensors, large documents with history of data.\nI’ve opened a new JIRA Issue on this here: https://jira.mongodb.org/browse/SERVER-60040", "username": "Ivan_Fioravanti" }, { "code": "", "text": "Ticket on Jira has been closed with this comment: “ Thanks for your report. Please note that the SERVER project is for bugs for the MongoDB server. As this ticket appears to be an improvement request, I will now close it.”But main Jira page for MongoDB says:\n“", "username": "Ivan_Fioravanti" }, { "code": "", "text": "Hi @Ivan_Fioravanti welcome to the community!Apologies for the confusion. The description of the SERVER project is a bit outdated. We currently are using https://feedback.mongodb.com/ to collect ideas on how to improve the server, and dedicated the SERVER JIRA project for bug reports. Specifically in your case, you would want to go to the Database Section on that page.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @Ivan_FioravantiSince only data that’s always used together should be stored together, I’m curious about the use case that requires documents bigger than 16MBs. You mention tons of use cases, but can you be a bit more specific? Like “history of data” - this seems like a problematic example since eventually it will outgrow any document size limit, and I’m not sure I can think of a use case where you need all history (no matter how old) when reading a document.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Hi @Asya_Kamsky\nthere are many examples in the https://jira.mongodb.org/browse/SERVER-5923\neveryone starts using MongoDB thinking: 16MB is a lot! I’ll never hit this limit, but when you reach it is a mess.Also this one would be extremely beneficial https://jira.mongodb.org/browse/SERVER-12305 complex aggregations with many pipeline can hit this limit more often than you think.\nRemoving this limit shoild be easier, please plan at least this one.Thanks,\nIvan", "username": "Ivan_Fioravanti" }, { "code": "", "text": "when you reach it is a messThat’s because it usually is an indication of incorrect schema design.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "I kindly disagree: I have financial data of a currency symbols like USDJPY (US-Dollar vs. Japan Yen) and USDCHF. Each daily candle contains a timestamp and 4 prices: open, high, low and close price.I’ve been implementing mongo queries and complex analysis routines for many years and was - until now - happily using just one document per symbol. Just recently I figured out that from more than 3000 financial instruments USDJPY and USDCHF are the only ones that have such a huge data history (dating back to January 1970) that they exceed 16MB and thus cannot be stored entirely.With this 16MB limit I would now have to go through dozens of complicated methods and implement additional “boilerplate” logic to read in chunks of history data and deal with highly increased complexity of analysis routines that now need to see beyond the borders between a set of chunks. No fun, seriously.I do like to work with MongoDB and I don’t mind difficult tasks, but implementing additional logic just because there is no way to increase a tightened memory “budget” seems utterly wrong. At least to me. 
Not to mention that the whole additional logic reduces the readability of the code and lowers its performance.If there’s chance, can you at least provide a parameter in MongoDB with the default of 16MB, and in case people really need more memory, then they have the freedom to do so?", "username": "Marcel_Fitzner" }, { "code": "", "text": "The only way to have larger document size would be to change the limit in the source code and to recompile/build it yourself and then run your own changed copy. I wouldn’t recommend it though because there are a lot of other places where things can go wrong - drivers assuming documents will not be larger than 16MBs also.It’s hard to imagine that it’s actually required to have full history of a symbol in a single document. Do you always analyze the full history when querying it? If not then it’s quite inefficient to keep it all on a single document.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Similar use case here…I’m using mongoDB to store colelctions of 10000 frames of a moving system of 200.000 particles. Each frame stores 13 floating-point values per particle. So a single frame holds 2.6 million floating point values (20.8 MB). It is just not practical to split frames into more than 1 document.I have 128 processors, 16 TB of SSD and 1 TB of RAM on the server… could anybody explain the logic behind the 16MB document limit?. Sounds a bit 2009.", "username": "Pedro_Guillem" }, { "code": "", "text": "How are you using this data? Do you always fetch all 10000 frames of all the values? Sure, 16MBs is arbitrary, but 10 thousand seems arbitrary also, why not five thousand?Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Hi Asya.For my case scenario, each frame is 2 picoseconds of motion. 10k frames are 20 nanoseconds, which is the minimum window I need to measure. Sampling less time would be statistically insufficient, and reducing sampling times would induce aliasing.After saving I need to build n-dimentioinal dataframes and do some physics with it. I’m still thinking wether I should do pipelining or work everything from python.Would you go for GridFS? or definetly compile with max document size to… say 64Mb?.Best!\nPedro", "username": "Pedro_Guillem" }, { "code": "", "text": "I don’t fully understand the implications of forcing a bigger document size. Hence the question.As for the usage, i need to query each particle in all 10.000 frames and compute its motion. This can be done by getting all 13 attributes of the same particle ID from all documents (if each frame is 1 document).So 13x8x10000 is 1MB per particle. 
But then each 20MB frame should fit in a document.I’m thinking splitting frame data in 2 collections would do… but its far from ideal.", "username": "Pedro_Guillem" }, { "code": "", "text": "Mongo Manual saidwe can use embedded documents and arrays to capture relationships between data in a single document structure instead of normalizing across multiple documents and collections, this single-document atomicity obviates the need for multi-document transactions for many practical use cases.So we tend to use embed documents,but sometimes one document can be very large,in our project it may reach 30M or more, so we must split it and keep reference relation, it gonna be very complicated and this way mongo doesn’t support muti-doc txn in single server, it’s so wired.\nRedis has supported RedisJson Module and the limit is 512M, I wish mongo increase this limit and support JsonPath.\nAnd i want to know why mongo only support muti-doc txn in replica and shard, sometimes we want to test transaction but mush deploy replica, it’s troublesome", "username": "timer_izaya" }, { "code": "", "text": "mongo doesn’t support muti-doc txn in single serverI’m not sure what you are talking about - MongoDB has supported transactions (included across shards) for years now…Asya", "username": "Asya_Kamsky" }, { "code": "mongod", "text": "i want to know why mongo only support muti-doc txn in replica and shard, sometimes we want to test transaction but mush deploy replica, it’s troublesomeHi @timer_izaya,Multi-document transactions rely on the replication oplog which is not present on a standalone mongod deployment.However, you can deploy a single node replica set for test purposes.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Don’t you love it when a company tells you it’s your business that’s wrong, not their product.We have also crashed into the 16MB document size limit in another financial use case. The application design is sound, the issue is not the schema or the way in which we are using the tool. The issue is the tool, so we have little choice but to switch to another tool unfortunately.It seems that if multiple customers, in different industries, with very different use cases, are all struggling with the limitation it would be prudent for that company to ask whether it’s really the customers who have gotten it wrong, instead of spikily insisting that they don’t know what they are doing or don’t understand their own domain.", "username": "Graham_Bailey" }, { "code": "", "text": "I have no choice but to implement a generic method to split huge json.", "username": "timer_izaya" }, { "code": "", "text": "Same here I have simple website builder where we keep pages for easy css and text manipulation.", "username": "xoxoxo" }, { "code": " bsoncxx::types::b_binary b_blob \n { \n bsoncxx::binary_sub_type::k_binary,\n sizeof your_array_or_object,\n reinterpret_cast<uint8_t*>(&your_array_or_object)\n };\n", "text": "We’re storing documents with images and lidar pointclouds, using MongoDB as a geospatial database. Each pose of the vehicle is stored as a geojson point and queried by geographic location and radius. The collection reaches 100GB with 10min of vehicle travel (and images less than 16MB) Because we need better resolution, we’re hitting that 16MB limit now with just the image sizes. I understand the issue is a limit in BSON and there’s another extension, GridFS, to store large BLOBs by writing them directly to the filesystem in chunks. 
It’s my opinion that this makes the database, software, and filesystem more complex to manage, unless I’m missing something. The reason we were sold on MongoDB was its performance, NoSQL, and geospatial indexes. We’re starting to question the true performance after using MongoDB for 4 years. Each write to the collection, with just one spherical index, takes 0.5s on an i9, 64GB RAM, with a Samsung 970EVO SSD. Granted, this isn’t the fastest machine out there, but we’re limited to what we can fit and power on an electric vehicle. What we’re learning about database systems, like Cassandra, is they’re faster and have document size limits (including BLOBs) of 2GB. I’d really hate to rewrite my data abstraction layer. Also, the bsoncxx and mongocxx API documentation is really lacking and needs updating. It’s mostly built with doxygen, with very few descriptive comments and few if any code examples. I had to read the source code to figure out writing and reading binary. It took me way longer to figure out how to write binary than was necessary. Here’s an example of the only official documentation I could find on it.\nI don’t know how anyone could get this from that documentation:", "username": "Matthew_Richards" }, { "code": "", "text": "@Matthew_Richards what are you storing in a single document? Normally images tend to be stored separately from various other “metadata” in part because letting documents get really large means that all operations on the document will be slower - and if you’re doing any sorts of updates or partial reads of the document then you’d be better off to have things stored separately.Asya", "username": "Asya_Kamsky" } ]
Increase max document size to at least 64MB
2021-09-17T15:07:13.464Z
Increase max document size to at least 64MB
29,759
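A rough sketch of the "bucketing" approach suggested in the thread above — keeping a bounded number of candles per document instead of letting one document per symbol grow past the 16MB limit. This is an illustration only, written with the official Node.js driver; the collection name, the bucket size of 1000, and all field names are invented for the example and are not taken from the thread.

```ts
import { MongoClient } from "mongodb";

// Append one OHLC candle to the current bucket for a symbol.
// A new bucket document is created automatically (via upsert) once the
// existing bucket reaches 1000 entries, so no single document grows unbounded.
async function appendCandle(
  uri: string,
  candle: { symbol: string; ts: Date; open: number; high: number; low: number; close: number }
) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const buckets = client.db("market_data").collection("candle_buckets");

    await buckets.updateOne(
      // Only match a bucket for this symbol that still has room.
      { symbol: candle.symbol, count: { $lt: 1000 } },
      {
        $push: { candles: { ts: candle.ts, o: candle.open, h: candle.high, l: candle.low, c: candle.close } },
        $inc: { count: 1 },
      },
      { upsert: true }
    );

    // Reading the full history back is an ordinary query over the buckets.
    const history = await buckets.find({ symbol: candle.symbol }).toArray();
    return history;
  } finally {
    await client.close();
  }
}
```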
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.2.23 is out and is ready for production deployment. This release contains only fixes since 4.2.22, and is a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.2.23 is released
2022-10-03T22:26:17.072Z
MongoDB 4.2.23 is released
2,620
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.4.17 is out and is ready for production deployment. This release contains only fixes since 4.4.16, and is a recommended upgrade for all 4.4 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.4.17 is released
2022-10-03T22:24:20.861Z
MongoDB 4.4.17 is released
2,428
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 5.0.13 is out and is ready for production deployment. This release contains only fixes since 5.0.12, and is a recommended upgrade for all 5.0 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 5.0.13 is released
2022-10-03T22:21:36.130Z
MongoDB 5.0.13 is released
3,000
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 6.0.2 is out and is ready for production deployment. This release contains only fixes since 6.0.1, and is a recommended upgrade for all 6.0 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 6.0.2 is released
2022-10-03T22:19:06.450Z
MongoDB 6.0.2 is released
2,359
null
[ "containers" ]
[ { "code": "", "text": "I am trying to get my container up with charts, but it is nit proceeding\n“indexesCreated failure: not authorized on metadata to execute command { insert: “system.indexes”, ordered: true, lsid: { id: UUID(“a8dc5b18-d940-4b23-bb04-8ad9f423d126”) }, $db: “metadata” }\n^C[shiva.bhat@shiva-bhat ~]$ client_loop: send disconnect: Broken pipe”\nWhat si the best way to make progress.\nI am using Quay (latest)\nand latest mongodbroot@charts:/mongodb-charts/volumes# exec node --no-deprecation /mongodb-charts/bin/charts-cli.js startup\n parsedArgs\n installDir (‘/mongodb-charts’)\n log\n salt\n productNameAndVersion ({ productName: ‘MongoDB Charts Frontend’, version: ‘1.9.1’ })\n gitHash (undefined)\n supportWidgetAndMetrics (‘on’)\n tileServer (undefined)\n tileAttributionMessage (undefined)\n rawFeatureFlags (undefined)\n chartsMongoDBUri\n encryptionKeyPath\n featureFlags ({})\n lastAppJson ({})\n existingInstallation (false)\n tenantId (‘2395a2e0-f313-41c9-bb01-7eed7cf1263a’)\n tokens\n stitchMigrationsLog ({ completedStitchMigrations: [ ‘stitch-1332’, ‘stitch-1897’, ‘stitch-2041’, ‘migrateStitchProductFlag’, ‘stitch-2041-local’, ‘stitch-2046-local’, ‘stitch-2055’, ‘multiregion’, ‘dropStitchLogLogIndexStarted’ ] })\n stitchConfigTemplate\n libMongoIsInPath (true)\n mongoDBReachable (true)\n stitchMigrationsExecuted ([ ‘stitch-1332’, ‘stitch-1897’, ‘stitch-2041’, ‘migrateStitchProductFlag’, ‘stitch-2041-local’, ‘stitch-2046-local’, ‘stitch-2055’, ‘multiregion’, ‘dropStitchLogLogIndexStarted’ ])\n minimumVersionRequirement (true)\n stitchConfig\n stitchConfigWritten (true)\n stitchChildProcess\n indexesCreated failure: not authorized on metadata to execute command { insert: “system.indexes”, ordered: true, lsid: { id: UUID(“a8dc5b18-d940-4b23-bb04-8ad9f423d126”) }, $db: “metadata” }\n^C[shiva.bhat@shiva-bhat ~]$ client_loop: send disconnect: Broken pipe", "username": "Shiva_Bhat" }, { "code": "", "text": "Hi @Shiva_Bhat. MongoDB Charts on-prem is no longer supported. But this looks like a database permissions problem.Tom", "username": "tomhollander" } ]
Issue with mongo chart installation
2022-09-30T10:48:38.485Z
Issue with mongo chart installation
1,728
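The failure in the thread above ("not authorized on metadata to execute command { insert: ... }") suggests the database user that Charts connects with lacks write access to its metadata database. One way to test that theory — purely a sketch, assuming a self-managed deployment, that the Charts user is defined in the admin database, and a placeholder user name — is to grant readWrite on metadata and restart the container. The same command can be run from the shell; it is shown here through the Node.js driver only to keep the examples in one language.

```ts
import { MongoClient } from "mongodb";

// Assumption: the user Charts authenticates with is called "charts-user"
// and was created in the "admin" database — substitute your own values.
async function grantChartsMetadataAccess(uri: string) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    await client.db("admin").command({
      grantRolesToUser: "charts-user",
      roles: [{ role: "readWrite", db: "metadata" }],
    });
  } finally {
    await client.close();
  }
}
```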
null
[ "dot-net", "crud" ]
[ { "code": "new FindOneAndUpdateOptions<BsonDocument, T>\n{\n ReturnDocument = ReturnDocument.After\n}\n", "text": "Right now, with the C# mongodb driver, you can utilize FindOneAndUpdateAsync. This is a very useful method as a parameter, you can set:This allows for a single call to the database (maybe not behind the scenes?) that updates the document that matches the filter and returns the updated document after it has been updated.I would like to do the same thing but while updating multiple documents. I have read the suggestion to do multiple calls to update by filter, then get by filter. This works for most cases, except when you update a property (or properties) that are in the filter. In that case, you lose the connection to the documents that were updated.", "username": "Steven_Rothwell" }, { "code": "FindOneAndUpdateAsyncfindAndModifyUpdateManyAsyncupdatemulti:trueUpdateManyAsyncFindAsyncfindAndModifynew: trueReturnDocument.AfterFindOneAndUpdateAsyncUpdateManyAsyncFindManyandUpdateAsyncfindAndModifyMany", "text": "Hi, Steven,Thank you for your suggestion. FindOneAndUpdateAsync in the .NET/C# Driver is a wrapper around the underlying MongoDB command findAndModify. This command explicitly states:The findAndModify command modifies and returns a single document.Thus this is a limitation of MongoDB and not the .NET/C# Driver per se. To update multiple documents, you can use UpdateManyAsync, which wraps the update command (with the multi:true option specified). As you note you would have to re-query the documents after the update and handle the case where another update happened between the UpdateManyAsync and FindAsync commands.One of the features that findAndModify provides is the atomic nature of the update and query. When you specify new: true (e.g. ReturnDocument.After), you are guaranteed to receive the results of the updated document without any intervening writes. This is very useful for writing persistent locks and semaphores.Hopefully this explains the distinction between FindOneAndUpdateAsync versus UpdateManyAsync. To support FindManyandUpdateAsync, the server would first need to implement support for findAndModifyMany. You can submit this feature request via the MongoDB Feedback Engine.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Hey James,Thank you for explaining the distinctions and for pointing out that this is more of a mongodb engine suggestion than a driver suggestion as it needs to exist there first.The more I thought about this, the more I realized that FindManyAndUpdateAsync may not be a great idea. With it needing to be atomic, the possibility of large quantities could drastically hurt performances. I think, at least for now, UpdateManyAsync returning the number of documents updated will suffice for me.Much appreciated,Steven", "username": "Steven_Rothwell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Feature Request: FindManyAndUpdateAsync
2022-10-02T18:02:19.417Z
Feature Request: FindManyAndUpdateAsync
1,407
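For readers looking for the two-step workaround described in the thread above (update many documents, then re-query them): one way to keep track of exactly which documents were touched — even when the update changes fields used in the original filter — is to stamp every matched document with a one-off batch id during the update and then query by that stamp. The sketch below uses the Node.js driver and invented database, collection, and field names purely for illustration (the thread itself concerns the C#/.NET driver, where UpdateManyAsync plus a Find plays the same role), and, as noted in the thread, the two steps are not atomic.

```ts
import { MongoClient, ObjectId } from "mongodb";

async function updateManyAndFetch(uri: string) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const orders = client.db("app").collection("orders");

    // Stamp every matched document with a unique batch id...
    const batchId = new ObjectId();
    await orders.updateMany(
      { status: "pending" },
      { $set: { status: "processed", lastBatch: batchId } }
    );

    // ...then re-query by the stamp rather than by the original filter,
    // which would no longer match because the update changed `status`.
    return await orders.find({ lastBatch: batchId }).toArray();
  } finally {
    await client.close();
  }
}
```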
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "I need to use mongodump on one collection in my database, but I only want to include specific documents in the dump. I can only reference each document by their title (a string, which are all different values) and the documents do not share a common value. I wrote a “—query” in extended JSON that utilized the “$in” operator and listed each “title”, but I get a JSON error. Is there a way to accomplish this task?", "username": "Seolera3" }, { "code": "", "text": "Hi @Seolera3, and welcome to the MongoDB community forums! Can you provide the error you’re getting? With that we might be able to help you out.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi @Doug_Duncan I appreciate the timely response. The error I’m getting is “Failed: error parsing query as Extended JSON: invalid JSON input”My input: -q=‘{“title”: {“$in”: [“title_A”, “title_B”, “title_C”, ”title_D”, “title_E”]}}’", "username": "Seolera3" }, { "code": "", "text": "Can you please show a screenshot with the command you’re using (or a subset that causes the error) and the error?I just did a test and I’m not getting an error:\nimage917×130 20.2 KB\n", "username": "Doug_Duncan" }, { "code": "```\nmongodump -d database \\\n -c colleciton \\\n -q='{\"title\": {\"$in\": [\"title_A\", \"title_B\", \"title_C\", \"title_D\", \"title_E\"]}}'\n```\n", "text": "One thing to note here is that the input you’ve provided has fancy quotes instead of normal quotes. This could be a side effect of pasting into the forums as any quote typed in as normal text will get converted to the fancy quote version. In the future you can prevent that by using preformatted text (either put your element in a set of single backticks, or using theicon.For longer code block, you can use a format similar to the following:This allows for the text to be displayed as it was typed and removes any forum formatting issues.", "username": "Doug_Duncan" }, { "code": "\nmongodump --uri=\"mongodb://localhost:27017\" --out=\"./collection_dump\" --db=\"db_2\" --collection=\"collection_A\"\n-q=‘{\"title\": {\"$in\": [\"title_A”, “title_B”, “title_C\", “title_D”, “title_E”]}}’\n\n", "text": "I currently cannot provide a screenshot but my entire command looks like the following:I’m also working on a windows computer and running this in the command prompt.", "username": "Seolera3" }, { "code": "mongodump -q='{\"title\": {\"$in\": [\"title_A”, “title_B”, “title_C\", \"title_D\", \"title_E\"]}}'\nmongodumpmongodumpmongo---eval", "text": "Ok, I was able to reproduce this on Windows, I use a MacBook for testing and Linux for production MongoDB and found out the following things about running mongodump in Windows cmd.If I use the query in the format you have pasted it above (notice that some strings are not in red which means you’ve still get fancy quotes in the string and that will cause problems):I get the following error:\nimage1321×33 19.3 KB\nIt seems that mongodump on Windows does not like a parameter in single quotes to have spaces. So I removed them:\nimage1666×56 5.19 KB\nNow I get the same error that you are getting. Let’s try putting the parameter in double quotes, which means we have to escape all of the JSON keys and values:\nimage1669×65 10.5 KB\nThis finally let me use mongodump in Windows. Note that I am able to have spaces to break up the query so it’s a little easier to read. 
One might think that maybe I can double quote the outer JSON and then use single quotes around the key/value pairs, but that throws the same message you’re currently seeing:\nimage1671×60 8.8 KB\nThis seems to be something that is Windows related. Note that I see similar issues running mongo with --eval:\nimage1023×120 12.4 KB\nIt looks like you have to jump through hoops on Windows, unless there is someone that works on Windows more than I do that knows other tricks. Note that I don’t have any problems with using single quotes and spaces in the version I run on my Mac:\nimage1670×204 43.9 KB\n", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodump specific collection but only include specific documents
2022-10-03T16:46:59.341Z
Mongodump specific collection but only include specific documents
3,218
null
[ "kotlin" ]
[ { "code": "", "text": "Hi all,I am trying ot learn how to write Kotlin Android applications that have a MongoDB backend using Android Studio but , sadly, the turorials on the MongoDB website are next to useless.It isn’t very useful to just clone a shell of an application (i.e. TaskTracker) as this doesn’t tell me how to set something up from scratch. Plus, the instructions seem to be out of date and a number of the imports are not found, even though my gradle files are the same as the ones in the exercises.The other instructions on the Realm SDK pages are also useless as they are even more out of date and do not show what needs to be imported or where the code snippets are supposed to go.Can anyone point me to a better set of turorials please?Kind regards,\nIvan", "username": "Ivan_Mold" }, { "code": "", "text": "Hi Ivan,Sorry to hear you’re having trouble with the tutorial and docs!If you have specific feedback or would like to report an issue on a given page, try using the feedback form on each page. Feedback goes right to our docs team, who will assess and address it.Thanks!", "username": "Chris_Bush" }, { "code": "", "text": "You sound frustrated which is totally understandable.The TaskTracker app is kind of a living example - it evolves as the codebase does but sometimes lags a bit behind. I would not suggest using it as a shell or template to be cloned. I would suggest using that project to get comfortable with the language calls and functions.Best practice is to build that app and get it working. Once you’ve done that, things will be a lot easier developing your own app.Also, the guide does show how to set an app up from scratch. Starting with the Installation and then the real fun part - the Quick Start with shows how to initialize and open a Realm, define an object model and then write, query and read data.", "username": "Jay" }, { "code": "", "text": "I agree. I’ve been working with Realm for awhile, across web, android and ios. It’s powerful, but everything I try to do involves confusing or outdated docs. This is true across the various platforms. I’ve honestly never had a tougher time with any tool.It’s gotten a little better over the years I guess. But it can still be extremely frustrating. You can sometimes get a little help on the forums if you’re patient… Sometimes. I’m constantly trying to get my head around what could be the root of the issue. Partly I think that Realm is just a complex system. The deeper I get, the more I realize that data synchronization across platforms is just a very difficult problem, so trying to document how to use the solution to that problem must be challenging.But there are definitely some confusing practices in the docs that make matters worse. Can’t offer much beyond that there really aren’t better tutorials that I’ve found. You can dig through my code if you wanna see a functional Kotlin implementation. I’m pretty scrappy with my code, so it’s not a great model, but I do think that the way I extracted the RealmService class makes things a little easier to understand. Anyway, good luck. It is pretty cool, when everything starts working.", "username": "Ryan_Goodwin" }, { "code": "", "text": "9 months and it is still true…", "username": "Deepanshu_Balyan" }, { "code": "", "text": "Hi @Deepanshu_Balyan, welcome to the forums.I can understand your frustration but your comment doesn’t really provide any clarity on what kind of issue you are having. 
Perhaps if you can clarify your meaning and what code is causing difficulty, we may be able to help.Jay", "username": "Jay" }, { "code": "", "text": "", "username": "henna.s" } ]
Why is this so hard?
2022-01-11T11:29:55.492Z
Why is this so hard?
4,560
null
[]
[ { "code": "", "text": "I connected Mongo to AWS linux through putty .\nJob for mongod.service failed because the control process exited with error code. See “systemctl status mongod.service” and “journalctl -xe” for details.\nsystemctl status mongod.service\" and \"journalctl -xe - tried this log trace as well attached the screenshot", "username": "atul_kumar3" }, { "code": "mongod", "text": "Hi @atul_kumar3, and welcome to the MongoDB Community forums! It looks like the mongod process failed. Have you looked at the MongoDB logs to see what was logged? That’s the first place I would look.", "username": "Doug_Duncan" } ]
Job for mongod.service failed because the control process exited with error code
2022-10-03T14:30:32.548Z
Job for mongod.service failed because the control process exited with error code
1,241
null
[]
[ { "code": "", "text": "Is there a possibility to delete many MongoDB Atlas App users that I have created during testing. I am talking thousands!The WebUI only offers to delete individual users one by one. Can I access the MongoDB Atlas Users programatically?", "username": "Robert_Rackl1" }, { "code": "", "text": "Ah ok, found it: https://www.mongodb.com/docs/atlas/app-services/users/delete-or-revoke/", "username": "Robert_Rackl1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Bulk delete users
2022-10-03T12:45:11.334Z
Bulk delete users
1,348
null
[ "server", "installation" ]
[ { "code": "sudo apt install ./Downloads/mongodb-enterprise-server_6.0.1_amd64.deb\nReading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\nNote, selecting 'mongodb-enterprise-server' instead of './Downloads/mongodb-enterprise-server_6.0.1_amd64.deb'\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n mongodb-enterprise-server : Depends: libldap-2.4-2 (>= 2.4.7) but it is not installable\nE: Unable to correct problems, you have held broken packages.\n", "text": "Hello,\nI’ve been trying to install the mongo enterprise server in my Ubuntu 22.04.1 LTS, but it throws this error:I’ve also tried to install the 5.0.12 and 4.4.16 but with a similar result.", "username": "David_Alfonso" }, { "code": "", "text": "This has been talked about in several threads here on the forums. This thread probably has the most information. MongoDB is currently working on getting the 6.0 version to build on Unbuntu 22.04. There is no timeline on when MongoDB 6.0 will be running on this platform, but I would like to think it would be sometime soon.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot Install Server on Ubuntu 22.04 jammy
2022-10-03T07:29:24.528Z
Cannot Install Server on Ubuntu 22.04 jammy
6,393
null
[ "node-js", "connecting" ]
[ { "code": "", "text": "i have tried many times and failed to connect my front end with mongodb usind node js", "username": "Raghav_JACK" }, { "code": "", "text": "We cannot tell you what you are doing wrong if you do not share what you are doing.We need connection string you are using.We need code you are using.We need error messages you are getting.", "username": "steevej" }, { "code": " <div class=\"container\">\n <div class=\"header\">\n <h2>Registration Form</h2>\n </div>\n <form action=\"register\" method=\"post\" class=\"form\">\n <div class=\"form-group\">\n <label for=\"\">UserName : </label>\n <input type=\"text\" placeholder=\"username\" id = \"username\" autocomplete=\"off\">\n <i class=\"ion-ios-checkmark\"></i>\n <i class=\"ion-android-alert\"></i>\n <span></span>\n </div>\n <div class=\"form-group\">\n <label for=\"\">Email : </label>\n <input type=\"text\" placeholder=\"email\" id = \"email\" autocomplete=\"off\">\n <i class=\"ion-ios-checkmark\"></i>\n <i class=\"ion-android-alert\"></i>\n <span></span>\n </div>\n <div class=\"form-group\">\n <label for=\"\">Phone Number : </label>\n <input type=\"text\" placeholder=\"phonenumber\" id = \"phonenumber\" autocomplete=\"off\">\n <i class=\"ion-ios-checkmark\"></i>\n <i class=\"ion-android-alert\"></i>\n <span></span>\n </div>\n <div class=\"form-group\">\n <label for=\"\">Password : </label>\n <input type=\"password\" placeholder=\"password\" id = \"password\" autocomplete=\"off\">\n <i class=\"ion-ios-checkmark\"></i>\n <i class=\"ion-android-alert\"></i>\n <span></span>\n </div>\n <div class=\"form-group\">\n <label for=\"\">Confirm Password : </label>\n <input type=\"password\" placeholder=\"confirm password\" id = \"confirmpassword\" autocomplete=\"off\">\n <i class=\"ion-ios-checkmark\"></i>\n <i class=\"ion-android-alert\"></i>\n <span></span>\n </div>\n <button id = 'submit'>Submit</button>\n \n\n <span class=\"text\">Not a member?\n <a href=\"login1.html\" class=\"text signup-link\">Signup Now</a>\n </span>\n </form>\n</div>\n<script src=\"app.js\"></script>\n", "text": "", "username": "Raghav_JACK" }, { "code": "", "text": "This is html code.It does not show1 code used to connect\n2 connection string\n3 error message you get", "username": "steevej" } ]
How to connect mongodb to the front end using Node js
2022-09-27T05:22:27.810Z
How to connect mongodb to the front end using Node js
2,993
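The last thread above never reached working code, so here is a minimal sketch of the missing server side: an Express route that accepts the registration form's POST (the original form posts to a "register" action) and inserts a document with the official Node.js driver. The connection string, database and collection names, port, and field names are all placeholders, and a real implementation would add input validation and password hashing rather than storing the password as-is.

```ts
import express from "express";
import { MongoClient } from "mongodb";

const uri = "mongodb://localhost:27017"; // placeholder — use your own connection string
const client = new MongoClient(uri);

const app = express();
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

app.post("/register", async (req, res) => {
  try {
    // Assumes the form fields arrive in the request body as JSON or URL-encoded data.
    const { username, email, phonenumber, password } = req.body;
    const users = client.db("myapp").collection("users");

    // NOTE: never store plain-text passwords in a real application.
    const result = await users.insertOne({ username, email, phonenumber, password, createdAt: new Date() });
    res.status(201).json({ insertedId: result.insertedId });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: "registration failed" });
  }
});

async function start() {
  await client.connect();
  app.listen(3000, () => console.log("listening on http://localhost:3000"));
}

start();
```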