Field        Type           Range
image_url    stringlengths  113 - 131
tags         sequence       -
discussion   list           -
title        stringlengths  8 - 254
created_at   stringlengths  24 - 24
fancy_title  stringlengths  8 - 396
views        int64          73 - 422k

Each record below lists these fields in that order; image_url is null where the thread has no preview image.
null
[ "aggregation", "php" ]
[ { "code": "tagsMongoDB\\Model\\BSONDocument public function all_tags(int $ascdesc = self::ASC): array {\n return $this->mongodb_db->fourberie->aggregate([\n ['$project' => ['tags' => true]],\n ['$unwind' => '$tags'],\n ['$group' => ['_id' => '$tags']],\n ['$sort' => ['_id' => $ascdesc]]\n ])->toArray();\n }\n", "text": "This question is in PHP.I have an array element in my document called tags.This member function gives me an array of MongoDB\\Model\\BSONDocument :Now I want only those tags that match a regex. The following function yields no documents for any pattern.Incorrect code deleted - Jack", "username": "Jack_Woehr" }, { "code": " /**\n * Returns all matching tags as array of doc.\n * @return array all unique tags as array of doc\n * @param string $pattern pattern to match\n * @param bool $case_sensitive should the match be case sensitive, default = true\n * @param int $ascdesc ASCending or DESCending sort (optional, default self::ASC)\n * @return array of doc for all matching tags\n */\n public function all_tags_matching_regex(string $pattern, bool $case_sensitive = true, int $ascdesc = self::ASC): array {\n return $this->mongodb_db->fourberie->aggregate([\n ['$project' => ['tags' => true]],\n ['$unwind' => '$tags'],\n ['$match' => ['tags' =>\n $case_sensitive ? (['$regex' => $pattern]) : (['$regex' => $pattern, '$options' => 'i'])\n ]],\n ['$group' => ['_id' => '$tags']],\n ['$sort' => ['_id' => $ascdesc]]\n ])->toArray();\n }\n", "text": "Figured it out", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Regex match unwound array in PHP
2023-05-26T21:41:23.440Z
Regex match unwound array in PHP
668
null
[]
[ { "code": "const itemsSchema = new Schema({ \n name: String,\n checkedValue: String\n});\n\nconst listSchema = new Schema({\n name: String,\n items: [itemsSchema]\n});\n[\n {\n _id: ObjectId(\"646ec7916f8ba817e80e3377\"),\n name: \"Home\",\n items: [\n {\n name: \"home 1\",\n checkedValue: \"Off\",\n _id: ObjectId(\"646ec7966f8ba817e80e3380\")\n },\n {\n name: \"home 2\",\n checkedValue: \"Off\",\n _id: ObjectId(\"646ec79a6f8ba817e80e338c\")\n }\n ],\n __v: 2\n },\n {\n _id: ObjectId(\"646ed4136f8ba817e80e339b\"),\n name: \"School\",\n items: [\n {\n name: \"school 1\",\n checkedValue: \"Off\",\n _id: ObjectId(\"646ed41c6f8ba817e80e33a4\")\n }\n ],\n __v: 1\n }\n]\ncheckedValue: “Off”_id: ObjectId(“646ec79a6f8ba817e80e338c”)checkedValue: “On”", "text": "Sample data:If I only want to update checkedValue: “Off” of _id: ObjectId(“646ec79a6f8ba817e80e338c”) to checkedValue: “On”. How can I do it?", "username": "Md_Fahad_Rahman" }, { "code": "$[<identifier>]", "text": "Hi @Md_Fahad_Rahman - Welcome to the community Have you tried using the $[<identifier>] to see if it works for you?Regards,\nJason", "username": "Jason_Tran" }, { "code": " List.collection.findOneAndUpdate({\n name: listName\n },\n {\n \"$set\": {\n \"items.$[i].checkedValue\": \"Off\"\n }\n },\n {\n arrayFilters: [\n {\n \"i._id\": checkedItemId\n }\n ]\n })\n", "text": "@Jason_Tran Thank you so much. I am trying to use this:But the value is not changing.", "username": "Md_Fahad_Rahman" }, { "code": "ObjectIdarrayFilters", "text": "Should I convert the text string to ObjectId before passing it to arrayFilters?", "username": "Md_Fahad_Rahman" }, { "code": "db.collection.updateOne( \n { 'items.checkedValue': 'Off' },\n { '$set': { 'items.$[element].checkedValue': 'On' } },\n {\n arrayFilters: [\n {\n 'element._id': { '$eq': ObjectId(\"646ec79a6f8ba817e80e338c\") }\n }\n ]\n })\n", "text": "I’ve not yet tested this but can you try this on a test environment against more sample documents to see if it works for you?You can alter it accordingly then test on a test environment to verify if it meets your requirement / use cases.Regards,\nJason", "username": "Jason_Tran" }, { "code": "ObjectIdObjectId <form action=\"/delete\" method=\"post\">\n <div class=\"item\">\n <div class=\"item-p\">\n <input type=\"checkbox\" name=\"checkbox\" value=\"<%= item.checkedValue %>\" onclick=\"this.form.submit(); checkedif();\">\n <input type=\"hidden\" name=\"checkedItemId\" value=\"<%= item._id %>\"></input>\n <p><%= item.name %></p>\n </div>\n\n <div><button type=\"submit\" name=\"deleteIcon\" value=\"<%= item._id %>\"><i class=\"uil uil-trash-alt delete-icon\"></i></button></div>\n </div>\n <input type=\"hidden\" name=\"listName\" value=\"<%= listTitle %>\"></input>\n </form>\nhiddenStringObjectIdupdateOne()const { ObjectId } = require(\"mongodb\");\n\napp.post(\"/delete\",function(req, res){//...............\n\nconst checkedItemId = req.body.checkedItemId;\n//............\n'element._id': { '$eq': ObjectId(checkedItemId) }\n//..........\n)}\nObjectId", "text": "@Jason_Tran Thank you for your kind effort. But, It is not working either. I think the problem is with the ObjectId. I am receiving the ObjectId through the following route:I am receiving the value through the hidden input which is essentially in String format. May be I need to convert it to an ObjectId before using it in the updateOne() method. 
I have tried something like this:But the string is not converting to an ObjectId.", "username": "Md_Fahad_Rahman" }, { "code": "_idObjectId const checkedItemId = req.body.checkedItemId;\n const checkedItemObjectId = new mongoose.Types.ObjectId(checkedItemId);\n", "text": "@Jason_Tran Thank you so much. Your solution worked like a charm. I had to convert the _id to an ObjectId. This was done by:", "username": "Md_Fahad_Rahman" }, { "code": "", "text": "Nice one! Thanks for updating the post with those details too ", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Updating nested array object with specific condition
2023-05-25T04:13:38.291Z
Updating nested array object with specific condition
363
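The thread above arrives at the fix in two separate pieces (an arrayFilters update and a later string-to-ObjectId conversion). A consolidated sketch of that fix, keeping the thread's own names (List, listName, checkedItemId) as illustrative placeholders:

// Sketch only: combines the arrayFilters update with the ObjectId conversion discussed above.
const mongoose = require("mongoose");

async function setCheckedValue(listName, checkedItemId) {
  return List.collection.findOneAndUpdate(
    { name: listName },
    { $set: { "items.$[i].checkedValue": "On" } },
    { arrayFilters: [{ "i._id": new mongoose.Types.ObjectId(checkedItemId) }] }
  );
}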
null
[]
[ { "code": "", "text": "Hey guys,\nI want to confirm if there is anything we can do which restricts a query run like removeAll from the collection.Thanks", "username": "Zohaib_Shakir" }, { "code": "", "text": "from someone/something with write access?no native support. but you can put a proxy in front of mongs/mongod and analyze the opcode on your own and then block the request under certain conditions.", "username": "Kobe_W" } ]
Restrict remove all query
2023-05-26T12:06:14.484Z
Restrict remove all query
327
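If the goal in the thread above is simply to prevent a given account from deleting documents (rather than intercepting one specific query shape), a custom role that omits the remove action is one option. A minimal sketch, assuming authorization is enabled and with illustrative database, collection, and user names:

// Sketch: role with read/insert/update privileges but no remove privilege.
use admin
db.createRole({
  role: "readWriteNoRemove",
  privileges: [
    { resource: { db: "myapp", collection: "" }, actions: ["find", "insert", "update"] }
  ],
  roles: []
})
db.grantRolesToUser("appUser", [{ role: "readWriteNoRemove", db: "admin" }])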
null
[ "containers" ]
[ { "code": "", "text": "Dear Support Team,I am writing to you under critical circumstances that require your urgent attention and support.We recently became aware of data exposure concerning one of our valuable customers. It resulted in unfortunate data exposure that has heightened the need for immediate action on our end. We are currently running a community edition of MongoDB as a docker container on our local development environment.In the interest of conducting a thorough audit and incident analysis, we are in urgent need of detailed access logs from the past week. This includes, but is not limited to, the executed queries and the associated IP addresses. Regrettably, as we did not configure the profiler level or verbose logging initially, we are facing challenges in fetching these necessary logs for our internal audit and to take appropriate mitigation steps.Given the seriousness of the situation, we sincerely request your immediate assistance in this matter. We are aware that this might involve additional support costs, and we are prepared to meet these expenses in order to rectify the situation and reassure our customers of their data’s safety.We highly appreciate your prompt attention to this critical issue and look forward to your support in addressing it.Best Regards,\nVibhas Sharma", "username": "Vibhas_Sharma1" }, { "code": "", "text": "In the interest of conducting a thorough audit and incident analysis, we are in urgent need of detailed access logs from the past week. This includes, but is not limited to, the executed queries and the associated IP addresses. Regrettably, as we did not configure the profiler level or verbose logging initially, we are facing challenges in fetching these necessary logs for our internal audit and to take appropriate mitigation steps.if such logging is not enabled, i’m not sure how mongodb support can help.", "username": "Kobe_W" } ]
Urgent Issue requires immediate action to fetch access logs for self managed mongodb community edition version
2023-05-26T09:39:35.155Z
Urgent Issue requires immediate action to fetch access logs for self managed mongodb community edition version
661
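As the reply notes, logs that were never captured cannot be recovered after the fact; what can be done is to turn on capture ahead of any future incident. A minimal sketch (database name is illustrative; profiling level 2 records every operation and adds overhead):

// Record all operations for one database in its system.profile collection,
// and raise mongod log verbosity so commands also appear in the server log.
use customerdb
db.setProfilingLevel(2)
db.adminCommand({ setParameter: 1, logLevel: 1 })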
null
[ "data-modeling" ]
[ { "code": "", "text": "Is single collection design in mongodb same way as single table design in dynamodb a recommended schema design for mongod? Like I want to store different type of entities in the same collection with generic primary key and indexed attributes. For example for a document of user entity, the _id will be USR#{uuid()}, for a document of type product entity the _id will be PROD#{uuid()}. Similar all types of entities will be stored in the same collection with additional indexes. I couldn’t find anything on the internet regarding this, would love a mongodb expert’s insight and recommendation regarding this.", "username": "Anand_S3" }, { "code": "", "text": "mongodb doesn’t enforce schema by default. so you can store anything inside a document, be it same/similar/not-at-all entity or not.is that a good design? that’s a question for you, not for mongodb.", "username": "Kobe_W" } ]
Single Collection Design in Mongodb similar to single table design in dynamodb
2023-05-26T05:46:25.750Z
Single Collection Design in Mongodb similar to single table design in dynamodb
690
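A sketch of what the single-collection idea in the question could look like in practice, assuming an explicit type field is stored alongside the prefixed _id so per-type queries can be indexed (all names and values are illustrative):

// Two entity types in one collection, distinguished by _id prefix and entityType.
db.entities.insertMany([
  { _id: "USR#6f2a1c", entityType: "user", email: "a@example.com" },
  { _id: "PROD#91b3d0", entityType: "product", sku: "SKU-1" }
])
// Compound index so per-type lookups stay selective.
db.entities.createIndex({ entityType: 1, email: 1 })
db.entities.find({ entityType: "user", email: "a@example.com" })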
null
[ "node-js", "dot-net", "serverless" ]
[ { "code": "", "text": "Hi all,I’m running into some trouble with an AWS lambda function written in C# that attempts to connect to a MongoDB serverless instance and perform a basic CRUD operation.My lambda function is written with the latest C# MongoDB drivers (2.19.1) and the target framework/runtime is .NET 6.I am able to connect to the serverless instance and successfully insert/delete when I allow connections from all IP addresses, so I don’t believe the code is the problem. I’ve followed all the instructions for creating a private endpoint with AWS PrivateLink in both the network access tab in Atlas and through AWS VPC/the VPC tab of the lambda function itself. Both the endpoint and endpoint service statuses are showing as available in Atlas and the AWS VPC endpoint status is also available. However, when I remove IP access from 0.0.0.0 my function times out when attempting to connect to the serverless instance.After tweaking countless AWS settings with no success, I tried replacing the C# lambda with a node.js script. I didn’t change any of the VPC settings or connection strings and the updated lambda ran successfully using the private endpoint.Has anyone experienced something similar to this or had success using C# to connect to a serverless instance via VPC? I’m wondering if there may be a driver issue specific to VPC connections as I know the C# code works and the private endpoint configuration works when using node.js.Thanks in advance!", "username": "Cameron_McNair" }, { "code": "SslStreammongodb+srv://", "text": "Hi, @Cameron_McNair,I am not aware of any issues with connecting to Atlas via a VPC using the .NET/C# Driver. When you connect to a private endpoint, the driver performs a SRV lookup to determine the FQDNs of the cluster members. Those FQDNs will then be used to create SslStream instances to connect to the individual cluster members. Internally the .NET Framework will perform a DNS lookup to resolve the FQDNs to A records. These A records will be the private IP addresses of the cluster members. This is the same process that all drivers use to resolve mongodb+srv:// connection strings to a list of IP addresses for cluster members, including the Node.js Driver. So it is not immediately obvious why you can connect using the Node.js Driver but not the .NET/C# Driver with the same network settings.Please provide the complete error message with stack trace (removing any usernames/passwords and other sensitive data) so that we can investigate further.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Thanks @James_Kovacs for the quick response. I was finally able to figure the problem and, unsurprisingly, it was user error. I didn’t realize that the connection strings for private endpoints are slightly different from the connection strings for standard connections. I had written and tested my C# code before I set up the private endpoint and it didn’t cross my mind that the connection string would be different. Then when I wrote the node.js script, I copied the correct connection string from atlas and didn’t notice the difference.", "username": "Cameron_McNair" }, { "code": "-pri", "text": "Glad that you resolved your issue.A short explanation of why we use different FQDNs for public versus PrivateLink. When PrivateLink was first deployed, I recall that we used the same connection string for both public and private connection strings and relied on split-horizon DNS capabilities of Route53 to resolve public versus private IP addresses. 
In theory this works great. In practice - due to DNS caching - this was problematic. If you were initially connected via the public network, your DNS stack would cache the public IPs. When you enabled PrivateLink, you would have those public IPs cached until the TTL expired. Our solution was to differentiate PrivateLink connection strings by adding -pri into the FQDN thus creating two sets of FQDNs, which would be cached independently and correctly.", "username": "James_Kovacs" } ]
Serverless instance connection issue with C# driver through AWS VPC
2023-05-24T21:16:56.429Z
Serverless instance connection issue with C# driver through AWS VPC
816
null
[ "dot-net" ]
[ { "code": "Exception Info: MongoDB.Driver.MongoAuthenticationException: Unable to authenticate using sasl protocol mechanism SCRAM-SHA-1.\n ---&gt; MongoDB.Driver.MongoCommandException: Command saslContinue failed: bad auth : authentication failed.\n at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.ProcessReply(ConnectionId connectionId, ReplyMessage`1 reply)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Authentication.SaslAuthenticator.Authenticate(IConnection connection, ConnectionDescription description, CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Authentication.SaslAuthenticator.Authenticate(IConnection connection, ConnectionDescription description, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Authentication.DefaultAuthenticator.Authenticate(IConnection connection, ConnectionDescription description, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Authentication.AuthenticationHelper.Authenticate(IConnection connection, ConnectionDescription description, IReadOnlyList`1 authenticators, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.ConnectionInitializer.Authenticate(IConnection connection, ConnectionInitializerContext connectionInitializerContext, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.PooledConnection.Open(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.ConnectionCreator.CreateOpenedOrReuse(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnectionHelper.AcquireConnection(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnection(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.GetChannel(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Bindings.ServerChannelSource.GetChannel(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Bindings.ChannelSourceHandle.GetChannel(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadContext.Initialize(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadContext.Create(IReadBinding binding, Boolean retryRequested, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.Execute(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperation[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperation[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperation[TResult](IClientSessionHandle session, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.FindSync[TProjection](IClientSessionHandle session, FilterDefinition`1 filter, FindOptions`2 options, CancellationToken 
cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.&lt;&gt;c__DisplayClass46_0`1.&lt;FindSync&gt;b__0(IClientSessionHandle session)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSession[TResult](Func`2 func, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.FindSync[TProjection](FilterDefinition`1 filter, FindOptions`2 options, CancellationToken cancellationToken)\n at MongoDB.Driver.FindFluent`2.ToCursor(CancellationToken cancellationToken)\n at MongoDB.Driver.IAsyncCursorSourceExtensions.ToList[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\n at AspNetCore.Identity.Mongo.Migrations.Migrator.Apply[TUser,TRole,TKey](IMongoCollection`1 migrationCollection, IMongoCollection`1 usersCollection, IMongoCollection`1 rolesCollection)\n at AspNetCore.Identity.Mongo.MongoIdentityExtensions.AddIdentityMongoDbProvider[TUser,TRole,TKey](IServiceCollection services, Action`1 setupIdentityAction, Action`1 setupDatabaseAction, IdentityErrorDescriber identityErrorDescriber)\n at AspNetCore.Identity.Mongo.MongoIdentityExtensions.AddIdentityMongoDbProvider[TUser,TRole](IServiceCollection services, Action`1 setupIdentityAction, Action`1 setupDatabaseAction)\n at Program.&lt;Main&gt;$(String[] args) in D:\\a\\1\\s\\Source\\TournamentGo.Portal\\Program.cs:line 85\n at Program.&lt;Main&gt;(String[] args)\n", "text": "I recently updated the MongoDB.Driver library from 2.17 to 2.19.1.The connection string I am using is formatted as,mongodb+srv://USERNAME:PASSWORD@SERVER/DATABASE?retryWrites=true&w=majoritywhich is exactly the same as I was using prior to the update. I am able to connect locally using these connection details both from my code, and from mongoshell.All was well until I deployed my site to an Azure app service and I started receiving this error in the logsDoes anyonw know what might have changed between these versions that would cause this and what I need to change to resolve it?", "username": "Glenn_Moseley1" }, { "code": "", "text": "Just to update, I’ve rolled it back to 2.17.1 and it didn’t resolve the issue so there is something else going on.I’ve removed all the network restrictions for this server and it still hasn’t helped.The database is hosted in Atlas, version 6.0.6", "username": "Glenn_Moseley1" }, { "code": "bad auth : authentication failed.mongosh", "text": "Hi, Glenn,The error message bad auth : authentication failed. usually indicates incorrect credentials. I would suggest connecting via mongosh or Compass with those same credentials to eliminate that as a potential root cause. I would recommend verifying that your username and password do not contain any special characters that require percent-encoding. See Connection String components for more information.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Thanks for the response James.I’ve tried those things. Running the code locally with the same connection string as I’ve deployed with, then connecting using the username and password through mongoshell. 
These all work fine.This connection string was working previously so something else must have changed.The connection string is copied from atlas and the password also generated in Atlas so I would think that it complies with the password rules.Aside from network, is there anything else which might be blocking the connection?", "username": "Glenn_Moseley1" }, { "code": "", "text": "I’ve redeployed my last working version with the same connection and that still works.So there must be something in the code. It is failing at start-up and the from the stack trace I can see it is failing inside the AspNetCore.Identity.Mongo library I am using so I’ll attach a debugger and start stepping through there.I think there is little you could suggest at the moment. Will report back if I find any Mongo specific issues with this.", "username": "Glenn_Moseley1" }, { "code": "", "text": "Issue solved.It turned out to be a red herring. My deployments were picking up on a configuration file which had snuck into my deployment pipeline.Thanks for your time.", "username": "Glenn_Moseley1" }, { "code": "", "text": "Glad to hear that you found the root cause of your issue. Have a great weekend!", "username": "James_Kovacs" } ]
Authentication Failure from Azure app service after upgrade to 2.19.1 using C# drivers
2023-05-25T15:42:58.654Z
Authentication Failure from Azure app service after upgrade to 2.19.1 using C# drivers
782
null
[ "dot-net" ]
[ { "code": "", "text": "In version 2.15, the ability to set explain modifier was removed. How use explain in c# driver 2.19?", "username": "via96" }, { "code": "", "text": "Oh my God. I am also very concerned about this issue! Mr. Traxel, answer me if you are get an answer", "username": "BademanYah" }, { "code": "BsonDocumentRunCommandMongoCursor.Explain(bool verbose)Explain", "text": "Hi, @via96,Welcome to the MongoDB Community Forums. I understand that you’re having trouble with explaining queries with recent versions of the .NET/C# Driver.To explain commands in the 2.x API, we recommend creating an explain command using BsonDocument and executing it via the RunCommand helper.The legacy 1.x API does have MongoCursor.Explain(bool verbose). Reviewing the source code between v2.14.x and v2.19.x, I do not see any changes to Explain. NOTE: We do not recommend using the legacy 1.x API anymore as it will be removed in the next major release.If you can provide code examples that work in versions prior to v2.15, but do not work now, we can investigate further.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How use explain in c# driver 2.19
2023-05-26T05:40:08.568Z
How use explain in c# driver 2.19
527
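The suggestion above is to build the explain command yourself and run it via RunCommand. The command document it refers to has the following shape (shown in mongosh syntax with an illustrative collection and filter); the same document can be assembled as a BsonDocument in C#:

// Explain a find as a plain database command; verbosity can also be
// "queryPlanner" or "allPlansExecution".
db.runCommand({
  explain: { find: "orders", filter: { status: "A" } },
  verbosity: "executionStats"
})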
null
[ "unity" ]
[ { "code": "Unable to load DLL 'realm-wrappers'", "text": "Hi,i created a Unity based project with Realm.\nEverything works just fine in the unity editor.However. i tried and builded the project for webGL via the “Build and run” option.\nIt starts my scene but as soon as the Realm features shall kick in, it stopped working.\nAccording to the developer console it shows\nUnable to load DLL 'realm-wrappers'I hope someone can help me about this", "username": "Marvin_the_martian" }, { "code": "", "text": "Hey @Marvin_the_martian,WebGL isn’t a supported target for Realm and Unity. Unity is supported for all our target platforms.", "username": "Yavor_Georgiev" } ]
Unable to load DLL 'realm-wrappers' in WebGL Unity Project
2023-05-26T18:29:06.893Z
Unable to load DLL &lsquo;realm-wrappers&rsquo; in WebGL Unity Project
776
https://www.mongodb.com/…9_2_1024x551.png
[ "kotlin", "warsaw-mug" ]
[ { "code": "Technology Team Leader @ eSkySoftware Engineer & GDG OrganizerDeveloper Advocate @ MongoDBMUG Warsaw Leader and Senior Solutions Architect @ MongoDBGoogle DSC Co-lead | CS @ PJATK", "text": "\nScreenshot 2023-05-11 0924011266×682 108 KB\nMUG Warsaw in collaboration with Google Developer Group, Warsaw, and GDSC PJATK is excited to host an event in collaboration where we will #Explore MongoDB Atlas on GCP, Kotlin Multiplatform, and more. Connect with devs, and enjoy Trivia & Swag. Don’t miss out!Dive into tech talks, engage with the developer community, and learn from real-life use cases. Enjoy SWAG and pizza! ! To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Event Type: In-Person\nLocation: Google For Startups Campus Warsaw Plac Konesera 10, 03-736Technology Team Leader @ eSky–Software Engineer & GDG Organizer–\nimage512×512 88.5 KB\nDeveloper Advocate @ MongoDB–\nimage512×512 101 KB\nMUG Warsaw Leader and Senior Solutions Architect @ MongoDB–Google DSC Co-lead | CS @ PJATK", "username": "Harshit" }, { "code": "", "text": "Hello,\nI am interested in the event, but I see two different dates (on posters and emails). When exactly will the event take place - May 25 or May 26? Where can I find information about costs / how to pay?", "username": "Piotr_Buczek" }, { "code": "", "text": "Hi @Piotr_Buczek - I’m glad to hear you’re interested in attending the event. The event is on Friday, May 26. There is no cost to attend the event. To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.\n[/quote]", "username": "Veronica_Cooley-Perry" }, { "code": "", "text": "We are thrilled to have you all join us tomorrow!We want to make sure everyone has a fantastic time, so please arrive on time at 05:50 PM to ensure yourself a seat and don’t miss any of the sessions, and we can all have some time to chat before the talks begin.There are a few essential things to keep in mind:Please bring along one of your Government-Issued IDs to access the building.Please stay within the designated event premises and maintain a respectful and professional atmosphere throughout the office. We also ask that you throw away any used plates and cans to keep the space clean.If you have any questions, please don’t hesitate to ask by replying to this thread!We can’t wait to see you all at the event!!", "username": "Harshit" } ]
MUG Warsaw: Innovate Together - MUG meets Google Developers Group
2023-05-08T10:14:52.118Z
MUG Warsaw: Innovate Together - MUG meets Google Developers Group
2,178
null
[ "node-js", "mongoose-odm", "compass", "atlas-cluster" ]
[ { "code": "MongooseError: Operation `users.findOne()` buffering timed out after 10000ms\n at Timeout.<anonymous> (/home/container/node_modules/mongoose/lib/drivers/node-mongodb-native/collection.js:175:23)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7) Unhandled Rejection at Promise\nMongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://www.mongodb.com/docs/atlas/security-whitelist/\n at Connection.openUri (/home/container/node_modules/mongoose/lib/connection.js:825:32)\n at /home/container/node_modules/mongoose/lib/index.js:414:10\n at /home/container/node_modules/mongoose/lib/helpers/promiseOrCallback.js:41:5\n at new Promise (<anonymous>)\n at promiseOrCallback (/home/container/node_modules/mongoose/lib/helpers/promiseOrCallback.js:40:10)\n at Mongoose._promiseOrCallback (/home/container/node_modules/mongoose/lib/index.js:1288:10)\n at Mongoose.connect (/home/container/node_modules/mongoose/lib/index.js:413:20)\n at authorize (/home/container/index.js:54:11)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'ac-0ddk8at-shard-00-00.s3lxppu.mongodb.net:27017' => [ServerDescription],\n 'ac-0ddk8at-shard-00-01.s3lxppu.mongodb.net:27017' => [ServerDescription],\n 'ac-0ddk8at-shard-00-02.s3lxppu.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-olf0a9-shard-0',\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined\n} Unhandled Rejection at Promise\n", "text": "Hello. I hope I’m right here.My Ubuntu server was running pretty smoothly and without any problems until the day before yesterday. and since then this error comes up:The connection to MongoDB works without problems.compass: \nWindows servers: \n0.0.0.0: Anyone can connect (is important for me because of my authentication bot) \n27017 Port is open With another Ubuntu server it runs without problems.\nNothing has been done. no updates or anything else…\nEven if I whitelist my IP as the error message says, it doesn’t work…", "username": "Tobias_Horacek" }, { "code": "", "text": "Hello @Tobias_Horacek ,Welcome to The MongoDB Community Forums! As you have confirmed that it is accessible via other clients, the issues seems to be with your application server. Can you please confirm a few things such as:Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "I run a panel system there where I control my Discord Bots. and a few bots need a connection to a license server running over MongoDB\nI restarted the bots and the error came that they can no longer connect. After a while it worked again and then when I restarted the connection didn’t work again\nNo, the Ubunut Server is an independent server and has nothing to do with Windows. I just listed where I tested it everywhere\nUnfortunately, restarting didn’t help either.How can I connect to the Via Terminal? 
Because I only find how to install a server but I don’t need a server ^^", "username": "Tobias_Horacek" }, { "code": "mongosh", "text": "you can try using mongosh", "username": "francisco_Innocenti" }, { "code": "Using MongoDB: 6.0.6\nUsing Mongosh: 1.9.0\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\nTo help improve our products, anonymous usage data is collected and sent to Mong oDB periodically (https://www.mongodb.com/legal/privacy-policy).\nYou can opt-out by running the disableTelemetry() command.\n\nAtlas atlas-olf0a9-shard-0 [primary] test>\n", "text": "I get this result when connecting to the MongoDB\nSo does this look ok?", "username": "Tobias_Horacek" }, { "code": "Current Mongosh Log ID: 646ef827dc68e74726cf2055\nConnecting to: mongodb://127.0.0.1:27015/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.9.0\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27015\n", "text": "And that’s what I get when I call this up. Unfortunately, I don’t know if this is generally an error or not", "username": "Tobias_Horacek" }, { "code": "", "text": "I got a connection twice now so the Discord Bot was able to connect. But that’s about it.\nWith my test panel it works without any problems and the same system is installed there as on the main server. I’m slowly not understanding it anymore", "username": "Tobias_Horacek" } ]
Connection problems etc
2023-05-20T07:18:02.939Z
Connection problems etc
801
null
[]
[ { "code": "", "text": "Hello guys, I have a trouble. I created a new user in the cluster and applied ROLES (only “READ” few databases), unfortunately when I acess this user all databases is showed. How Can I resolve it ?", "username": "Pedro_Henrique_Brandao" }, { "code": "", "text": "all databases is showedwhat you mean by this? which command?Also did you confirm indeed authentication is enabled on the server?", "username": "Kobe_W" }, { "code": "", "text": "I’m using NoSQLBooster, databases are showed beside on the screen without command. I wrote a normal script and ran this script. Is Autthentication enable on the server only by admin ?", "username": "Pedro_Henrique_Brandao" }, { "code": "", "text": "", "username": "tapiocaPENGUIN" }, { "code": " roles: [{\"role\":\"read\",\"db\":\"database\"}]\n", "text": "1- This is the script that I ran, It’s a example.db.createUser(\n{\nuser: “x”,\npwd: “xy”,}\n)2- Yes I’m logged as the new user. I applied only “read” roles, but this user read all databases.", "username": "Pedro_Henrique_Brandao" }, { "code": "", "text": "Perhaps, Is problem the authentication ? Each new user I need to add a authentication ?", "username": "Pedro_Henrique_Brandao" } ]
Trouble with roles in the cluster
2023-05-24T19:49:31.493Z
Trouble with roles in the cluster
339
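A sketch of the kind of scoped user discussed above; note that the restriction only takes effect when the server is started with authorization enabled (security.authorization: enabled in mongod.conf), which is the likely missing piece when every database remains visible. Names are illustrative:

use admin
db.createUser({
  user: "reportingUser",
  pwd: "********",
  roles: [
    { role: "read", db: "sales" },
    { role: "read", db: "inventory" }
  ]
})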
null
[ "compass", "containers" ]
[ { "code": "{\n _id: 'rs0',\n version: 1,\n term: 3,\n members: [\n {\n _id: 0,\n host: 'mongo1:30001',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 1,\n host: 'mongo2:30002',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 2,\n host: 'mongo3:30003',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n }\n ],\n protocolVersion: Long(\"1\"),\n writeConcernMajorityJournalDefault: true,\n settings: {\n chainingAllowed: true,\n heartbeatIntervalMillis: 2000,\n heartbeatTimeoutSecs: 10,\n electionTimeoutMillis: 10000,\n catchUpTimeoutMillis: -1,\n catchUpTakeoverDelayMillis: 30000,\n getLastErrorModes: {},\n getLastErrorDefaults: { w: 1, wtimeout: 0 },\n replicaSetId: ObjectId(\"646dd34139abd7f32bb09ea1\")\n }\n}\n {\n set: 'rs0',\n date: 2023-05-24T10:01:40.074Z,\n myState: 2,\n term: Long(\"3\"),\n syncSourceHost: 'mongo2:30002',\n syncSourceId: 1,\n heartbeatIntervalMillis: Long(\"2000\"),\n majorityVoteCount: 2,\n writeMajorityCount: 2,\n votingMembersCount: 3,\n writableVotingMembersCount: 3,\n optimes: {\n lastCommittedOpTime: { ts: Timestamp({ t: 1684922497, i: 1 }), t: Long(\"3\") },\n lastCommittedWallTime: 2023-05-24T10:01:37.139Z,\n readConcernMajorityOpTime: { ts: Timestamp({ t: 1684922497, i: 1 }), t: Long(\"3\") },\n appliedOpTime: { ts: Timestamp({ t: 1684922497, i: 1 }), t: Long(\"3\") },\n durableOpTime: { ts: Timestamp({ t: 1684922497, i: 1 }), t: Long(\"3\") },\n lastAppliedWallTime: 2023-05-24T10:01:37.139Z,\n lastDurableWallTime: 2023-05-24T10:01:37.139Z\n },\n lastStableRecoveryTimestamp: Timestamp({ t: 1684922447, i: 1 }),\n electionParticipantMetrics: {\n votedForCandidate: true,\n electionTerm: Long(\"3\"),\n lastVoteDate: 2023-05-24T09:30:57.058Z,\n electionCandidateMemberId: 1,\n voteReason: '',\n lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1684920580, i: 1 }), t: Long(\"2\") },\n maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1684920580, i: 1 }), t: Long(\"2\") },\n priorityAtElection: 1,\n newTermStartDate: 2023-05-24T09:30:57.071Z,\n newTermAppliedDate: 2023-05-24T09:30:57.349Z\n },\n members: [\n {\n _id: 0,\n name: 'mongo1:30001',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 1846,\n optime: [Object],\n optimeDurable: [Object],\n optimeDate: 2023-05-24T10:01:37.000Z,\n optimeDurableDate: 2023-05-24T10:01:37.000Z,\n lastAppliedWallTime: 2023-05-24T10:01:37.139Z,\n lastDurableWallTime: 2023-05-24T10:01:37.139Z,\n lastHeartbeat: 2023-05-24T10:01:38.263Z,\n lastHeartbeatRecv: 2023-05-24T10:01:38.266Z,\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: 'mongo2:30002',\n syncSourceId: 1,\n infoMessage: '',\n configVersion: 1,\n configTerm: 3\n },\n {\n _id: 1,\n name: 'mongo2:30002',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 1846,\n optime: [Object],\n optimeDurable: [Object],\n optimeDate: 2023-05-24T10:01:37.000Z,\n optimeDurableDate: 2023-05-24T10:01:37.000Z,\n lastAppliedWallTime: 2023-05-24T10:01:37.139Z,\n lastDurableWallTime: 2023-05-24T10:01:37.139Z,\n lastHeartbeat: 2023-05-24T10:01:38.263Z,\n lastHeartbeatRecv: 2023-05-24T10:01:39.779Z,\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1684920657, i: 1 }),\n electionDate: 
2023-05-24T09:30:57.000Z,\n configVersion: 1,\n configTerm: 3\n },\n {\n _id: 2,\n name: 'mongo3:30003',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 1849,\n optime: [Object],\n optimeDate: 2023-05-24T10:01:37.000Z,\n lastAppliedWallTime: 2023-05-24T10:01:37.139Z,\n lastDurableWallTime: 2023-05-24T10:01:37.139Z,\n syncSourceHost: 'mongo2:30002',\n syncSourceId: 1,\n infoMessage: '',\n configVersion: 1,\n configTerm: 3,\n self: true,\n lastHeartbeatMessage: ''\n }\n ],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1684922497, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"71462de63bcd84a20b0fd37f9f8ecbeeeeff4646\", \"hex\"), 0),\n keyId: Long(\"7236672499625230342\")\n }\n },\n operationTime: Timestamp({ t: 1684922497, i: 1 })\n}\n", "text": "Hi, i’m trying to setup a replicat between 3 MongoDB instance in a Docker environnement. All instances of MongoDB seems to works well. I’m able to be connected to thoses instances with Compass. The replicat seems also to works, a primary is elected and nothing in the logs altert me of a potential problem.Here is the rs.conf() output:And the rs.status():I think i’m missing somethings in the initiatial parameters but i d’ont know what.\nAny help will be useful.Thanks,\nMallory LP.", "username": "MalloryLP" }, { "code": "", "text": "What happens if you make a majority write?", "username": "Kobe_W" }, { "code": "mongodb://******:******@XX.XX.XX.XX:30001/?replicaSet=rs0&directConnection=true&authMechanism=DEFAULT&authSource=admin&w=majority\nmongodb://******:******@XX.XX.XX.XX:30001,XX.XX.XX.XX:30002,XX.XX.XX.XX:30003/?replicaSet=rs0&authMechanism=DEFAULT&authSource=admin&w=majority\ngetaddrinfo ENOTFOUND mongo1\nmongo2 | {\"t\":{\"$date\":\"2023-05-25T07:25:54.887+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"YY.YY.YY.YY:59230\",\"uuid\":\"fcdd2403-e65a-4572-a505-90a5865b0650\",\"connectionId\":70,\"connectionCount\":13}}\nmongo3 | {\"t\":{\"$date\":\"2023-05-25T07:25:54.887+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"YY.YY.YY.YY:52958\",\"uuid\":\"5be994ac-1641-4b4f-a6c2-d172dd90125a\",\"connectionId\":38,\"connectionCount\":9}}\nmongo1 | {\"t\":{\"$date\":\"2023-05-25T07:25:54.886+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"YY.YY.YY.YY:36398\",\"uuid\":\"3403d40f-9766-4897-80e6-7ee9b10cf35d\",\"connectionId\":50,\"connectionCount\":7}}\nmongo2 | {\"t\":{\"$date\":\"2023-05-25T07:25:54.890+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn70\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"YY.YY.YY.YY:59230\",\"client\":\"conn70\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Windows_NT\",\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.22621\"},\"platform\":\"Node.js v16.17.1, LE (unified)|Node.js v16.17.1, LE (unified)\",\"application\":{\"name\":\"MongoDB\nCompass\"}}}}\nmongo3 | {\"t\":{\"$date\":\"2023-05-25T07:25:54.890+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn38\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"YY.YY.YY.YY:52958\",\"client\":\"conn38\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Windows_NT\",\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.22621\"},\"platform\":\"Node.js v16.17.1, LE 
(unified)|Node.js v16.17.1, LE (unified)\",\"application\":{\"name\":\"MongoDB\nCompass\"}}}}\nmongo1 | {\"t\":{\"$date\":\"2023-05-25T07:25:54.891+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn50\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"YY.YY.YY.YY:36398\",\"client\":\"conn50\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Windows_NT\",\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.22621\"},\"platform\":\"Node.js v16.17.1, LE (unified)|Node.js v16.17.1, LE (unified)\",\"application\":{\"name\":\"MongoDB\nCompass\"}}}}\nmongo2 | {\"t\":{\"$date\":\"2023-05-25T07:25:54.946+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn70\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"YY.YY.YY.YY:59230\",\"uuid\":\"fcdd2403-e65a-4572-a505-90a5865b0650\",\"connectionId\":70,\"connectionCount\":12}}\nmongo1 | {\"t\":{\"$date\":\"2023-05-25T07:25:54.946+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn50\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"YY.YY.YY.YY:36398\",\"uuid\":\"3403d40f-9766-4897-80e6-7ee9b10cf35d\",\"connectionId\":50,\"connectionCount\":6}}\nmongo3 | {\"t\":{\"$date\":\"2023-05-25T07:25:54.947+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn38\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"YY.YY.YY.YY:52958\",\"uuid\":\"5be994ac-1641-4b4f-a6c2-d172dd90125a\",\"connectionId\":38,\"connectionCount\":8}}\n", "text": "I have added w=majority in the URI but nothing new.\nI think I have misunderstood the way to connect to the replica via MongoDBCompass, I’m using this URI :But shouldn’t I use this URI with the 3 MongoDB instances ?When i try this URI, I have this error :Here are the logs :Thanks for your reply,\nMalloryLP.", "username": "MalloryLP" }, { "code": "directConnection=true", "text": "UsingdirectConnection=trueis probably wrong when your mongod is running on docker unless your client is connecting from the same docker instance.But shouldn’t I use this URI with the 3 MongoDB instances ?You do not really have to use all of them. When you connect with rs0, the driver get the replica set info from the node/nodes it connects, then try to reconnect to all members.The fact that you get ENOTFOUND on mongo1 seems to indicate that some docker config is missing or not accessible from where you try to connect.", "username": "steevej" }, { "code": "directConnection=true", "text": "I have tried serval connections string and i can connect to all mongodb instance in the replica with directConnection=true form external client but when I don’t use this parameter I can’t. As you can see in the logs, the connection is detected.\nI will investigate in Docker configuration.", "username": "MalloryLP" }, { "code": "", "text": "", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Data are not replicated between MongoDB instance (replication in docker compose)
2023-05-24T10:06:19.503Z
Data are not replicated between MongoDB instance (replication in docker compose)
996
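The getaddrinfo ENOTFOUND mongo1 error above suggests the external client cannot resolve the container hostnames advertised in the replica set configuration (mongo1/mongo2/mongo3). One common workaround, assuming the Docker host publishes ports 30001-30003, is to map those names to the host's address on the client machine:

# /etc/hosts on the client (address is illustrative)
XX.XX.XX.XX  mongo1 mongo2 mongo3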
null
[]
[ { "code": "", "text": "I’m planning to create an Atlas Search Index via Terraform or Atlas APIs. Both of these methods accept name as a parameter. Now, if I do this CREATE operation again with the same name but with a different property, say analyzer, will it update the existing index or throw an error?", "username": "Shabir_Hamid" }, { "code": "", "text": "Why not use Update?", "username": "Elle_Shwer" }, { "code": "", "text": "Yes but I’m trying to automate managing indexes and using UPDATE would require me to maintain the indexId. I still didn’t the answer for what happens when CREATE is called for second time", "username": "Shabir_Hamid" }, { "code": "", "text": "I haven’t personally tried it to know but my guess is it will fail and say “index already exists with this name” or something to that degree", "username": "Elle_Shwer" }, { "code": "", "text": "Hi @Shabir_Hamid, I have implemented some automated search index creation and can confirm that the API throws an error if you try to create an index with the same name.In my case I implemented logic to request the existing search indexes on the collection and then check to see if any of them have the same name. If not then the request to create the index is created.", "username": "Junderwood" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Create atlas search index with same index name
2023-05-25T08:19:27.276Z
Create atlas search index with same index name
546
null
[]
[ { "code": "{\"t\":{\"$date\":\"2023-05-09T07:03:14.573+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23330, \"ctx\":\"main\",\"msg\":\"ERROR: Cannot write pid file to {path_string}: {errAndStr_second}\",\"attr\":{\"path_string\":\"/var/run/mo\nngodb/mongod.pid\",\"errAndStr_second\":\"No such file or directory\"}}\nExecStartPre=/usr/bin/mkdir -p /var/run/mongodb\nExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb\nExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb\nPermissionsStartOnly=true\n", "text": "It is observed that after an upgrade of mongodb to 4.4.21, mongodb service unable to start after a system reboot and reporting error like:On RHEL 7 server. Creating the path and updating the permission temporarly fixing issue however issue will popup after restart of system.\n/var/run is a temporary filesystem path and this will be cleared on every reboot.Upon further verifcation on the commit history, found that this commit removed the required lines from systemd unit file.Could you please look into the issue and fix this?", "username": "mad_buck" }, { "code": "", "text": "I hope you understand that if you have to recreate /var/run/mongodb every time you restart it means you lose all your data every time.It might be what you want. But it is not for most. Having those command in the service file is very dangerous because they have more to do about installating MongoDB rather than running MongoDB. The directories and permissions should be created and set once and only once.", "username": "steevej" }, { "code": "The /var/run/ directory contains system information data describing the system since it was booted. The files under this directory are cleared (removed or truncated as appropriate) at the beginning of the boot process. The directory is for programs that use more than one run-time file (e.g. utmp file, which stores information about who is currently using the system). The files in this directory are created dynamically by individual services as they start and during the boot time.\n", "text": "There is no data being kept there, but the pid information.The /var/run directory contains lot of files and is increasing while the system is running. What is the purpose of this directory? Why files/directorys are deleted from /var/run after reboot ?and the pid path/ file generation should be taken care by the systemd unit files. Hope the situation is more clear now!", "username": "mad_buck" }, { "code": "", "text": "I recently setup a new install on Almalinux and am faced with the same problem. There seems to be a lot of posts out there regarding this that point to adding the directory creation/permission entries to the service file which I did. Each time our servers patch and mongodb updates, the changes are reverted and I am faced with an unhappy service owner. What is the real ‘fix’ for this??", "username": "Mark_Hill" } ]
Unable to start mongodb service with mongodb version 4.4.21 on RHEL 7, required start commands got removed
2023-05-09T13:18:26.225Z
Unable to start mongodb service with mongodb version 4.4.21 on RHEL 7, required start commands got removed
1,203
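Because edits to the packaged unit file are overwritten on every package update, a systemd drop-in is the usual way to keep the pre-start commands quoted above across upgrades; a sketch:

# /etc/systemd/system/mongod.service.d/override.conf
[Service]
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb

# then: systemctl daemon-reload && systemctl restart mongod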
null
[ "aggregation", "queries", "node-js", "data-modeling" ]
[ { "code": "\"keys\" : {\n \"searchPeer\" : [ \n \"parampampam2\", \n \"parampampam1\" \n ],\n \"datacenter\" : [ \n \"East-US\" \n ],\n \"company\" : [ \n \"company-name\" \n ],\n \"testId\": [...],\n \"location\": [...],\n \"url\": [...],\n\n}\n{$or: [\n {name: {$regex: '.*shortrundampshiproom1.*', $options: 'i'}},\n {\"keys.qrtcTestId\": {$regex: '.*shortrundampshiproom1.*', $options: 'i'}},\n {\"keys.searchPeer\": {$regex: '.*shortrundampshiproom1.*', $options: 'i'}},\n {\"keys.datacenter\": {$regex: '.*shortrundampshiproom1.*', $options: 'i'}},\n {\"keys.location\": {$regex: '.*shortrundampshiproom1.*', $options: 'i'}},\n {\"keys.company\": {$regex: '.*shortrundampshiproom1.*', $options: 'i'}},\n {\"keys.url\": {$regex: '.*shortrundampshiproom1.*', $options: 'i'}} \n ]}\n", "text": "Hi, there!\nWe have a product and faced a problem with slow searching.\nHere the details:Q: how we can make the search faster?", "username": "Alexander_Gulakov" }, { "code": "", "text": "Q: how we can make the search faster?By not abusing $regex with ‘i’ $options.If this is a frequent use-case you should normalize the data into all lower cases or all upper cases.", "username": "steevej" }, { "code": "", "text": "No, this make response even longer\nWith option ‘i’ it is about 16-19 seconds, without ‘i’ - 44-49 seconds.", "username": "Alexander_Gulakov" } ]
Slow Search by nested fields
2023-05-25T14:45:08.323Z
Slow Search by nested fields
563
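A sketch of the normalization idea mentioned in the reply: store an already-lowercased copy of the searched values, index it, and query it without the 'i' option. Collection and field names are illustrative, and only one of the keys is shown:

// One-time backfill of a lowercased copy (pipeline updates require MongoDB 4.2+).
db.tests.updateMany({}, [
  { $set: { "keysLower.searchPeer": { $map: { input: "$keys.searchPeer", in: { $toLower: "$$this" } } } } }
])
db.tests.createIndex({ "keysLower.searchPeer": 1 })
// Case-sensitive query against pre-lowercased data.
db.tests.find({ "keysLower.searchPeer": { $regex: "shortrundampshiproom1" } })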
null
[ "aggregation", "compass", "mongodb-shell" ]
[ { "code": "db.movies.aggregate([\n { \n $addFields: {\n fromFunction: {\n $function: {\n body: \"function(){return 'hello'}\",\n args: [], \n lang: 'js'\n }\n }\n }\n }\n ])\n", "text": "Hi all. I’m trying to use a function in order to parse a json format field in a collection. But it seems that I’m missing some previous steps in order to do so, if I run a simple aggregation like this:nothing happens, the collection is not created ( Reference: How to Use Custom Aggregation Expressions in MongoDB 4.4 | MongoDB).\nDo I need to install any driver or package, or setup the shell in order to run such javascript function? I’m using MongoDB Compass Shell. Versions: MongoDB Compass 1.36.4, mongoDB 6.0.6", "username": "Luis_Leon" }, { "code": "fromFunction'hello'db.movies.aggregate([\n {\n '$addFields': {\n fromFunction: { '$function': { body: function(){return 'hello'}, args: [], lang: 'js' } }\n }\n }\n])\ndb.movies.countDocuments()", "text": "Hi @Luis_Leon - Welcome to the community I presume you’re expecting it to return a document with an additional field called fromFunction with the value 'hello' - please correct me if I’m wrong here.nothing happensCan you try running the aggregation below?:If nothing is returned, can you advise the output of db.movies.countDocuments()?Regards,\nJason", "username": "Jason_Tran" }, { "code": "db.movies.aggregate([\n {\n '$addFields': {\n fromFunction: { '$function': { body: function(){return 'hello'}, args: [], lang: 'js' } }\n }\n }\n])\n", "text": "Hello Jason, thanks for your reply. I tried running your aggregation, but it justs hangs and the prompt is not returned. By the way, when I tried running a function using JSON.parse it behave the same way, it just hanged, that’s why I tried running a simpler function in order to see if the shell configuration was Ok.", "username": "Luis_Leon" }, { "code": "", "text": "db.movies.countDocuments()\nimage1900×1010 34.1 KB\n", "username": "Luis_Leon" }, { "code": "use test\"test\"db.movies.insertOne({a:1})\"movies\"db.movies.aggregate([\n {\n '$addFields': {\n fromFunction: { '$function': { body: function(){return 'hello'}, args: [], lang: 'js' } }\n }\n }\n])\ntest>", "text": "Hi Luis,Looks like theres no documents in that collection. Can you try the following on a test environmentand let me know the output?Additionally, when you state the prompt hangs, do you mean that the test> indicator in the embedded mongoshell in compass is not returned and you cannot type further?Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "db.movies.aggregate([\n {\n '$addFields': {\n fromFunction: { '$function': { body: function(){return 'hello'}, args: [], lang: 'js' } }\n }\n }\n])\n", "text": "Hi Json. Step 2 runs ok\n\nimage1860×983 46.8 KB\nStep 3, again the prompt hangs, with that I mean that the test indicator in the embedded mongoshell never returns, the only way to quit is closing Compass window and start again, like picture below:Am I missing some driver or software installation in order to run a javascript function?", "username": "Luis_Leon" }, { "code": "", "text": "\nimage1847×982 46.2 KB\n", "username": "Luis_Leon" }, { "code": "mongosh", "text": "What version of Compass are you using? 
Is this behaviour replicated if you try with the standalone mongosh shell?I’ll try replicate this behaviour using Compass on my own test environment.Regards,\nJason", "username": "Jason_Tran" }, { "code": "mongoshtest> db.movies.insertOne({a:1})\n{\n acknowledged: true,\n insertedId: ObjectId(\"646ff4aa0929e08e64012e21\")\n}\ntest> db.movies.find()\n[ { _id: ObjectId(\"646ff4aa0929e08e64012e21\"), a: 1 } ]\n$functiontest> db.movies.aggregate([\n {\n '$addFields': {\n fromFunction: { '$function': { body: function(){return 'hello'}, args: [], lang: 'js' } }\n }\n }\n])\n[\n {\n _id: ObjectId(\"646ff4aa0929e08e64012e21\"),\n a: 1,\n fromFunction: 'hello'\n }\n]\n", "text": "Tested via mongosh on my test environment, the output is per the below code snippets:The aggregation with $function:", "username": "Jason_Tran" }, { "code": "mongoshmongosh", "text": "I was able to replicate the prompt hanging in the embedded mongosh shell in Compass. It does work from the standalone mongosh so if you’re able to try that in the mean time hopefully it works for you.", "username": "Jason_Tran" }, { "code": "", "text": "Yes, it seems to work in standalone shell and something is wrong in Compass. Even trying to visualize the document in compaas seems to be wrong.\nimage1920×1080 225 KB\n", "username": "Luis_Leon" }, { "code": "", "text": "Good news is javascript function is running ok now using standalone shell. Now the problem is different, but I think I will open a new topic. Thanks Jason", "username": "Luis_Leon" }, { "code": "", "text": "Thanks for updating the post with confirmation Luis ", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using $function to run a javascript function is not working in mongosh
2023-05-25T00:13:45.127Z
Using $function to run a javascript function is not working in mongosh
1,122
null
[ "atlas-search" ]
[ { "code": "{\n \"mappingType\": \"explicit\",\n \"input\": [\"beer\"],\n \"synonyms\": [\"beer\", \"brew\", \"pint\"]\n}\n{\n \"mappingType\": \"explicit\",\n \"input\": [\"trash bin\"],\n \"synonyms\": [\"garbage can\"]\n}\n{\n \"mappingType\": \"explicit\",\n \"input\": [\"computer\"],\n \"synonyms\": [\"computing device\", \"information processing system\"]\n}\n", "text": "I am trying to add “explicit” synonyms in my source synonym collection in my “synonym” array.\nHowever, whenever I add a open compound words (i.e. words made up of 2 or more words with a space between them i.e. “radio station”), the document is considered by Atlas to be invalid. However, it is essential that my synonym lists contain open compound words.\nA sample document provided by MongoDB only shows @ https://www.mongodb.com/docs/atlas/atlas-search/synonyms/#std-label-synonyms-coll-format :but what I need also covered are cases like this one:as well as:Any help would be greatly appreciated! Thanks!", "username": "shards" }, { "code": "", "text": "Hi Sha,Thanks for your question - the use case and synonym documents you’ve described and listed here both seem valid, and should be supported by Atlas Search!Could you perhaps share the full JSON of your index definition, an example query you are running, and the exact error you are seeing? I’d love to help understand the issue you are facing in more detail and see if I can pinpoint a change that would get your index and queries working as expected!Best, Evan", "username": "Evan_Nixon" } ]
Entering an open compound word in "synonyms" in synonym collection renders the document invalid
2023-05-24T03:21:19.021Z
Entering an open compound word in “synonyms” in synonym collection renders the document invalid
677
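As a follow-up to Evan's reply in the thread above: multi-word (open compound) entries are expected to work, provided the synonym source collection is wired into the Atlas Search index and the query names that mapping. The sketch below is illustrative only; the index name, mapping name, source collection name, and the `description` field are assumptions, not taken from the original post.

```javascript
// Hypothetical Atlas Search index definition (created via the Atlas UI/API),
// pointing a synonym mapping at a source collection:
// {
//   "mappings": { "dynamic": true },
//   "synonyms": [
//     {
//       "name": "mySynonyms",
//       "analyzer": "lucene.standard",
//       "source": { "collection": "synonym_coll" }
//     }
//   ]
// }

// With the explicit mapping from the question ({ input: ["trash bin"],
// synonyms: ["garbage can"] }) stored in "synonym_coll", a query for
// "trash bin" should also match documents that only contain "garbage can".
db.items.aggregate([
  {
    $search: {
      index: "default",        // assumed index name
      text: {
        query: "trash bin",
        path: "description",   // assumed field name
        synonyms: "mySynonyms" // must reference the mapping defined in the index
      }
    }
  }
])
```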
null
[ "aggregation", "queries" ]
[ { "code": "{overs: [ \n {over:1,\n deliveries: [{runs:1},{runs:0},{runs:2},{runs:4},{runs:0},{runs:0}]\n },\n {over:2,\n deliveries: [{runs:3},{runs:6},{runs:2},{runs:0},{runs:0},{runs:0}]\n },\n ...]\n}\nballNo{overs: [\n {over:1,\n deliveries: [{ballNo:1, runs:1},{ballNo:2, runs:0},{ballNo:3, runs:2},{ballNo:4, runs:4},{ballNo:5, runs:0},{ballNo:6, runs:0}]\n }, \n {over:2,\n deliveries: [{ballNo:1, runs:3},{ballNo:2, runs:6},{ballNo:3, runs:2},{ballNo:4, runs:0},{ballNo:5, runs:0},{ballNo:6, runs:0}]\n },\n ...]\n}\n", "text": "I am working on a json object that has the following structure:Inside the deliveries array, I want to add a new field ballNo whose value is equal to the position of the sub-document", "username": "Vamsi_Kumar_Naidu_Pallapothula" }, { "code": "", "text": "Using aggregation pipeline, I would use:$range to $addFields an array [ 1 … $size of deliveries ]\n$map the $range above with $mergeObject for each element of deliveriesAnd if what you want needs to be permanent, a $merge stage back in the original collection.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Adding the position of a sub-document in an array as field inside that sub-document
2023-05-23T09:06:58.122Z
Adding the position of a sub-document in an array as field inside that sub-document
375
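A minimal sketch of the $range / $map / $mergeObjects approach described in the reply above. The collection name `matches` is an assumption; the `overs`, `deliveries` and `ballNo` field names come from the question.

```javascript
db.matches.aggregate([
  {
    $addFields: {
      overs: {
        $map: {
          input: "$overs",
          as: "over",
          in: {
            $mergeObjects: [
              "$$over",
              {
                deliveries: {
                  $map: {
                    // walk positions 0..size-1 and pair each delivery
                    // with its 1-based position as ballNo
                    input: { $range: [0, { $size: "$$over.deliveries" }] },
                    as: "idx",
                    in: {
                      $mergeObjects: [
                        { ballNo: { $add: ["$$idx", 1] } },
                        { $arrayElemAt: ["$$over.deliveries", "$$idx"] }
                      ]
                    }
                  }
                }
              }
            ]
          }
        }
      }
    }
  }
  // add { $merge: { into: "matches" } } as a final stage if the change
  // should be written back permanently, as the reply suggests
])
```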
null
[ "atlas-cluster", "atlas" ]
[ { "code": "", "text": "for a multi region cluster with 3 regions and 3:3:2 node distribution, the majority write concern will act as local to the primary region like if write operation is successful to 2 nodes in region 1 or will it wait for for completion of write to 5 nodes i.e. confirmation from cross region node that the write is complete?", "username": "Gopal_Sharma" }, { "code": "", "text": "You can have up to 7 voting members, so majority count is 4. Apart from the primary node, at least 3 others (any 3, same region or not) will have to acknowledge the writes before a response can be returned to client.", "username": "Kobe_W" } ]
Majority write concern in multi region
2023-05-25T14:32:58.319Z
Majority write concern in multi region
668
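To make Kobe_W's point above concrete, here is a small mongosh sketch of how the behaviour can be observed. The collection name and the 5-second timeout are arbitrary; the key point is that w: "majority" counts acknowledgements across voting members, regardless of region.

```javascript
// With 7 voting members the calculated majority is 4, so this insert is
// acknowledged once any 4 voting members (primary included, in any region)
// have the write.
db.orders.insertOne(
  { item: "test", createdAt: new Date() },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
)
// If 4 voting members cannot acknowledge within 5 seconds, the operation
// reports a write concern timeout instead of silently falling back to
// region-local acknowledgement.
```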
null
[ "indexes" ]
[ { "code": "ceateIndexuniquecurrentOpgetIndexescreateIndex", "text": "I’m working with a 10 year old service originally written against MongoDB 2.4, and currently on 3.6. The service relies on several unique indexes to maintain data integrity, which means those indexes must exist at all time the service is active. As such, the service calls ceateIndex on all indexes at startup and if any are missing (which generally shouldn’t happen, but…) blocks until they’re built.We’re looking to finally get off an EOL version of Mongo, but the new index build method is concerning as it allows writes during the build, and those writes can violate the unique constraint. I haven’t been able to find any method to ensure the index is built other than polling currentOp / getIndexes to check the build status. Is there some way to force clients to block on a createIndex call, or an equivalent way of ensuring the indexes are built before completing service startup?", "username": "Dave_Smith" }, { "code": "", "text": "The API manual doesn’t mention any such thing.Just put your status check code in a loop and wait until it’s done, though you may waste some cpu resources.", "username": "Kobe_W" } ]
Block until index is created
2023-05-25T21:42:13.938Z
Block until index is created
555
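A rough sketch of the loop Kobe_W describes in the thread above, written in mongosh style. It assumes the createIndex call only returns after the build has completed or failed (the documented behaviour of createIndexes); the collection, key and index name are made up, and a driver-based service would use its own equivalent calls and delay mechanism.

```javascript
function ensureUniqueIndex(coll, keys, name) {
  for (let attempt = 1; attempt <= 5; attempt++) {
    try {
      // returns once the build has completed, or throws if it failed,
      // e.g. because a concurrent write violated the unique constraint
      coll.createIndex(keys, { unique: true, name: name });
    } catch (e) {
      print(`index build attempt ${attempt} failed: ${e.message}`);
      sleep(1000); // mongosh helper; a service would use the driver's own delay
      continue;
    }
    // confirm the index is actually listed before letting startup proceed
    if (coll.getIndexes().some(ix => ix.name === name)) return;
  }
  throw new Error(`could not build unique index ${name}`);
}

ensureUniqueIndex(db.users, { email: 1 }, "email_unique");
```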
null
[ "dot-net" ]
[ { "code": "{\t\n\t\"dataSource\":\"mysource\",\n\t\"database\":\"mydb\",\t\n\t\"collection\":\"mycollection\",\n\t\"filter\": {\n\t\t\"accountSalesforceReference\":\"willnotfindthis\"\n\t},\t \n\t\"update\": {\n\t\t\"$set\":\t{\n\t\t\t\"accountName\":\"Test\",\n\t\t\t\"version\":\"1.0\",\n\t\t\t\"address\": {\n\t\t\t\t\t\"line1\":\"10 Bingo Road\",\n\t\t\t\t\t\"town\":\"Bingotown\",\n\t\t\t\t\t\"county\":\"Bingoshire\",\n\t\t\t\t\t\"postcode\":\"B1N G0\",\n\t\t\t\t\t\"country\":\"Bingo\"\n\t\t\t},\n\t\t\t\"billingAddress\": {\n\t\t\t\t\t \"line1\":\"10 Bingo Road\",\n\t\t\t\t\t \"town\":\"Bingotown\",\n\t\t\t\t\t \"county\":\"Bingoshire\",\n\t\t\t\t\t \"postcode\":\"B1N G0\",\n\t\t\t\t\t \"country\":\"Bingo\"\t\t\n\t\t\t}\n\t\t}\n\t},\n\t\"upsert\":true\n}\n", "text": "I am sending data to our data API and keep getting a response 400 bad request. I have written a C# console app to send some bogus data in before writing the production version.I am following the information found here (https://www.mongodb.com/docs/atlas/api/data-api-resources/#update-a-single-document) which suggests json needs to be sent in the format of the request body.Here is a copy of the json I am sending as the content of the http request which should in theory trigger a new document (with field values changed).However, I receive a 400 error. If I try this directly from Insomnia/Postman I get the same error with some extra detail:“Failed to update document: Invalid or unspecified ‘dataSource’, ‘database’, or ‘collection’ field’”Which makes little sense since these are the first three values of the JSON data. I am also setting the content-type header and apikey header as suggested in the documentation above.What is wrong here?", "username": "David_Hirst" }, { "code": "dataSourceCluster0dataSourcecurl --location --request POST 'https://data.mongodb-api.com/app/<REDACTED>/endpoint/data/v1/action/updateOne' \\\n--header 'Content-Type: application/json' \\\n--header 'Access-Control-Request-Headers: *' \\\n--header 'api-key: <REDACTED>' \\\n--data-raw '{\n \"collection\":\"updatecoll\",\n \"database\":\"db\",\n \"dataSource\":\"RandomClusterName\",\n \"filter\": {\n \"accountSalesforceReference\":\"willnotfindthis\"\n },\n \"update\": {\n \"$set\": {\n \"accountName\":\"Test\",\n \"version\":\"1.0\",\n \"address\": {\n \"line1\":\"10 Bingo Road\",\n \"town\":\"Bingotown\",\n \"county\":\"Bingoshire\",\n \"postcode\":\"B1N G0\",\n \"country\":\"Bingo\"\n },\n \"billingAddress\": {\n \"line1\":\"10 Bingo Road\",\n \"town\":\"Bingotown\",\n \"county\":\"Bingoshire\",\n \"postcode\":\"B1N G0\",\n \"country\":\"Bingo\"\n }\n }\n },\n \"upsert\":true\n}'\n\"Failed to update document: Invalid or unspecified 'dataSource', 'database', or 'collection' field'\"\ndataSourcecurl --location --request POST 'https://data.mongodb-api.com/app/<REDACTED>/endpoint/data/v1/action/updateOne' \\\n--header 'Content-Type: application/json' \\\n--header 'Access-Control-Request-Headers: *' \\\n--header 'api-key: <REDACTED>' \\\n--data-raw '{\n \"collection\":\"updatecoll\",\n \"database\":\"db\",\n \"dataSource\":\"Cluster0\",\n \"filter\": {\n\t\t\"accountSalesforceReference\":\"willnotfindthis\"\n\t},\t \n\t\"update\": {\n\t\t\"$set\":\t{\n\t\t\t\"accountName\":\"Test\",\n\t\t\t\"version\":\"1.0\",\n\t\t\t\"address\": {\n\t\t\t\t\t\"line1\":\"10 Bingo Road\",\n\t\t\t\t\t\"town\":\"Bingotown\",\n\t\t\t\t\t\"county\":\"Bingoshire\",\n\t\t\t\t\t\"postcode\":\"B1N G0\",\n\t\t\t\t\t\"country\":\"Bingo\"\n\t\t\t},\n\t\t\t\"billingAddress\": 
{\n\t\t\t\t\t \"line1\":\"10 Bingo Road\",\n\t\t\t\t\t \"town\":\"Bingotown\",\n\t\t\t\t\t \"county\":\"Bingoshire\",\n\t\t\t\t\t \"postcode\":\"B1N G0\",\n\t\t\t\t\t \"country\":\"Bingo\"\t\t\n\t\t\t}\n\t\t}\n\t},\n\t\"upsert\":true\n}'\n{\"matchedCount\":0,\"modifiedCount\":0,\"upsertedId\":\"646e92ac77e0dbb07408d38f\"}%\n", "text": "Hi @David_Hirst Can you double check your dataSource value? I believe it should be the name of the cluster.For example, I have a cluster named Cluster0, when I use a non-existing cluster name for dataSource:Output (same error you’ve receieved):When using the correct dataSource value:Output:Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you Jason, it was indeed an issue with the data source name. I was mistakenly using the projects name not the clusters name. All working now!", "username": "David_Hirst" }, { "code": "", "text": "Good to know that it was just a simple change required. Thanks for updating the post to confirm David ", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Receive bad request error when sending data to API
2023-05-24T15:30:55.978Z
Receive bad request error when sending data to API
1,397
https://www.mongodb.com/…_2_1024x673.jpeg
[ "swift" ]
[ { "code": "extension CGPoint: CustomPersistableCGPointCGPoint@SceneStorageCGPointRawRepresentableextension CGPoint: RawRepresentable {\n public var rawValue: String {\n guard let data = try? JSONEncoder().encode(self),\n let string = String(data: data, encoding: .utf8)\n else {\n return \"{}\"\n }\n return string\n }\n\n public init?(rawValue: String) {\n guard let data = rawValue.data(using: .utf8),\n let result = try? JSONDecoder().decode(CGPoint.self, from: data)\n else {\n return nil\n }\n self = result\n }\n}\nRawRepresentableCustomPersistable", "text": "As the documentation mentions I have implemented an extension CGPoint: CustomPersistable to persist CGPoint values. So far so good. Additionally I have to persist CGPoint values in SwiftUI’s @SceneStorage. For this I added another CGPoint extension which implements RawRepresentable.Unfortunately these two extensions don’t want to coexist. If I implement the RawRepresentable extension I get an error for the CustomPersistable extension.Any ideas?\nimage1775×1168 83.3 KB\n", "username": "phranck" }, { "code": "", "text": "Two thingsWhen posting, please include the code textually - that way if we need to use it in an answer we won’t have to re-type it. Also, we need a clearer picture of your objects.The PersistedType on CGSize must be a type Realm Supports - what is CGSizeObject?", "username": "Jay" }, { "code": "public class CGPointObject: EmbeddedObject {\n @Persisted var x: Double\n @Persisted var y: Double\n}\n\nextension CGPoint: CustomPersistable {\n public typealias PersistedType = CGPointObject\n public init(persistedValue: CGPointObject) {\n self.init(x: persistedValue.x, y: persistedValue.y)\n }\n public var persistableValue: PersistedType {\n CGPointObject(value: [x, y])\n }\n}\n\nextension CGPoint: RawRepresentable {\n public var rawValue: String {\n guard let data = try? JSONEncoder().encode(self),\n let string = String(data: data, encoding: .utf8)\n else {\n return \"{}\"\n }\n return string\n }\n\n public init?(rawValue: String) {\n guard let data = rawValue.data(using: .utf8),\n let result = try? JSONDecoder().decode(CGPoint.self, from: data)\n else {\n return nil\n }\n self = result\n }\n}\nCGSizeObjectCGSize", "text": "Okay, it seems it’s not possible to edit/update a post, so I write it again.This is the code in question:By adding the last extension I get the error as shown in the screenshot. CGSizeObject is the type Realm uses to persist CGSize.", "username": "phranck" }, { "code": "CGSizeObjectCGSizeCGSizepublic typealias PersistedType = CGSizeObjectCGSizeObject", "text": "You can edit your post by clicking the pencil at the bottomCGSizeObject is the type Realm uses to persist CGSize.Gotcha. However, if you look at your code, the CGSize object PersistedType is CGSizeObjectpublic typealias PersistedType = CGSizeObjectbut CGSizeObject is not defined. Not saying that’s the issue, but for clarity it should be included so we know what it looks like.", "username": "Jay" }, { "code": "CGSizeCGPointCGPointCGSizeCGRect", "text": "Ah, now I see my mistake. It’s the wrong screenshot. It is for CGSize instead CGPoint. However, I made the same extensions for all three types CGPoint, CGSize and CGRect. 
It’s every time the same error for all three.", "username": "phranck" }, { "code": "CGPointCGSizeCGRectpublic typealias PersistedType = CGSizeObjectCGSizeObjectCGSizeObject", "text": "the same extensions for all three types CGPoint, CGSize and CGRectRight and the problem is still the same:public typealias PersistedType = CGSizeObjectCGSizeObject is undefined. We don’t know what that is and there’s a possibility Realm doesn’t know what it is either as it’s not one of the supported types - unless it’s Type Projected as well - but we don’t know that.Please include what CGSizeObject looks like so we can eliminate it as being the issue.", "username": "Jay" }, { "code": "CGPointObjectRawRepresentableCGSizeObjectpublic class CGSizeObject: EmbeddedObject {\n @Persisted var width: Double\n @Persisted var height: Double\n}\nCustomPersistableextension CGSize: CustomPersistable {\n public typealias PersistedType = CGSizeObject\n public init(persistedValue: CGSizeObject) {\n self.init(width: persistedValue.width, height: persistedValue.height)\n }\n public var persistableValue: PersistedType {\n CGSizeObject(value: [width, height])\n }\n}\n", "text": "It’s similar to the CGPointObject of my example from above. It doesn’t matter. The error message is always the same, for all three types when I add the RawRepresentable extension.For the sake of completeness, this is the CGSizeObject:and its belonging CustomPersistable extension:", "username": "phranck" }, { "code": "", "text": "Ok. interesting issue. Something to note from the Type Projections docs is thisThese are protocols modeled after Swift’s built-in RawRepresentable.which led me to this Github Bug ReportCannot use RawRepresentable/OptionSet types with CustomPersistable/FailableCustomPersistable #7612Which seems to be a similar (exact) issue with no resolution.", "username": "Jay" }, { "code": "", "text": "Exactly! The same problem we have.", "username": "phranck" } ]
CustomPersistable alongside RawRepresentable
2023-05-24T15:08:48.133Z
CustomPersistable alongside RawRepresentable
759
null
[ "field-encryption" ]
[ { "code": "ollan@Ollans-Air ~ % mongod --dbpath /Users/ollan/mongodb/data/db\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.396+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.408+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.408+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.411+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.411+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.411+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.411+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.411+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":24758,\"port\":27017,\"dbPath\":\"/Users/ollan/mongodb/data/db\",\"architecture\":\"64-bit\",\"host\":\"Ollans-Air\"}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.411+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23352, \"ctx\":\"initandlisten\",\"msg\":\"Unable to resolve sysctl {sysctlName} (number) \",\"attr\":{\"sysctlName\":\"hw.cpufrequency\"}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.411+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23351, \"ctx\":\"initandlisten\",\"msg\":\"{sysctlName} unavailable\",\"attr\":{\"sysctlName\":\"machdep.cpu.features\"}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.411+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.6\",\"gitVersion\":\"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.411+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"22.5.0\"}}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.411+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"storage\":{\"dbPath\":\"/Users/ollan/mongodb/data/db\"}}}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.412+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, 
\"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.412+01:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Address already in use\"}}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.412+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for 
shutdown\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-05-25T11:41:55.413+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":48}} `\n", "text": "Hey all I have recently joined Mongodb and have been trying to run it in my server for the past week and I am currently stuck here every time I try to run the mongod server. I don’t know what do to do.\n#ops-admin", "username": "Ollan_Muza" }, { "code": "{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Address already in use\"}", "text": "From you log:{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Address already in use\"}it appears you are trying to start a second mongod at the same address.The fact that your title says mongod terminal, I suspect that you are trying to start mongod rather than mongosh.", "username": "steevej" } ]
Mongod terminal keeps crashing
2023-05-25T10:48:02.514Z
Mongod terminal keeps crashing
820
null
[ "next-js" ]
[ { "code": "import { MongoClient, Db } from \"mongodb\";\n\nlet db: Db;\nlet uri = process.env.NEXT_PUBLIC_MONGODB_URI;\n\nexport const connectToDatabase = async (): Promise<void> => {\n const url = uri;\n const dbName = \"taftafemails\";\n\n const client = new MongoClient(url, {\n useUnifiedTopology: true,\n } as any);\n\n try {\n await client.connect();\n console.log(\"Connected to the MongoDB server\");\n\n db = client.db(dbName);\n } catch (error) {\n console.error(\"Error connecting to the MongoDB server:\", error);\n }\n};\n\nexport const getDatabase = (): Db => {\n if (!db) {\n throw new Error(\"Database connection not established\");\n }\n return db;\n};\n\nimport { NextApiRequest, NextApiResponse } from \"next\";\nimport { connectToDatabase, getDatabase } from \"database/mongo\";\n\nexport default async function handler(\n req: NextApiRequest,\n res: NextApiResponse\n) {\n if (req.method === \"POST\") {\n try {\n const { name, email } = req.body;\n\n await connectToDatabase();\n\n const db = getDatabase();\n\n const collection = db.collection(\"contacts\");\n\n const existingContact = await collection.findOne({ email });\n if (existingContact) {\n res.status(409).json({\n error: \"Cette adresse existe déjà, veuillez choisir un autre\",\n });\n return;\n }\n\n await collection.insertOne({ name, email });\n\n res.status(200).json({\n message: \"Vous êtes maintenant abonnés, on vous tient informés !\",\n });\n } catch (error) {\n console.error(\"Error submitting contact form:\", error);\n console.dir(error);\n res.status(500).json({ error: \"Internal Server Error\" });\n }\n } else if (req.method === \"GET\") {\n res.status(200).json({ message: \"Ceci est le formulaire d'inscription\" });\n } else {\n res.status(405).json({ error: \"Méthode non autorisée\" });\n }\n}\n\ntry {\n const response = await fetch(\"/api/contact\", {\n method: \"POST\",\n headers: {\n \"Content-Type\": \"application/json\",\n },\n body: JSON.stringify({ name, email }),\n });\n ...\n", "text": "I have a Next.js app project that allows storing names and emails using a MongoDB database with MongoDB Atlas.When I enter a name and an email on my local machine, I don’t encounter any errors, and these data (name and email) are stored perfectly in the database. I can see them in a collection that I created on MongoDB Atlas.However, when I try to perform the same operations while being online, I receive an error message saying “Internal Server Error.”My question is: Why is the database able to store the entered data on my local machine but unable to do so when I’m online?So this my configFile Database :This is my file for api routes : pages/api/contact.ts:This is my Form file to communucate with api/route :Please help me ", "username": "Diop_Nikuze" }, { "code": "", "text": "My question is: Why is the database able to store the entered data on my local machine but unable to do so when I’m online?What do you mean by “when I’m online?”It sounds like you can’t connect when the Nextjs app is hosted somewhere else besides your local. I would verify that 1. the IP address of the other location is whitelisted on Atlas 2. 
There are no FW’s blocking the connectivity (if it’s a VM or other hosted server)", "username": "tapiocaPENGUIN" }, { "code": "", "text": "the IP address of the other location is whitelisted on AtlasSo in the “Add IP Whitelist Entry”, I select the “Allow Access From Anywhere” which is 0.0.0.0/0, please I’m not familiar with MongoDB, please help ", "username": "Diop_Nikuze" }, { "code": "", "text": "Okay so the IP is whitelisted which is good. So MongoDB isn’t blocking the IP.If you have the mongosh installed or can install it whichever machine is having issues connecting you can try to connect to your atlas cluster via mongosh. This will rule out any connectivity issues if it can connect.", "username": "tapiocaPENGUIN" } ]
Nextjs and MongoDB Internal Server Error
2023-05-24T20:24:16.866Z
Nextjs and MongoDB Internal Server Error
1,332
null
[ "dot-net" ]
[ { "code": "", "text": "The C# BSON creates a ObjectId value with a call to System.Diagnostics.Process.GetCurrentProcess().Id. The Blazor Webassembly framework does not support calls to Process.GetCurrentProcess() (see here and here)", "username": "Isaac_Borrero" }, { "code": "", "text": "Hi, @Isaac_Borrero,Welcome to the MongoDB Community Forums. Thank you for filing CSHARP-4551 as a feature request. This is a reasonable request and we are discussing when we can schedule this work. Please follow the linked ticket for updates.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Is there any workaround for that? I’m working on a blazor app and wanted to use mongo db with it but the MongoDB.Bson.ObjectId throws an exception when I want to use it.", "username": "Kevin_Schafer" }, { "code": "", "text": "@James_Kovacs @Tyler_KayeWouldn’t this be a fit for the C# SDK for Realm?I think the MongoDB Realm C# SDK may work, I’m uncertain if it really works with Blazor yet, but it does work with MAUI which has a lot of the same components.It might be worth a try to see?It looks like it should be fixed for MDB usage, but it done actually throw an exception using the driver. I haven’t dived that deep with Blazor on the SDK though, and am curious how Realm can maybe solve this?", "username": "Brock" }, { "code": "", "text": "I’ve seen that there is a fix for that on github already on master, I’ve downloaded it to continue working. Thanks for your fix, looking forward to the update.", "username": "Kevin_Schafer" }, { "code": "MongoDB.Bson", "text": "The Realm C# SDK uses the MongoDB.Bson package for BSON support. So it would have the same problem as the MongoDB .NET/C# Driver. As noted, we have a fix that will be released in the coming weeks. I don’t have a timeline on exactly when, but I’m glad that you’re able to work around this by compiling from source.", "username": "James_Kovacs" }, { "code": "", "text": "I actually didn’t know that about the BSON package, I always thought they were different.Thank you for explaining that actually.", "username": "Brock" }, { "code": "", "text": "I would also like to request that Bson-related attributes be moved into a separate library without dependencies. This adds 1.4 MB to the website build. It accounts for 26% of the size of my WASM website.To share data classes with BsonIdAttribute and the like requires MongoDB.Bson.dll (496 KB) and also MongoDB.Driver.Core.dll (941 KB).Even with br compression, that’s 155 KB and 280 KB, respectively, added to the website download.", "username": "Josh_Brown" }, { "code": "", "text": "I just downloaded MongoDB.Bson 2.19.2 → I can generate an ObjectId now. Thank you for getting this done. Much appreciated!!!", "username": "Isaac_Borrero" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB C# BSON not supported in Blazor WebAssembly
2023-02-28T02:02:52.352Z
MongoDB C# BSON not supported in Blazor WebAssembly
1,258
null
[ "aggregation" ]
[ { "code": "", "text": "‘’‘db.Repository.aggregate([\n{\n$lookup: {\nfrom: “hello”,\nlocalField: “repository_id”,\nforeignField: “id”,\nas: “hello_data”\n}\n},\n{\n$unwind: “$hello_data”\n},\n{\n$project: {\n_id: 0,\nrepository_id: “$repository_id”,\ncapacity: “$capacity”,\nhello_id: “$hello_data.id”,\nhello_name: “$hello_data.name”\n}\n}\n])\ndb.Repository.aggregate([\n{\n$lookup: {\nfrom: “hello”,\nlocalField: “Repository_id”,\nforeignField: “id”,\nas: “hello_data”\n}\n},\n{\n$unwind: “$hello_data”\n},\n{\n$project: {\n_id: 0,\nrepository_id: “$Repository_id”,\ncapacity: “$capacity”,\nhello_id: “$hello_data.id”,\nhello_name: “$hello_data.name”\n}\n}\n])’‘’\nthis is my code and it outputs this error.\nclone(t={}){const r=t.loc||{};return e({loc:new Position(\"line\"in r?r.line:this.loc.line,\"column\"in r?r.column:……)} could not be cloned.\nWhat’s wrong here?", "username": "Danial_Turisbek" }, { "code": "", "text": "Most likely a syntax error in your code. However, the way you published it makes it impossible for us to help you.", "username": "steevej" } ]
Could not be cloned
2023-05-24T19:57:50.146Z
Could not be cloned
782
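One concrete thing to check, following steevej's "most likely a syntax error" reply above: the pipeline as pasted uses curly quotes, which are not valid in JavaScript. Re-typed with straight quotes, the first pipeline would look like the sketch below (collection and field names copied from the post as-is); whether that alone resolves the "could not be cloned" message depends on where the query is being run.

```javascript
db.Repository.aggregate([
  {
    $lookup: {
      from: "hello",
      localField: "repository_id",
      foreignField: "id",
      as: "hello_data"
    }
  },
  { $unwind: "$hello_data" },
  {
    $project: {
      _id: 0,
      repository_id: "$repository_id",
      capacity: "$capacity",
      hello_id: "$hello_data.id",
      hello_name: "$hello_data.name"
    }
  }
])
```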
null
[ "aggregation", "node-js", "mongodb-shell" ]
[ { "code": "const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');\nconst {MongoClient} = require('mongodb');\nconst uri = \"mongodb+srv://localhost/mydb?retryWrites=true\";\n\nbar = 'global info';\n\nif (isMainThread) {\n\tconst threads = new Set();\n\tthreads.add(new Worker(__filename, { workerData: { foo: '1', bar }}));\n\tthreads.add(new Worker(__filename, { workerData: { foo: '2', bar }}));\n\tfor (let worker of threads) {\n\t\tworker.on('error', (err) => { throw err; });\n\t\tworker.on('exit', () => {\n\t\t\tthreads.delete(worker);\n\t\t\tif (threads.size === 0) {\n\t\t\t\tconsole.log('done');\n\t\t\t}\n\t\t});\n\t\tworker.on('message', (msg) => {\n\t\t\tconsole.log('Worker Message: ', msg);\n\t\t});\n\t}\n} else {\n\tasync function main(){\n\t\tconst client = new MongoClient(uri);\n\n\t}\n\tmain().catch(console.error);\n\tparentPort.postMessage(workerData);\n}\nmongosh --quiet mydb ./testWorkerThreads.js\n\nUncaught:\nError: Cannot find module 'mongodb'\nRequire stack:\n- ./testWorkerThreads.js\nUncaught:\nError: Cannot find module 'mongodb'\nRequire stack:\n- ./testWorkerThreads.js\ndone\n", "text": "I’m porting some old aggregation scripts that used ScopedThread from parallelTester.js to split mapReduce operations to multiple threads to make more effective use of the available cores. The scripts are run directly on the mongo shell.I have rewritten that threading part with NodeJS Worker; I am hitting a roadblock there, though, as the threads spawned in that way don’t inherit any of the mongosh environment. So neither db nor connect() are available. I cannot pass db as workerData either, as this cannot be cloned. If I install mondogb via npm, I can require it on the MainThread, but the children cannot find any npm modules (tried npm install with and without -g).I’m just about to give up and rewire the whole thing so as to not try to run in parallel threads at all, but I was hoping that someone here might have has some success using Worker inside mongosh?Here’s some mockup code as an example:Running this yields two error messages, one for each of the threads:Using “db” inside the threads (which I’d prefer, as I wouldn’t need new connections) gets me a similar result - db is not defined.Any ideas?", "username": "Markus_Wollny" }, { "code": "worker_threadsrequire('mongodb')require.resolve('mongodb')npm install mongodbmapReduce", "text": "@Markus_Wollny Right, worker_threads inside mongosh just refers to the Node.js API, and when you use one you’ll get a plain Node.js worker thread, without mongosh APIs.It is odd that require('mongodb') works in the main thread but not in Worker threads. That might be worth investigating; e.g. what does require.resolve('mongodb') look like in the main thread, and how does that related to the path of the script you’re running? (If I locally run npm install mongodb and then run a similar test script, it works just fine.)That being said, it might be worth taking a step back and looking at the bigger picture:", "username": "Anna_Henningsen" }, { "code": "const {MongoClient} = require('/usr/local/lib/node_modules/mongodb');\n", "text": "Thank you for your response. As I said, these MR-tasks are very old and are running on a server now with a lot more I/O than when they were originally deployed. These MRs are doing analytics aggregations for a couple of websites, each thread is used to process a bundle of one or more sites. 
Parallel processing was originally used to avoid having to set up separate tasks for each bundle while making sure that the bundles would make use of the multiple cores. The segmentation for these tasks is somewhat different from parallelization within an aggregation such as is provided for example by $facet in an aggegration pipeline.But again, I guess you’re correct on both counts - the threading here is probably more a case of premature optimization and I’ll likely get rid of it. Regardless, I’d simply like to understand what’s going on here, otherwise this would feel too much like surrender.Your suggestion of trying require.resolve has helped tremendously. I now understand that the main thread will always search the module in the current directory, whereas the children will look for the module in the directory of the script. Easy workaround is installing the module globally and referencing the full path in the require likeNo more guesswork in either case. I just checked with require.resolve() at this point, and that does look promising, but I will probably just remove the threading altogether. I just wanted to understand what’s going on - there might still be a more valid use case, so this solution might still come in handy.So thank you very much!", "username": "Markus_Wollny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Worker in mongosh
2023-05-24T14:37:38.578Z
Worker in mongosh
681
null
[ "dot-net" ]
[ { "code": "", "text": "Well, in older versions of the driver I remember that it was possible to use untyped collections, which just worked with BsonDocument.I have had this issue several times now that I’d like to implement a generic solution for common issues which require me to collect a set of for example FilterDefinitions without generic context.For example I’d like to update multiple entities which are of similiar schema (ensured through some interfaces). The code to update these is always the same, various checks and other logic happening. In the end I generate a list of Update statements that all change a specific entity type. But since this code should be used to also work on other entity types, I cannot use generics in a meaningful way.I can outsource the logic which generates my filter definitions and update definitions, but since these have no non-generic base class, I cannot use them in a non-generic context.Why was this changed? Is there a way around? All I need is a non-generic base. Having all classes typed is a huge issue for a lot of cases I’ve had so far. We’re working a lot with dynamic data structures and in this case we have plenty of issues with this typing. We basically have to implement the same code for every single entity type as we cannot go with a generic (ironically inverse) approach here.So any workaround? Any reason why this was changed? Any way it’s considerable in the future to have untyped base classes instead of “abstract Class” bases?Thanks!", "username": "Manuel_Eisenschink" }, { "code": "var collection = db.GetCollection<Person>(\"people\");\ndb.GetCollection(\"people\")var collection = db.GetCollection<BsonDocument>(\"people\");\nBsonDocumentvar filter = Builders<BsonDocument>.Filter.Empty;\nBsonDocument", "text": "Hi, @Manuel_Eisenschink,Thank you for your question. If I am understanding correctly, you are wondering why we require a generic type argument for a collection. For example:In particular you want to work with untyped data or you don’t care about the particular type and would like to call db.GetCollection(\"people\") instead.While our 1.X API allowed non-generic access to collections, we chose a different API for 2.X. We still support untyped access to collections via:You can create filter and update definitions using BsonDocument as well.If using BsonDocument as your generic type parameter doesn’t work for your use case, please provide some sample code to illustrate what you are attempting to accomplish so that we can discuss further.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "ArrayFilterDefintionArrayFilterDefinition<MySpecificClass>", "text": "Hi James,I am aware that untyped data access is possible with BsonDocument as type. The problem in my case is, that generics come with certain constraints when writing dynamic code. What I’d require is that I can work with typed classes but execute them untyped.A good example is the ArrayFilterDefintion class. It’s the only class I found that has a non-generic base class. So I can create a typed instance i.e. ArrayFilterDefinition<MySpecificClass> and cast it to it’s base class. The driver doesn’t care about the generic anymore. This way I can write very dynamic methods which get their input from more specific methods that know their entity type they’re working on.I certainly see the benefits of generics, but it’d be great if it was still possible to use non-generic variants they derive from. 
So far this is only possible for ArrayFilterDefintion, which puzzles me as to why exactly this schema was not continued elsewhere.Is there any reason for this? Probably complexity handling things, but since you started with this approach in V1, I guess some of the work was already done. Having non-generic base classes would definitely give greater flexibility.", "username": "Manuel_Eisenschink" } ]
Non-generic classes for better dynamic access
2023-05-17T14:49:56.981Z
Non-generic classes for better dynamic access
703
null
[ "unity" ]
[ { "code": "", "text": "Dear support,We are testing RealmDB with unity and need to sync the data from unity app to the server. Currently we are using Firebase. Please advise as how this can be done.Cheers", "username": "Tien_Nguyen5" }, { "code": "", "text": "Have you read the Quick Start with Unity and the Add Device Sync to an App docs? If you tried following the instructions there and have questions, can you post them here?", "username": "nirinchev" }, { "code": "", "text": "Thanks Nikola, I have read the quick start and installed realm db by following the instructions that you provided but couldn’t implement the add-sync-to-app and I don’t see there any clear instructions on how to add sync in unity.", "username": "Tien_Nguyen5" }, { "code": "", "text": "Are you asking how to add the Realm package itself or how to sync data? If it’s the latter, the instructions for Unity are exactly the same as the instructions for regular .NET applications - you create an App instance, login a user, create a FlexibleSyncConfiguration, and finally, open a Realm.Can you share the code/project you’re working on and add some comments in the place where you want to add sync and I’d be happy to take a look and try and point you in the right direction.", "username": "nirinchev" }, { "code": "", "text": "The Realm package is successfully installed in unity. The question is how to sync the data that is saved in the RealmDB to the server and vice versa. Would be great to have concrete examples of this kind of common operation with RealmDB in unity and a backend server.", "username": "Tien_Nguyen5" } ]
RealmDB UNITY3D sync?
2023-05-21T12:26:27.193Z
RealmDB UNITY3D sync?
833
null
[ "dot-net", "production" ]
[ { "code": "", "text": "This is a patch release that addresses some issues reported since 2.19.1 was released.The list of JIRA tickets resolved in this release is available at CSHARP JIRA project.Documentation on the .NET driver can be found here.There are no known backwards breaking changes in this release.", "username": "Robert_Stam" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
.NET Driver 2.19.2 Released
2023-05-25T04:18:40.947Z
.NET Driver 2.19.2 Released
794
null
[]
[ { "code": "", "text": "I deleted Project 0 and only have MDB_EDU left. Followed all instructions and still can not check without getting “The users collection document count was incorrect. Please try again.”Any ideas?", "username": "Denny_Doan" }, { "code": "MDB_EDUmdbuser_test_dbusers", "text": "Hi @Denny_Doan,Welcome to the MongoDB Community forums Ensure you are working on the correct project, i.e., MDB_EDU. Then, reload the sample dataset and generate the mdbuser_test_db database. Lastly, create your users collection and insert your first document.If the issue persists please share the link to the lab, a screenshot, and the workflow you followed.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Lesson 3: Lab 1 Can Not Check
2023-05-24T21:28:18.207Z
Lesson 3: Lab 1 Can Not Check
700
null
[ "queries", "indexes" ]
[ { "code": "", "text": "Hello,Reading the documentation on covered queries, I see that “Geospatial indexes cannot cover a query.”Why is that no possible?", "username": "Omer_Toraman1" }, { "code": "", "text": "Hi @Omer_Toraman1 ,Although I don’t know the official technical limitation, I assume that the way this index index the data it does not allow to store the raw data sets of the indexed fields , therefore just the index cannot setisfy a covered query and documents needs to be accessed.Maybe I am missing the point and someone more expert can correct me.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I’m curious about this too, any one know exactly why?", "username": "recordable0711" } ]
Why can't geospatial indexes cover a query?
2021-07-17T11:02:34.982Z
Why can’t geospatial indexes cover a query?
2,246
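One way to see the documented limitation from the thread above in practice (collection and field names below are illustrative): even when the projection is limited to the indexed field, the query plan for a 2dsphere query still contains a FETCH stage, i.e. the documents are read and the index alone does not cover the query.

```javascript
db.places.createIndex({ location: "2dsphere" });

db.places.find(
  {
    location: {
      $geoWithin: {
        // centre point [longitude, latitude], radius of ~5 miles in radians
        $centerSphere: [[-73.93, 40.82], 5 / 3963.2]
      }
    }
  },
  { _id: 0, location: 1 } // project only the indexed field
).explain("executionStats");
// the winningPlan still includes a FETCH stage above IXSCAN, so the query is not covered
```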
https://www.mongodb.com/…24223942a903.png
[]
[ { "code": "systemctl status mongodsystemctlsudo apt-get purge mongodb-org*/var/log/mongodb/var/lib/mongodbmongomongodbStill the same problem", "text": "I am trying to install and start a MongoDB on a fresh installation of Ubuntu 18.04.\nI followed the setup tutorial and did the following steps:When I run systemctl status mongod I get the following error:\nimage820×627 31.7 KB\nI tried the following things to fix this error:I am really stuck here, I hope someone has an idea because MongoDB is essential to the stuff I’m working on arm…", "username": "test_test6" }, { "code": "", "text": "Needs a later version of Arm than you have.You will be limited to v4.4 unless you compile from source.Matt_Kneiser recently posted regrading this along with his repo and binaries.", "username": "chris" }, { "code": "echo \"mongodb-org hold\" | sudo dpkg --set-selections\necho \"mongodb-org-server hold\" | sudo dpkg --set-selections\necho \"mongodb-org-shell hold\" | sudo dpkg --set-selections\necho \"mongodb-org-mongos hold\" | sudo dpkg --set-selections\necho \"mongodb-org-tools hold\" | sudo dpkg --set-selections\necho \"mongodb-org-database-tools-extra hold\" | sudo dpkg --set-selections\n", "text": "I was playing about with installs on Pi’s today:4.4.18 is the most up-to-date release that ran on the Pi’s I have available (Pi3B, Pi4B and an Orange Pi 3LTS), i.e. on the ARMv8.0 microarchitecture. MongoDB 4.4.19 and later, 5.0 and later and 6.0 and later all need the ARMv8.2-A microarchitecture. There’s no Raspberry Pi available with that microarchitecture as far as I’m aware, but the Orange Pi 5 does have it so should - in theory - run MongoDB 5 and 6.4.4.18 will not install on Ubuntu 22.04LTS - there’s missing libssl dependencies. Use Ubuntu 20.04LTS. Alternatively, I did find the Ubuntu packages for 4.4.18 installed without any apparent issue on the latest (Debian Bullseye based) Pi OS distributions. I haven’t dug too far into this and the possibility of incompatibilities with this combination exists so use at your own risk.You’ll want to pin the installed packages to 4.4.18:4.4.18 was released in November 2022 and probably supports most of the features people would want in a SBC based installation. If you do need access to newer versions, a free-tier Atlas instance is always an option.", "username": "Graeme_Robinson" } ]
MongoDB Community 6.0.5 Illegal instruction (core dumped) Ubuntu 18.04 on Cortex-A72 aarch64
2023-04-27T09:17:40.210Z
MongoDB Community 6.0.5 Illegal instruction (core dumped) Ubuntu 18.04 on Cortex-A72 aarch64
2,594
null
[ "data-api" ]
[ { "code": "", "text": "Hello, I’m trying to query my Mongo DB on Atlas through the Data API, from the browser (with an Angular app) but I’m blocked by CORS.\nI guess it’s by design and the Data API is supposed to be consumed from server apps, not client apps, as the UI in the admin dashboard doesn’t show anything that makes me think I can configure CORS, but just in case there’s a way around this I’d be glad to hear it.", "username": "Andrea_Bertoldo" }, { "code": "", "text": "Hey Andrea - you’re correct, currently it’s blocked by CORS because we don’t yet offer additional authentication and authorization options. Feel free to request it on Atlas: Top (1026 ideas) – MongoDB Feedback Engine and choose the ‘Data API’ category so we can gauge interest.Other ways to get around this is to wrap it in a backend or another managed service like Lambda or put an API gateway in front of it.", "username": "Sumedha_Mehta1" }, { "code": "data-api", "text": "Hi folks,For anyone who has a similar use case where CORS or a browser-only application environment currently prevents using the Data API (Preview), please watch/upvote the feature request to Support CORS from the Data API (MongoDB Feedback Engine).This feature request is currently under review, so any additional context on desired use case support would be helpful – start a new forum discussion using the data-api tag.In the interim, some alternative approaches to consider include:Putting an API gateway or serverless function in front of the Data API calls (as mentioned earlier in this topic).Using the MongoDB Realm GraphQL API:If you need an API from MongoDB Atlas right now that is not blocked by CORS, I suggest using the GraphQL API that is offered by MongoDB Realm. The Steps to generate this API involve -Using the Realm Web SDK.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi –Just wanted to hop on here to update this thread and announce that we now support client side access for the Data API using Bearer Authentication. This will allow you to get around CORS errors and call the Data API from any platform that supports HTTPS, which includes web browsers. Check out our docs page that goes over how to authenticate Data API requests for web browser access.If you would like to share any additional context on your use case or general feedback regarding this feature, please feel free to email me at [email protected].", "username": "Kaylee_Won" } ]
Mongo Data API and CORS
2021-12-04T13:16:38.788Z
Mongo Data API and CORS
11,970
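To illustrate Kaylee's last point in the thread above about Bearer Authentication: a browser-side sketch of calling the Data API once you already hold an App Services access token (for example from a Realm Web SDK login). The app ID, cluster, database, collection and filter below are placeholders, not values from the thread.

```javascript
const APP_ID = "<your-app-id>"; // placeholder
const ENDPOINT = `https://data.mongodb-api.com/app/${APP_ID}/endpoint/data/v1/action/findOne`;

async function findOneFromBrowser(accessToken) {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // bearer token instead of an api-key header, which is what allows
      // the call to be made directly from a web browser
      "Authorization": `Bearer ${accessToken}`,
    },
    body: JSON.stringify({
      dataSource: "Cluster0",   // placeholder cluster name
      database: "mydb",
      collection: "items",
      filter: { active: true },
    }),
  });
  if (!res.ok) throw new Error(`Data API error: ${res.status}`);
  return res.json();
}
```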
https://www.mongodb.com/…_2_1024x550.jpeg
[ "node-js", "replication", "atlas-cluster" ]
[ { "code": "mongodb://username:[email protected]:27017,shard.mongodb.net:27017,shard.mongodb.net:27017/admin?ssl=true&replicaSet=atlas-shard0&readPreference=primary&connectTimeoutMS=10000&authSource=admin&authMechanism=SCRAM-SHA-1", "text": "\nScreen Shot 2022-08-10 at 12.18.13 PM1920×1033 92.5 KB\n\nI received the above error after trying to connect to my mongo cluster via nodejs.Here is my uri string with credentials omitted:\nmongodb://username:[email protected]:27017,shard.mongodb.net:27017,shard.mongodb.net:27017/admin?ssl=true&replicaSet=atlas-shard0&readPreference=primary&connectTimeoutMS=10000&authSource=admin&authMechanism=SCRAM-SHA-1I also tested this when whitelisting all IP addresses and still getting the error. What am i doing wrong here?", "username": "Daniel_Chicchon" }, { "code": "", "text": "were you able to find any solution", "username": "santhosh_h1" }, { "code": "", "text": "@Stennie_X can you please help on this issue?", "username": "santhosh_h1" }, { "code": "", "text": "@Stennie_X @Piti.Champeethong can you help on this… i’m working on mongoose with lambda function it would work first time but after sometime would get this error?", "username": "santhosh_h1" }, { "code": "", "text": "Welcome to the MongoDB Community @santhosh_h1 !Please share more details about your environment:You may find this tutorial a helpful reference: Write A Serverless Function with AWS Lambda and MongoDB | MongoDBIf your Lambda function works for a while and eventually runs into a connection error, I think this suggests you may have cached a stale connection and perhaps need some retry logic.Regards,\nStennie", "username": "Stennie_X" }, { "code": " let conn = null;\n\n// module.exports = conn;\nexports.connect = async function connect(url = MONGO_URL) {\n\ttry {\n\t\tif (conn == null) {\n\t\t\tconn = mongoose\n\t\t\t\t.connect(url, {\n\t\t\t\t\tserverSelectionTimeoutMS: 30000,\n\t\t\t\t})\n\t\t\t\t.then(() => mongoose);\n\t\t\tawait conn;\n\t\t\tconsole.log('mongodb connected successfully');\n\t\t} else {\n\t\t\tconsole.log('reused the connection');\n\t\t}\n\n\t\treturn conn;\n\t} catch (e) {\n\t\tconsole.error(e);\n\t}\n};\n\n", "text": "this is my connection code usedsorry for the delayed response", "username": "santhosh_h1" }, { "code": "", "text": "Hey @santhosh_h1 did you find a solution? Facing same issue since a couple of days!\nThanks", "username": "Rasool_Khan" }, { "code": "", "text": "I have not got any solution", "username": "santhosh_h1" }, { "code": "", "text": "hey there? any solution?", "username": "francisco_Innocenti" } ]
Unable to connect to database: ReplicaSetNoPrimary Error
2022-08-10T19:21:57.483Z
Unable to connect to database: ReplicaSetNoPrimary Error
4,164
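Since the thread above is still open: one possible shape of the "retry logic for a stale cached connection" that Stennie mentions, adapted to the mongoose snippet santhosh_h1 posted. The timeout, attempt count and ping-based health check are assumptions, not a verified fix for the ReplicaSetNoPrimary error.

```javascript
const mongoose = require("mongoose");

let conn = null;

async function connectWithRetry(url, attempts = 3) {
  for (let i = 1; i <= attempts; i++) {
    try {
      if (conn == null) {
        conn = await mongoose.connect(url, { serverSelectionTimeoutMS: 5000 });
      }
      // cheap health check so a stale connection cached across Lambda
      // invocations is detected instead of being reused blindly
      await mongoose.connection.db.admin().ping();
      return conn;
    } catch (err) {
      console.error(`connect attempt ${i} failed:`, err.message);
      conn = null; // drop the cached connection and start over
      await mongoose.disconnect().catch(() => {});
    }
  }
  throw new Error("could not establish a MongoDB connection");
}
```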
https://www.mongodb.com/…842872f9efe7.png
[ "flutter" ]
[ { "code": "flutter pub run realm generateimport 'package:realm/realm.dart'; // import realm package\n\npart 'app.g.dart'; // declare a part file.\n\n@RealmModel() // define a data model class named `_Car`.\nclass _Car {\n late String make;\n\n late String model;\n\n int? kilometers = 500;\n}\n\n", "text": "I followed the basic example, but when I run the command flutter pub run realm generate, it can’t run forever (about 30 minutes). Please help me. Thank you.Ran on: Dell | Flutter 3.7.12 | Dart 2.19.6 | realm 1.0.3\nWhat I’ve done:Repro StepsCode Snippet\n/lib/app.dart", "username": "Minh_Quang_H_Vu" }, { "code": "lib/app.dart❯ time dart run realm generate\nBuilding package executable... \nBuilt realm:realm.\n[INFO] Generating build script...\n[INFO] Generating build script completed, took 149ms\n\n[INFO] Initializing inputs\n[INFO] Reading cached asset graph...\n[INFO] Reading cached asset graph completed, took 30ms\n\n[INFO] Checking for updates since last build...\n[INFO] Checking for updates since last build completed, took 476ms\n\n[INFO] Running build...\n[INFO] 1.5s elapsed, 0/1 actions completed.\n[INFO] Running build completed, took 1.6s\n\n[INFO] Caching finalized dependency graph...\n[INFO] Caching finalized dependency graph completed, took 18ms\n\n[INFO] Succeeded after 1.6s with 1 outputs (1 actions)\n\n\n________________________________________________________\nExecuted in 3.73 secs fish external\n usr time 4.45 secs 0.09 millis 4.45 secs\n sys time 0.82 secs 1.64 millis 0.82 secs\npubspec.yam", "text": "Adding your model to a freshly created flutter project in the file lib/app.dart and running generateSo a little under 4s on my M1 Max. It shouldn’t hang like you describe.Could you share what output you do see, as well as how your pubspec.yaml looks?", "username": "Kasper_Nielsen1" }, { "code": "flutter doctor -v", "text": "Also, what is the output of flutter doctor -v?", "username": "Kasper_Nielsen1" }, { "code": "name: flutter_application_4\ndescription: A new Flutter project.\n\npublish_to: 'none' # Remove this line if you wish to publish to pub.dev\n\n\n\nversion: 1.0.0+1\n\nenvironment:\n sdk: '>=2.19.6 <3.0.0'\n\n\ndependencies:\n flutter:\n sdk: flutter\n\n\n cupertino_icons: ^1.0.2\n realm: ^1.0.3\n build_runner: ^2.3.3\n\ndev_dependencies:\n flutter_test:\n sdk: flutter\n realm_generator: ^1.0.3 \n\n \n flutter_lints: ^2.0.0\n\n\nflutter:\n\n \n uses-material-design: true\n\n \n", "text": "", "username": "Minh_Quang_H_Vu" }, { "code": "", "text": "\nScreenshot_104886×907 33.9 KB\n", "username": "Minh_Quang_H_Vu" }, { "code": "", "text": "When I run the command there is no output and it just hangs, even though I have created a brand new project\nScreenshot_1051392×729 47.6 KB\n", "username": "Minh_Quang_H_Vu" }, { "code": "❯ flutter pub run realm generate\npub finished with exit code 255\n", "text": "On Mac I don’t hang with that combination, but I do get a failure on exit.I’ll need to investigate further.", "username": "Kasper_Nielsen1" }, { "code": "", "text": "I look forward to hearing from you", "username": "Minh_Quang_H_Vu" }, { "code": "realm_generator", "text": "This is a bit puzzling. Can I ask you to update to latest stable (3.10.1) and remove the realm_generator dev dependency (Realm will pull it in as normal dependency).", "username": "Kasper_Nielsen1" }, { "code": "", "text": "Thank you for your help. The problem has been resolved. The problem was caused by my network. 
I solved using VPN 1.1.1.1 and the command flutter pub run realm generate worked.", "username": "Minh_Quang_H_Vu" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can’t generate RealmModel in Flutter
2023-05-24T14:02:03.540Z
Can’t generate RealmModel in Flutter
979
null
[ "python", "atlas-cluster" ]
[ { "code": "", "text": "Hi - I recently seemingly randomly came across this error attempting to access my MongoDB using PyMongo.(xxx.xxx replacing the db host name, which I have triple checked matches what’s posted under the database connect menu)<ServerDescription (xxxx.xxxx.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘xxxx.xxxx.mongodb.net:27017: [Errno -3] Temporary failure in name resolution’)>Everything was working smoothly until roughly 6 hours ago. No changes were made whatsoever, so I am unclear as to what is suddenly causing this error.I have tried downgrading, and upgrading, exact same error appears. I double checked the hostWould greatly appreciate it if someone could point me in the right direction!Thank you!", "username": "Cheshire_N_A" }, { "code": "", "text": "Did you:", "username": "santimir" }, { "code": "", "text": "Thanks for the reply.I have tried connecting from another client and works fine. I suppose this is an issue with Replit. It seems it is a known issue without a definitive fix yet:I’m now getting mongodb errors again (node.js), with Internal Server Error and Unknown system error -122. Looks like this issue wasn’t fully fixed and a timeframe on when this could be resolved would be great.I have spoken to Atlas support and confirmed there are no issues on that side. I will continue trying to get ahold of Replit support in this case.Thanks again!", "username": "Cheshire_N_A" }, { "code": "", "text": "hey @Cheshire_N_AI am having similar issue using AWS Lambda.Could you figurate out what it was or how to fix it?regards", "username": "francisco_Innocenti" } ]
Mongodb.net:27017: [Errno -3]
2023-01-06T03:31:01.172Z
Mongodb.net:27017: [Errno -3]
1,574
null
[ "aggregation", "queries", "atlas-search", "atlas" ]
[ { "code": "db.getCollection(\"collection\").aggregate([{\"$search\":{\"index\":\"fulltext\",\"phrase\":{\"path\":[\"headline\",\"subtitle\",\"fulltext\"],\n \"query\":[\"ON TIME\",\"FLIGHT\"],\"slop\":10}}}] )\n", "text": "I am trying to use slope MongoDB search, I am wondering if I can use it with filter.\nI have huge collection data where if I can able to filter with filed - date and clientid and use slope in it, but I found out I cannot use compound with slope.In below query it is finding the phrase in whole collection, but I think If I found a solution to filter out that will be reduce a lot of time.", "username": "Utsav_Upadhyay2" }, { "code": "slopphrase", "text": "slop works with the phrase query operator, which itself can work as a clause anywhere that query operators can be used, including within a compound/filter clause.The main difference between using a filter versus a must clause is whether the queries are being used for relevancy scoring or purely for filtering.", "username": "Erik_Hatcher" }, { "code": "\"query\":[\"ON TIME\",\"FLIGHT\"][\n {\n $search: {\n index: \"fulltext\",\n compound: {\n filter: [\n {\n range: {\n path: \"pubdateRange\",\n gte: ISODate(\"2023-05-01T00:00:00.000Z\"),\n lte: ISODate(\"2023-05-20T18:29:59.000Z\"),\n },\n },\n ],\n must: [\n {\n phrase: {\n query: \"AB1\",\n path: [\"clientidArray\"],\n },\n },\n\n {\n phrase: {\n query: [\"ON TIME\",\"FLIGHT\"],\n \n path: [\"headline\", \"subtitle\", \"fulltext\"],\n ...(slop ? { slop } : {}),\n },\n },\n ],\n },\n },\n }\n]\n", "text": "Adding to this - \"query\":[\"ON TIME\",\"FLIGHT\"] do I get all the data which has these two keywords - “ON TIME” and “FLIGHT” or if any of these keywords found it will return the data.My scenario is I need to search data based on phrases with slope ex -\nPreviously I am using below query for slope, but as I noticed it returns data which have any of these keywords present, but I need if both keywords present in certain distance then it will give results.", "username": "Utsav_Upadhyay2" }, { "code": "", "text": "@Erik_Hatcher do you have any suggestions according to the query I m using?", "username": "Utsav_Upadhyay2" }, { "code": "compoundmust", "text": "To achieve AND behavior, where all clauses are mandatory, you will need to use the compound operator with each of those separated into separate must clauses.", "username": "Erik_Hatcher" }, { "code": "[\n {\n \"$search\":{\n \"index\":\"fulltext\",\n \"compound\":{\n \"filter\":[\n {\n \"range\":{\n \"path\":\"pubdateRange\",\n \"gte\":\"2023-05-24T00:00:00.000Z\",\n \"lte\":\"2023-05-25T18:29:59.000Z\"\n }\n }\n ],\n \"must\":[\n {\n \"phrase\":{\n \"query\":\"I0027\",\n \"path\":[\n \"clientidArray\"\n ]\n }\n },\n {\n \"phrase\":{\n \"query\":\"on time\",\n \"airport\",\n \"path\":[\n \"headline\",\n \"subtitle\",\n \"fulltext\"\n ],\n \"slop\":10\n }\n }\n ]\n }\n }\n },\n {\n \"$project\":{\n \"articleid\":1,\n \"_id\":0\n }\n }\n]\n", "text": "You mean something like this (I am using exactly this query, and slope is giving me OR operator results) -Could you please recommends me or write a query according to above with AND Operator for slope ?", "username": "Utsav_Upadhyay2" }, { "code": "phrasemust", "text": "Split that second phrase operator into 2 different ones within the must array, one for each query phrase you have. 
If you put multiple paths or queries into a single operator, it OR’s (should’s) them, rather than ANDing them.Note that your posts say “slope” but use the correct parameter name “slop” (as in how sloppy the phrase can be) - just wanted to point that out in case that is confusing for others reading this.", "username": "Erik_Hatcher" } ]
How to use filter or compound operator with Slope in Atlas search?
2023-05-18T08:47:41.092Z
How to use filter or compound operator with Slope in Atlas search?
900
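A rough sketch of the split suggested in the thread above (illustrative only; the index name, field names, and values are taken from the thread, everything else is an assumption). With one phrase clause per term inside must, the clauses are ANDed together, and each slop only constrains word distance within its own phrase:

```
db.getCollection("collection").aggregate([
  {
    "$search": {
      "index": "fulltext",
      "compound": {
        "filter": [
          { "range": { "path": "pubdateRange",
                       "gte": ISODate("2023-05-24T00:00:00.000Z"),
                       "lte": ISODate("2023-05-25T18:29:59.000Z") } }
        ],
        "must": [
          { "phrase": { "query": "I0027", "path": ["clientidArray"] } },
          // one phrase clause per term, so BOTH must be present (AND semantics)
          { "phrase": { "query": "on time",
                        "path": ["headline", "subtitle", "fulltext"],
                        "slop": 10 } },
          { "phrase": { "query": "airport",
                        "path": ["headline", "subtitle", "fulltext"],
                        "slop": 10 } }
        ]
      }
    }
  },
  { "$project": { "articleid": 1, "_id": 0 } }
])
```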
null
[]
[ { "code": "", "text": "We are using Mongodb driver 4.0.25. Is there any breaking change between 4.0 and 6.0? If we move up to 6.0, do we anticipate any code change?", "username": "Yanni_Zhang" }, { "code": "", "text": "Hi! Could you share details on both your driver and server versions, as well as the language that you’re using?", "username": "Ashni_Mehta" }, { "code": "", "text": "4.0.25 is the version we use. We support English, German, French, Italian, Spanish, Brazilian Portuguese, Japanese, Simplified Chinese, Traditional Chinese, Korean", "username": "Yanni_Zhang" } ]
MongoDB upgrade
2023-03-02T23:25:03.251Z
MongoDB upgrade
620
null
[ "replication" ]
[ { "code": "", "text": "I have installed a on premise Replica set which has 6 data nodes and 1 arbiter. The nodes are spread across 3 data centers: 3 data nodes on DC1, 3 data nodes on DC2, arbiter on DC3. We chose this infrastructure so that we can achieve High Availability in case of a data center failure.\nA few days ago the application could not connect to the Replica because it was in a weird state, two of the data nodes and the arbiter went down, but the other 4 data nodes were still running. I ran db.hello() on the primary node of the remaining nodes and the isWritablePrimary flag was false. It was strange because the remaining nodes could form a majority (4/7 nodes were running), the nodes elected a primary and every secondary was pointing to the same primary, but still, when i ran db.hello() on the primary it had isWritablePrimary: false, i couldn’t run any queries or insert any data on it.\nWhen i tried to run a query on a collection it prompted the following error: “NotPrimaryNoSecondaryOk” which implies that the node i was trying to run the query was a seconday node, but guess what it wasn’t, every secondary node recognized the node on which i ran the queries as primary and so did the primary, it was recognizing itself as a primary node.\nRegardless of the fact that the 3 nodes were down, the question is, why was the replica set not valid?", "username": "Albert_Ion" }, { "code": "rs.status()", "text": "Show us the output of rs.status() from your “primary”?", "username": "Kobe_W" }, { "code": "", "text": "I’m sorry, i don’t have the output anymore.", "username": "Albert_Ion" }, { "code": "uncaught exception: Error: error: {\n\t\"topologyVersion\" : {\n\t\t\"processId\" : ObjectId(\"645cabac8243064457faa4a4\"),\n\t\t\"counter\" : NumberLong(48)\n\t},\n\t\"operationTime\" : Timestamp(1684863010, 1),\n\t\"ok\" : 0,\n\t\"errmsg\" : \"not master and slaveOk=false\",\n\t\"code\" : 13435,\n\t\"codeName\" : \"NotPrimaryNoSecondaryOk\",\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1684863052, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t}\n} :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDBCommandCursor@src/mongo/shell/query.js:703:15\nDBQuery.prototype._exec@src/mongo/shell/query.js:112:28\nDBQuery.prototype.hasNext@src/mongo/shell/query.js:287:5\nDBCollection.prototype.findOne@src/mongo/shell/collection.js:260:10\n@(shell):1:1\nreplica-set-0:PRIMARY> rs.hello()\n{\n\t\"topologyVersion\" : {\n\t\t\"processId\" : ObjectId(\"645cabac8243064457faa4a4\"),\n\t\t\"counter\" : NumberLong(48)\n\t},\n\t\"hosts\" : [\n\t\t\"c1-mongoshdb01:27017\",\n\t\t\"c2-mongoshdb02:27017\",\n\t\t\"c2-mongoshdb03:27017\",\n\t\t\"c1-mongoshrd01:27017\",\n\t\t\"c1-mongoshrd02:27017\",\n\t\t\"c2-mongoshrd03:27017\"\n\t],\n\t\"arbiters\" : [\n\t\t\"nxt-db-arb:27017\"\n\t],\n\t\"setName\" : \"replica-set-0\",\n\t\"setVersion\" : 6,\n\t\"isWritablePrimary\" : false,\n\t\"secondary\" : true,\n\t\"primary\" : \"c2-mongoshdb02:27017\",\n\t\"me\" : \"c2-mongoshdb02:27017\",\n\t\"electionId\" : ObjectId(\"7fffffff0000000000007b2b\"),\n\t\"lastWrite\" : {\n\t\t\"opTime\" : {\n\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\"t\" : NumberLong(31529)\n\t\t},\n\t\t\"lastWriteDate\" : ISODate(\"2023-05-23T17:30:10Z\"),\n\t\t\"majorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1684862963, 1),\n\t\t\t\"t\" : NumberLong(31529)\n\t\t},\n\t\t\"majorityWriteDate\" : ISODate(\"2023-05-23T17:29:23Z\")\n\t},\n\t\"maxBsonObjectSize\" 
: 16777216,\n\t\"maxMessageSizeBytes\" : 48000000,\n\t\"maxWriteBatchSize\" : 100000,\n\t\"localTime\" : ISODate(\"2023-05-24T13:42:35.787Z\"),\n\t\"logicalSessionTimeoutMinutes\" : 30,\n\t\"connectionId\" : 2861657,\n\t\"minWireVersion\" : 0,\n\t\"maxWireVersion\" : 9,\n\t\"readOnly\" : false,\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1684863052, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1684863010, 1)\n}\n{\n\t\"set\" : \"replica-set-0\",\n\t\"date\" : ISODate(\"2023-05-24T13:44:09.778Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(31531),\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 4,\n\t\"writeMajorityCount\" : 4,\n\t\"votingMembersCount\" : 7,\n\t\"writableVotingMembersCount\" : 6,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1684862963, 1),\n\t\t\t\"t\" : NumberLong(31529)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"2023-05-23T17:29:23.122Z\"),\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1684862963, 1),\n\t\t\t\"t\" : NumberLong(31529)\n\t\t},\n\t\t\"readConcernMajorityWallTime\" : ISODate(\"2023-05-23T17:29:23.122Z\"),\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\"t\" : NumberLong(31529)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\"t\" : NumberLong(31529)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1684862963, 1),\n\t\"electionCandidateMetrics\" : {\n\t\t\"lastElectionReason\" : \"electionTimeout\",\n\t\t\"lastElectionDate\" : ISODate(\"2023-05-23T17:30:52.026Z\"),\n\t\t\"electionTerm\" : NumberLong(31531),\n\t\t\"lastCommittedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1684862963, 1),\n\t\t\t\"t\" : NumberLong(31529)\n\t\t},\n\t\t\"lastSeenOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1684862981, 1),\n\t\t\t\"t\" : NumberLong(31530)\n\t\t},\n\t\t\"numVotesNeeded\" : 4,\n\t\t\"priorityAtElection\" : 1,\n\t\t\"electionTimeoutMillis\" : NumberLong(10000),\n\t\t\"priorPrimaryMemberId\" : 2,\n\t\t\"targetCatchupOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1684862981, 1),\n\t\t\t\"t\" : NumberLong(31530)\n\t\t}\n\t},\n\t\"electionParticipantMetrics\" : {\n\t\t\"votedForCandidate\" : true,\n\t\t\"electionTerm\" : NumberLong(31528),\n\t\t\"lastVoteDate\" : ISODate(\"2023-05-22T17:36:31.798Z\"),\n\t\t\"electionCandidateMemberId\" : 0,\n\t\t\"voteReason\" : \"\",\n\t\t\"lastAppliedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1684776975, 1),\n\t\t\t\"t\" : NumberLong(31527)\n\t\t},\n\t\t\"maxAppliedOpTimeInSet\" : {\n\t\t\t\"ts\" : Timestamp(1684776975, 1),\n\t\t\t\"t\" : NumberLong(31527)\n\t\t},\n\t\t\"priorityAtElection\" : 1\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"name\" : \"c1-mongoshdb01:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 72838,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\t\"t\" : NumberLong(31529)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\t\"t\" : NumberLong(31529)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-05-23T17:30:10Z\"),\n\t\t\t\"optimeDurableDate\" : 
ISODate(\"2023-05-23T17:30:10Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-05-24T13:44:08.074Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-05-24T13:44:08.173Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"c2-mongoshdb02:27017\",\n\t\t\t\"syncSourceId\" : 1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 6,\n\t\t\t\"configTerm\" : 31530\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"c2-mongoshdb02:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 1140989,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\t\"t\" : NumberLong(31529)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-05-23T17:30:10Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\"),\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"syncing from: c2-mongoshdb03:27017\",\n\t\t\t\"electionTime\" : Timestamp(1684863052, 1),\n\t\t\t\"electionDate\" : ISODate(\"2023-05-23T17:30:52Z\"),\n\t\t\t\"configVersion\" : 6,\n\t\t\t\"configTerm\" : 31530,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"name\" : \"c2-mongoshdb03:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 72838,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1684862981, 1),\n\t\t\t\t\"t\" : NumberLong(31530)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1684862963, 1),\n\t\t\t\t\"t\" : NumberLong(31529)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-05-23T17:29:41Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-05-23T17:29:23Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-05-23T17:29:41.665Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-05-23T17:29:23.122Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-05-24T13:44:08.237Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-05-24T13:44:08.237Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1684862980, 1),\n\t\t\t\"electionDate\" : ISODate(\"2023-05-23T17:29:40Z\"),\n\t\t\t\"configVersion\" : 6,\n\t\t\t\"configTerm\" : 31530\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 3,\n\t\t\t\"name\" : \"c1-mongoshrd01:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 72838,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\t\"t\" : NumberLong(31529)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\t\"t\" : NumberLong(31529)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-05-23T17:30:10Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-05-23T17:30:10Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-05-24T13:44:08.074Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-05-24T13:44:08.131Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : 
\"c2-mongoshdb02:27017\",\n\t\t\t\"syncSourceId\" : 1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 6,\n\t\t\t\"configTerm\" : 31530\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 4,\n\t\t\t\"name\" : \"c1-mongoshrd02:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 72838,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\t\"t\" : NumberLong(31529)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\t\"t\" : NumberLong(31529)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-05-23T17:30:10Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-05-23T17:30:10Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-05-24T13:44:08.130Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-05-24T13:44:08.170Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"c2-mongoshdb02:27017\",\n\t\t\t\"syncSourceId\" : 1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 6,\n\t\t\t\"configTerm\" : 31530\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 5,\n\t\t\t\"name\" : \"c2-mongoshrd03:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 72901,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\t\"t\" : NumberLong(31529)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1684863010, 1),\n\t\t\t\t\"t\" : NumberLong(31529)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-05-23T17:30:10Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-05-23T17:30:10Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-05-23T17:30:10.839Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-05-24T13:44:08.074Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-05-24T13:44:08.635Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"c2-mongoshdb02:27017\",\n\t\t\t\"syncSourceId\" : 1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 6,\n\t\t\t\"configTerm\" : 31530\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 6,\n\t\t\t\"name\" : \"nxt-db-arb:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 7,\n\t\t\t\"stateStr\" : \"ARBITER\",\n\t\t\t\"uptime\" : 1140988,\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-05-24T13:44:08.357Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-05-24T13:44:08.914Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 6,\n\t\t\t\"configTerm\" : 31530\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1684863052, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1684863010, 1)\n}\n", "text": "The problem happened again.This is the output when i try to read from the primary node:This is the output when I run rs.hello():And this is the rs.status() output:", "username": "Albert_Ion" } ]
Replica set Primary node has isWritablePrimary: false when running db.hello()
2023-05-11T11:31:38.874Z
Replica set Primary node has isWritablePrimary: false when running db.hello()
965
null
[]
[ { "code": "", "text": "I’ve been thinking how to model this for a while, but can’t seem to get how. Here is how the data is: There are 4 electronics types, namely:Mobiles, Laptops, Mobile Accessories and Other They include the following:I was thinking of using discriminators, but having a hard time with the mobile accessories having subtypes", "username": "Abdi_Mussa" }, { "code": "const MobileAccessoriesSchema = new mongoose.Schema({\n type: String,\n manufacturer: String,\n model: String,\n // Other fields specific to Mobile Accessories\n}, { discriminatorKey: 'accessoryType' });\n\nconst ChargerSchema = new mongoose.Schema({\n chargingSpeed: String,\n});\n\nconst CoverSchema = new mongoose.Schema({\n color: String,\n});\n\nconst OtherAccessorySchema = new mongoose.Schema({\n // Additional fields specific to Other Mobile Accessories\n});\n\n// Embed the subtype schemas within the MobileAccessoriesSchema\nMobileAccessoriesSchema.add({\n charger: ChargerSchema,\n cover: CoverSchema,\n otherAccessory: OtherAccessorySchema,\n});\n\n// Create a mobile accessory of type Charger\nconst chargerAccessory = new MobileAccessories({\n name: 'Mobile Accessory',\n manufacturer: 'Manufacturer 2',\n model: 'Model 2',\n accessoryType: 'Charger',\n charger: {\n chargingSpeed: 'Fast',\n },\n});\nchargerAccessory.save();\n", "text": "Hey @Abdi_Mussa,Welcome to the MongoDB Community Forums! A general rule of thumb while doing schema design in MongoDB is that you should design your database in a way that the most common queries can be satisfied by querying a single collection, even when this means that you will have some redundancy in your database. Thus, it may be beneficial to work from the required queries first, making it as simple as possible, and let the schema design follow the query pattern.Using discriminators (as you mentioned) and embedding/referencing techniques would be a good approach. You can also use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.Regarding mobile accessories having different subtypes like Chargers, Covers, and Others, one approach can be to create separate schemas for each subtype and embed them within the MobileAccessoriesSchema, ie, something like this:Please note that this is just a suggestion and that the actual schema should depend on your queries, hence I would suggest you work on your queries as I stated above.Hope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "If I embed each accessories in MobileAccessories as you suggested, how can I anticipate what the user is searching for? Like for example if he wants to search for both a Samsung charger and cover, how can I write a general search query that will execute any search queries?", "username": "Abdi_Mussa" }, { "code": "const samsungChargerAndCover = await MobileAccessories.find({\n $and: [\n {\n manufacturer: 'Samsung',\n accessoryType: { $in: ['Charger', 'Cover'] },\n },\n ],\n});\n", "text": "Hey @Abdi_Mussa,You can use a query like this to search for both a Samsung Charger and cover:Also, do note, that if the use of discriminators is making things complex, you can always choose to create different collections for different Mobile Accessories. 
I would start by thinking about the kind of queries that will be used most often and then start to model my data.Regards,\nSatyam", "username": "Satyam" }, { "code": "{ type:{$in:[typesFromSearch]} }", "text": "This query doesn’t work for other searches, as it is not general. What if one wanted to search for both mobile and mobile accessory with manufacturer ‘Samsung’? I wanted to have something as such in the query:\n{ type:{$in:[typesFromSearch]} }\nwhich would work as a general query.", "username": "Abdi_Mussa" } ]
How to model this simple but somewhat complex data
2023-05-20T09:29:43.734Z
How to model this simple but somewhat complex data
458
null
[ "queries", "crud" ]
[ { "code": "db.testElemMatch.insertMany([\n { tableau: [ \"1\", \"2\" ] },\n]);\n//The first 2 commands returns the same result (the expected document)\n//so does that mean that elemMatch search for satisfying each crtiteria with a differnet array element?\n//.. regardless of query order\ndb.testElemMatch.find( { tableau : {$elemMatch: {$eq:\"1\", $eq:\"2\"} } } )\ndb.testElemMatch.find( { tableau : {$elemMatch: {$eq:\"2\", $eq:\"1\"} } } )\n//Why that command returns the expected document as well since the first criteria is NOT satisfy \ndb.testElemMatch.find( { tableau : {$elemMatch: {$eq:\"5\", $eq:\"2\"} } } )\n//Why the criteria order affect the result here? \ndb.testElemMatch.find( { tableau : {$elemMatch: {$eq:\"2\", $eq:\"5\"} } } )\n\n", "text": "I would like to understand how $elemMatch is implemented (logically speaking) to explain the following simple test case ;", "username": "vincent_leduc1" }, { "code": "{ tableau : {$elemMatch: {$eq:\"2\"} } }\n{ tableau : {$elemMatch: {$eq:\"1\"} } }\n{ tableau : {$elemMatch: {$eq:\"2\"} } }\n{ tableau : {$elemMatch: {$eq:\"5\"} } }\nquery = { tableau : {$elemMatch: {$eq:\"2\", $eq:\"5\"} } }\n", "text": "None of the queries are what you think they are.The first query is reallyThe second is reallyThirdAnd finally the forthA query is a JSON document. In most implementation of JSON, only the last occurrence of key is significant. In this case the key is $eq. You have to use $and which takes an array.You may find what you queries are in mongosh withor by looking at the parsed query in the explain plan.", "username": "steevej" }, { "code": "", "text": "Thank you so much, crystal clear", "username": "vincent_leduc1" } ]
elemMatch behavior
2023-05-24T11:02:45.545Z
elemMatch behavior
473
null
[ "queries" ]
[ { "code": "restaurants{\n \"address\": {\n \"building\": \"1007\",\n \"coord\": [ -73.856077, 40.848447 ],\n \"street\": \"Morris Park Ave\",\n \"zipcode\": \"10462\"\n },\n \"borough\": \"Bronx\",\n \"cuisine\": \"Bakery\",\n \"grades\": [\n { \"date\": { \"$date\": 1393804800000 }, \"grade\": \"A\", \"score\": 2 },\n { \"date\": { \"$date\": 1378857600000 }, \"grade\": \"A\", \"score\": 6 },\n { \"date\": { \"$date\": 1358985600000 }, \"grade\": \"A\", \"score\": 10 },\n { \"date\": { \"$date\": 1322006400000 }, \"grade\": \"A\", \"score\": 9 },\n { \"date\": { \"$date\": 1299715200000 }, \"grade\": \"B\", \"score\": 14 }\n ],\n \"name\": \"Morris Park Bake Shop\",\n \"restaurant_id\": \"30075445\"\n}\ndb.restaurants.find(\"{grades.5\": {\"grade\": \"A\"})\ndb.restaurants.find({\"grades.5.grade\": \"A\"})\n", "text": "Hi,\nRegarding a document structure that looks like that, in a collection named restaurants:What is the difference between the two following queries?AndWhy does the first query seem to “not work”?Thanks!", "username": "Dan_Katzuv" }, { "code": "{ \"grades.5\": {\"grade\": \"A\"} }{ \"grades.5.grade\": \"A\" }\n", "text": "You are looking at object equality in{ \"grades.5\": {\"grade\": \"A\"} }versus field equality inFor 2 objects to be equals they need the same field with same values in the same order. In your case, the object grades.5 cannot be equal to the object {grade:A} because it has the extra fields date and score.", "username": "steevej" }, { "code": "", "text": "I see, thanks a lot!", "username": "Dan_Katzuv" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Difference between two embedded documents queries
2023-05-23T12:08:43.773Z
Difference between two embedded documents queries
905
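A short illustration of the object-equality versus field-equality distinction discussed above (a sketch only, using the same sample document; the $elemMatch variant is an extra option not shown in the thread):

```
// Object equality: the embedded document must be exactly { grade: "A" }.
// The sample grade entries also carry "date" and "score", so this finds nothing.
db.restaurants.find({ "grades.5": { grade: "A" } })

// Field equality with dot notation reaches inside the embedded document
// and matches the sample restaurant.
db.restaurants.find({ "grades.5.grade": "A" })

// To require several fields of one array element to match together,
// $elemMatch avoids having to spell out the whole sub-document.
db.restaurants.find({ grades: { $elemMatch: { grade: "A", score: 2 } } })
```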
null
[ "node-js", "replication", "change-streams", "kafka-connector" ]
[ { "code": "MongoServerError: Entry field \"o.version\" should be string, found: long\n at MessageStream.messageHandler (/usr/src/app/node_modules/mongodb/lib/cmap/connection.js:467:30)\n at MessageStream.emit (events.js:400:28)\n at MessageStream.emit (domain.js:475:12)\n at processIncomingData (/usr/src/app/node_modules/mongodb/lib/cmap/message_stream.js:108:16)\n at MessageStream._write (/usr/src/app/node_modules/mongodb/lib/cmap/message_stream.js:28:9)\n at writeOrBuffer (internal/streams/writable.js:358:12)\n at MessageStream.Writable.write (internal/streams/writable.js:303:10)\n at Socket.ondata (internal/streams/readable.js:731:22)\n at Socket.emit (events.js:400:28)\n at Socket.emit (domain.js:475:12) {\n operationTime: new Timestamp({ t: 1682491411, i: 106 }),\n ok: 0,\n code: 40532,\n codeName: 'Location40532',\n '$clusterTime': {\n clusterTime: new Timestamp({ t: 1682491411, i: 108 }),\n signature: {\n hash: new Binary(Buffer.from(\"e72109e0cc06eed5230ee17b39ae1db873ae91fa\", \"hex\"), 0),\n keyId: new Long(\"7186622876357755161\")\n", "text": "We have a producer application which gets data via changestream from mongodb and posts to kafka. But it started failing from last night with below error. What could be the cause?", "username": "Balu_Daggubati" }, { "code": "stringlongoplogsystem collection", "text": "Hello @Balu_Daggubati,I noticed that you haven’t received a response yet. Are you still experiencing the issue?Although, based on the error message it indicates that there is an issue with the “o.version” field in the data being processed by the producer application. The error specifically states that the “o.version” field should be a string, but it is found to be of type long.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi Kushagra, Thanks for replying!\nYes, we are depending on oplog. We are saving it in our memory and reusing it incase of crashing to start fetching from the time where it was left. The issue got resolved after creating new that key and restarting the service.We didn’t have any changes in the recent times. But all of sudden it started failing. We are using this as a producer to replicate the data and it will be used by the consumers via kafka. The data modification happens only in the consumer’s side but not at the producer.Please check and let me understand on the issue.Thanks,\nBala", "username": "Balu_Daggubati" } ]
Change stream is failing to connect
2023-04-26T06:46:03.971Z
Change stream is failing to connect
1,036
null
[ "aggregation", "queries" ]
[ { "code": "{\n \"name\": \"school1\",\n \"sections\": [\n {\n \"name\": \"section1\",\n \"cabinets\": [\n {\n \"name\": \"cabinet1\",\n \"columns\": [\n {\n \"_id\": \"1\"\n \"index\": 1,\n }\n ]\n }\n ]\n }\n ]\n}\n \n {\n \"_id\": \"...\",\n \"columnId\": \"1\"\n },\n {\n \"_id\": \"...\",\n \"columnId\": \"1\"\n },\n {\n \"_id\": \"...\",\n \"columnId\": \"1\"\n }\n \n {\n \"name\": \"school1\",\n \"sections\": [\n {\n \"name\": \"section1\",\n \"cabinets\": [\n {\n \"name\": \"cabinet1\",\n \"columns\": [\n {\n \"_id\": \"1\",\n \"index\": 1,\n \"lockers\": [\n {\n \"_id\": \"...\",\n \"columnId\": \"1\"\n },\n {\n \"_id\": \"...\",\n \"columnId\": \"1\"\n },\n {\n \"_id\": \"...\",\n \"columnId\": \"1\"\n }\n ]\n }\n ]\n }\n ]\n }\n ]\n }\n", "text": "hi all,I have the following school document from the Schools collection:and the following locker document from the Lockers collection.\neach locker belongs to one specific column in a cabineti want to query all schools and join each column with its corresponding lockers like so:i tried to achieve it using aggregate and $lookup, the thing is the results are unwind, meaning for each section, cabinet and column i get a separate school document.how do i aggregate properly and retain the structure of school with all its arrays and still get for every column its corresponding lockers arrayhere is a sample of my invalid aggregation:\nhttps://mongoplayground.net/p/CE-2YxHZWP8thanks!", "username": "Roy_Yair" }, { "code": "{\n '$project': {\n '_id': 0, \n 'sections.name': 0, \n 'sections._id': 0\n }\n", "text": "Hi @Roy_Yair and welcome to MongoDB community forums!!From the aggregation query mentioned in the playground, the output returned is similar to a few fields extra fields displayed which could be skipped using the $project fields.\nFor example:As you might already know, the query performance in MongoDB is highly dependent on the efficient data modelling.\nIn saying so, I would recommend changing the schema design to a more efficient design which would eliminate the use of multiple $unwind aggregation pipeline and produce the results.\nIf this aggregation would be used in a huge dataset which involves more traversal into the lower level may or may not impact the query performance.\nPlease visit the documentation on Data Modelling in MongoDB for further understanding.Let us know if you have further questions.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi Aasawari!thanks for your reply,\nI guess my modeling isn’t ideal, what would you suggest,\nplease consider I’m expecting a lot of writes to the locker collection. and i dont want write locks to affect other users so I have separated it from Schools collection.do you think I should’ve put everything on the same document? or maybe take the opposite direction and separate each layer into its own collection?anyhow, I still confused on how to properly join in mongo as you can see in the playground - it seem to flatten the array and turn arrays into objects", "username": "Roy_Yair" }, { "code": "", "text": "Hi @Roy_Yairplease consider I’m expecting a lot of writes to the locker collection. and i dont want write locks to affect other users so I have separated it from Schools collection.The locks in MongoDB are not specific to collection. 
The write conflict would occur when multiple processes try to write onto the same document, so if a document contains references to multiple entities, the lock is certainly inevitable.\nYou can read more about granularity using locks in MongoDB for further information.do you think I should’ve put everything on the same document? or maybe take the opposite direction and separate each layer into its own collection?I think those are two extreme ends of possible designs. With MongoDB you have the flexibility of choosing a design that is in-between those two extremes.\nTo start with, reading the blog posts on the Extended Reference Pattern and Building with Patterns would be a good starting point to design the schema.Let us know if you have further questionsRegards\nAasawari", "username": "Aasawari" } ]
$lookup inside nested arrays results in an unwind document
2023-05-15T23:38:58.592Z
$lookup inside nested arrays results in an unwind document
622
null
[]
[ { "code": "end_customerend_customer_fieldend_customerend_customer_fieldend_customer_field", "text": "I’m bootstrapping a Saas where I’ve got several customers, each having their own set of “end_customers”. On average, a customer has around 3,000 to 5,000 end customers, but this could range anywhere from 1,000 up to 300k.Each customer can define up to 30 custom fields for their end customers, with a mix of field types including text, number, date, or a constrained text value. All of this data is currently stored in a MariaDB database, with a main end_customer table and a end_customer_field table for the custom fields. However, I’ve run into some serious performance issues when trying to allow my customers to execute dynamic filter queries on their end customers.These filter queries involve joining the end_customer table with the end_customer_field table up to 30 times, and converting text values to numbers or dates for certain data types. I’ve also implemented a variety of logical comparison operators based on the data type. Unfortunately, more complex filters are currently taking several minutes to run on my local system before timing out, so I urgently need to find a solution to this problem.I’ve been exploring a few potential approaches to improve performance, including limiting the number of search criteria, restructuring the data storage, and even considering a switch to a different type of database. For instance, I could change the structure of the end_customer_field table to have multiple columns for different data types, or I could split it into separate tables per data type. This would complicate the join/lookup logic, but could avoid the need for casting.Another option I’m considering is a switch to a NoSQL database, such as MongoDB. I haven’t used NoSQL databases before, but I’ve heard they could potentially handle this type of data and querying more efficiently. However, I’m not exactly sure how to go about this, or whether it would really solve my performance issues.I’m reaching out to the community here to see if anyone has any insights or suggestions. Would a NoSQL database be a good fit for this use case? How could I structure my data in MongoDB (or another NoSQL database) to handle these dynamic filter queries efficiently?Any guidance or advice would be greatly appreciated.", "username": "Marcus_Biel" }, { "code": "\"specs\": [\n { k: \"volume\", v: \"500\", u: \"ml\" },\n { k: \"volume\", v: \"12\", u: \"ounces\" }\n]\n{\"specks.k\": 1, \"specs.v\": 1, \"specs.u\": 1}\n", "text": "Hello @Marcus_Biel ,Welcome to The MongoDB Community Forums! I saw that you haven’t had a response to this topic yet, were you able to find guidance for this?\nIf not, then can you please confirm if my understanding for your use-case is correct?Would a NoSQL database be a good fit for this use case? How could I structure my data in MongoDB (or another NoSQL database) to handle these dynamic filter queries efficiently?As you mentioned that your application is supposed to handle large number of unstructured data, MongoDB may be a good fit for such solutions. For guidance on schema design patterns, please refer\nBuilding with Patterns: A Summary. I think Attribute Pattern along with Extended Reference pattern could be a good fit for your use case. 
As one will provide the support required for freeform fields and latter will help with minimising joins.Attribute pattern example\nIf our data collection was on bottles of water, our attributes might look something like:Here we break the information out into keys and values, “k” and “v”, and add in a third field, “u” which allows for the units of measure to be stored separately.It provides for easier indexing the documents, targeting many similar fields per document. By moving this subset of data into a key-value sub-document, we can use non-deterministic field names, add additional qualifiers to the information, and more clearly state the relationship of the original field and value. When we use the Attribute Pattern, we need fewer indexes, our queries become simpler to write, and our queries become faster.Extended Reference Pattern example\nIn an e-commerce application, the idea of an order exists, as does a customer, and inventory. They are separate logical entities.\nimage2048×944 97.2 KB\nInstead of embedding all of the information or including a reference to JOIN the information, we only embed those fields of the highest priority and most frequently accessed, such as name and address.\nimage2048×944 107 KB\nBy identifying fields on the lookup side and bringing those frequently accessed fields into the main document, performance is improved. This is achieved through faster reads and a reduction in the overall number of JOINs. Be aware, however, that data duplication is a side effect of this schema design pattern.Personally, I will test a mixture of different patterns first which could help me with my requirements and help provide an easy structure to work along my application. After deciding on the design pattern, then comes performance enhancement, which can be done by ways such as extending the design horizontally using sharding.Regards,\nTarun", "username": "Tarun_Gaur" } ]
MongoDB for complex filter queries?
2023-05-12T11:23:34.296Z
MongoDB for complex filter queries?
968
null
[ "queries" ]
[ { "code": "{\nclassic : 1\nsection: 'A'\nteachers[\n{id :123,\nname:\"teacher 1\",\nisClassTeacher:True,\nsubjects:[science,Maths]},\n{id:098,\nname:\"teacher2\",\nisClassTeacher:false,\nsubject:[\"social\",\"English\"]\n]\n}\n{\nclassic : 1,\nsection: 'A',\nclassTeacherName :\"teacher 1\",\nteachers[\n{id :123,\nname:\"teacher 1\",\nisClassTeacher:True,\nsubjects:[science,Maths]},\n{id:098,\nname:\"teacher2\",\nisClassTeacher:false,\nsubject:[\"social\",\"English\"]\n]\n}\ndb.collection.udateOne({},{$set:{\"classTeacherName\": {$first:\"$teachers.name}}})", "text": "i have a collection as bellowhere I want to introduce one more field “classTeacherName” in the root, and value should be from our teachers Array\noutput look likeI have tried with below querydb.collection.udateOne({},{$set:{\"classTeacherName\": {$first:\"$teachers.name}}})and I have tried many other ways. Please help me to resolve this", "username": "madhan_joe" }, { "code": "{\nclassic : 1\nsection: 'A'\nteachers[\n{id :123,\nname:\"teacher 1\",\nisClassTeacher:True,\nsubjects:[science,Maths]},\n{id:098,\nname:\"teacher2\",\nisClassTeacher:false,\nsubject:[\"social\",\"English\"]\n]\n}\n{\n _id: ObjectId(\"646d9e2e133a7d563dd508c1\"),\n classic: 1,\n section: 'A',\n teachers: [\n {\n id: 123,\n name: 'teacher 1',\n isClassTeacher: true,\n subjects: [ 'science', 'Maths' ]\n },\n {\n id: 98,\n name: 'teacher2',\n isClassTeacher: false,\n subject: [ 'social', 'English' ]\n }\n ]\n}\nupdateOne()db>db.teachers.updateOne({},[{$set:{classTeacherName:{$first:'$teachers.name'}}}])\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\n\ndb>db.teachers.find({})\n[\n {\n _id: ObjectId(\"646d9e2e133a7d563dd508c1\"),\n classic: 1,\n section: 'A',\n teachers: [\n {\n id: 123,\n name: 'teacher 1',\n isClassTeacher: true,\n subjects: [ 'science', 'Maths' ]\n },\n {\n id: 98,\n name: 'teacher2',\n isClassTeacher: false,\n subject: [ 'social', 'English' ]\n }\n ],\n classTeacherName: 'teacher 1'\n }\n]\n", "text": "Hi @madhan_joe,i have a collection as bellowJSON is invalid as there are syntax errors above, please correct these in future as it will make it easier for other community members to try reproduce / import any sample documents on their own test environments when they assist you.In saying so, I have the following in my test environment:The above document after the updateOne() command:I’ve only tested this on a single sample document briefly. If you believe it works for you please test accordingly against a larger test dataset thoroughly to verify it suits all your use case and requirements and alter / adjust it accordingly.Please go over the following update with an aggregation pipeline documentation which may be of use.Regards,\nJason", "username": "Jason_Tran" } ]
Collection.upate()
2023-05-23T13:12:55.199Z
Collection.upate()
335
null
[ "serverless" ]
[ { "code": "", "text": "Hello EveryoneI want to ask four questions:Thanks", "username": "Dev_Teqexpert" }, { "code": "", "text": "Hi @Dev_Teqexpert - Welcome to the community.No, as per the Serverless Instance Limitations documentation serverless instances do not currently support Network Peering (VPC/VNet) configurations.You can set up AWS PrivateLink for serverless instances using the Atlas UI or the Atlas Administration API. Please refer to the Set Up a Private Endpoint for a Serverless Instance documentation.As per the Unsupported Actions documentation for serverless instances, use of Atlas Triggers and Atlas Device Sync are currently unsupported for serverless instances.Hope the above answers your questions.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran, Thanks for the response.", "username": "Dev_Teqexpert" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connection via VPC Peering with MongoDB Serverless Cluster
2023-05-23T15:05:31.750Z
Connection via VPC Peering with MongoDB Serverless Cluster
808
null
[ "replication", "atlas-cluster" ]
[ { "code": "", "text": "Hi,We have a MongoDB atlas cluster (replica set - 3 nodes - M50) and one of the replicas uses more than 85% of the system’s memory after an index creation and won’t go down.I saw that MongoDB doesn’t free up memory by default and we could set tcmallocAggressiveMemoryDecommit to enable this behavior, but when I try to do it, I receive a not authorized error.I can’t contact support because I don’t have a Developer or Premium support.Can someone help me?Thanks", "username": "Felipe_Gaudio" }, { "code": "", "text": "Hi @Felipe_Gaudio - Welcome to the community.I can’t contact support because I don’t have a Developer or Premium support.You can contact the in-app chat support which doesn’t require a developer or premium support subscription. In saying so, you may wish to confirm the topic of this memory release is within support scope for the Atlas support team.We have a MongoDB atlas cluster (replica set - 3 nodes - M50) and one of the replicas uses more than 85% of the system’s memory after an index creation and won’t go down.However, could I ask how much of a spike in memory you saw during this time in GB? Additionally, what is the metric you monitored for this? (System Memory, Memory, etc)Lastly, do you have any details that you could provide regarding the index created? E.g., The create index command issue, whether it was an Atlas search index, etc.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" } ]
Release system's memory after index creation
2023-05-20T08:32:45.203Z
Release system’s memory after index creation
639
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi everyone,I am interested in learning educational software for teachers. I was wondering if it is possible to learn this using MongoDB? I have some experience with MongoDB and I think it could be a useful tool for educational software developmentI have looked into MongoDB’s Educator Center, but I am not sure if it covers educational software development for teachers. I would appreciate any advice or resources you could recommend for learning educational software development using MongoDB.Thank you for your help!", "username": "Charles_Devis" }, { "code": "", "text": "Hi @Charles_Devis welcome to the communityThe link you posted seem to be about a classroom management system. What are you trying to learn using MongoDB that’s related to that software? Could you give some examples so we may be able to point you toward the right direction?Best regards\nKevin", "username": "kevinadi" } ]
Can I learn educational software for teachers from MongoDB?
2023-05-23T09:27:20.979Z
Can I learn educational software for teachers from MongoDB?
564
null
[ "sharding" ]
[ { "code": "", "text": "Hello,Queries to one of our shards are regularly timing out with the following error:“sharding status of collection db.collection is not currently available for description and needs to be recovered from the config server”I’ve spent a sizeable amount of time scouring the internet but the only references I find are to test bug reports in Jira.Has anyone experienced similar issues?", "username": "Joao_Galrito" }, { "code": "", "text": "Hi @Joao_Galrito and welcome to MongoDB community forums!!In order to understand the concern better and to help you with a possible solution, could you share a few details about the sharded deployment.The architecture for the sharded cluster.Is there a possible way to reproduce this error in the local environment. For instance, is there a specific command being performed which results into this error message?The MongoDB versions being used.Finally, can you confirm if any upgrade has been done in the sharded cluster between the versions?I’ve spent a sizeable amount of time scouring the internet but the only references I find are to test bug reports in Jira.As mentioned in the SERVER-74195: Transaction failed after version upgrade, identifies a similar error message, however this was related to the use of transactions.\nCan you also confirm if the transactions are being used inside the sharded cluster?Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hello Aasawari, and thank you!Thank you for any assistance you might be able to provide!", "username": "Joao_Galrito" }, { "code": "", "text": "Hi @Joao_GalritoAll instances are running 5.0.5, except for the problematic shard which is running 5.0.9 (just noticed this), and a newer shard running 6.0.5 (set on 5.0 compatibility mode)Mismatching between the versions in a sharded cluster is allowed unless they are from adjacent major MongoDB server releases however, a sharded cluster should have consistent versions.\nHence, I would suggest to keep all shard at the same versions to avoid any inconsistencies.Along with the shard servers, I would also suggest to upgrade mongos and the config servers also to the same versions and same feature compatibility version.\nOnce you have performed the necessary upgrades, try running the similar query again and let us know if the issue persists.Please follow the documentation to Upgrade a Sharded Cluster to 6.0 to help with the steps to be followed for smooth upgrade.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hello Aasawari,We moved the entire cluster to 6.0.4 and it seems to have solved the issue.Thank you!", "username": "Joao_Galrito" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Shard queries timing out - Sharding status not available
2023-05-16T17:59:39.838Z
Shard queries timing out - Sharding status not available
1,093
null
[]
[ { "code": "", "text": "Hello , is there a maintenance with GCP / Belgium (europe-west1) or what happen my cluster went downCluster undergoing maintenance\nWe are deploying your changes: 3 of 3 servers complete (current action: waiting for restore to finish)i made no changes", "username": "L_R2" }, { "code": "", "text": "Just FYI, this group isn’t the official support of MongoDB it just a place for the community to try and help (although there are MongoDB employees that do answer) but if you need critical information quickly then I would suggest reaching out to support.To your question, I believe you SHOULD receive an email if you have a cluster that is in a zone that will be impacted by maintenance, especially if they will make changes to your cluster.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "I got no notification, in the status it’s shows that everything working I even buy\nA support plan to get some useful support because the chat one just say they will look into it then never replyI get this since 2 hours We are currently syncing your organization to the Support Portal. Return to this page to create a new case in 5-10 mins when the sync process is finished.", "username": "L_R2" }, { "code": "", "text": "It’s been down litteraly for 10 hours now, I am so desperate and have no idea if it’s only me i am looking if anyone have similar issue that got resolved", "username": "L_R2" }, { "code": "", "text": "According to their website there are no issues currently with the MongoDB Cloud platform. Hopefully support will get back to you soon. Welcome to MongoDB Cloud's home for real-time and historical data on system performance.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "that exactly my concern , the last thing i got from support is that the issue is complex and they can’t give me time", "username": "L_R2" }, { "code": "", "text": "UPDATE:\nAfter 8 hours , my cluster is back online , and i didn’t lose any data, the support althought take sometime is very helpfull , i am so glad , thanks to everyone", "username": "L_R2" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cluster down GCP / Belgium (europe-west1)
2023-05-23T12:53:17.181Z
Cluster down GCP / Belgium (europe-west1)
401
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "Hi,I have requirement to upgrade from mongo dB 4.2 to 6.0, since mongo is recommending using the sequential order to upgrade from 4.2 → 4.4 → 5.0 → 6.0.I also tried direct upgrade from 4.2 to 6.0 by dumping from 4.2 and restoring to 6.0 and worked as expected.based on the above analysis I have some questions:Thanks,\nBhagyashree", "username": "Bhagyashree_Patil" }, { "code": "", "text": "Hi your post is very similar to this post, I would take a look at it, and see if you have any other questions.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Data dump/restore format might be different from version to version. So better not rely on this.", "username": "Kobe_W" } ]
MongoDB direct upgrade from 4.2 to 6.0 using mongodump and mongorestore
2023-05-23T10:39:49.589Z
MongoDB direct upgrade from 4.2 to 6.0 using mongodump and mongorestore
810
null
[ "replication", "containers" ]
[ { "code": "mongod --upgradedb.adminCommand( { setFeatureCompatibilityVersion: \"4.4\" } );mongod --upgrade ", "text": "Hi,I need to upgrade my mongodb replicaSet from 4.2 to 5.0.\nI realize I need to upgrade to 4.4 first, and then to 5.0\nMy question is in regards to the upgrade procedure.My Mongo is running on docker containers, which were created and deployed using Ansible playbook.\nNow that I need to upgrade I do the following:deploy the existing containers using mongo 4.4 image.\n– when this is finished, I can see that all containers are deployed successfully, and are using version 4.4Connect to each secondary mongo container, and run:mongod --upgradeThe output looks like this:\nmongod --upgrade{“t”:{“$date”:“2023-05-16T14:10:40.788+03:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“-”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{“$date”:“2023-05-16T14:10:40.789+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“-”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledP\n{“t”:{”$date\":“2023-05-16T14:10:40.791+03:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2023-05-16T14:10:40.791+03:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOlient, and tcpFastOpenQueueSize.”}\n{“t”:{“$date”:“2023-05-16T14:10:40.825+03:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2023-05-16T14:10:40.825+03:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationfig.tenantMigrationDonors”}}\n{“t”:{“$date”:“2023-05-16T14:10:40.825+03:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrati”:“config.tenantMigrationRecipients”}}\n{“t”:{“$date”:“2023-05-16T14:10:40.825+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{“$date”:“2023-05-16T14:10:40.825+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:136,“port”:27017,“dbPath”:“/data/dbt”,“host”:“10.97.7.150”}}\n{“t”:{“$date”:“2023-05-16T14:10:40.826+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“5.0.15”,“gitVersion”:“9854b831107c0b118”,“openSSLVersion”:“OpenSSL 1.1.1f 31 Mar 2020”,“modules”:,“allocator”:“tcmalloc”,“environment”:{“distmod”:“ubuntu2004”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}\n{“t”:{“$date”:“2023-05-16T14:10:40.826+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Ubuntu”,“version”:“20.04”}}\n{“t”:{“$date”:“2023-05-16T14:10:40.826+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“upgrade”:true}}}\n{“t”:{“$date”:“2023-05-16T14:10:40.930+03:00”},“s”:“E”, “c”:“CONTROL”, “id”:20557, “ctx”:“initandlisten”,“msg”:“DBException in initAndListen, terminating”,“attr”:{“error”:“DBPathIe lock file: /data/db/mongod.lock (Resource temporarily unavailable). 
Another mongod instance is already running on the /data/db directory”}}\n{“t”:{“$date”:“2023-05-16T14:10:40.930+03:00”},“s”:“I”, “c”:“REPL”, “id”:4784900, “ctx”:“initandlisten”,“msg”:“Stepping down the ReplicationCoordinator for shutdown”,“attr”:{“wai\n{“t”:{”$date\":“2023-05-16T14:10:40.930+03:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784901, “ctx”:“initandlisten”,“msg”:“Shutting down the MirrorMaestro”}\n{“t”:{“$date”:“2023-05-16T14:10:40.930+03:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784902, “ctx”:“initandlisten”,“msg”:“Shutting down the WaitForMajorityService”}\n{“t”:{“$date”:“2023-05-16T14:10:40.930+03:00”},“s”:“I”, “c”:“NETWORK”, “id”:20562, “ctx”:“initandlisten”,“msg”:“Shutdown: going to close listening sockets”}\n{“t”:{“$date”:“2023-05-16T14:10:40.930+03:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784905, “ctx”:“initandlisten”,“msg”:“Shutting down the global connection pool”}\n{“t”:{“$date”:“2023-05-16T14:10:40.930+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784906, “ctx”:“initandlisten”,“msg”:“Shutting down the FlowControlTicketholder”}\n{“t”:{“$date”:“2023-05-16T14:10:40.930+03:00”},“s”:“I”, “c”:“-”, “id”:20520, “ctx”:“initandlisten”,“msg”:“Stopping further Flow Control ticket acquisitions.”}\n{“t”:{“$date”:“2023-05-16T14:10:40.930+03:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784918, “ctx”:“initandlisten”,“msg”:“Shutting down the ReplicaSetMonitor”}\n{“t”:{“$date”:“2023-05-16T14:10:40.930+03:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784921, “ctx”:“initandlisten”,“msg”:“Shutting down the MigrationUtilExecutor”}\n{“t”:{“$date”:“2023-05-16T14:10:40.930+03:00”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“MigrationUtil-TaskExecutor”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{“$date”:“2023-05-16T14:10:40.931+03:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784923, “ctx”:“initandlisten”,“msg”:“Shutting down the ServiceEntryPoint”}\n{“t”:{“$date”:“2023-05-16T14:10:40.931+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784925, “ctx”:“initandlisten”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2023-05-16T14:10:40.931+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784927, “ctx”:“initandlisten”,“msg”:“Shutting down the HealthLog”}\n{“t”:{“$date”:“2023-05-16T14:10:40.931+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784928, “ctx”:“initandlisten”,“msg”:“Shutting down the TTL monitor”}\n{“t”:{“$date”:“2023-05-16T14:10:40.931+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784929, “ctx”:“initandlisten”,“msg”:“Acquiring the global lock for shutdown”}\n{“t”:{“$date”:“2023-05-16T14:10:40.931+03:00”},“s”:“I”, “c”:“-”, “id”:4784931, “ctx”:“initandlisten”,“msg”:“Dropping the scope cache for shutdown”}\n{“t”:{“$date”:“2023-05-16T14:10:40.931+03:00”},“s”:“I”, “c”:“FTDC”, “id”:4784926, “ctx”:“initandlisten”,“msg”:“Shutting down full-time data capture”}\n{“t”:{“$date”:“2023-05-16T14:10:40.931+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:20565, “ctx”:“initandlisten”,“msg”:“Now exiting”}\n{“t”:{“$date”:“2023-05-16T14:10:40.931+03:00”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“initandlisten”,“msg”:“Shutting down”,“attr”:{“exitCode”:100}}Stop the Primary container. 
so now one of the other nodes becomes the Primary.Start this container (which is now secondary) and run the mongod --upgrade command as well.Connect to the Primary container mongo shell and run the following command:\ndb.adminCommand( { setFeatureCompatibilityVersion: \"4.4\" } );when I finish all this, I seem to have a healthy replicaset, with version 4.4.My question is this:\nIn most documented upgrade instructions I do not see the requirement to use the mongod --upgrade \nIn what cases is this required, and when can it be skipped?Thanks,\nTamar", "username": "Tamar_Nirenberg" }, { "code": "mongod --upgrade", "text": "Hi @Tamar_Nirenberg and welcome to MongoDB community forums!!While upgrading or downgrading the MongoDB version, the binaries of the specific version need to be replaced with the desired version.\nThis replacement of the binaries is similar to changing the image to the relevant tag in the docker-compose files.\nTherefore, in a containerised environment, changing the image tag would perform the upgrade to the respective version.However, there are a few points to consider before the upgrade.For a production environment, the upgrade of the replica set should be done in a rolling fashion, i.e., first upgrade the secondaries and then finally upgrade the primary member of the replica set.Remember to map the volumes to the correct path during the upgrade process, else this may lead to data loss.Finally, the upgrade process should be in accordance with your specific environment, i.e. if you’re using Ansible and Docker, then the basic procedure needs to be tailored for that environment.However, to answer your question, mongod --upgrade is not a necessary step while performing the upgrade in the Docker environment.Let us know if you have any further questions.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @Aasawari,Thank you for the clarification!\nIs there a relation between the size of the data and the time it takes to upgrade?Thanks,\nTamar", "username": "Tamar_Nirenberg" }, { "code": "", "text": "No. Upgrading means changing the binaries (code) and config files, which has nothing to do with data size. (Size itself does not matter; other things may, e.g. the data format on disk.)", "username": "Kobe_W" } ]
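A minimal sketch of the rolling-upgrade flow described above, assuming a Docker Compose setup in which every replica set member is its own service; the service names and compose commands here are hypothetical and would need to match the actual Ansible/Docker deployment:

    # 1. For one SECONDARY at a time: change its image tag (e.g. mongo:4.2 -> mongo:4.4)
    #    in the compose file, then recreate only that container so the data volume is reused.
    docker compose up -d --no-deps mongo-secondary-1     # hypothetical service name

    # 2. When all secondaries run 4.4, step the primary down and recreate it the same way.
    docker exec -it mongo-primary mongo --eval 'rs.stepDown()'
    docker compose up -d --no-deps mongo-primary

    # 3. Once every member is healthy on 4.4, raise the feature compatibility version
    #    from the new primary; no separate `mongod --upgrade` step is involved in this path.
    docker exec -it mongo-primary mongo --eval 'db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })'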
Upgrade Procedure for MongoDB
2023-05-16T12:11:04.894Z
Upgrade Procedure for MongoDB
954
null
[ "node-js" ]
[ { "code": "", "text": "I want to get count and documents for more than 5lakhs records by using parallel collection scan using node js.", "username": "Santhosh_V" }, { "code": "parallelCollectionScanparallelCollectionScancollection.countDocuments() collection.countDocuments({}, (error, count) => {\n if (error) {\n console.error(error);\n } else {\n console.log('Total document count:', count);\n }\n });\ncollection.find() collection.find({}, { projection: { _id: 0 } }).toArray((error, documents) => {\n if (error) {\n console.error(error);\n } else {\n console.log('Retrieved documents:', documents);\n }\n });\n", "text": "Hello @Santhosh_V,If I understand the question correctly, you want to use parallelCollectionScan. It’s important to note that the parallelCollectionScan command was removed in MongoDB version 4.2.I want to get count and documents for more than 5lakhs recordsCould you please provide additional details about your specific requirements?However, to count the documents, you can use the collection.countDocuments() method in Node.js. Here’s an example code snippet for the same:And, to retrieve the documents, you can use the collection.find() method with appropriate filters, options, and projections. Here’s an example code snippet for your reference:I hope this helps! If you have any further questions, feel free to ask.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
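For result sets in the hundreds of thousands of documents, buffering everything with toArray() can be memory-hungry; a rough sketch of counting and then streaming the cursor instead (the database, collection and filter names below are placeholders, not taken from the original question):

    const { MongoClient } = require('mongodb');

    async function run(uri) {
      const client = new MongoClient(uri);
      try {
        const collection = client.db('test').collection('records'); // placeholder names

        // Count without fetching any documents.
        const total = await collection.countDocuments({});
        console.log('Total document count:', total);

        // Iterate the documents in server-side batches instead of loading them all at once.
        const cursor = collection.find({}).batchSize(1000);
        for await (const doc of cursor) {
          // process one document at a time
        }
      } finally {
        await client.close();
      }
    }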
Parallel collectionScan using node.js
2023-05-16T11:31:59.686Z
Parallel collectionScan using node.js
420
null
[ "java", "kafka-connector" ]
[ { "code": " \"mongodb.delete.on.null.values\": \"true\",\n \"delete.on.null.values\": \"true\",\n \"document.id.strategy.overwrite.existing\": \"true\",\n \"writemodel.strategy\": \"com.mongodb.kafka.connect.sink.writemodel.strategy.DefaultWriteModelStrategy\",\n \"document.id.strategy\": \"com.mongodb.kafka.connect.sink.processor.id.strategy.ProvidedInKeyStrategy\",\n \"transforms\": \"hk\",\n \"transforms.hk.type\": \"org.apache.kafka.connect.transforms.HoistField$Key\",\n \"transforms.hk.field\": \"_id\",\n \"post.processor.chain\": \"com.mongodb.kafka.connect.sink.processor.DocumentIdAdder,com.mongodb.kafka.connect.sink.processor.AllowListKeyProjector,com.mongodb.kafka.connect.sink.processor.AllowListValueProjector\",\n \"key.projection.type\": \"AllowList\",\n \"key.projection.list\": \"_id\",\n \"value.projection.type\": \"AllowList\",\n \"value.projection.list\": \"recommended_listings_prediction_current,shopper_engagement,cdp_id,last_modified_ts\"\n{\n \"exception\": {\n \"stacktrace\": \"org.apache.kafka.connect.errors.DataException: Could not build the WriteModel,the `_id` field was missing unexpectedly\\n\\tat com.mongodb.kafka.connect.sink.writemodel.strategy.ReplaceOneDefaultStrategy.createWriteModel(ReplaceOneDefaultStrategy.java:50)\\n\\tat com.mongodb.kafka.connect.sink.writemodel.strategy.DefaultWriteModelStrategy.createWriteModel(DefaultWriteModelStrategy.java:36)\\n\\tat com.mongodb.kafka.connect.sink.writemodel.strategy.WriteModelStrategyHelper.createValueWriteModel(WriteModelStrategyHelper.java:44)\\n\\tat com.mongodb.kafka.connect.sink.writemodel.strategy.WriteModelStrategyHelper.createWriteModel(WriteModelStrategyHelper.java:33)\\n\\tat com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModel$2(MongoProcessedSinkRecordData.java:92)\\n\\tat com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.tryProcess(MongoProcessedSinkRecordData.java:105)\\n\\tat com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.buildWriteModel(MongoProcessedSinkRecordData.java:85)\\n\\tat com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.createWriteModel(MongoProcessedSinkRecordData.java:81)\\n\\tat com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.<init>(MongoProcessedSinkRecordData.java:51)\\n\\tat com.mongodb.kafka.connect.sink.MongoSinkRecordProcessor.orderedGroupByTopicAndNamespace(MongoSinkRecordProcessor.java:45)\\n\\tat com.mongodb.kafka.connect.sink.StartedMongoSinkTask.put(StartedMongoSinkTask.java:101)\\n\\tat com.mongodb.kafka.connect.sink.MongoSinkTask.put(MongoSinkTask.java:90)\\n\\tat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:582)\\n\\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:330)\\n\\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)\\n\\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)\\n\\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)\\n\\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:237)\\n\\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\\n\\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:829)\",\n 
\"exception_class\": \"org.apache.kafka.connect.errors.DataException\",\n \"exception_message\": \"Could not build the WriteModel,the `_id` field was missing unexpectedly\"\n },\n \"source_host\": \"ip-172-28-195-18.ec2.internal\",\n \"method\": \"tryProcess\",\n \"level\": \"ERROR\",\n \"message\": \"Unable to process record SinkRecord{kafkaOffset=25529965, timestampType=CreateTime} ConnectRecord{topic='dev-feature.consumer_profile', kafkaPartition=2, key=Struct{_id=adefffa6-b9ae-40cf-b6de-429c59e4f2c5}, keySchema=Schema{STRUCT}, value={last_modified_ts=1677466685723, research_indicator_30_mins={windowstart=1663612200000, windowend=1663614000000, value=false}, review_indicator_30_mins={windowstart=1663612200000, windowend=1663614000000, value=false}, popup_indicator_30_mins={windowstart=1663612200000, windowend=1663614000000, value=false}, imps_news_page_30_mins={windowstart=1663612200000, windowend=1663614000000, value=[]}, dealer_profile_indicator_30_mins={windowstart=1663612200000, windowend=1663614000000, value=false}, cdp_id=adefffa6-b9ae-40cf-b6de-429c59e4f2c5, imp_mm_research_page_30_min={windowstart=1663612200000, windowend=1663614000000, value=[]}}, valueSchema=null, timestamp=1663613512730, headers=ConnectHeaders(headers=[ConnectHeader(key=x-datadog-trace-id, value=7756784277501315046, schema=Schema{INT64}), ConnectHeader(key=x-datadog-parent-id, value=1083778841350792478, schema=Schema{INT64}), ConnectHeader(key=x-datadog-sampling-priority, value=1, schema=Schema{INT8})])}\",\n \"mdc\": {\n \"connector.context\": \"[dev-rtff-cdc-sink|task-1] \"\n },\n \"@timestamp\": \"2023-02-27T20:04:12.823Z\",\n \"file\": \"MongoProcessedSinkRecordData.java\",\n \"line_number\": \"109\",\n \"thread_name\": \"task-thread-dev-rtff-cdc-sink-1\",\n \"@version\": 1,\n \"logger_name\": \"com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData\",\n \"class\": \"com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData\"\n}\n", "text": "I have the following connect configuration sink connector configuration.But when it runs I get the following stack trace error.When I run the configuration without the key and value projection, it works fine. Has anyone had any success running this combination of processor chain with the transform?", "username": "Shawn_Nguyen" }, { "code": "", "text": "Hi! I was experiencing a very similar problem when configuring the processor chain with the transform.\nIn the end, the problem was on my “value.projection.list”.\nThe DefaultWriteModelStrategy, specifically the ReplaceOneDefaultStrategy for some reason needs the _id field to also be present on the value of the record. By adding “_id” to my value.projection.list my problem was solved.Hope this helps!", "username": "Luisa_Emme" } ]
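Based on that fix, the corresponding change to the configuration in the first post would presumably be just adding _id to the value projection list; everything else stays exactly as shown above:

    "value.projection.type": "AllowList",
    "value.projection.list": "_id,recommended_listings_prediction_current,shopper_engagement,cdp_id,last_modified_ts"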
Sink Connector with AllowedList and key Transform Failing on missing _id field
2023-02-27T20:23:38.648Z
Sink Connector with AllowedList and key Transform Failing on missing _id field
1,228
null
[ "time-series" ]
[ { "code": "", "text": "Hey Folks.I’m seeing a weird thing in Atlas with Online archive. This is a time series collection in question and originally we set 365 days for the age limit and then set deletion to 180 days. Now I’m seeing it keep 365+180 days of data per the time field in the collection instead of just 365 days.I know the deletion age says it’s in preview, but doesn’t seem to be working as I think it should. Now I’ve blanked out the deletion age and so far no change.Anyone know how to reset an archive somehow so that I only keep 365 days in the collection?Thanks,\nSteven", "username": "Steven_Tate" }, { "code": "", "text": "Hello @Steven_Tate ,I think what you are describing is actually working as expected. The way that Online Archive + Expiration works is that data expires from the cluster after the period of time that you have set in your rule based on the date field in the time series collection, data then stays in the archive for the period of time specified in the expiration rule based on when it was written to the archive (not based on the date field in the document).So if you had a bunch of data that should immediately archive once you set the rule, then that data should be archived as soon as possible, but it would then stay in the archive for 180 days from the day it was archived.Does that make sense?", "username": "Benjamin_Flast" }, { "code": "", "text": "I understand that that’s how is supposed to work. Makes sense to be in the cluster for the “archival age limit” per the time series date field AND THEN move to the archive when it hits that number of aged days.And then yes, it should remain in the archive for the deletion age limit and then get deleted after that point. This is NOT the way it is working.For now, I’m desperate to get this stuff archived so I have removed the limit to see if it would matter and lowered the archive to 180 days. Here’s a screenshot of my current serttings:\nimage1078×990 66.7 KB\nAnd here is the collection as of right now showing records from almost 2 years ago (well more than 180 days):\nimage1106×478 48.6 KB\nI’ve read all the blogs and all the docs. And it seems that it is simply not doing what it is supposed to be doing.Also, I have a 2 hour archiving window set and when it hits, it basically uses all resource of my M10 for the duration and then still does not seem to move anything to the archive.Any ideas? I’m at the point of creating a support ticket as I have to get this aged data out of my collection.Thanks Benjamin. Keep up the awesome work!Steven", "username": "Steven_Tate" }, { "code": "", "text": "Ahh thanks for clarifying @Steven_Tate and pardon the misunderstanding.Based on these details it would there is something inefficient happening during the archival job which is causing it to fail if there are no documents being moved at all.\nIt’s possible that the archival query is just extremely inefficient which is why you are running out of resources. You have an index index on the date field you’re using for archival right?Also, have you connected to the archive only and validated that no docs have been moved into it at all? It is possible it’s just running slowly and hasn’t gotten all of the old documents off yet.", "username": "Benjamin_Flast" }, { "code": "", "text": "Hey Benjamin,Thanks again. Yes there is an index on the created field which sits as the time field:\nimage1142×1364 70 KB\nAlso, there are 36 million records in the collection. 
So while that is a lot, it shouldn’t be breaking the bank, as a simple match and count of 180 days does use the index (and takes nearly a minute). Could certainly be an issue with my indexes:\nimage419×655 43.7 KB\nI guess technically, it’d be doing the opposite, which is querying anything more than 180 days old. That could certainly just be repeatedly timing out, perhaps?I have connected directly to the archive and seen that records are in there. But no records have been copied since I changed from 365 days (with 180 days to delete) to 180 days and never delete.How do you archive big data sets then if it doesn’t determine that the archival is efficient?Thanks again,\nSteven", "username": "Steven_Tate" }, { "code": "", "text": "There should not be any issue archiving large datasets as long as the indices allow for efficient archival jobs.I think it would be good to open a support ticket, the team should be able to help guide you on whether or not we are able to take advantage of indices for the queries and they will be able to escalate this to a bug if that’s not happening.", "username": "Benjamin_Flast" } ]
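One way to sanity-check whether that index can actually serve the archival criteria is to run the equivalent date-range query with explain and confirm it uses an index scan; a small mongosh sketch, assuming the archive rule filters on the created time field mentioned above (the collection name and cutoff are illustrative):

    // Documents older than the archival age limit, i.e. roughly what the archive job has to select.
    const cutoff = new Date(Date.now() - 180 * 24 * 60 * 60 * 1000)
    db.getCollection('readings')                      // hypothetical collection name
      .find({ created: { $lt: cutoff } })
      .explain('executionStats')                      // look for IXSCAN rather than COLLSCAN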
Online Archive Dates
2023-05-17T17:57:53.870Z
Online Archive Dates
876
null
[ "database-tools", "containers", "backup", "time-series" ]
[ { "code": "\tfinished restoring test.system.buckets.timeseries (0 documents, 0 failures)\n2023-05-08T23:01:00.472+0000\tFailed: test.system.buckets.timeseries: error restoring from system.buckets.timeseries.bson.gz: connection(enode2:27017[-4]) socket was unexpectedly closed: EOF\n2023-05-09 01:00:30 {\"t\":{\"$date\":\"2023-05-08T23:00:30.448+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"conn52\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Aborted).\\n\"}}\n", "text": "Hi, I use MongoDB 6.0.5 on Docker with a replica (3 Containers having 3 different instances of DB).\nRegularly dump with ‘mongodump --quiet --gzip’ command.Today, have created again the instances on Docker, for my 3 Containers, and wanted to restore DB from dumps. For most of bson files the restore with mongorestore went easy.\nBut for the biggest DB/collection, which is time-series it crashes.I am using it in a recommended / standard waymongorestore --host=node2 --port=27017 --username=jj --password=pp --authenticationDatabase=admin --nsInclude=my_db.my_collection --gzip system.buckets.timeseries.bson.gzwhere all: jj, pp, my_db, my_collection are checked, and correct.I have also tried to set different servers/primary, etc. but the result is the same.\nThe error is as:and node_2 is even exiting. So I have looked into logs, and it shows something likeAnyone can help on this? Maybe there is another way to restore what was dumped before?", "username": "Jakub_Polec" }, { "code": "2023-05-09 08:38:47 {\"t\":{\"$date\":\"2023-05-09T06:38:47.680+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23079, \"ctx\":\"conn29\",\"msg\":\"Invariant failure\",\"attr\":{\"expr\":\"!collectionName.startsWith(\\\"system.buckets.\\\")\",\"file\":\"src/mongo/db/auth/resource_pattern.h\",\"line\":133}}\n2023-05-09 08:38:47 {\"t\":{\"$date\":\"2023-05-09T06:38:47.680+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23080, \"ctx\":\"conn29\",\"msg\":\"\\n\\n***aborting after invariant() failure\\n\\n\"}\n2023-05-09 08:38:47 {\"t\":{\"$date\":\"2023-05-09T06:38:47.680+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"conn29\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Aborted).\\n\"}}\n2023-05-09 08:38:47 {\"t\":{\"$date\":\"2023-05-09T06:38:47.680+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23079, \"ctx\":\"conn30\",\"msg\":\"Invariant failure\",\"attr\":{\"expr\":\"!collectionName.startsWith(\\\"system.buckets.\\\")\",\"file\":\"src/mongo/db/auth/resource_pattern.h\",\"line\":133}}\n", "text": "more on the logs is here. It looks like it’s some issue with MongoDB.It’s strange as I just did mongodump as usual, not specifying any other names, etc.", "username": "Jakub_Polec" }, { "code": "", "text": "Hey @Jakub_Polec,Thanks for bringing this issue to our attention. I’ll let the concerned team know about this behavior.Regards.\nSatyam", "username": "Satyam" } ]
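One thing that may be worth trying (a hedged suggestion rather than a confirmed fix): point mongorestore at the dump directory instead of at the system.buckets file, so the time series collection is recreated from its own metadata rather than by writing into the internal bucket collection directly; the dump path below is a placeholder:

    mongorestore --host=node2 --port=27017 \
      --username=jj --password=pp --authenticationDatabase=admin \
      --gzip --nsInclude='my_db.my_collection' /path/to/dump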
Issue with mongorestore - system crash
2023-05-08T23:14:23.383Z
Issue with mongorestore - system crash
890
null
[ "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "DB_CONNECT = 'mongodb+srv://ID:[email protected]/?retryWrites=true&w=majority';\nconst dotenv = require(\"dotenv\").config();\nmongoose.set(\"strictQuery\", false);\n\nmongoose.connect(process.env.DB_CONNECT, {\n\n useUnifiedTopology: true,\n\n useNewUrlParser: true,\n\n}).then(console.log('connect sucess to mongodb'))\n\nbot.ticketTranscript = mongoose.model('transcripts',\n\n new mongoose.Schema({\n\n Channel : String,\n\n Content : Array\n\n })\n\n)\nthrow new MongoParseError('Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\"');\n\nMongoParseError: Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\"\n", "text": "Hello,\nI have an error that blocks me, have you ever encountered this error?File .env =>index.jsError =>Thanks you in advance", "username": "bill" }, { "code": "process.env.DB_CONNECT", "text": "Hello @bill, Welcome to the MongoDB community forum,Can you please make sure by consol print this variable process.env.DB_CONNECT has the correct connection string?", "username": "turivishal" }, { "code": "'mongodb+srv://ID:[email protected]/?retryWrites=true&w=majority';\n", "text": "Hi, thanks you for answer,console displayWith ’ ';It’s normal ?", "username": "bill" }, { "code": " useUnifiedTopology: true,useNewUrlParser: true", "text": "It looks good, check out the documentation, I think you are missing something debug your code step by step,\nhttps://mongoosejs.com/docs/connections.htmlOut of the question, If you are using mongoose latest version then don’t need to pass useUnifiedTopology: true, and useNewUrlParser: true in connection because by default it set true", "username": "turivishal" }, { "code": "", "text": "It’s normal ?it notremove the quotes and if it still does not work remove leading and trailing spacss", "username": "steevej" }, { "code": "", "text": "The trailing semicolon is probably erroneous too.", "username": "steevej" }, { "code": "strictQueryfalsemongoose.set('strictQuery', false);mongoose.set('strictQuery', true);", "text": "[MONGOOSE] DeprecationWarning: Mongoose: the strictQuery option will be switched\nback to false by default in Mongoose 7. Use mongoose.set('strictQuery', false); if you want to prepare for this change. Or use mongoose.set('strictQuery', true); to suppress this warning.Invalid scheme, expected connection string to start with “mongodb://” or “mongodb+srv://”both the error will be gone\nfirst use the\n// mongoose.set(‘strictQuery’, true) in top\nand remove the extra space in the link of mondodb", "username": "Sachin_Pandey" }, { "code": "", "text": "remove the ‘;’ from the last of .env file, and it will work", "username": "Tushar_Kumar2" }, { "code": "", "text": "why are you storing string value in env??? and also why ‘;’\nThis should be-DB_CONNECT = mongodb+srv://ID:[email protected]/?retryWrites=true&w=majority", "username": "Shadab_Ahmed" } ]
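Putting the replies together, a cleaned-up .env entry has no surrounding quotes, no trailing semicolon and no stray spaces, and the connect call then reads it as-is; a minimal sketch using the placeholders from the original post:

    # .env
    DB_CONNECT=mongodb+srv://ID:[email protected]/?retryWrites=true&w=majority

    // index.js
    require('dotenv').config();
    const mongoose = require('mongoose');
    mongoose.connect(process.env.DB_CONNECT)
      .then(() => console.log('connected to MongoDB'))
      .catch((err) => console.error(err));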
Error "Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://""
2022-12-14T08:41:29.707Z
Error &ldquo;Invalid scheme, expected connection string to start with &ldquo;mongodb://&rdquo; or &ldquo;mongodb+srv://&rdquo;&rdquo;
12,909
https://www.mongodb.com/…4_2_1024x512.png
[ "sharding", "time-series" ]
[ { "code": "sh.shardCollection(\n \"test.weather\",\n { \"metadata.sensorId\": 1 },\n {\n timeseries: {\n timeField: \"timestamp\",\n metaField: \"metadata\",\n granularity: \"hours\"\n }\n }\n)\n\ndb.weather.insertOne( {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-18T00:00:00.000Z\"),\n \"temp\": 12\n} )\n", "text": "Hi All,\nwith respect to this documentationMongodb version used: 5.0.17I am trying to shard the time series collection, but it is not getting replicated to other shard members.Now when I login to each shard, the time series collection is available on only one shard from where it was created. I am connecting to router to create the time series collection.\nThe above steps works fine for regular collection.Thanks in advance.", "username": "Yogesh_Sonawane1" }, { "code": "", "text": "A post was merged into an existing topic: Does updateZoneKeyRange works with Time Series Collection In Mongodb?", "username": "Kushagra_Kesav" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Sharding a timeseries collection
2023-05-23T12:17:59.408Z
Sharding a timeseries collection
914
null
[ "aggregation", "performance" ]
[ { "code": "db={\n \"foo\": [\n {\n \"fooid\": \"my_foo_1\",\n \"name\": \"name of foo1\",\n \"organizationId\": \"myOrganization\"\n },\n {\n \"fooid\": \"my_foo_2\",\n \"name\": \"name of foo2\",\n \"organizationId\": \"myOrganization\"\n },\n {\n \"fooid\": \"my_foo_3\",\n \"name\": \"name of foo3\",\n \"organizationId\": \"myOrganization\"\n }\n ],\n \"fooCombinations\": [\n {\n \"id\": \"combination1\",\n \"foos\": [\n \"my_foo_1\",\n \"my_foo_2\"\n ],\n \"organizationId\": \"myOrganization\"\n },\n {\n \"id\": \"combination2\",\n \"foos\": [\n \"my_foo_1\",\n \"my_foo_3\"\n ],\n \"organizationId\": \"myOrganization\"\n },\n {\n \"id\": \"combination3\",\n \"foos\": [\n \"my_foo_2\",\n \"my_foo_3\"\n ],\n \"organizationId\": \"myOrganization\"\n }\n ]\n}\nfoosfooCombinationsfooidfoodb.fooCombinations.aggregate([\n {\n $lookup: {\n from: \"foo\",\n let: {\n foos: \"$foos\",\n organizationId: \"$organizationId\",\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n {\n $eq: [\n \"$organizationId\",\n \"$$organizationId\"\n ]\n },\n {\n $in: [\n \"$fooid\",\n \"$$foos\"\n ],\n },\n ],\n },\n },\n }\n ],\n as: \"fooAdditionalInfo\",\n },\n }\n])\nfooCombinationsfooid", "text": "I have a collection which refers in string array to another collection. Much simplified the data looks like:You see, foos in fooCombinations collection refers to fooid in the foo collection.Now my $lookup looks like:( Put that also on Mongo playground: Mongo playground )Now the problem is performance. The fooCombinations collection has in reality about 2000 documents for that organization and will grow. Even right now, that query takes about 20 seconds - all the parts without the $lookup take just 400ms. So, the $lookup is defintely the problem here.\nThere is an index on fooid but I know that $in inside $lookup cannot use indexes. I also know that certain improvements have been made on newer versions of Mongo, but right now I’m stuck with version 4.So, what can I do to drastically decrease the execution time? (Small improvements won’t help. Reducing it from 20s to 10s won’t help. It need to be under 1 second.)My line of thinking is: Is it wise to first create an array of all foo ids of all fooCombinations in one organization and then query “foo” just once with that big array and then afterwards “distribute” to foo results back to fooCombinations?\nIf that makes sense performance-wise, how would I do that?", "username": "Christian_Schwaderer" }, { "code": "{ fooid : { foo: \"my_foo_1\" , organization: \"myOrganization\" } ,\n name : \"name of foo 1\"\n} \n\"foos\": [\n \"my_foo_1\",\n \"my_foo_2\"\n ]\n\"foos\": [\n { foo: \"my_foo_1\" , organization: \"myOrganization\" }\n { foo: \"my_foo_2\" , organization: \"myOrganization\" }\n ]\n", "text": "I’m stuck with version 4.It is really bad because the new flavour of $lookup are more flexible. Not using indexes it definitively a performance killer.Another issue is if a given foo (like my_foo_1) appears in multiple combinations you potentially (potentially since there might exist an optimization in the server to avoid this) in the looking it up multiple time, without an index. Since it looks like we only have part of the use case it is hard to make recommendation. For example, do you have some kind of $match before the $lookup. One way to reduce the multiple $lookup for the same foo is to use $facet where $group on foo ids of the foos array.Are the foo ids unique for a given organization? If they are you might forgo the $eq on the organization. 
You could always make them unique by making the real id the concatenation of the original foo id and organization. Or use an object for the _id, like the foo collection shape shown above. Then, before the $lookup, you would use $addFields to $map the foos arrays from the plain string form to the shape of the ids in the foo collection (both forms are shown in the snippet above). You will then be able to use localField:foos and foreignField: fooid in your $lookup, thus being able to use the unique index on fooid.All the above might be more work than upgrading to a more recent version, which you will have to do anyway within a year.", "username": "steevej" }, { "code": "", "text": "Many thanks for your answer!The foo ids are unique within an organization, but not across organizations. That’s why I think I need to compare them as well.Yes, I know that it’s problematic that I cannot show my complete use case, but that would be too complicated and contains code I cannot show.But many thanks anyway. That already helped quite a good deal!", "username": "Christian_Schwaderer" } ]
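A rough sketch of that suggestion, assuming the foo collection's fooid has already been rewritten into the embedded-document form shown in the snippet above (note that this is a whole-value equality match, so the field order inside the embedded documents must be identical on both sides):

    db.fooCombinations.aggregate([
      { $addFields: {
          fooKeys: {
            $map: {
              input: "$foos",
              as: "f",
              in: { foo: "$$f", organization: "$organizationId" }
            }
          }
      } },
      { $lookup: {
          from: "foo",
          localField: "fooKeys",     // array of { foo, organization } documents
          foreignField: "fooid",     // the reshaped, uniquely indexed key in "foo"
          as: "fooAdditionalInfo"
      } }
    ])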
Performance problem for lookup with $in
2023-05-18T03:21:57.188Z
Performance problem for lookup with $in
835
null
[ "react-native" ]
[ { "code": "export class People extends Realm.Object<People> {\n _id!: string;\n name: string;\n email?: string;\n tags?: Realm.List<Tags>;\n org_id!: string;\n\n static schema: Realm.ObjectSchema = {\n primaryKey: '_id',\n name: 'people',\n properties: {\n _id: {\n type: 'string',\n default: () => new Realm.BSON.ObjectID().toHexString(),\n },\n name: 'string',\n email: 'string?',\n createdAt: { type: 'date', default: () => new Date() },\n tags: 'tags[]',\n org_id: 'string',\n },\n };\n}\nexport class Tags extends Realm.Object<Tags> {\n _id!: string;\n name: string;\n org_id: string;\n\n static schema: Realm.ObjectSchema = {\n primaryKey: '_id',\n name: 'tags',\n properties: {\n _id: {\n type: 'string',\n },\n name: 'string',\n is_active: { type: 'bool', default: true },\n org_id: 'string',\n },\n };\n}\n\nconst person = realm.write(() => {\n return new People(realm, {\n email,\n name,\n org_id: org._id,\n tags: [{ name: '2021', _id: `${org._id}-2021` }],\n });\n });\n", "text": "I have a person that has a related collection of “tags”. Each person can have multiple tags.I’ve found the relationship CRUD section of the docs, but it skips the part that says how to actually write the code to connect two documents.Here is a simplified version of my People schema.And here is my Tags modelHere is my code for adding a new person with one tag.I get the error \"Exception in HostFunction: Attempting to create an object of type ‘tags’ with an existing primary key value ‘‘id123-2021’’This is happening because realm is trying to create this same tag that already exists in the DB. Instead of creating it I need to connect the two. How do I do that?Cross posted here as well", "username": "Joshua_Bechard" }, { "code": "", "text": "Hey @Joshua_Bechard - Welcome to the community.I’ve found the relationship CRUD section of the docs, but it skips the part that says how to actually write the code to connect two documents.Wondering if the examples on the following forum post help : Mobile Bytes #6: What are Realm Relationships?Just to clarify as well, could you advise which SDK version you are using?Regards,\nJason", "username": "Jason_Tran" }, { "code": " const person = realm.write(() => {\n const realmPerson = new People(realm, {\n email,\n name,\n org_id: org._id,\n });\n \n person.tags = [{ _id: `${org._id}-2021` }];\n return realmPerson;\n });\n", "text": "The linked post has Java examples, but no JavaScript examples. I think without a concrete JavaScript example it will be difficult to understand how these documents are linked together. I did find a workaround and posted my answer on the stackoverflow article.Here is that answer.\nRealm objects can be mutated inside the realm.write function directly. 
So in this case the way I was able to connect tags with people is like this.And since the _id is the primary key field Realm just knows these objects are linked and now if I later call person.tags, I actually get the whole array of tags with all the object properties, not just the _id field.", "username": "Joshua_Bechard" }, { "code": "realm.write(() => {\n const tmpId = `org-${_id}-2021`;\n let tag = realm.objectByPrimaryKey<Tag>(Tag, tmpId);\n if (!tag) {\n tag = new Tag(realm, { name: '2021', _id: tmpId });\n } \n return people = new People(realm, {\n email,\n name,\n org_id: org._id,\n tags: [tag]\n });\n});\ntag", "text": "Hello @Joshua_Bechard ,Thank you for being part of our community, and helping fellow members with your suggestions I will try to add JavaScript examples to the article as well.For your above question, the team recommends avoiding constructors when you have a list of Realm objects.The above code will check if the tag has already been created by looking it up by primary key. If it hasn’t, it will create it.I hope the provided information is helpful.Cheers, \nHenna", "username": "henna.s" }, { "code": "", "text": "Henna,Thank you!This is great. It would be really helpful for these kinds of examples to exist in the docs as well. It isn’t clear from the docs that the function realm.objectByPrimaryKey even exists. At least I couldn’t find it anywhere in the React Native docs.I did some digging, did you mean objectForPrimaryKey?\nhttps://www.mongodb.com/docs/realm-sdks/js/latest/Realm.html#objectForPrimaryKeyThanks again", "username": "Joshua_Bechard" }, { "code": "objectForPrimaryKey", "text": "Great Catch @Joshua_Bechard, this is indeed objectForPrimaryKey.Cheers,\nhenna", "username": "henna.s" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Issue with realm and related collection
2023-05-08T21:34:31.788Z
Issue with realm and related collection
1,257
https://www.mongodb.com/…4ac88544979e.png
[ "java" ]
[ { "code": "", "text": "I have changed the MongoDB driver's log configuration to DEBUG, but it doesn’t help. I have tried other options for setting the logs too, but none of them seem to work.\nI am constantly getting logs like this at INFO level:\n[LOGGG] INFO [MaintenanceTimer-1-thread-1][AnonymousUser] [] org.mongodb.driver.connection: Opened connection [connectionId{localValue:12, serverValue:1403}] to localhost:27017Does anyone know the valid way of changing the log level for opening connections at the moment?", "username": "Arsen_Ilchyniak" }, { "code": "", "text": "I’m not sure why that doesn’t work. What logging system are you using and is it plugged into SLF4J (which is what the driver uses for all logging)?", "username": "Jeffrey_Yemin" }, { "code": "logback2023-05-19 15:21:07,317 [LOGGG] DEBUG [SpringApplicationShutdownHook][AnonymousUser] [] org.mongodb.driver.connection: Closing connection connectionId{localValue:10, serverValue:1673} ", "text": "I am using the logback system for logging.\nMoreover, closing the connection works fine and its level is set to DEBUG.\n2023-05-19 15:21:07,317 [LOGGG] DEBUG [SpringApplicationShutdownHook][AnonymousUser] [] org.mongodb.driver.connection: Closing connection connectionId{localValue:10, serverValue:1673} ", "username": "Arsen_Ilchyniak" } ]
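If the goal is simply to stop seeing the "Opened connection" lines (which the driver emits at INFO), a per-logger override in logback.xml should do it; the logger name is taken from the log line above, and the rest of the logback configuration is assumed to stay unchanged:

    <configuration>
      <!-- ... existing appenders and root logger ... -->

      <!-- Only show WARN and above for connection open/close messages -->
      <logger name="org.mongodb.driver.connection" level="WARN"/>

      <!-- or, to quieten the whole driver instead: -->
      <!-- <logger name="org.mongodb.driver" level="WARN"/> -->
    </configuration>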
How to change logs level for opening connection in mongoDB?
2023-05-19T12:04:01.798Z
How to change logs level for opening connection in mongoDB?
577
null
[ "sharding" ]
[ { "code": "", "text": "Test Env >>I shutdown 3 Secondary members result in that majority for votes is satisfied but majority of readwrite concern of data-bearing member is not satisfied.After shutdown 3 Secondaries,\nAt every 5 mins, I could see a message from mongos log like the following\n\"…[LogicalSessionCacheRefresh] Failed to refresh session cache: WriteConcernFailed: waiting for replication timed out; Error details: { wtimeout: true} at testreplicaset \"So I expected the activeSessionCount of mongos would not be initiated but keep increasing\nbut it was initiated every 5mins.Is this normal?", "username": "peterkim" }, { "code": "\"…[LogicalSessionCacheRefresh] Failed to refresh session cache: \nWriteConcernFailed: waiting for replication timed out; \nError details: { wtimeout: true} at testreplicaset \"\nconfig.system.sessionactiveSessionCount", "text": "Hello @peterkim,Welcome to the MongoDB Community forums I shut down 3 Secondary members resulting in the majority for votes is satisfied but majority of read-write concern of the data-bearing members is not satisfied.In my reproduction, I simulated a replica set deployment consisting of 1 primary node, 5 secondary nodes (data-bearing members), and 1 arbiter - I observed that when I shut down 3 out of the 5 secondary nodes, the write operations remained unaffected and I’m ensuring the majority of data-bearing members with 2 secondaries (data-bearing members), 1 primary and 1 arbiter. However, when I shut down one more secondary node, there was no remaining primary node, resulting in the inability to perform write operations. To read more, please refer to the MongoDB documentation for Write Concern for Replica Sets.This error is part of the regular logical session routine. By default, MongoDB periodically (every 5 minutes) persists the content of the cached information to the config.system.session collection. The error indicates that there was no quorum to satisfy the required write concern for the job responsible for maintaining the logical session mechanism.So I expected the activeSessionCount of mongos would not be initiated but keep increasing\nbut it was initiated every 5mins.\nIs this normal?Could you please give an example of the scenario that you have in mind?In addition, I recommend updating your MongoDB version to the latest release. It’s worth noting that MongoDB 4.2 is no longer supported, and upgrading to a newer version can provide improved stability, bug fixes, and additional features. You can refer to the EOL Support Policies for more information on MongoDB versions and their support status.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "activeSessionCount", "text": "It’s your answer >>\n“In my reproduction, I simulated a replica set deployment consisting of 1 primary node, 5 secondary nodes (data-bearing members), and 1 arbiter - I observed that when I shut down 3 out of the 5 secondary nodes, the write operations remained unaffected and I’m ensuring the majority of data-bearing members with 2 secondaries (data-bearing members), 1 primary and 1 arbiter. However, when I shut down one more secondary node, there was no remaining primary node, resulting in the inability to perform write operations.”My Answer>>\nYes, when majority of votes is satisfied , Primary exists and cluster works normally, but when majority of data-bearing member is not satisfied , write transaction with majority write concern will fail. 
So I expected that the write for LogicalSessionCacheRefresh with majority write concern would fail with lack of data-bearing members. As I expected, I saw an error “Failed to refresh session cache:\nWriteConcernFailed: waiting for replication timed out” in the mongos error log with lack of data-bearing members ( 3 secondaries shutting down), but activeSessionCount of mongos was initiated successfully every 5 mins. It should be kept increasing because refresh session cache was failed with above Error. Is my description too poor to make you understand this situation? ", "username": "peterkim" } ]
Failed to refresh session cache but activeSessionCount of mongos is initiated every 5 mins ( logicalSessionRefreshMillis)
2023-05-17T06:45:30.123Z
Failed to refresh session cache but activeSessionCount of mongos is initiated every 5 mins ( logicalSessionRefreshMillis)
972
null
[ "node-js", "mongoose-odm", "connecting", "mongodb-shell", "configuration" ]
[ { "code": "conn.on(\"disconnected\", function () {\n setTimeout(() => {\n logger.info(\"MongoDB disconnected, reconnecting...\");\n if (mongoose.connection.readyState === 0) {\n global.dbConn = { status: \"error\", error: \"down\" };\n mongoose.connect(dbURI, options);\n global.dbConn = { status: \"ok\" };\n }\n }, 1000);\n});\n\"Error receiving request from client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":141,\"codeName\":\"SSLHandshakeFailed\",\"errmsg\":\"The server is configured to only allow SSL connections\"\nmongosh", "text": "Our setup is using Heroku as our web servers with static IPs that are whitelisted on our cluster, which is on AWS EC2 (Centos 8). The connection works and the application works (most of the time). However, the problem is that we have lots of disconnections we see in our logs (we have this in our connection code - mongoose):On the Mongo side logs, we can see the errors:Versions:\nmongoose 6.2.0\nmongo 5.0.6\nNodeJS 18I can run mongosh commands etc… as needed, but I’m wondering if anyone has an idea or direction I can investigate? I thought it might relate to Cloudflare being in the middle, but the Mongo URIs aren’t listed in there.Any help will be appreciated.", "username": "Moshe_Azaria" }, { "code": "\"Error receiving request from client. Ending connection from remote\",\n\"attr\":{\"error\":{\"code\":141,\"codeName\":\"SSLHandshakeFailed\",\n\"errmsg\":\"The server is configured to only allow SSL connections\"\nssltruemongoose.connect()", "text": "Hello @Moshe_Azaria,Welcome back to the MongoDB Community forums Our setup is using Heroku as our web servers with static IPs that are whitelisted on our cluster, which is on AWS EC2 (Centos 8)Based on the shared details, it appears that you have more than one server in your setup. Could you please confirm it?The connection works and the application works (most of the time).To better understand the issue, could you provide more details about the frequency or percentage of occurrences when the disconnections happen?The error message you shared suggests an SSL handshake failure, indicating that some of the web servers are not using TLS. To ensure that your web servers are using TLS, make sure to set the ssl option to true in the mongoose.connect() function or in your connection string. Additionally, it would be helpful to know if any recent changes were made, such as modifications in code or any differences in server configurations.If you suspect any possible factors contributing to the issue, please let us know.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "tlsssl=true", "text": "Hi @Kushagra_Kesav and thank you for helping out!Yes, our cluster comprised of 3 servers, the primary is where we see almost all of the disconnections from (99%).Regarding frequency: We can see disconnections happen about ~12 times an hour on average.tls option is set to true. However, I will try to deploy with the ssl=true.There weren’t any changes that caused this, it’s simply a new environment which is in the EU wheres we have another one in the US with the exact same configurations.", "username": "Moshe_Azaria" } ]
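For reference, TLS can be forced either in the URI or in the connect options; a minimal sketch (the hosts are placeholders, all other options unchanged from the reconnect snippet above):

    // via the connection string
    const dbURI = 'mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0&tls=true';

    // or via the mongoose/driver options
    mongoose.connect(dbURI, { tls: true });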
Web servers disconnect frequently from Mongo cluster using Mongoose
2023-05-18T09:09:56.473Z
Web servers disconnect frequently from Mongo cluster using Mongoose
976
null
[ "aggregation", "queries" ]
[ { "code": "{alphabets1:\n [{a:value11},\n {b:value12},\n {c:value13},...]\n},\n{alphabets2:\n [{a:value21},\n {b:value22},\n {c:value23},...]\n}\n{alphabets1:value11+value12+value13...},\n{alphabets2:value21+value22+value23...}\n{alphabets1:[{fieldName:a,value:value11},\n {fieldName:b,value:value12},\n {fieldName:c,value:value13},...]},\n{alphabets2:[{fieldName:a,value:value21},\n {fieldName:b,value:value22},\n {fieldName:c,value:value23},...]},\n$group", "text": "I have a collection that’s structure is somewhat similar to the following:-Initial:I need to get the following information from the above collection:Required:To get the above structure above, I have to modify the documents structure in the array to the following:and use $group aggregate.And for this, I have to access the field name, but I couldn’t find a way to do it in mongodb and I was wondering if there are any other ways to get the Required format from the Initial document structure.", "username": "Vamsi_Kumar_Naidu_Pallapothula" }, { "code": "{alphabets1:value11+value12+value13...},\n{alphabets2:value21+value22+value23...}\n", "text": "Hi @Vamsi_Kumar_Naidu_Pallapothula and welcome to MongoDB community forums!!Could you elaborate on the requirement as this is not very clear from the above structure shared.\nHowever, from my understanding, are you trying to add all the values for all the filenames .Can you help me understand the use case in a more better way withRegards\nAasawari", "username": "Aasawari" }, { "code": "$objectToArray//overs: [ {over: runs scored in that over}...]\n{\ninnings: 1,\novers: [{\"0\":5},{\"1\":9},{\"2\":8},...,{\"19\":13}]\n}\n{ innings:1, runs:5+9+8+...+13 }\n$objectToArray$unwind{\n{ innings:1, overs:{k:\"0\",v:5}},\n{ innings:1, overs:{k:\"1\",v:9}},\n...\n{ innings:1, overs:{k:\"19\",v:13}},\n}\n$group$sumdb.dummy.insertOne({\n \"innings\":1,\n \"overs\": [\n {\"0\":3}, {\"1\":4}, {\"2\":9}, {\"3\":3}, {\"4\":9}, {\"5\":19}, {\"6\":1}, {\"7\":11}, {\"8\":21}, {\"9\":11},\n {\"10\":3}, {\"11\":4}, {\"12\":9}, {\"13\":3}, {\"14\":4}, {\"15\":9}, {\"16\":11}, {\"17\":11}, {\"18\":22}, {\"19\":1}\n ]\n})\n\ndb.dummy.find()\n\ndb.dummy.aggregate([\n {$unwind: \"$overs\"},\n {$project: {\n innings:\"$innings\",\n overs: {$objectToArray: \"$overs\"}\n }},\n {$unwind:\"$overs\"},\n {$group: {\n _id: {innings:\"$innings\"},\n runs: {\n $sum: \"$overs.v\"\n }\n }}\n])\n", "text": "Hi @Aasawari, thanks for responding to my query. I have solved this issue using $objectToArray operator. For the sake of completion I am adding the issue along with solution that I thought of below. As I am a beginner in mongoDB, I’m not sure if this is an optimal solution, so please feel free to add any other approaches that you might think of :I am working on a cricket related data. Following is the kind of structure that I have:and I wanted to know the total number of runs scored in the 20 overs, like shown belowusing $objectToArray and $unwind I modified the initial structure to something that looks like thisthen I used $group and $sum operators to get the total runs in an innings.This process can be replicated using the code provided below:", "username": "Vamsi_Kumar_Naidu_Pallapothula" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using a field name as another field value
2023-05-18T06:34:07.413Z
Using a field name as another field value
514
https://www.mongodb.com/…d_2_1024x374.png
[ "queries" ]
[ { "code": "", "text": "Hi Team ,I have millions of records in collection . I have created a scheduled trigger which will run every minute . On trigger function i am retrieving 1000 records then updating it . Below is the query i have usedconst filteredData = await auditEvents.find({ ID: { $exists: true }, DATA: { $exists: true } }).limit(1000).toArray();However in profiler i saw it is returning only 101 records which is default batch size . so i updated the query as below:const filteredData = await auditEvents.find({ ID: { $exists: true }, DATA: { $exists: true } }).limit(1000).batchSize(1000).toArray();But it is giving me error : Error occurred while executing : ‘batchSize’ is not a functionSame query works locally (vscode) . Is there any option to enable batch size in mongodb atlas ?\nScreenshot 2023-05-19 at 11.57.24 AM3055×1117 373 KB\nCould you please help here .", "username": "Yogini_Manikkule" }, { "code": "collection.find()batchSize()batchSize", "text": "Hi @Yogini_Manikkule - Welcome to the community.const filteredData = await auditEvents.find({ ID: { $exists: true }, DATA: { $exists: true } }).limit(1000).batchSize(1000).toArray();But it is giving me error : Error occurred while executing : ‘batchSize’ is not a functionAs per the Return Value section of the MongoDB API Reference documentation:The collection.find() method returns a cursor object that points to any documents that match the specified query.You can manipulate and access documents in the query result set with the cursor methods mentioned in the documentation linked above but it does not yet have batchSize() as a method to work off. However, in saying so, can you detail the use for batchSize in this scenario? You may wish to create a feedback request for this as well in which others can vote for.However in profiler i saw it is returning only 101 records which is default batch size . so i updated the query as below:Could it be that the full query itself is returning 101 documents that match? i.e. There are no more than 101 documents that match.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
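Until a batchSize() cursor method is available in Functions, one way to make each scheduled run work through a predictable slice of 1000 documents is to add a deterministic sort before the limit; a rough sketch (the data source and collection names are placeholders, and the paging approach itself is an assumption about the intended workflow):

    // inside the scheduled trigger function
    const auditEvents = context.services
      .get("mongodb-atlas")                 // name of the linked data source (placeholder)
      .db("mydb")
      .collection("auditEvents");

    const filteredData = await auditEvents
      .find({ ID: { $exists: true }, DATA: { $exists: true } })
      .sort({ _id: 1 })                     // deterministic order across runs
      .limit(1000)
      .toArray();

    // ...update the returned documents so the next run picks up the next 1000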
Unable to execute batchSize function in mongodb app services functions
2023-05-19T06:36:35.545Z
Unable to execute batchSize function in mongodb app services functions
516
null
[ "queries" ]
[ { "code": "", "text": "Hi Team,\nThere is a requirement to generate DDL statements for the users and roles that already exist in the environment.\nI would appreciate your help if there are any custom scripts that I can run from the mongo shell to get the DDL of the existing users and roles.Regards\nPrince", "username": "Prince_Das" }, { "code": "db.getUsers()viewUserdb.getUsers()", "text": "Hi @Prince_Das,Welcome back to the MongoDB Community forums requirement of generating DDL statements for users and roleCan you please clarify the above statement or provide additional details about your specific requirements?However, if I understand your question correctly, you are looking for a way to obtain a list of all the database users with their roles. You can achieve this by using the db.getUsers() command in the MongoDB Shell. This command retrieves the necessary details you need. Make sure you have the appropriate privileges, specifically the viewUser action, to access the information of other users in their respective databases. You can find more details about this command and its usage in the MongoDB documentation on db.getUsers().I hope this information helps! If you have any further questions or need further assistance, please feel free to reach out.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
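As far as I know there is no built-in DDL export for users and roles, but a small mongosh loop over the usersInfo and rolesInfo commands can print something close; a rough sketch (passwords cannot be read back, so they have to be re-set by hand, and the account running this needs permission to list databases and view users/roles):

    db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
      var cur = db.getSiblingDB(d.name);

      // createUser statements for every user defined on this database
      cur.runCommand({ usersInfo: 1 }).users.forEach(function (u) {
        print('db.getSiblingDB(' + JSON.stringify(d.name) + ').createUser(' +
          JSON.stringify({ user: u.user, pwd: '<SET-A-PASSWORD>', roles: u.roles }) + ');');
      });

      // createRole statements for every custom role defined on this database
      cur.runCommand({ rolesInfo: 1, showPrivileges: true, showBuiltinRoles: false }).roles.forEach(function (r) {
        print('db.getSiblingDB(' + JSON.stringify(d.name) + ').createRole(' +
          JSON.stringify({ role: r.role, privileges: r.privileges, roles: r.roles }) + ');');
      });
    });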
Extract DDL for users & roles
2023-05-16T12:51:03.833Z
Extract DDL for users &amp; roles
997
null
[ "python" ]
[ { "code": "", "text": "How can I delete 3:4 for example?\n{id:12345,dict : {1:2,3:4}}", "username": "DoubleK_N_A" }, { "code": "", "text": "filter = {\"_id\": 12345}\nupdate = {'$unset': {'key_to_remove': 1}}  # Replace 'key_to_remove' with the actual key you want to remove\ncollection.update_many(filter, update)", "username": "sagar_sadhu" } ]
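Spelling that approach out for the exact document in the question (the field names id and dict come from the post; dot notation addresses the key inside the embedded document, and the value given to $unset is ignored):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")    # placeholder connection string
    collection = client["mydb"]["mycollection"]          # placeholder database/collection names

    # Remove the key "3" (and its value 4) from the embedded "dict" document.
    collection.update_one({"id": 12345}, {"$unset": {"dict.3": ""}})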
Delete item from dict PyMongo
2021-04-19T12:04:36.013Z
Delete item from dict PyMongo
2,522
null
[]
[ { "code": "", "text": "Hi everyone. I started with MongoDB a few days ago and noticed it installed on the local C drive.I have my OS booting from that disk, so I wanted to make Mongo store its data on the D drive. When looking up ways to do it, I found out about using --dbpath, as well as modifying the .config file in the “bin” folder. However, after doing both of those things, I noticed that every time I made a change to a database, the updates were made in the C: folder where Mongo is installed, rather than the folder I indicated with both --dbpath and the config file (I checked that it did this from the times the files were modified in there).I definitely think I’m doing something wrong and I didn’t find any posts online that apply to the latest version of MongoDB (6.0). Can anyone give a step-by-step guide to do this that someone who just started on Mongo can follow?Appreciate it!", "username": "JuanmaEiroa_N_A" }, { "code": "", "text": "I’m not sure why that happens. Everything related to mongodb should be in that same folder. No idea.But maybe you can work around it by creating a dump of your data (it should be small since it is only a few days of data) and restoring it to a local server with a different path.", "username": "Kobe_W" } ]
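For reference, on Windows the data directory is controlled by storage.dbPath in the mongod.cfg that the running MongoDB service actually loads; if writes keep landing in the old C: location, the usual cause is that the service points at a different config file or was not restarted after the change. A sketch with an example path:

    # mongod.cfg (the copy referenced by the MongoDB Windows service)
    storage:
      dbPath: D:\MongoDB\data      # example path; the folder must already exist

    # then restart the service from an elevated prompt, e.g.:
    #   net stop MongoDB
    #   net start MongoDB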
Is there a step to step guide to change the place where my MongoDB data is stored?
2023-05-22T21:42:52.062Z
Is there a step to step guide to change the place where my MongoDB data is stored?
753
null
[ "sharding", "indexes" ]
[ { "code": "file_idfile_idversion(file_id, version)(version)", "text": "There is a collection with a sharding key on the hashed index file_id . In addition, commonly used queries in the business require both file_id and version as query conditions. Given that I already have the sharding key, do I still need to create a compound index (file_id, version) ? Or is it sufficient to create an index on (version) only?", "username": "Liu_Wenzhe" }, { "code": "(file_id, version)(version)", "text": "Given that I already have the sharding key, do I still need to create a compound index (file_id, version) ?No, if your shard key on file id is selective enough. (e.g. unique). Yes otherwise.Or is it sufficient to create an index on (version) only?No. this won’t work. Check manual page for indexes for more info.", "username": "Kobe_W" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
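If the compound index does turn out to be necessary, it is a single statement; the collection name below is a placeholder, and the hashed shard-key index itself stays untouched:

    db.getCollection('files').createIndex({ file_id: 1, version: 1 })   // supports queries filtering on file_id and version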
Compound index suggestion for sharded collection
2023-05-22T08:34:57.673Z
Compound index suggestion for sharded collection
585
null
[]
[ { "code": "", "text": "We’re coming from the relational SQL world and are curious about how to shape nested data.Assume we have a collection, with each document with multiple properties and some of those properties are arrays of lots and lots of other documents… How do we create a query that will give us a clean subset of those properties and subsets (potentially ordered/filtered) of the arrays of other documents?Any thoughts/ideas? Newbie here.", "username": "Christopher_Eaton" }, { "code": "productsdb.products.insertMany([{\n \"name\": \"Product A\",\n \"price\": 100,\n \"reviews\": [\n {\n \"rating\": 5,\n \"comment\": \"This product is great!\"\n },\n {\n \"rating\": 6,\n \"comment\": \"Best product in market!\"\n },\n {\n \"rating\": 4,\n \"comment\": \"I like this product.\"\n }\n ]\n},\n{\n \"name\": \"Product B\",\n \"price\": 200,\n \"reviews\": [\n {\n \"rating\": 5,\n \"comment\": \"This product is Best!\"\n },\n {\n \"rating\": 3,\n \"comment\": \"This product is okay.\"\n },\n {\n \"rating\": 4,\n \"comment\": \"I don't like this product.\"\n }\n ]\n},\n{\n \"name\": \"Product C\",\n \"price\": 200,\n \"reviews\": [\n {\n \"rating\": 2,\n \"comment\": \"This product is Bad!\"\n },\n {\n \"rating\": 3,\n \"comment\": \"This product is not okay.\"\n },\n {\n \"rating\": 4,\n \"comment\": \"I like this product.\"\n }\n ]\n}])\ndb.products.aggregate([\n { \"$unwind\": \"$reviews\" },\n { \"$group\": {\n \"_id\": \"$name\",\n \"average_rating\": { \"$avg\": \"$reviews.rating\" },\n \"reviews\": { \"$push\": \"$reviews.comment\" }\n }},\n { \"$match\": {\n \"average_rating\": { \"$gte\": 4 }\n }},\n { \"$sort\": {\n \"average_rating\": -1\n }}\n])\n{\n _id: 'Product A',\n average_rating: 5,\n reviews: [\n 'This product is great!',\n 'Best product in market!',\n 'I like this product.'\n ]\n}\n{\n _id: 'Product B',\n average_rating: 4,\n reviews: [\n 'This product is Best!',\n 'This product is okay.',\n \"I don't like this product.\"\n ]\n}\n$avgfield>=how to design a schema as per your application requirements", "text": "Hello @Christopher_Eaton ,Welcome to The MongoDB Community Forums! How do we create a query that will give us a clean subset of those properties and subsets (potentially ordered/filtered) of the arrays of other documents?You can take advantage of The MongoDB Aggregation Framework which consists of one or more stages that process documents. Please refer below links to learn more about Aggregation OperationsFor example, I have added below documents to my collection named productsNow, suppose I want to get a list of all products withWe can use below query to achieve above mentioned requirementsOutput will beFor the above example, I used the following aggregation stages:Notes: If you run the test by yourself, try running each stage one by one and check the output, that will help you understand the working of each stage with results. Also, this query involves a very few but most common operations and there are many more things that can be done, please refer Operators. All this computation/manipulation is done at database end so no need to worry about the data manipulation at Applicaiton layer.Lastly, MongoDB is document database meaning that it will match documents like an SQL query matches a row. Each document may consist of many Key-value pairs with value as any data type. This data type can be an array which comes under Embedding schema design. There are several advantages and a few limitations to any schema design. 
I would recommend referring to the Schema design links below, which will help you understand how to design a schema as per your application requirements.Have you ever wondered, \"How do I model a MongoDB database schema for my application?\" This post answers all your questions!To learn more, please visit MongoDB University which provides free MongoDB courses.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Using document (nested object/array) storage vs flat collections/SQL-relational... Approach?
2023-05-12T21:40:20.049Z
Using document (nested object/array) storage vs flat collections/SQL-relational&hellip; Approach?
1,012
null
[ "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "> nodemon index.js\n\n[nodemon] 2.0.22\n[nodemon] to restart at any time, enter `rs`\n[nodemon] watching path(s): *.*\n[nodemon] watching extensions: js,mjs,json\n[nodemon] starting `node index.js`\nserver is fucked running sucessfulloy on port 8000\nError while connecting with the database MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://www.mongodb.com/docs/atlas/security-whitelist/\n at _handleConnectionErrors (/home/verma/medium1/server/node_modules/mongoose/lib/connection.js:792:11)\n at NativeConnection.openUri (/home/verma/medium1/server/node_modules/mongoose/lib/connection.js:767:11)\n at processTicksAndRejections (internal/process/task_queues.js:95:5)\n at async Connection (file:///home/verma/medium1/server/database/db.js:12:9) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'ac-x7kmxhp-shard-00-00.mtsikj6.mongodb.net:27017' => [ServerDescription],\n 'ac-x7kmxhp-shard-00-01.mtsikj6.mongodb.net:27017' => [ServerDescription],\n 'ac-x7kmxhp-shard-00-02.mtsikj6.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-brwrks-shard-0',\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined\n}\n", "text": "hey guys i am getting this error from last day please help", "username": "sanjay_web" }, { "code": "", "text": "Hi @sanjay_web,Please refer to the following post : Server timed out when connect with mongoDB after upgrading in atlasPerform the network tests mentioned there and advise the results as well as any other relevant information regarding the cluster.hey guys i am getting this error from last day please helpAdditionally, when you state “from the last day”, do you mean that the connection previously worked in the past?Regards,\nJason", "username": "Jason_Tran" } ]
Mongodb connection problem
2023-05-22T17:56:08.925Z
Mongodb connection problem
569
null
[]
[ { "code": "│ Error: error reading MongoDB Cluster (xxxx-xxx-xxx): GET https://cloud.mongodb.com/api/atlas/v1.0/groups/6467103204b5e028bfeaba66/clusters/xxxx-xxx-xxx: 403 (request \"ORG_REQUIRES_ACCESS_LIST\") This organization requires access through an access list of ip ranges.\n", "text": "I am trying to configure access to a MongoDB db on Atlas (MongoDB ver6.0) from my AWS organization.I am facing now the following error while running my terraform code:Please advise.\nThanks", "username": "Haytham_Mostafa" }, { "code": "This organization requires access through an access list of ip ranges.\nRequire IP Access List for the Atlas Administration API", "text": "Hi @Haytham_Mostafa,If the Require IP Access List for the Atlas Administration API setting toggled on for your organization then you’ll need to ensure the client’s IP that is performing the call is on that list.You may also wish to go over the list of Use API Resources that Require an Access List as well.Regards,\nJason", "username": "Jason_Tran" } ]
403 (request "ORG_REQUIRES_ACCESS_LIST") This organization requires access through an access list of ip ranges
2023-05-22T09:18:22.248Z
403 (request “ORG_REQUIRES_ACCESS_LIST”) This organization requires access through an access list of ip ranges
508
https://www.mongodb.com/…4_2_1024x512.png
[ "replication", "sharding" ]
[ { "code": "vagrant@mongoos:~$ sudo mongos --configdb rs2oscfg/rscfg3.mydomain:27019 --bind_ip_all --port 27017\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.857Z\"},\"s\":\"W\", \"c\":\"SHARDING\", \"id\":24132, \"ctx\":\"-\",\"msg\":\"Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production.\"}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.858+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":13,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.859+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.862+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.862+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.862+00:00\"},\"s\":\"I\", \"c\":\"HEALTH\", \"id\":5936503, \"ctx\":\"main\",\"msg\":\"Fault manager changed state \",\"attr\":{\"state\":\"StartupCheck\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.862+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"main\",\"msg\":\"Access control is not enabled for the database. 
Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.862+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22138, \"ctx\":\"main\",\"msg\":\"You are running this process as the root user, which is not recommended\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.862+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"mongosMain\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.18\",\"gitVersion\":\"796abe56bfdbca6968ff570311bf72d93632825b\",\"openSSLVersion\":\"OpenSSL 1.1.1 11 Sep 2018\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu1804\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.862+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"mongosMain\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"18.04\"}}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.862+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"mongosMain\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"net\":{\"bindIp\":\"*\",\"port\":27017},\"sharding\":{\"configDB\":\"rs2oscfg/rscfg3.mydomain:27019\"}}}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.863+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4603701, \"ctx\":\"mongosMain\",\"msg\":\"Starting Replica Set Monitor\",\"attr\":{\"protocol\":\"streamable\",\"uri\":\"rs2oscfg/rscfg3.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.863+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333223, \"ctx\":\"mongosMain\",\"msg\":\"RSM now monitoring replica set\",\"attr\":{\"replicaSet\":\"rs2oscfg\",\"nReplicaSetMembers\":1}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.863+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333226, \"ctx\":\"mongosMain\",\"msg\":\"RSM host was added to the topology\",\"attr\":{\"replicaSet\":\"rs2oscfg\",\"host\":\"rscfg3.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.864+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333218, \"ctx\":\"mongosMain\",\"msg\":\"Rescheduling the next replica set monitoring request\",\"attr\":{\"replicaSet\":\"rs2oscfg\",\"host\":\"rscfg3.mydomain:27019\",\"delayMillis\":0}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.864+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"rscfg3.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.865+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23729, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"ServerPingMonitor is now monitoring host\",\"attr\":{\"host\":\"rscfg3.mydomain:27019\",\"replicaSet\":\"rs2oscfg\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.866+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4333213, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM Topology Change\",\"attr\":{\"replicaSet\":\"rs2oscfg\",\"newTopologyDescription\":\"{ id: \\\"f713611c-5c62-4d86-9450-16ee7035c4f5\\\", topologyType: \\\"ReplicaSetWithPrimary\\\", servers: { rscfg3.mydomain:27019: { address: \\\"rscfg3.mydomain:27019\\\", topologyVersion: { processId: ObjectId('646bab2ff147879aeea9bb3f'), counter: 6 }, roundTripTime: 710, lastWriteDate: new Date(1684781573000), opTime: { ts: Timestamp(1684781573, 1), t: 8 }, type: \\\"RSPrimary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"rscfg3.mydomain:27019\\\", setName: \\\"rs2oscfg\\\", setVersion: 7, electionId: 
ObjectId('7fffffff0000000000000008'), primary: \\\"rscfg3.mydomain:27019\\\", lastUpdateTime: new Date(1684781573865), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"rscfg1.mydomain:27019\\\", 1: \\\"rscfg2.mydomain:27019\\\", 2: \\\"rscfg3.mydomain:27019\\\" }, arbiters: {}, passives: {} }, rscfg1.mydomain:27019: { address: \\\"rscfg1.mydomain:27019\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, rscfg2.mydomain:27019: { address: \\\"rscfg2.mydomain:27019\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} } }, logicalSessionTimeoutMinutes: 30, setName: \\\"rs2oscfg\\\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId('7fffffff0000000000000008'), setVersion: 7 } }\",\"previousTopologyDescription\":\"{ id: \\\"3119503c-f814-497f-885d-6987220d4a74\\\", topologyType: \\\"Unknown\\\", servers: { rscfg3.mydomain:27019: { address: \\\"rscfg3.mydomain:27019\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} } }, compatible: true }\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.866+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":471693, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Updating the shard registry with confirmed replica set\",\"attr\":{\"connectionString\":\"rs2oscfg/rscfg1.mydomain:27019,rscfg2.mydomain:27019,rscfg3.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.866+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333226, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM host was added to the topology\",\"attr\":{\"replicaSet\":\"rs2oscfg\",\"host\":\"rscfg1.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.866+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333226, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM host was added to the topology\",\"attr\":{\"replicaSet\":\"rs2oscfg\",\"host\":\"rscfg2.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.866+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"rscfg1.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.866+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"rscfg2.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.866+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22846, \"ctx\":\"Sharding-Fixed-0\",\"msg\":\"Updating sharding state with confirmed replica set\",\"attr\":{\"connectionString\":\"rs2oscfg/rscfg1.mydomain:27019,rscfg2.mydomain:27019,rscfg3.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.869+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23729, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"ServerPingMonitor is now monitoring host\",\"attr\":{\"host\":\"rscfg2.mydomain:27019\",\"replicaSet\":\"rs2oscfg\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.869+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4333213, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM Topology Change\",\"attr\":{\"replicaSet\":\"rs2oscfg\",\"newTopologyDescription\":\"{ id: \\\"fd536199-4633-40f5-b3a8-424b28a93899\\\", topologyType: \\\"ReplicaSetWithPrimary\\\", servers: { rscfg3.mydomain:27019: { address: \\\"rscfg3.mydomain:27019\\\", 
topologyVersion: { processId: ObjectId('646bab2ff147879aeea9bb3f'), counter: 6 }, roundTripTime: 891, lastWriteDate: new Date(1684781573000), opTime: { ts: Timestamp(1684781573, 1), t: 8 }, type: \\\"RSPrimary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"rscfg3.mydomain:27019\\\", setName: \\\"rs2oscfg\\\", setVersion: 7, electionId: ObjectId('7fffffff0000000000000008'), primary: \\\"rscfg3.mydomain:27019\\\", lastUpdateTime: new Date(1684781573867), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"rscfg1.mydomain:27019\\\", 1: \\\"rscfg2.mydomain:27019\\\", 2: \\\"rscfg3.mydomain:27019\\\" }, arbiters: {}, passives: {} }, rscfg1.mydomain:27019: { address: \\\"rscfg1.mydomain:27019\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, rscfg2.mydomain:27019: { address: \\\"rscfg2.mydomain:27019\\\", topologyVersion: { processId: ObjectId('646baaf9493bf439b50d8a5c'), counter: 60 }, roundTripTime: 837, lastWriteDate: new Date(1684781573000), opTime: { ts: Timestamp(1684781573, 1), t: 8 }, type: \\\"RSSecondary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"rscfg2.mydomain:27019\\\", setName: \\\"rs2oscfg\\\", setVersion: 7, primary: \\\"rscfg3.mydomain:27019\\\", lastUpdateTime: new Date(1684781573869), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"rscfg1.mydomain:27019\\\", 1: \\\"rscfg2.mydomain:27019\\\", 2: \\\"rscfg3.mydomain:27019\\\" }, arbiters: {}, passives: {} } }, logicalSessionTimeoutMinutes: 30, setName: \\\"rs2oscfg\\\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId('7fffffff0000000000000008'), setVersion: 7 } }\",\"previousTopologyDescription\":\"{ id: \\\"093f41d2-6385-4007-9d50-dd3f3d26fc2d\\\", topologyType: \\\"ReplicaSetWithPrimary\\\", servers: { rscfg3.mydomain:27019: { address: \\\"rscfg3.mydomain:27019\\\", topologyVersion: { processId: ObjectId('646bab2ff147879aeea9bb3f'), counter: 6 }, roundTripTime: 891, lastWriteDate: new Date(1684781573000), opTime: { ts: Timestamp(1684781573, 1), t: 8 }, type: \\\"RSPrimary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"rscfg3.mydomain:27019\\\", setName: \\\"rs2oscfg\\\", setVersion: 7, electionId: ObjectId('7fffffff0000000000000008'), primary: \\\"rscfg3.mydomain:27019\\\", lastUpdateTime: new Date(1684781573867), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"rscfg1.mydomain:27019\\\", 1: \\\"rscfg2.mydomain:27019\\\", 2: \\\"rscfg3.mydomain:27019\\\" }, arbiters: {}, passives: {} }, rscfg1.mydomain:27019: { address: \\\"rscfg1.mydomain:27019\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, rscfg2.mydomain:27019: { address: \\\"rscfg2.mydomain:27019\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} } }, logicalSessionTimeoutMinutes: 30, setName: \\\"rs2oscfg\\\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId('7fffffff0000000000000008'), setVersion: 7 } }\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.869+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":471693, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Updating the shard registry with confirmed replica set\",\"attr\":{\"connectionString\":\"rs2oscfg/rscfg1.mydomain:27019,rscfg2.mydomain:27019,rscfg3.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.869+00:00\"},\"s\":\"I\", 
\"c\":\"SHARDING\", \"id\":22846, \"ctx\":\"UpdateReplicaSetOnConfigServer\",\"msg\":\"Updating sharding state with confirmed replica set\",\"attr\":{\"connectionString\":\"rs2oscfg/rscfg1.mydomain:27019,rscfg2.mydomain:27019,rscfg3.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.870+00:00\"},\"s\":\"W\", \"c\":\"SHARDING\", \"id\":23834, \"ctx\":\"mongosMain\",\"msg\":\"Error initializing sharding state, sleeping for 2 seconds and retrying\",\"attr\":{\"error\":{\"code\":13,\"codeName\":\"Unauthorized\",\"errmsg\":\"Error loading clusterID :: caused by :: command find requires authentication\"}}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.871+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4620201, \"ctx\":\"UpdateReplicaSetOnConfigServer\",\"msg\":\"Error running reload of ShardRegistry for RSM update\",\"attr\":{\"error\":\"Unauthorized: could not get updated shard list from config server :: caused by :: command find requires authentication\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.871+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4620201, \"ctx\":\"UpdateReplicaSetOnConfigServer\",\"msg\":\"Error running reload of ShardRegistry for RSM update\",\"attr\":{\"error\":\"Unauthorized: could not get updated shard list from config server :: caused by :: command find requires authentication\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.871+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22727, \"ctx\":\"ShardRegistryUpdater\",\"msg\":\"Error running periodic reload of shard registry\",\"attr\":{\"error\":\"Unauthorized: could not get updated shard list from config server :: caused by :: command find requires authentication\",\"shardRegistryReloadIntervalSeconds\":30}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:53.872+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"Unauthorized: command find requires authentication\",\"nextWakeupMillis\":200}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:54.084+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"Unauthorized: command find requires authentication\",\"nextWakeupMillis\":400}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:54.301+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23729, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"ServerPingMonitor is now monitoring host\",\"attr\":{\"host\":\"rscfg1.mydomain:27019\",\"replicaSet\":\"rs2oscfg\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:54.301+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4333213, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM Topology Change\",\"attr\":{\"replicaSet\":\"rs2oscfg\",\"newTopologyDescription\":\"{ id: \\\"ad569a17-67a4-4d79-afaf-eaa70760af04\\\", topologyType: \\\"ReplicaSetWithPrimary\\\", servers: { rscfg3.mydomain:27019: { address: \\\"rscfg3.mydomain:27019\\\", topologyVersion: { processId: ObjectId('646bab2ff147879aeea9bb3f'), counter: 6 }, roundTripTime: 1081, lastWriteDate: new Date(1684781573000), opTime: { ts: Timestamp(1684781573, 1), t: 8 }, type: \\\"RSPrimary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"rscfg3.mydomain:27019\\\", setName: \\\"rs2oscfg\\\", setVersion: 7, electionId: ObjectId('7fffffff0000000000000008'), primary: \\\"rscfg3.mydomain:27019\\\", lastUpdateTime: new Date(1684781573870), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"rscfg1.mydomain:27019\\\", 1: \\\"rscfg2.mydomain:27019\\\", 2: 
\\\"rscfg3.mydomain:27019\\\" }, arbiters: {}, passives: {} }, rscfg1.mydomain:27019: { address: \\\"rscfg1.mydomain:27019\\\", topologyVersion: { processId: ObjectId('646bae1364405e05c1cc2d7b'), counter: 3 }, roundTripTime: 428436, lastWriteDate: new Date(1684777784000), opTime: { ts: Timestamp(1684777784, 1), t: 7 }, type: \\\"RSSecondary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"rscfg1.mydomain:27019\\\", setName: \\\"rs2oscfg\\\", setVersion: 7, lastUpdateTime: new Date(1684781574301), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"rscfg1.mydomain:27019\\\", 1: \\\"rscfg2.mydomain:27019\\\", 2: \\\"rscfg3.mydomain:27019\\\" }, arbiters: {}, passives: {} }, rscfg2.mydomain:27019: { address: \\\"rscfg2.mydomain:27019\\\", topologyVersion: { processId: ObjectId('646baaf9493bf439b50d8a5c'), counter: 60 }, roundTripTime: 1105, lastWriteDate: new Date(1684781573000), opTime: { ts: Timestamp(1684781573, 1), t: 8 }, type: \\\"RSSecondary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"rscfg2.mydomain:27019\\\", setName: \\\"rs2oscfg\\\", setVersion: 7, primary: \\\"rscfg3.mydomain:27019\\\", lastUpdateTime: new Date(1684781573872), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"rscfg1.mydomain:27019\\\", 1: \\\"rscfg2.mydomain:27019\\\", 2: \\\"rscfg3.mydomain:27019\\\" }, arbiters: {}, passives: {} } }, logicalSessionTimeoutMinutes: 30, setName: \\\"rs2oscfg\\\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId('7fffffff0000000000000008'), setVersion: 7 } }\",\"previousTopologyDescription\":\"{ id: \\\"04d990a2-b9ac-4eb6-81fe-508c4b851b7b\\\", topologyType: \\\"ReplicaSetWithPrimary\\\", servers: { rscfg3.mydomain:27019: { address: \\\"rscfg3.mydomain:27019\\\", topologyVersion: { processId: ObjectId('646bab2ff147879aeea9bb3f'), counter: 6 }, roundTripTime: 1081, lastWriteDate: new Date(1684781573000), opTime: { ts: Timestamp(1684781573, 1), t: 8 }, type: \\\"RSPrimary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"rscfg3.mydomain:27019\\\", setName: \\\"rs2oscfg\\\", setVersion: 7, electionId: ObjectId('7fffffff0000000000000008'), primary: \\\"rscfg3.mydomain:27019\\\", lastUpdateTime: new Date(1684781573870), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"rscfg1.mydomain:27019\\\", 1: \\\"rscfg2.mydomain:27019\\\", 2: \\\"rscfg3.mydomain:27019\\\" }, arbiters: {}, passives: {} }, rscfg1.mydomain:27019: { address: \\\"rscfg1.mydomain:27019\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, rscfg2.mydomain:27019: { address: \\\"rscfg2.mydomain:27019\\\", topologyVersion: { processId: ObjectId('646baaf9493bf439b50d8a5c'), counter: 60 }, roundTripTime: 1105, lastWriteDate: new Date(1684781573000), opTime: { ts: Timestamp(1684781573, 1), t: 8 }, type: \\\"RSSecondary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"rscfg2.mydomain:27019\\\", setName: \\\"rs2oscfg\\\", setVersion: 7, primary: \\\"rscfg3.mydomain:27019\\\", lastUpdateTime: new Date(1684781573872), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"rscfg1.mydomain:27019\\\", 1: \\\"rscfg2.mydomain:27019\\\", 2: \\\"rscfg3.mydomain:27019\\\" }, arbiters: {}, passives: {} } }, logicalSessionTimeoutMinutes: 30, setName: \\\"rs2oscfg\\\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId('7fffffff0000000000000008'), setVersion: 7 } }\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:54.302+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":471693, 
\"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Updating the shard registry with confirmed replica set\",\"attr\":{\"connectionString\":\"rs2oscfg/rscfg1.mydomain:27019,rscfg2.mydomain:27019,rscfg3.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:54.302+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22846, \"ctx\":\"UpdateReplicaSetOnConfigServer\",\"msg\":\"Updating sharding state with confirmed replica set\",\"attr\":{\"connectionString\":\"rs2oscfg/rscfg1.mydomain:27019,rscfg2.mydomain:27019,rscfg3.mydomain:27019\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:54.303+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4620201, \"ctx\":\"UpdateReplicaSetOnConfigServer\",\"msg\":\"Error running reload of ShardRegistry for RSM update\",\"attr\":{\"error\":\"Unauthorized: could not get updated shard list from config server :: caused by :: command find requires authentication\"}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:54.489+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"Unauthorized: command find requires authentication\",\"nextWakeupMillis\":600}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:55.092+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"Unauthorized: command find requires authentication\",\"nextWakeupMillis\":800}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:55.877+00:00\"},\"s\":\"W\", \"c\":\"SHARDING\", \"id\":23834, \"ctx\":\"mongosMain\",\"msg\":\"Error initializing sharding state, sleeping for 2 seconds and retrying\",\"attr\":{\"error\":{\"code\":13,\"codeName\":\"Unauthorized\",\"errmsg\":\"Error loading clusterID :: caused by :: command find requires authentication\"}}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:55.896+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"Unauthorized: command find requires authentication\",\"nextWakeupMillis\":1000}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:56.907+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"Unauthorized: command find requires authentication\",\"nextWakeupMillis\":1200}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:57.878+00:00\"},\"s\":\"W\", \"c\":\"SHARDING\", \"id\":23834, \"ctx\":\"mongosMain\",\"msg\":\"Error initializing sharding state, sleeping for 2 seconds and retrying\",\"attr\":{\"error\":{\"code\":13,\"codeName\":\"Unauthorized\",\"errmsg\":\"Error loading clusterID :: caused by :: command find requires authentication\"}}}\n{\"t\":{\"$date\":\"2023-05-22T18:52:58.109+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"Unauthorized: command find requires authentication\",\"nextWakeupMillis\":1400}}\n\n", "text": "try create new shard from zero by this guidecreate 1 data replica set OK\ncreate 1 config replica set OK\ncreate mongos vm and get errors", "username": "_N_A37" }, { "code": "", "text": "disable security at config replicaset helps", "username": "_N_A37" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongos dont started
2023-05-22T18:54:44.261Z
Mongos dont started
765
null
[]
[ { "code": "", "text": "Hello Angela,I am running into the same issue when my allowed IP access is 0.0.0.0/0 and i have the timeout, with the exact same setup.\nI have checked the credentials in my AWS lambda and everything is setup correctly, do you have any idea where else it can come from ?\nIt works perfectly fine with Password authentication but times out when using the IAM, even by trying to get the server info (so it’s not connected), but i have no connection errors.Note : the in app chat session support told me it’s out of scope and i can’t currently subscribe to the developer support plan unfortunately that’s why i am asking again here… Maybe it’s the issue as the above that you have solved privately.Thanks in advance for any help,\nTom", "username": "Tommy_Deshairs" }, { "code": "", "text": "Hi @Tommy_Deshairs,Welcome back to the MongoDB Community forums Have you considered configuring a static IP address for the lambda? You can place it in a private subnet and then use a NAT gateway. This way, you can add the IP to your MongoDB whitelist. Alternatively, you can also keep your IP access list as 0.0.0.0/0, and it will still work.Also, can you please share what steps you followed? Meanwhile, you can also check out the Manage Connections with AWS Lambda documentation.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "client = MongoClient(os.environ.get(\"DB_URL\"))\ndb = client.get_database(os.environ.get(\"DB_NAME\"))\nDB_URLuri = 'mongodb://' + quote_plus(os.environ.get(\"AWS_ACCESS_KEY_ID\")) + ':' + quote_plus(os.environ.get(\"AWS_SECRET_ACCESS_KEY\")) + os.environ.get(\"DB_LAST_SEGMENT_URL\") + quote_plus(os.environ.get(\"AWS_SESSION_TOKEN\"))\nclient = MongoClient(uri)\nurimongodb://AWSACCESSKEYIDVALUE:dCizkdoafqo2Kdxm3d9NHh34Jbcj0daXK%[email protected]/dbname?retryWrites=true&authMechanismProperties=AWS_SESSION_TOKEN:IQoJb3JpZ2luX2VjEAwaCWV1LXdlc3QtMyJHMEUCIFmUMGLbXeeBzZJ%2BJ6W%2F%2BP8HXyEFtWGMpF%2FyzDq9lD0UAiEA4IHvHFTfbmrKyTeOBGMfmoRIa%2FuzuQK8WPt7pqbfBEq7AIIABGgwwOTQ1Nzg0NDI4ODIiDNWsS5JNhNvazG%2BdLirJAmUoBYN8jChs2RZpAnFS0kzy7pq0QTXTR4JOJRG9Rf3LE%2B4iPbd9903xL4Ye9D0vzLxMuOdWW4YSIEmSZclM0HyfG8WucC95%2Bw0BeJYfjBkziK%2BHrqu84nJyw0d07gM3%2FSgBHMxbksJ04vKd31RwQugpceDvg8SKJ8mdP1h4sfnCPqNO7WKZYpS1tN8%2FzaSicTmbap70vGbfLNaa5RPWooQkCcXEdgPvWEJmxXhrIbZAhm9jBTymmduKprDzHCy%2BkOoxFtrP7nsNJncGDdoJHtJgbVykktj%2By8ZKMGy3JaBJ%2FzxWS1%2FJTmqQBtQdfXIBYYGhyDpCQCfV8VK0b1%2FEXBY%2FaPZas6ZovF4cKZkFb3YrWPi0URF5X2yx6GsOS6NRphwLeJ%2FpIMf2DdGuathlc4PS%2FuNbZMjhuekX%2F66Tg%2FO0ikYksD%2BQKfnmtDf2ZnYAEA%2FmaQKwaICDyTCnPWEsAyJ92kdFBQJMYJcs9q76WlcQw96AS12E8Brlg%3D\n/AWS_SECRET_ACCESS_KEY", "text": "Hi @Kushagra_Kesav thanks for your reply !Regarding the static IP, isn’t better to use the “Peering” ? Or is it the same thing ?\nI should simply connect my Lambda to a VPC and do the peering i guess, my Lambda should never be exposed on the internet, i have an API Gateway for requests.\nI will indeed remove eventually the 0.0.0.0/0, it is currently for testing to be sure my setup is correct and once it’s validated i will add network security.In the meantime, here are the steps i have made so far :Which is working fine with the DB_URL value using the password string.Note : i have edited manually the values to give fake IDs, but so you see the structure of the output. 
the / are escaped in the AWS_SECRET_ACCESS_KEY as requested by pymongoThanks again for your help,\nTom", "username": "Tommy_Deshairs" }, { "code": "pymongo[aws]", "text": "Hello @Tommy_Deshairs,Thanks for sharing the detailed steps.Can you share the full output of the error message you got after encountering the issue? In addition, can you confirm that you have the required dependencies, pymongo[aws] according to our Authentication Mechanism documentation?Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "pymongo[aws]pymongo[srv]authMechanism=MONGODB-AWSuri = \"mongodb://example.com/mydatabase?authMechanism=MONGODB-AWS&retryWrites=true\"[INFO]\t2023-04-25T18:14:32.183Z\td20d74ef-a284-4102-b4bf-05f6a84bc7e5\tUri after parse : mongodb://myproject.80ebt.mongodb.net/?authMechanism=MONGODB-AWS&retryWrites=true and client : Database(MongoClient(host=['myproject.80ebt.mongodb.net:27017'], document_class=dict, tz_aware=False, connect=True, authmechanism='MONGODB-AWS', retrywrites=True), 'mydatabase')\n2023-04-25T18:14:37.190Z d20d74ef-a284-4102-b4bf-05f6a84bc7e5 Task timed out after 5.01 seconds\nuri = \"mongodb://example.com/mydatabase?authMechanism=MONGODB-AWS\"retryWrites=trueuri = \"mongodb://example.com/?authMechanism=MONGODB-AWS\"[INFO]\t2023-04-25T18:15:17.319Z\t783af7f1-6238-4b05-96d5-2e2798888d56\tUri after parse : mongodb://myproject.80ebt.mongodb.net/?authMechanism=MONGODB-AWS and client : Database(MongoClient(host=['myproject.80ebt.mongodb.net:27017'], document_class=dict, tz_aware=False, connect=True, authmechanism='MONGODB-AWS'), 'mydatabase')\n2023-04-25T18:15:22.326Z 783af7f1-6238-4b05-96d5-2e2798888d56 Task timed out after 5.01 seconds\npymongo[aws]==4.3.3", "text": "@Kushagra_Kesav thanks for your information (the link) and specifying that i should use pymongo[aws], indeed i was still using pymongo[srv].However this still times out… But i did read in the link you provided this :\nCapture d’écran 2023-04-25 à 19.55.01821×369 53.6 KB\nAnd i notice 2 things compared to my URI string :I did the following tests and all failed (i have attached some logs, not all of them) :All these tests were made using pymongo[aws]==4.3.3.Really appreciate your help on this,\nTom", "username": "Tommy_Deshairs" }, { "code": "3 secondsLambda function -> Configuration -> General configuration -> Edit Timeout\n", "text": "Hello @Tommy_Deshairs,I will suggest a couple of workarounds for you to try and see if they work:Make sure you have the proper ARN given under AWS IAM in Atlas as shown below:\natlas-aws-iam1011×1009 158 KB\nSecondly, could you try increasing the timeout for the lambda function? 
By default, the value is set to 3 seconds.You can do so by going to the AWS Management Console and navigating to the below option:\naws-console1158×340 47.5 KB\nAlso, please refer to Configuring Lambda function options - AWS Lambda to read more.I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hello @Kushagra_Kesav thanks for the following up.I have triple checked the ARN and also granted specific priviledges (dbAdmin) instead of my custom role (which was allowing insert / remove / update / find) to see if this was a permission issue but no…Also, i have setup to 10 seconds the time out of my lambda and this is still in time out unfortunately I am attaching here the link of my related SO question if someone has any other ideas.In the meantime, i will simply delete this DB user and create a new one from scratch to see if there is something wrong with this one.Best regards,\nTom", "username": "Tommy_Deshairs" }, { "code": "80ebt.mongodb.net/dbname?retryWrites=true&authMechanismProperties=AWS_SESSION_TOKEN:IQoJb3JpZ2luX2VjEAwaCWV1LXdlc3QtMyJHMEUCIFmUMGLbXeeBzZJ%2BJ6W%2F%2BP8HXyEFtWGMpF%2FyzDq9lD0UAiEA4IHvHFTfbmrKyTeOBGMfmoRIa%2FuzuQK8WPt7pqbfBEq7AIIABGgwwOTQ1Nzg0NDI4ODIiDNWsS5JN%2F", "text": "Hi @Tommy_Deshairs,80ebt.mongodb.net/dbname?retryWrites=true&authMechanismProperties=AWS_SESSION_TOKEN:IQoJb3JpZ2luX2VjEAwaCWV1LXdlc3QtMyJHMEUCIFmUMGLbXeeBzZJ%2BJ6W%2F%2BP8HXyEFtWGMpF%2FyzDq9lD0UAiEA4IHvHFTfbmrKyTeOBGMfmoRIa%2FuzuQK8WPt7pqbfBEq7AIIABGgwwOTQ1Nzg0NDI4ODIiDNWsS5JNIf the suggested steps are not working, then I think there might be an issue with parsing the environment variable into your final URI. For example, I noticed a few %2F within the final URI that you shared above.Could you try embedding the variable directly and see if that resolves the issue?Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "+srvmongodb://example.com/?authMechanism=MONGODB-AWSmongodb+srv://example.com/?authMechanism=MONGODB-AWSmongodb://...", "text": "Hi @Kushagra_Kesav ,\nIt was a silly mistake.\nI have added the +srv field to this url :\nmongodb://example.com/?authMechanism=MONGODB-AWSWhich gives a working solution :\nmongodb+srv://example.com/?authMechanism=MONGODB-AWSActually not that silly because the documentation states it’s mongodb://... Thank you very much for your time and effort trying to help me out, really appreciated.\nBest regards,\nTom", "username": "Tommy_Deshairs" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
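Summarising the working setup as a minimal sketch — the cluster hostname and database name are placeholders, the `pymongo[aws]` extra must be installed, and inside Lambda the access key, secret and session token are picked up from the environment automatically, so nothing needs to be embedded in the URI:

```python
from pymongo import MongoClient

# mongodb+srv (not mongodb://) was the missing piece in this thread.
uri = "mongodb+srv://cluster0.example.mongodb.net/?authMechanism=MONGODB-AWS&retryWrites=true"

client = MongoClient(uri, serverSelectionTimeoutMS=5000)
db = client["mydatabase"]                 # placeholder database name
print(client.server_info()["version"])    # forces a round trip to verify IAM auth works
```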
Task timed out after 5.01 seconds - MongoDB Atlas AWS connection issue
2023-04-23T13:09:54.303Z
Task timed out after 5.01 seconds - MongoDB Atlas AWS connection issue
886
null
[ "swift" ]
[ { "code": "CurrentStateSubject", "text": "Hello there.\nI searched through the documents but I couldn’t find an answer to my question there.\nContext: I have a class that has a function that subscribes to collection changes on a specific realm. The result is transformed and stored in a CurrentStateSubject so I can subscribe to that and get all updated objects from the Realm whenever something changes.\nEverything’s working fine.\nNow I want to test that code. I use an in-memory Realm to test that functionality. The in-memory realm is also receiving the object but the subscription is not working/firing.Question: Does the in-memory Realm allow subscriptions and does it actually send updates?If not: How am I suppose to test that? Creating a realm and deleting it from the file system seems a bit strange to do in a test.Thanks ", "username": "Frank_Zielinski" }, { "code": "", "text": "Ok, I found out that I was using `.assign(to: on:) wrong // wasn’t working as expected. I also need to refresh the realm to get an update. Now I’m trying to solve how to test the background-threading on this published stream ", "username": "Frank_Zielinski" }, { "code": "", "text": "An In-Memory Realm will behave almost exactly like one on disk - including read, writes and observes.In what context are you ‘refreshing’ the realm - that should rarely, if ever, be needed.Do you have a code example of what you’re asking about?Jay", "username": "Jay" } ]
In-memory Realms
2023-05-21T16:30:31.113Z
In-memory Realms
631
null
[]
[ { "code": "", "text": "Hi! I hope everything goes well.I want to know if it is possible to set permissions for a user, not a database user, but an organization user, in order when he logs in with his mail and password on the MongoAtlas page, he only can read some databases and only can read and write in some others (All database are in the same cluster)Many thanks for considering my request.\nVíctor", "username": "Victor_Merino" }, { "code": "Organization Read OnlyProject Read Only", "text": "Hello @Victor_Merino,I want to know if it is possible to set permissions for a user, not a database user, but an organization user, in order when he logs in with his mail and password on the MongoAtlas page, he only can read some databases and only can read and write in some others (All database are in the same cluster)Let me provide you with some insights regarding permissions for organization users in MongoDB Atlas.In general, MongoDB Atlas offers organization-based roles and one of them is “Organization Read Only” which grants read-only access to the entire organization, including all projects. These roles can be assigned to specific user emails. You can find more details about the roles in the Atlas User Roles documentation.To achieve the desired configuration where a user can read some databases and read/write in others within the same cluster, you can utilize custom roles and define appropriate permissions for each database or collection.It’s worth noting that accessing the database outside the Atlas UI dashboard requires creating a database user. This can also be done by generating an API key related to the Data API or using other authentication methods. Otherwise by default, even the lowest access role, “Project Read Only” grants metadata view-only access to the project control panel, including activity, operational data, users, and user roles. However, access to the Data Explorer and retrieval of process and audit logs is restricted.While the specific configuration you mentioned might not be possible at this time, MongoDB Atlas offers various roles and customization options to manage user access. If you believe that the Atlas User Data Access Permissions need to be configured on a more granular level, you can upvote the related feedback on MongoDB Feedback Engine to express your interest in this feature.I hope this provides clarity on the available options. If you have any further questions, please feel free to ask.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi, @Kushagra_Kesav. Many thanks for the response. I already upvote in the link that you provide.Regards,\nVíctor.", "username": "Victor_Merino" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
User organizations and access to DB
2023-05-18T21:38:25.860Z
User organizations and access to DB
431
null
[]
[ { "code": "User1partition1UserNpartition1", "text": "Hello there! I have a question which is supposedly targeted to the devs as I haven’t found any mention of it in the docs, and it’s pretty implementation specific.Every App Services user is assigned an ID which acts as a partition in Realm. So the question is are these IDs reused when a user is deleted from App Services or every user gets a unique ID every time?For example, User1 had partition1 and at some point the user has been deleted. Is it possible for UserN to be assigned partition1 once again?", "username": "Gleb_Ignatev" }, { "code": "ObjectIdObjectIdUserNpartition1partition1", "text": "Hi @Gleb_Ignatev,are these IDs reused when a user is deleted from App ServicesNo, they can’t be: user ID is an ObjectId, that are meant to be unique. In fact, part of the structure of an ObjectId is the timestamp when the object is created, so each and every user will always be unique.Is it possible for UserN to be assigned partition1 once again?That depends on the logic you apply between the user and the partition: if the partition value is just the ID of the user itself, it won’t be possible to bind the two, and when the user is gone, the partition will be inaccessible, but if partition1 refers to some other value to be bound to a specific user, then yes, it can be re-assigned.", "username": "Paolo_Manna" }, { "code": "", "text": "I see, it’s been really on the surface. Thank you for the quick response!", "username": "Gleb_Ignatev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Are Realm partition IDs recycled if a user is deleted?
2023-05-22T11:55:00.915Z
Are Realm partition IDs recycled if a user is deleted?
388
null
[ "queries", "replication", "compass", "performance" ]
[ { "code": "", "text": "We are using a mongoDB replicaset and facing the issue about slow query. Actually if i run below query on terminal or Compass, query is very fast. But if the query comes from application, so slow.\nI set profiling level to 1 and slowms=100 and i saw the query and query was so slow because missing index.\nAnd i create a compound index, so our query got so fast like 150ms.\nWhen I fill in the relevant blanks from the application and search, it keeps me waiting for at least 10 minutes.db.historydata.find({“topic.sistemID”:2, “topic.header.MesajALan”: 177, “topic.header.MesajId”:1071, “timestamp”:{$lte: 1684334263}}).sort({-1}).limit(1)Then I came across something like this in the logs.“replanReason”:“cached plan was less efficient than expected: expected trial execution to take 2 works but it took at least 20 works”,“cursorExhausted”:true,do you have any idea?\nThanks", "username": "Kadir_USTUN" }, { "code": "", "text": "", "username": "tapiocaPENGUIN" }, { "code": "Queries cannot use indexes for the $bitsAllSet portion of a query, although the other portions of a query can use indexes, if applicable.", "text": "Hi @tapiocaPENGUINi analyzed it wrong i’m sorry. Query is using $bitsAllSet. When i search on the internet, i found this commentQueries cannot use indexes for the $bitsAllSet portion of a query, although the other portions of a query can use indexes, if applicable.I create a new compound index that doesn’t use that column. So my problem is solved.Thank you very much. Sorry for westing your time ", "username": "Kadir_USTUN" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Slow query. Replicaset
2023-05-17T14:40:43.724Z
Slow query. Replicaset
938
https://www.mongodb.com/…b_2_1024x500.png
[ "configuration" ]
[ { "code": "", "text": "Hello, I’m having problems with my mongod, every time I restart the machine, systemctl doesn’t start the mongod process automatically, in addition to deleting my pidFilePath: /var/run/mongodb (I imagine that the mongod process doesn’t start because that reason). After I start the mongod process manually (after having created the pidFilePath again), I can perform operations normally, but the status of mongod.service remains as failed. Is there a way to correct such an error, what consequences does it bring?!!The remaining prints are in the comments!!\nmongod.service1918×937 81.8 KB\n", "username": "Breno_Fernandes_de_Castro" }, { "code": "", "text": "other prints! (part 1)\n\nmongod.service1918×937 81.8 KB\n", "username": "Breno_Fernandes_de_Castro" }, { "code": "", "text": "other prints! (part 2)\n\nmongod.conf689×944 63.6 KB\n", "username": "Breno_Fernandes_de_Castro" }, { "code": "", "text": "other prints! (part 3)\n\noperations1919×936 57.2 KB\n", "username": "Breno_Fernandes_de_Castro" }, { "code": "", "text": "Check this thread as it seems to be the same issue.", "username": "steevej" }, { "code": "", "text": "Hi Steve, first of all I would like to thank you for the reply, as I was able to make some headway on account of this post. But, I have a new error, systemctl is not able to start the mongod process through mongod --config /etc/mongod. I’ve already tested uploading the mongod.conf manually to validate the configuration file and it worked, so I think the cause of the problem is not the file. Do you have any more information or ideas? @steevej\nIMG-20230519-WA00571600×826 210 KB\n", "username": "Breno_Fernandes_de_Castro" }, { "code": "", "text": "You need to look at the log file for error description.", "username": "steevej" } ]
Mongod.service failed
2023-05-19T18:52:24.145Z
Mongod.service failed
937
null
[ "aggregation", "queries" ]
[ { "code": ">>> db.foo.find()\n[\n { _id: 1, intV: 1, charV: 'a', stringV: 'abc', arrV: [ 1, 2 ] }\n]\n\n>>> db.foo.aggregate([ { $addFields: { total: { $reduce: { input: \"$arrV\", initialValue:null+1, in: { $add: [\"$$value\", \"$$this\"] } } } } }])\n[\n {\n _id: 1,\n intV: 1,\n charV: 'a',\n stringV: 'abc',\n arrV: [ 1, 2 ],\n total: 4\n }\n", "text": "I ran the below query -It should return null and not 4.", "username": "Abhishek_Chaudhary1" }, { "code": "java script > 0 == null\n> false\njava script > 0 != null\n> true\njava script > null + 1\n> 1\njava script > null - 5\n> -5\njava script > null * 6\n> 0\n", "text": "It should return null and not 4.Why do you think it should?I think that 4 is the correct value. Despite the fact that we haveWhen null is use inside arithmetic expression it is converted to 0 as in the following.So your initialValue: null+1, is really equivalent to 1.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$reduce must output null if we pass null+1 as initialValue
2023-05-22T05:52:56.686Z
$reduce must output null if we pass null+1 as initialValue
305
null
[ "storage" ]
[ { "code": "", "text": "I tried upgrade DB Mongo from 4.4.18 to 4.4.21 with yum update the upgrade work correctly.\nAfter reboot system the Mongo go to error and doesn’t start.\nI have this errore in the log:{“t”:{“$date”:“2023-05-22T12:45:27.649+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:20698, “ctx”:“main”,“msg”:“***** SERVER RESTARTED *****”}\n{“t”:{“$date”:“2023-05-22T12:45:27.653+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“main”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{“$date”:“2023-05-22T12:45:27.669+02:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{“$date”:“2023-05-22T12:45:27.669+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:23330, “ctx”:“main”,“msg”:“ERROR: Cannot write pid file to {path_string}: {errAndStr_second}”,“attr”:{“path_string”:“/var/run/mongodb/mongod.pid”,“errAndStr_second”:“No such file or directory”}}My configurations is:systemLog:\ndestination: file\nlogAppend: true\npath: /var/log/mongodb/mongod.logstorage:\ndbPath: /var/lib/mongo\njournal:\nenabled: trueprocessManagement:\nfork: true # fork and run in background\npidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\ntimeZoneInfo: /usr/share/zoneinfonet:\nport: 27017\nbindIp: 127.0.0.1\nbindIpAll: true#security:#operationProfiling:replication:\nreplSetName: rs0\n#sharding:#auditLog:#snmp:", "username": "Dario_Ugliola" }, { "code": "", "text": "HI,\nI managed to solve on test environment:\nI commented on the config file\nprocessManagement:\nfork: true # fork and run in backgroundtimeZoneInfo: /usr/share/zoneinfoThen I gave the mongod user privileges for the /var/lib/mongo folder\nchown -R mongod:mongod /var/lib/mongoSeeing the note you don’t need to specify the pidfile:", "username": "Dario_Ugliola" } ]
"ERROR: Cannot write pid file to {path_string}: {errAndStr_second}","attr":{"path_string":"/var/run/mongodb/mongod.pid","errAndStr_second":"No such file or directory"}} after upgrade from 4.4.18 to 4.4.21
2023-05-22T11:52:06.460Z
"ERROR: Cannot write pid file to {path_string}: {errAndStr_second}","attr":{"path_string":"/var/run/mongodb/mongod.pid","errAndStr_second":"No such file or directory"}} after upgrade from 4.4.18 to 4.4.21
1,688
null
[]
[ { "code": "", "text": "I have no technology experience, but after working on a web based application tool, I have developed an interest to build my own web application. I am looking for the guidance and path to design, develop, deploy and maintain a web application successfully. I am very interested to learn new things, but after seeing lot of competitive options (java vs python) (back end/front end/full stack) (mySQL vs MangoDB). I am confused how to begin from the scratch. Kindly help.Know someone who can answer?", "username": "louis_thomas" }, { "code": "", "text": "Hello @louis_thomas, welcome to the MongoDB Community forum.I have no technology experience… I am confused how to begin from the scratch. Kindly help.This is a difficult question to answer. I am writing somethings and hope it is useful to you in some way.A web application has various components - mainly the front-end and the back-end. Take a typical email program like Gmail or Yahoo mail. These are web applications and has front and back ends.The browser is where the front-end of the application runs. This is where you view the data, enter data, and click a button to store the data. The program that runs in the web browser is made up of HTML, CSS, JavaScript + maybe other components depending upon technologies. This is the user interface to the web application.A back-end has multiple components; mostly a database server and a web server.\nThe database server is where you have the database and it is where the data is stored. These are databases like MySQL or MongoDB or even a data files. The emails you save are stored and retrieved from this database. Note that you can have a web application entirely without a database.The programs which actually does this writing to and reading from the database, do things like business calculations and programming logic like which web page to display - is the other part of the back-end. This can be called as the application part of the back-end. These applications are your Java, Python, Go, etc., programs. These programs are deployed and run on servers, and are called as web servers or application servers.That makes the web application. Together, the front-end, the application and the database is often referred as full-stack.As such computer programming is the main aspect of the front-end and back-end technologies. You write a program using a programming language. Programming is the essential part of developing applications (though programming the browser is called as scripting).… design, develop, deploy and maintain a web application successfully.That’s a very clear goal. But, it is also not an easy one, since you don’t have any technology experience. In fact each of it can be a specialized field on its own. Broadly, to create any kind of application, you will go thru the design, develop, deploy and maintenance aspects of it. Even if you want to write simple calculator program, you will end up with these steps. These steps together are often referred as application development life cycle.This is an example, an actual web application the author had designed, developed, deployed and also maintain it. It is a simple quiz application where there are about forty questions, each with multiple answers and you select one to score correct (or not). The front-end is written in HTML, CSS, JavaScript + Java’s web technology called as JSP (JavaServer Pages) and the back-end is a Java programming language component called as Servlet. 
The database is a text file (I used a file as the data is very little).You can access this quiz application (it’s a Java language basics quiz) at Javaquizplayer.comWhen you first click the link, the request is processed by a Java Servlet program which reads the quiz data file and prepares the data to be used in the application. Then it displays the landing web page with instructions to play the quiz. Further, as you press a button to start the quiz, the first quiz question is read and displayed in the web page (all this the programming logic happens in the Servlet), etc.This application is not of the latest of the today’s technologies, as it was written about seven years back, but in essence has the components of a typical web app. The web app runs on an Apache Tomcat web server.", "username": "Prasad_Saya" } ]
How can I start a web application?
2021-01-22T07:28:18.248Z
How can I start a web application?
2,273
https://www.mongodb.com/…0c07bb1cdc8f.png
[ "dot-net", "crud", "performance" ]
[ { "code": "var bulkWriteOptions = new BulkWriteOptions\n{\n BypassDocumentValidation = true,\n IsOrdered = false\n};\n\nusing (var cursor = await collection.FindAsync(filterToGetAllDocuments, options))\n{\n while (await cursor.MoveNextAsync())\n {\n stopwatch.Restart();\n\n batch++;\n\n Parallel.ForEach(cursor.Current, document =>\n {\n // This creates the list of UpdateOneModel<BsonDocument>, in parallel.\n cleanUpAction(document, documentsToUpdate);\n });\n\n // Now update all the documents in this BATCH.\n if (documentsToUpdate.Any())\n {\n bulkWriteAsyncStopwatch.Restart();\n var result = await collection.BulkWriteAsync(documentsToUpdate, bulkWriteOptions);\n bulkWriteAsyncStopwatch.Stop();\n }\n }\n}\n", "text": "Hi there,I’m trying to UPDATE about 2.1 million rows in a mongodb collection in a .NET Core 3.1 console app. When I batch this into batches of 1000, I get TIMINGS of each call to BulkWriteAsync. Each call it gets slower and slower and slower. So I’ve done the following to try and eliminate possible reasons:Ok. so some code!I’ve removed some boring stuff like logging or me recording the stopwatch timings to the console.out … but that’s the gist of it.Finally, here are my timings:\nimage653×668 13.3 KB\n| Total Time | BulkWriteAsync Time | Difference (time it took to generate the list of 1000 UpdateOneModel’s | – | — | — |\n| 00:00.40 | 00:00.31 | 00:00.09 |\n| 00:00.87 | 00:00.78 | 00:00.09 |\n| 00:01.34 | 00:01.26 | 00:00.08 |\n| 00:01.83 | 00:01.74 | 00:00.08 |\n| 00:02.37 | 00:02.29 | 00:00.08 |\n| 00:02.81 | 00:02.72 | 00:00.08 |\n| 00:03.26 | 00:03.17 | 00:00.08 |\n| 00:03.79 | 00:03.70 | 00:00.09 |\n| 00:04.37 | 00:04.27 | 00:00.09 |So we can see:… and it keeps growing.So this means each batch is taking longer and long to complete.I would have thought that each call to BWA would be -roughly- the same time? It -feels- like it’s doing all the previous batches and then this batch? I donno!Here’s a screen shot of the BulkWriteResult for batch row #42 (which took 20.91 seconds to run)So can someone PLEASE help? I just don’t know how to figure this one out?I feel like this is an issue with the .NET driver?", "username": "Pure.Krome" }, { "code": "", "text": "As a “seedling”, I couldn’t figure out how to edit my previous post. (I also tried reading the Readme.1st for help, prior).Anyways, here’s the missing image from the opening post:Here’s a screen shot of the BWR for batch row #42", "username": "Pure.Krome" }, { "code": "", "text": "Hello,\nDid you find any solutions?", "username": "Alexander_Halutin" } ]
Using the .NET BulkWriteAsync gets slower with each batch (have timings to prove)
2021-10-03T12:45:38.823Z
Using the .NET BulkWriteAsync gets slower with each batch (have timings to prove)
5,012
null
[ "realm-web" ]
[ { "code": "", "text": "I often find myself needing to update a web-app’s UI and content, based on changes to the data on the cloud/server. For example, notifications, several users collaborating on the same data, users working with rapidly changing data, etc.Sync appears to only work on mobile apps and I have read in a reply, here in community, that this will likely remain so.So that said, I can’t seem to find any documentation that states how my above browser based use cases should/could be addressed in Realm or Atlas. I am not not going to keep repeatedly querying for such new data of course. It looks like I would need to create a web-socket server, outside Atlas, that could then have it’s own Realm-Sync local data store; feeding any updates to a “subscriber”.Is this last assessment basically correct or am I missing some more standard way of doing this in this ecosystem? Thank you.", "username": "Erik_Elverskog" }, { "code": "", "text": "I am looking for the same thing as you. Here is the summary of what I have found so far (I am splitting my message into several posts because I am limited to 3 links and 1 attachment per message).WebSockets is not the only solution to stream events to the browser. You also can use Server-sent events. Basically, it is an HTTP connection between the browser and the server being kept open to stream events.The web SDK of Realm (which is provided by MongoDB) is using the Server-sent_events throughout the BAAS of MongoDB Atlas (Backend As A Service). See the tutorial here.\nScreenshot 2023-05-22 at 11.14.311794×693 209 KB\n", "username": "Gabriel" }, { "code": "", "text": "\nScreenshot 2023-05-22 at 11.15.191286×424 105 KB\nThis MongoDB tutorial explains how to set up WebSocket by deploying your own backend code. Would it be possible to do it with BAAS? Would it be possible even without BAAS?Would it be possible to stream using HTTP (Server-sent events) directly with the “Data API” / “HTTPS endpoints”?", "username": "Gabriel" }, { "code": "", "text": "", "username": "Gabriel" } ]
Is there an equivalent to web-sockets for web-apps in Realm/Atlas?
2022-06-02T03:59:56.807Z
Is there an equivalent to web-sockets for web-apps in Realm/Atlas?
3,136
https://www.mongodb.com/…5f12538c7ae1.png
[ "replication" ]
[ { "code": "", "text": "verison:4.0.9\nreplicaset,3 node\nprimary node 's max memory:4GB\nbut info from mongostat output follows:\nshow dbs output:\nPRIMARY> show dbs;\nadmin 0.000GB\nconfig 0.000GB\nxxx 0.013GB\nlocal 0.342GBwhy primary instance 's memory so high?", "username": "NOVALUE_wendywong" }, { "code": "PRIMARY> db.serverStatus().tcmalloc\n{\n \"generic\" : {\n \"current_allocated_bytes\" : NumberLong(\"36921151424\"),\n \"heap_size\" : NumberLong(\"40264519680\")\n },\n \"tcmalloc\" : {\n \"pageheap_free_bytes\" : NumberLong(1460367360),\n \"pageheap_unmapped_bytes\" : NumberLong(1508446208),\n \"max_total_thread_cache_bytes\" : NumberLong(1073741824),\n \"current_total_thread_cache_bytes\" : 11085832,\n \"total_free_bytes\" : 374554688,\n \"central_cache_free_bytes\" : 358786648,\n \"transfer_cache_free_bytes\" : 4682208,\n \"thread_cache_free_bytes\" : 11085832,\n \"aggressive_memory_decommit\" : 0,\n \"pageheap_committed_bytes\" : NumberLong(\"38756073472\"),\n \"pageheap_scavenge_count\" : 1558237,\n \"pageheap_commit_count\" : 4035069,\n \"pageheap_total_commit_bytes\" : NumberLong(\"681721012224\"),\n \"pageheap_decommit_count\" : 1558514,\n \"pageheap_total_decommit_bytes\" : NumberLong(\"642964938752\"),\n \"pageheap_reserve_count\" : 20923,\n \"pageheap_total_reserve_bytes\" : NumberLong(\"40264519680\"),\n \"spinlock_total_delay_ns\" : NumberLong(\"10426415504\"),\n \"formattedString\" : \"------------------------------------------------\\nMALLOC: 36921152000 (35210.8 MiB) Bytes in use by application\\nMALLOC: + 1460367360 ( 1392.7 MiB) Bytes in page heap freelist\\nMALLOC: + 358786648 ( 342.2 MiB) Bytes in central cache freelist\\nMALLOC: + 4682208 ( 4.5 MiB) Bytes in transfer cache freelist\\nMALLOC: + 11085256 ( 10.6 MiB) Bytes in thread cache freelists\\nMALLOC: + 336584960 ( 321.0 MiB) Bytes in malloc metadata\\nMALLOC: ------------\\nMALLOC: = 39092658432 (37281.7 MiB) Actual memory used (physical + swap)\\nMALLOC: + 1508446208 ( 1438.6 MiB) Bytes released to OS (aka unmapped)\\nMALLOC: ------------\\nMALLOC: = 40601104640 (38720.2 MiB) Virtual address space used\\nMALLOC:\\nMALLOC: 5221570 Spans in use\\nMALLOC: 82 Thread heaps in use\\nMALLOC: 4096 Tcmalloc page size\\n------------------------------------------------\\nCall ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).\\nBytes released to the OS take up virtual address space but no physical memory.\\n\"\n \n", "text": "", "username": "NOVALUE_wendywong" }, { "code": "7fc4f4f24000-7fce68a61000 rw-p 00000000 00:00 0 \nSize: 39644404 kB\nRss: 37846436 kB\nPss: 37846436 kB\nShared_Clean: 0 kB\nShared_Dirty: 0 kB\nPrivate_Clean: 0 kB\nPrivate_Dirty: 37846436 kB\nReferenced: 37278820 kB\nAnonymous: 37846436 kB\nAnonHugePages: 0 kB\nSwap: 0 kB\nKernelPageSize: 4 kB\nMMUPageSize: 4 kB\n", "text": "", "username": "NOVALUE_wendywong" }, { "code": "{\n \"application threads page read from disk to cache count\" : 146,\n \"application threads page read from disk to cache time (usecs)\" : 6604,\n \"application threads page write from cache to disk count\" : 17030483,\n \"application threads page write from cache to disk time (usecs)\" : 698950227,\n \"bytes belonging to page images in the cache\" : 3638217,\n \"bytes belonging to the cache overflow table in the cache\" : 182,\n \"bytes currently in the cache\" : 2252442633,\n \"bytes dirty in the cache cumulative\" : NumberLong(\"18672485991968\"),\n \"bytes not belonging to page images in the cache\" : 2248804415,\n \"bytes read into cache\" : 
3367237,\n \"bytes written from cache\" : 125547076860,\n \"cache overflow cursor application thread wait time (usecs)\" : 0,\n \"cache overflow cursor internal thread wait time (usecs)\" : 0,\n \"cache overflow score\" : 57,\n \"cache overflow table entries\" : 0,\n \"cache overflow table insert calls\" : 0,\n \"cache overflow table remove calls\" : 0,\n \"checkpoint blocked page eviction\" : 717,\n \"eviction calls to get a page\" : 7284869,\n \"eviction calls to get a page found queue empty\" : 7284049,\n \"eviction calls to get a page found queue empty after locking\" : 0,\n \"eviction currently operating in aggressive mode\" : 0,\n \"eviction empty score\" : 0,\n \"eviction passes of a file\" : 0,\n \"eviction server candidate queue empty when topping up\" : 0,\n \"eviction server candidate queue not empty when topping up\" : 0,\n \"eviction server evicting pages\" : 0,\n \"eviction server slept, because we did not make progress with eviction\" : 28068,\n \"eviction server unable to reach eviction goal\" : 0,\n \"eviction state\" : 32,\n \"eviction walk target pages histogram - 0-9\" : 0,\n \"eviction walk target pages histogram - 10-31\" : 0,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 0,\n \"eviction walk target pages histogram - 64-128\" : 0,\n \"eviction walks abandoned\" : 0,\n \"eviction walks gave up because they restarted their walk twice\" : 0,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 0,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 0,\n \"eviction walks reached end of tree\" : 0,\n \"eviction walks started from root of tree\" : 0,\n \"eviction walks started from saved location in tree\" : 0,\n \"eviction worker thread active\" : 4,\n \"eviction worker thread created\" : 0,\n \"eviction worker thread evicting pages\" : 164,\n \"eviction worker thread removed\" : 0,\n \"eviction worker thread stable number\" : 0,\n \"failed eviction of pages that exceeded the in-memory maximum count\" : 7,\n \"failed eviction of pages that exceeded the in-memory maximum time (usecs)\" : 48,\n \"files with active eviction walks\" : 0,\n \"files with new eviction walks started\" : 0,\n \"force re-tuning of eviction workers once in a while\" : 0,\n \"hazard pointer blocked page eviction\" : 7,\n \"hazard pointer check calls\" : 145606,\n \"hazard pointer check entries walked\" : 484,\n \"hazard pointer maximum array length\" : 1,\n \"in-memory page passed criteria to be split\" : 591,\n \"in-memory page splits\" : 282,\n \"internal pages evicted\" : 0,\n \"internal pages split during eviction\" : 0,\n \"leaf pages split during eviction\" : 2,\n \"maximum bytes configured\" : 4294967296,\n \"maximum page size at eviction\" : 0,\n \"modified pages evicted\" : 145331,\n \"modified pages evicted by application threads\" : 0,\n \"operations timed out waiting for space in cache\" : 0,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 0,\n \"page written requiring cache overflow records\" : 0,\n \"pages currently held in the cache\" : 526,\n \"pages evicted because they exceeded the in-memory maximum count\" : 145406,\n \"pages evicted because they exceeded the in-memory maximum time (usecs)\" : 2618210631,\n \"pages evicted because they had chains of deleted items count\" : 29,\n \"pages evicted because they had chains of deleted items time (usecs)\" : 37391,\n \"pages evicted by application 
threads\" : 0,\n \"pages queued for eviction\" : 0,\n \"pages queued for urgent eviction\" : 164,\n \"pages queued for urgent eviction during walk\" : 0,\n \"pages read into cache\" : 159,\n \"pages read into cache after truncate\" : 122,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages read into cache requiring cache overflow entries\" : 0,\n \"pages read into cache requiring cache overflow for checkpoint\" : 0,\n \"pages read into cache skipping older cache overflow entries\" : 0,\n \"pages read into cache with skipped cache overflow entries needed later\" : 0,\n \"pages read into cache with skipped cache overflow entries needed later by checkpoint\" : 0,\n \"pages requested from the cache\" : 1275199575,\n \"pages seen by eviction walk\" : 0,\n \"pages selected for eviction unable to be evicted\" : 7,\n \"pages walked for eviction\" : 0,\n \"pages written from cache\" : 17030484,\n \"pages written requiring in-memory restoration\" : 145166,\n \"percentage overhead\" : 8,\n \"tracked bytes belonging to internal pages in the cache\" : 63250,\n \"tracked bytes belonging to leaf pages in the cache\" : 2252379383,\n \"tracked dirty bytes in the cache\" : 2704172,\n \"tracked dirty pages in the cache\" : 1,\n \"unmodified pages evicted\" : 0\n}\n\n", "text": "", "username": "NOVALUE_wendywong" }, { "code": "\"bytes currently in the cache\" : 2252442633,", "text": "\"bytes currently in the cache\" : 2252442633,“bytes currently in the cache” : 2252442633,", "username": "NOVALUE_wendywong" }, { "code": "", "text": "Hi @NOVALUE_wendywong , can you let me know if you had any findings in this case?", "username": "Susheem_Koul" } ]
Why mongo instance memory so high?
2022-05-16T08:15:28.192Z
Why mongo instance memory so high?
3,406
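The numbers quoted above can be checked side by side in mongosh: WiredTiger is capped at the configured cache size, but tcmalloc keeps freed memory in its own free lists, which is usually where the extra resident memory of a mongod process comes from. A minimal sketch using the same serverStatus fields shown in this thread (it assumes shell access to the primary and that tcmalloc is the allocator):

    const s = db.serverStatus();
    // hard ceiling for the WiredTiger cache (storage.wiredTiger.engineConfig.cacheSizeGB)
    const cacheMaxGB  = s.wiredTiger.cache["maximum bytes configured"] / 1024 ** 3;
    // what the process has actually allocated, and what tcmalloc holds in its free lists
    const allocatedGB = s.tcmalloc.generic.current_allocated_bytes / 1024 ** 3;
    const freeListGB  = s.tcmalloc.tcmalloc.pageheap_free_bytes / 1024 ** 3;
    print(`WiredTiger cache limit : ${cacheMaxGB.toFixed(1)} GB`);
    print(`tcmalloc allocated     : ${allocatedGB.toFixed(1)} GB`);
    print(`tcmalloc free lists    : ${freeListGB.toFixed(1)} GB`);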
null
[ "replication" ]
[ { "code": "db.getSiblingDB(\"local\").system.replset.deleteOne({_id: oldId})\nuncaught exception: WriteCommandError({\n \"ok\" : 0,\n \"errmsg\" : \"not authorized on local to execute command { delete: \\\"system.replset\\\", ordered: true, lsid: { id: UUID(\\\"37921152-f609-4493-891f-f7bd0b3dff72\\\") }, $db: \\\"local\\\" }\",\n \"code\" : 13,\n \"codeName\" : \"Unauthorized\"\n}) :\nWriteCommandError({\n \"ok\" : 0,\n \"errmsg\" : \"not authorized on local to execute command { delete: \\\"system.replset\\\", ordered: true, lsid: { id: UUID(\\\"37921152-f609-4493-891f-f7bd0b3dff72\\\") }, $db: \\\"local\\\" }\",\n \"code\" : 13,\n \"codeName\" : \"Unauthorized\"\n})\nWriteCommandError@src/mongo/shell/bulk_api.js:417:48\nexecuteBatch@src/mongo/shell/bulk_api.js:915:23\nBulk/this.execute@src/mongo/shell/bulk_api.js:1163:21\nDBCollection.prototype.deleteOne@src/mongo/shell/crud_api.js:375:17\n", "text": "Hi all,\nI’m following the procdure ( https://www.mongodb.com/docs/v4.4/tutorial/rename-unsharded-replica-set/ ) to rename a three node Mongodb replicaset ver 4.4 .\nAfter inserting the new document in local db, system.replset cllection, I got an error in delete the old document.\nThe error is that I don’t have permissions on the local db.Has anyone fixed the issue?\nThanks for any help you can give me.\nKing regards\nGiorgio Prandi", "username": "Giorgio_Prandi" }, { "code": "", "text": "What privileges your user has?\nI think access to system objects is removed from Inbuilt roles\nYou have to create a custom role giving explicit privs/actions on that collection or create a temporary user and grant __system internal role to this user", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi Ramachandra,\nthanks a lot for your suggestion, it was very helpful and i was able to rename the replica set.\nKing regards\nGP", "username": "Giorgio_Prandi" } ]
Rename replica set: not authorized on local to execute command delete
2023-05-19T08:26:08.357Z
Rename replica set: not authorized on local to execute command delete
975
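A minimal mongosh sketch of the temporary-user workaround described above; the user name, password, and old replica set name are placeholders, and the user should be dropped again as soon as the stale document has been removed:

    use admin
    db.createUser({ user: "tmpReplsetCleanup", pwd: "changeMe", roles: [ "__system" ] })
    // re-authenticate as that user, then delete the old configuration document:
    db.getSiblingDB("local").system.replset.deleteOne({ _id: "oldReplicaSetName" })
    // clean up afterwards
    db.getSiblingDB("admin").dropUser("tmpReplsetCleanup")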
null
[ "data-modeling", "document-versioning" ]
[ { "code": "", "text": "what is the best document structure for the below case, I have RDBMS table structure.\n|Gender | Age Min (>=) | Age Max (<) | Income Min (>=) | Income Max (<) | Classification |\n| Male | 18 | 22 | 1 | 5000 | A |\n| Male | 18 | 22 | 5000 | 7000 | B |\n| Male | 18 | 22 | 7000 | 99000 | C |\n| Male | 22 | 45 | 1 | 7500 | A |The purpose of table is to fetch Classification by providing Gender, Age and Income of person\nPlease help", "username": "Ravikiran_Chikkamath" }, { "code": "{\n Gender: \"Male\",\n minAge:18,\n maxAge: 22,\n incomeMin:2000,\n incomeMax:5000,\n classification: \"A\"\n\n}\n", "text": "Hey @Ravikiran_Chikkamath,Welcome to the MongoDB Community Forums! A general rule of thumb while doing schema design in MongoDB is that you should design your database in a way that the most common queries can be satisfied by querying a single collection, even when this means that you will have some redundancy in your database. Thus, it may be beneficial to work from the required queries first, making it as simple as possible, and let the schema design follow the query pattern.Based on what you described, one example design you can document your data is in the following manner (please test and alter according to your use case and requirements):This way, you can search classification by providing Gender, Age, and income. Adding index to these fields would help improve query performance as well. Do read about the ESR rule as well since that will help you a lot while planning your index and queries. I would suggest you use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.Hope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Document design in MongoDB for the given case
2023-05-13T07:19:40.037Z
Document design in MongoDB for the given case
741
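Building on the document shape suggested above, a sketch of a matching index and lookup; the collection name and sample values are illustrative, and the equality field (Gender) is placed before the range fields following the ESR rule:

    db.classifications.createIndex(
      { Gender: 1, minAge: 1, maxAge: 1, incomeMin: 1, incomeMax: 1 }
    )
    // classification for a 20-year-old male earning 3000 (Min is inclusive, Max exclusive):
    db.classifications.findOne(
      {
        Gender: "Male",
        minAge: { $lte: 20 }, maxAge: { $gt: 20 },
        incomeMin: { $lte: 3000 }, incomeMax: { $gt: 3000 }
      },
      { classification: 1, _id: 0 }
    )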
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "let fieldName = stage\n {\n $group: {\n _id:\"$$fieldName \",\n count: {$sum:1},\n },\n },\nfieldName", "text": "Hi friends,I have faced a problem when trying to bind the variable inside the $group. My requirement is incoming field must be bound inside the $group aggregation.I have tried the followingThe above is not worked.it is showing an error like an undefined variable is used fieldName,Please help to solve thisThanks\nPravin", "username": "Pravin_Raja" }, { "code": "let fieldName = stage\n{\n $group: {\n _id: \"$$fieldName \",\n count: { $sum: 1 },\n },\n},\n$group$$$groupconst fieldName = 'stage';\nconst pipeline = [\n {\n $group: {\n _id: '$' + fieldName,\n count: { $sum: 1 }\n }\n }\n];\ncollection.aggregate(pipeline)\n .toArray()\n .then((result) => {\n console.log(result);\n // Handle the result as needed\n\n console.log('Find query executed...');\n console.log(result);\n\n client.close(); // Close the MongoDB connection\n })\n$$group", "text": "Hello @Pravin_Raja,Welcome to the MongoDB Community forums It seems that you are trying to bind a variable in the $group stage on the client side using the $$ operator. It’s important to note that in MongoDB, the aggregation pipeline is executed on the server side, and variables declared on the client side cannot be directly accessed within the pipeline.However, one possible workaround is to dynamically construct the aggregation pipeline using an application code (here JavaScript). This approach will allow you to build the pipeline dynamically and inject the variable value into the $group stage. Here’s an example code snippet:In the above example, I constructed the pipeline dynamically by concatenating the field name with the $ operator to reference the field in the $group stage.Please note that this is just one possible approach, and the actual implementation may vary depending on your specific requirements.I hope this helps. Please feel free to reach out if you have any further questions.Best Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "pipelineHi !!!\nYeah it worked @kushagra_kesav.Thanks for Your response.", "username": "Pravin_Raja" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to Bind variable as a Field Name inside Aggregate using node js
2023-05-19T04:43:27.468Z
How to Bind variable as a Field Name inside Aggregate using node js
617
null
[ "security" ]
[ { "code": "", "text": "Is there a way to connect to a mongodb server without passing the cert files (.csr, .crt, .pem) on the client side having SSL/TLS enabled on the server? We are trying to figure out how do our App Servers will connect without the need for certificates since we are using App services on Azure.", "username": "sg_irz" }, { "code": "", "text": "if mongo cluster simply uses tls (not mTLS), then clients don’t have to pass any certs to server", "username": "Kobe_W" }, { "code": "", "text": "Hello @Kobe_W are you referring to this one?\nimage848×694 35.3 KB\n", "username": "sg_irz" }, { "code": "", "text": "yes. But i don’t fully understand your wording in your question, so not sure if this is what your want.", "username": "Kobe_W" }, { "code": "net:\n port: 27017\n bindIp: 0.0.0.0\n tls:\n mode: requireTLS\n certificateKeyFile: C:\\Program Files\\OpenSSL\\bin\\mongodb2.pem\n", "text": "Sorry about that. I just want to know if it is possible for clients to connect to mongodb server with TLS enabled without having to pass the .crt or .pem file in the connection string.Here is my tls config for my mongodb server:", "username": "sg_irz" }, { "code": "", "text": "Not sure if there’s such an option in connection string, but looks like the client has to pass a tlsCAFile in order to validate mongo server’s certificate.", "username": "Kobe_W" } ]
How to connect to mongodb server with SSL/TLS Enabled without using cert on client side?
2023-02-28T08:43:45.036Z
How to connect to mongodb server with SSL/TLS Enabled without using cert on client side?
1,493
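To make the distinction in this thread concrete: with one-way TLS the client only needs to trust the certificate authority that signed the server certificate, so a connection string along these lines is enough, with no client certificate or key supplied (host, credentials, and CA path are placeholders):

    mongodb://appUser:secret@db.example.com:27017/?tls=true&tlsCAFile=/etc/ssl/myCA.pem

If the server certificate is issued by a public CA that is already in the operating system trust store, the tlsCAFile option can usually be omitted; a client-side certificate only becomes necessary when the server is configured for mutual TLS (x.509 client authentication).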
null
[ "security" ]
[ { "code": "net:\n port: 27017\n bindIp: 0.0.0.0\n tls:\n mode: allowTLS\n certificateKeyFile: C:\\Program Files\\Certbot\\bin\\certificateKeyFile.pem\n", "text": "I am running Mongodb 6.0.5 on windows.Here is my connection string:mongodb://username:[email protected]:27017/?tls=trueHowever when I use this command:db.runCommand({whatsmyuri: 1})It only shows{ you: ‘public-IP:59499’, ok: 1 }Why is it not showing the TLS/SSL settings of my connection? Is there any other way to check it?", "username": "sg_irz" }, { "code": "", "text": "check this answer.Or you can disable tls from server side, and if you fail to connect with same connection string, then “your current connection is TLS”.", "username": "Kobe_W" } ]
How to check my current connection status if using TLS?
2023-03-16T08:28:14.442Z
How to check my current connection status if using TLS?
1,262
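whatsmyuri only reports the client address, so it will never show TLS details. Two hedged ways to confirm TLS on a deployment like the one above (field names as reported by current server versions; adjust to your setup):

    // serverStatus breaks incoming connections down by negotiated TLS version; the
    // 1.2 / 1.3 counters increasing when your client connects means TLS is in use:
    db.serverStatus().transportSecurity
    // with mode allowTLS (as in the config above) plain connections are still accepted,
    // so as a cross-check try connecting with TLS explicitly disabled:
    //   mongosh "mongodb://username:password@host:27017/?tls=false"
    // under requireTLS that attempt would be rejected outright.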
null
[ "queries", "crud" ]
[ { "code": "{\n name: string;\n quantity: number;\n status: string;\n}\n", "text": "Hello,If we have a DB model asIs it possible to write an updateOne query with upsert true, so thatI’m struggling with points 2 and 3.Thank’s for the help.", "username": "Drago_Kojadinovic" }, { "code": "", "text": "According to the manual, updateOne can’t support all these in one call.\nBut if you need atomic guarantee, you can try transaction.", "username": "Kobe_W" } ]
Query for upserting data but update should happen only if condition is true
2023-05-20T11:39:04.130Z
Query for upserting data but update should happen only if condition is true
569
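A sketch of the transaction approach suggested above, written against the Node.js driver; the database, collection, field values, and the "only update while status is open" condition are illustrative, and transactions require a replica set or sharded cluster:

    const session = client.startSession();
    try {
      await session.withTransaction(async () => {
        const items = client.db("shop").collection("items");
        const existing = await items.findOne({ name: "widget" }, { session });
        if (!existing) {
          // upsert part: insert when the document does not exist yet
          await items.insertOne({ name: "widget", quantity: 1, status: "open" }, { session });
        } else if (existing.status === "open") {
          // conditional part: only update when the stored document still qualifies
          await items.updateOne({ _id: existing._id }, { $inc: { quantity: 1 } }, { session });
        } // otherwise leave the document untouched
      });
    } finally {
      await session.endSession();
    }

Reading and then conditionally writing inside withTransaction keeps the check-then-act sequence atomic, which a single updateOne with upsert cannot express when the update condition differs from the filter used for the upsert.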
null
[ "node-js", "mongoose-odm" ]
[ { "code": "", "text": "MongoNetworkError: Socket connection timeout\nat connectionFailureError (C:\\Users\\Deepak\\Desktop\\Mern-app\\bachend\\node_modules\\mongodb\\lib\\cmap\\connect.js:370:20)\nat TLSSocket. (C:\\Users\\Deepak\\Desktop\\Mern-app\\bachend\\node_modules\\mongodb\\lib\\cmap\\connect.js:293:22)\nat Object.onceWrapper (node:events:626:26)\nat TLSSocket.emit (node:events:511:28)\nat emitErrorNT (node:internal/streams/destroy:151:8)\nat emitErrorCloseNT (node:internal/streams/destroy:116:3)\nat process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\ncause: Error [ERR_SOCKET_CONNECTION_TIMEOUT]: Socket connection timeout\nat new NodeError (node:internal/errors:399:5)\nat internalConnectMultiple (node:net:1099:20)\nat Timeout.internalConnectMultipleTimeout (node:net:1638:3)\nat listOnTimeout (node:internal/timers:575:11)\nat process.processTimers (node:internal/timers:514:7) {\ncode: ‘ERR_SOCKET_CONNECTION_TIMEOUT’\n},\nconnectionGeneration: 1,\n[Symbol(errorLabels)]: Set(1) { ‘ResetPool’ }\n}[nodemon] clean exit - waiting for changes before restart", "username": "Deepak_Wasiya" }, { "code": "", "text": "i have not able to connect mongodb atlas ny mongoose driver in express app", "username": "Deepak_Wasiya" } ]
Mongoose connectivity with express app
2023-05-22T03:36:27.325Z
Mongoose connectivity with express app
705
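A hedged note on the error above: a socket connection timeout against Atlas usually means the client never reached the cluster at all, most often because the machine's IP address is missing from the Atlas Network Access list or an outbound firewall blocks port 27017. A minimal Mongoose connection sketch with a shorter failure window (the connection string is a placeholder):

    const mongoose = require("mongoose");
    mongoose.connect("mongodb+srv://user:pass@cluster0.example.mongodb.net/mydb", {
        serverSelectionTimeoutMS: 10000   // fail fast instead of hanging for the default 30s
      })
      .then(() => console.log("connected to Atlas"))
      .catch(err => console.error("connection failed:", err.message));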
https://www.mongodb.com/…cf_2_1024x47.png
[ "node-js", "compass", "server", "storage" ]
[ { "code": "{\"t\":{\"$date\":\"2023-05-13T14:28:54.751+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.752+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.754+05:30\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"thread1\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.754+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.755+05:30\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"thread1\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.755+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"ns\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.755+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"ns\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.755+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.756+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":6467,\"port\":27017,\"dbPath\":\"/opt/homebrew/var/mongodb\",\"architecture\":\"64-bit\",\"host\":\"Harshas-MacBook-Air.local\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.756+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.15\",\"gitVersion\":\"935639beed3d0c19c2551c93854b831107c0b118\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.756+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"22.4.0\"}}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.756+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/opt/homebrew/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1, ::1\",\"ipv6\":true},\"storage\":{\"dbPath\":\"/opt/homebrew/var/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/opt/homebrew/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.757+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 
00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.757+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.757+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/opt/homebrew/var/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:54.757+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7680M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.341+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1683968335:341518][6467:0x209933280], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 5 through 6\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.365+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1683968335:365883][6467:0x209933280], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 6 through 6\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.396+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1683968335:396346][6467:0x209933280], txn-recover: [WT_VERB_RECOVERY_ALL] Main recovery loop: starting at 5/17280 to 6/256\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.444+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1683968335:444883][6467:0x209933280], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 5 through 6\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.475+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1683968335:475243][6467:0x209933280], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 6 through 6\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.499+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1683968335:499734][6467:0x209933280], txn-recover: [WT_VERB_RECOVERY_ALL] Set global recovery timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.499+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1683968335:499772][6467:0x209933280], txn-recover: [WT_VERB_RECOVERY_ALL] Set global oldest timestamp: (0, 
0)\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.507+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1683968335:507759][6467:0x209933280], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 7497\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.527+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":770}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.527+05:30\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.554+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.555+05:30\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.555+05:30\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22184, \"ctx\":\"initandlisten\",\"msg\":\"Soft rlimits for open file descriptors too low\",\"attr\":{\"currentValue\":256,\"recommendedMinimum\":64000},\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.557+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":13,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":13,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.558+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.579+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.580+05:30\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/opt/homebrew/var/mongodb/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.581+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.582+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.582+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening 
on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.582+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"::1\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:28:55.582+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.546+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:49513\",\"uuid\":\"bc067e58-9a5a-4228-b1fa-54f5dbda9528\",\"connectionId\":1,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.550+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:49513\",\"client\":\"conn1\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"arm64\",\"version\":\"22.4.0\"},\"platform\":\"Node.js v16.17.1, LE (unified)|Node.js v16.17.1, LE (unified)\",\"application\":{\"name\":\"MongoDB Compass\"}}}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.553+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:49514\",\"uuid\":\"93424120-a931-44a0-9003-98f9960ca502\",\"connectionId\":2,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.554+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn2\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:49514\",\"client\":\"conn2\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"arm64\",\"version\":\"22.4.0\"},\"platform\":\"Node.js v16.17.1, LE (unified)|Node.js v16.17.1, LE (unified)\",\"application\":{\"name\":\"MongoDB Compass\"}}}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.580+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:49515\",\"uuid\":\"6f85ce98-e76e-404a-984c-d80534046b1d\",\"connectionId\":3,\"connectionCount\":3}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.580+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:49516\",\"uuid\":\"e8019277-8707-47fd-a7db-98600ee6130a\",\"connectionId\":4,\"connectionCount\":4}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.581+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn3\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:49515\",\"client\":\"conn3\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"arm64\",\"version\":\"22.4.0\"},\"platform\":\"Node.js v16.17.1, LE (unified)|Node.js v16.17.1, LE (unified)\",\"application\":{\"name\":\"MongoDB Compass\"}}}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.581+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn4\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:49516\",\"client\":\"conn4\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"arm64\",\"version\":\"22.4.0\"},\"platform\":\"Node.js v16.17.1, LE (unified)|Node.js v16.17.1, LE (unified)\",\"application\":{\"name\":\"MongoDB 
Compass\"}}}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.583+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:49517\",\"uuid\":\"8a4fa175-b8c6-402c-934a-3cfc0bb29c92\",\"connectionId\":5,\"connectionCount\":5}}\n", "text": "Hi , I recently tried to install [email protected] in macbook air m2 apple silicon chip. The installation was successfull , but the service is getting stooped whenever i tried to connect to mongodb using node or whenver i try to query using mongodb-compass\nimage2568×120 41.3 KB\nThe mongo.log appears as below :", "username": "Harsha_Vardhan_Moka" }, { "code": "sudo brew services list/opt/homebrew/var/mongodb{\"t\":{\"$date\":\"2023-05-13T14:29:25.550+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:49513\",\"client\":\"conn1\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"arm64\",\"version\":\"22.4.0\"},\"platform\":\"Node.js v16.17.1, LE (unified)|Node.js v16.17.1, LE (unified)\",\"application\":{\"name\":\"MongoDB Compass\"}}}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.553+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:49514\",\"uuid\":\"93424120-a931-44a0-9003-98f9960ca502\",\"connectionId\":2,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.554+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn2\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:49514\",\"client\":\"conn2\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"arm64\",\"version\":\"22.4.0\"},\"platform\":\"Node.js v16.17.1, LE (unified)|Node.js v16.17.1, LE (unified)\",\"application\":{\"name\":\"MongoDB Compass\"}}}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.580+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:49515\",\"uuid\":\"6f85ce98-e76e-404a-984c-d80534046b1d\",\"connectionId\":3,\"connectionCount\":3}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.580+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:49516\",\"uuid\":\"e8019277-8707-47fd-a7db-98600ee6130a\",\"connectionId\":4,\"connectionCount\":4}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.581+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn3\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:49515\",\"client\":\"conn3\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"arm64\",\"version\":\"22.4.0\"},\"platform\":\"Node.js v16.17.1, LE (unified)|Node.js v16.17.1, LE (unified)\",\"application\":{\"name\":\"MongoDB Compass\"}}}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.581+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn4\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:49516\",\"client\":\"conn4\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"arm64\",\"version\":\"22.4.0\"},\"platform\":\"Node.js v16.17.1, LE (unified)|Node.js v16.17.1, LE 
(unified)\",\"application\":{\"name\":\"MongoDB Compass\"}}}}\n{\"t\":{\"$date\":\"2023-05-13T14:29:25.583+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:49517\",\"uuid\":\"8a4fa175-b8c6-402c-934a-3cfc0bb29c92\",\"connectionId\":5,\"connectionCount\":5}}\n", "text": "Hello @Harsha_Vardhan_Moka ,Welcome to The MongoDB Community Forums! Lastly, at the end of the logs shared, It seems like you were able to connect, can you share additional logs to make sure if something is a miss?Below threads seems to have similar issues and were solved, can you take a look at these in case it solves your issue as well.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Consistent stopping of [email protected] service
2023-05-13T09:30:42.487Z
Consistent stopping of [email protected] service
1,054
https://www.mongodb.com/…835d60777b4c.png
[ "backup", "ops-manager" ]
[ { "code": "", "text": "Hi guys, I’m new to MongoDB and Ops manager.\nI was able to set up the ops manager with 3 hosts, but I have a question about how to backup ops manager Mongodb.\nIf I upgrade the ops manager MongoDB version from 4.2 to 4.4, will it cause any data loss?\nIf yes, how does it work and what should I do about this?I found this on MongoDB’s official website, is this the solution or architected to backup ops manager Mongodb?\n\nimage745×407 66.7 KB\n", "username": "Leslie_Lee" }, { "code": "", "text": "Hi @Leslie_Lee and welcome to the MongoDB Community forum!!I have a question about how to backup ops manager Mongodb.The documentation on how to backup an Ops Manager is a resource that you can refer to.If I upgrade the ops manager MongoDB version from 4.2 to 4.4, will it cause any data loss?\nIf yes, how does it work and what should I do about this?Ideally, during an upgrade, there should not be any data loss, unless something goes south during the upgrade.\nYou can refer to the documentation for Ops Manager upgrade to know about the process of upgradation.As Ops Manager is part of the MongoDB Enterprise edition, I would recommend contacting MongoDB support for additional assistance.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi Aasawari,Thanks for your reply and advise.\nWhich mean if I want to do the same deployment as below diagram, I should use File system store or S3 blockstore?\nHow about this two ops manager? How it connect in same replica set?", "username": "Leslie_Lee" } ]
Discuss about how to backup Ops-manager mongodb
2023-04-11T02:04:09.356Z
Discuss about how to backup Ops-manager mongodb
929
null
[]
[ { "code": "", "text": "Hello everyone!I’m excited to be a member of this user group. I am a freelance front-end developer who occasionally serves as a project manager. Building intuitive user interfaces and leveraging technologies such as the MERN framework to create engaging web applications is a passion of mine.Over the past four years, I’ve had the opportunity to work on a variety of initiatives, including open-source community events, Google Developer Group events, and Polygon community events. These experiences have allowed me to collaborate with talented individuals and broaden my understanding of a variety of technologies.In my spare time, I appreciate investigating the newest frontend development trends and experimenting with new frameworks and libraries. MongoDB and its capabilities as a flexible and scalable database solution fascinate me in particular. Also, writing article is one of my skill.This community group provides a venue for us to connect, learn from one another, and share our experiences, so I’m thrilled to be a part of it. I believe we can contribute to the development of our tech community and support one another in our professional endeavors by working together.I look forward to engaging in conversation, collaborating on initiatives, and exchanging ideas with you all. Feel free to contact me if you have any queries, suggestions, or would like to connect.I am delighted to be a part of this community.", "username": "Ayodele_A" }, { "code": "", "text": "Hello @Ayodele_A, nice to have you here.You’re warmly welcome to the mongoDB community", "username": "Trust_Jamin" }, { "code": "", "text": "thank you @Trust_Jamin", "username": "Ayodele_A" } ]
Hey Everyone, Ayodele Leom from Abuja, Nigeria
2023-05-20T18:24:34.845Z
Hey Everyone, Ayodele Leom from Abuja, Nigeria
745
null
[ "atlas-online-archive" ]
[ { "code": "", "text": "Mongodb provides options to archive the data to cloud object storage. We can use Mongodb’s native “Online Archive” feature or archive our data to S3 storage. And access those data through federated queries. I wanted to know which is best archiving process in terms of query read speed and cost?", "username": "Vivek_Paramasivam1" }, { "code": "", "text": "Hi @Vivek_Paramasivam1 and welcome to MongoDB community forums!!The MongoDB Online Archival feature is a fully managed service in which moves infrequently accessed data from your Atlas cluster to a MongoDB-managed read-only Federated Database Instance on a cloud object storage. In saying so, for using a managed service for archival, such as Atlas to S3, one should use Online Archival as:If you want to configure your own S3 buckets then you can configure Atlas Data Federation to access data in your AWS S3 buckets - More information regarding this on the Atlas Data Federation Overview documentation.However, using the MongoDB Online archival comes with a few limitations which could be read on the Online Archival Limitations on the documentations.And access those data through federated queries. I wanted to know which is best archiving process in terms of query read speedA performance consideration between the two is if you activate Online Archive for an AWS cluster, the cloud object storage exists in the same region in AWS as your cluster. For comparison, Atlas Data Federation provides an elastic pool of agents in the region that is nearest to your data where Atlas Data Federation can process the data for your queries. An additional note from a cost point of view, as per the Atlas Data Federations Regions documentation:To prevent excessive charges on your bill, create your Atlas Data Federation in the same AWS region as your S3 data source.The below documentations, would help you to understand the pricing information.Let us know if you have any further questions.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hey @Vivek_Paramasivam1, everything Aasawari said is a great start in how to evaluate the two options.That said, there is a lot of nuance in deciding between the two options. Generally speaking using Data Federation can allow you to fine tune a lot of different parameters to your specific use case which can allow you to get the best possible performance, that said it can become very complex and does need to be managed over time.Online Archive on the other hand is a fully managed solution, and we do have some really exciting improvements coming which will drastically improve query performance in the next few months.Let me know if you’d like to discuss this further, and feel free to drop a meeting on my calendar here if that’s easiest: Calendly - Benjamin FlastBest,\nBen", "username": "Benjamin_Flast" } ]
Archiving data to "Mongodb Online Archive" or "S3 Archive"? Which is best in performance and cost?
2023-05-16T19:31:59.693Z
Archiving data to &ldquo;Mongodb Online Archive&rdquo; or &ldquo;S3 Archive&rdquo;? Which is best in performance and cost?
1,442
null
[ "serverless" ]
[ { "code": "", "text": "I’m looking at remix.run and how they manage to deploy the server on all kind of serverless and edge infrastructure.Is it possible to write an adapter to deploy a remix.run application to a mongodb realm https endpoint?I imagine the first obstacle would be to define a wildcard https endpoint, but I’m not sure…", "username": "Vegar_Vikan" }, { "code": "", "text": "From what I read about Remix and Realm it should definitely be possible.", "username": "quang" } ]
Realm HTTPS Endpoints and functions suitable for remix.run?
2022-05-05T12:04:41.434Z
Realm HTTPS Endpoints and functions suitable for remix.run?
2,387