Dataset columns: image_url (string, 113-131 chars), tags (sequence), discussion (list), title (string, 8-254 chars), created_at (string, 24 chars), fancy_title (string, 8-396 chars), views (int64, 73-422k).
null
[ "aggregation" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"6397d344742db0acbdc8ce42\"\n },\n \"item_id\": 1,\n \"box_id\": 1,\n \"name\": \"NZ\",\n \"condition\": {\n \"name\": \"A\",\n \"id\": 5\n },\n \"apple_date\": null,\n \"apple_size_kilograms\": null,\n \"orange_date\": \"2018-12-17\",\n \"orange_size_kilograms\": 325,\n},{\n \"_id\": {\n \"$oid\": \"6397d344742db0acbdc8ce43\"\n },\n \"item_id\": 1,\n \"box_id\": 1,\n \"name\": \"NZ\",\n \"condition\": {\n \"name\": \"A\",\n \"id\": 5\n },\n \"apple_date\": \"2010-11-06\",\n \"apple_size_kilograms\": 352,\n \"orange_date\": \"2008-11-29\",\n \"orange_size_kilograms\": 234,\n}{\n \"_id\": {\n \"$oid\": \"6397d344742db0acbdc8ce47\"\n },\n \"item_id\": 3,\n \"box_id\": 1,\n \"name\": \"US\",\n \"condition\": {\n \"name\": \"F\",\n \"id\": 7\n },\n \"apple_date\": \"2017-09-17\",\n \"apple_size_kilograms\": 342,\n \"orange_date\": \"2017-06-24\",\n \"orange_size_kilograms\": 344,\n}\n[\n {\n $match: {\n box_id: 1,\n item_id: 1,\n },\n },\n {\n $facet: {\n apple: [\n {\n $match: {\n apple_size_kilograms: {\n $ne: null,\n },\n },\n },\n {\n $sort: {\n apple_size_kilograms: 1,\n },\n },\n {\n $limit: 1,\n },\n ],\n orange: [\n {\n $match: {\n orange_size_kilograms: {\n $ne: null,\n },\n },\n },\n {\n $sort: {\n orange_size_kilograms: 1,\n },\n },\n {\n $limit: 1,\n },\n ],\n },\n },\n {\n $unwind: \"$apple\",\n },\n {\n $unwind: \"$orange\",\n },\n]\n[\n{\n\t1:\n\t{ apple: {the_whole document_from_this_item-id_apple_with_the_lowest_kilograms},\n\t{ orange: {the_whole document_from_this_item-id_orange_with_the_lowest_kilograms},\n},{\n\t3:\n\t{ apple: {the_whole document_from_this_item-id_apple_with_the_lowest_kilograms},\n\t{ orange: {the_whole document_from_this_item-id_orange_with_the_lowest_kilograms},\n}\n", "text": "I have this data:I have this aggregation query, that gets me each item, with the lowest values in each category.My issues is thats just a example of the output I am after, I dont want to filter by item_id, i actually want to group by item_id. but I cant work out how. I have tried every method I can think so. Can anyone point my in the right direction please.I am really after my output to look like:Please let me know if I have posted this wrong.thanks", "username": "Zane_Shus" }, { "code": "{ \"$group\" : {\n \"_id\" : \"$item_id\"\n} } ,\n{ \"$lookup\" : {\n \"from\" : collection ,\n \"as\" : \"result\" ,\n \"localField : \"_id\" ,\n \"foreignField\" : \"_id\"\n \"pipeline\" : [\n localFeld\n ]\n} }\n", "text": "One idea is to forgo $facet as the main stage and start with a $group with _id:$item_id and then for each group perform a $lookup with a pipeline that $facet, $sort and $limit. Something that might look like:", "username": "steevej" }, { "code": "", "text": "Looks like I went to sleep before terminating my last post in this thread. B-(Please forgive me. 
I will give it another try soon.", "username": "steevej" }, { "code": "group = { \"$group\" : {\n \"_id\" : \"$item_id\"\n} }\n\norange = [\n { \"$match\" : { \"orange_size_kilograms\" : { \"$ne\" : null } } } ,\n { \"$sort\" : { \"orange_size_kilograms\" : 1 } } ,\n { \"$limit\" : 1 }\n]\n\napple = [\n { \"$match\" : { \"apple_size_kilograms\": { \"$ne\" : null } } } ,\n { \"$sort\" : { \"apple_size_kilograms\" : 1 } } ,\n { \"$limit\" : 1 }\n]\n\nfacet = { \"$facet\" : { orange , apple }}\n\n{ \"$lookup\" : {\n \"from\" : \"the-same_collection\" ,\n \"localField\" : \"_id\" ,\n \"foreignField\" : \"item_id\" ,\n \"as\" : \"result\" ,\n \"pipeline\" : [ facet ]\n} }\n\npipeline = [ group , lookup ]\n", "text": "I am back.This seems to extract the correct information. The format is not exactly want you want but some $unwind and $set and $unset stages should bring you pretty close.", "username": "steevej" } ]
Group By for an Existing Facet Query
2023-05-19T05:02:21.581Z
Group By for an Existing Facet Query
652
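To pull the thread together, here is one way the $group + $lookup idea sketched above can be written as a single mongosh pipeline. The collection name "items" is an assumption (the thread never names it), the correlated lookup uses let/$expr rather than localField/foreignField, and $arrayElemAt flattens the facet arrays. Treat it as a sketch of steevej's approach rather than a tested answer.

// One result per item_id, each carrying the whole document with the lowest
// apple_size_kilograms and the lowest orange_size_kilograms for that item.
db.items.aggregate([
  { $group: { _id: "$item_id" } },                         // one bucket per item_id
  { $lookup: {
      from: "items",                                       // look back into the same collection
      let: { itemId: "$_id" },
      pipeline: [
        { $match: { $expr: { $eq: ["$item_id", "$$itemId"] } } },
        { $facet: {
            apple: [
              { $match: { apple_size_kilograms: { $ne: null } } },
              { $sort: { apple_size_kilograms: 1 } },
              { $limit: 1 }
            ],
            orange: [
              { $match: { orange_size_kilograms: { $ne: null } } },
              { $sort: { orange_size_kilograms: 1 } },
              { $limit: 1 }
            ]
        } }
      ],
      as: "result"
  } },
  { $unwind: "$result" },                                   // $facet always emits exactly one document
  { $set: {
      apple: { $arrayElemAt: ["$result.apple", 0] },        // field is simply missing if no apple rows exist
      orange: { $arrayElemAt: ["$result.orange", 0] }
  } },
  { $unset: "result" }
])

Each output document then has the shape { _id: <item_id>, apple: { ...lowest-kilogram apple doc... }, orange: { ...lowest-kilogram orange doc... } }, which is close to the grouping Zane asked for.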
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "hello\ni have kinda big dataset (about 10 million records, 4gb). i have some keywords. i need to search these keywords in this collection and remove the documents containing any of them. the problem is my keyword list is also not small. it is about 20 thousand.\nis there a faster way other than making 20 thousand queries ?", "username": "Ali_ihsan_Erdem1" }, { "code": "", "text": "With a sophisticated enough regular expression, yes.", "username": "Jack_Woehr" }, { "code": "", "text": "oh god right why didnt i think of this before. i will do some speed testing with this. thanks.", "username": "Ali_ihsan_Erdem1" } ]
Is there a way to do "string contains" type of search with multiple keywords?
2023-06-03T22:15:34.019Z
Is there a way to do “string contains” type of search with multiple keywords?
466
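Jack_Woehr's regular-expression idea from the thread above, sketched in mongosh. The collection name "articles", the "body" field, and the short keyword list are illustrative assumptions; with roughly 20,000 keywords the alternation may need to be split into a few batches, and an unanchored "contains" regex cannot use an ordinary index, so the delete still scans the collection.

// Build one alternation from the keyword list instead of issuing 20,000 queries.
const keywords = ["spam", "lorem", "ipsum"];   // stand-in for the real 20k-entry list

// Escape regex metacharacters so every keyword is matched literally.
const escapeRe = (s) => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
const pattern = keywords.map(escapeRe).join("|");

// Remove every document whose "body" contains any of the keywords (case-insensitive).
const res = db.articles.deleteMany({ body: { $regex: pattern, $options: "i" } });
print(`deleted ${res.deletedCount} documents`);

If "contains" can be relaxed to whole-word matching, a text index with $text (or Atlas Search) avoids the full collection scan.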
null
[ "node-js" ]
[ { "code": "/Users/name/Desktop/mern/server/node_modules/mongodb-connection-string-url/lib/index.js:86\n throw new MongoParseError('Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\"');\n ^\n\nMongoParseError: Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\"\n at new ConnectionString (/Users/name/Desktop/mern/server/node_modules/mongodb-connection-string-url/lib/index.js:86:19)\n at parseOptions (/Users/name/Desktop/mern/server/node_modules/mongodb/lib/connection_string.js:191:17)\n at new MongoClient (/Users/name/Desktop/mern/server/node_modules/mongodb/lib/mongo_client.js:48:63)\n at file:///Users/name/Desktop/mern/server/db/conn.mjs:10:16\n at ModuleJob.run (node:internal/modules/esm/module_job:194:25)\n\nNode.js v19.4.0\n", "text": "I was following the “How to Use MERN Stack: A Complete Guide” and got stuck at the Server API Endpoints section. How To Use MERN Stack: A Complete Guide | MongoDBI am getting the following error when I try running “node server.mjs”I have followed the tutorial completely as I am very new to MERN, so I am not sure what I am doing wrong as my code is the same as in the tutorial.", "username": "Samarth_Grover" }, { "code": "", "text": "Check your ATLAS_URI\nCan you connect to your mongodb from the shell using above connect string?", "username": "Ramachandra_Tummala" }, { "code": "ATLAS_URI=mongodb+srv://<username>:<password>@clusterfirst.hetrgml.mongodb.net/?retryWrites=true&w=majority\nimport { MongoClient } from \"mongodb\";\n\nconst connectionString = process.env.ATLAS_URI || \"\";\n\nconst client = new MongoClient(connectionString);\n\nlet conn;\n\ntry {\n\nconn = await client.connect();\n\n} catch(e) {\n\nconsole.error(e);\n\n}\n\nlet db = conn.db(\"sample_training\");\n\nexport default db;\n", "text": "My ATLAS_URI has the correct connection string in config.envconfig.envBut when I call process.env.ATLAS_URI in conn.mjs it returns undefinedconn.mjs", "username": "Samarth_Grover" }, { "code": "", "text": "Can you connect by shell?You did not respond on this\nDid you replace userid and password without these <…>\nAny special characters in your password?\nMay be space needed before and after equal sign? Like ATLAS_URI =", "username": "Ramachandra_Tummala" } ]
Error while following "How to Use MERN Stack: A Complete Guide"
2023-06-03T00:43:07.987Z
Error while following “How to Use MERN Stack: A Complete Guide”
568
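On the process.env.ATLAS_URI returning undefined in this thread: Node does not read config.env on its own, so the variable has to be loaded before conn.mjs uses it. A minimal sketch assuming the dotenv package (the tutorial may wire this up slightly differently, for example through a separate loadEnvironment.mjs imported first):

// conn.mjs: load config.env before reading process.env (assumes npm install dotenv).
import dotenv from "dotenv";
dotenv.config({ path: "./config.env" });

import { MongoClient } from "mongodb";

const uri = process.env.ATLAS_URI || "";
if (!uri.startsWith("mongodb")) {
  // Fail fast with a clearer message than the MongoParseError shown above.
  throw new Error("ATLAS_URI is missing or malformed; check config.env");
}

const client = new MongoClient(uri);
const conn = await client.connect();
const db = conn.db("sample_training");

export default db;

Connecting with mongosh using the same string, as Ramachandra suggests, is still the quickest way to confirm the credentials themselves.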
https://www.mongodb.com/…_2_1024x576.jpeg
[ "delhi-mug" ]
[ { "code": "Security Research Engineer @ PrivadoDeveloper Advocate @ MongoDBDeveloper Advocate @ MongoDBLead - MUG Delhi NCR | Software Engineer @ SAP LabsLead - MUG Delhi NCRLead - MUG Delhi NCR | Founder @CosmoCloud | Sr. SWE @ LinkedIn", "text": "\nDelhi - MUG-26_11_20221920×1080 257 KB\nDelhi-NCR MongoDB User Group is hosting a meetup on 4th March 2023 @ MongoDB Office, Gurugram for MongoDB Community in the region.RSVP to join the Waitlist: Please click on the “ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you RSVPed. Join us for some amazing tech sessions, networking, and fun. Meet other MongoDB Developers, Enthusiasts, Customers, and Experts to get all the required knowledge and ideas you need to build your giant idea.RSVP to join the Waitlist: Please click on the “ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you’ve RSVPed. Stay tuned for more updates! In the meantime make sure you join the Delhi-NCR Group to introduce yourself and stay abreast with future meetups and discussions.Event Type: In-Person\n Location: 8th Floor, MongoDB Office, Gurugram .\n Floor 8th, Building - 10C, DLF Cyber City, Sector 24, Gurugram, Haryana 122001Please Note: We have limited seats available for the event. RSVP on the event page to express your interest and enter the waitlist. We will contact you to collect more information and confirm your attendance.Event Type: In-Person\nLocation: 8th Floor, MongoDB Office, GurugramSecurity Research Engineer @ PrivadoDeveloper Advocate @ MongoDBDeveloper Advocate @ MongoDBLead - MUG Delhi NCR | Software Engineer @ SAP Labs\nsanchit_khurana1920×1536 177 KB\nLead - MUG Delhi NCR\nshrey batra800×800 151 KB\nLead - MUG Delhi NCR | Founder @CosmoCloud | Sr. SWE @ LinkedIn", "username": "Priyanka_Taneja" }, { "code": "", "text": "Wow Excited for the event . Last Mongo DB Event at Linkedin was just Amazing . Looking forward for again wonderful Experience!!", "username": "Yash_Sisodia27" }, { "code": "", "text": "Looking forward to have you again! Do share the linkedin post with your network!", "username": "shrey_batra" }, { "code": "", "text": "Thank you sir , Sure Sir", "username": "Yash_Sisodia27" }, { "code": "", "text": "This time learning grid and graphql impl will be great", "username": "KapilChaudhary" }, { "code": "", "text": "How can I withdraw my RSVP?", "username": "Neeraj_Bhojwani" }, { "code": "", "text": "Hey Neeraj,\nYou can click on the top right green RSVP button, right below the event title, to withdraw the same ", "username": "Harshit" }, { "code": "", "text": "@Harshit @Priyanka_Taneja I want to join this event ,How can i join .", "username": "Chirag_kumar1" }, { "code": "", "text": "Hey @Chirag_kumar1 - Unfortunately, we are already booked out and have limited space! Very Sorry!\nPlease join the group to stay informed about our upcoming meetups. https://www.mongodb.com/community/forums/delhi-mug", "username": "Harshit" }, { "code": "", "text": "No worry ,Happy to join the group see u guys in upcomming events.Let me know if anyone not comming the last time i will surely replace ,Haha .Thanks", "username": "Chirag_kumar1" }, { "code": "", "text": "Hi @Harshitevent is tomorrow onwards but still did not received any entry ticket yet on mail, after filling up the quick google form ", "username": "Mohammad_Farhan" }, { "code": "", "text": "Hey @Mohammad_Farhan and Everyone!\nWe rolled out the confirmations last night. 
In case you didn’t get an email from me, that means we unfortunately were not able to accommodate you. We tried our level best, but due to limited space, it is not possible to accommodate everyone. Very Sorry!We plan to bring a much bigger spaced event next time so that we can accommodate everyone!", "username": "Harshit" }, { "code": "", "text": "ok😢it was about to be my first technical event, but will try next time!", "username": "Mohammad_Farhan" }, { "code": "", "text": "I RSVP’d this event almost 10-12 days ago, and when multiple slots were available, also filled the google form. How is it possible that I didn’t get an confirmation mail. Can you please check if its not a mistake?", "username": "Ashutosh_Dubey1" }, { "code": "", "text": "Hey @Ashutosh_Dubey1,\nUnfortunately, we have a pretty small space that can accommodate only 60-70 attendees. Instead of only looking at the time of registration, we also looked at the relevance of the talks with MongoDB experience everyone shared in the confirmation form as well to be fair to those who found out about the event a little later.We are still reaching out to the people on the list if someone is backing out or is not able to make it.The User Group Leaders are planning these events on a regular basis and there would be for sure more such events regularly happening. We plan to keep this group of people who missed out this time in the preference to make sure we are able to accommodate everyone over the course of events! ", "username": "Harshit" }, { "code": "", "text": "Hi Harshit,\nThanks for organizing such a community event. I have registered for the same but unfortunately I couldn’t join the event today. Please replace someone else who are eagerly waiting for the event.\nHope I will get opportunity in future.", "username": "Ishwar_Kumar1" } ]
Delhi-NCR MUG: MongoDB Delhi NCR March Meetup
2023-02-21T19:00:51.961Z
Delhi-NCR MUG: MongoDB Delhi NCR March Meetup
5,567
https://www.mongodb.com/…092ff940ed72.png
[ "node-js" ]
[ { "code": "", "text": "I am getting this error, please help me\n", "username": "uzim_man" }, { "code": "findfindnode-tuts.blogs", "text": "user is not allowed to do action [find] on [node-tuts.blogs]Looks like this is an Atlas cluster. The above error message indicates the database user isn’t allowed to perform a find command on the namespace mentioned above. Double check the specific user being used when the error was being generated in the Database Users section of the Atlas UI and ensure they have the correct privlidges to perform the find operation on the node-tuts.blogs namespace.Regards,\nJason", "username": "Jason_Tran" }, { "code": "SecurityDatabase AccessEditBuilt-in roleRead and write to any database", "text": "Check if this works\nIn the Atlas cluster, select Security , go to Database Access click on Edit under database user privileges select Built-in role and under the dropdown select Read and write to any database to the user. Try again or refresh your connection to see the result.", "username": "Shubham_Jaiswal2" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting Error while connecting to MongoDB using Node.js in VS code
2023-06-02T16:21:23.454Z
Getting Error while connecting to MongoDB using Node.js in VS code
604
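To confirm whether the role change described above actually reached the user, it can help to ask the server what the authenticated user may do. A small mongosh sketch; the node-tuts.blogs namespace is taken from the error message in the thread:

// Run in mongosh, connected as the same database user the Node.js app uses.
db.runCommand({ connectionStatus: 1, showPrivileges: true })
// Inspect authInfo.authenticatedUserRoles: a built-in role such as
// readWriteAnyDatabase, or readWrite/read scoped to the node-tuts database,
// is needed before the query that failed in the app will succeed:
db.getSiblingDB("node-tuts").blogs.findOne()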
null
[ "dot-net", "transactions", "serverless" ]
[ { "code": "MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n ---> System.IO.EndOfStreamException: Attempted to read past the end of the stream.\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytes(Stream stream, Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(Int32 responseTo, CancellationToken cancellationToken)\n--- End of stack trace from previous location ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.Dropbox.RemoveMessage(Int32 responseTo)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(Int32 responseTo, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.PooledConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquiredConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocol[TResult](IWireProtocol`1 protocol, ICoreSession session, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.Command[TResult](ICoreSession session, ReadPreference readPreference, DatabaseNamespace databaseNamespace, BsonDocument command, IEnumerable`1 commandPayloads, IElementNameValidator commandValidator, BsonDocument additionalOptions, Action`1 postWriteAction, CommandResponseHandling responseHandling, IBsonSerializer`1 resultSerializer, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableWriteCommandOperationBase.ExecuteAttempt(RetryableWriteContext context, Int32 attempt, Nullable`1 transactionNumber, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableWriteOperationExecutor.Execute[TResult](IRetryableWriteOperation`1 operation, RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatches(RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.Execute(RetryableWriteContext context, CancellationToken cancellationToken)\n at 
MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.Execute(IWriteBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteWriteOperation[TResult](IWriteBinding binding, IWriteOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteWriteOperation[TResult](IClientSessionHandle session, IWriteOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IClientSessionHandle session, IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.<>c__DisplayClass28_0.<BulkWrite>b__0(IClientSessionHandle session)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSession[TResult](Func`2 func, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionBase`1.<>c__DisplayClass68_0.<InsertOne>b__0(IEnumerable`1 requests, BulkWriteOptions bulkWriteOptions)\n at MongoDB.Driver.MongoCollectionBase`1.InsertOne(TDocument document, InsertOneOptions options, Action`2 bulkWrite)\n at MongoDB.Driver.MongoCollectionBase`1.InsertOne(TDocument document, InsertOneOptions options, CancellationToken cancellationToken)\n", "text": "Hi All,We are using a Serverless Instance of MongoDb using the 2.18 version of the .NET driver. During reads, the following exception occurs from time to time.System.IO.EndOfStreamException: Attempted to read past the end of the stream.There doesn’t seem to be a pattern of why this exception occurs. Any help understanding what is going on would be greatly appreciated. Full call stack below:", "username": "Rich_Levy" }, { "code": "EndOfStreamExceptionsEndOfStreamExceptionsInsertOnew: majorityInsertOneInsertOne", "text": "Hi, @Rich_Levy,Welcome to the MongoDB Community Forums. I understand that you’re occasionally seeing EndOfStreamExceptions from the MongoDB .NET/C# Driver v2.18 when using a Serverless instance.Typically EndOfStreamExceptions happen when the remote end of a network connection hangs up on a client. From the stack trace, I can see that you’re performing an InsertOne operation and the exception happens while waiting for an acknowledgement from the cluster that the write was completed successfully. Even though the default write concern in MongoDB Atlas is w: majority, I wouldn’t expect the InsertOne response to timeout waiting for the majority write to complete.I would suggest enabling Logging to help discern a pattern around when the exceptions occur. 
My educated guess is that they happen when there is a delay in receiving the write acknowledgement and an intermediate router terminates the TCP connection because it thinks it is idle.Hopefully that gives you some ideas to assist with troubleshooting.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "var data = await (await collection.FindAsync(filter, null, CancellationToken).ToListAsync(cancellationToken); MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n ---> System.IO.EndOfStreamException: Attempted to read past the end of the stream.\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(Int32 responseTo, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.Dropbox.RemoveMessage(Int32 responseTo)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(Int32 responseTo, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessageAsync(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.PooledConnection.ReceiveMessageAsync(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.ExecuteAsync(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocolAsync[TResult](IWireProtocol`1 protocol, ICoreSession session, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadOperationExecutor.ExecuteAsync[TResult](IRetryableReadOperation`1 operation, RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.ReadCommandOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n var data = await collection.Find(filter).ToListAsync(cancellationToken); MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n ---> System.IO.EndOfStreamException: Attempted to read past the end of the stream.\n at 
MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(Int32 responseTo, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.Dropbox.RemoveMessage(Int32 responseTo)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(Int32 responseTo, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessageAsync(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.PooledConnection.ReceiveMessageAsync(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.ExecuteAsync(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocolAsync[TResult](IWireProtocol`1 protocol, ICoreSession session, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadOperationExecutor.ExecuteAsync[TResult](IRetryableReadOperation`1 operation, RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.ReadCommandOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at MongoDB.Driver.IAsyncCursorSourceExtensions.ToListAsync[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\n", "text": "Hi James,Thanks for responding to my question. Here are some answers to your follow-up questions:We will enable Logging to see if this provides additional information.Do these additional call stacks help diagnose the problem? Do they point to your educated guess regarding the terminated TCP connection? 
Is there a known remedy for terminated TCP connections because mongo thinks its idle?Thanks,\nRichStack trace when using FindAsync:\nvar data = await (await collection.FindAsync(filter, null, CancellationToken).ToListAsync(cancellationToken);Stack trace when using ToListAsync:\n var data = await collection.Find(filter).ToListAsync(cancellationToken);", "username": "Rich_Levy" }, { "code": "keepaliveEndOfStreamExceptionEndOfStreamException", "text": "Hi, @Rich_Levy,Thank you for providing the additional information and stack traces. I was hoping that we would see a pattern with a particular operation, time of day, document size, or other variable.Part of the challenge is that neither the MongoDB .NET/C# Driver nor the MongoDB Atlas Serverless instance is the one terminating the TCP connection. Typically the culprit is some intermediate load balancer or router in the cloud infrastructure. We enable TCP keepalives by default, which send periodic empty TCP messages (e.g. keepalives) on the socket if there is no data traffic. This is a standard TCP mechanism to keep connections alive even when they are waiting for responses.Azure is known to have very short default idle timeouts for its Azure load balancers. Typical idle timeouts are 7200 seconds (2 hours), but Azure load balancers are set to 240 seconds (4 minutes) by default. We recently audited our Serverless infrastructure and adjusted TCP keepalive times to account for the low default timeout used by Azure. These changes were deployed in the last week and I am cautiously optimistic that this will resolve this issue since you are deployed on Azure. Please let us know if you observe any more timeouts after today.You may also wish to review Does TCP keepalive time affect MongoDB Deployments? and ensure that your default TCP keepalive settings on your app servers are configured correctly. Although we enable TCP keepalives in the .NET/C# Driver by default, Microsoft does not provide an API to modify the keepalive time on non-Windows platforms. On Linux and MacOS, we must use the OS-configured value for TCP keepalive. If you are hosting your C# application on a Linux app server, the FAQ linked above explains how to modify the operating system’s TCP keepalive value correctly.Lastly enabling the Logging API is prudent as it will provide us with additional information should the problem happen again. In particular how long the operation was inflight before the EndOfStreamException occurred.Please let me know if you have any questions and especially if you see another EndOfStreamException after the recent tweaks to our TCP keepalives in the Atlas Serverless infrastructure.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Hi James,We are in the process of deploying logging for the mongo driver for better understanding of this error. Which categories and at what log levels do you recommend for capturing the relevant information? We would like to capture the relevant information without blowing out our log sizes.Unfortunately we are still seeing the EndOfStreamException even with the recent updates to your Azure environment.We are considering adding a Retry around our Mongo calls. Is this something you recommend?\nFor the InsertOne, InsertMany, InsertOneAsync, InsertManyAsync operations is it safe to retry? For these calls, is it necessary to check if the data was persisted before retrying the command?Our configuration has Connection Pooling enabled. 
We were wondering if Connection Pooling is contributing to the problem or is in any way related. Is it advisable to turn off Connection Pooling?We are running our code in a http app server environment using linux. You recommended changing the OS TCP keepalive. If we modify the OS TCP keepalive will it have any adverse effects on other operations outside the realm of mongo?Thanks,\nRich", "username": "Rich_Levy" }, { "code": "MongoDB.CommandMongoDB.ConnectionDebugMongoDB.CommandMongoDB.ConnectionminPoolSizemaxPoolSizemaxIdleTimeMS", "text": "Hi, @Rich_Levy,I was cautiously optimistic that the infrastructure changes to TCP keepalive would address the issue that you’ve encountered. It is unfortunate that they did not.Given the nature of the problem, MongoDB.Command and MongoDB.Connection both at Debug would be a good place to start. MongoDB.Command will emit command started, succeeded, and failed messages providing timings of when commands start and complete. MongoDB.Connection will inform when connections are created, checked in/out of a pool, and terminated.Regarding whether to implement retry logic around your MongoDB calls, this is a recommended practice. Retryable reads and writes will attempt retryable operations again if they fail, but only once. Retrying once handles the most common case of an election or sporadic network failure, but does not handle more complex scenarios. Build a Resilient Application with MongoDB Atlas is a good read. Especially relevant here is the section on Error Handling.Regarding the safety of retries, the built-in retryable writes mechanism (which retries once) handles the case where a write was performed successfully but the response message was lost. If you implement your own retry mechanism as a backstop to retryable writes, you will have to implement your own idempotent transformation logic, data persistence checks, optimistic locking mechanism, or ACID transactions.ASIDE: Client-side Operations Timeouts (CSOT) will more elegantly solve this problem by allowing multiple retries within a specified timeout period, but has not been implemented in the .NET/C# Driver yet. Please follow CSHARP-3393 for updates.Regarding connection pooling, the .NET/C# Driver performs connection pooling automatically, which amortizes the cost of connection creation through connection re-use. There is no way to disable connection pooling, but you can adjust various pooling parameters, such as minPoolSize, maxPoolSize, maxIdleTimeMS, and other pooling parameters. Based on the observed behaviour, I don’t believe that adjusting connection pooling parameters will help address the issue.Regarding the TCP keepalive change, it is configured on Linux at the operating system level. While it can be configured on the per-socket level, the .NET socket library does not expose these parameters to us. Adjusting the TCP keepalive for the operating system will affect all TCP socket connections, but generally does not have an adverse affect. Keepalives are small packets (64 bytes over ethernet) designed to keep a connection alive even when no data is transitting the connection. If data is actively being exchanged in either direction, no keepalives will be sent. It is only when socket traffic goes quiet that keepalives are sent.I’m happy to answer any additional questions you might have. I do understand that you are working with our technical services team as well. Please share any logs and diagnostics with them. 
They can then route that data to me for analysis so that you don’t have to share it in a public forum.Sincerely,\nJames", "username": "James_Kovacs" } ]
EndOfStreamException: Attempted to read past the end of the stream
2023-05-18T18:53:57.515Z
EndOfStreamException: Attempted to read past the end of the stream
1,414
null
[ "queries", "data-modeling", "crud" ]
[ { "code": "{\n total: \"23.42\",\n applyTax: true\n}\nTrueFalse", "text": "MongoDB states that updating single documents is atomic and there is no question about it. However, it is rarely the case when only updating the document is enough, even if/ when working with a single document only. Here is what I mean:Sample document structure:A User 1 does the following:Simultaneously, or through a different endpoint, a User 2 does the following:Suppose the User 2’s update happens milliseconds after the User1’s step one. Would that skew the data? Is that a valid operational concern regarding atomicity? Or is such a scenario highly unlikely?If this is indeed potentially problematic, how can this be obviated? Thank you.", "username": "Vladimir" }, { "code": "", "text": "Would that skew the data?yes.Is that a valid operational concern regarding atomicity?Yes, but it’s a concern to you. Mongodb won’t care.The problem is that read-then-update is not an atomic operation.Mongo’s statement on single doc atomicity is that any update on a single doc matching a specific filter is atomic. This is different from read-then-write in application layer. (though a lock equivalent mechanism is still there)how can this be obviated?You can use a transaction, or make sure this will not happen from application code. (e.g. use lease from threads)", "username": "Kobe_W" }, { "code": "", "text": "Thank you for your reply. Regarding your transaction suggestion, I was thinking of that too. The issue here is that that would introduce quite a few transactions in the code. Because of that I have 2 concerns:Thank you very much @Kobe_W !", "username": "Vladimir" }, { "code": "", "text": "Hi @VladimirI think @Kobe_W has answered most of the question here, but I’d like to add my 2 cents as well.I have watched quite a few official MongoDB youtube videos on transactions and most of them transmit this vibe that if you have to reach for transactions in MongoDB, chances are you are doing something wrong. But in this use case, a transaction seems to be a valid choice, right?Yes this is why transaction was added to MongoDB. There are some workflow that necessitates modifying multiple documents atomically, and perhaps there’s no way around that fact. In those cases, then using transaction is definitely the right way forward.Suppose there is a lot of logic happening between the READ and UPDATE operations. Is it a bad idea to put a lot of code inside the transaction’s callback?Depends on how much code and what they’re doing, I think If the User 2 hits the resource while it is still locked, will that return a TRANSIENT error?There are different possible errors that can happen in a transaction. Transient transaction error generally means that it’s safe to retry, but the driver does not retry this automatically. See transaction error handling for more details.MongoDB gives you a lot of freedom to design your schema. But in many cases, using SQL design methodologies is the default mindset, since we live with SQL for so long. SQL practically depends on the existence of transactions since an entity’s data is usually spread across many different tables. Thus, to modify that entity, you’ll need transactions to modify it atomically.In contrast, MongoDB allows you to store an entity’s data as-is inside a single document. For myself, it’s helpful to think 1 entity == 1 document.However different workflows have different requirements. Sometimes you need to modify multiple entities in a single command. 
This is where MongoDB’s transaction can help.Otherwise, there are certain design patterns that may be able to help you minimize transaction use and maximize concurrency, with various levels of tradeoff.Best regards\nKevin", "username": "kevinadi" }, { "code": "a = 1(a == 1)....update b to 2a=1", "text": "We already have a lot of useful information here.To complement @kevinadi 's answer to our specific questions:Suppose there is a lot of logic happening between the READ and UPDATE operations. Is it a bad idea to put a lot of code inside the transaction’s callback?It’s always best to avoid keeping a transaction for too long time. The reason is transaction consumes resources, (especially in a sharded cluster) and can hold locks for write operations (i recall write locks are only released upon transaction completion). A long life-time transaction can give you trouble.If the User 2 hits the resource while it is still locked, will that return a TRANSIENT error?You can search for “transaction write conflict”. When a transaction B tries to modify a same doc that has been already modified by in-progress transaction A (and thus locked), it will raise this error and then abort. (i remember there’s one post asking why transaction B can’t be put into “blocked” state instead).Depending on your detailed logic. read-then-write sometimes doesn’t need a transaction.Let’s say you read a = 1, then do something in if (a == 1).... section, then update b to 2. In this simple case, for the write you can just use a=1 as the update filter. By this way, you only update b to 2 if a=1 still holds. (otherwise during your logic, some other requests have modified a’s value). So no need for a transaction.", "username": "Kobe_W" }, { "code": "TransientTransactionErrorTransientTransactionError", "text": "There are different possible errors that can happen in a transaction. Transient transaction error generally means that it’s safe to retry, but the driver does not retry this automatically. See transaction error handling for more details.You can search for “transaction write conflict”. When a transaction B tries to modify a same doc that has been already modified by in-progress transaction A (and thus locked), it will raise this error and then abort. (i remember there’s one post asking why transaction B can’t be put into “blocked” state instead).Thank you for your detailed replies. I do understand the meaning of the Transaction error and that an attempted update operation on a resource that has not released its write lock will result in an error. What I’d like to specifically learn is if this specific case will trigger the TransientTransactionError . I have watched all the videos on MongoDB TXs on youtube and I had read your linked mongodb documentation articles previously, they do explain what a TransientTransactionError is and that it is up to the developer to identify it and address it, but I could not find a single mention whether it is thrown when attempting to modify a document that still has a write lock on it.Depends on how much code and what they’re doing, I thinkSorry, I should have provided more context. It is a busy endpoint with a bunch of other roundtrips to MongoDB Atlas. An update operation normally takes around 200-300ms to complete without transactions. Also, it just feels very untidy having to push all of that inside a transaction callback with potentially inadvartent side effects (getting locks for queries that don’t need them by themselves). 
In general, all the articles and videos about transactions in MongoDB have so many disclaimers that I always get the feeling that unless the project is a national bank, transactions should be avoided and a better solution is right around the corner, one just has to identify it for their project. Transactions also seem like a very impactful decision because they require one to know what and when exactly documents are locked to prevent dead locks and/ or reduced performance.Depending on your detailed logic. read-then-write sometimes doesn’t need a transaction.Thank you very much @Kobe_W ! I have learned very recently about the Optimistic Concurrency Control, is your suggestion that?", "username": "Vladimir" }, { "code": "TransientTransactionError", "text": "if this specific case will trigger the TransientTransactionErrori’m not sure. Based on my research, it seems no. following links can be relatedis your suggestion that?No. i’m not talking about optimistic locking. I just wanted to give you an example about how to use the mongodb atomicity guarantee more flexibly to avoid explicit transactions", "username": "Kobe_W" } ]
Single document atomicity question
2023-06-01T12:25:27.875Z
Single document atomicity question
597
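A short sketch of the closing idea in this thread: keep the read and the update separate, but repeat the fields the decision was based on in the update filter, so a concurrent change (User 2 flipping applyTax) makes the write match nothing instead of silently skewing the data. The invoices collection, the computeTotal helper, and the Node.js driver style are assumptions for illustration.

// Read, decide, then update only if the document is still in the state we read.
const invoice = await invoices.findOne({ _id: invoiceId });

const newTotal = computeTotal(invoice);   // application logic based on the values read

const res = await invoices.updateOne(
  { _id: invoiceId, applyTax: invoice.applyTax, total: invoice.total },   // precondition in the filter
  { $set: { total: newTotal } }
);

if (res.matchedCount === 0) {
  // applyTax or total changed in between: re-read and retry, or surface a conflict.
}

When the logic genuinely spans several documents or cannot be folded into a single filtered update, session.withTransaction() remains the right tool, as discussed above.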
null
[]
[ { "code": "sudo systemctl status mongodbactive (running)mongo", "text": "Hi,\ni have a problem to start the mongo Shell over the Terminal in Ubuntu 20.04.\ni Have installed Mongodb like the description in the MongoDB Doc.\nIf i check the status with sudo systemctl status mongodb i can see the Active Status is active (running).if i want to start the shell with mongo command in Terminal, so i get this Error:MongoDB shell version v5.0.18\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1Thanks for you help!", "username": "Senel_Ekiz" }, { "code": "", "text": "You should be using latest shell mongosh\nCheck your mongod.log\nIs mongod up and accepting connections?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "No process is listening on that port. That’s why.So you will need to check if mongodb server is indeed listening on that port as said in above comment. (likely, no), and if not, fix it", "username": "Kobe_W" }, { "code": "logpath=/var/log/mongodb/mongodb.logmongod.lock6965service mongodb statusActive: active (running)", "text": "i check the mongodb.conf to see where the log files from mongodb is saved.\nThe path is → logpath=/var/log/mongodb/mongodb.logBut in this path is not a file like mongodb.log. There are only a file mongod.lock, and in this file is only the number 6965. i think it is a processID.And how can i check that mongod is up and accepting connections?\nWith service mongodb status, i can see the Status Active: active (running).I have an external server with ubuntu 20.04. and i use Visual Studio Code to connect over the Terminal to the Server. Should i use the same port number, which i use to connect with ubuntu Server, or has nothing to do with it?", "username": "Senel_Ekiz" }, { "code": "", "text": "If your mongod is up you should be able to connect\nAlso mongod.log should be there\nAre you checking the correct location?\nIf there are permission issues it’s possible mongod is not able to create logfile and terminating?\nDid you try stop/start service?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "If I understand correctly the following:I have an external server with ubuntu 20.04. and i use Visual Studio Code to connect over the Terminal to the Server1 - you have 2 machines\n2 - you are running mongod on machine-1\n3 - you are trying to connect with mongo on machine-2 using 127.0.0.1Read about localhost to understand why it does not work.Localhost or 127.0.0.1 refers to the same machine as you are running. So on machine-2, 127.0.0.1 is machine-2 and on machine-1, 127.0.0.1 is machine-1. 
From machine-2 you have to specify the host name or IP address of machine-1 in order to connect to mongod running on machine-1.", "username": "steevej" }, { "code": "2023-06-01T09:36:59.137+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends\n2023-06-01T09:36:59.139+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...\n2023-06-01T09:36:59.139+0000 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-10000.sock\n2023-06-01T09:36:59.142+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture\n2023-06-01T09:36:59.145+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down\n2023-06-01T09:36:59.215+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...\n2023-06-01T09:36:59.215+0000 I CONTROL [signalProcessingThread] now exiting\n2023-06-01T09:36:59.215+0000 I CONTROL [signalProcessingThread] shutting down with code:0\n\n", "text": "You’re right. It was my failure. The path to mongodb.log was wrong.\nInside the log file is:I have also stop and start the service, but it doesn´t solve the problem.", "username": "Senel_Ekiz" }, { "code": "localhostservice mongodb status.\n.\nActive: active (running) since Thu 2023-06-01 17:46:43 UTC; 2min 36s ago\n.\n.\nmongomongo", "text": "I have connect over the terminal from Visual Studio Code with SSH to the external Server. So im on the shell from the external machine. I have installed mongodb over the npm Manager.\nI know what localhost means.\nMongodb is installed on the same machine, where i have an ssh access to the shell.if i check the status from mongodb with service mongodb status in the terminal i get:That means that my mongodb is correctly installed an running. Right?Normally i should be able start the mongo-shell with the command mongo. Right?\nMaybe, i use the wrong command to start the mongo shell on the server. But in the examples on the internet explains, in order to start he shell, you scould use the command mongo", "username": "Senel_Ekiz" }, { "code": "Active: active (running)shutdown: going to close listening sockets.../tmp/mongodb-10000.sockss -tlnp\nps -aef | grep [m]ongo\n", "text": "Some of your posts contradict themself. In some the server isActive: active (running)and in someshutdown: going to close listening sockets...In addition the log/tmp/mongodb-10000.sockseems to indicate that you specified a port number in the configuration file.Please share the output of the commands run in the remote ssh shell on the serverThe command mongo is deprecated and the replacement is mongosh.", "username": "steevej" }, { "code": "State Recv-Q Send-Q Local Address:Port Peer Address:Port Process \nLISTEN 0 4096 0.0.0.0:10000 0.0.0.0:* users:((\"mongod\",pid=10375,fd=11)) \nLISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* users:((\"systemd-resolve\",pid=561,fd=13)) \nLISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:((\"sshd\",pid=630,fd=3)) \nLISTEN 0 100 0.0.0.0:25 0.0.0.0:* users:((\"master\",pid=1312,fd=13)) \nLISTEN 0 511 127.0.0.1:38011 0.0.0.0:* users:((\"node\",pid=11744,fd=18)) \nLISTEN 0 100 [::]:25 [::]:* users:((\"master\",pid=1312,fd=14)) \nmongodb 10375 1 0 17:46 ? 00:00:15 /usr/bin/mongod --config /etc/mongodb.conf\nroot 11896 11809 0 19:27 ? 
00:00:00 /root/.vscode-server/bin/b3e4e68a0bc097f0ae7907b217c1119af9e03435/node /root/.vscode-server/extensions/mongodb.mongodb-vscode-1.0.1/dist/languageServer.js --node-ipc --clientProcessId=11809\n", "text": "ss -tlnp:ps -aef | grep [m]ongo:I myself did not specify any port number in the config.Thanks for your help.", "username": "Senel_Ekiz" }, { "code": "mongosh --port 10000\n", "text": "This confirms that mongod is listening to the non-standard 10000 port rather that the default 27017.Since youdid not specify any port number in the configit would be nice if you could share the configuration file and share the location from where you took it.According to documentation you should connect to the non-standard port 10000 with", "username": "steevej" }, { "code": "mongosh --port 10000/etc/mongodb.confPORT 10000mongosh", "text": "Thank you a lot. That was the problem.\nWith mongosh --port 10000 it was possible to start the Bash.\nAnd yes the port number in the mongodb.conf, which is in the Path /etc/mongodb.conf, was set to PORT 10000.\nI change it to the default Port number and now i can start the bash with mongosh.\nI don’t know why the default port wasn’t set during installation, but now its work.", "username": "Senel_Ekiz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't start mongo shell in Ubuntu 20.04 (focal)
2023-05-31T22:30:28.799Z
Can't start mongo shell in Ubuntu 20.04 (focal)
1,047
null
[ "data-modeling" ]
[ { "code": "", "text": "I am working on an e-commerce project and need help with the database architecture/schema design with Node.js. If anyone can help me with this, it would be greatly appreciated.", "username": "Amiya_Panigrahi" }, { "code": "", "text": "It would really help to show us how far you have got.\nWhat have you tried before asking for help?", "username": "kabonge_muhamadi" } ]
E-commerce Database schema design
2021-05-04T09:15:01.438Z
E-commerce Database schema design
5,665
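The thread above never reached specifics, so purely as an illustrative starting point (every collection and field name below is an assumption, not something from the discussion): a common MongoDB shape for e-commerce embeds a snapshot of each line item inside the order document, so an order can be displayed without joining back to a products collection.

// Illustrative order document for an e-commerce model (mongosh).
db.orders.insertOne({
  userId: ObjectId(),                        // reference to a document in "users"
  status: "pending",
  createdAt: new Date(),
  items: [                                   // line items embedded as a point-in-time snapshot
    { productId: ObjectId(), name: "Blue T-shirt", unitPriceCents: 1999, qty: 2 },
    { productId: ObjectId(), name: "Coffee mug",   unitPriceCents: 899,  qty: 1 }
  ],
  totalCents: 4897,
  shipping: { line1: "1 Example St", city: "Springfield", country: "US" }
})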
null
[]
[ { "code": "", "text": "Hello Everyone!\nI recently joined the Developer Relations Team, as a Senior Community Manager to focus on user groups in the community. I have been a great fan of MongoDB for a long time now and am excited to join the team and the community here. I am very passionate about user groups and believe in how they can bring like-minded people in a region, organization, or institution together to share, grow and even take care of each other. Would love to hear any ideas you have in making the user groups a success here at MongoDB.What have I been doing until now?\nBefore joining MongoDB, I was managing Developer Relations Programs at Topcoder for almost 8 years. In these 8 years, I fell in love with the concept called community(family)(developer communities in particular ). I got the opportunity to be involved with the community, design/execute community campaigns and programs, train the community, run events, establish cross-community collaborations and partnerships to grow and engage the 1.6M+ developer community at Topcoder.Excited to join MongoDB and looking forward to collaborating with everyone and using all the experience and learnings to grow our community here! Hurray!Oh! I am currently in Delhi, India, and hopefully, will be back in Singapore soon. So if you are around, let’s catch up. Feel free to reach out on LinkedIn and Twitter.Looking forward to being a part of this amazing community and interacting with everyone soon!\nHarshit", "username": "Harshit" }, { "code": "", "text": " Welcome to the team @Harshit – great to have you onboard Cheers,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Welcome to the team @Harshit !!", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "Hello @Harshit\nwelcome to our amazing community, I am looking forward working with you.\nRegards,\nMichaelAnd if all go for gifs - let’s have a welcome dance\n", "username": "michael_hoeller" }, { "code": "", "text": "Thanks, @Stennie_X @Arkadiusz_Borucki and @michael_hoeller!\nLooking forward to meeting you all soon (hopefully in person) Here’s my own gif - Thank you all for the warm welcome!\nPS: Ask me the gif story when we meet! ", "username": "Harshit" }, { "code": "", "text": "Can You Please Check Your Email Or LinkedIn Message It’s Regarding Tomorrow Event And It’s Urgent", "username": "Himanshu_Singh6" } ]
🌱 Hello Everyone - Harshit here from MongoDB! ☜(⌒▽⌒)☞
2021-11-16T09:53:21.815Z
:seedling: Hello Everyone - Harshit here from MongoDB! ☜(⌒▽⌒)☞
5,087
null
[ "aggregation" ]
[ { "code": "[\n { $match: selectors },\n { $group: {\n _id: '$status',\n count: { $sum: 1 },\n anotherField: { $sum: { $cond: { if: { $eq: ['$isVerified', true] }, then: 1, else: 0 } } },\n } },\n ]\nanotherField: { $sum: { $cond: [{ $eq: ['$isVerified', true] }, 1, 0] } },\n", "text": "Hi all, I have this aggregation:if i change the anotherField to a turnary operator like:will it be faster, since the if/then/else is removed?", "username": "nicoskk" }, { "code": "", "text": "Hello @nicoskk,It’s a $cond conditional expression operator, there are two syntaxes, and both are the same in performance.", "username": "turivishal" }, { "code": "explain", "text": "Hi @turivishal , thanks for replying, so performance is not affected not matter what the collection size is? When i run explain the results are inconsistent", "username": "nicoskk" }, { "code": "", "text": "Hi @nicoskk,I am saying both syntaxes’ performance is the same,\nAre you saying that both takes different execution times?\nIf yes then it is possible you will get difference in milliseconds, but if you have a problem with any specific syntax and getting a major difference then share the explain object for both the syntax.", "username": "turivishal" } ]
Ternary operator vs if/then/else comparison
2023-06-02T08:40:12.949Z
Ternary operator vs if/then/else comparison
342
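For reference, the two $cond spellings from this thread side by side; as turivishal says they are the same operator, so a difference in explain timings is run-to-run noise rather than one form being faster. The collection name orders is an assumption; the fields come from the question.

// Document form and array form of $cond: identical behaviour, different syntax.
const keywordForm = { $sum: { $cond: { if: { $eq: ["$isVerified", true] }, then: 1, else: 0 } } };
const arrayForm   = { $sum: { $cond: [ { $eq: ["$isVerified", true] }, 1, 0 ] } };

db.orders.aggregate([
  { $match: { status: { $exists: true } } },   // stand-in for the thread's "selectors"
  { $group: { _id: "$status", count: { $sum: 1 }, anotherField: arrayForm } }
]);

Swapping arrayForm for keywordForm and comparing explain("executionStats") output for both should show the same plan and, over repeated runs, no consistent timing gap.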
null
[ "replication" ]
[ { "code": "storage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n oplogMinRetentionHours: 24\n\nsystemLog:\n destination: file\n logAppend: true\n logRotate: reopen\n path: /var/log/mongodb/mongod.log\n\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n fork: true\n pidFilePath: /run/mongodb/mongod.pid\n\nreplication:\n replSetName: test-replica-name\n", "text": "Hi, just updated my test server (Ubuntu 20.04) from a previous 5.x release to 5.18 and I’m now getting this error when restarting:Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding “processManagement.fork” to falseMongod is configured to run as a single-node replicaset on this server, more details here:", "username": "Jean-Francois_Lebeau" }, { "code": "", "text": "Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding “processManagement.fork” to falseCheck this link", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I saw this, but it comes with no explanation and looks like a word-around more than a proper solution.", "username": "Jean-Francois_Lebeau" }, { "code": "", "text": "What does mongod.log shows\nSo mongod does not start?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Mongod does start, but systemd doesn’t detect it as running so it then triggers a shutdown. I’m using the default unit file. All of this is related to the changes down starting with 5.15 related to forking.If I remember correctly, forking was needed to achieve a sane behavior with logrotate.", "username": "Jean-Francois_Lebeau" }, { "code": "", "text": "Try to comment the fork parameter in config file and start the service\nMay be it is already taken care in the mongod.service", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I reverted to 5.13 until I have a better understanding of this.", "username": "Jean-Francois_Lebeau" }, { "code": "", "text": "Hi @Jean-Francois_LebeauPlease note that MongoDB 5.13, 5.15, and 5.18 are not official MongoDB versions. Did you mean MongoDB 5.0.18 instead?Either way, could you post how did you install MongoDB, and whether you’re using Docker or similar?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Obviously I meant 5.0.x. No docker, standard Ubuntu install (apt).5.0.13 works with the forking option, but not 5.0.18, I recall that forking was needed for logrotate.", "username": "Jean-Francois_Lebeau" } ]
MONGODB_CONFIG_OVERRIDE_NOFORK after upgrade to 5.18
2023-05-31T17:42:32.050Z
MONGODB_CONFIG_OVERRIDE_NOFORK after upgrade to 5.18
2,719
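A minimal sketch of the configuration change that resolves the systemd conflict described above, assuming the stock mongod.service shipped with the 5.0.18 package (which supervises mongod in the foreground and sets MONGODB_CONFIG_OVERRIDE_NOFORK=1): drop fork from processManagement and let systemd manage the process; logRotate: reopen should keep working without forking, since log reopening does not depend on the fork option.

# Sketch only; the rest of mongod.conf stays as in the thread.
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
  # fork: true        <- removed: the packaged unit overrides it to false anyway

systemLog:
  destination: file
  logAppend: true
  logRotate: reopen   # reopen-on-rotate does not require forking
  path: /var/log/mongodb/mongod.log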
null
[ "time-series" ]
[ { "code": "", "text": "Hi,Im a lifelong MySQL/Postgres user and I decided to use MongoDB for a project recently.\nI have a very simple and small timeseries collection( indexed on timestamp field ) with 100 000 documents total.\nWhen i run a simple query to return all the documents sorted by timestamp, I get an error that the maximum memory limit is reached( the query requires 100Mb or so according to .explain() ). After I enabled allowDiskUese(), the query works, but it still takes 500ms or so to execute.Is this performace expected and normal?Im asking this because ive worked with MySQL databases and sorted tables with millions of rows and it usually takes 10-20 milliseconds to execute the query. Even if I throw in some joins, it’s still faster than simply sorting 100 000 rows in MongoDB.So, I’m assuming that I’m doing something wrong. Im sorting on {timestamp: -1} and I have an index {timestamp: -1}So, the data should already be indexed and also sorted( since time series collections are already sorted by default if i remember correctly )", "username": "Rainer_Plumer" }, { "code": "explain", "text": "what’s your sort query like? what’s output of explain?", "username": "Kobe_W" }, { "code": "\n\n{\n \"explainVersion\" : \"1\",\n \"stages\" : [\n {\n \"$cursor\" : {\n \"queryPlanner\" : {\n \"namespace\" : \"64356bd6bfd6bd1c5a5b688f_test.system.buckets.stockdatapoints\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n\n },\n \"queryHash\" : \"17830885\",\n \"planCacheKey\" : \"17830885\",\n \"maxIndexedOrSolutionsReached\" : false,\n \"maxIndexedAndSolutionsReached\" : false,\n \"maxScansToExplodeReached\" : false,\n \"winningPlan\" : {\n \"stage\" : \"COLLSCAN\",\n \"direction\" : \"backward\"\n },\n \"rejectedPlans\" : [\n\n ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 26583.0,\n \"executionTimeMillis\" : 483.0,\n \"totalKeysExamined\" : 0.0,\n \"totalDocsExamined\" : 26583.0,\n \"executionStages\" : {\n \"stage\" : \"COLLSCAN\",\n \"nReturned\" : 26583.0,\n \"executionTimeMillisEstimate\" : 4.0,\n \"works\" : 26585.0,\n \"advanced\" : 26583.0,\n \"needTime\" : 1.0,\n \"needYield\" : 0.0,\n \"saveState\" : 33.0,\n \"restoreState\" : 33.0,\n \"isEOF\" : 1.0,\n \"direction\" : \"backward\",\n \"docsExamined\" : 26583.0\n },\n \"allPlansExecution\" : [\n\n ]\n }\n },\n \"nReturned\" : NumberLong(26583),\n \"executionTimeMillisEstimate\" : NumberLong(30)\n },\n {\n \"$match\" : {\n \"$expr\" : {\n \"$lte\" : [\n {\n \"$subtract\" : [\n \"$control.max.timestamp\",\n \"$control.min.timestamp\"\n ]\n },\n {\n \"$const\" : NumberLong(86400000)\n }\n ]\n }\n },\n \"nReturned\" : NumberLong(26583),\n \"executionTimeMillisEstimate\" : NumberLong(70)\n },\n {\n \"$_internalUnpackBucket\" : {\n \"exclude\" : [\n\n ],\n \"timeField\" : \"timestamp\",\n \"metaField\" : \"meta\",\n \"bucketMaxSpanSeconds\" : 86400.0,\n \"assumeNoMixedSchemaData\" : true,\n \"includeMinTimeAsMetadata\" : true\n },\n \"nReturned\" : NumberLong(163877),\n \"executionTimeMillisEstimate\" : NumberLong(238)\n },\n {\n \"$_internalBoundedSort\" : {\n \"sortKey\" : {\n \"timestamp\" : -1.0\n },\n \"bound\" : {\n \"base\" : \"min\",\n \"offsetSeconds\" : NumberLong(86400)\n },\n \"limit\" : NumberLong(0)\n },\n \"totalDataSizeSortedBytesEstimate\" : NumberLong(149418079),\n \"usedDisk\" : false,\n \"spills\" : NumberLong(0),\n \"nReturned\" : NumberLong(163877),\n \"executionTimeMillisEstimate\" : NumberLong(401)\n }\n ],\n \"serverInfo\" : {\n \"host\" : 
\"ac-dw7yihy-shard-00-01.mc6ycp2.mongodb.net\",\n \"port\" : 27017.0,\n \"version\" : \"6.0.6\",\n \"gitVersion\" : \"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\"\n },\n \"serverParameters\" : {\n \"internalQueryFacetBufferSizeBytes\" : 104857600.0,\n \"internalQueryFacetMaxOutputDocSizeBytes\" : 104857600.0,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\" : 16793600.0,\n \"internalDocumentSourceGroupMaxMemoryBytes\" : 104857600.0,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\" : 33554432.0,\n \"internalQueryProhibitBlockingMergeOnMongoS\" : 0.0,\n \"internalQueryMaxAddToSetBytes\" : 104857600.0,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : 104857600.0\n },\n \"command\" : {\n \"aggregate\" : \"system.buckets.stockdatapoints\",\n \"pipeline\" : [\n {\n \"$_internalUnpackBucket\" : {\n \"timeField\" : \"timestamp\",\n \"metaField\" : \"meta\",\n \"bucketMaxSpanSeconds\" : 86400.0,\n \"assumeNoMixedSchemaData\" : true,\n \"usesExtendedRange\" : false\n }\n },\n {\n \"$sort\" : {\n \"timestamp\" : -1.0\n }\n }\n ],\n \"cursor\" : {\n\n },\n \"collation\" : {\n\n }\n },\n \"ok\" : 1.0,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1685651310, 1),\n \"signature\" : {\n \"hash\" : BinData(0, \"v84A1OLjrUT4fEWuCOhH3HFz9ZQ=\"),\n \"keyId\" : NumberLong(7200384509919887361)\n }\n },\n \"operationTime\" : Timestamp(1685651310, 1)\n}\n", "text": "The query is blank {}, sort is {timestamp: -1}\nExplain Output:", "username": "Rainer_Plumer" }, { "code": "executionStats", "text": "executionStatsFrom this section you can see it is doing a collection scan/full table scan.Looks like the docs are already sorted in that order so there’s no sort stage. (maybe that’s one reason why time series is special).I believe most time it spends is to read everything from disk. (26583 full rows data from disk, which can be very slow)", "username": "Kobe_W" } ]
Is sorting in MongoDB supposed to be very slow and memory consuming
2023-06-01T14:05:00.047Z
Is sorting in MongoDB supposed to be very slow and memory consuming
716
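One practical takeaway from the explain plan above (a backward COLLSCAN that unpacks every bucket) is that the sort only gets cheap when the server can limit which buckets it has to unpack; the sketch below bounds the query by time and, alternatively, by a limit. Collection and field names are the ones from the thread; the date range is invented for illustration.

// Unpack only the buckets that can contain the requested window, then sort that subset.
db.stockdatapoints.find(
  { timestamp: { $gte: ISODate("2023-05-01"), $lt: ISODate("2023-06-01") } }
).sort({ timestamp: -1 })

// Or cap how much has to be held for the sort at all:
db.stockdatapoints.find({}).sort({ timestamp: -1 }).limit(1000)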
null
[ "security", "configuration" ]
[ { "code": "", "text": "Hello, I am a long time mongodb user. I recently found out that even I set a password to my mongodb and set security features from my mongod.cfg likesecurity:\nauthorization: enabled\nsetParameter:\nenableLocalhostAuthBypass: falseIf it is binded to 0.0.0.0, any user can connect to my database from remote.When they connect to the database without a password, they cant see databases or collections but can run scripts. Even if they cant read or write to database, this is a security risk even if they can run simple scripts they can consum cpu etc…Am I missing something? I tried a lot of parameters. I want only authenticated people to connect to the database.", "username": "Aytek_Ustundag" }, { "code": "db.collection.drop()", "text": "Hi @Aytek_Ustundag welcome to the community!I want only authenticated people to connect to the database.I’d like to turn that question around. What if you deliberately restrict everyone from connecting to the database?Of course this depends on your use case. However, if the goal is to provide data access to many people in a limited context (e.g. they’re not DBAs), then how about creating an e.g. REST API interface in front of the database? This way, you can put the database behind very secure firewall, and only allow connection from the REST API app. As a bonus, it can act as a shield since it’s not possible for people to accidentally call db.collection.drop() unless your API allows it.Best regards\nKevin", "username": "kevinadi" } ]
Disabling connections to mongodb server without a password
2023-06-01T16:57:29.214Z
Disabling connections to mongodb server without a password
771
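Complementing the answer above: if the concern is remote, unauthenticated TCP connections reaching mongod at all, the usual hardening is to bind only to the interfaces that genuinely need to serve clients and to firewall port 27017, rather than exposing 0.0.0.0. A sketch of the relevant mongod.conf sections; 10.0.0.5 is a made-up private address for the host running mongod.

net:
  port: 27017
  bindIp: 127.0.0.1,10.0.0.5   # loopback plus this host's private interface only, not 0.0.0.0
security:
  authorization: enabled
setParameter:
  enableLocalhostAuthBypass: false
# In addition, restrict inbound 27017 at the OS or cloud firewall to the application servers' addresses.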
null
[ "aggregation", "queries", "node-js", "crud", "mongoose-odm" ]
[ { "code": "const UserSchema = new mongoose.Schema({\n firstName: String,\n lastName: String,\n homeFeeds:[{type: Schema.Types.ObjectId, requried: true, ref: \"Activity\"}];\n}); // User , is the referenece name\n\nconst ActivitySchema = new mongoose.Schema({\n requester: {type: Schema.Types.ObjectId, requried: true, ref: \"User\"},\n message: String,\n recipient: {type: Schema.Types.ObjectId, requried: true, ref: \"User\"},\n}) // Activity, is the reference name\nawait User.find({_id: ID})\n .populate(\"homeFeeds\", \"requester\")\n .updateMany({\n $pull: {\n homeFeeds.requester: ID\n }\n });\n", "text": "So I have a situation where I need to delete elements in an array of reference / ObjectIds, but the delete condition will be based on a field in the reference.For example, I have the following schemas:Now I need to delete some of the homeFeeds for a user, and the ones that should be deleted need to be by certain requester. That’ll require the homeFeeds (array of 'Activity’s) field to be populated first, and then update it with the $pull operator, with a condition that the Activity requester matches a certain user.I do not want to read the data first and do the filtering in Nodejs/backend code, since the array can be very long.Ideally I need something like:But it does not work, Id really appreciate if anyone can help me out with this?Thanks", "username": "Shaun_Zeng" }, { "code": "$merge$lookup", "text": "Originally I was thinking you can do this using aggregation pipeline update syntax (new in 4.2). I’m not familiar with Mongoose so I’m not sure I understand your schema nor do I know if they support this syntax for updates but it allows referencing fields in the document in update expression.But looking at this a bit closer it seems like it’s fields stored in another document that you would want to base your update on, right?Can you provide a sample document from User and Activity collections - it’s possible that this can be done using $merge stage in aggregation (along with $lookup) but again, Mongoose would have to support it (otherwise you can do it in the shell if it’s a one-off operation).Asya", "username": "Asya_Kamsky" }, { "code": "const friendId = req.body.friendId;\n\n const userId = req.params.id;\n\n User.findByIdAndUpdate(\n\n userId,\n\n { $pull: { friends: friendId } },\n\n { new: true }\n\n )\n", "text": "", "username": "Abdurraouf_Sadi" }, { "code": "", "text": "If you wanna delete some ObjectId ref its simple like this using $in with updateManyUser.updateMany({},\n{ $pull: { homeFeeds: { $in: idToDelete } } }, { new: true }\n)if you wanna delete ObjectId ref at specific id User document the same but with updateOneUser.updateOne({ _id: idUser },\n{ $pull: { homeFeeds: { $in: idToDelete } } }, { new: true }\n)", "username": "Fabian_Armando_Yapura_Claros" } ]
How can I populate reference, and delete element in array after based on the ID of the reference
2022-08-25T02:34:17.519Z
How can I populate reference, and delete element in array after based on the ID of the reference
7,494
null
[ "serverless", "php" ]
[ { "code": "//Simple canned sensor reading to test POST\n\n#include <Arduino.h>\n#include <WiFi.h>\n//#include <esp_wifi.h>\n#include <HTTPClient.h>\n\n#define xMYSQL 0 //set to 1 to enable sending readings to a MYSQL database...0 implies MongoDB\n\n//------------Server pointers---------------------\n#define MySQL_DBASE \"http://192.168.1.86/UR_do_solar/urpost-esp-data3.php\" //pointer to php files that reads the data into MYSQL\nchar serverAddressMY[] = \"192.168.1.86\";\nconst char resourceMY[] = \"/UR_do_solar/urpost-esp-data3.php\";\n\n\n//#define MyMONGO_DBASE \"https://us-east-2.aws.data.mongodb-api.com\"\n#define MyMONGO_DBASE \"https://us-east-2.aws.data.mongodb-api.com/app/data-gwtpz/endpoint/ur_sensor/do\"\n//#define MyMONGO_DBASE \"https://us-east-2.aws.data.mongodb-api.com/app/data-gwtpz/endpoint/ur_sensor/do/app/application-0-astev/endpoint\"\n\n//char serverAddressMD[] = \"https://us-east-1.aws.data.mongodb-api.com\";\nchar serverAddressMD[] = \"https://us-east-2.aws.data.mongodb-api.com/app/data-gwtpz/endpoint/ur_sensor/do\";\n\n//const char resourceMD[] = \"\";\nconst char resourceMD[] = \"/app/application-0-astev/endpoint\";\n//const char resourceMD[] = \"/ur_sensor/do/app/application-0-astev/endpoint\";\n\nconst int port = 80;\nconst int mdport = 27017;\n\nString DBASE;\n\n// Wifi network credentials---------------------\nconst char* ssid = \"************\";\nconst char* password = \"************\";\n\n//----------Networking variables if you need it, or to assign static IP-----------------------------------------------------\n\nIPAddress Server_ip(192, 168, 1, 208); // IP address of this box\nIPAddress gateway(192, 168, 1, 254); // gateway of your network\nIPAddress subnet(255, 255, 255, 0); // subnet mask of your network\nIPAddress dns(192, 168, 1, 254); //dns address needed to help get to internet AND to ntp site below\n\n//message buffering\nString postData; //String to post status data\n\n//Client and Server starts\nWiFiClient client;\nHTTPClient http;\n\n/****************************** Setup *************************************/\n/***************************************************************************/\n/***************************************************************************/\n\n\nvoid setup() {\n Serial.begin(9600);\n WiFi.disconnect(true);\n //esp_wifi_start();\n Serial.println(\"POST test program\");\n WiFi.mode(WIFI_STA);\n\n //WiFi.config(Server_ip, dns, gateway, subnet); // forces to use the fixed IP\n WiFi.begin(ssid, password);\n delay(1000);\n while (WiFi.status() != WL_CONNECTED) {\n delay(100);\n Serial.print(\".\");\n }\n\n Serial.print(\"Connecting to: \");\n Serial.print(WiFi.SSID());\n Serial.print(\"\\n\");\n Serial.print(\"This device IP address: \");\n Serial.println(WiFi.localIP());\n\n\n //choose the database target\n if (xMYSQL) {\n DBASE = MySQL_DBASE;\n } else {\n DBASE = MyMONGO_DBASE;\n }\n}\n/****************************** Close Setup *************************************/\n/****************************** Close Setup *************************************/\n/****************************** Close Setup *************************************/\n/*********************************************************************************/\n\nvoid loop() {\n\n String Bstr; //string received from sensor\n\n //Bstr= \"{\\\"Temp\\\":\\\"22.22\\\",\\\"DO\\\":\\\"11111.00\\\",\\\"HWBRD\\\":\\\"DO Sensor Solar2 
sw:0.0.2\\\",\\\"Battery_Voltage_Monitor\\\":\\\"4.01\\\",\\\"VoltageRange\\\":\\\"MONGOTRY\\\",\\\"Charging\\\":\\\"15:48\\\",\\\"ChargeDone\\\":\\\"0\\\"}\\r\\n\\r\\n\";\n Bstr = \"{\\\"Temp\\\":\\\"22.22\\\",\\\"DO\\\":\\\"11111.00\\\",\\\"HWBRD\\\":\\\"DO Sensor Solar2 sw:0.0.2\\\"}\\r\\n\\r\\n\";\n SendSensorData(Bstr);\n\n delay(40000);\n} //end loop\n/****************************** Close Loop *************************************/\n/****************************** Close Loop *************************************/\n/****************************** Close Loop *************************************/\n/*********************************************************************************/\n\n\n//***************************** SENSOR DATA TRANSMITTING *****************************//\nvoid SendSensorData(String sensor_message) {\n //HTTPClient http; //Declare object of class HTTPClient\n //HttpClient http(client, serverAddressMD, mdport); /* changed *******************************/////////\n\n postData = sensor_message;\n\n Serial.println(postData);\n\n //http.begin(client, serverAddressMY, port, resourceMY);/* changed *******************************/////////\n http.begin(client, DBASE);\n\n http.addHeader(\"Content-Type\", \"application/json\"); // SPECIFY JSON\n http.addHeader(\"firstkey\", \"6466305061dc2db2354b04c4\"); //specify API key\n // String contentType = \"application/json\" ; /* changed *******************************/////////\n //int httpCode = http.post(resourceMD, contentType, postData); /* changed *******************************/////////\n\n\n int httpCode = http.POST(postData); //Send the request\n delay(2000); //do we need to wait longer?\n String payload = http.getString(); //Get the response payload\n\n Serial.println(httpCode); //Print HTTP return code\n Serial.println(payload); //Print request response payload\n\n\n http.end(); //Close connection\n}\n\n18:45:42.910 -> {\"Temp\":\"22.22\",\"DO\":\"11111.00\",\"HWBRD\":\"DO Sensor Solar2 sw:0.0.2\"}\n18:45:42.973 ->\n18:45:42.973 ->\n18:45:43.005 -> [ 45607][V][HTTPClient.cpp:252] beginInternal(): url: https://us-east-2.aws.data.mongodb-api.com/app/data-gwtpz/endpoint/ur_sensor/do\n18:45:43.133 -> [ 45690][D][HTTPClient.cpp:303] beginInternal(): protocol: https, host: us-east-2.aws.data.mongodb-api.com port: 443 url: /app/data-gwtpz/endpoint/ur_sensor/do\n18:45:43.292 -> [ 45857][D][HTTPClient.cpp:598] sendRequest(): request type: 'POST' redirCount: 0\n18:45:43.388 ->\n18:45:43.388 -> [ 46090][D][HTTPClient.cpp:1170] connect(): connected to us-east-2.aws.data.mongodb-api.com:443\n18:45:43.484 -> [ 46143][D][WiFiClient.cpp:546] connected(): Disconnected: RES: -1, ERR: 104\n18:45:43.580 -> [ 46143][D][HTTPClient.cpp:642] sendRequest(): sendRequest code=-5\n18:45:43.645 ->\n18:45:43.645 -> [ 46211][W][HTTPClient.cpp:1483] returnError(): error(-5): connection lost\n18:45:45.597 -> [ 48291][W][HTTPClient.cpp:1483] returnError(): error(-4): not connected\n18:45:45.693 -> -5\n", "text": "Hello experts,Does anyone work with arduinos/ESP32 boards and HTTP POSTING to a MongoDB?I’m a Newbie to MongoDB, I’m struggling to make a connection from an ESP32-based board to an Atlas-Mongo db. I’m developing in the Arduino IDE.I managed to create a serverless endpoint in Mongdb and I have successfully used an HTTP POST JSON request to add data to MongoDB using both the online “Postman” tool and a local “Postman” app on my machine. I’ve made sure I whitelisted my Router’s external IP address. So in theory, the endpoint works. 
I would think that success also validates the API key.Coding that up in the Arduino IDE world is a different story for me. In searching for solutions, I’ve found three examples that I tried to stay as close to as possible but all are either slightly out of date or just different enough not to lead to an answer. I’ve never seen an Arduino example using the newest form of serverless Mongo-Atlas endpoints for example. Doing an HTTP POST seems so simple, I’ve done it many times in other situations, but I’m stymied by the connection to MongoDB.I’m posting the code below. I’m moving from using a MYSQL database to a MongoDB. The Arduino code posted here chooses between them with a simple #define and it works fine with the MYSQL path.For this forum post, I’ve only obscured a few personal details. I left the Mongodb details (that I can change later) open because those details might be key. I left in the comments some of the minor adjustments I’ve tried, I hope that isn’t too confusing.I have the debug compile option on and get the following error results. My interpretation of this error is a connection is made to the MongoDB website but it disconnects. I can’t figure out why, I get lost tracing thru the libraries to find the reason. One example mentioned the need for a secure certificate but the other two examples did not so I am not sure if that is the relevant debugging path to take.I suspect it’s how I’m specifying the endpoint in some manner, maybe a router/network issue, or confusion with using HTTPS that I don’t understand.Can anyone give some guidance?~\nkurt h.", "username": "kurt_h" }, { "code": "", "text": "I’m solving my own problem….documenting for others if their search brings them here. I found two related solutions. Atlas-Mongodb does indeed require a more secure communication link above just an API key.\nFirst the Solution(s):\nReferencing the original code I posted\na.\tI needed to change the WifIClient library to the WiFiClientSecure. (.h)\nb.\tI then added the client.setInsecure() function before calling http.begin(…).That was it!Alternatively, I found a security certificate on the Mongdb webite…….I set the certificate to a char constant named rootCACertificate and included the function client.setCACert(rootCACertificate) INSTEAD of client.setInsecure() , again before http.begin.\nWhy did it take so long?:\nI was using the following examples as a guide for doing a POST to Mongdb but only ONE mentioned the added security and commented they didn’t understand why it worked. (When I tried that example out originally, it didn’t work…forcing me to try other things. I’m guessing the original failure was only because of an expired certificate). For the examples that do not mention certificates or TSL or anything but an API key, I’m guessing Mongodb security has evolved over time too.\nHere were the examples:\n//Christmas Lights and Webcams with the MongoDB Data API | MongoDB (mentioned the need for WifIClientSecure.h)\n//https://jimb-cc.medium.com/esp-and-mongodb-sitting-in-a-tree-7d043fb1a4d (no mention of certificate need or WiFiClientSecure.h)\n//https://www.donskytech.com/control-your-arduino-iot-projects-with-a-mongodb-database/ (no mention of certificate need or WiFiClientSecure.h)My success with “Postman” helped pull me off track too. 
I’m now guessing that the “postman” app handles this security in a hidden manner explaining it’s success and my failure.Further comments are welcomed….especially my conjectures on my original failures.", "username": "kurt_h" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connecting to Mongodb with an ESP32 based board
2023-05-30T20:23:17.899Z
Connecting to Mongodb with an ESP32 based board
2,409
null
[ "auckland-mug" ]
[ { "code": "", "text": "Welcome to the Auckland MongoDB User Group!MUG number 3 is coming up fast on the 19th of July.The team at Figured (https://figured.com) has kindly offered to host us for the evening at their offices on Fanshawe Street. Complimentary food and drinks as well as some SWAG will be available, proudly sponsored by the team at MongoDB.Please RSVP for the event here: Auckland MongoDB User Group #3, Wed, Jul 19, 2023, 5:00 PM | Meetup Also, join our Auckland User Group on the MongoDB Community to stay abreast of upcoming events and activities.Adam Holt (linkedin) - Adam is a founder and CTO, with experience in the B2B world. In his upcoming presentation, Adam will delve into the fascinating capabilities of MongoDB’s Atlas App services, specifically focusing on their potential to power advanced chatbots with Atlas Search. He will take you on a journey of exploration, from harnessing the capabilities of triggers and change streams to the implementation of serverless functions for seamless integration with OpenAI. Adam’s talk will provide an understanding of how these functionalities can be leveraged to develop an AI-powered chatbot using ChatGPT’s API.Erich Kuba (linkedin) - Erich is the Founder and Director of Cloudize, a company that specializes in building high-performance APIs on MongoDB Atlas. In this lightning talk he’s going to discuss the high-availability configuration options and strategies available within MongoDB Atlas to ensure that your favourite database survives a cloud outage.Thanks,\nThe Auckland MUG Team!Event Type: In-Person\nLocation: Figured, Level 5, 7/9 Fanshawe Street, Auckland CBD", "username": "Jake_McInteer" }, { "code": "", "text": "Will there be an RSVP link on here at some point?", "username": "Julian_Eden" }, { "code": "", "text": "Hi @Julian_Eden - good point, you can RSVP here: Auckland MongoDB User Group #3, Wed, Jul 19, 2023, 5:00 PM | MeetupI’ll get this added to the main post also ", "username": "Jake_McInteer" } ]
Auckland MongoDB User Group #3
2023-05-31T04:18:52.064Z
Auckland MongoDB User Group #3
1,794
null
[ "queries", "storage" ]
[ { "code": "", "text": "Hi Team, Even after running compact command we are not able to reclaim space. Anything that we are missing here?\nIntiial Data size 7 TB almost 70% data has been deleted. Still no improvement in storage. Any help here is appreciated.", "username": "Akshay_shet" }, { "code": "", "text": "What’s the output for dbStats", "username": "Kobe_W" } ]
Running compact command is not releasing storage space
2023-06-01T09:50:02.612Z
Running compact command is not releasing storage space
653
null
[ "aggregation", "node-js", "time-series" ]
[ { "code": "mongodbrunCursorCommandDb#runCursorCommandDb#commandbucketMaxSpanSeconds bucketRoundingSecondsmongodb", "text": "The MongoDB Node.js team is pleased to announce version 5.6.0 of the mongodb package!The MongoDB Node.js Driver now supports Node.js 20! We have added the Db#runCursorCommand method which can be used to execute generic cursor commands. This API complements the generic Db#command method.The driver now has TypeScript support for the bucketMaxSpanSeconds and bucketRoundingSeconds options which will be available in MongoDB 7.0. You can read more about these options here.We invite you to try the mongodb library immediately, and report any issues to the NODE project.", "username": "Warren_James" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB NodeJS Driver 5.6.0 Released
2023-06-01T17:35:51.344Z
MongoDB NodeJS Driver 5.6.0 Released
899
null
[]
[ { "code": "", "text": "Hi, Installed mongodb few weeks ago and was working fine, but today I found an error in the server(centos7)this is the error:\nEnvironment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding \"processManagement.fork\" to falseI dont know what to do", "username": "Jose_Salazar_N_A" }, { "code": "{\"t\":{\"$date\":\"2023-05-30T23:16:54.136+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-05-30T23:16:54.138+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-05-30T23:16:54.140+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-05-30T23:16:54.140+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-05-30T23:16:54.407+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-05-30T23:16:54.407+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-05-30T23:16:54.407+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-05-30T23:16:54.407+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23330, \"ctx\":\"main\",\"msg\":\"ERROR: Cannot write pid file to {path_string}: {errAndStr_second}\",\"attr\":{\"path_string\":\"/var/run/mongodb/mongod.pid\",\"errAndStr_second\":\"No such file or directory\"}}\n", "text": "log fileI couldnt find /var/run/mongodb/mongod.pid, but it was working fine, any idea?", "username": "Jose_Salazar_N_A" }, { "code": "ERROR: Cannot write pid file to {path_string}: {errAndStr_second}\",\"attr\":{\"path_string\":\"/var/run/mongodb/mongod.pid\",\"errAndStr_second\":\"No such file or directory\"}}/var/run/mongodb", "text": "Hey @Jose_Salazar_N_A,Thank you for reaching out to the MongoDB Community forums!ERROR: Cannot write pid file to {path_string}: {errAndStr_second}\",\"attr\":{\"path_string\":\"/var/run/mongodb/mongod.pid\",\"errAndStr_second\":\"No such file or directory\"}}Could you please share the steps you followed to install MongoDB and confirm if you are using any container such as Docker?Also, could you provide the content of the directory /var/run/mongodb?Hi, Installed MongoDB a few weeks ago, and was working fine, but today I found an error in the server(centos7)Additionally, can you confirm if any recent changes have been made that could be causing this error?Best regards,\nKushagra", 
"username": "Kushagra_Kesav" }, { "code": "", "text": "I solved it, doing this\nmodified this fild\n/etc/systemd/system/mongod.service\nthen I created a copy here\n/usr/lib/systemd/system/mongod.servicemodify the file only in service[Service]\nUser=mongod\nGroup=mongod\nEnvironment=“OPTIONS=-f /etc/mongod.conf”\nEnvironmentFile=-/etc/sysconfig/mongod\nExecStart=/usr/bin/mongod $OPTIONS\nExecStartPre=/usr/bin/mkdir -p /var/run/mongodb\nExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb\nExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb\nExecStartPre=/usr/bin/mkdir -p /var/lib/mongo\nExecStartPre=/usr/bin/chown mongod:mongod /var/lib/mongo\nExecStartPre=/usr/bin/chmod 0700 /var/lib/mongo\nPermissionsStartOnly=true\nPIDFile=/var/run/mongodb/mongod.pid\nType=forkingthensudo systemctl daemon-reloadand restart mongo", "username": "Jose_Salazar_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1
2023-05-30T23:33:06.164Z
Error Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1
4,022
null
[ "react-native", "android" ]
[ { "code": "bundle.realmandroid/app/src/main/assets/pathbundle.realmRealm.copyBundledRealmFiles()realm.path/data/data/com.<app name>/files/bundle.realmRealm.copyBundledRealmFiles()database.tsimport { TimeSeriesData } from './models';\nimport { Realm, createRealmContext } from '@realm/react';\n\nRealm.copyBundledRealmFiles();\n\nconst { RealmProvider, useRealm, useObject, useQuery } = createRealmContext({\n schema: [TimeSeriesData],\n path: 'bundle.realm',\n});\n\nexport { RealmProvider, useRealm, useObject, useQuery };\n", "text": "I want to read some initial data from Realm database file when starting the application (Android), but I do not understand why database file is not copied over. I followed Bundle a Realm File tutorial - created and put bundle.realm file in android/app/src/main/assets/, added path attribute with value bundle.realm and call Realm.copyBundledRealmFiles() before creating Realm context. Anyway, from realm.path I found out that it looks for database file in /data/data/com.<app name>/files/bundle.realm, where it does not exist. I can upload it manually though Device File Explorer, but I expected that Realm.copyBundledRealmFiles() would copy the file where it’s supposed to be.What do I need to change to read from existing Realm file automatically?Here is a relevant code segment:\ndatabase.ts", "username": "Eoic" }, { "code": "copyBundledRealmFilesbundle.realmbundle.realmcopyBundledRealmFiles", "text": "@Eoic One thing to be aware of, is that if you happened to render the RealmProvider before calling copyBundledRealmFiles, then the bundle.realm would not be overwritten. Can you remove the existing app from your testing environment and rebuild? I want to make sure that there wasn’t a bundle.realm already existing when copyBundledRealmFiles was called.", "username": "Andrew_Meyer" } ]
Reading initial data from Realm database file in React Native application
2023-05-30T13:24:35.100Z
Reading initial data from Realm database file in React Native application
828
null
[]
[ { "code": "", "text": "I have an app where a few documents will not have a fixed schema. That is, say I have a project document and in my project a user can add sections to the document. So, maybe:Is this possible to do with Realm? I know it supports the more simple case where I know exactly the structure of an object ahead of time. But, what if I don’t know the structure ahead of time?My app is pretty large and will have hundreds of these types sub-documents (nested objects say within a Project document) where the schema / structure will be highly dynamic. In fact, a high degree of my data will have a structure that changes frequently.I’m hoping I can still use Realm to just sync/upload/download whatever I hand it. But, looking through the docs I’m not so sure.Thanks!", "username": "d33p" }, { "code": "class Project: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var name = \"\"\n @Persisted var servers: MutableSet<Servers>\n}\n\nclass Servers: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var name = \"\"\n @Persisted var os: OS?\n @Persisted var properties: Map<String, AnyRealmValue>\n}\n\nclass OS: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var name = \"\"\n @Persisted var properties: Map<String, AnyRealmValue>\n}\n", "text": "There are many ways you can choose to model a structure like that. Usually the easiest way will be to add a dictionary to the objects so it can have extra unstructured values. You could do something like this (in Swift, but you could do this in any of the SDK’s):", "username": "Alexander_Stigsen" }, { "code": "", "text": "Ok that makes sense. Glad to hear and so far I’m just basically in love with Realm for my app so I’m glad this use case can work.Alright, off I go…", "username": "d33p" } ]
How to handle flexible data models
2023-06-01T00:50:47.784Z
How to handle flexible data models
496
null
[ "atlas-data-lake", "atlas-online-archive" ]
[ { "code": "", "text": "I’ve been looking into the Online Archive capability within Atlas. I understand from a user perspective how to set it up and how it works, but not from an architecture/implementation standpoint. When one sets up an “online archive” what is actually happening behind the scenes? Is it just using Atlas Data Lake capabilities to query data that is stored in an S3 bucket? If so, what tier of S3 Storage Class is being used? How is data protected in the S3 bucket and do we have direct access to that S3 bucket outside of Atlas?", "username": "Greg_Harabedian" }, { "code": "", "text": "Hey @Greg_Harabedian,Thank you for reaching out to the MongoDB Community forums!When one sets up an “online archive” what is actually happening behind the scenes?MongoDB Atlas Online Archive is a feature of the MongoDB Cloud Data Platform. It allows you to set a rule to automatically archive data off of your Atlas cluster to fully-managed cloud object storage that is optimized for analytical queries. It reformats, creates partition indexes, and partitions data as it is ingested, creating an isolated workload ready to support large and complex analytical queries.When you set up an Online Archive rule MongoDB Atlas configures a capability that runs on a schedule and safely moves data out of your cluster when it reaches qualification based on the rule configured.Is it just using Atlas Data Lake capabilities to query data that is stored in an S3 bucket?Under the hood, it is using a storage service that is built on top of S3 and other technologies to store and manage the archival data. This data is then queried through the same federated query capability present in Data Federation.If so, what tier of S3 Storage Class is being used?It incorporates more storage technologies beyond S3, making it inaccurate to think about it in terms of just S3 Storage Class tiers. We optimize our storage infrastructure to meet various requirements, prioritizing performance, cost, and durability. Categorizing the storage solely based on S3 Storage Class tiers would not accurately represent the diverse capabilities MongoDB Atlas employ.How is data protected in the S3 bucketMongoDB Atlas encrypts your archived data using Amazon’s server-side encryption S3-managed keys (SSE-S3) for archived data.do we have direct access to that S3 bucket outside of Atlas?As of now you cannot directly access the S3 bucket outside of MongoDB Atlas.I hope it answers your questions. In case of any additional questions or concerns, please feel free to reach out.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How is Atlas Online Archive Implemented?
2023-05-31T14:34:17.323Z
How is Atlas Online Archive Implemented?
676
null
[]
[ { "code": "User Creation Function", "text": "Hey,I am trying to run a setup function to create some documents after a user signs up. I set the function under App Services → App Users → Custom User Data → User Creation FunctionBut when I try to sign up the function I specified in User Creation Function is not found. I\"m not sure why it happens or how to fix it. Does anyone have any ideas?I’m using the Custom Function Authentication Flow.", "username": "Alexandar_Dimcevski" }, { "code": "", "text": "I’m having the exact same issue.I went through the Custom User Data tutorial and selected a new user creation function with the default name “onUserCreation”. After saving, I can see it listed with my other functions. However, when I try to create a new account the logs show the error “function not found: ‘onUserCreation’” and the function is never called.", "username": "Shane_Bridges" }, { "code": "Application AuthenticationtrueApplication Authenticationfalse", "text": "I ran into this same issue as well, managed to solve it by changing the following configuration settings:\n\nScreenshot 2023-02-27 at 3.37.59 PM2934×1518 240 KB\nNOTE: (see Edit 1)\nPreviously, I had set my Authentication to Application Authentication and Private to true.Edit 1:\nAfter some testing, you can leave Authentication to Application Authentication; just make sure Private is set to false.", "username": "Alexander_Ye" }, { "code": "", "text": "Thanks. That did work however it would seem that it should be private so that clients can’t call into this function. Would be good to hear from the MongoDb team on it.Thanks again!", "username": "Shane_Bridges" }, { "code": "", "text": "Yh, same thing here. It should be private to prevent that the function is called manually.", "username": "paD_peD" }, { "code": "", "text": "Hi, same issu, thanks for the tip", "username": "Vincent_Boulet" } ]
Custom User Data FunctionNotFound Error
2023-02-22T21:31:45.944Z
Custom User Data FunctionNotFound Error
1,654
null
[ "node-js", "crud" ]
[ { "code": "app.patch('/update', async(req, res) => {\n await client.connect();\n db = await client.db(\"Lab3\");\n let collection = await db.collection(\"students\");\n let susername = req.body.username\n\n let result = await collection.findOneAndUpdate(\n {username : susername}, req.body, {new: true}\n )\n\n res.send(result).status(200);\n});\n", "text": "Hi everyone, I’m currenly facing a problem when I want to update data to my mongoDB database. I try to not specified what to update from the user, but let the user choose what to update. So that my findOneandUpdate parameter will be username as filter, req.body as the content to update to the database. Can anyone help me to solve this problem? Thank you so much!", "username": "WoNGG" }, { "code": "", "text": "Same issue so same solution as your other thread", "username": "steevej" } ]
Update document requires atomic operators w
2023-05-29T16:42:16.105Z
Update document requires atomic operators w
1,016
null
[ "ahmedabad-mug" ]
[ { "code": "", "text": "Hello MongoDB Community !I’m Vishal Turi, and I’m excited to announce that I’ve been selected as the new leader of our local MongoDB User Group (MUG). As a team lead at Ancubate, I’ve been using MongoDB for three years, learning through community courses and documentation.My motivation for starting this group stems from my passion for contributing to developer communities. I love solving real problems. By connecting with more developers through our MUG, we can collaborate, share experiences, and find innovative solutions together.Based in Ahmedabad MUG, I want to create a vibrant platform for developers of all backgrounds to come together, support each other, and make a positive impact on our community.Join me on this journey as we explore the potential of MongoDB and inspire each other to grow. Let’s build a strong developer community together!Looking forward to meeting you all soon!Best regards,\nVishal Turi", "username": "turivishal" }, { "code": "", "text": "Hey Vishal,Your contributions to the MongoDB Community are truly appreciated! Thank you for spearheading the MongoDB User Group (MUG) in Ahmedabad with @viraj_thakrar.Exciting times lie ahead as you build and lead the MUG.Your dedication is commendable, and we are eager to see the impact you will make as the leader of the community. We look forward to joining you on this journey and witnessing the passion you have for giving back to the community.", "username": "Harshit" } ]
Introducing Myself as the New MUG Leader!
2023-05-31T17:40:05.799Z
Introducing Myself as the New MUG Leader!
838
null
[ "spark-connector" ]
[ { "code": "", "text": "I have a collection that has mixed types of _id fields. Some documents have strings while some documents have ObjectId. When I load the data using spark connector, by default I only see non ObjectId documents. TO see the ObjectId records I have to specifically use a pipeline { ‘_id’ : {‘$type’: ‘objectId’} }. I am not able to find a way to query all the documents.Is there a known solution to this problem.", "username": "Vinay_Avasthi2" }, { "code": "pipeline = [\n {\"$match\": {\"_id\": {\"$exists\": True}}}\n]\n\ndf = spark.read.format(\"mongo\").option(\"pipeline\", pipeline).load()\n", "text": "Hi @Vinay_Avasthi2,Have you tried using $exists?", "username": "Prakul_Agarwal" }, { "code": "", "text": "I tried this, it still gives wrong count 29863 vs 30605. Only case it works fine is when I create two different RDDs, one with Aggregates.match(Filters.type(“_id”, “objectId”)) and Aggregates.match(Filters.not(Filters.type(“_id”, “objectId”))) and union both the RDDs. But this seems to be expensive compared to just a plain RDD creation.", "username": "Vinay_Avasthi2" } ]
Wrong document count with mixed _id type
2023-05-16T18:02:09.574Z
Wrong document count with mixed _id type
801
null
[ "queries", "node-js", "crud" ]
[ { "code": "updateManyss_characteristicsnew_fieldconst updateResult = await Form.updateMany(\n {},\n { $set: { \"ss_characteristics.$[elem].new_field\": '' } },\n { arrayFilters: [{ \"elem.new_field\": { $exists: false } }] }\n);\n\nconsole.log('updateResult: ', util.inspect(updateResult, false, null));\n// that logs the below:\n// updateResult: { acknowledged: false }\nconst all_forms = await Form.find({})\n// logging all_forms will give me something similar to below:\n\n[\n {\n ss_characteristics: [\n { name: 'test 1' },\n { name: 'test 2', new_field: 'No'}\n ]\n },\n {\n ss_characteristics: [\n { name: 'test 3' , test_field: ''},\n { name: 'test 4' },\n ]\n },\n {\n ss_characteristics: []\n }\n]\n", "text": "I am attempting to add a new field into every object within an array only when that field does not already exist. I am trying to use updateMany but the below script does not seem to work…The logic should be, for every object in the array ss_characteristics, if the field new_field is not present, add it with an empty string value.Please help!", "username": "Eden_Hikri" }, { "code": "const updateResult = await Form.updateMany(\n {},\n { $set: { \"ss_characteristics.$[elem].new_field\": '' } },\n { arrayFilters: [{ \"elem.new_field\": { $exists: false } }] }\n);\n", "text": "Hello @Eden_Hikri, Welcome to the MongoDB community forum,Your query looks good, see the working playgroud.Can you please check your implementation, might be you are missing something in your nodejs code.", "username": "turivishal" }, { "code": "console.log('updateResult: ', util.inspect(updateResult, false, null));\n// logs the below:\n// updateResult: { acknowledged: false }\nacknowledged: falseupdateMany", "text": "Thanks! What could be the reason that:I am struggling to traceback the error more than acknowledged: false. Using other methods works completely fine! Even updateMany works but not this particular snippet.", "username": "Eden_Hikri" }, { "code": "", "text": "You need to provide more details:", "username": "turivishal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Adding field to every object in array
2023-06-01T03:00:27.276Z
Adding field to every object in array
929
null
[]
[ { "code": "", "text": "Hi all, freshman here So hope I’ve followed the guidelines correctlyI’ve done some reading upon whether to have multiple collections vs not. And far as I can tell it seems to be almost use case specific (not much help haha). So I guess best case would be to pitch my use case and see whether it is the best approachOur setup is per client base, so for clientA we will have: clientA_users, clientA_stock. Would it be better in that approach or to rather have: users, stock → with a field to help identify which client a document belongs to?Much appreciated", "username": "Kieran_Bester" }, { "code": "", "text": "You may get trouble with too many collections - Massive Number of Collections | MongoDBSo i prefer using client id within the same collection", "username": "Kobe_W" }, { "code": "", "text": "With out looking at the code its a little hard to exactly comprehend your concern. Some things to consider are as follows.Having said all that it is worthy to mention that you have to have a good understanding about the approach you take on how you structure your data and the trade offs associated with it.", "username": "Ateeb_Ahmed" } ]
Help with DB design (collections)
2023-05-30T14:43:18.742Z
Help with DB design (collections)
295
null
[]
[ { "code": "", "text": "Hi everyone. I needed some insight on how to structure a database for a project. The project is basically a school management application made using the mern stack. I have already inserted preexisting data for 1400 students. The students are divded in class prep, nursery, 1 - 10. Every class has 4 sections A to D. Every class has a particular fee associated with it. In the application i would want to add and remove students from a class also classs would change after a student successfully passes the session. I would want to gather and update fee payment history. I would want to calculate expenses and do some analytics about the budget. Focusing just on the fee collection how should I go about structing the database. Should i make a collection of classes with the relevant sections and fee. When the students pays the fee how would I go about storing the data should I create a hasPaid field that accepts a boolean for every student in the collection or make a field that holds an array of student fee information. How should I index the db without affecting performance.Some insight on the matter would mean a lot for me.Thanks in advance", "username": "Ateeb_Ahmed" }, { "code": "{\n _id: ObjectId,\n name: String,\n class: String, // Class name (prep, nursery, 1-10)\n section: String, // Section (A, B, C, D)\n feePaid: Boolean,\n feePaymentHistory: [{\n date: Date,\n amount: Number\n }],\n // Other student information\n}\n{\n _id: ObjectId,\n name: String, // Class name (prep, nursery, 1-10)\n sections: [String], // Sections (A, B, C, D)\n fee: Number, // Fee associated with the class\n}\nfeePaidtruefeePaymentHistoryfeePaid", "text": "Hey @Ateeb_Ahmed,Welcome to the MongoDB Community Forums! A general rule of thumb while doing schema design in MongoDB is that you should design your database in a way that the most common queries can be satisfied by querying a single collection, even when this means that you will have some redundancy in your database. Thus, it may be beneficial to work from the required queries first, making it as simple as possible, and let the schema design follow the query pattern.Based on what you described, one example design you can document your data is in the following manner (please test and alter according to your use case and requirements):\nStudent Collection:Classes collection:Coming to fee payment, one option is to set a feePaid field in the student document as a Boolean, indicating whether the fee has been paid or not. You can update this field to true when the fee is paid. Simultaneously, store fee payment information as an array of objects in the feePaymentHistory field of the student document. Each object can contain the date and amount of the fee payment. You can add a new object to this array whenever a fee payment is made. When the fee gets due, set the feePaid field to ‘false’.I would suggest you use mgeneratejs to quickly create and test different design scenarios. This will also help you test will fields you should index on, to improve query performance. Additionally, for more advanced design patterns, you might want to have a look at building with patterns and see if any of the patterns might help your use case.Hope this helps. 
Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "const classSchema = new mongoose.Schema({\n ClassName: String,\n Section: Array,\n fee: Number,\n});\n\n\nconst employeeModel = new mongoose.Schema({\n first_name: {\n type: String,\n required: true,\n },\n last_name: {\n type: String,\n required: true,\n },\n gender: {\n type: String,\n enum: [\"Male\", \"Female\"],\n required: true,\n },\n type: {\n type: String,\n enum: [\"Teaching\", \"Non Teaching\"],\n required: true,\n },\n father_name: {\n type: String,\n // required: true,\n },\n address: {\n type: String,\n // required: true,\n },\n cnic: {\n type: Number,\n },\n phone: {\n type: Number,\n required: true,\n },\n dob: {\n type: Date,\n // required: true,\n // set: (val) => {\n // const [day, month, year] = val.split(\"/\");\n // return new Date(year, month - 1, day);\n // },\n },\n last_qualification: {\n type: String,\n },\n passing_year: {\n type: Number,\n },\n marks_obtained: {\n type: String,\n },\n board_uni: {\n type: String,\n },\n designation: {\n type: String,\n required: true,\n },\n joining_date: {\n type: Date,\n\n // set: (val) => {\n // const [day, month, year] = val.split(\"/\");\n // return new Date(year, month - 1, day);\n // },\n },\n package: {\n type: Number,\n required: true,\n },\n status: {\n isActive: {\n type: Boolean,\n default: true,\n },\n comments: {\n type: Array,\n default: [],\n },\n },\n});\n\n\nconst PaymentSchema = new mongoose.Schema(\n {\n studentId: { type: String, required: true },\n ClassName: { type: String, required: true },\n Section: {\n type: String,\n enum: [\"A\", \"B\", \"C\", \"D\", \"E\"],\n required: true,\n },\n amount: Number,\n date: Date,\n payId: String,\n },\n { timestamps: true }\n);\n\n\nconst studentMODEL = new mongoose.Schema({\n Name: {\n type: String,\n required: true,\n },\n DOB: {\n type: Date,\n required: true,\n },\n Gender: {\n type: String,\n required: true,\n },\n Father_Name: {\n type: String,\n required: true,\n },\n Phone_No: {\n type: String,\n required: true,\n },\n Address: {\n type: String,\n required: true,\n },\n ClassID: {\n type: String,\n required: true,\n },\n Section: {\n type: String,\n required: true,\n },\n createdAt: {\n type: Date,\n },\n status: {\n isActive: {\n type: Boolean,\n default: true,\n },\n comments: {\n type: Array,\n default: [],\n },\n },\n});\n\n\n", "text": "Ty very much for you reply.I issue is that I have to run a lot of complex queries and my collections are very intertwined. For some reason my query time has shot up to unrealistic times especially for the students collection where I am housing 1400 documents. And if you look closely at payments I am making invoking functions to dynamically generate payID which I think is a great solution to restrict to the user for making duplicate entries for fee payment of a monthly fee payment twice. Again the price I am paying over here is with performance. Can you please guide me on what approach to take. I am thinking of implementing a service oriented architecture to have cacheing coupled with mongo to improve performance but the implementation Im am not very comfortable with. Also it has made me to rethink the api paradigm because using only the restful approach is making the code unmaintainable and libraries like graphql keep making breaking changes so its not a comfortable fall back.", "username": "Ateeb_Ahmed" } ]
Database structure advice
2023-05-22T20:59:37.706Z
Database structure advice
448
null
[ "cxx" ]
[ { "code": "openssl", "text": "I am installing Realm C++ SDK using cmake.I am following instructions from https://www.mongodb.com/docs/realm/sdk/cpp/install/#install\nso the CMakeList.txt content I use is the same, except for the GIT_TAG where I use the hash from Realm C++ SDK: Realm C++ SDK at the top of the page next to the SDK name, that is 5dab867db1e3ed63b1c4aba611991724d16cd0ce.However, cmake doesn’t build successfully, which I guess is because of mismatch in OpenSSL version. Realm seems to be using OpenSSL 3.0.8, while my pc is using OpenSSL 1.1.1f.My question is, is OpenSSL 3.0.8 a strict requirement for Realm?\nWhat should be done to install Realm successfully?edit: I am using ubuntu 20.04, so the latest openssl package I can get is 1.1.1f-1ubuntu2.19.", "username": "znyi" }, { "code": "set(REALM_USE_SYSTEM_OPENSSL ON)FetchContent_Declare", "text": "Hey @znyi, by default Realm downloads a precompiled OpenSSL binary to build against and it just so happens that the latest available was 3.0.8, but that’s not required. You can force Realm to use whatever version of OpenSSL you have installed on your system by adding set(REALM_USE_SYSTEM_OPENSSL ON) just before the FetchContent_Declare call in your CMakeLists.txt.Out of curiosity, what’s the exact build error you’re currently getting?", "username": "Yavor_Georgiev" }, { "code": "-- CMake version: 3.16.3\nDependencies: PACKAGE_NAME=realm-core;VERSION=13.9.2;OPENSSL_VERSION=3.0.8;WIN32_ZLIB_VERSION=1.2.13;MDBREALM_TEST_SERVER_TAG=2023-04-13\n-- Using linker gold\nCMake Error at build/_deps/cpprealm-src/realm-core/CMakeLists.txt:275 (string):\n string sub-command REGEX, mode MATCH needs at least 5 arguments total to\n command.\n\n\n-- Configuring incomplete, errors occurred!\nSee also \"/home/malacoda/am_offline_db/cloud_db_test/build/CMakeFiles/CMakeOutput.log\".\nSee also \"/home/malacoda/am_offline_db/cloud_db_test/build/CMakeFiles/CMakeError.log\".\nbuild/_deps/cpprealm-src/realm-core/CMakeLists.txt:275string(REGEX MATCH \"^([0-9]+)\\\\.([0-9]+)\" OPENSSL_VERSION_MAJOR_MINOR ${OPENSSL_VERSION})\nset(REALM_USE_SYSTEM_OPENSSL ON)", "text": "On build, this this what I got:Examining build/_deps/cpprealm-src/realm-core/CMakeLists.txt:275, that isso I guess there is something wrong with the OpenSSL version.However, adding set(REALM_USE_SYSTEM_OPENSSL ON) doesn’t solve this problem. 
Did I miss something here?", "username": "znyi" }, { "code": "cmakemakecmakemake...\nScanning dependencies of target cpprealm_exe_tests\n[ 90%] Building CXX object _deps/cpprealm-build/CMakeFiles/cpprealm_exe_tests.dir/tests/str_tests.cpp.o\n[ 90%] Building CXX object _deps/cpprealm-build/CMakeFiles/cpprealm_exe_tests.dir/tests/list_tests.cpp.o\n/home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/list_tests.cpp: In lambda function:\n/home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/list_tests.cpp:331:19: error: use of deleted function ‘realm::notification_token::notification_token(const realm::notification_token&)’\n 331 | return token;\n | ^~~~~\nIn file included from /home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/src/cpprealm/object.hpp:22,\n from /home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/src/cpprealm/persisted_embedded.hpp:22,\n from /home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/src/cpprealm/sdk.hpp:29,\n from /home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/main.hpp:5,\n from /home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/list_tests.cpp:1:\n/home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/src/cpprealm/notifications.hpp:39:8: note: ‘realm::notification_token::notification_token(const realm::notification_token&)’ is implicitly declared as deleted because ‘realm::notification_token’ declares a move constructor or move assignment operator\n 39 | struct notification_token {\n | ^~~~~~~~~~~~~~~~~~\n/home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/list_tests.cpp: In lambda function:\n/home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/list_tests.cpp:371:19: error: use of deleted function ‘realm::notification_token::notification_token(const realm::notification_token&)’\n 371 | return token;\n | ^~~~~\n/home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/list_tests.cpp: In lambda function:\n/home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/list_tests.cpp:402:19: error: use of deleted function ‘realm::notification_token::notification_token(const realm::notification_token&)’\n 402 | return token;\n | ^~~~~\n/home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/list_tests.cpp: In function ‘void CATCH2_INTERNAL_TEST_0()’:\n/home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/list_tests.cpp:445:37: error: cannot convert ‘int’ to ‘const std::variant<std::monostate, long int, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, double, std::vector<unsigned char, std::allocator<unsigned char> >, std::chrono::time_point<std::chrono::_V2::system_clock, std::chrono::duration<long int, std::ratio<1, 1000000000> > >, realm::uuid, realm::object_id>&’\n 445 | obj.list_mixed_col.push_back(42);\n | ^~\n | |\n | int\nIn file included from /home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/src/cpprealm/sdk.hpp:32,\n from /home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/main.hpp:5,\n from /home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/tests/list_tests.cpp:1:\n/home/malacoda/am_offline_db/cloud_db_test/build/_deps/cpprealm-src/src/cpprealm/persisted_list.hpp:171:33: note: initializing argument 1 of ‘void realm::persisted<std::vector<Duration>, typename 
std::enable_if<std::negation<std::disjunction<std::is_same<typename realm::internal::type_info::type_info<ValueType, void>::internal_type, std::optional<realm::internal::bridge::obj_key> >, std::is_same<typename realm::internal::type_info::type_info<ValueType, void>::internal_type, realm::internal::bridge::obj_key>, std::is_same<typename realm::internal::type_info::type_info<ValueType, void>::internal_type, realm::internal::bridge::list>, std::is_same<typename realm::internal::type_info::type_info<ValueType, void>::internal_type, realm::internal::bridge::dictionary> > >::value, void>::type>::push_back(const T&) [with T = std::variant<std::monostate, long int, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, double, std::vector<unsigned char, std::allocator<unsigned char> >, std::chrono::time_point<std::chrono::_V2::system_clock, std::chrono::duration<long int, std::ratio<1, 1000000000> > >, realm::uuid, realm::object_id>; typename std::enable_if<std::negation<std::disjunction<std::is_same<typename realm::internal::type_info::type_info<ValueType, void>::internal_type, std::optional<realm::internal::bridge::obj_key> >, std::is_same<typename realm::internal::type_info::type_info<ValueType, void>::internal_type, realm::internal::bridge::obj_key>, std::is_same<typename realm::internal::type_info::type_info<ValueType, void>::internal_type, realm::internal::bridge::list>, std::is_same<typename realm::internal::type_info::type_info<ValueType, void>::internal_type, realm::internal::bridge::dictionary> > >::value, void>::type = void]’\n 171 | void push_back(const T& value)\n | ~~~~~~~~~^~~~~\nmake[2]: *** [_deps/cpprealm-build/CMakeFiles/cpprealm_exe_tests.dir/build.make:76: _deps/cpprealm-build/CMakeFiles/cpprealm_exe_tests.dir/tests/list_tests.cpp.o] Error 1\nmake[1]: *** [CMakeFiles/Makefile2:374: _deps/cpprealm-build/CMakeFiles/cpprealm_exe_tests.dir/all] Error 2\nmake: *** [Makefile:130: all] Error 2\n", "text": "UPDATE: after removing everything and performing cmake and make again (according to official documentations), cmake is done successfully, but make returns error as follows:I believe this is because of the codes in the provided sdk, so I also raised a github issue.Should I make another topic in mongodb community to address this?", "username": "znyi" } ]
OpenSSL Version for Realm C++ SDK
2023-05-31T08:52:29.181Z
OpenSSL Version for Realm C++ SDK
787
null
[]
[ { "code": "", "text": "one of my user asks… my understanding is default is 12 hours but not configurable. - I was told few years ago though ", "username": "Woo_Snag_Lee" }, { "code": "", "text": "Hi @Woo_Snag_Lee - Welcome to the community my understanding is default is 12 hours but not configurable.I believe this to still be the case but you can verify with the Atlas in-app chat support if you wish. There’s currently a feedback post regarding this in which you can vote for.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "thanks, have great day!", "username": "Woo_Snag_Lee" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can I setup timeout for Atlas web console?
2023-06-01T00:01:33.966Z
Can I setup timeout for Atlas web console?
303
null
[ "ahmedabad-mug" ]
[ { "code": "", "text": "I am delighted to introduce myself as the Ahmedabad MongoDB User Group(MUG) leader. My name is Viraj Thakrar and I am thrilled to be a part of this amazing community.I’ve 8+ years of experience working with different technologies and have been using MongoDB for almost 7+ years now. I am also a MongoDB Certified Developer by MongoDB Inc. I’ve worked on various projects including small scale applications to large scale enterprise systems.\nI’ve learned MongoDB from its great documentation and MongoDB University.I believe that by working together, we can explore new frontiers, tackle challenges, and discover novel use cases for MongoDB. I am eager to facilitate discussions, organize workshops, and host events that promote learning and growth within our community. I value open communication and encourage every member to share their thoughts, ideas, and experiences freely.I am looking forward to working closely together to make our MongoDB group a vibrant hub of knowledge and expertise. Let’s embark on this exciting journey together, embracing the power and possibilities of MongoDB.Best,\nViraj Thakrar", "username": "viraj_thakrar" }, { "code": "", "text": "Welcome to the MongoDB community Viraj Glad to hear you’re a MUG leader!", "username": "Jason_Tran" } ]
Hello Ahmedabad MongoDB Community
2023-06-01T04:55:02.785Z
Hello Ahmedabad MongoDB Community
767
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Hi, I have an interesting case, which I failed to sorted-out myself.\nOn my Host_1, I am running three containers, Node_1, Node_2, Node_3, and MongoDB runs on all of them in Replica. All work great.\nThen I wish to extend the replica to Node_4, and Node_5, which should run on Host_2.\nBoth hosts are in the same network. From both, I can ping each other through IP, so there is connectivity.So as the next step I log in to Primary on Host_1 and did rs.add(‘192.168.1.4:27020’) to add IP of Node_4 and port. And it doesn’t work. After login to Mongosh on Node_4 the log shows 'no connection to Node_1)…Please advise, where to look for the answer, or/and it’s something simple which I can’t see now.Thanks.", "username": "Jakub_Polec" }, { "code": "", "text": "It sounds like the mongodb container on host1 is not able to connect to mongo container on host2.I’m guessing it’s something related to your docker config. Did you verify that the IP “192.168.1.4” is indeed reachable from mongodb container on host1 ? (e.g. ssh to that container and use telnet to test tcp connection to that ip)", "username": "Kobe_W" } ]
Replica on containers on two hosts
2023-05-31T16:23:28.154Z
Replica on containers on two hosts
504
null
[]
[ { "code": "", "text": "I need help in taking the third quiz on the second portion of the training - I encountered a glitch. Who do I contact with MongoDB?", "username": "Karla_Ferrel-Castillo" }, { "code": "", "text": "Hey @Karla_Ferrel-Castillo,Apologies for the late reply. It’s been a while since you posted. Are you still encountering the issue? Can you please provide more details about the glitch that you are encountering in the lab along with which course you’re taking? This would help us better able to understand the problem and help you.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Glitch with Quiz materials
2023-04-03T22:53:49.271Z
Glitch with Quiz materials
956
null
[]
[ { "code": "", "text": "Hello, I am learning how MongoDB Atlas works and I am at the moment facing an error whilst creating a new collection inside the “myAtlasClusterEDU” database.I followed the steps, created the collection “users” and “items”. Added a document in the “users” collection, once I return to the lab and click on “check”. The following error appears: The users collection document count was incorrect. Please try again.I have tried to re-do everything and refreshed the page but the error still appears.\nThanks in advance!", "username": "Leandra_Magan-Tier" }, { "code": "", "text": "Hey @Leandra_Magan-Tier,Welcome to the MongoDB Community Forums! It’s been a few days since you posted this problem. Were you able to find a solution? If not, kindly make sure you are correctly naming everything as mentioned in the lab instructions. If the problem still persists, please mail the issue to [email protected] the problem has been resolved, kindly share what worked so that anyone else facing this issue in the future can benefit from your solution.Please feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot proceed with task Managing Databases, Collections and Documents in Atlas
2023-05-25T10:35:32.530Z
Cannot proceed with task Managing Databases, Collections and Documents in Atlas
839
https://www.mongodb.com/…3_2_1024x694.png
[]
[ { "code": "", "text": "I’m in the lab, “Managing Databases, Collections, and Documents in Atlas Data Explorer” which is in Lesson 3 of “MongoDB and the Document Model”. In the text field where I am supposed to paste the JSON representation of the user document to be inserted, the text field is not editable. I cannot type into the box, delete the text that is there, nor paste into it.I’m using the latest Chrome browser in Windows 10.Here is a screenshot of the uneditable field, with the right-click context menu shown.\n\ncannot-edit1622×1100 106 KB\n", "username": "poscogrubb" }, { "code": "", "text": "Hey @poscogrubb,Welcome to the MongoDB Community Forums! Did refreshing the page or clearing the cache help? If the issue still persists, please mail the university team at [email protected] feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't edit text field inside the Lab
2023-05-23T16:01:23.992Z
Can’t edit text field inside the Lab
754
null
[ "aggregation", "node-js" ]
[ { "code": "[\n {\n \"$match\": {\n \"_id\": {\n \"$in\": <JavaScript array of billion ObjectId>\n }\n }\n },\n {\n \"$facet\": {\n \"nbSession\": [\n {\n \"$group\": {\n \"_id\": \"$sessionId\"\n }\n },\n {\n \"$count\": \"count\"\n }\n ],\n \"nbUser\": [\n {\n \"$group\": {\n \"_id\": \"$userId\"\n }\n },\n {\n \"$count\": \"count\"\n }\n ]\n }\n }\n]\nRangeError [ERR_OUT_OF_RANGE]: The value of \"offset\" is out of range. It must be >= 0 && <= 17825792.", "text": "Hello! I encountered an error while executing this MongoDB aggregation with nodejs:The error I received is as follows:RangeError [ERR_OUT_OF_RANGE]: The value of \"offset\" is out of range. It must be >= 0 && <= 17825792.Could someone please help me understand and resolve this issue? Thank you in advance!", "username": "hoc_Tac" }, { "code": "\"$in\": <JavaScript array of billion ObjectId>query", "text": "Hello @hoc_Tac ,Welcome to The MongoDB Community Forums! I saw that you haven’t had a response on this topic yet, were you able to find a solution for this error?\nIf not, then could you please provide a few additional details for me to understand your use-case better?\"$in\": <JavaScript array of billion ObjectId>RangeError [ERR_OUT_OF_RANGE]: The value of “offset” is out of range. It must be >= 0 && <= 17825792.What is throwing this error? Could you post the whole error message, including lines before and after this error?Regards,\nTarun", "username": "Tarun_Gaur" } ]
RangeError in aggregation
2023-05-18T18:35:33.622Z
RangeError in aggregation
524
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 6.0.6 is out and is ready for production deployment. This release contains only fixes since 6.0.5, and is a recommended upgrade for all 6.0 users.Fixed in this release:6.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team", "username": "Britt_Snyman" }, { "code": "", "text": "Hi!The official docker image for 6.0.6 has not been released or at least has not been tagged as 6.0.6.Any news on when this might happen?Here’s the repo I’m looking at: DockerThank you!", "username": "German_Bourdin" }, { "code": "", "text": "Hi @German_Bourdin,Here is the link for the 6.0.6 docker image. Does this help?– The MongoDB Team", "username": "Britt_Snyman" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 6.0.6 is released
2023-05-15T12:35:52.740Z
MongoDB 6.0.6 is released
1,499
null
[ "aggregation" ]
[ { "code": "\ndb.getCollection('users').aggregate([\n {$match: {date: {$gt: ISODate(\"2023-05-24 08:01:08.604Z\")}} },\n {$lookup: { from: 'courses', localField: '_id', foreignField: 'uid', as: 'courses' } },\n {$unwind: '$courses' },\n {$match: { 'courses.distance': {$gt: 0.1 } } },\n {$group: {_id: '$courses.uid',\n nb: {$sum: 1},\n totalDuration: {$sum: '$courses.duration'},\n totalDistance: {$sum: '$courses.distance' },\n v: {$sum: '$courses.info.v'},\n s: {$sum: '$courses.info.s'},\n d: {$sum: '$courses.info.d'},\n b: {$sum: '$courses.info.b'},\n u: {$sum: '$courses.info.u'},\n }}\n ])\n{\n \"explainVersion\" : \"1\",\n \"stages\" : [ \n {\n \"$cursor\" : {\n \"queryPlanner\" : {\n \"namespace\" : \"redacted.users\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"date\" : {\n \"$gt\" : ISODate(\"2023-05-24T08:01:08.604Z\")\n }\n },\n \"queryHash\" : \"9890BE05\",\n \"planCacheKey\" : \"23BBE46F\",\n \"maxIndexedOrSolutionsReached\" : false,\n \"maxIndexedAndSolutionsReached\" : false,\n \"maxScansToExplodeReached\" : false,\n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"transformBy\" : {\n \"_id\" : 1,\n \"courses.duration\" : 1,\n \"courses.info.v\" : 1,\n \"courses.info.s\" : 1,\n \"courses.info.d\" : 1,\n \"courses.info.b\" : 1,\n \"courses.info.u\" : 1,\n \"courses.distance\" : 1,\n \"courses.uid\" : 1\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"date\" : -1\n },\n \"indexName\" : \"date_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"date\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"date\" : [ \n \"[new Date(9223372036854775807), new Date(1684915268604))\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\" : []\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 5790,\n \"executionTimeMillis\" : 160018,\n \"totalKeysExamined\" : 5790,\n \"totalDocsExamined\" : 5790,\n \"executionStages\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"nReturned\" : 5790,\n \"executionTimeMillisEstimate\" : 218,\n \"works\" : 5791,\n \"advanced\" : 5790,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 15,\n \"restoreState\" : 15,\n \"isEOF\" : 1,\n \"transformBy\" : {\n \"_id\" : 1,\n \"courses.duration\" : 1,\n \"courses.info.v\" : 1,\n \"courses.info.s\" : 1,\n \"courses.info.d\" : 1,\n \"courses.info.b\" : 1,\n \"courses.info.u\" : 1,\n \"courses.distance\" : 1,\n \"courses.uid\" : 1\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"nReturned\" : 5790,\n \"executionTimeMillisEstimate\" : 217,\n \"works\" : 5791,\n \"advanced\" : 5790,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 15,\n \"restoreState\" : 15,\n \"isEOF\" : 1,\n \"docsExamined\" : 5790,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 5790,\n \"executionTimeMillisEstimate\" : 6,\n \"works\" : 5791,\n \"advanced\" : 5790,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 15,\n \"restoreState\" : 15,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"date\" : -1\n },\n \"indexName\" : \"date_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"date\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"date\" : [ \n \"[new Date(9223372036854775807), new Date(1684915268604))\"\n ]\n },\n 
\"keysExamined\" : 5790,\n \"seeks\" : 1,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n },\n \"allPlansExecution\" : []\n }\n },\n \"nReturned\" : NumberLong(5790),\n \"executionTimeMillisEstimate\" : NumberLong(221)\n }, \n {\n \"$lookup\" : {\n \"from\" : \"courses\",\n \"as\" : \"courses\",\n \"localField\" : \"_id\",\n \"foreignField\" : \"uid\",\n \"let\" : {},\n \"pipeline\" : [ \n {\n \"$match\" : {\n \"distance\" : {\n \"$gt\" : 0.1\n }\n }\n }\n ],\n \"unwinding\" : {\n \"preserveNullAndEmptyArrays\" : false\n }\n },\n \"totalDocsExamined\" : NumberLong(441038),\n \"totalKeysExamined\" : NumberLong(441038),\n \"collectionScans\" : NumberLong(0),\n \"indexesUsed\" : [ \n \"uid_1\"\n ],\n \"nReturned\" : NumberLong(433350),\n \"executionTimeMillisEstimate\" : NumberLong(158406)\n }, \n {\n \"$group\" : {\n \"_id\" : \"$courses.userId\",\n \"nb\" : {\n \"$sum\" : {\n \"$const\" : 1.0\n }\n },\n \"totalDuration\" : {\n \"$sum\" : \"$courses.duration\"\n },\n \"totalDistance\" : {\n \"$sum\" : \"$courses.distance\"\n },\n \"v\" : {\n \"$sum\" : \"$courses.info.v\"\n },\n \"s\" : {\n \"$sum\" : \"$courses.info.s\"\n },\n \"d\" : {\n \"$sum\" : \"$courses.info.d\"\n },\n \"b\" : {\n \"$sum\" : \"$courses.info.b\"\n },\n \"u\" : {\n \"$sum\" : \"$courses.info.u\"\n }\n },\n \"maxAccumulatorMemoryUsageBytes\" : {\n \"nb\" : NumberLong(456640),\n \"totalDuration\" : NumberLong(456640),\n \"totalDistance\" : NumberLong(456640),\n \"v\" : NumberLong(456640),\n \"s\" : NumberLong(456640),\n \"d\" : NumberLong(456640),\n \"b\" : NumberLong(456640),\n \"u\" : NumberLong(456640)\n },\n \"totalOutputDataSizeBytes\" : NumberLong(3772988),\n \"usedDisk\" : false,\n \"spills\" : NumberLong(0),\n \"nReturned\" : NumberLong(5708),\n \"executionTimeMillisEstimate\" : NumberLong(160000)\n }\n ],\n \"serverInfo\" : {\n \"host\" : \"redacted\",\n \"port\" : redacted,\n \"version\" : \"6.0.6\",\n \"gitVersion\" : \"redacted\"\n },\n \"serverParameters\" : {\n \"internalQueryFacetBufferSizeBytes\" : 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\" : 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\" : 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\" : 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\" : 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\" : 0,\n \"internalQueryMaxAddToSetBytes\" : 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : 104857600\n },\n \"command\" : {\n \"aggregate\" : \"users\",\n \"pipeline\" : [ \n {\n \"$match\" : {\n \"date\" : {\n \"$gt\" : ISODate(\"2023-05-24T08:01:08.604Z\")\n }\n }\n }, \n {\n \"$lookup\" : {\n \"from\" : \"courses\",\n \"localField\" : \"_id\",\n \"foreignField\" : \"uid\",\n \"as\" : \"courses\"\n }\n }, \n {\n \"$unwind\" : \"$courses\"\n }, \n {\n \"$match\" : {\n \"courses.distance\" : {\n \"$gt\" : 0.1\n }\n }\n }, \n {\n \"$group\" : {\n \"_id\" : \"$courses.uid\",\n \"nb\" : {\n \"$sum\" : 1.0\n },\n \"totalDuration\" : {\n \"$sum\" : \"$courses.duration\"\n },\n \"totalDistance\" : {\n \"$sum\" : \"$courses.distance\"\n },\n \"v\" : {\n \"$sum\" : \"$courses.info.v\"\n },\n \"s\" : {\n \"$sum\" : \"$courses.info.s\"\n },\n \"d\" : {\n \"$sum\" : \"$courses.info.d\"\n },\n \"b\" : {\n \"$sum\" : \"$courses.info.b\"\n },\n \"u\" : {\n \"$sum\" : \"$courses.info.u\"\n }\n }\n }\n ],\n \"cursor\" : {\n \"batchSize\" : 1.0\n },\n \"$db\" : \"redacted\"\n },\n \"ok\" : 1.0,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1685022863, 6),\n \"signature\" : {\n 
\"hash\" : { \"$binary\" : \"zeFocRuBcH7U2lePwakQsnU8yCg=\", \"$type\" : \"00\" },\n \"keyId\" : NumberLong(7193055453171941378)\n }\n },\n \"operationTime\" : Timestamp(1685022863, 6)\n}\n", "text": "Hello,i’m working on a M30 cluster where one of the aggregation request make the “Disk Util%” rising up to 100%.it seems to be the Group part that is the problem :here is the Explain :am i doing something wrong in this query ? or is this normal ?Sincerly,Yann.", "username": "Yann_Guillerm2" }, { "code": "{ \"$lookup\" : {\n \"from\" : \"courses\" ,\n \"localField\" : \"_id\" ,\n \"foreignField\" : \"uid\" ,\n \"as\" : \"courses\" ,\n \"pipeline\" : [\n { \"$match\" : { \"distance\" : { \"$gt\" : 0.1 } } } ,\n { \"$group\" : {\n \"_id\" : null ,\n \"nb\" : { \"$sum\" : 1 } ,\n \"totalDuration\" : { \"$sum\" : \"$duration\" } ,\n \"totalDistance\" : { \"$sum\" : \"$distance\" } ,\n \"v\" : { \"$sum\" : \"$info.v\" } ,\n /* ... */\n } }\n ]\n} }\n", "text": "A $group stage is blocking in a sense that all incoming documents are processed before one outgoing document is produced.I always try to $group on a smaller subset.In your case, I think you could move your $group inside a pipeline in your $lookup. This way the $group of each user can be produced right away. This way you may avoid a much bigger $group at the end, you would also avoid an $unwind. This $lookup would look like:", "username": "steevej" }, { "code": "", "text": "Thanks a lot for the answer .\nit work very very well.Thanks again.Yann.", "username": "Yann_Guillerm2" } ]
Aggregation group reaching Disk Util% up to 100%
2023-05-25T14:09:32.147Z
Aggregation group reaching Disk Util% up to 100%
489
null
[ "python", "spark-connector" ]
[ { "code": "", "text": "Hi\nI am trying to connect mongodb from pyspark. I have installed mongodb 6 in AWS EMR instance.\nI have installed mongodb spark connector in the EMR. But when i am trying to connect mongodb from spark, i am getting class not found exception\nCan someone please help me to connect and read collection from mongodb fromm pyspark\nThanks\nSaswata", "username": "Saswata_Dutta" }, { "code": "", "text": "Hi Saswata,\nAre you able to connect to MongoDB using regular MongoClient from the EMR instances? This can inform if this a more general networking issue. Heres a thread which talks about such networking issues: Unable to read data from mongoDB using Pyspark or PythonOtherwise here are some questions that can help us understand whats going on.\n1)How is your mongodb setup? Is it self hosted or are you using Mongodb Atlas?\n2) Can you share which version of MongoDB spark connector are you using?\n3) Can you share the detailed error log", "username": "Prakul_Agarwal" } ]
Unable to connect from pyspark to mongodb
2023-05-07T00:13:22.710Z
Unable to connect from pyspark to mongodb
1,167
null
[ "python" ]
[ { "code": "from pymongo import MongoClient\n\nclient = MongoClient()\n\nclient = MongoClient(host=\"141.212.130.128\", port=27017, username='atomate_readwrite', password='mongo_readwrite', authSource='SunGroupCentral_atomatedb')\n\natomate_db = client.SunGroupCentral_atomatedb.tasks\nresult = atomate_db.find_one({\"task_id\": 1022})\n\nfrom pymatgen.core.structure import Structure #Get structure from Atomate.\nstruct = Structure.from_dict(result['input']['structure'])\n\nresult['output']['energy']\nTraceback (most recent call last):\n File \"QueryDB.py\", line 2, in <module>\n from pymongo import MongoClient\n File \"/home1/09341/jamesgil/mambaforge/envs/atomate_env/lib/python3.9/site-packages/pymongo/__init__.py\", line 106\n def has_c() -> bool:\n ^\nSyntaxError: invalid syntax\nTraceback (most recent call last):\n File \"QueryDB.py\", line 2, in <module>\n from pymongo import MongoClient\n File \"/home1/09341/jamesgil/mambaforge/envs/atomate_env/lib/python3.9/site-packages/pymongo/__init__.py\", line 87, in <module>\n from pymongo.collection import ReturnDocument\n File \"/home1/09341/jamesgil/mambaforge/envs/atomate_env/lib/python3.9/site-packages/pymongo/collection.py\", line 29, in <module>\n from pymongo import (common,\n File \"/home1/09341/jamesgil/mambaforge/envs/atomate_env/lib/python3.9/site-packages/pymongo/common.py\", line 35, in <module>\n from pymongo.ssl_support import (validate_cert_reqs,\n File \"/home1/09341/jamesgil/mambaforge/envs/atomate_env/lib/python3.9/site-packages/pymongo/ssl_support.py\", line 27, in <module>\n import pymongo.pyopenssl_context as _ssl\n File \"/home1/09341/jamesgil/mambaforge/envs/atomate_env/lib/python3.9/site-packages/pymongo/pyopenssl_context.py\", line 27, in <module>\n from OpenSSL import SSL as _SSL\n File \"/home1/09341/jamesgil/mambaforge/envs/atomate_env/lib/python3.9/site-packages/OpenSSL/__init__.py\", line 8, in <module>\n from OpenSSL import SSL, crypto\n File \"/home1/09341/jamesgil/mambaforge/envs/atomate_env/lib/python3.9/site-packages/OpenSSL/SSL.py\", line 9, in <module>\n from OpenSSL._util import (\n File \"/home1/09341/jamesgil/mambaforge/envs/atomate_env/lib/python3.9/site-packages/OpenSSL/_util.py\", line 21\n def text(charp: Any) -> str:\n ^\nSyntaxError: invalid syntax\n", "text": "Hello,I am a new user of mongoDB, which I need in order to write and run atomate workflows on an HPC. I have downloaded and activated an atomate environment on an HPC (TACC Stampede2) using Mamba, and I am having a great amount of trouble with library/package compatibility. I have written a simple script to get information from a Mongodb database, see below:When I try to run this script, an error is thrown in the import statement. I will note that this same error is thrown if I open python in the command prompt and simply type “import pymongo”:I suspect that this is a compatibility issue, but I have been unsuccessful in trying to resolve it. I currently am using Python 3.9.16 and Pymongo 4.3.3. I tried downgrading to Pymongo 3.11.0, but encountered a similar error again:Does anyone have any idea what might be throwing this error and how I can begin to resolve it? 
Thanks in advance for the help.", "username": "Gillian_James" }, { "code": "python -m venv myenv --system-site-packagessource myenv/bin/activatepip install pymongo", "text": "Best way to debug this is to try it outside your special environment.", "username": "Jack_Woehr" }, { "code": "", "text": "It runs completely fine when I run the code on a clean environment with just Pymongo (and Pymatgen)", "username": "Gillian_James" }, { "code": "", "text": "If I had gotten paid overtime for every hour over 40 years I’ve spent debugging other people’s pestilential frameworks … ", "username": "Jack_Woehr" } ]
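One detail worth noting about the tracebacks above: a SyntaxError on a function annotation such as def has_c() -> bool is what a Python 2 interpreter reports when it imports Python-3-only code, so the script may be running under a different interpreter than the one the packages were installed into. A quick diagnostic along these lines (a sketch, with nothing environment-specific assumed) shows which interpreter and which pymongo are actually in use:

```python
import sys

# Which interpreter is actually executing the script?
print(sys.version)
print(sys.executable)

import pymongo

# Which driver build is imported, and from which site-packages?
print(pymongo.version)
print(pymongo.__file__)
```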
Trouble with Pymongo installation and dependencies
2023-05-30T18:27:08.733Z
Trouble with Pymongo installation and dependencies
929
null
[]
[ { "code": "", "text": "I am new to MongoDB and we are looking for Connecting to Mongodb from plsql. Please let me know is there a way we can connect to Mongodb from plsql.", "username": "Surendra_Mullapudi" }, { "code": "", "text": "MongoDB is not a relational database and doesn’t use the SQL query language, what are you trying to do with plsql and MongoDB? The MongoShell https://www.mongodb.com/docs/mongodb-shell/ is a way to interact with MongoDB on the command line. Also there is MongoDB Compass | MongoDB MongoDB Compass a GUI tool and a VS Code extension MongoDB for VS Code - Visual Studio Marketplace.", "username": "Robert_Walters" }, { "code": "", "text": "Thanks for the Reply robert. we have existing oracle database and Mongo db both. Our requirment is to compare the data between these two databases for a table and collection and send audit email.", "username": "Surendra_Mullapudi" }, { "code": "", "text": "ok, the easiest way to do this would be to write a small app to query both oracle and mongodb. I don’t know of a tool that does this. Also data in MongoDB should never really be 1:1 mapped to a row in a table, it doesn’t enable the power of the document model and data denormalization.Learn about different types of databases and things to consider when choosing what database technology to use in your project.", "username": "Robert_Walters" } ]
Connecting to Mongo from Plsql
2023-05-31T07:27:50.884Z
Connecting to Mongo from Plsql
492
null
[ "node-js" ]
[ { "code": "", "text": "Hello, I’m working on an Electron app with Realm sync enabled and I’m confused with the doc, especially this paragraph Open a Synced Realm While Offline\nInitially I thought that you could log in and out of your app while Offline if the very first time you logged in with an Internet connection. It was my understanding that mongodb Realm did some magic caching of the credentials and allowed you to do logins checks while offline the subsequent times.\nThis was also influenced by the orange important box saying \" Offline Login is Supported for Both Flexible and Partition-Based Sync Configurations\".\nNow that I try to implement this and that it doesn’t work I’m not so sure anymore of my first understanding.\nNow I’m starting to believe that the user has to be already logged in, you cannot actually perform a user.logIn while offline. Am I correct?", "username": "Benoit_Werner" }, { "code": "logOut()app.currentUser let user = app.currentUser;\n\n if (!user || !user.isLoggedIn) {\n // No current user, log them in\n let credentials;\n\n // … set credentials, according to your app setup…\n\n user = await app.logIn(credentials);\n\n console.log(`Logged in with the user: ${user.id}`);\n } else {\n console.log(`Skipped login with the user: ${user.id}`);\n }\n // …proceed to open the realm with the set user\n", "text": "Hi @Benoit_Werner,you cannot actually perform a user.logIn while offline.Yes, of course, the authentication process requires a connection to proceed.It was my understanding that mongodb Realm did some magic caching of the credentialsIndeed it does: unless you, in your code, execute an explicit logOut(), the app.currentUser property will still be set, and you can proceed to work offline.In practice, a typical login workflow looks like", "username": "Paolo_Manna" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" }, { "code": "config = new SyncConfiguration.Builder(app.currentUser())\n .allowWritesOnUiThread(true)\n .allowQueriesOnUiThread(true)\n .compactOnLaunch()\n .waitForInitialRemoteData(500, TimeUnit.MILLISECONDS)\n .initialSubscriptions(new SyncConfiguration.InitialFlexibleSyncSubscriptions() {\n @Override\n public void configure(Realm realm, MutableSubscriptionSet subscriptions) {\n // add a subscription with a name\n Log.e(TAG, \"configure: \");\n\n Subscription userSubscription = subscriptions.find(kMap.userSubs);\n if (userSubscription == null) {\n subscriptions.addOrUpdate(Subscription.create(kMap.userSubs,\n realm.where(users.class)));\n }\n\n Subscription collegeSubscription = subscriptions.find(kMap.collegeSubs);\n if (collegeSubscription == null) {\n subscriptions.addOrUpdate(Subscription.create(kMap.collegeSubs,\n realm.where(colleges.class)));\n }\n\n Subscription courseSubscription = subscriptions.find(kMap.courseSubs);\n\n if (courseSubscription == null) {\n subscriptions.addOrUpdate(Subscription.create(kMap.courseSubs,\n realm.where(courses.class)));\n }\n\n Subscription studentsSubs = subscriptions.find(kMap.studentsSubs);\n if (studentsSubs == null) {\n subscriptions.addOrUpdate(Subscription.create(kMap.studentsSubs,\n realm.where(students.class)));\n }\n\n\n realm.close();\n }\n })\n .build();\n\n Realm.setDefaultConfiguration(config);\n StaticValues.syncCount++;\n", "text": "AlwaysIm using Java Realm SDK.\nThe app works fine when network connectivity is available. 
App even works when i turn off data connectivity after opening app.\nBut when i restart app with no data connectivity. It is unable to fetch the local data.How to Reproduce:\nTurn off data connectivity.\nOpen app\nApp unable to sync.What I got after diagnosing:After I restart the app without data connectivity. The app is unable to set SyncConfiguration as a result realm is not able to fetch any data.\nHence as Mongo says it is offline first I’m able to use this important feature.But when I turn off data while running app. the app works fine. as SyncConfiguration is already set.This is My sync configuration:Error I get while internet is off:E/REALM_JAVA: Session Error[wss://realm.mongodb.com/]: UNKNOWN(realm.util.network.resolve:1): Host not found (authoritative)\nE/REALM_SYNC: Failed to resolve ‘ws.ap-south-1.aws.realm.mongodb.com:443’: Host not found (authoritative)", "username": "Clink_App" }, { "code": "", "text": "G’Day @Clink_App ,Thank you for raising your concern.Your question seemed similar to this post that has been answered. I presume when you restart the app, you remove the cache data that is saved and you would need internet connectivity to login back again so the data can be synced from the server.I hope this helps answer your question or let me know if I mistook a co-relation here.Cheers, \nhenna", "username": "henna.s" } ]
Is realm offline login really supported?
2023-05-23T10:01:40.298Z
Is realm offline login really supported?
947
https://www.mongodb.com/…0_2_1023x187.png
[ "installation" ]
[ { "code": "/etc/yum.repos.d/mongodb-org-6.0.repo[mongodb-org-6.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/6.0/aarch64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc\nsudo yum install -y mongodb-org", "text": "Hi everyone,\nI tried to install Mongo 6 on a Alma 9 ARM based architecture.\nI followed the installation guide for Mongo 6 on Red hatI already changed the baseurl to target the arm RPM into the /etc/yum.repos.d/mongodb-org-6.0.repo like the following:When I try to run sudo yum install -y mongodb-org the following error message appear:\nCapture d’écran 2023-05-24 à 11.14.501700×312 30.4 KB\nAnd in the mongo RPM repository for RHEL9 and ARM https://repo.mongodb.org/yum/redhat/9/mongodb-org/6.0/aarch64/RPMS/ there is no mongodb-org-tools-6.0.x-1.el9.aarch64 available.Am i missing a step somewhere ?Thanks in advance for anyone willing to provide help.Regards.", "username": "Kyllian_Chartrain" }, { "code": "", "text": "I see the mongodb-database-tools is in the arm64 repo. Could you try adding that repo and see if it resolves this dependency?Alternatively you could just install the component you need.Then try a manual install of the mongodb tools if you need them on that host specifically.Not sure what why database tools is in a different repo, but that or the documentation would be good to update(it doesn’t cover the arm repo as it is either)", "username": "chris" }, { "code": "sudo yum install mongodb-database-tools-<version>.arm64.rpm\n/etc/yum.repos.d/mongodb-org-6.0.repo[mongodb-org-6.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/6.0/aarch64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc\nsudo yum install -y mongodb-org", "text": "Thank you for the answer.I didn’t see that there was an arm64 repository, but it only contains the mongodb-database-tools so it’s still not automatic.\nFor everyone with a similar issue you’ll have to download manually the mongodb-database-tools RPM and install it on the server:Then you target the aarch64 into the yum repository here /etc/yum.repos.d/mongodb-org-6.0.repo with:And now that the mongodb-database-tools are installed you can simply install the mongodb server with sudo yum install -y mongodb-orgIf someone know where we can ask the mongodb team if it is possible to add every dependancies package into the aarch64 or the arm64 RPM Index, it would greatly improve the mongodb 6 installation on any RHEL ARM based architecture.", "username": "Kyllian_Chartrain" }, { "code": "", "text": "I didn’t see that there was an arm64 repository, but it only contains the mongodb-database-tools so it’s still not automatic.No arm platform for testing this. 
But I thought adding this second repo would automatically resolve this dependency.If someone know where we can ask the mongodb team if it is possible to add every dependancies package into the aarch64 or the arm64 RPM Index, it would greatly improve the mongodb 6 installation on any RHEL ARM based architecture.This does seem like a packaging bug on database tools, https://jira.mongodb.org/ .", "username": "chris" }, { "code": "/etc/yum.repos.d/mongodb-org-6.0.repo[mongodb-org-6.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/6.0/aarch64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc\n\n[mongodb-org-tools-6.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/6.0/arm64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc\nsudo yum install -y mongodb-org", "text": "No arm platform for testing this. But I thought adding this second repo would automatically resolve this dependency.I did not thought at that and it worked like a charm I have now here /etc/yum.repos.d/mongodb-org-6.0.repo the following input:I don’t know if there is a better way to do this, but now with just the sudo yum install -y mongodb-org command everything got installed just fine.\nThanks you @chris for answering.", "username": "Kyllian_Chartrain" }, { "code": "", "text": "Whew, thanks for checking. Restoring my faith in my RPM knowledge!p.s.\n@Kyllian_Chartrain Did you create a JIRA ?", "username": "chris" }, { "code": "", "text": "yes I just did, here is the link: https://jira.mongodb.org/browse/TOOLS-3309 if you want to follow it.", "username": "Kyllian_Chartrain" }, { "code": "", "text": "Hi @Kyllian_Chartrain , thank you for reporting this issue! This should have already been fixed with the release of Database Tools v100.7.2.", "username": "Jian_Guan1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to install Mongo 6.0.6 on RHEL9 ARM based system
2023-05-24T09:23:20.469Z
Unable to install Mongo 6.0.6 on RHEL9 ARM based system
970
null
[ "atlas", "charts" ]
[ { "code": "// This should work, if we have access to ObjectId method. \nfunction getFilter(context) {\n return {organisationId: ObjectId(\"123\")\n}\n// This does not work (kinda expected I guess)\nfunction getFilter(context) {\n return {organisationId: \"123\")\n}\n\n// No longer a valid query? \nfunction getFilter(context) {\n return {organisationId: {\"$oid\": \"123\"})\n}\n", "text": "HII have been trying to get the MongoDB Charts to only show data relevant to the organisation logged into the app. I have the JWT token being passed to the Charts SDK and that is all working great, however, because the field is a ObjectId, no results are being returned.To make sure it was not the JWT I have tried hard coding a value in different ways to see if I can get it to work, none are successful.Anyone got any suggestions or have I missed something very obvious somewhere?\nThanksMike", "username": "Mike_Rudge" }, { "code": "", "text": "Did you ever figure this out? I am struggling with the same thing.", "username": "Matt_Griffith" } ]
Inject Filter Per User does not work with ObjectIds
2022-01-14T09:48:23.266Z
Inject Filter Per User does not work with ObjectIds
3,012
null
[ "react-native", "typescript" ]
[ { "code": "", "text": "Hello Everybody, I hope you are having a good time using MongoDB Realm in your applications. I am listing some examples I came across while looking into a question about Typescript support in Realm’s React Native SDK:I really hope you find them useful and discover ways to create your realm react-native applications. If you are struggling with any concept or have any suggestions, please feel free to reach out!Happy Coding!", "username": "henna.s" }, { "code": "", "text": "G’Day Folks,React Native is now TS by default, so I have updated the links for both Realm Typescript and Expo.Kindly please drop a message if you get a 404 on the links.Happy Coding.Cheers, \nhenna", "username": "henna.s" } ]
React Native - Realm Typescript Sample Apps
2021-12-21T09:45:09.767Z
React Native - Realm Typescript Sample Apps
6,176
null
[ "queries" ]
[ { "code": "", "text": "Hello team and all.\nfor some reason, I lost all triggers from my clusters for 40 mins, and after 40 mins all of them got back to the list.\nDoes anyone have an idea?", "username": "Ahmed_Azzo" }, { "code": "", "text": "Hi @Ahmed_Azzo,I lost all triggers from my clusters for 40 mins, and after 40 mins all of them got back to the list.Does the issue matches the timing of this outage? In general, when something like this happens, a look at the Status page is a good idea.Just to be clear, the issue only prevented the app administration part from working, the triggers themselves should have run regardless.", "username": "Paolo_Manna" } ]
Lost all triggers from trigger list in my Atlas cluster
2023-05-31T08:57:59.453Z
Lost all triggers from trigger list in my Atlas cluster
732
null
[ "data-modeling", "python", "atlas-device-sync", "cxx", "time-series" ]
[ { "code": "", "text": "I am new to MongoDB and I want to know the feasibility of my idea.I want to make an offline-first, time-series database which keep the new data generated by a robot locally (ubuntu) and add the data to cloud DB, where the cloud DB is going to be the main source of information for all users.Since I am using ROS2 for my robot, and it supports C++ and Python, I am thinking of using C++ to manage my database, although the C++ sdk is also very new.Because it seems like the concept of Time Series Collection is still quite new in MongoDB, is my idea feasible?Also, I will appreciate if there is any material that i can refer to or any other suggestions to make this work.", "username": "znyi" }, { "code": "", "text": "Hi Xin_Yi_Wong,\nYes, with realm sync you can use “data ingest” (see https://www.mongodb.com/docs/atlas/app-services/sync/configure/sync-settings/#std-label-optimize-data-ingest) which will allow you to execute write-only operations to a collection, including time-series collections. The C++ SDK now supports this feature as well.", "username": "mpobrien" }, { "code": "", "text": "Here’s a code example in this commit for both data streamed/ingested to Atlas and then data which is synchronized to and from Atlas.", "username": "otso" }, { "code": "", "text": "Although most of the operations needed are write operations, i might need to occationally read the data through the robot. Does Data Ingest still fit my use case?", "username": "znyi" }, { "code": "", "text": "Data Ingest objects cannot be queried - for your use case you could consider mixing of Data Ingest and “normal” Realm Objects or possibly copying the Data Ingest objects to a local only Realm.", "username": "otso" } ]
Using MongoDB Realm Sync on Time Series Collection
2023-05-30T07:19:38.304Z
Using MongoDB Realm Sync on Time Series Collection
807
null
[]
[ { "code": "query {\n product_contributions(query: { actor: { name: \"test\"} } ) {\n product{\n name\n }\n actor {\n name\n }\n }\n products(query: { contributions: { actor : { name: \"test\" } } }) {\n name\n contributions {\n actor {\n name\n }\n }\n }\n}\n{\n \"data\": {\n \"product_contributions\": [],\n \"products\": [\n {\n \"contributions\": [\n {\n \"actor\": {\n \"name\": \"Coopérative Pur Ardenne\"\n }\n }\n ],\n \"name\": \"Lait de Pâturage demi-écrémé\"\n }\n ]\n }\n}\n{\n \"contributions\": {\n \"foreign_key\": \"_id\",\n \"ref\": \"#/relationship/mongodb-atlas/digicirco/product_contributions\",\n \"is_list\": true\n }\n}\n{\n \"actor\": {\n \"ref\": \"#/relationship/mongodb-atlas/digicirco/actors\",\n \"foreign_key\": \"_id\",\n \"is_list\": false\n },\n \"product\": {\n \"ref\": \"#/relationship/mongodb-atlas/digicirco/products\",\n \"foreign_key\": \"_id\",\n \"is_list\": false\n }\n}\n", "text": "I defined product, contributions and actors (a collection for each). A product has contributions (list of ObjectID) and a contribution has an actor [and a product] (objectID reference). When I want all products with contributions with actors with a certain name or postcode, it doesn’t work. The name or postcode filter is not applied.The query :givesBoth should be empty …Defined relationship for product :Defined relationships for product_contributions :", "username": "Olivier_Wouters" }, { "code": "", "text": "Hello @Olivier_Wouters ,I have the same problem,\nDid you find a solution?Thank you", "username": "cyril_moreau" }, { "code": "", "text": "Hi Cyril, no I didn’t ", "username": "Olivier_Wouters" }, { "code": "", "text": "Hi @Olivier_WoutersI have found a way. Maybe not the best but it works If i take your example, what i do is managing my record with product_contribution query and mutation. This way i can create Products and Contribution from the product_contribution definition.\nIf you use the product_contribution mutation, you can query product_contribution alsoMoreover when you use the product_contribution mutation, you can also query Product but you wont be able to get the relation with contribution (you can do that only with product_contribution)You can query any product (per id or all of them)\nYou can query any contribution (per id or all of them)The entry point for your mutation should be product_contributionBest regards", "username": "cyril_moreau" }, { "code": "", "text": "I even noticed that one to many with filter doesn’t always work … For instance, greater than doesn’t work for subfield in one to many relationship …\nScreenshot 2023-05-31 at 08.23.041766×930 128 KB\n", "username": "Olivier_Wouters" } ]
GraphQL filter on second relationship not working (one to many to one)
2022-01-26T12:49:28.945Z
GraphQL filter on second relationship not working (one to many to one)
3,875
null
[ "python" ]
[ { "code": "[\n {\n _id: ObjectId(\"647382945becfcf89a67bd96\"),\n stock_ticker_name: 'a',\n created_date: ISODate(\"2023-05-28T16:34:28.750Z\"),\n update_date: ISODate(\"2023-05-28T16:34:28.750Z\")\n },\n {\n _id: ObjectId(\"647382e25becfcf89a67bd97\"),\n stock_ticker_name: 'b',\n created_date: ISODate(\"2023-05-28T16:35:46.290Z\"),\n update_date: ISODate(\"2023-05-28T16:35:46.290Z\")\n },\n {\n _id: ObjectId(\"6474ae299b2c8d1ec0850e7a\"),\n stock_ticker_name: 'c',\n created_date: ISODate(\"2023-05-29T13:52:41.426Z\"),\n update_date: ISODate(\"2023-05-29T13:52:41.426Z\")\n },\n {\n _id: ObjectId(\"6474ae379b2c8d1ec0850e7b\"),\n stock_ticker_name: 'd',\n created_date: ISODate(\"2023-05-29T13:52:55.360Z\"),\n update_date: ISODate(\"2023-05-29T13:52:55.360Z\")\n },\n {\n _id: ObjectId(\"6475e531d0fdcdc1879844f9\"),\n stock_ticker_name: 'e',\n created_date: ISODate(\"2023-05-30T11:59:45.666Z\"),\n update_date: ISODate(\"2023-05-30T11:59:45.666Z\")\n },\n {\n _id: ObjectId(\"6475e59fd0fdcdc1879844fa\"),\n stock_ticker_name: 'f',\n created_date: ISODate(\"2023-05-30T12:01:35.286Z\"),\n update_date: ISODate(\"2023-05-30T12:01:35.286Z\"),\n quantity: '3'\n }\n]\n\n now = datetime.today().strftime(\"%Y-%m-%dT%H:%M:%S.%fZ\")\n deltas = datetime.today() - timedelta(days=3)\n last_sync = deltas.strftime(\"%Y-%m-%dT%H:%M:%S.%fZ\")\n connection = MongoClient('mongodb://<conn>:27017')\n mongo_db = connection.keblingers\n mongo_collection = mongo_db.stock_ticker\n result = pd.DataFrame(list(mongo_collection.find({\"created_date\": {'$gte': f\"new Date('{last_sync}')\", '$lte': f\"new Date('{now}')\"}})))\n print(result)\nEmpty DataFrame\nColumns: []\nIndex: []\n", "text": "Hi,i am trying to get all data between range of two datetime using python pandas, but i get empty dataframe.this is my dummy dataand this is my python codeand this is the outputwhere is the wrong code?", "username": "blinksatan_182" }, { "code": "nowdeltaslast_syncprint()result", "text": "Hi @blinksatan_182,Not too familiar with pandas however, can you provide the values of the following variables when you run the code:You can try add an additional line under each with print() perhaps. This will highlight what is being passed through to the result.Additionally, can you provide reproducible code including the libraries used so that we can attempt to replicate this behaviour on our test environments? 
Please also advise any versioning where possible.Regards,\nJason", "username": "Jason_Tran" }, { "code": "this is now -> 2023-05-31T10:43:50.621256Z\nthis is last_sync -> 2023-05-28T10:43:50.621256Z\nthis is deltas -> 2023-05-28 10:43:50.621256\nfrom pymongo import MongoClient\nimport pandas as pd\nfrom datetime import datetime, timedelta\n\ndef get_data_mongo():\n now = datetime.today().strftime(\"%Y-%m-%dT%H:%M:%S.%fZ\")\n deltas = datetime.today() - timedelta(days=3)\n last_sync = datetime.strftime(deltas,\"%Y-%m-%dT%H:%M:%S.%fZ\")\n connection = MongoClient('mongodb://conn')\n mongo_db = connection.keblingers\n mongo_collection = mongo_db.stock_ticker\n result = pd.DataFrame(list(mongo_collection.find({\"created_date\": {'$gte': f\"new Date('{last_sync}')\", '$lte': f\"new Date('{now}')\"}})))\n print(result)\nresult = pd.DataFrame(list(mongo_collection.find({\"stock_ticker_name\": \"a\"})))\nresult = pd.DataFrame(list(mongo_collection.find({\"quantity\": {'$gte': 1}})))\n", "text": "Hi Jason,this is the output of now, deltas and last_syncand below for the full scriptand by the way, if i try to find like name or by quantity it is works, it is just by date i dont know why it is empty data frame", "username": "blinksatan_182" }, { "code": "get_data_mongo()def get_data_mongo():\n now = datetime.today()\n deltas = datetime.today() - timedelta(days=3)\n last_sync = deltas\n connection = MongoClient('mongodb+srv://<REDACTED>:<REDACTED>@cluster0.<REDACTED>.mongodb.net/?retryWrites=true&w=majority')\n mongo_db = connection.panda\n mongo_collection = mongo_db.collection\n result = pd.DataFrame(list(mongo_collection.find({\"created_date\": {'$gte': last_sync, '$lte': now}})))\n print(result)\nnowlast_sync$gte$lte>>>get_data_mongo()\n _id stock_ticker_name created_date update_date quantity\n0 647382945becfcf89a67bd96 a 2023-05-28 16:34:28.750 2023-05-28 16:34:28.750 NaN\n1 647382e25becfcf89a67bd97 b 2023-05-28 16:35:46.290 2023-05-28 16:35:46.290 NaN\n2 6474ae299b2c8d1ec0850e7a c 2023-05-29 13:52:41.426 2023-05-29 13:52:41.426 NaN\n3 6474ae379b2c8d1ec0850e7b d 2023-05-29 13:52:55.360 2023-05-29 13:52:55.360 NaN\n4 6475e531d0fdcdc1879844f9 e 2023-05-30 11:59:45.666 2023-05-30 11:59:45.666 NaN\n5 6475e59fd0fdcdc1879844fa f 2023-05-30 12:01:35.286 2023-05-30 12:01:35.286 3\n6 6476d2927bc0c144580bfcdf NaN 2023-05-29 00:00:00.000 NaT NaN\n", "text": "Hi @blinksatan_182,Not sure if this works for you but please take a look at the below code snippet I used for the get_data_mongo() portion:Note: I redacted some credentials from the aboveIn the above example, I changed variables now and last_sync to datetime objects rather than strings. Additionally, within the $gte and $lte operators, I just provided the datetime objects rather than the stringified versions as well.The above query returns:Hope this helps. If not, please provide any further details / queries you have. With any code snippets, it’s highly recommend to alter accordingly and test thoroughly to ensure it meets all your use case and requirements.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi Jason,ohh i see so the problem is in the date variable using formatting strftime. so i dont need to format the date.\nthank you it is works.", "username": "blinksatan_182" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Find between two dates return empty dataframe
2023-05-30T14:49:16.517Z
Find between two dates return empty dataframe
637
null
[ "node-js" ]
[ { "code": "", "text": "Hi, pro\nNow, I want to update my mongodb from 2.6.9 to 4.0.28 or maybe higher.\nBut i am worried when i update mongodb will affect system program. For example, there is syntax change (mongodb driver syntax) that causes the project to crash.\nMy back-end is Nodejs (v6.11.5) and mongodb driver version is 2.1.0If it doesn’t affect anything, that’s great.Thanks very much", "username": "Dinh_Giang_A" }, { "code": "", "text": "Check release notes for any incompatible changes.", "username": "Kobe_W" } ]
Does updating the version of mongodb affect the system's program?
2023-05-30T04:30:54.578Z
Does updating the version of mongodb affect the system’s program?
697
null
[]
[ { "code": "", "text": "Hi guys, when I try access “charts” page from my account, I’m redirected to the login page.\nIs the “charts” service unstable?", "username": "Robson_Pelegrini" }, { "code": "", "text": "We’re not aware of any instability. What happens if you sign in from this page?\nIf you can send the URL for your Atlas page I can have a poke around (or use the in-product chat to talk to our support team).Tom", "username": "tomhollander" }, { "code": "", "text": "Hi Tom, thanks for the reply.\nThe issue was resolved, the MongoDb charts were itermittent at the moment I was accessing it.\nimage688×541 9.45 KB\n", "username": "Robson_Pelegrini" } ]
MongoCharts is not accessible
2023-05-30T18:22:07.328Z
MongoCharts is not accessible
604
null
[]
[ { "code": "", "text": "HiWhen I login to my Atlas dashboard, I see at the top of the page a message saying “We are deploying your changes”. This has been going on for approximately 8 hours now. In the meantime, the database is not accessible.In the “Project Activity Feed” I see many occurences of the following events:A few questions:Thanks in advance", "username": "dimitris.xanthopoulos" }, { "code": "", "text": "Hi @dimitris.xanthopoulos,This has been going on for approximately 8 hours now. In the meantime, the database is not accessible.I would advise you to contact the Atlas in-app chat support for these types of operational issues since they have more insight into your Atlas project / cluster. They should be able to advise you if what you are experiencing is expected or not.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thanks @Jason_TranI did cummunicate with support. They eventually fixed the issueRegards,\nDimitris", "username": "dimitris.xanthopoulos" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Outage (We are deploying your changes)
2023-05-30T21:08:52.688Z
Outage (We are deploying your changes)
363
null
[ "node-js", "react-native" ]
[ { "code": "const Car = {\n name: \"Car\",\n properties: {\n _id: \"objectId\",\n make: \"string\",\n model: \"string\",\n miles: \"int?\",\n },\n};\nclass Car extends Realm.Object {\n static schema = {\n name: \"Car\",\n properties: {\n _id: { type: 'objectId', default: () => new Realm.BSON.ObjectId() },\n make: \"string\",\n model: \"string\",\n miles: \"int?\",\n },\n primaryKey: '_id',\n };\n}\nstatic schematypegenerateRealm.createtype Label = {\n name: string;\n color: string;\n}\n\nexport class Task extends Realm.Object<Task> {\n _id: Realm.BSON.UUID;\n userId!: string;\n name!: string;\n description: string;\n createdAt: Date = new Date();\n labels: Label[];\n static primaryKey = '_id';\nLabel[]realm.writeconst Task: Task = {\n name: 'Example Task',\n description: 'This is an example',\n createdAt: new Date(),\n author: {\n id: 'some_uuid',\n name: 'Author Name',\n authorIconUrl: 'some_url'\n },\n labels: [\n {\n 'name': 'Important',\n 'color': 'Red'\n },\n {\n 'name': 'Family',\n 'color': 'Gold'\n },\n ]\n}\nTaskauthorlabelsrealm.writeauthorlabelsauthorrealm.writeClasses extending Realm.Object cannot define their own `schema` static, all properties must be defined using TypeScript syntax\n", "text": "Hi everyone, I’m very confused about how to define schemas when writing in TypeScript (specifically React Native), especially how to write schemas such that embedded objects and arrays work in a Realm database.The Node.js SDK and React Native SDK say there are two options for defining a realm object model:JavaScript objectsJavaScript ClassesThen there’s a completely separate method of defining schemas, outlined in the Realm React documentation.If one defines a Realm object like so:How does this map to the schema defined in the Realm/Atlas schema UI, which uses JSON?(Also note that, for me, Label[] does not work.)When working with embedded objects and arrays—trying to write non-primitives to my database—I run into a situation where the objects and arrays are simply ignored in the realm.write operation.For example, suppose I have the following object:When I tried writing this in the past, the resultant “Task” in the database would contain everything except for the author and labels properties; it’s as though the realm.write function catches that author and labels are objects and ignores them without any warning/error message/feedback.I changed my schema implementation (I honestly have no idea what I did, hence the questions about schema declarations above), and magically author and the embedded object showed up. However, the behavior with the array (realm.write quietly ignores it) still happens, and I don’t know how to solve this issue.Please help! I think I have a barely functional knowledge of realm objects, classes, and schemas, especially in the context of TypeScript, and as such I have no idea how to write embedded objects and arrays of objects to Realm database documents.Error:Well… ", "username": "Alexander_Ye" }, { "code": "@realm/react", "text": "Hi @Alexander_Ye,I’m sorry you’ve encountered so much resistance in getting your app working. We’re aware of these pain points in the docs are working to improve them.Currently, we’re updating the React Native SDK to default to @realm/react and TypeScript. You can take a look at our progress in this PR. Keep in mind that this work is not complete and may change before it’s merged. 
We also haven’t gotten to all of the React Native SDK pages yet, so some pages in the staging site are still using the old guidance.The updated Define a Realm Object Mode and Embedded Objects pages should be more helpful.Please take a look and let me know if the newer docs help.", "username": "Kyle_Rollins" }, { "code": "schema", "text": "The article you link “Define a Realm Object Model” recommends a syntax which will throw an error when used:\n“Classes extending Realm.Object cannot define their own schema static, all properties must be defined using TypeScript syntax”", "username": "Brian_Luther" }, { "code": "", "text": "Hmm. @Brian_Luther, can you share more info about your app? I can’t reproduce the error. Though I do recall seeing it in the past. What version of realm and realm/react are you using?", "username": "Kyle_Rollins" }, { "code": "{ \"createdBy\": { \"ref\": \"#/relationship/mongodb-atlas/bolo-6/User\", \"foreignKey\": \"_id\", \"isList\": false } }Error: Exception in HostFunction: Missing value for property 'User.userId'", "text": "Versions I’m using:\nrealm 11.3.1\n@realm/react 0.4.1\n@realm/babel-plugin 0.1.1\nexpo 47.0.12\nreact-native 0.70.5Perhaps this has to do with using @realm/babel-plugin to transpile typescript classes into the JSON schema format? I may try removing the babel-plugin and using JSON schemas instead. Many of the docs are written using the JSON schema syntax so it’s hard to tell how to accomplish the same things using the typescript syntax, I think it might be easier to just use the JSON schema.For example when including a relationship in a class schema defined in the frontend in development mode, I see the JSON schema syntax that the docs describe being generated on the backend in the app services UI. Meaning the type of the field is an ObjectId and there is a relationship definition, eg{ \"createdBy\": { \"ref\": \"#/relationship/mongodb-atlas/bolo-6/User\", \"foreignKey\": \"_id\", \"isList\": false } }But when I try to construct an instance in the frontend, it expects the entire related object as an argument rather than the ObjectId of the related object. If I pass just the ObjectId to the constructor, I get the following error:Error: Exception in HostFunction: Missing value for property 'User.userId'Where User.userId is another property on the referenced object, seeming to indicate that it wants to be passed the entire referenced object. (Note that this example is slightly confusing, I have an Atlas collection called User to store application data about users, and each document stores the equivalent Realm userId). I am able to construct an object with a relationship to another object only by passing the entire related object to the constructor.I can’t figure out if the latter issue I described is related to the OP’s or not, but I seem to be hitting a dead-end because many of the docs describe things in terms of JSON syntax and the frontend will only allow me to use the new TypeScript syntax.", "username": "Brian_Luther" }, { "code": "static schema", "text": "@Brian_Luther, I’m so sorry for the veeeery delayed response. I haven’t had much time to spend on the forums in the last month… and a half. Are you still running into this issue? I promise I won’t disappear for another month and a half, but I don’t want to dig into this too much if you’ve already found a solution.Some high-level things to consider, just in case:", "username": "Kyle_Rollins" }, { "code": "static schemaPetOwner{ pet: 'Pet?' 
}Realm.create()PetOwnerPetrealm.create('PetOwner', {\n name: 'Alice',\n birthDate: new Date('1987-01-01'),\n pet: {\n _id: aBsonId,\n name: 'Spot',\n age: 7,\n animalType: 'Dog'\n})\ncreate", "text": "Hey Kyle, no worries at all, thanks for getting back.I’ll check out the new version of the docs, that could definitely be helpful. Removing the babel plugin seems like a good call to me, bouncing back and forth between the typescript syntax and the static schema syntax was confusing. Add on top of that a different syntax in Atlas app services - JSON schema as it seems to be called - and it’s difficult to figure out how to do what you’re trying to do, or even where to do so (on the front-end or in Atlas). Some information regarding relationships was written in the JSON schema syntax (eg the syntax you see in Atlas app services), which made it really unclear if that’s what I needed to use in the front-end. Can’t say exactly where I encountered that, some information seems to be spread between different SDKs (or was).Anyways, the point of that was just to communicate that 3 similar and not clearly differentiated schema syntaxes was a stumbling point, so getting the Babel/TypeScript version out of the mix seems helpful.I still have not been able to clarify “the way” to create an object in Realm with a relationship property if you could chime in on that, it might be helpful to have in the CRUD docs too. By relationship property I mean the one-to-one or one-to-many relationships (not an embedded object) described here.", "username": "Brian_Luther" }, { "code": "realm.create('PetOwner', {\n name: 'Alice',\n birthDate: new Date('1987-01-01'),\n pet: {\n _id: aBsonId,\n name: 'Spot',\n age: 7,\n animalType: 'Dog'\n})\nPet", "text": "the point of that was just to communicate that 3 similar and not clearly differentiated schema syntaxes was a stumbling pointThis is an excellent point. If/when we add the babel plugin way to the docs, we’ll need to make sure we do so in a way that doesn’t confuse folks. I appreciate you sharing your experience! It should help us guide people better in the future.We could also potentially add some information about how the JS client maps client schemas to the Atlas App Services JSON schema. Between the client SDK and App Services docs, “schema” can mean so many different things. We’re working on addressing this, but it’s a complicated issue. Generally, though, we now refer to client object “schemas” as object models in an attempt to disambiguate.Anyway, about creating objects and defining relationships:Most Realm operations happen locally. This means that what you see in Atlas App Services is not directly comparable to what your client has. For example, the differences in the client object model and App Services JSON schema.So, you can’t pass only an ID when creating a relationship. Locally, all Realm objects are indeed Objects. Theoretically, the JS SDK team could create an API for creating relationships in this way, but that doesn’t exist right now.When you create a relationship, like your example:You’re creating a new Pet object in addition to establishing the relationship. When you instead query for an existing object and pass the object, you’re adding a new relationship to that existing object.Regarding your last bullet point: relationships can be hard to mentally map. Historically, I’m not sure the docs have done a great job helping map them. 
I don’t think I have any additional advice at the moment, but I’m looking into what you’ve posted and I’ll try to make these docs clearer. Relationships were definitely a stumbling block for me when I started using Realm.I really appreciate you taking the time to share your thoughts and experiences!", "username": "Kyle_Rollins" }, { "code": "Realm.createconst existingPetId = new BSON.ObjectId(\"645512e5b73d72169ac61b8c\")\nconst existingPet = realm.objectForPrimaryKey(existingPetId)\n// existingPet: {\n// _id: '645512e5b73d72169ac61b8c',\n// name: 'Spot',\n// age: 7,\n// animalType: 'Dog'\n// }\n\nconst create1 = realm.create('PetOwner', {\n\tname: 'Alice',\n\tbirthDate: new Date('1987-01-01'),\n\tpet: existingPet\n})\n\nconst create2 = realm.create('PetOwner', {\n\tname: 'Alice',\n\tbirthDate: new Date('1987-01-01'),\n\tpet: {\n\t\t_id: existingPetId,\n\t\tname: 'Spot',\n\t\tage: 7,\n\t\tanimalType: 'Dog'\n\t}\n})\nRealm.createPet_idPetPetscreate2", "text": "Just to clarify a bit further, let’s say we have two different Realm.create operations, seen here:Happy to share, seems like it can be helpful to have a newcomer/outsider perspective sometimes, and it’s good for me to clarify how this stuff is working so thanks for taking the time. On the schema topic, I personally would’ve found it helpful to have the different schema syntaxes explicitly addressed side-by-side in the docs, but you may be doing that now and the typescript syntax is gone anyway, so this might already be addressed for someone coming in now.", "username": "Brian_Luther" }, { "code": "const create1const create2create1create2_idclass CarOwner extends Realm.Object<CarOwner> {\n _id!: BSON.ObjectId;\n name!: string;\n birthDate!: Date;\n car!: Car | null;\n \n static schema = {\n name: \"CarOwner\",\n properties: {\n _id: \"objectId\",\n name: \"string\",\n birthDate: \"date\",\n car: \"Car\",\n },\n primaryKey: \"_id\",\n };\n}\n \nclass Car extends Realm.Object<Car> {\n _id!: BSON.ObjectId;\n make!: string;\n model!: string;\n miles!: number;\n \n static schema = {\n name: \"Car\",\n properties: {\n _id: \"objectId\",\n make: \"string\",\n model: \"string\",\n miles: \"int\",\n },\n primaryKey: \"_id\",\n };\n}\n\n// Open realm with your object models.\nconst realm = await Realm.open({\n schema: [Car, CarOwner],\n});\n\nconst existingCarId = new BSON.ObjectId(\"645512e5b73d72169ac61b8c\");\n// Create car object with specific _id.\nrealm.write(() => {\n realm.create(Car, {\n _id: existingCarId,\n make: \"Hyundai\",\n model: \"Accent\",\n miles: 12000,\n });\n});\n\nconst existingCar = realm.objectForPrimaryKey(Car, existingCarId);\n\n// Do both creation ops in one write transaction. More efficient.\nrealm.write(() => {\n realm.create(CarOwner, {\n _id: new BSON.ObjectId(),\n name: \"Leia\",\n birthDate: new Date(\"1987-01-01\"),\n car: existingCar,\n });\n\n realm.create(CarOwner, {\n _id: new BSON.ObjectId(),\n name: \"Han\",\n birthDate: new Date(\"1987-01-01\"),\n car: existingCar,\n });\n});\n\n// Contains an array of the two new CarOwner objects\nconst carOwners = realm.objects(CarOwner);\n", "text": "Agh, again I must apologize for time getting away from me. Sorry, @Brian_Luther, You’re correct in your assumptions in the previous post.I don’t know if it’s helpful, but here’s how I would write your sample code (tested), but with cars because that’s what I already have set up. . And assuming you want to create two CarOwners who own the same Car.", "username": "Kyle_Rollins" } ]
[Realm React] Schema and Embedded Array Confusion
2023-02-08T21:18:21.671Z
[Realm React] Schema and Embedded Array Confusion
2,416
null
[ "python", "spark-connector" ]
[ { "code": "", "text": "Hi all,Is there a way to retrieve a complete list of collections (similar to ‘show collections’ using PySpark? I would like to execute a query across multiple collections but avoid creating a new spark read session each time I do so.Cheers,Ben.", "username": "Ben_Halicki" }, { "code": "", "text": "import pyspark\nfrom pyspark.sql import SparkSessionspark = SparkSession.builder.appName(“Get MongoDB Collections”).getOrCreate()spark.conf.set(“spark.mongodb.input.uri”, “mongodb://localhost:27017/mydb”)collections = spark.read.format(“mongo”).listCollectionNames()for collection in collections:\nprint(collection)", "username": "sagar_sadhu" }, { "code": "", "text": "Hi @sagar_sadhu,Thanks for your reply. I get the following error when I tried your code:\nAttributeError: ‘DataFrameReader’ object has no attribute ‘listCollectionNames’I can see listCollectionNames is a part of the standard mongodb libraries, but not pyspark. Does this sound correct to you?Kind regards,Ben.", "username": "Ben_Halicki" }, { "code": "for collection in collections:\n sparkDF = spark.read.format(\"mongo\").option(\"collection\", collection).load()\n", "text": "Are you trying to only get a list of collections? As you pointed out that can be done via standard mongo drivers.\nFor example in python:\nhttps://pymongo.readthedocs.io/en/stable/api/pymongo/database.html#pymongo.database.Database.list_collection_namesimport pymongoclient = pymongo.MongoClient()db = client.my_databasecollections = db.list_collection_names()The MongoDB Spark connector is limited to interacting with only one MongoDB collection during each read or write operation. As a result, it does not natively support reading or writing from multiple database/collections/ schemas, simultaneously in a single operation.You can create a loop that iterates over the list of collections you want to read from, and for each collection, use the MongoDB Spark Connector to read the data into Spark.", "username": "Prakul_Agarwal" } ]
Pyspark get list of collections
2023-05-08T09:26:26.147Z
Pyspark get list of collections
1,142
null
[ "replication", "spark-connector" ]
[ { "code": "f\"mongodb://{username}:{password}@{host}/?replicaSet={replicaSet}&readPreference={readPreference}&authSource={authSource}&tls=true&tlsCAFile={tlsCAFile path}&tlsCertificateKeyFile={tlsCertificateKeyFile path}\"\nself.spark = SparkSession.builder \\\n.config(\"spark.jars.packages\", \"org.mongodb.spark:mongo-spark-connector:10.1.0\") \\\n.appName(\"APP NAME\") \\\n.getOrCreate()\n\ndf = self.spark.read.format(\"mongodb\") \\\n.option(\"connection.uri\", connection_string) \\\n.option(\"database\", <DB NAME>) \\\n.option(\"collection\", <COLLECTION NAME>) \\\n.load()\nexception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}, caused by {sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}, caused by {sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}}]\n", "text": "I am trying to connect my MongoDB instance to spark in databricks using the mongo spark connector v10.1.0. I am able to connect to MongoDB through a MongoClient instance with the same connection string that I am trying to load into spark through the connection.uri option. The connection string I am using follows the convention below.Here are my spark configurations that I am using and the way I am trying to connect.Here is the error I am running into", "username": "KYLE_HORNACEK" }, { "code": "", "text": "Also it should be noted that the the tls files are stored in dbfs.", "username": "KYLE_HORNACEK" }, { "code": "", "text": "I am able to connect to MongoDB through a MongoClient instance with the same connection stringWhere are you running this test? did you try this both on the spark master node and the worker nodes? Also are these paths accessible in bash? Are these certificate file available to all nodes in the cluster?Because my first suspect would be incorrectly configured/inaccessible DBFS (Databricks File System), that is resulting into path not resolving on all of the spark cluster.", "username": "Prakul_Agarwal" } ]
MongoDB spark driver not connecting to MongoDB Atlas cluster through databricks
2023-05-22T19:08:00.329Z
MongoDB spark driver not connecting to MongoDB Atlas cluster through databricks
799
null
[ "python", "spark-connector" ]
[ { "code": "https://spark.apache.org/third-party-projects.html", "text": "When trying to execute the code in Streaming Data with Apache Spark and MongoDB | MongoDB receiving an error message which states that \"\norg.apache.spark.SparkClassNotFoundException: [DATA_SOURCE_NOT_FOUND] Failed to find data source: mongodb. Please find packages at https://spark.apache.org/third-party-projects.html.\"Any thoughts on what is going wrong here. The Mongodb is Mongo Atlas. Spark engine is thru Databricks", "username": "Srinivasan_Subramanian" }, { "code": "", "text": "[DATA_SOURCE_NOT_FOUND] Failed to find data source: mongodbHi Srinivasan,Have you installed Mongodb Spark connector to your databricks environment?Here are the steps:\nOnce the cluster is up and running, click on “Install New” from the Libraries menu.\nHere we have a variety of ways to create a library, including uploading a JAR file or downloading the Spark connector from Maven. In this example, we will use Maven and specify org.mongodb.spark:mongo-spark-connector_XXX: as the coordinates.Remaining here: Exploring Data with MongoDB Atlas, Databricks, and Google Cloud | MongoDB Blog", "username": "Prakul_Agarwal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Streaming data using databricks spark connector
2023-05-29T11:28:28.977Z
Streaming data using databricks spark connector
1,276
https://www.mongodb.com/…b_2_1024x552.png
[]
[ { "code": "", "text": "I want to get the data in my cluster from 4 years ago, when I cannot resume the cluster since it the snapshot is too old and I dont have any backups. Is there any other way to download the data?\n\nunable1810×976 59 KB\n", "username": "Vishwa_Pravin" }, { "code": "", "text": "Use the in app support. The icon in the bottom right corner of your screen shot.", "username": "chris" }, { "code": "", "text": "Hi @Vishwa_Pravin ,Thank you for reaching out. If you could reach out to our support team ( our in app support chat in Atlas can be used for this as Chris nicely pointed out! ), then our support team should be able to assist you in gathering your data from this cluster.", "username": "Evin_Roesle" } ]
How to download very old snapshot?
2023-05-30T06:52:31.376Z
How to download very old snapshot?
351
null
[ "swift", "transactions" ]
[ { "code": "Error integrating bootstrap changesets: Failed to transform received changeset: Schema mismatch: Link property 'category' in class 'TransactionObject' points to class 'CategoryObject' on one side and to 'SubcategoryObject' on the other.class TransactionObject: Object, ObjectKeyIdentifiable {\n\t@Persisted(primaryKey: true) var _id: ObjectId\n\t// ...\n\t@Persisted var category: CategoryObject?\n\t// ...\n}\n", "text": "I changed database collection name and updated a type name on database schema from “SubcategoryObject” to “CategoryObject”.I also have a TransactionObject that has a property that was linked to the SubcategoryObject (which has now been renamed). I turned off sync on Atlas service app and turned it on later but I’m getting a sync error:Error integrating bootstrap changesets: Failed to transform received changeset: Schema mismatch: Link property 'category' in class 'TransactionObject' points to class 'CategoryObject' on one side and to 'SubcategoryObject' on the other.The Swift Realm Object has already been updated:I’ve also incremented schemaVersion and provided a migration block but the migration block is not being called, I set a breakpoint and it’s not hitting the breakpoint.What else do I need to do to update the Local Realm database to match the schema on the Atlas service app so that I can get sync working again.", "username": "tobitech" }, { "code": "", "text": "Without seeing more of your models, I can’t say for sure what the issue is. I’m guessing it’s one of a few possible things:When you’re using Device Sync, you don’t need to “migrate” the realm file that is on a device because it’s not a “local” realm. It’s a synced realm. That’s why your migration block breakpoint isn’t being hit - a synced realm doesn’t call the migration block.After changing the schemas, the realm file on device should experience a client reset and re-download a new version of the synced realm. The Realm object models in your Swift code need to be updated to match the App Services schema. That, plus handling a client reset, are the only things your Swift code needs to do.I’d say double check your schemas in App Services and in your Realm object models and make sure you’ve updated any documents in your linked Atlas collection that use your old schema to match your new schema, and one of those things should fix the issue.(Also, if any of these are fields or object types you’re syncing on in the Flexible Sync subscription in your Swift app code, make sure you’ve updated the subscription query.)", "username": "Dachary_Carey" }, { "code": "", "text": "Thank you @Dachary_Carey this is very helpful and provides a lot of insight on how schema changes are handled with a synced realm. I will keep digging based on this info", "username": "tobitech" } ]
How do I update Local Realm to match Atlas App Service Schema
2023-05-29T10:05:10.717Z
How do I update Local Realm to match Atlas App Service Schema
654
null
[ "dot-net" ]
[ { "code": "System.Diagnostics.DiagnosticSourcenamespace OpenTelemetry.AutoInstrumentation;\n\ninternal class Initializer\n{\n public static void EnableAutoInstrumentation(InstrumentationOptions options)\n {\n // Library owner implements bootstrapping\n }\n}\n", "text": "OpenTelemetry AutoInstrumentation has issues supporting libs with custom bootstrapping, external libs, etc.\nI’m researching currently if it’s possible for library authors to address some feedback for better compatibility.There are 2 ways to make automatic instrumentation easier for us to implement:A library should always try to create activities by default. If there is no listener, no activities are created (this is System.Diagnostics.DiagnosticSource behaviour).\nMajor examples: ASP.NET Core, HttpClientIn any case if library authors do not want to create activities by default (overhead caused by architecture, etc), it is the backup behaviour how AutoInstrumentation can still easily wire up activity creation.A library should contain specialized type for bootstrapping Auto Instrumentation:This proposal is opened for feedback to research generic / easy patterns how to enable automatic instrumentation for libraries.", "username": "Rasmus_Kuusmann" }, { "code": "", "text": "Hi, @Rasmus_Kuusmann,Thank you for reaching out to us. Integration with OpenTelemetry is a product question and I would encourage you to start up a conversation with our Product Team. @Patrick_Gilfether1 is the PM for the .NET/C# Driver. I’ll ask him to reach out to you to start a discussion.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Thanks for reaching out @Rasmus_Kuusmann. I’ve sent you a DM!", "username": "Patrick_Gilfether1" } ]
Better compatibility with OpenTelemetry AutoInstrumentation
2023-05-26T09:46:09.588Z
Better compatibility with OpenTelemetry AutoInstrumentation
1,023
null
[ "data-modeling", "swift", "flexible-sync" ]
[ { "code": "final class User: Object {\n @Persisted var params: List<UserParam>\n}\n\nfinal class UserParam: EmbeddedObject {\n @Persisted var key: UserParamKey\n @Persisted var value: String\n}\n\nenum UserParamKey: Int, PersistableEnum {\n case firstName, lastName, businessName [etc]\n}\n{\n \"title\": \"User\",\n \"type\": \"object\",\n \"required\": [\n \"_id\",\n \"lastOnboardingStep\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"calendarId\": {\n \"bsonType\": \"string\"\n },\n \"lastOnboardingStep\": {\n \"bsonType\": \"long\"\n },\n \"ownerId\": {\n \"bsonType\": \"string\"\n },\n \"params\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"title\": \"UserParam\",\n \"type\": \"object\",\n \"required\": [\n \"key\",\n \"value\"\n ],\n \"properties\": {\n \"key\": {\n \"bsonType\": \"long\"\n },\n \"value\": {\n \"bsonType\": \"string\"\n }\n }\n }\n }\n }\n}\n{\n \"logs\":[\n {\n \"_id\":\"6465f6ed0a9d994f8d714314\",\n \"co_id\":\"6465f6eb0a9d994f8d7142c9\",\n \"type\":\"SYNC_SESSION_END\",\n \"user_id\":\"6465f6ebeb4f073b5d5f2ca9\",\n \"domain_id\":\"645915213a82d1d7fbafefca\",\n \"app_id\":\"645915213a82d1d7fbafefc9\",\n \"group_id\":\"64591410ffb83f492c3916c7\",\n \"request_url\":\"/api/client/v2.0/app/billy-jgeoz/realm-sync\",\n \"request_method\":\"GET\",\n \"remote_ip_address\":\"5.14.130.67\",\n \"started\":\"2023-05-18T09:59:09.083Z\",\n \"completed\":\"2023-05-18T09:59:09.126Z\",\n \"function_call_location\":\"DE-FF\",\n \"function_call_provider_region\":\"aws-eu-central-1\",\n \"error\":\"ending session with error: failed to generate history batches: error generating object modifications: error generating post image: image generator encountered error applying instruction to state: error applying instruction to object in table 'User' with primary key '6464e5cc20c418867833b418' at field path 'params.0': ArrayInsert.prior_size was 0 but built-up array was only of length 14 (ProtocolErrorCode=212)\",\n \"error_code\":\"BadChangeset\",\n \"messages\":[\n \"Session was active for: 0s\"\n ],\n \"platform\":\"unknown\",\n \"platform_version\":\"Version 16.4 (Build 20E247)\",\n \"sdk_name\":\"Realm Swift\",\n \"sdk_version\":\"10.39.1\",\n \"sync_query\":{\n \"Settings\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"User\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"Appointment\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"Client\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"Invoice\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"RecurrenceStatus\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"Service\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\"\n },\n \"sync_session_metrics\":{\n \"uploads\":2,\n \"downloads\":4,\n \"downloaded_changesets\":3,\n \"downloaded_changesets_size\":920,\n \"changesets\":2\n }\n },\n {\n \"_id\":\"6465f6ed0a9d994f8d714313\",\n \"co_id\":\"6465f6eb0a9d994f8d7142c9\",\n \"type\":\"SYNC_CLIENT_WRITE\",\n \"user_id\":\"6465f6ebeb4f073b5d5f2ca9\",\n \"domain_id\":\"645915213a82d1d7fbafefca\",\n \"app_id\":\"645915213a82d1d7fbafefc9\",\n \"group_id\":\"64591410ffb83f492c3916c7\",\n \"request_url\":\"/api/client/v2.0/app/billy-jgeoz/realm-sync\",\n \"request_method\":\"GET\",\n \"remote_ip_address\":\"5.14.130.67\",\n \"started\":\"2023-05-18T09:59:09.083Z\",\n \"completed\":\"2023-05-18T09:59:09.126Z\",\n \"function_call_location\":\"DE-FF\",\n \"function_call_provider_region\":\"aws-eu-central-1\",\n \"error\":\"failed to generate history batches: error 
generating object modifications: error generating post image: image generator encountered error applying instruction to state: error applying instruction to object in table 'User' with primary key '6464e5cc20c418867833b418' at field path 'params.0': ArrayInsert.prior_size was 0 but built-up array was only of length 14 (ProtocolErrorCode=212)\",\n \"error_code\":\"BadChangeset\",\n \"messages\":[\n \"Upload message contains 1 changesets (total size 1.4 kB) to be integrated\"\n ],\n \"platform\":\"unknown\",\n \"platform_version\":\"Version 16.4 (Build 20E247)\",\n \"sdk_name\":\"Realm Swift\",\n \"sdk_version\":\"10.39.1\",\n \"sync_query\":{\n \"Service\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"Settings\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"User\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"Appointment\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"Client\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"Invoice\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\",\n \"RecurrenceStatus\":\"(ownerId == \\\"6465f6ebeb4f073b5d5f2ca9\\\")\"\n },\n \"sync_write_summary\":{\n \"Service\":{\n \"inserted\":[\n \"6464ce7583f82645c6f9bc2d\"\n ]\n },\n \"Settings\":{\n \"inserted\":[\n \"6464e5cc20c418867833b419\"\n ]\n },\n \"User\":{\n \"inserted\":[\n \"6464e5cc20c418867833b418\"\n ]\n }\n }\n }\n ]\n}\n", "text": "My setup for flexible sync in an iOS app, using RealmSwift (schema seems fine, it was created automatically by using development mode):User schema:When I’m using a local Realm everything works fine. When I do anonymous login, I’m getting these errors:It should be possible to have a list of embedded objects, right?", "username": "Madalin_Sava" }, { "code": "", "text": "I’m getting the same errors with development mode on or off.\nAlso, I know the issue is with the list property because of the count (14) of items in the logs.", "username": "Madalin_Sava" }, { "code": "__realm_sync_645915213a82d1d7fbafefc9", "text": "Hi @Madalin_Sava,Your schema looks correct, but the error indicates that your sync history has been corrupted somehow. We’ll need some additional information to figure out the root cause of this issue. Can you provide the following if possible in a DM?", "username": "Kiro_Morkos" }, { "code": "UserObjectId('6464e5cc20c418867833b418')ownerId = 6465f1adab0d491cdd8353f0ownerId == \"646788c12ed19c14cc6e4966\"", "text": "Thanks for sharing those!Given the logs and history you shared, it appears that the issue is that the user is trying to update a User object that is not in their query view. i.e. the user is trying to modify the object with primary key ObjectId('6464e5cc20c418867833b418'), which has ownerId = 6465f1adab0d491cdd8353f0. However, the user’s active query on the User table is ownerId == \"646788c12ed19c14cc6e4966\". Normally this would trigger a compensating write, but there is a known issue being investigated now that triggers the error you’re seeing instead when lists are involved.In the meantime, I don’t have enough information to know how your app is able to modify an object that it cannot see. 
Generally this is only possible if you’re not using unique primary keys, and try to create an object that already exists outside of the user’s active query view.", "username": "Kiro_Morkos" }, { "code": "let user =app.login(credentials: .anonymous)let configuration = user.flexibleSyncConfiguration { subs in\nsubs.append(\n QuerySubscription<User>(name: User.className()) {\n $0.ownerId == user.id\n }\n )\n// similar subscriptions for the other objects\n}\nRealm(configuration: configuration, downloadBeforeOpen: .never)create(type, value: newObject, update: .modified)createownerIdthrow;", "text": "I did some more debugging and got stuck, can you help me out?\nWhat I’m doing:I checked several times the User._id and ownerId and I didn’t see any mismatch like you mentioned. ownerId is always the right one and it’s the same when I check the logs in AppServices. Maybe you looked at an older subscription query?\nI also tried to look at the files I sent you but didn’t find the subscription queries, maybe you can show me how to check them and debug on my own.How can I find the instruction that causes the illegal write? I never get any errors for a Realm.write call or in the logger. Also, is there a way to log the subscription query as it’s being executed? In the AppServices Logs, the subscriptions look ok, always with the logged in User.id.Also, I checked and there is no other write besides the create call, and the object I pass has the right ownerId.Sometimes I’m getting an exception in RealmDatabase:network.hpp.do_recycle_and_execute:2770 (the single throw; instruction) when debugging or at startup (every time, in this case) if I try to open a synced realm from the cached anonymous user, not sure if it’s related.", "username": "Madalin_Sava" }, { "code": "_id_id", "text": "It sounds like this may be your issue. If an object already exists on the server with the same primary key (_id value), then this will be considered “updating an object outside your view”, which is not allowed. When copying the object, try setting a new, unique _id value. Let me know if that works!", "username": "Kiro_Morkos" }, { "code": "", "text": "That’s right, but then I’m not sure how to solve my use case. Let’s say I have an Object subclass called “Preferences” and I need exactly one instance of it for every user.\nIf a user starts with a local realm (doesn’t login), I’ll have a Preferences instance with _id of “1” (for simplicity).\nThen, the user logs in with Apple and I copy the object to the synced realm and change the _id to “2”.\nIf the user logs out, I go back to another local realm and I’ll have the instance with _id “3”.\nUser logs in with Google, _id will be “4”. What happens if the user decides to also log in with Apple (link identities)? Is there a way to manually merge the realms or what is the best practice?", "username": "Madalin_Sava" }, { "code": "PreferencesownerIdownerId", "text": "If I’m understanding correctly, when the user logs back in with an existing account, the Preferences object with ownerId set to the user id will be synced back down to the device (because of the subscription). If applicable, you could then merge the “local” preferences into the synced one. Does that answer your question?You also mentioned linking identities, but I’m not sure how that’s applicable here. 
Linking an additional identity provider to an existing user does not create a new user object, so the ownerId should not change in that case.", "username": "Kiro_Morkos" }, { "code": "", "text": "Makes sense, I think at this point my uncertainties are rather coming from the business logic requirements. The Realm usage seems clear for now.Thank you for your help!", "username": "Madalin_Sava" } ]
Error applying instruction to object in table - when doing anonymous login, for an Object with a List<EmbeddedObject> property (ProtocolErrorCode=212)
2023-05-18T10:17:02.466Z
Error applying instruction to object in table - when doing anonymous login, for an Object with a List&lt;EmbeddedObject&gt; property (ProtocolErrorCode=212)
1,044
https://www.mongodb.com/…f_2_1024x576.png
[ "serverless", "newyork-mug" ]
[ { "code": "Associate Developer Advocate, MongoDBPrincipal Developer Advocate, MongoDBData & Analytics Partner SA, Amazon Web Services (AWS)", "text": "\nNYC MUG1920×1080 247 KB\nMay 31st, 2023, 6:00 pm - 8:30 pm ESTNew York MongoDB User Group is excited to conclude the beautiful month of May with a bang!Register here for a confirmed seat!This upcoming meetup is being hosted to unite interested developers, MongoDB, and AWS enthusiasts in the region, to share about how to move ahead of traditional relational database management systems (RDBMS) to a NoSQL-first mentality, NoSQL data modeling and learn about building Serverless Event-Driven Applications with MongoDB Atlas & Atlas App Services on AWS.We will then host an Interactive Data Safari with MongoDB Atlas Charts!In the meantime make sure you join the New York MongoDB User Group to introduce yourself and stay up to date with future meetups and discussions. With MongoDB.local New York a few weeks after the event, we will be awarding some free passes at the event. You could also register right now with coupon code MUG50 and stack it with Justin10 to get 60% off.Please bring your laptop to participate!Event Type: In-Person\nLocation: AWS Loft, 350 W Broadway, New York, NY 10013, United StatesSpeakers and HostsAnaiya Raisinghani\nAssociate Developer Advocate, MongoDB–\nPoveda, Justin-Headshot554×554 78 KB\nJustin Poveda @Justin_Poveda\nNYC MongoDB User Group Leader–Michael Lynn - @Michael_Lynn\nPrincipal Developer Advocate, MongoDB–Alekseev Igor\nData & Analytics Partner SA, Amazon Web Services (AWS)", "username": "Justin_Poveda" }, { "code": "May 31st 2023, 6:00pm – April 30th 2023, 8:00pm, (GMT-04:00) Eastern Time (US & Canada)", "text": "The date for the event says May - April May 31st 2023, 6:00pm – April 30th 2023, 8:00pm, (GMT-04:00) Eastern Time (US & Canada) I’m guessings it’s May 31st 6:00pm - 8pm?", "username": "Taj_English" }, { "code": "", "text": "Yes! I just fixed the date, thank you for commenting!", "username": "Justin_Poveda" }, { "code": "", "text": "Kind Reminder: Just a friendly heads-up that the event is scheduled for tomorrow, and we’re really looking forward to seeing all of you there! ", "username": "Harshit" } ]
AWS meets MongoDB @ AWS Startup Loft NYC
2023-05-02T19:40:38.782Z
AWS meets MongoDB @ AWS Startup Loft NYC
1,759
null
[ "dot-net" ]
[ { "code": "", "text": "Hi, i send Realm db zipped inside my App.If adopted FTS , it’s possible create index for FTS on demand after unzip in local device ?thanks", "username": "Sergio_Carbonete1" }, { "code": "", "text": "Hi @Sergio_Carbonete1,Adding a FTS index does not require a migration, so even when you created the zipped realm you didn’t have an index defined, then it should not be a problem. The index will be created when the realm is opened the first time.", "username": "papafe" }, { "code": "[Indexed(IndexType.FullText)]", "text": "Hi thanks,If i understand first define a class without defining a [Indexed(IndexType.FullText)] attribute to generate zipped realm . And in class used to read realm i define this attribute.thanks for help", "username": "Sergio_Carbonete1" }, { "code": "", "text": "If you open the Realm as readonly, you won’t be able to create an index if one doesn’t already exist. But you can create the file with an index and zip that in your app.", "username": "nirinchev" } ]
Index Full text creation on demand
2023-05-25T17:13:09.180Z
Index Full text creation on demand
545
https://www.mongodb.com/…_2_452x1024.jpeg
[ "golang", "storage" ]
[ { "code": "4.0.16go1.16globalsign/mgov0.0.0-20181015135952-eeefdecb41b8", "text": "Hi -I’d like to solicit feedback about an issue we’re experiencing on a self-hosted mongo cluster.Occasionally, the application that depends on this 3 node mongo cluster will experience high levels of latency which we have correlated with a sudden drop in available read tickets. As seen in the screenshot below, this also happens after a spike in new connections, all while CPU, memory, and disk metrics remain relatively stable.\nimage789×1784 208 KB\nDo you have any suggestions about what we should investigate to help with this issue?mongodb version: 4.0.16\napplication: go\napplication version: 1.16\nmongo driver: globalsign/mgo\nmongo driver version: v0.0.0-20181015135952-eeefdecb41b8Thanks for your time!", "username": "a_ins" }, { "code": "", "text": "Hi @a_ins,Welcome to the MongoDB Community forums experience high levels of latency which we have correlated with a sudden drop in available read tickets. As seen in the screenshot below, this also happens after a spike in new connections, all while CPU, memory, and disk metrics remain relatively stable.From my observations between the time frame of 16:06 and 16:07, there was a significant fluctuation in the number of tickets available, going from 120 or more to zero and then back up to 120 or more within a span of 30 seconds. At the same time, there was an increase in the number of connections, which rose from 0.5K to 1.7K, representing a three-fold increase.I believe this is working as intended, and not really an issue from the server side, but rather a design feature of WiredTiger to prevent it from being overwhelmed and thus having difficulty servicing the work assigned to it. The decrease in available read/write tickets indicates that the number of concurrent running operations has reached the configured read/write values, which means that any additional functions must wait until one of the running threads finishes its work on the storage engine.In the majority of cases, a drop in the number of reads/write tickets is an indication of an increase in load or slow performance, such as slow hardware or poorly-indexed queries, causing operations to run for much longer than expected, leading to an increase in latency. However, in this instance, I think it’s the flood of incoming connections and concurrent operation and this is the correlation due to that. From the graphs you posted, though, it seems that WiredTiger was able to process all work within 30 seconds.Let us know if you have any further questions.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you for the response Kushagra, this was helpful to see.I’ll be going back to my team for more input.", "username": "a_ins" }, { "code": "", "text": "Read and write tickets control concurrency in WiredTiger. That is, they control how many read and write operations can execute on the stor...This doc explains a bit on the concepts.Those increased number connections must have been causing high latency on some queries. I believe in modern world there’s no one to one mapping from connection to thread on the server. If all queries run fast enough, even a single thread is able to handle all of them (e.g. redis). 
So some queries must take long to finish.You can also check query latency graph during that window.", "username": "Kobe_W" }, { "code": "", "text": "on the other hand, if number of connection is indeed the root cause , you can consider adding a proxy in front of mongodb servers (if this is not done yet). Many proxy servers can employ a connection pool with backend servers to limit the connection number. (e.g. proxySQL for mysql).", "username": "Kobe_W" }, { "code": "", "text": "Hi all, I’m @a_ins’s coworker and I can add a little more detail to our issue. It’s a bit baffling and we’d definitely appreciate guidance from those with more specific Mongo experience.This has been a recurring incident in which database response times and query run times spike. Queries that normally take tens of ms take several seconds. Read tickets drop and the cluster effectively locks up for a while. Then it eventually seems to work itself free and resume normal operation. The drop in read tickets occurs on both reader nodes, though sometimes one occurs before the other by seconds or minutes.Connections do seem to spike, but I’ve found at least one incident where connections spiked after the drop in read tickets. The drop preceded the spike.We definitely have some poorly optimized queries and indexes, and in the incident where tickets dropped prior to the connection spike there was an increase in documents returned and in unindexed queries. However, in other incidents we don’t see that.If it was simply a slow query or queries, I wouldn’t expect it to consistently impact both readers in this way. And, while I’m still getting up to speed on the issues, my understanding of previous investigations is that there doesn’t appear to be a consistent pattern to the queries running during these incidents.This really looks like a resource contention issue to me. My understanding of read tickets dropping to zero is that it’s usually a symptom of some other resource bottleneck that’s causing queries to hang on to read tickets for longer than usual, resulting in tickets getting used up and queries queuing. Is this an accurate understanding? Are there other potential root causes of read tickets dropping?I struggle to think of a database scenario that should cause the whole database to lock up with out some sort of resource contention - but, granted, I’m far more experienced with relational databases than Mongo or NoSQL. And we haven’t come any where near any of our resource limits - like @a_ins said: CPU, disk, memory all look fine. We’ve got the readers on huge boxes and Mongo’s nowhere near the limits. Is it possible there’s some sort of internal resource limit preventing Mongo 4 from using all the resources on the boxes? Our writer is on a much smaller box - is it possible the resources available to the writer are limiting the readers in some way?Is this something a poorly written or indexed query could cause: a complete lock up of the cluster with out apparent resource contention?We’d definitely appreciate any thoughts or guidance!", "username": "Daniel_Bingham" }, { "code": "", "text": "Hi @Daniel_Bingham,Firstly, it’s important to note that the WT ticket is a purely reactive mechanism that responds to how much work is being processed and incoming work, to prevent WiredTiger from having too much work backlog and making the situation worse. 
This means that if the workload is very high, the WT ticket will react to that.There are a few possible causes of performance issues previously mentioned, such as slow queries and underprovisioned hardware. However, if all the metrics appear to be fine, it’s possible that we are looking at metrics that are not showing the actual bottleneck. It’s important to investigate further and consider other potential causes.In terms of resource contention, can I ask what makes you think that this could be the cause?However, it’s worth noting that there are no artificial limitations on resources from the MongoDB server side. In fact, MongoDB will try to use all available resources in order to provide the best possible performance. This is why running multiple mongod instances on a single node is discouraged, as it can lead to resource contention.Another possible cause of performance issues could be poorly written or indexed queries. If a query is not optimized, and you have many of them simultaneously, they could be holding read tickets for an extended amount of time, reducing the number of available tickets for incoming work. It’s worth checking the logs for any slow queries and investigating whether they could be the cause.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "@Daniel_Bingham @a_insMy cluster is experiencing the same issue.\nHow did you resolve the issue?\nAny input is appreciated!", "username": "yahao.xing" } ]
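For anyone watching the same counters outside of a monitoring UI: the ticket numbers discussed in this thread are exposed by serverStatus, so they can be sampled next to the connection count with a few lines of PyMongo (the host and sampling interval are placeholders):

```python
import time
from pymongo import MongoClient

client = MongoClient("mongodb://db-host:27017")  # placeholder host

for _ in range(10):  # sample roughly once a second for ten seconds
    status = client.admin.command("serverStatus")
    tickets = status["wiredTiger"]["concurrentTransactions"]
    print(
        "read avail:", tickets["read"]["available"],
        "read out:", tickets["read"]["out"],
        "write avail:", tickets["write"]["available"],
        "connections:", status["connections"]["current"],
    )
    time.sleep(1)
```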
Investigating Occasional Low WiredTiger Read Ticket Availability
2023-02-07T19:47:32.890Z
Investigating Occasional Low WiredTiger Read Ticket Availability
3,146
null
[ "queries", "node-js", "crud" ]
[ { "code": "app.patch('/update', async(req, res) => {\n await client.connect();\n db = await client.db(\"Lab3\");\n let collection = await db.collection(\"students\");\n let susername = req.body.username\n\n let result = await collection.findOneAndUpdate(\n {username : susername}, req.body, {new: true}\n )\n\n res.send(result).status(200);\n});\n", "text": "Hi everyone, I’m currenly facing a problem when I want to update data to my mongoDB database. I try to not specified what to update from the user, but let the user choose what to update. So that my findOneandUpdate parameter will be username as filter, req.body as the content to update to the database. Can anyone help me to solve this problem? Thank you so much!", "username": "WoNGG" }, { "code": "{ $set: req.body }\n", "text": "Hello @WoNGG,Can you please share the error you are getting?I can see you missed the set stage name in the update part, it should be,Note: Make sure you filter the properties before updating the database, otherwise it will increase the junk data those are not useful in the future.", "username": "turivishal" }, { "code": "", "text": "Hello, @turivishal. The error before I get is [Update document requires atomic operators]. After I entering the $set parameter, the program works successfully! Thanks for your helping, I have find for the way to solve this error for hours.", "username": "WoNGG" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atomic Operators needed when updating document
2023-05-30T00:25:44.432Z
Atomic Operators needed when updating document
586
null
[ "queries" ]
[ { "code": "", "text": "I have a collection with some document sctructure like this\n{\nname: string\nemails:[string, string]\n}\nI have exact name and an email. And I need to find document that have that name and have the email in array that MAY BE THE END of given email. (Example: array is [[email protected], [email protected]], given email is [email protected] - so its true, we have an email that may be the end of the given)\nQuestion - can I write this in query or it only can be done in code. I don’t want to use that much RAM because the array may be pretty big.\nI am new in Mongo, maybe I just dont know the basics but I didn`t find the answer.", "username": "Andrew_Kondratyev" }, { "code": ".findOne({ emails: \"search email\" })\n", "text": "Hello @Andrew_Kondratyev, Welcome to the MongoDB community forum,You can directly check your input in array of strings without specifying any position,Checkout the documentation with detailed examples:", "username": "turivishal" }, { "code": "emails: [\"endOfString\", \"otherString\"]\ngivenEmail: \"startOfString_endOfString\"\n", "text": "I don’t need exact match. Just some of string in array shoud be the end of the given string\nLike we have:And this should match withOr I can maybe store them somehow?", "username": "Andrew_Kondratyev" }, { "code": "\"_\"", "text": "I would suggest splitting your input email by \"_\" and do the exact match with the last value.", "username": "turivishal" } ]
Find if string ends on some of the string in array
2023-05-29T19:09:01.416Z
Find if string ends on some of the string in array
364
null
[ "queries" ]
[ { "code": "", "text": "I’m experiencing this behaviour: I have a very complex (therefore very slow) query which does not honours the specified maxTimeMS.I found this in the docs:MongoDB only terminates an operation at one of its designated interrupt points.But there’s no clarification of what an “interrupt points” really is. I’ve read elsewhere that the number of batches may be related, but did not find anything describing what happen in case of single batch (my case, as my query only returns a bunch of docs).My suspect: a query comprised of only one batch would never be terminated no matter the value of maxTimeMS nor the time required to complete the batch.Can anyone confirm this?thanks", "username": "MatteoSp" }, { "code": "maxTimeMS", "text": "Hey @MatteoSp,Welcome to the MongoDB Community forums I have a very complex (therefore very slow) queryCould you please share the logline for this query, which took longer than maxTimeMS, showing its duration, and query plan?But there’s no clarification of what an “interrupt points” really is.As per the documentation, interrupt point is a point in an operation’s lifecycle when it can safely abort.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "maxTimeMS", "text": "Hi Kushagra!Could you please share the logline for this query, which took longer than maxTimeMS , showing its duration, and query plan?No, I cannot.As per the documentation, interrupt point is a point in an operation’s lifecycle when it can safely abort.This precisely what I mean by there’s no clarification. Where are these points located within an operation lifecycle? After every document? After a numbers of docs? After every batch? Elsewhere?thanks", "username": "MatteoSp" }, { "code": "batch", "text": "Hey @MatteoSp,The interrupt points are implementation details and may change from version to version, and as far as I know there is not a single exhaustive list that shows all of them (since they’re implementation details). However, in general terms, MongoDB query operations have a ‘yield point’, where it can pause and give control to other operations, typically while waiting for data to load from disk.Regarding the ‘batch’ parameter in MongoDB, I believe you meant batchSize(). This determines the number of documents returned in each batch of a response, which I don’t feel is the main cause of what you’re seeing unless you have evidence otherwise.To further assist you, may I ask if you are using the latest version of MongoDB? If not, I kindly suggest updating it. Additionally, it would be helpful to know if you are consistently reproducing the issue and provide a script for reference. 
Without a reproduction script, it’s quite impossible to determine what’s happening in your specific case.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi Kushagra,\nhere a piece of the log (taken from Atlas advisor, I removed some sensitive detail):\nimage981×752 35.5 KB\nAs you can see I have a maxTimeMS of 55K ms, but the find operation completes in 300K ms.About batch/batchSize: I was asking if being in the case of a single batch (nReturned = 2) prevents any interrupt point to be reached (because of this: Clarification regarding working of maxTimeMS options - #3 by Jason_Tran).thanks", "username": "MatteoSp" }, { "code": "maxTimeMSmaxTimeMSmaxTimeMSmaxTimeMSmaxTimeMSmaxTimeMSnReturned = 2nReturned", "text": "Hey @MatteoSp,Apologies for the late response.As you can see I have a maxTimeMS of 55K ms, but the find operation completes in 300K ms.Understanding maxTimeMS:The maxTimeMS parameter serves to limit resource consumption and prevent operations from running indefinitely. However, it is important to note that maxTimeMS is not a hard limit and does not guarantee operations will stop precisely at the specified time. Instead, it represents the “cumulative processing time”, excluding yield time.Analysis of your scenario:Based on the provided information, the query yielded 23,357 times. Each yield represents an interrupt point where the query takes a break. So it’s possible to blow past maxTimeMS in wall clock time, and the query will not be stopped since it hasn’t cumulatively spent maxTimeMS in processing time. It’s possible that the server is very busy, leading the query to yield a lot, and it may seem like the maxTimeMS was ignored. However, cumulatively, the query hasn’t exceeded the maxTimeMS setting in processing time since most of the time was spent on waiting and yielding.About batch/batchSize: I was asking if being in the case of a single batch (nReturned = 2) prevents any interrupt point to be reachedRegarding the question about a single batch with nReturned = 2, it does not directly prevent interrupt points from being reached. Interrupt points occur during query execution regardless of the batch size. However, it’s important to consider the overall execution time and resource consumption. If the query involves multiple batches or has a higher nReturned value, it may result in a longer processing time also.I hope it provides you with an understanding of your scenario. In case of any further questions or concerns feel free to reach out.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
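Seen from the driver side, the budget discussed above looks like this in PyMongo (URI and filter are placeholders); as the thread concludes, the limit counts cumulative processing time, so a heavily yielding query can run far longer in wall-clock terms before the exception fires:

```python
from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

coll = MongoClient("mongodb://db-host:27017")["mydb"]["mycoll"]  # placeholders

try:
    docs = list(coll.find({"some": "expensive-predicate"}).max_time_ms(55_000))
except ExecutionTimeout:
    # Raised when the server judges the operation to have spent more than 55s
    # of processing time; time spent yielded or waiting does not count.
    print("query was killed by maxTimeMS")
```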
About maxTimeMS() and interrupt points
2023-05-04T09:12:42.438Z
About maxTimeMS() and interrupt points
603
null
[ "sharding", "time-series" ]
[ { "code": " db.createCollection( \"test_timeseries\", {\n timeseries: {\n timeField: \"timeStamp\",\n metaField: \"metaData\",\n granularity: \"seconds\"\n },\n expireAfterSeconds: 604800\n });\n\n db.adminCommand({ shardCollection: \"test.test_timeseries\", key: { \"metaData.location\": 1 } });\n db.adminCommand({ updateZoneKeyRange: \"test.test_timeseries\", min: { \"metaData.location\": 11 }, max: { \"metaData.location\": 20 }, zone: \"rs1\" });\n db.adminCommand({ updateZoneKeyRange: \"test.test_timeseries\", min: { \"metaData.location\": 1 }, max: { \"metaData.location\": 10 }, zone: \"rs2\" });\n\n db.createCollection( \"test_regular\"});\n\n db.adminCommand({ shardCollection: \"test.test_regular\", key: { \"metaData.location\": 1 } });\n db.adminCommand({ updateZoneKeyRange: \"test.test_regular\", min: { \"metaData.location\": 11 }, max: { \"metaData.location\": 20 }, zone: \"rs1\" });\n db.adminCommand({ updateZoneKeyRange: \"test.test_regular\", min: { \"metaData.location\": 1 }, max: { \"metaData.location\": 10 }, zone: \"rs2\" });\n", "text": "Hi All,\nI am trying to create a sharded time series collection and add updateZoneKeyRange as belowBut the time series collection “test_timeseries” is getting created only on rs1 shard. Also, documents inserted to “test_timeseries” collection are getting inserted into only rs1 irrespective of metaData.location value.If I execute same commands for regular collection as belowHere, for regular collection, “test_regular” collection is getting created on rs1 as well as rs2. Also, documents inserted to “test_regular” collection are getting inserted into rs1 and rs2 shards based on the metaData.location.Mongodb version:5.0.17Does updateZoneKeyRange works with Time Series Collection In Mongodb? What is wrong here?CC: @Kushagra_KesavThanks in advance.", "username": "Yogesh_Sonawane1" }, { "code": "db.adminCommand({ updateZoneKeyRange: \"test.system.buckets.test_regular\", min: { \"meta.location\": 11 }, max: { \"meta.location\": 20 }, zone: \"rs1\" });\ndb.adminCommand({ updateZoneKeyRange: \"test.system.buckets.test_regular\", min: { \"meta.location\": 1 }, max: { \"meta.location\": 10 }, zone: \"rs2\" });\n\ndb.adminCommand({ shardCollection: \"test.test_regular\", key: { \"metaData.location\": 1 } });\n\n", "text": "The problem got solved when executed updateZoneKeyRange on bucket collection rather than view.also updateZoneKeyRange should be called before shardCollection.\nRequesting team to document this.", "username": "Yogesh_Sonawane1" }, { "code": "sh.shardCollection(\n \"test.weather\",\n { \"metadata.sensorId\": 1 },\n {\n timeseries: {\n timeField: \"timestamp\",\n metaField: \"metadata\",\n granularity: \"hours\"\n }\n }\n)\n\ndb.weather.insertOne( {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-18T00:00:00.000Z\"),\n \"temp\": 12\n} )\n", "text": "Hi All,\nwith respect to this documentationMongodb version used: 5.0.17I am trying to shard the time series collection, but it is not getting replicated to other shard members.Now when I login to each shard, the time series collection is available on only one shard from where it was created. 
I am connecting to router to create the time series collection.\nThe above steps works fine for regular collection.Thanks in advance.", "username": "Yogesh_Sonawane1" }, { "code": "sh.addShard(\"<replica_set1>/<rs1_node_1_ip_address>:27018,<rs1_node_2_ip_address>:27018,<rs1_node_3_ip_address>:27018\");\nsh.addShardToZone(\"<replica_set1>\", \"rs1\");\nsh.addShard(\"<replica_set2>/<rs2_node_1_ip_address>:27018,<rs2_node_2_ip_address>:27018,<rs2_node_3_ip_address>:27018\");\nsh.addShardToZone(\"<replica_set2>\", \"rs2\");\n\n//enabling sharding for test.collection1.\n\nsh.enableSharding(\"test\");\nsh.shardCollection(\"test.collection1\", {key:1});\nsh.updateZoneKeyRange(\"test.collection1\", { key: 1 }, { key: 5 }, \"rs1\");\nsh.updateZoneKeyRange(\"test.collection1\", { key: 6 }, { key: 10 }, \"rs2\");\n", "text": "Hi All,\nI am trying to shard the time series collection and adding shard zones using belowThis works fine for if test.collection1 is a regular collection, documents are stored in different shards based on the zone key range.\nBut if test.collection1 is a time series collections, all documents are getting stored in rs1 only, irrespective of value of key.Is it that updateZoneKeyRange does not work with time series collection?\nhow to achieve that?Thanks in advance.", "username": "Yogesh_Sonawane1" }, { "code": "", "text": "Thanks for flagging this; we have a warning on 5.0 rapid releases around not supporting zones for sharded time series collections, but not 5.0.We’re looking into it now.", "username": "Chris_Kelly" }, { "code": "", "text": "Thank you for your reply.\nI would like to understand your reply.\nI am using 5.0.17 version, which supports time series collection sharding as per your documentation.Thank you in advance.", "username": "Yogesh_Sonawane1" }, { "code": "", "text": "Dear team,\nCan you please recommend here ?Thank You", "username": "Yogesh_Sonawane1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does updateZoneKeyRange work with Time Series Collections in MongoDB?
2023-05-23T13:42:54.308Z
Does updateZoneKeyRange work with Time Series Collections in MongoDB?
851
null
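Condensing the workaround the poster reported for 5.0.17 into one sequence: the zone ranges are declared on the internal bucket namespace (where the metaField appears as "meta") before the view is sharded. The collection name "sensors" is a placeholder, the sketch assumes the shards were already added to zones "rs1" and "rs2" as in the thread, and behaviour may differ on other server versions.

db.createCollection("sensors", {
  timeseries: { timeField: "timestamp", metaField: "metaData", granularity: "seconds" }
})

// Zone ranges go on the bucket namespace, using the "meta" field name...
db.adminCommand({ updateZoneKeyRange: "test.system.buckets.sensors",
  min: { "meta.location": 1 }, max: { "meta.location": 10 }, zone: "rs2" })
db.adminCommand({ updateZoneKeyRange: "test.system.buckets.sensors",
  min: { "meta.location": 11 }, max: { "meta.location": 20 }, zone: "rs1" })

// ...and only then is the time series view sharded on the metaField subfield.
db.adminCommand({ shardCollection: "test.sensors", key: { "metaData.location": 1 } })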
[ "aggregation", "data-modeling", "compass", "mongodb-shell" ]
[ { "code": "db.movies.aggregate([\n { \n $addFields: {\n fromFunction: {\n $function: {\n body: \"function(){return 'hello'}\",\n args: [], \n lang: 'js'\n }\n }\n }\n }\n ])\nuse test\"test\"db.movies.insertOne({a:1})\"movies\"db.movies.aggregate([\n {\n '$addFields': {\n fromFunction: { '$function': { body: function(){return 'hello'}, args: [], lang: 'js' } }\n }\n }\n])\n", "text": "This topic comes from Using $function to run a javascript function is not working in mongosh - #2 by Jason_TranI need to use a javascript function inside an aggregation, so I started from basic aggregation, like “Hello” functions (reference: How to Use Custom Aggregation Expressions in MongoDB 4.4 | MongoDB)\nIn previous topic I learnt that I need to use standalone shell, not compass shell, to run these kind of aggregations (If I use Compass shell, the code hangs and the prompt is not returned). I have 2 sintaxes for this simple code, but they don’t produce any output. Do I need to install anything else so these codes work?", "username": "Luis_Leon" }, { "code": "> db.version()\n6.0.6\n\n> db.test.find()\n[ { _id: 0 } ]\n\n> db.test.aggregate([\n... {\n... $addFields: {\n... fromFunction: {\n... $function: {\n... body: \"function(){return 'hello'}\",\n... args: [],\n... lang: 'js'\n... }\n... }\n... }\n... }\n... ])\n[ { _id: 0, fromFunction: 'hello' } ]\nfunction(){...}$function", "text": "Hi @Luis_LeonI tried the exact code you posted and got the expected output:I literally just copy pasted the first example you have. With regards to your second example, the function(){...} is unquoted. Perhaps this is the issue?Having said that, note that using $function will be a lot less performant compared to using built-in aggregation pipelines. It’s recommended for you to do this if your needs cannot be satisified otherwise.Best regards\nKevin", "username": "kevinadi" }, { "code": "db.movies.aggregate([\n { \n $addFields: {\n fromFunction: {\n $function: {\n body: \"function(){return 'hello'}\",\n args: [], \n lang: 'js'\n }\n }\n }\n }\n ])\n", "text": "Hi @kevinadi Thanks for your reply. Did you do anything else in your mongo installation in order to run a function? I tried again, I’m using the exact mongo version but I’m not getting an output like yours. Anyway, you mentioned that a built-in aggregation pipeline has better performance. The reason why I’m trying to use a function is parsing a json string, I haven’t find any other way to parse that kind of data field without using a function. Can you think an alternative way to do it?", "username": "Luis_Leon" }, { "code": "MongoServerError: $function not allowed in this atlas tier", "text": "Hi @Luis_LeonNo I didn’t do anything special. It’s just a plain vanilla MongoDB 6.0.6 single-node replica set deployment that I use locally for testing purposes. However, it shouldn’t matter if it’s a standalone node, a replica set, or even a sharded cluster.However if you’re using Atlas shared tier (M0/M2/M5) then server-side Javascript is not supported (see Atlas M0 (Free Cluster), M2, and M5 Limitations). Otherwise you’ll see an error like MongoServerError: $function not allowed in this atlas tierThe reason why I’m trying to use a function is parsing a json string, I haven’t find any other way to parse that kind of data field without using a functionCan you show us an example document, and how the output should look like?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Sure. Look at this link, it uses real data I’m working with. 
I don’t know why it works in this mongo playground but is not working in my environment (I didn’t create this link but some guy in stackoverflow):Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "Luis_Leon" }, { "code": "function(jsonString){return JSON.parse(jsonString)}workflowParams$function", "text": "Hi @Luis_LeonThanks for providing the example.I see that the javascript function executed is:function(jsonString){return JSON.parse(jsonString)}basically it just parses a JSON document that was stored as a string, and dump the parsed document in the output, creating a sub-document.Wouldn’t this be better if the JSON string is stored as an actual sub-document in MongoDB? After all, this is what MongoDB is good at. Depending on your case, maybe you can pre-process the raw workflowParams string during document insertion, so you don’t need to use $function in the query.Also, I copy pasted the example document and the query, and the output I get is identical to the MongoPlayground output.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks Kevin, that’s a good observation, I don’t know if the customer is willing to pre-process the raw string, but maybe is the only solution to this problem for now, as I checked and noticed that they have mongo 4.0.0, and apparently $function is not supported in that mongodb version (my local installation is Mongo 6.0.6).", "username": "Luis_Leon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Javascript function does not produce any output
2023-05-26T21:23:04.715Z
Javascript function does not produce any output
803
null
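One way to follow the pre-processing suggestion from the thread above without $function: a one-off mongosh migration that turns the stringified workflowParams field into a real subdocument. The collection name "workflows" and the target field name are placeholders, and the sketch assumes every stored string is valid JSON (otherwise JSON.parse throws).

db.workflows.find({ workflowParams: { $type: "string" } }).forEach(doc => {
  db.workflows.updateOne(
    { _id: doc._id },
    // Store the parsed value alongside (or instead of) the raw string so later
    // queries can use normal dot-notation matching with no server-side JavaScript.
    { $set: { workflowParamsParsed: JSON.parse(doc.workflowParams) } }
  )
})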
[]
[ { "code": "", "text": "Hi, currently I have to add IP one at a time. Would be great if it can add multiple IPs at once.", "username": "4f2e3ec58d5a891addcea73b4222362" }, { "code": "", "text": "As a work around have you considered scripting it with Atlas cli or the Atlas api ?", "username": "chris" }, { "code": "", "text": "Further alternatives might be:", "username": "michael_hoeller" }, { "code": "", "text": "Didn’t know about this, thanks. But would be easier if there’s a UI.", "username": "4f2e3ec58d5a891addcea73b4222362" }, { "code": "", "text": "Hi @4f2e3ec58d5a891addcea73b4222362,Didn’t know about this, thanks. But would be easier if there’s a UI.There’s currently a feedback post regarding multiple IP’s being added at once (comma separated) for which you can vote for.Regards,\nJason", "username": "Jason_Tran" } ]
Feature: Add multiple IPs in Network Access
2023-05-29T08:58:59.304Z
Feature: Add multiple IPs in Network Access
470
https://www.mongodb.com/…6_2_1024x873.png
[ "node-js", "mongoose-odm" ]
[ { "code": "", "text": "After a short while of successfully running, the application randomly crashes and terminal outputs the following error log:\nScreenshot 2023-05-28 at 13.45.311982×1690 371 KBMy .env file contains the following code: DATABASE_URL=mongodb://127.0.0.1:27017/T-shirts-2\nI’ve tried changing “127.0.0.1:27017” in many ways e.g to “localhost:27017” or just “localhost”, but those changes only made another errors pop-up in the place of the previous one.I suppose it is the problem with just the hostname, but somehow i can’t get it right. Maybe somebody knows how to solve that problem?\nI would really appreciate any help.", "username": "Grzegorz_Diaconescu" }, { "code": "mongod", "text": "@Grzegorz_Diaconescu the error is being forwarded from the Node.js Driver as the payload being returned from the server exceeds the max document size.The result you’ve reported (1.34GB???) in the error seems way off, so something’s clearly not right here.To start off, what versions of the MongoDB server are you running locally and what version of Mongoose is being used? Did you start the mongod with any custom options?", "username": "alexbevi" }, { "code": "", "text": "@alexbevi\nSo i’m running MongoDB shell version v3.6.4 and Mongoose v7.2.1.\nAs i can remember i did not change any of the mongod default options.", "username": "Grzegorz_Diaconescu" }, { "code": "", "text": "@Grzegorz_Diaconescu seeing as the version you’ve indicated (3.6.4) is considered End of Life I’d recommend upgrading the server version to at least MongoDB 4.4.The version of Mongoose shouldn’t be an issue, however due to the server version being several generations old it’s potentially contributing to the issue.If after upgrading MongoDB to 4.4+ these issues persist we can investigate further.", "username": "alexbevi" }, { "code": "", "text": "@alexbevi\nYeah I’ve tried updating MongoDB via brew and now the whole app is not crashing.Previously I’ve tried updating it via npm, and somehow it didn’t have the latest MongoDB version available.Yeah so pretty much the cause of the error is solved.Thank you very much for your help ", "username": "Grzegorz_Diaconescu" }, { "code": "", "text": "@Grzegorz_Diaconescu glad I could help ", "username": "alexbevi" } ]
I'm constantly facing the following error: err = new ServerSelectionError(); ^ MongooseServerSelectionError: Invalid message size: 1347703880, max allowed: 67108864
2023-05-28T11:53:45.150Z
I'm constantly facing the following error: err = new ServerSelectionError(); ^ MongooseServerSelectionError: Invalid message size: 1347703880, max allowed: 67108864
1,661
null
[ "connecting" ]
[ { "code": "", "text": "Hello, folks!I need to connect my service from another AWS region into my MongoDB Atlas Cluster.Using Private Link, how is this possible?Scenario:\n- Application in us-east-2.\n- Mongo DB Atlas cluster in us-east-1.If I just create a Private Link in us-east-2, will it work?\nOr do I need to maintain VPC compatibility between us-east-1 (MongoDB) and us-east-2 (Application)? What is the best approach?Note: In the future the application will be migrated to us-east-1.", "username": "Leonardo_Augusto_Gallo" }, { "code": "us-east-1us-east-1us-east-2us-east-2us-east-1us-east-1us-east-1us-east-2", "text": "Hi @Leonardo_Augusto_Gallo - Welcome to the community From the Set Up a Private Endpoint documentation:Scenario:\n- Application in us-east-2.\n- Mongo DB Atlas cluster in us-east-1.I believe in your particular example based off the information provided, you’ll need to set up a VPC in Region us-east-1 and have the private link and private endpoint associated with the Atlas connection in this VPC.You would then need to set up VPC peering between your 2 VPC’s (Your AWS VPC us-east-1 ← VPC peering → Your AWS VPC us-east-2). Essentially the connection / traffic from your Application in us-east-2 would go through the VPC peering to your VPC in region us-east-1 which would then go via the private endpoint / private link to the Atlas cluster in us-east-1.In saying the above, since you’ll move the application to us-east-1 eventually, you would then no longer need us-east-2 VPC or the inter-region VPC peering connection when this happens assuming nothing else changes.Hope this helps and let us know if you require any further help from the MongoDB Atlas side.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to connect to mongo db from different aws region?
2023-05-29T11:57:53.826Z
How to connect to mongo db from different aws region?
684
null
[ "aggregation", "queries", "graphql" ]
[ { "code": "result = await collection.aggregate([\n { $match: { post_id: input.post_id} },\n { $group: { _id: '$unicode', count: { $sum: 1 } } },\n { $project: { unicode: '$_id', count: 1, _id: 0 } }\n ]).toArray();\n", "text": "Noob here. I created a custom resolver on graphql and the function for that resolver filters and counts each type of one field. This is the code:This function works when I test it under the console with non-system users and also on graphiql. But when I fetch the data from the client side it is giving this error.Error aggregating reactions: FunctionError: intermediate aggregation document does not satisfy schema: reason=“could not validate document: \\n\\t(root): user_id is required\\n\\t(root): post_id is required\\n\\t(root): _id is required”; code=“SchemaValidationFailedRead”; untrusted=“read not permitted”; details=mapI don’t see the reason for requiring user_id and _id as I am not using to query or filter the collection. Appreciate any help", "username": "Bis" }, { "code": "", "text": "Did you ever find the solution to this? I’ve encountered the same thing after updating my schema to include require fields. I can retrieve single documents without issue but my custom resolver fails.", "username": "w3e" } ]
GraphQL custom resolver with aggregate isn't working on the client side
2023-04-01T01:57:54.097Z
GraphQL custom resolver with aggregate isn't working on the client side
1,084
null
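The thread above was never resolved, but since the error complains about required fields missing from the aggregation output, one unverified workaround is to carry those fields through the pipeline so the resolver's result still satisfies the collection schema (the alternative often suggested is configuring the resolver's function to run as System so document rules are not applied to the output). The field names follow the error message; treat the whole sketch as an assumption.

const result = await collection.aggregate([
  { $match: { post_id: input.post_id } },
  { $group: {
      _id: "$unicode",
      count: { $sum: 1 },
      // Carried through only so the required root fields exist in the output.
      post_id: { $first: "$post_id" },
      user_id: { $first: "$user_id" }
  } },
  { $project: { _id: 1, unicode: "$_id", count: 1, post_id: 1, user_id: 1 } }
]).toArray();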
[ "node-js", "mongoose-odm" ]
[ { "code": "", "text": "Hello,I have an web app that is connected to a database with approx 10000 documents (about 30mb) on a M2 instance. Each document have 35 fields of which 4 are array fields. 3 array fields for numbers, where the biggest array has 168 elements, and 1 array with mixed data types. In my world this constitute a small database with little data.The problem I have is that simple queries where I include the array fields take a lot of time to return. Note that I do not do any sorting or anything on those arrays. With out the array fields a typical query I make takes about 300ms to return. For each array field I include it adds about one second(!) to the query, and if I include all 4 the query takes about 4-5 seconds to return.What is also very odd, is that from time to time, there is no slow down when I add the array fields. Then it stays like that for some hours or a day, then it goes back backing super slow again. This affects both my production code and my development code at the same time. According to MongoCompass the query is lightning fast, as I imagine it should be. But in reality it is not.Any suggestion what could be wrong?I am using node, express, and mongoose.Regards/Anders", "username": "Anders_Lindstrom" }, { "code": "db.collection.explain(\"executionStats\").find(...)db.collection.getIndexes()", "text": "Hi @Anders_Lindstrom - Welcome to the community.Could you provide the following information regarding the query performance you’re experiencing?:According to MongoCompass the query is lightning fast, as I imagine it should be. But in reality it is not.Please remove any personal or sensitive information before posting here.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hello Jason,Thank you for your reply. Right now when I was about to do the tests you suggested, the speed up is there. But I’ll give you the results anyway, so you do not think I am ignoring your reply. So.I will start with replying to point 3: Yes I have experienced it from the beginning. I started my Atlas usage with a M0 instance to get things setup. Then I upgraded to a M2 instance. The performance issues have been there all the time on both instances.4: Lightning fast means double digit ms execution times, as you can see its about 20msFor or point 5 I have attached a dropbox link to a json file with 10 documents with no personal data (new users can not upload files here…).5: Dropbox - sampledocuments.json - Simplify your lifeFor point 1 & 2 I just copy the output.1 & 2:\nexecutionSuccess: true,\nnReturned: 100,\nexecutionTimeMillis: 22,\ntotalKeysExamined: 0,\ntotalDocsExamined: 11524,\nexecutionStages: {\nstage: ‘PROJECTION_DEFAULT’,\nnReturned: 100,\nexecutionTimeMillisEstimate: 11,\nworks: 11927,\nadvanced: 100,\nneedTime: 11826,\nneedYield: 0,\nsaveState: 11,\nrestoreState: 11,\nisEOF: 1,\ntransformBy: { anim: 0 },\ninputStage: {\nstage: ‘SKIP’,\nnReturned: 100,\nexecutionTimeMillisEstimate: 10,\nworks: 11927,\nadvanced: 100,\nneedTime: 11826,\nneedYield: 0,\nsaveState: 11,\nrestoreState: 11,\nisEOF: 1,\nskipAmount: 0,\ninputStage: [Object]\n}\n},\nallPlansExecution: \n}\n[\n{ v: 2, key: { _id: 1 }, name: ‘id’ },\n{ v: 2, key: { id: 1 }, name: ‘id_1_autocreated’ }\n]Let me know if there is any more info you need right now? 
I will get back when the speed goes down againRegards/Anders", "username": "Anders_Lindstrom" }, { "code": "executionTime22msdb.collection.find({},{<projection>}).skip(...)mongosh", "text": "The problem I have is that simple queries where I include the array fields take a lot of time to return. Note that I do not do any sorting or anything on those arrays. With out the array fields a typical query I make takes about 300ms to return. For each array field I include it adds about one second(!) to the query, and if I include all 4 the query takes about 4-5 seconds to return.Thanks for providing all those details Anders! I can’t see anything out of the blue in terms of the execution time but I believe another piece of information I would require is - how are you measuring the 300ms and 4-5 seconds to return (assuming the executionTime is 22ms).1 & 2:\nnReturned: 100,\nexecutionTimeMillis: 22,\ntotalKeysExamined: 0,\ntotalDocsExamined: 11524,\nexecutionStages: {\nstage: ‘PROJECTION_DEFAULT’,\nnReturned: 100,Based off the execution stats output you provided - I wasn’t able to locate any particular filter. Can you share the full query used? I imagine it may look something like db.collection.find({},{<projection>}).skip(...) but will wait for your confirmation.Lastly, can you try running the same queries using mongosh to see if the performance you’re experiencing is the same? This may give us an indication on whether or not it has something to do with the app code.Might be also worth checking with the in-app chat support team if you have exceeded any data transfer limitations (it is the sum based off all nodes).Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hello Jason,Thank you for your quick reply.\nWell, right now the database is back to it slow performance again. I measure the speed in my code, I just time the completion of the of the .find() operation. But it is also directly visible in the app. With server side page rendering loading in data for 100 documents to the browser takes about 700-1000ms when the database is fast and 4500-5500ms when it is slow. I suppose the 22ms is just the internal computation time for the operation but then there is some overhead for the actual transfer. But 4-5 seconds is way to much for the small amount of data that is transferred.However, I have tried with mongosh and I get the same results, i.e., it is really slow. I guess that this means that its not my app code that is the problem.here is a fully query that takes about 4-5 seconds to get the data from. note the two empty {}, they are additional filters but I left them empty for the test query. andding addtional filters do not change the time. But what change the time is if I remove the array fields. in this query the “anim” field is excluded by “default”. If I exclude all the array fields then the round takes about one second less per array field!db.collection.find({$and:[{mcaprank: {$ne : null} }, {}, {}]}, { anim: 0 }).limit(100).sort({mcaprank:-1}).skip(0)Regards/Anders", "username": "Anders_Lindstrom" }, { "code": "mongosh", "text": "Interesting - I’ll try reproduce this behaviour if I can on an M0 using mongosh.Well, right now the database is back to it slow performance again. I measure the speed in my code, I just time the completion of the of the .find() operation. But it is also directly visible in the app. 
With server side page rendering loading in data for 100 documents to the browser takes about 700-1000ms when the database is fast and 4500-5500ms when it is slow.The performance issues have been there all the time on both instances.I do note that you’ve noted it happened “all the time” on both instances however you’ve written “is back to slow performance again”. Does this mean it was faster at some points? Just want to clarify - If there are some scenarios where it is performing faster you should check with the atlas in-app chat support team if you’ve exceeded any limitations which result in throttling.Regards,\nJason", "username": "Jason_Tran" }, { "code": "mongoshmongoshperfdb> db.test.countDocuments()\n10040\nperdb> db.test.find({$and:[{mcaprank: {$ne : null} }, {}, {}]}, { anim: 0 }).limit(100).sort({mcaprank:-1}).skip(0).explain(\"executionStats\")\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'perfdb.test',\n indexFilterSet: false,\n parsedQuery: { mcaprank: { '$not': { '$eq': null } } },\n queryHash: '2921F291',\n planCacheKey: 'D913DAF6',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'PROJECTION_DEFAULT',\n transformBy: { anim: 0 },\n inputStage: {\n stage: 'SORT',\n sortPattern: { mcaprank: -1 },\n memLimit: 33554432,\n limitAmount: 100,\n type: 'simple',\n inputStage: {\n stage: 'COLLSCAN',\n filter: { mcaprank: { '$not': { '$eq': null } } },\n direction: 'forward'\n }\n }\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 100,\n executionTimeMillis: 18,\n totalKeysExamined: 0,\n totalDocsExamined: 10040,\n executionStages:\n...\n", "text": "However, I have tried with mongosh and I get the same results, i.e., it is really slow. I guess that this means that its not my app code that is the problem.Can you explain the behaviour you experienced with mongosh? I tried to see if I could notice any “slow” performance in mongosh but the command itself I ran against 10040 sample documents based off the ones you provided returned the cursor and first 20 results of the cursor almost instantaneously. I also ran the execution stats and got a similar time ~18ms:", "username": "Jason_Tran" }, { "code": "", "text": "Hello Jason,Sorry for my late reply. What I mean with the slow behavior in mongosh is that when I used the find() it took 4-5 seconds before the respond start to print out ( I have a fast internet connection). So even if the internal search is fast, the round trip is SLOW. IF I remove the array fields from what I want to get return it decrease the time it takes about a second per field I remove.Regards/Anders", "username": "Anders_Lindstrom" } ]
Query speed for documents containing arrays on M2 instance very sluggish
2023-05-10T14:36:57.896Z
Query speed for documents containing arrays on M2 instance very sluggish
821
null
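Since the explain output above shows a collection scan plus an in-memory sort, and the slowdown grows with each array field returned, two things worth trying are an index that supports the sort and a projection that trims the arrays on the wire. "items", "name" and "history" below are placeholder names; "mcaprank" is the field from the thread.

// Let the sort run off an index instead of scanning and sorting ~11k documents.
db.items.createIndex({ mcaprank: -1 })

// Return only what the page renders, and cap large arrays with $slice so each
// document stays small over the network (relevant on shared tiers like M2).
db.items.find(
  { mcaprank: { $ne: null } },
  { name: 1, mcaprank: 1, history: { $slice: 10 } }
).sort({ mcaprank: -1 }).limit(100)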
[ "atlas-cluster" ]
[ { "code": "", "text": "I created a mean stack app connected to atlas cluster and deployed it on render.com.\nBut 30 mins after I stop working on my mean stack app, the home page takes at least a minute to load.\nOnce it does load after a minute’s delay, it doesn’t cause a delay anymore, but only as long as I keep working on my app. 30 minutes of inactivity later, the homepage load gets delayed again.My question is, why is my cluster going into “sleep” mode just because I haven’t used the app in the last 20-30 mins? Is there a setting that always keeps the atlas cluster up and running, without causing home page load delays? Am I missing some configuration?", "username": "Deepak_KM" }, { "code": "", "text": "Hey @Deepak_KM,Thank you for reaching out to the MongoDB Community forums!But 30 mins after I stop working on my mean stack app, the home page takes at least a minute to load.I suspect that the behavior you’re experiencing might be due to a feature or setting of the hosting platform you are using.why is my cluster going into “sleep” mode just because I haven’t used the app in the last 20-30 mins?Can you please elaborate on what you meant by “sleep mode” and how are you determining that your cluster is going into “sleep” mode? Are there any error messages or specific indicators you’ve observed?Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "“https://meanstack-foodmine-app.onrender.com/”This is the link to my mean stack app, deployed on render.com.\nLet’s say today, If you access this link for the first time, it will take at least a minute just for the home page to load.\nAfter that, for all subsequent accesses, it loads instantly. (FYI, my home page contains data & images that are retrieved from mongodb atlas.).\nThen when I’m done using the app and I stop triggering the app (and therefore the mongodb atlas) for like 30 mins or so, I again face the 1 minute delay when I access my app link.Based on this, I am assuming that this initial delay is happening because my app is unable to acquire that db connection initially, probably because the db went inactive since the last time it was triggered?I do not see any error messages in console.I would like to know if there is a setting that keeps the mongodb atlas cluster running all the time. Or is there any resume/pause button somewhere?I am new to mongodb atlas.Thanks.", "username": "Deepak_KM" } ]
My MEAN stack app connected to MongoDB Atlas stops working after 30 mins of inactivity
2023-05-28T18:31:53.141Z
My MEAN stack app connected to MongoDB Atlas stops working after 30 mins of inactivity
556
null
[ "aggregation", "queries", "indexes" ]
[ { "code": "db.orders.aggregate([\n {\n \"$lookup\": {\n \"from\": \"inventory\",\n \"as\": \"result\",\n \"let\": {\"_id\": \"$_id\"},,\n \"pipeline\": [ \n { $match: \n { $expr: \n { \n $and: [ { $eq: [\"$_id\", ObjectId()] }, { $eq: [ \"$friend_id\", \"$$_id\" ] } ] \n } \n } \n }\n ]\n }\n }\n])\n$lookup", "text": "In this $lookup, there are two $match conditions. The first is static, while the second is dynamic.Would mongodb’s query planner be able to optimize and run the static match condition once to find the superset of matching documents, cached it, and then from this superset run the second match condition for each document? And, bonus, is it able to use the index for the second conditon?And if mongodb currently isn’t smart enough to do the above, how can I make request for this feature to be added?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "Atlas atlas-b8d6l3-shard-0 [primary] test> db.id1.find()\n[\n { _id: ObjectId(\"6474851a9cf9ae4249964107\") },\n { _id: ObjectId(\"647487b49cf9ae4249964109\") },\n { _id: ObjectId(\"647487b99cf9ae424996410a\") }\n]\nAtlas atlas-b8d6l3-shard-0 [primary] test> db.id2.find()\n[\n {\n _id: ObjectId(\"6474879a9cf9ae4249964108\"),\n friend_id: ObjectId(\"6474851a9cf9ae4249964107\")\n }\n]\nAtlas atlas-b8d6l3-shard-0 [primary] test> db.id1.aggregate([\n... {\n... '$match': {\n... '_id': ObjectId('6474851a9cf9ae4249964107')\n... }\n... }, {\n... '$lookup': {\n... 'from': 'id2',\n... 'localField': '_id',\n... 'foreignField': 'friend_id',\n... 'as': 'result'\n... }\n... }\n... ])\n[\n {\n _id: ObjectId(\"6474851a9cf9ae4249964107\"),\n result: [\n {\n _id: ObjectId(\"6474879a9cf9ae4249964108\"),\n friend_id: ObjectId(\"6474851a9cf9ae4249964107\")\n }\n ]\n }\n]\n", "text": "Hi @Big_Cat_Public_Safety_Act and welcome to MongoDB community forums!!If I understand your question correctly, you are trying to match the _id with a specific ObjectID and then trying to perform a lookup with the friend_id as the foreign field.Based on my understanding, I created two collections as follows:and I tried to execute the query as mentioned which did not returned any result.\nI tried to rewrite the query as shown below:and it returned the result as:If this is not something you are looking for, could you help me with a sample document from the collections along with the desired output for the aggregation performed. Also, specify the MongoDB version you are using.Regards\nAasawari", "username": "Aasawari" } ]
Partial static and dynamic $lookup
2023-05-24T06:17:32.197Z
Partial static and dynamic $lookup
732
https://www.mongodb.com/…1b6ed07479fd.png
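One way to express the "static plus dynamic" lookup above so the foreign side can use an index, assuming MongoDB 5.0 or newer: combine localField/foreignField (the dynamic equality join) with a pipeline that holds only the static predicate as a plain $match. The ObjectId literal below is a placeholder.

db.orders.aggregate([
  { $lookup: {
      from: "inventory",
      localField: "_id",            // dynamic part: equality join, index-eligible
      foreignField: "friend_id",
      pipeline: [
        // static part: an ordinary $match (no $expr), evaluated inside the lookup
        { $match: { _id: ObjectId("645a1b2c3d4e5f6a7b8c9d0e") } }
      ],
      as: "result"
  } }
])

// An index on the foreign join field supports the equality match above.
db.inventory.createIndex({ friend_id: 1 })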
[ "compass" ]
[ { "code": "Documents\nDBName...\n", "text": "Hi,\nWhen you open a DB with Compass, all of the tabs are showingand you should mouse over them or click them to find out which they belong to which collection\nInstead of always showing “Documents” and “DBName”, you may show which collection they are\n\nimage691×79 2.5 KB\n", "username": "Mehran_Ishanian1" }, { "code": "", "text": "Hello @Mehran_Ishanian1,Welcome to the MongoDB Community forums! Thank you for bringing this to our attention. We appreciate your feedback.I’ll let the concerned team know about this request. Additionally, I recommend upvoting a similar feature request on the MongoDB Feedback Engine to express your interest in this feature.If you have any further questions or need help with anything else, please feel free to reach out.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Feature Request - Compass - Tabs
2023-05-25T14:08:25.926Z
Feature Request - Compass - Tabs
571
null
[ "data-modeling" ]
[ { "code": "", "text": "I had data stored in MongoDB on my previous machine, and I copied the MongoDB folder to my current machine. Now, I want to insert the data from the copied folder into MongoDB.What is the process for accomplishing this task?", "username": "Omar_Abu_Sanad" }, { "code": "", "text": "", "username": "Kobe_W" }, { "code": "storage.dbPath/var/lib/mongo/var/lib/mongodbchown -R mongodmongodbstorage.dbPath", "text": "Did you want to import this into an existing MongoDB or just start a MongoDB with these copied files?If you were intending to start a MongoDB with these files you can use the storage.dbPath as @Kobe_w linked to.I would move the files to the default MongoDb path for the installation (RedHat: /var/lib/mongo, Ubuntu: /var/lib/mongodb) and chown -R to the appropriate user (RedHat: mongod, Ubuntu: mongodb).If you want to import the databases in these files to an existing MongoDB you would need to start a separate instance on a different port, specifying the storage.dbPath and use the MongoDB Database Tools to export/import or dump/restore to the existing instance.", "username": "chris" } ]
How can I import the data from a copied MongoDB folder into MongoDB on my current machine?
2023-05-27T19:57:23.011Z
How can I import the data from a copied MongoDB folder into MongoDB on my current machine?
517
https://www.mongodb.com/…592a19df4b9f.png
[ "node-js", "cxx", "kotlin", "flutter" ]
[ { "code": "", "text": "Realm Kotlin 1.9.0 was released on Maven Central and Gradle Plugin Portal.This release includes support for Kotlin Serialization, bundled Realm files and simple full-text search, and more. A detailed explanation is provided by our lead engineer Christian Melchior on the Realm product team blog.Realm JS team has been working on redesigning the Realm JavaScript version 12. The new version is implemented in TypeScript, involves writing less C++ boilerplate code, and offers more optimal code for supported JavaScript engines. Try it out and give your feedback here.Our newest community member @gymbuddy_ai (Evan Burbidge) from Dublin, Ireland shared his experience of creating Gymbuddy.ai app using Atlas Device Sync and React Realm SDK in a user-group event.\nFind out more on MongoDB user-groups closer to your location.Bravo to our new community member @Mateusz_Piwowarski for trying to resolve their case of unsynced documents. Some steps to take:Community member @Paramjit_Singh asked for “Best practices for using Realm environments” which was answered and turned into a blog post about “How to Build CI/CD Pipelines for MongoDB Realm Apps Using GitHub Actions”The Realm Flutter team is participating in DroidCon/FlutterCon in Berlin 5th-7th July. Lead engineer Lubo Blagoev will give a talk on Writing a Flutter and Dart FFI Plugin.Henna Singh\nCommunity Manager, Mobile\nMongoDB Community Team\nRealm Community ForumPS: Know someone who might be interested in this newsletter? Share it with them. Subscribe here to receive a copy in your inbox", "username": "henna.s" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Realm JS & Kotlin SDKs | Community Update May 2023
2023-05-29T11:00:31.477Z
Realm JS & Kotlin SDKs | Community Update May 2023
694
null
[ "queries", "cxx" ]
[ { "code": "#include <cstdint>\n#include <iostream>\n#include <vector>\n#include \"assert.h\"\n\n#include <bsoncxx/builder/basic/document.hpp>\n#include <bsoncxx/json.hpp>\n#include <mongocxx/client.hpp>\n#include <mongocxx/instance.hpp>\n#include <mongocxx/stdx.hpp>\n#include <mongocxx/uri.hpp>\n\nusing bsoncxx::builder::basic::kvp;\nusing bsoncxx::builder::basic::make_array;\nusing bsoncxx::builder::basic::make_document;\nint main()\n{\n mongocxx::instance instance{}; // This should be done only once.\n mongocxx::client client{ mongocxx::uri{} };\n auto db = client[\"mydb\"];\n auto collection = db[\"test\"];\n\n auto doc_value = make_document(\n kvp(\"name\", \"MongoDB\"),\n kvp(\"type\", \"database\"),\n kvp(\"count\", 1),\n kvp(\"versions\", make_array(\"v6.0\", \"v5.0\", \"v4.4\", \"v4.2\", \"v4.0\", \"v3.6\")),\n kvp(\"info\", make_document(kvp(\"x\", 203), kvp(\"y\", 102))));\n auto doc_view = doc_value.view();\n auto element = doc_view[\"name\"];\n auto name = element.get_string().value; // For C++ driver version < 3.7.0, use get_utf8()\n assert(element.type() == bsoncxx::type::k_string);\n assert(0 == name.compare(\"MongoDB\"));\n\n auto insert_one_result = collection.insert_one(make_document(kvp(\"i\", 0)));\n assert(insert_one_result); // Acknowledged writes return results.\n\n auto doc_id = insert_one_result->inserted_id();\n assert(doc_id.type() == bsoncxx::type::k_oid);\n\n std::vector<bsoncxx::document::value> documents;\n documents.push_back(make_document(kvp(\"i\", 1)));\n documents.push_back(make_document(kvp(\"i\", 2)));\n\n auto insert_many_result = collection.insert_many(documents);\n assert(insert_many_result); // Acknowledged writes return results.\n\n auto doc0_id = insert_many_result->inserted_ids().at(0);\n auto doc1_id = insert_many_result->inserted_ids().at(1);\n assert(doc0_id.type() == bsoncxx::type::k_oid);\n assert(doc1_id.type() == bsoncxx::type::k_oid);\n\n auto cursor_all = collection.find({});\n for (auto doc : cursor_all) {\n // Do something with doc\n assert(doc[\"_id\"].type() == bsoncxx::type::k_oid);\n }\n\n cursor_all = collection.find({});\n std::cout << \"collection \" << collection.name()\n << \" contains these documents:\" << std::endl;\n for (auto doc : cursor_all) {\n std::cout << bsoncxx::to_json(doc, bsoncxx::ExtendedJsonMode::k_relaxed) << std::endl;\n }\n std::cout << std::endl;\n\n auto index_specification = make_document(kvp(\"i\", 1));\n collection.create_index(std::move(index_specification));\n\n}\n\nSeverity\tCode\tDescription\tProject\tFile\tLine\tSuppression State\nError\tLNK1120\t63 unresolved externals\tTestAppTutorial\tC:\\Users\\jeremy.beaty\\Documents\\Visual Studio 2019\\TestAppTutorial\\Debug\\TestAppTutorial.exe\t1\t\nError\tLNK2019\tunresolved external symbol \"__declspec(dllimport) bool __cdecl mongocxx::v_noabi::operator!=(class mongocxx::v_noabi::cursor::iterator const &,class mongocxx::v_noabi::cursor::iterator const &)\" (__imp_??9v_noabi@mongocxx@@YA_NABViterator@cursor@01@0@Z) referenced in function _main\tTestAppTutorial\tC:\\Users\\jeremy.beaty\\Documents\\Visual Studio 2019\\TestAppTutorial\\TestAppTutorial.obj\t1\t\nError\tLNK2019\tunresolved external symbol \"__declspec(dllimport) class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __cdecl bsoncxx::v_noabi::to_json(class bsoncxx::v_noabi::document::view,enum bsoncxx::v_noabi::ExtendedJsonMode)\" 
(__imp_?to_json@v_noabi@bsoncxx@@YA?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@Vview@document@12@W4ExtendedJsonMode@12@@Z) referenced in function _main\tTestAppTutorial\tC:\\Users\\jeremy.beaty\\Documents\\Visual Studio 2019\\TestAppTutorial\\TestAppTutorial.obj\t1\t\nError\tLNK2019\tunresolved external symbol \"__declspec(dllimport) private: class boost::optional<class mongocxx::v_noabi::result::insert_many> __thiscall mongocxx::v_noabi::collection::_exec_insert_many(class mongocxx::v_noabi::bulk_write &,class bsoncxx::v_noabi::builder::basic::array &)\" (__imp_?_exec_insert_many@collection@v_noabi@mongocxx@@AAE?AV?$optional@Vinsert_many@result@v_noabi@mongocxx@@@boost@@AAVbulk_write@23@AAVarray@basic@builder@2bsoncxx@@@Z) referenced in function \"private: class boost::optional<class mongocxx::v_noabi::result::insert_many> __thiscall mongocxx::v_noabi::collection::_insert_many<class std::_Vector_const_iterator<class std::_Vector_val<struct std::_Simple_types<class bsoncxx::v_noabi::document::value> > > >(class mongocxx::v_noabi::client_session const *,class std::_Vector_const_iterator<class std::_Vector_val<struct std::_Simple_types<class bsoncxx::v_noabi::document::value> > >,class std::_Vector_const_iterator<class std::_Vector_val<struct std::_Simple_types<class bsoncxx::v_noabi::document::value> > >,class mongocxx::v_noabi::options::insert const &)\" (??$_insert_many@V?$_Vector_const_iterator@V?$_Vector_val@U?$_Simple_types@Vvalue@document@v_noabi@bsoncxx@@@std@@@std@@@std@@@collection@v_noabi@mongocxx@@AAE?AV?$optional@Vinsert_many@result@v_noabi@mongocxx@@@boost@@PBVclient_session@12@V?$_Vector_const_iterator@V?$_Vector_val@U?$_Simple_types@Vvalue@document@v_noabi@bsoncxx@@@std@@@std@@@std@@1ABVinsert@options@12@@Z)\tTestAppTutorial\tC:\\Users\\jeremy.beaty\\Documents\\Visual Studio 2019\\TestAppTutorial\\TestAppTutorial.obj\t1\t\n\n", "text": "I built the mongo-c-driver 1.23.3 using VS2013. Then I built the mongo-cxx-driver 3.7.1 using the following cmake commands, all on a Windows 11 system:cmake … -G “Visual Studio 16 2019” -A x64 -DBOOST_ROOT=C:\\local\\boost_1_82_0 -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driver\ncmake --build .\ncmake --build . --target installNow when I try to build the code from the tutorial I get link errors. I haven’t found anything to tell me what I may have done wrong or how to resolve this problem. I added the include folders C:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi and C:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi. I added the link folder C:\\mongo-cxx-driver\\lib as well.Any help would be appreciated.Code:", "username": "Jeremy_Beaty" }, { "code": "", "text": "Are you linking to all the required libs?\nSee this article for reference - Getting Started with MongoDB and C++ | MongoDB", "username": "Rishabh_Bisht" } ]
Mongocxx link errors with VS2019
2023-05-26T17:09:37.749Z
Mongocxx link errors with VS2019
871
null
[ "aggregation", "python", "database-tools" ]
[ { "code": "", "text": "Hello Experts,I have the following database and i am trying to export records from it at 15 mins interval.db.nodeLatestStatus.aggregate( [\n… { “$match” : { “nodeName” : “nodexxx”, “type” : “eNodeB” } },\n… {“$sort” : {“time” : 1}},\n… { “$project” : {“_id” : 0, “nodeName” : 1,“time” : 1,“type”: 1 } }\n… ] )\n{ “nodeName” : “nodexxx”, “type” : “eNodeB”, “time” : ISODate(“2023-05-22T11:51:28.686Z”) }\n{ “nodeName” : “nodexxx”, “type” : “eNodeB”, “time” : ISODate(“2023-05-22T11:51:28.686Z”) }\n{ “nodeName” : “nodexxx”, “type” : “eNodeB”, “time” : ISODate(“2023-05-22T11:51:28.686Z”) }\n{ “nodeName” : “nodexxx”, “type” : “eNodeB”, “time” : ISODate(“2023-05-22T11:51:28.686Z”) }\n{ “nodeName” : “nodexxx”, “type” : “eNodeB”, “time” : ISODate(“2023-05-22T11:51:28.686Z”) }\n{ “nodeName” : “nodexxx”, “type” : “eNodeB”, “time” : ISODate(“2023-05-22T11:51:28.686Z”) }\n{ “nodeName” : “nodexxx”, “type” : “eNodeB”, “time” : ISODate(“2023-05-22T11:51:28.686Z”) }\n{ “nodeName” : “nodexxx”, “type” : “eNodeB”, “time” : ISODate(“2023-05-22T11:51:28.686Z”) }\nType “it” for moreMy python code is constructing the below command , however i get 0 records from it.mongoexport --host=xx.xx.xx.xx --port=27017 --db=test_nodedata --collection=nodeLatestStatus --query=‘{ “time” : { “$gte” : { “$date” : “2023-05-22T11:45:00Z” },“$lte” : { “$date” : “2023-05-22T11:59:59Z” } } }’node:database> mongoexport --host=xx.xx.xx.xx --port=27017 --db=test_nodedata --collection=nodeLatestStatus --query=‘{ “time” : { “$gte” : { “$date” : “2023-05-22T11:45:00Z” },“$lte” : { “$date” : “2023-05-22T11:59:59Z” } } }’\n2023-05-22T16:59:43.654+0200 connected to: mongodb://xx.xx.xx.xx:27017/\n2023-05-22T16:59:43.710+0200 exported 0 recordsI also tried with the below format but no records.mongoexport --host=xx.xx.xx.xx --port=27017 --db=test_nodedata --collection=nodeLatestStatus --query=‘{ “time” : { “$gte” : { “$date” : “2023-05-22T11:45:00.000Z” },“$lte” : { “$date” : “2023-05-22T11:59:59.999Z” } } }’node:database> mongoexport --host=xx.xx.xx.xx --port=27017 --db=test_nodedata --collection=nodeLatestStatus --query=‘{ “time” : { “$gte” : { “$date” : “2023-05-22T11:45:00.000Z” },“$lte” : { “$date” : “2023-05-22T11:59:59.999Z” } } }’\n2023-05-22T16:59:43.654+0200 connected to: mongodb://xx.xx.xx.xx:27017/\n2023-05-22T16:59:43.710+0200 exported 0 recordsThanks!!", "username": "Shashikant_Saxena" }, { "code": "mongoexportdb.nodeLatestStatus.find({ \"time\" : { \"$gte\" : { \"$date\" : \"2023-05-22T11:45:00Z\" },\"$lte\" : { \"$date\" : \"023-05-22T11:59:59Z\" } } })", "text": "Hi @Shashikant_Saxena welcome back to the community!It’s been some time since you posted this. Have you managed to do this?If not, my question is, what happens when you try to execute the query you’re using in mongoexport, i.e.db.nodeLatestStatus.find({ \"time\" : { \"$gte\" : { \"$date\" : \"2023-05-22T11:45:00Z\" },\"$lte\" : { \"$date\" : \"023-05-22T11:59:59Z\" } } })do you see any documents? I’m suspecting that the query you used actually didn’t match any documents.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadi !! many thanks for reply.You are absolutely right. find() doesn’t return any result as you expected and it is due to the fact that the data stored is in format YYYY-MM-DDTHH:MM:SS.sssZ format.\nIn the same DB there are collections where data is stored in format YYYY-MM-DDTHH:MM:SSZ and for those monogexport and find () works well.Regards,\nShashi", "username": "Shashikant_Saxena" } ]
Mongoexport with query time in milliseconds
2023-05-22T15:08:41.024Z
Mongoexport with query time in milliseconds
764
https://www.mongodb.com/…feb97002a178.png
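A quick way to check the explanation reached at the end of the thread, namely that the range query matches nothing because of how the values are stored, is to look at the BSON type of the field and run the same filter in mongosh before handing it to mongoexport. The collection and field names are the ones from the thread.

// A date-range filter only matches values stored as BSON dates, not strings.
db.nodeLatestStatus.aggregate([
  { $group: { _id: { $type: "$time" }, n: { $sum: 1 } } }
])

// If the field really is a date, this should return documents; the same filter
// in extended JSON form can then be reused as the mongoexport --query value.
db.nodeLatestStatus.find({
  time: { $gte: ISODate("2023-05-22T11:45:00Z"), $lte: ISODate("2023-05-22T11:59:59.999Z") }
})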
[ "atlas-triggers" ]
[ { "code": "awaitconst doc = collection.findOne({ name: \"mongodb\" });await{}asyncawaitfind", "text": "Hello,I am trying to build my very first Scheduled Atlas trigger and I am experiencing some difficulties:When starting a new trigger, there will be some commented example code provided by MongoDB inside the text area where you are supposed to write your trigger code. What immediately surprised me was that that example code did not contain await. For instance, this is what MongoDB uses as an example:const doc = collection.findOne({ name: \"mongodb\" });Why are these official query examples not including await?When trying to write something similar I was receiving an empty object {} as a response.I have then added async to the function as well as added await to the find query and it worked, I can see data now when console.log’ing it. However, the trigger function is now giving me the following warning:Is there a way to correct this within the trigger?Thank you!", "username": "Vladimir" }, { "code": "const doc = collection.findOne({ name: \"mongodb\" });await", "text": "Hello @Vladimir,Welcome to the MongoDB Community forums const doc = collection.findOne({ name: \"mongodb\" });\nWhy are these official query examples not including await?Thank you for highlighting this. We have taken note of it and will let the concerned team know. However, please note that the example snippet is intended for reference only and can be modified to suit the user’s specific use case.However, the trigger function is now giving me the following warning:I am unable to find anything similar to what you are referring to. Would you please be able to share the complete code snippet with us?Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi @Kushagra_Kesav ,\nI also had the same problem as above. I have a function that needs it to run on a schedule and I found a scheduled trigger\nhere is the code i want to convert it on Imgur: The magic of the Internet and you will see “async” or “await”. It helps me to process this function on this line when it’s done, the system will let me run the next function on the next line to explain why I need to do it. I would appreciate you very much if you have some ideas or solutions for this. I look forward to hearing your thoughts. Thank you so much in advance", "username": "Phan_Thanh_Lam_N_A" }, { "code": "'async functions' is only available in es8 (use 'esversion: 8')async/await", "text": "Hey @Phan_Thanh_Lam_N_A,Apologies for the late response.I tried to create a trigger using the code you provided, and it worked fine for me. I didn’t encounter any errors related to 'async functions' is only available in es8 (use 'esversion: 8'). If you’re experiencing an error message, could you please share the exact message or provide more details about the specific issue you’re encountering while creating the trigger?\nImage1688×726 120 KB\nAdditionally, if you’re facing challenges with using async/await, you can consider using promises as a workaround.It helps me to process this function on this line when it’s done, the system will let me run the next function on the next line to explain why I need to do it.Could you please elaborate or give an example of the scenario that you have in mind?Best,\nKushagra", "username": "Kushagra_Kesav" } ]
Unable to use "async" or "await" in a Scheduled Trigger
2023-04-22T15:16:53.812Z
Unable to use "async" or "await" in a Scheduled Trigger
1,088
https://www.mongodb.com/…33cb1954b19b.png
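A minimal scheduled-trigger sketch using async/await, matching what the support reply above says should work, with a promise-chain fallback in case an editor still flags the syntax. The service name "mongodb-atlas" is a common default for a linked cluster but may differ per app, and the database, collection and field names are placeholders.

exports = async function () {
  const coll = context.services.get("mongodb-atlas").db("mydb").collection("jobs");
  // await pauses this function until the query finishes, then the next line runs.
  const pending = await coll.find({ status: "pending" }).toArray();
  console.log(`found ${pending.length} pending jobs`);
  return pending.length;
};

// Equivalent without async/await, chaining the promise instead:
// exports = function () {
//   const coll = context.services.get("mongodb-atlas").db("mydb").collection("jobs");
//   return coll.find({ status: "pending" }).toArray().then(pending => pending.length);
// };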
[ "compass", "atlas-cluster", "app-services-user-auth", "app-services-cli" ]
[ { "code": " 2023-05-22T10:09:09.959-0300 Failed: can't create session: could not connect to server: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: ac-gpjd2aw-shard-00-01.wabrz9e.mongodb.net:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: x509: “*.wabrz9e.mongodb.net” certificate is expired }, { Addr: ac-gpjd2aw-shard-00-02.wabrz9e.mongodb.net:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: x509: “*.wabrz9e.mongodb.net” certificate is expired }, { Addr: ac-gpjd2aw-shard-00-00.wabrz9e.mongodb.net:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: x509: “*.wabrz9e.mongodb.net” certificate is expired }, ] }", "text": "Hello, overnight one of our data services stopped working. We cannot connect to it at all, not even using realm-cli.These error happens when try to connect via MongoDB Compass\nThese error happens when try to push something to the App Service via realm-cli\n 2023-05-22T10:09:09.959-0300 Failed: can't create session: could not connect to server: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: ac-gpjd2aw-shard-00-01.wabrz9e.mongodb.net:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: x509: “*.wabrz9e.mongodb.net” certificate is expired }, { Addr: ac-gpjd2aw-shard-00-02.wabrz9e.mongodb.net:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: x509: “*.wabrz9e.mongodb.net” certificate is expired }, { Addr: ac-gpjd2aw-shard-00-00.wabrz9e.mongodb.net:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: x509: “*.wabrz9e.mongodb.net” certificate is expired }, ] }A strange thing that is happening is that the Data Service has been in update status for almost 1 week. And this Data Service don’t have auto-scale configuration.We try to search for this error, and try to understand why this happened, but nothing work. We created another Data Service and switch the linked data source for this environment(App Service) temporarily.Any idea what it could be?", "username": "Maycon_Santos" }, { "code": "", "text": "Hey @Maycon_Santos,A strange thing that is happening is that the Data Service has been in update status for almost 1 week. And this Data Service doesn’t have auto-scale configuration.Could I ask that you contact the in-app chat support as soon as possible regarding this issue? The in-app chat support does not require any payment to use and can be found at the bottom right corner of the Atlas UI:Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't connect to Data Service anymore
2023-05-25T11:34:03.465Z
Can't connect to Data Service anymore
740
null
[]
[ { "code": "", "text": "Hi friends,I’m currently putting together a data structure for a new application I’m putting together and a really basic question I have is regarding grouping data within a document. By this I mean putting together a structure like this:_id\nQuestion.Id\nQuestion.NameRather than_id\nQuestionId\nQuestionNameWhen I’m looking at the data in my collection it would certatinly be simpler if I can group the data together as in the first example.Are there any downsides to this? Does it slow down performance? Is there any documentation I can be pointed to that will help me make an informed choice?ThanksChris", "username": "Chris_Boot1" }, { "code": "", "text": "I’ve had a look at MongoDb documentation on sub-document querying and can’t find anything that says you shouldn’t do it or that it has negative impacts on performance, sorry I’ve phrased my original message so poorly.", "username": "Chris_Boot1" }, { "code": "", "text": "Hi @Chris_Boot1 and welcome to MongoDB community forums!!Does it slow down performance?A general rule of thumb while doing schema design in MongoDB is that you should design your database in a way that the most common queries can be satisfied by querying a single collection, even when this means that you will have some redundancy in your database. Thus, it may be beneficial to work from the required queries first, making it as simple as possible, and let the schema design follow the query pattern. You can read more about Data Modelling and Query Performance to understand further.From the above two schema shared, the former displaying use of Embedded Documents and the later a plain document which contains all data into a single field.The performance would completely depend on your specific use case and application requirement. However, to understand better, you can visit the documentation to understand the difference between Model One-to-One Relationships with Embedded Documents and Model One-to-Many Relationships with Embedded Documents.Also, to understand better, I would recommend you taking up Introduction to MongoDB Data Modelling-MongoDB University for more clear understanding.Let us know if you have any further questions.Regards\nAasawari", "username": "Aasawari" } ]
Data Structure General Question
2023-05-23T06:52:06.662Z
Data Structure General Question
402
null
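To make the trade-off above concrete: fields grouped into a subdocument are queried and indexed with dot notation exactly like top-level fields, so the grouped layout does not by itself cost query performance. The collection name "answers" and the values are placeholders.

db.answers.insertOne({ question: { id: 42, name: "What is an index?" }, created: new Date() })

// Dot notation reaches into the subdocument for both indexing and querying.
db.answers.createIndex({ "question.id": 1 })
db.answers.find({ "question.id": 42 })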
[ "queries", "replication", "time-series" ]
[ { "code": "{\n name:\"ts1\",\n type:\"timeseries\",\n options:{\n timeseries:{\n timeField:\"timestamp\",\n metaField:\"meta\",\n granularity: \"seconds\",\n bucketMaxSpanSeconds: 3600\n }\n },\n info: {readonly: false}\n},\n {\n _id: ObjectId(\"646b3ff0c5a28d25568373ae\"),\n control: {\n version: 1,\n min: {\n timestamp: ISODate(\"2023-05-22T10:12:00.000Z\"),\n cowe: 201.939,\n pmnp10: 14,\n hum: 62.3,\n pmnp50: 0,\n pmspug10: 3,\n pm10: 60.37,\n calpm10: '',\n no2ae: 397.264,\n calco: '',\n batt: 100,\n temp: 24,\n pmnp05: 166,\n co: 0.583,\n pmspug100: 4,\n pmaeug10: 3,\n no2: 50.284,\n vafe: 262.362,\n pmnp100: 0,\n pmaeug25: 4,\n no2o3ae: 409.052,\n calpm2_5: '',\n pmaeug100: 4,\n no2we: 109.925,\n pm25: 198.91,\n no2o3we: 422.077,\n calno2: '',\n pmnp03: 537,\n calo3: '',\n pmnp25: 4,\n pmspug25: 4,\n coae: 160.378,\n pmdiag: 0,\n o3: 40.285,\n _id: ObjectId(\"646efee320e52ccbfd5ef937\")\n },\n max: {\n timestamp: ISODate(\"2023-05-22T10:13:00.000Z\"),\n cowe: 300.939,\n pmnp10: 14,\n hum: 62.3,\n pmnp50: 0,\n pmspug10: 3,\n pm10: 60.37,\n calpm10: 'test',\n no2ae: 397.264,\n calco: '',\n batt: 100,\n temp: 24,\n pmnp05: 166,\n co: 0.583,\n pmspug100: 4,\n pmaeug10: 3,\n no2: 50.284,\n vafe: 262.362,\n pmnp100: 0,\n pmaeug25: 4,\n no2o3ae: 409.052,\n calpm2_5: 'test',\n pmaeug100: 4,\n no2we: 109.925,\n pm25: 198.91,\n no2o3we: 422.077,\n calno2: '',\n pmnp03: 537,\n calo3: 'test',\n pmnp25: 4,\n pmspug25: 4,\n coae: 160.378,\n pmdiag: 0,\n o3: 40.285,\n _id: ObjectId(\"646eff3220e52ccbfd5ef939\")\n }\n },\n meta: {\n idafe: 'AirH358',\n lbllocation: 'Poli-TO-1',\n location: {\n coordinates: [15.047446452336533, 37.36084832653023],\n type: 'Point'\n },\n schemaver: 1,\n verfmw: 'AH_1.3.1'\n },\n data: {\n vafe: {\n '0': 262.362,\n '1': 262.362\n },\n no2we: {\n '0': 109.925,\n '1': 109.925\n },\n cowe: {\n '0': 201.939,\n '1': 300.939\n },\n no2: {\n '0': 50.284,\n '1': 50.284\n },\n coae: {\n '0': 160.378,\n '1': 160.378\n },\n calco: {\n '0': '',\n '1': ''\n },\n temp: {\n '0': 24,\n '1': 24\n },\n pmnp10: {\n '0': 14,\n '1': 14\n },\n timestamp: {\n '0': ISODate(\"2023-05-22T10:12:00.000Z\"),\n '1': ISODate(\"2023-05-22T10:13:00.000Z\")\n },\n no2o3ae: {\n '0': 409.052,\n '1': 409.052\n },\n calno2: {\n '0': '',\n '1': ''\n },\n pmaeug100: {\n '0': 4,\n '1': 4\n },\n o3: {\n '0': 40.285,\n '1': 40.285\n },\n pmnp03: {\n '0': 537,\n '1': 537\n },\n hum: {\n '0': 62.3,\n '1': 62.3\n },\n pm25: {\n '0': 198.91,\n '1': 198.91\n },\n _id: {\n '0': ObjectId(\"646efee320e52ccbfd5ef937\"),\n '1': ObjectId(\"646eff3220e52ccbfd5ef939\")\n },\n pmdiag: {\n '0': 0,\n '1': 0\n },\n pmnp100: {\n '0': 0,\n '1': 0\n },\n no2ae: {\n '0': 397.264,\n '1': 397.264\n },\n pmaeug25: {\n '0': 4,\n '1': 4\n },\n pmspug25: {\n '0': 4,\n '1': 4\n },\n pmspug100: {\n '0': 4,\n '1': 4\n },\n pmaeug10: {\n '0': 3,\n '1': 3\n },\n batt: {\n '0': 100,\n '1': 100\n },\n pmnp50: {\n '0': 0,\n '1': 0\n },\n pm10: {\n '0': 60.37,\n '1': 60.37\n },\n calpm2_5: {\n '0': '',\n '1': 'test'\n },\n pmnp05: {\n '0': 166,\n '1': 166\n },\n pmnp25: {\n '0': 4,\n '1': 4\n },\n calpm10: {\n '0': '',\n '1': 'test'\n },\n no2o3we: {\n '0': 422.077,\n '1': 422.077\n },\n co: {\n '0': 0.583,\n '1': 0.583\n },\n pmspug10: {\n '0': 3,\n '1': 3\n },\n calo3: {\n '0': '',\n '1': 'test'\n }\n }\n }\n\n", "text": "I’m exploring how to use the Time Series collection in a MongoDB replica set (on-premise community edition 6.0.6).\nI have define a TS collection as below:Below same example data from “db.system.buckets.ts1”:I would like to 
be able to delete some sensor readings by timestamp or index, i.e. delete readings having index 1 or readings whose timestamp is greater than a set timestamp, but I’m not able to figure out how to design such a type of query given this specific schema.\nAny suggestion/example?\nThanks!", "username": "Sergio_Ferlito1" }, { "code": "system.bucketmetaFieldjustOne: falsedeleteMany()", "text": "Hello @Sergio_Ferlito1,Thank you for reaching out to the MongoDB Community forums!Below same example data from “db.system.buckets.ts1”:Could you please share a sample document from the ts1 collection? It is highly recommended to work with the actual time series collection rather than querying the “internal system bucket” collection.Note: The internal system bucket documents are for internal use only, and they might change without notice. Note that this has already taken place, where in some cases, the data in the system.bucket collection is compressed. To read further, please refer to the Time Series Compression documentation.delete readings having index 1Regarding the statement to delete readings with index 1, could you please provide some additional information or examples? This will help us understand your requirements better.delete readings having index 1 or readings whose timestamp is greater than a set timestamp, but I’m not able to figure out how to design such a type of query given this specific schemaCould you please provide further clarification regarding the need for deletion, and can it be accomplished using the $match operator?Also, it’s worth noting that starting in MongoDB 5.1, you can perform some delete and update operations. For example, delete commands can only match the metaField field values, and you need to ensure that the command does not limit the number of documents to be deleted. You can set justOne: false or use the deleteMany() method. 
For more details, please refer to the Updates and Deletes documentation.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "{ timestamp: 2023-05-22T10:11:00.000Z,\n meta: \n { idafe: 'AirH358',\n lbllocation: 'Poli-TO-1',\n location: \n { coordinates: [ 15.047446452336533, 37.36084832653023 ],\n type: 'Point' },\n schemaver: 1,\n verfmw: 'AH_1.3.1' },\n cowe: 201.939,\n pmnp10: 14,\n hum: 62.3,\n pmnp50: 0,\n pmspug10: 3,\n pm10: 60.37,\n calpm10: '',\n no2ae: 397.264,\n calco: '',\n batt: 100,\n temp: 24,\n pmnp05: 166,\n co: 0.583,\n pmspug100: 4,\n _id: ObjectId(\"646b2461fdca2cffecec33c0\"),\n pmaeug10: 3,\n no2: 50.284,\n vafe: 262.362,\n pmnp100: 0,\n pmaeug25: 4,\n no2o3ae: 409.052,\n calpm2_5: '',\n pmaeug100: 4,\n no2we: 109.925,\n pm25: 198.91,\n no2o3we: 422.077,\n calno2: '',\n pmnp03: 537,\n calo3: '',\n pmnp25: 4,\n pmspug25: 4,\n coae: 160.378,\n pmdiag: 0,\n o3: 40.285 }\n", "text": "Below is an example document from the TS collection:I would like to delete a portion of data, either from a specific timestamp (not in the meta field so not possible to act directly on TS collection) or, acting on the internal bucket, using, for example, key ‘1’ in all sub-documents of the data field.\nThe actual limitations on TS collection (no change streams support and very limited support for delete/update) are, in my opinion, too strong, if I can’t either operate on internal bucket collection to obtain what I intended, the TS collection, although very useful for inserting data, are not very useful, or at least difficult to use, if I what change them after inserting.", "username": "Sergio_Ferlito1" }, { "code": "", "text": "Hey @Sergio_Ferlito1,I would like to delete a portion of data, either from a specific timestamp (not in the meta field so not possible to act directly on TS collection)As of MongoDB 6.0.6, this is not supported. However, the feature is planned and I believe it will be released in a future MongoDB version. For example, see SERVER-73285 for more info.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
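To make the restriction described above concrete — a minimal sketch only, reusing the ts1 collection and the meta.idafe field from the earlier examples — the only deletes a 6.0 server accepts on a time series collection are ones whose filter touches nothing but the metaField:

    // Accepted in MongoDB 6.0: the filter only references the metaField ("meta").
    db.ts1.deleteMany({ "meta.idafe": "AirH358" })

    // Rejected in 6.0.x, because the filter uses the timeField rather than the metaField:
    // db.ts1.deleteMany({ timestamp: { $gt: ISODate("2023-05-22T10:12:00Z") } })

Arbitrary delete filters on time series collections only arrive in later server releases, which is what the SERVER-73285 reference above points at.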
How to overcome, if possible, Time Series limitations by acting directly on the bucket collection
2023-05-25T09:11:41.932Z
How to overcome, if possible, Time Series limitations by acting directly on the bucket collection
837
null
[ "queries", "transactions", "storage" ]
[ { "code": "dbpath=../data\nlogpath=../log/mongo.log\nport=27017\nmaxConns=5000\nbind_ip=0.0.0.0\nauth=true\nwiredTigerCacheSizeGB=4\n@echo off \n@echo mongodb started……\ntitle mongo-4.2.17\ncd bin\nmongod -f ../conf/mongo.conf\npause\n2023-05-24T13:35:27.440+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2023-05-24T13:35:27.763+0800 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2023-05-24T13:35:27.764+0800 I CONTROL [initandlisten] MongoDB starting : pid=2328 port=27017 dbpath=../data 64-bit host=WIN-FT73QUGLUH6\n2023-05-24T13:35:27.764+0800 I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2\n2023-05-24T13:35:27.764+0800 I CONTROL [initandlisten] db version v4.2.17\n2023-05-24T13:35:27.764+0800 I CONTROL [initandlisten] git version: be089838c55d33b6f6039c4219896ee4a3cd704f\n2023-05-24T13:35:27.764+0800 I CONTROL [initandlisten] allocator: tcmalloc\n2023-05-24T13:35:27.764+0800 I CONTROL [initandlisten] modules: none\n2023-05-24T13:35:27.764+0800 I CONTROL [initandlisten] build environment:\n2023-05-24T13:35:27.764+0800 I CONTROL [initandlisten] distmod: 2012plus\n2023-05-24T13:35:27.764+0800 I CONTROL [initandlisten] distarch: x86_64\n2023-05-24T13:35:27.764+0800 I CONTROL [initandlisten] target_arch: x86_64\n2023-05-24T13:35:27.764+0800 I CONTROL [initandlisten] options: { config: \"../conf/mongo.conf\", net: { bindIp: \"0.0.0.0\", maxIncomingConnections: 5000, port: 27017 }, security: { authorization: \"enabled\" }, storage: { dbPath: \"../data\", journal: { enabled: true }, wiredTiger: { engineConfig: { cacheSizeGB: 4.0 } } }, systemLog: { destination: \"file\", path: \"../log/mongo.log\" } }\n2023-05-24T13:35:27.764+0800 I STORAGE [initandlisten] Detected data files in ../data created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.\n2023-05-24T13:35:27.765+0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=4096M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],\n2023-05-24T13:35:27.795+0800 I STORAGE [initandlisten] WiredTiger message [1684906527:794745][2328:140732587905856], txn-recover: Recovering log 21 through 22\n2023-05-24T13:35:27.860+0800 I STORAGE [initandlisten] WiredTiger message [1684906527:859661][2328:140732587905856], txn-recover: Recovering log 22 through 22\n2023-05-24T13:35:27.925+0800 I STORAGE [initandlisten] WiredTiger message [1684906527:925680][2328:140732587905856], txn-recover: Main recovery loop: starting at 21/4992 to 22/256\n2023-05-24T13:35:28.041+0800 I STORAGE [initandlisten] WiredTiger message [1684906528:40831][2328:140732587905856], txn-recover: Recovering log 21 through 22\n2023-05-24T13:35:28.115+0800 I STORAGE [initandlisten] WiredTiger message [1684906528:115241][2328:140732587905856], txn-recover: Recovering log 22 through 22\n2023-05-24T13:35:28.166+0800 I STORAGE [initandlisten] WiredTiger message [1684906528:165491][2328:140732587905856], txn-recover: Set global recovery timestamp: (0, 0)\n2023-05-24T13:35:28.379+0800 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. 
Ts: Timestamp(0, 0)\n2023-05-24T13:35:28.389+0800 I STORAGE [initandlisten] No table logging settings modifications are required for existing WiredTiger tables. Logging enabled? 1\n2023-05-24T13:35:28.390+0800 I STORAGE [initandlisten] Timestamp monitor starting\n2023-05-24T13:35:28.395+0800 I STORAGE [initandlisten] Flow Control is enabled on this deployment.\n2023-05-24T13:35:28.438+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '../data/diagnostic.data'\n2023-05-24T13:35:28.443+0800 I NETWORK [listener] Listening on 0.0.0.0\n2023-05-24T13:35:28.443+0800 I NETWORK [listener] waiting for connections on port 27017\n2023-05-24T13:35:31.438+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49773 #1 (1 connection now open)\n2023-05-24T13:35:31.441+0800 I NETWORK [conn1] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49773 (connection id: 1)\n2023-05-24T13:35:31.441+0800 I NETWORK [conn1] end connection 127.0.0.1:49773 (0 connections now open)\n2023-05-24T13:35:31.441+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49774 #2 (1 connection now open)\n2023-05-24T13:35:31.472+0800 I NETWORK [conn2] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49774 (connection id: 2)\n2023-05-24T13:35:31.473+0800 I NETWORK [conn2] end connection 127.0.0.1:49774 (0 connections now open)\n2023-05-24T13:35:31.474+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49775 #3 (1 connection now open)\n2023-05-24T13:35:31.475+0800 I NETWORK [conn3] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49775 (connection id: 3)\n2023-05-24T13:35:31.475+0800 I NETWORK [conn3] end connection 127.0.0.1:49775 (0 connections now open)\n2023-05-24T13:35:31.958+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49776 #4 (1 connection now open)\n2023-05-24T13:35:31.960+0800 I NETWORK [conn4] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49776 (connection id: 4)\n2023-05-24T13:35:31.960+0800 I NETWORK [conn4] end connection 127.0.0.1:49776 (0 connections now open)\n2023-05-24T13:35:31.961+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49777 #5 (1 connection now open)\n2023-05-24T13:35:31.961+0800 I NETWORK [conn5] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49777 (connection id: 5)\n2023-05-24T13:35:31.961+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49778 #6 (2 connections now open)\n2023-05-24T13:35:31.961+0800 I NETWORK [conn5] end connection 127.0.0.1:49777 (1 connection now open)\n2023-05-24T13:35:32.472+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49779 #7 (2 connections now open)\n2023-05-24T13:35:32.472+0800 I NETWORK [conn6] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. 
Ending connection from 127.0.0.1:49778 (connection id: 6)\n2023-05-24T13:35:32.472+0800 I NETWORK [conn6] end connection 127.0.0.1:49778 (1 connection now open)\n2023-05-24T13:35:32.472+0800 I NETWORK [conn7] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49779 (connection id: 7)\n2023-05-24T13:35:32.472+0800 I NETWORK [conn7] end connection 127.0.0.1:49779 (0 connections now open)\n2023-05-24T13:35:32.534+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49780 #8 (1 connection now open)\n2023-05-24T13:35:32.534+0800 I NETWORK [conn8] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49780 (connection id: 8)\n2023-05-24T13:35:32.534+0800 I NETWORK [conn8] end connection 127.0.0.1:49780 (0 connections now open)\n2023-05-24T13:35:33.075+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49781 #9 (1 connection now open)\n2023-05-24T13:35:33.077+0800 I NETWORK [conn9] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49781 (connection id: 9)\n2023-05-24T13:35:33.077+0800 I NETWORK [conn9] end connection 127.0.0.1:49781 (1 connection now open)\n2023-05-24T13:35:33.077+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49782 #10 (2 connections now open)\n2023-05-24T13:35:33.078+0800 I NETWORK [conn10] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49782 (connection id: 10)\n2023-05-24T13:35:33.078+0800 I NETWORK [conn10] end connection 127.0.0.1:49782 (0 connections now open)\n2023-05-24T13:35:33.154+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49783 #11 (1 connection now open)\n2023-05-24T13:35:33.154+0800 I NETWORK [conn11] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49783 (connection id: 11)\n2023-05-24T13:35:33.154+0800 I NETWORK [conn11] end connection 127.0.0.1:49783 (0 connections now open)\n2023-05-24T13:35:33.626+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49784 #12 (1 connection now open)\n2023-05-24T13:35:33.628+0800 I NETWORK [conn12] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49784 (connection id: 12)\n2023-05-24T13:35:33.628+0800 I NETWORK [conn12] end connection 127.0.0.1:49784 (0 connections now open)\n2023-05-24T13:35:33.628+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49785 #13 (1 connection now open)\n2023-05-24T13:35:33.629+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49786 #14 (2 connections now open)\n2023-05-24T13:35:33.629+0800 I NETWORK [conn13] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49785 (connection id: 13)\n2023-05-24T13:35:33.629+0800 I NETWORK [conn13] end connection 127.0.0.1:49785 (1 connection now open)\n2023-05-24T13:35:34.409+0800 I NETWORK [conn14] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. 
Ending connection from 127.0.0.1:49786 (connection id: 14)\n2023-05-24T13:35:34.409+0800 I NETWORK [conn14] end connection 127.0.0.1:49786 (0 connections now open)\n2023-05-24T13:35:34.411+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49787 #15 (1 connection now open)\n2023-05-24T13:35:34.411+0800 I NETWORK [conn15] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49787 (connection id: 15)\n2023-05-24T13:35:34.411+0800 I NETWORK [conn15] end connection 127.0.0.1:49787 (0 connections now open)\n2023-05-24T13:35:35.064+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49788 #16 (1 connection now open)\n2023-05-24T13:35:35.073+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49789 #17 (2 connections now open)\n2023-05-24T13:35:35.074+0800 I NETWORK [conn16] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49788 (connection id: 16)\n2023-05-24T13:35:35.074+0800 I NETWORK [conn16] end connection 127.0.0.1:49788 (1 connection now open)\n2023-05-24T13:35:35.074+0800 I NETWORK [conn17] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49789 (connection id: 17)\n2023-05-24T13:35:35.074+0800 I NETWORK [conn17] end connection 127.0.0.1:49789 (0 connections now open)\n2023-05-24T13:35:35.162+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49790 #18 (1 connection now open)\n2023-05-24T13:35:35.162+0800 I NETWORK [conn18] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49790 (connection id: 18)\n2023-05-24T13:35:35.162+0800 I NETWORK [conn18] end connection 127.0.0.1:49790 (0 connections now open)\n2023-05-24T13:35:35.656+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49791 #19 (1 connection now open)\n2023-05-24T13:35:35.659+0800 I NETWORK [conn19] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49791 (connection id: 19)\n2023-05-24T13:35:35.659+0800 I NETWORK [conn19] end connection 127.0.0.1:49791 (0 connections now open)\n2023-05-24T13:35:35.659+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49792 #20 (1 connection now open)\n2023-05-24T13:35:35.660+0800 I NETWORK [conn20] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49792 (connection id: 20)\n2023-05-24T13:35:35.660+0800 I NETWORK [conn20] end connection 127.0.0.1:49792 (0 connections now open)\n2023-05-24T13:35:35.745+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49793 #21 (1 connection now open)\n2023-05-24T13:35:35.745+0800 I NETWORK [conn21] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. 
Ending connection from 127.0.0.1:49793 (connection id: 21)\n2023-05-24T13:35:35.746+0800 I NETWORK [conn21] end connection 127.0.0.1:49793 (0 connections now open)\n2023-05-24T13:35:36.205+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49794 #22 (1 connection now open)\n2023-05-24T13:35:36.207+0800 I NETWORK [conn22] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49794 (connection id: 22)\n2023-05-24T13:35:36.207+0800 I NETWORK [conn22] end connection 127.0.0.1:49794 (0 connections now open)\n2023-05-24T13:35:36.208+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49795 #23 (1 connection now open)\n2023-05-24T13:35:36.208+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49796 #24 (2 connections now open)\n2023-05-24T13:35:36.268+0800 I NETWORK [conn23] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49795 (connection id: 23)\n2023-05-24T13:35:36.268+0800 I NETWORK [conn23] end connection 127.0.0.1:49795 (1 connection now open)\n2023-05-24T13:35:36.268+0800 I NETWORK [conn24] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49796 (connection id: 24)\n2023-05-24T13:35:36.268+0800 I NETWORK [conn24] end connection 127.0.0.1:49796 (0 connections now open)\n2023-05-24T13:35:36.269+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49797 #25 (1 connection now open)\n2023-05-24T13:35:36.272+0800 I NETWORK [conn25] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49797 (connection id: 25)\n2023-05-24T13:35:36.272+0800 I NETWORK [conn25] end connection 127.0.0.1:49797 (0 connections now open)\n2023-05-24T13:35:36.779+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49798 #26 (1 connection now open)\n2023-05-24T13:35:36.783+0800 I NETWORK [conn26] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49798 (connection id: 26)\n2023-05-24T13:35:36.783+0800 I NETWORK [conn26] end connection 127.0.0.1:49798 (0 connections now open)\n2023-05-24T13:35:36.783+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49799 #27 (1 connection now open)\n2023-05-24T13:35:36.876+0800 I NETWORK [conn27] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. Ending connection from 127.0.0.1:49799 (connection id: 27)\n2023-05-24T13:35:36.876+0800 I NETWORK [conn27] end connection 127.0.0.1:49799 (0 connections now open)\n2023-05-24T13:35:36.877+0800 I NETWORK [listener] connection accepted from 127.0.0.1:49800 #28 (1 connection now open)\n2023-05-24T13:35:36.877+0800 I NETWORK [conn28] Error receiving request from client: ProtocolError: Client sent an HTTP request over a native MongoDB connection. 
Ending connection from 127.0.0.1:49800 (connection id: 28)\n2023-05-24T13:35:36.877+0800 I NETWORK [conn28] end connection 127.0.0.1:49800 (0 connections now open)\n2023-05-24T13:35:40.643+0800 I CONTROL [thread29] CTRL_CLOSE_EVENT signal\n2023-05-24T13:35:40.643+0800 I CONTROL [consoleTerminate] got CTRL_CLOSE_EVENT, will terminate after current cmd ends\n2023-05-24T13:35:40.643+0800 I REPL [consoleTerminate] Stepping down the ReplicationCoordinator for shutdown, waitTime: 10000ms\n2023-05-24T13:35:40.643+0800 I SHARDING [consoleTerminate] Shutting down the WaitForMajorityService\n2023-05-24T13:35:40.644+0800 I CONTROL [consoleTerminate] Shutting down the LogicalSessionCache\n2023-05-24T13:35:40.645+0800 I NETWORK [consoleTerminate] shutdown: going to close listening sockets...\n2023-05-24T13:35:40.645+0800 I NETWORK [consoleTerminate] Shutting down the global connection pool\n2023-05-24T13:35:40.645+0800 I STORAGE [consoleTerminate] Shutting down the FlowControlTicketholder\n2023-05-24T13:35:40.645+0800 I - [consoleTerminate] Stopping further Flow Control ticket acquisitions.\n2023-05-24T13:35:40.645+0800 I STORAGE [consoleTerminate] Shutting down the PeriodicThreadToAbortExpiredTransactions\n2023-05-24T13:35:40.646+0800 I STORAGE [consoleTerminate] Shutting down the PeriodicThreadToDecreaseSnapshotHistoryIfNotNeeded\n2023-05-24T13:35:40.646+0800 I REPL [consoleTerminate] Shutting down the ReplicationCoordinator\n2023-05-24T13:35:40.646+0800 I SHARDING [consoleTerminate] Shutting down the ShardingInitializationMongoD\n2023-05-24T13:35:40.646+0800 I REPL [consoleTerminate] Enqueuing the ReplicationStateTransitionLock for shutdown\n2023-05-24T13:35:40.646+0800 I - [consoleTerminate] Killing all operations for shutdown\n2023-05-24T13:35:40.646+0800 I COMMAND [consoleTerminate] Shutting down all open transactions\n2023-05-24T13:35:40.646+0800 I REPL [consoleTerminate] Acquiring the ReplicationStateTransitionLock for shutdown\n2023-05-24T13:35:40.646+0800 I INDEX [consoleTerminate] Shutting down the IndexBuildsCoordinator\n2023-05-24T13:35:40.646+0800 I NETWORK [consoleTerminate] Shutting down the ReplicaSetMonitor\n2023-05-24T13:35:40.646+0800 I CONTROL [consoleTerminate] Shutting down free monitoring\n2023-05-24T13:35:40.646+0800 I CONTROL [consoleTerminate] Shutting down free monitoring\n2023-05-24T13:35:40.646+0800 I FTDC [consoleTerminate] Shutting down full-time data capture\n2023-05-24T13:35:40.646+0800 I FTDC [consoleTerminate] Shutting down full-time diagnostic data capture\n2023-05-24T13:35:40.648+0800 I STORAGE [consoleTerminate] Shutting down the HealthLog\n2023-05-24T13:35:40.648+0800 I STORAGE [consoleTerminate] Shutting down the storage engine\n2023-05-24T13:35:40.648+0800 I STORAGE [consoleTerminate] Deregistering all the collections\n2023-05-24T13:35:40.648+0800 I STORAGE [consoleTerminate] Timestamp monitor shutting down\n2023-05-24T13:35:40.648+0800 I STORAGE [consoleTerminate] WiredTigerKVEngine shutting down\n2023-05-24T13:35:40.660+0800 I STORAGE [consoleTerminate] Shutting down session sweeper thread\n2023-05-24T13:35:40.660+0800 I STORAGE [consoleTerminate] Finished shutting down session sweeper thread\n2023-05-24T13:35:40.660+0800 I STORAGE [consoleTerminate] Shutting down journal flusher thread\n2023-05-24T13:35:40.693+0800 I STORAGE [consoleTerminate] Finished shutting down journal flusher thread\n2023-05-24T13:35:40.693+0800 I STORAGE [consoleTerminate] Shutting down checkpoint thread\n2023-05-24T13:35:40.694+0800 I STORAGE [consoleTerminate] Finished shutting down 
checkpoint thread\n2023-05-24T13:35:40.714+0800 I STORAGE [consoleTerminate] shutdown: removing fs lock...\n2023-05-24T13:35:40.715+0800 I - [consoleTerminate] Dropping the scope cache for shutdown\n2023-05-24T13:35:40.715+0800 I CONTROL [consoleTerminate] now exiting\n2023-05-24T13:35:40.715+0800 I CONTROL [consoleTerminate] shutting down with code:12\n", "text": "Hi,\nI have installed mongodb on my local machine, and when I access port 27017 through my browser after the startup is completed, I may have the problem that the connection is reset when I refresh the page frequently, please help me to see what causes it.Here is my usage information:\nOS: Windows Server 2012 R2 Standard\nMemory: 8G\nMongoDB Version: 4.2.17\nConf:Launch Script:This is the log of 10 visits to the http://127.0.0.1:27017 page after I started mongodb (6 normal responses and 4 connection reset errors).", "username": "Qiong_Wang" }, { "code": "", "text": "your mongodb doesn’t support http protocol. Check manual.", "username": "Kobe_W" }, { "code": "27017It looks like you are trying to access MongoDB over HTTP on the native driver port.", "text": "Sorry, I didn’t understand what you meant, are you referring to MongoDB Wire Protocol? The problem I’m having is that frequent access to the server port (27017 ) occasionally results in a connection reset, and in most cases the access returns: It looks like you are trying to access MongoDB over HTTP on the native driver port.", "username": "Qiong_Wang" }, { "code": "", "text": "Normal Response:\nIt looks like you are trying to access MongoDB over HTTP on the native driver port.Abnormal response during frequent refresh:\n\n2982×748 13.6 KB\n", "username": "Qiong_Wang" }, { "code": "", "text": "You connect to mongodb using the mongosh command line tool or via a driver in your application.The response you see and the log lines are expected when you connect with a web browser.", "username": "chris" }, { "code": "mongosh command line toolconnect via a driverconnection has resetedcurlwireshark tool", "text": "Thank you for your replay.\nIf I use the mongosh command line tool or connect via a driver there is no problem.But if I access port 27017 via http protocol I get a connection has reseted problem.If I use the curl command to access it, The following response was obtained:\n\nMeanwhile, I grabbed the network packet information by wireshark tool, but from the results, both the normal return and the connection was reset to grab the network packets are similar:\n\nimage2250×557 60.6 KB\n", "username": "Qiong_Wang" }, { "code": "", "text": "I think we are all trying to understand why you are trying to connect over http.The response you see when connecting using a browser or cURL is expected. It is telling you what you are doing is wrong and then hangs up the connection.", "username": "chris" }, { "code": "", "text": "In order to access port 27017 via http protocol before using the driver to link mongo, it is used to check whether mongo has started properly", "username": "Qiong_Wang" }, { "code": "", "text": "I just confirmed the logic and the original author’s idea is to access port 27017 via http protocol for the purpose of quickly verifying if the mongodb service is available.", "username": "Qiong_Wang" } ]
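Since the underlying goal described above is just a quick check that mongod is up, a hedged alternative to probing port 27017 over HTTP is to issue a ping from the shell; the host and port below are the defaults from the posted config, and the exact command is a sketch rather than a recommendation:

    # Prints { ok: 1 } and exits with status 0 when the server is reachable.
    mongosh --quiet --host 127.0.0.1 --port 27017 --eval "db.runCommand({ ping: 1 })"

    # Equivalent with the legacy shell that ships with MongoDB 4.2:
    mongo --quiet --host 127.0.0.1 --port 27017 --eval "db.runCommand({ ping: 1 })"

The ping command does not require authentication, so it works even with auth=true as in the configuration shown earlier.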
Help: Occasional connection reset error when accessing port 27017 via Http
2023-05-24T05:38:45.902Z
Help: Occasional connection reset error when accessing port 27017 via Http
957
https://www.mongodb.com/…72879dea0dec.png
[ "replication" ]
[ { "code": "", "text": "Hey all,\nI logged onto Atlas today to create a new user, and found that all the hosts in the replica set are showing as degraded (pictured below), meaning I cannot connect to the db.\nAt one point the primary host was still working, so I tried to create a new user, and now all my hosts are down and Atlas is stuck on “We are deploying your changes (current action: configuring MongoDB)”.Is there anything I can do to solve this, or is it just a case of waiting until monday for support?Thanks in Advance", "username": "James_Cook2" }, { "code": "", "text": "Hi @James_Cook2,Hopefully you are able to contact the in-app chat support now.Unfortunately we won’t be able to further assist here on the forums as we don’t have insight into your cluster.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
All hosts are degraded
2023-05-27T13:17:10.525Z
All hosts are degraded
654
null
[ "queries", "time-series" ]
[ { "code": "", "text": "Hi all,\nCould you kindly share the benchmark figures for CPU and memory usage in timeseries collection? Additionally, I would like to know if any tests have been conducted to compare CPU and memory utilization between timeseries collections and regular collections.Thank you in advance.", "username": "Yogesh_Sonawane1" }, { "code": "", "text": "Hey @Yogesh_Sonawane1,Welcome back to the MongoDB Community forums Could you kindly share the benchmark figures for CPU and memory usage in the time-series collection?We strive to minimize CPU and memory usage, but it will be difficult to compare actual benchmark numbers as they may vary depending on the specific use case.I would like to know if any tests have been conducted to compare CPU and memory utilization between time-series collections and regular collections.In terms of time series vs. regular collection, these are two distinct features with different characteristics. For example, time series does not have a unique index, and therefore, comparisons can only be made for a specific workload.Additionally, there is a difference in disk usage between the two types of collections. I suggest referring to the article “Analyzing Data Storage: Regular Collection vs Time Series Collection” for further information on this topic.May I ask if you have experienced any performance issues with time series collection in your workload? Have you conducted any tests that demonstrate this?Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you for getting back to us. At this point, we haven’t run any tests, but we’re seeking to determine whether migrating from a regular collection to a timeseries collection would result in increased CPU and/or memory usage?", "username": "Yogesh_Sonawane1" }, { "code": "", "text": "Hi @Yogesh_Sonawane1,we’re seeking to determine whether migrating from a regular collection to a timeseries collection would result in increased CPU and/or memory usage?I would recommend testing it as it will vary depending on factors such as the volume of data, the complexity of queries and aggregations getting performed on the data, and the specific implementation of the time series collection.However to read further please refer to the Time Series Compression and regular collections compression documentation.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
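For anyone who wants to run the comparison suggested above, a small sketch in mongosh — the collection names readings_regular and readings_ts are placeholders for two collections loaded with the same documents, and this only covers storage figures, not CPU or memory:

    // Compare on-disk storage and index size between the two collections.
    // Field availability can differ slightly for time series collections,
    // which report their figures at the internal bucket level.
    ["readings_regular", "readings_ts"].forEach(function (name) {
      const s = db.getCollection(name).stats();
      print(name, "-> storageSize:", s.storageSize, "totalIndexSize:", s.totalIndexSize);
    });

CPU and memory would still need to be observed on the host (or via serverStatus) while a representative workload runs against each collection.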
CPU and Memory Utilization for timeseries collections in mongodb
2023-05-04T07:07:20.340Z
CPU and Memory Utilization for timeseries collections in mongodb
934
null
[]
[ { "code": "", "text": "Does MongoDB work with any RWO provider formatted as ext4/XFS, or does it have any limitations/requirements on the storage provider/solution?", "username": "Yanni_Zhang" }, { "code": "", "text": "hi @Yanni_Zhang\nCould you elaborate on the deployment architecture that you’re planning?\nIn terms of recommendations, I would suggest you follow the production notes, as they are tested and supported. Using architecture/settings not aligned with the production notes recommendations could lead to unexpected issues, suboptimal performance, or even data loss.Best regards\nKevin", "username": "kevinadi" } ]
Storage requirement on MongoDB
2023-05-24T15:43:53.951Z
Storage requirement on MongoDB
606
null
[]
[ { "code": "", "text": "I am trying to delete a shard which is inaccessible. Can you please provide the steps for the same?", "username": "nanda1212" }, { "code": "", "text": "Hi @nanda1212\nIt’s been a while since you posted this. If I understand correctly, you want to delete an inaccessible shard? Short answer is, you cannot do this. The procedure to remove a shard is not very simple (https://www.mongodb.com/docs/manual/tutorial/remove-shards-from-cluster/) and it involves moving all data from the to-be-removed shard and distributing it to the remaining shards. If MongoDB allowed you to just remove the inaccessible shard, you would lose data.Best regards\nKevin", "username": "kevinadi" } ]
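For reference, the draining procedure linked in the reply above looks roughly like this when run from mongos against a shard that is still reachable — the shard name is made up, and as the reply notes it does not help with a shard that is already inaccessible:

    // Start draining: the balancer begins migrating chunks off the shard.
    db.adminCommand({ removeShard: "shard0003" })

    // Re-issue the same command to poll progress; the shard is only removed once
    // the returned state reaches "completed". Databases that have this shard as
    // their primary shard must be moved first with movePrimary.
    db.adminCommand({ removeShard: "shard0003" })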
Remove inaccessible Mongo shard
2023-05-15T09:10:05.236Z
Remove inaccessible Mongo shard
478
null
[]
[ { "code": "", "text": "Hi all.\nGlad to tell you that we have released a new Intellij plugin - NoSQL Navigator for MongoDB. If you need powerful functionality for working with MongoDB in your IDE, then try it. You can also install it in any free version of JetBrains IDEs.\nPlugin link: https://plugins.jetbrains.com/plugin/21833-nosql-navigator-for-mongodb\nLink to official site: https://nosqlnavigator.com/\nAt the moment, an early version of the plugin has been released; it will have many more features. We will be very grateful if you leave some feedback.\nThank you in advance!", "username": "Alex_Abara" }, { "code": "", "text": "Thanks @Alex_Abara\nI’ve used PyCharm for many years; I’ll take it out for a spin this week.", "username": "chris" } ]
Try new Intellij plugin for MongoDB
2023-05-28T16:22:15.119Z
Try new Intellij plugin for MongoDB
714
null
[ "upgrading" ]
[ { "code": "", "text": "I was in the process of upgrading my free M0 cluster to M2 when I encountered the following error message. Upon checking my collection, it appears that the data has been lost. I attempted to contact the chat support, but unfortunately, I was unable to reach anyone for assistance.Error Message:\n“Your cluster upgrade failed during data migration, and in some rare cases, your cluster may be missing data.”", "username": "Patrick_Li" }, { "code": "", "text": "Hi @Patrick_Li,I understand you attempted to contact support already. Can I request that you try contacting the in-app chat support team again now? I believe there are active chat support members at this time.Regards,\nJason", "username": "Jason_Tran" } ]
Failed to upgrade cluster, and lost most of the data
2023-05-28T16:54:49.518Z
Failed to upgrade cluster, and lost most of the data
617
null
[]
[ { "code": "", "text": "I’m creating a Realm application and as I complete the wizard steps to pick an application type, etc I don’t see a way of specifying the name. And, I can’t find a way to change the name after it’s completed. Currently, it seems to only to default to “application-0-hash”… What am I missing? I really need to change it to make it easier to manage bc I will most likely have several applications in my backend. I’ll be using Realm client from web and desktop as well.Thanks", "username": "d33p" }, { "code": "", "text": "Hi DeepDev,It should allow you to name the app at the very beginning when trying to create a new app.\nThe name field will already be populated as “Application-0” which is the default but you can change this as detailed below.Unfortunately once you create the app you cannot rename it as this needs to stay consistent, you will need to create a new app with a name you want.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Gotcha. I knew I must have missed how to edit the name in the setup dialogs somehow.Thanks!", "username": "d33p" }, { "code": "", "text": "Hmm, Ok I just deleted the app I had created and went through to create a new one and I don’t get any opportunity to enter a name for the app. When I click the App Services tab in my Atlas UI, it shows a dialog “Build apps faster with App Services”. I select the Build you Own App template and it just creates the project with the default ‘application-0’ name.Is this bc I’m on the shared tier doing this Atlas / Realm testing for my app? If I upgrade to a dedicated instance will I get more options? Or, is there something else I’m missing?\nScreenshot 2023-05-26 at 8.37.37 AM956×812 65.4 KB\n", "username": "d33p" }, { "code": "", "text": "Hi DeepDev,I see that you’re talking about the first app created for a new project.\nYou’re right, it appears that the first app created is given the name Application-0 which I believe is done because of the implication that the first app you create is going to be for testing/sandpit purposes only to get accustomed to the features.Please leave the initial app (don’t delete it) and go back to /apps page by clicking the App Services tab to create a new app via the “create a new app” button on the top right of the page rather than the modal screen. It will now allow you to name the app and also give you additional configuration options.\nApps App Services 2023-05-29 at 10.02.17 am722×244 16.8 KB\nRegards\nManny", "username": "Mansoor_Omar" } ]
Change name of Realm application
2023-05-25T20:03:08.468Z
Change name of Realm application
457
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "I need to do a $lookup against a collection whose name starts with a number and has an underscore (_) in it.\nUsing the below is not working. What is best practice when doing this lookup?\nfrom: 12collection_name", "username": "Eirik_Andersen" }, { "code": "mongoshs0 [primary] test> db.foo.aggregate({$lookup:{from: \"61thing_foo\", localField: 'b', foreignField:'_id',as: 'x'}})\n[\n {\n _id: ObjectId(\"6472b1eb000b5a7208da81ba\"),\n a: 'one',\n b: 1,\n x: [ { _id: 1, t: 'some words' } ]\n }\n]\n", "text": "Hi @Eirik_Andersen\n\nWhat version shell/driver and mongodb are you using?\n\nThis works for me in mongosh", "username": "chris" }, { "code": "", "text": "Hi Chris,\nSorry, forgot to mention the version. We are running on MongoDB 4.4.", "username": "Eirik_Andersen" }, { "code": "from: \"12collection_name\"\n", "text": "In\n\nfrom: 12collection_name\n\nit looks like you are not putting the name within quotes. In most programming languages, this means you are using a variable, and variables cannot start with numbers. I would try with\n\nas shown in Chris’ example.", "username": "steevej" }, { "code": "", "text": "Hi steevej,\nAlready tried that, and it doesn’t help.", "username": "Eirik_Andersen" }, { "code": "", "text": "Hi @Eirik_Andersen\nDid you also try single quotes? ’ instead of \" ?\nCan’t imagine that this really solves it, just making sure we don’t miss this point.\nThe exact error, your code, and your environment setup would be helpful to share.", "username": "michael_hoeller" }, { "code": "", "text": "Show the code you are using and share any error/warning message you are getting.", "username": "steevej" } ]
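To make the quoting suggestion in this thread concrete, a sketch with placeholder collection and field names: $lookup takes the source collection name as a plain quoted string, and in the shell a name that starts with a digit has to go through getCollection() rather than dot notation.

    // "from" is just a string, so quoting the numeric-leading name is all that is required.
    db.orders.aggregate([
      { $lookup: {
          from: "12collection_name",
          localField: "item_id",
          foreignField: "item_id",
          as: "matched"
      } }
    ])

    // Dot notation (db.12collection_name) is a syntax error in the shell;
    // use getCollection() to work with such a collection directly.
    db.getCollection("12collection_name").findOne()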
Using $lookup on collectionname including underscore
2023-05-27T19:50:12.892Z
Using $lookup on collectionname including underscore
734
null
[ "swift", "atlas-device-sync" ]
[ { "code": "null", "text": "Hi, I noticed that when I add an object with optional fields using Swift Realm SDK - Device Sync writes null for those fields in the Mongo Database. This actually takes some disk space and it looks like there is no harm in removing those fields from objects. Is there a way to achieve this using Swift Realm SDK?\nCurrently, I just execute a cleanup function from time to time.", "username": "Anton_P" }, { "code": "null", "text": "The question is a little unclear; optional properties can contain a value or be nil (in Swift). Atlas represents nil as null on the server, however the field itself still exists within the object schema (object graph)no harm in removing those fieldsI think is that comment is where the unclarity lies - how do you propose removing the fields while still maintaining that optional property as part of the object graph?Do you mean just remove the property from the object entirely as it will never have a value (e.g. an unused property)? If so, sure, you can freely alter and edit your object graphs while in development - keeping in mind destructive changes may require a sync restart and/or client wipe and re-sync.If the app is in production, it can still be done as well, although a bit more work.The’s an entire section in the getting started guide covering Changing an Object ModelIf the use case is something else, can you clarify?", "username": "Jay" }, { "code": "nilcrynilnull111B102Bnilnull", "text": "Thanks for the reply @Jay. Sorry for the unclarity. Here is a real word example.I added an object with optional fields having a nil value. Then I check what it looks like in the MongoDB:\nThere are c, r, and y optional fields containing nil value and are stored in the MongoDB as nulls. This object size is 111B. Now, I can just manually edit this object in the MongoDB and delete those fields:\n\nimage720×328 29.4 KB\n\nThe resulting object size is 102B so I just freed ~10% of storage space and everything still works as expected. If I check the object in the iOS app those fields will be nil. So the question is: is it possible to somehow store optional values as “absence of a value” instead of putting null there?", "username": "Anton_P" }, { "code": "B", "text": "Hmm. If B means bytes and one object is 111 bytes and the updated object is 102 bytes, that’s a difference of 9 bytes. Thats a pretty small amount of space, even over a large dataset.If the properties are not needed then remove them (per above) from your object model in your app!Otherwise, if the schema on the server is altered and does not match the client, I would imagine at some point it will cause a client reset which would erase the data on the client and align it with the server.However, the object models won’t match and well - that’s probably unrecoverable OR the server will add the fields back into future objects going forward to match your client graph.It would be a good idea to read this Breaking vs. Non-Breaking Change Quick Reference.Looking at the chart, far right “Remove a property in Atlas” is considered a breaking change and would require sync to be terminated along with other steps.Along with that, it seems this section Remove A Property directly addresses what’s being askedIf you remove a property from the server-side schema, it is a breaking change. For this reason, we recommend that you remove the property from the client-side object model only and leave it in place on the server-side schema.", "username": "Jay" } ]
Swift Realm and Device Sync for optional values
2023-05-28T11:40:44.974Z
Swift Realm and Device Sync for optional values
722
null
[ "queries", "data-modeling" ]
[ { "code": "", "text": "MongoDB ODBC Driver Update Query…IssueHello Team,Requriment,Need to connect the Mongo Database with Power Automate Desktop.I have established the database connection with PowerAutomate Desktop and can execute the Select Query in the Power Automate SQL statement.But can’t able to execute the Update Query in the Power Automate SQL statement.Thanks!!", "username": "Raghul_Manickam" }, { "code": "2 inputArguments, Dictionary", "text": "Exact Erro is \nCorrelation Id: a8392f27-b45b-4d07-b145-41ba3450c120ERROR [42000] [MySQL][ODBC 1.4(w) Driver][mysqld-5.7.12 mongosqld v2.14.5]parse sql ‘UPDATE powerAutomatePosts SET instaPostId = ‘123455ry4t667’ WHERE _id = ‘646d9c2a9ad6287f1ccfa760’’ error: unexpected UPDATE at position 8 near UPDATE: Microsoft.PowerPlatform.PowerAutomate.Desktop.Actions.SDK.ActionException: Error in SQL statement ERROR [42000] [MySQL][ODBC 1.4(w) Driver][mysqld-5.7.12 mongosqld v2.14.5]parse sql ‘UPDATE powerAutomatePosts SET instaPostId = ‘123455ry4t667’ WHERE _id = ‘646d9c2a9ad6287f1ccfa760’’ error: unexpected UPDATE at position 8 near UPDATE —> System.Data.Odbc.OdbcException: ERROR [42000] [MySQL][ODBC 1.4(w) Driver][mysqld-5.7.12 mongosqld v2.14.5]parse sql ‘UPDATE powerAutomatePosts SET instaPostId = ‘123455ry4t667’ WHERE _id = ‘646d9c2a9ad6287f1ccfa760’’ error: unexpected UPDATE at position 8 near UPDATE\nat System.Data.Odbc.OdbcConnection.HandleError(OdbcHandle hrHandle, RetCode retcode)\nat System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader, Object methodArguments, SQL_API odbcApiMethod)\nat System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader)\nat System.Data.Odbc.OdbcCommand.ExecuteReader(CommandBehavior behavior)\nat System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)\nat System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)\nat System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)\nat Microsoft.Flow.RPA.Desktop.Modules.Database.Actions.DatabaseActions.ExecuteSQLStatement(Variant connectionString, Variant sqlConnectionVariable, Variant sqlCommand, Variant& result, Int32 timeout, Int32 getConnection)\n— End of inner exception stack trace —\nat Microsoft.Flow.RPA.Desktop.Modules.Database.Actions.ExecuteSqlStatement.Execute(ActionContext context)\nat Microsoft.Flow.RPA.Desktop.Robin.Engine.Execution.ActionRunner.Run(IActionStatement statement, Dictionary2 inputArguments, Dictionary2 outputArguments)", "username": "Raghul_Manickam" } ]
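The parse error in the trace above comes from mongosqld, the SQL layer of the MongoDB Connector for BI, which only supports read (SELECT) queries, so UPDATE statements cannot be executed over that ODBC connection. Purely as a sketch — reusing the collection, field and values from the failing SQL statement, and assuming the _id is stored as an ObjectId — the equivalent write issued directly against MongoDB would look like:

    // Run through mongosh or a MongoDB driver, not through the ODBC/BI Connector.
    db.powerAutomatePosts.updateOne(
      { _id: ObjectId("646d9c2a9ad6287f1ccfa760") },
      { $set: { instaPostId: "123455ry4t667" } }
    )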
ODBC driver can't execute the Update Query in the Power Automate desktop
2023-05-28T07:52:24.187Z
ODBC driver can’t execute the Update Query in the Power Automate desktop
662
null
[ "swift" ]
[ { "code": "class NSManagedUserProfile: Object {\n var account: SomeModelFromExternalLibrary\n var paymentMethods: [AnotherModelFromExternalLibrary]\n}\n", "text": "Hi I’m currently considering migrating from core data to realm, at the same time, i’m modularizing the codebase. While planning the dependency layers I got stuck with a question.I will move my data models to external modules, but I wanted them to be agnostic of their persistence solution (realm/core data), so I don’t have to import Realm across all other modules/packages.I thought I could mimic Core Data and have a class that interfaces with realm like thisThis class would stay in the main app target and it would be used to get/set things from Realm, but for the rest of the other modules in the layers above I could use a dependency free UserProfile class.The problem (and please correct me if I misunderstood the documentation) is that the properties on an Realm.Object class have to be on Realm.EmbeddedObject, which forces me to adopt EmbeddedObject in those models (account, paymentMethods) and have to import Realm in all the modules they are contained For a second I thought Realm.EmbeddedObject was a protocol and tried to conform to it on an extension, but its a superclass.How can I keep my models in their modules while having my Realm classes on my main app target?", "username": "Gregorio_Gevartosky" }, { "code": "class PersonClass: Object {\n embeddedDog: EmbeddedDogClass\n}\n\nclass EmbeddedDogClass: EmbeddedObject {\n name = \"\"\n}\nperson\n dog\n name = spot\nclass PersonClass: Object {\n dog: DogClass\n}\n\nclass DogClass: Object {\n name = \"\"\n}\nperson\n reference to the Dog object\n", "text": "Welcome to the forums - and to Realm!properties on an Realm.Object class have to be on Realm.EmbeddedObjectThat is not correct. Properties on objects can be any of the directly supported types and then if you want to persist an unsupported type, it can be Type Projected to supported types. An EmbeddedObject behaves more like a var, a String or Int that’s a property of the parent class.For clarity, an Embedded Object is and object that can be re-used as a property of other objects but the object itself is embedded (becomes part of) the parent objects structure. e.g. it not a separate object in Realm and cannot be queried independently of it’s parent object.An example with some psuedo-codeIf a person and an embedded dog are instantiateed, and the dog is added to the person, it becomes part of the person, there is no separate, managed dog - so, the underlying data looks like thiswhereas if we set it up so the dog is an Object instead of EmbeddedObjectand a dog is added to the person, it is a reference to the managed DogObjectHow can I keep my models in their modules while having my Realm classes on my main app target?There are a number of ways. As mentioned above, if you want to persist data that is not a directly supported type, it can be Type Projected.However, and I may not fully understand the use case, it appears the desire is to isolate the objects to keep them agnostic of their persistence. While it can be done, going down that path will eliminate a lot of the power and flexibility Realm offers.In a nutshell, Realm objects can be observed, react to changes and are lazily loaded which makes them very memory friendly even on massive datasets. “Agnostizising” them (is that a word?) 
may remove those features - which is giving up a lot.I would suggest modeling your data using Realm objects and working within the Realm eco-system as there are a LOT of benefits and very few, if any, downsides.", "username": "Jay" }, { "code": "", "text": "@Jay thank you very much for the explanation! In a nutshell, Realm objects can be observed, react to changes and are lazily loaded which makes them very memory friendly even on massive datasets. “Agnostizising” them (is that a word?) may remove those features - which is giving up a lot.I would suggest modeling your data using Realm objects and working within the Realm eco-system as there are a LOT of benefits and very few, if any, downsides.This is interesting. I’m probably going for a separate module dedicated to Models + Realm to keep these benefits.Again, thank you!", "username": "Gregorio_Gevartosky" } ]
Realm models in modular codebase
2023-05-27T16:12:04.033Z
Realm models in modular codebase
623
null
[ "database-tools", "backup", "devops" ]
[ { "code": "", "text": "Hi,I’m not sure whether this is allowed, but I’d like to promote an Ansible role I developed for MongoDB, all versions. I hope this will ease the installation, configuration and management process of this application a lot for system engineers.It’s fair to say that I’m an experienced Ansible & system user. The main functionalities of the role are.The role can be found on my GitHub .", "username": "Kevin_Csuka" }, { "code": "", "text": "Welcome to the MongoDB Community Forums @Kevin_Csuka!I moved this topic from “Ops and Admin” to “Open Source Projects”, but sharing and discussing projects using or related to MongoDB is definitely encouraged.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "FYI: MongoDB, starting from version 6.0, for Ubuntu 22.04 can be found here: GitHub - csuka/ansible_role_mongodb_ubuntu: An Ansible role that installs, configures and manages MongoDB for Ubuntu 22.04..", "username": "Kevin_Csuka" } ]
A complete Ansible role for MongoDB
2022-02-09T13:31:48.439Z
A complete Ansible role for MongoDB
10,835
null
[ "dot-net", "connecting" ]
[ { "code": "1.Execute(IConnection connection, CancellationToken cancellationToken) at MongoDB.Driver.Core.WireProtocol.CommandWireProtocol1 protocol, ICoreSession session, CancellationToken cancellationToken) at MongoDB.Driver.Core.Servers.Server.ServerChannel.Command[TResult](ICoreSession session, ReadPreference readPreference, DatabaseNamespace databaseNamespace, BsonDocument command, IEnumerable1 postWriteAction, CommandResponseHandling responseHandling, IBsonSerializer1 transactionNumber, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.RetryableWriteOperationExecutor.Execute[TResult](IRetryableWriteOperation1.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase1.Execute(RetryableWriteContext context, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.Execute(IWriteBinding binding, CancellationToken cancellationToken) at MongoDB.Driver.OperationExecutor.ExecuteWriteOperation[TResult](IWriteBinding binding, IWriteOperation1.ExecuteWriteOperation[TResult](IClientSessionHandle session, IWriteOperation1.BulkWrite(IClientSessionHandle session, IEnumerable1.<>c__DisplayClass27_0.b__0(IClientSessionHandle session) at MongoDB.Driver.MongoCollectionImpl2 func, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1 requests, BulkWriteOptions options, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionBase1 requests, BulkWriteOptions bulkWriteOptions) at MongoDB.Driver.MongoCollectionBase2 bulkWrite) at MongoDB.Driver.MongoCollectionBasevar client = new MongoClient(config.GetConnectionString(\"MongoDB\"));\nvar mongoDbContext = client.GetDatabase(config.GetSection(\"MongoDBSetting:DBName\").Value);\nvar collection = mongoDbContext.GetCollection<T>(tableName);\n\npublic T Create(T T)\n{\n collection.InsertOne(T);\n}\n\nCreate(someValue);\n", "text": "MongoConnectionException:An exception occurred while receiving a message from the server.at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken) at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.PooledConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken) at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquiredConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken) at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol1.Execute(IConnection connection, CancellationToken cancellationToken) at MongoDB.Driver.Core.WireProtocol.CommandWireProtocol1.Execute(IConnection connection, CancellationToken cancellationToken) at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocol[TResult](IWireProtocol1 protocol, ICoreSession session, CancellationToken cancellationToken) at MongoDB.Driver.Core.Servers.Server.ServerChannel.Command[TResult](ICoreSession session, ReadPreference readPreference, DatabaseNamespace databaseNamespace, BsonDocument command, IEnumerable1 
commandPayloads, IElementNameValidator commandValidator, BsonDocument additionalOptions, Action1 postWriteAction, CommandResponseHandling responseHandling, IBsonSerializer1 resultSerializer, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.RetryableWriteCommandOperationBase.ExecuteAttempt(RetryableWriteContext context, Int32 attempt, Nullable1 transactionNumber, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.RetryableWriteOperationExecutor.Execute[TResult](IRetryableWriteOperation1 operation, RetryableWriteContext context, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase1.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase1.ExecuteBatches(RetryableWriteContext context, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase1.Execute(RetryableWriteContext context, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.Execute(IWriteBinding binding, CancellationToken cancellationToken) at MongoDB.Driver.OperationExecutor.ExecuteWriteOperation[TResult](IWriteBinding binding, IWriteOperation1 operation, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1.ExecuteWriteOperation[TResult](IClientSessionHandle session, IWriteOperation1 operation, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1.BulkWrite(IClientSessionHandle session, IEnumerable1 requests, BulkWriteOptions options, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1.<>c__DisplayClass27_0.b__0(IClientSessionHandle session) at MongoDB.Driver.MongoCollectionImpl1.UsingImplicitSession[TResult](Func2 func, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1.BulkWrite(IEnumerable1 requests, BulkWriteOptions options, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionBase1.<>c__DisplayClass68_0.b__0(IEnumerable1 requests, BulkWriteOptions bulkWriteOptions) at MongoDB.Driver.MongoCollectionBase1.InsertOne(TDocument document, InsertOneOptions options, Action2 bulkWrite) at MongoDB.Driver.MongoCollectionBase1.InsertOne(TDocument document, InsertOneOptions options, CancellationToken cancellationToken) at Cloud.DataBase.MongoDB.MongoDBRepository`1.Create(T T)", "username": "ob_n" }, { "code": "MongoDB.Driver 2.19.2 MongoDB 4.2.0\n", "text": "", "username": "ob_n" }, { "code": "", "text": "My English is poor. I hope you can help me.", "username": "ob_n" }, { "code": "", "text": "When the first request comes in and the timing starts, the problem is triggered 30 minutes later.", "username": "ob_n" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoConnectionException:An exception occurred while receiving a message from the server
2023-05-27T08:05:33.802Z
MongoConnectionException:An exception occurred while receiving a message from the server
875