image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"server",
"release-candidate"
]
| [
{
"code": "",
"text": "MongoDB 4.4.17-rc0 is out and ready for testing. This is a release candidate containing only fixes since 4.4.16. The next stable release 4.4.17 will be a recommended upgrade for all 4.4 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB 4.4.17-rc0 is released | 2022-08-31T18:05:49.409Z | MongoDB 4.4.17-rc0 is released | 2,195 |
null | [
"database-tools"
]
| [
{
"code": "",
"text": "We have a WIX bundle that installs both MongoDB local database MSI and also the MongoDB Tools MSI along with another custom application that uses both.When the installer starts to execute the MongoDB Tools MSI installer it invokes a message box indicating the user doesn’t have sufficient privileges to access the folder to install in. I have tried multiple folders in the system ‘C:\\Program Files’ folder with no luck. Running the WIX bundle installer using “Run as Admin” works with no problems.Why does the MongoDB database MSI not have an issue, but the Mongo Tools MSI does? I assume that the MongoDB Tools MSI package was created using:InstallPrivileges=\"elevated\"I have looked at both MSI installers using ORCA.exe, but it is not obvious where the issue is. I can always use a brute force to install the Mongo Tools files as components, but was hoping to simply execute the Tools MSI package.Any ideas would be appreciated.Thanks!",
"username": "Bill_Leibold"
},
{
"code": "",
"text": "I finally solved this issue for those that may run into this in the future.The MongoDB Tools MSI does not set explicitly set the INSTALLSCOPE attribute, which apparently it is defaulting to “PerUser”, when it needs to be “PerMachine” to prevent the need for UAC.Since we are not the author of the MSI, there is an attribute in MSIPackage element named ForcePerMachine, which needs to be set to the value of ‘Yes’.",
"username": "Bill_Leibold"
}
]
| Using Wix to install MongoDB Tools MSI package | 2022-08-30T02:18:51.543Z | Using Wix to install MongoDB Tools MSI package | 1,573 |
null | [
"server",
"release-candidate"
]
| [
{
"code": "",
"text": "MongoDB 5.0.12-rc0 is out and ready for testing. This is a release candidate containing only fixes since 5.0.11. The next stable release 5.0.12 will be a recommended upgrade for all 5.0 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB 5.0.12-rc0 is released | 2022-08-31T17:52:06.040Z | MongoDB 5.0.12-rc0 is released | 2,185 |
null | [
"python",
"production"
]
| [
{
"code": "",
"text": "We are pleased to announce the 0.5.1 release of PyMongoArrow - a PyMongo extension containing tools for loading MongoDB query result sets as Apache Arrow tables, Pandas and NumPy arrays.This is a bug fix release that addresses a bug in the schema auto-detection logic and adds more documentation around that feature.See the changelog for a high level summary of what’s new and improved or see the 0.5.1 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!",
"username": "Steve_Silvester"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| PyMongoArrow 0.5.1 Released | 2022-08-31T17:45:50.277Z | PyMongoArrow 0.5.1 Released | 1,484 |
null | [
"server",
"release-candidate"
]
| [
{
"code": "",
"text": "MongoDB 6.0.2-rc0 is out and ready for testing. This is a release candidate containing only fixes since 6.0.1. The next stable release 6.0.2 will be a recommended upgrade for all 6.0 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB 6.0.2-rc0 is released | 2022-08-31T17:43:42.788Z | MongoDB 6.0.2-rc0 is released | 2,161 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "Looking through the list here: https://www.mongodb.com/docs/atlas/atlas-search/operators-and-collectors/\nI don’t see any operators that could be used to query if a field is “not null” (or “not equal” to null). Am I mistaken?",
"username": "Francesca_Ricci-Tam"
},
{
"code": "null$nedb.collection.aggregate([\n {\n \"$match\": {\n \"field\": {\n \"$ne\": null\n }\n }\n }\n])\n",
"text": "Hi,Since null is a valid value, you can leverage operator $ne:Working example",
"username": "NeNaD"
},
{
"code": "$search$search$match$ne$match$search",
"text": "Hi @Francesca_Ricci-Tam,This is not directly available within the $search stage to my knowledge. There’s also the associated feedback engine post for your request which you can vote for in the meantime.You could possibly perform the $search stage first as per normal followed by a $match stage using the $ne operator (As Nenad has mentioned) if it suits your use case although I do understand that this would not make use of any indexes (for the $match stage that’s after the $search stage).Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks @NeNaD and @Jason_Tran – indeed, using a $match right after the $search stage is exactly what I ended up doing in the end. : )\nI realized that I was approaching it the wrong way – I shouldn’t be trying to filter on non-text parameters inside the $search stage (which would be primarily for text-based searching).",
"username": "Francesca_Ricci-Tam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Aggregation operator for "is not null" | 2022-08-30T18:19:47.402Z | Aggregation operator for “is not null” | 7,543 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "[{\nusers: [\n {\n name: 'frank',\n age: 15,\n email: '[email protected]'\n },\n{\n name: 'daniel',\n age: 18,\n email: '[email protected]'\n },\n{\n name: 'george',\n age: 18,\n email: '[email protected]'\n }\n]\n},\n{\nusers: [\n {\n name: 'dan',\n age: 19,\n email: '[email protected]'\n },\n{\n name: 'steve',\n age: 18,\n email: '[email protected]'\n },\n{\n name: 'chris',\n age: 21,\n email: '[email protected]'\n }\n]\n}\n]\n{$match:{ 'entries.age': { age:{ $gte: 18}} }}\n[{\nusers: [\n {\n name: 'frank',\n age: 15,\n email: '[email protected]'\n },\n{\n name: 'daniel',\n age: 18,\n email: '[email protected]'\n },\n{\n name: 'george',\n age: 18,\n email: '[email protected]'\n }\n]\n},\n{\nusers: [\n {\n name: 'dan',\n age: 19,\n email: '[email protected]'\n },\n{\n name: 'steve',\n age: 18,\n email: '[email protected]'\n },\n{\n name: 'chris',\n age: 21,\n email: '[email protected]'\n }\n]\n}\n]\n[{\n\nusers: [\n {\n name: 'dan',\n age: 19,\n email: '[email protected]'\n },\n{\n name: 'steve',\n age: 18,\n email: '[email protected]'\n },\n{\n name: 'chris',\n age: 21,\n email: '[email protected]'\n }\n]\n}\n]\n",
"text": "Hey Guys, I have an array of objects in my schemaI want to query all the documents with all it’s users being 18 or above. The methods I found likeIt works, but it matches all documents with atleast one user with age 18+.Instead I want the query to return only the documents where all the users are 18+",
"username": "homesite_area"
},
{
"code": "$filter$size$eqdb.collection.aggregate([\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n {\n \"$size\": {\n \"$filter\": {\n \"input\": \"$users\",\n \"cond\": {\n \"$lt\": [\n \"$$this.age\",\n 18\n ]\n }\n }\n }\n },\n 0\n ]\n }\n }\n }\n])\n",
"text": "Hi,You can do it like this:Working example",
"username": "NeNaD"
},
{
"code": "",
"text": "Thank you so much for the solution. I do have 2 questions1-) Running the working example is returning the first object (where a user is below 18) instead I needed second objet(where all users are above 18)2-) How efficient is this query? My collection has over 200k+ documents with each users field having 100+ users.",
"username": "homesite_area"
},
{
"code": "[{\n $match: {\n users: {\n $all: [\n {\n $elemMatch: {\n age: {\n $gt: 18\n }\n }\n }\n ]\n }\n }\n}]\n",
"text": "Hi @homesite_area ,You need to use the $all operator with $elemMatch:Example:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks Pavel, Your solution seemed to work but on my actual application it did not. On closer look at the query you given age: { $gt: 18 } while I needed $gte. If you change that and try on the data , it would again return both the objects.The reason $gt worked was all the users in first object were less than or equal to 18 so your query excluded that object. But as soon as $gte is used , it finds atleast one user in object one and returns it too.",
"username": "homesite_area"
},
{
"code": "db.collection.aggregate([\n {\n $match: {\n users: {\n $not: {\n $all: [\n {\n $elemMatch: {\n age: {\n $lt: 18\n }\n }\n }\n ]\n }\n }\n }\n }\n])\n",
"text": "Hmm I see,Try the following:",
"username": "Pavel_Duchovny"
},
{
"code": "$gt$equsers.age",
"text": "Hi @homesite_area,Sorry, just instead of $gt use $eq. I updated my answer.When it comes to performance, you should create an index on users.age property.P.S. @Pavel_Duchovny solution is also great, so you should check his solution too.",
"username": "NeNaD"
},
{
"code": "",
"text": "It worked! @Pavel_Duchovny Thank you so much.",
"username": "homesite_area"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Query data inside an array of objects where all the objects should match the given value | 2022-08-26T19:56:08.053Z | Query data inside an array of objects where all the objects should match the given value | 10,004 |
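For readers comparing the two answers above, here is a hedged sketch of the "every user is 18+" match expressed with $expr and $allElementsTrue; under the thread's sample documents it should behave like the double-negation $not/$elemMatch form. Collection and field names follow the examples in the thread.

```javascript
// Match documents where EVERY element of the "users" array has age >= 18.
// Equivalent in spirit to: "no user with age < 18".
// Note: an empty "users" array also matches, since $allElementsTrue([]) is true.
db.collection.aggregate([
  {
    $match: {
      $expr: {
        $allElementsTrue: [
          {
            $map: {
              input: "$users",
              as: "u",
              in: { $gte: ["$$u.age", 18] }
            }
          }
        ]
      }
    }
  }
])
```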
null | [
"aggregation",
"dot-net",
"xamarin"
]
| [
{
"code": "string connString = \"mongodb+srv://\" + user + \":\" + secret + \"@\" + server + \"/\"+ project +\"?retryWrites=true&w=majority\";\n\nvar settings = MongoClientSettings.FromConnectionString(connString);\nsettings.ServerApi = new ServerApi(ServerApiVersion.V1);\nvar mongoClient = new MongoClient(settings);\nstring connString = \"mongodb+srv://\" + user + \":\" + secret + \"@\" + server + \"/\"+ project +\"?retryWrites=true&w=majority\";\nvar mongoClient = new MongoClient(connString);\n",
"text": "Hi, I am finding an error whenever I try to instantiate a MongoClient or a MongoClientSettings object (from MongoDB.Driver) when passing the connection string to their constructor.The whole error message that I can get a hold off says:{System.TypeInitializationException: The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception. —> System.AggregateException: Error resolving name servers (Object reference not set to an instance of an object.) (Could not find file “/etc/resolv.conf”) —> System.NullReferenceException: Object reference not set to an instance of an object. at DnsClient.NameServer.QueryNetworkInterfaces () [0x0004c] in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x0005e] in <519bb9af32234e5dba6bd0b076a88151>:0 — End of inner exception stack trace — at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x00192] in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor (DnsClient.LookupClientOptions options, DnsClient.DnsMessageHandler udpHandler, DnsClient.DnsMessageHandler tcpHandler) [0x000bc] in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor (DnsClient.LookupClientOptions options) [0x00000] in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor () [0x00006] in <519bb9af32234e5dba6bd0b076a88151>:0 at MongoDB.Driver.Core.Misc.DnsClientWrapper…ctor () [0x00006] in :0 at MongoDB.Driver.Core.Misc.DnsClientWrapper…cctor () [0x00000] in :0 — End of inner exception stack trace — at MongoDB.Driver.Core.Configuration.ConnectionString…ctor (System.String connectionString) [0x00000] in :0 at MongoDB.Driver.MongoUrlBuilder.Parse (System.String url) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 at MongoDB.Driver.MongoUrlBuilder…ctor (System.String url) [0x00006] in <27273b0202ea4c34867b683ed7b21818>:0 at MongoDB.Driver.MongoUrl…ctor (System.String url) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 at MongoDB.Driver.MongoClientSettings.FromConnectionString (System.String connectionString) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 at --REDACTED FILENAME–The lines of code that cause this error are:It will break at MongoClientSettings.FromConnectionString(connString);\nOr this will also break:This issue is only happening when I run this code in a Xamarin project. I have the exact same code in a .NET 6 project and everything works fine there. The Xamarin project on the other hand targets .NET Standard 2.0 (also tried 2.1 with same issue) and I’ve been debugging it in an Android 12 device. The driver versions I’ve tested are 2.4.4 and 2.18.0. IDE is Visual Studio Version 17.2.0I appreciate any help I could get here. Thank you.",
"username": "Santiago_Suarez"
},
{
"code": "",
"text": "Hi @Santiago_Suarez, as this appears to be environmental (Xamarin/Android) I’d recommend opening a new ticket at https://jira.mongodb.org/projects/CSHARP/ so the Driver team can investigate this as a potential bug.",
"username": "alexbevi"
},
{
"code": "{System.TypeInitializationException: The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception. —> \n System.AggregateException: Error resolving name servers (Object reference not set to an instance of an object.) (Could not find file “/etc/resolv.conf”) —> \n System.NullReferenceException: Object reference not set to an instance of an object. \n at DnsClient.NameServer.QueryNetworkInterfaces () [0x0004c] \n in <519bb9af32234e5dba6bd0b076a88151>:0 \n at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x0005e] \n in <519bb9af32234e5dba6bd0b076a88151>:0 \n— \nEnd of inner exception stack trace \n— \n at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x00192] \n in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor (DnsClient.LookupClientOptions options, DnsClient.DnsMessageHandler udpHandler, DnsClient.DnsMessageHandler tcpHandler) [0x000bc] \n in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor (DnsClient.LookupClientOptions options) [0x00000] \n in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor () [0x00006] \n in <519bb9af32234e5dba6bd0b076a88151>:0 at MongoDB.Driver.Core.Misc.DnsClientWrapper…ctor () [0x00006] in :0 \n at MongoDB.Driver.Core.Misc.DnsClientWrapper…cctor () [0x00000] in :0 — End of inner exception stack trace — at MongoDB.Driver.Core.Configuration.ConnectionString…ctor (System.String connectionString) [0x00000] in :0 \n at MongoDB.Driver.MongoUrlBuilder.Parse (System.String url) [0x00000] \n in <27273b0202ea4c34867b683ed7b21818>:0 at MongoDB.Driver.MongoUrlBuilder…ctor (System.String url) [0x00006] in <27273b0202ea4c34867b683ed7b21818>:0 \n at MongoDB.Driver.MongoUrl…ctor (System.String url) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 \n at MongoDB.Driver.MongoClientSettings.FromConnectionString (System.String connectionString) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0\n at --REDACTED FILENAME–\n",
"text": "You may want to format errors as code blocks as it will improve readability.Error resolving name servers … (Could not find file “/etc/resolv.conf”)I am not expert over android and just wandering around. could that be the problem? and if may I ask: why don’t you use string extrapolation to form your connection url?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thank you, I’ve opened this ticket and hopefully someone will take up on it.\nhttps://jira.mongodb.org/browse/CSHARP-4436",
"username": "Santiago_Suarez"
},
{
"code": "",
"text": "Thank you for the suggestion, I didn’t know how it would end up looking. Shame I can’t edit the top entry to make it more readable.Regarding the “/etc/resolv.conf” I have no idea since that error comes from some MongoDriver component I don’t have access to, nor can I control. I don’t create nor need that file myself, but it is most likely at least very close to the issue underneath.As for the string interpolation, that was just a fast and dirty example, but I’m pretty sure the connString variable works since the same code works in dotnet 6.",
"username": "Santiago_Suarez"
},
{
"code": "mongodb://mongodb+srv://",
"text": "Hi, @Santiago_Suarez,The .NET/C# Driver uses DnsClient.NET, a third-party DNS library, for resolving SRV and TXT records. Unfortunately it appears that increased security restrictions around DNS introduced in Android Oreo prevent DnsClient.NET from working correctly. See issue #17 in DnsClient.NET’s issue tracker. Given that the issue is closed, it doesn’t appear that a fix is forthcoming.You can work around this issue by using the standard connection string format (AKA mongodb://) rather than the DNS seedlist format (AKA mongodb+srv://). A and CNAME record lookups use .NET’s built-in capabilities and don’t require any third-party libraries. (Unfortunately these built-in capabilities do not include SRV and TXT record lookups, which is why we depend on DnsClient.NET for these record types.)Please let us know if this workaround is successful for you.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "var client = new MongoClient(\"mongodb://user:[email protected]:27017\");",
"text": "Hi @James_Kovacs, thank you for your reply. I have tried to no avail the suggested workaround and I keep getting the same exception when running this line:var client = new MongoClient(\"mongodb://user:[email protected]:27017\");I apologize if I’m misunderstanding something from the provided documentation.",
"username": "Santiago_Suarez"
},
{
"code": "mongodb://name:password@machine1:port1,machine2:port2,machine3:port3\nmongo+srv://main_set_address",
"text": "127.0.0.1:27017this will try to connect tolocalhost, in this case to your Android’s own network. it won’t work. you need to give addresses to your mongodb servers. something like this:you can say, DNS resolution is basically resolves mongo+srv://main_set_address to this format to find each member’s address.check this for more info on connection strings: Connection String URI Format — MongoDB Manual",
"username": "Yilmaz_Durmaz"
},
{
"code": "mongodb://",
"text": "The fastest way to get working connaction string in mongodb:// format:",
"username": "Yilmaz_Durmaz"
},
{
"code": "mongo+srv://main_set_address+srv",
"text": "Yes, I know the local address won’t work, I also tried the actual address. But even if I use the local address it shouldn’t break the execution with the System.TypeInitializationException, just throw a connection error.I cannot use mongo+srv://main_set_address because that would not be a standard connection string format as mentioned in the workaround from @ James_Kovacs. As I understood from his reply it should not include the +srv but I’m not sure of what else it entails. Also, just removing it from the one provided from the cluster’s Atlas page does not work in normal .dotnet client where it otherwise does work.",
"username": "Santiago_Suarez"
},
{
"code": "",
"text": "please read my answer, just one above your last response to get the “standard” connection string you need. that should get your app up and running. report back if your problem continues even with that.",
"username": "Yilmaz_Durmaz"
},
{
"code": "string standardString = $\"mongodb://{user}:{secret}@{shard0},{shard1},{shard2}/?ssl=true&replicaSet={replicaShard}&authSource=admin&retryWrites=true&w=majority\";\nvar settings = MongoClientSettings.FromConnectionString(standardString);\n_client = new MongoClient(settings);\n",
"text": "Sorry, I skipped the select version when reading your reply. Thank you for your help.So the code ends up looking like this:This still works fine in dotnet 6, but still throws System.TypeInitializationException in Xamarin.",
"username": "Santiago_Suarez"
},
{
"code": "",
"text": "the error you are getting now might have something different. can you share what comes out now? try formatting to look nicers as I did before ",
"username": "Yilmaz_Durmaz"
},
{
"code": "{System.TypeInitializationException: The type initializer for 'MongoDB.Driver.Core.Misc.DnsClientWrapper' threw an exception. ---> System.AggregateException: Error resolving name servers (Object reference not set to an instance of an object.) (Could not find file \"/etc/resolv.conf\") ---> System.NullReferenceException: Object reference not set to an instance of an object.\n at DnsClient.NameServer.QueryNetworkInterfaces () [0x0004c] in <519bb9af32234e5dba6bd0b076a88151>:0 \n at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x0005e] in <519bb9af32234e5dba6bd0b076a88151>:0 \n --- End of inner exception stack trace ---\n at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x00192] in <519bb9af32234e5dba6bd0b076a88151>:0 \n at DnsClient.LookupClient..ctor (DnsClient.LookupClientOptions options, DnsClient.DnsMessageHandler udpHandler, DnsClient.DnsMessageHandler tcpHandler) [0x000bc] in <519bb9af32234e5dba6bd0b076a88151>:0 \n at DnsClient.LookupClient..ctor (DnsClient.LookupClientOptions options) [0x00000] in <519bb9af32234e5dba6bd0b076a88151>:0 \n at DnsClient.LookupClient..ctor () [0x00006] in <519bb9af32234e5dba6bd0b076a88151>:0 \n at MongoDB.Driver.Core.Misc.DnsClientWrapper..ctor () [0x00006] in <b4b75168888d44e1ac6b514244ab7a7d>:0 \n at MongoDB.Driver.Core.Misc.DnsClientWrapper..cctor () [0x00000] in <b4b75168888d44e1ac6b514244ab7a7d>:0 \n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Configuration.ConnectionString..ctor (System.String connectionString) [0x00000] in <b4b75168888d44e1ac6b514244ab7a7d>:0 \n at MongoDB.Driver.MongoUrlBuilder.Parse (System.String url) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 \n at MongoDB.Driver.MongoUrlBuilder..ctor (System.String url) [0x00006] in <27273b0202ea4c34867b683ed7b21818>:0 \n at MongoDB.Driver.MongoUrl..ctor (System.String url) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 \n at MongoDB.Driver.MongoClientSettings.FromConnectionString (System.String connectionString) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 \n at --FILENAME-- }\n",
"text": "Sure,It seems to me that it is still using a dns lookup, despite the standard connection string format.",
"username": "Santiago_Suarez"
},
{
"code": "Could not find file \"/etc/resolv.conf\"",
"text": "Could not find file \"/etc/resolv.conf\"this line still boils hot. parts of libraries tries to fetch from that file and Android does not have that. it is not something specific to MongoDB.Driver either. I have met an issue on github related to MAUI. anyway, I get to this topic from a year and a half ago: c# - Last version of MongoDB.Driver not working for Android 8+: Could not find file “/etc/resolv.conf” - Stack Overflow.Interpreting that post, they resolved with an older driver version at the time, I suggest you try lowering your driver version, a major version at a time, and see if you can find a working one within your project’s dotnet version. and also from @James_Kovacs answer above, depending on another library, you should not expect a resolution on newer versions anytime soon (maybe never, but I feel optimistic)",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I’ve rolled back the driver version to 2.4.4 and it seems to work using the standard format. By 2.5.0 it doesn’t break but doesn’t work (timeouts). I’ll keep an eye on any further releases but I am a bit inclined to rework this into using the Atlas Data API. Still not sure if I’ll change it I but can say at least that the 2.4.4 version of the driver does work with the standard format and Android 12 and Xamarin forms 5",
"username": "Santiago_Suarez"
},
{
"code": "",
"text": "Hey did you ever find another fix to this? I cannot roll back my drivers but am running into the same issue now. Did you just wind up using the Atlas Data API instead of the Drivers?",
"username": "George_Ely"
},
{
"code": "",
"text": "Hi, I’d recommend the Data API if rolling back is not an option… I moved to working on some other features in the meantime but I do want to eventually move to using http calls to the Data API. I find it more platform agnostic in case I ever need to move away from Xamarin for some reason. For now my connection is still the Mongo Driver in 2.4.4.",
"username": "Santiago_Suarez"
},
{
"code": "",
"text": "Okay gotcha, I appreciate the response. I think I have gotten the Realm SDK to work in Xamarin w/ Device Syncing which was my end goal so I’ll just ignore the .NET Drivers for now and possibly look at the Data API later on if need be. Thanks",
"username": "George_Ely"
}
]
| Error System.TypeInitializationException: 'The type initializer for 'MongoDB.Driver.Core.Misc.DnsClientWrapper' threw an exception.' | 2022-08-31T15:45:18.745Z | Error System.TypeInitializationException: ‘The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception.’ | 3,519 |
null | [
"queries",
"java"
]
| [
{
"code": "explainexplainDBQuery.shellBatchSize = 10000",
"text": "Hi !\nI was comparing execution time between indexed and non-indexed values on a local database with a Java program. To get the query execution time I used the explain function and I also tried to get the execution time directly in my program to compare the values. The execution times from the explain function are much lower than the value returned by the Mongo shell for a query (btw: when executing queries from the Mongo shell and the Java program, I add the DBQuery.shellBatchSize = 10000 command to endure that all documents are returned). Is there another proper way to get the execution time of a query while using the Mongo functions?\nThanks for your help!",
"username": "Steroux"
},
{
"code": "executionStats",
"text": "Hi @SterouxDid you set the explain verbosity to executionStats this is needed to run the optimizer and the run the winning plan to completion.",
"username": "chris"
},
{
"code": "executionStatsplanCache",
"text": "Yes, the explain verbosity is set to executionStats, I also clear the planCache before each query execution in order to get full execution times.",
"username": "Steroux"
},
{
"code": "executionStats",
"text": "With executionStats the entire query will run to completion on the server but the results ar not transmitted to the client. This could account for the discrepancy you are seeing.Can you quantify the difference between the explain and actual execution?",
"username": "chris"
},
{
"code": "",
"text": "Yes, the execution times obtained in milliseconds are listed in the following tab. On the left side: the explain results and on the right side: the actual execution observed.\nimage201×792 48.3 KB\n",
"username": "Steroux"
},
{
"code": "long start = System.currentTimeMillis();\nAggregateIterable aggregateIterable = collection.aggregate(pipeline).batchSize(1000000).maxTime(Const.MAX_EXECUTION_TIME, TimeUnit.MILLISECONDS);\nfor (Object o : aggregateIterable) {\n //get execution time using java\n executionTimeMillis = (int) (System.currentTimeMillis() - start);\n break;\n}\n",
"text": "Yes, the execution times obtained in milliseconds are listed in the following tab. On the left side: the explain results and on the right side: the actual execution observed.To be more precise: the right side of the previous tab is obtained with the java program coded this way.Here is the tab including the shell values :",
"username": "Steroux"
},
{
"code": "",
"text": "Hi!\nI am still looking for a solution on this topic. If anyone has an idea feel free to share! \nThanks for your help!",
"username": "Steroux"
}
]
| Proper way to get queries execution time | 2022-08-23T10:39:06.366Z | Proper way to get queries execution time | 8,060 |
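A small mongosh sketch of the two measurements being compared in this thread: the server-side time reported by explain("executionStats") versus a rough client-side wall-clock timing that includes fetching all batches. The collection name and query are placeholders, not taken from the original post.

```javascript
// Server-side measurement: runs the winning plan to completion on the server,
// but does not ship the result documents to the client.
const stats = db.coll.find({ indexedField: { $gte: 0 } })
  .explain("executionStats").executionStats;
print("server executionTimeMillis:", stats.executionTimeMillis);

// Client-side measurement: includes network transfer and batch iteration,
// so it is normally larger than the explain figure.
const start = Date.now();
const docs = db.coll.find({ indexedField: { $gte: 0 } }).toArray();
print("client wall-clock ms:", Date.now() - start, "docs:", docs.length);
```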
null | [
"queries"
]
| [
{
"code": "",
"text": "i am using Studio 3T.\nWhen i try to run\ndb.system.profile.find({“millis”:{$gt:10}}) .sort({millis:-1})getting below error:\nFailed to parse SQL query at cursorSyntax error near ‘db’db.system.profile.find({“millis”:{$gt:10}}) .sort({millis:-1}) (on line 1, character 0)Stacktrace:\n|/ t3.utils.a.a: Syntax error near ‘db’\n|…\n|… db.system.profile.find({“millis”:{$gt:10}}) .sort({millis:-1}) (on line 1, character 0)\n|___/ org.antlr.v4.runtime.InputMismatchException",
"username": "Vijay_Kumar8"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
]
| Syntax error near 'db' db.system.profile.find | 2022-08-31T06:10:07.020Z | Syntax error near ‘db’ db.system.profile.find | 1,595 |
null | [
"production",
"field-encryption",
"c-driver"
]
| [
{
"code": "",
"text": "Announcing 1.23.0 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.Improvements:Features:Improvements:Bug fixes:Thanks to everyone who contributed to this release.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB C Driver 1.23.0 Released | 2022-08-31T14:01:45.751Z | MongoDB C Driver 1.23.0 Released | 2,075 |
null | [
"node-js",
"data-modeling"
]
| [
{
"code": "",
"text": "Hello, I am working in a multinenancy approach with 1 MongoDB database per client (up to 1000 potential clients). We have created the scripts to create each new customer environment with all the collections needed in the datamodel.The problem comes when trying to create a procedure to add a new collection with scheme to all the clients (to make possible to expand the software functionality with new features needed additional data collections).NodeJS is not managing properly establishing connections and waiting from MongoDB the creation of new collection to close the connection and open the new ones. The Promise mechanism is not working properly.Anyone with experience in this type of massive db datamodel updates could provide a clue.thanks in advance.",
"username": "juan_mateu"
},
{
"code": "",
"text": "Be aware of Massive Number of Collections | MongoDBNodeJS is not managing properly establishing connections and waiting from MongoDB the creation of new collection to close the connection and open the new ones. The Promise mechanism is not working properly.Most likely is the issue is from your code. Please share.You are not supposed to open and close the connections frequently.",
"username": "steevej"
}
]
| Adding new collection to each DB in a multitenancy deployment? Issues | 2022-08-31T13:30:02.969Z | Adding new collection to each DB in a multitenancy deployment? Issues | 1,094 |
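Since the Node.js promise handling was the sticking point in this thread, here is a hedged sketch of adding a collection to every client database sequentially over a single, reused connection, awaiting each creation before moving on. The database name prefix, validator, and connection string are assumptions for illustration only.

```javascript
// Node.js sketch using the official driver; one client, reused for all databases.
const { MongoClient } = require("mongodb");

async function addCollectionToAllTenants(uri, collectionName) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const { databases } = await client.db().admin().listDatabases();
    // Assumption: tenant databases are prefixed with "client_".
    const tenants = databases.filter(d => d.name.startsWith("client_"));
    for (const { name } of tenants) {
      // Awaiting inside a for...of loop keeps creations strictly sequential.
      await client.db(name).createCollection(collectionName, {
        validator: { $jsonSchema: { bsonType: "object" } } // placeholder schema
      });
      console.log(`created ${collectionName} in ${name}`);
    }
  } finally {
    await client.close();
  }
}

addCollectionToAllTenants("mongodb://localhost:27017", "newFeatureData")
  .catch(console.error);
```

The design choice here is deliberate: a single client with sequential awaits avoids the connection churn mentioned in the reply above.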
null | []
| [
{
"code": "",
"text": "I am trying to understand from the documentation the lifecycle management of flexible sync subscriptions.I was just implementing a view and was getting all data I had read access to, even though I had added a ‘$0.owner_id = user.id’ parameter to the initialSub block for it. I wanted to list only this particular user’s data in this view.Eventually, I deleted the app and data and reinstalled the app, and now I have the expected dataset.I understand that initial subscriptions are relatively static and need to be updated or rerunOnOpen, but I think I misunderstood the lifecycle of subscriptions entirely. I thought rerunOnOpen was necessary if the sync session is still active and the app is restarted. I assumed that if the sync session was not active (user has been offline for days), then it would need to establish a new subscription (session).I’m now thinking that for the purposes of subscription lifecycle, it is considered still existing if the sub’s dataset still exists in the realm (which it would pretty indefinitely if the user hasn’t used the app in a long time). The lifecycle has nothing to do with actual session activity?",
"username": "Joseph_Bittman"
},
{
"code": "trueinitialSubscriptions",
"text": "Hey @Joseph_Bittman - assuming you’re talking about the Swift SDK, I believe it works like this:The subscription is on a given realm. The subscription itself persists until you explicitly unsubscribe, or use updateQuery to update it. If you have a subscription in an initialSub block, but you do not set rerunOnOpen to true, the initialSub essentially gets ignored if a realm matching the configuration already exists on the device. That block checks for an existing realm, and if one exists & it’s not re-running on open, it won’t run. The use case for initialSubscriptions is to bootstrap a realm with data that must exist when an app starts, but you don’t expect the subscription to change necessarily. Think something like loading a public catalog of data as in an ecommerce app. I think Swift is the only SDK that has initialSubscriptions that work like this; eventually this feature may come to the other SDKs.If you need to manage subscriptions more dynamically, and you don’t need an initial data set when your app opens, you might be better served using the other subscription APIs to manage your subscriptions.The objects that sync based on the subscription are subject to a user’s read and write permissions. Those permissions are evaluated at the beginning of a session. If a user’s permissions change between sessions, that should change the objects that sync to that user’s realm. So that’s how sessions come into play, but otherwise they don’t affect the subscription.Hope this helps!",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "@Dachary_Carey That is super helpful. Thank you!Do you know how the following situation is handled?\nGiven swift view1 has an ObservedResults that subscribes to all X objects\nGiven swift view2 has an ObservedResults that subscribes to some filtered set of X objects\nWhen a user navigates from view1 to 2 and back\nThen does two subscriptions exist or a single subscription that keeps getting updated on the filter?If only one subscription, and its filter keeps being updated, then is it wiping and pulling down the full dataset each time the user navigates back and forth?Does the same answer hold if I add into the mix an initialSubscription that pulls down the entire dataset when the app loads the first time (like a product catalog).? It would be nice to not accidentially re-create the subscription by accident and have needless delay in data being pulled. I have some pre-existing views that I’m trying to understand if I need to re-work how they query data…Thank you so much!",
"username": "Joseph_Bittman"
},
{
"code": "",
"text": "Ah! Are you using the awaitable ObservedResults Flexible Sync API? If yes, those are two separate subscriptions. I was just talking with the engineers and product owner about a similar case a few days ago.So here’s my understanding, and @Diana_Maria_Perez_Af or @Jason_Flax can feel free to add details if I’m missing anything important:Say View 1 subscribes to all Dog objects. And View 2 subscribes to Dog objects where age < 2. That’s technically two subscriptions, but the data that is in the realm shouldn’t change, because View 1 has a subscription to all Dog objects.What I am uncertain about in this particular case is what happens when an object goes “out of view” of a subset subscription query. In this example, if a Dog’s age changes and now he’s 3, I would expect him to not be visible in View 2, but would expect him to still exist in View 1 because that’s a superset of the data. So the object should still be in the realm since it falls within Subscription 1. Subscription 2 in this case should probably not be a subscription at all, but the plain old ObservedResults to just query from the superset established in View 1.I asked about a similar case where ObservedResults subscriptions did not overlap. So in that case, View 1 might subscribe to all Dog objects where Age <= 2, and View 2 might subscribe to all Dog objects where Age > 2. So when an object falls out of the results set of View 1, it is technically unsynced because it’s no longer part of the subscription set - and subscription 2 syncs it again, because it becomes part of that subscription set. In that case, the object would be removed from the realm but immediately re-added because it’s part of the other subscription.If you have an initialSubscription that pulls down the entire data set that your app needs, and your subsequent views are just using data that is contained in the initialSubscription, the subsequent views should probably not be subscriptions at all, but should just be ObservedResults queries to get the data that was pulled down in the initialSubscription.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Lifecycle explanation of flexible sync subscriptionn | 2022-08-31T00:55:03.468Z | Lifecycle explanation of flexible sync subscriptionn | 2,082 |
null | [
"queries"
]
| [
{
"code": " query = JSON.stringify(payload.query); //prints out fine\n console.log(payload.query.type); //prints out undefined [object Object]\n \n if(payload.query.type){\n //prints \"message\":\"Cannot access member 'type' of undefined\",\"name\":\"TypeError\"}\n }\n \n const collection = context.services.get(\"mongodb-atlas\").db(\"portfolio\").collection(\"blogdatas\");\n let postList = await collection.find().toArray();\n\n postList.forEach(blogdata => {\n blogdata._id = blogdata._id.toString();\n })\n\n let responseData = {\n blogdatas: postList,\n query:query,\n totalNum: postList.length.toString()\n }\n",
"text": "Hello I am quite new to this tool. I am trying to intake a query to perform a search in the atlas app, but am not sure why my code won’t understand the queries given. I am quite new to programming so any help with details would be greatly appreciated// This function is the endpoint’s request handler.\nexports = async function(payload, response) {return responseData;\n};",
"username": "ben_lee"
},
{
"code": "",
"text": "console.log(payload.query); // returns prints out undefined [object Object]",
"username": "ben_lee"
},
{
"code": "",
"text": "I’m pretty new too, but I think I can answer this…Your code assumes that your “payload” parameter document has a key “query”, which contains a document that has a key “type”.If you invoke the function from the console, this should work:exports({“query”: {“type”: “some value for type”}})",
"username": "Phyllip_Hall"
}
]
| Understanding query in app functions | 2022-07-29T21:25:44.138Z | Understanding query in app functions | 1,534 |
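Following up on the answer above, a hedged sketch of how an HTTPS endpoint function typically reads query-string parameters: for a request like ?type=recent, they arrive on payload.query. The database and collection names come from the original post; the filter field is a placeholder assumption.

```javascript
// Atlas App Services HTTPS endpoint handler (sketch).
exports = async function (payload, response) {
  // For a request such as GET .../endpoint?type=recent,
  // payload.query is an object like { type: "recent" }.
  const type = payload.query && payload.query.type;

  const collection = context.services
    .get("mongodb-atlas")
    .db("portfolio")
    .collection("blogdatas");

  // Placeholder filter: only apply it when a type was actually supplied.
  const filter = type ? { type: type } : {};
  const postList = await collection.find(filter).toArray();

  postList.forEach(doc => { doc._id = doc._id.toString(); });

  return { blogdatas: postList, totalNum: postList.length.toString() };
};
```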
null | []
| [
{
"code": "aws4",
"text": "i am trying to do aws I AM ROLE to aauthenticate mongo db from aws lamda.but i am getting this error\nMongoMissingDependencyError: Optional module aws4 not found. Please install it to enable AWS authentication at Object. i am not able to find a solution to this",
"username": "Nikhil_Biju"
},
{
"code": "",
"text": "AWS authentication is optional in the driver; you need to install aws4: https://www.mongodb.com/docs/drivers/node/current/fundamentals/authentication/mechanisms/#mongodb-aws",
"username": "Fuat_Ertunc"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Not able to authenticate using AWS IAM Role from Lambda | 2022-08-30T09:31:41.894Z | Not able to authenticate using AWS IAM Role from Lambda | 2,019 |
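A hedged sketch of the fix described in this thread: install the optional aws4 package, then connect with the MONGODB-AWS mechanism. The hostname is a placeholder; in Lambda the IAM credentials are normally picked up from the environment variables rather than placed in the URI.

```javascript
// Prerequisite (in the Lambda deployment package): npm install aws4
const { MongoClient } = require("mongodb");

// With authMechanism=MONGODB-AWS and no username/password in the URI,
// the driver reads the IAM credentials from the Lambda environment
// (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN).
const uri =
  "mongodb+srv://cluster0.example.mongodb.net/?authSource=%24external&authMechanism=MONGODB-AWS";

const client = new MongoClient(uri);

exports.handler = async () => {
  await client.connect();
  return client.db("test").command({ ping: 1 });
};
```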
null | []
| [
{
"code": "",
"text": "Hi, all.On my Organization Access Manager screen for my project, it says “Email has not been verified” under the “Email Last Verified Date” heading. So, I want to ask: how do I verify my email address now?Also: how can I make sure that I used my database user password to try to connect to the database and not my account password?",
"username": "Osman_Zakir"
},
{
"code": "",
"text": "What email id you have used while creating your account?\nCheck that mail box and follow the link\nRegarding userid/password to access your database have you created the user?After creating your Sandbox cluster you have step to create database user",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I’m using Google as an ID provider for the account. I can’t find a verification email now, but maybe I had one before.I was able to connect successfully just now, though, so that problem’s been taken care of.",
"username": "Osman_Zakir"
},
{
"code": "",
"text": "For the second part of your question, In MongoDB Atlas control plane and data plane users are separate. So users cannot use their Atlas UI credentials to connect to database. The only exception is Data Explorer which allows interaction with your data through Atlas UI. You can turn it off if needed https://www.mongodb.com/docs/atlas/atlas-ui/#disable-atlas-ui-data-interactionYou can create database users under Data Access tab of respective project. These users can interact with your database using tools like MongoDB Shell, Compass or drivers.",
"username": "Fuat_Ertunc"
}
]
| How Do I Verify Email Address (No Email in Inbox) | 2022-08-30T22:33:39.259Z | How Do I Verify Email Address (No Email in Inbox) | 2,210 |
null | []
| [
{
"code": "",
"text": "Hi everyone! I mean hello world. But you all knew that already. I am Jason. I am here to better my understanding of MongoDB and teach anyone that may benefit with the knowledge that I have. Currently working on a new project that will need to have a MogoDB cluster so I wanted to join the best community that I could in order to help me grow and get better at understanding and implementing into my express react app.",
"username": "Jason_Nutt"
},
{
"code": "",
"text": "Welcome @Jason_Nutt ! Glad to have you join our community!",
"username": "TimSantos"
},
{
"code": "",
"text": " Welcome to the MongoDB Community Forums @Jason_Nutt !If you can share a bit more about your project and the sort of learning resources you are looking for, the community may have some suggestions for you.I see you are already making your way through MongoDB University, which is a great starting point.There are also some posts in the forums with helpful suggestions, like @michael_hoeller’s response on How do I model relationships with MongoDB? - #2 by michael_hoeller.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks so much! We are in the ideas phase of something we want to build that will serve as a resource/community/help app and/ or site for mental health issues such as depression, bi-polar, ptsd and other mental health issues. It would be separated by locations or regions and provide a one stop resource locator for suffering individuals. Like I said, it is a project that is in the ideas phase. I know it will need to have a database to store our data. Thanks for asking about it.",
"username": "Jason_Nutt"
},
{
"code": "",
"text": "Thanks so much @TimSantos ! Much appreciated.",
"username": "Jason_Nutt"
},
{
"code": "",
"text": "Welcome to the Forums @Jason_Nutt! We’re glad to have you here!",
"username": "Michael_Grayson"
},
{
"code": "",
"text": "Thank you @Michael_Grayson. I am feeling welcome and working my way through MongoDB university course M001. Appreciate the welcome.",
"username": "Jason_Nutt"
},
{
"code": "",
"text": "I am feeling welcome and working my way through MongoDB university course M001.Today I will be pressing into M103 Cluster Administration and M121 The MongoDB aggregation framework. I am also looking ahead and getting started with Charts because they are thrilling to me, I love to see the data visualized in a meaningful way, that’s why I started doing this. I want to see data charts and graphs! SO data modeling and charts give me a great incentive to really dive in and get these foundational concepts solid and move forward to become a data visualizer. I’ll be seeing y’all at the end of the days travels!",
"username": "Jason_Nutt"
}
]
| Hello World of MongoDB! I'm Jason Nutt! | 2021-08-05T13:50:05.273Z | Hello World of MongoDB! I’m Jason Nutt! | 6,680 |
null | [
"mongodb-shell"
]
| [
{
"code": "db.TestColl.insertOne(\n {\n _id: ObjectId(\"630e00266bca1d7face1be49\"),\n cId: Long(\"10003014\"),\n vId: Long(\"1006\"),\n vc: {\n code: '4',\n tagif: Decimal128(\"0\"),\n tagsD: Decimal128(\"0\")\n },\n tag: \n\t\t{\n\t\t rc: { typeId: 75, name: 'STR' },\n\t\t tsN: '206158433290',\n\t\t tStDt: ISODate(\"2014-08-08T14:24:12.000Z\"),\n\t\t tEnDt: ISODate(\"2015-09-16T15:21:42.000Z\"),\n\t\t tagEx: \n\t\t\t[\n\t\t\t\t{\n\t\t\t\t reasonCode: 'INACTIVE',\n\t\t\t\t exDate: ISODate(\"2018-11-15T00:00:00.000Z\")\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t reasonCode: 'NEG',\n\t\t\t\t exDate: ISODate(\"2018-11-15T11:19:28.000Z\")\n\t\t\t\t}\n\t\t\t],\n\t\t tagAs: { typeId: 3079, value: 'Suspended' }\n\t\t}\n }\n);\n",
"text": "All,I need help on how to update deeply nested sub document fields from Mongo Shell. Is it possible to do it or not.The below is my collection design:In the above collection, I would like to update the exDate filed of all the sub documents in tagEx Array by adding a date or using current date.I tried by using forEach and multiple other ways, I am unable to update it from Mongo Shell.Can you please help me how this can be done. The issue is, the sub document may be more deeply nested.What are my options?",
"username": "Vikram_Bade"
},
{
"code": "db.TestColl.find({cId : Long(\"10003014\")}).forEach(function (doc) {\n\n\tvar array = doc.tag.tagEx;\n\tprint(array);\n\tif (array != undefined){\n\t\tfor(var i=0;i<array.length;i++)\n\t\t\t{\n\t\t\t\tprint(array[i].exDate);\n\t\t\t\tdb.TestColl.updateOne({_id:doc._id},{ $set: { \"doc.tag.tagEx[i].exDate\" : \"2018-01-01\" }});\n\t\t\t}\n\t}\n});\n",
"text": "I am attempting to do something like this using for Each:Regards",
"username": "Vikram_Bade"
}
]
| Update Deeply Nested Sub Document - From Mongo Shell | 2022-08-31T07:52:53.503Z | Update Deeply Nested Sub Document - From Mongo Shell | 1,510 |
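Since the forEach attempt above builds the field path as a literal string (so nothing is updated), here is a hedged sketch of a more direct approach: a single updateMany with the all-positional operator, or with arrayFilters, so the server updates every tagEx element in place. Field names are taken from the sample document; the date values are placeholders.

```javascript
// Set exDate on every element of tag.tagEx, for all matching documents.
db.TestColl.updateMany(
  { cId: Long("10003014") },
  { $set: { "tag.tagEx.$[].exDate": new Date() } }
);

// Variant with arrayFilters, if only some elements should change,
// e.g. only entries whose reasonCode is "INACTIVE":
db.TestColl.updateMany(
  { cId: Long("10003014") },
  { $set: { "tag.tagEx.$[el].exDate": new Date("2018-01-01") } },
  { arrayFilters: [ { "el.reasonCode": "INACTIVE" } ] }
);
```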
null | [
"replication"
]
| [
{
"code": "local.oplog.rscommand local.oplog.rs command: getMore { getMore: 8464225076500301533, collection: \"oplog.rs\", batchSize: 13981010,.........\n",
"text": "I have noticed in one of the secondary Mongodb logs about the local.oplog.rs command which more or less caused disk IO and memory to increase at the same time and resulted in issues.what exactly does this command do and why was it invoked and it seems that its initiated huge batch size. i dont think anything significant happened in primary which would have caused it to replay the oplog.Thanks",
"username": "Vinay_Manikanda"
},
{
"code": "local.oplog.rs",
"text": "Hi @Vinay_Manikanda and welcome to the community!!what exactly does this command doThe getMore indicated that something is requesting the next batch of results. The log snippet above indicates that something is tailing the oplog, either another secondary (if chained replication is enabled – it is by default), or an applicationI have noticed in one of the secondary Mongodb logs about the local.oplog.rs command which more or less caused disk IO and memory to increase at the same time and resulted in issues.As mentioned, could you help me understand how did you figure out that issue(mentioned in the logs) was caused by this?Also, could you confirm a few more things based on the above issues observed:Please help us with the above details so we could assist you further.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi Aasawari,Thanks for your response.I am not exactly sure if the disk latency and memory spike caused by this operation but it aligns with the exact time we had this issue on both the secondary’s but not primary. I see this local.oplog.rs getMore call on both secondary’s.Mongodb version is 4.2.14. I don’t have the rs.status() during the time of the issue but this memory spike eventually triggered OOM killer to target the mongod in both the secondary instances and replicaset went into bad state. I couldn’t trace the connection id back to any of the ip address or initial connection in any of the logs.",
"username": "Vinay_Manikanda"
},
{
"code": "getMore: <some large number>, collection: \"oplog.rs\", batchSize: <some other large number>rs.status()rs.conf()",
"text": "Hi @Vinay_ManikandaCould you please help me in understanding the issue in a more specific way by clarifying the below concerns:In terms of associating the connection source IP to a certain connection number, in MongoDB 4.2 series the log lines would be similar to this:2022-08-31T10:19:33.101+0530 I NETWORK [listener] connection accepted from 127.0.0.1:64149 #16 (8 connections now open)The above line signifies a new connection from 127.0.0.1:64149 which is assigned the number 162022-08-31T10:19:33.101+0530 I NETWORK [conn16] received client metadata from 127.0.0.1:64149 conn16: { driver: { name: “NetworkInterfaceTL”, version: “4.2.14” }, os: { type: “Darwin”, name: “Mac OS X”, architecture: “x86_64”, version: “21.6.0” } }Subsequently operations from this IP are marked using the string [conn16]Could you find a similar pair of log lines that can show the originating IP of the getMore queries? This will help you identify the source of the queries.Please note that in MongoDB 4.2 series, the latest version is 4.2.22. I would strongly recommend you to upgrade to the latest version for improvements and bug fixes (see the release notes for more details) to ensure that you’re not encountering any issues that were fixed in newer versions.Also, the latest MongoDB version is currently 6.0.1 which contains major improvements from the 4.2 series. Please consider upgrading to the 6.0 series as well.If the issue still persists even, could you please share the rs.status() and rs.conf() details for the deployment, along with any information that will help us reproduce what you’re seeing?Best regards\nAasawari",
"username": "Aasawari"
}
]
| What does oplog.rs getMore command do? will it cause disk latency and memory to increase? | 2022-08-25T03:18:10.760Z | What does oplog.rs getMore command do? will it cause disk latency and memory to increase? | 2,004 |
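As a follow-up to the log-tracing suggestion above, a hedged mongosh sketch for checking, on a live node, which clients are currently tailing the oplog. The fields shown are the usual ones in currentOp output but may vary by server version.

```javascript
// List in-progress operations that are reading local.oplog.rs
// (typically other replica set members or change stream / tailing clients).
db.currentOp({ ns: "local.oplog.rs" }).inprog.forEach(op => {
  printjson({
    opid: op.opid,
    client: op.client,          // source host:port of the connection
    desc: op.desc,              // e.g. "conn123", matching [connNNN] in the logs
    secs_running: op.secs_running
  });
});
```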
null | [
"replication"
]
| [
{
"code": "",
"text": "Hi and Good Day,Just wanted to ask if its really affecting the transactions being process on the primary if a secondary node is offline / unreachable status.Currently when our DR site is down which is part of the replica set of PRODUCTION environment for replication, transactions monitored tend to long run causes slow down on the application. After removing the inactive nodes , transaction runtime became normal.Is it possible to prevent the slowdown without removing the members. the DR servers have already a priority 0 and voting 0 configuration.Hoping you can give me some insights to resolve this one. Thanks !MongoDB Version 2.6.12 / Oracle Linux 7",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "Hi @Daniel_InciongCould you clarify what you mean by “transaction” in this context?MongoDB supports multi-document transactions starting from the 4.0 series, but you mentioned that you’re using MongoDB 2.6.12, which doesn’t have this feature.If you are still using MongoDB 2.6, this version is very outdated (released in March 2016) and not supported anymore (e.g. not receiving updates and fixes). Is it possible for you to move to a supported version as soon as possible?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin,transactions like execution of queries on the primary node. There is already an ongoing plan for the upgrade to MongoDB 4.4. version.",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "Hi @Daniel_InciongUnfortunately due to the massive difference between MongoDB 2.6 and the latest ones (i.e. MMAPv1 vs. WiredTiger, replica set Protocol Version 0 vs. Protocol Version 1, just to name a couple), it’s difficult to determine what went wrong there (if there is anything wrong with it at all).Having said that, there are some troubleshooting tips that works pretty much universally, such as:Hopefully those will help you pinpoint some patterns/causes of the slowdowns.Best regards\nKevin",
"username": "kevinadi"
}
]
| Offline node causes slowdown on PRIMARY operations | 2022-08-30T11:17:47.083Z | Offline node causes slowdown on PRIMARY operations | 1,518 |
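A hedged mongosh sketch of the kind of quick health check suggested above, summarising each member's state and an approximate replication lag. Field names follow current rs.status() output and may differ on very old servers such as the 2.6 series mentioned in the thread.

```javascript
// Summarise replica set member health and how far each member is behind the primary.
const status = rs.status();
const primary = status.members.find(m => m.stateStr === "PRIMARY");
status.members.forEach(m => {
  const lagSecs = primary && m.optimeDate && primary.optimeDate
    ? (primary.optimeDate - m.optimeDate) / 1000
    : null;
  print(`${m.name}  state=${m.stateStr}  health=${m.health}  lag(s)=${lagSecs}`);
});
```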
null | [
"compass"
]
| [
{
"code": "",
"text": "Hello Team,We are trying to export all our Mongo DB database Trigger errors to a collection through ‘Log Forwarder’ module.Now, we want to take a daily dump of that collection to check the number of errors occurred on daily basis.Is there any way to achieve this ? In Compass, we have an option to export the collection, however, is there any scheduled process or anything we can configure?",
"username": "Saptarsi_Mondal"
},
{
"code": "",
"text": "Hello @Saptarsi_Mondal,Welcome to the MongoDB Community Forum! I think from the MongoDB side, the application you are looking for is mongoexport. It is a command-line tool that produces a JSON or CSV export of data stored in a MongoDB instance.In terms of scheduling jobs:I believe the most straightforward way to achieve what you wanted (daily export of a collection), is to use cronjob/task scheduler to run a script that executes mongoexport with the required parameters.However if this is not applicable to your use case, please share below details:Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Need to export a collection on daily basis | 2022-08-25T13:17:10.201Z | Need to export a collection on daily basis | 1,726 |
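Expanding on the mongoexport-plus-scheduler suggestion above, a hedged Node.js sketch of a daily dump script that writes the trigger-error collection to a dated JSON file; it could be run from cron or any task scheduler. The connection string, database, and collection names are placeholders.

```javascript
// Daily dump of a collection to a dated JSON file (driver-based alternative to mongoexport).
const fs = require("fs");
const { MongoClient } = require("mongodb");

async function dumpErrors() {
  const client = new MongoClient("mongodb+srv://user:pass@cluster.example.net");
  try {
    await client.connect();
    const docs = await client
      .db("logs")                       // placeholder database
      .collection("trigger_errors")     // placeholder collection
      .find({})
      .toArray();
    const file = `trigger_errors_${new Date().toISOString().slice(0, 10)}.json`;
    fs.writeFileSync(file, JSON.stringify(docs, null, 2));
    console.log(`wrote ${docs.length} documents to ${file}`);
  } finally {
    await client.close();
  }
}

dumpErrors().catch(console.error);
```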
null | [
"indexes"
]
| [
{
"code": "{\n \"_id\": \"book1\",\n \"title\": \"example title\",\n \"description\": \"example description\",\n \"rating\": 3.6\n},\n{\n \"_id\": \"book2\",\n \"title\": \"example title\",\n \"description\": \"example description\",\n \"rating\": 4.2\n}\n",
"text": "Hi! This is my first question In my DB, I have documents that look something like this:I have a text index setup for the title and description with weights making the title more important than the description. Let’s say I search for “example” is there a way to take the rating into account? What I mean by this is that a book with a higher rating is more important, and in this specific case, book2 should show up above book1. Is this possible, or is processing outside of MongoDB necessary- I’m kind of a database noob .",
"username": "Samuel_Tinnerholm"
},
{
"code": "$text[\n {\n \"_id\": \"book1\",\n \"title\": \"example title\",\n \"description\": \"example description\",\n \"rating\": 3.6\n },\n {\n \"_id\": \"book2\",\n \"title\": \"example title\",\n \"description\": \"example description\",\n \"rating\": 4.2\n }\n]\n>db. collection.createIndex(\n {\n title: \"text\",\n description: \"text\"\n },\n {\n weights:{\n title: 2,\n description: 2\n },\n name: \"Text1Index\"\n }\n)\n>db.Testing.find(\n {\n $text: {\n $search: \"example\"\n }\n }\n).sort({ rating: -1})\n[\n {\n \"_id\": \"book2\",\n \"title\": \"example title\",\n \"description\": \"example description\",\n \"rating\": 4.2\n },\n {\n \"_id\": \"book1\",\n \"title\": \"example title\",\n \"description\": \"example description\",\n \"rating\": 3.6\n }\n]\n",
"text": "Dear @Samuel_Tinnerholm,Welcome to The MongoDB Community forums! Just to clarify, the text index you’re referring to is the $text operator instead of Atlas Search, right?If it’s about the $text operator, I think you are looking for Sort functionality which you can use with $search.\nPlease check below documentation for reference:These operators will work with your text index setup and you may be able to use them according to your requirements.As a quick demonstration, let’s say we have a collection of documents similar to the example you posted:Create a Text Index on it:Query the collection using the index and sorting the result by rating:Output will look like below:Is this the output you are expecting?In case you need more information, please share below detailsRegards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Indexing with Ranks/Ratings | 2022-08-17T10:23:44.958Z | Indexing with Ranks/Ratings | 1,287 |
null | [
"aggregation",
"queries",
"node-js",
"data-modeling",
"mongoose-odm"
]
| [
{
"code": "export interface ResourceComment {\n comment: string,\n user: mongoose.Types.ObjectId,\n resourceId: mongoose.Types.ObjectId,\n parentCommentId?: mongoose.Types.ObjectId,\n replies?: mongoose.Types.DocumentArray<ResourceComment>, // fetched through lookup\n}\n\nconst resourceCommentSchema = new mongoose.Schema<ResourceComment>({\n comment: { type: String, ref: 'User', required: true },\n user: { type: mongoose.Schema.Types.ObjectId, ref: 'User', required: true },\n resourceId: { type: mongoose.Schema.Types.ObjectId, required: true },\n parentCommentId: { type: mongoose.Schema.Types.ObjectId },\n}, { timestamps: true })\nresourceIdresourceIdresourceIdblogPostIdblogPostIdresourceId",
"text": "I’ve tried googling this but didn’t find a definitive answer.I’m implementing a comment system on my website using MongoDB. My plan is to store all comments in a single collection. The schema looks like this:resourceId defines where this comment belongs.My questions:Question 1: Is this schema good? Will I be able to query these comments (by resourceId) fast enough even when the collection grows into the millions?Question 2: I’m also planning to add comments to my blog posts. Instead of the resourceId the comment belongs to, I would need an identifier for the specific post. Should I use the same schema, add another blogPostId field, and make both blogPostId and resourceId optional? Or should I create a separate model + collection? The rest of the feels are the exact same and I want to avoid unnecessary duplication.",
"username": "Florian_Walther"
},
{
"code": "resourceIdresourceId",
"text": "Hi @Florian_WaltherMy plan is to store all comments in a single collectionI think this should be fine. It’s probably a better option vs. putting the comments in an array of sub-document inside e.g. a “post” document, since if a post generated a lot of comments, the “post” document can grow indefinitely, which is probably not what you want.Question 1: Is this schema good? Will I be able to query these comments (by resourceId ) fast enough even when the collection grows into the millions?Well “good” is relative I believe as long as the collection is indexed properly (see Create Indexes to Support Your Queries) and if the working set fit in RAM, it should be fast enough. Of course this is also subject to the hardware spec, and whether the hardware can handle the workload or not.Question 2: I’m also planning to add comments to my blog posts . Instead of the resourceId the comment belongs to, I would need an identifier for the specific post.To me that use case doesn’t sound too different from the first one you mentioned. If it’s serving the same purpose and you’re expecting a similar usage pattern, I don’t see why you can’t reuse the same schema with minor modifications.Obligatory caveat: I’m not 100% familiar with the use case you have in mind, so these are just generalized opinion on my part. Before committing to any one solution, I’d recommend you to simulate the workload first to see if the design would work or not Best regards\nKevin",
"username": "kevinadi"
},
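A small mongosh sketch of the kind of index the reply above points to, using the collection and field names from the schema in the question; the resource id value is a placeholder:

```js
// Compound index to support "comments for one resource, newest first"
db.resourcecomments.createIndex({ resourceId: 1, createdAt: -1 })

// The query this index is meant to serve (someResourceId is a placeholder ObjectId)
db.resourcecomments.find({ resourceId: someResourceId })
  .sort({ createdAt: -1 })
  .limit(20)
```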
{
"code": "ObjectIdresourceId",
"text": "Thank you for your answer!Just to clarify, do you recommend putting resource and blog post comments into the same collection or keeping them separate? The only difference in the schema is that one needs a resource id and the other one the blog post id to specify where this comment belongs to.Now that I think about it, maybe I don’t even need different names for that field since they’re both ObjectIds? Maybe I can just give it a generic name (which resourceId already kinda is) and use it for both resource and blog post ids.",
"username": "Florian_Walther"
},
{
"code": "ObjectIdresourceId_idObjectId_idresourceId_id_id",
"text": "Now that I think about it, maybe I don’t even need different names for that field since they’re both ObjectId s? Maybe I can just give it a generic name (which resourceId already kinda is) and use it for both resource and blog post ids.I think this is reasonable. However I’d like to point out that with regard to the _id field, ObjectId is just the default auto-generated value that is unlikely to be duplicated. If you need to, you can use a custom _id field (and thus would perhaps create a more informative reference in the resourceId field).Using a custom _id field would be an advantage for some application, since e.g. if you know the primary key for a collection and have a method to generate one, your app won’t be able to insert two identical documents, since the_id field is uniquely indexed.Best regards\nKevin",
"username": "kevinadi"
},
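A quick mongosh illustration of the point above about the unique index on _id; the key format here is purely hypothetical:

```js
// Hypothetical natural key used as _id instead of the default ObjectId
db.comments.insertOne({ _id: "resource123:user456", comment: "first!" })

// A second insert with the same _id is rejected by the implicit unique index on _id
db.comments.insertOne({ _id: "resource123:user456", comment: "again" })
// MongoServerError: E11000 duplicate key error
```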
{
"code": "",
"text": "Thank you for the explanation!",
"username": "Florian_Walther"
}
]
| Storing (possible millions) of comments in a single collection? | 2022-08-27T05:46:46.841Z | Storing (possible millions) of comments in a single collection? | 3,018 |
null | []
| [
{
"code": "",
"text": "So I recently upgraded from M0–>M10I have an external application that connects to my MongoDB, all was working great with the M0 tier, but I decided to upgrade to leverage the Power BI Connection capabilities.Anyways, now I get an error:not authorized on EmergDB to execute command { find: “system.views”, filter: {}, limit: 50, returnKey: false, showRecordId: false, lsid: { id: UUID(“2eb7f5f9-7e75-4a72-a20f-a8ca0ab65243”) },\n$clusterTime: { clusterTime: Timestamp(1661880398, 1), signature: { hash: BinData(0, 28DC930BC47675C14EAB04D802D8CE93E636C55A), keyId: 7137659592053358596 } }, $db: “EmergDB” }And now I see system.views as a collection in my database (never existed before).I’m really struggling here because now my application won’t connect to the database at all. Any help would be greatly appreciated.",
"username": "Ahmed_Chaarani"
},
{
"code": "db.collection.find()\"system.views\"EmergDB<database>.system.*<database>.system.views",
"text": "Hi @Ahmed_Chaarani,I’m really struggling here because now my application won’t connect to the database at all.Just to clarify, are you not able to connect at all after upgrading? Based off the error, it seems connection is possible but the database user is not authorized to perform a db.collection.find() command on the \"system.views\" collection within the EmergDB database.And now I see system.views as a collection in my database (never existed before).As per the following documentation:MongoDB stores system information in collections that use the <database>.system.* namespace, which MongoDB reserves for internal use.Additionally:The <database>.system.views collection contains information about each view in the database.To further help us assist you with this issue, can you provide the following information:Regards,\nJason",
"username": "Jason_Tran"
}
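For anyone hitting the same listCollections authorization error, a hedged mongosh sketch of one way to see what the user is actually allowed to list; this is a diagnostic suggestion only, not necessarily the fix Atlas support would recommend:

```js
// Run as the same database user the application uses
db.getSiblingDB("EmergDB").runCommand({
  listCollections: 1,
  nameOnly: true,
  authorizedCollections: true  // only return collections this user may access
})
```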
]
| System.views shows up in my database after upgrading (causing connection issues) | 2022-08-30T23:25:21.924Z | System.views shows up in my database after upgrading (causing connection issues) | 1,069 |
null | [
"dot-net"
]
| [
{
"code": " Exception while emitting periodic batch from Serilog.Sinks.Scalyr.ScalyrSink: System.AggregateException: One or more errors occurred. (Specified cast is not valid.)\n ---> System.InvalidCastException: Specified cast is not valid.\n at MongoDB.Bson.BsonValue.System.IConvertible.ToType(Type conversionType, IFormatProvider provider)\n at Newtonsoft.Json.JsonWriter.ResolveConvertibleValue(IConvertible convertible, PrimitiveTypeCode& typeCode, Object& value)\n at Newtonsoft.Json.JsonWriter.WriteValue(JsonWriter writer, PrimitiveTypeCode typeCode, Object value)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializePrimitive(JsonWriter writer, Object value, JsonPrimitiveContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue(JsonWriter writer, Object value, JsonContract valueContract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue(JsonWriter writer, Object value, JsonContract valueContract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeList(JsonWriter writer, IEnumerable values, JsonArrayContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty) at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue(JsonWriter writer, Object value, JsonContract valueContract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue(JsonWriter writer, Object value, JsonContract valueContract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue(JsonWriter writer, Object value, JsonContract valueContract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value, Type objectType)\n at Newtonsoft.Json.JsonSerializer.SerializeInternal(JsonWriter jsonWriter, Object value, Type objectType)\n at Newtonsoft.Json.JsonSerializer.Serialize(JsonWriter jsonWriter, Object value)\n at Newtonsoft.Json.Linq.JToken.FromObjectInternal(Object o, JsonSerializer jsonSerializer)\n at Newtonsoft.Json.Linq.JObject.FromObject(Object o, JsonSerializer jsonSerializer)\n at Serilog.Sinks.Scalyr.ScalyrFormatter.MapToScalyrEvent(LogEvent logEvent, Int32 index)\n at 
System.Linq.Enumerable.SelectIterator[TSource,TResult](IEnumerable`1 source, Func`3 selector)+MoveNext()\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeList(JsonWriter writer, IEnumerable values, JsonArrayContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty) at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue(JsonWriter writer, Object value, JsonContract valueContract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue(JsonWriter writer, Object value, JsonContract valueContract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value, Type objectType)\n at Newtonsoft.Json.JsonSerializer.SerializeInternal(JsonWriter jsonWriter, Object value, Type objectType)\n at Newtonsoft.Json.JsonSerializer.Serialize(JsonWriter jsonWriter, Object value, Type objectType)\n at Newtonsoft.Json.JsonConvert.SerializeObjectInternal(Object value, Type type, JsonSerializer jsonSerializer)\n at Newtonsoft.Json.JsonConvert.SerializeObject(Object value, Type type, JsonSerializerSettings settings)\n at Newtonsoft.Json.JsonConvert.SerializeObject(Object value, JsonSerializerSettings settings)\n at Serilog.Sinks.Scalyr.ScalyrFormatter.Format(IEnumerable`1 events)\n at Serilog.Sinks.Scalyr.ScalyrSink.EmitBatchAsync(IEnumerable`1 events)\n --- End of inner exception stack trace ---\n at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)\n at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)\n at System.Threading.Tasks.Task.Wait()\n at Serilog.Sinks.PeriodicBatching.PeriodicBatchingSink.EmitBatch(IEnumerable`1 events)\n at Serilog.Sinks.PeriodicBatching.PeriodicBatchingSink.OnTick()\n",
"text": "Hello!My C# code is throwing a MongoDB.Driver.MongoAuthenticationException exception which in-turn is calling Serilog to format the exception using Newtonsoft Json serialization. Since one of the values in the exception is of type BsonValue and is IConvertible the serializer is trying to call ToType() which is throwing the exception on this line mongo-csharp-driver/BsonValue.cs at master · mongodb/mongo-csharp-driver · GitHubI’ve downloaded the driver source and the two types it’s trying to convert are BsonType.Binary and BsonType.Timestamp.Stacktrace",
"username": "Jonathan_Chapman"
},
{
"code": "",
"text": "I have the some issue, can anyone find the silution for this error?",
"username": "raypanwj"
}
]
| MongoDB.Bson.BsonValue.System.IConvertible.ToType throwing exception when using Serilog | 2021-04-14T20:30:07.008Z | MongoDB.Bson.BsonValue.System.IConvertible.ToType throwing exception when using Serilog | 3,189 |
null | [
"aggregation",
"sharding",
"performance"
]
| [
{
"code": "",
"text": "Hi everyone!I am currently working on a MongoDB test system on a server for a proof of concept project, to see the limits for a later telecommunication project, but I have some performance issues with the MongoDB on the server.The specs of the server:\nCPU: 2x AMD epyc 7453\nRAM: 256GB\nStorage: 12x 20TB ultrastar HDD, 2x 1TB NVMe SSD\nOS: Debian 11I’m working with a more or less realistic dummy data, which includes about 8.5kB of data per document (timestamp, IPv6 addresses, random 32 and 64 bit values…etc), in database sizes on the scale of 10…100GB (later scaling up to the terrabyte territory). All test ran on the same server on the localhost address.Without sharding, replica sets, and with one mongod process, the results were something like this:This seemed like a realistic range, based on articles and previous tests (also to be clear, these are the speed of the inserts themselves, no data handling is calculated in this). Though the server was not running at full power, it should be capable of higher performance overall.At this point it seemed to be a good opportunity to test sharding on a single server - the concept was that if more mongod instances ran on the same server, the overall performace would be higher. I used the timestamp as a shard key, in ranged mode, since it’s more or less an “incemental” value and therefore doesn’t really need hashing for appropriate load balancing. And…this is where I lost track.When I used 3 different shard servers on one SSD, the insertion speed was around 15-40.000 inserts per second. With 11 different shard servers still on one SSD, I got around 15-35.000 inserts per second, and when I switched to 11 different shard servers, each on a dedicated HDD, I got around 1000-30.000 inserts per second, which is very far behind the un-sharded test results.The CPU was not running on full power, the full system memory is about 25 times more than the size of the database(s), and even a single HDD (or one SSD) should be able to write more data than that (I mean in data speed). Maybe I should test other shard key strategies, change the test scenario in case the storage cache is corrupting the results, but I’m not really sure about that.Has anyone any suggestions on the topic?",
"username": "Zsolt"
},
{
"code": "",
"text": "Running more than a single instance of mongod/mongod on the same hardware is detrimental to performance. Especially if your client application, load simulato, is also running on the same machine. Yes, you illiminate network latency but you increase resources contention. You cannot do much with 2 cpu without context switching. Shards add a lot of overhead, the only way to have better overall performances is with multiple physical machine.",
"username": "steevej"
},
{
"code": "",
"text": "Besides running on one server there are also some other concerns with this deployment.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Thank you both for your answers.Yes, after a bit of researching I see why this shard key strategy was not the best idea.The 11 as a number of shard servers was just an arbitary number came to my mind, it was a test scenario to see how it changes overall performance. Well, after thinking about it (and looking at the databases in each shard server) I see why it didn’t permform as I was hoping for - all of them are just a member of one replica set, and without any other setting, all of them just store the same data, therefore slowing down operation. Though, one thing I noticed: the overall insertsion speed dropped by 10-40%, but the computer itself was writing to 11 different places → therefore all in all it was still writing more data in the same given time, than without sharding.Just out of curiosity I made a test with zones: I made 5 zones with 5 shards (each with a replica set consisting of one member, yes I know, not very realistic to be honest) with a shard key that should have made more or less a decent balance. Well, the shards store almost the same data size -which were about the fifth of the whole database in size, so to this point it was what I expected. But… it was writing especially slowly. I know using zones in sharding is very much not made for this purpose, but still I was surprised that it was this slower.For the time we have a few more of these servers at our hands I will make some test with those in a LAN just to see how it performs, but still I’m somehow “disappointed” in the results. With a much older and weaker server I could get about 30-40.000 inserts per second on a similar scenario (I mean without sharding, but the same random data) and I was hoping somehow this could pull the trick, and not just by a factor of 2-3 times. Of course I know multithreading has it’s limitations and sure, no program will probably use all cores and threads to push the hardware to it’s full limits.",
"username": "Zsolt"
},
{
"code": "",
"text": "Well, after thinking about it (and looking at the databases in each shard server) I see why it didn’t permform as I was hoping for - all of them are just a member of one replica set, and without any other setting, all of them just store the same dataIf the above is really what you observed (all shards holding the same data) then you did not configured sharded clusted. Each shard is a different replica set and each is supposed to hold a different set of data.I strongly recommend that you take M103 from university.mongodb.com as you seem to lack some fundamental knowledge about sharding vs replica set to do what you are doing.",
"username": "steevej"
}
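A minimal mongosh sketch of how a sharded cluster is normally assembled so that each shard owns different chunks of data; host names, replica-set names, and the namespace are placeholders, and the hashed key is shown only as one option for monotonically increasing values:

```js
// Run against mongos; "shardA"/"shardB" and the hosts are placeholders
sh.addShard("shardA/host1.example.net:27018")
sh.addShard("shardB/host2.example.net:27018")

sh.enableSharding("telemetry")
sh.shardCollection("telemetry.samples", { sensorId: "hashed" })

sh.status()   // each shard should now report different chunk ranges, not copies of the data
```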
]
| Aiming for high performance on single server | 2022-08-25T12:53:14.705Z | Aiming for high performance on single server | 2,428 |
null | [
"queries",
"java"
]
| [
{
"code": "Projections.includeCould not construct new instance of: CaseDocument. Missing the following properties: [owner, requester, resolver, resolveByTime, creationTime, issue]@Builder\n@Data\n@FieldDefaults(makeFinal = true)\nclass CaseDocument {\n @BsonId\n private long caseId;\n \n private long creationTime;\n\n private long resolveByTime;\n\n private CaseOwner owner;\n\n private CaseCustomer customer;\n\n private CaseIssue issue;\n\n private CaseRequester requester;\n\n private CaseResolver resolver;\n\n @BsonCreater\n public CaseDocument() {\n // All Args Constructor\n }\n}\ncustomerProjections.include(Arrays.asList(\"customer\"))",
"text": "I’m using Mongo Java Driver 3.12 and have configured AutomaticPojoCodec. I’m writing a query which requests selected fields using Projections.include but while deserialization I get error Could not construct new instance of: CaseDocument. Missing the following properties: [owner, requester, resolver, resolveByTime, creationTime, issue].In my query, I’m only requesting customer property using Projections.include(Arrays.asList(\"customer\")). I’ve verified that data is present in DB and is not null. Why MongoDB is not able to deserialize in this scenario?",
"username": "Shubham_Gupta"
},
{
"code": "",
"text": "I see that the question is quite old, but I had similar issue and want to help anyone who will struggle with same.\nHere, instead of primitive types (long in this case), use wrapper (java.lang.Long). In this case, if field is not specified in projection, these fields will be deserialized as nulls\nThe error is quite missleading, as it mentions not the fields which actually have the issue, but all unspecified fields",
"username": "Dmytro_Solop"
}
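A sketch of the change described above, applied to the POJO from the question; only the field declarations change, from primitives to wrappers, so that fields omitted from the projection can come back as null:

```java
// Inside CaseDocument: wrapper types instead of primitives
private Long creationTime;   // was: private long creationTime;
private Long resolveByTime;  // was: private long resolveByTime;
```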
]
| Mongo Java Driver Deserialization error when using Projections | 2020-11-23T08:24:11.276Z | Mongo Java Driver Deserialization error when using Projections | 2,539 |
[]
| [
{
"code": "",
"text": "\nScreen Shot 2022-08-23 at 9.29.05 AM3000×1414 378 KB\n\nWe’ve been receiving email warnings about “Connections % of configured limit has gone above 80” for a couple hours this morning, but can’t identify anything we’re doing as the source of these extra connections. We have a hosted M10 cluster, AWS N. California, v4.4.16, with a Realm sync instance. Realm sync says there has been 23 requests in the last hour, so not a lot.Looking at the realtime data, local.oplog.rs looks to be consuming the most cycles, followed by _realm_sync.history and _realm.sync.resume_tokens. Is this just a lot of sync activity? Or should I restart the sync?We had a similar issue last week that just “resolved itself” after a few hours.",
"username": "Joe_Keeley"
},
{
"code": "mongod",
"text": "Hi @Joe_Keeley - Welcome to the community Is this just a lot of sync activity? Or should I restart the sync?It’s hard to say what the cause of the connection surge could be with the information at hand. However, have you had a chance to investigate the mongod logs starting from just before the surge and ending just after it dips back to your regular connection levels? This may provide some clue or further insight into which client(s) have caused the connection surge.In saying so, perhaps it may be best to raise this with the Atlas chat support team as they have more insight into the associated Atlas project / cluster.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "We figured out what was happening (we think!). We were doing some smaller sized updates to our MongoDB data, which then gets updated in Realm. We made the updates bigger and less frequent, and that appears to have solved it…thanks!",
"username": "Joe_Keeley"
},
{
"code": "",
"text": "Great! Thanks for updating the post with your solution Joe ",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Connections % of configured limit has gone above 80 - thoughts? | 2022-08-23T16:50:06.428Z | Connections % of configured limit has gone above 80 - thoughts? | 1,142 |
|
[
"connecting"
]
| [
{
"code": "",
"text": "Hi,I am facing this issue when trying to connect to my mongodb atlasuser is not allowed to do action [listCollections] on [server.].It works fine on my local device, but when I try running on Azure App Service I am getting this error.\nimage1605×50 6.35 KB\nSystem & Build Info:\nSpring Boot Application\nJava 11\nJDO Data Nucleus 6.0I have added whitelist ip and also excluded the mongodb auto configuration classes in spring boot application annotation.Really stuck, not sure what else am I missing.",
"username": "Raymond_Nathan"
},
{
"code": "server.",
"text": "Hi @Raymond_Nathan, thanks for posting!The M150 course is meant to be completed on MongoDB University, so Azure App Service is not necessary. I am mainly writing this for other learners who stumble on this post – you do not need to use Azure or Spring Boot However, I can try to troubleshoot your issue. It looks like you are trying to read from the collections on the server. database.Do you have:I hope this helps! Sorry I don’t have a lot of experience with Azure App Service.Thanks,\nMatt",
"username": "mattjavaly"
},
{
"code": "",
"text": "Hi Matt,Thanks for getting back so quickly. Actually I have managed to resolve the issue.It was a combination of multiple reasons:Fixed up all those and working great now. Regards,\nRaymond",
"username": "Raymond_Nathan"
},
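For other readers hitting the same authorization error, one setting worth double-checking is the Spring connection URI, including the target database and the authSource option; a hedged sketch with placeholder values only:

```properties
# application.properties (all values are placeholders)
spring.data.mongodb.uri=mongodb+srv://appUser:secret@cluster0.xxxxx.mongodb.net/mydb?retryWrites=true&w=majority&authSource=admin
```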
{
"code": "EditatlasAdmin@admin",
"text": "In the Atlas cluster, click “Database Access” under “SECURITY” on the left sidebar,\nClick on the button Edit and then add a role atlasAdmin@admin to the user.\nTry again or refresh your connection to see the result.\nScreenshot from 2022-08-30 21-25-071054×944 99.3 KB\n",
"username": "Michael_Emelieze"
}
]
| Getting Error From MongoAtlas user is not allowed to do action [listCollections] on [DBName.] | 2021-07-12T13:54:55.388Z | Getting Error From MongoAtlas user is not allowed to do action [listCollections] on [DBName.] | 7,872 |
|
[
"queries",
"data-modeling"
]
| [
{
"code": "",
"text": "My collection is called places, it is used by 2 clients, one with 370 documents and the other with 250. On Thursday, both clients were very happy and at 09:00 on Friday, one of them began to wait approximately 1:40 minutes for each query. towards. The other took 10-12 seconds. The queries are made by the ownerId and sharedUser, both compound indexedAnalyze the weight of each document in the client that takes longer and there is no record that weighs more than 5kb, at most there are a couple of them that weigh 100kb, I even deactivated one of 100kb and the problem persisted.The same thing happened on Tuesday and Wednesday, everything solved… I didn’t understand what happened.The size of the collection is just:\nStorage size: 880.64KB\nDocuments: 1.1K\nAvg. document size: 3.35 kB\nIndexes: 3\nTotal index size: 167.94 kBIt is even much faster to export the entire collection than to do the query.And to add more inconsistency, this same client consults another collection and it takes too long… very distressing.Will it be something from the internal cache, oplog of some query?\nCaptura de Pantalla 2022-08-28 a la(s) 10.28.492246×562 63.1 KB\n",
"username": "Mauro_Perez_Araya"
},
{
"code": "{ownerId : -1, sharedUsers : -1}",
"text": "Hi @Mauro_Perez_Araya ,Looking at the screen shot the {ownerId : -1, sharedUsers : -1} has not been used at all ( 0 seens … )So I am not sure what is the query that you perform?Can you provide it with an execution plan ?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
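A small mongosh sketch of how to capture the execution plan being asked for above; the filter values are placeholders for the real ownerId/sharedUsers values:

```js
// ownerIdValue and sharedUserValue are placeholders for the actual values used by the app
db.places.find({ ownerId: ownerIdValue, sharedUsers: sharedUserValue })
  .explain("executionStats")
// Check the winning plan's index and compare totalKeysExamined vs totalDocsExamined
```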
{
"code": "",
"text": "El problema era que M2 tiene limite de transferencia de datos y ni me avisaron y se ralentizaba… Despues de descubrir esto habilitando otro cluster dedicado la velocidad de descarga era de 2 segundos… asi que me di cuenta que el problema era mongo atlas y luego buscando por esta razon encontre otra persona con exactamente el mismo problema y un empleado de mongo describio la solucion… Q lamentable que no se notifique de ninguna forma que excedi el limite, me trajo muchos problemas con mis clientes",
"username": "Mauro_Perez_Araya"
}
]
| From one minute to the next, queries started taking much longer | 2022-08-28T14:29:02.607Z | From one minute to the next, queries started taking much longer | 1,214 |
|
null | [
"queries",
"data-modeling",
"swift",
"atlas-device-sync"
]
| [
{
"code": "class Person: Object {\n @Persisted(primaryKey: true) var _id = UUID().uuidString\n\n @Persisted var shirts: List<Shirt>\n\n @Persisted(originProperty: \"owner\") var pants: LinkingObjects<Pants>\n}\n\nclass Pants: Object {\n @Persisted(primaryKey: true) var _id = UUID().uuidString\n\n @Persisted var owner: Person?\n}\n\nclass Shirt: Object {\n @Persisted(primaryKey: true) var _id = UUID().uuidString\n}\ntry await subscriptions.update {\n let sub = QuerySubscription<Person>(name: \"subscription\")\n subscriptions.append(sub)\n }\ntry await subscriptions.update {\n let sub = QuerySubscription<Pants>(name: \"subscription\")\n subscriptions.append(sub)\n }\n",
"text": "Does flexible sync pull objects across relationships like Lists, LinkingObjects, or direct relationships? Assuming this basic schema:Would this query sync Pants and Shirts?Would this query sync Persons?",
"username": "Harry_Netzer1"
},
{
"code": "",
"text": "Hey @Harry_Netzer1 - Flexible Sync does not pull in linked objects. You’d need to add subscriptions for the linked objects you need to preserve the relationships. In your examples, querying for Person does not sync Pants and Shirts, and querying for Pants does not sync Person/Shirt. If you subscribe to all three object types, the links/relationships will function as expected.",
"username": "Dachary_Carey"
},
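A sketch of what the reply above describes, extending the subscription code from the question to all three object types so the links resolve; the subscription names are arbitrary:

```swift
try await subscriptions.update {
    subscriptions.append(QuerySubscription<Person>(name: "all-people"))
    subscriptions.append(QuerySubscription<Pants>(name: "all-pants"))
    subscriptions.append(QuerySubscription<Shirt>(name: "all-shirts"))
}
```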
{
"code": "",
"text": "Thanks for clearing that up. I’ll use something like a region Id on all my objects.In the old days, sync pulled in directly referenced objects and could be configured to pull linkingObjects too. Any chance of this feature in the future?",
"username": "Harry_Netzer1"
},
{
"code": "",
"text": "I’m not sure what form it will take, but I think @Ian_Ward and the engineering team has some plans for linked object support. Stay tuned!",
"username": "Dachary_Carey"
}
]
| Flexible Sync Across Relationships | 2022-08-29T15:20:09.671Z | Flexible Sync Across Relationships | 1,996 |
null | [
"queries",
"node-js",
"mongoose-odm"
]
| [
{
"code": " [\n\n {\n \"_id\": \"630499244683ed43d56edd06\",\n \"userId\": \"630499234683ed43d56edd05\",\n \"isPaid\": \"true\"\n },\n \n {\n \"_id\": \"6304c19bda84477b41b4bbfa\",\n \"userId\": \"630499234683ed43d56edd05\",\n \"isPaid\": \"true\"\n },\n \n {\n \"_id\": \"6304c1b5da84477b41b4bbfb\",\n \"userId\": \"630499234683ed43d56edd05\",\n \"isPaid\": \"true\"\n },\n {\n \"_id\": \"6304c1cbda84477b41b4bbfc\",\n \"userId\": \"630499234683ed43d56edd05\",\n \"isPaid\": \"true\"\n },\n]\norder_id [\n\n {\n \"_id\": \"630499244683ed43d56edd06\",\n \"userId\": \"630499234683ed43d56edd05\",\n \"isPaid\": \"true\",\n \"order\": 7\n },\n \n {\n \"_id\": \"6304c19bda84477b41b4bbfa\",\n \"userId\": \"630499234683ed43d56edd05\",\n \"isPaid\": \"true\",\n \"order\": 5\n },\n \n {\n \"_id\": \"6304c1b5da84477b41b4bbfb\",\n \"userId\": \"630499234683ed43d56edd05\",\n \"isPaid\": \"true\",\n \"order\": 0,\n },\n {\n \"_id\": \"6304c1cbda84477b41b4bbfc\",\n \"userId\": \"630499234683ed43d56edd05\",\n \"isPaid\": \"true\",\n \"order\": 2\n },\n]\n",
"text": "I have an array of MongoDB collection -I just want to add the order property to all the objects but for all the specific _id I need to add the different `order valueLike this -Please let me know How do I implement this approach in MongoDB?\nI’m not using mongoose.",
"username": "Mohammad_Noushad_Siddiqi"
},
{
"code": "",
"text": "What you need is bulkWrite where you create an array of updateOne, each element being a different query and update operation.",
"username": "steevej"
}
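A minimal Node.js sketch of the bulkWrite approach suggested above, using the _id/order pairs from the question; `collection` is assumed to be an already-initialized driver Collection, and the _ids are used as the strings shown in the sample:

```js
const orders = [
  { _id: "630499244683ed43d56edd06", order: 7 },
  { _id: "6304c19bda84477b41b4bbfa", order: 5 },
  { _id: "6304c1b5da84477b41b4bbfb", order: 0 },
  { _id: "6304c1cbda84477b41b4bbfc", order: 2 },
];

await collection.bulkWrite(
  orders.map(({ _id, order }) => ({
    updateOne: { filter: { _id }, update: { $set: { order } } },
  }))
);
```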
]
| How to update some collection with different query parameters and with different set values? | 2022-08-26T07:39:05.146Z | How to update some collection with different query parameters and with different set values? | 1,253 |
null | []
| [
{
"code": "",
"text": "Hello i dont understand what happened i have a 455 mb database and i see terabytes for Atlas AWS Data Transfer (Internet) (N. Virginia) - Cluster0 and a lot of amount charged i really dont understand anything . Please help me .",
"username": "Gharths_Eudd_Nodet_DORELIEN"
},
{
"code": "",
"text": "@Gharths_Eudd_Nodet_DORELIEN this is community support … you may not get much billing help here.Go to Atlas and pull down the menu GetHelp → Create New Case and I think that may be the fastest way to get billing support.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| My month billing is 539.25 $ for an M10 with a db of 455 mb database | 2022-08-30T13:35:02.128Z | My month billing is 539.25 $ for an M10 with a db of 455 mb database | 824 |
null | [
"swift"
]
| [
{
"code": "print(\"Calling realm init at: \\(Date())\")print(\"Realm finished initializing at: \\(Date())\")public extension DispatchQueue {\n\n static let realmThread = DispatchQueue(\n label: \"realmThread\",\n qos: .background)\n \n}\nlet config = user.configuration(partitionValue: \"\\(user.id)\")\nDispatchQueue.realmThread.async {\n do {\n try autoreleasepool {\n print(\"Calling realm init at: \\(Date())\") // Up to 30 seconds between here\n let realm = try Realm(configuration: config, queue: DispatchQueue.realmThread)\n print(\"Realm finished initializing at: \\(Date())\") // and here\n self.realm = realm\n print(\"Realm set into local property at: \\(Date())\")\n completionHandler(true, nil)\n print(\"Completion handler fired at: \\(Date())\")\n }\n\n } catch(let error) {\n print(error)\n print()\n }\n}\n",
"text": "I have this setup for initializing realm in my app (there are a bunch of time stamps because I’m going crazy trying to debug why it’s so slow).\nI now notice that the only hang in this bit of code is when actually initializing Realm.\nBetween print(\"Calling realm init at: \\(Date())\") and print(\"Realm finished initializing at: \\(Date())\") it can take up to 30 seconds.One thing to note is that this issue only affects cold launches. Warm launches work great. Is there anything i am doing wrong in my realm initialization?",
"username": "Tudor_Andreescu"
},
{
"code": "",
"text": "Background QoS is almost certainly not what you want. It’s intended for long-running low importance tasks than can be safely suspended indefinitely. In practice background QoS queues are not executed at all in low-power mode and typically only run on efficiency cores when they are executed, and the OS has a lot of low-importance background QoS work that you’re effectively yielding to.Energy Efficiency Guide for iOS Apps: Prioritize Work with Quality of Service Classes has a decent description of what the QoS levels mean, but the short summary is that anything which will hard block the main thread until it’s done should be User-interactive, short background tasks should be User-initiated, and Utility is for background tasks where showing a progress bar is appropriate. Background is for long-running tasks where showing a progress bar doesn’t make sense because the user doesn’t care when it completes or that it’s running.",
"username": "Thomas_Goyne"
},
{
"code": "",
"text": "Thank you, this absolutely makes a ton of sense and completely solved my problem. Much obliged!",
"username": "Tudor_Andreescu"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Slow first Realm Init on Cold Launch | 2022-08-26T11:55:40.287Z | Slow first Realm Init on Cold Launch | 1,371 |
null | [
"node-js",
"mongoose-odm"
]
| [
{
"code": " router.post(\"/\", upload.single(\"img\") ,verifyTokenAndAuthorization, async (req,res)=>{\n \n try{ \n const result = await cloudinary.uploader.upload(req.file.path, {\n upload_preset: \"Mern_redux-practice\",\n resource_type: \"auto\",\n }) //cloudinary gets data, so the data must be here\n const newDayLinks = new Daylinks({\n cloudinary_id: result.public_id, \n title: req.body.title,\n content:req.body.content,\n img: result.secure_url,\n ident: req.body.ident, //this is the new fieldname and it is undefined\n })\n const savedDayLinks = await newDayLinks.save();\n res.status(200).json(savedDayLinks);\n } catch(error){ \n res.status(403)\n console.log(ident, title, content,img);\n throw new Error(\"Action failed\"); //this error is thrown\n }\n});\nconst DaylinksSchema = new mongoose.Schema({\n cloudinary_id: {type:String, required:true},\n img:{type:String, required:true},\n title:{type:String, required:true},\n content:{type:String, required:true},\n id:{type:String}, //that was the fieldname before\n ident:{type:String, required:true}, // that is the new fieldname I manually changed in the collection\n}, \n {timestamps:true},\n);\n\nmodule.exports = mongoose.model(\"Daylinks\", DaylinksSchema);\n",
"text": "After I manually changed a field name in mongoDb, my crud operations no longer work. I have read that in this case the old and new field name in mongoose. Schema must be specified, and have done so, but the database still does not take the values. Do I also have to specify something in my routes in node?That is the route in node:That is the current mongoose Schema:",
"username": "Roman_Rostock"
},
{
"code": "",
"text": "Hi @Roman_RostockCould you elaborate on this issue a bit further?Would be great if you can also post your MongoDB version and mongoose version as well.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hello kevinadi, I have got an answer on stack overflow. I must drop the collection and create it new. I have not done it yet, but how I learned in m001basic, I will drop the collection in the ui and create a new post, so the collection should be there again. I found out that I have an dublicate key error. When I post an entry the first one is uploaded, but when I want to post a second one, it throws me that error.",
"username": "Roman_Rostock"
},
{
"code": "",
"text": "in mongodb, you do not need to drop any collection just because you have changed a field in your schema. that is how documents databases work; you don’t need a concrete shape to save data.on the other hand, you need a schema just because the application you use may use an ORM or DAO library to map the data to objects. yet this does not means you need to remove/recreate/migrate all previous data when you have a design change.your old data still pretty much usable, it is just that your app does not know how to handle the change. for example, you can add a new field “version” to your new document schema to differentiate old and new, and use 2 schema. or you can add both fields in one schema, check for existence of old field but write new data with new field. and if you are the sole user of the data, you can invoke an update to all existing documents from\" “id” to “ident”. make note that none of these include a data removal, because mongodb gives all these kinds of freedom.",
"username": "Yilmaz_Durmaz"
}
]
| After I manually change a field name, the console tells me that the associated value is undefined | 2022-08-15T07:16:28.868Z | After I manually change a field name, the console tells me that the associated value is undefined | 2,547 |
[]
| [
{
"code": "",
"text": "Having an issue when connecting to mongodb with TLS configuration . getting the error below when trying to connect with the tls optionmongod service is up and runningOS : Rhel8\nMongoDB Version : 4.2.18\nimage1433×92 6.86 KB\n",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "Hi @Daniel_Inciong,Can you share your command line maybe with the options you are trying to use? Are you able to connect by any other mean?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi Maxime,Here is the command line use for connectingmongo admin --tls --host vbmfarirdbs01.mbtc.mgc.local -tlsCertificateKeyFile vbmfarirdbs01_4096.pem --port 37027 -u testuser",
"username": "Daniel_Inciong"
},
{
"code": "--host",
"text": "As you are using FIPS mode (which is an Enterprise Advanced feature) I suggest that you open directly a support ticket if you still cannot connect. I don’t see anything wrong except maybe the fact that your --host only includes one of the nodes when it should - in theory - contain the full connection string with the 3 nodes from your Replica Set and the RS name.That being said, I haven’t touched TLS & these security options for quite some time now so I could also completely miss something obvious.Maybe someone will have a better idea . But at least the support will be able to help you more directly.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Also worth looking at the monogd log, and ideally turning up the debug level on the log and seeing if there are clues there.",
"username": "John_Page"
},
{
"code": "",
"text": "hi john,thank you for your feedback , already turned up the log but cannot see any error messages. connection accepted then end connection im seeing on the logs",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "Hi MaBeuLux88Server is not yet on replica mode since i’m on the initial setup and configuration of a single node. will also check on with support. Thanks",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "Hi,This issue already resolved. Per checking crypto policy is set to FUTURE, resolved when i changed it to FIPS mode",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Socket Exception : Stream Truncated | 2022-02-28T05:19:20.024Z | Socket Exception : Stream Truncated | 3,901 |
|
null | [
"storage"
]
| [
{
"code": "",
"text": "Hi everybody,I’m working on a MongoDB database that will contain a few dozen collections, each containing time-based sensor data growing in 1 Hz interval. These collections will become very big (the order of magnitude is ~10 gb each) and it is mandatory that query performance is good.Since each collection grows simultaneously in a 1 Hz rhythm, my fear is that the files on the hard drive will become heavily “intertwined” (fragmented) and that query performance will suffer due to excessive head movement of the drive.My question:What “chunk size” (granularity) does MongoDB use when it needs to enlarge a collection’s file on disk ?\nIs it possible to configure that setting? I’d like to be able to set it to a very high value to reduce file fragmentation.Kind regards,\nArhur Hoornweg",
"username": "Arthur_Hoornweg"
},
{
"code": "",
"text": "Hi @Arthur_Hoornweg and welcome to the community!!As far as I know, the “chunk size” is not a configurable feature in MongoDB. However, disk fragmentation may or may not be the primary cause of performance issues you are thinking about.For example, using an SSD might lead to a better performance when compared to a spinning disc. Also, the right hardware configuration(RAM, CPU) according to the workload would also have sizeable impacts to performance.In my opinion, before going deep into disk fragmentation optimisation, it’s best to ensure that your deployment follow the settings recommended in the production notes and the operations checklist.However, while your understanding and concern regarding the fragmentation is valid, it is also important to note that, aside from growing collections and indexes, other parts of the system may also grow and contribute to fragmentation as time goes on (e.g. MongoDB logs, system logs, and other files outside of MongoDB’s control). Also if you’re using SSD, their wear levelling algorithm may also create fragmentation, in order to extend the life of the SSD.In general, query performance would be impacted by the following considerations:However, without a complete understanding of your use case, you may be interested in exploring the use of time-series collection available in MongoDB version 5.0 onwards.If you need further information, could you help me with a few details like:Thanks\nAasawari",
"username": "Aasawari"
}
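A small mongosh sketch of the time-series collection option mentioned above (MongoDB 5.0+); the collection name, field names, and granularity are placeholders chosen for this kind of 1 Hz sensor workload:

```js
db.createCollection("sensor_samples", {
  timeseries: {
    timeField: "ts",          // BSON date of the measurement
    metaField: "sensorId",    // groups measurements per sensor
    granularity: "seconds"    // matches a 1 Hz ingest rate
  }
})
```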
]
| Avoiding disk fragmentation | 2022-08-05T09:40:25.562Z | Avoiding disk fragmentation | 2,172 |
null | [
"dot-net",
"unity"
]
| [
{
"code": "",
"text": "Hi So, I’m not sure if this is the right place for this, but I’m new here, so sorry if I’m wrong or if I use expressions wrongly.We are currently looking for tools to gather analytics data from our customers for our next project (which will be a Unity game).\nWe also want to gather data to provide statistics to the player.I stumbled upon Realm while looking for tools to do this and I thought I can use it for both. But I’m not sure if I got some things right.So here are a few questions I stumbled upon while I built it into our current project to get a feel for it:Turn off data collection by switching Realm configuration:\nOur players must be able to turn off the collection of analytics data at any time.My first approach was to switch the Realm configuration to a local Realm after a player turned off data collection.\nThen, when a player enables data collection again, I would switch the configuration back to Sync.\nWhen I do that, I get the following error:RealmException: History type (as specified by the Replication implementation passed to the DB constructor) was not consistent across the sessionI guess this happens because the Realm was once synced and can’t be used as a local Realm now?I would use Flexible Sync for this approach because we don’t want to collect all the data we gather to display the statistics.Is this a practical approach?\nOr does the next approach make more sense?Turn off data collection by using two different Realms:My second approach is, to use two different Realms.\nOne local Realm for gathering statistic data to display to the player and one synced Realm to gather analytics data.\nWhen the player chooses to turn data collection off, I can simply stop to gather data with the synced Realm and log out from the sync session.I would use Partition Sync for this approach because I can simply sync all the data in the analytics Realm.The downside of this approach is that some data is duplicated.\nFor example, in our current project, a duplicate dataset would be how many settlers a player had in the last few in-game years.\nBut I guess the positives outweigh the negatives with this approach, right?General data collectionFor data collection, I’m thinking about a pull approach, rather than a push approach.So, rather than pushing data from different parts of the game to the Realm managing part of the game and writing them to the Realm, we want to collect data at fixed time intervals and then write them to the Realm in one go. This would probably happen asynchronously.As I understand it, the recommended approach is to write as much data as possible in one Write call to the Realm, rather than having many Write calls, right?Device Sync UserAs we don’t want to log in any of our users by e-mail and password, I log in every user as an anonymous user.\nIs this the right approach?\nOr should we rather not do that?",
"username": "Gentlymad_Stefan"
},
{
"code": "Realm.WriteCopyrealm.SyncSession.Stop()SyncSession.Start",
"text": "Hey Stefan, thanks for the well thought out post! Let me try to address some of the points here and pose a few questions.If I understand correctly, you have two types of data that you want to store in Realm - Analytics (information consumed by you as a developer) and Statistics (information consumed by the player). Do you want the Statistics to be synchronized to the server? This may be useful if you want to allow users to share progress across multiple devices or continue where they left off if they reinstall the game.And for your questions:Hope this clarifies things a little and happy to continue the conversation. As a big Unity fan, I’m always excited to see what games/projects people are building ",
"username": "nirinchev"
},
{
"code": "",
"text": "Hi Nikola,thanks for the quick and detailed answer!If I understand correctly, you have two types of data that you want to store in Realm - Analytics (information consumed by you as a developer) and Statistics (information consumed by the player). Do you want the Statistics to be synchronized to the server?Yes, that’s correct. Currently, we do not plan to synchronise statistics, but we could think about synchronising global progress (i.e. cross-session data).If you don’t really need to sync the statistics data, then using a combination of local + synchronized Realm might be best.By “combination” do you mean two separate realms? One for statistics and one for analytics?For analytics data, we’re in the process of releasing a feature called Asymmetric Sync.I’m looking forward to that! Is there a way to estimate the approximate workload of the Atlas database and therefore the approximate costs, when using Device Sync?Since we don’t have any analytics in our current game, we don’t have any figures regarding unique users. However, according to Steam, we have about 300 simultaneous players at any given time, with a peak of about 6000 simultaneous players, on the day we left early access.Of course, we hope that this will increase with the next project ",
"username": "Gentlymad_Stefan"
},
{
"code": "",
"text": "By “combination” do you mean two separate realms? One for statistics and one for analytics?Yes, exactly.Is there a way to estimate the approximate workload of the Atlas database and therefore the approximate costs, when using Device Sync?@Ian_Ward or @Drew_DiPalma do we have something for this?",
"username": "nirinchev"
},
{
"code": "",
"text": "Sure you can see our Billing docs here along with examples. -",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks, @nirinchev and @Ian_Ward, that helped me a lot!",
"username": "Gentlymad_Stefan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Using Realm to gather analytics and statistics data in Unity | 2022-08-23T07:06:39.645Z | Using Realm to gather analytics and statistics data in Unity | 2,995 |
null | [
"swift"
]
| [
{
"code": "class ServiceEntity: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var name: String\n @Persisted var iconName: String\n @Persisted var mainColor: String\n @Persisted var secondaryColor: String\n @Persisted var _partitionValue: String = AppInfo.partitionValue\n @Persisted var serviceType: ServiceType?\n \n convenience init(name: String, mainColor: String, secondaryColor: String, iconName: String, serviceType: ServiceType) {\n self.init()\n self.name = name\n self.mainColor = mainColor\n self.secondaryColor = secondaryColor\n self.iconName = iconName\n self.serviceType = serviceType\n }\n}\nclass ServiceType: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var name: String\n @Persisted var iconName: String\n @Persisted var mainColor: String\n @Persisted var secondaryColor: String\n @Persisted var _partitionValue: String = AppInfo.partitionValue\n \n convenience init(name: String, mainColor: String, secondaryColor: String, iconName: String) {\n self.init()\n self.name = name\n self.mainColor = mainColor\n self.secondaryColor = secondaryColor\n self.iconName = iconName\n }\n}\nServiceEntityserviceTypeServiceEntityserviceType",
"text": "Hi all! This error has been reported in GitHub too, but it seems there are still no solutions.\nAs the title states, app crashes when trying to update an object that has another realm object as a property.In my case I haveandCreating a new ServiceEntity object and setting serviceType property works flawlessly. Updating a ServiceEntity object modifying the serviceType property always crashes with*** Terminating app due to uncaught exception ‘RLMException’, reason: ‘Can’t set object of type ‘ServiceType’ to property of type ‘ServiceType’’It always happens ",
"username": "Nerkyator"
},
{
"code": "serviceTypeServiceType@ObservedRealmObject var service: ServiceEntity",
"text": "I forgot to add some details:",
"username": "Nerkyator"
}
]
| App crashes with "Can't set object of type" | 2022-08-30T07:07:10.654Z | App crashes with “Can’t set object of type” | 1,491 |
null | [
"node-js",
"atlas-cluster"
]
| [
{
"code": "mongodb+srv://myusername:[email protected]/?retryWrites=true&w=majoritystderr:\nnpm WARN lifecycle The node binary used for scripts is /home/c1439621c/nodevenv/1lalana-server/14/bin/node but npm is using /opt/alt/alt-nodejs14/root/usr/bin/node itself. Use the `--scripts-prepend-node-path` option to include the path for the node binary npm was executed with.\nMongoServerSelectionError: connect ECONNREFUSED 13.245.246.5:27017\n at Timeout._onTimeout (/home/c1439621c/nodevenv/1lalana-server/14/lib/node_modules/mongodb/lib/sdam/topology.js:312:38)\n at listOnTimeout (internal/timers.js:557:17)\n at processTimers (internal/timers.js:500:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'ac-fwv9tmo-shard-00-01.inp06zb.mongodb.net:27017' => [ServerDescription],\n 'ac-fwv9tmo-shard-00-02.inp06zb.mongodb.net:27017' => [ServerDescription],\n 'ac-fwv9tmo-shard-00-00.inp06zb.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-109yxn-shard-0',\n commonWireVersion: 13,\n logicalSessionTimeoutMinutes: 30\n }\n}\n0.0.0.0/0 (includes your current IP address)",
"text": "My app can not connect to MongoDB Atlas.The Url:\nmongodb+srv://myusername:[email protected]/?retryWrites=true&w=majorityThe error:It work fine in local and in Heroku, but not in cPanel.\nThe IP Access List in MongoDB Atlas is already setup as Everywhere:\n0.0.0.0/0 (includes your current IP address)",
"username": "Tiavina_Mik"
},
{
"code": "0.0.0.0/0 (includes your current IP address)",
"text": "Hi @Tiavina_Mik,It work fine in local and in Heroku, but not in cPanel.\nThe IP Access List in MongoDB Atlas is already setup as Everywhere:\n0.0.0.0/0 (includes your current IP address)I would recommend going over the following post’s discussion that also mentions cPanel connection attempts and the associated solutions involving port opening from the cPanel end. This is in addition to the fact you are able to connect to the cluster locally and from Heroku. The errors also mentioned in the stack overflow post that the OP posted also looks very similar to the one you have posted here.If you’re still having issues connecting after attempting any of the suggested solution(s) on that post, please let us know what was attempted and any new errors.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoServerSelectionError: connect ECONNREFUSED 13.xxx.xxx.x:27017, MongoDB Atlas | 2022-07-29T14:03:18.438Z | MongoServerSelectionError: connect ECONNREFUSED 13.xxx.xxx.x:27017, MongoDB Atlas | 2,551 |
null | [
"crud",
"golang"
]
| [
{
"code": "return m, err",
"text": "I am doing an UpdateOne with upsert true.What do I set the ID field as when I am doing an upsert?log.Println(“Saving Partner…”)\nq := bson.D{{Key: “_id”, Value: i.ID}}\nupdate := bson.D{{\nKey: “$set”,\nValue: i,\n}}\nm, err := s.C.UpdateOne(c, q, update, &options.UpdateOptions{\nUpsert: newTrue(),\n})",
"username": "Marty_Weel"
},
{
"code": "_id",
"text": "an ID field, _id, means you already have a document, so the operation will not be an upsert (update-or-insert) but instead be just an update.upsert is for queries without an ID such that if there is no document to “match” your criteria, then a new one with the content you send to the server will be inserted with a new server-generated ID.",
"username": "Yilmaz_Durmaz"
}
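A quick mongosh illustration of the upsert semantics discussed above; the collection name, _id, and fields are placeholders:

```js
// If no document matches the filter, upsert inserts one using the filter's _id
db.partners.updateOne(
  { _id: "partner-123" },                    // placeholder _id
  { $set: { name: "Acme", active: true } },
  { upsert: true }
)
// upsertedId will be "partner-123" when the document did not exist before
```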
]
| UpdateOne with upsert what is the ID field | 2022-08-29T19:07:24.846Z | UpdateOne with upsert what is the ID field | 3,138 |
null | [
"queries",
"node-js"
]
| [
{
"code": "{\n\"code\": 200,\n \"message\": \"16700 record found\",\n \"result\": [\n{\n \"liquidation_status\": {\n \"start\": {\n \"date\": \"16/03/2014\",\n \"artical_number\": 1050\n },\n \"end\": {\n \"date\": null,\n \"artical_number\": null\n }\n },\n \"_id\": \"62d068cb10d394e4b5922874\",\n \"company_name\": \"شركة سعيد وعبدالله أبناء علي بن سعيد الناعبي للتجارة\",\n \"commercail_number\": 114586\n },\n{\n \"liquidation_status\": {\n \"start\": {\n \"date\": \"16/03/2014\",\n \"artical_number\": 1050\n },\n \"end\": {\n \"date\": null,\n \"artical_number\": null\n }\n },\n \"_id\": \"62d068cb10d394e4b5922875\",\n \"company_name\": \"شركة سعيد وعبدالله أبناء علي بن سعيد الناعبي للتجارة\",\n \"commercail_number\": 114486\n },\n]\n}\n",
"text": "i want to search from the beginning of the commercail_number which is an number like 114486\nso i want if i search with 11 it will return me all which start from 11",
"username": "Mehmood_Zubair"
},
{
"code": "result\"11\"\"commercail_number\"\"result\"\"commercail_number\"$toStringdb.collection.aggregate(\n{\n '$addFields': {\n filteredResults: {\n '$filter': {\n input: '$result',\n cond: {\n '$regexMatch': {\n input: { '$toString': '$$this.commercail_number' },\n regex: /^11/\n }\n }\n }\n }\n }\n}\n\"filteredResults\"\"result\"\"commercail_number\"\"11\"$addFields$filter$regexMatch$toString$gte$lte$filterresult\"commercail_number\"$gte$lte",
"text": "Hi @Mehmood_Zubair - Welcome to the community!i want to search from the beginning of the commercail_number which is an number like 114486\nso i want if i search with 11 it will return me all which start from 11Based off the single sample document provided, I assume you then want both objects inside of the result array to be returned since they begin with \"11\". Is this assumption correct?Additionally, could you also provide the following information:In the meantime, if you want to specifically filter out all other results perhaps the following example may help:The above would result in document(s) that have an additional field called \"filteredResults\" which contains the objects within the \"result\" array that have a \"commercail_number\" value beginning with \"11\".For your reference with regards to the above example:Another method that may work but will depend on the use case details is possibly a $gte and $lte (E.g. If you’re searching for it to begin with 11 then you could use a $filter for objects within the result array with \"commercail_number\" values to be $gte 11,000 and $lte 11,999). But again to re-iterate, this will depend on your use case.As with any of these example code snippets, please test thoroughly in a test environment to verify it meets all your use case(s) and requirements.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to search by the number from beginning of the numeric key | 2022-08-23T08:06:40.970Z | How to search by the number from beginning of the numeric key | 2,086 |
[
"aggregation",
"queries",
"node-js",
"data-modeling",
"mongoose-odm"
]
| [
{
"code": "Load moreskiplimitconst aggregationResult = await ResourceCommentModel.aggregate()\n .match({\n resourceId: new mongoose.Types.ObjectId(req.params.resourceId),\n parentCommentId: undefined\n })\n .sort({ 'createdAt': -1 })\n .facet({\n comments: [\n { $skip: (page - 1) * pageSize },\n { $limit: pageSize },\n {\n $lookup: {\n from: 'resourcecomments',\n localField: '_id',\n foreignField: 'parentCommentId',\n as: 'replies',\n }\n }\n ],\n totalCount: [{ $group: { _id: null, count: { $sum: 1 } } }]\n })\n .exec();\npagecomments: [\n { $match: { createdAt: { $gt: new Date(continueAfterTimestamp) } } },\n { $limit: pageSize },\n {\n $lookup: {\n from: 'resourcecomments',\n localField: '_id',\n foreignField: 'parentCommentId',\n as: 'replies',\n }\n }\n],\n",
"text": "I’m loading user comments on my website. Initially, I only load a subset of comments and show a Load more button which loads the next page and appends it to the end of the comments list.Loading another page works by using the usual skip and limit aggregation steps in MongoDB:Problem: If new comments have been posted in the meantime, comments from page one have been pushed to page 2, and we are loading duplicate comments (and showing them in the UI).One option would be to not rely on the page number, but instead start after the last loaded timestamp:However, there are 2 problems with this approach:Although unlikely, if two comments were posted at the exact same millisecond, one of them will not be loaded.The bigger problem: I’m planning to later allow sorting comments by upvotes, and then I don’t have a way to start after the last loaded document anymore.",
"username": "Florian_Walther"
},
{
"code": "",
"text": "Hi @Florian_Walther ,Why won’t you pre load comments for first 4-5 pages and name in the facet comments1-5?Otherwise you can use unique commentId and sort by them and always pass the last commentId to the next query, when you need to use upvotes add them to the beginning of the sort and sort on (upvotes, commentId)Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "_idmatch$lt/$gt: commentId",
"text": "Thank you for your answer!Why won’t you pre load comments for first 4-5 pages and name in the facet comments1-5?Wouldn’t this kinda destroy the purpose of pagination (to load less data at once)? Also, it doesn’t solve my sorting problem, does it?Otherwise you can use unique commentId and sort by them and always pass the last commentId to the next queryYea that’s what I’m doing right now after finding out that the _id contains a timestamp.when you need to use upvotes add them to the beginning of the sort and sort on (upvotes, commentId)I don’t think this will work. To start after the last comment id, I have to use match with $lt/$gt: commentId. If I sort by upvotes first, the ids will not be ordered from smallest to largest anymore.",
"username": "Florian_Walther"
},
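{
"text": "One way to make the keyset approach work together with an upvote sort (a sketch, not from the thread; it assumes a sort of upvotes descending then _id descending, and that lastUpvotes, lastId and pageSize hold values taken from the previously loaded page):",
"code": "db.resourcecomments.aggregate([\n  {\n    $match: {\n      $or: [\n        { upvotes: { $lt: lastUpvotes } },\n        { upvotes: lastUpvotes, _id: { $lt: lastId } }\n      ]\n    }\n  },\n  { $sort: { upvotes: -1, _id: -1 } },\n  { $limit: pageSize }\n])"
},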
{
"code": "facet.facet({\n comments: [\n ...(continueAfterId ? [{ $match: { _id: { $lt: new mongoose.Types.ObjectId(continueAfterId) } } }] : []),\n { $sort: { _id: -1 } },\n { $limit: pageSize },\n ],\n lastComment: [\n { $sort: { _id: 1 } },\n { $limit: 1 },\n ]\n})\n",
"text": "Also, how do I know when I reached the last document? At the moment, I’m returning the last document as a separate facet step so I can check client side if we received this document in our dataset yet. Is there a better way to handle this?",
"username": "Florian_Walther"
},
{
"code": "async function fetchComments(pageSize: number, ...) {\n\n ...\n\n const aggregationResult = await ResourceCommentModel.aggregate()\n .match({\n ...\n })\n .facet({\n comments: [\n ...\n { $sort: { _id: sort } }, \n { $limit: pageSize + 1 },\n ],\n })\n\n...\n\n return {\n resourceComments: resourceCommentsPopulated.slice(0, pageSize),\n endOfPaginationReached: resourceComments.length <= pageSize,\n }\n}\n",
"text": "Returning the last document was buggy in certain situations. I now figure out the end of pagination by returning 1 document more than the page size, and then I check if the length of the results is pageSize + 1, which means that there are more documents to come. What do you think about this approach?",
"username": "Florian_Walther"
},
{
"code": "",
"text": "Another option is to hold a cursor with a batch size of your page size and run a “next” command every time the page is being rolled forward…",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Interesting, thank you. I will try to look into this!",
"username": "Florian_Walther"
}
]
| Resume pagination after specific document | 2022-08-26T10:53:40.418Z | Resume pagination after specific document | 2,718 |
|
null | []
| [
{
"code": "The connection to the server localhost:8080 was refused - did you specify the right host or port?\n",
"text": "hello,the cluster were created when it were want to works but often i happen a firewall forbidden who break the working of this mongodb by kubernetes kind thus the error below :thanks you in advance to help myself pass the firewall forbidden,Regards.Dorian ROSSE.",
"username": "Dorian_ROSSE"
},
{
"code": "",
"text": "MongoDB instances are mostly run at ports 27000 or 27017, but your error shows port 8080 which is used mostly by a web server.check your config files first to see the actual ports used for your web app and MongoDB server. and then check Kubernetes setting for the forwarded ports in and out containers.PS: that does not seem a firewall issue for now.",
"username": "Yilmaz_Durmaz"
}
]
| The cluster is created but I get a forbidden error from the firewall | 2022-08-29T14:18:33.107Z | The cluster is created but I get a forbidden error from the firewall | 1,225 |
null | []
| [
{
"code": "",
"text": "Hello,I have a question regarding Atlas Encryption at Rest using Customer Key Management.\nAs far as I understand it the customer must provide its Key Version Resource ID from its own KMS (GCP/AWS/Azure) and then:is there any other step I missed?I would be grateful for confirmation that my assumption is correctBR\nArek",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "Broadly correct: note one great thing about this model is that you can do light weight key rotation without having to re-write all the data",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "@Andrew_Davidson\nthank you for your answer. Yes, it is true, this is a huge advantage if Atlas can do lightweight key rotation without having to re-write all the data!\nI have one more question. Are the customer’s unique Master Key and the Master data key two separate keys ? I mean one is held in the customer cloud provider KMS and the second in Atlas underlying cloud provider KMS ?best\nArek",
"username": "Arkadiusz_Borucki"
}
]
| Atlas Encryption at Rest using Customer Key Management | 2022-08-27T08:29:24.309Z | Atlas Encryption at Rest using Customer Key Management | 1,339 |
null | [
"atlas-triggers"
]
| [
{
"code": "",
"text": "Hi, is there a way to create a schedule trigger that runs a function, using the API ?",
"username": "WILLIAM_LATORRE_LEAL"
},
{
"code": "",
"text": "Welcome back @WILLIAM_LATORRE_LEAL !Yes, you can create and manage functions and triggers using the Atlas App Services Admin API:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie, thanks for your reply, in fact i’m looking for some mongodb API call, to create an Atlas project\nschedule trigger that execute a function, i’m not sure if it is possible.as well if is possible how to create variables and functions in realm(app services) calling the mongodb Atlas API, thanks",
"username": "WILLIAM_LATORRE_LEAL"
},
{
"code": "function_idconfigschedule",
"text": "in fact i’m looking for some mongodb API call, to create an Atlas project\nschedule trigger that execute a function, i’m not sure if it is possible.Hi @WILLIAM_LATORRE_LEAL,The Atlas App Services API links I provided earlier document managing Triggers & Functions (create, list, update, delete, etc). You need to create a function before referencing it in the function_id parameter when creating a trigger via the API. The config object for a trigger defines trigger configuration parameters including the schedule if applicable.is possible how to create variables and functions in realm(app services)Yes, this is possible. App Services API calls manage Atlas Functions written in JavaScript so you can use variables and import packages for more complex functions.I suggest you start by creating some triggers and functions via the Atlas UI. You can then use the App Services API to list your functions and triggers as a reference for creating the same definitions programmatically.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie, yes i saw the links, i.e to create a function:\n{\n“can_evaluate”: {},\n“name”: “string”,\n“private”: true,\n“source”: “string”,\n“run_as_system”: true\n}the main questions is, how to run the curl command specifying parameters like, Org Id, Project Id etc and the body of the function?, thnaks in advance.",
"username": "WILLIAM_LATORRE_LEAL"
},
{
"code": "curlAuthorizationsourcecurlzshGROUP_IDAPP_IDACCESS_TOKENcurl --request POST \"https://realm.mongodb.com/api/admin/v3.0/groups/$GROUP_ID/apps/$APP_ID/functions\" \\\n --header \"Authorization: Bearer $ACCESS_TOKEN\" \\\n --header 'Content-Type: application/json' \\\n --data '{ \"name\": \"gday\", \"source\": \"exports = function(name) { return `Hello, ${name ?? \\\"stranger\\\"}!` }\"}'\n\n{\"_id\":\"630869acf9ec00b1942c2235\",\"name\":\"gday\"}%\n# Note that this uses GET request method per the API docs\ncurl --request GET \"https://realm.mongodb.com/api/admin/v3.0/groups/$GROUP_ID/apps/$APP_ID/functions\" \\\n --header \"Authorization: Bearer $ACCESS_TOKEN\" \\\n --header 'Content-Type: application/json' \n\n[{\"_id\":\"630869acf9ec00b1942c2235\",\"name\":\"gday\",\"last_modified\":1661495724}]%\n$ curl --request POST \"https://realm.mongodb.com/api/admin/v3.0/groups/$GROUP_ID/apps/$APP_ID/debug/execute_function?run_as_system=true\" \\\n --header \"Authorization: Bearer $ACCESS_TOKEN\" \\\n --header 'Content-Type: application/json' \\\n --data '{ \"name\": \"gday\" }' \n \n{\"error_logs\":null,\"logs\":null,\"result\":\"Hello, stranger!\",\"stats\":{\"execution_time\":\"554.776µs\"}}%\n$ curl --request POST \"https://realm.mongodb.com/api/admin/v3.0/groups/$GROUP_ID/apps/$APP_ID/debug/execute_function?run_as_system=true\" \\\n --header \"Authorization: Bearer $ACCESS_TOKEN\" \\\n --header 'Content-Type: application/json' \\\n --data '{ \"name\": \"gday\", \"arguments\": [\"Stennie\"] }'\n\n{\"error_logs\":null,\"logs\":null,\"result\":\"Hello, Stennie!\",\"stats\":{\"execution_time\":\"590.229µs\"}}%\ncurl",
"text": "Hi @WILLIAM_LATORRE_LEAL ,The App Services API docs include some example of making requests via curl:The documentation for API methods (for example, Creating a new Function) describes the resource endpoints, path parameters, request body schemas, and response schemas.In your example of creating a function, the source string is the body of the function:The stringified source code for the function. The code must be valid ES6.The documentation links above have more detailed information, but I created a few quick examples using curl via zsh.These assume that shell variables of GROUP_ID, APP_ID, and ACCESS_TOKEN have been appropriately set:However, if you’re working with any significant code in Atlas Functions, I recommend using your favourite programming language (eg Python) or a tool like Postman to work with the App Services API. Since curl runs in a shell environment, there is generally more effort involved to debug API calls and properly escape parameters that should (or should not be) interpolated by the shell.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "thanks Stennie, regarding about how to create a scheduled trigger for a Atlas project in one cluster that execute a function, may you send me some example calling the curl ?, best regards",
"username": "WILLIAM_LATORRE_LEAL"
},
{
"code": "curlcurl --request POST \"https://realm.mongodb.com/api/admin/v3.0/groups/$GROUP_ID/apps/$APP_ID/functions\" \\\n --header \"Authorization: Bearer $ACCESS_TOKEN\" \\\n --header 'Content-Type: application/json' \\\n --data '{ \"name\": \"gday\", \"source\": \"exports = function(name) { return `Hello, ${name ?? \\\"stranger\\\"}!` }\"}'\n\n{\"_id\":\"630869acf9ec00b1942c2235\",\"name\":\"gday\"}%\n",
"text": "may you send me some example calling the curlHi @WILLIAM_LATORRE_LEAL,There are four examples using curl in my previous post. Click on the lines prefixed with to show the snippet.For example:Create an Atlas Function with curlRegards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie, when create a trigger (i.e a trigger that pause one cluster) in one Atlas project, automatically it creates an “app services” called Triggers, this “new app” contains atlas triggers and shows “no environment”, how to get the right APP_ID ?.the next command doesn´t show anything( in fact there is one app, the one created automatically)$ curl --request GET --header ‘Authorization: Bearer $ACCESS_TOKEN’ \\ https://realm.mongodb.com/api/admin/v3.0/groups/$GROUP_ID/apps\n",
"username": "WILLIAM_LATORRE_LEAL"
}
]
| How to create an Atlas schedule trigger using API | 2022-08-25T11:11:43.818Z | How to create an Atlas schedule trigger using API | 4,206 |
null | [
"aggregation"
]
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"6175f6e5f2363e1cc9aa5835\"\n },\n \"#CHROM\": 1,\n \"POS\": 286747,\n \"ID\": \"rs369556846\",\n \"REF\": \"A\",\n \"ALT\": \"G\",\n \"QUAL\": \".\",\n \"FILTER\": \".\",\n \"INFO\": [{\n \"RS\": 369556846,\n \"RSPOS\": 286747,\n \"dbSNPBuildID\": 138,\n \"SSR\": 0,\n \"SAO\": 0,\n \"VP\": \"0x050100000005150026000100\",\n \"WGT\": 1,\n \"VC\": \"SNV\",\n \"CAF\": [{\n \"$numberDecimal\": \"0.9381\"\n }, {\n \"$numberDecimal\": \"0.0619\"\n }],\n \"COMMON\": 1,\n \"TOPMED\": [{\n \"$numberDecimal\": \"0.88411856523955147\"\n }, {\n \"$numberDecimal\": \"0.11588143476044852\"\n }]\n },\n [\"SLO\", \"ASP\", \"VLD\", \"G5\", \"KGPhase3\"]\n ]\n}\n{\n \"_id\": {\n \"$oid\": \"6175f6e5f2363e1cc9aa583b\"\n },\n \"#CHROM\": 1,\n \"POS\": 911220,\n \"ID\": \"rs35331099\",\n \"REF\": \"CT\",\n \"ALT\": \"C\",\n \"QUAL\": \".\",\n \"FILTER\": \".\",\n \"INFO\": [{\n \"RS\": 35331099,\n \"RSPOS\": 911221,\n \"dbSNPBuildID\": 130,\n \"SSR\": 0,\n \"SAO\": 0,\n \"VP\": \"0x05010008000517013e000200\",\n \"GENEINFO\": \"LOC284600:284600\",\n \"WGT\": 1,\n \"VC\": \"DIV\",\n \"CAF\": [{\n \"$numberDecimal\": \"0.2492\"\n }, {\n \"$numberDecimal\": \"0.7508\"\n }],\n \"COMMON\": 1,\n \"TOPMED\": [{\n \"$numberDecimal\": \"0.21621750764525993\"\n }, {\n \"$numberDecimal\": \"0.78378249235474006\"\n }]\n },\n [\"RV\", \"SLO\", \"INT\", \"ASP\", \"VLD\", \"G5A\", \"G5\", \"GNO\", \"KGPhase1\", \"KGPhase3\"]\n ]\n}\nsep_vals = [doc for doc in src_coll_obj.aggregate([{'$group': {'_id': 'null', 'spl_field': {'$addToSet': '$INFO.0.VC'}}}])][0]['spl_field']['SNV', 'DIV']",
"text": "Examples of documents:Task:\nget the values of the INFO.0.VC field in a unique form.PyMongo query:\nsep_vals = [doc for doc in src_coll_obj.aggregate([{'$group': {'_id': 'null', 'spl_field': {'$addToSet': '$INFO.0.VC'}}}])][0]['spl_field']Expected result:\n['SNV', 'DIV']The result obtained:\nempty two-dimensional array.Question:\nIs there an error in the query? Or is it a MongoDB bug?",
"username": "Platon_workaccount"
},
{
"code": "[ [ 'SNV' ] , [ 'DIV' ] ]\n",
"text": "You could replace $INFO.0.VC with $INFO.VC. You will getwhich is almost what you want and as easy to use.To get exactly what you want, you could use:Personally, I prefer the simpler pipeline at the cost of a little bit of extra work at the application level in order to leave as much cycle as possible to the server. I think it scales better.",
"username": "steevej"
},
{
"code": "\"0\"INFO$arrayElemAt{$addToSet: {$getField: {field: \"VC\", input: {$arrayElemAt: [\"$INFO\", 0]}}}}\n$getField$addField$group$INFO$addToSetINFO0.VC",
"text": "‘$INFO.0.VC’This is only proper syntax if you are matching first element, in aggregation expression this means field named \"0\" of subdocument INFO. If you want to fetch it the way you do, you have to use $arrayElemAt expression. In your case it would be:$getField is new in version 5.0 - it’s possible to do this in earlier version but a little less readable. Simplest would be to have another $addField stage before $group that sets INFO0 to first element of $INFO and then $addToSet can use INFO0.VC.Asya",
"username": "Asya_Kamsky"
},
{
"code": "$addToSetINFO.0.VCINFO.VC",
"text": "@steevej @Asya_Kamsky thanks for the answers!The documentation describes the only way to access subdocuments in arrays. But it turns out that $addToSet violates it. Surprises like this complicate application development.Personally, I prefer the simpler pipelineSometimes simplicity is about uniformity. Example. In my case, the application receives the field path from user, validates it, and uses it to retrieve all uniquified values of a given field. Validation is performed in all possible paths. Paths are gathered automatically strictly according to query rules. If referring to the first post, a valid path is INFO.0.VC. If the user types INFO.VC, the program will throw an exception. Of course it is possible to rewrite the validator allowing to specify paths without array indexes. But this is a complication, not a simplification.",
"username": "Platon_workaccount"
},
{
"code": "{ \"F\": [ 1, 2, 3] } /* \"F.0\" is the first element of array \"F\" */\n{ \"F\": { \"0\": 5 } } /* \"F.0\" is the field named \"0\" in subdocument \"F\" */\n",
"text": "The problem with all of this is that “Field.0” is ambiguous since MongoDB schema allows both:This makes for complexities when parsing some queries. What would “F.0.0” mean if “F” was an array of subdocuments and one of them had field named “0”? Anyway, these are internal issues that the user shouldn’t worry about, but unfortunately early choices of syntax means sometimes your code isn’t as simple as it could be. We do have a query team that’s thinking forward about how we might be able to simplify the language and make it more consistent (but without breaking any of the existing applications).Asya\nP.S. the page you linked to talks about query expressions - aggregation expressions are different and have different ways to access parts of document.",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "It seems to me that naming fields as numbers is extremely irrational (are there any examples that refute this?). So in a similar conflict, 0 as an array index should have priority.",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "I don’t think the question is whether it’s rational - it’s allowed and has been allowed since the beginning of time so we cannot just change the behavior of the database without it breaking applications which have been relying on this.And frankly I don’t think it’s irrational if you think about keys in subdocuments being named after account numbers or such. It won’t likely be “0” it might be “148235” but that’s still a number.Asya",
"username": "Asya_Kamsky"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"61fd0980b477c135a9fc6284\"\n },\n \"q\": 1,\n \"w\": {},\n \"e\": {\n \"a\": 10,\n \"b\": 11,\n \"c\": {\n \"d\": -11,\n \"e\": -10,\n \"f\": -9\n }\n },\n \"r\": \".\",\n \"t\": [],\n \"y\": [100, 101, {\n \"l\": 0.01,\n \"m\": 0.02,\n \"n\": 0.03\n },\n [{\n \"h\": 0,\n \"z\": 0\n }, {\n \"m\": 0,\n \"z\": 0\n }]\n ]\n}\n{'_id': 'null', 'out': {'$addToSet': '$y.l'}}_id: \"null\"\nout: Array\n 0: Array\n 0: 0.01\n{'_id': 'null', 'out': {'$addToSet': '$y.h'}}_id: \"null\"\nout: Array\n 0: Array\n",
"text": "You could replace $INFO.0.VC with $INFO.VC .By the way, this does not work for documents nested in a two-dimensional array. Toy example:The subdocument is nested in a one-dimensional array. Query without specifying indexes works.\n{'_id': 'null', 'out': {'$addToSet': '$y.l'}}The subdocument is nested in a two-dimensional array. A query without specifying the subdocument path does not output a result.\n{'_id': 'null', 'out': {'$addToSet': '$y.h'}}Perhaps this is yet another argument for the fact that explicitly specifying field path is always better than hidden one.",
"username": "Platon_workaccount"
},
{
"code": "$addToSet",
"text": "In general, $addToSet expects a scalar so if the fields you want to add to the set are in an array you would have to unwind the array first.I’d say that it’s an accurate generalization that the more complex your schema/documents can be, the more complicated your queries/pipelines will end up being.Asya",
"username": "Asya_Kamsky"
},
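{
"text": "As a concrete sketch of that unwinding for the toy document above (assuming it lives in a collection called coll): unwinding y twice flattens both the outer array and the inner array of subdocuments, after which $addToSet can collect the scalar values; elements where the path does not exist should simply contribute nothing to the set.",
"code": "db.coll.aggregate([\n  { $unwind: \"$y\" },  // 100, 101, { l, m, n }, [ { h, z }, { m, z } ]\n  { $unwind: \"$y\" },  // non-array elements pass through, the inner array is flattened\n  { $group: { _id: null, out: { $addToSet: \"$y.h\" } } }  // -> out: [ 0 ]\n])"
},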
{
"code": "",
"text": "I propose to implement in future versions at least one of the solutions for this conflict:Interpret numbers as field names:\n’f1.`0`.f2.`0`'\n{’$str’: ‘$f1.0.f2.0’}Interpret numbers as array element indexes:\n’f1.0.f2.0’\n{’$idx’: ‘$f1.0.f2.0’}",
"username": "Platon_workaccount"
}
]
| $group + $addToSet don't work with nested fields | 2021-12-24T20:54:19.197Z | $group + $addToSet don’t work with nested fields | 4,045 |
null | [
"queries"
]
| [
{
"code": "",
"text": "I want to build a “feed” that gets multiple objects from a realm. My objects are organized together in a single realm, but I haven’t been able to figure out how to query multiple object types in a single query. Is this possible and what would the query look like? Each object does have some shared fields for things like sorting and filtering.",
"username": "Matthew_Brimmer"
},
{
"code": "class Video {\n static schema: ObjectSchema = {\n name: 'Video',\n primaryKey: '_id',\n properties: {\n _id: 'objectId'\n .... other props\n }\n}\nclass Audio {\n static schema: ObjectSchema = {\n name: 'Audio',\n primaryKey: '_id',\n properties: {\n _id: 'objectId'\n .... other props\n }\n}\nclass Image {\n static schema: ObjectSchema = {\n name: 'Image',\n primaryKey: '_id',\n properties: {\n _id: 'objectId'\n .... other props\n }\n}\n",
"text": "Is this a data modeling question? Should I be merging objects into a single type? If this is the case, it makes it seem like a realm should only have a single object schema type unless you can query multiple object types.Example:Would it be better to merge these into a single object Media?",
"username": "Matthew_Brimmer"
}
]
| Query multiple object types from realm | 2022-08-27T18:56:48.566Z | Query multiple object types from realm | 1,379 |
null | [
"performance"
]
| [
{
"code": "db.books.insertOne(\n {\n \"_id\" : 1,\n \"item\" : \"XYZ123\",\n \"stock\" : 15\n }\n);\ndb.books.update(\n { _id: 1 },\n {\n $set: { item: \"XYZ123\" }, // same value !\n $setOnInsert: { stock: 10 }\n },\n { upsert: true }\n)\n",
"text": "Given:with an index on item.\nWhat happens with the index and the document when I perform:Does mongodb recognize that nothing has changed? Or get document as well as index, or at least one of them updated and I should try to avoid such a query if possible?",
"username": "Jens_Lippmann"
},
{
"code": "db.test12.updateOne({x:1},{$set : {x: 1}})\n{ acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 0,\n upsertedCount: 0 }\n",
"text": "Hi @Jens_Lippmann ,If the updated dcoument does not change any values the update will only use the read required to perform it…You can test that and compare the result document from the update:As you can see although x:1 was matched for the update the number of modified documents was 0…Ty\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Update a field to its current value | 2022-08-29T12:37:29.320Z | Update a field to its current value | 2,502 |
null | [
"aggregation",
"node-js"
]
| [
{
"code": "{\n _id: 1,\n title : \"Intern\",\n post : \"Testing\",\n company_id : 2332,\n desc : \"......\",\n location : [\n { type: \"Point\", coordinates: [longitude, latitude] },\n { type: \"Point\", coordinates: [longitude, latitude] },\n { type: \"Point\", coordinates: [longitude, latitude] }\n ],\n someotherinfo : '......'\n}\n\n",
"text": "Hello guys,I am working on jobseeker app. Here companies come and create some Job PostSo, Job_Posting Schema :Now, I have latitude and longitude , I want the jobs record within a radius of 100 kmwhen I have saved one location in one record.\nThen we can use geoNear,But Here, How can I do .Thanks ",
"username": "Praveen_Gupta"
},
{
"code": "{\n type: \"MultiPoint\",\n coordinates: [\n [ longitude, latitude ],\n [longitude, latitude ],\n [ longitude, latitude ],\n [ longitude, latitude ],\n ...\n ]\n}\n",
"text": "Hi @Praveen_Gupta ,What is the reason for having an array of geo points?This array cannot be indexed with a regular 2d or 2dsphere indexes the way it is.You can have them as a multipoint type:This should allow you to see if any of those point are in a radius of 100km…If that does not work for you, Perhaps consider having a document per geo point if those all reference to the same company.Ty\nPavel",
"username": "Pavel_Duchovny"
},
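{
"text": "To sketch the query side of that (an illustration only; it assumes the field is called location, stores a MultiPoint as above, is covered by a 2dsphere index, and that lng / lat are the search coordinates): $near with $maxDistance in meters returns the job postings that have an indexed point within the radius.",
"code": "db.job_postings.createIndex({ location: \"2dsphere\" })\n\ndb.job_postings.find({\n  location: {\n    $near: {\n      $geometry: { type: \"Point\", coordinates: [ lng, lat ] },\n      $maxDistance: 100000  // 100 km, expressed in meters\n    }\n  }\n})"
},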
{
"code": "",
"text": "Thanks @Pavel_Duchovny .\nCan we save city name with these coordinates ?",
"username": "Praveen_Gupta"
},
{
"code": "",
"text": "Not sure what you mean?Inside the geo object you can save only the geo format.If you want to save an array of the cities you can.Best\nPavel",
"username": "Pavel_Duchovny"
}
]
| How to get distance in array of location | 2022-08-29T09:05:11.219Z | How to get distance in array of location | 1,438 |
null | [
"data-modeling",
"sharding"
]
| [
{
"code": "",
"text": "Hello Team,\ncan we select my data base in multi-region ( Indian and America )\nand have separate shards for each region ?",
"username": "Ganesh_Wankhede"
},
{
"code": "",
"text": "Take a look at the Segmenting Data by Location tutorial as that sounds like it’s what you’re looking for.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Thanks for the response,\nI followed it, but I see two shards are getting created in same Region, not in diffrent countries ",
"username": "Ganesh_Wankhede"
},
{
"code": "regionstatecountry",
"text": "Hi @Ganesh_Wankhede,Assuming you are planning an on-premises deployment, you can choose whatever granularity of locality and shard zones fits your use case. If you are using MongoDB Atlas, it has a more streamlined Global Cluster feature for M30 or greater sharded clusters.In sharded clusters, you can create zones of sharded data based on the shard key. You can associate each zone with one or more shards in the cluster. A shard can associate with any number of zones. In a balanced cluster, MongoDB migrates chunks covered by a zone only to those shards associated with the zone.The example @Doug_Duncan referenced is using zone sharding to segment data by country with deployments in two zones (NA and EU). Segmenting by location is a common use case for zone sharding, but there are other examples like Tiered Hardware for Varying SLA or SLO and Segmenting Data by Application or Customer:Borrowing the diagram from Segmenting Data by Location:The N represents however many shards you would like to have.So that could be:Or another combination like:You can also use a more granular shard key (for example, region or state instead of country) to create zones like “North India” or “Maharashtra” if those are more suitable for your location-based use case.Regards,\nStennie",
"username": "Stennie_X"
}
]
| Can we create multi-region shards | 2022-08-28T11:35:40.906Z | Can we create multi-region shards | 2,879 |
[
"database-tools"
]
| [
{
"code": "",
"text": "I am not able to add or insert data into databases via inserting documents into collections. I have a list of methods that I am trying to insert data through and none of it is coming. Please let me know what is going on and how I can add documents to collections by calling and using the mongo method.Thanks.\nScreen Shot 08-29-22 at 09.13 AM1330×889 53.8 KB\n",
"username": "Quiet_Services"
},
{
"code": "",
"text": "Hello @Quiet_Services and welcome to the MongoDB Community forums. Can you tell us what errors you are getting when you try to run any of the insert statements from the screenshot you’ve posted (looks to be part of one of the University courses).Knowing what error you’re getting will help us better know what’s going on so we can point you in the right direction.",
"username": "Doug_Duncan"
},
{
"code": "MongoDB Enterprise atlas-o2a0p3-shard-0:PRIMARY> db.pets.insert({\"pet\":\"cat\"},{ \"pet\":\"dog\"}, {\"pet\":\"fish\"})\nWriteCommandError({\n \"ok\" : 0,\n \"errmsg\" : \"(Unauthorized) not authorized on admin to execute command { insert: \\\"pets\\\", ordered: true, lsid: { id: {4 [240 184 37 69 164 234 79 180 171 92 104 163 106 217 101 174]} }, $clusterTime: { clusterTime: {1661671493 2}, signature: { hash: {0 [37 55 125 33 194 184 27 154 103 223 56 230 69 119 135 230 97 114 182 242]}, keyId: 7094555532198936576.000000 } }, $db: \\\"admin\\\" }\",\n \"code\" : 8000,\n \"codeName\" : \"AtlasError\"\n})\n\nMongoDB Enterprise atlas-o2a0p3-shard-0:PRIMARY> db.pets.insert([{\"pet\":\"cat\"},{ \"pet\":\"dog\"}, {\"pet\":\"fish\"}])\nWriteCommandError({\n \"ok\" : 0,\n \"errmsg\" : \"(Unauthorized) not authorized on admin to execute command { insert: \\\"pets\\\", ordered: true, lsid: { id: {4 [240 184 37 69 164 234 79 180 171 92 104 163 106 217 101 174]} }, $clusterTime: { clusterTime: {1661748125 1}, signature: { hash: {0 [242 110 186 233 136 125 162 12 27 127 145 245 195 46 132 135 2 137 49 87]}, keyId: 7094555532198936576.000000 } }, $db: \\\"admin\\\" }\",\n \"code\" : 8000,\n \"codeName\" : \"AtlasError\"\n})\n",
"text": "Hi Doug, when I enter command line prompts or statements like the following:db.pets.insert([{“pet”:“cat”},{ “pet”:“dog”}, {“pet”:“fish”}])ordb.pets.insert({“pet”:“cat”},{ “pet”:“dog”}, {“pet”:“fish”}) with the I get the following errors:You can see the errors in the attached files in this message.\n\nScreen Shot 08-29-22 at 10.42 AM1915×657 59 KB\nPlease let me know what you can do. Thanks a lot.QUIET Services",
"username": "Quiet_Services"
},
{
"code": "",
"text": "Are you connected to correct db?\nYou can check by command db\nIf not in correct db you can switch to correct db by use db\nDo you see the collection you are querying in the current connected session where you are getting the error",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "yes it is all connected. I am able to finally insert items in there but the Question asks me which command will insert 3 connections successfully -Problem:Which of the following commands will successfully insert 3 new documents into an empty pets collection?\nAttempts Remaining:2 Attempts leftCheck all answers that apply:I have all the possible answers uploaded. NONE of them work successfully when I try them out on the Mongo Command prompt. Only the first ID works. {\"_id\":1, “pet”:“cat”}\nScreen Shot 08-29-22 at 09.13 AM1330×889 53.8 KB\n",
"username": "Quiet_Services"
},
{
"code": "\"errmsg\" : \"(Unauthorized) not authorized on admin to execute command { insert: \\\"pets\\\", ordered: true, lsid: { id: {4 [240 184 37 69 164 234 79 180 171 92 104 163 106 217 101 174]} }, $clusterTime: { clusterTime: {1661671493 2}, signature: { hash: {0 [37 55 125 33 194 184 27 154 103 223 56 230 69 119 135 230 97 114 182 242]}, keyId: 7094555532198936576.000000 } }, $db: \\\"admin\\\" }\"\nadmininsert()myFirstDatabase",
"text": "The error message you posted shows:This means you’re in the admin database and trying to perform an insert(). This matches up with Ramamchandra’s comments:Are you connected to correct db?\nYou can check by command db\nIf not in correct db you can switch to correct db by use dbYou will want to change to the database you created for the course which by default is myFirstDatabase if I remember correctly.",
"username": "Doug_Duncan"
}
]
| I am not able to add | 2022-08-29T03:50:43.097Z | I am not able to add | 3,850 |
|
null | [
"installation"
]
| [
{
"code": "joyin@joyin:~$ systemctl status mongod\n× mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Mon 2022-08-29 14:55:44 IST; 6s ago\n Docs: https://docs.mongodb.org/manual\n Process: 41404 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=217/USER)\n Main PID: 41404 (code=exited, status=217/USER)\n CPU: 4ms\n\nAug 29 14:55:44 joyin systemd[1]: Started MongoDB Database Server.\nAug 29 14:55:44 joyin systemd[41404]: mongod.service: Failed to determine user credentials: No such process\nAug 29 14:55:44 joyin systemd[41404]: mongod.service: Failed at step USER spawning /usr/bin/mongod: No such process\nAug 29 14:55:44 joyin systemd[1]: mongod.service: Main process exited, code=exited, status=217/USER\nAug 29 14:55:44 joyin systemd[1]: mongod.service: Failed with result 'exit-code'.\n",
"text": "",
"username": "Joyin_Laskar"
},
{
"code": "",
"text": "Does mongod user exist?\nCheck this thread",
"username": "Ramachandra_Tummala"
}
]
| I can't start mongodb server | 2022-08-29T09:29:16.290Z | I can’t start mongodb server | 3,548 |
null | [
"swift",
"atlas-device-sync"
]
| [
{
"code": "The following changes cannot be made in read-only schema mode:\\n- Property 'UserObj.name' has been added.\"\nlet config = Realm.Configuration(fileURL: backupURL, readOnly: true)\n",
"text": "Hi All,Maybe a slightly general question here as I will be providing very little code, although I will do my best to describe the process and results.I have an app in production which was released with realmSwift version 10.16.0. This version had the problem that user tokens were not automatically refreshed for Sign In with Apple, therefore, 60 days after an initial login, client sync was silently lost. Clients were still able to save data to their device, however this data were not synced to the server. This silent failure went undetected by myself.During this time, I made destructive changes to our database. I was comfortable in doing this as code were implemented in the app to handle the client reset. The only problem was, users that had lost connection sync, were not getting the client reset.Fast forward to now, where we are wanting to release a new version of the app where I have made additional (to the original destructive changes causing the client reset) additive changes to the schema. When a user updates their app they are required to log back in. Upon login they receive the old client reset. An attempt to restore from the old backed-up realm (which includes all of the valuable data that has not been synced to the server) is made, however the following error is received:The config used to read the backup realm is as follows:Effectively, I am unable to read the old realm, preventing me from restoring from the old backup. I have googled around and have found nothing on the error “changes cannot be made in read-only schema mode” - only “Changes cannot be made in additive-only schema mode”Is there any way to read the old realm in my case?Thanks,Ben",
"username": "BenJ"
},
{
"code": "let config = Realm.Configuration(fileURL: backupURL, readOnly: false)\nRealm file's history format is incompatible with the settings in the configuration object being used to open the Realm. Note that Realms configured for sync cannot be opened as non-synced Realms, and vice versa. Otherwise, the file may be corrupt.\n",
"text": "I should also add, I have tried setting the following:which results in:as has been reported in various other topic discussions.",
"username": "BenJ"
}
]
| Changes cannot be made in read-only schema mode | 2022-08-29T11:26:21.566Z | Changes cannot be made in read-only schema mode | 2,191 |
null | []
| [
{
"code": "",
"text": "Hello:I have a nested array of objects in my document as follows:{\n_id: “xyz”,\ndeos: [\n{_id: “123”,\ntext: “guitar”\n},\n{_id:“124”,\ntext:“mango”\n]\n}Now I am able to use the Atlas Text Search. Lets say I search for “guitar” it returns the entire collection. However I only want it to return those nested objects that contain the keyword guitar. How do I accomplish this?",
"username": "Arjun_kochhar_leodeo"
},
{
"code": "",
"text": "I am not seeing a straightforward solution to this simple use case and would request someone from Mongodb to please look into it. In a nosql database creating embedded documents in the the way to structure relationships in many cases. So I have a nested array of objects. I do a $Search full text search on a nested object in a nested array and I get the parent collection back. The actual search results are in an object inside a nested array. So how then do I get a reference to the objects that the full text search found? I don’t need the entire document I need to be able to filter only those objects in the embedded/nested array that the $search found a match for. How do I do this?",
"username": "Arjun_kochhar_leodeo"
},
{
"code": "guitar",
"text": "Dear @Arjun_kochhar_leodeo ,Welcome to The MongoDB Community forums! To get a better understanding of your use-case, could you please confirm below?Lets say I search for “guitar” it returns the entire collection.I do a $Search full text search on a nested object in a nested array and I get the parent collection back.After you perform this search, do you mean you get back the whole collection (with all documents, even when they don’t contain the “guitar” term you’re looking for), or documents in which guitar is present (e.g. not the whole collection), but not in the format that you wanted?For example, do you only want the nested object, something like below after your search?{_id:“124”,\ntext:“mango”}Also, could you please share the below details:Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "const agg = [\n {\n '$search': {\n 'text': {\n 'query': 'electric',\n 'path': 'deos.text'\n },\n 'highlight': {\n 'path': 'deos.text'\n }\n }\n }, {\n '$limit': 10\n }, {\n '$project': {\n '_id': 1,\n 'given_name': 1,\n 'highlights': {\n '$meta': 'searchHighlights'\n },\n 'score': {\n '$meta': 'searchScore'\n }\n }\n }\n];\n",
"text": "Hello @Tarun_Gaur,Thanks for your response. My question was about search a “nested” array specifically. If a word is found in a nested array of a document, it returns the entire document. My issue was then it also returns all the other items in the nested array where the search term does not match.For me the solution seems to be to use $project as follows:For those who follow I will clarify. Deos is a nested array in the collection that I am doing the $search on. I want to to search for the word “electric” in this nested array. If I don’t use the $meta operator in the $project, I get the whole collection back if there is ANY match in the deos. text nested array for the word electric. This is useless, because what is the point of the search if I get the whole collection back because there is a match.The solution for me seems to be the “highlights” in the $project. This object contains the precise places in deos.text where there is a match. I can take this and process it further in Javascript.This is the solution for me.",
"username": "Arjun_kochhar_leodeo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to create Search Index for text field inside nested array of objects | 2022-08-20T06:13:17.999Z | How to create Search Index for text field inside nested array of objects | 3,128 |
null | []
| [
{
"code": "",
"text": "I have a collection of 5.5M records. I have an index, on a single field ‘description’. 2M of those records have description value as ‘First Lot’ and the rest 2M have them as ‘Second Lot’. After doing explain ExecutionStats, for queries on COUNT for ({“description” : “First Lot”}) and ({“description” : “Second Lot”}) the “executionTimeMillis” is 500-600 range for both but when I do COUNT for ({“description” : { $in : [“First Lot”, “Second Lot”]}}) the “executionTimeMillis” is minimum in the range of 2000. why is that? shouldn’t it be in the range of 1100-1200? How can I can reduce the time ??",
"username": "KR_1"
},
{
"code": "",
"text": "{“description” : { $in : [Hi @KR_1 ,Can you provide an explain plan with execution stats of both queries ?In general having a value that return 2M documents out of 5M is not considered selective , and the $in operator might need to do a full index scan.You might consider running 2 queries, 1- ({“description” : “First Lot”} and 2- ({“description” : “Second Lot”}), finally sum the numbers on client side…Thanks\nPavel",
"username": "Pavel_Duchovny"
}
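,
{
"text": "A small sketch of that last suggestion, running the two selective counts separately and summing on the client (the collection name records is a placeholder):",
"code": "const firstLot = db.records.countDocuments({ description: \"First Lot\" })\nconst secondLot = db.records.countDocuments({ description: \"Second Lot\" })\nconst total = firstLot + secondLot"
}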
]
| Mongo Performance Improve | 2022-08-28T11:18:11.218Z | Mongo Performance Improve | 935 |
null | [
"queries",
"graphql"
]
| [
{
"code": "[\"message\":\"execution memory limit exceeded\",\"locations\"]",
"text": "We currently have some problems requesting large datasets (>7000 objects) through Atlas GraphQL. When running the query, we get the following error message:[\"message\":\"execution memory limit exceeded\",\"locations\"]",
"username": "Christian_Schulz"
},
{
"code": "",
"text": "Hi @Christian_Schulz,\nCan you please provide us with the following details in order to analyze your situation better?If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Hi @SourabhBagrecha,Regards\nChristian Schulz",
"username": "Christian_Schulz"
},
{
"code": "",
"text": "Hi @Christian_Schulz,I believe it’s best to post the information in the relevant thread you have already opened so that we can keep all the information and discussion in one place. I’m hoping what you experienced would be informative to the community at large, and would be very helpful to future community users who experienced a similar issue.If you have sensitive information, please redact it before posting publicly And I would also like to thank you for contributing to the community!If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB",
"username": "SourabhBagrecha"
},
{
"code": "\n units (limit:3000000, query: {location:{name:\"Germany\"},\n\n healthText_in: [\"working\", \"defect\", ],\n\n type_in: [\"a123123sdaw1289zau908dh\", \"1238097asdjkh081923\"]}) {\n\n _id\n\n }\n\n }\n\n",
"text": "Hi @SourabhBagrecha,okay the error comes also with very less information:And if I try to get more then 8k entries I got the error.Regards\nChristian Schulz",
"username": "Christian_Schulz"
},
{
"code": "limit:3000000, ",
"text": "Hi @Christian_Schulz,\nYou mentioned that you see this message when you’re trying to request a large dataset.limit:3000000, Do you also see this issue when you’re requesting a smaller result set? Is there any pattern that you can discern regarding this error?Also, have you brought this up to the attention of Atlas support? It seems like your case is very particular, and Atlas support would have more visibility into the tools and information required to be able to troubleshoot this issue you’re having.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Hi @SourabhBagrecha ,I said as long as the limit remains below 7000 everything is good.\nAs soon as I go above it, I get the error.\nEven if I leave out the limit.No I have only asked in this forum so far as we do not have support.Regards\nChristian Schulz",
"username": "Christian_Schulz"
}
]
| Execution memory limit exceeded | 2022-08-17T09:07:13.939Z | Execution memory limit exceeded | 4,302 |
null | []
| [
{
"code": "",
"text": "Hey everyone!My name is Justin Israel Poveda and I am currently a senior at Stony Brook University studying Technological Systems Management / Computer Science. As someone who was born in raised in Brooklyn, NY. It is truly an honor for me to be part of this amazing community, as a MongoDB User Group Leader for New York.Just a little bit more about me, I am a CodeNation and All Star Code Alumni. These two non profit organizations helped and guided me to the path I am on today. Ever since then, it has been instilled in me to give back to my community and help underrepresented students pursue careers in tech. I am currently the Chair Advisor at ColorStack SBU, an MLT Career Prep Fellow, a member in SHPE (Society of Hispanic Professional Engineers), and a member in Techqueria. It is one of my passions to give back to my community and with the help of MongoDB and you all I know we can make that happen!The opportunity to be one of the User Group Leaders was something I did not expect to come upon but I am so grateful that I did because it allows me to impact more people with community building! I actually explain how I ended up here in one of my Youtube videos that I will definitely leave a link to down below if you want to check it out. I make videos on my tech journey and document everything I experience and share what I learned in hopes to serve as another resource for the community.With all that being said please be on a look out for our upcoming event mid September at my university in New York at Stony Brook University!!! I hope you are all as excited as I am about the first event that I will be planning out with the MongoDB team.Feel free to connect with me:\nLinkedIn: https://www.linkedin.com/in/justinpoveda/\nYoutube: https://www.youtube.com/channel/UCwMnrkwOt6i364O3fAcK3sw",
"username": "Justin_Poveda"
},
{
"code": "",
"text": "Welcome to the MongoDB Community Justin!Glad to have met you and have you in the community. Your journey is an inspiration to many and thanks for being so passionate about giving back to the community!",
"username": "Harshit"
},
{
"code": "",
"text": "Hi Justin and welcome to the Forums!Share photos of the NYC MUG when you meet, please!And just in case, for the Hispanic community we do have a MongoDB Podcast in Spanish: the UNICODE U00D1 Podcast Unicode(U+00D1) Podcast | MongoDBAnd we welcome new guests, just sayin’ ",
"username": "Diego_Freniche"
}
]
| Hi from Justin Poveda in New York City | 2022-08-23T14:23:03.974Z | Hi from Justin Poveda in New York City | 2,377 |
[
"react-native",
"graphql"
]
| [
{
"code": "__schema",
"text": "I created a collection called Person on a database called yadda and generated a gql schema / server based on some bson that i wrote for that collection. I managed to execute queries on the gql page in the web ui of atlas for this collection. I can write into the collection through a graphql mutation and read from it too. But only on the atlas web ui.However a problem occurs when I use an external client to post a query to the graphql endpoint instead of using the web ui. When I execute a gql introspection query like asking the endpoint for the __schema this succeeds and I get a reply. However when I query actual data through gql I always get this error here:\nimage2266×632 53 KB\nThis person here seems to have the same problem: MongoDB Realm Functions on React NativeWhen i post an introspection query exactly the same way with an external gql client, then it succeeds.It allow all read and write operations on that collection without any conditions.",
"username": "Tobias_Klock"
},
{
"code": "_id",
"text": "My bson schema had no dedicated field for _id . The schema validation of atlas shows no error when saving this. But it then still fails to generate a valid graphql server out of it. The error message here could be a bit more helpful ",
"username": "Tobias_Klock"
}
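,
{
"text": "For anyone hitting the same thing, a minimal shape of the fixed schema (illustrative only; field names other than _id are made up):",
"code": "{\n  \"title\": \"Person\",\n  \"bsonType\": \"object\",\n  \"required\": [\"_id\"],\n  \"properties\": {\n    \"_id\": { \"bsonType\": \"objectId\" },\n    \"name\": { \"bsonType\": \"string\" }\n  }\n}"
}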
]
| Can't find a table mapping for namespace yadda.Person | 2022-08-29T09:10:27.473Z | Can’t find a table mapping for namespace yadda.Person | 2,535 |
|
null | [
"aggregation"
]
| [
{
"code": "\"datasets\": [\n {\n \"eyeSize\": 54,\n \"bridgeSize\": 15,\n \"templeLength\": 135,\n \"colorCode\": \"F010\",\n \"colorDescription\": \"braun, rose gold\",\n\t\t \"cgPrices\": 200,\n \"stateHistory\": [\n {\n \"state\": \"scanning\",\n \"date\": \"2022-02-22T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"scanned\",\n \"date\": \"2022-02-18T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"reconstructing\",\n \"date\": \"2022-02-16T13:06:13.493+00:00\",\n \n }\n ]\n },\n {\n \"eyeSize\": 54,\n \"bridgeSize\": 15,\n \"templeLength\": 135,\n \"colorCode\": \"F011\",\n \"colorDescription\": \"beige, silber\",\n \"stateHistory\": [\n {\n \"state\": \"scanning\",\n \"date\": \"2022-03-22T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"scanned\",\n \"date\": \"2022-03-18T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"reconstructing\",\n \"date\": \"2022-03-16T13:06:13.493+00:00\",\n \n }\n ]\n }\n ]\n",
"text": "I have two collections “datasets” and “users” both having an array of object fields.datasets inside stateHistory array object having date fields.\nusers inside some of the users have a prices array of object date fields\nI need to join stateHistory.date === prices.date fieldsI need an output ofmy code: Mongo playground",
"username": "Aravinth_E"
},
{
"code": "users[\n {\n \"email\": \"[email protected]\",\n \"firstName\": \"Gerd\",\n \"lastName\": \"Müller\",\n \"role\": \"Admin\",\n \"isActive\": true,\n \"accountId\": 12345,\n \"prices\": [\n {\n \"date\": \"2022-03-22T13:06:13.493+00:00\",\n \"price\": 95\n },\n {\n \"date\": \"2022-02-16T13:06:13.493+00:00\",\n \"price\": 105\n }\n ]\n },\n {\n \"email\": \"[email protected]\",\n \"firstName\": \"Tobias\",\n \"lastName\": \"Noell\",\n \"role\": \"SuperAdmin\",\n \"isActive\": true,\n \"accountId\": 32661,\n \n }\n ]\ndatasets[\n {\n \"eyeSize\": 54,\n \"bridgeSize\": 15,\n \"templeLength\": 135,\n \"colorCode\": \"F010\",\n \"colorDescription\": \"braun, rose gold\",\n \"stateHistory\": [\n {\n \"state\": \"scanning\",\n \"date\": \"2022-02-22T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"scanned\",\n \"date\": \"2022-02-18T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"reconstructing\",\n \"date\": \"2022-02-16T13:06:13.493+00:00\",\n \n }\n ]\n },\n {\n \"eyeSize\": 54,\n \"bridgeSize\": 15,\n \"templeLength\": 135,\n \"colorCode\": \"F011\",\n \"colorDescription\": \"beige, silber\",\n \"stateHistory\": [\n {\n \"state\": \"scanning\",\n \"date\": \"2022-03-22T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"scanned\",\n \"date\": \"2022-03-18T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"reconstructing\",\n \"date\": \"2022-03-16T13:06:13.493+00:00\",\n \n }\n ]\n }\n ]\n_iddatasets\"datasets\"\"datasets\": [\n {\n \"eyeSize\": 54,\n \"bridgeSize\": 15,\n \"templeLength\": 135,\n \"colorCode\": \"F010\",\n \"colorDescription\": \"braun, rose gold\",\n\t\t \"cgPrices\": 200,\n \"stateHistory\": [\n {\n \"state\": \"scanning\",\n \"date\": \"2022-02-22T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"scanned\",\n \"date\": \"2022-02-18T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"reconstructing\",\n \"date\": \"2022-02-16T13:06:13.493+00:00\",\n \n }\n ]\n },\n {\n \"eyeSize\": 54,\n \"bridgeSize\": 15,\n \"templeLength\": 135,\n \"colorCode\": \"F011\",\n \"colorDescription\": \"beige, silber\",\n \"stateHistory\": [\n {\n \"state\": \"scanning\",\n \"date\": \"2022-03-22T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"scanned\",\n \"date\": \"2022-03-18T13:06:13.493+00:00\",\n \n },\n {\n \"state\": \"reconstructing\",\n \"date\": \"2022-03-16T13:06:13.493+00:00\",\n \n }\n ]\n }\n ]\n_id",
"text": "Hello @Aravinth_E,Welcome to the MongoDB Community forums Let me re-iterate the problem statement to understand it better.So, you have a users collections, which is:And you have another collection named datasets, which looks like:In both collections, you don’t have any _id, which is the default primary key for the documents in the collection.Also, the output format you shared in the questions, seems very similar to the datasets collection document!! Are you sure you want the same output as the \"datasets\" collection? Or I’m missing something?Also in the playground, you’re grouping on the_id basis, which doesn’t exist in the above-given schema.Please let us know if you have any further questions.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
}
]
| Join two collections array of object match | 2022-08-26T08:18:23.346Z | Join two collections array of object match | 1,600 |
null | [
"data-modeling"
]
| [
{
"code": "",
"text": "Hello,\nI’m designing a schema where events can have many tags and tags can belong yo many event. I would love yo be able to use MongoDb aggregator to determine trending event in the last 2 days based on numbers of events they had appeared. Im coming from an SQL background. What is the best design for this usecase and allow for scalability.\nAlso there will be a tag suggestion feature for users when creating their events",
"username": "EmeritusDeveloper"
},
{
"code": "",
"text": "Event can have multiple tags\nTags can be used my multiple events.Events will grow indefinitely and same for tags\nI’m not sure uf referencing will be a good idea since they are both unbounded?A Tag can have 30million post as the system grows. Im afraid it mught hit the 16mb document limit.Anyone who can help please?",
"username": "EmeritusDeveloper"
},
{
"code": "$unwind$sortByCount$limit",
"text": "Can a single event really have infinite number of tags? How would they be displayed? In my experience the number of tags on an event would be somewhat limited (if nothing else, by the UI). In which case embedding tags as an array into events and aggregating via $unwind followed by $sortByCount (preferably with $limit after that) would be the way to get top tags…Asya",
"username": "Asya_Kamsky"
},
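{
"text": "Putting that together as a sketch (it assumes each event document has a tags array and a createdAt date, and the collection name events is made up for the example):",
"code": "const twoDaysAgo = new Date(Date.now() - 2 * 24 * 60 * 60 * 1000)\n\ndb.events.aggregate([\n  { $match: { createdAt: { $gte: twoDaysAgo } } },  // only events from the last 2 days\n  { $unwind: \"$tags\" },                             // one document per (event, tag) pair\n  { $sortByCount: \"$tags\" },                        // group by tag and sort by count, descending\n  { $limit: 10 }                                    // top 10 trending tags\n])"
},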
{
"code": "",
"text": "Thank you @Asya_KamskyYes. Events can have limited number of tags (max of 10tags)However, one of the feature is for users to get suggestion on tags while typing/creating their events based on tag that exist in the databaseWould you have a suggestion as to how to go about that please.I sincerely apprciate your response",
"username": "EmeritusDeveloper"
},
{
"code": "",
"text": "Please, anyone who can help provide an answer to this?",
"username": "EmeritusDeveloper"
}
]
| Many-to-Many relationship Schema Best Practice | 2022-08-28T16:35:11.412Z | Many-to-Many relationship Schema Best Practice | 3,037 |
[
"containers"
]
| [
{
"code": "",
"text": "Hello.\nI tried to use mongodb with docker-compose on my m1mac\nBut the command was not available in the container\nmongoDB itself appears to be working.\nHow can I solve this?\nAnd what kind of learning should I do?\n",
"username": "yasuto_fujii"
},
{
"code": "mongomongosh",
"text": "Hi @yasuto_fujii and welcome to the MongoDB forums. The mongo command line tool is no longer being installed with MongoDB version 6.0 and newer. You may have the newer mongosh shell, depending on how you installed MongoDB. If it’s not available, you can always download it for your platform.",
"username": "Doug_Duncan"
},
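A quick sketch of opening the new shell inside the container, following the reply above and assuming the docker-compose service/container is named mongo (the name is an assumption):

docker exec -it mongo mongosh

If the image is older than MongoDB 6.0, the legacy mongo shell is still present in the container and can be used the same way.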
{
"code": "",
"text": "Thanks for the thoughtful reply.\nI’ve learned a lot.",
"username": "yasuto_fujii"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| The "mongo" command cannot be used inside the container | 2022-08-26T09:32:49.105Z | The "mongo" command cannot be used inside the container | 2,190 |
|
null | [
"sharding"
]
| [
{
"code": "",
"text": "Hello Team,\nWe are in situation wherehow i will achieve this ?",
"username": "Ganesh_Wankhede"
},
{
"code": "",
"text": "Is this a duplicate of Can we create multi-region shards, or are you trying to ask something different here? They seem pretty similar.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "No, here I want to get full procedure",
"username": "Ganesh_Wankhede"
}
]
| How to setup multi-country shards | 2022-08-28T11:32:44.338Z | How to setup multi-country shards | 1,880 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "So I am very new to working in MongoDB; I have some background using SQL in a relational database. I am trying to utilize a loop or a foreach statement to create two new records in one collection for each organization ID I have in another collection. As I stated, I am very new to MongoDB and looking for any and all assistance or ideas.I have scoured the MongoDB website and Google but I am just not able to put it all together. The link between the two collections is the organization ID: in one collection it is an embedded record and in the other collection it is not.",
"username": "Josh_Barnes"
},
{
"code": "$merge",
"text": "So you have an existing collection with records with organization ID (sample document please?) and you want to create two new records in another (existing?) collection - what would be in those records? It’s hard to help without knowing more details. It’s possible that what you are trying to do can be done with aggregation pipeline with $merge stage but I cannot say for sure without knowing more details…Asya",
"username": "Asya_Kamsky"
},
{
"code": " \"_organization\" : {\n \"id\" : \"d65ff080-23c3-11ed-b98c-519bf1f3431f\",\n \"type\" : \"Organization\"\n },\n{\n \"_id\" : \"254480e5-7d32-4756-b748-120da9ef5f2b\",\n \"locationIds\" : [ \n \"*\"\n ],\n \"roomIds\" : [ \n \"*\"\n ],\n \"signageListIds\" : [ \n \"*\"\n ],\n \"signageIds\" : [ \n \"*\"\n ],\n \"deviceIds\" : [ \n \"*\"\n ],\n \"type\" : \"Role\",\n \"name\" : \"Admin\",\n \"description\" : \"Admin Role\",\n \"organizationId\" : \"ec5b8680-f63a-11ec-a078-253419036b28\",\n \"application\" : {\n \"receivers\" : 2,\n \"rooms\" : 2,\n \"locations\" : 2,\n \"signage\" : 2,\n \"alerts\" : 2,\n \"organization\" : 2,\n \"billing\" : 2,\n \"users\" : 2\n },\n \"readOnly\" : true,\n \"createdAt\" : ISODate(\"2022-06-27T17:02:27.904Z\"),\n \"updatedAt\" : ISODate(\"2022-06-27T17:02:27.904Z\"),\n \"__v\" : 0\n}\n\n/* 3 */\n{\n \"_id\" : \"745f0dde-40c8-40b0-9000-54fedbb62875\",\n \"locationIds\" : [ \n \"*\"\n ],\n \"roomIds\" : [ \n \"*\"\n ],\n \"signageListIds\" : [ \n \"*\"\n ],\n \"signageIds\" : [ \n \"*\"\n ],\n \"deviceIds\" : [ \n \"*\"\n ],\n \"type\" : \"Role\",\n \"name\" : \"User\",\n \"description\" : \"No access to users or billing.\",\n \"organizationId\" : \"ec5b8680-f63a-11ec-a078-253419036b28\",\n \"application\" : {\n \"receivers\" : 2,\n \"rooms\" : 2,\n \"locations\" : 2,\n \"signage\" : 2,\n \"alerts\" : 2,\n \"organization\" : 2,\n \"billing\" : 0,\n \"users\" : 0\n },\n \"readOnly\" : true,\n \"createdAt\" : ISODate(\"2022-06-27T17:02:27.904Z\"),\n \"updatedAt\" : ISODate(\"2022-06-27T17:02:27.904Z\"),\n \"__v\" : 0\n}\n",
"text": "Hello,Thank you for the quick response and my apologies for not adding more information. So I would be using the organization Id in the following example from one existing collection.Then on the other existing collection, I would be creating two new records linked to the organization Id the records would look like this for example:I would need to create two new records for each distinct organization Id.",
"username": "Josh_Barnes"
},
{
"code": "",
"text": "Where is the data in these new records coming from?",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "So the script I am writing is to take the current data from one collection and link it to the newly created records in the other collection.would you be available for an email exchange?",
"username": "Josh_Barnes"
},
{
"code": "$lookuporg{ \"_id\" : ObjectId(\"6309063722a3b8c70e43a64e\"), \n\"_organization\" : {\n \"id\" : \"d65ff080-23c3-11ed-b98c-519bf1f3431f\", \n \"type\" : \"Organization\", \n \"field1\" : \"xxx\" } \n}\n{ \"type\" : \"Role\",\n \"name\" : \"Admin\",\n \"description\" : \"Admin Role\",\n \"organizationId\" : \"d65ff080-23c3-11ed-b98c-519bf1f3431f\"\n}\ndb.org.aggregate( [ {$lookup:{\n from:\"other\", \n localField:\"_organization.id\", \n foreignField:\"organizationId\", \n as:\"other\"\n}}])\n{ \"_id\" : ObjectId(\"6309063722a3b8c70e43a64e\"),\n \"_organization\" : {\n\t\"id\" : \"d65ff080-23c3-11ed-b98c-519bf1f3431f\",\n\t\"type\" : \"Organization\",\n\t\"field1\" : \"xxx\"\n },\n \"other\" : [\n\t{\n\t\t\"_id\" : ObjectId(\"6309090b22a3b8c70e43a64f\"),\n\t\t\"type\" : \"Role\",\n\t\t\"name\" : \"Admin\",\n\t\t\"description\" : \"Admin Role\",\n\t\t\"organizationId\" : \"d65ff080-23c3-11ed-b98c-519bf1f3431f\"\n\t}\n]}\norg",
"text": "would you be available for an email exchange?We keep the discussions in the public forums so that everyone can learn.It’s not really clear to me whether there are two collections involved or three, I’ll assume three - one with org, one with some current data you refer to as “current data from one collection”, and then there’s a third collection into which you are trying to insert two new documents… If that’s not the current schema, you’ll need to clarify. Since you’re inserting into an existing collection, there’s also a question about what you want to do if the equivalent record already exists. Maybe (simplified) actual sample records (that are input and that you want to generate as output) would clarify things.Based on your comment The link between the two collections is the organization ID in one collection it is an embedded record and the other collection it is not.It sounds like you would just use $lookup to join the data in two collections.Example org document:Example “other” collection:Now this aggregationwill result in this output:Presumably there are other fields in org - but once you clarify the full problem statement, we can proceed from there. Suffice it to say (for now) that I don’t see why you would need to do this in code on the client/application side - you should be able to do this inside the database assuming all these collections are in the same database.Asya",
"username": "Asya_Kamsky"
},
{
"code": "/* 1 */\n{\n \"_id\" : \"d65eb800-23c3-11ed-b98c-519bf1f3431f\",\n \"is_active\" : true,\n \"is_verified\" : true,\n \"crm_id\" : null,\n \"role\" : \"admin\",\n \"ownerSetMFA\" : false,\n \"orgSwitching\" : false,\n \"type\" : \"User\",\n \"verification_code\" : \"2db0961b-f67c-49d9-ac2b-0b2140a3136c\",\n \"name\" : \"Josh test8\",\n \"email\" : \"[email protected]\",\n \"phone\" : \"5555555555\",\n \"jobDescription\" : \"Administrator\",\n \"migrated\" : true,\n \"_organization\" : {\n \"id\" : \"d65ff080-23c3-11ed-b98c-519bf1f3431f\",\n \"type\" : \"Organization\"\n },\n{\n \"_id\" : \"254480e5-7d32-4756-b748-120da9ef5f2b\",\n \"locationIds\" : [ \n \"*\"\n ],\n \"roomIds\" : [ \n \"*\"\n ],\n \"signageListIds\" : [ \n \"*\"\n ],\n \"signageIds\" : [ \n \"*\"\n ],\n \"deviceIds\" : [ \n \"*\"\n ],\n \"type\" : \"Role\",\n \"name\" : \"Admin\",\n \"description\" : \"Default Ditto Admin Role\",\n \"organizationId\" : \"ec5b8680-f63a-11ec-a078-253419036b28\",\n \"application\" : {\n \"receivers\" : 2,\n \"rooms\" : 2,\n \"locations\" : 2,\n \"signage\" : 2,\n \"alerts\" : 2,\n \"organization\" : 2,\n \"billing\" : 2,\n \"users\" : 2\n },\n \"readOnly\" : true,\n \"createdAt\" : ISODate(\"2022-06-27T17:02:27.904Z\"),\n \"updatedAt\" : ISODate(\"2022-06-27T17:02:27.904Z\"),\n \"__v\" : 0\n",
"text": "OK let me try to type this out a little better on my part.I have two collections a db.users collection and a db.roles collection.The data on the user collection look like the following:with the data on the roles collection looking like this:The roles collection is a new collection that was created due to a new feature that was created. We are creating two roles for every organization ID on the users collection an Admin and a User role. So I am working to write a script that will look for the distinct organization id on the user collection and work to created the two roles on the roles collection.On the roles collection the _id field will be a UUID that is unique to the role.Does that make a little more sense?",
"username": "Josh_Barnes"
},
{
"code": "usersdistinct\"_organization.id\"",
"text": "Does that make a little more sense?This makes a little more sense though I don’t know whether your roles for different organizations are going to be all the same or not - presumably they won’t be because otherwise why have different documents for each organization, but I guess you may be planning on populating these different fields later?First thing you’ll want to do is to get every unique organization id from the users collection - probably either using aggregation, or using distinct command on \"_organization.id\".If you want to do this in your client code, then just loop over each distinct organization id creating (inserting) two documents for each. That seems pretty straight forward, so I’m probably still missing what the challenge/tricky part is. When you go down this path, where is the next obstacle?Asya",
"username": "Asya_Kamsky"
}
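A minimal mongosh sketch of the approach described above (collection names and role fields follow the samples earlier in the thread; generating the role _id with UUID() stores a BSON UUID rather than the string form shown in the samples, so adjust as needed):

const orgIds = db.users.distinct("_organization.id");  // every unique organization id
orgIds.forEach((orgId) => {
  db.roles.insertMany([
    { _id: UUID(), type: "Role", name: "Admin", readOnly: true, organizationId: orgId },
    { _id: UUID(), type: "Role", name: "User", readOnly: true, organizationId: orgId }
  ]);
});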
]
| Assistance with a Loop | 2022-08-25T15:52:45.335Z | Assistance with a Loop | 1,844 |
null | []
| [
{
"code": "",
"text": "I was directed here by the chat support. Can the paid tiers of Atlas MongoDB hosting be run with javascriptEnabled :true?",
"username": "Hans-Eirik_Hanifl"
},
{
"code": "",
"text": "Hi @Hans-Eirik_Hanifl,Please review the Cluster additional settings documentation. I believe the option can be configured from the below menu when configuring settings for your cluster:\nimage883×650 75.9 KB\nRegards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "How do I enable it for an M0 cluster? I need to use the $function operator.",
"username": "mohamed_aslam"
},
{
"code": "",
"text": "Hello @mohamed_aslam and welcome to the MongoDB Community forums. This is not possible to do on the M0 cluster:\nimage1560×362 27.5 KB\nI would recommend looking at the list of limitations for other restrictions on the M0 tier.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
]
| Can Atlas be run with javascriptEnabled :true? | 2022-08-03T23:16:33.440Z | Can Atlas be run with javascriptEnabled :true? | 1,419 |
[
"installation"
]
| [
{
"code": "",
"text": "I have the following error please help it’s been 3 days now my entire project is on hold\n\nScreenshot 2022-08-26 at 12.39.54 AM 11567×731 84.2 KB\n",
"username": "Bhavneet_Singh"
},
{
"code": "cat ~/Library/LaunchAgents/[email protected] /usr/local/etc/mongod.confsystemlog.pathrootroot",
"text": "You would have to look into the log file for MongoDB to figure out what the issue is.To find the log file run cat ~/Library/LaunchAgents/[email protected]. This will show you how brew runs the service. In the results you should see something similar to the following:This shows the config file that the mongod process will use. You will want to look at the contents of that file ( cat /usr/local/etc/mongod.conf ) if your’s matches what I have. The config file will have a path for systemlog.path which you can look at to see the error.Without the log file messages we can’t tell you why MongoDB is not running. I will note that it’s interesting that root is showing as the user for the service. This should be showing up as your normal user. You should never (well almost never) run a process as the root user. Bad things can happen when you do.",
"username": "Doug_Duncan"
},
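For reference, a Homebrew-generated mongod.conf typically looks something like the following; the exact paths differ between Intel (/usr/local) and Apple Silicon (/opt/homebrew) machines, so treat these values as an assumption to check against your own file:

systemLog:
  destination: file
  path: /opt/homebrew/var/log/mongodb/mongo.log
  logAppend: true
storage:
  dbPath: /opt/homebrew/var/mongodb
net:
  bindIp: 127.0.0.1

The file named by systemLog.path is where the startup error will be recorded.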
{
"code": "",
"text": "I installed it using Homebrew, so it created the folder under /opt/homebrew and not under /usr/local. What should I do now?",
"username": "Bhavneet_Singh"
},
{
"code": "",
"text": "Should I go with this one?\n/opt/homebrew/etc/mongod.conf",
"username": "Bhavneet_Singh"
},
{
"code": "brew",
"text": "It doesn’t matter where brew put it as long as the file exists. You would need to provide the info from the log file for us to be able to help you out. The steps I gave above will be able to get the log data, you just need to replace the paths that I listed from my machine with the paths that you get at each step from your machine.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Sure sir I will provide you with the details asap",
"username": "Bhavneet_Singh"
},
{
"code": "mongodb-communitybrew uninstallbrew install",
"text": "I found a screenshot in a post you deleted that might shed some light on things. Below is a portion of that screenshot:\nimage1920×956 175 KB\nNote that it there are warnings stating that mongodb-community must be run as non-root to start at user login. This would line up with what I said earlier that this should be your normal user. It looks like you need to brew uninstall the package and manually clean up the mentioned directories and then brew install with your normal user.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "@Doug_Duncan sir see I made the changes is these correct? so that we can move further with the process\n\nbrew admin1920×1200 54.8 KB\n",
"username": "Bhavneet_Singh"
},
{
"code": "brew servicesroot",
"text": "That could be correct for Redis, but I’m not sure. Redis processes have nothing to do with MongoDB.Does brew services still show root as the user? If so you need to uninstall MongoDB and manually remove the directories/files in the screenshot that I posted earlier. You then need to reinstall MongoDB with your normal user account so that it can start up properly.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "@Doug_Duncan sir, as per your instructions I uninstalled MongoDB, manually removed the files, and then reinstalled it as below.\n\nScreenshot 2022-08-26 at 8.04.01 PM1920×1200 199 KB\n",
"username": "Bhavneet_Singh"
},
{
"code": "",
"text": "And after the above step I am getting the following error as soon as I run the command brew services start [email protected]\nScreenshot 2022-08-26 at 8.04.27 PM1920×1200 190 KB\nNow what should I do?",
"username": "Bhavneet_Singh"
},
{
"code": "",
"text": "Please check this one too\n\nScreenshot 2022-08-26 at 8.15.25 PM1920×1200 97.3 KB\n",
"username": "Bhavneet_Singh"
},
{
"code": "",
"text": "Unfortunately you have not supplied the MongoDB logs so again we can’t help with why the service is having issues starting up. I have put steps to get the log data in my original post. Once you post the log then we can provide more help.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "output of the log file with path - /opt/homebrew/var/log/mongodb/mongo.log\nScreenshot 2022-08-26 at 10.02.03 PM1920×1200 495 KB\n",
"username": "Bhavneet_Singh"
},
{
"code": "",
"text": "@Doug_Duncan sir please check the mongo.log",
"username": "Bhavneet_Singh"
},
{
"code": "Waiting for connections",
"text": "Looking at the screenshot you provided I can see (about half way down) where it says Waiting for connections and then towards the bottom of the screen I can see a connection was made with the MongoDB Shell.Are you have issues?",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Actually, the first issue is in brew services: it’s showing an error status for MongoDB [email protected]\nAnd the second issue is that the connection is only temporary: connections are made through the mongo shell, but later it’s not able to connect to the server.",
"username": "Bhavneet_Singh"
},
{
"code": "brew services start mongodb-communitymongodmongodps -ef | grep mongodbrewmongod",
"text": "If brew services start mongodb-community is failing to start it sounds like there is an instance of mongod already running, but that’s just a suggestion from what I’ve seen in the past. Again, the screenshot only shows a limited amount of the log file and from what I can see the mongod service is indeed running.What are the results of running ps -ef | grep mongod? If you get a result then kill the process, delete the current log file and try staring via brew once more. If mongod fails to start the log should be relatively small. and it will be easier to see what’s happening.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "output of ps -ef | grep mongod\n\nScreenshot 2022-08-27 at 12.26.11 AM1920×1200 27.7 KB\n",
"username": "Bhavneet_Singh"
},
{
"code": "mongodmongomongoshbrew start ...rootkill 334brew service start mongodb-communitybrewrootbrew start ...root",
"text": "OK, so this shows that mongod is running and explains why you can connect via either mongo or mongosh. It also explains why you are not able to start the process using brew start ....The 0 in the first column of the first line show that is running under the root user which is not good. I would run kill 334 to stop that process and then brew service start mongodb-community to allow brew to run the process.One thing to note here is that since the process is currently running as root, that user probably owns the directories and files and those will need to be either deleted or have their ownership changed to your normal user so brew start ... does not fail due to permission issues.I can’t stress this enough, you should never run a service/daemon as the root user unless explicitly told to do so, and even then only do it as long as you understand the risks involved with doing so.",
"username": "Doug_Duncan"
}
]
| ERROR STARTING IN mongoDB SERVICES | 2022-08-25T19:14:01.845Z | ERROR STARTING IN mongoDB SERVICES | 7,722 |
|
[
"containers"
]
| [
{
"code": "",
"text": "\nimage1255×138 23.8 KB\n",
"username": "Salman_Ad5"
},
{
"code": "",
"text": "Try with mongosh\nCheck this thread",
"username": "Ramachandra_Tummala"
}
]
| Docker mongo not found | 2022-08-27T13:09:35.051Z | Docker mongo not found | 2,389 |
|
null | [
"queries",
"node-js"
]
| [
{
"code": "const client = new MongoClient(url, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\nclient.connect();\nconst db = client.db('Blind');\n\nipcMain.handle(\n 'get-data',\n (event: Event, arg: string, type: string, query: string) => {\n const coll = db.collection(arg);\n if (type === 'projection') {\n const projection = query;\n const f: Promise = coll.find({}, projection); // .populate('all');\n return f.toArray();\n }\n if (type === 'find') {\n const f: Promise = db.collection(arg).find(query);\n return f.toArray();\n }\n return null;\n }\n);\nuseEffect(() => {\n const q = JSON.parse('{ \"Name\": 1, \"_id\": 1 }');\n window.electronAPI\n .getData('Attendees', 'projection', q)\n .then((response) => {\n console.log('UseEffect: ', response[0]);\n setAttendees(response);\n })\n .catch((error: Error) => {\n console.log('Error: ', error);\n });\n }, []);\n\noid{\n \"_id\": {},\n \"Name\": \"First Last\",\n \"Address\": \"6012 Main Street\",\n \"City\": \"MyCity\",\n \"State\": \"NC\",\n \"Zip\": \"27540\",\n \"HomePhone\": \"919-555-1212\",\n \"CellPhone\": \" \",\n \"Email\": \" \",\n \"Location\": {\n \"lat\": 35.77,\n \"lng\": -78.44\n },\n \"Notes\": \"\"\n}\n_id_id.$oidtoArray()",
"text": "Hi folks! I am attempting to use MongoDB directly from an electron/ReactJS app and I am struggling with one thing: Getting the object id from the returned results.My code in the main.tsx file:This works as hoped, and in my render code, I do the following:This, too, works as expected. But what I’m trying to do, and cannot figure out how to do, is extract the oid from the returned result.The _id seems impossible to access. I know I’m looking for _id.$oid but that seems to not exist. Do I need to parse the return from the query different way than using the simple toArray() function? And if so, how?Sorry if this is a simple question, but I’m a simple person. Thanks!",
"username": "David_Simmons"
},
{
"code": "cursor.toArray()_idmain.tsxipcMain.handle(\n 'get-data',\n async (event: Event, arg: string, type: string, query: string) => {\n const res = [];\n if (type === 'projection') {\n const projection = query;\n const f = db.collection(arg).find().project(projection);\n await f.forEach((doc) => {\n res.push(JSON.stringify(doc));\n });\n return res;\n }\n if (type === 'find') {\n const f = db.collection(arg).find(query);\n await f.forEach((doc) => {\n res.push(JSON.stringify(doc));\n });\n return res;\n }\n }\n);\nrenderer_id",
"text": "Since no one answered … I’ll answer myself Doing MongoDB queries from electron is a bit of an odd thing and the asynchronous stuff makes it harder (for me anyway) to figure out what’s going on, etc.Anyway, when I was returning the results in an array, using cursor.toArray() it was impossible to later extract the _id from the results.However, I discovered that if I iterate on the cursor before turning it into an array, I have access to it.So the function in the main.tsx file needs to be:At which point the returned array in the renderer end now has access to the _id field as needed.Just in case anyone wanted to know.",
"username": "David_Simmons"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Get $oid from array of results | 2022-08-11T19:35:44.139Z | Get $oid from array of results | 2,140 |
null | [
"python",
"connecting",
"atlas-cluster"
]
| [
{
"code": "DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details.\"\ndjango-admin startproject projectname\npython manage.py startapp appname\nfrom django.db import models\nfrom django.db.models.fields.related import ForeignKey\nfrom django.db.models.query import EmptyQuerySet\nfrom django.contrib.auth.models import User,AbstractUser, UserManager\n\nimport datetime\n\nimport mongoengine\n\n# Create your models here.\n\nclass Project(mongoengine.Document):\n\nprojectName = mongoengine.StringField()\nimport os\nfrom pathlib import Path\n\n\nimport mongoengine\nmongoengine.connect(db=\"testdatabase\", host=\"mongodb+srv://<Username>:<Password>@cluster0.qspqt0a.mongodb.net/?retryWrites=true&w=majority\")\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/4.1/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = secretKey\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = []\n\n\n# Application definition\n\nINSTALLED_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'rest_framework',\n 'api'\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'prodash.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [BASE_DIR / 'templates'],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\n\n \n WSGI_APPLICATION = 'prodash.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/4.1/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.dummy'\n }\n}\nasgiref 3.5.2\ncertifi 2022.6.15\ncharset-normalizer 2.1.1\nDjango 4.1\ndjangorestframework 3.13.1\ndnspython 2.2.1\nidna 3.3\nmongoengine 0.24.2\npip 22.2.2\npymongo 4.2.0\npytz 2022.2.1\nrequests 2.28.1\nsetuptools 63.4.3\nsqlparse 0.4.2\nurllib3 1.26.12\n",
"text": "New to MongoDB, love it but I need some support with MongoEngine.I am trying to connect my app to MongoDB as I want to implement a Mongo database. The application works fine with SQL3Lite and I was also able to use Djongo. Yet, I am planning to use MongoEngine models and therefore I am trying to use it as DB Engine.However, for whatever reason I receive an error settings.Here is what I did:Models.py:Settings.pyAt this point I receive always the same error if:My pip list is:It might be a typo, but I can’t figure out what I am doing wrong. Please, help.",
"username": "Giacomo_Carloni"
},
{
"code": "",
"text": "This topic was automatically closed after 60 days. New replies are no longer allowed.",
"username": "system"
}
]
| Django + Mongoengine - DATABASES is improperly configured | 2022-08-26T10:11:59.515Z | Django + Mongoengine - DATABASES is improperly configured | 2,167 |
null | [
"field-encryption"
]
| [
{
"code": "",
"text": "I have a users collection with fields like name, phone and email. I currently use prefix regex search to support substring search on these fields.\nFor example, to return a user object with email [email protected] we could search with tes, test, teste, testem etc. on the email key using regex search.I was planning to use field level encryption on these fields to encrypt the personal information. How would I be able to support this use case?\nI have considered storing all these prefixes in an array, but it turns out FLE doesn’t support deterministic encryption on arrays, and using random encryption would make them non-queryable.",
"username": "Anvesh_Reddy_Patlolla"
},
{
"code": "",
"text": "This topic was automatically closed after 60 days. New replies are no longer allowed.",
"username": "system"
}
]
| Partial prefix search on FLE encrypted fields | 2022-08-26T12:46:58.653Z | Partial prefix search on FLE encrypted fields | 1,463 |
null | [
"swift"
]
| [
{
"code": "",
"text": "Hey guys,is there any way to update realm objects without triggering a rerender? Cause I have to change some ids of my objects once I get them from the API, but I don’t want the writes to cause a rerender (because it will result in e.g. my image-flipper to jump back to index 0 etc.)Or maybe another approach for issues like this?Thanks in advance!",
"username": "Lila_Q"
},
{
"code": "",
"text": "The question is a bit vague; what is a ‘rerender’ and what’s the use case causing it?Realm is not directly related to any UI element ‘rendering’ so perhaps you could supply a very brief piece of code we can use to duplicate the issue? Are you using observers or perhaps SwiftUI wrappers?Oh, and yes you can silently write to Realm without causing an event. See Silent Write",
"username": "Jay"
},
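A minimal Swift sketch of the silent write mentioned above, assuming you already hold the NotificationToken for the observer (or SwiftUI wrapper) you want to skip; the object and property names are placeholders:

let realm = try! Realm()
try! realm.write(withoutNotifying: [token]) {
    item.remoteId = newId   // change fields without firing that observer's notification
}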
{
"code": "",
"text": "Yea I’m using SwiftUI wrappers for a collection, so any change to that collection triggers the view to render again and set back to its initial state (leaving out a bit of stuff in between for complexity’s sake).After reading up a bit I guess I should use an observer instead with e.g. a keyPath, and then pass the notificationToken to the actual write (even though that would at least need a new EnvironmentObject to not make it messy)Bottom line, I have a spot in my code where I want to write changes to a realm object, without the SwiftUI wrapper detecting a change and forcing new renders / calculations and throwing off the current state.",
"username": "Lila_Q"
}
]
| Update realm object without triggering | 2022-08-26T13:02:18.046Z | Update realm object without triggering | 1,678 |
null | [
"node-js",
"connecting"
]
| [
{
"code": "mongooserequire('dotenv').config();\nconst { MongoClient } = require('mongodb');\nconst config = require('../../config/index');\n\nconst username = encodeURIComponent(config.mongo_db1.user);\nconst password = encodeURIComponent(config.mongo_db1.pass);\nconst dbHost = config.mongo_db1.host;\nconst authMechanism = 'DEFAULT';\nconst qString = `retryWrites=true&w=majority&authMechanism=${authMechanism}`;\n\nconst uri = `mongodb+srv://${username}:${password}@${dbHost}/?${qString}`;\n\nconst mongoOptions = {\n poolSize: 100,\n wtimeout: 2500,\n useNewUrlParser: true,\n useUnifiedTopology: true,\n};\n\nconst client = new MongoClient(uri, mongoOptions);\n\nlet _db;\n\nclient.on('serverClosed', (event) => {\n // eslint-disable-next-line no-console\n console.log('received serverClosed');\n // eslint-disable-next-line no-console\n console.log(JSON.stringify(event, null, 2));\n\n // should i call mongoDBConnection() here if connection lost while app running?\n});\n\nconst mongoDBConnection = async (app) => {\n try {\n if (client.isConnected()) {\n _db = client.db(config.mongo_db1.dbName);\n return client.db(config.mongo_db1.dbName);\n }\n\n await client.connect();\n if (app) app.use(passport.initialize());\n _db = client.db(config.mongo_db1.dbName);\n return client.db(config.mongo_db1.dbName);\n } catch (error) {\n return Promise.reject(error);\n }\n};\n\nconst dbObj = () => _db;\n\nmodule.exports = {\n mongoDBConnection,\n dbObj,\n};\nmongoDBConnection()app.jsmongoDBConnection()client.isConnected()const db = await mongoDB.mongoDBConnection();\nconst result = await db.collection('image').find({}).toArray(); \nconsole.log('what is result', result);\n.dbObj()const db = mongoDB.dbObj();\nconst result = await db.collection('image').find({}).toArray();\nconsole.log('what is result', result);\n_db|`autoReconnect`|boolean|true|optional Enable autoReconnect for single server instances|\nclient.on('serverClosed', (event) => {\n // eslint-disable-next-line no-console\n console.log('received serverClosed');\n // eslint-disable-next-line no-console\n console.log(JSON.stringify(event, null, 2));\n\n // should i call mongoDBConnection() here if connection lost while app running?\n});\nmongoDBConnection()dbObj()",
"text": "Hello, I’m new to using the node.js mongodb driver. In the past I’ve used mongoose for all mongodb related projects. I took M220JS and had a lot of unanswered questions. What is holding me up now is understanding the connection.The docs point out how to connect and offer some examples hereBut the example seems to be written for a one-off use-case rather than a real world application. For instance the example shows closing the connection after the request but in a real application, you would want to re-use the existing connection and take advantage of the connection pool. With that in mind, I have the following mongo.jsIn my application, I call mongoDBConnection() in my app.js and it connects as expected on app startup. When it’s time to make a request to the db, I have 2 options with the above code. The first is to call mongoDBConnection() again and let the drivers client.isConnected() tell me if I should reconnect. The code in another file looks something like this:The second option is a little cleaner to use throughout the app because I can call the .dbObj() at the top of the file:The problem with the cleaner option, is it doesn’t check if there is a connection issue. It uses whatever was assigned to _db. So if connection was lost, I don’t know what happens. The docs point out some options for handling reconnect here, but it’s only for a single server instance:What do we do for reconnection to clusters? All I came up with so far from the docs is to listen for a disconnectI don’t know if calling mongoDBConnection() in the code above would make any since because I don’t think (not sure) it would update the already imported instance of dbObj() with the updated instance of the connection. Are there drivers to help test fail scenarios with mongodb?In a nutshell, my questions are:",
"username": "Travis_Lindsey"
},
{
"code": "mongoDB.dbObj()client.connectgetaddrinfo ENOTFOUND <cluster url>client.connectconnectimeoutMSconst mongoOptions = {\n poolSize: 100,\n wtimeout: 2500,\n connectTimeoutMS: 10000,\n useNewUrlParser: true,\n useUnifiedTopology: true,\n};\n",
"text": "I tested loosing a connection to a cluster while operating on mongoDB.dbObj() after the original connection is made. I disconnected my network so it couldn’t reach mongodb atlas cluster, then made a query. The query waited for me to restore the connection, and then proceeded. I didn’t have to call client.connect again…awesome.if i waited the >=30 seconds before restoring the connection, the server would timeout with getaddrinfo ENOTFOUND <cluster url>. If I try the request again after restoring the connection, the request worked…I didn’t have to restart the app server or call client.connect.This makes me think what I’m doing is appropriate but I don’t have enough experience with the native driver to know what other things I should be testing to confirm this setup is good for production.Another problem with the above is the timeout is not consistent. 30 seconds is the most consistent but I’ve had it wait for over a minute and not timeout. I’m not sure how to duplicate it consistently. I thought the 30 seconds came from connectimeoutMS here but setting it like below has no affect:At this point, i’m not sure if this is a bug in the 3.6 node.js driver or if I’m doing something wrong with my connection setup.",
"username": "Travis_Lindsey"
},
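For anyone revisiting this on a 4.x or newer Node.js driver, a hedged sketch of making the relevant timeouts explicit and observing connection health; the values are illustrative and should be tuned:

const { MongoClient } = require('mongodb');

const client = new MongoClient(uri, {
  maxPoolSize: 100,                 // replaces the legacy poolSize option
  serverSelectionTimeoutMS: 10000,  // how long an operation waits for a reachable server
  connectTimeoutMS: 10000,          // TCP connect timeout per connection attempt
});

client.on('serverHeartbeatFailed', (event) => console.warn('heartbeat failed', event));
client.on('topologyDescriptionChanged', () => console.log('topology changed'));

await client.connect();  // inside an async context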
{
"code": "",
"text": "Also looking for answers to these questions; appreciate it if someone can help.",
"username": "Level_0_You"
},
{
"code": "",
"text": "No answers, still no docs available…\nDamn, it really sucks",
"username": "Daniel_I"
},
{
"code": "",
"text": "Same question today in 2022: how do I reconnect to a replica set using the Node.js driver without relying on the default values?\nThe heartbeat gives me an indication of a lost connection, so, like a viking, should I force-restart my app? Bad idea, but it works.",
"username": "juan_mamani"
}
]
| Node.js Mongodb Driver reuse connection and reconnect to cluster if connection lost | 2020-09-23T00:18:02.814Z | Node.js Mongodb Driver reuse connection and reconnect to cluster if connection lost | 21,586 |
[
"mongodb-shell",
"server"
]
| [
{
"code": "",
"text": "I’m having trouble running [email protected] on my Mac. I’m currently on macOS Monterey Version 12.5.1.I was able to install [email protected] in my terminal, but I’m running into an issue when I attempt to run the line ‘mongosh’. Here’s an image of my terminal:\nScreen Shot 2022-08-25 at 8.45.07 PM1602×1060 112 KB\nI would appreciate any help that you can provide. I’ve been toying with this for a couple days now and can’t seem to get any further than this. Thanks in advance!",
"username": "Nick_Zanichelli"
},
{
"code": "",
"text": "Your service is not up\nIt shows error status\nCheck mongod.log for errors on why it is failing to start\ncat your ~/Library/…/xyz.plist file shown in your screenshot\nIt will have a config file which has the mongod.log file location",
"username": "Ramachandra_Tummala"
},
{
"code": "mongoshmongosh",
"text": "From the screenshot, you can see that the MongoDB server starting has failed with an error. You cannot connect to the database without starting the MongoDB server, hence mongosh cannot connect to it. Only, after starting the server successfully you can connect using the mongosh or Compass or from any other program.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Can you point me in the right direction on how to start the server? I created an app.js file, initialized npm and started a server on port “27017” using node and nodemon, but I’m still receiving the following error:\nError 21598×902 62.1 KB\nFrom what I’m finding online, it seems I should be connecting to a local deployment simply from following the installation instructions that I had.Thanks!",
"username": "Nick_Zanichelli"
},
{
"code": "mongoshmongoshmongosh",
"text": "From what I’m finding online, it seems I should be connecting to a local deployment simply from following the installation instructions that I had.Correct.Follow these instructions and it should work. You can start MongoDB server as a service or manually. After starting the server, connect to it using mongosh. Connecting to MongoDB using mongosh is just typing mongosh at command prompt in the terminal.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Those are the instructions that I have followed a few times now, but every time it leads me to this result. On another thread I saw that macOS no longer allows a /data/db path by default, so I created one of those, but I am still receiving this error. Any other ideas on why it can’t connect?",
"username": "Nick_Zanichelli"
},
{
"code": "",
"text": "Unless your mongod is up and running you will not be able to connect.\nYes, on macOS write access to the root folders has been removed.\nWhat dirs did you create? Can mongod write to those directories?\nDo those dirs match what you have in your mongod.conf file?",
"username": "Ramachandra_Tummala"
}
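As a quick test outside of brew, you can point mongod at directories the current user can definitely write to (the directory name below is an assumption):

mkdir -p ~/mongodb-data
mongod --dbpath ~/mongodb-data --logpath ~/mongodb-data/mongod.log --fork

If startup fails, the reason is written to that log file, which is usually the fastest way to see what is wrong.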
]
| ERRORS: 'mongosh' and 'mongod --config /usr/local/etc/mongod.conf --fork' | 2022-08-26T01:52:59.373Z | ERRORS: ‘mongosh’ and ‘mongod –config /usr/local/etc/mongod.conf –fork’ | 3,560 |
|
[
"queries",
"node-js",
"data-modeling",
"mongoose-odm",
"mongodb-shell"
]
| [
{
"code": "const productSchema= mongoose.Schema({\n title:{\n type:String,\n required:[true,\"Enter product title\"]\n },\n description:{\n type:String,\n required:[true,\"Enter Product description\"]\n },\n StretchedLength:[\n {\n length:{\n type:String\n },\n quantity:{\n type:Number\n }\n }\n ],\n Density:[\n {\n amount:{\n type:Number\n },\n quantity:{\n type:Number\n }\n }\n ],\n HairColor:[\n {\n name:{\n type:String\n },\n quantity:{\n type:Number\n }\n }\n ],\n shipFrom:[\n {\n location:{\n type:String\n },\n available:{\n type:Boolean,\n default:false\n }\n }\n ],\n price:{\n type:Number\n },\n createdAt:{\n type:Date,\n default:Date.now\n },\n sale:{\n status:{\n type:Boolean,\n default:false\n },\n percent:{\n type:Number,\n default:0\n }\n },\n quantity:{\n type:Number\n }\n})\n",
"text": "Hey developer i am pretty new to mongodb need your help guys shall be thankfull to you!\nProblem is that i am creating hair wig website there is product order page where the user have choice to select multiple options if if any of option does not match to product that option will remain off\ni had created a schema according to my approach need suggestion\n**Hint *** one product has hundred variants like some product has full options and some has not\ni am sharing my order page image that what i want to accomplished and also the that i had createdSchemaOrder page\norder-page.PNG1393×605 92.4 KB",
"username": "Nauman_Arshad"
},
{
"code": "product: {\n title: \"Wig A\",\n description: \"Wig A's description\",\n attributes: [\n { \n name: \"Stretched Length\", \n details: [ \n { value: \"14 inches\", quantity: 10 }, \n { value: \"16 inches\", quantity: 6 }, \n // ... \n ]\n },\n { \n name: \"Density\", \n details: [ \n { value: \"200 density\", quantity: 8 },\n // ... \n ]\n },\n //\n // ... other attributes, like HairColor, etc.\n ],\n shipFrom: [\n { location: \"abc\", available: true },\n // ... other locations\n ],\n price: \n quantity:\n //... other properties\n}\n",
"text": "Hello @Nauman_Arshad, welcome to the MongoDB community forum!You can try this approach, and it has the same details with some structural differences. I have used the wig’s attributes as an array. You can include other properties for different wigs as needed:",
"username": "Prasad_Saya"
},
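With that shape, a query for wigs offering a given option could look something like this (a sketch against the example document above; the values are illustrative):

db.products.find({
  attributes: {
    $elemMatch: { name: "Stretched Length", "details.value": "14 inches" }
  }
})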
{
"code": "",
"text": "Wow, such a quick response; very, very thankful to you. That is what I want. Now I am going to work with this structure and hope it will resolve my problem. Once again, thank you! (Y)",
"username": "Nauman_Arshad"
},
{
"code": "",
"text": "@Prasad_Saya, one more suggestion please. This is the general schema for a product; what should I do if there are more than 100 variants of this product? For example, the product with length 18 and density 150 cannot be shipped from China, and likewise some products are not available in a particular color.\nWhat I am asking is whether I have to manually add all the variants to the database.\nAssume that I don’t have an admin page; I am just adding data using a POST request to the database.",
"username": "Nauman_Arshad"
},
{
"code": "",
"text": "\norder2953×586 234 KB\n\nCheck with 150 density: I don’t have a product with length 22 or 20.\nDo I have to add the variants manually in the DB, or can we handle it using the attributes in the product schema?\n@Prasad_Saya",
"username": "Nauman_Arshad"
},
{
"code": "",
"text": "A possibility is that you can store all the possible combinations, like specify the attributes as shown in my earlier post. And, the available combinations as a separate field (and can be updated as and when there are changes). So, at any given point you can know the availability of a product with certain attributes.",
"username": "Prasad_Saya"
},
{
"code": "{\n \"title\":\"200% Water Wave Lace Front Wigs For Women Pre Plucked With Baby Hair Curly Human Hair Wigs Deep Wave Frontal Wigs Lace Closure\",\n \"description\":\"Brand Name: CEXXY Texture: Jerry CurlHair Grade: Remy Hair Origin: CN(Origin) Human Hair Type: Brazilian Hair Cap Size: Average Size Base Material: Swiss Lace Lace Color: Medium Brown Suitable Dying Colors: Darker Color Only Wig Material: 100% Natural Unprocessing Human Hair Wig Type 1: 4x4 Lace Closure Wig Wig Type 2: 13x4 Lace Frontal Wig Wig Length: 8inch -38inch Human Hair Wig Texture: Water Wave Deep Curly Human Hair Wig Made Method: Half Machine Made & Half Hand Tie Hairline: Pre Plucked Hairline With Baby Hair\",\n \"attributes\":[\n {\n \"name\":\"Stretched Length\",\n \"details\":[\n {\"value\":14,\"quantity\":8},\n {\"value\":16,\"quantity\":8},\n {\"value\":38,\"quantity\":8},\n {\"value\":36,\"quantity\":8},\n {\"value\":34,\"quantity\":8},\n {\"value\":32,\"quantity\":8},\n {\"value\":30,\"quantity\":8},\n {\"value\":28,\"quantity\":8},\n {\"value\":26,\"quantity\":8},\n {\"value\":24,\"quantity\":7},\n {\"value\":22,\"quantity\":6},\n {\"value\":20,\"quantity\":7},\n {\"value\":18,\"quantity\":8}\n ]\n },\n {\n \"name\":\"Density\",\n \"details\":[\n {\n \"value\":200,\"quantity\":52\n },\n {\n \"value\":150,\"quantity\":48\n } \n ]\n\n },\n {\n \"name\":\"Hair Color\",\n \"details\":[\n {\n \"value\":\"4x4 Lace Closure Wig\",\"quantity\":52\n },\n {\n \"value\":\"13x4 Lace Front Wig\",\"quantity\":48\n }\n ]\n\n }\n ],\n\n \"shipFrom\":[\n {\n \"value\":\"France\",\n \"available\":true\n },\n {\n \"location\":\"China\",\n \"available\":true\n },\n {\n \"location\":\"United States\",\n \"available\":true\n }\n ],\n \"price\":74.10,\n \"createdAt\":\"\",\n \"sale\":{\n \"status\":false,\n \"percent\":0\n }\n\n}\n",
"text": "how the combinations filed will be look like in this schema please give some idea please",
"username": "Nauman_Arshad"
},
{
"code": "combinations: [\n [ \n { name: \"Stretched Length\", value: \"14 inches\" },\n { name: \"Density\" value: \"200 density\" },\n { name: \"HairColor\" value: \"4x4 Lace Closure Wig\" },\n ],\n [ \n { name: \"Stretched Length\", value: \"18 inches\" },\n { name: \"Density\" value: \"180 density\" },\n { name: \"HairColor\" value: \"4x4 Lace Closure Wig\" },\n ],\n // ... more\n]\n",
"text": "I see this is one way to start working about the available combinations, for example:At this point I cannot for sure know how it works in your case certainly. This was an idea I had at that moment when I wrote the comments.Design is a process and based upon your requirements (your queries and how you use them in your app, for example) it can (and will) evolve. Its a very involving process. One needs to be intimately familiar with various aspects in an app to get the design right. As you build the app, the design can (and will) change, and be prepared for that.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "@Prasad_Saya, thank you for your response; I agree with your point. According to my design, I am adding a bool value for the attributes in combinations too, to indicate whether a product includes a particular attribute or not, so I can dynamically make that field un-selectable accordingly.",
"username": "Nauman_Arshad"
},
{
"code": "",
"text": "@Prasad_Saya, sorry to disturb you again. Can you please tell me why the checks array inside the combinations array is not being created in the database? Is there an issue with the syntax? Also, the HairColor value is empty.\nraprd-req-post1522×706 102 KB",
"username": "Nauman_Arshad"
},
{
"code": "",
"text": "\ndb1317×627 28 KB\n\nDatabase",
"username": "Nauman_Arshad"
}
]
| Need suggestion in creating custom product schema | 2022-08-23T18:36:50.524Z | Need suggestion in creating custom product schema | 4,131 |
|
null | [
"sharding",
"text-search"
]
| [
{
"code": "{\n \"first_name\":\"Jhon\",\n \"category\":\"sport\",\n \"location\":\"Maribor, Slovenia\",\n \"url\":\"someurl.com\",\n ...\n}\n",
"text": "Hey i’m new to this community so hi everyone !Just to mention on start i have a sharded Mongo DB ( 3 config replicas, 2 routers , 2 shards x 3 replicas , hosted locally NOT on Atlas) , not sure if this is important to my issue just to mention it.So in one of my collections i want to preform full text search, so here is a sample data:I want to be able to perform full text search on multiple fields , but i saw that mongo allows 1 text index per collection. I also so that u can put multiple fields as part of that index ( in my case first_name, category, location etc. ), but the issue i have with this is when i search for example if an object has “Mar” in their first_name , i assume the results will return the objects that contain “Maribor” in location field as well , which is not the output that i want ( i want to get results that only match the criteria against the first_name field ).I wanted to ask if this is the approach to go , is there a better solution ? ( Atlas is not an option for me at the moment ).Thanks a lot",
"username": "Omen_Omen"
},
{
"code": "",
"text": "Hi @Omen_Omen, the Atlas Search indexes come to solve exactly such a problem.To solve it with regular text indexes you will have to add another stage after the text match, for example a regex match on the first_name field in the next stage, as your case requires.Please note that text indexes search whole words and not substrings.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hey @Pavel_Duchovny thanks for the response !As i mentioned currently Atlas isn’t an option for me.\nIn regards to your solution about adding additional stage with regex i’m not sure i understand you, would you mind elaborating on that or show a short example ?Thanks a lot",
"username": "Omen_Omen"
},
{
"code": "db.collection.aggregate( [ { $match: { $text: { $search: \"Maribor\" } } }, { $match: { \"first_name\" : /Maribor/ } } ])\n",
"text": "This query will only return results that have first_name with maribor and not any other…",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks for the example.Is this the go to practice for my case ? Doesn’t the second match return exact value ( which can be achieved without the text index on the field ? ) I’m just trying to find the most optimal solution",
"username": "Omen_Omen"
},
{
"code": "",
"text": "It can be also a sub expression like /Mar/ or , I thought you are searching for a word inside of sentences or a few words…",
"username": "Pavel_Duchovny"
},
{
"code": "{\n \"first_name\":\"marvin jones\",\n \"bio\":\"I've watched the movie marvelous mesigner , its amazing\",\n \"link\":\"www.smarvit.com\"\n}\ndb.collection.aggregate( [ { $match: { $text: { $search: \"marv\" } } }, { $match: { \"first_name\" : /marv/ } } ])",
"text": "Okay so lets say i have the following data:If i only apply the search query it will return all 3 results? And than if i add the first_name part:db.collection.aggregate( [ { $match: { $text: { $search: \"marv\" } } }, { $match: { \"first_name\" : /marv/ } } ])It will only return the results that match the first_name filter ?",
"username": "Omen_Omen"
},
{
"code": ".aggregate( [ { $match: { $text: { $search: \"marvin\" } } }, { $match: { \"first_name\" : /marv/ } } ])\n",
"text": "Yes thats the idea.However, text indexes are built to search for words and not substrings. Atlas search has a $regex operator to enhance that If you need to search for substrings you will have to use regex matches from the start (those are not optimal for index used unfortenatly)Atlas search is one heck of a technology that boost any search. I am very in favor of moving to Atlas In your example only the following query will work Ty",
"username": "Pavel_Duchovny"
},
{
"code": "$search",
"text": "So basically I can’t search for substrings efficiently? In the example above, I have to $search for the whole word, and then match against the substring? Does that mean I can’t search substrings? (The above would return a result if the first name was marvin, but if it was marvery it won’t return any result, meaning I have to know the full word (for the $search part) if I want to do a substring search (for the regex part), which doesn’t really make sense.)\nAm I getting this right or not?",
"username": "Omen_Omen"
},
{
"code": ".aggregate( [ { $match: { \"first_name\" : /^marv/ } } ])\n",
"text": "You can just use :This type of query can use an index on first_name…Searching substrings/full text search is more of Atlas search capabilities. You will need to test how regex or regular text search can be done on your use case…",
"username": "Pavel_Duchovny"
},
{
"code": "first_name{\n\"first_name\":\"marvin gaye\"\n}\n",
"text": "Will this query work if i had single field ( asc ) index on first_name ( without the text index ) and for example. i have?Sorry again if i’m asking too many questions but i was a bit confused on the indexing part in Mongo",
"username": "Omen_Omen"
},
{
"code": "",
"text": "The regex with start anchor and without case insensitivity specifications should work with regular indexesTy",
"username": "Pavel_Duchovny"
},
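A small sketch of what that looks like in practice, following the collection and field names used earlier in the thread:

db.collection.createIndex({ first_name: 1 })
db.collection.find({ first_name: /^marv/ })    // anchored and case-sensitive: can use the index efficiently
db.collection.find({ first_name: /marv/i })    // unanchored or case-insensitive: cannot use tight index bounds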
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Full text search on multiple unrelated fields | 2022-08-23T13:20:27.212Z | Full text search on multiple unrelated fields | 7,068 |
null | []
| [
{
"code": "",
"text": "Hello!\nI am using MongoDB 4.2 on Windows Server 2019.\nAfter I executed the PowerShell command “.\mongo localhost/admin -u $user -p $password --eval “db.runCommand({logrotate : 1})””, I can’t delete the old log files.\nAlso, I see in Resource Monitor that the old files are still being used by mongod.exe.\nAnd that’s why I have a question: what should I do to release those files?",
"username": "Eugen_Saprykin"
},
{
"code": "admindb.runCommand(....db.adminCommand(...",
"text": "Hi @Eugen_SaprykinThe command needs to be run on the admin database. Change the db.runCommand(.... to db.adminCommand(...Hope this helps.",
"username": "chris"
},
{
"code": "",
"text": "Hi! It doesn’t help. It seems this is the same situation: Delete or Empty MongoDB log file. Some time our database server becomes… | by Ankit Kumar Rajpoot | DataDrivenInvestor",
"username": "Eugen_Saprykin"
},
{
"code": "",
"text": "how do you run mongod for logrotate? “reopen” or “rename” mode?is it possible you are trying to delete files from a non-windows-admin account that is not related to mongod admin?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I don’t specify logrotate option. I ran powershell as windows admin account, but this account is not related to mongo admin, that’s why I specify user and password for ps script.",
"username": "Eugen_Saprykin"
},
{
"code": "{logRotate : 1}mongod --dbpath ./data --logpath mylogs --logRotate rename\nmongod --dbpath ./data --auth --logpath mylogs --logRotate rename\n",
"text": "here are a few things:1- it is possible you were getting an error but couldn’t notice it: the command you are using should be {logRotate : 1}, capital “R”. apply this correction and try again.2- I have spun up a test server with the following commands, the second also requires adding admin user (in a test folder). applying your command causes a log flush to disk with a date as the extension, and I had no problem removing those files. I could even delete the “mylogs” file while the server was still up ( server logs nothing if it not there, I had to create an empty one so server could continue printing logs).So I suggest first checking your disk for problems and also the system itself. files getting locked even to admins is not uncommon in windows systems.If you really need to remove those files, try stopping/restarting the mongod. but if the problem persists you can say it is a disk/windows problem.although its relevance is low, do not forget to also check the folder permissions of your logs.PS: there might be other causes I missed, so do not take this post as an absolute solution.",
"username": "Yilmaz_Durmaz"
},
{
"code": "{\"ok\":1}--logpathsystemLog.path",
"text": "To add to @Yilmaz_Durmaz’s reply.If you run the command interactively do you get the {\"ok\":1} output. This would be the indicator that logrotation is completing.I cann’t delete old files.Also I see in resource monitor that old files are deing used mongod.exe.What are the filename(s) you are attempting to delete?By default the logrotate is going to rename the current log file by appending a timestamp to the filename and opening a new file with the name defined by --logpath or systemLog.pathA couple of screenshots of what you are experiencing can help clarify too.",
"username": "chris"
},
{
"code": "",
"text": "I see {“ok”:1}. Logroatation comlete successful. In log path I see file that called mongo.log and creation time = script start time. Also in log path I see a new file mongo.log.YYYY-MM-DDTHH-mm-SS.\nModification time second file the same script starting time.\nAlso in file properties I see I have permission to all.\nI just try to remove old file manual and it was successful.\nTurns out that problem when i try remove files in the same script as logrotation.\nI think the problem not in delay, becouse in script i try to delete files older than 7 days.\nHere is full text script$mongoBin = $args[0]$user = $args[1]$password = $args[2]$serverName = “localhost”try{\nWrite-Host “Move to mongod location: $mongoBin”\nSet-Location $mongoBin\n.\\mongo $serverName/admin -u $user -p $password --eval “db.adminCommand( { logRotate : 1 } )”\nif (!$?)\n{\n“An error during changing log file”\nWrite-Host $Error[0]\nExit 1\n}\n#delete files older then 7days\n$config = Get-ChildItem …\\mongo*.cfg -Recurse\n$input = Get-Content $config\n$input | foreach-object {If ($_ -match '\\Wpath:’)\n{\n$logPath = ($_ -split “path:”)[1].Trim()\n}\n}\nGet-Item \"$logPath.\" | Where-Object {\nif ($.LastWriteTime -lt (Get-Date).AddDays(-7))\n{\nWrite-Host \"remove file $\"\nRemove-Item $_.FullName\n}}}\ncatch\n{\nWrite-Host -Exception $_.ExceptionWrite-Host POWERSHELLERROR\nWrite-Error [string]::Empty\nExit 1\n}Also I have just tried at local machine and it works as expected",
"username": "Eugen_Saprykin"
},
{
"code": "",
"text": "I just try to remove old file manual and it was successful.it is fortunate this eliminates a disk problem I don’t have experience with PowerShell. so it is either on the file removal portion of the code or somehow the script does not have enough privilege to delete the file when/where it starts.by the way, resource monitor does not show live data, otherwise it was not possible to inspect by eye. resource usages are delayed (I don’t know how long) before they are removed from the list.",
"username": "Yilmaz_Durmaz"
},
{
"code": "Get-Item \"$logPath.\" | Where-Object {\nif ($.LastWriteTime -lt (Get-Date).AddDays(-7))\n",
"text": "The filter is files last written more than 7 days ago. Its not going to remove the log file that was just rotated.",
"username": "chris"
},
{
"code": "",
"text": "I understand. I donn’t want remove just rotated files but files are created 7 days ago.",
"username": "Eugen_Saprykin"
},
{
"code": "",
"text": "Try a different file property, creationtime.I think your logrotation question is answered at least, you’re into powershell territory now.",
"username": "chris"
}
]
| After log rotation mongod.exe still uses rotated files | 2022-08-23T12:44:43.956Z | After log rotation mongod.exe still uses rotated files | 3,105 |
null | []
| [
{
"code": "exports = async function(){\n // Load the AWS SDK for Node.js\n const AWS = require('aws-sdk');\n // Set the AWS config\n AWS.config = new AWS.Config();\n AWS.config.accessKeyId = context.values.get(\"AWS_ACCESS_KEY\");\n AWS.config.secretAccessKey = context.values.get(\"AWS_ACCESS_SECRET\");\n AWS.config.region = context.values.get(\"AWS_REGION\");\n \n // Create S3 service object\n s3 = new AWS.S3({apiVersion: '2006-03-01'});\n \n\n // Call S3 to list the buckets\n const buckets = await s3.listBuckets().promise()\n return buckets\n};\n",
"text": "Hi All,There have been some recent messages popup around the depreciation of 3rd party services such as AWS and Twilio. I think this is going to cause people a lot of confusion so I would advise MongoDB to add some examples of using the AWS package as a dependency.In my projects across multiple clients, I have utilised the 3rd party AWS service to do lots of things such as S3, SQS, SFN, SNS. I have therefore just explored how to implement this change.The example below gives a sort of “hello world” on how to get the AWS SDK working.The access key and secret are stored in realm using the secrets and values area.The installed aws-sdk package is v2.737.0. More recent ones don’t seem to work yet and I found this out from @Drew_DiPalma post HEREOne thing I would like to know from the MongoDB realm team is what sort of overhead is this going to add to our functions? Is it going to slow them down considerably if we need to load in the AWS SDK? The 3rd party services worked well in my opinion and didn’t really add any overhead.As I mentioned, it would be great if there could be some detailed documention with examples around these dependencies that people will be needing. Mainly around AWS and HTTP, so maybe an example of the library axios or node-fetch would be ideal.Thanks",
"username": "Adam_Holt"
},
{
"code": "aws-sdk",
"text": "Hello @Adam_Holt,Thanks for raising this query and I acknowledge it has been a while since you asked and I have good news to share Please find the documentation link describing how 3rd party services can be replaced with npm modules.One thing I would like to know from the MongoDB realm team is what sort of overhead is this going to add to our functions? Is it going to slow them down considerably if we need to load in the AWS SDK?There should not be any difference in overhead between AWS service and aws-sdk. The service wraps the official Go SDK from amazon so under the hood, they are doing the same things.I hope this helps. Please feel free to post a query should you run into issues.Cheers,\nHenna",
"username": "henna.s"
},
{
"code": "exports = async function(){\n // Load the AWS SDK for Node.js\n const S3 = require('aws-sdk/clients/s3');\n const s3 = new S3({\n accessKeyId: context.values.get(\"AWS_ACCESS_KEY\"),\n secretAccessKey: context.values.get(\"AWS_ACCESS_SECRET\"),\n region: \"ap-southeast-2\",\n });\n\n // Call S3 to get object\n const beforeNodeSDK = new Date()\n const getResult = await s3.getObject({\n Bucket: \"myBucket\",\n Key: \"myKey\"\n }).promise()\n const afterNodeSDK = new Date()\n const timeTakenNodeSDK = afterNodeSDK - beforeNodeSDK\n \n return timeTakenNodeSDK // (result = 2326)\n};\nexports = async function() {\n // Load the built in AWS service\n const s3 = context.services.get(\"AWS\").s3(\"ap-southeast-2\");\n // Call S3 to get object\n const beforeGoSDK = new Date()\n const result = await s3.GetObject({\n Bucket: \"myBucket\",\n Key: \"myKey\"\n });\n const afterGoSDK = new Date()\n const timeTakenGoSDK = afterGoSDK - beforeGoSDK\n \n return timeTakenGoSDK // (result = 57)\n};\n",
"text": "Hi @henna.sThanks for the response. I have given this a go now thanks to the documentation with the new usage of importing the S3 Node.js client.However, I’m seeing an overhead using the node.js AWS package.For example, take a look at these functions. There is a 40x increase in the time it takes to get an object from S3!2326ms vs 57ms Node.js package - 2326ms3rd Party Services (Go SDK) - 57msThanks!",
"username": "Adam_Holt"
},
{
"code": "",
"text": "Thanks, @Adam_Holt for reporting this.Could you please check if subsequent requests are faster or are they consistently slow?I look forward to your response.Kind Regards,\nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "Hey,I ran the functions about 10 times each and the time taken was always similar.Thanks",
"username": "Adam_Holt"
},
{
"code": "",
"text": "Thanks, Adam. I have reported this and I should be able to get you an update soon.I appreciate your patience in the meantime.Cheers,\nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "I am seeing very similar results while working with the ‘aws-sdk’. Although, my results seem to be more random.I have a simple function which signs URLs to upload to S3. In some cases the function returns in under 1 second and other cases it takes upwards of 15 seconds to return.",
"username": "Tyler_Collins"
},
{
"code": " const {S3Client, GetObjectCommand, PutObjectCommand} = require(\"@aws-sdk/client-s3\");",
"text": "Hello @Adam_Holt , @Tyler_Collins,Thank you for your patience.Could you try running the function with aws sdk version 3 like this and check if you get the same slow performance?\n const {S3Client, GetObjectCommand, PutObjectCommand} = require(\"@aws-sdk/client-s3\");Could you share the testing s3 file so we could try reproducing the same?I look forward to your response.Cheers,\nHenna",
"username": "henna.s"
},
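For anyone trying the v3 suggestion above, a rough sketch of what an equivalent App Services function could look like with the modular v3 client; the bucket, key, region and secret value names are assumptions carried over from earlier posts, not a tested configuration:

```javascript
exports = async function () {
  const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

  // Credentials read from App Services values/secrets (names are placeholders)
  const s3 = new S3Client({
    region: "ap-southeast-2",
    credentials: {
      accessKeyId: context.values.get("AWS_ACCESS_KEY"),
      secretAccessKey: context.values.get("AWS_ACCESS_SECRET"),
    },
  });

  // v3 wraps each operation in a command object sent through the client
  const result = await s3.send(new GetObjectCommand({ Bucket: "myBucket", Key: "myKey" }));
  return result.ContentLength;
};
```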
{
"code": "",
"text": "Hi @henna.sI’m unable to install that package. I get the following error info in the UI.“failed to transpile node_modules/aws-crt/scripts/build.js. “aws-crt” is likely not supported yet. unknown: Unexpected reserved word ‘package’ (142:8)”I did not specify a specific version. Just the package “@aws-sdk/client-s3”.Thanks",
"username": "Adam_Holt"
},
{
"code": "",
"text": "Hi @Adam_Holt,Thanks for sharing the result. This error happens if you click “Add New Dependency” to add it.Could you try to use the old way (uploading node_modules.tar file) to install the dependency and let me know if the function still takes the same time?Thanks a lot for the feedback, the “Add Dependency” should also work and the team is investigating that now.I look forward to your response.Cheers,\nHenna",
"username": "henna.s"
},
{
"code": "Failed to install dependencies failed to transpile node_modules/aws-crt/scripts/build.js. \"aws-crt\" is likely not supported yet. unknown: Unexpected reserved word 'package' (145:8)",
"text": "node_modules.tarWhen I try to install the module with the current version I receive the following error.\nFailed to install dependencies failed to transpile node_modules/aws-crt/scripts/build.js. \"aws-crt\" is likely not supported yet. unknown: Unexpected reserved word 'package' (145:8)I receive this regardless of if I upload a node_modules.tar or if I install in the UI. The only version I can get to install is 2.737.0",
"username": "Tyler_Collins"
},
{
"code": "",
"text": "I saw the deprecation yesterday too and I’m seeing increases of times on my replacement of context.http to node-fetch (v2) from 5-6ms to 800ms. (didn’t deploy the change after seeing the times) In my case I was just using the triggers + functions to send some slack notifications.Even though it is a different use case the underlying problem might be similar.",
"username": "Henrique_Silva"
},
{
"code": "",
"text": "Hello @Tyler_Collins,Thanks for sharing the feedback. Could you please share the testing S3 file, so that we can try to reproduce the issue on our end?I look forward to your response.Cheers ",
"username": "henna.s"
},
{
"code": "GetObject5827ms56ms",
"text": "\nScreenshot 2022-02-16 0823352908×1315 239 KB\n\n\nPO100293_31920×3413 434 KB\nHi @henna.sHere is an example of the 2 scripts side by side doing a simple GetObject from S3.Node.js module takes 5827ms\nBuilt in Go version takes 56msSo in this case, it’s 104x slower.Thanks",
"username": "Adam_Holt"
},
{
"code": "// arg = [[\"key(filename)\", \"filetype\"]]\nexports = async function(arg){\n const S3 = require('aws-sdk/clients/s3');\n console.log(arg)\n \n const AWSAccessKeyID = context.values.get(\"AWSAccessKeyID_value\");\n const AWSSecretKey = context.values.get(\"AWSSecretKey_value\");\n\n // Configuring AWS\n const s3 = new S3({\n accessKeyId: AWSAccessKeyID, // stored in the .env file\n secretAccessKey: AWSSecretKey, // stored in the .env file\n region: \"us-east-1\",\n });\n\n // retrieve the bucket name\n const Bucket = \"\";\n\n // PUT URL Generator\n const generatePutUrl = (Key, ContentType) => {\n return new Promise((resolve, reject) => {\n // Note Bucket is retrieved from the env variable above.\n const params = { Bucket, Key, ContentType, Expires: 900 };\n // Note operation in this case is putObject\n s3.getSignedUrl('putObject', params, function(err, url) {\n if (err) {\n reject(err);\n }\n // If there is no errors we can send back the pre-signed PUT URL\n resolve(url);\n });\n });\n }\n \n const URLS = await Promise.all(arg.map((res, index) => {\n const awaitresult = generatePutUrl(res[0], res[1])\n return awaitresult\n }));\n \n return {urls: URLS};\n};\n",
"text": "@henna.s Please see the attached code",
"username": "Tyler_Collins"
},
{
"code": "",
"text": "As a quick update from the Realm engineering team – we’re actively working on tickets related to these performance issues and we expect to have a few significant improvements soon. While we have a date for removing 3rd Party Services, we are also going to continue monitoring usage and will be working with the community to make sure the deprecation date is fair, and extending if necessary.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Hello @Henrique_Silva,Thank you for raising the query. This appears to be different than the aws-sdk issue. Could you please create a separate topic for this and share necessary details that can help us investigate.I look forward to your response.Cheers ",
"username": "henna.s"
},
{
"code": "",
"text": "Do you have an ETA on the performance upgrades for “AWS-SDK”?It looks like the “PresignURL” function from the s3 3rd party service no longer works so I am stuck waiting for the performance fixes to be made.Thank you",
"username": "Tyler_Collins"
},
{
"code": "exports = function(s3Path) {\n const s3 = context.services.get(\"AWS\").s3(\"ap-southeast-2\");\n const bucket = \"my-bucket\";\n\n const presignedUrl = s3.PresignURL({\n Bucket: bucket,\n Key: s3Path,\n // HTTP method that is valid for this signed URL. Can use PUT for uploads, or GET for downloads.\n Method: \"GET\",\n // Duration of the lifetime of the signed url, in milliseconds\n ExpirationMS: 900000\n });\n return presignedUrl;\n};\n",
"text": "Hey @Tyler_CollinsTry this code out. It works fine in my app.",
"username": "Adam_Holt"
},
{
"code": "",
"text": "This worked, thank you",
"username": "Tyler_Collins"
}
]
| AWS services moving forward - 3rd party services deprecation | 2021-12-14T01:57:07.981Z | AWS services moving forward - 3rd party services deprecation | 11,928 |
[
"security"
]
| [
{
"code": "",
"text": "Hi guys.I can’t see others databases created in mongodb, with userAdmin even with permission.For example I have a database called letsintdb and when I login with userAdmin at instance Mongo, this database don’t appear for me.I tried to give access to the userAdmin but even with permission this issue occors.If I try to create a new database, the same occors.Is someone knows how I can solve this?Regards.\n\nCapturar (1)1373×574 65.3 KB\n",
"username": "Felipe_Jareta"
},
{
"code": "",
"text": "Are you connected to the correct cluster?\nIs it local DB or on a cloud?\nHave you created any collections under the letsintdb?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Yes, I am.\nIt’s on private cloud using Openshift.\nNot yet. For now the database is empty. It would be necessary to show for the userAdmin?",
"username": "Felipe_Jareta"
}
]
| I can't see others database with userAdmin | 2022-08-25T19:21:21.953Z | I can’t see others database with userAdmin | 2,665 |
|
null | [
"indexes",
"atlas-search"
]
| [
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"geometry\": {\n \"indexShapes\": true,\n \"type\": \"geo\"\n }\n }\n }\n}\nYour index could not be built: Unexpected error: Unable to Tessellate shape [[49.47748652086282, -123.79858384565688] [49.47748631290387, -123.79833537632288] [49.47748610439212, -123.79808690974527] [49.47748589536565, -123.79783844040281] [49.477485761488396, -123.79767969655772] [49.47750369510057, -123.79761404608594] [49.47750367636922, -123.79758989051233] [49.47750365585305, -123.79756573214883] [49.477485610071895, -123.79750024279329] [49.477485475730475, -123.79734149894456] [49.4774852650837, -123.79709303235002] [49.4774850539222, -123.79684456299061] [49.477484914050514, -123.79667891676817] [49.47753880850618, -123.7965958535766] [49.47785707753759, -123.79659445750491] [49.47785658451906, -123.7964562714891] [49.47786416290044, -123.79858393595389] [49.47748652086282, -123.79858384565688] ]. Possible malformed shape detected.\ngeometry_2dsphere",
"text": "Hi all,trying to create an atlas search index with following definition:sadly the index build fails with following error:The polygon looks like this: problematic_polygon.geojson · GitHubI know we could try to fix the concrete polygon, but this is quite impossible for size of the dataset like ours - there will be many of such polygons.Interesting fact is that the collection contains MongoDB geometry_2dsphere index and there is no problem in inserting the polygon into the MongoDB itself.Maybe this is related to this bug report?Any idea how to fix / workaround this?",
"username": "Ikar_Pohorsky"
},
{
"code": "",
"text": "We will release a fix before autumn and I will update this ticket when we do. Thank you for reporting it. I need to test your coordinates to be absolutely sure, but that is my hunch. It can be difficult to eyeball collinearity. I can move it to a spreadsheet, though.",
"username": "Marcus"
},
{
"code": "geometry_2dsphere",
"text": "That sounds fantastic, thanks @Marcus!It can be difficult to eyeball collinearity.…that’s right, but my guess here is: if you manage to store the polygon in MongoDB collection with the geometry_2dsphere index then the polygon is valid ",
"username": "Ikar_Pohorsky"
},
{
"code": "",
"text": "@Ikar_PohorskyI believe the issue is fixed in the most recent release of Atlas Search but not documented. I’ll circle back with the team to change that fact.Feel free to check it out. If it doesn’t work out, please let me know and I will investigate.",
"username": "Marcus"
},
{
"code": "",
"text": "Will try when I have a chance and confirm. Hopefully next Monday.",
"username": "Ikar_Pohorsky"
},
{
"code": "",
"text": "It works! This is brilliant \nThank you so much ",
"username": "Ikar_Pohorsky"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Atlas Search: Unable to Tessellate shape | 2022-06-14T07:44:37.163Z | Atlas Search: Unable to Tessellate shape | 2,110 |
null | [
"sharding"
]
| [
{
"code": "sh.status()",
"text": "I would like to ask a few questions about sharding, they are as follows:In a sharded cluster are query routers separate nodes? How are they set up from what I have seen the shareded clusters are configured by setting up the Conf Server and then adding and enabling shards and their DBs or Collections?Are each Shard and Conf Server a Replicate Set? If so do you have to add all members of a replicate set in Shard Cluster or can you add primary or secondary nodes by themselves?Other than commands like sh.status() what are methods of checkup can you do to make sure the sharded cluster is working as expected? For example, there arent any hardware issues that are setting the cluster back etc.",
"username": "Master_Selcuk"
},
{
"code": "",
"text": "Hi @Master_Selcuk and welcome to the community!!In a sharded cluster are query routers separate nodes? How are they set up from what I have seen the shareded clusters are configured by setting up the Conf Server and then adding and enabling shards and their DBs or Collections?In a sharded collection, the mongos basically route the queries to the respective shards. Whereas the config servers are the metadata for the sharded clusters.Are each Shard and Conf Server a Replicate Set?Yes, each shard in sharded cluster is a replica set and starting in MongoDB 3.4 each config server can also be deployed as replica sets. Please see our documentation on replica sets Config server for further details.If so do you have to add all members of a replicate set in Shard Cluster or can you add primary or secondary nodes by themselves?Yes, you will need to add all the members of the replica sets in the sharded cluster. Here are the steps on how to deploy a sharded cluster in MongoDB.what are methods of checkup can you do to make sure the sharded cluster is working as expected?sh.status() with db.printShardingStatus() could be other command to view the status of the sharded cluster. For other sharding sharding related command you could visit the documentation on Sharding Commands .Note that sharding is considered an advanced MongoDB topic. It requires knowledge about operations, security, replica sets, and also specific sharding knowledge (chunks, balancing, shard keys, etc.). These answers are very high-level, and involve only the very basic concepts of a sharded cluster.I would suggest you to take our University course on cluster administration concepts which would include replication , sharding and other concepts. Visit the course MongoDB Courses and Trainings | MongoDB University.Let us know if you have any further questions.Thanks\nAasawari",
"username": "Aasawari"
}
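To make the health-check part of the answer concrete, a small mongosh sketch; the replica set name and hostnames are placeholders, and the commands assume you are connected to a mongos:

```javascript
// Overall cluster status: shards, balancer state, chunk distribution
sh.status();
db.printShardingStatus(); // same information in a printable form

// Each shard is added by its replica set name plus a seed list of its members
sh.addShard("shardRS1/shard1a.example.net:27018,shard1b.example.net:27018,shard1c.example.net:27018");
```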
]
| Questions About Query Routers and Replicate Set in relation to Shard Clusters | 2022-08-24T19:13:26.490Z | Questions About Query Routers and Replicate Set in relation to Shard Clusters | 1,762 |
[
"aggregation",
"queries",
"node-js",
"data-modeling"
]
| [
{
"code": "exports.getConversation = catchAsync(async (req, res, next) => {\n const loginUser = req.user.id;\n const conversationId = req.params.id;\n const conversation = await Conversation.find({\n \"deleteMessagesBy.deleterID\": loginUser,\n $updateAt: {\n $dsadsa: \"deleteMessagesBy.$.deleteAt\"\n },\n _id: conversationId,\n }, \"id conversationParticipant conversationType readBy lastMessage updateAt deleteMessagesBy\"\n //readBy\n ).populate({\n path: \"conversationParticipant\",\n select: \"name username profileImage \",\n }).populate({\n path: \"lastMessage\",\n select: \"messageData type sentAt category conversationId sender\",\n });\n res.status(200).json({\n status: \"success\",\n conversation: conversation,\n });\n});\n",
"text": "Hello,When I find a data in my model, I want to compare two variables in this data, but one of them is the data of the object. How can I achieve this?Data;\n",
"username": "Mehmet_Kasap"
},
{
"code": "var unwind = {$unwind: '$deleteMessageBy'};\n\nvar match = {$match: {'deleteMessageBy.deleterID': req.user.id}};\n\nConversation.aggregate(match, unwind, function(err, result) {\n console.log(result);\n});\ndb.collection.aggregate([\n {\n \"$unwind\": \"$deleteMessageBy\",\n },\n {\n $project: {\n compareDate: {\n $let: {\n vars: {\n deleteTime: \"$deleteMessageBy.deleteAt\",\n updateTime: \"$updatedAt\"\n },\n in: {\n $lt: [ \"$$deleteTime\", \"$$updateTime\" ]\n }\n }\n }\n }\n }\n])\n",
"text": "Hello @Mehmet_Kasap,Welcome to the MongoDB Community!I want to compare two variables in this data, but one of them is the data of the object.In Mongoose you can try using $unwind - Mongoose:Note that this is an untested example and may not work in all cases. Please do test any code thoroughly with your use case so that there’s no surprises.Now you have the result and you can perform further operations.Alternatively, You can compare the data of the object by using MongoDB aggregations - by first unwinding the array of objects by using $unwind, and can use $let operator to compare the dates.Something like this:If you have any doubts, please feel free to reach out to us.Regards,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB query with same data referance | 2022-08-20T02:37:43.753Z | MongoDB query with same data referance | 1,879 |
|
null | [
"swift",
"atlas-device-sync"
]
| [
{
"code": "struct SyncContentView: View {\n\t// Observe the Realm app object in order to react to login state changes.\n\t@EnvironmentObject var app: RealmSwift.App\n\t\n\tvar body: some View {\n\t\tif app.currentUser != nil {\n\t\t\tOpenSyncedRealmView()\n\t\t\t\t.environment(\\.partitionValue, app.currentUser?.id)\n\t\t} else {\n\t\t\tAuthenticationView()\n\t\t}\n\t}\n}\n// This view opens a synced realm.\nstruct OpenSyncedRealmView: View {\n\t\t\n\t@AutoOpen(appId: Constants.APP_ID, partitionValue: \"\", timeout: 4000) var autoOpen\n\t\n\tvar body: some View {\n\t\tswitch autoOpen {\n\t\t\t\n\t\tcase .connecting:\n\t\t\tProgressView()\n\t\t\n\t\tcase .waitingForUser:\n\t\t\tProgressView(\"Waiting for user to log in...\")\n\t\t\t\n\t\tcase .open(let userPartitionedRealm):\n\t\t\tTabView {\n\t\t\t\tLibraryView(userGroup: { // PRIVATE data\n// User should be able to write in the public realm here too...\n\t\t\t\t\t// ... getting group from private user-partitioned realm...\n\t\t\t\t}())\n\t\t\t\t.environment(\\.realm, userPartitionedRealm)\n\t\t\t\t.tabItem {\n\t\t\t\t\tLabel(\"Library\", systemImage: \"books.vertical\")\n\t\t\t\t}\n\t\t\t\t\n\t\t\t\tBrowseView() \n// Should display PUBLIC data. \n// Users can import data from PUBLIC to their own space (PUBLIC copied to PRIVATE)\n// Where to open the public realm?\n\t\t\t\t\t.tabItem {\n\t\t\t\t\t\tLabel(\"Browse\", systemImage: \"rectangle.stack\")\n\t\t\t\t\t}\n\t\t\t\t\n\t\t\t\tSettingsView()\n\t\t\t\t\t.tabItem {\n\t\t\t\t\t\tLabel(\"Settings\", systemImage: \"gear\")\n\t\t\t\t\t}\n\t\t\t}\n\t\t\t\n\t\tcase .progress(let progress):\n\t\t\tProgressView(progress)\n\t\t\t\n\t\tcase .error(let error):\n\t\t\tErrorView(error: error)\n\t\t}\n\t}\n}\n",
"text": "Hi,My app on iOS is relying on private and public data backed by partition-based Realm Sync. I understood in that case that two partition keys are necessary, meaning I need to open two different realms at the app launch.\nUntil now, I have been working on private data operations and it is working quite well. I have been basically following the structure on this page: https://www.mongodb.com/docs/realm/sdk/swift/swiftui-tutorial/#complete-codeIn addition, there should be some level of interaction between the two realms. Users can create new entries in the private/public space (LibraryView) and in the other direction users can get public data copied to their private space.I don’t see exactly how I am supposed to open a second @AsyncOpen Realm and access it from both views. Ideally, I assume both realms should be accessible by LibraryView and BrowseView…Any recommendation on the way to achieve this?",
"username": "Sonisan"
},
{
"code": "struct OpenSyncedRealmView: View {\n @AutoOpen(appId: Constants.APP_ID, partitionValue: \"\", timeout: 4000) var publicData\n @AutoOpen(appId: Constants.APP_ID, partitionValue: \"\", timeout: 4000) var privateData\n}\nlet publicRealm = try await Realm(configuration: publicConfiguration)\nlet privateRealm = try await Realm(configuration: privateConfiguration)\n",
"text": "There are a couple of approaches you could consider.With Realm’s SwiftUI property wrappers, AutoOpen/autoOpen is essentially a variable with some associated state. There’s no reason you couldn’t open both:You’d need to change the view body as you wouldn’t just be reacting to the AutoOpenState/AsyncOpenState of one Realm. When both have opened, you could then pass them as environment objects to the views that need them.Another option is to ignore the SwiftUI property wrappers entirely and instead open them with the standard Swift SDK syntax:Then, once you have them, you can pass them to the views that need them. You may find this easier to work with, as I think the SwiftUI AsyncOpen/AutoOpen property wrappers are optimized for cases where a View needs one realm.",
"username": "Dachary_Carey"
},
{
"code": "@Environment(\\.realm)",
"text": "@Dachary_Carey\nThank you for your reply! I managed to make it work after some refactoring (without the property wrappers)\nIf I may, another follow-up question: is there an easier way to pass multiple Realms to child views than cascading by parameters? I was using @Environment(\\.realm) before, but this does not seem to be compatible with multiple realms, unfortunately.",
"username": "Sonisan"
},
{
"code": "",
"text": "Sonisan, if you liked the environment variable approach, and would like to continue using it but with more than one realm… the way to handle this is thru the swift language itself.A Swift Environment object is exposed by its type, which is what you are bumping up against when you have more than one realm object but both are of type Realm. A way to work with swift’s implementation of environment variables and expose multiple objects of the same type thru environment is to wrap each instance object into a wrapping class.Psuedo code:\nclass privateRealm {\nvar realm: Realm // probably have some propertywrapper on this\n}class publicRealm {\nvar realm: Realm // probably have some propertywrapper on this\n}then you can pass them to a view:\nYourView.environment(…, publicRealm).environment(…, privateRealm)In your views:\nstruct YourView: View {\n@EnvironmentObject var publicWrap: publicRealm\n@EnvironmentObject var privateWrap: privateRealm… now you can reference with privateWrap.realm or publicWrap.realm}Hope this helps",
"username": "Joseph_Bittman"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Help on opening multiple Synced Realms (SwiftUI) | 2022-08-23T06:55:52.533Z | Help on opening multiple Synced Realms (SwiftUI) | 2,482 |
null | [
"atlas-search"
]
| [
{
"code": "",
"text": "Hello i would like to know where does Mongodb stores $search indexes? Is it in the same place as the normal indexes, in the ram memory on the same machine or is it on a different? If its on the same what are the benefits of using Mongo Atlas search over some other search engines like elasticSearch for example?",
"username": "Bojan_Despotoski"
},
{
"code": "mongotmongodmongotmongot",
"text": "Hi @Bojan_Despotoski - Welcome to the community Hello i would like to know where does Mongodb stores $search indexes? Is it in the same place as the normal indexes, in the ram memory on the same machine or is it on a different?As per the Atlas Search - Tune Performance documentation :Atlas Search runs a new process, called mongot , alongside the mongod process on each host in your Atlas cluster. mongot maintains all Atlas Search indexes on collections in your Atlas databases. The amount of CPU, memory, and disk resources mongot consumes depends on several factors, including our index configuration and the complexity of your queries. Atlas Search alerts measure the amount of CPU and memory used by Atlas Search processes.If its on the same what are the benefits of using Mongo Atlas search over some other search engines like elasticSearch for example?You can check out the Elasticsearch vs MongoDB Atlas Search page which contains more information on this topic which may be useful.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Where does Mongo stores indexes from the Atlas Search? | 2022-08-19T09:46:20.488Z | Where does Mongo stores indexes from the Atlas Search? | 1,421 |
null | [
"aggregation"
]
| [
{
"code": "{\n $project: {\n _id: 1,\n replies: {\n $filter: {\n input: `$comments`,\n as: 'comment',\n cond: {\n $eq: [`$$comment.replyToId`, new ObjectId(data.commentId)],\n },\n },\n },\n },\n },\n",
"text": "Hello,I would like to know if I have to create index for nested array object field (replyToId) if I use it only together with $project and $filter;for example:",
"username": "Vytautas_Pranskunas"
},
{
"code": "",
"text": "Indexes are mostly useful to find documents fast when querying - projection is something done to already found documents and therefore an index cannot help here (other than in some cases where a covered index query can help you avoid fetching the document entirely but this isn’t one of those cases).TL;DR no, you don’t need to create an index for this field unless you are also querying by it.Asya",
"username": "Asya_Kamsky"
}
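To make the distinction concrete, a small hypothetical sketch; the collection name is an assumption, and an index on the nested field only pays off once that field appears in a query or $match stage:

```javascript
// Projection with $filter alone will not use this index...
db.posts.createIndex({ "comments.replyToId": 1 });

// ...but a query (or $match stage) on the same field can:
const someCommentId = ObjectId(); // placeholder for the comment being looked up
db.posts.find({ "comments.replyToId": someCommentId });
```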
]
| Index on $filter | 2022-08-25T14:55:24.237Z | Index on $filter | 1,699 |
null | [
"queries"
]
| [
{
"code": "",
"text": "Hello! I want to do something simple. I have a collection of chat messages and I want to return them in groups of 100. I want to be able to get the next 100 messages after a message with a given _id.One way to do this would be to query for that particular record, get the timestamp, then query for the next 100 records with an earlier timestamp. Or the client itself could just send the timestamp of the relevant message directly to the server, so only one query would need to be made.However, I am wonder if it is possible to do what I am trying to do in a single query, given the _id of the relevant message?",
"username": "Brian_Bleakley"
},
{
"code": "",
"text": "The answer depends - what’s the value of the _id field? Is it the default ObjectId() generated by the client (or server)?Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Hi Asya,I have the same question regarding to get records after any specific one.\nand I’m using the mongodb generated value in the _id “objectId”.",
"username": "Shahzaib_Imran"
},
{
"code": "_idObjectIdinserted on",
"text": "The default _id value is an ObjectId which encodes the timestamp as its first four bytes so it can be used as equivalent to inserted on date field. In other words, you can sort by it, and ensure a stable sort where highest (or lowest) value in one batch can be used to ensure the next batch started after that value.Asya",
"username": "Asya_Kamsky"
}
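A small sketch of the pagination pattern described above; the collection name is an assumption, and it presumes the messages use the default ObjectId _id:

```javascript
// _id of the last message the client already has (supplied by the client)
const lastSeenId = lastMessage._id;

// The next 100 older messages in a single query, newest first:
db.messages.find({ _id: { $lt: lastSeenId } })
  .sort({ _id: -1 })
  .limit(100);
```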
]
| Find all records after particular record | 2020-08-07T03:58:36.697Z | Find all records after particular record | 2,968 |
null | [
"monitoring",
"ops-manager"
]
| [
{
"code": "",
"text": "Hi there,I am evaluating the MongoDB OpsManager and have some trouble when activating the Datadog integration. I have updated the datadog.api.url value and added my datadog API key, but nothing seems to appear in datadog, and the integration seems to be for Atlas only (Atlas is off the table unfortunately due to contractual issues). The logs on the OpsManager pods seem to be happily trundling along with no errors either, but also no mention of sending metrics.Is there a way to confirm OpsManager is sending metrics? Are the names of metrics sent defined anywhere so I can ask Datadog support to look for them?Thank you!",
"username": "DB_CA"
},
{
"code": "",
"text": "Hi @DB_CA,Apologies for the delayed response. Unfortunately, I don’t have enough information to say exactly what the problem is. However, here are some documentation links that may be helpful!Also, please do open a MongoDB support case for this, as we should be logging DataDog publish attempts in the server logs. If that isn’t happening as you mentioned, our TSEs can help further investigate.Thanks,\nFrank",
"username": "Frank_Sun"
}
]
| Datadog Integration with the Enterprise MongoDB OpsManager, no metrics available in the datadog UI | 2022-08-09T13:20:51.119Z | Datadog Integration with the Enterprise MongoDB OpsManager, no metrics available in the datadog UI | 2,372 |
null | [
"change-streams"
]
| [
{
"code": "",
"text": "We are building a chat server with MongoDB using change streams. We run it in Kubernetes and want it to be able to scale horizontally. We have a couple of clients who wants to use it and it should be able to handle 50k+ users spread across a number of pods.Currently, each client starts a new change stream that listens in a specific collection for his/her id. When a message is sent to the user, the change stream will find out and forward the message to the user.\nI am not sure if this is a great solution and if MongoDB can handle 10k+ change streams per pod…\nIf not, do you guys have any recommendations on how to improve?Also, I should mention that our MongoDB database is self-hosted in kubernetes and inside the same cluster as our chat-server.Thanks.PS. if you guys don’t mind, then give our pinned repositories a star - would mean a lot! Next-Gen Open Source, Kubernetes-first & Modular Cloud Infrastructure - Nuntio",
"username": "Oscar_Orellana"
},
{
"code": "",
"text": "Sorry, not each client but each user connected to the chat-server starts a change-stream… all events is forwarded to the user using a stream in gRPC.",
"username": "Oscar_Orellana"
}
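For context, a rough sketch of the per-user filtered change stream described above, using the Node.js driver; the collection and field names are assumptions, and the gRPC forwarding is only indicated by a callback:

```javascript
// One change stream per connected user, filtered server-side to that user's messages
function watchMessagesFor(db, userId, forward) {
  const pipeline = [
    { $match: { operationType: "insert", "fullDocument.recipientId": userId } },
  ];
  const stream = db.collection("messages").watch(pipeline);
  stream.on("change", (event) => {
    forward(event.fullDocument); // e.g. push the message over the user's gRPC stream
  });
  return stream; // close() it when the user disconnects
}
```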
]
| Building a chat with MongoDB change streams | 2022-08-25T15:57:00.445Z | Building a chat with MongoDB change streams | 1,992 |
null | [
"replication",
"mongodb-shell"
]
| [
{
"code": "admin> rs.conf()\n{\n _id: 'rs0',\n version: 6,\n term: 1,\n members: [\n {\n _id: 0,\n host: 'mongod-rs-0.mongodb-service-rs:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 1,\n host: 'mongod-rs-1.mongodb-service-rs:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 2,\n host: 'mongod-rs-2.mongodb-service-rs:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n }\n ],\n protocolVersion: Long(\"1\"),\n writeConcernMajorityJournalDefault: true,\n settings: {\n chainingAllowed: true,\n heartbeatIntervalMillis: 2000,\n heartbeatTimeoutSecs: 10,\n electionTimeoutMillis: 10000,\n catchUpTimeoutMillis: -1,\n catchUpTakeoverDelayMillis: 30000,\n getLastErrorModes: {},\n getLastErrorDefaults: { w: 1, wtimeout: 0 },\n replicaSetId: ObjectId(\"6307630d55b01fb4471c7b7e\")\n }\n}\nrs.initiate()\n\nvar cfg = rs.conf()\n\ncfg.members[0].host=\"mongod-rs-0.mongodb-service-rs:27017\"\n\nrs.reconfig(cfg)\n\nrs.add(\"mongod-rs-1.mongodb-service-rs:27017\")\n\nrs.add(\"mongod-rs-2.mongodb-service-rs:27017\")\nkubectl scale sts mongod-rs --replicas 0\n\nkubectl scale sts mongod-rs --replicas 3\n",
"text": "I was able to stand up a working 3-node RS (Mongo v6) on Digital Ocean Kubernetes with persistent storage, and running my app against the DB without any problem.When I restart the mongod pod, it seems the RS is having trouble initializing itself correctly. See errors below.Primary pod error:{“t”:{\"$date\":“2022-08-25T14:45:06.604+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:4333208, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM host selection timeout”,“attr”:{“replicaSet”:“rs0”,“error”:“FailedToSatisfyReadPreference: Could not find host matching read preference { mode: “primary” } for set rs0”}}\n{“t”:{\"$date\":“2022-08-25T14:45:06.604+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:20714, “ctx”:“LogicalSessionCacheRefresh”,“msg”:“Failed to refresh session cache, will try again at the next refresh interval”,“attr”:{“error”:“FailedToSatisfyReadPreference: Could not find host matching read preference { mode: “primary” } for set rs0”}}mongosh session:admin> rs.status()\nMongoServerError: Our replica set config is invalid or we are not a member of itConfig is still thereNotes:on Primary:To simulate an outage I shutdown all 3 mongo instances gracefully and restarted them:",
"username": "V11"
},
{
"code": " publishNotReadyAddresses: true\n",
"text": "Answering my own question. Include the following attribute in the headless yaml:See also https://jira.mongodb.org/browse/SERVER-24778",
"username": "V11"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Replica set doesn't survive a pod restart | 2022-08-25T15:02:25.105Z | Replica set doesn’t survive a pod restart | 2,613 |
null | [
"aggregation",
"transactions"
]
| [
{
"code": "MintEvent = {\n address: \"0xnnnnn\",\n minter: \"0xnnnnn\",\n date: \"2022-08-20T16:51:48.554+00:00\"\n transactions: [\n {\n hash: \"0xnnn\",\n count: 10,\n function: \"mint\"\n },\n {\n hash: \"0xnnn\",\n count: 5,\n function: \"claim\"\n },\n {\n hash: \"0xnnn\",\n count: 7,\n function: \"claim\"\n }\n ],\n mintTotal: 25\n}\n",
"text": "I have a database of NFT mint events, where each event contains an array of transactions with a ‘count’ and a ‘function’ among other fields. For example, the data looks as follows (simplified):I want to graph the mint data over time using a stacked bar chart, so for every 15 minute interval I want the total number of mints for each ‘function’ name within that 15 minute period.I can’t figure out how to do this using an aggregate pipeline. I’ve seen examples of how to use $dateToParts with a group to count the number of mints within each time period, but I don’t want to return one value for each period. I want to return the total number of mints for each function name (there can be any number of functions, all with different names and many with the same name). To add a further level of complexity, the function names are held int a different collection and it is a function sig in the mints array. I know how to use a $project to resolve that.Where to do I need to be looking to answer this please?Thanks",
"username": "Stephen_Eddy"
},
{
"code": "transactions.functionunitbinSize",
"text": "Hi @Stephen_Eddy,Is the date field a string in your database?I believe you will need to unwind the transactions array and use a new stage called $densify (available in 6.0) to create empty gaps for empty 15 min ranges:If you believe you have at least one value every 15 min you might not need it,Then you will have to group each transactions.function value based on a $dateTrunc with unit of minutes and binSize of 15 :In the operator use a $sum of the “count” field.Ty\nPavelThanks",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks for your answer Pavel.However, I don’t think I can unwind the transactions array can I? There can be multiple entries in this array so how can they unwind to a single object? When I have tried, but the unwind operation only keeps the first transaction array entry and the rest are lost.There are situations where there will not be an entries in the time period, so it sounds like $densify will be required. Is it possible for you to give a code example of your explanation?",
"username": "Stephen_Eddy"
},
{
"code": "",
"text": "Despite the example showing the same ‘hash’ for subdocuments, in reality every subdocument will have a different hash value.I’ve also tried using a Group stage with addToSet and dateTrunc, but addToSet is adding duplicate ‘functionSig’ values.",
"username": "Stephen_Eddy"
},
{
"code": "",
"text": "Just to clarify the mention of functionSig. In the example in the first question I used the field name ‘function’ to simplify. In the real dataset it is actually called functionSig, but they are one and the same.",
"username": "Stephen_Eddy"
},
{
"code": "db.MintEvent.aggregate([{\n $unwind: '$transactions'\n },{ $densify : {\n field: \"date\",\n range: {\n step: 15,\n unit: \"minute\",\n bounds: [ISODate(\"2022-08-20T16:00:00.000Z\"), ISODate(\"2022-08-20T17:00:00.000Z\")]\n }}\n },{\n $group: {\n _id: {\n date: {\n $dateTrunc: {\n date: '$date',\n unit: 'minute',\n binSize: 15\n }\n },\n 'function': '$transactions.function'\n },\n functionCount: {\n $sum: {$cond : [{$eq : ['$transactions.function', null]},0,\"$transactions.count\"]}\n }\n }\n }]).toArray()\n",
"text": "Hi @Stephen_Eddy ,I based it on your initial document example and used MintEvent as the collection name:I unwind and then densify every 15min of the hour (16:00 -17:00)Then I sum based on the truncation of 15min and the function name that was unwinded…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Fantastic. Thank you for your help Pavel.What was confusing me was the unwind. I didn’t fully understand that it was duplicating the parent for each copy of the child. It makes perfect sense now.",
"username": "Stephen_Eddy"
}
]
| How to Aggregate by time interval with secondary dimension | 2022-08-23T23:15:27.076Z | How to Aggregate by time interval with secondary dimension | 4,079 |
null | [
"data-modeling"
]
| [
{
"code": "",
"text": "for a user, there is a key, say type, this is not going to change a lot, like maybe 10 times. But this type is needed in 3-4 places in the application. So is it better to keep type in a separate collection and whenever needed, do a lookup or should I keep it in the user collection itself?\nThe number of users can be up to 1000.",
"username": "S_V2"
},
{
"code": "",
"text": "Hey @S_V2,Welcome to the MongoDB Community Forums! In general, we use separate collections when:Since your data is not frequently changing, and you only need to frequently read it and assuming that you will need user data too along with it, embedding can be recommended as it provides better performance for read operations and requesting and retrieving data in a single database operation would also be possible easily.I am linking documentation too here to help further cement your understanding of these concepts. Please feel free to reach out for anything else as well.Data Model Design\nEmbedding One to Many Relationship\nModel One to Many Relationship with Document ReferenceRegards,\nSatyam Gupta",
"username": "Satyam"
}
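As a tiny illustration of the embedded approach (the field and collection names here are assumptions only):

```javascript
// The type travels with the user document, so a single read returns both
db.users.insertOne({ _id: 1, name: "Alice", type: "premium" });

// Any place in the application that needs the type reads the user once
db.users.findOne({ _id: 1 }, { type: 1 });
```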
]
| Where to store a user related key, in the user collection or in a separate collection? | 2022-08-25T08:54:15.133Z | Where to store a user related key, in the user collection or in a separate collection? | 1,310 |
null | []
| [
{
"code": "{_id: x, bool_field: true}\n{_id: x, bool_field: false}\nmongot",
"text": "Greetings,We encounter cases where running an Atlas search query after an update of documents don’t reflect the changes.\nFor example if I have document:And I update it to:I get a successful response from Mongo that document was updated.\nWhen I perform search query with bool_field equals to false this document is not included.After looking on documentation here, I assume the reason is that the Atlas index wasn’t updated - the question is whether we can get a successful response from Mongo only after the Atlas index was updated? (or any other method I can guarantee the index update)If you make changes to the collection for which you defined Atlas Search indexes, the latest data might not be available immediately for queries. However, mongot monitors the change streams, which allows it to update stored copies of data, and Atlas Search indexes are eventually consistent.Thanks,\nOfer",
"username": "Ofer_Chacham"
},
{
"code": "",
"text": "Hi @Ofer_Chacham,The team would like to help but we need a bit more information. Can you share your index definition and your query?",
"username": "Marcus"
},
{
"code": "",
"text": "Hi @Marcus, I don’t believe that a specific index and query are relevant here, I think the example is enough to understand the issue which is also documented in the documentation.\nI just wonder if there is something we can do to promise Atlas index will be updated before getting a successful response from Mongo.Thanks,\nOfer.",
"username": "Ofer_Chacham"
},
{
"code": "trueboolean",
"text": "Well, search engines based on Lucene are eventually consistent and do not respect read concerns. It’s a trade off made in favor of other features and opportunities. So, if your question is about a read concern or a read-after-write use case, then an index is not applicable. The best thing you can do is read from the primary, but there’s no guarantee.The other issue, and why we almost always request an index and even a query example is that if you have set dynamic: true, we do not index boolean by default today. It will change in the future, but that’s how it is today.I’m only asking for more information to ensure I get the most accurate answer to you in the fastest time. I hope the above two points help.",
"username": "Marcus"
},
{
"code": "",
"text": "It helped, thanks Marcus.\nThe boolean field was only for simple demonstration - the real scenario is read after write of a string field.\nThanks!",
"username": "Ofer_Chacham"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Delay of Atlas search index update when documents are updated | 2022-08-22T10:41:51.211Z | Delay of Atlas search index update when documents are updated | 1,999 |
null | [
"connecting",
"atlas-cluster"
]
| [
{
"code": "",
"text": "mongodb+srv://tandemcluster.jxejedr.mongodb.net/myFirstDatabase --apiVersion 1 --username tdemirelithis is my atlas connection string although i change myfirstdatabase section i cant loginis there any one to help me about it?",
"username": "Taner_Demireli"
},
{
"code": "",
"text": "Insufficient information\nWhat error are you getting?\nDBname will not impact your connection.You can give test or any name\nAre you using mongo or mongosh shell?\nIs this specific to a course or you are trying to connect from your local machine?",
"username": "Ramachandra_Tummala"
}
]
| Atlas database connection | 2022-08-25T07:58:44.352Z | Atlas database connection | 1,693 |
null | [
"crud"
]
| [
{
"code": "",
"text": "Hi Team, We want to provide read and write lock on multi collections(more than 1 collection to be locked until db operation is completed) based on one field . for example we are updating CollectionA.fieldA at the same time we want to lock the document for fieldA in CollectionB so that until the operation in CollectionA is completed, user should not be able to operate query/insert/update/delete operations on CollectionB for the same document(fieldA).I tried findOneAndUpdate() , but this works on a single document of one collection.I also tried ACID operations for external commit, but locking is not happening for more than 1 collection and it is ending in auto commit for collection where we are trying to update, we don’t have any explicit control to include other collection into this locking scope.Please let us know how to achieve this functionality.",
"username": "Aman_Dillon"
},
{
"code": "locking based on field",
"text": "Hello @Aman_Dillon ,Welcome to the MongoDB Community! We want to provide read and write lock on multi collections(more than 1 collection to be locked until db operation is completed) based on one field .I’d like to confirm my understanding of your goal. By locking based on field, do you mean that all the documents in the target collection having that field be locked, and documents without that field stay unlocked? That is, if field A exists in every document, the whole collection should be locked from any operation, and if field A exists in e.g. half of the documents, only those documents should be locked from any operation. Is my understanding here correct?If my understanding is correct, currently MongoDB cannot lock multiple documents based on whether a field exists or does not exist. You can, however, lock a single document from writes using findOneAndUpdate as mentioned in this blog post.Having said that, most database systems typically strive to allow maximum concurrency, which means: no read or write lock as much as possible, only lock as required, and typically allow reads as much as possible. Having a very strong read/write lock on possibly the whole collection/table runs counter to this goal, and I believe would not allow the system to scale. To achieve this goal, MongoDB uses intent locks, sessions, ACID multi-document transactions, read concerns, read preference and other methods to promote greater concurrency.If you need further help, could you please explain your use case scenario in more details?Regards,Tarun Gaur",
"username": "Tarun_Gaur"
},
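For reference, a rough sketch of the single-document "lock flag" pattern mentioned above, written for the Node.js driver; the collection, field and flag names are assumptions, and the exact return shape of findOneAndUpdate varies between driver versions:

```javascript
// Try to take the lock: the filter only matches if nobody else holds it
const res = await collA.findOneAndUpdate(
  { fieldA: value, locked: { $ne: true } },
  { $set: { locked: true, lockedAt: new Date() } }
);

if (res.value) {
  // We own the lock: perform the related work here...
  // ...then release it so other writers can proceed
  await collA.updateOne({ fieldA: value }, { $set: { locked: false } });
}
```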
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Locking multi collection in Mongodb | 2022-07-13T03:03:07.405Z | Locking multi collection in Mongodb | 1,719 |
null | [
"replication",
"sharding"
]
| [
{
"code": "",
"text": "In a sharded setup on MongoDB Atlas, We do not have network access to shard replicaset. This blocks the oplog tailing feature of MongoDB.Is there a way to bypass this limitation",
"username": "Manish_Rawat1"
},
{
"code": "",
"text": "For most use case this has been replaced by Change Streams",
"username": "chris"
},
{
"code": "",
"text": "Hi Chris,\nThanks for your response.However, in a big cluster with 100s of shards. Processing all the changes from a single stream will not be performant for us. Also, our application is based on an Oplog tailing-based Kafka connector. Converting to changestream is not a trivial change for us.",
"username": "Manish_Rawat1"
},
{
"code": "",
"text": "Hi @Manish_Rawat1Note that you can open a changestream for a collection, a database, or the whole deployment so you can fine-tune your needs. Additionally, the official Kafka connector can be configured to use changestream.Having said that, it sounds like you have a very large deployment in Atlas. In this case, you might want to contact Atlas support for suggestions, since they will have better visibility and insight into your exact situation.Best regards\nKevin",
"username": "kevinadi"
},
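To illustrate the scoping options mentioned above, a minimal Node.js driver sketch; the connection string, database and collection names are placeholders:

```javascript
const { MongoClient } = require("mongodb");

const client = new MongoClient(uri); // uri: your connection string
await client.connect();

// Change streams can be opened at three different scopes:
const deploymentStream = client.watch();                                   // whole deployment
const databaseStream = client.db("appdb").watch();                         // one database
const collectionStream = client.db("appdb").collection("orders").watch();  // one collection
```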
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Oplog tailing on MongoDB atlas sharded cluster | 2022-08-23T13:08:13.837Z | Oplog tailing on MongoDB atlas sharded cluster | 1,474 |
[
"node-js",
"mongoose-odm",
"atlas-cluster"
]
| [
{
"code": " await fetch(\"https://ac-***-shard-00-01.****.mongodb.net:27017/events\", {\n method: \"POST\",\n headers: {\n \"Accept\": \"application/json\",\n \"Content-type\": \"application/json\",\n },\n body: JSON.stringify(data),\n })\n console.log(data);\nconst mongoose = require(\"mongoose\");\n\nmongoose.connect(\n \"mongodb+srv://***:***@ffc.kv8qoen.mongodb.net/?retryWrites=true&w=majority\",\n { useNewUrlParser: true, useUnifiedTopology: true },\n (err) => {\n if (!err) console.log(\"Mongodb connected\");\n else console.log(\"Connection error :\" + err);\n }\n);\n",
"text": "Hello,\nI have a problem with the connexion with the server.My navigator (Chrome) return ERR_EMPTY_RESPONSE when I want to send a POST request. My URL seems to be good on the request and on the config.Thanks for your help",
"username": "Baptiste_LOY"
},
{
"code": "",
"text": "Hi @Baptiste_LOY - Welcome to the community.The error and the POST format does appear quite odd but perhaps there is something i’m missing here.However, can you advise the context for this question / error? Are you trying to perform some read / writes to your cluster via the API? If so, i’d recommend going over the Atlas Data API documentation.Regards,\nJason",
"username": "Jason_Tran"
},
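For readers hitting the same wall: a browser cannot POST directly to the cluster host and port shown in the question; HTTPS reads and writes go through the Atlas Data API (or an App Services endpoint) instead. The sketch below only illustrates the general shape of such a request; the URL, app ID, API key, data source, database and collection names are all placeholders, so check the current Data API documentation before relying on it:

```javascript
// Hypothetical Data API insert; replace every identifier below with your own values
const response = await fetch(
  "https://data.mongodb-api.com/app/<your-data-api-app-id>/endpoint/data/v1/action/insertOne",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": "<your-data-api-key>",
    },
    body: JSON.stringify({
      dataSource: "<your-cluster-name>",
      database: "<your-database>",
      collection: "events",
      document: data,
    }),
  }
);
```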
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| ERR_EMPTY_RESPONSE on MongoDB atlas | 2022-08-22T09:23:21.925Z | ERR_EMPTY_RESPONSE on MongoDB atlas | 1,380 |
|
null | [
"backup"
]
| [
{
"code": "",
"text": "Hi Team. When restoring the snapshot in the ec2 instance, not all the data is in some collections, there is a third of the information, how can we adjust so that when a restore is performed, all the information is available?",
"username": "Jimmy_Muchachasoy"
},
{
"code": "",
"text": "Hi @Jimmy_Muchachasoy - Welcome to the community!When restoring the snapshot in the ec2 instanceHow are you performing this restore? Are you following the Restore a cluster from a Cloud Backup procedure? I am a bit confused regarding the ec2 instance you have mentioned. Please clarify the context of the “ec2 instance” you have mentioned.not all the data is in some collections, there is a third of the information, how can we adjust so that when a restore is performed, all the information is available?This does sound a bit odd. However, along with the restore information requested above, can you advise how you are verifying that only a third of the information is available?Lastly, please advise the following details:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "hi @Jason_Tran nice to me you!.\nIn the ec2 instance we have a local environment deployed with docker-compose version mongo image 4.4.15 with this we lift the database and perform the restore with the shanpshot downloaded from mongo atlas and put the data pointing to a volume of the container.\nWe also validate with mongorestore downloading the bson files\nthanks",
"username": "Jimmy_Muchachasoy"
},
{
"code": "",
"text": "Hi @Jimmy_Muchachasoy Thanks for clarifying the environment details.with this we lift the database and perform the restore with the shanpshot downloaded from mongo atlas and put the data pointing to a volume of the container.\nWe also validate with mongorestore downloading the bson filesWhen doing the validation, is it at this point (before even restoring) you have calculated that the snapshot is only a third of the original size? How are you calculating the original size?Additionally, have you performed a direct restore to another test Atlas cluster to see if the full content of the restore is there? This does sound a bit odd only have a “third of the information”.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Backup Mongo Atlas in EC2 instance | 2022-08-19T16:25:50.893Z | Backup Mongo Atlas in EC2 instance | 2,061 |
null | [
"app-services-user-auth"
]
| [
{
"code": "",
"text": "Please, is possible lock one user in MongoDB 4.2 Community? (Not drop)Thanks,\nHenrique.",
"username": "Henrique_Souza"
},
{
"code": "",
"text": "There is no such facility in mongodb\nCheck thesehttps://jira.mongodb.org/browse/SERVER-12818",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thanks for help me @Ramachandra_Tummala , but i’ve seen it before and in my version of MongoDB (4.2) don’t have a db.lockUser method.I think do one backup of this specific user (JSON of admin collection db.system.users) and store this in one collection. After this i drop the user!But thanks for help!",
"username": "Henrique_Souza"
},
{
"code": "",
"text": "Hi @Henrique_Souza,There is currently no in-built feature to “lock” a user account, however you can effectively remove access by one or more of:As noted in the discussion referenced by @Ramachandra_Tummala, a more typical approach for Enterprise customers would be using third party auth systems (for example, LDAP) which provide additional account management features.db.lockUser method.To be clear, there is no such method as of MongoDB 6.0. SERVER-12818 is a feature request (currently in the issue backlog) to implement locking/unlocking user accounts.Regards,\nStennie",
"username": "Stennie_X"
},
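A small mongosh sketch of the "revoke roles / change password" options listed above; the username and password are placeholders, and the commands must be run by a user administrator:

```javascript
const admin = db.getSiblingDB("admin");

// Effectively disable the account by stripping all of its roles...
admin.updateUser("appUser", { roles: [] });

// ...and/or invalidate the credentials with a new random password
admin.changeUserPassword("appUser", "a-long-random-replacement-password");
```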
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How I lock one user in MongoDB | 2022-08-23T20:32:09.087Z | How I lock one user in MongoDB | 3,388 |
null | [
"atlas-device-sync",
"react-native",
"android",
"flexible-sync",
"react-js"
]
| [
{
"code": "realm.write(() => {\n realm.create('Entry', Entry.generate(newEntry, userId));\n });\n}\npackage.json \"@realm/react\": \"^0.3.1\",\n \"expo\": \"^44.0.6\",\n \"expo-dev-client\": \"~0.8.4\",\n \"expo-splash-screen\": \"~0.14.2\",\n \"expo-status-bar\": \"~1.2.0\",\n \"expo-updates\": \"~0.11.6\",\n \"react\": \"17.0.1\",\n \"react-dom\": \"17.0.2\",\n \"react-native\": \"0.64.3\",\n \"react-native-get-random-values\": \"~1.8.0\",\n \"react-native-web\": \"0.17.1\",\n \"realm\": \"^10.19.0\"\n \"@realm/react\": \"^0.3.2\",\n \"expo\": \"^46.0.2\",\n \"expo-dev-client\": \"~1.1.1\",\n \"expo-splash-screen\": \"~0.16.1\",\n \"expo-status-bar\": \"~1.4.0\",\n \"expo-updates\": \"~0.14.3\",\n \"react\": \"18.0.0\",\n \"react-dom\": \"18.0.0\",\n \"react-native\": \"0.69.3\",\n \"react-native-get-random-values\": \"~1.8.0\",\n \"react-native-web\": \"0.18.7\",\n \"realm\": \"^10.19.5\"\n",
"text": "I created an Expo React Native app using the “Quick start with Expo” documentation. I modified the schema and variable names to match my data structure and set up the backend in App Services, also by following the Realm documentation. I set up Flexible device sync as well as the user authentication stuff.Everything looks good (including log ins) until I try to add a record. The app running in an Android emulator crashes when the code gets to these lines: ('Entry is the name of my schema)I get no error messages in console nor in the Realm logs. The schema definition in the app matches that in the Realm backend.The following is my package.json dependencies:I tried upgrading the packages to the latest version with the same results:I’ve tried simplifying the schema, creating a new app in the Realm backend, and furious web searches with no luckThank you!",
"username": "Nick_Martin"
},
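One thing worth ruling out on the Flexible Sync side is writing an object that no active subscription covers, and letting any thrown exception surface instead of the app dying silently. This is a sketch only, using the names from the post plus an assumed subscription setup with the @realm/react hooks; it will not help if the crash is native, as later posts suggest:

```javascript
import { useEffect } from 'react';
import { useRealm } from '@realm/react';

// Inside a component rendered under the RealmProvider:
const realm = useRealm();

useEffect(() => {
  // Make sure a Flexible Sync subscription covers the Entry collection.
  realm.subscriptions.update(mutableSubs => {
    mutableSubs.add(realm.objects('Entry'));
  });
}, [realm]);

// Wrap the write so a thrown error is logged rather than crashing silently.
try {
  realm.write(() => {
    realm.create('Entry', Entry.generate(newEntry, userId));
  });
} catch (err) {
  console.error('Realm write failed:', err);
}
```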
{
"code": "",
"text": "Just to follow up, I created a new Realm Expo React Native app with the template and ran it without modifying. It crashes at the same point (writing a newly created record)",
"username": "Nick_Martin"
},
{
"code": "traceRealm.App.Sync.setLogLevel(app, 'trace')",
"text": "Hi @Nick_Martin,Can you please set the log level to trace (Realm.App.Sync.setLogLevel(app, 'trace'))? The messages you get leading to the crash may clarify what the SDK is doing/expecting, and give hints of what may be wrong there.",
"username": "Paolo_Manna"
},
{
"code": "AppSyncimport Realm from 'realm';\nimport { useApp, useUser } from '@realm/react';\n{other imports}\n\nexport function AppSync() {\n ...\n const app = useApp();\n \n Realm.App.Sync.setLogLevel(app, 'trace');\n...\n",
"text": "Thanks for the suggestion!I added that line to the AppSync file like so:I didn’t see any additional logging in the console or the Realm app logs. Did I implement this correctly? Where should I expect to see additional logging?",
"username": "Nick_Martin"
},
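For what it's worth, the trace output from the sync client generally goes to the device console (Metro / adb logcat), not to the App Services logs in Atlas, and the level needs to be set before the synced Realm is opened. A sketch of making that output visible, assuming the template's AppSync component wraps the RealmProvider; the logger callback is an assumption, not part of the original post:

```javascript
import Realm from 'realm';
import { useApp } from '@realm/react';

// Call these before the synced Realm is opened (i.e. before RealmProvider mounts).
const app = useApp();
Realm.App.Sync.setLogLevel(app, 'trace');
Realm.App.Sync.setLogger(app, (level, message) => {
  // Routed to the Metro/adb console, not to the server-side App Services logs.
  console.log(`[sync ${level}] ${message}`);
});
```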
{
"code": "",
"text": "Just as an update, I’ve recreated the app by following the React Native tutorial. It connects to my Realm backend without crashing.I would still like to solve the crashing issue with Expo since that’s how I’d prefer to develop the app. I’m open to suggestions",
"username": "Nick_Martin"
},
{
"code": "",
"text": "Thanks @Nick_Martin ,If we understood correctly, then, the “regular” React Native tutorial works as expected, but the Expo variant crashes on write operations, is that the case?",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "That’s correct. I was not able to write to a synced realm with an app built with the Expo template and the app crashed consistently on the write function. I was on an android emulatorThe same setup was used for an app built with the React Native template (same emulated device and same Realm backend) and it is working as expected with no crashes",
"username": "Nick_Martin"
},
{
"code": "Task.ts",
"text": "Hi @Nick_Martin ,I’ve just noticed that the data structure for the Expo sample code doesn’t match the one in the React Native sample, so that the sample app would work locally (and indeed I checked that it does), but would clash with the existing setup on the backend, that matches the RN sample instead.Have you tried to correct Task.ts so that the model matches the React Native sample?",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "To further elaborate on this, I’ve been able to fully run the Expo tutorial with Sync on iOS, providing that the App Services app it connects to",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "I’ve only tried Android (since I’m on Linux).What are the specs of the device you’re using? If it’s not an Android-related issue, I suspected it might have to do with not having enough ram or somethingto address the points you brought up:",
"username": "Nick_Martin"
},
{
"code": "expo init <ProjectName> --template @realm/expo-template-js{ enabled: false }",
"text": "A few other things that might help",
"username": "Nick_Martin"
},
{
"code": "expo run:android: Can't get WAA status for Mavatar! : java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.Class java.lang.Object.getClass()' on a null object reference : \tat com.google.apps.tiktok.dataservice.z.b(PG:1) : \tat com.google.apps.tiktok.dataservice.at.a(PG:1) : \tat com.google.apps.tiktok.tracing.ed.a(PG:2) : \tat com.google.common.util.concurrent.aj.a(PG:2) : \tat com.google.common.util.concurrent.ac.apply(PG:1) : \tat com.google.common.util.concurrent.h.e(PG:2) : \tat com.google.common.util.concurrent.j.run(PG:9) : \tat com.google.common.util.concurrent.bf.execute(PG:1) : \tat com.google.common.util.concurrent.d.i(PG:1) : \tat com.google.common.util.concurrent.d.l(PG:12) : \tat com.google.common.util.concurrent.d.n(PG:2) : \tat com.google.common.util.concurrent.i.f(PG:1) : \tat com.google.common.util.concurrent.j.run(PG:13) : \tat com.google.android.libraries.i.af.run(PG:1) : \tat com.google.android.libraries.i.an.run(PG:23) : \tat com.google.android.libraries.i.l.run(PG:2) : \tat com.google.android.libraries.i.q.run(PG:4) : \tat java.lang.Thread.run(Thread.java:923)eas buildexpo-dev-client",
"text": "Hi I would like to add my experience to this conversion as I’m in the same boat.Created a new project using the Expo template code only, ran the command: expo run:android with the Android Studio Emulator open, the development build opens no problem but as soon as I try to add a new entry, the app crashes without error.Logcat gives a ton of information but I’m able to get the following error (I can provide a more detailed log if this isn’t helpful):\n: Can't get WAA status for Mavatar! : java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.Class java.lang.Object.getClass()' on a null object reference : \tat com.google.apps.tiktok.dataservice.z.b(PG:1) : \tat com.google.apps.tiktok.dataservice.at.a(PG:1) : \tat com.google.apps.tiktok.tracing.ed.a(PG:2) : \tat com.google.common.util.concurrent.aj.a(PG:2) : \tat com.google.common.util.concurrent.ac.apply(PG:1) : \tat com.google.common.util.concurrent.h.e(PG:2) : \tat com.google.common.util.concurrent.j.run(PG:9) : \tat com.google.common.util.concurrent.bf.execute(PG:1) : \tat com.google.common.util.concurrent.d.i(PG:1) : \tat com.google.common.util.concurrent.d.l(PG:12) : \tat com.google.common.util.concurrent.d.n(PG:2) : \tat com.google.common.util.concurrent.i.f(PG:1) : \tat com.google.common.util.concurrent.j.run(PG:13) : \tat com.google.android.libraries.i.af.run(PG:1) : \tat com.google.android.libraries.i.an.run(PG:23) : \tat com.google.android.libraries.i.l.run(PG:2) : \tat com.google.android.libraries.i.q.run(PG:4) : \tat java.lang.Thread.run(Thread.java:923)\n\nScreenshot_20220816_1654201125×382 52 KB\nIt’s worth mentioning that running a standalone build - eas build - doesn’t have any problem running on the emulator or physical device, only the expo-dev-client seems to have trouble.",
"username": "Ben_Ujdur"
},
{
"code": "expo-dev-clienteastrace",
"text": "Thanks @Ben_Ujdur ,That’s indeed useful information: it’s interesting to notice that nowhere in the stack trace there’s a sign of the Realm SDK, it’s a TikTok (?) library that crashes instead.This seems to reinforce the hypothesis that’s not a Realm SDK issue in itself, but more an environment conflict, almost certainly limited to Android (it’s a Google library) and to the expo-dev-client (it works with eas). At this time, we don’t know if it’s just Linux, either, but I’d assume it may be, as we’d likely have many more occurrences reported otherwise.I’ve almost enough to put together an internal ticket to investigate: could you please specifyThank you both for your patience and collaboration to get to the bottom here!",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Thank you @Ben_Ujdur for chiming in! Nice to see I’m not the only one experiencing thisAs for the “TikTok” library, this stack exchange answer indicates that it’s just an internal Google app and is unrelated to the social media app of the same name.When I can get access to my dev environment, I’ll provide as much version information as I can get for the different tools and SDKs. My read of the error logs points to an issue with either the JDK version or the Android SDK. The error is a Java exception but it seems like it’s breaking on a missing dependency coming from the Android environment",
"username": "Nick_Martin"
},
{
"code": "",
"text": "I am also facing the same issue with android, but in iOS, the dev client does not have this crashing issue. Even for the android if I do a release build this issue is not there. Is there a workaround for this ?",
"username": "lakshan_karunathilake"
},
{
"code": "trace",
"text": "Hi @lakshan_karunathilakeIf you’ve been following this thread, you know that we’re trying to find the reason of the crash: as asked above, if you could please provideUntil we’re able to consistently reproduce the issue, and have the exact reason, we cannot suggest a workaround (or, better, provide a solution)",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Hi @Paolo_MannaSure I can provide the detailsI am using\nOS : windows 10 (19044.1766 build version)\nIDE: Webstorm ( I don’t believe this actually matters)\nAndroid API 30, Java 11.0.4, realm : 10.19.5, expo-dev-client: 1.2.1, react-native: 0.68.2I have captured a bug report from the emulator, I hope that helps. I checked for traces but I could not find any.Please find the bug report https://we.tl/t-W0E2AxCoFT\nThe bundle id of my testing app is (com.fitsmiles.app)",
"username": "lakshan_karunathilake"
},
{
"code": "",
"text": "Furthermore the following log is found when crashing the app2022-08-16 22:38:58.948 21481-21599/com.fitsmiles.app A/libc: Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 21599 (mqt_js), pid 21481 (m.fitsmiles.app)",
"username": "lakshan_karunathilake"
},
{
"code": "",
"text": "Thanks Paolo, I agree it does appear to be an Android SDK or local Java related issue, I don’t have much experience at all with native development so I was unsure if it was Android-library specific or RealmDB/Expo related.I will look at reinstalling my Android Studio and SDKs to see if that helps.\nHere’s my system infoThe default RealmDB template app uses Expo 44 with the locked RN 0.64.3 mind you though the problem persisted there.",
"username": "Ben_Ujdur"
},
{
"code": "",
"text": "My dev environment:Trace didn’t work for me when I tried earlier (as suggested up thread)",
"username": "Nick_Martin"
}
]
| Expo app crashes when attempting to write a new record with Realm SDK | 2022-08-02T02:59:08.774Z | Expo app crashes when attempting to write a new record with Realm SDK | 10,287 |
null | [
"mongodb-shell",
"database-tools",
"backup"
]
| [
{
"code": "",
"text": "If I attempt to log on to my instance with mongosh, e.g.\nmongosh --username manager --password $MONGO_SERVER_PWD mongodb://XXXX:27017\nEverything works fine.\nBut when I attempt to access with mongodump with\nmongodump --username manager --password $MONGO_SERVER_PWD mongodb:/XXXX:27017 --db boatnet_sandbox --archive=boatnet_sandbox.20220824.archiveI get ailed: can’t create session: could not connect to server: connection() error occurred during connection handshake: auth error: sasl conversation error: unable to authenticate using mechanism “SCRAM-SHA-1”: (AuthenticationFailed) Authentication failed.Why aren’t these behaving the same.",
"username": "Stephen_Montsaroff_NOAA_Affiliate"
},
{
"code": "",
"text": "It looks like with mongodump your are missing one forward slash. Maybe it is simply a redaction typo. That is the major problem when things that do not work are redacted for publishing. It is a real error or a redaction error.Authentication failed error is one of 3 things\n1 - wrong username\n2 - wrong password\n3 - wrong authentication database/mechanism",
"username": "steevej"
},
{
"code": "mongodump --username smontsaroff --password $MONGO_SERVER_PWD mongodb://nwcdbd26.nwfsc2.noaa.gov:27017 --db boatnet_sandbox --archive=boatnet_sandbox.20220824.archiv\n2022-08-24T17:58:22.041-0700 Failed: can't create session: could not connect to server: connection() error occurred during connection handshake: auth error: sasl conversation error: unable to authenticate using mechanism \"SCRAM-SHA-1\": (AuthenticationFailed) Authentication failed.\n\n",
"text": "Yes the missing slash was a cut and paste error. At the bottom is the full correct command.I am using the same password and same username for mongosh. (Just to belay a possible distracting question, the tests were repeatedly scripted using the actual strings, so we could verify that they were the same values used.)It would follow from the suggestion, that mongodump uses a different authentication mechanism.Is this likely to be the case and if it is, how is this configured.",
"username": "Stephen_Montsaroff_NOAA_Affiliate"
},
{
"code": "",
"text": "Try to specify the authentication database with",
"username": "steevej"
}
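A likely explanation for the difference: mongosh authenticates against the admin database by default, while mongodump, when --authenticationDatabase (or authSource in the URI) is not given, may fall back to the database named in --db, where the user is not defined. A quick mongosh check, assuming the user was created in admin:

```javascript
// Run in mongosh against the same server to confirm where the user is defined.
db.getSiblingDB("admin").getUser("manager")            // expected: the user document
db.getSiblingDB("boatnet_sandbox").getUser("manager")  // expected: null (user not defined here)
```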
]
| Mongosh can connect to my Enterprise Instance Mongodump cannot | 2022-08-25T00:09:38.227Z | Mongosh can connect to my Enterprise Instance Mongodump cannot | 6,951 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "'mapname' : 'myName'[{\n \"title\": \"e\",\n \"metadata\": {\n \"editedBy\": \"[email protected]\",\n \"timeEdited\": {\n \"$date\": \"2022-08-23T00:46:01.783Z\"\n },\n \"createdBy\": \"[email protected]\",\n \"timeCreated\": {\n \"$date\": \"2022-08-23T00:46:01.660Z\"\n }\n },\n \"mapname\": [\n {\n \"name\": \"myName\"\n }\n ],\n \"user_scenarios\": []\n }]\n",
"text": "Hey ! Can anybody help me with getting 'mapname' : 'myName' on $project stage in pipeline ? I get those docs after ‘pipeline’ in $lookup stage:Thanks in advance!",
"username": "Andrei"
},
{
"code": "{ \"$set\" : { \"mapname\" : \"$mapname.0.name\" } }\n",
"text": "Untested possible solution:The possible solution is a stage in the aggregation pipeline. However, I feel that is it better to do this type of data cosmetic in application layer because it scale better. When done on the server, everybody is impacted by the extra work done on the server just to present the data in a specific format. When done in the application, only the instance of the application doing the query is impacted.",
"username": "steevej"
},
{
"code": "{ $set : { \"mapname\" : {$arrayElemAt: ['$mapname.name',0] } }}",
"text": "Thank @steevej , here my solution based on your suggestion: { $set : { \"mapname\" : {$arrayElemAt: ['$mapname.name',0] } }} that gave me the string value",
"username": "Andrei"
},
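For context, a sketch of how that stage can sit right after the $lookup; the collection and join field names here are assumptions, only mapname/name come from the post. On MongoDB 4.4+ the $first expression is an equivalent shorthand for $arrayElemAt with index 0:

```javascript
// Sketch: flatten the single-element lookup array into a plain string field.
db.scenarios.aggregate([
  { $lookup: { from: "maps", localField: "mapId", foreignField: "_id", as: "mapname" } },
  { $set: { mapname: { $arrayElemAt: ["$mapname.name", 0] } } },  // or { $first: "$mapname.name" }
  { $project: { title: 1, mapname: 1 } }
]);
```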
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Merge object in array to array property | 2022-08-24T22:24:01.424Z | Merge object in array to array property | 1,199 |