Columns: image_url (string, 113–131 chars), tags (sequence), discussion (list), title (string, 8–254 chars), created_at (string, 24 chars), fancy_title (string, 8–396 chars), views (int64, 73–422k)
null
[ "mongodb-shell" ]
[ { "code": "mongosh --shell consoletable.js\nCurrent sessionID: 77dbd754f7b36b026be1b82f\nConnecting to: mongodb://consoletable.js:27017/test\nmongosh consoletable.js --shell\nCurrent sessionID: 77dc0556b638ce307878c5bc\nConnecting to: mongodb://consoletable.js:27017/test\nmongosh consoletable.js\nCurrent sessionID: 77df76ef26ae494504e8c4ec\nConnecting to: mongodb://consoletable.js:27017/test\n", "text": "These do not work:", "username": "Gianfranco_Palumbo" }, { "code": "mongosh> .load ./consoletable.js\n", "text": "That is currently not supported.What you can try is to start mongosh and then from inside the shell do", "username": "Massimiliano_Marcon" }, { "code": "", "text": "The script needs to run as part of an automated build process. Any plans to implement this in the future?", "username": "Jideobi_Onuora" }, { "code": "mongo", "text": "Yes, definitely. One of our goals is feature-parity with the existing mongo shell.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to run the new `mongosh` with a JS file?
2020-07-22T20:22:45.750Z
How to run the new `mongosh` with a JS file?
4,691
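For the mongosh thread above: more recent mongosh releases do accept a script file, and `.load` works from inside an interactive session. A minimal sketch; the connection string, database, and file name are placeholders, not values from the thread:

```javascript
// consoletable.js — hypothetical script contents; the global `db` is available in scripts
const docs = db.getSiblingDB("test").getCollection("movies").find().limit(5).toArray();
docs.forEach(d => print(JSON.stringify(d)));

// Ways to run it with newer mongosh versions:
//   mongosh "mongodb://localhost:27017/test" consoletable.js   // script file as a positional argument
//   mongosh --file consoletable.js                             // or via the --file flag
//   .load consoletable.js                                      // from inside an interactive shell
```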
null
[ "indexes" ]
[ { "code": " db.b.insertMany([\n {c:\"a\", },\n {c:\"a\", },\n {c:\"a\", n:null},\n {c:\"a\", n:1},\n {c:\"a\", n:10},\n {c:\"a\", n:5},\n {c:\"a\", n:1},\n {c:\"a\", n:12},\n {c:\"b\", n:12},\n]);\ndb.b.createIndex({\"n\": 1, \"c\": 1});\ndb.b.find({c: \"a\", n: {$in:[10, 12]}}, {_id: 0, n:1,c:1}).explain(\"allPlansExecution\") \ndb.b.find({c: \"a\", n: {$in:[null, 12]}}, {_id: 0, n:1,c:1}).explain(\"allPlansExecution\") \n", "text": "Why does MongoDB (version 4.2.8) handle queries with and without null differently?Sample:Then if I callthe query is fully covered. But, the queryisn’t covered, and MongoDB fetches all found documents.It looks like a bug in MongoDB.", "username": "_Voronenkov" }, { "code": "", "text": "Hi @_Voronenkov,Welcome to MongoDB community!This is not a bug, covered queries are not performed for null predict values:So it is working as designed.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't create covered query with nulls
2020-11-26T19:02:10.599Z
Can’t create covered query with nulls
1,986
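A quick way for readers of the covered-query thread above to confirm coverage from mongosh; collection, index, and field names follow the sample data in the thread:

```javascript
// Check whether a query was covered (assumes db.b and the {n:1, c:1} index from the thread exist)
const plan = db.b.find(
  { c: "a", n: { $in: [10, 12] } },
  { _id: 0, n: 1, c: 1 }
).explain("executionStats");

// A covered query reads index keys only — no documents are fetched.
print("keys examined:", plan.executionStats.totalKeysExamined);
print("docs examined:", plan.executionStats.totalDocsExamined); // 0 when covered

// As noted in the answer, querying for null forces document fetches by design,
// so the same check on {$in: [null, 12]} will show docsExamined > 0.
```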
null
[ "data-modeling", "schema-validation" ]
[ { "code": "", "text": "I am new to MongoDB and have a couple questions regarding naming keys.I am building a Content Management System that will contain items with a variety of attributes.End-users will have the ability to select related attributes for any given item that enters the CMS. Does anyone have recommendations for maintaining an organized selection of attributes that can be selected for a particular item?I was imagining a JSON file that would be referenced, which would contain all available attributes that can be selected. The selected attributes would then be added to item’s document.Again, I apologize if this is a basic question. I just want to make sure I am implementing good practices.", "username": "cc9one" }, { "code": "", "text": "Hi @cc9one,Welcome to MongoDB community!I believe the best pattern for you is the attribute pattern:Building with Patterns: The Attribute Pattern | MongoDB BlogThe benift of this pattern is the ability to add attributes flexibility while maintaining minimum index structure with ability to effectively search on any attribute.I recommend reading through all of our known design patternsA summary of all the patterns we've looked at in this seriesThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny this is very helpful, thank you. I did have a “specifications” array in mind.I do want to expand on this:ability to add attributes flexibilityI don’t necessarily want the users to have the flexibility to create their own attributes, but rather select from a list of attributes that I have already defined. Additionally, I want to have the ability to manage (i.e. add, remove, edit) attributes from this list.This will allow me to maintain standardization of attributes as my application grows.", "username": "cc9one" }, { "code": "", "text": "Hi @cc9one,Updating attributes in an array list is possible via indexing inner objects and using an array filter update statement:This can be used with multi : true or with bulk.find().update() if you want to update multiple user documents. Adding and removing can be done by $push,$addToSet,$pull or rewritten and stored array.Having said that, if the list of attributes is almost fixed and users share those when those can be changed for many users at once, it might make more sense to keep attribute ids as an array in users referencing attributes collection.I will recommend doing 2 queries where one fetch user data and attributes array and use it as a $in query or aggregation to attributesBest\nPavel", "username": "Pavel_Duchovny" } ]
Predefined keys/ attributes
2020-11-25T03:05:57.857Z
Predefined keys/ attributes
3,159
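To make the attribute-pattern suggestion above concrete, a minimal sketch with illustrative (not prescribed) field names; the predefined list of allowed attributes can live in its own small collection that the CMS validates against before inserting:

```javascript
// One item document using the attribute pattern
db.items.insertOne({
  name: "Sample item",
  specs: [
    { k: "color", v: "red" },
    { k: "weight_kg", v: 1.2 },
    { k: "material", v: "steel" }
  ]
});

// A single compound index supports searches on any attribute key/value pair
db.items.createIndex({ "specs.k": 1, "specs.v": 1 });

// Find items with a given attribute
db.items.find({ specs: { $elemMatch: { k: "color", v: "red" } } });
```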
null
[ "devops" ]
[ { "code": "", "text": "I want to prevent a single logfile from consuming too much disk space.\nIs it good idea to automate MongoDB logRotate using the logrotate utility.", "username": "ryu" }, { "code": "", "text": "Yes it is:Here is an example logrotate, change the rotate to taste.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB logRotate
2020-11-27T02:41:59.090Z
MongoDB logRotate
2,535
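Alongside the logrotate example linked in the answer above, the rotation itself can be triggered from the server side; a short mongosh sketch:

```javascript
// Rotate the server log on demand. An external logrotate job typically either
// runs this command or sends SIGUSR1 to the mongod process, then compresses
// or removes the renamed file.
db.adminCommand({ logRotate: 1 });

// With systemLog.logRotate: "reopen" in mongod.conf, mongod reopens the same file
// instead of renaming it, which pairs well with the logrotate utility's own rotation.
```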
https://www.mongodb.com/…d56931acb706.png
[]
[ { "code": "", "text": "The github.com/mongodb-developer/community-slack repository is very easy to hit (with a search engine…) and it leads to a poor user experience.There are two issues complaining about the repo:And someone tried to use the repo to get support:It’d be nice if the repo was updated to point somewhere else.\nProbably:But possibly:MongoDB Community from the above repo has a very inviting right side:\n\nWhich when one clicks the Join Now button yields:\n\nAttention! Read below: token_revoked...834×552 66.8 KB\n\nWhich is not helpful to end users.– It’d be great if at least the right side were removed from that page.", "username": "Josh_Soref" }, { "code": "", "text": "Hi @Josh_Soref,Thanks for bringing this to our attention! I’ll take care of updating that.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Josh_Soref,The GitHub repo has been updated to reflect current community resources and starting points.The community Slack was retired earlier this year as part of a longer term plan to grow and support community with a more cohesive experience per The MongoDB Community: Our First Steps Together Into A New Future. We now have a larger (and more active) group of members on the community forums .Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Also, pages on mongodb.com have links to Slack, e.g.:", "username": "Josh_Soref" }, { "code": "launchpass.com/mongo-db", "text": "thanks,\ncan you do anything about the right side of launchpass.com/mongo-db ?That URL isn’t really able to die, as there are plenty of things that link to it, e.g.:", "username": "Josh_Soref" }, { "code": "", "text": "Hi @Josh_Soref,It’s a bit challenging to remove all of the historical links that referred directly to LaunchPass, but the canonical url of MongoDB Developer Community Forums - A place to discover, learn, and grow with MongoDB technologies was redirected.The editable body of the LaunchPass page was also updated within the limitations of the template to refer to the new community forums:\nwe're moving1286×1046 119 KB\nThe login message on the right side of the page is a generic template provided by LaunchPass that can’t be hidden.We could completely remove the LaunchPass page, but that dead ends anyone following old links rather than giving them opportunity to discover the new home. It has been more than 6 months since the migration, but I’ll start a discussion with the community team on whether it is time to fully remove the old landing page.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Stennie_X: what about this? I’d hope this is an area mongodb controls…", "username": "Josh_Soref" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Archive/deprecate github.com/mongodb-developer/community-slack
2020-11-26T00:08:09.053Z
Archive/deprecate github.com/mongodb-developer/community-slack
4,100
null
[ "node-js", "transactions" ]
[ { "code": "UnhandledPromiseRejectionWarning: MongoError: Retryable write with txnNumber 3552 is prohibited on session 8a19417e-8bbd-46c5-9aab-04a2cb60f406 - /C5A+ZfCloKsJhH+D\naIABeIyRV1uALaRfamlKu9yI5I= because a newer retryable write with txnNumber 3553 has already started on this session.\n\nat MessageStream.messageHandler (/home/ubuntu/project/node_modules/mongodb/lib/cmap/connection.js:263:20) \nat MessageStream.emit (events.js:315:20)\nat processIncomingData (/home/ubuntu/project/node_modules/mongodb/lib/cmap/message_stream.js:144:12)\nat MessageStream._write(/home/project/node_modules/mongodb/lib/cmap/message_stream.js:42:5)\nat doWrite (_stream_writable.js:403:12)\nat writeOrBuffer (_stream_writable.js:387:5)\nat MessageStream.Writable.write (_stream_writable.js:318:11)\n", "text": "Using Mongodb transactions with withTransaction Helper in nodejs (nestjs framework), These errors are popping upCan anyone help me with what it means, or point me to any resources explaining this, there is very limited documentation for this particular error.", "username": "Mel_George" }, { "code": "", "text": "Sounds like the code doesn’t wait for X to finish before starting Y and then X can’t finish.\nAsynchronicity is a harsh discipline ", "username": "Jack_Woehr" } ]
Mongodb Transactions Unhandled Promise Rejection Error - MongoError: Retryable write with is prohibited on session
2020-11-25T11:13:32.694Z
Mongodb Transactions Unhandled Promise Rejection Error - MongoError: Retryable write with is prohibited on session
3,385
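The error in the thread above typically appears when several operations share one session concurrently, for example when promises inside withTransaction are not awaited. A hedged Node.js sketch of the usual shape; database, collection, and document IDs are placeholders:

```javascript
const { MongoClient } = require("mongodb");

async function transferExample(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const db = client.db("test");
      // Every operation inside the callback must be awaited and must pass the session;
      // otherwise a new retryable write can start on the session before the previous
      // one finishes — which is what the error message describes.
      await db.collection("accounts").updateOne(
        { _id: "a" }, { $inc: { balance: -10 } }, { session });
      await db.collection("accounts").updateOne(
        { _id: "b" }, { $inc: { balance: 10 } }, { session });
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}
```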
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net" ]
[ { "code": "", "text": "In this topic:There is following note:\nNOTEPrimary keys can be of type char, short, int, long, string, or MongoDB.Bson.ObjectId. Once you assign a property as a primary key, you cannot change it after an object of that type is added to a realm.But, I get the following when I set my primary ket to a MongoDB.Bson.ObjectId:Severity\tCode\tDescription\tProject\tFile\tLine\tSuppression State\nError\t\tFody/Realm: Item.ID is marked as [PrimaryKey] which is only allowed on integral and string types, not on MongoDB.Bson.ObjectId.\tStretchedPenny\tD:\\projects\\MongoDBSP\\StretchedPenny\\StretchedPenny\\StretchedPenny\\Models\\Items.cs\t9In Realm, what is the recommended way to create a unique key?", "username": "Don_Glover" }, { "code": "", "text": "Are you using version 10.0.0-beta.2 of the SDK?", "username": "nirinchev" }, { "code": "", "text": "No, I was using 5.1.2 latest from NuGet. I have installed 10.0.0-beta-2 and that does indeed resolve the issue.Might I suggest at minimum you add a header to the page that says:Some features described in this topic require installation of a beta version of the SDK.It would be a better user experience, but harder to maintain, to call out specific locations in the document where this is true.You would also want to update any other pages where this could be an issue.Thanks for your help.", "username": "Don_Glover" }, { "code": "", "text": "This is briefly covered under the Install Realm section, but I’ll pass the feedback to the docs team.", "username": "nirinchev" }, { "code": "", "text": "Ahhh… yes. I see now. It really needs to be more explicit. I did not take away from that I needed to install the beta package. I suggest:In the search bar, search for Realm . Check the “Show pre-release packages” checkbox and choose the latest beta release .Select the result and click Add Package. When prompted, select all projects and click Ok.", "username": "Don_Glover" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Primary key and MongoDB.Bson.ObjectId
2020-11-26T19:00:37.015Z
Primary key and MongoDB.Bson.ObjectId
5,190
null
[ "dot-net", "xamarin" ]
[ { "code": "System.Runtime.InteropServices.MarshalDirectiveException: Incompatible MarshalAs detected in parameter named 'value'. Please refer to MCG's warning message for more information.\n at __Interop.realm_wrappers.get_primitive__0(ObjectHandle, IntPtr, PrimitiveValue&, NativeException&) + 0x63\n at Realms.ObjectHandle.NativeMethods.get_primitive(ObjectHandle, IntPtr, PrimitiveValue&, NativeException&) + 0x8\n at Realms.ObjectHandle.GetPrimitive(IntPtr, PropertyType) + 0x3b\n at Realms.RealmObjectBase.GetRealmIntegerValue[T](String) + 0x33\n at TrackTime.Helpers.DateTimeHelper.SetCulture(PersonRealmObject) + 0x293\n at TrackTime.App.OnStart() + 0x481\n", "text": "When using ObjectID in xamarin.forms with UWP and native compiling an error occures. I get the exception:It seams that using ObjectID is not suported on .NET xamarin.forms for UWP!", "username": "Bruno_Zimmermann" }, { "code": "", "text": "Which version of the .NET SDK are you using?", "username": "nirinchev" }, { "code": "", "text": "I’m using the .NET 10.0.0 beta-2", "username": "Bruno_Zimmermann" }, { "code": "", "text": "Based on the stacktrace, this doesn’t appear to be related to ObjectId. Instead, it’s likely caused by the same issue that causes Failed to load IList<int> from Realm database on a UWP project · Issue #1780 · realm/realm-dotnet · GitHub.", "username": "nirinchev" }, { "code": "", "text": "Unfortunately it seems to be. I was spending around 40 hrs tracking the problems in a close to release app with realm and Xamarin.forms. One of the problems was with the IList. It can be very easy solved by adding a wrapper RealmObject around the string within the list - works perfect. But I used also the new ObjectID which was not working. I did extensive tests with a test program and different scenarios. As soon as I changed the ObjectID to string the test program was working like a charm. If you like I can send the small visual studio 2019 test-project containing the different situations.With the app facing the problem I had to go back to the realm.database 5.1.2 and reverse the change from ObjectID to string and GUID where as the problem doesn’t exist. The app is now running with native compilation perfect without any exception.", "username": "Bruno_Zimmermann" }, { "code": "", "text": "I’d love to take a look at the ObjectId issues. If you have a repro please open an issue in the .NET repo: Issues · realm/realm-dotnet · GitHub.", "username": "nirinchev" } ]
Use ObjectID in Realm database on a UWP project (xamarin.forms)
2020-11-25T20:28:06.183Z
Use ObjectID in Realm database on a UWP project (xamarin.forms)
3,393
https://www.mongodb.com/…_2_1024x383.jpeg
[ "data-modeling", "etl" ]
[ { "code": "", "text": "I watched the following presentation and had few questions from it:A summary of all the patterns we've looked at in this series", "username": "Anoop_Sidhu" }, { "code": "", "text": "Hi @Anoop_Sidhu,Welcome to MongoDB community.There are many ways to achieve data streaming from verious sources to MongoDB , whether its Oracle or other data sources.The method depands on the purpose and length of migration.However moving to MongoDB from a legacy database has more to it then getting and loading data, I recommend to read Relational Database To MongoDB | MongoDBBest regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for replying. We are strangling the monolith into microservices, so the idea is while we are selectively moving functionality out of legacy i.e monolith we can use cdc(change data capture) from oracle into mongo db. Then eventually we could sunset the oracle db and legacy later on. I have follow up questions:In the advance design pattern webinar, just talked about user collection and related policy, claims and other things like messages and documents. As we are streaming data from oracle we would have to transform the data from relational to user collections. Is that right assumptions.When we use extended reference from one collections to the other how do we typically do that when a legacy system is involved. Do we use a batch job to first populate the collections and then go to the other side and populate the extended reference collections. It looks lot of work to me for data migrations", "username": "Anoop_Sidhu" }, { "code": "", "text": "Hi @Anoop_Sidhu,(1) You are correct moving the data as 1-1 from oracle to MongoDB will not allow you to benifit MongoDB model, where data that accessed together can be flexibly stored together. As part of your cdc processing transform the data.(2) You can potentially store the data in staging collections and transform them post migration to their target collections. But I would recommend transforming data as it is being migrated and prepared for MongoDB. Potentially join all data of a target MongoDB document on oracle side so it would be easier to store it directly in MongoDB format (extended reference).Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks again for giving wonderful pointers. So if we have collections that have extended references and we have parent collections that are being referenced they would be populated from oracle as the data migration is going on. Like you mentioned we joined the data and then move it together with master tables in mongo db. Do you recommend any etl tools that are good at moving data from rdbms(oracle) into mongo db.I am guessing there will be one time data load and then there could be scenarios where cdc is only moving changes across. These are two separate scenarios", "username": "Anoop_Sidhu" } ]
Advanced Schema Design Patterns
2020-11-22T22:10:48.925Z
Advanced Schema Design Patterns
3,031
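To illustrate the "transform while migrating, join on the Oracle side, store an extended reference" suggestion above, a hypothetical CDC handler; all table, field, and collection names are invented for the sketch:

```javascript
// An Oracle row already joined with the customer's most-used fields arrives as `row`;
// it is stored in MongoDB with an extended reference instead of a bare foreign key.
async function handlePolicyRow(db, row) {
  await db.collection("policies").updateOne(
    { _id: row.POLICY_ID },
    {
      $set: {
        policyNumber: row.POLICY_NO,
        status: row.STATUS,
        // Extended reference: copy the few customer fields read together with the policy
        customer: {
          customerId: row.CUSTOMER_ID,
          name: row.CUSTOMER_NAME,
          tier: row.CUSTOMER_TIER
        }
      }
    },
    { upsert: true }
  );
}
```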
null
[ "security" ]
[ { "code": "", "text": "Hello,I have a hosting and MongoDB database hosted on an approved French hosting.I would like to encrypt the data at the storage level. So I went on this documentation: https://docs.mongodb.com/manual/core/security-encryption-at-rest/For this, I need to manage a key with the KMIP protocol.\nI spent hours looking for a company (even in MongoDB partners) that would allow me to manage my keys and that would be compatible with the KMIP protocol (because required for MongoDB).I don’t want to move my database to another host. I would like to pay for a service in an external company that would just allow me to manage my keys and to which I can connect my database.Do you know a company that does this?\nThanks to you", "username": "Mathie" }, { "code": "", "text": "Hello @Mathie, welcome to the community.I searched and found this one: KMIP Secrets Engine (HashiCorp - Vault). Hope you find it is useful.", "username": "Prasad_Saya" }, { "code": "", "text": "Hello @Prasad_SayaThank you very much for your answer!HashiCorp’s Vault tool is a tool that I need to install on a machine of my own. It is not outsourced.\nI’m looking for an outsourced KMS compatible KMIP", "username": "Mathie" }, { "code": "", "text": "@Mathie, I think you are looking for a Key Management as a Service (KMaaS) provider.", "username": "Prasad_Saya" }, { "code": "", "text": "@Prasad_Saya Yes I’m looking for that", "username": "Mathie" }, { "code": "", "text": "I don’t find solution ", "username": "Mathie" } ]
Recommendations for a hosted KMIP service?
2020-09-02T19:06:03.364Z
Recommendations for a hosted KMIP service?
1,997
null
[ "dot-net", "xamarin" ]
[ { "code": "", "text": "Following the .NET SDK quick start.When I add\nvar app = App.Create();\nI am told Create is not part of App.Yes, I have added\nusing Realms;What am I missing?", "username": "Don_Glover" }, { "code": "AppRealms.Sync.App.Create(...)", "text": "Likely the compiler is resolving App from Xamarin.Forms. You can try fully qualifying it: Realms.Sync.App.Create(...).", "username": "nirinchev" }, { "code": "", "text": "Thanks. An initial test indicates that will solve the issue. When I get back to the point of trying that bit of implementation I will investigate more deeply.", "username": "Don_Glover" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB realm, Xamarin Forms, and App.Create
2020-11-21T01:42:35.902Z
MongoDB realm, Xamarin Forms, and App.Create
2,772
null
[]
[ { "code": "", "text": "I followed the steps in the video carefully but got an error message when trying to connect to the compass app\nHow can I resolve this issue?", "username": "Godswil_Ugbosanmi" }, { "code": "", "text": "Your hostname srarts with mflix but this course is about mongodb basics\nWhich course video are you referring to?The screenshots show old style of connecting to Compass\nUse SRV connect string method", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Solved!! Added an IP address to the IP access list using 0.0.0.0/0. Thank you for reaching out to help", "username": "Godswil_Ugbosanmi" }, { "code": "", "text": "I followed the steps in the video, but when accessing, I get an errorimage1454×809 74.9 KBimage1299×766 25.1 KB", "username": "Alex_Armando_Ticona_Mamani" }, { "code": "", "text": "Did you replace “dbname” in your connect string with appropriate dbname?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "You I entered the command in the file editing area. You have to type the command in the terminal area. If you are unsure which area is area, I recommend that you revise the lesson where the IDE is presented.", "username": "steevej" }, { "code": "", "text": "Could you tell me what I did wrong?image479×674 22.6 KBimage1557×919 40.1 KB", "username": "Alex_Armando_Ticona_Mamani" }, { "code": "", "text": "You I entered the command in the file editing area. You have to type the command in the terminal area. If you are unsure which area is area, I recommend that you revise the lesson where the IDE is presented.You entered the command mongo … in the file editing area. In your screenshot it is a tab that says connection_instructions.txt. This is a file, this is not the area where you enter commands. You enter commands in the terminal area. In your screenshot it is a tab with the label Terminal 0. If you are still unsure which area is which, I recommend that you revise the lesson where the IDE is presented.", "username": "steevej" }, { "code": "", "text": "Yes, after spending a long time, I wanted to try doing it there and, surprise, it ended up being the simplest answer, thank you", "username": "Alex_Armando_Ticona_Mamani" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Unable to connect in mongodb compass app
2020-11-24T12:39:01.409Z
Unable to connect in mongodb compass app
2,373
null
[ "replication", "devops" ]
[ { "code": "", "text": "Hello All,\nI have a MongoDB cluster at on-prem hardware servers and now looking for migration to AWS EC2 hosted MongoDB replicaset. Is there any way to achieve live migration?\nAlso if you can guide how I can achieve below -", "username": "Krishna_Kulkarni" }, { "code": "", "text": "Hi @Krishna_Kulkarni. You can do a live migration. The process is described here, if you’re not using ops manager, how you add the nodes will be slightly different but the process remains the same.Backup options are documented here.If you ever want to migrate to a completely hosted solution, you can also live migrate to Atlas.", "username": "Naomi_Pentrel" } ]
MongoDB Migration to AWS
2020-11-25T11:51:53.679Z
MongoDB Migration to AWS
1,704
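The live-migration process linked above amounts to extending the existing replica set onto the EC2 hosts and then retiring the on-prem members. A condensed mongosh sketch; hostnames are placeholders:

```javascript
// From the current primary: add the new EC2 members. Keeping priority and votes
// at 0 while they perform initial sync is a common precaution.
rs.add({ host: "ec2-node-1.example.com:27017", priority: 0, votes: 0 });
rs.add({ host: "ec2-node-2.example.com:27017", priority: 0, votes: 0 });

// Once they are in sync, restore normal priority/votes with rs.reconfig(),
// step down the on-prem primary, then remove the old members:
// rs.stepDown();
// rs.remove("onprem-node-1.example.com:27017");
```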
null
[ "atlas-device-sync" ]
[ { "code": "config = new SyncConfiguration(\"myPart\", user); \nvar realm = await Realm.GetInstanceAsync(config);\nvar token = session.GetProgressObservable(ProgressDirection.Download, ProgressMode.ReportIndefinitely)\n .Subscribe(progress =>\n {\n if (progress.TransferredBytes < progress.TransferableBytes)\n {\n // Show progress indicator\n }\n else\n {\n // Hide the progress indicator\n }\n });\nrealm.GetSession()Realm.GetInstanceAsync(config)", "text": "I’m planning to develop a Windows application with an offline-first approach. I found MongoDB Realm with the .Net SDK that seems to suit my requirements. Currently I’m trying to find out how I can track the upload/download progress when connecting to the server database (especially for the first time).I connect like this:Then I found this snippet in the Realm documentation to track for example the download-progress:The problem for me is, that I need to have a session-handle to subscribe to the progress event. But I can get the session only after I opened the realm (with realm.GetSession() ). And as I open the realm async I will not get the realm before Realm.GetInstanceAsync(config) is completed, which makes the progress-tracker useless.What can I do to get the progress during the sync? Thank you", "username": "David_Funk" }, { "code": "", "text": "When you open the realm asynchronously (i.e. with GetInstanceAsync) the async call doesn’t resolve until the SDK has synced all pending sync changes. Try opening the realm synchronously (i.e. with GetInstance) - it should return the realm immediately and then start syncing changes in the background, which you can observe using the code you posted.Hope that helps!", "username": "nlarew" }, { "code": "", "text": "Try opening the realm synchronouslyExcept that your sync documentation says:The first time a user logs on to your realm app, you should open the realm asynchronously to sync data from the server to the device.", "username": "Phil_Seeman" }, { "code": "config = new SyncConfiguration(\"myPart\", user)\n{\n OnProgress = progress =>\n {\n Console.WriteLine($\"Progress: {progress.TransferredBytes}/{progress. TransferableBytes}\");\n }\n}\n", "text": "I replied to the same question in StackOverflow. The gist of it is:", "username": "nirinchev" }, { "code": "", "text": "X-POST on SO - .net - How to track progress in MongoDB Realm on initial sync? 
- Stack Overflow", "username": "Ian_Ward" }, { "code": "GetInstanceAsync()GetInstance() SyncConfig = new SyncConfiguration(\"myPart\",User)\n {\n OnProgress = progress =>\n {\n Console.WriteLine($\"Progress: {progress.TransferredBytes}/{progress.TransferableBytes}\");\n }\n };\n Realm1 = await Realm.GetInstanceAsync(SyncConfig);\n var session = Realm1.GetSession();\n\n var uploadProgress = session.GetProgressObservable(ProgressDirection.Upload, ProgressMode.ReportIndefinitely);\n var downloadProgress = session.GetProgressObservable(ProgressDirection.Download, ProgressMode.ReportIndefinitely);\n\n var token = uploadProgress.CombineLatest(downloadProgress, (upload, download) => new\n {\n TotalTransferred = upload.TransferredBytes + download.TransferredBytes,\n TotalTransferable = upload.TransferableBytes + download.TransferableBytes\n })\n .Throttle(TimeSpan.FromSeconds(0.1))\n .ObserveOn(SynchronizationContext.Current)\n .Subscribe(progress =>\n {\n if (progress.TotalTransferred < progress.TotalTransferable)\n {\n // Show spinner\n Console.WriteLine($\"ProgressIF: {progress.TotalTransferred}/{progress.TotalTransferable}\");\n }\n else\n {\n // Hide spinner\n Console.WriteLine($\"ProgressELSE: {progress.TotalTransferred}/{progress.TotalTransferable}\");\n }\n });\nUnbehandelte Ausnahme: System.NullReferenceException: Der Objektverweis wurde nicht auf eine Objektinstanz festgelegt.\n bei Realms.Sync.SessionHandle.HandleSessionProgress(IntPtr tokenPtr, UInt64 transferredBytes, UInt64 transferableBytes) in C:\\jenkins\\workspace\\realm_realm-dotnet_PR-2097\\Realm\\Realm\\Handles\\SessionHandle.cs:Zeile 220.\nDas Programm \"[16928] MongoDBDemo.exe\" wurde mit Code 0 (0x0) beendet.\n", "text": "thanks for the replies and sorry for the crosspost. As nobody was replying in SO I thought I might get some help here. But then @nirinchev replied which helped me already.With his method I get a progress for GetInstanceAsync(), which I need, because GetInstance()creates and error when you try the initial sync with it. I wanted to go now one setup further and did this:But this unfortunately creates another error:And as this error seems to occur in the Realm.dll I can’t debug it to further understand why I get this error.", "username": "David_Funk" }, { "code": "", "text": "Hm, this indeed sounds like a bug on our end - I’ll take a look and get back when I have more information.", "username": "nirinchev" }, { "code": "", "text": "Looking at the code, it appears that this is a flaw in the design of our ProgressObservable implementation - combining it with another observable will not correctly retain the notification token from both, so it gets garbage collected. I’ll file a bug and we’ll fix it in the next release.", "username": "nirinchev" } ]
How to track progress in MongoDB Realm on initial sync?
2020-11-22T09:06:26.406Z
How to track progress in MongoDB Realm on initial sync?
4,554
null
[]
[ { "code": "", "text": "I am working on a project which needs integrating Mongodb to Google Spreadsheets, writing an SQL query, and showing the data received from the query to a HTML5 website.\nI am new to this database and didn’t do anything but create a project and a Google Cloud cluster.Regards,Abhinav ", "username": "programer151" }, { "code": "", "text": "Hi @programer151,You can potentially use the Atlas BI connector enabled on your platform and have an ODBC driver connection to your Atlas cluster from Google sheets similar to the Microsoft excel tutorial:\nhttps://docs.mongodb.com/bi-connector/master/connect/excelAlternatively you can setup a realm app with webhook to expose data to Google sheetsUsing MongoDB Stitch to Create an API for Data in Google SheetsStitch is the old name for MongoDB Realm.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny Thanks! now I know how to connect Google Spreadsheets, where should I write the Sql query for the sheet? Also I don’t know google apps script ", "username": "programer151" }, { "code": "", "text": "Hi @programer151,The specific place to write code or sql queries depanding on your ODBC connector.Usually the data source defined should allow you to define queries on top of it.There us also a zapier connector available, however I never used it.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @programer151,What framework are you using for this project to host your website so far? There’s some sample code here available in different languages that shows you how you can access google sheets data. If you are starting from scratch on this you don’t necessarily have to use the BI connector - you might be able to just use the native MongoDB Query Language.Let us know how you get on ", "username": "Naomi_Pentrel" }, { "code": "", "text": "@Naomi_Pentrel This is how I created my website :\nI first created a html and css website, then I watched a video which told me how to present the spreadsheets data into my website ,after that I used netlify to host my website,then I needed to make a search function to present only one specific row when the query input was typed in\nto do that I needed to connect mongodb to google spreadsheet and write SQL code to make this project work.can I use Zapier to connect to google sheet and still write SQL code?\nlike @Pavel_Duchovny saidThere us also a zapier connector available, however I never used it.", "username": "programer151" }, { "code": "", "text": "@programer151 can you link to that video? I’m still wondering what you’re using to access the data - is that javascript? The reason I’m asking that is because, if you don’t have to or specifically want to use SQL, it would be easier to access the data in MongoDB using the native MongoDB Query Language (MQL) than to add the BI connector.What are you planning to do on the website itself? Maybe that’ll help us suggest the right tools As for zapier - here are the integrations available: https://zapier.com/apps/google-sheets/integrations/mongodb. I don’t believe it allows you to query data in SQL format.", "username": "Naomi_Pentrel" } ]
How can I connect Mongodb atlas to google spreadsheets?
2020-11-24T06:42:35.790Z
How can I connect Mongodb atlas to google spreadsheets?
3,974
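If you take the webhook route suggested above, the Realm (formerly Stitch) side is a small JS function; Google Sheets can then pull the returned JSON with Apps Script or a fetch-based add-on. A hedged sketch of the webhook body; the database, collection, and query parameter names are placeholders:

```javascript
// Realm HTTP service webhook (GET): return documents as JSON for Sheets to consume
exports = async function (payload, response) {
  const coll = context.services
    .get("mongodb-atlas")
    .db("mydb")
    .collection("inventory");

  // Optional filter passed as a query-string parameter, e.g. ?sku=ABC123
  const query = payload.query.sku ? { sku: payload.query.sku } : {};
  const docs = await coll.find(query).limit(100).toArray();

  response.setStatusCode(200);
  response.setHeader("Content-Type", "application/json");
  response.setBody(JSON.stringify(docs));
};
```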
null
[ "queries" ]
[ { "code": "", "text": "MongoDB version 3.1.13excluding a subdocument not working .db.collections.find({“name”:“riya”},{“address”:0})db.collections.find({“name”:“riya”},{“address.$”:0})I tried both but it is not excluding the subdocument", "username": "Gayathri_S" }, { "code": "", "text": "Hi @Gayathri_S,The regular projection will not exclude specific arrays elements.You Will need an aggregation with $filter projection:\nhttps://docs.mongodb.com/manual/reference/operator/aggregation/filter/Additionally, I am not sure why you are using a development version 3.1.13 and not a general available 3.6 - 4.4 versions.Best\nPavel", "username": "Pavel_Duchovny" } ]
Excluding a subdocument
2020-11-26T06:29:18.097Z
Excluding a subdocument
2,051
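To make the $filter suggestion above concrete, a sketch of dropping matching array elements in an aggregation projection; it assumes `address` is an array of subdocuments with a `type` field, which is not stated in the thread:

```javascript
db.collections.aggregate([
  { $match: { name: "riya" } },
  {
    $project: {
      name: 1,
      address: {
        $filter: {
          input: "$address",
          as: "a",
          cond: { $ne: ["$$a.type", "temporary"] }  // keep every element except this type
        }
      }
    }
  }
]);
```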
null
[ "atlas-functions" ]
[ { "code": "", "text": "I am looking to do something pretty simple: once monthly, send an email with data from one of my collections in Atlas. Is there anything in Realm or Atlas with this functionality? I’m hoping to avoid using a third party service (such as AWS SES, since it’s not on my organization’s allowed list). If this service isn’t available in Realm or Atlas, what is the most common infrastructure for accomplishing this?Thank you!", "username": "Thomas_Tenner" }, { "code": "", "text": "Hi @Thomas_Tenner,Welcome to MongoDB community!Unfortunately the most common and easy way to do this is via AWS service integration with SES:A quick introduction to sending emails via AWS SES with Stitch.Stitch was rebranded to Realm.Other options is to find another rest api service and use the http moduels to send it.Additional option is to create this data as a referhed chart and send it out with a link or embedded in emails:Best\nPavel", "username": "Pavel_Duchovny" } ]
Is there a native email service in Realm or Atlas?
2020-11-25T20:30:13.190Z
Is there a native email service in Realm or Atlas?
3,117
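A hedged outline of the monthly-email idea from the thread above, driven by a Realm scheduled trigger; the collection, field names, and the `sendReportEmail` helper are assumptions, with the actual send delegated to whichever email integration (AWS SES via the AWS service, or any HTTP mail API) is configured in the app:

```javascript
// Scheduled trigger function (runs monthly): gather the report, then hand it off
exports = async function () {
  const coll = context.services.get("mongodb-atlas").db("reports").collection("sales");

  const since = new Date();
  since.setMonth(since.getMonth() - 1);

  const rows = await coll.find({ createdAt: { $gte: since } }).toArray();
  const body = rows.map(r => `${r.item}: ${r.total}`).join("\n");

  // Assumption: another Realm function wraps the configured email service
  await context.functions.execute("sendReportEmail", "team@example.com", body);
};
```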
null
[]
[ { "code": "", "text": "Can I move a project to another account (different company)?", "username": "Manos_Valergakis" }, { "code": "", "text": "Hi @Manos_Valergakis! Welcome to the community. Do you mean moving an Atlas project onto a different company account in Atlas? Or do you mean to another platform? Regarding the latter: You can absolutely move data to another service or to a self hosted MongoDB deployment. Certain features like Charts, Atlas Triggers and Realm Sync will not be available on other platforms but if you are only using the core database you can move the data whenever you want (you can get the data with mongoexport for example). Do you have a particular one in mind?", "username": "Naomi_Pentrel" }, { "code": "", "text": "Hi @Naomi_Pentrel. Thank you for your response.\nYes, I want to move a project to an other company in Atlas.\nI have already chat with someone (named Disha) form Mongo Support and he write me that there isn’t any automation online tool to migrate a project to an other account in Atlas … so, I used mongodumb backup!", "username": "Manos_Valergakis" }, { "code": "", "text": "Hi Manos,I’m sorry to see this too late: you got some incorrect information.You can move an Atlas Project between two Atlas Organizations if you’re an Organization Owner of both: see https://docs.atlas.mongodb.com/tutorial/manage-projects/index.html#move-a-projectCheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can i move a project to another account?
2020-11-24T08:18:41.207Z
How can i move a project to another account?
9,892
null
[ "atlas-search" ]
[ { "code": "$search: {\n \"compound\": {\n \"must\": [{\n \"queryString\": {\n \"defaultPath\": \"text\",\n \"query\": paramMust\n }\n }],\n \"mustNot\": [{\n \"text\": {\n \"query\": paramMustNot,\n \"path\": [\"text\", \"name\"],\n }\n }],\n \"should\": [{\n \"text\": {\n \"query\": paramShould,\n \"path\": [\"text\", \"name\"],\n \"fuzzy\": {\n \"maxEdits\": maxEdits,\n \"maxExpansions\": maxExpansions,\n }\n }\n }]\n },\n \"highlight\": {\n \"path\": \"text\",\n }\n }\n },\n {\n $project: {\n \"_id\": 1,\n \"name\": 1,\n \"highlights\": { \"$meta\": \"searchHighlights\" }\n }\n }\n", "text": "Am using Atlas and doing some searches - working all fine. However I am looking to return the highlights - which sometimes work - and sometimes don’t. Search is working insofar as it is finding the documents that contain the terms, however it appears that is the terms are ‘too far in’ (the text is often long), and highlights are not returned.Not sure if I have made an error or there is a known bug (?) or if anyone else has stumbled across this issue. Here is (a somewhat modified) snippet:After many hours of toiling I found a highlight that returned - half a sentence - I believe not expected behaviour! This leads me to believe perhaps if it doesn’t find the term in the first x characters then it goes ‘ah well, too hard’ and returns no highlights. Whilst I could parse the text myself and do my own deconstruction, this feels sub-optimal when there is a highlighting feature.", "username": "Simon_Kun" }, { "code": "", "text": "Hi @Simon_Kun. I’m the Product Manager for Atlas Search. It looks like you have found a bug but we have a fix. We should be able to release it next week. Thank you very much for reporting.Right now, we only return the top five highest scoring passages for a field when highlighting. Furthermore, if the length of your field is longer than 10,000 characters, only the first 10,000 characters will be examined for highlighting. This information is currently not in our docs and we apologize for that, but that information should be out next week as well.If you run into any other issues on Atlas Search be sure to let us know.", "username": "Marcus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Highlight Missing
2020-11-21T12:04:14.157Z
Highlight Missing
2,224
null
[ "swift", "security" ]
[ { "code": "No suitable servers found: `serverSelectionTimeoutMS` expired: [connection closed calling ismaster on 'cluster0-shard-00-00-fzuuk.mongodb.net:27017'] \n[Failed to receive length header from server. calling ismaster on '********************'] \n[Failed to receive length header from server. calling ismaster on '********************']\n", "text": "Hi EveryoneI am using the Swift native driver and Vapor for my web application which is hosted on Heroku. I have been in the development phase using the sandbox cluster but I am now looking to begin Beta testing and I have upgraded my cluster to M2.I would now like to remove enabling access from anywhere and restrict access to two static IP addresses available from an Heroku add on, having followed the how to blog post: Using MongoDB Atlas on HerokuHowever I am getting the following error and I am not sure how to resolve it:Any ideas on how I can resolve this?What is the danger of making my MongoDB Atlas cluster available to be accessed from any IP Address?Thanks\nPiers", "username": "Piers_Ebdon" }, { "code": "", "text": "The danger of any IP address is low as long as you have an appropriately strong database credential and governance around the database credential management (consider MongoDB Atlas- Database - Secrets Engines | Vault | HashiCorp Developer for example).But if you’re connecting from a static IP address then that should be all you need on the Atlas IP Access List: are you sure you have the public IP added?-Andrew", "username": "Andrew_Davidson" } ]
Heroku static IP addresses and MongoDB Atlas
2020-11-21T16:27:21.288Z
Heroku static IP addresses and MongoDB Atlas
4,438
null
[ "installation", "devops" ]
[ { "code": "", "text": "Hi guys!,I’m just to deploy a sharded cluster in vms in two differentes sites and I was wondering if is it needed that every shard node be able to reach every machine with the config server and mongos? or just mongos?.. or just config server rs?In other words, every single element in the cluster should be reachable from any other element?thanks guys", "username": "Oscar_Cervantes" }, { "code": "", "text": "I’ve already confirmed", "username": "Oscar_Cervantes" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Communication among Cluster shard's elements
2020-11-24T20:27:03.104Z
Communication among Cluster shard's elements
1,806
null
[ "java", "change-streams" ]
[ { "code": "ChangeStreamDocument{ operationType=OperationType{value='insert'}, namespace=Machine2Machine.Messages, destinationNamespace=null, fullDocument=Document{{_id=ad69f54f-3d4d-49ce-a70d-951efb58750e, type=RESPONSE, requesterId=33b19ba9-b7c0-4c4e-822d-162b7894bec8, status=PENDING, value=Hello}}, documentKey={\"_id\": \"ad69f54f-3d4d-49ce-a70d-951efb58750e\"}, clusterTime=.........}mongoCollection.watch(asList(Aggregates.match( Filters.eq(\"fullDocument.requesterId\", requesterId)) )).fullDocument(FullDocument.UPDATE_LOOKUP).subscribe(this);", "text": "Disclaimer:Hi There,I am trying to filter a change stream using Java. I am only interested in changes where the fullDocument.requesterId matches my criteria.Unfortunately it does not work. I still get all changes.This is how the ChangeStreamDocument looks like:ChangeStreamDocument{ operationType=OperationType{value='insert'}, namespace=Machine2Machine.Messages, destinationNamespace=null, fullDocument=Document{{_id=ad69f54f-3d4d-49ce-a70d-951efb58750e, type=RESPONSE, requesterId=33b19ba9-b7c0-4c4e-822d-162b7894bec8, status=PENDING, value=Hello}}, documentKey={\"_id\": \"ad69f54f-3d4d-49ce-a70d-951efb58750e\"}, clusterTime=.........}This is what I do:mongoCollection.watch(asList(Aggregates.match( Filters.eq(\"fullDocument.requesterId\", requesterId)) )).fullDocument(FullDocument.UPDATE_LOOKUP).subscribe(this);I got it from several sources. One of them is https://blog.codecentric.de/en/2018/01/change-streams-mongodb-3-6/What do I miss?Any help is really appreciated.Thanks", "username": "Kurt_Berg" }, { "code": "", "text": "Sorry my bad.\nThe above works.\nTurns out that the publisher I use in combination with the reactive stream driver was the actual problem.", "username": "Kurt_Berg" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to filter change stream using fullDocument.<field> in Java
2020-11-25T11:52:01.890Z
How to filter change stream using fullDocument.<field> in Java
2,850
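The pipeline in the thread above is sound; for comparison, the same filter expressed with the Node.js driver (namespace and field name taken from the thread), shown only as an illustration of matching on fullDocument fields:

```javascript
const { MongoClient } = require("mongodb");

async function watchResponses(uri, requesterId) {
  const client = new MongoClient(uri);
  await client.connect();
  const coll = client.db("Machine2Machine").collection("Messages");

  const pipeline = [
    { $match: { "fullDocument.requesterId": requesterId } }
  ];

  const stream = coll.watch(pipeline, { fullDocument: "updateLookup" });
  stream.on("change", change => {
    console.log("matched change:", change.fullDocument);
  });
  return stream;
}
```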
null
[ "atlas-functions" ]
[ { "code": "exports = function(payload, response) {\n const reqCollection = \"COLLECTION_NAME_HERE\"; \n const query = ({query:payload.query});\n\n return new Promise((resolve, reject) => {\n const doc = context.services.get(\"mongodb-atlas\").db(\"DBNAME\").collection(reqCollection);\n \n const data = doc.find(query).toArray()\n .then ((response) => {\n resolve (response); \n })\n })\n \n};\n", "text": "I have webhook set up via Realm, utilizing a 3rd Party, http, post.The url performs as expected.\nWhen I pass a query parameter in the payload from my 3rd party app, it’s logging the following error inside Realm logs, and a 404 from the endpoint calling the webhook.“Error:\nincoming webhook evaluation blocked by CanEvaluate”I can see the query parameter being passed over the wire in the logging output provided by Realm.Here’s the code:", "username": "Erin_O_Neill" }, { "code": "", "text": "Hi @Erin_O_Neill,Welcome to MongoDB community!It seems that on your http service or webhook you have placed some code or condition in the can evaluate sectionThis section cannot be resolved to true therefore you fail.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "That’s what I thought as well - it’s empty.\nI also attempted an empty object only.Is the condition set that once you add a configuration, and then remove it, the system inherits the expression, and another function must be created?", "username": "Erin_O_Neill" }, { "code": "", "text": "Hi @Erin_O_Neill,Can you share the application link with me?Have you sure all of your changes are “Reviewed and Deployed”?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "exports = function (payload, response) {\ntry { console.log('line 1' + EJSON.parse(payload.body))} catch (er) {}\ntry { console.log('line 2' + JSON.stringify(EJSON.parse(payload.body.text))} catch (er) {}\n}\n\"base64\": \"eyJBZGRpdGlvbmFsQXR0cmlidXRlMSI6IkJvbmZpcmUgQ2FubmFiaXMifQ==\",\n", "text": "Sure thing - I actually figured it out. Want to share this out to everyone - because there’s no examples in the POST of the need for this translation. There’s ref docs for an insert, but use cases for a POST and .find().toArray()…it’s Narnia.Finally figured it out due to logging.-this returned line 2 in the Realm logs and upon inspection this is what the body showed (hint - this is not what I sent as the query…or so I thought…)Because this is a .find().toArray() webhook there’s no reason to translate the object back to B64 on the return trip to the endpoint.", "username": "Erin_O_Neill" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
3rd Party Webhook - POST returns error
2020-11-24T01:46:41.819Z
3rd Party Webhook - POST returns error
2,857
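For anyone hitting the same issue as the thread above: the webhook body arrives as a BSON binary, so it has to be turned into text before parsing. A hedged sketch of a POST webhook that parses the body and runs the find; database and collection names are placeholders:

```javascript
exports = async function (payload, response) {
  const coll = context.services.get("mongodb-atlas").db("mydb").collection("items");

  // payload.body is binary; .text() yields the raw JSON string sent by the caller
  const body = payload.body ? EJSON.parse(payload.body.text()) : {};
  const query = body.query || {};

  const docs = await coll.find(query).toArray();

  response.setStatusCode(200);
  response.setBody(JSON.stringify(docs));
};
```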
https://www.mongodb.com/…_2_1024x682.jpeg
[]
[ { "code": "", "text": "To celebrate our global MongoDB.live events happening around the world (virtually) from November, 10 2020 to January, 14 2020 we would like to learn what local food you love best!Respond to this post and tell us about your favorite breakfast, lunch, dinner, drink, or snack to complete this step of our MongoDB.live Scavenger Hunt!\npexels-ella-olsson-16407774288×2859 2.41 MB\nVisit the Developer Community virtual booth at your local MongoDB.live event to learn more about MongoDB Community programs and take part in scavenger hunt!", "username": "Leith_Darawsheh" }, { "code": "", "text": "Masala Dosa - this finger-licking masala dosa is pure south Indian; it has its origins in that geographically-diverse Karnataka region. The rice crèpe is ingeniously simple: rice and lentils for the batter, cooked on a skillet. Voila! You get a mouthwatering filling of potato and onion curry dipped in chutney. Naturally, it’s sumptuous on its own too!.\ndosa1920×1080 215 KB", "username": "Binay_Agarwal" }, { "code": "", "text": "Recently moved to Denmark and loving the local Flæskesteg!", "username": "Martin_Basterrechea" }, { "code": "", "text": "For local (Germany) I go for “Pfannkuchen”\ngrafik960×647 208 KBwhich you can have sweet savory or filled with salad …\nPersonally I am an all time barbecue person independent if hot summer or snow winter So I always go what the season provides.\nAnd if I want to vary: you can make super tasty curries on a barbecue fire.\ngrafik931×355 117 KBBeside this I try to combine South African Barbecue style with Cape Malay style (both found around Cape Town) - not local but I brought them 20 years back form SA to my home in Germany and cultivate them since then: Squash (something between pumpkin and zucchini)\nAnd @Binay_Agarwal your Masala Dosa looks great !!!MichaelPS and if you are now under the impression that I have beside MongoDB a further passion than you are correct… Actually this forces a third passion: the need of workouts, I am runner (no sprint).", "username": "michael_hoeller" }, { "code": "", "text": "My favourite local lunch would be amala with gbegiri and ewedu soup decorated with assorted meats. I haven’t eaten this in a year because I am miles away from home.", "username": "Modupeoluwa" }, { "code": "", "text": "20200602_091001470×966 114 KB My favourite breakfast, lunch, dinner and snacks : PANCAKES!!! I can eat them for every meal, and have recently discovered a great healthy alternative made of just oats and banana!", "username": "Snehal_Bhatia" }, { "code": "", "text": "PitepaltAs I have my origin in the north of Sweden, I love the traditional northern \"cuisine\". What could be more north Sweden than the traditional swedish \"filled potato balls\" called Pitepalt? Enjoyed properly, you will find yourself in...", "username": "Peter_Johansson" }, { "code": "", "text": "Hi @Peter_Johanssonis that close to Ebelskiver? I love them !Michael", "username": "michael_hoeller" }, { "code": "", "text": "No like a sort of dumplings. No apples in them ", "username": "Peter_Johansson" }, { "code": "", "text": "This is my one and only breakfast! LOVE IT!\n", "username": "Yigit_Erkal" }, { "code": "", "text": "Here in Québec, Canada there is nothing better then “Tourtiere du lac st jean”. We usally eat that around Christmas. Full of meat and potato. 
I love it\n800×474 97.4 KB", "username": "Maxime_Villeneuve" }, { "code": "", "text": "**My favorite local South African tea is Rooibos TeaCenturies ago, rooibos was a drink of the bushmen, who chopped the bush’s stalks, bruised them with hammers, and let them ferment and dry in the sun – to be sipped later in a warm brew over cooking fires.\n1200×900 161 KBAt first glance, the rooibos bush is unremarkable. It is almost spindly, with needle-like stalks, and looks like just another scrub in the fynbos – the ecologically rich heathland of western South Africa.Despite its name, it’s not red but earthy green. Rooibos, or “red bush” in Afrikaans, refers to the plant after fermentation, when its stalks turn a deep auburn.Rooibos tea shows promise when it comes to treating common ailments such as pain and skin conditions, as well as more serious diseases such as heart disease and diabetes. With a rich, crimson color that is visually pleasing along with a smoky, fruity and subtle flavor profile, drinking rooibos tea is a pleasure. Pour yourself a cup of tea and enjoy a sweet tea experience and reap the health benefits.I highly recommend you give it a try!", "username": "Deidre_Gerber" }, { "code": "", "text": "Cabrito is very tasty and quite popular around here.", "username": "Ernesto_Celis" }, { "code": "", "text": "I am a major foodie so it’s tough to just pick one favorite item… but here are some of my favorites.\nBreakfast: Monte Cristo or Eggs Benedict\nLunch: Dim Sum or Pho\nDinner: Sweet Potato Pizza (Haven’t found one in the states yet but amazing in Korea), KBBQ, Korean Fried Chicken, or Kimchi Stew", "username": "Hoyoung_Jung" }, { "code": "", "text": "Hyderabadi Dum Biryani ! No second thoughts ", "username": "Akshay_Pallerla" }, { "code": "", "text": "I’m right there with you @Yigit_Erkal ", "username": "Leith_Darawsheh" }, { "code": "", "text": "@Akshay_Pallerla I had Hyderabadi Biryani for the first time during my last visit to India. Absolutely hands down the best!", "username": "Leith_Darawsheh" }, { "code": "", "text": "Hyderabadi Dum Biryani ! No second thoughtsIndeed \nHyderabad a South Indian city with 400 years history is well known for 2 things\nOne Charminar ( 4 minarets) and other Hyderabad Biryani.Made with chicken or lamb or egg and rice and lots of spices", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hello everyone from Trujillo (Peru). Here, a good lunch on a sunny day is ceviche. Delicious!", "username": "Diana_FloresSaavedra" }, { "code": "", "text": "My personal favorite here in Rochester, NY is Cocina Latina, a Puerto Rican restaurant.Being from Rochester, I also have to shout out our aptly named Garbage Plates!\n", "username": "Montgomery_Watts" } ]
MongoDB.live Scavenger Hunt Food Quest!
2020-11-10T05:28:48.413Z
MongoDB.live Scavenger Hunt Food Quest!
12,526
null
[]
[ { "code": "", "text": "Hello everyone,I was wondering if I’m the only one who can not access the “Realm” tab in the dashboard. In addition to that in the “Atlas” Tab is says:LINKED REALM APP\nUnable to load application dataI first noticed it yesterday but something seems to be very wrong. Even when I register with a clean email a new test account I can’t access it and no realm app is connected anywhere.Is that something someone is working on or what is that about? This unavailability already cost me plenty of hours yesterday and until right now it is still not working.I’d be happy to get some info about that issue - even if the error might be on my side (which I wouldn’t exclude).Thanks for shedding some light on this!Best,\nRené", "username": "Rene_Seib" }, { "code": "", "text": "@Rene_Seib Best to open a support chat on this, it is working for me -image2838×1560 404 KB", "username": "Ian_Ward" }, { "code": "", "text": "Ok, now I tried the whole thing with Safari browser - and it works. So that seems to be a browser related issue.", "username": "Rene_Seib" }, { "code": "", "text": "Thanks, Ian!I would open the chat - if it wouldn’t open and immediately close after click. That happens in latest Chrome version and Firefox.I got no clue what’s going on. Like 2 days ago all was going smooth and somewhat it looks like the issue is on my end. However, even when logging in from Firefox - nothing changes.In addition to that the metrics show way more activity than before, although I’m not doing much.Screenshot2348×790 106 KB", "username": "Rene_Seib" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm not available in dashboard
2020-11-25T08:33:41.097Z
Realm not available in dashboard
2,013
https://www.mongodb.com/…4_2_1024x512.png
[ "swift" ]
[ { "code": "Initialize the App\n \n name: \"RealmSwift\",\n targets: [\"Realm\", \"RealmSwift\"]),\n ],\n dependencies: [\n .package(name: \"RealmDatabase\", url: \"https://github.com/realm/realm-core.git\", .exact(Version(coreVersionStr)!))\n ],\n targets: [\n .target(\n name: \"Realm\",\n dependencies: [.product(name: \"RealmCore\", package: \"RealmDatabase\")],\n path: \".\",\n exclude: [\n \"CHANGELOG.md\",\n \"CONTRIBUTING.md\",\n \"Carthage\",\n \"Configuration\",\n \"Jenkinsfile.releasability\",\n \"LICENSE\",\n \"Package.swift\",\n \"README.md\",\n \"Realm.podspec\",\n \n ", "text": "I followed these steps,added Realm SDK via SwiftPM.But I failed Initialize the App.I found App.swift excluded.Why is App.swift exclude from Package.swift?–\nRealm framework version: v10.1.4\nXcode version: 12.1", "username": "Shinya_Kato" }, { "code": "", "text": "Hi @Shinya_Kato Realm via SPM does not support Sync functionality yet, you will need to use Cocoapods or Carthage for the moment.", "username": "Lee_Maguire" }, { "code": "", "text": "@Lee_Maguire I understand, thank you!", "username": "Shinya_Kato" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why is App.swift exclude from Package.swift?
2020-11-25T13:17:45.901Z
Why is App.swift exclude from Package.swift?
2,342
null
[ "production", "php" ]
[ { "code": "MongoDB\\Driver\\Exception\\InvalidArgumentExceptionE_WARNINGMongoDB\\Driver\\CursorIteratorpecl install mongodb-1.9.0\npecl upgrade mongodb-1.9.0\n", "text": "The PHP team is happy to announce that version 1.9.0 of the mongodb PHP extension is now available on PECL.Release HighlightsThis release makes the extension compatible with PHP 8.This release ensures that all functions in the extension throw MongoDB\\Driver\\Exception\\InvalidArgumentException instead of emitting a PHP error or warning during argument parsing (e.g. E_WARNING). Previous versions of the driver generally only did this for constructors, which was inconsistent. Note that this behavior does not apply to cases where PHP throws an Error (e.g. TypeError), which is done more consistently in PHP 8 (see: Consistent Type Errors).The MongoDB\\Driver\\Cursor class now implements the Iterator interface directly. This change was necessary to ensure consistent behavior across all supported PHP versions.A complete list of resolved issues in this release may be found at: Release Notes - MongoDB JiraDocumentationDocumentation is available on PHP.net:\nPHP: MongoDB - ManualFeedbackWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb", "username": "Andreas_Braun" }, { "code": "peclBuild process completed successfully\nInstalling '/usr/lib64/php/modules/mongodb.so'\nupgrade ok: channel://pecl.php.net/mongodb-1.9.0\nconfiguration option \"php_ini\" is not set to php.ini location\nYou should add \"extension=mongodb.so\" to php.ini\nSegmentation fault (core dumped)\n", "text": "Installed on Ubuntu 20.04 and Fedora 33 and working this morning.\nHowever, as before, the pecl build core dumps at the very end on both platforms.\nNonetheless, the driver works thereafter.", "username": "Jack_Woehr" }, { "code": "dnf install php-pecl-mongodb", "text": "Filed an issue with PHP\nThey pointed me to a Fedora package release of the 1.9.0 driver.\nThat package does not actually appear to be released yet to judge from dnf install php-pecl-mongodb which shows 1.8.1.", "username": "Jack_Woehr" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Extension 1.9.0 released
2020-11-25T12:20:12.811Z
MongoDB PHP Extension 1.9.0 released
6,964
null
[ "installation", "on-premises" ]
[ { "code": "", "text": "Hi,I am new to Mongo. I have setup an on-prem MongoDB and I was trying to setup Mongo Charts (ver 19.12.2) by following the documentation.Below is what i see for each command executed (in sequence):docker stack deploy -c /home/mongodb-charts/charts-docker-swarm-19.12.2.yml mongodb-charts\nCreating network mongodb-charts_backend\nCreating service mongodb-charts_chartsdocker service ls\nID NAME MODE REPLICAS IMAGE\ntjpbzxt01ymi mongodb-charts_charts replicated 0/1 Quaydocker service logs tjpbzxt01ymi\nNo output. It hangs there for a long time. I had to exit out of this.Even after waiting for half-day, i do not see the replicas as 1/1 and i cant pull up the container logs.Below is my environment:RHEL v 7.9\nDocker Client v1.13.1\nDocker Server v1.13.1Note: I updated the version number in docker compose file to 3.1 (from 3.3) as i was seeing an “unsupported version error”Any guidance on how to debug this issue is very much appreciated.", "username": "Tej_R" }, { "code": "docker pull quay.io/mongodb/charts:19.12.2", "text": "Hi @Tej_R -The first thing I’d check is whether the Charts image has been pulled down successfully. You can do this by running docker pull quay.io/mongodb/charts:19.12.2 before trying to deploy the stack. I’ve seen behaviour similar to what you’ve described when there are problems pulling the image due to network problems, low disk space, etc.Let me know if that helps at all.\nTom", "username": "tomhollander" }, { "code": "", "text": "Hi @tomhollander ,Thanks for the quick response.I had network restrictions from the server and hence i used a workaround:Download the image on a different machine -> save it as a tar file -> move the tar file to the server -> load the image into dockerDo you think something might have gone wrong with the above workaround.I can try downloading the image again but can’t do it directly on the server. Is there an alternative location to get the image.Also, how can i verify the integrity/correctness of the image after i download it?Thanks.", "username": "Tej_R" }, { "code": "docker imagesREPOSITORY TAG IMAGE ID CREATED SIZE\nmongodb-charts-dev latest 683b657d6295 16 hours ago 656MB\nmongo latest fb58c9bbce4e 3 days ago 493MB\nmongo <none> ba0c2ff8d362 8 weeks ago 492MB\nquay.io/mongodb/charts 19.12.2 bfd64537eef0 4 months ago 714MB\nalpine latest a24bb4013296 5 months ago 5.57MB\nchartsquay.io/mongodb/charts", "text": "Hi @Tej_R -If you imported the image this way, I’d say it is no longer tied to its quay.io registry name. If you execute docker images it will show the expected names for all images. 
For example here are some of mine:If your Charts image is showing as just charts (or something else) instead of quay.io/mongodb/charts you’ll need to update your compose file to reference the local image name.Tom", "username": "tomhollander" }, { "code": "", "text": "Hi @tomhollander.Here is what i see when i execute docker images:docker images\nREPOSITORY TAG IMAGE ID CREATED SIZE\nQuay 19.12.2 bfd64537eef0 4 months ago 714 MBBased on the output, i do not see anything incorrect here.Also, I downloaded the image again and repeated the steps to deploy (I removed the previous image) and seeing the same issue.What else can i check?", "username": "Tej_R" }, { "code": "docker run quay.io/mongodb/charts:19.12.2\n", "text": "Do you get any output when you type this?This command won’t result in Charts starting successfully, but if there’s some fundamental problem you may get some clues in the error messages.Tom", "username": "tomhollander" }, { "code": "", "text": "Hi @tomhollanderI got the following output for docker run Quay parsedArgs\n installDir (‘/mongodb-charts’)\n log\n salt\n productNameAndVersion ({ productName: ‘MongoDB Charts Frontend’, version: ‘1.9.1’ })\n gitHash (undefined)\n supportWidgetAndMetrics (undefined)\n tileServer (undefined)\n tileAttributionMessage (undefined)\n rawFeatureFlags (undefined)\n stitchMigrationsLog ({ completedStitchMigrations: })\n featureFlags ({})\n chartsMongoDBUri failure: ENOENT: no such file or directory, open ‘/run/secrets/charts-mongodb-uri’\n tokens failure: ENOENT: no such file or directory, open ‘/mongodb-charts/volumes/keys/charts-tokens.json’\n encryptionKeyPath failure: ENOENT: no such file or directory, open ‘/mongodb-charts/volumes/keys/mongodb-charts.key’\n lastAppJson ({})\n stitchConfigTemplate\n libMongoIsInPath (true)I am seeing 3 errors here. However, i was able to generate a docker secret (charts-mongodb-uri) as part of the setup process. So not sure why i am seeing an error related to it. And i have no clue about the other 2 errors.", "username": "Tej_R" }, { "code": "", "text": "Thanks @Tej_R. The errors here are expected - when the container is run directly it doesn’t have access to the secrets or volumes. However this did confirm that the image itself is installed correctly.After re-reading your initial post I picked up that you’re using Docker 1.13.1. This is a very old version, and as per the Charts documentation you need to be using version 17.06 or higher. So I suspect that may be why it’s not working.Also in case you missed the announcement, please note that the on-prem release of Charts will only be supported until September 2021.Tom", "username": "tomhollander" }, { "code": "", "text": "Thanks again, @tomhollander.Looks like i am left with no other options. My company only allows us to use RHEL servers and Windows Server 2012 R2.\nThe docker version supported on RHEL is only 1.13.1. So, looks like this option is not viable.I was researching on the possibility of setting up MongoDB and Charts on Windows Server 2012 R2. I can set up MongoDB v4.2 but looks like docker isn’t supported on 2012.I saw the announcement about retirement of on-prem charts after Sep 2021. I felt, it would have been better if there wasn’t a dependency on docker for Charts.Thanks,", "username": "Tej_R" } ]
Issues deploying Charts container on RHEL server
2020-11-23T23:48:08.068Z
Issues deploying Charts container on RHEL server
4,787
https://www.mongodb.com/…f1d16a95a347.png
[ "production", "php" ]
[ { "code": "mongodbdownloadToStreamdownloadToStreamByNameuploadFromStreamMongoDB\\GridFS\\Bucketcomposer require mongodb/mongodb^1.8.0\nmongodb", "text": "The PHP team is happy to announce that version 1.8.0 of the MongoDB PHP library is now available. This library is a high-level abstraction for the mongodb extension.Release HighlightsThis release makes the library compatible with PHP 8.With this release, errors that occur while copying GridFS stream contents will now cause an exception instead of relying on PHP to emit a warning. This primarily affects the downloadToStream, downloadToStreamByName, and uploadFromStream methods for MongoDB\\GridFS\\Bucket.A complete list of resolved issues in this release may be found at:\nhttps://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12483&version=29654DocumentationDocumentation for this library may be found at:FeedbackIf you encounter any bugs or issues with this library, please report them via this form:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12483&issuetype=1InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.", "username": "Andreas_Braun" }, { "code": "composer require mongodb/mongodb^1.8.0$ composer require mongodb/mongodb^1.8.0 \n [InvalidArgumentException] \n Could not find a matching version of package mongodb/mongodb^1.8.0. Check the package spelling, your version constraint and that the package is available in a stability which matches your minimum-stability (stable).\ncomposer.jsoncomposer update mongodb/mongodb", "text": "composer require mongodb/mongodb^1.8.0This invocation fails onHowever, editing the composer.json and calling composer update mongodb/mongodb works", "username": "Jack_Woehr" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Library 1.8.0 released
2020-11-25T12:28:08.093Z
MongoDB PHP Library 1.8.0 released
1,916
null
[ "dot-net", "atlas-device-sync" ]
[ { "code": "", "text": "I’m really struggling with the new .NET client; I have an app that worked fine in the old Realm Cloud v5 client but in the v10 beta, I’m getting ObjectNotDisposed errors and sometimes memory exception crashes; and frequently the sync stops working and clicking on the red “restart” message in the portal doesn’t help.I’m not sure if the problem is in the new .NET client itself and I should just wait for a release version, or if I’m doing things wrong which happened to work OK previously but perhaps the new client is less forgiving. Any thoughts on this? I know the ideal is to supply a small sample that illustrates the problems, but this is a full app and it would take some effort to strip it down to a sample (but I guess I may have to do that).A related question: are there any good .NET sample apps available using the new v10 client? (Something with multiple threads, more than just a simple console app with one Main method.) I haven’t seen any .NET samples for the new architecture yet.I’m a bit flummoxed as to a next step; any advice is appreciated!", "username": "Phil_Seeman" }, { "code": "", "text": "Hi @Ian_Ward, would love your thoughts on this. ", "username": "Phil_Seeman" }, { "code": "", "text": "@Phil_Seeman For any crashes you are getting on the client I would encourage you to file issues on dotnet repo so the team can investigate - GitHub - realm/realm-dotnet: Realm is a mobile database: a replacement for SQLite & ORMsFor sync stopping, we would need to see the error message you are getting but feel free to open a support chat in the portal when that occurs so we can investigate.We are working on .NET tutorial, it will be done toward the end of the quarter but feel free to ask any questions you are unsure about on the forums and we will do our best to guide you.", "username": "Ian_Ward" } ]
Stability of .NET client; sample apps?
2020-11-23T14:48:31.675Z
Stability of .NET client; sample apps?
1,589
null
[ "replication", "java" ]
[ { "code": "com.mongodb.MongoQueryException: Query failed with error code 13435 and error message 'not master and slaveOk=false' on server [secondary]\n", "text": "Hello,I wasn’t sure if this is the correct section to post this in, so apologies if it’s wrong here.We run several application components which use the MongoDB Java Driver 3.12.1 to connect to a PSA replica set running MongoDB 4.2.6. In fact, we run one production environment and several similar staging environments.Last night, on one of the staging environments, one of the application components threw the following exception and then hung:We are fairly sure that the MongoDB Primary did not step down, so there was no failover. But it looks like this component tried to work on the Secondary and thus failed. All components have all MongoDB hosts configured, i.e. Primary, Secondary and Arbiter, for their DB connection. Also, the application is not written to ever read from the secondary, for example. It’s always supposed to talk to the Primary.The application exists since MongoDB 3.2 or even 3.0, so it’s fairly old. And we have done stepDowns and failovers many times, for upgrades and so on. But we have never seen this effect before. Is there anything that could cause this? We’re not too concerned at this point, because it has only happened once and not on production, but we’re still curious what could cause this. So I’d be thankful for any ideas. Regards\nFrank", "username": "frankshimizu" }, { "code": "", "text": "Hi Frank,It might turn out to be helpful to have the full stack trace, including any chained exceptions. Can you post it please?Thanks,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Hi Jeffrey,thank you for your reply! Since my first post it has happened again and through more research I did find that there was in fact a failover and re-election. I suspect that it’s a resource problem on the MongoDB servers. So it doesn’t seem to be related to the driver. Sorry about not following up on that earlier. I think we can consider this solved.Regards\nFrank", "username": "frankshimizu" }, { "code": "", "text": "Hi Frank,More recent versions of the Java driver automatically retry read and write operations. This might help your application be more robust in the face of failovers and elections.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Java driver with Replica Set: not master and slaveOk=false
2020-11-17T14:33:17.075Z
Java driver with Replica Set: not master and slaveOk=false
4,035
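A note on the Java driver thread above: the retryable reads and writes that Jeff recommends can also be requested through the connection string, which keeps the behaviour consistent across driver versions that support them. The hosts, replica set name, and database below are placeholders, not the poster's deployment:

```
mongodb://host1:27017,host2:27017,host3:27017/appdb?replicaSet=rs0&retryWrites=true&retryReads=true&w=majority
```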
null
[ "installation", "licensing" ]
[ { "code": "", "text": "Hi Team,We know that MongoDB Enterprise Edition is the Commercial Edition and Community one is free.\nHowever Can we use Enterprise Edition on Corporate Non-PROD/TEST/LAB environments for testing/learning purpose. Any legal/Compliance issues on using so…?Please advise me.Thanks in ADV", "username": "venkata_reddy" }, { "code": "", "text": "Hi @venkata_reddy,You can review the specific terms of usage for MongoDB Enterprise in the Customer Agreement applicable to Enterprise downloads.I believe the relevant section you are looking for is 2(b):(b) Free Evaluation and Development. MongoDB grants you a royalty-free, nontransferable and nonexclusive license to use and reproduce the Software in your internal environment for evaluation and development purposes. You will not use the Software for any other purpose, including testing, quality assurance or production purposes without purchasing an Enterprise Advanced Subscription. We provide the free evaluation and development license of our Software on an “AS-IS” basis without any warranty.You will have to ascertain whether your usage is complaint with the full terms of this agreement.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can we use MongoDB Enterprise in Corporate Test servers/Lab environment
2020-11-25T06:33:44.520Z
Can we use MongoDB Enterprise in Corporate Test servers/Lab environment
3,597
null
[ "atlas-triggers" ]
[ { "code": "", "text": "Hi,I just noticed some weird thing about triggers. I followed the below steps,Can anyone please let me know what is wrong with my approach? Basically, I don’t want trigger to get fired for the documents which are inserted/updated/replaced when trigger was disabled.Thanks,\nVinay", "username": "Vinay_Gangaraj" }, { "code": "", "text": "Hi Vinay,Yes this is expected as when the index is enabled after its disabled, the trigger automatically uses the last stored resume token to try and cover all operations since last activity.This behaviour is currently the default and unchangeable behaviour for enable/disable.The only thing you can do is to export or save the trigger function seperately and delete the trigger. Recreate it pointing to the original function.Consider filing a call in https://feedback.mongodb.com to allow this to work as with suspended trigger, where you can checkbox if you wish to use a resume token or start fresh.CC: @Drew_DiPalma I saw this confusion with a few users.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Triggers are getting executed for the documents which are inserted when the trigger was disabled
2020-11-24T18:48:06.383Z
Triggers are getting executed for the documents which are inserted when the trigger was disabled
2,334
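On the trigger thread above, a common stopgap (besides deleting and recreating the trigger as Pavel describes) is to have the trigger function itself skip events that refer to documents last touched before a chosen cutoff. The sketch below is a hypothetical Atlas/Realm trigger function; the cutoff value and the `updatedAt` field are assumptions for illustration, not something defined in the thread.

```javascript
// Hypothetical database trigger function: ignore replayed events for documents
// that were last modified before the trigger was re-enabled.
exports = function (changeEvent) {
  const cutoff = new Date("2020-11-24T00:00:00Z"); // placeholder: moment of re-enabling

  // `fullDocument` is available for inserts/replaces, and for updates when the
  // trigger has "Full Document" enabled; `updatedAt` is an assumed app-level field.
  const doc = changeEvent.fullDocument;
  if (doc && doc.updatedAt && new Date(doc.updatedAt) < cutoff) {
    console.log("Skipping replayed event for", changeEvent.documentKey._id);
    return;
  }

  console.log("Processing", changeEvent.operationType, "on", changeEvent.ns.db + "." + changeEvent.ns.coll);
  // ... normal trigger logic goes here ...
};
```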
null
[ "dot-net" ]
[ { "code": "\"MongoDB.Bson.BsonSerializationException: Element name 'abc.net' is not valid'.\n at MongoDB.Driver.Core.Connections.BinaryConnection.SendMessagesAsync(IEnumerable`1 messages, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.ExecuteAsync(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocolAsync[TResult](IWireProtocol`1 protocol, ICoreSession session, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableWriteOperationExecutor.ExecuteAsync[TResult](IRetryableWriteOperation`1 operation, RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatchAsync(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatchesAsync(RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteBatchAsync(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteAsync(IWriteBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteWriteOperationAsync[TResult](IWriteBinding binding, IWriteOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteWriteOperationAsync[TResult](IClientSessionHandle session, IWriteOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.BulkWriteAsync(IClientSessionHandle session, IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionBase`1.ReplaceOneAsync(FilterDefinition`1 filter, TDocument replacement, ReplaceOptions options, Func`3 bulkWriteAsync)\n\n", "text": "Hello, I’m trying to store an entity to Mongo DB (version 4.2.9) and receive the following stack trace that appears at the end of the message.It seems that adding a <string, string> dictionary that holds a key that has a dot (.) inside causes this exception.String is : “abc.net”.Removing the dot (making it abcnet) works properly.I’m using the following nugets:Upgrading them to the latest 2.11.4 did not fix the issue.Attempting to add the entry manually (using Robo3T) works properly.Any suggestions as to what can be done?Thanks", "username": "Ilia_Shkolyar" }, { "code": "", "text": "", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
BsonSerializationException when trying to store an entity
2020-11-24T18:48:30.500Z
BsonSerializationException when trying to store an entity
4,161
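On the serialization thread above: several drivers reject map keys containing a dot even when the server itself would accept them, so a common workaround is to model such dictionaries as an array of key/value pairs. A minimal mongo shell sketch with made-up collection and field names:

```javascript
// Instead of { settings: { "abc.net": "..." } }, store explicit key/value pairs:
db.entities.insertOne({
  name: "example",
  settings: [
    { k: "abc.net", v: "some value" },
    { k: "abcnet", v: "another value" }
  ]
});

// The dotted key is now plain data and remains queryable:
db.entities.find({ "settings.k": "abc.net" });
```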
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Are paritions just a way to silo access to data or are there performance benefits?For example, with the task tracking tutorial application, let’s say a user has multiple projects, each with multiple tasks.All the project and task data would be read-write for a given user, let’s say with a partition value of “user=user1”.You could give every project and task the same partition as their user: “user=user1”Or you could give every project and task the following partitionsuser=user1;project=proj1\nuser=user1;project=proj2\nuser=user1;project=proj3Are there performance benefits to further sub-partitioning each project and its tasks?", "username": "David_Boyd" }, { "code": "", "text": "@David_Boyd There are performance benefits for partitioning data if the partition is read/write for mobile clients.For read/write partitions we generally recommend 20-30 clients, this number is non-deterministic because it depends on how much data each client is writing, how much of this data is conflicting with other writes, the amount of time the user is offline - so it can vary but this is a good number for a rule of thumb.For read-only realms its not uncommon to scale to thousands or tens of thousands of clients listening to a read-only realm - this is because it is essentially a data push without conflicts.I hope this helps", "username": "Ian_Ward" }, { "code": "", "text": "Thank you! 20-30 writes? Can you elaborate on that? Is this during a specific time period or total or per-second/minute/hour?", "username": "David_Boyd" }, { "code": "", "text": "@David_Boyd Oops, typo - I meant 20-30 client writers - updated ", "username": "Ian_Ward" } ]
Partition question
2020-11-22T01:46:11.449Z
Partition question
1,888
null
[ "backup" ]
[ { "code": "", "text": "Team,Mongodump and Mongorestore was completed in less than 5 minutes for 20 GB of data.\nWhen data increased to 100 GB then mongodump took ~5 hours and mongorestore took 48 hours.Please let me know if this is expected.", "username": "Jitender_Dudhiyani" }, { "code": "mongodumpmongorestoremongodumpmongorestoremongodumpmongorestoremongodumpmongorestoremongodumpmongodmongodumpmongorestore", "text": "Welcome to the community @Jitender_Dudhiyani!The general problem you are describing is a likely outcome using mongodump and mongorestore for backup. This approach does not scale well, as noted in the documentation:mongodump and mongorestore are simple and efficient tools for backing up and restoring small MongoDB deployments, but are not ideal for capturing backups of larger systems.It would be helpful if you can provide some more information on your environment:A mongodump operation requires reading all data to be dumped through the mongod process’ memory, so if your data has grown significantly beyond available RAM the process may become I/O bound. A mongodump backup can have a significant performance impact if your application is also actively trying to use the instance you are backing up.The mongorestore operation will load all data and rebuild all indexes. The time to rebuild indexes will also grow with your data set.If you want to improve both backup and restore times, I would recommend using an alternative supported backup method such as filesystem snapshots or an agent-based approach like MongoDB Cloud/Ops Manager. If you have monitoring in place, that may provide more insight into the resource limitations that currently impact your backup and restore procedures.Regards,\nStennie", "username": "Stennie_X" }, { "code": "mongodumpmongorestoremongodumpmongorestore", "text": "@Stennie_X - Please find the below response:-What type of deployment do you have (standalone, replica set, or sharded cluster)?\nSharded clusterWhat specific version of MongoDB server are you using?\n4.2.1 Enterprise edition for WindowsHow many GBs of RAM does the instance you are dumping from have?\n16 GBDo you have any monitoring in place for metrics like memory and I/O during your dump & restore procedures?\nI have Spotlight tool to monitor Windows performanceAre you dumping data from an instance that is actively being used by your application?\nThis is Dev environment and instance that is NOT actively being used by the application.What options are you using with mongodump and mongorestore ?\nmongodump --oplog --host server01 --port yyyyyy --out e:\\mongodb_backup\\shard04\nmongorestore --host server02 --port yyyyy --oplogReplay --dir=E:\\mongodb_backup\\shard04 --stopOnErrorAre you running mongodump and mongorestore local to the instance you are backing up (or restoring to), or over the network?\nlocal execution of commands within the server", "username": "Jitender_Dudhiyani" }, { "code": "", "text": "@Stennie_X - Your help is appreciated. Please help me.", "username": "Jitender_Dudhiyani" }, { "code": "mongodumpmongodmongorestoremongodump", "text": "Hi @Jitender_Dudhiyani,As noted earlier, mongodump is not the most efficient or scalable backup approach to use, as it requires all data to be read and dumped via the mongod process. 
A mongorestore has to recreate all data files and rebuild indexes, so will also take longer to restore than a backup approach such as filesystem snapshots.If your data to be backed up is significantly larger than RAM, the backup and restore time will increase with the growth in your data set.If you are backing up a sharded cluster, there are more moving parts to coordinate and mongodump is not a viable backup approach if you are also using sharded transactions in MongoDB 4.2+.I would recommend looking into alternative backup methods (filesystem snapshots or MongoDB Cloud/Ops Manager).4.2.1 Enterprise edition for WindowsAn aside not specifically related to backup: you should upgrade to the latest 4.2.x release (currently 4.2.8). Minor releases include bug fixes and stability improvements, and do not introduce any backward breaking changes. See Release Notes for MongoDB 4.2 for more details on issues fixed.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Also very disappointed in Mongo from a backup and recovery perspective. I have a .gz backup which is 30 GB. One of the collections in this is 400GB uncompressed. I worked out that the restore for just that collection will take +/- 40 hours.", "username": "John_Clark" } ]
Mongodump mongorestore slowness
2020-07-14T10:33:52.877Z
Mongodump mongorestore slowness
9,924
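A quick way to check Stennie's point that the dump becomes I/O bound once data outgrows memory is to compare data size against the WiredTiger cache from the mongo shell. This is only a rough diagnostic sketch; the database name is a placeholder and the cache field assumes a WiredTiger deployment:

```javascript
// Sizes reported in GB (scale argument).
const stats = db.getSiblingDB("yourDatabase").stats(1024 * 1024 * 1024);
print("dataSize (GB):   ", stats.dataSize);
print("storageSize (GB):", stats.storageSize);
print("indexSize (GB):  ", stats.indexSize);

const ss = db.serverStatus();
print("WiredTiger cache max (GB):",
      ss.wiredTiger.cache["maximum bytes configured"] / (1024 * 1024 * 1024));
// If dataSize is far larger than the cache, mongodump/mongorestore times will
// grow roughly with the data, which matches the slowdown described above.
```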
null
[]
[ { "code": "", "text": "Hi Team,For example : I have a 5 node cluster, we are planning to migrate our virtual machines to Physical servesOur plan is to add the new physical servers one by one to the existing cluster and make sure initial sync is completed on all physical servers.Once the sync completed on all physical servers, we are going to remove old servers one at a time and we will make new physical machines as Primary and secondary.Is this a good approach ? is there any issues we will face ?And we have observed local database having huge difference in size between old VMs and new physical servers - is this expected ?and also we have seen “me” collection in local database in vms, but in physical we did not see, do you know why this collection is missed in Physical and existed in old vms ?The “me” collection consists the respected server name .Regards\nManu", "username": "Mamatha_M" }, { "code": "", "text": "Hi @Mamatha_M,Yep, The presented approach is the preferred way.Please note that we recommend adding the new servers with 0 votes and 0 priority. This is since having 10 servers with 1 vote will make your voting number to 10 which is not odd as recommended.Once you want to promote new servers reconfigure the set with bumping new servers vote and priority and moving old to 0.What we recommend to do also is when you are ready to decommision the old servers consider first promoting one of the new servers to a primary (by raising temporary its priority) to check how well it behaves as Primary. Once you are confident in new servers remove the old ones.The local database is initially small as it mainly consists of the oplog which write replica set operations over time, therefore its initial size is small for new nodes.I think the me collection is historically there and its not used anymore, new nodes will not have it.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for the information Pavel, It was very useful.", "username": "Mamatha_M" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB migration from VMs to physical servers
2020-11-23T17:14:08.702Z
MongoDB migration from VMs to physical servers
2,034
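Pavel's advice in the migration thread above (add the new hosts with zero votes and priority, then promote them once synced) corresponds to a few shell commands. Host names are placeholders; run them on the current primary and keep the number of voting members odd, for example by dropping an old member's vote in a separate reconfiguration step.

```javascript
// Add a new physical host as a non-voting, non-electable member so the existing
// voting majority is unchanged while it performs its initial sync:
rs.add({ host: "new-physical-1:27017", votes: 0, priority: 0 });

// After initial sync completes, promote the new member. Recent server versions
// only allow one voting-membership change per reconfig, so promote and demote
// old members in separate steps.
var cfg = rs.conf();
cfg.members.forEach(function (m) {
  if (m.host === "new-physical-1:27017") { m.votes = 1; m.priority = 1; }
});
rs.reconfig(cfg);

// Once the new hardware has proven itself (including a stint as primary), retire an old VM:
rs.remove("old-vm-1:27017");
```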
null
[ "aggregation" ]
[ { "code": "$setUnion$setUnion[{ \"key1\": 1, \"key2\": \"R\" },\n{ \"key1\": 2, \"key2\": \"Q\" },\n{ \"key1\": 2, \"key2\": \"Q\" },\n{ \"key1\": 3, \"key2\": \"P\" },\n{ \"key1\": 3, \"key2\": \"P\" },\n{ \"key1\": 3, \"key2\": \"P\" },\n{ \"key1\": 4, \"key2\": \"O\" }]\nkey1key2key1db.collection.aggregate([\n {\n $group: {\n _id: null,\n root: { $push: { key1: \"$key1\", key2: \"$key2\" } }\n }\n },\n { $project: { root: { $setUnion: \"$root\" } } }\n])\nkey2key1key2db.collection.aggregate([\n {\n $group: {\n _id: null,\n root: { $push: { key2: \"$key2\", key1: \"$key1\" } }\n }\n },\n { $project: { root: { $setUnion: \"$root\" } } }\n])\n", "text": "The real work of $setUnion operator is to filters out duplicates in its result to output an array that contain only unique entries.I have workaround $setUnion operator orders element in ascending order on the base of first field of element, is that true?Sample documents:First query, key1 would be first and key2 would be second, means this will result in ascending order by key1 field,First query, key2 would be first and key1 would be second, means this will result in ascending order by key2 field,I looked at the MongoDB $setUnion Documentation, they have mentioned statement: The order of the elements in the output array is unspecified., but this query returns in exact order by first field,Please suggest or refer me if i am missing to refer any MongoDB documentation, and please if anyone confirm this is a feature then i will implement in my code.", "username": "turivishal" }, { "code": "", "text": "Hi @turivishal,Although the operator outputs the data according to your desired output it cannot guarantee this order will maintain , its only circumstantially.If you wish to guarantee the order of the output please use a $sort stage after the $setUnion to guarantee your order.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "$addToSet$push$group$addToSet$addToSetdb.collection.aggregate([\n { $sort: { key1: 1 } },\n {\n $group: {\n _id: null,\n root: { $addToSet: { key1: \"$key1\", key2: \"$key2\" } }\n }\n }\n])\n$unwind$groupdb.collection.aggregate([\n {\n $group: {\n _id: null,\n root: { $addToSet: { key1: \"$key1\", key2: \"$key2\" } }\n }\n },\n { $project: { root: { $setUnion: \"$root\" } } }\n]) \n$unwind$group", "text": "Thank you very much for your reply,Actually i am using $addToSet instead of $push in $group stage, but $addToSet unorder the array, my actual query is:I know the behavior of $addToSet operator, it will not guarantee the order of array,So to prevent this problem, I have used below query and It is working perfectly, and i don’t need to use $unwind and again $groupIf its save 2 stages $unwind and $group for sorting then its really good feature for us.I have tested many examples its working perfectly, if it is circumstantially then could you please share some example that is not circumstantially.", "username": "turivishal" }, { "code": "", "text": "Hi @turivishal, I can’t provide you with an example for which your code won’t work (I haven’t tried) but to ensure it’s always sorted you may want to consider using a custom aggregation expression. That way you can control the order. Note that this only works if you are using MongoDB 4.4.", "username": "Naomi_Pentrel" } ]
Does $setUnion expression operator order array elements in ascending order?
2020-11-23T19:36:56.511Z
Does $setUnion expression operator order array elements in ascending order?
2,778
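If the ordering of the $setUnion / $addToSet result needs to be guaranteed rather than circumstantial, the deduplicated array can be unwound, sorted, and regrouped, which is the explicit version of the $sort advice above. A runnable sketch against the same sample collection:

```javascript
db.collection.aggregate([
  { $group: { _id: null, root: { $addToSet: { key1: "$key1", key2: "$key2" } } } },
  // Neither $addToSet nor $setUnion guarantees order, so impose it explicitly:
  { $unwind: "$root" },
  { $sort: { "root.key1": 1 } },
  { $group: { _id: null, root: { $push: "$root" } } }
]);
// The $push order follows the preceding $sort, so root is now reliably
// ascending by key1 regardless of server version or chosen plan.
```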
null
[ "connecting" ]
[ { "code": "", "text": "Hello!\nI would like to run some tests using Travis CI. The tests are using mongo atlas cluster connection string.\nHow can Atlas cluster authorize connections from Travis?\nI do not want to use 0.0.0/0 . Any ideas?", "username": "Supriya_Bansal" }, { "code": "", "text": "Hi @Supriya_Bansal, I think Travis CI has a list of IP addresses they use here. If you whitelist just those that should work.", "username": "Naomi_Pentrel" } ]
Connect Atlas Cluster from Travis CI
2020-11-23T22:14:03.782Z
Connect Atlas Cluster from Travis CI
1,962
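Beyond whitelisting the Travis CI address ranges, the usual pattern is to keep the Atlas connection string out of the repository and read it from a CI environment variable in the test setup. A minimal Node.js sketch; the variable name and database are assumptions, not from the thread:

```javascript
// test-setup.js: connect with a connection string injected by the CI system.
const { MongoClient } = require("mongodb");

async function getTestDb() {
  const uri = process.env.MONGODB_URI; // defined as a secret env var in Travis CI
  if (!uri) throw new Error("MONGODB_URI is not set");
  const client = new MongoClient(uri, { useUnifiedTopology: true });
  await client.connect();
  return client.db("test");
}

module.exports = { getTestDb };
```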
https://www.mongodb.com/…5a71384b7e5c.png
[ "node-js", "next-js", "developer-hub" ]
[ { "code": ".env.local.examplesample_mflix.env.local.exampleMONGODB_URI=mongodb+srv://test:[email protected]/sample_mflix?retryWrites=true&w=majorityMONGODB_DB=sample_mflix", "text": "Hello,I’m following this small tutorial to learn Next.js and MongoDB in detail. However, I’m encountering some issues with the following in this tutorial. I’m trying to use the .env.local.example from the file to the mongodb.js. I rechecked both of my Atlas clusters and connect it to the Compass. The compass shows all of the data and I can see the sample_mflix fine. My only issue is that it doesn’t show through the landing page. Thanks for your help \nScreenshot from 2020-11-23 16-55-33956×474 33.3 KB\nthis is the content of .env.local.example:\nMONGODB_URI=mongodb+srv://test:[email protected]/sample_mflix?retryWrites=true&w=majorityMONGODB_DB=sample_mflix", "username": "Zeid_Tisnes" }, { "code": ".env.local.example.env.local.example.env.localnpm run dev", "text": "this is the content of .env.local.example :Welcome to the MongoDB community @Zeid_Tisnes!Per the tutorial, make sure you rename or copy .env.local.example to .env.local after adding your configuration. Try renaming the file and restarting your Next.js app with npm run dev.Regards,\nStennie", "username": "Stennie_X" }, { "code": ".env.localAuthentication failed", "text": "I did change the name to .env.local but still no luck with Authentication failed error message. Even though I have the same credentials. I’m a little bit lost and confused because it is literally what the tutorial shows.", "username": "Zeid_Tisnes" }, { "code": "mongodb+srv://test:[email protected]/sample_mflix?authSource=admin&replicaSet=atlas-ance8p-shard-0&w=majority&readPreference=primary&appname=MongoDB%20Compass&retryWrites=true&ssl=true", "text": "Ok, apparently the URI I had initially was not working and I had to change it to this instead:mongodb+srv://test:[email protected]/sample_mflix?authSource=admin&replicaSet=atlas-ance8p-shard-0&w=majority&readPreference=primary&appname=MongoDB%20Compass&retryWrites=true&ssl=true", "username": "Zeid_Tisnes" }, { "code": ".env.localAuthentication failed", "text": "I did change the name to .env.local but still no luck with Authentication failed error message.Hi @Zeid_Tisnes,That sounds like progress … the MONGO_URI is set but the credentials aren’t correct yet.There should be a line preceding the “ready” message indicating that the an env file is being read:Loaded env from /home/alone/with-mongodb/.env.localIf there is an obvious problem with the format of MONGO_URI there should be an error after the “event - compiled successfully” message similar to:MongoParseError: Invalid connection stringThe error you encountered suggests the connection string format is correct, but could not be used to successfully authenticate to your MongoDB deployment.Regards,\nStennie", "username": "Stennie_X" }, { "code": "mongodb+srv://user:[email protected]/sample_mflix", "text": "Hi @Zeid_Tisnes, I hope it’s working now? Just a quick note that you should change your user on the Atlas database you shared. The connection string you added in your solution gives away your username and password combo, giving me and everyone else reading this full access to your database. 
Given it’s only a practice application, this isn’t a huge problem but in general you will want to anonymize the connection string to something like mongodb+srv://user:[email protected]/sample_mflix.Cheers,\nNaomi", "username": "Naomi_Pentrel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MONGODB_URI not found following Next.js App tutorial
2020-11-24T00:14:51.256Z
MONGODB_URI not found following Next.js App tutorial
11,404
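For readers hitting the same errors in that Next.js tutorial, the failures split into two cases: the env file not being loaded at all, and the credentials in the URI being rejected. A hedged sketch of a connection helper that surfaces both cases more clearly (variable names follow the tutorial; the error text is illustrative):

```javascript
// lib/mongodb.js (sketch): fail fast with a clearer message when the env file
// is missing or the credentials are rejected.
import { MongoClient } from "mongodb";

export async function connectToDatabase() {
  const uri = process.env.MONGODB_URI;
  const dbName = process.env.MONGODB_DB;
  if (!uri || !dbName) {
    throw new Error("MONGODB_URI / MONGODB_DB not set. Rename .env.local.example to .env.local and restart the dev server.");
  }
  const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });
  try {
    await client.connect();
  } catch (err) {
    // "Authentication failed" usually means a wrong user/password pair, or special
    // characters in the password that must be percent-encoded in the URI.
    throw new Error("Could not connect to MongoDB: " + err.message);
  }
  return { client, db: client.db(dbName) };
}
```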
null
[]
[ { "code": "", "text": "The guide is asking you to use the command line but this won’t work since mongo isn’t installed locally.Please advise.", "username": "Ken_Mathieu_Beaudin" }, { "code": "mongo", "text": "Hi @Ken_Mathieu_Beaudin !at the end of the chapter there is an Interactive Developer Environment (IDE). This is bash shell with mongo and other bins already installed for you.To run it locally head over to the installation chapter in the docs.", "username": "santimir" }, { "code": "", "text": "I am not able to connect to my cluster using IDE environment, can any one help me?", "username": "Muzaffar_ali_53011" }, { "code": "", "text": "What issue you are facing\nPlease show us the screenshot or error details", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi,i am not able to connect to mogodb using shell .\nfollowing the same procedure as informed.\nthe IDE show error.\nattaching the screenshot .\nconnection string :mongo “mongodb+srv://sandbox.wr6ht.mongodb.net/” --username m001-student\nScreenshot (20)1920×1080 91.7 KB", "username": "aditya_rana" }, { "code": "", "text": "What error are you getting?\nI don’t see any error in your snapshot.It shows test result failed\nDid you run the command in correct area of IDE\nDid you hit enter after typing/pasting the connect string?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I am not able to connect to Atlas cluster. I am getting following erros.\nconnecting to: mongodb://sandbox-shard-00-02.ftsyv.mongodb.net.:27017,sandbox-shard-00-00.ftsyv.mongodb.net.:27017,sandbox-shard-00-01.ftsyv.mongodb.net.:27017/%3Cdbname%3E?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-exfvdg-shard-0&ssl=true\n2020-10-27T10:02:45.134+0000 I NETWORK [js] Starting new replica set monitor for atlas-exfvdg-shard-0/sandbox-shard-00-02.ftsyv.mongodb.net.:27017,sandbox-shard-00-00.ftsyv.mongodb.net.:27017,sandbox-shard-00-01.ftsyv.mongodb.net.:27017\n2020-10-27T10:02:45.771+0000 I NETWORK [js] Successfully connected to sandbox-shard-00-01.ftsyv.mongodb.net.:27017 (1 connections now open to sandbox-shard-00-01.ftsyv.mongodb.net.:27017 with a 5 second timeout)\n2020-10-27T10:02:45.771+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to sandbox-shard-00-00.ftsyv.mongodb.net.:27017 (1 connections now open to sandbox-shard-00-00.ftsyv.mongodb.net.:27017 with a 5 second timeout)\n2020-10-27T10:02:45.959+0000 I NETWORK [js] changing hosts to atlas-exfvdg-shard-0/sandbox-shard-00-00.ftsyv.mongodb.net:27017,sandbox-shard-00-01.ftsyv.mongodb.net:27017,sandbox-shard-00-02.ftsyv.mongodb.net:27017 from atlas-exfvdg-shard-0/sandbox-shard-00-00.ftsyv.mongodb.net.:27017,sandbox-shard-00-01.ftsyv.mongodb.net.:27017,sandbox-shard-00-02.ftsyv.mongodb.net.:27017\n2020-10-27T10:02:46.535+0000 I NETWORK [js] Successfully connected to sandbox-shard-00-01.ftsyv.mongodb.net:27017 (1 connections now open to sandbox-shard-00-01.ftsyv.mongodb.net:27017 with a 5 second timeout)\n2020-10-27T10:02:46.538+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to sandbox-shard-00-00.ftsyv.mongodb.net:27017 (1 connections now open to sandbox-shard-00-00.ftsyv.mongodb.net:27017 with a 5 second timeout)\n2020-10-27T10:02:47.363+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to sandbox-shard-00-02.ftsyv.mongodb.net:27017 (1 connections now open to sandbox-shard-00-02.ftsyv.mongodb.net:27017 with a 5 second timeout)\n2020-10-27T10:02:47.728+0000 I NETWORK [js] Marking host 
sandbox-shard-00-01.ftsyv.mongodb.net:27017 as failed :: caused by :: Location40659: can’t connect to new replica set master [sandbox-shard-00-01.ftsyv.mongodb.net:27017], err: Location8000: bad auth : Authentication failed.\n2020-10-27T10:02:48.104+0000 I NETWORK [js] Marking host sandbox-shard-00-02.ftsyv.mongodb.net:27017 as failed :: caused by :: SocketException: can’t authenticate against replica set node sandbox-shard-00-02.ftsyv.mongodb.net:27017 :: caused by :: socket exception [CONNECT_ERROR] server [sandbox-shard-00-02.ftsyv.mongodb.net:27017] connection pool error: network error while attempting to run command ‘isMaster’ on host ‘sandbox-shard-00-02.ftsyv.mongodb.net:27017’\n2020-10-27T10:02:48.478+0000 I NETWORK [js] Marking host sandbox-shard-00-00.ftsyv.mongodb.net:27017 as failed :: caused by :: SocketException: can’t authenticate against replica set node sandbox-shard-00-00.ftsyv.mongodb.net:27017 :: caused by :: socket exception [CONNECT_ERROR] server [sandbox-shard-00-00.ftsyv.mongodb.net:27017] connection pool error: network error while attempting to run command ‘isMaster’ on host ‘sandbox-shard-00-00.ftsyv.mongodb.net:27017’\n2020-10-27T10:02:49.612+0000 I NETWORK [js] Marking host sandbox-shard-00-01.ftsyv.mongodb.net:27017 as failed :: caused by :: Location40659: can’t connect to new replica set master [sandbox-shard-00-01.ftsyv.mongodb.net:27017], err: Location8000: bad auth : Authentication failed.\n2020-10-27T10:02:49.613+0000 E QUERY [js] Error: can’t authenticate against replica set node sandbox-shard-00-01.ftsyv.mongodb.net:27017 :: caused by :: can’t connect to new replica set master [sandbox-shard-00-01.ftsyv.mongodb.net:27017], err: Location8000: bad auth : Authentication failed. :\nconnect@src/mongo/shell/mongo.js:328:13\n@(connect):1:6\nexception: connect failed", "username": "Sandeep_41860" }, { "code": "", "text": "Hi follow the step to sucess the exercice mongo university step1398×703 57.4 KB", "username": "Jean-Claude_ADIBA" }, { "code": "", "text": "bad authentication means wrong combination of userid/pwd\nWhat did you give as password?\nMay be some invalid character or space got introduced while pasting the password at the time of creating your sandbox cluster", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Muzaffar_ali_53011,I am not able to connect to my cluster using IDE environment, can any one help me?Please share the information requested by @Ramachandra_37567 if you are still facing any issue.What issue you are facing\nPlease show us the screenshot or error details", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi\nI am facing issue while connection to mongoshell.\nError which I facing,I pasted below. 
please let me know where I need to improve the command.bash-4.4# mongo \"mongodb+srv://sandbox.ofcuo.mongodb.net/\nMongoDB shell version v4.0.5\nEnter password:\nconnecting to: mongodb://sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017/%3Cdbname%3E?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-2y4u8j-shard-0&ssl=true\n2020-11-01T05:41:19.699+0000 I NETWORK [js] Starting new replica set monitor for atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017\n2020-11-01T05:41:19.865+0000 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:19.865+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.\n2020-11-01T05:41:20.406+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:20.406+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 2 checks in a row.\n2020-11-01T05:41:20.947+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:20.947+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 3 checks in a row.\n2020-11-01T05:41:21.489+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:21.489+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 4 checks in a row.\n2020-11-01T05:41:22.031+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:22.031+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 5 checks in a row.\n2020-11-01T05:41:22.572+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:22.572+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 6 checks in a row.\n2020-11-01T05:41:23.112+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:23.112+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 7 checks in a row.\n2020-11-01T05:41:23.653+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:23.653+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 8 checks in a row.\n2020-11-01T05:41:24.195+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:24.195+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. 
This has happened for 9 checks in a row.\n2020-11-01T05:41:24.737+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:24.737+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 10 checks in a row.\n2020-11-01T05:41:25.277+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:25.278+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 11 checks in a row.\n2020-11-01T05:41:25.819+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:26.359+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:26.900+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:27.448+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:27.989+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:28.530+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:29.071+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:29.612+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:30.153+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:30.696+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:30.696+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 21 checks in a row.\n2020-11-01T05:41:31.238+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:31.778+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:32.319+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:32.860+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:33.400+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:33.941+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:34.481+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:34.482+0000 E QUERY [js] Error: connect failed to replica set atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017 :\nconnect@src/mongo/shell/mongo.js:328:13\n@(connect):1:6\nexception: connect failed\nbash-4.4# mongo \"mongodb+srv://sandbox.ofcuo.mongodb.net/\nMongoDB shell version v4.0.5\nEnter password:\nconnecting to: mongodb://sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017/%3Cdbname%3E?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-2y4u8j-shard-0&ssl=true\n2020-11-01T05:43:40.571+0000 I NETWORK [js] Starting new replica set monitor for atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017\n2020-11-01T05:43:40.755+0000 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set 
atlas-2y4u8j-shard-0\n2020-11-01T05:43:40.755+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.\n2020-11-01T05:43:41.296+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:41.296+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 2 checks in a row.\n2020-11-01T05:43:41.836+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:41.836+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 3 checks in a row.\n2020-11-01T05:43:42.385+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:42.385+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 4 checks in a row.\n2020-11-01T05:43:42.927+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:42.927+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 5 checks in a row.\n2020-11-01T05:43:43.468+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:43.468+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 6 checks in a row.\n2020-11-01T05:43:44.010+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:44.010+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 7 checks in a row.\n2020-11-01T05:43:44.551+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:44.551+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 8 checks in a row.\n2020-11-01T05:43:45.091+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:45.091+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 9 checks in a row.\n2020-11-01T05:43:45.633+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:45.633+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 10 checks in a row.\n2020-11-01T05:43:46.173+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:46.173+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. 
This has happened for 11 checks in a row.\n2020-11-01T05:43:46.720+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:47.264+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:47.805+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:48.346+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:48.886+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:49.427+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:49.968+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:50.508+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:51.049+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:51.590+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:51.590+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 21 checks in a row.\n2020-11-01T05:43:52.131+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:52.671+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:53.212+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:53.753+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:54.293+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:54.834+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:55.375+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:55.375+0000 E QUERY [js] Error: connect failed to replica set atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017 :\nconnect@src/mongo/shell/mongo.js:328:13\n@(connect):1:6\nexception: connect failed", "username": "jay_bhosale" }, { "code": "", "text": "Is your cluster up and running?\nPlease check status in Atlas.Any errors?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Most likely you forgot to whitelist your IP address.", "username": "steevej" }, { "code": "", "text": "Yess,Its up & running.\nI didn’t see any wrong with configuration.", "username": "jay_bhosale" }, { "code": "", "text": "I added IP (My-Machine) in network access.\nIts seem successfully added without any error.but while i trigger connection command trough console its throwing me error.", "username": "jay_bhosale" }, { "code": "", "text": "3 posts were split to a new topic: Not able to connect to cluster through IDE", "username": "Shubham_Ranjan" }, { "code": "whitelistIPs0.0.0.0", "text": "Hi @jay_bhosale,Can you try to whitelist all the IPs by selecting 0.0.0.0 option?Please take a look at this post for more information.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "I can’t paste the command to terminal. the terminal is not responding. what should I do?", "username": "Binti_Solihah" }, { "code": "", "text": "Please show us the screenshot\nMay be you pasted it in wrong area?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Press the + button.", "username": "santimir" } ]
Lab: Connect to your Atlas Cluster
2020-10-24T10:47:28.869Z
Lab: Connect to your Atlas Cluster
4,669
https://www.mongodb.com/…3_2_723x1024.png
[ "app-services-user-auth" ]
[ { "code": "OAuth2 configuration consists of only clientId, clientSecret, and openId", "text": "I haven’t changed any configuration in my Realm app but received emails from users who’re no longer able to sign in. I’m seeing the following in Sentry OAuth2 configuration consists of only clientId, clientSecret, and openIdIs there anything I can do to fix this?image1046×1480 154 KB", "username": "Theo_Miles" }, { "code": "", "text": "Apple Sign In no longer working · Issue #6954 · realm/realm-swift · GitHub I can verify it’s happening to others here.", "username": "Theo_Miles" }, { "code": "", "text": "Hi @Theo_Miles I can reproduce the issue and we are looking into it now.", "username": "Lee_Maguire" }, { "code": "", "text": "@Lee_Maguire Thanks very much for the quick response ", "username": "Theo_Miles" }, { "code": "", "text": "@Theo_Miles Sign in with Apple is up and running again.", "username": "Lee_Maguire" }, { "code": "", "text": "Thanks, working for me!", "username": "Theo_Miles" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Sign In With Apple error
2020-11-22T09:45:05.878Z
Sign In With Apple error
2,498
null
[]
[ { "code": "", "text": "Hello Community,\nI am a junior developer getting started with an open-source contribution. I am looking forward to contributing to an active project with MongoDB. I am comfortable with any stack technologies. If you have one left vacancy in your team, you won’t regret taking me into it ", "username": "Binay_Agarwal" }, { "code": "", "text": "Hi @Binay_Agarwal! The O-FISH project is an open source project that uses MongoDB that is actively seeking community (volunteer) code contributions.You can see contribution guideilnes and how to get started at Contributing to O-FISH - O-FISH DocsIf you have questions about a particular open issue, let me know! I’m happy to try to match you up with an issue.", "username": "Sheeri_Cabral" }, { "code": "", "text": "Thanks for your reply @Sheeri_Cabral. I have started working on this project!!", "username": "Binay_Agarwal" }, { "code": "", "text": "Hi @Binay_Agarwal!The GrandNode project is an open-source e-commerce platform, based on .NET Core and MongoDB. We are actively looking for an active community to code contributions.You can check the GitHub page of our project here: https://github.com/grandnode/grandnodeIf you have any doubts or questions, feel free to ask!Best!", "username": "Patryk_Porabik" } ]
Getting Started with open-source projects
2020-11-17T19:31:02.593Z
Getting Started with open-source projects
3,770
null
[]
[ { "code": "", "text": "Hello everyoneI am taking M01 Basic Mongodb and in chapter4 there is one exercise asking for this:\n“Find all documents in the companies collection where people named Mark used to be in the senior company leadership array, a.k.a the relationships array, but are no longer with the company.”So I create this query:db.companies.find({“relationships.person.first_name”:“Mark”, “relationships.is_past”:true }).count()which returns 448 documentsHowever I think this other query may work:db.companies.find({ “relationships”: { “$elemMatch”: { “is_past”: true, “person.first_name”: “Mark” } } } ).count()This returns 256 documentsDoes anyone may explain me why they return different counts?Regards", "username": "Carlos_Hidalgo" }, { "code": "{ _id : 1 , array : [ { a : 1 , b : 2 } , { a : 3 , b : 3 } ] }\n{ _id : 2 , array : [ { a : 1 , b : 3 } , { a : 3 , b : 4 } ] }\n", "text": "First, you might have more success with the MongoDB university forum for the course specific questions.The best way to see why counts are different is to use Compass with the schema analysis to see which documents are selected by the query.In the first case the query find documents an element of the array with first_name Mark and a second one with is_past true not necessarily the same element. While the second one, both conditions must be true for the same element.Example:find( { array.a : 1 , array.b : 3 } ) will find both documents.\nfind( { array : { $elemMatch : { a : 1 , b : 3 } } } ) will only find _id:2.", "username": "steevej" } ]
Difference between $elemMatch and by querying with dot notation
2020-11-23T23:37:41.288Z
Difference between $elemMatch and by querying with dot notation
4,458
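To make the distinction in the thread above concrete, here is a small PyMongo sketch (the connection string, database and collection names are placeholders, not part of the original discussion). With dot notation the two conditions may be satisfied by different elements of the array; with $elemMatch a single element must satisfy both:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # placeholder deployment
    coll = client.test.elemmatch_demo

    coll.delete_many({})
    coll.insert_many([
        # one element satisfies both conditions
        {"_id": 1, "relationships": [{"person": {"first_name": "Mark"}, "is_past": True}]},
        # "Mark" and is_past: true live in different elements
        {"_id": 2, "relationships": [
            {"person": {"first_name": "Mark"}, "is_past": False},
            {"person": {"first_name": "Anna"}, "is_past": True},
        ]},
    ])

    dot = coll.count_documents({
        "relationships.person.first_name": "Mark",
        "relationships.is_past": True,
    })
    elem = coll.count_documents({
        "relationships": {"$elemMatch": {"person.first_name": "Mark", "is_past": True}},
    })
    print(dot, elem)   # prints "2 1": dot notation matches both documents, $elemMatch only the first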
null
[]
[ { "code": "", "text": "The edit icon is not showing in username, i can not able to edit my username, is there any specific reason or require any criteria for edit my username?I just found this topic filtering out your community profile, there is a edit icon is username field, but i can not see in my profile.", "username": "turivishal" }, { "code": "", "text": "You can’t @turivishal , but moderators normally can", "username": "santimir" }, { "code": "", "text": "Hi Vishal,I just checked into this site setting and it looks like the ability to change username is currently limited to changing within your first 3 days after registration, without assistance from the moderation team.I suspect this is to avoid breaking references in existing conversations that @-mention a previous username. I’ll discuss the setting further with the moderation team.In the interim, I’ll also send you a direct message to follow up on your request.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
How to change username, I can't see any edit icon in Preferences > Account section?
2020-11-23T20:36:41.063Z
How to change username, I can’t see any edit icon in Preferences > Account section?
3,936
null
[ "python" ]
[ { "code": "def deleteDups(datab):\n col = db[datab]\n pipeline = [\n {'$group': {\n '_id': {\n 'CASE NUMBER': '$CASE NUMBER',\n 'JURISDICTION': '$JURISDICTION'},#needs to be case insensitive\n 'count': {'$sum': 1},\n 'ids': {'$push': '$_id'}\n }\n },\n {'$match': {'count': {'$gt': 1}}},\n ]\n results = col.aggregate(pipeline, allowDiskUse = True)\n count = 0\n for result in results:\n doc_count = 0\n print(result)\n it = iter(result['ids'])\n next(it)\n for id in it:\n deleted = col.delete_one({'_id': id})\n count += 1\n doc_count += 1\n #print(\"API call recieved:\", deleted.acknowledged) debug, is the database recieving requests\n\n print(\"Total documents deleted:\", count)\nresults = col.aggregate(pipeline, allowDiskUse = True)File \"C:\\Users\\Laura\\Documents\\GitHub\\Ant\\controller.py\", line 202, in deleteDups\n results = col.aggregate(pipeline, allowDiskUse = True)\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\collection.py\", line 2375, in aggregate\n return self._aggregate(_CollectionAggregationCommand,\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\collection.py\", line 2297, in _aggregate\n return self.__database.client._retryable_read(\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\mongo_client.py\", line 1464, in _retryable_read\n return func(session, server, sock_info, slave_ok)\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\aggregation.py\", line 136, in get_cursor\n result = sock_info.command(\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\pool.py\", line 603, in command\n return command(self.sock, dbname, spec, slave_ok,\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\network.py\", line 165, in command\n helpers._check_command_response(\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\helpers.py\", line 159, in _check_command_response\n raise OperationFailure(msg % errmsg, code, response)\npymongo.errors.OperationFailure: Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in.\n", "text": "I’m trying to delete duplicates from my database. It’s just gotten past the 500k documents mark, so it complains that the aggregate return is too big and I need to allow disk use. So I do. But… nothing happens. Same error.Errors out on results = col.aggregate(pipeline, allowDiskUse = True):I can’t even right now. I’m expecting this to be something dumb, but for the life of me I can’t see it. Thank you.", "username": "ladylaurel18_N_A" }, { "code": ">>> pymongo.__version__\n'3.11.1'\n>>> client.server_info()['version']\n'4.4.1'\n def test_allowDiskUse(self):\n coll = self.client.test.test_allowDiskUse\n if coll.count_documents({}) < 100:\n str_1mb = 's' * 1024 * 1024\n coll.insert_many([{'s': str_1mb, 'i': i} for i in range(101)])\n large_pipeline = [{'$group': {'_id': '$i', 's': {'$addToSet': '$s'}}}]\n with self.assertRaisesRegex(OperationFailure, 'Exceeded memory limit'):\n list(coll.aggregate(large_pipeline))\n # Passes with allowDiskUse\n list(coll.aggregate(large_pipeline, allowDiskUse=True))\n", "text": "Hi @ladylaurel18_N_A, thanks for reproing this issue. 
Can you please provide:PyMongo 3.11.1 with MongoDB 4.4.1 works correctly as evidenced by this test:", "username": "Shane" }, { "code": " File \"C:\\Users\\Laura\\Documents\\GitHub\\Ant\\controller.py\", line 204, in deleteDups\n results = col.aggregate(pipeline, allowDiskUse = True)\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\collection.py\", line 2453, in aggregate\n return self._aggregate(_CollectionAggregationCommand,\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\collection.py\", line 2375, in _aggregate\n return self.__database.client._retryable_read(\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\mongo_client.py\", line 1471, in _retryable_read\n return func(session, server, sock_info, slave_ok)\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\aggregation.py\", line 136, in get_cursor\n result = sock_info.command(\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\pool.py\", line 683, in command\n return command(self, dbname, spec, slave_ok,\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\network.py\", line 159, in command\n helpers._check_command_response(\n File \"C:\\Python38\\lib\\site-packages\\pymongo\\helpers.py\", line 160, in _check_command_response\n raise OperationFailure(errmsg, code, response, max_wire_version)\npymongo.errors.OperationFailure: Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in., full error: {'operationTime': Timestamp(1606167414, 43), 'ok': 0.0, 'errmsg': \"Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in.\", 'code': 16945, 'codeName': 'Location16945', '$clusterTime': {'clusterTime': Timestamp(1606167414, 43), 'signature': {'hash': b'\\x83\\xf9!~\\x9a\\xd1\\xe6\\xab\\xe1\\xef\\xd8v\\x9a\\xb4\\xe7\\xe0\\xe0\\x96\\x80\\xd5', 'keyId': 6841937452907626500}}}\n", "text": "PyMongo 3.10.1, and the free Atlas M0. It says version 4.2.10. I updated everything, which did indeed break Python - apparently NumPy is having a bad couple months - rolled back NumPy, and successfully ran it on PyMongo 3.11.1, with this slightly more informative error:I didn’t change the code.", "username": "ladylaurel18_N_A" }, { "code": "", "text": "PyMongo 3.10.1The server is a (so far) free Atlas M0, cluster 0 being version 4.2.10I wonder if I update to PyMongo 3.11.1, will it break everything or fix my problem? ", "username": "ladylaurel18_N_A" }, { "code": "M0allowDiskUse", "text": "You can and should upgrade to pymongo 3.11 (it’s also compatible with MongoDB 4.2) however that won’t fix this issue. Unfortunately, you are running into a documented limitation of Atlas M0 (Free Tier):Atlas M0 Free Tier clusters do not support the allowDiskUse option for the aggregation command or its helper method.AFAIK you’ll need to rework your query to use less than 100MB or upgrade to a paid cluster.", "username": "Shane" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Pymongo ignoring allowDiskUse = True?
2020-11-23T19:37:53.968Z
Pymongo ignoring allowDiskUse = True?
11,730
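As a sketch of the same deduplication pass with allowDiskUse actually honoured, assuming a deployment that supports the option (self-hosted or Atlas M10 and above); the connection string, collection and field names are illustrative:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # not an Atlas shared-tier cluster
    coll = client.test.cases

    pipeline = [
        {"$group": {
            "_id": {"case": "$CASE NUMBER", "jurisdiction": "$JURISDICTION"},
            "ids": {"$push": "$_id"},
            "count": {"$sum": 1},
        }},
        {"$match": {"count": {"$gt": 1}}},
    ]

    # allowDiskUse lets $group spill to disk past the 100 MB per-stage limit;
    # Atlas M0/M2/M5 clusters reject the option, which is the error quoted above.
    for group in coll.aggregate(pipeline, allowDiskUse=True):
        coll.delete_many({"_id": {"$in": group["ids"][1:]}})   # keep the first copy of each duplicate group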
null
[ "golang" ]
[ { "code": "", "text": "Does Golang support Delete with Limit in addition to a filter? We need to run a periodic cleanup, and we need it to be quick, so do not want to delete more than 1000 documents at a time. So wanted to do something like this https://docs.mongodb.com/manual/reference/command/delete/#dbcmd.delete , but looking at the delete many, it does not seem to support the limit. Will you be adding this support?", "username": "parvathi_nair" }, { "code": "bulk.find({_id : xxxxx}).remove()", "text": "Hi @parvathi_nair,The best way to perform this operation is by fetching the needed for cleanup _id with a query using a limit and than pass it to a bulk delete bulk.find({_id : xxxxx}).remove():Than execute the batch in 1000 chunks.Please note that a better cleanup maybe using ttl index on a date field:Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "deleteDeleteOneDeleteMany", "text": "Also, the server-side delete command doesn’t actually support an arbitrary limit. Per https://docs.mongodb.com/manual/reference/command/delete/#deletes-array-limit, the limit can be 1 to delete no more than one document and 0 to delete all documents that match the filter. These values map to the DeleteOne and DeleteMany functions in the Go driver, respectively.– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "Thanks for the reply. I noticed the limit as 0 or 1 after I posted the message. I ended up using the id range to delete in batches.", "username": "parvathi_nair" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Delete Many with Limit
2020-11-19T18:17:48.296Z
Delete Many with Limit
14,824
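The thread above concerns the Go driver, but the suggested pattern (fetch a limited batch of _id values, then remove them in one call) looks roughly like this in PyMongo; the collection name and filter are placeholders:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    coll = client.test.events
    BATCH_SIZE = 1000

    while True:
        # project only _id so each batch stays small
        ids = [d["_id"] for d in coll.find({"processed": True}, {"_id": 1}).limit(BATCH_SIZE)]
        if not ids:
            break
        result = coll.delete_many({"_id": {"$in": ids}})
        print("deleted", result.deleted_count)

As noted in the replies, a TTL index on a date field removes the need for a periodic job entirely when the retention rule is purely time-based.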
null
[ "installation" ]
[ { "code": "\t# get MongoDB\n\twget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | apt-key add -\n\techo \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse\" | tee /etc/apt/sources.list.d/mongodb-org-4.2.list\n\tapt update\n\tapt-get install -y mongodb-org\n apt-get install -y systemd\n\ttouch /etc/init.d/mongod\n\tapt-get install -y gnupg\n\twget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | apt-key add -\n\techo \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse\" | tee /etc/apt/sources.list.d/mongodb-org-4.4.list\n\tapt-get update\n\tapt-get install -y mongodb-org\n+ apt-get install -y mongodb-org\nReading package lists... Done\nBuilding dependency tree \nReading state information... Done\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n mongodb-org : Depends: mongodb-org-shell but it is not going to be installed\n Depends: mongodb-org-server but it is not going to be installed\n Depends: mongodb-org-mongos but it is not going to be installed\nE: Unable to correct problems, you have held broken packages.\nABORT: Aborting with RETVAL=255\n", "text": "I’m asking this in parallel on Problem installing MongoDB (in container) - Stack Overflow.So, I have a singularity container recipe in which MongoDB is installed in an Ubuntu environment, which worked just fine in the last months. Now suddenly it doesn’t, without me having changed the instructions at that part.My original lines wereAfter having tried out several possible fixes on my own, my current lines are:The error message is:", "username": "Ksortakh_Kraxthar" }, { "code": "mongodb-org", "text": "I solved my problem by using a higher Ubuntu version in the setup of my container, went from 18.04 to 20.04.My assumption is that the mongodb-org package was recently updated to something that is not compatible with my old environment anymore. I’m now considering limiting the version of the package somehow in order to reduce the required maintenance.", "username": "Ksortakh_Kraxthar" } ]
Problems installing MongoDB in a container
2020-11-23T12:38:16.188Z
Problems installing MongoDB in a container
2,998
null
[ "sharding" ]
[ { "code": "", "text": "Hey All,I have a requirement to build an active-active environment in MongoDB. Please confirm if this is possible with MongoDB. Primary will be hosted on each data center and will have bi-directional replication.", "username": "Rajavignesh_Paranira" }, { "code": "", "text": "Hi @Rajavignesh_Paranira,It’s not officially supported by MongoDB but there are some alternatives and solutions.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you Maxime for your input.Mongo Mirror is supported only for MongoDB Atlas.For active-active configuration, to go with Partiotined (Sharded) Database and would like to know how the data is being replicated from one data center to another data center within the Sharding.", "username": "Rajavignesh_Paranira" }, { "code": "", "text": "Sharding != Replication.Replication is based on the Replica Set architecture. Usually one Primary and 2 Secondaries which replicate the write operations happening on the Primary. See https://docs.mongodb.com/manual/replication/#replication-in-mongodb.Sharding allows you to split your entire data set and distribute it across multiple Replica Sets (=shards), so each shard is responsible for only 1/N th of the data (N being the number of shards you are running).The number of primaries and secondaries per shard is at your discretion and of course you usually put the servers keeping the same set of data in different data centers. It doesn’t change the fact that one primary responsible for 1/N th of the data is only in one place at a given time.See https://docs.mongodb.com/manual/sharding/#sharded-cluster.Usually you don’t shard before 1 or 2 TB. But there could be other motivations for sharding. MongoDB Atlas is capable of deploying such a complexe infrastructure and configure it automatically.", "username": "MaBeuLux88" } ]
Active Active Sharding
2020-11-19T06:43:00.585Z
Active Active Sharding
4,117
null
[]
[ { "code": "", "text": "Hello,i’m looking on the princing of a cluster and i’m seeing 0,56$/hour for the M30.\nIf i understood correctly it’s around 400$ a month for 3000 simultaneous user connections ?Is Realm not suited for startups? I found this too much expensive.\nif i may compare to firestore, for around 300$ we can have 100 000 users daily.Is there something i missed?Thanks to all answers.", "username": "Nabil_Ben" }, { "code": "", "text": "Hi @Nabil_Ben,I think you are confusing the amount of connection pool database connections vs amount of concurrent realm requests.The M30 does allow 3000 database connections but it does not block realm users from running an unlimited amount of concurrent requests against the realm application.Realm smartly utelize connection pooling to the Atlas database therefore should never reach this limitation while still allowing your application users running 10s of thousands of requests(in min) as long as they don’t saturrate other database resources… Eventually 3000 concurrent database connection should sustain millions of daily requests.Realm is built for startups and allow you a scalable backend which can run on any atlas tier including free clusters.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "connection pool databaseThanks i’m a junior developer. It’s kind new.\nWe are two developer working on a project and we saw 37 connections. We use Realm with graphQL for a web application.So it has nothing to do with user currently logged in besides the requests?\nIs there a doc or something to understand those connections.For 1 million requests on realm, it will start about how much connections?\nExample of use like we find on google docs would be great.", "username": "Nabil_Ben" }, { "code": "", "text": "Hi @Nabil_Ben,The beauty of realm is you don’t need to care about database connections , we optimized ot for you. What you need to examine is your response time and tune your queries/indexes and cluster size to follow your requirements.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "Realm.open()Realm.open()new Realm()", "text": "Hello, I have kind of similar question.\nNow I use only M0, where the limit is 500 connections.\nIf in my app I call Realm.open() does it mean that new connection is being opened?\nAnd what is a difference between Realm.open() and new Realm() in this context?", "username": "Lukasz_Stachurski" } ]
I have a real concern about pricing
2020-10-31T19:12:41.751Z
I have a real concern about pricing
2,782
null
[ "java", "performance" ]
[ { "code": "var bulk=db.quote_date.initializeOrderedBulkOp(); \n\nbulk.find({ric:\"LCOJ1-V1\",utc_date:ISODate(\"2020-08-13T00:00:00.000000000Z\"),nquotes:{$lt:1000}}).upsert().updateOne(\n{\n\t$push:{quotes:{utc_date_time:NumberInt(\"1597337417\"),nano_seconds:NumberInt(784325554),ask_price:-0.98,ask_size:NumberInt(4),bid_price:-1.0,bid_size:NumberInt(4),qualifiers:[\"[BID_TONE]\",\"[ASK_TONE]\"]}},\n\t$setOnInsert:{exch_utc_offset:NumberInt(1)},\n\t$min:{quote_first:NumberLong(\"1597337417784\")},\n\t$max:{quote_last:NumberLong(\"1597337417784\")},\n\t$inc :{nquotes:NumberInt(1)}\n});\n\nbulk.find({ric:\"LCOJ1-V1\",utc_date:ISODate(\"2020-08-13T00:00:00.000000000Z\"),nquotes:{$lt:1000}}).upsert().updateOne(\n{\n\t$push:{quotes:{utc_date_time:NumberInt(\"1597337606\"),nano_seconds:NumberInt(436207836),ask_price:-0.97,ask_size:NumberInt(4),bid_price:-1.0,bid_size:NumberInt(4)}},\n\t$setOnInsert:{exch_utc_offset:NumberInt(1)},\n\t$min:{quote_first:NumberLong(\"1597337606436\")},\n\t$max:{quote_last:NumberLong(\"1597337606436\")},\n\t$inc :{nquotes:NumberInt(1)}\n});\n\nbulk.find({ric:\"LCOJ1-V1\",utc_date:ISODate(\"2020-08-13T00:00:00.000000000Z\"),nquotes:{$lt:1000}}).upsert().updateOne(\n{\n\t$push:{quotes:{utc_date_time:NumberInt(\"1597337635\"),nano_seconds:NumberInt(967713742),ask_price:-0.98,ask_size:NumberInt(4),bid_price:-1.0,bid_size:NumberInt(4)}},\n\t$setOnInsert:{exch_utc_offset:NumberInt(1)},\n\t$min:{quote_first:NumberLong(\"1597337635967\")},\n\t$max:{quote_last:NumberLong(\"1597337635967\")},\n\t$inc :{nquotes:NumberInt(1)}\n});\n\nbulk.find({ric:\"LCOJ1-V1\",utc_date:ISODate(\"2020-08-13T00:00:00.000000000Z\"),nquotes:{$lt:1000}}).upsert().updateOne(\n{\n\t$push:{quotes:{utc_date_time:NumberInt(\"1597337940\"),nano_seconds:NumberInt(812651241),ask_price:-0.98,ask_size:NumberInt(4),qualifiers:[\"[BID_TONE]\"]}},\n\t$setOnInsert:{exch_utc_offset:NumberInt(1)},\n\t$min:{quote_first:NumberLong(\"1597337940812\")},\n\t$max:{quote_last:NumberLong(\"1597337940812\")},\n\t$inc :{nquotes:NumberInt(1)}\n});\n\nbulk.find({ric:\"LCOJ1-V1\",utc_date:ISODate(\"2020-08-13T00:00:00.000000000Z\"),nquotes:{$lt:1000}}).upsert().updateOne(\n{\n\t$push:{quotes:{utc_date_time:NumberInt(\"1597337940\"),nano_seconds:NumberInt(832562892),ask_price:-0.99,ask_size:NumberInt(4)}},\n\t$setOnInsert:{exch_utc_offset:NumberInt(1)},\n\t$min:{quote_first:NumberLong(\"1597337940832\")},\n\t$max:{quote_last:NumberLong(\"1597337940832\")},\n\t$inc :{nquotes:NumberInt(1)}\n});\n\nbulk.execute({w:0});\n", "text": "HiWe are processing historical quote data from files (millions of rows). I have followed fixed size bucketing (1000 quotes per document) for time series data as per scenario # 3 in MongoDB's New Time Series Collections | MongoDBso to push data to mongodb I am sending 10000 transactions (bulk.find().upsert().updateone()) between bulk initialization (tried both ordered and non ordered) and bulk.execute() (tried both with write and no write concern), but somehow performance is significant slow (it is pushing 1000 quotes per second)here is example of generated query which is being pushed to mongodb through java (for clarity I have put 5 transactions here instead of 10000). Any suggestion why it is significant slow?", "username": "Dhruvesh_Patel" }, { "code": "", "text": "Hi @Dhruvesh_Patel,What is the version of MongoDB?For replica set bulk updates do not use w:0 to speedup write. I would even recommend stick with w: majority. 
The w:0 is probably causing replicas to lag resulting in worse performance.Additionally, make sure that the criteria of the update is compound indexed on its fields.MongoDB creates bulks of 1000 operations even if you specify a higher number .Why are all the updates repeating?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "HiThank you for contacting me on this issueSee the following commentsPlease let me know if you need any additional information.Thanks", "username": "Dhruvesh_Patel" }, { "code": "", "text": "Hi @Dhruvesh_Patel,One additional question does the order matter or you can consider initialising unordered bulk update?How much data is in qoute_date collection? Can you partition the data into several collections with naming convention? (Weekly or monthly collections).4.0.2 is an old version , I would use 4.0.20 or 4.2.10 as they have many performance improvement!Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I have tried both Ordered vs Non Ordered bulk operation and it same performance issue. We will soon try in production which has version 4.2.3 so we will confirm about change in performance with latest versionWe don’t need document in specific order but “quotes” array in a document should quote be in specific order since it is time series data. “quotes” array has 1000 quotes. We can change number of quote in the array since we are using fixed size bucketingWe do have performance issue even with empty collection so not sure it can help us to do weekly/monthly collection at this point.", "username": "Dhruvesh_Patel" }, { "code": "", "text": "Hi @Dhruvesh_Patel,Ok now that you say that elements in a document can grow to 1000 array elements it makes sense that updates are slow…When mongodb perfoms an update to an array it needs to desirialize and serelize the whole array for each update.Consider limiting array elements to 100 a document.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "we tried the solution you suggested , but not significant improvement in production. If we have 1000 quotes in array per documents then we able to save 1684 quotes/seconds , but if we have 100 quotes in array per document then we able to save 4083 quotes/seconds.", "username": "Dhruvesh_Patel" }, { "code": "$min:{quote_first:NumberLong(\"1597337417784\")},\n\t$max:{quote_last:NumberLong(\"1597337417784\")},\n", "text": "Hi @Dhruvesh_Patel,Consider upgrading the version and your hardware that should speed the upserts.One additional comment , I noticed you are settingHardcoded value each upsert command. This is uneeded overhead and traffic. You can set those values in the setOnInsert clause without running $max or $min or at least running it once.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "We can ask DBA for version upgrade (but its long shot depending on the priorities and we have to be sure why we are asking for it) . But we have adopted MongoDB in our company almost one year ago. so our hardware are recommended by MongoDB.\nAlso we do see MongoDB site has other uses cases where timeseries data adopted successfully and they have done with previous versions of MongoDB.Also if you see scenario # 3 in MongoDB's New Time Series Collections | MongoDB, “quote_first” and “quote_last” value should be changed based on every time new quote pushed to “qutoes” array. I am generating bulk insert query on the fly so those value are part of query before it is being executed on MongoDB. 
So “setOnInsert” will not work in this scenario.$min:{quote_first:NumberLong(“1597337417784”)},\n$max:{quote_last:NumberLong(“1597337417784”)},Thanks", "username": "Dhruvesh_Patel" }, { "code": "", "text": "Hi @Dhruvesh_Patel,Further assistance require specific environment investigation to identify which resources are blocking you from proceeding.This kind of investigation is best covered by our Support subscriptions.Thanks,\nPavel", "username": "Pavel_Duchovny" } ]
Bulk operation performance with bulk.find().upsert().updateOne()
2020-11-04T23:03:38.894Z
Bulk operation performance with bulk.find().upsert().updateOne()
5,368
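A compact PyMongo version of the size-based bucketing upsert discussed above, with the smaller 100-element bucket suggested later in the thread; the symbol names, bucket size and connection string are illustrative rather than taken from the original workload:

    from pymongo import MongoClient, UpdateOne

    client = MongoClient("mongodb://localhost:27017")
    coll = client.market_data.quote_buckets
    # compound index matching the upsert criteria (equality fields plus the bucket counter)
    coll.create_index([("ric", 1), ("utc_date", 1), ("nquotes", 1)])

    def bucket_op(ric, utc_date, ts_millis, quote):
        return UpdateOne(
            {"ric": ric, "utc_date": utc_date, "nquotes": {"$lt": 100}},
            {
                "$push": {"quotes": quote},
                "$min": {"quote_first": ts_millis},
                "$max": {"quote_last": ts_millis},
                "$inc": {"nquotes": 1},
                "$setOnInsert": {"exch_utc_offset": 1},
            },
            upsert=True,
        )

    # ops = [bucket_op(...) for each incoming quote]
    # coll.bulk_write(ops, ordered=True)   # keep ordered=True when the array order matters

Smaller buckets help because, as mentioned in the replies, every $push has to rewrite the whole array of the target document.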
null
[ "queries", "dot-net" ]
[ { "code": "", "text": "Hi all,How do I create a bulk query with multiple id’s returning those documents?A code example would be really helpful.", "username": "timah" }, { "code": "", "text": "Hi @timah!Welcome to the MongoDB Community Can you describe what kind of query you’re trying to create in more detail? What kind of output are you expecting given a set of example input?Any additional information you give can help us get on the same page and hopefully enable us to give a suitable answer.Thanks!", "username": "yo_adrienne" }, { "code": "", "text": "Hi Adrienne,I’m looking for a way to do “bulk reads” with the .NET driver using c# classes.Ideally inputting an array/list of filter definitions, and outputting an array/list of documents that matched one or more of those filters.Do you know if that is possible?", "username": "timah" } ]
Bulk find query with .NET
2020-11-19T10:30:38.738Z
Bulk find query with .NET
2,014
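The question above targets the .NET driver; as a language-neutral illustration of one way to fold a list of independent filters into a single round trip, here is a PyMongo sketch using $or (the filters and collection are made up):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    coll = client.test.items

    filters = [
        {"sku": "A-1"},
        {"sku": "B-2", "qty": {"$gt": 5}},
    ]
    # a document is returned if it matches at least one of the filters
    matched = list(coll.find({"$or": filters}))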
null
[]
[ { "code": "mongo \"mongodb+srv://sandbox.xxxxx.mongodb.net/<dbname>\" --username m001-studentmongo \"mongodb+srv://<username>:<password>@<cluster>.mongodb.net/admin\"", "text": "Hello, I’ve noticed that in the video we go into our cluster and use the command that is there in to connect using the shell, which is:mongo \"mongodb+srv://sandbox.xxxxx.mongodb.net/<dbname>\" --username m001-studentI have seen in every lecture of every chapter that down below in the Notes we have this way of connecting:\nmongo \"mongodb+srv://<username>:<password>@<cluster>.mongodb.net/admin\"", "username": "Ronny_Legones" }, { "code": "", "text": "Both will work\nGiving example for Class cluster:mongo “mongodb+srv://cluster0-jxeqq.mongodb.net/test” -u m001-student -p m001-mongodb-basicsmongo “mongodb+srv://m001-student:[email protected]/test”Modify your Sanbox connect string as per above format", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thank you! I finally did itIn my case I had to adjust the cluster name from “@cluster-” to “@cluster.”\nHad to put a . instead of -", "username": "Ronny_Legones" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Ways of connecting to Atlas
2020-11-22T06:04:57.255Z
Ways of connecting to Atlas
1,586
https://www.mongodb.com/…_2_1024x725.jpeg
[ "connecting" ]
[ { "code": "", "text": "I am trying to connect mongo using port and localhost, but I always have this problem (error).\nСнимок экрана 2020-11-21 в 15.05.151364×966 681 KB\nI did a search for everything I could find on the internet, but nothing helps. What could be the mistake? Perhaps I am not correctly identifying the host?MongoDB shell version v4.4.1\nI installed Mongo using brew\nI have macOS Big Sur 11.0.1", "username": "111132" }, { "code": "", "text": "Is your mongod up and running on port 30000 on your local host?\nOn unix systems you can check\nps -ef|grep mongod", "username": "Ramachandra_Tummala" }, { "code": "", "text": "ps -ef|grep mongodWhat should I check? I entered this command and was given this\n", "username": "111132" }, { "code": "mongo", "text": "The first question is why port 30000 ? The default port for mongo is 27017.When mongod is running with default configuration on your local machine running mongo is enough to connect to it.", "username": "chris" }, { "code": "", "text": "In my task I need to use this port, when I use 27017 its works.", "username": "111132" }, { "code": "", "text": "in default configuration mongod always running in the background on port 27017 as service\nSo you just issue mongo to connectBut if you want to connect a mongod running on a different port firt you need to bring up your mongod\nmongod --port 30000 --dbpath provide_valid_path --logpath provide_valid_path\nPlease check documenation for various options like auth etc\nOnce mongod is up you can check with ps -ef|grep mongod\nYou will see mongod process\nThen you can connect with\nmongo --port 30000", "username": "Ramachandra_Tummala" }, { "code": "", "text": "It would be a good thing for your learning to be acquainted with the different options available to start mongod.\nand", "username": "steevej" } ]
Error connecting to localhost:30000
2020-11-21T12:41:56.752Z
Error connecting to localhost:30000
3,229
null
[]
[ { "code": "{\"t\":{\"$date\":\"2020-11-20T06:33:15.797+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn63203\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"default_chat.rooms\",\"command\":{\"find\":\"rooms\",\"filter\":{\"_updatedAt\":{\"$gte\":{\"$date\":\"2020-11-20T06:33:08.015Z\"},\"$lt\":{\"$date\":\"2020-11-20T06:33:12.622Z\"}},\"visitorResponded\":true,\"test\":{\"$ne\":true},\"dummy\":{\"$ne\":true},\"isDeleted\":{\"$ne\":true}},\"sort\":{\"_updatedAt\":1},\"limit\":500,\"maxTimeMS\":2000,\"returnKey\":false,\"showRecordId\":false,\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1605853994,\"i\":11}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"reHNIHbNvR7Hi6LUIj4mVDwAX8k=\",\"subType\":\"0\"}},\"keyId\":6881916503147413508}},\"lsid\":{\"id\":{\"$uuid\":\"b4b2219f-f0f7-4102-b695-b864274d7fe0\"}},\"$db\":\"default_chat\"},\"planSummary\":\"COLLSCAN\",\"keysExamined\":0,\"docsExamined\":27431,\"hasSortStage\":true,\"cursorExhausted\":true,\"numYields\":30,\"nreturned\":1,\"queryHash\":\"DA4A9692\",\"planCacheKey\":\"EDBD4246\",\"reslen\":1489,\"locks\":{\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":31}},\"Global\":{\"acquireCount\":{\"r\":31}},\"Database\":{\"acquireCount\":{\"r\":31}},\"Collection\":{\"acquireCount\":{\"r\":31}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"protocol\":\"op_query\",\"durationMillis\":175}}\ndb.rooms.find({\"_updatedAt\":{\"$gte\":{\"$date\":\"2020-11-20T06:33:08.015Z\"},\"$lt\":{\"$date\":\"2020-11-20T06:33:13.818Z\"}}, \"responded\":true,\"test\":{\"$ne\":true},\"dummy\":{\"$ne\":true},\"isDeleted\":{\"$ne\":true}}).sort({\"_updatedAt\": 1}).limit(500)\n", "text": "Currently debugging some slow queries and inspecting Mongo Logs. So, from these I would like to pick “Slow query” logs, convert them to queries and run them locally.for e.g. my log has:I want:is there tool which does this? right now I am converting them manually. Thank you!", "username": "V_N_A" }, { "code": "", "text": "Sounds like a fun exercise in JSON parsing.", "username": "Jack_Woehr" }, { "code": "", "text": "I see 2 different directions on how one can achieve that.Use jq (jq) to extract the required parts for the command.Import the logs in mongodb and then using the aggregation framework to extract the required parts for the command.", "username": "steevej" } ]
How do I convert commands from mongo log to queries
2020-11-20T07:59:46.603Z
How do I convert commands from mongo log to queries
3,033
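Because the log shown above is the structured JSON-per-line format (MongoDB 4.4+), a short script can rebuild a shell query from each "Slow query" entry. A sketch that only handles find commands; extended-JSON values such as {"$date": ...} in the filter would still need converting to ISODate(...) by hand, and the script name is made up:

    import json
    import sys

    # usage: grep '"Slow query"' mongod.log | python rebuild_queries.py
    for raw in sys.stdin:
        entry = json.loads(raw)
        if entry.get("msg") != "Slow query":
            continue
        cmd = entry["attr"]["command"]
        if "find" not in cmd:
            continue                      # only find commands are handled in this sketch
        query = "db.%s.find(%s)" % (cmd["find"], json.dumps(cmd.get("filter", {})))
        if "sort" in cmd:
            query += ".sort(%s)" % json.dumps(cmd["sort"])
        if "limit" in cmd:
            query += ".limit(%d)" % cmd["limit"]
        print(query)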
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Can anyone help me out of this", "username": "rajesh_ranjan" }, { "code": "", "text": "Hi @rajesh_ranjan,Welcome to MongoDB community!I am not sure what you exactly mean but lookups between collections can be done via $lookup or $graphLookup aggregation stages:Best\nPavel", "username": "Pavel_Duchovny" } ]
Is it possible to do lookup in between the schema collection
2020-11-22T05:46:56.942Z
Is it possible to do lookup in between the schema collection
1,164
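For reference, a minimal PyMongo $lookup between two collections; orders and customers are stand-ins, since the thread does not name the actual collections:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client.test

    pipeline = [
        {"$lookup": {
            "from": "customers",           # collection to join
            "localField": "customer_id",   # field in the orders documents
            "foreignField": "_id",         # field in the customers documents
            "as": "customer",              # output array field
        }}
    ]
    for order in db.orders.aggregate(pipeline):
        print(order["_id"], order["customer"])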
null
[ "aggregation" ]
[ { "code": "int getFactor(int num1, int num2){\n int temp;\n while (num2 != 0){\n temp = num2;\n num2 = num1 % num2;\n num1 = temp; \n }\n return num1;\n }\n", "text": "Is it possible to convert / use the following function in an aggregation pipeline?", "username": "An_De" }, { "code": "", "text": "You can use it as javascript,with MongoDB 4.4Convert i don’t think you can.\nSome languages like Clojure have (reduced …) to stop the reduce from being complete.\nMongoQL doesn’t have loops or reduced option.", "username": "Takis" } ]
Use a while loop in MongoDB Aggregation?
2020-11-17T18:50:20.728Z
Use a while loop in MongoDB Aggregation?
6,186
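To illustrate the MongoDB 4.4 route mentioned in the reply, here is a PyMongo sketch that runs the same greatest-common-divisor loop as server-side JavaScript via $function; the field names num1 and num2 are assumptions:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # requires MongoDB 4.4 or newer
    coll = client.test.numbers

    pipeline = [
        {"$addFields": {
            "gcd": {"$function": {
                "body": """function (a, b) {
                    while (b !== 0) { var t = b; b = a % b; a = t; }
                    return a;
                }""",
                "args": ["$num1", "$num2"],
                "lang": "js",
            }}
        }}
    ]
    for doc in coll.aggregate(pipeline):
        print(doc)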
null
[]
[ { "code": "", "text": "Is there any way to prevent multiple apps from opening a Realm file under macOS ?I was sure we used to get an exception if a second application tried to open a realm file that was already open by another application but this no longer seems to be the case.We want to release a beta version but want to prevent the user from running the production application and the beta at the same time.", "username": "Duncan_Groenewald" }, { "code": ".lock", "text": "@Duncan_Groenewald So we definitely should still be throwing exceptions for multiprocess access. To be clear, the Realm SDK without sync does support multiprocess apps, that is why there is a .lock file, to manage mutations in an ACID compliant way. We do not support multiprocess apps for sync realms however, and here, you should be getting an exception.If you are able to reproduce this please upload a sample app and we can sort this for you.", "username": "Ian_Ward" }, { "code": "", "text": "Ah thanks for providing that clarification, didn’t realise there was a difference in behaviour.It should be noted that we are using the Sync SDK but for the bulk of testing we are just connecting to a local file. Presumably the SDK will throw an exception when opening a Synced Realm from a second process but will allow opening a non-synced realm from multiple processes.Did we miss this in the documentation somewhere ?", "username": "Duncan_Groenewald" }, { "code": "let fileURL = FileManager.default\n .containerURL(forSecurityApplicationGroupIdentifier: \"group.io.realm.app_group\")!\n .appendingPathComponent(\"default.realm\")\nlet config = Realm.Configuration(fileURL: fileURL)\nlet realm = try Realm(configuration: config)\n", "text": "It should be noted that we are using the Sync SDK but for the bulk of testing we are just connecting to a local file. Presumably the SDK will throw an exception when opening a Synced Realm from a second process but will allow opening a non-synced realm from multiple processes.To share Realms between apps in the same iOS app group, you’ll need to specify a common location for the Realm. So something like this -", "username": "Ian_Ward" }, { "code": "", "text": "Thanks, our app is a macOS Desktop app and we don’t need to share realms between apps or processes. We are planning to release a new beta version which can be installed at the same time as the production version but don’t want to allow them to be run at the same time.", "username": "Duncan_Groenewald" }, { "code": "", "text": "@Ian_Ward - how can I catch this exception ? try Realm() does not seem to be throwing the exception so it goes through as an uncaught exception.\" uncaught exception of type realm::MultipleSyncAgents\"", "username": "Duncan_Groenewald" }, { "code": "", "text": "@Duncan_Groenewald So the error is thrown from the sync-client’s worker thread - so it cannot be caught. 
I’m not sure how the developer would handle this error, since presumably, opening the realm is necessary for the functioning of the app and likely should crash.", "username": "Ian_Ward" }, { "code": "", "text": "Well in our case it would be nice to be able to display a message to the user warning them that another application is already using the database and to close that application first if they wish to proceed.You shouldn’t be killing the application from a worker thread - just send a message to the application saying you can’t continue and then exit the worker thread.You probably shouldn’t be returning a Realm for the application to use until this has been checked anyway.Similarly I think the SDK crashes the application with a fatal error if you try adding an object with a duplicate ID - again kind of weird to work like that - just return an error so the application can gracefully handle the situation. Its a pretty common problem with databases - even if the application checks for a duplicate ID prior to adding the object it is entirely possible that someone else might have added one between the time the check is performed and the call to add the object is made.I am not aware that there is any way to link the check for duplicate ID with the call to add() as a transaction in Realm so the application should be left to handle the ‘duplicate key’ error rather than crashing the whole application. This isn’t a compile time check you can do but definitely a runtime check you should handle elegantly for the user.Maybe I should raise this as a request on GitHub ? Would love to hear the arguments for simply crashing the application with a fatal error rather than returning an error.", "username": "Duncan_Groenewald" }, { "code": "", "text": "@Duncan_Groenewald In the case of adding objects with primary keys, we have a default parameter named updatePolicy that allows users to choose how duplicate primary keys are handled – either by throwing an exception, updating the modified fields of the object with the matching primary keys, or by replacing all the fields. This circumvents most needs to handle a duplicate primary key.https://realm.io/docs/swift/latest/api/Classes/Realm/UpdatePolicy.html", "username": "Ian_Ward" } ]
Prevent multiple apps from opening Realm File
2020-11-17T00:56:25.067Z
Prevent multiple apps from opening Realm File
2,287
null
[ "indexes" ]
[ { "code": "bulk.find({ric:\"LCOJ1-V1\",utc_date:ISODate(\"2020-08-13T00:00:00.000000000Z\"),nquotes:{$lt:1000}}).upsert().updateOne(\n{\n\t$push:{quotes:{utc_date_time:NumberInt(\"1597337417\"),nano_seconds:NumberInt(784325554),ask_price:-0.98,ask_size:NumberInt(4),bid_price:-1.0,bid_size:NumberInt(4),qualifiers:[\"[BID_TONE]\",\"[ASK_TONE]\"]}},\n\t$setOnInsert:{exch_utc_offset:NumberInt(1)},\n\t$min:{quote_first:NumberLong(\"1597337417784\")},\n\t$max:{quote_last:NumberLong(\"1597337417784\")},\n\t$inc :{nquotes:NumberInt(1)}\n});\nbulk.find({ric:\"LCOJ1-V1\",utc_date:ISODate(\"2020-08-13T00:00:00.000000000Z\"),nquotes:{$lt:1000}}).upsert().updateOne(\n{\n\t$push:{quotes:{utc_date_time:NumberInt(\"1597337606\"),nano_seconds:NumberInt(436207836),ask_price:-0.97,ask_size:NumberInt(4),bid_price:-1.0,bid_size:NumberInt(4)}},\n\t$setOnInsert:{exch_utc_offset:NumberInt(1)},\n\t$min:{quote_first:NumberLong(\"1597337606436\")},\n\t$max:{quote_last:NumberLong(\"1597337606436\")},\n\t$inc :{nquotes:NumberInt(1)}\n});\nbulk.find({ric:\"LCOJ1-V1\",utc_date:ISODate(\"2020-08-13T00:00:00.000000000Z\"),nquotes:{$lt:1000}}).upsert().updateOne(\n{\n\t$push:{quotes:{utc_date_time:NumberInt(\"1597337635\"),nano_seconds:NumberInt(967713742),ask_price:-0.98,ask_size:NumberInt(4),bid_price:-1.0,bid_size:NumberInt(4)}},\n\t$setOnInsert:{exch_utc_offset:NumberInt(1)},\n\t$min:{quote_first:NumberLong(\"1597337635967\")},\n\t$max:{quote_last:NumberLong(\"1597337635967\")},\n\t$inc :{nquotes:NumberInt(1)}\n});\ndb.getCollection(\"time_series\").find(\n { \n \"$and\" : [\n { \n \"ric\" : \"CLV0-X0\"\n }, \n { \n \"utc_date\" : ISODate(\"2020-08-03T00:00:00.000+0000\")\n }, \n { \n \"quote_first\" : { \n \"$gte\" : NumberLong(1596493635301)\n }\n }, \n { \n \"quote_first\" : { \n \"$lte\" : NumberLong(1596499142016)\n }\n }, \n { \n \"quote_last\" : { \n \"$lte\" : NumberLong(1596499197995)\n }\n }\n ]\n }, \n { \n \"ric\" : 1.0, \n \"utc_date\" : 1.0, \n \"quote_first\" : 1.0, \n \"quote_last\" : 1.0,\n \"quotes\" : 1.0\n }\n).sort(\n { \n \"utc_date\" : 1.0,\"quote_first\" : 1.0\n }\n);\n", "text": "Hiwe are in process to store historical time series data (millions of rows) for Reuters ric. we are following \" Scenario 3: Size-based bucketing\" as per MongoDB's New Time Series Collections | MongoDB. We are storing 1000 quotes per document .The structure of upsert query will be as following.Based on recommendation of article we have created composite index on ({ric:1,utc_date:1,nquotes:1}) which is perfectly fine for “upsert” operation.But during the query time we are expected to query data using also “ric”,“utc_date”,“first_quote” and “last_quote”. So far we have created composite index on “ric,utc_date,nquotes”.What would be the suggestion to create index if we are querying data using “ric”,“utc_date”,“first_quote” and “last_quote” ? Since we are processing millions of rows , we need to consider performance and required space for any additional index.here is example read queryThanks for looking into this.", "username": "Dhruvesh_Patel" }, { "code": "ricutc_date qoute_firstqoute_last{ ric: 1, utc_date : 1, qoute_first : 1, qoute_last : 1 }\n", "text": "Hi @Dhruvesh_Patel,Index field order can be initially determine using Equility Sort Range order. Recommend reading more here:Best practices for delivering performance at scale with MongoDB. 
Learn about the importance of indexing and tools to help you select the right indexes.The equality fields in your query are ric and utc_date, while the range fields are quote_first and quote_last. So the optimal index could be:", "username": "Pavel_Duchovny" } ]
Index on the Time Series collection structure
2020-11-20T14:13:20.984Z
Index on the Time Series collection structure
1,638
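Expressed with PyMongo, the Equality-Sort-Range ordering suggested above becomes the following; the filter values are copied from the example query in the thread, while the connection string and namespace are placeholders:

    from datetime import datetime
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    coll = client.test.time_series

    # equality fields first (ric, utc_date), then the range fields
    coll.create_index([("ric", 1), ("utc_date", 1), ("quote_first", 1), ("quote_last", 1)])

    cursor = coll.find({
        "ric": "CLV0-X0",
        "utc_date": datetime(2020, 8, 3),
        "quote_first": {"$gte": 1596493635301, "$lte": 1596499142016},
        "quote_last": {"$lte": 1596499197995},
    }).sort([("utc_date", 1), ("quote_first", 1)])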
null
[ "replication", "monitoring" ]
[ { "code": "", "text": "Hi all,\nI keep getting this alert, and I’m not entirely sure why is that!\nin the documentations, it says Intensive write and update operations in a short period of time.\nis this serious? is there a way to find out which collection or document is being written to?\nThanks in advance", "username": "naim_sakaamini" }, { "code": "", "text": "Please check this link", "username": "Ramachandra_Tummala" } ]
Replication Oplog Window has gone below 1 hours
2020-11-19T17:09:40.340Z
Replication Oplog Window has gone below 1 hours
4,589
https://www.mongodb.com/…4_2_1024x512.png
[ "aggregation" ]
[ { "code": "", "text": "I would like to add an item to the middle of an array during the aggregation pipeline.The solution to the question below shows adding to the end of an array using ‘$concatArrays’, but I would like to add to the middle using something like ‘$position’.Does anyone know if ‘$push’ will get added as an update operation during the aggregation pipeline?The following page provides examples of updates with aggregation pipelines.Would really appreciate the help!", "username": "Scott_Wager" }, { "code": "$concatArrays{ arr: [ 8, 39, 21, 0, 999 ] }55539arr{ \n $set: { \n arr: { \n $concatArrays: [ \n { $slice: [ \"$arr\", 2 ] }, \n [ 555 ], \n { $slice: [ \"$arr\", -3 ] } \n ] \n } \n } \n}\n{ arr: [ 8, 39, 555, 21, 0, 999 ] }", "text": "Hello @Scott_Wager ,You can use the $concatArrays in your case also. For example, consider a document:{ arr: [ 8, 39, 21, 0, 999 ] }and you want to insert a number 555 as third element (after the 39) of the array field arr.The result : { arr: [ 8, 39, 555, 21, 0, 999 ] }", "username": "Prasad_Saya" }, { "code": "", "text": "This blew my mind, I would never have thought of doing that.I have a recursive data structure that will get big (similar to a file system with folders), so I would prefer to have an option to push into the array. I’m reluctant to reset the entire field as there could be a lot of data to write. Do you know if there is or ever will be a mutable solution like push?I’ve gone through so much stackoverflow this was really my last shot, so if this is the only option then I’ll stick with it. Thank you so much for the quick reply!", "username": "Scott_Wager" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update pipeline: Push into middle of array
2020-11-20T11:05:11.882Z
Update pipeline: Push into middle of array
2,666
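The $slice/$concatArrays trick above can also be issued from a driver as an update with an aggregation pipeline (MongoDB 4.2+). A PyMongo sketch; the three-argument form of $slice for the tail avoids having to know how many elements remain:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    coll = client.test.arrays_demo

    coll.insert_one({"_id": 1, "arr": [8, 39, 21, 0, 999]})

    position, new_value = 2, 555
    coll.update_one(
        {"_id": 1},
        [{"$set": {"arr": {"$concatArrays": [
            {"$slice": ["$arr", position]},                      # elements before the insert point
            [new_value],
            {"$slice": ["$arr", position, {"$size": "$arr"}]},   # everything from the insert point onwards
        ]}}}],
    )
    # arr is now [8, 39, 555, 21, 0, 999]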
null
[ "app-services-user-auth" ]
[ { "code": " LoginButton onLoginFinishedconst signInFacebook = async (error, result) => {\n if (error) {\n console.error(`Failed to log in: ${result.error}`);\n } else if (result.isCancelled) {\n console.log(\"Facebook login was cancelled\");\n } else {\n const { accessToken } = await AccessToken.getCurrentAccessToken();\n const credential = Realm.Credentials.facebook(accessToken);\n const user = await app.logIn(credential);\n setUser(user);\n }\n\n };\nconsole.logcredentialPossible Unhandled Promise Rejection (id: 1):\nObject {\n \"code\": 2,\n \"message\": \"authentication via 'oauth2-facebook' is unsupported\",\n}\nFacebookEmail/PasswordAuthentication ProviderUsers", "text": "I’m trying add the Facebook login button to the Task tracker example app which I am running on an Android emulator on my laptop. I am using the React Native SDK. I have managed to register my app with Facebook and can get the login button showing. The LoginButton onLoginFinished callback is as followsI can console.log the accessToken ok but when I examine the credential I see an empty object. The code then gives me this error:I have Facebook and Email/Password authentication turned on the Authentication Provider tab of the Users page. I hope this is a silly error, but I can’t seem to find much information when googling for this.I can provide more information if necessary.", "username": "DaveAik" }, { "code": "", "text": "Hi Dave - welcome to the forum!\nThis error message is used when the authentication provider hasn’t been enabled server-side. You’ve enabled the Facebook authentication provider. Are you sure that change has been deployed?", "username": "kraenhansen" }, { "code": " \"OAuth2 configuration consists of only clientId, clientSecret, and openId\"", "text": "Yep, that seems to have sorted it - thank you!I am however getting a new error message stating: \"OAuth2 configuration consists of only clientId, clientSecret, and openId\"I have checked setup the client id, client secret on the Facebook authentication page. I have published many times and reinstalled the app. I have check all the metadata fields to on.I’m not sure if this is something I need to look into on the Facebook side of things.", "username": "DaveAik" }, { "code": "", "text": "Any ideas? I’ve been looking through the docs in both realm and Facebook authentication.", "username": "DaveAik" } ]
Facebook authentication on Android - empty object
2020-11-18T19:39:29.760Z
Facebook authentication on Android - empty object
2,086
null
[ "dot-net", "transactions" ]
[ { "code": "", "text": "I am gettingStandalone servers do not support transactions.error for MongoDB Transactions. I referred this code to check this Transactions.\nhttps://www.mongodb.com/how-to/transactions-c-dotnetKindly do the needful.Thanks in Advanced", "username": "Lawrence_Raja" }, { "code": "", "text": "I am getting error as well using the same concepts. Command commitTransaction failed: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction…’", "username": "Supriya_Bansal" }, { "code": "", "text": "Hi @Lawrence_Raja, and welcome to the community!Standalone servers do not support transactions.In order to perform multi-document transactions, your MongoDB deployment needs to be either a replica set or a sharded cluster.The error message has indicated that the deployment that you have is a standalone. See also Transactions for more information.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hi @Supriya_Bansal,Command commitTransaction failed: WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction…’The error that you have encountered here is different to the original post. It seems that in your case, this is caused by more than one async operation trying to perform write operation on the same document.For others to help you better, please open a new discussion thread with the following information:Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Thanks @wan for the response. I realized it later that mine is a different problem.\nI had created Write Conflict. Please let me know your thoughts.", "username": "Supriya_Bansal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting error MongoDB Transactions with C# and the .NET Framework
2020-11-06T16:24:06.774Z
Getting error MongoDB Transactions with C# and the .NET Framework
15,183
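A small PyMongo sketch of both points raised in this thread: transactions need a replica set (or sharded cluster) rather than a standalone mongod, and with_transaction retries the callback on transient errors such as the WriteConflict mentioned above. The URI, database and document values are placeholders:

    from pymongo import MongoClient

    # must point at a replica set or sharded cluster, not a standalone mongod
    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
    accounts = client.bank.accounts

    def transfer(session):
        accounts.update_one({"_id": "A"}, {"$inc": {"balance": -10}}, session=session)
        accounts.update_one({"_id": "B"}, {"$inc": {"balance": 10}}, session=session)

    with client.start_session() as session:
        # the whole callback is retried on errors labelled TransientTransactionError
        session.with_transaction(transfer)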
https://www.mongodb.com/…7b77dee98ef8.png
[ "connecting" ]
[ { "code": "", "text": "Please see attached. How do i connect to my local db via other project with what uri?Screenshot_20201120-073037480×800 76.1 KB", "username": "Jordan_Peters" }, { "code": "", "text": "Connecting to your db via a project will depend on what your project is written in, then you will use that languages tools to connect.But from the picture it looks like your uri is mongodb://127.0.0.1:3335 . You can test the uri from the command line and run the mongo command followed by your uri.", "username": "tapiocaPENGUIN" } ]
Connecting to my local DB
2020-11-20T05:42:49.955Z
Connecting to my local DB
1,496
null
[ "kafka-connector" ]
[ { "code": "{\n \"id\": 12345\n \"array_name\": [\n {\"id\": 12345, \"something\": {\"id\": 12345, \"another\": 5.5} ...},\n . . .\n ]\n}\n{\n \"id\": 12345\n \"array_name\": [\n \"{\\\"id\\\": 12345, \\\"something\\\": {\\\"id\\\": 12345, \\\"another\\\": 5.5} ...}\",\n . . .\n ]\n}\n{\n \"key.converter.schemas.enable\": \"false\",\n \"value.converter.schemas.enable\": \"false\",\n \"name\": \"Mongo-Connect\",\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"tasks.max\": \"1\",\n \"key.converter\": \"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"errors.log.enable\": \"true\",\n \"errors.log.include.messages\": \"true\",\n \"connection.uri\": \"mongodb://mongo1:27017\",\n \"database\": \"testdb\",\n \"collection\": \"testcol\",\n \"topic.prefix\": \"test-prefix\",\n \"output.format.key\": \"json\",\n \"output.format.value\": \"schema\",\n \"output.schema.infer.value\": \"true\",\n \"output.json.formatter\": \"com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson\",\n \"copy.existing\": \"true\"\n}", "text": "Hello *,Mongo Version => 4.0-xenial (Docker)\nConfluent Kafka Version => 6.0.0 (Docker)MongoConnector => 1.3.0I am encountering problem where suddenly during uploading data to Kafka Array of Objects becomes Array of Strings e.g. there is:and in Kafka it becomes:Connector options are:", "username": "lyubick" }, { "code": "db.testcol.insert({\"array_name\":[{\"something\":{\"another\":55}}]})kafkacat -b localhost:9092 -t test-prefix.testdb.testcol -C -f '\\nKey :\\t %k\\t\\nValue :\\t %s\\nPartition: %p\\tOffset: %o\\n--\\n'\\n\n\nKey :\t {\"_id\": {\"_data\": \"825FB57CB1000000022B022C0100296E5A1004E00B897973A14FEDB1A43528B132F50846645F696400645FB57CB12CDCC09225054EBE0004\"}}\t\nValue :\t {\"_id\":{\"_data\":\"825FB57CB1000000022B022C0100296E5A1004E00B897973A14FEDB1A43528B132F50846645F696400645FB57CB12CDCC09225054EBE0004\"},\"clusterTime\":-588311704,\"documentKey\":{\"_id\":\"5fb57cb12cdcc09225054ebe\"},\"fullDocument\":{\"_id\":\"5fb57cb12cdcc09225054ebe\",\"array_name\":[{\"something\":{\"another\":55}}]},\"ns\":{\"coll\":\"testcol\",\"db\":\"testdb\"},\"operationType\":\"insert\"}\nPartition: 0\tOffset: 0\n", "text": "Hi,\nI could not reproduce this problem and I used your same configuration. I used the mongosh shell to insertdb.testcol.insert({\"array_name\":[{\"something\":{\"another\":55}}]})I used Kafkacat to print the messageHow are you reading the kafka message ? My guess is it whatever you are using to read the message is taking that array of objects and converting to strings. 
Can you try with Kafkacat?", "username": "Robert_Walters" }, { "code": "{\n \"-1236052134575208584\": 1603802006849,\n \"-3078921119283744887\": {\n \"6022414958441676900\": {\n \"4344647195749500666\": \"4344647195749500666\",\n \"6440041613324510652\": \"6440041613324510652\",\n \"8265573421575953197\": \"8265573421575953197\"\n },\n \"2919502189498201214\": {\n \"-4263262838430303025\": {\n \"-1013215829982866721\": 6,\n \"1209830369222860200\": 12124\n },\n \"8815982370389166190\": false,\n \"6135136337149060510\": {\n \"-5671795673574091253\": 4.226828822362668,\n \"-1545304719862780077\": 0,\n \"6472265773460362391\": \"6472265773460362391\",\n \"1664694551407375950\": {\n \"-8201417328859773479\": 643,\n \"1895862094853980300\": 400\n },\n \"3582048408897295416\": 3,\n \"4676544319411765991\": {\n \"-5606549786281302960\": \"-5606549786281302960\"\n },\n \"3702657416359142280\": null,\n \"-2764320561543193457\": null,\n \"7342153465701196142\": {\n \"4273523332437800642\": 12124,\n \"2877948439254078926\": true\n }\n },\n \"-7535015702201045030\": null,\n \"6968704615890252414\": \"6968704615890252414\",\n \"9143290242159779235\": 1\n },\n \"-1631773694551398667\": \"-1631773694551398667\",\n \"1654016379873652592\": 1\n },\n \"-8730419922316402040\": {\n \"-701297860731463640\": {\n \"8850753136911882592\": {\n \"-8954055381677511378\": 6,\n \"-3600730930140391064\": 12124\n },\n \"34763064450124911\": false,\n \"-7207119165969726438\": {\n \"-7658080728774377404\": 1.6432313657184152,\n \"-5661544056901251958\": 5079,\n \"-6626094279135658848\": \"-6626094279135658848\",\n \"2276608449961011082\": {\n \"641677532283622210\": 609,\n \"-3232278498323453766\": 400\n },\n \"3356311166389917555\": 29,\n \"7445038319791579437\": [\n \"5374410395986528777\",\n \"-5970305719089840402\"\n ],\n \"5453741773890927602\": 1,\n \"-8201960194732107681\": null,\n \"7128737013364301064\": {\n \"27640474992925850\": 12124,\n \"-6617784163376580317\": true\n }\n },\n \"-8750040877746075818\": [\n {\"-7675143677926194569\": 41, \"-76751436779261945619\": {\"-7675143677926194569\": 12124, \"-76751436779226194569\": true}, \"-76751433677926194569\": [\"-7675143677926194569\", \"-7675143677926194569\"], \"-76751443677926194569\": {\"-76751436775926194569\": 941, \"-7675143677926194569\": 660}, \"-76751436779261994569\": 50, \"-76751431677926194569\": 1.044447267034656, \"-7675143677926194569a\": \"-7675143677926194569\", \"-76751h43677926194569\": 2, \"-76751436747926194569\": 0},\n {\"-25342639a09144125530\": 29, \"-25342463909144125530\": {\"-25344263909144125530\": 12124, \"-2534263909144125530\": true}, \"-2534263909a144125530\": [\"-2534263909144125530\", \"-2534263909144125530\"], \"-253426390914a4125530\": {\"-25342639091441a25530\": 609, \"-2534a263909144125530\": 400}, \"-253426390aa9144125530\": 5079, \"-253a4263909144125530\": 1.6432313657184152, \"-25a34263909144125530\": \"-2534263909144125530\", \"-2534263909144125530\": null, \"-25342639091d44125530\": 1},\n {\"-351845619a2799115644\": 42, \"-35184561492799115644\": {\"-35184561942799115644\": 12124, \"-3518456192799115644\": true}, \"-35a18456192799115644\": [\"-3518456192799115644\", \"-3518456192799115644\"], \"-35184561927a99115644\": {\"-3518456192799a115644\": 684, \"-3518456192799115644\": 400}, \"-3518456192799aa115644\": 593, \"-351845a6192799115644\": 2.479277203030802, \"-3518a456192799115644\": \"-3518456192799115644\", \"-a\": null, \"-3518456a192799115644\": 2},\n {\"-7973291a657838087544\": 26, \"-79732491657838087544\": 
{\"-79732916578380875444\": 12124, \"-7973291657838087544\": true}, \"-797329165a7838087544\": [\"-7973291657838087544\", \"-7973291657838087544\", \"-7973291657838087544\", \"-7973291657838087544\", \"-7973291657838087544\", \"-7973291657838087544\"], \"-7973a291657838087544\": {\"-797a3291657838087544\": 847, \"-7973291657838087544\": 650}, \"-797329165783808a7544\": 1820, \"-797329165a7838a087544\": 2.5081437308920598, \"-79732a91657838087544\": \"-7973291657838087544\", \"-7973291657aaa838087544\": 2, \"-79732916578a38087544\": 3},\n {\"723957439050a4453704\": 30, \"72395743940504453704\": {\"72395743905044543704\": 12124, \"7239574390504453704\": true}, \"7239574390504a453704\": [\"7239574390504453704\", \"7239574390504453704\", \"7239574390504453704\", \"7239574390504453704\"], \"723957a4390504453704\": {\"7239574390504453704\": 759, \"723957439aa0504453704\": 1100}, \"72a39574390504453704\": 1732, \"7239574390a504453704\": 2.5294378400108055, \"72395743905044aa53704\": \"7239574390504453704\", \"7239574390504aa453704\": 2, \"723957439a0504453704\": 4},\n {\"-23912009330439173695\": 43, \"-23910093340439173695\": {\"-23910093304394173695\": 12124, \"-2391009330439173695\": true}, \"-23910a09330439173695\": [\"-2391009330439173695\", \"-2391009330439173695\"], \"-2391009330a4391736a95\": {\"-23910093304391736a95\": 616, \"-2391009330439173695\": 480}, \"-239100933043917a3695\": 347, \"-239100aa9330a439173695\": 2.9827470360102772, \"-2391a009330439173695\": \"-2391009330439173695\", \"-2391009330a439173695\": null, \"-2391009330439173695\": 5},\n {\"-745169876a8078486853\": 39, \"-74516984768078486853\": {\"-74516948768078486853\": 12124, \"-7451698768078486853\": true}, \"-7451698768a078486853\": [\"-7451698768078486853\", \"-7451698768078486853\"], \"-74516987a6a8078486853\": {\"-7451698a768078486853\": 690, \"-7451698768078486853\": 480}, \"-745169876807848a6853\": 1411, \"-745169876aa8078486853\": 3.7791890450274153, \"-745a1698768078486853\": \"-7451698768078486853\", \"-745169876a80aa78486853\": null, \"-7451698768078486853\": 6},\n {\"-12230520276098273608\": 6, \"-12230502760948273608\": {\"-12230540276098273608\": 12124, \"-1223050276098273608\": true}, \"-122305a0276098273608\": [\"-1223050276098273608\", \"-1223050276098273608\"], \"-1223050276a098273608\": {\"-122305027a6098273608\": 1098, \"-1223050276098273608\": 1170}, \"-122305027609827a3608\": 319, \"-12230502760a98273608\": 3.790121254273812, \"-12230a50276098273608\": \"-1223050276098273608\", \"-122305027609a8273608\": 2, \"-1223050276098273608\": 7},\n {\"-83081032622357989357\": 34, \"-83081026422357989357\": {\"-83084102622357989357\": 12124, \"-8308102622357989357\": true}, \"-830810262235a7989357\": [\"-8308102622357989357\", \"-8308102622357989357\"], \"-83081026a22357989357\": {\"-830810262a2357989357\": 1009, \"-8308102622357989357\": 1500}, \"-83081026223579aa89357\": 425, \"-8308102622a357989357\": 4.128897616678362, \"-830a8102622357989357\": \"-8308102622357989357\", \"-83081026223a57989357\": 2, \"-8308102622357989357\": 8},\n {\"-75169676a67566740423\": 44, \"-75169676467566740423\": {\"-75146967667566740423\": 12124, \"-7516967667566740423\": true}, \"-75169a67667566740423\": [\"-7516967667566740423\", \"-7516967667566740423\"], \"-75169676a67a566740423\": {\"-7516a967667566740423\": 776, \"-7516967667566740423\": 440}, \"-a7516967667566740423\": 379, \"-7516967667566a740423\": 5.887623744018165, \"-75169a67667566a740423\": \"-7516967667566740423\", \"-751696766756a6740423\": null, 
\"-7516967667566740423\": 9},\n {\"72411138a38436319010\": 52, \"72411138384436319010\": {\"72411134838436319010\": 12124, \"7241113838436319010\": false}, \"72411138384363190a10\": [\"7241113838436319010\", \"7241113838436319010\"], \"7241113838436aa319010\": {\"72411138384a36319010\": 761, \"7241113838436319010\": 459}, \"724111383843631901a0\": 1942, \"7241113838436319a010\": 6.263293924039651, \"72411138384a36319010\": \"7241113838436319010\", \"724111383a8436319010\": 1, \"7241113838436319010\": 10},\n {\"66888828393518514293\": 40, \"66888828935185142493\": {\"6688882893518514293\": 12124, \"66888824893518514293\": true}, \"668888289351851a4293\": [\"6688882893518514293\", \"6688882893518514293\"], \"66888828935185a14293\": {\"66888828935a18514293\": 668, \"6688882893518514293\": 500}, \"6688882893a51851a4293\": 99, \"668888289351851429a3\": 9.81369212962963, \"6688882893518a514293\": \"6688882893518514293\", \"66888828a93518514293\": null, \"6688882893518514293\": 11},\n {\"682406913a7771811068\": 27, \"68240691377718110648\": {\"68244069137771811068\": 12124, \"6824069137771811068\": true}, \"6824069137a771811068\": [\"6824069137771811068\"], \"68240691377718110a68\": {\"6824069137771a811068\": 977, \"6824069137771811068\": 850}, \"6824069137771811a068\": 416, \"682a4069137771811068\": 19.35300719938331, \"6a824069137771811068\": \"6824069137771811068\", \"6824069137a77181a1068\": 2, \"6824069137771811068\": 12},\n {\"401853999a2064174925\": 54, \"40185399920641749425\": {\"40185399920464174925\": 12124, \"4018539992064174925\": true}, \"4018539992064a174925\": [\"4018539992064174925\", \"4018539992064174925\"], \"401853999206417a4925\": {\"401853999206a4174925\": 669, \"4018539992064174925\": 400}, \"40185399920641a74925\": 48, \"40185399920641749a25\": null, \"40185399920a64174925\": \"4018539992064174925\", \"401853a9992064174925\": null, \"4018539992064174925\": 13},\n {\"90179546838230936796\": 45, \"90179546882309367496\": {\"90179546882430936796\": 12124, \"9017954688230936796\": true}, \"901795468823a0936796\": [\"9017954688230936796\", \"9017954688230936796\"], \"901795468823093a6796\": {\"901795468823a0936796\": 637, \"9017954688230936796\": 500}, \"90179546882309a3a6796\": 0, \"90179546882309a36796\": 3.661615067005595, \"901a7954688230936796\": \"9017954688230936796\", \"901795a4688230936796\": null, \"9017954688230936796\": 14},\n {\"5443577614393337551\": 16, \"5443577613933375451\": {\"5443574761393337551\": 12124, \"544357761393337551\": true}, \"54435776139333a7551\": [\"544357761393337551\", \"544357761393337551\"], \"54435776a1393337551\": {\"5443577613a93337551\": 862, \"544357761393337551\": 400}, \"5443577a61393337551\": 0, \"5443577613933375a51\": 4.044922689075912, \"544357761393337a551\": \"544357761393337551\", \"54435776139a3337551\": 3, \"544357761393337551\": 15},\n {\"-26743996049095110585\": 3, \"-26743996409095110585\": {\"-26743996094095110585\": 12124, \"-2674399609095110585\": true}, \"-26743996090495110585\": [\"-2674399609095110585\", \"-2674399609095110585\"], \"-26a74399609095110585\": {\"-267439a9609095110585\": 643, \"-2674399609095110585\": 400}, \"-26743a99609095110585\": 0, \"a-2674399609095110585\": 4.226828822362668, \"-267439960a9095110585\": \"-2674399609095110585\", \"-26743996a09095110585\": null, \"-2674399609095110585\": 16}\n ],\n \"6215212195274199410\": \"6215212195274199410\",\n \"-5550529520968987173\": 1\n }\n }\n }", "text": "Hey @Robert_Walters yes indeed reproduction is not trivial I posted JSON Document which could help 
to reproduce.", "username": "lyubick" }, { "code": "{\n \"L1\": {\n \"L2\": {\n \"L3\": [\n {\"V2\": {\"K1\": 0},\n \"K1\": 0},\n {\"V5\": [\"A1\", \"A2\"],\n \"V11\": 1}\n ]\n }\n }\n }\n", "text": "I trimmed this down into the following to repro:I filed a Jira ticket to track this and investigate it more.\nhttps://jira.mongodb.org/browse/KAFKA-175", "username": "Robert_Walters" }, { "code": "", "text": "Hi @lyubick,I can see why this is confusing. Unfortunately, Arrays in Kafka can only have a single value type (a single schema for all values), given the arrays have varied values, with differing schemas the infer schema logic picks the base String schema type and uses the Json formatter for formatting that data.I hope that helps clarify the situation.Ross", "username": "Ross_Lawley" }, { "code": "", "text": "Hi @Robert_Walters,Thanks, I understood the RCA, I am bypassing it by using StringConverter it produces 100% valid JSON string then later is read and transformed back to JSON Object.However I do not agree that problem is with Kafka itself, since everything in Kafka is kept in a Array[Byte] format - this is why we are using converters, so types are not applied there.Another point that I could agree that there is a logic that elements of array should be the same. But again on which scope? Basically this array is Array[Objects], each document has its own internal structure.And finally if MongoDB saves, operates such structures, provides the functionality to create MongoDB -> Kafka -> MongoDB connection (as mentioned in docs) then obviously something is wrong with the JsonConverter because during this Data flow Source != Sink.", "username": "lyubick" }, { "code": "", "text": "Hi @lyubick,Currently the connector only supports list validation for Json Arrays. See Json Array compatibility. The connector internally uses the Kafka SchemaBuilder API which only allows a single type for the Array value.I’ve re-opened: https://jira.mongodb.org/browse/KAFKA-175 to investigate further improving support for Json with Schema in the future.Ross", "username": "Ross_Lawley" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Array of Objects become Array of String during upload to Kafka
2020-11-10T18:56:21.967Z
Array of Objects become Array of String during upload to Kafka
6,074
null
[]
[ { "code": "", "text": "Hi!We wish to add case insensititvity to some of our collections, which we distribute through Mongo Realm’s GraphQL integration. Hence we have been looking on collations. From what we can understand, you can add collations to existing, populated collections, but then you’ll have to specify the collation in each query you perform (https://docs.mongodb.com/manual/core/index-case-insensitive/#create-a-case-insensitive-index).On the other hand, you are able to setup a collation which will be inherited in all documents, when you set up your new collection (https://docs.mongodb.com/manual/core/index-case-insensitive/#case-insensitive-indexes-on-collections-with-a-default-collation).Is there any way for us to specify a collation for a collection in our Realm rules for the given collection? If not, is it possible to somehow add some sort of collation rule that is inherited by all existing documents in the collection, so we don’t have to specify the collation in each database query?Thanks in advance!", "username": "petas" }, { "code": "", "text": "Hey @petas,There is no way to automatically apply collation through rules or avoid adding the collation for every query unless you’re using a new collection.If you’re using a default collation that you know you won’t be changing, I suggest creating a new collection and exporting your collection data to the new one.Another option is to use a custom resolver that adds the collation to every query from the client.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Alright, thanks a lot for the quick answer Sumedha!", "username": "petas" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Add collation to Realm collection
2020-11-19T10:12:55.250Z
Add collation to Realm collection
2,541
null
[ "swift" ]
[ { "code": " @objc class RealmTask: Object {\n @objc dynamic var _id: ObjectId? = ObjectId.generate()\n @objc dynamic var _instance: ObjectId? \n @objc dynamic var assigneeId: ObjectId?\n @objc dynamic var comment: String?\n @objc dynamic var completeDate: RealmDateTimeOffset? \n let completerVisitId = RealmOptional<Int>()\n @objc dynamic var createdAt: RealmDateTimeOffset?\n let done = RealmOptional<Bool>()\n @objc dynamic var dueDate: RealmDateTimeOffset?\n @objc dynamic var editorId: ObjectId?\n @objc dynamic var endTime: RealmDateTimeOffset?\n let insightsCustomerId = RealmOptional<Int>()\n let insightsId = RealmOptional<Int>()\n let isAnytime = RealmOptional<Bool>()\n @objc dynamic var reminder: RealmDateTimeOffset?\n @objc dynamic var type: String = \"\"\n @objc dynamic var updatedAt: RealmDateTimeOffset?\n @objc dynamic var updatedBy: ObjectId = ObjectId.generate()\n \n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\n\n@objc class RealmDateTimeOffset: EmbeddedObject {\n @objc dynamic var dateTime: Date = Date()\n @objc dynamic var offset: Int = 0\n}\n try! realm.write {\n let task = RealmTask()\n task.dueDate = RealmDateTimeOffset()\n realm.add(task)\n }\n", "text": "This is my schema:When I execute this code:I get this error:2020-11-18 09:14:50.739130+0200 Skynamo[66587:1209146] RLMException\n2020-11-18 09:14:50.739312+0200 Skynamo[66587:1209146] Invalid value ‘2020-11-18 07:14:50 +0000’ to initialize object of type ‘RealmDateTimeOffset’: missing key ‘dateTime’\n2020-11-18 09:14:50.739514+0200 Skynamo[66587:1209146] {\nRLMRealmCoreVersion = “”;\nRLMRealmVersion = “10.1.3”;\n}This is on Xcode 11.7", "username": "Sonja_Meyer" }, { "code": "", "text": "I copy and pasted your exact code and ran the app.Everything worked as expected; a RealmTask object was created and the dateTime property was correctly populated with the embedded RealmDateTimeOffset.macOS 10.14.6 and 10.15.x\nXCode 11 & 12\nRealm version: 10.1.4", "username": "Jay" }, { "code": "", "text": "Thanks for the reply.For anyone who run into a similar issue. I had my own getters and setters that probably overrides the Realm getters and setters. Still working on it but removing them seems to solve the problem.", "username": "Sonja_Meyer" } ]
Swift Date initialisation problem in Nested Object
2020-11-18T18:35:04.378Z
Swift Date initialisation problem in Nested Object
1,404
null
[ "field-encryption" ]
[ { "code": "", "text": "Hello Team,In AWS server mongodb enterprise 4.4.1 FLE is not working, but same is working on windows machine. I am trying out with the same master key but no luck.Can someone help me with this please.", "username": "khasim_ali" }, { "code": "", "text": "Further logs,{“t”:{\"$date\":“2020-11-01T17:15:13.113+05:30”},“s”:“E”, “c”:“CONTROL”, “id”:24231, “ctx”:“initandlisten”,“msg”:“Failed to open pid file, exiting”,“attr”:{“error”:{“code”:98,“codeName”:“DBPathInUse”,“errmsg”:“Unable to create/open the lock file: D:/Projects/onesingleview\\mongocryptd.pid (The process cannot access the file because it is being used by another process.). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the D:/Projects/onesingleview directory”}}}", "username": "khasim_ali" }, { "code": "mongocryptdmongocryptdmongocryptd", "text": "Hi @khasim_ali, and welcome to the forums!Thanks for providing the error log.\nBased on the error message itself, it looks like either:Regards,\nWan.", "username": "wan" }, { "code": "", "text": "I got this working, thanksAnother question is that csfle encryption works only on port 27017?", "username": "khasim_ali" }, { "code": "mongocryptd", "text": "MongoError: Unable to connect to mongocryptd, please make sure it is running or in your PATH for auto-spawnI am getting this issue, it was working fine some time back. Suddenly stoped workings without any change", "username": "khasim_ali" }, { "code": "--port <value>mongocryptdmongocryptdmongocryptd", "text": "Hi @khasim_ali,Another question is that csfle encryption works only on port 27017?Not exactly sure which part of CSFLE you’re referring to here. If you’re referring to mongocryptd, the default port is running on 27020. Depending on your use case, you can modify this by specifying the --port <value> option of mongocryptd.MongoError: Unable to connect to mongocryptd , please make sure it is running or in your PATH for auto-spawnIt is likely that the mongocryptd is no longer running. Please make sure you have an active mongocryptd running.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Client side field level encryption (CSFLE) not working with AWS server linux ubuntu
2020-10-30T04:02:05.482Z
Client side field level encryption (CSFLE) not working with AWS server linux ubuntu
3,161
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "When using realm for authentication, is it possible to send a custom email when calling the reset password function? Currently, when setting a resetFunctionName, it does not run this function (in here we send our own reset email template).We have it set up the same way for sending the signup confirmation - which works just fine. Only when trying to customize the password reset experience, we are not able to do so.", "username": "Martin_Kayser" }, { "code": "", "text": "Hi @Martin_Kayser,So according to what you are saying usin a confirmation function which sends a confirmation email does work, while using similar logic for resetPassword does not?Can you share your application link? How do you trigger the password reset? Which sdk?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "We currently have it set up as follows:await this.$client.emailPasswordAuth.callResetPasswordFunction(this.email, “RandomPass”);where this.email is the same as username.Calling the function results in the following error:UserNotFound Error (Authentication)\nError:\nuser not foundThe following works, but it does not allow us to send a custom templated email:await this.$client.emailPasswordAuth.sendResetPasswordEmail(this.email);Using JavaScript:“realm-web”: “^0.6.0”,", "username": "Martin_Kayser" }, { "code": "", "text": "Any ideas what could be going wrong?Thanks,\nMartin", "username": "Martin_Kayser" }, { "code": "callResetPasswordFunction", "text": "Hey Martin,Can you try updating your realm-web version to 10.0 - the version you are using was a beta version and the issue may have been fixed in a later version.The syntax in the stable version has also changed to callResetPasswordFunction", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Sending a custom email template for reset password
2020-09-10T16:28:30.761Z
Sending a custom email template for reset password
3,950
null
[ "indexes" ]
[ { "code": "", "text": "Hey!\nI wonder how MongoDB is storing the indexes?\n( I’m using the default storage engine)\nThe indexes are stored in the disk and read to the RAM for every query?\nOr by default, the indexes are store in the RAM until there is no available storage left and then transfer to the disk?Thanks", "username": "guy_more" }, { "code": "", "text": "I would start my hunt at https://docs.mongodb.com/manual/faq/storage/.", "username": "steevej" }, { "code": "", "text": "Short answer:\nIndicies have an in-memory and storage presence. Update are frequently written to storage(~100ms or less).Longer answer see @steevej’s answer.", "username": "chris" }, { "code": "mongodmongod", "text": "Indexes live in RAM but are written to disk frequently so you don’t have to rebuild them if you restart your mongod. For a healthy and happy mongod, all the indexes must fit in RAM.You also need RAM for your frequently accessed documents (so you don’t have to fetch them from the disk each time - this is what we refer to as the “Working Set”)And finally you need some RAM for your queries and pipelines and eventually in-memory sort operations (not great).See https://docs.mongodb.com/manual/tutorial/ensure-indexes-fit-ram/.", "username": "MaBeuLux88" }, { "code": "", "text": "Chipping in with another short answer:To WiredTiger, indexes and collections are not very different and they are treated the same way once they’re loaded into RAM (at least in the current MongoDB version). They’re just stored physically in different WiredTiger “table type” to optimize for snappy compression (for collection) or prefix compression (for indexes). Once loaded into the WiredTiger cache, both of them have a different representation vs. on disk.For WiredTiger to work on anything, they would have to be loaded into RAM (WT cache) first. The OS will take care of caching them in the filesystem cache.Hopefully I’m not introducing any confusion with regard to previous replies Best regards,\nKevin", "username": "kevinadi" } ]
How MongoDB store indexes
2020-11-19T12:53:44.547Z
How MongoDB store indexes
4,975
null
[ "configuration" ]
[ { "code": "", "text": "Hi\nI have M10 instance and I want to apply audit log on document level, is it possible ?\nThanks", "username": "Nilesh_Chourasia" }, { "code": "", "text": "Yes Database Auditing is available for M10+ clusters in Atlas, see https://docs.atlas.mongodb.com/database-auditing/index.htmlCheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
AuditLog on document level
2020-11-19T06:23:09.958Z
AuditLog on document level
1,775
https://www.mongodb.com/…41_2_1024x66.png
[ "on-premises" ]
[ { "code": "", "text": "Dear Team,In our Company we have data present in MongoDB on prem. We are just trying to use the On prem database in the MongoDB charts for our reporting and dashboarding purposes.I have Done the below mentioned steps\n--------docker pull Quay--------docker stack deploy -c charts-docker-swarm-19.12.2.yml mongodb-charts----After that I have run the ```\ndocker service ls and successful it is showing the details as mentioned in the screenshotNow how to use the chart and what is the URL.I tried https://localhost:80 but not working\nimage1401×91 5.04 KB\nThrough Docker I am unable to find any option to open the browser.\nimage1109×269 15.9 KB\n", "username": "Prabhu_Das" }, { "code": "", "text": "It looks like it is publishing 80 and 443 so one should work. You’re trying https over port 80, an unlikely combination.http://localhost\nhttps://localhost", "username": "chris" }, { "code": "", "text": "Chris is correct. If that doesn’t help, try following the steps at https://docs.mongodb.com/charts/19.12/installation#troubleshooting", "username": "tomhollander" } ]
MongoDb Charts for On prem database
2020-11-19T17:10:04.480Z
MongoDb Charts for On prem database
4,069
null
[]
[ { "code": "", "text": "I want to pefrom like below,Step1 :-- login to MongoDB\nStep2:- export data .every where I can see options like export along with URI username &password.Thanks in advance!!", "username": "Anil_Komuravelli" }, { "code": "mongoexportmongoexportmongo", "text": "You might have seen the mongo documentation mongoexport is a command-line tool that produces a JSON or CSV export of data stored in a MongoDB instance.Run mongoexport from the system command line, not the mongo shell.", "username": "Ramachandra_Tummala" } ]
After logging from Mongoshell how to export DB
2020-11-19T17:00:13.204Z
After logging from Mongoshell how to export DB
1,517
null
[]
[ { "code": "", "text": "Does MongoDB have an official naming convention for collection and field naming, at least internally? I tried to find any related documentation and couldn’t found any. However, there are external blog posts like these which suggest different practices.\nLook forward to hearing the thoughts from the internal team and community.", "username": "DhammikaMare" }, { "code": "thisIsMyField : ...\n", "text": "Hi @DhammikaMare,Welcome to MongoDB community.We do not have any specific convention however we do have naming limitations and antipattern:\nhttps://docs.mongodb.com/manual/reference/limits/#naming-restrictionsHaving said that, internally you might notice that we use a known js convention of camle casing any word starting from the second word:Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you @Pavel_Duchovny for sharing naming restrictions and a sample. Will continue with js convention.", "username": "DhammikaMare" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does MongoDB have a naming convention for collection and field naming?
2020-11-17T21:19:51.611Z
Does MongoDB have a naming convention for collection and field naming?
36,195
null
[ "data-modeling" ]
[ { "code": "{\n \"call\": ObjectId,\n \"user\": ObjectId,\n \"createdAt\": ISODate\n}\n{\n \"users\": [{ \"user\": ObjectId, \"createdAt\": ISODate }],\n \"call\": ObjectId\n}\n", "text": "Hi guys! I have a question related to, which is the better schema based on this case:Is better to save assignment logs in a single or in multiple documents?Multiple documents:Single document:In the context of preferring an easy scheme to analyze assignments per user. To respond to question like:How many calls were assigned yesterday?\nWhich users were assigned to X call?\nMonth, week, day, hour, with most than X assignments.\nTotal time in assignment for X call.Thanks!!!", "username": "Matias_Lopez" }, { "code": "users.user", "text": "Hi @Matias_Lopez,Welcome to MongoDB community!The question is what are the most common application user queries and this should define the schema.If your application need to present call details with participate details it make sense to store it in an array if the amount of participants is not large.If I would go with the array approach I would do 2 mandatory things:If there could be houndreds of participants per call I would not keep it in an array ,you can explore a hybrid solution using the Outlier patternThe Outlier Pattern helps when there's exceptionally large records occasionally occurring in your data setI would recommend reading our schema antipattern article\nhttps://www.mongodb.com/article/schema-design-anti-pattern-summaryBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks, @Pavel_Duchovny!! Yesterday I read that post and consistently bucket pattern is our case.", "username": "Matias_Lopez" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Best schema for user logs in analytics context
2020-11-19T03:24:24.595Z
Best schema for user logs in analytics context
1,612
null
[ "indexes" ]
[ { "code": "db.files_v2.createIndex({“container”: 1, “_id”: 1});\ndb.files_v2.createIndex({“container”: 1, “blockId”: 1});\ndb.files_v2.createIndex({“container”: 1, “path”: 1, “_id”: 1});\ndb.files_v2.createIndex({“container”: 1, “path”: 1, “fileNameLower”: 1}, {unique: true});\ndb.files_v2.find({“container”:\"…\"}).sort({\"_id\":1})\ndb.getCollection(‘files_v2’).aggregate(\n [\n { $match : {\n container : “bbbf087e-6d2d-4941-b812-f035407becba:storage”,\n path:{\"ne\":\"\"}\n }},\n { $group: {\n _id:\"\",\n sum :{ $sum: “$size” },\n count : { $sum : 1 }\n }}\n ]\n)\n", "text": "These are my indexes today :A little explanation about the fields :I think I have some duplications in there between the first and the third one but one of my queries isso for that query, I must keep the first index (I think?)Secondly, I have a new query :if this collection scales up the size value is not indexed and you can go over 1mil documents in the disk.\nshould i create new index?db.files_v2.createIndex({“container”: 1, “path”: 1, “size”: 1});or should i add it to that indexdb.files_v2.createIndex({“container”: 1, “path”: 1, “_id”: 1,“size”:1});should i even index size field?Thanks!!!", "username": "guy_more" }, { "code": "", "text": "Hi @guy_more,Question 1:No, there is no duplication here. If you have a compound index {a:1, b:1, c:1}, it means you also have access to the indexes “for free”: {a:1} and {a:1, b:1} or even {a:-1} and {a:-1, b:-1} I think. But definitely not access to {b:1} or {a:1, c:1}.If you had {“container”:1, “_id”:1, “path”:1}, it would then be redundant to have {“container”:1, “_id”:1} that you have at the top of your list.The order of the field really depends on the query you are running. With an index, you are trying to avoid the full collection scan. Which is really the worst of all evil here. But then you can potentially avoid in memory sorts and also eventually on disk fetch because you made a “covered query” - meaning you don’t even need the on disk document - everything you need to answer the query is already in the index.To avoid collection scans, put in your index some fields of your find query. All if you want to avoid useless index entry scans.\nTo avoid in memory sort, you must reach the sort part of the index before any range query like $in or $gt for example. The rule of thumbs here is EQUALITY => SORT => RANGE. If they are in this order in your compound index, then you should not see a sort step in your explain plan - but potentially some extra index entries scans - that’s usually an acceptable trade of as in memory sort are usually more costly in ressources.\nTo have covered queries, you need all the fields you need to resolve the query in your index… Which brings me to…Question 2:\nAn index on {“container”:1, “path”:1, “size”:1} would make this a covered queries. The aggregation pipeline optimises the query automatically in the background and projects the documents to remove unnecessary fields from the pipeline.\nYou are not using the “_id” field in this query so {“container”: 1, “path”: 1, “_id”: 1,“size”:1} would also be used but only the first part: {“container”:1, “path”:1}. The rest of the index would be useless with this query and just take more space in RAM.To sum up:\nTry to syndicate your indexes if you can to avoid redundancy. 
If you can’t afford 3 indexes in RAM like ABC, ABD and ABE because they are too big, maybe just create one of them and the 2 other queries will be able to use at least the AB part of that index which will at least avoid the collection scan. Maybe that would be already acceptable for your target response time. If not then you will need more RAM to create the 3 indexes.\nAlso keep in mind that indexes need to be updated each time you touch that collection. Inserts, deletes, updates, etc. All these operations will be a tiny bit slower each time you add a new index.\nWhen in doubt between 2 indexes. In 99.999% of the cases, the MongoDB query optimizer will choose the right one for you - given that you are running this on a realistic data set which represents your prod data. So in doubt, create both if you can afford it (RAM, time, etc) and run your query with an explain. Keep the one that is used by MongoDB.I hope this helps !\nCheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "/* 1 */\n{\n“serverInfo” : {\n“host” : “TLVR-00018-D”,\n“port” : 27017,\n“version” : “3.6.18”,\n“gitVersion” : “2005f25eed7ed88fa698d9b800fe536bb0410ba4”\n},\n“stages” : [\n{\n“$cursor” : {\n“query” : {\n“container” : “bbbf087e-6d2d-4941-b812-f035407becbe:storage”,\n“path” : {\n“$ne” : “.”\n}\n},\n“fields” : {\n“size” : 1,\n“_id” : 0\n},\n“queryPlanner” : {\n“plannerVersion” : 1,\n“namespace” : “storagedb.files_v2”,\n“indexFilterSet” : false,\n“parsedQuery” : {\n“$and” : [\n{\n“container” : {\n“$eq” : “bbbf087e-6d2d-4941-b812-f035407becbe:storage”\n}\n},\n{\n“$nor” : [\n{\n“path” : {\n“$eq” : “.”\n}\n}\n]\n}\n]\n},\n“winningPlan” : {\n“stage” : “FETCH”,\n“inputStage” : {\n“stage” : “IXSCAN”,\n“keyPattern” : {\n“container” : 1.0,\n“path” : 1.0,\n“_id” : 1.0\n},\n“indexName” : “container_1_path_1__id_1”,\n“isMultiKey” : false,\n“multiKeyPaths” : {\n“container” : ,\n“path” : ,\n“_id” : \n},\n“isUnique” : false,\n“isSparse” : false,\n“isPartial” : false,\n“indexVersion” : 2,\n“direction” : “forward”,\n“indexBounds” : {\n“container” : [\n“[“bbbf087e-6d2d-4941-b812-f035407becbe:storage”, “bbbf087e-6d2d-4941-b812-f035407becbe:storage”]”\n],\n“path” : [\n“[MinKey, “.”)”,\n“(”.\", MaxKey]\"\n],\n“_id” : [\n“[MinKey, MaxKey]”\n]\n}\n}\n},\n“rejectedPlans” : [\n{\n“stage” : “FETCH”,\n“filter” : {\n“$nor” : [\n{\n“path” : {\n“$eq” : “.”\n}\n}\n]\n},\n“inputStage” : {\n“stage” : “IXSCAN”,\n“keyPattern” : {\n“container” : 1.0,\n“_id” : 1.0\n},\n“indexName” : “container_1__id_1”,\n“isMultiKey” : false,\n“multiKeyPaths” : {\n“container” : ,\n“_id” : \n},\n“isUnique” : false,\n“isSparse” : false,\n“isPartial” : false,\n“indexVersion” : 2,\n“direction” : “forward”,\n“indexBounds” : {\n“container” : [\n“[“bbbf087e-6d2d-4941-b812-f035407becbe:storage”, “bbbf087e-6d2d-4941-b812-f035407becbe:storage”]”\n],\n“_id” : [\n“[MinKey, MaxKey]”\n]\n}\n}\n},\n{\n“stage” : “FETCH”,\n“inputStage” : {\n“stage” : “IXSCAN”,\n“keyPattern” : {\n“container” : 1,\n“path” : 1,\n“fileNameLower” : 1\n},\n“indexName” : “container_1_path_1_fileNameLower_1”,\n“isMultiKey” : false,\n“multiKeyPaths” : {\n“container” : ,\n“path” : ,\n“fileNameLower” : \n},\n“isUnique” : true,\n“isSparse” : false,\n“isPartial” : false,\n“indexVersion” : 2,\n“direction” : “forward”,\n“indexBounds” : {\n“container” : [\n“[“bbbf087e-6d2d-4941-b812-f035407becbe:storage”, “bbbf087e-6d2d-4941-b812-f035407becbe:storage”]”\n],\n“path” : [\n“[MinKey, “.”)”,\n“(”.\", MaxKey]\"\n],\n“fileNameLower” : [\n“[MinKey, MaxKey]”\n]\n}\n}\n}\n]\n}\n}\n},\n{\n“$group” : {\n“_id” : {\n“$const” : 
“”\n},\n“sum” : {\n“$sum” : “$size”\n},\n“count” : {\n“$sum” : {\n“$const” : 1.0\n}\n}\n}\n}\n],\n“ok” : 1.0\n}\n/* 1 */\n{\n“serverInfo” : {\n“host” : “TLVR-00018-D”,\n“port” : 27017,\n“version” : “3.6.18”,\n“gitVersion” : “2005f25eed7ed88fa698d9b800fe536bb0410ba4”\n},\n“stages” : [\n{\n“$cursor” : {\n“query” : {\n“container” : “bbbf087e-6d2d-4941-b812-f035407becbe:storage”,\n“path” : {\n“$ne” : “.”\n}\n},\n“fields” : {\n“size” : 1,\n“_id” : 0\n},\n“queryPlanner” : {\n“plannerVersion” : 1,\n“namespace” : “storagedb.files_v2”,\n“indexFilterSet” : false,\n“parsedQuery” : {\n“$and” : [\n{\n“container” : {\n“$eq” : “bbbf087e-6d2d-4941-b812-f035407becbe:storage”\n}\n},\n{\n“$nor” : [\n{\n“path” : {\n“$eq” : “.”\n}\n}\n]\n}\n]\n},\n“winningPlan” : {\n“stage” : “PROJECTION”,\n“transformBy” : {\n“size” : 1,\n“_id” : 0\n},\n“inputStage” : {\n“stage” : “IXSCAN”,\n“keyPattern” : {\n“container” : 1.0,\n“path” : 1.0,\n“_id” : 1.0,\n“size” : 1.0\n},\n“indexName” : “container_1_path_1__id_1_size_1”,\n“isMultiKey” : false,\n“multiKeyPaths” : {\n“container” : ,\n“path” : ,\n“_id” : ,\n“size” : \n},\n“isUnique” : false,\n“isSparse” : false,\n“isPartial” : false,\n“indexVersion” : 2,\n“direction” : “forward”,\n“indexBounds” : {\n“container” : [\n“[“bbbf087e-6d2d-4941-b812-f035407becbe:storage”, “bbbf087e-6d2d-4941-b812-f035407becbe:storage”]”\n],\n“path” : [\n“[MinKey, “.”)”,\n“(”.\", MaxKey]\"\n],\n“_id” : [\n“[MinKey, MaxKey]”\n],\n“size” : [\n“[MinKey, MaxKey]”\n]\n}\n}\n},\n“rejectedPlans” : \n}\n}\n},\n{\n“$group” : {\n“_id” : {\n“$const” : “”\n},\n“sum” : {\n“$sum” : “$size”\n},\n“count” : {\n“$sum” : {\n“$const” : 1.0\n}\n}\n}\n}\n],\n“ok” : 1.0\n}\n", "text": "Thanks for your quick response !! it was really helpful.\nregarding what you said about\nthe new index {“container”:1,“path”:1,\"_id\":1,“size”:1}Today without the new index the query execution time for a large collection is 6 sec.this is the queryPlanner:How ever with the new index the execution time takes 2 sec and this is the queryPlannerAre you sure he doesn’t use the “_id” field inside the index because from run time preceptive its looks like it using the index? I’m using Mongo 3.6", "username": "guy_more" }, { "code": "“fields” : {\n “size” : 1,\n “_id” : 0\n}\n$nedb.coll.updateMany({path:\"\"},{$unset: {path:1}}){\"path\": {$exists: true}}", "text": "Your pipeline is filtering out the _id field. You are not outputting the _id nor using its value in this pipeline so yes, I think it’s useless to have it in this index if it’s not used by another query.\nPlease create this index without the _id and make another explain to see which one is selected. If I’m reading this correctly, this index is still covering this pipeline as I don’t see a FETCH stage.Also, I guess your “container” value will change from one pipeline to the next. But I assume you are only interested in documents where “path” is not an empty string. If this index is only ment to be used by this query, you could consider using a partial index. Sadly you can’t use $ne in a PartialFilterExpression. However, maybe you could run a script like db.coll.updateMany({path:\"\"},{$unset: {path:1}}) to remove the field “path” when it’s an empty string and instead use in your aggregation pipeline and your PartialFilterExpression {\"path\": {$exists: true}}If you have many fields where “path” is an empty string, this would make the index smaller and save you some RAM.2 comments:Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
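A sketch of the two alternatives suggested above — the covered-query index without _id, or the smaller partial index after unsetting empty path values (pick one; names and the example container value come from the thread, and the explain output should be verified on real data):

```javascript
// Alternative A: covered index for the $match/$group pipeline (the pipeline never uses _id).
db.files_v2.createIndex({ container: 1, path: 1, size: 1 });

// Alternative B: remove the empty-string "path" values once, then use a partial index
// and query with {path: {$exists: true}} instead of {path: {$ne: ""}}.
db.files_v2.updateMany({ path: "" }, { $unset: { path: 1 } });
db.files_v2.createIndex(
  { container: 1, path: 1, size: 1 },
  { name: "container_path_size_partial",
    partialFilterExpression: { path: { $exists: true } } }
);

// Check that the winning plan is an IXSCAN with a PROJECTION and no FETCH stage.
db.files_v2.aggregate(
  [
    { $match: { container: "bbbf087e-6d2d-4941-b812-f035407becba:storage",
                path: { $exists: true } } },
    { $group: { _id: "", sum: { $sum: "$size" }, count: { $sum: 1 } } }
  ],
  { explain: true }
);
```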
Improving my indexes
2020-11-18T17:22:50.672Z
Improving my indexes
2,348
null
[ "node-js" ]
[ { "code": "", "text": "Is there any way to create trigger from nodeJS application in MongoDB Atlas instead of creating them from atlas.\nfor example: we define model in node app and when we run the app, model will be created automatically in atlas if not exist.\nI want same thing in trigger.\nI want to define trigger using node app instead of defining it using MongoDB atlas", "username": "2018_12049" }, { "code": "", "text": "You might want to take a look at https://docs.mongodb.com/manual/changeStreams/.", "username": "steevej" }, { "code": "const pipeline = [\n {\n $project: { documentKey: false }\n }\n ];\n try {\n var db = sails.getDatastore().manager.client;\n var collection = db.collection('user');\n const changeStream = collection.watch(pipeline);\n changeStream.on(\"change\", function(change) {\n console.log(change);\n });\n } catch (err) {\n res.serverError(err);\n }\n", "text": "Ya I explored this also. the problem is I’m implementing change stream in sails JS\nfor that there is no reference documents available.\nHere is the code that I have wrote for the change stream in sailsJS but it says watch() is not definedLooking forward to get some insights", "username": "2018_12049" }, { "code": "", "text": "@steevej can you please help me out", "username": "2018_12049" }, { "code": "", "text": "No need to ask for my help explicitly. I look at the forum when I have time and I answer when I can. If I did not answer to your previous post it is because I have absolutely no clue about sailsJS. And writing that I have no clue about sailsJS would have been a useless post that other people would waste time reading. When I do not follow or answer, it is because I do not know or I am not available.", "username": "steevej" } ]
How to create trigger from node JS application
2020-11-17T09:06:33.190Z
How to create trigger from node JS application
4,347
https://www.mongodb.com/…4_2_1024x512.png
[ "queries" ]
[ { "code": "", "text": "Hi All,\nI want to know that is there is any way to create Triggers for MongoDB(local). I know that we can create Triggers for MongoDB Atlas which is a cloud service and here is the documentation link:\nIf we can create triggers via Robo 3T then how ?~ Thanks in advance.", "username": "Nabeel_Raza" }, { "code": "", "text": "Hi Nabeel,MongoDB Atlas Triggers are implemented in the Atlas service using the Change Streams feature of MongoDB Server.You can implement similar functionality for your own application or API using Change Streams.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks @Stennie_X for answering my question. Could you please be specific in Yes/No that is there is any way to write triggers in Robo 3T? We can write triggers for MongoDB Atlas only which is an cloud service, right?", "username": "Nabeel_Raza" }, { "code": "", "text": "Hi @Nabeel_Raza,Triggers are not part of the core MongoDB Server functionality. Database Triggers are implemented using the Change Streams functionality to watch a deployment for relevant changes which are then passed to a function; Scheduled Triggers are also implemented in the Atlas service.MongoDB Atlas includes an implementation that runs as part of the cloud service, but you can also implement similar behaviour in your own application or API for a self-hosted deployment.The Yes/No interpretation for implementing triggers for a self-hosted deployment is:Yes: if you write code in your application or API using Change Streams and/or a scheduling service. The triggers are configured and implemented outside of Robo 3T.No: if you do not have an application or API and are trying to do this entirely in Robo 3T.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Triggers using Robo 3T
2020-11-18T11:39:47.520Z
Triggers using Robo 3T
3,543
null
[ "data-modeling", "atlas-device-sync" ]
[ { "code": "", "text": "Hi. I have a scenario that appears to me as a poor fit for Realm sync, but I’d appreciate the thoughts and advice of others in case I’ve overlooked something.Here is the scenario, simplified as best I can:Logical entities\nSite->Job->Job details(outputs logged to each worker) + workers/users etc4 Roles\nEmployee-Admin (read/write access to all jobs and work details - creates jobs and assigns staff)\nEmployee-Senior (read/write access to assigned jobs)\nEmployee-Junior (read access to just the work details they performed)\nSite Manager (external to org - read access to all work details for each job performed at their site)Additional info\nData entry during a job could be messy. A senior employee could add job details for themselves and 3 junior employees, whilst an employee with the admin role could add job details for themselves plus a junior employee. Jobs may only last a few days before the workers move to another site. They rotate through sites, meaning the concept of a site is sufficiently independent of a job, and many sites have no internet availability where the work is conducted and data entry performed.From what I’ve learnt of Realm sync so far, it would seem that client-side filtering may be easiest for internal staff (where cross-role security might not be much of an issue and there are very few data writers), whereas external staff might need a different Realm App using a different partition key, or a duplicated set of records with a partition value set just for them. As these sound like nasty compromises I’m hoping someone has better ideas than me as to how to tackle this scenario.", "username": "Jared_Homes" }, { "code": "", "text": "@Jared_Homes Sounds like you’ve done some research already. Our initial implementation with partition based sync makes it a bit more difficult to allow the kind of hierarchy of roles and the sharing of data that you described but we are currently putting a plan together to allow for more flexible syncing in the future.However there is a solution you could implement today to solve this as long as you are okay with denormalizing and duplicating the data. You can create a per-user partition and then copy shared job documents between users that have access. You can use Atlas database triggers to observe changes occurring on one user’s partition and copy to all other user’s who also have access to that same job document - https://docs.mongodb.com/realm/triggers/database-triggers/", "username": "Ian_Ward" }, { "code": "", "text": "@Ian_Ward Thanks for triggers tip. I’ll keep that in mind, however in my scenario I’d anticipate 99% of the records in the database being low level job details with each document needing to sync to 4-5 users on average. Most of the database would then consist of heavily duplicated data…+1 vote for more flexible syncing (or a query-based sync) in the near future. And more .Net love with code samples etc. Thanks.", "username": "Jared_Homes" } ]
Help with MongoDB Realm partitioning strategy
2020-11-13T01:08:02.956Z
Help with MongoDB Realm partitioning strategy
2,058
null
[ "atlas", "connector-for-bi" ]
[ { "code": "", "text": "Hi,I’m using Windows ODBC and my database is in Atlas. To read I have no problem, but to add it shows[MySQL][ODBC 1.0(w) Driver][mysqld-5.7.12 mongosqld v2.14.0]insert requires --writeModeAnyone facing this problem and how to overcome this?", "username": "Alfirus_Ahmad" }, { "code": "", "text": "Hi @Alfirus_Ahmad,I believe you are using the BI connector to read the data. Please note that bi connector is read only by definition.You cannot use it for writes. Use any of our language native drivers to use CRUD.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This is a big draw back for me because 1 of my system using asp classic.", "username": "Alfirus_Ahmad" }, { "code": "", "text": "Hi @Alfirus_Ahmad,I understand. What you can do is open a feature request at https://feedback.mongodb.comBest\nPavel", "username": "Pavel_Duchovny" } ]
ODBC insert requires --writeMode
2020-11-19T01:11:42.966Z
ODBC insert requires &ndash;writeMode
3,033
null
[]
[ { "code": "", "text": "How we can get the old record in trigger with an update event.\nI want this old record to maintain history.", "username": "2018_12049" }, { "code": "", "text": "Hi @2018_12049,Triggers are defined to act on a change and not before a change. What I would suggest is to use triggers to move data from history to your live collection.This means that you actually store your history data and most up to date records are pushed to your main collection via a trigger.Best\nPavel", "username": "Pavel_Duchovny" } ]
How to get old record in atlas trigger
2020-11-17T09:02:50.361Z
How to get old record in atlas trigger
1,594
null
[]
[ { "code": "", "text": "Hi guys, right now we need to use Azure Data Factory (it’s like SSIS in cloud) to extract, transfrom and load information from ORACLE (OnPremise using Express Route) to Mongo Atlas (Azure cloud) but there is not sink connector for Mongo Atlas, so we need your help to realize how to do this integration.If there is not a sink connector or a custom connector, what would be the proper way to do this integration?Maybe expose a api that ADF could consume?Any suggestions will be apreciated.Thanks for your help.Pd. this post was posted in Microsoft forums too.https://docs.microsoft.com/en-us/answers/questions/165748/is-there-a-way-to-use-mongo-atlas-like-target-sink.html?childToView=167016#answer-167016", "username": "Jose_Alejandro_Benit" }, { "code": "", "text": "Hi @Jose_Alejandro_Benit,Welcome to MongoDB community!I am not sure if it will be easy to customize Data Factory to sync data to MongoDB Atlas.But what I saw people do is use the Azure kafka managed service and kafka sink connector for MongoDB to sync the databases:Build real-time, event-driven services and applications with a scalable, resilient, and secure data streaming solutionMaybe ADF has a kafka connector as well.Let me know if that helpsBest\nPavel", "username": "Pavel_Duchovny" } ]
Is there a way to use Mongo Atlas like target (sink) in a Azure Data Factory copy o migration operations?
2020-11-18T18:34:53.175Z
Is there a way to use Mongo Atlas like target (sink) in a Azure Data Factory copy o migration operations?
2,641
null
[ "replication" ]
[ { "code": "", "text": "Is there any way to identify the previous primary in the Replica Set structure?For example, there are nodes 1, 2, 3 and in the past, primary 1 is primary 3 is now primary. Can you confirm that number 1 was primary in the past?", "username": "Kim_Hakseon" }, { "code": "rs.status()mongo", "text": "Hi @Kim_Hakseon,You can confirm current replica set member states via the rs.status() helper in the mongo shell, but historical state information is only available via monitoring solutions or log files.I recommend using a deployment-level monitoring solution (for example MongoDB Cloud Manager or Ops Manager) to track status changes. Monitoring solutions will generally allow you to set up alerts for changes of interest such as election of a new primary, increasing replica set lag, or replica set reconfiguration.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you, Thank you ", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Primary History
2020-11-19T01:23:17.439Z
Primary History
3,233
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.2.11 is out and is ready for production deployment. This release contains only fixes since 4.2.10, and is a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.11 is released
2020-11-19T00:09:02.258Z
MongoDB 4.2.11 is released
2,498
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.4.2 is out and is ready for production deployment. This release contains only fixes since 4.4.1, and is a recommended upgrade for all 4.4 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.4.2 is released
2020-11-19T00:07:02.949Z
MongoDB 4.4.2 is released
2,468
null
[ "node-js", "connecting" ]
[ { "code": " MongoClient.connect(\n \"mongodb://<username>:<password>@127.0.0.1:27018/?ssl=false\",\n {\n ssl: false,\n sslValidate: false,\n useNewUrlParser: true,\n useUnifiedTopology: true,\n }\n", "text": "I’m trying to connect to my mongodb instance through a ssh tunnel. I’m able to connect when using the mongo cli with the following commandmongo “mongodb://:@127.0.0.1:27018/?ssl=false”But when i use npm library mongodb and callingI get a connection timeout.\nCan anyone help me out on this one?", "username": "Mathias_haudgaard" }, { "code": "localhost127.0.0.1", "text": "Just for fun, have you tried making it localhost instead of 127.0.0.1?", "username": "Jack_Woehr" } ]
Connecting to mongodb via ssh tunnel in Node.js
2020-11-18T18:36:19.426Z
Connecting to mongodb via ssh tunnel in Node.js
8,463
null
[ "golang" ]
[ { "code": "", "text": "I’d like to continue the conversation we started in the Google Group about an ease-of-use feature-set - making day-to-day operations easier/more compact.\nI can copy the contents over, but it sounds like that thread will be archived.\nUltimately, I’d love to work with the team and the community to help develop a really straightforward interface and I am happy to use my own hours to get it done.", "username": "TopherGopher" }, { "code": "", "text": "Also adding this ticket as an example of ease-of-use features for the library: https://jira.mongodb.org/browse/GODRIVER-903\nListCollections() and Indexes().List() both return a cursor that you have to define a custom type and unmarshal into. Why don’t they just return a slice of Collection{} and a slice of IndexModel{} instead, so that I as a developer can access things WAY more efficiently.", "username": "TopherGopher" }, { "code": "", "text": "Hi Christopher,We currently have our hands full working on the 1.4.0 driver release, which will correspond with the MongoDB 4.4 server release. Once that’s over, I plan to organize a meeting with the team to go over the feedback you outlined and get you an answer about which tickets we can re-open/accept PRs for now and which ones are already planned for cross-drivers work in the future (e.g. client side operations timeouts). Does this sound OK to you?– Divjot", "username": "Divjot_Arora" }, { "code": "mongo-for-humans", "text": "Actually Divjot, I was hoping we could take a slightly different tack.\nI understand that the driver tries to use the API fairly exclusively and at this point we can’t introduce breaking changes, and my primary qualm is with how hard the functions themselves are to use. It takes me forever to get teammates spun up and it’s hugely error prone because there are just so many knobs you have to turn and so many gotchas you have to check for.I would like to add another package to mongo-go-driver - mongo-for-humans or something similar which still uses mongo-go-driver, but abstracts a lot of the complexity away from the developer. That way, if you want, you can have full control, or if you are just doing something common (e.g. SelectAll with no timeout), then it’s really easy. If you support this route, I would love to just come up with an interface spec together here and I’m happy to do a majority of the work.", "username": "TopherGopher" }, { "code": "mgocompat.Registrymongomongo", "text": "I understand that there are things that are more difficult to do in our API compared to mgo and many of these are because we have a set of specs to follow and have to ensure that we offer a way to work with both raw BSON and decode BSON into native Go types so an application can choose what it wants to do based on performance constraints. I’m not convinced that another package is the right way to proceed, though. Many of the BSON issues you linked in your original post have been solved by the mgocompat.Registry, released in v1.3.0 of the driver. This new BSON registry was introduced to address the issues users were facing when migrating BSON libraries. Some of the other BSON features linked would have to be solved in our BSON library, not in a helper. An example is a struct tag to treat string fields as ObjectIDs. Many of the CRUD helpers in the tickets you linked (e.g. 
FindAll) would be added to the main mongo package rather than having users remember that some helpers are in mongo and others are in a separate helper package.", "username": "Divjot_Arora" }, { "code": "if err = coll.Find(query).All(&sliceToUnpackTo).WithTimeout(time.Minute); err != nil {}\n ctx, cancel := context.WithTimeout(context.Background(), time.Minute)\n defer cancel()\n cursor, err := coll.Find(ctx, query)\n defer cursor.Close(context.Background())\n for cursor.Next(context.Background()) {\n if err = cursor.Decode(&someObj); err != nil {\n break\n }\n sliceToUnpackTo = append(sliceToUnpackTo, someObj)\n }\n if err == nil {\n err = cursor.Err()\n if err == nil && len(sliceToUnpackTo) { err = ErrNotFound }\n }\n if err != nil {\n return err\n }\n", "text": "Hey Divjot -Happy Monday I truly appreciate you working the tickets I linked - that will make life easier for most.My fundamental issue is how verbose normal functions are to implement.Ultimately, I’m about the least about of key strokes - I like to write lean, clean code, and unfortunately it’s not possible to be DRY with the current go library for standard tasks.The problem is there are too many knobs exposed that have to be turned. I want to leverage all these awesome and cool features you’ve exposed to make some really simple, less error prone functions. Look at how clean and readable this line is. It’s really easy for me to teach new developers - they just check a single error object. Forget about how we would do it for a moment, and just compare:Here’s how I have to do that in today’s world:No need to deal with using context or cancel functions or closing the cursor. (They always forget the defers). No need to play with defering the close of the cursor. Every time they implement, I always see them forget to check for a decode error or a cursor error, or setting the error not found. Usually, day to day, this is what our developers are doing. We’ve needed to actually use the cursor to do something interesting three times across literally hundreds of functions. It’s useful to have, but man it makes for verbose and repeated code.I want a standard mongo library to have functions to make unpacking and timeouts easy. Because of the current mongo-go-driver design though, introducing this friendly syntax there isn’t possible because of backwards breaking changes. This is why I suggest a different package to wrap mongo-go-driver.Do you follow my logic?", "username": "TopherGopher" }, { "code": "", "text": "I can use .All() as well, but my point is the number of errors you have to check when you’re doing it is still higher. It would be nice to bundle that up a little.", "username": "TopherGopher" }, { "code": "AllFindAllAggregateAll", "text": "All is definitely a more concise way of doing this. Changing how we do timeouts isn’t really on the table. Parts of the Go language that were built before context.Context has been updated to accept contexts (e.g. net/http). I would be open to a PR to add FindAll/AggregateAll to condense two error checks into one, but adding a completely separate fluent API isn’t really possible, especially given the fact that Go doesn’t have method overloading.", "username": "Divjot_Arora" }, { "code": "", "text": "As a heavy user of this library, I, a user, am providing feedback that the current interface isn’t user friendly in general compared to literally every other mongo golang package out there. 
I don’t think we need method overloading, so that works out - Ideally, I want a different interface on top of mongo-go-driver - wrapping the existing code in mongo-go-driver, not the API.\nLet’s put how to do timeouts aside and realize that usually we aren’t using a context, we pass nil, background or todo, so why force the user to specify it on every call? Why not just add it as an option instead? We can’t make Distinct easier without breaking the library, We can’t make these kinds of changes without breaking the library. That’s why I want to work together to design some wrappers in a separate library that I’m willing to maintain myself.\nThere are just design decisions in this library that have been made with Mongo, not the community, in mind. And that’s fine, we have different target audiences for our libraries.In the Google thread I think that you had mentioned trying to do an mgo shim. We could use that as an opportunity to team up, extend that further and create something awesome. I envision things like, rather than having to manipulate a cursor, you could instead read from a channel.I think what we could do is find libraries that currently consume mongo-go-driver and see what kind of patterns are being used to help drive what users are usually doing.", "username": "TopherGopher" }, { "code": "", "text": "The approach taken for the PHP driver is interesting here: while the original PHP extension was designed to be used in userland, the second-generation extension was reduced in scope, to provide a small and inconvenient lower-layer, on top of which the userland driver was written, and meant to be used by most developers.\nWe have a similar situation here, with the new driver being aligned in terms of API with other MongoDB drivers, but being arguably less convenient than mgo. It could be used to provide the basis for a more user-friendly and more idiomatic driver written on top of it.", "username": "Frederic_Marand" }, { "code": "", "text": "Hi,Sorry I haven’t updated this thread in a while. Our product manager is looking into this and conducting some field research regarding the API differences between the Go driver and mgo. As you can imagine, we want to make sure we don’t speed through this as we want this to be an official API that we can recommend to our users.We’ll post updates here once we have a concrete plan.– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "We’re moving from node.js to golang. The driver is a real blocker. I’m a new user of the driver, but I’d like to say that @TopherGopher has done you are real service here. I’ve spent my first three days on a new project trying to box the driver out of the rest of my code. It’s requiring me and others to organize the code in ways that are discouraged by the go community.", "username": "Dustin_Currie" }, { "code": "", "text": "To provide an update on this, we are continuing to meet with community members to gather feedback about the pain points of the driver and from there, we will figure out whether there are improvements we can make directly to the driver or we need to add a separate API.@Dustin_Currie Sorry to hear the migration process has been tough. 
If you have any specific questions, feel free to create new threads about them and hopefully we can provide some advice to make things easier.– Divjot", "username": "Divjot_Arora" }, { "code": "bson.Rawinterface{}boolstringfloat64nil[]interface{}map[string]interface{}", "text": "@Divjot_Arora while we wait for improvements, is it possible to get an escape hatch. I desperately need something that will translate bson.Raw into an interface{} that consists of only go default concrete types. e.g. bool, string, float64, nil []interface{} and map[string]interface{}. see the go blogs json post here:How to generate and consume JSON-formatted data in Go.", "username": "Dustin_Currie" }, { "code": "", "text": "I’m surprised that the mongo driver pushes its internal type system onto users. I don’t know the numbers, but it seems like a majority of people who are writing in Go with MongoDB are in my situation–a node app is too slow so we’re rewriting. In this situation, it’s going to be common to need to model deep json hierarchies as interface{}. In my situation, I have data that just needs to pass through to the front end. Writing a huge set of structs to model it, just so I can transform it from bson to json, is a real economic problem.", "username": "Dustin_Currie" }, { "code": "/*\n|--------------------------------------------------------------------------\n| Mongo Database Connection\n|--------------------------------------------------------------------------\n|\n| We are using mongo-go-driver to connect to mongodb\n| Connect is used to make connection\n| TestConnect is used to make connection while running tests\n|\n*/\n\npackage database\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"github.com/joho/godotenv\"\n\tletslog \"github.com/letsgo-framework/letsgo/log\"\n\t\"go.mongodb.org/mongo-driver/mongo\"\n\t\"go.mongodb.org/mongo-driver/mongo/options\"\nmakePrimaryKeyFilter()\n \n func (ms *MongoStore) Save(ctx context.Context, r *Record) error {\n \t_, err := ms.collection.InsertOne(\n \t\tctx, bson.M{\n \t\t\t\"_id\": r.ID,\n \t\t\t\"name\": r.Name,\n \t\t\t\"version\": r.Version,\n \t\t\t\"dependencies\": r.Dependencies,\n \t\t},\n \t)\n \n \n\tif err != nil {\n \t\tif ferr, ok := err.(mongo.WriteException); ok {\n \t\t\t// ignore duplicate key error\n \t\t\tif len(ferr.WriteErrors) > 0 && ferr.WriteErrors[0].Code == 11000 {\n \t\t\t\treturn nil\n \t\t\t}\n \t\t}\n \n \n\t\treturn err\n \t}\n \n \n\n \n \n }\n \n \nfunc (c *connector) UpdateOne(ctx context.Context, filter interface{}, update interface{}, opts ...*options.UpdateOptions) (*mongo.UpdateResult, error) {\n \treturn c.collection.UpdateOne(ctx, filter, update, opts...)\n }\n \n \nfunc (c *connector) InsertOne(ctx context.Context, document interface{}, opts ...*options.InsertOneOptions) (*mongo.InsertOneResult, error) {\n \treturn c.collection.InsertOne(ctx, document, opts...)\n }\n \n \nfunc (c *connector) FindOne(ctx context.Context, filter interface{}, structToDeserialize interface{}, opts ...*options.FindOneOptions) error {\n \traw, err := c.collection.FindOne(ctx, filter, opts...).DecodeBytes()\n \tif err != nil {\n \t\treturn err\n \t}\n \treturn bson.UnmarshalWithContext(bsoncodec.DecodeContext{\n \t\tRegistry: bson.DefaultRegistry,\n \t\tTruncate: true,\n \t}, raw, structToDeserialize)\n }\n \n \n\n \n \n \t\t}\n \n \n\t}\n \tindexOpts := options.Index().\n \t\tSetExpireAfterSeconds(int32(mstore.options.MaxAge)).\n \t\tSetBackground(true).\n \t\tSetSparse(true).\n \t\tSetName(indexName)\n \n \n\tindexModel := mongo.IndexModel{\n 
\t\tKeys: bson.M{\n \t\t\t\"modified_at\": 1,\n \t\t},\n \t\tOptions: indexOpts,\n \t}\n \t_, err = mstore.coll.Indexes().CreateOne(ctx, indexModel)\n \tif err != nil {\n \t\treturn fmt.Errorf(\"mongodbstore: error ensuring TTL index. Unable to create index: %w\", err)\n \t}\n \n \n\treturn nil\n \n context.TODO()", "text": "I thought comparing a few of the articles available might be telling of some common usage patterns.MongoDB drivers allow node.js applications to connect with MongoDB to work with the data. In this blog we will give an overview of the MongoDB Go Driver.\nEst. reading time: 4 minutes\nHow to create an index in MongoDB using Go and the official mongo-go-driverhttps://vkt.sh/go-mongodb-driver-cookbook/https://wb.id.au/computer/golang-and-mongodb-using-the-mongo-go-driver/Like the other official MongoDB drivers, the Go driver is idiomatic to the Go programming language and provides an easy way to use MongoDB as the database so…On December 13, 2018 MongoDB released its official Go Driver into beta, ready for the wider Go and MongoDB community to put it into…\nReading time: 6 min read\nGet started with the official MongoDB Go DriverThen I went to find projects that consume the mongo-go-driver and spot checked a few (some are garbage or test projects, but it gives a great idea of where people who are starting out struggle or miss something key): mongo package importedby - go.mongodb.org/mongo-driver/mongo - Go Packageshttps://bitbucket.org/QuizKhalifa/mongowrapper/src/master/manipulation.gohttps://github.com/ornell/mongoDB/blob/master/mongodb.gohttps://github.com/pipe-cd/pipe/blob/master/pkg/datastore/mongodb/mongodb.goGitLab Community EditionGitLab Community Editionapicodegen", "username": "TopherGopher" }, { "code": "", "text": "There are 3 other feature gaps that I’m hoping we can address somehow.", "username": "TopherGopher" }, { "code": "", "text": "@TopherGopher Thank you for organizing all this feedback! As we plan out these improvements, we looking into the points that you’ve brought up.–Isabella", "username": "Isabella_Siu" }, { "code": "", "text": "Hi! I’ve made a separate post for the improved error API that we’re considering, where we would like some user feedback: Seeking Developer Feedback: Go Driver Error Improvements", "username": "Isabella_Siu" } ]
Developer feedback for ease-of-use improvements for Go Driver
2020-03-03T05:17:40.747Z
Developer feedback for ease-of-use improvements for Go Driver
5,579
null
[ "aggregation", "anti-patterns" ]
[ { "code": "", "text": "What are some of the biggest mistakes people make in aggregation pipelines? How can people avoid these mistakes?Paging @Asya_Kamsky after her fantastic talk at .live today", "username": "Lauren_Schaefer" }, { "code": "", "text": "Hi\nthe biggest mistake is not to use an aggregation pipeline. Many, literally over 50% of the projects I see, have the same pattern: SQL people try out MongoDB. They limit their test to CRUD operations, some do not even know that there is something like an aggregation pipeline, other feel lost in too many curly braces …\nCheers\nMichaelPS if you want to get a concrete answer on the biggest mistakes", "username": "michael_hoeller" }, { "code": "find$map$reduce$filter", "text": "I find these most often:", "username": "Prasad_Saya" }, { "code": "", "text": "Yes - The aggregation pipeline is a hidden gem", "username": "Lauren_Schaefer" }, { "code": "", "text": "@Prasad_Saya Thanks for sharing! What formatting rules do you use?", "username": "Lauren_Schaefer" }, { "code": "", "text": "Hithis would be nice to have in VSCode with the MongoDB plugin", "username": "michael_hoeller" }, { "code": "db.collection.aggregate([\n { \n $group: {\n _id: \"$workers.order\", \n order_avg: { $avg: { $subtract: [ \"$workers.endAt\", \"$workers.startAt\" ] } },\n global_values: { $addToSet: { some_id: \"$_id\", duration: { $subtract: [ \"$endAt\", \"$startAt\" ] } } },\n another_field: { ... ... ...}\n } \n }\n])\n { \n $group: {\n _id: \"$workers.order\", \n order_avg: { \n $avg: { \n $subtract: [ \"$workers.endAt\", \"$workers.startAt\" ] \n } \n },\n global_values: { \n $addToSet: {\n some_id: \"$_id\", \n duration: { $subtract: [ \"$endAt\", \"$startAt\" ] } \n } \n },\n another_field: { \n ... ... ...\n }\n } \n }\n { \n $group: {\n _id: \"$workers.order\", \n order_avg: { \n $avg: { \n $subtract: [ \n \"$workers.endAt\", \n \"$workers.startAt\" \n ] \n } \n },\n global_values: { \n $addToSet: {\n some_id: \"$_id\", \n duration: { \n $subtract: [ \n \"$endAt\", \n \"$startAt\" \n ] \n } \n } \n },\n another_field: { \n ...\n ... \n ...\n }\n } \n }", "text": "What formatting rules do you use?I don’t have any written rules. Its is about aesthetics and about code readability. These are subjective. I guess, proper indentation is the simple way of saying it.An aggregation stage is too long (vertically), this makes a stage more than a page length or with all operators and fields put together and too wide (horizontally). Both cases affect code readability. When the readability of code is affected, it in turn affects the maintainability.So, what is the correct indentation? For example, the following are three samples of the same code with different indentation. The first and last are either too wide or too long to read clearly. The second sample, fits within a page and I see it is indented appropriately. A set of related code when it fits within a page and also properly indented is good readability, the eye can glance from top to bottom of the page and get the meaning of the code. If scrolling is involved it becomes awkward and then difficult.Sample 1:Sample 2:Sample 3:", "username": "Prasad_Saya" }, { "code": "match = { \"$match\" : { ... } } ;\nsort = { \"$sort\" : { ... } } ;\nlookup = { \"$lookup\" : { ... } } ;\ngroup = { \"$group\" : { ... 
} } ;\npipeline = [ match , lookup , sort , group ] ;\ndb.collection.aggregate( pipeline ) ;\ngroup = \n{ \n $group:\n {\n _id: \"$workers.order\", \n order_avg:\n { \n $avg: { $subtract: [ \"$workers.endAt\", \"$workers.startAt\" ] } \n },\n global_values:\n { \n $addToSet:\n {\n some_id: \"$_id\", \n duration: { $subtract: [ \"$endAt\", \"$startAt\" ] } \n } \n },\n another_field:\n { \n ... ... ...\n }\n } \n} ;\n", "text": "One thing I do to make it more readable is to assign each stage a variable and have the pipeline be an array of my variables. For example:I find it is easier to modify a stage because it is by itself rather than being embedded in a myriad of braces. I can also easily remove a stage from the pipeline. As for indentation, being as old as I am, I prefer the K&R/Allman braces style. So it would be, taking Prasad_Saya example:And I like to indent with tabs, which makes me not like yaml and python very much. B-)", "username": "steevej" }, { "code": "", "text": "Love that idea! You can submit feature requests at https://feedback.mongodb.com/", "username": "Lauren_Schaefer" }, { "code": "", "text": "Done Bad formatting of queries are a pain. Queries and specially Aggregation Pipelines can get quickly unreadable, unmaintaibable or miss understandable. To avoid this I'd like to ask for a code formater. There will be always a discussion of a good and...", "username": "michael_hoeller" } ]
What are some of the biggest mistakes people make in aggregation pipelines?
2020-11-17T20:48:52.040Z
What are some of the biggest mistakes people make in aggregation pipelines?
6,470
null
[ "upgrading" ]
[ { "code": "", "text": "Hi.\nI’m updating a 3.6.1 mongoDb server and the goal is to arrive to use the latest version (4.4.1).\nI did create a server copy in order to experiment without fear to disrupt any service.\nIn that server I successfully updated from 3.6.1 to 3.6.20 and then from 3.6.20 to 4.0.20.\nDo I have to escalate version by version or I can jump to the 4.4.1 ?\nOn some early tests trying to migrate to a 4.2.10 I did experience the error 14.\nI did find lot of info and suggestions to fix the error 14 but nothing worked for me so far, so I erased the server and restored to 3.6.1 and then re-upgraded back to 4.0.20.\nSo I wonder what would be the best strategy to upgrade it.Thanks\nSTeve", "username": "Stefano_Bodini" }, { "code": "", "text": "Hi @Stefano_BodiniThis is a reoccurring topic/question. Yes the best practice is to complete each incremental major release in full as per the release notes. You can search the forum for those topics.If you have any follow up questions, post them here.", "username": "chris" }, { "code": "", "text": "Hi Chris !\nThank you to confirm that, yes I did read the documentation and I planned the migration gradually but since I ended up with problems I was wondering.\nBasically I have a server with mongodb standalone, CentOS 7. I’m not sure but I think it was created with Mongo 3.4 or even earlier.\nWhen I started the migration the running version was the 3.6.1.\nSo I did update it to 3.6.20 and then to 4.0.20. So my latest stable running server is with the 4.0.20.\nThe goal is to end up with the 4.4.1. So I did update to the 4.2.1 and the server was crashing with Error 14.\nI followed lot of suggestions I found to get rid of the Error 14 but nothing. Permissions correct, config file OK, etc.\nThen I did notice some errors in the log that suggested some errors in the code, so I did update the server (usually is built with Chef without updates).\nApparently now I get rid of the Error 14, but I have an Error 62, data incompatibility.\nSo I’m working now to understand what is required to migrate from 4.0.20 to 4.2.1.\nThe compatibility flag (set up with the 3.6.20 was set to 3.6.Thanks for any suggestion.Stefano", "username": "Stefano_Bodini" }, { "code": "", "text": "Ok … finally !\nThe Error 14 was due because some system libraries of CentOS 7 were not updated.\nAfter an update of the system the Error 14 error disappeared.\nAnd the incompatibility error was because I didin’t realized to have to set the compatibility value on every step.\nI have finally now the mongodb 4.4.1 happily running !Thank you\nStefano", "username": "Stefano_Bodini" }, { "code": "", "text": "Good news @Stefano_Bodini,Out of interest did you use a package manager or a manual installation method to upgrade the mongo binaries?And the incompatibility error was because I didin’t realized to have to set the compatibility value on every step.Yes, that is why I commented with the below point, more text strength required ?complete each incremental major release in full as per the release notes", "username": "chris" }, { "code": "", "text": "Well, the system is supposedly handled by a Chef infrastructure (that I have to update) but in order to see what to do I’m doing manual tests on temporary servers, don’t want to risk to damage production databases and yes, I’m quite new to mongo admin (thus my naive questions).\nWhat is “duh yeah” for you is not for me and honestly reading the documentation (not knowing) for the update was not really clear to me about the requested passages for 
example.\nI though that the compatibility value update was needed only passing from 3.6 to 4.0 and after that, not changing the major version number, was ok.\nMy bad.\nThank you again to help me out.\nStill lot of things to do before to be ready to update the real thing, hopefully I will not disturb anymore.\nThanks\nSteve", "username": "Stefano_Bodini" }, { "code": "", "text": "Well, the system is supposedly handled by a Chef infrastructure (that I have to update) but in order to see what to do I’m doing manual tests on temporary servers, don’t want to risk to damage production databases and yes, I’m quite new to mongo admin (thus my naive questions).Was it/you using yum/dnf , rpm -i or tarball for mongod installation?Thank you again to help me out.\nStill lot of things to do before to be ready to update the real thing, hopefully I will not disturb anymore.You’re welcome. Disturb away, its what all the contributors are here for.", "username": "chris" }, { "code": "", "text": "Hi Chris\nI do use yum for mongoDB installation, mainly because is what is used on the Chef infrastructure.\nI normally set up the repository on the server (lot of security issues) for the specific series (3.6/4/0/4.2/4/4) and then stop, remove the old one, install the new one, start the new one but I didn’t post specifically about that on this group in the past, so I would say “no, it was not me” As usual handling mongoDb admin stuff is juts one of the thing one has to do so there is never time to stop everything and, say, take a full training with time to experiment.\nThanks for the kind wordsRegardsStefano", "username": "Stefano_Bodini" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upgrading advice from 3.6.1 to 4.4.1
2020-11-16T17:28:58.924Z
Upgrading advice from 3.6.1 to 4.4.1
1,873
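Following the upgrade thread above: a minimal sketch of the shell commands involved in the incremental path, assuming a standalone server and that each binary upgrade (3.6 → 4.0 → 4.2 → 4.4) has already been completed and verified before the corresponding step. The point made in the thread is that the feature compatibility version must be raised at every major step, not only once.

```javascript
// Check the current feature compatibility version (FCV)
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// After upgrading the binaries to 4.0.x and confirming the server is healthy:
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })

// After upgrading the binaries to 4.2.x:
db.adminCommand({ setFeatureCompatibilityVersion: "4.2" })

// After upgrading the binaries to 4.4.x:
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })
```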
null
[ "mongodb-live-2020" ]
[ { "code": "", "text": "Hey everyone! We have our Mongo.live pre-game user groups occurring the day before each event throughout the world. We have some amazing presentations from our MongoDB teams, a friendly community drawing challenge, and a chance to network with fellow developers and other MongoDB enthusiasts. Take a look below to see some of the user groups that are happening in the month of November. You won’t want to miss them! Make sure to RSVP.Upcoming MongoDB.live user groups include France and Israel, please keep a lookout for those soon! We will update and provide those links here.", "username": "Celina_Zamora" }, { "code": "", "text": "", "username": "Stennie_X" } ]
.live Pre-Game User Group November events are happening SOON!
2020-11-05T15:38:46.493Z
.live Pre-Game User Group November events are happening SOON!
5,060
null
[ "dot-net" ]
[ { "code": "", "text": "HiI have 2 MongoDB Sync apps. One is Android, the other .Net (WPF) Realm 10.0.0-Beta2\nDevelopment Mode is ON. All classes have Primary Keys ( _id), and _partition etc.\nSchemas are flagged as Syncing OK with no errors on the Dashboard.In the .Net App:\nOn starting, the .Net app connects, creates a Realm, which has all the Data Model fields, but crashes on call to:\nawait Realm.GetInstanceAsync(config);\nError: The program ‘[2612] Bookarama.exe’ has exited with code -1073740791 (0xc0000409).That’s the first issue which I thought might be related to the following:The data models include a class ‘ Zone ’ and a class ‘ Building ’\nThe Zone data model contains an IList[MapTo(“buildings”)]\npublic IList Buildings { get; }When I view the Realm created in RealmStudio, the list of buildings is shown as:buildings\nZone_buildings (Embedded)I would have expected (as Android seems to)buildings\nBuildingStarting the Android App gives errors:Property Zone.buildings has been changed from array<Zone_buildings> to array Building>.I have wiped the Android device and it still throws the error.\nWhat to do?", "username": "Richard_Fairall" }, { "code": "", "text": "@Richard_Fairall Are you able to share the schema on the client and cloud as well as any logs so we can investigate further?", "username": "Ian_Ward" }, { "code": "", "text": "Are you suggesting I send the AppId and Web URL?\nI have emailed them, otherwise I don’t understand your request.", "username": "Richard_Fairall" }, { "code": "", "text": "Hi Ian\nAnother day has passed. Any response would be welcome.", "username": "Richard_Fairall" }, { "code": "", "text": "@Richard_Fairall Did you open a support ticket? Sorry I have a lot things to do so if you need immediate responses please go to support - that is what they are there for.", "username": "Ian_Ward" }, { "code": "", "text": "await Realm.GetInstanceAsync(config);\nError: The program ‘[2612] Bookarama.exe’ has exited with code -1073740791 (0xc0000409)I thought you were support - that explains things.", "username": "Richard_Fairall" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
.Net App crashes on call to: await Realm.GetInstanceAsync(config);
2020-11-17T14:57:09.409Z
.Net App crashes on call to: await Realm.GetInstanceAsync(config);
2,494
null
[ "aggregation" ]
[ { "code": "I am getting only _id in output for following query await User.aggregate([\n {\n $match: {_id: ObjectId(req.user._id)}\n },\n {\n $lookup:{\n \"from\": \"trips\",\n \"let\": { \"trip\": \"$request_rec.trip_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\": { \"$eq\": [\"$$trip\", \"$_id\"] }\n }\n },\n {\n $lookup:{\n \"from\": \"tripjoinrequests\",\n \"let\": { \"trip\": \"$_id\",\"admin\": ObjectId(req.user['_id']) },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\":{\n $and:[\n { \"$eq\": [\"$$trip\", \"$trip\"] },\n { \"$eq\": [\"$$admin\", \"$admin\"]}\n ]}\n }\n },\n { \"$project\": { 'name': 1, 'dob': 1, 'country': 1,'avatar': 1 }}\n ],\n \"as\": \"pending\"\n\n }\n },\n\n { \"$project\": { \"location\": 1, \"date\": 1, \"pending\": 1 }}\n ],\n \"as\": \"request_rec\"\n }\n },\n {\n $lookup:{\n \"from\": \"trips\",\n \"let\": { \"trip\": \"$request_send.trip_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\": { \"$eq\": [\"$$trip\", \"$_id\"] }\n }\n },\n {\n $lookup:{\n \"from\": \"users\",\n \"let\": { \"admin\": \"$admin\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\":\n { \"$eq\": [\"$$admin\", \"$_id\"] },\n }\n },\n { \"$project\": { 'name': 1, 'dob': 1, 'country': 1,'avatar': 1 }}\n ],\n \"as\": \"admin\"\n\n }\n },\n { \"$unwind\": \"$admin\"},\n { \"$project\": { \"location\": 1, \"date\": 1,\"admin\": 1 }}\n ],\n \"as\": \"request_send\"\n }\n },\n {\n $lookup:{\n \"from\": \"trips\",\n \"let\": { \"tripId\": \"$upcomming_trips.trip_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\": { \"$eq\": [\"$$tripId\", \"$_id\"] }\n }\n },\n {\n $lookup:{\n \"from\": \"users\",\n \"let\": { \"adminId\": \"$admin\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\":\n { \"$eq\": [\"$$adminId\", \"$_id\"] },\n }\n },\n { \"$project\": { 'name': 1, 'dob': 1, 'country': 1,'avatar': 1 }}\n ],\n \"as\": \"Admin\"\n\n }\n },\n {$unwind: \"$Admin\"},\n {\n $lookup:{\n \"from\": \"users\",\n \"let\": { \"memberId\": \"$members.user_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\":\n { \"$eq\": [\"$$memberId\", \"$_id\"] },\n }\n },\n { \"$project\": { 'avatar': { \"$slice\": [ \"$avatar\", 3 ]}, \"_id\": -1}}\n ],\n \"as\": \"Members\"\n }\n },\n { \"$project\": { \"location\": 1, \"date\": 1,\"Admin\": 1,\"memberCount\": 1, \"coordinates\": 1,'Members': 1 }}\n ],\n \"as\": \"Upcomming_trips\"\n }\n },\n\n {\n $lookup:{\n \"from\": \"trips\",\n \"let\": { \"trip\": \"$completed_trips.trip_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\": { \"$eq\": [\"$$trip\", \"$_id\"] }\n }\n },\n {\n $lookup:{\n \"from\": \"users\",\n \"let\": { \"member\": \"$members.user_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\":\n { \"$eq\": [\"$$member\", \"$_id\"] },\n }\n },\n { \"$project\": { 'avatar': { \"$slice\": [ \"$avatar\", 3 ]}, \"_id\": -1}}\n ],\n \"as\": \"members\"\n }\n },\n { \"$project\": { \"location\": 1, \"date\": 1, \"memberCount\": 1,'members': 1 }}\n ],\n \"as\": \"completed_trips\"\n }\n },\n\n { $unwind: {path: \"$Upcomming_trips\", \"preserveNullAndEmptyArrays\": true}},\n { $unwind: {path: \"$request_rec\", \"preserveNullAndEmptyArrays\": true}},\n { $unwind: {path: \"$request_send\", \"preserveNullAndEmptyArrays\": true}},\n { $unwind: {path: \"$completed_trips\", \"preserveNullAndEmptyArrays\": true}},\n {\n \"$project\":{\n \"request_rec\": 1,\n \"request_send\": 1,\n \"Upcomming_trips\": 1,\n \"completed_trips\": 1\n }\n }\n]).exec(function(err, mytrips){\n if(err) return res.status(422).send({error: err.message});\n 
console.log(\"mytrips\",JSON.stringify(mytrips,null,4))\n res.status(200).send({\n mytrips: mytrips\n });\n})\n{ \"_id\" : ObjectId(\"5fafe5e06b3f1d6259406697\"), \"show_onboarding\" : false, \"role\" : \"Normal\", \"marketing_emails\" : true, \"account_activated\" : true, \"galleryUrls\" : [ ], \"spokenLanguages\" : [ ], \"countriesVisited\" : [ ], \"tripsCount\" : 0, \"name\" : \"Kj Jose\", \"email\" : \"[email protected]\", \"salt\" : \"982956249717\", \"hashed_password\" : \"7d9f29e08be931bf1c2a4a2224ff25a1f96c6a82\", \"avatar\" : \"https://platform-lookaside.fbsbx.com/platform/profilepic/?asid=2314576115355640&height=50&width=50&ext=1607955339&hash=AeSxNozVpV_6lIf85lc\", \"request_rec\" : [ ], \"request_send\" : [ ], \"upcomming_trips\" : [ { \"_id\" : ObjectId(\"5fafe60c6b3f1d6259406699\"), \"trip_id\" : ObjectId(\"5fafe60c6b3f1d6259406698\"), \"createdAt\" : ISODate(\"2020-11-14T14:13:32.782Z\") }, { \"_id\" : ObjectId(\"5fafe7606b3f1d625940669b\"), \"trip_id\" : ObjectId(\"5fafe7606b3f1d625940669a\"), \"createdAt\" : ISODate(\"2020-11-14T14:19:12.013Z\") }, { \"_id\" : ObjectId(\"5fb0072a2f5fb26e977bb1a1\"), \"trip_id\" : ObjectId(\"5fb0072a2f5fb26e977bb1a0\"), \"createdAt\" : ISODate(\"2020-11-14T16:34:50.728Z\") } ], \"completed_trips\" : [ ], \"createdAt\" : ISODate(\"2020-11-14T14:12:48.200Z\"), \"updatedAt\" : ISODate(\"2020-11-15T21:54:55.723Z\"), \"__v\" : 0, \"fcmToken\" : \"frAT3vq8Ri2k2v7Fms-JHa:APA91bHD775-exOoT3wbJYJdoe3aT53dZ7RFW-q4gUfq0mrIbb84vd5YmcVPHpjGwnoGmKKZz0hyIJnrW3gq7zOFm-w71kjQU5IP1IrVMKepUNqtCtbRmQTqY6hvmoKvn6gXezzWp7Op\" }\n{ \"_id\" : ObjectId(\"5fafe7606b3f1d625940669a\"), \"coordinates\" : [ 101.71649, 3.10718 ], \"open\" : true, \"completed\" : false, \"memberCount\" : 1, \"location\" : \"cheras, kuala lumpur, malaysia\", \"date\" : ISODate(\"2020-11-14T00:00:00Z\"), \"desc\" : \"gtgtgtgtgtg\", \"admin\" : ObjectId(\"5fafe5e06b3f1d6259406697\"), \"members\" : [ ], \"createdAt\" : ISODate(\"2020-11-14T14:19:12.013Z\"), \"updatedAt\" : ISODate(\"2020-11-14T14:19:12.013Z\"), \"__v\" : 0 }\n{ \"_id\" : ObjectId(\"5fb0072a2f5fb26e977bb1a0\"), \"coordinates\" : [ 101.71649, 3.10718 ], \"open\" : true, \"completed\" : false, \"memberCount\" : 1, \"location\" : \"cheras, kuala lumpur, malaysia\", \"date\" : ISODate(\"2020-12-25T00:00:00Z\"), \"desc\" : \"huhhuhuhu\", \"admin\" : ObjectId(\"5fafe5e06b3f1d6259406697\"), \"members\" : [ ], \"createdAt\" : ISODate(\"2020-11-14T16:34:50.728Z\"), \"updatedAt\" : ISODate(\"2020-11-14T16:34:50.728Z\"), \"__v\" : 0 }{ \"_id\" : ObjectId(\"5fb0180ae58c397e9d630c1d\"), \"coordinates\" : [ -0.5074, 51.3902 ], \"open\" : true, \"completed\" : false, \"memberCount\" : 1, \"location\" : \"chertsey, surrey, england, united kingdom\", \"date\" : ISODate(\"2020-12-25T00:00:00Z\"), \"desc\" : \"iyiyiyiyiiy\", \"admin\" : ObjectId(\"5fafe5e06b3f1d6259406697\"), \"members\" : [ ], \"createdAt\" : ISODate(\"2020-11-14T17:46:50.999Z\"), \"updatedAt\" : ISODate(\"2020-11-14T17:46:50.999Z\"), \"__v\" : 0 }{ \"_id\" : ObjectId(\"5fb1a3afa3c3bc24908b42ec\"), \"coordinates\" : [ 77.05972, 10.08917 ], \"dob\" : ISODate(\"2004-11-15T21:52:53.564Z\"), \"open\" : true, \"completed\" : false, \"memberCount\" : 1, \"location\" : \"munnar, kerala, india\", \"date\" : ISODate(\"2020-12-25T00:00:00Z\"), \"desc\" : \"hahahahhah\", \"admin\" : ObjectId(\"5fafe5e06b3f1d6259406697\"), \"members\" : [ ], \"createdAt\" : ISODate(\"2020-11-15T21:54:55.680Z\"), \"updatedAt\" : ISODate(\"2020-11-15T21:54:55.680Z\"), \"__v\" : 0 }{ \"_id\" : 
ObjectId(\"5f91b92bcdec9c3b5146973d\"), \"status\" : \"Accepted\", \"trip_id\" : ObjectId(\"5f8fe98d0259331ab7eae568\"), \"message\" : \"tetetetetet\", \"user\" : ObjectId(\"5f53214e2893f20ae9e6924d\"), \"createdAt\" : ISODate(\"2020-10-22T16:54:03.253Z\"), \"updatedAt\" : ISODate(\"2020-10-26T09:57:37.294Z\"), \"__v\" : 0 }\n{ \"_id\" : ObjectId(\"5f992ada54e28d25e53decd2\"), \"status\" : \"Accepted\", \"message\" : \"tetetettetetetetettettettetetetetetteet\", \"trip_id\" : ObjectId(\"5f99247a54e28d25e53decd0\"), \"user\" : ObjectId(\"5f9927a954e28d25e53decd1\"), \"createdAt\" : ISODate(\"2020-10-28T08:24:58.187Z\"), \"updatedAt\" : ISODate(\"2020-10-31T11:25:51.009Z\"), \"__v\" : 0 }\n{ \"_id\" : ObjectId(\"5f9aa9a30a3b160fe85fa695\"), \"status\" : \"Accepted\", \"message\" : \"I would like to join\", \"trip_id\" : ObjectId(\"5f9aa8af0a3b160fe85fa694\"), \"user\" : ObjectId(\"5f53214e2893f20ae9e6924d\"), \"createdAt\" : ISODate(\"2020-10-29T11:38:11.739Z\"), \"updatedAt\" : ISODate(\"2020-10-29T11:39:37.743Z\"), \"__v\" : 0 }\n{ \"_id\" : ObjectId(\"5f9d4975775eb52d57209e08\"), \"status\" : \"Accepted\", \"message\" : \"I wanna join\", \"trip_id\" : ObjectId(\"5f9d4836775eb52d57209e07\"), \"user\" : ObjectId(\"5f9927a954e28d25e53decd1\"), \"createdAt\" : ISODate(\"2020-10-31T11:24:37.729Z\"), \"updatedAt\" : ISODate(\"2020-10-31T11:25:45.027Z\"), \"__v\" : 0 }\n{ \"_id\" : ObjectId(\"5fb1aaac6e1d2329cefa6bc4\"), \"message\" : \"jhjhjhjhjhj\", \"admin\" : ObjectId(\"5fb034675a4ee09507af0055\"), \"user\" : ObjectId(\"5fafe5e06b3f1d6259406697\"), \"createdAt\" : ISODate(\"2020-11-15T22:24:44.822Z\"), \"updatedAt\" : ISODate(\"2020-11-15T22:24:44.822Z\"), \"__v\" : 0 }\n", "text": "User modeltrips model{ \"_id\" : ObjectId(\"5fb0072a2f5fb26e977bb1a0\"), \"coordinates\" : [ 101.71649, 3.10718 ], \"open\" : true, \"completed\" : false, \"memberCount\" : 1, \"location\" : \"cheras, kuala lumpur, malaysia\", \"date\" : ISODate(\"2020-12-25T00:00:00Z\"), \"desc\" : \"huhhuhuhu\", \"admin\" : ObjectId(\"5fafe5e06b3f1d6259406697\"), \"members\" : [ ], \"createdAt\" : ISODate(\"2020-11-14T16:34:50.728Z\"), \"updatedAt\" : ISODate(\"2020-11-14T16:34:50.728Z\"), \"__v\" : 0 }{ \"_id\" : ObjectId(\"5fb0180ae58c397e9d630c1d\"), \"coordinates\" : [ -0.5074, 51.3902 ], \"open\" : true, \"completed\" : false, \"memberCount\" : 1, \"location\" : \"chertsey, surrey, england, united kingdom\", \"date\" : ISODate(\"2020-12-25T00:00:00Z\"), \"desc\" : \"iyiyiyiyiiy\", \"admin\" : ObjectId(\"5fafe5e06b3f1d6259406697\"), \"members\" : [ ], \"createdAt\" : ISODate(\"2020-11-14T17:46:50.999Z\"), \"updatedAt\" : ISODate(\"2020-11-14T17:46:50.999Z\"), \"__v\" : 0 }\n{ \"_id\" : ObjectId(\"5fb1a3afa3c3bc24908b42ec\"), \"coordinates\" : [ 77.05972, 10.08917 ], \"dob\" : ISODate(\"2004-11-15T21:52:53.564Z\"), \"open\" : true, \"completed\" : false, \"memberCount\" : 1, \"location\" : \"munnar, kerala, india\", \"date\" : ISODate(\"2020-12-25T00:00:00Z\"), \"desc\" : \"hahahahhah\", \"admin\" : ObjectId(\"5fafe5e06b3f1d6259406697\"), \"members\" : [ ], \"createdAt\" : ISODate(\"2020-11-15T21:54:55.680Z\"), \"updatedAt\" : ISODate(\"2020-11-15T21:54:55.680Z\"), \"__v\" : 0 }tripjoinrequest datamongodb version 4.4. please guide me. 
where i might be going wrong i was expecting upcomming_trips details to be there in output.Thanks", "username": "Jose_Kj" }, { "code": "await User.aggregate([\n {\n $match: {_id: ObjectId(req.user._id)}\n },\n {$unwind: {path: \"$upcomming_trips\", preserveNullAndEmptyArrays: true }},\n {\n $lookup:{\n \"from\": \"trips\",\n \"let\": { \"tripId\": \"$upcomming_trips.trip_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\": { \"$eq\": [\"$$tripId\", \"$_id\"] }\n }\n },\n {\n $lookup:{\n \"from\": \"users\",\n \"let\": { \"adminId\": \"$admin\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\":\n { \"$eq\": [\"$$adminId\", \"$_id\"] },\n }\n },\n { \"$project\": { 'name': 1, 'dob': 1, 'country': 1,'avatar': 1 }}\n ],\n \"as\": \"Admin\"\n\n }\n },\n {$unwind: \"$Admin\"},\n {\n $lookup:{\n \"from\": \"users\",\n \"let\": { \"memberId\": \"$members.user_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\":\n { \"$eq\": [\"$$memberId\", \"$_id\"] },\n }\n },\n { \"$project\": { 'avatar': { \"$slice\": [ \"$avatar\", 3 ]}, \"_id\": -1}}\n ],\n \"as\": \"Members\"\n }\n },\n { \"$project\": { \"location\": 1, \"date\": 1,\"Admin\": 1,\"memberCount\": 1, \"coordinates\": 1,'Members': 1 }}\n ],\n \"as\": \"upcomming_trips1\"\n }\n },\n {$unwind: {path: \"$upcomming_trips1\", preserveNullAndEmptyArrays: true }},\n\n {$unwind: {path: \"$request_rec\", preserveNullAndEmptyArrays: true }}, {\n $lookup:{\n \"from\": \"trips\",\n \"let\": { \"trip\": \"$request_rec.trip_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\": { \"$eq\": [\"$$trip\", \"$_id\"] }\n }\n },\n {\n $lookup:{\n \"from\": \"tripjoinrequests\",\n \"let\": { \"trip\": \"$_id\",\"admin\": ObjectId(req.user['_id']) },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\":{\n $and:[\n { \"$eq\": [\"$$trip\", \"$trip\"] },\n { \"$eq\": [\"$$admin\", \"$admin\"]}\n ]}\n }\n },\n { \"$project\": { 'name': 1, 'dob': 1, 'country': 1,'avatar': 1 }}\n ],\n \"as\": \"pending\"\n\n }\n },\n\n { \"$project\": { \"location\": 1, \"date\": 1, \"pending\": 1 }}\n ],\n \"as\": \"request_rec\"\n }\n },\n {$unwind: {path: \"$request_rec\", preserveNullAndEmptyArrays: true }},\n\n {$unwind: {path: \"$request_send\",preserveNullAndEmptyArrays: true }}, {\n $lookup:{\n \"from\": \"trips\",\n \"let\": { \"trip\": \"$request_send.trip_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\": { \"$eq\": [\"$$trip\", \"$_id\"] }\n }\n },\n {\n $lookup:{\n \"from\": \"users\",\n \"let\": { \"admin\": \"$admin\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\":\n { \"$eq\": [\"$$admin\", \"$_id\"] },\n }\n },\n { \"$project\": { 'name': 1, 'dob': 1, 'country': 1,'avatar': 1 }}\n ],\n \"as\": \"admin\"\n\n }\n },\n { \"$unwind\": \"$admin\"},\n { \"$project\": { \"location\": 1, \"date\": 1,\"admin\": 1 }}\n ],\n \"as\": \"request_send\"\n }\n },\n {$unwind: {path: \"$request_send\",preserveNullAndEmptyArrays: true }},\n\n {$unwind: {path:\"$completed_trips\", preserveNullAndEmptyArrays: true }},\n {\n $lookup:{\n \"from\": \"trips\",\n \"let\": { \"trip\": \"$completed_trips.trip_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\": { \"$eq\": [\"$$trip\", \"$_id\"] }\n }\n },\n {\n $lookup:{\n \"from\": \"users\",\n \"let\": { \"member\": \"$members.user_id\" },\n \"pipeline\": [\n { \"$match\": {\n \"$expr\":\n { \"$eq\": [\"$$member\", \"$_id\"] },\n }\n },\n { \"$project\": { 'avatar': { \"$slice\": [ \"$avatar\", 3 ]}, \"_id\": -1}}\n ],\n \"as\": \"members\"\n }\n },\n { \"$project\": { \"location\": 1, \"date\": 1, \"memberCount\": 
1,'members': 1 }}\n ],\n \"as\": \"completed_trips\"\n }\n },\n {$unwind: {path: \"$completed_trips\",preserveNullAndEmptyArrays: true }},\n {\n $group: {\n \"_id\": \"$_id\" ,\n 'upcomming_trips': { $push: \"$upcomming_trips1\"},\n 'request_rec': {$push: \"$request_rec\"},\n 'request_send': {$push: \"$request_send\"},\n 'completed_trips': {$push: \"$completed_trips\"}\n }\n },\n\n\n {\n \"$project\":{\n \"request_rec\": 1,\n \"request_send\": 1,\n \"upcomming_trips\": 1,\n \"completed_trips\": 1\n }\n }\n]).exec(function(err, mytrips){\n if(err) return res.status(422).send({error: err.message});\n console.log(\"mytrips\",JSON.stringify(mytrips,null,4))\n res.status(200).send({\n mytrips: mytrips\n });\n})\n", "text": "I finally got it working with this query ,Thanks\nplease let me know if the query is optimal,", "username": "Jose_Kj" } ]
My aggregate query not working
2020-11-17T22:22:56.204Z
My aggregate query not working
3,779
https://www.mongodb.com/…a_2_1024x673.png
[]
[ { "code": "", "text": "image1584×1042 78.7 KB", "username": "00_00_00" }, { "code": "db.menus.update({menusName : \"333\"},{$pull : { children : { menusName : \"333\" }}})\n", "text": "Hi @00_00_00,The uploaded pic has the marked element as an array object therefore you need to use $pull update to remove it.https://docs.mongodb.com/manual/reference/operator/update/pull/#remove-items-from-an-array-of-documentsBest\nPavel", "username": "Pavel_Duchovny" } ]
How to delete the second level of nested data?
2020-11-17T18:53:00.261Z
How to delete the second level of nested data?
3,787
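As a follow-up to the $pull answer above, a hedged sketch for removing an element one level deeper, i.e. from a children array nested inside another children array. The field names (menusName, children) are only assumed from the screenshot and may not match the real schema; the all-positional operator $[] requires MongoDB 3.6 or later.

```javascript
// Remove the entry named "333" from every second-level children array
db.menus.updateOne(
  { "children.children.menusName": "333" },
  { $pull: { "children.$[].children": { menusName: "333" } } }
)
```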
null
[ "aggregation", "indexes" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"5eb122f714d0510011e3a184\"\n },\n \"from\": \"Star_friends\",\n \"to\": \"94713414047\",\n \"accountName\": \"ZM\",\n \"accountId\": \"ZM\",\n \"campaignName\": \"test 1\",\n \"campaignId\": \"5eb122f1e921c3001922f73c\",\n \"campaignType\": \"BULK\",\n \"status\": {\n \"$numberInt\": \"3\"\n },\n \"reason\": \"No Routing\",\n \"channel\": \"sms\",\n \"messageType\": {\n \"$numberInt\": \"1\"\n },\n \"event\": \"MT\",\n \"content\": \"test 132\",\n \"credit\": {\n \"$numberInt\": \"1\"\n },\n \"msgId\": \"\",\n \"createdDateTime\": \"2020-05-05T13:55:27.743Z\",\n \"updatedTime\": \"2020-05-05T13:55:27.745Z\",\n \"uDate\": \"2020-05-05\",\n \"operator\": \"mobitel\"\n}\ndb.getCollection('report').aggregate([{\n \"$match\": {\n \"createdDateTime\": {\n \"$gt\": \"2020-09-14T00:00:01.000Z\",\n \"$lt\": \"2020-09-15T23:59:99.999Z\"\n },\n \"messageType\": {\n \"$in\": [1, 2]\n },\n \"channel\": {\n \"$in\": [\"sms\", \"viber\", \"whatsapp\"]\n },\n \"accountId\": {\n \"$in\": [\"ZM\", \"ABC\"]\n }\n }\n}, {\n \"$project\": {\n \"_id\": 0,\n \"channel\": 1,\n \"messageType\": 1,\n \"accountName\": 1,\n \"accountId\": 1,\n \"createdDateTime\": 1,\n \"uDate\": 1,\n \"credit\": 1,\n \"status\": 1\n }\n}, {\n \"$group\": {\n \"_id\": {\n \"channel\": \"$channel\",\n \"messageType\": \"$messageType\",\n \"accountName\": \"$accountName\",\n \"accountId\": \"$accountId\",\n \"filteredDate\": {\n \"$substr\": [\"$createdDateTime\", 0, 7]\n },\n \"sortDate\": \"$uDate\"\n },\n \"total\": {\n \"$sum\": \"$credit\"\n },\n \"send\": {\n \"$sum\": {\n \"$cond\": [{\n \"$in\": [\"$status\", [2, 15, 1, 14, 6, 17, 4, 5]]\n }, \"$credit\", 0]\n }\n },\n \"delivered\": {\n \"$sum\": {\n \"$cond\": [{\n \"$in\": [\"$status\", [6, 17, 4]]\n },\n \"$credit\",\n 0\n ]\n }\n },\n \"deliveryFailed\": {\n \"$sum\": {\n \"$cond\": [{\n \"$in\": [\"$status\", [12, 5]]\n }, \"$credit\", 0]\n }\n },\n \"failed\": {\n \"$sum\": {\n \"$cond\": [{\n \"$in\": [\"$status\", [3]]\n }, \"$credit\", 0]\n }\n },\n \"datass\": {\n \"$addToSet\": {\n \"channel\": \"$channel\",\n \"messageType\": \"$messageType\",\n \"accountName\": \"$accountName\",\n \"accountId\": \"$accountId\",\n \"filteredDate\": {\n \"$substr\": [\"$createdDateTime\", 0, 7]\n },\n \"sortDate\": \"$uDate\"\n }\n }\n }\n}, {\n \"$unwind\": \"$datass\"\n}, {\n \"$project\": {\n \"_id\": 0\n }\n}, {\n \"$sort\": {\n \"datass.sortDate\": -1\n }\n}])\n", "text": "I have stuck somewhere in MongoDB aggregate query. I tried to generate a summary report from the database which contains 110M records. during the report generation, I faced the following issues 1).Even though the collection is indexed they are not utilized for the search. 2).Once query execution finished memory of DB server not decreased. 3)query take considerable time to return the result.im useing mongodb Atlas v4.2.8\nsample documentmy query as followsindexes as followsaccountId_1 / accountId_1_createdDateTime_-1 / campaignId_-1 / channel_1 / createdDateTime_-1 / messageType_1 / msgId_-1 / msgId_-1_status_1I would be appreciated if someone can help me with thisThanks", "username": "Praveena_Buddhika" }, { "code": "accountId : 1, messageType : 1, channel : 1, createdDateTime : 1\n", "text": "Hi @Praveena_Buddhika,Welcome to MongoDB community!What makes you believe that no index is used? 
Have you located the log entry for this particular aggregation? The aggregation is pretty complex, involving many aggregate stages where the index is not playing a part and those stages run in memory. Having said that, I believe indexing all of the match-stage predicates as one compound index should allow better performance: Additionally, you might consider using the new $merge stage periodically and maintaining this report query as a constantly updated materialized view, or use $out to create monthly report collections.Best\nPavel", "username": "Pavel_Duchovny" } ]
Indexing not utilized during the MongoDB aggregation query
2020-11-17T13:11:43.789Z
Indexing not utilized during the MongoDB aggregation query
3,525
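To illustrate the compound-index suggestion above, a sketch of how it could be created and checked against the $match stage of the report query; the field names and values are taken from the thread, the key order is only a starting point and may need tuning for the actual data distribution, and the explain form assumes MongoDB 4.2+ as used in the thread.

```javascript
// One compound index covering all predicates of the $match stage
db.report.createIndex({ accountId: 1, messageType: 1, channel: 1, createdDateTime: 1 })

// Confirm the index is picked up by inspecting the plan for the $match stage
db.report.explain("executionStats").aggregate([
  { $match: {
      accountId: { $in: ["ZM", "ABC"] },
      messageType: { $in: [1, 2] },
      channel: { $in: ["sms", "viber", "whatsapp"] },
      createdDateTime: { $gt: "2020-09-14T00:00:01.000Z", $lt: "2020-09-15T23:59:59.999Z" }
  } }
])
```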
null
[ "atlas-device-sync" ]
[ { "code": "/Users/realm/workspace/realm_realm-core_release_10.1.1/src/realm/obj.cpp:1462: [realm-core-10.1.1] Assertion failed: n != realm::npos\n0 Realm 0x0000000105dc0adc _ZN5realm4utilL18terminate_internalERNSt3__118basic_stringstreamIcNS1_11char_traitsIcEENS1_9allocatorIcEEEE + 28\n1 Realm 0x0000000105dc0d80 _ZN5realm4util9terminateEPKcS2_lOSt16initializer_listINS0_9PrintableEE + 328\n2 Realm 0x0000000105d41514 _ZN5realm3Obj23assign_pk_and_backlinksERKNS_8ConstObjE + 828\n3 Realm 0x0000000105d87c84 _ZN5realm5Table30create_object_with_primary_keyERKNS_5MixedEONSt3__16vectorINS_10FieldValueENS4_9allocatorIS6_EEEEPb + 680\n4 Realm 0x0000000105af8060 _ZN5realm4sync18InstructionApplierclERKNS0_5instr12CreateObjectE + 416\n5 Realm 0x0000000105aab7d8 _ZN5realm4sync18InstructionApplier5applyIS1_EEvRT_RKNS0_9ChangesetEPNS_4util6LoggerE + 136\n6 Realm 0x0000000105aa7e28 _ZN5realm5_impl17ClientHistoryImpl27integrate_server_changesetsERKNS_4sync12SyncProgressEPKyPKNS2_11Transformer15RemoteChangesetEmRNS2_11VersionInfoERNS2_21ClientReplicationBase16IntegrationErrorERNS_4util6LoggerEPNSE_20SyncTransactReporterE + 820\n7 Realm 0x0000000105aba298 _ZN5realm5_impl14ClientImplBase7Session29initiate_integrate_changesetsEyRKNSt3__16vectorINS_4sync11Transformer15RemoteChangesetENS3_9allocatorIS7_EEEE + 180\n8 Realm 0x0000000105af22e4 _ZN12_GLOBAL__N_111SessionImpl29initiate_integrate_changesetsEyRKNSt3__16vectorIN5realm4sync11Transformer15RemoteChangesetENS1_9allocatorIS6_EEEE + 48\n9 Realm 0x0000000105ab8cbc _ZN5realm5_impl14ClientImplBase7Session24receive_download_messageERKNS_4sync12SyncProgressEyRKNSt3__16vectorINS3_11Transformer15RemoteChangesetENS7_9allocatorISA_EEEE + 692\n10 Realm 0x0000000105ab5ae4 _ZN5realm5_impl14ClientProtocol22parse_message_receivedINS0_14ClientImplBase10ConnectionEEEvRT_PKcm + 4644\n11 Realm 0x0000000105ab0b2c _ZN5realm5_impl14ClientImplBase10Connection33websocket_binary_message_receivedEPKcm + 60\n12 Realm 0x0000000105b9df24 _ZN12_GLOBAL__N_19WebSocket17frame_reader_loopEv + 1532\n13 Realm 0x0000000105abe348 _ZN5realm4util7network7Service9AsyncOper22do_recycle_and_executeINSt3__18functionIFvNS5_10error_codeEmEEEJRS7_RmEEEvbRT_DpOT0_ + 260\n14 Realm 0x0000000105abde30 _ZN5realm4util7network7Service14BasicStreamOpsINS1_3ssl6StreamEE16BufferedReadOperINSt3__18functionIFvNS8_10error_codeEmEEEE19recycle_and_executeEv + 240\n15 Realm 0x0000000105b901d0 _ZN5realm4util7network7Service4Impl3runEv + 404\n16 Realm 0x0000000105ae7a04 _ZN5realm4sync6Client3runEv + 36\n17 Realm 0x0000000105a8000c _ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5realm5_impl10SyncClientC1ENS2_INS7_4util6LoggerENS4_ISB_EEEERKNS7_16SyncClientConfigENS_10shared_ptrIKNS7_11SyncManagerEEEEUlvE0_EEEEEPvSN_ + 44\n18 libsystem_pthread.dylib 0x00000001d8072b40 _pthread_start + 320\n19 libsystem_pthread.dylib 0x00000001d807b768 thread_start + 8!!! 
IMPORTANT: Please send this log and info about Realm SDK version and other relevant reproduction info to [email protected] 17:37:31.607681+0100 Time[10024:2960857] /Users/realm/workspace/realm_realm-core_release_10.1.1/src/realm/obj.cpp:1462: [realm-core-10.1.1] Assertion failed: n != realm::npos\n0 Realm 0x0000000105dc0adc _ZN5realm4utilL18terminate_internalERNSt3__118basic_stringstreamIcNS1_11char_traitsIcEENS1_9allocatorIcEEEE + 28\n1 Realm 0x0000000105dc0d80 _ZN5realm4util9terminateEPKcS2_lOSt16initializer_listINS0_9PrintableEE + 328\n2 Realm 0x0000000105d41514 _ZN5realm3Obj23assign_pk_and_backlinksERKNS_8ConstObjE + 828\n3 Realm 0x0000000105d87c84 _ZN5realm5Table30create_object_with_primary_keyERKNS_5MixedEONSt3__16vectorINS_10FieldValueENS4_9allocatorIS6_EEEEPb + 680\n4 Realm 0x0000000105af8060 _ZN5realm4sync18InstructionApplierclERKNS0_5instr12CreateObjectE + 416\n5 Realm 0x0000000105aab7d8 _ZN5realm4sync18InstructionApplier5applyIS1_EEvRT_RKNS0_9ChangesetEPNS_4util6LoggerE + 136\n6 Realm 0x0000000105aa7e28 _ZN5realm5_impl17ClientHistoryImpl27integrate_server_changesetsERKNS_4sync12SyncProgressEPKyPKNS2_11Transformer15RemoteChangesetEmRNS2_11VersionInfoERNS2_21ClientReplicationBase16IntegrationErrorERNS_4util6LoggerEPNSE_20SyncTransactReporterE + 820\n7 Realm 0x0000000105aba298 _ZN5realm5_impl14ClientImplBase7Session29initiate_integrate_changesetsEyRKNSt3__16vectorINS_4sync11Transformer15RemoteChangesetENS3_9allocatorIS7_EEEE + 180\n8 Realm 0x0000000105af22e4 _ZN12_GLOBAL__N_111SessionImpl29initiate_integrate_changesetsEyRKNSt3__16vectorIN5realm4sync11Transformer15RemoteChangesetENS1_9allocatorIS6_EEEE + 48\n9 Realm 0x0000000105ab8cbc _ZN5realm5_impl14ClientImplBase7Session24receive_download_messageERKNS_4sync12SyncProgressEyRKNSt3__16vectorINS3_11Transformer15RemoteChangesetENS7_9allocatorISA_EEEE + 692\n10 Realm 0x0000000105ab5ae4 _ZN5realm5_impl14ClientProtocol22parse_message_receivedINS0_14ClientImplBase10ConnectionEEEvRT_PKcm + 4644\n11 Realm 0x0000000105ab0b2c _ZN5realm5_impl14ClientImplBase10Connection33websocket_binary_message_receivedEPKcm + 60\n12 Realm 0x0000000105b9df24 _ZN12_GLOBAL__N_19WebSocket17frame_reader_loopEv + 1532\n13 Realm 0x0000000105abe348 _ZN5realm4util7network7Service9AsyncOper22do_recycle_and_executeINSt3__18functionIFvNS5_10error_codeEmEEEJRS7_RmEEEvbRT_DpOT0_ + 260\n14 Realm 0x0000000105abde30 _ZN5realm4util7network7Service14BasicStreamOpsINS1_3ssl6StreamEE16BufferedReadOperINSt3__18functionIFvNS8_10error_codeEmEEEE19recycle_and_executeEv + 240\n15 Realm 0x0000000105b901d0 _ZN5realm4util7network7Service4Impl3runEv + 404\n16 Realm 0x0000000105ae7a04 _ZN5realm4sync6Client3runEv + 36\n17 Realm 0x0000000105a8000c _ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5realm5_impl10SyncClientC1ENS2_INS7_4util6LoggerENS4_ISB_EEEERKNS7_16SyncClientConfigENS_10shared_ptrIKNS7_11SyncManagerEEEEUlvE0_EEEEEPvSN_ + 44\n18 libsystem_pthread.dylib 0x00000001d8072b40 _pthread_start + 320\n19 libsystem_pthread.dylib 0x00000001d807b768 thread_start + 8!!! IMPORTANT: Please send this log and info about Realm SDK version and other relevant reproduction info to [email protected].\n", "text": "Hi there,First of all, I am not sure if this is the correct topic to post this under, but here goes.We’re working with realm and mongo sync. At some point I started getting this error message from realm every time I would launch my iOS app in Xcode.We figured out that it had to do with the partition that was used. 
Maybe the partition is somehow corrupted, but we can’t really tell from the logs. There is nothing in the logs on https://realm.mongodb.com/ … /logs; the only error we get is from the iOS app. Creating a new partition and putting in the same content results in the same situation. I am using\nXcode 12.2 (12B45b)\nRealm v10.1.3/Thomas", "username": "Thomas_Juel_Andersen" }, { "code": "", "text": "@Thomas_Juel_Andersen Are you able to open a support ticket for this? If not, please do and we can get the operations team to sort this for you.", "username": "Ian_Ward" }, { "code": "", "text": "I don’t have access atm, but I’ll sort it out and create a support ticket.", "username": "Thomas_Juel_Andersen" } ]
Corrupted partition
2020-11-17T18:50:58.404Z
Corrupted partition
1,611
null
[ "aggregation" ]
[ { "code": "", "text": "how can i use $sort and $cond thogether in aggregation.\nOr is there any way to use conditional based sorting.", "username": "Abhishek_Dhadwal" }, { "code": "$sort", "text": "From the docs, $sort does not accept an expression, so whatever you’re thinking is probably not possible.That being said, a more concrete example may help to find an alternative.", "username": "santimir" }, { "code": "", "text": "A simple solution would be to add a field first($addFields),using the condition,\nand sort after on that field.", "username": "Takis" }, { "code": "", "text": "i already tried it with $set but not working fine", "username": "Abhishek_Dhadwal" }, { "code": "", "text": "Ya i am trying to find an alternative", "username": "Abhishek_Dhadwal" }, { "code": "", "text": "add more details to the post", "username": "santimir" } ]
Help with conditional sorting in aggregation
2020-11-13T18:36:02.052Z
Help with conditional sorting in aggregation
3,403
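Expanding on the $addFields suggestion in the thread above, a minimal hypothetical pipeline: the collection and field names (status, priority) are invented for illustration. The idea is simply to materialise the $cond result into a temporary field and sort on that field, since $sort itself takes no expressions.

```javascript
db.collection.aggregate([
  // Compute the conditional sort key first
  { $addFields: {
      sortKey: { $cond: [ { $eq: ["$status", "urgent"] }, 0, "$priority" ] }
  } },
  // Then sort on the computed field
  { $sort: { sortKey: 1 } },
  // Optionally drop the helper field from the output
  { $project: { sortKey: 0 } }
])
```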
null
[ "python", "production" ]
[ { "code": "", "text": "We are pleased to announce the 3.11.1 release of PyMongo - MongoDB’s Python Driver. This release adds support for Python 3.9 and fixes a number of bugs.See the changelog for a high-level summary of what’s new and improved or see the PyMongo 3.11.1 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!", "username": "Prashant_Mital" }, { "code": "", "text": "", "username": "system" } ]
PyMongo 3.11.1 Released
2020-11-17T18:13:50.079Z
PyMongo 3.11.1 Released
2,110
null
[ "golang" ]
[ { "code": "// List executes a listIndexes command and returns a cursor over the indexes in the collection.\n//\n// The opts parameter can be used to specify options for this operation (see the options.ListIndexesOptions\n// documentation).\n//\n// For more information about the command, see https://docs.mongodb.com/manual/reference/command/listIndexes/.\ncursor, err := coll.Indexes().List(context.Background()).All()[]mongo.IndexModel{}\tcursor, err := coll.Indexes().List(nil)\n\tis.NoError(err)\n\tindices := []mongo.IndexModel{}\n\tis.NoError(cursor.All(nil, &indices))\n\tis.GreaterOrEqual(len(indices), 9)\n", "text": "I can find some examples for creating indices:But listing them is where things are falling down for me. I can’t seem to find an example of List being used online.The mongo-go-driver docs just reference the underlying command being used:cursor, err := coll.Indexes().List(context.Background()) returns a cursor that I can call .All() on and provide it an interface.Unfortunately, unpacking into []mongo.IndexModel{} doesn’t seem to work. @Divjot_Arora - would you happen to have a code snippet for listing indices?Current code:", "username": "TopherGopher" }, { "code": "mongo.IndexModelIndexView.ListSpecificationslistIndexesCursortype IndexSpecification struct {\n// copied from Go Driver\n}\n\ncursor, err := coll.Indexes().List(nil)\nis.NoError(err)\nvar indices []IndexSpecification\nis.NoError(cursor.All(nil, &indices))\nis.GreaterOrEqual(len(indicies), 9)\n", "text": "Hi @TopherGopher,Sorry, this got lost in my inbox. mongo.IndexModel is a helper type used to store information when creating indices, not a type to unpack index specifications into. Driver version 1.5.0 will include an IndexView.ListSpecifications function that calls listIndexes and unpacks all results into a helper type rather than simply returning a Cursor object. In the meantime, you can see the struct definition for this type and copy it into your code:", "username": "Divjot_Arora" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How should one work with indices?
2020-10-16T20:32:25.521Z
How should one work with indices?
2,050
null
[ "queries" ]
[ { "code": "array1 = [ 'one', 'two', 'three', 'four', 'five', 'six', 'seven' ]\narray2 = [ '1', '2', '3', '4', '5', '6', '7' ]\n $match : { \n $or: [ { array1 : { $elemMatch: { $eq: 'two' } } }, { array2 : { $elemMatch: { $eq: '4' } } } ]\n }\n", "text": "Hello, i have 2 arrays - example:i need load not all array, only that i need - Example only - two or 4TryHow i can do it?", "username": "gotostereo_N_A" }, { "code": "", "text": "you can try $filter i think it would solve your problem.", "username": "Abhishek_Dhadwal" }, { "code": "", "text": "if the array has a million records, I want to reduce the load", "username": "gotostereo_N_A" }, { "code": "", "text": "Havingthe array has a million recordsis a bad design decision. Please see https://www.mongodb.com/article/schema-design-anti-pattern-massive-arrays.But, indeed as mentioned by @Abhishek_Dhadwal, you should also look at https://docs.mongodb.com/manual/reference/operator/aggregation/filter/ if reworking your schema is out of the question.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Find and load from array only specific data
2020-11-17T11:23:05.609Z
Find and load from array only specific data
1,298
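To make the $filter suggestion above concrete for the arrays in the question, a sketch that returns only the matching elements instead of the full arrays; the collection name is illustrative, and the matched values ('two', '4') are taken from the example in the thread.

```javascript
db.collection.aggregate([
  // Keep only documents where either array contains a wanted value
  { $match: { $or: [ { array1: "two" }, { array2: "4" } ] } },
  // Project just the matching elements, not the whole (potentially huge) arrays
  { $project: {
      array1: { $filter: { input: "$array1", as: "a", cond: { $eq: ["$$a", "two"] } } },
      array2: { $filter: { input: "$array2", as: "b", cond: { $eq: ["$$b", "4"] } } }
  } }
])
```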
null
[]
[ { "code": "", "text": "Hello,Yesterday, we had a crash on our Google cloud VM hosting our mongo db server. We restored it by creating a new VM and importing the data disk including all the WT files. We lauched our mongo instance and performed an mongo --repair command.Now collections and objects are loading / recovering. When we perform a db.stats() object count is growing but it’s growing VERY slowly (around 1 object per minute).", "username": "Florian_Duport" }, { "code": "", "text": "Hi, did you check for any IO related issue? vmstat should be able to help you on this.", "username": "jff_frm" } ]
MongoDB objects loading after repair ?
2020-11-07T20:36:02.557Z
MongoDB objects loading after repair ?
1,391