Columns: image_url (string), tags (sequence), discussion (list), title (string), created_at (string), fancy_title (string), views (int64)
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Hello everyone,Since I’m pretty new to the whole Realm concept it was just yesterday I got my notifications working. I’m working with Node.jsI was wondering if there is way to get more specific information about what was changed.For example:\nI have an array of objects. Each object has two key/value pairs. Now, when I change one of these values, is there a way to for example find out which one and on which index in the array?I couldn’t find anything about that in the documentation, hence I’m hoping you might help out here. Even if it is not possible, then I know I can stop searching! Thanks and looking forward to any feedback!", "username": "Rene_Seib" }, { "code": "", "text": "The Realm Getting Started guide has a lot of examples and information about this topic. Specifically you should take a look at Notifications to be notified of object changes.Are you storing your Realm objects in an Array? If so you may want to consider using a Realm Collection as that will work with Realm Notifications and will provide fine grained notifications about collection (List & Results) object changes.", "username": "Jay" } ]
How to know what has changed
2020-10-26T19:08:25.148Z
How to know what has changed
1,866
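A minimal sketch of the fine-grained notification approach suggested above, in Realm JS (the asker's environment). The "Task" class name and an already-open `realm` are assumptions for illustration:

```js
// Results collections (and Lists) deliver index-level change sets
const tasks = realm.objects("Task");
tasks.addListener((collection, changes) => {
  changes.insertions.forEach((index) => {
    console.log(`inserted at index ${index}:`, collection[index]);
  });
  // Depending on the SDK version, modified indices are exposed as
  // `modifications` or as `newModifications`/`oldModifications`
  (changes.newModifications || changes.modifications).forEach((index) => {
    console.log(`modified at index ${index}`);
  });
  changes.deletions.forEach((index) => {
    console.log(`object deleted from index ${index}`);
  });
});
```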
null
[ "queries", "performance" ]
[ { "code": "", "text": "I transferred current/old running DB into a new standalone server for MongoDB. To do this, I performed the following:Issue:\nI noticed that after performing the above, few queries on the NEW server were running slow almost twice the time compared to their performance on the OLD server.Configurations:\nThe configurations of both the servers are same however the NEW server has 32 GB RAM while the OLD server had 28GB RAM. OLD server had other applications and servers running as well. While the NEW server is a dedicated server only for this DB.CPU consumption is similar however RAM is heavily occupied in the OLD server while it is comparatively less occupied on NEW server.Therefore, NEW server is better equipped in hardware and RAM consumption. Also NEW server is standalone dedicated to only this DB.Question:\nWhy could my NEW server even though it is standalone be slow compared to OLD one? How can I correct this?", "username": "Temp_ORary" }, { "code": "", "text": "The first thing is to check the explain from the queries to see if indexes are use correctly.", "username": "steevej" }, { "code": "", "text": "Hi @Temp_ORary and welcome in the MongoDB Community !If the data and indexes are absolutely identical, then the explain plans should be exactly identical on the old & new server. One difference though is that the old server is running in prod while the new one isn’t - based on what I understood here.\nSo the old cluster is hot while the new is cold. Meaning that the working set isn’t already loaded in RAM in the new server while it is in the old server as it’s constantly freeing some RAM and pulling more fresh documents from disk into the RAM.\nIf this is not happening as well on the new server, then maybe it’s slow just because it’s cold and it needs to load more documents in RAM.\nOnce it’s done, I would expect the new node to be a tiny bit faster than the old one - given than the hard drives are at least identical or better.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
MongoDB: Queries running twice slow on NEW server compared to OLD server
2020-10-22T12:53:21.381Z
MongoDB: Queries running twice slow on NEW server compared to OLD server
2,373
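To act on steevej's suggestion, a typical comparison run on both servers looks like this (collection and filter are placeholders); identical plans with very different timings point at a cold cache rather than missing indexes:

```js
db.mycollection.find({ status: "A" }).explain("executionStats")
// Compare across servers:
//   queryPlanner.winningPlan                 -> same index chosen?
//   executionStats.totalKeysExamined / totalDocsExamined
//   executionStats.executionTimeMillis
```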
null
[ "atlas-device-sync" ]
[ { "code": "const config = {\n sync: {\n user: user,\n partitionValue: `Res=${resId}`,\n }\n Realm.open(config).then((UserRealm) => { \n UserRealm.write(() => {\n UserRealm.create(\n \"User\",{\n _id: new ObjectId(),\n _partition: `Hospital=${Id}`,\n patientName: patient,\n RestaurantId: {\n _id:'1', //Just example\n _partition: 'PUBLIC',\n Name:'ABC'\n }\n }\n );\n CaseRealm.syncSession.uploadAllLocalChanges().then(() => {\n CaseRealm.close();\n });\n }); \n})\n", "text": "I have two Tables:Ex. 1, ‘PUBLIC’ , ‘ABC’\n2, ‘PUBLIC’, ‘PQR’I want to store the user data as below:Ex: 1, ‘Res=1’, ‘John’, 1(Restaurant obj)\n2, '‘Res=2’, ‘Mike’, 2So, I Open realm with below config.It adds the data into local realm but then it is throwing an error like:Synchronisation fail: error code: 212Can’t we write the data like this. Can some one please help what is wrong!!", "username": "2018_12049" }, { "code": "partitionValue: \"Res=some-id\"User_partition: \"Hospital=some-id\"Restaurant_partition: \"Public\"", "text": "The server logs should give you more information about why this error occurs, but looking at your code, you’re opening a Realm with partitionValue: \"Res=some-id\", but then you’re adding objects with different partitions to it - User with _partition: \"Hospital=some-id\" and Restaurant with _partition: \"Public\". All objects added to the Realm instance must be in the same partition.", "username": "nirinchev" } ]
Relation Between Two different realms
2020-10-26T10:09:18.398Z
Relation Between Two different realms
1,351
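A sketch of the fix nirinchev describes: every object written to a synced Realm must carry the partition value the Realm was opened with. Names are borrowed from the thread; this is illustrative, not a complete app:

```js
const config = {
  sync: {
    user: user,
    partitionValue: `Res=${resId}`,
  },
};

Realm.open(config).then((userRealm) => {
  userRealm.write(() => {
    userRealm.create("User", {
      _id: new ObjectId(),
      // must equal the Realm's partitionValue, not `Hospital=...` or "PUBLIC"
      _partition: `Res=${resId}`,
      patientName: patient,
    });
  });
});
```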
https://www.mongodb.com/…5638870e97be.png
[ "atlas-device-sync" ]
[ { "code": "", "text": "I have two table User and User role. where I want to add relation between these two table. I want to add Userrole into user table. here is my schema in mongo realm schema.\n\nUser983×611 49.1 KB\n \nUserrole978×608 44.6 KB\nAfter doing this…when I query the data it gives me empty array of user. it is confusing how the schema will look like if i want to apply realm sync and how data population will done.\nyour help will be appreciated.!! Thank You in advance.", "username": "2018_12049" }, { "code": "bsonTypeUser.UserRoleobjectId_idRelationshipsUser.UserRoleUserRole._id", "text": "The bsonType of User.UserRole should be objectId - it should match the _id type of the object you’re trying to create a relationship to.Then, you should go to the Relationships tab (right next to Schema) and create a new relationship from User.UserRole to UserRole._id.", "username": "nirinchev" }, { "code": "", "text": "I did as u have explain. It’s working!!.. but here i have one concern… like when I add records into userTB at that time I need to pass whole object of userRole…just UserRoleID is not sufficient as in normal MongoDB insertion.Is there any alternative…or this is how mongo realm works???\nWhen I just Add UserRoleID instead of whole obj…it is throwing an error for missing value", "username": "2018_12049" }, { "code": "UserRoleIDUserRolevar myUser = new User();\nvar role = realm.Find<UserRole>(userRoleId);\nmyUser.UserRole = role;\n", "text": "Realm has a different data model than MongoDB - it offers relationships as first class concept and is generally geared toward mobile development where ease of use is more important than scaling to support terabytes of data.So yes, you need to assign the entire object - if you know the UserRoleID, you can look up the UserRole first and then assign it (will need to adapt that to your language of choice, but should look like):", "username": "nirinchev" }, { "code": "myUser.UserRole = role", "text": "N what if I’m inserting new record in ‘User=Id’ partition and the role I want to assign is in the ‘Public’ partition.\nI’m opening public partition to get particular role.\nthen I’m inserting new user and assigning myUser.UserRole = roleit is throwing an error… like the role that I’m assigning from public partition is from other realmHow to active this scenario…where i need some depended object from other partition.", "username": "2018_12049" }, { "code": "", "text": "As per the documentation, this is not possible. You’ll need to either duplicate the user roles in all partitions, or find a different partitioning strategy that allows you to do what you want.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Define schema in mongo realm
2020-10-22T05:55:53.171Z
Define schema in mongo realm
1,887
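A rough Realm JS translation of the lookup-then-assign pattern above (the `userRoleId` variable is assumed to hold the known primary key):

```js
realm.write(() => {
  // Fetch the existing UserRole by primary key, then assign the whole object
  const role = realm.objectForPrimaryKey("UserRole", userRoleId);
  user.UserRole = role;
});
```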
null
[ "aggregation" ]
[ { "code": "", "text": "Hi all,\nI just want to know that can we use multiple columns join in $lookup as we do in sql like thisleft join ABC abc\non\nabc.X = def.T\nand\nabc.Y = def.U\nand\nabc.Z = def.V", "username": "Nabeel_Raza" }, { "code": "", "text": "Yes, here is example in the Docs", "username": "Katya" }, { "code": "db.inventory.insert([\n { \"_id\" : 1, \"sku\" : \"almonds\", \"description\": \"product 1\", \"instock\" : 120 },\n { \"_id\" : 2, \"sku\" : \"bread\", \"description\": \"product 2\", \"instock\" : 80 },\n { \"_id\" : 3, \"sku\" : \"cashews\", \"description\": \"product 3\", \"instock\" : 60 },\n { \"_id\" : 4, \"sku\" : \"pecans\", \"description\": \"product 4\", \"instock\" : 70 },\n { \"_id\" : 5, \"sku\": null, \"description\": \"Incomplete\" },\n { \"_id\" : 6 }\n])\ndb.orders.insert([\n { \"_id\" : 1, \"item\" : \"almonds\", \"price\" : 12, \"quantity\" : 2 },\n { \"_id\" : 2, \"item\" : \"pecans\", \"price\" : 20, \"quantity\" : 1 },\n { \"_id\" : 3 }\n])\nselect b.item, a.instock from inventory a\ninner join orders b on\na.sku = b.item and\na._id = b._id;", "text": "Can you do it for me, as here is the first collection:And here is the second collection:And i want the parallel to this:", "username": "Nabeel_Raza" }, { "code": "db.inventory.aggregate([\n {\n $lookup:\n {\n from: \"orders\",\n let: { inventory_id: \"$_id\", inventory_sku: \"$sku\" },\n pipeline: [\n { $match:\n { $expr:\n { $and:\n [\n { $eq: [ \"$_id\", \"$$inventory_id\" ] },\n { $eq: [ \"$item\", \"$$inventory_sku\" ] }\n ]\n }\n }\n }\n ],\n as: \"lookup\"\n }\n }\n])\n", "text": "A simple one-to-one mapping of the example from the link provided by Katya to your sample documents will give the following.(I left the $project stages out)I left the $project stages out as the important things are the let : { … } and the multiple $eq with the $and : [ … ].However, I have some issues with this data model which looked already like an adaptation of the example provided from the link. (The choice of the item names almonds, pekans, … are a give away).Issue 1: Clearly document orders with _id:2 is related to inventory document _id:4. Your query a._id = b._id will never pick it up.Issue 2: In my book orders can have more that one item (sku). I would imagine to have an array items with an order.", "username": "steevej" }, { "code": "", "text": "Thanks a lot @steevej", "username": "Nabeel_Raza" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Join in MongoDB on different conditions
2020-10-26T16:59:38.326Z
Join in MongoDB on different conditions
11,538
null
[ "mongodb-shell" ]
[ { "code": "", "text": "I am using mongo.exe to get diagnostics of the set status. Every few seconds I receive and parse db.Status.\nconnection string:", "username": "Sergey_Rugalev" }, { "code": "", "text": "Screenshot_91040×24 1.97 KB\nIt works well.", "username": "Sergey_Rugalev" }, { "code": "", "text": "But if all copies are lost, I watch it endlessly\nScreenshot_101685×177 11.7 KB\nThis can take tens of minutes.\nhow do i restrict connection attempts? For example, if 3 losses do not need to continue to beat against the wall", "username": "Sergey_Rugalev" } ]
Connect Mongo to set
2020-10-27T08:13:15.685Z
Connect Mongo to set
1,505
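One possible way to bound the retry time is the standard serverSelectionTimeoutMS connection-string option; whether the shell honors it during the initial connect should be verified for your shell version, so treat this as a sketch:

```sh
mongo "mongodb://host1:27017,host2:27017,host3:27017/test?replicaSet=rs0&serverSelectionTimeoutMS=5000"
```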
null
[ "queries", "performance" ]
[ { "code": "", "text": "How to handle Slow Queries in MongoDB", "username": "Jason_Lawrence" }, { "code": "", "text": "The question is large so I guess my answer will be large too.If you have slow operations, I would solve the issue by trying these solutions in this order.", "username": "MaBeuLux88" }, { "code": "", "text": "Any idea why this would be happening?", "username": "Temp_ORary" } ]
How to handle slow queries in MongoDB
2020-09-07T11:55:02.885Z
How to handle slow queries in MongoDB
1,981
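Whatever fix applies, the first step is identifying which operations are slow; the database profiler can capture them (the 100 ms threshold is illustrative):

```js
// Log operations slower than 100 ms to the system.profile collection
db.setProfilingLevel(1, 100)

// Inspect the most recent slow operations
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()
```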
null
[ "student-developer-pack" ]
[ { "code": "", "text": "If I finish a learning path and get a voucher do I have to use it within a certain amount of time?", "username": "adam" }, { "code": "", "text": "Hi Adam,Welcome to the community!The current codes are valid for six months.\nIt works best if you request a voucher once you’re planning on registering for the exam \nHope this helps and good luck!Lieke", "username": "Lieke_Boon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Exam voucher question
2020-10-26T19:07:41.868Z
Exam voucher question
5,242
null
[ "queries" ]
[ { "code": "SELECT * FROM Document WHERE REPLACE(Field,'.','')=REPLACE(@Search,'.','')", "text": "I want to make a query similar to this Microsoft SQL Server query:SELECT * FROM Document WHERE REPLACE(Field,'.','')=REPLACE(@Search,'.','')Basically I want to find all documents where a field matches a search parameter but ignoring all dots.\nAny help please?", "username": "Alejandro_Carrazzoni" }, { "code": " db.collection.aggregate( [ { $set: { newField: { $replaceAll: { input: \"$field\", find: \".\", replacement: \"\" } } } }, { $match: { newField: \"searchString\" } } ])", "text": "You can replace dots using $replaceAll db.collection.aggregate( [ { $set: { newField: { $replaceAll: { input: \"$field\", find: \".\", replacement: \"\" } } } }, { $match: { newField: \"searchString\" } } ])", "username": "Katya" } ]
How do I match a field but ignoring dots in a find query?
2020-10-26T16:17:40.638Z
How do I match a field but ignoring dots in a find query?
2,186
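Note that $replaceAll requires MongoDB 4.4+. The original SQL also strips dots from the search parameter, which can be done application-side before running the pipeline (a sketch; the search value is hypothetical):

```js
const search = "1.2.3";
const searchString = search.split(".").join(""); // "123"

db.collection.aggregate([
  { $set: { newField: { $replaceAll: { input: "$field", find: ".", replacement: "" } } } },
  { $match: { newField: searchString } }
])
```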
null
[ "aggregation" ]
[ { "code": "", "text": "Is there an aggregation operator that can convert a string type hex to a decimal? We are working with blockchain data and have run into cases like this quite often.", "username": "Kuan_Huang" }, { "code": "", "text": "Hi @Kuan_Huang,Welcome to MongoDB community!Have you tried to use a $convert or $toDouble operator in a project or addFields stage?type conversion, convert to double, double conversion, aggregationBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "That works. Thank you!", "username": "Kuan_Huang" }, { "code": "", "text": "case 2:\nScreen Shot 2020-10-21 at 10.27.52 PM1486×942 72 KB", "username": "Kuan_Huang" }, { "code": "", "text": "Actually there is a problem. It seems to work in below in case 1 but not in case 2:case 1:\nScreen Shot 2020-10-21 at 10.03.50 PM1566×542 44.9 KB", "username": "Kuan_Huang" }, { "code": "", "text": "Hi @Kuan_Huang ,As you can see the projection stage preview results in no value for “b” therefore its null later.I would use $addFields and have represantation of a binary string via the extended json represtation :Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you. I will find sometime to explore it and get back to you.", "username": "Kuan_Huang" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Converting a string type hex number to a decimal?
2020-10-19T18:43:12.466Z
Converting a string type hex number to a decimal?
5,605
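On MongoDB 4.4+, another option for hex strings is $function, which runs server-side JavaScript; the field name here is a placeholder, and note that parseInt loses precision above 2^53, which can matter for blockchain values:

```js
db.blocks.aggregate([
  {
    $addFields: {
      decimalValue: {
        $function: {
          // parseInt with radix 16 accepts values with or without a "0x" prefix
          body: function (hex) { return hex == null ? null : parseInt(hex, 16); },
          args: ["$hexField"],
          lang: "js"
        }
      }
    }
  }
])
```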
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "What’s the intended way to perform a destructive change to a synced realm? It seemingly causes issues with data sync for me and looks as if I’m no longer able to sync certain objects, I can still see them in Mongodb however they won’t sync back to my client (realm-cocoa rc1).", "username": "Theo_Miles" }, { "code": "", "text": "Glad you asked as we’ve just been through that process and have the same question. We were happily coding along when suddenly everything stopped working as it had and we’ve spent close to a week tracking down what we did to cause it.The good news is for us at least, additive changes seem to go smoothly - either adding collection (class) or adding a property to an existing class.From our perspective though, performing any other changes may result in sync problems:The result is that Realm sync goes offline and in many cases will refuse to sync your objects. You will see a bunch of recurring Translator Errors in the logs (View All Logs)The only option is to use the console to Terminate Sync and re-set it up from scratch.In some cases you may also get a BadChangeset error which may then be followed by Invalid Session errors. The resolution is the same. Terminate Sync and re set it up.In some cases we’ve have to totally delete all local files as well which forces the server to re-sync the data (some data loss may occur doing this depending on when the list successful sync was).For us at least, altering classes (collections), changing property names etc is just part of app development and occurs frequently during that process so I am not sure what the expectation is from the Realm folks on how to handle those types of changes as it seems that’s a lot of hoops to jump through.Hopefully they will chime in and correct my observations and provide clear direction.", "username": "Jay" }, { "code": "", "text": "Thanks for the quick reply @Jay. The change that prompted me to ask the question was the removal of a field that was only ever added in the MongoDB Realm console and I’d never actually written as part of my local client’s schema.Interestingly removing that field made me seemingly lose data in another collection on the client, digging into this deeper I noticed that some objects in that collection were not validating due to some missing required fields. I ran some commands in the mongo shell to fix up those items, deleted the app and reinstalled and they started to sync back to my client again, it seems like changing a field in the mongo admin console prompted a client reset and some fields with missing required values (likely left over from earlier development builds) failed to sync.I’m still looking into this and making sure this was actually the case, would be great to know how to handle this better in the future.", "username": "Theo_Miles" }, { "code": "", "text": "Hey Y’all - thanks for the comments here - I will comment here that from the client, your mobile app class definitions, you should be able to:Without needing to trigger a re-sync on the server. The client should be able to sync with a subset of classes that are defined on the cloud. If you define an extra class on the server side, then yes, you will need to add this class to to your server side schema - but it should not trigger a resync. 
If you are encountering different behavior with any of the above I have described then please let us know.The above are interpreted as “additive” changes - A destructive change is defined as:The above destructive changes will necessitate a full re-sync, ie. a termination of sync on the cloud and the re-enablement.If you are looking to make a destructive change then you can do two things, both follow an “API versioning” methodology -In this way, both version will be able to co-exist out in the wild without the need of a re-sync. You can use triggers to copy data between them if necessary. I hope this helps, let me know if you have any questionsBest\nIan", "username": "Ian_Ward" }, { "code": "", "text": "Thank you @Ian_Ward that helps considerably. That detailed info would be awesome to add to the documentation.To be clear, if there’s a destructive change, for example, changing a property from a String to an Int there are only two options and both involve API Versioning.What if we just want to start from scratch with a Class but use the same name - like in an early development stage where there’s no data to be concerned with. Is it possible to terminate sync, drop the collection (class) in the console, update the class in code and then re-set up sync? Or is there metadata issues with that?For us during early development and model planning, we are adding, changing and removing classes all the time and want to know the best process for that which won’t negatively impact the server.", "username": "Jay" }, { "code": "", "text": "If you are just iterating on the Schema then you should be able to control all of that from the client code using Developer mode - that is what it is there for. You shouldn’t need to mess with the cloud sync schema at all - unless you are making breaking destructive changes as mentioned above, in which case, yes terminate and re-enable sync.", "username": "Ian_Ward" }, { "code": "class TestObject: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var _partitionKey: TaskProjectId = \"\"\n @objc dynamic var myProperty = \"\"\n\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\nclass TestObject: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var _partitionKey: TaskProjectId = \"\"\n @objc dynamic var myProperty = 1234\n\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\n", "text": "@Ian_Ward Thanks. The question was specifically about a destructive change as defined by your postchanging a property from a String to an IntPerhaps I have overlooked a step so let me provide a simple use case.Here’s the classString1322×598 47.7 KBAnd we want to change myProperty to an Int - we do not care about loosing any existing data since we are just starting this project fresh. 
Here’s the updated class:The following changes cannot be made in additive-only schema mode:If the goal is to change myProperty from String to Int, what’s the process?I am asking for it to be spelled out so we don’t break the server (again) and cause it to stop sync’ing (per the OP’s original question).", "username": "Jay" }, { "code": "", "text": "You also need to delete the old TestObject Schema from the cloud UI - after terminating sync and before re-enabling it.", "username": "Ian_Ward" }, { "code": "", "text": "Does that mean to go Realm->Select App->Left Column ‘Schema’ then the Schema Tab, then select the class and click Remove configurationorDoes that that that mean drop the collection by going to Atlas->Collections and clicking the trash can icon next to the collectionJust trying to not break the server.", "username": "Jay" }, { "code": "", "text": "Realm App - Schema - Remove Schema", "username": "Ian_Ward" }, { "code": "", "text": "Great! Just tried it. For anyone else following along… The button is labeled REMOVE CONFIGURATION and then a popup window with REMOVE COLLECTION.Seems to work but as it stated, it would not delete any data so in Atlas, the collection (Class) now has both Strings and Ints for the same property (based on my example above). Kinda unexpected.However when the class is read in code into a Results object, it ignores the ‘old’ version and only reads the ‘new’ objects.", "username": "Jay" }, { "code": "", "text": "Yeah if you want to delete the data you will need to delete it in Atlas ", "username": "Ian_Ward" }, { "code": "", "text": "As a followup - while Remove Collection removes the schema and allows changes to the class, if you also want to totally delete ALL of the prior data, the Collection itself must be dropped as well as Ian mentioned.That’s done in Atlas->Collections select the class and click the Trash icon - this should be done in conjunction with Remove Configuration.", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Performing destructive changes to synced Realm
2020-10-22T16:26:09.830Z
Performing destructive changes to synced Realm
7,257
null
[ "golang" ]
[ { "code": "", "text": "Hi, The mongo Java driver has “Going Async with Callbacks”. MongoDB Async Driver.Is there a similar Mongo Driver implementation for Go? Could you please point to the link?I am assuming that the go driver provided bulkWrite() return only after a successful write to the Mongod Server.", "username": "Abhay_Mukewar" }, { "code": "go func() {\n collection.BulkWrite()\n // do other stuff on completion\n} ()\n", "text": "you can use goroutines to achieve what you wantAlso, please note that I am not from the MongoDB Support, just a regular user.A piece of advice: in general, avoid callbacks", "username": "Alessandro_Sanino" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Driver For GoLang Async Queries
2020-10-15T12:31:58.672Z
Driver For GoLang Async Queries
3,174
https://www.mongodb.com/…6a1a91da86d7.png
[ "mongodb-shell" ]
[ { "code": "", "text": "How to use it? It cannot be used as a command alone, db.coll._id instanceof ObjectId will be equal to false, and the document will return true. And typeof returns undefined, no, so how do you use these two operations to operate ni?image1001×717 33.9 KB", "username": "111117" }, { "code": "var mydoc = db.coll.findOne();\ntypeof mydoc._id\n", "text": "Hi @111117,Welcome to MongoDB community!I think it needs to be operated on a variable:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Okay thank you ", "username": "111117" } ]
Mongodb instanceof and typeof cannot be used in the shell
2020-10-25T22:06:44.775Z
Mongodb instanceof and typeof cannot be used in the shell
2,687
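For reference, the working pattern from the answer, extended with the checks from the question:

```js
var mydoc = db.coll.findOne();
mydoc._id instanceof ObjectId  // true when _id is an ObjectId
typeof mydoc._id               // "object"
typeof mydoc.name              // e.g. "string", once bound to a variable
```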
null
[ "queries", "sharding" ]
[ { "code": "db.getMongo().setReadPref('nearest', [ { \"something\": \"x\", \"region\": \"a\" } ])db.getMongo().setReadPref('nearest', [ { \"something\": \"x\", \"region\": \"a\" }, {\"something\": \"x\"} ])db.getCollection('mytest').find({ _id: ObjectId(\"my-object-id\") });", "text": "Based on behavior observation it remains unclear how members are considered worthy. According to doc there’s latency consideration present, but it’s not clear if it’s latency between driver and mongo router or latency between primary and secondary. My issue is that when querying from a region which has one local secondary present, but primary with another secondary is in a remote region, query seems to end up randomly in both regions. If I remove the last ‘fallback’ readPreferenceTags, which allows other regions too, then I get results with expected latency.From documentation:Order matters when using multiple readPreferenceTags. The readPreferenceTags are tried in order until a match is found. Once found, that specification is used to find all eligible matching members and any remaining readPreferenceTags are ignored.However, the nearest read mode, when combined with a tag set, selects the matching member with the lowest network latency. This member may be a primary or secondary.Example of the setup:there’s mongos router between driver and mongo replica set, located in region:adb.getCollection('mytest').find({ _id: ObjectId(\"my-object-id\") });While using 1. setting, I get expected results. While using 2. setting, I can see that query is executed in random node matching second tag (as region is different)", "username": "prodigyf" }, { "code": "\n \n BSONObj tag = tagElem.Obj();\n \n std::vector<const Node*> matchingNodes;\n for (size_t i = 0; i < nodes.size(); i++) {\n if (nodes[i].matches(criteria.pref) && nodes[i].matches(tag) &&\n matchNode(nodes[i])) {\n matchingNodes.push_back(&nodes[i]);\n }\n }\n \n // Only consider nodes that satisfy the minOpTime\n if (!criteria.minOpTime.isNull()) {\n std::sort(matchingNodes.begin(), matchingNodes.end(), opTimeGreater);\n for (size_t i = 0; i < matchingNodes.size(); i++) {\n if (matchingNodes[i]->opTime < criteria.minOpTime) {\n \n \n \n BSONObj tag = tagElem.Obj();\n \n std::vector<const Node*> matchingNodes;\n for (size_t i = 0; i < nodes.size(); i++) {\n if (nodes[i].matches(criteria.pref) && nodes[i].matches(tag) &&\n matchNode(nodes[i])) {\n matchingNodes.push_back(&nodes[i]);\n }\n }\n \n // don't do more complicated selection if not needed\n if (matchingNodes.empty()) {\n continue;\n }\n if (matchingNodes.size() == 1) {\n return {matchingNodes.front()->host};\n }\n \n // Only consider nodes that satisfy the minOpTime\n if (!criteria.minOpTime.isNull()) {\n std::sort(matchingNodes.begin(), matchingNodes.end(), opTimeGreater);\n \n ", "text": "There could be bug as the readPreferenceTags behaviour has changed in version 4.4. Part of code location has shifted and that is the reason that readPreferenceTags behavior has changed. Or there could be major changes with the code how matching is done regard with tags.In 4.4 >In 4.2 >", "username": "prodigyf" } ]
readPreferenceTags behaviour on sharded cluster
2020-10-21T09:06:48.280Z
readPreferenceTags behaviour on sharded cluster
1,960
null
[ "student-developer-pack" ]
[ { "code": "", "text": "Hi I signed in with github student developer pack for mongodb but i was unable to login normally\nso i used sign in with google with the same gmail account i use for github and now i have two accounts\nHelp", "username": "sonu_ishaq" }, { "code": "", "text": "Hi @sonu_ishaqThank you for reaching out to us and welcome to the forum!\nI’ll send you a DM to gather a bit more insight on what happened here. Best,Lieke", "username": "Lieke_Boon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Should I have two accounts for github student developer pack mongodb?
2020-10-24T14:11:09.938Z
Should I have two accounts for github student developer pack mongodb?
5,877
null
[ "atlas-functions" ]
[ { "code": "", "text": "I need to access, from a realm app function, another realm app’s (in the same project) functions.\nIs there a straightforward way to do this? I could not find any explicit information in the documentation.\nI have tried using this approach https://docs.mongodb.com/realm/node/call-a-function/.\nI then needed to upload the “realm” dependency. I installed the realm-package locally and followed the instructions here: https://docs.mongodb.com/realm/functions/upload-external-dependencies/. But on upload I get the following errorFailed to upload node_modules.tar.gz: unknown: Unexpected token (216:9) 214 | } 215 | > 216 | async* watch({ids = undefined, filter = undefined} = {}) { | ^ 217 | let args = { 218 | database: this.databaseName, 219 | collection: this.collectionName,which I find impossible to interpret.Very thankful for any help you could provide!", "username": "clakken" }, { "code": "foobarbarfoofoo", "text": "You can’t use the realm SDKs in our functions - however a workaround here would be to use an incoming webhook for the function you want to call from your first App.Example:You have App A with function foo and App B with function bar. You want to call bar in foo.You will have to create an incoming webhook (3rd party services --> HTTP ) in App B which will execute your logic. You can send a request to this webhook from App A in function foo", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Great, I will try that.\nThanks for the quick reply!", "username": "clakken" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
From realm app function, access another realm app's functions
2020-10-23T10:55:53.298Z
From realm app function, access another realm app's functions
2,123
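A sketch of the webhook call from App A's function using the built-in context.http client; the URL shown is a placeholder for the one displayed in App B's HTTP service settings:

```js
exports = async function () {
  const response = await context.http.post({
    url: "https://webhooks.mongodb-realm.com/api/client/v2.0/app/<app-b-id>/service/http/incoming_webhook/bar",
    body: { someArg: "value" },
    encodeBodyAsJSON: true
  });
  // The webhook's return value comes back as the response body
  return EJSON.parse(response.body.text());
};
```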
https://www.mongodb.com/…3_2_1024x289.png
[ "motor-driver" ]
[ { "code": "", "text": "This is my code\nСнимок экрана 2020-10-23 в 16.49.411176×332 31.3 KBThis is error\n22407×58 7 KBBut type(info)\nWhere is my mistake?", "username": "Fungal_Spores" }, { "code": "insert_many()insert_one()", "text": "This is caused by incorrect usage of the insert_many() API. As per the documentation this method accepts an iterable of documents. Put your document inside of a list to get things to work, or if you want to only insert a single document, maybe consider using insert_one().", "username": "Prashant_Mital" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Problem with insert many using Motor
2020-10-23T18:00:11.458Z
Problem with insert many using Motor
4,222
null
[ "app-services-user-auth" ]
[ { "code": "Signup failed: Error Domain=realm::app::ServiceError Code=-1 \"authentication via 'local-userpass' is unsupported\" UserInfo={NSLocalizedDescription=authentication via 'local-userpass' is unsupported, realm::app::ServiceError=Unknown}app.login(withCredential: AppCredentials.anonymous()) { (user, error) in\n // Remember to dispatch back to the main thread in completion handlers\n // if you want to do anything on the UI.\n DispatchQueue.main.sync {\n guard error == nil else {\n print(\"Login failed: \\(error!)\")\n return\n }\n\n // Now logged in, do something with user\n\n }\n}\n", "text": "We are going through the getting started guide (crafting a macOS app) and are getting an authentication failure when attempting to sign upSignup failed: Error Domain=realm::app::ServiceError Code=-1 \"authentication via 'local-userpass' is unsupported\" UserInfo={NSLocalizedDescription=authentication via 'local-userpass' is unsupported, realm::app::ServiceError=Unknown}So to double check ourselves, we downloaded the repo project, built and ran it and got the exact same error.We’ve checked the Atlas->Realm project console, in the Users->providers section and both Anonymous sign in as well as Email/Password are On and configured.We know it’s the correct AppID as the log is showing the login attempts failing. I am not including the code as it’s the code in the repo project itselfThe error log from the console shows thisError:\nauthentication via ‘local-userpass’ is unsupportedThoughts?Edit:Since anonymous sign is On in the console we tried to use anonymous login using the code from the iOS SDK’s sectionwhich results in this errorLogin failed: Error Domain=realm::app::ServiceError Code=2 “authentication via ‘anon-user’ is unsupported” UserInfo={NSLocalizedDescription=authentication via ‘anon-user’ is unsupported, realm::app::ServiceError=InvalidSession}", "username": "Jay" }, { "code": "", "text": "This issue magically rectified itself without us changing anything. Go figure.", "username": "Jay" }, { "code": "", "text": "It usually comes when your app changes is not deployed.\nOnce deploy completed you wont get this.", "username": "Amiya_Panigrahi" }, { "code": "", "text": "@Amiya_PanigrahiThanks for the reply. The problem was the website would give an error when we attempted to deploy and it was unclear what to do about the error or how to correct it. After deleting everything - all apps, all clusters, every piece of data 3 times over a two day period, 4th time was a charm and the errors subsided.", "username": "Jay" }, { "code": "anonymous", "text": "anonymousHi , I just posted a Stackoverflow question relating to the exact same error message. Mine though did not miraculously disappear. Can you have a look at this?Chris", "username": "Chris_Kunze" }, { "code": "", "text": "I took a wild guess. Let us know if that corrects it or not as I have two other guesses as well,", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Signup Failed: authentication via 'local-userpass' is unsupported
2020-06-12T21:24:03.856Z
Signup Failed: authentication via 'local-userpass' is unsupported
7,306
null
[ "aggregation", "queries" ]
[ { "code": "$projectfield: 1{\n \"newField\": {\n \"$filter\": {\n \"input\": \"$example\",\n \"as\": \"example\",\n cond: {\n \"$and\": [\n {\"$eq\": [\"$$example.id\", \"1234\"]}\n ]\n }\n }\n },\n}\n{\n _id:ObjectId(\"5f4d78952ebfba2559305679\")\n example: [{_id:: 12314}]\n}\n", "text": "I’m using the following specification in $project but I’ve to use field: 1 to retrieve the fields that I want in the final document but instead what I want is to do is include all the fields in the final document.Output:Is there any way to retrieve all the fields?Thank you so much.", "username": "Himanshu_Singh" }, { "code": "", "text": "If you using $project to simply add a field while you want all other fields you should take a look at https://docs.mongodb.com/manual/reference/operator/aggregation/addFields/ introduced in 3.4.There is one extra thing that need to be tested. It is to use $project but to exclude a non-existent, which in principle will include all existing fields.", "username": "steevej" } ]
Retrieve all fields when using $project
2020-10-25T05:41:53.481Z
Retrieve all fields when using $project
14,802
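Applied to the pipeline above, swapping $project for $addFields keeps every existing field while adding the computed one:

```js
db.collection.aggregate([
  {
    $addFields: {
      newField: {
        $filter: {
          input: "$example",
          as: "example",
          cond: { $eq: ["$$example.id", "1234"] }
        }
      }
    }
  }
])
```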
null
[ "sharding", "containers", "devops" ]
[ { "code": "", "text": "Hi,I am working with docker and mongodb containters (in a replica set configuration), so I have 3 nodes, each with a mongodb container (1 Primary, 2 secondaries).\nEach sefver is quite powerfull (20 CPU’s, 100 GB RAM ).We expect ourdata to grow quite large, and for that I should use sharding as far as I understand.My question is:\nDoes it make any sense to configure sharding using containers on the same server?\nAny advantages for deploying sharding like this…?Thanks,\nTamar", "username": "Tamar_Nirenberg" }, { "code": "", "text": "Does it make any sense to configure sharding using containers on the same server?I do not have numbers or documentation to support my opinion but I would say that it does not make a lot of sense. The purpose of sharding is to distribute the load so that more work can be done in parallel and when more powerful hardware is not an option. If you run many shards on the same physical server within container, VM or simply another process you will end up having many processes fighting for the same resources while adding some latency for the routing of the queries. In my opinion if you cannot shard on different physical hardware you might end up with a more complicated configuration which ‘might’ be less performant. Do not forget that mongos and config servers will chew up on these resources.Any advantages for deploying sharding like this…?Yes. For testing purpose.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Deploying containers sharding on the same server
2020-10-25T08:49:47.949Z
Deploying containers sharding on the same server
2,095
null
[ "connecting" ]
[ { "code": "", "text": "I am using minikube on virtualbox. In my application i am connecting to mongodb atlas cluster using following stringmongodb://:@/filtered_data?retryWrites=true&w=majorityI start a container using following commanddocker run --name app -p 80:80 -d <app_image> but for some reason it cannot connect to atlas cluster when i run the above container in minikubeI also tried with srv command but still same issue.however, when i run same command/container from my mac, it is able to connect to atlas cluster.I am not sure what is the issue. Could someone help?", "username": "Arun_Mittal" }, { "code": "", "text": "After few days of experiment, i found out that when i start minikube using “minikube start --driver=virtualbox”, i cannot connect to mongodb atlas cluster using neither mongodb://: url nor mongodb+srv://: urlHowever if start minikube simply using command “minikube start”, my container can connect to atlas cluster using mongodb+srv://: urlSo looks like some issue with virtualbox driver.I tested same thing with microk8s and my container can connect to atlas cluster using mongodb+srv://: url", "username": "Arun_Mittal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot connect to Atlas cluster from minikube
2020-10-22T17:45:42.392Z
Cannot connect to Atlas cluster from minikube
2,301
null
[ "atlas-triggers", "stitch" ]
[ { "code": "subscriptionStatussubscription{\"updateDescription.updatedFields\": {\"subscription.subscriptionStatus\":\"current\"} }{\"updateDescription.updatedFields.subscription.subscriptionStatus\":\"current\"}{\"removedFields\":[],\"updatedFields\":{\"subscription.endDate\":\"2020-06-07T12:02:10.440Z\", \"subscription.subscriptionStatus\":\"current\"}...}", "text": "Hi there,This seems really silly but I just can’t figure it out. I want my update stitch trigger to fire on a nested field, is that possible?My particular use case is field subscriptionStatus in object subscription.\nI’ve tried:\n{\"updateDescription.updatedFields\": {\"subscription.subscriptionStatus\":\"current\"} }\nand\n{\"updateDescription.updatedFields.subscription.subscriptionStatus\":\"current\"}\nBut neither fire, despite when removing the match expression and logging the result of the expected update gives:\n{\"removedFields\":[],\"updatedFields\":{\"subscription.endDate\":\"2020-06-07T12:02:10.440Z\", \"subscription.subscriptionStatus\":\"current\"}...}\nSo it’s definitely there, what am I missing?\nThanks in advance", "username": "Carla_Wilby" }, { "code": "{\n \"updateDescription\": {\n \"updatedFields\": {\n \"subscription\": {\n \"subscriptionStatus\":\"current\" \n }\n }\n }\n}", "text": "Hi Carla – I believe the correct match syntax is as follows, can you try this and let me know if it works –", "username": "Drew_DiPalma" }, { "code": "subscriptionStatussubscriptionStatus\"current\"\"pending\"{\n \"$or\": [{\n \"updateDescription\": {\n \"updatedFields\": {\n \"subscription\": {\n \"subscriptionStatus\":\"current\" \n }\n }\n }\n },\n {\n \"updateDescription\": {\n \"updatedFields\": {\n \"subscription\": {\n \"subscriptionStatus\":\"pending\" \n }\n }\n }\n }]\n}\n", "text": "Hi @Drew_DiPalma, how can this solution be adopted to work with $or? I have a very similar problem and the only difference is that I want to watch for 2 possible values of my own equivalent of @Carla_Wilby 's subscriptionStatus. So consider if Carla needed the trigger to fire when subscriptionStatus is either \"current\" or \"pending\". How would that work? I tried adjusting your solution like this (but it doesn’t work):What’s the correct way to achieve this? Thanks!", "username": "Uchechukwu_Ozoemena" }, { "code": "", "text": "2 posts were split to a new topic: Limiting a trigger based on the update belonging to a specific field of a subdocument", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Stitch Update Trigger on Subdocument
2020-05-07T12:59:22.349Z
Stitch Update Trigger on Subdocument
3,472
null
[]
[ { "code": "", "text": "HelloI’m having trouble completing the last section of Chapter 1 for course M001, Connect to your Atlas Cluster.I have copied the connection string from as instructed in the exercise and have entered it in the IDE.My connection string is: mongo “mongodb+srv://sandbox.9fyvl.mongodb.net/” --username m001-student I have tried changing the dbname to test so the string reads as below, as I’ve read on these forums suggestions to try test database, but no luck.mongo “mongodb+srv://sandbox.9fyvl.mongodb.net/test” --username m001-studentScreenshots attached for reference.Can someone please kindly advise on this.", "username": "J_Lei" }, { "code": "", "text": "Please revise the section where the IDE is presented. You missed some fundamental concepts as you have entered the command in the file editing area.", "username": "steevej" }, { "code": "Atlas cluster", "text": "HI @J_Lei,Please enter the command in the terminal area and hit enter to connect to your Atlas cluster.Screenshot 2020-10-09 at 4.27.48 PM2118×1540 138 KB", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi @Shubham_Ranjan\nThe same issue was occuring for me too. I tried changing the dbname and entered the string in the terminal. Yet, the answer was shown to be incorrect. Could you please help me figure out what exactly do I do?\nThank you & Regards\nHarshita", "username": "Harshita_Kaur" }, { "code": "", "text": "Did you hit enter after entering the string in terminal area?\nPlease run the test results while you are connected to your cluster", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Harshita_Kaur,Please share a screenshot of the command that you ran and the output of the test result.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hello @Shubham_Ranjan\nThank you for replying. I actually got the issue resolved. I connected my cluster to the IDE a few hours back.\nRegards", "username": "Harshita_Kaur" }, { "code": "", "text": "Hi @Harshita_Kaur,I’m glad your issue got resolved. Please feel free to get back to us if you face any other issue.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "how to use browser ide , what is the url for browser ide", "username": "Narasimha_27715" }, { "code": "", "text": "Please check this", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Narasimha_27715,I hope your doubts are clear now. Let us know if you have any other questions.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi All,I am not able to connect to atlas sandbox cluster. Below is the screenshot of the error.image1212×652 36.5 KB logs.txt (4.0 KB)I have also attached full logs. please help.Thank you", "username": "Chandan_kumar3" }, { "code": "", "text": "You must provide the username and password.", "username": "steevej" }, { "code": "", "text": "Hi @steevej-1495,Thank you for you reply.I am providing Username and password as belowconnection string - mongo “mongodb+srv://sandbox.#####.mongodb.net/test” --username m001-studentPassword - m001-mongodb-basics", "username": "Chandan_kumar3" }, { "code": "", "text": "My guess, then, is that you did not created the user correctly in Atlas.I would recreate it by typing manually rather than cut-n-paste. 
Sometimes, white spaces are cut-n-pasted.", "username": "steevej" }, { "code": "", "text": "(screenshot)\nHow do I find out the name of the database?", "username": "dirland_multasim" }, { "code": "", "text": "You can use test for now. It will put you in the test DB (it is a dummy/default DB).\nOnce you create different DBs in your Sandbox cluster, you can replace test with whatever DB you wish to connect to.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thank you for your reply.", "username": "dirland_multasim" }, { "code": "", "text": "How do I fix this problem?\n(screenshot)", "username": "dirland_multasim" }, { "code": "", "text": "You were supposed to name your cluster Sandbox but you named it Cluster0. You have to terminate your current cluster and create a new one with the appropriate name.", "username": "steevej" } ]
Connect to your Atlas Cluster
2020-10-07T18:59:21.689Z
Connect to your Atlas Cluster
4,195
null
[ "database-tools" ]
[ { "code": "", "text": "I am using following command to store PDF in mongodb :\nmongofiles -d mycollection put abc.pdfand below mentioned command for downloading the PDF :\nmongofiles -d mycollection get abc.pdfThe downloaded PDF is not opening. Getting message as - File Not supported/Corrupted.I also tried storing/downloading using JAVA(GridFS) code but the issue remains the same.Has anybody ever faced this problem ?", "username": "Siddharth_Kumar1" }, { "code": "-d", "text": "Well, for one thing, the -d switch refers to a database, not a collection.\nSee https://docs.mongodb.com/database-tools/mongofiles", "username": "Jack_Woehr" } ]
MongoDB returning corrupted PDF
2020-10-24T06:18:26.185Z
MongoDB returning corrupted PDF
2,124
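With -d naming a database as pointed out above, a round trip that should preserve the file looks like this (the database name is illustrative; list verifies the stored size):

```sh
mongofiles -d mydb put abc.pdf
mongofiles -d mydb list          # shows the stored filename and size in bytes
mongofiles -d mydb get abc.pdf
```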
null
[]
[ { "code": "", "text": "Hi,Are there any best or recommended practices when it comes to creating development and production stages for Realm? I would like to continue to be able to turn sync on and off and work in development mode when my app is pushed out into production, but I currently only have one Atlas cluster and one Realm sync service. Do I just have to create a new Atlas cluster and a new Realm service for development and copy any schema changes from development over to the production cluster and production Realm when the time comes to push changes into production?Thank you!", "username": "Jerry_Wang" }, { "code": "", "text": "Yes - separate environments, Realm Apps and Atlas clusters would be what I would recommend. You can use the realm-cli to do this programmatically - https://docs.mongodb.com/realm/deploy/realm-cli-reference/index.htmlExport the configuration from dev and then import it to your new prod Realm app.", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Best practices for separating development and production
2020-10-24T03:27:00.968Z
Best practices for separating development and production
3,627
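A sketch of the export/import flow Ian describes, using the 2020-era realm-cli; app IDs and paths are placeholders, and flags should be checked against the reference linked above:

```sh
# Pull the configuration from the dev app
realm-cli export --app-id myapp-dev-abcde --output ./myapp-config

# Push it to the prod app
realm-cli import --app-id myapp-prod-fghij --path ./myapp-config
```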
null
[]
[ { "code": "", "text": "Hello,Just am not sure where to log an error I found in the docs. On this section there is a paragraph repeated.I’d be happy to hear if this is not the place, where it is.", "username": "santimir" }, { "code": "", "text": "Hi @santimir,Thanks for opening this and bringing it to our attention. I have file a bug under “docs” project on the https://jira.mongodb.com website.https://jira.mongodb.org/browse/DOCS-13946You can do this in the future as well.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Documentation Bug
2020-10-24T14:11:27.931Z
Documentation Bug
1,488
null
[ "transactions" ]
[ { "code": "exports = function(changeEvent){\n\nconst pipeline = [\n {\n $project: {\n _id: 0, \n bookingNumber: 1, \n salesforceAccountId: 1\n }\n },\n {\n $addFields: {\n bookingNumber: changeEvent.fullDocument.bookingNumber, \n salesforceAccountId: changeEvent.fullDocument.salesforceAccountId, \n }\n }\n }, {\n $merge: {\n into: 'booking_vinay', \n on: ['bookingNumber'],\n whenMatched: 'replace', \n whenNotMatched: 'insert'\n }\n }\n];\n\nconst kafkaConnectBookingCollection = context.services.get(\"xxxx\").db(\"xxxx\").collection(\"kafka_connect_booking\");\n\nreturn kafkaConnectBookingCollection.aggregate(pipeline).toArray().then(bookings => {\n console.log(`Successfully moved ${changeEvent.fullDocument.bookingNumber} data to booking_vinay collection.`);\n kafkaConnectBookingCollection.deleteOne({ bookingNumber: changeEvent.fullDocument.bookingNumber });\n console.log(`Successfully deleted ${changeEvent.fullDocument.bookingNumber} data from kafka_connect_booking collection.`);\n return bookings;\n })\n .catch(err => console.error(`Failed to move ${changeEvent.fullDocument.bookingNumber} data to booking_vinay collection: ${err}`));\n};\n", "text": "Hi,I would like to know if there is a way to implement transactions on Database trigger function?Below code actually gets triggered if there is any modification on kafka_connect_booking collection document and that document gets inserted into booking_vinay collection after some data transformation(using aggregation). Once, the transformed document gets inserted in booking_vinay collection I need to delete that document from kafka_connect_booking collection. I have used the below code to achieve it but I think it would be better to handle this with a transaction instead of relying on .then().\nSo, could you please let me know how exactly I can use transactions in my code which would lock a particular document which I am processing in kafka_connect_booking collection.Code snippet:", "username": "Vinay_Gangaraj" }, { "code": "", "text": "Hi @Vinay_Gangaraj,Transactions are supported in Realm functions and triggers:However, $merge stage is not supported in transactions so you will need to change your code for it.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,Thanks for your response.I would like to explain a bit more about my issue. When there is a change event triggers on kafka_connect_booking collection an event gets triggered and that document I am processing and inserting into vinay_collection using aggregate and deleting that document from kafka_connect_booking collection.Now, majore concern in this cycle is, Let’s say a record of ABCD1 gets inserted in kafka_connect_booking collection and during this event processing itself let’s say another upsert for the same document ABCD1 happens on kafka_connect_booking collection which in-turn triggers another event but the first event would have completed and it would have deleted record of ABCD1 from kafka_connect_booking and now the second event for the same record processes it comes back and tries to delete the same data which could result with an exception as there is no record of ABCD1 in kafka_connect_booking collection to delete. 
So, I wanted to lock the ABCD1 document of the kafka_connect_booking collection when there is an insert/update, so that until the first event's processing is completed the second event will wait in a queue.I hope I was able to make you understand my issue. If we can’t use triggers, is there any way I can lock the document in kafka_connect_booking until the event processing is completed for that particular document?Thanks,\nVinay", "username": "Vinay_Gangaraj" }, { "code": "", "text": "Hi @Vinay_Gangaraj,Perhaps you should update a status field for this record inside the trigger using a transaction, and then do the logic of your trigger.For example, the first transaction command is to update the status field to in-progress, and just before committing, change it to done.This will make any other operations outside of the transaction wait for its completion/abort:If a transaction is in progress and has taken a lock to modify a document, when a write outside the transaction tries to modify the same document, the write waits until the transaction ends.Will that work for you?Best\nPavel", "username": "Pavel_Duchovny" } ]
Transactions on Database Trigger function
2020-10-22T13:38:21.407Z
Transactions on Database Trigger function
3,634
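One way to realize the status-field guard Pavel suggests is a single findOneAndUpdate used as an optimistic lock, so a second invocation for the same booking backs off; the status field is hypothetical and error handling is omitted:

```js
const coll = context.services.get("xxxx").db("xxxx").collection("kafka_connect_booking");

// Atomically claim the document; matches only if no other invocation holds it
const claimed = await coll.findOneAndUpdate(
  { bookingNumber: changeEvent.fullDocument.bookingNumber, status: { $ne: "in-progress" } },
  { $set: { status: "in-progress" } }
);
if (!claimed) {
  return; // another trigger invocation is already processing this booking
}
// ... run the aggregation + $merge, then delete the source document
// (remember to reset the status if processing fails)
```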
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "I am opening 3 realm with different partition value. This got result in stoping all the existing functionality. Can you tell me the reason behind this malfunctioning.??", "username": "2018_12049" }, { "code": "", "text": "The question is a bit vague - opening three realms with different partition values works and does not cause any issues…Can you perhaps share some code you’re having difficulty with and describe the expected behavior?", "username": "Jay" } ]
Opening multiple realms with different partition values
2020-10-23T05:03:32.277Z
Opening multiple realms with different partition values
1,933
null
[ "devops" ]
[ { "code": "", "text": "From our CircleCI builds, we are in need of connecting to our Atlas replica sets. We have a VPC Peering connection from our AWS VPC so my first idea was to setup a bastion host within our VPC where our CircleCI servers can SSH tunnel from, however it seems like it’s not possible to connect to a replica set via SSH tunneling from what I’ve read online. Would a VPN be the next best option? Any other suggestions?", "username": "Jason_Mattiace" }, { "code": "", "text": "Hi @Jason_Mattiace,Welcome to MongoDB community!I read that Circle CI can be installed in your AWS vpc and it can be peered to Atlas project.Do you have another topology in mind?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hey @Pavel_Duchovny, thanks for the reply. In order to do that we would have to get an enterprise package which we don’t want to do at this time.", "username": "Jason_Mattiace" }, { "code": "", "text": "Hi @Jason_Mattiace,So if you are running outside of AWS you will need to whitelist a perminant public IP/CIDR of your Circle CI servers.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "We can’t whitelist IPs since the CircleCI block is too large and open to many others. I’m looking at solutions to setup a VPN connection since SSH tunneling doesn’t seem to be an option.", "username": "Jason_Mattiace" }, { "code": "", "text": "Hi @Jason_Mattiace,In that case I would recommend looking into Aws Private Link connection setupConnections to private endpoints within your VPC can be made transitively from:Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What's the best way to connect to an Atlas replica set from CI/CD servers
2020-10-23T02:11:52.318Z
What's the best way to connect to an Atlas replica set from CI/CD servers
3,762
null
[ "security", "configuration" ]
[ { "code": "", "text": "Hello. I have a setup where the TLS certificate (and private key) are replaced every few months (for renewal purposes). I was wondering how I would go about reloading the TLS certificate and key so mongod used the new one. It doesn’t seem that SIGHUP or SIGUSR1 (the standard signals for rehashing TLS certificates) would work. I would like to avoid restarting mongod if possible.", "username": "figboot" }, { "code": "", "text": "I don’t think it is possible to update certificates without mongod bounce\nYou have to do it rolling restart method\nPlease check these linkshttps://jira.mongodb.org/browse/SERVER-10962", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Oh, that’s not very good. I don’t have a very complex system going on. Just a single mongod running on a single server.", "username": "figboot" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to reload mongod when TLS certificate is renewed
2020-10-22T02:16:25.525Z
How to reload mongod when TLS certificate is renewed
4,523
null
[ "stitch" ]
[ { "code": " $set: {\n \"streams.$[elem].pickup_point_id\": \"\",\n _modified: new Date()\n }\n },\n {\n multi: true,\n arrayFilters: [ { \"elem.pickup_point_id\": pickup_point_id } ], //pickup_point_id is sent in arguments and it's working fine for everything else\n }\n )", "text": "I’m working on a function in MongoDB Stitch and I’m having an error when I try to update multiple documents in an array. I’m using an arrayFilter and $ operator.\nThe error I’m getting is the following:StitchError: No array filter found for identifier ‘elem’ in path ‘streams.$[elem].pickup_point_id’And this is the function:return locatCollection.findOneAndUpdate({_id: res._id} ,\n{", "username": "Juan_Amaral" }, { "code": "arrayFilters", "text": "Hi @Juan_Amaral, welcome!StitchError: No array filter found for identifier ‘elem’ in path ‘streams.$[elem].pickup_point_id’Unfortunately arrayFilters is currently not supported. There’s an open canny arrayFilters | Voters | MongoDB for this operator to be added. Please feel free to up-vote to get notifications on request.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Thanks for the answer, is currently unsupported for stitch only? When I was trying this in stitch I also tried it in mongo shell and it worked just fine.", "username": "Juan_Amaral" }, { "code": "", "text": "Hi @Juan_Amaral,Thanks for the answer, is currently unsupported for stitch only?Yes, that’s correct. The canny link above is only for MongoDB Stitch.In regards to MongoDB server arrayFilters with $<identifier> operator has been available since MongoDB v3.6+.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
arrayFilters with $<identifier> not working in stitch
2020-05-06T00:24:39.274Z
arrayFilters with $<identifier> not working in stitch
6,842
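Since arrayFilters is supported by the server itself (v3.6+), here is a minimal shell sketch of the equivalent update run directly against MongoDB rather than through Stitch; the collection name and the someId/pickup_point_id variables are hypothetical placeholders:

    db.locations.updateMany(
      { _id: someId },
      {
        $set: {
          "streams.$[elem].pickup_point_id": "",
          _modified: new Date()
        }
      },
      // Only array elements matching this filter are updated.
      { arrayFilters: [ { "elem.pickup_point_id": pickup_point_id } ] }
    )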
null
[ "ops-manager", "kubernetes-operator" ]
[ { "code": "", "text": "Following instructions at Install the MongoDB Enterprise Kubernetes Operator — MongoDB Kubernetes Operator 1.18. At the end of the doc, it says you can create an instance of Ops Manager and deploy MongoDB resources.Do you have to install Ops Manager or can you skip to deploy MongoDB resources?", "username": "Albert_Wong" }, { "code": "", "text": "I found out that if you install the MongoDB Operator in OperatorHub, you can then connect it to cloud.mongo.com and don’t have to do a local Ops Manager install within the kube cluster.", "username": "Albert_Wong" }, { "code": "", "text": "Also the MongoDB Operator cannot connect to an Atlas-based organization in cloud.mongodb.com. It must be of type Ops Manager-based organization in cloud.mongodb.com", "username": "Albert_Wong" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Operator 1.8 on Red Hat OpenShift 4.5. Can you skip the install of Ops Manager?
2020-10-23T02:38:22.776Z
MongoDB Operator 1.8 on Red Hat OpenShift 4.5. Can you skip the install of Ops Manager?
2,589
null
[ "python" ]
[ { "code": "from pymongo import MongoClient\n\nfrom utils.errors import databaseCreateChatPostMissingData\n\nfrom utils.jsonLoader import read_json\n\nclass user_data:\n\n #Setup class initilization and stuff like that\n\n def __init__(self, userId, dataIn):\n\n self.database_name = read_json(\"ids\")['guildDbName']\n\n self.uid = int(userId)\n\n self.data = dataIn\n\n self.db = MongoClient(\"mongodb://127.0.0.1:27017\")[self.database_name]\n\n self.query = {\"_id\": int(userId)}\n\n #Perhaps add a dataparser\n\n #def dataParser(self):\n\n # return \n\n #Implmentation/behind scences helper functions not for normal use NEVER USE THESE EXCEPT AS HELPER FUNCTIONS FOR CLASS\n\n #Makes the new dict for the chat dict to be appended to list\n\n def __makeChatPostDict(self, messageChannelName: None, messageCurrentUserName: None, messageId: None, messageCreatedTime: None, messageContent: None):\n\n if messageChannelName is None or messageContent is None or messageId is None or messageCreatedTime is None or messageCurrentUserName is None:\n\n raise databaseCreateChatPostMissingData\n\n dct = {\n\n \"content\": str(messageContent),\n\n \"channelName\": str(messageChannelName),\n\n \"username\": str(messageCurrentUserName),\n\n \"time\": str(messageCreatedTime),\n\n \"mid\": str(messageId)\n\n }\n\n return dct\n\n #This is function responsible for actually updating the data(Split into 2 functions later during refactoring)\n\n def __assembleChatPost(self, messageChannelName: None, messageCurrentUserName: None, messageId: None, messageCreatedTime: None, messageContent: None):\n\n try:\n\n #Setups the collection for chat log assembler along with also setting up the before list of chat data(searchs via user id doesn't include user_id on return tho)\n\n chat_col = self.db[\"chat_logs\"]\n\n before_data = None\n\n raw_chat_logs = None\n\n before_data = chat_col.find_one({\"_id\": self.uid}, {\"_id\": 0, \"server_chat_data\": 1})\n\n for x in before_data:\n\n print(x[\"server_chat_data\"])\n\n raw_chat_logs = []\n\n #Makes the new chat logs with the proper extra data appeneded\n\n updated_chat_logs = raw_chat_logs.append(self.__makeChatPostDict(self, messageChannelName, messageCurrentUserName, messageId, messageCreatedTime, messageContent))\n\n chat_col.update_one(self.query, {\"$set\": {\"server_chat_data\": updated_chat_logs}}, upsert = True)\n\n return True\n\n except:\n\n return False\n\n #Setup Non-Private Access methods\n\n def dbChatServerPost(self):\n\n d = self.data\n\n return self.__assembleChatPost(d[\"messageChannelName\"], d[\"messageCurrentUserName\"], d[\"messageId\"], d[\"messageCreatedTime\"], d[\"messageContent\"])\n\n def testRead(self):\n\n chat_col = self.db[\"chat_logs\"]\n\n for thingy in chat_col.find({\"_id\": self.uid}, {\"_id\": 0, \"server_chat_data\": 1}):\n\n print(thingy)\n\nuser_chat_log_test_dict = {\"messageChannelName\": \"test\", \"messageCurrentUserName\": \"test\", \"messageId\": \"test\", \"messageCreatedTime\": \"test\", \"messageContent\": \"test\"}\n\ntest = user_data(1, user_chat_log_test_dict)\n\ntest.dbChatServerPost()\n\ntest.testRead()\n", "text": "this is my python code and i confirm my server is receiving the connections", "username": "kaibeast223" }, { "code": " except:\n return False\n", "text": "This blanket try/except is hiding bugs in your code. 
Remove it and you will find the problem:", "username": "Shane" }, { "code": "", "text": "I removed it, and it’s due to a None type:\nFile “database.py”, line 60, in <module>\ntest.dbChatServerPost()\nFile “database.py”, line 50, in dbChatServerPost\nself.__assembleChatPost(d[“messageChannelName”], d[“messageCurrentUserName”], d[“messageId”], d[“messageCreatedTime”], d[“messageContent”])\nFile “database.py”, line 38, in __assembleChatPost\nfor x in before_data:\nTypeError: ‘NoneType’ object is not iterable\nI’m using the database to store logs of messages as a dict(id, log[dict(logs)]), but I’m not sure why it’s a None type. The code is the same, except I removed the try/except.", "username": "kaibeast223" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
My database straight up isn’t working and I’m getting zero errors, no idea why
2020-10-22T23:46:52.962Z
My database straight up isn’t working and I’m getting zero errors, no idea why
2,079
null
[]
[ { "code": "", "text": "Hello everyone!Thank you to all our participants in April’s scavenger hunt series! Over 50 members earned their Internet Detective badges. An impressive 20 members earned the Serial Solver badge by completing at least three out of four of our puzzles. Everyone who participated and included their mailing address will receive a MongoDB sticker pack.For now, because of logistical limitations thanks to COVID-19, we are going to pause the scavenger hunt activity to make sure everyone who participated gets their prizes. Please note: there may be a delay in delivery times and we appreciate your patience.Keep an eye out for our next fun activity!Cheers,Jamie", "username": "Jamie" }, { "code": "", "text": "Can we get some tracking numbers if stickers are sent or expected date , Also if know that stickers were sent in multiple or single package\nAnd also can you have update on this\n“We will soon be announcing additional rewards for completing multiple scavenger hunts with a special reward for those who complete all our weekly scavenger hunts this spring and summer.”", "username": "Alex_Beckham" }, { "code": "", "text": "Hi @Alex_Beckham,The stickers will come in a single package, but we will not provide each individual a tracking number. I don’t have an expected date for anyone yet. Logistics right now are pretty unpredictable.Re: the additional ‘big’ reward we wanted to do, that is on pause for now too. We’ll reevaluate later this year to see if we can better accommodate restarting this activity.Sorry I don’t have better news. This one was tough to put on hold, as we were all having fun with it.Cheers,Jamie", "username": "Jamie" }, { "code": "", "text": "Any Update On the Sticker Packs ?", "username": "Alex_Beckham" }, { "code": "", "text": "Hey @Alex_Beckham! All of the sticker packs have now been sent out. (Apologies for the delay – I ended up packing and mailing them myself! COVID19 is really making logistics a pain…) Keep an eye on your mailbox if you submitted your mailing address with your scavenger hunt submission. ", "username": "Jamie" }, { "code": "", "text": "Hello everyone !I received it today in France .mongodb_pack1210×1386 289 KBThank you @Jamie @Ryan_Quinn and the others for this event ! I hope it can resume soon (maybe a bigger prize? ).", "username": "Gaetan_MORLET" }, { "code": "", "text": "Hi,yes it was fun. Suggestion for a bigger prize: maybe we can earn “points” next time and at the end MongoDB is planting a tree for each x points which have been archived by all of us ? The leaf is there so why not add the tree??Cheers,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "stickers1040×585 27.5 KB\nHey guys, just received my stickers yesterday in India. It was such a pleasant surprise given the fact that I was not expecting it at all. I completely forgot about attending the scavenger hunts, until these beautiful stickers arrived.Thank you @Jamie and the other organizers for this lovely surprise and this awesome event.", "username": "infinity_intellect" }, { "code": "", "text": "Hello, can you make shipments to Cuba?", "username": "Pedro_Almirall" }, { "code": "", "text": "", "username": "Jamie" } ]
Thanks for playing with us!
2020-05-12T08:44:11.017Z
Thanks for playing with us!
7,456
null
[ "graphql" ]
[ { "code": "", "text": "Hi!I am trying to create a Custom Resolver with a custom input type in the form of an array of objects. No GraphQL Warnings are generated, but if I go to the GraphQL>Schema tab I get the following error code:“Introspection must provide input type for arguments, but received: [ArrayOfShirtItem].”This is what my custom resolver custom input type looks like: https://jsonblob.com/1d5368db-0dfb-11eb-a6df-a7f5e0d24313Can you tell me what might be wrong?", "username": "petas" }, { "code": "", "text": "Hi Petas,This seems like a bug which we’re investigating, thanks for flagging and I’ll keep this post updated.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hi @petas, we fixed this bug in our recent release and this should be working now. Let me know if you run into any issues.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Awesome!We decided on another approach, but the time will probably come when we will try this again. If we encounter some problems then we will reach out again!Thanks a lot Sumedha!", "username": "petas" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Introspection must provide input type for arguments, but received
2020-10-14T09:05:42.333Z
Introspection must provide input type for arguments, but received
3,996
null
[ "swift" ]
[ { "code": "", "text": "We are delighted to announce the Realm iOS Hackathon running for the week of November 16th.Attending a virtual hackathon is new to most of us and our challenge for you is simple: Unleash your creativity! Join in, bring your friends & colleagues and get started on our ideas with Realm and Swift. Throughout the Hackathon, there will be MongoDB Realm Engineers and Developer Advocates on hand, ready for 1:1 support and we’ll hold break out sessions dealing with core mobile development topics.Please register HEREand do share the word - the more the merrier!!We look forward to welcoming you on November 16th and if you 'd like a taste of what’s involved, please visit THIS POST on our DevHub by team PurpleBlack, the winners of the last MongoDB Realm Hackathon that took place in July.", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Peter" } ]
Realm iOS Hackathon - November 16th
2020-10-23T10:38:14.438Z
Realm iOS Hackathon - November 16th
1,690
null
[ "graphql" ]
[ { "code": "mutation SomeMutation {\n insertOneFoo(data: {\n quantity: 12\n})\n}\nquantitylong", "text": "Running a mutation like the following:Where quantity in my schema is defined as bsonType long will result in the value being silently converted to a string which will no longer be valid when attempting to query or sync from other clients. This is not a problem with the sync clients (realm-cocoa at least).Edit: I guess this is just a limitation of sending 64bit ints over json, I’ll try and figure out a workaround.", "username": "Theo_Miles" }, { "code": "integerlonglong", "text": "Hey Theo - thanks for bringing this to our attention. Can you use the integer type for now?As for the reason why we ask users to pass in a string for the long type, it is because different languages have different limitations around long and we want each client to be able to cast the returned string from GraphQL the way it best suits their use-case.However, the fact that it is getting cast to a String in MongoDB is an issue that we are working on resolving, so that the data in your collections respects your JSON schema - I will follow up on when this is fixed.", "username": "Sumedha_Mehta1" }, { "code": "Int32integer", "text": "Thank you for your response and help @Sumedha_Mehta1, the reasons make sense and I didn’t think of that when initially asking the question.The thing is that my local sync client’s schema (realm-cocoa) is defined as Int32 so I’m unsure why it was mapped to a long on the server-side, this might be my fault though, maybe at some point, I manually set it to that. What would be the best way to convert this value to an integer? Add a new field?", "username": "Theo_Miles" }, { "code": "", "text": "The Realm Database stores all integers as int64, regardless of what the SDK model uses. The SDK handles the downcasting/upcasting internally, so for the most part this should be transparent to the developer. The exception, as you have seen, is when the schema gets synchronized - when developer mode is enabled, the server side schema that gets generated doesn’t use the swift models but instead uses the database schema (thus using the int64 type). It is safe to manually specify integer in the server schema - Realm sync will correctly downcast/upcast it.", "username": "nirinchev" }, { "code": "", "text": "Thanks for the explanation @nirinchev, appreciate it. So I should be able to just specify integer on the sync server schema config? I assume this will trigger a sync restart? Do I also need to run an update in the MongoDB shell to change the type of these to integer?", "username": "Theo_Miles" }, { "code": "", "text": "If you have non-integer values for that field in the database, you’ll need to convert them manually, yes. And yes, changing the type will require reinitializing sync.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
GraphQL Long scalar type
2020-10-22T13:22:19.005Z
GraphQL Long scalar type
3,125
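If existing documents already contain stringified values, a pipeline-style update (MongoDB 4.2+) along these lines could do the manual conversion nirinchev mentions; the collection and field names here are hypothetical:

    db.foos.updateMany(
      { quantity: { $type: "string" } },                    // only documents with the bad type
      [ { $set: { quantity: { $toInt: "$quantity" } } } ]   // cast in place
    )

Note that $toInt throws on non-numeric strings, so it may be worth testing on a copy of the data first.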
null
[ "dot-net" ]
[ { "code": "", "text": "hi, i want to make a data migration tool for migrating data from 1 environment to another, these environments will be feature environments but in order to populate data into these environments there is a need for a tool that can migrate data from a mongo into this environments instance of mongo.so i was thinking of a simple application that takes a connection string in an input field so that we can migrate from any of our current and future mongo databases, and an input field to write a simple query to specify the dataset you need as there will be a data limit as to how much data you are allowed to migrate for various reasons,this means that i will need to be able to pass not only a connection string from a string input field, which is the easy part, but be able to write find querys in another field that i will then be able to use with the c# driver and run against the database the connection string leads to.is there a way to do this? or does mongoDB simply not support understanding queries from strings alike how SQL understands queries from strings?", "username": "patrick_johnsen" }, { "code": "", "text": "Hi Patrick,Unlike SQL, the MongoDB query language isn’t really a language, but is, instead, a set of method/function calls on the collection object. You can pass components of the arguments, like the find JSON doc and the sort JSON doc to those operations.Typically, C# developers tend to use the builders instead.Regards,\nSteve", "username": "Steve_Hand" }, { "code": "", "text": "this i know, however i was looking for a simple way of writing queries into a text input field and have it parse it as a query, i would ofcourse prefer to achieve this without programattically identiyf keywords as FIND and find some way of attaching the find function when building my function and such. but if there is no way around it then i suppose this is what i would have to do", "username": "patrick_johnsen" }, { "code": "", "text": "Good morning Patrick,AFAIK, MongoDB does not have a language, so there’s nothing to parse. There’s nothing to work around.Having written this very utility that you describe, twice, there was no need to have to deal with arbitrary statements in my case. For me, the utility can be told the collections to copy and even the search criterion to be used to select documents from those collections. The operations required to migrated that data can be determined at run time and the variability of the operations is very limited. BTW, you may need to process change streams as well, depending on your use case.Given that there is likely a time limit on the migration, like having it complete is a maintenance window, it would be unwise to have dynamic SQL-like statements even if you could because one would want to use prepared statements in the implementation in order to gain the related performance.What benefit do you expect to gain if there was a MongoDB operation language?", "username": "Steve_Hand" }, { "code": "", "text": "well, there is no maintenance time limit, as we would populate these databases as the feature testing environments were set up. the benefit i was expecting was the ability to have a query input field, that could easily allow for writing the specific find query with whatever else you needed like order by or skip and take or what else you need, and then have it understand the query from that text input field without having to figure out how to create a factory that detects keywords from the text in order to put together the query. 
I know I can pass string variables to the inside of a find method in the C# driver, but I would like to avoid writing a strict query that takes variables, and instead allow the tool to construct the entire query from text. But seeing as there is no Mongo language, I will need to figure out a suitable alternative to the solution I first thought of.", "username": "patrick_johnsen" }, { "code": "", "text": "I think you may be overthinking this a bit. The number of ops needed to accomplish the task is very limited.Good luck.", "username": "Steve_Hand" }, { "code": "BsonDocument.ParseCollection.Find()string query = \"{ 'foo' : 'bar', 'baz':{'$gt': 7} }\";\nvar filter = BsonDocument.Parse(query);\nvar result = collection.Find(filter).FirstOrDefault(); \nFind()", "text": "Hi @patrick_johnsen and welcome to the forums,this means that I will need to be able to pass not only a connection string from a string input field, which is the easy part, but also be able to write find queries in another field that I will then be able to use with the C# driver and run against the database the connection string leads to.Depending on your use case, you can try to utilise BsonDocument.Parse to parse a string into a BsonDocument that you could pass into Collection.Find(). For example:You won’t be able to construct limit and sort with Find(), but you may be able to use an aggregation pipeline instead. With an aggregation pipeline you could construct $match, $sort, etc.Just be extra careful exposing string(s) that your application would pass into the database. Generally you would display collections and fields within the collections for users to filter on, essentially creating an abstraction layer for sanity checks and safety.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Interesting, I’ll try that and look into this. Thanks!", "username": "patrick_johnsen" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Any way of running mongo shell queries through the C# driver?
2020-10-16T17:12:26.019Z
Any way of running mongo shell queries through the C# driver?
8,806
null
[ "app-services-user-auth" ]
[ { "code": "const newUser = await app.emailPasswordAuth.registerUser('[email protected]', '123456');\nconst config = {\n sync: {\n user: user,\n partitionValue: 'PUBLIC',\n}\nRealm.open(config).then((userRealm) => {\n userRealm.write(() => {\n const result = userRealm.create(\n \"User\",\n new User ({\n _id: ( I want to put id of newly register user here),\n _partition: `User=${Id}`,\n name: email,\n role:role\n })\n );\n \n }); \n});\n", "text": "I have one scenario when admin needs to create other users account. where he provide email, password and other custom info too. now when Admin add new user, I call app.emailPasswordAuth.registerUser(). to get register with application. It will take only email and password. I want to store custom info based of Id return by app.emailPasswordAuth.registerUser(). but unfortunately it won’t return any id for newly created user. how facing a problem to map user info into mongo atlas db.\nhere is my code:Can you please guide me how it will be done. Thank you in advance!!!", "username": "2018_12049" }, { "code": "const user = await app.login(Credentials.emailPassword('[email protected]', '123456'));\n\n// ...\n\n_id: user.id\n", "text": "The user id gets populated the first time the user logs in with the credentials. So you probably need to add something like:", "username": "nirinchev" }, { "code": "login", "text": "loginBut I want to define roles before he/she logs in so that I can assign login flow based on role. N that’s why I need ID to add custom information to user before he/she logs in.", "username": "2018_12049" }, { "code": "", "text": "I think my question is not clear to you. let me try again.\nFor example you are Admin who is currently logged into the system. now he add users( employees). so he called app.emailPasswordAuth.registerUse() to register employe with App. Admin also want to store Employees Address,Phone num etc. It is only possible if I get the Id of Employee registered with the app.\nAs u say app.login() will populate the Id of employee…but then he will get logged in when Admin is already logged in into the app.", "username": "2018_12049" }, { "code": "async function registerNewUser(email, password) {\n // Store the current user to switch back to it\n const currentUser = app.currentUser;\n\n // Login new user to allocate an Id for them\n await app.emailPasswordAuth.registerUser(email, password);\n const newUser = await app.login(Credentials.emailPassword(email, password));\n\n // We have the user id now, let's log them out and remove them from the local device\n await app.removeUser(newUser);\n\n // Switch back to the original currentUser\n app.switchUser(currentUser);\n\n // Return the user Id to setup Address, Phone, and so on.\n return newUser.id;\n}\napp.emailPasswordAuth.registerUser", "text": "I understand the question, but am not sure what your concern is with logging in the user. There are no issues with logging in multiple users on the same device and you don’t have to transition the app UI to indicate a new user has logged in. Here’s an example function that hopefully clarifies my point:In your code, instead of calling app.emailPasswordAuth.registerUser, you can now call the new function and get the Id of the newly registered user.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
App.EmailPassword.register()
2020-10-22T05:31:00.716Z
App.EmailPassword.register()
3,228
null
[ "aggregation", "queries" ]
[ { "code": "count[\n {\n \"recordID\": \"11989\",\n \"count\": 5\n \n },\n {\n \"recordID\": \"2561\",\n \"count\": 10\n \n },\n {\n \"recordID\": \"57546\",\n \"count\": 30\n \n },\n {\n \"recordID\": \"12623\",\n \"count\": 40\n \n },\n {\n \"recordID\": \"199429\",\n \"count\": 50\n },\n {\n \"recordID\": \"12793\",\n \"count\": 60\n \n }\n]\n{ \n \"_id\" : ObjectId(\"5f8f52168\"), \n \"recordID\" : \"11989\", \n \"count\" : 5\n}\n{ \n \"_id\" : ObjectId(\"5f8f52148\"), \n \"recordID\" : \"2561\", \n \"count\" : 10\n }\n{ \n \"_id\" : ObjectId(\"5f8f52038\"), \n \"recordID\" : \"57546\", \n \"count\" : 30\n}\n{ \n \"_id\" : ObjectId(\"5f8f52168\"), \n \"recordID\" : \"11989\", \n \"count\" : 5\n}\n{ \n \"_id\" : ObjectId(\"5f8f52148\"), \n \"recordID\" : \"2561\", \n \"count\" : 10\n }\n", "text": "I need to fetch documents where the total count sum less than or equal to a specified value.So in the query, I want to sum the documents count and $sum should not cross the limit specified.A sample format of my MongoDB schema:Example1: Let’s say when totalcount<=50) I need to fetch the documents, in which total count sum less than or equal 50 (totalsum<=50), it should return below documents.count <=50 ( count = 5 + 10 + 30 )Example2 : when totalSum<=20, query should return below documents.count <=20 ( count : 5 + 10 = 15 )How to achieve this use case using aggregation framework?MongoDB Playground", "username": "Tiya_Jose" }, { "code": "", "text": "Hi @Tiya_Jose,Seems like a similar request was made earlier:Consider reading it and let us know if you have questionsThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to fetch documents with condition on $sum that it should not cross the limit specified
2020-10-23T03:35:11.132Z
How to fetch documents with condition on $sum that it should not cross the limit specified
3,506
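On MongoDB 5.0+ (newer than this thread), $setWindowFields makes the running total direct; a sketch against the sample documents above, with the ascending sort order and the 50 limit as assumptions:

    db.records.aggregate([
      {
        $setWindowFields: {
          sortBy: { count: 1 },
          output: {
            runningTotal: {
              $sum: "$count",
              window: { documents: ["unbounded", "current"] }
            }
          }
        }
      },
      // Keep documents while the cumulative sum stays within the limit.
      { $match: { runningTotal: { $lte: 50 } } }
    ])

For the first example this yields the documents with counts 5, 10, and 30 (running totals 5, 15, 45).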
null
[]
[ { "code": "", "text": "Hello!\nGood Night y’all from Costa Rica!I’m quite new to MongoDB. I have just created and setup a Free Atlas cluster …I have the challenge to create some sort of ‘Integration’ bewteen Microsoft SQL Server and MongoDB, just a read access to mongo and being able to query some collections from SQL.I have researched in order to analyze possibilities, but so far there’s nothing that much clear for me on what method to use.I saw that creating a Data Lake on the cluster could help to employ an JDBC driver for reading and querying mongoDB collections as SQL. But when reviweing it’s possible integrations, it work like with tableau for example. Could this method work for connecting with SQL Server to the data lake maybe?Another method I saw was the one with maybe an ODBC driver, but I don’t know if that method works with an Atlas cluster or just with like, local clusters.Please, I’m very knew to DBs and much more with MongoDB and NoSQL. If someone could give some advice or guidance on this approach, it would be highly appreciated.Regards!", "username": "Luis_Guzman" }, { "code": "", "text": "¡Si es posible!I myself often do this between MySQL and MongoDB using Python with the Pythondrivers for both databases.", "username": "Jack_Woehr" }, { "code": "", "text": "To learn MongoDB deeply and quickly, take the free courses at MongoDB University.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.", "username": "Jack_Woehr" }, { "code": "", "text": "To see all the MongoDB language drivers, browse here: https://docs.mongodb.com/drivers/", "username": "Jack_Woehr" }, { "code": "", "text": "Thanks for the reply Jack! @Jack_WoehrThose Python drivers you mentioned, are different than the MongoDB JDBC drivers right?", "username": "Luis_Guzman" }, { "code": "", "text": "Yes, each language has its own set of drivers.", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it possible to get data from a MongoDB Atlas Cluster to SQL Server?
2020-10-22T04:22:55.042Z
Is it possible to get data from a MongoDB Atlas Cluster to SQL Server?
2,345
null
[]
[ { "code": "", "text": "Hi,I’m new to Mongo DB. I have no problem installing Mongo DB to Ubuntu. However I stuck at creating user, allow remote access for Compass or my app.Is there anyone can guide me?", "username": "Alfirus_Ahmad" }, { "code": "", "text": "I recommend you take M103 course at https://university.mongodb.com/You might also find the following interesting.", "username": "steevej" }, { "code": "", "text": "Thanks for your answer.Now I know the solution. In mongod.conf, don’t use 0.0.0.0, instead use the server IP address it self.", "username": "Alfirus_Ahmad" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Setting user and allow remote access
2020-09-26T07:37:55.769Z
Setting user and allow remote access
9,975
null
[ "atlas", "connector-for-bi" ]
[ { "code": "", "text": "I am using Atlas and have enabled BI Connector for my cluster.\nOn Windows PC, I have installed the mongoldb odbc driver.\nWhen I try to add a DSN by providing host, port, user, password.\nI get an error when trying to connect.\nError -> [MySQL][ODBC 1.4(w) Driver]SSL connection error: protocol version mismatch\"\nI have tried both 64/32 bit drivers and 1.0,1.1,14. versions of the driver as well without much luck. Pls advice.", "username": "Epicle_Technology" }, { "code": "", "text": "Hi,To confirm:Thank you!", "username": "Jeffrey_Sposetti" }, { "code": "", "text": "Hi Jeffrey,\nI am using Windows 8.1.\nI tried version 1.4 as well as 1.1.\nI installed and reinstalled the VC++ re-distr… 2015.\nI intend to use Power BI. But can’t create DSN yet.\nThanks.\nMahesh", "username": "Epicle_Technology" }, { "code": "", "text": "We are also facing same issue but ODBC version 1.2 (a). Can you please let us know how this can be resolved.", "username": "vishwanath_kumbi" }, { "code": "", "text": "i’m also facing the same problem", "username": "Alfirus_Ahmad" }, { "code": "", "text": "To be specific", "username": "Alfirus_Ahmad" }, { "code": "", "text": "I found the solution.Don’t use master account. We need to create database user and use that user in ODBC. However make sure the user can access anywhere and that can be set as 0.0.0.0/0", "username": "Alfirus_Ahmad" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to connect to Atlas BI Connector
2020-06-13T23:19:18.600Z
Unable to connect to Atlas BI Connector
4,253
null
[ "queries" ]
[ { "code": "", "text": "How do I make a query such that given a list: {“column1”:1,“column2”:1},{“column1”:2,“column2”:2},etc…I only get the documents in a collection where BOTH attributes match BOTH attributes in an object in the list?\nSo I don’t get documents like {“column1”:1,“column2”:2},{“column1”:2,“column2”:1}\nAs far as I know I can only do an $and with $in and that doesn’t work because that can return an object where one attribute matches the list but not the other one.", "username": "Alejandro_Carrazzoni" }, { "code": "$elemMatch", "text": "Hi @Alejandro_Carrazzoni,I think what you are looking for is $elemMatch . Please read the following comment:Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "$elemMatch only works with elements inside an array. I need something like $elemMatch but that works with attributes in the main document, not in sub-documents inside an array", "username": "Alejandro_Carrazzoni" }, { "code": "var mylist = [{\"column1\":1,\"column2\":1},{\"column1\":2,\"column2\":2}];\n\ninsert docs = [{\"column1\":1,\"column2\":1},{\"column1\":2,\"column2\":2},{\"column1\":1,\"column2\":2},{\"column1\":2,\"column2\":1}];\n{\n \"aggregate\": \"testcoll\",\n \"pipeline\": [\n {\n \"$project\": {\n \"_id\": 0\n }\n },\n {\n \"$match\": {\n \"$expr\": {\n \"$in\": [\n \"$$ROOT\",\n mylist //the var from above\n ]\n }\n }\n }\n ],\n \"maxTimeMS\": 0,\n \"cursor\": {}\n}\n{\"column1\" : 1, \"column2\" : 1}\n{\"column1\" : 2, \"column2\" : 2}\n", "text": "Hello : )The above code , filters the docs,and keeps a doc,only if it a member on mylistResult (only the members of mylist passed)Hope it helps.", "username": "Takis" }, { "code": "var mylist = [{\"column1\":1,\"column2\":1},{\"column1\":2,\"column2\":2}];\n\ninsert docs = [{\"column1\":1,\"column2\":1,\"column3\":1},{\"column1\":2,\"column2\":2\",column3\":2},{\"column1\":1,\"column2\":2,\"column3\":3},{\"column1\":2,\"column2\":1,\"column3\":4}];\n{\"column1\" : 1, \"column2\" : 1,\"column3\":1}\n{\"column1\" : 2, \"column2\" : 2,\"column3\":3}\n", "text": "What if I add a column3 attribute and I want to select all columns but only put column1 and column2 in the in expression? How can I do it?\nSo I have this:And i want this to be the result", "username": "Alejandro_Carrazzoni" }, { "code": "{\"column1\" : \"$column1\" ,\"column2\" : \"$column2\"}\n", "text": "Instead of $$ROOT you can construct the matching document.", "username": "Takis" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I make a query that matches documents where a combination of attributes is in a list?
2020-09-21T18:36:29.027Z
How do I make a query that matches documents where a combination of attributes is in a list?
5,962
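Putting Takis’s last hint together, a sketch of the column3 variant; note that document comparison in $in is field-order sensitive, so the keys of the constructed document must appear in the same order as in mylist:

    var mylist = [{ "column1": 1, "column2": 1 }, { "column1": 2, "column2": 2 }];
    db.testcoll.aggregate([
      {
        $match: {
          $expr: {
            // Build a {column1, column2} pair from each document and keep
            // the document only if that pair is a member of mylist.
            $in: [ { "column1": "$column1", "column2": "$column2" }, mylist ]
          }
        }
      }
    ])

All other fields (such as column3) are returned untouched, since there is no $project stage.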
null
[ "python" ]
[ { "code": "", "text": "In January 2020, the core CPython team sent out the following message:We have decided that January 1, 2020, was the day that we sunset Python 2. That means that we will not improve it anymore after that day, even if someone finds a security problem in it. You should upgrade to Python 3 as soon as you can.This message was a long time coming - Python 3 was first released in 2008. The PyMongo driver added support for Python 3 in 2012 (in version 2.2). Nevertheless, migrations from users to Python 3, and of our own internal python development teams, did not start right away.8 years later, we find that 75% of PyMongo downloads are Python 3, the most popular libraries have dropped support for Python 2, and there are numerous resources, tutorials, and tools available to make migrations easier (such as Porting Python 2 Code to Python 3 — Python 3.11.2 documentation).Yet some of our users are still using Python 2 in applications and we want to be sensitive to their upgrade timelines. At the same time we do not want to over-invest in a version of Python that has been end of lifed. Therefore, starting with version 4.0, PyMongo will not be compatible with Python 2.If you have any questions or feedback on this, please reach out.Thank you!!\nRachelle", "username": "Rachelle" }, { "code": "", "text": "", "username": "system" } ]
Deprecation Notice: Python 2.7 Support
2020-10-22T19:36:49.122Z
Deprecation Notice: Python 2.7 Support
1,808
null
[ "replication", "upgrading" ]
[ { "code": "", "text": "Hi,I’ve a 3.4 MongoDB cluster with tree nodes :Currently I have upgraded the version on all nodes to 3.6.x but i’ve kept the featureCompatibility version to be 3.4 only.I need to know what will be the impact of changing the featureCompatibility version to 3.6. It needs to be done as it is one of the prerequisite to upgrade the servers to 4.0.xAnd hence I won’t be able to downgrade the feature compatibility version it makes it a difficult task if things go wrong.Need more insights about the featureCompatibility version in MongoDB and what are the best practices around the same.Also it’ll be helpful if I can get list of known issue people face while moving from 3.4 to 3.6.", "username": "Manan_Verma" }, { "code": "", "text": "Hi @Manan_VermaChanging the FCV allows you to use the backwards incompatible changes for the release.Usually you are encouraged to run with the previous FCV for a while to ensure the server upgrade itself does not impact your deployment and allow for a quicker downgrade path.The downgrade guide have step on how to downgrade if you have set the FCV. It is usually removing any reliance on the features that were introduced with the new version.If you’re not using the features now, it is unlikely they will cause you issues.Read the release notes and the upgrade guide thoroughly and make some backups for peace of mind.I’d say good luck but upgrades are well documented and not that difficult.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB cluster upgrade from 3.4.x to 3.6.x to 4.0.x
2020-10-21T19:33:35.713Z
MongoDB cluster upgrade from 3.4.x to 3.6.x to 4.0.x
1,458
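For reference, the shell commands involved in the FCV step are small; a sketch of checking and then raising it, run against the admin database on the primary:

    // Check the current featureCompatibilityVersion.
    db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

    // Raise it once the 3.6 binaries have been running stably for a while.
    db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })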
null
[ "dot-net", "xamarin" ]
[ { "code": "var app = App.Create(\"realmmongoapp-ywehx\");\nRealms.Exceptions.RealmException: Keychain returned unexpected status code: -25293\n at Realms.NativeException.ThrowIfNecessary(Func`2 overrider)\n at Realms.Sync.AppHandle.CreateApp(AppConfiguration config, Byte[] encryptionKey)\n at Realms.Sync.App.Create(AppConfiguration config)\n at Realms.Sync.App.Create(String appId)\n", "text": "Hi! I install nuget Realm10.0.0.beta1when code run on:I have this exception:Can you help me?", "username": "Luigi_De_Giacomo" }, { "code": "", "text": "What platform are you running this on?", "username": "nirinchev" }, { "code": "", "text": "Visual Studio for Mac 8.7.8", "username": "Luigi_De_Giacomo" }, { "code": "", "text": "I’m sorry, I meant what is the platform that you’re targeting (running the app on). Are you building a Xamarin.iOS project running it on a simulator/actual device, or are you running a .NET Core app, targeting your Mac?", "username": "nirinchev" }, { "code": "", "text": "Oh, sorry.\nI’m running a .NET Core web-app", "username": "Luigi_De_Giacomo" }, { "code": "MetadataPersistenceModeNotEncryptedvar config = new AppConfiguration(\"realmmongoapp-ywehx\")\n{\n MetadataPersistenceMode = MetadataPersistenceMode.NotEncrypted\n};\n\nvar app = App.Create(config);\n", "text": "Hm… I’m failing to reproduce it so far, but will keep trying. In the meantime, can you try setting MetadataPersistenceMode on the realm config to NotEncrypted:", "username": "nirinchev" }, { "code": "", "text": "Now it’s ok, thanks!", "username": "Luigi_De_Giacomo" }, { "code": " var configApp = new AppConfiguration(\"realmmongoapp-ywehx\")\n {\n MetadataPersistenceMode = MetadataPersistenceMode.NotEncrypted\n };\n var app = App.Create(\"realmmongoapp-ywehx\");\n var user = await app.LogInAsync(Credentials.Anonymous());\n", "text": "Thanks, now it’s ok, but I have this error on loginAsync:Realms.Sync.Exceptions.AppException: InvalidSession: authentication via ‘anon-user’ is unsupported\nat Realms.Sync.App.LogInAsync(Credentials credentials)I have enabled anonymous user on console.", "username": "Luigi_De_Giacomo" }, { "code": "", "text": "Did you deploy the draft after enabling anonymous authentication?", "username": "nirinchev" }, { "code": "", "text": "When I have enabled anonymous authentication, it works fine!", "username": "Luigi_De_Giacomo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realms.Exceptions.RealmException: Keychain returned unexpected status code
2020-10-19T19:31:53.335Z
Realms.Exceptions.RealmException: Keychain returned unexpected status code
3,383
null
[ "atlas-device-sync" ]
[ { "code": "userId={user.id}PUBLICasyncOpen()var configuration = user.configuration(partitionValue: \"userId=\\(user.identity!)\")\nRealm.asyncOpen(configuration: configuration) { [weak self] (userRealm, error) in\n ...\n}\n{\n \"%%partition\": \"userId=%%user.id\"\n}\nFailed to open realm: Error Domain=io.realm.unknown Code=89 \"Operation canceled\" UserInfo={Category=realm.basic_system, NSLocalizedDescription=Operation canceled, Error Code=89}\nError:\n\nuser does not have permission to sync on partition (ProtocolErrorCode=206)\nPartition:\n\nuserId=5ee811450178b19c376debac\n\nSDK:\nRealm Cocoa v10.0.0-beta.2\nPlatform Version:\nVersion 14.0 (Build 18A372)\n_partition: \"userId=5ee811450178b19c376debac\"", "text": "I’m working on a prototype that partitions data into userId={user.id} and PUBLIC. According to the docs, this a supported strategy. There’s no specific example provided for writing such a sync permission, so I attempted to infer one (see below). Unfortunately when calling asyncOpen() on the user’s partition, I’m getting an error on the client and in the server logs:Code:Sync rule:Client error:Server logs:The partition key in the data is _partition: \"userId=5ee811450178b19c376debac\", so I’m a bit confused as to what I’ve misconfigured. Any help would be greatly appreciated, thanks!-Rudi", "username": "Rudi_Strahl" }, { "code": "", "text": "I held off on updating to 10.0.0, but noticed the following in the release notes:Just so I’m 100% certain I’m interpreting this correctly; does this mean the permission mechanism presented in the third step of configuring Realm Sync is superfluous now?(I’ve removed all permissions and re-created my sync, which resolves my original issue and continue prototyping, but I want to understand the dev experience/expectations for permissions going forward. Thanks to whomever can help clarify!)", "username": "Rudi_Strahl" }, { "code": "", "text": "@Rudi_Strahl I’m not sure expressions will work with concatenating a string and an expression like that - if you wanted to do that I believe you’d have to use a function for that -\nhttps://docs.mongodb.com/realm/sync/permissions/#function-rulesHowever, In reading your architecture, it sounds like you only have two realms, a per-user realm and a public realm, in that case you could just do -{\n“%%partition”: “%%user.id”\n}For the user realm.By the way, we published a guide on migrating to the new sync from the legacy version, it may be of help to you:", "username": "Ian_Ward" }, { "code": "{ \"$or\": [\n { \"%%partition\": \"PUBLIC\" },\n { \"%%partition\": \"%%user.id\" }\n ]\n}\n{ \"%%partition\": [ \"%%user.id\", \"PUBLIC\"] }\n", "text": "@Ian_Ward Ah okay - I’ll gladly simplify and use your approach. I do only have two realms currently; if I’m looking to lock down the public partition as read-only, which expression would be correct:or(Also, thanks for the pointer over to the migration guide; just started digging into it after the update to 10.0.0 - very helpful!)", "username": "Rudi_Strahl" }, { "code": "", "text": "@Rudi_Strahl I presume you want all users to have read-only access to PUBLIC but write access to only their user’s realm?In which case for the Sync permissions it would be:Read:\n{\"%%partition\": “PUBLIC” }Write:\n{ “%%partition”: “%%user.id” }", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Sync permissions appear correct, but throw error on iOS client and server logs
2020-10-20T06:08:27.699Z
Realm Sync permissions appear correct, but throw error on iOS client and server logs
5,093
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Hello.\nI’m wondering where exactly is stored the User Object collection, can I export it? Can I preview it in Mongo Compass? I’m not saying about the Custom User Data stored in my own MongoDB collection.", "username": "Stanislaw_Baranski" }, { "code": "", "text": "You can export all the Users as an admin via our Admin API.However, if you’re trying to access all the users of your application in a client side application, we recommend using custom user data to create a collection and populate the data using authentication Triggers when a new user logs in.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thank you. And where are they actually stored? And why I can’t export (hashed) passwords? I’m just wondering if it’s possible to migrate from MongoDB/realm to an on-premise system just in case.", "username": "Stanislaw_Baranski" }, { "code": "", "text": "@Sumedha_Mehta1 Could you please answer my question? This is the deciding factor for our business.", "username": "Stanislaw_Baranski" }, { "code": "", "text": "There is no way to export hashed passwords. Can you explain what exactly you’re trying to do by moving to an on-premise system and why? If this is just for authentication, you can build the authentication yourself and then use our JWT Auth Provider to authenticate the user.Otherwise, there is no way to use Realm on-prem unless you are building the backend yourself.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thanks for your answer. We are basically looking for potential blockers that can prevent from migrating to a different platform (in case we decide so). We don’t want to be “vendor-locked”.", "username": "Stanislaw_Baranski" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I access User Object collection?
2020-10-16T11:52:49.489Z
How can I access User Object collection?
2,148
null
[ "realm-studio" ]
[ { "code": "", "text": "Is it possible to import relational data into Realm? My data model for ObjectA has a one-to-many relationship using RealmList. This is for my Swift iOS app.I pre-populated data using Realm Studio’s import from CSV. Most of my data model is made up of Strings and Ints. But I’m not sure how to represent a List<> datatype in CSV.I’m creating custom code to create this data in my app at runtime. But it’s time consuming, and this strikes me as basic functionality. There has to be a better way!", "username": "Theo_Goodman" }, { "code": "1234,jay,pizza\n5678,cindy,steak\n1111,rover,1234\n2222,spot,1234\n3333,scraps,5678\nclass PersonClass: Object {\n @objc dynamic var person_id = \"\"\n @objc dynamic var person_name = \"\"\n @objc dynamic var fav_food = \"\"\n let dogList = List<DogClass>()\n}", "text": "Is it possibleAsking is it possible is pretty vague as yes, it’s possible. However, without seeing some code or examples of your data it’s going to be hard to say how or what advice to give.For example, if you want to read in ‘relational data’ you could write an app to read that data, populate your realm objects and save it to realm.Representing relational data in a flat file usually has some common keys that associate that data. For example people and dogs. The people file may look like this; persons id, name and favorite foodthen the dogs file would look like this with the dog id, dog name and then the owners idjay owns rover and spot and cindy owns scraps.The realm object would be", "username": "Jay" }, { "code": "", "text": "Thanks Jay for the feedback! I took your suggestion and ran into a problem…My end goal is to pre-populate my app with data by bundling a Realm file. But I’m doing backflips to get data in the correct format. I created my Realm from CSVs, and made a separate app to create relationships (per your input). New problem is: Realm Studio converts my variables into optionals during the import…Int?, String?, String?\n1234, jay, pizza\n5678, cindy, steakIs there a way to prevent Realm Studio from doing this? Or is there a good way to get them back into non-optionals? I wrote a method for this, but instantiating objects with ‘id = RealmOptional()’ seems to overwrite any pre-existing value with a nil value.So I’m stuck until I can figure out a way around this! Help me Obi Wan ", "username": "Theo_Goodman" }, { "code": "class PersonClass: Object {\n @objc dynamic var person_id = \"\"\n @objc dynamic var person_name = \"\"\n @objc dynamic var fav_food = \"\"\n}\nPersonClass.csvperson_id,person_name,fav_food\n1234,jay,pizza\n5678,cindy,steak\n", "text": "In my answer, I was referring to structuring your data to be read in as a flat file where you are creating the app to read that flat file. Doing it in code would allow you to generate relationships in whatever ways you want (1-1, 1-Many, Many-Many)If you’re importing data in Realm Studio that’s different and what you’re showing is not the correct format for importing.I will point you to my answer on Stack OverflowSo if we have a simple person class as in my exampleThe Realm object name is PersonClass , so the imported file name needs to matchPersonClass.csvalong with that the first line of the file needs to match the classes property names, comma separated so the import file would look like this", "username": "Jay" } ]
Can I pre-populate relational data (e.g. RealmList<Object>) into Realm
2020-10-02T23:38:10.705Z
Can I pre-populate relational data (e.g. RealmList<Object>) into Realm
4,492
null
[ "atlas-device-sync" ]
[ { "code": "const Realm = require('realm');\n\n// Define your models and their properties\nconst CarSchema = {\n name: 'Car',\n properties: {\n make: 'string',\n model: 'string',\n miles: {type: 'int', default: 0},\n }\n};\nconst PersonSchema = {\n name: 'Person',\n properties: {\n name: 'string',\n birthday: 'date',\n cars: 'Car[]', // a list of Cars\n picture: 'data?' // optional property\n }\n};\n\nRealm.open({schema: [CarSchema, PersonSchema]})\n .then(realm => {\n // Create Realm objects and write to local storage\n realm.write(() => {\n const myCar = realm.create('Car', {\n make: 'Honda',\n model: 'Civic',\n miles: 1000,\n });\n myCar.miles += 20; // Update a property value\n });\n\n // Query Realm for all cars with a high mileage\n const cars = realm.objects('Car').filtered('miles > 1000');\n\n // Will return a Results object with our 1 car\n cars.length // => 1\n\n // Add another car\n realm.write(() => {\n const myCar = realm.create('Car', {\n make: 'Ford',\n model: 'Focus',\n miles: 2000,\n });\n });\n\n // Query results are updated in realtime\n cars.length // => 2\n\n // Remember to close the realm when finished.\n realm.close();\n })\n .catch(error => {\n console.log(error);\n });\n", "text": "I have tried below code. this is writing data only in localdb. it is not syncing back to the actual db. what are other steps required to sync localdata to actal database.??", "username": "2018_12049" }, { "code": "await realm.syncSession.uploadAllLocalChanges()...\nreturn realm.syncSession.uploadAllLocalChanges().then(() => {\n realm.close();\n});\n", "text": "You seem to be closing the Realm instance immediately after writing the data - can you try to add await realm.syncSession.uploadAllLocalChanges()? With promises should probably look something like:", "username": "nirinchev" } ]
Data Not Syncing
2020-10-22T08:48:58.421Z
Data Not Syncing
2,893
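A minimal sketch combining both points in the thread above: opening the Realm with a sync configuration (the snippet in the question opens a purely local Realm, so nothing is uploaded) and waiting for local changes to upload before closing. It reuses CarSchema from the snippet above; the app ID and partition value are hypothetical:

    const Realm = require("realm");

    async function writeAndSync() {
      const app = new Realm.App({ id: "my-realm-app-id" }); // hypothetical app ID
      const user = await app.logIn(Realm.Credentials.anonymous());

      // The sync block is what makes this Realm upload to the server.
      const realm = await Realm.open({
        schema: [CarSchema],
        sync: { user, partitionValue: "myPartition" },
      });

      realm.write(() => {
        realm.create("Car", { make: "Honda", model: "Civic", miles: 1000 });
      });

      // Wait until local changes have been uploaded before closing.
      await realm.syncSession.uploadAllLocalChanges();
      realm.close();
    }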
null
[ "app-services-user-auth" ]
[ { "code": " exports = ({ token, tokenId, username, password }) => {\n // will reset the password\n return { status: 'success' };\n };\n", "text": "I want to use a custom password reset function to be invoked by my Swift client via sendResetPasswordEmail.In the MongoDB Realm Authentication Providers page I selected Automatically confirm users for User Confirmation Method and Run a pssword reset function for Password reset method.After selecting + New Function in order to provide a body for the resetFunction and entering the code below:I get the following error when I press the Save button:invalid value for resetFunctionIdCould you please help me with this issue?", "username": "Alfredo_da_Silva" }, { "code": "", "text": "Hey Alfredo -Did you name your resetFunction with a name? This error usually shows up when you have left the function name blank/removed it:image1247×292 28.7 KBIf that’s not the case, can you attach some screenshots and your stack trace from the console so we can take a closer look?", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hi Sumedha,Thanks for your reply.I tried to provide a body for resetFunc and also to another function that I named, but I got the same error for both.Please find the link to my app below.https://realm.mongodb.com/groups/5efa3289244bdc4f6d5c3028/apps/5f661c0474ea39958d2b81e2/dashboardThanks.Alfredo da Silva", "username": "Alfredo_da_Silva" }, { "code": "", "text": "Can you also screenshot your current config and the stack trace when that error appears? Unfortunately, I don’t get too much information about the error from just looking at your app.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hi Sumedha,I’m not sure what you mean by current config, and there is no stack trace, just an error on the Authentication Providers page: “Invalid value for resetFunction”.Below is the link where you can find Authentication Providers page screen shots:Access Google Drive with a Google account (for personal use) or Google Workspace account (for business use).Thanks.Alfredo", "username": "Alfredo_da_Silva" }, { "code": "", "text": "It seems like you’re hitting “new function” but you already have “resetFunction” defined. For now, you can get around this be hitting “existing function” and using the resetFunction or a new function you have defined.We will work on making that error message a bit more clearer for users - hope this helps!", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hi Sumedha,It is not clear to me yet how I can edit the body of the resetFunction, or any other function that I create to reset the user’s password and save these changes, without getting the error I mentioned earlier.Could you please elaborate?Thanks.Alfredo", "username": "Alfredo_da_Silva" }, { "code": "", "text": "You can go here in the ‘Functions’ tab to edit the already created function. App Services", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hi Sumedha,I’ve just got it to work.Thank you so much for your help.Alfredo", "username": "Alfredo_da_Silva" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot save password reset function
2020-10-18T20:22:33.909Z
Cannot save password reset function
2,155
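Editor's note: the return value is the whole contract of a custom password reset function. A hedged sketch follows (the validation logic is a placeholder; only the status values follow the documented contract):

```js
exports = ({ token, tokenId, username, password }) => {
  // 'success' resets the password immediately; 'fail' rejects the request;
  // 'pending' defers the reset until the client completes it out-of-band
  // using the token and tokenId (e.g. sent to the user by email or SMS).
  if (username.endsWith('@example.com')) { // placeholder check
    return { status: 'success' };
  }
  return { status: 'fail' };
};
```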
null
[ "dot-net" ]
[ { "code": "\npublic class DummyContainer\n\n{\n\n public DummyContainer(IList<Dummy> dummies)\n\n {\n\n Dummies = dummies;\n\n this.Id = ObjectId.GenerateNewId().ToString();\n\n }\n\n public string Id { get; private set; }\n\n public IList<Dummy> Dummies { get; private set; }\n\n}\n\npublic class Dummy\n\n{\n\n public Dummy(string name)\n\n {\n\n this.Name = name;\n\n this.Id = ObjectId.GenerateNewId().ToString();\n\n }\n\n public string Id { get; private set; }\n\n public string Name { get; private set; }\n\n}\n\nDummyContainer\ncollection.InsertOneAsync(new DummyContainer(new[]\n\n{\n\n new Dummy(\"SomeValue1\"),\n\n new Dummy(\"SomeValue2\"),\n\n}));\n\n\n class DummyContainerWithDummyNames\n\n {\n\n public string ContainerId { get; set; }\n\n public IEnumerable<string> DummyNames { get; set; }\n\n }\n\n\n var dummyNamesByUsingFindAsync = await (await collection.FindAsync(Builders<DummyContainer>.Filter.Empty, new FindOptions<DummyContainer, DummyContainerWithDummyNames> { \n\n Projection = Builders<DummyContainer>.Projection.Expression(x => new DummyContainerWithDummyNames\n\n {\n\n ContainerId = x.Id,\n\n DummyNames = x.Dummies.Select(d => d.Name)\n\n })\n\n })).ToListAsync();\n\n var dummyNamesUsingQuery = await collection.AsQueryable()\n\n .Select(x => new DummyContainerWithDummyNames\n\n {\n\n ContainerId = x.Id,\n\n DummyNames = x.Dummies.Select(d => d.Name)\n\n })\n\n .ToListAsync();\n\ndummyNamesByUsingFindAsyncdummyNamesUsingQueryDummyNames[\"SomeValue1\", \"SomeValue2\"]DummyContainerDummyContainerWrapper\npublic class DummyContainerWrapper\n\n{\n\n public DummyContainerWrapper(DummyContainer container)\n\n {\n\n Container = container;\n\n this.Id = ObjectId.GenerateNewId().ToString();\n\n }\n\n public string Id { get; private set; }\n\n public DummyContainer Container { get; private set; }\n\n}\n\n\nawait collection.InsertOneAsync(new DummyContainerWrapper(new DummyContainer(new[]\n\n{\n\n new Dummy(\"SomeValue1\"),\n\n new Dummy(\"SomeValue2\"),\n\n})));\n\n\nvar dummyNamesByUsingFindAsync = await (await collection.FindAsync(Builders<DummyContainerWrapper>.Filter.Empty, new FindOptions<DummyContainerWrapper, DummyContainerWithDummyNames> { \n\n Projection = Builders<DummyContainerWrapper>.Projection.Expression(x => new DummyContainerWithDummyNames\n\n {\n\n ContainerId = x.Id,\n\n DummyNames = x.Container.Dummies.Select(d => d.Name)\n\n })\n\n})).ToListAsync();\n\nvar dummyNamesUsingQuery = await collection.AsQueryable()\n\n .Select(x => new DummyContainerWithDummyNames\n\n {\n\n ContainerId = x.Id,\n\n DummyNames = x.Container.Dummies.Select(d => d.Name)\n\n })\n\n .ToListAsync();\n\ndummyNamesUsingQueryDummyNamesIEnumerable<string>List<string>[\"SomeValue1\", \"SomeValue2\"]dummyNamesByUsingFindAsyncDummyNamesSystem.Linq.Enumerable.SelectListIterator<MyTestProgram.Dummy, string>null", "text": "I believe I’ve found a bug related to projection, specifically when you map an array that is a deep descendant of the document root.Suppose this is my model:Let’s say I insert one document of type DummyContainer into an empty collection:And then I load all documents from that collection, projected into this type:Here we go:Both dummyNamesByUsingFindAsync and dummyNamesUsingQuery contain one document, and in both cases the property DummyNames is populated with [\"SomeValue1\", \"SomeValue2\"].Ok, in order to reproduce the bug, we’re gonna wrap the DummyContainer in a DummyContainerWrapper and let this be our root entity:We’re gonna insert such a document to a new collection:And just 
like before, we’re gonna query the collection for all documents and project them into the same type we used above:And this is where we see the bug:dummyNamesUsingQuery has one element, whose DummyNames property is an IEnumerable<string> (the concrete type is List<string>) populated with [\"SomeValue1\", \"SomeValue2\"].dummyNamesByUsingFindAsync on the other hand also has one element, but its DummyNames property is of type System.Linq.Enumerable.SelectListIterator<MyTestProgram.Dummy, string>, and the values are both null.I thought I’d post it here first, to check if this is already a known issue, before I open a Jira ticket.", "username": "John_Knoop" }, { "code": "", "text": "Since there are no replies to this post, I’ll just go ahead and open a ticket in Jira", "username": "John_Knoop" }, { "code": "", "text": "Hi @John_Knoop,Thanks for opening a ticket CSHARP-3227 for this.\nRunning the snippet code that you provided with MongoDB .NET/C# v2.11.2, I could also replicate the issue that you’re seeing.Regards,\nWan", "username": "wan" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Possible bug when mapping a subarray using FindAsync (C# driver)
2020-09-30T08:17:46.692Z
Possible bug when mapping a subarray using FindAsync (C# driver)
4,642
null
[ "sharding" ]
[ { "code": "db.adminCommand( { cleanupOrphaned: \"dbname.collectionname\" } )\n\n{\n \"ok\" : 0,\n \"errmsg\" : \"ns: <dbname>.<collectionname>, min: { document_normalized_id: MinKey }, max: { document_normalized_id: \\\"-sb6Zhn9moNP_wIyvur7uTsWXvG5AtBmKsLgz2bg02o\\\" } is already being processed for deletion.\"\n}\n", "text": "MongoDB server version: 3.4.18In the mongo sh, I am trying to run cleanupOrphaned and don’t really understand the results. All I can tell is it failed. I am not a mongo dba, just doing ops work for my team:", "username": "Philip_Izor" }, { "code": "... is already being processed for deletion", "text": "Hi @Philip_IzorThe message ... is already being processed for deletion means that the documents you’re trying to delete is already scheduled to be deleted (typically due to a chunk move), but the server haven’t been able to delete them yet for some reason (typically because a query is still holding a cursor on them). It doesn’t necessarily mean that there’s a problem.Could you elaborate what you are trying to achieve? What was the motivation behind running the cleanupOrphaned command?In another note, MongoDB 3.4 series was out of support since Jan 2020 (see Support Policy). If possible, I would encourage you to upgrade to a newer, supported MongoDB versions.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks. I found a script that does this for us, too. We’re putting orphan mongo docs in our mongo exports. I found documents in the physical document collection export that are deleted from mongo. Yes, aware of the EOSL. Waiting on architecture and devs to decide if they are upgrading or migrating. Thank you so much!", "username": "Philip_Izor" }, { "code": "", "text": "Hi, can you share the script to perform cleanupOrphaned. TIA", "username": "Dheeraj_G" }, { "code": "cleanupOrphaned", "text": "Hi @Dheeraj_G,The cleanupOrphaned command is an inbuilt server command.Please consult the documentation for your version of MongoDB server for the relevant details, as there have been some changes between releases.Quoting from the MongoDB 4.4 manual:Starting in MongoDB 4.4, chunk migrations and orphaned document cleanup are more resilient to failover. The cleanup process automatically resumes in the event of a failover. You no longer need to run the cleanupOrphaned command to clean up orphaned documents. Instead, use this command to wait for orphaned documents in a chunk range from a shard key’s minKey to its maxKey for a specified namespace to be cleaned up from a majority of a shard’s members.In MongoDB 4.2 and earlier, cleanupOrphaned initiated the cleanup process for orphaned documents in a specified namespace and shard key range.Regards,\nStennie", "username": "Stennie_X" } ]
cleanupOrphaned doesn't seem to be doing what I want
2020-08-18T12:40:36.844Z
cleanupOrphaned doesn’t seem to be doing what I want
2,123
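Editor's note: the thread mentions "a script that does this for us" without showing it. A hedged sketch, adapted from the MongoDB 4.2-era manual's cleanupOrphaned example — run it while connected directly to the shard's primary mongod, not through mongos; the namespace is the one from the thread:

```js
// Iterate cleanupOrphaned over the whole shard key range for one namespace.
var nextKey = {};
var result;
while (nextKey != null) {
  result = db.adminCommand({
    cleanupOrphaned: "dbname.collectionname",
    startingFromKey: nextKey
  });
  if (result.ok != 1) {
    print("Unable to complete at this time: failure or timeout.");
    break;
  }
  printjson(result);
  nextKey = result.stoppedAtKey; // absent (undefined) once the last range is done
}
```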
null
[ "queries" ]
[ { "code": "db.getCollection('collection_name').find(\n{\n \"deviceAttr.deviceId\": { $exists : true },\n \"deviceAttr.deviceId\": { \"$ne\" : null },\n \"deviceAttr.deviceId\": { \"$ne\" : \"\" }\n})\ndb.getCollection('collection_name').find(\n{\n \"deviceAttr.deviceId\":{ $exists:true }\n})\ndb.getCollection('collection_name').find(\n{\n \"deviceAttr.deviceId\" : { \"$ne\" : null },\n \"deviceAttr.deviceId\" : { \"$ne\" : \"\" }\n})\ndb.getCollection('collection_name').find(\n{\n \"$and\" : [\n { \"deviceAttr.deviceId\" : { \"$ne\" : null } },\n { \"deviceAttr.deviceId\" : { \"$ne\" : \"\" } }\n ]\n})\n", "text": "query 1:return :18065 resultsquery 2:return: 1 resultquery 3:return :18065 resultsquery 4:return : 1 resultI am confused about the results , it really completely overturn what I’ve known.", "username": "zhao_chao" }, { "code": "deviceAttr.deviceIdmongo> query = {\n \"deviceAttr.deviceId\": { $exists : true },\n \"deviceAttr.deviceId\": { \"$ne\" : null },\n \"deviceAttr.deviceId\": { \"$ne\" : \"\" }\n}\n{ \"deviceAttr.deviceId\" : { \"$ne\" : \"\" } }\n$and", "text": "Welcome to the community @zhao_chao!I am confused about the results , it really completely overturn what I’ve known.The issue with queries 1 and 3 is that you have repeated deviceAttr.deviceId keys within the same object. JavaScript requires keys at the same level to be unique and will only use the last value set. Both of these queries are equivalent to:“deviceAttr.deviceId” : { “$ne” : “” }You can confirm this in the mongo shell:Your query 4 example avoids the issue of repeated fields using the $and query operator, which is the correct approach for Queries With Multiple Expressions Specifying the Same Field in JavaScript. The result of matching all of the query conditions is a single document, which is expected as the combination of the previous queries.Regards,\nStennie", "username": "Stennie_X" } ]
Why do these queries return ambiguous results?
2020-10-22T03:18:23.376Z
Why do these queries return ambiguous results?
1,274
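Editor's note: for completeness, here is the first query from the thread rewritten so that all three conditions actually apply together (collection and field names as in the thread):

```js
db.getCollection('collection_name').find({
  $and: [
    { "deviceAttr.deviceId": { $exists: true } },
    { "deviceAttr.deviceId": { $ne: null } },
    { "deviceAttr.deviceId": { $ne: "" } }
  ]
})
// Note: { $ne: null } already excludes documents where the field is
// missing, so the $exists clause is technically redundant here.
```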
null
[ "sharding" ]
[ { "code": "", "text": "Hi,Let us assume, Shard A and Shard B each hold 200GB of dataSize, (85/100 storage used)\nTurnedOff balancer and added Shard C to the cluster and TurnedOn balancer to migrate chunks.If chunks migrate from shard A, B (old) holding 3000 chunks together without jumbo chunks to shard C (new) equally 1000 chunks, can I expect Shard A to reduce with dataSize from 200Gb to ~150GB at least?, so that I can perform “compact” operation to claim reUsable storage.FYI, I am on MongoDB v4.0.20TIA", "username": "Dheeraj_G" }, { "code": "", "text": "Below explanation should understand better,MongoDB v4.0.20Before [ no Jumbo Chunks verified by sh.status(true) ]:\nShard A - Chunks 1500 - dataSize 200GB\nShard B - Chunks 1500 - dataSize 200GBAfter adding a shard\nShard A - Chunks 1000 - dataSize ?? (Can I expect reduced dataSize?)\nShard B - Chunks 1000 - dataSize ?? (Can I expect reduced dataSize?)\nShard C - Chunks 1000 - dataSize 130GBTIA", "username": "Dheeraj_G" }, { "code": "compact", "text": "Hi @Dheeraj_G welcome to the community!Although in theory the data size and to some extent the storage size should be evenly distributed between the number of shards, in practice this is difficult to determine. Every deployment is different, and the data size would depend on (off the top of my head):It should be approximately evenly distributed if the collection was ideally distributed and the workload evenly distributed as well, however in practice this is not always the case.It also gets more complicated due to how WiredTiger actually allocates the data files physically within each shard (which could be very different on each shard). Deleting documents and compacting the database may result in space returned to the OS, but this is not a guarantee. WiredTiger’s compression features should help you with disk space conservation to some extent, should disk space conservation is important to you.However if you are expecting your data to grow in size, I don’t think you need to run compact. The reasoning is because if you expect your data to grow, those spaces will have to be reclaimed again by WiredTiger in the future, thus resulting in no net useful work.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Chunks migrated but dataSize didn't reduce on previous shards
2020-10-19T21:25:35.004Z
Chunks migrated but dataSize didn’t reduce on previous shards
1,517
null
[ "queries", "python" ]
[ { "code": "", "text": "If I try find() something, how can I find out if something was found or not?When I use find_one() it’s None , what about find() ?", "username": "Fungal_Spores" }, { "code": "Nonenext()>>> cur = db.test.find()\n>>> cur.next()\nTraceback (most recent call last):\n...\nStopIteration\n>>> list(db.test.find())\n[]\n", "text": "Hi @Fungal_Spores welcome to the community!I’m assuming you’re using Pymongo since you mentioned None.find() returns an iterable cursor, so if there is nothing to return, calling next() on the cursor will raise StopIteration:Alternatively, if you put the result set into a list, it will be an empty list:Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
If I try find() something, how can I find out if something was found or not?
2020-10-21T16:30:51.103Z
If I try find() something, how can I find out if something was found or not?
2,623
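Editor's note: a hedged sketch pulling the thread's suggestions together in PyMongo (connection details are placeholders); the count-based variant is an addition not mentioned in the thread:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.test.coll

# 1. Peek at the first document without raising StopIteration.
first = next(coll.find({"name": "nope"}), None)
if first is None:
    print("nothing found")

# 2. Ask the server for a count instead of pulling documents over the wire.
if coll.count_documents({"name": "nope"}) == 0:
    print("nothing found")

# 3. Materialise into a list (fine for small result sets only).
if not list(coll.find({"name": "nope"})):
    print("nothing found")
```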
null
[ "aggregation" ]
[ { "code": "\"bookingLines\" : [ \n {\n\t\t \"BookingLines\" : {\n \"BookingLineCode\" : \"CNF146551_5\",\n \"BookingLine_id\" : 86660727,\n \"MarketKey\" : \"asia\"\n }\n },\n\t\t{\n\t\t\t\"BookingLines\" : {\n \"BookingLineCode\" : \"CNF146551_8\",\n \"BookingLine_id\" : 86660728,\n \"MarketKey\" : \"paris\"\n }\n\t\t}\n\t]\n\"bookingLines\" : [ \n {\n \"BookingLineCode\" : \"CNF146551_5\",\n \"BookingLine_id\" : 86660727,\n \"MarketKey\" : \"asia\"\n\n },\n\t\t{\n\n \"BookingLineCode\" : \"CNF146551_8\",\n \"BookingLine_id\" : 86660728,\n \"MarketKey\" : \"paris\"\n\t\t}\n\t]\n", "text": "Hi,I have the below document stored in one of my collectionsAnd, I would like to transform this to something like belowCould anyone please let me know how I can achieve using the aggregate pipeline options ?Thanks,\nVinay", "username": "Vinay_Gangaraj" }, { "code": "test:PRIMARY> db.coll.find().pretty()\n{\n\t\"_id\" : ObjectId(\"5f8ee428867da8a9a949fb40\"),\n\t\"bookingLines\" : [\n\t\t{\n\t\t\t\"BookingLines\" : {\n\t\t\t\t\"BookingLineCode\" : \"CNF146551_5\",\n\t\t\t\t\"BookingLine_id\" : 86660727,\n\t\t\t\t\"MarketKey\" : \"asia\"\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"BookingLines\" : {\n\t\t\t\t\"BookingLineCode\" : \"CNF146551_8\",\n\t\t\t\t\"BookingLine_id\" : 86660728,\n\t\t\t\t\"MarketKey\" : \"paris\"\n\t\t\t}\n\t\t}\n\t]\n}\n[\n {\n '$unwind': {\n 'path': '$bookingLines'\n }\n }, {\n '$project': {\n 'a.BookingLineCode': '$bookingLines.BookingLines.BookingLineCode', \n 'a.BookingLine_id': '$bookingLines.BookingLines.BookingLine_id', \n 'a.MarketKey': '$bookingLines.BookingLines.MarketKey'\n }\n }, {\n '$group': {\n '_id': '$_id', \n 'bookingLines': {\n '$push': '$a'\n }\n }\n }\n]\n{\n\t\"_id\" : ObjectId(\"5f8ee428867da8a9a949fb40\"),\n\t\"bookingLines\" : [\n\t\t{\n\t\t\t\"BookingLineCode\" : \"CNF146551_5\",\n\t\t\t\"BookingLine_id\" : 86660727,\n\t\t\t\"MarketKey\" : \"asia\"\n\t\t},\n\t\t{\n\t\t\t\"BookingLineCode\" : \"CNF146551_8\",\n\t\t\t\"BookingLine_id\" : 86660728,\n\t\t\t\"MarketKey\" : \"paris\"\n\t\t}\n\t]\n}\n", "text": "Hi @Vinay_Gangaraj,Here is my starting point:Here is my aggregation pipeline:And here is the result I get:Enjoy,\nMaxime.", "username": "MaBeuLux88" }, { "code": "[{\n $project: {\n _id: 0,\n bookingLines: \"$bookingLines.BookingLines\",\n }\n}]\n", "text": "Hi @MaBeuLux88,Thanks for your response. But, I was able to solve it with below approach ", "username": "Vinay_Gangaraj" }, { "code": "", "text": "Oh damn I didn’t know this would work on an array out of the box!\nAwesome !", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Issue with transforming data
2020-10-20T08:50:18.207Z
Issue with transforming data
1,327
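Editor's note: the accepted one-liner works because a projection path applied to an array yields the array of sub-values. If you ever need to transform each element explicitly, an equivalent hedged sketch with $map (the collection name is a placeholder):

```js
db.coll.aggregate([
  {
    $project: {
      _id: 0,
      bookingLines: {
        $map: {
          input: "$bookingLines",
          as: "line",
          in: "$$line.BookingLines"
        }
      }
    }
  }
])
```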
null
[ "swift", "atlas-device-sync" ]
[ { "code": " tasks = realm.objects(Task.self).sorted(byKeyPath: \"_id\")\n notificationToken = tasks.observe { [weak self] (changes) in\n", "text": "I’m running the ‘tracker’ tutorial app with mongoDB and an iOS client. Per the tutorial code I’m listening for notifications on the “Task” collection:When I add or delete a “Task” entity in the linked mongoDB Atlas cluster, the notification fires as expected and my table view updates.However when I modify the property of a “Task” entity (for example, its name) in Atlas and save the changes, nothing happens i.e. the notification does not fire. Is this expected? (When I reload the table view the data refreshes however it’s not the “live” update I expect)", "username": "Jeff_Sorrentino" }, { "code": "", "text": "@Jeff_Sorrentino This is a known issue with Atlas Collection viewer - it translates an update into a delete and insert instead of an update. If you instead use the mongodb update command, the update notification will fire -How to update documents in MongoDB. How to update a single document in MongoDB. How to update multiple documents in MongoDB. How to update all documents in MongoDB. How to update fields in documents in MongoDB. How to replace documents.We are working on fixing this", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm collection "update" not firing iOS notification
2020-10-20T20:34:31.294Z
Realm collection &ldquo;update&rdquo; not firing iOS notification
1,833
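Editor's note: a hedged sketch of the workaround Ian describes — issuing a real update from the mongo shell instead of editing in the Atlas collection viewer (the _id and field values are placeholders):

```js
// Sent as a genuine update operation, this fires the Realm change
// listener with a modification instead of the delete+insert pair
// the Atlas collection viewer produced.
db.Task.updateOne(
  { _id: ObjectId("5f8f1f1f1f1f1f1f1f1f1f1f") }, // placeholder _id
  { $set: { name: "Renamed task" } }
)
```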
null
[ "data-modeling", "react-native" ]
[ { "code": "", "text": "I pretend to integrate Realm on my react-native app. By the moment, I have established two Schemes (bookingsScheme and ubicationScheme). The problem is that I want to read the information contained in the schemes in multiple screens and for doing that I open a Realm instance for each screen, but when I do that shows up an error saying that: ‘Realm at path X already opened on current thread with different schema.’.I understand the error, and I try to find the solution. I put realm.close() each time I create a Realm instance, but when I do that another error jumps saying that Realm has been closed and the data is invalidad.I suppose that I don’t understand how Realm exactly works, and I’m wondering what I should to do at time of create instance of Realm.My idea is having the app data structure handled by Realm. I have thought pass a unique Realm instance with Redux but I don’t think that it was a good solution.", "username": "David_Arnau" }, { "code": "", "text": "@David_Arnau You can open the same realm with both schemas and then use two different filters to just get bookings and ubication objects to bind to separate views - see here:\nhttps://docs.mongodb.com/realm/react-native/reads/#filter-results", "username": "Ian_Ward" } ]
Realm instances
2020-10-21T16:30:42.660Z
Realm instances
2,081
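Editor's note: a hedged sketch of Ian's suggestion — one Realm opened once with both schemas, with each screen binding to its own filtered results. The object type names and the filter are placeholders, since the thread only names the schema variables:

```js
// Inside some async initialisation code:
const realm = await Realm.open({ schema: [bookingsScheme, ubicationScheme] });

// Each screen binds to its own live Results instead of reopening the Realm:
const bookings = realm.objects('Booking'); // placeholder type name
const ubications = realm.objects('Ubication').filtered('city == $0', 'Valencia');

// Close the Realm only when the app shuts down, not per screen.
```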
null
[]
[ { "code": "", "text": "Hi, I have some problem with my Mongo volumes that I’ve set for my docker-compose.yml file about this project I’ve worked on about half a year ago but I didn’t set the version and today I’ve clean all my docker images in my machine and re-running docker-compose file but It’s thrown me the error message below\n“Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.”So there is a way that I can check the version of mongo from my volume files?", "username": "tanapone_kanongsri" }, { "code": "bsondump --version\nbsondump version: 100.2.0\n\nbsondump --quiet /data/db/diagnostic.data/metrics.2020-10-21T12-27-20Z-00000 | jq -s '.[] | select(.type == {\"$numberInt\": \"0\"})| .doc.buildInfo.version'\n\"3.2.21\"\n", "text": "Hello @tanapone_kanongsriYes it is, sort of. Note that this will only show what version last successfully started the database not what FCV it is at.On startup mongod will write out the results of some commands to the diagnostic.data folder, this can be parsed to see what version it was running at that time.This is an example of a data directory that does not start with mongod 4.4I’m not sure if there are any other tricks to find the info. When I tried a 3.6 database with mongod 4.2 it did say what version the database was at in the error.Good write up I found which helped with this:Full Time Diagnostic Data Capture (FTDC) was introduced in MongoDB 3.2 (via SERVER-19585), to incrementally collect the results of certain diagnostic commands to assist MongoDB support with troubleshooting issues.", "username": "chris" }, { "code": "bsondump", "text": "bsondumpYou saved my life thank you so much ", "username": "tanapone_kanongsri" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to check mongo version from volumes that I've set in the docker-compose.yml file?
2020-10-21T05:13:31.614Z
How to check mongo version from volumes that I’ve set in the docker-compose.yml file?
12,613
null
[ "app-services-data-access" ]
[ { "code": "", "text": "Hi,I’m trying to figure out how to grant permissions to a certain API Key authenticated user, using the Apply When object in collection rules.Thanks for anyone’s help !", "username": "Jonathan_Gendre" }, { "code": "", "text": "Actually I found out right after I asked… When I realized an API Key is assigned an ID.So I could define a rule like this:\n{\n“%%user.id”: “API-KEY-ID”\n}", "username": "Jonathan_Gendre" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Define rule with API Key
2020-10-20T18:33:57.166Z
Define rule with API Key
2,881
https://www.mongodb.com/…6fcd313da696.png
[]
[ { "code": "", "text": "Hi, Atlas community!I’m very excited to share with you that you can now deploy multi-cloud clusters on MongoDB Atlas that span AWS, Azure, and Google Cloud simultaneously. With this, you’ll have unparalleled flexibility when it comes to where your data is stored and what cloud services you can use with Atlas.Multi-cloud clusters allow you to:\nScreen Shot 2020-10-19 at 11.21.03 AM855×828 85.6 KB\nThis has been a mammoth undertaking for the team and we look forward to you trying it out. Do let us know where multi-cloud clusters take you.To get started, learn about what multi-cloud clusters unlock for you in this blog post.Cheers,\nChris\nProduct Manager @ Atlas", "username": "Christopher_Shum" }, { "code": "", "text": "Congratulations, this will be invaluable to many organisations.", "username": "chris" } ]
Introducing Multi-Cloud Clusters on Atlas (Oct 20, 2020)
2020-10-20T18:08:51.492Z
Introducing Multi-Cloud Clusters on Atlas (Oct 20, 2020)
1,640
null
[ "indexes", "performance" ]
[ { "code": " db.getCollection('COL1').find(\n { \n \"type\": \"A\",\n \"dt_tm\" : {\n $gte: ISODate(\"2020-08-17 00:00:00.000Z\"),\n $lt: ISODate(\"2020-09-17 00:00:00.000Z\")}\n },\n {_id:0, \"type\":1, \"dt_tm\":1 }\n)\ndb.getCollection('COL1').find(\n {\"type\": \"A\",\n \"dt_tm\" : {\n $gte: ISODate(\"2020-08-17 00:00:00.000Z\"),\n $lt: ISODate(\"2020-09-17 00:00:00.000Z\")}\n },\n {_id:0, \"type\":1, \"dt_tm\":1, \"teams\":1}\n)\n", "text": "I have run the below queryThere is an index on the type and dt_tm column, query uses that index and returned 2886148 records from the IXSCAN stage[Time taken in this index stage is 283ms] , from this stage it goes to PROJECTION_COVERED returning the same records[Time taken in this stage is 82ms].Then added one more column by name teams in the projection of the above query.Below is the queryNow executed the above query, Mongo uses the same index and returned 2886148 records from the IXSCAN stage[Time taken in this index stage is 771ms].The time taken is increased here from 283ms to 771ms. Why is this difference since it has used the same index and returning the same record count from the IXSCAN stage…?Here the FETCH stage is added additionally and it is returning the same records [Time taken is 681ms].Next is the PROJECTION_SAMPLE stage , giving the same records but time taken is 742ms.We can see that time taken is increased both in IXSCAN and the PROJECTION_SAMPLE stage, can any one help, why there is increase in the time…?", "username": "vinodkumar_Mallikarj" }, { "code": "teams", "text": "Hello @vinodkumar_Mallikarj,In the first query, it was a case of Covered Query. The query was served by the index alone. That means, the index doesn’t have to lookup the documents for any additional details. The query filter and the projection are covered by the index fields.Here the FETCH stage is added additionally and it is returning the same recordsThis is because the query had to fetch additional data, the field teams, from the document. The trip to access the document is your additional time and the FETCH stage in the query plan.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @vinodkumar_Mallikarj,What @Prasad_Saya said is I believe the correct answer.As an aside, if this is an actual query you use regularly, personally I would reevaluate the need to return 2886148 documents in a single query. It seems to be an excessive number, unless you’re doing a data export. A query returning an excessive amount of documents could interfere with your working set, leading to suboptimal performance in some cases.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Then how to handle of large amount of data with aggregation…?", "username": "vinodkumar_Mallikarj" }, { "code": "", "text": "Hi @vinodkumar_MallikarjThen how to handle of large amount of data with aggregation…?I’m not sure I understand what you mean. Are you asking on how to aggregate your data so that you don’t need to fetch millions of documents from the server?If that is the question, it’s probably best to open a new thread with more details (what you need, example documents, example output, what you have attempted, etc.). I think it’s best to keep one question per thread so it’s not confusing to read, since this thread is all about query indexing.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "If I may add, MongoDB university at https://university.mongodb.com/ offers very great courses. 
One in particular, M121, just for the aggregation framework.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query is performing poorly even after using the index
2020-10-19T14:10:07.719Z
Query is performing poorly even after using the index
3,509
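Editor's note: if the second query shape (with teams) is hot, one option — an assumption about the workload, not something the thread confirms — is to extend the index so that the query becomes covered again:

```js
// The projection must still exclude _id for the query to be covered.
db.getCollection('COL1').createIndex({ type: 1, dt_tm: 1, teams: 1 })
// Caveat: if "teams" is an array, the index becomes multikey, and a
// multikey index cannot cover a projection, so this only helps when
// "teams" holds scalar values.
```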
null
[ "replication" ]
[ { "code": "", "text": "I would like to know the difference between master/slave and replica set.I think it’s a similar technology, but why did MongoDB adopt a replica set?What are the advantages and disadvantages of each?", "username": "Kim_Hakseon" }, { "code": "", "text": "Hi @Kim_Hakseon,I am not sure what you are referring to? MongoDB’s Master/Slave approach is using a replica set with its replication and election mechanisms where there is only one Primary and a few secondaries at any given time.Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "From version 3.2, MongoDB was supposed to use PSS replication instead of Master/Slave, which meant I wanted to know the difference between the two replication.And I want to know the advantages and disadvantages of each replication.", "username": "Kim_Hakseon" }, { "code": "", "text": "Hi @Kim_Hakseon,Master slave replication is the previous generation of current replica set replication.There is only one replication method. I suppose that you might mean the PV:0 and PV:1 change:\nhttps://docs.mongodb.com/manual/reference/replica-set-protocol-versions/Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Ah-ha! Thank you Thank you ", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Difference between master/slave and replication
2020-10-21T00:58:30.943Z
Difference between master/slave and replication
8,415
null
[ "licensing" ]
[ { "code": "", "text": "We are building node.js application with mongo db community edition. Can we able to use mongodb community edition for free of cost in Production environment. Will there be any licensing issues.", "username": "Durga_Prasad_Gembali" }, { "code": "", "text": "Yes it is free to use.Consider MongoDB Atlas if you don’t want to manage mongodb yourself.", "username": "chris" }, { "code": "", "text": "Welcome to the community @Durga_Prasad_Gembali!The MongoDB Community Server is licensed under the Server Side Public License (SSPL) . As @chris mentioned, the Community Server is freely available and there are also paid options including MongoDB Enterprise (for self-hosted) and MongoDB Atlas (cloud managed).There is a usage caveat on the SSPL which applies to publicly offering MongoDB as a service. For more information, please see the Server Side Public License FAQ .There’s also a similar discussion from earlier in the year which you may find helpful: Licensing when it comes to develop commercial applications - #3 by Stennie_X.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can we use mongo community edition for free of cost in production environment
2020-10-20T18:32:26.060Z
Can we use mongo community edition for free of cost in production environment
27,305
null
[ "sharding" ]
[ { "code": "", "text": "Our MongoDB production database got high disk used percent, mongod data directory used percent over 90%. for reasons, we can’t allocate more space to existing data directory, so we solve the problem by add new shard to replicate set and let replicate set rebalance.\nbut the problem is , after add new shard and reblance, the existing disk space can’t be recycled to operating system, so the percent still over 90% so we still got warning message.\nI tried to compact the only one collection , and db.push_task.stats show there’re a lot of bytes cacn be reused:\n“file bytes available for reuse” : 997790101504,\nI compact the collection by : db.runCommand( { compact : ‘push_task’ } )but seems no help, after compact complete, the disk space still not recycled, is there any other methods to compact and recycle disk space of mongod?", "username": "hunter_huang" }, { "code": "", "text": "Hi @hunter_huang,A good way to reclaim space is by rolling resyncing the source shard replica set:\nhttps://docs.mongodb.com/manual/tutorial/resync-replica-set-member/Having said that, since the bytes are ready for reuse the mongod could use them for future.However, if other processes and logs require space you better reclaim it.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny Can I have solution for a similar situation too, where data balanced (100gb migrated) but there are only few bytes available to reUse (may be just 100mb to reUse), what to do in this situation?", "username": "Dheeraj_G" } ]
How to compact disk space after add new shard and rebalance?
2020-10-12T04:11:02.389Z
How to compact disk space after add new shard and rebalance?
3,174
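Editor's note: a hedged sketch of checking what compact could actually reclaim before committing to the heavier rolling-resync route (collection name as in the thread):

```js
// How much space WiredTiger could hand back for this collection:
const stats = db.push_task.stats();
print(stats.wiredTiger["block-manager"]["file bytes available for reuse"]);

// compact runs per mongod; on older servers it blocks operations on the
// database, so in a replica set run it member by member (secondaries first).
db.runCommand({ compact: "push_task" });
```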
null
[ "containers", "configuration" ]
[ { "code": "", "text": "Hello,\nI’m going to share my MongoDB instance with other users and I wanted to know if there are any built-in database size limiting mechanisms other than the capped colletions?I’m afraid of people who will try to overload my resources. And I would prefer to use ready-made solutions from MongoDB to prevent them from doing so, or possibly to block such a user but I found nothing.If a similar topic exists, I apologize for spam, I searched over a dozen topics, but I did not find the answers that interest me.Greetings,\nMarek", "username": "Marek_Winiarski" }, { "code": "mongodmongod", "text": "Hi @Marek_Winiarski and welcome in the MongoDB Community !To my knowledge, there is no such thing. Capped collection are the right solution to limit the size of a collection. You could also use TTL indexes but that’s not really limiting the actual size of the collection.You could run separate mongod instances into docker containers and limit the disk size of each instance… But that’s really all I can think of right now.You could also obtain a similar result by partitioning the disk in X partitions and run X mongod but then they would all share the same RAM and you would have an issue there too potentially. At least with docker you could assign a limited amont of ressources to each container and thus user.I’m not really solving your issue here but at least it’s something you could explore.\nI hope this helps .Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
DB size limiting mechanism
2020-10-20T18:33:40.959Z
DB size limiting mechanism
1,571
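Editor's note: a hedged sketch of the container route Maxime suggests; the image tag and limits are placeholders, and the disk-cap options come with caveats worth verifying on your own host:

```bash
# RAM/CPU caps are straightforward; a hard disk cap is trickier: either
# use --storage-opt size=... (needs a quota-capable storage driver, e.g.
# overlay2 on xfs with pquota, and applies to the container layer, not
# mounted volumes), or mount a fixed-size partition at /data/db as below.
docker run -d --name mongo-user1 \
  --memory 2g --cpus 1.5 \
  -v /mnt/user1-10g-partition:/data/db \
  mongo:4.4 --wiredTigerCacheSizeGB 0.5
```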
null
[ "installation" ]
[ { "code": "", "text": "trying to install mongoDB on my ubuntu machine, followed the tutorial and when i start the service with systemctl and check it with systemctl status it output the error with status 127 and as description: error while loading shared libraries: libssl.so1.0.0 cannot open shared object file: no such file or directory. tryied to install libssl but with no result. what am i missing here?", "username": "Lorenzo_Lucca" }, { "code": "", "text": "Hi @Lorenzo_LuccaWhat tutorial did you follow? What version are you installing?According to the installation notes only mongodb 4.4 is supported on Ubuntu 20.4 . Given the dependency error is in libssl1.0.0 this must be mongo 3.6, 4.0", "username": "chris" } ]
Error on systemctl start ubuntu 20.4.1
2020-10-20T18:34:29.249Z
Error on systemctl start ubuntu 20.4.1
1,312
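Editor's note: for reference, a sketch of the documented MongoDB 4.4 install steps on Ubuntu 20.04 (focal) — worth double-checking against the current install guide before running:

```bash
wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | \
  sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list
sudo apt-get update
sudo apt-get install -y mongodb-org
sudo systemctl start mongod
sudo systemctl status mongod   # should no longer report the libssl error
```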
null
[ "python" ]
[ { "code": "", "text": "Hello,\nMy name is Kacper, I’m hobby programmer. I’ve been trying to use Beeware and PyMongo , and i couldn’t really get in hang of it, it works very well in developer mode but when i try to build it and run it says only “Could’t start application loginsystem”. Can you help me? I would really appreciate some help from much more expierienced programmer than myself. Sorry for my not perfect English.On Beeware Forums we identified that problem is with something environment-specific - environment variables, assumptions about working directories, the handling of dynamic libraries - something like that.\nCan someone please tell me what are PyMongo requirements that BeeWare cant really package it so well?With Best Regards.\nDeska", "username": "Dinghy_Boat" }, { "code": "pymongopymongopymongopyproject.tomlbriefcase update --update-dependenciesbuildrun", "text": "Hi @Dinghy_Boat (Kacper) and welcome to the forums!when i try to build it and run it says only “Could’t start application loginsystem”.You need to find and debug where this error message is coming from in your code. This error message is not coming from PyMongo.On Beeware Forums we identified that problem is with something environment-specific - environment variables, assumptions about working directories, the handling of dynamic libraries - something like that.Without seeing any code snippets or a sample application it’s really difficult to know what the problem is. However, my wild guess would be your code is calling pymongo methods for login purposes, however the module pymongo is missing from the Beeware runtime package. If so, make sure you add pymongo in your pyproject.toml file. For example, see sindbach/beeware-mongodb-example/main/pyproject.tomlAlso make sure you run briefcase update --update-dependencies and build, before executing run again.If the above doesn’t solve your issue, please provide:Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Thanks for your help @wan", "username": "Dinghy_Boat" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
PyMongo with BeeWare
2020-10-09T10:00:52.827Z
PyMongo with BeeWare
2,070
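Editor's note: a hedged sketch of the relevant briefcase section wan points to (the app name and paths are placeholders for the loginsystem app; dnspython is an assumption, needed only if you connect with a mongodb+srv:// URI):

```toml
[tool.briefcase.app.loginsystem]
formal_name = "loginsystem"
sources = ["src/loginsystem"]
requires = [
    "pymongo",
    "dnspython",  # assumption: only for mongodb+srv:// connection strings
]
```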
null
[ "java" ]
[ { "code": "_idfindIterable = collection.find(eq(\"status\", \"A\")).projection(include(\"item\", \"status\"));\n\n @Override\n public User get(Object userId) {\n \n MongoCollection<Document> userTbl = database.getCollection(\"User\");\n\n FindIterable<Document> findIterable = userTbl.find().projection(include(\"email\")); // error\n\n userId = findIterable;\n \n return (User) userId;\n }\n", "text": "I try to get the _id field from the existing collection in MongoDB to my return method so I can use that oid to edit user document, but the include keyword prompts me to create a new method. Am using 3.12 and follow the projections for guidance. Any advice will be highly appreciated!", "username": "Pat_Yue" }, { "code": "ObjectIdStringimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.model.Filters;\nimport com.mongodb.client.model.Projections;\nimport com.mongodb.client.model.Updates;\nimport org.bson.Document;\nimport org.bson.types.ObjectId;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\n\nimport static com.mongodb.client.model.Filters.eq;\nimport static com.mongodb.client.model.Projections.include;\n\npublic class Community {\n\n public static void main(String[] args) {\n try (MongoClient mongoClient = MongoClients.create(\"mongodb://localhost\")) {\n MongoCollection<Document> coll = mongoClient.getDatabase(\"test\").getCollection(\"coll\");\n coll.drop();\n coll.insertMany(\n Arrays.asList(new Document(\"name\", \"Max\"), new Document(\"name\", \"Alex\"), new Document(\"name\", \"Claire\")));\n List<Document> docs = coll.find().projection(include(\"_id\")).into(new ArrayList<>());\n System.out.println(\"Printing the ObjectIds from the 3 docs:\");\n docs.forEach(doc -> System.out.println(doc.get(\"_id\")));\n\n System.out.println(\"\\nUpdating the 3 documents using their respective IDs:\");\n docs.forEach(doc -> {\n String stringId = doc.get(\"_id\").toString();\n ObjectId objectId = new ObjectId(stringId);\n coll.updateOne(eq(\"_id\", objectId), Updates.set(\"hobby\", \"gaming\"));\n });\n\n docs = coll.find().into(new ArrayList<>());\n docs.forEach(doc -> System.out.println(doc.toJson()));\n }\n }\n}\nPrinting the ObjectIds from the 3 docs:\n5f8757f9e42cd148f41b29c8\n5f8757f9e42cd148f41b29c9\n5f8757f9e42cd148f41b29ca\n\nUpdating the 3 documents using their respective IDs:\n{\"_id\": {\"$oid\": \"5f8757f9e42cd148f41b29c8\"}, \"name\": \"Max\", \"hobby\": \"gaming\"}\n{\"_id\": {\"$oid\": \"5f8757f9e42cd148f41b29c9\"}, \"name\": \"Alex\", \"hobby\": \"gaming\"}\n{\"_id\": {\"$oid\": \"5f8757f9e42cd148f41b29ca\"}, \"name\": \"Claire\", \"hobby\": \"gaming\"}\n", "text": "Hi @Pat_Yue and welcome in the MongoDB Community !I’m not sure I completely understood your question but here is a little piece of Java that explains how to retrieve an ObjectId as a String and then reusing it to update the documents in the collection.I think the code is pretty self explanatory but please feel free to ask me questions is something isn’t clear.This is the output I get:I hope this helps.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "public User get(Object userId) {\n\n\tFindIterable<User> userTbl = database.getCollection(\"User\", User.class).find();\t\t\n\nfor (User doc : userTbl) {\n\tString id = doc.getId().toString();\n\tSystem.out.println(\"MongoDB _id = \" + id);\n\n\t\tif (id.equals(userId)) {\n\t\t\treturn doc;\n\t\t}\n\t}\n\treturn null;\n}\n", "text": 
"Thanks for your reply. I worked it out this way while learning Java and MongoDB together.I wonder how does the lambda function to work in above instead output into the console?Thanks so much in advance.", "username": "Pat_Yue" }, { "code": "find()getprivate static Document getUser(String id) {\n return coll.find(eq(\"_id\", new ObjectId(id))).first();\n}\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.result.InsertManyResult;\nimport org.bson.Document;\nimport org.bson.types.ObjectId;\n\nimport java.util.Arrays;\n\nimport static com.mongodb.client.model.Filters.eq;\n\npublic class Community {\n\n private static MongoCollection<Document> coll;\n\n public static void main(String[] args) {\n try (MongoClient mongoClient = MongoClients.create(\"mongodb://localhost\")) {\n coll = mongoClient.getDatabase(\"test\").getCollection(\"coll\");\n coll.drop();\n InsertManyResult insertManyResult = coll.insertMany(\n Arrays.asList(new Document(\"name\", \"Max\"), new Document(\"name\", \"Alex\"), new Document(\"name\", \"Claire\")));\n insertManyResult.getInsertedIds()\n .forEach((ignoredInt, id) -> System.out.println(getUser(id.asObjectId().getValue().toHexString())));\n }\n }\n\n private static Document getUser(String id) {\n return coll.find(eq(\"_id\", new ObjectId(id))).first();\n }\n}\n_idname", "text": "Your function isn’t optimised at all.First it’s returning ALL the documents in the “User” collection to your Java (so potentially millions of documents) and then if ONE of the documents has the “_id” you are looking for, then you return this document.The find() function takes a filter in parameter which you can leverage here to simplify your code and only fetch that ONE document you are looking for.Here is how you should write this get function:Here it is in action:Also note that this function can benefit from the default _id index that exists in all the MongoDB collections by default, so it’s not doing a collection scan to find this one document in your collection.If you did a search on the name, you would have to create an index on that field to avoid a collection scan.More details here: https://docs.mongodb.com/manual/tutorial/analyze-query-plan/", "username": "MaBeuLux88" }, { "code": "InsertManyResult ", "text": "I can’t see any below option in eclipse, is that need to be imported manually? Thanks.InsertManyResult ", "username": "Pat_Yue" }, { "code": "import com.mongodb.client.result.InsertManyResult;", "text": "import com.mongodb.client.result.InsertManyResult;InsertManyResult comes from this package.\nI use IntelliJ and it resolves my imports automatically when there are no doubts between 2 classes.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Get Object Id as primary key to a return method in Java
2020-10-11T09:04:41.813Z
Get Object Id as primary key to a return method in Java
28,503
null
[ "queries" ]
[ { "code": " \"specifications\" : {\n \"children\" : [\n {\n \"name\" : \"brand\", \n \"val\" : \"brandName\"\n }, \n {\n \"name\" : \"series\", \n \"val\" : \"seriesValue\"\n }, \n {\n \"name\" : \"object\", \n \"val\" : \"objectVal1, objectVal2, objectVal3, objectVal4, objectVal5, objectVal6\"\n }\n ]\n }\n", "text": "Hello, how i can find several value in array? Example:need find brandName + seriesValue + regexp objectVal3I try simple test{ $elemMatch: { $in: [ { name: “object”, val: { $regex: /objectVal3/gi } } ] } }but not work", "username": "gotostereo_N_A" }, { "code": "$in$elemMatch{ \"specifications.children\": { $elemMatch: { name: \"object\", val: { $regex: /objectVal3/i } } } }", "text": "Hello @gotostereo_N_A,You don’'t need the $in to specify multiple fields with $elemMatch. The following query filter will work:{ \"specifications.children\": { $elemMatch: { name: \"object\", val: { $regex: /objectVal3/i } } } }", "username": "Prasad_Saya" } ]
Search several value in array
2020-10-20T10:32:53.613Z
Search several value in array
1,401
https://www.mongodb.com/…c428e14725f1.png
[ "node-js", "sharding", "performance" ]
[ { "code": "db.getCollection('myTest').find({userId: 'SrPxPwXXSDqO7ede3y', _id: ObjectId(\"5f61fc091e244c157f43401\"), deletedAt: null})", "text": "We have sharded cluster in two regions with one shard currently. There is no sharding enabled on the collections and balancing is off mode. Two servers in Europe and one in Asia. All servers are containing same setup with router, config server and data node.\nData nodes are one replica set with the primary in Europe\nConfig servers are one replica set with the primary in Europe.\nMongoDB 4.4.0 is used. In data nodes we have tags named - region: “europe” and region: “asia” - depending region where the nodes are.example780×421 39.3 KBClient is a NodeJS with the 3.6 driver located in asia region and connects to this region router. Client uses readPreference=nearest&readPreferenceTags=role:asia\nClient makes one specific query that does findOne query. In the router logs we can see that read preference is set. However, read request takes 200ms which means it tries to fetch data from europe region.\nWhen we set client connection directly to the data nodes describing all replica set members and add same readPreference and readPreferenceTag then read request takes 3ms and data is retreived from asia as it supposed to do.\nQuery that client is making:\ndb.getCollection('myTest').find({userId: 'SrPxPwXXSDqO7ede3y', _id: ObjectId(\"5f61fc091e244c157f43401\"), deletedAt: null})\nWhat can be done do debug it further or is there any reasonable explanation for this issue?", "username": "prodigyf" }, { "code": "", "text": "This problem I described is in our production. We tested with small NodeJS script this behavior also. When using only one readPreferenceTag then response times are always good (~3ms). But when adding multiple tags and failover in the client configuration then some requests are routed to other mongo data nodes. Time-to-time getting (~200ms)\nFurthermore, one interesting observation. When using maxStalenessSeconds in the client (for example: &maxStalenessSeconds=120) then Mongo router crashes and do not come up after this parameter is removed.\nFor example:\nexample2761×422 38.9 KBUpdate:\nWe are using connection level read preference. Seems those parameters are not picked up. From Mongo router logs we can see those tags since connection level is logged in there. From network dump we can see that client do not send those tags nor readPreference to the router in the query level as we can see only readPreference: secondaryPreferred in there.\nWe needed to specify query level readpreferences.Based on behaviour observation it remains unclear how are members considered worthy. According to doc there’s latency consideration present, but it’s not clear if it’s latency between driver and mongo router or latency between primary and secondary. Issue is that when querying from region which has one local secondary present, but primary with other secondary is in remote region, query seems to end up randomly in both regions.", "username": "prodigyf" } ]
Sharded cluster performance issues with Nodejs driver
2020-10-17T10:35:23.712Z
Sharded cluster performance issues with Nodejs driver
2,800
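Editor's note: a hedged sketch of the query-level read preference that the update says finally worked, using the Node.js 3.6 driver (host names and tag values are placeholders):

```js
const { MongoClient, ReadPreference } = require('mongodb');

const uri =
  'mongodb://router-asia:27017/?readPreference=nearest&readPreferenceTags=region:asia';
const client = new MongoClient(uri, { useUnifiedTopology: true });

async function run() {
  await client.connect();
  const coll = client.db('test').collection('myTest');
  // Pin the read preference on the operation itself, since the
  // connection-level tags were not being applied per the update above.
  const doc = await coll.findOne(
    { userId: 'SrPxPwXXSDqO7ede3y', deletedAt: null },
    { readPreference: new ReadPreference('nearest', [{ region: 'asia' }]) }
  );
  console.log(doc);
  await client.close();
}
run().catch(console.error);
```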
null
[ "swift" ]
[ { "code": "updateOnereturn req.application.mongoClient.withSession { session in\n return session.startTransaction().flatMap { _ -> EventLoopFuture<InsertManyResult?> in\n req.playerRatingsCollection.insertMany(playerRatings, session: session)\n }\n .flatMap { _ -> EventLoopFuture<InsertManyResult?> in\n req.memberPlayerRatingsCollection.insertMany(Array(insertNewMemberPlayerRatings), session: session)\n }\n .flatMap { _ -> EventLoopFuture<UpdateResult?> in\n // several documents where I need to $inc a totalRaitngs fields by 1 and update the average ratings for each document\n req.memberPlayerRatingsCollection.updateMany(filter: <#T##BSONDocument#>, update: <#T##BSONDocument#>)\n }\n .map { _ in\n Response(status: .ok)\n }\n}\n.hop(to: req.eventLoop)\nchange streamsviews", "text": "Hello All!I have several documents of the same type that I would like to update at the same time, as well a couple of other operations that I am looking to do within a transaction using the Swift driver.This is the code I currently have below, but I am stuck on what would be the best way to update several of the same documents? I need to $inc a field by 1 for all of them and then the tricky part is updating another field with their own specific values.Should I loop through them one at time and use the updateOne operator? Is there a better way to do this?Please let me know if anything doesn’t make sense or needs further clarity. I’ve looked into several different ways of solving the problem I am stuck on using change streams and views but I’ve found them both quite complex and would ideally like to use the above approach.Thanks", "username": "Piers_Ebdon" }, { "code": "[ Stage 1 - update the total ratings by inc 1,\nStage 2 - calculate the avg based on the totalRatings from previous stage and update\n]\n", "text": "Hi @Piers_Ebdon,So if I understand correctly you want to avoid the need of fetching the updated documents before the update, otherwise I think doing a single document math calculations might be easier on the client side saving the calculated values directly.If you still want to do it server side I would recommend looking into pipeline updates available from MongoDB 4.2 .The following page provides examples of updates with aggregation pipelines.You can filter the needed documents and run few stages on each. Since I don’t know the specific of your logic and documents i would say it should look like:Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "struct MemberPlayerRating: Content {\n var _id: BSONObjectID?\n let averageRating: Double\n let totalRatings: Int\n let playerID: BSONObjectID\n let userID: BSONObjectID\n}\nMemberPlayerRatingaverageRatingMemberPlayerRating{ playerID: 1, newRating: 7, userID: 1 },\n{ playerID: 2, newRating: 6.1, userID: 1 },\n{ playerID: 3, newRating: 2, userID: 1 },\n{ playerID: 4, newRating: 9, userID: 1 },\n{ playerID: 5, newRating 4.7, userID: 1 }\naverageRatingaverageRatingupdateMany", "text": "Hi @Pavel_DuchovnyThanks for the response.The document model I am trying to update is:So what I am unsure about is the best way to update say 5 different MemberPlayerRating documents with different new ratings in order to calculate their new average rating. 
I need them either to all fail or all succeed, ideally.So if I had the following data, which I would like to use to calculate the new averageRating for each MemberPlayerRating:Even if I calculated the averageRating client side, I would still want to update all the averageRating fields at the same time for the different documents. I don’t think I can achieve this with updateMany or an aggregation pipeline. Is that correct? If so, is there a recommended way to do this?Thanks", "username": "Piers_Ebdon" }, { "code": "", "text": "Hi @Piers_Ebdon,If you want multiple document updates to succeed or fail as one, you should consider using transactionsFor situations that require atomicity of reads and writes to multiple documents (in a single or multiple collections), MongoDB supports multi-document transactions.Another option is to keep the updated data in one document, to update it as one.An update with multi true will update all documents in one command, but as it is not atomic as a whole, it cannot guarantee that all updates will succeed or fail together.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_DuchovnyI think keeping the updated data in one document is the way to go.Thanks!!", "username": "Piers_Ebdon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update several documents of the same type with different values
2020-10-18T19:08:01.903Z
Update several documents of the same type with different values
3,990
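Editor's note: a hedged sketch of the single-document pipeline update (MongoDB 4.2+) that Pavel outlines, folding one new rating into the stored average; field names follow the MemberPlayerRating model, while the filter values and newRating are placeholders:

```js
const newRating = 7; // placeholder incoming rating
db.memberPlayerRatings.updateOne(
  { playerID: somePlayerId, userID: someUserId }, // placeholder filter
  [
    // Stage 1: bump the count.
    { $set: { totalRatings: { $add: ["$totalRatings", 1] } } },
    // Stage 2: recompute the running average; totalRatings here is the
    // value already incremented by stage 1.
    { $set: {
        averageRating: {
          $divide: [
            { $add: [
                { $multiply: ["$averageRating", { $subtract: ["$totalRatings", 1] }] },
                newRating
            ] },
            "$totalRatings"
          ]
        }
    } }
  ]
)
```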
null
[ "connecting", "php" ]
[ { "code": "serverSelectionTryOnceReplicaSet:PRIMARY> rs.printSecondaryReplicationInfo()\nsource: mongodb-2:27017\nsyncedTo: Fri Oct 16 2020 05:33:11 GMT+0000 (UTC)\n0 secs (0 hrs) behind the primary \nsource: mongodb-3:27017\nsyncedTo: Fri Oct 16 2020 05:33:11 GMT+0000 (UTC)\n0 secs (0 hrs) behind the primary \nreturn [\n 'default' => 'mongodb',\n 'connections' => [\n 'mongodb' => [\n 'driver' => 'mongodb',\n 'host' => ['Comma separated Ips'],\n 'port' => env('DB_PORT'),\n 'database' => env('DB_DATABASE'),\n 'username' => env('DB_USERNAME'),\n 'password' => env('DB_PASSWORD'),\n 'options' => []\n \n ],\n ],\n 'migrations' => 'migrations',\n ];\n2020-10-16T05:12:26.877+0000 I ACCESS [conn134] Successfully authenticated as principal mongo_styli_coupon on styli_coupons from client 10.60.5.198:42038\n2020-10-16T05:12:57.378+0000 I NETWORK [listener] connection accepted from 127.0.0.1:58282 #135 (12 connections now open)\n2020-10-16T05:12:57.378+0000 I NETWORK [conn135] received client metadata from 127.0.0.1:58282 conn135: { application: { name: \"MongoDB Shell\" }, driver: { name: \"MongoDB Internal Client\", version: \"4.2.10\" }, os: { type: \"Linux\", name: \"CentOS Linux release 7.8.2003 (Core)\", architecture: \"x86_64\", version: \"Kernel 3.10.0-1127.19.1.el7.x86_64\" } }\n", "text": "Hi Team,We are getting below errors sometimes when we try to connect from PHP client. This goes away when we try to login as single node rolling back from distributed login method to single node login. And by trying to login as a different user.This is really concerning as single node login always works for us but login as Replica Set with read preference as slave fails in between. It is very scary to move with this development forward.Error at application health check :{​​​​​​​​\"status\":“Down”,“message”:“Something went wrong!!!”,“error”:“No suitable servers found (serverSelectionTryOnce set): [Failed to resolve ‘mongodb-1’] [Failed to resolve ‘mongodb-2’] [Failed to resolve ‘mongodb-3’]”}​​​​​​​​Setup :PHP client 7.3\nMongoDB 4.2.10 (Replica Set cluster 3 nodes(no arbiter))\n(mongodb-1,mongodb-2,mongodb-3)Our ground check :Firewall not enabled\nSelinux disabledMongoDB replication stat (replicating working normally) :PHP config :MongoDB Logs :", "username": "Pritiranjan_Khilar" }, { "code": "ReplicaSet:PRIMARY> db.getUser(\"mongo_styli_coupon\")\n{\n\t\"_id\" : \"styli_coupons.mongo_styli_coupon\",\n\t\"userId\" : UUID(\"f9cc744e-1516-4106-9fe0-c2a2426b9f9b\"),\n\t\"user\" : \"mongo_styli_coupon\",\n\t\"db\" : \"styli_coupons\",\n\t\"roles\" : [\n\t\t{\n\t\t\t\"role\" : \"read\",\n\t\t\t\"db\" : \"admin\"\n\t\t},\n\t\t{\n\t\t\t\"role\" : \"readWrite\",\n\t\t\t\"db\" : \"styli_coupons\"\n\t\t}\n\t],\n\t\"mechanisms\" : [\n\t\t\"SCRAM-SHA-1\",\n\t\t\"SCRAM-SHA-256\"\n\t]\n}\n", "text": "User details :", "username": "Pritiranjan_Khilar" }, { "code": "mongodb-2mongodb-3", "text": "Hi,the error you’re receiving is what we call a server selection error. When you run any operation, the driver selects a suitable server to run the command on, based on the type of command and the read preference you’ve stated. 
In your case, the error detail (“Failed to resolve ‘mongodb-1’”) indicates that this error originated in the base implementation of libmongoc, see here: mongo-c-driver/mongoc-client.c at 282f110565239a0d5d91f1e5e2f91ba4bee57dc7 · mongodb/mongo-c-driver · GitHub.Looking at the replication stats that you posted, I can see that the host names for the two secondaries are mongodb-2 and mongodb-3, which is consistent with the error message. It looks like these host names don’t resolve to any hosts on your application server, leading to the connection error seen above as the server is not able to connect to any of the hosts in the replica set.Please either reconfigure your replica set to use IP addresses, or ensure that the host names used in the replica set config resolve on all machines that are supposed to connect to any server from the replica set.", "username": "Andreas_Braun" }, { "code": "", "text": "Thanks Andreas,I will try these changes and confirm.Regards,\nPritiranjan", "username": "Pritiranjan_Khilar" }, { "code": "", "text": "Hi Andreas,We have made those changes and it has fixed the issue. Thanks a lot.Regards,\nPritiranjan", "username": "Pritiranjan_Khilar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB authentication failed for Replica Set
2020-10-16T05:46:53.882Z
MongoDB authentication failed for Replica Set
8,484
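Andreas's fix above, making the replica set host names resolvable from the client, can be applied either in DNS/hosts files or by rewriting the replica set configuration itself. Below is a hedged mongosh sketch of the second option; the IP addresses are placeholders, not values from this thread, and rs.reconfig() must be run against the primary:

// Inspect the host names the replica set advertises to drivers.
cfg = rs.conf()
cfg.members.forEach(m => print(m._id, m.host))  // e.g. "mongodb-1:27017"

// Rewrite each member to an address that resolves from the app servers
// (placeholder IPs shown; a resolvable FQDN works equally well).
cfg.members[0].host = "10.60.5.11:27017"
cfg.members[1].host = "10.60.5.12:27017"
cfg.members[2].host = "10.60.5.13:27017"
rs.reconfig(cfg)

The alternative, adding mongodb-1/2/3 entries to /etc/hosts (or DNS) on every application server, avoids touching the replica set configuration at all.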
null
[ "aggregation", "data-modeling", "atlas-device-sync", "atlas", "weekly-update", "atlas-data-lake" ]
[ { "code": "", "text": "Welcome to MongoDB $weeklyUpdate, a weekly digest of MongoDB tutorials, articles, and community spotlights!Each week, we’ll be sharing the latest and greatest MongoDB content and featuring our favorite community content too, curated by Adrienne Tacke at MongoDB!Enjoy!We love streaming on Twitch. We also love when the community joins and engages with us in the chat! Take a look at our Twitch lineup and mark your calendars:Tue. Oct 20: Sam Julien from Auth0 visits the MongoDB stream!\n 3pm PT / 6pm ET / 10pm UTCWed. Oct 21: Community Spotlight: James Turner and MongoFramework\n 3pm PT / 6pm ET / 10pm UTCTue. Oct 27: Adrienne Tacke joins Sam Julien on the Auth0 stream!\n 3pm PT / 6pm ET / 10pm UTCWed. Oct 28: Game Dev with Adrienne Tacke and Nic Raboy!\n 10:30pm PT / 1:30pm ET / 5:30pm UTCFri. Oct 30: Recruitment AMA with Rebecca Mosner\n 10am PT / 1pm ET / 5pm UTCFollow us on Twitch so you don’t miss a stream (especially since impromptu, not-on-the-calendar streams are the best)!Hacktoberfest is a month-long celebration of open source software first created by DigitalOcean in 2013. Check out last week’s $weeklyUpdate to see all the details and find out how you can participate!Check it out in full-screen glory: MongoDB Hacktoberfest DashboardTHANK YOU to all of our contributors:FIX: Dark mode - Search page & FEATURE: Dark Mode for Risk Page\nhttps://github.com\nThank you to ashwinpilgaonkar for contributing TWO PRs to our O-Fish Android app! FIX: Use sharedpreferences to store darkmode state & & FEATURE: Home screen dark mode\nhttps://github.com\nThank you to thearavind for contributing another TWO PRs to our O-Fish Android app! FEATURE: Add brand color files & FEATURE: Dark mode draft boardings page & FIX: icon size\nhttps://github.com\nThank you to ippschi for contributing THREE more PRs to our O-Fish Android app! FEATURE: Add dark mode to boarding record page\nhttps://github.com\nThank you to cfsnsalazar for contributing to our O-Fish Android app!FEATURE: Add ability to search violations\nhttps://github.com\nThank you to evayde for contributing a much needed feature to our O-Fish web app! FIX: Added semicolon after risk to all languages\nhttps://github.com\nThank you to crowtech7 for contributing another PR to our O-Fish web app!FIX: Add dependencies to useEffect\nhttps://github.com\nThank you to fandok for contributing to our O-Fish web app!FIX: Filter & chart header UI enhancements\nhttps://github.com\nThank you to deveshchatuphale7 for contributing to our O-Fish web app!FIX: Modified currentFilter and changeFilter to fix issue #233\nhttps://github.com\nThank you to SEGH for contributing to our O-Fish web app!FIX: Data sharing\nhttps://github.com\nThank you to jsdmaria for contributing another PR to our O-Fish web app!FEATURE: 347 Dark Mode initial implementation\nhttps://github.com\nbladebunny contributes a PR to the O-Fish iOS app! Thank you!FIX: Add done button on keyboard to fix issue #220\nhttps://github.com\nThank you to czuria1 for contributing to the O-Fish iOS app! Thank you!FIX: Added date to match to fix issue #170\nhttps://github.com\nThank you to SEGH for contributing to our O-Fish Realm app as well! We are INCREDIBLY happy and thankful for your Hacktoberfest contributions! Looking forward to the rest of October! Want to find the latest MongoDB tutorials and articles created for developers, by developers? 
Look no further than our DevHub!How to Archive Old Data to Cloud Object Storage with MongoDB Atlas Data Lake & Online Archive\nNeed to tier your data of natively query your data across cloud object storage? Maxime Beugnet shows you how to do both using Online Archive and Atlas Data Lake in this tutorial!Schema Design Anti-Patterns - Part 3\nCheck out Lauren Schaefer’s new YouTube video on Schema Design Anti-Patterns! In the third and final installment of this anti-patterns series, Lauren discusses the sixth anti-pattern: separating data that is accessed together.How to Use Custom Aggregation Expressions in MongoDB 4.4\nAdo Kukic shows you how to use custom aggregation expressions to extend the MongoDB Query Language and fit your needs! Learn how to use the $function and $accumulator operators, new in MongoDB 4.4.We stream tech tutorials, live coding, and talk to members of our community every Friday. Sometimes, we even stream twice a week! Be sure to follow us on Twitch to be notified of every stream!Adrienne and Nic finish out their Door Dash level in Episode 6! Watch as Adrienne adds fake and real doors, plays with physics materials 2D, and adds bounciness Watch nowEpisode 7 is on Oct 28, 10:30am PT. Join us in the chat and help us build this game!While you wait for Wednesday, catch up on past streams:Episode 5: Level Design and Player Animation in UnityEpisode 4: Getting Familiar with UnityEpisode 3: Creating a User Profile store with MongoDB - Part 2 And if you ever need to catch up on any of our streams, you can always find them on our Developer Hub or our Twitch Live Streams playlist!Episode 22: The Mongoose ODM with Val Karpov\nThis episode is a one! Hosts Nic Raboy & Mike Lynn chat with Val Karpov, maintainer of the Mongoose ODM.Mongoose is the 18th most popular download on GitHub with over 1.5m downloads of the package on NPM. Val is the sole maintainer of this massively popular package and on this episode, Val shares details of its history as well as why its so popular. If you’re a NodeJS developer using MongoDB, don’t miss this episode!(Not listening on Spotify? We got you! We’re most likely on your favorite podcast network, including Apple Podcasts, PlayerFM, Podtail, and Listen Notes )Every week, we pick interesting articles, questions, and more from all over the internet! Be sure to use the #MongoDB hashtag when posting on dev.to or leave a comment on my weekly Tweets. You might be featured in an upcoming edition!Stumped Trying to Query For Field Containing a String, Case Insensitive And Diacritic Sensitive\n_https://www.mongodb.com/community/forums_Can you help out this MongoDB user with their question? Join the discussion now!Migrating from Legacy Realm Sync to MongoDB Realm Guide\n_https://www.mongodb.com/community/forums_You’ve asked and we listened! Check out this guide on how to migrate from old versions of Realm Object Server and Realm Cloud to MongoDB Realm!Is there a way to apply an arbitrary condition to a MongoDB aggregation?\n_https://www.mongodb.com/community/forums_Running into this issue? Check out this topic and add your perspective!Watch our team do their thang at various conferences, meetups, and podcasts around the world (virtually, for now). Also, find external articles and guest posts from our DevRel team here! UpcomingOct 20: EuropeClouds Summit\nAs Daniel Tiger wisely sings, “It’s OK to make mistakes. 
Try to fix them, and learn from them too.” Come learn common mistakes developers make as they model their data in document databases in Lauren Schaefer’s talk “Stop! Don’t Make These Mistakes in Your Document Database”.Oct 22: nerdear.la\nJoe Karlsson’s favorite things in life are cats , computers and crappy ideas , so he decided to combine all three and make an IoT (Internet of Things) litter box using a Raspberry Pi and JavaScript! If you have ever wanted to get build your own IoT project, but didn’t know how to start, then this is the talk for you.Oct 22: Big Mountain Data & Dev\nLauren Schaefer gives a timely talk “Top Ten Tips for Making Remote Work Actually Work Right Now” on the first day of Big Mountain Data & Dev!Oct 22: Big Mountain Data & Dev\nJoe Karlsson will join Lauren at Big Mountain Data & Dev to give his talk “An Introduction To IoT (Internet of Toilets); Or How I Built an IoT Kitty Litter Box Using JS”!Oct 23: Big Mountain Data & Dev\nTune into Lauren Schaefer’s talk “From Tables to Documents—Changing Your Database Mindset” at Big Mountain Data & Dev!Oct 23: Data Con LA\nAs if two conferences weren’t enough, you’ll get to catch Joe Karlsson’s Intro to IoT talk at Data Con LA as well!Oct 24: devfest Madison, WI\nCheck out Joe Karlsson’s talk “Bechdel.io: How We Used JavaScript To Help Make Film More Inclusive” to see how a brother and sister team created, bechdel.io, a film script parsing tool that automatically tests film scripts to determine whether or not they pass the Bechdel Test in a fraction of a second!", "username": "yo_adrienne" }, { "code": "", "text": "And I thought I was going to have a busy week. Good luck on all of your talks @JoeKarlsson!", "username": "Lauren_Schaefer" }, { "code": "", "text": "Right? I got tired just reading it! ", "username": "yo_adrienne" } ]
MongoDB $weeklyUpdate #8: Community Streams Galore!
2020-10-19T00:46:14.716Z
MongoDB $weeklyUpdate #8: Community Streams Galore!
3,922
null
[ "spring-data-odm" ]
[ { "code": "{$or: \n [\n {'title': {$regex : \"aard\", $options: 'i'} },\n {'tags': {$elemMatch: {$regex : \"aard\", $options: 'i'} } },\n {'categories': {$elemMatch: {$regex : \"aard\", $options: 'i'} } }\n ]\n}\n@Query(\"{$or: [\" +\n \"{'title': {$regex: ?0, $options: 'i'} }, \" +\n \"{'tags': {$elemMatch: {$regex: ?0, $options: 'i'} } },\" +\n \"{'categories': {$elemMatch: {$regex: ?0, $options: 'i'} } }\" +\n \"] }\")\nList<Product> search(String input);\n@Query(\"{'categories': {$elemMatch: {$regex: ?0, $options: 'i'} } }\")\nList<Product> findByCategory(String category);\n_id:1\nsupplier:DBRef(supplier, [object Object], undefined)\ntitle:\"Aardappels\"\ndescription:\"Verse aardappels\"\nunitSize:\"5KG\"\ncategories:[\"Groente\"]\ntags:[\"Aardappels\"]\nprice:14.99\nsalePrice:9.99\nsale:true\namount:0\nsold:0\n_class: \"nl.hva.ewaserver.models.Product\"\n_id:1\ntitle:\"Aardappels\"\ncategories:[\"Groente\"]\ntags:[\"Aardappels\"]\n", "text": "So I’m writing a search query, but got stuck with an error.Querying this query in Mongo Compass returns the correct results.However querying with this query in spring boot throws an error:Query failed with error code 2 and error message ‘$elemMatch needs an Object’Smaller example code with same result:Mongo Document Example:Smaller Mongo Document Example:It’s weird, they both have the same query, but in spring boot I use parameters.\nWhich shouldn’t be the problem, because I’ve also tested hardcoded.", "username": "Maikel_van_Dort" }, { "code": "$elemMatch@Query(\"{'categories': { $regex: ?0, $options: 'i' } }\")$elemMatchcategories{\n \"_id\" : 1,\n \"title\" : \"Aardappels\",\n \"categories\" : [\n {\n \"a\" : \"a1\",\n \"b\" : \"b1\"\n },\n {\n \"a\" : \"y1\",\n \"b\" : \"z1\"\n }\n ]\n}\n$emeMatch@Query(\"{'categories': { $elemMatch: { a: { $regex: ?0, $options: 'i' }, b: ?1 } } }\")", "text": "Hello @Maikel_van_Dort, welcome to the community.This error occurs when you use $elemMatch with Single Query Condition . So, if you change your query as follows, there is no error (and works as expected).@Query(\"{'categories': { $regex: ?0, $options: 'i' } }\")Note that, if you need to work with an array of objects, then it becomes necessary to use the $elemMatch operator, with multiple fields of the object. For example, with a document with the categories array field with embedded documents (or objects):Then your query need to be using $emeMatch, and this runs without any errors. E.g.:@Query(\"{'categories': { $elemMatch: { a: { $regex: ?0, $options: 'i' }, b: ?1 } } }\")", "username": "Prasad_Saya" } ]
Spring Boot Mongo: "$elemMatch needs an Object"
2020-10-19T18:57:11.280Z
Spring Boot Mongo: &ldquo;$elemMatch needs an Object&rdquo;
8,922
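For readers landing here, this is a sketch of the full repository interface with Prasad's correction applied: $elemMatch dropped in favor of a plain $regex, which matches any element of a string array. The interface and entity names mirror the question and are otherwise illustrative:

import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;

public interface ProductRepository extends MongoRepository<Product, Long> {

    // A single $regex condition applied to an array field matches any element,
    // so no $elemMatch wrapper is needed (that operator expects an object).
    @Query("{$or: [" +
           "{'title':      {$regex: ?0, $options: 'i'} }, " +
           "{'tags':       {$regex: ?0, $options: 'i'} }, " +
           "{'categories': {$regex: ?0, $options: 'i'} } " +
           "] }")
    List<Product> search(String input);
}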
null
[ "dot-net", "xamarin" ]
[ { "code": "", "text": "Hi there.\nI’m building an app, and i wanted to use Realm and Realm Cloud. I previously build an app in Xamarin Forms using Realm and Realm Cloud, and it was pretty straight forward with great guides in documentation.\nNow MongoDb has acquiredrealm, and created ‘MongoDb realm’, and i can’t for the love of Baby Yoda figure out how to implement it in a Xamarin Forms app. So my question is - can i just use Realm Cloud? It seems to be up and running - and if so - will i have to worry about having to migrate to Atlas in the future?Thanks in advance.", "username": "Rasmus_B" }, { "code": "var user = await User.LoginAsync(new Uri(\"...\"), Credentials.UsernamePassword(...));\nvar config = new FullSyncConfiguration(new Uri(...), user);\nvar app = App.Create(\"my-app-id\");\nvar user = await app.LoginAsync(Credentials.EmailPassword(...));\nvar config = new SyncConfiguration(\"partition\", user);\n", "text": "We just released the first beta of the .NET SDK that adds support for MongoDB Realm. For the most part, it should be fairly straightforward migration where you replace:withWe also just published the .NET docs that should hopefully clarify a few concepts related to how to use the .NET SDK with MongoDB Realm. We also have a migration guide that may also be helpful.But to answer your questions - for a new project, it would probably be a bad idea to base it off the legacy Realm Cloud - all new development targets MongoDB Realm and you’ll be missing out on a lot of new functionality, including new datatypes, platforms, and performance improvements.If you do have specific questions or issues, we’d be happy to help clear things up.", "username": "nirinchev" }, { "code": "", "text": "A post was split to a new topic: Realms.Exceptions.RealmException: Keychain returned unexpected status code", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB realm and Xamarin Forms
2020-10-18T19:27:25.982Z
MongoDB realm and Xamarin Forms
3,511
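As a usage note on the migration snippet above, the typical end-to-end flow with the .NET SDK 10.x looks roughly like this. The app ID, credentials, and partition value are placeholders; treat this as a sketch under those assumptions, not the definitive API surface:

var app = App.Create("my-app-id");
var user = await app.LogInAsync(Credentials.EmailPassword("user@example.com", "password"));
var config = new SyncConfiguration("partition", user);
// GetInstanceAsync waits for the initial sync to complete before returning.
var realm = await Realm.GetInstanceAsync(config);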
null
[]
[ { "code": "mongod*.wtulimitssystemdLimitNOFILE=200000nprocmongodmongodmongodb", "text": "Due to the nature of my data (being very many collections) I have a lot of problems starting mongod. I have about 140k *.wt files on disk, although during the normal running of the server I only have about 2k open pointers as once. So this large number of files is only a problem on startup, which takes about 5 mins.I’ve posted about ulimits before and have resolved that problem in systemd by using a drop-in config that sets LimitNOFILE=200000 plus the corresponding nproc config as mentioned here. I think this side of things is resolved.However I have hit a new ceiling. Today (amidst upgrade to 4.4), starting mongod causes the server to run out of memory. The box has 4G of RAM and a 2G swap disk. It only runs MongoDB and under normal operation the mongod process seems to use about half the available RAM.So I suppose my question is what can I do to get the server to start with my 4G of RAM? Do I need a bigger box just to get it started? If so, is it possible to calculate the ram I’ll need based the number of files? Perhaps increasing the size of the swap disk would fix it? Disk space is not a problem.The next size up of my Linode VPS is 8G and will double the monthly cost for all my mongodb servers. Not the end of the world, but as I only seem to need this RAM on startup, I’m wondering if I can hold off on that upgrade.Suggestions much appreciated.", "username": "timw" }, { "code": "", "text": "I tried increasing my swap partition to 4GB and still got memory exhaustion. I don’t understand memory management in Linux well enough to know if I did this correctly. It appeared to me that the swap space wasn’t being used, but I’m out of my depth here.The only way I’ve been able to fix this is to trash the data directory and start from empty. It takes MUCH longer to start, but doesn’t suffer the same problems.", "username": "timw" }, { "code": "1sysctl vm.swappiness=501", "text": "Final update until I have to go through all this again. I realised that my system swappiness was 1 as per the recommendations, so I issued sysctl vm.swappiness=50 and the server started without exhausting memory. I don’t know if this is a coincidence, because supposedly the value of 1 swaps “only to avoid out-of-memory problems” which should have meant it would work.", "username": "timw" }, { "code": "mongod", "text": "More than 10k collections is generally not a great idea.https://www.mongodb.com/article/schema-design-anti-pattern-massive-number-collectionsHow much space does your indexes take in RAM?RAM = Indexes + working set + some free space for the queries & aggregations. That’s what a healthy mongod needs.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks for the reply. Had that article been around 8 years ago, we wouldn’t be having this conversation. Likewise if WiredTiger had been the storage engine. However I am dealing with what I’ve got until such time as I have the capacity to completely redesign my system.I don’t know how to answer your question accurately, but I usually have about 1G of spare RAM when the system is running. At any one time only about 2k file pointers are open.", "username": "timw" }, { "code": "mongodmongodmongodmongod", "text": "A few good piece of advice in here:I think you are fine with a swappiness at 50 when mongod is starting. But then your linux might be more tempted to use the swap if you are starting some more RAM intensive activities. 
I guess what you could do is reduce the swappiness once mongod has started, so the system is less tempted to use the swap once the startup is done.Also, make sure to monitor your swap usage at this point. If your mongod starts using the swap after the startup, I guess you definitely need more RAM.What I'm about to say is really not accurate and should not be taken for an absolute truth, but usually I would recommend having about 10-20% of the node size in RAM.So, for example, if you have a 200GB replica set (200GB per node), I would have around 20 to 40GB of RAM. Unless there is really something out of the ordinary going on, this should leave you plenty of RAM for the indexes (which should represent about 10-20% of the RAM), and the rest is for mongod to keep frequently queried documents in RAM and resolve queries.This ratio didn't fall completely from the sky though. It's my feeling, but it's also more or less the ratio you get by default when you take a cluster on MongoDB Atlas.\nimage894×714 84.8 KB\nI hope this helps.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "1", "text": "Thanks for the follow-up. Yes, I did switch swappiness back to 1 after the restart. I will keep an eye on resources.", "username": "timw" } ]
Mongod "out of memory" on startup
2020-10-17T18:01:06.587Z
Mongod &ldquo;out of memory&rdquo; on startup
4,469
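Pulling timw's findings together, the startup-time tuning amounts to a systemd drop-in for the file-descriptor limits plus a temporary swappiness bump. The snippet below is a hypothetical sketch; paths and values are illustrative, not copied from timw's system:

# /etc/systemd/system/mongod.service.d/limits.conf
[Service]
LimitNOFILE=200000
LimitNPROC=64000

# Raise swappiness only for the startup window, then restore it:
sudo sysctl vm.swappiness=50
sudo systemctl start mongod
# ...wait until mongod has finished opening its *.wt files...
sudo sysctl vm.swappiness=1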
null
[ "node-js", "java", "swift", "kotlin", "objective-c" ]
[ { "code": "", "text": "Hey All,If you haven’t already seen the news - today, we announced the GA release of theRealm SDKs 10.0, which includes new features like cascading deletes, and new types, like Decimal128.Check out the blog post here: https://www.mongodb.com/article/realm-database-cascading-deletesIf you’d like a live deep-dive, I’ll be hosting a webinar where I’ll talk through some code examples. You can sign-up if you’re in the Americas here: Resources | MongoDB or in Europe here: https://www.mongodb.com/webinar/what-s-new-in-realm-emea.We always want to hear feedback, so post here or head to: Realm: Top (70 ideas) – MongoDB Feedback Engine let us know what you think.We will also be hosting an iOS Hackathon in November with our iOS engineering team. For more information sign up for user group here - https://live.mongodb.com/realm-global-community/Thanks!\n-Ian", "username": "Ian_Ward" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Realm SDKs 10.0 - Generally Available
2020-10-19T18:49:33.141Z
Realm SDKs 10.0 - Generally Available
2,632
null
[ "java" ]
[ { "code": "CodecRegistry codecRegistry = CodecRegistries.fromRegistries(\nMongoClientSettings.getDefaultCodecRegistry(),\nCodecRegistries.fromProviders(PojoCodecProvider.builder().automatic(true).build()));\n", "text": "Hi,I’m attempting to store an object in a MongoDB database (using MongoDB 3.12.6) and am getting following error:org.bson.codecs.configuration.CodecConfigurationException: An exception occurred when encoding using the AutomaticPojoCodec.\"I am using the following lines of code currently to create CodeRegistry:But for some reason AutomaticPojoCodec generated here does not seem to work to encode/decode POJO.Could anyone please feedback ?", "username": "justin_george" }, { "code": "try {\n\tConnectionString connectiontring = new ConnectionString(\n\t\t\t\"your connection\");\nMongoClientSettings clientSettings = MongoClientSettings.builder()\n\t\t .codecRegistry(pojoCodecRegistry)\n\t\t.applicationName(\"project name\")\n\t\t.applyToConnectionPoolSettings(builder -> builder.maxWaitTime(20000, TimeUnit.MILLISECONDS))\n\t\t.applyConnectionString(connectiontring).retryWrites(false).build();\n\nMongoClient mongoClient = MongoClients.create(clientSettings);\n\nMongoDatabase database = mongoClient.getDatabase(\"your_database\")\n .withCodecRegistry(pojoCodecRegistry);\n\n\t\treturn database;\n\n\t} catch (Exception e) {\n\t\te.printStackTrace();\n\t}\n\treturn null;\n}", "text": "Did you then put it something like below (I also followed the document to connect it)CodecRegistry pojoCodecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(),\nfromProviders(PojoCodecProvider.builder().automatic(true).build()));", "username": "Pat_Yue" }, { "code": "", "text": "Hi [Pat_Yue]Thanks for the reply. Unfortunately, this approach again do not work and gives the same error. Following is the error:org.bson.codecs.configuration.CodecConfigurationException: An exception occurred when encoding using the AutomaticPojoCodec.\nEncoding a AadharForm: ‘controller.AadharForm@42917caa’ failed with the following exception:Failed to encode ‘AadharForm’. Encoding ‘servletWrapper’ errored with: Unable to get value for property ‘servletFor’ in ActionServletWrapperRegards,\nJoe", "username": "justin_george" } ]
Explicitly configuring Custom Codec for POJO
2020-10-08T11:55:50.356Z
Explicitly configuring Custom Codec for POJO
5,163
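The final stack trace suggests the class being saved holds a Struts ActionServletWrapper, which no codec can encode; the usual remedy is to persist a plain data class instead. Below is a minimal sketch of that approach, assuming a localhost server and an invented field (only the class name comes from the thread):

import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;

import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.codecs.configuration.CodecRegistry;
import org.bson.codecs.pojo.PojoCodecProvider;

public class PojoExample {

    // A plain data class: the automatic codec can only handle properties that
    // are themselves encodable (no servlet wrappers, streams, contexts, etc.).
    public static class AadharForm {
        private String fullName;
        public AadharForm() {}
        public String getFullName() { return fullName; }
        public void setFullName(String fullName) { this.fullName = fullName; }
    }

    public static void main(String[] args) {
        CodecRegistry pojoCodecRegistry = fromRegistries(
                MongoClientSettings.getDefaultCodecRegistry(),
                fromProviders(PojoCodecProvider.builder().automatic(true).build()));

        MongoDatabase db = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("test")
                .withCodecRegistry(pojoCodecRegistry);

        AadharForm form = new AadharForm();
        form.setFullName("Test User");
        db.getCollection("forms", AadharForm.class).insertOne(form);
    }
}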
null
[ "queries" ]
[ { "code": "", "text": "Hello!I am having an extremely difficult time creating the right syntax to query documents for my application.I have written out a whole SO question describing it here: regex - MongoDB \"find\" - How To Query For Docs With Field Containing a String, Case Insensitive And Diacritic Sensitive? - Stack OverflowAppreciate any help or advice on this! Thanks!", "username": "James_Lynch" }, { "code": " { title: { $regex: searchText, $options: \"i\" } }searchText\"home\"", "text": "Hello @James_Lynch,You can try this filter using regex with your find query: { title: { $regex: searchText, $options: \"i\" } }The searchText variable value, for example is \"home\" (and this returns the first 3 documents from your sample collection).", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks @Prasad_Saya!When I try this though it matches for “höme” where I would like them to be distinct from each other.", "username": "James_Lynch" } ]
Stumped trying to query for field containing a string, case insensitive and diacritic sensitive
2020-10-18T17:56:37.748Z
Stumped trying to query for field containing a string, case insensitive and diacritic sensitive
3,228
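One workaround worth sketching here (it is not proposed in the thread itself): keep a lowercased shadow of the field and match the lowercased input against it literally. Case is folded at write time, while accented characters survive toLowerCase() unchanged, so "home" will not match "höme". Field and collection names below are illustrative:

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.regex.Pattern;

public class DiacriticSensitiveSearch {
    public static void main(String[] args) {
        MongoCollection<Document> movies = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("test").getCollection("movies");

        // Write time: store a lowercased shadow ("Höme" -> "höme").
        String title = "Home Alone";
        movies.insertOne(new Document("title", title)
                .append("title_lower", title.toLowerCase()));

        // Query time: lowercase the input and match it literally against the
        // shadow field; Pattern.quote escapes any regex metacharacters.
        String searchText = "home";
        Document filter = new Document("title_lower",
                new Document("$regex", Pattern.quote(searchText.toLowerCase())));
        for (Document doc : movies.find(filter)) {
            System.out.println(doc.getString("title"));
        }
    }
}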
null
[ "queries", "data-modeling", "java" ]
[ { "code": "EntityState PostcodeMongoCollection<Customer> customerTbl = database.getCollection(\"Customer\", Customer.class);\n\nCustomer newCustomer = new Customer(); // that linking to the constructuor of the entity instead\n\nnewCustomer.setEmail(customer.getEmail());\nnewCustomer.setFullName(customer.getFullName());\nnewCustomer.setPassword(customer.getPassword());\nnewCustomer.setState(customer.getState());\nnewCustomer.setPostCode(customer.getPostCode());\nnewCustomer.setPhone(customer.getPhone()); \nnewCustomer.setRegisterDate(new Date());\n\ncustomerTbl.insertOne(newCustomer);\n", "text": "I’m following the official document and blog post to use POJO mapping to the Entity. For example, I have a Customer Entity, which has a phone number and address details (other than the name, email, password). If I use the QuickStart guide to get user input would be complaining as it is different from the constructor that declared in the user entity.Then I tweak it like using the relational database to get all customer input, but it raises another issue on how to append all address details to the customer object? As you might know that the State and Postcode is separate as it owns. Should Address be another new entity or how to put it together? What is the right way to use MongoDB in Java Servlet?Thanks", "username": "Pat_Yue" }, { "code": "streetcityAddressAddress", "text": "Hello @Pat_Yue,Should Address be another new entity or how to put it together?The address information can be separate class (another entity) as shown in the Quick Start guide (you had linked, and it shows using address as a separate class) or the address information can be individual fields, like street, city, etc., and within the Customer POJO class. It is a matter of designing your application.An Address POJO on its own allows grouping of related information - info related to an address. 
Also, the Address class can be used in other entities of the application (for example, an Employee or a Supplier entity has an address too).", "username": "Prasad_Saya" }, { "code": "package com.mongodb.quickstart;\n\nimport com.mongodb.ConnectionString;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport com.mongodb.client.model.FindOneAndReplaceOptions;\nimport com.mongodb.client.model.ReturnDocument;\nimport com.mongodb.quickstart.models.Grade;\nimport com.mongodb.quickstart.models.Score;\nimport org.bson.Document;\nimport org.bson.codecs.configuration.CodecRegistry;\nimport org.bson.codecs.pojo.PojoCodecProvider;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static com.mongodb.client.model.Filters.eq;\nGradeScorepackage com.mongodb.quickstart.models;\n\nimport org.bson.codecs.pojo.annotations.BsonProperty;\nimport org.bson.types.ObjectId;\n\nimport java.util.List;\nimport java.util.Objects;\n\npublic class Grade {\n\n private ObjectId id;\n @BsonProperty(value = \"student_id\")\n private Double studentId;\n @BsonProperty(value = \"class_id\")\n private Double classId;\n private List<Score> scores;\n\n public ObjectId getId() {\n return id;\n }\npackage com.mongodb.quickstart.models;\n\nimport java.util.Objects;\n\npublic class Score {\n\n private String type;\n private Double score;\n\n public String getType() {\n return type;\n }\n\n public Score setType(String type) {\n this.type = type;\n return this;\n }\n\n public Double getScore() {\n return score;\n", "text": "You also have an example in this blog post:https://www.mongodb.com/quickstart/java-mapping-pojosIt uses this GitHub repo:This repository contains code samples for the Java Quick Start blog post series - GitHub - mongodb-developer/java-quick-start: This repository contains code samples for the Java Quick Start blog po...Especially this class:And its models Grade and Score:", "username": "MaBeuLux88" } ]
Doubt with the POJOs
2020-10-18T05:14:50.000Z
Doubt with the POJOs
1,987
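A minimal sketch of Prasad's suggestion, an Address POJO embedded inside Customer, with invented field names; the automatic PojoCodecProvider maps the nested POJO as an embedded document, so no second collection is required:

import org.bson.types.ObjectId;

public class Customer {
    private ObjectId id;
    private String fullName;
    private Address address;   // stored as an embedded document

    public Customer() {}
    public ObjectId getId() { return id; }
    public void setId(ObjectId id) { this.id = id; }
    public String getFullName() { return fullName; }
    public void setFullName(String fullName) { this.fullName = fullName; }
    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }
}

class Address {
    private String state;
    private String postCode;

    public Address() {}
    public String getState() { return state; }
    public void setState(String state) { this.state = state; }
    public String getPostCode() { return postCode; }
    public void setPostCode(String postCode) { this.postCode = postCode; }
}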
null
[ "aggregation", "performance" ]
[ { "code": "$match:{\nis_workflow_processing: false ,\nis_error: true\n}\n$group:{\n_id: {\nstatus: “$status”,\nRisk: “$control_monitorkey”,\nUser: “$Masterid”,\nAssetID: “$SYSTEMID”\n},\ncnt: { $sum: 1 }\n}\nrawData: [ {\nproject: { Status: \"_id.status\",\nRiskName: “_id.Risk\", userId: \"_id.User”,\nassetId: “$_id.AssetID”,\nExceptionCount: “$cnt”,\n_id: 0.0,\n},\n{\n$sort: {\nExceptionCount: -1\n}\n},\n{\n$skip: 0\n},\n{\n$limit: 1000\n}\n],\ncount: [ { $count: “sum” } ]\n", "text": "I have 600 million records and i am trying to run group stage and its taking 30 min .\nStage 1->Stage 2->stage3>Note-> Index is created on is_workflow_processing and is_error fields\nserver details-> 64gb RAM,16 core CPU,4.2 mongodb version", "username": "Narendra_S_Sikarwar" }, { "code": "{is_workflow_processing:1, is_error:1}[\n {\n '$match': {\n 'country_code': 840\n }\n }, {\n '$sort': {\n 'country': 1, \n 'state': 1, \n 'county': 1\n }\n }, {\n '$group': {\n '_id': {\n 'c': '$country', \n 's': '$state', \n 'cc': '$county'\n }, \n 'count': {\n '$sum': 1\n }\n }\n }, {\n '$project': {\n '_id': 0, \n 'country': '$_id.c', \n 'state': '$_id.s', \n 'county': '$_id.cc', \n 'count': '$count'\n }\n }, {\n '$sort': {\n 'count': -1\n }\n }, {\n '$limit': 100\n }\n]\n{country_code: 1, country:1, state:1, county: 1}", "text": "Hi @Narendra_S_Sikarwar and welcome in the MongoDB Community !If you already have the compound index {is_workflow_processing:1, is_error:1}, then your pipeline looks pretty much optimized here.Maybe you could try to create a bigger index + $sort before the $group so the documents in input of the $group state are already sorted which could help the algorithm - eventually.Take this pipeline as an example:This pipeline can use the index {country_code: 1, country:1, state:1, county: 1}. Try with and without the $sort operation and see if you have any improvements.Else - last obvious solution - you need to reduce the number of documents at the $match stage and support it with an index.I hope this help. 
You can read the discussions in this ticket for more details.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "{is_workflow_processing:1, is_error:1, status: 1, control_monitorkey:1, Masterid:1, SYSTEMID:1}from faker import Faker\nfrom pymongo import ASCENDING\nfrom pymongo import MongoClient\n\nfake = Faker()\n\n\ndef random_docs():\n docs = []\n for _ in range(10000):\n doc = {\n 'firstname': fake.first_name(),\n 'lastname': fake.last_name(),\n 'is_workflow_processing': fake.pybool(),\n 'is_error': fake.pybool(),\n 'status': fake.pyint(),\n 'control_monitorkey': fake.pyint(),\n 'Masterid': fake.pyint(),\n 'SYSTEMID': fake.pyint()}\n docs.append(doc)\n return docs\n\n\nif __name__ == '__main__':\n client = MongoClient()\n collection = client.test.coll\n collection.insert_many(random_docs())\n collection.create_index([(\"is_workflow_processing\", ASCENDING), (\"is_error\", ASCENDING), (\"status\", ASCENDING),\n (\"control_monitorkey\", ASCENDING), (\"Masterid\", ASCENDING), (\"SYSTEMID\", ASCENDING), ])\n print('Done!')\n\n[\n {\n '$match': {\n 'is_workflow_processing': false, \n 'is_error': true\n }\n }, {\n '$group': {\n '_id': {\n 'status': '$status', \n 'control_monitorkey': '$control_monitorkey', \n 'Masterid': '$Masterid', \n 'SYSTEMID': '$SYSTEMID'\n }, \n 'count': {\n '$sum': 1\n }\n }\n }, {\n '$project': {\n '_id': 0, \n 'status': '$_id.status', \n 'control_monitorkey': '$_id.control_monitorkey', \n 'Masterid': '$_id.Masterid', \n 'SYSTEMID': '$_id.SYSTEMID', \n 'count': 1\n }\n }, {\n '$sort': {\n 'count': -1\n }\n }, {\n '$limit': 5\n }\n]\n{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"test.coll\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"is_error\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : true\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"is_workflow_processing\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : false\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"queryHash\" : \"86D43161\",\n\t\t\t\t\t\"planCacheKey\" : \"AF368344\",\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"PROJECTION_COVERED\",\n\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\"Masterid\" : 1,\n\t\t\t\t\t\t\t\"SYSTEMID\" : 1,\n\t\t\t\t\t\t\t\"control_monitorkey\" : 1,\n\t\t\t\t\t\t\t\"status\" : 1,\n\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"is_workflow_processing\" : 1,\n\t\t\t\t\t\t\t\t\"is_error\" : 1,\n\t\t\t\t\t\t\t\t\"status\" : 1,\n\t\t\t\t\t\t\t\t\"control_monitorkey\" : 1,\n\t\t\t\t\t\t\t\t\"Masterid\" : 1,\n\t\t\t\t\t\t\t\t\"SYSTEMID\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"is_workflow_processing_1_is_error_1_status_1_control_monitorkey_1_Masterid_1_SYSTEMID_1\",\n\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"is_workflow_processing\" : [ ],\n\t\t\t\t\t\t\t\t\"is_error\" : [ ],\n\t\t\t\t\t\t\t\t\"status\" : [ ],\n\t\t\t\t\t\t\t\t\"control_monitorkey\" : [ ],\n\t\t\t\t\t\t\t\t\"Masterid\" : [ ],\n\t\t\t\t\t\t\t\t\"SYSTEMID\" : [ ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\"indexBounds\" : 
{\n\t\t\t\t\t\t\t\t\"is_workflow_processing\" : [\n\t\t\t\t\t\t\t\t\t\"[false, false]\"\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"is_error\" : [\n\t\t\t\t\t\t\t\t\t\"[true, true]\"\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"status\" : [\n\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"control_monitorkey\" : [\n\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"Masterid\" : [\n\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"SYSTEMID\" : [\n\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$group\" : {\n\t\t\t\t\"_id\" : {\n\t\t\t\t\t\"status\" : \"$status\",\n\t\t\t\t\t\"control_monitorkey\" : \"$control_monitorkey\",\n\t\t\t\t\t\"Masterid\" : \"$Masterid\",\n\t\t\t\t\t\"SYSTEMID\" : \"$SYSTEMID\"\n\t\t\t\t},\n\t\t\t\t\"count\" : {\n\t\t\t\t\t\"$sum\" : {\n\t\t\t\t\t\t\"$const\" : 1\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$project\" : {\n\t\t\t\t\"count\" : true,\n\t\t\t\t\"status\" : \"$_id.status\",\n\t\t\t\t\"control_monitorkey\" : \"$_id.control_monitorkey\",\n\t\t\t\t\"Masterid\" : \"$_id.Masterid\",\n\t\t\t\t\"SYSTEMID\" : \"$_id.SYSTEMID\",\n\t\t\t\t\"_id\" : false\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$sort\" : {\n\t\t\t\t\"sortKey\" : {\n\t\t\t\t\t\"count\" : -1\n\t\t\t\t},\n\t\t\t\t\"limit\" : NumberLong(5)\n\t\t\t}\n\t\t}\n\t],\n\t\"serverInfo\" : {\n\t\t\"host\" : \"hafx\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.4.1\",\n\t\t\"gitVersion\" : \"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1\"\n\t},\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1602536597, 4),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1602536597, 4)\n}\nIXSCANPROJECTION_COVEREDFETCH", "text": "Hey @Narendra_S_Sikarwar,I actually gave this a second thought and I think I found a faster way to deal with this: by using a covered query.In your case, the index might be a little fat but you have 64GB of RAM… So it should get the job done fast with the following aggregation pipeline:Here is the explain plan of this aggregation:As you can see above, you only have an IXSCAN stage followed by a PROJECTION_COVERED stage. No FETCH stage: meaning that no data is retrieved from disk. It’s only using the content of the index in RAM.If this query isn’t below the second - I don’t get it!Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88 thanks for your guidance.\n1: i am trying to use sort in 2 stage (after match before group) but its takes more time. without sort its better.\n2: i am trying to use project (after match before group) also takes more time\n3: i am trying to chunks the data (Example -> use “DATE” : {$gte:ISODate(“2013-01-07T00:00:00.000+0000”)} in first stage but also time taking )\nI don’t know what is the issue every thing is slow on 2 stage which is group", "username": "Narendra_S_Sikarwar" }, { "code": "", "text": "Can you share an explain plan? 
Which index are you using?\nWithout the covering index, you most probably have a FETCH stage in your execution plan which needs to retrieve all the documents from disk… Which is LONG it’s also putting a lot of pressure on the WT cache.", "username": "MaBeuLux88" }, { "code": "{ \n \"stages\" : [\n {\n \"$cursor\" : {\n \"query\" : {\n \"workflow_stage_current_assignee\" : \"gladmin\", \n \"is_workflow_processing\" : false, \n \"is_error\" : true, \n \"is_delegated\" : false, \n \"is_escalated\" : false\n }, \n \"fields\" : {\n \"Masterid\" : NumberInt(1), \n \"SYSTEMID\" : NumberInt(1), \n \"control_monitorkey\" : NumberInt(1), \n \"status\" : NumberInt(1), \n \"_id\" : NumberInt(0)\n }, \n \"queryPlanner\" : {\n \"plannerVersion\" : NumberInt(1), \n \"namespace\" : \"GLT_Narendra.EXCEPTIONS\", \n \"indexFilterSet\" : false, \n \"parsedQuery\" : {\n \"$and\" : [\n {\n \"is_delegated\" : {\n \"$eq\" : false\n }\n }, \n {\n \"is_error\" : {\n \"$eq\" : true\n }\n }, \n {\n \"is_escalated\" : {\n \"$eq\" : false\n }\n }, \n {\n \"is_workflow_processing\" : {\n \"$eq\" : false\n }\n }, \n {\n \"workflow_stage_current_assignee\" : {\n \"$eq\" : \"gladmin\"\n }\n }\n ]\n }, \n \"queryHash\" : \"7203DD95\", \n \"planCacheKey\" : \"9EA1C330\", \n \"winningPlan\" : {\n \"stage\" : \"FETCH\", \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"workflow_stage_current_assignee\" : NumberInt(1), \n \"is_workflow_processing\" : NumberInt(1), \n \"is_error\" : NumberInt(1), \n \"is_delegated\" : NumberInt(1), \n \"is_escalated\" : NumberInt(1)\n }, \n \"indexName\" : \"workflow_stage_current_assignee_1_is_workflow_processing_1_is_error_1_is_delegated_1_is_escalated_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"workflow_stage_current_assignee\" : [\n\n ], \n \"is_workflow_processing\" : [\n\n ], \n \"is_error\" : [\n\n ], \n \"is_delegated\" : [\n\n ], \n \"is_escalated\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : NumberInt(2), \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"workflow_stage_current_assignee\" : [\n \"[\\\"gladmin\\\", \\\"gladmin\\\"]\"\n ], \n \"is_workflow_processing\" : [\n \"[false, false]\"\n ], \n \"is_error\" : [\n \"[true, true]\"\n ], \n \"is_delegated\" : [\n \"[false, false]\"\n ], \n \"is_escalated\" : [\n \"[false, false]\"\n ]\n }\n }\n }, \n \"rejectedPlans\" : [\n\n ]\n }\n }\n }, \n {\n \"$group\" : {\n \"_id\" : {\n \"status\" : \"$status\", \n \"Risk\" : \"$control_monitorkey\", \n \"User\" : \"$Masterid\", \n \"AssetID\" : \"$SYSTEMID\"\n }, \n \"cnt\" : {\n \"$sum\" : {\n \"$const\" : 1.0\n }\n }\n }\n }\n ], \n \"serverInfo\" : {\n \"host\" : \"XXXXXX\", \n \"port\" : NumberInt(27017), \n \"version\" : \"4.2.6\", \n \"gitVersion\" : \"20364840b8f1af16917e4c23c1b5f5efd8b352f8\"\n }, \n \"ok\" : 1.0, \n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1602593872, 1), \n \"signature\" : {\n \"hash\" : BinData(0, \"jTuRMJ6j3jr6GDKbPn8Vykf1zU0=\"), \n \"keyId\" : NumberLong(6828817573358862341)\n }\n }, \n \"operationTime\" : Timestamp(1602593872, 1)\n}\n", "text": "", "username": "Narendra_S_Sikarwar" }, { "code": "", "text": "You have a FETCH in this explain plan. It means the index isn’t covering all the fields you need for this query and it has to fetch all the documents from disk that you didn’t eliminate in the $match stage.You need to evaluate if the index with all the fields will be too big or not. 
Depending on the cardinality of each field, it can get bigger or smaller if you switch the order of the fields but the first 5 must be the one you currently have to cover the $match correctly I guess.", "username": "MaBeuLux88" }, { "code": "\"FETCH\",{ \n \"stages\" : [\n {\n \"$cursor\" : {\n \"query\" : {\n \"is_workflow_processing\" : false, \n \"is_error\" : true, \n \"is_escalated\" : false, \n \"is_delegated\" : false, \n \"workflow_stage_current_assignee\" : \"gladmin\", \n \"TX_DATE\" : {\n \"$gte\" : ISODate(\"2017-06-19T00:00:00.000+0000\")\n }\n }, \n \"fields\" : {\n \"Masterid\" : NumberInt(1), \n \"SYSTEMID\" : NumberInt(1), \n \"control_monitorkey\" : NumberInt(1), \n \"status\" : NumberInt(1), \n \"_id\" : NumberInt(0)\n }, \n \"queryPlanner\" : {\n \"plannerVersion\" : NumberInt(1), \n \"namespace\" : \"GLT_Narendra.EXCEPTIONS\", \n \"indexFilterSet\" : false, \n \"parsedQuery\" : {\n \"$and\" : [\n {\n \"is_delegated\" : {\n \"$eq\" : false\n }\n }, \n {\n \"is_error\" : {\n \"$eq\" : true\n }\n }, \n {\n \"is_escalated\" : {\n \"$eq\" : false\n }\n }, \n {\n \"is_workflow_processing\" : {\n \"$eq\" : false\n }\n }, \n {\n \"workflow_stage_current_assignee\" : {\n \"$eq\" : \"gladmin\"\n }\n }, \n {\n \"TX_DATE\" : {\n \"$gte\" : ISODate(\"2017-06-19T00:00:00.000+0000\")\n }\n }\n ]\n }, \n \"queryHash\" : \"67F7F482\", \n \"planCacheKey\" : \"172BB4B5\", \n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_COVERED\", \n \"transformBy\" : {\n \"Masterid\" : NumberInt(1), \n \"SYSTEMID\" : NumberInt(1), \n \"control_monitorkey\" : NumberInt(1), \n \"status\" : NumberInt(1), \n \"_id\" : NumberInt(0)\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"is_workflow_processing\" : 1.0, \n \"is_error\" : 1.0, \n \"is_delegated\" : 1.0, \n \"is_escalated\" : 1.0, \n \"workflow_stage_current_assignee\" : 1.0, \n \"TX_DATE\" : 1.0, \n \"status\" : 1.0, \n \"Masterid\" : 1.0, \n \"SYSTEMID\" : 1.0, \n \"control_monitorkey\" : 1.0\n }, \n \"indexName\" : \"inboxaggregatetest\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"is_workflow_processing\" : [\n\n ], \n \"is_error\" : [\n\n ], \n \"is_delegated\" : [\n\n ], \n \"is_escalated\" : [\n\n ], \n \"workflow_stage_current_assignee\" : [\n\n ], \n \"TX_DATE\" : [\n\n ], \n \"status\" : [\n\n ], \n \"Masterid\" : [\n\n ], \n \"SYSTEMID\" : [\n\n ], \n \"control_monitorkey\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : NumberInt(2), \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"is_workflow_processing\" : [\n \"[false, false]\"\n ], \n \"is_error\" : [\n \"[true, true]\"\n ], \n \"is_delegated\" : [\n \"[false, false]\"\n ], \n \"is_escalated\" : [\n \"[false, false]\"\n ], \n \"workflow_stage_current_assignee\" : [\n \"[\\\"gladmin\\\", \\\"gladmin\\\"]\"\n ], \n \"TX_DATE\" : [\n \"[new Date(1497830400000), new Date(9223372036854775807)]\"\n ], \n \"status\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"Masterid\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"SYSTEMID\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"control_monitorkey\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }, \n \"rejectedPlans\" : [\n\n ]\n }\n }\n }, \n {\n \"$group\" : {\n \"_id\" : {\n \"status\" : \"$status\", \n \"Risk\" : \"$control_monitorkey\", \n \"User\" : \"$Masterid\", \n \"AssetID\" : \"$SYSTEMID\"\n }, \n \"cnt\" : {\n \"$sum\" : {\n \"$const\" : 1.0\n }\n }\n }\n }\n ], \n \"serverInfo\" : {\n \"host\" : \"GLT-S206\", \n \"port\" : NumberInt(27017), \n 
\"version\" : \"4.2.6\", \n \"gitVersion\" : \"20364840b8f1af16917e4c23c1b5f5efd8b352f8\"\n }, \n \"ok\" : 1.0, \n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1602763420, 1), \n \"signature\" : {\n \"hash\" : BinData(0, \"f2I2ge99mQqmy4QFbvdvubTKfvI=\"), \n \"keyId\" : NumberLong(6828817573358862341)\n }\n }, \n \"operationTime\" : Timestamp(1602763420, 1)\n}\n", "text": "\"FETCH\",I am trying to change order of my match stage and its give me o/p like this@MaBeuLux88 give me some idea what should i do and from where i can start for optimise thisand what is difference between “stage” : “PROJECTION_COVERED”, and “stage”: “FETCH” in this explain plan", "username": "Narendra_S_Sikarwar" }, { "code": "", "text": "FETCH means that MongoDB is fetching the entirety of all the documents required to continue the pipeline from the disk. This generates a lot of IOPS if many documents are fetched and even more if these documents are big. All these documents are loaded in RAM which can force RAM evictions for other documents in the working set which isn’t ideal and generate cache pressure.PROJECTION_COVERED means that the pipeline directly or indirectly defines the fields that the pipeline needs to work and finally outputs. Because of this, if all these fields are present in an index, ONLY the content of the index is required to perform the aggregation and the FETCH step isn’t necessary. So for this kind of query, the disk is not solicited, everything works in RAM and it’s supposed to be way faster than the same pipeline with a FETCH operation - given that you have enough CPU and RAM available for the entirety of this index.What’s the execution time here? Is it better now?Also, do you still have enough RAM for the indexes + working set + queries? Is the CPU saturated?I hope this helps !Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Execution time is 20 min . i am going to reduce the group field may be this will give me batter solution. I hope this helps …", "username": "Narendra_S_Sikarwar" }, { "code": "", "text": "When you are running this query, what is saturated? Is it the CPU or the RAM? If it’s running 100% on RAM, it should not be the disk. Does the index fit in RAM? If it’s a covered query and the index fits in RAM, it should take a lot less and you should see almost zero IOPS.", "username": "MaBeuLux88" }, { "code": "", "text": "Query takes 50% of RAM, CPU takes 100-150% out of 800%(if we are assuming 8core cpu).\nSometimes explain plan give me projection covered stage and sometimes give me projection default . It’s depends which fields I am using in group stage.I think I am going to create compound index including all fields in match stage and all fields in group stage that’s give me projection covered stage in my Winning plan.", "username": "Narendra_S_Sikarwar" } ]
Group aggregation slow
2020-10-12T08:52:38.498Z
Group aggregation slow
18,814
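For completeness, the covering index Maxime describes can be created from the Java driver as below. This is a sketch: the exact field order should be tuned to the workload, with the $match equality fields first, followed by every field the $group reads:

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class CoveringIndex {
    public static void main(String[] args) {
        MongoCollection<Document> exceptions = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("GLT_Narendra").getCollection("EXCEPTIONS");

        // $match equality fields first, then every field read by $group, so the
        // aggregation can be answered from the index alone (PROJECTION_COVERED).
        exceptions.createIndex(Indexes.ascending(
                "is_workflow_processing", "is_error", "is_delegated", "is_escalated",
                "workflow_stage_current_assignee",
                "status", "control_monitorkey", "Masterid", "SYSTEMID"));
    }
}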
null
[ "swift" ]
[ { "code": "Object has been deleted or invalidatedvar a = realm.objects(dummy.self).first \n// if doesnt exist\nif a == nil || a.isInvalidated {\n// create new instance\n a = a()\n}\n\nrealm.write {\na.prop = \"something\"\n}\n", "text": "Sorry my ignorance but i have to ask because i’ve been looking around for sometime and I don’t think theres a clear answer on this.An invalidated object crash is… according to stackoverflowThe Object has been deleted or invalidated error will occur if an object has been deleted from a Realm, but you subsequently try and access a stored property of an instance of that object that your code was hanging onto since before the deletion.\"thats ok… although i dont yet quite understand why the object persists in the queries results.But imagine the current situation. I’ve deleted an object. The object persists in the result queries, now im forced to verify its invalidated to decide how to proceed in all queries. But… what happen when i get a object that is invalidated, and i want to replace it with a new object?lets sayis invalidated\nthen i dothis is still crashing.So my questions are:", "username": "Joao_Serra" }, { "code": "var alet standaloneModelObject = MyModel(value: persistedModelObject)", "text": "An object will not persist in a Results object if it’s been deleted - that’s the whole idea of a Results object. It’s a live connection to objects and if objects are added, changed or removed the Results object reflects that.However, if an object has been deleted, and you’re holding a reference to it (as in var a in your code), it will be marked as invalidated which is different than what’s in the Results object.In your example code, if an object doesn’t exist, you can’t create a ‘new instance’ of it - more than that though, a copy of a nil object is still a nil object and a copy of an existing object will point to the same data the initial object points to and that copy is still a managed object and cannot be modified outside of a write.If you want to make a standalone ‘copy’ of an object, which is editable - here’s the patternlet standaloneModelObject = MyModel(value: persistedModelObject)This would be useful for example, if you want to edit the data from an object outside of a write and then perhaps write that data at a later time.All of that being said - can you clarify what you’re trying to do and maybe we can provide a solution.", "username": "Jay" }, { "code": "", "text": "Thanks for the answer, I think I understand it better now.\nMy issue is… we have a project which is a bit messy, and we are getting crashes all over the app due Realm invalidated objects, I think this is caused because we are using multiple threads, we store data(realm objects) to use later, but when we are going to use the data the object was already invalidated, so the process shouldn’t even be executed or the code shouldn’t even get to this point.So what we are trying to look is a way to ensure that when we are going to work with a object reference its not invalidated, if so we should refetch it or make it nil(our code handle it from here) to avoid the crashes.Our nil validations are useless if the object is there but invalidated", "username": "Joao_Serra" }, { "code": "", "text": "While there is an option to check if a object isInvalidated and you could sprinkle that check throughout the code I think a better approach is to avoid the issue in the first place.I think this is caused because we are using multiple threadsThat may be part of the issue; from the Realm DocsDon’t pass 
live objects, collections, or realms to other threads: Live objects, collections, and realm instances are thread-confined : that is, they are only valid on the thread on which they were created.As mentioned previously, Realm objects are live updating and will always reflect the current state of the object. If you have a Results object that was populated by a query of uncompleted tasks, and then a task is marked completed (meaning it no longer matches the query) it will no longer be reflected in that Results object. If you’re seeing a different behavior then you’re probably passing that Result object (or and object in the results) around on different threads so they are then out of sync and will have invalidated objects.Another possibility is a race condition caused by using transactions on one thread and then trying to access an object on another thread OR if you are using home-grown notifications to detect object changes instead of the built-in Realm notification process.", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Procedure when object is invalidated
2020-10-12T18:41:34.976Z
Procedure when object is invalidated
10,729
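A sketch of the defensive pattern Jay alludes to: an isInvalidated guard plus ThreadSafeReference for crossing threads. Class and property names are invented for illustration:

import Foundation
import RealmSwift

class Dummy: Object {
    @objc dynamic var prop = ""
}

func updateProp(of object: Dummy?) {
    // Refuse to touch references that were deleted out from under us.
    guard let object = object, !object.isInvalidated else { return }

    let ref = ThreadSafeReference(to: object)
    DispatchQueue.global().async {
        let realm = try! Realm()
        // resolve() returns nil if the object was deleted in the meantime.
        guard let resolved = realm.resolve(ref) else { return }
        try! realm.write {
            resolved.prop = "something"
        }
    }
}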
null
[ "atlas-triggers" ]
[ { "code": "const credentials = Realm.Credentials.function(payload);\nconst user = await app.logIn(credentials);exports = function(data){\nconst mongodb = context.services.get(“mongodb-atlas”);\nconst users = mongodb.db(“app”).collection(“users”);\nreturn users.insertOne({ user: “NOVO USUARIO TESTE” });\n};", "text": "My trigger are not firing for Custom Authentication.\ni have tested with Anonymous and email password authentication, they worked.I created a custom function to login in my app with Authentication: Application Authenticationconst credentials = Realm.Credentials.function(payload);\nconst user = await app.logIn(credentials);Then i created a trigger for Custom authentication, but when i make login through the login function, it not fires.exports = function(data){\nconst mongodb = context.services.get(“mongodb-atlas”);\nconst users = mongodb.db(“app”).collection(“users”);\nreturn users.insertOne({ user: “NOVO USUARIO TESTE” });\n};Thank you for help", "username": "Royal_Advice" }, { "code": "custom-function", "text": "Hi @Royal_Advice,I can verify if auth triggers are expected to be fired for custom function provider.According to the provider list I don’t see custom-function as one. However , it might be out datedPlease note that since your auth function have an insert or potentially login time update you can use a database trigger on that collection instead…Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "look this list", "username": "Royal_Advice" }, { "code": "Custom Authentication", "text": "Hi @Royal_Advice,If you are looking into Custom Authentication its for custom JWT and not function.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "this is sad, thank you for your time", "username": "Royal_Advice" }, { "code": "", "text": "@Royal_Advice,I think its a legitimate feature request you can file on https://feedback.mongodb.comCC: @Drew_DiPalmaThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Royal_Advice – We’re aware that there is a gap in Auth Triggers and are hoping to add this functionality soon.", "username": "Drew_DiPalma" }, { "code": "", "text": "thank you all, im will try a workaround", "username": "Royal_Advice" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trigger for Custom Authentication not firing
2020-10-19T04:54:47.125Z
Trigger for Custom Authentication not firing
2,925
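A sketch of the database-trigger workaround Pavel suggests: fire on INSERT into the collection your custom-function login writes to, instead of relying on an Auth trigger. Database, collection, and field names below are placeholders:

exports = function (changeEvent) {
  const doc = changeEvent.fullDocument;
  const users = context.services
    .get("mongodb-atlas")
    .db("app")
    .collection("users");

  // Example post-login bookkeeping on the freshly inserted document.
  return users.updateOne(
    { _id: doc._id },
    { $set: { firstSeenAt: new Date() } }
  );
};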
null
[]
[ { "code": "", "text": "Hi, first of all, thanks for putting all the course together, I am enjoying it and it is well explained.I am still in the beginning but Lab 1.4: Determine the Value Type, Part 3 question says: “What is the value type of the year field for documents in the video.movies collection?”I know this is going very much into detail but my aggregations.movies collection shows 0.1% strings. Maybe the correct answer for the lab should contain both int32 and strings instead of int32?Anyway, thanks again and all the best.Diego", "username": "Diego_Losey" }, { "code": "int32 int32schemastringint32", "text": "The answer is int32 and only int32 (Not string). Look at the schema.The aggregate returns a cursor to the documents produced (Not string or int32). Anyway aggregate is not part of this course (Hard to know how you get this output).", "username": "Ezra_16731" }, { "code": "video.moviesaggregation.movies", "text": "Hi @Diego_Losey,You were supposed to use video.movies collection and not the aggregation.movies collection.That being said, we have released a brand new version of this course which has better and updated content.You can register here : MongoDB Courses and Trainings | MongoDB University~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Lab 1.4: Determine the Value Type, Part 3
2020-08-02T14:42:53.250Z
Lab 1.4: Determine the Value Type, Part 3
1,616
null
[ "student-developer-pack" ]
[ { "code": "", "text": "Hi Everyone,\nI have started my certification journey at MongoDB university with my personal email account instead of a university email.\nAlso, Availed of the GitHub Student Developer Pack and it seems linked with Personal Mongo DB account. But seems, the certification is not free.\nCan somebody confirm, should I need to signup with University Mail ID to get the free Certification instead of a Personal Mail ID?\nI just started so, I don’t want to spend my time on the wrong path. Please help me out to start learning.Thanks,\nVetrivel Muthusamy", "username": "Vetrivel_Muthusamy" }, { "code": "", "text": "Hi @Vetrivel_MuthusamyWelcome to the forum!It’s correct that the certification is not showing up as ‘free’ in your MongoDB University account. After you’ve completed one of our learning paths, you can send me an email to request a Free Certfication voucher (see the instructions on your MongoDB Students offer profile).Hope this helps and good luck! ", "username": "Lieke_Boon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Student Discount will get availed with personal MongoDB account?
2020-10-17T19:51:14.183Z
Student Discount will get availed with personal MongoDB account?
5,367
null
[ "installation" ]
[ { "code": "", "text": "Hi,How to create a mongo DB service is RHEL?is it possible to create a mongo DB srevice in RHEL using mongod command?Your help is really appreciated.Br\nRaghavender", "username": "Kodumuri_Raghavender" }, { "code": "yum", "text": "Hi @Kodumuri_Raghavender, if you installed MongoDB via the yum package manager then this is already set up as a service. If you manually installed MongoDB then you would have to build your own service file to run it as a service.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi @Doug_Duncan, Thank you very much for your help. yum package manager install the mongodb software in root FS. we dont want to use root FS. we are installing in custome FS using TGZ file. i want to configure the service manually using mongod. can you please share the command if you have any?Thank you so much for your help.", "username": "Kodumuri_Raghavender" }, { "code": "mongod.serviceyum[Unit]\nDescription=MongoDB Database Server\nDocumentation=https://docs.mongodb.org/manual\nAfter=network.target\n\n[Service]\nUser=mongod\nGroup=mongod\nEnvironment=\"OPTIONS=-f /etc/mongod.conf\"\nEnvironmentFile=-/etc/sysconfig/mongod\nExecStart=/usr/bin/mongod $OPTIONS\nExecStartPre=/usr/bin/mkdir -p /var/run/mongodb\nExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb\nExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb\nPermissionsStartOnly=true\nPIDFile=/var/run/mongodb/mongod.pid\nType=forking\n# file size\nLimitFSIZE=infinity\n# cpu time\nLimitCPU=infinity\n# virtual memory size\nLimitAS=infinity\n# open files\nLimitNOFILE=64000\n# processes/threads\nLimitNPROC=64000\n# locked memory\nLimitMEMLOCK=infinity\n# total threads (user+kernel)\nTasksMax=infinity\nTasksAccounting=false\n# Recommended limits for for mongod as specified in\n# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings\n\n[Install]\nWantedBy=multi-user.target \nsudo systemctl start mongod", "text": "Hi @Kodumuri_Raghavender the below is the default mongod.service file installed on RHEL 7 using yum:You would need to tweak the above for your system.You can learn more about systemd files in the systemd man pages.With this file in place you can start the service just as you would any other RHEL service with sudo systemctl start mongod.", "username": "Doug_Duncan" }, { "code": "--prefixyumdnfsudo rpm --prefix /home/mongodb/ mongodb.rpm", "text": "Hi @Kodumuri_Raghavender,If you want to manually create a service definition and config file, you can find the versions used in the RPM packages in the MongoDB source on GitHub: mongo/rpm at master · mongodb/mongo · GitHub.If you just need a different install path, the MongoDB RPMs should also be relocatable using the --prefix option for yum/dnf.For example: sudo rpm --prefix /home/mongodb/ mongodb.rpm.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi everyone, if I did my installation manually from the tarball for centos8 and deployed some sharded clusters, should I have a service per each mongod and mongos that I used?", "username": "Oscar_Cervantes" }, { "code": "/etc/sysconfig/mong", "text": "/etc/sysconfig/mongHello Stennie,Myself had used a tar ball extract for mongo software (version : 4.2.9) on a RHEL machine and configured 3 config files for 3 mongod services for a replica set. Now that, i need to make those services to be “auto start” in terms of server reboot situation. 
Can you advise or provide some needful pointers…?Thanks in advance…Kesav", "username": "ramgkliye" }, { "code": "systemd", "text": "Hi @ramgkliye,If you are installing without using an RPM package, you can create your own service definition as I suggested earlier in this discussion:If you want to manually create a service definition and config file, you can find the versions used in the RPM packages in the MongoDB source on GitHub: https://github.com/mongodb/mongo/tree/master/rpm .Assuming you are using a recent version of RHEL (7+), you’ll want to look into the documentation for Managing services with systemd.I generally recommend using packages to simplify the effort of installing and upgrading your deployment. With a tarball install you also have to manually install dependencies.Regards,\nStennie", "username": "Stennie_X" } ]
Creating mongodb service in RHEL
2020-06-08T09:33:19.136Z
Creating mongodb service in RHEL
9,501
null
[ "data-modeling" ]
[ { "code": "User Objectcustom_datastringobjectIdcould not evaluate sync permissions with error: cannot compare to undefined (ProtocolErrorCode=206)\n\nPartition:\nuser=5f8ad998ed73ed16ea144f62\n", "text": "Hello,\nI’m wondering why in the example you decided to use string type as a User’s primary key and not ObjectId like it is in Task’s primary key. Does it has something to do with User Object’s custom_data? So it can be only linked by string not objectId ? Can I use ObjectId instead? Does it has something to do with the partition key?I have tried to change to ObjectId, and I get ERROR during the synchronization (SyncSession Start).", "username": "Stanislaw_Baranski" }, { "code": "class Task: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\nObjectId()", "text": "Edit: Question is specifically about a User’s primary key which is populated by the server. Below is asking about primary key’s in general.Are you doing this?so when the Task is instantiated, the _id is populated.If you’re just doing this ObjectId() it’s only a new zero-initialized ObjectIdThey use the later function in the https://docs.mongodb.com/realm/ios/objects/#primary-key section - not sure why.", "username": "Jay" }, { "code": "Id_id", "text": "The Id of MongoDB Realm users is a string. The fact that it’s generated from an ObjectId is an implementation detail and we do not make guarantees that this will be the case going forward. This is why the tutorial uses strings for the User object’s _id.", "username": "nirinchev" }, { "code": "", "text": "If Id is a string, then why cannot .NET devs continue to generate string primary keys from Guids?", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "The Id is populated with the User’s Id generated by the server - it’s not something the SDK generates.", "username": "nirinchev" }, { "code": "", "text": "Typical of me … always getting it backwards ", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
In the example why type of User primary key is string and not objectId?
2020-10-18T14:46:55.867Z
In the example why type of User primary key is string and not objectId?
4,451
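The same modeling point, sketched here with the Realm JS SDK for reference since it applies across SDKs; the schema shapes, field names, and partition value below are hypothetical, not taken from the thread:

```javascript
const Realm = require("realm");

// Hypothetical schemas illustrating the discussion above: the Task primary
// key can safely be a client-generated ObjectId, while the User primary key
// stays a string because it must hold the server-generated user id, which
// the SDK exposes as a string (app.currentUser.id).
const TaskSchema = {
  name: "Task",
  primaryKey: "_id",
  properties: {
    _id: "objectId",   // generated on the client
    name: "string",
    _partition: "string",
  },
};

const UserSchema = {
  name: "User",
  primaryKey: "_id",
  properties: {
    _id: "string",     // holds the server-generated user id
    name: "string",
  },
};

// Creating a task with a client-side ObjectId:
// realm.create("Task", { _id: new Realm.BSON.ObjectId(), name: "demo", _partition: "user=..." });
```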
null
[ "atlas-device-sync" ]
[ { "code": "public class TaskActivity extends AppCompatActivity {\n Realm realm;\n App rApp;\n Credentials credentials;\n int i=0;\n private MongoClient mongoClient;\n private MongoCollection<Document> mongoCollection;\n private User user;\n private String TAG = \"EXAMPLE\";\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_task);\n credentials = Credentials.anonymous();\n realm = Realm.getDefaultInstance();\n TextView txt = findViewById(R.id.txt);\n String appID = \"task-tracker-tutorial-xomxk\"; // replace this with your App ID\n rApp = new App(new AppConfiguration.Builder(appID)\n .build());\n rApp.loginAsync(credentials, it -> {\n if (it.isSuccess()) {\n user = rApp.currentUser();\n assert user != null;\n mongoClient = user.getMongoClient(\"mongodb-atlas\");\n if (mongoClient != null) {\n mongoCollection = mongoClient.getDatabase(\"tracker\").getCollection(\"tasks\");\n Toast.makeText(getApplicationContext(), \"Successfully authenticated anonymous.\", Toast.LENGTH_SHORT).show();\n } else {\n Toast.makeText(getApplicationContext(), \"Error connecting to the MongoDB instance.\", Toast.LENGTH_SHORT).show();\n }\n }\n else {\n Toast.makeText(getApplicationContext(),\"Error in login\",Toast.LENGTH_SHORT).show();\n }\n });\n\n Button btn = findViewById(R.id.btn);\n btn.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n // all realm writes need to occur inside of a transaction\n rApp.loginAsync(credentials, it -> {\n if (it.isSuccess()) {\n user = rApp.currentUser();\n Toast.makeText(getApplicationContext(),\"Data sent\"+(Math.random()*20),Toast.LENGTH_SHORT).show();\n String partitionValue = \"myPartition\";\n SyncConfiguration config = new SyncConfiguration.Builder(user, partitionValue)\n .build();\n realm = Realm.getInstance(config);\n Task task = new Task(\"New Task: \"+(Math.random()*20), partitionValue);\n realm.executeTransaction( transactionRealm -> {\n transactionRealm.insert(task);\n });\n }\n else {\n Toast.makeText(getApplicationContext(),\"Not working\",Toast.LENGTH_SHORT).show();\n }\n });\n realm.close();\n }\n });\n }\n", "text": "We tried the quick start code and the tutorial given for android to setup sync between mongo and realm. The following code is used. We have setup backend and also the gradle dependencies.", "username": "Srinath_Alegatwar" }, { "code": "", "text": "Did you make sure that your Atlas cluster was configured with version 4.4 of MongoDB", "username": "Richard_Krueger" }, { "code": "", "text": "@Srinath_Alegatwar What do the logs say when you attempt to sync? On the client and serverside?", "username": "Ian_Ward" }, { "code": "", "text": "@Richard_Krueger I have configured cluster with version 4.4 of MongoDB", "username": "Srinath_Alegatwar" }, { "code": "", "text": "@Srinath_Alegatwar I am not an Android programmer, but I had a similar problem on iOS. Try getInstanceAsync instead of getInstance. I noticed that MongoDB updated the iOS SDK correctly, but the Android docs still show a sync open. I am teaching myself Android these days, because I always wanted to know what it was like on the dark side.", "username": "Richard_Krueger" }, { "code": "", "text": "@Richard_Krueger Hello, we can connect on mail?", "username": "Srinath_Alegatwar" } ]
Unable to Setup Mongodb Atlas connection with Realm
2020-10-09T10:01:12.126Z
Unable to Setup Mongodb Atlas connection with Realm
2,129
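For comparison, a sketch of the same flow in the Realm JS SDK; the app id and partition value are the ones quoted in the thread, while the schema is the hypothetical one sketched earlier. The points it illustrates are logging in once and opening the synced Realm only after login resolves:

```javascript
const Realm = require("realm");

const app = new Realm.App({ id: "task-tracker-tutorial-xomxk" });

async function addTask() {
  // Log in once and reuse the authenticated user for every write.
  const user = app.currentUser ?? (await app.logIn(Realm.Credentials.anonymous()));

  // Open the synced Realm only after authentication has completed.
  const realm = await Realm.open({
    schema: [TaskSchema], // hypothetical schema, as sketched earlier
    sync: { user, partitionValue: "myPartition" },
  });

  // All Realm writes happen inside a transaction.
  realm.write(() => {
    realm.create("Task", {
      _id: new Realm.BSON.ObjectId(),
      name: "New Task: " + Math.random() * 20,
      _partition: "myPartition",
    });
  });

  realm.close();
}
```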
null
[]
[ { "code": "", "text": "Hello all,\nI have been trying to run the following in the IDE Terminal\nmongo “mongodb+srv://sandbox.2edkc.mongodb.net/test” --username m001-student\nI used sample_airbnb and test as the dbname too. Furthermore, I also tried using “Sandbox” and “sandbox” in the above given connection string. Despite that, it was still prompting that I was unable to pass the test. (The connection string in the editor section was for my reference so I could easliy paste it in the IDE Terminal) Please find attached the screenshot of the same for your perusal. It would be really nice if anyone could help me figure out the same. \nThank you so much\nRegards", "username": "Harshita_Kaur" }, { "code": "", "text": "Please revise the lesson where the IDE is presented.You have entered the command in the file editing are rather than the terminal area.[EDITED]After reading carefully the post I saw that the URI in the file editing area was simply for reference and that you have entered the command in the terminal area. We need the screenshot of the terminal area where you entered the mongo command.", "username": "steevej" }, { "code": "", "text": "Thank you for answering the query.\nThe following is the snippet of the terminal area where I entered the command and ran the tests again. mongodb1079×673 50.8 KB", "username": "Harshita_Kaur" }, { "code": "", "text": "We do not see the command.", "username": "steevej" }, { "code": "", "text": "Check the data of your cluster, the connection uri, your cluster’s name must be Sandbox, the username must be 'm001-studen’t with ‘m001-mongodb-basics’ passwordI’d say some of this parameters aren’t set correctly, you could try deleting and creating the cluster with the right set up", "username": "Jorge_Bush" }, { "code": "", "text": "Apologies.\nI had tried typing the command string. It wasn’t visible in the IDE Terminal. Is there some error in my command?", "username": "Harshita_Kaur" }, { "code": "", "text": "Hey\nThank you for replying. The data of the cluster, connection uri, username and the password are correct in my opinion. I can try deleting the cluster and creating once again.\nThank you", "username": "Harshita_Kaur" }, { "code": "", "text": "Is there some error in my command?How can I know if you do not post a screenshot of the command you types?", "username": "steevej" }, { "code": "", "text": "The data of the cluster, connection uri, username and the password are correct in my opinion.I can confirm that the cluster is fine and that the data is loaded.you could try deleting and creating the cluster with the right set upDon’t do that. Your cluster is fine.", "username": "steevej" }, { "code": "", "text": "I cannot actually type or paste my command in the terminal no matter how many times I try. What do I do?", "username": "Harshita_Kaur" }, { "code": "", "text": "So when you click on the Terminal 0 tab you are not able to enter the command?Click on Terminal 0 tab and post screenshot.", "username": "steevej" }, { "code": "", "text": "Thank you for the help!\nThere was some issue at my end. I could access the IDE Lab and type in Terminal 0 after getting my issue resolved. I apologise for causing trouble.\nRegards", "username": "Harshita_Kaur" }, { "code": "", "text": "I apologise for causing trouble.Do not worry. No trouble caused.", "username": "steevej" }, { "code": "How to use the IDE", "text": "Hi @Harshita_Kaur,Thanks for sharing your issue. 
We will update the lecture video on How to use the IDE.And please feel free to ask any question or doubt that you have in your mind related to this course ~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi, can someone help me with my concern, I also encountered this issue. I use the correct connection string which is mongo “mongodb+srv://sandbox.rkgle.mongodb.net/” --username m001-student, I also tried replacing the to test but it still occurs.image939×537 11.8 KB\nimage959×559 16.9 KB\nimage684×591 30.2 KB\nimage1350×458 29.2 KB", "username": "Gershon_Rivera" }, { "code": "", "text": "Hey @Gershon_Rivera\nThe mongo shell version which we need to use is 4.2 I believe.\nYour connection string seems fine. How about clicking “Enter” key instead of “Run Test” after you change the version to 4.2? Mine worked like that.\nRegards", "username": "Harshita_Kaur" }, { "code": "mongo \"<your connection string>\" -u m001-student\nTest results", "text": "When you typeAnd press Enter what’s the terminal output?By now only Test results tab is shownPS: consider opening a new post. This one has been solved already.", "username": "santimir" }, { "code": "", "text": "Hi, I think it’s seems fine to me now. Thank you for your reply! Appreciate it.", "username": "Gershon_Rivera" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Can't connect to Atlas cluster via IDE
2020-10-10T12:47:25.145Z
Can&rsquo;t connect to Atlas cluster via IDE
3,837
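Once the shell does connect, a quick sanity check can confirm the right user is authenticated and the sample data is visible; a sketch for the mongo shell, assuming the course's sample dataset is loaded:

```javascript
// Inside the connected mongo shell:
db.runCommand({ connectionStatus: 1 })   // shows the authenticated user (m001-student)
db.getSiblingDB("sample_airbnb")
  .listingsAndReviews.countDocuments({}) // non-zero once the sample data is loaded
```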
null
[ "connecting" ]
[ { "code": "'mongodb://<dbUsername>:<password>@<remotePublicIp>:27017,<remotePublicIp>:27018,<remotePublicIp>:27019/<databaseName>?replicaSet=rs0'\n", "text": "I’ve setup MongoDB replica set on a single machine with 3 different port 27017, 27018, and 27019.MongoDB connection works fine When I’m connecting from the same machine with the private IP(IPV4).But, I’m getting an error when I’m trying to access the replica set remotely.I’m getting below error when I’m trying to connect from the robo3t.Cannot connect to replica set “Mongo Server”[IP_ADDRESS:27017].\nSet’s primary is unreachable.\nReason:\nNo member of the set is reachable. Reason: Connect failedI’ve tried with the robo3t and programming with below connection string.Note: I can able to connect the standalone MongoDB with the public IP.Thanks in advance.", "username": "Akash_Patel" }, { "code": "", "text": "When you connect to a replicaset the first node reached will seed the replicaset connection by retreiving data from the replicaset. Usually this is hostnames. Those hostnames need to be resolvable by each node any client that is accessing the replica set.For a one off you can create entries in your clients hosts files as a workaround. Or ssh tunnel.If it is a deployment that will be connectible externally then you may need to update what names you are using in the replicaset configuration.", "username": "chris" }, { "code": "", "text": "Thanks for your reply but can you please explain this with example and in detail for better understanding?", "username": "Akash_Patel" }, { "code": "", "text": "Hi @Akash_PatelWhat @chris meant is that all the replica set nodes addresses must be resolvable by every node in the set and all external clients (see connectivity for examples & recommendations).Typically it’s recommended to use hostnames instead of IP addresses, so if the IP changes, you don’t have to reconfigure the replica set.See Deploy a replica set for details and examples, and check out Troubleshoot Replica Sets for tips.Finally, it is strongly recommended to not expose any database server to the public internet without a thorough security checks and precautions, or if you don’t really need to (e.g. IP whitelisting and enabling auth are practically the bare minimum requirement). This is true for any database servers, not only MongoDB. Please see the Security Checklist.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks @kevinadii also want to keep mongodb server private for better security.But if we want to access database securely from GUI application (from specific system) then how can we do that without exposing database IP to public?Is it possible to connect?", "username": "Akash_Patel" }, { "code": "", "text": "Hi @Akash_PatelBut if we want to access database securely from GUI application (from specific system) then how can we do that without exposing database IP to public?You may be able to use IP whitelisting to restrict access to the server to certain IP addresses. Having said that, note that although whitelisting is one solution, it’s also best to secure the server using all the security options available in MongoDB as well (see the security checklist).Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't connect remote replica set by Public IP
2020-10-06T08:47:55.551Z
Can&rsquo;t connect remote replica set by Public IP
7,749
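To illustrate the reconfiguration step described above, a mongo shell sketch; the hostnames below are placeholders, and whatever names are used must resolve both on the members themselves and on every remote client:

```javascript
// Check how the members currently advertise themselves:
cfg = rs.conf()
cfg.members.forEach((m) => print(m.host))

// Re-point each member at a resolvable hostname (placeholder names):
cfg.members[0].host = "mongo.example.com:27017"
cfg.members[1].host = "mongo.example.com:27018"
cfg.members[2].host = "mongo.example.com:27019"
rs.reconfig(cfg)
```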
null
[ "legacy-realm-cloud" ]
[ { "code": "", "text": "Hello,I have an app on the App store which uses Realm Cloud and I would like to migrate to MongoDb Realm, is there a way to copy all the data from Realm Cloud to MongoDb ? Or is it not possible yet ?Thanks for your help ", "username": "Arnaud_Combes" }, { "code": "", "text": "There’s an existing post and guide to help with that migration. See", "username": "Jay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migrate from Realm Cloud to MongoDB Realm
2020-10-18T09:55:41.508Z
Migrate from Realm Cloud to MongoDB Realm
4,400
null
[]
[ { "code": "", "text": "We serve our customers in Europe from a Realm Cloud server in Europe. MongoDB Realm Cloud Deployment Regions do not include Europe.When are we likely to see a MongoDB Realm Cloud in Europe?", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "We do have Ireland as a deployment region which is both in Europe and the EU. Is there a reason it doesn’t work for your use case?", "username": "nirinchev" }, { "code": "", "text": "Thank you @nirinchev I misread the document. Ireland will suit nicely.", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Realm in Europe
2020-10-18T08:21:21.908Z
MongoDB Realm in Europe
2,110
null
[ "configuration" ]
[ { "code": "2020-10-17T11:34:43.114-0500 F - [conn1310] Failed to mlock: errno:12 Cannot allocate memory\n2020-10-17T11:34:43.114-0500 I - [conn1310] Fatal Assertion 28832\n2020-10-17T11:34:43.114-0500 I - [conn1310]\n\n***aborting after fassert() failure\n\n2020-10-17T11:34:43.119-0500 F - [conn1310] Got signal: 6 (Aborted).\n\n 0xc8b852 0xc8a789 0xc8af92 0x33f740f7e0 0x33f7032495 0x33f7033c75 0xc0d722 0x6b5cec 0x770dc5 0x771025 0x771146 0x7be5a3 0x7bfaff 0x79647b 0x7b23d7 0x7b4338 0xbbbc08 0xbbc8dd 0xbcba4d 0xbbb286 0x69c3a5 0xc36405 0x33f7407aa1 0x33f70e8bcd\n----- BEGIN BACKTRACE -----\n{\"backtrace\":[{\"b\":\"400000\",\"o\":\"88B852\",\"s\":\"_ZN5mongo15printStackTraceERSo\"},{\"b\":\"400000\",\"o\":\"88A789\"},{\"b\":\"400000\",\"o\":\"88AF92\"},{\"b\":\"33F7400000\",\"o\":\"F7E0\"},{\"b\":\"33F7000000\",\"o\":\"32495\",\"s\":\"gsignal\"},{\"b\":\"33F7000000\",\"o\":\"33C75\",\"s\":\"abort\"},{\"b\":\"400000\",\"o\":\"80D722\",\"s\":\"_ZN5mongo13fassertFailedEi\"},{\"b\":\"400000\",\"o\":\"2B5CEC\",\"s\":\"_ZN5mongo24secure_allocator_details8allocateEmm\"},{\"b\":\"400000\",\"o\":\"370DC5\",\"s\":\"_ZN5mongo5scram15generateSecretsERKNS_9SHA1BlockE\"},{\"b\":\"400000\",\"o\":\"371025\",\"s\":\"_ZN5mongo5scram15generateSecretsERKNS0_15SCRAMPresecretsE\"},{\"b\":\"400000\",\"o\":\"371146\",\"s\":\"_ZN5mongo5scram19generateCredentialsERKSsi\"},{\"b\":\"400000\",\"o\":\"3BE5A3\",\"s\":\"_ZN5mongo31SaslSCRAMSHA1ServerConversation10_firstStepERSt6vectorISsSaISsEEPSs\"},{\"b\":\"400000\",\"o\":\"3BFAFF\",\"s\":\"_ZN5mongo31SaslSCRAMSHA1ServerConversation4stepENS_10StringDataEPSs\"},{\"b\":\"400000\",\"o\":\"39647B\",\"s\":\"_ZN5mongo31NativeSaslAuthenticationSession4stepENS_10StringDataEPSs\"},{\"b\":\"400000\",\"o\":\"3B23D7\"},{\"b\":\"400000\",\"o\":\"3B4338\"},{\"b\":\"400000\",\"o\":\"7BBC08\",\"s\":\"_ZN5mongo7Command22execCommandClientBasicEPNS_16OperationContextEPS0_RNS_11ClientBasicEiPKcRNS_7BSONObjERNS_14BSONObjBuilderE\"},{\"b\":\"400000\",\"o\":\"7BC8DD\",\"s\":\"_ZN5mongo7Command20runAgainstRegisteredEPNS_16OperationContextEPKcRNS_7BSONObjERNS_14BSONObjBuilderEi\"},{\"b\":\"400000\",\"o\":\"7CBA4D\",\"s\":\"_ZN5mongo8Strategy15clientCommandOpEPNS_16OperationContextERNS_7RequestE\"},{\"b\":\"400000\",\"o\":\"7BB286\",\"s\":\"_ZN5mongo7Request7processEPNS_16OperationContextEi\"},{\"b\":\"400000\",\"o\":\"29C3A5\",\"s\":\"_ZN5mongo21ShardedMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortE\"},{\"b\":\"400000\",\"o\":\"836405\",\"s\":\"_ZN5mongo17PortMessageServer17handleIncomingMsgEPv\"},{\"b\":\"33F7400000\",\"o\":\"7AA1\"},{\"b\":\"33F7000000\",\"o\":\"E8BCD\",\"s\":\"clone\"}],\"processInfo\":{ \"mongodbVersion\" : \"3.2.22\", \"gitVersion\" : \"105acca0d443f9a47c1a5bd608fd7133840a58dd\", \"compiledModules\" : [], \"uname\" : { \"sysname\" : \"Linux\", \"release\" : \"2.6.32-754.23.1.el6.x86_64\", \"version\" : \"#1 SMP Thu Sep 26 12:05:41 UTC 2019\", \"machine\" : \"x86_64\" }, \"somap\" : [ { \"elfType\" : 2, \"b\" : \"400000\", \"buildId\" : \"9FDE7E083B126FA0B29530BD37E03D51927CDF1B\" }, { \"b\" : \"7FFDBFCB5000\", \"elfType\" : 3, \"buildId\" : \"147632A13846D848909EB52353F6102CF932E1E5\" }, { \"path\" : \"/usr/lib64/libssl.so.10\", \"elfType\" : 3, \"buildId\" : \"5A37D12297A649A37081AB63AC6B520444079986\" }, { \"path\" : \"/usr/lib64/libcrypto.so.10\", \"elfType\" : 3, \"buildId\" : \"0435315E0E6DC8BCF08DE1794B2A84D431DE835A\" }, { \"path\" : \"/lib64/librt.so.1\", \"elfType\" : 3, \"buildId\" : \"FDF3A36FFFE08375456D59DA959EAB2FC30B6186\" }, { \"path\" : 
\"/lib64/libdl.so.2\", \"elfType\" : 3, \"buildId\" : \"1F7E85410384392BC51FA7324961719A10125F31\" }, { \"path\" : \"/lib64/libm.so.6\", \"elfType\" : 3, \"buildId\" : \"8A852AC42F0B64F0F30C760EBBCFA3FE4A228F12\" }, { \"path\" : \"/lib64/libgcc_s.so.1\", \"elfType\" : 3, \"buildId\" : \"9350579A4970FA47F3144AD8F40B183B0954497D\" }, { \"path\" : \"/lib64/libpthread.so.0\", \"elfType\" : 3, \"buildId\" : \"85104ECFE42C606B31C2D0D0D2E5DACD3286A341\" }, { \"path\" : \"/lib64/libc.so.6\", \"elfType\" : 3, \"buildId\" : \"814F2290D172521A3FD8581389E3E78A4A182379\" }, { \"path\" : \"/lib64/ld-linux-x86-64.so.2\", \"elfType\" : 3, \"buildId\" : \"1CC2165E019D43F71FDE0A47AF9F4C8EB5E51963\" }, { \"path\" : \"/lib64/libgssapi_krb5.so.2\", \"elfType\" : 3, \"buildId\" : \"441FA45097A11508E50D55A3D1FF169BF2BE7C62\" }, { \"path\" : \"/lib64/libkrb5.so.3\", \"elfType\" : 3, \"buildId\" : \"F62622218875795666E08B92D176A50791183EEC\" }, { \"path\" : \"/lib64/libcom_err.so.2\", \"elfType\" : 3, \"buildId\" : \"152E2C18A7A2145021A8A879A01A82EE134E3946\" }, { \"path\" : \"/lib64/libk5crypto.so.3\", \"elfType\" : 3, \"buildId\" : \"B8DEDADC140347276164C729418C7A37B7224135\" }, { \"path\" : \"/lib64/libz.so.1\", \"elfType\" : 3, \"buildId\" : \"5FA8E5038EC04A774AF72A9BB62DC86E1049C4D6\" }, { \"path\" : \"/lib64/libkrb5support.so.0\", \"elfType\" : 3, \"buildId\" : \"4BDFC7A19C1F328EB4FCFBCE7A1E27606928610D\" }, { \"path\" : \"/lib64/libkeyutils.so.1\", \"elfType\" : 3, \"buildId\" : \"AF374BAFB7F5B139A0B431D3F06D82014AFF3251\" }, { \"path\" : \"/lib64/libresolv.so.2\", \"elfType\" : 3, \"buildId\" : \"F0BE1166EDCFFB2422B940D601A1BBD89352D80F\" }, { \"path\" : \"/lib64/libselinux.so.1\", \"elfType\" : 3, \"buildId\" : \"E6798A06BEE17CF102BBA44FD512FF8B805CEAF1\" } ] }}\n mongos(_ZN5mongo15printStackTraceERSo+0x32) [0xc8b852]\n mongos(+0x88A789) [0xc8a789]\n mongos(+0x88AF92) [0xc8af92]\n libpthread.so.0(+0xF7E0) [0x33f740f7e0]\n libc.so.6(gsignal+0x35) [0x33f7032495]\n libc.so.6(abort+0x175) [0x33f7033c75]\n mongos(_ZN5mongo13fassertFailedEi+0x82) [0xc0d722]\n mongos(_ZN5mongo24secure_allocator_details8allocateEmm+0x57C) [0x6b5cec]\n mongos(_ZN5mongo5scram15generateSecretsERKNS_9SHA1BlockE+0x75) [0x770dc5]\n mongos(_ZN5mongo5scram15generateSecretsERKNS0_15SCRAMPresecretsE+0x25) [0x771025]\n mongos(_ZN5mongo5scram19generateCredentialsERKSsi+0x106) [0x771146]\n mongos(_ZN5mongo31SaslSCRAMSHA1ServerConversation10_firstStepERSt6vectorISsSaISsEEPSs+0x1673) [0x7be5a3]\n mongos(_ZN5mongo31SaslSCRAMSHA1ServerConversation4stepENS_10StringDataEPSs+0x34F) [0x7bfaff]\n mongos(_ZN5mongo31NativeSaslAuthenticationSession4stepENS_10StringDataEPSs+0x2B) [0x79647b]\n mongos(+0x3B23D7) [0x7b23d7]\n mongos(+0x3B4338) [0x7b4338]\n mongos(_ZN5mongo7Command22execCommandClientBasicEPNS_16OperationContextEPS0_RNS_11ClientBasicEiPKcRNS_7BSONObjERNS_14BSONObjBuilderE+0x6C8) [0xbbbc08]\n mongos(_ZN5mongo7Command20runAgainstRegisteredEPNS_16OperationContextEPKcRNS_7BSONObjERNS_14BSONObjBuilderEi+0x2ED) [0xbbc8dd]\n mongos(_ZN5mongo8Strategy15clientCommandOpEPNS_16OperationContextERNS_7RequestE+0x19D) [0xbcba4d]\n mongos(_ZN5mongo7Request7processEPNS_16OperationContextEi+0x876) [0xbbb286]\n mongos(_ZN5mongo21ShardedMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortE+0x65) [0x69c3a5]\n mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x325) [0xc36405]\n libpthread.so.0(+0x7AA1) [0x33f7407aa1]\n libc.so.6(clone+0x6D) [0x33f70e8bcd]\n----- END BACKTRACE -----\n", "text": "Hi Team,Our mongo services are restarting 
continuously, and below are the error log details. OS and MongoDB version details: mongo 3.2.22 and CentOS release 6.9 (Final). Please do the needful. Thanks\nRaj", "username": "Raj_Sandiri" }, { "code": "limit memlock unlimited unlimited\n", "text": "Hi @Raj_Sandiri, this issue usually manifests when your ULIMIT OS settings are not configured according to our best practices. Specifically for this error it's the memlock limit: But I recommend checking all of them for all members. Moreover, 3.2 is not supported or developed anymore, and it's better to go to 3.6+. Best\nPavel", "username": "Pavel_Duchovny" } ]
Failed to mlock: errno:12 Cannot allocate memory
2020-10-17T21:36:37.235Z
Failed to mlock: errno:12 Cannot allocate memory
4,173
null
[ "golang" ]
[ { "code": "", "text": "Hello, I am developing a reminder of meeting by mongodb and golang. The basic idea is to create a meeting in mongo, then count down on the “due date” of the meeting, and just before 2 hours of the meeting, mongodb can trigger a function in golang and then golang function can send email or sms to user. Is it possible to use some timer-like module in mongo to do the “count down” on a time field? Thanks.", "username": "Zhihong_GUO" }, { "code": "reminderThreshold", "text": "Hi @Zhihong_GUO,It sounds like you will benefit from a trigger on our Atlas platform if you store your data in an Atlas cluster ( most recommended way to run MongoDB)Your scheduled trigger can be triggered every minute and query on the reminderThresholdfield gathering all data for reminding users and since this is a serverless function you can use it to send SMS via Twilio or Emails via 3rd party or AWS SES …If you cannot use Atlas you may need to write the scheduled process yourself.A creative way might be to define a 0 expireAfterSeconds TTL index on a field holding the time of reminder and then have a change streams on this delete triggering the alerts, but it might be a serious overkill and complicated way…Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hello Pavel,Many thanks for the answer. I don’t use Atlas for the time being so I can choose the second or the 3rd way mentioned by your answer. As to the “scheduled process”, you mean I should write the schedule process by golang, like to build a process by gocron? or there is a way in mongodb to write the schedule process then trigger the function “send sms or email” in golang?James", "username": "Zhihong_GUO" }, { "code": "", "text": "Hi @Zhihong_GUO,The platform you decide to build.your own triggers upon is your choice. There is no triggering mechanism in the MongoDB server other than ttl deletes.If you wish to file a feature request consider placing one at https://feedback.mongodb.comThanks\nPavel", "username": "Pavel_Duchovny" } ]
Count down a field and trigger an action
2020-10-16T22:10:40.597Z
Count down a field and trigger an action
3,168
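A sketch of the Atlas scheduled-trigger approach suggested above, written as an Atlas Function in JavaScript. The database, collection, and field names here are hypothetical, and the actual SMS/email delivery would go through Twilio, SES, or a similar service:

```javascript
exports = async function () {
  const meetings = context.services
    .get("mongodb-atlas")
    .db("app")
    .collection("meetings");

  const now = new Date();
  const cutoff = new Date(now.getTime() + 2 * 60 * 60 * 1000); // two hours out

  // Meetings starting within the next two hours that haven't been reminded yet.
  const due = await meetings
    .find({ dueDate: { $gte: now, $lte: cutoff }, reminded: { $ne: true } })
    .toArray();

  for (const m of due) {
    // sendReminder(m) would call Twilio / SES / another 3rd-party API here.
    await meetings.updateOne({ _id: m._id }, { $set: { reminded: true } });
  }
};
```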
null
[ "atlas-functions" ]
[ { "code": "", "text": "Hi team,I want to make an insertMany from a react app and get ALL the errors from duplicated values (by id) in response.I created a custom function using initializeUnorderedBulkOp(), and I used the console to check the response. THE PROBLEM IS that I get a different result if I use a normal user (only first duplication error) and a different one if I use the system user (all the duplications).\nFrom my app, I get the same response as a normal user. The one with only the first duplicateDoes anyone have any idea how should I move on with it?\nThank you in advance", "username": "Theodoros_Mathioudak" }, { "code": "", "text": "Figured it out.\nIt seems that the system and a normal user handle differently the response from initializeUnorderedBulkOp().The only thing that was required, was to set my Custom Function to run as a System User.", "username": "Theodoros_Mathioudak" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
initializeUnorderedBulkOp() returns different results for normal and system user
2020-10-17T19:51:43.539Z
initializeUnorderedBulkOp() returns different results for normal and system user
1,312
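For context, a sketch of what such a custom function can look like, as an Atlas/Realm function configured to run as System per the resolution above; the database and collection names are hypothetical, and the exact shape of the bulk error object may vary by driver version:

```javascript
exports = async function (docs) {
  const coll = context.services
    .get("mongodb-atlas")
    .db("app")
    .collection("items");

  // Unordered, so execution continues past each duplicate-key failure
  // instead of stopping at the first one.
  const bulk = coll.initializeUnorderedBulkOp();
  docs.forEach((d) => bulk.insert(d));

  try {
    return await bulk.execute();
  } catch (err) {
    // With an unordered op there should be one write error per duplicate _id.
    return { writeErrors: err.writeErrors ?? err };
  }
};
```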
null
[ "o-fish" ]
[ { "code": "bundle exec Jekyll serve", "text": "I have set up wildaid.github.io locally with prerequisites installed. I have a few issues:Any sort of help would be appreciated as I’m trying to fix the problems I encountered while reading the documentation.", "username": "ayushjain" }, { "code": "point 2", "text": "UPDATE: I’ve found the solution to point 2 and can work on creating a PR for both 1 and 2. I’m still trying to figure out point 3.", "username": "ayushjain" }, { "code": "", "text": "Hi @ayushjain - yikes, it’s so hard when the documentation doesn’t tell us everything we need. Which parts aren’t clear? How did you solve #2? I appreciate any efforts to make things better, but I am also happy to spend more time making the build documentation better.For #3, the usage documentation needs a lot more work - there is no documentation on how to use the web site yet, and the mobile apps are only documented as far as logging in and the home screen.At the hacktoberfest kickoff meeting I did demos of the mobile and web apps - the recording is at\nhttps://mongodb.zoom.us/rec/share/tUAcApmREiTfxdZyePGl7ZlDO01bjpjuAof_dZWsfihHsFDF6xp-UfxzUp5cPDEt.6vsfO_yBgIgFVevc\nThe mobile demo starts at 13:05 and this until 20:30\nWeb demo goes from 22:50 through 28:00.I recommend you watch both demos to get a sense of the purpose of the web app - the mobile app is where information about a boat’s boarding is gathered and inputted, and the web app is where that information is aggregated. When the web app demo talks about ‘a boarding record’, it is helpful to know what that is.I hope this helps!", "username": "Sheeri_Cabral" }, { "code": "Building and testing the documentationusing web", "text": "Hey @Sheeri_CabralThanks for going through these issues. I’ve filed the point 2 as issue #120 along with the proposed solution. I’ll be happy to make a PR to solve the same.The clarity of documentation is an issue:I’ve also RSVPd for the ofish event on 20th. We can discuss more in that meeting. For now, it would be great to discuss some of the solutions I proposed in the issues across wildaid docs and ofish web app.Thanks", "username": "ayushjain" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Wildaid.github.io bugs: Issues while setting up the repo locally
2020-10-16T17:12:20.975Z
Wildaid.github.io bugs: Issues while setting up the repo locally
3,554
null
[ "swift", "legacy-realm-cloud" ]
[ { "code": " result.addChangeListener { results, changeSet ->\n if (changeSet == null) {\n // The first time async returns with an null changeSet.\n } else {\n // Called on every update.\n }\n }\nresults.observe { [weak self] (changes: RealmCollectionChange) in ... }\n", "text": "Hi,I’m trying to use functionality similar to the Kotlin api:In Swift apparently I’ve to use:But the change doesn’t have the objects itself. I only need the objects, not indices.\nDo I have to store results in an instance variable and just access it in the closure? Or how to I get the up to date results?Thanks.", "username": "Ivan_Schuetz" }, { "code": " collection.observe { (changes: RealmCollectionChange) in\n switch changes {\n case .initial(let results):\n // your initial results type\n case .update(let results, let deletions, let insertions, let modifications):\n XCTAssertEqual(results.count == 3)\n // operate on results\n case .error:\n XCTFail(\"Shouldn't happen\")\n }\n }\n", "text": "The change contains a reference to the collection which contains the objects:", "username": "Jason_Flax" } ]
How to get data in with notificationToken?
2020-04-02T14:28:04.230Z
How to get data in with notificationToken?
3,573
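The JavaScript SDK answers the same question the same way, shown here for comparison: the listener receives the live, up-to-date collection, so nothing extra needs to be stored in an instance variable (a sketch; the Task model name is hypothetical):

```javascript
const tasks = realm.objects("Task");

tasks.addListener((collection, changes) => {
  // `collection` is the current, up-to-date Results;
  // `changes` only carries the indices of what changed.
  changes.insertions.forEach((index) => {
    console.log("inserted:", collection[index].name);
  });
});
```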