Columns: image_url, tags (sequence), discussion (list), title, created_at, fancy_title, views (int64)
null
[ "atlas-functions" ]
[ { "code": "{\n \"error\": \"command not found on MongoDB service\",\n \"error_code\": \"ServiceCommandNotFound\"\n} \n", "text": "According to the Realm Admin API docs, it’s possible to run a command associated with a service:https://docs.mongodb.com/realm/admin/api/v3/#post-/groups/{groupid}/apps/{appid}/services/{serviceid}/commands/{commandname}My project has a single mongodb-atlas service that I’d like to interact with via the Realm API, but I’m not sure what the acceptable {commandName}s are. I’ve tried using some of these operations (https://docs.mongodb.com/manual/reference/command/) as a reference, but of the ones I’ve tried, the only one that worked was ‘listCollections’ (which I had to provide in the URL as “list_collections”) – all the other commands return a 404 Not Found with this body:My immediate goal is to retrieve custom user data (https://docs.mongodb.com/realm/sdk/ios/advanced-guides/custom-user-data/) for any and all users of my choosing. This is seemingly not possible using the Realm web SDK as it will only retrieve the custom user data associated with the Realm user you are currently logged in with. I’m hoping the Realm Admin API will provide an avenue for this.Thanks in advance for any assistance.", "username": "Josh_Burns" }, { "code": "", "text": "Hey Josh - while custom user data will only get returned for the currently logged in user, since all user data is stored in a collection linked to your app, you can define a “public” role with read access to any user data fields (e.g. profile pictures, email addresses) that you would like to make available to anyone logged in, I don’t think using the Admin API would be necessary.As for your original question, it would be helpful to see the exact URL you used with the ids.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "@Sumedha_Mehta1 Thank you for your quick reply. I created a role on the collection storing the user data as you suggested and was able to achieve my objective. I’ll pursue learning more about the Realm API’s ability to execute service commands if and when I need it in the future.Much appreciated!", "username": "Josh_Burns" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Executing service command via Realm Admin API
2021-03-01T00:15:01.588Z
Executing service command via Realm Admin API
1,602
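As a companion to the thread above: a minimal Node.js sketch (not from the thread) of invoking the one service command the poster found to work, list_collections, against the Admin API endpoint quoted in the question. The group/app/service IDs, the empty request body, and the token acquisition are assumptions to verify against your own project and the current Admin API docs.

```javascript
// Hypothetical sketch: POST the `list_collections` command (note the snake_case
// name the thread discovered) to the Realm Admin API. Requires Node 18+ for
// global fetch; all IDs and the body shape are placeholders, not confirmed API.
const BASE = "https://realm.mongodb.com/api/admin/v3.0";

async function listCollections(accessToken, groupId, appId, serviceId) {
  const url = `${BASE}/groups/${groupId}/apps/${appId}/services/${serviceId}/commands/list_collections`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`, // token from the Admin API login endpoint
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}), // assumed empty payload; adjust per the docs
  });
  if (!res.ok) throw new Error(`Admin API returned ${res.status}: ${await res.text()}`);
  return res.json();
}
```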
https://www.mongodb.com/…4_2_1024x512.png
[ "atlas-functions" ]
[ { "code": "let posData = context.services.get(\"mongodb-atlas\").db(\"MyDB\").collection(\"MyCollection\");\n\nresult = posData.find(body);\n\nconsole.log(\"return document length\" ,result.length);\n\nawait result.forEach(obj=>{\n\nconsole.log(ojb);\n\n});\n", "text": "I am facing an issue while iterating the cursor inside a webhook. The highlighted section does not work inside webhook , but works inside a standalone Nodejs program.Also, tried cusrsor.hasNext() , not working inside webhook.Seems that I am missing some small piece. Please let me know the possible issue/resolution.Webhook code :Error:{“message”:“‘forEach’ is not a function”}Can you please suggest what am I missing ?", "username": "Sumit_Chakraborty" }, { "code": "toArrayforEach", "text": "I believe you need to use toArray before applying the forEach - here is an example: https://docs.mongodb.com/realm/mongodb/find-documents/", "username": "Sumedha_Mehta1" }, { "code": "", "text": "toArray does not work as well.", "username": "Sumit_Chakraborty" }, { "code": " const collection = context.services.get('mongodb-atlas').db('grocery').collection('items');\n const query = {};\n\nresult = collection.find(query);\n\n\nawait collection.find(query)\n .toArray()\n .then(items => {\n items.forEach(console.log)\n return items\n })\n .catch(err => console.error(`Failed to find documents: ${err}`))\n\nreturn \"hello\"\n};", "text": "Can you try the following snippet, I believe this should work:", "username": "Sumedha_Mehta1" } ]
Webhook iteration issue
2021-02-26T05:20:10.610Z
Webhook iteration issue
2,345
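For readers skimming the thread above: a self-contained sketch of the pattern Sumedha suggests, assuming the standard Realm webhook/function signature and hypothetical database and collection names.

```javascript
// Minimal sketch, assuming hypothetical names: in Realm functions/webhooks,
// find() returns a cursor-like object without forEach, so materialize it with
// toArray() first and iterate the resulting plain array.
exports = async function (payload, response) {
  const collection = context.services
    .get("mongodb-atlas")
    .db("MyDB")
    .collection("MyCollection");

  const docs = await collection.find({}).toArray(); // cursor -> array
  console.log("returned document count:", docs.length);
  docs.forEach((doc) => console.log(JSON.stringify(doc)));

  return docs;
};
```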
null
[]
[ { "code": "", "text": "Hi guys,i wanted to post a hiring post to “About the community / Careers”, but I was not able to do so, because I had to select at least 1 tag but there is no any, not even ones referred to elsewhere like: “looking-for-work” or “hiring”.\nAlso, keeping such post open only for a very short time is I think not the best, pls. keep it open much longer if possible.\nFor example this one was closed after 3 days:\nhttps://www.mongodb.com/community/forums/t/mongo-db-project/11372Thank you! ", "username": "Vane_T" }, { "code": "", "text": "Hi @Vane_T,I believe I’ve corrected the tagging issue – can you please try again?I’m not sure why the auto-close default is set to 72 hours for this category, but I will find out more background on the intended set up and usage.I expect follow-up job discussion may be more likely via direct messaging or contact details shared in the post.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_XI was able to create that post now.That would be great to be able to find here easily those guys who could be hired for smaller tasks or larger projects.\nI could hire one at Upwork or elsewhere but there are a lot of fake ratings and reviews around.\nHere in this community we can check ones history, skills and reputation…I think a post where devs interested could post their Upwork, Freelancer, Github, Stack Overflow etc. profile link would also be useful for both parties.Thank you! ", "username": "Vane_T" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Process of hiring experts from the community
2021-02-26T10:49:20.113Z
Process of hiring experts from the community
4,244
null
[ "data-modeling", "atlas-device-sync" ]
[ { "code": "", "text": "Hi!I have an app that is local only for now, but I do want to get on MongoDbRealm sync.In the old realm structure you would model a parent child relationship like thispublic class Parent:Object {\nlet children = List\n}public class Child:Object {\nprivate let parents:LinkingObjects = LinkingObjects(fromType: Parent.self , property: “children”)\nfunc getParent()->Parent?{\nreturn parent.first\n}\n}But now I want the child objects to be embedded instead. How can I achieve this? Is there any example on how to migrate a non-embedded to embedded? Is there any recommended way to do this?Right now I have simply switched the Child class to extend EmbeddedObject, but this would result in an exception on migration “Cannot convert object type ‘Child’ to embedded because objects have multiple incoming links.” How can I fix this?", "username": "Simon_Persson" }, { "code": "", "text": "Looks like I might have made this a lot harder for myself as these “child” objects are being linked to from multiple parents. This is probably where the error message is coming from.I am guessing now that the easiest route might be to simply keep the old data in place and migrate data to new realm object using the new embedded document structure that will be better adapted to syncing.I still think it would be nice to get some guidance here. Judging by this post https://github.com/realm/realm-java/pull/6730 I don’t think I am the only one wondering about migrating data to embedded objects.", "username": "Simon_Persson" }, { "code": "", "text": "After further examination… I am not sure it is possible to change the type of an object to embedded Might have to create another object instead and move the properties over.", "username": "Simon_Persson" }, { "code": "", "text": "For anyone lookin into this. Looks like it isn’t possible to migrate to embedded objects. The code doesn’t even reach the migration block and crashes before that. I really wish this was documented somewhere…", "username": "Simon_Persson" }, { "code": "", "text": "Hm… I think we need a bit more context here. When you’re trying to change the “embeddedness” of an object, are you doing that in the context of a local (non-synchronized) Realm? If so, you’ll need to handle that in the migration function - I’m not super familiar with the Swift migration API, but can ping someone on the Swift team to take a look or post an example snippet.Alternatively, if you’re trying to change the schema of a synchronized Realm to make an object that was previously standalone embedded, that is not possible. This would be a destructive change which is disallowed when using sync. You’ll have to terminate and reinitialize Sync, at which point, migrating the on-device data is meaningless as it’ll be wiped when the Realm syncs with the server.One final thing to note is that you can’t synchronize a local Realm - i.e. if you have on-device data in a local Realm, you can’t turn on sync for that one and you’ll have to manually copy data over to your synchronized Realm.Clarifying which of these 3 scenarios is the one you’re trying to achieve will help us better understand your needs + point us to the docs that need improvement.", "username": "nirinchev" }, { "code": "", "text": "Hi! 
I am currently only using local realms and before migrating the app to use sync I want to migrate the local realms to use embedded objects to make the transition easier.I have tried using a migration function for swift, but it crashes before even reaching the migration function.But is it supposed to be possible to migrate old list data to be embedded for local realms?", "username": "Simon_Persson" }, { "code": "", "text": "It should be possible but may involve recreating the objects and re-adding them to the list. I’ll ping some folks on the Core/Cocoa teams and get back to you.", "username": "nirinchev" }, { "code": "", "text": "For some more context. When I am changing my child object form Object to EmbeddedObject the app crashes with the message Cannot convert object type ‘MyObject’ to embedded because objects have multiple incoming links*.This crash occurs before migration happens, so I am not sure how I would be able to recreate/re-add objects here.", "username": "Simon_Persson" }, { "code": "", "text": "I have tried using a migration function for swift, but it crashes before even reaching the migration function.That’s concerning - if the app crashes before reaching the migration, there are other issues to be addressed. Have you added a breakpoint in your code to see what’s actually crashing?For some more context. When I am changing my child object form Object to EmbeddedObject the app crashes with the message Cannot convert object type ‘MyObject’ to embedded because objects have multiple incoming links *.That’s a correct error as an Object and an EmbeddedObject are two different things. Additionally Embedded objects work “differently” than Objects and for example cannot have a PrimaryKey, which an Object should (generally) have. They cannot also not be shared - an Embedded object is embedded in a single parent Object.We probably need to see some updated code as the code in the original question won’t work for EmbeddedObjects and cannot exist on its own.", "username": "Jay" }, { "code": "MyChildObject2MyChildObjectMyChildObject2__RolePermissionRole", "text": "Right - I spoke with some folks on the Cocoa team and this is indeed a bug on our end. We mark the table as embedded before the migration function runs, which obviously prevents you from executing a migration that would ensure that each object has only one parent.While this is something that we plan to fix, the immediate workaround would be to create a different class - e.g. MyChildObject2 and in your migration function copy all MyChildObject data into MyChildObject2 and create all the proper relationships. If you don’t want to have messy/versioned class names in your project, you can keep the Swift name of the class and map it to a different database type. We don’t have docs how to do that but you can see it done in the Cocoa repo. Essentially this remaps the ugly __Role name to the friendlier PermissionRole.", "username": "nirinchev" }, { "code": "", "text": "I’ll probably have to do some data changes anyway, so maybe the workaround isn’t bad. I mean, I would still have to convert all floats to doubles for example for sync. However, the code will definitely be messy if I change class names. As far as changing the mapping, it sounds a bit scary? My app is on both iOS and Android, so whatever change I do I have to do twice.I am not in a super hurry to rush things out, so if this is something that will be fixed, then I guess the best thing would be to wait it out. 
Of course it is hard to ask when it will be fixed, but if it is a month or two then it is not a huge deal. But if it is more than that I’d have to go with a workaround.Do you know if this is a high our low prio for the team?", "username": "Simon_Persson" }, { "code": "", "text": "If this isn’t an issue on Android, I could start the migration work on Android instead and wait for the fix on iOS/Swift", "username": "Simon_Persson" }, { "code": "", "text": "Unfortunately, I’m not sure when the fix will be in as I don’t work on the Cocoa SDK. I did file a Github issue you can follow though.Can’t be too helpful for Android either - perhaps @ChristanMelchior can chime in and confirm whether Standalone → Embedded migrations are possible with the Java SDK?", "username": "nirinchev" }, { "code": "", "text": "Thanks! I’ll follow the GitHub issue. And I could simply give it a try on Android. If the Android SDK has the same issue I’ll know about it quickly.", "username": "Simon_Persson" }, { "code": "RealmObjectSchema.setEmbedded()@RealmClass(embedded = true)", "text": "RealmObjectSchema.setEmbedded() is a function that can be used to convert between embedded and non-embedded data, but switching mode for the same model class is only supported for non-synced Realm as a synced Realm consider this a destructive migration which is not supported there.Realm Java will ensure that the embeddedness constraints are upheld when you switch the mode (one parent, no primary key), so you should be able to marked the class as embedded using @RealmClass(embedded = true) and define an appropriate migration step using the RealmObjectSchema function.", "username": "ChristanMelchior" }, { "code": "", "text": "@ChristanMelchior I am currently only using local realms, but I want to convert these to use EmbeddedObjects when appropriate and to use doubles instead of floats, so I can use the same models when switching to synced realms. I won’t make changes on synced realms (I don’t have them yet).Thanks! I’ll give this a shot on Android then… and wait for the swift bug to be fixed ", "username": "Simon_Persson" }, { "code": "", "text": "Just a quick update. I just gave this a try on Android/Java and it works as expected there, so the bug is iOS/Swift only.", "username": "Simon_Persson" }, { "code": "", "text": "I’ve just posted an update in Unable to execute a migration that changes the \"embeddedness\" of an object · Issue #7060 · realm/realm-swift · GitHub.With Realm Cocoa 10.7.0 the relevant core changes have been released. This should work now.", "username": "Dominic_Frei" }, { "code": "", "text": "Thanks! I’ll definitely check that out. I understood that the app would crash if there were orphaned children laying around somewhere? I would strongly prefer these to be deleted automatically rather than crashing, but is there any way to ensure this?I’ll check out the new version regardless. 
This will make it a lot easier for me to start taking advantage of MongoDb Realm!", "username": "Simon_Persson" }, { "code": "", "text": "Glad to hear it makes your work easier!Regarding the deletion: we talked about that and eventually decided we cannot automatically delete them since we would silently delete data that a user might still have needed but simply overlooked while making sure that every embedded object has exactly one backlink.\nThis option is the safe way.So at the moment you would have to check manually that every object has one and only one backlink and delete objects that you do no longer need.However, to make this even easier, I created Helper function for deleting orphaned embedded objects within migration · Issue #7145 · realm/realm-swift · GitHub which will offer a way to delete all orphaned objects within a migration.", "username": "Dominic_Frei" } ]
Migrating to using the new Embedded Object in MongoDb Realm
2020-11-19T12:54:02.207Z
Migrating to using the new Embedded Object in MongoDb Realm
6,436
null
[ "crud" ]
[ { "code": "special_tagspecial_tagsdb.ip_reports.updateMany(\n { \"special_tag\": {\"$ne\": null} },\n { \"$push\": { \"$expr\": {\"special_tags\": \"$special_tag\" } } }\n);\n\"errmsg\" : \"The dollar ($) prefixed field '$expr' in '$expr' is not valid for storage.\"", "text": "I’m trying to add the string value of one field (special_tag), if exist, as an element of a new list value (special_tags)and getting the following error", "username": "Yurii_Cherkasov" }, { "code": "$push$expr", "text": "Hello @Yurii_Cherkasov, welcome to the MongoDB Community forum!To update a field using another field’s value you need to use Updates with Aggregation Pipeline. With the pipeline you get to use the Aggregation Pipeline Operators (you will not be using the $push or the $expr operators with this feature.). Note that this feature is available with MongoDB v4.2 or later only.", "username": "Prasad_Saya" }, { "code": "$addField", "text": "@Prasad_Saya\nThank you, I read about the Aggregation Framework and even started the related course at Mongo University. But even after numerous attempts and on the 2nd Chapter of the course, I can’t find the example I need. I understand I need to use $addField to extend my document. But what if I don’t want to add their projection of any existing value, but want to create a list and push this value?", "username": "Yurii_Cherkasov" }, { "code": "$addField$addFieldsdb.ip_reports.updateMany(\n { \"special_tag\": { \"$ne\": null} },\n [\n { \n $set: { \n special_tags : { $concatArrays: [ \"$special_tags\", [ \"$special_tag\" ] ] }\n }\n } \n ]\n);\nspecial_tagsspecial_tags : { $concatArrays: [ { $ifNull: [ \"$special_tags\", [ ] ] }, \"$special_tag\" ] }", "text": "I can’t find the example I need. I understand I need to use $addField to extend my document.When using a Pipeline to update a collection, you need to use the $set stage (instead of the $addFields). Here is what I think you are trying to accomplish:If you are not sure if the special_tags array field exists or not, then use this, instead of the above assignment:special_tags : { $concatArrays: [ { $ifNull: [ \"$special_tags\", [ ] ] }, \"$special_tag\" ] }", "username": "Prasad_Saya" } ]
Push value of existing field as an element of new list
2021-02-27T07:49:49.112Z
Push value of existing field as an element of new list
5,208
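Putting the accepted answer's two snippets together: one complete shell command (MongoDB 4.2+) that also tolerates documents where the array field does not exist yet. The collection and field names come from the thread itself.

```javascript
db.ip_reports.updateMany(
  { special_tag: { $ne: null } },
  [
    {
      $set: {
        special_tags: {
          $concatArrays: [
            { $ifNull: ["$special_tags", []] }, // treat a missing array as empty
            ["$special_tag"], // wrap the string so both arguments are arrays
          ],
        },
      },
    },
  ]
);
```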
null
[ "aggregation", "mongoose-odm" ]
[ { "code": " {\n firstName:String,\n lastName:String,\n phone:String,\n ....\n }\n {\n name:String,\n address:String,\n notes:String,\n zipCode:Int32,\n ...\n owner:{\n type: mongoose.Schema.Types.ObjectId,\n ref: 'customers'\n }\n }\n", "text": "I have 2 main collections, first one is Customers, that contain objects in this type:And other collection with Properties, in this type:When getting all properties, I want to know the first&last name of each one, but not sure how can I do it in once ? or should I index it with cron job ? how do I do that ?", "username": "David_Tayar" }, { "code": "", "text": "Hello @David_Tayar, welcome back to the MongoDB Community forum!When getting all properties, I want to know the first&last name of each one, but not sure how can I do it in once ?As you are using Mongoose, you can use the populate(), which says:MongoDB has the join-like $lookup aggregation operator in versions >= 3.2. Mongoose has a more powerful alternative called populate(), which lets you reference documents in other collections.Population is the process of automatically replacing the specified paths in the document with document(s) from other collection(s).As an alternative, you can also use Mongoose Aggregate lookup.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregation of Object with reference from another collection in Object
2021-02-28T12:23:33.467Z
Aggregation of Object with reference from another collection in Object
8,153
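To make the advice above concrete: a sketch of the $lookup route in the shell, using the field names implied by the thread's schemas (a property's owner referencing customers._id); the collection names are assumptions based on Mongoose's default pluralization. The rough Mongoose equivalent would be Property.find().populate('owner', 'firstName lastName').

```javascript
// Sketch under the thread's assumed schema: join each property to its owner
// and keep only the owner's first and last name alongside the property fields.
db.properties.aggregate([
  {
    $lookup: {
      from: "customers",   // referenced collection
      localField: "owner", // ObjectId stored on each property
      foreignField: "_id",
      as: "ownerInfo",
    },
  },
  { $unwind: "$ownerInfo" }, // to-one relationship: one owner per property
  {
    $project: {
      name: 1,
      address: 1,
      "ownerInfo.firstName": 1,
      "ownerInfo.lastName": 1,
    },
  },
]);
```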
null
[]
[ { "code": "", "text": "I have been learning about Mongodb Realm for few days. The more I am learning the more becoming fan of this platform. I really love how easy it is to build GraphQL API and connect with third-party services.As I am a mobile developer and have little experience with javascript and node js, I find it hard to write javascript functions for custom resolvers and webhook.It would be very nice and easy if I get auto-complete inside the function editor. By this, I can easily know what is available to me.Is there any plan to add auto-complete to the function editor for MongoDB Realm?", "username": "zoha131" }, { "code": "", "text": "Hey there - no immediate plans for this feature, however writing functions locally in a Javascript friendly editor (e.g. VSCode) might be helpful here. We’re also in the process of revamping our CLI and releasing in the near future, which should make local development with Realm a bit smoother.If you’d like to request typeahead in the UI editor, we monitor our Uservoice forum quite closely Realm: Top (70 ideas) – MongoDB Feedback Engine", "username": "Sumedha_Mehta1" } ]
auto-complete for function editor
2021-02-27T12:19:12.642Z
auto-complete for function editor
1,512
https://www.mongodb.com/…4_2_1024x512.png
[ "data-modeling", "atlas-device-sync" ]
[ { "code": " {\n \"breed\": {\n \"ref\": \"#/relationship/mongodb-atlas/my-db/Breed\",\n \"foreign_key\": \"_id\",\n \"is_list\": false\n }\n }\n", "text": "Is it possible to have direct realm object relationships from one partition to another?i.e. if I was to expand on the example on the to-one relationship between a Person and a Dog found below:Lets assume the Person and Dog belong to some private user partition. Now I would want to add on a Breed to the Dog, lets assume the Breed is some domain lookup data that lives on the PUBLIC partition.Is it possible to have a relationship between the 2? Something of the sorts in the Dog schema:Right now my issue is that if I try to fetch the Breed documents from the user (private) partition/realm nothing is found (makes sense), but if I try to reference the Breed in my dog then add the Person+Dog to my user realm I get the below exception (which also makes sense):\nRealms.Exceptions.RealmObjectManagedByAnotherRealmException: ‘Cannot start to manage an object with a realm when it’s already managed by another realm’So is it possible to have an explicit relationships between partitions/realms by some way I haven’t figured out yet, or is the only way to have a reference id to the Breed and reconciliate the data in app with business logic?", "username": "Guillaume_Fortin" }, { "code": "", "text": "Relationships that span multiple Realms are not supported. What you can do is store the primary key of the object you want to reference and look it up when needed, but that’ll obviously not give you the same referential guarantees as direct relationships would.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can you have relationships between realms (partitions)?
2021-03-01T00:14:55.460Z
How can you have relationships between realms (partitions)?
2,114
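An illustrative sketch only of the manual-lookup pattern nirinchev describes, transposed to the Realm JS SDK for consistency with the other examples here (the thread itself uses the .NET SDK). The app id, partition values, and the breedId field are assumptions, not from the thread.

```javascript
const Realm = require("realm");

const app = new Realm.App({ id: "<your-app-id>" }); // placeholder app id

async function breedOfFirstDog() {
  const user = app.currentUser; // assumes a user is already logged in
  const userRealm = await Realm.open({
    sync: { user, partitionValue: `user=${user.id}` }, // assumed partition scheme
  });
  const publicRealm = await Realm.open({
    sync: { user, partitionValue: "PUBLIC" },
  });

  const dog = userRealm.objects("Dog")[0];
  // Cross-realm links are unsupported, so resolve the stored reference by hand:
  return publicRealm.objectForPrimaryKey("Breed", dog.breedId);
}
```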
null
[]
[ { "code": "Crashed: com.apple.root.background-qos\n\n0 Realm 0x1058abfb8 realm::Allocator::translate_less_critical(realm::Allocator::RefTranslation*, unsigned long) const + 68\n\n1 Realm 0x1057fce30 realm::Array::init_from_ref(unsigned long) + 192\n\n2 Realm 0x105ca356c realm::_impl::GroupFriend::get_history_ref(realm::Allocator&, unsigned long) + 68\n\n3 Realm 0x105c3ba60 bool realm::Transaction::internal_advance_read<(anonymous namespace)::TransactLogValidator>((anonymous namespace)::TransactLogValidator*, realm::VersionID, realm::_impl::History&, bool) + 176\n\n4 Realm 0x105c36400 realm::_impl::transaction::begin(std::__1::shared_ptrrealm::Transaction const&, realm::BindingContext*, realm::_impl::NotifierPackage&) + 416\n\n5 Realm 0x105c2b77c realm::_impl::RealmCoordinator::promote_to_write(realm::Realm&) + 268\n\n6 Realm 0x105ca1570 realm::Realm::begin_transaction() + 404\n\n7 Realm 0x10588ee24 -[RLMRealm beginWriteTransactionWithError:] + 24\n\n8 Inventa 0x10510b0a0 $s7Inventa15RealmDataHelperC33saveDataAA0I0CG_tFyycfU_Tf2i_n + 52\n\n9 Inventa 0x1050c2c98 $sIeg_IeyB_TR + 20\n\n10 libdispatch.dylib 0x19941c24c _dispatch_call_block_and_release + 32\n\n11 libdispatch.dylib 0x19941ddb0 _dispatch_client_callout + 20\n\n12 libdispatch.dylib 0x19942ea68 _dispatch_root_queue_drain + 656\n\n13 libdispatch.dylib 0x19942f120 _dispatch_worker_thread2 + 116\n\n14 libsystem_pthread.dylib 0x1e52d97d8 _pthread_wqthread + 216\n\n15 libsystem_pthread.dylib 0x1e52e076c start_wqthread + 8\nfunc saveAllEzoneDistinctNotifications(notifications: [Notifications]) {\n \n var filtered:[Notifications] = []\n for notif in notifications {\n for stId in notif.st_ids {\n notif.stId = stId\n notif.notif_key = \"\\(notif.ntId)-\\(notif.stId))\"\n filtered.append(notif)\n }\n }\n let backgroundQueue = DispatchQueue.global(qos: .background)\n backgroundQueue.async(execute: {\n do{\n let realm = try Realm()\n realm.beginWrite()\n realm.add(filtered, update: .all)\n try realm.commitWrite()\n }\n catch{\n print(error)\n }\n })\n }\nException Type: EXC_BAD_ACCESS\n\nException Subtype: KERN_INVALID_ADDRESS 0x00000003d74249bd\n\nRelease Type: User\n\nRealm framework version: 10.5.1\n\nXcode version: 12.3\n\niOS/OSX version: 14.3\n", "text": "Crash on transaction", "username": "brijesh_singh" }, { "code": "notifications: [Notifications]", "text": "Are these unmanaged realm objects?notifications: [Notifications]And this is also a cross-post to StackOverflow in case an answer pops up there.", "username": "Jay" }, { "code": "", "text": "Yes, It is realm object.", "username": "brijesh_singh" }, { "code": "", "text": "Notification Array is an unmanaged realm object array. st_ids is realm List.", "username": "brijesh_singh" } ]
iOS App Crash on transaction
2021-02-25T19:51:46.011Z
iOS App Crash on transaction
3,581
null
[ "swift", "app-services-user-auth" ]
[ { "code": "Task <96D401D2-F4F1-4A11-8D02-1385C4CBD1C1>.<1> finished with error [-1001] Error Domain=NSURLErrorDomain Code=-1001 \"The request timed out.\" UserInfo={_kCFStreamErrorCodeKey=-2102, NSUnderlyingError=0x6000009eb690 {Error Domain=kCFErrorDomainCFNetwork Code=-1001 \"(null)\" UserInfo={_kCFStreamErrorCodeKey=-2102, _kCFStreamErrorDomainKey=4}}, _NSURLErrorFailingURLSessionTaskErrorKey=LocalDataTask <96D401D2-F4F1-4A11-8D02-1385C4CBD1C1>.<1>, _NSURLErrorRelatedURLSessionTaskErrorKey=(\n \"LocalDataTask <96D401D2-F4F1-4A11-8D02-1385C4CBD1C1>.<1>\"\n), NSLocalizedDescription=The request timed out., NSErrorFailingURLStringKey=https://realm.mongodb.com/api/client/v2.0/app/_my_id_here_/location, NSErrorFailingURLKey=https://realm.mongodb.com/api/client/v2.0/app/_my_id_here_/location, _kCFStreamErrorDomainKey=4}\nlet realmApp = RealmSwift.App(id: \"_my_id_here_\")\nrealmApp.login(credentials: .anonymous) { result in\n switch result {\n case .success(let user):\n\t // physical iOS device & macOS go here \n case .failure(let error):\n\t\t// iOS Simulator goes here\n }\n}\nmaster", "text": "Hi!I was just trying to login using my Realm app.On my iPhone, and on macOS, it works fine - the login is successful.The issue is that only on the iOS Simulator, I get:Code:I have restarted the simulator, erased all content & settings etc.This is with Xcode 12.5 beta 2, using Realm at master (c5c5e67)Any ideas why this might happen?Thanks!", "username": "Ian_Dundas" }, { "code": "", "text": "I have a three line test project that reproduces this issue:Contribute to iandundas/RealmSync_SimulatorIssue development by creating an account on GitHub.To run, simply clone it & wait for Xcode to download Realm as a Swift Package Manager dependencyRunning it for Mac, or a physical iOS device, logs:Result: success(<RLMUser: 0x600002aa7680>)But running it in iOS Simulator:Result: failure(Error Domain=realm::app::CustomError Code=-1001 “The request timed out.” UserInfo={NSLocalizedDescription=The request timed out., realm::app::CustomError=code -1001})Thanks", "username": "Ian_Dundas" }, { "code": "", "text": "I reproduced it with the example project above on another laptop, again using Xcode 12.5 beta 2However, on Xcode 12.4, it works fine.So, definitely something reproducible going on here.", "username": "Ian_Dundas" }, { "code": "", "text": "Posted it in GitHub as that seems to be where the bugs go… Xcode 12.5 beta 1+2: Sync times out in iOS Simulator · Issue #7152 · realm/realm-swift · GitHub", "username": "Ian_Dundas" } ]
Timeout, but only from iOS Simulator (device fine)
2021-02-25T19:42:58.982Z
Timeout, but only from iOS Simulator (device fine)
5,230
null
[]
[ { "code": "{\"t\":{\"$date\":\"2021-02-22T06:56:42.725+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn112\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52506\",\"connectionId\":112,\"connectionCount\":108}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:51.443+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52517\",\"connectionId\":113,\"connectionCount\":109}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:51.444+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn113\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52517\",\"client\":\"conn113\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.6.3\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"x64\",\"version\":\"19.6.0\"},\"platform\":\"'Node.js v10.18.0, LE (unified)\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.219+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23018, \"ctx\":\"listener\",\"msg\":\"Error accepting new connection on local endpoint\",\"attr\":{\"localEndpoint\":\"127.0.0.1:27017\",\"error\":\"Too many open files\"}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.696+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23018, \"ctx\":\"listener\",\"msg\":\"Error accepting new connection on local endpoint\",\"attr\":{\"localEndpoint\":\"127.0.0.1:27017\",\"error\":\"Too many open files\"}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.730+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23018, \"ctx\":\"listener\",\"msg\":\"Error accepting new connection on local endpoint\",\"attr\":{\"localEndpoint\":\"127.0.0.1:27017\",\"error\":\"Too many open files\"}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.962+03:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"thread114\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":24,\"message\":\"[1613966212:962628][7406:0x70000bb45000], log-server: __directory_list_worker, 46: /usr/local/var/mongodb/journal: directory-list: opendir: Too many open files\"}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.962+03:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"thread114\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":24,\"message\":\"[1613966212:962756][7406:0x70000bb45000], log-server: __log_prealloc_once, 505: log pre-alloc server error: Too many open files\"}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.962+03:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"thread114\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":24,\"message\":\"[1613966212:962784][7406:0x70000bb45000], log-server: __log_server, 961: log server error: Too many open files\"}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.962+03:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"thread114\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":-31804,\"message\":\"[1613966212:962806][7406:0x70000bb45000], log-server: __log_server, 961: the process must exit and restart: WT_PANIC: WiredTiger library panic\"}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.963+03:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23089, \"ctx\":\"thread114\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":50853,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp\",\"line\":520}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.963+03:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23090, \"ctx\":\"thread114\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.964+03:00\"},\"s\":\"F\", \"c\":\"CONTROL\", 
\"id\":4757800, \"ctx\":\"thread114\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Abort trap: 6).\\n\"}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31431, \"ctx\":\"thread114\",\"msg\":\"BACKTRACE: {bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"105A35B9C\",\"b\":\"1038CA000\",\"o\":\"216BB9C\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE\",\"s+\":\"10C\"},{\"a\":\"105A372A8\",\"b\":\"1038CA000\",\"o\":\"216D2A8\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"28\"},{\"a\":\"105A34DDB\",\"b\":\"1038CA000\",\"o\":\"216ADDB\",\"s\":\"_ZN5mongo12_GLOBAL__N_116abruptQuitActionEiP9__siginfoPv\",\"s+\":\"BB\"},{\"a\":\"7FFF73CF75FD\",\"b\":\"7FFF73CF4000\",\"o\":\"35FD\",\"s\":\"_sigtramp\",\"s+\":\"1D\"},{\"a\":\"0\"},{\"a\":\"7FFF73BCD808\",\"b\":\"7FFF73B4E000\",\"o\":\"7F808\",\"s\":\"abort\",\"s+\":\"78\"},{\"a\":\"105A1C2D7\",\"b\":\"1038CA000\",\"o\":\"21522D7\",\"s\":\"_ZN5mongo25fassertFailedWithLocationEiPKcj\",\"s+\":\"197\"},{\"a\":\"1039976FB\",\"b\":\"1038CA000\",\"o\":\"CD6FB\",\"s\":\"_ZN5mongo12_GLOBAL__N_141mdb_handle_error_with_startup_suppressionEP18__wt_event_handlerP12__wt_sessioniPKc\",\"s+\":\"1FB\"},{\"a\":\"103B02AE7\",\"b\":\"1038CA000\",\"o\":\"238AE7\",\"s\":\"__eventv\",\"s+\":\"607\"},{\"a\":\"103B02D66\",\"b\":\"1038CA000\",\"o\":\"238D66\",\"s\":\"__wt_panic_func\",\"s+\":\"FD\"},{\"a\":\"103A1E59E\",\"b\":\"1038CA000\",\"o\":\"15459E\",\"s\":\"__log_server\",\"s+\":\"44E\"},{\"a\":\"7FFF73D03109\",\"b\":\"7FFF73CFD000\",\"o\":\"6109\",\"s\":\"_pthread_start\",\"s+\":\"94\"},{\"a\":\"7FFF73CFEB8B\",\"b\":\"7FFF73CFD000\",\"o\":\"1B8B\",\"s\":\"thread_start\",\"s+\":\"F\"}],\"processInfo\":{\"mongodbVersion\":\"4.4.3\",\"gitVersion\":\"913d6b62acfbb344dde1b116f4161360acd8fd13\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Darwin\",\"release\":\"19.6.0\",\"version\":\"Darwin Kernel Version 19.6.0: Thu Oct 29 22:56:45 PDT 2020; root:xnu-6153.141.2.2~1/RELEASE_X86_64\",\"machine\":\"x86_64\"},\"somap\":[{\"path\":\"/usr/local/opt/mongodb-community/bin/mongod\",\"machType\":2,\"b\":\"1038CA000\",\"vmaddr\":\"100000000\",\"buildId\":\"88F05A2CDBD83B9F98DAF635FC65C2E6\"},{\"path\":\"/usr/lib/system/libsystem_c.dylib\",\"machType\":6,\"b\":\"7FFF73B4E000\",\"vmaddr\":\"7FFF67253000\",\"buildId\":\"BBDED5E6A6463EEDB33A91E4331EA063\"},{\"path\":\"/usr/lib/system/libsystem_platform.dylib\",\"machType\":6,\"b\":\"7FFF73CF4000\",\"vmaddr\":\"7FFF673F9000\",\"buildId\":\"009A7C1F313A318EB9F230F4C06FEA5C\"},{\"path\":\"/usr/lib/system/libsystem_pthread.dylib\",\"machType\":6,\"b\":\"7FFF73CFD000\",\"vmaddr\":\"7FFF67402000\",\"buildId\":\"62CB1A980B8F31E7A02BA1139927F61D\"}]}}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"105A35B9C\",\"b\":\"1038CA000\",\"o\":\"216BB9C\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE\",\"s+\":\"10C\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"105A372A8\",\"b\":\"1038CA000\",\"o\":\"216D2A8\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"28\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, 
\"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"105A34DDB\",\"b\":\"1038CA000\",\"o\":\"216ADDB\",\"s\":\"_ZN5mongo12_GLOBAL__N_116abruptQuitActionEiP9__siginfoPv\",\"s+\":\"BB\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFF73CF75FD\",\"b\":\"7FFF73CF4000\",\"o\":\"35FD\",\"s\":\"_sigtramp\",\"s+\":\"1D\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"0\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFF73BCD808\",\"b\":\"7FFF73B4E000\",\"o\":\"7F808\",\"s\":\"abort\",\"s+\":\"78\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"105A1C2D7\",\"b\":\"1038CA000\",\"o\":\"21522D7\",\"s\":\"_ZN5mongo25fassertFailedWithLocationEiPKcj\",\"s+\":\"197\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"1039976FB\",\"b\":\"1038CA000\",\"o\":\"CD6FB\",\"s\":\"_ZN5mongo12_GLOBAL__N_141mdb_handle_error_with_startup_suppressionEP18__wt_event_handlerP12__wt_sessioniPKc\",\"s+\":\"1FB\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"103B02AE7\",\"b\":\"1038CA000\",\"o\":\"238AE7\",\"s\":\"__eventv\",\"s+\":\"607\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"103B02D66\",\"b\":\"1038CA000\",\"o\":\"238D66\",\"s\":\"__wt_panic_func\",\"s+\":\"FD\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"103A1E59E\",\"b\":\"1038CA000\",\"o\":\"15459E\",\"s\":\"__log_server\",\"s+\":\"44E\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFF73D03109\",\"b\":\"7FFF73CFD000\",\"o\":\"6109\",\"s\":\"_pthread_start\",\"s+\":\"94\"}}}\n{\"t\":{\"$date\":\"2021-02-22T06:56:52.972+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"thread114\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFF73CFEB8B\",\"b\":\"7FFF73CFD000\",\"o\":\"1B8B\",\"s\":\"thread_start\",\"s+\":\"F\"}}}\nMongoNetworkError: connection 1 to 127.0.0.1:27017 closed\n at (anonymous function).forEach.op (/Users/project/node_modules/mongodb/lib/cmap/connection.js:68:15)\n at Map.forEach (<anonymous>)\n at Socket.Connection.stream.on (/Users/project/node_modules/mongodb/lib/cmap/connection.js:67:20)\n at Socket.emit (events.js:198:13)\n at Socket.EventEmitter.emit (domain.js:448:20)\n at TCP._handle.close (net.js:607:12) name: 'MongoNetworkError'\n\tcpu unlimited unlimited \n\tfilesize unlimited unlimited \n\tdata unlimited unlimited \n\tstack 8388608 67104768 \n\tcore 0 unlimited \n\trss unlimited 
unlimited \n\tmemlock unlimited unlimited \n\tmaxproc 2784 4176 \n\tmaxfiles 256 unlimited \n", "text": "Hello everyone,I started facing an issue recently. My company gave me a computer (MacBook Pro 2019, Catalina 10.15.7). I set everything up the same as my personal environment. The only difference is that I haven’t reformatted my personal environment for a long time, while this one was recently bought and set up. Therefore all the project environment and settings for mongo are exactly the same. After a successful installation and run, mongo crashes after a while - which depends on the actions I take in the projects. Mongo logs are like the following;Here is nodeLimits are; ( it is exactly the same as my personal environment)", "username": "bardanadam" }, { "code": "brew serviceslaunchctl limitulimit -a", "text": "Hello @bardanadam, welcome to the community!In your case the ulimit setting might be the issue. Please follow this link; in the middle of the page you can find detailed information:For macOS systems that have installed MongoDB Community using the brew installation method, the recommended open files value is automatically set when you start MongoDB through brew services . See Run MongoDB with brew for more information.For macOS systems running MongoDB Enterprise or using the TGZ installation method, use the launchctl limit command to set the recommended values. See your operating system documentation for the precise procedure for changing system limits on running systems.You can check your settings with ulimit -a; the recommended values are:But before setting this manually, please try to use the recommended startup options (see the link above).Regards,\nMichael", "username": "michael_hoeller" }, { "code": "{\"t\":{\"$date\":\"2021-02-23T16:06:41.382+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22942, \"ctx\":\"listener\",\"msg\":\"Connection refused because there are too many open connections\",\"attr\":{\"connectionCount\":820}}\n{\"t\":{\"$date\":\"2021-02-23T16:06:41.385+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22942, \"ctx\":\"listener\",\"msg\":\"Connection refused because there are too many open connections\",\"attr\":{\"connectionCount\":820}}\n", "text": "Hey @michael_hoeller,Thank you for your help. However, I changed the default ulimit setting to the suggested value, which is 64000; since it’s macOS, it allowed me to assign only 2500. After I set it up, I still received the same type of error, with a different message that says, “Connection refused because there are too many open connections.”", "username": "bardanadam" }, { "code": "", "text": "Hello @bardanadam\n2500 feels very low; I have no Mac laptop to verify, but it does not sound OK. Are there any error messages? Access rights? Did you try to start via brew services or, respectively, use launchctl limit? If not, I recommend doing so. Sorry I can’t be of more help, as I have no Mac around.\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Hey @michael_hoeller,Yes, I can confirm the issue was the ulimit settings. Also, I did what you suggested, and it works. The weird point is that my previous MacBook was working without any additional settings. I checked the limits and they were at the defaults. Anyway, thank you for your help. I also wanted to share the solution for the Mac here too.Best,\nBurak", "username": "bardanadam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo Crashes due to "Too many open files" error
2021-02-22T05:02:14.055Z
Mongo Crashes due to &ldquo;Too many open files&rdquo; error
10,316
null
[]
[ { "code": "{date_created : ISODate('2021-02-01T16:00:01.000+00:00')}The provided parameters were invalid. Check your query and try again.", "text": "Pretty simple problem:I can run this filter on MongoDB compass locally and it works as expected:\n{date_created : ISODate('2021-02-01T16:00:01.000+00:00')}But when i try to run it on MongoDB atlas in the filter box, i get an error:\nThe provided parameters were invalid. Check your query and try again.Is this a bug or am i misunderstanding something? Would appreciate any explanation or resource.", "username": "Navdeep_Singh" }, { "code": "IsoDate()$date{date_created: {$date:\"2021-02-01T16:00:01.000+00:00\"}}", "text": "Hi @Navdeep_Singh,\nWelcome to MongoDB Developer Community Forums!As noted in the documentation, the Atlas Data Explorer does not support date queries that use the IsoDate() function. Instead, use the MongoDB Extended JSON (v2) $date data type for date queries.You may wish to try using the following filter in Data Explorer to see if it returns any results:\n{date_created: {$date:\"2021-02-01T16:00:01.000+00:00\"}}Hope this helps.\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't use ISODate on atlas?
2021-02-27T09:08:13.709Z
Can&rsquo;t use ISODate on atlas?
5,239
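A hedged follow-on to Jason's answer: the same Extended JSON $date form should also work inside comparison operators, which is what a Data Explorer date filter usually needs. The field name is the one from the thread; the date range is illustrative.

```javascript
// Range filter for the Atlas Data Explorer filter box (Extended JSON, no ISODate):
{ date_created: { $gte: { $date: "2021-02-01T00:00:00.000Z" },
                  $lt:  { $date: "2021-03-01T00:00:00.000Z" } } }
```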
https://www.mongodb.com/…58421ebc078b.png
[ "aggregation", "queries" ]
[ { "code": "[\n {\n \"_id\": 1,\n \"rooms\": [\n { date: ISODate(\"2021-02-25T00:00:00.000Z\"), status: true, otherfield: true },\n { date: ISODate(\"2021-02-26T00:00:00.000Z\"), status: true, otherfield: true },\n { date: ISODate(\"2021-02-27T00:00:00.000Z\"), status: true, otherfield: true },\n // there will be same date's document in rooms array like below row is similar to first row\n { date: ISODate(\"2021-02-25T00:00:00.000Z\"), status: true, otherfield: true }\n ]\n },\n {\n \"_id\": 2,\n \"rooms\": [\n { date: ISODate(\"2021-02-25T00:00:00.000Z\"), status: true, otherfield: true },\n { date: ISODate(\"2021-02-26T00:00:00.000Z\"), status: false, otherfield: true },\n { date: ISODate(\"2021-02-27T00:00:00.000Z\"), status: true, otherfield: true }\n ]\n },\n {\n \"_id\": 3,\n \"rooms\": [\n { date: ISODate(\"2021-02-25T00:00:00.000Z\"), status: true, otherfield: true }\n ]\n }\n]\n2021-02-25T00:00:00.000Z2021-02-26T00:00:00.000Z$in$all$elemMatchdb.collection.find({\n rooms: { \n $elemMatch: { \n date: { \n $in: [\n ISODate(\"2021-02-25T00:00:00.000Z\"), \n ISODate(\"2021-02-26T00:00:00.000Z\")\n ]\n },\n status: true \n } \n }\n})\n\ndb.collection.find({\n \"rooms.date\": {\n $all: [\n ISODate(\"2021-02-25T00:00:00.000Z\"), \n ISODate(\"2021-02-26T00:00:00.000Z\")\n ]\n },\n rooms: { $elemMatch: { status: true } }\n})\n", "text": "I am stuck in element match in array condition,Sample Documents:Condition criteria:as per condition it should select first document see below screenshot,image793×480 28.4 KB1) Try: - this selecting 2 documents because $in means OR condition in date field, and $all will not work because date inside $elemMatch is a single sting,2) Try: - this selecting 2 documents because both condition will match in different elements of rooms,Please suggest any possible way, thank you.", "username": "turivishal" }, { "code": "date_1 = ISODate(\"2021-02-25T00:00:00.000Z\") ;\ndate_2 = ISODate(\"2021-02-26T00:00:00.000Z\") ;\nstatus_and_date_1 = { status : true , \"date\" : date_1 } ;\nstatus_and_date_2 = { status : true , \"date\" : date_2 } ;\nrooms_match_1 = { \"rooms\" : { \"$elemMatch\" : status_and_date_1 } } ;\nrooms_match_2 = { \"rooms\" : { \"$elemMatch\" : status_and_date_2 } } ;\nclauses = [ rooms_match_1 , rooms_match_2 ] ;\nquery = { \"$and\" : clauses } ;\ndb.rooms.find( query ) ;\n{\n\t\"$and\" : [\n\t\t{\n\t\t\t\"rooms\" : {\n\t\t\t\t\"$elemMatch\" : {\n\t\t\t\t\t\"status\" : true,\n\t\t\t\t\t\"date\" : ISODate(\"2021-02-25T00:00:00Z\")\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"rooms\" : {\n\t\t\t\t\"$elemMatch\" : {\n\t\t\t\t\t\"status\" : true,\n\t\t\t\t\t\"date\" : ISODate(\"2021-02-26T00:00:00Z\")\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t]\n}\n{\n\t\"_id\" : 1,\n\t\"rooms\" : [\n\t\t{\n\t\t\t\"date\" : ISODate(\"2021-02-25T00:00:00Z\"),\n\t\t\t\"status\" : true,\n\t\t\t\"otherfield\" : true\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2021-02-26T00:00:00Z\"),\n\t\t\t\"status\" : true,\n\t\t\t\"otherfield\" : true\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2021-02-27T00:00:00Z\"),\n\t\t\t\"status\" : true,\n\t\t\t\"otherfield\" : true\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2021-02-25T00:00:00Z\"),\n\t\t\t\"status\" : true,\n\t\t\t\"otherfield\" : true\n\t\t}\n\t]\n}\n", "text": "Let me try.I use variables to build my queries. This way, it is easier to correct errors by editing a single line rather than the whole query. 
In addition, I seldom make brace and bracket errors.The resulting query being:with the result set being:", "username": "steevej" }, { "code": "", "text": "Thank you for the solution, it is really helpful ", "username": "turivishal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to match $all and equal to conditions in $elemMatch operator?
2021-02-27T14:23:35.720Z
How to match $all and equal to conditions in $elemMatch operator?
7,249
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.2.13-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.12. The next stable release 4.2.13 will be a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "Thanks for sharing the release updates! ", "username": "Soumyadeep_Mandal" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.13-rc0 is released
2021-02-26T15:15:45.999Z
MongoDB 4.2.13-rc0 is released
2,463
null
[]
[ { "code": "$ python buildscripts/scons.py mongod\nscons: Reading SConscript files ...\nscons: running with args C:\\Softwares\\python-3.7.9\\python.exe buildscripts/scons.py mongod\nscons version: 3.1.2\npython version: 3 7 9 'final' 0\nCC is cl\ncl was not found in $PATH\ncl resolves to C:\\Softwares\\mongo-4.2.12\\mongo\\cl\nCXX is $CC\n$CC was not found in $PATH\n$CC resolves to C:\\Softwares\\mongo-4.2.12\\mongo\\$CC\nChecking whether the C compiler works... (cached) yes\nChecking whether the C++ compiler works... (cached) yes\nChecking that the C++ compiler can link a C++ program... (cached) yes\nChecking if C++ compiler \"$CC\" is MSVC... (cached) yes\nChecking if C compiler \"cl\" is MSVC... (cached) yes\nDetected a x86_64 processor\nChecking if target OS windows is supported by the toolchain... (cached) yes\nChecking if C compiler is Microsoft Visual Studio 2017 15.9 or newer...(cached) yes\nChecking if C++ compiler is Microsoft Visual Studio 2017 15.9 or newer...(cached) yes\nChecking if we are using libstdc++... (cached) no\nChecking for C++17... (cached) yes\nChecking for memset_s... (cached) no\nChecking for C function strnlen()... (cached) no\nChecking Windows SDK is 8.1 or newer... (cached) yes\nChecking if we are on a POSIX system... (cached) no\nChecking for storage class thread_local (cached) yes\nChecking for C++14 std::enable_if_t support...(cached) yes\nChecking for C++14 std::make_unique support... (cached) yes\nUsing SSL Provider: windows\nChecking for C++ header file execinfo.h... (cached) no\nChecking for C library pcap... (cached) no\nChecking for C library wpcap... (cached) no\nChecking if std::atomic<int64_t> works... (cached) yes\nChecking if std::atomic<uint64_t> works... (cached) yes\nChecking if std::atomic<int32_t> works... (cached) yes\nChecking if std::atomic<uint32_t> works... (cached) yes\nChecking for extended alignment 64 for concurrency types... (cached) yes\nChecking for mongoc_get_major_version() in C library mongoc-1.0... (cached) no\nChecking for C function fallocate()... (cached) no\nChecking for C function sync_file_range()... (cached) no\nChecking for C header file x86intrin.h... (cached) no\nChecking for C header file arm_neon.h... (cached) no\nscons: done reading SConscript files.\nscons: Building targets ...\nscons: *** Do not know how to make File target `mongod' (C:\\Softwares\\mongo-4.2.12\\mongo\\mongod). Stop.\n", "text": "Hi Team,I am trying to build the tag version 4.2.12 on windows machine using cygwin. Followed the steps given in respective build.md file. Getting below error.", "username": "Annapoorna_R" }, { "code": "cmd.exemongodmongod.exe", "text": "Please try this again but from within a cmd.exe shell, not a cygwin shell. We don’t support or test running the windows targeted build inside cygwin, nor do we support targeting cygwin at all. I suspect though that SCons does internally have some awareness of cygwin, and is therefore misconfiguring the build. The fact that it is looking to build mongod rather than mongod.exe is something of a hint along these lines.", "username": "Andrew_Morrow" }, { "code": "", "text": "Hi Andrew,I tried the same steps with cmd.exe shell . Getting same error.Thanks-Annapoorna", "username": "Annapoorna_R" }, { "code": "", "text": "Hi Andrew,I tried with target core, instead of giving mongod/mongos/mongo , then it started building. And after successful build, I had mongod.exe/mongos.exe/mongo.exe generated. 
Is there anything wrong with the syntax I used when mongod specifically is passed as an argument to scons.py?Regards\nAnnapoorna", "username": "Annapoorna_R" }, { "code": "mongodscons mongodmongod.exescons mongod.execoremongodmongod.exeinstall-corebuild/installDESTDIRPREFIXcmd.exe", "text": "I’d overlooked in your original build invocation that the target you passed to SCons was mongod. SCons accepts two types of targets on the command line: alias targets and file targets. In the v4.2 codebase, the server binaries are installed to the root of the source tree. So when you say scons mongod you are asking SCons to build the file mongod. But on Windows, executables come with the .exe extension, so on that platform you need to invoke scons mongod.exe instead, since you are asking for the exact file to be built. The other sort of target, alias targets, are a synthetic name for a group of other targets. These, unlike file targets, do not vary by platform. The core target that you built is such an alias, and so it is able to build either mongod or mongod.exe as needed by the local environment. That is why that build worked for you and the other didn’t.Also, please note that in MongoDB v4.4 and later, the set of aliases has been reworked. If you move to v4.4, you will find that the target you want is install-core, and the files will not be placed in the root of the source tree, but will be installed by default to build/install. The path to which they are installed can be customized with the DESTDIR and PREFIX variables.Finally, I still recommend using cmd.exe rather than trying to build within a cygwin shell.", "username": "Andrew_Morrow" }, { "code": "", "text": "Thanks Andrew for the detailed explanation. That’s very informative.", "username": "Annapoorna_R" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Building 4.2.12 server on windows failing
2021-02-25T19:50:50.826Z
Building 4.2.12 server on windows failing
3,411
null
[ "dot-net", "connecting", "serverless" ]
[ { "code": "1.UsingImplicitSessionAsync[TResult](Func", "text": "Hello!Follow my problem:\nScenario:I have a Lambda function on AWS that fires when an SQS message is placed in the queue.\nThis lambda function was developed using C # AspNet Core 3.1.\nThe MongoDb Driver used is version 2.11.6.\nThe problem occurs sporadically when inserting the record in MongoDb Server and I get a timeout for more than 30 seconds.The error logged on the AWS CloudWatch is as follows:A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : “1”, ConnectionMode : “ReplicaSet”, Type : “ReplicaSet”, State : “Disconnected”, Servers : [{ ServerId: “{ ClusterId : 1, EndPoint : “Unspecified/cluster0-shard-00-00.dbzdi.mongodb.net:27017” }”, EndPoint: “Unspecified/cluster0-shard-00-00.dbzdi.mongodb.net:27017”, ReasonChanged: “Heartbeat”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, HeartbeatException: “MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n—> System.TimeoutException: Timed out connecting to 35.173.82.104:27017. Timeout was 00:00:30.\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.SslStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n— End of inner exception stack trace —\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnectionAsync(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken cancellationToken)”, LastHeartbeatTimestamp: “2021-02-23T13:24:36.9134812Z”, LastUpdateTimestamp: “2021-02-23T13:24:36.9134815Z” }, { ServerId: “{ ClusterId : 1, EndPoint : “Unspecified/cluster0-shard-00-01.dbzdi.mongodb.net:27017” }”, EndPoint: “Unspecified/cluster0-shard-00-01.dbzdi.mongodb.net:27017”, ReasonChanged: “Heartbeat”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, HeartbeatException: “MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n—> System.TimeoutException: Timed out connecting to 54.205.128.107:27017. 
Timeout was 00:00:30.\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.SslStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n— End of inner exception stack trace —\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnectionAsync(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken cancellationToken)”, LastHeartbeatTimestamp: “2021-02-23T13:24:36.9745216Z”, LastUpdateTimestamp: “2021-02-23T13:24:36.9745219Z” }, { ServerId: “{ ClusterId : 1, EndPoint : “Unspecified/cluster0-shard-00-02.dbzdi.mongodb.net:27017” }”, EndPoint: “Unspecified/cluster0-shard-00-02.dbzdi.mongodb.net:27017”, ReasonChanged: “Heartbeat”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, HeartbeatException: “MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n—> System.TimeoutException: Timed out connecting to 35.174.57.64:27017. Timeout was 00:00:30.\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.SslStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n— End of inner exception stack trace —\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnectionAsync(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken cancellationToken)”, LastHeartbeatTimestamp: “2021-02-23T13:24:36.8938358Z”, LastUpdateTimestamp: “2021-02-23T13:24:36.8938361Z” }] }.: TimeoutException\nat MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description)\nat MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask)\nat MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedAsync(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Clusters.Cluster.SelectServerAsync(IServerSelector selector, CancellationToken cancellationToken)\nat MongoDB.Driver.MongoClient.AreSessionsSupportedAfterSeverSelctionAsync(CancellationToken cancellationToken)\nat MongoDB.Driver.MongoClient.AreSessionsSupportedAsync(CancellationToken cancellationToken)\nat MongoDB.Driver.MongoClient.StartImplicitSessionAsync(CancellationToken cancellationToken)\nat MongoDB.Driver.MongoCollectionImpl1.UsingImplicitSessionAsync[TResult](Func2 funcAsync, CancellationToken cancellationToken)\nat AWSLambda.Function.ProcessMessageAsync(SQSMessage message, ILambdaContext context) 
in C:\\Repositorios\\dtx\\Gupy-Metricas-Lambda\\app\\Gupy.Metricas.AWSLambda\\Function.cs:line 140\nat AWSLambda.Function.FunctionHandler(SQSEvent evnt, ILambdaContext context) in C:\\Repositorios\\dtx\\Gupy-Metricas-Lambda\\app\\Gupy.Metricas.AWSLambda\\Function.cs:line 76.I don't know what else to do, since I have already opened access to my free tier cluster in MongoDB Atlas for any IP, so the problem does not seem to be access.The volume of data (JSON) that I send is extremely small and the insertion should not take more than 2 seconds.Could someone help me solve this problem?", "username": "Oscar_Filho" }, { "code": "", "text": "Oscar_Filho - We reached out to you over MongoDB Atlas' in-app chat service so that we can help you with this issue. Could you kindly log into https://cloud.mongodb.com/ and look for the chat that we opened?", "username": "Angela_Shulman" } ]
C# Asp Core 3.1 AWS Lambda Error Connection Timeout 30s
2021-02-23T18:40:42.087Z
C# Asp Core 3.1 AWS Lambda Error Connection Timeout 30s
3,966
https://www.mongodb.com/…6ee46cb3f895.png
[ "replication", "monitoring" ]
[ { "code": "", "text": "Our three-node replicaset is running on 4.2.7. We’re noticing that the oplog has a large amout of noop. You can see the 1st graph where there are almost the same amout of noops as the update). noop is written every 10 sec when the primary is in idle state, according to specifications/max-staleness.rst at master · mongodb/specifications · GitHub. Our primary has been busy updating in the 2nd graph.Can someone help understand the noops behavior?\nimage947×369 15.9 KB\n\n\nimage1519×442 30.2 KB\nThank you!", "username": "Bowen_Liu" }, { "code": "", "text": "Hi @Bowen_Liu,The rationale for periodic no-ops is per the Max Staleness specification you linked.What additional information are you looking for?Regards,\nStennie", "username": "Stennie_X" } ]
Understand oplog noop
2021-02-26T17:19:23.421Z
Understand oplog noop
3,086
https://www.mongodb.com/…a0adc923ed10.png
[ "app-services-user-auth" ]
[ { "code": "", "text": "So, I Have uploaded my web app, built on express on Heroku. Temporarily there are four app users, so I went on my deployed website and logged in my id, then my friend with his device went to the link and was directly logged in with my id, like it was pre logged in. I think the deployed server is storing the app.currentUser() when anyone logs in the server.\nI think the problem is with this file, even though i have added this file in .gitignore it still shows up.Thank you in advanced!", "username": "Debuggers_NITKKR" }, { "code": "", "text": "Can anyone please help to solve this issue. Am new to mongoDB Atlas and Realm", "username": "Debuggers_NITKKR" }, { "code": "", "text": "Adding a file to gitignore doesn’t remove it from the repoI’ve only used Realm for client applications while in this case it seems like you’re developing a server backend. You’ll need to tie the user to a session … can’t help much beyond this", "username": "Michael_Kofman" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb Realm only one account is loggd in at a time when deployed on heroku via github
2021-02-25T07:43:54.322Z
Mongodb Realm only one account is loggd in at a time when deployed on heroku via github
2,015
null
[ "atlas-triggers" ]
[ { "code": "", "text": "I have a realm function built firing off a database trigger. I need to add a match expression to filter the trigger, otherwise it will run on a continuous loop because it is updating the same collection that triggers it.So my match expression is this:\n{“updateDescription.updatedFields.profile”:{\"$exists\":true}}When I go into my collection and update a document’s “profile” field, nothing happens. When I take away the match expression, it runs all the time, but it runs in a continuous loop like mentioned before.Any ideas? Thanks!", "username": "Lukas_deConantseszn1" }, { "code": "{“updateDescription.updatedFields.profile”:{\"$exists\":true}}\n{\n \"updateDescription.updatedFields.profile\": {\n \"$exists\": true\n }\n}\n", "text": "Hi @Lukas_deConantseszn1,The first qoutes in the statement looks malformed:If there is an issue in parsing this the filtering might be off. Please fix as follows:If this do not help please send us a link of the trigger.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel, it seems that something is off with the way the match gets pasted In reality, the quotes are just fine in the match statement. I tried to fix them, but they are already set to the correct character type.Here is a link:\nhttps://realm.mongodb.com/groups/5bc1648acf09a2891bf25a98/apps/5e74024322411824cc3e0e1d/triggers/5f159a2084af949839f28c89Thank you so much for your help!", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "@Pavel_Duchovny what do you think?", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Hi @Lukas_deConantseszn1,The logic is complex involving external dependency and I do not know the document type.However, your code only set a field name recommendations, how do you expect the profile field to vanish from the document??To me it looks like a classic infinity loop which has no exit condition.Please explain.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "So the inifinite loop issue is exactly why I need the match statementBasically, if the profile field is updated, I want the trigger to run.The trigger will run a function that will update the recommendations field, so that’s why I need the match statement to filter that update out. If recommendations is updated, or any change event that does not include an update to profile, I don’t want the function to run. But if the event includes profile, I do want it to run.That way, when the function updates recommendations and does not update profile, the trigger will not get run, and there will be no infinite loop.Best,\nLukas", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Hi @Lukas_deConantseszn1,Have you tried logging the updateDescription.updatedFields values so that you see there is no profile update but the trigger is still there?Have you tried disable and re-enable the trigger?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,After doing a lot of logging, I noticed something. For context, I have been testing by updating the document from within the Atlas UI. Apparently, doing so triggers a replace operation? Didn’t know that but, very interesting.Anyway, once I figured that out I was able to implement the logic needed to only fire my function when I want. 
Thanks so much for your help and the great idea!", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Hi @Lukas_deConantseszn1,Yes, updates in Data Explorer are basically replace commands.I will highlight this in the documentation so that future users are aware.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Yes updates in Data Explorer are basically replace commands.I spent an hour because of this. It would be great to have this in the documentation or in the Atlas Data Explorer as well.Viren", "username": "Viren_Gupta" }, { "code": "", "text": "Hi @Lukas_deConantseszn1, I'm having a similar problem to you.You mentioned you implemented logic to get it working, would you mind posting details? Could help me with my issue!", "username": "Calum_Craig" }, { "code": "", "text": "@Calum_Craig,Welcome to the MongoDB community.I think @Lukas_deConantseszn1 had enabled a replace event, because Data Explorer updates are actually replaces, so it must catch those…Thanks\nPavel", "username": "Pavel_Duchovny" } ]
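For readers arriving with Calum's question, a hedged sketch of the guard Lukas describes might look like the following. The database/collection names and computeRecommendations are invented; the key point is that Data Explorer edits arrive as replace events, which carry no updateDescription:

```javascript
exports = async function (changeEvent) {
  const { operationType, updateDescription } = changeEvent;

  // "replace" events (e.g. Data Explorer edits) have no updateDescription,
  // so an "update"-only match expression never fires for them.
  const profileChanged =
    operationType === "replace" ||
    (operationType === "update" &&
      Object.keys(updateDescription.updatedFields).includes("profile"));

  if (!profileChanged) return; // skips our own "recommendations" writes

  const coll = context.services
    .get("mongodb-atlas")
    .db("myDb")              // invented names
    .collection("profiles");
  await coll.updateOne(
    { _id: changeEvent.documentKey._id },
    // computeRecommendations is a placeholder for the app's own logic
    { $set: { recommendations: computeRecommendations(changeEvent.fullDocument) } }
  );
};
```

Because the write-back uses updateOne with $set, it re-enters the trigger as an "update" event without profile in updatedFields, which the guard then skips, breaking the loop.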
Realm database trigger match expression seemingly not working
2020-08-30T20:19:50.344Z
Realm database trigger match expression seemingly not working
4,311
null
[ "node-js", "mongoose-odm", "field-encryption" ]
[ { "code": "", "text": "we are using mongoDb want to use CSFLE to encrypt sensitive data. After connecting to the mongodb and the mongocryptd process a few queries are successful but after a while the queries stop responding.\nWe are using following setup:\nnode,\nmongodb,\nmongodb-client-encryption,\nmongooseThis issue can be reproduced in my local machine if we run continuous queries using tools like Appache Jmeter or even call the db queries inside a loop. Queries won’t respond after 15-20 iterations.A help here would be really appreciated.", "username": "Navin_Devassy" }, { "code": "", "text": "After spending 4 days debugging mongoose, mongodb and mongodb-client-encryption modules, I found out that this issue happens only when accessing un-encrypted collections. I have done a work around in my code. I specified all my collection-names in my jsonSchema even if I am not encrypting any fields in that collection and it resolved my issue. Still want to confirm was that a miss from my side or is it a bug in autoEncryption.\nIf anyone is stuck in this issue, try doing this.\n{\n‘dbName.collection1’: {\nbsonType: ‘object’,\n},\n‘dbName.collection2’: {\nbsonType: ‘object’,\n},\n}", "username": "Navin_Devassy" } ]
Queries not responding after a while if field-level-encryption is enabled
2021-02-25T04:41:32.651Z
Queries not responding after a while if field-level-encryption is enabled
1,912
null
[ "aggregation" ]
[ { "code": "{\n _id: 1,\n client: \"Some Client\",\n type: \"Some Type\",\n firstUsed: 2021-01-05T13:23:37.000+0000\n lastUsed: 2021-05-05T18:11:23.000+0000\n}\nfirstUsedlastUsed$match$groupfirstUsedlastUsedlastUsedfirstUsed", "text": "Hello,So I’m kind of new to Mongo aggregations and after a lot of researching and googling I can’t figure out how to do a particular query.Say I have many documents with the following format:What I’m trying to do is group all the documents by type (for a particular client), and then get the first firstUsed datetime and the last lastUsed datetime.I’ve been playing with aggregations but after doing a $match on the client, $group on the type, I can only figure out then how to get only one of the dates I need.Sorting by firstUsed and getting the first document is fine, but then getting the last document doesnt mean I’m getting the correct lastUsed.How do I re-sort and get the last lastUsed datetime while keeping the already found firstUsed?Hope that makes sense,Thanks", "username": "Doto_Pototo" }, { "code": "", "text": "Look atand have one facet that does the lastUsed and another one for firstUsed. I am not too sure if you need to $group before $facet or $group inside each $facet. If you could supply a few more documents it might help to test.", "username": "steevej" }, { "code": "firstUsedlastUsedlastUsed$minfirstUsed$group$facet", "text": "Hello @Doto_Pototo, welcome to the MongoDB Community forum!I can’t clearly understand what you mean by “… get the first firstUsed datetime and the last lastUsed datetime”. You can try using the Aggregation operator $max to get the highest of the lastUsed and $min for firstUsed in the $group stage of the pipeline.As @steevej mentioned you can also try using $facet.", "username": "Prasad_Saya" } ]
Aggregation with multiple sorts then groups
2021-02-26T13:09:39.238Z
Aggregation with multiple sorts then groups
2,234
null
[]
[ { "code": "", "text": "Hi there,Last October we asked students to share their MongoDB projects with us (big thank you to all of you )I’m excited to share that we’ve launched the Student Spotlights on DevHub. With Student Spotlights, we’re showcasing projects that students are building with MongoDB. Read their stories, get inspired & see what they made with MongoDB. Are you a student, and do you have an exciting MongoDB project to share? Submit your work and have it featured on the site.https://www.mongodb.com/academia/students/", "username": "Lieke_Boon" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Student Spotlights is live!
2021-02-25T19:41:51.364Z
Student Spotlights is live!
3,177
https://www.mongodb.com/…9443e805371.jpeg
[ "swift" ]
[ { "code": "ListForEach.onMove.onDelete@ObservedRealmObject var huntlet: HuntletText(\"Huntlet is called\\(huntlet.title)\")\nText(\"Huntlet has \\(huntlet.tasks.count) task(s)\")\n\nList {\n ForEach(huntlet.tasks) { task in\n TaskListEditRowView(task: task)\n }\n}\n\n// Also, just for reference\nstruct TaskListEditRowView: View {\n @ObservedRealmObject var task: Task\n \n var body: some View {\n TextField(\"Task Name\", text: $task.title)\n .foregroundColor(Color(\"navy\"))\n }\n}\nText()// List {\n ForEach(huntlet.tasks) { task in\n TaskListEditRowView(task: task)\n }\n// } \n", "text": "I’m attempting to follow the structure in the ListSwiftUI example at\nhttps://github.com/realm/realm-cocoa/blob/master/examples/ios/swift/ListSwiftU\nthat Jason demoed in the webinar atIt is my understanding that I need both List and ForEach in order to use the .onMove and .onDelete magic that Jason demoed in the webinar that showed this stuff, but I cannot figure out how to actually show the data when wrapping the ForEach in a List.In my view I am using@ObservedRealmObject var huntlet: HuntletMy list looks likeNote: The first two Text() views in this struct are just to verify that the data is getting there.This code is producing this result:\nScreen Shot 2021-02-19 at 8.10.18 AM728×378 20.5 KB\nWhen I take the list out and just use ForEach, it loads the data:produces:\n\nScreen Shot 2021-02-19 at 8.09.52 AM742×358 30.7 KB\nAny ideas?", "username": "Kurt_Libby1" }, { "code": "", "text": "Hey @Kurt_Libby1– could you show us a bit more code? Maybe how that View is being instantiated, and the full View code?SwiftUI can be a bit finicky around how you declare your Views, and if not done in the way it expects, it will fail silently.Also– is there any reason you need the wrapping List here?", "username": "Jason_Flax" }, { "code": "elseimport SwiftUI\nimport RealmSwift\n\nstruct TaskListView: View {\n \n @ObservedRealmObject var hunt: Hunt\n @ObservedRealmObject var huntlet: Huntlet\n\n @Binding var editMode: Bool\n \n var body: some View {\n VStack {\n if !editMode {\n ForEach(hunt.tasks) { task in\n TaskCardView(task: task, hunt: hunt)\n .padding([.leading, .trailing])\n }\n } else {\n if huntlet.title != \"\" {\n Text(\"Huntlet is called\\(huntlet.title)\")\n Text(\"Huntlet has \\(huntlet.tasks.count) task(s)\")\n List {\n ForEach(huntlet.tasks) { task in\n TaskListEditRowView(task: task)\n }\n }\n }\n }\n }\n }\n}\n\nstruct TaskListEditRowView: View {\n @ObservedRealmObject var task: Task\n \n var body: some View {\n TextField(\"Task Name\", text: $task.title)\n .font(Font.custom(\"RedHatDisplay-Bold\", size: 14))\n .foregroundColor(Color(\"navy\"))\n }\n}\n", "text": "My understanding with wrapping the List is that I need to include the Edit environment in order to access the .onDelete and .onMove methods, but that these are on the ForEach.Some things to note:The code I was referencing in the first post is in the else statement.Here’s the whole file:I assume that it will end up like this:Screen Shot 2021-02-19 at 8.34.51 AM1260×208 56.6 KB", "username": "Kurt_Libby1" }, { "code": "EditButtonEitherifif", "text": "If this is an iOS application, you do not need the Edit environment modifier for this. Simple having onMove and onDelete will enable their behaviour. However to explicitly show the move handle and delete button, you are meant to use the builtin EditButton now (which I would generally place in a navigation bar).It’s possible swiftUI doesn’t like the Either result from your if statement. 
If you remove the if and simply add the edit button, you will likely see different results.", "username": "Jason_Flax" }, { "code": "struct TaskListView: View {\n \n @ObservedRealmObject var hunt: Hunt\n @ObservedRealmObject var huntlet: Huntlet\n \n @Binding var editMode: Bool\n \n var body: some View {\n List {\n ForEach(hunt.tasks) { task in\n TaskCardView(task: task, hunt: hunt)\n .padding([.leading, .trailing])\n }\n }\n }\n}\nListForEach", "text": "Thanks.I stripped out everything and tried to just wrap the ForEach in a List and am getting nothing:Again, commenting out the List wrapper and just using ForEach brings the list elements back.Also, I tried to use .onMove and .onDelete without using List and there is no swipe function to access the Delete button and no ability to click and drag for moving, so it looks like those actually aren't enabled without the List wrapper wrapped around the ForEach.This may be helpful to see: \n\n\n", "username": "Kurt_Libby1" }, { "code": "ForEachList", "text": "Can you try the offending ForEach with the List wrapper with non-Realm data? That will at least test whether it has to do with Realm or not.The video helps ", "username": "Jason_Flax" }, { "code": "", "text": "Looks like it's not a Realm thing, but something isn't right in SwiftUI.At least now I can go explore the vast sparse landscape that is SwiftUI documentation \n\n\nI'll explore and post an update when/if I find a solution.Any pointers on potential culprits would be appreciated.", "username": "Kurt_Libby1" }, { "code": "ScrollView", "text": "FOUND IT! A few parent views up, there was a ScrollView. Since Lists are inherently scrollable, this isn't supported. Switching the ScrollView to a VStack made the lists show up. ", "username": "Kurt_Libby1" }, { "code": "", "text": "Well, that worked to get the list to show up, allow swipe to delete, drag to move, edit fields, etc.But all of these actual editing functions crash the app.The change is saved and then the app crashes.Here is the log: Dropbox - EditListCrashLog.rtf - Simplify your life.Not sure if I should file a crash report or if I'm just doing something wrong.UPDATE: It works sometimes. I started making a video to show what happens, and if I edit the objects, the crash happens. But if I first add a new task and then interact with the editing, it works.Here's that video:https://www.loom.com/share/7db9d112c338465191c5f1a680460008", "username": "Kurt_Libby1" }, { "code": "", "text": "Hmm… this appears to be the same issue as Crash: alloc.hpp:580: [realm-core-10.3.3] Invalid ref translation entry [16045690984833335023, 78187493520] · Issue #7086 · realm/realm-swift · GitHub if you want to chime in there.", "username": "Jason_Flax" }, { "code": "", "text": "Hi @Kurt_Libby1, have you been able to fix the issue when writing on a background thread? Are you using Sync in your project?", "username": "Diana_Maria_Perez_Af" }, { "code": ".environment()", "text": "Hi Diana,I have not been able to fix it as is.Instead I updated to Realm Cocoa 1.6 so that I can use the new property wrappers. When I pass in the realm with .environment() and use @ObservedRealmObject or @ObservedResults, it seems to update as needed and removes the need to freeze the lists.You can see more of it here: Realm Meetup: Realm Sync in use - Building and Architecting a Mobile Chat App. 
- YouTube. @Andrew_Morgan explains the old way and then shows the new way of dealing with the Lists, and that seems to have fixed the issue for now.", "username": "Kurt_Libby1" }, { "code": "", "text": "Hi @Kurt_Libby1,I'm happy that you've been able to get a solution to this issue using the new SwiftUI implementation, so if you have any issues in the future with this or any other Realm-related crash, do not hesitate to contact us.\nWe'll keep looking into this issue, as we haven't been able to reproduce it in our local environment.", "username": "Diana_Maria_Perez_Af" } ]
SwiftUI List not showing data when wrapping in List
2021-02-19T14:18:10.888Z
SwiftUI List not showing data when wrapping in List
5,802
null
[]
[ { "code": "{\n \"dayId\": 21055,\n \"modificationId\": {\n \"$oid\": \"6036f8c341b05c730a93439f\"\n },\n \"slots\": {\n \"serviceA\": [{\n \"startDate\": {\n \"$date\": \"2021-02-23T10:00:00.000Z\"\n },\n \"endDate\": {\n \"$date\": \"2021-02-23T12:00:00.000Z\"\n },\n \"duration\": 120\n }],\n \"serviceB\": [{\n \"startDate\": {\n \"$date\": \"2021-02-23T10:00:00.000Z\"\n },\n \"endDate\": {\n \"$date\": \"2021-02-23T11:00:00.000Z\"\n },\n \"duration\": 60\n }, {\n \"startDate\": {\n \"$date\": \"2021-02-23T11:00:00.000Z\"\n },\n \"endDate\": {\n \"$date\": \"2021-02-23T12:00:00.000Z\"\n },\n \"duration\": 60\n }],\n \"serviceC\": [{\n \"startDate\": {\n \"$date\": \"2021-02-23T10:00:00.000Z\"\n },\n \"endDate\": {\n \"$date\": \"2021-02-23T10:30:00.000Z\"\n },\n \"duration\": 30\n }, {\n \"startDate\": {\n \"$date\": \"2021-02-23T10:30:00.000Z\"\n },\n \"endDate\": {\n \"$date\": \"2021-02-23T11:00:00.000Z\"\n },\n \"duration\": 30\n }, {\n \"startDate\": {\n \"$date\": \"2021-02-23T11:30:00.000Z\"\n },\n \"endDate\": {\n \"$date\": \"2021-02-23T12:00:00.000Z\"\n },\n \"duration\": 30\n }]\n }\n}\n db.availabilities.update(\n { \"_id\": ObjectId(\"6037396b41b057730a8343a3\") },\n { $pull: { \"slots.serviceA\": { duration: 120 },\n \"slots.serviceB\": { duration: 60 },\n \"slots.serviceC\": { duration: 30 } } \n }\n );\n \"slots.serviceC\": { $and: [ { \"duration\": { $gte: 30 } }, { \"duration\": { $lte: 60 } } ] }", "text": "I am currently writing an availabilities system in a MongoDB document but I can’t seem to figure out how to make my specific update (pull one element and all elements depending on the arrays) in one request to be atomic (it needs to be so).The document has a slots object which contains services and each services contain slots to be picked and thus pulled. It looks like this :The catch is some services need to have all their elements removed according to a complex condition on their dates and duration except one service only the first condition matching element.I’ve come up with this (the duration equality is normally replaced by $lte, $gte and so on…)For the more complex condition I intended to do it this way but on the dates too/instead (it seems to be working just fine) : \"slots.serviceC\": { $and: [ { \"duration\": { $gte: 30 } }, { \"duration\": { $lte: 60 } } ] }So I’d like service A and service B to pull all the matching elements and ONLY ONE for service C for example.Thanks in advance for the help,document on compass for readability", "username": "GUIGAL_Allan" }, { "code": "", "text": "Hi @GUIGAL_Allan,Welcome to make MongoDB community.Not sure I got the problem. Do you face any error with the $pull .Can’t you use several $pull for each service?https://docs.mongodb.com/manual/reference/operator/update/pull/#remove-items-from-an-array-of-documentsIf you need to do a dynamic condition on an array consider using array filters:Thanks,\nPavel", "username": "Pavel_Duchovny" } ]
MongoDB updating arrays (pull all matching elements and pull one matching element)
2021-02-25T21:09:29.382Z
MongoDB updating arrays (pull all matching elements and pull one matching element)
2,990
null
[]
[ { "code": "{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"db.collection\",\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"SUBSCAN\",\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"SORT\",\n\t\t\t\t\"sortPattern\" : {\n\t\t\t\t\t\"A\" : 1,\n\t\t\t\t\t\"B\" : 1\n\t\t\t\t},\n\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\"indexName\" : \"my_index\",\n\t\t\t\t\t\"direction\" : \"forward\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : \"myinstance\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"3.6.0\"\n\t},\n\t\"ok\" : 1\n}\ndb.collection.find({conditions}).sort({A:1, B:1}).explain()\n{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"db.collection\",\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\"indexName\" : \"my_index\",\n\t\t\t\"direction\" : \"forward\"\n\t\t}\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : \"myinstance\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"3.6.0\"\n\t},\n\t\"ok\" : 1\n}\nmy_index", "text": "Hi All,I run an explain on my query and get a “SUBSCAN” meaning stage, but could find a documentation for it. What does it mean ?Below is the full result:so the query is basically:If I remove the sort, then it’s a straightforward IXSCAN case:My wild guess would be, after getting all the documents that match the query criteria (which uses the index my_index), it must sort all the values manually based on the sort values (since there’s no range that it could leverage), hence the stage is “SUBSCAN”, so not quite COLSCAN but not optimal either. Am I getting it correctly ?Thanks for all comments and suggestions.\nTuan", "username": "Tuan_Dinh1" }, { "code": "", "text": "Hi @Tuan_Dinh1,Yes your guess seems to be correct .When an index can only satisfy a sort it has to do additionally a partial scan to filter the criteria section.Our recommendation is to build index supporting Equity , Sort and Range order.Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.\nThanks\nPavel", "username": "Pavel_Duchovny" } ]
Meaning of SUBSCAN in explain() query
2021-02-26T04:27:11.531Z
Meaning of SUBSCAN in explain() query
3,368
null
[ "security", "configuration" ]
[ { "code": "", "text": "Hi everyonequestion:if the Security: keyFile file can be updated without restarting the replicaset nodes?Thanks!!!", "username": "Cristian_Carrasco" }, { "code": "", "text": "Welcome to the community!No it is not possible without restart\nWhat version are you using and your configuration like?\nHowever for latest versions you can do this with no downtimePlease check this doc", "username": "Ramachandra_Tummala" }, { "code": "# network interfaces\nnet:\n maxIncomingConnections: 64000\n serviceExecutor: adaptive\n port: 27017\n bindIp: 127.0.0.1\n ssl:\n mode: requireSSL\n PEMKeyFile: /opt/MongoDB/file.pem\n allowInvalidCertificates: true\n allowInvalidHostnames: true\n\n\nsecurity:\n keyFile: /opt/MongoDB/keyfile\n authorization: \"enabled\"\n", "text": "Hi, Thank you very much for the reply.Y have the version 4.0.5and the config:", "username": "Cristian_Carrasco" }, { "code": "", "text": "By configuration i meant your setup details like sharded/unsharded cluster,number of nodes,config servers,mongoose etc\nReason for asking this is if it is a sharded cluster you should have min. two mongoose for no downtime upgrade as per the link i sharedAlso check this doc for unsharded cluster", "username": "Ramachandra_Tummala" } ]
Config: Security: keyFile update without restart
2021-02-25T01:31:49.994Z
Config: Security: keyFile update without restart
1,862
null
[ "replication", "monitoring" ]
[ { "code": "", "text": "We’re running a mongodb 4.2 with zstd collection block compression. The oplog is set to oplogSizeMB: 3000000 MB(before compression), but the actual collection size is ~350GB(after compression). It’s like 10X compression ration. According to WiredTiger: Compressors, the default compression ration for zstd is 3. How can we understand the size difference between the data size and total size of oplog collection?PRIMARY> db.getReplicationInfo()\n{\n“logSizeMB” : 3000000,\n“usedMB” : 2996131.66,\n“timeDiff” : 123687,\n“timeDiffHours” : 34.36,\n“tFirst” : “Sun Feb 21 2021 01:55:32 GMT-0700 (MST)”,\n“tLast” : “Mon Feb 22 2021 12:16:59 GMT-0700 (MST)”,\n“now” : “Mon Feb 22 2021 12:16:59 GMT-0700 (MST)”\n}PRIMARY> db.oplog.rs.totalSize()\n379049607168PRIMARY> db.oplog.rs.dataSize()\nNumberLong(“3118230005927”)", "username": "Bowen_Liu" }, { "code": "--ultraoplogSizeMB", "text": "According to WiredTiger: Compressors, the default compression ration for zstd is 3.Welcome to the MongoDB community @Bowen_Liu!The Zstandard value you are referencing is a “compression level”, not a target compression ratio. The compression level determines the amount of effort that goes into the compression algorithm’s analysis: a lower level will produce results faster but may not result in as much compression as a higher level. Higher compression levels will have slower compression speed and use more resources (memory & CPU) in exchange for potentially better compression outcomes. There are diminishing returns in higher compression levels, especially if you want to minimise the latency for writing data to disk.Quoting from the Zstandard manual:The library supports regular compression levels from 1 up to ZSTD_maxCLevel(),\nwhich is currently 22. Levels >= 20, labeled --ultra, should be used with\ncaution, as they require more memory. The library also offers negative\ncompression levels, which extend the range of speed vs. ratio preferences.\nThe lower the level, the faster the speed (at the cost of compression).Note: compression level is currently not adjustable for MongoDB collections (although you can choose the algorithm to use like Zstandard vs Snappy). There’s a feature request you can upvote & watch for updates: SERVER-45690: Ability to customize collection compression level.How can we understand the size difference between the data size and total size of oplog collection?The compression ratio will vary based on the source data, which in this case will be block compression of oplog documents. The best estimate of expected compression ratio will be derived from observation of your deployment metrics over time.Your current oplog workload is achieving about a 10:1 compression ratio. If the nature of your workload changes significantly in future (for example, if an application started storing binary data which is less compressible) the ratio may change.In MongoDB 4.2 and earlier server versions, the maximum oplog size is based on a configured oplogSizeMB compared to the storage size of the oplog. MongoDB 4.4+ adds the option to set a time-based oplog retention period for admins who want to ensure the oplog covers an expected duration (in hours).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you for the reply!", "username": "Bowen_Liu" } ]
Oplog compression ratio
2021-02-23T03:30:37.751Z
Oplog compression ratio
2,561
null
[]
[ { "code": "", "text": "How do i find the number of locks on a collection ?", "username": "Koustav_Chatterjee" }, { "code": "db.adminCommand( { lockInfo: 1 } )mongodmongostatdb.mycollection.stats()", "text": "Hi @Koustav_Chatterjee and welcome in the MongoDB Community !Which version of MongoDB are you running? I will assume the latest 4.4.4 for my answer.More info here: https://docs.mongodb.com/manual/reference/database-profiler/#system.profile.locks.More info here: https://docs.mongodb.com/manual/reference/command/lockInfo/.More info here: https://docs.mongodb.com/database-tools/mongostat/#fieldsMore info here: https://docs.mongodb.com/manual/reference/method/db.collection.stats/index.htmlWhat kind of issues are you having that made you look at locks? Happy to help if you share more details.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "We are using mmapv1 as mongo engine.\nIt uses collection level locking even for reads.\nIf we have n concurrent read requests, each of them has to wait for the previous to finish.\nWe were more interested to see this on a collection basis.\nThat is what we are trying to find out.", "username": "Koustav_Chatterjee" }, { "code": "", "text": "MMapV1 is now super old and deprecated. Don’t waste your time trying to solve your lock issues and upgrade to the latest version of MongoDB, your performances will improve greatly. There is no other way to fix this.MongoDB 3.6 will reach End Of Life in April 2021.Whichever MongoDB product you’re using, find the support policy quickly and easily.", "username": "MaBeuLux88" } ]
Number of locks on a collection
2021-02-25T12:18:12.218Z
Number of locks on a collection
3,718
null
[ "kotlin" ]
[ { "code": "val completableDeferred = CompletableDeferred<App.Result<Document>>()\nval functions: Functions = app.getFunctions(app.currentUser())\n//user.email and user.name are strings\nfunctions.callFunctionAsync(\"assignNewUser\", listOf(user.email, user.name), Document::class.java){\n completableDeferred.complete(it)\n}\nexports = async function(email, name) {\n const collection = context.services.get(\"mongodb-atlas\").db(\"quotes\").collection(\"User\");\n const filter = {email: email};\n const memberToAssign = await collection.findOne(filter);\n if (memberToAssign == null) {\n return {error: `User ${email} not found`};\n }\n try {\n return await collection.updateOne(\n {_id: memberToAssign._id},\n {$set: {\n name: name,\n }\n });\n } catch (error) {\n return {error: error.toString()};\n }\n};\n", "text": "I’m attempting to use a Realm function in my Kotlin Android application, but I’m getting the error ‘Could not resolve encoder for end type’.Here is where I’m calling the function:And here is the Realm function:Does anyone know what this error means?Also, a side-note. I’m calling this function after I’ve registered a user, because I haven’t found a way to register a user while passing in extra parameters like a name or a profile picture. I’m registering users using the App.emailPassword.registerUserAsync(email, password) method.", "username": "Joe_Barker" }, { "code": "completableDeferredimport org.bson.Document\n", "text": "Curiously, when I try to run a function like this, I hit no problems. It’s a little tricky without seeing the code for completableDeferred or your imports, but I would guess that you’re using the wrong type of Document. Check the imports at the top of your file for:If you’ve imported some other kind of Document class, Realm probably won’t know how to write data into that class. Let me know if this works!", "username": "Nathan_Contino" }, { "code": "val result = completableDeferred.await()\nif (!result.isSuccess) {\n val exception = RegisterException()\n result.error.errorMessage?.let { exception.errorMessage = it }\n throw exception\n}\nimport io.realm.mongodb.App\nimport io.realm.mongodb.functions.Functions\nimport joe.barker.domain.boundary.data.RegisterData\nimport joe.barker.domain.entity.User\nimport joe.barker.domain.exception.MyException\nimport joe.barker.domain.exception.RegisterException\nimport kotlinx.coroutines.CompletableDeferred\nimport org.bson.Document", "text": "Thanks for the reply. The Document I’m using is in fact an org.bson.Document.The rest of the Kotlin code is here, sorry, I didn’t think it relevant.Edit: And my imports:", "username": "Joe_Barker" }, { "code": "", "text": "What version of the Realm Android SDK are you using? I just tried with 10.0.0 and 10.3.1 with mostly the same code as you provided here and I didn’t have any issues.", "username": "Nathan_Contino" }, { "code": "RegisterException.errorMessage result.error.errorMessage?.let { exception.errorMessage = it }\n result.error.errorMessage?.let { exception.errorMessage = it.error.error }\n", "text": "What is the type of RegisterException.errorMessage? I wonder if your function is throwing an error and you actually need to changeto…", "username": "Nathan_Contino" }, { "code": "", "text": "The RegisterException is my own, of type Throwable and has an errorMessage val.But, quite strangely my error has changed to FunctionError: update not permitted. No idea why, I haven’t changed anything! 
I would assume my User needs to have the permission, where would I set this?", "username": "Joe_Barker" }, { "code": "", "text": "That’s largely dependent on your app, but you can get started by reading up on permissions here. You can also check out some example query roles here. Depending on your application you might not need anything other than allowing all reads and writes to a specific collection in a specific database.", "username": "Nathan_Contino" }, { "code": "", "text": "Great. Thanks for the help.If that encoder error appears again I’ll mention it, it’s odd that it’s stopped appearing.", "username": "Joe_Barker" }, { "code": "{\n \"roles\": [\n {\n \"name\": \"usersCanCRUDTheirOwnData\",\n \"apply_when\": {\n \"_id\": \"%%user._id\"\n },\n \"insert\": true,\n \"delete\": true,\n \"search\": true,\n \"read\": true,\n \"write\": true,\n \"fields\": {},\n \"additional_fields\": {}\n }\n ],\n \"filters\": [],\n \"schema\": {\n \"title\": \"User\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"_partition\": {\n \"bsonType\": \"string\"\n },\n \"canReadPartitions\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"canWritePartitions\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"email\": {\n \"bsonType\": \"string\"\n },\n \"name\": {\n \"bsonType\": \"string\"\n },\n \"profilePicture\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": [\n \"_partition\",\n \"email\",\n \"name\"\n ]\n }\n}\n", "text": "So I’ve set up a schema with rules that a user can read/write their own data, but I’m still getting the update not permitted error.I can’t for the life of me figure out what I’m missing. Maybe the problem will be mysteriously replaced with another one again when I ask on here, ha Edit: I’ve also tried this without the apply_when", "username": "Joe_Barker" } ]
Android and Realm - Could not resolve encoder for end type
2021-02-25T22:01:20.844Z
Android and Realm - Could not resolve encoder for end type
3,747
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Hi,I’m building a headless e-commerce app that will integrate with MongoDB Realm which will be used for the primary user management.I have all users log in via MongoDB Realm and automatically create the corresponding user in the e-commerce platform when they register on the site. So far so good.When I want to fetch address data for the customer, however, I have to use an admin API for the e-commerce platform (don’t ask me why, but you can’t get that data using frontend available APIs). This is where the problem begins.Since this is sensitive data I need to ensure the user is logged in before they’re able to access the data, and they should only be able to access their own data. My idea was to pass the access token, or something similar, to our custom API and verify the token, preferably even decode it to get the customer email (used to query the e-commerce system) from there and nothing else from the request.I’ve built this functionality with Firebase in the past and they have a super simple functionality to verify a user ID token. I had hoped for something similar for Realm, but haven’t been able to find anything.What’s the best way to achieve this in Realm?", "username": "Max_Karlsson" }, { "code": "", "text": "Hey Max,Unfortunately there currently isn’t a way to verify access tokens on the server-side.Since this is sensitive data I need to ensure the user is logged in before they’re able to access the data, and they should only be able to access their own data.Is there something preventing you from using an authentication trigger that will fetch the user’s address data from the eCommerce Admin API and store it in their custom user data? This will ensure that the user has logged-in/registered and only they can access their own custom data.e.g.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hi Sumedha,Thanks for the response.There are 3 reasons why I don’t want to store their address in their custom data:Is there no other way you can think of? This is a hard requirement for my project and I’d be really bummed out to have to replace Realm this far into the project.", "username": "Max_Karlsson" }, { "code": "", "text": "I just had an idea that might work: in a login trigger, create a token with the relevant user information and save to the user custom data. The token is encoded with a secret that’s shared with our API and thus can be decoded there. Then use the decoded token to get any data.What I’m thinking with that, however, is that I want to renew that token as often as the access token is renewed, to ensure its security. Is there a trigger for when the access token is renewed?", "username": "Max_Karlsson" }, { "code": "", "text": "I figured out a way to achieve what I was after from this page: https://docs.mongodb.com/realm/reference/authenticate-http-client-requests#std-label-authenticate-http-client-requestsWould still love a built-in way to achieve this, but it works well enough for my use case for now.", "username": "Max_Karlsson" }, { "code": "", "text": "Hey Max - what you mentioned is a possible workaround. I was also going to suggest potentially moving some of your API logic to Realm Functions where you have confirmation that the user is authenticated and valid.We have gotten multiple requests for an API method that validates an access token. I can’t give a definite date but we’re actively investigating and looking into releasing this. 
I can post here with any updates.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hey folks - @Sumedha_Mehta1 promised an update, and we have one for you! We've just released an endpoint in our admin API that you can use to verify a client access token.The OpenAPI documentation for the endpoint is here: Atlas App Services API.And we've added a section to the "Authenticate HTTP Client Requests" page about using the endpoint to verify a client access token: https://docs.mongodb.com/realm/reference/authenticate-http-client-requests/#verify-a-client-access-token Hope this is helpful!", "username": "Dachary_Carey" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
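A rough Node-side sketch of calling the endpoint Dachary announces is below. The exact path and payload should be taken from the linked OpenAPI docs, so the URL shape, field names, and the axios dependency here are all assumptions for illustration:

```javascript
const axios = require("axios");

// adminToken: an Admin API session token; groupId/appId identify the app.
async function verifyClientAccessToken(adminToken, groupId, appId, clientToken) {
  const url = `https://realm.mongodb.com/api/admin/v3.0/groups/${groupId}` +
              `/apps/${appId}/users/verify_token`; // path assumed from the docs
  const res = await axios.post(
    url,
    { token: clientToken },
    { headers: { Authorization: `Bearer ${adminToken}` } }
  );
  return res.data; // decoded claims for a valid token, including the user id
}
```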
Verify Access Token server side
2020-11-25T20:27:06.958Z
Verify Access Token server side
6,228
null
[ "queries", "data-modeling" ]
[ { "code": "", "text": "Hi all. I’m new to databases in general. I have a Node+Express multiuser app with each user’s data and login credentials in mongodb Altas database.\nSo I thought of the following structure -App Database\nAccounts Collection - To store only username and password\nProfile Collection - to store all other information related to the userI don’t know how to relate one to another. one option I can see is to have username in both profile and account document for every user. Then I can check password using the account document and if successful, I would search the username in Profile collection and return the data in a response.\nOr am I thinking too hard and I should just store both username, password, and profile data in one document for each user.What’s the best practice?\nThx", "username": "Tarun_Singh" }, { "code": "", "text": "I realise this information must be somewhere in the documentation but I kinda have to sumbit a fullstack prototype tomorrow so would really appreciate if someone could point me in the right direction", "username": "Tarun_Singh" }, { "code": "", "text": "Or am I thinking too hard and I should just store both username, password, and profile data in one document for each user.We never think too hard. B-)But sincehave to sumbit a fullstack prototype tomorrowput everything in the same collection. MongoDB is flexible enough that you can revise your schema later.In the mean time enjoy some reading:A summary of all the patterns we've looked at in this series", "username": "steevej" } ]
Database structure help and how to connect between different collections
2021-02-25T19:44:16.997Z
Database structure help and how to connect between different collections
2,595
null
[ "node-js", "app-services-user-auth", "react-native" ]
[ { "code": "", "text": "Is there any way to verify and get userId on back-end from the access token created by client-side Realm SDK?My application using Realm authentication, I need to get the user Id to save user created post on back-end, I’m using Couchbase for my post data.something like firebase admin does link", "username": "chawki" }, { "code": "", "text": "@chawki I believe you can use an authentication trigger like so -And then access user.id in context - not sure exactly what you are trying to do but triggers and functions should give you access to user data -", "username": "Ian_Ward" }, { "code": "", "text": "What I’m looking for is something similar to firebase admin.Here how firebase admin worksHow to implement this with Realm?", "username": "chawki" }, { "code": "var myId = app.currentUser()!.identity!\n", "text": "Once you login you can call something like:", "username": "Ian_Ward" }, { "code": "", "text": "Thanks @Ian_Ward,This not what I’m looking for, I think I need to implement the user verification my self.", "username": "chawki" }, { "code": "", "text": "@chawki Did you find a solution? I am in the same boat.A node.js backend but using Realm for user management. When a user logs in, I can get the accessToken returned but Realm has not method for validating the accessToken for subsequent requests…", "username": "Eden_Webb" }, { "code": "", "text": "@Eden_Webb, @chawki, were you able to find a nice workaround for this?", "username": "Vlad_Dobrovolskiy" }, { "code": "", "text": "@Vlad_Dobrovolskiy No. Following this thread Verify Access Token server side - #2 by Sumedha_Mehta1 there are some suggestions of workarounds. It also mentions they are looking into as a feature.", "username": "Eden_Webb" }, { "code": "", "text": "Hey everyone - we’ve added support for this, more info can be found in the docs: https://docs.mongodb.com/realm/reference/authenticate-http-client-requests/#verify-a-client-access-token", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to verify access token on back-end
2020-09-12T12:38:04.784Z
How to verify access token on back-end
6,630
null
[ "dot-net", "connecting" ]
[ { "code": "private static string MongoDbConnectionString => \n ConfigurationManager.AppSettings[nameof(MongoDbConnectionString)];\n\npublic static void Main()\n{\n var mongoClient = new MongoClient(MongoDbConnectionString);\n Console.WriteLine(\"All operations completed\");\n}\nMain()var mongoClient...TaskID Status Location Task\n1324\tAwaiting\tMongoDB.Driver.Core.Servers.ServerMonitor.GetIsMasterResultAsync(connection, isMasterProtocol, cancellationToken)\t\t\t\t\t\t\t\tMongoDB.Driver.Core.Servers.ServerMonitor.GetIsMasterResultAsync(connection, isMasterProtocol, cancellationToken)\n1323\tAwaiting\tMongoDB.Driver.Core.Connections.IsMasterHelper.GetResultAsync(connection, isMasterProtocol, cancellationToken)\t\t\t\t\t\t\t\t\tMongoDB.Driver.Core.Connections.IsMasterHelper.GetResultAsync(connection, isMasterProtocol, cancellationToken)\n1322\tAwaiting\tMongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol<TCommandResult>.ExecuteAsync(connection, cancellationToken)\t\t\t\tMongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol<TCommandResult>.ExecuteAsync(connection, cancellationToken)\n1321\tAwaiting\tMongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessageAsync(responseTo, encoderSelector, messageEncoderSettings, cancellationToken)\tMongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessageAsync(responseTo, encoderSelector, messageEncoderSettings, cancellationToken)\n1320\tAwaiting\tMongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(responseTo, cancellationToken)\t\t\t\t\t\t\t\t\t\t\t\tMongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(responseTo, cancellationToken)\n1319\tAwaiting\tMongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(cancellationToken)\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tMongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(cancellationToken)\n1318\tAwaiting\tMongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(stream, buffer, offset, count, timeout, cancellationToken)\t\t\t\t\t\tMongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(stream, buffer, offset, count, timeout, cancellationToken)\n1317\tAwaiting\tMongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadAsync(stream, buffer, offset, count, timeout, cancellationToken)\t\t\t\t\t\t\tMongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadAsync(stream, buffer, offset, count, timeout, cancellationToken)\n1316\tAwaiting\tSystem.Net.Security.SslStream.ReadAsyncInternal<TReadAdapter>(adapter, buffer)\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tSystem.Net.Security.SslStream.ReadAsyncInternal<TReadAdapter>(adapter, buffer)\nDisconnect()Dispose()", "text": "This is a reduced version of the netcore3.1 console app that is having the issue in the title.There are other statements in the actual Main() function, but they are all commented out except for these two. With the project this code belongs to set as startup project in VS and then started, the program runs and displays the “All operations completed” message in the console window, but the program does not exit. If i comment out the var mongoClient... statement, the program displays the “All operations completed” message briefly before the console window that was opened by the debugging session closes.When the program is not closing, if I pause the execution and inspect the running tasks (Debug -> Windows -> Tasks) to see what may be active, there are over 200 tasks that are active. 
Most of them are ones like these, but in varying orders.In no place in my code am I using any Async, so any tasks that are created are going to be from the MongoDB C# driver or the .NET Core runtime. I created a minidump from the process when it's having this condition, but I'm not going to upload it where the general public can get to it.Why is the MongoClient lacking any kind of explicit Disconnect() or Dispose()?\nHow can I get this to shut down cleanly?I tested this out in RoslynPad to rule out Visual Studio and it does the same thing there. After changing the package versions, this program will exit correctly up to v2.8.1, but v2.9.0 is where the described issue starts", "username": "Andrew_Stanton" }, { "code": "Disconnect()Dispose()mongoClient.Cluster.Dispose();MongoDbConnectionString", "text": "Why is the MongoClient lacking any kind of explicit Disconnect() or Dispose() ?You can try this: mongoClient.Cluster.Dispose();. Though the documentation doesn't explicitly say the program needs to close the connection (I believe the closing of the connection is automatic).What is the MongoDbConnectionString you are working with?", "username": "Prasad_Saya" }, { "code": " \"mongodb+srv://[username]:[password]@[instance].azure.mongodb.net/test?authSource=admin&w=majority&readPreference=primary&appname=MongoDB%20Compass&retryWrites=true&ssl=true\";\nmongoClient.Cluster.Dispose();finally", "text": "The connection string (redacted) is like:Using the mongoClient.Cluster.Dispose(); at the end does indeed stop the RoslynPad example program from lingering open. Any idea why it would stay open in the first place?\nThe VS console application still lingers when this is added into the finally block.", "username": "Andrew_Stanton" }, { "code": "", "text": "I got the same problem on a .NET Core BackgroundService. I need to close the app, so I do a _hostApplicationLifetime.StopApplication(); in the finally block, but the process doesn't stop and I see these tasks are waiting: [screenshot of waiting driver tasks]The fact is my program closes normally when I connect to a replica set cluster but not when I connect to a sharded cluster… I have tried v2.10.4 and v2.11.2 (MongoDB v4.2).\nAny idea why in a replica set MongoDB environment everything is OK but not in a sharded MongoDB environment?", "username": "Romain_Semur" }, { "code": "", "text": "I am having this exact same problem using MongoDB.Driver 2.11.6, on the command line using mono 6.8.0.123 built with msbuild 16.1.85. While it's fine on my laptop, it hangs the same way on the server. I can't debug it due to lack of GDB, but running mono with '--trace' shows a metric buttload of bson-related activity in a thread after the main thread has exited", "username": "Jeffery_Chow" }, { "code": "", "text": "BTW the problem didn't go away when I added a call to mongoClient.Cluster.Dispose()", "username": "Jeffery_Chow" }, { "code": "", "text": "Hi Andrew,\nWe will be tracking this issue in the Jira ticket: https://jira.mongodb.org/browse/CSHARP-3429", "username": "Boris_Dogadov" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Simple c# console program wont close after `var mc = new MongoClient(connStr);`
2021-02-15T07:55:24.974Z
Simple c# console program wont close after `var mc = new MongoClient(connStr);`
5,731
null
[]
[ { "code": "", "text": "Hi there Did you know that we have a special offer for Educators? MongoDB for Academia is for educators who want to prepare students for careers that require in-demand database skills that power modern applications. It is designed to support educators using MongoDB in their teaching. If you register via educators.mongodb.com, you gain access to:You’re eligible for this program if you teach:Do you have any questions about teaching MongoDB? Feel free to create a new topic in this group with your questions, we’re here to help you getting started ", "username": "Lieke_Boon" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Learn more about our offer for Educators
2021-02-25T20:13:27.302Z
Learn more about our offer for Educators
4,522
https://www.mongodb.com/…4_2_1024x512.png
[ "change-streams", "atlas-triggers" ]
[ { "code": "", "text": "Hello,For the solution that I am building, I am using MongoDB Atlas (Cloud) and building services with Spring Webflux. Using ReactiveMongoRepository and ReactiveMongoTemplate.I have now successfully implemented our services which uses ChangeStreams to get reactive updates on changes (insert and/or updates) to documents in the MongoDB Atlas database. This means, every time a user logs in (in the front-end), they are seeing a dashboard with 5 ChangeStreams active on just 1 screen. With many users, this means a good amount of ChangeStreams.I have contacted MongoDB Atlas, through the chat, but am getting mixed signals. First I am told that the ChangeStreams are part of the active connection limits as part of the cluster tier, as described by the link below:Meaning that if you have 500 concurrent connections limit, and you have 50 ChangeStreams running, than you have used 50 of the 500, thus 450 remaining.On another note, I am told, the limitations for ChangeStreams are different as per link below:I am kind of confused and hope someone can perhaps answer my questions here.Questions:Are the ChangeStream limits part of the Connection limit (as per cluster tier) or are they separate? If they are separate (and according to the second link), that means that you are not able to have a lot of ChangeStreams, if I am correct. This limits my ‘reactive’ application severely.If ChangeStreams are really that restrictive with those kind of limits. Are Triggers an alternative perhaps and how are they different compared to ChangeStreams?Hope someone is able to provide any guidance/info here. Thanks in advance.", "username": "vv001" }, { "code": "", "text": "Hi Vahid –The difference here is really whether you are opening change streams with the driver (as it sounds like you are today) or are using Realm (ex. with one of the Realm SDKs/Sync or Triggers).When opening change streams directly, each one will count as it’s own connection from the client/server to Atlas.However, Realm has built-in proxying of change streams across clients, giving some scalability benefits for change streams that you would otherwise need to write yourself in an interim layer between Atlas/change stream consumers. For this reason, when using Realm the change stream/end-user limits are structured slightly differently. When using Realm, generally the number of change streams open will count against your Atlas limit but you will be able to have far more concurrent clients due to the proxying.Hope that helps!\nDrew", "username": "Drew_DiPalma" }, { "code": "", "text": "Thanks for the explanation Drew. So just to make sure that I understand your answer correctly: in my case, since I am using ReactiveMongoTemplate as part Spring Reactive MongoDB and Spring Webflux, I am opening a change stream with the driver, thus not through use of a Realm. Therefore the connections that I am using are not as limited as would be with a Realm.Do I understand it correctly there?", "username": "vv001" }, { "code": "", "text": "Correct, the Realm limits are not applicable here, only the standard MongoDB Atlas connection limits.", "username": "Drew_DiPalma" }, { "code": "", "text": "Much appreciated Drew. That clarifies a lot.", "username": "vv001" } ]
Question regarding ChangeStreams vs. Triggers (and limitations in concurrent connections)
2021-02-22T19:20:59.630Z
Question regarding ChangeStreams vs. Triggers (and limitations in concurrent connections)
3,928
null
[ "queries", "mongoose-odm" ]
[ { "code": " const User = mongoose.model(\n \"User\",\n new mongoose.Schema({\n email: String,\n password: String,\n name: String,\n days: [\n {\n day: Date,\n data: \n { \n average_score: {type: mongoose.Schema.Types.Decimal128, default: 0 }\n }\n }\n ]\n })\n );\n User.find({ \"_id\": getUserId(req), \"days.day\":{\n \"$gte\": new Date(\"2021-01-02T00:00:00.000Z\"), \n \"$lt\": new Date(\"2021-01-04T00:00:00.000Z\")\n }},\n function (err, result) {\n if (err){\n res.status(400).send({data: { message: err }});\n return;\n }\n else if(result)\n {\n res.status(200).send({data: { message: result }});\n }\n })\n", "text": "HelloI have the following User schema:In the day field I’m storing the days in ISO format like 2020-12-29T00:00:00.000Z. The User.find query is returning all the days instead of returning the data for the days between the Date range and I’m not sure why is this happening.", "username": "Stefan_Tesoi" }, { "code": "$filter$filter", "text": "Hello @Stefan_Tesoi, welcome to the MongoDB Community Forum!Your query looks just about correct as per this documentation example: Query an Array of Embedded Documents - A Single Nested Document Meets Multiple Query Conditions on Nested Fields.You can use projection : Project Specific Array Elements in the Returned Array. The documentation also says:See the aggregation operator $filter to return an array with only those elements that match the specified condition.Here are posts with similar question and use $filter:", "username": "Prasad_Saya" }, { "code": "User.aggregate([\n {$match:{_id: ObjectID(\"5feb7b1b5438fcda7401f306\")}},\n { $project: {\n days: {\n $filter: {\n input: \"$days\", // le tableau à limiter \n as: \"index\", // un alias\n cond: {$and: [\n { $gte: [ \"$$index.day\", new Date(\"2020-12-29T00:00:00.000Z\") ] },\n { $lte: [ \"$$index.day\", new Date(\"2020-12-31T00:00:00.000Z\") ] }\n ]}\n }\n }\n }}\n ])\n .project({'days.day':1, 'days.data':1})\n .then(result => { res.status(200).send({data: { message: result }})})\n", "text": "Thank you for pointing me to the right solution.", "username": "Stefan_Tesoi" } ]
How to range query a date field within an array of nested documents?
2021-02-25T09:45:04.532Z
How to range query a date field within an array of nested documents?
22,333
null
[]
[ { "code": "", "text": "Can you please, suggest me best the approach for uploading image data and download ,\nI not able to create schema model for image storage,", "username": "Umesh_War" }, { "code": "", "text": "Hello @Umesh_War!Welcome to the MongoDB Community, so what you’re looking for is a means of static hosting correct?We have an amazing guide here for setting up static hosting that you can find here.If there is anything else that I can help you with, please let me know.Regards,Brock.", "username": "Brock_GL" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Image storage on MongoDB Realm for Android-based app
2021-02-08T17:35:23.762Z
Image storage on MongoDB Realm for Android-based app
3,165
null
[]
[ { "code": "", "text": "Hi everyone,Tidepool is a 501(c)(3) nonprofit organization. We were founded by people with diabetes, caregivers, and leading healthcare providers committed to helping all people with insulin-requiring diabetes safely achieve great outcomes through more accessible, actionable, and meaningful diabetes data.We are committed to empowering the next generation of innovations in diabetes management. We harness the power of technology to provide intuitive software products that help people with diabetes.Speaking of technology, we are using MongoDB Atlas for our primary data store, and all of our products are open source. Feel free to take a look around our GitHub repositories at Tidepool Project · GitHubCheers,Tapani Otala\nVP, Engineering", "username": "Tapani_Otala" }, { "code": "", "text": "Welcome to the MongoDB Community Forums @Tapani_Otala!Tidepool’s mission and open source commitment are excellent, and it is inspiring to see how everyone connects to the mission in their team bios .I’ve come across a growing number of interesting projects in the #WeAreNotWaiting space, like Nightscout CGM project (which coincidentally, also uses MongoDB) and OpenAPS. I was curious if you might have an ecosystem map or article to share that describes how these projects connect with (or complement) Tidepool.Does Tidepool organise any community events for contributors (eg hackathons or user groups)?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie,Thanks for the warm welcome!We do not really have an ecosystem map, as such, on how the projects connect to one another. We are all in the same space, a lot of the contributors have flowed from one project to another, and we continue to support open-source contributions flowing back and forth.We do participate in community events but unfortunately have not had the bandwidth to organize any lately. Our developer portal has links to our public Slack instance for developer discussions: https://developer.tidepool.org/Cheers,Tapani", "username": "Tapani_Otala" } ]
Greetings from a remote-first organization
2021-02-23T20:56:28.949Z
Greetings from a remote-first organization
3,043
null
[]
[ { "code": "", "text": "que hacer cuando borro tablas por error e dado drop a tablas por error", "username": "alejandro_henao_diaz" }, { "code": "", "text": "Welcome to the community!Do you have backup of the table/collection?\nYou can use mongoimport/mongorestore to get the table back", "username": "Ramachandra_Tummala" } ]
What to do when I delete tables by mistake?
2021-02-24T19:24:57.036Z
What to do when I delete tables by mistake?
1,321
null
[ "replication", "backup" ]
[ { "code": "", "text": "Hi, There is a three node replication setup where mongo dump fails in all the three nodes with the error - Failed: error writing data for collection `` to disk: error reading collection: (InterruptedAtShutdown) interrupted at shutdown.I did pointed backups to different drives and still the issue is same. There is one finding like primary keeps changing between three servers very often , not sure would that be the cause of the issue.", "username": "Gayathri_Ramesh" }, { "code": "", "text": "Hi @Gayathri_Ramesh and welcome in the MongoDB Community !You need to stabilize the system first. The primary shouldn’t keep switching from one node to another. What is the root cause of this? Did you analyse the logs? That’s where I would start.Also, did you check your system? Do you have enough RAM, disk space, CPU, etc?Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Backup Fails in Three-Node Replication
2021-02-25T11:58:03.434Z
Backup Fails in Three-Node Replication
1,826
null
[ "node-js", "crud" ]
[ { "code": "Updating the path 'userJourneyEvents.$[event].touchpoints' would create a conflict at 'userJourneyEvents.$[event].touchpoints'findOneAndUpdatequery {\n userJourneyEvents: { \n '$elemMatch': { id: 'eid' }\n }\n}\neventUpdate {\n '$set': {\n 'userJourneyEvents.$[event].touchpoints.$[touchpoint0].description': 'description'\n },\n '$push': {\n 'userJourneyEvents.$[event].touchpoints': {\n '$each': [\n {\n id: 'new_id',\n kind: 'tools',\n description: ''\n }\n ],\n '$sort': { kind: 1 }\n }\n }\n}\narrayFilters [\n { 'event.id': 'eid' },\n { 'touchpoint0.id': 'tpid0' }\n]\n", "text": "I’m trying to insert documents into an array while also updating fields of existing elements in the same array. Am I doing something wrong, or is this not possible in a single operation?I’m seeing this error: Updating the path 'userJourneyEvents.$[event].touchpoints' would create a conflict at 'userJourneyEvents.$[event].touchpoints'I’m using the NodeJS driver, calling findOneAndUpdate with the following parameters:", "username": "Jon_Madden" }, { "code": "", "text": "Hi @Jon_Madden,Welcome to MongoDB community.I would like to o help you but I believe the best way is to first :Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n \"userJourneyEvents\": [\n {\n \"id\": \"eid\",\n \"touchpoints\": [\n {\n \"id\": \"tpid0\",\n \"kind\": \"tools\",\n \"description\": \"\"\n }\n ]\n }\n ]\n}\n{\n \"userJourneyEvents\": [\n {\n \"id\": \"eid\",\n \"touchpoints\": [\n {\n \"id\": \"tpid0\",\n \"kind\": \"tools\",\n \"description\": \"description\"\n },\n {\n \"id\": \"new_id\",\n \"kind\": \"tools\"\n }\n ]\n }\n ]\n}\n\n", "text": "Hi @Pavel_Duchovny! Thanks for the response. Here’s some sample documents:BeforeAfter", "username": "Jon_Madden" }, { "code": "db.updateTest.updateOne({\"userJourneyEvents.id\" : 'eid'},[{$set : { 'userJourneyEvents.$[event].touchpoints.$[touchpoints].description' : 'description' }\n }],{ arrayFilters: [ { \"event.id\": \"eid\" } , { \"touchpoints\": \"tpid0\" } ]});\ndb.updateTest.updateOne({\"userJourneyEvents.id\" : 'eid'},{\n $push : {'userJourneyEvents.$[event].touchpoints':{\n '$each': [\n {\n id: 'new_id',\n kind: 'tools',\n description: ''\n }\n ],\n '$sort': { kind: 1 }\n }}},{ arrayFilters: [ { \"event.id\": \"eid\" } ]});\ndb.updateTest.updateOne({\"userJourneyEvents.id\" : 'eid'},{$set : { 'userJourneyEvents.$[event].touchpoints' :\n[\n {\n \"id\": \"tpid0\",\n \"kind\": \"tools\",\n \"description\": \"description\"\n },\n {\n \"id\": \"new_id\",\n \"kind\": \"tools\"\n }\n ]\n}},{ arrayFilters: [ { \"event.id\": \"eid\" } ]});\n", "text": "Hi @Jon_Madden,Ok I se the issue, with this document structure and array nesting you cannot update and push to the same array in a single operation. BTW, there is no need to do an elemMatch if you want to find just one field and you can reference it using “.”.You will have to seperate those into 2:If you need ACID consistency on those you should use transactions to perform them in a single transaction.Another option is to build the array for this document on the client side and update the entire array all together:I thought on trying aggregation pipeline updates with $zip and $map but it super complex and not worth the effort.Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks @Pavel_Duchovny! I’ll go with a multi-request transaction until/unless it becomes a performance bottleneck. 
Appreciate the tip on $elemMatch.", "username": "Jon_Madden" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
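Since the thread ends on using a transaction for the two updates, a minimal Node.js sketch of that approach (it assumes an already-connected client; the database, collection, and field names follow the thread's examples, and error handling is trimmed):

const session = client.startSession();
try {
  await session.withTransaction(async () => {
    const coll = client.db("test").collection("updateTest");

    // 1) update the existing touchpoint's description
    await coll.updateOne(
      { "userJourneyEvents.id": "eid" },
      { $set: { "userJourneyEvents.$[event].touchpoints.$[tp].description": "description" } },
      { arrayFilters: [{ "event.id": "eid" }, { "tp.id": "tpid0" }], session }
    );

    // 2) push the new touchpoint into the same array
    await coll.updateOne(
      { "userJourneyEvents.id": "eid" },
      { $push: { "userJourneyEvents.$[event].touchpoints": {
          $each: [{ id: "new_id", kind: "tools", description: "" }],
          $sort: { kind: 1 }
        } } },
      { arrayFilters: [{ "event.id": "eid" }], session }
    );
  });
} finally {
  await session.endSession();
}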
$push into array while $set'ing fields of existing elements
2021-02-24T19:25:02.370Z
$push into array while $set&rsquo;ing fields of existing elements
7,390
null
[]
[ { "code": "", "text": "I have tried everything reading for the forum answers yet I am unable to connect to the MongoDB.It is fetching the below error\n2021-02-25T08:44:43.459+0000 E QUERY [js] Error: connect failed to replica set atlas-h1ukvt-shard-0/sandbox-shard-00-02.9jjvz.mongodb.net.:27017,sandbox-shard-00-00.9jjvz.mongodb.net.:27017,sandbox-shard-00-01.9jjvz.mongodb.net.:27017 :\nconnect@src/mongo/shell/mongo.js:328:13Kindly help", "username": "Anindita_Dey" }, { "code": "", "text": "Please show the command you ran or screenshot\nI can connect to your cluster", "username": "Ramachandra_Tummala" }, { "code": "", "text": "", "username": "system" } ]
Connect Failed: connect failed to replica
2021-02-25T08:48:54.278Z
Connect Failed: connect failed to replica
2,060
null
[ "data-modeling" ]
[ { "code": "", "text": "Hello everyone,I’m starting with Mongo and even after reading the documentation I’m not sure if I’m in the right way.I have more than 100 big JSONs (2 - 150Mb) with the resullt of Cucumber execution to save everyday.If I save the JSON with GridFs I can’t query in keys inside the JSON, right? So, how can I save big files and keep some information of them to make queries?Is MongoDB a right choice to save this kind of files?Thank you in advance, it will help me a lot ", "username": "Fabricio_Bedin" }, { "code": "", "text": "Is MongoDB a right choice to save this kind of files?It is a perfect choice.JSON is the native format for Mongo. Do not save with GridFS. Just use https://docs.mongodb.com/database-tools/mongoimport/ and you will be able to query all the fields.", "username": "steevej" }, { "code": "", "text": "Thank you @steevejI tried with mongoimport treating each json as a document inside a collection ‘executions’, it worked perfectely in files with less then 16Mb but I’m getting error when I try to import JSONs with more than 16Mb.3Failed: an inserted document is too large", "username": "Fabricio_Bedin" }, { "code": "{ \n \"date\" : .... ,\n \"log_entries\" :\n [\n { \"time\" : t1 , ... }\n { \"time\" : t2 , ... }\n ... 16Mb worth of log entries ...\n { \"time\" : tN , ... }\n ]\n}\n", "text": "I do not know how the files are organized so it is hard to tell. However I suspect that 1 file is 1 document, and the size limit for one document is 16Mb. There is may be a way to split that one document into its sub-documents. For example, if the document looks like:it is possible to remove the outer braces and brackets and insert only the log_entries.Could you provide a link to the problematic file? Since it may contains sensible information, I can give you an upload link to my dropbox. May be you can redact the sensible information.", "username": "steevej" }, { "code": "{\n \"datetime\": 1613433820,\n \"execution_uri\": \"\",\n \"execution_type\": \"bvt\",\n \"sut\": \"api\",\n \"squad\": \"squad_name\",\n \"scenarios_id\": [\"unique scenario ID1\", \"unique scenario ID2\", \"unique scenario ID3\"],\n \"scenarios_uri\": [\n \"features/gherkins/ms/name/v1/endpoint.feature:126:116:85:95\",\n \"features/gherkins/ms2/name/v1/endpoint.feature:7\",", "text": "thank you @steevejI’m importing all the documents inside the same collectionthe structure is something like this\n\njson_structure_example1244×892 64.2 KB\nI’m using in this way to import the documents\n\nerror_importing1901×237 26.8 KB\nhere is an example of document that I’m trying to import\n", "username": "Fabricio_Bedin" }, { "code": "", "text": "This is like I suspected. The whole file is a single document. The first fields, from datetime to duration, are some kind of an enveloped shared by features_report.What I would do is use something like jq to put the envelop fields in one document and then extract each *features_report in separate documents. The insert the envelop document in a collection (say envelops) and then each features_report into another collection (say reports) while making sure each features_report contains a reference to its envelop document.Alternatively, with jq again duplicate the envelop fields in each features_report.But honestly, if I look the different features_report, they all look alike. 
Somewhat akin to an infinite loop or recurring failure.", "username": "steevej" }, { "code": "", "text": "Yes, I think I'll save it separately like you said. Actually, each document inside the features_report key is the same only in this example; in a real execution, all of them are completely different. I was just making sure that saving the entire document wasn't an option.", "username": "Fabricio_Bedin" }, { "code": "", "text": "These videos helped me a lot to understand more about Mongo concepts.", "username": "Fabricio_Bedin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
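A minimal Node.js sketch of the split approach described above: keep the envelope in one collection and each report in another, with a reference back. It assumes features_report is a single array field and that the file fits in memory; the URI, database, and collection names are placeholders:

const fs = require("fs");
const { MongoClient } = require("mongodb");

async function importExecution(path) {
  const doc = JSON.parse(fs.readFileSync(path, "utf8"));
  // envelope = datetime, execution_uri, sut, squad, ... (everything but the reports)
  const { features_report, ...envelope } = doc;

  const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
  await client.connect();
  const db = client.db("executions");

  const { insertedId } = await db.collection("envelops").insertOne(envelope);

  // one document per feature report, each comfortably under the 16 MB limit
  const reports = features_report.map((r) => ({ ...r, envelopId: insertedId }));
  await db.collection("reports").insertMany(reports);

  await client.close();
}

importExecution("./execution.json").catch(console.error);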
How to save cucumber feature reports (large JSON files)?
2021-02-21T19:29:07.402Z
How to save cucumber feature reports (large JSON files)?
4,872
null
[ "node-js", "data-modeling", "react-native" ]
[ { "code": "", "text": "Hi,\nI’m using Realm SDK for React Natve and at the moment I’m not using sync.\nI’m looking for a way to convert JSON schema to Realm data model but so far couldn’t find any solution.Thanks for the help", "username": "Mooli_Morano" }, { "code": "", "text": "Hi Mooli,In your Realm UI navigation panel, if you navigate to Build > SDKs > Data Models you should be able to see the data models which have been translated from your JSON Schema.Does that answer your question?Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Hi thanks for the reply, but unfortunately I need to convert our JSON schema (coming from our rest api) to Realm schema on the fly.\nFor now, we are not using MongoDB / Realm sync", "username": "Mooli_Morano" } ]
JSON Schema to Realm Data Model
2021-02-24T16:48:11.913Z
JSON Schema to Realm Data Model
2,676
null
[]
[ { "code": "db.getCollection('test-search').insertMany([\n\t{ \"name\": \"Chris Evans\" },\n\t{ \"name\": \"Chris Hemsworth\" },\n\t{ \"name\": \"Chris Pine\" },\n\t{ \"name\": \"Robert Pine\" },\n\t{ \"name\": \"Chris Pratt\" }\n])\n{\n\t\"mappings\": {\n\t\t\"dynamic\": false,\n\t\t\"fields\": {\n\t\t\t\"name\": {\n\t\t\t\t\"type\": \"autocomplete\"\n\t\t\t}\n\t\t}\n\t}\n}\nconst searchTerm = ''; // Variable\ndb.getCollection('test-search').aggregate([\n\t{\n\t\t$search: {\n\t\t\tautocomplete: {\n\t\t\t\tquery: searchTerm,\n\t\t\t\tpath: 'name'\n\t\t\t}\n\t\t}\n\t}\n]).toArray();\nconst searchTerm = 'Robert'; // => \"Robert Pine\" | As expected\nconst searchTerm = 'Pine'; // => \"Chris Pine\", \"Robert Pine\" | As expected\nconst searchTerm = 'Pine Chris'; => \"Chris Pine\", \"Robert Pine\", \"Chris Evans\", \"Chris Hemsworth\", \"Chris Pratt\" | NOT what I want, I would expect only \"Chris Pine\"\nconst searchTerm = ''; // Variable\ndb.getCollection('test-search').aggregate([\n\t{\n\t\t$search: {\n\t\t\tcompound: {\n\t\t\t\tmust: searchTerm.split(' ').map((word) => ({\n\t\t\t\t\tautocomplete: {\n\t\t\t\t\t\tquery: word,\n\t\t\t\t\t\tpath: 'name'\n\t\t\t\t\t}\n\t\t\t\t}))\n\t\t\t}\n\t\t}\n\t}\n]).toArray();\nconst searchTerm = 'Robert'; // => \"Robert Pine\" | As expected\nconst searchTerm = 'Pine'; // => \"Chris Pine\", \"Robert Pine\" | As expected\nconst searchTerm = 'Pine Chris'; => \"Chris Pine\" | As expected - GREAT!\nconst searchTerm = 'Pine C'; => No resultsdb.getCollection('test-search').aggregate([\n\t{\n\t\t$search: {\n\t\t\tcompound: {\n\t\t\t\tmust: [{\n\t\t\t\t\tautocomplete: {\n\t\t\t\t\t\tquery: 'Pine',\n\t\t\t\t\t\tpath: 'name'\n\t\t\t\t\t}\n\t\t\t\t}],\n\t\t\t\tshould: [{\n\t\t\t\t\tautocomplete: {\n\t\t\t\t\t\tquery: 'R',\n\t\t\t\t\t\tpath: 'name'\n\t\t\t\t\t}\n\t\t\t\t}]\n\t\t\t}\n\t\t}\n\t}\n]).toArray();\n=> \"Chris Pine\", \"Robert Pine\"", "text": "Hi all,I’m trying to build a search that will allow the user to type multiple words in order to refine the search results. What I notice is that the default behaviour for Atlas Search is to search for each word in a search term separately.\nI think this is strange behaviour, because this means the more words you type, the more results you get. I think almost all searches I know work differently: typing more words results in less, but more specific matches.Here are some examples to explain what I want and what I’ve tried:Create a collection with the following documents:Create a search index ‘default’ on this collection:Now, lets run some queries with different search terms.\nThis is the base query I’ll use each time:These are the results with different search-terms:From the Atlas Search documentation:“If there are multiple terms in a string, Atlas Search also looks for a match for each term in the string separately.”So I guess this means match each string as an $or-condition.\nSo let’s modify the query to match each word in the search string as an $and-condition:These are the results with different search-terms:However, this does not seem to work if a word in the search term is only one letter:\nconst searchTerm = 'Pine C'; => No resultsWhy is this? 
I cannot find anything in the documentation saying that $search does not work with a single-character query. Another approach can be to put one-letter words in a should-clause:=> "Chris Pine", "Robert Pine"\nIt will also give Chris; that's fine, but it should at least put "Robert" first… So this is not very useful. I also tried a solution that puts the one-letter search terms into a separate regex query in the must-clause, but I fear for performance, and the sorting of the results is also very strange sometimes. So, to summarize:", "username": "Laurens" }, { "code": "", "text": "Hi @Laurens, I think Atlas Search by default searches as an OR expression; therefore, to ensure that terms are included you should use AND with the must or should expressions. On the other hand, I don't think single letters are being tokenized by the default analyzer. I believe you should build a custom analyzer for that: define a custom analyzer to transform and filter characters before indexing for search. I think you should specifically look at n-grams, which define the sizes of chunks. Please note that the more granular your tokenizing is, the bigger the index might grow, and this will have a performance and resources impact on your cluster. Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n\t\"mappings\": {\n\t\t\"dynamic\": false,\n\t\t\"fields\": {\n\t\t\t\"name\": {\n\t\t\t\t\"type\": \"autocomplete\",\n\t\t\t\t\"tokenization\": \"edgeGram\",\n\t\t\t\t\"minGrams\": 1\n\t\t\t}\n\t\t}\n\t}\n}\nconst searchTerm = ''; // Variable\ndb.getCollection('test-search').aggregate([\n\t{\n\t\t$search: {\n\t\t\tcompound: {\n\t\t\t\tmust: searchTerm.split(' ').map((word) => ({\n\t\t\t\t\tautocomplete: {\n\t\t\t\t\t\tquery: word,\n\t\t\t\t\t\tpath: 'name'\n\t\t\t\t\t}\n\t\t\t\t}))\n\t\t\t}\n\t\t}\n\t}\n]).toArray();\n", "text": "Hi @Pavel_Duchovny, thank you for your quick reply.\nI managed to get it to work by setting the tokenization to edgeGram with minGrams 1. Thank you very much for this solution! It also helped me to understand the logic behind the search better. For reference, this is the final solution that works:The index:The query:Because of changing the minGrams to 1, this now also works with one-letter words. I'll keep an eye out for the impact on performance.", "username": "Laurens" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Autocomplete search - Match multiple words in a search term (as AND)
2021-02-23T11:25:04.796Z
Autocomplete search - Match multiple words in a search term (as AND)
5,572
null
[ "student-developer-pack" ]
[ { "code": "", "text": "Hi! I’m currently trying to redeem the github student pack and the url : MongoDB Student Pack doesn’t seem to be working.\nI’m logged in to my github student pack and into my new mongodb account", "username": "Malcolm_St_John" }, { "code": "", "text": "Hi @Malcolm_St_JohnThank you for your message!We’re unfortunately experiencing some issues loading the website. We’re actively working on it, and we’re hoping to solve it soon.Thank you for your patience! Lieke", "username": "Lieke_Boon" }, { "code": "", "text": "This is resolved ", "username": "Lieke_Boon" }, { "code": "", "text": "", "username": "Lieke_Boon" } ]
MongoDB student URL not working
2021-02-24T01:14:15.222Z
MongoDB student URL not working
5,389
null
[ "production", "ruby", "mongoid-odm" ]
[ { "code": "", "text": "These minor releases in 7.2, 7.1 and 7.0 series primarily add Rails 6.1 support.Although most functionality should work with Rails 6.1, we are aware that the config generator is currently broken with Rails 6.1.Mongoid 7.2.1 additionally fixes the following issues:Mongoid 7.1.7 additionally fixes the following issue:", "username": "Oleg_Pudeyev" }, { "code": "", "text": "That’s great! Thanks for sharing!", "username": "Soumyadeep_Mandal" }, { "code": "", "text": "", "username": "system" } ]
Mongoid 7.2.1, 7.1.7, 7.0.12 released with Rails 6.1 support
2021-02-24T17:07:36.093Z
Mongoid 7.2.1, 7.1.7, 7.0.12 released with Rails 6.1 support
3,802
null
[ "production", "ruby" ]
[ { "code": "", "text": "This minor release in the 4.x series of bson-ruby adds the following improvements:The following bugs were fixed:", "username": "Oleg_Pudeyev" }, { "code": "", "text": "Thanks for sharing the release note!", "username": "Soumyadeep_Mandal" }, { "code": "", "text": "", "username": "system" } ]
Bson-ruby 4.12.0 released
2021-02-24T17:02:47.468Z
Bson-ruby 4.12.0 released
3,336
null
[ "atlas-functions" ]
[ { "code": "{\"error\":\"do not know how to interpret object type: HTTPResponse\",\"link\":\"https://realm.mongodb.com/groups/appidstuff/logs?co_id=requestid\"}\nconst error = !! payload.query? payload.query.error:\"\";\n if(!!error){\n console.error(`Error: ${error}`)\n response.setHeader(\"Content-Type\",\"application/json\")\n response.setBody(JSON.stringify({error:`Error en google: \\n${error}`}))\n response.setStatusCode(500)\n return response\n }\n", "text": "I keep getting this error:And I’m just trying to send a simple response based on the webhook queryI have no clue what could go wrong, and the error message is not very specific.Please help", "username": "arnaldoperez" }, { "code": "return responseresponse.setBody", "text": "Hi Arnaldo,Could you try removing the line below:return responseI believe response.setBody already returns the response object.Regards\nManny", "username": "Mansoor_Omar" } ]
Webhook Response Error: do not know how to interpret object type: HTTPResponse
2021-02-19T20:25:03.257Z
Webhook Response Error: do not know how to interpret object type: HTTPResponse
2,460
null
[]
[ { "code": "", "text": "Hello. FIRST post.\nI’ve been trying to work through the Task tracker tutorial where everything was going fairly well until I cloned the backend then was asked to run a realm-cli import in the directory created (…realm-tutorial-backend).That failed with an erroneous error and I’d have no idea where to go from there? Anybody try that successfully and perhaps have an idea of where to even start?error:realm-cli import\nopen C:\\Users<user>\\OneDrive…\\GIT\\realm-tutorial-backend\\environments: The system cannot find the file specifiedMight as well try this forum as I’m dead in the water at this point lolThanks in advance\nC", "username": "Colin_Poon_Tip" }, { "code": "realm-clirealm-clinpm install -g [email protected]\nrealm-cli", "text": "Hi Colin!Welcome to the forum.It looks like a recent version of realm-cli broke compatibility with our “Set up the Realm App” backend tutorial. The easiest way around this would be to downgrade your local copy of realm-cli to a version that works with the tutorial – in this case, 1.1.0. To do that, run the following command:And then proceed through the tutorial after the “install realm-cli” step. We’re working on a fix in the next version of realm-cli, but this should fix the issue for the time being.", "username": "Nathan_Contino" }, { "code": "", "text": "I’ll give this a shot. Interesting enough, right after I posted I got past the error as it seems the folder “environments” wasn’t created. So after creating it it went to the NEXT error which was:realm-cli import\nfailed to diff app with currently deployed instance: error: error finding cluster “Cluster0”: No cluster named Cluster0 exists in group 5ed7ca2481c1442ab952b7a8However, I concede and will try the downgrade you’ve suggested. Thanks!!brb", "username": "Colin_Poon_Tip" }, { "code": "npm install -g [email protected]", "text": "npm install -g [email protected] like I’ve achieved success!!Much appreciated.", "username": "Colin_Poon_Tip" }, { "code": "", "text": "Ultimately it still failed. After I got to the error looking for Cluster0 I ended up gutting my entire workspace in Compass and started a new project as well as a cluster which I called iMoCluster01. Went through the entire process again but still dead trying to find Cluster0. I’m “guessing” if I killed everything again and named my Cluster Cluster0 it might work, but is that the only way? I’m did a “scaling” webinar a couple days ago and it puts me in a “pickle” if I can’t name my own cluster while I work/learn. I’m sure when I’m “ready” for showtime I don’t want my first production cluster called Cluster0. For reference the output was as such:realm-cli import\nthis app does not exist yet: would you like to create a new app? [y/n]: y\nApp name [tasktracker]:\nAvailable Projects:\nPOS+ - 5ed7ca2481c1442ab952b7a8\nAtlas Project Name or ID [POS+]:\nLocation [US-VA]:\nDeployment Model [GLOBAL]:\nNew app created: tasktracker-oehpk\nCreating draft for app…\nDraft created successfully…\nImporting app…\nfailed to import app: error: error finding cluster “Cluster0”: No cluster named Cluster0 exists in group 5ed7ca2481c1442ab952b7a8.", "username": "Colin_Poon_Tip" }, { "code": "", "text": "You should be able to just change the name of the cluster in the backend app config on this line: https://github.com/mongodb-university/realm-tutorial-backend/blob/final/services/mongodb-atlas/config.json#L5\nin the services/mongodb-atlas/config.json file you pulled down during step E. Let me know if that works! 
The tutorial is definitely written with the intent of you creating a new cluster with the default name, but I believe that's the only place where the name is referenced in the configuration. Sorry for the confusion!", "username": "Nathan_Contino" }, { "code": "", "text": "That seems to have worked. Guess my search isn't great, as at one point I searched for "Cluster0" but it didn't return that file? Hmph. I'm a little stunned by the support I'm receiving. Am I to expect a bill later, as this is redic!! So far so good. I'll plow ahead.\nTHANKS!!\nC", "username": "Colin_Poon_Tip" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Task tracker tutorial fails to import backend
2021-02-24T21:04:30.761Z
Task tracker tutorial fails to import backend
3,160
null
[]
[ { "code": "", "text": "I want to know where i can see default value of below server params :ShardingTaskExecutorPoolMinSize\ntaskExecutorPoolSize\nShardingTaskExecutorPoolMaxConnectingI also want to change default values of these params. Let me know how can we do that", "username": "Sumit_Kandoi" }, { "code": "ShardingTaskExecutorPoolMinSize", "text": "Hi @Sumit_Kandoi,\nWelcome to MongoDB Developer Community Forums — thanks for contributing!I want to know where i can see default value of below server params\nShardingTaskExecutorPoolMinSize\ntaskExecutorPoolSize\nShardingTaskExecutorPoolMaxConnecting\nI also want to change default values of these params. Let me know how can we do thatYou should be able to find all the default values for each of the parameters you have mentioned in the MongoDB Server Parameters page as well as examples of how to set non-default values for the parameters.I.e. Please see this link for the parameter ShardingTaskExecutorPoolMinSizePlease also note, the documentation I have linked is for the latest MongoDB server version 4.4. You change to the appropriate server version you are using by expanding the dropdown list on the top left corner of the documentation to select the version of MongoDB you are using.Hope this helps.\nJason", "username": "Jason_Tran" } ]
Mongos: How to change Connection Pool Params in the Mongo Server
2021-02-24T19:22:47.503Z
Mongos: How to change Connection Pool Params in the Mongo Server
2,439
null
[ "atlas-functions", "app-services-cli" ]
[ { "code": "", "text": "Is there a way to run Realm cloud function from within the realm-cli?", "username": "cyberquarks" }, { "code": "realm-cli", "text": "realm-cli is mostly meant for importing and exporting app configurations (basically, creating, updating, and deleting apps), and doesn’t have any means of running a Realm cloud function, unfortunately. I can think of a few different ways to implement command-line execution of Realm functions, though:", "username": "Nathan_Contino" }, { "code": "", "text": "Can you call a Realm function from the Admin API even if it is in private mode?", "username": "cyberquarks" }, { "code": "", "text": "I’m not sure what you mean by “private mode.” The Admin API always requires authentication, and you’ll have to write that logic yourself, so it’s probably the hardest of these three methods to implement. Personally I would recommend the webhook method with a small CLI utility that uses your favorite HTTP library.", "username": "Nathan_Contino" }, { "code": "{\n \"name\": \"processCsv\",\n \"private\": false\n}\n", "text": "Yes, I understand that, I mean functions have this configuration, for example:In my case, my functions are all private true so it wont be called from client-based code with the Realm SDK.I’ve posted on StackOverflow, the idea of what I am trying to achieve:", "username": "cyberquarks" }, { "code": "", "text": "Aha, now I see what you mean. Sorry for misunderstanding earlier.For this use case, I would definitely recommend using the Admin API endpoint. I’m sure you can encode the authentication logic into your IDEA plugin. And since you’re trying to build something that lets users test functions, the Admin API endpoint is perfect (since it’s meant for testing functions).The function privacy setting shouldn’t matter in this case. See the documentation on the function privacy setting – basically, setting a function to “private” stops users from calling your function directly using the Realm SDK. This is useful for situations where you want to write logic for some privileged functionality (like updating a user’s custom data, or administrating something) but you don’t want to expose that functionality to all users through a typical function call.The privacy setting doesn’t impact running a function through the Admin API. The Admin API is an administrative feature by nature – you can use it to add, update, and remove logic, functions, rules, etc. in the backend. If you think about it, you could use the Admin API to set any function to “private: false” and then run it anyway… so it doesn’t even make sense for the privacy setting to apply there. So that should work well for your use case. Let me know if you hit any other snags! This sounds like a really cool project and is something I’ve actually wished for myself in the past when developing projects with Realm.", "username": "Nathan_Contino" }, { "code": "username\napiKey\ngroupId\nappId\ngroupIdappIdgroupIdconfig.json", "text": "I started working with the plugin, so far what I have figured out that is required by the API are:All is good except for the groupId since there’s no group id that is part of the exported realm app, the appId can be found from the exported, the userame/apiKey can be configured on the Plugin setting form manually.Where can the groupId be found? 
Or can it be part of the exported application?My plan for the plugin is: you open an exported Realm app using WebStorm or IntelliJ, then the plugin will figure out the functions automatically from the config.json files in the functions folder. Creating functions through the context menu is also planned: right-click, then create a function, configuring the input data, name, privacy, etc. And depending on which source is open, a custom run terminal can trigger the cloud function based on the parameters typed, similar to the console of Realm cloud.", "username": "cyberquarks" }, { "code": "https://realm.mongodb.com/groups/<group id>/apps/<app id>/values", "text": "If you open up the Realm UI in a browser, take a look at the URL. It should look something like this:(I know, I know – it's confusing that the App ID for the Admin API is not the same as the App ID for the client SDKs. Sorry about that.)You should be able to extract your group and app IDs from there to use in the Admin API. I suppose your users might have to do that same process manually, but there also might be a way to get them programmatically if that sounds like a huge inconvenience. I believe the group ID actually identifies your Atlas organization.", "username": "Nathan_Contino" }, { "code": "", "text": "I see, okay, I will try this.", "username": "cyberquarks" } ]
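Of the options discussed in this thread, the webhook route is the simplest to script. A minimal Node.js sketch of invoking a function through an incoming webhook over HTTP (the URL below is a placeholder pattern; copy the real one from your webhook's settings page, and note that a secret query parameter is only one of the possible authentication schemes):

const https = require("https");

// Placeholder: substitute the URL shown in your webhook's settings page.
const url =
  "https://webhooks.mongodb-realm.com/api/client/v2.0/app/<app-id>/service/<service>/incoming_webhook/<webhook-name>?secret=<secret>";

https.get(url, (res) => {
  let body = "";
  res.on("data", (chunk) => (body += chunk));
  res.on("end", () => console.log(res.statusCode, body));
});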
Run Realm function from realm-cli
2021-02-18T10:32:11.104Z
Run Realm function from realm-cli
3,134
null
[ "kotlin" ]
[ { "code": "override suspend fun login(email:String, password: String) {\n val completableDeferred = CompletableDeferred<App.Result<User>>()\n val credentials = Credentials.emailPassword(email, password)\n app.loginAsync(credentials) {\n completableDeferred.complete(it)\n }\n onResult(completableDeferred, LoginException())\n}\n\nprivate suspend fun onResult(completableDeferred: CompletableDeferred<App.Result<User>>, exception: MyException) {\n val result = completableDeferred.await()\n if (!result.isSuccess) {\n result.error.errorMessage?.let { exception.errorMessage = it }\n throw exception\n }\n}\n", "text": "I’m fairly new to using MongoDB, but I’m finding it quite difficult to find a way to test my Mongo code. Particularly static methods, like Credentials.emailPassword or Realm.init. For example, how would I test the following using JUnit in Kotlin?Thank you for any help.", "username": "Joe_Barker" }, { "code": "", "text": "Hi Joe!Welcome to the forum.I’m actually putting together a guide on this exact subject right now for the Android SDK, but in the meantime, I can point you toward the testing environment we use to test the code snippets that occur in the documentation: https://github.com/mongodb/docs-realm/tree/master/examples/android/syncBasically, if you’re trying to test functionality that requires a connection to your backend Realm App, you have two options:Personally I like the idea of end-to-end testing since backend function logic often ends up intermingled with frontend client logic, but depending on your use case, mocking might be your best option.I’ve spent a good amount of time over the past few months setting up tests for Realm apps, so happy to help with any other questions on the subject.", "username": "Nathan_Contino" }, { "code": "", "text": "Hi Nathan, I would ideally want to mock the backend, so these examples are great, thank you. I’ll let you know if I have any questions.", "username": "Joe_Barker" } ]
Unit Testing With JUnit in Kotlin
2021-02-24T19:22:52.840Z
Unit Testing With JUnit in Kotlin
4,365
null
[]
[ { "code": "", "text": "I am unable to connect to Atlas ClusterI used the following command in the terminal also have pasted the same command in the text editor for referencethe command I am using is\n“mongo “mongodb+srv://sandbox.kp34u.mongodb.net/myFirstDatabase” --username m001-student”When I run the command in the terminal In IDE i get following error3 total, 0 passed, 0 skipped:\n[FAIL] “Successfully connected to the Atlas Cluster”Did you use the right command with the Atlas provided connection string?[FAIL] “The cluster name is Sandbox”Did you name your Atlas cluster Sandbox?[FAIL] “The username is m001-student”Did you create a username m001-student?What should I do? if any one can correct me", "username": "Shubham_Raj" }, { "code": "", "text": "Post a screenshot of the whole IDE. We need to see the terminal where you enter the command. Most likely you did not press the [ENTER] to submit the command to the shell.", "username": "steevej" }, { "code": "", "text": "image856×635 16.1 KBgeting same error ,i am not being prompted to enter password after pressing enter on keyboard", "username": "Noor_Ahamed_S" }, { "code": "", "text": "Please revise the lesson where the IDE is presented. You entered the command in the wrong area. You must enter the command in ths terminal area.", "username": "steevej" }, { "code": "", "text": "@steevej-1495 Issue solved it was some issue with the IP whitelisting", "username": "Shubham_Raj" }, { "code": "", "text": "", "username": "system" } ]
Unable to connect to Atlas Cluster
2021-02-23T21:45:47.970Z
Unable to connect to Atlas Cluster
4,165
null
[ "aggregation", "queries", "attribute-pattern" ]
[ { "code": "{\n\t\"_id\" : \"5fdb9469aa7d50693d33f522\",\n\t\"typeName\" : \"POApproval\",\n\t\"milestones\" : [\n\t\t{\n\t\t\t\"name\" : \"rejected\",\n\t\t\t\"lastReached\" : null\n\t\t},\n\t\t{\n\t\t\t\"name\" : \"sent\",\n\t\t\t\"lastReached\" : null\n\t\t},\n\t\t{\n\t\t\t\"name\" : \"cancelled\",\n\t\t\t\"lastReached\" : null\n\t\t},\n\t\t{\n\t\t\t\"name\" : \"totalApproved\",\n\t\t\t\"lastReached\" : null\n\t\t},\n\t\t{\n\t\t\t\"name\" : \"supplierApproved\",\n\t\t\t\"lastReached\" : ISODate(\"2021-02-09T14:35:35.941Z\")\n\t\t}\n\t],\n\t\"tags\" : [\n\t\t{\n\t\t\t\"name\" : \"supplierCode\",\n\t\t\t\"value\" : \"9\"\n\t\t},\n\t\t{\n\t\t\t\"name\" : \"costCenter\",\n\t\t\t\"value\" : null\n\t\t}\n\t],\n\t\"metrics\" : [\n\t\t{\n\t\t\t\"name\" : \"total\",\n\t\t\t\"value\" : \"9\"\n\t\t}\n\t]\n}\ndb.Contexts.aggregate([\n { $project: { _id: 0, tags: \"$tags\" } },\n { $unwind: \"$tags\" },\n { $match: { $expr: { $eq: [\"$tags.name\", \"supplierCode\"] } } },\n { $group: { _id: { tag: \"$tags.value\", count: { $sum: 1 } } }\n])\n", "text": "Hi everyone!I´m wondering about the best way to aggregate a collection using the attribute pattern.My documents looks this way:If I need to group by supplierCode, I´m executing a query like this:Is there a better way to improve it and avoid a collection scan?Thanks in advance.", "username": "faramos" }, { "code": "db.collection.aggregate([\n { \n $match: { \"tags.name\": \"supplierCode\" } \n },\n { \n $unwind: \"$tags\" \n },\n { \n $group: { \n _id: { \n suppliers: { \n $cond: [ { $eq: [ \"$tags.name\", \"supplierCode\" ] }, \"$tags.value\", null ] \n }\n },\n count: { $sum: 1 }\n }\n },\n { \n $match: { \"_id.suppliers\": { $ne: null } } \n },\n]) \n\"tags.name\"", "text": "Hello @faramos,Here is, I think, is a better way to aggregate the query.Now, you can define an index on the \"tags.name\" array field (a Multikey Index) and the query will benefit from it.", "username": "Prasad_Saya" } ]
Group by using an Attribute Pattern
2021-02-24T14:27:58.591Z
Group by using an Attribute Pattern
3,741
null
[ "replication" ]
[ { "code": "", "text": "Hi Team,In our existing MongoDB Replica cluster , we have a 3-node replica set that was running with a configuration of priority 0.5 - 0.5 - 1.We tried to change priority for two nodes from 0.5 to 1 to make it equivalent for all three node as “1” using rs.conf() and rs.reconfig().But we get success in changing only one node priority from 0.5 to 1.Even though configuration and re-configuration command successfully executed. Why not one node priority changed in configuartion.It’s showing only 0.5 and cluster priority becomes 0.5-1-1.Again next thing we come to notice that, one of the node which was primary in running change its priority from 1 to 2.We look log it’s showing “Scheduling priority takeover at -----” during that time.And finally, cluster priority becomes “2-1-0.5” which is not our desired required output.\nPlatform : WindowsKindly help us to understand the issue and why it happened.", "username": "Nagesh_Dnyandeo_Kamb" }, { "code": "rs.status", "text": "Hello @Nagesh_Dnyandeo_Kamb,Please tell about the version of MongoDB and that of the Operating System. Also, include the configuration used for changing the priorities of the replica-set nodes, configuration after applying the changes and the rs.status.The information is required to fully understand and look into the issue you are encountering.", "username": "Prasad_Saya" } ]
Does not change priority of one of the nodes in MongoDB replica set
2021-02-24T11:16:59.766Z
Does not change priority of one of the nodes in MongoDB replica set
2,094
null
[]
[ { "code": "", "text": "I noticed that the data that was saved into a database that I created with the application connected to the local shell does not show in Mongo Atlas when the application is connected to the Atlas. But the data added after connected to the Atlas does show. Is this by design or there is something I am missing to get data saved on local mongo shell to show in the cluster?", "username": "Blue_Sky" }, { "code": "", "text": "application connected to the local shellCan you post a screenshot of how you did the above? I suspect that the application was connected to a local mongod rather than Atlas because the is no reason why the data is not in Atlas if you added it while being connected to Atlas.", "username": "steevej" } ]
Why does data created in the Mongo Shell not show in MongoDB Cloud when connected?
2021-02-24T02:56:28.908Z
Why does data created in the Mongo Shell not show in MongoDB Cloud when connected?
2,399
null
[ "installation" ]
[ { "code": "{\"t\":{\"$date\":\"2021-02-24T00:57:29.745-03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2021-02-24T00:57:29.772-03:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"config.system.sessions\",\"index\":\"_id_\",\"commitTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2021-02-24T00:57:29.772-03:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"config.system.sessions\",\"index\":\"lsidTTLIndex\",\"commitTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n", "text": "Hi, I’m trying to start learning MongoDB, and following the installation instructions I get what’s below when executing mongod.exeWhat do I have to do?", "username": "Daniel_Tkach" }, { "code": "mongod", "text": "Hello @Daniel_Tkach, welcome to the MongoDB Community forum!Please tell what version of MongoDB you have installed and the operating system and its version. What instructions have you tried and how did you start mongod?", "username": "Prasad_Saya" }, { "code": "", "text": "The message Waiting for connections … seems to indicate that mongod is running correctly. Start a new terminal and try to connec with the mongo command.", "username": "steevej" } ]
Cannot start mongod
2021-02-24T05:56:27.190Z
Cannot start mongod
4,389
null
[ "database-tools" ]
[ { "code": "", "text": "Need to know what kind of datatype I need to use for the values (+00000.30) and (-00000.32) while loading data into MongoDB using mongoimport with columnsHaveTypes and fields", "username": "ganga_prasad" }, { "code": "type Supported Arguments\t Example Header Field\ndecimal() None price.decimal()\ndouble()\tNone revenue.double() \ndoubleNumberDecimalNumberDecimal", "text": "Hello @ganga_prasad, welcome to the MongoDB Community forum!From mongoimport --columnsHaveTypes the following are possible types for your use case:The default data type for numeric data in MongoDB is double. Depending upon your use case you can consider using NumberDecimal. This data type has capability for decimal rounding with exact precision. See Data Types in mongo Shell for notes on NumberDecimal.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you Prasad for your feedback. I have tested loading data with both data types decimal and double and the data after loading looks like ‘0.30’. I’m not able to load source data ‘+00000.30’.", "username": "ganga_prasad" }, { "code": "", "text": "‘+00000.30’If you are looking for data to be formatted exactly like that - then that would be of string type. Then, that would be of not much use if you are using it in calculations. It is good for display purposes. You can use string type and convert to numeric type during calculations.What is the reason you want the data to be in that format? It is unusual for numbers to be like that - it can only be a string.MongoDB provides aggregation operators (see Type Expression Operators) to convert from one data type to another. During your operations you can apply these operators to convert from one format to another as per your need. Finally, it all depends upon your use case, the application, the queries, etc.", "username": "Prasad_Saya" }, { "code": "", "text": "As per data functionality + sign indicates credit and - sign indicates debit amount, if there is no specific dedicated data type which can hold both sign and decimal value in MongoDB.", "username": "ganga_prasad" }, { "code": "NumberDecimal-", "text": "I think it is a design problem - how you model the data. Ideally, since it is a currency field the data type should be NumberDecimal with a negative (-) sign for negative values. The default is positive values. So, all positives are credits and negatives are debits.Another way to approach this is to store everything as positive, and introduce a new field which indicates the amount is a credit or debit.See this topic: Model Monetary Data.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you prasad ! Let me get back once I get reply from customer.basically we are migrating dada from IBM mainframe source data to MongoDB.mainframe data will be in flat files.", "username": "ganga_prasad" }, { "code": "", "text": "As per the above update customer has changed the data set and removed the sign for decimal data now data we have is one field (9.2) and other field has (13,2) eg : 000000000.30 when i’m using NumberDecimal() data loading is failed due invalid datatype …i tried using double datatype and decimal getting same error.", "username": "ganga_prasad" }, { "code": "", "text": "I see you have various fields with different formats to import into the MongoDB database.The available data types in MongoDB can accommodate all types of fields. The remaining aspect is your application and processes. It is common to have data formats of wide range during data migration from legacy systems. 
It is the process and the models that should take care of these scenarios. I have already provided you some ideas about data types and possibilities. I hope you will be able to figure out the rest (I am afraid it goes beyond the scope of this topic to discuss every field and datatype you are encountering).", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
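Following up the discussion of typed columns, a sketch of what the typed header and import command could look like for these values. The database, collection, and file names are made up for illustration; if mongoimport rejects the signed, zero-padded form as-is, import the column as string() and convert afterwards with $toDecimal:

# amounts.csv, with the typed header as its first line:
#   account.string(),amount.decimal()
#   A1,+00000.30
#   A2,-00000.32
mongoimport --db=finance --collection=ledger --type=csv \
  --headerline --columnsHaveTypes --file=amounts.csv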
What datatype is to be used for the values (+00000.30) and (-00000.32) while loading with mongoimport with columnsHaveTypes and fields options?
2021-02-22T19:18:47.032Z
What datatype is to be used for the values (+00000.30) and (-00000.32) while loading with mongoimport with columnsHaveTypes and fields options?
3,246
null
[ "react-native" ]
[ { "code": "", "text": "I was trying to make a database for android and windows apps using react native. is it possible to do? is the realm database support for android and windows.", "username": "Krishna_Mani" }, { "code": "", "text": "I don’t think its possible yet as I came across and error building an app for macOS", "username": "Tony_Ngomana" } ]
Does Realm DB Support React Native Windows apps?
2020-12-16T18:48:13.822Z
Does Realm DB Support React Native Windows apps?
2,382
null
[ "data-modeling" ]
[ { "code": "", "text": "I have a normalized relational data model. I now need to implement this is MongoDB. I know MongoDB is essentially a document database but I want to do the following:Create a schema that is inline with the relational model. This will be used to store metadata about a document.Create a separate object to store the document itself. Both need to support querying.Is this a valid approach or is there a better way?TIA,\nBill", "username": "William_Jordan" }, { "code": "", "text": "Hello @William_Jordan, welcome to the MongoDB Community forum!Please post the details of the relational model you have and how you are going to model it in MongoDB. Then, we can discuss if the approach works fine and if there are any alternatives. It is little difficult to visualize your idea without the actual model.If you haven’t already browsed through the documentation, here is a place to start with: Data Model Design.", "username": "Prasad_Saya" } ]
Database design for normalized relational data model
2021-02-23T18:40:53.225Z
Database design for normalized relational data model
1,533
null
[ "react-native", "backup" ]
[ { "code": "Realm.acopyBundledRealmFiles('../realmdb/')~/workspace/realmdb/", "text": "Hi,I got a React native (0.63) project and I’m using realm nosync.\nI’m trying to create a copy of my db and save it in my project folder.using Realm.acopyBundledRealmFiles('../realmdb/') throw a permission error\nusing ~/workspace/realmdb/ throw an error sayign folder not exsit.At this point, I’m not sure if it’s trying to save it to the actual device or my machine either way I’m stuck.Thanks for the help.", "username": "Mooli_Morano" }, { "code": "", "text": "Hi. Realm.db file is stored inside of the application bundle.If you would like to copy it from your phone to the desktop please follow the steps described here. https://support.realm.io/support/solutions/articles/36000064525-how-do-i-locate-realm-files-on-a-device-or-simulator-If you would like to open realm file on desktop, please use Realm Studio https://docs.realm.io/sync/realm-studio", "username": "Sergey_Gerasimenko" } ]
How can I back up the Realm DB on my machine?
2021-02-11T09:28:52.318Z
How can I back up the Realm DB on my machine?
3,527
null
[ "node-js" ]
[ { "code": "", "text": "Hi,I am trying to using Realm nodejs sdk on an electron app which uses Angular components.When I try to build the app, i get errors such as below.In reality, since my app is purely electron and not going to run on browser as a web-app i am not expecting dependency to below.Did anybody face this issue?ERROR in ./node_modules/node-pre-gyp/lib/unpublish.js\nModule not found: Error: Can’t resolve ‘aws-sdk’ in ‘C:\\SLBCodeevvv\\FDPUI\\apps\\ElectronAngularRealm\\node_modules\\node-pre-gyp\\lib’ERROR in ./node_modules/node-pre-gyp/lib/publish.js\nModule not found: Error: Can’t resolve ‘aws-sdk’ in ‘C:\\SLBCodeevvv\\FDPUI\\apps\\ElectronAngularRealm\\node_modules\\node-pre-gyp\\lib’ERROR in ./node_modules/node-pre-gyp/lib/info.js\nModule not found: Error: Can’t resolve ‘aws-sdk’ in ‘C:\\SLBCodeevvv\\FDPUI\\apps\\ElectronAngularRealm\\node_modules\\node-pre-gyp\\lib’ERROR in ./node_modules/realm/lib/browser/index.js\nModule not found: Error: Can’t resolve ‘react-native’ in ‘C:\\SLBCodeevvv\\FDPUI\\apps\\ElectronAngularRealm\\node_modules\\realm\\lib\\browser’Thanks,\nVenkat", "username": "V_J" }, { "code": "npm install realm@latest", "text": "Hi, Can you elaborate a bit about how do you get this error. What does it means “When I try to build the app”. Are you packaging the app using electron-packager or something else.I would suggest looking into our docs for Electron here https://docs.mongodb.com/realm/sdk/node/integrations/\nalso there is a sample project you could look at here GitHub - mongodb-university/realm-electron-advanced-quickstart: An advanced guide to creating an Electron app with MongoDB Realm and writing to a Realm from multiple processes.If you need to update realm in these samples you could do a npm install realm@latestI tried to reproduce the error you get but I couldn’t on my machine.cheers", "username": "Lyubomir_Blagoev" } ]
Package errors on Realm with Electron using Angular
2021-02-09T19:18:06.104Z
Package errors on Realm with Electron using Angular
2,757
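The errors above are webpack attempting to bundle realm’s native module together with its optional dependencies. A common workaround (an assumption here, since the thread never shows the actual build config) is to keep realm out of the renderer bundle:

```javascript
// Sketch of a webpack config fragment for the Electron renderer build.
// Declaring realm as a commonjs external stops webpack from trying to
// resolve its optional deps (aws-sdk via node-pre-gyp, react-native, ...).
module.exports = {
  target: "electron-renderer",
  externals: {
    realm: "commonjs realm"
  }
};
```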
https://www.mongodb.com/…d71331d2f693.png
[ "crud", "graphql" ]
[ { "code": "", "text": "I have the following document:[User]\nI want to insert one more object it utms array.\nWhen I do this with the following graphql mutation, it’s not inserting a new one.It’s ovewriting the existing one, not adding one more item in the array.\nWhat is the correct way to do it with graphql mutation?\nThanks!", "username": "Fanka_Bacheva" }, { "code": "set: { utms: {link: [\"6000d6ee8e7e7e17ca723899\", \"600101d2e2b2ccf6c995f77d\"]}", "text": "Hi Fanka,Thanks for creating your first post and welcome to the MongoDB community!It appears that appending to an array is not yet possible with graphQL mutations.You would have to include all the items in the array that you want to have which will replace whatever value is currently present.Example:\nset: { utms: {link: [\"6000d6ee8e7e7e17ca723899\", \"600101d2e2b2ccf6c995f77d\"]}This has been requested as a feature in our product feedback forum below:It would be helpful to support array add/remove in update mutations. I've been running into this most often when dealing with one-to-many relationships.\n\nFor example, adding/removing books from an author. Currently, you would have to fetch the entire...Please go ahead and vote on this idea if you’d like to receive updates on it’s progress.I also found this posted as an issue on our graphql repo which you could subscribe to:Hi,\n\nCurrently when we want to edit an array by adding an element we need to c…reate a mutation with the full list of existing elements plus the new one, am I correct?\n\nIf yes, the payload can be quite big and 2 concurrent users will overlap each other. Is there a way to do it differently and just provide the new element in a mutation?Regards\nManny", "username": "Mansoor_Omar" } ]
Update existing document and push a value to an array
2021-02-21T19:31:40.015Z
Update existing document and push a value to an array
3,155
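Put together, the replace-style workaround looks roughly like the sketch below. The mutation name and argument shapes are guesses based on the thread, since the generated GraphQL schema depends on the app’s own types:

```javascript
// Sketch: send the whole utms array (existing items plus the new one),
// because set replaces the field rather than appending to it.
const mutation = `
  mutation {
    updateOneUser(
      query: { _id: "<user id>" }
      set: { utms: { link: ["6000d6ee8e7e7e17ca723899", "600101d2e2b2ccf6c995f77d"] } }
    ) {
      _id
    }
  }
`;
```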
null
[]
[ { "code": "", "text": "I am transforming a bunch of our data from one collection and creating a new collection from it. Our database uses String type _id as the unique key so using the default key creation on $out / $merge is not possible as it creates ObjectIds. Is there a way I can set my own String _id or create a random String sequence to use for this instead?The idea here is NOT to make round trips to and from the server and database. I could of course pull it all down add new _ids individually and insert but this is a large dataset and I want to avoid that if possible.Also I have looked into mapReduce which would allow me to do this but then I end up with a funny document shape where the fields are all under ‘value’ and I would need to reshape them all.I’ve also looked into just setting an _id in the aggregation using ObjectId().str or similar but it only executes this code once meaning each _id is identical.Perhaps this isn’t possible but would love to know your thoughts/tips", "username": "James_Harding" }, { "code": "_id: ObjectId().valueOf()", "text": "Was there any resolution to this? I think I’m running into the same problem. I tried using $addFields in the aggregation before $merge to create a new _id: ObjectId().valueOf(), but it seems to want to insert the same ObjectId for every single record, which was a bit unexpected.Anyone else have any suggestions for how to approach this?", "username": "Colin_Gray" } ]
Generate unique string type _id during an aggregation for multiple documents
2020-07-13T16:32:13.300Z
Generate unique string type _id during an aggregation for multiple documents
3,279
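One server-side option is sketched below. It assumes MongoDB 4.4.2 or newer (where the $rand operator exists) and uses placeholder collection names; uniqueness is only probabilistic, so treat this as a sketch rather than a guaranteed-unique key generator:

```javascript
// ObjectId() in the shell is evaluated once for the whole pipeline, but
// $rand is evaluated per document, which is what makes each _id differ.
db.source.aggregate([
  { $set: {
      _id: { $concat: [
        { $toString: { $toLong: "$$NOW" } },                     // run timestamp (ms)
        "-",
        { $substrCP: [ { $toString: { $rand: {} } }, 2, 12 ] }   // random digits
      ] }
  } },
  { $merge: { into: "target" } }
])
```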
null
[ "upgrading" ]
[ { "code": "", "text": "Built two new test Mongodb environments using replication at release 3.4 to replicate my production environment. I need to upgrade as 3.4 no longer supported.MongoDB running on local virtual machines on Linux using RHEL 7 and replicate between the two servers.I upgraded from 3.4 to 3.6, 4.0, 4.2 then 4.4. I took backups with unique names before doing each upgrade and verified things work working proper after each upgrade.To make sure I have a properly documented backout plan if needed for my production environment, I tried to restore from release 4.4 (or where ever the upgrade it at) to 3.4 using backups since if anything fails the desire is to go back to the beginning and try again later.I’ve tried using file backups, using mongodump / mongorestore neither which are working.Under MongoDB Atlas I wouldn’t be able to restore to previous versions.Does this leave the only option is to follow the manual downgrade process?", "username": "Delinda_Benson" }, { "code": "", "text": "MongoDB running on local virtual machines on Linux using RHEL 7 and replicate between the two servers.A healthy replicaset requires 3 nodes. You’re adding a third right now aren’t you?I’ve tried using file backupsThese need to be a filesystem snapshot or copied when the mongod is stopped. In my opinion this is the most complete and easiest to restore. What issues are you having?Does this leave the only option is to follow the manual downgrade process?In the absence of a working backup, yes.", "username": "chris" }, { "code": "", "text": "Chris,\nIn production with have 3 servers. For testing the upgrade and backout plan, we set up only two to simplify the testing.\nI will try using the file backups again today. After reloading the files, then starting the primary server then secondary server, an error occurs when entering mongo to access it. I’ll recreate this today and send the exact message.\nI was hoping to use mongorestore. I used mongodump when on release 3.4 but no that the environment is on 4.4, it seems the mongorestore is from 4.4 and I need a mongorestore from 3.4", "username": "Delinda_Benson" }, { "code": "", "text": "When I tried going back to using release 4.4 (the level before doing a restore), I started getting space related errors. I’ve had to clean up old log files. Since Linux and MongoDB are new to me, this took a little bit to figure out. I have tried so many things, I did an uninstall and now I’m doing the install and upgrade again so I have a clean process to try the file backup from. I’ll update this again when I’m done.", "username": "Delinda_Benson" }, { "code": "", "text": "The final resolution for this issue was I need to have the binaries in place for the same release in place that was used when the mongodump occurred. Once I had the same release installed, the mongorestore worked since it was for the same major release version.\nThank you.", "username": "Delinda_Benson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upgrade 3.4 to 4.4 and backout plan
2021-02-17T18:40:10.651Z
Upgrade 3.4 to 4.4 and backout plan
2,542
null
[ "python" ]
[ { "code": "<report_name>_data = {\"ID\": ..., \"DATE\":..., \"NAME\": ..., etc.}import pymongo\nclient = pymongo.MongoClient('127.0.0.1', ...)\n\n## started with the normal Mongo docs structure ...\nAPP_NAME = 'test' # DB NAME\nCLIENT_NAME = 'hanna_barbera' # COLLECTION NAME\n#db = client[APP_NAME] \n#db_collection = db[CLIENT_NAME] \n\n## then I just kept going!\nVENDOR_NAME = 'spacely_sprockets' # ??\nREPORT_NAME = 'contact_list' # ??\n#coll2 = db_collection[VENDOR_NAME]\n#coll3 = coll1[REPORT_NAME]\n\n## Actually, I don't need these other objects, so this works too...\ncoll3 = client[APP_NAME][CLIENT_NAME][VENDOR_NAME][REPORT_NAME]\n", "text": "I’m noob to Mongo, so I was toying around with how to model some simple flat table data I want on Mongo: <report_name>_data = {\"ID\": ..., \"DATE\":..., \"NAME\": ..., etc.}What tricky is structuring the collections and database connections. I made some tests and got something to work, but I don’t know what I did…Looking in the documentation and online for things like, nested, array, collection, I don’t find any similar examples. Everything I read says, db at the first level, then collections, then documents.Are VENDOR_NAME and REPORT_NAME still “collections”? Or, does something else happen?", "username": "xtian_simon" }, { "code": ">>> client['test']['hanna_barbera'].full_name\n'test.hanna_barbera'\n>>> client['test']['hanna_barbera']['spacely_sprockets']['contact_list'].full_name\n'test.hanna_barbera.spacely_sprockets.contact_list'\n>>> client['test']['hanna_barbera.spacely_sprockets.contact_list'].full_name\n'test.hanna_barbera.spacely_sprockets.contact_list'\ndata = {\"ID\": ..., \"DATE\":..., \"NAME\": ..., etc.}\ndata[\"vendor\"] = \"spacely_sprockets\"\ndata[\"report\"] = \"contact_list\"\nclient.test.reports.insert_one(data)\n", "text": "Yes those are collections. The name of a collection is provided via the full_name attribute: collection – Collection level operations — PyMongo 4.3.3 documentationNote that different collections in the same database are completely unrelated no matter the naming convention.It might make sense to put all the reports in a single collection, eg ‘test.reports’. You can add a “vendor” and “report” field to every document. Eg:", "username": "Shane" }, { "code": "", "text": "So is it half-dozen of one and 6 of the other? If it’s simple to insert and recall the collection with the name example I made. Or, insert & filter & query by new fields added to the input data.I noticed the post was moved to Drivers & ODMs, because of pymongo–for sure. I take it this name example is not common in Mongo Syntax?", "username": "xtian_simon" }, { "code": "collection.count_documents({'vendor': '<vendor name>'}) \n", "text": "half-dozen of one and 6 of the otherNo they are not the same. It’s much more efficient and ergonomic to store all similar data in one collection. Splitting similar data across multiple collections is definitely the exception rather than the rule, so to speak.For example, if you wanted to count all the reports by a single vendor, a very simple and reasonable request. With multiple collections this would be very painful (listCollections, string search/regex to match the vendor’s collection name(s), then count each matching collection). With a single collection it would be simply:", "username": "Shane" }, { "code": "1 client : ~50-75 vendors : 1-3 reports", "text": "Ah. Analytics. 
Not to bore you with the details of my project, but I really am trying to orient myself to Mongo.I need a robust way to process report files. Report ratios: 1 client : ~50-75 vendors : 1-3 reports . ~50 clients. Clients may have 30-60% unique vendors. The low estimate is 800 different reports. Schema-less lets me continue to process documents, and focus on responding to schema changes in later analysis phase. Put it another way, staff and bots continue to upload files, only the reports are late.Clients are independent entities, and serviced as independent entities. Cross-section analysis between clients could be interesting as internal business insights, but it’s not what we’re paid to do.The peas and carrots don’t mix, but I take your point.All things being equal, there is no comparison for ease of analysis the flat organization provides. It’s an important lesson for me to understand Mongo, that I don’t need to keep similar-schema reports together. If I add features into the document-level records (client, vendor, report) I can–what I’m trying to wrap my head around–have secure, stable, reliable data storage AND simple analysis.", "username": "xtian_simon" } ]
Toying around, is this still a "collection" Coll3 = client[DB][Coll][Coll2][Coll3]
2021-02-22T19:22:01.011Z
Toying around, is this still a “collection” Coll3 = client[DB][Coll][Coll2][Coll3]
2,076
null
[ "licensing" ]
[ { "code": "", "text": "Hi all,It has come to my knowledge, that MongoDB provides exception to the SSPL license, if the firm meets some requirements. First of all, is this true and if yes, how can we get that exception ?Thanks and regards,", "username": "Taseer_Ahmed" }, { "code": "", "text": "HI Taseer,What is your use case for MongoDB? There are a limited set of circumstances that would trigger the SSPL.Joe.", "username": "Joe_Drumgoole" }, { "code": "", "text": "mitedWe have to provide it as a service to our customer but there are some propriety/tools which we don’t want to make public.Hope that helps !Thanks!", "username": "Taseer_Ahmed" }, { "code": "", "text": "You have to comply with the SSPL or purchase a commercial license if you are providing MongoDB as a service. There are no exceptions.", "username": "Joe_Drumgoole" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SSPL Exception from MongoDB
2021-02-22T12:34:14.518Z
SSPL Exception from MongoDB
3,269
null
[]
[ { "code": "var updateDoc = {\n\n $set: { \n\n energy: { \n\n $cond: { \n\n if: { $gt: [ { $add: [ \"$energy\", 20 ] }, 100 ] }, \n\n then: 100, \n\n else: { $add: [ \"$energy\", 20 ] } \n\n } \n\n } \n\n }\n\n }", "text": "When i try to run this code below i get the error: MongoError: The dollar ($) prefixed field ‘$cond’ in ‘energy.$cond’ is not valid for storage. How can i fix this issue?", "username": "JasoO" }, { "code": "updateDocvar updateDoc = [\n {\n $set: {\n energy: {\n $cond: {\n if: { $gt: [{ $add: [\"$energy\", 20] }, 100] },\n then: 100,\n else: { $add: [\"$energy\", 20] }\n }\n }\n }\n }\n]\n", "text": "Hello @JasoO,$cond is a aggregation operator, update can’t allow that operator in this simple query,Look at the update with aggregation pipeline starting from MongoDB v4.2,You just need to wrap updateDoc object in to array,Playground", "username": "turivishal" }, { "code": "", "text": "Yeah i figured it out just now, thank you anyway ", "username": "JasoO" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoError: The dollar ($) prefixed field '$cond' in 'energy.$cond' is not valid for storage
2021-02-23T15:06:54.113Z
MongoError: The dollar ($) prefixed field ‘$cond’ in ‘energy.$cond’ is not valid for storage
20,502
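For completeness, a sketch of how the corrected pipeline is passed to an update call; the filter and collection name here are placeholders:

```javascript
// Passing an array (a pipeline) as the update argument is what makes
// MongoDB 4.2+ accept aggregation operators such as $cond.
db.players.updateOne({ _id: playerId }, updateDoc)
```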
null
[ "atlas-search", "text-search" ]
[ { "code": " Sky is beautiful today and sun is out", "text": "Hello!\nI am looking into full-text search functionality. For example: a text like Sky is beautiful today and sun is out.\nMy search requirement is : Find text that “Begins with Sky and ends with out”.\nWhat would be the best way to write a search query to achieve this?Thanks,\nSupriya", "username": "Supriya_Bansal" }, { "code": "", "text": "Hi @Supriya_Bansal,I suggest you look into Atlas Search service:https://docs.atlas.mongodb.com/reference/atlas-search/regex/In particular the regex operator queries sounds like a suitable one.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks @Pavel_Duchovny. I was thinking of regex too…just wanted to check if there is anything else besides that option.", "username": "Supriya_Bansal" }, { "code": "", "text": "specifying the order, regex in Atlas Search is the best option. However, you can test some of the functionality you described using the autocomplete operator as well.", "username": "Marcus" } ]
Begins with and ends with searches
2021-02-22T19:38:29.846Z
Begins with and ends with searches
5,567
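A hedged sketch of the regex suggestion. It assumes an Atlas Search index named "default" in which the field uses the keyword analyzer, so the whole string is a single token and Lucene’s implicitly anchored regex matches from start to end:

```javascript
db.posts.aggregate([
  { $search: {
      index: "default",
      regex: {
        path: "text",
        query: "Sky.*out"   // begins with "Sky", ends with "out"
      }
  } }
])
```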
null
[ "performance", "capacity-planning" ]
[ { "code": "", "text": "Hi. The current documentation around Ensure Indexes Fit in RAM and from another thread Working set MUST fit in memory? I could gather thatfor best performance you will want your commonly used indexes and working set to fit in memoryMy question further to the database developers is, what happens when the indexes don’t fit in the memory? I want to be able to compare the trade-off between adding any resource and the complete migration process to that of dealing with slight(or major) performance impacts on my data.", "username": "naman" }, { "code": "", "text": "For each query that targets key values that are not currently in the part of the index that is not in RAM, the index will have to be read from disk.This means the system will slow down because disk I/O speed is much slower than RAM. You can experience the effect of too little RAM by adjusting wiredTiger cache size. See https://docs.mongodb.com/manual/reference/configuration-options/#storage.wiredTiger.engineConfig.cacheSizeGBThat’s the beauty of running out of Atlas, migration up and down is easy. You do not have to be right the first time.", "username": "steevej" }, { "code": "cacheSizeGB", "text": "Hey @steevej, thank you for responding.speed is much slower than RAMI am aware of the cacheSizeGB option and nonetheless about the disk ops slowing down the queries in practice too.\nWhat I am further curious to understand is the underlying implementation of random key reads(some with index in RAM, some not in there) and the mechanism/role played by the on-disk format, filesystem cache, collection data in the internal cache, and indexes in the internal cache. More towards how their size during such reads vary and if there is a deterministic behavior for analyzing the impact when either of the factor changes.", "username": "naman" }, { "code": "", "text": "If it takes 5 minutes to read from RAM the same read will take 7 months to read from a spinning disk. So saying “disks are slower” sometimes doesn’t communicate how much slower they actually are.If your index doesn’t fit in memory the slow down you will experience when querying on that index as pages are read from disk expired and then reread from disk will be dramatic.The goal for every performant database is to have all its regularily queried indexes to fit completely in RAM. To futher increase performance the most regularily queried records should fit in RAM as well.For a gaming application this means data associated with logged in users, for a banking application accounts that have transactions in the last thirty days etc. etc.", "username": "Joe_Drumgoole" }, { "code": "", "text": "If your index doesn’t fit in memory the slow down you will experience when querying on that index as pages are read from disk expired and then reread from disk will be dramatic.@Joe_Drumgoole thank you for pitching in, I could sense the consequences aptly. With this question though I wanted to focus more on the mechanism to understand it in detail. Something like the “two page faults” mentioned in the comment by Alexey here.", "username": "naman" }, { "code": "", "text": "Sorry I don’t have that level of knowledge. Let me see if I can get someone else to respond.", "username": "Joe_Drumgoole" }, { "code": "", "text": "In MongoDB Collections and Indexes both use the same underlying data structure. 
WiredTiger tables - these are BTrees (B+ ish trees) , essentially key-value stores arranged into pages(or blocks depending who you ask) - each page is 32KB in size or one entry whichever is larger (Up to 16MB for a single 16MB BSON document) - it’s then compressed when flushed to disk. In memory a collection page contains both the the on-disk version of the page as well as a list of changes to any document not yet reconciled to disk. An Index page contains the index keys and for each key (which may be the full key or prefix compressed) a pointer to the identity of the document it points to.When you access an index - MongoDB checks to see if the page is in cache - if not it reads it from the disk (going via the OS pagecache so it may be cached there) - index pages are not compressed on disk as they already have prefix compression.The upshot is that each index lookup may in cache (nice and fast) or not in which case , nothing to do with page faults - this is all explicitly tracked, it will require a disk read which will be 32KB and a seek at the very least - if you have readahead set appropriately it may be more than 32KB. If that happens to be in your OS page cache it will be quicker but it still needs some processing to put it in cache. The seek will take 1 IO operation at least so a 1000 IOPS disk, with random seeks in an index which is much larger than ram will be very much throttled by IO.You can look at the Read-Into-Cache metric using any of the MongoDB tooling (or look in db.serverStatus() and diff the entries) you can also observer the cache for collections and indexes using mongocacheview.js (GitHub - johnlpage/MongoCacheView: See what is in your MongoDB WT Cache and how it's being used.).In general - if your working set does not fit in ram, expect one to two orders of magnitude slower operations that hit those keys (queries, inserts, deletes, updates of indexed values) , consider (a) Adding RAM (b) Making indexes smaller (c) Adding many IOPS - in that order.", "username": "John_Page" } ]
What happens when indexes don't fit in RAM
2021-02-16T13:31:30.118Z
What happens when indexes don’t fit in RAM
11,428
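A small shell sketch of the serverStatus() diffing John mentions. The stat names below are standard WiredTiger cache counters; treat the numbers as a rough signal rather than a precise working-set measurement:

```javascript
// Run this twice while driving your query load and compare the output.
// A steadily climbing "bytes read into cache" against an already-full cache
// suggests the working set (indexes included) does not fit in RAM.
const wt = db.serverStatus().wiredTiger.cache;
printjson({
  readIntoCache: wt["bytes read into cache"],
  currentlyInCache: wt["bytes currently in the cache"],
  maxConfigured: wt["maximum bytes configured"]
});
```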
null
[ "crud" ]
[ { "code": "", "text": "Quick question, is there a way for a field to have a maximum value? For example the field “cards” can only have a maximum value of 100. So if you would increment above it it would return to its maximum value.", "username": "JasoO" }, { "code": "100100", "text": "You can use an Updates with Aggregation Pipeline to check while updating and set the value 100 if it exceeds 100. Note that this feature is available with MongoDB v4.2 or later.", "username": "Prasad_Saya" }, { "code": "", "text": "Alright could you give me a more concrete small example of how to achieve that? Because from reading that page their doesn’t seem to be clear example of how to do that.", "username": "JasoO" }, { "code": "fldINC_VALUE[ \n $set: { \n fld: { \n $cond: { \n if: { $gt: [ { $add: [ \"$fld\", INC_VALUE ] }, 100 ] }, \n then: 100, \n else: { $add: [ \"$fld\", INC_VALUE ] } \n } \n } \n }\n]", "text": "Assuming the field you want to increment is fld and the value you want to increment is INC_VALUE, then this pipeline can be used to do the required update:", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to restrict a field's value while updating using $inc?
2021-02-23T09:55:18.262Z
How to restrict a field’s value while updating using $inc?
2,392
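Applied to the thread’s numbers (field cards, cap 100, increment 20), and assuming a collection name purely for illustration, the same idea can be written a little more compactly with $min:

```javascript
// Requires MongoDB 4.2+ (pipeline update). $min picks the smaller of the
// incremented value and the cap, so the field never exceeds 100.
db.players.updateOne(
  { _id: playerId },
  [ { $set: { cards: { $min: [ { $add: [ "$cards", 20 ] }, 100 ] } } } ]
)
```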
null
[ "queries", "indexes", "performance" ]
[ { "code": "db.getCollection('agendaJobs').findAndModify({\"query\": {\n $or: [\n {\n name: 'webhook',\n disabled: { $ne: true },\n lockedAt: { $eq: null },\n nextRunAt: { $lte: new Date() },\n },\n {\n name: 'webhook',\n disabled: { $ne: true },\n lockedAt: { $lte: new Date() },\n },\n ],\n },\n \"sort\": { \"nextRunAt\" : 1, \"priority\" : -1 },\n \"update\": { $set : { \"lockedAt\" : new Date() }}\n})\n \"millis\": 471830\n \"protocol\": \"op_msg\",\n \"locks\": {\n \"Global\": {\n \"acquireCount\": {\n \"r\": 146620,\n \"w\": 146618\n }\n }, \n \"keysExamined\": 405712,\n \"docsExamined\": 2,\n \"nMatched\": 1,\n \"nModified\": 1,\n \"keysInserted\": 1,\n \"keysDeleted\": 1,\n \"writeConflicts\": 14,\n \"numYields\": 146600,\n \"planSummary\": [\n {\n \"IXSCAN\": {\n \"name\": 1,\n \"nextRunAt\": 1,\n \"priority\": -1,\n \"lockedAt\": 1,\n \"disabled\": 1\n }\n },\n {\n \"IXSCAN\": {\n \"name\": 1,\n \"nextRunAt\": 1,\n \"priority\": -1,\n \"lockedAt\": 1,\n \"disabled\": 1\n }\n }\n ],\n", "text": "Hello there, I will highly appreciate the community advice.\nI’m running MongoDB atlas version 3.6.21 and using an open-source package called agenda that using mongo as its backend. once I upgrade the package (v2.0.2 to v4.0.1), the CPU spiked to 100%. according to the atlas profiler, I identified a possible ‘bad’ query causing the issue, but I can’t figure out what’s wrong.The ‘bad’ query is findAndModify and runs the following:from the profiler, those are the most suspicious fields I see:Is there anything I’m missing? maybe I should create an additional index? how would you suggest continuing the investigation?I will highly appreciate your support and opinion!Tal", "username": "Talik" }, { "code": "db.getCollection('agendaJobs').findAndModify({\"query\": {name: 'webhook',\n disabled: false,\n $or: [\n {\n lockedAt: { $eq: null },\n nextRunAt: { $lte: new Date() },\n },\n {\n lockedAt: { $lte: new Date() },\n },\n ],\n },\n \"sort\": { \"nextRunAt\" : 1, \"priority\" : -1 },\n \"update\": { $set : { \"lockedAt\" : new Date() }}\n})\ndb.agendaJobs.createIndex({name : 1, nextRunAt : 1, priority : -1 , lockedAt : 1}, {partialFilterExpression : {disabled: false}}\n", "text": "Hi @Talik,Welcome to MongoDB community.Its not an easy task to say specifically of only this query is the root cause of the entire cpu spike.Looking at the explain plan I see 2 problems:I think with the current way the query is written there is not much index tuning to be done.But can you possibly rewrite it the following way:Then you could possibly indez the following way:Maybe this will allow a better key scanning avoiding the index intersection which is very cpu intensive.Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "db.getCollection('agendaJobs').findOneAndUpdate({\"query\": {\n$and: [{\n name: jobName,\n disabled: { $ne: true }\n }, {\n $or: [{\n lockedAt: { $eq: null },\n nextRunAt: { $lte: this._nextScanAt }\n }, {\n lockedAt: { $lte: lockDeadline }\n }]\n }]\n},\n \"sort\": { \"nextRunAt\" : 1, \"priority\" : -1 },\n \"update\": { $set : { \"lockedAt\" : new Date() }}\n})\n {\n \"v\" : 2,\n \"key\" : {\n \"name\" : 1,\n \"nextRunAt\" : 1,\n \"priority\" : -1,\n \"lockedAt\" : 1,\n \"disabled\" : 1\n },\n \"name\" : \"findAndLockNextJobIndex\",\n \"ns\" : \"tick-tock-prod.agendaJobs\"\n },\nwriteConflictsacquireCount", "text": "@Pavel_Duchovny thank you for your quick reply!\nI’ve tried your suggestion, but I experience the same CPU issue as before.\nthis is the query I run:while sort and update are 
the same\nI forgot to mention but I already have the following index in place:I’m a bit concerned about the writeConflicts field. could it be that from the findOneAndUpdate process the find query runs efficiently, but then the update process is trying to modify the same document? the acquireCount which is very high looks very suspicious as well.Just to note that I’m running 2 node processes that reading from the same DB.", "username": "Talik" }, { "code": "", "text": "Hi @Talik,This is not exactly what I recommend.I recommend doing disabled: false and not disabled : { $ne : true }. Is it because documents that are not disabled does not havt this field?How high is write conflicts, in the previous example it is 14…If you concurrently hiting the same docs you may increase locking and cpu overhead.\nBut your main problem is with the way the query written the indexed intersection does not do good.I would consider changing the data model to have more selective queries and updates (use $ne as less as possible and $or as well).Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "disabled:false{$ne: true}db.getCollection('agendaJobs').findOneAndUpdate({\"query\": {\n$and: [{\n name: jobName,\n disabled: { $ne: true }\n }, {\n $or: [{\n lockedAt: { $eq: null },\n nextRunAt: { $lte: this._nextScanAt }\n }, {\n lockedAt: { $lte: lockDeadline }\n }]\n }]\n},\n \"sort\": { \"nextRunAt\" : 1, \"priority\" : -1 },\n \"update\": { $set : { \"lockedAt\" : new Date() }}\n})\n{\n $or: [{\n name: jobName,\n lockedAt: null,\n nextRunAt: {$lte: new Date()},\n disabled: {$ne: true}\n }, {\n name: jobName,\n lockedAt: {$exists: false},\n nextRunAt: {$lte: new Date()},\n disabled: {$ne: true}\n }, {\n name: jobName,\n lockedAt: {$lte: lockDeadline},\n disabled: {$ne: true}\n }]\n };\n{\n \"name\" : 1,\n \"nextRunAt\" : 1,\n \"priority\" : -1,\n \"lockedAt\" : 1,\n \"disabled\" : 1\n}\n{\n name: 1,\n disabled: 1,\n lockedAt: 1,\n nextRunAt: 1,\n priority: -1\n}\n \"planSummary\": [\n {\n \"IXSCAN\": {\n \"name\": 1,\n \"nextRunAt\": 1,\n \"priority\": -1,\n \"lockedAt\": 1,\n \"disabled\": 1\n }\n },\n {\n \"IXSCAN\": {\n \"name\": 1,\n \"nextRunAt\": 1,\n \"priority\": -1,\n \"lockedAt\": 1,\n \"disabled\": 1\n }\n }\n ],\n \"keysExamined\": 408017,\n \"docsExamined\": 2,\n \"nMatched\": 1,\n \"nModified\": 1,\n \"keysInserted\": 1,\n \"keysDeleted\": 1,\n \"writeConflicts\": 25,\n \"numYields\": 84871,\n \"reslen\": 880,\n \"locks\": {\n \"Global\": {\n \"acquireCount\": {\n \"r\": 84900,\n \"w\": 84900\n }\n },\n \"Database\": {\n \"acquireCount\": {\n \"w\": 84900\n }\n },\n \"Collection\": {\n \"acquireCount\": {\n \"w\": 84898\n }\n },\n \"oplog\": {\n \"acquireCount\": {\n \"w\": 2\n }\n }\n },\n \"protocol\": \"op_msg\",\n \"millis\": 153063\n", "text": "@Pavel_Duchovny sorry for the delay in response \nI was trying multiple approaches, but unfortunately, nothing worked yet.\nI’m using a library called agenda that manages the DB interaction and query, therefore I can’t implement your suggestion and do disabled:false instead of {$ne: true}, because documents that are not disabled, do not have this field.the latest version of agenda generate the following query:before it was different:I’m thinking, maybe we reorganize the fields on the indexfromtoWDYT?Regarding your other question, I’ve noticed that the bad performance starts when the writeConflicts is higher than 5. 
when it was 25 for example, it took more than 15s for the query to finish.\nSharing here the execution stats, I hope it will be usefulI highly appreciate your help! thank you!", "username": "Talik" }, { "code": "", "text": "Hi @Talik,If you cannot rewrite queries my ability to help you is very limited.The proposed index might work.Write conflicts can cause cpu exhaustion as the engine must perform each conflict in the chain and keep in memory all variations in memory.Avoiding the conflicts is done by rewriting queries or data model to not hit the same object for a lock over and over.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Slow query causing 100% CPU spike
2021-02-10T22:43:21.154Z
Slow query causing 100% CPU spike
8,327
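For reference, the reordering floated at the end of the thread, written out as a createIndex call. Whether it actually helps depends on how the planner handles the $or branches, so verify with explain() before relying on it:

```javascript
// Equality field first (name), then the fields the branches filter and sort on.
db.agendaJobs.createIndex(
  { name: 1, disabled: 1, lockedAt: 1, nextRunAt: 1, priority: -1 }
)
```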
null
[ "performance" ]
[ { "code": "", "text": "Hi, We are facing some issue with the sloweness of Mongodb in prod. We opted for M30 in Atlas, did index to all our collections, consumed 3.8gb of RAM. Is remaining 4.2GB of RAM will be sufficient to do queries like aggregate, find etc?. Right now at some peek time we are facing this sloweness, indexed queries itself taking more time to return document. Do we need to revisit and retune our queries or do we need to upgrade our instance?", "username": "Manjunath_Anantharaj" }, { "code": "", "text": "Hi @Manjunath_Anantharaj,Welcome to MongoDB community.I believe this question is best covered by Atlas support as we need to look into your clusters specific workload andn metrics.However, please bare in mind that the instance memory is not consumed solely by WiredTiger engine or MongoDB.So if you confirm that your indexes and data model is optimal you need to understand which resources draw you back.If its CPU and/or memory you should scale to a bigger instance otherwise you might need to scale storage.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
MongoDB slowness issue
2021-02-22T19:22:46.272Z
MongoDB slowness issue
1,857
https://www.mongodb.com/…3_2_1024x354.png
[ "server", "installation" ]
[ { "code": "mongodmongomkdir: /data/db: Read-only file system", "text": "I am trying to run mongod in one terminal and then in another terminal, mongo to connect. From my home directory:kevinturney ~ $ mongod\nScreen Shot 2021-02-17 at 2.52.02 PM2464×852 379 KB\nFrom what I understand, since macOS Catalina, root permissions are restricted. I tried creating /data/db and the result:mkdir: /data/db: Read-only file systemI then tried this Medium post on creating /data/db in SystemI uninstalled and reinstalled, tried stackoverflow and am completely stuck. How do do I sort this out?", "username": "Kevin_Turney" }, { "code": "", "text": "All directories accessed by mongod must be writable by the user with which you start it.By default mongod uses /data/db to store its data. In your case, I suspect your username is kevinturney, so /data/db should be owned and be writable by user kevinturney.More details at https://docs.mongodb.com/guides/server/install/", "username": "steevej" }, { "code": "", "text": "That is right.In Macos /data/db access is removed\nIn the link you shared it asks you to create the dir under /System/Volumes\nDid you try that?Once dir is created you need to update the new dirpath in your mongodb config file", "username": "Ramachandra_Tummala" }, { "code": "mongod --dbpath=/Users/kevinturney/data/dbmongo", "text": "What I wound up doing was to create the data/db in my home directory. I also created the mongod.conf file:Screen Shot 2021-02-22 at 10.10.04 AM2512×662 140 KBCreating the dir in System/Volumes according to the article did not work for me.\nWhen I started the daemon: mongod --dbpath=/Users/kevinturney/data/db and in another terminal tab, mongo, the processes started and the shell opened with no problems. With this configuration am I in good shape or are there problems with where I placed my directories that I don’t know about?", "username": "Kevin_Turney" }, { "code": "mongod --dbpath=/Users/kevinturney/data/db\n", "text": "When you start withyou so not use the configuration file.I think the quotes in the configuration might cause an issue if you ever use -f or –config to start mongod.You should not have any issues with storing the data in you home directory.", "username": "steevej" } ]
Failing to be able to run a local mongod process, /data/db not found error
2021-02-17T21:37:17.471Z
Failing to be able to run a local mongod process, /data/db not found error
13,101
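A minimal sketch of the config file that sidesteps the quoting pitfall steevej mentions; the paths are examples and must exist and be writable by the user running mongod:

```yaml
# ~/mongod.conf: plain YAML values do not need quotes
storage:
  dbPath: /Users/kevinturney/data/db
systemLog:
  destination: file
  path: /Users/kevinturney/data/mongod.log
net:
  port: 27017
```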
null
[ "python", "production" ]
[ { "code": "kms_providers = {'azure': {'tenantId': 'tenantId',\n 'clientId': 'clientId',\n 'clientSecret': 'clientSecret'}}\nkms_providers = {'gcp': {'email': 'email@email',\n 'privateKey': 'privateKey'}}\nkms_providers = {'aws': {'accessKeyId': 'accessKeyId',\n 'secretAccessKey': 'secretAccessKey',\n 'sessionToken': 'sessionToken'}}\n", "text": "The PyMongo team is pleased to announce the 1.1.0 release of PyMongoCrypt - the Python bindings for libmongocrypt. This release adds support for:Note this release also drops support for libmongocrypt 1.0 and 1.1. libmongocrypt >=1.2 is now required.See the changelog for a high level summary of what’s new and improved or see the PyMongoCrypt 1.1.0 release notes in JIRA for the complete list of resolved issues.", "username": "Shane" }, { "code": "", "text": "", "username": "system" } ]
PyMongoCrypt 1.1.0 Released
2021-02-22T20:57:40.695Z
PyMongoCrypt 1.1.0 Released
2,034
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.0.23 is out and is ready for production deployment. This release contains only fixes since 4.0.22, and is a recommended upgrade for all 4.0 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.0.23 is released
2021-02-22T19:46:12.156Z
MongoDB 4.0.23 is released
2,310
https://www.mongodb.com/…75af485afc7.jpeg
[ "replication", "configuration" ]
[ { "code": "course=\"M310\"\nexercise=\"HW-1.6\"\nworkingDir=\"$HOME/${course}-${exercise}\"\ndbDir=\"$workingDir/db\"\nlogName=\"mongo.log\"\n\nports=(31160 31161 31162)\nreplSetName=\"TO_BE_SECURED\"\n\nhost=`hostname -f`\ninitiateStr=\"rs.initiate({\n _id: '$replSetName',\n members: [\n { _id: 1, host: '$host:31160' },\n { _id: 2, host: '$host:31161' },\n { _id: 3, host: '$host:31162' }\n ]\n })\"\n\n# create working folder\nmkdir -p \"$workingDir/\"{r0,r1,r2}\n\n# launch mongod's\nfor ((i=0; i < ${#ports[@]}; i++))\ndo\n mongod --auth --dbpath \"$workingDir/r$i\" --logpath \"$workingDir/r$i/$logName.log\" --port ${ports[\n$i]} --replSet $replSetName --fork --setParameter authenticationMechanisms=PLAIN --setParameter sas\nlauthdPath=\"/var/run/saslauthd/mux\"\ndone\n\n# wait for all the mongods to exit\nsleep 3\n\n# initiate the set\nmongo --port ${ports[0]} --eval \"$initiateStr\"\n", "text": "I encountered some tough problems . \nI’m try to finish M310/homework 1.6 .\nI can’t fully understand description :\n\n20180620022727687791×702 53.1 KB\nThe command can success : testsaslauthd -u adam -p password -f /var/run/saslauthd/mux\n\nc145793×44 1.94 KB\nAnd first auto-shell :\n#!/bin/bashThe result is :\n\n20180620022727688990×483 72.9 KB\n \n20180620022727689971×259 45 KB\nSo I delete --auth,and kill all mongo process and remove all mongo’s data,run again! initiate success,\n\n20180620022727690849×282 35.8 KB\n\nand run follow command:\ndb.getSiblingDB(\"$external\").createUser({user:‘adam’,roles:[{role:‘root’,db:‘admin’}]})\ndb.getSiblingDB(\"$external\").auth({mechanism:“PLAIN”,user:‘adam’,pwd:‘password’,digestPassword: false})\nnow, the result:\n\n20180620022727691990×444 44.4 KB\nbut when I reboot with --auth, the errorlog:\n\n20180620022727692998×408 81.2 KB\nSo I try to start only one server:\ninitiate success,create user:‘adam’ success,and auth success. But add member :\n\n20180620022727693998×597 63.7 KB\n\n\n20180620022727694989×388 59.1 KB\nHow can I deployment the mongo replica with LDAP?", "username": "join_mic" }, { "code": "", "text": "Since this is related to mongodb university lab you may get better response from University forumDiscussions about developing with MongoDB using various programming languages and MongoDB drivers or Object-Document Mappers (ODMs).LDAP is an external authentication mechanism.You still need a keyfile for internal authentication between nodes", "username": "Ramachandra_Tummala" }, { "code": "", "text": "It is about to change.", "username": "steevej" }, { "code": "", "text": "Yah! I have forget add \" --keyfile \" \nAll problems have been solved.\nThanks ! ", "username": "join_mic" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I initiate the mongo replica with LDAP
2021-02-21T05:36:55.994Z
How can I initiate the mongo replica with LDAP
3,440
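The fix, folded back into the thread’s startup loop, looks roughly like this; the key-file path is hypothetical, and the file must be created and chmod 600’d on every node beforehand:

```bash
# LDAP (PLAIN) covers client authentication; replica-set members still
# authenticate to each other with the shared key file.
mongod --auth --keyFile "$HOME/mongodb-keyfile" \
  --dbpath "$workingDir/r$i" --logpath "$workingDir/r$i/$logName" \
  --port "${ports[$i]}" --replSet "$replSetName" --fork \
  --setParameter authenticationMechanisms=PLAIN \
  --setParameter saslauthdPath=/var/run/saslauthd/mux
```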
null
[ "on-premises" ]
[ { "code": "", "text": "I’ve been having a difficult time getting On-premise Charts working with SSL connection to a local MongoDB installation (not in Docker container). Every time I run it stitch fails. I initially had a self-signed cert and stitch complained about that. Then I decided to use a Let’s Encrypt issued cert, but now stitch says:Addr: 172.17.0.1:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: x509: cannot validate certificate for 172.17.0.1 because it doesn’t contain any IP SANsI don’t believe Let’s Encrypt does IP SAN, so self-signed would seem to be the way to go. I’ve followed all the steps many times and the only way I can get fully operational is to remove the SSL requirement from MongoDB.What would be the best route as far as certs to make an On-premise Charts work? Any help would be appreciated. Thank you.James", "username": "James_C" }, { "code": "", "text": "Hi @James_CThis configuration is supported. Did you see the docs at Configure TLS/SSL for Metadata Clusters — MongoDB Charts? What errors / behaviour are you seeing with your self-signed cert?Tom", "username": "tomhollander" }, { "code": "docker run --rm quay.io/mongodb/charts:19.12.2 charts-cli test-connection 'mongodb://admin:[email protected]?ssl=true'\n2021-02-22T00:48:33.633Z ERROR main_server server/main.go:88 error starting up servers: error parsing uri\ndocker run --rm quay.io/mongodb/charts:19.12.2 charts-cli test-connection 'mongodb://admin:[email protected]/?ssl=true'\nAddr: 172.17.0.1:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: x509: cannot validate certificate for 172.17.0.1 because it doesn't contain any IP SANs\n", "text": "Hello Tom! Thanks for the reply. Sorry for the late reply I was trying to run thru things again to see if I can fix before replying, but no joy.I have read most of the installation documentation a few times. I’ll admit I mostly get confused on the SSL part. It seems there might be a combination of connection-uri and cert file confusion on my part. Below are the various connection-uri I have tried (all get verified), and the stitch-startup.log output:connection-uri:stitch ouput:I think I figured the above issue, it seems stitch wants a ‘/’ after the host.connection-uri:stitch ouput:I will try some more testing and read the docs again in the morning. Thanks.", "username": "James_C" }, { "code": "extra_hosts", "text": "OK. I’m not an expert in this area but I have made it work before. You may want to use a hostname instead of an IP (matching the value in the certificate) and use the Docker extra_hosts section if you need to force the name resolution.Tom", "username": "tomhollander" }, { "code": "Charts interprets localhost as the Docker container Charts is running in. If the database is running on the same host as the Charts Docker container but not in Docker, it will not be reachable via mongodb://localhost. Instead, use one of the following URIs depending on your Docker version when creating the Docker secret in the command below:\n\nLinux\tIP address of the docker0 interface. 172.17.0.1 by default.\n", "text": "Hi Tom! I finally got it! I feel a little foolish after figuring out the issue. I took your advice and used hostname instead as well as extra_hosts and it works. 
I feel a little foolish because doing that seems normal, but I got confused during the install when I read:“If the database is running on the same host as the Charts Docker container but not in Docker…”, I guess I read this the wrong way but I thought it meant if I have a database on a host (ubuntu server) and Charts Container on the same host(ubuntu server) that I would need to use the 172 address with the connection string and not the actual MongoDB deployment IP (or hostname). So now I use:docker run --rm Quay charts-cli test-connection ‘mongodb://admin:[email protected]/?ssl=true’And everything works great. Sorry I wasted your time, but the wording really confused me. Thanks for all the help.", "username": "James_C" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB-Charts Stitch SSL Cert Issues
2021-02-21T19:29:53.274Z
MongoDB-Charts Stitch SSL Cert Issues
4,135
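For anyone following along, a hedged sketch of the extra_hosts approach Tom suggested; the hostname is made up and must match the CN/SAN in the server certificate:

```yaml
# docker-compose fragment (illustrative). The Charts container resolves
# mongodb.internal.example to the host's docker0 address, so the TLS
# handshake validates against a hostname instead of an IP.
services:
  charts:
    image: quay.io/mongodb/charts:19.12.2
    extra_hosts:
      - "mongodb.internal.example:172.17.0.1"
```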
null
[ "queries", "mongoose-odm", "react-js" ]
[ { "code": "", "text": "Hi everyone,In my mongodb compass I can see my user table with “About Me” and other info. Then I added a “Phone number” info just below “About Me” in my reactjs code. I run my reactjs website and could type my phone number in the Edit page. Then, I clicked a submit button but the website failed to show my phone number in the updated page. I tried to add “Phone” below the “AboutMe” in my mongodb compass database name code editor but it still can’t insert “Phone” on my reactjs website. Can you show me a step by step screenshot about how to add a “Phone” info on to my reactjs website and by how, mongodb compass or which way? Thank you.", "username": "Jen" }, { "code": "", "text": "Hi Jen - Welcome to the community!Were you able to get this solved? If not, you might find this example helpful: The Modern Application Stack - Part 5: Using ReactJS, ES6 & JSX to Build a UI (the rise of MERN) | MongoDB Blog", "username": "Lauren_Schaefer" }, { "code": "", "text": "Hi Lauren,\nNo. But I have another question. Here is a screenshot of the Edit Profile page. At first, there is no ‘Phone number’ in the Edit Profile Page, but I have just added ‘Phone number’ in the Edit Profile page.When I click on the Edit Profile button in the Profile Page, I can update info like ‘About Me’, ‘Password’ and also ‘Phone number’ which can also show the updated changes in my mongodb database. I simply entered my phone number in alphabets since I chose the phone number type to be of a string in my react code for me to just try out if the phone number could show anything but I will change it to numbers later. But the ‘Phone number’ failed to display on the main Profile page here=\nWhen I run the website as npm start=\nStarting the development server…\nBrowserslist: caniuse-lite is outdated. Please run:\nnpx browserslist@latest --update-db\nShould I run the above command?It is also compiled with warnings. Can I ignore the warnings below?=\nFile1.js\nDuplicate key ‘background’ no-dupe-keysFile2.js\n‘React’ is defined but never used no-unused-varsFile3.js\nDuplicate key ‘background’ no-dupe-keysThank you!", "username": "Jen" }, { "code": "", "text": "A typo in my earlier post because it is the same question and not another question.", "username": "Jen" }, { "code": "backgroundReact", "text": "Hi Jen,I don’t know React, but I can take some guesses about the warnings.In File1 and File3, are you setting background twice on the same object? no-dupe-keys - ESLint - Pluggable JavaScript Linter has some information on this warning.In File2, it sounds like you are declaring a variable named React but not using it. no-unused-vars - ESLint - Pluggable JavaScript Linter has some information on this warning.Regarding the phone number issue. I think what you’re saying is…Is that right?Can you double check that the name of your phone number field in MongoDB exactly matches the name you’re using to retrieve the field in your code? 
It’d be helpful to see the document in your database as well as the code you’re using to retrieve the information from your database.", "username": "Lauren_Schaefer" }, { "code": "style={{\n background: \"#dcdcdc\",\n background: \"-webkit-linear-gradient(to left, #dcdcdc, #ee82ee\n)\",\n background: \"linear-gradient(to left, #dcdcdc, #ee82ee\n)\",\n}}\nimport React, { Component } from \"react\";\nimport { withRouter } from \"react-router-dom\";\n\nclass ScrollToTop extends Component {\n componentDidUpdate(prevProps) {\n if (this.props.location !== prevProps.location) {\n window.scrollTo(0, 0);\n }\n }\n\n render() {\n return(\n this.props.children\n ); \n }\n}\n\nexport default withRouter(ScrollToTop)\n", "text": "Hi Lauren,There are no errors after I renamed the background in line 2 to background1 and background in line 3 to background2. Am I right for these?=What is wrong with File 2?=Yes, that is right. Both match.Do I show you the codes in my 3 files here or do I email you these?Thank you!", "username": "Jen" }, { "code": "", "text": "Hi Jen,no-dupe-keys - ESLint - Pluggable JavaScript Linter says “Multiple properties with the same key in object literals can cause unexpected behavior in your application.” Try setting the background only to the value you want to use.For File2, you are importing React, but it sounds like you aren’t using it. You can probably remove the import.Paste your document and code here so that everyone can benefit from the discussion.", "username": "Lauren_Schaefer" }, { "code": "style={{\n background: “#dcdcdc”,\n background: “-webkit-linear-gradient(to left, #dcdcdc, #ee82ee\n)”,\n background: “linear-gradient(to left, #dcdcdc, #ee82ee\n)”,\n}}\nconst mongoose = require('mongoose');\nconst { v1: uuidv1 } = require('uuid');\nconst crypto = require('crypto');\nconst { ObjectId } = mongoose.Schema;\n\n\nconst userSchema = new mongoose.Schema({\n name: {\n type: String,\n trim: true,\n required: true\n },\n email: {\n type: String,\n trim: true,\n required: true\n },\n hashed_password: {\n type: String,\n required: true\n },\n salt: String,\n created: {\n type: Date,\n default: Date.now\n },\n updated: Date,\n photo: {\n data: Buffer,\n contentType: String\n },\n about: {\n type: String,\n trim: true \n },\n phone: {\n type: String,\n trim: true\n },\n notificationToken: {\n type: String\n },\n following: [{\n type: ObjectId,\n ref: \"User\"\n }],\n followers: [{\n type: ObjectId,\n ref: \"User\"\n }],\n resetPasswordLink: {\n data: String,\n default: \"\"\n }\n\n});\n\n//virtual field\nuserSchema.virtual('password')\n.set(function(password){\n //create temp var _password\n this._password = password;\n //generate a timestamp\n this.salt = uuidv1();\n // encrypt password\n this.hashed_password = this.encryptPassword(password);\n})\n.get(function(){\n return this._password;\n})\n\n\n//methods\nuserSchema.methods = {\n\n authenticate: function(plainText){\n return this.encryptPassword(plainText) === this.hashed_password;\n },\n\n encryptPassword: function(password){\n if(!password) return \"\";\n try{\n return crypto.createHmac('sha1',this.salt)\n .update(password)\n .digest('hex')\n } catch(err){\n return \"\"\n }\n }\n}\nmodule.exports = mongoose.model(\"User\", userSchema);\n\nProfile.js=\nimport React, { Component } from 'react';\n\nimport { isAuthenticated } from \"../auth\";\nimport { Redirect, Link } from 'react-router-dom';\nimport { read } from \"./apiUser\";\nimport DefaultProfile from '../images/avatar.jpg';\nimport DeleteUser from './DeleteUser';\nimport 
FollowProfileButton from './FollowProfileButton';\nimport { listByUser } from '../post/apiPost';\nimport '../css/Profile.css';\n\nimport { Tabs, Tab } from 'react-bootstrap-tabs';\n\nimport Loading from '../loading/Loading';\n\nclass Profile extends Component {\n constructor() {\n super();\n this.state = {\n // user: \"\",\n user: { following: [], followers: [] },\n redirectToSignin: false,\n following: false,\n error: \"\",\n posts: [],\n loading: false\n }\n }\n\n // check follow\n checkFollow = (user) => {\n const jwt = isAuthenticated();\n const match = user.followers.find(follower => {\n return follower._id === jwt.user._id\n })\n return match\n }\n\n\n clickFollowButton = callApi => {\n this.setState({ loading: true })\n const userId = isAuthenticated().user._id;\n const token = isAuthenticated().token;\n callApi(userId, token, this.state.user._id)\n .then(data => {\n if (data.error) {\n \n this.setState({ error: data.error })\n } else {\n this.setState({ user: data, following: !this.state.following, loading: false })\n }\n })\n }\n\n // profileUnfollow = (unfollowId) => {\n // const userId = isAuthenticated().user._id;\n // const token = isAuthenticated().token;\n // unfollow(userId, token, unfollowId)\n // .then(data => {\n // if (data.error) {\n // this.setState({ error: data.error })\n // } else {\n // this.setState({ user: data })\n // }\n // })\n // }\n\n // unfollowClick = (e) => {\n // const unfollowId = e.target.getAttribute(\"data-index\");\n // this.profileUnfollow(unfollowId);\n // }\n\n init = (userId) => {\n this.setState({ loading: true })\n const token = isAuthenticated().token;\n read(userId, token)\n .then(data => {\n if (data.error) {\n this.setState({ redirectToSignin: true });\n } else {\n let following = this.checkFollow(data);\n this.setState({ user: data, following });\n this.loadPosts(data._id);\n }\n });\n };\n\n loadPosts = (userId) => {\n const token = isAuthenticated().token;\n listByUser(userId, token)\n .then(data => {\n if (data.error) {\n console.log(data.error)\n } else {\n this.setState({ posts: data, loading: false });\n }\n })\n }\n\n componentDidMount() {\n const userId = this.props.match.params.userId;\n this.init(userId);\n }\n\n componentWillReceiveProps(props) {\n const userId = props.match.params.userId;\n this.init(userId);\n }\n\n renderProfile = () => {\n const { user, following, posts } = this.state;\n const photoUrl = user._id ? `${process.env.REACT_APP_API_URL}/user/photo/${user._id}?${new Date().getTime()}` : DefaultProfile;\n let followingBadge = <p style={{ marginBottom: \"0\" }}><span className=\"badge badge-pill badge-primary\">{user.following.length}</span> Following</p>\n let followersBadge = <p style={{ marginBottom: \"0\" }}><span className=\"badge badge-pill badge-success\">{user.followers.length}</span> Followers</p>\n let postsBadge = <p style={{ marginBottom: \"0\" }}><span className=\"badge badge-pill badge-warning\">{posts.length}</span> Posts</p>\n return <div className=\"user-profile\">\n <div className=\"row\">\n <div className=\"col-md-4\">\n <div className=\"profile-info-left\">\n <div className=\"text-center\">\n <img \n height=\"300\"\n width=\"300\"\n src={photoUrl} \n alt={user.name} \n onError={i => (i.target.src = DefaultProfile)} \n className=\"avatar img-circle\" \n />\n <h2 className=\"mt-2\" >{user.name}</h2>\n </div>\n <div className=\"action-buttons\">\n {isAuthenticated().user && isAuthenticated().user._id === user._id ? 
(\n <>\n <div className=\"row\">\n <div className=\"col-md-4 col-xs-6\">\n <Link \n className=\"btn btn-sm btn-raised btn-primary\"\n to={`/post/create`}\n >\n Create Post\n </Link>\n </div>\n <div className=\"col-md-4 col-xs-6\">\n <Link \n className=\"btn btn-sm btn-raised btn-dark\"\n to={`/user/edit/${user._id}`}\n >\n Edit Profile\n </Link>\n </div>\n\n </div>\n <div className=\"mt-2\">\n <DeleteUser userId={user._id} />\n </div>\n </>\n ): (\n <div className=\"row\">\n <div className=\"col-md-6 col-xs-6\">\n <Link \n className=\"btn btn-sm btn-raised btn-success ml-3\"\n to={`/chat/${isAuthenticated().user._id}/${user._id}`}\n >\n Message\n </Link>\n </div>\n <div className=\"col-md-6 col-xs-6\">\n <FollowProfileButton following={following} onButtonClick={this.clickFollowButton} />\n </div>\n </div> \n )}\n \n </div>\n <div className=\"section\">\n <h3>About Me</h3>\n <p>{user.about}</p>\n </div>\n <div className=\"section\">\n <h3>Phone Number</h3>\n <p>{user.phone}</p>\n </div>\n <div className=\"section\">\n <h3>Statistics</h3>\n <p><span className=\"badge badge-pill badge-primary\">{user.following.length}</span> Following</p>\n <p><span className=\"badge badge-pill badge-success\">{user.followers.length}</span> Followers</p>\n <p><span className=\"badge badge-pill badge-warning\">{posts.length}</span> Posts</p>\n </div>\n </div>\n </div>\n <div className=\"col-md-8\">\n <div className=\"profile-info-right\">\n <Tabs onSelect={(index, label) => console.log(label + ' selected')}>\n <Tab label={postsBadge} className=\"tab-title-name\">\n <div className=\"row\">\n {posts.map((post, i) => (\n <div key={i} style={{ paddingBottom: \"15px\" }} className=\"col-md-4\">\n <Link to={`/post/${post._id}`} >\n <figure className=\"snip1205 red\">\n <img \n style={{ objectFit: \"cover\", padding: \"0\" }}\n height=\"200\"\n width=\"200\"\n src={`${process.env.REACT_APP_API_URL}/post/photo/${post._id}`}\n alt={post.title} \n />\n <i className=\"fas fa-heart\">\n <br />\n <span style={{ color: \"white\", fontSize: \"20px\" }} >{post.likes.length}</span>\n </i>\n </figure>\n </Link>\n </div>\n ))}\n </div>\n </Tab>\n <Tab label={followersBadge} className=\"tab-title-name\">\n {user.followers.map((person, i) => (\n <div key={i} className=\"media user-follower\">\n <img \n src={`${process.env.REACT_APP_API_URL}/user/photo/${person._id}`}\n onError={i => (i.target.src = DefaultProfile)}\n alt={person.name} \n className=\"media-object pull-left mr-2\" \n />\n <div className=\"media-body\">\n <Link to={`/user/${person._id}`} >\n {person.name}<br /><span className=\"text-muted username\">@{person.name}</span>\n </Link>\n {/* <button type=\"button\" className=\"btn btn-sm btn-toggle-following pull-right\"><i className=\"fa fa-checkmark-round\"></i> <span>Following</span></button> */}\n </div>\n </div>\n ))}\n </Tab>\n\n <Tab label={followingBadge} className=\"tab-title-name\">\n {user.following.map((person, i) => (\n <div key={i} className=\"media user-following\">\n <img \n src={`${process.env.REACT_APP_API_URL}/user/photo/${person._id}`}\n onError={i => (i.target.src = DefaultProfile)}\n alt={person.name} \n className=\"media-object pull-left mr-2\" \n />\n <div className=\"media-body\">\n <Link to={`/user/${person._id}`} >\n { person.name }<br /><span className=\"text-muted username\">@{person.name}</span>\n </Link>\n {/* <button data-index = {person._id} onClick={this.unfollowClick} type=\"button\" className=\"btn btn-sm btn-toggle-following pull-right\"><i className=\"fa fa-checkmark-round\"></i> 
<span>Unfollow</span></button> */}\n </div>\n </div>\n ))}\n </Tab>\n </Tabs>\n </div>\n </div>\n </div>\n </div>\n }\n\n\n render() {\n const { redirectToSignin, user, loading } = this.state;\n console.log(\"state user\", user);\n if (redirectToSignin) {\n return <Redirect to='/signin' />\n }\n\n\n return (\n <div className=\"container\">\n { loading ? (\n <Loading />\n ) : (\n this.renderProfile()\n ) }\n </div>\n );\n }\n}\n\nexport default Profile;\nimport React, { Component } from 'react';\n\nimport Loading from '../loading/Loading';\n\nimport { read, update, updateUser } from \"./apiUser\";\nimport { isAuthenticated } from \"../auth\";\nimport { Redirect } from 'react-router-dom';\nimport DefaultProfile from '../images/avatar.jpg';\n\n\nclass EditProfle extends Component {\n\n constructor() {\n super();\n this.state = {\n id: \"\",\n name: \"\",\n email: \"\",\n about: \"\",\n phone: \"\",\n password: \"\",\n loading: false,\n redirectToProfile: false,\n error: \"\",\n fileSize: 0\n }\n }\n\n init = (userId) => {\n const token = isAuthenticated().token;\n read(userId, token)\n .then(data => {\n if (data.error) {\n this.setState({ redirectToProfile: true })\n } else {\n this.setState({ \n id: data._id,\n name: data.name,\n email: data.email,\n error: \"\" ,\n about: data.about,\n phone: data.phone\n });\n }\n })\n }\n\n componentDidMount() {\n this.userData = new FormData()\n const userId = this.props.match.params.userId;\n this.init(userId);\n }\n\n isValid = () => {\n const { name, email, password, fileSize } = this.state;\n const userId = this.props.match.params.userId;\n if(userId !== isAuthenticated().user._id){\n this.setState({ error: \"You are not authorized to do this !!\", loading: false });\n return false;\n }\n\n if (fileSize > 1000000) {\n this.setState({ error: \"File size should be less than 1 MB\", loading: false });\n return false;\n }\n\n if (name.length === 0) {\n this.setState({ error: \"Name is required\", loading: false });\n return false;\n }\n //test regular expression with 'test' keyword\n if (!/^\\w+([.-]?\\w+)*@\\w+([.-]?\\w+)*(\\.\\w{2,3})+$/.test(email)) {\n this.setState({ error: \"Please enter a valid email address.\", loading: false });\n return false;\n }\n if (password.length >= 1 && password.length <= 5) {\n this.setState({ error: \"Password must be at least 6 characters long\", loading: false });\n return false;\n }\n return true;\n }\n\n handleChange = e => {\n const value = e.target.name === 'photo' ? e.target.files[0] : e.target.value;\n const fileSize = e.target.name === 'photo' ? 
e.target.files[0].size : 0;\n this.userData.set(e.target.name, value);\n this.setState({\n error: \"\",\n [e.target.name]: value,\n fileSize\n });\n };\n\n clickSubmit = e => {\n e.preventDefault();\n this.setState({ loading: true })\n if (this.isValid()) {\n //const { name, email, password } = this.state;\n //const user = { name, email, password: password || undefined };\n // console.log(user);\n const userId = this.props.match.params.userId;\n const token = isAuthenticated().token;\n update(userId, token, this.userData)\n .then(data => {\n if (data.error) {\n this.setState({ error: data.error, loading: false });\n } else {\n updateUser(data, () => { \n this.setState({\n redirectToProfile: true\n });\n })\n }\n });\n }\n\n };\n\n signupForm = (name, email, password, loading, about, phone) => (\n <form>\n <div className=\"form-group\">\n <label className=\"text-muted\">Profile Photo</label>\n <input\n onChange={this.handleChange}\n name=\"photo\"\n type=\"file\"\n accept=\"image/*\"\n className=\"form-control\"\n />\n </div>\n <div className=\"form-group\">\n <label className=\"text-muted\">Name</label>\n <input\n onChange={this.handleChange}\n name=\"name\"\n type=\"text\"\n className=\"form-control\"\n value={name}\n />\n </div>\n <div className=\"form-group\">\n <label className=\"text-muted\">Email</label>\n <input\n onChange={this.handleChange}\n type=\"email\"\n name=\"email\"\n className=\"form-control\"\n value={email}\n />\n </div>\n <div className=\"form-group\">\n <label className=\"text-muted\">About</label>\n <textarea\n onChange={this.handleChange}\n type=\"text\"\n name=\"about\"\n className=\"form-control\"\n value={about}\n />\n </div>\n <div className=\"form-group\">\n <label className=\"text-muted\">Phone</label>\n <textarea\n onChange={this.handleChange}\n type=\"string\"\n name=\"phone\"\n className=\"form-control\"\n value={phone}\n />\n </div>\n <div className=\"form-group\">\n <label className=\"text-muted\">Password</label>\n <input\n onChange={this.handleChange}\n type=\"password\"\n name=\"password\"\n className=\"form-control\"\n value={password}\n />\n </div>\n \n <button onClick={this.clickSubmit} className=\"btn btn-raised btn-primary\">Update</button>\n </form>\n );\n\n render() {\n\n const { id, name, email, password, loading, redirectToProfile, error, about, phone } = this.state;\n if (redirectToProfile) {\n return <Redirect to={`/user/${isAuthenticated().user._id}`}></Redirect>\n }\n const photoUrl = id ? `${process.env.REACT_APP_API_URL}/user/photo/${id}?${new Date().getTime()}` : DefaultProfile ;\n\n return (\n <div className=\"container\">\n <h2 className=\"mt-5 mb-5\">Edit Profile</h2>\n <div className=\"alert alert-danger\" style={{ display: error ? \"\" : \"none\" }}>\n {error}\n </div>\n <img \n style={{ display: loading ? \"none\" : \"\" , height: \"200px\", width: \"auto\" }} \n className=\"img-thumbnail\" \n src={photoUrl} \n onError={i => (i.target.src = DefaultProfile)}\n alt={name} \n />\n {loading ? (\n <Loading />\n ) : (\n this.signupForm(name, email, password, loading, about, phone)\n )}\n </div>\n );\n }\n}\n\nexport default EditProfle;\n", "text": "Hi Lauren,I removed ‘background: “#dcdcdc”,’ in line 1 for the code but the error still shows that points to background in line 3. I don’t know which one is important for me to keep. 
Should I remove background in line 3 or in line 2?=Do I remove both import React, { Component } from “react”; & import { withRouter } from “react-router-dom”;?Here is my MongoDB Compass database=\nUser.js=EditProfile.js=Thank you!", "username": "Jen" }, { "code": "import { Component } from “react”;aboutphoneabouttype=\"text\"phonetype=\"string\"", "text": "Hi Jen,A couple of things…", "username": "Lauren_Schaefer" }, { "code": "index.js:1 Warning: Using UNSAFE_componentWillReceiveProps in strict mode is not recommended and may indicate bugs in your code. See https://fb.me/react-unsafe-component-lifecycles for details.\n\n* Move data fetching code or side effects to componentDidUpdate.\n* If you're updating state whenever props change, refactor your code to use memoization techniques or move it to static getDerivedStateFromProps. Learn more at: https://fb.me/react-derived-state\n\nPlease update the following components: InfiniteScroll\nconsole.<computed> @ index.js:1\n\nProfile.js:263 state user Object\nindex.js:1 Warning: Failed prop type: Invalid prop `label` of type `object` supplied to `TabComponent`, expected `string`.\n in TabComponent (at Profile.js:193)\n in Profile (at PrivateRoute.js:7)\n in Route (at PrivateRoute.js:6)\n in PrivateRoute (at MainRouter.js:35)\n in Switch (at MainRouter.js:25)\n in div (at MainRouter.js:23)\n in MainRouter (at App.js:9)\n in ScrollToTop (created by Context.Consumer)\n in withRouter(ScrollToTop) (at App.js:8)\n in Router (created by BrowserRouter)\n in BrowserRouter (at App.js:7)\n in App (at src/index.js:7)\n in StrictMode (at src/index.js:6)\nconsole.<computed> @ index.js:1\nProfile.js:263 state user Object\nreact-dom.development.js:88 Warning: componentWillReceiveProps has been renamed, and is not recommended for use. See https://fb.me/react-unsafe-component-lifecycles for details.\n\n* Move data fetching code or side effects to componentDidUpdate.\n* If you're updating state whenever props change, refactor your code to use memoization techniques or move it to static getDerivedStateFromProps. Learn more at: https://fb.me/react-derived-state\n* Rename componentWillReceiveProps to UNSAFE_componentWillReceiveProps to suppress this warning in non-strict mode. In React 17.x, only the UNSAFE_ name will work. To rename all deprecated lifecycles to their new names, you can run `npx react-codemod rename-unsafe-lifecycles` in your project source folder.\n\nPlease update the following components: Profile, TabsComponent\nprintWarning @ react-dom.development.js:88\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object\nProfile.js:263 state user Object \n", "text": "Hi Lauren,Do you also want to see each dropdown list of errors for each Profile.js above?Thank you!", "username": "Jen" }, { "code": "", "text": "Hi Jen,The background color thing should not be impacting the phone number.I recommend diving into the Profile.js errors. What is happening on line 263? 
What is happening in index.js on or around line 1?", "username": "Lauren_Schaefer" }, { "code": "", "text": "Hi Lauren,Line 263 of Profile.js just displays the user info: console.log(“state user”, user);I have 2 index.js files. Which one should I refer to? One file starts from line 1 and the other starts from line 3. If you mean the other file, it is fetching the sign-up info.Thanks!", "username": "Jen" }, { "code": "labelobjectTabComponentstring", "text": "Ah, ok, so the Profile logging is just a log and not an error.I’m not sure which index.js is being accessed on that page. See if you can do some debugging. I’m guessing the problem is related to the warning being shown:\nindex.js:1 Warning: Failed prop type: Invalid prop label of type object supplied to TabComponent, expected string.", "username": "Lauren_Schaefer" }, { "code": "", "text": "Hi Lauren,I found the right index.js, and there is no import React from ‘react’; or similar on line 1; it simply begins with export const signup = (user) => {. Should it import something on line 1, for example import PropTypes from ‘prop-types’; with Component.propTypes = {...}? What should the import line of code look like?Thank you!", "username": "Jen" }, { "code": "", "text": "Hi Lauren,\nWhen can you help me to debug the phone error code?\nThanks!", "username": "Jen" }, { "code": "labelobjectTabComponentstring", "text": "Hi Jen,\nHave you figured out the source of this warning?index.js:1 Warning: Failed prop type: Invalid prop label of type object supplied to TabComponent, expected string.I’m wondering if that is what is causing the problems.", "username": "Lauren_Schaefer" }, { "code": "", "text": "Hi Lauren,I searched for Tab, and the problem relates to the files Profile.css, Profile.js, and Chat.js.Thank you!", "username": "Jen" }, { "code": "", "text": "Were you able to figure it out?", "username": "Lauren_Schaefer" }, { "code": "", "text": "No, I wasn’t. Can you figure it out for me? Thanks, Lauren!", "username": "Jen" }, { "code": "", "text": "Looking at the stack trace, it looks like the error is occurring in TabComponent (at Profile.js:193). Have you checked there?", "username": "Lauren_Schaefer" } ]
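For readers hitting the same `Failed prop type` warning discussed in the thread above: the tab library declares `label` as a string, but the code passes a JSX badge (an object). A minimal, hedged sketch of two possible fixes; `TabComponent` here stands in for whichever tab component is actually in use, and the prop declaration only applies if you control that component:

```javascript
// Option 1: pass a plain string label and render the badge inside the tab body
// instead, which satisfies a string-only `label` prop.
<Tab label={`Posts (${posts.length})`} className="tab-title-name">
  {/* tab content, including any badge markup, goes here */}
</Tab>

// Option 2 (only if you own the component): declare the prop as a node,
// since PropTypes.node accepts strings, numbers, and JSX elements alike.
import PropTypes from 'prop-types';

TabComponent.propTypes = {
  label: PropTypes.node.isRequired,
};
```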
How to update data to a reactjs website?
2021-01-25T19:45:21.470Z
How to update data to a reactjs website?
25,219
null
[ "aggregation", "java", "compass" ]
[ { "code": "AggregateIterable<Document> result = collection.aggregate(Arrays\n .asList(eq(\"$search\",\n eq(\"compound\",\n eq(\"should\",\n Arrays.asList(\n eq(\"text\",\n and(\n eq(\"query\", line),\n eq(\"path\", \"title\"),\n eq(\"fuzzy\", eq(\"maxEdits\", 1L)),\n eq(\"score\", eq(\"boost\", eq(\"value\", 2L))))),\n eq(\"text\",\n and(\n eq(\"query\", line),\n eq(\"path\", \"ausschreibungsText\"),\n eq(\"fuzzy\", eq(\"maxEdits\", 1L)),\n eq(\"score\", eq(\"boost\", eq(\"value\", 1L))))),\n eq(\"text\",\n and(\n eq(\"query\", line),\n eq(\"path\", \"artikelnummerHersteller\"),\n eq(\"score\", eq(\"boost\", eq(\"value\", 4L))))))))),\n limit(5),\n project(\n fields(\n excludeId(),\n include(\"title\", \"ausschreibungsText\", \"artikelnummerHersteller\", \"netPrice\", \"brutoPrice\", \"basismengenEinheit\"),\n computed(\"score\", eq(\"$meta\", \"searchScore\"))))));\n\n StreamSupport.stream(result.spliterator(), false)\n .peek(a -> LOG.info(\"result: {}\", a))\n .collect(Collectors.toList());\n", "text": "Hi, I’d like to use Atlas Search in Java.I created a working query in Compass, and exported its Java code.Exception I am getting is:com.mongodb.MongoCommandException: Command failed with error 8 (UnknownError): ‘Remote error from mongot :: caused by :: “path” is required (from “compound.should[0].text”)’ on server cluster0-shard-00-01.43pre.mongodb.net:27017. The full response is {“operationTime”: {“$timestamp”: {“t”: 1613647388, “i”: 2}}, “ok”: 0.0, “errmsg”: “Remote error from mongot :: caused by :: \"path\" is required (from \"compound.should[0].text\")”, “code”: 8, “codeName”: “UnknownError”, “$clusterTime”: {“clusterTime”: {“$timestamp”: {“t”: 1613647388, “i”: 2}}, “signature”: {“hash”: {“$binary”: {“base64”: “XcTWq9PN6XA+VigM2gDaifLd+9g=”, “subType”: “00”}}, “keyId”: 6928397309239623682}}}\nat com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:302) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:258) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:99) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:500) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:71) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:224) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:202) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:118) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:110) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:343) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:334) 
~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.operation.CommandOperationHelper.executeCommandWithConnection(CommandOperationHelper.java:220) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.operation.CommandOperationHelper$5.call(CommandOperationHelper.java:206) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.operation.OperationHelper.withReadConnectionSource(OperationHelper.java:462) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:203) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.operation.AggregateOperationImpl.execute(AggregateOperationImpl.java:189) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.operation.AggregateOperation.execute(AggregateOperation.java:296) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.internal.operation.AggregateOperation.execute(AggregateOperation.java:41) ~[mongodb-driver-core-4.0.5.jar:na]\nat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:190) ~[mongodb-driver-sync-4.0.5.jar:na]\nat com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135) ~[mongodb-driver-sync-4.0.5.jar:na]\nat com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92) ~[mongodb-driver-sync-4.0.5.jar:na]\nat com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:39) ~[mongodb-driver-sync-4.0.5.jar:na]\nat java.base/java.lang.Iterable.spliterator(Iterable.java:101) ~[na:na]So message is “path” is required (from “compound.should[0].text”)’ but I am providing this path.\nWhat am I doing wrong here?", "username": "Marco_Dell_Anna" }, { "code": "", "text": "Marco, I believe this issue has to do with improperly formatted BSON. I am looking into your precise issue. In the meantime, you can follow this related JIRA.", "username": "Marcus" }, { "code": "", "text": "Thanks Mark, please let me know, since I am blocked by this.", "username": "Marco_Dell_Anna" }, { "code": "", "text": "Hello @Marco_Dell_Anna, meanwhile if you want to, you can try this approach to execute the aggregation query in Java:", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks @Prasad_Saya, using native query is bypassing the problem.\nNow I got a List; since I am using Spring Data MongoDB, do you know what is the recommended approach to transform a Document into a POJO?", "username": "Marco_Dell_Anna" }, { "code": "MongoOperationsMongoTemplateMongoRepositoryMongoTemplate#AggregateDocumentMongoRepository", "text": "Hello @Marco_Dell_Anna, Spring Data MongoDB has its own APIs to work with aggregation queries. You can use MongoOperations (its implementation MongoTemplate class) or MongoRepository to build your aggregation query. For example, using MongoTemplate#Aggregate method (See MongoTemplate APIdocs) you can return the output of your aggregation as a POJO types instead of the Document class.Also, see this post solves an aggregation using Spring Data MongoDB’'s MongoRepository API: Compass pipeline export to Java not producing same results", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks @Prasad_Saya, I am trying to create the query in the Spring Data MongoDB way (as in java - How to do this aggregation in spring data mongo db? - Stack Overflow), but I still need to use the “fuzzy” and “boost” functionalities provided by Atlas Search. 
Is it possible to do it?\nIf not I will need to use native query and I will need another way to create the POJOs, right?", "username": "Marco_Dell_Anna" }, { "code": "$searchDocumentDocument", "text": "I am trying to create the query in the Spring Data MongoDB way…I don’t see the API for the $search pipeline stage in the MongoTemplate#Aggregate methods. I don’t know if Spring Data supports Atlas Search (did not find anything online, yet).If not I will need to use native query and I will need another way to create the POJOs, right?I think, yes. You need some routine to transform the Document class to a POJO ( a method to extract fields from the Document class and map them to a POJO class).", "username": "Prasad_Saya" }, { "code": "", "text": "Hi Marco,thanks for bringing that issue up! It was fixed in the new Compass 1.26.0 Beta that we published. It will generate the Java Code for aggregation pipelines without the fluent API for now. We are working on providing the fluent API for aggregation pipelines with all the different stages as soon as possible.You should also be able to generate a working Java aggregation pipeline in previous versions of Compass by disabling the “Use Builders” checkbox in the “Export Pipeline to Language” dialog.Let me know if that helps and sorry for the inconvenience!\nMichael", "username": "Michael_Rose" }, { "code": "$search", "text": "Spring Data does not yet support $search pipeline natively, but we could see support for it soon.", "username": "Marcus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
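For reference, the pipeline shape discussed in the thread above can be checked quickly in mongosh before porting it to a raw BSON Document in Java. This is a hedged sketch: the collection name and the query string "pump" are placeholders, and it assumes the default Atlas Search index; the paths, fuzzy, and boost values come from the original query:

```javascript
db.items.aggregate([
  {
    $search: {
      compound: {
        should: [
          // boosted fuzzy match on the title
          { text: { query: "pump", path: "title", fuzzy: { maxEdits: 1 }, score: { boost: { value: 2 } } } },
          // fuzzy match on the description field
          { text: { query: "pump", path: "ausschreibungsText", fuzzy: { maxEdits: 1 } } },
          // exact manufacturer part number, boosted highest
          { text: { query: "pump", path: "artikelnummerHersteller", score: { boost: { value: 4 } } } }
        ]
      }
    }
  },
  { $limit: 5 },
  { $project: { _id: 0, title: 1, score: { $meta: "searchScore" } } }
]);
```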
Using Atlas Search with Java not working
2021-02-18T11:48:07.819Z
Using Atlas Search with Java not working
6,831
null
[ "aggregation", "queries", "data-modeling" ]
[ { "code": "{\nuserID: 1,\n solutions: [\n { textID: 2,\n solution: \"some solution text\"\n } \n]'\n}\n", "text": "Hello everyone,thank you for having me and potentially helping out already. I am currently stuck on an in my eyes simple? update query.The scenario:\nWe save texts where users can save their own written solution. For the texts, we have our own collection the same as for the users. When the user submits a solution it is saved in the user Document in an array as an object containing the text ID and the solution itself. This works fine for the first time by using $push. When writing more text or editing this solution a new array object is created, instead of updating the one present:\nExample:As mentioned I used $push with using the following filter: { userID: 1, solutions.textID: 2 } and it still created a new one .\nMaybe I am just too much into it and not able to look left and right here, that’s why I reached out to you. Is it a flaw with my data model or should I use aggregations?Any help much appreciated \nStay save!", "username": "lucdoe" }, { "code": "test:PRIMARY> db.coll.insert({userID:1,solutions: [{textID:2, solution: \"the old text\"}]})\nWriteResult({ \"nInserted\" : 1 })\ntest:PRIMARY> db.coll.update({userID:1, \"solutions.textID\":2}, {$set: {\"solutions.$.solution\": \"the new text\"}})\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\ntest:PRIMARY> db.coll.findOne()\n{\n\t\"_id\" : ObjectId(\"602fa2f615271349f5e49e25\"),\n\t\"userID\" : 1,\n\t\"solutions\" : [\n\t\t{\n\t\t\t\"textID\" : 2,\n\t\t\t\"solution\" : \"the new text\"\n\t\t}\n\t]\n}\n", "text": "Hi @lucdoe,Here is an example of how to update a field in a document in an array using the $ positional operator:I hope this is what you wanted.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hey @MaBeuLux88, thanks for your quick answer, one yes this helps, but this is not really what I am aiming for. Just for clarity, I want to archive the outcome with one operation (if possible). Maybe I need to do it like you pointed out but it would be great to have it in one operation like $push, $set or similuar. Do you get where I am aiming to go with this?So if the text is written for the first time it should work with the same operation as with when I am editing.\nSorry trying to structure my thoughts haha", "username": "lucdoe" }, { "code": "", "text": "Hello @lucdoe, here is a post with similar update operation you are looking for. Let me know if it works for you.", "username": "Prasad_Saya" }, { "code": "test:PRIMARY> db.coll.insert({userID:1,solutions: [{textID:2, solution: \"the old text\"}]})\nWriteResult({ \"nInserted\" : 1 })\ntest:PRIMARY> db.coll.update({userID:1, \"solutions.textID\":2}, {$set: {\"solutions.$.solution\": \"the new text\"}})\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\ntest:PRIMARY> db.coll.findOne()\n{\n\t\"_id\" : ObjectId(\"602fa2f615271349f5e49e25\"),\n\t\"userID\" : 1,\n\t\"solutions\" : [\n\t\t{\n\t\t\t\"textID\" : 2,\n\t\t\t\"solution\" : \"the new text\"\n\t\t}\n\t]\n}\n", "text": "Hi @lucdoe,Here is an example of how to update a field in a document in an array using the $ positional operator:I hope this is what you wanted.Cheers,\nMaxime.In the end, I went to this solution for everyone coming back:\nIn the backend I first executed findOne({ userID: X, textID: Y}) to see if there is a solution by this user on the given text. 
If I get back null, I push a new object to the array.It works and is a bit simpler than @Prasad_Saya’s solution Thanks again, everyone!", "username": "lucdoe" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
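A sketch of the two-step pattern lucdoe settled on, expressed in mongosh. It first tries to update an existing solution in place with the positional `$` operator; only if nothing matched does it push a new entry. The userID/textID values and the collection name are illustrative:

```javascript
const res = db.users.updateOne(
  { userID: 1, "solutions.textID": 2 },
  { $set: { "solutions.$.solution": "the new text" } }
);
if (res.matchedCount === 0) {
  // no existing solution for this text, so append one
  db.users.updateOne(
    { userID: 1 },
    { $push: { solutions: { textID: 2, solution: "the new text" } } }
  );
}
```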
Updating an Object in an Array
2021-02-19T11:18:00.639Z
Updating an Object in an Array
31,016
null
[]
[ { "code": "", "text": "The question is as follows:Why is MongoDB a NoSQL database?The answers expected includesBecause it uses a structured way to store and access data.This seems incorrect since a SQL database is also structured.", "username": "Mark_Visschers" }, { "code": "", "text": "MongoDB is one of the types of database management systems that exists today. The reason of the existence of a DBMS such as MongoDB is to attend or resolve some sort of issues or new requirements where other types of DBMS will give a poor response or no response at all, or yet, will take a considerable amount of effort and resources.There is a structure query language for a DBMS like MongoDB, it is called MSQL. The MSQL is specific to MongoDB and the internals of its data structures.The data structures of a document database such as MongoDB are different from those of a relational database or a graph database, and the internal workings of each other are different as well. Not withstanding, you can relate data in MongoDB in its proper way as you would in a relational database proper way.So, when you see “NoSQL” you can think of a DBMS that was engeneered different from a relational database model, not that it does not uses a Structured Query Language, or that does not use data structures to hold the data. Keep in mind that all types of databases must use data structures to hold the data.Jaime ", "username": "jhrozo" }, { "code": "", "text": "Hi @Mark_Visschers and @jhrozo,This is a great discussion, thank you for bringing it up. The question itself is a check all that apply question, meaning that you select all options that are correct about a statement.The structure of the lesson that precedes this question states that MongoDB is a database, which by definition is just a structured way to store data. The structure differs from one database to another, but it is always present, otherwise, we wouldn’t call it a database, but a data dump instead.We had a great discussion on this topic in this thread if you’re interested to learn more. @jhrozo’s response touches on some of the things that we discussed there.Below is my response to the linked thread for reference.\" This is a great discussion thread, and I would love to weigh in and see what you think about all this.There are a couple of things on the internet that give people the wrong impression about MongoDB. The first thing is that it is non-structured. By definition, if something is a database, then there is a structure in how the data is stored. This structure could be related tables of data, graphs, key-value pairs, or an alphabetized set of library cards. If there wasn’t structure to the way that the data is stored, it wouldn’t be a database, it would be a datadump of some sort. For example my history notebook from highschool is an unstructured way to store data. There is no way to find information in that notebook other than reading through every single page of it and hoping to find a scribble of relevant information.This conversation earlier mentioned non-structured query language . The language that is used to query the database is a separate topic. In theory, anything that is not SQL is a non-structured query language, but not because it lacks some sort of structure, but because SQL called dibs on the word structured . It is literally in its name Structured Query Language. 
When people want to refer to any querying language that is non-SQL they simply say “unstructured” instead of saying MQL, GraphQL, etc.Another misconception that is floating around the web is that MongoDB is schemaless. Meaning that it lacks schema altogether, and it adds to the misconception that MongoDB doesn’t have relationships. These are not true. MongoDB can easily store related data, enforce schema, and demand structure from the way the data is represented and stored. The key difference here is that there is flexibility in how the schema and relationships are being designed and used, and it isn’t uniform for everyone. Whereas in the world of related tables, there is a strict formula by which related data needs to be organized, and there is no other way to do it, hence, the lack of flexibility in the approach.Some applications need to ability to have optional fields and a variety of data types in their data, others don’t and MongoDB can accommodate for both by using good data modeling practices, and schema validation rules.Let me know if you have questions about any of this or if I wasn’t clear at some point in this post.Hope this helps clear things up a little.\"", "username": "Yulia_Genkina" }, { "code": "", "text": "Thank you for your extensive response, really appreciate it!\nThe information provided is very extensive and clears a lot up. I think it is really important info and I get that people might mistake mongo as unstructured just because the “S” in SQL.\nI will definitely look into the provided thread to learn more on this topic.What threw me off was the NoSQL - Database part in the question. For me it would be more clear if it was a separate question like: “Is Mongodb a structured or unstructed database?” or “What is true about MongoDb?” with the same answers applying. This way NoSQL part is separated from the database part.But I am actually glad that it was a little confusing for me, otherwise I would have missed this extensive answer which provided me with a lot more information!Thank you both again!", "username": "Mark_Visschers" }, { "code": "", "text": "Thank you for your extensive answer. The “S” would throw somebody off without this information. The information that you provided will make sure I will look at mongo or any other NoSQL database, in the right light.", "username": "Mark_Visschers" }, { "code": "", "text": "", "username": "system" } ]
Chapter1 Quiz 1
2021-02-15T12:50:53.605Z
Chapter1 Quiz 1
4,213
null
[]
[ { "code": "argumentsexports = function(a, b) {\n return a + b;\n};\n{\n \"numGamesPlayed\": {\n \"%function\": {\n \"name\": \"sum\",\n \"arguments\": [\n 10,\n 20\n ]\n }\n }\n}\n", "text": "I’m checking this documentation:\nCall a Function — MongoDB RealmHowever, it’s not clear as to how to call a function from a client not using any SDK and just using plain old HTTP request.What’s the endpoint to call to call a Realm function?What are the required HTTP query parameters and headers?The JSON expression discussed on the documentation is that the actual payload format? And the functions arguments is the JSON array arguments?In the example function provided:Would the actual payload for the function be like this?:", "username": "cyberquarks" }, { "code": "", "text": "Hi @cyberquarks,To run a function via http call you have to define it as an HTTP 3rd party webhook:https://docs.mongodb.com/realm/services/configure/service-webhooks/Let me know if that helps.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,How does Realm Web SDK call functions? I’m sure it is still bound to what browsers can do, either HTTP or websocket?", "username": "cyberquarks" } ]
Call Realm function from plain old HTTP request
2021-02-21T10:11:53.235Z
Call Realm function from plain old HTTP request
2,346
null
[ "indexes" ]
[ { "code": "", "text": "Hi All,I have the following use case: there is a large amount of live data on a collection (~250GB) with a set of indexes (these are created when db is empty and then data are inserted afterwards). Then, we’ve done some analysis and decided to use a new set of indexes.Question: what is the best way to deploy the new set of indexes and remove the old set of indexes (that affects existing data as well ) ?I’m wondering if there’s a better way, or better, what is the industry standard approach to deal with this problem ?Tuan", "username": "Tuan_Dinh1" }, { "code": "", "text": "Hi @Tuan_Dinh1,Yes we have a rolling maintenance index creation for replica sets and sharded clusters:\nhttps://docs.mongodb.com/manual/tutorial/build-indexes-on-replica-sets/If this is not possible make sure to use a background methods to not block database activity.Thanks.\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks @Pavel_Duchovny, will give it a go.", "username": "Tuan_Dinh1" }, { "code": "", "text": "Hi @Pavel_Duchovny,Thanks again for pointing me to this doc. The idea seems to be rolling out the new indexes on one replica of the replica set at a time, with the primary to be the last. But at each replica, there is no way-around running db.collection.createIndex(). This is great when you have to perform the index creation to live cluster.Fortunately, we don’t have this constraint in our case. Yes, there will be existing data, but we have blue/green deployment strategy where all operations can be to a new cluster and checked before it’s switched. So basically, we will do:Any comments, thoughts ?Tuan", "username": "Tuan_Dinh1" }, { "code": "", "text": "Hi @Tuan_Dinh1,Nope sounds good. In that case a regular index build will be the fastest.Thanks,\nPavel", "username": "Pavel_Duchovny" } ]
Best way to amend indexes on large existing data
2021-02-18T23:40:12.578Z
Best way to amend indexes on large existing data
1,645
null
[ "spark-connector" ]
[ { "code": "", "text": "My JAVA project scala, hadoop, spark version as follows:\n<scala.version>2.12.10</scala.version>\n<hadoop.version>2.7.3</hadoop.version>\n<spark.version>3.0.0</spark.version>\npom.xml is:\n\norg.mongodb.spark\nmongo-spark-connector_2.12\n3.0.0\nwhen run MongoSpark.save(sparkDocuments, writeConfig), there is an error: ‘Caused by: java.lang.ClassNotFoundException: com.mongodb.client.result.InsertManyResult’\nreference: [https://docs.mongodb.com/spark-connector/master/java/write-to-mongodb](http://spark-connector write-to-mongodb)could someone assist, thanks!", "username": "alter_bin" }, { "code": " <dependency>\n <groupId>org.mongodb.spark</groupId>\n <artifactId>mongo-spark-connector_2.12</artifactId>\n <version>2.4.3</version>\n </dependency>\n", "text": "I know the error cased by mongo-spark-connector_2.12 high version 3.0.0, reducing it to 2.4.3 is ok.Thanks all.", "username": "alter_bin" } ]
"ClassNotFoundException: com.mongodb.client.result.InsertManyResult" occourred while trying to schedule the "MongoSpark.save(sparkDocuments, writeConfig)"
2021-02-19T11:13:24.997Z
“ClassNotFoundException: com.mongodb.client.result.InsertManyResult” occurred while trying to schedule the “MongoSpark.save(sparkDocuments, writeConfig)”
3,803
null
[ "crud", "graphql", "schema-validation" ]
[ { "code": " {\n \"required\": [\n \"name\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"name\": {\n \"bsonType\": \"string\",\n \"minLength\": 5\n }\n },\n \"title\": \"common.location\"\n }\n", "text": "I have a schema for a collection:When I run the validation on existing data, it works. When I insert data without a name, it gives me a validation error back. When I insert data with less than 5 characters, it still inserts. Does the schema do anything on insert or update? Especially from either mongodb or graphql?", "username": "Jorden_Lowe" }, { "code": "", "text": "So, I found out there are 2 different sections to add schema validations.The first one does simple validations like min/max length, nullables, etc. Kinda like database constraints in SQL land. You have to log into mongo compass to set these.The second one is for more business rules, like make sure this record is unique based on certain criteria. More info found here: https://docs.mongodb.com/realm/mongodb/enforce-a-document-schema/", "username": "Jorden_Lowe" } ]
JSON schema not working on insert/update in graphQL
2021-02-19T00:00:46.956Z
JSON schema not working on insert/update in graphQL
2,576
https://www.mongodb.com/…7_2_1024x796.png
[ "atlas-device-sync" ]
[ { "code": " {\n \"title\": \"Vendor\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"_partitionKey\",\n \"tstamp\",\n \"isDeleted\",\n \"code\",\n \"name\",\n \"labelName\",\n \"details\",\n \"filter\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"_partitionKey\": {\n \"bsonType\": \"string\"\n },\n \"tstamp\": {\n \"bsonType\": \"date\"\n },\n \"isDeleted\": {\n \"bsonType\": \"bool\"\n },\n \"code\": {\n \"bsonType\": \"string\"\n },\n \"name\": {\n \"bsonType\": \"string\"\n },\n \"labelName\": {\n \"bsonType\": \"string\"\n },\n \"details\": {\n \"bsonType\": \"string\"\n },\n \"filter\": {\n \"bsonType\": \"string\"\n },\n \"brands\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"products\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n }\n }\n}\n", "text": "\nSync Error - Invalid Schema2422×1884 395 KB\nThe schema seems to get created despite this error.And these errors as well\nimage3462×2012 438 KB\nAnd for completion here is the resulting Vendor schema\nimage3460×1860 362 KB\n", "username": "Duncan_Groenewald" }, { "code": "", "text": "Here is another oneimage4246×696 189 KB", "username": "Duncan_Groenewald" }, { "code": "", "text": "Hi Duncan,Based on the error code in the first two screenshots it means they were due to invalid schema changes in the changeset.Are you still experiencing the issue since creating this post?If so, it would be best to raise a support ticket for this (if you haven’t already) as there are multiple errors here that would require further investigation of your specific setup.\nhttp://cloud.mongodb.com/supportRegards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Thanks - yes I did raise a support ticket - seems there were some issues with using the Global realm app option that caused this.", "username": "Duncan_Groenewald" } ]
What is the meaning of this Invalid Schema error?
2021-01-14T01:24:23.520Z
What is the meaning of this Invalid Schema error?
4,426
null
[]
[ { "code": "class SourceOperator: SyncOperator { ... }\n\nclass RestSourceOperator: SourceOperator { ... }\n\nclass BatchOperator: AggregateOperator { let sources = List<SourceOperator>() ... }\n", "text": "Given the following three models:After instantiating a RestSourceOperator which inherits from SourceOperator and attempt to append it to the BatchOperator instance the application crashes with:Terminating app due to uncaught exception ‘RLMException’, reason: 'Object of type ‘RestSourceOperator’ does not match RLMArray type ‘SourceOperator’.'I’d like to avoid hacking around our domain model, any suggestions around handling this exception and helping persist a RestSourceOperator while still extending the base class?", "username": "Michael_Kofman" }, { "code": "", "text": "Realm doesn’t support inheritance, so you can’t add a child class to a collection of the base one.", "username": "nirinchev" }, { "code": "", "text": "I hope we can add this to the roadmap.\nI found the following thread useful Abstract Class / Polymorphism Support · Issue #1109 · realm/realm-swift · GitHub", "username": "Michael_Kofman" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Model Inheritance Does Not Match RLMArray type
2021-02-21T19:28:57.270Z
Model Inheritance Does Not Match RLMArray type
1,900
null
[ "installation", "on-premises" ]
[ { "code": "", "text": "I’m trying to install Charts on a Ubuntu 20.4 GCP instance with port 80 and 443 open. I’ve installed the metadata MongoDB on the same instance, binding ip to 0.0.0.0 and enabling authentication. Connecting to the metadata DB with the root DB user and performing “show dbs” using “mongo” works locally.However, I’m currently unable to connect to the metadata DB using the test script:docker run --rm Quay charts-cli test-connection ‘mongodb://user:[email protected]:27017/?authSource=admin’This is the error response. I’ve spent 1-2 days and can’t get Charts to work Unable to connect to MongoDB using the specified URI.\nThe following error was returned while attempting to connect:\nMongoNetworkError: failed to connect to server [127.17.0.1:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 127.17.0.1:27017]\nThe result from pinging the specified server “127.17.0.1” from within the container is:\nPING 127.17.0.1 (127.17.0.1) 56(84) bytes of data.\n64 bytes from 127.17.0.1: icmp_seq=1 ttl=64 time=0.026 ms\n— 127.17.0.1 ping statistics —\n1 packets transmitted, 1 received, 0% packet loss, time 0ms\nrtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms\nPossible reasons for this error include:\n- The hostname you specified is incorrect\n- MongoDB is not running on the server\n- The Docker container was unable to resolve the hostname specified in the URI\n- The Docker container was unable to resolve the hostnames of replica set members as configured on the server\nThings you can try:\n- Check that the hostname is correct and MongoDB is running on the host.\n- Try connecting using the host IP address instead of the hostname. This may require changes to your mongod or replica set configuration\n- Configure Docker to use a custom DNS server that can resolve the hostname. For example, if your DNS server is 1.2.3.4:\n- As a parameter to this “docker run” command, add the following after -it: --dns=1.2.3.4\n- After successfully validating the connection, in the Docker Compose file for launching Charts, add this line as a child of charts: dns: 1.2.3.4\n- Configure Docker with explicit mappings between hostnames and IP addresses. For example if the host “myhost1” is reachable at 4.3.2.1 and “myhost2” is reachable at 4.3.2.2:\n- As a parameter to this “docker run” command, add the following after -it: --add-host myhost1:4.3.2.1 --add-host myhost2:4.3.2.2\n- After successfully validating the connection, in the Docker Compose file for launching Charts, add these lines as a child of charts:\nextra_hosts:\n- “myhost1:4.3.2.1”\n- “myhost2:4.3.2.2”Any idea would be super appreciated ", "username": "Minh_Nhat" }, { "code": "", "text": "Hello Minh,Are you sure that IP is correct in the connection-uri? I think for standard Linux installation it should be : 172.17.0.1", "username": "James_C" } ]
On-Prem Charts installation: Failed to test connection to metadata DB
2020-11-30T10:57:34.780Z
On-Prem Charts installation: Failed to test connection to metadata DB
3,641
null
[ "replication" ]
[ { "code": " var cfg = {\n \"_id\": \"inno-repl\",\n \"members\": [\n {\n \"_id\": 0,\n \"host\": \"inno1:27017\",\n \"priority\": 2\n },\n {\n \"_id\": 1,\n \"host\": \"inno2:27017\",\n \"priority\": 0\n },\n {\n \"_id\": 2,\n \"host\": \"inno3:27017\",\n \"priority\": 1,\n \"arbiterOnly\": true\n }\n ]\n };\n rs.initiate(cfg, { force: true });\n rs.reconfig(cfg, { force: true });\n", "text": "Hello, I have a question…MySet = [primary] - [secondary] - [arbiter]\nPrimary works fine.[Qustion][My config]I am wondering if there is any problem with the above settingsThank you all have a good day.", "username": "Jun_Kwon" }, { "code": "", "text": "Please check if you have given the correct priority for arbiter\nA secondary can never become primary if priority is set to 0", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Priority: 0 On your secondary mean it will never assume primary.You can read more about priority here", "username": "chris" }, { "code": "", "text": "Thank you very much for the advice!!", "username": "Jun_Kwon" } ]
Why the secondary is not becoming a primary in a PSA replica-set?
2021-02-21T03:38:18.649Z
Why the secondary is not becoming a primary in a PSA replica-set?
2,499
null
[]
[ { "code": "", "text": "", "username": "Rimantas_Belovas" }, { "code": "", "text": "Try @Ramachandra_37567 solution presented in\nhttps://www.mongodb.com/community/forums/t/cant-pass-last-assignment-in-chapter-1-issue/80415?u=steevej-1495", "username": "steevej" }, { "code": "", "text": "Thanks, it helped!", "username": "Rimantas_Belovas" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Cluster "Sandbox" failing first tests
2021-02-21T08:10:34.892Z
Cluster “Sandbox” failing first tests
1,555
null
[ "crud" ]
[ { "code": "$set: {\n health: 150,\n attack: 3,\n defence: 3,\n endurance: 10,\n power: attack + defence + endurance / 3,\n}\n", "text": "Hello everyone,I am trying to make a calculate field but for some reason i cant get it to work:The power field wont be able to make the calculation… how do i fix this?", "username": "JasoO" }, { "code": "[{$set: {\n health: 150,\n attack: 3,\n defence: 3,\n endurance: 10}, \n { $set: { $add : [ \"$power\", \"$attack\" ,\"$defence\" , { $devide : [\"$endurance\", 3]}]}}]\n", "text": "Hi @JasoOFirst its easy to calculate it on application side as you know all values.\nHowever, aggregation pipeline update can help:The following page provides examples of updates with aggregation pipelines.Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "const updateDoc = {\n\n$set: {\n\nhealth: 150,\n\nattack: 3,\n\ndefence: 3,\n\nendurance: 10,\n\ncharacterimg: \"https://i.ibb.co/MPg2SMp/Apocaliptic1.png\",\n\n},\n\n$set: { $add : [ \"$power\", \"$attack\" ,\"$defence\" , { $devide : [\"$endurance\", 3]}]}\n\n}\n\nconst result = await collection.updateOne(filter, updateDoc, options);\n\nconsole.log(\n\n`${result.matchedCount} document(s) matched the filter, updated ${result.modifiedCount} document(s)`,\n\n);\n\n} finally {\n\nlocalStorage.firstTime = \"No\"\n\nres.redirect('/main'); \n\nawait client.close();\n\n}\n\n}\n\nrun().catch(console.dir); \n\n})\n", "text": "Thank you for the answer, i am trying to do it like this:But it gives me this error:MongoError: The dollar ($) prefixed field ‘$add’ in ‘$add’ is not valid for storage.", "username": "JasoO" }, { "code": "const updateDoc = [{\n$set: {\nhealth: 150,\nattack: 3,\ndefence: 3,\nendurance: 10,\ncharacterimg: “https://i.ibb.co/MPg2SMp/Apocaliptic1.png”,\n},\n$set: { $add : [ “$power”, “$attack” ,\"$defence\" , { $devide : [\"endurance\", 3]}]} }]\n", "text": "@JasoO,This is not an update document its a pipeline array.You must define it like this and have a server of 4.2+Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hello Pavel,The outcome of this turns to 9, but it should be: power = attack+ defence + endurance / 3 = 5,33.Can you tell me what is going wrong here?", "username": "JasoO" }, { "code": "", "text": "Hi @JasoO,I don’t know what is value of power?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny The value of the power will be 0 at first, but then after it will do the calculation and it should be 5,33", "username": "JasoO" }, { "code": "$set: {\n health: 150,\n attack: 3,\n defence: 3,\n endurance: 10,\n characterimg: “https://i.ibb.co/MPg2SMp/Apocaliptic1.png”,\n power: 3 + 3 + 10/3,\n}\n", "text": "You already have all the values, can’t you simply do:in your client code. That’s much simpler than trying to have the server do the calculation.", "username": "steevej" }, { "code": "", "text": "I agree with @steevej.But aside of that in math 3+ 3 + 10/3 is ~9.How did you get to 5.33?Thanks", "username": "Pavel_Duchovny" }, { "code": "", "text": "Oh lol i am sorry it looks like i made a typo on my calculator :L Forget that i ever asked this.", "username": "JasoO" }, { "code": "", "text": "Thats interesting, i tought that you could only connect to mongodb with the server. So i only have to add this line somewhere in the code? Or do i have to connect to it the same way as the server would?", "username": "JasoO" } ]
How to calculate a field
2021-02-18T22:10:50.898Z
How to calculate a field
10,221
null
[ "crud" ]
[ { "code": " $set: {\n\n power: { $sum: [\"$power\", \"$attack\", \"$defence\", { $divide: [\"$endurance\", 3] }, { $round: [$sum, 2] }] }\n\n }\n $set: {\n\n power: { $sum: [\"$power\", \"$attack\", \"$defence\", { $divide: [\"$endurance\", 3] }, { $round: [power, 2] }] }\n\n }\n", "text": "So how could i round the outcome of a sum and divide? Because i need a specific number but this number would never be the same.I tried this:And this:But both turn up:\nReferenceError: power is not defined\nReferenceError: $sum is not definedRight now the output is: 9.333333333333334, but i would like 9.3.", "username": "JasoO" }, { "code": "{ $round: [ { $sum: [\"$power\", \"$attack\", \"$defence\", { $divide: [\"$endurance\", 3] } , 2] }\n", "text": "Try withthat matchesround the outcome of a sum and divide?", "username": "steevej" }, { "code": "", "text": "Won’t work: MongoError: Unrecognized pipeline stage name: ‘$round’", "username": "JasoO" }, { "code": "{ $round : [ { $divide : [ $endurance , 3 ] } , 2 ] }\n", "text": "From your other thread, I deduce that power, attack and defence are integers so I would try to $round only the last term of the sum with:", "username": "steevej" } ]
Rounding the outcome of a formula on a field?
2021-02-20T15:42:34.880Z
Rounding the outcome of a formula on a field?
1,967
null
[ "aggregation", "queries", "performance", "golang" ]
[ { "code": "collection.countDocuments(ctx, filter)\n// add more filter\ncollection.Find(ctx, filter, limit(200))\ncollection.Aggregate(\n mongo.Pipeline{\n // filter first\n // then sort\n // do a facet:\n // 1st pipeline: count\n // 2nd pipeline: apply added filter, limit\n }\n)\ndb.collection.explain().aggregate(...)\n{\"t\":{\"$date\":\"2021-02-18T14:03:17.180+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"TaskManagement.task\",\"appName\":\"MongoDB Shell\",\"command\":{\"aggregate\":\"task\",\"pipeline\":[{\"$match\":{\"$or\":[{\"client_time_utc\":{\"$lt\":{\"$date\":\"2020-08-17T19:20:30.450Z\"}}},{\"$and\":[{\"client_time_utc\":{\"$date\":\"2020-08-17T19:20:30.450Z\"}},{\"_id\":{\"$lt\":\"1beecc71-c550-4ff8-8644-891e61e4e8a2\"}}]}],\"org_id\":\"2323\"}},{\"$facet\":{\"count\":[{\"$count\":\"value\"}],\"data\":[{\"$sort\":{\"client_time_utc\":-1.0,\"_id\":-1.0}},{\"$limit\":5.0}]}}],\"explain\":true,\"cursor\":{},\"lsid\":{\"id\":{\"$uuid\":\"3ec30c03-1de1-47aa-b912-d47878c32e88\"}},\"$db\":\"TaskManagement\"},\"planSummary\":\"IXSCAN { _id: 1 }, IXSCAN { client_time_utc: -1, _id: -1 }\",\"numYields\":0,\"reslen\":2714,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":1}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":2}},\"Global\":{\"acquireCount\":{\"r\":2}},\"Database\":{\"acquireCount\":{\"r\":2}},\"Collection\":{\"acquireCount\":{\"r\":2}},\"Mutex\":{\"acquireCount\":{\"r\":2}}},\"storage\":{},\"protocol\":\"op_msg\",\"durationMillis\":1}}\ncount and then findaggregate", "text": "I’m using mongodb go driver for my application and I’m confused on querying and returning meta with the results:1st optionvs2nd optionI have attempted to translate it into the mongodb commands and run them on my shellbut I’m still confused:​I’m not sure what to make use of rejectedPlan (explain output). Does that mean I have to modify my query?How do I know which one is faster for count and then find vs aggregate ?Thank you", "username": "Ariel_Ariel" }, { "code": "$match$sort$facet$facet$sort$facetdata$facet$match{\"$sort\": {\"client_time_utc\":-1.0,\"_id\":-1.0}}{ client_time_utc: -1, _id: -1 }rejectedPlanwinningPlan\"allPlansExecution\"queryPlannerexecutionStatsexecutionStats\"executionStats.executionTimeMillis\"", "text": "Hello @Ariel_Ariel, I will try to answer your questions.There’s a lot of slow query message on my mongodb log, does that certainly mean my commands are slow? I have applied index and it shows IXSCAN.This is about the aggregate query (the 2nd option):The indexes are applied alright, probably not to the best possible extent. The $match and $sort within the $facet don’t use the indexes (that is the behaviour of $facet stage - see note below). But, you can try applying the $sort stage from the $facet stage’s data pipeline to before the $facet stage (immediately after the first $match stage) - {\"$sort\": {\"client_time_utc\":-1.0,\"_id\":-1.0}}. This sort operation will utilize the index defined on the { client_time_utc: -1, _id: -1 }.This will likely benefit the query performance.See:I’m not sure what to make use of rejectedPlan (explain output). Does that mean I have to modify my query?It is normal to have a rejectedPlan sub-document within the explain output. It only means, that the query optimizer generated multiple plans and one of the plans was used (winningPlan) and the other was the rejected one. Sometimes the rejected plans are empty documents. 
See explain.queryPlanner.rejectedPlans.I think the two queries (options) are not directly comparable in terms of which is faster, because the aggregate runs as a single query and the find+count as two separate queries. Is it better to run two queries rather than one? It is mostly an operations-related issue, and depends on how it fits into your operations.You can run db.collection.explain() with the \"allPlansExecution\" verbosity mode, for the find+count methods (individually) and then the aggregate query.The explain returns the queryPlanner and executionStats information for the evaluated method. The executionStats includes the completed query execution information for the winning plan. See Explain Results - executionStats for the output info, which includes the \"executionStats.executionTimeMillis\".", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks a lot! Can you please also help answer my other question: Count and filter pipelines using mongodb go driver aggregate", "username": "Ariel_Ariel" }, { "code": "", "text": "Hello @Ariel_Ariel . Is the aggregation in the linked post the same as or similar to the aggregation query in this post?", "username": "Prasad_Saya" }, { "code": "", "text": "No, I have changed it. I want the count before I filter the query for the second time, but I don’t know how.", "username": "Ariel_Ariel" } ]
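A sketch of Prasad's suggestion in mongosh terms: hoist `$sort` out of the `$facet` data pipeline so the `{ client_time_utc: -1, _id: -1 }` index can feed it. The filter below is simplified from the logged query (values taken from the thread); the count facet still counts everything that passed the match:

```javascript
db.task.aggregate([
  { $match: { org_id: "2323", client_time_utc: { $lt: ISODate("2020-08-17T19:20:30.450Z") } } },
  { $sort: { client_time_utc: -1, _id: -1 } },   // index-backed, outside $facet
  {
    $facet: {
      count: [{ $count: "value" }],
      data: [{ $limit: 5 }]
    }
  }
]);
```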
Count then find vs aggregate, which one is faster?
2021-02-20T02:38:22.811Z
Count then find vs aggregate, which one is faster?
15,855
null
[ "swift", "production" ]
[ { "code": "", "text": "I’m pleased to announce the 1.1.0 release of the MongoDB Swift driver.Please see here for details on what’s new in this release. We’d love for you to try it out! Please feel free to get in touch with us via our Jira project or GitHub if you encounter any issues, have a feature request, etc.", "username": "kmahar" }, { "code": "", "text": "Great! Thanks for sharing!", "username": "Soumyadeep_Mandal" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Swift Driver 1.1.0 is released
2021-02-19T21:05:18.808Z
MongoDB Swift Driver 1.1.0 is released
1,663
null
[ "aggregation", "queries", "php" ]
[ { "code": "MongoDB\\Driver\\Exception\\CommandException: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.\ncustomers.aggregate([{\"$match\":{\"project_id\":{\"$eq\":\"5da925f416f09b7b977d7583\"},\"deleted_at\":{\"$eq\":null}}},{\"$addFields\":{\"session.last_seen_at\":{\"$convert\":{\"input\":\"$session.last_seen_at\",\"to\":\"date\",\"onError\":{\"$convert\":{\"input\":{\"$multiply\":[\"$session.last_seen_at\",1000]},\"to\":\"date\",\"onError\":\"$session.last_seen_at\"}}}}\n", "text": "Hello. I’m hitting a wall with this query. I have a “regular” index on the only column I’m sorting on, and I keep getting the following error. Using disk seems like an extreme measure on a query that is only supposed to return 1,000 records. Can anyone help?I’m noticing this coming from my ORM. I’m using Laravel…This conversion on the field i’m sorting on, I assume probably negates the index on it.", "username": "Scott_Weiner" }, { "code": "$sort$sortallowDiskUsetrue$sort", "text": "Hello @Scott_Weiner,What are you sorting on? Your aggregation doesn’t show any sort operation. Also, see Aggregation $sort Stage and Memory, and it says:The $sort stage has a limit of 100 megabytes of RAM. By default, if the stage exceeds this limit, $sort will produce an error. To allow for the handling of large datasets, set the allowDiskUse option to true to enable $sort operations to write to temporary files.", "username": "Prasad_Saya" }, { "code": "$project", "text": "Remove the sort and check that the number of records being returned is actually 1000. What is the size of each record? You may be able to use $project to reduce the size of records being retuned.", "username": "Joe_Drumgoole" }, { "code": "", "text": "I was sorting on “session.last_seen_at”, but I was able to remove it because it wasn’t needed in the query, and that solved it. Thanks.", "username": "Scott_Weiner" }, { "code": "", "text": "Good call with $project, I didn’t realize that could save time on queries. There is a lot to learn about query optimization. I hit a number of issues trying to send emails out to thousands of people. my IOPs were exceeding 300/s.", "username": "Scott_Weiner" } ]
Sort exceeded memory limit
2021-02-17T22:43:21.031Z
Sort exceeded memory limit
42,629
null
[]
[ { "code": "", "text": "Hi y’all,You’re invited to a fun event coming up in our newest MongoDB user group: “Make It Matter,” an inclusive space to elevate voices of underrepresented people in tech! Join us to meet MongoDB Champion, @Danielle_Monteiro and the illustrious @Asya_Kamsky in an interview by MongoDB Principal Developer Advocate, @Karen_Huaulme next Thursday, 25 February at 11:00AM CST!Ask your questions in advance and tell us more about what you want to hear from these exceptional techies! Link & more info:", "username": "Jamie" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Next week: Meet Champion Dani Monteiro
2021-02-19T18:41:57.837Z
Next week: Meet Champion Dani Monteiro
3,195
null
[ "queries" ]
[ { "code": "{\n name: \"John\",\n tickets: 5\n}\ntickets", "text": "Hey!Let’s say I have a raffle with documents looking like thisI want to get a random document, but favoring the documents with the highest tickets value", "username": "Victor_Back1" }, { "code": "[\n {\n '$group': {\n '_id': '$tickets', \n 'maxis': {\n '$push': '$$ROOT'\n }\n }\n }, {\n '$sort': {\n '_id': -1\n }\n }, {\n '$limit': 1\n }, {\n '$unwind': {\n 'path': '$maxis'\n }\n }, {\n '$replaceRoot': {\n 'newRoot': '$maxis'\n }\n }, {\n '$sample': {\n 'size': 1\n }\n }\n]\nfrom pprint import pprint\n\nfrom faker import Faker\nfrom pymongo import MongoClient\n\nfake = Faker()\n\n\ndef rand_tickets():\n return [{\n 'firstname': fake.first_name(),\n 'tickets': fake.pyint(min_value=1, max_value=10)\n } for _ in range(10000)]\n\n\nif __name__ == '__main__':\n client = MongoClient()\n db = client.get_database('test')\n tickets = db.get_collection('tickets')\n\n tickets.drop()\n tickets.create_index(\"tickets\")\n tickets.insert_many(rand_tickets())\n\n pipeline = [\n {\n '$group': {\n '_id': '$tickets',\n 'maxis': {\n '$push': '$$ROOT'\n }\n }\n }, {\n '$sort': {\n '_id': -1\n }\n }, {\n '$limit': 1\n }, {\n '$unwind': {\n 'path': '$maxis'\n }\n }, {\n '$replaceRoot': {\n 'newRoot': '$maxis'\n }\n }, {\n '$sample': {\n 'size': 1\n }\n }\n ]\n\n for raffle in range(1, 6):\n print(\"Raffle #\" + str(raffle))\n for res in tickets.aggregate(pipeline):\n pprint(res)\n print()\n\nRaffle #1\n{'_id': ObjectId('602fac3b9fe949b1c71beb73'),\n 'firstname': 'Jeremy',\n 'tickets': 10}\n\nRaffle #2\n{'_id': ObjectId('602fac3b9fe949b1c71bcf79'),\n 'firstname': 'James',\n 'tickets': 10}\n\nRaffle #3\n{'_id': ObjectId('602fac3b9fe949b1c71bd5c6'),\n 'firstname': 'Sarah',\n 'tickets': 10}\n\nRaffle #4\n{'_id': ObjectId('602fac3b9fe949b1c71be15c'),\n 'firstname': 'Jocelyn',\n 'tickets': 10}\n\nRaffle #5\n{'_id': ObjectId('602fac3b9fe949b1c71bd898'),\n 'firstname': 'Crystal',\n 'tickets': 10}\n{$match : {tickets: {$gt: 5}}}", "text": "Hi @Victor_Back1 and welcome in the MongoDB Community !Here is my solution using the aggregation pipeline:Here it is in action in a Python 3 example:Which print in the end 5 times 1 random document selected among the one with the maximum number of tickets.Note: You could optimize this query if you can limit number of documents from the start if you have an idea of the numbers of tickets. For example if you know that there is always some people with at least 5 tickets. 
You could add a {$match : {tickets: {$gt: 5}}} directly at the top.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "ticketsmongo5{\n name: \"Jim\",\n tickets: 3\n},\n{\n name: \"Joe\",\n tickets: 5 // <- highest\n},\n{\n name: \"Jack\",\n tickets: 1\n},\n{\n name: \"Jane\",\n tickets: 5 // <- highest\n},\n{\n name: \"John\",\n tickets: 2\n},\n{\n name: \"Jon\",\n tickets: 5 // <- highest\n}\ndb.collection.aggregate([ \n{ \n $group: { \n _id: null, \n docs: { $push: \"$$ROOT\" }, \n max: { $max: \"$tickets\" } \n } \n}, \n{ \n $addFields: { \n max_docs: { \n $filter: { \n input: \"$docs\", \n cond: { \n $eq: [ \"$$this.tickets\", \"$max\" ]\n }\n }\n }\n }\n},\n{ \n $project: {\n _id: 0, \n random_doc: { \n $arrayElemAt: [ \n \"$max_docs\", \n { $floor: { $multiply: [ _rand(), { $floor:{ $size: \"$max_docs\" } } ] } } \n ] \n }\n }\n},\n{\n $replaceWith: \"$random_doc\" \n}\n]).pretty()\n{\n \"_id\" : ObjectId(\"602fae75603389f49bd5533d\"),\n \"name\" : \"Jon\",\n \"tickets\" : 5\n}", "text": "Hello @Victor_Back1, welcome to the MongoDB Community forum!Here is an aggregation query which will get you a random document with highest of the tickets value. Note the query runs from the mongo shell.Lets take these six sample documents, of these there are three of them with the highest ticket value of 5. The aggregation gets one of these three documents, randomly for each query run:The aggregation:The example output:", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you for your responses, but I think you’ve misunderstood. I want everyone to have a chance to be selected, but people with more tickets have a higher chance of being selected", "username": "Victor_Back1" } ]
How to get a random document favoring a higher value?
2021-02-19T11:13:38.248Z
How to get a random document favoring a higher value?
3,133
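Neither pipeline above gives Victor what his last reply asks for: both restrict the draw to the top ticket holders, while he wants every entrant to keep a chance proportional to their ticket count. Here is a minimal sketch of one way to do that, assuming the raffle is small enough to load into client memory; the connection, database, and collection names are placeholders.

```python
import random

from pymongo import MongoClient

# Placeholder connection/database/collection names.
client = MongoClient()
tickets = client["test"]["tickets"]

# Load each entrant once, projecting only the fields we need.
entrants = list(tickets.find({}, {"name": 1, "tickets": 1}))

# random.choices draws with probability proportional to the weights:
# an entrant holding 5 tickets is five times as likely to win as one
# holding 1, but everyone with at least one ticket can still win.
winner = random.choices(
    entrants,
    weights=[e["tickets"] for e in entrants],
    k=1,
)[0]
print(winner)
```

On MongoDB 4.4.2 or newer, the same weighted draw could instead be done server-side with the $rand aggregation operator (for example, by sorting on a random key weighted by tickets), which avoids pulling every entrant into the client.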
null
[ "react-js" ]
[ { "code": "", "text": "Hello, I have added a link on my reactjs website for users, who forgot their login passwords, to click on the link so that a reset password link will be sent to each of them. But after I (assuming I am the user) clicked on the link, I didn’t get an email (which has a link to reset my login password) even though my mongodb database gets updated with a reset password link data added to my mongodb database each time a user click on the reset password link.", "username": "Jen" }, { "code": "", "text": "What API are you using to send out the email? SendGrid? Mailgun? If you’re hitting a backend server (e.g Express) then you may be using Nodemailer? If you’re in development, I tecommend Mailtrap for testing.", "username": "Andrew_W" }, { "code": "", "text": "Hi Andrew, I am using Express and Nodemailer. How to fix the problem? Thanks.", "username": "Jen" }, { "code": "", "text": "There are several things that could be going wrong, not necessarily related to MongoDB –\nAs long as you can get the user’s email back from the database when you query it (via MongoDB’s own NodeJS Driver, fetch / axios, etc) on your backend and have it returned to your React app, then it’s probably not related to MongoDB.\nIf you are getting a response back, then you’ll need to make sure you are able to confirm receiving an email via a service like [Mailtrap] (https://mailtrap.io/) setup in your Nodemailer configuration.Warning: Some services or systems might complain in your console about not being able to verify the server’s certificate (localhost) when sending email. If this is the case, you’ll need to update your Express server to appear secure (https vs http) by mocking out an SSL certificate. You can learn how to do that here.", "username": "Andrew_W" }, { "code": "", "text": "Hi Andrew, I replaced my SMTP settings with mailtrap SMTP settings. Why is there an error?=\nUnhandled Rejection (TypeError): Cannot read property ‘error’ of undefined(anonymous function)C:/A/client/src/user/ForgotPassword.js:15Thank you.", "username": "Jen" }, { "code": "", "text": "Hi Jen, at this point since the issue is more likely a React/JS-specific issue rather than having to do with MongoDB, I’d recommend joining this Discord server and posting the question there as you’ll get savvy React and JS devs (myself included ) who can help you work through the issue. Of course, once it’s resolved, it’s best to come back to this post and provide the solution and mark the thread resolved so other MongoDB Community members can see what the solution was should they be experiencing the same issue.Best,Andrew", "username": "Andrew_W" }, { "code": "", "text": "Hi Andrew,\nI tried Discord before asking another question but no reply. How then? Do I show you some of my nodemailer code here?\nThanks.", "username": "Jen" }, { "code": "", "text": "Best to share on Discord/Slack – I’ll look for you there. I messaged you for more info. I’ll try to help where I can.", "username": "Andrew_W" }, { "code": "", "text": "My username is reactuser. When are you available to help me in Malaysia time? I see you there. Thanks!", "username": "Jen" }, { "code": "", "text": "Message me there so I have you on my radar. Your name was too generic as there are too many users with “reactuser” as their username.", "username": "Andrew_W" }, { "code": "", "text": "What is your username? Thanks.", "username": "Jen" } ]
Reset password link is not functioning
2021-01-27T21:09:11.227Z
Reset password link is not functioning
5,214
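The thread moves to Discord without the actual Express/Nodemailer code ever being posted, so the following is only an illustrative sketch of the flow being debugged: generate a token, store it on the user document, and send the link through a test SMTP inbox such as Mailtrap. It is written in Python for consistency with the earlier examples rather than in the thread's Node stack, and every host, credential, name, and URL in it is a placeholder.

```python
import secrets
import smtplib
from email.message import EmailMessage

from pymongo import MongoClient

# Placeholder database and collection names.
users = MongoClient()["app"]["users"]


def send_reset_email(user_email: str) -> None:
    token = secrets.token_urlsafe(32)

    # Store the token so the reset route can later verify it,
    # mirroring the reset-password-link data Jen sees being saved.
    users.update_one({"email": user_email}, {"$set": {"reset_token": token}})

    msg = EmailMessage()
    msg["Subject"] = "Password reset"
    msg["From"] = "no-reply@example.com"  # placeholder sender
    msg["To"] = user_email
    msg.set_content(f"Reset your password: https://example.com/reset/{token}")

    # Host, port, and credentials are placeholders; copy the values
    # shown on your own Mailtrap inbox's SMTP settings page.
    with smtplib.SMTP("smtp.mailtrap.io", 2525) as smtp:
        smtp.login("MAILTRAP_USER", "MAILTRAP_PASS")
        smtp.send_message(msg)
```

In a situation like Jen's, where the database write clearly succeeds but no message arrives, the fault likely sits in the final SMTP step, which is exactly what a sandbox inbox makes visible.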