Columns: image_url (string, 113-131 chars), tags (sequence), discussion (list), title (string, 8-254 chars), created_at (string, 24 chars), fancy_title (string, 8-396 chars), views (int64, 73-422k)
https://www.mongodb.com/…a_2_1024x659.png
[ "legacy-realm-cloud" ]
[ { "code": "“Operation canceled” Realm Error Domain=io.realm.unknown Code=89let url = URL(string: \"realms://\\(MY_INSTANCE_ADDRESS)/common\")!\nlet config = user.configuration(realmURL: url, fullSynchronization: true)\n\nRealm.asyncOpen(configuration: config) { ... }\nprivate func connectToRealm(_ firebaseUser: User) -> Promise<SyncUser> {\n \n // if realm user already logged in, check if it's the right user\n if let user = SyncUser.current {\n guard user.identity == firebaseUser.uid else {\n // it's not the right user. log out and try again.\n user.logOut()\n return connectToRealm(firebaseUser)\n }\n \n // it's the right user.\n return Promise.value(user)\n }\n\n return firstly {\n // get JWT token with firebase function\n Promise {\n Functions.functions().httpsCallable(\"myAuthFunction\").call(completion: $0.resolve)\n }\n \n }\n .compactMap { result in\n // extract JWT token from response\n return (result?.data as? [String: Any])?[\"token\"] as? String\n \n }\n .then { token in\n // connect to Realm\n Promise {\n SyncUser.logIn(with: SyncCredentials.jwt(token), server:MY_AUTH_URL, onCompletion: $0.resolve)\n }\n }\n}\nhttps://\\(MY_INSTANCE_ADDRESS)", "text": "I’m just trying to access a global Realm file which is on my Realm Cloud (I am NOT on the beta). I get the “Operation canceled” Realm Error Domain=io.realm.unknown Code=89 error when I try to open the Realm.Realm Studio:\nScreen Shot 2020-07-21 at 14.30.051106×712 97.5 KBOpening code:Authentication code (using PromiseKit):This method is called after logging with Firebase. The auth URL is just https://\\(MY_INSTANCE_ADDRESS) .I suspect it’s a permission problem, in which case: how can I easily let this file be readable by everyone, but not writable? I’m not planning on creating a lot of theses files, I just want to do this once.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "This is a crosspost to a StackOverflow question. 
It’s a good idea to keep questions/answers in one place so we don’t duplicate effort and folks can find answers more quickly. Here’s the link for convenience.“Operation canceled” Realm Error Domain=io.realm.unknown Code=89", "username": "Jay" }, { "code": "", "text": "Still looking for an answer to this question…", "username": "Jean-Baptiste_Beau" }, { "code": "class MyObjectType: Object {\n @objc dynamic var id = 0\n @objc dynamic var name_fr = \"\"\n @objc dynamic var name_eng = \"\"\n}\n\nstatic let MY_INSTANCE_ADDRESS = \"xxxx\"\nstatic let AUTH_URL = URL(string: \"https://\\(MY_INSTANCE_ADDRESS)\")!\nstatic let COMMON_REALM_URL = URL(string: \"realms://\\(MY_INSTANCE_ADDRESS)/common\")!\n\nfunc application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {\n SyncUser.logIn(with: SyncCredentials.anonymous(), server: AUTH_URL) { (user, error) in\n guard error == nil && user != nil else {\n print(error)\n return\n }\n }\n}\n\nfunc writeData(withId: Int) {\n let config = SyncUser.current?.configuration(realmURL: COMMON_REALM_URL, fullSynchronization: true)\n \n let realm = try! Realm(configuration: config!)\n \n try! realm.write {\n let x = MyObjectType()\n x.id = withid\n x.name_fr = \"test_fr\"\n x.name_eng = \"test_eng\"\n realm.add(x)\n }\n\n let objectsResult = realm.objects(MyObjectType.self)\n print(objectsResult)\n}", "text": "@Jean-Baptiste_BeauI think the issue is that we can’t duplicate the issue. As a test, I took your code and broke it down a bit. Created a loop that calls writeData 100 times, incrementing id in the loop.This code created 100 unique objects and printed them to console along the way, the data was both local as well as in our Realm cloud.", "username": "Jay" }, { "code": "", "text": "And you see the data in Realm Studio? 
And you can access it if you’re logged in on a different account?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "@Jean-Baptiste_BeauCorrect. As I mentioned, I can view it in Realm cloud (meaning realm studio). That code is actually not far off from the code we use in our app (I actually pulled it from our app).", "username": "Jay" }, { "code": "", "text": "@Jean-Baptiste_Beau You may want to clear the Cache of Realm Studio and see if that resolves the issue. It should be under one of the drop downs of Realm Studio, as I recall, under Help", "username": "Ian_Ward" }, { "code": "{\"type\":\"https://docs.realm.io/server/troubleshoot/errors#access-denied\",\"title\":\"The path is invalid or current user has no access.\",\"status\":403,\"code\":614}", "text": "I tried clearing Cache, doesn’t help. I’ve uninstalled the app on the device countless times too. As @Jay suggested, I checked the logs on Realm studio and saw that I get the following error: {\"type\":\"https://docs.realm.io/server/troubleshoot/errors#access-denied\",\"title\":\"The path is invalid or current user has no access.\",\"status\":403,\"code\":614} . At least that’s more specific. I tried deleting the realm and recreating one, so now the Realm on the cloud is empty, to avoid schema mismatch as described here. Any idea?", "username": "Jean-Baptiste_Beau" }, { "code": "{\"type\":\"https://docs.realm.io/server/troubleshoot/errors#access-denied\",\"title\":\"The path is invalid or current user has no access.\",\"status\":403,\"detail\":\"A non-admin user is not allowed to create realms outside their home folder. UserId: '261b6207e5960677d91fec7a75505c46'. 
RealmPath: '/common'\",\"code\":614}", "text": "I’m sure the URL is correct because: after deleting the Realm file on Studio, I got this error: {\"type\":\"https://docs.realm.io/server/troubleshoot/errors#access-denied\",\"title\":\"The path is invalid or current user has no access.\",\"status\":403,\"detail\":\"A non-admin user is not allowed to create realms outside their home folder. UserId: '261b6207e5960677d91fec7a75505c46'. RealmPath: '/common'\",\"code\":614}. As soon as I recreated the Realm in Studio, this error turned into the one described above.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Okay I can confirm this is a permission problem. In my code, I’ve logged in with some user account, and I’ve got the same error. Then, I made this particular user an administrator in Realm Studio, and I could now load the Realm, read and write data without error! Which leads to my initial question:how can I easily let this file be readable by everyone, but not writable? I’m not planning on creating a lot of theses files, I just want to do this once.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "You need to give the user permissions", "username": "Ian_Ward" }, { "code": "async function main() {\n // login\n var adminUser = Realm.Sync.User.current;\n if (adminUser === undefined) {\n let creds = Realm.Sync.Credentials.usernamePassword(adminUsername, adminPassword);\n adminUser = await Realm.Sync.User.login(auth_address, creds);\n }\n\n // open realm\n const realm = await Realm.open({\n sync: { user: adminUser, url: common_realm_address },\n schema: [MySchema]\n });\n\n // write data...\n\n // grant permissions\n const managementRealm = adminUser.openManagementRealm();\n\n var permObj;\n managementRealm.write(() => {\n permObj = managementRealm.create(\"PermissionChange\", {\n id: \"common_all_read\",\n createdAt: new Date(),\n updatedAt: new Date(),\n mayManage: false,\n mayWrite: false,\n mayRead: true,\n userId: \"*\",\n realmUrl: 
common_realm_address\n });\n });\n\n // Listen for `PermissionChange` object to be processed\n managementRealm\n .objects(\"PermissionChange\")\n .filtered(\"id = $0\", permObj.id)\n .addListener((objects, changes) => {\n console.log(\"Permission Status: \" + permObj.statusMessage);\n });\n\n // close realm\n realm.close();\n}\n UnhandledPromiseRejectionWarning: TypeError: adminUser.openManagementRealm is not a function", "text": "@Ian_Ward well okay, I added that to my JS script that adds objects to the shared realm. I used the code provided by bigFish24 here.\nCode looks like this:And I get the following error:\n UnhandledPromiseRejectionWarning: TypeError: adminUser.openManagementRealm is not a functionI guess there’s no way to grant permissions directly in Realm Studio? And that the code I copied is outdated? What would be the correct way now?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "@Jean-Baptiste_Beau The way shown in the docs I linked - Full-Sync permissions - Realm Sync (LEGACY)", "username": "Ian_Ward" }, { "code": "let creds = SyncCredentials.usernamePassword(username: \"realm-admin\", password: \"password\", register: false)\n SyncUser.logIn(with: creds\nlet creds = SyncCredentials.anonymous()\n SyncUser.logIn(with: creds\n", "text": "I think my question here is that while @Ian_Ward is suggesting to set up permissions, with the code you used on your SO post, you were logging in with anonymous auth and got the error.With our app, whether with auth as an admin user or anonymous auth, we can access the Realm. e.g.orproduces the same resultSo in your case, if we draw a correlation, you have an admin user ‘realm-admin’ and if you login with that, you should be able to access your data. If you switch the auth to anonymous, you should still be able to access that same data.That doesn’t really address the question directly as it’s unclear what this meansreadable by everyoneDoes ‘everyone’ mean other defined users? 
e.g. you have 10 pre-defined users who you want to have read access (so there would be 10 users listed in the Realm Studio Users tab) or does ‘everyone’ mean everyone that uses the app and does not have a user account set up?", "username": "Jay" }, { "code": "// grant permissions\nadminUser\n .applyPermissions({ userId: \"*\" }, realmPath, \"read\")\n .then(permissionChange => {\n console.log(\"Permission applied successfully.\");\n })\n .catch(error => {\n console.log(\"Error while applying permission: \", error);\n });\n", "text": "@Jay It seems we are in the same situation, but then I don’t see why I can’t access the data while you can. The global Realm you are accessing, how did you create it? In Realm Studio, from the client, or from server code?By “readable by everyone”, I mean by every logged-in user, whether they come from anonymous login or JWT. My app is made in such a way that users with no account set up are automatically logged in anonymously to access the shared data, as I believe you can’t access the data if you don’t have a SyncUser instance. That question was addressed here.@Ian_Ward If I use the code from the doc you link:Then neither of the two log statements is printed, and I still see “User permissions: This realm has no permissions” in Realm Studio.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "During development, we frequently delete and re-create our database from code so we actually do very little from Realm Studio - mostly use it to monitor data changes as we are building relationships between objects. So everything we do is from code.As a test, I deleted all data associated with a little Tasks macOS app we have. I ran the app with authenticated user ‘Jay’ (see my code above) and added a few tasks. I then logged out. I then logged in as an anonymous user (using the above code) and all of the Tasks Jay created were available and visible to the anonymous user. 
Created tasks, logged out, logged back in as Jay and the Tasks persisted.I then checked the Realm from another device using Realm Studio and all of the data from both logins was there (e.g. it sync’d to the cloud correctly).Here’s what our Realm Studio looks like - very similar to yours.RS2656×690 82.5 KB", "username": "Jay" }, { "code": "", "text": "@Jay well that’s the behavior I’m trying to achieve but it doesn’t work for me… What is your realm creation code? I’m assuming Jay is an admin user here, but do you create the realm from the client app or from backend code?", "username": "Jean-Baptiste_Beau" }, { "code": "/commonOperation canceled\"The path is invalid or current user has no access.\",\"status\":403,\"code\":614", "text": "I thought I found the solution but no… I thought the problem could be related to JS/Swift mismatches, so I made the following:I got the exact same error. Operation canceled on the client and \"The path is invalid or current user has no access.\",\"status\":403,\"code\":614 on Realm Studio logs.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Well, the Realm is instantiated as soon as the user logs in; the Realm objects are are instantiated in the Realm at that time.Everything is done in the apps code (macOS, Swift).", "username": "Jay" }, { "code": " let MY_INSTANCE_ADDRESS = \"myinstance.cloud.realm.io\"\n let AUTH_URL = URL(string: \"https://\\(MY_INSTANCE_ADDRESS)\")!\n let COMMON_REALM_URL = URL(string: \"realms://\\(MY_INSTANCE_ADDRESS)/common\")!\n\n let adminCreds = SyncCredentials.usernamePassword(username: \"admin\", password: \"password\")\n \n SyncUser.logIn(with: adminCreds, server: AUTH_URL) { (user, error) in\n guard error == nil && user != nil else { return print(error) }\n print(\"logged in\")\n \n let config = SyncUser.current?.configuration(realmURL: COMMON_REALM_URL, fullSynchronization: true)\n\n let realm = try! 
Realm(configuration: config!)\n let x = MyObject(id: 2020, name_fr: \"test_fr\", name_eng: \"test_eng\")\n try! realm.write { realm.add(x) }\n \n SyncUser.current?.logOut()\n \n SyncUser.logIn(with: SyncCredentials.anonymous(), server: AUTH_URL) { (user, error) in\n guard error == nil && user != nil else { return print(error) }\n\n let config = SyncUser.current?.configuration(realmURL: COMMON_REALM_URL, fullSynchronization: true)\n\n let realm = try! Realm(configuration: config!)\n let x = MyObject(id: 2021, name_fr: \"test_fr\", name_eng: \"test_eng\")\n try! realm.write { realm.add(x) }\n }\n }\nOperation canceled", "text": "Sample code with synchronous Realm opening:Result: when opening the Realm synchronously, I don’t get the Operation canceled error on the client but I still get the permission error on Realm Studio. I also don’t see any of the two objects in Realm Studio. I see “This Realm has no classes defined”.", "username": "Jean-Baptiste_Beau" } ]
“Operation canceled” Realm Error Domain=io.realm.unknown Code=89
2020-07-21T18:50:46.862Z
“Operation canceled” Realm Error Domain=io.realm.unknown Code=89
7,803
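The thread above extracts a JWT from a server-side auth function's response and passes it to the sync login. When chasing auth errors like these, inspecting the token's claims (e.g. the user id the server put in it) often narrows things down. A minimal, standard-library-only sketch; the token below is a throwaway example built in the snippet, not one from the thread:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode a JWT's payload WITHOUT verifying its signature.

    Useful only for local debugging (e.g. checking which user id claim
    the auth function embedded); signature verification must still
    happen on the server.
    """
    payload_b64 = token.split(".")[1]
    # JWTs use URL-safe base64 without padding; restore padding first.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _b64(obj: dict) -> str:
    # Helper to build a demo token segment (header or payload).
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# header.payload.signature -- signature is fake since we never verify it here.
demo_token = ".".join([_b64({"alg": "HS256", "typ": "JWT"}),
                       _b64({"sub": "user-123", "aud": "realm"}),
                       "sig"])
print(decode_jwt_payload(demo_token))  # {'sub': 'user-123', 'aud': 'realm'}
```

If the decoded `sub`/user id doesn't match what the server expects, that points at the auth function rather than at permissions.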
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to announce the release of 1.3.7 of the MongoDB Go Driver.This release contains a bugfix for an error introduced in 1.3.6 that caused averageRTT to be set incorrectly. For more information please see the release notes.You can obtain the driver source from GitHub under the 1.3.7 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team", "username": "Isabella_Siu" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Go Driver 1.3.7 Released
2020-08-10T15:47:34.204Z
MongoDB Go Driver 1.3.7 Released
1,534
null
[ "legacy-realm-server" ]
[ { "code": "", "text": "Hey, it’s been a rough day :).I installed a self-hosted Realm server on my company’s AWS account, since it’s advertised on Realm’s homepage that this is an option. I worked through a bunch of installation dependency problems only to arrive at Realm’s support, who told me that self-hosted instances are no longer supported, and the Realm forum points to this one.So, I wanted to choose Realm because of its simplicity and ease of use, and BECAUSE it can be self-hosted. My company deals almost exclusively with sensitive medical data, so self-hosting is way easier than setting up all the needed legal data privacy and protection agreements with Realm (or Mongo now?). Realm’s support is being unresponsive, so my question is either, “Is it true that self-hosting is not an option anymore at all?” or “Do you have experience with another similar framework if Realm/Mongo are not offering what I would need?”Thanks for reading!Best,\nLukas", "username": "Lukas_Schuster" }, { "code": "", "text": "since it’s advertised on Realm’s homepage that this is an optionHi @Lukas_Schuster,Apologies for any confusion. The self-hosted Realm server is currently not an option for new installations, as advised in your discussion with the support team.If you could clarify the URL or reference you came across for self-hosted installation, that would be helpful to make sure it can be corrected.The currently supported option for cloud sync is the Realm Cloud, which has a 30-day free trial. If you have additional concerns about your data privacy and protection requirements, I would suggest contacting sales instead of support via [email protected]’s support is being unresponsiveDo you have a case reference number I can follow up on? 
Case #5780 appears to have been answered within a few minutes and some subsequent duplicates were merged into this.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Stennie_X It should be noted that I also sent an email to [email protected] 10 days ago (Feb 10, 2020, 12:09 PM) and have not received a response. I’m not sure that email works anymore.", "username": "Adam_Hass" }, { "code": "", "text": "Hey Stennie \nThanks for the reply!\nthe first link if you enter “realm self hosted”, is this one: https://docs.realm.io/sync/v/3.x/self-hosted and on there there is nothing about deprecation, so its not some burried document, but the main documentation on your site. I hope you can fix this, since it did cost us quite some time now, and does not breed confidence in the company if the official docu is out of date Regarding my concerns towards data privacy and protection, they are less concerns than they are simple laws, you cannot, without legal binding contracts and agreements like a DPA be a data processor for another company under current law in most countries, which no matter if realm is setup technically well, and also is willing to sign those contracts, is putting a hurdle on people to use your, really nice and cool, product.ad my support ticket, yes I got an initial answer, asked another question and then did not hear back for my follow up question.Since self hosting is something you don’t want to offer, and any contract work usually takes some time & hustle, and we are currently in the prototype phase with a new idea, I am not sure if going with firebase, parse, etc. is maybe a more viable option. I think realm is the better product, but the legal hurdles you put in place for use-cases like ours, are maybe enough reason to exclude you since we cannot do this kind of paperwork for every prototype, and thus it won’t lead us to using it in production. 
Which I find sad, since I really enjoy the platform.Best,\nLukas", "username": "Lukas_Schuster" }, { "code": "", "text": "the first link if you enter “realm self hosted”, is this one: https://docs.realm.io/sync/v/3.x/self-hosted and on there there is nothing about deprecationUnfortunately the link you found leads to an older version of the documentation. If you click on the menu on the left side of that page, the latest documentation at Realm Sync Documentation - Realm Sync (LEGACY) no longer mentions self-hosted installation.Thanks for sharing how you found the link – I’ve created a docs issue to update older versions of the documentation so the messaging is consistent.I understand there may be legal requirements to meet where self-hosting is the most straightforward option. There may also be alternatives via dedicated Realm Cloud, but that’s a conversation to have with our sales team.The current product development focus is as per the public roadmap to MongoDB Realm which will integrate the Realm Cloud with MongoDB Stitch and Atlas. MongoDB Cloud services have enterprise-level security, reliability, and compliance (see: MongoDB Trust Center).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "It should be noted that I also sent an email to [email protected] 10 days ago (Feb 10, 2020, 12:09 PM) and have not received a response. I’m not sure that email works anymore.@Adam_Hass That is the correct email address and you should have received a response by now. I’ll follow up with the team.Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Self Hosted Realm Object Servers?
2020-02-20T13:17:31.179Z
Self Hosted Realm Object Servers?
9,380
null
[ "connecting" ]
[ { "code": " 'Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you\\'re trying to access the database from an IP that isn\\'t whitelisted. Make sure your current IP address is on your Atlas cluster\\'s IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/',mongoose.connect(\n process.env.MONGODB_URI,\n {\n useFindAndModify: true,\n useUnifiedTopology: true,\n useNewUrlParser: true,\n useCreateIndex: true,\n },\n (err) => {\n if (err) return console.log(\"Error: \", err);\n console.log(\n \"MongoDB Connection -- Ready state is:\",\n mongoose.connection.readyState\n );\n }\n);\n", "text": "Trying to connect to my app and I’m getting this error:\n 'Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you\\'re trying to access the database from an IP that isn\\'t whitelisted. Make sure your current IP address is on your Atlas cluster\\'s IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/',I checked the option for allowing access to all IPs and have confirmed it’s open, and I’ve also gone through the other troubleshooting tips, tried changing password, etc. I’m not using Namecheap, which appears to be what some others were using and getting this error.Here’s my app where I’m calling it using Mongoose.Any more ideas of what I’m doing wrong?", "username": "Brittany_Joiner" }, { "code": "mongo", "text": "Hi @Brittany_Joiner welcome to the community.Could you share the MongoDB URI you used? Please remove any username/passwords in the URI beforehand.Also have you tried connecting using the mongo shell?Best regards,\nKevin", "username": "kevinadi" } ]
Can't connect to servers in my MongoDB Atlas Cluster
2020-08-09T23:42:19.599Z
Can&rsquo;t connect to servers in my MongoDB Atlas Cluster
7,535
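A common cause of Atlas "bad auth" failures like the one in this thread (besides the IP access list) is a password containing reserved characters pasted unescaped into the URI. A small stand-alone sketch of percent-encoding credentials before building the connection string; the host, user, and password below are made up for illustration:

```python
from urllib.parse import quote_plus

def build_srv_uri(user: str, password: str, host: str, db: str) -> str:
    """Build a mongodb+srv URI, percent-encoding the credentials so that
    characters like '@', ':' or '/' in the password don't break parsing."""
    return (f"mongodb+srv://{quote_plus(user)}:{quote_plus(password)}"
            f"@{host}/{db}?retryWrites=true&w=majority")

uri = build_srv_uri("appuser", "p@ss:w/rd", "cluster0.example.mongodb.net", "mydb")
print(uri)
# mongodb+srv://appuser:p%40ss%3Aw%2Frd@cluster0.example.mongodb.net/mydb?retryWrites=true&w=majority
```

The same encoded URI can then be passed to a driver such as Mongoose; an unencoded `@` in the password would otherwise be parsed as the start of the hostname.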
null
[ "configuration" ]
[ { "code": "", "text": "How do I update the dbpath in Ubuntu, the simple way? I followed this link but it did not work for me. Please help me with this.", "username": "hardeep_singh" }, { "code": "", "text": "What error are you getting?\nDoes the new dbpath directory exist?\nDoes it have the required permissions/ownership?", "username": "Ramachandra_Tummala" } ]
Not able to update dbpath in ubuntu
2020-08-10T23:36:22.409Z
Not able to update dbpath in ubuntu
1,396
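On Ubuntu the data directory is set by `storage.dbPath` in `/etc/mongod.conf`, and the failure modes the reply hints at are the new directory not existing or not being owned by the `mongodb` user. A minimal sketch of rewriting that key, treating the config as plain text; the paths are examples only, and creating/chowning the directory and restarting the service must still be done separately:

```python
import re

def set_dbpath(conf_text: str, new_path: str) -> str:
    """Return mongod.conf text with storage.dbPath pointing at new_path.

    Not a full YAML rewrite -- it just replaces the value on the
    'dbPath:' line. Afterwards you still need to create new_path,
    chown it to mongodb:mongodb, and restart mongod.
    """
    return re.sub(r"(?m)^(\s*dbPath:\s*).*$", r"\g<1>" + new_path, conf_text)

conf = "storage:\n  dbPath: /var/lib/mongodb\n  journal:\n    enabled: true\n"
print(set_dbpath(conf, "/data/mongodb"))
```

The untouched keys (here `journal.enabled`) pass through unchanged, which is the point of editing in place rather than rewriting the whole file.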
https://www.mongodb.com/…f_2_1024x615.png
[]
[ { "code": "", "text": "They weren’t there before.\nSomething to be concerned about?\nHow do I fix it / clear it? [screenshot: mongod service status]", "username": "Paul_Gureghian" }, { "code": "", "text": "The tilde sign indicates that there is no line, rather than an empty line. This is also confirmed by the message lines 1-11/11 (END). The first line being mongod.service … and the last being … started mongodb server.", "username": "steevej" }, { "code": "", "text": "Today it’s gone, back to the normal output. How do I keep that notification from returning?", "username": "Paul_Gureghian" } ]
Are these 11 lines a bad sign?
2020-08-10T01:14:30.761Z
Are these 11 lines a bad sign?
1,443
null
[ "swift", "production" ]
[ { "code": "cSettingsPackage.swiftcSettings", "text": "I’m pleased to announce our 1.0.1 release.This release contains a single bug fix for the issue raised in #387. Due to a bug in Xcode, cSettings defined in our Package.swift file were not being correctly applied to the driver when attempting to build it via Xcode’s SwiftPM integration.\nWe have now removed the need for the driver to use cSettings at all via #513 and the driver should build with Xcode + SwiftPM as expected. (See also: SWIFT-952)", "username": "kmahar" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Swift Driver 1.0.1 Released
2020-08-10T15:42:22.058Z
MongoDB Swift Driver 1.0.1 Released
1,559
null
[]
[ { "code": "", "text": "Hi would appreciate help with this problem, cannot seem to get past it!C:\\Program Files\\MongoDB\\Server\\4.4\\bin>mongo “mongodb+srv://sandbox.9hxtk.mongodb.net/sandbox” --username m001-student\nMongoDB shell version v4.4.0\nEnter password:\nconnecting to: mongodb://sandbox-shard-00-00.9hxtk.mongodb.net:27017,sandbox-shard-00-01.9hxtk.mongodb.net:27017,sandbox-shard-00-02.9hxtk.mongodb.net:27017/sandbox?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&replicaSet=atlas-vne707-shard-0&ssl=true*** It looks like this is a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.Error: can’t connect to new replica set master [sandbox-shard-00-01.9hxtk.mongodb.net:27017], err: AuthenticationFailed: bad auth Authentication failed. :\nconnect@src/mongo/shell/mongo.js:362:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1", "username": "Paul_08264" }, { "code": "", "text": "Please, go to the MongoDB atlas cluster and check your network access where you can find “Add IP ADDRESS” and ip address should be activated with IP Whitelist.", "username": "Mahfuz_Raihan" }, { "code": "AuthenticationFailed: bad auth Authentication failed-p--password", "text": "Hi @Paul_08264,It says, AuthenticationFailed: bad auth Authentication failed.Please make sure that you are entering correct password. 
You can also add the password in the connection string as well using the -p or --password option.If you are still not able to connect then please share the connection string so that I can try it at my end.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi Shubham\nI am replying to you help advice and to say I did manage to connect to the primary on the cmd shell and also on Windows powershell, unfortunately I have not been able to load the video database and have been going around in loops with the help topics.\nAt this stage I have lost interest as it is taking up too much time, it seems to me having followed all the relevant documentations that is too much logging in and out of various applications and I have found it impossible to keep track even though I have documentation all the pathways, I am going to leave it for now, I feel for a beginners software application it is way too complicated.Kind Regardspauldonovan", "username": "Paul_08264" }, { "code": "", "text": "", "username": "system" } ]
Not able to connect to my sandbox cluster
2020-08-09T17:23:44.326Z
Not able to connect to my sandbox cluster
1,730
https://www.mongodb.com/…3_2_1024x640.png
[ "installation" ]
[ { "code": "", "text": "One for the Server and the other for the Shell? The shell seems to have something possibly wrong.\n[screenshot: mongo-server]\n[screenshot: mongo-shell]\n", "username": "Paul_Gureghian" }, { "code": "", "text": "Both are “fine” in the functional sense. But the shell is warning you that you haven’t configured the server with all the suggested configurations.For instance, you’re not using the XFS file system, which is recommended for better performance. More importantly, you haven’t enabled authentication, so anyone who connects to this instance will be able to read and write all data.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Can you refer me to links in the Mongo docs to address these issues?", "username": "Paul_Gureghian" }, { "code": "", "text": "My Linux install is on ext4; do I need it to be on xfs?", "username": "Paul_Gureghian" }, { "code": "", "text": "It’s preferable for top performance for it to be on xfs, but ext4 is OK if you won’t be pushing the disk hard.As far as configuring authentication, there is an entire large section in the docs about it.Asya", "username": "Asya_Kamsky" } ]
Are these installations good?
2020-08-08T21:52:59.275Z
Are these installations good?
1,782
null
[]
[ { "code": "", "text": "Hello everyone,I live at Marseille in France. I love coding open-source tools.\nI use MongoDB at work and I think it’s an awesome DBMS.Best regards,\nSamuel Tallet", "username": "Samuel_Tallet" }, { "code": "", "text": "Welcome @Samuel_Tallet to the community!Yes, MongoDB is awesome. Have a look around and feel free to help others, or ask questions about any MongoDB issues you have in mind. To get familiar with this community, I’d like to encourage you to read this great Getting Started Guide from @Jamie.Cheers,\nMichael", "username": "michael_hoeller" } ]
Hello from France
2020-08-09T21:10:27.781Z
Hello from France
5,262
null
[]
[ { "code": "", "text": "\nGetting this error every time I install the mongo shell.", "username": "Dushyantt_Garg" }, { "code": "", "text": "As which user are you installing?\nRun as administrator: right-click on the exe -> run as admin.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Dushyantt_Garg,Run as administrator: right-click on the exe -> run as adminWere you able to run the exe file as an administrator?~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Getting permission Error while installing Mongo Shell
2020-08-09T09:02:46.128Z
Getting permission Error while installing Mongo Shell
1,070
https://www.mongodb.com/…7_2_1024x575.png
[ "compass" ]
[ { "code": "", "text": "Hello, I am new in MongoDB, I installed shell & its working fine. I also installed MongoDB compass, when I run it displayed blank screen & not able to do anything … I tried lot but not able to solve this problem…\nimage1366×768 27.7 KB\n", "username": "Ravindra_Negi" }, { "code": "", "text": "Are you able to connect by shell?\nWhat is your connect string you used for Compass?\nOr check connection parameters you used", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Ravindra_Negi and welcome to the community!I also installed MongoDB compassCan you please answer a couple of questions so that we might be able to help out:", "username": "Doug_Duncan" }, { "code": "", "text": "Hi, I am using compass version 1.21.1 stable.\nOS Windows 7\nBlank screen appear right after opening compass, as I posted screen shot in my previous post.\nShell is working fine.", "username": "Ravindra_Negi" }, { "code": "", "text": "Yes, shell is working fine. But not able to see anything in compass screen due blank screen. Even not getting option to set host string or connect string. I have mentioned screen shot in previous post", "username": "Ravindra_Negi" }, { "code": "", "text": "Thanks for the info @Ravindra_Negi! Unfortunately I don’t have a Windows 7 machine to test on, but hopefully one of the engineers that work on Compass will see this soon and be able to help provide ideas.I was thinking that maybe Compass was not compatible with Windows 7, but according to the install notes it is. It also seems that Compass ask you to install the required version of the .NET framework if it’s not already installed.As for my asking when the blank screen appeared, a screenshot only shows what’s happening, not when it happens. 
Thanks for stating that it’s right as soon as you start the application up as that will help the Compass engineers troubleshoot.As for next steps, that I’m not sure you statedI tried lot but not able to solve this problem…but you didn’t state what those things are. Like I say, I know that the Compass team does look at posts, so hopefully one of them will be by soon to help.", "username": "Doug_Duncan" }, { "code": "", "text": "You may want to try to reinstall Compass. Maybe install the latest version 1.21.2 (it is just released).", "username": "Prasad_Saya" }, { "code": "", "text": "I just now installed Compass 1.21.2 on Windows 7. Works fine, and I use it with MongoDB Server version 4.2.3.", "username": "Prasad_Saya" }, { "code": "", "text": "I have the same issue. W7, blank screen on startup of Compass. have installed 2X exe and msi, no difference, just hangs. 1.21.1. rebooted in between. W7 home premium fp 1. does respond to close action.", "username": "Rainer_Richter" }, { "code": "", "text": "Hello @Rainer_Richter, welcome to the community.I am using similar Windows 64 bit version. I actually, installed my Server 4.2.8 and Compass 1.21.2. I was doing some re-organizing my programs and doing housekeeping on my laptop. But, I downloaded the ZIP version of the programs. They installed and work fine.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks, I tried the GA zip and the beta versions, same issue. It’s running, I see 3 processes in task manager just no gui. I’m getting a new W10 PC next week as a workaround. ", "username": "Rainer_Richter" }, { "code": "", "text": "Solved! After installing .net 4.8 and various other windows updates, it works now.", "username": "Rainer_Richter" } ]
MongoDB Compass 1.21.1 doesn't start on Windows 7
2020-05-13T12:09:26.092Z
MongoDB Compass 1.21.1 doesn&rsquo;t start on Windows 7
8,999
null
[ "sharding" ]
[ { "code": "", "text": "Currently using 4.2.8 community edition. Have a sharded cluster with 20+ shard nodes.If I turn off the balancer, will inserted documents still be distributed to the correct shard nodes – assuming a good shard key has been utilized? I believe i have chosen a good hashed key, and the data seems to get distributed well already, however, I’m experiencing some slow-downs in insert speed during peak traffic.I would like to only allow the balancer to run during scheduled times in the day to maximize my cluster’s performance but I don’t know if that means that all the inserts will just go to the primary shard and max out the disk. Can someone clarify?", "username": "Firass_Almiski" }, { "code": "", "text": "Hi @Firass_Almiski,With hash sharding the shards are assigned with pre-defined ranges , therefore collections sharded by a hash key should not get chunk moves if you don’t add/remove shards or update the keys themselves.Therefore if the data inserted/updated is of high shard key value cardinality the inserts should hit multiple shards regardless of the balancer status.Checkout this blog:Best practices for delivering performance at scale with MongoDB. Learn about different sharding schemes and how they help you in scaling-out your database.Are all of your collections sharded on a hash key? I wonder what chunks are getting. Move often? Could that be the sessions collection?Best regards,\nPavel", "username": "Pavel_Duchovny" } ]
MongoDB sharded cluster, behavior when balancer is off?
2020-08-09T11:40:19.513Z
MongoDB sharded cluster, behavior when balancer is off?
1,840
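A side note on the scheduling idea raised in the thread above: rather than toggling the balancer on and off by hand, MongoDB supports a balancing window so that chunk migrations only run during chosen hours. A minimal mongo-shell sketch (run against a mongos as a privileged user; the 02:00-06:00 window is illustrative):

```javascript
// Restrict chunk migrations to a nightly window (illustrative times).
use config
db.settings.updateOne(
   { _id: "balancer" },
   { $set: { activeWindow: { start: "02:00", stop: "06:00" } } },
   { upsert: true }
)

// Confirm the balancer is still enabled and inspect the window:
sh.getBalancerState()
db.settings.findOne({ _id: "balancer" })
```

As the reply notes, inserts into a hashed-sharded collection route by shard key regardless of this setting; the window only limits when migrations may run.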
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Hi, new to realm and I cant find any documentation about administrating users that are registered using Email/Password Authentication provider on client side. Is it possible to CRUD new user as an admin also list the users in the provider including their Custom User Data . In the context of a Realm Functions is it possible to access the users of the authentication provider so they can be managed from there.thanks", "username": "thanos_lodas" }, { "code": "context.httpaxios", "text": "Hey Thanos,Can you explain your use-case a bit further so I can get a better understanding of why you need to administrate all your users from the client? That’s typically what the MongoDB Cloud UI is for.To answer some of your questions about CRUD Users and Custom User data:You can CRUD Users through the Admin API which gives you access to their Custom User DataYou can access the providers either by adding it to Custom User Data via an Authentication Trigger or by querying the identities provided by endpointYou can also do it via Realm functions by calling the API endpoints using context.http in functions or using a library such as axios", "username": "Sumedha_Mehta1" }, { "code": "const user = await admin.auth().createUser(newUserData);", "text": "Hi SumedhaA typical use-case would be a single page application that has no signup page, but with an Admin account that creates new users and manages them from the client GUI. 
Users then can login and edit their profile (Custom User Data) also users can change their email and password that simple.I was expecting something like in Firebase functions one can use the context of the method to manage user like creating a new User like soconst user = await admin.auth().createUser(newUserData);The Admin API if I can use them from client side that would require an API key pair each time so like when an Admin will login I will need an Authentication Trigger to check if the user has an Admin role and somehow create a new API key pair and attach it to the user response is that right .The third case you suggested sounds promising using custom realm functions for each CRUD operation with the context.http to call the Admin API endpoints, but how will I programmatically renew/create API key pair to use in the methods .Thank you for help", "username": "thanos_lodas" }, { "code": "", "text": "Hi Thanos, although there are no provided admin methods, another thing you can look into is Custom Function Authentication.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hi SumedhaI tried the Custom Function Authentication created a Users Collection and enabled the Custom User Data then pointed it to cluster/Database/Collection and defined a string property userId on user. 
Finally I signed-in successfully with the web SDK, but the customData object on the currentUser retrieved all the properties including the password of user , also the Custom Function Authentication provider created an enabled user with unknown name in Users section for the signed-in user now if I delete a user from collection is there a way for a trigger or something to also delete the user in the Users section created by the Authentication provider and also a way to filter what properties are placed in customData.Edit:\nI disabled the the Custom User Data and instead tried using a login trigger to add the custom user data with no sensitive data, but it seems the login triggers do not work with custom functions authentication provider . A question when the token is refreshed is the login trigger called once more ?Thanks for your help and patience", "username": "thanos_lodas" }, { "code": "", "text": "A couple of question so I can find a solution to Administrating users1- Is there a way to filter what properties are set on Custom-User-Data ?\n2- login trigger don’t work with Function Authentication ?\n3- is login trigger executed once more when token user is refreshed ?\n4- Users email and password once they are set cannot be modified programmaticly by a different user ?Am thinking to go with @Sumedha_Mehta1 suggestion using Realm Administration API and using the local Email/Password Authentication if I can somehow find work around for a couple of hurdles", "username": "thanos_lodas" } ]
Administrate Realm users on client
2020-08-05T03:10:41.639Z
Administrate Realm users on client
2,861
null
[]
[ { "code": "{\n \"ts\" : Timestamp(1596778564, 9),\n \"t\" : NumberLong(7),\n \"h\" : NumberLong(0),\n \"v\" : 2,\n \"op\" : \"u\",\n \"ns\" : \"db.collectionName\",\n \"ui\" : UUID(\"2947862a-8fb7-4342-87d1-a0ab5f8bc0bd\"),\n \"o2\" : {\n \"_id\" : ObjectId(\"5f27e94e0174081a3feb5c6b\")\n },\n \"wall\" : ISODate(\"2020-08-07T05:36:04.402Z\"),\n \"lsid\" : {\n \"id\" : UUID(\"cbd4b90f-1bff-4ad1-b4e2-4c286fc25450\"),\n \"uid\" : BinData(0,\"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=\")\n },\n \"txnNumber\" : NumberLong(1269),\n \"stmtId\" : 0,\n \"prevOpTime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"o\" : {\n \"_id\" : ObjectId(\"5f27e94e0174081a3feb5c6b\")\n }\n}\n{\n \"ts\" : Timestamp(1596778564, 8),\n \"t\" : NumberLong(7),\n \"h\" : NumberLong(0),\n \"v\" : 2,\n \"op\" : \"u\",\n \"ns\" : \"db.collectionName\",\n \"ui\" : UUID(\"2947862a-8fb7-4342-87d1-a0ab5f8bc0bd\"),\n \"o2\" : {\n \"_id\" : ObjectId(\"5f27e94e0174081a3feb5c6b\")\n },\n \"wall\" : ISODate(\"2020-08-07T05:36:04.398Z\"),\n \"lsid\" : {\n \"id\" : UUID(\"cbd4b90f-1bff-4ad1-b4e2-4c286fc25450\"),\n \"uid\" : BinData(0,\"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=\")\n },\n \"txnNumber\" : NumberLong(1268),\n \"stmtId\" : 0,\n \"prevOpTime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"o\" : {\n \"$v\" : 1,\n \"$set\" : {\n .....\n .......\n ......... //All the values to be updated\n }\n }\n}\n", "text": "Hello All,We have a oplog entry with an unusual value, understand that the o2 object id says the object to be updated and “o” says what is being updated. In this particular instance “o2” has the objectid which is expected, but “o” also has the same object id instead of the update to be done. Any idea when can we get such an oplog as mentioned below without $set or $unset operations.The update oplog for the same object few millisec ago is given below. 
Which has the right set of operations.Thanks,\nMahudees", "username": "Mahudeeswaran_Palani" }, { "code": "", "text": "Hi @Mahudeeswaran_Palani,I think we use the oplog in the retryable writes and in transactions so a single operation might result in a chain of oplogs.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,\nMany thanks for your reply. Actually these are two different update operations happening on same data one after the other. Could the retryable writes in this scenario is because any failure in connecting DB, any pointers to check from any logs or somewhere why it was trying to retry write this operation.And how to do we identify it is actually a retryable write operation.Thanks,\nMahudees", "username": "Mahudeeswaran_Palani" }, { "code": "", "text": "Hi @Mahudeeswaran_Palani,Several oplog entires might be written to the oplog regardless if operations were retried or not.See the following blog:Retryable writes are an important foundation of MongoDB's transactionsI am trying to understand what is the underlying issue. Do you see latency or much higher consumption of oplog space?Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "_id", "text": "I don’t see that this has anything to do with retryable writes - this looks like the client did an update with replacement document with just the _id field so all the other fields would get unset. Is that what the effect of this was?Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Yes, found that in our code the values are setup to ‘Update’ based on the condition if values are available or not, but update is called in all the case. So when the values are not available for setting, empty Update is called which makes this as replacement document. 
Have added my observation and findings in this question in stack-overflow.Many thanks for helping.Regards,\nMahudees", "username": "Mahudeeswaran_Palani" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Oplog update entry without set or unset operation
2020-08-08T06:52:15.887Z
Oplog update entry without set or unset operation
2,872
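The root cause identified above, an update call that ends up sending a replacement document containing only _id when there are no field values to set, can be avoided with a small guard on the application side. A hypothetical sketch in plain JavaScript (the buildSetUpdate helper and the caller shown in the comment are illustrative, not from the original application code):

```javascript
// Build a $set update only from fields that are actually present, and
// skip the write entirely when nothing changed. Calling update with an
// empty replacement document wipes every other field, which is what
// produced the { "o" : { "_id" : ... } } oplog entry discussed above.
function buildSetUpdate(changes) {
  const fields = {};
  for (const [key, value] of Object.entries(changes)) {
    if (value !== undefined) fields[key] = value;
  }
  if (Object.keys(fields).length === 0) return null; // nothing to update
  return { $set: fields };
}

// Caller side (collection/driver call is illustrative):
// const update = buildSetUpdate({ name, score });
// if (update) await collection.updateOne({ _id: id }, update);
```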
null
[ "upgrading" ]
[ { "code": "", "text": "Mongo cannot start after the upgrade, I have a single instance.\nHere is the error message:\n2020-08-06T12:36:25.867+0300 I STORAGE [initandlisten] exception in initAndListen: Location40415: BSON field ‘MinValidDocument.oplogDeleteFromPoint’ is an unknown field., terminating", "username": "Hussam_Jarrah" }, { "code": "", "text": "Please check this link.Another user got same error", "username": "Ramachandra_Tummala" } ]
Issue with upgrading mongo from 4 to 4.2
2020-08-09T08:24:47.447Z
Issue with upgrading mongo from 4 to 4.2
3,124
https://www.mongodb.com/…a_2_1024x640.png
[]
[ { "code": "", "text": "mongo1280×800 381 KBThanks.", "username": "Paul_Gureghian" }, { "code": "mongo", "text": "Hello @Paul_Gureghian, welcome to the community.Please tell about what is the version (and type) of MongoDB you have installed and the operating system (from the screenshot it looks like some *IX).The are instructions and tutorials (about installation, starting MongoDB server and connecting with mongo shell). Please refer the documentation and tell where you are stuck.", "username": "Prasad_Saya" }, { "code": "", "text": "Most likely the required directories are not created. Look at the config file /etc/mongod.conf. All directories specified in there must exist. You can look at the log file also specified in there.", "username": "steevej" }, { "code": "", "text": "Thanks. Mongo 4.4 on Ubuntu 20.04.1", "username": "Paul_Gureghian" }, { "code": "", "text": "I don’t know if the dirs were there or not when I posted (probably were not there) but now they are present and I didn’t do anything to make it so.", "username": "Paul_Gureghian" }, { "code": "", "text": "Is it working now? If not, anything in the logs?", "username": "steevej" }, { "code": "", "text": "Maybe Mongo just needed a reboot.\nI didn’t apply any fix.Since I don’t want to reformat to XFS, can I mute the warning about XFS?\nCan you refer me to a link in the Momgo docs for enabling authentication or muting the warning as well ?mongo-server1280×800 182 KB", "username": "Paul_Gureghian" }, { "code": "", "text": "mongo-shell1280×800 316 KB", "username": "Paul_Gureghian" }, { "code": "", "text": "Please check this linkFor access control check this", "username": "Ramachandra_Tummala" } ]
How to start Mongo Server / Shell?
2020-08-08T03:24:09.816Z
How to start Mongo Server / Shell?
2,774
null
[]
[ { "code": "{\n 'step': 1,\n 'name': 'house',\n 'score': 2\n}\n{\n 'step': 1,\n 'name': 'car',\n 'score': 3\n}\n{\n 'step': 2,\n 'name': 'house',\n 'score': 4\n}\n\nI'm grouping the documents with same 'step' and pushing 'name' and 'score' into an array of objects. What I get is:\n\n{\n 'step': 1,\n 'scores': \n [\n {'name':'house','score':2},\n {'name':'car','score':3}\n ]\n}\n{\n 'step': 2,\n 'scores': \n [\n {'name':'house','score':4}\n ]\n}\n", "text": "I’m using MongoDB aggregation framework. I have a Mongo collection with documents like this:For each ‘step’ I need to copy the value of previous ‘step’ in case that a ‘name’ does not exists. I should have something like this:{\n‘step’: 1,\n‘scores’:\n[\n{‘name’:‘house’,‘score’:2},\n{‘name’:‘car’,‘score’:3}\n]\n}\n{\n‘step’: 2,\n‘scores’:\n[\n{‘name’:‘house’,‘score’:4},\n{‘name’: ‘car’, ‘score’:3}\n]\n}At the second document the element {‘name’:‘car’,‘score’:3} has been copied from the previous document because at ‘step:2’ there is not documents having ‘score’ for ‘car’.If step 1 do not have car record then step 2 should not have car record.I’ll try to explain better the goal:\nFor each step, the two fields (house and car) should be inspected and in case that no value available for some of them, then missing value should be filled with the last value provided at previous steps. If no previous step has value for the field, then nothing to copy to current stepI’m not able to figure out how to do this operation with MongoDB aggregation. 
Some help will be very appreciated.", "username": "Merce_Bruned_Lacoma" }, { "code": "db.test1.insertMany([\n {\n step: 1,\n name: 'house',\n score: 18,\n },\n {\n step: 1,\n name: 'car',\n score: 5,\n },\n {\n step: 2,\n name: 'house',\n score: 20,\n },\n {\n step: 2,\n name: 'boat',\n score: 15,\n },\n {\n step: 2,\n name: 'yacht',\n score: 20,\n },\n {\n step: 3,\n name: 'plane',\n score: 50,\n },\n]);\n\ndb.test1.aggregate([\n {\n $group: {\n _id: '$step',\n scores: {\n $push: {\n name: '$name',\n score: '$score',\n },\n },\n },\n },\n {\n // we need this, because $group stage\n // does not guarantee consistency\n // in the order in groups\n $sort: {\n _id: 1,\n },\n },\n {\n $group: {\n _id: null,\n list: {\n // collect all docs into one list\n // to be able to compare current and previous doc\n $push: '$$CURRENT',\n },\n },\n },\n {\n $project: {\n listWithChainedScores: {\n $reduce: {\n input: '$list',\n initialValue: null,\n in: {\n $cond: {\n if: {\n $eq: ['$$value', null],\n },\n then: {\n prev: '$$this',\n calculated: ['$$this'],\n },\n else: {\n prev: '$$this',\n calculated: {\n // concat modified current doc with\n // the general list of modified docs\n $concatArrays: ['$$value.calculated', [{\n // keep the current doc id\n _id: '$$this._id',\n scores: {\n // combine scores of current and previous doc\n $setUnion: ['$$this.scores', '$$value.prev.scores'],\n },\n }]],\n },\n },\n },\n },\n },\n },\n },\n },\n // $unwind + $replaceWith will make a new document\n // per each item in the $listWithChainedScores.calculated array\n {\n $unwind: '$listWithChainedScores.calculated',\n },\n {\n $replaceWith: '$listWithChainedScores.calculated',\n },\n]).pretty();\n[\n {\n \"_id\": 1,\n \"scores\": [\n { \"name\": \"house\", \"score\": 18 },\n { \"name\": \"car\", \"score\": 5 }\n ],\n },\n {\n \"_id\": 2,\n \"scores\": [\n { \"name\": \"boat\", \"score\": 15 },\n { \"name\": \"car\", \"score\": 5 },\n { \"name\": \"house\", \"score\": 18 }, // duplicate\n { 
\"name\": \"house\", \"score\": 20 }, // duplicate\n { \"name\": \"yacht\", \"score\": 20 }\n ],\n },\n {\n \"_id\": 3,\n \"scores\": [\n { \"name\": \"boat\", \"score\": 15 },\n { \"name\": \"house\", \"score\": 20 },\n { \"name\": \"plane\", \"score\": 50 },\n { \"name\": \"yacht\", \"score\": 20 }\n ],\n },\n]\n[\n {\n // $unwind to be able to sort \n $unwind: '$scores',\n },\n {\n $sort: {\n // order by score from bigger to smaller\n // $sort is needed so in the $group stage we picked\n // the first score object (that will have the bigger score value)\n // change the direction to '1' if you need the opposite\n 'scores.score': -1,\n },\n },\n {\n $group: {\n // at this stage we get rid of duplicates\n _id: {\n docId: '$_id',\n scoreName: '$scores.name',\n },\n scores: {\n $first: '$scores',\n },\n },\n },\n {\n // at this stage we restore the original documents structure\n $group: {\n _id: '$_id.docId',\n scores: {\n $push: '$scores',\n },\n },\n },\n]\n", "text": "Hello, @Merce_Bruned_Lacoma! Welcome to the community!At the second document the element {‘name’:‘car’,‘score’:3} has been copied from the previous documentOk, so we need to have build some kind of relationships between two separate documents. That is achievable only if we combine all the documents into temporary list for calculation purposes.With this approach, please, note that: depending on the number of documents you take in your aggregation and the size of each document, that will go into that temporary list, you may hit the aggregation pipeline stage memory limitations, that may decrease the aggregation performance.I will extend your dataset example, so we could run the aggregation against the longer documents chain:This aggregation should provide you with the desired result:Well almost. Here is the output:Notice, that there are duplicates in ‘scores’ array.\nResolving those duplicates is tricky here. 
However, nobody said it is not possible Add the following stages to the end of the aggregation:All done.", "username": "slava" }, { "code": "$lookup$lookupdb.steps.aggregate([\n{$sort:{name:1, step:1}}, \n{$group:{_id:\"$name\", steps:{$push:{step:\"$step\", score:\"$score\"}}}}, \n{$lookup:{from:\"steps\", pipeline:[ {$sort:{step:-1}},{$limit:1}], as:\"lastStep\"}},\n{$unwind:\"$lastStep\"}, \n{$set:{steps: {$reduce:{\n input:{$range:[{$add:[1,{$max:\"$steps.step\"}]}, {$add:[1,\"$lastStep.step\"]}]},\n initialValue:\"$steps\", \n in: {$concatArrays:[ \n \"$$value\", \n [{$mergeObjects:[\n {$last:\"$steps\"}, \n {step:\"$$this\"}\n ]}]\n ]}\n}}}}, \n{$unwind:\"$steps\"}, \n{$group:{_id:\"$steps.step\", scores:{$push:{name:\"$_id\", score:\"$steps.score\"}}}}, \n{$sort:{_id:1}})\n{ \"_id\" : 1, \"scores\" : [ { \"name\" : \"house\", \"score\" : 2 }, { \"name\" : \"car\", \"score\" : 3 } ] }\n{ \"_id\" : 2, \"scores\" : [ { \"name\" : \"house\", \"score\" : 4 }, { \"name\" : \"car\", \"score\" : 3 } ] }\n{ \"_id\" : 1, \"scores\" : [ { \"name\" : \"car\", \"score\" : 5 }, { \"name\" : \"house\", \"score\" : 18 } ] }\n{ \"_id\" : 2, \"scores\" : [ { \"name\" : \"car\", \"score\" : 5 }, { \"name\" : \"boat\", \"score\" : 15 }, { \"name\" : \"house\", \"score\" : 20 }, { \"name\" : \"yacht\", \"score\" : 20 } ] }\n{ \"_id\" : 3, \"scores\" : [ { \"name\" : \"car\", \"score\" : 5 }, { \"name\" : \"boat\", \"score\" : 15 }, { \"name\" : \"house\", \"score\" : 20 }, { \"name\" : \"yacht\", \"score\" : 20 }, { \"name\" : \"plane\", \"score\" : 50 } ] }\n", "text": "This is an interesting problem and it actually has a much simpler solution, one that does not rely on having to push all of the documents into a single document which can definitely fail to scale when the collection is large, plus it’s unnecessarily complex.My solution only needs a single piece of information and that is what the highest (that is last) step number is. 
If that’s not known at aggregation there are two ways to get it - one is by running a query first to get that number, the other by inserting a $lookup stage to fetch it - luckily expressive $lookup is smart enough to only run a non-correlated subquery only once. I’ll show that solution:On the original example documents, the result is:On @slava’s example the result is:", "username": "Asya_Kamsky" }, { "code": "$last$last{'$arrayElemAt':[ <array>, {'$subtract':[{'$size':<array>}, 1]} ]}\n", "text": "I’d be happy to explain anything that’s not clear here, I do want to point out that I’m using a new-in-4.4.0 expression $last which returns the last element of an array. If you want to run this on an earlier version of MongoDB you should replace $last with a much longer and more unwieldy expressionwhich is a much more complicated way to grab the last element of an array, wouldn’t you agree?", "username": "Asya_Kamsky" } ]
Fill missing values after group
2020-08-07T06:45:33.436Z
Fill missing values after group
6,122
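For readers who want to sanity-check the pipelines above on a small dataset, the same "carry the last seen score forward" rule can be expressed in a few lines of plain JavaScript. This is a hypothetical helper for verification only; it assumes the input is already grouped by step and sorted ascending, as after the first $group/$sort stages:

```javascript
// Fill forward: each step inherits the most recent score for every name
// seen in earlier steps, while its own scores override the carried values.
function fillMissingScores(steps) {
  const lastSeen = new Map(); // name -> score carried from earlier steps
  return steps.map(({ step, scores }) => {
    for (const { name, score } of scores) lastSeen.set(name, score);
    const filled = [...lastSeen].map(([name, score]) => ({ name, score }));
    return { step, scores: filled };
  });
}
```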
null
[ "upgrading" ]
[ { "code": "Error: error: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"DatabaseVersion doesn't exist in database entry { _id: \\\"db_wayee_env\\\", primary: \\\"shard2\\\", partitioned: false } despite the config server being in binary version 4.2 or later.\",\n\t\"code\" : 1,\n\t\"codeName\" : \"InternalError\",\n\t\"operationTime\" : Timestamp(1596869996, 1),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1596869996, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"vCYWXny2iOfGYHrtK57jhIxQTys=\"),\n\t\t\t\"keyId\" : NumberLong(\"6802144151413456897\")\n\t\t}\n\t}\n}\n", "text": "First ,the version is 3.6,and I upgraded to 4.0,there is no problem.Today I upgraded to 4.2 ,when I query the database ,it show the error.how do I solve this problem?Thanks", "username": "xu_zhang" }, { "code": "", "text": "Hi @xu_zhang,Have you switched the FCV to 4.2 and restarted all mongos?Please note the above is a mandatory step.Best regards\nPavel", "username": "Pavel_Duchovny" } ]
DatabaseVersion doesn't exist in database entry { _id: \"db_wayee_env\", primary: \"shard2\", partitioned: false }
2020-08-08T10:56:07.009Z
DatabaseVersion doesn&rsquo;t exist in database entry { _id: \&rdquo;db_wayee_env\&rdquo;, primary: \&rdquo;shard2\&rdquo;, partitioned: false }
1,937
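Following up on the reply above: the feature compatibility version can be checked and raised from a mongos with the documented admin commands below (run as a user with the required privileges, after all binaries are upgraded, and then restart every mongos as noted):

```javascript
// Check the current FCV, then raise it once the 4.2 binaries are in place:
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
db.adminCommand({ setFeatureCompatibilityVersion: "4.2" })
```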
null
[]
[ { "code": "", "text": "I am trying to get into the download site but keep getting redirected to Atlas. I’ve tried clearing cache, cookies, etc to no avail. This appears to happen when I log into Atlas, once I go there I can’t get anywhere else, support, download, etc. Does this happen to anyone else? Anyone know how I can get around it?", "username": "JamesT" }, { "code": "", "text": "No it does not happen to me\nI can access both links while logged in to the communityMongoDB Paid Support. MongoDB offers help with training, upgrading, and moreDownload MongoDB Community Server non-relational database to take your next big project to a higher level!", "username": "Ramachandra_Tummala" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Accessing Mongo websites
2020-08-07T17:19:49.319Z
Accessing Mongo websites
1,426
null
[ "dot-net", "field-encryption" ]
[ { "code": "", "text": "Hi, In our .NET Core project (c#) we need to use a Client-Side Field Level on linux containers, however the CSFLE don’t work on link as specified in the documentations.https://mongodb.github.io/mongo-csharp-driver/2.10/reference/driver/crud/client_side_encryption/Are there any plans to release this functionality? If yes, when?\nWhat is work around for this issue?Thanks", "username": "Filipe_Nonato_de_Fre" }, { "code": "", "text": "Hi @Filipe_Nonato_de_Fre,This has been previously discussed in Client-Side Field Level Encryption using C# in Linux - #4 by wan.Per @wan’s update there:This is currently scheduled to be worked on, please feel free to add yourself as a watcher or up-vote the issue tracker CSHARP-2715: FLE on Linux/Mac to receive notifications for progress on the ticket.Regards,\nStennie", "username": "Stennie_X" } ]
C# Driver - Client-Side File Level Encryption don't work in Linux
2020-08-07T13:42:28.147Z
C# Driver - Client-Side File Level Encryption don&rsquo;t work in Linux
2,306
null
[]
[ { "code": "{\n \"_id\" : ObjectId(\"5f1db4f0a3f141fb2f1e6hfd\"),\n \"_partition\" : \"My Project\",\n \"online\" : [ \n ObjectId(\"5ddf98bc5fd87f00175dc6cb\"),\n ObjectId(\"5ddf98bc5fd87f00175dc6cc\"),\n ObjectId(\"5ddf98bc5fd87f00175dc6cd\")\n ],\n \"day\" : \"Monday\",\n \"minute\" : 0,\n \"hour\" : 0\n}\nexports = async function(){\n const cluster = context.services.get(\"mongodb-atlas\");\n const myCollection = cluster.db(\"myDB\").collection(\"myCollection\");\n const result = await availability.find({ day: \"Monday\", hour: 0, minute: 0 });\n //From here on, nothing is working properly\n}\n", "text": "Hello,I am struggling to retrieve the elements of an array in the customer functions of Realm UI.I have a simple collection and here is an example of a documentI want to retrieve the elements inside the ‘online’ array and later run another function over those Object ids that represent the ids of users in another collection.I am simply doing the following:Until now all is good but after that, I am not able to retrieve the elements of the ‘online’ array.\nI tried different methods, but everything is giving me “$undefined”: true …What can I do to retrieve those 3 object ids and save them in a variable?Please note that I am using the custom functions of the Realm UI. It is a must to use those functions as I am creating a Cron job using Realm.", "username": "Maz" }, { "code": "availability\nexports = async function(arg) {\n const cluster = context.services.get(\"mongodb-atlas\");\n const myCollection = cluster.db(\"myDB\").collection(\"myCollection\");\nconst query = { \"day\": \"Monday\", \"hour\":0, \"minute\":0 };\nconst projection = { \"online\": 1 };\n\nconst result = await myCollection.find(query, projection).toArray()\n\n}\n", "text": "Hi Mazen,I don’t have enough context on where the bug might until I have the full function code. 
Can you copy the entire function here?In the meantime, a couple of things:I’m not sure where your availability variable is defined.It seems like you might want to do something like:This will return an array of arrays (all online arrays that fit the criteria in one array). Hope that’s helpful.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Retrieve array elements into a variable in custom functions in Realm
2020-08-02T20:40:11.883Z
Retrieve array elements into a variable in custom functions in Realm
2,089
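Building on the accepted answer above: find(query, projection).toArray() yields one document per match, each carrying its own online array. Flattening those into a single list of user ids (to feed the follow-up per-user function) takes one more step. A hypothetical helper in plain JavaScript:

```javascript
// Flatten the `online` arrays from the matched documents into one list.
// `docs` is the array returned by myCollection.find(query, projection).toArray().
function collectOnlineIds(docs) {
  return docs.reduce((ids, doc) => ids.concat(doc.online || []), []);
}

// Example shape (ObjectIds shown as strings for brevity):
// const userIds = collectOnlineIds(result);
```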
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "$metarandValsearchScoresearchHighlightsgeoNearDistancegeoNearPointrecordIdindexKeysortKeyfindAndModifyallowDiskUseMONGODB-AWSCommitQuorumcreateIndexestlsDisableCertificateRevocationCheckExceededTimeLimitLockTimeoutClientDisconnectAuthorizedDatabasesListDatabasesAsQueryable# .NET Driver Version 2.11.0 Release Notes\n\nThe main new features in 2.11.0 support new features in MongoDB 4.4.0. These features include:\n\n* Support for all new\n [``$meta``](https://www.mongodb.com/docs/manual/reference/operator/projection/meta/)\n projections: `randVal`, `searchScore`, `searchHighlights`,\n `geoNearDistance`, `geoNearPoint`, `recordId`, `indexKey` and\n `sortKey`\n* Support for passing a hint to update commands as well as\n `findAndModify` update and replace operations\n* Support for `allowDiskUse` on find operations\n* Support for `MONGODB-AWS` authentication using Amazon Web Services\n (AWS) Identity and Access Management (IAM) credentials\n* Support for stapled OCSP (Online Certificate Status Protocol) (macOS only)\n* Support for shorter SCRAM (Salted Challenge Response Authentication Mechanism) conversations\n* Support for speculative SCRAM and MONGODB-X509 authentication\n* Support for the `CommitQuorum` option in `createIndexes`\n* Support for [hedged reads](https://www.mongodb.com/docs/master/core/read-preference-hedge-option/index.html)\n\n", "text": "The main new features in 2.11.0 support new features in MongoDB 4.4.0. 
These features include:Other new additions and updates in this release include:An online version of these release notes is available at:The full list of JIRA issues resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.0%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:", "username": "Robert_Stam" }, { "code": "", "text": "I got an error after upgrading from 2.10.4 to 2.11.0:image1960×760 92.1 KBAny tips on what is wrong here?", "username": "programad" }, { "code": "", "text": "Our backward compatibility is typically at the source level and not the binary level. Can you try recompiling your application against the version of the driver you want to use?", "username": "Robert_Stam" }, { "code": "", "text": "The feature Client-Side Field Level Encryption (CSFLE) was it released to work on Linux?", "username": "Filipe_Nonato_de_Fre" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C#/.NET Driver 2.11.0 Released
2020-07-31T16:05:40.651Z
MongoDB C#/.NET Driver 2.11.0 Released
3,114
null
[ "upgrading" ]
[ { "code": "2020-08-07T14:58:26.205+0300 I STORAGE [initandlisten] exception in initAndListen: Location40415: BSON field 'MinValidDocument.oplogDeleteFromPoint' is an unknown field., terminating \n", "text": "Hello,I’m trying to update my MongoDB installation from 3.6 to the latest version [4.4] on Ubuntu16.04 using apt-get. Going from 4.0 to 4.2 I faced a problem during the startup process after updating the binaries and mongo.log shows the exception below:I’m using WiredTiger and I checked the feature compatibility version and other prerequisites according to the installation and the compatibility docs and everything looks fine. I don’t have any clue on what could be the source of this problem and I’d be grateful if someone can help.", "username": "Anas_Obeidat" }, { "code": "use local\ndb.replset.minvalid.find({}).pretty ();\noplogDeleteFromPoint", "text": "Hi @Anas_Obeidat,Can you start this instance in standalone mode (different port and no replication settings in config) and provide the following content when logged in as admin:For some reason I believe this document is malformed.The field oplogDeleteFromPoint should not be there when the FCV is moved to 4.2 according to SERVER-30556Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "> use local\nswitched to db local\n> db.replset.minvalid.find({}).pretty ();\n{\n\t\"_id\" : ObjectId(\"57b18fb43040dc07cefc3235\"),\n\t\"ts\" : Timestamp(1505648110, 6),\n\t\"t\" : NumberLong(-1),\n\t\"oplogDeleteFromPoint\" : Timestamp(0, 0)\n}\n", "text": "Thanks for your response @Pavel_Duchovny,I forgot to mention that but I’m experimenting on a standalone instance. I guess that’s not possible because the server won’t start after the upgrade so, I downgraded mongo to 4.0.19 and started it and connected via mongoshell and this is the output:What do you suggest in this case? 
do you think modifying this document manually before the upgrade would be a wise move?", "username": "Anas_Obeidat" }, { "code": "db.replset.minvalid.update(\n    { \"_id\" : ObjectId(\"57b18fb43040dc07cefc3235\")},\n    { $unset: { oplogDeleteFromPoint: \"\"} }\n);\n", "text": "Hi @Anas_Obeidat,Does it fail when switching the feature compatibility to 4.2 or just when using a 4.2 binary?Consider backing up the dbPath before doing the following removal.I believe the way to go is to $unset this field on a Primary.Verify it is replicated to all secondaries.Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "2020-08-07T14:58:26.205+0300 I STORAGE [initandlisten] exception in initAndListen: Location40415: BSON field 'MinValidDocument.oplogDeleteFromPoint' is an unknown field., terminating \noplogDeleteFromPoint", "text": "Thanks, @Pavel_Duchovny!I ran your query before the upgrade and it solved my problem.For clarification, the problem appeared after updating the binaries to version 4.2. When I start the mongo service, the service fails to start and the log shows this exception before it starts the shutdown procedure:The solution was to unset the oplogDeleteFromPoint like @Pavel_Duchovny advised. After that I updated the binaries and the service started normally.Thanks,\nAnas", "username": "Anas_Obeidat" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Failed to start mongo.service after update from 4.0 to 4.2
2020-08-07T16:30:19.137Z
Failed to start mongo.service after update from 4.0 to 4.2
6,626
null
[ "data-modeling" ]
[ { "code": "{\nobjectId: 90rt93jkfd92,\naction: analyze,\napp: reply,\nuser:{\n id: 5\n name: Sandy,\n email: [email protected]\n }\n}\n", "text": "I need to design a simple structure where the user’s click actions on a web application are recorded. Example document,user_actionI also have a user_detail collection where user detail is stored.If some user updates their user_detail say the first name, it should be reflected in the user_action data.Which approach is best?Update the user_action collection when user detail is updated. user_action is the largest collection in the DB where all actions are recorded. Is it ok to modify such a large no. of documents though not frequently?Join fetch user_detail while querying user_action. Is this approach is right?Consider redesigning the model. If I need to redesign the model, what are some suggestions?Is there any other way to approaches this.Your help would be appreciable.\nThanks in advance!", "username": "Santhosh_Kumar" }, { "code": "", "text": "Hello @Santhosh_Kumar, welcome to the community.Here is a similar post with some answers:https://stackoverflow.com/questions/63295525/mongodb-data-modeling/63295997#63295997", "username": "Prasad_Saya" }, { "code": "nameemailuser_actionsuser_actionsusers// user document example\n{\n _id: 'U1',\n name: 'Sandy',\n age: 22,\n address: 'addr',\n email: '[email protected]',\n},\n// user_actions documents example\n[\n {\n _id: 'A1',\n action: 'login',\n app: 'store',\n user: {\n id: 'U1',\n name: 'Sandy',\n email: '[email protected]',\n },\n },\n {\n _id: 'A2',\n action: 'logout',\n app: 'store',\n user: {\n id: 'U1',\n name: 'Sandy',\n email: '[email protected]',\n },\n },\n]\nuser_actionsusersuser_actions", "text": "Welcome, @Santhosh_Kumar!I suppose, for your case, name and email props, that you embed in your user_actions collection, are not updated frequently. 
Thus, having the de-normalized data model would be the best option.So, you have 2 collections: user_actions and users:Actually, it is more a question about how consistent the user details in user_actions collection should be with the data in users collection.", "username": "slava" }, { "code": "", "text": "Hi @slava and @Santhosh_Kumar\ndepending on your queries to retrieve data, the subset pattern might be interesting. You can keep the latest, frequently retrieved (action) data embedded. Implicitly you gain ACID functionality since you only deal with one (user) document. Whenever the embedded data hits a certain amount you move this to a second collection - the “amount” is defined by your process.\nWhen you can use e.g. a userId as Id for the second collection you save the step of extra linking and cascaded deletes…Just some thoughts, the most important and first thing to do when you design a schema is to identify the quantity, quality, and size of your workload and how you are going to query the data. I based this on the idea that you have a user-centric and not action-centric setup.Cheers,\nMichael", "username": "michael_hoeller" } ]
Data Modeling - User Action Relationship
2020-08-07T06:22:22.663Z
Data Modeling - User Action Relationship
1,478
null
[ "compass" ]
[ { "code": "", "text": "Hello Team,we are connecting to MongoDB using MongoDB Compass with the URI: mongodb://10.210.126.16:27017,10.210.126.17:27017,10.210.126.19:27017/?authSource=admin&readPreference=primary&appname=MongoDB%20Compass&ssl=false\nWe are getting the error:\ngetaddrinfo ENOTFOUND rh-hadoop-07.mtg.local\nAll the IPs are on DNS.Kindly advise", "username": "Roshan_John" }, { "code": "", "text": "Since your DNS error is with rh-hadoop-07.mtg.local, your problem lies somewhere else and it is not related to the URI of your cluster.", "username": "steevej" } ]
ENOTFOUND error connecting with MongoDB Compass
2020-08-07T07:46:13.200Z
ENOTFOUND error connecting with MongoDB Compass
9,310
null
[ "security" ]
[ { "code": " db.updateRole(\"testrole\",{ privileges:[{\"resource\" : {\"cluster\" : true},\"actions\" : [\"fsync\",\"getCmdLineOpts\",\"getShardMap\",\"listDatabases\",\"listShards\",\"replSetGetConfig\",\"replSetGetStatus\",\"serverStatus\",\"unlock\"]},{\"resource\" : {\"db\" : \"local\",\"collection\" : \"system.replset\"},\"actions\" : [\"find\"]\n },{\"resource\" : {\"db\" : \"config\",\"collection\" : \"settings\"},\"actions\" : [\"update\"]},{\"resource\" : {\"db\" : \"\",\"collection\" : \"\"},\"actions\" : [\"collStats\",\"listCollections\"]}]})\n\n\n db.createUser( { user: \"test\", pwd: \"xxxxxx\", roles: [ { role: \"testrole\", db: \"admin\" } ] } )\n\n db.adminCommand({\"collStats\": \"system.roles\"})\n{\n \"ok\" : 0,\n \"errmsg\" : \"not authorized on admin to execute command { collStats: \\\"system.roles\\\" }\",\n \"code\" : 13,\n \"codeName\" : \"Unauthorized\"\n}", "text": "Hi,I have a created a MongoDB user with role which has following privileges, out of which collStats on all databases is also one. But when I try to execute collstats output on a specific collection, it fails. Please can someone help.\nI am trying this on a replica set.", "username": "Akshaya_Srinivasan" }, { "code": "", "text": "Hi @Akshaya_Srinivasan ,System collections require explicit grant. This means that you have to specify the system.roles specifically in your grants.Pavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to execute collstats on a collection, although the privilege is granted
2020-08-06T10:51:36.702Z
Unable to execute collstats on a collection, although the privilege is granted
2,776
null
[ "java" ]
[ { "code": "db.users.aggregate([ { \"$geoNear\": { \"near\": { \"type\": \"Point\", \"coordinates\": [ 17.487652, 78.385807 ] }, \"maxDistance\": 5000, \"spherical\": true, \"distanceField\": \"distance\" }} ]).pretty()", "text": "Hello there.\nCan somebody post the java equivalent of this query please?\ndb.users.aggregate([ { \"$geoNear\": { \"near\": { \"type\": \"Point\", \"coordinates\": [ 17.487652, 78.385807 ] }, \"maxDistance\": 5000, \"spherical\": true, \"distanceField\": \"distance\" }} ]).pretty()", "username": "manlan" }, { "code": "", "text": "Hello @manlan, welcome to the community.You can use the MongoDB Compass’s Aggregation Pipeline Builder to build an aggregation query and then Export Pipeline to Specific Language - to Java.Also, see the Java Driver Tutorials on Aggregation and Geospatial Search", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you for the welcome!Oh this is great! I’ll check it out, thanks a ton!", "username": "manlan" }, { "code": "{ \"near\": { \"type\": \"Point\", \"coordinates\": [ 17.487652, 78.385807 ] }, \"maxDistance\": 5000.0, \"spherical\": true, \"distanceField\": \"distance\" }var stage = listOf(\n\t\t\t\teq(\"\\$geoNear\", and(\n\t\t\t\teq(\"near\", and(\n\t\t\t\teq(\"type\",\"Point\"),\n\t\t\t\teq(\"coordinates\",listOf(17.487652,78.385807)))),\n eq(\"maxDistance\", 5000.0),\n eq(\"spherical\", true),\n eq(\"distanceField\", \"distance\"))))\n\ncollection.aggregate(stage)\n'$geoNear requires a 'near' option as an Array' on server\n\"pipeline\": [\n\t\t\t{\"$geoNear\": \n\t\t\t\t{\"$and\": \n\t\t\t\t\t[{\"near\": \n\t\t\t\t\t\t{\"$and\": [{\"type\": \"Point\"}, {\"coordinates\": [17.487652, 78.385807]}]}\n\t\t\t\t\t }, \n\t\t\t\t\t{\"maxDistance\": 5000.0}, \n\t\t\t\t\t{\"spherical\": true}, \n\t\t\t\t\t{\"distanceField\": \"distance\"}]\n\t\t\t\t}\n\t\t\t}\n\t\t]\n\n", "text": "I tried the above suggested approach.My input aggregation is this:\n{ \"near\": { \"type\": \"Point\", \"coordinates\": [ 
17.487652, 78.385807 ] }, \"maxDistance\": 5000.0, \"spherical\": true, \"distanceField\": \"distance\" }It gave me Java code which is equivalent to this Kotlin code:But I get this error Also, I see in the logs that my application code above is turning into this during runtime:Can you help me understand what’s going wrong?", "username": "manlan" }, { "code": "[ \n { \n $geoNear: { \n \"near\": { \"type\": \"Point\", \"coordinates\": [ 17.487652, 78.385807 ] }, \n \"distanceField\": \"distance\" \n \"maxDistance\": 5000.0, \n \"spherical\": true, \n } \n } \n]\nmongo", "text": "@manlan,I tried your aggregation pipeline:It works fine when run from the Compass (v 1.21.2, MongoDB v4.2.8), the mongo shell and the Java code (Java driver 3.12.2).", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to find Java driver documentation that uses $geoNear
2020-08-04T20:47:57.001Z
Unable to find Java driver documentation that uses $geoNear
2,213
null
[]
[ { "code": "", "text": "I would like to create a local realm on my mobile client device and release to production for our users. Then at some point in the future build out the sync capability to MongoDB Atlas. I read there is a default TTL of 30 days. Does this mean only the last 30 days of data will be uploaded when I set up sync in the future. Or is there a way to get all the local data to sync to the server the first time?", "username": "toor" }, { "code": "", "text": "The question is a bit unclear.If you build a MongoDB Realm app with no sync, all of your data is persisted locally and does not time out or get removed. i.e. there is no expiration date of your data.If you add Sync at a later time, that data would then Sync to the server and then it is persisted both locally and in the MongoDB Realm cloud - again, the data doesn’t expire.From then on, the data is persisted in both locations; the cloud and locally. Are you asking if you then turn sync off what happens when you turn it back on after 30 days?", "username": "Jay" }, { "code": "", "text": "I think he is asking two questions:So under what circumstances will data expire and what data expires? I am interested as at some point we will no doubt need to migrate off Realm Cloud but we don’t want any data to expire ever - we have our own archiving process for removing old data.", "username": "Duncan_Groenewald" }, { "code": "", "text": "Data does not expire. ‘Persisted’ means that is stored and continually available (locally and when sync’d locally+cloud)We have data stored in Realm (Cloud) from… 3+ years ago and it’s still there and available.Are you seeing something that would indicate data would expire or otherwise be purged? 
I want to make sure I am addressing what’s being asked.", "username": "Jay" }, { "code": "", "text": "At the “Realm Sync Docs” for 3.16.0 at Principles and \"good to know\" - Realm Sync (LEGACY) it says:The server uses a per-Realm transaction history log file to allow correct integration of changesets irrespective of the order or time when the client sends the changesets.The default time-to-live for the transaction history is 30 days. E.g. clients will have to connect at least once within that timeframe to ensure that changes are reflected. Outside of this time-frame the client will receive the most recent state from the server (“Client Reset”).I guess this applies for server to client changes mostly so if the user has multiple devices and one device doesn’t log in for over 30 days, that device would be missing some changes. Also if you have a shared realm and make the changes server side, all the users that don’t log in for 30 days would also be missing some data.", "username": "toor" } ]
Create local realm first then cloud realm later
2020-08-04T17:02:26.178Z
Create local realm first then cloud realm later
2,055
null
[]
[ { "code": "", "text": "I found this nice and quick installation here: macos - Installing MongoDB with Homebrew - Stack OverflowWorked well & quickly. For those not having homebrew yet, get it ", "username": "Natalie_Mikesova" }, { "code": "", "text": "Hi @Natalie_Mikesova,Thanks for sharing it with the community.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Install mongoDB shell over homebrew (macOS)
2020-08-04T09:22:38.822Z
Install mongoDB shell over homebrew (macOS)
1,202
null
[]
[ { "code": "", "text": "For my Lecture 2 - Quiz for Lesson 1, it does not show as completed and it did not give me an option to Try again. Can you help me?", "username": "Srikanth_Vishnuvajhala" }, { "code": "", "text": "Show the screenshot\nDoes it have unlimited attempts or a fixed number of attempts?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "unlimited number of attempts", "username": "Srikanth_Vishnuvajhala" }, { "code": "", "text": "Simply check or uncheck a box and then Try again. There is no point in letting you try again if you do not change the answer.", "username": "steevej" }, { "code": "", "text": "Thank you Steevej-1495, that worked.", "username": "Srikanth_Vishnuvajhala" }, { "code": "", "text": "Hi @Srikanth_Vishnuvajhala,I’m removing the screenshots from your post.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Quiz does not say completed
2020-08-02T20:26:27.004Z
Quiz does not say completed
1,209
https://www.mongodb.com/…1_2_1024x694.png
[ "aggregation" ]
[ { "code": "[{$project: {\n\n Stings:{$split:[\"$title\",\" \"]},\nFilter:{$eq: [{$size: {$split:[\"$title\",\" \"]}}, 1]},\n\n\n}}, {$match: { Filter:true}\n}, {$group: {\n _id: null, n: { $sum: 1 } \n}}]\n", "text": "Hi\nI’m trying to do some simple aggregation query trying to catch all movies with a 1-word title…!Annotation 2020-07-20 1141551332×904 28.2 KBattached my code:", "username": "Tal_Shainfeld" }, { "code": "db.test1.insertMany([\n { _id: 'A', title: \"Harry Potter\" },\n { _id: 'B', title: \"It\" },\n { _id: 'C', title: \"Star Wars\" },\n { _id: 'D', title: \"Punisher\" },\n]);\ndb.test1.find({\n title: {\n $not: {\n $regex: / /,\n },\n },\n});\n\ndb.test1.count({\n title: {\n $not: {\n $regex: / /,\n },\n },\n});\ndb.test1.aggregate([\n {\n $addFields: {\n title: {\n $trim: {\n input: '$title',\n }\n }\n }\n },\n {\n $match: {\n title: {\n $not: {\n $regex: / /,\n },\n },\n }\n }\n]);\ndb.test1.aggregate([\n {\n $addFields: {\n titleParts: {\n $split: [{\n $trim: {\n input: '$title'\n },\n }, ' '],\n }\n }\n },\n {\n $match: {\n titleParts: {\n $size: 1\n }\n }\n },\n {\n $project: {\n titleParts: false,\n }\n }\n]);\n", "text": "Hello, @Tal_Shainfeld! 
Welcome to the community!You did not provide any data example, so I will make one:To match/count documents with one-word title you may not use the aggregation, if you trim the titles before insert documents:If you do not trim inserted document, then you may do it during the aggregation:You can also use solution without regex, similar to what were trying to do:", "username": "slava" }, { "code": "", "text": "Hi Slava\nThanks a lot for swift replay.", "username": "Tal_Shainfeld" }, { "code": "{\n $addFields: {\n titleParts: {\n $split: [{\n $toString: '$title',\n }, ' ']\n }\n }\n}\n", "text": "Probably, some movie in your collection contains non-string title, with number (int), instead of expected string.\nYou can try to convert all incoming titles to a string type with $toString pipeline operator:Also, If your question is related to your MongoDB university courses, better to ask this in dedicated discussions for your course.", "username": "slava" }, { "code": "", "text": "Hello @Tal_Shainfeldsince this is a MongoDB University Question it would be best to answer this question in the M121 University community forum. 
There you will get help from teaching assistants and co-students which can also learn by reading and following your question.Further info: $splitCheers,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Hi Slava\nOK, Thanks a lot!", "username": "Tal_Shainfeld" }, { "code": "[{$match: {\n\n _id:\"76001652\"\n\n}}, {$unwind: \n {\n path: \"$Agent_Policy_Data\",\n\n }\n}, {$project: {\nPolicy_Id:{$concat:[\"$Agent_Policy_Data.POLICY_NO\",\" \"]}, \n dateDifference:{$toInt:{ $divide: [{ $subtract: [ new Date(),{$toDate:(\"$Agent_Policy_Data.NEXT_RENEWAL_DATE\")} ] },1000*60*60*24]}} ,\ninit:{ $lte:[{ $subtract: [ new Date(),{$toDate:(\"$Agent_Policy_Data.NEXT_RENEWAL_DATE\")} ] },30]}\n\n }}, {$match: {\n init:false\n}}, {$count: \"POLICY_VERSION_NO\"}]\n", "text": "Hi Slava\nI think there is another problem\nI have some code that runs perfectly. But when I add the last stage $count. there is an error message refer to 2 stages before (that was run perfectly!! before I add the last stage $count ), the error message said: it is wrong to subtract a date from a string. Which there is no way I do, since as you can see-before subtraction I change the string to date type with $tostring command.\nSeems that compass produces unrelated error messages when there is some problem on the last stage $count.", "username": "Tal_Shainfeld" }, { "code": "mongo$project$countmongo{ \n $project: {\n Policy_Id: { $concat:[ \"$Agent_Policy_Data.POLICY_NO\",\" \"] },\n dateDifference: { \n $toInt: { \n\t $divide: [ \n\t { $subtract: [ \n\t\t new Date(), \n\t\t { $toDate: (\"$Agent_Policy_Data.NEXT_RENEWAL_DATE\") } ] \n\t\t }, \n\t\t 10006060*24 \n\t ] \n } \n },\n init: { \n $lte: [ \n\t { $subtract: [ new Date(), { $toDate:(\"$Agent_Policy_Data.NEXT_RENEWAL_DATE\") } ] }, 30 \n\t ] \n }\n } \n },\n{ \n $count: \"POLICY_VERSION_NO\" \n}", "text": "the error message said: it is wrong to subtract a date from a stringPlease post the actual error message. 
Are you running this code from mongo shell? I tried your $project followed by the $count stages in mongo shell (see the pipeline stages below). It works fine without any errors.", "username": "Prasad_Saya" }, { "code": "", "text": "\nSame code on NoSQLbuster is running perfect, but get an error massage on Mong Compass…\nThanks", "username": "Tal_Shainfeld" }, { "code": "", "text": "What are the versions of the MongoDB and the Compass you are using?", "username": "Prasad_Saya" }, { "code": "", "text": "Hi\nCompass 1.21.2\nMongo 4.0.9\nTX", "username": "Tal_Shainfeld" }, { "code": "", "text": "I am also using the same Compass version and MongoDB v4.2.8. I tried the the same aggregation stages posted in my earlier post, and it works without any errors - in Compass.Can you provide a sample input document you are working with?", "username": "Prasad_Saya" }, { "code": "", "text": "Sorry mongo 4.2 the shell is 4.0.9", "username": "Tal_Shainfeld" }, { "code": "{\n\t\"_id\" : \"76000361\",\n\t\"Agent_Summary\" : [\n\t\t{\n\t\t\t\"NO_OF_POLICIES\" : \"1\",\n\t\t\t\"COMMISSION_BOOKED_FYP\" : \"73.14300000000000000000000000000000000000\",\n\t\t\t\"MONTH\" : \"July\",\n\t\t\t\"GWP_NEW_BUSINESS\" : \"3483.00000000000000000000000000000000000000\",\n\t\t\t\"YEAR\" : \"2018\",\n\t\t\t\"GWP_RENEWAL\" : \"0E-38\",\n\t\t\t\"GWP_ACTUAL\" : \"3483.00000000000000000000000000000000000000\",\n\t\t\t\"COMMISSION_BOOKED_RYP\" : \"0E-38\"\n\t\t},\n\t\t{\n\t\t\t\"NO_OF_POLICIES\" : \"1\",\n\t\t\t\"COMMISSION_BOOKED_FYP\" : \"0E-38\",\n\t\t\t\"MONTH\" : \"May\",\n\t\t\t\"GWP_NEW_BUSINESS\" : \"0E-38\",\n\t\t\t\"YEAR\" : \"2018\",\n\t\t\t\"GWP_RENEWAL\" : \"0E-38\",\n\t\t\t\"COMMISSION_BOOKED_RYP\" : \"0E-38\"\n\t\t},\n\t\n\t\t{\n\t\t\t\"NO_OF_POLICIES\" : \"2\",\n\t\t\t\"COMMISSION_BOOKED_FYP\" : \"0E-38\",\n\t\t\t\"MONTH\" : \"April\",\n\t\t\t\"GWP_NEW_BUSINESS\" : \"0E-38\",\n\t\t\t\"YEAR\" : \"2020\",\n\t\t\t\"GWP_RENEWAL\" : \"0E-38\",\n\t\t\t\"COMMISSION_BOOKED_RYP\" : 
\"0E-38\"\n\t\t},\n\t\t{\n\t\t\t\"NO_OF_POLICIES\" : \"1\",\n\t\t\t\"COMMISSION_BOOKED_FYP\" : \"0E-38\",\n\t\t\t\"MONTH\" : \"March\",\n\t\t\t\"GWP_NEW_BUSINESS\" : \"0E-38\",\n\t\t\t\"YEAR\" : \"2020\",\n\t\t\t\"GWP_RENEWAL\" : \"0E-38\",\n\t\t\t\"COMMISSION_BOOKED_RYP\" : \"0E-38\"\n\t\t}\n\t],\n\t\"Agent_Policy_Data\" : [\n\t \t\n\t\t{\n\t\t\t\"VERSION_EFF_TO_DATE\" : \"2019-07-27 00:00:00.000000\",\n\t\t\t\"POLICY_STATUS\" : \"678\",\n\t\t\t\"NEXT_RENEWAL_DATE\" : \"2020-07-25 00:00:00.000000\",\n\t\t\t\"POLICY_NO\" : \"7620482\",\n\t\t\t\"POLICY_END_DATE\" : \"2019-07-27 00:00:00.000000\",\n\t\t\t\"POLICY_VERSION_NO\" : \"7639348\",\n\t\t\t\"FIRST_ISSUE_DATE\" : \"2018-07-28 00:00:00.000000\",\n\t\t\t\"VERSION_EFF_FROM_DATE\" : \"2018-07-28 00:00:00.000000\",\n\t\t\t\"ANNUALIZED_PREMIUM\" : \"3000.00000000000000000000000000000000000000\",\n\t\t\t\"RENEWED_IND\" : \"Renewed\",\n\t\t\t\"PREMIUM_AMOUNT\" : \"300.30000000000000000000000000000000000000\",\n\t\t\t\"BUSINESS_TYPE\" : \"New Business\",\n\t\t\t\"RENEWAL_NUMBER\" : \"2\",\n\t\t\t\"LAST_RENEWED_DATE\" : \"2019-06-27 00:00:00.000000\",\n\t\t\t\"COMMISSION_AMOUNT\" : \"73.14300000000000000000000000000000000000\",\n\t\t\t\"POLICY_START_DATE\" : \"2018-07-28 00:00:00.000000\"\n\t\t},\n\t\t\n\t\t{\n\t\t\t\"VERSION_EFF_TO_DATE\" : \"2019-09-04 00:00:00.000000\",\n\t\t\t\"POLICY_STATUS\" : \"456\",\n\t\t\t\"BUSINESS_TYPE\" : \"New Business\",\n\t\t\t\"POLICY_NO\" : \"5615475\",\n\t\t\t\"POLICY_END_DATE\" : \"2019-09-04 00:00:00.000000\",\n\t\t\t\"POLICY_VERSION_NO\" : \"5630854\",\n\t\t\t\"FIRST_ISSUE_DATE\" : \"2018-09-05 00:00:00.000000\",\n\t\t\t\"POLICY_START_DATE\" : \"2018-09-05 00:00:00.000000\",\n\t\t\t\"VERSION_EFF_FROM_DATE\" : \"2018-09-05 00:00:00.000000\"\n\t\t},\n\t\t{\n\t\t\t\"VERSION_EFF_TO_DATE\" : \"2019-09-04 00:00:00.000000\",\n\t\t\t\"POLICY_STATUS\" : \"456\",\n\t\t\t\"BUSINESS_TYPE\" : \"Endorsement\",\n\t\t\t\"POLICY_NO\" : \"5615475\",\n\t\t\t\"POLICY_END_DATE\" : \"2019-09-04 
00:00:00.000000\",\n\t\t\t\"POLICY_VERSION_NO\" : \"5630855\",\n\t\t\t\"FIRST_ISSUE_DATE\" : \"2018-09-05 00:00:00.000000\",\n\t\t\t\"POLICY_START_DATE\" : \"2018-09-05 00:00:00.000000\",\n\t\t\t\"VERSION_EFF_FROM_DATE\" : \"2018-09-05 00:00:00.000000\"\n\t\t},\n\t\t{\n\t\t\t\"VERSION_EFF_TO_DATE\" : \"2020-09-01 00:00:00.000000\",\n\t\t\t\"POLICY_STATUS\" : \"456\",\n\t\t\t\"NEXT_RENEWAL_DATE\" : \"2021-08-31 00:00:00.000000\",\n\t\t\t\"POLICY_NO\" : \"7605481\",\n\t\t\t\"POLICY_END_DATE\" : \"2020-09-01 00:00:00.000000\",\n\t\t\t\"POLICY_VERSION_NO\" : \"7610930\",\n\t\t\t\"FIRST_ISSUE_DATE\" : \"2019-09-03 00:00:00.000000\",\n\t\t\t\"VERSION_EFF_FROM_DATE\" : \"2019-09-03 00:00:00.000000\",\n\t\t\t\"ANNUALIZED_PREMIUM\" : \"5000.00000000000000000000000000000000000000\",\n\t\t\t\"RENEWED_IND\" : \"Renewed\",\n\t\t\t\"PREMIUM_AMOUNT\" : \"559.70000000000000000000000000000000000000\",\n\t\t\t\"BUSINESS_TYPE\" : \"Endorsement\",\n\t\t\t\"RENEWAL_NUMBER\" : \"2\",\n\t\t\t\"LAST_RENEWED_DATE\" : \"2020-08-03 00:00:00.000000\",\n\t\t\t\"COMMISSION_AMOUNT\" : \"117.53700000000000000000000000000000000000\",\n\t\t\t\"POLICY_START_DATE\" : \"2019-09-03 00:00:00.000000\"\n\t\t},\n\t\t{\n\t\t\t\"VERSION_EFF_TO_DATE\" : \"2019-07-27 00:00:00.000000\",\n\t\t\t\"POLICY_STATUS\" : \"456\",\n\t\t\t\"NEXT_RENEWAL_DATE\" : \"2020-07-25 00:00:00.000000\",\n\t\t\t\"POLICY_NO\" : \"7620482\",\n\t\t\t\"POLICY_END_DATE\" : \"2019-07-27 00:00:00.000000\",\n\t\t\t\"POLICY_VERSION_NO\" : \"7639348\",\n\t\t\t\"FIRST_ISSUE_DATE\" : \"2018-07-28 00:00:00.000000\",\n\t\t\t\"VERSION_EFF_FROM_DATE\" : \"2018-07-28 00:00:00.000000\",\n\t\t\t\"ANNUALIZED_PREMIUM\" : \"3000.00000000000000000000000000000000000000\",\n\t\t\t\"RENEWED_IND\" : \"Renewed\",\n\t\t\t\"PREMIUM_AMOUNT\" : \"348.30000000000000000000000000000000000000\",\n\t\t\t\"BUSINESS_TYPE\" : \"New Business\",\n\t\t\t\"RENEWAL_NUMBER\" : \"2\",\n\t\t\t\"LAST_RENEWED_DATE\" : \"2019-06-27 00:00:00.000000\",\n\t\t\t\"COMMISSION_AMOUNT\" : 
\"73.14300000000000000000000000000000000000\",\n\t\t\t\"POLICY_START_DATE\" : \"2018-07-28 00:00:00.000000\"\n\t\t}\n\t\t\n\t]\n}\n", "text": "", "username": "Tal_Shainfeld" }, { "code": "mongodb.collection.aggregate([\n{ $match: { _id : \"76000361\" } },\n{ $unwind: { path: \"$Agent_Policy_Data\" } },\n{ \n $project: {\n Policy_Id: { $concat:[ \"$Agent_Policy_Data.POLICY_NO\",\" \"] },\n dateDifference: { \n $toInt: { \n\t $divide: [ \n\t { $subtract: [ \n\t\t new Date(), \n\t\t { $toDate: (\"$Agent_Policy_Data.NEXT_RENEWAL_DATE\") } ] \n\t\t }, \n\t\t 10006060*24 \n\t ] \n } \n },\n init: { \n $lte: [ \n\t { $subtract: [ new Date(), { $toDate:(\"$Agent_Policy_Data.NEXT_RENEWAL_DATE\") } ] }, 30 \n\t ] \n }\n } \n },\n{ \n $count: \"POLICY_VERSION_NO\" \n}\n] )", "text": "@Tal_Shainfeld,The aggregation posted here works fine (both in the mongo shell and the Compass). I am including it here. Try to use this one and see what the result is. It is possible, you are using some quotes (or special characters in your aggregation) which are not compatible.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi\nNo, I run the code as I sent to you. Till the stage I’m using $count everything run perfect , But when add that stage I got this error .I don’t get the error run the code on NoSQLBooster .\nBTW I tried to use\n{ $group: { _id: null, myCount: { $sum: 1 } } }, instead the $count - get the same error !!!\nTX\nTal", "username": "Tal_Shainfeld" } ]
$split error message: requires an expression that evaluates to a string
2020-08-02T20:39:55.926Z
$split error message: requires an expression that evaluates to a string
4,486
null
[]
[ { "code": "mongod2020-06-22T10:18:36.490+0530 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2020-06-22T10:18:36.708+0530 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2020-06-22T10:18:36.714+0530 I CONTROL [initandlisten] MongoDB starting : pid=12739 port=27017 dbpath=/data/db 64-bit host=mikhil-HP-Laptop-15-bs0xx\n2020-06-22T10:18:36.714+0530 I CONTROL [initandlisten] db version v4.2.8\n2020-06-22T10:18:36.714+0530 I CONTROL [initandlisten] git version: 43d25964249164d76d5e04dd6cf38f6111e21f5f\n2020-06-22T10:18:36.714+0530 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018\n2020-06-22T10:18:36.714+0530 I CONTROL [initandlisten] allocator: tcmalloc\n2020-06-22T10:18:36.714+0530 I CONTROL [initandlisten] modules: none\n2020-06-22T10:18:36.714+0530 I CONTROL [initandlisten] build environment:\n2020-06-22T10:18:36.714+0530 I CONTROL [initandlisten] distmod: ubuntu1804\n2020-06-22T10:18:36.714+0530 I CONTROL [initandlisten] distarch: x86_64\n2020-06-22T10:18:36.714+0530 I CONTROL [initandlisten] target_arch: x86_64\n2020-06-22T10:18:36.715+0530 I CONTROL [initandlisten] options: {}\n2020-06-22T10:18:36.715+0530 E NETWORK [initandlisten] Failed to unlink socket file /tmp/mongodb-27017.sock Operation not permitted\n2020-06-22T10:18:36.715+0530 F - [initandlisten] Fatal Assertion 40486 at src/mongo/transport/transport_layer_asio.cpp 684\n2020-06-22T10:18:36.715+0530 F - [initandlisten] \n\n***aborting after fassert() failure\n/tmp/mongodb-27017.socksudo systemctl start mongod", "text": "Whenever I run the command mongod, I am getting this output.I tried deleting the file /tmp/mongodb-27017.sock and after restarting the mongod by sudo systemctl start mongod several times. However it is happening again and again. 
What should I do?", "username": "MIKHIL_MOHAN_C" }, { "code": "", "text": "I suspect that a previous instance of mongod was started with the root user. As a result the file is now owned by root and cannot be deleted by your current user or the mongod user.", "username": "steevej" }, { "code": "", "text": "I am having the same problem, do you have any suggestions?", "username": "Danh_Le" }, { "code": "", "text": "Same issue for me… After a macOS update, I got this issue. I tried a lot of ways; finally this saved me.A tricky problem due to Catalina’s read-only root folder\n", "username": "Gwt_Poe" } ]
Mongod command is aborting after fassert() failure
2020-06-22T05:01:15.636Z
Mongod command is aborting after fassert() failure
10,618
https://www.mongodb.com/…41d52d886246.png
[]
[ { "code": "", "text": "When I try to load any of my pages with the charts embed sdk on it , I get this error.", "username": "johan_potgieter" }, { "code": "", "text": "Looking into it. We didn’t make any changes but something seems wrong. Stay tuned.", "username": "tomhollander" }, { "code": "", "text": "Looks like NPM itself has had an outage: https://status.npmjs.org/\nThe Charts SDK is fine, but if you rely on unpkg it will pull it from NPM each time. It looks like they are on top of the issue so hopefully it will recover soon.\nI’m not sure how likely it is that this will reoccur, but you can avoid the issue by installing the package as a part of your app using npm/parcel.Tom", "username": "tomhollander" }, { "code": "", "text": "Thanks for the quick response Tom. I will watch their site. I will look at installing it as dependency.", "username": "johan_potgieter" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is Charts embedding SDK down?
2020-08-06T09:33:38.879Z
Is Charts embedding SDK down?
1,898
null
[ "xamarin" ]
[ { "code": "", "text": "I would like to performance test a mobile application that uses Realm at the backend. I am using WebLOAD as performance tool. How WebLOAD works is as follows:Now when I try to record the application under test, it is not working. And when I contacted WebLOAD their reply was -\"Your app does not respect the system proxySome apps do not honor the system defined proxy, and do direct connection regardless.Google “xamarin http proxy” and you will see many results about this - you may need to ask your developers to make sure they honor the system proxy, looks like it requires extra code to work.\"When I contacted my dev team their reply was - \" We are using realm for our synchronization, we don’t have any control over how that library is sending out it’s traffic.Basically on our end, we push our c# objects into a ‘realm’ and this manages offline sync as well as syncing with their database server \"Could you please suggest any configuration change in Realm that will enable us to fix the issue?", "username": "Diwakar_Devapalan" }, { "code": "", "text": "Anyone has any suggestions on this post? Thanks in advance.", "username": "Diwakar_Devapalan" }, { "code": "", "text": "@Diwakar_Devapalan Realm Sync uses its own proprietary syncing protocol so it will not integrate into web load testing tools. If you’d like to run a load test to stress test and simulate many mobile devices for Realm Sync you could use a non-mobile SDK like node.js - https://docs.mongodb.com/realm/node/to spin up a number of clients for your specific use case as part of a Kubernetes job for instance. You can create a RealmJS node app that either just downloads the data spread over many concurrent clients or makes writes from the client and measures propagation time. 
This exercises the same code paths without needing to spin up many mobile clients because the Realm Sync client and the Realm Core database are written in C++ and are the same across all wrapper bindings like Java for Android or Swift for iOS.", "username": "Ian_Ward" } ]
Performance testing of mobile app that uses Realm
2020-08-03T14:20:15.967Z
Performance testing of mobile app that uses Realm
3,539
null
[ "upgrading" ]
[ { "code": "dpkg: error processing archive /tmp/apt-dpkg-install-QQya5y/0-mongodb-org-database-tools-extra_4.4.0_amd64.deb (--unpack):\n[playdb3] out: trying to overwrite '/usr/bin/install_compass', which is also in package mongodb-org-tools 4.2.8\n", "text": "On Ubuntu 18.04 when upgrading 4.2.8 to 4.4.0 I get the error above. So do I have to manually remove 4.2.8 before upgrading?", "username": "Andrew_Wason" }, { "code": "mongodb-org-database-tools-extramongodb-org-tools", "text": "Hi @Andrew_Wason,I believe that the compass package was moved to a package of its own called mongodb-org-database-tools-extra. I assume this might be some conflict. So I would try to remove mongodb-org-tools and retry.Let me know if you have succeeded.\nBest regards\nPavel", "username": "Pavel_Duchovny" } ]
Upgrade 4.2.8 to 4.4.0
2020-08-05T20:14:35.355Z
Upgrade 4.2.8 to 4.4.0
3,216
null
[ "dot-net" ]
[ { "code": "private void EnsureMessageSizeIsValid(int messageSize)\n{\n var maxMessageSize = _description?.MaxMessageSize ?? 48000000;\n\n if (messageSize < 0 || messageSize > maxMessageSize)\n {\n throw new FormatException(\"The size of the message is invalid.\");\n }\n}\n public int MaxMessageSize\n {\n get\n {\n BsonValue value;\n if (_wrapped.TryGetValue(\"maxMessageSizeBytes\", out value))\n {\n return value.ToInt32();\n }\n\n return Math.Max(MaxDocumentSize + 1024, 16000000);\n }\n }\n", "text": "I recently updated from the MongoDB Driver from 2.7.2 to 2.10.4. After updating some of my larger queries throws this exception ‘The size of the message is invalid’. After digging through the Net driver source I found that more validation was added around July 1 2019 (CSHARP-1501: Drivers must raise an error if response messageLength > …). I’ve been trying to figure out how to set the MaxMessageSize. I found some information that makes me think its coming from the server (Azure CosmosDB). Is it possible to change this value?I’ve checked the size of some data coming back from the older version and it’s only ~19 megs.Here’s the method that throws the exception:\nsrc/MongoDB.Driver.Core/Core/Connections/BinaryConnection.csI’ve followed the _description.MaxMessageSize back to the src/MongoDB.Driver.Core/Core/Connections/IsMasterResult.csAny help would be greatly appreciated.", "username": "Jamie_Etcheson" }, { "code": "", "text": "I faced the same problem. Downgrading version of MongoDB Driver to 2.7.2 “solves” the issue. I also guess that CosmoDB is part of this problem/issue, however did not find out how to solve it on the server site, yet.", "username": "Marco_Lewandowski" }, { "code": "maxMessageSizeBytesisMastermongodb.isMaster().maxMessageSizeBytes", "text": "Welcome to the community @Jamie_Etcheson and @Marco_Lewandowski!The maxMessageSizeBytes is a server value determined during the driver connection handshake via the isMaster command. 
You can check the server value in the mongo shell via db.isMaster().maxMessageSizeBytes, but drivers are expected to follow the server response rather than overriding this value. The default value for MongoDB servers is 48000000 bytes.The intent of the exception is to guard against sending an invalid message size, so it sounds like this might be an issue with Cosmos DB. I suggest following up with Cosmos support.Please note that Cosmos’ API is an emulation of MongoDB which differs in features, compatibility, and implementation from an actual MongoDB deployment. Cosmos’ suggestion of API version support (eg 3.6) is referring to the wire protocol rather than the full MongoDB feature set for that version. Official MongoDB drivers (like .NET) are only tested against actual MongoDB deployments.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I also met this issue, actually it’s a new validation on message size introduced from C# driver 2.9,\nhttps://statics.teams.cdn.office.net/evergreen-assets/safelinks/1/atp-safelinks.htmlCurrently the only safe solution is downgrading the driver to 2.8-", "username": "Jiaxing_Song" } ]
.Net Driver - 'The size of the message is invalid' when querying many small documents
2020-07-15T23:06:23.896Z
.Net Driver - ‘The size of the message is invalid’ when querying many small documents
3,460
null
[ "connector-for-bi" ]
[ { "code": "[root@RH-TABLEAU-04 mysql]# isql -v MongoDBODBC\n\n[08S01][unixODBC][MySQL][ODBC 1.4(w) Driver]Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)\n\n[ISQL]ERROR: Could not SQLConnect\n[MongoDBODBC]\n\nDescription=ODBC for MongoDB BI Connector\n\nDriver=/usr/local/lib/libmdbodbcw.so\n\n#Trace=Yes\n\nTraceFile=stderr\n\nReadOnly=yes\n\nSERVER=10.210.227.39\n\nPORT=3606\n\nUSER=mms-automation\n\nPASSWORD=g@d3C\n\nDATABASE=CLM-PROD\n\nSOCKET = /var/lib/mysql/mysql.sock\n[root@RH-TABLEAU-04 tmp]# mysql --host 10.210.227.39 --user='mms-automation'\n\nERROR 2003 (HY000): Can't connect to MySQL server on '10.210.227.39' (111)\n", "text": "Hello Team,I am getting error below after configuring MongoAuth plugin\nmysql80-community-release-el7-1.noarch.rpm has been installed.ODBC entry:Kindly advise.Joe", "username": "Roshan_John" }, { "code": "", "text": "Is your mysql up and running\nThe path of .sock file in your isql command is not matching with that of odbc config file\n/tmp/mysql.sock vs /var/lib/mysql/mysql.sock\nMay be some link is missing or wrong configuration", "username": "Ramachandra_Tummala" } ]
MongoDB ODBC connector - BI connector
2020-08-05T18:31:36.816Z
MongoDB ODBC connector - BI connector
2,471
null
[ "dot-net", "field-encryption" ]
[ { "code": "", "text": "There is an ETA for Client-Side Field Level Encryption for Linux in C# Drivers? I need it! Thanks", "username": "Giorgio_Tresoldi" }, { "code": "", "text": "Hi @Giorgio_Tresoldi, and welcome to the forum,There is an ETA for Client-Side Field Level Encryption for Linux in C# Drivers?This should work with MongoDB .NET/C# driver that is compatible with v4.2 (version 2.10+). See github.com/mongodb-labs/field-level-encryption-sandbox/c-sharp for an example code.See also Client Side Field Level Encryption Guide.If you’re encountering an issue trying to perform CSFLE in Linux, could you provide:Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Here Client-Side Encryption are specified that “Client-side field level encryption is supported only on Windows.” but my project works on linux docker containers.", "username": "Giorgio_Tresoldi" }, { "code": "", "text": "Hi @Giorgio_Tresoldi,specified that “Client-side field level encryption is supported only on Windows.” but my project works on linux docker containers.Unfortunately you’re correct, although there is a work around there are some tasks that needed to be completed. This is currently scheduled to be worked on, please feel free to add yourself as a watcher or up-vote the issue tracker CSHARP-2715: FLE on Linux/Mac to receive notifications for progress on the ticket.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "@wan Are there any plans to release this functionality? When?\nWhat is the contour output currently?My application works on Linux Container also.", "username": "Filipe_Nonato_de_Fre" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Client-Side Field Level Encryption using C# in Linux
2020-06-23T18:05:57.041Z
Client-Side Field Level Encryption using C# in Linux
2,974
null
[]
[ { "code": "", "text": "I want to upsert a document with timestamp value of now + 2 secondsRight now, I am doingdb.machine_status.update({“private_ip”: “’”${private_ip}\"’\"}, {“private_ip”: “10.0.0.4”, “public_ip”: “1.2.3.4”, “state”: “IDLE”, “updated_time”: new Date()}, {“upsert”: true})But this is inserting current timestampI want current timestamp + 2 seconds, how do I do it ?", "username": "Rahul_Bansal" }, { "code": "var dateToInsert = new Date(\n Date.now() + 2 * 1000\n);\n\ndb.test1.updateOne(\n {\n privateIp: '<ip>',\n },\n {\n $set: {\n updateTime: dateToInsert,\n }\n },\n {\n upsert: true,\n }\n);\n", "text": "Hello, @Rahul_Bansal! Welcome to the community!You need to calculate your new date at your application level:", "username": "slava" }, { "code": "", "text": "Oh, okay… Got it.Thanks for the welcome @slava\nHope you’re doing well", "username": "Rahul_Bansal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Add in current timestamp while upsert
2020-08-05T18:31:39.417Z
Add in current timestamp while upsert
4,617
null
[]
[ { "code": "", "text": "Hi,\nOn my 1st (original) Ubuntu 20.04 Dev Server I have Mongodb installed and use nodejs to provide a web application.I now want to create a fresh server (2nd server) installed Mongodb using this guide : https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu and all went well.However, the two server use different commands to start, stop etc and I don’t understand why ? Is it related to one being a daemon and the other being a DB ?\n1st server - MongoDB shell version v3.6.8\nsystemctl start mongodbroot@ubuntu-s-1vcpu01:/# systemctl status mongodb\n● mongodb.service - An object/document-oriented database\nLoaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)\nActive: active (running) since Wed 2020-08-05 19:54:47 UTC; 53min ago\nDocs: man:mongod(1)\nMain PID: 1250559 (mongod)\nTasks: 26 (limit: 1137)\nMemory: 133.5M\nCGroup: /system.slice/mongodb.service\n└─1250559 /usr/bin/mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf2nd server - MongoDB shell version v4.4.0\nsystemctl start mongod (no ‘b’ at end of command)root@ubuntu1:~# systemctl status mongod\n● mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\nActive: active (running) since Wed 2020-08-05 20:45:54 UTC; 21s ago\nDocs: https://docs.mongodb.org/manual\nMain PID: 36251 (mongod)\nMemory: 162.6M\nCGroup: /system.slice/mongod.service\n└─36251 /usr/bin/mongod --config /etc/mongod.confThanks", "username": "Jon_C" }, { "code": "mongodb-orgmongodmongodmongodmongodb.service", "text": "However, the two server use different commands to start, stop etc and I don’t understand why ? 
Is it related to one being a daemon and the other being a DB ?Hi @Jon_C,The difference is that your 3.6 install was created using Ubuntu’s MongoDB package, while your 4.4 install is using the official mongodb-org package.Both are service definitions for managing mongod processes, but Ubuntu named their service after the product while the official packages name the service after the mongod binary. The Ubuntu packages are older MongoDB releases with a few different configuration defaults (service name and config file path) from the MongoDB documentation.I recommend installing the official packages to get the latest release versions and support. Attempted installation of the two different packages should conflict since both provide mongod, so I suspect your mongodb.service definition remained from a previous installation that was removed.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb and mongod, which to choose for web app
2020-08-05T20:14:32.588Z
Mongodb and mongod, which to choose for web app
3,096
null
[ "aggregation" ]
[ { "code": "{\n \"_id\": { \"$oid\": \"5f05e1d13e0f6637739e215b\" },\n \"testReport\": [\n {\n \"name\": \"Calcium\",\n \"value\": \"87\",\n \"slug\": \"ca\",\n \"details\": {\n \"description\": \"description....\",\n \"recommendation\": \"recommendation....\",\n \"isNormal\": false\n }\n },\n {\n \"name\": \"Magnesium\",\n \"value\": \"-98\",\n \"slug\": \"mg\",\n \"details\": {\n \"description\": \"description....\",\n \"recommendation\": \"recommendation....\",\n \"isNormal\": false\n }\n }\n ],\n\"anotherTestReport\": [\n {\n \"name\": \"Calcium\",\n \"value\": \"-60\",\n \"slug\": \"ca\",\n \"details\": {\n \"description\": \"description....\",\n \"recommendation\": \"recommendation....\",\n \"isNormal\": false\n }\n },\n {\n \"name\": \"Magnesium\",\n \"value\": \"80\",\n \"slug\": \"mg\",\n \"details\": {\n \"description\": \"description....\",\n \"recommendation\": \"recommendation....\",\n \"isNormal\": false\n }\n }\n ],\n \"patientName\": \"Patient Name\",\n \"clinicName\": \"Clinic\",\n \"gender\": \"Male\",\n \"bloodGroup\": \"A\",\n \"createdAt\": { \"$date\": \"2020-07-08T15:10:09.612Z\" },\n \"updatedAt\": { \"$date\": \"2020-07-08T15:10:09.612Z\" }\n }\n{\n \"_id\": { \"$oid\": \"5efcba7503f4693d164e651d\" },\n \"code\": \"Ca\",\n \"codeLower\": \"ca\",\n \"name\": \"Calcium\",\n \"valueFrom\": -75,\n \"valueTo\": -51,\n \"treatmentDescription\": \"description...\",\n \"isNormal\": false,\n \"gender\": \"\",\n \"recommendation\": \"recommendation...\",\n \"createdAt\": { \"$date\": \"2020-07-01T16:31:50.205Z\" },\n \"updatedAt\": { \"$date\": \"2020-07-01T16:31:50.205Z\" }\n },\n {\n \"_id\": { \"$oid\": \"5efcba7503f4693d164e651e\" },\n \"code\": \"Ca\",\n \"codeLower\": \"ca\",\n \"name\": \"Calcium\",\n \"valueFrom\": 76,\n \"valueTo\": 100,\n \"treatmentDescription\": \"description...\",\n \"isNormal\": false,\n \"gender\": \"\",\n \"recommendation\": \"recommendation...\",\n \"createdAt\": { \"$date\": \"2020-07-01T16:31:50.205Z\" },\n 
\"updatedAt\": { \"$date\": \"2020-07-01T16:31:50.205Z\" }\n },\n {\n \"_id\": { \"$oid\": \"5efcba7603f4693d164e65bb\" }, \n \"code\": \"Mg\",\n \"codeLower\": \"mg\",\n \"name\": \"Magnesium\",\n \"valueFrom\": -100,\n \"valueTo\": -76,\n \"treatmentDescription\": \"description...\",\n \"isNormal\": false,\n \"gender\": \"\",\n \"recommendation\": \"recommendation...\",\n \"createdAt\": { \"$date\": \"2020-07-01T16:31:50.205Z\" },\n \"updatedAt\": { \"$date\": \"2020-07-01T16:31:50.205Z\" }\n },\n {\n \"_id\": { \"$oid\": \"5efcba7503f4693d164e6550\" },\n \"code\": \"Mg\",\n \"codeLower\": \"mg\",\n \"name\": \"Magnesium\",\n \"valueFrom\": 76,\n \"valueTo\": 100,\n \"treatmentDescription\": \"description...\",\n \"isNormal\": false,\n \"gender\": \"\",\n \"recommendation\": \"recommendation...\",\n \"createdAt\": { \"$date\": \"2020-07-01T16:31:50.205Z\" },\n \"updatedAt\": { \"$date\": \"2020-07-01T16:31:50.205Z\" }\n }\ndb.reports.aggregate([\n {\n $match: {\n _id: ObjectId(\"5f05e1d13e0f6637739e215b\")\n }\n },\n {\n $unwind: {\n path: \"$testReport\"\n }\n },\n {\n $lookup: {\n from: \"setup\",\n \"let\": {\n testValue: {\n $toInt: \"$testReport.value\"\n },\n testName: \"$testReport.name\",\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [{\n \"$eq\": [\n \"$name\",\n \"$$testName\"\n ]\n },\n {\n \"$gte\": [\n \"$valueTo\",\n \"$$testValue\"\n ]\n },\n {\n \"$lte\": [\n \"$valueFrom\",\n \"$$testValue\"\n ]\n }\n ]\n }\n }\n }\n ],\n as: \"setupIds\"\n }\n },\n {\n $group: {\n _id: \"$_id\",\n patientName: {\n $first: \"$patientName\"\n },\n clinicName: {\n $first: \"$clinicName\"\n },\n gender: {\n $first: \"$gender\"\n },\n bloodGroup: {\n $first: \"$bloodGroup\"\n },\n createdAt: {\n $first: \"$createdAt\"\n },\n updatedAt: {\n $first: \"$updatedAt\"\n },\n setupIds: {\n $addToSet: \"$setupIds._id\"\n }\n }\n },\n {\n $addFields: {\n setupIds: {\n $reduce: {\n input: \"$setupIds\",\n initialValue: [],\n \"in\": {\n $setUnion: [\n \"$$this\",\n 
\"$$value\"\n ]\n }\n }\n }\n }\n }\n{ $merge: { into: \"updatedReports\" } },\n])\n", "text": "Hello there,\nI have a collection reports as follows:and another collection setupsI wanted to search the value from reports collection and check whether the value is in range from the setups collection and return the _id and add the returned _ids in setupIds field on reports collection.I tried with the following query:It’s working as expected. A new collection is added with a field setupIds. I again tried to run the same query but replacing the testReport with anotherTestReport on $unwind and $lookup, hoping new ids will be appended in setupIds. Instead of appending, it replaced the previous ids.Is there any way that the new values will be appended?Thanks.", "username": "tushark" }, { "code": "db.teams.insertMany([\n { _id: 'T1', country: 'France' },\n { _id: 'T2', country: 'Spain' },\n]);\nplayersIdsdb.teams.aggregate([\n {\n // match part of existing documents\n $match: {\n _id: 'T1'\n },\n },\n {\n // calculate ids of players somehow\n $project: {\n _id: null,\n playersIds: ['P1', 'P2']\n }\n },\n {\n $merge: {\n into: 'output',\n whenMatched: [\n // use this pipeline to define merging behaviour\n {\n $addFields: {\n differentIds: {\n // concat current values with new ones\n $concatArrays: [\n '$playersIds',\n {\n // detect which values are new\n $setDifference: ['$$new.playersIds', '$playersIds']\n }\n ],\n }\n }\n }\n ]\n }\n }\n]);\n{ \"_id\" : null, \"playersIds\" : [ \"P1\", \"P2\" ] }\noutput{ \"_id\" : null, \"playersIds\" : [ \"P1\", \"P2\", \"P3\", \"P4\" ] }\n", "text": "Welcome to the forum, @tushark !Let me simplify your case and provide you some example, so it would be easier for me to explain and for you - to understand.Assume, you have this dataset:And your plain is to:Here is how it can be achieved with an aggregation:The output will be:Then, if you match another team (let’s say “T2” team) and join it’s players (let’s assume their ids are: P3, P4), the 
document in the output collection will be updated and will look like this:", "username": "slava" } ]
Aggregate unwind and lookup query replacing the data instead of appending
2020-08-05T17:20:30.711Z
Aggregate unwind and lookup query replacing the data instead of appending
7,881
null
[]
[ { "code": "", "text": "There’s an error on this page which says : * Australia : Mumbai", "username": "anjanesh" }, { "code": "", "text": "Hi @anjanesh,Thanks for highlighting I will forward to our Marketing team!Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This is fixed! Thanks for bringing it to our attention.", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mumbai is not in Australia
2020-08-05T13:09:31.706Z
Mumbai is not in Australia
3,387
null
[ "atlas-triggers" ]
[ { "code": "", "text": "Hi, I have a M10 cluster with some collections, one of them is called test, so, I have some triggers to some collections, I have 4 and they are working good, but if I add a five one, this doesn’t work, I checked and it’s enabled, this is linked to my test cluster to my test collection, and the collection, cluster and database names are correct, in the code I just have a console.log and the trigger is saved, but when I insert a row in test collection the trigger is not activated, it doesn’t appear in the trigger logs, I test with the other triggers and all are working, just the five one isn’t, I tried to delete it and create it again, but it is not working yet.\nDoes someone know why?", "username": "Sistemas_Informatico" }, { "code": "", "text": "Hi @Sistemas_Informatico,The only limitations with triggers in an application is :MongoDB Realm limits the execution of Trigger Functions to a rate of 1000 executions per second across all Triggers in an application. If additional Triggers fire beyond this threshold, MongoDB Realm adds their associated Function calls to a queue and executes the Function calls once capacity becomes available.Database Triggers with event ordering enabled are an exception to this rule. Each ordered Trigger processes events in order, waiting for the previous event execution to complete before handling the next event. Therefore, only one execution of a particular ordered Trigger executes at any given timeCan you share the link to your application?Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Sistemas_Informatico,As we found on Realm support the command which was used to update the document via Data Explorer was “replace” but the 5th trigger was not configured to be fired on those.Mystery solved Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
Triggers not working when there are more of 4
2020-07-31T21:32:23.720Z
Triggers not working when there are more of 4
2,864
null
[]
[ { "code": "", "text": "I have a MongoDB database which includes names and addresses, and I need to change the spelling of the last name for the whole people who have the last name “Mik” to “Mike”.How can do that in one command?", "username": "Emad_Omari" }, { "code": "", "text": "Hello, @Emad_Omari! Welcome to this community!Please, provide the sample document from your collection.", "username": "slava" } ]
How can I change many documents in one command?
2020-08-05T13:21:04.115Z
How can I change many documents in one command?
1,547
null
[]
[ { "code": "", "text": "Hi,\nI tried importing a json file to my Mongodb atlas by running a mongoimport… but got a message that states: zsh: command not found: mongodb. My Mongodb version is 4.4.0, My OS is Catalina. When I ran ls “$(which mongo | sed ‘s/mongo//’)” | grep mongo, I got a list of mongo, mongod, mongos. I have also tried reinstalling and I keep getting the same responses. Please help.", "username": "Eze_Amadi" }, { "code": "mongoimport", "text": "Hello @Eze_Amadi,You can download the MongoDB tools separately at Download Database Tools. As of MongoDB 4.4, the server tools are also packaged separately and have different versioning.Also, the documentation for mongoimport is found at a different location (as Database Tools), not as part from the server documentation: mongoimport.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Eze_Amadi,As Prasad noted, the Database Tools are now separately versioned and installed as of the MongoDB 4.4 server release.The macOS installation guide and MongoDB Homebrew Tap should be updated to reflect this change, including information on how to install these tools if needed.I have raised a few related improvement suggestions you can watch and upvote:DOCS-13806: Add note on installing Database Tools to the 4.4 macOS installation guideSERVER-50089: MongoDB Homebrew Tap should include information on installing Database ToolsIn the interim, you can download the Database Tools via the MongoDB Download Centre as Prasad suggested.For more background on this change, please see Separating Database Tools from MongoDB Server (July, 2020).Regards,\nStennie", "username": "Stennie_X" }, { "code": "brew tap mongodb/brew\nbrew install mongodb-database-tools\n", "text": "SERVER-50089: MongoDB Homebrew Tap should include information on installing Database ToolsI just noticed there is already a pull request in progress to update the Homebrew README.The following worked to install the latest version of the Database 
Tools:Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "It worked. Thanks to everyone that helped with this. I really appreciate!", "username": "Eze_Amadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongoimport command not found for MongoDB 4.4.0 server installation on macOS
2020-08-04T06:55:50.627Z
Mongoimport command not found for MongoDB 4.4.0 server installation on macOS
12,849
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to announce the release of version 1.3.6 of the MongoDB Go Driver.This release contains several bugfixes. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.3.6 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team", "username": "Isabella_Siu" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Go Driver 1.3.6 Released
2020-08-04T19:37:31.503Z
MongoDB Go Driver 1.3.6 Released
1,662
null
[]
[ { "code": "DocumentSourceLookupChangePostImage{fullDocument:updateLookup}_idshard_keyts=1, update: {a:1}, {$set:{b:1}}\nts=2, update: {a:1}, {$set:{b:2}}\nts=3, update: {a:1}, {$set:{b:3}}\nts=4, update: {a:1}, {$set:{b:4}}\n...\nts=1, update: {a:1}, {$set:{b:1}}{a:1}ts=2, update: {a:1}, {$set:{b:2}}{a:1, b:1}{a:1, b:2}{a:1, b:1}majoritymajorityavailablelocal", "text": "Change stream uses DocumentSourceLookupChangePostImage pipeline stage to send a new query to MongoD from MongoS when set {fullDocument:updateLookup}. But check out this scenario: users write so many update sentences with $set and $unset, so each one needs to send a new query to MongoD to get the new image with given _id and shard_key:When MongoS receives the change stream events of ts=1, update: {a:1}, {$set:{b:1}} from MongoD, and then send query to get the new image of {a:1}, however, when MongoD receive this query command, the ts=2, update: {a:1}, {$set:{b:2}} has already been applied, so what’s the result of new image? {a:1, b:1} or {a:1, b:2}? In my test, the result is {a:1, b:1}, but I wonder how does change stream ensure consistency?I also have another question about change stream, hope to get help here:Before MongoDB 4.2, the read concern level must be majority. But starting in MongoDB 4.2, change streams are available regardless of the majority read concern support. So my question is when the read concern level is available or local, how to ensure the correctness of the data when rollback happened? Or it just depends on the users setting, users should keep in mind different read concern level will lead to different result?", "username": "Vinllen_Chen" }, { "code": "", "text": "Hi @Vinllen_Chen,The change stream events are based on the order of documents in the oplog. 
If your updates resulted in 2 different oplog entries they should be returned as 2 events ordered by the time they are placed in the oplog.Regarding your second question I believe you are correct, in 4.2 its possible that events read with lower readconcern will be rolled back.Let me know if you have any further questions.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "DocumentSourceLookupChangePostImageDocumentSourceOplogMatch", "text": "About the first question, after i went through the source code, I wonder how does the change streams ensure consistency if I update one document several times? Because the query in DocumentSourceLookupChangePostImage stage in MongoS is different of DocumentSourceOplogMatch in MongoD, and there exists time gap.", "username": "Vinllen_Chen" } ]
Will change streams with fullDocument=updateLookup lose new image data when update very frequently?
2020-08-04T08:11:59.416Z
Will change streams with fullDocument=updateLookup lose new image data when update very frequently?
2,046
null
[ "configuration" ]
[ { "code": "", "text": "hi ,By default journal file size is 100MB, is it possible to increase or decrease this size limit?Thanks", "username": "Akshaya_Srinivasan" }, { "code": "", "text": "Hi @Akshaya_Srinivasan,The Wired Tiger journal file can grow “up” to 100Mb.I don’t see a reason that you will need to make it smaller or bigger as those perform for internal consistency and fault tolerance use.What you can potentially control is the compressor that compress data in those files to have motre journal data. However, better compression have some performance panelty as a stronger mechanism require more resources.Best\nPavel", "username": "Pavel_Duchovny" } ]
Is it possible to reduce the journal file size
2020-08-04T08:05:35.663Z
Is it possible to reduce the journal file size
2,859
null
[ "indexes" ]
[ { "code": "", "text": "I have a Timestamp field in my documents for one of the collection. Is it a good idea to have a index on the Timestamp column ? I have a requirement in which I need to retrieve the documents between the certain dates. Trying to figure out the best way to retrieve the documents.Thank you,\nJason", "username": "Jason_Widener" }, { "code": "db.collection.find( { timestamp: { $gte: <start_value>, $lte: <end_value> } } )", "text": "Hello @Jason_Widener,Is it a good idea to have a index on the Timestamp column ?Yes, it is generally a good idea to have an index on a field used in a query criteria. This is really useful when there are a large number of documents (e.g., a million) in the collection. The index will be used to run the query fast.An index created on one field is called as a Single Field Index. This index is applied on a query with condition to retrieve documents for a range of values (e.g., between two Timestamp values) efficiently.Typically, you use a query like this to get the documents for a range of values:db.collection.find( { timestamp: { $gte: <start_value>, $lte: <end_value> } } )References:", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks for your response @Prasad_Saya . Yes it is good idea to have index but is it good idea to have an index on Timestamp field/column ? Because anyways timestamp field is mostly going to be unique in all the documents. Unless two documents were inserted at exact same date and time.", "username": "Jason_Widener" }, { "code": "", "text": "is it good idea to have an index on Timestamp field/column ? Because anyways timestamp field is mostly going to be unique in all the documents. Unless two documents were inserted at exact same date and time.Timestamp is just a field in a document. You create an index on a field for the purpose of performance of queries and sort operations. 
Whether the field has a unique value or not, and the type of data (it can be a date, timestamp, string, number, etc.) doesn’t matter. Indexing and field properties are two different things. If your application requires that the field be unique, then an index of type unique can be created (see Unique Indexes).", "username": "Prasad_Saya" }, { "code": "", "text": "Awesome, thanks @Prasad_Saya", "username": "Jason_Widener" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Index on Date column
2020-08-04T22:43:21.750Z
Index on Date column
12,352
null
[]
[ { "code": "", "text": "Yes, I am a newbie in cloud.mongodb.com but not a newbie in IT.\nQuestion: After I created a cluster at AWS or Azure.\nCan I changed or replace the cluster?\nCould I use glitch as cluster provider?", "username": "Mat_Jung" }, { "code": "", "text": "It’s your choiceYou can continue with Cloud provider you have choosen\nI dont think it is possible to change/replace the cluster once created\nYou have to drop and create new cluster under new providerI think Glitch is possible as per below thread but it is provided by third party provider mlab", "username": "Ramachandra_Tummala" }, { "code": "", "text": "From what I learned by now.\nGlitch will not be able to act as Cloud provider but as database client with mongoose as library.I can see the stuff that I stored via Glitch is also visible at cloud.mongodb.com → works as designed.\nTicket can be closed.", "username": "Mat_Jung" }, { "code": "", "text": "Hi @Mat_Jung,Glitch provides hosting for applications, but is not a general hosting/infrastructure provider where you can run additional services like databases. As mentioned, you can connect a Glitch application to a database cluster hosted elsewhere. You should make sure you have appropriate security configured (user authentication, TLS/SSL network encryption, firewall/whitelist) to limit access as much as possible. Limiting access via firewall or whitelist can be more difficult from shared hosting providers which can potentially have a large (and changing) range of originating IPs.Your other topic on Relationship between Atlas, MongoDB, and Cluster Providers has some helpful context for cluster providers.I think Glitch is possible as per below thread but it is provided by third party provider mlabFYI: mLab was acquired by MongoDB in October 2018 and those users have now been migrated to MongoDB Atlas.Regards,\nStennie", "username": "Stennie_X" } ]
Glitch as cluster provider?
2020-08-01T21:41:31.400Z
Glitch as cluster provider?
2,877
null
[]
[ { "code": "", "text": "Hi there,\nWe have an application in production with API servers, developped in .Net Core. The APIs call a MongoDB cluster hosted on premise.\nWe have 10 server, and each servers contains 40 API instances.\nWe used to perform a healthcheck every 5 s on each API instance. This healthcheck simply connects to MongoDB and calls listCollections.\nEverything was working fine till MongoDB 4.0, but when we migrated to 4.2 performances decreased a lot. We found out that the healthcheck was responsible of this situation. It is like if listCollections puts some locks or something it did not do before. One we deactivated healthCheck, everything was fine.\nHowever this makes me worry about other performances issue.\nHas anyone encounter such issue when migrating to 4.2 ?\nThanks.", "username": "David_DOUSSOT" }, { "code": "listCollectionnameOnly : trueviewcollectionrs.status()", "text": "Hi @David_DOUSSOT,Does your command use a plain listCollection command or have a nameOnly : true provided.A flag to indicate whether the command should return just the collection/view names and type or return both the name and other information.Returning just the name and type ( view or collection ) does not take collection-level locks whereas returning full collection information locks each collection in the database.I believe the locking mechanism of list collection is responsible for the performance issue.In general you can consider using other commands like “isMaster” or rs.status() to perform a health check.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you for this tip Pavel !\nThere’s actually a new version of this healthcheck function which performs “listCollectionsNames” instead of “listCollections”. This could explain…\nWe’ll try and I’ll tell if it was efficient.", "username": "David_DOUSSOT" } ]
listCollections performance issue in 4.2
2020-08-03T08:26:34.869Z
listCollections performance issue in 4.2
1,621
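Pavel's nameOnly suggestion above can be sketched as the raw command documents a driver would send; a minimal, driver-agnostic illustration (the helper name is mine, not from the thread):

```python
# Sketch (not from the thread): the health-check command as a raw document.
# A plain {"listCollections": 1} returns full collection info and can take
# collection-level locks; adding nameOnly avoids that.

def health_check_command(name_only=True):
    """Build the listCollections command document for a cheap health check."""
    cmd = {"listCollections": 1}
    if name_only:
        cmd["nameOnly"] = True  # names/types only -- no per-collection locks
    return cmd

# An even cheaper liveness probe is the ping command:
ping_command = {"ping": 1}

print(health_check_command())
```

For a health check that only needs to confirm the server is reachable, the ping document is usually the lightest option of all.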
null
[ "mongoose-odm", "field-encryption" ]
[ { "code": "", "text": "My database contains 3 collections: “patients”, “therapists”, “subscriptions”.I’ve configured CSFLE on the db connection, providing a json schema which defines field level encryption only on the ‘name’ and ‘email’ fields in the ‘patients’ collection.The field level encryption / decryption on the ‘patients’ collection works as expected.However, now an unrelated aggregation query fails with “MongoError: Pipeline over an encrypted collection cannot reference additional collections.”This error occurs when executing an aggregation query on the ‘therapists’ collection, which includes a $lookup from the ‘subscriptions’ collection (not “over an encrypted collection” as the error suggests).Neither ‘therapists’ nor ‘subscriptions’ are defined in the CSFLE json schema, and are not encrypted.\nI don’t understand why should this $lookup on unencrypted collections, lead to an error.\nAccording to the documentationAutomatic client-side field level encryption supports the $lookup and\n$graphLookup only if the from collection matches the collection on\nwhich the aggregation runs against (i.e. 
self-lookup operations).While the limitation may be acceptable when dealing with the ‘patients’ collection, I don’t think it is acceptable when dealing with other, non CSFLE, collections.I know that I can create 2 separate MongoClient instances, one with CSFLE enabled, and one without, and use the non CSFLE client for the $lookup, but this would introduce extra complexity, and it doesn’t seem like a clean and reasonable solution to me.BTW, I’m using mongoose, but the behaviour is the same when using mongodb directly.Would you say this is a bug with mongodb?Any suggestions would be greatly appreciated.", "username": "Tal_Bar" }, { "code": "", "text": "That sounds like a bug - I’ve filed a ticket in our Jira bug tracking and the team will look at it.", "username": "Asya_Kamsky" }, { "code": "", "text": "Thank you very much Asya!", "username": "Tal_Bar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Automatic Client Side Field Level Encryption (CSFLE) Restricts Operations On Unencrypted Collections
2020-08-03T13:15:09.363Z
Automatic Client Side Field Level Encryption (CSFLE) Restricts Operations On Unencrypted Collections
2,982
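The two-client workaround mentioned in the thread needs a way to decide which client should run a given pipeline. A pure-Python sketch of one such routing check (the helper name and routing idea are illustrative, not a driver API):

```python
# Sketch of the two-client workaround from the thread: route any pipeline
# that references *another* collection to the plain (non-CSFLE) client.

def references_other_collection(pipeline, collection_name):
    """True if a $lookup/$graphLookup stage pulls from a different collection."""
    for stage in pipeline:
        for op in ("$lookup", "$graphLookup"):
            spec = stage.get(op)
            if spec and spec.get("from") != collection_name:
                return True
    return False

pipeline = [
    {"$match": {"active": True}},
    {"$lookup": {"from": "subscriptions", "localField": "_id",
                 "foreignField": "therapistId", "as": "subs"}},
]

# This pipeline would be sent through the non-CSFLE client:
print(references_other_collection(pipeline, "therapists"))  # True
```

This keeps the CSFLE-enabled client for everything touching the encrypted collection and only falls back to the plain client for cross-collection lookups.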
null
[]
[ { "code": "{\"name.family\" : /^christiansen/i}\n", "text": "Hello there,using mongo since approx. 2 months and so far I’m quite happy, however we stumbled upon our first real problem.\nOne of our main use cases is a query e.g. by name. By default, the server should search case insensitively and with startsWith, so I’m doing sg. like this:While this delivers the correct results, it is very inefficient. This is stated in the documentation and I can validate this by looking at the explain-stats (it uses the index but scans ALL documents).An index with a collation strength of 1 or 2 does not help my case, because then I still loose the startsWith capability.Normalizing data on the way in is not really an option, unfortunately.I’m quite baffled this does not really work, I think i must be missing something. That is something that is very easy to do in most common sql environments by using a simple LIKE qiery along with a functional index (toLowerCase or sg. similar).Thank you!", "username": "Johannes_B" }, { "code": "{\"name.family\" : /^christiansen/}{\"name.family\" : /^christiansen/}", "text": "Hi @Johannes_B,One of the possible solutions to your problem is a use of a text index in a combination of an additional $and expression.There are two possible ways to write this query once you indexed the field:This combination solve the insesetive search and the startWith in 2 effortsLet me know if that helps.Pavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hello Pavel,thank you for your quick answer!\nNot sure I understand it though Maybe my example was a little misleading.In short, the term /^chri/ should deliver “Christiansen” in the resultset.For solution\n1: As far as I understood the documentation, regex with a text search does not work?\n2: The collation indices don’t go well with regex searches, do they?It would be great if you could give me a short example.Thank you!", "username": "Johannes_B" }, { "code": "{ _id: 
ObjectID(\"5f2949ebbdee40880a3db0c0\"),\n name: { first: 'David', family: 'Christiansen' } }\n{ _id: ObjectID(\"5f294e2bbdee40880a3db0c1\"),\n name: { first: 'David', family: 'Owen-Christiansen' } }\ndb.users.aggregate([{$match: {\n $text :{ \"$search\" : \"christiansen\" }\n}}, {$match: {\n \"name.family\" : /^christiansen/i\n}}])\n\n\n[ \n name: { first: 'David', family: 'Christiansen' } } ]\n", "text": "Hi @Johannes_B,Ok thank you for explaining in more detail. My solution was relevant for the first example when the full name is provided. This is since text searches are based on words or delimited words. Therefore with a text index you cant search a partial expression.I suggest exploring our Atlas search which comes to solve those problems on Atlas:Use MongoDB Atlas Search to customize and embed a full-text search engine in your app.Learn how to use a regular expression in your Atlas Search query.The original example was considering the following documents:Where the following aggregation will find the needed document:This will utilize the text index.Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Ok thank you Pavel!That’s a shame, we can not consider running in an atlas cloud due to project constraints.Thank you nevertheless, time to get creative then Bye!", "username": "Johannes_B" } ]
Case insensitive query Regex (startsWith)
2020-08-03T11:11:15.036Z
Case insensitive query Regex (startsWith)
10,486
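The thread dismisses normalizing data on the way in, but it is worth sketching what that option looks like: store a lowercased copy of the field at write time, so the query can use a case-sensitive anchored prefix regex, which MongoDB can serve from an index. The field name "family_lower" is an assumption for illustration:

```python
# Sketch (assumed field name "family_lower"): store a lowercased copy at
# write time so the query can use a case-*sensitive* anchored prefix,
# which is index-friendly -- unlike /^christiansen/i.

def with_search_field(doc):
    """Add a normalized copy of name.family for indexable prefix search."""
    doc = dict(doc)
    doc["family_lower"] = doc["name"]["family"].lower()
    return doc

def prefix_match(docs, prefix):
    """Local stand-in for {'family_lower': {$regex: '^' + prefix}}."""
    prefix = prefix.lower()
    return [d for d in docs if d["family_lower"].startswith(prefix)]

docs = [with_search_field({"name": {"family": f}})
        for f in ("Christiansen", "Owen-Christiansen", "Smith")]

print([d["name"]["family"] for d in prefix_match(docs, "Chri")])
# -> ['Christiansen']
```

The trade-off is one extra stored field (plus its index) in exchange for prefix queries that no longer need a collection scan.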
null
[ "indexes" ]
[ { "code": "", "text": "Hi, we have a collection with about 500 million documents. While doing some preparation to shard the collection, we noticed something unexpected in our test environment. When deleting documents the size of various indexes doesn’t go down. We retrieve the index sizes by running db.collection.stats().indexSizes.Is there expected behavior? Is there a way to compact the indexes? Does mongo not give back the index, but rather re-use it?", "username": "AmitG" }, { "code": "", "text": "Hi Amit,I’m sorry I don’t have an answer but I have the exact same question too. This isn’t, unfortunately, a widely talked about topic either in documentation or on other forums.If you have found a reasonable explanation for this, please do share here.Thanks,\nMurali", "username": "Murali_Rao" } ]
Why doesn't indexSize go down when removing documents?
2020-07-15T20:30:26.491Z
Why doesn’t indexSize go down when removing documents?
1,403
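One actionable answer to the "is there a way to compact the indexes" question is the compact command; WiredTiger typically keeps freed space for reuse rather than shrinking files, which matches the flat indexSizes observed. The command document is sketched below (run it with care: it can block operations and its behavior differs across server versions; the collection name is an example):

```python
# Sketch: the compact command document, per collection. WiredTiger marks
# deleted space as reusable instead of shrinking the data files; compact
# asks the storage engine to release reusable space.

def compact_command(collection, force=False):
    cmd = {"compact": collection}
    if force:
        cmd["force"] = True  # required on primaries in some older versions
    return cmd

print(compact_command("orders"))
```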
null
[]
[ { "code": "", "text": "Hi guys.\nI ran into a problem upon installing the package on Mac.\nI’m inside my home directory, new terminal on the file, open nano editor, I copy the directory path, but when I want to save the changes, I get the following error message \n[ Error writing etc/paths: No such file or directory ]Does anyone had the same issue ?", "username": "Gaetan_CHABOUSSIE" }, { "code": "chsh -s /bin/zsh", "text": "Ok. the path editing worked but now it’s the console part that doesn’t work :Last login: Tue Aug 4 09:36:47 on ttys000gaetan@MacBook-Pro-de-Gaetan ~ % mongo --nodbzsh: command not found: mongogaetan@MacBook-Pro-de-Gaetan ~ % exec bashThe default interactive shell is now zsh.To update your account to use zsh, please run chsh -s /bin/zsh.For more details, please visit Use zsh as the default shell on your Mac - Apple Support.bash-3.2$ mongo --nodbbash: mongo: command not foundbash-3.2$", "username": "Gaetan_CHABOUSSIE" }, { "code": "", "text": "Ok, I can now run the commands but my Mac prompts a message which says :\nImpossible to open “mongo”, developper can’t be verified.Any ideas ?", "username": "Gaetan_CHABOUSSIE" }, { "code": "", "text": "All good and running. Thanks anyway ^^", "username": "Gaetan_CHABOUSSIE" }, { "code": "", "text": "", "username": "kanikasingla" } ]
Problem setting up paths on mac
2020-08-04T07:21:10.182Z
Problem setting up paths on mac
2,391
null
[]
[ { "code": "{\n \"nr\" : \"00000209\",\n \"tx\" : \"text\",\n \"list\" : [\n {\n \"featurename\" : \"text\",\n \"featuretext\" : \"text\"\n },\n {\n \"featurename\" : \"text\",\n \"featuretext\" : \"text\",\n \"features\" : [\n {\n \"some\" : \"text\",\n \"featurekeys\" : [\n {\n \"featurekey\" : \"text\"\n },\n {\n \"featurekey\" : \"text\"\n },\n {\n \"featurekey\" : \"text\"\n },\n {\n \"featurekey\" : \"text\"\n },\n {\n \"featurekeys\" : \"\"\n }\n ]\n }\n ]\n },\n {\n \"featurename\" : \"text\",\n \"featuretext\" : \"text\",\n \"features\" : [\n {\n \"some\" : \"text\",\n \"featurekeys\" : [\n {\n \"featurekey\" : \"text\"\n },\n {\n \"featurekey\" : \"text\"\n },\n {\n \"featurekey\" : \"text\"\n },\n {\n \"featurekey\" : \"text\"\n },\n {\n \"featurekeys\" : \"text\"\n }\n ]\n },\n {\n \"some\" : \"text\",\n \"featurekeys\" : [\n {\n \"featurekey\" : \"text\"\n }\n ]\n }\n ]\n }\n\n ]\n}\n{list: {$elemMatch: { features.featurekeys.featurekey:{ $all:[ RegExp('.*TextA.*'), RegExp('.*TextB.*'), RegExp('.*TextC.*')]}}}}\n", "text": "Hi,I wanted to search in a nested mongo doc for values with a specific name.The structures of the a little complex but the names of the values are always the same. 
Here is what one document is looks like:Normally I’ve a list of values [‘taxtA’, ‘textB’, ‘textC’] and i have to look through all the featurekey fields if one or more values from my list are there.My attempt looked like this, but gave me strange results:Can anyone tell me what I am doing wrong?", "username": "Herb" }, { "code": "list.features.featurekeys : { $elemMatch : {...} }\n", "text": "Hi @Herb,What blocks u from going the full route before doing an $elemMatch on that level:Please note that $all will require all elements to be in the list vs $any.Additionally, a wild card regex is a badly performant query and you should consider building a relevant text index and use a text search or Atlas search service.Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks a lot for that advise creating a text index, Pavel.I’ve created a text index for\nnr\ntx\nfeatures.featurekeys.featurekeythis works perfect but i cant use wildcards/regex’s in {$text: {$search: … } anymore wich would be necessary in some cases. For example some data contains “_” between two words some have spaces.\nIs there a way to use wildcards in a text index?", "username": "Herb" }, { "code": "", "text": "Hi @Herb,Thanks for the feedback. I think that using a text search for the wide range of cases followed by a next stage of match and $regex is expected to work much better than regex from beginning.Consider exploring the usage of pharses and negation :\nhttps://docs.mongodb.com/manual/reference/operator/query/text/#phrasesLet me know if you have any further questions.Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks again Pavel. 
The phrases are very helpful.\nIs there a way to combine a $text $search with another query like {nr:RegExp(“123.+”)} where either one or both are matching?", "username": "Herb" }, { "code": "", "text": "Hi @Herb,Well, maybe a usage of $facet where one facet is a text search result and the second one is a regex:https://docs.mongodb.com/manual/reference/operator/aggregation/facet/Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Aggregation is a hell of a tool, no doubt, but I am not sure if it fits my needs.Maybe I should explain my problem a little better.\nMy app gets a list of texts to search for. The list is not in a predictable order, so I have to check if a string matches either “nr”, “txt” or any “featurekey”. I want only the documents where all values are matching.", "username": "Herb" }, { "code": "", "text": "Hi @Herb,Ok so my idea is that you create a text index on the three fields.Aggregation has several stages.This will allow you to first filter out all irrelevant objects (those without this value at all), followed by a strict match on all values.Let me know if that helps.Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks again. That helped me a lot ", "username": "Herb" } ]
Searching for values with the same name in nested object
2020-07-22T20:32:51.370Z
Searching for values with the same name in nested object
4,009
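Pavel's two-stage approach (a coarse text-index match first, then a strict check that every term matches) can be simulated locally against the document shape from the thread; the helper names below are illustrative:

```python
# Local simulation of the two-stage filter from the thread: collect every
# nested "featurekey" value, then require that *all* search terms appear
# in some key (mirroring the $all-of-regexes semantics).

def feature_keys(doc):
    """Collect every featurekey value from the nested list/features arrays."""
    for entry in doc.get("list", []):
        for feature in entry.get("features", []):
            for fk in feature.get("featurekeys", []):
                val = fk.get("featurekey")
                if val:
                    yield val.lower()

def matches_all_terms(doc, terms):
    keys = list(feature_keys(doc))
    return all(any(term.lower() in key for key in keys) for term in terms)

doc = {"list": [{"features": [{"featurekeys": [
    {"featurekey": "TextA_1"}, {"featurekey": "TextB"}]}]}]}

print(matches_all_terms(doc, ["texta", "textb"]))  # True
print(matches_all_terms(doc, ["texta", "textc"]))  # False
```

On the server the coarse stage would be a $text $match and the strict stage a second $match, but the pass/fail logic per document is the one sketched here.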
null
[ "backup" ]
[ { "code": "", "text": "Hi,\nI am trying filesystem based snapshot backup for MongoDB. I make a list of files in the dbpath , acquire lock and then take a snapshot and release the lock. I always notice that this file WiredTigerPreplog goes missing in the snapshot when compared to the list of files made before snapshot. Looks like new logs gets created and old ones are deleted, by the time the snapshot is taken. What is this file acctually used for and will this file affect the restore, if it goes missing in the snapshot ?Thanks in advance,\nAkshaya Srinivasan", "username": "Akshaya_Srinivasan" }, { "code": "", "text": "Hi,Please can someone reply to this. I want to know the significance of this wiredtigerprep log, since this goes missing in the snapshot, than the list of files collected before lock is acquired.", "username": "Akshaya_Srinivasan" } ]
WiredTigerPreplog logs missing in the snapshot
2020-07-31T07:50:10.345Z
WiredTigerPreplog logs missing in the snapshot
2,092
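The lock/snapshot/unlock sequence described in the thread corresponds to the fsync admin commands (what db.fsyncLock()/db.fsyncUnlock() send under the hood); a sketch of the ordering, with the snapshot step itself left external:

```python
# Sketch of the lock -> snapshot -> unlock sequence as raw admin command
# documents. The snapshot itself (LVM, EBS, etc.) happens outside MongoDB.

lock_command = {"fsync": 1, "lock": True}   # flush writes and block new ones
unlock_command = {"fsyncUnlock": 1}          # release the lock after snapshot

def snapshot_steps(dbpath):
    """Illustrative ordering only."""
    return [("runCommand", lock_command),
            ("snapshot", dbpath),
            ("runCommand", unlock_command)]

print([step for step, _ in snapshot_steps("/var/lib/mongodb")])
```

WiredTigerPreplog files are journal pre-allocation files that get rotated into use, which is consistent with the churn observed between listing the files and taking the snapshot.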
null
[]
[ { "code": "", "text": "Dear all,\nI am totally new to MongoDB and currently investigating to what extend I can make use of the database structure while I have mostly been working with RDBMS until now.First try was an import of an example data set (CSV) from Craft Beers Dataset | Kaggle via the MongoDB Compass Community edition under Windows.The data set consists of two CSV files to import. So I created a new database with two collections trying to upload the CSV files to one collection each.The CSV already contains IDs which is why I selected at the importing dialog at the first selectable column “ObjectID” instead of “String”. Unfortunately after confirming the selection with hitting the import button, the screen stays totally blank. Just the top menu bar remains visible but nothing else. This behaviour happens every time again while when keeping the columns data type to “String” everything works fine.Can someone please explain this behaviour and let me please know why the data cannot be linked to each other?Cheers,\nAnton", "username": "Anton_M" }, { "code": "idObjectIdObjectIdid_idObjectId\"_id\" : ObjectId(\"5f28e55e578bbec021496e67\"){ \"_id\" : ObjectId(\"5f28e55e578bbec021496e67\"), \"id\" : 1436, \"name\" : \"Pub Beer\" }id_idid{ \"_id\" : \"14363\", \"name\" : \"Pubs Beer\" }id_id", "text": "Hello @Anton_M, welcome to the community.The CSV already contains IDs which is why I selected at the importing dialog at the first selectable column “ObjectID” instead of “String”. Unfortunately after confirming the selection with hitting the import button, the screen stays totally blank …The id field in the CSV is of data type string, and not ObjectId - importing a string as ObjectId is not allowed. I think, thats why the GUI went blank. An appropriate message would have been useful.When you import with CSV’s id field as string data type it imports fine. 
The imported document also has a field named _id with an auto-generated unique value of type ObjectId (e.g., \"_id\" : ObjectId(\"5f28e55e578bbec021496e67\")).Your imported document is probably like this:{ \"_id\" : ObjectId(\"5f28e55e578bbec021496e67\"), \"id\" : 1436, \"name\" : \"Pub Beer\" }If you are looking to make the CSV file’s id field as _id in the collection’s document, you can try this. First, id values must be unique. Second, it will be of type string. The sample imported document might look like this:{ \"_id\" : \"14363\", \"name\" : \"Pubs Beer\" }For this to happen, you just have to change the CSV’s id field name to _id, and then do the import.You can also use the mongoimport command-line MongoDB utility program - to import data from CSV files into the MongoDB collections.Please do include a sample of CSV data as text within the post. Also, mention the versions of the MongoDB server and the Compass.", "username": "Prasad_Saya" }, { "code": "", "text": "Unfortunately after confirming the selection with hitting the import button, the screen stays totally blank. Just the top menu bar remains visible but nothing else.You can try the menu options View -> Reload Data (or Reload) for the screen to appear again.", "username": "Prasad_Saya" } ]
Importing Data and referencing Object ID resulting in blank screen
2020-08-03T22:20:08.661Z
Importing Data and referencing Object ID resulting in blank screen
8,188
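Prasad's suggestion of renaming the CSV's id header to _id before import can be done with a small script; the file contents below are illustrative, modeled on the beer dataset from the thread:

```python
# Sketch of the suggested fix: rename the CSV's "id" header to "_id" so
# the values become the documents' _id on import. Sample data is made up.
import csv, io

def rename_id_header(csv_text):
    reader = csv.reader(io.StringIO(csv_text))
    rows = list(reader)
    rows[0] = ["_id" if h == "id" else h for h in rows[0]]
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(rows)
    return out.getvalue()

sample = "id,name\n1436,Pub Beer\n"
print(rename_id_header(sample))
# _id,name
# 1436,Pub Beer
```

Remember the constraint from the thread: _id values must be unique, and imported this way they will be strings, not ObjectIds.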
null
[ "containers" ]
[ { "code": "", "text": "Hello,I apologise if this isn’t the right category. I am working on a project for a small business where i’m looking to migrate their data from MySQL over to Mongo as part of a rebuild of their services. I do feel for the most part mongo actually will be better suited than their MySQL instance especially for some of the aggregations and reporting.I’m quite new to mongo, although I do have a fair amount of experience as an engineer querying/aggregating data but have no experience from the sysadmin side of things.I’m mostly looking for advice and possibly documentation in order to correctly configure and set-up mongoDB for a production environment – i’m nervous about moving their data over to mongo and having problems or loss of data, and would like to prepare myself as best i can to migrate.I wanted to run mongo in docker and mount the storage as a volume which I can then back-up with rsync to an external storage periodically. Would this be viable?Thanks,\nAsh", "username": "Ashley_Meadows" }, { "code": "", "text": "Hello @Ashley_Meadows well come to the community!It is great that you want to utilize the strong features of MongoDB. As you mention you have a solid SQL background. To get the most out of an noSQL Setup, you need to change the way of thinking about schema design. Your first goal will no longer be to get the maximal normalized Schema, Denormalization is not bad, the requirement of your queries will drive your design. The story will start to think about a good schema design. In case you move the SQL normalized Data Model 1:1 to MongoDB you will not have much fun or benefit.So in a first step I’d suggest to check out if you have a well fitting schema to utilize the MongoDB / noSQL advantages. If this is not the case, please do not worry about DBA issues and setup parameters - most likely you approach would not satisfy you without good data modelling.That said, DBA and setup parameter should not be underestimated! 
As start you can try out a sample DB as free tier in MongoDB Altas. This would move away many DBA issues and you can focus on the data modeling. Please also check out MongoDB Compass. Compass is the GUI for MongoDB. You can visually explore your data, run ad hoc queries, interact with your data with full CRUD functionality, view and optimize your query performance and index suggestions.Unfortunately I am not aware of a compiled list of DBA best practices (@Stennie_X, @chris, @Doug_Duncan, @Prasad_Saya do you know one this could be of general interest).You can find further information on the Transitioning from Relational Databases to MongoDB in the linked blog post. Please note also the links at the bottom of this post, and the referenced migration guide .Since you are new to MongoDB and noSQL I highly recommend to take some of great and free classes from the MongoDB Univerity:This is just a sample which can get you started very well. In case this is going to be a mission critical project\nI’d recommend getting Professional Advice to plan a deployment There are many considerations, and an experienced consultant can provide better advice with a more holistic understanding of your requirements. 
Some decisions affecting scalability (such as shard key selection) are more difficult to course correct once you have a significant amount of production data.Hope this helps to start, while getting familiar and all time after, feel free to ask you questions here - we will try to help.Cheers,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Hello @Ashley_Meadows, welcome to the forum.Just adding to what @michael_hoeller had mentioned, I found that there is a new MongoDB University Course MongoDB for SQL Pros, and this would be of interest to you.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @michael_hoeller and @Prasad_SayaThank you for your replies, and the nice welcome into the community I appreciate the resources you’ve referenced and will be checking those out this week and certainly looking at the blog post and the Mongo University course. I have some experience with Compass and mongo shell so I feel somewhat comfortable with those.Thanks,\nAsh", "username": "Ashley_Meadows" }, { "code": "rsync", "text": "Hi @Ashley_Meadows,If you are setting up a self-hosted deployment, I recommend reviewing the following documentation:For a production deployment, I strongly recommend deploying a replica set. Replica sets provides data redundancy, high availability, and administrative flexibility for upgrading and backing up without downtime.If you are new to MongoDB administration, I would also consider starting with a MongoDB Atlas deployment rather than self-hosting. MongoDB Atlas is a managed database service – you don’t have to worry about the underlying administrative knowledge and tasks to configure, secure, monitor, backup, and scale your MongoDB deployments.The minimum Atlas cluster deployment is a 3 member replica set running MongoDB Enterprise configured with best practice features such as role-based access control, TLS/SSL network encryption, firewall restrictions, and monitoring. 
You can configure additional features and behaviour via the Atlas UI or API. With Atlas taking care of the operational aspects of your cluster, you (or your team) can focus on development.I wanted to run mongo in docker and mount the storage as a volume which I can then back-up with rsync to an external storage periodically. Would this be viable?Backing up with rsync is a possible approach (see: Back Up by Copying Underlying Data Files), but to capture a valid backup you need to stop all writes before copying the data files. This will not be an ideal approach if you have a standalone server and cannot afford maintenance windows.As noted earlier, I would recommend running a replica set in production (especially if you have to support the production deployment). A replica set provides data redundancy so unavailability of a single server (or more, depending on the fault tolerance of your deployment configuration) does not result in downtime. This is extremely useful for rolling maintenance (upgrading one member of your replica set at a time) and mitigating failure scenarios.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Guidelines for setting up a production instance
2020-08-03T01:15:22.264Z
Guidelines for setting up a production instance
1,831
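Stennie's recommendation to run a replica set in production maps to a small addition in each member's mongod.conf, in the same style as the config files shown in these threads. A minimal sketch; the set name "rs0" is an arbitrary example (all members must use the same value, and the set still needs rs.initiate() once):

```yaml
# Replication section for each member's mongod.conf (sketch)
replication:
  replSetName: rs0
```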
null
[ "installation" ]
[ { "code": "", "text": "I tried to install MongoDB as a service and it runs through even with changing the directories for log and data to D:\\develop\\DB\\MongoDb\\Server\\4.2…\nWhen I try to run mongod from the command line I always get:\nexception in initAndListen: NonExistentPath: Data directory C:\\data\\db\\ not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the ‘storage.dbPath’ option in the configuration file., terminatingTrying to run tghe service as a domain user results in an error dialog.I wonder why MongoDb complains about C:}data\\db.", "username": "Dirk_Ulrich" }, { "code": "", "text": "How did you start mongod from command line?\nIf you just run mongod without dbpath it tries to start on default dir C:\\data\\db\nSo check if dir exists or not", "username": "Ramachandra_Tummala" }, { "code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: D:\\develop\\DB\\MongoDB\\Server\\4.2\\data\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: D:\\develop\\DB\\MongoDB\\Server\\4.2\\log\\mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n#processManagement:\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n", "text": "I just typed ‘mongod’ in the command line because in mongod.cfg I have:", "username": "Dirk_Ulrich" }, { "code": "", "text": "When you are invoking mongod manually it is not using this default config file\nIt is looking for C:\\ data\\db.Since dir does not exist it is failingThe config file which you have pasted is used by default mongod which runs as service on WindowsTry to give a different port and a different path and see.It will work", 
"username": "Ramachandra_Tummala" }, { "code": "mongod -f c:\\path\\to\\mongod.conf\nmongod --config c:\\path\\to\\mongod.conf\nmongod", "text": "I just typed ‘mongod’ in the command line@Dirk_Ulrich you need to point to the config file by using something like the following:orAs @Ramachandra_Tummala states, running just mongod, the process will use the default data values.", "username": "Doug_Duncan" }, { "code": "", "text": "Both versions result in a non-responding prompt in the command line.\nAs you said, I installed Mongod already as a Windows service. I stopped it and tried to start mongod as you suggested but nothing happened.", "username": "Dirk_Ulrich" }, { "code": "", "text": "This happens also when I started CMD as admin.", "username": "Dirk_Ulrich" }, { "code": "", "text": "It is not a non responsive prompt\nMost likely your mongod is up and it is expected behaviour in WindowsYou have to open another cmd prompt and connect to mongodI am trying to understand your requirement\nThere was no need to stop the mongod which was running as service\nThis is default mongod which runs on prot 27017\nAll you have to do is connect by issuing mongoIf you want to start another mongod you have to use different port and path\nThat is what i meant in my reply\nYou should not change the default config fileCreate your own config file or you can start from command line alsomongod --port xyz --dbpath --logpathNote:You cannot have two mongods running on same portPlease go through mongo documentation for more details", "username": "Ramachandra_Tummala" }, { "code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: D:\\develop\\DB\\MongoDB\\Server\\4.2\\data\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: 
D:\\develop\\DB\\MongoDB\\Server\\4.2\\log\\mongod.log\n\n# network interfaces\nnet:\n port: 27027\n bindIp: 127.0.0.1\n\n#processManagement:\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n", "text": "Well, I copied the original config file and renamed it from mongod.cfg to mongod.conf and changed port:Both config files are in the same directory…but that shouldn’t cause problems, should it?\nStill I have a non responding prompt.\nDid I understand you right, that MongoDb uses C:\\data\\db as default? …how does this work with the service running properly under windows althought this directory doesn’t exist?", "username": "Dirk_Ulrich" }, { "code": "D:\\develop\\DB\\MongoDB\\Server\\4.2\\data", "text": "D:\\develop\\DB\\MongoDB\\Server\\4.2\\dataWhen it is running as service it is using default config file and using this path-D:\\develop\\DB\\MongoDB\\Server\\4.2\\dataAlso it runs in backgroundBut when you run it manually it looks for C:\\data\\db\nand it runs in foreground\nThat’s why your session appears to be hung\nSo you have to open another cmd prompt and connect using mongo --port your_portWhat exactly you mean by unresponsive prompt\nIt would have given some messageConfig file can be at same location or a different location but when you start mongod give the full path and change the dirpath to new path\nYou changed the port but left the dbpath same as the one being used when it runs as serviceDid you try a simple command line method?mongod --dbpath your_homedir --port 28001 --logpath your_homedir/mongod.logYou can try above by config file alsomongod -f path_to_your_homedir mongod.conf\nHere i am assumng your config file resides in your home dir", "username": "Ramachandra_Tummala" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Data directory C:\data\db\ not found
2020-07-28T19:50:04.408Z
Data directory C:\data\db\ not found
36,409
null
[]
[ { "code": "", "text": "I am looking to read from local.oplogThis is a cluster deployment with a replicaSet.\nI am unable to connect or create a user that can read from local.How do you create a user that has read access to local.oplogThe project is to populate a real time data warehouse by reading the oplog", "username": "Anantha_Rao" }, { "code": "", "text": "I am unable to connect or create a user that can read from local.How you are creating the user?Atlas or command linePlease check these links", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Anantha_Rao, welcome to the MongoDB community.Aside from @Ramachandra_Tummala’s questions, I would also like to point out that the oplog is for MongoDB internal use, and so there’s no guarantee that the format would stay the same from version to version.Instead, I would encourage you to use Change Streams which is overall a better method than tailing the oplog. Some advantages of change streams are:Best regards,\nKevin", "username": "kevinadi" } ]
Read from Oplog
2020-07-31T21:32:36.448Z
Read from Oplog
1,791
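Kevin's change streams suggestion fits the warehouse use case directly: a change stream is opened with an ordinary aggregation pipeline. A driver-agnostic sketch of the documents involved (the exact filter is up to the application):

```python
# Sketch: filter a change stream to just the event types a warehouse feed
# would load. operationType is a standard change-event field.

watch_pipeline = [
    {"$match": {"operationType": {"$in": ["insert", "update", "delete"]}}},
]

def is_wanted(event):
    """Local stand-in for the $match stage above."""
    return event.get("operationType") in {"insert", "update", "delete"}

print(is_wanted({"operationType": "insert"}))  # True
print(is_wanted({"operationType": "drop"}))    # False
```

Unlike reading local.oplog.rs, this needs no special access to the local database, and resume tokens let the feed pick up where it left off after a restart.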
null
[]
[ { "code": " cartCursor.forEach(cart => {\n Carts.push(cart);\n if ( Carts.length == batch ) { sendCarts() }\n \n },e => {\n console.log('end',e)\n sendCarts();\n });\n", "text": "Hi,I have a query that will return ~ 30k documents each of ~6k bytes ( if stringified ).I am sorting the query, but have created an Index on that field, but the Cursor still returns the ''Sort operation used more than the maximum 33554432\" error.Here’s the Query…Any ideas?Peter", "username": "Peter_Alderson" }, { "code": "cursor.sortallowDiskUse", "text": "Hello @Peter_Alderson,The error ''Sort operation used more than the maximum 33554432\" occurs when the sort operation requires more than 32 MB memory - the cursor.sort allows maximum 32 MB only (see MongoDB Limits and Thresholds - Sort Operations).You can try using an Aggregation query instead, which can use more memory (100 MB) for the sort. In case the sort needs still more memory, you can use the allowDiskUse option.Please post a sample document you are working with and the entire query showing the sort operation.", "username": "Prasad_Saya" } ]
Cursor with Sort forEach exceeds maximum RAM, with Index
2020-08-02T20:40:48.046Z
Cursor with Sort forEach exceeds maximum RAM, with Index
2,467
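Prasad's suggestion above (aggregation sort plus allowDiskUse) can be sketched as the pipeline and options a driver would send; the sort field "createdAt" is an assumed example, not from the thread:

```python
# Sketch: express the sort as an aggregation pipeline. Aggregation allows
# 100 MB for an in-memory sort (vs 32 MB for cursor.sort), and allowDiskUse
# lets $sort spill to disk beyond that.

def sorted_pipeline(sort_field, direction=1, batch=None):
    pipeline = [{"$sort": {sort_field: direction}}]
    if batch:
        pipeline.append({"$limit": batch})
    return pipeline

aggregate_options = {"allowDiskUse": True}

print(sorted_pipeline("createdAt", batch=500))
```

If an index fully covers the sort order, neither limit applies, so it is also worth checking with explain() why the original query did not use the index for the sort.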
null
[]
[ { "code": " 2020-03-01T17:12:29.261+0100 E STORAGE [WTJournalFlusher] WiredTiger error (5) [1583407709:261464][6196:0x7f6e5b1ef700], WT_SESSION.log_flush: __posix_sync, 99: /var/lib/mongodb/journal/WiredTigerLog.0000000003: handle-sync: fdatasync: Input/output error Raw: [1583407709:261464][6196:0x7f6e5b1ef700], WT_SESSION.log_flush: __posix_sync, 99: /var/lib/mongodb/journal/WiredTigerLog.0000000003: handle-sync: fdatasync: Input/output error\n 2020-03-01T17:12:32.214+0100 E STORAGE [WTJournalFlusher] WiredTiger error (-31804) [1583407712:214632][6196:0x7f6e5b1ef700], WT_SESSION.log_flush: __wt_panic, 490: the process must exit and restart: WT_PANIC: WiredTiger library panic Raw: [1583407712:214632][6196:0x7f6e5b1ef700], WT_SESSION.log_flush: __wt_panic, 490: the process must exit and restart: WT_PANIC: WiredTiger library panic\n 2020-03-01T17:12:32.214+0100 F - [WTJournalFlusher] Fatal Assertion 50853 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 414\n 2020-03-01T17:12:32.214+0100 F - [WTJournalFlusher] \n\n ***aborting after fassert() failure\n\n 2020-03-01T17:12:32.224+0100 F - [WTJournalFlusher] Got signal: 6 (Aborted).\n----- BEGIN BACKTRACE 
-----\n{\"backtrace\":[{\"b\":\"55D8C855A000\",\"o\":\"281F591\",\"s\":\"_ZN5mongo15printStackTraceERSo\"},{\"b\":\"55D8C855A000\",\"o\":\"281ED8E\"},{\"b\":\"55D8C855A000\",\"o\":\"281EE26\"},{\"b\":\"7F6E6233F000\",\"o\":\"12730\"},{\"b\":\"7F6E6217E000\",\"o\":\"377BB\",\"s\":\"gsignal\"},{\"b\":\"7F6E6217E000\",\"o\":\"22535\",\"s\":\"abort\"},{\"b\":\"55D8C855A000\",\"o\":\"CDEF3B\",\"s\":\"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj\"},{\"b\":\"55D8C855A000\",\"o\":\"A268A6\"},{\"b\":\"55D8C855A000\",\"o\":\"E617AB\"},{\"b\":\"55D8C855A000\",\"o\":\"A33EC2\",\"s\":\"__wt_err_func\"},{\"b\":\"55D8C855A000\",\"o\":\"A34326\",\"s\":\"__wt_panic\"},{\"b\":\"55D8C855A000\",\"o\":\"E32F03\"},{\"b\":\"55D8C855A000\",\"o\":\"E17976\",\"s\":\"__wt_log_force_sync\"},{\"b\":\"55D8C855A000\",\"o\":\"E1E28B\",\"s\":\"__wt_log_flush\"},{\"b\":\"55D8C855A000\",\"o\":\"E53A6B\"},{\"b\":\"55D8C855A000\",\"o\":\"DDCB84\",\"s\":\"_ZN5mongo22WiredTigerSessionCache16waitUntilDurableEbb\"},{\"b\":\"55D8C855A000\",\"o\":\"DBA336\",\"s\":\"_ZN5mongo18WiredTigerKVEngine24WiredTigerJournalFlusher3runEv\"},{\"b\":\"55D8C855A000\",\"o\":\"26FA63F\",\"s\":\"_ZN5mongo13BackgroundJob7jobBodyEv\"},{\"b\":\"55D8C855A000\",\"o\":\"294519F\"},{\"b\":\"7F6E6233F000\",\"o\":\"7FA3\"},{\"b\":\"7F6E6217E000\",\"o\":\"F94CF\",\"s\":\"clone\"}],\"processInfo\":{ \"mongodbVersion\" : \"4.2.3\", \"gitVersion\" : \"6874650b362138df74be53d366bbefc321ea32d4\", \"compiledModules\" : [], \"uname\" : { \"sysname\" : \"Linux\", \"release\" : \"4.19.0-8-amd64\", \"version\" : \"#1 SMP Debian 4.19.98-1 (2020-01-26)\", \"machine\" : \"x86_64\" }, \"somap\" : [ { \"b\" : \"55D8C855A000\", \"elfType\" : 3, \"buildId\" : \"C1E6FA2DCE46DBD4F26AF59B9ECD4DC451A187D5\" }, { \"b\" : \"7FFD5C3E5000\", \"path\" : \"linux-vdso.so.1\", \"elfType\" : 3, \"buildId\" : \"B89B19527F25345B43708CB3E56B29B343FE85F0\" }, { \"b\" : \"7F6E628A3000\", \"path\" : \"/lib/x86_64-linux-gnu/libcurl.so.4\", \"elfType\" : 3, 
\"buildId\" : \"B124C5E8D77B1B3F0CDDBF4E39B1F9132347E16C\" }, { \"b\" : \"7F6E62889000\", \"path\" : \"/lib/x86_64-linux-gnu/libresolv.so.2\", \"elfType\" : 3, \"buildId\" : \"026C3BA167F64F631EB8781FCA2269FBC2EE7CA5\" }, { \"b\" : \"7F6E625A0000\", \"path\" : \"/lib/x86_64-linux-gnu/libcrypto.so.1.1\", \"elfType\" : 3, \"buildId\" : \"E4D80B6A27F74CF1ABBD353A72622B7C5FDBA771\" }, { \"b\" : \"7F6E6250E000\", \"path\" : \"/lib/x86_64-linux-gnu/libssl.so.1.1\", \"elfType\" : 3, \"buildId\" : \"329B528F65883B62C397B42F1F0C3FB55E66C2E5\" }, { \"b\" : \"7F6E62509000\", \"path\" : \"/lib/x86_64-linux-gnu/libdl.so.2\", \"elfType\" : 3, \"buildId\" : \"D3583C742DD47AAA860C5AE0C0C5BDBCD2D54F61\" }, { \"b\" : \"7F6E624FF000\", \"path\" : \"/lib/x86_64-linux-gnu/librt.so.1\", \"elfType\" : 3, \"buildId\" : \"5DCF98AD684962BE494AF28A1051793FD39E4EBC\" }, { \"b\" : \"7F6E6237A000\", \"path\" : \"/lib/x86_64-linux-gnu/libm.so.6\", \"elfType\" : 3, \"buildId\" : \"885DDA4B4A5CEA600E7B5B98C1AD86996C8D2299\" }, { \"b\" : \"7F6E62360000\", \"path\" : \"/lib/x86_64-linux-gnu/libgcc_s.so.1\", \"elfType\" : 3, \"buildId\" : \"DE6B14E57AEA9BBEAF1E81EB6772E2222101AA6E\" }, { \"b\" : \"7F6E6233F000\", \"path\" : \"/lib/x86_64-linux-gnu/libpthread.so.0\", \"elfType\" : 3, \"buildId\" : \"E91114987A0147BD050ADDBD591EB8994B29F4B3\" }, { \"b\" : \"7F6E6217E000\", \"path\" : \"/lib/x86_64-linux-gnu/libc.so.6\", \"elfType\" : 3, \"buildId\" : \"18B9A9A8C523E5CFE5B5D946D605D09242F09798\" }, { \"b\" : \"7F6E6293B000\", \"path\" : \"/lib64/ld-linux-x86-64.so.2\", \"elfType\" : 3, \"buildId\" : \"F25DFD7B95BE4BA386FD71080ACCAE8C0732B711\" }, { \"b\" : \"7F6E62156000\", \"path\" : \"/lib/x86_64-linux-gnu/libnghttp2.so.14\", \"elfType\" : 3, \"buildId\" : \"11070FEAA71B4F7C2E5714A61B66028FA86EAE5E\" }, { \"b\" : \"7F6E62137000\", \"path\" : \"/lib/x86_64-linux-gnu/libidn2.so.0\", \"elfType\" : 3, \"buildId\" : \"93835C08B4818817E355044CEF05F7F5BA573386\" }, { \"b\" : \"7F6E61F18000\", \"path\" : 
\"/lib/x86_64-linux-gnu/librtmp.so.1\", \"elfType\" : 3, \"buildId\" : \"F8F137851A6C9F76F2AFB296C77499E3DB004E4B\" }, { \"b\" : \"7F6E61EEA000\", \"path\" : \"/lib/x86_64-linux-gnu/libssh2.so.1\", \"elfType\" : 3, \"buildId\" : \"4AEBD6D1D4181EACBCA6F6E30CB293A73FF25FD4\" }, { \"b\" : \"7F6E61ED7000\", \"path\" : \"/lib/x86_64-linux-gnu/libpsl.so.5\", \"elfType\" : 3, \"buildId\" : \"E7463248F4FD5ADA5D53F36A7F11BA66C9A7DA3C\" }, { \"b\" : \"7F6E61E8A000\", \"path\" : \"/lib/x86_64-linux-gnu/libgssapi_krb5.so.2\", \"elfType\" : 3, \"buildId\" : \"A8A22DB4384DFA17A6A486FF7960DB822976F74C\" }, { \"b\" : \"7F6E61DAA000\", \"path\" : \"/lib/x86_64-linux-gnu/libkrb5.so.3\", \"elfType\" : 3, \"buildId\" : \"118BE45FCDE6F2645A56C8027EE2F3A25A7EC083\" }, { \"b\" : \"7F6E61D76000\", \"path\" : \"/lib/x86_64-linux-gnu/libk5crypto.so.3\", \"elfType\" : 3, \"buildId\" : \"699B18B4849021A396E46FF7B435D6D7497649B3\" }, { \"b\" : \"7F6E61D6E000\", \"path\" : \"/lib/x86_64-linux-gnu/libcom_err.so.2\", \"elfType\" : 3, \"buildId\" : \"DFFD546CDF7248805473C118886139F88BF01415\" }, { \"b\" : \"7F6E61D1A000\", \"path\" : \"/lib/x86_64-linux-gnu/libldap_r-2.4.so.2\", \"elfType\" : 3, \"buildId\" : \"7A56C455C57C30F696306CA4FE639BAF28FDBBB0\" }, { \"b\" : \"7F6E61D09000\", \"path\" : \"/lib/x86_64-linux-gnu/liblber-2.4.so.2\", \"elfType\" : 3, \"buildId\" : \"F239F8CFD0087ACCEEECD2E93C5DF56104CFFA76\" }, { \"b\" : \"7F6E61AEB000\", \"path\" : \"/lib/x86_64-linux-gnu/libz.so.1\", \"elfType\" : 3, \"buildId\" : \"3AF7C4BCEB19B6C83F76E2822B9A23041D85F6D1\" }, { \"b\" : \"7F6E61967000\", \"path\" : \"/lib/x86_64-linux-gnu/libunistring.so.2\", \"elfType\" : 3, \"buildId\" : \"2B976CABA5F5BF345388917673C45EE626A576D0\" }, { \"b\" : \"7F6E617B9000\", \"path\" : \"/lib/x86_64-linux-gnu/libgnutls.so.30\", \"elfType\" : 3, \"buildId\" : \"20C08C96D01B993206BCA6CBFC919A5426726BCA\" }, { \"b\" : \"7F6E61780000\", \"path\" : \"/lib/x86_64-linux-gnu/libhogweed.so.4\", \"elfType\" : 3, \"buildId\" : 
\"B548A14003EE05ADA36686A3B48D1913BACD540D\" }, { \"b\" : \"7F6E61748000\", \"path\" : \"/lib/x86_64-linux-gnu/libnettle.so.6\", \"elfType\" : 3, \"buildId\" : \"696C145020FC52F49A604B409E80C0F604514CBE\" }, { \"b\" : \"7F6E616C5000\", \"path\" : \"/lib/x86_64-linux-gnu/libgmp.so.10\", \"elfType\" : 3, \"buildId\" : \"CF7737ED0FEB1A97D13F3EF9BBAD9AE2E0EEEF48\" }, { \"b\" : \"7F6E615A7000\", \"path\" : \"/lib/x86_64-linux-gnu/libgcrypt.so.20\", \"elfType\" : 3, \"buildId\" : \"C698702313BFDED270BF0C7C106B38C66AA46982\" }, { \"b\" : \"7F6E61598000\", \"path\" : \"/lib/x86_64-linux-gnu/libkrb5support.so.0\", \"elfType\" : 3, \"buildId\" : \"C8A3343E37DE6461A09AB7849F62A8C6CF01E551\" }, { \"b\" : \"7F6E6158F000\", \"path\" : \"/lib/x86_64-linux-gnu/libkeyutils.so.1\", \"elfType\" : 3, \"buildId\" : \"B33B7F30AEA5D2BC14A939FA750862D09A4AC80E\" }, { \"b\" : \"7F6E61572000\", \"path\" : \"/lib/x86_64-linux-gnu/libsasl2.so.2\", \"elfType\" : 3, \"buildId\" : \"99BF5A225908FD4124228D4F3E19C67D7138144F\" }, { \"b\" : \"7F6E61443000\", \"path\" : \"/lib/x86_64-linux-gnu/libp11-kit.so.0\", \"elfType\" : 3, \"buildId\" : \"6147AE8F2D6FA2184DA7D46016746D0DF0C77895\" }, { \"b\" : \"7F6E61230000\", \"path\" : \"/lib/x86_64-linux-gnu/libtasn1.so.6\", \"elfType\" : 3, \"buildId\" : \"9D60C41CEC3F57BC859B75C1E834187E04DF7C99\" }, { \"b\" : \"7F6E6120D000\", \"path\" : \"/lib/x86_64-linux-gnu/libgpg-error.so.0\", \"elfType\" : 3, \"buildId\" : \"0B8984CF2F0DD4F4901E9100CDB9410D7EBE7930\" }, { \"b\" : \"7F6E61201000\", \"path\" : \"/lib/x86_64-linux-gnu/libffi.so.6\", \"elfType\" : 3, \"buildId\" : \"9ED5213748F3F5D008D615DFF0368A6E38E1DE55\" } ] }}\n mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x55d8cad79591]\n mongod(+0x281ED8E) [0x55d8cad78d8e]\n mongod(+0x281EE26) [0x55d8cad78e26]\n libpthread.so.0(+0x12730) [0x7f6e62351730]\n libc.so.6(gsignal+0x10B) [0x7f6e621b57bb]\n libc.so.6(abort+0x121) [0x7f6e621a0535]\n mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) 
[0x55d8c9238f3b]\n mongod(+0xA268A6) [0x55d8c8f808a6]\n mongod(+0xE617AB) [0x55d8c93bb7ab]\n mongod(__wt_err_func+0x90) [0x55d8c8f8dec2]\n mongod(__wt_panic+0x39) [0x55d8c8f8e326]\n mongod(+0xE32F03) [0x55d8c938cf03]\n mongod(__wt_log_force_sync+0x286) [0x55d8c9371976]\n mongod(__wt_log_flush+0xEB) [0x55d8c937828b]\n mongod(+0xE53A6B) [0x55d8c93ada6b]\n mongod(_ZN5mongo22WiredTigerSessionCache16waitUntilDurableEbb+0x2D4) [0x55d8c9336b84]\n mongod(_ZN5mongo18WiredTigerKVEngine24WiredTigerJournalFlusher3runEv+0x106) [0x55d8c9314336]\n mongod(_ZN5mongo13BackgroundJob7jobBodyEv+0x9F) [0x55d8cac5463f]\n mongod(+0x294519F) [0x55d8cae9f19f]\n libpthread.so.0(+0x7FA3) [0x7f6e62346fa3]\n libc.so.6(clone+0x3F) [0x7f6e622774cf]\n----- END BACKTRACE -----\n", "text": "Hi,I’m running a single instance of mongod on single node with Debian 10. With package “mongodb-org/buster,now 4.2.3 amd64” installed. Recently mongod crashed with following error.I found this assertion only once in connection with access issues of the WiredTiger.turtle file.https://groups.google.com/d/msg/mongodb-user/QQC-HKx8LCo/ph1iHrVEAwAJAny suggestions are kindly appreciated! Thanks!Is there further documentation regarding analyzing such crashes?Thanks,MarcBacktrace produced:", "username": "marcomayer_ww" }, { "code": "", "text": "fdatasync: Input/output errorI/O Check your storage", "username": "chris" }, { "code": "WTJournalFlusher/var/lib/mongodb/journal/WiredTigerLog.0000000003fdatasync()mongodmongodmongod", "text": "Is there further documentation regarding analyzing such crashes?A backtrace or stack trace is generally only meaningful for a developer to look at the server execution context when an exception is encountered. A stack trace can also be useful to differentiate execution paths for a similar assertion. 
For example: your error is definitely different from the issue you mentioned in the mongodb-user group.Developers normally demangle stack traces with the help of debug symbols to map addresses to function calls. See Parsing Stack Traces on the MongoDB source code wiki.Errors immediately preceding the stack trace are often a useful indication of the problem, but since those error codes and messages are returned directly from system libraries they can be somewhat opaque.In your case, the key log line is:2020-03-01T17:12:29.261+0100 E STORAGE [WTJournalFlusher] WiredTiger error (5) [1583407709:261464][6196:0x7f6e5b1ef700], WT_SESSION.log_flush: __posix_sync, 99: /var/lib/mongodb/journal/WiredTigerLog.0000000003: handle-sync: fdatasync: Input/output error Raw: [1583407709:261464][6196:0x7f6e5b1ef700], WT_SESSION.log_flush: __posix_sync, 99: /var/lib/mongodb/journal/WiredTigerLog.0000000003: handle-sync: fdatasync: Input/output errorThis log line indicates that the WTJournalFlusher thread encountered an I/O error trying to flush changes to the journal file /var/lib/mongodb/journal/WiredTigerLog.0000000003 using the fdatasync() library function. Since the mongod process was unable to write essential data, the next action is a fatal assertion.As @chris suggested, you should verify your storage as there may be filesystem or I/O errors.If you restart mongod after an unexpected shutdown, it will try to recover and continue if possible. If your mongod process is unable to start and the reasons are unclear, please provide any additional log messages from the unsuccessful startup attempt.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for quick response. Yes after the restart of mongod the recovery was triggered and completed.\nIt runs normal now. 
Is the issue connected to EXT4 FS?\nBut I will continue to monitor it.I’m running another single instance of mongod also on a virtual machine with Debian 10 using the default ESX file system (but on an internal plain ESX server). But without crashes.Currently I’m reviewing documents concerning running MongoDB on virtual Linux machines (Debian/Ubuntu).\nAre there other recommended resources?Again thanks for the fast feedback!", "username": "marcomayer_ww" }, { "code": "dbPathmongod", "text": "Is the issue connected to EXT4 FS?The only ext4 issue I’m aware of is SERVER-18314: Stall during fdatasync phase of checkpoints under WiredTiger and EXT4. This was the motivation for adding a startup warning in MongoDB 3.4+ if ext4 is detected for the current dbPath. We have not observed or had reports of similar stalls with XFS.The issue you encountered was an unrecoverable I/O error which is different from the ext4 stalls that have been observed. In your case I expect the cause may have been a filesystem or hardware error.Even if mongod successfully recovered after restarting, I would still advise verifying your filesystem and checking for storage errors.Currently I’m reviewing documents concerning running MongoDB on virtual Linux machines (Debian/Ubuntu).\nAre there other recommended resources?The Production Notes & Operations Checklist you found are the usual general guidance we provide. 
These notes are aggregated from user feedback and field experience, so although they may not be an issue for all workloads there are common and impactful considerations.There are also some MongoDB white papers on operational and planning topics that may be of interest.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "It seems that I have the same issue (): 2020-08-02T00:08:05.698+0200 E STORAGE [WTJournalFlusher] WiredTiger error (5) [1596319685:698351][937:0x7f6b9f4c4700], WT_SESSION.log_flush: __posix_sync, 99: /var/lib/mongodb/journal/WiredTigerLog.0000000157: handle-sync: fdatasync: Input/output errorHow to chech was there I/O issue on ubuntu os? I ran smartctl to check disk and it said the disk had no errors. Here is a link to reportedBug: https://jira.mongodb.org/browse/SERVER-50069?filter=-2", "username": "firstName_lastName" }, { "code": "fdatasync: Input/output errorsmartlctlfsckbadblocksfsckfsckfscksmartctlfsck", "text": " How to chech was there I/O issue on ubuntu os? I ran smartctl to check disk and it said the disk had no errors. Here is a link to reportedBug: https://jira.mongodb.org/browse/SERVER-50069?filter=-2 Hi @firstName_lastName,In general, please start a new topic if you have a similar problem in a different environment. This will help keep details & discussion for each environment distinct.The fdatasync: Input/output error message is returned by system libraries, so happens at a lower layer than MongoDB. You may be able to get more relevant advice on an Ubuntu or Linux site (for example, Ask Ubuntu).The smartlctl utility reports information from the SMART controller for your drive. In my experience the SMART warnings generally aren’t insightful unless your drive is in imminent danger of failing, but look for attributes that are increasing significantly over time or approaching the warning threshold. 
Most SMART attributes & thresholds are specific to your hard drive vendor and/or model, and are meant to be predictive indicators of failure. You’ll have to compare those with other reports for the same drive models.Other likely tools to use for Linux I/O issues include fsck and badblocks (which can also be invoked via fsck ). Check the fsck man page for available options in your version. fsck will report (and possibly resolve) filesystem errors, which may be logical errors rather than physical faults reported by smartctl . For example, files can be corrupted due to unexpected system or process restarts with active writes in progress. If your MongoDB instance is hosted in a VM or container, you will also want to check the host drive.Unfortunately there isn’t much that can be done to repair random corruption in data files: “repair” in those cases generally means skipping over file segments that can’t be read (aka “salvage”), which will result in data loss unless those segments happen to be unused. In SERVER-49317 you mention fsck found and fixed some inconsistencies: blocks of some MongoDB data files may have been repaired to an unexpected state if they were part of the detected inconsistencies.If there are no obvious errors on your drive or filesystem, another possibility to look into would be unexpected process or system restarts. MongoDB uses journalling and checksums to try to avoid corruption issues, but if you are seeing this problem frequently I would look into the stability of your environment. I would also make sure you are using a recommended filesystem (generally XFS for WiredTiger) mounted locally.To mitigate risk in a production environment, we recommend deploying a replica set so that you have data redundancy and availability across multiple MongoDB instances (ideally on different physical hosts).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
Mongod crashed on Debian 10 with Fatal Assertion 50853
2020-03-05T18:33:14.874Z
Mongod crashed on Debian 10 with Fatal Assertion 50853
7,577
null
[]
[ { "code": "MongoClient.connect()the options [servers] is not supported\nthe options [caseTranslate] is not supported\nthe options [dbName] is not supported\nthe options [srvHost] is not supported\nthe options [credentials] is not supported\n[sometimes more]\nMongoClient.connect((err) => { // err handling\n // code using the database\n});\nMongoClient.connect((err) => /* err handling */)\n// code using the database\n", "text": "Hey. I’m working on a backend system that includes multiple node.js files. It means that in each of the files using MongoDB, I have to open a new connection using MongoClient.connect().But the thing is, that MongoDB doesn’t really seem to like it: I got multiple deprecation errors in the console:So my question is: How can I open only 1 connection, so I can use it in all my files? Since the code looks like thatnot like thatI can’t figure out how to find an answer.Thanks in advance for your help.", "username": "a2b" }, { "code": "", "text": "Hi @a2b,\nWhat is placed as the connection string for this client?What is the minPoolSize and maxPoolSize? They should be both set to 1.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "The issue has been solved with this StackOverflow question.", "username": "a2b" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Only use 1 connection in multiple files
2020-07-31T18:01:23.679Z
Only use 1 connection in multiple files
5,410
null
[]
[ { "code": "", "text": "Sir can I get the datasets used in our course", "username": "Rishabh_Shukla" }, { "code": "", "text": "Most data is on the class cluster. Please read the instructions cardfully and you will find how to connect.", "username": "steevej" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Sir can I get the datasets used in our course
2020-08-02T10:37:33.491Z
Sir can I get the datasets used in our course
1,327
null
[ "java" ]
[ { "code": "", "text": "Hi Team,I need some support on the usage of MongoDB reactive streams for asynchronous processing .How can i use of MongoDB reactive streams along with CompletableFufure for asynchronous processing oris there any other way than CompletableFuture to achieve asynchronous processing with MongoDB reactive streams?Thanks in Adavance Regards,\nAkshay", "username": "Akshay_Bajpai" }, { "code": "", "text": "You can use GitHub - akarnokd/RxJavaJdk8Interop: RxJava 2/3 interop library for supporting Java 8 features such as Optional, Stream and CompletableFuture [discontinued] to convert reactive types to CompletableFuture. We have been using it while migrating our app to reactive streams. However, I really would like to recommend to check out more about reactive streams and consider fully writing your app that way.", "username": "st-h" } ]
MongoDB reactive stream asynchronous processing examples with CompletableFuture
2020-07-27T13:32:04.019Z
MongoDB reactive stream asynchronous processing examples with CompletableFuture
2,638
null
[]
[ { "code": "", "text": "Hi all,\nI have hosted a website on GoDaddy shared hosting developed in NodeJs.\nThe connection string is\nmongoose.connect(‘mongodb+srv://USER:[email protected]/school?retryWrites=true&w=majority’,{\nuseNewUrlParser: true\n//, useUnifiedTopology: true\n}).then(()=>console.log(‘Connection successful’))\n.catch((err)=>console.error );The Same project with the same connection string works fine with Localhost on local pc.\nBut fails to connect on the GODADDY.There is no error in Godaddy terminal,\nEven the ‘Connection successful’ message is not printed in Godaddy console.Getting 404 Error in the webpage.", "username": "Abhijeet_Singh" }, { "code": "", "text": "Could be whitelist issues\nIs your godaddy IP allowed to access Atlas?Please check other threads in stackoverflow\nThey discuss about route path issue", "username": "Ramachandra_Tummala" } ]
Unable to connect to Atlas DB by Website hosted on Godaddy
2020-07-31T18:02:33.361Z
Unable to connect to Atlas DB by Website hosted on Godaddy
3,242
null
[]
[ { "code": "storage:\n dbPath: \"/data/db\"\nsystemLog:\n destination: \"file\"\n path: \"/data/mongod.log\"\nreplication:\n replSetName: M103\nnet:\n bindIp: localhost\n port: 27000\nsecurity:\n keyFile: \"/data/keyfile\"\nprocessManagement:\n fork: true\nsh-4.4# mongod --config mongod.conf \nabout to fork child process, waiting until server is ready for connections.\nforked process: 398\nERROR: child process failed, exited with error number 1\nTo see additional information in this output, start without the \"--fork\" option.\nsh-4.4# \n", "text": "in the M103 university course in the IDE, I have updated mongod.conf file with following detailsthe minimum requirements arewhen I run mongod -f mongod.conf OR mongod --config mongod.conf file keep getting below messageWhat am I missing or got wrong", "username": "Ramkumar_Krishnaswam" }, { "code": "", "text": "Please post it under University forum\nhttps://www.mongodb.com/community/forums/c/M103Did you run without fork as suggested in the error?\nMake sure dbpath,logpath,keyfile path exist", "username": "Ramachandra_Tummala" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Course M103: lab configuration
2020-08-01T21:44:28.205Z
Course M103: lab configuration
1,902
null
[ "java" ]
[ { "code": "MongoClient mongoClient = MongoClients.create(\"mongodb+srv://excel:<password>@excelerate.svfgy.mongodb.net/database?retryWrites=true&w=majority\");\ndatabase = mongoClient.getDatabase(\"database\");\ncollection = database.getCollection(\"collection-1\");\ndatabase.getCollection().find(Filters.eq(\"user-id\", this.userId)).first();\n", "text": "First of all - I am using MongoDB java driver 3.12.2I have created an project on atlas (using the free plan, so shared cluster) and am having trouble connecting to it using the java driver.Currently, I code I am using for connecting looks like this:And that works completely fine. However, when I try to search for a document in the database using code which looks like this:I get a very long exception:com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@7e32e7b1. Client view of cluster state is {type=REPLICA_SET, servers=[{address:27017=excelerate-shard-00-01.svfgy.mongodb.net, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names matching IP address 18.158.163.233 found}, caused by {java.security.cert.CertificateException: No subject alternative names matching IP address 18.158.163.233 found}}, {address:27017=excelerate-shard-00-00.svfgy.mongodb.net, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names matching IP address 18.185.151.83 found}, caused by {java.security.cert.CertificateException: No subject alternative names matching IP address 18.185.151.83 found}}, {address:27017=excelerate-shard-00-02.svfgy.mongodb.net, type=UNKNOWN, state=CONNECTING, 
exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names matching IP address 3.122.218.128 found}, caused by {java.security.cert.CertificateException: No subject alternative names matching IP address 3.122.218.128 found}}]My initial thoughts were that this was an authentication issue - so I triple checked that my network access was set to “access from anywhere”, username and password were correct.Another small thing I took notice of was that even if your password is incorrect in the connection URI, the MongoClients.create won’t throw an exception.\nAlso I tried connecting to the database in the mongo shell, and that seems to work just fine, which led me to think that this is an issue with the driver (which is why this is in the drivers section and not atlas section).I have done numerous google searches to figure this out, and I can’t find anyone with the exact exception I got.Any help is greatly appreciated, and if you need more information I will add that as soon as I get a chance.", "username": "Excel8392" }, { "code": "caused by {java.security.cert.CertificateException: No subject alternative names matching IP address 18.158.163.233 found}},\n", "text": "Hi @Excel8392,Considering the following cause:I believe your issue is that the java key store cannot locate the atlas public CA. This is required as Atlas traffic requires SSL.Please verify that the latest certificate is pushed in your java store:Also look on the java consideration on that page.Let me know if that helps.Pavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for the quick response!I do not have a Let’s Encrypt certificate set up, and the process of setting one up seems to be a little tedious. 
Is it possible to either use a different type of certificate, or no certificate at all?\nSorry if the answer to my question is obvious, I am new to using MongoDB and am still trying to figure everything out.", "username": "Excel8392" }, { "code": "keytool -importcert -file <root-certificate-filename> -keystore </path/to/keystore/keystore.jks> -alias \"Alias\"\nkeytool -list -cacerts >certs.txt\n grep -i 'dst_root' certs.txt\n", "text": "Hi @Excel8392,Atlas requires this certificate for the SSL encryption.For latest Java runtime it should be present in the key store, but I suspect you are using an older version:Let’s Encrypt isn’t present in the default trust store for Java version 7 prior to the 7u111 update, or Java version 8 prior to the 8u101 update. Use a Java release after 19 July 2016.Please ensure your Java client software is up-to-date. The latest Java versions are strongly recommended for many improvements beyond these new Certificate Authority requirements for our TLS certificates.Anyway it should not take so much time to configure it on your machine.Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you so much for the quick response, that fixed it.", "username": "Excel8392" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Issues connecting to Atlas Database
2020-07-29T20:48:52.803Z
Issues connecting to Atlas Database
12,491
null
[ "android" ]
[ { "code": "import io.realm.RealmAppval app: App = App(AppConfiguration.Builder(appID).build())Realm.init(this)import io.realm.mongodb.AppUnresolved reference: mongodb", "text": "I have installed Realm on new android project and import realm as follow:import io.realm.RealmI’m still not able to initialize the app. App is throwing an error\nAm I missing somethingval app: App = App(AppConfiguration.Builder(appID).build())However, I can still do Realm.init(this) without errorPS: I tried importing import io.realm.mongodb.App but MongoDB import is throwing an error saying Unresolved reference: mongodb", "username": "Safik_Momin" }, { "code": "\n \n package com.mongodb.tasktracker\n \n import android.app.Application\n import android.util.Log\n \n import io.realm.Realm\n import io.realm.log.LogLevel\n import io.realm.log.RealmLog\n import io.realm.mongodb.App\n import io.realm.mongodb.AppConfiguration\n \n lateinit var taskApp: App\n \n ", "text": "@Safik_Momin What version are you using? You should be able to clone the tutorial repo, build, and run - I just tested it out and it works for me.", "username": "Ian_Ward" }, { "code": "implementation \"io.realm:android-adapters:4.0.0\"", "text": "Thank you for the update @Ian_Ward. It looks like realm android documentation doesn’t mention to add implementation \"io.realm:android-adapters:4.0.0\" in the dependencies section. This fixes the above problem for me.", "username": "Safik_Momin" }, { "code": "", "text": "implementation “io.realm:android-adapters:4.0.0”I’m migrating an older project that was working fine with full sync, based on the quick-start.\nhttps://docs.mongodb.com/realm/android/quick-start/\nI ran into a similar issue. A ton of clean/rebuilds later I got it to work.Seems like the final solution was updating android-adapters. 
I changed\nimplementation “io.realm:android-adapters:3.0.0”\nto\nimplementation “io.realm:android-adapters:4.0.0”I also still had to add these 2 imports explicitly.import io.realm.mongodb.App\nimport io.realm.mongodb.AppConfigurationAt any rate, thanks @Safik_Momin for asking this earlier and posting that final implementation. Probably saved me another few hours of digging.", "username": "Ryan_Goodwin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[Android] App initialization throws an error
2020-06-11T02:44:06.517Z
[Android] App initialization throws an error
3,579
https://www.mongodb.com/…e99cb1072e65.png
[ "compass" ]
[ { "code": "", "text": "after installing compass, i ran the program but it only get to the point where it is “activating plugins” but never goes beyond this and the circle \"loader keeps on turning but nothing happens … hours and hours can go by, i can restart my machine and still everytime i run this program it gets to the same point the does not go any further than “activating plugins” …someone please help!!!\n", "username": "Angelo_Hedley" }, { "code": "", "text": "Hi @Angelo_Hedley can you please provide a little more information:", "username": "Doug_Duncan" }, { "code": "", "text": "Hello @Angelo_HedleyDo you see any errors when you click ALT+CTRL+I and check out the console tab on the right side?\ngrafik1280×210 23 KB\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "gram but it only get to the point where it is “activating plugins” but never goes beyond this and the circle \"loader keeps on turning but nothing happens … hours and hours can go by, i can restart my machine and still everytime i run this program it gets to the same point the does not go any further than “activating plugins” …HI DougIt only gets to the activating plugins then it does not go any further\nWindows 7 i7 HP ELITEBOOK 8560p\nversion\n1.21.0 (Stable )", "username": "Angelo_Hedley" }, { "code": "", "text": "HI there, I’m experiencing the same issue and look forward to getting a resolve. I’ve uninstalled and reinstalled Compass and still get the same result.", "username": "Charlie_Maru" }, { "code": "%APPDATA%/MongoDB Compass", "text": "Hi @Charlie_Maru, we are looking into the problem but we are having some trouble reproducing it.Can I ask you to do a quick test? You should have all the Compass user preferences in %APPDATA%/MongoDB Compass. 
Can you delete that folder (make a copy of it in case you have something that you want to keep, like favorite connections or query history) and start Compass again?", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Hi @Massimiliano_Marcon\nI’ve done as you’ve suggested but it still stalls at “Activating Plugins”. It goes no further.\n\nimage1600×900 58.8 KB\n", "username": "Charlie_Maru" }, { "code": "", "text": "Thank you for trying. That’s helpful to at least exclude one cause.\nI will keep you posted as we make progress with this issue.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Hi, I am also having this problem!", "username": "Alfredo_N_A" }, { "code": "", "text": "I also have this issue…", "username": "Gabriel_Timofti" }, { "code": "", "text": "Hi @Alfredo_N_A, @Gabriel_Timofti,\nWelcome to the community! If you are experiencing the same issue please provide some additional information to help us track down the problem:\nThanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi. Thx for the response.\nVersion: 1.21.0 (Stable Community)\nOS: Win 10 x64\nIt’s a fresh install.\nI’ll keep updating the status.", "username": "Gabriel_Timofti" }, { "code": "", "text": "I just borrowed my son’s Windows 10 machine to test and Compass opens up just fine for me. I did notice that there are three packages that can be downloaded for Windows.\nI downloaded the EXE package. @Gabriel_Timofti, @Alfredo_N_A and @Charlie_Maru can you verify which package type you’re using? I wonder if it might be a problem with the MSI by chance.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi Doug,\nThank you for your email. So “what is a bear’s favorite drink? = koka-koala!”, I downloaded the MSI package.
Keep in mind I am following the MongoDB University slides at the moment, maybe that’s a great place to start.\nSemper fi,", "username": "Alfredo_N_A" }, { "code": "", "text": "Hi Stennie,\nMongoDB Compass 1.21.0\nWin 10 x64\nI’ve had Mongo for some time, and only recently (3 weeks ago) it started to lock up when activating plugins.", "username": "Charlie_Maru" }, { "code": "", "text": "Hi @Charlie_Maru, Compass 1.21.0 was released on April 28th (about two weeks ago). Is it possible that you updated around that time? If so, it’s possible that the update did something. If you didn’t update, then hopefully the MongoDB engineers can help you track down what’s going on.", "username": "Doug_Duncan" }, { "code": "", "text": "Last week, we released Compass 1.21.1. It should fix some error conditions as well as print some useful error messages in the console of the loading screen if something goes wrong. Could you please give that a try? Thank you!", "username": "Massimiliano_Marcon" }, { "code": "", "text": "I was told to use the isolated edition instead until the problem is fixed. It works out OK, but how will I know when the problem is fixed? Who will let me know?", "username": "Angelo_Hedley" }, { "code": "", "text": "@Angelo_Hedley would you be able to try 1.21.1 (non-isolated edition)? We fixed some of the potential issues in there but we’ve had a hard time reproducing the issue on our Windows boxes. It’d be great to get some feedback from the community on whether the problem is really solved.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Quick update: we found the problem and fixed it. We will release Compass 1.21.2 with the fix very soon.\nThank you for your patience.", "username": "Massimiliano_Marcon" } ]
Mongodb compass- freeze on activating plugins
2020-05-03T21:56:32.219Z
Mongodb compass- freeze on activating plugins
11,594
null
[]
[ { "code": "const collection = context.services.get(\"mycluster\").db(\"mydb\").collection(\"mycollection\");\ncollection.createIndex({ \"StartTime\": 1 });\n", "text": "I have a scheduled trigger that depends on an index, and I would like to ensure that the index exists before running the trigger. I’ve attempted to do it this way:\nBut I am getting an error saying createIndex does not exist. Is there any way to ensure an index exists within a scheduled trigger?", "username": "Greg_Fitzpatrick-Bel" }, { "code": "runCommand", "text": "Hi @Greg_Fitzpatrick-Bel,\nCreating an index via the createIndex command is not supported by the collection Realm API. Running runCommand is not available either.\nHaving said that, if you are using a dedicated cluster you can use the rolling index create Atlas API via a context.http request from your function.\nhttps://docs.atlas.mongodb.com/reference/api/rolling-index-create-one/\nThe following blog can showcase how to use the Atlas API via a trigger:\nLearn how to automate your MongoDB Atlas cluster with scheduled triggers.\nPlease bear in mind that a trigger is bound to 90s execution time. A trick can be to have a flow collection saving the state of the task and updating the state after each run. Then a database-based follow-up trigger can be run until completion, which will stop the execution chain.\nLet me know if that helps.\nBest regards\nPavel", "username": "Pavel_Duchovny" } ]
Creating an Index within a scheduled trigger
2020-07-31T21:32:49.296Z
Creating an Index within a scheduled trigger
2,122
null
[]
[ { "code": "", "text": "Can anyone provide any guidance here - I have used the javascript previously to migrate a database file to the cloud service and don’t recall having any problems. We are just preparing for the upgrade to RealmSwift 5.2.0 and are having trouble loading a copy of the database into a test environment.\nThe script appears to log in just fine and completes the migration from the local file fine with no errors, but there doesn’t seem to be any data showing up in the realm when opened with Realm Studio.\nCan’t think what we might have done wrong. Is there any way to check on the cloud service whether the client connected and started the sync?", "username": "Duncan_Groenewald" }, { "code": "", "text": "@Duncan_Groenewald We will need more information to help - like SDK versions, new or old product, code snippets, and some logs - preferably with trace or debug enabled.", "username": "Ian_Ward" }, { "code": "", "text": "Sorry my bad - the script was picking up an old version of the file which had a different server URL in it !!", "username": "Duncan_Groenewald" }, { "code": "", "text": "And thanks for the prompt response ", "username": "Duncan_Groenewald" }, { "code": "let items = realm.objects(Item.self)\n\n// This correctly lists all items\nfor item in items {\n print(\"item: \\(item.id), \\(item.name)\")\n}\n\n// and this always returns nil\nlet item = realm.object(ofType: Item.self, forPrimaryKey: id)\nprint(\"item: \\(item?.id), \\(item?.name)\")\nRealm.asyncOpen(configuration: self.realmSyncConfig, callbackQueue: .global()) { (realm, error) in\n completion(realm, error)\n }\nRealm.asyncOpen(configuration: self.realmSyncConfig, callbackQueue: .main) { (realm, error) in\n completion(realm, error)\n }\n", "text": "Ian - we are seeing some new (strange) behaviour when using 5.2.0 now that doesn’t make a lot of sense. We use quite a lot of background threads for processing of reports etc.
and for some reason now we are seeing a lot of failures when doing realm.objects(XXX.self).filter(\"id == %@\", id) - which returns nil, but when doing a listing of the results of realm.objects(XXX.self) we get a complete list of all the expected objects.\nIs there some change in behaviour in RealmSwift 5.2.0 that might cause a problem with realm queries - or any known bugs that we might be unaware of? We have been using RealmSwift 4.4.1/macOS 10.15 with no problems but figure we should upgrade in preparation for an eventual migration to Mongo Realm.\nI am puzzled as to how something like the following can return nil.\nWe will continue to investigate but if anyone has encountered anything like this before please let me know.\nWe do make extensive use of operation queues but have never had an issue in the past, so unless there is some change between RealmSwift 4.4.1 and 5.2.0 that may cause problems when using operation queues I can’t see why we should suddenly be encountering any problems.\nThe example above is on consecutive lines in the same function call, so even with operation queues I would expect that would be executed on the same thread, so I can’t think switching threads of execution could be the reason for the problems now.\nOne of the function calls that fails with 5.2.0 is the following:\nSo it has to be changed to the following:\nAre there release notes for 5.2.0 that explain why this change is necessary?", "username": "Duncan_Groenewald" }, { "code": "", "text": "We will continue to investigate but if anyone has encountered anything like this before please let me know.\nYes, the issue is described here - Realm.object returns NIL object when queried by primary key in Realm 10.0 beta2 · Issue #6672 · realm/realm-swift · GitHub\nWe are currently investigating. Feel free to follow the issue.", "username": "Ian_Ward" } ]
Migration to Cloud using javascript script appears to complete but no data is showing up in the realm cloud database
2020-07-31T11:54:28.476Z
Migration to Cloud using javascript script appears to complete but no data is showing up in the realm cloud database
1,401
null
[]
[ { "code": "", "text": "I opened the ToDo project from GitHub (my_first_realm_app (dot net)) and replaced the AuthUrl var with my realm address. Everything works as expected except for these two issues: First, the app crashes here: realm = await Realm.GetInstanceAsync(configuration); I replaced the line with GetInstance(configuration) as a quick fix. Second, the user and the data are not saved locally. The user’s credentials have to be reentered each time the app is initialized, and when the program is offline the data does not populate. I am working in C# with Visual Studio 2010 with Realm version 4.3.0\nAny ideas on how to fix these issues? Thanx!", "username": "Jean-Luc_Chalumeau" }, { "code": "", "text": "Hi @Jean-Luc_Chalumeau. We’ve been using Realm .NET for a few years. We have found GetInstanceAsync unpredictable. Instead, at app start we await Session.AwaitForDownloadAsync. Then we use Realm.GetInstance. This ensures you are working with an up-to-date realm.\nYou do need to provision for either await GetInstanceAsync or AwaitForDownloadAsync never returning, so work out a mechanism for timing them out. We use this little helper\npublic static async Task<bool> TimeoutAsync(Task mainTask, int timeout = 5000)\n{\nTask delayTask = Task.Delay(timeout);\nTask winner = await Task.WhenAny(mainTask, delayTask);\nbool result = (winner == mainTask);\nreturn result;\n}", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "Thank you for your suggestion. It worked and fixed the GetInstance issue. Any idea on how to resolve the User.Current and data not being saved locally? Thanx!", "username": "Jean-Luc_Chalumeau" }, { "code": "", "text": "My understanding with sync is there is no local database, merely a cache. This cache is cleared if the user is logged out.\nThe user credentials are also cached so it should not be necessary to log in the user again, unless the user has been logged out.
At startup we check\nUserState loginState = realmUser.State;\nif (loginState == UserState.Active)\nisLoggedIn = true;\nAlso don’t use\nconfig.ClientResyncMode = ClientResyncMode.DiscardLocalRealm\nas this discards the local cache when the app shuts down.", "username": "Nosl_O_Cinnhoj" } ]
User.Current not caching
2020-07-30T22:29:57.084Z
User.Current not caching
1,917
null
[ "aggregation" ]
[ { "code": "db.getCollection('sessions').aggregate([\n { \n $match: { \n $and: [ \n { \"StartTime\": { $gte: ISODate(\"2020-02-01T18:52:34.000Z\") } }, \n { \"StartTime\": { $lte: ISODate(\"2020-02-02T18:52:34.000Z\") } } \n ] \n } \n },\n {\n $group: { \n _id: { id: \"$locationId\" }, \n numberOfSessions: { $sum: 1 },\n Sessions: { \n $push: { \n _id: \"$_id\", \n SessionID: \"$SessionID\", \n StartTime: \"$StartTime\", \n EndTime: \"$EndTime\"\n } \n }\n }\n },\n {\n $match: {\n numberOfSessions: { $gte: 2 }\n }\n },\n { $sort: { 'Sessions.StartTime': 1 } },\n { $project: { StartTime: -1, Sessions: -1, numberOfSessions: -1 } }\n ],\n {\n allowDiskUse: true\n } \n)", "text": "I’m attempting to group sessions by location, and then sort the session objects within the nested Sessions array by StartTime. I’ve found a way to do this using an index on StartTime, which is used in the initial $match, but I’d like a way to explicitly specify my sorting intentions. Does anyone have a way I can do this?\nThis is how I would like the aggregation to work. Essentially the $sort stage here does nothing.
Is there any way to $sort the objects pushed into Sessions in the $group stage by StartTime?", "username": "Greg_Fitzpatrick-Bel" }, { "code": "db.sessions.insertMany([\n {\n _id: 'sesh-1',\n startTime: ISODate('2020-07-01T18:52:34.000Z'),\n endTime: ISODate('2020-08-01T18:52:34.000Z'),\n locationId: 'L2',\n },\n {\n _id: 'sesh-2',\n startTime: ISODate('2020-05-01T18:52:34.000Z'),\n endTime: ISODate('2020-06-01T18:52:34.000Z'),\n locationId: 'L1',\n },\n {\n _id: 'sesh-3',\n startTime: ISODate('2020-03-01T18:52:34.000Z'),\n endTime: ISODate('2020-04-01T18:52:34.000Z'),\n locationId: 'L1',\n },\n {\n _id: 'sesh-4',\n startTime: ISODate('2020-01-01T18:52:34.000Z'),\n endTime: ISODate('2020-02-01T18:52:34.000Z'),\n locationId: 'L2',\n },\n]);\ndb.sessions.aggregate([\n {\n $match: {\n // match the docs anyhow you want\n // this $match stage will match all docs\n // from sessions collection\n }\n },\n {\n $group: {\n // group by $locationId\n _id: '$locationId',\n numberOfSessions: {\n $sum: 1,\n },\n sessions: {\n $push: {\n _id: '$_id',\n startTime: '$startTime',\n andTime: '$endTime',\n }\n }\n }\n },\n {\n // this is needed to sort items in $sessions array\n $unwind: '$sessions',\n },\n {\n $sort: {\n // specify $sessions sort params here\n 'sessions.startTime': 1,\n }\n },\n {\n // this $group stage is needed, because we did\n // $unwind before\n $group: {\n _id: '$_id',\n numberOfSessions: {\n $first: '$numberOfSessions',\n },\n sessions: {\n $push: '$sessions',\n }\n }\n }\n]).pretty();\n[\n {\n \"_id\" : \"L1\",\n \"numberOfSessions\" : 2,\n \"sessions\" : [\n {\n \"_id\" : \"sesh-3\",\n \"startTime\" : ISODate(\"2020-03-01T18:52:34Z\"),\n \"andTime\" : ISODate(\"2020-04-01T18:52:34Z\")\n },\n {\n \"_id\" : \"sesh-2\",\n \"startTime\" : ISODate(\"2020-05-01T18:52:34Z\"),\n \"andTime\" : ISODate(\"2020-06-01T18:52:34Z\")\n }\n ]\n },\n {\n \"_id\" : \"L2\",\n \"numberOfSessions\" : 2,\n \"sessions\" : [\n {\n \"_id\" : \"sesh-4\",\n \"startTime\" : 
ISODate(\"2020-01-01T18:52:34Z\"),\n \"andTime\" : ISODate(\"2020-02-01T18:52:34Z\")\n },\n {\n \"_id\" : \"sesh-1\",\n \"startTime\" : ISODate(\"2020-07-01T18:52:34Z\"),\n \"andTime\" : ISODate(\"2020-08-01T18:52:34Z\")\n }\n ]\n }\n]\n", "text": "Hello, @Greg_Fitzpatrick-Bel!\nLet me show how to solve this by example.\nAssume we have this dataset:\nAnd this aggregation:\nwill provide us with this result:\nNotice that the session objects are inserted in arbitrary order, but the aggregation returned sessions ordered by startTime.", "username": "slava" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using $group to group data by a field, and then $sort by field within nested array
2020-07-31T17:59:43.820Z
Using $group to group data by a field, and then $sort by field within nested array
10,657
https://www.mongodb.com/…d783b57c8603.png
[ "production", "php" ]
[ { "code": "mongodbClient::listDatabaseNamesDatabase::listCollectionNames()nameOnlylistCollectionsClient::listDatabases()authorizedDatabasesCollection::deleteOne()deleteMany()findOneAndDelete()hintCollection::findOneAndReplace()findOneAndUpdate()hintCollection::createIndex()createIndexes()commitQuorumMongoDB\\Operation\\AggregateMongoDB\\Operation\\ExplainableCollection::explain()explainCollection::aggregate()driver$driverOptionsmongodbappNamemongodbcomposer require mongodb/mongodb^1.7.0\nmongodb", "text": "The PHP team is happy to announce that version 1.7.0 of the MongoDB PHP library is now available. This library is a high-level abstraction for the mongodb extension. This release adds support for new features in MongoDB 4.4.\nRelease Highlights\nNew Client::listDatabaseNames and Database::listCollectionNames() methods allow enumeration of database and collection names without returning additional metadata. In the case of collection enumeration, this leverages the nameOnly option for listCollections and avoids taking a collection-level lock on the server.\nClient::listDatabases() now supports an authorizedDatabases option, which can be used with MongoDB 4.0.5 or newer.\nThe Collection::deleteOne(), deleteMany(), and findOneAndDelete() methods now support a hint option to specify an index that should be used for the query. This option is also supported for delete operations in bulk writes. This option requires MongoDB 4.4 or later.\nThe Collection::findOneAndReplace() and findOneAndUpdate() methods now support a hint option, which requires MongoDB 4.2.\nCollection::createIndex() and createIndexes() now support a commitQuorum option, which can be used with MongoDB 4.4.\nThe MongoDB\\Operation\\Aggregate class now implements the MongoDB\\Operation\\Explainable interface and can be used with Collection::explain().
This is an alternative to the explain option supported by Collection::aggregate() and allows for more verbose output when explaining aggregation pipelines.\nThe Client constructor now supports a driver option in its $driverOptions parameter, which can be used by wrapping drivers and libraries to append metadata (e.g. name and version) to the server handshake. The PHP library will also now append its own name and version to the metadata reported by the mongodb extension. Note that this feature is primarily designed for custom drivers and ODMs, which may want to identify themselves to the server for diagnostic purposes. Applications should use the appName URI option instead of driver metadata.\nThis release upgrades the mongodb extension requirement to 1.8.0. Support for PHP 5.6 has been removed and the library now requires PHP 7.0 or newer.\nA complete list of resolved issues in this release may be found at: Release Notes - MongoDB Jira\nDocumentation\nDocumentation for this library may be found at:\nFeedback\nIf you encounter any bugs or issues with this library, please report them via this form:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12483&issuetype=1\nInstallation\nThis library may be installed or upgraded with:\nInstallation instructions for the mongodb extension may be found in the PHP.net documentation.", "username": "jmikola" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Library 1.7.0 Released
2020-07-31T21:00:59.894Z
MongoDB PHP Library 1.7.0 Released
2,419
null
[ "performance" ]
[ { "code": "", "text": "Hello, I recently had a problem with Atlas and I would like to have your opinion on the subject.\nIn fact, I have created a very simple process that retrieves the elements of a collection called “balances” (this collection contains about a hundred elements in total).\nThis works very well on a local Mongo database (response time of a few seconds), but when trying on the remote MongoDB Atlas database, the same code takes several minutes to read and display the same entries.\nSince in both cases I tested with the same code and the same entries in the DB, I suspect a latency in the connection server <-> Atlas DB.\nI can’t find answers on the internet so I would like to know whether you have an idea of the reasons behind this latency.\nThank you in advance for your feedback,\nYasmine from TEO", "username": "TEO_TheEnergyOrigin" }, { "code": "", "text": "Hi Yasmine,\nUsually when folks report that there is unexpectedly high latency, this is either because they’re not testing from within the same region on Atlas, or instead because they’re opening a new connection for every query. When using MongoDB Atlas, connectivity requires TLS/SSL (encryption over the wire) and SCRAM authentication. There is a work function implicit in this connection protocol which does take on the order of 100ms to run. However, after a connection is opened, it can be re-used in your code.\nIs it possible that you’re opening and closing connections unnecessarily, or does it look like something else may be going on?\nCheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Network latency server <-> Atlas (not happening on local MongoDB)
2020-07-31T18:00:23.734Z
Network latency server &lt;-&gt; Atlas (not happening on local MongoDB)
4,743
null
[]
[ { "code": "", "text": "I’m attempting to use a scheduled trigger to see if the times of two documents (think of the two documents as a ‘session’ with a start time and end time) overlap. My scheduled trigger runs once a day and will return a large array of documents, which then must be compared to other documents to see if there is any overlap.\nI have this working in my scheduled trigger in practice by passing in an array of 10 documents, but in reality, when the trigger runs, it will be processing up to a thousand documents. I’m finding that my trigger often exceeds the execution time limit and errors out. Is there any way to increase the execution time limit in a scheduled trigger? I’ve read about event bridging, but that functionality seems to only be available in database triggers, not scheduled triggers.", "username": "Greg_Fitzpatrick-Bel" }, { "code": "", "text": "I solved this by using .explain() to see what kind of query was running and adding the appropriate index.", "username": "Greg_Fitzpatrick-Bel" } ]
Using a scheduled trigger on a large dataset
2020-07-30T17:12:48.570Z
Using a scheduled trigger on a large dataset
1,918
null
[ "python", "production" ]
[ { "code": "collection.update_one({\"quantity\": 1057, \"category\": \"apparel\"},{\"$set\": {\"reorder\": True}})\ncollection.database.command(SON([('explain', SON([('update', 'products'), ('updates', [{'q': {'quantity': 1057, 'category': 'apparel'}, 'upsert': False, 'multi': False, 'u': {'$set': {'reorder': True}}}])])), ('verbosity', 'queryPlanner')]))\nExplainCollection(collection).update_one({\"quantity\": 1057, \"category\": \"apparel\"},{\"$set\": {\"reorder\": True}})\npip install pymongoexplain", "text": "We are pleased to announce the release of PyMongoExplain, an easier way to run explain on PyMongo commands.\nPyMongoExplain greatly simplifies the amount of effort needed to explain commands. For example, suppose we wanted to explain the following update_one:\nBefore PyMongoExplain, one would need to convert the update_one into the equivalent MongoDB command:\nAfter PyMongoExplain:\nLinks:", "username": "Julius_Park" }, { "code": "", "text": "", "username": "system" } ]
PyMongoExplain 1.0.0 Released
2020-07-31T16:06:44.877Z
PyMongoExplain 1.0.0 Released
2,205
null
[ "java" ]
[ { "code": "", "text": "Hello, I want all my operations to be done async, so I chose the async MongoDB driver for my project, but it’s deprecated, and it says to use MongoDB Reactive Streams. But are those done async by default? And also, they are slightly different than the normal MongoDB Java driver. Can someone give me a tutorial on them, and tell me if they are async?\nEdit: I tried to insert one document following this tutorial Quick Tour but there is no class named OperationSubscriber and PrintDocumentSubscriber.\nI am currently using reactive streams 1.13.1.", "username": "Zdziszkee_N_A" }, { "code": "", "text": "Hello @Zdziszkee_N_A,\nThe link to the OperationSubscriber and PrintDocumentSubscriber helper classes on GitHub.", "username": "Prasad_Saya" } ]
Async Java MongoDB driver
2020-07-30T10:55:51.281Z
Async Java MongoDB driver
3,289
null
[ "production", "php" ]
[ { "code": "tlsDisableOCSPEndpointChecktlsDisableCertificateRevocationChecktlsAllowInvalidCertificatestlsInsecuredirectConnectionreplicaSetdirectConnection=falsereplicaSetdirectConnection=truereplicaSetzstdcompressorshedge['enabled' => true]MongoDB\\Driver\\BulkWrite::update()hintdriver$driverOptionspecl install mongodb-1.8.0\npecl upgrade mongodb-1.8.0\n", "text": "The PHP team is happy to announce that version 1.8.0 of the mongodb PHP extension is now available on PECL. This release adds support for new features in MongoDB 4.4.\nRelease Highlights\nThis release introduces support for OCSP and OCSP stapling, which is used to validate the revocation status of TLS certificates. OCSP is enabled by default, but can be controlled via two new URI options: tlsDisableOCSPEndpointCheck and tlsDisableCertificateRevocationCheck. The existing tlsAllowInvalidCertificates and tlsInsecure URI options may also be used to disable OCSP.\nThe driver now supports a directConnection URI option, which can be used to control replica set discovery behavior when only a single host is provided in the connection string. By default, providing a single member in the connection string will establish a direct connection or discover additional members depending on whether the replicaSet option is omitted or present, respectively. This default behavior remains unchanged, but applications can now specify directConnection=false to force discovery to occur (if replicaSet is omitted) or specify directConnection=true to force a direct connection (if replicaSet is present).\nThe driver now supports Zstandard compression if it is available during compilation.
Applications can opt into using Zstandard by specifying zstd in the compressors URI option, which is used to negotiate supported compression formats when connecting to MongoDB.\nThe ReadPreference constructor now supports a hedge option, which can be passed ['enabled' => true] to enable Hedged Reads when connected to a MongoDB 4.4 sharded cluster.\nThis release adds several authentication improvements. The driver supports the new MONGODB-AWS authentication mechanism, which can be used when connecting to a MongoDB Atlas cluster that has been configured to support authentication via AWS IAM credentials. Additionally, the driver now uses a shorter conversation with the server when authenticating with a SCRAM mechanism.\nMongoDB\\Driver\\BulkWrite::update() now supports a hint option, which can be used with MongoDB 4.4 or later.\nThe Manager constructor now supports a driver option in its $driverOptions parameter, which can be used by wrapping drivers and libraries to append metadata (e.g. library name and version) to the server handshake.\nThis release upgrades our libbson and libmongoc dependencies to 1.17.0. Support for PHP 5.6 has been removed and the extension now requires PHP 7.0 or newer.\nA complete list of resolved issues in this release may be found at: Release Notes - MongoDB Jira\nDocumentation\nDocumentation is available on PHP.net:\nPHP: MongoDB - Manual\nFeedback\nWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6\nInstallation\nYou can either download and install the source manually, or you can install the extension with:\nor update with:\nWindows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb", "username": "Andreas_Braun" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Extension 1.8.0 Released
2020-07-31T05:54:58.399Z
MongoDB PHP Extension 1.8.0 Released
4,484
null
[ "aggregation" ]
[ { "code": "var lastHour = new Date();\nlastHour.setHours(lastHour.getHours()-24);\n \ndb.tt.aggregate([ \n { $match: { \"lastModifiedDate\":{$gt: lastHour} } },\n { $group: { \n _id: { \n month: { $month: \"$lastModifiedDate\" }, \n day: { $dayOfMonth: \"$lastModifiedDate\" }, \n year: { $year: \"$lastModifiedDate\" }, \n }, \n reject: { $sum: { $cond : [{ $in : [\"$meta.ui.step\", [\"rejectedCheck\",\"rejectedWrongDetails\"]]}, 1, 0]} },\n cases: { $sum: 1 }\n } }, \n {$project:{Aborted:1, reject:1, cases:1, ratio: { $divide: [ \"$reject\", \"$cases\" ]}}\n },\n \n ],\n {\n allowDiskUse: true\n } \n)\n{\n \"_id\" : ObjectId(\"5f058e3feab5bf000668563f\"),\n \"meta\" : {\n \"ui\" : {\n \"step\" : \"rejectedCheck\",\n \"wasStepCompleted\" : true,\n \"uiVersion\" : \"5.5.2\"\n },\n \"attributions\" : []\n },\n \"custom\" : {\n \"CaseId\" : \"2070r41111\",\n \"statusLastUpdate\" : NumberLong(1594206118668)\n },\n \"caseToken\" : \"hjkhjkhj.tyjggy\",\n \"uuid\" : \"bd5c0b13-fe0d-4d94-a1b3-aeb66b1ad61d\",\n \"createdDate\" : ISODate(\"2020-07-08T09:13:35.699Z\"),\n \"lastModifiedDate\" : ISODate(\"2020-07-08T11:01:58.717Z\")\n \n}\n", "text": "Can you please advise how I can filter the ratio and the cases columns?\nI want to filter ratio>0.5 and cases>2.\nBasically I’m trying to convert the T-SQL query below:\nselect\ncase when cast(rejects as float)/ cast(cases as float) > 0.66 and cases >=20 then '1' else '0' end as Alert from\n( select count (distinct uuid) cases, count (distinct case when [meta.ui.step] =\n'RejectedMBCheck' or [meta.ui.step] = 'rejectedBCheck' or [meta.ui.step]\n= 'rejectedWrongDetails' then [Uuid]end ) rejects, cast([createdDate.$date] as date)\ndate from dwh_fact_cases group by cast([createdDate.$date] as date)\nHere is my query:\nHere is a sample document:", "username": "Lital_ez" }, { "code": "", "text": "Hello, @Lital_ez!\nIn order to help you, provide an example of a document from your collection.\nAlso, please make sure your post is well-formatted and easily readable.", "username": "slava" }, {
"code": "", "text": "hey.\nDo you need more info?Thanks", "username": "Lital_ez" }, { "code": "{\n \"_id\" : {\n \"month\" : 7,\n \"day\" : 8,\n \"year\" : 2020\n },\n \"reject\" : 1,\n \"cases\" : 1,\n \"ratio\" : 1\n}\ndb.tt.aggregate([\n { $match: { /* ... */ } },\n { $group: { /* ... */ } },\n { $project: { /* ... */ } },\n {\n $match: {\n ratio: {\n $gt: 0.5\n },\n cases: {\n $gt: 2\n }\n }\n }\n]);\n", "text": "Hi, @Lital_ez!Ok, so the example output of your aggregation is:You want to filter this out that result by those conditions, right?I want to filter ratio>0.5 and cases>2If so, you can just add another $match stage in the end of your pipeline:", "username": "slava" }, { "code": "", "text": "Thanks it working…can you also advice why below returning same value?\nboth should return different resultsreject: { $sum: { $cond : [{ $in : [\"$meta.ui.step\", [“rejectedMBCheck”,“rejectedBBLCheck”,“rejectedWrongDetails”]]}, 1, 1]} },cases: { $sum: 1 }", "username": "Lital_ez" }, { "code": "[\n /* ... */\n {\n $group: {\n /* ... */\n reject: {\n $sum: {\n $cond: [\n {\n $in : [\n \"$meta.ui.step\", \n [\n 'rejectedMBCheck',\n 'rejectedBBLCheck',\n 'rejectedWrongDetails'\n ]\n ]\n },\n 1, // returns, if above contition will resolve to TRUE\n 1, // returns, if above contition will resolve to FALSE\n ]\n }\n },\n }\n },\n /* ... */\n]\n{ $sum: 1 }\n", "text": "Have a look at your condition inside $cond operator: no matter to what value (true or false) your condition is resolved, you will always take the same action - add 1.\nSo, it works just like:That’s why you get the same result for both expressions.", "username": "slava" }, { "code": "$group: {\n /* ... */\n reject: {\n $sum: {\n $cond: [\n {\n $in : [\n \"$meta.ui.step\", \n [\n 'rejectedMBCheck',\n 'rejectedBBLCheck',\n 'rejectedWrongDetails'\n ]\n ]\n },\n 1, // returns, if above contition will resolve to TRUE\n 0, // returns, if above contition will resolve to FALSE\n ]\n }\n },\n }\n },\n /* ... 
*/\n]\n", "text": "Thanks for your fast reply!!!\nSo it should be 1,0", "username": "Lital_ez" }, { "code": "", "text": "so it should be 1,0\nYes, if you want to count only the values that would resolve to true with your $cond operator ", "username": "slava" }, { "code": "", "text": "It’s not working. Can you please advise?\nreject: { $sum: { $cond : [{ $in : [\"$meta.ui.step\", [\"rejectedMBCheck\"]]}, 1,0]}},", "username": "Lital_ez" }, { "code": "", "text": "Can you provide a small dataset and the aggregation you use, so I can reproduce the issue?", "username": "slava" }, { "code": "", "text": "While preparing sample data for you it did work.\nSo all good.\nThanks a lot!!! ", "username": "Lital_ez" }, { "code": "db.getCollection('test').find(\n\n{$and: [ {'lastModifiedDate': {$gte: ISODate('2019-07-13T00:00:00.000Z')}},{'lastModifiedDate': {$lte: ISODate('2020-07-29T23:59:59.999Z')}}\n ,{'meta.ui.step':{$in:[\"brokerInitiated\",\"rejectedMBCheck\",\"rejectedBBLCheck\",\"rejectedWrongDetails\"]}}\n ] \n }\n )\nvar lastHour = new Date();\nlastHour.setHours(lastHour.getHours()-24000);\n lastHour\ndb.test.aggregate([ \n { $match: { \"lastModifiedDate\":{$gt: lastHour} } },\n { $group: { \n _id: { \n month: { $month: \"$lastModifiedDate\" }, \n day: { $dayOfMonth: \"$lastModifiedDate\" }, \n year: { $year: \"$lastModifiedDate\" }, \n }, \n reject: { $sum: { $cond : [{ $in : [\"$meta.ui.step\", [\"brokerInitiated\",\"rejectedBBLCheck\",\"rejectedWrongDetails\"]]}, 1,0]}},\n cases: { $sum: 1 }\n } }, \n {$project:{Aborted:1, reject:1, cases:1, ratio: { $divide: [ \"$reject\", \"$cases\" ]}}\n },\n { $match: { ratio: { $gt: 0.5}, cases: {$gt: 1 } }}\n])\n", "text": "Hey slava,\nI think there is a bug in the aggregation framework.\nworking\nnot working", "username": "Lital_ez" } ]
Aggregation query assistance
2020-07-24T08:11:41.996Z
Aggregation query assistance
1,800
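The $cond counting logic slava explains in the thread above can be sketched in plain Node.js with no MongoDB server; the sample documents and step names below are hypothetical:

```javascript
// Documents standing in for the grouped input (hypothetical data).
const docs = [
  { meta: { ui: { step: "rejectedMBCheck" } } },
  { meta: { ui: { step: "approved" } } },
  { meta: { ui: { step: "rejectedBBLCheck" } } },
];
const rejectSteps = ["rejectedMBCheck", "rejectedBBLCheck", "rejectedWrongDetails"];

// $sum: { $cond: [ <in rejectSteps>, 1, 0 ] } adds 1 only for matching documents.
// With [..., 1, 1] both branches add 1, which is why reject always equalled cases.
const reject = docs.reduce(
  (acc, d) => acc + (rejectSteps.includes(d.meta.ui.step) ? 1 : 0),
  0
);
const cases = docs.length; // $sum: 1 counts every document in the group
console.log({ reject, cases, ratio: reject / cases });
```

Changing the third $cond argument from 0 back to 1 reproduces the "both expressions return the same value" symptom from the thread.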
null
[]
[ { "code": "", "text": "Hi All,I was learning index and thought of checking how plan work with findOne. When I executed below querydb.people.findOne({“address.city”:“Burgessborough”}).explain(“executionStats”)the below exception is raised:uncaught exception: TypeError: db.people.findOne(…).explain is not a functionAny thoughts?Regards,\nYadvinder", "username": "Yadvinder_Kumar" }, { "code": "explainfindOneexplaindb.collection.explain().find()db.collection.find().explain()find", "text": "Hello @Yadvinder_Kumar, welcome to the forum.The explain method cannot be used with the findOne method. Here are the list of methods on which the explain can be used with: db.collection.explainExplain can be used two ways, with the find method:", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you @Prasad_Saya.", "username": "Yadvinder_Kumar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Explain function exception with findOne
2020-07-30T17:13:00.956Z
Explain function exception with findOne
4,747
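The error in the thread above comes down to what each shell helper returns. A minimal plain-JavaScript sketch, using hypothetical stand-in objects rather than the real shell API:

```javascript
// find() hands back a cursor-like object that carries explain();
// findOne() resolves the query immediately and returns a plain document.
function find(query) {
  return { explain: (mode) => ({ mode, queryPlanner: {} }) }; // stub cursor
}
function findOne(query) {
  return { "address.city": "Burgessborough" }; // stub document, no explain()
}

console.log(typeof find({}).explain);    // "function"
console.log(typeof findOne({}).explain); // "undefined" -> ".explain is not a function"
```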
null
[]
[ { "code": "", "text": "I have followed the instructions for the osx/linux installation to the letter but just does not work. Has anyone experienced this and how was it solved", "username": "Stephen_Hackman" }, { "code": "", "text": "Is that version listed in downloads?Check this link", "username": "Ramachandra_Tummala" }, { "code": "", "text": "", "username": "system" } ]
Chapter 1: Installing MongoDB on Ubuntu 20.04 does not work
2020-07-31T00:31:25.127Z
Chapter 1: Installing MongoDB on Ubuntu 20.04 does not work
1,365
null
[]
[ { "code": "", "text": "I can’t figure this out. In the mongo shell I run this:const d = new Date();\nprint(d);I get\nThu Jul 30 2020 17:21:55 GMT-0500 (CDT)I want\n30-JUL-2020 17:21:55how do you do this?", "username": "David_Lange" }, { "code": "", "text": "const d = new Date();\nprint(d.getMonth().toString() + ‘-’ + d.getDate().toString() + ‘-’ + d.getFullYear().toString() + ’ ’ + d.getHours().toString() + ‘:’ + d.getMinutes().toString() + ‘:’ + d.getSeconds().toString());\n~", "username": "David_Lange" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Format a date from ISO to string
2020-07-30T22:25:34.457Z
Format a date from ISO to string
1,535
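A runnable sketch of the formatting the thread is after. Note that Date.prototype.getMonth() is zero-based, so indexing into a month-name table avoids the off-by-one lurking in the shell one-liner above:

```javascript
// Format a Date as DD-MON-YYYY HH:MM:SS (the "30-JUL-2020 17:21:55" shape).
const MONTHS = ["JAN","FEB","MAR","APR","MAY","JUN","JUL","AUG","SEP","OCT","NOV","DEC"];

function formatDate(d) {
  const pad = (n) => String(n).padStart(2, "0"); // zero-pad to two digits
  return `${pad(d.getDate())}-${MONTHS[d.getMonth()]}-${d.getFullYear()} ` +
         `${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}`;
}

// Month index 6 is July because getMonth() counts from 0.
console.log(formatDate(new Date(2020, 6, 30, 17, 21, 55))); // → 30-JUL-2020 17:21:55
```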
null
[ "node-js" ]
[ { "code": "", "text": "Hi,We are working on server-less solutions in Azure and using a lot of Node.js functions to connect to Atlas.It’s turned out that we have “connections” problem. Let me explain what is the problem:Every time when a new function instance is fired up by Azure, we need to fire up a connection to Atlas and then function become active, let’s say for 5 min. Then a new a new function instance generates and a new connection is needs to be created.But then in our Test environment we’ve got 14000 connections very quickly, we were very supersized.By looking at the performance for this solution, we realized it would be much more efficient to create one Atlas connection and use it across all node.js function instance withing one microservice.We have done some research and it’s turned out that the same problem happened in AWS cloud, when people use Lambda functions.We have found this article: https://docs.atlas.mongodb.com/best-practices-connecting-to-aws-lambda/ and as a best practice it’s recommended to use “callbackWaitsForEmptyEventLoop”. 
But the problem is that we can't find an analog in Azure Functions.Can someone please help us?", "username": "Dmitry_Shyryayev" }, { "code": "//Hold client across function invocations\nlet client = null;\n\nmodule.exports = async function (context, req) {\n// Connect using MongoClient\nif (client == null) {\n client = await MongoClient.connect(uri);\n console.log(\"(Re)Created Client\");\n}\n//Read documents in the collection\nconst collection = client.db(\"dbName\").collection(\"collectionName\");\nconst docs = await collection.find({}).toArray();\nconsole.log(\"Found the following records\");\nconsole.log(docs);\n}\n", "text": "Hi Dmitry,For Azure functions, it is recommended to create a static client for MongoDB outside the event handler, to reuse connections across function invocations, instead of recreating the connection each time.This is one simple example:\nconst MongoClient = require('mongodb').MongoClient;\nconst uri = \"URI from config\";The example provided in the blog post for AWS you referenced will work too.Azure functions do not support callbackWaitsForEmptyEventLoop. There is an open GitHub issue for that - Let user configure option to error if on empty event loop · Issue #67 · Azure/azure-functions-nodejs-worker · GitHubHope this helps.", "username": "Prashant_Gupta" } ]
callbackWaitsForEmptyEventLoop for Azure?
2020-07-26T23:42:59.078Z
callbackWaitsForEmptyEventLoop for Azure?
1,969
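The connection-reuse pattern recommended in the thread can be exercised locally with a stub in place of MongoClient.connect (the stub is hypothetical; a real handler would await the driver's connect call instead):

```javascript
// Module scope survives across invocations of the same function instance,
// so the client is created once and reused afterwards.
let cachedClient = null;
let connectCount = 0;

function connect() {
  connectCount += 1;        // track how many "connections" were opened
  return { id: connectCount }; // pretend client handle
}

// Simulated function handler: connect lazily on first call, reuse after.
function handler() {
  if (cachedClient === null) {
    cachedClient = connect();
  }
  return cachedClient;
}

const first = handler();
const second = handler();
console.log(connectCount, first === second); // 1 true — one connection, reused
```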
https://www.mongodb.com/…233057a9031a.png
[ "connector-for-bi" ]
[ { "code": "https://docs.mongodb.com/bi-connector/current/connect/powerbi/\n", "text": "I have a MongoDB Community installation, on an Ubuntu 18.04 server. I can connect to it fine, using Compass from my desktop.I would like to connect PowerBI Desktop to my MongoDB Server. I tried the instructions here:Specifically:downloaded the MongoDB Power BI Connector\nInstalled it\nGet Data\nMore\nODBC\nConnectBut, here’s what I get:Screen Shot 2020-05-21 at 14.32.32935×332 6.49 KBNo option to selected the BI Connector SDN per instructions. Any ideas? Also, other than downloading the MongoDB PowerBI Connector, any ideas how to connect?", "username": "Joseph_Mouhanna" }, { "code": "", "text": "Have you tried creating a DSN? It is one of the required steps. Once you do, you should be able see it in the drop down and select it to create the connection.PowerBI connectors typically store all the connection information (server name, port, database, username…) in the DSN.", "username": "Bora_Beran" } ]
Connecting Power BI to MongoDB Server
2020-05-21T22:33:37.248Z
Connecting Power BI to MongoDB Server
2,731
null
[ "indexes" ]
[ { "code": "", "text": "https://university.mongodb.com/mercury/M201/2020_July_7/chapter/Chapter_2_MongoDB_Indexes/lesson/58ab5626d280e426708ee3da/lectureIn this Chapter it is mentioned Mongodb Uses B-Tree for Indexes. So i have a question Why it does not use Binary Search Tree because Insertion, deletion, searching of an element is faster in BINARY SEARCH TREE than BINARY TREE due to the ordered characteristics. So Does it Uses BST if no than Why?", "username": "Vaibhav_Parashar" }, { "code": "", "text": "Welcome to the community @Vaibhav_Parashar!It looks you are conflating a B-Tree with a simple Binary Tree: these are not the same data structures.A B-tree is a self-balancing generalisation of the Binary Search Tree (BST) data structure with better performance characteristics for use cases that work with large amounts of data. B-trees and BSTs have the same average Big-O complexity, but a BST’s worst case complexity for search, insert, and update is O(n)) versus B-tree’s O(log(n)).The above links include some helpful information on the differences and benefits of these data structures. B-trees are commonly used (and well-suited) for general database and filesystem indexing. There are some variants of B-trees (for example: B-tree category on Wikipedia) and differences in implementation specific to the architecture of different applications, but the high level concepts are similar.There are also some variations in how B-tree indexes may be used in different products. For example, MongoDB’s WiredTiger storage engine uses prefix compression for index values by default. Prefix compression still stores values in a general B-Tree structure, but reduces some of the repetitive details for more efficient index size. 
For an older (but still applicable) overview, see WiredTiger - how to reduce your MongoDB hosting costs.due to the ordered characteristicsB-trees are ordered, and the order of keys is important if you want to efficiently use an index to sort query results.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why do indexes use B-Tree and not Binary Search Tree?
2020-07-30T17:12:57.480Z
Why do indexes use B-Tree and not Binary Search Tree?
10,073
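The worst-case point made in the thread can be demonstrated in a few lines: inserting already-sorted keys into an unbalanced BST degrades it into a linked list of depth n, which is exactly the O(n) behavior that self-balancing structures such as B-trees avoid. This is a toy sketch, not MongoDB's implementation:

```javascript
// Plain, unbalanced BST insert.
function insert(node, key) {
  if (node === null) return { key, left: null, right: null };
  if (key < node.key) node.left = insert(node.left, key);
  else node.right = insert(node.right, key);
  return node;
}

// Height of the tree: the length of the longest root-to-leaf path.
function depth(node) {
  return node === null ? 0 : 1 + Math.max(depth(node.left), depth(node.right));
}

let root = null;
for (let k = 1; k <= 100; k++) root = insert(root, k); // sorted inserts: worst case
console.log(depth(root)); // 100 — every key chains to the right, like a linked list
```

A self-balancing tree over the same 100 keys would stay at roughly log2(100) ≈ 7 levels.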
null
[ "production", "cxx" ]
[ { "code": "cxx-driver", "text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.6.0. This release provides support for new features in MongoDB 4.4 and support for Client-Side Field Level Encryption.Please note that this version of mongocxx requires the MongoDB C driver 1.17.0 or higher.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.The mongocxx 3.6.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx-driver . Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C++11 Driver 3.6.0 Released
2020-07-30T22:30:48.514Z
MongoDB C++11 Driver 3.6.0 Released
1,426
null
[ "python" ]
[ { "code": "", "text": "We are pleased to announce the 3.11.0 release of PyMongo - MongoDB’s Python Driver. This release adds support for MongoDB 4.4.See the changelog for a high level summary of what’s new and improved or see the PyMongo 3.11.0 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!", "username": "Shane" }, { "code": "", "text": "", "username": "system" } ]
PyMongo 3.11.0 Released
2020-07-30T21:50:19.635Z
PyMongo 3.11.0 Released
3,690
null
[ "compass", "security" ]
[ { "code": "", "text": "Hi. I’m using MongDB Compass (version 1.21.2). I’m connecting using SSL. I need to pass the sslAllowInvalidHostnames when I connect. I can do it from the command-line, using the mongo CLI, but I can’t figure out how to pass it in Compass. I’m doing the configuration via “Fill in connection fields individually”, since I’m also setting up an SSH tunnel. Is there any way to configure my connection so sslAllowInvalidHostnames is set?Thanks in advance,\nEric", "username": "Eric_Marthinsen" }, { "code": "", "text": "Welcome to the Community forumOn the tab more options under SSL you can see Unvalidated(inseccure) option", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks @Ramachandra_Tummala. Unfortunately, if I select that option, it takes away my ability to pass a CA cert, which is something that I need to do. I also have a sneaking suspicion that selecting “Unvalidated (insecure)” allows invalid certificates rather thant invalid hostnames - but that’s just a hunch.", "username": "Eric_Marthinsen" }, { "code": "", "text": "On command line -ssl takes both sslAllowInvalidHostnames & sslAllowInvalidCertificates as parameters\nSo closest that matches above on Compass is UnvalidatedLooks like sslAllowInvalidHostnames cannot be set on Compass as per below jira ticket\nhttps://jira.mongodb.org/browse/COMPASS-2207", "username": "Ramachandra_Tummala" }, { "code": "", "text": "@Eric_Marthinsen As a workaroud are you able to updates hosts entries on your local machine wher compass runs ?", "username": "chris" }, { "code": "", "text": "@chris Sorry about the delay. Yes, I can update host entries on my local machine. How might that workaround work?", "username": "Eric_Marthinsen" }, { "code": "", "text": "@Eric you can add entries so that the hostnames match the subject certificate name(s). There shoule be no need for sslAllowInvalidHostnames.", "username": "chris" }, { "code": "", "text": "Ah, I see what you mean. That makes sense. 
I’ll give it a shot.", "username": "Eric_Marthinsen" } ]
Passing sslAllowInvalidHostnames when connecting to a server using Compass
2020-07-22T00:55:25.435Z
Passing sslAllowInvalidHostnames when connecting to a server using Compass
6,700
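The hosts-file workaround discussed in the thread might look like the fragment below; the hostname is hypothetical and should instead match the name(s) in the server certificate's subject/SAN, pointed at the local end of the SSH tunnel:

```
# /etc/hosts (Windows: C:\Windows\System32\drivers\etc\hosts)
# Map the certificate's hostname to the SSH tunnel's local endpoint,
# so hostname validation succeeds without sslAllowInvalidHostnames.
127.0.0.1   mongodb-0.example.internal
```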
null
[ "production", "c-driver" ]
[ { "code": "", "text": "I’m pleased to announce version 1.17.0 of libbson and libmongoc,\nthe libraries constituting the MongoDB C Driver.\nFor more information, see the 1.17.0 release on GitHub.libmongoc\nFeatures:Bug fixes:Notes:libbson\nFeatures:Bug fixes:", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C driver 1.17.0 released
2020-07-30T20:10:22.520Z
MongoDB C driver 1.17.0 released
1,880
null
[ "node-js", "production" ]
[ { "code": "saslStartDb.prototype.createCollectioncreateCollectionlistCollectionsCollectioncreateCollectionconst client = new MongoClient('...');\nawait client.connect();\n \nawait client.db('foo').collection('bar').insert({ importantField: 'llamas' });\nawait client.db('foo').createCollection('bar', {\n validator: { $jsonSchema: {\n bsonType: 'object',\n required: ['importantField'],\n properties: { name: { bsonType: 'boolean' } }\n }\n});\ncreateCollectionbar", "text": "The MongoDB Node.js team is pleased to announce version 3.6.0 of the driver, supporting the MongoDB 4.4 release.MongoDB drivers maintain a local view of the topology they are connected to, and ensure the accuracy of that view by polling connected nodes on average every ~10s. In MongoDB 4.4, drivers are now able to receive push notifications about topology updates, effectively reducing the time for client recovery in failover scenarios to the time it takes for the server to make the election and report the outcome.This feature is enabled by default when connecting to MongoDB 4.4, no changes are needed for user code.The MONGODB-AWS authentication mechanism uses your Amazon Web Services Identity and Access Management (AWS IAM) credentials to authenticate users on MongoDB 4.4+. Please read more about this new authentication mechanism in our documentation.There were two projects to transparently improve performance of authentication in MongoDB 4.4:A driver can now include the first saslStart command in its initial handshake with server. This so-called “speculative authentication” allows us to reduce one roundtrip to the server for authentication a connection. 
This feature is only supported for the X.509, SCRAM-SHA-1 and SCRAM-SHA-256 (default) authentication mechanisms.The SCRAM conversation between driver and server can now skip one of its empty exchanges, which also serves to reduce the roundtrips during a SCRAM authentication.OCSP stapling greatly improves performance when using LetsEncrypt certificates, removing the need for an external request to LetsEncrypt servers for each authentication attempt. No additional changes were required to support OCSP stapling in the driver, but extensive testing was added to verify that the feature works as expected.The createCollection helper used to internally run a listCollections command in order to see if a collection already existed before running the command. If it determined a collection with the same name existed, it would skip running the command and return an instance of Collection. This behavior was changed in v3.6.0 to avoid potentially serious bugs, specifically that the driver was not considering options passed into createCollection as part of the collection equality check. Imagine the following scenario:The createCollection call which defines a JSON schema validator would be completely bypassed because of the existence of bar, which was implicitly created in the first command. Our policy is to strictly adhere to semver, but in rare cases like this where we feel there is potential for a data-corrupting bug, we make breaking behavioral changes to protect the user.Reference: MongoDB Node.js Driver\nAPI: Index\nChangelog: node-mongodb-native/HISTORY.md at 3.6 · mongodb/node-mongodb-native · GitHub\nRelease Notes: Release Notes - MongoDB JiraWe invite you to try the driver immediately, and report any issues to the NODE project.Thanks very much to all the community members who contributed to this release!", "username": "mbroadst" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Node.js Driver 3.6.0 Released
2020-07-30T20:02:02.900Z
MongoDB Node.js Driver 3.6.0 Released
2,619
null
[ "node-js", "production" ]
[ { "code": "nullCollectionDbMongoClientmongodb+srvmongodb+srvRangeError: Maximum call stack size exceededmaxStalenessSecondsMongoClientmaxStalenessSecondsReadPreferencelsid{ w: 0 }", "text": "The MongoDB Node.js team is pleased to announce version 3.5.10 of the driverNOTE: This will be the final release in the 3.5.x branch, please consider upgrading to 3.6.0@adrian-gierakowski helped us identify a bug with our ChangeStreamCursor, specifically when the cursor\nwas complete it would not return a valid document but instead a null value.The server selection specification indicates that the “runCommand” helper should act\nas a read operation for the purposes of server selection, and that it should use a default read\npreference of “primary” which can only be overridden by the helper itself. The driver had a bug\nwhere it would inherit the read preference from its “parent” type (Collection, Db, MongoClient)\nwhich is at odds with the specified behavior.Due to a bug in how we referred to ipv6 addresses internal to the driver, if a mongodb+srv\nconnection string was provided with an ipv6 address the driver would never be able to connect\nand would result in a the following error RangeError: Maximum call stack size exceeded.There was a bug in our connection string and MongoClient options parsing where a value provided\nfor maxStalenessSeconds would not end up being reflected in the ReadPreference used internal\nto the driver.MongoDB can provide no guarantees around unacknowledged writes when used within a session. 
The\ndriver will now silently remove the lsid field from all writes issued with { w: 0 }, and\nwill return an error in these situations in the upcoming 4.0 major release.Reference: MongoDB Node.js Driver\nAPI: Index\nChangelog: node-mongodb-native/HISTORY.md at 3.5 · mongodb/node-mongodb-native · GitHub\nRelease Notes: Release Notes - MongoDB JiraWe invite you to try the driver immediately, and report any issues to the NODE project.Thanks very much to all the community members who contributed to this release!", "username": "mbroadst" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Node.js Driver 3.5.10 Released
2020-07-30T20:00:11.031Z
MongoDB Node.js Driver 3.5.10 Released
2,233
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to announce the release of 1.4.0 of the MongoDB Go Driver.This release contains support for the MongoDB 4.4 server features, as well as multiple driver-specific improvements. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.4.0 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team", "username": "Divjot_Arora" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Go Driver 1.4.0 Released
2020-07-30T16:40:11.996Z
MongoDB Go Driver 1.4.0 Released
1,402
null
[]
[ { "code": "exports = async function(payload, response) {\n\n const mongodb = context.services.get(\"mongodb-atlas\");\n\n const eventsdb = mongodb.db(\"mydatabase\");\n\n const eventscoll = eventsdb.collection(\"mycollection”);\n\n const result= await eventscoll.find({“payload.name\": {$exists: true}});\n\nresponse.setBody(\"{\"ok\": true,\"details”:”Found”}\");\n\n } else {\n\nresponse.setBody(\"{\"ok\": false,\"details\":\"Not found”}\");\n\n }\n\n return { text: `searched` };\n\n }\n", "text": "HiI’m testing out a Realm app working on an incoming webhook post request.I have an incoming webhook with a JSON payload which has a particular field that needs to be defined (or filtered?) and used to query the mongodb database collection I created. I want to see if this particular incoming value exists in the database and then return one of two responses.As a starting point for the webhook function I used a mongodb tutorial based on an incoming webhook.Then I set the find() method with the $exists: true query selector, and define the incoming payload field which is name.For the response I have the response object method of setBody(body) with one of two responses.When running the function on realm I get an error: error transpiling function source (Syntax error: exit status 1)Please help, I’m a newbie still learning) and would love to make use of mongodb apps properly!Many thanks, AndiHere is the function I’m working on:", "username": "a_Jn" }, { "code": " const eventscoll = eventsdb.collection(\"mycollection”);\n\n const result= await eventscoll.find({“payload.name\n“var body = JSON.parse(payload.body.text())\nvar name = body.name\n", "text": "Hi @a_Jn,Looks like you have some bad quotes :See “ …Additionally I do not see an if before. 
Not sure what you are trying to achieve but if you want to get the attribute name from the payload you first need to extract it:Now do you need to look it under a collection document or look for a Field with the passed value?Replace those with correct ones they are probably a copy paste error.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "exports = async function(payload, response) {\n\n const mongodb = context.services.get(\"mongodb-atlas\");\n\n const eventsdb = mongodb.db(\"mydatabase\");\n\n const eventscoll = eventsdb.collection(\"mycollection\");\n\n var body = JSON.parse(payload.body.text());\n\n var name = body.name;\n\n const result= await eventscoll.find({ \"payload.name\": { $exists: true } });\n\n if(result) {\n\n response.setBody( \"{\"ok\": true,\"details”:”Found”}\" ); \n\n } else {\n\n response.setBody( \"{\"ok\": false,\"details\":\"Not found”}\" );\n\n }\n", "text": "Hi PavelThank you for your reply – to try clarify:to handle a request and send a response,I want to query “mycollection\" in “mydatabase\" on mongoldb-atlasand I need to use the value of a specific field extracted from an incoming webhook to query “mycollection\"(this specific field called “name\" will always have a different value to match to “mycollection” – so yes I need to look for the incoming webhook’s extracted/passed value in the collection document)then I need to return a response,if this extracted value matches a value in “mycollection”it will return a response of {“ok”: true,\"details”:”Found”}if this extracted value does not existit will return a response of {“ok”: false,“details”:\"Not found”}Sorry I left out the payload extract and if – I’ve updated the script below, although it is not showing the syntax error when run it still shows: error transpiling function sourceMany thanks Andi", "username": "a_Jn" }, { "code": "", "text": "Just a quick note after looking around some more –does async await work with find() to query/loop through the database collection 
and return a result?or how can this query work to match a result (or not) and return the response whenever the webhook with the value to query comes in?Many thanks Andi", "username": "a_Jn" }, { "code": "{\n \"name\" : \"...\"\n}\nexports = async function(payload, response) {\n\n const mongodb = context.services.get(\"mongodb-atlas\");\n\n const eventsdb = mongodb.db(\"mydatabase\");\n\n const eventscoll = eventsdb.collection(\"mycollection\");\n\n var body = JSON.parse(payload.body.text());\n\n var name = body.name;\n\n const result= await eventscoll.count({ \"name\": name });\n\n if(result.count > 0 ) {\n\n response.setBody( JSON.stringify({\"ok\": true,\"details\":\"Found\"}) ); \n\n } else {\n\n response.setBody( JSON.stringify({\"ok\": false,\"details\":\"Not Found\"}) );\n\n }\n}\ncurl \\\n-H \"Content-Type: application/json\" \\\n-d '{\"name\":\"bar\"}' \\\nhttps://webhooks.mongodb-realm.com/api/client/v2.0/app/<APP_ID>/service/<SRV_NAME>/incoming_webhook/<WEBHOOK_NAME>\n{\"details\":\"Not Found\",\"ok\":false}\n", "text": "Hi @a_Jn,Ok I got you requirment. I assume that you document have a field named “name” correct?And you need to take a payload “name” field and search it. 
If the above is correct and you have a “POST” webhook, please use the following functionTo find if value exists you need to search it with a count operator and test if the count is more than 0.Then you need to setBody with a stringfied JSON.Now you can call the webhook:My test resulted in:Consider indexing the search field for best performance.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n \"_links\": {\n },\n \"_embedded\": {\n \"ff:items\": [\n {\n \"_links\": {\n },\n \"_embedded\": {\n \"ff:options\": [\n {\n \"_links\": {\n },\n \"name\": \"Eyecolor\",\n \"value\": \"brown\",\n }\n ],\n \"ff:category\": {\n \"_links\": {\n },\n \"name\": \"Number\",\n \"code\": \"101\",\n \"send_email\": false,\n }\n },\n \"group\": \"B\",\n \"name\": \"Jonty22\",\n \"date_created\": null,\n \"date_modified\": \"2020-06-20T06:11:08-0900\"\n },\n {\n \"_links\": {\n },\n \"_embedded\": {\n \"ff:item_category\": {\n \"_links\": {\n },\n \"email\": \"\",\n \"type\": \"live\",\n \"weight\": 65,\n \"count_name\": \"\",\n }\n ],\n \"ff:counts\": [\n {\n \"code\": 22,\n \"name\": \"Default count\",\n \"display\": \"22\",\n \"future_count\": false\n }\n ],\n \"ff:custom_field\": [\n {\n \"name\": \"custom_note\",\n \"value\": \"Happy Jonty\",\n \"is_hidden\": 0\n }],\n \"ff:area\": {\n \"city\": \"London\",\n \"country\": \"UK\",\n },\n \"ff:people\": {\n \"email\": false,\n \"is_anonymous\": \"0\",\n \"_embedded\": {\n \"ff:transfers\": [\n {\n \"type\": \"all\",\n \"reception\": null\n }\n ],\n \"ff:default\": {\n \"code\": \"\",\n \"phone\": \"\"\n }\n }\n }\n },\n \"language\": \"\",\n \"session_id\": \"jjks2312md9sw8ee3kljd3\",\n}\n", "text": "Hi PavelGreat thank you – really good to see how the count() is the solution instead of $exists with async await here.Regarding the payload value –below is an example of the incoming payload, it has an array of values, and I want the “name” value “Jonty22” to search with.To get to this do we need to loop through the 
_embedded[‘ff:items’] object ?How would this specific name value of “Jonty22” be extracted?Many thanks Andi", "username": "a_Jn" }, { "code": "var body = JSON.parse(payload.body.text());\n var items = body._embedded[\"ff:items\"];\nvar name= items[0].name;\n", "text": "Hi @a_Jn,Well when you parse this object you need to use a “.” Notation to access the value of the desired “name” level.I rather access special characters field names with a [].Pavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Great thank you Pavel,I’m getting a response now, except – it is the same \"Not Found” response for both when the name is in the collection and also when it is not.Is there a way to see what the “name” value extracted from the incoming payload is? so to make sure it is extracting the correct “Jonty22” value to search with?Then is there a way to test the search and see where the problem is and why it is responding with “Not found” in both instances?Here is the database collection with the “Jonty22” id value to match and return the ”Found” responsehttps://webhooks.mongodb-stitch.com/api/client/v2.0/app/app1-ifvea/service/httpGET/incoming_webhook/webhookhttpGETMany thanks Andi", "username": "a_Jn" }, { "code": "exports = async function(payload, response) {\n\n const mongodb = context.services.get(\"mongodb-atlas\");\n\n const eventsdb = mongodb.db(\"mydatabase\");\n\n const eventscoll = eventsdb.collection(\"mycollection\");\n\n var body = JSON.parse(payload.body.text());\n\n var items = body._embedded[\"ff:items\"];\n \n var name= items[0].name;\n\n const result= await eventscoll.count({ \"name\": name });\n\n if(result.count > 0 ) {\n\n response.setBody( JSON.stringify({\"ok\": true,\"details\":\"Found\"}) ); \n\n } else {\n\n response.setBody( JSON.stringify({\"ok\": false,\"details\":\"Not Found\"}) );\n\n }\n}\n", "text": "And the function:", "username": "a_Jn" }, { "code": "const result= await eventscoll.count({ \"id\": name });\nconsole.log()", "text": "Hi 
@a_Jn,Wait, I see that the field in the collection is “id” and not “name”.If you need to search a field named “id” for that value, it needs to be specified in the count query:To debug functions you can use console.log() and print any of the parameters; the output will be shown on screen/logs.Please consider doing some of our query tutorials to wrap your head around MongoDB queries.MongoDB Manual. How do I query documents, query top level fields, perform equality match, query with query operators, specify compound query conditions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Great thank you Pavel,yes the database collection key is “id”, and the search key from the payload is “name” (looping through the _embedded['ff:items'] object)I was just about to ask how I can log this to try to root out why it's not getting the results expected, like how to print the \"name\" I'm getting from the payload to a log so I can see what I'm getting.This will be super helpful for the future.Many thanks Andi", "username": "a_Jn" }, { "code": "const result= await eventscoll.count({ \"id\": name });\n{\n \"mydatabase.mycollection\": {\n \"no_matching_role\": 0\n }\n}\n", "text": "with the:I still get the same result:need to see what the “name” value is that it extracts to search with", "username": "a_Jn" }, { "code": "", "text": "Hi @a_Jn,I think your rules/authentication method for the webhook does not allow you to see the documents.Can you change the authentication for the webhook to SYSTEM?Please share the application main page link and I can review.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n \"_links\": {\n \"curies\": [\n {\n \"name\": \"ff\",\n \"href\": \"https://api.test.com/\",\n \"templated\": true\n }\n ],\n \"self\": {\n \"href\": \"https://api.test.com/\",\n \"title\": \"This\"\n },\n \"ff:attributes\": {\n \"href\": \"https://api.test.com/\",\n \"title\": \"Attributes\"\n },\n \"ff:counts\": {\n \"href\": 
\"https://api.test.com/\",\n \"title\": \"counts\"\n }\n },\n \"_embedded\": {\n \"ff:items\": [\n {\n \"_links\": {\n \"curies\": [\n {\n \"name\": \"ff\",\n \"href\": \"https://api.test.com/\",\n \"templated\": true\n }\n ],\n \"self\": {\n \"href\": \"https://api.test.com/\",\n \"title\": \"Idea\"\n },\n \"ff:item_category\": {\n \"href\": \"https://api.test.com/\",\n \"title\": \"Idea Category\"\n },\n \"ff:item_options\": {\n \"href\": \"https://api.test.com/\",\n \"title\": \"Idea Options\"\n },\n \"ff:attributes\": {\n \"href\": \"https://api.test.com/\",\n \"title\": \"Idea Attributes\"\n }\n },\n \"_embedded\": {\n \"ff:item_category\": {\n \"_links\": {\n \"curies\": [\n {\n \"name\": \"ff\",\n \"href\": \"https://api.test.com/\",\n \"templated\": true\n }\n ],\n \"self\": {\n \"href\": \"https://api.test.com/\",\n \"title\": \"DEFAULT\"\n },\n \"ff:email_templates\": {\n \"href\": \"https://api.test.com/\",\n \"title\": \"Email Templates\"\n }\n },\n \"code\": \"DEFAULT\",\n \"name\": \"Default for all\",\n \"default_weight\": 65,\n \"count_type\": \"\",\n \"date_created\": \"2020-07-09T12:03:06-0700\",\n }\n },\n \"category_uri\": \"\",\n \"name\": \"Jonty22\",\n \"weight\": 65,\n \"code\": \"\",\n \"count_type\": \"\",\n \"url\": \"\",\n \"date_modified\": \"2020-07-27T10:02:16-0700\"\n }\n ],\n \"ff:counts\": [],\n \"ff:custom_fields\": [\n {\n \"name\": \"total\",\n \"value\": \"20\",\n },\n {\n \"name\": \"total_future\",\n \"value\": \"0\",\n },\n {\n \"name\": \"agree\",\n \"value\": \"No\",\n }\n ],\n \"ff:ament\": {\n \"city\": \"London\",\n \"region\": \"\",\n \"country\": \"GB\",\n },\n \"ff:ament_results\": [],\n \"ff:cuter\": {\n \"id\": \"0\",\n \"first_name\": \"\",\n \"last_name\": \"\",\n \"email\": \"[email protected]\",\n \"is_anonymous\": 1,\n \"_embedded\": {\n \"ff:aments\": [\n {\n \"year\": \"\",\n \"age\": null,\n }\n ],\n \"ff:faults\": {\n \"country\": \"\",\n \"city\": \"\",\n }\n }\n }\n },\n \"customer_uri\": \"\",\n 
\"template_set_uri\": \"\",\n  \"language\": \"\",\n  \"date_created\": null,\n  \"date_modified\": \"2020-07-27T09:06:24-0700\",\n  \"ip\": \"90.252.101.84\",\n  \"session_name\": \"fcsid\",\n  \"session_id\": \"bs4mpdbd3hjlc1e5reumeaptd0\",\n}\n", "text": "Thank you Pavel. Yes, it is strange, because when it receives the payload that does have the “Jonty22” name field, it returns “Not found” when searching that database, which does have the “Jonty22”. So I’m not sure if it actually extracts/uses this “Jonty22” name value from the incoming payload. (The webhook Authentication is set to SYSTEM, and I have not created any rules.) Below is another, maybe better, example of the incoming payload that has the “Jonty22” to search the database collection with. Please let me know which link you need to review. Many thanks, Andi", "username": "a_Jn" }, { "code": "", "text": "Hi @a_Jn, When you are on the webhook page in the UI, please copy the browser URL and post it here so I can view the code. In general, the second document also has the “name” you are looking for on the same level, under _embedded.“ff:items”.name. Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Is there a way to see in the logs what the extracted value is that this webhook uses to search with? 
I can't find/see the value, only Headers:{\n“X-Forwarded-Proto-Stitch”: [\n“https”\n],\n“Sslclientcertstatus”: [\n“NoClientCert”\n],\n“Content-Length”: [\n“6301”\n],etc…Many thanks, Andi", "username": "a_Jn" }, { "code": "", "text": "Sorry, here is the link: https://realm.mongodb.com/groups/5e80b666924a0058a7e2e79d/apps/5e80be3b308f9d66390b6f83/services/5f1bf380a6ac7caa8b53020a/incomingWebhooks/5f1bf3abfb55eba1d0c15fb5 Many thanks, Andi", "username": "a_Jn" }, { "code": "", "text": "https://realm.mongodb.com/groups/5e80b666924a0058a7e2e79d/apps/5e80be3b308f9d66390b6f83/services/5f1bf380a6ac7caa8b53020a/incomingWebhooks/5f1bf3abfb55eba1d0c15fb5", "username": "a_Jn" }, { "code": " var name = items[0].name;\n console.log(\"Name : \" + name);\n", "text": "Add the following lines and rerun:Thanks!", "username": "Pavel_Duchovny" }, { "code": "[\n\n\"Name : Jonty22\"\n\n]\n", "text": "Great, that shows up! So it is extracting the correct field value to search with. Logs:Function Call Location: US-VAQuery Arguments: {}", "username": "a_Jn" } ]
$exists query database with incoming payload value and respond
2020-07-25T13:51:57.818Z
$exists query database with incoming payload value and respond
7,560
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.4 is now generally available for production deployments. MongoDB 4.4 delivers the features and enhancements most demanded by you. We think of 4.4 as “user-driven engineering”, building on the MongoDB 4.x release family as an ideal foundation for modern workloads. The result is a database that enables you to build transactional, operational, and analytic applications faster and more efficiently. You can scale them out globally, with the flexibility to define and refine data distribution at any time as your requirements evolve. All while giving you some of the most sophisticated latency, resilience, and security controls anywhere. Download MongoDB today, or try MongoDB 4.4 in the cloud with Atlas in minutes. Here are the highlights of what’s new and improved in MongoDB 4.4: For more information about MongoDB 4.4, please review our complete release notes and download our Guide to What’s New. Last but not least, we would like to acknowledge the following community members who have contributed to this release: Xavier Guihot, Frederick Zhang, Roberth Godoy, Markus Schroder, Łukasz Walukiewicz, McKittrick Swindle, Graeme Yeates, Johan Suárez, Rui Ribeiro, James Harvey, Andres Kalle, Ryan Schmidt, Łukasz Karczewski, Adam Flynn, Ralph Seichter, Marek Kresnicki, David Lynch, Yohei Tamura, Barak Gilboa, David Bartley, Jared D. Cottrell, Dan Dascalescu, Michael Hofer, Anton Papp, Artem, Mohammed Sulthan, Mitar, Ricardo Bánffy, Adam Comerford, David Schneider, LinGao, lipengchong, Piyush Kumar, Henry Bettany, Ralf Strobel, jackin huang, Greg Studer, Alexey Glukhov, Devendra Chauhan, John Arbash Meinel, Connecting Media, Gilad Peleg, Travis Redman, Chad Kreimendahl, Alice Classy, Remon van Vliet, Rohit Kumar, Sam Tolmay, Ofer Cohen, zhaoliwei. MongoDB 4.4 Release Notes | Changelog | Downloads – The MongoDB Team", "username": "Kelsey_Schubert" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.4 GA Released
2020-07-30T16:12:02.192Z
MongoDB 4.4 GA Released
2,706
null
[]
[ { "code": "Origin: mongodb\nLabel: mongodb\nSuite: focal\nCodename: focal/mongodb-org\nArchitectures: amd64 i386\nComponents: multiverse\nOrigin: mongodb\nLabel: mongodb\nSuite: bionic\nCodename: bionic/mongodb-org\nArchitectures: amd64 i386 s390x arm64\nComponents: multiverse\n", "text": "The following URL contains Focal configuration data (wrong): https://repo.mongodb.org/apt/ubuntu/dists/bionic/mongodb-org/testing/Release And the same for the following URL, containing Bionic configuration information (wrong): https://repo.mongodb.org/apt/ubuntu/dists/focal/mongodb-org/testing/Release To be honest it’s been difficult to use the MongoDB repos for a while now. Be good to set up testing before publishing. Need help?", "username": "Shane_Spencer" }, { "code": "", "text": "Welcome to the community @Shane_Spencer!Thanks for bringing this to our attention – I let our packaging team know about the errors. “To be honest it’s been difficult to use the MongoDB repos for a while now. Be good to set up testing before publishing. Need help?” We generally have automated testing in place for builds and releases, but I’m not familiar with our current release automation setup. Although you are using the testing branch, this should have been an avoidable issue as part of our post-release testing. Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Shane_Spencer,\nThank you again for finding this issue. We found the underlying problem and fixed it. We are currently working on process improvements to avoid issues like this.Regards,\nJon", "username": "Jon_Streets" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB/Testing Ubuntu Repo Bionic/Focal Switched
2020-07-28T04:17:51.688Z
MongoDB/Testing Ubuntu Repo Bionic/Focal Switched
1,809
null
[]
[ { "code": "", "text": "I see that Stitch is mentioned here, and I read somewhere that MongoDB Realm is merging with Stitch? Can somebody confirm whether Realm is the way to go moving forward, or whether Stitch will still be retained?", "username": "Marvin_Trilles1" }, { "code": "", "text": "Hi @Marvin_Trilles1, It’s mostly a merger and rebranding of these two platforms. It’s not going to go away. Please refer to our documentation: https://docs.mongodb.com/realm/~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Stitch VS Realm
2020-07-30T03:32:21.974Z
Stitch VS Realm
1,633