image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"upgrading"
] | [
{
"code": "",
"text": "I’m moving along the upgrade path from 2.6.11 to 4.2.6 of my standalone database and have run into my first problem. When I start 3.6.18 mongod.exe returns right away leaving the following in the log file.** IMPORTANT: UPGRADE PROBLEM: The data files need to be fully upgraded to version 3.4 before attempting an upgrade to 3.6; see http://dochub.mongodb.org/core/3.6-upgrade-fcv for more details.I’m not worried about remote hosts, so I don’t see anything at the referenced page that is helping me.Any idea what I’m missing?Thanks,\nScott",
"username": "Scott_Reynolds"
},
{
"code": "featureCompatibilityVersion",
"text": "Hi Scott,This error message is suggesting that you have probably missed setting the featureCompatibilityVersion (FCV) to “3.4” as the final step of your MongoDB 3.4 upgrade.The FCV setting controls whether an upgraded deployment is able to persist backwards-incompatible data changes. You must finish all MongoDB 3.4 upgrade steps before starting the upgrade to the next major release series. FCV was added as part of the MongoDB 3.4 release so there is a more deterministic point after which downgrading will become more difficult because of backwards-incompatible features persisted in the data files. You’ll have to set FCV appropriately as the last step for any major version upgrades after the 3.4 release.The dochub link should redirect to the Upgrade Procedures section of the 3.6 release notes which suggests checking the FCV version via the output of:db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )The release notes page is quite long, so it may take a moment to scroll to the relevant section of the page after it loads.What is the current FCV setting for your deployment? If it is not set to “3.4” please follow the last step in the 3.4 upgrade guide.Since upgrade steps and caveats may change between major versions, I’d definitely recommend reading the relevant upgrade procedures and compatibility changes for each release to avoid any unexpected challenges.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I had already set that parameter prior to submitting this plea (though I had missed it earlier). From the 3.4.24 shell:db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )\n{ “featureCompatibilityVersion” : “3.4”, “ok” : 1 }",
"username": "Scott_Reynolds"
},
{
"code": "",
"text": "ARGH!\nI have seen the enemy, and it is I!\nIn an effort to make rollbacks easy, I’ve been copying databases forward as I proceed from version to version. Unfortunately, I failed to copy the 3.4.24 db forward to my 3.6.18 workarea after setting the FCV. Thanks for the reminder!Sorry for the trouble.\nScott",
"username": "Scott_Reynolds"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Upgrade Problem 3.4.24 to 3.6.18 | 2020-05-28T15:38:37.212Z | Upgrade Problem 3.4.24 to 3.6.18 | 6,043 |
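The thread above comes down to two admin commands: reading the featureCompatibilityVersion and setting it to "3.4" before starting the 3.6 binaries. As a rough illustration of those same steps from a driver, here is a minimal PyMongo sketch; the connection string is a placeholder and nothing here is taken from Scott's deployment.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI

# Read the current featureCompatibilityVersion (FCV).
# Dict key order matters for commands; Python 3.7+ dicts preserve insertion order.
fcv = client.admin.command({"getParameter": 1, "featureCompatibilityVersion": 1})
print(fcv["featureCompatibilityVersion"])

# Final step of a 3.4 upgrade, before moving on to the 3.6 binaries.
client.admin.command({"setFeatureCompatibilityVersion": "3.4"})
```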
null | [] | [
{
"code": "",
"text": "Shout out to anyone affected by this week’s events in the U.S. and around the world, including the death of George Floyd, related riots, the pandemic, or anything else. If y’all need to talk, we can open a Zoom call for low-risk venting and human connection. Just let me know if this is something that would be desired.",
"username": "Jamie"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | On a serious note | 2020-05-28T16:54:52.098Z | On a serious note | 4,772 |
null | [
"legacy-realm-cloud"
] | [
{
"code": " let partialSyncRealm = RealmClient.shared.getPartialSyncRealm()\n searchResults = partialSyncRealm.objects(UserData.self)\n subscription = searchResults!.subscribe(named: SyncSubscriptionType.SearchUsers.getName())\n \n subscriptionToken = subscription.observe(\\.state, options: .initial) { state in\n if state == .complete {\n \n // self.activityIndicator.stopAnimating()\n // This is never called when I login the app as admin user\n\n } else {\n print(\"Subscription State: \\(state)\")\n >> creating\n >> pending\n }\n }\n \n notificationToken = searchResults!.observe { [weak self] (changes) in\n guard let tableView = self?.table else { return }\n \n switch changes {\n case .initial:\n tableView.reloadData()\n \n case .update(_, let deletions, let insertions, let modifications):\n \n tableView.beginUpdates()\n tableView.insertRows(at: insertions.map({ IndexPath(row: $0, section: 0) }),\n with: .automatic)\n tableView.deleteRows(at: deletions.map({ IndexPath(row: $0, section: 0)}),\n with: .automatic)\n tableView.reloadRows(at: modifications.map({ IndexPath(row: $0, section: 0) }),\n with: .automatic)\n tableView.endUpdates()\n \n \n case .error(let error):\n fatalError(\"\\(error)\")\n }\n }",
"text": "I am developing an iOS application using the realm-cocoa SDK. I am using the Realm Cloud as a backend for my production environment.I cannot create a partial sync subscription if I log in the app as an admin user in a production environment.No error message is displayed, but it remains pending all the time. If I turn off the admin flag from Realm Studio and log in again, I can create subscriptions.When migrating our instance from partial sync to full sync, we need to retrieve and modify the data for all our users.Is it possible to log in as an administrator user from the client SDK and operate the data of our users?Client SDK version is realm-cocoa 3.21.0.",
"username": "Enoooo"
},
{
"code": "",
"text": "This is not an answer but you shouldn’t be using a Query Based Sync (partial sync) solution going forward as that’s looks like it’s going to be depreciated (note there’s not official word on this).Query-based Sync is not recommended. For applications using Realm Sync, we recommend Full Sync. Learn more about our plans for the future of Realm Sync hereQuestion: Can you share the code you’re using to log into the app as an Admin user?",
"username": "Jay"
},
{
"code": "func login(email : String , password : String, successCallback : @escaping (SyncUser)->Void, failCallback : @escaping (Error?)->Void) {\n let creds = SyncCredentials.usernamePassword(username: email, password: password, register:false)\n SyncUser.logIn(with: creds, server: RealmConstants.AUTH_URL, onCompletion: { (user, err) in\n if let error = err {\n failCallback(error)\n } else if let syncUser = user {\n successCallback(syncUser)\n }\n })\n}",
"text": "Hi Jay,Thank you for your reply and advice.My understanding is that MongoDB Realm doesn’t support partial sync in public beta, but seems to support partial sync in the GA phase.There are specific Realm Cloud features that will not be integrated into MongoDB Realm during the public beta phase:In the second half of 2020, we expect to make the architecture and performance improvements that will allow us to bring MongoDB Realm from beta to GA.\nAt this point, we’ll begin work on key features like:The code I’m using to login to the app as an Admin user is below.",
"username": "Enoooo"
},
{
"code": "",
"text": "Thanks for sharing that info - I was aware of the original timeline and am surprised it’s not been refreshed with more a more current timeline. IMO, based general chat and some discussion groups, the whole project is about a year behind that initial time-frame. I could be totally wrong and it may be right on schedule - any of you Realm’ers out there feel free to chime in. Of course ‘the second half of 2020’ is a 6 month window, so who knows.It looks like you’re using the standard log in… Did you create an administrator role (not ROS admin) and add yourself (assuming you’re the admin) to that role? Also, have you already set up ACL’s and if so, what do they look like.We’re able to access users data here so just trying to clarify how yours is set up.",
"username": "Jay"
},
{
"code": "",
"text": "@Jay\nThank you for providing the information.Did you create an administrator role (not ROS admin) and add yourself (assuming you’re the admin) to that role?Maybe my understanding was insufficient.\nWas it just not enough to change the “Role” item from “RegularUser” to “Administrator” from the Realm Studio user list? Could you please tell me about the above procedure?The Role created by default is set for all PermissionUsers.\nAlso, all users belong to the “everyone” role.I set the ACL settings by referring to the link\nhttps://docs.realm.io/sync/v/3.x/getting-started-1/react-native-quick-start/step-2-adding-permissions[Realm-level permissions]\nrole: everyone\ncanRead - true\ncanUpdate - true\ncanSetPermissions - false\ncanModifySchema - false[Class-level permissions]\nrole: everyone\ncanRead - true\ncanUpdate - true\ncanCreate - true\ncanQuery - true\ncanSetPermissions - false\ncanModifySchema - false[Object-level permissions - UserData class]\nNot set due to shared account information[Object-level permissions - Other classes]\nrole: [Default User’s Role]\ncanRead - true\ncanUpdate - true\ncanDelete - true\ncanSetPermissions - trueNeither the UserData class nor any other class can create subscriptions.",
"username": "Enoooo"
},
{
"code": "",
"text": "From the original post:I cannot create a partial sync subscription if I log in the app as an admin user in a production environment.Is this next statement what you meant?nor any other class can create subscriptions.",
"username": "Jay"
},
{
"code": "",
"text": "@JayI didn’t have enough explanation.\nIf I log in the app as an admin user, I cannot create partial sync subscriptions for all classes.",
"username": "Enoooo"
},
{
"code": "",
"text": "That’s a little unclear. Are you able to access the data when you log in as an Admin? If so, and you log into Realm Studio and add a Regular user to a role that has access to that data, are you saying they cannot access it?Maybe in you include some screen shots from Realm Studio about how your permissions are set up, it would be more clear. _Class, _Permission and _Realm and maybe _Role would be relevant.",
"username": "Jay"
},
{
"code": "",
"text": "@JayThank you for your reply.Are you able to access the data when you log in as an Admin?If I log in the app as an admin user, I cannot access the data because I cannot create partial sync subscriptions.Maybe in you include some screen shots from Realm Studio about how your permissions are set up, it would be more clear.I have attached each screenshot. It would be very helpful if you could check it.\n__Class592×1164 53.5 KB\n \n__Permission602×1188 79.3 KB\n \n__Realm602×1188 53.9 KB\n \n__Role592×1164 77.9 KB\n",
"username": "Enoooo"
},
{
"code": "",
"text": "I was aware of the original timeline and am surprised it’s not been refreshed with more a more current timeline. IMO, based general chat and some discussion groups, the whole project is about a year behind that initial time-frame. I could be totally wrong and it may be right on schedule - any of you Realm’ers out there feel free to chime in.I think we are generally on schedule, you can see that we have gone GA with our 6.0 releases here:\nhttps://www.mongodb.com/community/forums/t/realm-releases-core-6-and-multiple-sdk-updatesIn preparation for our new product launch. We hope that you’ll be pleased with the integration - feel free to email me at [email protected] for more details",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward Great news! I hope the project remains on track.Unfortunately Realm has become almost unusable with the 5.0.0 release as Realm Studio no longer works and simple filters are returning inaccurate results (amongst a myriad of other issues that didn’t previously exist). Yes, tickets are open with no response for a number of days.It’s all moving forward but unless the existing issues are rectified it’s going to be a rough ride.OP: Since we’ve upgraded to 5.0.0 Realm Studio no longer works so we can’t duplicate your exact settings. We were going to toggle the admin flag but can’t do that currently via RS.However, our Query Based Sync app continues to work and our Admin can in fact see and work with other users data so we can’t duplicate the issue you describePerhaps the issue lies elsewhere in your code - is it possible the data you’re attempting to access is stored locally on the users device and not sync’d? Can you see it in Realm Studio (if yours is still working)?",
"username": "Jay"
},
{
"code": "",
"text": "Unfortunately Realm has become almost unusable with the 5.0.0 release as Realm Studio no longer works and simple filters are returning inaccurate results (amongst a myriad of other issues that didn’t previously exist). Yes, tickets are open with no response for a number of days.@Jay What tickets are you referring to? Can you send them to me please?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_WardSure. Here’s the post here and I am sending ticket info off list.",
"username": "Jay"
}
] | How to access data of all users from the admin user in the app | 2020-05-13T20:42:58.614Z | How to access data of all users from the admin user in the app | 4,298 |
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "Hi,My overall goal for this post is to figure out how I can implement a trigger that will be called when an insert is performed on one of my collections, and when that trigger is fired, it should have my REACT application do some processing on the data that was inserted.I don’t know if what I want to do is even possible - ??I have set up a trigger on my DB and used the default “commented out” function in the Edit Trigger page.I have read the following:\nhttps://docs.mongodb.com/stitch/triggers/database-triggers/\nhttps://docs.mongodb.com/manual/reference/change-events/#insert-event\nhttps://docs.mongodb.com/stitch/triggers/trigger-snippets/The last one showing the trigger snippets seems like it’s getting close to what I want to do… but instead focuses on sending text messages via twilio and new user authentication… instead of having the application do some processing like I would like it to.Thanks\nMike",
"username": "MLiss"
},
{
"code": "",
"text": "I have code in place in my app that will insert data into the table associated with the trigger.I have executed that code and see that the trigger is firing. Here is an excerpt from the trigger log:garminPushNotification\nDatabase\nOK\n2020-04-10T15:23:01-07:00\n4ms\n5e90f1c5d1e388afc4f82479",
"username": "MLiss"
},
{
"code": "",
"text": "Any body have any idea on this?",
"username": "MLiss"
},
{
"code": "",
"text": "Hi Michael – If you want your React application to do the processing instead of a Function, I think you will want to use the Watch() functionality instead of Triggers. While both are based on MongoDB’s Change Streams Triggers are meant solely for calling Stitch Functions, while Watch() is meant to be used within an application to monitor for and react to changes – Is this what you’re looking for?",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Drew,Thanks… that helps out !",
"username": "MLiss"
},
{
"code": "",
"text": "Hi @MLiss,\nI have the same need but for a Flutter app.\nBy any chance have you managed to implement this ?\nWould you have a minute to spare for a quick feedback ?\nI read about the integration with Realm db,\nI wonder if this means that using a mobile Realm db improves integration for such features.",
"username": "Pierre_Gancel"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using Stitch triggers to call React application processing | 2020-04-10T21:01:27.316Z | Using Stitch triggers to call React application processing | 1,810 |
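Drew's answer above points at watch() (change streams) rather than Triggers when the application itself should react to inserts. The thread's app is React/Stitch, so the following is only a hedged PyMongo sketch of the same change-stream idea; the URI, database and collection names are placeholders, and change streams require a replica set or Atlas cluster.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder URI
collection = client["mydb"]["garmin_notifications"]                # placeholder namespace

# Watch only insert events, mirroring an "insert" database trigger.
pipeline = [{"$match": {"operationType": "insert"}}]

with collection.watch(pipeline) as stream:
    for change in stream:
        doc = change["fullDocument"]
        # Application-side processing happens here instead of a server-side function.
        print("new document inserted:", doc["_id"])
```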
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hello everyone!I am curious to look at example of top level data structures (how the dbs and collections are organized) for different use cases, e.g. different types of apps. How dbs and collections and sub collections do different types of use cases typically have? How are they organized? Does anyone know of a resource like that?PS. My guiding-thought re database structure is that the data-hierarchy you have in your mind should be reflected in the structure of the cluster to extent possible, but I was wondering if anyone else had any thoughts on this.",
"username": "Rich_Guy"
},
{
"code": "",
"text": "Hi @Rich_Guy, welcome!I am curious to look at example of top level data structures (how the dbs and collections are organized) for different use cases, e.g. different types of appsThe concept of a database in MongoDB is just a logical construct, which is similar to a namespace for a collection. The structure of collections however depends on your use case, i.e. whether you choose two have a single collection containing embedded documents, or two collections with related documents.PS. My guiding-thought re database structure is that the data-hierarchy you have in your mind should be reflected in the structure of the cluster to extent possibleIt’s less about the database structure, but more about the data model itself. An example case where you would consider a different data model for a cluster’s structure would be Aggregation $lookup: sharded collection restrictions.Does anyone know of a resource like that?Please review the following resources related to data modelling:There’s also a free online course at MongoDB University, one that focuses on creating data models for MongoDB: M320: Data ModelingRegards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Hi Rich. Thanks for the question.I would add to what Wan said by noting that there are some anti-patterns with DB/collection usage for multi-tenant setups. I would avoid (for example) having a separate database for each customer, and then repeating the same collection names inside each database. As each collection, and each index require a separate file on the storage layer, you can get situations where there are 100’s of thousands of different files. Each fille requires a file handle in the OS, and these can add up to take a significant amount of memory which could otherwise be used for data caching.I would instead recommend having shared collections for the entire SaaS with a customer-identifing field in the document schema, and have just one giant logical database with one collection for each entity type.Hope this helps,\nNic.",
"username": "Nic"
},
{
"code": "",
"text": "Hi Nic,I was also planning a similar approach - one database per customer and the same collection names repeated for each customer, but you mention that it could get into a nightmare of files and OS handling (sic).but if I use the suggested alternative of a shared collection with customer-identifying field then there is a huge risk - one programming error, one data leakage, and its all over.Besides separate databases seemed to have some advantageswhat alternative approach can you suggest instead of shared collection with customer-identifier particularly to nullify the risk of cross customer data leakage due to a programming oversightregards\nSanjay",
"username": "Sanjay_Minni"
},
{
"code": "",
"text": "Hi Sanjay,I agree that there are certainly benefits to having a single deployment and one database per customer. You can mitigate the file handle issue somewhat by having a highly sharded cluster (say 100 shards) and have 200 customer databases homed on each shard, giving 20k databases. Since each database has a primary shard you can use movePrimary to distribute them as you wish. As long as you you keep all databases/collections unsharded this will scale well.At the other extreme end, you could to launch one deployment per customer. On-prem this is difficult, but on Atlas you could programmatically launch a new M10 for each customer via the API. This reduces the risk of a “noisy neighbor”. If one of your customer’s is highly loaded they are not going to negatively impact the performance of other customers since their VMs are isolated.",
"username": "Nic"
},
{
"code": "",
"text": "while I understand this information, i also want to clarify on the requirements:",
"username": "Sanjay_Minni"
}
] | Top Level DB Structure Examples? | 2020-03-09T16:41:57.094Z | Top Level DB Structure Examples? | 2,557 |
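Nic's recommendation in the thread above is one shared collection per entity type with a customer-identifying field, rather than a database per tenant. The sketch below is only an illustration of that shape in PyMongo; the field, database and collection names are hypothetical.

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["saas"]                                # one logical database
orders = db["orders"]                              # one collection per entity type

# Compound index so tenant-scoped queries stay efficient.
orders.create_index([("customer_id", ASCENDING), ("created_at", ASCENDING)])

def find_orders(customer_id, status):
    # Every query is scoped by the tenant field, which is what guards against
    # cross-customer leakage in this model.
    return orders.find({"customer_id": customer_id, "status": status})
```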
null | [
"golang"
] | [
{
"code": "db.ShardCollection(...)collection.Shard(...)db.RunCommand",
"text": "I am looking for a way to shard a collection (https://docs.mongodb.com/manual/reference/method/sh.shardCollection/) using the Go driver (GitHub - mongodb/mongo-go-driver: The Official Golang driver for MongoDB).I was somehow expecting a method db.ShardCollection(...) or collection.Shard(...) but I couldn’t find anything.I suppose it would work using db.RunCommand? But is there a more Go idiomatic way?Thanks,\nIulian",
"username": "Iulian_Nitescu"
},
{
"code": "db.RunCommandcmd := bson.D{\n {\"shardCollection\", \"dbName.collectionName\",\n {\"key\", bson.D{{\"firstKeyField\", 1}, {\"secondKeyField\", 1}, ...},\n {\"unique\", true},\n {\"numInitialChunks\", 5},\n}\nerr := db.RunCommand(ctx, cmd).Err()\n// handle error\n",
"text": "Hi @Iulian_Nitescu,Sharding a collection is usually considered an administrative task that users would do once using the shell or another tool, so drivers don’t offer a helper for it. You can do this using db.RunCommand. The command document should be based on the information at https://docs.mongodb.com/manual/reference/command/shardCollection/. It would look something like this:– Divjot",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do you shard a collection with the Go driver? | 2020-05-27T22:19:24.367Z | How do you shard a collection with the Go driver? | 4,109 |
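The Go answer above runs shardCollection through RunCommand because drivers do not ship a dedicated helper, and the same applies elsewhere. Below is a hedged PyMongo equivalent with a placeholder namespace and key fields; SON keeps the command name as the first field, which the server requires.

```python
from pymongo import MongoClient
from bson.son import SON

client = MongoClient("mongodb://localhost:27017")  # placeholder URI; must point at a mongos

# Sharding has to be enabled on the database first.
client.admin.command("enableSharding", "dbName")

cmd = SON([
    ("shardCollection", "dbName.collectionName"),
    ("key", SON([("firstKeyField", 1), ("secondKeyField", 1)])),
    ("unique", True),
])
client.admin.command(cmd)
```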
null | [
"node-js",
"production"
] | [
{
"code": "writeErrorswriteErrorsBulkWriteErrormessageBulkWriteErrorwriteErrorsjournaljournaljMongoClientjournal",
"text": "The MongoDB Node.js team is pleased to announce version 3.5.8 of the driver@adityapatadia helped uncover an issue with our server selection logic which filtered out servers after evaluating whether they were in the latency window. This meant that non viable servers were considered during the window calculation and would render certain viable servers unviable.@vkarpov15 submitted a patch to always include writeErrors on a BulkWriteError. We have logic to set the message of BulkWriteError to the message of the first error encountered if there is only one error. Unfortunately, this logic removed the writeErrors field when doing that, so users could be faced with an error which conditionally changed shape.@dead-horse identified a memory leak in the new connection pool where wait queue members which timed out might be left in the queue indefinitely under sufficient load. The fix here was to ensure that all wait queue members are flushed during wait queue processing before evaluating whether there were available sockets to process new requests.Once @dead-horse was able to patch the connection pool memory leak, they also identified a edge case where implicit sessions could be leaked in a very specific error condition. The logic to release implicit sessions was simplified, preventing this from happening in the futureA bug introduced last summer prevented unordered bulk write operations from continuing after the first write error - one of the most important features of being an unordered operation. We now properly support this feature again.@nknighter filed a report that the journal option was ignored when provided via the connection string. The paramater j was supported both through the connection string and explicit added to MongoClient options, but the official documentation for connection strings support a journal option.Reference: MongoDB Node.js Driver\nAPI: Index\nChangelog: node-mongodb-native/HISTORY.md at 3.5 · mongodb/node-mongodb-native · GitHubWe invite you to try the driver immediately, and report any issues to the NODE project.Thanks very much to all the community members who contributed to this release!The MongoDB Node.js team",
"username": "mbroadst"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Node.js Driver 3.5.8 Released | 2020-05-28T13:07:13.412Z | MongoDB Node.js Driver 3.5.8 Released | 1,976 |
null | [
"spring-data-odm"
] | [
{
"code": "orgIdorgId",
"text": "Hi team, got a question for you.Here is the scenario… there will be standard (one mandatory) connection to one db, in this db, there is a collection which has customer db details(username,pwd). On each API call, there’ll be orgId that is passed as request body. Based on the orgId input, we should be fetching the relevant customer db details from the initial db. Based on the db details acquired from the DB call, we should connect to that particular DB and perform CRUD operation on the particular collectionCan we achieve this using mongorepository and spring boot?",
"username": "shivananda_swamy"
},
{
"code": "AbstractMongoClientConfigurationMongoRepository",
"text": "Hello @shivananda_swamy,you can configure to connect to a particular database (and there are many ways to configure). Repository calls can query a collection based upon an input (supplied) parameter value.The query results can be used to perform further actions.For example, you can configure connection to a particular database using AbstractMongoClientConfiguration class and use with MongoRepository.More details at: Spting Data MongoDB Reference.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks for writing back Prasad.What I am looking for right now is, not to go with mongoTemplate, and just using mongoRepository alone, just as we do with normal single connection mongodb where we have repository interface for one document.To explain my problem more clearly, I obtain the connection to the 1stDB, from this DB I query for the particular collection to fetch customer db credentials and now using this details(mongodb credentials) I want to connect to that particular DB and then perform CRUD operation on the collections in that particular DB.All this I want to achieve using mongoRepository way of spring boot applicationHope this explanation is helpful",
"username": "shivananda_swamy"
},
{
"code": "",
"text": "To explain my problem more clearly, I obtain the connection to the 1stDB, from this DB I query for the particular collection to fetch customer db credentials and now using this details(mongodb credentials) I want to connect to that particular DB and then perform CRUD operation on the collections in that particular DB.There are two databases, and you are performing CRUD on the second database collections?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Yes Prasad\nFirst database is used only for fetching second database credentials and nothing else",
"username": "shivananda_swamy"
},
{
"code": "",
"text": "All this I want to achieve using mongoRepository way of spring boot applicationYou will be able to do that. Using the latest of the Spring Data MongoDB software as well as the MongoDB server, facilitates latest features as well as developer APIs.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks for writing Prasad.Could you help me out with documentation on this which could be helpful in tackling this challenge",
"username": "shivananda_swamy"
},
{
"code": "MongoRepository",
"text": "Could you help me out with documentation on this which could be helpful in tackling this challengeIn addition to the Spring Data MongoDB reference documentation I had already linked in my earlier post, these are useful documentation:The Spring Data MongoDB reference documentation has code samples for most of the usage scenarios - as in your case using the MongoRepository API for configuration and operations on the MongoDB server.You can always lookup online for specific articles, post queries, etc., at your favorite websites.",
"username": "Prasad_Saya"
}
] | How to do pipelining db connection | 2020-05-27T22:19:37.276Z | How to do pipelining db connection | 2,459 |
null | [] | [
{
"code": "",
"text": "I have a free DB in mLab that I would like to move to my new MongoDB paid instance. How do I go about that? #covid-19",
"username": "Stephen_Wright"
},
{
"code": "mongodumpmongorestoremongorestore",
"text": "Welcome to the forum @Stephen_Wright!There’s a comprehensive Guide to Migrating to Atlas from mLab.For a free tier cluster, the short scoop is that you should use mongodump to take a backup of your mLab deployment and then mongorestore into your Atlas cluster. See Seed with mongorestore in the Atlas documentation.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you for engaging with me.I read through the official migration process and started to do it but then realized that I can’t migrate in to an existing cluster and if I make a second cluster I will be charged double. I believe I can have multiple DB in the same cluster. I am a public school HS Computer Science teacher. MongoDB was generous to give me a $300 credit and I need to milk it. I have one DB in Atlas in an M2 cluster ($9/mo) that is running great! I have one other app I want to move over from a free tier in mLab to that same cluster in MongoDB. It seems the official process with wipe out my existing cluster. So, I need to use the mongodump and mongorestore process?",
"username": "Stephen_Wright"
},
{
"code": "",
"text": "Hi Stephen,The “mongodump” and “mongorestore” method as suggested by Stennie will work fine!Kind regards,\nLeo",
"username": "Leo_Van_Snippenburg"
},
{
"code": "mongorestoremongorestore--drop--nsFrom--nsTo",
"text": "Hi @Stephen_Wright,Were you able to migrate your mLab Sandbox data across to Atlas?I believe I can have multiple DB in the same cluster.A single MongoDB deployment can have multiple databases. Shared tier Atlas deployments (M0, M2, and M5) currently allow up to 100 databases and 500 collections in total. See: Atlas M0 (Free Tier), M2, and M5 Limitations. Dedicated clusters (M10+) do not have limits on the number of databases or collections.Depending on how you want to combine data from your backup into your destination Atlas cluster, there are several approaches to consider. The mongorestore documentation includes a full list of options, but here are some common starting points:No collection conflictsIf collections you are restoring do not already exist on your target Atlas cluster, you can mongorestore your backup as-is. New databases and collections will be created alongside any existing ones in the deployment.Replacing existing collections with the backup version:If collections you are restoring exist on your target Atlas cluster and you want to replace the existing data, use the --drop option to drop existing collections that are also in the backup. This option does not drop collections that aren’t in the backup.Rename collections while restoring to avoid conflictsIf collections you are restoring exist on your target Atlas cluster and you want to preserve the existing data, use the --nsFrom and --nsTo options to Change Collections’ Namespaces during Restore.If you have any further questions, please provide more detail on what you are trying to achieve so we can suggest an approach.Regards,\nStennie",
"username": "Stennie_X"
}
] | Moving from MLab to MongoDB | 2020-05-20T22:23:31.045Z | Moving from MLab to MongoDB | 2,732 |
[] | [
{
"code": "",
"text": "Sharded Cluster Configuration of 2 Shards node\ncollection_name is “test”,\ndocument_content is\nfor (var i = 0; i <= 10,000; i++) db.test.insert( { index : i } )How does this-SHARD_MERGE nReturned: 2- be make final result 5 Documents when i did “db.test.find( { index : { $gt : 4990, $lte : 5000 } } ).skip(5).explain()”?I don’t understand this sentence.캡처576×664 65.5 KB",
"username": "Kim_Hakseon"
},
{
"code": "index",
"text": "Sharded Cluster Configuration of 2 Shards node\ncollection_name is “test”,\ndocument_content is\nfor (var i = 0; i <= 10,000; i++) db.test.insert( { index : i } )Hello. What is the Shard Key field (is the collection sharded)? Is there an index created on the index field (in case it is not the shard key)?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi. Thank you for reading my postShard Key is { “index” : “hashed” }.",
"username": "Kim_Hakseon"
},
{
"code": "mongos{ index : { $gt : 4990, $lte : 5000 } }mongosmongosmongosmongosmongosmongos\"executionStages\" : {\n \"stage\" : \"SHARD_MERGE\",\n \"nReturned\" : 2,\n\n \"shards\" : [\n\n \"shardName\" : \"shard01\"\n \"stage\" : \"SKIP\",\n \"nReturned\" : 0,\n \"stage\" : \"SHARDING_FILTER\"\n \"nReturned\" : 3\n\n \"shardName\" : \"shard02\"\n \"stage\" : \"SKIP\",\n \"nReturned\" : 2,\n \"stage\" : \"SHARDING_FILTER\"\n \"nReturned\" : 7\n ]\n}\n\"nReturned\"\"stage\" : \"SHARDING_FILTER\"\"nReturned\"mongosskip(5)\"stage\" : \"SKIP\".\"nReturned\"\"nReturned\"\"stage\" : \"SHARDING_FILTER\"\"shard02\"\"shard01\"\"stage\" : \"SHARD_MERGE\", \"nReturned\" : 2",
"text": "How does this-SHARD_MERGE nReturned: 2- be make final result 5 Documents when i did “db.test.find( { index : { $gt : 4990, $lte : 5000 } } ).skip(5).explain()”?The actual query was executed this way in your cluster with two shards:NOTE: The mongos applies the skip to the merged set of results (not at the shard level).Based upon the above steps, the query returns the five documents - as expected.You have generated a query plan with “executionStats”. And, your question is more related to the query plan numbers from the “executionStats”. I will try to explain.Stages:These are some details from the plan’s “executionStats”:The \"nReturned\" values from the \"stage\" : \"SHARDING_FILTER\" is the actual number of documents returned from the cursors (from each shard). Note the values of \"nReturned\" are 3 and 7 (sums to 10); the total number of documents returned to the mongos. The skip(5) is applied to this total number of 10 documents, returning 5 documents (the actual result).If you notice, each of the shards also have a \"stage\" : \"SKIP\". And, the \"nReturned\" value for that shard is actually the skip applied on the \"nReturned\" value from the \"stage\" : \"SHARDING_FILTER\". For example, for the \"shard02\", the skip is applied on the value 7 and which results as 2. For the \"shard01\", the skip is applied on 3, which results a negative number and hence shows 0. But, actually the skip is never applied - these are just projections on each shard, I think. The sum of these projections are shown in the \"stage\" : \"SHARD_MERGE\", \"nReturned\" : 2.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "You mean the final result could be different from explain().Thank you for your answer. It really helped me a lot.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Then, can I call this a notation bug?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "You mean the final result could be different from explain().Thats what I found too ( I created the similar cluster and generated the query plan on the same query and data). I am working with MongoDB version 4.2.3 Enterprise Server.Then, can I call this a notation bug?I don’t know. if its a bug. May be its the way the plan is for shard clusters. I will be looking up for more details and post the findings here (later).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thank you, Thank you~Very Very your answer helped me a loooooooooooooooooooooooooooooooooooooooooot",
"username": "Kim_Hakseon"
}
] | I have a question about curser.skip() | 2020-05-26T08:28:01.364Z | I have a question about curser.skip() | 2,625 |
|
null | [
"indexes"
] | [
{
"code": "",
"text": "I have some questions about ttl based indexes",
"username": "Prateek_GUpta"
},
{
"code": "Does mongod create 1 thread for all the collections with ttl index enabled or is it 1 per collectionmaxIndexBuildMemoryUsageMegabytesdb.adminCommand( { setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 70000 } )\n2) What happens when the this thread created by mongod doesn't exit in 60 seconds, does a new thread gets created after 60 seconds, if yes, then I guess doesn't it mean that ttl based index should not be used for write heavy systems?\n",
"text": "Here, is the information I have received over chat\n1)\nDoes mongod create 1 thread for all the collections with ttl index enabled or is it 1 per collectionMongoDB index creation is not Multi- Threaded .You can try increasing the maxIndexBuildMemoryUsageMegabytes parameter value.The default value is 500 MB.What does it do?Limits the amount of memory that the simultaneous foreground index builds on one collection may consume for the duration of the builds.So by increasing this limit, may increase the performance of index creation. For Example:It looks like it will go beyond the 60seconds until it deletes the doc.By Default, the TTLMonitor thread runs once in every 60 seconds. You can find out the sleep interval using following admin command.||> db.adminCommand({getParameter:1, ttlMonitorSleepSecs: 1});{ “ttlMonitorSleepSecs” : 60, “ok” : 1 }To change this interval, supply another admin command with the desired interval:||> db.adminCommand({setParameter:1, ttlMonitorSleepSecs: 3600}); // 1 hour\n{ “was” : 60, “ok” : 1 }Only for 4th Question I seek answer",
"username": "Prateek_GUpta"
},
{
"code": "TTLMonitormongod",
"text": "Welcome to the community @Prateek_GUpta,It looks like you already have answers to most of your questions aside from #4. Your comment for #1 is also related to index creation, not the TTL background thread.There is only a single TTLMonitor thread, which wakes up every 60 seconds by default and iterates TTL indexes to find expired documents to remove.The TTL background thread is enabled by default on all data-bearing mongod instances. The TTL background thread will be idle for a replica set member unless it is currently in Primary state, since documents can only be directly deleted on a Primary. Secondary members apply delete operations via replication so they are consistent with the TTL expiry outcome on the Primary.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | TTL index internals | 2020-05-14T03:50:13.037Z | TTL index internals | 3,564 |
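The thread above is about when the single TTLMonitor thread removes expired documents; what it does not show is how a TTL index is declared from a driver. A small PyMongo sketch follows, with placeholder names and intervals, covering both the index and the monitor interval parameters quoted in the thread.

```python
import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["mydb"]["sessions"]                  # placeholder namespace

# TTL index: documents become eligible for deletion ~1 hour after "created_at".
coll.create_index("created_at", expireAfterSeconds=3600)

# The indexed field must hold a BSON date for expiry to apply.
coll.insert_one({"created_at": datetime.datetime.utcnow()})

# Inspect, and optionally change, how often the TTL monitor wakes up.
print(client.admin.command({"getParameter": 1, "ttlMonitorSleepSecs": 1}))
client.admin.command({"setParameter": 1, "ttlMonitorSleepSecs": 60})
```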
null | [
"indexes"
] | [
{
"code": "{\n \"updated_at\" : 1.0\n},\n\"name\" : \"updated_at_1\",\n\"ns\" : \"xxx-yyy\",\n\"expireAfterSeconds\" : 180.0,\n\"sparse\" : true,\n\"background\" : true\n db.adminCommand({getParameter:1, ttlMonitorSleepSecs: 1});\n{\n\t\"ttlMonitorSleepSecs\" : 300,\n\t\"ok\" : 1,\n\t\"operationTime\" : Timestamp(1589968984, 4),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1589968984, 4),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"YwsahwMVAlldgw3S02zksPGbVM0=\"),\n\t\t\t\"keyId\" : NumberLong(\"6799818280593784833\")\n\t\t}\n\t}\n}\n{\n\t\"_id\" : ObjectId(\"5ec4ede5f523db6eb8b98c7a\"),\n\t\"session_id\" : \"1587992244204\",\n\t\"user_id\" : \"XXXXXXX\",\n\t\"messages\" : [\n\t\t\n\t],\n\t\"created_at\" : NumberLong(1589964261),\n\t\"**updated_at**\" : NumberLong(1589964263)\n}\n",
"text": "I have set up a TTL index on one of the collections\nkey\" :It is scheduled to expire documents every 3 minutesMy TTLMonitor thread is scheduled to run every 300 secs - 5 minutesI have checked the admin logs as welldb.setLogLevel(1, “index”);TTL thread is running and also looking at my index, but it’s not purging the documents{ updated_at: 1.0 } name: updated_at_1\n2020-05-20T10:09:32.873+0000 D INDEX [TTLMonitor] deleted: 0Example of one of the document I have in my collectionNow this updated_at is generated using epoch time in seconds. Please tell me how can I make it work?",
"username": "Prateek_GUpta"
},
{
"code": "cron",
"text": "Now this updated_at is generated using epoch time in seconds. Please tell me how can I make it work?Hi @Prateek_GUpta,TTL Indexes currently only support expiration of data based on comparison with a BSON Date value for the expiry:If the indexed field in a document is not a date or an array that holds a date value(s), the document will not expire.If a document does not contain the indexed field, the document will not expire.The BSON Date type is a signed 64-bit integer representing milliseconds after (or before) the epoch, so it should be straightforward to convert your existing values to the supported data type.If you want to support expiration of documents based on field types or comparisons outside of what is provided in the built-in TTL index, an alternative approach would be writing your own script and scheduling using something like the cron utility (Linux/Unix operating systems) or Task Scheduler (Windows).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | TTL index not purging documents | 2020-05-28T04:41:46.256Z | TTL index not purging documents | 3,763 |
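Stennie's answer above is that TTL expiry only applies to BSON Date values, while the documents in this thread store updated_at as an integer number of epoch seconds. A hedged sketch of a one-off backfill is below; the namespace is a placeholder, and the loop assumes the stored values are 64-bit integers as in the sample document.

```python
import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["mydb"]["sessions"]                  # placeholder namespace

# One-off backfill: convert integer epoch seconds into BSON dates so the
# existing TTL index on "updated_at" can start expiring these documents.
for doc in coll.find({"updated_at": {"$type": "long"}}):
    as_date = datetime.datetime.utcfromtimestamp(doc["updated_at"])
    coll.update_one({"_id": doc["_id"]}, {"$set": {"updated_at": as_date}})

# New writes should store a datetime directly.
coll.update_one(
    {"session_id": "1587992244204"},
    {"$set": {"updated_at": datetime.datetime.utcnow()}},
)
```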
null | [
"cxx"
] | [
{
"code": " #include <iostream>\n #include <bsoncxx/json.hpp>\n #include <mongocxx/client.hpp>\n #include <mongocxx/stdx.hpp>\n #include <mongocxx/uri.hpp>\n #include <mongocxx/instance.hpp>\n\n\n using bsoncxx::builder::basic::make_document;\n using bsoncxx::builder::basic::kvp;\n\n int main(int, char**) {\n\n std::cout << \"Creating instance\" << std::endl;\n mongocxx::instance instance{};\n auto uri = mongocxx::uri{mongocxx::uri::k_default_uri};\n\n std::cout << \"Creating client\" << std::endl;\n auto client = mongocxx::client{uri};\n\n std::cout << \"Accessing DB \" << std::endl;\n auto db = client[\"mydb\"];\n\n auto coll = db[\"my_collection\"];\n\n auto criteria = make_document(kvp(\"x\", \"foo\"));\n auto update = make_document(kvp(\"$set\", make_document(kvp(\"x\", \"bar\"))));\n\n auto write_concern = mongocxx::write_concern{};\n write_concern.journal(true);\n write_concern.acknowledge_level(\n mongocxx::write_concern::level::k_majority);\n\n std::cout << \"Setting options\" << std::endl;\n auto options = mongocxx::options::find_one_and_update()\n .write_concern(std::move(write_concern))\n .return_document(mongocxx::options::return_document::k_before);\n\n std::cout << \"Invoking find_one_and_update\" << std::endl;\n coll.find_one_and_update(\n criteria.view(),\n update.view(),\n options);\n\n std::cout << \"Done\" << std::endl;\n }\nCreating client\nAccessing DB\nSetting options\nInvoking find_one_and_update\nDone\nCreating client\nAccessing DB\nSetting options\nInvoking find_one_and_update\nterminate called after throwing an instance of 'mongocxx::v_noabi::write_exception'\n what(): BSON field 'j' is an unknown field.: generic server error\nAborted\njwrite_concern.journal(true);1405 2020-05-24T21:57:55.798+0000 D2 COMMAND [conn2] run command mydb.$cmd { findAndModify: \"my_collection\", query: { x: \"foo\" }, update: { $set: { x: \"bar\" } }, w: \"majority\", j: true, $db: \"mydb\", lsid : { id: UUID(\"a2067505-a443-4427-84a6-500f83ce310c\") } }\n1406 2020-05-24T21:57:55.799+0000 D1 - [conn2] User Assertion: Location51177: BSON field 'j' is an unknown field. src/mongo/db/commands/find_and_modify.cpp 315\n1407 2020-05-24T21:57:55.799+0000 D1 COMMAND [conn2] assertion while executing command 'findAndModify' on database 'mydb' with arguments '{ findAndModify: \"my_collection\", query: { x: \"foo\" }, update: { $set: { x: \"bar\" } }, w: \"majority\", j: true, $db: \"mydb\", lsid: { id: UUID(\"a2067505-a443-4427-84a6-500f83ce310c\") } }': Location51177: BSON field 'j' is an unknown field.\n1408 2020-05-24T21:57:55.799+0000 I COMMAND [conn2] command mydb.$cmd command: findAndModify { findAndModify: \"my_collection\", query: { x: \"foo\" }, update: { $set: { x: \"bar\" } }, w: \"majority\", j: true , $db: \"mydb\", lsid: { id: UUID(\"a2067505-a443-4427-84a6-500f83ce310c\") } } numYields:0 ok:0 errMsg:\"BSON field 'j' is an unknown field.\" errName:Location51177 errCode:51177 reslen:124 locks:{} proto col:op_msg 0ms\n",
"text": "The following simple code exhibits different behaviour on mongo 4.2.6 and 4.0On mongo 4.0, things work as expected and give this output,However, when I run this against a mongo 4.2.6 server, I see this,This j seem to come from the write_concern.journal(true); line. Here is what the mongo 4.2.6 logs say about this,I was going to create a bug for this in the mongocxx JIRA project. Posting this here so that I get a few more people to look at this - in case I am overlooking something.Details,\nMongo 4.2.6 was fetched from: http://downloads.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.2.6.tgz\nMongo 4.0.0 was fetched from: http://downloads.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.0.0.tgz\nMongo-cxx was fetched and built from: https://github.com/mongodb/mongo-cxx-driver/archive/r3.5.0.tar.gzI also posted this on stackoverflow: mongodb - Different behaviour on mongo 4.0 and 4.2 using mongocxx 3.5 - Stack Overflow",
"username": "Mohammad_Ghazanfar"
},
{
"code": "mongod --port=27017 --bind_ip_all -vvvvv --fork \\\n --dbpath=/my/db/path/ --logpath=/my/db/path/logs/mongod.log\n",
"text": "In both cases, mongod was started with a command like this,",
"username": "Mohammad_Ghazanfar"
},
{
"code": "",
"text": "Hi @Mohammad_Ghazanfar, welcome!Thank you for reporting this issue.I performed a brief test and able to reproduce the same issue. I have created an issue ticket CXX-2028, please feel free to upvote or add yourself as a watcher to receive notifications on the ticket.Regards,\nWan.",
"username": "wan"
}
] | Different behaviour on mongo 4.0 and 4.2 using mongocxx 3.5 | 2020-05-24T22:54:58.078Z | Different behaviour on mongo 4.0 and 4.2 using mongocxx 3.5 | 2,803 |
null | [
"replication",
"containers"
] | [
{
"code": "{\n \"set\" : \"rs0\",\n \"date\" : ISODate(\"2020-05-27T22:27:30.923Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(1),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1590618446, 1),\n \"t\" : NumberLong(1)\n },\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1590618446, 1),\n \"t\" : NumberLong(1)\n },\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1590618446, 1),\n \"t\" : NumberLong(1)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1590618446, 1),\n \"t\" : NumberLong(1)\n }\n },\n \"lastStableCheckpointTimestamp\" : Timestamp(1590618426, 1),\n \"members\" : [ \n {\n \"_id\" : 0,\n \"name\" : \"mongomaster:27017\",\n \"health\" : 1.0,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 2171988,\n \"optime\" : {\n \"ts\" : Timestamp(1590618446, 1),\n \"t\" : NumberLong(1)\n },\n \"optimeDate\" : ISODate(\"2020-05-27T22:27:26.000Z\"),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1588464660, 2),\n \"electionDate\" : ISODate(\"2020-05-03T00:11:00.000Z\"),\n \"configVersion\" : 5,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n }, \n {\n \"_id\" : 1,\n \"name\" : \"mongoslave:27018\",\n \"health\" : 1.0,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 501546,\n \"optime\" : {\n \"ts\" : Timestamp(1590116889, 1),\n \"t\" : NumberLong(1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1590116889, 1),\n \"t\" : NumberLong(1)\n },\n \"optimeDate\" : ISODate(\"2020-05-22T03:08:09.000Z\"),\n \"optimeDurableDate\" : ISODate(\"2020-05-22T03:08:09.000Z\"),\n \"lastHeartbeat\" : ISODate(\"2020-05-27T22:27:29.378Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2020-05-22T03:08:11.889Z\"),\n \"pingMs\" : NumberLong(2),\n \"lastHeartbeatMessage\" : \"\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 5\n }\n ],\n \"ok\" : 1.0,\n \"operationTime\" : Timestamp(1590618446, 1),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1590618446, 1),\n \"signature\" : {\n \"hash\" : { \"$binary\" : \"P7jNLR6UPsFLFA3EHFCpl23MJDM=\", \"$type\" : \"00\" },\n \"keyId\" : NumberLong(6822403774141693953)\n }\n }\n}\n",
"text": "Hi,We have configured a MongoDB primary (4.0.6) and MongoDB secondary (4.0.12) [without voting-static primary].It looks like the replication is running without errors.The problem is: when I make data updates (on primary) the updates do not replicate to the secondary.Please help me to fix the problem and load delta changes to secondary.The replication status shows:",
"username": "Anton_Turbin"
},
{
"code": "optimeDaters.printReplicationInfo()",
"text": "Hi @Anton_Turbin and welcome to the MongoDB community forums.Have you looked at the mongo log files to see if you can see what’s going on? Your secondary does not appear to be getting the most recent operations as the optimeDate is almost 6 days behind your master. I would check the log files for both servers to see if you have any errors in them.What is the result of running rs.printReplicationInfo() on both machines?I did notice that this replica set only has two members and that is not a recommended setup, unless you’re doing some testing I would add a third member to the replica set.As for syncing the data, if the secondary is not able to catch up you would have to resync that member.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "yes for now I testing only with 2 servers. They are configured via replica set Primary-(rw) and Secondary-(r - only).\nThe result “rs.printReplicationInfo()” from servers\nPrimary:\nconfigured oplog size: 1622.255859375MB\nlog length start to end: 1139912secs (316.64hrs)\noplog first event time: Thu May 14 2020 22:00:34 GMT+0300 (Jerusalem Standard Time)\noplog last event time: Thu May 28 2020 02:39:06 GMT+0300 (Jerusalem Standard Time)\nnow: Thu May 28 2020 02:39:09 GMT+0300 (Jerusalem Standard Time)Secondary:\nconfigured oplog size: 11401.712695121765MB\nlog length start to end: 1438197secs (399.5hrs)\noplog first event time: Tue May 05 2020 14:38:12 GMT+0300 (Jerusalem Standard Time)\noplog last event time: Fri May 22 2020 06:08:09 GMT+0300 (Jerusalem Standard Time)\nnow: Thu May 28 2020 02:39:25 GMT+0300 (Jerusalem Standard Time)",
"username": "Anton_Turbin"
},
{
"code": "",
"text": "Another question about resync, how I can resync without stopping the server (can I configure resynchronisation on fly?)How I can prevent the crushes at the future and configure auto reload changes from Primary server if synchronization error happens",
"username": "Anton_Turbin"
},
{
"code": "",
"text": "Yes I see error on slave \" HostUnreachable: Error connecting to mongomaster:27017\"\nI really cannot connect from Secondary to Primary\nmongo “mongodb://:@mongomaster:27017/users?authSource=admin”\ntested the port and no problems with the port (the port is open and I can connect from other vm)",
"username": "Anton_Turbin"
},
{
"code": "",
"text": "OK Thanks!!!\nAfter checking the logs I did many tests to find problem,the problem in connection from Secondary to Master -> the crash happened by docker + iptables (the docker bridge very problematic or I cant find stable release of docker )\nnow all automatically was synchronized and I VERY HAPPY THANKS AGAIN!!!",
"username": "Anton_Turbin"
},
{
"code": "",
"text": "@Anton_Turbin very glad to hear you got things figured out and running again. Just remember that the logs are your first line of troubleshooting generally. They might not always tell the whole story but they will point you in the right direction.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update not replicated to secondary | 2020-05-27T22:59:09.548Z | Update not replicated to secondary | 3,356 |
[] | [
{
"code": "",
"text": "I am new to MongoDB, I’m using the Python driver to work with the database. I’d like to be able to implement optimistic concurrency and was wondering if MongoDB has some built in concurrency checking. I tried to test it by creating a database with a single table with a single entry and do the followingWhen I run this the second write overwrites the first so it appears to me there is no checking by MongoDB. Can anyone direct me on what I’m doing wrong? Here is a screenshot of the code:\ntestLocking893×899 162 KB",
"username": "Alan_Strong"
},
{
"code": "",
"text": "What were you expecting?You are calling update_one twice, so you are asking mongod to update the same document twice. You are getting both changes. The document you get is the result of applying the 2 updates. What would have worry me is that the result would have been a document with StudyId:2222, since it is the first update_one.",
"username": "steevej"
},
{
"code": "",
"text": "I was hoping that the database would have kept track of writes with and recognised that the second write had changed. What I was looking to find out was if MongoDB itself implements optimistic locking or whether the developer needs to take care of that.",
"username": "Alan_Strong"
},
{
"code": "",
"text": "I am not sure what you mean byimplements optimistic lockingas for the mongod, you are doing 2 writes and it does it. The first write has terminated successfully and the second one could proceed and succeeded.If you want to implement some king of network wide mutex, it can be easily done.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steeve, thanks for the input. What I want to achieve is ensure that when a p1 write is performed that it doesn’t overwrite a different write that was performed by p2 in between the time of the p1 read the record and is going to write it. I had been wondering if MongoDB had some mechanism to identify this. I’ve created a function in python that I believe will do what I want to achieve this by adding a version field to each collection that contains the datetime and use the version from the read in the match filter for the update. I also update the record with the current datetime. Here is a copy of the code. Now when I try to perform the second write it checks to see if the modification was successful and if not raises an exception.\nconc692×760 140 KB",
"username": "Alan_Strong"
},
{
"code": "",
"text": "I had been wondering if MongoDB had some mechanism to identify this.The mechanism is the query (name filter in your case) and in the set. Like I wrote only one updateDb() will succeed because only one will have a match because you change the matching value in your $set. Just like I wrote, only one will set mutex:1 because only one will match mutex:0. You are simply using ‘version’ like ‘mutex’ but with a more complex value.In addition, if matched_count == 0 then modified_count will be 0. If matched_count > 0 then modified_count will be 1 because func is update_one.So it looks like you got your solution.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to implement optimistic concurrency | 2020-05-27T11:14:10.692Z | How to implement optimistic concurrency | 9,359 |
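The code in this thread is only visible as screenshots, so here is a self-contained PyMongo sketch of the version-field approach the answers converge on; collection and field names are placeholders. The filter on the previously read version means a concurrent writer holding a stale version no longer matches, which the caller detects through matched_count.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["mydb"]["studies"]                   # placeholder namespace
coll.insert_one({"_id": 1, "StudyId": "1111", "version": 0})

def update_with_version_check(doc_id, read_version, changes):
    # Match both the _id and the version seen at read time, and bump the
    # version atomically so any later writer with the old version fails.
    result = coll.update_one(
        {"_id": doc_id, "version": read_version},
        {"$set": changes, "$inc": {"version": 1}},
    )
    if result.matched_count == 0:
        raise RuntimeError("document changed since it was read; re-read and retry")
    return result

doc = coll.find_one({"_id": 1})
update_with_version_check(1, doc["version"], {"StudyId": "2222"})  # succeeds
update_with_version_check(1, doc["version"], {"StudyId": "3333"})  # raises: stale version
```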
|
null | [] | [
{
"code": "",
"text": "Hi,\nI am using MongoDB c# latest driver from Nuget Packages.The count of documents CountDocuments(); will return long type.So ideally it should be able to skip long size of documents. But Skip() function accepts only integer type.This will restrict from using MongoDB c# driver will many records. I am trying to implement paging by Skip() and Limit function.With regards,\nNithin B.",
"username": "Nithin_Bandaru"
},
{
"code": "db.collection.find({ a: 1 }).limit(100).skip(0)db.collection.find({ a: 1 }).limit(100).skip(100)db.collection.find({ a: 1 }).limit(100).skip(2147483600)2147483600",
"text": "Hi @Nithin_Bandaru,You’ve stumbled upon a classic database performance trap: skip and limit for paging. The driver is doing the right thing by requiring an integer type. I strongly recommend that you do not skip large quantities of documents. And here’s why:When you query documents like this db.collection.find({ a: 1 }).limit(100).skip(0), the database happily finds 100 documents to return 100 documents. Similarly, when you query documents like this: db.collection.find({ a: 1 }).limit(100).skip(100) the database happily finds 200 documents to return 100 documents.See the problem? To make it even more apparent, when you do this: db.collection.find({ a: 1 }).limit(100).skip(2147483600), where 2147483600 approaches the maximum size of an integer type, the database happily finds 2,147,483,700 documents to return 100 documents. That’s a lot of work for any database! That’s 2,147,483,600 documents that the database must find, potentially pulling from disk before the database can even begin returning the 100 documents you’re requesting!Thankfully, because MongoDB provides powerful options for using flexible schema design, there are better ways of doing paging. I recommend you check out my blog post on the subject here:\nPaging with the Bucket Pattern: Part 1Thanks!Justin",
"username": "Justin"
},
{
"code": "",
"text": "@Justin Thanks for the suggestion.The bucket pattern looks interesting. It looks similar to Cassandra DB storage where data gets grouped by partition columns. There they will get auto grouped but here it manual in this pattern.To implement this kind of storage I would need to keep in-memory queues and extract them in a specific time interval and save it to DB only once and in other ways, if updating records or realtime will be very expensive.What do you think?I also have written one more question which is very similar to this problem https://www.mongodb.com/community/forums/t/how-to-loop-through-mongodbs-all-records-with-resume-from-a-particular-record/4621\nHere I was wondering if I can use the _id field to make jumps for resume functionality of migration because it already has an index and more importantly it is unique. If I am able to apply $gt on this field then this will not require any bucket patters(Hoping that MongoDB uses B+ trees for indexed).Basically what I am saying is we need to have one unique column that can be comparable for greater or lesser. _id column is one of them. Not sure if it is comparable with $gt.What do you think about this?I was just wondering how will the IDE’s work in these scenarios with too many records.\nAll IDE’s of mostly all databases show records in paging format right.\nAs of my understanding, I think all of them use cursors. Just my assumption.\nGenerally, the cursor is like Enumerator where you can read next not skip few and read next.\nThat’s why they don’t have the ability to jump from one page to a far next page, they can only move to the next page.Am I correct?",
"username": "Nithin_Bandaru"
},
{
"code": "$gt_id_id",
"text": "Hi @Nithin_Bandaru,Basically what I am saying is we need to have one unique column that can be comparable for greater or lesser. _id column is one of them. Not sure if it is comparable with $gt.You definitely can use $gt with _id. That’s a common and efficient strategy for paging. Just remember to sort by _id too.As of my understanding, I think all of them use cursors. Just my assumption.When you issue a find command on MongoDB, you get a cursor like most other databases. It works the same way. Issuing a find statement and iterating through every document in the collection isn’t paging (and thus doesn’t use skip/limit), it’s iterating and falls under batch size. However, most applications and IDEs don’t use this method, they use paging.Remember that each time you issue a new find command (or select statement on a relational database), you’re given a new cursor. Keeping a long running cursor and iterating through results has completely different performance characteristics than iterating through the same result set using skip and limit (generating a new cursor for every query). Almost all applications use skip and limit for paging. Keeping a cursor around for extended periods of time with any database is problematic for a variety of reasons that I won’t go into now.Thanks,Justin",
"username": "Justin"
}
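To make the $gt-on-_id approach concrete, here is a small sketch of range-based paging (shown in Python for brevity; the same filter/sort/limit shape applies with the C# driver, and the collection name and page size are assumptions):

```python
from pymongo import ASCENDING, MongoClient

coll = MongoClient()["test"]["items"]  # assumed namespace

def next_page(last_id=None, page_size=100):
    """Return the next page ordered by _id, starting after the last _id already seen."""
    query = {"_id": {"$gt": last_id}} if last_id is not None else {}
    return list(coll.find(query).sort("_id", ASCENDING).limit(page_size))

page = next_page()
while page:
    # process the documents in `page` ...
    page = next_page(last_id=page[-1]["_id"])
```

Because each page starts from an indexed _id value, the server never has to walk and discard the skipped documents.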
] | C# driver skip function is of type integer | 2020-05-26T19:54:48.416Z | C# driver skip function is of type integer | 4,763 |
null | [
"java"
] | [
{
"code": "{\n \"_id\" : \"testID\",\n \"memberID\" : \"testMemberID\",\n \"memberName\" : \"TestUser\",\n \"creationDate\" : \"25-05-2020\"\n}\n",
"text": "So i got a test Document inside my CollectionNow i want to check if there already is a Document with the _id “testID” before creating it of course (Java). Or is there no need for something like this (are there no duplicates by default)",
"username": "EnderOffice"
},
{
"code": "_id_idObjectId_id_id_id_id\"E11000 duplicate key error ...\"11000com.mongodb.MongoWriteExceptionE11000 duplicate key error collection: test.test index: _id_ dup key: { _id: \"testID\" }_id/*\n * Returns a boolean true if document is inserted, else false.\n */\nprivate boolean insertDocument() {\n\n Document doc = new Document(\"_id\", \"testID\");\n \n try {\n collection.insertOne(doc);\n }\n catch(MongoWriteException e) {\n e.printStackTrace();\n if (e.getCode() == 11000) {\n System.out.println(\"You are trying to insert a document with *duplicate* _id: \" + doc.get(\"_id\"));\n }\n else { \n System.out.println(\"Some error while inserting document with _id: \" + doc.get(\"_id\"));\n }\n return false;\n }\n \n System.out.println(\"Document inserted with value of _id: \" + doc.get(\"_id\"));\n return true;\n}\nprivate boolean insertDocument() {\n\n Document doc = new Document(\"_id\", \"testID\");\n \n try {\n Document d = collection.find(eq(\"_id\", doc.get(\"_id\"))).first();\n \n if (d == null) {\n collection.insertOne(doc);\n }\n else {\n System.out.println(\"Document already exists with _id: \" + doc.get(\"_id\"));\n return false;\n }\n }\n catch(MongoWriteException e) {\n e.printStackTrace();\n System.out.println(\"Some error while inserting document with _id: \" + doc.get(\"_id\"));\n return false;\n }\n\n System.out.println(\"Document inserted with value of _id: \" + doc.get(\"_id\"));\n return true;\n}",
"text": "Some general rules about the _id field:The _id field is present in all documents in the collection - it is mandatory. If the _id is not supplied by the application / user, MongoDB will create it when the document is inserted (and it will be of type ObjectId). The _id value has a default unique index on it - this is automatically created by the MongoDB. So, a collection can have only one document with a specific _id. Also, the _id value cannot be modified for a document, and the unique index on this field cannot be deleted (or modified).Now i want to check if there already is a Document with the _id “testID” before creating it of course (Java). Or is there no need for something like this (are there no duplicates by default)If you are creating a document, and if another document exists with the same _id, there will be an error and the new document will not be inserted (as it is considered as a duplicate). And, the error says something like \"E11000 duplicate key error ...\". The 11000 is the error code.In Java, the concerned exception class is com.mongodb.MongoWriteException. And, the exception message is likely to be something like this: E11000 duplicate key error collection: test.test index: _id_ dup key: { _id: \"testID\" }So, you can try one these following approaches, based upon your application needs:Java code samples for both options:Option 1:Option 2:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thank you very much! Worked out",
"username": "EnderOffice"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Noob Problem (Java) check if a Document Exists inside my Collection | 2020-05-25T12:40:06.578Z | Noob Problem (Java) check if a Document Exists inside my Collection | 8,584 |
null | [
"queries",
"performance"
] | [
{
"code": "",
"text": "I have multiple collections with 100000 documents in each collection and 10000 columns in each document. There is a python script which executes aggregate queries in a multi-threaded fashion. Each thread invokes an aggregate on a separate collection.When the script is executed, the amount of time it takes to complete the aggregation is proportional to the number of threads. i.e. the latency linearly increases with number of queries concurrently executed.If aggregation on single collection takes ‘x’ amount of time, then multi-threaded aggregations on ‘n’ collections takes almost ‘n*x’ amount of time.\nMy expectation was that the multi-threaded queries would take roughly the same amount of time as the single-threaded one. Now it’s apparent that multiple queries are not executed concurrently in mongodb. Is this the known limitation? Is there any configuration parameter in mongodb to control concurrency?I’ve asked the same question on stackoverflow as well: multithreading - MongoDB concurrent queries on different collections are slow - Stack Overflow",
"username": "Ajinkya_Surnis"
},
{
"code": "",
"text": "In client/server system performance analysis is not as easy as you seem to think. It depends of a multiple of factors. But the first thing to do is to isolate the bottleneck. How did you come to the following conclusion?Now it’s apparent that multiple queries are not executed concurrently in mongodb. Is this the known limitation?",
"username": "steevej"
}
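One way to start isolating the bottleneck, as suggested above, is to time the same aggregations alone and concurrently while using a separate client per worker, so the driver's connection pool is not the limiting factor. A rough Python sketch (the pipeline and collection names are placeholders, not taken from the original script):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from pymongo import MongoClient

PIPELINE = [{"$group": {"_id": None, "n": {"$sum": 1}}}]  # placeholder pipeline

def run_one(coll_name):
    client = MongoClient()                       # one client per worker to rule out pool limits
    coll = client["test"][coll_name]
    start = time.perf_counter()
    list(coll.aggregate(PIPELINE, allowDiskUse=True))
    return coll_name, time.perf_counter() - start

names = [f"col_{i}" for i in range(8)]           # placeholder collection names
with ThreadPoolExecutor(max_workers=len(names)) as pool:
    for name, secs in pool.map(run_one, names):
        print(f"{name}: {secs:.2f}s")
```

If per-query latency still grows with the worker count, the contention is more likely on the server (CPU, disk, cache) than in the client.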
] | MongoDB concurrent queries on different collections are slow | 2020-05-27T07:37:18.993Z | MongoDB concurrent queries on different collections are slow | 3,358 |
null | [
"aggregation"
] | [
{
"code": "ref1: 5ec68d5edcf68a016c4d1f68\nref2: 5ec68d5edcf68a016c4d1f68\ncountry: \"US\"\ndate: 2020-05-22T00:00:00.000+00:00\nhour: 14\ntime: 2020-05-22T14:42:11.396+00:00\ntype: \"x\"\nref1: 5ec68d5edcf68a016c4d1f68\nref2: 5ec68d5edcf68a016c4d1f68\ndate: 2020-05-22T00:00:00.000+00:00\ncontry: {\n US: {x: 1, y: 10 ...},\n CA: ....\n .\n .\n\n},\nhour: {\n 14: {x: 5, y: 0 ...},\n 20: ....\n .\n .\n\n}\nref1: 5ec68d5edcf68a016c4d1f68\nref2: 5ec68d5edcf68a016c4d1f68\ndate: 2020-05-22T00:00:00.000+00:00\naggregatedBy: 'country'\nres: {\n US: {x: 1, y: 10 ...},\n CA: ....\n .\n .\n\n}\n{\n _id: { date : '$date', ref1: '$ref1', ref2: '$ref2' },\n x: {\n $sum: {$cond: [{$eq: ['$type', 'x']}, 1, 0]}\n },\n y: {\n $sum: {$cond: [{$eq: ['$type', 'y']}, 1, 0]}\n },\n}\n",
"text": "I have documents that looks like that:which I aggregate by type. my desired outcome is something like:or alternatively to create a document for each ‘attribute’:currently I have $group that aggregates only per ref1,ref2,date and looks like:I have no clue on how to apply that deeper level.Thanks",
"username": "Dor_Golan"
},
{
"code": "{\n \"ref\" : 1,\n \"date\" : \"2020-05-22T00:00:00.000+00:00\",\n \"country\" : \"US\",\n \"hour\" : 14,\n \"type\" : \"x\"\n}\n{\n \"ref\" : 1,\n \"date\" : \"2020-05-22T00:00:00.000+00:00\",\n \"country\" : \"US\",\n \"hour\" : 15,\n \"type\" : \"y\"\n}\n{\n \"ref\" : 1,\n \"date\" : \"2020-05-22T00:00:00.000+00:00\",\n \"country\" : \"CA\",\n \"hour\" : 16,\n \"type\" : \"y\"\n}\n{\n \"ref\" : 2,\n \"date\" : \"2020-05-22T00:00:00.000+00:00\",\n \"country\" : \"RU\",\n \"hour\" : 17,\n \"type\" : \"x\"\n}\n{\n \"ref\" : 1,\n \"date\" : \"2020-05-22T00:00:00.000+00:00\",\n \"country\" : \"US\",\n \"hour\" : 14,\n \"type\" : \"y\"\n}\ndb.groups.aggregate([\n { \n $group: { \n _id: { ref: \"$ref\", date: \"$date\", country: \"$country\", type: \"$type\" },\n sum: { $sum: 1 }\n }\n },\n { \n $group: { \n _id: { ref: \"$_id.ref\", date: \"$_id.date\", country: \"$_id.country\" },\n sums: { $push: { k: \"$_id.type\", v: \"$sum\" } }\n }\n },\n { \n $addFields: {\n k: \"$_id.country\", v: { $arrayToObject: \"$sums\" }\n }\n },\n { \n $project: { \n ref: \"$_id.ref\", \n date: \"$_id.date\", \n country: { $arrayToObject: [ [ { k: \"$k\", v: \"$v\" } ] ] }, \n _id: 0 \n } \n },\n { \n $group: { \n _id: { ref: \"$ref\", date: \"$date\" },\n country: { $push: \"$country\" }\n }\n },\n { \n $project: { \n ref: \"$_id.ref\", \n date: \"$_id.date\", \n country: 1, \n _id: 0 \n }\n }\n])\n{\n \"ref\" : 1,\n \"date\" : \"2020-05-22T00:00:00.000+00:00\",\n \"country\" : [\n {\n \"US\" : {\n \"x\" : 1,\n \"y\" : 2\n }\n },\n {\n \"CA\" : {\n \"y\" : 1\n }\n }\n ]\n}\n{\n \"ref\" : 2,\n \"date\" : \"2020-05-22T00:00:00.000+00:00\",\n \"country\" : [\n {\n \"RU\" : {\n \"x\" : 1\n }\n }\n ]\n}\n",
"text": "Hello @Dor_Golan,How to create maps by attributes with aggregationThis is mostly, a series of grouping and projections. I have an example and this mainly uses the aggregation operator $arrayToObject to create the maps.I have used a somewhat simpler version of the documents, for brevity. Here are some sample documents followed by the query and the output.The Aggregation:The Output:",
"username": "Prasad_Saya"
}
] | How to create maps by attributes with aggregation | 2020-05-23T07:38:25.913Z | How to create maps by attributes with aggregation | 1,980 |
[
"compass"
] | [
{
"code": "",
"text": "sort1575×854 71.8 KB Hi,\nI’m trying to sort documents on multiple fields, “ctype” and “cname”.\nAs you can see in the attached screen capture, it does not work, I should have:Axis, MA\nAxis, MB\nAxis, MCWhat is wrong ?\nThanks!",
"username": "Helene_ORTIZ"
},
{
"code": "{$sort: {'_id.ctype': 1, 'id.cname': 1}}\n$group_id",
"text": "What you probably want to do isAfter the $group, _id is a nested document and you are trying to sort by 2 fields of that document.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "It does not change anything to the result, data are not correctly sorted.",
"username": "Helene_ORTIZ"
},
{
"code": "_id$sort",
"text": "Hi @Helene_ORTIZ can you paste your pipeline here so we can see the full thing? The screenshot has parts cut off. You can copt the text of the pipeline by going into the aggregation and clicking the Export button (the right of Save) and then click on the Copy button in the left hand pane (titled My Pipeline). That way we can see what’s actually going through the pipeline.As @Massimiliano_Marcon stated it sure looks like you’ve nested the fields under _id and his $sort should work for you.",
"username": "Doug_Duncan"
},
{
"code": "[{\n $match: {\n $and: [{\n instrument: 94\n },\n {\n timestamp: {\n $gte: new Date('2020-04-01')\n }\n },\n {\n timestamp: {\n $lte: new Date('2020-04-25')\n }\n }\n ]\n }\n}, {\n $group: {\n _id: {\n ctype: \"$ctype\",\n cname: \"$cname\",\n pname: \"$pname\",\n alias: \"$alias\"\n }\n }\n}, {\n $sort: {\n '_id.ctype': 1,\n 'id.cname': 1\n }\n}]\n",
"text": "Hi,\nHere is my pipeline:",
"username": "Helene_ORTIZ"
},
{
"code": " $sort: {\n '_id.ctype': 1,\n 'id.cname': 1\n }\n$sort_id.cname$group",
"text": "Hi Hélène,The second field in your $sort should be _id.cname to match the output of your previous $group stage.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Oops sorrry for that, I’m a bit distracted!\nThanks a lot for your help",
"username": "Helene_ORTIZ"
},
{
"code": "",
"text": "Hi Hélène,No worries. Sounds like we were able to help you find a solution .Regards,\nStennie",
"username": "Stennie_X"
}
] | Sort stage does not sort documents correctly | 2020-05-26T12:44:02.860Z | Sort stage does not sort documents correctly | 4,460 |
|
null | [
"change-streams",
"scala"
] | [
{
"code": "",
"text": "When I open the changeStream on a specific collection, I can see in my debug logs that a db command ‘getMore’ is issued every now and then, which will be querying the db across the lifetime of the cursor.\nIf this is the case, how are we subscribing to events then? Shouldnt Mongo be emitting events out, rather than the observer issuing ‘getMore’ ?\nI am using mongo-scala-driver.Thank you for your thoughts!",
"username": "Atil_Pai"
},
{
"code": "TAILABLE_AWAITtail",
"text": "Hi @Atil_Pai,MongoDB Change Streams is an abstraction of a TAILABLE_AWAIT cursor, with support for resumability. Conceptually it is equivalent to the tail Unix command with the “follow” mode. The client uses getMore command to retrieve batches of documents/results currently pointed to by the cursor. The cursor waits for a few seconds after returning a full result set so that it can capture and return additional data added during a query.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks for the reply Wan. I was initially concerned with the get more command because our tailable curosr implementation of the Oplog issues a lot of get more commands, but in case of Change Streams it is proportion to the amount of change cursors opened, which is great and helps reduce a lot of the load that we previously used to experience.",
"username": "Atil_Pai"
},
{
"code": "",
"text": "Hi @wan, I am thinking about my last comment more. Does it sound right to you? Is the Change Stream more performant than tailing the Oplog, especially in terms of the getMore command load added onto the DB?Thanks.",
"username": "Atil_Pai"
},
{
"code": "TAILABLETAILABLE_AWAIT_idresume_afterstart_after",
"text": "Hi @Atil_Pai,I was initially concerned with the get more command because our tailable curosr implementation of the Oplog issues a lot of get more commands, but in case of Change Streams it is proportion to the amount of change cursors openedWithout knowing more of the your code implementation, you’re probably utilising TAILABLE cursor. Which basically a cursor that is not closed when the last data is retrieved but are kept open, the cursor location marks the final document position. If more data is seen, iteration of the cursor will continue from the last document seen.Change Stream utilises TAILABLE_AWAIT, which is a tailable cursor with an await option set. It’s a cursor that will wait for a few seconds after returning a full result set, so that it can capture and return additional data added during a query. Depending on the use case, this potentially could be more efficient in terms of data round trips. See also MongoDB Specifications: Change Streams for more information on MongoDB driver specs on Change Stream implementation.In addition, Change Stream is more than just tailing the Oplog. Key benefits of Change Streams over tailing Oplog are:Utilise the built-in MongoDB Role-Based Access Control. Applications can only open change streams against collections they have read access to. Refined and specific authorisation.Provide a well defined API that are reliable. The change events output that are returned by change streams are well documented. Also, all of the official MongoDB drivers follow the same specifications when implementing change streams interface. While the entries in Oplog may change between MongoDB major versions.Change events that are returned as part of change streams are at least committed to the majority of the replica set. This means the change events that are sent to the client are durable. Applications don’t need to handle data rollback in the event of failover.Provide a total ordering of changes across shards by utilising a global logical clock. MongoDB guarantees the order of changes are preserved and change events can be safely interpreted in the order received. For example, a change stream cursor opened against a 3-shard sharded cluster returns change events respecting the total order of those changes across all three shards.Due to the ordering characteristic, change streams are also inherently resumable. The _id of change event output is a resume token. MongoDB official drivers automatically cache this resume token, and in the case of network transient error the driver will retry once. Additionally, applications can also resume manually by utilising parameter resume_after and start_after. See also Resume a Change Stream.Utilise MongoDB aggregation pipeline. Applications can modify the change events output. Currently there are five pipeline stages available to modify the event output. For example, change event outputs can be filtered out (server side) before being sent out using $match stage. See Modify Change Stream Output for more information.I would recommend using Change Stream instead of writing a custom code to tail Oplog.Regards,\nWan.",
"username": "wan"
},
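For reference, the same ideas in Python (PyMongo); the pipeline, database and collection names are illustrative. The sketch shows a server-side $match stage and the manual resumability described above:

```python
from pymongo import MongoClient
from pymongo.errors import PyMongoError

coll = MongoClient()["test"]["orders"]           # assumed namespace
pipeline = [{"$match": {"operationType": {"$in": ["insert", "update"]}}}]

resume_token = None
try:
    with coll.watch(pipeline, full_document="updateLookup") as stream:
        for change in stream:
            resume_token = change["_id"]         # cache the resume token
            print(change["operationType"], change["documentKey"])
except PyMongoError:
    # Reopen the stream from the last event we processed instead of tailing the oplog manually.
    with coll.watch(pipeline, resume_after=resume_token) as stream:
        for change in stream:
            print(change["operationType"], change["documentKey"])
```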
{
"code": "",
"text": "@wan, our legacy app has implementation of the oplog.find DbQuery with Options Tailable, AwaitData, NoTimeout with OplogReplay. This is wrapped in a while loop with a 20min reset timeout on the cursor(repeat the while loop then, along with some helper code). Do you think this configuration is fine for an Oplog implementation? Also, an Oplog tail cursor with DbQuery options Tailable and AwaitData, can I say it’s efficiency is similar to the changestream then?",
"username": "Atil_Pai"
},
{
"code": "ChangeStreamIterable<Document> changes =\n client.getDatabase(<DBNAME>)\n .watch(Collections.singletonList(\n Aggregates.match(Filters.in(\"ns.coll\", Arrays.asList(WATCHED_COLLECTIONS)))))\n .fullDocument(FullDocument.UPDATE_LOOKUP);\nCOMMAND [conn20161] command DBNAME.$cmd command: getMore { getMore: 1760441711222280319, collection: \"$cmd.aggregate\", $db: \"DBNAME\", $clusterTime: { clusterTime: Timestamp(1590477125, 7396), signature: { hash: BinData(0, 17B8B1B3ADE3FEFC381F56E9201694DC9509BC38), keyId: 6829683829607759874 } }, lsid: { id: UUID(\"f88e3593-bec6-47cc-a067-6042f36aa1a3\") } } originatingCommand: { aggregate: 1, pipeline: [ { $changeStream: { fullDocument: \"updateLookup\" } }, { $match: { ns.coll: { $in: [ \"COLLECTION1\", \"COLLECTION2\", \"COLLECTION3\" ] } } } ], cursor: {}, $db: \"DBNAME\", $clusterTime: { clusterTime: Timestamp(1590160602, 2), signature: { hash: BinData(0, 39A22239ED8BA07ED1E8B710D4212AE8CDB52663), keyId: 6829683829607759874 } }, lsid: { id: UUID(\"f88e3593-bec6-47cc-a067-6042f36aa1a3\") } } planSummary: COLLSCAN cursorid:1760441711222280319 keysExamined:0 **docsExamined:11890** numYields:7138 nreturned:0 reslen:305 locks:{ ReplicationStateTransition: { acquireCount: { w: 7141 } }, Global: { acquireCount: { r: 7141 } }, Database: { acquireCount: { r: 7141 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 7141 } } } storage:{ data: { bytesRead: 14 } } protocol:op_msg 351ms",
"text": "Great information here, thanks!I just have one related question.I’m watching 3 collections in a DB using the Java driver. Each of those collections has one document only, each of which have embedded documents. My client code looks like this:where the variable “WATCHED_COLLECTIONS” is an array of the 3 collection names that I want to watch.Since I’ve used the “match” stage, this filtering should be happening at the server side right?\nDespite that, in the mongo logs, I can see that ‘docsExamined’ is very high! Why would that be happening, since there’s only one document in each collection? Even if we count all the embedded documents it doesn’t come up to 11000 documents. Is it also examining all the other documents that were upserted in the window between 2 ‘getMore’ operations?COMMAND [conn20161] command DBNAME.$cmd command: getMore { getMore: 1760441711222280319, collection: \"$cmd.aggregate\", $db: \"DBNAME\", $clusterTime: { clusterTime: Timestamp(1590477125, 7396), signature: { hash: BinData(0, 17B8B1B3ADE3FEFC381F56E9201694DC9509BC38), keyId: 6829683829607759874 } }, lsid: { id: UUID(\"f88e3593-bec6-47cc-a067-6042f36aa1a3\") } } originatingCommand: { aggregate: 1, pipeline: [ { $changeStream: { fullDocument: \"updateLookup\" } }, { $match: { ns.coll: { $in: [ \"COLLECTION1\", \"COLLECTION2\", \"COLLECTION3\" ] } } } ], cursor: {}, $db: \"DBNAME\", $clusterTime: { clusterTime: Timestamp(1590160602, 2), signature: { hash: BinData(0, 39A22239ED8BA07ED1E8B710D4212AE8CDB52663), keyId: 6829683829607759874 } }, lsid: { id: UUID(\"f88e3593-bec6-47cc-a067-6042f36aa1a3\") } } planSummary: COLLSCAN cursorid:1760441711222280319 keysExamined:0 **docsExamined:11890** numYields:7138 nreturned:0 reslen:305 locks:{ ReplicationStateTransition: { acquireCount: { w: 7141 } }, Global: { acquireCount: { r: 7141 } }, Database: { acquireCount: { r: 7141 } }, Mutex: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 7141 } } } storage:{ data: { bytesRead: 14 } } protocol:op_msg 351ms",
"username": "Murali_Rao"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Change Stream Watch Cursor | 2020-03-19T19:05:26.076Z | Change Stream Watch Cursor | 8,746 |
null | [
"charts",
"on-premises"
] | [
{
"code": "",
"text": "Is [Filter Dashboards by Field Values] in MongoDB Charts (https://docs.mongodb.com/charts/master/dashboard-filtering/#filter-dashboards-by-field-values) is available only in Atlas?This is very important feature of Charts and looks like it’s not available in Charts on-premises. Is there any work-around to have filter applied on Dashboards on charts in on-premises version?Thanks!",
"username": "astro"
},
{
"code": "",
"text": "Hi @astro -Currently Dashboard Filters are only available in the cloud release of Charts. The cloud version receives updates every month, so it’s always the first to get new capabilities. We don’t currently have a timeline for an update to the on-prem version.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Charts: Filter Dashboards by Field Values | 2020-05-26T13:43:28.688Z | MongoDB Charts: Filter Dashboards by Field Values | 2,588 |
null | [
"python"
] | [
{
"code": "",
"text": "I want to create an admin application to monitor data collection. For this the user registration process is based on the database access i.e, when we create a new database user through MongoDB atlas, they will immediately be able to log into the admin application with their database username and password. How do I get a mongo document/response containing the list of database users and their hashed passwords using python?",
"username": "Mmanuel_N_A"
},
{
"code": "commandusersInfoshowCredentials: trueusersInfo",
"text": "You use PyMongo’s command helper to run the usersInfo (https://docs.mongodb.com/manual/reference/command/usersInfo/) command with showCredentials: true to get this information. Note that the user that runs the usersInfo command must have certain privileges to be able to see other user’s information: https://docs.mongodb.com/manual/reference/command/usersInfo/#required-access",
"username": "Prashant_Mital"
}
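A small PyMongo sketch of what that might look like (the connection is an assumption, and the authenticated user needs the privileges noted above to see other users' credentials):

```python
from pymongo import MongoClient

client = MongoClient()        # assumes a connection authenticated as a sufficiently privileged user
db = client["admin"]

# usersInfo: 1 lists the users defined on this database;
# showCredentials: True includes the stored (hashed) credential material.
info = db.command("usersInfo", 1, showCredentials=True)
for user in info["users"]:
    print(user["user"], list(user.get("credentials", {}).keys()))
```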
] | Python get users of database | 2020-05-06T03:58:04.053Z | Python get users of database | 2,188 |
null | [] | [
{
"code": "{\n \"_id\" : ObjectId(\"5eccf898ac7ff694845f1ccf\"),\n \"attributes\" : [ \n {\n \"k\" : \"first_name\",\n \"v\" : \"John\"\n }, \n {\n \"k\" : \"last_name\",\n \"v\" : \"Doe\"\n }, \n {\n \"k\" : \"email\",\n \"v\" : \"[email protected]\"\n }, \n {\n \"k\" : \"gender\",\n \"v\" : \"Male\"\n }\n ],\n \"events\" : {\n \"event\" : \"add_to_cart\",\n \"event_data\" : [ \n {\n \"k\" : \"product_name\",\n \"v\" : \"T-Shirt\"\n }, \n {\n \"k\" : \"price\",\n \"v\" : 25\n }, \n {\n \"k\" : \"variants\",\n \"v\" : [ \n {\n \"k\" : \"color\",\n \"v\" : \"red\"\n }, \n {\n \"k\" : \"size\",\n \"v\" : \"xl\"\n }, \n {\n \"k\" : \"matherials\",\n \"v\" : [ \n [ \n {\n \"k\" : \"name\",\n \"v\" : \"Cotton\"\n }\n ], \n [ \n {\n \"k\" : \"name\",\n \"v\" : \"Wool\"\n }\n ]\n ]\n }\n ]\n }\n ]\n },\n \"created_at\" : \"2020-05-25 16:12:58\",\n \"updated_at\" : \"2020-05-25 16:12:58\"\n}\ndb.clients.ensureIndex({\"events.event_data.k\" : 1, \"events.event_data.v\" : 1 })\ndb.clients.find({\n \"events.event_data\": {\n \"$elemMatch\": {\n \"k\": \"product_name\",\n \"v\": \"T-Shirt\"\n }\n }\n})\ndb.clients.find({\n \"events.event_data.v.v\": \n { \n \"$elemMatch\": { \n \"$elemMatch\" : {\n \"k\": \"name\", \"v\": \"Cotton\"\n }\n } \n }\n})\n",
"text": "My data is very dynamic with the possibility to add custom attributes and events.This is how my dataset looks like:I definitely can create the following index:and it performs great. However, if you take a look at my dataset, you’ll notice that values can be very nested (material e.g.).This query works great:anyway, when I need to query a deeper level then it scans the whole size. Here is how to scan clients who bought cotton T-Shirt:However in this case it does COLLSCAN which is, obviously, something I would like to avoid?Thank you.",
"username": "jellyx"
},
{
"code": "events.data_events.v.kevents.data_eventskv{ k: \"name\", v: \"wool\" }{ name: \"wool\" }v{ k: \"materials\", v: [ { k: \"name\", v: \"Cotton\" } ] }{ k: \"materials.name\", v: \"cotton\" }",
"text": "Hi @jellyx,Great use of the attribute pattern! You have a few options here and I’ll briefly discuss my top two, both of which will require small tweaks to your schema. Unfortunately**, to prevent the collection scan you must have an index on the field being queried, even in a multi-key index (i.e. events.data_events.v.k).(1) Version 4.2 offers wildcard indexes. It de-emphasizes the need to use the attribute pattern by allowing you to index everything within a document (not recommended) or within a sub-object (very usable with your schema). Instead of having events.data_events with an object containing k and v, you use the current key’s value as the actual key. For example: { k: \"name\", v: \"wool\" } becomes { name: \"wool\" }. The wildcard index will ensure the key and value are both properly indexed.(2) Alternatively, I suggest flattening your attribute array so it’s always one dimensional. Instead of having v be an array, always use a string. You can cleverly manipulate your key names to make this possible. For example: { k: \"materials\", v: [ { k: \"name\", v: \"Cotton\" } ] } becomes { k: \"materials.name\", v: \"cotton\" }.** You can use wildcard indexes to make this work with your current schema design, but I would recommend one of the previous two options instead.Hopefully this will spur some ideas for a schema that avoids the dreaded collection scan!Thanks,Justin",
"username": "Justin"
},
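To illustrate both suggestions, here is a hedged sketch in PyMongo syntax (the namespace and the flattened key are assumptions that follow the examples above, not definitive recommendations):

```python
from pymongo import ASCENDING, MongoClient

clients = MongoClient()["test"]["clients"]   # assumed namespace

# Option 1: a wildcard index over everything under the event data sub-object
# (requires MongoDB 4.2+).
clients.create_index([("events.event_data.$**", ASCENDING)])

# Option 2: keep the attribute pattern but flatten nested keys, so the existing
# compound k/v index keeps covering deep attributes.
clients.create_index([("events.event_data.k", ASCENDING),
                      ("events.event_data.v", ASCENDING)])
clients.find_one({
    "events.event_data": {
        "$elemMatch": {"k": "variants.materials.name", "v": "Cotton"}
    }
})
```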
{
"code": "db.clients.createIndex({\"events.event_data.k\" : 1, \"events.event_data.v\" : 1 })\n",
"text": "Hi @Justin,Thank you so much for your great response.Yes, we’re using attribute pattern, but also the outlier pattern since there could be a lot of events for some clients (outlier, as the name suggests). I read a bit about wildcard indexes, but I read also the following:You cannot shard a collection using a wildcard index. Create a non-wildcard index on the field or fields you want to shard on. For more information on shard key selection, see Shard Keys.Since I’m pretty new to Mongo DB - and haven’t researched a lot about sharding - I just decided to go with attribute pattern.My question is (if you don’t mind): would sharding with wildcard index produce some issues to me? How would I be able to horizontally scale the number of clients across multiple machines.And yes, I executed the following:db.clients.createIndex({“events.event_data.v.k” : 1})which seems not to produce any results. I already have this one:Not sure if I should have both of them…Your idea of floating an array is brilliant. If not with wildcard indexes, I’ll go with flattening an array.Many thanks!",
"username": "jellyx"
},
{
"code": "",
"text": "Great! I’m glad my suggestion provided some insight!Quickly answering your question about wildcard indexes:I read a bit about wildcard indexes, but I read also the following:You cannot shard a collection using a wildcard index. Create a non-wildcard index on the field or fields you want to shard on. For more information on shard key selection, see Shard Keys.This is correct, you cannot shard in a wildcard index. But whenever you index an array, you create a multikey index, which also cannot be used as a shard key. If you aren’t having issues with the attribute pattern now (which use a multikey index) than you definitely won’t have problems using a wildcard index!Thanks,Justin",
"username": "Justin"
},
{
"code": "{\n \"events.event_data.k\" : 1.0,\n \"events.event_data.v\" : 1.0\n}\n",
"text": "Thanks again! It has provided some insight definitely!I feel a bit uncomfortable asking too many questions, but still have some opened questions:a) Isn’t this a compound index:b) If it is a multi-key index and wildcard index doesn’t allow sharding, then how am I supposed to shard my collection when that time comes? Maybe I’m worrying too early… I guess…Many thanks,",
"username": "jellyx"
},
{
"code": "events.data_key",
"text": "Don’t worry about asking too many questions, we’re here to help! I may not be online much longer to answer but the community is great at covering a wide range of topics.a) Yes, that is a compound index. It’s also a multikey index when events.data_key is an array. It can’t be used for sharding.b) You’ll need to pick a different shard key since both multikey and wildcard indexes can’t be used as a shard key. This makes a lot of sense if you dive into it. A document may only live on one shard. What would happen if a shard key on an array contained values requiring a single document to reside on more than one shard? It’d be problematic, to say the least.Picking a shard key is a big topic so some further reading may be helpful:Thanks,Justin",
"username": "Justin"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Multiple elemMatch and index | 2020-05-26T22:30:42.808Z | Multiple elemMatch and index | 6,109 |
null | [
"data-modeling"
] | [
{
"code": "module.exports = {\n id: {type: 'UUID', required: true},\n userId: {type: 'UUID of User', required: true},\n likesCount: {type: 'Integer', required: true, default: 0},\n commentsCount: {type: 'Integer', required: true, default: 0},\n description: {type: 'String'},\n media: [{\n id: {type: 'UUID', required: true},\n URL: {type: 'URL', required: true},\n mimeType: {type: 'String', required: true},\n createdAt: {type: 'Timestamp'}\n }],\n active: {type: 'Boolean', required: true, default: true},\n _deleted: {type: 'Boolean', required: true, default: false},\n tagUsers: [{\n userId: {type: 'UUID of User', required: true},\n userInGameName: {type: 'String', required: true}\n }],\n reports: {\n count: {type: 'Integer', required: true, default: 0},\n reportedBy: [{\n userId: {type: 'UUID of Users'},\n comments: {type: 'String', required: true}\n }],\n },\n createdAt: {type: 'Timestamp'},\n updatedAt: {type: 'Timestamp'}\n};\n",
"text": "Hi,\nFollowing is my Schema which i’m using for storing the POSTWith this i’m facing the issue 16 Mb increase the size of collection,\nCan anybody suggest me over here how do i handle this kind of BIG Data,\nAnd i need to make structure like which can handle Milllions of Data.Thankd",
"username": "Tajinder_Singh1"
},
{
"code": "",
"text": "Hi @Tajinder_Singh1,Working with large documents can complicate your database usage and likely isn’t a great idea. For example, it greatly increases the memory requirements of your server. Having large documents travel between the server and client also slows the application down. And there’s the obvious problem of the client application having to create objects the size of your document!I recommend working with smaller documents. It makes for better, more scalable applications. We have a lot of content on how best to design schemas. Here are a few of my favorite resources:Building with Patterns: A SummaryMongoDB University: Data ModelingI’m also doing a session on advanced schema design for MongoDB.live:View more about this event at MongoDB.live 2020Thanks,Justin",
"username": "Justin"
}
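As one concrete direction only (a sketch under the assumption that the unbounded arrays such as reports are the part that grows; names are illustrative): move each report into its own small document in a separate collection keyed by the post id, and keep only the counter on the post itself.

```python
from pymongo import MongoClient

db = MongoClient()["app"]                      # assumed database

def report_post(post_id, user_id, comment):
    """Store each report as its own small document instead of growing the post document."""
    db.post_reports.insert_one({
        "postId": post_id,
        "userId": user_id,
        "comments": comment,
    })
    db.posts.update_one({"_id": post_id}, {"$inc": {"reports.count": 1}})
```

The post document stays small and bounded, while reports can be paged or aggregated separately when needed.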
] | Need to know how we handle the bigger data | 2020-05-26T21:22:11.776Z | Need to know how we handle the bigger data | 2,929 |
[] | [
{
"code": "",
"text": "Happy Wednesday, community!As you may have noticed, Dark Mode is now the default theme for our forums. This is one of the first big requests we’ve had in the Site Feedback category for the forums and I’m thrilled that we’re able to roll out this change. We designed this version of Dark Mode to match the design on our recently launched Developer Hub and you’ll see it begin to appear across many of our other developer resources in the coming weeks as we work to better align your experience with MongoDB. We’re certainly still open to feedback on the implementation of the design in these forums and will iterate as needed.If the Dark Mode theme doesn’t thrill you, you can still switch back to Light Mode in your personal settings.\nPreferences1158×272 38.6 KB\n\nInterface733×436 30.4 KB\nLet me know your feedback in the Site Feedback category, especially if you run into any issues with using the new theme.Cheers,Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "Looks good @Jamie! Thanks to you and the team for getting this done.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hello @Jamie, a further - Thanks to all working on it!",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Oh sweet relief !!Thanks !",
"username": "chris"
},
{
"code": "",
"text": "Awesome!! Thank you @Jamie and thanks to everyone who worked to make this happen!",
"username": "Juliette_Tworsey"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Dark Mode is here, y'all! | 2020-05-20T19:01:34.031Z | Dark Mode is here, y’all! | 5,811 |
|
null | [
"atlas-device-sync",
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hello,\nI am currently developing an android version for my ios app, and i made a mistake, i forgot to add an @PrimaryKey attribute on a property, and now when i try to open a realm which contains data from ios, i get a schema mismatch error, but when i try to add the @PrimaryKey annotation, i get :“The following changes cannot be made in additive-only schema mode:\n- Primary Key for class ‘SomeClass’ has been added.”",
"username": "Octavian_Milea"
},
{
"code": "",
"text": "@Octavian_Milea You can wipe the app on Android and then re-download the realm from the server side once you match up the schema.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hello Ian, thanks for your reply, the problem is that i already tried it, because that’s how i was doing it on ios, but it seems that the local realm file doesn’t get deleted once the app is deleted, and i also don’t have google backup enabled.",
"username": "Octavian_Milea"
},
{
"code": "",
"text": "You can try wiping the entire emulator then.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "So, if i am using a device, a factory reset should do, right ?",
"username": "Octavian_Milea"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do i perform a destructive schema using full synced realms? | 2020-05-26T12:09:24.195Z | How do i perform a destructive schema using full synced realms? | 3,229 |
null | [
"golang"
] | [
{
"code": "db.getCollection('province').find({\"provincename\" : \"somename\"},{\"localmunicipalities.municipality\":1})\ncur, err := collection.Find(context.TODO(), bson.D{{\"provincename\", provinceName}}, opt)\n",
"text": "HiHow can I convert the following mongo cli query to Go bson query:I have this in my Go function\nopt := options.Find()\nopt.SetProjection(bson.D{{“localmunicipalities.municipality”, 1}})Thanks",
"username": "Steven_Venter"
},
{
"code": "mongo",
"text": "Hi @Steven_Venter, welcome !How can I convert the following mongo cli query to Go bson query:The example Go snippet code that you provided should return the same result as your mongo CLI query. Could you please elaborate further the problem that you’re experiencing ?Could you also provide the following:Regards,\nWan.",
"username": "wan"
},
{
"code": "{\n \"_id\" : ObjectId(\"5ec0fc0be1bf7aa04ac28ddb\"),\n \"provincename\" : \"Northern Cape\",\n \"localmunicipalities\" : [ \n {\n \"municipality\" : \"!Kheis Local\",\n \"url\" : \"/overview/1181/kheis-local-municipality\",\n \"municipalcities\" : [ \n \"Brandboom\", \n \"Groblershoop\"\n ]\n }, \n {\n \"municipality\" : \"Dawid Kruiper Local\",\n \"url\" : \"/overview/1245/dawid-kruiper-local-municipality\",\n \"municipalcities\" : [ \n \"Mier\", \n \"Rietfontein\", \n \"Upington\"\n ]\n }, \n {\n \"municipality\" : \"Dikgatlong Local\",\n \"url\" : \"/overview/1160/dikgatlong-local-municipality\",\n \"municipalcities\" : [ \n \"Barkly West\", \n \"Delportshoop\", \n \"Windsorton\"\n ]\n }, \n {\n \"municipality\" : \"Emthanjeni Local\",\n \"url\" : \"/overview/1173/emthanjeni-local-municipality\",\n \"municipalcities\" : [ \n \"Britstown\", \n \"De Aar\", \n \"Hanover\"\n ]\n }\n]\n}\n",
"text": "Hi WanI don’t get any error messages, just empty data and only 1 record.Diver is go.mongodb.org/mongo-driver v1.3.3This is a sample but I truncated itWant I would like to receive back from the query are all the names in the municipalcities array for a given provincename.Hope that helps, if you require the full document I can send it to you.This is a screenshot from Postman\n2020-05-19 17_38_45-Window869×371 11.1 KB\nThanks for your help.\nSteven",
"username": "Steven_Venter"
},
{
"code": "localmunicipalities.municipalcitieslocalmunicipalities.municipalityopts := options.Find().SetProjection(bson.D{{\"localmunicipalities.municipalcities\", 1}})\ncursor, err := collection.Find(context.Background(), \n bson.D{{\"provincename\", \"Northern Cape\"}}, \n opts)\nmunicipalcitiesmunicipalities",
"text": "Hi @Steven_Venter,Thanks for the extra information.What I would like to receive back from the query are all the names in the municipalcities array for a given provincename.If you would like to return back localmunicipalities.municipalcities you can project that field instead of localmunicipalities.municipality. As below example:In addition, please note that municipalcities could be a typo of municipalities.It looks like your application is a REST service, it would be great if you could limit the debugging scope. For example, try to debug just the function that perform the query.If you still have further questions, could you isolate a smaller function scope and provide that as a reproducible example ?Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks for the update. I will try it and let you know.",
"username": "Steven_Venter"
},
{
"code": "",
"text": "Hi WanUnfortunately this does not solve the problem either. Only one object is returned and it is empty.",
"username": "Steven_Venter"
},
{
"code": "Collection.FindCursor",
"text": "Hi @Steven_Venter,I wasn’t able to reproduce this issue. It would be really helpful if you could create a standalone code repro that we can run locally. Also, when you say the object is empty, how are you accessing it? The Collection.Find function returns a Cursor and there’s multiple ways to iterate it, so it would be really helpful to have that code.– Divjot",
"username": "Divjot_Arora"
}
] | Convert cli query to Go bson query | 2020-05-18T19:21:46.891Z | Convert cli query to Go bson query | 4,041 |
null | [
"atlas-device-sync",
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hello,I am developing my app using full sync realms, but while trying to open a realm, i got the following error on the client side:Error Domain=io.realm.unknown Code=11 “Bad WebSocket response 401 unauthorized” UserInfo={Category=realm::util::websocket::Error, NSLocalizedDescription=Bad WebSocket response 401 unauthorized, Error Code=11}and on the server-side:HTTP upgrade failed (service did not respond properly) {“type”:“GitBook credentials - failed to parse token data”,“status”:401,“code”:611} Request: c1e84a30-ce80-4cd9-8726-a91cd0249ff0 GET /realm-sync/%2Fruntimeallusers%2F__partial%2F54f81d30dbd51a1865e329d6440c623f%2F76bbbfaf69057d9d348a6f4ca0cc2d831de8204b HTTP/1.1 Host: SOME_Private_URL Upgrade: websocket Connection: upgrade X-Request-ID: 7286b2a235a0f64a8ccab16a4805cead X-Real-IP: some private ip X-Forwarded-For: some private ip X-Forwarded-Host: some private host X-Forwarded-Port: 80 X-Forwarded-Proto: http X-Original-URI: /realm-sync/%2Fruntimeallusers%2F__partial%2F54f81d30dbd51a1865e329d6440c623f%2F76bbbfaf69057d9d348a6f4ca0cc2d831de8204b X-Scheme: http Authorization: Realm-Access-Token version=1 token=\" private token\" Sec-WebSocket-Key: lfseedaM8icvINSkop5i3g== Sec-WebSocket-Protocol: io.realm.sync.26-30 Sec-WebSocket-Version: 13 User-Agent: RealmSync/4.9.4 (macOS Darwin 19.4.0 Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64 x86_64) RealmObjectiveC/4.3.1 bundle id Response: HTTP/1.1 401 Unauthorized\nServer: Realm-Object-Server/3.28.4",
"username": "Octavian_Milea"
},
{
"code": "",
"text": "@Octavian_Milea You are trying to open a partial realm, which is a realm for query-based sync, with a full-sync API call - the two methods of sync are incompatible with one another.You can see this in your logs, the GET request has a __partial URI component",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Problem while opening a fullSync Realm | 2020-05-26T11:46:06.886Z | Problem while opening a fullSync Realm | 4,015 |
null | [
"database-tools"
] | [
{
"code": "",
"text": "mongodump seems to work much faster on the node it backs up vs. over a network link. Wondering if this is because it’s chatty and wondering if there’s a way to make it use larger batches when reading collections? Looks like the tool is written in go, and presumably if using the go driver it would use the default go driver’s options but not sure.",
"username": "Nuri_Halperin"
},
{
"code": "",
"text": "Hi Nuri,You might want to play around with the --numParallelCollections parameter in mongodump. By default this is set to 4, so if your network bandwidth is sufficient, increasing this may show an increased dump speed.Having said that, you might want to do a basic sanity check on your network bandwidth. If you’re dumping locally and it’s fast vs. over the network, typically this implies that the bottleneck is the network. You might want to check the size of your database e.g. by using dbStats command and calculate the time required to transfer the whole database using your network bandwidth.Best regards,\nKevin",
"username": "kevinadi"
}
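For the sanity check mentioned above, the data size can be read with the dbStats command and compared against the link speed. A rough sketch (the database name and bandwidth figure are assumptions):

```python
from pymongo import MongoClient

db = MongoClient()["mydb"]                 # assumed database
stats = db.command("dbStats")

data_gb = stats["dataSize"] / 1024 ** 3
link_gbps = 1.0                            # assumed network bandwidth in gigabits/s
est_minutes = data_gb * 8 / link_gbps / 60

print(f"data size: {data_gb:.1f} GB, rough transfer time at {link_gbps} Gbit/s: {est_minutes:.0f} min")
```

If the dump over the network takes much longer than that estimate, the bottleneck is likely not the raw bandwidth.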
] | Any way to see or increase mongodump's batch size? | 2020-05-19T00:50:00.925Z | Any way to see or increase mongodump’s batch size? | 2,730 |
null | [] | [
{
"code": " for doc in db.col.find(field==\"bla\"):\n file.write(doc) \n",
"text": "I am currelying devoloping a program that read documents from mongo and write them in a file… something like this:my problem is that something can happen while doing this process (it gonna take a week to do all the writes), for example, a shutdown or network problem. my question is… is there something similar to journal for write operations to recover from a checkpoint? So i dont need to do all the write to file all over again.",
"username": "Jonathan_Ferrer"
},
{
"code": "mongod",
"text": "Hi Jonathan,Could you elaborate on the purpose of the code? It looks like you’re trying to dump the whole collection. Are you trying to backup up the whole collection?If backup for restoring into another MongoDB server is the goal, you might want to examine mongodump and mongorestore, where they have the --oplog option that caters for exactly this purpose (performing a dump and taking note of writes while the dump operation is ongoing).Having said that, this feature would require you to have a replica set deployment, as the oplog is not available on a standalone mongod deployment.If you need further help with this, could you post additional details:Best regards,\nKevin",
"username": "kevinadi"
}
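If the custom file export from the question is still the goal, one common way to get checkpoint/resume behaviour (separate from the mongodump suggestion above, and assuming _id is an ObjectId; all names below are illustrative) is to sort by _id and persist the last _id written, so the job can restart from there:

```python
import json
import os
from bson import ObjectId
from pymongo import ASCENDING, MongoClient

coll = MongoClient()["test"]["col"]               # assumed namespace

def export(out_path, ckpt_path, batch=1000):
    last_id = None
    if os.path.exists(ckpt_path):
        with open(ckpt_path) as f:
            last_id = ObjectId(json.load(f)["last_id"])   # resume after the last exported _id

    with open(out_path, "a") as out:
        while True:
            query = {"_id": {"$gt": last_id}} if last_id else {}
            docs = list(coll.find(query).sort("_id", ASCENDING).limit(batch))
            if not docs:
                break
            for doc in docs:
                out.write(json.dumps(doc, default=str) + "\n")
            last_id = docs[-1]["_id"]
            with open(ckpt_path, "w") as f:               # checkpoint after each batch
                json.dump({"last_id": str(last_id)}, f)
```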
] | Is there something similar to journal for read operations in mongodb? | 2020-05-19T20:33:57.734Z | Is there something similar to journal for read operations in mongodb? | 1,158 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi Team,We have a setup of 3.6.9 version, and observed two primaries on setup. Now as per this link\nthere can be a possibility of two P nodes being available.\nAs per docs, “When this occurs, clients that connect to the former primary may observe stale data despite having requested read preference primary, and new writes to the former primary will eventually roll back.”\nbut in our case, the writes did not happen to either of transient primary. Why is it so?",
"username": "Joanne"
},
{
"code": "",
"text": "If the write was accepted by old primary then it should have been rolled back and the data is written out to the rollback directory in the mongo data directory.",
"username": "chris"
},
{
"code": "",
"text": "No, data was not accepted in either available transient primaries. There was loss of incoming trades and nothing was rolled back as per logs.",
"username": "Joanne"
},
{
"code": "",
"text": "We have a setup of 3.6.9 version, and observed two primaries on setup.Can you expand on “two primaries on setup”No, data was not accepted in either available transient primaries.This should have resulted in an error then, was there any logging info from the application.",
"username": "chris"
},
{
"code": "primary[new writes to the former primary will eventually roll back",
"text": "Can you expand on “two primaries on setup”Due to heartbeat failure and network problem, our setup observed two primary nodes(transiently for about 2-3 seconds.) In the meantime application errors were logged for 20 seconds where it was not able to connect to primary node.As per 3.6 documentation excerpt,\nWhen this occurs, clients that connect to the former primary may observe stale data despite having requested read preference primary, and new writes to the former primary will eventually roll back.We have the rollback files generated during same time of 30 seconds window when application couldn’t discover primary node to write to; can we say that the statement from documentation holds true. Means that if rollback files are generated [new writes to the former primary will eventually roll back](Replica Set Primary — MongoDB Manual) statement holds true? Is this understanding correct?",
"username": "Joanne"
},
{
"code": "primary[new writes to the former primary will eventually roll back",
"text": "Hi Team, please help me with the query:As per 3.6 documentation excerpt,\nWhen this occurs, clients that connect to the former primary may observe stale data despite having requested read preference primary , and new writes to the former primary will eventually roll back.We have the rollback files generated during same time of 30 seconds window when application couldn’t discover primary node to write to; can we say that the statement from documentation holds true. Means that if rollback files are generated [new writes to the former primary will eventually roll back ](Replica Set Primary — MongoDB Manual) statement holds true? Is this understanding correct?",
"username": "Joanne"
}
] | Info about a replica set to temporarily have two primaries | 2020-05-19T06:01:55.948Z | Info about a replica set to temporarily have two primaries | 3,707 |
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hi,I have spent ~2 hours looking for solutions to create an automatic backup of my Realm Cloud database. Can you please post a link to documentation on how to accomplish this?Thanks.",
"username": "Austin_Teague"
},
{
"code": "",
"text": "@Austin_Teague Automatic backup is built into the Realm Cloud database in case of the need to restore. If you are wanting to build your own backup from Realm Cloud to some other storage then you can use our server SDKs, such as node.js, to download and sync the realm, open it, and then iterate through each object and then write it to some other location. For instance, you could use JSON.stringify, write to a flat file, and then upload to S3 as part of a cron job or similar.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks @Ian_Ward! I’ve done just that and it very easy. Is there any plan to make the Realm Cloud backups quickly accessible? Right now it appears I have to request it using a support ticket.",
"username": "Austin_Teague"
},
{
"code": "",
"text": "@Austin_Teague Not at this time but once the merged MongoDB and Realm product is launched you will be able to backup your data on Atlas which has much more extensive backup and restore capabilities.",
"username": "Ian_Ward"
},
{
"code": "async function runBackup (backupDir) {\n var credentials = Realm.Sync.Credentials.usernamePassword('admin', process.env.REALM_ADMIN_PW, false)\n\n var adminUser = await Realm.Sync.User.login(SyncAuthURL, credentials)\n\n // __Admin Realms\n fs.mkdirSync(backupDir)\n addLog(' > Downloading Realms...')\n \n const config = adminUser.createConfiguration({ sync: { fullSynchronization: true, url: CommonRealmURL + '/__admin' }})\n const adminRealm = await Realm.open(config)\n const realmFiles = adminRealm.objects('RealmFile')\n var realm = null\n for (const realmFile of realmFiles) {\n const backupRealmFile = `${backupDir}${realmFile.path}.realm`\n await fs.promises.mkdir(path.dirname(backupRealmFile), { recursive: true })\n const cfg = adminUser.createConfiguration({ sync: { fullSynchronization: true, url: CommonRealmURL + realmFile.path }})\n realm = await Realm.open(cfg)\n realm.writeCopyTo(backupRealmFile)\n realm.close()\n }\n}",
"text": "Post here my solution that I use for backup my realm-cloud realms:",
"username": "rouuuge"
},
{
"code": "",
"text": "@rouuuge Awesome, thanks. That’s certainly one solution. You can then open this in Realm Studio and export it as a CSV. Then open the desired Realm and import data via CSV.We decided to create the CSV direction from within the function.Thanks for the info though! Realm support was also able to access up to 30 days of backups since we use their Realm Cloud. I just needed to submit a support ticket since they currently don’t have a self-service option.",
"username": "Austin_Teague"
}
] | Automatic Realm Cloud Backup | 2020-04-24T18:26:42.515Z | Automatic Realm Cloud Backup | 3,452 |
null | [] | [
{
"code": "",
"text": "We had a production database that was compromised and need to move users to a new database. How can we take a current users, who is authenticated and connected to old Realm database, and sync their data with the new database?I have tried following the following recommendations, but haven’t had success.",
"username": "Austin_Teague"
},
{
"code": "if (!currentRealmInstance.syncSession.url.startsWith(desiredRealmURL))const user = await Realm.Sync.User.login(ServerURL, credentials);const newRealm = new Realm(realmConfig);const promises = ['User'].map(obj =>\n const data = oldRealm.objects(schema);\n if (data.length) {\n data.forEach(el => {\n newRealm.write(() => {\n newRealm.create(schema, el, 'modified');\n });\n });\n }\n);\nPromise.all(promises)\n .then(async () => {\n return dispatch({\n type: 'set_realm_connection',\n payload: newRealm,\n });\n})\n checkRealmInstance = setInterval(async () => {\n const {realm} = this.props;\n if (!Realm.Sync.User.current) {\n clearInterval(this.checkExist); // Don't perform checks if user isn't logged into Realm\n } else if (realm.syncSession.url && !realm.syncSession.url.startsWith(RealmURL)) {\n await this.props.getUserData(); // Loads data from current (old) Realm\n this.props.updateSpinnerVisibility(true, 'Updating Profile');\n this.props.copyDataToNewRealm();\n } else if (realm.syncSession.url && realm.syncSession.url.startsWith(RealmURL)) {\n clearInterval(this.checkExist);\n this.props.updateSpinnerVisibility(false);\n await this.props.getUserData(); // Retrieves new data and places in Redux\n this.props.navigation.navigate('App');\n } else {\n console.log('.'); // Waiting for initial Realm to load\n }\n }, 100);\n",
"text": "So if anyone else runs into this question/issue, here is what I did to resolve it:Check to see if the user’s current Realm is not the same as the Realm you would like them to be in.\nif (!currentRealmInstance.syncSession.url.startsWith(desiredRealmURL))If not, logged the user into the new Realm server. We use JWT, so this involved creating the credentials then logging in.\nconst user = await Realm.Sync.User.login(ServerURL, credentials);Opened a new instance of Realm with this user.\nconst newRealm = new Realm(realmConfig);. We also are keeping a reference of the old Realm.Copy the data over into the new Realm. We used an array of the Realm schema objects we wanted to copy over. In this code example it is just the User data.Here’s the data on the frontend that we used to open the app or wait until the current Realm URL equals the desired Realm URL.",
"username": "Austin_Teague"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Migrating to new Realm Database | 2020-05-25T02:04:49.862Z | Migrating to new Realm Database | 2,110 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hello, I am using the node.js library and “insertOne” function to insert a document: Super-Scraper/server.js at master · JimLynchCodes/Super-Scraper · GitHubFor some reason, this creates TWO of this document in the database (3 seconds apart).I am totally baffled here. Can anyone explain why this creates two documents and not one? BTW - it would be nice if you guys didn’t kill the slack channel. ",
"username": "Jim_Lynch"
},
{
"code": "",
"text": "turns out I was actually just calling the insertOne function twice.",
"username": "Jim_Lynch"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB "InsertOne" inserts multiple documents | 2020-05-24T21:37:30.448Z | MongoDB “InsertOne” inserts multiple documents | 1,439 |
null | [] | [
{
"code": "",
"text": "Hi, am having issues with my MongoDB. I wrote a huge post, submitted it, and now have been waiting a few hours for it to be approved. I can’t even go to my post to add a comment for things I am trying in the meantime.Slack was great because you can just ask other people who are really there and talk to people in real-time.I guess this site is worth having around, but it in no ways replaces Slack. I don’t see why they had to close the mongo Slack community.Please just fork over some $$ to Slack and bring back the MongoDB Slack community. ",
"username": "Jim_Lynch"
},
{
"code": "",
"text": "Hi Jim,Only the first few posts any member makes are sent through moderation for approval. It’s to help protect our community against spammers.I’m sorry that you’re having an issue with your MongoDB instance, but I’ve gone ahead and approved your post. Hope you are able to get the help you need from our neighbors here.Cheers,Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "I prefer forums over slack for serious support and discussion any day. Sure, Slack gives sense that things progress fast, but that is only true when there are right people online, and your question is trivial enough / interesting enough to get attention right away. If neither of those is true, open question gets lost really fast.I expect new user post approval to become smoother over time, when community grows and stabilizes. Then it most likely has community members who have enough privileges to do such first post approval, reducing time greatly.In world where most people think everything is in rush and has to happen instantly, forums are nice place to slow down, give a thought on your post and really concentrate what you are saying & doing. /me likes ",
"username": "kerbe"
},
{
"code": "",
"text": "Forums in general are ok, but this forum is absolutely terrible!I just created a post, and now I can’t even see it anywhere because it takes 3 days to become “approved”. SMH!",
"username": "Jim_Lynch"
},
{
"code": "",
"text": "Hi @Jim_Lynch,It looks like your most recent post was waiting in moderation for about 8 hours over a long weekend (Memorial Day in the US). That’s about the longest a post will currently wait in moderation, but most requests are actioned significantly faster (particularly during weekday working hours). We currently have moderator reminders for requests waiting longer than 12 hours and the last time a reminder was needed was March 8th.New user restrictions (like moderation) are an aspect of the Discourse forum software that is designed to encourage users to learn more about the community and how to engage in quality discussion (for example, avoiding duplicate posts). Initial moderation also limits drive-by spam, which is unfortunately common.Additional permissions are granted to more experienced users based on positive community involvement. You can learn more about Trust Levels in the Getting Started with the MongoDB Community welcome post.There are many reasons why Slack did not suit our community growth. This was discussed on several occasions in Slack, but since the archives are not public or searchable I’ll repost the gist here.We have invested significant planning effort into understanding and addressing feedback from the community so we can provide a long term solution including features that aren’t available in Slack:Discourse is an open source platform which allows us to adapt to the needs of the community.We certainly appreciate any constructive feedback on how we can improve your experience.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "The “waiting for your post to be approved” is horribly bad UX.Why not follow the lead of better forums like Reddit and Stack Overflow- let the post go through and allow moderators to delete them.Seems to me like you guys just want to drive traffic to this lame website for some reason instead of people being able to actually talk with each other.",
"username": "Jim_Lynch"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Slack Was Better | 2020-04-24T20:04:38.928Z | Slack Was Better | 7,194 |
null | [
"aggregation",
"java"
] | [
{
"code": "AggregateIterable<Document> parameters = valuesCollection.aggregate(\n\t\t\t\tArrays.asList(\n\t\t\t\t\t\tAggregates.match(Filters.and(Arrays.asList(\n\t\t\t\t\t\t\t\tFilters.eq(instrument, 35), \n\t\t\t\t\t\t\t\tFilters.gte(timestamp, fromDate), \n\t\t\t\t\t\t\t\tFilters.lte(timestamp, toDate)\n\t\t\t\t\t\t\t\t))), \n\t\t\t\t\t\tAggregates.group(Filters.and(\n\t\t\t\t\t\t\t\tFilters.eq(ctype, \"$ctype\"), \n\t\t\t\t\t\t\t\tFilters.eq(cname, \"$cname\"), \n\t\t\t\t\t\t\t\tFilters.eq(pname, \"$pname\"),\n\t\t\t\t\t\t\t\tFilters.eq(alias, \"$alias\")))\n\t\t\t\t\t\t));\nAggregateIterable<Document> parameters = valuesCollection.aggregate(\n\t\t\t\tArrays.asList(\n\t\t\t\t\t\tAggregates.match(Filters.and(Arrays.asList(\n\t\t\t\t\t\t\t\tFilters.eq(instrument, 35), \n\t\t\t\t\t\t\t\tFilters.gte(timestamp, fromDate), \n\t\t\t\t\t\t\t\tFilters.lte(timestamp, toDate)\n\t\t\t\t\t\t\t\t))), \n\t\t\t\t\t\tAggregates.group(Filters.and(\n\t\t\t\t\t\t\t\tFilters.eq(ctype, \"$ctype\"), \n\t\t\t\t\t\t\t\tFilters.eq(cname, \"$cname\"), \n\t\t\t\t\t\t\t\tFilters.eq(pname, \"$pname\"),\n\t\t\t\t\t\t\t\tFilters.eq(alias, \"$alias\"))),\n\t\t\t\t\t\tAggregates.project(Projections.fields(Projections.excludeId(), Projections.include(\"ctype\",\"cname\",\"pname\",\"alias\")))\n\t\t\t\t\t\t));\n",
"text": "Hi,\nI’m trying to add a project stage to a pipeline in Java code.\nMy collection contains the fields:\n_id, instrument, ctype, cname, pname, alias, timestamp, valueThe purpose of my query is to get distinct groups of (ctype+cname+pname+alias) for a given instrument and a range of dates.My code -before adding project stage- is:The reason why I need to use a project stage is that the only field in the returned document is _id.\nAdding the project stage, my query is now like this:But it doesn’t work and returns no document.Thanks for your help!",
"username": "Helene_ORTIZ"
},
{
"code": "_id_idctypepname_id.ctypectypeAggregates.project(\n Projections.fields(\n Projections.computed(\"ctype\", \"$_id.ctype\"), \n Projections.computed(\"pname\", \"$_id.pname\"),\n Projections.computed(\"ctype\", \"$_id.ctype\"), \n Projections.computed(\"pname\", \"$_id.pname\"),\n Projections.excludeId()\n )\n)",
"text": "Hello Hélène Ortiz,The group stage returns documents with only the _id field. If you notice carefully, the _id field is a sub-document with the four fields (ctype, pname, …). So, you need to project, for example, _id.ctype as ctype (same for the remaining 3 fields). The following code does that:",
"username": "Prasad_Saya"
},
{
"code": "_idctypepname",
"text": "If you notice carefully, the _id field is a sub-document with the four fields ( ctype , pname , …)I do not see that from the code posted. Please enlighten me.",
"username": "steevej"
},
{
"code": "Aggregates.group(\n Filters.and(\n Filters.eq(ctype, “$ctype”),\n Filters.eq(cname, “$cname”),\n Filters.eq(pname, “$pname”),\n Filters.eq(alias, “$alias”))\n)\nAggregates.project(\n Projections.fields(\n Projections.excludeId(), \n Projections.include(“ctype”,“cname”,“pname”,“alias”)\n )\n)\nmongo{ \n $group: { \n _id: { \n ctype: \"$ctype\",\n cname: \"$cname\",\n pname: \"$pname\",\n alias: \"$alias\"\n }\n }\n}\n{\n $project: {\n _id: 0,\n ctype: 1,\n cname: 1,\n pname: 1,\n alias: 1\n }\n}\nAggregates.project(\n Projections.fields(\n Projections.computed(\"ctype\", \"$_id.ctype\"), \n Projections.computed(\"cname\", \"$_id.cname\"),\n Projections.computed(\"pname\", \"$_id.pname\"), \n Projections.computed(\"alias\", \"$_id.alias\"),\n Projections.excludeId()\n )\n)\n\n{\n $project: {\n _id: 0,\n ctype: \"$_id.ctype\",\n cname: \"$_id.cname\",\n pname: \"$_id.pname\",\n alias: \"$_id.alias\"\n }\n}",
"text": "This is the Java code (of MongoDB Java Driver) posted by @Helene_ORTIZ. I have only shown the group and project stages of the aggregation, respectively.The group and project stages are equivalent to the following in mongo shell:The problem is now quite evident in the project stage. My solution (in Java and the shell query):",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks.I am so used to not use the Java builders that I completely missed that. I agree now thatThe problem is now quite evident",
"username": "steevej"
},
{
"code": "",
"text": "Thanks a lot, it works great now !",
"username": "Helene_ORTIZ"
}
] | Aggregates with project stage | 2020-05-20T18:35:57.145Z | Aggregates with project stage | 3,670 |
null | [] | [
{
"code": "",
"text": "For the query {“tripduration”:{\"$gte\":60,\"$lte\":65}} citibike.trips collection returns a total of 937 documents.whereas below are the options,The question should be updated.",
"username": "Bhavana_19872"
},
{
"code": "",
"text": "I believe the question is still valid with a valid answer. Please check your filter and see if it matches what is asked.",
"username": "Benjamin_93799"
},
{
"code": "",
"text": "This question has the correct answer please check your filter.",
"username": "cevor"
},
{
"code": "",
"text": "@Benjamin_93799 and @cevor. Yes you both are right. My query was wrong because of which it was returning back different results. Thank you.",
"username": "Bhavana_19872"
},
{
"code": "",
"text": "3 posts were split to a new topic: Lab 1.6: Scavenger Hunt, Part 2",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Please create a new thread if you are facing similar issue. Closing this thread as it’s almost 2 years old.",
"username": "Shubham_Ranjan"
}
] | Lab 1.6: Scavenger Hunt, Part 2 - Update Options | 2018-11-02T22:12:22.817Z | Lab 1.6: Scavenger Hunt, Part 2 - Update Options | 1,867 |
null | [] | [
{
"code": "",
"text": "I am having issue connecting to Realm cloud instance - it was working three hours a go and now its nor accepting any connection.I have tried it with different internet connections and tools ( Realm Studio and through realm node library).nothing worked!I have no0t changed anything on my side it should be server side issue.I dont use proxy appreciate any suggestion.Cheers\nReza",
"username": "Reza_Ghaleh"
},
{
"code": "",
"text": "Welcome to the MongoDB community forum @Reza_Ghaleh!For help with Realm Cloud operational issues you’ll have to create a support case for investigation.I can see you’ve already done so, so please wait for the support team to follow up.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Issue connecting to Realm Cloud | 2020-05-25T05:48:09.168Z | Issue connecting to Realm Cloud | 1,380 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.2.7 is out and is ready for production deployment. This release contains only fixes since 4.2.6, and is a recommended upgrade for all 4.2 users.Fixed in this release:4.2 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Luke_Chen"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.2.7 is released | 2020-05-25T03:22:11.528Z | MongoDB 4.2.7 is released | 2,160 |
null | [
"charts",
"on-premises"
] | [
{
"code": " adminuser \n\n user1\n\n user2\n\n user3\n\n user4\n\n ....... \n",
"text": "Hi,I have collections based on the user, each users data I have different collection, so whenever users created I need copy all the Stadrad Dashboard and point to the collection belongs to that users.So want some like belowI have some db called AreadDB and this has collections based on the users.AreadDBI have default dashboard created by adminuser, dashboards are simple like show numbers of action by user, number of assets he has.So whenever new user is on boarded, some backed code is creating user account(usr1) using chart-cli in MongoDB chart and also creating new collections.So Data of that user is going and it will be written to user1 collections . Now when user1 logs in he should be able to see only that data from user1 collections and the default dashboards.is there a this kind of facility to do that in mongoDB chart to support this kind of features ?",
"username": "Great_Info"
},
{
"code": "metadata",
"text": "Hi @Great_Info -Great question! Right now we don’t have any API to programmatically create dashboards and charts. This is something we are looking at doing in a few months.Since you’re using the on-prem version of Charts, you do have a bit more flexibility in that you can write documents into the metadata database collections which represent things like dashboards and charts. It’s actually pretty easy to duplicate dashboards and charts - you just need to make sure each has a unique ID, the IDs are linked correctly and the dashboard owner is set to a valid user id.However it probably isn’t practical to create users or data sources programmatically, since these are spread across multiple collections and have sensitive detail (passwords and URIs) encrypted.So - if you are able to create the dashboard and data source manually, it should be feasible for you to write a script that generates/duplicates dashboards and charts for each user.HTH\nTom",
"username": "tomhollander"
},
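A very rough mongo shell illustration of the duplication approach described in the post above. The collection name, field names and user id below are guesses for illustration only; the real on-prem Charts metadata schema would need to be inspected first, and any charts linked to the dashboard would need the same treatment with their IDs re-linked.

// hypothetical names - inspect the actual Charts metadata database before using
var template = db.dashboards.findOne({ title: "Standard Dashboard" });

var copy = Object.assign({}, template);
copy._id = new ObjectId();        // every duplicated dashboard needs its own unique id
copy.owner = "user1-id";          // hypothetical: point the copy at the new user's id
copy.title = template.title + " (user1)";

db.dashboards.insertOne(copy);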
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to add users and mongodb charts sources dynamically | 2020-05-23T23:17:31.932Z | How to add users and mongodb charts sources dynamically | 3,719 |
null | [
"graphql",
"stitch"
] | [
{
"code": "{\"error\":\"must authenticate first\",\"link\":\"https://stitch.mongodb.com/groups/SOMEID/apps/SOMEID/logs?co_id=SOMEID\"}\n",
"text": "I try to add a new document via graphql mutation, In my app I use Email/Password I got also the token, then I put the token in the header but all get isI’m authenticated 100%, so what can it be? Is it something with the Rules?, I already allowed read & write",
"username": "Ivan_Jeremic"
},
{
"code": "",
"text": "I fixed the error by changing my GraphQL tetsing tool, before I used Graphiql Desktop(electron app), I switched to Postman and all problems are gone, Postman is amazing!",
"username": "Ivan_Jeremic"
}
] | GraphQL Mutation Stitch: "error":"must authenticate | 2020-05-24T13:33:29.383Z | GraphQL Mutation Stitch: “error”:”must authenticate | 2,491 |
[
"replication"
] | [
{
"code": "",
"text": "i read documentation showing , replicaset maintains high availability.\nWhy there is no mention of replicaset being fault-tolerant. if i run a 5 node replicaset and 1 node goes down, still my application will be up and running. How this doesnot prove that replicaset provides both highavailability and fault-tolerance?\nimage1582×252 55 KB\n",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "How this doesnot prove that replicaset provides both highavailability and fault-tolerance?Hi,Fault tolerance, high availability, and automatic failover broadly describe the same behaviour although they may have slightly different connotations. The concept of High Availability includes durability, redundancy, and automatic failover with minimal or no downtime.Fault tolerance is more specifically the number of members that can be unavailable in a deployment while still maintaining full availability of read/write behaviour. For example, assuming all members of your 5-member replica set are voting, the fault tolerance would be 2 members.Scroll down to the next paragraph on the documentation page you referenced:Replication provides redundancy and increases data availability. With multiple copies of data on different database servers, replication provides a level of fault tolerance against the loss of a single database server.Consider Fault Tolerance is also one of the sections on the Replica Set Architecture documentation. Your deployment can be highly available with a fault tolerance of 1, or you can increase the fault tolerance by adding additional members.Regards,\nStennie",
"username": "Stennie_X"
},
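The relationship described in the reply above can be written as a small calculation: with n voting members, a primary can only be elected while a strict majority of voting members is reachable, so the fault tolerance is whatever is left over.

function faultTolerance(votingMembers) {
  const majority = Math.floor(votingMembers / 2) + 1; // members needed to elect a primary
  return votingMembers - majority;                    // members that can fail without losing write availability
}

console.log(faultTolerance(3)); // 1
console.log(faultTolerance(5)); // 2 -- the 5-node example from the question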
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why is replicaSet considered as Highly available but not fault tolerant? | 2020-05-24T01:54:24.109Z | Why is replicaSet considered as Highly available but not fault tolerant? | 3,450 |
|
null | [
"performance"
] | [
{
"code": "",
"text": "iam running a 3 node replicaset on a 4core 8gb ram(3Instances). Previously i was using 4core8gbram standaloneMongodb setup.i have enabled read on secondaryNodes. So writes will be happening on primaryNode.\nWill this increase my performance in any way?",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "Hi @Divine_Cutler you might get some benefits with increased read performance as you could spread your reads out to more machines. Note that depending on settings, you could get stale reads if the write hasn’t hit the secondary that you’re reading from. Note that read preference secondary means that secondaries will serve the reads. Should you have a primary-secondary-arbiter replica set and one of the data bearing nodes is down, then you won’t serve reads. In that type of scenario you might want to make that secondaryPreferred to be safe.As for writes, you might get a little bit of increased performance, but probably not that much. You can scale your writes out by sharding. This will give you more primary nodes to write to, but if not done right, you could actually get less performance on writes.Do you have a performance problem currently that needs to be taken care of now, or is this for general knowledge? The best way to figure out the set up for your environment is to set up test machines with actual data and send typical loads at them to see how they handle that load.",
"username": "Doug_Duncan"
},
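A minimal Node.js sketch of the read preference behaviour mentioned above; the hosts, replica set name, and database/collection names are placeholders.

const { MongoClient } = require('mongodb');

// secondaryPreferred sends reads to a secondary when one is available,
// but falls back to the primary if no secondary is reachable.
const client = new MongoClient(
  'mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0',
  { readPreference: 'secondaryPreferred', useNewUrlParser: true, useUnifiedTopology: true }
);

async function run() {
  await client.connect();
  const count = await client.db('mydb').collection('mycoll').countDocuments();
  console.log(count);
  await client.close();
}

run().catch(console.error);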
{
"code": "",
"text": "@Doug_Duncan this is a problem that needs to be takencare of. could you share any available monitoring tools to check the performance of mongodb replicaset?",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "@Divine_Cutler MongoDB has a monitoring section in their documentation that can provide ideas for how you can monitor your instances.One thing that they don’t mention however is the use of prometheus/grafana. I am a fan of these tools and even use them to monitor/visualize metrics from my computers at home.Prometheus is a systems monitoring/alerting toolkit and you can find exporters for various needs. Prometheus just stores the data, and you need exporters for what you want to monitor. There are a couple of third-party (read non-official) exporters for MongoDB that I know of:Grafana is a visualization and alerting tool that can be used with different data stores (Prometheus is one several available). If you go this route you will have to set each of the pieces up and learn the query language to build out your dashboards to get the most out of it, although you can find pre-built dashboards to get you started.Prometheus/Grafana is not a toolset for everyone (which is why I mentioned the monitoring section of the docs above), but provides you the ability to build out the dashboards to help you visualize your system in the way that helps you out the best. If you don’t have the time to invest learning the tools, look at what’s prebuilt to see if they meet your needs.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hello @Divine_Cutlerthere is one more to add. This is the TICK stack (Telegraf, InfuxDB, Chronograf, Kapacitor). I did some work with Prometheus/Grafana (thanks @Doug_Duncan, love to use this ) and lately was asked to use the TICK stack. Both focus on the same subject, the cool thing with the TICK stack is that these are Go binaries which have no dependencies. What does that mean?\nYou start with the InfluxDB get it here just follow the brief documentation. For a test you can start with the default settings. Just run the DB with the default settings. Just add where your MongoDB is running.Then get Telegraf, download the binary - it is just on executable, do not search for plugins, it is all in the binary! Then read the plugin documentation for mongodb. Go to the github page, copy the defaults and past it in the telegraf.conf file.Start the telegraf, this will collect the data and write it to the InfluxDB. Now you want to get the visual part. Download Chronograf quick check the docs, but if you stay with the defaults, there is almost nothing to do. Again it is only one binary. Run it. Move to http://localhost:8888 and you can start monitoring.When you once have passed the above steps (yes one time getting familiar with it is not in the binaries … ) then it will take you less then 30 min to set up the full TICK stack.Happy Monitoring \nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "thank you @michael_hoeller i will take a look at it and update you",
"username": "Divine_Cutler"
},
{
"code": "mongodb",
"text": "@Divine_Cutler working through this introduction to the TICK stack helped me out when I pulled it down to play with earlier today. The document is geared towards Mac users, but it should be easy enough to convert to the OS of your choice.It did take me some playing around to get the mongodb input filter added and how to query the system to return results. I didn’t find much in the way of documentation, although I didn’t try all that hard. It’s more fun for me to play around to figure out how things work. While it’s easier to use than Prometheus/Grafana, I don’t think you have as many controls for building out dashboards. Of course I’ve used Prometheus/Grafana for years now and the TICK stack for a couple hours.",
"username": "Doug_Duncan"
},
{
"code": "telegraf --input-filter cpu:mem:system --output-filter influxdb config > /usr/local/etc/telegraf.conftelegraf.exe --config telegraf.conf",
"text": "Hello @Doug_Duncan and @Divine_Cutler\nDoug, thanks for the intro link. In step 2 they configure the telegraf viatelegraf --input-filter cpu:mem:system --output-filter influxdb config > /usr/local/etc/telegraf.confto simplify for the start and later use (since you may want to play with the already shipped config I’d suggest to take the default telegarf.conf as is, comment or uncomment as needed and run it like thistelegraf.exe --config telegraf.conf\n(you may want to add a path if needed for the on or the other)Cheers\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Will using replica set increase my read/write performance? | 2020-05-12T23:42:18.814Z | Will using replica set increase my read/write performance? | 4,467 |
null | [
"security"
] | [
{
"code": "mongotopmongostatmongotopmongotop --host testdbr1.insuredmine.info --port 12017 -u fred -p fred1 --authenticationDatabase \"admin\" {\n \"role\" : \"userAdminAnyDatabase\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"dbAdminAnyDatabase\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"readWriteAnyDatabase\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"clusterAdmin\",\n \"db\" : \"admin\"\n }\nroles: [ { role: \"readWrite\", db: \"dbz\" }]Failed: (Unauthorized) not authorized on admin to execute command { serverStatus: 1, recordStats: 0, lsid: { id: UUID(\"cf3fe918-2abf-44c9-b67d-1c7916f84f7f\") }, $clusterTime: { clusterTime: Timestamp(1590182369, 3), signature: { hash: BinData(0, 30748E38362C79FB87FB2FA102DC33259F87300E), keyId: 6823102724234543106 } }, $db: \"admin\", $readPreference: { mode: \"primaryPreferred\" } }\nmongotopmongotop",
"text": "iam able to run mongotop mongostat only on user that has admin roles. For example i have assigned below roles to the user fred. iam able to run the mongotop only with user credentials having admin role as shown below.mongotop --host testdbr1.insuredmine.info --port 12017 -u fred -p fred1 --authenticationDatabase \"admin\"when i use an usercredentials with below role, i get below error\nroles: [ { role: \"readWrite\", db: \"dbz\" }]\ni checked the docs to see if there are any roles i could add to this user to make this command work using this userid, but iam not able to find it in docs https://docs.mongodb.com/manual/reference/program/mongotop/why i need this?\nwhen i run mongotop with my admin credential i see output of all databases present in the mongodbinstance. i have multiple database in my replicaset, i dont want to see the output of mongotop from all the collections in those databases. i want to see only the output of collections running in one specific database",
"username": "Divine_Cutler"
},
{
"code": "mongotopserverStatustopmongostatserverStatus",
"text": "Hi @Divine_Cutler, The information you need can be found in the MongoDB documentation.For mongotop the user needs to have both serverStatus and top privileges.For mongostat the user needs to have serverStatus privileges.The built in clusterManager role has these two privileges plus more that a monitoring admin might need, so it might fit your needs as well.",
"username": "Doug_Duncan"
},
{
"code": "createUser{ role: \"<role>\", db: \"<database>\" ,actions: [ \"serverStatus\",\"top\"] }\n",
"text": "i can see privileges in this doc. but how to add it to an useraccount?in which property should i add privilege?\n\nimage1604×906 82.6 KB\ni found some clue in this doc https://docs.mongodb.com/manual/reference/resource-document/#resource-document .i think it could be added like this, but iam not so sure as i don’t find the syntax for it in createUser methodplease let me know",
"username": "Divine_Cutler"
}
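The thread ends without the exact syntax, so here is a hedged mongo shell sketch of one way to do it: privileges are attached to a custom role (not placed directly in the createUser document), and the role is then granted to the user. The role name below is made up; serverStatus and top are both cluster-level actions.

use admin

db.createRole({
  role: "monitoringOnly",        // hypothetical role name
  privileges: [
    { resource: { cluster: true }, actions: [ "serverStatus", "top" ] }
  ],
  roles: []
})

// grant it to the existing user, or use the built-in clusterMonitor role instead
db.grantRolesToUser("fred", [ { role: "monitoringOnly", db: "admin" } ])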
] | How to make mongotop/mongostat work for a specific database user that does not have an admin role? | 2020-05-22T21:29:59.177Z | How to make mongotop/mongostat work for a specific database user that does not have an admin role? | 4,043 |
null | [
"mongoose-odm",
"indexes"
] | [
{
"code": " const unitSchema = new mongoose.Schema({\n name: {\n type: String,\n required: true,\n },\n type: {\n type: String,\n required: true,\n },\n parent: {\n type: String,\n required: true,\n },\n administrators: {\n type: Array,\n required: true,\n },\n });\n\n unitSchema.index({ type: 1, parent: 1, name: 1 }, { unique: true });\n\n const Unit = mongoose.model('Unit', unitSchema);\n",
"text": "I would like to have a combination of 3 keys (type, parent and name) as unique compound index. I defined the schema as follows, but I can still create a unit that has the same type, parent and name. What am I missing?",
"username": "Ayumi_Nakamura"
},
{
"code": "",
"text": "I am wondering of this may helpI was working on a project for an organisation I volunteer for, I encountered some difficulties in ma...",
"username": "Natac13"
},
{
"code": "",
"text": "Thank you @Natac13! It worked!!As the author of this article says, it’s pretty weird that we have to use mongoose-unique-validator to accomplish this.",
"username": "Ayumi_Nakamura"
},
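A related pitfall worth noting for this thread: Mongoose only requests the index builds declared in the schema, and a unique compound index will fail to build if documents violating it already exist, so duplicate inserts can keep succeeding while the error goes unnoticed. A small sketch for checking that the index from the schema above was actually created (model name as in the question):

// after compiling the model
Unit.on('index', err => {
  if (err) {
    // e.g. a duplicate key error if existing data already violates uniqueness
    console.error('Index build failed:', err.message);
  }
});

// or explicitly sync the schema's indexes and list what exists on the collection
async function checkIndexes() {
  await Unit.syncIndexes();
  console.log(await Unit.collection.indexes());
}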
{
"code": "",
"text": "There is also this option I remembered",
"username": "Natac13"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Setting unique compound index in schema definition | 2020-05-20T14:00:29.418Z | Setting unique compound index in schema definition | 9,445 |
null | [
"atlas"
] | [
{
"code": "",
"text": "Hey, I’m new to MongoDB,I have an Ionic App for a local restaurant where you have some prdoucts which you can order. The app also have a register to create some users. There is also a Angular Web App where you can put in products and look up users etc.Both apps are connected to the MongoDB. Unfortunatelly I don’t have a clue which data plan is necessary for the deployment of these two apps.Can anybody help me please?Best regards\nBasti",
"username": "Bastian_Eckersberger"
},
{
"code": "",
"text": "Hi Basti,When in doubt, start small, index, and scale conservatively over time – note that you can also enable Atlas auto-scaling.The first order contributor to sizing is figuring out how much of your data and indexes will be warm, e.g. accessed in typical period–this tells you your working set size. This is the amount of memory you want to have ideally with some headroom, so this informs what Atlas cluster tier to use.You can easily scale vertically quickly, and if/when you find a need for long-term linear scale out, you can do so with sharding.https://docs.atlas.mongodb.com/sizing-tier-selection/ contains a bunch more info!Cheers\n-Andrew",
"username": "Andrew_Davidson"
}
] | Which M.. tier do I need for my app? | 2020-05-13T16:24:25.900Z | Which M.. tier do I need for my app? | 1,491 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Imagine a simple collection where each record has four primitive attributes - OrderId, CustomerEmail, OrderDate, ProductOrdered.A business owner asks the question:\nWhat percentage of orders in April 2020 were from new customers?Important to note:\nSame customer for this purpose is where the email is the same.\nIf your solution uses count <=1 be aware that a customer that orders twice in April is also a a new customer.For stage 1 the output can be something like this:\nNew customers April: 13,343\nRepeat customers (has at least 1 order prior to 1st april): 3,423I guess converting a percentage isn’t technically necessary at this stage… but bonus point if done…I can do this in SQL using CTEs, but we’re looking at ways of allowing stakeholders to model their own queries and sql builders based around string concatatention aren’t appealing, I’m actually looking at query DSLs and how appriorate they are to modelling these types of questions - with the possible view of sticking some UX report builder on top one day.So back on point, can the Mongo query language model the above question?Thanks",
"username": "Matt_2x"
},
{
"code": "db.collection.aggregate([\n { \n $group: { \n _id: \"$CustomerEmail\", \n count_new: { $sum: { $cond: [ { $gte: [ \"$OrderDate\", ISODate(\"2020-04-01\") ] }, 1, 0 ] } },\n count_repeat: { $sum: { $cond: [ { $lt: [ \"$OrderDate\", ISODate(\"2020-04-01\") ] }, 1, 0 ] } }\n } \n },\n { \n $group: { \n _id: \"Customers:\",\n new_custs: { $sum: { $cond: [ { $eq: [ \"$count_repeat\", 0 ] }, 1, 0 ] } },\n repeat_custs: { $sum: { $cond: [ { $gt: [ \"$count_repeat\", 0 ] }, 1, 0 ] } }\n } \n }\n])\n{ \"_id\" : \"Customers:\", \"new_custs\" : 1, \"repeat_custs\" : 2 }",
"text": "For stage 1 the output can be something like this:\nNew customers April: 13,343\nRepeat customers (has at least 1 order prior to 1st april): 3,423The output:{ \"_id\" : \"Customers:\", \"new_custs\" : 1, \"repeat_custs\" : 2 }",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Prasad,[Note: EDITED. I did indeed say the 100% wrong thing prior to this edit, my bad, apologises]\nrepeat_customers are customers that ordered prior to april and in april.\nnew_customers are customers that order in april, but have not ordered prior.To be clearer lets try question “What amount of orders in April 2020 were from new customers and what amount from repeat customers?”For stage 1 the output can be something like this:\nNew customers April (has no orders prior to 1st april): 13,343\nRepeat customers (ordered in april, has at least 1 order prior to 1st april): 3,423Thanks",
"username": "Matt_2x"
},
{
"code": "db.collection.aggregate([\n { \n $group: { \n _id: \"$CustomerEmail\",\n count_new: { $sum: { $cond: [ { $gte: [ \"$OrderDate\", ISODate(\"2020-04-01\") ] }, 1, 0 ] } },\n count_repeat: { $sum: { $cond: [ { $lt: [ \"$OrderDate\", ISODate(\"2020-04-01\") ] }, 1, 0 ] } },\n } \n },\n { \n $group: {\n _id: null,\n total_april: { $sum: \"$count_new\" },\n total_new_custs: { $sum: { $cond: [ { $eq: [ \"$count_repeat\", 0 ] }, \"$count_new\", 0 ] } },\n total_repeat_custs: { $sum: { $cond: [ { $gt: [ \"$count_repeat\", 0 ] }, \"$count_new\", 0 ] } },\n }\n },\n {\n $project: {\n _id: 0,\n new_cust_orders_percentage: { $divide: [ { $multiply: [ \"$total_new_custs\", 100 ] }, \"$total_april\" ] }\n }\n }\n])\n{ \"new_cust_orders_percentage\" : 66.66666666666667 }{ OrderId: 1, CustomerEmail: \"e-1\", OrderDate: ISODate(\"2020-05-22\"), ProductOrdered: \"p-1\" }\n{ OrderId: 2, CustomerEmail: \"e-2\", OrderDate: ISODate(\"2020-03-25\"), ProductOrdered: \"p-91\" }\n{ OrderId: 3, CustomerEmail: \"e-2\", OrderDate: ISODate(\"2020-04-25\"), ProductOrdered: \"p-90\" }\n{ OrderId: 4, CustomerEmail: \"e-1\", OrderDate: ISODate(\"2020-05-20\"), ProductOrdered: \"p-0\" }\n{ OrderId: 5, CustomerEmail: \"e-3\", OrderDate: ISODate(\"2020-02-01\"), ProductOrdered: \"p-66\" }",
"text": "Hi Matt Freeman,What percentage of orders in April 2020 were from new customers?The following aggregation returns the percentage:The output:{ \"new_cust_orders_percentage\" : 66.66666666666667 }I used these as sample documents:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Sorry I am making a real mess of explaining this,We are interested in the order stats for the month April 2020.A new customer is a customer that ordered in April 2020 but has not ordered previously before April 2020.\nA repeat customer is a customer that ordered in April 2020 and has ordered at some point previous to April 2020.A customer should only be counted once.",
"username": "Matt_2x"
},
{
"code": "",
"text": "@Parasad.I think this looks right now, thank you very much, greatly appreciated.So it seems mongo query language is suited to representing complicated questions, I struggled to model this with (Incomplete) MBQL Reference · metabase/metabase Wiki · GitHub or other non-SQL-string query DSLsAt this point I should be completely honest and say that I may not be intending to use the query language against Mongo, but to just borrow the concept and apply it to our non-mongo datastore.Long shot, for inspiration is anyone aware of any other DSLs that serialize neatly to JSON for representing queries of such complexity.Thanks",
"username": "Matt_2x"
}
] | Is this query possible - % of orders in April 2020 that were from new customers | 2020-05-22T04:51:24.508Z | Is this query possible - % of orders in April 2020 that were from new customers | 2,607 |
null | [
"connecting"
] | [
{
"code": "",
"text": "Cannot connect to the MongoDB at localhost:27016.Error:Network is unreachable. Reason: couldn’t connect to server localhost:27016, connection attempt failed: SocketException: Error connecting to localhost:27016 (127.0.0.1:27016) :: caused by :: Connection refusedgetting this error while i am trying to connect my db . Help me anyone.",
"username": "Deepak_Kanojia"
},
{
"code": "",
"text": "Is your mongod up and running on port 27016?\nHow you are connecting by shell or some tool\nWhat command was issued to connect to mongod",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "@Ramachandra_Tummala i am using node js in that i am using mongo DB , So After connection i tried to run my server there a error is shown , then i tried it to start with robo3t then this error is showing up",
"username": "Deepak_Kanojia"
},
{
"code": "",
"text": "The default port is 27017. So unless you started your mongod with a different port setting you should connect to port 27017.",
"username": "steevej"
},
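For reference, a minimal Node.js sketch of the point above: connect to the default port 27017 unless mongod was deliberately started with --port 27016. The database name is a placeholder.

const { MongoClient } = require('mongodb');

async function main() {
  const client = await MongoClient.connect('mongodb://localhost:27017', {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  });
  console.log('connected');
  const collections = await client.db('mydb').listCollections().toArray();
  console.log(collections.map(c => c.name));
  await client.close();
}

main().catch(console.error);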
{
"code": "",
"text": "If getting connection refuse at that port,I am confident mongod is not running ,first check your mongodb.conf file for the port it is set to run on.If running as a service on linux, try systemctl start mongod . Then try connect again with your mongo shell.",
"username": "Kirk-PatrickBrown"
}
] | Connection refused | 2020-05-21T20:26:37.404Z | Connection refused | 14,901 |
null | [
"python",
"connecting",
"atlas"
] | [
{
"code": "",
"text": "I am trying to pass data back and forth via Python to a free MongoDB collection that I have created with my account. I’ve populated the cluster with the sample data. I am trying to access the documents within the shipwrecks collection. The error that I receive is:pymongo.errors.ServerSelectionTimeoutError: cluster0-shard-00-01-5zsny.mongodb.net:27017: [WinError 10061] No connection could be made because the target machine actively refused it,cluster0-shard-00-00-5zsny.mongodb.net:27017:Here is my code. I have removed my password on purpose. Please help.import pymongocluster = pymongo.MongoClient(“mongodb+srv://markwoody:@cluster0-5zsny.mongodb.net/test?retryWrites=true&w=majority”)db = cluster[“sample_geospatial”]\ncollection = db[“shipwrecks”]results = collection.find_one()print(results)",
"username": "Mark_Woodmansee"
},
{
"code": "",
"text": "Hi @Mark_Woodmansee, have you remembered to whitelist your machine’s IP address in the Atlas settings? Forgetting to do that is generally the cause of the error you’re getting.",
"username": "Doug_Duncan"
}
] | MongoDB cloud example with Python | 2020-05-21T22:33:10.491Z | MongoDB cloud example with Python | 3,199 |
null | [] | [
{
"code": "",
"text": "Hi MongoDB Community!My name is Nadine, and I’m a Senior Developer Advocate at Rockset. We recently partnered with MongoDB, and we wanted to introduce ourselves. Rockset is a real-time indexing database used alongside MongoDB for building data-driven microservices. We’re thrilled to join MongoDB World this year and have an on-demand session where we will talk about Joins and Aggregations using real-time indexing on MongoDB. Following the session, join us for a #tech-talk-q-and-a with CoFounder & CTO of Rockset, Dhruba on our community slack channel on June 10th at 3pm . We also have a dedicated #mongodb channel - feel free to ask us questions there anytime!If you’re curious to see how to do a MongoDB-Rockset integration in under 15 minutes, check out this tutorial. To help get your feet wet, we are currently hosting a few challenges (with, of course, prizes, such as the Bose SoundLink). Hop on over to our slack channel to plug into these challenges, #mongodb. We can’t wait to see what you create!Happy hacking!\n-nadine + Rockset team\n@nadine-rockset on our community slack channel",
"username": "nadine_farah"
},
{
"code": "",
"text": "Welcome to the community, Nadine! We’re happy to have you join us.",
"username": "Jamie"
}
] | Hello from Rockset! | 2020-05-21T20:26:20.833Z | Hello from Rockset! | 2,175 |
null | [
"mongodb-live-2020"
] | [
{
"code": "",
"text": "Welcome to the MongoDB forums, MongoDB.live attendees!We’re thrilled that you’ve joined us here. Take a look around and familiarize yourself with a few important documents before getting started:* Our Code of Conduct* Tips for Getting Started with Our CommunityThen, introduce yourself here:* Welcome!Lastly, keep an eye on this MongoDB Events category for post-MongoDB.live discussions. If you’re posting a follow-up question or comment on a session you viewed at MongoDB.live, please add the mongodb-live-2020 tag to your post. Thanks!",
"username": "Jamie"
},
{
"code": "",
"text": "",
"username": "Jamie"
},
{
"code": "",
"text": "",
"username": "Jamie"
},
{
"code": "",
"text": "",
"username": "Jamie"
}
] | Welcome, MongoDB.live attendees! | 2020-05-21T17:30:08.436Z | Welcome, MongoDB.live attendees! | 5,322 |
null | [
"xamarin"
] | [
{
"code": " [PrimaryKey]\n public string Id { get; set; } = Guid.NewGuid().ToString();\n public string Name { get; set; }\n public string Role { get; set; }\n\n\n [Backlink(nameof(ToDoItem.Employee))]\n public IQueryable<ToDoItem> ToDoItems { get; }\n}\n public Assignee Employee { get; set; }\n}\n",
"text": "My current employer is developing a mobile app using Xamarin.Forms and Asp.net mvc on the backend. I suggested to use realm in the mobile app. My manager want to see a POC(Proof of concept) app using realm with backlink feature before allowing it to be used in the app. I am working on the POC on GitHub . The documentation is very limiting and the GitHub repo of realm-dotnet don’t have any good sample app.\nI completed the project. But unable to implement backlink. The sample app I have developed allow user to create assignees(employees) in the first page. The user can delete or edit the employees using context menu. When the user clicks on the employee name the app navigates to the ToDoListPage of that particular employee. Here the user can create ToDoItems . On this ToDoList page I want to show the ToDoItems that where assigned to that employee only.\nThe models are as follows:public class Assignee : RealmObject\n{\npublic Assignee()\n{\nToDoItems = Enumerable.Empty().AsQueryable();\n}public class ToDoItem : RealmObject\n{\n[PrimaryKey]\npublic string Id { get; set; } = Guid.NewGuid().ToString();\npublic string Name { get; set; }\npublic string Description { get; set; }\npublic bool Done { get; set; }I am adding employee to each ToDo Item:Item.Employee = Employee;\n_realm.Add(Item);Now I want to access the ToDoItems for the Employee:Items = _realm.All<Assignee\">().Where(x => x.Id == EmployeeId).FirstOrDefault().ToDoItems;But this does not work. I will be grateful if someone can help me out by preferably writing code in my sample app or give the correct code in the reply.Thank you",
"username": "Paramjit_Singh"
},
{
"code": "",
"text": "I didn’t get any reply on MongoDB community forum. But get the answer on Stackoverflow. So answering my own question. I am using realm 4.3.Firstly, Realm .NET doesn’t currently support traversing properties (x.Employee.Id). Due to this the app crashes with the exception:The left-hand side of the Equal operator must be a direct access to a persisted property in RealmRealm supports object comparison, so we can fix this like so:var employee = _realm.Find(EmployeeId);Items = _realm.All().Where(x => x.Employee == employee);The second issue is that the EmployeeId parameter is null. Since the EmployeeId is being populated after the load logic has been triggered, we don’t need to load the data in the ctor.Finally, since I won’t be loading the data in the ctor, and instead in the SetValues method, the UI needs to know, when the data has been updated, what exactly to redraw. Thus, I need to mark the collection to be Reactive too:[Reactive]public IEnumerable Items { get; set; }Then, I change the SetValues method to use object comparison, instead of traversing:async Task SetValues(){Employee = _realm.Find(EmployeeId);Title = Employee.Name;Items = _realm.All().Where(x => x.Employee == Employee);}To sum up - I don’t need to try and load the data in the ctor, since I don’t know when the EmployeeId will be set. I already tracking when the property will change and inside the SetValues command simply need to change the expression predicate.",
"username": "Paramjit_Singh"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using Backlink feature of realm-dotnet in Xamarin.Forms app | 2020-05-16T08:59:11.297Z | Using Backlink feature of realm-dotnet in Xamarin.Forms app | 2,947 |
[
"compass",
"atlas"
] | [
{
"code": " I am trying to connect to my Database Cluster, but I get the following error: \n \"getaddrinfo ENOTFOUND cluster0-shard-00-00-tehz2.mongodb.net\"\n Please see screenshot below. Any ideas how to fix this? \n",
"text": "Hello,Thanks in Advance,\nRajneshScreen Shot 2020-05-20 at 6.05.08 PM878×354 40.3 KB",
"username": "Rajnesh_Domalpalli"
},
{
"code": "",
"text": "Are you able to ping your cluster?\nMay be the hostname is not correct\nPlease check again\nAlternately you can try fill in connection fields individually tab",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thank you for such a prompt reply, Ramachandra.\nI’m not sure how to ping the cluster since I’m new to MongoDb.\nI’ll read the Documentation and try it out in a day or two.\nThanks again.Regards,\nRajnesh",
"username": "Rajnesh_Domalpalli"
},
{
"code": "",
"text": "The cluster cluster0-tehz2 does not seem to be setup correctly.We usually get the info about the replica set nodes forming the cluster as in:",
"username": "steevej"
}
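One way to check what the post above is getting at: an Atlas mongodb+srv URI is backed by a DNS SRV record, and a getaddrinfo ENOTFOUND error usually means that record (or the host names derived from it) does not resolve from the client machine. A small Node.js sketch, with the cluster hostname taken from the error message:

const dns = require('dns');

dns.resolveSrv('_mongodb._tcp.cluster0-tehz2.mongodb.net', (err, records) => {
  if (err) {
    // ENOTFOUND here mirrors the Compass error and points at DNS, not at MongoDB itself
    return console.error(err.code);
  }
  console.log(records); // should list the individual cluster0-shard-00-0x hosts
});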
] | Error message in Compass when connecting to Cluster | 2020-05-21T02:15:45.182Z | Error message in Compass when connecting to Cluster | 6,148 |
|
null | [
"dot-net"
] | [
{
"code": "System.TimeoutException: A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector\n\n{ AllowedLatencyRange = 00:00:00.0150000 }\n\n}. Client view of cluster state is { ClusterId : \"1\", ConnectionMode : \"ReplicaSet\", Type : \"ReplicaSet\", State : \"Disconnected\", Servers : [{ ServerId: \"\n\n{ ClusterId : 1, EndPoint : \"Unspecified/ serverFQN:29031\" }\n\n\", EndPoint: \"Unspecified/serverFQN:29031\", State: \"Disconnected\", Type: \"Unknown\", LastUpdateTimestamp: \"2020-05-06T08:50:17.7201620Z\" }, { ServerId: \"\n\n{ ClusterId : 1, EndPoint : \"Unspecified/serverFQN:29032\" }\n\n\", EndPoint: \"Unspecified/serverFQN:29032\", State: \"Disconnected\", Type: \"Unknown\", LastUpdateTimestamp: \"2020-05-06T08:50:17.7260395Z\" }, { ServerId: \"\n\n{ ClusterId : 1, EndPoint : \"Unspecified/serverFQN:29033\" }\n\n\", EndPoint: \"Unspecified/ serverFQN:29033\", State: \"Disconnected\", Type: \"Unknown\", LastUpdateTimestamp: \"2020-05-06T08:50:17.7265232Z\" }] }.\nat MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description)\nat MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask)\nat MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChanged(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Clusters.Cluster.SelectServer(IServerSelector selector, CancellationToken cancellationToken)\nat MongoDB.Driver.MongoClient.AreSessionsSupportedAfterServerSelection(CancellationToken cancellationToken)\nat MongoDB.Driver.MongoClient.AreSessionsSupported(CancellationToken cancellationToken)\nat MongoDB.Driver.MongoClient.StartImplicitSession(CancellationToken cancellationToken)\nat MongoDB.Driver.OperationExecutor.StartImplicitSession(CancellationToken cancellationToken)\nat MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSession[TResult](Func`2 func, CancellationToken cancellationToken)\nat MongoDB.Driver.MongoCollectionImpl`1.FindSync[TProjection](FilterDefinition`1 filter, FindOptions`2 options, CancellationToken cancellationToken)\nat MongoDB.Driver.FindFluent`2.ToCursor(CancellationToken cancellationToken)\nat MongoDB.Driver.IAsyncCursorSourceExtensions.ToList[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\nat Ms.Console.Program.Main(String[] args) in /src/console/Program.cs:line 81\n var clientSettings = MongoClientSettings.FromConnectionString(url);\nif (requireSsl)\n{\nclientSettings.SslSettings = new SslSettings\n\n{ ServerCertificateValidationCallback = (sender, certificate, chain, errors) => true, CheckCertificateRevocation = false}\n\n;\nclientSettings.VerifySslCertificate = false;\nclientSettings.UseSsl = true;\nclientSettings.AllowInsecureTls = true;\n}\ntry\n\n{ clientSettings.ServerSelectionTimeout = TimeSpan.FromMinutes(5); var client = new MongoClient(clientSettings); var database = client.GetDatabase(dbName); var fileFilter = Builders<BsonDocument>.Filter.Exists(\"_id\"); var file1 = database.GetCollection<BsonDocument>(\"fs.files\").Find(Builders<BsonDocument>.Filter.Empty).ToList(); System.Console.WriteLine(file1.Count); var file2 = database.GetCollection<BsonDocument>(\"fs.files\").Find(Builders<BsonDocument>.Filter.Empty).ToList(); System.Console.WriteLine(file2.Count); var file3 = 
database.GetCollection<BsonDocument>(\"fs.files\").Find(Builders<BsonDocument>.Filter.Empty).ToList(); System.Console.WriteLine(file3.Count); }\n\ncatch (Exception e)\n\n{ System.Console.WriteLine(e); } \nFROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim AS base\nWORKDIR /app\n\nFROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build\nWORKDIR /src\nCOPY [\"console/Ms.Console.csproj\", \"console/\"]\nRUN dotnet restore \"console/Ms.Console.csproj\"\nCOPY . .\nWORKDIR \"/src/console\"\nRUN dotnet build \"Ms.Console.csproj\" -c Release -o /app/build\n\nFROM build AS publish\nRUN dotnet publish \"Ms.Console.csproj\" -c Release -o /app/publish\n\nFROM base AS final\nWORKDIR /app\nCOPY --from=publish /app/publish .\nENTRYPOINT [\"dotnet\", \"Ms.Console.dll\"]\n",
"text": "Hello, I have a .net core program run in linux docker container. When I run the program in docker , the program fail to connect mongodb with replica set due to serverselectiontimeout. and it report below errors. We set a long time out 3-4 minutes, it could be connected. But too slow to connect mongodb. If I run in linux using dotnet run, the program is normal without connection issue. The issue born us a long time. Any solutions? how to resolve the issue?/////////////////////////////////////////////////////////////////////////////////Environment info:Mongodb driver version 2.10.4, actually, we try from 2.9.0 to 2.10.4, neither one worksMongodb version 4.2.Net core version 3.1Linux version: centos-release-7-7.1908.0.el7.centos.x86_64Docker version: we try on 18.09.6, build 481bc77156 and 19.03.5, build 633a0ea/////////////////////////////////////////////////////////////////////////////////Error message:///////////////////////////////////////////////////////////////////////////////////Sample code we used when connect mongodb:///////////////////////////////////////////////////////////////////////////////////docker file content:",
"username": "baichun_mu"
},
{
"code": "serverFQN",
"text": "Hi @baichun_mu, welcome!Based on the error messages and that you could connect from the host but unable to from the Docker container, this looks like a problem in the Docker networking setup. From within the Docker container, it has failed to resolved network address serverFQN or have access to the ports specified.I would recommend to debug the Docker networking first. i.e. connection between container and host and container to external networks.Regards,\nWan",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | System.TimeoutException when connecting using .net core driver with docker | 2020-05-19T00:24:18.318Z | System.TimeoutException when connecting using .net core driver with docker | 10,131 |
null | [
"stitch"
] | [
{
"code": "",
"text": "Hello,Yesterday we had to experience an unexpected failure of our Stitch application/instance. We have reached out to support, but they cannot help since we are not on one of the premium support plans. Hopefully someone here has any suggestions that might help us prevent this from happening in the future.The instance has been running fine since december 2019 (so about 6 months). Basically it only provides us an easy access point to 2 collections from a web interface using the Stitch JavaScript SDK. A great way to build an application quickly without all the hassle of maintaining servers and such.\nThe application does some basic operations such as search and update. These were also never giving any troubles. In the past days however we were seeing an increased number of errors in a very important part of the application which keeps some important info on the main screen of our application up-to-date, using the collection.watch() function (aka a Change Stream). This particular function gave more and more errors “Origin https://myorigin is not allowed by Access-Control-Allow-Origin.” (also reported in this topic).We have been trying to solve this. Most likely cause (besides something really wrong with CORS, whcih we obviously checked first) is a problem with the ChangeStream count. To mitigate this we renamed the collection and then set it back to the original name, whcih should force all open Change streams to close. But it did not help.Yesterday afternoon the browsers started to receive an error “429 Too Many Requests”. And the Stitch instance “died”. When looking at the Atlas Stitch Admin page we see a 404-error for the instance. It cannot be restarted or anything, just deleted.To be able to quickly continue our normal work we have meanwhile re-created the Stitch instance/application and point our browsers to that locations. But obviously we would like to prevent this from happening again.I hope somenone can point out where the problem might be so we can fix this.Kind regards,\nLeo",
"username": "Leo_Van_Snippenburg"
},
{
"code": "var ChangeStream = false;\nasync function watcher() {\n // see https://docs.mongodb.com/stitch-sdks/js/4/interfaces/remotemongocollection.html#watch\n if (typeof(ChangeStream.isOpen) === 'function' && ChangeStream.isOpen()) {\n ChangeStream.close();\n }\n // Create a change stream that watches the collection\n try {\n ChangeStream = await jobsCollection.watch([]); \n } catch (e) {\n console.log(e);\n watcher(); //re-init\n }\n ChangeStream.onError((e) => {\n watcher(); //re-init\n });\n ChangeStream.onNext((event) => {\n if (app) app.updateOrder(event);\n });\n};\n",
"text": "For those who wonder, this is the (textbook) code for the collection.watch() bit:Leo",
"username": "Leo_Van_Snippenburg"
}
] | Stitch instance "died" on us | 2020-05-20T07:22:45.528Z | Stitch instance “died” on us | 1,868 |
null | [
"node-js"
] | [
{
"code": "MongoDB{\n \"_id\": { \"8uk4f9653fc4gg04dd7ab3d3\"},\n \"title\": \"my title\",\n \"url\": \"https://myurl/entry/1ethd485\",\n \"author\": \"john\",\n \"created\": { \"2020-05-20T08:25:47.438Z\"},\n \"vote\": 1619\n},\n{\n \"_id\": { \"6fd4fgh53fc4gg04dd7gt56d\"},\n \"title\": \"my title\",\n \"url\": \"https://myurl/entry/1ethd485\",\n \"author\": \"john\",\n \"created\": { \"2020-05-19T04:12:47.457Z\"},\n \"vote\": 1230\n}\n// Home Route\napp.get('/', function(req, res){\n Entry.find({}).sort({ \"vote\" : -1 }).limit(500).exec(function(err, entries){\n if(err){\n console.log(err);\n } else {\n res.render('index', {\n entries: entries\n });\n }\n });\n});\nextends layout\n block content\n h1 #{title}\n ul.list-group\n each entry, i in entries\n li.list-group-item\n a(href=entry.url)= entry.title",
"text": "I have below data records in my MongoDB collection;As you can see title, url and author could be same but _id, created and vote is always unique.This is my Route in app.js;This route is displaying 500 records descending order of vote value. This is displaying both records above that have vote values 1619 and 1230. However what i want to achieve is to display only the biggest vote value for same title, url and author. In this example it should display only the record with vote value 1619. What is the best way to do it? What is the correct way of using distinct in here?And just for your reference this is my pug layout;",
"username": "Senol_Sahin"
},
{
"code": "findtitleurlauthor",
"text": "This is displaying both records above that have vote values 1619 and 1230. However what i want to achieve is to display only the biggest vote value for same title, url and author.Hello Senol_Sahin ,To find the max value for a set of fields, you have to do an aggregation. That is write a db.collection.aggregate method (instead of the find).Distinct values are arrived at using the $group stage of the aggregation query. You can use the $max aggregation operator to get the biggest vote value for the distinct (grouping) of title, url and author fields.",
"username": "Prasad_Saya"
},
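A minimal sketch of the $group / $max pipeline described in the previous post, written against the Mongoose Entry model and field names from the question (untested, and kept close to the original route's sort/limit behaviour):

Entry.aggregate([
  {
    $group: {
      _id: { title: "$title", url: "$url", author: "$author" },
      vote: { $max: "$vote" }
    }
  },
  { $sort: { vote: -1 } },
  { $limit: 500 },
  {
    $project: {
      _id: 0,
      title: "$_id.title",
      url: "$_id.url",
      author: "$_id.author",
      vote: 1
    }
  }
]).exec(function (err, entries) {
  // one document per distinct title/url/author, carrying that group's highest vote
});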
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Getting distinct value from MongoDB | 2020-05-20T20:48:52.864Z | Getting distinct value from MongoDB | 1,253 |
null | [] | [
{
"code": "",
"text": "Where is the Dark Mode ? For here and the docs.mongodb.comI disrespectfully disagree with any marketing person about ‘stating on brand’.Docker is one example I want to highlight(albeit for their docs site)",
"username": "chris"
},
{
"code": "",
"text": "@chris I agree with you. I am not sure why more sites don’t default to a color scheme (or at least given an option) that removes the bright white background from their web sites. While not ideal, I would even be OK with a solarized theme.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Dark Mode is definitely a trending user preference on our radar, but proper implementation also requires reviewing & updating existing image assets and styles so they either work in both light & dark modes or have alternative variations. Something worth doing is also worth doing right .For example, we recently added a Dark Theme for embedded MongoDB Charts.Some related feature requests on the MongoDB Feedback site that you may want to watch & upvote:The docs.mongodb.com site spans multiple products & versions with 1000s of pages and 100s of images, so that isn’t a quick task to review and update. However, most of our docs images are SVG and should render reasonably if you have a custom dark theme.Until an official dark mode is available for your favourite web destinations, you can also personalise the browser experience by creating (or finding) community themes using a browser extension like Stylus. See Dark MongoDB Docs (userstyles.org) for a quite usable example.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "To add on to @Stennie_X’s comments, I am not opposed to implementing a Dark Mode version of this site. It won’t be in the launch version, but something we can consider developing for down the road. As Stennie mentioned, there are additional style elements required, but I also personally use everything (Twitter, Slack, etc.) in Dark Mode.Cheers,Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "Maybe people are well aware of this option, but I utilize a chrome plugin that may suffice for the time being until it’s added.Chrome: Dark Reader - Chrome Web StoreFirefox: Dark Reader – Get this Extension for 🦊 Firefox (en-US)",
"username": "mongo_maas"
},
{
"code": "",
"text": "Thanks for sharing this Timothy!!",
"username": "Juliette_Tworsey"
},
{
"code": "",
"text": "Hey @mongo_maas I had tried Dark Reader in the past but there was something that I didn’t like about it and quit using it. I don’t remember what that was so I’ll give it another try, but hopefully MongoDB’s marketing will release an “official” dark mode sooner rather than later.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "It works pretty good. I won’t lie that some sites can’t handle what it does to it, so just need to disable it for those specific ones. As you can see, it messes up the logos at the header of the pages here. Overall easy on the eyes for me, and that’s what I’m mainly looking for.",
"username": "mongo_maas"
},
{
"code": "",
"text": "We’re not marketing. We report to engineering I will ensure that getting a dark mode option for post-launch is a top priority. ",
"username": "Jamie"
},
{
"code": "",
"text": "Just sharing if you wish to enable dark mode without using any extension but just by chrome flag itself.Go to chrome and visit chrome://flagsFind the dark mode in the available tab, You’ll see something like this screenshot\nforce_dark_mode747×137 9.86 KB\nEven docs.mongodb.com looks awesome after enabling this.",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "This is awesome, because all pages that Chrome extensions can’t access (eg. source code page, chrome web store, etc.) also turns dark.Unfortunately, while Dark Reader will not darkify already dark websites, Chrome will do it anyway, which will always make these websites ugly.",
"username": "KaKi87"
},
{
"code": "",
"text": "Can confirm: Dark mode is imminent. ",
"username": "Jamie"
},
{
"code": "",
"text": "Thanks @Jamie to you and the team for working on this. My eyes will greatly appreciate it. The Dark Reader plugin is good but doesn’t always get color schemes in a good state.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Regards\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks for the surprise. But, that was scary for a moment !",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Oh My eyes! Dark mode? | 2020-01-29T20:30:00.513Z | Oh My eyes! Dark mode? | 21,099 |
null | [] | [
{
"code": "",
"text": "Hi all,I’m just joining the community as CEO of TrustiT a Tunisian startup offering a marketplace of electronic devices repair services with remote services and home pickup and delivery !We hope to benefit from your experiences and suggestions, inspite i’m not the Tech Guy but i have some IT background !",
"username": "Mohamed_Amine_Ouni"
},
{
"code": "",
"text": " Hi @Mohamed_Amine_Ouni and welcome to the community forums! We’re definitely here to help out where we can so ask questions as they arise.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Welcome to the forums, @Mohamed_Amine_Ouni! We’re thrilled to have you here ",
"username": "Jamie"
}
] | Hi All, it's TrustiT from Tunisia ! | 2020-05-19T20:33:39.220Z | Hi All, it’s TrustiT from Tunisia ! | 1,795 |
null | [
"node-js"
] | [
{
"code": "const uri = `mongodb+srv://<username>:<password>@<cluster>t/test?retryWrites=true&w=majority`;\nconst client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });\nclient.connect((err, client2) => {\n ifError(err)\n const db = client2.db(\"moving-db\");\n const collection = db.collection(\"testing\");\n console.log('Connected to the server', db.databaseName, collection.collectionName);\n const s = {\n name: 'Test Session',\n created: '2020-05-19',\n provider: 'TocBox'\n }\n collection.insertOne(s, function(err, r) {\n console.log('inside insertOne')\n ifError(err);\n console.log(r);\n });\n\n client2.close();\n});\n",
"text": "Hi!Just getting started with MongoDB and having some newbie problems I’m using the node.js Driver and trying a simple example of inserting some data into a new DB and Collection using the code directly from the driver docs:The console.log statements show that I am connected to the server OK, but the call to insertOne is returning the error “Cannot use a session that has ended”Any ideas?",
"username": "Steve_Tomas"
},
{
"code": "",
"text": "I should add that I am using MongoDB driver version 3.5.7 against an Atlas cluster.",
"username": "Steve_Tomas"
},
{
"code": "",
"text": "I figured it out - simple problem.Moved the statement client2.close() inside the callback of insertOne(…)",
"username": "Steve_Tomas"
},
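A hedged sketch of the fix described above, for the Node.js driver 3.x callback API (the connection string, database and collection names are placeholders, not taken from a working setup):

```js
const { MongoClient } = require('mongodb');

const uri = 'mongodb+srv://<username>:<password>@<cluster>/test?retryWrites=true&w=majority';
const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });

client.connect((err, client2) => {
  if (err) throw err;
  const collection = client2.db('moving-db').collection('testing');

  collection.insertOne({ name: 'Test Session', created: '2020-05-19', provider: 'TocBox' }, (err, r) => {
    if (err) throw err;
    console.log('inserted', r.insertedCount, 'document');
    client2.close();   // close only after the insert callback has fired
  });
});
```

Closing the client outside the callback ends the implicit session before insertOne has a chance to use it, which is what produced the error in this thread.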
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Getting Error "Cannot use a session that has ended" when inserting into a collection | 2020-05-20T07:12:55.816Z | Getting Error “Cannot use a session that has ended” when inserting into a collection | 14,588 |
null | [] | [
{
"code": "",
"text": "Hey people,I am having this confusion starting off with MongoDB. I believe the come as a package, Atlas takes care of the DaaS part and Compass is the actual GUI where you can see your data.This implies we cannot see the data with the help of Atlas, it is a mere configurator of the database. Is my understanding correct?Thanks!",
"username": "Aashish_Chaubey"
},
{
"code": "",
"text": "Hi @Aashish_Chaubey,This implies we cannot see the data with the help of Atlas, it is a mere configurator of the database. Is my understanding correct?That’s not correct. You can do data manipulation in Atlas. There are a lot of MongoDB cloud products which are stitched together with Atlas which make it much more powerful. If you want to manipulate the data or sth then here are the steps :Hope it helps!~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | What is the difference between Atlas and Compass? | 2020-05-20T03:55:43.083Z | What is the difference between Atlas and Compass? | 8,621 |
null | [] | [
{
"code": "",
"text": "I could not find many of the fields/objects referred to in the lecture. I found no array named skyCoverLayer in any of the documents in the collection",
"username": "Aveek_Sen"
},
{
"code": "",
"text": "I do not remember a quiz or lab requiring to look for skyCoverLayer. But may be you do not find it becauseNote: In the video above the database name is named “100YWeather”, it should be “100YWeatherSmall”.",
"username": "steevej"
},
{
"code": "skyCoverLayer{skyCoverLayer:{$exists:true}}",
"text": "Hi @Aveek_Sen,This field is not present in all documents. You can run this filter in Compass to get those documents where the skyCoverLayer field is present.{skyCoverLayer:{$exists:true}}\nScreenshot 2020-05-06 at 9.31.35 PM2052×1518 361 KB\nAlso, as @steevej-1495 mentioned all the labs and quizzes are consistent with the dataset in the atlas cluster.Let us know if you have any questions.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "There exists no such field: \ndelete11600×900 94.4 KB\nMaybe it was in the older collection",
"username": "Aveek_Sen"
},
{
"code": "",
"text": "You started with the wrong collection. Now it is the wrong field.You might have better luck withskyCoverLayerCut-n-paste prevents a lot of typing errors.",
"username": "steevej"
},
{
"code": "",
"text": "It doesn’t show up. There was no database by the name you talk of. It was in a previous dataset. So I obviously can’t refer to a database which doesn’t existNeither is there anything wrong with the field name right now. I checked again after your comment. There is definitely something wrong in the course lecture and I am politely pointing it out. If you want to continue with the wrong course material, I don’t care. It’s your wish",
"username": "Aveek_Sen"
},
{
"code": "skycoverlayerskyCoverLayer",
"text": "Hi @Aveek_Sen,If you look closely in the screenshot that you have shared, the field name that you have typed is - skycoverlayer. However, the correct field name is : skyCoverLayer.I hope you can see the difference in the Casing here.Let me know if the issue still persists.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "{skyCoverLayer:{$exists:true}}Thanks this works @Shubham_Ranjan",
"username": "Sumitra_Sastri"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Chapter 1: Geospatial Data lecture error? | 2020-05-05T20:15:38.196Z | Chapter 1: Geospatial Data lecture error? | 1,489 |
null | [] | [
{
"code": "",
"text": "In “MongoDB Documents: Fields with Arrays as Values” the instructor mentions “Flexible Data Models”.\nDoes this imply that, for example, in a Collection “Random Objects” we would have documents that look like:\n{“weight”: 1, “height”: 2, “length”: 3, “width”: 4} and {“mass”: 1, “height”: 2, “length”: 3, “width”: 4}?",
"username": "Tyler_Fenton"
},
{
"code": "Data Modelling",
"text": "Hi @Tyler_Fenton,You can totally have different fields in different documents in the same collection. MongoDB does not complain about it.There is a dedicated course on Data Modelling : M320: Data Modeling.You can also read more about it in our documentation : Data Modeling IntroductionLet me know if you have any questions.",
"username": "Shubham_Ranjan"
},
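A minimal mongo shell sketch of the flexible-schema point above (the collection name is illustrative; the field names are borrowed from the question's example documents):

```js
db.randomObjects.insertMany([
  { weight: 1, height: 2, length: 3, width: 4 },
  { mass: 1,   height: 2, length: 3, width: 4 }   // different field set, same collection - no schema error
])

db.randomObjects.find({ mass: { $exists: true } })  // match only the documents that carry a "mass" field
```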
{
"code": "",
"text": "",
"username": "system"
}
] | Flexible Data Model | 2020-05-20T01:51:08.423Z | Flexible Data Model | 1,082 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.2.7-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.6. The next stable release 4.2.7 will be a recommended upgrade for all 4.2 users.Fixed in this release:4.2 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Luke_Chen"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.2.7-rc1 is released | 2020-05-20T00:26:17.571Z | MongoDB 4.2.7-rc1 is released | 1,684 |
null | [
"compass"
] | [
{
"code": " \"schedule\": \"FIVE_ON_TWO_OFF\",\n \"date\": { \"$date\": \"2020-05-27T05:00:00.000Z\"},\n \"modality\": \"MGW\",\n \"wodDescription\": [\"20 min AMRAP\", \"30 cal Row\", \"20 Push-ups\", \"14 Dumbbell Power Snatch 50/35 (Masters 55+/Teens 35/25)\"],\n \"wodResultType\": \"ROUNDS_REPS\",\n \"repsPerRound\": 64\n{ wodResultType : \"ROUNDS_REPS\" }\n{ Country: \"Brazil\" }\n{ author : { $eq : \"Joe Bloggs\" } }\n",
"text": "Having read the Query Your Data page of the Compass documentation, this is what I have in the Filter field of Compass…Why is the FIND button disabled as if that’s not a valid query?Also, the documentation shows 2 different query filters for what I assume is a query on a String field:No explanation is given as to why the query operator is used in the second example, but not the first.",
"username": "Brian_Sheely"
},
{
"code": "{ Country: \"Brazil\" }\n{ author : { $eq : \"Joe Bloggs\" } }\n$eq{ field: { $eq: value } }{ field: value } } \"schedule\": \"FIVE_ON_TWO_OFF\",\n \"date\": { \"$date\": \"2020-05-27T05:00:00.000Z\"},\n \"modality\": \"MGW\",\n \"wodDescription\": [\"20 min AMRAP\", \"30 cal Row\", \"20 Push-ups\", \"14 Dumbbell Power Snatch 50/35 (Masters 55+/Teens 35/25)\"],\n \"wodResultType\": \"ROUNDS_REPS\",\n \"repsPerRound\": 64\n{ wodResultType : \"ROUNDS_REPS\" }\n",
"text": "Also, the documentation shows 2 different query filters for what I assume is a query on a String field:No explanation is given as to why the query operator is used in the second example, but not the first.The $eq operator documentation says:$eq specifies equality condition. The operator matches documents where the value of a field equals the specified value.{ field: { $eq: value } } is equivalent to { field: value } }Having read the Query Your Data page of the Compass documentation, this is what I have in the Filter field of Compass…Why is the FIND button disabled as if that’s not a valid query?I found the FIND button is enabled with the same document and query filter:\nfind1972×530 34.5 KB\n",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "It turned out to be Compass was hung. I became suspicious when the RESET button didn’t work either.",
"username": "Brian_Sheely"
}
] | FIND button disabled in Compass | 2020-05-17T22:17:07.137Z | FIND button disabled in Compass | 3,990 |
null | [
"mongoose-odm",
"transactions"
] | [
{
"code": "const db = mongoose.connection\n\n let orderTotal = 0\n\n //try with async iterators\n\n let session = await db.startSession()\n\n try {\n\n session.startTransaction()\n\n for await (const item of req.body) {\n\n for await (const val of item.values) {\n\n await ProdIns.findOneAndUpdate({\n\n SKU: item.SKU,\n\n categoryOptId: val.optionId,\n\n categoryOptVal: val._id,\n\n stockQty: { $gt: item.qty }\n\n }, {$inc:{stockQty:-item.qty}}, { session: session })\n\n await ProdIns.findByIdAndUpdate(val._id, { $inc: { stockQty: -item.qty } }\n\n )\n\n }\n\n await Cart.findByIdAndUpdate(item._id, { $set: { completed: true } }, { session: session })\n\n }\n\n await session.commitTransaction()\n\n session.endSession()\n\n res.status(201).send()\n\n } catch (e) {\n\n console.log(e)\n\n await session.abortTransaction()\n\n res.status(500).send()\n\n }\n",
"text": "In the above snippet I clearly know that I dont have any matching doc in ProdIns model that has stockQty field $gt: item.qty; In this case isnt it obvious that my transaction would get aborted? if not then why? And how to make it abort if ProdIns collection does not have any matching document that has stockQty $gt item.qty.Also I am using for loop to iterate through user’s cart items array. for every instance of the Cart items array my intent is to iterates through the variants(values) of that item and deduce the number of quantity user desires to order from ProdIns collection. Is it viable option to use async iteration? Any better alternative? Any Similar example you can give? how do you take your cart items and convert them into orders any tips on that please?",
"username": "shorif_shakil"
},
{
"code": "prodInabortTransaction",
"text": "Hey @shorif_shakilFor you first question about the transaction not aborting when your prodIn findOneAndUpdate does not find. The transaction will only abort when you throw an error and it is caught as your code reads. If you want to abort the entire transaction when no document found then you would have to check for that and call abortTransaction. Keep in mind that falling to find is not an error. Just return undefinedYour next question about iterating a loop with for of await. If you can I think it may be better to loop through your cart and create an array of bulk write objects. Then once you have all the updates, inserts, etc you send then to mongodb with a bulk write.",
"username": "Natac13"
},
{
"code": "const db = mongoose.connection\n\n let orderTotal = 0\n\n //try with async iterators\n\n let session = await db.startSession()\n\n try {\n\n session.startTransaction()\n\n for await (const item of req.body) {\n\n for await (const val of item.values) {\n\n ProdIns.findOneAndUpdate({\n\n SKU: item.SKU,\n\n categoryOptId: val.optionId,\n\n categoryOptVal: val._id,\n\n stockQty: { $gt: item.qty }\n\n }, { $inc: { stockQty: -item.qty } }, { session: session, rawResult: true }, (err, doc, res) => {\n\n if (err) {\n\n throw new Error(\"Could not find\")\n\n } else if (!doc.lastErrorObject.updateExisting) {\n\n throw new Error(\"Could not update\")\n\n }\n\n })\n\n }\n\n await Cart.findByIdAndUpdate(item._id, { $set: { completed: true } }, { session: session })\n\n }\n\n await session.commitTransaction()\n\n session.endSession()\n\n res.status(201).send()\n\n } catch (e) {\n\n console.log(e)\n\n console.log(\"aborting\")\n\n await session.abortTransaction()\n\n res.status(500).send()\n\n }\nevents.js:292\napi_1 | throw er; // Unhandled 'error' event\napi_1 | ^\napi_1 |\napi_1 | Error: Could not update\napi_1 | at /usr/app/router/functions/OrderFunc.js:36:19\napi_1 | at /usr/app/node_modules/mongoose/lib/model.js:4849:16\napi_1 | at /usr/app/node_modules/mongoose/lib/model.js:4849:16\napi_1 | at /usr/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:24:16\napi_1 | at /usr/app/node_modules/mongoose/lib/model.js:4872:21\napi_1 | at /usr/app/node_modules/mongoose/lib/query.js:4379:11\napi_1 | at /usr/app/node_modules/kareem/index.js:135:16\napi_1 | at processTicksAndRejections (internal/process/task_queues.js:79:11)\napi_1 | Emitted 'error' event on Function instance at:\n",
"text": "hey Sean Campbell, I appreciate your support. but I need to have the commit all or nothing feature of transactions. I tried to throw an error from a callback but that doesn’t seem to work.///////////////////////////////////\nthis time it returns the following error\n///////////////////////////\nwith the previous await syntax I dont think that I have an option to check rawResult before session.commitTransaction(). Can you please be kind enough to assist me further?",
"username": "shorif_shakil"
},
{
"code": "const updatedItem = await ProdIns.findOneAndUpdate({\n...\n})\n\nif (!updateItem) {\nthrow new Error('Missing item')\n}\n\nfindOneAndUpdate",
"text": "Your previous syntax did not need to be changed. It was good. just add in thisI would just suggest to try and avoid making calls to the database in a loop. So I would recommend that you do a bulk write and if any of those fails to find then throw the error to abort the transaction. Plus I would try to see where you can make changes so that you are not calling findOneAndUpdate twice on the same collection. Does this seem the most efficient to you?",
"username": "Natac13"
},
{
"code": "",
"text": "Hey @Natac13\nThis creates the first issue transaction commits despite there is no doc matching to the condition passed to findOneAndUpdate. My intention is to iterate each document in cartItems collection and reduce matching prodIns.qty(if exists) by cartItems.qty and then commit if every reduction in prodIns agains every Item in cartItems is successful. Do you think that bulkwrite is the best way for this? Can you provide any example of using transaction sessions with bulkWrite?",
"username": "shorif_shakil"
},
{
"code": "const db = mongoose.connection\n\n let orderTotal = 0\n\n //try with async iterators\n\n let session = await db.startSession()\n\n // from body\n const cartItems = [\n { \n _id: 'mongodb1234Id',\n SKU: 'UO-1223',\n qty: 3\n },\n { \n _id: 'mongodb5678Id',\n SKU: 'UO-4556',\n qty: 1\n }\n ] \n\n try {\n\n session.startTransaction()\n const bulkWrites = cartItems.map((item) => {\n return {\n updateOne: {\n filter: { _id: item._id, stockQty: { $gt: item.qty } }, // or however you want to find them\n update: { $inc: { stockQty: -item.qty } },\n }\n }\n })\n\n const result = await ProdIns.bulkWrite(bulkWrites, { session })\n\n if (result.nMatched !== cartItems.length) {\n session.abortTransaction()\n }\n\n // else continue\n\n } catch (err) {\n\n session.abortTransaction()\n }\n\n",
"text": "A very quick idea could bePlease forgive me as I do not know how your data is structured. And I still do not understand why you have to make 2 calls to update the ProdIns collection right after each other.",
"username": "Natac13"
},
{
"code": "",
"text": "@Natac13 thanks! after spending an week I think finally I got a solution. Appreciate your help very much!",
"username": "shorif_shakil"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Need help to understand transactions a little bit better | 2020-05-18T21:50:06.548Z | Need help to understand transactions a little bit better | 4,978 |
null | [
"java"
] | [
{
"code": "",
"text": "Hello Team,I am facing an issue in my Java Application.While I am saving a normal string to mongo which has a decimal , I am able to persist to DB\nEx: {“price”:“20.00”}But When I am trying to save list of decimals as a string which has precision ‘00’ (5.00) as list of strings, only one zero in precision can be able persist as below.\nEx:\n{“values”:[\n“6.0”,\n“5.0”\n]\n}Our DB Server version is 3.6.+Trying with spring-data-mongodb version 1.8.+Please help me out in this what could be the issue.Thanks,\nAjay Prasad.",
"username": "Ajay_Prasad_Goli"
},
{
"code": "values",
"text": "Hello Ajay_Prasad_Goli ,It will help if you post the code which you are using to store the string decimals into the array values. Also, please specify the MongoDB Java driver and Java versions.As such MongoDB v3.4 started supporting the decimal data type - which is more conducive to store decimal numbers (e.g., using as monetary data).",
"username": "Prasad_Saya"
},
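A hedged Java sketch of the decimal data type Prasad mentions, as an alternative to storing numbers as strings (illustrative only: it assumes MongoDB 3.4+ with a driver that supports Decimal128, and a MongoCollection<Document> named collection - none of which is confirmed by the original 2.13.x driver setup in this thread):

```java
import java.math.BigDecimal;
import java.util.Arrays;
import org.bson.Document;
import org.bson.types.Decimal128;

Document doc = new Document("_id", "1")
        .append("values", Arrays.asList(
                new Decimal128(new BigDecimal("6.00")),
                new Decimal128(new BigDecimal("5.00"))));

collection.insertOne(doc);  // stored as NumberDecimal("6.00") and NumberDecimal("5.00"), precision preserved
```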
{
"code": "",
"text": "he MongoDB Java driver and Java verHello @Prasad_Saya,I am using Java 8 version & org.mongodb:mongo-java-driver:2.13.3 & org.springframework.data:spring-data-mongodb:1.8.+::Below is the code snippet:List list = new ArrayList();\nlist.add(“6.00”);\nlist.add(“5.00”);\nQuery query = new Query(Criteria.where());\nUpdate update = new Update();\nupdate.set(“values”, list);\nmongoOperations.upsert(query, update, ValuesPojo.class); or\nmongoTemplate.upsert(query, update, ValuesPojo.class);Please let me know if any info required.",
"username": "Ajay_Prasad_Goli"
},
{
"code": "ValuesPojo.java",
"text": "mongoOperations.upsert(query, update, ValuesPojo.class); or\nmongoTemplate.upsert(query, update, ValuesPojo.class);This is required: ValuesPojo.java (with the variable, and get/set methods associated with the array in question).",
"username": "Prasad_Saya"
},
{
"code": "private List<String> values;\n",
"text": "Hello @Prasad_Saya,This is my class lets say it has get/set methods associated with variable.\nmyCollection–> is my collection name .@Document(collection = “myCollection”)\npublic class ValuesPojo {}Everything is available but my data persisting is different. After precision it must be two ‘0’ s like “5.00” instead of “5.0” in my collection object array as mentioned…Please let me know if any info required.",
"username": "Ajay_Prasad_Goli"
},
{
"code": "MongoTemplateList list = Arrays.asList(\"100.0\",\"6.00\", \"5.00\");{ \"_id\" : \"1\", \"values\" : [ \"100.0\", \"6.00\", \"5.00\" ] }",
"text": "Hi @Ajay_Prasad_Goli,I just tried your code with available setup of Spring Data MongoDB 2.2.7, MongoDB v4.2.3, and Java SE 8.I used MongoTemplate API for the update / upsert operation. With this input list data:\nList list = Arrays.asList(\"100.0\",\"6.00\", \"5.00\");I get the updated document as expected: { \"_id\" : \"1\", \"values\" : [ \"100.0\", \"6.00\", \"5.00\" ] }I suspect it might be an issue with the older versions of the APIs or the database. Cannot conclude what the issue is.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks @Prasad_Saya\nWill check on the higher versions and try for this possibility.",
"username": "Ajay_Prasad_Goli"
}
] | Issue with decimal in string while persisting to DB | 2020-05-19T11:01:39.677Z | Issue with decimal in string while persisting to DB | 3,851 |
null | [] | [
{
"code": "",
"text": "I’m doing the Mongo M001 Course (Basics) and I use macOS, and I couldn’t download the enterprise server, instead I downloaded the community server. Everything was fine, I installed mongo shell guided by this Installing MongoDB on Mac (Catalina and non-Catalina) | Zell Liew , then I followed every step of the mongo course to connect the compass, I didn’t receive any warning or error, “PRIMARY>” showed as in the tutorial. But when I typed “show collections” didn’t show anything, it was supposed to show “data”. But then I typed “use video” “show collections” it did show the movies collection. I don’t know why it didn’t show the data collection but did for the videos. I don’t know if it is due to the community server, or so. Please help!!!",
"username": "Monica_Nava_Palomo"
},
{
"code": "mongo \"mongodb://cluster0-shard-00-00-jxeqq.mongodb.net:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?replicaSet=Cluster0-shard-0\" --authenticationDatabase admin --ssl --username m001-student --password m001-mongodb-basics",
"text": " Hi @Monica_Nava_Palomo and welcome to the community!It depends how you have connected to the mongodb shell. I assume that you copied the connection string from the class handouts.mongo \"mongodb://cluster0-shard-00-00-jxeqq.mongodb.net:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?replicaSet=Cluster0-shard-0\" --authenticationDatabase admin --ssl --username m001-student --password m001-mongodb-basicsThis will connect to the database “test”, there are no collections. just follow the course video at 1:40 min to 3:40 and you will see that you should need to connect to the 100YWeatherŚmall database, there you will find the “data” collection.The university discussions are not yet part of this forum, as subscriber of the course you can access the University forum discussions hereJust for completeness, the attached link show you the differences between the enterprise vs. community versionHope that helps\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi Michael I have changed the db - to 100YWeatherSmall - while I get the command line prompts working, I can not see the database when I move to the next sectionmongo “mongodb://cluster0-shard-00-00-jxeqq.mongodb.net:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/100YWeatherSmall?replicaSet=Cluster0-shard-0” --authenticationDatabase admin --ssl --username m001-student --password m001-mongodb-basics",
"username": "Sumitra_Sastri"
},
{
"code": "",
"text": " Hi @Sumitra_Sastri a further welcome to the community, happy to see you working on m001.I assume that you work on lectures in m001, chapter 2.\nI am not sure what the problem is, you get the command line working and you can connect to 100YWeathersmall.data? Then you move to the next section and you do not see a db which you expect to see?\nWhen you can tell me exactly which section you are working at, I’ll have a look.May I just put a kind reminder here? As mentioned in the previous posting the university discussions are not yet part of this forum, as subscriber of the course you can access the University forum discussions here - This is the place I’d suggest to ask future questions.As of your current questions, just let me know where you are and what you expect to see, but you don’t do – I’ll see what I can do.Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi Michael, I typed the connection you wrote. And this happened: \nCaptura de Pantalla 2020-05-18 a la(s) 18.09.191018×535 138 KB\n\nStill doesn’t show the data collection, what should I do?",
"username": "Monica_Nava_Palomo"
},
{
"code": "mongo \"mongodb://cluster0-shard-00-00-jxeqq.mongodb.net:27017,clu0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/100YWeatherSmall?replicaSet=Cluster0-shard-0\" --authenticationDatabase admin --ssl --username m001-student --password m001-mongodb-basics\nshow dbsMongoDB Enterprise Cluster0-shard-0:PRIMARY> show dbs\n100YWeatherSmall 0.128GB\nadmin 0.000GB\naggregations 0.067GB\ncitibike 0.367GB\ncity 0.002GB\nconfig 0.016GB\ncoursera-agg 0.083GB\nlocal 0.940GB\nmflix 0.449GB\nresults 0.000GB\nships 0.001GB\nvideo 0.513GB\n100YWeatherSmallMongoDB Enterprise Cluster0-shard-0:PRIMARY> use 100YWeatherSmall\nswitched to db 100YWeatherSmall\nshow collectionsdataMongoDB Enterprise Cluster0-shard-0:PRIMARY> show collections\ndata\nsystem.profile\nuse datadatashow collections",
"text": "Hi @Monica_Nava_Palomo, I just signed up for the course to see what’s going on and was able to connect to the course servers with the following command:From there I typed in show dbs and got the following results:Here we can see a list of the the databases available to us, with the one in question: 100YWeatherSmall.Next you will want to switch into the context of the databaseYou should get a confirmation message:If you use show collections now, you should see your data collction:In the screen shot you provided above, you did use data which means you switched into the context of the data database which does not exist and that is why show collections does not return anything.As @michael_hoeller has mentioned, asking questions on the MongoDB University forums for the M001 course would be the correct place to ask as that’s where the course admins are looking for questions. You might get answers here, but they will most likely be delayed or not as beneficial unless another student taking the course happens to be around.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "It worked! thank you! and sorry",
"username": "Monica_Nava_Palomo"
},
{
"code": "",
"text": "Hi @Doug_Duncan thank you, good to be in different timezones - so a quick answer is guaranteed \nCherrs, Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Thanks for getting back to me Michael. I resolved the challenge as the problem was Apple’s privacy and security settings which by default block all apps unless you override the security settings. I have added Compass now to the list of apps that work on overriding Apple’s default settings.",
"username": "Sumitra_Sastri"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does it matter if I downloaded the community server instead of enterprise server for MongoDB University Basics course? | 2020-05-18T06:15:13.676Z | Does it matter if I downloaded the community server instead of enterprise server for MongoDB University Basics course? | 2,567 |
[
"vscode"
] | [
{
"code": "",
"text": "Yesterday, we announced MongoDB for VS Code, an extension that allows you to quickly connect to MongoDB and MongoDB Atlas and work with your data to build applications right inside your code editor. With MongoDB for VS Code you can:MongoDB for VS Code is open-source under the Apache 2 license. You can install it directly from the VS Code marketplace.You can read the full announcement on the MongoDB blog.Try it today and let us know what you think!",
"username": "Massimiliano_Marcon"
},
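A hedged sketch of what a MongoDB Playground file in the extension looks like (the database, collection and field names below are made up for illustration; they are not from the announcement):

```js
// Select the database to run the playground against.
use('sample_shop');

// Insert a document, then query it back - results appear in a VS Code output panel.
db.orders.insertOne({ item: 'notebook', qty: 5, created: new Date() });

db.orders.find({ qty: { $gte: 1 } }).sort({ created: -1 });
```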
{
"code": "",
"text": "Great news that you have delivered on the promise from last years Local event.What is your roadmap for the extension? I can see on the Blog plost some suggestions already.",
"username": "NeilM"
},
{
"code": "",
"text": "In the near future, we will likely be working on improvements to what we just launched and probably start looking into document editing.If you have suggestions or ideas you can submit them here: MongoDB for VS Code: Top (51 ideas) – MongoDB Feedback Engine",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "This is awesome! Thanks for this!",
"username": "Juliette_Tworsey"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Introducing MongoDB for VS Code | 2020-05-15T07:21:34.961Z | Introducing MongoDB for VS Code | 3,949 |
|
null | [] | [
{
"code": "",
"text": "Hi,\nMy experience with MongoDB goes back to 2011. I am an author of several Pluralsight courses on MongoDB and frequently give tech-talks on the subject. Been involved with the wonderful Mongo community and Mongo Masters for a while now. Besides MongoDB my role as software architect involves delivering software projects that work, and Mongo has been a key component of that.",
"username": "Nuri_Halperin"
},
{
"code": "",
"text": " Hi @Nuri_Halperin! Glad to see you around these parts.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Welcome @Nuri_Halperin! Glad to see you here ",
"username": "Jamie"
},
{
"code": "",
"text": "Hi Doug! Hope you are doing well in these crazy times!",
"username": "Nuri_Halperin"
}
] | 🌟 Hello from Nuri | 2020-01-29T23:26:44.483Z | :star2: Hello from Nuri | 2,103 |
null | [
"c-driver"
] | [
{
"code": "No package mongo-c-driver available.\nError: Nothing to do\nmongo-c-driver-1.16.2-2.fc32.src.rpm[henry@localhost mongo]$ rpm -ivh mongo-c-driver-1.16.2-2.fc32.src.rpm \nUpdating / installing...\n 1:mongo-c-driver-1.16.2-2.fc32 ################################# [100%]\n-- Looking for include file unistd.h\n-- Looking for include file unistd.h - found\n-- Looking for include file stdarg.h\n-- Looking for include file stdarg.h - found\n-- Searching for compression library zstd\n-- Found PkgConfig: /usr/bin/pkg-config (found version \"0.27.1\") \n-- Checking for module 'libzstd'\n-- No package 'libzstd' found\n-- Not found\n-- Found OpenSSL: /usr/local/ssl/lib/libcrypto.a (found version \"1.0.2p\") \n-- Looking for ASN1_STRING_get0_data in /usr/local/ssl/lib/libcrypto.a\n-- Looking for ASN1_STRING_get0_data in /usr/local/ssl/lib/libcrypto.a - not found\n-- Searching for sasl/sasl.h\n-- Not found (specify -DCMAKE_INCLUDE_PATH=/path/to/sasl/include for SASL support)\n-- Searching for libsasl2\n-- Not found (specify -DCMAKE_LIBRARY_PATH=/path/to/sasl/lib for SASL support)\n-- Check size of socklen_t\n-- Check size of socklen_t - done\n-- Looking for res_nsearch\n-- Looking for res_nsearch - found\n-- Looking for res_ndestroy\n-- Looking for res_ndestroy - not found\n-- Looking for res_nclose\n-- Looking for res_nclose - found\n-- Looking for sched_getcpu\n-- Looking for sched_getcpu - not found\n-- Detected parameters: accept (int, struct sockaddr *, socklen_t *)\n-- Searching for compression library header snappy-c.h\n-- Not found (specify -DCMAKE_INCLUDE_PATH=/path/to/snappy/include for Snappy compression)\nSearching for libmongocrypt\n-- libmongocrypt not found. Configuring without Client-Side Field Level Encryption support.\n-- Performing Test MONGOC_HAVE_SS_FAMILY\n-- Performing Test MONGOC_HAVE_SS_FAMILY - Success\n-- Compiling against OpenSSL\n-- SASL disabled\n-- Configuring done\n-- Generating done\n-- Build files have been written to: /home/henry/packageRoot/mongo/mongo-c-driver-1.16.2/cmake-build\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(bn_sqrt.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(bn_exp2.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(bn_gf2m.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(ec_print.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(rsa_gen.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(rsa_saos.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(rsa_pss.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(dsa_gen.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(dh_gen.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a 
shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(dso_dlfcn.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(a_set.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(bio_ndef.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(asn_mime.o): relocation R_X86_64_32 against `.text' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(bio_b64.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(bio_asn1.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\n/usr/bin/ld: final link failed: Nonrepresentable section on output\ncollect2: error: ld returned 1 exit status\nmake[2]: *** [src/libmongoc/libmongoc-1.0.so.0.0.0] Error 1\nmake[1]: *** [src/libmongoc/CMakeFiles/mongoc_shared.dir/all] Error 2\nmake: *** [all] Error 2\n",
"text": "CentOS7,\ncmake version 3.17.2\ngcc version 7.5.0 (GCC)I follow this page to install my mongo-c-driver.But no way can success.link address:\nhttp://mongoc.org/libmongoc/current/installing.htmlI run this command:yum install mongo-c-driverresult is:install failed…I download rpm mongo-c-driver-1.16.2-2.fc32.src.rpmwhen I run :BUT, I cant find any mongoc.h in any where…find / -xdev -name “mongoc.h”INSTALL FAILED…again…I download release source code.\nhttps://github.com/mongodb/mongo-c-driver/releases/download/1.16.2/mongo-c-driver-1.16.2.tar.gzmongo-c-driver-1.16.2.tar.gzI use cmake to build:cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF -DCMAKE_BUILD_TYPE=Release -fPIC …I the end , it print done. I think i build success.\nThen I run:make2 Errors accour. The content is below:I search a lot . They told to add -fPIC to the CMakeCache.txt.\nbut I realy add .Then run make, but also failed…(I am chinese, my English is not good…)Please Give me some advices about how to install mongo-c-driver on CentOS7…\nThanks VERY MUCH !",
"username": "Henry_He"
},
{
"code": "libmongoc",
"text": "Hi @Henry_He, have you followed the instructions in the Installing libmongoc with a Package Manger documentation? It looks like CentOS 7 should have the package as long as you have EPEL repo enabled.",
"username": "Doug_Duncan"
},
{
"code": "mongo-c-driver-1.16.2-2.fc32.src.rpm",
"text": "Oh, I dont’t know what package manager you mean.but i download mongo-c-driver-1.16.2-2.fc32.src.rpm,\nand use rpm cmd to install it … but not effect.",
"username": "Henry_He"
},
{
"code": "",
"text": "Thank you very much…\nI don’t install EPEL( the package manager).I do these:\nsudo yum -y install epel-release\nsudo yum -y install mongo-c-driverthis time, It can download success.\nAnd it print :Installed:\nmongo-c-driver.x86_64 0:1.3.6-1.el7Dependency Installed:\nlibbson.x86_64 0:1.3.5-6.el7 mongo-c-driver-libs.x86_64 0:1.3.6-1.el7Complete!",
"username": "Henry_He"
},
{
"code": "",
"text": "After install mongo-c-driver success.\nI find that , the version is too old.\nAnd I still can’t find mongoc.hAnother question.\nDo you konw why?\nWhen I build mongo-c-driver source project, why can’t build success…When I run\nrpm -ivh mongo-c-driver-1.16.2-2.fc32.src.rpm\nwhy have no effect.",
"username": "Henry_He"
},
{
"code": "",
"text": "I try a lot , and finally I successed, Even though I don’t know how did it success.If necessary, you should run this first:sudo yum install perl-core perl pcre-devel zlib-devel cyrus-sasl-develSTEP1:\nI rebuild openssl project with -fPIC\nlike this:./config no-shared zlib-dynamic -fPICand run cmd:make testall tests passed.STEP2:\nI checked my openssl PATH, and find that i did not add ‘/usr/local/ssl/bin’ to path\nThis is my config in ~/.bash_profile . you can look.PATH=$PATH:$HOME/.local/bin:$HOME/bin:/usr/local/bin:/usr/local/ssl/bin:/usr/local/python3/bin:/usr/local/include/libmongoc-1.0:/usr/local/include/libbson-1.0\nLD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib:/usr/local/lib64:/usr/lib64\nCC=/usr/local/bin/gcc\nCXX=/usr/local/bin/g++\nOPENSSL_ROOT_DIR=/usr/local/ssl\nOPENSSL_CRYPTO_LIBRARY=/usr/local/ssl/lib\nOPENSSL_INCLUDE_DIR=/usr/local/ssl/includeexport PATH\nexport LD_LIBRARY_PATH\nexport CC\nexport CXX\nexport OPENSSL_ROOT_DIR\nexport OPENSSL_CRYPTO_LIBRARY\nexport OPENSSL_INCLUDE_DIRSTEP3:\nrun command :source ~/.bash_profileSTEP4:\nrebuild mongo-c-driver project.like this:cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF -DCMAKE_BUILD_TYPE=Release …sudo makesudo make installFinally, I successed !@Doug_Duncan Thank you very much.",
"username": "Henry_He"
},
{
"code": "-devel-libsLD_LIBRARY_PATHmakesudomake; sudo make install",
"text": "@Henry_He A few things:The installation instructions page contains a link to the Fedora package overview, which indicates that EPEL7 repository (the repository which you would be using for CentOS) contains C driver version 1.3.6, which is probably not sufficient for your use case; packages in the EPEL repository are generally only updated for security issuesYou might consider using the ELN repository (also provided by Fedora), which tries to provide the latest versions of packages; though, I am not certain how it works with CentOS 7When you install from the package manager, you need to ensure that you have the -devel package (includes the compilation headers and other components required for development against the library) in addition to the -libs package (which contains only the runtime components).(not specific to building the C driver) It is difficult to understand why you are building openssl from source as it is included as a core package in CentOS and based on the information you provided, you are not building it in any special way; this also seems to have contributed to your difficulty in building the C driver(not specific to building the C driver), if your setup requires you to populate LD_LIBRARY_PATH with standard system libraries (which all three that you have listed are), then something else is likely wrong with your environmentAlso, as a general best practice, running make with sudo is not a good thing; the sequence is normally make; sudo make install as the root privileges are only needed for copying into a system directory for installation",
"username": "Roberto_Sanchez"
},
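A hedged sketch of the package-manager route described in the post above (package names are as published in EPEL at the time of writing; the driver version available there may be much older than 1.16.x):

```sh
sudo yum -y install epel-release
sudo yum -y install mongo-c-driver-devel   # headers (mongoc.h) and link libraries for development
pkg-config --modversion libmongoc-1.0      # check which driver version actually got installed
```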
{
"code": "LD_LIBRARY_PATH",
"text": "@Roberto_Sanchez Thank you very much!About the LD_LIBRARY_PATH , I hope there is no wrong with my environment.God bless me.Thank you again.",
"username": "Henry_He"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | CentOS7 install Mongo-C-Driver | 2020-05-19T02:06:21.509Z | CentOS7 install Mongo-C-Driver | 4,862 |
null | [] | [
{
"code": "{\n \"BlogId\" : 1,\n \"Items\" : [\n {\n \"Cat\" : 1,\n \"Up\" : 555,\n }, \n {\n \"Cat\" : 2,\n \"Up\" : 666,\n }\n ]\n}\ndb.exp.update({ BlogId: 1 }, \n[\n {\n $set: {\n \"Items.Cat\": {\n $cond: [ \n { \"Items.Cat\": 1 } , \n { $inc: { \"Items.Up\": 1 } }, \n { $push: { \"Items\": { Cat: 1, Up: 555 }} } \n ]\n }\n }\n }\n])\n",
"text": "I want push new object to nested array if object with field ‘Cat’=1 not exists. Or if object with ‘Cat’=1 field exists then increment Up field in this object.I cant write right syntax, help plzDocument:Query (with wrong syntax):",
"username": "alexov_inbox"
},
{
"code": "db.exp.update( { BlogId: 1 },\n [ \n { \n $set: { \n Items: {\n $reduce: {\n input: { $ifNull: [ \"$Items\", [] ] }, \n initialValue: { items: [], update: false },\n in: {\n $cond: [ { $eq: [ \"$$this.Cat\", INPUT_DOC.Cat ] },\n { \n items: { \n $concatArrays: [\n \"$$value.items\",\n [ { Cat: \"$$this.Cat\", Up: { $add: [ \"$$this.Up\", 1 ] } } ],\n ] \n }, \n update: true\n },\n { \n items: { \n $concatArrays: [ \"$$value.items\", [ \"$$this\" ] ] \n }, \n update: \"$$value.update\" \n }\n ]\n }\n }\n }\n }\n },\n { \n $set: { \n Items: { \n $cond: [ { $eq: [ \"$Items.update\", false ] },\n { $concatArrays: [ \"$Items.items\", [ INPUT_DOC ] ] },\n { $concatArrays: [ \"$Items.items\", [] ] }\n ] \n }\n }\n }\n ] \n)\nCatUp1CatINPUT_DOCItemsItemsINPUT_DOCINPUT_DOC = { Cat: 1, Up: 555 } INPUT_DOC = { Cat: 3, Up: 888 }",
"text": "The update query:The update does the following:Try with the object values: INPUT_DOC = { Cat: 1, Up: 555 }, or INPUT_DOC = { Cat: 3, Up: 888 }",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "WOW thx.\nBut why so hardy for easzy operation. amazing… with $reduce, $$this, $concatArrays, $$value \nIntresting can we optimize for more light code query",
"username": "alexov_inbox"
},
{
"code": "$set$inc$push",
"text": "WOW thx.You are welcome The operation uses an aggregation instead of an update - and there is some logic involved (find-and-modify-or-insert), hence all the code. And, this doesn’t allow the update operators like, $set, $inc, $push, etc.The $set used above is not an operator - it is an aggregation pipeline stage. The operators used within aggregation are different, are easy to use and can derive complex programming logic using objects, dates, arrays, conditions, strings, etc - which is not possible with direct update.Without using aggregation, we can end up using two operations (and lighter code) instead of one operation. As such this update operation is atomic.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.2 update pipeline: “Push or Update” in nested array | 2020-05-18T19:22:00.455Z | MongoDB 4.2 update pipeline: “Push or Update” in nested array | 7,302 |
null | [] | [
{
"code": "",
"text": "Hello,can anyone answer my question? I want to use the mongodb with Microsoft Failover Cluster. Just like we can use MS Sql and other services on MS Failover Cluster. can we implement it ? if yes then what are the steps?Thanks",
"username": "Imran_Ali"
},
{
"code": "",
"text": "Welcome to the community @Imran_Ali!I’m not aware of any specific configuration or integration with Microsoft Failover Cluster. However, MongoDB is designed as a distributed database and has built-in failover support via replica sets.A properly configured replica set provides data redundancy and fault tolerance. MongoDB clients/drivers monitor replica set configuration and state changes, and can automatically recover from transient events like failover to a new primary or addition/removal of replica set members.For more information, start with Replication and Replica Set Deployment Architectures in the MongoDB manual. For further learning, check out the free online courses at MongoDB University and the MongoDB for DBAs Learning Path.Regards,\nStennie",
"username": "Stennie_X"
}
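A minimal sketch of the built-in failover alternative described above: a three-member replica set initiated from the mongo shell (hostnames and the replica set name are placeholders):

```js
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1.example.net:27017" },
    { _id: 1, host: "mongo2.example.net:27017" },
    { _id: 2, host: "mongo3.example.net:27017" }
  ]
})

rs.status()   // confirm a PRIMARY was elected; drivers fail over to a new primary automatically
```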
] | MongoDB with Microsoft Failover Cluster | 2020-05-07T17:42:42.265Z | MongoDB with Microsoft Failover Cluster | 2,090 |
null | [
"sharding"
] | [
{
"code": "",
"text": "Hi Everyone,Is it possible to have multiple documents with same _id field in a sharded mongodb environment with zones.Consider below configuration :Shard1 - ZoneA\nShard2 - ZoneA\nShard3 - ZoneB\nShard4 - ZoneBCan i insert 2 docs with same _id into ZoneA / ZoneB ?",
"username": "nithin_reddy"
},
{
"code": "",
"text": "_id is primary and autogenerated. You can create a custom column abc_id and zone_id and then shared it.",
"username": "Dominic_Kumar"
},
{
"code": "_id_id_id_id_id_id_id_id_id_id{x: 1}_id_id1_id1_id_id",
"text": "Welcome to the community @nithin_reddy!Can i insert 2 docs with same _id into ZoneA / ZoneB ?The technical possibility depends on your shard key index rather than zoning, but this is a scenario you definitely want to avoid.The _id field uniquely identifies a document within a given collection on a shard. If documents have the same _id values on different shards, attempted migration of those documents to the same shard will result in a duplicate key exception. Non-unique _id values are also likely to lead to logic errors for developers or tools assuming _id is a unique identifier.If your _id values are using default ObjectIDs, the chance of collision should be extremely low. However, if you are setting custom _id values and sharding without using _id as the shard key or a prefix of the shard key, you need to guard against the possibility of creating duplicate _id values.This behaviour is described in the documentation on Sharded Clusters and Unique Indexes:If the _id field is not the shard key or the prefix of the shard key, _id index only enforces the uniqueness constraint per shard and not across shards.For example, consider a sharded collection (with shard key {x: 1} ) that spans two shards A and B. Because the _id key is not part of the shard key, the collection could have a document with _id value 1 in shard A and another document with _id value 1 in shard B.If the _id field is not the shard key nor the prefix of the shard key, MongoDB expects applications to enforce the uniqueness of the _id values across the shards.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is it possible to have duplicate _id in a shard zone? | 2020-05-11T06:34:41.417Z | Is it possible to have duplicate _id in a shard zone? | 6,723 |
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "Hi, first I’d like to congratulate the MongoDB team on the student pack benefits, they are really helpful and I have been taking the opportunity to dive deep into new concepts and tools.Regarding the certification exam, it is stated, on the student pack page, thatDuring this COVID-19 time, we are here to help you. Complete one of our learning paths and enrich your resume with our free certification!I have two questions regarding this:Thank you in advance!",
"username": "jpdamas"
},
{
"code": "",
"text": "\nHello @jpdamas well come to the forum!There was already a thread touching the second of your questions.However reading the pages, and that what you quoted, leaves the answer open when it will be ended. I’d assume that no one really knows when the COVID-19 pandemic will be ended. There will be an official statement on this from the WHO, as they say this will surely not in the close future. On the other hand the local recovery processes will be very different and at very different paces all around the world.I’d take the chance given, take the learning path and go for the certification. This will not take too much time.@Lieke_Boon, may I address this to you? So that this will not end in assumptions.Cheers, Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi @jpdamasThank you for reaching out to us and welcome to the forum!The discount code can be used once.Thank you @michael_hoeller for sharing more information We don’t know when the COVID-19 pandemic will end. It might change in the future, but at this moment we offer the free certification temporarily now that most schools & universities are closed. As we expect that this situation will not change anytime soon, we have no plans to change this offer in the near future.Good luck and thank you for using MongoDB ",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Certification exam conditions | 2020-05-19T02:05:49.901Z | Certification exam conditions | 5,937 |
null | [
"python"
] | [
{
"code": "def readRawPlotData():\n\n Methods = Collection.distinct(\"header.Method\")\n\n plotMethods = [method for method in Methods if method in constants.getPlotColumnsByMethod(method, keys=True)]\n\n rawPlotData = []\n\n for method in plotMethods:\n\n project = {\"_id\":0,\"header\":1}\n\n for plotColumn in constants.getPlotColumnsByMethod(method):\n\n project[\"data.\"+plotColumn] = 1\n\n methodData = Collection.find({\"header.Method\":method},project)\n\n for data in methodData:\n\n rawPlotData.append(dumps(data))\n\n return rawPlotData\n",
"text": "Hi, I have a function written with pymongo that will access a collection and retrieve all documents that has a specific field, and then I use a specific projection for that field.\nNow I am doing that in a loop for all specific pairs of fields and projections.\nWhat I’m trying to find out is if there is a way to “string queries” like this into just one call to the collection?",
"username": "Fredrik_Niva"
},
{
"code": "constants.GetPlotColumnsByMethod()find()methoddb.collection.find(\n {\"header.Method\":{\"$in\": [\"a\", \"b\", \"c\"]}}, \n {\"_id\":0, \"header\":1, \"data.a\":1, \"data.b\":1, \"data.c\":1}\n)\nconstants.GetPlotColumnsByMethod()methodmethod",
"text": "Hi @Fredrik_Niva, welcome!What I’m trying to find out is if there is a way to “string queries” like this into just one call to the collection?I’d assume that the function constants.GetPlotColumnsByMethod() returns an array of desired methods’ value.\nIf that’s the case, you should be able to utilise $in operator in find() to avoid querying the database for each method. For example the query should be:If you still have further question, it would be helpful to provide the relationship between constants.GetPlotColumnsByMethod() with method. For example, if it’s a fixed constant per method, perhaps it’d be useful to include those value into the document in the database.Regards,\nWan",
"username": "wan"
},
{
"code": "",
"text": "Hi again,\nSorry for not being clear enough, I will try to clarify!\nI’ve attached a snippet from my DB to show the structure.\nexample517×591 10.1 KB\n\nSo the constants.GetPlotColumnsByMethod() gives an array of column names.The documents might contain column “A”,“B”,“C”,“D”,“E”, but say Method 1 wants to retrieve only columns “A”,“B”,“C”, and Method wants to retrieve “A”,“C”,“D”, Method 3, “A”,“D”,“E” and so forth.In short I want to pair a specific projection with each method.I will have to note also that I’m both new to MongoDB and relatively so also to python \nYour help is much appreciated.\nSincerely,\nFredrik",
"username": "Fredrik_Niva"
},
{
"code": "dataplots{\n \"header\": {\"Method\": \"cpt\", \"ID\":\"185440-CPT\", \"Group\":0}, \n \"plots\": [\"QC\", \"NA\"], \n \"data\": [\n {\"index\": 0, \"QC\":10, \"FS\":2, \"TA\":3, \"NA\":7, \"NB\":132, \"NC\":245},\n {\"index\": 1, \"QC\":11, \"FS\":22, \"TA\":33, \"NA\":77, \"NB\":232, \"NC\":900},\n ]\n}\ndatadb.collection.aggregate([\n {\"$match\":{\"header.Method\": {\"$in\": [\"cpt\", \"foobar\"]}}}, \n {\"$project\":{\n \"_id\": 0, \n \"header\": 1, \n \"data\": {\n \"$map\":{\n \"input\": \"$data\",\n \"as\":\"x\",\n \"in\": {\n \"$arrayToObject\": {\n \"$filter\":{\n \"input\": {\"$objectToArray\":\"$$x\"}, \n \"as\":\"y\", \n \"cond\": {\"$in\": [\"$$y.k\", \"$plots\"]}\n }\n }\n }\n }\n }\n }}\n]) \n",
"text": "Hi @Fredrik_Niva,Sorry for not being clear enough, I will try to clarify!Not a problem, thanks for providing an example document. It wasn’t obvious before that data is an array of documents.In short I want to pair a specific projection with each method.If you’re able to store the columns per method on the document, this will save you a round trip back to the client just to check which method needs which columns.\nFor example, if you add a field plots to contain the pairing for method/projection as below example:Then you can utilise MongoDB Aggregation Pipeline to project only data fields that matches in `plots. For example:I will have to note also that I’m both new to MongoDB and relatively so also to python If you would like to learn more about MongoDB and Python, I’d recommend to enrol in a free online course on MongoDB University https://university.mongodb.com , specifically M220P: MongoDB for Python DevelopersRegards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thank you so much!\nI actually took that course but seems I didn’t quite make the most out of it (even though that pipeline is a bit more than the course covered) \nBut seeing your solution it mostly makes sense, didn’t think of adding that array to the document myself!\nThanks again for taking the time.\nSincerely,\nFredrik",
"username": "Fredrik_Niva"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Query for different documents with different projections in pymongo | 2020-05-17T21:11:10.244Z | Query for different documents with different projections in pymongo | 3,956 |
null | [
"aggregation",
"performance"
] | [
{
"code": "",
"text": "Hi!\nI’m working on optimization slow aggregation pipelines. I use explain() method to show more details about what MongoDB do with this query, but it doesn’t give as many information as MsSQL.\nIs there any other tools besides explain()? Something that can tell me which part of execution of pipeline takes the most of time?",
"username": "Mateusz_Krawczyk"
},
{
"code": "",
"text": "Hi @Mateusz_Krawczyk,When using MongoDB Compass, you have access to the Visual Explain Plan which can help you parse the explain results much more easily.",
"username": "alexbevi"
},
{
"code": "explaindb.collection.explain(\"...\").aggregate( [ ... ] )",
"text": "Hello Mateusz_Krawczyk,The explain has different modes:Each mode returns a query plan with different verbosity / details. You may want to try out and see which one suits your need. The default mode is “queryPlanner”.The way to use is: db.collection.explain(\"...\").aggregate( [ ... ] )Reference: Information on Aggregation Optimization",
"username": "Prasad_Saya"
}
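For example, a minimal sketch (it assumes a collection named orders and a two-stage pipeline; neither comes from the thread above): running explain with the "executionStats" verbosity returns, on recent server versions, an executionTimeMillisEstimate for each stage, which is usually the quickest way to see which part of the pipeline takes the most time.

db.orders.explain("executionStats").aggregate([
  { $match: { status: "A" } },                                  // stage 1: filter
  { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }   // stage 2: group
])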
] | Aggregation pipeline optimization tools | 2020-05-18T17:22:46.185Z | Aggregation pipeline optimization tools | 1,836 |
null | [] | [
{
"code": "",
"text": "https://university.mongodb.com/mercury/M220P/2020_March_31/chapter/Chapter_2_User-Facing_Backend/lesson/5aba954a31b11b851a7b87fc/problemI am not getting the ticket generated for the lesson I solved the lab for.",
"username": "Suhas_Sonawane"
},
{
"code": "",
"text": "You will have more luck asking your questions on the M220P forum at https://www.mongodb.com/community/forums/c/M220P/",
"username": "steevej"
},
{
"code": "",
"text": "It looks like you have not activated your mflix virtual env.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you I resolved my issue",
"username": "Suhas_Sonawane"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Lab for MongoDB Python course | 2020-05-18T19:21:55.530Z | Lab for MongoDB Python course | 2,030 |
null | [
"aggregation"
] | [
{
"code": "db.test.aggregate([\n\t{ $lookup : {\n\t\tfrom: \"grl\", \n\t\tlocalField: \"subGroup\", \n\t\tlocalField: “date\",\t\n\t localFiled: \"currency\"\n\t\tforeignField: “Group\", \n\t\tforeignField: “s_date\", \n foreignFiled: \"currency\" \n\t\tas: \"data\" \t \n\t} }\n])\n",
"text": "Hi friends,I am doing aggregation on $lookup on three collections.\nThe question is does mongo supports more than 1 field(more than one equality).\nIn my case I have three localFields and three foreignFields?\nIf it supports, how to achieve more than on equality condition using $lookup.How to change my $lookup aggregation so that i have get combined records from all three collections",
"username": "Murali_Muppireddy"
},
{
"code": "$lookup$lookup",
"text": "Hi @Murali_Muppireddy,In my case I have three localFields and three foreignFields?Take a look at the Specify Multiple Join Conditions with $lookup section of the documentation.How to change my $lookup aggregation so that i have get combined records from all three collectionsIs this a different request where you want to join documents from three different collections, or is that a typo? To do a three collection join, you would need to perform multiple $lookup stages.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @Doug_Duncan, this is part of request… where I have to do join documents from three different collections, not sure how to do multiple $lookup stages…plz point me if there is some reference matching to my scenario.\nIn all three collections there are 3 common(same ) fields, based on equality matching on these three columns, I need to merger all three collections to one.three collections, those i want merge into one based on1st three fields from each collection.\n \ncol1796×242 5.72 KB\n ",
"username": "Murali_Muppireddy"
},
{
"code": "$lookupdb.test650.aggregate([\n {\n \"$lookup\": {\n \"from\": \"test750\",\n \"localField\": \"Z\",\n \"foreignField\": \"Z\",\n \"as\": \"750joined\"\n }\n },\n {\n \"$lookup\": {\n \"from\": \"test850\", \n \"localField\": \"X\", \n \"foreignField\": \"X\", \n \"as\": \"850joined\"\n }\n }\n])\ntest650test750Ztest650test750750joinedtest850Xtest850test850XY_DTZ$lookup",
"text": "Hi @Murali_Muppireddy,where I have to do join documents from three different collections, not sure how to do multiple $lookup stages…plz point me if there is some reference matching to my scenario.For this you just follow the first $lookup stage with a second one.This will will join the test650 and test750 collections together on the shared Z field. The fields for test650 will be in the top level document while the fields for test750 will be nested in an array field called 750joined. These results are then joined to data from test850 on the shared X field. The data from test850 will be nested in an array field called test850.From what I remember in another post, you were joining collections on X, Y_DT and Z. You would use similar methods to the above, but change the $lookup to match how you joined on multiple fields.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @Doug_Duncan, The above query is giving the one setup of results, I need to apply two more lookups on another local fields from the collection test650 and on query fields 750joined and on 850joined. As said each collection has 3 three common fields(local fields) on which I have to do grouping, with the above query i got one setup of results, now I have to work with other two local fields on return array fields? is it possible and how to do it?",
"username": "Murali_Muppireddy"
},
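A sketch of how the multiple join conditions could be written for these collections (untested; it assumes MongoDB 3.6+ and the shared fields X, Y_DT and Z mentioned above): the let/pipeline form of $lookup with $expr allows several equality conditions at once, and the stage is simply repeated for the third collection.

db.test650.aggregate([
  { $lookup: {
      from: "test750",
      let: { x: "$X", y: "$Y_DT", z: "$Z" },
      pipeline: [
        { $match: { $expr: { $and: [
            { $eq: ["$X", "$$x"] },
            { $eq: ["$Y_DT", "$$y"] },
            { $eq: ["$Z", "$$z"] }
        ] } } }
      ],
      as: "750joined"
  } },
  { $lookup: {
      from: "test850",
      let: { x: "$X", y: "$Y_DT", z: "$Z" },
      pipeline: [
        { $match: { $expr: { $and: [
            { $eq: ["$X", "$$x"] },
            { $eq: ["$Y_DT", "$$y"] },
            { $eq: ["$Z", "$$z"] }
        ] } } }
      ],
      as: "850joined"
  } }
])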
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to $lookup using multiple localFields referencing multiple foreignFields | 2020-05-13T20:06:06.545Z | How to $lookup using multiple localFields referencing multiple foreignFields | 39,612 |
null | [
"atlas-functions",
"stitch"
] | [
{
"code": "",
"text": "The stitch functions is slow about 60 seconds or 90 seconds with timeout, anybody can help me?",
"username": "Alailson_Ribeiro"
},
{
"code": "",
"text": "Hi Alailso, Is this still occurring? For troubleshooting, it may be helpful to send me the following –",
"username": "Drew_DiPalma"
}
] | Stitch Functions with slow db connection | 2020-05-06T13:55:33.018Z | Stitch Functions with slow db connection | 2,013 |
null | [
"atlas-functions",
"stitch"
] | [
{
"code": "exports = function(payload) {\n const httpService = context.services.get('http');\n var batch = context.values.get('batch');\n\n// Company info\n let url = `https://sandbox.iexapis.com/stable/stock/market/batch?types=company,peers&symbols=${batch.symbol}&token=${batch.token}`;\n \n console.log(\"Fetching \" + url);\n return httpService.get( {url: url}).then(response => {\n \n let json = JSON.parse(response.body.text());\n json.observationDate = new Date(json.dt * 1000);\n \n var collection = context.services.get('mongodb-atlas').db('stocks').collection('profiles');\n collection.insertOne(json);\n console.log('Inserted document!');\n });\n};\n",
"text": "Hi,I’m using a working API in Stitch to post stock market information from IEX Cloud in MongoDB.Below is a working API, but it’s really designed to get information for one stock symbol at a time. When used to process a batch, it’s not ideal.Does anyone know how I could iterate the function over many stock symbols stored as a value?Currently, the code will accept batch.symbol: “aapl,fb,googl,tsla”, but this results in one document with one _id and all the stock symbols.But I really want to iterate through an array and capture data for batch.symbol: [“aapl”, “fb”, “googl”, “tsla”], and create one _id and document for each stock symbol.Thanks!\nP",
"username": "Patrik_Hellstrand"
},
{
"code": "",
"text": "Hi Patrick – Have you tried the following –",
"username": "Drew_DiPalma"
}
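One possible shape for this (a sketch only, not Drew's list: it assumes batch.symbol is stored as an array of tickers, that the IEX batch endpoint keys its response by symbol, and that the same http and mongodb-atlas services configured above are used) keeps a single batch request but writes one document per symbol with insertMany:

exports = function() {
  const httpService = context.services.get('http');
  const batch = context.values.get('batch');            // assumed: batch.symbol is an array of tickers
  const symbols = batch.symbol.join(',');
  const url = `https://sandbox.iexapis.com/stable/stock/market/batch?types=company,peers&symbols=${symbols}&token=${batch.token}`;
  const collection = context.services.get('mongodb-atlas').db('stocks').collection('profiles');

  return httpService.get({ url: url }).then(response => {
    const json = JSON.parse(response.body.text());
    // The batch endpoint returns an object keyed by symbol, so build one document per key.
    const docs = Object.keys(json).map(symbol =>
      Object.assign({ symbol: symbol, observationDate: new Date() }, json[symbol])
    );
    return collection.insertMany(docs);                 // one _id per symbol
  });
};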
] | Iterate over an array to iterate a Stitch API function | 2020-05-09T22:38:58.780Z | Iterate over an array to iterate a Stitch API function | 2,529 |
null | [] | [
{
"code": "",
"text": "I have successfully installed mongodb-compass in the operating system that I am using (I use slackware 14.2). but after that I can’t open the application via the terminal command and an error message appears: “segmentation fault mongodb-compass”. Should I use Ubuntu inside my slackware with virtual box ?",
"username": "Arsan_69294"
},
{
"code": "",
"text": "I have the exact same problem with Ubuntu 18.10; people using Fedora complain about the same thing",
"username": "vicusbass"
},
{
"code": "",
"text": "Yeah, now I have succeded to run mongodb-compass on my linux (slackware). but the way I use it is with docker. My solution :1.I’am make Dockerfile with content like this :\nFROM ubuntu:16.04\nMAINTAINER Admatic Engineering [email protected]\nENV DEBIAN_FRONTEND=noninteractive\nRUN apt-get -y update\nRUN apt-get install -y libsecret-1-0 libgconf-2-4 libgtk-3-0 libxss1 libnss3 libasound2\nADD https://downloads.mongodb.com/compass/mongodb-compass_1.16.3_amd64.deb /opt/mongodb-compass_1.16.3_amd64.deb\nRUN cd /opt/ && dpkg -i mongodb-compass_1.16.3_amd64.deb\nCMD mongodb-compassand then…\n2. i’am build the Docker with this command:\ndocker build -t mongodb-compass .then…\n3. Running mongodb-compass from terminal like this :\ndocker run --net=host --env=“DISPLAY” --volume=“$HOME/.Xauthority:/root/.Xauthority:rw” mongodb-compassreferences : Running GUI Applications inside Docker Containers | by Saravanan Sundaramoorthy | MediumHappy learn…, i hope my solution can help you too",
"username": "Arsan_69294"
},
{
"code": "",
"text": "",
"username": "kanikasingla"
},
{
"code": "",
"text": "I have somewhat different issue. When I download the mongodb-deb file it does not open through the software, it opens through the Archive Manager. So I am unable to install it.",
"username": "Omkar_33587"
},
{
"code": "",
"text": "I assume you are dealing with debian packages. you can use this command via terminal\nsudo apt install /path/to/package/name.deb",
"username": "Gapster"
},
{
"code": "",
"text": "I down the zip and create a folder in documents and stract there, this work for me ",
"username": "pblfer"
},
{
"code": "",
"text": "Got the same problem with the stable version on Ubuntu 19.04. Downloaded the beta release, and it worked well.",
"username": "Martin_66138"
},
{
"code": "",
"text": "Neither version worked for me, the window frame would come up, but then it would hangpiping output gets it to work$ mongodb-compass 2>&1 | tee -a mongodb-compass.out\nlibGL error: No matching fbConfigs or visuals found\nlibGL error: failed to load driver: swrast\nlibGL error: No matching fbConfigs or visuals found\nlibGL error: failed to load driver: swrast\n",
"username": "lufthans"
},
{
"code": "wget https://downloads.mongodb.com/compass/mongodb-compass_1.14.1_amd64.deb\nsudo dpkg -i mongodb-compass_1.14.1_amd64.deb\nmongodb-compass",
"text": "Same problem here ( Compass 1.18.0 on Ubuntu 18.04.2 LTS) - it keeps “Loading…” forever - but solved: using an old version (1.14.1).My steps [after uninstall the 1.18.0 version]:",
"username": "Leonardo_93208"
},
{
"code": "",
"text": "Hi @Leonardo_93208,Thanks for notifying the issue!!Please allow us sometime to check the issue and we’ll update you as soon as it gets fixed.I hope you are able to work on older version of Compass. If you have any doubt, please let me know.Thanks,\nSonali",
"username": "Sonali_Mamgain"
},
{
"code": "",
"text": "Hi @Leonardo_93208,Please try downloading Compass 1.18.0 version now. I have replicated the issue on my system and it is working now.Thanks,\nSonali",
"username": "Sonali_Mamgain"
},
{
"code": "",
"text": "Hello @Sonali_Mamgain,\nNow the package mongodb-compass (1.18.0-1) on Ubuntu 18.04.2 LTS is working fine.\nThanks!",
"username": "Leonardo_93208"
},
{
"code": "",
"text": "hello I use linux especially fedora 30 and is running correctly the only thing I did was download mongoDB compass on the official page and install by means of rpm guide me from this page",
"username": "wenikore"
},
{
"code": "",
"text": "I am not able to connect to mongodb through compass. i am using 1.18.0 on windows.\nIt is giving me error- Could not connect to MongoDB on the provided host and port .",
"username": "gaurav_05376"
},
{
"code": "",
"text": "I assume you are using stable version\nAre you able to connect thru shell?\nMay be firewall or antivirus software preventing connection thru compass",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi Guys, I have installed my MongoDB Compass, but when i try to follow lectura: Documents: Scalar Value type, I watch on video the schema view on Compass, bu on my screen it does not appear. Any reason to that behavior?, should I have to do something in order to see the schema view? On my screen appears: Documents, Aggregations, Explain Plan and Indexes topics, but schema view does not.Thank you.\nVersión of my Compass Installation is: 1.19.12regards.",
"username": "Ignacio_68110"
},
{
"code": "",
"text": "Hi @Ignacio_68110,You might be using MongoDB Compass Community edition which is not recommended in our course as it lacks some functionalities. Please download Compass 1.19.12 (Stable) version from the Download Centre.Hope it helps!If you have any other issue please feel free to get back to us.Happy Learning Thanks,\nShubham Rajan\nCurriculum Support Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "use the cmd in your terminal\nsudo dkpg -i “magodeb file name”\nafter successful installing you can search in the search box",
"username": "shiva_sunny"
},
{
"code": "",
"text": "2 posts were split to a new topic: How to enrol into M001 course",
"username": "Shubham_Ranjan"
}
] | Linux (Slackware) Solution: Mongodb-compass can't open after install | 2019-01-08T22:50:37.220Z | Linux (Slackware) Solution: Mongodb-compass can’t open after install | 7,192 |
null | [] | [
{
"code": "Directory: C:\\Users\\harsh\\M001\\loadMovieDetailsDataset\n",
"text": "Hi ,I have created my cluster and able to connect but while trying to load data set getting below error . can you please help on this?PS C:\\Users\\harsh\\M001\\loadMovieDetailsDataset> dirMode LastWriteTime Length Name-a---- 01-01-2020 00:00 1381666 loadMovieDetailsDataset.jsPS C:\\Users\\harsh\\M001\\loadMovieDetailsDataset> mongo “mongodb+srv://sandbox-5evyi.mongodb.net/test” --username m001-student\nMongoDB shell version v4.2.6\nEnter password:\nconnecting to: mongodb://sandbox-shard-00-01-5evyi.mongodb.net:27017,sandbox-shard-00-02-5evyi.mongodb.net:27017,sandbox-shard-00-00-5evyi.mongodb.net:27017/test?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&replicaSet=Sandbox-shard-0&ssl=true\n2020-05-17T14:39:14.986+0530 I NETWORK [js] Starting new replica set monitor for Sandbox-shard-0/sandbox-shard-00-01-5evyi.mongodb.net:27017,sandbox-shard-00-02-5evyi.mongodb.net:27017,sandbox-shard-00-00-5evyi.mongodb.net:27017\n2020-05-17T14:39:14.986+0530 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to sandbox-shard-00-01-5evyi.mongodb.net:27017\n2020-05-17T14:39:14.987+0530 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to sandbox-shard-00-00-5evyi.mongodb.net:27017\n2020-05-17T14:39:14.987+0530 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to sandbox-shard-00-02-5evyi.mongodb.net:27017\n2020-05-17T14:39:15.728+0530 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for Sandbox-shard-0 is Sandbox-shard-0/sandbox-shard-00-00-5evyi.mongodb.net:27017,sandbox-shard-00-01-5evyi.mongodb.net:27017,sandbox-shard-00-02-5evyi.mongodb.net:27017\nImplicit session: session { “id” : UUID(“5bb51df7-60ca-4106-9b23-d7c175068718”) }\nMongoDB server version: 4.2.6\nError while trying to show server startup warnings: user is not allowed to do action [getLog] on [admin.]\nMongoDB Enterprise Sandbox-shard-0:PRIMARY> load(“loadMovieDetailsDataset.js”)\n2020-05-17T14:39:47.275+0530 I NETWORK [js] DBClientConnection failed to receive message from sandbox-shard-00-01-5evyi.mongodb.net:27017 - HostUnreachable: Connection closed by peer\n2020-05-17T14:39:47.275+0530 E QUERY [js] uncaught exception: Error: error doing query: failed: network error while attempting to run command ‘drop’ on host ‘sandbox-shard-00-01-5evyi.mongodb.net:27017’ :\nDB.prototype.runCommand@src/mongo/shell/db.js:169:19\nDBCollection.prototype.drop@src/mongo/shell/collection.js:692:11\[email protected]:2:1\n@(shell):1:1\n2020-05-17T14:39:47.275+0530 E QUERY [js] Error: error loading js file: loadMovieDetailsDataset.js :\n@(shell):1:1\n2020-05-17T14:39:47.276+0530 I NETWORK [js] Marking host sandbox-shard-00-01-5evyi.mongodb.net:27017 as failed :: caused by :: Location40657: Last known master host cannot be reachedThanks,\nShreeharsha",
"username": "Shree_Harsha"
},
{
"code": "",
"text": "May be your session was inactive for too long and expired\nPlease exit and try to connect again and then load",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi Ramachandra,i have tried that also from 2 days same issue…i deleted the cluster and created new tried but error is still same. not sure where i am missing…Thanks,\nShreeharsha",
"username": "Shree_Harsha"
},
{
"code": "",
"text": "Please provide the URI of your new cluster so that we can test.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steevej,Thanks… Please find below informationmongo “mongodb+srv://test-5evyi.mongodb.net/test” --username shreepassword : harsha",
"username": "Shree_Harsha"
},
{
"code": "",
"text": "I can connect to above cluster without any issues\nWhat error you are getting now with new test cluster you created\nDid you try to pass the --password in the connect string",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Same as @Ramachandra_37567, I was able to connect with both mongo shell and Compass.",
"username": "steevej"
},
{
"code": "Directory: C:\\Users\\harsh\\M001\\loadMovieDetailsDataset\n",
"text": "i am able to connect but the issue is coming while load a database…PS C:\\Users\\harsh\\M001\\loadMovieDetailsDataset> dirMode LastWriteTime Length Name-a---- 01-01-2020 00:00 1381666 loadMovieDetailsDataset.jsPS C:\\Users\\harsh\\M001\\loadMovieDetailsDataset> mongo “mongodb+srv://test-5evyi.mongodb.net/test” --username shree --password harsha\nMongoDB shell version v4.2.6\nconnecting to: mongodb://test-shard-00-02-5evyi.mongodb.net:27017,test-shard-00-00-5evyi.mongodb.net:27017,test-shard-00-01-5evyi.mongodb.net:27017/test?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&replicaSet=test-shard-0&ssl=true\n2020-05-17T19:51:03.868+0530 I NETWORK [js] Starting new replica set monitor for test-shard-0/test-shard-00-02-5evyi.mongodb.net:27017,test-shard-00-00-5evyi.mongodb.net:27017,test-shard-00-01-5evyi.mongodb.net:27017\n2020-05-17T19:51:03.875+0530 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to test-shard-00-01-5evyi.mongodb.net:27017\n2020-05-17T19:51:03.876+0530 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to test-shard-00-02-5evyi.mongodb.net:27017\n2020-05-17T19:51:03.878+0530 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to test-shard-00-00-5evyi.mongodb.net:27017\n2020-05-17T19:51:06.027+0530 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for test-shard-0 is test-shard-0/test-shard-00-00-5evyi.mongodb.net:27017,test-shard-00-01-5evyi.mongodb.net:27017,test-shard-00-02-5evyi.mongodb.net:27017\nImplicit session: session { “id” : UUID(“058d869e-1bc9-4235-b4f9-c7e3079ea650”) }\nMongoDB server version: 4.2.6\nError while trying to show server startup warnings: user is not allowed to do action [getLog] on [admin.]\nMongoDB Enterprise test-shard-0:PRIMARY> load(“loadMovieDetailsDataset.js”)\n2020-05-17T19:51:21.850+0530 I NETWORK [js] DBClientConnection failed to receive message from test-shard-00-02-5evyi.mongodb.net:27017 - HostUnreachable: Connection closed by peer\n2020-05-17T19:51:21.851+0530 E QUERY [js] uncaught exception: Error: error doing query: failed: network error while attempting to run command ‘drop’ on host ‘test-shard-00-02-5evyi.mongodb.net:27017’ :\nDB.prototype.runCommand@src/mongo/shell/db.js:169:19\nDBCollection.prototype.drop@src/mongo/shell/collection.js:692:11\[email protected]:2:1\n@(shell):1:1\n2020-05-17T19:51:21.853+0530 E QUERY [js] Error: error loading js file: loadMovieDetailsDataset.js :\n@(shell):1:1\n2020-05-17T19:51:21.860+0530 I NETWORK [js] Marking host test-shard-00-02-5evyi.mongodb.net:27017 as failed :: caused by :: Location40657: Last known master host cannot be reached\n2020-05-17T19:51:21.870+0530 I CONNPOOL [js] dropping unhealthy pooled connection to test-shard-00-02-5evyi.mongodb.net:27017\n2020-05-17T19:51:21.873+0530 I CONNPOOL [js] dropping unhealthy pooled connection to test-shard-00-01-5evyi.mongodb.net:27017\n2020-05-17T19:51:21.875+0530 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to test-shard-00-02-5evyi.mongodb.net:27017\n2020-05-17T19:51:21.876+0530 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to test-shard-00-01-5evyi.mongodb.net:27017\n2020-05-17T19:51:21.877+0530 I CONNPOOL [js] dropping unhealthy pooled connection to test-shard-00-00-5evyi.mongodb.net:27017\n2020-05-17T19:51:21.878+0530 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to test-shard-00-00-5evyi.mongodb.net:27017",
"username": "Shree_Harsha"
},
{
"code": "",
"text": "I suspect you have a very slow network. Is there any way you can do that from another location?Can you share the first few lines loadMovieDetailsDataset.js? May be the file is corrupted.",
"username": "steevej"
},
{
"code": "",
"text": "Sure Steeve…Please find the below information on that load filedb = db.getSiblingDB(“video”);\ndb.movieDetails.drop();\ndb.movieDetails.insertMany([\n{“title”:“Once Upon a Time in the West”,“year”:1968,“rated”:“PG-13”,“runtime”:175,“countries”:[“Italy”,“USA”,“Spain”],“genres”:[“Western”],“director”:“Sergio Leone”,“writers”:[“Sergio Donati”,“Sergio Leone”,“Dario Argento”,“Bernardo Bertolucci”,“Sergio Leone”],“actors”:[“Claudia Cardinale”,“Henry Fonda”,“Jason Robards”,“Charles Bronson”],“plot”:“Epic story of a mysterious stranger with a harmonica who joins forces with a notorious desperado to protect a beautiful widow from a ruthless assassin working for the railroad.”,“poster”:“http://ia.media-imdb.com/images/M/MV5BMTEyODQzNDkzNjVeQTJeQWpwZ15BbWU4MDgyODk1NDEx._V1_SX300.jpg\",“imdb”:{“id”:“tt0064116”,“rating”:8.6,“votes”:201283},“tomato”:{“meter”:98,“image”:“certified”,“rating”:9,“reviews”:54,“fresh”:53,“consensus”:\"A landmark Sergio Leone spaghetti western masterpiece featuring a classic Morricone score.”,“userMeter”:95,“userRating”:4.3,“userReviews”:64006},“metacritic”:80,“awards”:{“wins”:4,“nominations”:5,“text”:“4 wins \\u0026 5 nominations.”},“type”:“movie”},",
"username": "Shree_Harsha"
},
{
"code": "",
"text": "Hi @Shree_Harsha,As @steevej-1495 mentioned, It looks like a network issue to me as well.Is it possible for you to connect using any other network ?~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Location40657: Last known master host cannot be reached | 2020-05-17T09:21:50.112Z | Location40657: Last known master host cannot be reached | 2,409 |
[
"charts"
] | [
{
"code": "",
"text": "I’m using charts on atlas to display summary data on a number of records.\nUsers are used to excel pivot tables so something similar to this is my aim.The text table version shows the raw data.\nHeatmaps is 95% of the way there, but only shows the value of each cell as you hover over it.\nIs there any way to get heatmaps to display values in each cell? Or is there a better mongocharts way to show pivot table style data.",
"username": "Neil_Albiston1"
},
{
"code": "",
"text": "Hi @Neil_Albiston1 -Yes you can do this with the Table chart type in the Text category. Please see the following example of a table similar to what you show.Note that we don’t currently have a way of shading cells in a table like a heatmap (I’m not sure if this is a requirement for you or not) but we will be enabling conditional formatting on tables later in the year.\nimage1381×795 41.6 KB\nTom",
"username": "tomhollander"
},
{
"code": "",
"text": "Perfect. Thank you.\nThe ‘count’ option on Text tables is exactly what I was looking for.Neil",
"username": "Neil_Albiston1"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo Charts pivot table | 2020-05-15T12:52:31.298Z | Mongo Charts pivot table | 3,653 |
|
null | [] | [
{
"code": "",
"text": "I read that MongoDB has a new extension for VSCode. Can I use that in the course? instead of installing Compass.",
"username": "Marvin_Trilles"
},
{
"code": "",
"text": "Please check this link from Devp forum",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Marvin_Trilles,Some of the labs require you to use Compass. I would recommend you to give it a try as it is a powerful tool to visualize your data and you might find it helpful when you are learning Aggregation pipelines (Not a part of this course).~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB for VSCode | 2020-05-16T00:40:59.264Z | MongoDB for VSCode | 1,088 |
null | [
"cxx"
] | [
{
"code": "",
"text": "Hey,\nI’m developing a real time embedded application which requires onboard-embedded DB. I decided to use mongodb and now i’m trying to integrate “dbclient” to my sw for inserting and querying data to/from the local server.\nIm using “dbclient.h” and creating bson objects using this example (Getting Started with the C++ Driver — MongoDB Manual). I have encountered a problem with inserting /indexing geo data (using geojson format) and looked for code examples… but bearly found any c++ examples.\nI noticed that there are several ways to implement mongo DB client… also with mongocxx.\nWhich way is the “Good” way? can you refer me with end-to-end c++ example of indexing and querying geo data?\nThanks",
"username": "Arieh_Salomon"
},
{
"code": "",
"text": "Hi @Arieh_Salomon, welcome!Im using “dbclient.h” and creating bson objects using this example (Getting Started with the C++ Driver — MongoDB Manual).I would recommend to review the documentation on Tutorial for mongocxx and Working with BSONI have encountered a problem with inserting /indexing geo data (using geojson format) and looked for code examplesPlease see Tutorial: Query Collection , Tutorial: Insert One, and MongoDB GeoSpatial Queries to get started.If you’re still encountering a problem, please provide:Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks!\nSo if I understand correctly, <dbclient.h> is the old driver and is the newer (recommended) version?I installed the latest version (3.5 i think…) and already did the recommended examples.I’m trying to insert lots of geographic points with descriptor, something like this:{\n“name”: “name_1”,\n“location”: {\n“coordinates”: [39.301, 21.211],\n“type” : “point”\n}\n“descriptor” : “12345789abcdef”\n}I need to be able to perform a query to get all the points within a certain polygon.Can you show me a relevant example of how to insert, index and query (c++ code)?Thanks!",
"username": "Arieh_Salomon"
}
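While waiting for a C++-specific answer, here is a sketch in mongo shell syntax of the three pieces (the collection name points and the polygon coordinates are made up): create a 2dsphere index, insert a GeoJSON point, and run a $geoWithin query. Note that the GeoJSON type should be "Point" with a capital P, otherwise the 2dsphere index will reject the document. The same documents and filters can be assembled with the mongocxx BSON builders.

db.points.createIndex({ location: "2dsphere" })

db.points.insertOne({
  name: "name_1",
  location: { type: "Point", coordinates: [39.301, 21.211] },   // [longitude, latitude]
  descriptor: "12345789abcdef"
})

db.points.find({
  location: {
    $geoWithin: {
      $geometry: {
        type: "Polygon",
        coordinates: [[ [39.0, 21.0], [40.0, 21.0], [40.0, 22.0], [39.0, 22.0], [39.0, 21.0] ]]   // closed ring
      }
    }
  }
})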
] | C++ MongoDB client | 2020-05-17T12:34:21.083Z | C++ MongoDB client | 2,719 |
null | [
"compass"
] | [
{
"code": "$objectToArray",
"text": "Hallo,\nDoes anyone know why the MongoDB Compass do not have the possibility to use the $objectToArray ($objectToArray (aggregation)) feature on his aggregation creating window. Does the MongoDB Compass not support all aggregation features?",
"username": "Niclas_T"
},
{
"code": "$objectToArray$projecttest{ obj: { str: \"foo\", num: 99 } }db.test.aggregate( [ { $project: { obj_to_arr: { $objectToArray: \"$obj\" } } } ] )",
"text": "The $objectToArray aggregation operator works fine in the Compass (I am using version 1.21.2). Note that it not an aggregation stage, it is an operator which is used within another stage, like $project.For example using the input document of test collection: { obj: { str: \"foo\", num: 99 } }The aggregation query:db.test.aggregate( [ { $project: { obj_to_arr: { $objectToArray: \"$obj\" } } } ] )The screenshot showing the same using Compass:\nagg1961×446 20.6 KB\n",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Oh this was the problem thank you for the explanation",
"username": "Niclas_T"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Compass missing aggregation options | 2020-05-17T21:11:03.512Z | MongoDB Compass missing aggregation options | 3,817 |
null | [] | [
{
"code": "",
"text": "on mysql,oracle,sybase, i got never problem with a just simple distincti need to check no duplicate uuid field before create index with unique id on very big dataset of 70 millions recordsbut if i use distinct(“uuid”) i got message exeeed limit of 16MBi use aggregation with count on uuid and allowDisckUse: true, i got bson data to large erroris it impossible to do a distinct like other database sql ?_",
"username": "Jp_B"
},
{
"code": "db.testColl.aggregate([ \n {$group: { _id: \"$uuid\" } }, \n]).itcount()\n\ndb.testColl.aggregate([ \n { $group: { _id: \"$uuid\" } },\n { $count: \"c\" }\n])",
"text": "Try these and count the distinct values:",
"username": "Prasad_Saya"
},
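Since the goal is to verify that no uuid value is duplicated before creating the unique index, a small variation of the same $group (a sketch, assuming the collection is named mycollection) keeps only the values that occur more than once; an empty result means the unique index can be built safely:

db.mycollection.aggregate([
  { $group: { _id: "$uuid", n: { $sum: 1 } } },   // one document per distinct uuid
  { $match: { n: { $gt: 1 } } },                  // keep only duplicated values
  { $count: "duplicatedUuids" }
], { allowDiskUse: true })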
{
"code": "",
"text": "uuidit’s working, thanksbut i must add allowDiskUse:true\ndb[“mycollection”].aggregate([ {$group: { _id: “$uuid” } } ],{allowDiskUse:true}).itcount()",
"username": "Jp_B"
}
] | How to make distinct on big dataset | 2020-05-17T21:10:34.483Z | How to make distinct on big dataset | 5,950 |
[
"mongodb-shell"
] | [
{
"code": "",
"text": "I am learning about variables in the m001 course. So I tried a few commands using variables.\n\nimage1920×399 20.2 KB\nBut the variable (document) is printing only once. After some searching on the internet, I found this.\n\nimage911×677 36.2 KB\nfind() is a cursor, It can hold the variable only once.This might be known to all. But as a beginner, I struggled for a while. I hope this helps beginners like me.",
"username": "jayanthsaikiran_N_A"
},
{
"code": "forEachexplainhasNextnexttoArray",
"text": "The db.collection.find() returns a cursor. You can apply any of these cursor methods on the returned cursor. Some of the cursor methods often used are the forEach, explain, hasNext (and next), toArray, etc.The db.collection.findOne() returns one document that matches the filter condition, or else a null.",
"username": "Prasad_Saya"
}
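A short sketch of the difference (the collection name movies is assumed, not taken from the screenshots above): a cursor stored in a variable is consumed as the shell prints it, while converting it to an array gives a value that can be printed any number of times.

var cur = db.movies.find()
cur               // the shell starts iterating the cursor; once it is exhausted, printing again shows nothing

var docs = db.movies.find().toArray()
docs              // a plain array: it can be printed as many times as needed
docs.length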
] | MongoDB shell variable printing only once | 2020-05-17T21:10:42.812Z | MongoDB shell variable printing only once | 3,104 |
|
null | [
"aggregation"
] | [
{
"code": "var q1 = [\n { $unwind: '$resp.'+starCode },\n { $project: { stars: '$resp.'+starCode } },\n { $group: {\n _id: 1,\n total: { $sum: '$stars' },\n count: { $sum: { $cond: [{ $ne: ['$stars',''] },1,0] } },\n average: { $avg: '$stars' } \n }\n }\n]\n[\n {\n \"_id\": 1,\n \"total\": 51,\n \"count\": 14,\n \"average\": 3.642857142857143\n }\n]\nvar q2 = [ \n { $group: {\n _id: { $dateToString: { format: '%H:%M %Y-%m-%d', date: '$date' } },\n count: { $sum: 1 }\n }\n } \n]\n[\n {\n \"_id\": \"14:58 2020-05-14\",\n \"count\": 2\n },\n {\n \"_id\": \"14:46 2020-05-14\",\n \"count\": 2\n },\netc,etc,etc.\nvar q3 = [\n { $unwind: '$resp.'+starCode },\n { $project: { stars: '$resp.'+starCode } }, \n { $group: {\n _id: { $dateToString: { format: '%H:%M %Y-%m-%d', date: '$date' } },\n total: { $sum: '$stars' },\n count: { $sum: 1 }\n }\n } \n]\n[\n {\n \"_id\": null,\n \"total\": 51,\n \"count\": 14\n }\n]\n",
"text": "MongoDB 3.6.18I have two queries which work…which returns…And another…which returns…However when I try an combine these ( count of stars in the period ) using…The the _id value becomes NULL and there’s no grouping - ie…Any Suggestions welcome.",
"username": "Peter_Alderson"
},
{
"code": "$projectstarsdate$groupdate: 1$project",
"text": "Hi @Peter_Alderson, your $project stage is saying only pass on the stars field to to the next stage. That means that date if not present in the $group stage. Try adding date: 1 into the $project stage to see if you get the expected results.",
"username": "Doug_Duncan"
},
{
"code": "{ $addFields: { stars: '$resp.'+starCode } }$project$addFields$project$group",
"text": "{ $project: { stars: ‘$resp.’+starCode } }As @Doug_Duncan mentioned you can change your $project stage, or instead use a $addFields stage:{ $addFields: { stars: '$resp.'+starCode } }Both these stages have some common functionality, and different purposes. $project is mostly used to include / exclude fields from the document to be accessed in the following stage. $addFields adds new fields to the already existing ones in a document.In your case, the $project stage looks appropriate as you are using only two fields in the following $group stage.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks @Doug_Duncan.Yep, that worked. I was assuming that $project was additive !Peter",
"username": "Peter_Alderson"
},
{
"code": "$project$addFields",
"text": "Hi @Peter_Alderson glad that worked out for you.$project either I want this list of fields, or I don’t want this subset returned.For adding in new fields, while keeping all the rest that are in the current pipeline, you would use $addFields as @Prasad_Saya mentioned earlier.Which you use depends on which fields you need from that point on. It’s best to only send the fields on that you need to complete the pipeline to save on the amount of data being passed around.",
"username": "Doug_Duncan"
},
{
"code": "$project$project$project",
"text": "It’s best to only send the fields on that you need to complete the pipeline to save on the amount of data being passed around.The aggregation pipeline automatically determines fields that are required, so it is actually best to only add a $project stage if results need to be renamed or reshaped (typically at the end of a pipeline).Adding an early $project stage can be less efficient because it bypasses the automatic dependency analysis.If you don’t need to rename or reshape results, there is no need to include a $project stage.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "var q3 = [\n { $unwind: '$resp.'+starCode },\n { $project: { stars: '$resp.'+starCode } }, \n { $group: {\n _id: { $dateToString: { format: '%H:%M %Y-%m-%d', date: '$date' } },\n total: { $sum: '$stars' },\n count: { $sum: 1 }\n }\n } \n]\n$projectvar q3 = [\n { $unwind: '$resp.'+starCode },\n { $group: {\n _id: { $dateToString: { format: '%H:%M %Y-%m-%d', date: '$date' } },\n total: { $sum: '$resp.'+starCode },\n count: { $sum: 1 }\n }\n } \n]",
"text": "The aggregation can be improved with the pipelinereplaced with the following, and eliminating the $project stage:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi @Prasad_Saya,Can you explain to me the benefit of the improvement you identify ?Peter",
"username": "Peter_Alderson"
},
{
"code": "$project$unwind",
"text": "Can you explain to me the benefit of the improvement you identify ?Not having the $project stage. This means your aggregation doesn’t have to scan all the documents after the initial $unwind stage. That is less processing. It matters when there are a large number of documents.",
"username": "Prasad_Saya"
},
{
"code": "$match()$project$project()",
"text": "@Stennie_X, you know more about MongoDB and it’s inner workings than I do since you’ve worked at MongoDB for a number of years now, and I am glad that you are here to share your knowledge.The link you posted is to only the optimization where MongoDB can pull up $match() stages if they are after projections. I think what you really meant was to link the the section above that:Projection OptimizationThe aggregation pipeline can determine if it requires only a subset of the fields in the documents to obtain the results. If so, the pipeline will only use those required fields, reducing the amount of data passing through the pipeline.Now having said that, I don’t know that I necessarily agree with your statement:If you don’t need to rename or reshape results, there is no need to include a $project stage.Sure the optimizer can figure out the fields necessary to pass through the pipeline and I see that being a great thing badly written aggregation pipelines. I prefer, however, to be explicit about what I am passing through (it helps me see the data that I’m interested in, especially if there are a large number of fields) and until it’s proven that running a projection, early in the process, can cause performance issues I will continue to $project() only the fields that I need in my pipelines as early as I can. That doesn’t mean that my way is better, but if I’m not having any performance issues, I’m OK with doing it this way. Also isn’t the optimizer just implicitly doing what I’m explicitly doing?",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "In addition I will think that if you $project fields of a compound index you avoid a document fetch.",
"username": "steevej"
},
{
"code": "",
"text": "Stennie is right, $project can make your pipeline less efficient if you happen to specify a field you don’t need or forget to exclude a field you don’t.For instance in order for a covered index on a and b to be used in an explicit projection you would have to remember to exclude _id if it’s not in the index and not needed.Unnecessary stages are better left out.In fact, in this particular aggregation you don’t even need $unwind - it should just be a single $group stage.Asya",
"username": "Asya_Kamsky"
},
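A sketch of that single-stage version (untested; it reuses the starCode variable from the earlier snippets and assumes resp.<starCode> is an array of numbers as before): the expression form of $sum adds up the elements of an array and $size counts them, so neither the $unwind nor the $project stage is needed.

var q4 = [
  { $group: {
      _id: { $dateToString: { format: '%H:%M %Y-%m-%d', date: '$date' } },
      total: { $sum: { $sum: '$resp.' + starCode } },                       // inner $sum adds the array elements
      count: { $sum: { $size: { $ifNull: ['$resp.' + starCode, []] } } }    // counts array elements instead of unwound docs
  } }
]
db.collection.aggregate(q4)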
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Aggregation by Date | 2020-05-14T18:02:47.842Z | Aggregation by Date | 2,913 |
null | [
"react-native"
] | [
{
"code": "useEffect()componentWillUnmount()realm.isClosed is undefined",
"text": "Link to gistI modified the example app in the Realm doc to use newer API’s like Hooks. The app crashes in the second useEffect() used to replace the deprecated componentWillUnmount() method. The error states that realm.isClosed is undefined.Any idea what is causing the error?",
"username": "Michael_Stelly"
},
{
"code": "",
"text": "I discovered that the error was not Realm related. I was using Hooks incorrectly. The gist will show the updated code.",
"username": "Michael_Stelly"
},
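For readers who hit the same message, a common shape for the Hooks version (a sketch only; the linked gist contains the actual fix, and config here is a placeholder for the real Realm configuration): open the Realm inside the effect and close it in the effect's cleanup function, which is what replaces componentWillUnmount():

// inside a function component; assumes `import Realm from 'realm'` and `config` defined elsewhere
useEffect(() => {
  let realm;
  Realm.open(config).then(r => {
    realm = r;
    // ... read/write and update component state here ...
  });
  return () => {
    // cleanup runs on unmount
    if (realm && !realm.isClosed) {
      realm.close();
    }
  };
}, []);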
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | realm.isClosed is undefined | 2020-05-16T22:14:07.137Z | realm.isClosed is undefined | 2,169 |
null | [
"cxx"
] | [
{
"code": "",
"text": "Hi, I have been following this guide with the hopes of using mongoDB with QT 5.12.7 MinGW 64 bit in QT creator\nhttp://mongocxx.org/mongocxx-v3/installation/For step 1, I used c:\\msys64\\mingw64.exe to get mongo-c-driver 1.16.2 to install with no issues following this link: Installing the MongoDB C Driver (libmongoc) and BSON library (libbson) — libmongoc 1.23.2For step 2, I made the assumption that i will need -DBSONCXX_POLY_USE_BOOST=1, and i downloaded boost_1_73_0 from online and made C:\\local\\boost_1_73_0For step 3, I followed pasted the lines from the text box into the MinGW shell in c:\\msys64\\mingw64.execurl -OL https://github.com/mongodb/mongo-cxx-driver/archive/r3.5.0.tar.gz\ntar -xzf r3.5.0.tar.gz\ncd mongo-cxx-driver-r3.5.0/buildFor step 4, i pasted this in the mingw shell‘C:\\Program Files\\CMake\\bin\\cmake.exe’ … \n-G “Visual Studio 15 2017 Win64” \n-DBOOST_ROOT=C:\\local\\boost_1_73_0 \n-DCMAKE_PREFIX_PATH=C:\\mongo-c-driver \n-DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driver \n-DBUILD_VERSION=3.5.0my problem is nearly identical to this post, but i didnt want to hijack the threadAs far as the libraries not being found, how is your build specifying the location and resources of the C++ driver? Can you provide the complete error output? Also, are you using Visual Studio for all of your builds (C driver, C++ driver, and your own project), or are you mixing Visual Studio and MinGW?as for the questions related to this topic\n-location is specified with mingw shell using this command cd mongo-cxx-driver-r3.5.0/build\n-i dont plan to use Visual studio, just QT creator with minGW\n-complete error output*$ ‘C:\\Program Files\\CMake\\bin\\cmake.exe’ … *\n*> -G “Visual Studio 15 2017 Win64” *\n*> -DBOOST_ROOT=C:\\local\\boost_1_73_0 *\n*> -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver *\n*> -DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driver *\n> -DBUILD_VERSION=3.5.0\n– The CXX compiler identification is MSVC 19.16.27025.1\n– Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Professional/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe\n– Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Professional/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe - works\n– Detecting CXX compiler ABI info\n– Detecting CXX compiler ABI info - done\n– Detecting CXX compile features\n– Detecting CXX compile features - done\n– No build type selected, default is Release\n– The C compiler identification is MSVC 19.16.27025.1\n– Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Professional/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe\n– Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Professional/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe - works\n– Detecting C compiler ABI info\n– Detecting C compiler ABI info - done\n– Detecting C compile features\n– Detecting C compile features - done\n– Auto-configuring bsoncxx to use boost std library polyfills since C++17 is inactive and compiler is MSVC\nbsoncxx version: 3.5.0\nCMake Error at src/bsoncxx/CMakeLists.txt:98 (find_package):By not providing “Findlibbson-1.0.cmake” in CMAKE_MODULE_PATH this project*has asked CMake to find a package configuration file provided by*“libbson-1.0”, but CMake did not find one.*Could not find a package configuration file provided by “libbson-1.0”*(requested version 1.13.0) with any of the following names:*libbson-1.0Config.cmake*libbson-1.0-config.cmake*Add the installation prefix of 
“libbson-1.0” to CMAKE_PREFIX_PATH or set*“libbson-1.0_DIR” to a directory containing one of the above files. If*“libbson-1.0” provides a separate development package or SDK, be sure it*has been installed.*– Configuring incomplete, errors occurred!\nSee also “C:/msys64/home/agovan/mongo-cxx-driver-r3.5.0/build/CMakeFiles/CMakeOutput.log”.",
"username": "Akash_Govan"
},
{
"code": "-DBUILD_VERSION=...C:\\mongo-c-driver",
"text": "@Akash_Govan, first if you build from the release tarball (which your post indicates that you are not) then you will not need to specify -DBUILD_VERSION=... in your CMake command. The correct link for the release tarball is https://github.com/mongodb/mongo-cxx-driver/releases/download/r3.5.0/mongo-cxx-driver-r3.5.0.tar.gz while you downloaded the source tree snapshot where it was tagged. The team is aware of the discrepancy and we are discussing how to update the documentation to eliminate this and other items which are unclear.That said, let’s consider this part of your output:CMake Error at src/bsoncxx/CMakeLists.txt:98 (find_package):This indicates that either you did not actually install the C driver at the specified location (C:\\mongo-c-driver, based on the CMake command you are using for the C++ driver build) either by not executing the install target or by choosing a different directory, or you supplied some option to the C driver build to disable the build of the DLL (which would produce only static libraries). If you would post the entirety of the output for your C driver build, I could help you determine the precise cause.",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "If you would post the entirety of the output for your C driver build, I could help you determine the precise cause.I tried removing the files and doing a reinstall of the C driver build.using c:\\msys64\\ming64.exe\ni installed the dependencies with\npacman --noconfirm -Syu\npacman --noconfirm -S mingw-w64-x86_64-gcc mingw-w64-x86_64-cmake\npacman --noconfirm -S mingw-w64-x86_64-extra-cmake-modules make tar\npacman --noconfirm -S mingw64/mingw-w64-x86_64-cyrus-saslthen i ran these commands after untaring the build\nmkdir cmake-build\ncd cmake-build\nCC=/mingw64/bin/gcc.exe /mingw64/bin/cmake -G “MSYS Makefiles” -DCMAKE_INSTALL_PREFIX=“C:/mongo-c-driver” -DCMAKE_C_FLAGS=“-D__USE_MINGW_ANSI_STDIO=1” …\nmake installthere is a character limit, so pasting 20 pages of install output wont fit, but i’ll try to paste the main points\nagovan@LENOVO-AKASH MINGW64 ~\n$ curl -LO https://github.com/mongodb/mongo-c-driver/releases/download/1.16.2/mongo-c-driver-1.16.2.tar.gz\n% Total % Received % Xferd Average Speed Time Time Time Current\nDload Upload Total Spent Left Speed\n100 637 100 637 0 0 1151 0 --:–:-- --:–:-- --:–:-- 1149\n100 6726k 100 6726k 0 0 2164k 0 0:00:03 0:00:03 --:–:-- 3334kagovan@LENOVO-AKASH MINGW64 ~\n$ tar xzf mongo-c-driver-1.16.2.tar.gzagovan@LENOVO-AKASH MINGW64 ~\n$ cd mongo-c-driver-1.16.2agovan@LENOVO-AKASH MINGW64 ~/mongo-c-driver-1.16.2\n$ mkdir cmake-buildagovan@LENOVO-AKASH MINGW64 ~/mongo-c-driver-1.16.2\n$ cd cmake-buildagovan@LENOVO-AKASH MINGW64 ~/mongo-c-driver-1.16.2/cmake-build\n$ CC=/mingw64/bin/gcc.exe /mingw64/bin/cmake -G “MSYS Makefiles” -DCMAKE_INSTALL_PREFIX=“C:/mongo-c-driver” -DCMAKE_C_FLAGS=“-D__USE_MINGW_ANSI_STDIO=1” …\n– The C compiler identification is ;GNU 10.1.0\n– Check for working C compiler: C:/msys64/mingw64/bin/gcc.exe\n– Check for working C compiler: C:/msys64/mingw64/bin/gcc.exe - works\n– Detecting C compiler ABI info\n– Detecting C compiler ABI info - done\n– Detecting C compile features\n– Detecting C compile features - done\n– No CMAKE_BUILD_TYPE selected, defaulting to RelWithDebInfo\nfile VERSION_CURRENT contained BUILD_VERSION 1.16.2\n– Using bundled libbson\nlibbson version (from VERSION_CURRENT file): 1.16.2\n– Check if the system is big endian\n– Searching 16 bit integer\n– Looking for sys/types.h\n– Looking for sys/types.h - found\n– Looking for stdint.h\n– Looking for stdint.h - found\n– Looking for stddef.h\n– Looking for stddef.h - found\n– Check size of unsigned short\n– Check size of unsigned short - done\n– Searching 16 bit integer - Using unsigned short\n– Check if the system is big endian - little endian\n– Looking for snprintf\n– Looking for snprintf - found\n– Looking for reallocf\n– Looking for reallocf - not found\n– Performing Test BSON_HAVE_TIMESPEC\n– Performing Test BSON_HAVE_TIMESPEC - Success\n– struct timespec found\n– Looking for gmtime_r\n– Looking for gmtime_r - not found\n– Looking for rand_r\n– Looking for rand_r - not found\n– Looking for strings.h\n– Looking for strings.h - found\n– Looking for strlcpy\n– Looking for strlcpy - not found\n– Looking for clock_gettime\n– Looking for clock_gettime - found\n– Looking for strnlen\n– Looking for strnlen - found\n– Looking for stdbool.h\n– Looking for stdbool.h - found\n– Looking for SYS_gettid\n– Looking for SYS_gettid - not found\n– Looking for syscall\n– Looking for syscall - not found\n– Performing Test HAVE_ATOMIC_32_ADD_AND_FETCH\n– Performing Test HAVE_ATOMIC_32_ADD_AND_FETCH - Success\n– Performing Test HAVE_ATOMIC_64_ADD_AND_FETCH\n– Performing 
Test HAVE_ATOMIC_64_ADD_AND_FETCH - Success\n– Looking for pthread.h\n– Looking for pthread.h - found\n– Performing Test CMAKE_HAVE_LIBC_PTHREAD\n– Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success\n– Found Threads: TRUE\nlibmongoc version (from VERSION_CURRENT file): 1.16.2\n– Searching for zlib CMake packages\n– Found ZLIB: C:/msys64/mingw64/lib/libz.dll.a (found version “1.2.11”)\n– zlib found version “1.2.11”\n– zlib include path “C:/msys64/mingw64/include”\n– zlib libraries “C:/msys64/mingw64/lib/libz.dll.a”\n– Looking for include file unistd.h\n– Looking for include file unistd.h - found\n– Looking for include file stdarg.h\n– Looking for include file stdarg.h - found\n– Searching for compression library zstd\n– Found PkgConfig: C:/msys64/mingw64/bin/pkg-config.exe (found version “0.29.2”)\n– Checking for module ‘libzstd’\n– Found libzstd, version 1.4.4\n– Found zstd version 1.4.4 in C:/msys64/mingw64/include\n– Check size of socklen_t\n– Check size of socklen_t - done\n– Looking for sched_getcpu\n– Looking for sched_getcpu - not found\n– Searching for compression library header snappy-c.h\n– Not found (specify -DCMAKE_INCLUDE_PATH=/path/to/snappy/include for Snappy compression)\n– No ICU library found, SASLPrep disabled for SCRAM-SHA-256 authentication.\n– If ICU is installed in a non-standard directory, define ICU_ROOT as the ICU installation path.\nSearching for libmongocrypt\n– libmongocrypt not found. Configuring without Client-Side Field Level Encryption support.\n– Performing Test MONGOC_HAVE_SS_FAMILY\n– Performing Test MONGOC_HAVE_SS_FAMILY - Failed\n– Compiling against Secure Channel\n– Compiling against Windows SSPI\n– Configuring done\n– Generating done\n– Build files have been written to: C:/msys64/home/agovan/mongo-c-driver-1.16.2/cmake-buildagovan@LENOVO-AKASH MINGW64 ~/mongo-c-driver-1.16.2/cmake-build\n$ make install\nScanning dependencies of target bson_shared\n[ 1%] Building C object src/libbson/CMakeFiles/bson_shared.dir/src/bson/bcon.c.obj[16 pages of installing from 1% to 100 % here, which will exceed the post character limit][100%] Built target bulk1Install the project…– Install configuration: “RelWithDebInfo”– Installing: C:/mongo-c-driver/share/mongo-c-driver/COPYING– Installing: C:/mongo-c-driver/share/mongo-c-driver/NEWS– Installing: C:/mongo-c-driver/share/mongo-c-driver/README.rst– Installing: C:/mongo-c-driver/share/mongo-c-driver/THIRD_PARTY_NOTICES– Installing: C:/mongo-c-driver/lib/libbson-1.0.dll.a– Installing: C:/mongo-c-driver/bin/libbson-1.0.dll– Installing: C:/mongo-c-driver/lib/libbson-static-1.0.a– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-config.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-version.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bcon.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-atomic.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-clock.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-compat.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-context.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-decimal128.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-endian.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-error.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-iter.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-json.h– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-keys.h– 
Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-macros.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-md5.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-memory.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-oid.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-prelude.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-reader.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-string.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-types.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-utf8.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-value.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-version-functions.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson/bson-writer.h
– Installing: C:/mongo-c-driver/include/libbson-1.0/bson.h
– Installing: C:/mongo-c-driver/lib/pkgconfig/libbson-1.0.pc
– Installing: C:/mongo-c-driver/lib/pkgconfig/libbson-static-1.0.pc
– Installing: C:/mongo-c-driver/lib/cmake/bson-1.0/bson-targets.cmake
– Installing: C:/mongo-c-driver/lib/cmake/bson-1.0/bson-targets-relwithdebinfo.cmake
– Installing: C:/mongo-c-driver/lib/cmake/bson-1.0/bson-1.0-config.cmake
– Installing: C:/mongo-c-driver/lib/cmake/bson-1.0/bson-1.0-config-version.cmake
– Installing: C:/mongo-c-driver/lib/cmake/libbson-1.0/libbson-1.0-config.cmake
– Installing: C:/mongo-c-driver/lib/cmake/libbson-1.0/libbson-1.0-config-version.cmake
– Installing: C:/mongo-c-driver/lib/cmake/libbson-static-1.0/libbson-static-1.0-config.cmake
– Installing: C:/mongo-c-driver/lib/cmake/libbson-static-1.0/libbson-static-1.0-config-version.cmake
– Installing: C:/mongo-c-driver/lib/libmongoc-1.0.dll.a
– Installing: C:/mongo-c-driver/bin/libmongoc-1.0.dll
– Installing: C:/mongo-c-driver/lib/libmongoc-static-1.0.a
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-config.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-version.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-apm.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-bulk-operation.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-change-stream.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-client.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-client-pool.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-client-side-encryption.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-collection.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-cursor.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-database.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-error.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-flags.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-find-and-modify.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-gridfs.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-gridfs-bucket.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-gridfs-file.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-gridfs-file-page.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-gridfs-file-list.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-handshake.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-host-list.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-init.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-index.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-iovec.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-log.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-macros.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-matcher.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-opcode.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-prelude.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-read-concern.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-read-prefs.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-server-description.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-client-session.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-socket.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-tls-libressl.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-tls-openssl.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-buffered.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-file.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-gridfs.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-socket.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-topology-description.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-uri.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-version-functions.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-write-concern.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-rand.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-stream-tls.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc/mongoc-ssl.h
– Installing: C:/mongo-c-driver/include/libmongoc-1.0/mongoc.h
– Installing: C:/mongo-c-driver/lib/pkgconfig/libmongoc-1.0.pc
– Installing: C:/mongo-c-driver/lib/pkgconfig/libmongoc-static-1.0.pc
– Installing: C:/mongo-c-driver/lib/pkgconfig/libmongoc-ssl-1.0.pc
– Installing: C:/mongo-c-driver/lib/cmake/mongoc-1.0/mongoc-targets.cmake
– Installing: C:/mongo-c-driver/lib/cmake/mongoc-1.0/mongoc-targets-relwithdebinfo.cmake
– Installing: C:/mongo-c-driver/lib/cmake/mongoc-1.0/mongoc-1.0-config.cmake
– Installing: C:/mongo-c-driver/lib/cmake/mongoc-1.0/mongoc-1.0-config-version.cmake
– Installing: C:/mongo-c-driver/lib/cmake/libmongoc-1.0/libmongoc-1.0-config.cmake
– Installing: C:/mongo-c-driver/lib/cmake/libmongoc-1.0/libmongoc-1.0-config-version.cmake
– Installing: C:/mongo-c-driver/lib/cmake/libmongoc-static-1.0/libmongoc-static-1.0-config.cmake
– Installing: C:/mongo-c-driver/lib/cmake/libmongoc-static-1.0/libmongoc-static-1.0-config-version.cmake
– Installing: C:/mongo-c-driver/share/mongo-c-driver/uninstall.cmd

I fixed my commands after that, but I still get the same error message as in my first post:

curl -OL https://github.com/mongodb/mongo-cxx-driver/releases/download/r3.5.0/mongo-cxx-driver-r3.5.0.tar.gz
tar -xzf mongo-cxx-driver-r3.5.0.tar.gz
cd mongo-cxx-driver-r3.5.0/build
'C:\Program Files\CMake\bin\cmake.exe' …
-DBOOST_ROOT=C:\local\boost_1_73_0
-DCMAKE_PREFIX_PATH=C:\mongo-c-driver
-DCMAKE_INSTALL_PREFIX=C:\mongo-cxx-driver

One thing I wanted to ask: is this cmake command supposed to be building for Visual Studio 15 2017 with -G "Visual Studio 15 2017 Win64", or should I be using something else if I want it to work in Qt Creator with MinGW?

I'm not sure if this is relevant to my issue, but there is no file called libmongoc in my C:\mongo-c-driver folder, although there appear to be files related to it in the sub folders.
[screenshot of the C:\mongo-c-driver folder contents, 725×545, 34.2 KB]
",
"username": "Akash_Govan"
},
{
"code": "MSYS Makefiles-G \"...\"libbson....libmongoc....bson....mongoc....lib",
"text": "I see. The problem is that you are building the C driver for MSYS Makefiles and then building the C++ driver for Visual Studio. Use the same -G \"...\" option to CMake for both builds. Building the two components with two different toolchains is not supported and, as you have find, may not even work.What is happening, from a technical perspective, is that the C driver build is creating CMake packages called libbson.... and libmongoc.... (because it is treated as a Unix-like system), but the Visual Studio build is looking for CMake packages called bson.... and mongoc.... (because the convention on Windows is to not use the lib prefix).",
"username": "Roberto_Sanchez"
},
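To make the advice concrete, a minimal sketch of building both drivers with the same generator might look like the following (version numbers, install prefixes and paths here are illustrative, not taken from this thread):

```sh
# C driver: configure, build and install with the MSYS Makefiles generator
cd mongo-c-driver-1.16.2/cmake-build
cmake -G "MSYS Makefiles" -DCMAKE_INSTALL_PREFIX=C:/mongo-c-driver ..
make && make install

# C++ driver: use the *same* generator, so find_package() looks for the
# package names (libbson-1.0 / libmongoc-1.0) that the C driver build produced
cd ../../mongo-cxx-driver-r3.5.0/build
cmake -G "MSYS Makefiles" \
      -DBOOST_ROOT=C:/local/boost_1_73_0 \
      -DCMAKE_PREFIX_PATH=C:/mongo-c-driver \
      -DCMAKE_INSTALL_PREFIX=C:/mongo-cxx-driver ..
make && make install
```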
{
"code": "-G \"...\" -G \"MSYS Makefiles\" \\\n-DBOOST_ROOT=C:\\local\\boost_1_73_0 \\\n-DCMAKE_PREFIX_PATH=C:\\mongo-c-driver \\\n-DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driver\nlibbson-1.0Config.cmake\nlibbson-1.0-config.cmake\n",
"text": "Use the same -G \"...\" option to CMake for both builds.I gave it a try, and i got no success, since my goal is to use mongoDB in QT creator with MinGW, and not visual studio, It’s my understanding that -G “MSYS Makefiles” from the “Building on Windows with MinGW-W64 and MSYS2” section of Installing the MongoDB C Driver (libmongoc) and BSON library (libbson) — libmongoc 1.23.2 is the correct option.when i try this, i end up getting the same erroragovan@LENOVO-AKASH MINGW64 ~/mongo-cxx-driver-r3.5.0/build\n$ ‘C:\\Program Files\\CMake\\bin\\cmake.exe’ … \\– The CXX compiler identification is GNU 10.1.0\n– Check for working CXX compiler: C:/msys64/mingw64/bin/g++.exe\n– Check for working CXX compiler: C:/msys64/mingw64/bin/g++.exe - works\n– Detecting CXX compiler ABI info\n– Detecting CXX compiler ABI info - done\n– Detecting CXX compile features\n– Detecting CXX compile features - done\n– No build type selected, default is Release\n– The C compiler identification is GNU 10.1.0\n– Check for working C compiler: C:/msys64/mingw64/bin/gcc.exe\n– Check for working C compiler: C:/msys64/mingw64/bin/gcc.exe - works\n– Detecting C compiler ABI info\n– Detecting C compiler ABI info - done\n– Detecting C compile features\n– Detecting C compile features - done\n– Auto-configuring bsoncxx to use MNMLSTC for polyfills since C++17 is inactive\nbsoncxx version: 3.5.0\nCMake Error at src/bsoncxx/CMakeLists.txt:98 (find_package):\nBy not providing “Findlibbson-1.0.cmake” in CMAKE_MODULE_PATH this project\nhas asked CMake to find a package configuration file provided by\n“libbson-1.0”, but CMake did not find one.Could not find a package configuration file provided by “libbson-1.0”\n(requested version 1.13.0) with any of the following names:Add the installation prefix of “libbson-1.0” to CMAKE_PREFIX_PATH or set\n“libbson-1.0_DIR” to a directory containing one of the above files. If\n“libbson-1.0” provides a separate development package or SDK, be sure it\nhas been installed.– Configuring incomplete, errors occurred!\nSee also “C:/msys64/home/agovan/mongo-cxx-driver-r3.5.0/build/CMakeFiles/CMakeOutput.log”.",
"username": "Akash_Govan"
},
{
"code": "C:\\mongo-c-driver\\lib\\cmake\\bson-1.0-config.cmakeC:\\mongo-c-driver\\lib\\cmake\\libbson-1.0-config.cmake",
"text": "This is exceptionally puzzling. Can you post the contents of the files C:\\mongo-c-driver\\lib\\cmake\\bson-1.0-config.cmake and C:\\mongo-c-driver\\lib\\cmake\\libbson-1.0-config.cmake?",
"username": "Roberto_Sanchez"
},
{
"code": "include(\"${CMAKE_CURRENT_LIST_DIR}/bson-targets.cmake\")\n# Copyright 2017 MongoDB Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nmessage(WARNING \"This CMake target is deprecated. Use 'mongo::bson_shared' instead. Consult the example projects for further details.\")\n\nset (BSON_MAJOR_VERSION 1)\nset (BSON_MINOR_VERSION 16)\nset (BSON_MICRO_VERSION 2)\nset (BSON_VERSION 1.16.2)\n\n\n####### Expanded from @PACKAGE_INIT@ by configure_package_config_file() #######\n####### Any changes to this file will be overwritten by the next CMake run ####\n####### The input file was libbson-1.0-config.cmake.in ########\n\nget_filename_component(PACKAGE_PREFIX_DIR \"${CMAKE_CURRENT_LIST_DIR}/../../../\" ABSOLUTE)\n\nmacro(set_and_check _var _file)\n set(${_var} \"${_file}\")\n if(NOT EXISTS \"${_file}\")\n message(FATAL_ERROR \"File or directory ${_file} referenced by variable ${_var} does not exist !\")\n endif()\nendmacro()\n\nmacro(check_required_components _NAME)\n foreach(comp ${${_NAME}_FIND_COMPONENTS})\n if(NOT ${_NAME}_${comp}_FOUND)\n if(${_NAME}_FIND_REQUIRED_${comp})\n set(${_NAME}_FOUND FALSE)\n endif()\n endif()\n endforeach()\nendmacro()\n\n####################################################################################\n\nset_and_check (BSON_INCLUDE_DIRS \"${PACKAGE_PREFIX_DIR}/include/libbson-1.0\")\n\n# We want to provide an absolute path to the library and we know the\n# directory and the base name, but not the suffix, so we use CMake's\n# find_library () to pick that up. Users can override this by configuring\n# BSON_LIBRARY themselves.\nfind_library (BSON_LIBRARY bson-1.0 PATHS \"${PACKAGE_PREFIX_DIR}/lib\" NO_DEFAULT_PATH)\n\nset (BSON_LIBRARIES ${BSON_LIBRARY})\n",
"text": "the directories are slightly different\nC:\\mongo-c-driver\\lib\\cmake\\bson-1.0\\bson-1.0-config.cmake hasas the only line\nand\nC:\\mongo-c-driver\\lib\\cmake\\libbson-1.0\\libbson-1.0-config.cmake has",
"username": "Akash_Govan"
},
{
"code": "C:\\foo/cygrdive/c/foo",
"text": "You installation seems to be fine.Does MSYS require that you reference Windows paths in a particular way? For example, I recall that for Cygwin, I would sometimes have to refer to C:\\foo as /cygrdive/c/foo or some programs would not be able to find it. Apart from that, I am not sure what the problem could be.",
"username": "Roberto_Sanchez"
}
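If the path style is the culprit, one experiment (purely a guess; the paths are illustrative) is to pass the prefixes in MSYS2's POSIX-style notation rather than the Windows form:

```sh
# In an MSYS2 shell, C:\mongo-c-driver is normally addressed as /c/mongo-c-driver
cmake -G "MSYS Makefiles" \
      -DCMAKE_PREFIX_PATH=/c/mongo-c-driver \
      -DCMAKE_INSTALL_PREFIX=/c/mongo-cxx-driver \
      -DBOOST_ROOT=/c/local/boost_1_73_0 ..
```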
] | Mongo-cxx MinGW installation | 2020-05-14T02:28:59.823Z | Mongo-cxx MinGW installation | 5,180 |
null | [
"queries"
] | [
{
"code": "",
"text": "Hello everyone !I make a discord bot with mongoDB and I create a command to modify an article from my bot’s shop.\nBefore modifying a particular article, it must already be found in the database. Is it possible to find out if the name typed in the command exists in the database ? If it does not exist, it returns an error message.Thank you and good evening !",
"username": "Axel_Demorest"
},
{
"code": "db.collection.find({_id: \"some_id\"}, {_id: 1}).limit(1)\n",
"text": "you can use this query to check if NONE returns …based on the response, you can use a javascript to check to display the approirate message.",
"username": "Dominic_Kumar"
}
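As a rough sketch of how that check could look inside a bot command, assuming the official Node.js driver and an "articles" collection (all names below are made up for illustration):

```js
// Check that an article exists before modifying it; otherwise return an error message.
async function updateArticle(db, name, changes) {
  const articles = db.collection("articles");

  const existing = await articles.findOne({ name }, { projection: { _id: 1 } });
  if (!existing) {
    return `No article named "${name}" exists in the shop.`; // message the bot can reply with
  }

  await articles.updateOne({ _id: existing._id }, { $set: changes });
  return `Article "${name}" updated.`;
}
```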
] | MongoDB / Discord.js - checking existence of a collection | 2020-05-14T20:38:58.976Z | MongoDB / Discord.js - checking existence of a collection | 5,170 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi all,First time posting a question here and a noob with mongo. Apologies if question = stupid.I have an application that needs to be able to run on a local computer, even if that computer loses connection to the local network. The application will be used in a very small organisation … and in the event they have issues with their local network (switch failure eg.), the application still needs to work. The application uses a local mongo database. On the other hand, the application is designed to be able to being used in much larger organisations as well, with multiple clients, even in different sites connecting to the same database. Even in this situation, the requirement is still that the application must be able to run without network connection in case of serious network problems. So every client needs at least a working (read-write) local replica.To add some redundancy, easy backup and scale up the app, I started testing with replica sets. As long as the client computer is connected to the network, this works great. I can add more nodes, make backups, datadumps, have some offsite replica’s etc. But the problem is that as soon as the client computer loses network connection, the local database (primary) goes into read-only (secondary). I can add as many nodes as I want … from what I understand, in this situation, the local database will always become readonly, even if that member was primary before the connection was lost.My question is … is there a solution for this problem? Is there maybe another way to accomplish what I need, besides replica sets?Thanks for the infomationT",
"username": "Tom_Apers"
},
{
"code": "",
"text": "So all the client must connect to the same database / share the same datas, but the database must be stored in local for each client too in case of network failure.You can achieve that with a primary node accessible through network and a secondary node on each computer.But this imply that in case of network failure the local database will be only on readonly mode.\nOtherwise, if client A and client B modifiy the same data on their local database, you will have a database inconsistency, the primary will not able to know which value used for update.",
"username": "Guillaume_Didier"
},
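To illustrate the read-only fallback, a client configured with a secondaryPreferred read preference can keep reading from its local secondary while the primary is unreachable (hostnames and the collection below are illustrative):

```js
// Connection string listing the central primary and the local secondary:
// mongodb://central-host:27017,localhost:27017/appdb?replicaSet=rs0&readPreference=secondaryPreferred

// In the mongo shell, reads can be routed to a secondary explicitly:
db.getMongo().setReadPref("secondaryPreferred")
db.inventory.find()               // still served by the local secondary during an outage
db.inventory.insertOne({ x: 1 })  // fails while no primary is reachable
```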
{
"code": "",
"text": "Thank you for your response.\nThat confirms what I was thinking. We’ll have to find a solution for this. ",
"username": "Tom_Apers"
},
{
"code": "",
"text": "I never tried this and maybe someone will able to confirm this,You can probably use a shard cluster to achieve this, each shard will be a replica set.One shard accessible through network containing commons datas, this shard is a replicaset and each client contains a secondary of this shard, in case of network failure they will be able to readonly the common datas.One shard one each client containing his user datas, the primary run on the client and one secondary is running on each other client, in case of network failure, the client will be able to read/write his user datas, and readonly other user datasI even don’t know if it’s working, but i think you can dig into this, it will more flexible than just using a replicaset",
"username": "Guillaume_Didier"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Question regarding design and replica sets | 2020-05-15T12:53:48.693Z | Question regarding design and replica sets | 1,955 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "I’m using the Realm Cloud in a production environment. I’m using partial sync.MongoDBRealm seems to focus on full sync first.Query-based Sync : MongoDB Realm will require this feature to be re-architected to maximize scalability and performance. MongoDB Realm will initially focus on full sync and within the GA phase we expect to re-architect query-based sync to be fully optimized for MongoDB Realm.I’m wondering if I’m waiting for MongoDB Realm’s partial sync support or migrating existing user data from partial sync to full sync.The MongoDB Realm public beta will be released soon, and new sign-ups to the Realm Cloud will be suspended. This also forces my service to stop signing up.I think that GA will be released after that. Does MongoDB Realm support partial synchronization when GA is released?Or will the partial sync support be some time after the GA release?Please give a rough estimate of the schedule.",
"username": "Enoooo"
},
{
"code": "",
"text": "@Enoooo Something akin to query-based sync is certainly in our plans but it is difficult to give an exact date; we are certainly working on it. Unfortunately I cannot tell you if it will land for GA for MongoDB Realm but we will endeavor to make that happen. What I can say is that I do recommend moving to full sync if at all possible - of course, it is use case and load dependent but many architectures can be solved by full sync if partitioning and schema design is thought about initially.The MongoDB Realm public beta will be released soon, and new sign-ups to the Realm Cloud will be suspended. This also forces my service to stop signing up.While we will likely stop net new sign-ups, the users of your app will still be able to use Realm Cloud and new signups for your app can continue even after the launch because your app is in production",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward\nThank you for your reply.I understand. Since it is uncertain when partial sync will be supported, we will move data from the partial sync realm to the full sync realm.While we will likely stop net new sign-ups, the users of your app will still be able to use Realm Cloud and new signups for your app can continue even after the launch because your app is in productionI was relieved to hear this. I was misunderstanding.\nFor the time being, we will use the Realm Cloud in a production environment.",
"username": "Enoooo"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | The timing for supporting partial synchronization | 2020-05-14T17:43:19.173Z | The timing for supporting partial synchronization | 2,829 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I am coming from the SQL world and I try to do simple example of customer/product/order databases.\nI thought it will look something like this:Customers:\ncustomer_id\nname\norders (this will be array with reference ids to orders)Orders\norder_id\nIncludes (this will be array of documents that each one will store the refrence id to the product and the quantity of this product)Products\nproduct_id\nname\npriceIs this right thinking or maybe using embedded documents is better? tell me what you think and how would you do that (im still newbie in NoSQL hehe)",
"username": "liran_pana"
},
{
"code": "",
"text": "my 2 cents.\nsay for example the customers are somewhat static in count.\nBut each customer can have many orders.\nso,Orders\norder_id\ncustomer_id\ndate\nproducts_orders (product_id,unit,total price)if you have a sample data, pass it on, and i can compare the benchmark using varioud data model designs.",
"username": "Dominic_Kumar"
}
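For example, one "orders" document following the shape suggested above might look like this (the values are illustrative):

```js
db.orders.insertOne({
  order_id: "ORD-0001",
  customer_id: "CUST-42",                  // reference to the customers collection
  date: ISODate("2020-05-08T00:00:00Z"),
  products_orders: [                       // embedded line items
    { product_id: "PROD-7",  unit: 2, total_price: 19.98 },
    { product_id: "PROD-12", unit: 1, total_price: 4.50 }
  ]
})
```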
] | Database design with relations | 2020-05-08T12:51:21.621Z | Database design with relations | 1,211 |
null | [
"swift",
"release-candidate"
] | [
{
"code": "withTransactionMongoClient.shutdown()MongoClient.syncShutdown()MongoClient.close()MongoClient.syncClose()ReadPreferencestructclasslet rp = ReadPreference(.primary) // old\nlet rp = ReadPreference.primary // new\n\nlet rp = try ReadPreference(.secondary, maxStalenessSeconds: 100) // old\nlet rp = try ReadPreference.secondary(maxStalenessSeconds: 100) // new\n\nlet options = FindOptions(readPreference: ReadPreference(.primary)) // old\nlet options = FindOptions(readPreference: .primary) // new\nTLSOptionsClientOptionsmaxScanFindOptionsFindOneOptionsIntstartTransactioncommitTransactionabortTransactionstartTransactioncommitTransactionabortTransactionTransactionOptionsNIOThreadPoolObjectIdJSONEncoderJSONDecoderClientSessionOptionsStartTransactionOperationCommitTransactionOperationAbortTransactionOperationSelftype(of: self)listIndexNamesDocumentIndexModelTLSOptionsClientOptionsIntallowDiskUsemaxPoolSizeauthorizedDatabasesMongoClientReadPreference",
"text": "We are pleased to announce the second release candidate for our 1.0.0 release.Please note that this release drops support for Swift 5.0. The driver officially supports Swift 5.1 and 5.2 on macOS, Ubuntu 16.04, and Ubuntu 18.04.A full list of included tickets is available below, but here are some highlights:The driver now provides an API for transactions! Note that MongoDB supports transactions in replica sets as of v4.0, and in sharded clusters as of v4.2.Please see the Transactions Guide in our documentation for details and examples.Work on the convenient API for transactions (i.e. a withTransaction helper that includes helpful logic to automatically retry transactions on certain errors) is currently in progress.",
"username": "kmahar"
},
{
"code": "",
"text": "Hello,\nWhen do you expect to have the final version released? Thank you!",
"username": "Oscar_Rodriguez"
},
{
"code": "",
"text": "Hi Oscar! We expect to tag 1.0 within the next month.",
"username": "kmahar"
},
{
"code": "",
"text": "Thank you so much! I am looking forward to it!",
"username": "Oscar_Rodriguez"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Swift driver 1.0.0-rc1 released | 2020-05-05T18:45:15.670Z | MongoDB Swift driver 1.0.0-rc1 released | 1,636 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi All, I was trying to set a 3 member replica set with mongod running on 3 different servers on Azure with RHEL image. when I was trying add replica members getting error as below:“Either all host names in a replica set configuration must\nbe localhost references, or none must be; found 1 out of 2”,“code” : 103,\"Tested by commenting # BindIP in configuration BindIP: 0.0.0.0 or BindIP : of the server but none helped.Could someone advise what should be the BindIP to be set on each of the server as am new to MongoDB.",
"username": "KA_Priya"
},
{
"code": "",
"text": "Sounds like you are using localhost as a hostname for a replica hostname.The error is telling you all members must be localhost(like a testbed on your local machine) if it sees any member with localhost as the hostname.You need to use the addressable hostname that the other nodes(and clients) can access the host on.",
"username": "chris"
},
{
"code": "",
"text": "All hosts /etc/hosts been set as hostname.for eg: Already updated hosts file as below:\n10.0.0.1 test1\n10.0.0.2 test2\n10.0.0.3 test3Please advise if you are referring this.",
"username": "KA_Priya"
},
{
"code": "rs.initiate()rs.add()",
"text": "No, although that is helpful in being able to resolve them. I am referring to the hosts that are passed as arugments to rs.initiate() or rs.add().",
"username": "chris"
},
{
"code": "",
"text": "Thank you so much Chris for your support on this. Am passing as Hostname only as rs.aadd( “test1”)",
"username": "KA_Priya"
},
{
"code": "rs.initiate()rs.conf()rs.reconfigure()",
"text": "You might need to look at what happened with the rs.initiate() the localhost may have been introduced there.rs.conf() will dump out the current configuration. You can then use rs.reconfigure() with a new configuration document to change the member hostname if it is incorrect.",
"username": "chris"
},
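For reference, the usual pattern for replacing a localhost entry in the member configuration looks roughly like this in the mongo shell (the hostname matches the /etc/hosts example above; the member index may differ in your configuration):

```js
cfg = rs.conf()
cfg.members                          // inspect how each member's "host" is currently recorded
cfg.members[0].host = "test1:27017"  // replace the localhost reference with the real hostname
rs.reconfig(cfg)
rs.status()                          // confirm the members become healthy
```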
{
"code": "",
"text": "Thanks a Ton, Chris. After running a reconfig command, was able to add the servers into replica set.Thank you so much for all your inputs and suggestions.",
"username": "KA_Priya"
},
{
"code": "",
"text": "Thank you so much Chris. Will check by running a reconfig.could you please advise what value should I mention in BindIP of config file.should I comment it or specify as 0.0.0.0 or should I mention private IP of the host server.",
"username": "KA_Priya"
},
{
"code": "",
"text": "If it only has one lan ip bindAll. If it is multihomed then pich the network(s) the nodes and app servers are connecting from.",
"username": "chris"
},
{
"code": "mongod127.0.0.1",
"text": "If it only has one lan ip bindAll.While this has the same outcome for a host with a single IP (and may be more convenient), I’d recommend explicitly binding to the IPs you want mongod to listen to. For a replica set member that would typically be 127.0.0.1 and a private IP.If additional network interfaces are added in future for some reason (for example, a public IP), limiting IPs might avoid unexpectedly exposing a service that should be private (although there should also be appropriate firewall rules in place).For available security measures, please review the MongoDB Security Checklist.Regards,\nStennie",
"username": "Stennie_X"
}
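As a sketch, the corresponding net section of mongod.conf on the host with private IP 10.0.0.1 from the earlier /etc/hosts example could look like this:

```yaml
net:
  port: 27017
  bindIp: 127.0.0.1,10.0.0.1   # loopback plus the private interface only
```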
] | Unable to configure replica set | 2020-05-14T18:02:51.455Z | Unable to configure replica set | 2,810 |
null | [
"aggregation"
] | [
{
"code": "\t\t\t\t\t\t{$lt: [\"$dateOfCheck\", \"$$endDate\"]}\n\t\t\t\t\t\t\n\t\t\t\t\t]\n\t\t\t}\n\t\t}}\n\t],\n",
"text": "Hello,I have to join two collections based on multiple criteria, let’s suppose something like:exam:\n{_id: 1, patientId: “PatID1”, dateOfExam:“2020-04-01”, description:“Some exam”}\n{_id: 2, patientId: “PatID1”, dateOfExam:“2020-04-15”, description:“Some exam”}check:\n{_id: 10, patientId: “PatID1”, dateOfCheck:“2020-03-31”, type: “PreExam”, description:“Some check”}\n{_id: 11, patientId: “PatID1”, dateOfCheck:“2020-03-31”, type: “Generic”, description:“Some generic check”}\n{_id: 12, patientId: “PatID1”, dateOfCheck:“2020-04-04”, type: “PreExam”, description:“Some check”}\n{_id: 13, patientId: “PatID1”, dateOfCheck:“2020-04-12”, type: “PreExam”, description:“Some check”}Let’s suppose I need to get all exams which have checks of type PreExam in the day before the exam.So I only want the exam of 2020-04-01 (_id: 1) together with its “PreExam” check of 2020-03-31 (_id: 10).I’m using a lookup with let/pipeline because I need to “join” on two fields (patientId and type), but I need to compare dates which are actually strings.\nIs there some way of dealing with such matching inside the lookup phase? I’ve tried to get “the day before” inside the let, in different ways, but with no success.\nMy best guess, I thought, was:let: {joinKey: “$patientId”, dataType: “PreExam”, endDate: “new Date((new Date($dateOfExam).getTime() - 1 (1000606024))).toISOString().substring(0,10)”},\npipeline: [\n{$match: {\n$expr: {\n$and: [\n{$eq: [\"$type\", “$$dataType”]},\n{$eq: [\"$patientId\", “$$joinKey”]},But it seems that “let” doesn’t allow evaluating expressions…I solved in another way, but is there a clean solution to manage cases where the join condition needs to be a function?Thank you very much in advance!",
"username": "Davide_Cicuta"
},
{
"code": "letdateOfExamDateendDate{ $lt: [ \"$dateOfCheck\", \"$$endDate\" ] }dateOfCheckDate\"new Date((new Date($dateOfExam).getTime() - 1 (1000606024)) .toISOString().substring(0,10)\"",
"text": "In the let you can use aggregation operators.For example, to convert the string date of dateOfExam to a Date object, you can use the $toDate operator. And, to arrive at the value of endDate you can use aggregation arithmetic operators. You don’t have to convert it back to a string value again to use it in the pipeline. Instead, in the { $lt: [ \"$dateOfCheck\", \"$$endDate\" ] } you can convert the dateOfCheck to a Date object - so that you are comparing two date objects.What are you deriving in this following code: \"new Date((new Date($dateOfExam).getTime() - 1 (1000606024)) .toISOString().substring(0,10)\"",
"username": "Prasad_Saya"
},
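Putting that advice together, a sketch of the lookup could look like the following (field and collection names are taken from the question; 86400000 ms is one day, and this assumes MongoDB 4.0+ for $toDate):

```js
db.exam.aggregate([
  { $lookup: {
      from: "check",
      let: {
        joinKey: "$patientId",
        startDate: { $subtract: [ { $toDate: "$dateOfExam" }, 86400000 ] }, // day before the exam
        endDate: { $toDate: "$dateOfExam" }
      },
      pipeline: [
        { $match: {
            $expr: {
              $and: [
                { $eq: [ "$type", "PreExam" ] },
                { $eq: [ "$patientId", "$$joinKey" ] },
                { $gte: [ { $toDate: "$dateOfCheck" }, "$$startDate" ] },
                { $lt:  [ { $toDate: "$dateOfCheck" }, "$$endDate" ] }
              ]
            }
        } }
      ],
      as: "preExamChecks"
  } },
  { $match: { "preExamChecks.0": { $exists: true } } }  // keep only exams with a matching check
])
```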
{
"code": "",
"text": "Thank you very much for the explanation and the links!\nI’ll try first thing on Monday! And will study in the meantime ",
"username": "Davide_Cicuta"
}
] | Lookup joining through a function | 2020-05-15T11:00:33.201Z | Lookup joining through a function | 1,872 |
null | [
"vscode"
] | [
{
"code": "",
"text": "Hello,Any news on this from MongoDB.I was in the London local event last September and had a discussion about having a properly supported VSC extension from the MongoDB. It was mentioned that this was in the works … so any news?Now I already have the Azure CosmoDB installed and that is very useful for connecting to local and cloud resources, the intellisense is great, especially when connected to a database, but it would great if this was properly supported.There was talk about being about to have command shell sessions directly from within VSC itself.Thanks",
"username": "NeilM"
},
{
"code": "",
"text": "It would be nice! +1",
"username": "DavidSol"
},
{
"code": "",
"text": "Hi @NeilM, thank you for asking We are working on it and we’ll definitely update the community as soon as we have the first version ready. Stay tuned!",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "Good to hear,Hopefully ready by London Local this year? Neil",
"username": "NeilM"
},
{
"code": "",
"text": "@NeilM Hi there. Just wanted to let you know that yesterday we announced MongoDB for VS Code: Introducing MongoDB for VS Code.Let us know what you think!",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "Thank you for this and actually remembering this post.",
"username": "NeilM"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Visual Studio Code extension from MongoDB | 2020-02-21T11:29:37.845Z | Visual Studio Code extension from MongoDB | 3,794 |