Dataset columns (each record below lists one value per line, in this order):
image_url: string, length 113 to 131, may be null
tags: sequence of strings
discussion: list of posts ({ code, text, username })
title: string, length 8 to 254
created_at: string, length 24
fancy_title: string, length 8 to 396
views: int64, 73 to 422k
null
[ "dot-net" ]
[ { "code": "", "text": "Hey All,We have released Realm-Dotnet 5.0.1 which includes our Core6 upgrade. Have a look at the release notes here:\nhttps://www.mongodb.com/community/forums/t/realm-net-5-0-1-released/8963Up next, a v10 beta version with the new sync. Stay tuned-Ian", "username": "Ian_Ward" }, { "code": "", "text": "And there is a nice blog post in DevHub which explains all this !\nhttps://www.mongodb.com/article/realm-database-and-frozen-objects", "username": "MaBeuLux88" } ]
Realm .NET 5.0.1 Released with Frozen Object Support
2020-09-10T12:48:19.149Z
Realm .NET 5.0.1 Released with Frozen Object Support
2,387
null
[]
[ { "code": "", "text": "I am very new to MongoDB. I have successfully installed it on a Centos linux server and wanted to know how I can connect to it using a browser to create a cluster and database. As yet I haven’t setup a virtual host because I don’t know which directory I need to be the root directory.If someone can point me to all the right documentation that would be great", "username": "Russell_Rose" }, { "code": "", "text": "If you want to configure a cluster through the web, then I would look into making a free cluster of MongoDB Atlas. https://docs.atlas.mongodb.com/getting-started/If you want to have it on a local Centos server than you will have to configure the cluster through a configuration file. https://docs.mongodb.com/manual/reference/configuration-options/The default directory that mongo runs on is /data/db. But you can use any path that you would like for your database / log files as long as you define the path in your configuration file.Also MongoDB has great videos on Mongo University that will walk you through how to setup an Atlas cluster or a replica set in a VM. See the course Mongo University M001.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "I have looked at the MongoDb courses and they are all based on Atlas and I would much prefer to use this on my own server. I have also looked at the configuration file and made some changes and now when I run netstat I get:\ntcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 52833/mongodHowever, when I try to connect using Compass on either port 27017 or 28017 I get a connection timeout. I also timeout if I try to connect using a browser", "username": "Russell_Rose" }, { "code": "", "text": "I now find when I run a browser locally using :27017 I get:\nIt looks like you are trying to access MongoDB over HTTP on the native driver port.\nbut when I run remotely I get connection timeout", "username": "Russell_Rose" }, { "code": "", "text": "Hi @Russell_RoseYou might need to open/filter port 27017/tcp on the firewall on the centos server. Depending on your server setup SELinux could also be a factor. SELinux is covered in the install guide", "username": "chris" }, { "code": "", "text": "The courses M103, M310 and M312 at https://university.mongodb.com/ are not based on Atlas and cover running mongodb on your own hardware.IntroductoryLearn how to start up basic MongoDB deployments, from the basic single mongod process, to replica sets and sharded clusters. This course will teach you to explore and configure these deployments using the MongoDB shell.AdvancedLearn basic MongoDB security features, integration capabilities and resources. The course project involves creating secured deployments of MongoDB for production ready environments.AdvancedLearn how to diagnose and debug issues that can arise in your MongoDB deployment. This course will go over a set of scenarios you might find in production, and introduce you to many of the tools and functionality that MongoDB’s support and field teams use to diagnose issues, and how to fix those problems once they’re identified.", "username": "steevej" } ]
Connect to MongoDb on linux server (Compass/Browser)
2020-09-09T20:03:51.111Z
Connect to MongoDb on linux server (Compass/Browser)
5,708
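Alongside the firewall/SELinux suggestion above, one quick check from a mongo shell on the server itself is which addresses mongod is bound to. This is a generic diagnostic sketch, not a command taken from the thread:

```
// If bindIp resolves to 127.0.0.1 only, remote Compass connections time out even
// though netstat shows the port listening locally.
db.adminCommand({ getCmdLineOpts: 1 }).parsed.net
```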
null
[ "queries" ]
[ { "code": "{\n \"_id\" : ObjectId(\"5f4e18f4f90762fe549a1eee\"),\n \"bookingNumber\" : \"NOO1034684\",\n \"bookingLines\" : [ \n {\n *\"bookingLineId\" : 11062129,*\n *\"bookingLineId\" : 99169522,*\n \"parentBookingLineId\" : null,\n \"articleCode\" : \"E\",\n \"isExtension\" : false,\n }, \n {\n \"bookingLineId\" : 11062130,\n \"parentBookingLineId\" : 11062129,\n \"articleCode\" : \"COL2\",\n \"isExtension\" : false,\n }\n ],\n \"bookingSchoolId\" : null,\n \"marketCode\" : \"NOO\",\n \"marketKey\" : \"Norway\",\n \"opportunityId\" : \"0061w000019Byu0AAC\",\n \"programCode\" : \"ILS\",\n}\n", "text": "Hi,I have the below data where there are 2 bookingLineId fields. I would like to get the number of such duplicate bookingLineId fields in my collection. Please let us know on this.", "username": "Vinay_Gangaraj" }, { "code": "> use test\nswitched to db test\n> db.community.insertOne({\n... \"bookingNumber\" : \"NOO1034684\",\n... \"bookingLines\" : [\n... {\n... \"bookingLineId\" : 11062129,\n... \"bookingLineId\" : 99169522,\n... \"parentBookingLineId\" : null,\n... \"articleCode\" : \"E\",\n... \"isExtension\" : false,\n... },\n... {\n... \"bookingLineId\" : 11062130,\n... \"parentBookingLineId\" : 11062129,\n... \"articleCode\" : \"COL2\",\n... \"isExtension\" : false,\n... }\n... ],\n... \"bookingSchoolId\" : null,\n... \"marketCode\" : \"NOO\",\n... \"marketKey\" : \"Norway\",\n... \"opportunityId\" : \"0061w000019Byu0AAC\",\n... \"programCode\" : \"ILS\",\n... })\n{\n\t\"acknowledged\" : true,\n\t\"insertedId\" : ObjectId(\"5f59e8ffc942c3417cad207b\")\n}\n> db.community.findOne()\n{\n\t\"_id\" : ObjectId(\"5f59e8ffc942c3417cad207b\"),\n\t\"bookingNumber\" : \"NOO1034684\",\n\t\"bookingLines\" : [\n\t\t{\n\t\t\t\"bookingLineId\" : 99169522,\n\t\t\t\"parentBookingLineId\" : null,\n\t\t\t\"articleCode\" : \"E\",\n\t\t\t\"isExtension\" : false\n\t\t},\n\t\t{\n\t\t\t\"bookingLineId\" : 11062130,\n\t\t\t\"parentBookingLineId\" : 11062129,\n\t\t\t\"articleCode\" : \"COL2\",\n\t\t\t\"isExtension\" : false\n\t\t}\n\t],\n\t\"bookingSchoolId\" : null,\n\t\"marketCode\" : \"NOO\",\n\t\"marketKey\" : \"Norway\",\n\t\"opportunityId\" : \"0061w000019Byu0AAC\",\n\t\"programCode\" : \"ILS\"\n}\n>\n", "text": "If you inserted such a document into MongoDB only one of those fields would get created. Its not possible to store a document in MongoDB with two fields with the same key at the same level. The second key will overwrite the first.I ran your example in the shell:The first ID is obliterated on insert.", "username": "Joe_Drumgoole" }, { "code": "", "text": "If I may add. Please see https://docs.mongodb.com/manual/core/document/#field-namesIn particular the following 2 paragraphs:BSON documents may have more than one field with the same name. Most MongoDB interfaces, however, represent MongoDB with a structure (e.g. a hash table) that does not support duplicate field names. If you need to manipulate documents that have more than one field with the same name, see the driver documentation for your driver.Some documents created by internal MongoDB processes may have duplicate fields, but no MongoDB process will ever add duplicate fields to an existing user document.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Finding a duplicate key in the collection
2020-09-08T13:20:06.317Z
Finding a duplicate key in the collection
6,122
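For the original question, counting bookingLineId values that occur in more than one bookingLines array element across the collection, an aggregation along these lines would work; the collection name is illustrative, it is not given in the thread:

```
// Unwind the array, group by bookingLineId, keep only ids seen more than once.
db.bookings.aggregate([
  { $unwind: "$bookingLines" },
  { $group: { _id: "$bookingLines.bookingLineId", count: { $sum: 1 } } },
  { $match: { count: { $gt: 1 } } },
  { $count: "duplicateBookingLineIds" }
])
```

As the answers point out, two identical keys inside the same subdocument cannot survive insertion, so this only finds ids repeated across separate array elements or documents.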
null
[ "queries", "performance" ]
[ { "code": "", "text": "I am using mongodb with rails app. From past few days, I am facing weird issues while querying the data directly from mongodb. Let’s say there is object M1 and this object has few parameters for which I need to query data from mongo. Now suppose I query data for current day, there are almost 5000 records, and I am using .to_a in rails controller, but this operation is taking almost 50~60 seconds. Now for same object M1 and for same as previous parameters, when I query data of any past day and use .to_a, it takes <10 seconds. Now again if I query for same object M1 and same parameters and use .to_a, it now takes <5 seconds. Also now if I change few of object parameters, it again take >50 seconds. And any query for historical data is taking <10 seconds but for current day is taking >50 seconds, unless I run that same query once for any past day. I have checked the indexes, and are being used correctly in each query. What can be possible reason for this?", "username": "Kunal_Kunkulol" }, { "code": "", "text": "Without any other measures I suspect that your data working set does not fit RAM and the server keeps going to disk to retrieve the required data. That is only a gut feeling as there is insufficient metrics to support any conclusion.", "username": "steevej" }, { "code": ".explain(\"executionStats\")find", "text": "When you query cold data (i.e. data which is not in memory) it must be paged from disk and this takes time. Once the data is in memory subsequent queries do not incur the same disk overhead and will be many orders of magnitude faster. The database will retain this data in memory until a different query requires it to flush that data from memory to page in data to satisfy a new query. Hence new queries that touch the same working set will be fast. Queries that hit a new data set will be slow. The big performance hit comes when you don’t allocate enough memory to keep all indexes in memory. Then every query using an index that is not in memory requires paging.Remember disks are about 100,000 times slower than memory when doing random access queries i.e. database queries.When posting questions about query speed we always recommend you post the explain plan. This can be obtained by running .explain(\"executionStats\") on the find in the shell.", "username": "Joe_Drumgoole" } ]
Mongo Query Performance Issue
2020-09-09T13:31:42.249Z
Mongo Query Performance Issue
2,085
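A sketch of the explain call suggested above, shaped like the query being discussed; the collection name, field names and date are placeholders:

```
db.measurements.find({
  object: "M1",
  createdAt: { $gte: ISODate("2020-09-09T00:00:00Z") }
}).explain("executionStats")
// Compare executionTimeMillis, totalKeysExamined and totalDocsExamined between a
// first (cold) run and an immediate re-run; a large gap points at data being paged
// in from disk rather than at a bad index.
```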
null
[]
[ { "code": "", "text": "Given a query result cursor, is it safe to delete/modify each item while iterating over the cursor?I run a query and get a cursor, for each of the result, I want to modify (“replace”) the found resource, or delete it.Is this a safe operation? Or am I required to finish iterating over ALL the results and only then make the changes?I’m using the Node SDK, if it matters.", "username": "Nathan_Hazout" }, { "code": "find", "text": "Hi @Nathan_Hazout!Good news: It’s a safe operation. Running a find query against a collection doesn’t imply a lock on the data in the database, so you can execute updates or deletes at any time.", "username": "Mark_Smith" }, { "code": "", "text": "Thanks so it doesn’t affect the cursor?\nDeleting an item doesn’t make the cursor jump back or forth, somehow modifying the underlying data that backs the cursor ?", "username": "Nathan_Hazout" }, { "code": "", "text": "When you read a cursor it actually calls getmore behind the scenes to get a batch of results. Each batch of results is copied from the database. Once they are in memory writes or deletes may affect the contents of the database independent of your query. So you might be reading a result which in reality has been deleted. If you want to guarantee that results are not being modified while they are being read you should enclose the query in an transaction.", "username": "Joe_Drumgoole" } ]
Deleting resources in cursor loop
2020-09-09T08:24:26.258Z
Deleting resources in cursor loop
4,459
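A minimal Node.js driver sketch of the pattern being asked about, with an invented collection handle and filter. Each batch the cursor returns is a copy, so deleting the document just read does not disturb the iteration; as noted above, only wrap the loop in a transaction if the results must not change underneath you:

```
const cursor = collection.find({ status: "expired" });
for await (const doc of cursor) {
  // The cursor keeps iterating over the batch it already fetched,
  // even though the underlying document is now gone.
  await collection.deleteOne({ _id: doc._id });
}
```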
null
[ "swift" ]
[ { "code": "public class WorkspaceMember: EmbeddedObject {\n@objc public dynamic var _id: String = UUID.init().uuidString\n@objc public dynamic var name: String = \"\"\n@objc public dynamic var userId: String = \"\"\n@objc dynamic var permission: Int = WorkspacePermission.read.rawValue\nlet workspace = LinkingObjects(fromType: Workspace.self, property: \"members\")\n\npublic var workspacePermission: WorkspacePermission {\n return WorkspacePermission(rawValue: permission) ?? .read\n}\n\n}\n\npublic class Workspace: UniqueObject {\n\n\n \n@objc public dynamic var title: String? = nil\n\n@objc public dynamic var createdAt: Date = Date()\n \n@objc public dynamic var active = true\n\npublic let members = List<WorkspaceMember>()\n \n }\nFailed to transform received changeset: Schema mismatch: 'WorkspaceMember' is an embedded table on one side, but not the other", "text": "I just discovered EmbeddedObject in the RealmSwift SDK reference and tried implementing it like this:However, I am facing two problems that I cannot resolve and don’t see documented.Compatibility with custom “objectTypes” in Configuration. In my RealmConfiguration, I want to specify the Types I use inside the “objectTypes” argument which expects an [Object.Type] input. However EmbeddedObject does not inherit from Object and cannot be passed here. When omitting it, I’m confronted with a Realm runtime error (WorkspaceMember not included in ObjectTypes)…When not specifying ObjectTypes I’m seeing the following error in my local Sync log:Failed to transform received changeset: Schema mismatch: 'WorkspaceMember' is an embedded table on one side, but not the otherAlso when removing Configuration for Workspace and WorkspaceMember and deleting the table on the server, I’m seeing other errors (e.g. duplicate configuration). So a solution for 1. would probably help here as well.Did anybody else experiment with EmbeddedDocument yet?", "username": "Christian_Huck" }, { "code": "", "text": "Also with Beta 3 for RealmSwift I could not get EmbeddedDocument to work.", "username": "Christian_Huck" }, { "code": "realm:v10realm:em/config-type-embedded", "text": "@Christian_Huck Can you try beta.4 ? We merged a fix for this here:This changes the property type of Realm.Configuration.objectTypes from\n```\npub…lic var objectTypes: [Object.Type]?\n```\nto \n```\npublic var objectTypes: [ObjectBase.Type]?\n```\nwhere `public typealias ObjectBase = RLMObjectBase`.\n\nThis would allow developers to use embedded objects with a custom defined schema in a Realm.Configuration. Since `EmbeddedObject` inherits from `RLMObjectBase` and not `RealmSwift.Object` passing an embedded object into a `objectTypes` is invalid:\n```\nCannot assign value of type '[RLMObjectBase.Type]' to type '[Object.Type]?'\n```\nDeveloper defined classes that inherit from `Embedded Object` instead of `Object` will be never registered directly to the Realm, ie. the class won't be in the realm schema.\n```\nUnlike normal top-level objects,\nembedded objects cannot be directly created in or added to a Realm. 
Instead,\nthey can only be created as part of a parent object, or by assigning an\nunmanaged object to a parent object's property.\n```\nBut leaving out a custom class in the `config.objTypes` will throw this error if the embedded parent class is included:\n```\n'child.memberOf' links to class 'Parent', which is missing from the list of classes managed by the Realm'\n```\n\n[036e857](https://github.com/realm/realm-cocoa/pull/6703/commits/036e8571ce71f66d5d31c1d28808122507e8b120) addresses a change to SwiftLint rules.For number 2 - that looks like a schema mismatch. I would check your schema on the server-side", "username": "Ian_Ward" }, { "code": "", "text": "@Ian_Ward thank you for the fix for 1) Indeed I was now able to specify my objectTypes correctly. Now with the new betas (3 and 4) I have the issue that my functions do not get called anymore. It used to work with version 2. There are no logs on the server. I´m working on an example to reproduce that. Then I will see if 2) is really a schema error as you assume and I could finally use EmbeddedDocuments in my app.", "username": "Christian_Huck" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using EmbeddedObject in RealmSwift
2020-08-25T07:34:19.227Z
Using EmbeddedObject in RealmSwift
1,903
https://www.mongodb.com/…dc7410be9e25.png
[ "atlas-search" ]
[ { "code": "", "text": "Hi,I have update my collection with aggregate $out, but my search index was not updated and still have old documents inside !Any Idea what is appening ?Thanks for help !\nimage1090×281 15.8 KB", "username": "Jonathan_Gautier" }, { "code": "", "text": "Hi @Jonathan_Gautier,Not sure what exactly you observe ? Do you see that regular aggregation return one document but the text search can’t find it after the $out?Also what cluster type is that?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I got M30.The problem was i got 192912 documents in my collection. But in my Search Index i got 325113 documents.\nWhy my search index was not updated ?( I have talk about aggregate because i update my collection for searching with an aggregate $out, my aggregate return 192912 documents and update my collection ! But after this my search index was not updated )Thanks !", "username": "Jonathan_Gautier" }, { "code": "", "text": "Hi @Jonathan_Gautier,.I am afraid that the Atlas search cannot support auto indexing of $out results at the moment. So when $out is finished it will rename the collection it output to and that changes the uuid of the collection resulting in detaching the index .You should try the following:Thanks for sharing this we will update docs to point this limitation.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for your help !$merge was the solution of my problem ! He update my search index instantly <3", "username": "Jonathan_Gautier" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Index Search Not Updated Aggregate $out
2020-09-08T23:33:24.987Z
Index Search Not Updated Aggregate $out
2,049
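A sketch of the $merge workaround that resolved this thread, with illustrative collection names (the real pipeline is not shown above). Unlike $out, $merge writes into the existing target collection, so the Atlas Search index attached to it keeps updating:

```
db.products_staging.aggregate([
  // ...the original transformation stages go here...
  { $merge: {
      into: "products",            // existing collection carrying the search index
      on: "_id",
      whenMatched: "replace",
      whenNotMatched: "insert"
  } }
])
```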
null
[ "cxx" ]
[ { "code": "", "text": "Hello, I want to write a function instead of aggregation. How can I do this in mongocxx?Here (https://docs.mongodb.com/manual/reference/operator/aggregation/function/?searchProperty=current&query=%24function) is a function writing with aggregation, how can I write another function instead?I’ve been searching for a long time, but I couldn’t find an example that says function with cxx.", "username": "sylvester" }, { "code": "$function", "text": "Hi @sylvester,I’ve been searching for a long time, but I couldn’t find an example that says function with cxx.If you’re asking whether you can write a function in C++ instead of JavaScript, unfortunately currently (MongoDB version 4.4) the only supported language is JavaScript.What kind of operations were you trying to perform using $function ? If it’s possible I would recommend to try using MongoDB Aggregation Pipeline.Regards,\nWan.", "username": "wan" } ]
How to define function in mongocxx
2020-09-04T10:40:25.680Z
How to define function in mongocxx
1,711
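For reference, this is roughly what the $function operator mentioned in the question looks like inside a pipeline (MongoDB 4.4+). The body must be JavaScript even when the pipeline is assembled from mongocxx; the collection and field names here are invented:

```
db.orders.aggregate([
  { $addFields: {
      sizeLabel: {
        $function: {
          body: function(total) { return total > 100 ? "large" : "small"; },
          args: ["$total"],
          lang: "js"
        }
      }
  } }
])
```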
null
[]
[ { "code": "", "text": "I’ve been trying to link a dev realm app and a production realm app to different subdomains. So the dev one would go to dev.example.com and the production one would go to app.production.com. For some reason the production app will not successfully verify the custom domain I am providing. I have had the CNAME record for verification in place for over 24 hours.Additionally, something strange is going on when I got o app.example.com. It is showing me the development realm app. Not sure what is going on there. It might be worth noting I have already created a CNAME record for app.example.com that is pointing to my mongodbstitch.com domain. Not sure if that has anything to do with it.My questions are:Thanks!", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Hi @Lukas_deConantseszn1,In your case app and dev are your subdomains and not the domain name. I honestly not sure what happens if two different apps are only differed by a subdomain but I can find out.I am afraid we might only query the domain name which is already mapped to dev and that could explain why you eventually get routed to it.Please verify that you do not have something mentioned in the note boxes\nhttps://docs.mongodb.com/realm/hosting/use-a-custom-domain-name/#specify-the-custom-domain-in-realmAlso what dns service do you use?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,That’s right. So you are thinking it could possibly be fixed by removing the CNAME to dev.example.com?My DNS is DigitalOcean, but the domain is registered through 101Domains where I have manually updated the nameserver records to point to DigitalOcean.I don’t believe that DigitalOcean proxies domain requests from what I’ve read, but I’m not 100% sure.", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Hi @Lukas_deConantseszn1,According to our engineers the subdomain configuration should work - just need to specify the full domain, including the subdomain itself, when configuring it for each app.Can you share your application links? And a screen shot of dns configBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_DuchovnyAfter using the subdomain for the custom domain, it seemed to work! I was confused because previously for my development site for dev.example.com, I had set the custom domain to just example.com and that’s how it had worked.", "username": "Lukas_deConantseszn1" } ]
Multiple Realm Apps linking to Same Domain, Multiple Subdomains
2020-09-08T04:23:51.470Z
Multiple Realm Apps linking to Same Domain, Multiple Subdomains
2,612
null
[]
[ { "code": "", "text": "I have been trying to implement a CI/CD pipeline that deploys my code between a development realm app, and a production realm app, depending on the github branch that is merged.I have been trying to figure out how to do it and it seems pretty difficult due to all of the configuration that exists inside the repo for each realm app. It’s almost like I need env variables I can use throughout the functions, serices, values, and triggers files. But I’m not even sure about that working fully.I almost feel like I need 3 separate github repos. One to hold my dev app, one to hold my production app, and then another to hold my hosting files which is the front-end app. In this case, my CI/CD process could basically watch for PRs from the front-end app and do commits into the respective realm app repo. Then each realm app repo would be auto-deploy linked to the app.I don’t really know at all if this is best. Does anybody have experience with this issue? I just want the most logical set up for having multiple realm apps that separate environments like prod vs dev.Thanks!\nLukas", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Hi @Lukas_deConantseszn1What blocks you currently from deploying with the github integration to 3 different applications:\nDev, Stage , Prod.Each one can be linked to a branch and therefore be deployed respectively.\nhttps://docs.mongodb.com/realm/deploy/deploy-automatically-with-github/The docs show exactly how to export an app.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "It seems to me that some of the code in the config files, like the services files, specifies a specific DB to use. What if I want to use a different database between dev and prd? If this is possible, it is not very obvious on how to do so.On a similar note, I am trying to make some changes to my github automatic deploy link and it doesn’t seem to be working. 
I am trying to connect to a different rep, and it keeps pulling up the old repo.", "username": "Lukas_deConantseszn1" }, { "code": "context.services\n .get(\"mongodb-atlas\")\n .db(context.values.get(\"defaultDatabase\"))\nimport path from \"path\";\nimport fs from \"fs-extra\";\nimport glob from \"glob\";\nimport deepmerge from \"deepmerge\";\n\nif (process.argv.length < 4) {\n throw new Error(\"Expected at least 4 runtime arguments\");\n}\n\nconst args = process.argv.slice(process.argv.length - 3);\n\nconst mappingPath = path.resolve(args[0]);\nconst appPath = path.resolve(args[1]);\nconst destinationPath = path.resolve(args[2]);\n\nconsole.log(`Copying ${appPath} to ${destinationPath} (overwriting)`);\nfs.removeSync(destinationPath);\nfs.copySync(appPath, destinationPath, { overwrite: true });\n\nconst mapping = fs.readJSONSync(mappingPath);\nfor (const [fileGlob, replacement] of Object.entries(mapping)) {\n const files = glob.sync(fileGlob, { cwd: destinationPath });\n for (const relativeFilePath of files) {\n const filePath = path.resolve(destinationPath, relativeFilePath);\n const content = fs.readJSONSync(filePath);\n const mergedContent = deepmerge(content, replacement as any);\n fs.writeJSONSync(filePath, mergedContent, { spaces: 2 });\n }\n}\n{\n \"config.json\": {\n \"app_id\": \"my-app-prod-abcde\",\n \"hosting\": {\n \"custom_domain\": \"app.my-app.io\",\n \"app_default_domain\": \"my-app-prod-abcde.mongodbstitch.com\"\n },\n \"custom_user_data_config\": {\n \"database_name\": \"my-app-prod\"\n }\n },\n \"values/defaultDatabase.json\": {\n \"value\": \"my-app-prod\"\n },\n \"services/mongodb-atlas/rules/*.json\": {\n \"database\": \"my-app-prod\"\n }\n}\nts-nodets-node --project scripts/tsconfig.json scripts/search-replace.ts realm-app/production.json realm-app production-realm-app\nrealm-appproduction.json", "text": "You might find Lauren Schaefer’s series of articles on “DevOps + MongoDB Realm Serverless Functions = ” an interesting read: https://www.mongodb.com/how-to/unit-test-realm-serverless-functionsAnd her video an interesting watch: DevOps + MongoDB Serverless = Wow! - YouTubeI don’t know of other “official” guidelines on this, but I know that the team is currently revisiting the app export/import and configuration file format, in part to tackle this very problem. I think it would be valuable for them to hear what you think would be a good solution to the problem.As a (temporary) workaround, I’ll share what I’ve personally done on one of my projects.I’ve stored the name of my environment specific database in a “value” (named “defaultDatabase”) and retrieve it from functions like so:But, that only solves the issue for function definitions.For the configuration files I’ve written a small (TypeScript) script that takes the declaration of my “staging” app and patch in values that are relevant for production:I execute this with ts-node like so:It basically copies the app config files from realm-app and for every key in production.json it finds files matching the glob (specified by the key’s string value) and deep-replace the values defined by the value in those files.I hope that all makes sense, feel free to use the code above if you choose to go down the same path as me.", "username": "kraenhansen" }, { "code": "", "text": "Hi @Pavel_Duchovny and @kraenhansen,Thank you very much for the responses. That’s also a lot of great info and I am very grateful. In this case, are you not using the GitHub linked automatic deployments? What kind of CI/CD tool are you using? 
Are you running this find/replace using some sort of CI/CD process that checks out the code and then runs your script? Or are you just doing it locally/manually each time? I might try this path. Your script looks very robust so thanks!How can I stay up to date on what the team decides to do with the export/import formats?", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Thanks for the shoutout!The talk you linked to specifically explains how I handled using GitHub autodeployments for deploying to dev/qa/prod. Here is the GitHub repo with my Travis CI file and an explanation of how it all works together: GitHub - mongodb-developer/SocialStats.Note that when I wrote this earlier this year, autodeploys only worked from the master branch, which is why I have 3 GitHub repos. I haven’t had time to revisit this yet to rework it so dev/qa/prod all live in the same GitHub repo.", "username": "Lauren_Schaefer" }, { "code": "", "text": "@Lauren_Schaefer thanks so much for posting and I did watch your video on TravisCI and your SocialStats app. It was very informative and I loved the part about unit testing for realm functions.For my set up, I really need to do the whole find/replace thing for different database names. I don’t remember you mentioning find/replace or different DB names in the video, so I’m wondering if each DB had the same name just in different Atlas projects?I think that something Realm could really benefit from is the concept of environments that can assign apps to. Like each environment would have it’s own DB, Domain, and other relevant values stored in some kind of variable that could sprinkle throughout the realm code. Like realmEnvironment.DATABASE or something similar.", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "The databases and collections had the same names–I used a different Atlas project for each stage (dev/qa/prod). I did app configurations in a few different places:Yes, these are a bit hacky, but it’s working for me. The Realm team is aware of the need for environment variables and looking into solutions (though I can’t say if/when they’ll be coming). FYI @Drew_DiPalma", "username": "Lauren_Schaefer" }, { "code": "", "text": "Update: I started working on a new project today. I decided to try deploying from branches other than master, and it’s working great. The config we setup is…", "username": "Lauren_Schaefer" }, { "code": "", "text": "I am really liking the use of separate clusters and I am going to implement this. Has a lot of benefits and solves a lot of complexities with different Realm apps.The one thing I wish the automatic deploy had is some way to run some CI before the auto-deploy goes out. Like an intermediary step. This is where I would like to run a react build on my application before it goes into Realm. Otherwise, I would be utilizing the automatic build, but because of that issue I am using GitHub Actions to deploy.", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
One github repo, Multiple Realm Apps, Production vs Dev, CI/CD implications
2020-09-05T14:55:41.185Z
One github repo, Multiple Realm Apps, Production vs Dev, CI/CD implications
3,965
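Putting the snippet from the thread into a complete Realm function shows how a database name stored in a Value keeps the same code deployable to dev/stage/prod apps; the collection name is a placeholder:

```
exports = async function(filter) {
  // "defaultDatabase" is a Realm Value that differs per app (dev/stage/prod).
  const db = context.services
    .get("mongodb-atlas")
    .db(context.values.get("defaultDatabase"));
  return db.collection("posts").find(filter).toArray();
};
```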
null
[]
[ { "code": "", "text": "Hi all, we are trying to migrate the MongoDB from a Virtual Server to a Physical Server and in that process when we do the import of the data its failing on one of the index with the following issue.I have searched in Google and found no help so reaching out the forum here to see if any one else has faced the same issue.We are currently using Version 3.6 and running on a server with 300gb ram and 32 cores.Pl let me know if you have any ideas of how to get this fixed.", "username": "Ravi_Thotapalli" }, { "code": "", "text": "Hi @Ravi_Thotapalli welcome to the community.Can you post the whole error message? There should be more information regarding the cause of the error in the full message.Please also post:Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Kevin, thanks for your email and here is the info.DB version of the Mongo we are running is 3.6.16We did follow the same process of mongodump/mongorestore. We are doing this to migrate our DB from a VM to a Physical server.I dont think we dropped the Index as this is related to a Product and we did not want to mess with it.Here is what we see on the error msgs after we enabled the logging level to debug mode.COMMAND [conn3894] command admin.$cmd command: isMaster { ismaster: true, $db: “admin” } numYields:0 reslen:208 locks:{} protocol:op_msg 0ms\n2020-09-06T12:51:11.308-0700 F INDEX [conn4488] Found an invalid index { v: 2, key: { _id: 1 }, name: “id”, ns: “bigid-server.tmp.agg_out.46” } on the bigid-server.tmp.agg_out.46 collection: CannotCreateIndex: use of v0 indexes is only allowed with the mmapv1 storage engine\n2020-09-06T12:51:11.308-0700 F - [conn4488] Fatal Assertion 28782 at src/mongo/db/catalog/index_catalog_impl.cpp 176\n2020-09-06T12:51:11.308-0700 F - [conn4488]thanks\nravi", "username": "Ravi_Thotapalli" }, { "code": "mongorestore --noIndexRestore", "text": "That is a peculiar error message, and typically seen on a failed upgrade/downgrade process. You shouldn’t see this message using a straightforward mongodump/mongorestore process. I also tried many different ways to try to induce this message, but haven’t been successful so far.Could you provide more information:One last option to try is to do mongorestore --noIndexRestore. However this would mean that the indexes would need to be recreated manually.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Kevin, here is the infoI will have to work with my team on the restore. however we are thinking of actually creating a new instance of DB on the same server and try it out if it works atleast we will not be dead in the water. As this is a prod system we have been down over 2 weeks now. Working with our vendor who supplied this community edition is working slowly.thanks\nravi", "username": "Ravi_Thotapalli" }, { "code": "mongorestore --noIndexRestore", "text": "mongorestore --noIndexRestore .Kevin I believe we have tried the noIndexRestore option and even with that we are seeing the same issue.", "username": "Ravi_Thotapalli" }, { "code": "mongod --version", "text": "Hi Ravi, let me recap what I understand so far:Do the points above reflect your experience?Can I ask you to provide the output of mongod --version as well? Also how did you install MongoDB in the physical server? 
Did you use the steps outlined in Install MongoDB Community Edition on Red Hat or CentOS, or are you using some other method?Out of curiosity, have you tried restoring the dump file into another VM and not the physical server?Best regards,\nKevin", "username": "kevinadi" } ]
Fatal assertion 28782 at src/mongo/db/catalog/index_catalog_impl.cpp
2020-09-04T04:35:16.016Z
Fatal assertion 28782 at src/mongo/db/catalog/index_catalog_impl.cpp
3,493
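The failing namespace in the log is an internal tmp.agg_out.* collection, so one speculative check (not something suggested in the thread) is to look for leftover temporary collections and their index definitions on the source before taking the dump:

```
// Mongo shell, run against the source database.
db.getCollectionNames()
  .filter(function(name) { return name.indexOf("tmp.agg_out") === 0; })
  .forEach(function(name) {
    printjson({ collection: name, indexes: db.getCollection(name).getIndexes() });
  });
```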
null
[ "node-js", "mongoose-odm" ]
[ { "code": "welcome", "text": "Hello everyone,So I’ve been working to finish my Welcomer bot on Discord, but I couldn’t, because I can’t find a way how I can remove a String from an object!So I have my welcome object and 3 strings inside of the welcome object. How can I delete one of the strings with using JavaScript & mongoose? Some people told me to use $unset but when I try to use it, it deletes the welcome object with my 2 stringsAny help would be appreciated \nScreenshot_20200909_172813|690x107", "username": "Mystic_Devs" }, { "code": "You care for the pipeline,this part\n[{\"$project\":{\"welcome\":{\"channel\":\"$welcome.channel\",\"message\":\"Hello updated!\"}}}]\n> use testdb\nswitched to db testdb\n> db.testcoll.drop()\ntrue\n> db.testcoll.insert({\"welcome\":{\"channel\":\"73577\",\"message\":\"Hello\",\"guildid\":\"75720\"}});\nWriteResult({ \"nInserted\" : 1 })\n> db.testcoll.find({}).pretty();\n{\n\t\"_id\" : ObjectId(\"5f59524e2780882cafc2417b\"),\n\t\"welcome\" : {\n\t\t\"channel\" : \"73577\",\n\t\t\"message\" : \"Hello\",\n\t\t\"guildid\" : \"75720\"\n\t}\n}\n> db.testcoll.update({},[{\"$project\":{\"welcome\":{\"channel\":\"$welcome.channel\",\"message\":\"Hello updated!\"}}}]);\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\n> db.testcoll.find({}).pretty();\n{\n\t\"_id\" : ObjectId(\"5f59524e2780882cafc2417b\"),\n\t\"welcome\" : {\n\t\t\"channel\" : \"73577\",\n\t\t\"message\" : \"Hello updated!\"\n\t}\n}\n", "text": "Hello @Mystic_Devs : )The below does 3 things,keeps the channel as it was,updated the message,and removes the guilid.\nIts update pipeline so you need mongoDB >=4.2.With the old update way its even simpler from this\ncode but pipelines updates are so powerful,so i use those.Hope it helps and good luck with your bot.", "username": "Takis" } ]
Removing a String from an Object
2020-09-09T20:03:27.246Z
Removing a String from an Object
2,184
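The $unset behaviour the question ran into can also be handled without an update pipeline: target the nested field with dot notation and only that field is removed, rather than the whole welcome object. The values below mirror the example in the thread:

```
db.testcoll.updateOne(
  { "welcome.guildid": "75720" },
  { $unset: { "welcome.guildid": "" } }
)
```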
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.4.1 is out and is ready for production deployment. This release contains only fixes since 4.4.0, and is a recommended upgrade for all 4.4 users.\nFixed in this release:4.4 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.4.1 is released
2020-09-09T21:18:49.431Z
MongoDB 4.4.1 is released
2,038
null
[ "kafka-connector" ]
[ { "code": "Caused by: org.apache.kafka.con │ │ nect.errors.DataException: Only Struct objects supported for [converting timestamp formats], found: java.lang.String\\n\\tat org.apache.kafka.connect.transforms.util.Requirements.requireStruct(Requireme │ │ nts.java:52)\\n\\tat org.apache.kafka.connect.transforms.util.Requirements.requireStructOrNull(Requirements.java:61)\"transforms\": \"revecied_at_unix_to_iso\",\n\"transforms.revecied_at_unix_to_iso.type\": \"org.apache.kafka.connect.transforms.TimestampConverter$Value\",\n\"transforms.revecied_at_unix_to_iso.field\": \"fullDocument.received.$date\",\n\"transforms.revecied_at_unix_to_iso.format\": \"yyyy-MM-dd'T'HH:mm:ssZ\",\n\"transforms.revecied_at_unix_to_iso.target.type\": \"string\"\n", "text": "When using mongo-kafka source connector, I receiveCaused by: org.apache.kafka.con │ │ nect.errors.DataException: Only Struct objects supported for [converting timestamp formats], found: java.lang.String\\n\\tat org.apache.kafka.connect.transforms.util.Requirements.requireStruct(Requireme │ │ nts.java:52)\\n\\tat org.apache.kafka.connect.transforms.util.Requirements.requireStructOrNull(Requirements.java:61)with the following config for transforms:How to make SMT work with mongo-kafka connect?", "username": "Sendoh_Daten" }, { "code": "mongo-kafkafullDocument.receivedformatyyyy-MM-dd'T'HH:mm:ssZ", "text": "Hi @Sendoh_Daten,The messages from mongo-kafka source connector are Extended JSON strings. What does the document look like in the collection ? Especially for fullDocument.received field.Looking at Kafka Connect TimestampConverter SMT, the format property can used to generate the output or parse the input. Could you clarify whether you’re trying to generate a string date in the format of yyyy-MM-dd'T'HH:mm:ssZ ?Regards,\nWan.", "username": "wan" }, { "code": "", "text": "The next version of the connector will support formats other than string so you will be able to do SMTs", "username": "Robert_Walters" } ]
Transform timestamp in mongo-kafka connector
2020-02-17T12:16:07.069Z
Transform timestamp in mongo-kafka connector
4,771
null
[ "kafka-connector" ]
[ { "code": "", "text": "Hi, the documentation explains clearly\n“Data is captured via Change Streams within the MongoDB cluster and published into Kafka topics”Question: Upon creating a new connector - is it possible to load entire collections into Kafka?\nOr will it only ever be possible to begin with changes?", "username": "Hartmut" }, { "code": "", "text": "If above would be a good or bad idea to remain undecided.\n(But for the data volumes I’m looking at for my current use case it might be feasible…)", "username": "Hartmut" }, { "code": "db.getCollection(\"mycollection\").aggregate([{ $out: 'mycollection_copy' }]);\ndb.getCollection(\"mycollection\").drop();\ndb.getCollection(\"mycollection_copy\").aggregate([{ $out: 'mycollection' }]);\n", "text": "I tried to stimulate / trigger a ‘full refresh’ manually but didn’t succeed.Both with & without the ‘drop’ in-between the aggregate ‘copyTo’ documents are actually copied / replaced. But the change stream does not seem to be triggered, so it’s not working for my purpose.Any tips or ideas?", "username": "Hartmut" }, { "code": "", "text": "copy.existing parameter might help you", "username": "Robert_Walters" } ]
Kafka Connect Source - read/load entire collection upon new connector created
2020-09-02T17:21:41.468Z
Kafka Connect Source - read/load entire collection upon new connector created
2,752
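A sketch of a source-connector definition using the copy.existing property mentioned above: on startup the connector first copies the documents already in the collection into the topic, then continues with change stream events. Connection details and names are placeholders:

```
{
  "name": "mongo-source-with-copy",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://mongo1:27017",
    "database": "mydb",
    "collection": "mycollection",
    "copy.existing": "true"
  }
}
```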
null
[ "golang" ]
[ { "code": "[ec2-user@ip-172-31-35-142 mongo-driver]$ pwd\n/home/ec2-user/go/src/go.mongodb.org/mongo-driver\n[ec2-user@ip-172-31-35-142 mongo-driver]$ dir\nbenchmark bson cmd CONTRIBUTING.md data etc event examples go.mod go.sum internal LICENSE Makefile mongo README.md tag THIRD-PARTY-NOTICES vendor version x\n[ec2-user@ip-172-31-35-142 mongo-driver]$ cd mongo\n[ec2-user@ip-172-31-35-142 mongo]$ dir\nbatch_cursor.go client_examples_test.go crypt_retrievers.go index_options_builder.go options single_result_test.go\nbson_helpers_test.go client.go cursor.go index_view.go readconcern testatlas\nbulk_write.go client_options_test.go cursor_test.go integration readpref testaws\nbulk_write_models.go client_side_encryption_examples_test.go database.go main_test.go read_write_concern_spec_test.go util.go\nchange_stream_deployment.go client_test.go database_test.go mongocryptd.go results.go with_transactions_test.go\nchange_stream.go collection.go doc.go mongo.go results_test.go writeconcern\nchange_stream_test.go collection_test.go errors.go mongo_test.go session.go\nclient_encryption.go crud_examples_test.go gridfs ocsp_test.go single_result.go\n[ec2-user@ip-172-31-35-142 mongo-driver]$ go env\nGO111MODULE=\"\"\nGOARCH=\"amd64\"\nGOBIN=\"\"\nGOCACHE=\"/home/ec2-user/.cache/go-build\"\nGOENV=\"/home/ec2-user/.config/go/env\"\nGOEXE=\"\"\nGOFLAGS=\"\"\nGOHOSTARCH=\"amd64\"\nGOHOSTOS=\"linux\"\nGONOPROXY=\"\"\nGONOSUMDB=\"\"\nGOOS=\"linux\"\nGOPATH=\"/home/ec2-user/go\"\nGOPRIVATE=\"\"\nGOPROXY=\"direct\"\nGOROOT=\"/usr/lib/golang\"\nGOSUMDB=\"off\"\nGOTMPDIR=\"\"\nGOTOOLDIR=\"/usr/lib/golang/pkg/tool/linux_amd64\"\nGCCGO=\"gccgo\"\nAR=\"ar\"\nCC=\"gcc\"\nCXX=\"g++\"\nCGO_ENABLED=\"1\"\nGOMOD=\"/home/ec2-user/go/src/go.mongodb.org/mongo-driver/go.mod\"\nCGO_CFLAGS=\"-g -O2\"\nCGO_CPPFLAGS=\"\"\nCGO_CXXFLAGS=\"-g -O2\"\nCGO_FFLAGS=\"-g -O2\"\nCGO_LDFLAGS=\"-g -O2\"\nPKG_CONFIG=\"pkg-config\"\nGOGCCFLAGS=\"-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build744267489=/tmp/go-build -gno-record-gcc-switches\"\n\n[ec2-user@ip-172-31-35-142 ~]$ sudo docker build -t my-go-app .\nSending build context to Docker daemon 136.8MB\nStep 1/8 : From golang\n ---> 75605a415539\nStep 2/8 : MAINTAINER Jyoti Sarkar <[email protected]>\n ---> Using cache\n ---> d61475aade96\nStep 3/8 : RUN mkdir /app\n ---> Using cache\n ---> 91952fea89b3\nStep 4/8 : ADD . /app\n ---> Using cache\n ---> eecad49ab17c\nStep 5/8 : WORKDIR /app\n ---> Using cache\n ---> f0ab8a35fcb6\nStep 6/8 : RUN go build -o main\n ---> Running in 808220c76627\nmain.go:9:2: cannot find package \"go.mongodb.org/mongo-driver/bson\" in any of:\n /usr/local/go/src/go.mongodb.org/mongo-driver/bson (from $GOROOT)\n /go/src/go.mongodb.org/mongo-driver/bson (from $GOPATH)\nmain.go:10:2: cannot find package \"go.mongodb.org/mongo-driver/mongo\" in any of:\n /usr/local/go/src/go.mongodb.org/mongo-driver/mongo (from $GOROOT)\n /go/src/go.mongodb.org/mongo-driver/mongo (from $GOPATH)\nmain.go:11:2: cannot find package \"go.mongodb.org/mongo-driver/mongo/options\" in any of:\n /usr/local/go/src/go.mongodb.org/mongo-driver/mongo/options (from $GOROOT)\n /go/src/go.mongodb.org/mongo-driver/mongo/options (from $GOPATH)\nThe command '/bin/sh -c go build -o main' returned a non-zero code: 1\n", "text": "Dear friends,\nNeed your help. I have installed GO in EC2, able to run hello world program. I am trying to access mongodb database from my program. I have installed the driver, checked the folders structure, check the paths. 
Here is the detail:Folders structure in EC2 instanceBuilding my go programWhy am I getting above error? Pls advice.", "username": "Jyoti_Sarkar" }, { "code": "go env/usr/lib/golang/home/ec2-user/go/usr/local/go//go/", "text": "Hi @Jyoti_Sarkar,This looks like an issue with environment variables not being carried over when using Docker. Your go env output shows the GOROOT is /usr/lib/golang and GOPATH is /home/ec2-user/go but the Docker output shows that it’s looking for the driver in /usr/local/go/ for GOROOT and /go/ for GOPATH. I don’t have much experience with Docker, but a good starting point would be to investigate how environment variables should be set, either on your system or in your Docker config file, to make sure they’re propagated correctly.", "username": "Divjot_Arora" } ]
GO+MongoDB in EC2 instance - build error
2020-09-01T20:52:42.382Z
GO+MongoDB in EC2 instance - build error
2,598
null
[ "mongoose-odm" ]
[ { "code": "seeds/seed.jsubuntu@ip-10-0-104-49:~/AppFolder/app$ node seeds/seed.js \n(node:1799) DeprecationWarning: `open()` is deprecated in mongoose >= 4.11.0, use `openUri()` instead, or set the `useMongoClient` option if using `connect()` or `createConnection()`. See http://mongoosejs.com/docs/4.x/docs/connections.html#use-mongo-client\nDatabase Cleared\nDatabase Seeded\nrun show dbs;rs0:PRIMARY> show dbs;\nlocal 0.000GB\nposts 0.000GB\nnode seeds/seed.js2020-05-13T01:47:32.158+0000 I REPL [ReplicationExecutor] New replica set config in use: { _id: \"rs0\", version: 3, protocolVersion: 1, members: [ { _id: 0, host: \"ip-10-0-1-100:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: \"ip-10-0-2-100:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: \"ip-10-0-3-100:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ebb51b31fa00dc07eacc09f') } }\n2020-05-13T01:47:32.158+0000 I REPL [ReplicationExecutor] This node is ip-10-0-1-100:27017 in the config\n2020-05-13T01:47:32.158+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to ip-10-0-3-100:27017\n2020-05-13T01:47:32.161+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to ip-10-0-3-100:27017, took 4ms (2 connections now open to ip-10-0-3-100:27017)\n2020-05-13T01:47:32.161+0000 I REPL [ReplicationExecutor] Member ip-10-0-2-100:27017 is now in state STARTUP2\n2020-05-13T01:47:32.162+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to ip-10-0-3-100:27017, took 4ms (2 connections now open to ip-10-0-3-100:27017)\n2020-05-13T01:47:32.163+0000 I REPL [ReplicationExecutor] Member ip-10-0-3-100:27017 is now in state STARTUP\n2020-05-13T01:47:33.026+0000 I REPL [rsSync] transition to primary complete; database writes are now permitted\n2020-05-13T01:47:34.162+0000 I REPL [ReplicationExecutor] Member ip-10-0-2-100:27017 is now in state SECONDARY\n2020-05-13T01:47:34.164+0000 I REPL [ReplicationExecutor] Member ip-10-0-3-100:27017 is now in state SECONDARY\n2020-05-13T01:47:59.204+0000 I NETWORK [conn54] received client metadata from 10.0.104.49:57540 conn54: { driver: { name: \"nodejs\", version: \"2.2.34\" }, os: { type: \"Linux\", name: \"linux\", architecture: \"x64\", version: \"4.4.0-1106-aws\" }, platform: \"Node.js v8.17.0, LE, mongodb-core: 2.1.18\" }\n2020-05-13T01:48:32.162+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Ending idle connection to host ip-10-0-3-100:27017 because the pool meets constraints; 1 connections to that host remain open\nvar mongoose = require('mongoose');\n\nvar PostSchema = new mongoose.Schema({\n title: String,\n body: String\n});\n\n\nmodule.exports = mongoose.model('Post', PostSchema);\nvar Post = require('../models/post');\nvar mongoose = require('mongoose');\nvar faker = require('faker');\n\nif(process.env.DB_HOST) {\n mongoose.connect(process.env.DB_HOST);\n\n Post.remove({} , function(){\n console.log('Database Cleared');\n });\n\n var count = 0;\n var num_records = 100;\n\n for(var i = 0; i < num_records; i++) {\n Post.create({\n title: faker.random.words(),\n body: faker.lorem.paragraphs()\n }, function(){\n 
count++;\n if(count >= num_records) {\n mongoose.connection.close();\n console.log(\"Database Seeded\");\n }\n }); \n }\n}\nexport DB_HOST=mongodb://ip-10-0-1-100:27017/posts,ip-10-0-2-100:27017/posts,ip-10-0-3-100:27017/posts?replicaSet=rs0", "text": "Hello everybody.Seeking for advice about a nodejs app and mongoDB seeding.I have a nodejs app which it should be able to seed a mongodb once i export a variable DB_HOST which the correct IP.I am able to go through all the steps and be able to ping my mongodb instance from my app instance.When i run the command node seeds/seed.jsi get:but if i log in my mongodb instance and in mongo console\nrun show dbs;all i have is:the posts, should have some data in, as my node seeds/seed.js should create some random posts and pass them to mongodb posts.now, i checked my mongodb logs for troubleshooting but i dont have any error, actually it says that he recived clientdata from my app instance:but still my posts is empty.Any idea about why is this happening? i always had a problem with step.i dont know if this might help but i post the important parts of the app js:this is posts.jsand this is the seeds.js:all what i had to do, is set the DB_HOST to a mongodb link.which in my case is:\nexport DB_HOST=mongodb://ip-10-0-1-100:27017/posts,ip-10-0-2-100:27017/posts,ip-10-0-3-100:27017/posts?replicaSet=rs0there are 3 because i am working with a replicaSet. i have the IPs set in etc/hosts and i am able to ping all of them from my app instance.Thank you in advance.", "username": "Hamza_El_Aouane" }, { "code": "", "text": "What version of Mongoose are you running? MongoDB Atlas is only supported in version <5.0. I believe I have gotten this same error connecting to Atlas on older versions of Mongoose. ", "username": "JoeKarlsson" } ]
Seeding MongoDB nodejs
2020-05-13T02:04:45.573Z
Seeding MongoDB nodejs
9,194
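One thing worth noting about the DB_HOST shown above, independent of the mongoose-version answer: in a standard MongoDB connection string the database name appears once, after the last host, not after every host. A corrected sketch using the same host names:

```
mongoose.connect(
  "mongodb://ip-10-0-1-100:27017,ip-10-0-2-100:27017,ip-10-0-3-100:27017/posts?replicaSet=rs0"
);
```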
null
[]
[ { "code": " SyncConfiguration config = new SyncConfiguration.Builder(user,new ObjectId(\"some-object-id partition key\")).waitForInitialRemoteData()\n .build();\n2020-09-04 10:05:34.891 32754-440/com.appid A/REALM_JNI: An exception has been thrown on the sync client thread:\n The specified table name is already in use\n Exception backtrace:\n <backtrace not supported on this platform>\n ---snip---\n2020-09-04 10:05:34.904 32754-440/com.appid E/AndroidRuntime: FATAL EXCEPTION: Thread-11\n Process: com.appid, PID: 32754\n io.realm.exceptions.RealmError: An exception has been thrown on the sync client thread:\n The specified table name is already in use\n Exception backtrace:\n <backtrace not supported on this platform>", "text": "Hi,\nI’m getting a crash on Android client with error log “The specified table name is already in use”. We could successfully sync our schema without any issues, but after adding some (simple fields on one class) it seems to break.I am seeing this on 10.0.0-BETA.6 and Beta 5 of the Android library.It happens after the app tries to get an instance of the synced realm which I’m doing withFull error log:", "username": "Diederik_Hattingh" }, { "code": "", "text": "We found the problem: there was a problem in the rules section for Realm. We were mapping more than one mongodb collection to the same realm collection.When that was fixed, we could sync again.", "username": "Diederik_Hattingh" }, { "code": "", "text": "Diederik - we should have been quicker with a response but glad that you go that resolved in the end anyway. Yes, 1:1 mapping is recommended.", "username": "Shane_McAllister" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting crash on initial sync
2020-09-04T13:53:31.071Z
Getting crash on initial sync
2,953
null
[ "queries", "performance" ]
[ { "code": "", "text": "A query on collection is taking longer than 5 seconds at time, and it is having no dataquery: { name: “test123”, insertTime: { $gte: new Date(1598767200000) } } } planSummary: IXSCAN { insertTime: -1.0 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:44 locks:{ Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 889msThanks\nSanthosh", "username": "santhosh_K" }, { "code": "$ db.collection.find{{}).explain(\"executionStats\")\n", "text": "We need a little more information. Where is the server running, locally or remotely? Are there other programs imposing a load?Whenever you see a performance problem the first step is the run the query in the shell with an explain plan e.g.This can often tell us where the bottleneck is.", "username": "Joe_Drumgoole" } ]
Why query on empty collection is taking longer than 5 sec
2020-09-09T08:25:46.789Z
Why query on empty collection is taking longer than 5 sec
1,688
null
[ "node-js", "mongoose-odm" ]
[ { "code": "DATABASE=mongodb://localhost:27017/node-form -: Connection error: connect ECONNREFUSED 127.0.0.1:27017 :-\n(node:8745) UnhandledPromiseRejectionWarning: MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017\n at new MongooseServerSelectionError (/Users/gaetan/WorkSpace/Back et front/Back/Fichiers_de_cours/node-forms_4/node_modules/mongoose/lib/error/serverSelection.js:24:11)\n at NativeConnection.Connection.openUri (/Users/gaetan/WorkSpace/Back et front/Back/Fichiers_de_cours/node-forms_4/node_modules/mongoose/lib/connection.js:823:32)\n at Mongoose.connect (/Users/gaetan/WorkSpace/Back et front/Back/Fichiers_de_cours/node-forms_4/node_modules/mongoose/lib/index.js:333:15)\n at Object.<anonymous> (/Users/gaetan/WorkSpace/Back et front/Back/Fichiers_de_cours/node-forms_4/start.js:8:10)\n at Module._compile (internal/modules/cjs/loader.js:1138:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1158:10)\n at Module.load (internal/modules/cjs/loader.js:986:32)\n at Function.Module._load (internal/modules/cjs/loader.js:879:14)\n at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)\n at internal/main/run_main_module.js:17:47\n(node:8745) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)\n(node:8745) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.\n", "text": "Hi, I’m new at Mongo, I do have worked with CLI and have a basic understanding of Mongo.\nI’m trying to connect to a local database with my NodeJS app, and I have a ‘dotent’ file, with I use to store the connection URI, like this :\nDATABASE=mongodb://localhost:27017/node-formI use mongoose btw.\nI get this error :How do i fix this ?\nI use MacOS and I can’t even run “mongod” in CLI, yet I have mongo installed. When I write mongo --version in CLI, I have version 4.4.0 installed. I just don’t get it.", "username": "Gaetan_CHABOUSSIE" }, { "code": "mongodmongo", "text": "Do you have a mongod process running? Can you connect to that process by running the mongo shell, mongo. These are the first steps you should take as new user of MongoDB.We have a full tutorial here. As a beginner you may be better starting with MongoDB Atlas.", "username": "Joe_Drumgoole" }, { "code": "", "text": "I have atlas installed, and the conection works fine with my online cluster. I’m accustomed to mongo, but I don’t quite understand why the local instance of mongo won’t connect.", "username": "Gaetan_CHABOUSSIE" }, { "code": "mongomongodmongod", "text": "Can I clarify. Your local mongo shell (mongo not mongod) can connect to Atlas but your local mongo shell cannot connect to a local instance of mongod running on your own workstation/laptop?", "username": "Joe_Drumgoole" }, { "code": "", "text": "Sory i din’t updated my post. Problem soved !\nI just use the path argument when running local database :\nmongod --dbpath /usr/local/var/mongodbThanks anyway.", "username": "Gaetan_CHABOUSSIE" }, { "code": "", "text": "Excellent glad you sorted the problem. 
Please feel to post any and all beginner questions here. It’s easy to go wrong when you are starting.", "username": "Joe_Drumgoole" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't connect to localhost via Node
2020-09-07T12:42:52.099Z
Can’t connect to localhost via Node
28,940
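For readers hitting the same ECONNREFUSED error as the thread above, a minimal sketch of the connection setup being discussed (the DATABASE variable name and the --dbpath location are taken from the posts; everything else is illustrative):

    // start.js -- assumes a .env file containing
    // DATABASE=mongodb://localhost:27017/node-form
    // and a mongod already running, e.g.: mongod --dbpath /usr/local/var/mongodb
    require('dotenv').config();
    const mongoose = require('mongoose');

    mongoose
      .connect(process.env.DATABASE, { useNewUrlParser: true, useUnifiedTopology: true })
      .then(() => console.log('Connected to', process.env.DATABASE))
      .catch((err) => {
        // Catching here avoids the UnhandledPromiseRejectionWarning shown above.
        console.error('Connection error:', err.message);
        process.exit(1);
      });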
null
[ "dot-net" ]
[ { "code": "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n---> MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n---> System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond..\n---> System.Net.Sockets.SocketException (10060): A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.\n --- End of inner exception stack trace ---\n at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)\n at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.GetResult(Int16 token)\n at System.Net.Security.SslStream.<FillBufferAsync>g__InternalFillBufferAsync|215_0[TReadAdapter](TReadAdapter adap, ValueTask`1 task, Int32 min, Int32 initial)\\r\\n at System.Net.Security.SslStream.ReadAsyncInternal[TReadAdapter](TReadAdapter adapter, Memory`1 buffer)\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync()\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync()\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(Int32 responseTo, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessageAsync(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.ExecuteAsync(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.ConnectionInitializer.InitializeConnectionAsync(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.GetChannelAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadContext.InitializeAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadContext.CreateAsync(IReadBinding binding, Boolean retryRequested, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.AggregateOperation`1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken 
cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.AggregateAsync[TResult](IClientSessionHandle session, PipelineDefinition`2 pipeline, AggregateOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at MongoDB.Driver.AsyncCursorHelper.SingleOrDefaultAsync[T](Task`1 cursorTask, CancellationToken\nConnectTimeout = new TimeSpan(0,0,0,9000);\nMaxConnectionIdleTime = new TimeSpan(0,0,0,9000);\nSocketTimeout = new TimeSpan(0,0,0,0, 9000);\n", "text": "I have a .NET Core 3.1 application that uses the MongoDB C# driver (version 2.10.4) and every time I deploy the application that the connection to the DB seems to work for a little bit (maybe 5-10 mins) and then then throws the following exception when trying to read a record from the database:Based on the exception I thought it was an issue with the Socket so I tried to implement this suggestion:https://jira.mongodb.org/browse/CSHARP-2543?focusedCommentId=2290849&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-2290849Unfortunately that didn’t work either, I also tried to set the following values:But unfortunately nothing seems to work, the connection will keep throwing the same exception posted above over and over. Here is what I currently have:Any advice or suggestions would be appreciated. Thanks in advance.", "username": "OdotN" }, { "code": "", "text": "thought it was an issue with the Socket so I tried to implement this suggestionHI Friend\nI am also same exception after 15-20 min. DId you get any solution for mentioned error?", "username": "DIVYA_MOHAN" }, { "code": "services.AddSingleton<IMongoClient, MongoClient>()\n", "text": "What worked for me was a combination of making sure to implement a singleton pattern for the connection, in .NET Core it’s something like:The other important piece ended up being firewall rules. We had several firewall rules and one of them would prevent the connection from reaching the DB server after a while. I would suggested doing a debugging/troubleshooting session with someone that knows the firewall rules and make sure it’s not blocking the connection.Hope this helps.", "username": "OdotN" }, { "code": " mongoClientSettings.MaxConnectionLifeTime = TimeSpan.FromHours(24);\n mongoClientSettings.SocketTimeout = TimeSpan.FromMinutes(15);\n mongoClientSettings.MaxConnectionIdleTime = TimeSpan.FromMinutes(20);\n mongoClientSettings.ConnectTimeout = TimeSpan.FromMinutes(5);\n mongoClientSettings.ServerSelectionTimeout = TimeSpan.FromMinutes(5);\n mongoClientSettings.MinConnectionPoolSize = 10;\n mongoClientSettings.MaxConnectionPoolSize = 100;\n //mongoClientSettings.ClusterConfigurator = cb => cb.ConfigureTcp(tcp => tcp.With(socketConfigurator: (Action<Socket>)SocketConfigurator));\n", "text": "Thanks alot. OdotN.\nI am working on large data chunk with Mongo DB.\nTo resolve network and transport related issues with Mongo DB sever. I have done below setting while making connection with MongoDB .Although i have done/ configured above settings but i still don’t have much knowledge about these parameters.Are these parameters values ok for big data crunch and don’t impact mongo DB server for long run?", "username": "DIVYA_MOHAN" } ]
C# .NET Core 3.1 - MongoConnectionException - IOException: Unable to read data from the transport connection
2020-08-17T07:33:51.207Z
C# .NET Core 3.1 - MongoConnectionException - IOException: Unable to read data from the transport connection
13,679
null
[ "dot-net" ]
[ { "code": "mongodb://username:password@server:27017.AddSingleton<IMongoClient>(factory => new MongoClient(Configuration[\"Storage:MongoConnectionString\"]))<PackageReference Include=\"MongoDB.Bson\" Version=\"2.11.0\" />\n<PackageReference Include=\"MongoDB.Driver\" Version=\"2.11.0\" />\n<PackageReference Include=\"MongoDB.Driver.Core\" Version=\"2.11.0\" />\nusing System;\nusing System.Collections.Generic;\nusing System.Threading.Tasks;\nusing Microsoft.Extensions.DependencyInjection;\nusing MongoDB.Driver;\n\nnamespace MongoDbIssueExample\n{\n internal class Program\n {\n private static IServiceProvider services;\n\n private static IServiceProvider BuildDependencyInjector()\n {\n services = new ServiceCollection()\n .AddSingleton<TestThingsService>()\n .AddSingleton<IMongoClient>(factory => new MongoClient(\"mongodb://username:password@server:27017\"))\n .BuildServiceProvider();\n\n return services;\n }\n\n private static async Task DoSeed()\n {\n var service = services.GetService<TestThingsService>();\n // Don't do these async as we'll never get any data in...\n service.CreateTestThings().Wait();\n service.CreateOtherTestThings().Wait();\n }\n\n private static async Task DoTest()\n {\n var service = services.GetService<TestThingsService>();\n\n var things = service.GetTestThings();\n var otherThings = service.GetOtherTestThings();\n\n Task.WaitAll(things, otherThings);\n }\n\n private static async Task Main(string[] args)\n {\n BuildDependencyInjector();\n\n await DoTest();\n }\n }\n\n public class TestThingsService\n {\n private readonly IMongoClient _client;\n private readonly IMongoDatabase _database;\n private readonly IMongoCollection<OtherTestThing> _otherTestThingsCollection;\n private readonly IMongoCollection<TestThing> _testThingsCollection;\n\n public TestThingsService(IMongoClient client)\n {\n _client = client;\n _database = _client.GetDatabase(\"Things\");\n _testThingsCollection = _database.GetCollection<TestThing>(\"TestThings\");\n _otherTestThingsCollection = _database.GetCollection<OtherTestThing>(\"OtherTestThings\");\n }\n\n public async Task CreateOtherTestThings()\n {\n for (var item = 1; item <= 10000; item++)\n {\n var testThing = new OtherTestThing {Id = item, Name = $\"Other thing no. {item}\", WhenCreated = DateTime.UtcNow};\n await _otherTestThingsCollection.ReplaceOneAsync(f => f.Id == item, testThing, new ReplaceOptions {IsUpsert = true});\n }\n }\n\n public async Task CreateTestThings()\n {\n for (var item = 1; item <= 10000; item++)\n {\n var testThing = new TestThing {Id = item, Name = $\"Thing no. 
{item}\", WhenCreated = DateTime.UtcNow};\n await _testThingsCollection.ReplaceOneAsync(f => f.Id == item, testThing, new ReplaceOptions {IsUpsert = true});\n }\n }\n\n\n public async Task<List<OtherTestThing>> GetOtherTestThings()\n {\n return await _otherTestThingsCollection.Find(_ => true).ToListAsync();\n }\n\n public async Task<List<TestThing>> GetTestThings()\n {\n return await _testThingsCollection.Find(_ => true).ToListAsync();\n }\n }\n\n public class OtherTestThing\n {\n public int Id { get; set; }\n public string Name { get; set; }\n public DateTime WhenCreated { get; set; }\n }\n\n public class TestThing\n {\n public int Id { get; set; }\n public string Name { get; set; }\n public DateTime WhenCreated { get; set; }\n }\n}\n\t\t<PackageReference Include=\"Microsoft.Extensions.DependencyInjection\" Version=\"3.1.6\" />\n\t\t<PackageReference Include=\"Microsoft.Extensions.DependencyInjection.Abstractions\" Version=\"3.1.6\" />\n\t\t<PackageReference Include=\"MongoDB.Bson\" Version=\"2.11.0\" />\n\t\t<PackageReference Include=\"MongoDB.Driver\" Version=\"2.11.0\" />\n\t\t<PackageReference Include=\"MongoDB.Driver.Core\" Version=\"2.11.0\" />\n", "text": "I recently added authentication to my development database, authenticating against the “admin” database, and using a username/password combination in my connection string, e.g. mongodb://username:password@server:27017. Almost immediately I started seeing connections failing to open with an exception showing “Server sent an invalid nonce”. To try and mitigate the problem I looked at the lifetime of my IMongoClient objects, and moved from instantiating many such objects to using a Singleton injected into my business services using the Microsoft.Extensions.DependencyInjection libraries. This hasn’t alleviated the issue. I set up the MongoClient in my Startup.cs using .AddSingleton<IMongoClient>(factory => new MongoClient(Configuration[\"Storage:MongoConnectionString\"])). I know that the connection string is correct as it works in MongoDB Compass, and also because the first couple of calls through the driver work successfully; the issue starts occuring when multiple concurrent calls are in progress.I am using the MongoDB .NET driver, version 2.11.0, under .NET Core 3.1.2. The problem occurs in my local environment running Windows 10, and also my staging environment running inside Docker on VMware Photon.There are two components of the app that make connnections to MongoDB, both of which are ASP.Net Core applications, one serving an API for interactive usage of my applicaiton, and one running a Quartz scheduler for background processing. I’m running MongoDB 4.4.0 Community inside a Docker container.My references to include the driver are:According to this post on the MongoDB Jira site I’m not the first person to experience this issue. Mathias Lorenzen suggested in the issue on Jira that he had reduced the number of errors he encountered with various fixes including recreating the user, using SCRAM-SHA-1, and increasing the maximum number of connections permitted on the server. With these changes in place, the issue still occurs for me.I’m guessing that the problem is related to threading when used in conjunction with database authentication. I can’t, for obvious reasons, put this code into production use by disabling authentication to work around the problem, and equally the use of a synchronous model rather than async seems counter-productive. What steps can I take to try and resolve the authentication issues? 
Is this likely to be a bug in the Mongo C# driver, or am I simply using it wrong.Insight on what I can try next, or alternative approaches, would be gratefully received.Edit: Minimum reproducible example as requested:Requires references as follows:", "username": "DDA" }, { "code": "", "text": "Hi, you have found any solution, I have same problem when I upgrade my current solution to drivers latest version ?So I can’t migrate my server to 4.4 ", "username": "Herve_TINANT" }, { "code": "", "text": "Hi,After some test, I finaly find if you create user without SCRAM-SHA-256 (my application user can only login with SCRAM-SHA-1), it work flawlessly.I hope help you", "username": "Herve_TINANT" }, { "code": "db.createUser(\n {\n user: \"reportUser256\",\n pwd: passwordPrompt(), // Or \"<cleartext password>\"\n roles: [ { role: \"readWrite\", db: \"reporting\" } ],\n mechanisms: [ \"SCRAM-SHA-1\" ]\n }\n)", "text": "Hi Herve,Can you provide an example of how you create the user differently?Would this work?", "username": "DDA" }, { "code": "mongos version v4.4.0\nBuild Info: {\n \"version\": \"4.4.0\",\n \"gitVersion\": \"563487e100c4215e2dce98d0af2a6a5a2d67c5cf\",\n \"openSSLVersion\": \"OpenSSL 1.1.1d 10 Sep 2019\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"debian10\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\nvar client = new MongoClient(_connection_string);;\nvar database = client.GetDatabase(Database_name);\nvar collection = database.GetCollection<T>(Collection_name);\nvar document_count = collection.EstimatedDocumentCount();\nMongoDB.Driver.MongoAuthenticationException: Server sent an invalid nonce.\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.PooledConnection.Open(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquiredConnection.Open(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.GetChannel(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Bindings.ServerChannelSource.GetChannel(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Bindings.ChannelSourceHandle.GetChannel(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadContext.Initialize(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadContext.Create(IReadBinding binding, Boolean retryRequested, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.CountOperation.Execute(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperation[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperation[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperation[TResult](IClientSessionHandle session, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.<>c__DisplayClass43_0.<EstimatedDocumentCount>b__0(IClientSessionHandle session)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSession[TResult](Func`2 func, CancellationToken 
cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.EstimatedDocumentCount(EstimatedDocumentCountOptions options, CancellationToken cancellationToken)\n", "text": "Same issue in my project.\n \nEnvironment: .Net Core SDK 3.1.401, Mongodb.Driver 2.11.0\nMongoDB Version: \nThis is the code:And the exception: \nAny advise?", "username": "Wang_Yu" }, { "code": "", "text": "Hi,\nI got this error, and it’s resolved after create new user in that database, before I using use from admin database. Hope help for you.", "username": "dainghia" }, { "code": "var client = new MongoClient(_connection_string);;\nvar database = client.GetDatabase(Database_name);\nvar collection = database.GetCollection<T>(Collection_name);\n", "text": "Find more information:Code:Using mongodb.driver 2.11.0, it create 2 connections, when if using mongodb.driver 2.10.4, it create only 1 connection.Now I back my projects to 2.10.4, still testing …", "username": "Wang_Yu" }, { "code": "", "text": "I revert to old version 2.10.4, no way to working with latest version.", "username": "dainghia" }, { "code": "", "text": "Reverting to 2.10.4 seems to be the problem for me.\nI’m sad and confused ", "username": "Esteban_Cervantes" }, { "code": "MongoDB.Driver.MongoAuthenticationException: Server sent an invalid nonce.\n", "text": "Hi @DDA,Looks like another user (Mark Weaver) reported this issue on CSHARP-3196. There is a patch that is currently in code review, please watch/up-vote the issue tracker to receive notifications on it.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Server sent an invalid nonce when making multiple rapid connections from C# driver
2020-08-04T20:47:41.435Z
Server sent an invalid nonce when making multiple rapid connections from C# driver
5,610
null
[]
[ { "code": "{ \"expiresAt\": 1 }, { expireAfterSeconds: 0 } \nexpiresAt _id :5f55f440058cdd3754907255\n name :\"first\"\n language :\"Plain Text\"\n content :\"Some text\"\n createdAt :2020-09-07T08:50:08.708+00:00\n expiresAt :2020-09-07T08:51:08.709+00:00\n __v :0\n", "text": "I followed this article - https://docs.mongodb.com/manual/tutorial/expire-data/\nI created the index on atlas first. It was added successfully. But the documents weren’t deleted as per expiry.\nI used the same method on localhost mongo shell, and it worked fine.I added this indexwhere expiresAt is a Date object in my document.This is the document I expected to be deleted.", "username": "Mithil_Poojary" }, { "code": "expireAfterSecondsexpireAfterSeconds", "text": "Hi @Mithil_Poojary,The created index is not expected to work as the expireAfterSeconds is set to 0. See here:\nTo expire data after a specified number of seconds has passed since the indexed field, create a TTL index on a field that holds values of BSON date type or an array of BSON date-typed objects and specify a positive non-zero value in the expireAfterSeconds field. A document will expire when the number of seconds in the expireAfterSeconds field has passed since the time specified in its indexed fieldPlease specify a non zero positive integer.How did you create the ttl index on atlas? Was via a shell or the data explorer?Which version is your localhost deployment and what is the atlas version?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "4.2.8v4.4.0", "text": "Hi @Pavel_DuchovnyThanks for looking into this!The created index is not expected to work as the expireAfterSeconds is set to 0.I followed the second half of the article here.\nI created the index online on web version of atlas.\nOn my atlas dashboard, on top right I see version = 4.2.8.\nOn my Ubuntu localhost, mongo version is → MongoDB shell version v4.4.0.", "username": "Mithil_Poojary" }, { "code": "db.collection.getIndexes()", "text": "Hi @Mithil_Poojary,Ok I see what you mean, sorry for overlooking.Please provide db.collection.getIndexes() from the Atlas connection.Can you confirm that the documents are still present in the collection. I am asking this since the TTL thread is running every min and until it runs documents may still exist but should be removed in the next time. Additionally, if the amount of documents to expire is large it can take time to clear all the batches.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "db.collection.getIndexes()", "text": "Please provide db.collection.getIndexes() from the Atlas connection.Please hold on, I am setting up the mongocli to get this.Can you confirm that the documents are still present in the collection.Yes they are still present, the document that I posted in the question has not been deleted, even though the time (in UTC) has passed a long back.", "username": "Mithil_Poojary" }, { "code": "modelsdb.models.getIndexes()\n[\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"_id\" : 1\n\t\t},\n\t\t\"name\" : \"_id_\",\n\t\t\"ns\" : \"test.models\"\n\t},\n\t{\n\t\t\"v\" : 2,\n\t\t\"unique\" : true,\n\t\t\"key\" : {\n\t\t\t\"name\" : 1\n\t\t},\n\t\t\"name\" : \"name_1\",\n\t\t\"ns\" : \"test.models\",\n\t\t\"background\" : true\n\t},\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"expiresAt\" : 1\n\t\t},\n\t\t\"name\" : \"expiresAt_1\",\n\t\t\"ns\" : \"test.models\"\n\t}\n]\n\n", "text": "Ok I didn’t need the mongocli tool, and I was able to connect through mongodb shell. My collection name is models. 
This is what I got.", "username": "Mithil_Poojary" }, { "code": "{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"expiresAt\" : 1\n\t\t},\n\t\t\"name\" : \"expiresAt_1\",\n\t\t\"ns\" : \"test.models\"\n\t}\n{ expireAfterSeconds: 0 }", "text": "Hi @Mithil_PoojaryThe created index was not created as TTL as it does not have the TTL clause therefore the documents were not deleted.Please recreate the index and share the screenshot of how you create this index via the UI.Please note that the { expireAfterSeconds: 0 } needs to be placed in the “options” section of the UI and will be probably ignored if placed in the main window of the fields specification.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "DropCreate Index{ \"expiresAt\": 1 }, { expireAfterSeconds: 0 }{ expireAfterSeconds: 0 }options", "text": "Please recreate the index and share the screenshot of how you create this index via the UI.Ok.Please note that the { expireAfterSeconds: 0 } needs to be placed in the “options” section of the UI and will be probably ignored if placed in the main window of the fields specification.Oh is that so? Because this is what I was doing. I will retry this time by pasting inside options.", "username": "Mithil_Poojary" }, { "code": "", "text": "Is this looking right? @Pavel_Duchovny\nimage632×516 20.2 KB", "username": "Mithil_Poojary" }, { "code": "options{ \"expiresAt\": 1 }\n { expireAfterSeconds: 0 }\n", "text": "Hi @Mithil_Poojary,Oh is that so? Because this is what I was doing. I will retry this time by pasting inside options .I see the confusion, the last posted image is also not properly defined.Ok you need to place field name under FIELDS:And under OPTIONS:If you look on the createIndex command you will see that there are 2 separate documents to place parameters. The UI mimic this structure therefore all fields specs are under FIELDS and any option is under OPTIONS.Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "It worked! I have been trying since last 2 days. Thank you so very much for your time and patience.@Pavel_Duchovny \nI am new to mongoDB and this thread alone has taught me quite a lot. ", "username": "Mithil_Poojary" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB TTL does not work on atlas but works fine on localhost
2020-09-08T22:55:05.651Z
MongoDB TTL does not work on atlas but works fine on localhost
8,574
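For anyone following the TTL thread above in the shell rather than the Atlas UI, the equivalent index creation passes the key pattern and the options as two separate documents, which mirrors the FIELDS/OPTIONS split Pavel describes (the collection name models is taken from the thread):

    db.models.createIndex(
      { expiresAt: 1 },
      { expireAfterSeconds: 0 }
    )

    // Verify the TTL clause was applied; the expiresAt_1 entry should now
    // contain "expireAfterSeconds" : 0, which the earlier non-working index lacked.
    db.models.getIndexes()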
null
[ "aggregation", "data-modeling" ]
[ { "code": "", "text": "Hello,I am new to MongoDB so I am having a bit of troubleIn traditional relational databases, I would have a “join” table to create a one-to-many relationship which looks like this.1 | [email protected]\n2 | [email protected]\n2 | [email protected]\n2 | [email protected]\n2 | [email protected]", "username": "bradford_li" }, { "code": "", "text": "Hi @bradford_li,Joins consider an antipattern in MongoDB and should be avoided as possible by embading documents as subdocuments and performing several lookup queries .If for any reason joining collections is a must you can use $lookup or $graphLookup to perform this aggregationPavel", "username": "Pavel_Duchovny" } ]
Join table equivalent in MongoDB
2020-09-08T22:55:02.273Z
Join table equivalent in MongoDB
1,680
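Since the thread above mentions $lookup without showing it, here is a hedged sketch of what the join-table example could look like. The collection and field names (users, emails, user_id) are hypothetical stand-ins for the relational tables in the question:

    db.users.aggregate([
      {
        $lookup: {
          from: "emails",          // the "many" side, replacing the join table
          localField: "_id",       // key on the users side
          foreignField: "user_id", // matching key on the emails side
          as: "emails"             // output array embedded in each user document
        }
      }
    ])
    // Each returned user carries an "emails" array -- the embedded one-to-many
    // shape that usually makes the join unnecessary in the first place.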
null
[ "data-modeling" ]
[ { "code": " {\"$oid\": \"5f56a52a9eafb85d4314c612\"},\n\"created_at\": {\"$date\": \"2020-09-07T21:24:58.766Z\"},\n\"data\": [\n {\n \"phone\": 77777777,\n \"email\": \"[email protected]\",\n \"n_id\": 12\n },\n {\n \"phone\": 177777777,\n \"email\": \"[email protected]\",\n \"n_id\": 112\n }\n],\n\"updated_at\": {\"$date\": \"2020-09-07T21:24:58.766Z\"}\n\nSecond option\n\n {\"_id\": {\"$oid\": \"5f56a5ba9eafb862c7676c02\"},\n\"created_at\": {\"$date\": \"2020-09-07T21:27:22.454Z\"},\n\"updated_at\": {\"$date\": \"2020-09-07T21:27:22.454Z\"},\n\"email\": [\"[email protected]\", \"[email protected]\"],\n\"n_id\": [12, 112],\n\"phone\": [77777777, 3377777777]}", "text": "Advise how to properly organize the data storage structure in Mongo. Input data E-mail, phone number and a unique number of the site visitor are stored in a cookie and, accordingly, may change for various reasons (n_id). Let’s say I receive n_id and phone number. I’m creating a Document. The second time I get n_id and email. I need to check, by E-mail and n_id, there is such a document that contains either this or that. If there is, then supplement it with the missing data. That is, 1 document can have 2 numbers, 10 n_id, and 5 email. When I thought 2 storage structures, maybe you can advise how best.\n1.Variant\n{\n“_id”:", "username": "11188" }, { "code": "\n“_id”:\n\n {\"$oid\": \"5f56a52a9eafb85d4314c612\"},\n\"created_at\": {\"$date\": \"2020-09-07T21:24:58.766Z\"},\n\"identities\": [\n {\n \"Identity\": 77777777,\n \"Type\": \"phone\",\n \"n_id\": 12\n },\n {\n \"Identity\": 177777777,\n \"Type\": \"phone\",\n \"n_id\": 112\n },\n {\n \"Identity\": \"[email protected]\",\n \"Type\" : \"email\",\n \"n_id\": 123\n }\n],\n\"updated_at\": {\"$date\": \"2020-09-07T21:24:58.766Z\"}\n{ identities.n_id : 1, identities.Identity : 1}", "text": "Hi @11188,Thanks for sharing your use case and thoughts.So if I understand correctly your application will keep client cookie identities comprising of a property and an n_id unique value , you will always search the user document with this 2 combinations.I would like to suggest another alternative to not duplicate values and also search efficiently when keeping all data in one doc:Now if the n_id is unique cross users you can index { identities.n_id : 1, identities.Identity : 1} and search them both providing the values.This will retrieve the user and all his known identies.Let me know what you think.Best\nPavel", "username": "Pavel_Duchovny" } ]
How to properly organize a Mongo structure
2020-09-08T22:55:17.375Z
How to properly organize a Mongo structure
2,665
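To make Pavel's suggestion concrete, a small shell sketch using the field names and values from his example document (the collection name users and the email address are assumptions, since the original poster never names them):

    db.users.createIndex({ "identities.n_id": 1, "identities.Identity": 1 })

    // Find the visitor document that already contains an incoming identity pair;
    // $elemMatch keeps both conditions on the same array element.
    db.users.find({
      identities: { $elemMatch: { n_id: 112, Identity: 177777777 } }
    })

    // If a document is found, append the newly seen identity without duplicating it.
    db.users.updateOne(
      { _id: ObjectId("5f56a52a9eafb85d4314c612") },
      {
        $addToSet: { identities: { Identity: "[email protected]", Type: "email", n_id: 123 } },
        $set: { updated_at: new Date() }
      }
    )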
null
[ "dot-net" ]
[ { "code": " [BsonElement(\"disabled_commands\")] public List<string> MyList{ get; set; } = new List<string>();", "text": "Hey, is there a way to default to a value when a field is missing in the document itself.\nSo basically, I have a list of strings and this may be null for certain documents (because this field was added afterwards) and in this case I want to just default to the already assigned value, but the MongoDB Driver automatically overrides the value to null. Is there a way to achieve this? [BsonElement(\"disabled_commands\")] public List<string> MyList{ get; set; } = new List<string>();With friendly regards", "username": "ZargorNET" }, { "code": "SetDefaultValue(){\n \"_id\": ObjectId(\"5f583cc562d0ce4a58c7da08\"),\n \"Name\": \"X\"\n}\npublic class MyClass \n{\n [BsonId]\n private ObjectId Id {get; set;}\n public string Name {get; set;}\n public List<string> MyList {get; set;}\n}\nBsonClassMap.RegisterClassMap<MyClass>(cm =>\n{\n cm.AutoMap();\n cm.GetMemberMap(x => x.MyList).SetDefaultValue(new List<string>());\n});\nMyList[] public class MyClass \n {\n [BsonId]\n private ObjectId Id {get; set;}\n public string Name {get; set;}\n public List<string> MyList {get; set;}\n\n public MyClass () \n {\n Name = \"\";\n MyList = new List<string>();\n }\n}\n", "text": "Hi @ZargorNET, and welcome to the forumHey, is there a way to default to a value when a field is missing in the document itself.You can utilise SetDefaultValue() to assign a value to a field if the document does not have a value for the field (null is a value) during deserialisation.For example if you have the following document stored in the database :With class mapping example as below:You can register and set a default value for deserialisation as below:This should deserialise MyList value into []. For more information see also MongoDB .NET/C# Mapping ClassesI have a list of strings and this may be null for certain documents (because this field was added afterwards)Without knowing more of the context, you could also declare a constructor the class. This should help defined a default value before inserting to the database. For example:Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Auto default value in the C# Driver
2020-09-04T13:52:16.087Z
Auto default value in the C# Driver
10,719
null
[ "aggregation", "dot-net" ]
[ { "code": "public class BaseValidity\n{\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string id { get; set; } = ObjectId.GenerateNewId().ToString();\n}\nvar queryToExecute = _dataStreams.Aggregate()\n .Match(...)\n .Project(\n dataStream => new {\n validations = dataStream.episodes\n .Select(\n episode => episode.validations\n .Where(validity => validity.id == baseValidityId).First()\n ).Where(validity => validity != null)\n } \n );\n{ \"$project\" : {\n \"validations\" : {\n \"$filter\" : {\n \"input\" : {\n \"$map\" : {\n \"input\" : \"$episodes\",\n \"as\" : \"episode\",\n \"in\" : {\n \"$arrayElemAt\" : [\n { \"$filter\" : {\n \"input\" : \"$$episode.validations\",\n \"as\" : \"validity\",\n \"cond\" : **{ \"$eq\" : [\"$$validity._id\", \"5f55fd057e29a07970048ce9\"] }**\n }}, 0\n ]}\n }\n },\n \"as\" : \"validity\",\n \"cond\" : {\n \"$ne\" : [ \"$$validity\", null ]\n }\n }\n }\n}}\n{ \"$eq\" : [\"$$validity._id\", ObjectId(\"5f55fd057e29a07970048ce9\")] }\n", "text": "HiI’m using MongoDB driver for .Net Core 3.1 and there is a wrong query translation when I’m using lambda expression in the Project stage of the Aggregation pipelineThe model contains the following definition:And the query is:But the output of the project stage is:it should be:Is there any workaround for this issue?", "username": "Mordechai_Ben_Zechar" }, { "code": "public ObjectId id { get; set; } = ObjectId.GenerateNewId();\n", "text": "I’m not sure what’s happening here but it looks like your function returns a “string” due to the “ToString” method which would indeed not result in an ObjectId but in the string representation of an ObjectId.So maybe changing your get/set function into something like can help:", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you @MaBeuLux88.I don’t think it related to this line since when I used the Builder function it gives me the right $eq query.\nI can send an example if needed", "username": "Mordechai_Ben_Zechar" }, { "code": "validity.idnew ObjectId(validity.id).Where(validity => new ObjectId(validity.id) == new ObjectId(baseValidityId)).First()\n", "text": "validity.idThis returns a “string” right? So maybe you need something like new ObjectId(validity.id) or something like this in here?I’m a Java guy so I can’t quite read what’s happening here but maybe something like this would fix the issue here?It’s just a wild guess. If it’s not something like this, I don’t know, sorry .", "username": "MaBeuLux88" } ]
Wrong query for ObjectId
2020-09-07T09:30:00.297Z
Wrong query for ObjectId
2,842
null
[ "c-driver" ]
[ { "code": "", "text": "Hi,I need help building mongo c driver.I’m following all the steps from the installation link.Installing the MongoDB C Driver (libmongoc) and BSON library (libbson) — libmongoc 1.23.2And I have successfully configured the build.– Build files have been written to: /home/administrator/Projects/c or c++/mongo-c-driver/mongo-c-driver/cmake-buildBut when i’m trying to executing the build, I’m always stuck with this error (whether it is executing a build from tarball or git).…\n[ 37%] Building C object src/libmongoc/CMakeFiles/mongoc_shared.dir/__/kms-message/src/sort.c.o\n[ 38%] Linking C shared library libmongoc-1.0.so\ncc: error: or: No such file or directory\ncc: error: c++/mongo-c-driver/mongo-c-driver/src/libmongoc/…/…/build/cmake/libmongoc-hidden-symbols.map: No such file or directory\nsrc/libmongoc/CMakeFiles/mongoc_shared.dir/build.make:3012: recipe for target ‘src/libmongoc/libmongoc-1.0.so.0.0.0’ failed\nmake[2]: *** [src/libmongoc/libmongoc-1.0.so.0.0.0] Error 1\nCMakeFiles/Makefile2:1266: recipe for target ‘src/libmongoc/CMakeFiles/mongoc_shared.dir/all’ failed\nmake[1]: *** [src/libmongoc/CMakeFiles/mongoc_shared.dir/all] Error 2\nMakefile:151: recipe for target ‘all’ failed\nmake: *** [all] Error 2Please help.Thanks and Regards,\nMonthy", "username": "MONRUZ" }, { "code": "c or c++c_or_c++", "text": "@MONRUZ, the issue is that the build breaks when the source directory path includes a space. Rename your c or c++ directory to something like c_or_c++ and it should work. I am preparing a fix so that this will be fixed in the future.", "username": "Roberto_Sanchez" }, { "code": "", "text": "Hi @Roberto_Sanchez,Thanks, it works.Best regards,\nMonthy", "username": "MONRUZ" } ]
Mongo c driver error when executing a build
2020-09-08T01:16:32.669Z
Mongo c driver error when executing a build
3,748
null
[]
[ { "code": "user-id, route-id, [positions]\ndate,lat,long\ndb.students.update(\n { user-id: 2323, route-id: 0 },\n { $push: { positons: {date: 123, lat: 123, long: 123} } }\n)\n", "text": "Hi Guys,I want to save gps positions of a driven route. So want to create a collection which stores:The positions should be a list of values:I found out, I can $push into array fields to add information: https://docs.mongodb.com/manual/reference/operator/update/push/So everytime when I get a new route coordinate, I would do:But what I need to do at the first insertion? When there isn’t a entry in the database, where I can do a $push ?Can I just set upsert to true ? But would this query insert the array and the user-id and the route-id ?T", "username": "Tim_Ta" }, { "code": "", "text": "You could do an insert or an insertOne.\ndb.students.insertOne({“user-id”: 2323, “route-id”: 0, “positions”: {“date”:123, “lat”: 123, “long”: 123}}). Which would give you the follow document in your DB.\nThen if you needed to update you could use the $push you showed above.Also, mongodb has information on geoJson, which can allow you to use $geonear in an aggregation pipeline and some other features as well. https://docs.mongodb.com/manual/geospatial-queries/#geospatial-geojson", "username": "tapiocaPENGUIN" }, { "code": "", "text": "I had a typo in my insert query it should be:\ndb.students.insertOne({“user-id”: 2323, “route-id”: 0, “positions”: [{“date”:123, “lat”: 123, “long”: 123}]}).Here is the update, and the modified document:\nimage1141×100 5.09 KB", "username": "tapiocaPENGUIN" } ]
Simple update which inserts
2020-09-08T13:20:58.298Z
Simple update which inserts
1,379
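To answer the first-insertion question directly, here is a sketch of the upsert approach the original poster asks about, using the collection and field names from the question (the coordinate values are placeholders):

    db.students.updateOne(
      { "user-id": 2323, "route-id": 0 },
      { $push: { positions: { date: new Date(), lat: 12.34, long: 56.78 } } },
      { upsert: true }
    )
    // With upsert: true, the first call inserts a new document containing
    // user-id and route-id (copied from the equality filter) plus a positions
    // array holding the first coordinate; later calls simply append to the array.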
null
[ "kafka-connector" ]
[ { "code": "", "text": "I’m wondering what delivery guarantees does kafka connector offer.Thank you for your help!", "username": "11185" }, { "code": "", "text": "For sink default is At Least Once. If there is an error when processing data from a topic the connector will retry the write. However, if the data on the topic contains a unique attribute, it is possible to achieve exactly once semantics by configuring the Sink connector to use upserts and the DocumentIdAdder strategy. The sink connector can not support at most once.For source default is At least once. There is a risk of duplicate messages if you use the copy.existing flag. Note that change stream events are idempotent so the need to support other delivery guarantees are not applicable.", "username": "Robert_Walters" } ]
Kafka connector delivery guarantees
2020-09-04T13:53:42.139Z
Kafka connector delivery guarantees
3,275
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "I’m developing an e-commerce mobile app with React Native and I need to store my user data on MongoDB Atlas I thought of using Realm for this but there will be a lot of requests from my app to user data and I might end up in the huge bill.I thought of using Realm only for user authentication and connecting MongoDB atlas on my EC2 server and requesting user data from to my app from there.", "username": "chawki" }, { "code": "", "text": "Hi @chawki,One of the benefits of using Realm Auth Providers within your application is the ability to enforce user access to your services/data access by defining simple and declarative roles/rules which can be automatically map to your user context.This ability will not be available if you use Realm Application just for authentication and then a regular driver to connect to Atlas.Do you mean you would use Realm Authentication just to get an OK for the provided credentials? If this is the case this is possible but I would recommend considering using Realm scalable and secure backend as a full solution. Moreover , that we have offline sync capabilities for mobile reactive application developments.Realm has a pretty large free tier range and the cost can be extrapulated by using the following formulas:Free tier metrics are zeroed every for each month:Can you elaborate your concern on getting a large bill? What is the method you used to calculate that?Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "We are just start-up because Realm is paid as you go seems little shivering we wanna keep under the free tier until we start generating revenue.Can I able to connect my users data with Realm in future if I go this way?\nFor now, using Realm only for authentication and Storing user data on Atlas.Thanks @Pavel_Duchovny", "username": "chawki" }, { "code": "", "text": "Hi @chawki,Yes you can go this way, I believe you can use Realm without even specifying a credit card or a payment method so you can always try it out and develop considering future paid service when you grow.I just think that other backend resources that you will need to operate and secure your application (data consuming wise) might cost much more than Realm.Best regards,\nPavelBest regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can I able to use Realm only for user authentication?
2020-09-08T04:15:17.267Z
Can I able to use Realm only for user authentication?
2,140
null
[ "security" ]
[ { "code": "The doc mentions \"The use of an ephemeral key ensures that even if a server’s private key is compromised, you cannot decrypt past sessions with the compromised keyhow does the ephemeral key know that the server's private key is compromised?", "text": "i’m looking through docs to setup TLS in mongodb instance.The doc mentions \"The use of an ephemeral key ensures that even if a server’s private key is compromised, you cannot decrypt past sessions with the compromised keyMy doubt is how does the ephemeral key know that the server's private key is compromised?", "username": "Divine_Cutler" }, { "code": "", "text": "It is a protocol feature. Here is a link to get you started.Contribute to ssllabs/research development by creating an account on GitHub.Forward secrecy (sometimes also called perfect forward secrecy) is a protocol feature that enables secure conversations that are not dependent on the server’s private key. With cipher suites that do not provide forward secrecy, someone who can recover a server’s private key can decrypt all earlier recorded encrypted conversations. You need to support and prefer ECDHE suites in order to enable forward secrecy with modern web browsers. To support a wider range of clients, you should also use DHE suites as fallback after ECDHE. Avoid the RSA key exchange unless absolutely necessary. My proposed default configuration in Section 2.3 contains only suites that provide forward secrecy.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Doubt in forward secrecy concept in TLSencryption
2020-09-08T11:27:29.533Z
Doubt in forward secrecy concept in TLSencryption
1,580
null
[ "node-js" ]
[ { "code": "interface CollectionChangeSet {\n insertions: number[];\n deletions: number[];\n newModifications: number[];\n oldModifications: number[];\n }\n\ntype CollectionChangeCallback<T> = (collection: Collection<T>, changes: CollectionChangeSet) => void;\ncollectionchangesinsertionsnewModificationsoldModificationsdeletionsdeletionsoldModificationsinsertionsnewModificationsinsertionsnewModificationsdeletionsoldModificationsdeletionsoldModificationsnewModifications[obj1, obj2, obj3, obj4]( [obj1, obj3, obj4], { deletions: [1] } ) => ...[\n {\n index: <index of inserted/modified/deleted item>,\n id: <ID of inserted/modified/deleted Item>\n }\n]\nObjectChangeCallbackinterface ObjectChangeSet {\n deleted: boolean;\n changedProperties: string[]\n}\n", "text": "Hi There,I have always struggled with the API for listening to changes on a collection.\n@Ian_Ward - maybe you could shed some light on this?The current API from the 10.0.0-beta.12 docs is:Whilst it’s pretty trivial to manage insertions and newModifications, I can’t quite figure out the best way to manage deletions and oldModifications.Lets focus on deletions for a moment (mostly because I don’t know what oldModifications would even be useful for - For that matter, even newModifications doesn’t really help that much other than letting me know that something changed… see my second point below)I’m not sure I fully understand how to leverage these deletion indices on a collection that changes over time.Lets say we have a collection => [obj1, obj2, obj3, obj4]Some time later, a change comes through and our listener is called with:\n( [obj1, obj3, obj4], { deletions: [1] } ) => ...So what am I to do with this index?Let’s say in my case that to do something useful with this information I need the ID of the deleted object (obj2) - I do not use the results collection in an array, and even if I were, I would be forced to maintain the original order for the index to be meaningful.Do I have to keep an array of IDs inside the listener and reference/mutate that in sync with the changes to the original collection?I don’t imagine it would work to close over the original results collection and reference the object in that - if it is deleted, it won’t be there any more…\nBesides, what happens the next time a change occurs in this collection, the original results collection would be stale.Surely the API for these changes - especially deletions, should at least be an array of objects like:Secondly,\nLets say I wanted to listen for changes to individual objects in a collection. (The current API just says “something changed in the item at index x”)\nWould the best approach be to map over the results and to add individual listeners to the constituent objects? This seems like what you’d need to do - but maybe the collection listener API needs an overhaul to encapsulate useable info like the the much more easily consumable ObjectChangeCallback API.\ni.e.That way we can use similar approaches for listener callbacks on both collections and individual objects.Either way - some guidance on this would be great.Thanks in advance!B", "username": "Benjamin_Storrier" }, { "code": "", "text": "The idea here is to use them to update your UI. If you have a UITableView, for example, you can tell it that the cell with index 1 was removed and it will animate everything correctly. Same with newModifications - you can tell your tableview that the cell with index X was modified, so it needs to be redrawn. 
Then it will invoke the GetCellAtIndex (or whatever the correct method name is) and fetch the new data from the collection.These indices are for creating a slick UI/UX - you don’t need to use them. For instance in React, you have to tell it to just redraw everything so the indices are less useful. You could just clobber the TableView and redraw everything - most of the time the user won’t notice - it depends on your use case.I take your point though and we are looking at ways to improve this in future - for instance, by leveraging frozen realms automatically in change notifications which we believe would improve developer experience. Stay tuned.", "username": "Ian_Ward" }, { "code": "", "text": "Hi @Ian_WardThanks for the feedback.Indeed - I’m using react - so I don’t especially want to clobber the UI.\nThis is why I’m interested in more detail in the notifications.So - in short, I’m not missing anything. It’s just not been designed with my use case in mind.\nI’m very interested in a better API here - if you need further input let me know.Until then I’ll just have to roll my own solution.BCheersB", "username": "Benjamin_Storrier" } ]
Realm.Collection => addListener()
2020-09-05T06:43:23.539Z
Realm.Collection => addListener()
2,663
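Since the thread above ends with "roll my own solution", here is one hedged sketch of that workaround: keep a snapshot of primary keys alongside the Realm results so a deletion index can be mapped back to the deleted object's id. The schema name Task and the _id property are placeholders, not part of the original question:

    const results = realm.objects('Task');
    let knownIds = results.map(obj => obj._id);

    results.addListener((collection, changes) => {
      // Deletion indices refer to the collection as it was *before* the change,
      // so resolve them against the previous snapshot rather than `collection`.
      const deletedIds = changes.deletions.map(index => knownIds[index]);
      deletedIds.forEach(id => console.log('deleted object id:', id));

      // Insertion/modification indices point into the *new* collection.
      changes.insertions.forEach(index => console.log('inserted:', collection[index]._id));
      changes.newModifications.forEach(index => console.log('modified:', collection[index]._id));

      // Refresh the snapshot for the next notification.
      knownIds = collection.map(obj => obj._id);
    });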
null
[ "indexes", "performance" ]
[ { "code": "", "text": "Some queries which are normally fast are very slow sometimes, yet the execution plan is IXSCAN and there are locks that the query had to wait according to the profiler. Someone at Stack Overflow suggested looking at the globalLock section of serverStatus but I don’t know how to interpret the data. There are very few users reading from the database at the same time so MongoDb should be able to handle the load.This is the output of serverStatus when the query is slow{ \"host\": \"monguito\", \"version\": \"4.2.2\", \"process\": \"mongod\", \"pid\": 11749, - e009b65c", "username": "Alejandro_Carrazzoni" }, { "code": "", "text": "Having IXSCAN is a good start, but it doesn’t mean it has an efficient index to support your queries. Analyze the logs first and also determine if you have an undersized resources. Keyhole can help on the performance analytics.", "username": "ken.chen" }, { "code": "", "text": "It seems the problem is high disk I/O. I have created indexes but the problem still persists. Is there anything else I can do to reduce disk I/O?", "username": "Alejandro_Carrazzoni" }, { "code": "", "text": "Hi @Alejandro_CarrazzoniIt’s difficult to say what’s going on in the server using a single snapshot of serverStatus. Ideally a series of serverStatus output captured during a period of time could show a more complete picture. However this is not a trivial troubleshooting effort and would require tooling, understanding of how WiredTiger interfaces with MongoDB, and a lot of time and patience Having said that, from your description, it sounds like your hardware is struggling to meet demand. IXSCAN won’t help much in an overburdened machine due to various reasons, e.g. the working set is too large for the amount of RAM, slow disks, multiple queries that requires a change in cache content which results in many loading/unloading of cache contents, among many.If it’s possible, the low hanging fruit is to try to increase the RAM size of your deployment and see if it improves the situation. I would next try to hunt for inefficient queries to understand if the underlying cause was simply not enough hardware, or something else.Best regards,\nKevin", "username": "kevinadi" } ]
Queries are slow intermittently
2020-08-27T19:45:38.414Z
Queries are slow intermittently
4,294
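One concrete way to do the "hunt for inefficient queries" Kevin mentions is to compare documents examined with documents returned for the slow query. The collection and filter below are placeholders:

    db.myCollection.find({ someField: "someValue" }).explain("executionStats")
    // In the output, compare executionStats.totalDocsExamined and
    // executionStats.totalKeysExamined with nReturned: a large ratio usually
    // means the chosen index does not match the query's filter/sort well,
    // even though the plan says IXSCAN.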
null
[ "database-tools" ]
[ { "code": "root@f3263528d241:/bp2/src# mongo --version\nMongoDB shell version v4.0.19\ngit version: 7e28f4296a04d858a2e3dd84a1e79c9ba59a9568\nOpenSSL version: OpenSSL 1.1.1 11 Sep 2018\nallocator: tcmalloc\nmodules: none\nbuild environment:\n distmod: ubuntu1804\n distarch: x86_64\n target_arch: x86_64\nroot@f3263528d241:/bp2/src# mongodump --uri mongodb://172.16.0.24:27017 --out test\n^\\SIGQUIT: quit\nPC=0x55937d1ba701 m=0 sigcode=128\n\ngoroutine 0 [idle]:\nruntime.futex(0x55937dde5600, 0x80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7ffe0146a4c8, 0x55937d16a776, ...)\n\t/opt/golang/go1.11/src/runtime/sys_linux_amd64.s:531 +0x21\nruntime.futexsleep(0x55937dde5600, 0x559300000000, 0xffffffffffffffff)\n\t/opt/golang/go1.11/src/runtime/os_linux.go:46 +0x4b\nruntime.notesleep(0x55937dde5600)\n\t/opt/golang/go1.11/src/runtime/lock_futex.go:151 +0xa6\nruntime.stopm()\n\t/opt/golang/go1.11/src/runtime/proc.go:2016 +0xe7\nruntime.findrunnable(0xc00002d400, 0x0)\n\t/opt/golang/go1.11/src/runtime/proc.go:2487 +0x4e2\nruntime.schedule()\n\t/opt/golang/go1.11/src/runtime/proc.go:2613 +0x13e\nruntime.goexit0(0xc00014f380)\n\t/opt/golang/go1.11/src/runtime/proc.go:2793 +0x1ea\nruntime.mcall(0x55937d58b8f0)\n\t/opt/golang/go1.11/src/runtime/asm_amd64.s:299 +0x53\n\ngoroutine 1 [sync.Cond.Wait]:\nsync.runtime_notifyListWait(0xc000146228, 0xc000000040)\n\t/opt/golang/go1.11/src/runtime/sema.go:510 +0xef\nsync.(*Cond).Wait(0xc000146218)\n\t/opt/golang/go1.11/src/sync/cond.go:56 +0x94\ngithub.com/mongodb/mongo-tools/vendor/gopkg.in/mgo%2ev2.(*mongoCluster).AcquireSocket(0xc000146200, 0x0, 0xc0001f6601, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1000, 0x55937d16ba3b, ...)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/vendor/gopkg.in/mgo.v2/cluster.go:609 +0xc9\ngithub.com/mongodb/mongo-tools/vendor/gopkg.in/mgo%2ev2.(*Session).acquireSocket(0xc0001f6680, 0x1, 0x0, 0x0, 0x0)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/vendor/gopkg.in/mgo.v2/session.go:4596 +0x249\ngithub.com/mongodb/mongo-tools/vendor/gopkg.in/mgo%2ev2.(*Database).Run(0xc00019da18, 0x55937da1dba0, 0x55937dabe430, 0x0, 0x0, 0x0, 0x0)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/vendor/gopkg.in/mgo.v2/session.go:755 +0x44\ngithub.com/mongodb/mongo-tools/vendor/gopkg.in/mgo%2ev2.(*Session).Run(0xc0001f6680, 0x55937da1dba0, 0x55937dabe430, 0x0, 0x0, 0x55937da1d0e0, 0xc0001d2ca0)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/vendor/gopkg.in/mgo.v2/session.go:2138 +0x8c\ngithub.com/mongodb/mongo-tools/vendor/gopkg.in/mgo%2ev2.(*Session).Ping(0xc0001f6680, 0xc000146200, 0x0)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/vendor/gopkg.in/mgo.v2/session.go:2167 +0x4d\ngithub.com/mongodb/mongo-tools/vendor/gopkg.in/mgo%2ev2.DialWithInfo(0xc0001fc000, 0xc000022600, 0xc00000c300, 0xc00019db90)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/vendor/gopkg.in/mgo.v2/session.go:542 +0x4f7\ngithub.com/mongodb/mongo-tools/common/db.(*VanillaDBConnector).GetNewSession(0xc00000e118, 0x55937dabd618, 0xc0000101f0, 0xc00019dc68)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/common/db/connector.go:80 +0x30\ngithub.com/mongodb/mongo-tools/common/db.(*SessionProvider).GetSession(0xc0000101e0, 0x0, 
0x0, 0x0)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/common/db/db.go:115 +0x9a\ngithub.com/mongodb/mongo-tools/common/db.(*SessionProvider).GetNodeType(0xc0000101e0, 0x0, 0x0, 0x0, 0x0)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/common/db/command.go:90 +0x3e\nXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/common/db/command.go:128 +0x31\ngithub.com/mongodb/mongo-tools/mongodump.(*MongoDump).Init(0xc0001e60c0, 0x55937dabd370, 0xc00001e120)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongodump/mongodump.go:139 +0x20c\nmain.main()\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongodump/main/mongodump.go:81 +0x5ea\n\ngoroutine 19 [syscall]:\nos/signal.signal_recv(0x0)\n\t/opt/golang/go1.11/src/runtime/sigqueue.go:139 +0x9e\nos/signal.loop()\n\t/opt/golang/go1.11/src/os/signal/signal_unix.go:23 +0x24\ncreated by os/signal.init.0\n\t/opt/golang/go1.11/src/os/signal/signal_unix.go:29 +0x43\n\ngoroutine 5 [select]:\ngithub.com/mongodb/mongo-tools/common/progress.(*BarWriter).start(0xc000010190)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/common/progress/manager.go:153 +0x10d\ncreated by github.com/mongodb/mongo-tools/common/progress.(*BarWriter).Start\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/common/progress/manager.go:142 +0x48\n\ngoroutine 6 [select]:\ngithub.com/mongodb/mongo-tools/common/signals.handleSignals(0xc00005ad80, 0xc00001e120)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/common/signals/signals.go:45 +0x38d\ncreated by github.com/mongodb/mongo-tools/common/signals.HandleWithInterrupt\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/common/signals/signals.go:31 +0x69\n\ngoroutine 7 [sleep]:\ntime.Sleep(0x1dcd6500)\n\t/opt/golang/go1.11/src/runtime/time.go:105 +0x155\ngithub.com/mongodb/mongo-tools/vendor/gopkg.in/mgo%2ev2.(*mongoCluster).syncServersLoop(0xc000146200)\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/vendor/gopkg.in/mgo.v2/cluster.go:368 +0x3b4\ncreated by github.com/mongodb/mongo-tools/vendor/gopkg.in/mgo%2ev2.newCluster\n\t/data/mci/5755a6975615efd0851778a0f798abcc/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/vendor/gopkg.in/mgo.v2/cluster.go:78 +0x15a\n\nrax 0xca\nrbx 0x55937dde54c0\nrcx 0x55937d1ba703\nrdx 0x0\nrdi 0x55937dde5600\nrsi 0x80\nrbp 0x7ffe0146a490\nrsp 0x7ffe0146a448\nr8 0x0\nr9 0x0\nr10 0x0\nr11 0x286\nr12 0xffffffffffffffff\nr13 0x19e\nr14 0x19d\nr15 0x200\nrip 0x55937d1ba701\nrflags 0x286\ncs 0x33\nfs 0x0\ngs 0x0\n", "text": "Hi, I have a fresh deployment (no database created yet). I run a mongodump and it is stuck there not doing anything. I would suppose that this is a issue either in what I am doing or in execution. 
Please let me know", "username": "Sandeep_Kalra" }, { "code": "127.0.0.1mongodb://172.16.0.24:27017", "text": "Hi @Sandeep_Kalra,How long did mongodump stay stuck for? Did you try to leave it running for a while, or did you forcibly kill it after some time?Note that by default, MongoDB will bind to IP 127.0.0.1 only. I noticed that you’re trying to connect using the uri mongodb://172.16.0.24:27017. Could you try to connect to that IP address using the mongo shell and see if it connects?Also, how are you running MongoDB? Is this a Docker image? If yes, what are the parameters of the instance?", "username": "kevinadi" } ]
Mongodump is stuck
2020-08-31T16:42:27.019Z
Mongodump is stuck
3,942
null
[]
[ { "code": "", "text": "Hey guys,My colleagues and I are working with inherited infra so please don’t judge Today we were upgrading Mongo on one of the servers - from 3.2 to 4.2. After mongodump > mongorestore we have noticed that the size of the collections has decreased by about 30% (~4GB less). We’re not that familiar with the stored data since we’re new hires but it does not look anything is missing.Do you have any idea what could have caused that reduction in storage size? Do recent versions of Mongo store data in a more efficient way than back in 2015?Thanks", "username": "Josh_White" }, { "code": "wiredTiger.block-manager.file bytes available for reusedb.collection.stats()", "text": "Hi @Josh_White welcome to the community.If you think all your data are there, I don’t think there’s anything to worry about. This is a side effect of how WiredTiger manages its storage. If a document was deleted, WiredTiger doesn’t necessarily release the space back to the OS, with the thinking that a typical database usually will have more data in the future, not less. Thus, space left by deleted documents are left to be reused.If WiredTiger keep releasing space to the OS and reallocate them again, this release-reallocate cycle does no useful work and will be a net negative to performance. Hence WiredTiger does not do this.This is outlined briefly in How do I reclaim disk space in WiredTiger. In your case, the number from the old database’s wiredTiger.block-manager.file bytes available for reuse from the output of db.collection.stats() should add up to the “missing” size in the new database.Best regards,\nKevin", "username": "kevinadi" } ]
Shrunken data size after upgrading 5yo MongoDB.
2020-09-05T03:41:48.642Z
Shrunken data size after upgrading 5yo MongoDB.
2,132
null
[ "performance" ]
[ { "code": "", "text": "Hi,With 3.4, and Linux environment, I see log is filling up with write conflicts as belowDBException thrown :: caused by :: 112 WriteConflictWhat does it mean and how to resolve? in detail", "username": "santhosh_K" }, { "code": "WriteConflict", "text": "Hi @santhosh_KWhat’s the full error message you’re seeing, and what method are you using to connect to MongoDB?Typically WriteConflict was caused by two or more threads/processes trying to update one document at exactly the same time. Since this is not easy to do with a small number of threads/clients, usually this means that there are a large-ish number of threads/clients trying to access the database and update one document at the same time.Usually this is due to the design of the application. It means that you’re artificially bottlenecking the app on a single document. Without more information, I would suggest you to consider some alternative schema design that doesn’t rely on a single document to be able to serve multiple clients simultaneously.Note that as per the support policy, MongoDB 3.4 series is out of support since Jan 2020. I would encourage you to upgrade to a newer, supported version of MongoDB.Best regards,\nKevin", "username": "kevinadi" } ]
Write conflict - 3.4
2020-09-05T10:27:39.798Z
Write conflict - 3.4
2,871
null
[]
[ { "code": " mongod --config /etc/mongod.conf\nmongod 2020-09-03T14:49:33.713+0000 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file., terminating\n", "text": "For some reason, I need to fork the mongod.conf every time I reboot my server by running the command:I can tell since, when I reboot my server and run mongod I get the following message:Is there a way to make mongodb to “read” the config file without needing me to fork it every time I reboot my server?", "username": "George_K" }, { "code": "", "text": "After reboot are you running just mongod command?\nIf yes it tries to start mongod on default port 27017 and default dirpath /data/db\nSince the dir path is not existing it is failing\nYou need to create the directory\nWhen you run manually mongod --config /etc/mongod.conf it may be using different dirpath and working fine\nPlease check contents of your configfile for dbpath to confirm this", "username": "Ramachandra_Tummala" }, { "code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /etc/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/www/site.com/data/db\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n\n#snmp:\n", "text": "Hi,Yes, I use a custom path for the data (dbpath) being /var/www/site/data/db. I would like to make this the default though so every-time I reboot my system, mongodb knows this path without requiring me to run mongod --config /etc/mongod.conf.This is my config file:Why mongodb doesn’t recognize my config file?", "username": "George_K" }, { "code": "storage.dbPath/data/dbmongod/etc/mongod.confsystemd", "text": "Is there a way to make mongodb to “read” the config file without needing me to fork it every time I reboot my server?Hi @George_K,The storage.dbPath option has a hard-coded default of /data/db on Linux. A default path for a configuration file is not baked into the mongod binary, since the convention for config file locations can vary by Linux distro and administrative preferences.However, the recommended way to manage starting and stopping MongoDB with consistent configuration & environment options is using a service definition which includes a config file path so you only have to start and stop the service. 
If you installed MongoDB using the official packages, you should already have a suitable service definition which includes /etc/mongod.conf as the configuration file path.See Install MongoDB Community Edition on Ubuntu for examples of service management using systemd or System V Init.Regards,\nStennie", "username": "Stennie_X" }, { "code": "MongoDB shell version v4.2.9\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n2020-09-07T12:53:59.326+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n2020-09-07T12:53:59.331+0000 F - [main] exception: connect failed\n2020-09-07T12:53:59.331+0000 E - [main] exiting with code 1\n", "text": "Hi @Stennie_XThank you. I have just created a /data/db root folder but mongodb still doesn’t even recognize it when I restart the server. I still cannot figure out what’s wrong with mongodbI have followed the exact link you have shared when I went to install mongodb. However, now when y type mongo, I get the following error:I am sure there is something wrong with my mongodb configuration. Is there a way to fix it?", "username": "George_K" }, { "code": "", "text": "Is your mongod up and running after you started as servicesudo systemctl status mongodPlease check mongod.log for additional details", "username": "Ramachandra_Tummala" }, { "code": "", "text": "This looks like the output of mongo shell rather than the output of mongod server.", "username": "steevej" }, { "code": "Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n\nActive: **failed** (Result: exit-code) since Mon 2020-09-07 12:53:50 UTC; 2h 1min ago\n\nDocs: https://docs.mongodb.org/manual\n\n Main PID: 1273 (code=exited, status=2)\n\nSep 07 12:53:50 me systemd[1]: Started MongoDB Database Server.\n\nSep 07 12:53:50 me mongod[1273]: Error opening config file: Permission denied\n\nSep 07 12:53:50 me mongod[1273]: try '/usr/bin/mongod --help' for more information\n\nSep 07 12:53:50 me systemd[1]: **mongod.service: Main process exited, code=exited, status=2/INVALIDARGUMENT**\n\nSep 07 12:53:50 me systemd[1]: **mongod.service: Failed with result 'exit-code'.**\n\nSep 07 12:53:54 me systemd[1]: **/lib/systemd/system/mongod.service:10: PIDFile= references a path below legacy directory /var/run/, updating /var/run/mongodb/mongod.pid → /run/mongodb/mongod.pid; please update the unit file accordingly.** \n020-09-03T13:53:10.097+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2020-09-03T13:53:10.195+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2020-09-03T13:53:10.196+0000 I CONTROL [initandlisten] MongoDB starting : pid=1989 port=27017 dbpath=/var/lib/mongodb 64-bit host=me\n2020-09-03T13:53:10.196+0000 I CONTROL [initandlisten] db version v4.2.9\n2020-09-03T13:53:10.196+0000 I CONTROL [initandlisten] git version: 06402114114ffc5146fd4b55402c96f1dc9ec4b5\n2020-09-03T13:53:10.196+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1c 28 May 2019\n2020-09-03T13:53:10.196+0000 I CONTROL [initandlisten] allocator: tcmalloc\n2020-09-03T13:53:10.196+0000 I CONTROL [initandlisten] modules: none\n2020-09-03T13:53:10.196+0000 I CONTROL [initandlisten] build environment:\n2020-09-03T13:53:10.196+0000 I CONTROL [initandlisten] distmod: 
ubuntu1804\n2020-09-03T13:53:10.196+0000 I CONTROL [initandlisten] distarch: x86_64\n2020-09-03T13:53:10.196+0000 I CONTROL [initandlisten] target_arch: x86_64\n2020-09-03T13:53:10.196+0000 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIp: \"127.0.0.1\", port: 27017 }, processManagement: { timeZoneInfo: \"/usr/share/zoneinfo\" }, storage: { dbPath: \"/var/lib/mongodb\", journal: { enabled: true } }, systemLo>\n2020-09-03T13:53:10.196+0000 E NETWORK [initandlisten] Failed to unlink socket file /tmp/mongodb-27017.sock Operation not permitted\n2020-09-03T13:53:10.196+0000 F - [initandlisten] Fatal Assertion 40486 at src/mongo/transport/transport_layer_asio.cpp 684\n2020-09-03T13:53:10.196+0000 F - [initandlisten]\n", "text": "Hi,Yes mongodb starts as soon as the server boots. Just checked and it says that it fails to startThis is the mongo.log output log:What can I do to fix this mess?", "username": "George_K" }, { "code": "", "text": "Hi,I opened the mongod.log file which is located in the /etc path, so /etc/mongod.log. Is there another log file you are referring to?", "username": "George_K" }, { "code": "", "text": "Your both snapshots have different timestamps?\nOne is from Sep7th(latest) and other is of Sep 3rd\nIs the mongod.log from the run just after reboot?\nor you must have pasted older entries\nInvestigate this error\nSep 07 12:53:50 me mongod[1273]: Error opening config file: Permission denied\nls -lrt /etc/mongod.conf\nAlso review contents of your config file for dbpath,logpath", "username": "Ramachandra_Tummala" } ]
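Pulling the thread's advice together, a rough sequence for an Ubuntu package install (the mongodb user and the paths below are what the official .deb normally uses — verify against your own config before running anything). The "Permission denied" on the config file and the failure to unlink /tmp/mongodb-27017.sock usually mean mongod was previously started by hand as root:
ls -l /etc/mongod.conf /tmp/mongodb-27017.sock /var/lib/mongodb
sudo rm -f /tmp/mongodb-27017.sock                       # stale socket left behind by the root-owned run
sudo chown -R mongodb:mongodb /var/lib/mongodb           # adjust to the dbPath in your config
sudo chmod 644 /etc/mongod.conf
sudo systemctl enable mongod && sudo systemctl restart mongod
sudo systemctl status mongod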
Mongodb needs me to fork the path to the mongod.conf file every time I reboot my Ubuntu server. Why?
2020-09-06T20:31:09.319Z
Mongodb needs me to fork the path to the mongod.conf file every time I reboot my Ubuntu server. Why?
11,671
null
[]
[ { "code": "", "text": "Hi,\nIn the collection video.movies, what represent the graph in the left of the field name of genre ? For exemple the Drama represent 12%. 12% of what ? please.\nThank you.", "username": "mixd" }, { "code": "", "text": "Are you referring to schema tab?\nYou have different categories for genres like Western,Drama,Comedy etc\nSo 12% of the docs are with Documentary\n3% with Drama and so on", "username": "Ramachandra_Tummala" }, { "code": "", "text": "@Ramachandra_37567, thank you.\nYes, i was in the schema tab. This is where i see the percent of each genre. But i’d like to know the percent means what. Can you explain me the stick chart means what please ?", "username": "mixd" }, { "code": "Schema tabDocumentary", "text": "Hi @tcho,The report shown under the Schema tab is based on a random sample of 1000 documents.But i’d like to know the percent means what.For example if it says : 12 % DocumentaryIt simply means in this random sample of 1000 documents, 12 % of 1000 documents has genre Documentary.Whatever data you are seeing right now might change after a refresh.It is just for an estimation.Hope it helps!~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Thank you @Ramachandra_37567 and @Shubham_Ranjan for your answers.\nI understand now.", "username": "mixd" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Schema of the collection video.movies
2020-09-06T21:17:00.776Z
Schema of the collection video.movies
2,219
null
[]
[ { "code": "", "text": "Hello, everyone,\nI am developing a card game for mobile devices and while searching for a system to save all player data, I came across MongoDB!\nI am following the basic lessons of MongoDB, but I have some doubts that I hope some of you can clear up for me.\nAs I wrote to you just now, I would need to constantly save all my players’ data, so that they are always safe and updated in real time (cards owned, statistics etc.), if I use a service like Mongo Atlas, can I have the application communicate directly with my db? If yes, is there a way to make the communication secure? I ask this because I wanted to understand if a user, by decompiling the app, can manage to enter the db and have access to all the data.\nIf not, do you have any advice to give me on this, perhaps on the best method to use for my needs?Thanks", "username": "Andrea" }, { "code": "", "text": "Hi @Andrea.Atlas have a built in security mechanisms to verify that the cluster is secured.Additional available mechanisimshttps://docs.atlas.mongodb.com/setup-cluster-security/As I wrote to you just now, I would need to constantly save all my players’ data, so that they are always safe and updated in real time (cards owned, statistics etc.), if I use a service like Mongo Atlas, can I have the application communicate directly with my db? If yes, is there a way to make the communication secure? I ask this because I wanted to understand if a user, by decompiling the app, can manage to enter the db and have access to all the data.\nIf not, do you have any advice to give me on this, perhaps on the best method to use for my needs?If you follow our best practices and recommendations your application should be secure.Having said that, if you want an ease and agility of mobile development with all the above cluster security as well as integrated Auth providers (Google/Facebook/JWT etc) consider exploring the MongoDB realm platform:Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Your mobile application should not communicate directly with your MongoDB Atlas cluster. MongoDB Realm (or equivalent) must be in between to handle authentifications and permissions. If you choose not to use MongoDB Realm, then you need your own backend system to manage the authentifications and access rights.Also, if you want to deploy a new version of your mobile app and change the data model or something else, you will have to make sure you are retro-compatible with the oldest version as some users might never do the update.You need a backend service in between to make sure you stay in control. Multitier architecture - WikipediaThe mobile app is just the presentation layer. It should just present the data. Not manipulate it. Each operation must land in your backend system where your can check if the user is authenticated and check that the user has the required permissions to do this action. You should always be in control of your data. The “orders” should not come from systems you don’t have control over.", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you for your answers @Pavel_Duchovny, @MaBeuLux88!\nI think MongoDB Relam is for me! 
By giving users the ability to connect directly to the Atlas db, I would give them the ability to modify their data at will, and this doesn’t have to happen (for example a player who buys a 5 gold pack might tamper with the request and add 5000 gold instead of 5 to his account).\nI’ve given a quick read to Realm’s features, and I think I’ve figured out that I need the features. I could write features that interact with the db, and have users call up those features (this way I don’t give users direct access to the db). Right? Or are there better methods to use?I also have another question: Is it possible to call up functions via an HTTP request of the app and have an answer (for example a JSON file) always via HTTP? To make a simple comparison, as would happen with a .php page that communicates with a MySQL db.", "username": "Andrea" }, { "code": "", "text": "Hi @Andrea,Yes, those are called HTTP webhooks.\nhttps://docs.mongodb.com/realm/services/http/#incoming-webhooksSee this post as well:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
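To make the webhook idea concrete, a rough sketch of a Realm incoming-webhook function along the lines of the docs Pavel linked (database, collection and field names here are invented — the point is that the server, not the client, decides how much gold to add):
exports = function(payload, response) {
  const body = EJSON.parse(payload.body.text());
  const players = context.services.get("mongodb-atlas").db("game").collection("players");
  // the amount is fixed server-side, so a tampered request cannot award 5000 gold
  return players.updateOne(
    { _id: BSON.ObjectId(body.playerId) },
    { $inc: { gold: 5 } }
  );
};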
Question on connection of Atlas db with application
2020-09-06T20:32:03.090Z
Question on connection of Atlas db with application
1,997
null
[ "monitoring" ]
[ { "code": "", "text": "Is there any official Atlas or MongoDB documentation material on what exactly a “command” is?\nWhen I visit a cluster shard’s metrics page, there are several “opcounters”. [1]The command metric is usually very high for our cluster.\nDoes someone know which operations are covered by the “command” metric?\nI guess aggregations are part of it, but I did not find any documentation on this yet.[1] Cloud: MongoDB Cloud", "username": "MartinLoeper" }, { "code": "", "text": "I think you have a list of all the MongoDB commands here and I think each of them counts in the monitoring as a “command”.\nJust like they count in mongostat I presume.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is a command in MongoDB Atlas Metrics?
2020-09-05T22:11:15.504Z
What is a command in MongoDB Atlas Metrics?
3,261
null
[]
[ { "code": "{ \n \"_id\" : ObjectId(\"5f5024314a74f35fc6fb37e1\"), \n \"date\" : ISODate(\"2020-09-02T23:00:00.000+0000\"), \n \"device\" : ObjectId(\"5dd7596761ced7001253aab3\"), \n \"telemetry\" : \"total-active-power\", \n \"max\" : 17175070000.0, \n \"min\" : 17174710000.0, \n \"nsamples\" : NumberInt(4), \n \"samples\" : [\n {\n \"date\" : ISODate(\"2020-09-02T23:00:07.194+0000\"), \n \"data\" : {\n \"samples\" : NumberInt(12), \n \"latest\" : 17174710000.0\n }\n }, \n {\n \"date\" : ISODate(\"2020-09-02T23:01:07.328+0000\"), \n \"data\" : {\n \"samples\" : NumberInt(12), \n \"latest\" : 17174730000.0\n }\n }, \n {\n \"date\" : ISODate(\"2020-09-02T23:02:07.500+0000\"), \n \"data\" : {\n \"samples\" : NumberInt(12), \n \"latest\" : 17174760000.0\n }\n }, \n {\n \"date\" : ISODate(\"2020-09-02T23:03:07.751+0000\"), \n \"data\" : {\n \"samples\" : NumberInt(12), \n \"latest\" : 17174790000.0\n }\n }\n ], \n \"sum\" : 68699160000.0\n}\nSELECT data.* FROM vehicles v\n INNER JOIN LATERAL (\n SELECT * FROM location l\n WHERE l.vehicle_id = v.vehicle_id\n ORDER BY time DESC LIMIT 1\n ) AS data\nON true\nORDER BY v.vehicle_id, data.time DESC;\n", "text": "We have IoT telemetry data coming through into 1 hour buckets (documents with many samples).\nExample document:It is easy to create an aggregate query that finds the latest value for 1 device and 1 telemetry ObjectId, but it seems impossible to get the latest value for MANY devices and telemetry ObjectIds.Can do: Latest value for device ObjectId(“5dd7596761ced7001253aab3”) and telemetry “total-active-power”\nCan’t do: Latest value for device ObjectId array (many) and “total-active-power”In SQL I would do it like this (timescaledb):See Timescale Documentation | Querying dataHow would I do this in a Mongo aggregation?Thanks", "username": "Jeremy_Carter" }, { "code": "{\n\t\"_id\" : ObjectId(\"5f5024314a74f35fc6fb37e1\"),\n\t\"date\" : ISODate(\"2020-09-02T23:00:00Z\"),\n\t\"device\" : ObjectId(\"5dd7596761ced7001253aab3\"),\n\t\"telemetry\" : \"total-active-power\",\n\t\"max\" : 17175070000,\n\t\"min\" : 17174710000,\n\t\"nsamples\" : 4,\n\t\"samples\" : [\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:00:07.194Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 12,\n\t\t\t\t\"latest\" : 17174710000\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:01:07.328Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 12,\n\t\t\t\t\"latest\" : 17174730000\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:02:07.500Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 12,\n\t\t\t\t\"latest\" : 17174760000\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:03:07.751Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 12,\n\t\t\t\t\"latest\" : 17174790000\n\t\t\t}\n\t\t}\n\t],\n\t\"sum\" : 68699160000\n}\n{\n\t\"_id\" : ObjectId(\"5f5024314a74f35fc6fb37e2\"),\n\t\"date\" : ISODate(\"2020-09-02T23:00:00Z\"),\n\t\"device\" : ObjectId(\"5dd7596761ced7001253aab4\"),\n\t\"telemetry\" : \"total-active-power\",\n\t\"max\" : 17175070000,\n\t\"min\" : 17174710000,\n\t\"nsamples\" : 4,\n\t\"samples\" : [\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:00:07.194Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 12,\n\t\t\t\t\"latest\" : 17174710000\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:01:07.328Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 12,\n\t\t\t\t\"latest\" : 17174730000\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:02:07.500Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 
12,\n\t\t\t\t\"latest\" : 17174760000\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:03:07.751Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 12,\n\t\t\t\t\"latest\" : 17174790000\n\t\t\t}\n\t\t}\n\t],\n\t\"sum\" : 68699160000\n}\n{\n\t\"_id\" : ObjectId(\"5f5024314a74f35fc6fb37e3\"),\n\t\"date\" : ISODate(\"2020-09-02T23:00:00Z\"),\n\t\"device\" : ObjectId(\"5dd7596761ced7001253aab5\"),\n\t\"telemetry\" : \"total-active-power\",\n\t\"max\" : 17175070000,\n\t\"min\" : 17174710000,\n\t\"nsamples\" : 4,\n\t\"samples\" : [\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:00:07.194Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 12,\n\t\t\t\t\"latest\" : 17174710000\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:01:07.328Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 12,\n\t\t\t\t\"latest\" : 17174730000\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:02:07.500Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 12,\n\t\t\t\t\"latest\" : 17174760000\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"date\" : ISODate(\"2020-09-02T23:03:07.751Z\"),\n\t\t\t\"data\" : {\n\t\t\t\t\"samples\" : 12,\n\t\t\t\t\"latest\" : 17174790000\n\t\t\t}\n\t\t}\n\t],\n\t\"sum\" : 68699160000\n}\n[\n {\n '$match': {\n 'telemetry': 'total-active-power', \n 'device': {\n '$in': [\n new ObjectId('5dd7596761ced7001253aab3'), new ObjectId('5dd7596761ced7001253aab4'), new ObjectId('5dd7596761ced7001253aab5')\n ]\n }\n }\n }, {\n '$unwind': '$samples'\n }, {\n '$sort': {\n 'samples.date': -1\n }\n }, {\n '$group': {\n '_id': '$device', \n 'value': {\n '$first': '$samples'\n }\n }\n }\n]\n{\n\t\"_id\" : ObjectId(\"5dd7596761ced7001253aab3\"),\n\t\"value\" : {\n\t\t\"date\" : ISODate(\"2020-09-02T23:03:07.751Z\"),\n\t\t\"data\" : {\n\t\t\t\"samples\" : 12,\n\t\t\t\"latest\" : 17174790000\n\t\t}\n\t}\n}\n{\n\t\"_id\" : ObjectId(\"5dd7596761ced7001253aab5\"),\n\t\"value\" : {\n\t\t\"date\" : ISODate(\"2020-09-02T23:03:07.751Z\"),\n\t\t\"data\" : {\n\t\t\t\"samples\" : 12,\n\t\t\t\"latest\" : 17174790000\n\t\t}\n\t}\n}\n{\n\t\"_id\" : ObjectId(\"5dd7596761ced7001253aab4\"),\n\t\"value\" : {\n\t\t\"date\" : ISODate(\"2020-09-02T23:03:07.751Z\"),\n\t\t\"data\" : {\n\t\t\t\"samples\" : 12,\n\t\t\t\"latest\" : 17174790000\n\t\t}\n\t}\n}\n", "text": "Hi @Jeremy_Carter and welcome in the community !I made a small collection with these 3 documents (I just incremented the “_id” and the “device” by 1 each time.And I wrote this little aggregation:Which gives me this result:I think it’s what you asked for but maybe I misunderstood your demand. Please let me know if that’s not it.", "username": "MaBeuLux88" }, { "code": "[\n {\n $match: {\n telemetry: 'total-apparent-power', \n device: ObjectId(\"5dce189883c74c001239c602\")\n }\n },\n { $sort: { date: -1 } },\n { $limit: 2 },\n {\n\t$unwind: {\n\t path: '$samples'\n\t}\n },\n {\n\t$sort: {\n\t 'samples.date': -1\n\t}\n },\n {\n\t$limit: 1\n },\n {\n\t$project: {\n\t _id: 0,\n\t date: '$samples.date',\n\t device: '$device',\n\t telemetry: '$telemetry',\n\t value: '$samples.data'\n\t}\n }\n]\n", "text": "Hi @MaBeuLux88. Thanks for having a look at this. Let me just add a bit more information to the mix.The collection of telemetry buckets currently has 400,000 documents. Each bucket is for 1 hour worth of telemetry. This was modelled as per the Mongo IoT whitepaper.The current query which has an input of 1 device and 1 telemetry key (eg total-active-power, temperature, etc). 
This query finds the top most bucket by date and unwinds the samples.See this aggregation (find the latest value for 1 telemetry and 1 device only):This query is very fast because the sort and limit are combined and of course the limit is just 1.Now the problem is that what if we wanted to do this query for 500 devices without sending 500 aggregations to mongo and waiting for them all to return?If I run your aggregation on our large collection I have to send allowDiskUse true, and it takes 1 second to get the latest for 5 devices. But if I send 5 x single aggregation it takes < 40ms total.", "username": "Jeremy_Carter" }, { "code": "[\n {\n '$match': {\n 'telemetry': 'total-active-power', \n 'device': {\n '$in': [\n new ObjectId('5dd7596761ced7001253aab3'), new ObjectId('5dd7596761ced7001253aab5')\n ]\n }\n }\n }, {\n '$sort': {\n 'date': -1\n }\n }, {\n '$group': {\n '_id': '$device', \n 'doc': {\n '$first': '$$ROOT'\n }\n }\n }, {\n '$project': {\n 'date': '$doc.date', \n 'device': '$doc.device', \n 'telemetry': 'doc.telemetry', \n 'max': '$doc.max', \n 'min': '$doc.min', \n 'nsamples': '$doc.nsamples', \n 'samples': {\n '$last': '$doc.samples'\n }, \n 'sum': '$doc.sum'\n }\n }\n]\n{\"_id\":{\"$oid\":\"5f5024314a74f35fc6fb37e1\"},\"date\":{\"$date\":\"2020-09-02T23:00:00Z\"},\"device\":{\"$oid\":\"5dd7596761ced7001253aab3\"},\"telemetry\":\"total-active-power\",\"max\":1.717507E+10,\"min\":1.717471E+10,\"nsamples\":4.0,\"samples\":[{\"date\":{\"$date\":\"2020-09-02T23:00:07.194Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717471E+10}},{\"date\":{\"$date\":\"2020-09-02T23:01:07.328Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717473E+10}},{\"date\":{\"$date\":\"2020-09-02T23:02:07.5Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717476E+10}},{\"date\":{\"$date\":\"2020-09-02T23:03:07.751Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717479E+10}}],\"sum\":6.869916E+10}\n{\"_id\":{\"$oid\":\"5f5024314a74f35fc6fb37e2\"},\"date\":{\"$date\":\"2020-09-02T22:00:00Z\"},\"device\":{\"$oid\":\"5dd7596761ced7001253aab3\"},\"telemetry\":\"total-active-power\",\"max\":1.717507E+10,\"min\":1.717471E+10,\"nsamples\":4.0,\"samples\":[{\"date\":{\"$date\":\"2020-09-02T22:00:07.194Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717471E+10}},{\"date\":{\"$date\":\"2020-09-02T22:01:07.328Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717473E+10}},{\"date\":{\"$date\":\"2020-09-02T22:02:07.5Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717476E+10}},{\"date\":{\"$date\":\"2020-09-02T22:03:07.751Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717479E+10}}],\"sum\":6.869916E+10}\n{\"_id\":{\"$oid\":\"5f5024314a74f35fc6fb37e3\"},\"date\":{\"$date\":\"2020-09-02T23:00:00Z\"},\"device\":{\"$oid\":\"5dd7596761ced7001253aab5\"},\"telemetry\":\"total-active-power\",\"max\":1.717507E+10,\"min\":1.717471E+10,\"nsamples\":4.0,\"samples\":[{\"date\":{\"$date\":\"2020-09-02T23:00:07.194Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717471E+10}},{\"date\":{\"$date\":\"2020-09-02T23:01:07.328Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717473E+10}},{\"date\":{\"$date\":\"2020-09-02T23:02:07.5Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717476E+10}},{\"date\":{\"$date\":\"2020-09-02T23:03:07.751Z\"},\"data\":{\"samples\":12.0,\"latest\":1.717479E+10}}],\"sum\":6.869916E+10}\n", "text": "The issue with my pipeline is that it’s actually “unwining” all the samples for all the dates of device X, Y and Z of the telemetry T. 
You need to find a way to reduce the documents for each device.If you know each device has an entry in the last 24 hours, you could use this to filter down the list of docs in the first match with date $gt (NOW - 24h) to limit the number of documents in the pipeline at this stage.If some devices have been stopped or are no longer reporting data, the date might be out of the filter. To solve this, we can find the latest entry for each device + telemetry like this:In this pipeline, I’m extracting ONLY the latest doc for each device with the $group stage. Then, you don’t really need to use $unwind at all, as you always want to retrieve the latest entry in each “sample” array anyway. So we can use the $last array operator to retrieve this value.I think this pipeline is a lot more optimized and should not need the allowDiskUse option for just 500 devices.This should perform really well with an index {telemetry: 1, date: -1, device: 1} in this order as this should avoid the in-memory sort.It would perform even better if you could add a filter on the date in the first $match stage to limit the number of documents in this stage.To validate my pipeline, I used these 3 documents (one device has 2 entries with 2 different dates).I hope this solves your issue!Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
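For completeness, the index Maxime recommends can be created like this (the collection name is a placeholder):
db.telemetry.createIndex({ telemetry: 1, date: -1, device: 1 })
With that index in place, adding a date: { $gt: <some recent cutoff> } clause to the first $match should keep the $group stage working over a small, index-sorted set of buckets instead of the whole collection.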
Query to get the latest value for each parent
2020-09-03T02:46:44.126Z
Query to get the latest value for each parent
2,422
null
[ "dot-net" ]
[ { "code": "", "text": "It has been very quiet lately when it comes to .NET support. What is going on?", "username": "Void" }, { "code": "", "text": "My concern also @Void. It seems Mongo moved the Realm .NET people to other areas. They have an ad somewhere for .NET people, but it’s hardly reassuring they would replace experienced people with inexperienced people.Doesn’t look like the Mongo acquisition was good news for Realm.NET. It doesn’t seem Mongo have much in the way of .NET expertise … or interest.@nirinchev1 has been active on the repo for the .NET SDK lately. That’s good news. His help was invaluable in getting us going with Realm.NET.", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "Hi Folks – We’re working on getting a .NET SDK released for MongoDB Realm. We now have an engineer who will be working on .NET full time (before it was a shared responsibility) and are building out a team to ensure that we can commit to quality support when we release.", "username": "Drew_DiPalma" }, { "code": "", "text": "Is there any ETA on the next .Net release?", "username": "Jordan_Hafer" }, { "code": "", "text": "Nikola is back as the Lead for .NET - back by popular demand - we’ve also just hired a bunch of engineers to fill out the team, which is great! We hope to have something to share this quarter - stay tuned!", "username": "Ian_Ward" }, { "code": "", "text": "Great! We look forward to the next release.", "username": "Jordan_Hafer" } ]
What is happening to .NET support?
2020-07-21T12:29:01.542Z
What is happening to .NET support?
2,071
null
[]
[ { "code": "", "text": "Hello,(please note, I am not a programmer and I am using a mac)I am using mongodb as the data base for the TANGO plugin of imageJ. Recently, my drive was full and the connection with mongo crashed and I was unabl to reconnect.After relocating my mongo files to a new drive (with the same name as my old drive) and multiple attempts at running the repair command, I now get this error:DK5002124778:~ rosinlf$ sudo /usr/local/Cellar/mongodb/4.0.3/bin/mongod --storageEngine wiredTiger --repair --dbpath /Volumes/SSD-T5/mongodb-SSD2020-09-01T16:14:47.105-0400 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’2020-09-01T16:14:47.118-0400 I CONTROL [initandlisten] MongoDB starting : pid=45614 port=27017 dbpath=/Volumes/SSD-T5/mongodb-SSD 64-bit host=DK50021247782020-09-01T16:14:47.118-0400 I CONTROL [initandlisten] db version v4.0.32020-09-01T16:14:47.118-0400 I CONTROL [initandlisten] git version: 7ea530946fa7880364d88c8d8b6026bbc9ffa48c2020-09-01T16:14:47.118-0400 I CONTROL [initandlisten] allocator: system2020-09-01T16:14:47.118-0400 I CONTROL [initandlisten] modules: none2020-09-01T16:14:47.118-0400 I CONTROL [initandlisten] build environment:2020-09-01T16:14:47.118-0400 I CONTROL [initandlisten] distarch: x86_642020-09-01T16:14:47.118-0400 I CONTROL [initandlisten] target_arch: x86_642020-09-01T16:14:47.118-0400 I CONTROL [initandlisten] options: { repair: true, storage: { dbPath: “/Volumes/SSD-T5/mongodb-SSD”, engine: “wiredTiger” } }2020-09-01T16:14:47.118-0400 W STORAGE [initandlisten] Detected unclean shutdown - /Volumes/SSD-T5/mongodb-SSD/mongod.lock is not empty.2020-09-01T16:14:47.122-0400 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.2020-09-01T16:14:47.122-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=32256M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),2020-09-01T16:14:47.561-0400 I STORAGE [initandlisten] WiredTiger message [1598991287:561575][45614:0x7fffa9091380], txn-recover: Set global recovery timestamp: 02020-09-01T16:14:47.571-0400 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. 
Ts: Timestamp(0, 0)2020-09-01T16:14:47.571-0400 I STORAGE [initandlisten] Repairing size cache2020-09-01T16:14:47.572-0400 E STORAGE [initandlisten] WiredTiger error (0) [1598991287:572297][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __wt_block_read_off, 291: sizeStorer.wt: read checksum error for 4096B block at offset 12288: block header checksum of 3558255388 doesn’t match expected checksum of 3684020887 Raw: [1598991287:572297][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __wt_block_read_off, 291: sizeStorer.wt: read checksum error for 4096B block at offset 12288: block header checksum of 3558255388 doesn’t match expected checksum of 36840208872020-09-01T16:14:47.572-0400 E STORAGE [initandlisten] WiredTiger error (0) [1598991287:572420][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __wt_bm_corrupt_dump, 144: {12288, 4096, 3684020887}: (chunk 1 of 4): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 33 00 00 00 0b 00 00 00 01 00 00 00 00 10 00 00 1c ab 16 d4 01 00 00 00 e2 f5 1a 80 e2 0f c0 df c0 80 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Raw: [1598991287:572420][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __wt_bm_corrupt_dump, 144: {12288, 4096, 3684020887}: (chunk 1 of 4): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 33 00 00 00 0b 00 00 00 01 00 00 00 00 10 00 00 1c ab 16 d4 01 00 00 00 e2 f5 1a 80 e2 0f c0 df c0 80 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 002020-09-01T16:14:47.573-0400 E STORAGE [initandlisten] WiredTiger error (0) [1598991287:573025][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __wt_bm_corrupt_dump, 144: {12288, 4096, 3684020887}: (chunk 2 of 4): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Raw: [1598991287:573025][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __wt_bm_corrupt_dump, 144: {12288, 4096, 3684020887}: 
(chunk 2 of 4): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 002020-09-01T16:14:47.573-0400 E STORAGE [initandlisten] WiredTiger error (0) [1598991287:573557][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __wt_bm_corrupt_dump, 144: {12288, 4096, 3684020887}: (chunk 3 of 4): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Raw: [1598991287:573557][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __wt_bm_corrupt_dump, 144: {12288, 4096, 3684020887}: (chunk 3 of 4): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 002020-09-01T16:14:47.573-0400 E STORAGE [initandlisten] WiredTiger error (0) [1598991287:573812][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __wt_bm_corrupt_dump, 144: {12288, 4096, 3684020887}: (chunk 4 of 4): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Raw: [1598991287:573812][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __wt_bm_corrupt_dump, 144: {12288, 4096, 3684020887}: (chunk 4 of 4): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 002020-09-01T16:14:47.574-0400 E STORAGE [initandlisten] WiredTiger error (0) [1598991287:574368][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __verify_filefrag_chk, 474: file ranges never verified: 1 Raw: [1598991287:574368][45614:0x7fffa9091380], file:sizeStorer.wt, WT_SESSION.verify: __verify_filefrag_chk, 474: file ranges never verified: 12020-09-01T16:14:47.574-0400 I STORAGE [initandlisten] Verify failed on uri table:sizeStorer. Running a salvage operation.2020-09-01T16:14:47.575-0400 F - [initandlisten] Fatal assertion 28577 DataModifiedByRepair: Salvaged data for table:sizeStorer at src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 5542020-09-01T16:14:47.575-0400 F - [initandlisten]Any suggestions on how to proceed are greatly appreciated. After consultaning with an IT specialist, he believes the sizeStorer.wt file is corrupt and there seems to be an issue with checksum, but we are not sure how to proceed. Thank you in advance!", "username": "Leah_Rosin" }, { "code": "", "text": "We were able to resolve this issue by downloading a newer version of mongo and running the repair again (per this post https://jira.mongodb.org/browse/SERVER-39710)Thanks!", "username": "Leah_Rosin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Corrupt sizeStorer.wt and failed repair after full drive crash help
2020-09-05T03:42:05.988Z
Corrupt sizeStorer.wt and failed repair after full drive crash help
3,446
null
[]
[ { "code": "", "text": "Would like to know if creating 3 collections per user in a Mongodb is acceptable. Or will it cause any performace and storage issues.\nExpecting around 1million users in a period of 2years.", "username": "hera_1002" }, { "code": "_iduser_id", "text": "Hi @hera_1002,The thing to keep in mind is that every instance holding the collection data will create a file per collection and a file per every index on the collection.So if you create 3 collections per user (at least 1 index per collection is default _id) you will endup with 3 million collections and 4 million files.This number is an exrtreme number which can introduce many problems for MongoDB server. Therefore, I don’t think that this design is scalable.Can you have 3 collections with a user_id field where you filter user data based on this.See the following documentation which is relevant for any MongoDB deployment. One of the suggestions is covered under “Reduce Number of Collections”:Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Why do you need three collections per user? If you describe the problem you are trying to solve we may be able to propose a better solution.", "username": "Joe_Drumgoole" }, { "code": "", "text": "I’m a newbie to NoSQL. As of now I have come across a mobile app & I’m asked to continue the development .The database was having 3 collections per user and i wanted to confirm if that’s okay to continue with before putting out concerns to the vendor.It would be really helpful if you could suggest for my requirement.\nI’m supposed to track the user’s routine such as food intake , water intake and exercises everyday .\nAnd my apologies for delayed response\nThanks in advance", "username": "hera_1002" }, { "code": "{\n_id : ...,\nUsername: ...,\nEmail : ...,\nLoginDate : ...,\nTotalCounters : [ { totalDaysUsed : ....} ...]\nActivity_202009_id : ...,\nUserId : ...,\nDayDate: '20200907',\nFoodActivities: [ { activityId: xxxx, ... }, {....}],\nDrinksActivities : [ { activityId: yyyyy, ... }, {....}],\nExcersizes: [{ activityId: zzzz, ... }, {....}],\n....\n}\n{UserId : 1, DayDate : 1}ExtendedActivity_202009_week1{ ActivityId: xxxx,\n Properties : [ {k : \"metric1\", v : \"value1}, ... ]\n{ ActivityId: xxxx,\n Metric1 : \"value1\"\n...\n}\n{ActivityId : 1}", "text": "Hi @hera_1002,Alright, so there is more then one option to approach this use case and it is really a subject of your data access patterns, how will data be segregated on the screen and its volumes.A possible option could be the following:This collection should endup with max of 2m documents which if correctly indexed is good.In this collection the {UserId : 1, DayDate : 1} should probably be a compound index.OrThe idea is that this collection will be accessed based on {ActivityId : 1} when you look at a specific activity full view.With the above design and 1m users will have max of 30M in the activities collection and probably less than that in extended collection per week.When you grow above 1m consider doing a weekly collection or sharding the environment to expand.As I mentioned this is one option, the idea is that you keep as few collections as possible and query data in less documents without inflating collections and keeping simple data access patterns.Best\nPavel", "username": "Pavel_Duchovny" } ]
3 Collections per user in database
2020-08-31T11:25:38.912Z
3 Collections per user in database
4,774
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Is there a good sample realm sync project with partition keys for different types (public, private, group) with multiple collections?The current realm tutorial discussed a sample with collection user, project, task.But it does not discuss how to set up collections with different partition keys so that users can be associated with certain realms for task and project (public, private, group) . A practical “real” sample would be helpful to design proper realm sync.", "username": "jerry_he" }, { "code": "let user = app.currentUser()\nlet partitionValue = \"PUBLIC\"\npublicRealm = try! Realm(configuration: user.configuration(partitionValue: partitionValue)) \n\n...\n\nlet user = app.currentUser()\nlet partitionValue = \"user=1231\"\nprivateRealm = try! Realm(configuration: user.configuration(partitionValue: partitionValue))\n\n...\n\nlet user = app.currentUser()\nlet partitionValue = \"family=851024\"\nsharedRealm = try! Realm(configuration: user.configuration(partitionValue: partitionValue))\n", "text": "@jerry_he we are working on v2 of the tutorial right now and should be released shortly. But to answer your question you will need to open multiple realms and name them different realm references with different sync configurations that have different partitionKeys - for example:Docs here:\nhttps://docs.mongodb.com/realm/sync/partitioning/", "username": "Ian_Ward" }, { "code": "", "text": "Hi Ian,Thanks for note. The above code is good. How to make sure some/current user can access realm with partitionValue = “family=851024”? We need to do something in rules/filters to make permission. That is the part l am struggling. We can setup any partition value, but in order for someone to access that realm, some permission rules/filers need to be in place. Wonder you have good example?Also for v2 of the tutorial, is something already in somewhere so that we can have earlier access?Thanks,\nJH", "username": "jerry_he" }, { "code": "{\n\"%%user.custom_data.readPartitions\" : \"%%partition\"\n}", "text": "@jerry_he One way would be set custom user data that includes metadata fields about what partitions the user has read or write access to - https://docs.mongodb.com/realm/users/define-custom-user-data/index.htmlYou can then define the permissions like so in the Sync Permissions UI\nhttps://docs.mongodb.com/realm/sync/rules/#id5For instance this could be the syntax for read permissions", "username": "Ian_Ward" }, { "code": "", "text": "@Ian_Ward This was the missing link that I was looking for. Thanks! The key here is that the custom_data is modified by the server (through MongoDB CRUD functions), not the user on the client side, to store application specific data about the user. This should not be confused with the user_data metadata that is set on the User object through a JWT authentication token.", "username": "Richard_Krueger" }, { "code": "java.lang.IllegalArgumentException: Configurations cannot be different if used to open the same file. 
user = app.currentUser()\n\n val configStore = SyncConfiguration.Builder(user!!, \"store_id=1234\")\n .waitForInitialRemoteData()\n .build()\n Realm.getInstanceAsync(configStore, object : Realm.Callback(){\n override fun onSuccess(realm: Realm) {\n [email protected] = realm\n \n }\n override fun onError(exception: Throwable) {\n super.onError(exception)\n }\n })\n\nval configGlobal = SyncConfiguration.Builder(user!!, \"global\")\n .waitForInitialRemoteData()\n .build()\n Realm.getInstanceAsync(configGlobal, object : Realm.Callback(){\n override fun onSuccess(realm: Realm) {\n [email protected] = realm\n }\n override fun onError(exception: Throwable) {\n super.onError(exception)\n\n }\n })", "text": "It looks like with android this doesn’t work. I’m getting an error java.lang.IllegalArgumentException: Configurations cannot be different if used to open the same file. ", "username": "Safik_Momin" }, { "code": "", "text": "So every time a different user wants to share a document, I have to create a new realm? And if those 'share privileges ’ are then changed (e.g. the user that shared a document with me decides not to share it with me anymore), then that triggers a client reset?", "username": "Anthony_CJ" }, { "code": "_partitiondocID=abc-1234-xyz-987", "text": "@Anthony_CJ\nThe easiest way to think about it is that all objects in a realm share the same permissions across the entire realm. So a user that can write to document A in realm X, can also write to document B in realm X.if you want a group of users to share the same permissions for an object or set of objects, then you will need a realm with those permissions set.In your case - that sounds like you’ll need to make a realm per document. Then keep a list of partitions on the users custom_data and update that to apply/remove permissions to realms.\nMake the _partition something like docID=abc-1234-xyz-987.\nThat way you will never need to update the partition and trigger a reset.\nHaving lots of realms is common and expected.", "username": "Benjamin_Storrier" }, { "code": "", "text": "Okay that makes sense. So if permissions are changed for a document/realm, e.g. a user is removed, then what happens with the sync for said realm?", "username": "Anthony_CJ" }, { "code": "", "text": "You’d not need to change the permissions on the realm per se.You’d remove the string that represents the realm/partition from the list of partitions that the user can access on their custom_data.The document/realm remains unchanged. The permissions remain unchanged. The user would effectively lose their access because the partition of the realm does not appear in the list of realms they can access.Using the approach outlined above, the idea is that when a realm sync is attempted by a user, the custom_data property containing a list of partitions they are allowed to access is interrogated - and access is granted/denied on that basis.You would need to manage the access to this document manually by adding and removing the partition string from any users custom_data", "username": "Benjamin_Storrier" }, { "code": "user_123user_123", "text": "Thanks @Benjamin_Storrier. So if I end up having a realm for each document, then whenever users add e.g. 
user_123 to the shared users for that document, and a new realm is created, does it automatically then sync with user_123 because of the partition key?", "username": "Anthony_CJ" }, { "code": "", "text": "@Benjamin_Storrier Your idea of using custom data to store the list of realms (or partitions) that a user can read and/or write to is brilliant. This is a very elegant solution that does not trigger Realm resets. I assume that a server side function would maintain this custom data, so that a user could not spoof the system by writing into his/her own custom data to grant themselves permission to something. In one stroke, you sort of solved the whole permission issue that I have been struggling with for the past two months. Thank you!", "username": "Richard_Krueger" }, { "code": "{\n \"$or\": [\n \"%%partition\": \"%%user.id\",\n \"%%partition\": {\n \"%in\": \"%%user.custom_data.readPermissions\"\n }\n ]\n}\n", "text": "Haha @Richard_Krueger - awesome - I thought that was where @Ian_Ward was heading all along - so there you go Here is what I had planned on using as the read permissions expression.\nI’d use the same structure for write permissions.I haven’t tested it yet because I’m still in dev mode and currently getting getting a big red error in the sync panel that does not allow me to terminate sync which is rather frustrating.Do you think this approach would work?B", "username": "Benjamin_Storrier" }, { "code": "", "text": "@Anthony_CJ\nYou would need to handle the addition and deletion of these values on the user data yourself.\nPreferably on a server somewhere.\nYou don’t want users having write access to these values or they can spoof them to gain access to things they shouldn’t.", "username": "Benjamin_Storrier" }, { "code": "", "text": "@Benjamin_Storrier I assume via Triggers would be a way to achieve this yeah? But TBH I don’t understand why the user can’t do this. If the user has permission to update their document (in this use case, add or remove other users to the list of who has access to that document), why couldn’t it be done that way?And back to my other question, regardless of how the update is made, does updating who has access to that Realm (either adding access or removing access), automatically sync that realm to the added users and remove access for the users who no longer have access?", "username": "Anthony_CJ" }, { "code": "", "text": "Hi @Anthony_CJI assume triggers would be a way to do this. But I am not using them so I can’t offer help there.\nI can’t speak as to the architectural reasons for why it is the way it is sorry.The users who have access to the new realm would have to open the realm and then it would sync. 
The updates to custom_data merely grant the permissions.", "username": "Benjamin_Storrier" }, { "code": "{\n \"title\": \"RealmPermissions\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"_partition\",\n \"userId\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"_partition\": {\n \"bsonType\": \"string\"\n },\n \"userId\": { \n \"bsonType\": \"string\"\n }, // links the user to custom_data\n \"readPermissions\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n } // contains partition values you want a user to read from\n },\n \"writePermissions\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n } // contains partition values you want a user to be able to write to\n }\n }\n}\n{\n \"%%partition\": {\n \"%in\": \"%%user.custom_data.readPermissions\"\n }\n}\n{\n \"%%partition\": {\n \"%in\": \"%%user.custom_data.writePermissions\"\n }\n}\nRealm.open(...)Realm.User.refreshCustomData()", "text": "As a note for those who seek to control permissions using custom_data, here are a few tips.assuming your custom_data collection is structured like this, we’ll call it ‘RealmPermissions’:Once you have a user and their custom data linked, you can use a permission config like this:Reading:Writing:Then you can add and remove permissions from these arrays on the RealmPermissions for fine grained control over a user’s access.A little gotcha is that custom_data easily becomes stale and a newly added permission won’t necessarily be in the custom data when a user goes to access a realm if it has been recently created.\nSo, to solve this, I have prefaced all Realm.open(...) calls with calls to Realm.User.refreshCustomData()So far so good.Happy realming.B", "username": "Benjamin_Storrier" }, { "code": "", "text": "@Benjamin_Storrier I think that write access to the custom data is a server side privilege only, so that should in theory prevent spoofing.", "username": "Richard_Krueger" }, { "code": "", "text": "@Richard_Krueger\nYou could actually make the collection that houses the custom_data a realm and allow certain users to modify that realm. But that’s potentially a house of cards ", "username": "Benjamin_Storrier" }, { "code": "", "text": "@Benjamin_Storrier my sense is that would be security risk. My preference would be to have the custom data only accessible by server functions. The minute you open the door to god like powers at the client side, you are opening pandoras box so to speak.", "username": "Richard_Krueger" } ]
Sample realm sync with partition keys for different types (public, private, group)
2020-07-28T02:11:45.874Z
Sample realm sync with partition keys for different types (public, private, group)
8,404
null
[ "morphia-odm" ]
[ { "code": "", "text": "I am beyond thrilled to announce the release of not one but two new versions of Morphia. We’ll start off with the 1.6.0 release. Release notes can be found here. This release has two main goals:The deprecation coverage isn’t complete and there are a few changes you’ll want to make after migrating to 2.0 but if you clean up all the deprecations you can on 1.6, it should compile just fine on 2.0. Then you’ll just want to complete the job of cleaning up the deprecations.The bigger news is that 2.0.0 has officially been released. You can find those release notes here. I won’t go in to all the details here because I’ve written a blog with the history of the release for those interested and a breakdown of the updates. This release represents almost 3 years of work (woven between other responsibilities and releases) and hundreds if not thousands of hours of work. I do hope you’ll check it out. It’s the foundation for many great things I have planned.As always if you run in problems or questions with either release, please don’t hesitate to file an issue and we’ll try to work things out together. A big thank you to everyone who’s provided feedback so far.Take care.", "username": "Justin_Lee" }, { "code": "", "text": "Hi Justin,Glad to know about this update. I was unsure about the future of Morphia when I got to know it went out of official mongodb umbrella. But now I hope this continues and remain as the best ODM option for Java.We’re using Morphia in production (currently it is v1.3.2). Our database is hosted on Atlas cloud and I would like to continue using the mongodb along with the Morphia.Can you help me understand how shall I proceed to get my project smoothly updated with latest Morphia - from 1.3.2 to 1.6.0 (or 2.0.0)? If you’re aware about any resources regarding upgrades, please share me in reply. Thanks.Thanks and best regards,\nPawan", "username": "Pawan_Dalal" }, { "code": "", "text": "The upgrade to 1.4 is a simple package upgrade. Before you make the leap to 2.0, I would recommend upgrading to 1.6 first. This should be a drop in replacement and you can continue on 1.6 as long as you’d like. There are a number of deprecations in 1.6 to help you prepare for the 2.0 upgrade. Some of those deprecations do not have replacements in 1.6 due to various reasons but many do. Once you eliminate the deprecations you can, upgrading to 2.0 should be seamless as well. Once on 2.0, you can eliminate the remainder of those deprecations by migrating to the new APIs at your own pace. The 2.0 update is worth it (yes, I’m biased) as the API is leaner and more consistent. The formal migration guide hasn’t yet made it in to the docs but i’m working on that now. There’s a skeleton guide in the github repository, however.If you run in to upgrade issues, please file an issue at on github and I’ll do my best to get you sorted.", "username": "Justin_Lee" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Morphia 1.6.0 and 2.0.0 have been released
2020-07-07T14:29:45.460Z
Morphia 1.6.0 and 2.0.0 have been released
5,957
null
[]
[ { "code": "11.3.11.9.3^10.0.0-beta.6Error: Unable to resolve module `./subscription` from `node_modules/realm/lib/browser/index.js`\nnode_modules/realm/lib/browser/index.jssubscription", "text": "Hi guys,So I am trying to build my first MongoDB/Realms App - a colleague already used Realms before it got acquired by MongoDB and only told me great things about it.Weird thing is, I cant even get started. Setup:When I do everything as explained here, and start the simulator from XCode, the first page already immediately throws the following error:Weird thing is: in node_modules/realm/lib/browser/index.js I can not find anything related to subscription anywhere.Please help guys, I am completely stuck here.cheers, Patrick", "username": "Orderlion" }, { "code": "^10.0.0-beta.6nvm use [email protected] installnpm start.xcodeworkspace", "text": "Hi Patrick -I was able to get realm working with React Native with the following environment.And then doing:", "username": "Sumedha_Mehta1" }, { "code": "", "text": "i am on ubuntu with the same issue.", "username": "Pravin_kumar" }, { "code": "", "text": "Tried searching for subscription in the entire solution, seems the package-lock.json is looking for the module. i’ve deleted the package-lock.json. Issue was persisting.Finally, did a restart . haah… it worked after giving me so much of frustration.OS: Ubuntu.", "username": "Pravin_kumar" } ]
Fresh Setup: Realm React Native Error: Unable to resolve module `./subscription` from `node_modules/realm/lib/browser/index.js`
2020-06-30T17:47:05.070Z
Fresh Setup: Realm React Native Error: Unable to resolve module `./subscription` from `node_modules/realm/lib/browser/index.js`
4,283
null
[]
[ { "code": "", "text": "I got error message when trying telnet on Mac terminal$ telnet cluster0-shard-00-00-jxeqq.mongodb.net 27017-bash: telnet: command not found", "username": "Yi_23117" }, { "code": "", "text": "Hi @Yi_23117,Please refer to the following doc to install telnet on Mac:I hope it helps. If you any queries, please let me know.Thanks,\nSonali", "username": "Sonali_Mamgain" }, { "code": "", "text": "Page not found.In my case, the message is “zsh: command not found: telnet”\nAnd it is resolved after instal telne. Please refer to the following link:https://osxdaily.com/2018/07/18/get-telnet-macos/", "username": "linh_rua" }, { "code": "", "text": "", "username": "system" } ]
Test connection to cluster0-shard-00-00-jxeqq.mongodb.net
2019-04-23T23:51:15.986Z
Test connection to cluster0-shard-00-00-jxeqq.mongodb.net
1,929
null
[ "aggregation" ]
[ { "code": "db.documents.updateMany(\n {\n companyId: ObjectId(\"5f29074048538b6403bc71ab\",\n tree: {\n $in: [ObjectId(\"5f539bf696fa1748fa417b07\")],\n },\n },\n {\n $set: {\n tree: {\n $function: {\n body:\n \"function(t) { t.push(ObjectId('5f5494a1d708bda4988d75d4')); return t.filter(v => [ObjectId('5f539bf696fa1748fa417b07')].includes(v))}}\",\n args: [\"$tree\"],\n lang: \"js\",\n },\n },\n },\n }\n);\n\nThe dollar ($) prefixed field '$function' in 'tree.$function' is not valid for storage\n$function", "text": "I have an array tree and I want to update that array in my collection with this syntaxwhy I write body function with string? because in Golang I can’t use code for bson, so I must use function , the syntax above is I run on command line mongo cli to try there first then implement to my Go appthe error I got isanyone has same problem ? or I did wrong to use $function operator ??", "username": "Virtual_Database" }, { "code": "", "text": "Hello : )The function operator requires mongodb 4.4,and it is an aggregation operator.\nTo use it on update,you have to use a pipeline,and aggregation operators.\n(you use the $set update operator,not the $set($addField) aggregation operator)\nAlso to use the pipepile,your driver must support mongodb >= 4.2,and provide\na update method to accept a pipeline as argument.Pipeline updates are different,but here looks like that just adding a [ ]\nin the update part will work,if your driver supports it.", "username": "Takis" }, { "code": "", "text": "I am using Mongo Atlas and I am sure it gonna valid for using $function, and in my mongo cli I login as username on Mongo Atlas, can u give me example for update query with $function ? because the example on documentation only for query find data ,? @Takis", "username": "Virtual_Database" }, { "code": "> use testdb\nswitched to db testdb\n> db.testcoll.drop()\ntrue\n> db.testcoll.insert([{\"_id\":1,\"mystring\":\"a\"},{\"_id\":2,\"mystring\":\"b\"},{\"_id\":3,\"mystring\":\"c\"},{\"_id\":4,\"mystring\":\"a\"}]);\nBulkWriteResult({\n\t\"writeErrors\" : [ ],\n\t\"writeConcernErrors\" : [ ],\n\t\"nInserted\" : 4,\n\t\"nUpserted\" : 0,\n\t\"nMatched\" : 0,\n\t\"nModified\" : 0,\n\t\"nRemoved\" : 0,\n\t\"upserted\" : [ ]\n})\n> db.testcoll.find();\n{ \"_id\" : 1, \"mystring\" : \"a\" }\n{ \"_id\" : 2, \"mystring\" : \"b\" }\n{ \"_id\" : 3, \"mystring\" : \"c\" }\n{ \"_id\" : 4, \"mystring\" : \"a\" }\n> db.testcoll.updateMany({\"mystring\" : \"a\"},[{\"$addFields\":{\"mystring\":{\"$function\":{\"args\":[\"$mystring\"],\"lang\":\"js\",\"body\":\"function mypush(s) { return s+\\\" Updated\\\";}\"}}}}]);\n{ \"acknowledged\" : true, \"matchedCount\" : 2, \"modifiedCount\" : 2 }\n> \n> db.testcoll.find();\n{ \"_id\" : 1, \"mystring\" : \"a Updated\" }\n{ \"_id\" : 2, \"mystring\" : \"b\" }\n{ \"_id\" : 3, \"mystring\" : \"c\" }\n{ \"_id\" : 4, \"mystring\" : \"a Updated\" }\n\n", "text": "HelloThis updated the documents where $mystring=“a”,and to “a Updated”.\nUsing the $function,i runned it on mongo shellI tested on updating array also,its the same way,it worked fine,i used push.\nThis is how it works,but pipeline update are different from the old updates way.\nWith pipeline the result of the pipeline is the new document you want.", "username": "Takis" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using $function in updateMany not working
2020-09-06T10:31:46.813Z
Using $function in updateMany not working
6,105
null
[ "indexes" ]
[ { "code": "{\n _id: 12345,\n quizzes: [\n { \n _id: 111111,\n done: true\n }\n ]\n},\n{\n _id: 78910,\n quizzes: [\n { \n _id: 22222,\n done: false\n }\n ]\n}\ndoneAnswer.find({ 'quizzes.0.done': true }).explain('queryPlanner');\nqueryPlanner: {\n plannerVersion: 1,\n namespace: 'iquiz.answers',\n indexFilterSet: false,\n parsedQuery: { 'quizzes.0.done': [Object] },\n winningPlan: { stage: 'COLLSCAN', filter: [Object], direction: 'forward' },\n rejectedPlans: []\n}\n{ quizzes.done: 1 }\n{ quizzes.[$**].done: 1 }\n{ quizzes: 1 }\n{ quizzes.0.done: 1 }\n", "text": "I have the following collection:I want to select the documents where a certain quiz from the quizzes was done and want to make sure that it uses the appropriate index. So I use the following query:Which returns:The query is not using any index as seen from the output. I have tried the following indexes and none get used:The only 1 that actually gets used:However this is not really practical as I may target any quiz from the quizzes array not just the first one. Is there a certain syntax for the index in my case or this is a current limitation of mongodb?Thanks in advance", "username": "Michael_Azer" }, { "code": " quizzes.done", "text": "Hi @Michael_Azer,MongoDB support multikey indexes on array subdocs.You can just index quizzes.done and it should support the query. Having said that true or false are not that selective so perhaps a full scan can in some cases be faster (collscans are optimised and return better than jumping between docs and index entries)Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "quizzes.donehttps://i.ibb.co/JvXmxr9/Screen-Shot-2020-08-19-at-11-18-43-PM.png\nhttps://i.ibb.co/WDxxcZC/Screen-Shot-2020-08-19-at-11-18-51-PM.png\nAnswer.find({ 'quizzes.done': true }).explain('queryPlanner');\nquizzes.0.donequizzes.1.donequizzes.2.done", "text": "Hi @Pavel_DuchovnyI tried the query using Mongo compass and Mongoose (NodeJS) and the index quizzes.done is NOT used even though a multikey index should be created as you say.Take a look at the following screenshots for confirmation:If I modify the query to be likeThe index gets used.However the fastest index that works that I found is quizzes.0.done but this is not dynamic enough.\nThis means I would have to create quizzes.1.done quizzes.2.done …etc which does not make sense.Thanks!", "username": "Michael_Azer" }, { "code": "quizzes.<number>.done", "text": "Hi @Michael_Azer,So you are trying to query quizzes.<number>.done explicitly? 
If yes the engine will not be able to map this query shape to the index.Have you tried using $arrayElementAt instead?Additionally, test if specifying a hint with the multikey yield better results.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "$arrayElementAtquizzes.0.donequizzes.$**.donequizzes.0.donequizzes.1.donequizzes.2.done", "text": "Hi @Pavel_Duchovny Yes I’m filtering a certain array element index like above.I tried $arrayElementAt but it was extremely slowHinting the multikey index did not helpTo summarize, the fastest index is quizzes.0.doneCan you please add a feature request for a new variation of a wildcard index like quizzes.$**.done so that we don’t have to create quizzes.0.done , quizzes.1.done , quizzes.2.done …etc?\nThat would be very helpful.Thanks again Pavel for your effort.\nRegards", "username": "Michael_Azer" }, { "code": "", "text": "Hi @Michael_Azer,I will try to search for a better solution.Have you tried a wild card index on this collection?You can file a feature request here https://feedback.mongodb.comThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "quizzes.$**", "text": "Hi @Pavel_DuchovnyYes. I tried quizzes.$** or do you mean something else?\nThis was very slow again.Please update me if you find a better solution.Thanks!", "username": "Michael_Azer" }, { "code": "", "text": "Hi @Michael_Azer,In my repro both indexes the wild card and the multikey worked for me\nScreen Shot 2020-08-26 at 16.49.042034×1594 229 KBI am not sure why you can’t utilize them.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "quizzes.$**quizzes", "text": "Hi @Pavel_DuchovnyYes, the wildcard index quizzes.$** gets used but it’s much slower like I explained before.\nJudge how it performs on a large database:Screen Shot 2020-08-28 at 12.39.14 PM2880×1800 387 KBEven without any index at all it’s much faster:Screen Shot 2020-08-28 at 12.37.04 PM2880×1800 353 KBIf by multikey-index you mean just quizzes, I tried it and it won’t get used by the query without hinting.\nWith hinting, the multikey-index would be as slow as the wildcard index above.Thanks in advance\nRegards\nMichael", "username": "Michael_Azer" }, { "code": "", "text": "Hi @Michael_Azer,Looks like all documents needs to be scanned to get the values so an index scan will not make sense but a full scan might be optimal.The scans are better then full index scan like the one you showed.Why would you specifically query a position element? I mean maybe there is a better query for what you are trying to achieve.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_DuchovnyThis is a game and I need to get the ranking of a player in any quiz (level) compared to all the other players who finished the level.So I get the number of players who finished the quiz and another query to get the ranking.All I’m doing is counting with many different queries to get different stats like this. I don’t actually retrieve the documents themselves.Thanks", "username": "Michael_Azer" }, { "code": "", "text": "Hi @Michael_Azer,Which fields you are ranking on, is it like score?Perhaps you can test a partial index with partailFilterExpression quizzes.done : true and index the ranking fileds.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "scorequizzesscore{ $exists: true }scoredonescoredistinctdistinctquizzes.0.score{ $gte: 0}", "text": "Hi @Pavel_DuchovnyYes. 
There is a score field for each quiz solved and each player has a document with the quizzes array holding those scores.I modified the schema so that there is a score field ONLY for the finished quizzes.\nSo I check if { $exists: true } on the score field to get the number of players who finished the quiz and I no longer need the done field to do that.I calculate the ranking of a player on a certain quiz based on the score field on the quiz.To get the correct ranking I found that I have to select only distinct scores so that multiple players having the same score would have the same ranking. Unfortunately, the distinct made the query even slower!\nNow even the quizzes.0.score index that I mentioned before is making the query slightly slower than without it.I tried the partial filter expressions and it doesn’t make sense for my case as any player that I’m getting the rank for could have a score of zero which means the filter would be { $gte: 0} which basically selects the whole collection. I tried it and it was slower than without index.For now, I no longer use any index as this seems the fastest for my case.Please update me if you have other suggestions.Thanks in advance\nRegards\nMichael", "username": "Michael_Azer" }, { "code": "explian(executionStats)", "text": "Hi @Michael_Azer,I need to compare explian(executionStats) from each type of query to understand the query pattern and why its slower or faster.The COLLSCAN algorithm is pretty optimized and if for your calculations you need to scan a large portion of the collection its not surprising that COLLSCAN is better.I always say that COLLSCANS are not a bad thing they just not advised if you can filter better using an Index.Best regards,\nPavel", "username": "Pavel_Duchovny" } ]
MongoDB not using wildcard nested array index
2020-08-19T20:33:47.694Z
MongoDB not using wildcard nested array index
3,894
null
[]
[ { "code": "db.test1.insertMany([\n {\n _id: 'G1',\n name: 'My Group',\n posts: [\n {\n _id: 'P1',\n title: 'Post 1',\n comments: [\n {\n _id: 'C1',\n name: 'Comment 1',\n replies: [\n {\n _id: 'R1',\n content: 'Reply 1',\n },\n {\n _id: 'R2',\n content: 'Reply 2',\n },\n ],\n },\n {\n _id: 'C2',\n name: 'Comment 2',\n replies: [\n {\n _id: 'R3',\n content: 'Reply 3',\n },\n {\n _id: 'R4',\n content: 'Reply 4',\n },\n ],\n },\n ],\n },\n ],\n },\n {\n _id: 'G2',\n name: 'My Group',\n posts: [\n {\n _id: 'P2',\n title: 'Post 2',\n comments: [\n {\n _id: 'C3',\n name: 'Comment 3',\n replies: [\n {\n _id: 'R5',\n content: 'Reply 5',\n },\n {\n _id: 'R6',\n content: 'Reply 6',\n },\n ],\n },\n {\n _id: 'C4',\n name: 'Comment 4',\n replies: [\n {\n _id: 'R7',\n content: 'Reply 7',\n },\n {\n _id: 'R8',\n content: 'Reply 8',\n },\n ],\n },\n ],\n },\n ],\n },\n ]);\n", "text": "how to push new object in G1>P1C1> relies array…", "username": "Praveen_Gupta" }, { "code": "{\"_id\" : \"R3\" ,\n \"content\" : \"Reply 3\",\n \"randomField\" : \"EDIT THIS DOC\"}\n\n{\n \"update\": \"testcoll\",\n \"updates\": [\n {\n \"q\": {},\n \"u\": [\n {\n \"$replaceRoot\": {\n \"newRoot\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$ROOT._id\",\n \"G1\"\n ]\n },\n {\n \"$mergeObjects\": [\n \"$$ROOT\",\n {\n \"posts\": {\n \"$let\": {\n \"vars\": {\n \"posts\": \"$$ROOT.posts\"\n },\n \"in\": {\n \"$map\": {\n \"input\": \"$$posts\",\n \"as\": \"post\",\n \"in\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$post._id\",\n \"P1\"\n ]\n },\n {\n \"$mergeObjects\": [\n \"$$post\",\n {\n \"comments\": {\n \"$map\": {\n \"input\": \"$$post.comments\",\n \"as\": \"comment\",\n \"in\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$comment._id\",\n \"C1\"\n ]\n },\n {\n \"$mergeObjects\": [\n \"$$comment\",\n {\n \"replies\": {\n \"$concatArrays\": [\n \"$$comment.replies\",\n [\n {\n \"_id\": \"R3\",\n \"content\": \"Reply 3\",\n \"randomField\": \"EDIT THIS DOC\"\n }\n ]\n ]\n }\n }\n ]\n },\n \"$$comment\"\n ]\n }\n }\n }\n }\n ]\n },\n \"$$post\"\n ]\n }\n }\n }\n }\n }\n }\n ]\n },\n \"$$ROOT\"\n ]\n }\n }\n }\n ],\n \"multi\": true\n }\n ]\n}\n\n\n[\n {\n \"_id\": \"G1\",\n \"name\": \"My Group\",\n \"posts\": [\n {\n \"_id\": \"P1\",\n \"title\": \"Post 1\",\n \"comments\": [\n {\n \"_id\": \"C1\",\n \"name\": \"Comment 1\",\n \"replies\": [\n {\n \"_id\": \"R1\",\n \"content\": \"Reply 1\"\n },\n {\n \"_id\": \"R2\",\n \"content\": \"Reply 2\"\n },\n {\n \"_id\": \"R3\",\n \"content\": \"Reply 3\",\n \"randomField\": \"EDIT THIS DOC\"\n }\n ]\n },\n {\n \"_id\": \"C2\",\n \"name\": \"Comment 2\",\n \"replies\": [\n {\n \"_id\": \"R3\",\n \"content\": \"Reply 3\"\n },\n {\n \"_id\": \"R4\",\n \"content\": \"Reply 4\"\n }\n ]\n }\n ]\n }\n ]\n },\n {\n \"_id\": \"G2\",\n \"name\": \"My Group\",\n \"posts\": [\n {\n \"_id\": \"P2\",\n \"title\": \"Post 2\",\n \"comments\": [\n {\n \"_id\": \"C3\",\n \"name\": \"Comment 3\",\n \"replies\": [\n {\n \"_id\": \"R5\",\n \"content\": \"Reply 5\"\n },\n {\n \"_id\": \"R6\",\n \"content\": \"Reply 6\"\n }\n ]\n },\n {\n \"_id\": \"C4\",\n \"name\": \"Comment 4\",\n \"replies\": [\n {\n \"_id\": \"R7\",\n \"content\": \"Reply 7\"\n },\n {\n \"_id\": \"R8\",\n \"content\": \"Reply 8\"\n }\n ]\n }\n ]\n }\n ]\n }\n]\n", "text": "Hello : )The json bellow was not valid,because sometimes its hard to do it by hand you can\ncheck using a tool to be valid json.\nI randomly used https://jsonformatter.org/json-pretty-print to fix that document.\n(didn’t had “” on keys or values,had extra commas 
etc)I pushed(add at the end) a new document inside G1->P1->C1->Relies\nYou can see that document i added inside the pipeline of the update command.Update command(you can only take the pipeline the “u” part and use any driver update command),\nas long its mongoDB>=4.2 compatable,becaused i used pipeline in the update.After the update my cursor returned those 2 documentsHope it helps.", "username": "Takis" }, { "code": "", "text": "It is very helpful for me thank so much for help …", "username": "Praveen_Gupta" } ]
How to update nested array in complex schema
2020-09-04T13:53:24.947Z
How to update nested array in complex schema
3,114
https://www.mongodb.com/…23d42764af5f.png
[ "swift", "atlas-device-sync" ]
[ { "code": "{\n \"title\": \"Tag\",\n \"bsonType\": \"object\",\n \"required\": [\n \"id\",\n \"userId\"\n ],\n \"properties\": {\n \"userId\": {\n \"bsonType\": \"string\"\n },\n \"id\": {\n \"bsonType\": \"string\"\n },\n \"text\": {\n \"bsonType\": \"string\"\n },\n \"createdAtDate\": {\n \"bsonType\": \"date\"\n },\n \"lastModifiedDate\": {\n \"bsonType\": \"date\"\n }\n }\n}\n", "text": "Hi,\nI have an existing iOS App using local Realm. I enabled sync on MongoDB realm and setup the schema for the same(Under Rules). I am seeing the following log error:\n\"Ending session with error: failed to validate upload changesets: could not find primary key “_id” in target table schema with name “Tag” (ProtocolErrorCode=212) \"My Schema is:The swift code model code is:\n\nScreen Shot 2020-09-04 at 12.28.28 AM728×766 76.7 KB\n", "username": "Jacqueline_Arokiaswa" }, { "code": "_idid", "text": "could not find primary key “_id”And your model does not contain an _id property; only an id is shown in the object in the question", "username": "Jay" } ]
Realm Sync: failed to validate upload changesets
2020-09-04T10:40:34.224Z
Realm Sync: failed to validate upload changesets
2,542
null
[]
[ { "code": "{\n \"appName\": \"MongoDB Automation Agent v10.16.5.6520 (git: e4fef549c7036b5a31e046b5692a259467d6f68e)\",\n \"command\": {\n \"aggregate\": \"dispatchjobs\",\n \"maxTimeMS\": 15000,\n \"pipeline\": [\n {\n \"$sample\": {\n \"size\": 772\n }\n }\n ],\n \"cursor\": {},\n \"lsid\": {\n \"id\": {\n \"$binary\": \"HJjZ1xLjRRuVloMZ01zw7A==\",\n \"$type\": \"03\"\n }\n },\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1599032548,\n \"i\": 83\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": \"PHh4eHh4eD4=\",\n \"$type\": \"00\"\n },\n \"keyId\": {\n \"$numberLong\": \"6818169589222866945\"\n }\n }\n },\n \"$db\": \"openfleet-core\",\n \"$readPreference\": {\n \"mode\": \"primaryPreferred\"\n }\n },\n \"planSummary\": [\n {\n \"MULTI_ITERATOR\": {}\n }\n ]\n", "text": "sample snippet from the parsed log document:REASON FOR QUESTION POST:\nidentify queries causing alertsQUESTIONS:\n1 what is “MongoDB Automation Agent”?\n2 why is it running a $sample at regular intervals?i cannot find detailed info on parsed log documentany links or guidance will be appreciated as we are having performance issues and in the process of culling unused indexes and tuning.thank you in advanceMark", "username": "Mark_Emerson" }, { "code": "", "text": "Hi Mark - Thank you for reaching out. I lead one of the product teams that is responsible for this query.The specific query you see here is invoked by our “Schema Advisor” tool in Atlas, which recommends potential schema changes you can make to optimize performance. The “automation agent” is part of the code which does that. It’s a very low-level implementation detail, which probably shouldn’t be exposed in the profiler - so apologize for the confusion it has caused you.The query also runs only when you use Schema Advisor. Can you confirm that you are not viewing or using the product regularly, either via the UI or API? If so and if we are still seeing it run on a regular interval, we will investigate it as a potential bug.I am happy to work with you to diagnose performance issues, specially pertaining to culling unused indexes. We also have a recommendation product that will do it automatically for you that I can speak about. Please feel free to reach out to me at [email protected]!Thanks.Rez", "username": "Rez_Khan" }, { "code": "", "text": "Hi Rez,what a great response, thank you.So that makes sense and thanks for the useful info which i can report back to my team.I have now tuned some indexes and alerts gone.However whenever I try to run the performance/schema advisor, i constantly receive (despite time of day its run) the message “Unexpected Error Occurred: return to clusters”. i dont even get a list of collections to choose from. could there be an issue with the advisor?Is guessing this should be a new ticket or related to the service agent message being logged?Mark", "username": "Mark_Emerson" } ]
Atlas Profiler Parsed Logged Document
2020-09-02T08:06:37.102Z
Atlas Profiler Parsed Logged Document
2,641
null
[ "aggregation", "performance" ]
[ { "code": "db.getCollection('example').aggregate([\n{\n\t$lookup: {\n\tfrom: \"users\",\n\tlocalField: \"userId\",\n\tforeignField: \"userId\",\n\tas: \"user\"\n\t}\n},\n{$match: {user.isLogin : true}}\n{ $count: \"total_count\" }\n])\n", "text": "I am using aggregate on mongodb.\nThis aggregate gets the number of documents that are matched to “user.isLogin : true”.This aggregate takes more than 4 seconds now.\nWhen I remove the $count, it takes 0.173 secs.How can I speed up to 0.173 secs when I am using the $count?\nI know $count is same to $group:{_id: null, total_count:{$sum: 1}} so this question is how to speed up the group stage after lookup stage?Thanks, all.", "username": "Valentine_Soin" }, { "code": "db.getCollection('example').aggregate([\n {\n $group: {\n _id: null,\n usersIds: {\n $addToSet: '$userId',\n },\n },\n },\n {\n $lookup: {\n from: 'users',\n let: {\n usersIds: '$usersIds',\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n { $in: ['$_id', '$$usersIds'] },\n { $eq: ['$isLogin', true] },\n ],\n },\n },\n },\n ],\n as: 'users',\n },\n },\n {\n $project: {\n _id: null,\n totalUsers: {\n $size: '$users',\n },\n },\n },\n]);\n", "text": "If you just need to get usersIds from example collection and then just count how many users with that ids are online, bbetter to collect those ids into one array and then $lookup with that array of ids.", "username": "slava" }, { "code": "", "text": "Thanks for your reply, slava.Your replay is not what I wanna know. I should use $group after $lookup.\nFYI, I want to know how many records are that online users, not users count.I am thinking why $group after $lookup is taking so long time. This is so simple query. So do you think this is mongodb critical bug?\nNo solution to resolve this issue?", "username": "Valentine_Soin" }, { "code": "", "text": "Can you share your aggregation here?\nIt seems $lookup and $group stages are used inefficiently.", "username": "slava" }, { "code": "{\n\t\"_id\" : ObjectId(\"5a8531bae2557229f8eaf603\"),\n\t\"nickname\" : \"kaka\",\n\t\"isLogin\" : true,\n}\n{\n\t \"_id\" : ObjectId(\"5ee86942aeadbc47b8d7ad4f\"),\n\t \"id\" : ObjectId(\"5a8531bae2557229f8eaf603\"),\n\t \"commet\" : \"blabla\",\n }\ndb.getCollection('example').aggregate([\n {\n $lookup: {\n from: \"users\",\n localField: \"id\",\n foreignField: \"_id\",\n as: \"user\"\n }\n },\n {\n $unwind: \"$user\"\n }, \n { \n $match: {\"user.isLogin\" : true}\n },\n { $count: \"total_count\" } \n ])\n {\n \"stages\" : [ \n {\n \"$cursor\" : {\n \"query\" : {},\n \"fields\" : {\n \"id\" : 1,\n \"_id\" : 0\n },\n \"queryPlanner\" : {\n \"plannerVersion\" : 1, \n \"indexFilterSet\" : false,\n \"parsedQuery\" : {},\n \"winningPlan\" : {\n \"stage\" : \"EOF\"\n },\n \"rejectedPlans\" : []\n }\n }\n }, \n {\n \"$lookup\" : {\n \"from\" : \"users\",\n \"as\" : \"user\",\n \"localField\" : \"id\",\n \"foreignField\" : \"_id\",\n \"unwinding\" : {\n \"preserveNullAndEmptyArrays\" : false\n },\n \"matching\" : {\n \"isLogin\" : {\n \"$eq\" : true\n }\n }\n }\n }, \n {\n \"$group\" : {\n \"_id\" : {\n \"$const\" : null\n },\n \"total_count\" : {\n \"$sum\" : {\n \"$const\" : 1\n }\n }\n }\n }, \n {\n \"$project\" : {\n \"_id\" : false,\n \"total_count\" : true\n }\n }\n ],\n \"ok\" : 1.0\n}\n", "text": "a. Users Collection which has 1 reocrdb. Example Collection which has 50,000 recordsc. 
This is the aggregate which takes over 4 secs.d.This is the aggregate explain.The issue is the $group after lookup is taking so long time terribly.", "username": "Valentine_Soin" }, { "code": "db.getCollection('users').aggregate([\n // match only relevant users\n {\n $match: {\n isLogin: true,\n },\n },\n // collect usersIds into array\n {\n $group: {\n _id: null,\n userIds: {\n $addToSet: '$_id',\n },\n },\n },\n // join example data using array of userIds\n {\n $lookup: {\n from: 'logs',\n localField: 'userIds',\n foreignField: 'userId',\n as: 'logs',\n },\n },\n // count total logs, that were made by user.isLogin only\n {\n $project: {\n _id: null,\n totalLogs: {\n $size: '$logs',\n },\n },\n },\n\n]);\n", "text": "OK, so your aggregation has following problems:You achieve your result much efficiently if you do $match before $lookup and reduce number of $lookups;\nLike this:", "username": "slava" }, { "code": "", "text": "Yes, I think your aggregate is more optimized, I agree on it. But I want to say the aggregate is taking 0.028 s when without using $count.\nIt means the lookup is not taking much time,The main issue is concerned with $count($group) after $lookup.Why $count($group) after $lookup is taking terrible time?", "username": "Valentine_Soin" }, { "code": "", "text": "Hi, slava. What is your think?", "username": "Valentine_Soin" }, { "code": "", "text": "Why $count($group) after $lookup is taking terrible time?For this, I do not know, mate ", "username": "slava" }, { "code": "", "text": "Alright. Thanks for your effort.\nBut I want to get an answer. How can I get it?", "username": "Valentine_Soin" }, { "code": "", "text": "I think you can get answer to your question in MongoDB Help Center", "username": "slava" }, { "code": "", "text": "Thanks. I can’t get more answers this forum, mate?", "username": "Valentine_Soin" }, { "code": "$countmongo$countdb.collection.aggregate(...).toArray()", "text": "Hi,This aggregate takes more than 4 seconds now.\nWhen I remove the $count, it takes 0.173 secs.I believe it’s because without $count, the aggregation was not actually executed by the server. That is, the server returns a cursor for the result set, but not the result themselves. That’s usually why it can return a sub-second performance.Also, in the mongo shell, if the aggregation returns a number of documents, it will fetch only the first 20, so it will still be quicker than fetching the whole result set if there’s a lot of documents there.In contrast, $count would actually execute the query, fetch the documents, and load them from disk if necessary. This is why it would take a much longer time. In this case, it’s more than a magnitude slower.You should have a similar timing (i.e. ~4 seconds or so) if you force the aggregation to execute, for example with db.collection.aggregate(...).toArray(). Bear in mind that depending on how warm the cache is, you may see a lower/higher number.Best regards,\nKevin", "username": "kevinadi" }, { "code": "$group + $projectdb.collection.aggregate( [\n { $group: { _id: null, total_count: { $sum: 1 } } },\n { $project: { _id: 0 } }\n] )\n$counttotal_count$countitcount()$countdb.getCollection('example').aggregate([\n // pipeline stages here (excluding the $count) ...\n]).itcount()\n", "text": "The $count stage is actually an equivalent to the following $group + $project sequence:So, the $count stage scans all the documents to arrive at the total_count value. Scanning the documents within a stage takes time. 
And, that is what is happening.Instead of using the $count you can use the itcount cursor method. This returns the count of documents in the cursor. Note that an aggregation query returns a cursor with documents. Using itcount() is likely to be faster than scanning the documents in the $count stage.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi, Kevin.\nThanks for your kind reply.I’ve checked all carefully. Your saying does make sense for me and is right by the test.But I am wondering on theseThanks.", "username": "Valentine_Soin" }, { "code": "", "text": "Hi, Parsad.\nThanks for your clear explanation.I tested it with itcount(), but it takes too long, just as the $count is in the aggregation stage.\nit takes 17.6 secs to count 90051 documentsI am wondering how we can use it if it takes so much time to even get a count of matched documents. What is the solution to speed up dramatically on $lookup & group in aggregate?\nThanks.", "username": "Valentine_Soin" }, { "code": "", "text": "Hello, friends.What do you think about this Thanks.", "username": "Valentine_Soin" }, { "code": "", "text": "No more solution for this aggregate, friends?", "username": "Valentine_Soin" }, { "code": "find()db.collection.explain('executionStats').aggregate(...)db.collection.stats()", "text": "Hi,So far based on what we’ve seen, there are a couple of ways you can improve this:The main issue is that your aggregation-based count requires the server to execute the query, get the matching documents, then count them. There is no shortcut to do this, so the two methods described above are the only ways I can think of to speed it up.If you need further help, please post the output of db.collection.explain('executionStats').aggregate(...) and the output of db.collection.stats(). Please also describe your deployment, e.g. how much RAM you have, what kind of disk you have, is it deployed bare metal or using Docker-like method, etc.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "A post was split to a new topic: Total size of documents matching pipeline’s $lookup stage exceeds 104857600 bytes", "username": "Stennie_X" } ]
Aggregate $lookup and $group takes so long time
2020-06-15T20:22:39.232Z
Aggregate $lookup and $group takes so long time
38,765
https://www.mongodb.com/…5871bcbb332.jpeg
[]
[ { "code": "1 from pymongo import MongoClient\n 2 import bson\n 3 import pprint\n 4 \n 5 client = MongoClient()\n 6 db = client['idx_tree_check']\n 7 admin_db = client.admin\n 8 \n 9 document0 = { \"item\": \"canvas_1\", \"qty\": \"500\"}\n 10 \n 11 document1 = { \"item\": \"canvas_2\", \"qty\": \"700\"}\n 12 \n 13 document2 = { \"item\": \"canvas_3\", \"qty\": \"1000\"}\n 14 \n 15 document3 = { \"item\": \"canvas_4\", \"qty\": \"400\"}\n 16 \n 17 document4 = { \"item\": \"canvas_5\", \"qty\": \"100\"}\n 18 \n 19 document5 = { \"item\": \"canvas_6\", \"qty\": \"600\"}\n 20 \n 21 document6 = { \"item\": \"canvas_7\", \"qty\": \"900\"}\n 22 \n 23 document7 = { \"item\": \"canvas_8\", \"qty\": \"800\"}\n 24 \n 25 document8 = { \"item\": \"canvas_9\", \"qty\": \"200\"}\n 26 \n 27 document9 = { \"item\": \"canvas_10\", \"qty\": \"300\"}\n 28 \n 29 print(\"write document0\")\n 30 db.usertable.insert_one(document0)\n 31 \n 32 print(\"write document1\")\n 33 db.usertable.insert_one(document1)\n 34 \n 35 print(\"write document2\")\n 36 db.usertable.insert_one(document2)\n 37 \n 38 print(\"write document3\")\n 39 db.usertable.insert_one(document3)\n 40 \n 41 print(\"write document4\")\n 42 db.usertable.insert_one(document4)\n 43 \n 44 print(\"create Index\")\n 45 db.usertable.create_index([ (\"qty\", 1) ])\n 46 \n 47 print(\"write document5\")\n 48 db.usertable.insert_one(document5)\n 49 \n 50 print(\"write document6\")\n 51 db.usertable.insert_one(document6)\n 52 \n 53 print(\"write document7\")\n 54 db.usertable.insert_one(document7)\n 55 \n 56 print(\"write document8\")\n 57 db.usertable.insert_one(document8)\n 58 \n 59 print(\"write document9\")\n 60 db.usertable.insert_one(document9)\n", "text": "I have a question about the contents of key-value pairs which are converted from collection and index data, when I put db.collection.insertOne() in pymongo.\nI understand that the document, which is inserted from User, is converted into two key-value pairs(collection and index, respectively) and inserted into each b+ tree that is created by file schema, in default MongoDB. Then, what is the exact content of those key-value pairs? Also, what does happen to those values when I create Index?To answer my question, I did some little experiment. I inserted 10 documents one by one, and capture contents of cursor at __curfile_insert(WT_CURSOR* cursor) using GDB, and looking at those key and value data of cursor. Also, in between 5-th and 6-th insert, I created Index to see whether the content of index is changed or not.MongoDB version : 4.0.9\nMongoDB storage engine : wiredtiger 3.1.1 version\nMongoDB storage engine configurations :\ncollectionConfig blockCompressor : none\nindexConfig prefixCompression : false\npython version: 3.6result607×1024 109 KB\nSorry that I am new user and can post one image only Finally, here are my questions:\nWhen inserting one document by calling db.collection.insertOne(),\nq1-1 : what is content of key-value pair which accesses btree managed by collection file schema?\nq1-2 : what is content of key-value pair which accesses btree managed by index file schema?\nWhen I created index, and inserting document again,\nq2-1 : what is content of key-value pair which accesses btree managed by collection file schema?\nq2-2 : what is content of key-value pair which accesses btree managed by index file schema?\nq2-3 : Why index key-value pairs are created twice?Thank you.", "username": "junhan_lee" }, { "code": "", "text": "My question has been solved! 
there was a similar discussion at the following link: https://groups.google.com/g/mongodb-dev/c/f7FEFlheAxQ/m/iUdhUi0IBQAJ", "username": "junhan_lee" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
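A minimal PyMongo sketch related to the question above (assuming the same local mongod and the idx_tree_check.usertable collection from the experiment): collStats makes the collection-versus-index split visible, since the collection is backed by one WiredTiger table and every index lives in its own B+ tree.

```python
from pymongo import MongoClient

client = MongoClient()               # same local mongod as in the experiment
db = client["idx_tree_check"]

stats = db.command("collStats", "usertable")

# The collection's documents live in one WiredTiger table (one B+ tree),
# keyed internally by RecordId.
print("collection table:", stats["wiredTiger"]["uri"])

# Every index, including the one created on "qty", is a separate WiredTiger
# table whose keys are serialized index keys pointing back at RecordIds.
for index_name, size_bytes in stats["indexSizes"].items():
    print("index", index_name, "->", size_bytes, "bytes in its own table")
```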
Contents of key-value pair created by collection and index when db.collection.insertOne()
2020-09-02T02:09:02.384Z
Contents of key-value pair created by collection and index when db.collection.insertOne()
4,459
null
[ "containers", "ops-manager", "kubernetes-operator" ]
[ { "code": "-Restore from Backup (RestoreRsMemberFromBackupPhase2)\n-Check process needs restore from backup (CheckRsMemberNeedsRestorePhase2)\nFAILS HERE-->-SeedRestoreAfterApplyOps the replica set member with an oplog and a replset config (SeedRestoreAfterApplyOps)\n-Mark automated restore phase finished (MarkRestorePhase2Finished)\nPlan execution failed on step SeedRestoreAfterApplyOps as part of move RestoreRsMemberFromBackupPhase2 : <proj1-replicaset2-1> \n[10:59:49.939] Failed to apply action. Result = <nil> : <proj1-replicaset2-1> \n[10:59:49.938] Error starting mongo process proj1-replicaset2-1 on ephermal port : <proj1-replicaset2-1> \n[10:59:49.938] Error starting mongo with args = map[net:map[bindIp:0.0.0.0 port:38348 ssl:map[mode:disabled]] replication:map[] setParameter:map[disableLogicalSessionCacheRefresh:true ttlMonitorEnabled:false] storage:map[dbPath:/data] systemLog:map[destination:file path:/var/log/mongodb-mms-automation/mongodb.log]] : (res=<nil>) : <proj1-replicaset2-1> \n[10:59:49.937] Error starting mongod : <proj1-replicaset2-1> \n[10:59:49.937] failed to store process id : open forkedProcesses: permission denied\n", "text": "I am trying to restore a snapshot of my current deployment to a new deployment via MongoDB Ops Manager. Both the current and new deployment are created using MongoDB Enterprise Kubernetes Operator.However I am facing the following errors reported by Ops Manager:Looking at the container logs, I found the following errors reported by the automation agent:Any ideas how to resolve the error?Thanks!", "username": "Boon_Tiang_Tan" }, { "code": "", "text": "Hi @Boon_Tiang_Tan,It looks like the automotion agent does not have permission to operate on the mongod process for some reason.However, since the Ops Manager and Enterprise kuberentes are Enterprise Lisence tools I would request you to open a case under your support subscription with MongoDB.Best\nPavel", "username": "Pavel_Duchovny" } ]
Facing errors restoring snapshots via MongoDB Ops Manager to a new deployment created using MongoDB Enterprise Kubernetes Operator
2020-09-02T12:38:39.167Z
Facing errors restoring snapshots via MongoDB Ops Manager to a new deployment created using MongoDB Enterprise Kubernetes Operator
2,289
null
[ "indexes", "performance" ]
[ { "code": "{\n \"_id\" : ObjectId(\"5f2d30b0c7cc16c0da84a57d\"),\n \"RecipientId\" : \"6a28d20f-4741-4c14-a055-2eb2593dcf13\",\n \n\t...\n\t\n \"Actions\" : [ \n {\n \"CampaignId\" : \"7fa216da-db22-44a9-9ea3-c987c4152ba1\",\n \"ActionDatetime\" : ISODate(\"1998-01-13T00:00:00.000Z\"),\n \"ActionDescription\" : \"OPEN\"\n }, \n ...\n ]\n}\ndb.getCollection(\"recipients\").createIndex( { \"Actions.ActionDatetime\": 1 } )\ndb.getCollection(\"recipients\").count({\n \"Actions\":\n { $elemMatch:{ ActionDatetime: {$gt: new Date(\"1950-08-04\")} }}}\n)\n{\n \"executionSuccess\" : true,\n \"nReturned\" : 0,\n \"executionTimeMillis\" : 13093,\n \"totalKeysExamined\" : 8706602,\n \"totalDocsExamined\" : 500000,\n \"executionStages\" : {\n \"stage\" : \"COUNT\",\n \"nReturned\" : 0,\n \"executionTimeMillisEstimate\" : 1050,\n \"works\" : 8706603,\n \"advanced\" : 0,\n \"needTime\" : 8706602,\n \"needYield\" : 0,\n \"saveState\" : 68020,\n \"restoreState\" : 68020,\n \"isEOF\" : 1,\n \"nCounted\" : 500000,\n \"nSkipped\" : 0,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"Actions\" : {\n \"$elemMatch\" : {\n \"ActionDatetime\" : {\n \"$gt\" : ISODate(\"1950-08-04T00:00:00.000Z\")\n }\n }\n }\n },\n \"nReturned\" : 500000,\n \"executionTimeMillisEstimate\" : 1040,\n \"works\" : 8706603,\n \"advanced\" : 500000,\n \"needTime\" : 8206602,\n \"needYield\" : 0,\n \"saveState\" : 68020,\n \"restoreState\" : 68020,\n \"isEOF\" : 1,\n \"docsExamined\" : 500000,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 500000,\n \"executionTimeMillisEstimate\" : 266,\n \"works\" : 8706603,\n \"advanced\" : 500000,\n \"needTime\" : 8206602,\n \"needYield\" : 0,\n \"saveState\" : 68020,\n \"restoreState\" : 68020,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"Actions.ActionDatetime\" : 1.0\n },\n \"indexName\" : \"Actions.ActionDatetime_1\",\n \"isMultiKey\" : true,\n \"multiKeyPaths\" : {\n \"Actions.ActionDatetime\" : [ \n \"Actions\"\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"Actions.ActionDatetime\" : [ \n \"(new Date(-612576000000), new Date(9223372036854775807)]\"\n ]\n },\n \"keysExamined\" : 8706602,\n \"seeks\" : 1,\n \"dupsTested\" : 8706602,\n \"dupsDropped\" : 8206602\n }\n }\n }\n}\n", "text": "I have a collection with 500k documents with the following structure:I need to count the top level documents whose subdocuments inside the “Actions” array meet certain criteria, and for this I’ve created the following Multikey index (taking only the “ActionDatetime” field as an example):The problem is that when I write the query using an $elemMatch, the operation is much slower than when I don’t use the Multikey index at all:The stats for this query:This query took 14sec to execute, whereas if I remove the index, the COLLSCAN takes 1 second.I understand that I’d have a better performance by not using $elemMatch, and filtering by “Actions.ActionDatetime” directly, but in reality I’ll need to filter by more than one field inside the array, so the $elemMatch becomes mandatory.I suspect that it’s the FETCH phase which is killing the performance, but I’ve noticed that when i use the “Actions.ActionDatetime” directly, MongoDB is able to use a COUNT_SCAN instead of the fetch, but the performance is still poorer than the COLLSCAN (4s).I’d like to know if there’s a better indexing strategy for indexing subdocuments with high cardinality inside an 
array, or if I'm missing something with my current approach.\nAs the volume grows, indexing this information will be a necessity and I don't want to rely on a COLLSCAN.", "username": "Pedro_Cristina" }, { "code": "", "text": "Hi @Pedro_Cristina,The execution plan you have posted shows that the query eventually had to count all documents (500k), because the criteria used probably did not filter any documents out.This means that for the same work a COLLSCAN would do, the query also had to scan all of the index keys in the index file and then fetch each document in order to count it.For this predicate there is no doubt that simply scanning all documents is much faster, considering that MongoDB's COLLSCAN code is heavily optimized (it is used in critical areas such as replica set initial syncs), whereas here the full index scan only added work on top of reading every document.Please test the query with predicates that really narrow down the query results.Best\nPavel", "username": "Pavel_Duchovny" } ]
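A small PyMongo sketch of the re-test Pavel suggests: keep the $elemMatch shape but use a predicate that is actually selective, then compare keys/docs examined in the executionStats output. Collection and field names come from the post; the cutoff date and the extra ActionDescription condition are only examples, not part of the original question.

```python
from datetime import datetime
from pymongo import MongoClient

db = MongoClient()["test"]

query = {
    "Actions": {
        "$elemMatch": {
            "ActionDatetime": {"$gt": datetime(2020, 1, 1)},  # selective range
            "ActionDescription": "OPEN",
        }
    }
}
# A compound multikey index such as
# {"Actions.ActionDatetime": 1, "Actions.ActionDescription": 1}
# is worth testing for this query shape.

plan = db.command(
    "explain",
    {"count": "recipients", "query": query},
    verbosity="executionStats",
)
stats = plan["executionStats"]
print(stats["totalKeysExamined"], stats["totalDocsExamined"], stats["executionTimeMillis"])
```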
Searching in array using $elemMatch slower with index than without
2020-09-02T12:38:32.118Z
Searching in array using $elemMatch slower with index than without
5,676
null
[]
[ { "code": "", "text": "Hi all,\nDo mongodump, mongorestore, etc automatically use SSL when connecting to a database hosted in MongoDB Atlas? Or is TLS/SSL optional in the workings of these tools and its use should be enabled explicitly in the command line?\nThank you!", "username": "Eduardo_Cavalcanti" }, { "code": "", "text": "Hi @Eduardo_Cavalcanti,Atlas require the --ssl parameter for any of the MongoDB tools connecting to it as all connections ars SSL encrypted.You can find more hereAs well as under the tools tab on your atlas cluster.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel,\nThanks for your answer.I have recently migrated from mLab to MongoDB Atlas. When loading our company databases to Atlas I used mongorestore, connecting through a VPN.The mongorestore version was the same as the cluster instances’ mongodb versionI have used mongorestore without no ssl configuration parameter and it worked fine.The command format used was:mongorestore --uri mongodb+srv://:@ --nsFrom=<source_db>.* --nsTo=<target_db>.*My point is: using mongorestore with no ssl definition on the operation issued,\na) was SSL automatically, transparently being used on the connection?\nb) or was SSL not being used at all, because not explicitly invoked?I would like to know that to determine if I still need to use a VPN to (somewhat) securely use mongodump/mongorestore when connecting from my workstation to our Atlas hosted databases.", "username": "Eduardo_Cavalcanti" }, { "code": "", "text": "Just a remark for the message above. The mongorestore comand format did not appear well.A better representation is:mongorestore --uri mongodb+srv://user:password@uri --nsFrom=source_db.* --nsTo=target_db.*", "username": "Eduardo_Cavalcanti" }, { "code": "", "text": "Hi @Eduardo_Cavalcanti,When using SRV records the SSL is implicitly defined.Let me know if you have any questions.Best\nPavel", "username": "Pavel_Duchovny" } ]
Mongodump, mongorestore and SSL
2020-09-02T02:09:40.181Z
Mongodump, mongorestore and SSL
3,132
null
[ "dot-net" ]
[ { "code": "How do i insert Adds many objects to the the List In c# MongoDB.Driver\n /// <summary>LogTest</summary>\n public class VisitLog\n {\n /// <summary>MongoDB特有的字段</summary>\n [MongoDB.Bson.Serialization.Attributes.BsonElement(\"_id\")]\n [JsonConverter(typeof(ObjectIdConverter))]\n public MongoDB.Bson.ObjectId MongoId { get; set; }\n\n /// <summary>YMD datetime</summary>\n public int Yymmdd { get; set; }\n\n /// <summary>Visitor</summary>\n public string Visitor { get; set; }\n\n /// <summary>VisitInfos</summary>\n public List<VisitInfo> VisitInfos { get; set; }\n\n }\n// 1\n{\n \"_id\": ObjectId(\"5f506eb02000a9b52d72a600\"),\n \"Yymmdd\": NumberInt(\"20200903\"),\n \"Visitor\": \"360spider\",\n \"VisitInfos\": [ ]\n}\n\n \n\nvar filter = Builders<VisitLog>.Filter.Eq(\"_id\", item.MongoId);\nvar update = Builders<VisitLog>.Update.Push(\"VisitInfos\", new VisitInfo { Visitor = Visitor, Browser = \"IE\", Ip = \"192.168.1.1\", Createtime = DateTime.Now.ToUnixTimeLocalIslong() });\nvar result = BB.UpdateOne(filter, update);\nvar items = BB.Find(x => x.Yymmdd.Equals(Yymmdd) && x.Visitor.Equals(Visitor)).Project<VisitLog>(fields).ToList();\n if (items.Count > 0)\n {\n var item = items[0];\n\n\n var VisitInfos = new List<VisitInfo>();\n\n for (int j = 0; j < 10000; j++)\n {\n VisitInfos.Add(new VisitInfo { Visitor = Visitor, Browser = \"IE\", Ip = \"192.168.1.1\", Createtime = DateTime.Now.ToUnixTimeLocalIslong() });\n }\n\n var filter = Builders<VisitLog>.Filter.Eq(\"_id\", item.MongoId);\n var update = Builders<VisitLog>.Update.Push(\"VisitInfos\", VisitInfos);\n var result = BB.UpdateOne(filter, update);\n\n\n \n }\n", "text": "my c# EntityIn the MongoDBCode Like the codei will add objects to the “VisitInfos”: \nHow do i insert Adds many objects to the the List In c# MongoDB.DriverThe Way 1 : insert ony one object my test code isThe Way 2 : i want to insert InsertManyAsyncthe way 2 is failedpls help meths very much…", "username": "AtlantisDe" }, { "code": "", "text": "Have you looked at how $push with $each modifier works?", "username": "Asya_Kamsky" }, { "code": "", "text": "OK …i will …look it ths", "username": "AtlantisDe" } ]
How do I insert many objects to a List In C# MongoDB.Driver
2020-09-03T06:44:22.235Z
How do I insert many objects to a List In C# MongoDB.Driver
7,580
null
[ "node-js" ]
[ { "code": "E.g. var docs = [{ \"name\": \"name1\", age: \"21\" },\n { \"na.me\": \"name2\", age: \"24\" },\n { \"name\": \"name3\", age: \"23\" }]\nError: key na.me must not contain '.'\n at serializeInto (<path>\\node_modules\\bson\\lib\\bson\\parser\\serializer.js:913:19) \n at serializeObject (<path>\\node_modules\\bson\\lib\\bson\\parser\\serializer.js:347:18 at serializeInto (<path>\\node_modules\\bson\\lib\\bson\\parser\\serializer.js:727:17) \n at serializeObject (<path>\\node_modules\\bson\\lib\\bson\\parser\\serializer.js:347:18)\n at serializeInto (<path>\\node_modules\\bson\\lib\\bson\\parser\\serializer.js:937:17) \n)\n at serializeInto (<path>\\node_modules\\bson\\lib\\bson\\parser\\serializer.js:727:17) \n at serializeObject (<path>\\node_modules\\bson\\lib\\bson\\parser\\serializer.js:347:18)\n at serializeInto (<path>\\node_modules\\bson\\lib\\bson\\parser\\serializer.js:937:17) \n at BSON.serialize (<path>\\node_modules\\bson\\lib\\bson\\bson.js:64:28)\n at Msg.serializeBson (<path>\\node_modules\\mongodb-core\\lib\\connection\\msg.js:124:22)\n at Msg.makeDocumentSegment (<path>\\node_modules\\mongodb-core\\lib\\connection\\msg.js:116:33)\n at Msg.toBin (<path>\\node_modules\\mongodb-core\\lib\\connection\\msg.js:102:25) \n at serializeCommand (<path>\\node_modules\\mongodb-core\\lib\\connection\\pool.js:772:41)\n", "text": "Using MongoDB Node.js Driver:While running insertMany docs where one one of the docs is having key with a dot(.). Refer the document below.When the option is selected as ordered: false, it should insert document name1, should fail document name2(as the key has a dot) and then insert document name3.But in reality the entire insert fails without inserting any document with the below error.Any help is appreciated.", "username": "Vyankatesh_Inamdar" }, { "code": "mongo.\"na.me\" : \"name2\"", "text": "Hello @Vyankatesh_Inamdar, welcome to the community!There are open issues related to that on MongoDB JIRA:That said, in the mongo shell (server v4.2.8) you can insert the documents with dot (.) in the field names without any errors. The field from the sample document you had posted shows as \"na.me\" : \"name2\" in the inserted document.", "username": "Prasad_Saya" } ]
insertMany failing when ordered : false and document key having a dot(.)
2020-09-03T17:43:01.927Z
insertMany failing when ordered : false and document key having a dot(.)
3,103
https://www.mongodb.com/…4_2_1024x512.png
[ "security" ]
[ { "code": "", "text": "I find the privileges and roles section of the document to be less explicit than I would hope it to be.Specific example:\nI have a sharded cluster (with replica sets). In the sharded cluster I have a database named XDB that has a collection named YCOLL. The collection is a GridFS collection, so we see collections YCOLL.chunks and YCOLL.files.I would llike to create a roled named ZROLE, and a user named WUSER. I know how to create the user and grant the role to the user. I would like the user to be able to perform this action: db.YCOLL.chunks.getShardDistribution ()With db.grantPrivilegesToRole() https://docs.mongodb.com/manual/reference/method/db.grantPrivilegesToRole\nI can grant an action to the role ZROLE. How do I know which action to grant to the role, to allow the user to do a getShardDistribution ?", "username": "Jacques_Kilchoer" }, { "code": "collStatsgetShardDistribution$collStatsgetShardDistribution", "text": "Hello @Jacques_Kilchoer, welcome to the community.collStats is the command previously used to get the details of the getShardDistribution (see this: https://jira.mongodb.org/browse/SERVER-44892). But, its changed to $collStats, an aggregation stage. I am guessing that the related action for getShardDistribution is collStats.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you so much. I will try that action. From your answer, I’m guessing that there is no easy (or documented) way to know which action correpsonds to any particular method?", "username": "Jacques_Kilchoer" }, { "code": "", "text": "I’m guessing that there is no easy (or documented) way to know which action correpsonds to any particular method?Me too . For now documentation is good enough, I think. We can probe in these forums as questions arise.", "username": "Prasad_Saya" } ]
MongoDB privileges and roles for a certain method
2020-09-02T21:57:33.829Z
MongoDB privileges and roles for a certain method
2,197
null
[ "atlas-search" ]
[ { "code": "", "text": "Hi,\nI have created a M0 cluster in Mongo Atlas. Im running atlas search on a collection I have created. I would like to run explain plan for the search query to know the time taken to execute. db.collection.explain().aggregate() is working without any error. But when running\ndb.collection.explain(“executionStats”).aggregate(), there is a error as below\n“ok” : 0,\n“errmsg” : “(Unauthorized) not authorized on demoecommerce to execute command { aggregate: “inventory”, pipeline: [[{$search [{index inv_idx} {text [{query JBL} {path productName}]}]}] [{$project [{_id 0} {productName 1}]}]], cursor: { } }”,\n“code” : 8000,\n“codeName” : “AtlasError”Please let me know how can I run explain plan for atlas search or if there is any other option to get the time take for execution in atlas search.", "username": "Durga_Krishnamoorthi" }, { "code": "MongoDB Enterprise Free-shard-0:PRIMARY> db.zips.explain().aggregate([{\"$match\": {city: \"NEW YORK\"}}])\n{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"5e9099160ff7e0434fe7c98c_sample_training.zips\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"city\" : {\n\t\t\t\t\"$eq\" : \"NEW YORK\"\n\t\t\t}\n\t\t},\n\t\t\"queryHash\" : \"0491F17A\",\n\t\t\"planCacheKey\" : \"0491F17A\",\n\t\t\"optimizedPipeline\" : true,\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"COLLSCAN\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"city\" : {\n\t\t\t\t\t\"$eq\" : \"NEW YORK\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"direction\" : \"forward\"\n\t\t},\n\t\t\"rejectedPlans\" : [ ]\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : \"atlas-8vl55a-shard-00-01.ruudt.mongodb.net\",\n\t\t\"port\" : 27000,\n\t\t\"version\" : \"4.2.8\",\n\t\t\"gitVersion\" : \"43d25964249164d76d5e04dd6cf38f6111e21f5f\"\n\t},\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1599050624, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"u5KzhKzaKG/yBSkDRg4/zS2OmrI=\"),\n\t\t\t\"keyId\" : NumberLong(\"6813510425880035331\")\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1599050624, 1)\n}\nMongoDB Enterprise Free-shard-0:PRIMARY> db.zips.explain(true).aggregate([{\"$match\": {city: \"NEW YORK\"}}])\nuncaught exception: Error: explain failed: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"(Unauthorized) not authorized on sample_training to execute command { aggregate: \\\"zips\\\", pipeline: [[{$match [{city NEW YORK}]}]], cursor: { } }\",\n\t\"code\" : 8000,\n\t\"codeName\" : \"AtlasError\"\n} :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nthrowOrReturn@src/mongo/shell/explainable.js:32:19\nconstructor/this.aggregate@src/mongo/shell/explainable.js:122:24\n@(shell):1:1\n", "text": "Issue confirmed on my end.I don’t see this documented in:I escalated this conversation to the Atlas team. I’m waiting for an answer at the moment.That being said - there is probably a reason for this operation to not be supported on shared environment and I don’t have this issue with an M10 cluster.", "username": "MaBeuLux88" }, { "code": "$search", "text": "We have not released explain plans yet for the $search stage yet, but they are in active development and will be released soon.", "username": "Marcus" }, { "code": "", "text": "Hi Durga_Krishnamoorthi,\nI’m a product manager at MongoDB and work with the team that built and maintains a large part of the M0/2/5 stack. I’ve verified that the explain() behavior you reported above is a bug and we now have it in our list to fix. 
Very much appreciate you taking the time to ask and provide details around the issue as it really helps us ensure you have the best experience possible!Thank you,\nMelissa Plunkett", "username": "Melissa_Plunkett" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Explain plan for Atlas search
2020-09-01T07:40:07.344Z
Explain plan for Atlas search
3,395
null
[]
[ { "code": " {\n id: 1234,\n verbatim: clean\n numeric: 4\n },\n {\n id: 1234,\n verbatim: dirt\n numeric: 1\n },\n {$group: {id: $id, \n status: {\n \"$push\": {\n \"verbatim\": \"$verbatim\",\n \"numeric\": \"$numeric\"\n }\n }}}\n {status: {\n \"$arrayElemAt\": [\n status.verbatim\",\n {\n \"$indexOfArray\": [\n \"$status.numeric\",\n {\"$max\": \"$status.numeric\"}\n ]\n }\n ]\n }\n", "text": "Hi there,I have a successfully working aggregation pipeline for Mongo like:Data:Aggregation:Projection:This works fine with mongo, but does not with AWS DocumentDB because operator $indexOfArray is not supported.\nHow do I find an alternative way to find a verbatim field corresponding to maximal numeric?", "username": "AlexG" }, { "code": "\"$indexOfArray\": [\n \"$status.numeric\",\n { \"$max\": \"$status.numeric\" }\n]\n$reduce: {\n\tinput: { $range: [ 0, { $subtract: [ { $size: \"$status.numeric\" }, 1 ] } ] },\n\tinitialValue: 0,\n\tin: {\n\t\t$cond: { if: { $eq: [ { $max: \"$status.numeric\"}, { $arrayElemAt: [ \"$status.numeric\", \"$$this\" ] } ] },\n\t\t\t then: \"$$this\",\n\t\t else: \"$$value\"\n\t\t\t\t}\n\t}\n}", "text": "Hello @AlexG, welcome to the community.Here is a way. The following can be substituted:with:", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Prasad_Saya,\nthanks for the quick reply.\nUnfortunately, $reduce is also not implemented in AWS DocumentDB ", "username": "AlexG" }, { "code": "$indexOfArray$reduce", "text": "Unfortunately, $reduce is also not implemented in AWS DocumentDBHi @AlexG,Amazon DocumentDB is a separate implementation from the MongoDB server. DocumentDB uses the MongoDB 3.6 wire protocol, but there are number of functional differences and the supported commands are a subset of those available in MongoDB 3.6 (and earlier). For example, the $indexOfArray and $reduce aggregation operators were introduced in MongoDB 3.4.If the server-side operators you’d like to use aren’t available in DocumentDB, the likely workaround will be manipulating results or documents in your application code.If you want to use a managed MongoDB service on AWS, MongoDB Atlas builds on MongoDB Enterprise server and does not require compatibility workarounds.Regards,\nStennie", "username": "Stennie_X" } ]
Alternative of indexOfArray for DocumentDB
2020-09-03T13:59:25.443Z
Alternative of indexOfArray for DocumentDB
2,283
https://www.mongodb.com/…3_2_1023x510.png
[ "connecting", "security", "next-js" ]
[ { "code": "", "text": "We have a site running on Next.js deployed on Vercel, with a MongoDB backend. Currently we have the issue where new users on the site get occasional 502/500 errors when they perform various functionality such as login, signup or viewing items. However, once they refresh the site it works fine again. When I checked the logs on Vercel, the issue appears as follows:\nimage1341×669 86.7 KBIn MongoDB, I have set the Network Access to allow all addresses.\nIn addition, below is our connectDB function, and an example of where it is called. What might be causing this issue and how could it be resolved? Thanks\n", "username": "Development_Team" }, { "code": "", "text": "Hi Development_Team,I’m sorry to hear you’re running into this issue. In order to help us investigate, can you share cluster tier are you using here (M0, M2, M10, etc?). Feel free to shoot an email to me at andrew.davidson at mongodb.com with a link to your Atlas project.Thanks\n-Andrew", "username": "Andrew_Davidson" } ]
Occasional whitelist issues with new users on site - MongoDB
2020-09-02T02:09:20.265Z
Occasional whitelist issues with new users on site - MongoDB
2,803
null
[ "performance", "cxx" ]
[ { "code": "for (bsoncxx::document::element ele : doc) {\t\n bsoncxx::stdx::string_view field_key{ ele.key() };\n savelocation = -1;\n switch (ele.type()) {\n case bsoncxx::type::k_double:\n records.extractRecord(ele.get_double(), field_key.to_string(), threadNumber, recordCount);\n //extractedvalue = std::to_string(ele.get_double());\n break;\n /*case bsoncxx::type::k_utf8:\n savelocation = records.extractRecord(boost::string_view(ele.get_utf8()).to_string(), field_key.to_string(), threadNumber, recordCount);\n extractedvalue = boost::string_view(ele.get_utf8()).to_string();\n break;*/\n case bsoncxx::type::k_date:\n records.extractRecord((int64_t)ele.get_date(), field_key.to_string(), threadNumber, recordCount);\n //extractedvalue = std::to_string((int64_t)ele.get_date());\n break;\n case bsoncxx::type::k_int32:\n records.extractRecord((int32_t)ele.get_int32(), field_key.to_string(), threadNumber, recordCount);\n //extractedvalue = std::to_string((int32_t)ele.get_int32());\n break;\n case bsoncxx::type::k_int64:\n records.extractRecord((int64_t)ele.get_int64(), field_key.to_string(), threadNumber, recordCount);\n extractedvalue = std::to_string((int64_t)ele.get_int64());\n break;\n }\n", "text": "I am accessing data from MongoDB using the C++ driver, and have written the following code to extract data from the database.From testing, it takes around 20 seconds to extract 100 documents with 5 values each.\nI achieve this by running 4 threads at once. This however is too slow for my purposes. is there a quicker way to access the data than the cursor, as the cursor seems to be the bottleneck at the moment. please note the function records.extractRecord is simply a wrapper around extracting the data from the Bson Types.", "username": "arif_saeed" }, { "code": "", "text": "In my opinion you are way too fast to claim that the bottleneck is the cursor. There is a thousand things that can influence performance. You only supplied few numbers 20 seconds, 100 documents, 5 values and 4 threads. We know nothing about the capacity of the servers, the size of the data sets, the type of documents, indexes or not, …", "username": "steevej" }, { "code": "", "text": "Sorry, i made a mistake in my original post.\nI meant I am accessing 1,000,000 documents, and the fastest i have been able to access the documents is around 14 seconds.I am not using an index, but the results are not sorted.\nI understand that there are alot of variables that can affect performance, but My question is, is there a faster way to access the data than the cursor, or is the cursor the fastest way to access it.I thought maybe the data transformation between bson types and C++ types might be slowing the code down, so i ran the code without running my extractRecord function, so all the code did was run through the cursor, and check each type. This activity took 5 seconds.\nso it looks like its taking around 10 seconds to physically transform 5,000,000 variables from bson types to C++ standard types.\nAnd 5 seconds to run through the cursor.", "username": "arif_saeed" }, { "code": "", "text": "I also stuck in the same place. Did you find a solution?", "username": "sylvester" } ]
Is there a faster way to access the data than the cursor
2020-04-24T18:27:28.292Z
Is there a faster way to access the data than the cursor
2,118
null
[ "cxx" ]
[ { "code": "CMake Error at E:/dev/git/connectors/mongoDb/sourceCode/mongo-c-driver-1.17.0/src/libbson/libbson-1.0-config.cmake:32 (message):\n File or directory\n E:/dev/git/connectors/mongoDb/sourceCode/include/libbson-1.0 referenced by\n variable BSON_INCLUDE_DIRS does not exist !\nCall Stack (most recent call first):\n E:/dev/git/connectors/mongoDb/sourceCode/mongo-c-driver-1.17.0/src/libbson/libbson-1.0-config.cmake:48 (set_and_check)\n src/bsoncxx/CMakeLists.txt:98 (find_package) \nChecking Build System\n Creating directories for 'EP_mnmlstc_core'\n Building Custom Rule C:/mongo-cxx-test/libmongocxx/src/bsoncxx/third_party/CMakeLists.txt\n Performing download step (git clone) for 'EP_mnmlstc_core'\n -- EP_mnmlstc_core download command succeeded. See also C:/mongo-cxx-test/libmongocxx-build/src/bsoncxx/third_part\n y/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-download-*.log\n No update step for 'EP_mnmlstc_core'\n No patch step for 'EP_mnmlstc_core'\n Performing configure step for 'EP_mnmlstc_core'\n -- EP_mnmlstc_core configure command succeeded. See also C:/mongo-cxx-test/libmongocxx-build/src/bsoncxx/third_par\n ty/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-configure-*.log\n Performing build step for 'EP_mnmlstc_core'\n -- EP_mnmlstc_core build command succeeded. See also C:/mongo-cxx-test/libmongocxx-build/src/bsoncxx/third_party/E\n P_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-build-*.log\n Performing install step for 'EP_mnmlstc_core'\n -- EP_mnmlstc_core install command succeeded. See also C:/mongo-cxx-test/libmongocxx-build/src/bsoncxx/third_party\n /EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-install-*.log\n Performing fix-includes step for 'EP_mnmlstc_core'\n 'xargs' is not recognized as an internal or external command,\n operable program or batch file.\nC:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Enterprise\\MSBuild\\Microsoft\\VC\\v160\\Microsoft.CppCommon.targets(23\n1,5): error MSB6006: \"cmd.exe\" exited with code 255. [C:\\mongo-cxx-test\\libmongocxx-build\\src\\bsoncxx\\third_party\\EP_mn\nmlstc_core.vcxproj] [C:\\mongo-cxx-test\\libmongocxx.vcxproj]\n", "text": "Hi, I’m currently trying to compile mongocxx-driver Windows, and I have ran into some issues.I used the guide found at\nhttp://mongocxx.org/mongocxx-v3/installation/ \nand have successfully installed MongoDB C DriverUnder Step 4, Configure the driver, I ran into the issues.I used the following commandcmake … -DBOOST_ROOT=E:\\dev\\git\\externalapi\\boost -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driverand received the following errors:I also tried follwoing the instructions at stack overflow stackOverflow using the CMakeLists.txt provided (updating the versions) but get to an errorReally need some help on this", "username": "Thomas_Morten" }, { "code": "", "text": "@Thomas_Morten the problem seems to be that CMake is using the package scripts out of the C driver source tree instead of the files from the installed location. Can you inspect your environment variables to see if there are any CMake settings (particularly CMAKE_PREFIX_PATH) being set or modified via an environment variable? 
Also, could you provide the complete CMake command you are using for the C++ driver and the complete output from the beginning all the way to the error?", "username": "Roberto_Sanchez" }, { "code": "include(ExternalProject)\n\nset(common_cmake_cache_args\n -DCMAKE_CXX_COMPILER:PATH=${CMAKE_CXX_COMPILER}\n)\nif(NOT DEFINED CMAKE_CONFIGURATION_TYPES)\n list(APPEND common_cmake_cache_args\n -DCMAKE_BUILD_TYPE:STRING=${CMAKE_BUILD_TYPE}\n )\nendif()\n\nExternalProject_Add(libmongoc\n GIT_REPOSITORY \"https://github.com/mongodb/mongo-c-driver.git\"\n GIT_TAG \"1.17.0\"\n GIT_PROGRESS 1\n GIT_SHALLOW 1\n SOURCE_DIR \"${CMAKE_BINARY_DIR}/libmongoc\"\n BINARY_DIR \"${CMAKE_BINARY_DIR}/libmongoc-build\"\n INSTALL_DIR \"${CMAKE_BINARY_DIR}/libmongoc-install\"\n CMAKE_CACHE_ARGS\n ${common_cmake_cache_args}\n -DCMAKE_INSTALL_PREFIX:PATH=${CMAKE_BINARY_DIR}/libmongoc-install\n -DENABLE_TESTS:BOOL=OFF\n -DENABLE_STATIC:BOOL=OFF\n -DENABLE_EXAMPLES:BOOL=OFF\n -DENABLE_EXTRA_ALIGNMENT:BOOL=OFF\n #INSTALL_COMMAND \"\"\n)\nset(libmongoc-1.0_DIR \"${CMAKE_BINARY_DIR}/libmongoc-install/lib/cmake/libmongoc-1.0/\")\nset(libbson-1.0_DIR \"${CMAKE_BINARY_DIR}/libmongoc-install/lib/cmake/libbson-1.0/\")\n\nExternalProject_Add(libmongocxx\n GIT_REPOSITORY \"https://github.com/mongodb/mongo-cxx-driver.git\"\n GIT_TAG \"releases/v3.6\"\n GIT_PROGRESS 1\n GIT_SHALLOW 1\n SOURCE_DIR \"${CMAKE_BINARY_DIR}/libmongocxx\"\n BINARY_DIR \"${CMAKE_BINARY_DIR}/libmongocxx-build\"\n INSTALL_DIR \"${CMAKE_BINARY_DIR}/libmongocxx-install\"\n CMAKE_CACHE_ARGS\n ${common_cmake_cache_args}\n -DCMAKE_INSTALL_PREFIX:PATH=${CMAKE_BINARY_DIR}/libmongocxx-install\n -DBUILD_SHARED_LIBS:BOOL=ON\n -DENABLE_TESTS:BOOL=OFF\n -DENABLE_EXAMPLES:BOOL=OFF\n -DBSONCXX_POLY_USE_BOOST:BOOL=OFF\n -DBSONCXX_POLY_USE_MNMLSTC:BOOL=ON\n -DBSONCXX_POLY_USE_STD:BOOL=OFF\n -Dlibmongoc-1.0_DIR:PATH=${libmongoc-1.0_DIR}\n -Dlibbson-1.0_DIR:PATH=${libbson-1.0_DIR}\n DEPENDS\n libmongoc\n)\nset(libmongocxx_DIR \"${CMAKE_BINARY_DIR}/libmongocxx-install/lib/cmake/libmongocxx-3.3.1/\")\nset(libbsoncxx_DIR \"${CMAKE_BINARY_DIR}/libmongocxx-install//lib/cmake/libbsoncxx-3.3.1/\")\n\n\nfunction(ExternalProject_AlwaysConfigure proj)\n # This custom external project step forces the configure and later\n # steps to run.\n _ep_get_step_stampfile(${proj} \"configure\" stampfile)\n ExternalProject_Add_Step(${proj} forceconfigure\n COMMAND ${CMAKE_COMMAND} -E remove ${stampfile}\n COMMENT \"Forcing configure step for '${proj}'\"\n DEPENDEES build\n ALWAYS 1\n )\nendfunction()\n\nExternalProject_Add(${PROJECT_NAME}\n SOURCE_DIR \"${CMAKE_SOURCE_DIR}\"\n BINARY_DIR \"${CMAKE_BINARY_DIR}/${PROJECT_NAME}-build\"\n DOWNLOAD_COMMAND \"\"\n UPDATE_COMMAND \"\"\n CMAKE_CACHE_ARGS\n ${common_cmake_cache_args}\n -D${PROJECT_NAME}_SUPERBUILD:BOOL=OFF\n -Dlibbsoncxx_DIR:PATH=${libbsoncxx_DIR}\n -Dlibmongocxx_DIR:PATH=${libmongocxx_DIR}\n INSTALL_COMMAND \"\"\n DEPENDS\n libmongocxx\n)\nExternalProject_AlwaysConfigure(${PROJECT_NAME})\nreturn()\n", "text": "I have no relevant environment variables set (no CMAKE_PREFIX_PATH).I am using the following script for my CMakeLists.txt as well as the test.cpp from the install guide:cmake_minimum_required(VERSION 3.12)set(CMAKE_CXX_STANDARD 11)project(Test)option({PROJECT_NAME}_SUPERBUILD \"Build {PROJECT_NAME} and the projects it depends on.\" ON)if(${PROJECT_NAME}_SUPERBUILD)endif()message(STATUS “Configuring inner-build”)find_package(libmongocxx REQUIRED)add_executable(test_mongocxx test.cpp)\ntarget_link_libraries(test_mongocxx PUBLIC 
{LIBMONGOCXX_LIBRARIES})\ntarget_include_directories(test_mongocxx PUBLIC {LIBMONGOCXX_INCLUDE_DIRS})\ntarget_compile_definitions(test_mongocxx PUBLIC ${LIBMONGOCXX_DEFINITIONS})I call this using :cmake .thencmake --build .", "username": "Thomas_Morten" }, { "code": "C:\\mongo-cxx-test>cmake .\n-- Building for: Visual Studio 16 2019\n-- Selecting Windows SDK version 10.0.18362.0 to target Windows 10.0.16299.\n-- The C compiler identification is MSVC 19.24.28316.0\n-- The CXX compiler identification is MSVC 19.24.28316.0\n-- Detecting C compiler ABI info\n-- Detecting C compiler ABI info - done\n-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Enterprise/VC/Tools/MSVC/14.24.28314/bin/Hostx64/x64/cl.exe - skipped\n-- Detecting C compile features\n-- Detecting C compile features - done\n-- Detecting CXX compiler ABI info\n-- Detecting CXX compiler ABI info - done\n-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Enterprise/VC/Tools/MSVC/14.24.28314/bin/Hostx64/x64/cl.exe - skipped\n-- Detecting CXX compile features\n-- Detecting CXX compile features - done\n-- Configuring done\n-- Generating done\n-- Build files have been written to: C:/mongo-cxx-test\n\nC:\\mongo-cxx-test>cmake --build .\nMicrosoft (R) Build Engine version 16.4.0+e901037fe for .NET Framework\nCopyright (C) Microsoft Corporation. All rights reserved.\n\n Checking Build System\n Creating directories for 'libmongoc'\n Building Custom Rule C:/mongo-cxx-test/CMakeLists.txt\n Performing download step (git clone) for 'libmongoc'\n Cloning into 'libmongoc'...\n remote: Enumerating objects: 46002, done.\n remote: Counting objects: 0% (1/46002)\n remote: Counting objects: 1% (461/46002)\n ...\n remote: Counting objects: 99% (45542/46002)\n remote: Counting objects: 100% (46002/46002)\n remote: Counting objects: 100% (46002/46002), done.\n remote: Compressing objects: 0% (1/24069)\n remote: Compressing objects: 1% (241/24069)\n ...\n remote: Compressing objects: 100% (24069/24069)\n remote: Compressing objects: 100% (24069/24069), done.\n Receiving objects: 0% (1/46002)\n Receiving objects: 0% (365/46002), 140.00 KiB | 181.00 KiB/s\n Receiving objects: 1% (461/46002), 140.00 KiB | 181.00 KiB/s\n ...\n Receiving objects: 99% (45542/46002), 22.72 MiB | 1.83 MiB/s\n remote: Total 46002 (delta 41401), reused 24517 (delta 21581), pack-reused 0\n Receiving objects: 100% (46002/46002), 22.72 MiB | 1.83 MiB/s\n Receiving objects: 100% (46002/46002), 23.37 MiB | 1.37 MiB/s, done.\n Resolving deltas: 0% (0/41401)\n Resolving deltas: 1% (416/41401)\n ...\n Resolving deltas: 100% (41401/41401)\n Resolving deltas: 100% (41401/41401), done.\n Note: checking out '1.17.0'.\n You are in 'detached HEAD' state. You can look around, make experimental\n changes and commit them, and you can discard any commits you make in this\n state without impacting any branches by performing another checkout.\n\n If you want to create a new branch to retain commits you create, you may\n do so (now or later) by using -b with the checkout command again. 
Example:\n\n\tgit checkout -b <new-branch-name>\n\n HEAD is now at b51d1e45 1.17.0 Release\n Performing update step for 'libmongoc'\n No patch step for 'libmongoc'\n Performing configure step for 'libmongoc'\n loading initial cache file C:/mongo-cxx-test/libmongoc-prefix/tmp/libmongoc-cache-Debug.cmake\n -- Selecting Windows SDK version 10.0.18362.0 to target Windows 10.0.16299.\n -- The C compiler identification is ;MSVC 19.24.28316.0\n -- Detecting C compiler ABI info\n -- Detecting C compiler ABI info - done\n -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Enterprise/VC/Tools/MSVC/14.24.2\n 8314/bin/Hostx64/x64/cl.exe - skipped\n -- Detecting C compile features\n -- Detecting C compile features - done\n -- No CMAKE_BUILD_TYPE selected, defaulting to RelWithDebInfo\n -- Found PythonInterp: E:/dev/Tools/Python3/python.exe (found version \"3.7\")\n calculated BUILD_VERSION 1.17.0\n storing BUILD_VERSION 1.17.0 in file VERSION_CURRENT for later use\n -- Don't build static libraries\n\t-- Using bundled libbson\n libbson version (from VERSION_CURRENT file): 1.17.0\n -- Check if the system is big endian\n -- Searching 16 bit integer\n -- Looking for sys/types.h\n -- Looking for sys/types.h - found\n -- Looking for stdint.h\n -- Looking for stdint.h - found\n -- Looking for stddef.h\n -- Looking for stddef.h - found\n -- Check size of unsigned short\n -- Check size of unsigned short - done\n -- Searching 16 bit integer - Using unsigned short\n -- Check if the system is big endian - little endian\n -- Looking for snprintf\n -- Looking for snprintf - found\n -- Looking for reallocf\n -- Looking for reallocf - not found\n -- Performing Test BSON_HAVE_TIMESPEC\n -- Performing Test BSON_HAVE_TIMESPEC - Success\n -- struct timespec found\n -- Looking for gmtime_r\n -- Looking for gmtime_r - not found\n -- Looking for rand_r\n -- Looking for rand_r - not found\n -- Looking for strings.h\n -- Looking for strings.h - not found\n -- Looking for strlcpy\n -- Looking for strlcpy - not found\n -- Performing Test HAVE_ATOMIC_32_ADD_AND_FETCH\n -- Performing Test HAVE_ATOMIC_32_ADD_AND_FETCH - Failed\n -- Performing Test HAVE_ATOMIC_64_ADD_AND_FETCH\n -- Performing Test HAVE_ATOMIC_64_ADD_AND_FETCH - Failed\n -- Looking for pthread.h\n -- Looking for pthread.h - not found\n -- Found Threads: TRUE\n libmongoc version (from VERSION_CURRENT file): 1.17.0\n -- Searching for zlib CMake packages\n -- Could NOT find ZLIB (missing: ZLIB_LIBRARY ZLIB_INCLUDE_DIR)\n -- Enabling zlib compression (bundled)\n -- Looking for include file unistd.h\n -- Looking for include file unistd.h - not found\n -- Looking for include file stdarg.h\n -- Looking for include file stdarg.h - found\n -- Searching for compression library zstd\n -- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)\n -- Not found\n -- Check size of socklen_t\n -- Check size of socklen_t - done\n -- Looking for sched_getcpu\n -- Looking for sched_getcpu - not found\n -- Searching for compression library header snappy-c.h\n -- Not found (specify -DCMAKE_INCLUDE_PATH=/path/to/snappy/include for Snappy compression)\n -- No ICU library found, SASLPrep disabled for SCRAM-SHA-256 authentication.\n Searching for libmongocrypt\n -- If ICU is installed in a non-standard directory, define ICU_ROOT as the ICU installation path.\n -- libmongocrypt not found. 
Configuring without Client-Side Field Level Encryption support.\n -- Performing Test MONGOC_HAVE_SS_FAMILY\n -- Performing Test MONGOC_HAVE_SS_FAMILY - Failed\n -- Compiling against Secure Channel\n -- Compiling against Windows SSPI\n -- Building with MONGODB-AWS auth support\n -- Build files generated for:\n -- build system: Visual Studio 16 2019\n -- instance: C:/Program Files (x86)/Microsoft Visual Studio/2019/Enterprise\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/mongo-cxx-test/libmongoc-build\n Performing build step for 'libmongoc'\n Microsoft (R) Build Engine version 16.4.0+e901037fe for .NET Framework\n Copyright (C) Microsoft Corporation. All rights reserved.\n\n\tChecking Build System\n\tBuilding Custom Rule C:/mongo-cxx-test/libmongoc/src/libbson/CMakeLists.txt\n\tbcon.c\n\tbson.c\n\tbson-atomic.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-atomic.c(93,36): warning C4133: 'function': incompatible types -\nfrom 'volatile int64_t *' to 'volatile LONG *' [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\\nmongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-atomic.c(93,39): warning C4244: 'function': conversion from 'int6\n4_t' to 'LONG', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx\n-test\\libmongoc.vcxproj]\n\tbson-clock.c\n\tbson-context.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-context.c(290,43): warning C4267: '=': conversion from 'size_t' t\no 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\l\nibmongoc.vcxproj]\n\tbson-decimal128.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-decimal128.c(176,1): warning C4996: 'strcpy': This function or va\nriable may be unsafe. Consider using strcpy_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online\nhelp for details. [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxp\nroj]\n C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.18362.0\\ucrt\\string.h(133): message : see declaration of 'strcpy'\n [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-decimal128.c(180,1): warning C4996: 'strcpy': This function or va\nriable may be unsafe. Consider using strcpy_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online\nhelp for details. 
[C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxp\nroj]\n C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.18362.0\\ucrt\\string.h(133): message : see declaration of 'strcpy'\n [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-decimal128.c(629,33): warning C4267: '-=': conversion from 'size_\nt' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cx\nx-test\\libmongoc.vcxproj]\n\tbson-error.c\n\tbson-iso8601.c\n\tbson-iter.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-iter.c(114,22): warning C4267: '=': conversion from 'size_t' to '\nuint32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\n\\libmongoc.vcxproj]\n\tbson-json.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-json.c(617,77): warning C4146: unary minus operator applied to un\nsigned type, result still unsigned [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-te\nst\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-json.c(1069,1): warning C4267: 'function': conversion from 'size_\nt' to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-te\nst\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-json.c(2090,39): warning C4018: '<': signed/unsigned mismatch [C:\n\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-json.c(2091,24): warning C4018: '<': signed/unsigned mismatch [C:\n\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tbson-keys.c\n\tbson-md5.c\n\tbson-memory.c\n\tbson-oid.c\n\tbson-reader.c\n\tbson-string.c\n\tbson-timegm.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(278,1): warning C4028: formal parameter 1 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(295,1): warning C4028: formal parameter 1 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(295,1): warning C4028: formal parameter 2 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(295,1): warning C4028: formal parameter 3 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(321,1): warning C4028: formal parameter 1 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(331,1): warning C4028: formal parameter 1 different from\n declaration 
[C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(331,1): warning C4028: formal parameter 2 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(331,1): warning C4028: formal parameter 3 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(331,1): warning C4028: formal parameter 4 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(466,1): warning C4028: formal parameter 1 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(483,1): warning C4028: formal parameter 1 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(483,1): warning C4028: formal parameter 2 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(488,12): warning C4244: '+=': conversion from 'const int\n64_t' to 'int_fast32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\\nmongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(494,1): warning C4028: formal parameter 1 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(494,1): warning C4028: formal parameter 2 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(494,1): warning C4028: formal parameter 3 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(507,1): warning C4028: formal parameter 1 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(507,1): warning C4028: formal parameter 2 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(507,1): warning C4028: formal parameter 3 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(519,1): warning 
C4028: formal parameter 1 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(519,1): warning C4028: formal parameter 2 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(538,1): warning C4028: formal parameter 1 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(538,1): warning C4028: formal parameter 2 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(538,1): warning C4028: formal parameter 3 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(538,1): warning C4028: formal parameter 4 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(538,1): warning C4028: formal parameter 5 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(706,1): warning C4028: formal parameter 1 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(706,1): warning C4028: formal parameter 2 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(706,1): warning C4028: formal parameter 3 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(706,1): warning C4028: formal parameter 4 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(722,1): warning C4028: formal parameter 1 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(722,1): warning C4028: formal parameter 2 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\bson\\bson-timegm.c(722,1): warning C4028: formal parameter 3 different from\n declaration [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] 
[C:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tbson-utf8.c\n\tbson-value.c\n\tbson-version-functions.c\n\tGenerating Code...\n\tCompiling...\n\tbson-writer.c\n\tjsonsl.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\jsonsl\\jsonsl.c(921,1): warning C4996: 'strcpy': This function or variable\nmay be unsafe. Consider using strcpy_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help fo\nr details. [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\n C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.18362.0\\ucrt\\string.h(133): message : see declaration of 'strcpy'\n [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libbson\\src\\jsonsl\\jsonsl.c(959,1): warning C4996: 'strcpy': This function or variable\nmay be unsafe. Consider using strcpy_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help fo\nr details. [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\n C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.18362.0\\ucrt\\string.h(133): message : see declaration of 'strcpy'\n [C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\bson_shared.vcxproj]\n\tcommon-b64.c\n\tcommon-md5.c\n\tcommon-thread.c\n\tGenerating Code...\n\t Creating library C:/mongo-cxx-test/libmongoc-build/src/libbson/Debug/bson-1.0.lib and object C:/mongo-cxx-test/l\n ibmongoc-build/src/libbson/Debug/bson-1.0.exp\n\tbson_shared.vcxproj -> C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\Debug\\bson-1.0.dll\n\tBuilding Custom Rule C:/mongo-cxx-test/libmongoc/src/libmongoc/CMakeLists.txt\ncl : command line warning D9025: overriding '/W3' with '/w' [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_sha\nred.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tadler32.c\n\tcrc32.c\n\tdeflate.c\n\tinfback.c\n\tinffast.c\n\tinflate.c\n\tinftrees.c\n\ttrees.c\n\tzutil.c\n\tcompress.c\n\tuncompr.c\n\tgzclose.c\n\tgzlib.c\n\tgzread.c\n\tgzwrite.c\n\thexlify.c\n\tkms_b64.c\n\tkms_caller_identity_request.c\n\tkms_crypto_apple.c\n\tkms_crypto_libcrypto.c\n\tGenerating Code...\n\tCompiling...\n\tkms_crypto_none.c\n\tkms_crypto_windows.c\n\tkms_decrypt_request.c\n\tkms_encrypt_request.c\n\tkms_kv_list.c\n\tkms_message.c\n\tkms_port.c\n\tkms_request.c\n\tkms_request_opt.c\n\tkms_request_str.c\n\tkms_response.c\n\tkms_response_parser.c\n\tsort.c\n\tGenerating Code...\n\tmongoc-aggregate.c\n\tmongoc-apm.c\n\tmongoc-array.c\n\tmongoc-async.c\n\tmongoc-async-cmd.c\n\tmongoc-buffer.c\n\tmongoc-bulk-operation.c\n\tmongoc-change-stream.c\n\tmongoc-client.c\n\tmongoc-client-pool.c\n\tmongoc-client-side-encryption.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-client-side-encryption.c(249,21): warning C4018: '<': signe\nd/unsigned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongo\nc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-client-side-encryption.c(304,21): warning C4018: '<': signe\nd/unsigned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongo\nc.vcxproj]\n\tmongoc-cluster.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-cluster.c(162,38): warning C4267: '+=': conversion from 'si\nze_t' to 'int', possible loss of data 
[C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo\n-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-cluster.c(172,62): warning C4267: '=': conversion from 'siz\ne_t' to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-\ncxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-cluster.c(3130,50): warning C4018: '>': signed/unsigned mis\nmatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-cluster.c(3215,21): warning C4267: '=': conversion from 'si\nze_t' to 'off_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mon\ngo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-cluster.c(3392,58): warning C4267: '=': conversion from 'si\nze_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\m\nongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-cluster-aws.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-cluster-aws.c(512,1): warning C4142: '_mongoc_aws_credentia\nls_obtain': benign redefinition of type [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mon\ngo-cxx-test\\libmongoc.vcxproj]\n C:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-cluster-aws-private.h(40): message : see declaration of '\n _mongoc_aws_credentials_obtain' [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-cluster-aws.c(579,1): warning C4142: '_mongoc_validate_and_\nderive_region': benign redefinition of type [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\n\\mongo-cxx-test\\libmongoc.vcxproj]\n C:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-cluster-aws-private.h(48): message : see declaration of '\n _mongoc_validate_and_derive_region' [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-cluster-aws.c(934,1): warning C4142: '_mongoc_cluster_auth_\nnode_aws': benign redefinition of type [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mong\no-cxx-test\\libmongoc.vcxproj]\n C:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-cluster-aws-private.h(26): message : see declaration of '\n _mongoc_cluster_auth_node_aws' [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj]\n\tmongoc-cluster-sasl.c\n\tmongoc-collection.c\n\tmongoc-compression.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-compression.c(51,32): warning C4267: 'function': conversion\n from 'size_t' to 'uLong', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj\n] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-compression.c(181,38): warning C4267: 'function': conversio\nn from 'size_t' to 'uLong', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxpro\nj] 
[C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-compression.c(255,41): warning C4267: 'function': conversio\nn from 'size_t' to 'uLong', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxpro\nj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-counters.c\n\tmongoc-crypt.c\n\tmongoc-cursor-array.c\n\tmongoc-cursor.c\n\tGenerating Code...\n\tCompiling...\n\tmongoc-cursor-cmd.c\n\tmongoc-cursor-change-stream.c\n\tmongoc-cursor-cmd-deprecated.c\n\tmongoc-cursor-find.c\n\tmongoc-cursor-find-cmd.c\n\tmongoc-cursor-find-opquery.c\n\tmongoc-cursor-legacy.c\n\tmongoc-database.c\n\tmongoc-error.c\n\tmongoc-find-and-modify.c\n\tmongoc-init.c\n\tConfigure the driver with ENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF. Automatic cleanup is deprecated and will be remove\n d in version 2.0.\n\tmongoc-gridfs.c\n\tmongoc-gridfs-bucket.c\n\tmongoc-gridfs-bucket-file.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-bucket-file.c(375,38): warning C4267: '+=': conversi\non from 'size_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcx\nproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-bucket-file.c(376,27): warning C4267: '+=': conversi\non from 'size_t' to 'uint32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vc\nxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-bucket-file.c(428,37): warning C4267: '+=': conversi\non from 'size_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcx\nproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-bucket-file.c(429,34): warning C4267: '+=': conversi\non from 'size_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcx\nproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-bucket-file.c(430,26): warning C4267: '+=': conversi\non from 'size_t' to 'uint32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vc\nxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-gridfs-file.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-file.c(448,18): warning C4018: '>=': signed/unsigned\n mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-file.c(515,18): warning C4018: '>': signed/unsigned\nmismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-file.c(585,21): warning C4018: '>=': signed/unsigned\n mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-file.c(602,79): warning C4244: 'function': conversio\nn from 'uint64_t' to 'uint32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.v\ncxproj] 
[C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-file.c(836,36): warning C4018: '<=': signed/unsigned\n mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-file.c(882,12): warning C4018: '>': signed/unsigned\nmismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-file.c(981,42): warning C4244: '=': conversion from\n'uint64_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj]", "text": "Here is the full output from the cmd commands (split into 2 because of character limit):", "username": "Thomas_Morten" }, { "code": "[C:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-gridfs-file-list.c\n\tmongoc-gridfs-file-page.c\n\tmongoc-handshake.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-handshake.c(202,18): warning C4018: '<': signed/unsigned mi\nsmatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-handshake.c(298,1): warning C4996: 'GetVersionExA': was dec\nlared deprecated [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.v\ncxproj]\n C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.18362.0\\um\\sysinfoapi.h(387): message : see declaration of 'GetVe\n rsionExA' [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-handshake.c(505,7): warning C4018: '<': signed/unsigned mis\nmatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-host-list.c\n\tmongoc-http.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-http.c(194,46): warning C4244: '=': conversion from '__int6\n4' to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cx\nx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-http.c(197,75): warning C4267: '=': conversion from 'size_t\n' to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx\n-test\\libmongoc.vcxproj]\n\tGenerating Code...\n\tCompiling...\n\tmongoc-index.c\n\tmongoc-interrupt.c\n\tmongoc-list.c\n\tmongoc-linux-distro-scanner.c\n\tmongoc-log.c\n\tmongoc-matcher.c\n\tmongoc-matcher-op.c\n\tmongoc-memcmp.c\n\tmongoc-cmd.c\n\tmongoc-opts-helpers.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-opts-helpers.c(67,30): warning C4267: 'function': conversio\nn from 'size_t' to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj]\n [C:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-opts.c\n\tmongoc-queue.c\n\tmongoc-read-concern.c\n\tmongoc-read-prefs.c\n\tmongoc-rpc.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\op-msg.def(9,1): warning C4267: 'initializing': conversion from 's\nize_t' to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] 
[C:\\mong\no-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-rpc.c(850,53): warning C4267: '=': conversion from 'size_t'\n to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-\ntest\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-rpc.c(884,64): warning C4267: '=': conversion from 'size_t'\n to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-\ncxx-test\\libmongoc.vcxproj]\n\tmongoc-server-description.c\n\tmongoc-server-stream.c\n\tmongoc-client-session.c\n\tmongoc-server-monitor.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(242,56): warning C4244: '=': conversion fr\nom 'int64_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj\n] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(256,31): warning C4267: '=': conversion fr\nom 'size_t' to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\n\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(262,71): warning C4244: 'function': conver\nsion from 'int64_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.\nvcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(270,78): warning C4244: 'function': conver\nsion from 'int64_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.\nvcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(281,78): warning C4244: 'function': conver\nsion from 'int64_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.\nvcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(346,56): warning C4244: '=': conversion fr\nom 'int64_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj\n] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(358,31): warning C4267: '=': conversion fr\nom 'size_t' to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\n\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(368,71): warning C4244: 'function': conver\nsion from 'int64_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.\nvcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(548,67): warning C4244: 'function': conver\nsion from 'int64_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.\nvcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(609,73): warning C4244: 'function': conver\nsion 
from 'uint64_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared\n.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(1079,22): warning C4033: '_server_monitor_\nthread' must return a value [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\\nlibmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(1155,22): warning C4033: '_server_monitor_\nrtt_thread' must return a value [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-t\nest\\libmongoc.vcxproj]\n\tmongoc-set.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-set.c(91,27): warning C4244: '=': conversion from '__int64'\n to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-\ntest\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-set.c(201,17): warning C4267: 'function': conversion from '\nsize_t' to 'uint32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C\n:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tGenerating Code...\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(1156): warning C4716: '_server_monitor_rtt\n_thread': must return a value [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-tes\nt\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c(1080): warning C4716: '_server_monitor_thr\nead': must return a value [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\li\nbmongoc.vcxproj]\n\tCompiling...\n\tmongoc-socket.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-socket.c(1219,62): warning C4267: 'function': conversion fr\nom 'size_t' to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\n\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-socket.c(1286,39): warning C4267: 'function': conversion fr\nom 'size_t' to 'DWORD', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [\nC:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-stream-buffered.c\n\tmongoc-stream.c\n\tmongoc-stream-file.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-stream-file.c(133,63): warning C4267: 'function': conversio\nn from 'size_t' to 'unsigned int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared\n.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-stream-file.c(177,65): warning C4267: 'function': conversio\nn from 'size_t' to 'unsigned int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared\n.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-stream-gridfs.c\n\tmongoc-stream-gridfs-download.c\n\tmongoc-stream-gridfs-upload.c\n\tmongoc-stream-socket.c\n\tmongoc-topology.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-topology.c(254,62): warning C4244: 'function': conversion f\nrom 'int64_t' to 'int32_t', possible loss of data 
[C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxpro\nj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-topology-background-monitoring.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-topology-background-monitoring.c(77,22): warning C4033: 'sr\nv_polling_run' must return a value [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cx\nx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-topology-background-monitoring.c(184,18): warning C4018: '<\n': signed/unsigned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\\nlibmongoc.vcxproj]\n\tmongoc-topology-description.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-topology-description.c(729,65): warning C4018: '<=': signed\n/unsigned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc\n.vcxproj]\n\tmongoc-topology-description-apm.c\n\tmongoc-topology-scanner.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-topology-scanner.c(241,76): warning C4267: '=': conversion\nfrom 'size_t' to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [\nC:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-uri.c\n\tmongoc-util.c\n\tmongoc-version-functions.c\n\tmongoc-write-command.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-write-command.c(423,34): warning C4018: '>': signed/unsigne\nd mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj\n]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-write-command.c(426,35): warning C4018: '>=': signed/unsign\ned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxpro\nj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-write-command.c(543,54): warning C4018: '<=': signed/unsign\ned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxpro\nj]\n\tmongoc-write-command-legacy.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-write-command-legacy.c(176,15): warning C4018: '>': signed/\nunsigned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.\nvcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-write-command-legacy.c(295,21): warning C4018: '>': signed/\nunsigned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.\nvcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-write-command-legacy.c(462,21): warning C4018: '>': signed/\nunsigned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.\nvcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-write-command-legacy.c(475,21): warning C4018: '>': signed/\nunsigned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.\nvcxproj]\n\tmongoc-write-concern.c\n\tcommon-b64.c\n\tGenerating Code...\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-topology-background-monitoring.c(78): warning C4716: 'srv_p\nolling_run': 
must return a value [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-\ntest\\libmongoc.vcxproj]\n\tCompiling...\n\tcommon-md5.c\n\tcommon-thread.c\n\tmongoc-crypto.c\n\tmongoc-scram.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-scram.c(435,18): warning C4018: '<=': signed/unsigned misma\ntch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-scram.c(673,18): warning C4018: '<': signed/unsigned mismat\nch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-scram.c(829,59): warning C4267: 'function': conversion from\n 'size_t' to 'int', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\m\nongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-stream-tls.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-stream-tls.c(121,53): warning C4244: '=': conversion from '\nint64_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C\n:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-ssl.c\n\tmongoc-crypto-cng.c\n\tmongoc-rand-cng.c\n\tmongoc-stream-tls-secure-channel.c\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-stream-tls-secure-channel.c(386,37): warning C4018: '>': si\ngned/unsigned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmo\nngoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-stream-tls-secure-channel.c(394,21): warning C4018: '<': si\ngned/unsigned mismatch [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_shared.vcxproj] [C:\\mongo-cxx-test\\libmo\nngoc.vcxproj]\nC:\\mongo-cxx-test\\libmongoc\\src\\libmongoc\\src\\mongoc\\mongoc-stream-tls-secure-channel.c(752,58): warning C4244: '=': co\nnversion from 'int64_t' to 'int32_t', possible loss of data [C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\mongoc_sha\nred.vcxproj] [C:\\mongo-cxx-test\\libmongoc.vcxproj]\n\tmongoc-secure-channel.c\n\tmongoc-sasl.c\n\tmongoc-cluster-sspi.c\n\tmongoc-sspi.c\n\tGenerating Code...\n\t Creating library C:/mongo-cxx-test/libmongoc-build/src/libmongoc/Debug/mongoc-1.0.lib and object C:/mongo-cxx-te\n st/libmongoc-build/src/libmongoc/Debug/mongoc-1.0.exp\n\tmongoc_shared.vcxproj -> C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\Debug\\mongoc-1.0.dll\n\tBuilding Custom Rule C:/mongo-cxx-test/libmongoc/src/libmongoc/CMakeLists.txt\n\tmongoc-stat.c\n\tmongoc-stat.vcxproj -> C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\Debug\\mongoc-stat.exe\n\tBuilding Custom Rule C:/mongo-cxx-test/libmongoc/CMakeLists.txt\n Performing install step for 'libmongoc'\n Microsoft (R) Build Engine version 16.4.0+e901037fe for .NET Framework\n Copyright (C) Microsoft Corporation. 
All rights reserved.\n\n\tbson_shared.vcxproj -> C:\\mongo-cxx-test\\libmongoc-build\\src\\libbson\\Debug\\bson-1.0.dll\n\tmongoc_shared.vcxproj -> C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\Debug\\mongoc-1.0.dll\n\tmongoc-stat.vcxproj -> C:\\mongo-cxx-test\\libmongoc-build\\src\\libmongoc\\Debug\\mongoc-stat.exe\n\t-- Install configuration: \"Debug\"\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140.dll\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_1.dll\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_2.dll\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_codecvt_ids.dll\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/bin/vcruntime140_1.dll\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/bin/vcruntime140.dll\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/bin/concrt140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/share/mongo-c-driver/COPYING\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/share/mongo-c-driver/NEWS\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/share/mongo-c-driver/README.rst\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/share/mongo-c-driver/THIRD_PARTY_NOTICES\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_1.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_2.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_codecvt_ids.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/vcruntime140_1.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/vcruntime140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/concrt140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_1.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_2.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_codecvt_ids.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/vcruntime140_1.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/vcruntime140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/concrt140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/bson-1.0.lib\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/bin/bson-1.0.dll\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-config.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-version.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bcon.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-atomic.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-clock.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-compat.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-context.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-decimal128.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-endian.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-error.h\n\t-- Installing: 
C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-iter.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-json.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-keys.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-macros.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-md5.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-memory.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-oid.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-prelude.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-reader.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-string.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-types.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-utf8.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-value.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-version-functions.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson/bson-writer.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libbson-1.0/bson.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/pkgconfig/libbson-1.0.pc\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/bson-1.0/bson-targets.cmake\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/bson-1.0/bson-targets-debug.cmake\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/bson-1.0/bson-1.0-config.cmake\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/bson-1.0/bson-1.0-config-version.cmake\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/libbson-1.0/libbson-1.0-config.cmake\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/libbson-1.0/libbson-1.0-config-version.cmake\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_1.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_2.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_codecvt_ids.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/vcruntime140_1.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/vcruntime140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/concrt140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_1.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_2.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/msvcp140_codecvt_ids.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/vcruntime140_1.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/vcruntime140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin/concrt140.dll\n\t-- Up-to-date: C:/mongo-cxx-test/libmongoc-install/bin\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/mongoc-1.0.lib\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/bin/mongoc-1.0.dll\n\t-- Installing: 
C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-config.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-version.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-apm.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-bulk-operation.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-change-stream.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-client.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-client-pool.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-client-side-encryption.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-collection.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-cursor.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-database.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-error.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-flags.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-find-and-modify.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-gridfs.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-gridfs-bucket.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-gridfs-file.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-gridfs-file-page.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-gridfs-file-list.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-handshake.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-host-list.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-init.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-index.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-iovec.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-log.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-macros.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-matcher.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-opcode.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-prelude.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-read-concern.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-read-prefs.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-server-description.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-client-session.h\n\t-- Installing: 
C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-socket.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-stream-tls-libressl.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-stream-tls-openssl.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-stream.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-stream-buffered.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-stream-file.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-stream-gridfs.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-stream-socket.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-topology-description.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-uri.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-version-functions.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-write-concern.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-rand.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-stream-tls.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc/mongoc-ssl.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/include/libmongoc-1.0/mongoc.h\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/pkgconfig/libmongoc-1.0.pc\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/pkgconfig/libmongoc-ssl-1.0.pc\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/mongoc-1.0/mongoc-targets.cmake\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/mongoc-1.0/mongoc-targets-debug.cmake\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/mongoc-1.0/mongoc-1.0-config.cmake\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/mongoc-1.0/mongoc-1.0-config-version.cmake\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/libmongoc-1.0/libmongoc-1.0-config.cmake\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/lib/cmake/libmongoc-1.0/libmongoc-1.0-config-version.cmake\n\t-- Installing: C:/mongo-cxx-test/libmongoc-install/share/mongo-c-driver/uninstall.cmd\n Completed 'libmongoc'\n Creating directories for 'libmongocxx'\n Building Custom Rule C:/mongo-cxx-test/CMakeLists.txt\n Performing download step (git clone) for 'libmongocxx'\n Cloning into 'libmongocxx'...\n remote: Enumerating objects: 26884, done.\n remote: Counting objects: 0% (1/26884)\n ...\n remote: Counting objects: 100% (26884/26884), done.\n remote: Compressing objects: 0% (1/4136)\n ...\n remote: Compressing objects: 100% (4136/4136), done.\n Receiving objects: 0% (1/26884)\n ...\n Receiving objects: 99% (26616/26884), 8.86 MiB | 1.60 MiB/s\n remote: Total 26884 (delta 24123), reused 24493 (delta 22592), pack-reused 0\n Receiving objects: 100% (26884/26884), 9.93 MiB | 1.36 MiB/s, done.\n Resolving deltas: 0% (0/24123)\n ...\n Resolving deltas: 100% (24123/24123), done.\n Branch 'releases/v3.6' set up to track remote branch 'releases/v3.6' from 'origin'.\n Switched to a new branch 'releases/v3.6'\n Performing update step for 'libmongocxx'\n No patch step for 'libmongocxx'\n Performing configure step for 
'libmongocxx'\n loading initial cache file C:/mongo-cxx-test/libmongocxx-prefix/tmp/libmongocxx-cache-Debug.cmake\n -- Selecting Windows SDK version 10.0.18362.0 to target Windows 10.0.16299.\n -- The CXX compiler identification is MSVC 19.24.28316.0\n -- Detecting CXX compiler ABI info\n -- Detecting CXX compiler ABI info - done\n -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Enterprise/VC/Tools/MSVC/14.24\n .28314/bin/Hostx64/x64/cl.exe - skipped\n -- Detecting CXX compile features\n -- Detecting CXX compile features - done\n -- Found PythonInterp: E:/dev/Tools/Python3/python.exe (found version \"3.7\")\n -- No build type selected, default is Release\n -- The C compiler identification is MSVC 19.24.28316.0\n -- Detecting C compiler ABI info\n -- Detecting C compiler ABI info - done\n -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Enterprise/VC/Tools/MSVC/14.24.2\n 8314/bin/Hostx64/x64/cl.exe - skipped\n -- Detecting C compile features\n -- Detecting C compile features - done\n bsoncxx version: 0.0.0\n CMake Warning at C:/mongo-cxx-test/libmongoc-install/lib/cmake/libbson-1.0/libbson-1.0-config.cmake:15 (message):\n\tThis CMake target is deprecated. Use 'mongo::bson_shared' instead.\n\tConsult the example projects for further details.\n Call Stack (most recent call first):\n\tsrc/bsoncxx/CMakeLists.txt:98 (find_package)", "text": "part 2:", "username": "Thomas_Morten" }, { "code": " found libbson version 1.17.0\n -- Performing Test COMPILER_HAS_DEPRECATED_ATTR\n -- Performing Test COMPILER_HAS_DEPRECATED_ATTR - Failed\n mongocxx version: 0.0.0\n CMake Warning at C:/mongo-cxx-test/libmongoc-install/lib/cmake/libmongoc-1.0/libmongoc-1.0-config.cmake:15 (message):\n\tThis CMake target is deprecated. Use 'mongo::mongoc_shared' instead.\n\tConsult the example projects for further details.\n Call Stack (most recent call first):\n\tsrc/mongocxx/CMakeLists.txt:54 (find_package)\n\n\n CMake Warning at C:/mongo-cxx-test/libmongoc-install/lib/cmake/libbson-1.0/libbson-1.0-config.cmake:15 (message):\n\tThis CMake target is deprecated. Use 'mongo::bson_shared' instead.\n\tConsult the example projects for further details.\n Call Stack (most recent call first):\n\tC:/mongo-cxx-test/libmongoc-install/lib/cmake/libmongoc-1.0/libmongoc-1.0-config.cmake:22 (find_package)\n\tsrc/mongocxx/CMakeLists.txt:54 (find_package)\n\n\n found libmongoc version 1.17.0\n -- Looking for C++ include pthread.h\n -- Looking for C++ include pthread.h - not found\n -- Found Threads: TRUE\n -- Build files generated for:\n -- build system: Visual Studio 16 2019\n -- instance: C:/Program Files (x86)/Microsoft Visual Studio/2019/Enterprise\n -- Configuring done\n -- Generating done\n -- Build files have been written to: C:/mongo-cxx-test/libmongocxx-build\n Performing build step for 'libmongocxx'\n Microsoft (R) Build Engine version 16.4.0+e901037fe for .NET Framework\n Copyright (C) Microsoft Corporation. All rights reserved.\n\n\tChecking Build System\n\tCreating directories for 'EP_mnmlstc_core'\n\tBuilding Custom Rule C:/mongo-cxx-test/libmongocxx/src/bsoncxx/third_party/CMakeLists.txt\n\tPerforming download step (git clone) for 'EP_mnmlstc_core'\n\t-- EP_mnmlstc_core download command succeeded. 
See also C:/mongo-cxx-test/libmongocxx-build/src/bsoncxx/third_part\n y/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-download-*.log\n\tNo update step for 'EP_mnmlstc_core'\n\tNo patch step for 'EP_mnmlstc_core'\n\tPerforming configure step for 'EP_mnmlstc_core'\n\t-- EP_mnmlstc_core configure command succeeded. See also C:/mongo-cxx-test/libmongocxx-build/src/bsoncxx/third_par\n ty/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-configure-*.log\n\tPerforming build step for 'EP_mnmlstc_core'\n\t-- EP_mnmlstc_core build command succeeded. See also C:/mongo-cxx-test/libmongocxx-build/src/bsoncxx/third_party/E\n P_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-build-*.log\n\tPerforming install step for 'EP_mnmlstc_core'\n\t-- EP_mnmlstc_core install command succeeded. See also C:/mongo-cxx-test/libmongocxx-build/src/bsoncxx/third_party\n /EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-install-*.log\n\tPerforming fix-includes step for 'EP_mnmlstc_core'\n\t'xargs' is not recognized as an internal or external command,\n\toperable program or batch file.\nC:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Enterprise\\MSBuild\\Microsoft\\VC\\v160\\Microsoft.CppCommon.targets(23\n1,5): error MSB6006: \"cmd.exe\" exited with code 255. [C:\\mongo-cxx-test\\libmongocxx-build\\src\\bsoncxx\\third_party\\EP_mn\nmlstc_core.vcxproj] [C:\\mongo-cxx-test\\libmongocxx.vcxproj]", "text": "part 3:", "username": "Thomas_Morten" }, { "code": "", "text": "This is rather strange. The error output you posted last does not lead to the initial error you described. You probably cannot use the MNMLSTC/core polyfill since your platform is Windows. According to Step 2 in the installation documentation, you will need to use Boost on Windows. Using MNMLSTC/core is leading to the xargs error, since it is pulling in an external project that assumes a non-Windows platform. Could you go back to the earlier configuration that produced the initial error and post the complete output from that build?", "username": "Roberto_Sanchez" }, { "code": "", "text": "Ok so going back to my original methodology:Install Mongo-C-Driver usinggit clone GitHub - mongodb/mongo-c-driver: The Official MongoDB driver for C language\ncd mongo-c-driver\ngit checkout 1.17.0 # To build a particular release\npython build/calc_release_version.py > VERSION_CURRENT\nmkdir cmake-build\ncd cmake-build\ncmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF …I then open the project files into Visual Studio and build INSTALL in there.OMG ITS WORKING\nI was using the wrong dir for DCMAKE_PREFIX_PATH, it was installing to program files!!! will continue with how i did it just for othersThen for mongocxx:Downloaded 3.6.0 from the downloads:\ncalled the follwoing:cmake … -DBOOST_ROOT=E:\\dev\\git\\externalapi\\boost -DCMAKE_PREFIX_PATH=“C:\\Program Files (x86)\\mongo-c-driver” -DCMAKE_INSTALL_PREFIX=“C:\\Program Files (x86)\\mongo-cxx-driver” -DBUILD_VERSION=3.6.0 -DBoost_INCLUDE_DIR=E:\\dev\\git\\externalapi\\boost\\1.68then built in the visual studio projects thanks for the help!!", "username": "Thomas_Morten" }, { "code": "", "text": "Cool. I’m really glad it works now.", "username": "Roberto_Sanchez" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo-cxx Installation Problems Windows
2020-09-03T01:46:45.851Z
Mongo-cxx Installation Problems Windows
6,323
null
[ "motor-driver" ]
[ { "code": "", "text": "Hello, everyone!\nI’m trying to find ways of patching motor client for testing purposes. I want to mock database to test code which make database requests.\nI was wondering If anyone has some links, tips or best practices. Unfortunately can’t find anything myself", "username": "Platon_Dmitriev" }, { "code": "", "text": "Unfortunately, I am unable to point you to any examples of mocking Motor’s database objects. Our test suite almost always uses a live Database object. If you are just trying to test if your application code correctly uses the DB object, then you can use unittest’s Mock/MagicMock (unittest.mock — mock object library — Python 3.11.2 documentation) class to replace the APIs you are interested in and run assertions on their usage.", "username": "Prashant_Mital" } ]
Mocking motor client in tests
2020-08-31T11:25:15.333Z
Mocking motor client in tests
6,339
https://www.mongodb.com/…a_2_576x1024.png
[ "kotlin" ]
[ { "code": "", "text": "I’m implementing the Realm(v6.0.4) in React-Native(v0.58) for both platforms iOS/Android(Kotlin).\nIt’s working well in iOS but It’s showing error in Android. I used official Realm Doc https://realm.io/docs/javascript/latest/#getting-started… to implement it.\nWhen I’m not using (debug console) in emulator, it’s showing error (attached below).\n\nerror1440×2560 195 KB\nif i use (debug console) in emulator then \nScreenshot 2020-08-27 at 4.00.40 PM2022×976 504 KB\n I can see It’s writing files (Screenshot attached) but it doesn’t respond after that, App gets freeze.Environment:\nAndroid Studio: v3.5.2\nOS: MacOS Catalina\nReact Native: v0.58\nCode Base: Kotlin", "username": "Sunder_Singh" }, { "code": "", "text": "Thanks Sundar for posting this here as well after your tweets - let me see if I can get one of the Realm Engineers to look at this for you.", "username": "Shane_McAllister" }, { "code": "", "text": "Thanks for your acknowledgement,", "username": "Sunder_Singh" }, { "code": "", "text": "Did you try the newer docs Sunder? Here - https://docs.mongodb.com/realm/react-native/install/ ?", "username": "Shane_McAllister" } ]
Realm implementation issue with React Native Android Kotlin
2020-08-27T11:52:58.998Z
Realm implementation issue with React Native Android Kotlin
2,136
null
[ "graphql" ]
[ { "code": "Uncaught (in promise) Error: reason=\"no matching role found for document with _id: ObjectID(\\\"5f351ffd4a3ebb5bd166ae4f\\\")\"; code=\"NoMatchingRuleFound\"; untrusted=\"insert not permitted\"; details=map[]\n", "text": "I use MongoDB Realm and the GraphQL API I get from realm to do crud stuff in my react app, I created a schema and set up everything on the Realm side, then I implemented login in my React app and logged in as a user I created but when I try to add new data to my collection I get the errorCan anyone help me debug this please, I already have the standard role, and when I created the Schema I selected “User can only read & write own data”", "username": "Ivan_Jeremic" }, { "code": "", "text": "Hey @Ivan_Jeremic, there might be an issue with the way you’ve set up roles for your application and for the user. Do you mind describing how you’ve set up your rules + permissions or messaging me your application URL so I can take a look.", "username": "Sumedha_Mehta1" } ]
Error adding data to collection via react app (GraphQL)
2020-08-13T09:19:28.532Z
Error adding data to collection via react app (GraphQL)
2,580
null
[]
[ { "code": "curl --user \"<public key>:<private key>\" --digest \\\n --header \"Accept: application/json\" \\\n --header \"Content-Type: application/json\" \\\n --request POST \"https://cloud.mongodb.com/api/atlas/v1.0/groups/< C 's project id>/clusters/<A's cluster name>/restoreJobs?pretty=true\" \\\n --data '\n {\n \"delivery\" : {\n \"methodName\" : \"AUTOMATED_RESTORE\",\n \"targetGroupId\" : \"< C 's project id>\",\n \"targetClusterId\" : \"<B's cluster name>\"\n },\n \"snapshotId\": \"XXXXX\"\n }'\n{\n \"detail\" : \"Received JSON for the delivery attribute does not match expected format.\",\n \"error\" : 400,\n \"errorCode\" : \"INVALID_JSON_ATTRIBUTE\",\n \"parameters\" : [ \"delivery\" ],\n \"reason\" : \"Bad Request\"\n}%\n", "text": "I have two clusters A and B in project C. A uses legacy backup and B uses cloud backup.I am trying to use mongodb atlas API to create a restore job for A to restore to B. My cli command look like this:I got error like this:this is the reference: https://docs.atlas.mongodb.com/reference/api/legacy-backup/restore/create-one-restore-job/can anyone help with this command? what is incorrect in it?more details:", "username": "Jennifer_Fan" }, { "code": "", "text": "Hi… it looks like you are using targetClusterId which is expecting the ID of the Cluster, you are using the ClusterName. If you want to use the cluster name, you would specify targetClusterName instead.", "username": "bencefalo" } ]
MongoDB Atlas API: Create Legacy Backup Restore Job
2020-08-28T02:07:19.615Z
MongoDB Atlas API: Create Legacy Backup Restore Job
2,101
null
[ "c-driver" ]
[ { "code": "", "text": "Hi,\nIs there a way to grab the name as well as the type of collections existing in a database using the mongo-c-driver? I see that we have a function – mongoc_database_get_collection_names_with_opts() — libmongoc 1.23.2 – that provides the collection names, wanted to see if there is a way through the mongo-c-driver to also grab the collection type, i.e. whether it is a view or just a collection.Currently, the only way this seems possible is by sending a raw command to the server to list collections as obtained using mongoc_client_command_with_opts() — libmongoc 1.23.2.Thanks,\nNachi", "username": "Nachi" }, { "code": "", "text": "Hello @Nachi, mongoc_database_find_collection_with_opts may be what you want. That runs the listCollections command underneath, and returns documents describing the collection, which include a “type” field.", "username": "Kevin_Albertson" } ]
Fetch name and type of collections in a db
2020-08-17T20:32:49.360Z
Fetch name and type of collections in a db
2,192
null
[ "aggregation" ]
[ { "code": "{\n _id: ...,\n index: 1,\n name: \"Mike\",\n companyIndex: 1\n},\n{\n _id: ...,\n index: 2,\n name: \"John\",\n companyIndex: 1\n},\n{\n _id: ...,\n index: 3,\n name: \"Jim\",\n companyIndex: 2\n}\n{\n _id: ...,\n index: 1,\n name: \"Company A\"\n},\n{\n _id: ...,\n index: 2,\n name: \"Company B\"\n}\ndata_Employee = await model_Employee.find({ companyIndex: queryingCompanyIndex }) \n\t\t\t\t\t\ndata_Company = await model_Company.aggregate([ \n\t{ \n\t\t$match: {\n\t\t\tindex: queryingCompanyIndex\n\t\t} \n\t},\n\t{ \n\t\t$addFields: { \n\t\t\t\"employees\": data_Employee\n\t\t}\n\t}\n]);\n", "text": "Sorry for the vague question. Allow me to elaborate:I have the following data:Collection Employee got 3 Documents:Collection Company got 2 Documents:Now, I am using the code below to add a field of employees into a company upon querying:This works for one company, but when I query for 2 companies, employees of both companies show up in both companies. Is there some sort of aggregation manipulation that could do what I want? For each company in the query, it should have all employees that has the companyIndex matching its index.Thank you.", "username": "iono_sphere" }, { "code": "mongodb.company.aggregate([\n{\n $lookup:\n {\n from: \"employee\",\n localField: \"index\",\n foreignField: \"companyIndex\",\n as: \"employees\"\n }\n}\n])\n{\n \"_id\" : ObjectId(\"5f50e903501145bdc9363576\"),\n \"index\" : 1,\n \"name\" : \"Company A\",\n \"employees\" : [\n {\n \"_id\" : ObjectId(\"5f50e8d6501145bdc9363573\"),\n \"index\" : 1,\n \"name\" : \"Mike\",\n \"companyIndex\" : 1\n },\n {\n \"_id\" : ObjectId(\"5f50e8d6501145bdc9363574\"),\n \"index\" : 2,\n \"name\" : \"John\",\n \"companyIndex\" : 1\n }\n ]\n}\n{\n \"_id\" : ObjectId(\"5f50e903501145bdc9363577\"),\n \"index\" : 2,\n \"name\" : \"Company B\",\n \"employees\" : [\n {\n \"_id\" : ObjectId(\"5f50e8d6501145bdc9363575\"),\n \"index\" : 3,\n \"name\" : \"Jim\",\n \"companyIndex\" : 2\n }\n ]\n}\n", "text": "You can use the aggregation $lookup to match the companies with corresponding employees and get the desired result. This is also referred as “joining” the collection data. The following query runs from mongo shell:The output:", "username": "Prasad_Saya" } ]
Is it possible for aggregate to $addFields with condition?
2020-09-03T10:23:32.693Z
Is it possible for aggregate to $addFields with condition?
7,729
null
[ "python" ]
[ { "code": "results = response.json() ['data']\nfinal = [ ]\nnew_data = results.copy()\nfor i in enumerate(results):\n try:\n new_data[0]['date'] = datetime.strptime(i[1]['date'],'%m/%d/%Y')\n new_data[0]['amount'] = float(i[1]['amount'])\n new_data[0]['quantity'] = int(float(i[1]['quantity']))\n new_data[0]['price'] = float(i[1]['price'])\n final.append(new_data[0])\n except ValueError as e:\n print(e.details)\n raise\n\n#do **bulk** insert\ndb.insert_many(final)\n{'writeErrors': [{'index': 1, 'code': 11000, 'errmsg': \"E11000 duplicate key error collection: testDB.testCollection index: _id_ dup key: { : ObjectId('5f4f638ace84c97b4afe81cf') }\",>>results==final\nFalse\n>>print(results) ##before datatype conversion\n[{'name':'xyz','desc':'abcdefghijk','date': '9/1/2020', 'amount':'100','quantity':'2','price':'50'}]\n\n>>print(final) ##after datatype conversion\n[{'name':'xyz','desc':'abcdefghijk','date': datetime.datetime(2020, 9, 01, 0, 0), 'amount':100.0,'quantity':2,'price':50.0}]\n", "text": "Hello, any suggestion on the best/recommended approach of doing explicit datatype conversion from Json string response (list) to MongoDB datatype ? Similar question here:- PyMongo - Python List to MongoDB datatype conversion - Stack Overflow. The api response returns json string object as a list.Have followed this approach of converting only specific list of fields that need type cast on a copy of the list:However, I am getting the duplicate key error:{'writeErrors': [{'index': 1, 'code': 11000, 'errmsg': \"E11000 duplicate key error collection: testDB.testCollection index: _id_ dup key: { : ObjectId('5f4f638ace84c97b4afe81cf') }\",The length(# of records) are same in both the lists results (original) and final (post type conversion), but the list itself is not identical - may be now due to datatype conversion.I am evaluating Mongoose like PyMongo ODM wrapper called PyMODM for handling such type conversion, however doing it Pythonic way sounds more efficient - if it’s doable.Just my two cents - regarding usage of any ORM/ODM, since MongoDB is schema-on-read; enforcing a schema/model even before loading into database collection defies the purpose of it being schema-less. 
Datatype conversion should still be ok for some fields doing it explicitly during pre-processing and before-loading to Mongo database.Please recommend / suggest a solution.Thanks!", "username": "mani_k" }, { "code": "_idObjectId# Insert a dummy document:\nresult = temp_collection.insert_one({ 'name': \"Old value\" })\n\n# Get the id for the document just inserted:\nid_just_inserted = result.inserted_id\nprint(id_just_inserted) # 5f50cbe859b718350b75bda2\n\ntry:\n # Re-use the same `_id` value when inserting a new document:\n temp_collection.insert_one({ '_id': bson.ObjectId(str(id_just_inserted)), 'name': \"New value\" })\nexcept Exception as e:\n print(e)\n E11000 duplicate key error collection: schema_test.temp index: _id_ dup key: { _id: ObjectId('5f50cbe859b718350b75bda2') }, full error: {'index': 0, 'code': 11000, 'keyPattern': {'_id': 1}, 'keyValue': {'_id': ObjectId('5f50cbe859b718350b75bda2')}, 'errmsg': \"E11000 duplicate key error collection: schema_test.temp index: _id_ dup key: { _id: ObjectId('5f50cbe859b718350b75bda2') }\"}\nWriteError: Document failed validation, full error: {'index': 0, 'code': 121, 'errmsg': 'Document failed validation'}\n_idinsert_manyreplace_onereplace_oneupsertTrue", "text": "Hi @mani_k - thanks for your question!If I understand what you’re doing correctly, you’re reading a bunch of documents from an api, fixing up the types of some of the fields, and then inserting them into your MongoDB collection.Your problem here isn’t anything to do with the data-type conversions, it’s the fact that you’re attempting to insert new records into your collection with the same _id as existing documents in your collection. I’m not sure where the id is coming from though, as I’m assuming your API doesn’t provide ObjectId values!Here’s a simplified example, showing that I get the same error:I get the following error, which is very similar to yours:Your problem definitely isn’t a schema validation issue - if it was, you’d see an error like this:If you want to insert new documents, not update existing documents in your collection, you should remove the _id from each of your documents before running the insert_many command. If you would rather update the existing documents in your collection, I’d recommend you use replace_one in a loop.If you want to update existing documents and insert a new document when the _id already exists in the collection, then you can use replace_one with the upsert parameter set to TrueI hope this helps! Let me know if you have any more questions.", "username": "Mark_Smith" }, { "code": "", "text": "Thanks Mark - After your post, I revisited the code and realized that somehow my Python list itself was generating a duplicate row/tuple/list due to which Mongo was generating same _id’s. After fixing the duplicates, it now works fine. Thank you for the great explanation.", "username": "mani_k" } ]
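Building on the resolution above, here is a hedged sketch of the conversion loop that builds a new dict per record instead of reusing new_data[0] for every element; reusing one dict can leave the same object (and therefore the same generated _id) in the list repeatedly, which produces the E11000 symptom described. Field names (date, amount, quantity, price) and the db/collection object are the ones from the original snippet.

from datetime import datetime

final = []
for doc in results:                      # results = response.json()['data'] as in the thread
    converted = dict(doc)                # fresh dict per record, no shared state between rows
    converted['date'] = datetime.strptime(doc['date'], '%m/%d/%Y')
    converted['amount'] = float(doc['amount'])
    converted['quantity'] = int(float(doc['quantity']))
    converted['price'] = float(doc['price'])
    final.append(converted)

db.insert_many(final)                    # db is the collection object used in the thread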
PyMongo Datatype Conversion
2020-09-02T11:59:23.123Z
PyMongo Datatype Conversion
4,982
null
[ "sharding" ]
[ { "code": "db.adminCommand({ listShards: 1 })\n{\n\t\t\t\"_id\" : \"shard0\",\n\t\t\t\"host\" : \"shard0/centos661:27119,centos7:27117,centos72:27118\",\n\t\t\t\"state\" : 1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : \"shard1\",\n\t\t\t\"host\" : \"shard1/centos661:27129,centos7:27127,centos72:27128\",\n\t\t\t\"state\" : 1\n\t\t}\nsh.status()\n{ \"_id\" : \"test\", \"primary\" : \"shard1\", \"partitioned\" : true, \"version\" : { \"uuid\" : UUID(\"e56c2c42-127f-40a1-ae22-1981ea4eaf43\"), \"lastMod\" : 1 } }\n test.colshard\n shard key: { \"a\" : 1, \"b\" : 1 }\n unique: false\n balancing: true\n chunks:\n shard1\t7\n { \"a\" : { \"$minKey\" : 1 }, \"b\" : { \"$minKey\" : 1 } } -->> { \"a\" : 2, \"b\" : 2 } on : shard1 Timestamp(1, 1) \n { \"a\" : 2, \"b\" : 2 } -->> { \"a\" : 29128, \"b\" : 29128 } on : shard1 Timestamp(1, 2) \n { \"a\" : 29128, \"b\" : 29128 } -->> { \"a\" : 43691, \"b\" : 43691 } on : shard1 Timestamp(1, 4) \n { \"a\" : 43691, \"b\" : 43691 } -->> { \"a\" : 60002, \"b\" : 60002 } on : shard1 Timestamp(1, 5) \n { \"a\" : 60002, \"b\" : 60002 } -->> { \"a\" : 74565, \"b\" : 74565 } on : shard1 Timestamp(1, 7) \n { \"a\" : 74565, \"b\" : 74565 } -->> { \"a\" : 95002, \"b\" : 95002 } on : shard1 Timestamp(1, 8) \n { \"a\" : 95002, \"b\" : 95002 } -->> { \"a\" : { \"$maxKey\" : 1 }, \"b\" : { \"$maxKey\" : 1 } } on : shard1 Timestamp(1, 9) \n\"ok\" : 0,\n\"errmsg\" : \"Data transfer error: migrate failed: InvalidUUID: Cannot create collection test.colshard because we already have an identically named collection with UUID f2ea3a92-4e1f-4c99-bd6a-7d2761d7f4c7, which differs from the donor's UUID 0135eb74-74e1-4d87-a5ca-eb3febe6c9c0. Manually drop the collection on this shard if it contains data from a previous incarnation of test.colshard\",\n\"code\" : 96,\n", "text": "Hi, i have an issue trying to move a chunk from one shard to another, the shard key is compound, next the info:i want to move some of them from shard1 to shard0, i execute:mongos> sh.moveChunk(“test.colshard”, { “a” : 29128, “b” : 43691 } , “shard0” )…what is wrong?, thanks for you help", "username": "Willy_Latorre" }, { "code": "", "text": "This doesn’t have anything to do with compound shard key but rather with the fact that you already have some artifacts of collection with the same name on the other shard.What’s the history of this cluster? Any idea why such a collection would have existed on the other shard?Asya", "username": "Asya_Kamsky" } ]
moveChunk with compound shard key
2020-09-02T17:21:30.152Z
moveChunk with compound shard key
1,768
null
[]
[ { "code": "", "text": "Hi, i dropped a shared collection, but the metadata continue appears when execute sh.status(), how to delete that\nthanks", "username": "Willy_Latorre" }, { "code": "", "text": "Hi @Willy_Latorre,This is a known condition. Please follow SERVER-17397 to clean the metadata.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks Pavel, i have read the document and now all is fine", "username": "Willy_Latorre" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to drop Metadata
2020-09-03T08:55:33.799Z
How to drop Metadata
1,754
null
[ "python" ]
[ { "code": "options = dict(containing $jsonschema validator)\ndb.command(\"collMod\", \"collection_name\", options)\ndb.command(\"listCollections\")\n", "text": "I am trying to provide data validation on existing collection. For this, i am using “collMod” command in python as below:i am verifying whether this latest validator is updated or not in “collection_name” as below:But i am not able to see updated validator $jsonschema.Note: I am able to update using mongo shell. However, i have to do with python only.I require your help!!Best Regards,\nM Jagadeesh", "username": "Manepalli_Jagadeesh" }, { "code": "commandoptionsdb.command('collMod', 'a_simple_collection', validator={\n '$jsonSchema': {\n 'bsonType': \"object\",\n 'properties': {\n 'name': {\n 'bsonType': \"string\",\n },\n }\n }\n})\nvalidatorvalidator=dictoptions = {\n 'validator': {\n '$jsonSchema': {\n 'bsonType': \"object\",\n 'properties': {\n 'name': {\n 'bsonType': \"string\",\n },\n },\n },\n }\n}\n**db.command('collMod', 'a_simple_collection', **options)\n", "text": "Hi @Manepalli_Jagadeesh - good question!When calling the command function, additional arguments must be passed as keyword arguments. You’ve provided options as a positional argument, which doesn’t work.The following works for me:You can see that although the validator data is a dict, I’m providing it as the validator keyword arg with validator=.If you are building up your ‘collMod’ arguments in a dict , such as:You could then provide this to command with the ** syntax, which expands a dictionaries keys and values into keyword arguments, like this:(But I think providing your validator as a single keyword argument is a better idea.)I hope this helps. Let me know if you have any more questions!", "username": "Mark_Smith" }, { "code": "", "text": "Hi @Mark_Smith,\nThank you for clear explanation with provided example. Now, i am able to see modified validator $jsonschema.", "username": "Manepalli_Jagadeesh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
collMod validator usage in python
2020-09-02T04:10:20.208Z
collMod validator usage in python
3,518
https://www.mongodb.com/…8fe02757fd36.png
[]
[ { "code": "[ { a:[ ], b: [ ] }, { a:[ ], b: [ ] }, ]const GameLogSchema = new Schema({ \n _score: Number,\n _playerData: {\n x: Number,\n y: Number,\n },\n _zoneData: [{ _x: Number, _height: Number }],\n _pipeData: [{ _x: Number, _randomeHeightTop: Number }],\n _gap: Number,\n});\n\nconst PlayerSchema = new Schema({\n /* other fields */\n\n _gameLogs: {\n type: [[GameLogSchema]], \n },\n}); \n", "text": "I am trying to push something like [ { a:[ ], b: [ ] }, { a:[ ], b: [ ] }, ] inside the array field of the document.Heres what my mongoose schema looks like-This is what the data its supposed to deal with looks like -\nimage977×129 19.1 KBSpreading one of these objects -\n", "username": "Ari_Hansda" }, { "code": "myCollection has one document {\"myarray\":[1,2,3]}\nif i want to add an element at the end for example the array [{\"a\":[],\"b\":[]},{\"b\":[],\"c\":[]}]\nto get {\"myarray\":[1,2,3,[{\"a\":[],\"b\":[]},{\"b\":[],\"c\":[]}]]}\ni could update (i use pipeline in the update command => you need mongodb>=4.2)\n\n\n{\n \"update\": \"testcoll\",\n \"updates\": [\n {\n \"q\": {},\n \"u\": [\n {\n \"$addFields\": {\n \"myarray\": {\n \"$concatArrays\": [\n \"$myarray\",\n [\n [{\"a\":[],\"b\":[]},{\"b\":[],\"c\":[]}]\n ]\n ]\n }\n }\n }\n ],\n \"multi\": true\n }\n ]\n}\n[1,2,3] to [1,2,3,{\"a\":[],\"b\":[]},{\"b\":[],\"c\":[]}]\n[[{\"a\":[],\"b\":[]},{\"b\":[],\"c\":[]}]]\n[{\"a\":[],\"b\":[]},{\"b\":[],\"c\":[]}]\n\n", "text": "Hello : )The above is the database command,you can use your driver update() function,\nand take the pipeline only from the command(the “u” value)If you want to make something likeInstead of that in the above codeUse thisHope it helps", "username": "Takis" }, { "code": "[[{\"a\":[],\"b\":[]},{\"b\":[],\"c\":[]}]]\n[{\"a\":[],\"b\":[]},{\"b\":[],\"c\":[]}]\n\n", "text": "Instead of that in the above codeUse thisHope it helpsbut it has to be this way. Can you suggest a better schema for what I am trying to achieve.\nand what I am trying to achieve is insert an array in an array. And I don’t see any other way around it.Thank you.", "username": "Ari_Hansda" }, { "code": "", "text": "Hello : )\nAdding an array inside an array (here with $concatArrays the new array will go at the end),its easy to do,the above code,does that.\nIf you need more help maybe someone can help more about the schema.", "username": "Takis" } ]
How to $push an array consisting of objects having arrays, to an array
2020-09-02T14:50:13.034Z
How to $push an array consisting of objects having arrays, to an array
2,071
null
[ "node-js", "production" ]
[ { "code": "ConnectionkerberoscreateIndexcreateIndexcreateCollectioncreateCollectionCollectionstrictcreateCollection", "text": "The MongoDB Node.js team is pleased to announce version 3.6.1 of the driverA bug in introducing the new CMAP Connection prevented some users from properly\nauthenticating with the kerberos module.The logic for building the createIndex command was changed in v3.6.0 to use an allowlist\nrather than a blocklist, but omitted a number of index types in that list. This release\nreintroduces all supported index types to the allowlist.Since v3.6.0 createCollection will no longer returned a cached Collection instance if\na collection already exists in the database, rather it will return a server error stating that\nthe collection already exists. This is the same behavior provided by the strict option for\ncreateCollection, so that option has been removed from documentation.Reference: MongoDB Node.js Driver\nAPI: Index\nChangelog: node-mongodb-native/HISTORY.md at 3.6 · mongodb/node-mongodb-native · GitHubWe invite you to try the driver immediately, and report any issues to the NODE project.Thanks very much to all the community members who contributed to this release!", "username": "mbroadst" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Node.js Driver 3.6.1 Released
2020-09-02T13:10:52.344Z
MongoDB Node.js Driver 3.6.1 Released
3,326
null
[]
[ { "code": "", "text": "", "username": "Olusanya_Afolabi-Adewunmi" }, { "code": "", "text": "this is not the adress of your cluster. use the connection string provided by atlas.", "username": "steevej" }, { "code": "", "text": "Hi @Olusanya_Afolabi-Adewunmi,This is an example connection string. Please follow the steps mentioned in this lecture Chapter 2: Connecting to Your Sandbox Cluster from the mongo ShellLet us know know if you have any doubts.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "I can now connect.\nThank you so much", "username": "Olusanya_Afolabi-Adewunmi" }, { "code": "", "text": "I can now connect.\nThank you so much.", "username": "Olusanya_Afolabi-Adewunmi" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Unable to connect to my Sandbox cluster from Windows 10 command prompt
2020-09-02T05:21:02.385Z
Unable to connect to my Sandbox cluster from Windows 10 command prompt
1,434
null
[]
[ { "code": "{\n \"channel\" : {\n \"_id\" : \"Object ID \",\n \"name\" : \"switch\",\n \"formats\" : [ \n {\n \"_id\" : \"Object ID \",\n \"formatName\" : \"ISO8583-93\",\n \"description\" : \"ISO Format\",\n \"fields\" : [ \n {\n \"name\" : \"0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\",\n \"required\" : true\n }\n ],\n \"messages\" : [ \n { \"_id\" : \"Object ID \",\n \"name\" : \"balanceEnquiry\",\n \"alias\" : \"balanceEnquiry\",\n \"description\" : \"balanceEnquiry Request : Sender Bank -> MessageHub\",\n \"messageIdentification\" : \"\",\n \"messageType\" : \"\",\n \"messageFormat\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"DE_0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\"\n }, \n {\n \"name\" : \"DE_1\",\n \"alias\" : \"Primary Bitmap\",\n \"lenght\" : \"8\",\n \"description\" : \"Primary Bitmap\",\n \"type\" : \"BIN\",\n \"dataType\" : \"\"\n }\n ]\n }, \n { \"_id\" : \"Object ID \",\n \"name\" : \"fundTransfer\",\n \"alias\" : \"creditTransfer\",\n \"description\" : \"Funds Transfer Request : Sender Bank -> Message Hub\",\n \"messageIdentification\" : \"\",\n \"messageType\" : \"\",\n \"messageFormat\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"DE_0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\"\n }, \n {\n \"name\" : \"DE_1\",\n \"alias\" : \"Primary Bitmap\",\n \"lenght\" : \"8\",\n \"description\" : \"Primary Bitmap\",\n \"type\" : \"BIN\",\n \"dataType\" : \"\"\n }\n ]\n }\n ]\n }, \n { \"_id\" : \"Object ID \",\n \"formatName\" : \"ISO20022\",\n \"description\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\",\n \"required\" : true\n }, \n {\n \"name\" : \"1\",\n \"alias\" : \"Bitmap(s)\",\n \"lenght\" : \"8\",\n \"description\" : \"\",\n \"type\" : \"BIN\",\n \"dataType\" : \"\",\n \"required\" : true\n }\n ]\n }\n ]\n }\n}\n{ \n \"_id\" : \"Object ID \",\n \"name\" : \"balanceEnquiry\",\n \"alias\" : \"balanceEnquiry update\",\n \"description\" : \"balanceEnquiry Request : Sender Bank -> MessageHub\",\n \"messageIdentification\" : \"\",\n \"messageType\" : \"\",\n \"messageFormat\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"DE_0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"text\"\n }, \n {\n \"name\" : \"DE_1\",\n \"alias\" : \"Primary Bitmap\",\n \"lenght\" : \"8\",\n \"description\" : \"Primary Bitmap\",\n \"type\" : \"BIN\",\n \"dataType\" : \"\"\n }\n ]\n}\n", "text": "I have data in MongoDB like below:I want to update element of array “messages” by its “_id”. Depending on given condition where “channel.name”:“switch” and “channel.formats.formatName”:“ISO8583-93” and “channel.formats.messages._id”:“Object Id\" of balance enquiry ”. 
I want to update only below part,I dont want to update field only…want to replace complete object of mongodb through javaHow do I update this from Java?", "username": "Erica_01" }, { "code": "import com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoDatabase;\nimport org.bson.Document;\n\nMongoClient mongoClient = MongoClients.create();\nMongoDatabase database = mongoClient.getDatabase(\"YOURDATABASE\");\n\nDocument buildInfoResults = database.runCommand(COMMAND);\nSystem.out.println(buildInfoResults.toJson());\nCOMMAND(convert this json to org.bson.Document,and pass it as argument above)=\n\n{\n \"update\": \"YOUR_COLLECTION_NAME\",\n \"updates\": [\n {\n \"q\": {},\n \"u\": [\n {\n \"$addFields\": {\n \"channel\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$channel.name\",\n \"switch\"\n ]\n },\n {\n \"_id\": \"$channel._id\",\n \"name\": \"$channel.name\",\n \"formats\": {\n \"$map\": {\n \"input\": \"$channel.formats\",\n \"as\": \"format\",\n \"in\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$format.formatName\",\n \"ISO8583-93\"\n ]\n },\n {\n \"_id\": \"$$format._id\",\n \"formatName\": \"$$format.formatName\",\n \"description\": \"$$format.description\",\n \"fields\": \"$$format.fields\",\n \"messages\": {\n \"$map\": {\n \"input\": \"$$format.messages\",\n \"as\": \"formatMessage\",\n \"in\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$formatMessage.name\",\n \"balanceEnquiry\"\n ]\n },\n {\n \"_id\": \"$$formatMessage._id\",\n \"name\": \"$$formatMessage.name\",\n \"alias\": \"newAlias\",\n \"messageIdentification\": \"$$formatMessage.messageIdentification\",\n \"messageType\": \"$$formatMessage.messageType\",\n \"messageFormat\": \"$$formatMessage.messageFormat\"\n },\n \"$$formatMessage\"\n ]\n }\n }\n }\n },\n \"$$format\"\n ]\n }\n }\n }\n },\n \"$channel\"\n ]\n }\n }\n }\n ],\n \"multi\": true\n }\n ]\n}\n{\n \"_id\": \"$$formatMessage._id\",\n \"name\": \"$$formatMessage.name\",\n \"alias\": \"newAlias\",\n \"messageIdentification\": \"$$formatMessage.messageIdentification\",\n \"messageType\": \"$$formatMessage.messageType\",\n \"messageFormat\": \"$$formatMessage.messageFormat\"\n} \n{\n \"_id\": \"Object ID \",\n \"name\": \"switch\",\n \"formats\": [\n {\n \"_id\": \"Object ID \",\n \"formatName\": \"ISO8583-93\",\n \"description\": \"ISO Format\",\n \"fields\": [\n {\n \"name\": \"0\",\n \"alias\": \"MTI\",\n \"lenght\": \"4\",\n \"description\": \"\",\n \"type\": \"FIXED\",\n \"dataType\": \"\",\n \"required\": true\n }\n ],\n \"messages\": [\n {\n \"_id\": \"Object ID \",\n \"name\": \"balanceEnquiry\",\n \"alias\": \"newAlias\",\n \"messageIdentification\": \"\",\n \"messageType\": \"\",\n \"messageFormat\": \"\"\n },\n {\n \"_id\": \"Object ID \",\n \"name\": \"fundTransfer\",\n \"alias\": \"creditTransfer\",\n \"description\": \"Funds Transfer Request : Sender Bank -> Message Hub\",\n \"messageIdentification\": \"\",\n \"messageType\": \"\",\n \"messageFormat\": \"\",\n \"fields\": [\n {\n \"name\": \"DE_0\",\n \"alias\": \"MTI\",\n \"lenght\": \"4\",\n \"description\": \"\",\n \"type\": \"FIXED\",\n \"dataType\": \"\"\n },\n {\n \"name\": \"DE_1\",\n \"alias\": \"Primary Bitmap\",\n \"lenght\": \"8\",\n \"description\": \"Primary Bitmap\",\n \"type\": \"BIN\",\n \"dataType\": \"\"\n }\n ]\n }\n ]\n },\n {\n \"_id\": \"Object ID \",\n \"formatName\": \"ISO20022\",\n \"description\": \"\",\n \"fields\": [\n {\n \"name\": \"0\",\n \"alias\": \"MTI\",\n \"lenght\": \"4\",\n \"description\": \"\",\n \"type\": \"FIXED\",\n 
\"dataType\": \"\",\n \"required\": true\n },\n {\n \"name\": \"1\",\n \"alias\": \"Bitmap(s)\",\n \"lenght\": \"8\",\n \"description\": \"\",\n \"type\": \"BIN\",\n \"dataType\": \"\",\n \"required\": true\n }\n ]\n }\n ]\n}\n", "text": "Hello\nI was thinking to send you this from yesterday but you said i want to get that part\nof the document,and i answered to the previous question.\nIf you want to update use this query,that updates the message if\n(and (= channel.name “switch”)\n(= format.formatName “ISO8583-93”)\n(= message.name “balanceEnquiry”))To run it you need mongoDB >= 4.2 , because i use pipeline inside the update.\nAlso because your driver might not support mongoBD>=4.2,run command is safe way,\nand answer can be readable for all.If you driver supports mongoDB >=4.2 take the pipeline\nand add it to your java update() method.From this command you only care to edit the below part,i just updated the alias here,to “newAlias”\nand didn’t added the fields and description fields,you can do any change you want.The results i got wasHope that helps.", "username": "Takis" } ]
How to update only specified array elements from nested array present in document
2020-09-01T09:26:51.682Z
How to update only specified array elements from nested array present in document
2,497
https://www.mongodb.com/…a0d9d728d3fd.png
[ "replication", "security", "configuration" ]
[ { "code": "", "text": "Good morning everyoneI am setting up a replicaset environment with Mongo version 4.4. I am using three servers to implement a replicaset in my environment.The initial default configuration works perfectly, I correctly raise the replicaset on the primary node, and then add the two secondary nodes, and I see in the rs state that everything is correct.To secure a little more, I have read the documentation that there are two internal authentication mechanisms, between the members of the replicaset: keyfile and x509 certificates.I implemented the first solution, and it worked correctly. I generate a file with a key, and I pass it on to each of the nodes. But the documentation recommends that for production environments it is advisable to perform internal authentication by x509 certificates. I have followed the documentation, I have created my self-signed certificates for each of the hosts, I have changed the configuration of the mongod.conf file, and when lifting each of the nodes, in the log I see the following:\nScreenshot_18985×198 20.9 KB\nI have generated the self-signed certificates all the same except for the CN field that I have put the FQND of each of the servers. And if I make a hostname in the operating system, the FQDN of the server comes out, that is, it is the same. What am I missing in the configuration?Best regards.", "username": "Eduardo_HM" }, { "code": "", "text": "hello Eduardo,\nthe log says that your nodes do not show up with FQDN but only with the short name, so the CN must contain only the short name. do you have any entries in /etc/hosts? are your nodes in DNS?", "username": "Walter_Fortunato" }, { "code": "", "text": "Thank you very much for the reply Walter.I have done several tests but it keeps giving me the same error.Could you tell me or point out the steps and how to do it from the generation of the self-signed certificate to the change of configuration so that instead of using the keyfile authentication that is what I have now, the x509 authentication can be used for only internal authentication between the nodes of the replica set?", "username": "Eduardo_HM" } ]
Error when I configure replicaset with internal authentication by x509
2020-08-12T09:34:03.404Z
Error when I configure replicaset with internal authentication by x509
1,623
null
[ "replication", "security" ]
[ { "code": "", "text": "Cannot connect to replica set “My Ecomm Connection”[localhost:27017].\nSet’s primary is unreachable.Reason:\nSSL tunnel failure: Network is unreachable or SSL connection rejected by server. Reason: It is not possible to continue with empty set namethis is a connection failure i was trying to do.Can any one please help me to setup my connection with ROBO 3T\nit would be a great help", "username": "Yatin_Rathod" }, { "code": "", "text": "Hi @Yatin_Rathod & welcome in the MongoDB community .Can you please explain to us where your MongoDB replica set is deployed and help us understand the configuration your are using to run the mongod processes?\nIs it really running on localhost 27017 with SSL?\nCan you please also share the configuration you have using to deploy the RS and how you are trying to connect to Robo3T?Thanks.", "username": "MaBeuLux88" } ]
CA-signed certificate does not exist
2020-09-01T20:56:00.978Z
CA-signed certificate does not exist
2,536
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": " primary Default mode. All operations read from the current replica set primary.\n primaryPreferred In most situations, operations read from the primary but if it is unavailable, operations read from secondary members.\n secondary All operations read from the secondary members of the replica set.\n secondaryPreferred In most situations, operations read from secondary members but if no secondary members are available, operations read from the primary.\n nearest Operations read from the nearest member of the replica set, irrespective of the member’s type.\nsecondarysecondaryPreferred If read operations account for a large percentage of your application’s traffic, distributing reads to secondary members can improve read throughput. However, in most cases sharding provides better support for larger scale operations, as clusters can distribute read and write operations across a group of machines.\n", "text": "I’m trying to understand the behavior of reads in a mongodb replica set. In particular I have an environment with high rate of reads, low rate of writes, and relatively small data set.I read this document:In particular:So my understanding is that reads by default go to the primary. There are read preferences that allow reading from secondary ( secondary , and secondaryPreferred ). In these cases stale data may be served.It seems to me that it would be preferable to distribute the reads across both primary and secondary machines, so that I can make best use off all 3 machines. But I don’t really see this as an option. The following statement in particular perplexes me:However, in the case of a relatively small data set, sharding simply doesn’t make sense. Can someone shed some light on the right configuration?", "username": "CaptainLevi" }, { "code": "tag_setsmaxStalenessSecondslocalThresholdMS", "text": "TL;DR: Use nearest.Indeed, sharding your cluster would definitely solve the problem as it would force you to split your data set into pieces (shards) and your reads and writes operations would be distributed evenly by the mongos servers - granted that you chose a good shard key.\nBut, as you found out, it doesn’t really makes sense for a relatively little data set and it would be more costly.Our documentation doesn’t really reveals all the magic behind the “nearest” option, but there is actually a round-robin algorithm implemented behind it.\nIn our specifications, you can read more about it - especially about the options that you can set to tune the round-robin algorithm.To distribute reads across all members evenly regardless of RTT, users should use mode ‘nearest’ without tag_sets or maxStalenessSeconds and set localThresholdMS very high so that all servers fall within the latency window.Here is more documentation about the ping times.Especially this part:Once the driver or mongos has found a list of candidate members based on mode and tag sets, determine the “nearest” member as the one with the quickest response to a periodic ping command. (The driver already knows ping times to all members, see “assumptions” above.) Choose a member randomly from those at most 15ms “farther” than the nearest member. The 15ms cutoff is overridable with “secondaryAcceptableLatencyMS”.Also, the more RAM you have, the less documents will need to be retrieved from disk. 
If your working set is large, you should considered adding some RAM to reduce the IOPS and overall latency.I hope this helps !", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you, yes, this helps a lot, I was running into dead-ends everywhere before you replied.", "username": "CaptainLevi" }, { "code": "https://github.com/mongodb/specifications/blob/master/source/driver-read-preferences.rst#nearesthttps://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.rst#nearesthttps://docs.mongodb.com/manual/core/read-preference-mechanics/#default-thresholdsecondaryAcceptableLatencyMSlocalThresholdlocalThresholdMSThe ‘localThresholdMS’ variable used to be called secondaryAcceptableLatencyMS, but was renamed for more consistency with mongos (which already had localThreshold as a configuration option) and because it no longer applies only to secondaries.", "text": "@MaBeuLux88 there is inconsistency in terms used in https://github.com/mongodb/specifications/blob/master/source/driver-read-preferences.rst#nearest , https://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.rst#nearest and https://docs.mongodb.com/manual/core/read-preference-mechanics/#default-threshold like secondaryAcceptableLatencyMS, localThreshold and the correct one of localThresholdMS.Server Selection in Next Generation MongoDB Drivers | MongoDB Blog clarified that\nThe ‘localThresholdMS’ variable used to be called secondaryAcceptableLatencyMS, but was renamed for more consistency with mongos (which already had localThreshold as a configuration option) and because it no longer applies only to secondaries.we should consider updating it.", "username": "CaptainLevi" }, { "code": "", "text": "I have escalated this internally. Thanks ", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
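To make the discussion above concrete, here is a hedged example of a connection string that spreads reads across all members — host names, port and threshold are placeholders. With a LAN-local replica set, a generous localThresholdMS keeps every member inside the latency window, so reads are effectively distributed across all three nodes:

```
mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0&readPreference=nearest&localThresholdMS=1000
```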
Distribute reads across replica sets
2020-08-31T07:35:11.424Z
Distribute reads across replica sets
3,553
null
[ "aggregation" ]
[ { "code": "MatchesKits", "text": "Hope the title topic wasn’t too misleading, let me explain the problem a bit more:Not sure if anyone understands, so I better give an example:List of kits: [‘AA’, ‘BB’, ‘CC’]\nMatches docs that I’m trying to find (as their count will be === 1):\n{ kit1: ‘AA’, kit2’ ABV’, chr: ‘1’ }\n{ kit1: ‘XX’, kit2: ‘CC’, chr: ‘5’ }Matches docs that should be excluded from the result (as their count > 1)\"\n{ kit1: ‘BB’, kit2: ‘HR’, chr: ‘8’ }\n{ kit1: ‘BB’, kit2: ‘HR’, chr: ‘X’ }I’m not even sure this can be done in MongoDb directly but would appreciate the aggregation been done there as we already have 700,000 docs in Matches and will soon cross 1 million.Thanks in advance,Andreas", "username": "Andreas_West" }, { "code": "var kits = [\"AA\",\"BB\",\"CC\"];\n\n[\n {\n \"$match\": {\n \"$expr\": {\n \"$or\": [\n {\n \"$in\": [\n \"$kit1\",\n kits\n ]\n },\n {\n \"$in\": [\n \"$kit2\",\n kits\n ]\n }\n ]\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"kit1\": \"$kit1\",\n \"kit2\": \"$kit2\"\n },\n \"doc\": {\n \"$push\": {\n \"kit1\": \"$kit1\",\n \"kit2\": \"$kit2\",\n \"chr\": \"$chr\"\n }\n }\n }\n },\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n {\n \"$size\": \"$doc\"\n },\n 1\n ]\n }\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$doc\"\n }\n },\n {\n \"$replaceRoot\": {\n \"newRoot\": \"$doc\"\n }\n }\n]\nvar kits = [\"AA\",\"BB\",\"CC\"];\n\n[\n {\n \"$match\": {\n \"$expr\": {\n \"$or\": [\n {\n \"$in\": [\n \"$kit1\",\n kits\n ]\n },\n {\n \"$in\": [\n \"$kit2\",\n kits\n ]\n }\n ]\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"kit1\": \"$kit1\",\n \"kit2\": \"$kit2\"\n },\n \"chr\": {\n \"$push\": \"$chr\"\n }\n }\n },\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n {\n \"$size\": \"$chr\"\n },\n 1\n ]\n }\n }\n },\n {\n \"$project\": {\n \"doc\": {\n \"$map\": {\n \"input\": \"$chr\",\n \"as\": \"chr\",\n \"in\": {\n \"kit1\": \"$_id.kit1\",\n \"kit2\": \"$_id.kit2\",\n \"chr\": \"$$chr\"\n }\n }\n }\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$doc\"\n }\n },\n {\n \"$replaceRoot\": {\n \"newRoot\": \"$doc\"\n }\n }\n]\n", "text": "This does what you need i think.Filter so kit1 or kit2 inside kits,\ngroupby kit1 kit2,keep only those that exist 1 time(the array sum count =1),\nand unwind to root to have the original document.There is a solution also using less memory,on group by to not store all the document,but only the\nchr but it needs the document to be contructed again later.", "username": "Takis" }, { "code": "Matches", "text": "Thank you @Takis - can I ask you one more question? Where do I identify the collection that I use in your code example?In the first $match block it should query the Matches collection, I don’t see any reference to it.The latter part is excellent and I didn’t know how to use these aggregation steps, thanks for that.", "username": "Andreas_West" }, { "code": "", "text": "How to name the database and the collection depends on the driver you use.\nOn mongo shell you do\nuse yourDatabaseName\ndb.yourCollectionName.agreegate([…the-stages…])Its simple in all drivers,see the driver tutorial on aggregate example\nFor example python\nhttps://pymongo.readthedocs.io/en/stable/examples/aggregation.htmlThe code works assuming kits list is not like a very big list.Hope it helps", "username": "Takis" } ]
How can I find all docs where a user has only a single doc (aggregation problem)?
2020-08-25T14:30:33.394Z
How can I find all docs where a user has only a single doc (aggregation problem)?
1,500
null
[ "node-js" ]
[ { "code": "let connectionString = `mongodb://${data.username}:${data.password}@${clusterEndpoint}:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0`;\n return new Promise((resolve, reject) => {\n mongodb.connect(connectionString, { useUnifiedTopology: true }, (err, res) => {\n if (err) {\n reject(err);\n } else {\n resolve(res);\n }\n })\n\n });\n", "text": "I am facing the below exception intermittently. I am using the connection as below“errorMessage”:“MongoTimeoutError: Server selection timed out after 30000 ms”,“reason”:{“errorType”:“MongoTimeoutError”,“errorMessage”:“Server selection timed out after 30000 ms”,“name”:“MongoTimeoutError”,“stack”:[“MongoTimeoutError: Server selection timed out after 30000 ms”,\" at Timeout.setTimeout [as _onTimeout] (/var/task/node_modules/mongodb/lib/core/sdam/server_selection.js:308:9)\",\" at ontimeout (timers.js:436:11)\",\" at tryOnTimeout (timers.js:300:5)\",\" at listOnTimeout (timers.js:263:5)\",\" at Timer.processTimers (timers.js:223:10)\"]},“promise”:{},“stack”:[“Runtime.UnhandledPromiseRejection: MongoTimeoutError: Server selection timed out after 30000 ms”,\" at process.on (/var/runtime/index.js:37:15)\",\" at process.emit (events.js:198:13)\",\" at process.EventEmitter.emit (domain.js:448:20)\",\" at emitPromiseRejectionWarnings (internal/process/promises.js:140:18)\",\" at process._tickCallback (internal/process/next_tick.js:69:34)\"]}", "username": "Suresh_Nedunchezhian" }, { "code": "", "text": "Hi @Suresh_Nedunchezhian,This message means no nodes/primary was found whitin the default 30s server selection period.It might be a firewall thing or a DNS issue with the specified clusterEndpoint …Make sure that with a mongoshell you can connect from this host. Moreover, please specify all hosts in replica set.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,Thanks for looking into this.\nIf it is a firewall issue, then it has to happen all the times. But it happening for us only at sometimes.\nOne thing which i want to specify is we are using AWS DocDB.Regards,\nSuresh D.N.", "username": "Suresh_Nedunchezhian" }, { "code": "", "text": "Hi @Suresh_Nedunchezhian,This forum is to help with MongoDB related technologies and products I am not sure how I can help with other databases which are not MongoDB…Best\nPavel", "username": "Pavel_Duchovny" } ]
Server selection timed out
2020-09-01T20:53:31.567Z
Server selection timed out
6,779
https://www.mongodb.com/…d_2_1023x483.png
[]
[ { "code": "", "text": "Hi,We have a production app that is running on MongoDB Realm mainly using “mongodb-stitch-browser-sdk” & “mongodb-stitch-server-sdk”. Our entire team picked up MQL and we are quite comfortable with it.With the new changes I see that “mongodb-stitch-browser-sdk” and the server side version is being called Legacy SDK.I wrote to support some time ago and they mentioned that EOL will be announced for the Legacy SDK\ns41674×791 111 KB\nThen I saw this post on the \"stitch-js-sdk\" repo in GitHub\n\ns52373×1395 340 KB\nThen there are multiple places within the MongoDB Realm UI where there are seemingly confusing code snippets likeWhen I go to the docs here - I see this\n\nissue-11470×1094 221 KB\nWhen I go to the “Web SDK” in Realm docs here - I see this\n\nissue-21514×732 133 KB\nLastly, if you go navigate inside a “Realm app” and choose “Clusters” and click on the dropdown “Working with your clusters” - you get this code snippet\n\nissue-31698×1456 283 KB\nI am honestly confused and highly worried because as a fintech startup COO I have a production app on MongoDB Realm (we started last it year when it was MongoDB Stitch) and I have a lot of questionsHow long can we use “mongodb-stitch-browser-sdk” & “mongodb-stitch-server-sdk”?Is there a equivalent to “mongodb-stitch-browser-sdk” and it’s server-side library in terms of functionality? If so which is it exactly? Web SDK & Node.js SDK? It doesn’t seem like you can do MQL operations from them only GraphQLWill MQL on client side be phased out as well along with the functionality that exists on “mongodb-stitch-browser-sdk”? I get the impression on the MongoDB Realm docs that may be case as all the examples of reading and writing queries are all GraphQL based.What advice do you have for a startup like us and what are the things that we should be concerned about given we had launch a production app using MongoDB Stitch SDKsWe invested some significant time on MongoDB Stitch learning, understanding the ins & outs and being comfortable with all the concepts hence we are really concerned with all the new but different information.Thanks", "username": "Salman_Alam" }, { "code": "", "text": "Hi Salman –Looks like some of the docs content can be corrected, I’ll get folks working on that. To address your questions –Thanks,\nDrew", "username": "Drew_DiPalma" }, { "code": "", "text": "Ks support MQL. All features from the Stitch SDKs are either available or will be available shortly with the exception of 3rd party serviceThank you very much for further clarification. Is there a reference manual for Realm Web SDK? I see that the one in the docs is basic - it does not clearly show how to use MQL using the Web SDK. Is there any document or article that we can reference?", "username": "Salman_Alam" }, { "code": "", "text": "Hi – I believe you should be able to use the same JS SDK reference for Web/Server. You can also find an example here.", "username": "Drew_DiPalma" }, { "code": "", "text": "I share the same confusion as Salman with the SDKs and general roadmap for the merging of Stitch and Realm. I’m eager to use Mongo for my next project, however the mixed messaging on how to get started with the Web SDK is a barrier to adoption.On one hand, the docs advise to use ‘realm-web’ (which is in line with the comments regarding EOL for the stitch-browser-sdk I’ve seen here and on Github), however, on the other hand, many of the example and demos provided by Mongo employees themselves are still referencing the old Stitch SDK(s) - e.g. 
the live coding YouTube video on Mongo’s official channel as of a week and a half ago.I can appreciate that things are in flux with Stitch and Realm, and that it will take some time to get everything merged. In the meantime though, a little more attention on the docs and setting expectations on what’s to come would help my team feel more confident in going all in on Realm. Until then, Firebase is a tempting alternative (really hoping to stick with Mongo BaaS though).", "username": "J_W" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
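In case it helps others looking for an MQL example with the newer realm-web package, the rough equivalent of the Stitch remote-MongoDB calls looks like this — the app id, service, database and collection names are placeholders, and this is only a sketch based on the Web SDK docs current at the time of writing:

```javascript
import * as Realm from "realm-web";

const app = new Realm.App({ id: "<your-realm-app-id>" });

async function run() {
  const user = await app.logIn(Realm.Credentials.anonymous());

  // "mongodb-atlas" is the default name of the linked cluster service
  const mongodb = user.mongoClient("mongodb-atlas");
  const movies = mongodb.db("sample_mflix").collection("movies");

  // Plain MQL — no GraphQL involved
  const results = await movies.find({ year: 2000 }, { limit: 5 });
  console.log(results);
}

run();
```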
Confusion about future of MongoDB Realm and Stitch
2020-07-09T08:25:31.808Z
Confusion about future of MongoDB Realm and Stitch
2,868
https://www.mongodb.com/…4_2_1024x512.png
[ "graphql" ]
[ { "code": "api-keyapiKeyAPI Key --header 'api-key: <User or Server API Key>' \\\n\"error\":\"no authentication methods were specified\" --header 'apiKey: <User or Server API Key>' \\\n", "text": "tl;dr: api-key needs to be apiKey?I followed the instruction below:In the API Key section, there’s a code saying:but this causes an error: \"error\":\"no authentication methods were specified\"I modified it slightly with some guessing, like:And I got the correct response. Please check if it is correct.", "username": "Toshi" }, { "code": "", "text": "Hi @Toshi,Thanks for this feedback I will test and if this is correct I will contact our documentation team to fix .Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Toshi,Thanks for raising this I’ve opened an internal discussion to fix this.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "My pleasure. Kudos Realm team!", "username": "Toshi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Obsolete doc? Realm GraphQL request header for API Key
2020-09-02T03:17:15.531Z
Obsolete doc? Realm GraphQL request header for API Key
3,987
null
[ "node-js" ]
[ { "code": " let uresult = await mongo_dal.mongoUpdate({\"_id\": user._id}, {\"sent\": user.sent });\n console.log(uresult.result.nModified);\n\nasync function mongoUpdate(query, newData){\n try{\n let result = new Promise((resolve, reject)=>{\n const users = db.collection('users').updateOne(query,{$set: newData}, function(err, docs) {\n if (err) {\n reject(err);\n } else {\n resolve(docs);\n }\n });\n });\n return result;\n }catch(err){\n console.error(err);\n return err.message;\n }\n}\n", "text": "Hi,I running nodejs - mongoUpdate function with:on the “uresult.result.nModified” I got 1 - looks like all right and updated.\nbut really the data not updated!\nthe problem happens not always…Please help me to solve this…", "username": "Anton_Turbin" }, { "code": "user._idupdateOneObjectIDconst { MongoClient, ObjectID } = require(\"mongodb\");\nmongoUpdate({ \"_id\": ObjectID(user._id) }, {\"sent\": user.sent });\n_id", "text": "Hi @Anton_Turbin,The user._id variable that you’re using in the filter portion of your updateOne operation. What format is the data?If the value is a hash, you might try wrapping it in an ObjectID first so that the type is correct.You’d import it like this:And when you want to use it, you’d do something like this:This of course assumes that the problem was in the type of data that you’re using for your _id field.Let me know if that helps.Best,", "username": "nraboy" } ]
mongoUpdate not updated
2020-07-07T08:42:17.435Z
mongoUpdate not updated
1,612
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to announce the release of 1.4.1 of the MongoDB Go Driver.This release contains several bugfixes. For more information please see the release notes.You can obtain the driver source from GitHub under the 1.4.1 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team", "username": "Isabella_Siu" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Go Driver 1.4.1 Released
2020-09-01T18:14:17.347Z
MongoDB Go Driver 1.4.1 Released
1,924
null
[ "atlas-device-sync" ]
[ { "code": "v10true_partitionKey_partitionEmbeddedObjectEnding session with error: failed to validate upload changesets: instruction had incorrect partition value for key \"_partition\" (ProtocolErrorCode=212)\n.debugBadChangeset Error\n\nError:\n\nEnding session with error: failed to validate upload changesets: instruction had incorrect partition value for key \"_partition\" (ProtocolErrorCode=212)\n\nPartition:\n\nmaster\n\nSession Metrics:\n\n{}\n\nSDK:\n\nRealm Cocoa v10.0.0-beta.2\n\nPlatform Version:\n\nVersion 10.15.5 (Build 19F101)\n", "text": "Hey folks,one of your former Realmies here. Lovely to see btw. that there are still some familiar faces around from back in the day. Well, I wanted to give MongoDB Realm a spin and use it to build a product myself for once. But I keep hitting walls and I cannot find why. Context and StumblingsFor context, I’m trying to use the Cocoa bindings from a macOS app in beta-2. I tried the tag which is published on CocoaPods, but saw that there are some not-yet-published improvements on the branch v10, so I hoped maybe if I go bleeding edge it will be better, but no dice. )I’ve my MongoDB Realm sync in development mode and my sync permissions are true for read and write, which should be as permissive as it gets.Note:\nThe UI around could give a little intro in what these expressions actually mean, at least provide a direct links to the docs and the semantics are totally unclear as is. But also the documentation around this is lacking a good intro in what’s going on really. But maybe this is all subject to change?)I’ve tried to change the type of the partition key, the name of the partition key (_partitionKey and _partition, tried to stay close with the docs ), I read a lot of docs, tutorials, examples, read some code diffs to catch up with what has changed and I could be missing*, … But there is almost nothing around this particular error, which is really frustrating. I diffed my exported server config with the tutorial app, but nothing stand out which should be problematic.Note:\nBy reading the cocoa bindings code, I first found out about EmbeddedObject, which is pretty awesome and made me very happy, but it is not even mentioned with a single word in the MongoDB Realm docs. I think it would be good to give an explanation on how this affects sync and maps to MongoDB Atlas. This whole latter part is very opaque right now and it would help a lot, if this would be laid out better to the user imho.About my error:I get the following error in the MongoDB Realm UI in the logs:This maps to what I see in the macOS client in the logs. That is logLevel to .debug, plus one print line custom logging from the error handler. (starting with , as seen in the log attached below) What (I think) I can recognise is that this must be something related to some validation on the server-side, as given by the error class. Partitioning was before my time and I’m sure a lot has changed around that since the MongoDB integration, so I don’t really know what’s going on there. I feel like I’m holding it wrong. 
Just not sure how, where or why.Server LogClient Log2020-08-16 13:22:59.799380+0200 Editor[99805:16209519] Sync: Realm sync client ([realm-core-10.0.0-beta.1], [realm-sync-10.0.0-beta.2])\n2020-08-16 13:22:59.799460+0200 Editor[99805:16209519] Sync: Supported protocol versions: 1-1\n2020-08-16 13:22:59.799514+0200 Editor[99805:16209519] Sync: Platform: macOS Darwin 19.5.0 Darwin Kernel Version 19.5.0: Tue May 26 20:41:44 PDT 2020; root:xnu-6153.121.2~2/RELEASE_X86_64 x86_64\n2020-08-16 13:22:59.799556+0200 Editor[99805:16209519] Sync: Build mode: Release\n2020-08-16 13:22:59.799593+0200 Editor[99805:16209519] Sync: Config param: max_open_files = 256\n2020-08-16 13:22:59.799629+0200 Editor[99805:16209519] Sync: Config param: one_connection_per_session = 1\n2020-08-16 13:22:59.799667+0200 Editor[99805:16209519] Sync: Config param: connect_timeout = 120000 ms\n2020-08-16 13:22:59.799701+0200 Editor[99805:16209519] Sync: Config param: connection_linger_time = 30000 ms\n2020-08-16 13:22:59.799733+0200 Editor[99805:16209519] Sync: Config param: ping_keepalive_period = 60000 ms\n2020-08-16 13:22:59.799765+0200 Editor[99805:16209519] Sync: Config param: pong_keepalive_timeout = 120000 ms\n2020-08-16 13:22:59.799797+0200 Editor[99805:16209519] Sync: Config param: fast_reconnect_limit = 60000 ms\n2020-08-16 13:22:59.799829+0200 Editor[99805:16209519] Sync: Config param: disable_upload_compaction = 0\n2020-08-16 13:22:59.799860+0200 Editor[99805:16209519] Sync: Config param: tcp_no_delay = 0\n2020-08-16 13:22:59.799892+0200 Editor[99805:16209519] Sync: Config param: disable_sync_to_disk = 0\n2020-08-16 13:22:59.799927+0200 Editor[99805:16209519] Sync: User agent string: ‘RealmSync/10.0.0-beta.2 (macOS Darwin 19.5.0 Darwin Kernel Version 19.5.0: Tue May 26 20:41:44 PDT 2020; root:xnu-6153.121.2~2/RELEASE_X86_64 x86_64) RealmObjectiveC/10.0.0-beta.2 $appId’\n2020-08-16 13:22:59.801140+0200 Editor[99805:16211279] Sync: Connection[1]: WebSocket::Websocket()\n2020-08-16 13:22:59.801243+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Binding ‘$path’ to ‘“master”’\n2020-08-16 13:22:59.801299+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Activating\n2020-08-16 13:22:59.801370+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = false\n2020-08-16 13:22:59.801418+0200 Editor[99805:16211279] Sync: Opening Realm file: $path\n2020-08-16 13:22:59.801816+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: client_file_ident = 0, client_file_ident_salt = 0\n2020-08-16 13:22:59.801892+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Progress handler called, downloaded = 0, downloadable(total) = 0, uploaded = 0, uploadable = 0, reliable_download_progress = 0, snapshot version = 1\n2020-08-16 13:22:59.801958+0200 Editor[99805:16211279] Sync: Connection[1]: Resolving ‘ws.eu-west-1.aws.realm.mongodb.com:443’\n2020-08-16 13:22:59.831549+0200 Editor[99805:16211279] Sync: Connection[1]: Connecting to endpoint ‘99.83.185.35:443’ (1/2)\n2020-08-16 13:22:59.840666+0200 Editor[99805:16211279] Sync: Connection[1]: Connected to endpoint ‘99.83.185.35:443’ (from ‘192.168.178.65:60858’)\n2020-08-16 13:22:59.864231+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Progress handler called, downloaded = 0, downloadable(total) = 0, uploaded = 0, uploadable = 2216, reliable_download_progress = 0, snapshot version = 2\n2020-08-16 13:22:59.951901+0200 Editor[99805:16211279] Sync: 
Connection[1]: WebSocket::initiate_client_handshake()\n2020-08-16 13:23:00.025554+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Progress handler called, downloaded = 0, downloadable(total) = 0, uploaded = 0, uploadable = 2318, reliable_download_progress = 0, snapshot version = 3\n2020-08-16 13:23:00.090092+0200 Editor[99805:16211279] Sync: Connection[1]: WebSocket::handle_http_response_received()\n2020-08-16 13:23:00.090171+0200 Editor[99805:16211279] Sync: Connection[1]: Negotiated protocol version: 1\n2020-08-16 13:23:00.090222+0200 Editor[99805:16211279] Sync: Connection[1]: Will emit a ping in 20117 milliseconds\n2020-08-16 13:23:00.090268+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Sending: BIND(path=‘“master”’, signed_user_token_size=469, need_client_file_ident=1, is_subserver=0)\n2020-08-16 13:23:01.349524+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Received: IDENT(client_file_ident=5, client_file_ident_salt=387383682228219102)\n2020-08-16 13:23:01.350739+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Sending: IDENT(client_file_ident=5, client_file_ident_salt=387383682228219102, scan_server_version=0, scan_client_version=0, latest_server_version=0, latest_server_version_salt=0)\n2020-08-16 13:23:01.350905+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Sending: MARK(request_ident=1)\n2020-08-16 13:23:01.605823+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Received: DOWNLOAD(download_server_version=1, download_client_version=0, latest_server_version=1, latest_server_version_salt=3644857074679688014, upload_client_version=0, upload_server_version=0, downloadable_bytes=0, num_changesets=1, …)\n2020-08-16 13:23:01.606233+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Finished changeset indexing (incoming: 1 changeset(s) / 6 instructions, local: 2 changeset(s) / 128 instructions, conflict group(s): 2)\n2020-08-16 13:23:01.606478+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Finished transforming 2 local changesets through 1 incoming changesets (128 vs 6 instructions, in 2 conflict groups)\n2020-08-16 13:23:01.607236+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: 1 remote changeset integrated, producing client version 5\n2020-08-16 13:23:01.607313+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Progress handler called, downloaded = 105, downloadable(total) = 105, uploaded = 0, uploadable = 2318, reliable_download_progress = 1, snapshot version = 5\n2020-08-16 13:23:01.748902+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Received: MARK(request_ident=1)\n2020-08-16 13:23:01.749017+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Sending: UPLOAD(progress_client_version=5, progress_server_version=1, locked_server_version=1, num_changesets=2)\n2020-08-16 13:23:01.749137+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Upload compaction: original size = 2216, compacted size = 2216\n2020-08-16 13:23:01.749204+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Upload compaction: original size = 102, compacted size = 102\n2020-08-16 13:23:01.790901+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Received: ERROR(error_code=212, message_size=22, try_again=0)\n2020-08-16 13:23:01.791019+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Suspended\n2020-08-16 13:23:01.791269+0200 Editor[99805:16211279] Sync: Connection[1]: Session[1]: Sending: UNBIND\n Unhandled sync error: 
io.realm.sync:4 Bad changeset (UPLOAD)2020-08-16 13:23:01.791541+0200 Editor[99805:16211279] Sync: Connection[1]: Disconnected", "username": "mrackwitz" }, { "code": "BadChangeset ErrorERROR: AddColumn@objc dynamic var _partitionKey@objc dynamic var _super_partitionKey", "text": "TL;DRI’ve posted information below however, we have an existing project that has remained unchanged code or data wise for two weeks and are getting a very similar error BadChangeset Error however, the trailing info is different - your error is related to the partition key (read below), ours is related to ERROR: AddColumn when we didn’t make any changes to our objects, data or code.This may be server related and I have opened a ticket and will post with any updates.It would be helpful to include the specific code that causes the error along with how your objects are defined and then what change was made to those objects. e.g. my objects had a partition key of @objc dynamic var _partitionKey and I changed that to @objc dynamic var _super_partitionKeyAlso thisI’ve tried to change the type of the partition key,Is not possible in that context without jumping through a bunch of hoops. General rule of thumb is; once it’s set, it’s set. Attempting to change it without the other steps would explain the errorinstruction had incorrect partition value for key “_partition”A few notesRealm clients should never modify the partition value directly. Any field that clients can modify cannot be used as a partition key.Changing a partition value is an expensive operation, requiring a deletion of the object from one realm and an insertion of the same object into another realm. As a result, you should choose a partition key that rarely, if ever, needs to change.Once you have chosen a partition key and enabled sync, you cannot reassign the partition key to a different field. To use a different field as your partition key you’ll need to stop sync, change it, re-enable sync and then deal with a client reset in code (this goes back to bullet point #2 in that the server does a lot of work on the back end when this is changed)A good overview is in the Partition Documentation", "username": "Jay" }, { "code": "", "text": "Bad Changeset is an error in the low-level C++ sync-client and cannot be handled by the upper level binding code. I have upped the priority of this issue and we are working on a fix this week.", "username": "Ian_Ward" }, { "code": "", "text": "Hey @Ian_Ward, long time! Awesome that this is going to be fixed soon.\nAny workarounds for now? This is pretty much a complete showstopper for me atm.", "username": "mrackwitz" }, { "code": "@objc dynamic var _partition: String?\n@objc dynamic var _partition: String = \"\"\n@objc dynamic var _partitionKey: String = \"\"\n", "text": "Thanks @Jay for your quick reply.Is not possible in that context without jumping through a bunch of hoops. General rule of thumb is; once it’s set, it’s set. Attempting to change it without the other steps would explain the errorTo use a different field as your partition key you’ll need to stop sync, change it, re-enable sync and then deal with a client reset in code (this goes back to bullet point #2 in that the server does a lot of work on the back end when this is changed)I know, I’ve jumped at least thru the hoops you describe later (as in 2nd quote) on now several times. 
Very familiar with the procedure to nuke the local Realm as a solution…\nIf there are any more steps involved please let me know.My partition key is defined as follows:Also tried:And:With sync terminations, rule changes, sync re-start, nuking local Realm and re-building and starting the app in between, obviously.", "username": "mrackwitz" }, { "code": "", "text": "@mrackwitz For now, you can probably avoid this issue by disabling Developer mode on the server side. You would then need to define your syncing schema in JSON on the cloud though. You would also need to wipe your client to start fresh.", "username": "Ian_Ward" }, { "code": "", "text": "Thanks for the super fast response. I tried this one too. I disabled dev mode, reduced my schema to a very limited subset, but the error still persists.", "username": "mrackwitz" }, { "code": "", "text": "@mrackwitz Did you wipe your simulator as well?", "username": "Ian_Ward" }, { "code": "", "text": "It’s a macOS app. I did a “rm -rf” on “…/Application Support/realm-object-server/”. That should wipe it all?", "username": "mrackwitz" }, { "code": "instruction had incorrect partition value for key \"_partition\"realm = try! Realm(configuration: user.configuration(partitionValue: \"abc\"))\ntry! realm.write {\n realm.add(Foo(_partition: \"def\"))\n}\nabc_partitiondef_partition_partition", "text": "Hey Marius, instruction had incorrect partition value for key \"_partition\" occurs when the value for the partition key is different from the value of the partition with which you opened the Realm.So something like:Note that the Realm was opened with a partition value of abc, but the _partition field is assigned to def. To avoid the error you can either assign the _partition field in your model to the correct value or avoid assigning it to anything (that way the server will populate it). If you don’t care about the _partition field in your mobile app, then you can just remove it from your swift model altogether - technically it will still be in the database, because the server will synchronize it to the device, but not having it defined in your models will ensure you don’t accidentally assign it to the wrong value.", "username": "nirinchev" }, { "code": "_partition\"master\"_partition", "text": "Heyy @nirinchev, thanks for chiming in!To avoid the error you can either assign the _partition field in your model to the correct valueI tried this, that did not seem to work for me with a string partition key. The partition key I attempted to use was just \"master\".or avoid assigning it to anything (that way the server will populate it).That’s what I tried next. (Possibly default values as they are used in the Cocoa / Swift SDK might got in between here. For the record, I used the constructor of the object with zero arguments directly and set all fields externally, beside the partition key. I tried this with a required string and an optional string as partition key.)If you don’t care about the _partition field in your mobile app, then you can just remove it from your swift model altogether - technically it will still be in the database, because the server will synchronize it to the device, but not having it defined in your models will ensure you don’t accidentally assign it to the wrong value.Well, that worked finally. Knowing about this earlier could have saved me some pain and time, would be nice to include this in the docs. 
In fact, I just looked it up again, the docs are then even completely misleading here, they say Realm Sync would ignore Collections without partition key:However, you don’t have to include the partition key in all collections; Realm Sync ignores any collections that lack a Realm Schema as well as any documents that lack a valid value for the partition key.My expectation from reading this was, if I don’t include a partition key on any particular object, those objects won’t be synced at all.", "username": "mrackwitz" }, { "code": "\"master\"_partition_partition\"master\"", "text": "I tried this, that did not seem to work for me with a string partition key. The partition key I attempted to use was just \"master\" .Hm… I wonder if there’s an issue with the Cocoa SDK where it doesn’t encode the string value correctly. I’ll file a ticket for the server team to log the expected and the actual values for the partition key to make it easier to spot mistakes. Once that is there, I’ll see what’s going on with the Cocoa SDK and why there’s a discrepancy with the partition keys. Note that if you had default values, it’s very likely that this was the cause - the way Sync will encode this is: one instruction to set _partition to its default value, then a second instruction to set _partition to \"master\". In this case, the first instruction would violate the validations and the entire changeset will be rejected, even though the final value of the field is correct. I’ll check with the Cocoa team to confirm if we can do anything about that.My expectation from reading this was, if I don’t include a partition key on any particular object, those objects won’t be synced at all.I can see how that would be confusing - this explanation actually targets the server-side schema and concerns synchronization from MongoDB to Realm - i.e. if you have collections in MongoDB that you don’t want to sync to the mobile device, you can omit the partition key from their schema and sync will ignore them. At the moment all objects from the mobile database get synchronized to the server, but we may consider some opt-out mechanism in the future.In any case, great to hear that things are working for you and thanks for the valuable feedback. I’ll make sure to convey it to the right people so that future users get a more seamless experience ", "username": "nirinchev" }, { "code": "_partition_partition\"master\"failed to validate upload changesets: clients cannot upload destructive schema changes, received \"EraseColumn\" instruction\n", "text": "Note that if you had default values, it’s very likely that this was the cause - the way Sync will encode this is: one instruction to set _partition to its default value, then a second instruction to set _partition to \"master\" . In this case, the first instruction would violate the validations and the entire changeset will be rejected, even though the final value of the field is correct. I’ll check with the Cocoa team to confirm if we can do anything about that.That would make sense and was along the lines, I was thinking. If Cocoa, can’t do anything about it, maybe making up in the documentation or even in the error message itself, could probably help a lot.In any case, great to hear that things are working for youWorking, but not for long actually. Just after a subsequent start, I get failures again, same error, but different reasoning. 
Quite reproducible again tho unfortunately.Any tips what the culprit could be in that case?\nNote: No, I’m not actually doing any schema changes at all.\nIn fact I’ve tried to terminate sync and start again, and it does sync even in development mode, where this error is happening as well.\nCould this have something to do with that the server rules include the partition key, but the client schema doesn’t?", "username": "mrackwitz" }, { "code": "EraseColumn_partition", "text": "That’s super interesting - the EraseColumn instruction should be impossible to generate from the SDK. I’m sorry you’re hitting these issues, but the feedback is very valuable. I’ll try to isolate a repro case, but just to be sure, my understanding is that these are the steps you’re taking:Does this sound reasonable?", "username": "nirinchev" }, { "code": "EraseColumn", "text": "That’s super interesting - the EraseColumn instruction should be impossible to generate from the SDK. I’m sorry you’re hitting these issues, but the feedback is very valuable.Well, glad that the feedback at least is helpful. I can keep it coming, as I keep hitting issues, once this one is resolved. To the repro steps: this does indeed sound like what I’ve done, but little disclaimer: once sync was initially working I opened the flood gates and added my fully fledged model, instead of the previously very simplified version I’ve used. That alone was an additive change though, but there is a chance that there is more going on. Nevertheless given that I didn’t make any subsequent changes to the schema, the partition key should be indeed the only difference which could be somehow interpreted as destructive.Keep me posted on ideas what I can do to resolve this issue, or at least side-step this issue again. Meanwhile I’m starting to seriously consider to just use at least the Mongo DB document-driven API with plain JSON for now, but that would require implementing JSON mapping and all its drawbacks afaict. ", "username": "mrackwitz" }, { "code": "", "text": "Haven’t had much luck reproing it, but have you tried wiping your local state? While the bug is real and we’ll be hunting it, I imagine that should unblock you in the short term.", "username": "nirinchev" }, { "code": "[email protected]", "text": "Haven’t had much luck reproing it, but have you tried wiping your local state?Several times. I was still on the v10 branch, so theoretically there could be something in there? But I just checked and saw that @jsflx published the latest changes just yesterday as v10.0.0-beta.3, so I might as well use that.", "username": "mrackwitz" }, { "code": "", "text": "@mrackwitz Would you be able to send me a repro case so that I can recreate it internally? You can email me at [email protected]", "username": "Ian_Ward" }, { "code": "", "text": "I’m getting a very similar error but coming from Android/Kotlin\nfailed to validate upload changesets: instruction had incorrect partition value for key “username” (ProtocolErrorCode=212)Yet username is assigned in the class Object and it has a valid value", "username": "Barry_Fawthrop" }, { "code": "", "text": "@Barry_Fawthrop Please share your server side logs - what do they say for this error?", "username": "Ian_Ward" } ]
Keep getting BadChangeset Error
2020-08-16T11:51:10.000Z
Keep getting BadChangeset Error
7,515
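The partition-key guidance that settles the thread above (fix the partition value when the Realm is opened, and don't let the client model carry a writable `_partition` field) applies across SDKs. The sketch below is illustrative only: it assumes the Realm Java 10.x sync API rather than the Swift code discussed in the thread, an already-initialized `App` with a logged-in user, and a hypothetical `Message` model.

```java
import io.realm.Realm;
import io.realm.RealmObject;
import io.realm.annotations.PrimaryKey;
import io.realm.mongodb.App;
import io.realm.mongodb.User;
import io.realm.mongodb.sync.SyncConfiguration;
import org.bson.types.ObjectId;

// (Each class in its own file.) The model deliberately has no "_partition"
// field: Sync keeps one on the backend, but a client that never sets it can
// never upload the mismatched value that triggers the BadChangeset error.
public class Message extends RealmObject {
    @PrimaryKey
    public ObjectId _id = new ObjectId();
    public String text;
}

public class PartitionDemo {
    public static void writeToMaster(App app) {
        User user = app.currentUser(); // assumes a user is already logged in
        // The partition value is fixed here, at Realm-open time, not on the object.
        SyncConfiguration config = new SyncConfiguration.Builder(user, "master").build();
        try (Realm realm = Realm.getInstance(config)) {
            realm.executeTransaction(r -> {
                Message m = r.createObject(Message.class, new ObjectId());
                m.text = "hello"; // server assigns _partition = "master"
            });
        }
    }
}
```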
null
[ "java", "performance" ]
[ { "code": "public static void bulkInsert2() {\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(\"mongodb://10.4.1.3:34568/\"))\n .build();\n \n MongoClient mongoClient = MongoClients.create(settings);\n \n WriteConcern wc = new WriteConcern(0).withJournal(false);\n \n String databaseName = \"test\";\n String collectionName = \"testCollection\";\n \n System.out.println(\"Database: \" + databaseName);\n System.out.println(\"Collection: \" + collectionName);\n System.out.println(\"Write concern: \" + wc);\n \n MongoDatabase database = mongoClient.getDatabase(databaseName);\n \n MongoCollection<Document> collection = database.getCollection(collectionName).withWriteConcern(wc);\n \n int rows = 1000000;\n int iterations = 5;\n int batchSize = 1000;\n \n double accTime = 0;\n \n for (int it = 0; it < iterations; it++) {\n database.drop();\n \n List<InsertOneModel<Document>> docs = new ArrayList<>();\n \n int batch = 0;\n long totalTime = 0;\n \n for (int i = 0; i < rows; ++i) {\n String key1 = \"7\";\n String key2 = \"8395829\";\n String key3 = \"928749\";\n String key4 = \"9\";\n String key5 = \"28\";\n String key6 = \"44923.59\";\n String key7 = \"0.094\";\n String key8 = \"0.29\";\n String key9 = \"e\";\n String key10 = \"r\";\n String key11 = \"2020-03-16\";\n String key12 = \"2020-03-16\";\n String key13 = \"2020-03-16\";\n String key14 = \"klajdlfaijdliffna\";\n String key15 = \"933490\";\n String key17 = \"paorgpaomrgpoapmgmmpagm\";\n \n Document doc = new Document(\"key17\", key17).append(\"key12\", key12).append(\"key7\", key7)\n .append(\"key6\", key6).append(\"key4\", key4).append(\"key10\", key10).append(\"key1\", key1)\n .append(\"key2\", key2).append(\"key5\", key5).append(\"key13\", key13).append(\"key9\", key9)\n .append(\"key11\", key11).append(\"key14\", key14).append(\"key15\", key15).append(\"key3\", key3)\n .append(\"key8\", key8);\n \n docs.add(new InsertOneModel<>(doc));\n \n batch++;\n \n if (batch >= batchSize) {\n long start = System.currentTimeMillis();\n \n collection.bulkWrite(docs);\n \n totalTime += System.currentTimeMillis() - start;\n \n docs.clear();\n batch = 0;\n }\n }\n \n if (batch > 0) {\n long start = System.currentTimeMillis();\n \n collection.bulkWrite(docs);\n \n totalTime += System.currentTimeMillis() - start;\n \n docs.clear();\n }\n \n accTime += totalTime;\n \n System.out.println(\"Iteration \" + it + \" - Elapsed: \" + (totalTime / 1000.0) + \" seconds.\");\n }\n \n System.out.println(\"Avg: \" + ((accTime / 1000.0) / iterations) + \" seconds.\");\n \n mongoClient.close();\n }{\n \"_id\" : ObjectId(\"5f3c2db34063366c39177e64\"),\n \"key17\" : \"paorgpaomrgpoapmgmmpagm\",\n \"key12\" : \"2020-03-16\",\n \"key7\" : \"0.094\",\n \"key6\" : \"44923.59\",\n \"key4\" : \"9\",\n \"key10\" : \"r\",\n \"key1\" : \"7\",\n \"key2\" : \"8395829\",\n \"key5\" : \"28\",\n \"key13\" : \"2020-03-16\",\n \"key9\" : \"e\",\n \"key11\" : \"2020-03-16\",\n \"key14\" : \"klajdlfaijdliffna\",\n \"key15\" : \"933490\",\n \"key3\" : \"928749\",\n \"key8\" : \"0.29\"\n}", "text": "Processors: 2 x Intel Xeon E5-2640 2.50GHz\nMemory: 8GB RDIMM, 1333 MH (Total 32Gb RAM)\nNetwork Card Speed: Broadcom 5720 QP 1Gb Network Daughter Card\nOperating System: Core OS\nMongoDB Server Version: 3.6.2 (Docker hosted)Processors: Intel Core i7-4790 CPU @ 3.60GHz (8CPUs). 
~3.1GHz\nMemory: 16GB RAM\nNetwork Card: Intel® Ethernet Connection (2) I218-V, 1Gb\nOperating System: Windows 7 EnterpriseThe avg data transfer rate between the client and server is ~90 MB/sInserting 1 million documents (See the sample document section) using bulk write, the performance starts to degrade when I use a batch size larger than 1000. The following is the execution times of the sample code using different batch sizes.batch size 1000\nIteration 0 - Elapsed: 6.577 seconds.\nIteration 1 - Elapsed: 6.52 seconds.\nIteration 2 - Elapsed: 6.156 seconds.\nIteration 3 - Elapsed: 6.859 seconds.\nIteration 4 - Elapsed: 6.152 seconds.\nAvg: 6.452800000000001 seconds.batch size 5000\nIteration 0 - Elapsed: 7.112 seconds.\nIteration 1 - Elapsed: 6.662 seconds.\nIteration 2 - Elapsed: 6.457 seconds.\nIteration 3 - Elapsed: 6.551 seconds.\nIteration 4 - Elapsed: 6.211 seconds.\nAvg: 6.5986 seconds.batch size 10000\nIteration 0 - Elapsed: 8.049 seconds.\nIteration 1 - Elapsed: 7.528 seconds.\nIteration 2 - Elapsed: 7.664 seconds.\nIteration 3 - Elapsed: 7.462 seconds.\nIteration 4 - Elapsed: 7.396 seconds.\nAvg: 7.6198 seconds.Is this the expected outcome in relation to batch sizes ? Can someone explain why does using larger batch sizes causes the performance to degrade in this case ?", "username": "Aziz_Zitouni" }, { "code": "", "text": "Hi @Aziz_Zitouni,I’ve noticed you are using a write concern of 0 with no journal. I believe that you done it to gain max throughput.However, when you do that you probably cause the server to be overwhelmed as the ack comes before the server processes anything. Not sure what this test tries to achieve.If you have secondaries performance may be worse as the lag cause cache pressure on the majority commit point maintenance.Therefore, if you push more workload to the database at once you get it more overwhelmed and stressed. This explains why smaller batches allow “better” performance as it gives the database more space to breath as there are 10 times more roundtrips than with 10k.Also the 3.6.2 is a pretty old version with lots of revisions released since then with many performance improvements. I would not test performance on this version what so ever ( always use latest for testing 3.6.19)Best\nRegards", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,Thanks for the explanation. It makes more sense to me now. I have verified the performance against newer versions of MongoDB. The execution results with batch size 1000 for 3.6.19 and 4.2.8 are as follows:Iteration 0 - Elapsed: 7.11 seconds.\nIteration 1 - Elapsed: 6.878 seconds.\nIteration 2 - Elapsed: 7.173 seconds.\nIteration 3 - Elapsed: 7.152 seconds.\nIteration 4 - Elapsed: 6.935 seconds.\nIteration 5 - Elapsed: 7.341 seconds.\nIteration 6 - Elapsed: 7.238 seconds.\nIteration 7 - Elapsed: 7.793 seconds.\nIteration 8 - Elapsed: 6.753 seconds.\nIteration 9 - Elapsed: 6.862 seconds.\nAvg: 7.1235 seconds.Iteration 0 - Elapsed: 7.0 seconds.\nIteration 1 - Elapsed: 6.022 seconds.\nIteration 2 - Elapsed: 7.878 seconds.\nIteration 3 - Elapsed: 5.978 seconds.\nIteration 4 - Elapsed: 6.207 seconds.\nIteration 5 - Elapsed: 6.31 seconds.\nIteration 6 - Elapsed: 7.098 seconds.\nIteration 7 - Elapsed: 6.23 seconds.\nIteration 8 - Elapsed: 6.284 seconds.\nIteration 9 - Elapsed: 6.648 seconds.\nAvg: 6.5655 seconds.The performance on 4.2.8 seems a bit slower than 3.6.19 for some reason. Could it be due to a new feature in 4.2 ? 
Note that I am using MongoDB java driver 3.12.7 and the servers are plain docker containers created from the images in Docker.Regards,\nAziz", "username": "Aziz_Zitouni" }, { "code": "", "text": "Hi @Aziz_Zitouni,There are many factors that can impact testing results, some of those is the test scenarios themselves.Have you changed the journal and wc behaviour?There are some known areas where performance could be impacted by the additional mechanics in future versions as they support more complex logic like transactions/retrayble reads and others.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,I have tried running the same tests with wc set to 1 and journal set to false. I’m seeing the same behavior in relation to the scalability.So, are there options related to the new features you mentioned in MongoDB 4.2 and above that can be tweaked to improve the write performance ?Regards,\nAziz", "username": "Aziz_Zitouni" }, { "code": "W: 1", "text": "Hi @Aziz_Zitouni,I will recommend testing with W: 1 and journal true. Working with w:0 and journal false is not recommended and should not be used in production.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,The best performance I was able to achieve with W: 1 and J: true to insert 1M documents is ~17 seconds on average with a batch size of 50000.\nThe performance starts to degrade when I use batches larger than 50k but I believe it’s due to the additional round trips required when sub batches are created and sent to MongoDB to overcome the maximum size of the bulk operation. So, assuming we have a single MongoDB instance with no secondaries, Is it safe to say that the optimal way to insert 1M in this case is using bulk write with a batch size of 1000 and write concern of 0 or does it depend on other factors ?Regards,\nAziz", "username": "Aziz_Zitouni" }, { "code": "", "text": "Hi @Aziz_Zitouni,If you don’t care that your data will get lost or corrupted along the way then yes.As long as you understand this risk (no durability gurantee and a single node).Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for your help with this @Pavel_Duchovny. I don’t have further questions.Regards,\nAziz", "username": "Aziz_Zitouni" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Performance bottleneck in bulk insertion (Java)
2020-08-18T19:43:19.425Z
Performance bottleneck in bulk insertion (Java)
8,447
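To make the recommendation at the end of the thread above concrete (w:1 with journaling instead of the unacknowledged w:0 runs), here is a minimal, self-contained sketch of the same batched bulkWrite loop. The host, database/collection names and document shape are placeholders, not the original benchmark.

```java
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.InsertOneModel;
import org.bson.Document;

import java.util.ArrayList;
import java.util.List;

public class AcknowledgedBulkInsert {
    public static void main(String[] args) {
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb://localhost:27017")) // placeholder host
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            // w:1 + journal:true — every batch is acknowledged and journaled,
            // unlike the w:0, journal:false runs benchmarked above.
            WriteConcern wc = WriteConcern.W1.withJournal(true);
            MongoCollection<Document> collection = client.getDatabase("test")
                    .getCollection("testCollection")
                    .withWriteConcern(wc);

            int rows = 100_000;   // smaller than the 1M benchmark, just to illustrate
            int batchSize = 1_000;
            List<InsertOneModel<Document>> batch = new ArrayList<>(batchSize);

            for (int i = 0; i < rows; i++) {
                batch.add(new InsertOneModel<>(new Document("key1", "7").append("key2", "8395829")));
                if (batch.size() == batchSize) {
                    collection.bulkWrite(batch); // blocks until this batch is acknowledged
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                collection.bulkWrite(batch); // flush the final partial batch
            }
        }
    }
}
```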
null
[ "indexes", "performance", "golang" ]
[ { "code": "{text: /putin/i, \"timestamp\": {\"$gt\":1590969600, \"$lt\": 1598832000}}\ttopt := options.Find()\n\ttopt.SetBatchSize(300_000)\n\ttopt.SetLimit(300_000)\nctx, cancel = context.WithTimeout(context.Background(), 30*time.Second)\ndefer cancel()\n\ncur, err := database.Collection(\"chat\").Find(ctx, filter, topt)\nif err != nil {\n log.Println(err)\n fmt.Fprint(w, `{\"success\": false}`)\n return\n}\ndefer cur.Close(ctx)\n\nctx, cancel = context.WithTimeout(context.Background(), 30*time.Second)\ndefer cancel()\n\nvar postResult []post\n err = cur.All(ctx, &postResult)\nif err != nil {\n fmt.Fprint(w, `{\"success\": false, \"message\": \"All failing\"}`)\n return\n}\n", "text": "hi, I’m new on this forum, so i have same fetch performance problemQuery : {text: /putin/i, \"timestamp\": {\"$gt\":1590969600, \"$lt\": 1598832000}}.explain(“allPlansExecution”) : https://gist.github.com/batara666/e6c1321fba176d32ff74dd986442bd26Driver : mongo-go-driveroptions.Find():Code :returned chat length are ~36k documents and it tooks ~9sec, i need to make it faster, i really frustrated :(, any suggestions would be greatly appreciated ", "username": "dodo_nosk" }, { "code": "texttext { text : \"text\", timestamp : 1}\n", "text": "Hi @dodo_nosk,The used index is scanning a range of dates and all of its text entires which is expected as indexes do no support a unanchored regular expressions as well as case insensitive search.What you should consider is a text index where field text is indexed compound indexAnd use the $text operator with case insensitive search.MONGODB ATLAS SEARCHAtlas Search makes it easy to build fast, relevance-based search capabilities on top of your MongoDB data. Try it today on MongoDB Atlas, our fully managed database as a service.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "idk, but FTS is slower than indexed-regexp on my mongo", "username": "dodo_nosk" }, { "code": "", "text": "Hi @dodo_nosk,It is possible if the text index cause more work then eventually scanning the keys.The best solution we have for regex searches is with using Atlas and its Atlas search machnics. We also have case insensitive indexes if you are looking for a specific value for case insensitive criteria .Best regards,\nPavel", "username": "Pavel_Duchovny" } ]
MongoDB batch find performance
2020-08-31T21:28:07.711Z
MongoDB batch find performance
3,884
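For reference, the compound text index suggested in the reply above, plus a $text query over the same timestamp window, look roughly like this through the Java driver (the thread itself used the Go driver; host, database and batch size here are placeholders).

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Indexes;
import org.bson.Document;
import org.bson.conversions.Bson;

public class TextSearchExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) { // placeholder host
            MongoCollection<Document> chat = client.getDatabase("test").getCollection("chat");

            // Compound index with a text component on "text" and the timestamp key,
            // mirroring the { text: "text", timestamp: 1 } definition in the reply.
            chat.createIndex(Indexes.compoundIndex(Indexes.text("text"), Indexes.ascending("timestamp")));

            // $text search is case-insensitive by default, unlike the unanchored
            // /putin/i regex in the question, which cannot use the index efficiently.
            Bson filter = Filters.and(
                    Filters.text("putin"),
                    Filters.gt("timestamp", 1590969600),
                    Filters.lt("timestamp", 1598832000));

            for (Document doc : chat.find(filter).batchSize(1000)) {
                System.out.println(doc.get("_id")); // process each matching chat message
            }
        }
    }
}
```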
null
[ "security" ]
[ { "code": "", "text": "Hi All,Just wanted to know whether encryption at rest is free for enterprise edition of mongodb or not?", "username": "Aayush_Sod" }, { "code": "", "text": "Yes it is standard feature of MongoDB enterprise.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB encryption at rest
2020-09-01T10:26:58.893Z
MongoDB encryption at rest
1,619
null
[]
[ { "code": "", "text": "I have faced some issues in MongoDB service. MongoDB community version(3.6.0) for the staging server. I have directly deleted some files (only collections wt file) in mongo folder. so now I have not to repair MongoDB and not start MongoDB service. I have use AWS Linux2 EC2 machine. how to recover my database.", "username": "manpreet_gill" }, { "code": "", "text": "I have directly deleted some filesYou should not do that on any database. You’ve effectively put yourself in a position where your only option is a recovery from backup or export.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB database recover
2020-09-01T08:08:27.554Z
MongoDB database recover
1,606
null
[ "database-tools", "backup" ]
[ { "code": "mongodump and mongorestore cannot be part of a backup strategy for 4.2+ sharded clusters that have sharded transactions in progress, as backups created with mongodump do not maintain the atomicity guarantees of transactions across shards.\nFor 4.2+ sharded clusters with in-progress sharded transactions, use one of the following coordinated backup and restore processes which do maintain the atomicity guarantees of transactions across shards:", "text": "Hi,\nIn MongoDB 4.2 release notes, I see below lines which states that mongodump is not reliable for backup as atomicity cannot be guaranteed. Will acquiring lock using fsyncLock before backup help here?", "username": "Akshaya_Srinivasan" }, { "code": "mongodumpmongodumpmongodmongodump", "text": "Hi Akshaya,The info you quoted is specific to sharded clusters that may have sharded transactions in progress. The following paragraph on that documentation page suggests recommended alternatives: mongodump for sharded clusters.Note that mongodump is generally not a recommended backup strategy where the uncompressed data being backed up is significantly larger than RAM. The mongod process has to read all data to be dumped through memory, so mongodump can have a significant impact on the working set and performance of an active deployment.See MongoDB Backup Methods for an overview of supported backup approaches.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks @Stennie_X , but I do not want to use any 3rd party tool as an alternative. And I want to just dump a database from one cluster and restore it to another cluster.", "username": "Akshaya_Srinivasan" }, { "code": "", "text": "Hi ,\nplease can someone help with this requirement. Thanks in advance.", "username": "Akshaya_Srinivasan" }, { "code": "mongodbump", "text": "If your cluster is not sharded you can use mongodbump with the caveat outlined above that it may impact performance for other users during the dump process.", "username": "Joe_Drumgoole" }, { "code": "", "text": "Thanks Joe. But I am using sharded cluster in my case.", "username": "Akshaya_Srinivasan" }, { "code": "mongodumpmongodump", "text": "I was looking into this as well, and basically even though you can still use mongodump, it seems that there is just no way to use mongodump for consistent, atomic backups when you are using sharded clusters.", "username": "zOxta" }, { "code": "", "text": "To do that right with an on-premise deployment you need to install our commercial product Ops Manager which includes a tool for doing consistent backups of sharded clusters.Our recommendation for most customers is to run your sharded cluster in MongoDB Atlas if you can. It has all the backup technology built in.", "username": "Joe_Drumgoole" } ]
Atomicity for backup using mongodump
2020-04-17T18:33:24.190Z
Atomicity for backup using mongodump
2,374
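On the fsyncLock question that opened the thread: the lock can quiesce writes on a single mongod or replica set member while a dump runs, but it does not coordinate a consistent point in time across shards, which is why the thread lands on Ops Manager or Atlas backups for sharded clusters. A hedged sketch of issuing the underlying fsync/fsyncUnlock commands from the Java driver (the host is a placeholder):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class FsyncLockExample {
    public static void main(String[] args) {
        // Connect directly to a single mongod (placeholder host). On a sharded
        // cluster this would have to be repeated per shard and still would not
        // yield a cluster-wide consistent snapshot.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase admin = client.getDatabase("admin");

            // Equivalent of the db.fsyncLock() shell helper: flush and block writes.
            admin.runCommand(new Document("fsync", 1).append("lock", true));
            try {
                // ... run mongodump (or a filesystem snapshot) against this node here ...
            } finally {
                // Equivalent of db.fsyncUnlock(): release the write lock.
                admin.runCommand(new Document("fsyncUnlock", 1));
            }
        }
    }
}
```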
null
[ "connecting" ]
[ { "code": "", "text": "Created a database in free MongoDB Atlas cloud in AWS, downloaded MongoDB Compass Community in Windows10, testing my connection using Compass. I have pasted my connection string in Compass under “New Connection” tab as : mongodb://my_user:[email protected]:27017/my_db\nand clicked “Connect”. I am getting following error: getaddrinfo ENOTFOUND mycluster-u8ha7.mongodb.net.Can someone pls guide me how to troubleshoot and make this remote connectivity working from my local laptop? Just started playing around with MongoDB. So looking for the initial push to move forward. Thanks in advance.", "username": "Jyoti_Sarkar" }, { "code": "", "text": "That does not look like a connection string from Atlas.You can get one from the connect button on your cluster.", "username": "chris" }, { "code": "", "text": "Thanks Chris for your response. I got the right connect string from Atlas as you have suggested. Initially when I was using mongodb+srv I was getting error, after installing dnspython everything started working. Thanks again.", "username": "Jyoti_Sarkar" }, { "code": "mongodb+srv://sam:<password>@cluster0.tpvpc.mongodb.net/bot", "text": "I am also having an error reason of which is unknown to me. Am giving the connect string given to me from Mongo and applying it to my compass but its showing an error over and over again.\nSame error is also coming when am trying to connect to it from my application’s node.js driver\nmongodb+srv://sam:<password>@cluster0.tpvpc.mongodb.net/bot\nI have replaced the the password tag with actual pass while requesting", "username": "Swarnab_Mukherjee" }, { "code": "", "text": "Can you connect by shell?\nIs your hostname correct\nCan you ping your cluster hosts\nMake sure no spaces/invalid characters while pasting the connect string in Compass\nor it could be some program blocking the port like antivirus", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Most likely your internet provider is using an old DNS server and do not understand seedlist queries. Try using 8.8.8.8 and 8.8.4.4 google’s DNS server.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb remote database connectivity error - getaddrinfo ENOTFOUND
2020-06-09T20:47:00.553Z
Mongodb remote database connectivity error - getaddrinfo ENOTFOUND
72,615
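A quick way to confirm that an Atlas connection string resolves and authenticates, independently of Compass, is to create a client from the SRV URI and issue a ping. The URI below follows the Atlas pattern but is a placeholder, not a real cluster; note that mongodb+srv URIs require a DNS server that answers SRV/TXT queries, which is the same requirement behind the dnspython and 8.8.8.8 hints in the thread.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class AtlasPingExample {
    public static void main(String[] args) {
        // Placeholder credentials, cluster hostname and database name.
        String uri = "mongodb+srv://my_user:my_password@mycluster.example.mongodb.net/my_db";

        try (MongoClient client = MongoClients.create(uri)) {
            // A successful ping confirms DNS resolution, TLS and authentication.
            Document reply = client.getDatabase("admin").runCommand(new Document("ping", 1));
            System.out.println("Connected: " + reply.toJson());
        }
    }
}
```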
null
[]
[ { "code": "", "text": "\nmy MongoDB compass version 1.21.2", "username": "ARIF_NIAZ" }, { "code": "", "text": "This is wrong version. The schema tab is not available in the community edition. Search this forum, you will find instructions to install stable entreprise edition.", "username": "steevej" }, { "code": "", "text": "Hi @ARIF_NIAZ,Please download the 1.21.2 (stable) version of MongoDB using this link.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "hi sir @Shubham_Ranjan , i have already stored some data and i do not want to lose them\nwill the download or upgrade to 1.21.2(stable) cause loss of data?", "username": "Andes_Lam" }, { "code": "mongod", "text": "Hi @Andes_Lam,Data doesn’t get stored in Compass. In your case, most likely it would be an Atlas cluster or a locally running mongod instance.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
I don't find scheme tab
2020-07-11T14:13:07.535Z
I don’t find scheme tab
1,136
null
[]
[ { "code": "{\n \"name\": \"London\",\n \"models\": [{\n \"result\": \"5\"\n }, {\n \"result\": \"9\"\n }, {\n \"result\": \"20\"\n }]\n}\n", "text": "Hi, I have an object that has many fields called “result”. How can I find all object that does not contain any result field that is equal to specific value?\nAll my attempts ended with returning all objects that inclueded at least one result field not equal to specific value…eg:", "username": "Daniel_Reznicek" }, { "code": "db.test.find( { \"models.result\": { $ne: \"200\" } } )\"200\"", "text": "Hello @Daniel_Reznicek, welcome to the community.You can try this query:db.test.find( { \"models.result\": { $ne: \"200\" } } )This finds the documents that does not contain any result field that is equal to specific given value, \"200\".", "username": "Prasad_Saya" } ]
Find objects where none of "result" fields is equal to specific value
2020-09-01T09:51:50.081Z
Find objects where none of “result” fields is equal to specific value
1,720
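The same $ne query from the accepted answer above, expressed through the Java driver for anyone not working in the shell (host, database and collection names are placeholders):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;

public class NoMatchingResultExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) { // placeholder host
            MongoCollection<Document> test = client.getDatabase("test").getCollection("test");

            // { "models.result": { $ne: "200" } } — matches documents where *no*
            // element of the models array has result equal to "200".
            for (Document doc : test.find(Filters.ne("models.result", "200"))) {
                System.out.println(doc.toJson());
            }
        }
    }
}
```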
null
[]
[ { "code": "", "text": "\nThis message shows when I try to connect", "username": "Shun_Lei" }, { "code": "", "text": "Everything is fine. This is just a warning.", "username": "steevej" }, { "code": "", "text": "image1352×363 16.7 KBBut I run this command , this error shows.", "username": "Shun_Lei" }, { "code": "", "text": "You do not have write access to the shared class Cluster.You must upload the data on your own cluster as per the instructions.", "username": "steevej" }, { "code": "Sandbox cluster", "text": "Hi @Shun_Lei,As @steevej-1495 mentioned, you are supposed to load the dataset in your Sandbox cluster.", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Unable to connect to the class Atlas cluster
2020-08-31T16:14:50.017Z
Unable to connect to the class Atlas cluster
1,482