Fields per record: image_url, tags, discussion, title, created_at, fancy_title, views
null
[ "server" ]
[ { "code": "", "text": "Please help . i used db.shutdownServer to shut it down to check for an issue .\nBut now i am unable to restart it .", "username": "Stuart_S" }, { "code": "", "text": "What is your setup like?\nIs it standalone or a replica\nWhat error you get when you start mongod\nCheck errors from mongod.log", "username": "Ramachandra_Tummala" } ]
I am unable to restart mongodb server after using c
2022-04-28T03:36:58.366Z
I am unable to restart mongodb server after using c
1,887
null
[]
[ { "code": "", "text": "I’ve run into the following situation with Atlas Device Sync (M2 cluster on AWS Frankfurt):While Atlas M0 (Free Cluster), M2, and M5 Limitations does mention some limitations, the troughput number for even the free cluster is quite a bit higher.\nIs this expected performance for a M2 cluster?\nIs there some overview of expected performance for larger and dedicated cluster types?Is there any way to see the Sync Server queue (number of unsynchronized objects)? Terminating Sync after schema upgrades seems quite risky if there’s no way to ensure that all pending changesets have been written to Atlas.", "username": "Andreas_Ley" }, { "code": "", "text": "Hi. Im not sure I 100% follow your bullet points of what is being “uploaded” from where to where. I will say that we recommend always using a dedicated instance with sync. This is because:\n(a) You can scale up to bigger instances without terminating sync (M2->M5, or M2->M10+ is not possible without terminating sync)\n(b) You get access to cluster metrics\n(c) You do not get any rate-limiting. One of the biggest issues with Sync and shared-clusters is that there is rate-limiting appliedThus, for any performance-related testing I would highly recommend using >= M10.If you still have any issues, let me know and if you send me the URL to your app in the Atlas / Realm UI I can take a look at the logsThanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks, Tyler.I’ve upgraded to a M10 instance which massively improved performance. Since the project is still in development, performance is not the main issue and I’m more worried about losing data in production later on.Here’s again what happened on the M2 instance:I imported 600,000 objects into a macOS app that’s using a Realm database (with Atlas Device Sync). The import, which ended up in the local .realm file, took 1 minute. The objects were written to the Realm in batches of 1000 objects.The Realm SDK then automatically started to upload the new objects from the local .realm file to the Realm Sync server (Atlas App Services / Device Sync). This took about 15 minutes, iirc.The Realm Sync server obviously cached the whole changeset and then started transferring the new data into Atlas, doing so at a speed of less than 10 objects/second. Writing 600,000 objects to Atlas would therefore take more than 20 hours.When I terminated Device Sync in Atlas App Services, the queue was wiped. 500,000 objects that weren’t yet synchronized to Atlas were gone server-side.\nThe warning that appears when terminating Device Sync doesn’t mention such data loss. I also couldn’t find any way to see the status of Device Sync (e.g. “Writing 500,000 objects to Atlas…” or something along these lines).On the M10 instance, the transfer from Device Sync to Atlas was basically done as soon as the data was uploaded. Still, it would be great to have the ability to see the status/queue of Device Sync in the MongoDB Cloud dashboard.", "username": "Andreas_Ley" }, { "code": "", "text": "Hi, glad to see that the M10 was a much better experience. I have found that it is the minimum required tier for any sync application with real load (600,000 objects is non-trivial).As for the last question, we are planning on releasing a series of metrics endpoints and visualizations in the coming months. 
One of the metrics is not the “size” of the queue but rather the “lag” measured in time of how long the changeset has taken from when it is received by the server to when it is inserted into Atlas and I suspect that will be helpful to you.", "username": "Tyler_Kaye" }, { "code": "", "text": "Hello Andreas,I see my colleague Tyler has done an amazing job resolving this, I would like to add some background context for the behavior you had observed.These behaviors are common for Shared Tier clients.Information about shared tier clustersCluster Recommendations\nMongoDB does not recommend shared tier clusters for full production environments, and we encourage at the least an M10 as a dedicated cluster for a production environment with extreme lows in traffic, but an M20 would be the better starting point. Our official recommendation overall is an M30 cluster for production environments.Notice\nWe cannot guarantee stability on a live production environment on a shared tier cluster for the reasons mentioned above, as these clusters are intended for educational and development environments. As the shared cluster gains tenant density the available resources outside of the dedicated RAM, and CPU allotted to the specific tenant on creation will be spread more thin.Once you moved to the M10, the jump in performance is due to now having dedicated resources like the writer, and so on. Resources that as mentioned above in a shared tier instance would otherwise be shared.I hope this better clears up why you observe such differences using the M10.Regards,\nBrock", "username": "Brock_GL" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Slow throughput on M2
2022-06-13T09:47:06.814Z
Slow throughput on M2
2,949
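For readers following Andreas's import description above, here is a minimal sketch of the batched-write pattern he mentions, in Realm JS. The "Item" schema name and the document shape are illustrative assumptions; the thread does not show the actual model.

```javascript
// Sketch only: batched writes into an open (synced) Realm, as described above.
// "Item" and the contents of `docs` are hypothetical placeholders.
function importInBatches(realm, docs, batchSize = 1000) {
  for (let i = 0; i < docs.length; i += batchSize) {
    const batch = docs.slice(i, i + batchSize);
    // One transaction per batch: small enough to keep sync changesets
    // manageable, large enough to avoid per-object transaction overhead.
    realm.write(() => {
      for (const doc of batch) {
        realm.create('Item', doc);
      }
    });
  }
}
```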
null
[ "aggregation", "queries" ]
[ { "code": "{\n\"FieldA\": [{\"key\" : \"A\", \"value\" : 1234},{\"key\" : \"A\", \"value\" : 5689}],\n\"FieldB\": [{\"key\": \"B\", \"value\": 4567},{\"key\" : \"B\", \"value\" : 4532}]\n}\n", "text": "Hello team,My query is related to mongo update with aggregation pipeline.Input document:FieldA and FieldB may/may not exist in the document during update. Array size is dynamic i.e., can be 1/2/many\nIf exists, add a new key-value pair to the array\nIf not exists, create a new array field with one key-value pairQuestion 1:\nHow to add new element to an existing array or create a new array using update with aggregation pipeline. Since $push cannot be used in update pipeline. Alternates to achieve this?Question 2:\nThe updates are triggered in batches. If there is a failure in updating a batch, undo the updates i.e.,\nIf array size is 1 - unset the keys “FieldA” and “FieldB”\nIf array size is greater than 1 - remove the last element with key = “A” for FieldA and key = “B” for FieldB\nHow to achieve this ?I also update the value of an existing field to a new field using update with mongo aggregation. How to frame the queries for Question 1 and Question 2 and include them as part of update with mongo aggregation pipeline.Regards,\nLaks", "username": "Laks" }, { "code": " test [direct: primary] test> db.coll.insertOne({name: \"Max\"})\n{\n acknowledged: true,\n insertedId: ObjectId(\"62a75465bf5bb07f017b7d7f\")\n}\ntest [direct: primary] test> db.coll.updateOne({name: \"Max\"}, {$push: {fieldA: {key: \"A\", value: 1234}}})\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\ntest [direct: primary] test> db.coll.findOne()\n{\n _id: ObjectId(\"62a75465bf5bb07f017b7d7f\"),\n name: 'Max',\n fieldA: [ { key: 'A', value: 1234 } ]\n}\ntest [direct: primary] test> db.coll.updateOne({name: \"Max\"}, {$push: {fieldA: {key: \"A\", value: 5678}}})\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\ntest [direct: primary] test> db.coll.findOne()\n{\n _id: ObjectId(\"62a75465bf5bb07f017b7d7f\"),\n name: 'Max',\n fieldA: [ { key: 'A', value: 1234 }, { key: 'A', value: 5678 } ]\n}\n", "text": "Hi @Laks and welcome back !Why do you want to use the pipeline update when there is a more simple solution?For your question 1, I would just do this:The $push update operator behaves as you described by default.Question 2: The only way to achieve this properly is to wrap your batch in a multi-doc ACID transactions and abort the transaction if an error occurs in your processing.Regarding your last question:I also update the value of an existing field to a new field using update with mongo aggregation. How to frame the queries for Question 1 and Question 2 and include them as part of update with mongo aggregation pipeline.I’m not sure to understand. 
Can you elaborate or provide an example maybe?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "{\n\"FieldA\": // add an array element or create new array\n\"FieldB\": // add an array element or create new array\n\"FieldC\": \"$varC\" // \"varC\" is an existing field in the document, the value of the existing field will be copied to this new field \"FieldC\"\n}\ndb.collection.updateMany({},\n[{ \"$set\": {\"FieldA\":{\"$cond\" : [ {$isArray: \"$FieldA\" }, {\"$concatArrays\": [ \"$FieldA\",[{\"key\" : \"V\", \"value\": 5678}] ] }, [{\"key\" : \"V\", \"value\": 5678}],\n//same logic for \"FieldB\"\n ]\n);\n", "text": "Hi @MaBeuLux88Thanks for your response.Following are the fields to be updated in mongo document,Without aggregation pipeline, I will not be able to copy “FieldC” from an existing field for each document.\nSo I’m insisting on using updates with aggregation pipeline to make the work easier.The below command worked for Question 1:Question 2: I don’t wish to use transaction management, as the volume is really high and better option is to process them in batches rather than holding the related documents in memory. How can I perform undo for he updates made to FieldA, FieldB", "username": "Laks" }, { "code": "db.coll.findOne()\n{\n _id: ObjectId(\"62a76d0b2567cbdbd1ad3d03\"),\n FieldA: [ { key: 'A', value: 1234 }, { key: 'A', value: 5689 } ],\n FieldB: [ { key: 'B', value: 4567 } ]\n}\ndb.coll.updateOne({}, [{\n \"$set\": {\n \"FieldA\": {\n \"$cond\": [{$eq: [{$size: \"$FieldA\"}, 1]}, \"$$REMOVE\", {\"$slice\": [\"$FieldA\", {\"$add\": [{$size: \"$FieldA\"}, -1]}]}]\n }, \n \"FieldB\": {\n \"$cond\": [{$eq: [{$size: \"$FieldB\"}, 1]}, \"$$REMOVE\", {\"$slice\": [\"$FieldB\", {\"$add\": [{$size: \"$FieldB\"}, -1]}]}]\n }\n }\n}])\n\ndb.coll.findOne()\n{\n _id: ObjectId(\"62a77b1a2567cbdbd1ad3d04\"),\n FieldA: [ { key: 'A', value: 1234 } ]\n}\n", "text": "Without the transaction you have no guarantee that you are undoing what you just updated. Maybe another update was done in the meantime by another system and you are going to alter this instead of what you meant to undo.But if your batch is isolated, I think you could do this:If array is size 1 => $$REMOVE.\nIf array is size 2+ => $slice.Let’s say I want to “undo” 5689 and 4567:I can execute this:Result:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks @MaBeuLux88. This was really helpful", "username": "Laks" } ]
Add new array element - Update with Aggregation pipeline
2022-06-13T10:20:22.370Z
Add new array element - Update with Aggregation pipeline
5,933
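Putting the pieces of this thread together, a single pipeline update can append to an array (creating it when absent) and copy the existing field into FieldC at the same time. This is a sketch using the field names from the thread; "coll" is a placeholder collection name.

```javascript
// Sketch combining the thread's answers into one pipeline update.
db.coll.updateMany({}, [
  {
    $set: {
      // Append to FieldA, or create it as a one-element array if absent.
      FieldA: {
        $cond: [
          { $isArray: ["$FieldA"] },
          { $concatArrays: ["$FieldA", [{ key: "A", value: 5678 }]] },
          [{ key: "A", value: 5678 }]
        ]
      },
      // Copy the value of the existing field into the new FieldC.
      FieldC: "$varC"
    }
  }
]);
```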
https://www.mongodb.com/…96b8c84cb618.png
[ "atlas-functions", "app-services-data-access" ]
[ { "code": "", "text": "As seen in the image above, I am calling find on a collection twice. Both queries should return all the documents in the collection.What’s going on? This function is being called via a specific user which has certain rules and filters applied. Filters will filter the collection so that users see only a subset of the collection.As seen above the first find operation looks at the filtered collection, but the second operation looks at the entire collection.This is also relevant when outside the function and two functions are called together (called as custom resolvers), one resolver looks at a subset of the database, another looks at the entirety of it.This is hindering development, can anyone please let me know if this is a bug, or if this is an issue in my implementation? Thanks", "username": "Imtinan_Azhar" }, { "code": "", "text": "Hi @Imtinan_Azhar welcome to the community!Thanks for the report! It was investigated, and was confirmed to be an Atlas issue.Best regards\nKevin", "username": "kevinadi" } ]
Bug in Realm, Filters are only applied once in calls made together
2022-05-27T06:38:05.429Z
Bug in Realm, Filters are only applied once in calls made together
2,482
null
[]
[ { "code": "> error: \nfailed to execute source for 'node_modules/@slack/bolt/dist/index.js': TypeError: 'exit' is not a function\n\tat pleaseUpgradeNode (node_modules/please-upgrade-node/index.js:28:19(79))\n\tat node_modules/@slack/bolt/dist/index.js:47:38(88)\n", "text": "Hi,\nI was attempting to use the Slack bolt js library in a Ream http endpoint. I added it as an external dependency via the UI (not an upload), and it said it successfully installed it.When I tried to run a basically empty realm function for the endpoint, I got:Is the library just not compatible due to the age of node in Realm, or is there some way I can proceed?On the off chance there’s some simple way to make it work, has anyone seen a bolt Receiver for Realm HTTP Endpoints that one could plumb in and then be able to use the standard bolt event handling mechanisms, etc.? I’m thinking something like the AWS receiver could be done for Realm as well, and would let me hook slack right up to our Mongo Atlas stuff without having to make a Heroku or AWS component in the middle.Thanks for any pointers.", "username": "Rob_Arnold" }, { "code": "", "text": "Out of curiosity, has anyone here used the bolt lib from Slack, or even their old node lib with Realm functions?\nCalling the web API directly is working, but not as nice as having the full sdk.", "username": "Rob_Arnold" } ]
External dependency: @slack/bolt - > runtime error
2022-06-09T00:03:10.083Z
External dependency: @slack/bolt - > runtime error
1,453
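Since Rob mentions that calling the Slack Web API directly works, here is a rough sketch of what that can look like in an Atlas App Services (Realm) function using context.http. The value name "slackToken" and the message payload are assumptions for illustration; this is not an official Bolt receiver.

```javascript
// Sketch: calling Slack's Web API from a Realm/App Services function.
// "slackToken" is a hypothetical stored value/secret name.
exports = async function (channel, text) {
  const token = context.values.get("slackToken");
  const response = await context.http.post({
    url: "https://slack.com/api/chat.postMessage",
    headers: {
      Authorization: [`Bearer ${token}`],
      "Content-Type": ["application/json"]
    },
    body: JSON.stringify({ channel, text })
  });
  // The response body is binary; decode it before parsing.
  return EJSON.parse(response.body.text());
};
```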
null
[ "mdbw22-buildersfest" ]
[ { "code": "", "text": "Session #4 @2PM EST 06/09/2022\nDrop your LEGO creations here!Theme: Robots", "username": "TimSantos" }, { "code": "", "text": "\nIMG_13161920×2560 369 KB\n\n\nIMG_13211920×2560 423 KB\n", "username": "TimSantos" } ]
Brick Building Bonanza #4
2022-06-05T15:35:59.583Z
Brick Building Bonanza #4
2,836
null
[ "mdbw22-buildersfest" ]
[ { "code": "", "text": "Session #3 @1PM EST 06/09/2022\nDrop your LEGO creations here!Theme: High Tech Gadgets", "username": "TimSantos" }, { "code": "", "text": "\nIMG_13151920×2560 397 KB\n\n\nIMG_13181920×2560 323 KB\n\n\nIMG_13201920×2560 457 KB\n", "username": "TimSantos" } ]
Brick Building Bonanza #3
2022-06-05T15:35:42.314Z
Brick Building Bonanza #3
2,633
null
[ "mdbw22-buildersfest" ]
[ { "code": "", "text": "Session #2 @12PM EST 06/09/2022\nDrop your LEGO creations here!THEME: Transportation", "username": "TimSantos" }, { "code": "", "text": "\n20220609_1218511920×2560 283 KB\n\n2nd Place!", "username": "Brian_Chamberlain" }, { "code": "", "text": "\nIMG_13021920×2560 216 KB\n\n\nIMG_13031920×2560 350 KB\n\n\nIMG_13041920×2560 195 KB\n\n\nIMG_13051920×2560 188 KB\n\n\nIMG_13071920×2560 197 KB\n", "username": "TimSantos" } ]
Brick Building Bonanza #2
2022-06-05T15:35:07.027Z
Brick Building Bonanza #2
2,955
null
[ "mdbw22-buildersfest" ]
[ { "code": "", "text": "Session #1 @11AM EST 06/09/2022\nDrop your LEGO creations here!Theme: Animals & Zoo", "username": "TimSantos" }, { "code": "", "text": "Better late than never!\nIMG_12791920×2560 301 KB\n\n\nIMG_12801920×2560 131 KB\n\n\nIMG_12811920×2560 294 KB\n\n\nIMG_12821920×2560 359 KB\n\n\nIMG_12831920×2560 224 KB\n\nIMG_12841920×2560 225 KB\n\n\nIMG_12881920×1440 286 KB\n\n\nIMG_12901920×1440 223 KB\n", "username": "TimSantos" } ]
Brick Building Bonanza #1
2022-06-05T15:34:40.221Z
Brick Building Bonanza #1
2,835
null
[ "mongodb-shell" ]
[ { "code": "DB.prototype.dropDatabase = function() {\n print('Dropping databases is not authorized')\n}\ndb.prototype.dropDatabase = function() {\n print('Dropping databases is not authorized')\n}\n", "text": "In “mongo” using .mongorc.js it was possible to do this:But in “mongosh” using .mongoshrc.js it doesn’t have DB object, and db (lowercase) doesn’t seems to work the same way:How could I override dropDatabase method in the new mongo shell?", "username": "Gilberto_Albino" }, { "code": "db.__proto__.dropDatabase = () => console.log('Dropping databases is not authorized')\ndbAdmindropDatabasedb.runCommand( { dropDatabase: 1 } )\n", "text": "You should be able to do that with:However, I would recommend against overriding prototype methods as that might cause the shell to behave unexpectedly.The best and most secure way to prevent someone (or yourself) from dropping a database is to not give them dbAdmin role. Even when overriding the dropDatabase helper, it’s still possible to drop a database with:", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Indeed you are right.\nAbout the security issues, the dropDatabase() in this case is to some legacy really old Mongo automation that won’t cause any risk in production.\nBut as we’re upgrading Mongo, we’ll take into consideration.\nThanks Massimiliano.", "username": "Gilberto_Albino" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to override DB methods in .mongoshrc.js
2022-06-13T14:00:02.480Z
How to override DB methods in .mongoshrc.js
1,773
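Extending Massimiliano's answer, a .mongoshrc.js can guard several helpers in one loop. This is a sketch with the same caveat as above: it only disables the shell helpers, not the underlying server commands, and the helper list is illustrative.

```javascript
// Sketch for .mongoshrc.js: disable selected destructive shell helpers.
const blockedHelpers = ['dropDatabase', 'shutdownServer'];
for (const name of blockedHelpers) {
  db.__proto__[name] = () =>
    console.log(`${name} is not authorized from this shell`);
}
```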
https://www.mongodb.com/…e26439e1d431.png
[ "aggregation", "atlas-search" ]
[ { "code": "leasingleasing\"leasing\": {\n \"canLease\": true,\n \"defaultPrice\": null,\n \"regionalPrices\": [\n {\n \"regionId\": 32,\n \"amountInCents\": 111487,\n \"expiresOn\": \"2022-07-01T00:00:00Z\"\n },\n {\n \"regionId\": 581,\n \"amountInCents\": 111487,\n \"expiresOn\": \"2022-07-01T00:00:00Z\"\n },\n {\n \"regionId\": 30478,\n \"amountInCents\": 111487,\n \"expiresOn\": \"2022-07-01T00:00:00Z\"\n }\n ]\n}\nregionalPricesembeddedDocumentsleasingleasing.regionalPricesembeddedDocumentsembeddedDocumentsembeddedDocumentsembeddedDocumentsembeddedDocumentsembeddedDocuments\"leasing\": {\n \"fields\": {\n \"canLease\": {\n \"type\": \"boolean\"\n },\n \"defaultPrice\": {\n \"type\": \"number\"\n },\n \"regionalPrices\": {\n \"fields\": {\n \"amountInCents\": {\n \"indexDoubles\": false,\n \"representation\": \"int64\",\n \"type\": \"number\"\n },\n \"regionId\": {\n \"indexDoubles\": false,\n \"representation\": \"int64\",\n \"type\": \"number\"\n }\n },\n \"type\": \"embeddedDocuments\"\n }\n },\n \"type\": \"document\"\n},\nembeddedleasing.regionalPricesembeddedDocumentembedded{\n \"embeddedDocument\": {\n \"path\": \"leasing.regionalPrices\",\n \"operator\": {\n \"range\": {\n \"gte\": 31230,\n \"lte\": 31230,\n \"path\": \"leasing.regionalPrices.regionId\",\n }\n },\n \"score\": {\n \"embedded\": {\n \"outerScore\": {\n \"boost\": {\n \"path\": \"leasing.regionalPrices.amountInCents\"\n }\n }\n }\n }\n }\n}\namountInCents", "text": "I’m attempting to use the new embeddedDocuments type (still in preview as of writing this) but I’m running into a few issues. I can’t tell if these are actual issues or user error, but any advice either way would be much appreciated.For reference, the top-level field from my model that I’m attempting to use here is called leasing and the structure of the leasing field on a typical document is as follows:regionalPrices is the list of documents I’d like to use with the embeddedDocuments type.Also, here is the starting index I’m working with as it relates to the leasing field:\nStarting index values824×325 47.6 KB\nWhen setting up the leasing.regionalPrices in my index with the embeddedDocuments type, I found that the dropdown doesn’t list that field:\nDropdown for field name, only showing the "leasing" option1118×600 35.8 KB\nI can at least work around this by using the JSON editor to set the field type to embeddedDocuments, but why doesn’t the field even show up in the visual editor?After successfully setting my field as embeddedDocuments using the JSON editor, upon loading the Visual Editor to check things out I get the following error message:\n"Your index is incompatible with the Visual Editor. 
Instead, cancel and use the JSON editor to view and refine your existing index."1972×146 40.3 KB\nAfter setting up the embeddedDocuments type, the fields within those documents which I previously had indexed (see the screenshot way up at the top) are apparently no longer present in the index, and the newly changed field doesn’t show any type:Bizarrely, the JSON editor still shows the sub fields:After setting up the index, I can query leasing.regionalPrices using the embeddedDocument query, but when I use the embedded score modifier on a sub-field of the embedded documents:I get the following error:\n'PlanExecutor error during aggregation :: caused by :: Remote error from mongot :: caused by :: path expression for function score requires path "leasing.regionalPrices.amountInCents" to be indexed as numeric'742×140 43.9 KB\nAs shown in the JSON index from issue 3, amountInCents should be indexed as numeric, but apparently the pipeline doesn’t agree.", "username": "Lucas_Burns" }, { "code": "embeddedDocumentsembeddedDocumentembeddedDocumentsembeddedDocumentsembeddedDocumentsembeddedDocumentsembeddedDocuments{\n \"embeddedDocument\": {\n \"path\": \"leasing.regionalPrices\",\n \"operator\": {\n \"range\": {\n \"gte\": 31230,\n \"lte\": 31230,\n \"path\": \"leasing.regionalPrices.regionId\",\n }\n },\n \"score\": {\n \"embedded\": {\n \"outerScore\": {\n \"boost\": {\n \"path\": \"leasing.regionalPrices.amountInCents\"\n }\n }\n }\n }\n }\n}\n{\n \"embeddedDocument\": {\n \"path\": \"leasing.regionalPrices\",\n \"operator\": {\n \"range\": {\n \"gte\": 31230,\n \"lte\": 31230,\n \"path\": \"leasing.regionalPrices.regionId\",\n \"score\": {\n \"boost\": {\n \"path\": \"leasing.regionalPrices.amountInCents\"\n }\n }\n }\n }\n }\n}\nembeddedDocumentembeddedDocumentembeddedDocumentoperatorembeddedDocumentleasing.regionalPrices.amountInCentsembeddedDocumentcompoundembeddedDocumentleasing.defaultPriceembeddedDocumentembeddedDocumentembeddedDocument", "text": "Hi Lucas_Burns,Thanks for your interest in the embeddedDocuments index type and embeddedDocument operator! We are excited to see it being used, and appreciate you taking the time to write up your post in such a detailed way.With respect to issues related to the Visual Index Builder - we do intend to support the embeddedDocuments field type in the visual index builder, but have not implemented support for the new embeddedDocuments field there yet. We do note this in the docs - though it is easy to miss (emphasis is mine):Use the embeddedDocuments type to index fields in documents that are elements of an array. Atlas Search indexes embedded documents independent of their parent document. Each indexed document contains only fields that are part of the embedded document array element. You can’t use embeddedDocuments for date or numeric faceting. 
You can’t use the Atlas UI Visual Index Builder to define fields of embeddedDocuments type.We will change that note to be more prominent, sorry for the confusion.After setting up the index, I can query leasing.regionalPrices using the embeddedDocument query, but when I use the embedded score modifier on a sub-field of the embedded documents:I think the query you might want to run instead isDetails around embeddedDocument operator execution are helpful in better understanding how the score of an embeddedDocument operator is computed, and what fields are available to a function score in different scopes of an embedded document query.We can think of an embeddedDocument operator as being executed in three stages:Said differently, embeddedDocument computes the score of each matching embedded document, combines those scores in a configurable way, and adds that combined score to the net relevance score of result documents.Does it make sense why the only place where embeddedDocument can use values of embedded documents in a function score is in the first “stage” of execution (inside the operator specified by embeddedDocument), before scores of multiple matching embedded documents are combined?Please let me know if that is helpful/resolves your issue, and please let me know if I can clarify anything/help explain anything in more detail!", "username": "Evan_Nixon" }, { "code": "embeddedDocumentsembeddedDocumentsembeddedDocumentsoperatorembeddedDocuments", "text": "Thanks for the reply, Evan!To your point with #1, 2, and 3:As you noted, the documentation says“You can’t use the Atlas UI Visual Index Builder to define fields of embeddedDocuments type”(my bad for not reading thoroughly enough) but it you indicated that there are plans to support this. Since the documentation also specifiesYou can’t use embeddedDocuments for date or numeric faceting\"it feels like I should ask, is there also a plan to support date or numeric faceting for embeddedDocuments eventually?Also, I assume that the missing fields I noted in #3 are due to the missing support for the visual editor and not due to the fields being passed over by the search index system?To #4, I see what you’re saying about the internal operator being the correct place to score the individual sub-documents. That boost → path adjustment also seems to do the trick if I want to boost based on some component of the matching sub-documents.Is there currently a timeline for getting the embeddedDocuments system out of preview and into a fully supported feature?", "username": "Lucas_Burns" }, { "code": "embeddedDocumentsoperatorembeddedDocumentsembeddedDocuments", "text": "Thanks for the reply, Evan!Sure, my pleasure!the documentation also specifiesYou can’t use embeddedDocuments for date or numeric faceting\"Good question! There is not a plan for that support right now.Would that be something that you’d be interested in? I’d love to learn more about your use case - if it is something you’d be interested in, would you mind sharing a little more about the shape of the documents that you’d like to use numeric/date faceting over embedded documents on?Also, I assume that the missing fields I noted in #3 are due to the missing support for the visual editor and not due to the fields being passed over by the search index system?Yes, exactly - our apologies for the confusion. 
Of course, we aim to have both the visual index builder and the JSON index representation show the same accurate state for an index definition - but sometimes when adding new features, visual index builder support lags behind support in the JSON editor by some period of time. If there is ever a question about how an index is configured, the JSON view should be considered the “source of truth” for an index.To #4, I see what you’re saying about the internal operator being the correct place to score the individual sub-documents. That boost → path adjustment also seems to do the trick if I want to boost based on some component of the matching sub-documents.Great, glad this makes sense and is working for you!Is there currently a timeline for getting the embeddedDocuments system out of preview and into a fully supported feature?We are working to move embeddedDocuments out of preview - but we don’t have a timeline for moving this out of preview yet.We do intend to move it out of preview eventually - and really do appreciate your time in writing this post, and your interest in the feature!", "username": "Evan_Nixon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
embeddedDocuments and "embedded" score modifier issues
2022-06-09T20:18:32.249Z
embeddedDocuments and “embedded” score modifier issues
2,348
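For completeness, Evan's corrected query wrapped in a full aggregation call looks roughly like this, with the score applied inside the operator (the first execution stage) rather than through an outer embedded score. "coll" is a placeholder collection name.

```javascript
db.coll.aggregate([
  {
    $search: {
      embeddedDocument: {
        path: "leasing.regionalPrices",
        operator: {
          range: {
            gte: 31230,
            lte: 31230,
            path: "leasing.regionalPrices.regionId",
            // Score each matching embedded document by its amount,
            // before the per-document scores are combined.
            score: { boost: { path: "leasing.regionalPrices.amountInCents" } }
          }
        }
      }
    }
  }
]);
```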
null
[ "server", "release-candidate", "schema-validation" ]
[ { "code": "", "text": "MongoDB 4.2.21-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.20. The next stable release 4.2.21 will be a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.2.21-rc0 is released
2022-06-13T17:23:25.794Z
MongoDB 4.2.21-rc0 is released
2,495
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.4.15-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.14. The next stable release 4.4.15 will be a recommended upgrade for all 4.4 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.4.15-rc0 is released
2022-06-13T17:20:22.024Z
MongoDB 4.4.15-rc0 is released
2,679
null
[ "aggregation", "atlas-cluster" ]
[ { "code": "{\n \"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"plannerVersion\": 1,\n \"namespace\": \"list.list\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {},\n \"queryHash\": \"732BF4BE\",\n \"planCacheKey\": \"732BF4BE\",\n \"winningPlan\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"transformBy\": {\n \"__alias_0\": 1,\n \"createdAt\": 1,\n \"deviceType\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"direction\": \"forward\"\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 5775,\n \"executionTimeMillis\": 11341,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 5775,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 5775,\n \"executionTimeMillisEstimate\": 10248,\n \"works\": 5777,\n \"advanced\": 5775,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 602,\n \"restoreState\": 602,\n \"isEOF\": 1,\n \"transformBy\": {\n \"__alias_0\": 1,\n \"createdAt\": 1,\n \"deviceType\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"nReturned\": 5775,\n \"executionTimeMillisEstimate\": 10248,\n \"works\": 5777,\n \"advanced\": 5775,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 602,\n \"restoreState\": 602,\n \"isEOF\": 1,\n \"direction\": \"forward\",\n \"docsExamined\": 5775\n }\n }\n }\n },\n \"nReturned\": 5775,\n \"executionTimeMillisEstimate\": 11321\n },\n {\n \"$addFields\": {\n \"createdAt\": {\n \"$cond\": [\n {\n \"$eq\": [\n {\n \"$type\": [\n \"$createdAt\"\n ]\n },\n {\n \"$const\": \"date\"\n }\n ]\n },\n \"$createdAt\",\n {\n \"$const\": null\n }\n ]\n }\n },\n \"nReturned\": 5775,\n \"executionTimeMillisEstimate\": 11321\n },\n {\n \"$addFields\": {\n \"__alias_0\": {\n \"year\": {\n \"$year\": {\n \"date\": \"$createdAt\"\n }\n },\n \"month\": {\n \"$subtract\": [\n {\n \"$month\": {\n \"date\": \"$createdAt\"\n }\n },\n {\n \"$const\": 1\n }\n ]\n }\n }\n },\n \"nReturned\": 5775,\n \"executionTimeMillisEstimate\": 11321\n },\n {\n \"$group\": {\n \"_id\": {\n \"__alias_0\": \"$__alias_0\",\n \"__alias_1\": \"$deviceType\"\n },\n \"__alias_2\": {\n \"$sum\": {\n \"$const\": 1\n }\n }\n },\n \"nReturned\": 45,\n \"executionTimeMillisEstimate\": 11329\n },\n {\n \"$project\": {\n \"__alias_2\": true,\n \"__alias_0\": \"$_id.__alias_0\",\n \"__alias_1\": \"$_id.__alias_1\",\n \"_id\": false\n },\n \"nReturned\": 45,\n \"executionTimeMillisEstimate\": 11329\n },\n {\n \"$project\": {\n \"x\": \"$__alias_0\",\n \"y\": \"$__alias_2\",\n \"color\": \"$__alias_1\",\n \"_id\": false\n },\n \"nReturned\": 45,\n \"executionTimeMillisEstimate\": 11329\n },\n {\n \"$group\": {\n \"_id\": {\n \"x\": \"$x\"\n },\n \"__grouped_docs\": {\n \"$push\": \"$$ROOT\"\n }\n },\n \"nReturned\": 42,\n \"executionTimeMillisEstimate\": 11329\n },\n {\n \"$sort\": {\n \"sortKey\": {\n \"_id.x.year\": 1,\n \"_id.x.month\": 1\n }\n },\n \"nReturned\": 42,\n \"executionTimeMillisEstimate\": 11339\n },\n {\n \"$unwind\": {\n \"path\": \"$__grouped_docs\"\n },\n \"nReturned\": 45,\n \"executionTimeMillisEstimate\": 11339\n },\n {\n \"$replaceRoot\": {\n \"newRoot\": \"$__grouped_docs\"\n },\n \"nReturned\": 45,\n \"executionTimeMillisEstimate\": 11339\n },\n {\n \"$limit\": 5000,\n \"nReturned\": 45,\n \"executionTimeMillisEstimate\": 11339\n }\n ],\n \"ok\": 1,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1653052072,\n \"i\": 1\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": 
\"wbTpff+wi+JIk2OTk+7x8AI=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": 7078973919130026000\n }\n },\n \"operationTime\": {\n \"$timestamp\": {\n \"t\": 1653052072,\n \"i\": 1\n }\n }\n}\n", "text": "Hello Charts experts,a kind of simple chart takes quite long. When I explain the aggregation I see a collection scan which I do not get rid of. None of my indexing approaches borough any change and since I have to deal with the result charts is making of the dragged fields I feel very limited.\nWhat do you suggest ? These are less than 6000 documents and I don’t get it better than 11 secs !!!??\nRegards,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Hi @michael_hoeller -Collection scans are expected when rendering charts that do not have any filters. In an unfiltered chart, we need to consider every document, so the scan is inevitable. Indexes are helpful and recommended whenever you have filters or lookups.As to why it’s taking 11 seconds, I’m not sure, but I’m not an expert on query stats or performance tuning. One thing to note is that Charts does cache the results of the query up to the period you specify, so people viewing the charts will get much better performance when the results are in the cache and fresh.Tom", "username": "tomhollander" }, { "code": "", "text": "Hi @tomhollander\nthanks for your answer, I was kind of expecting this answer but was hoping on some kind of secret sauce Concerning the fact that the collection scan takes 11 sec for only 6000 docs, I’d like to dig deeper in this does not sound correct. Does anyone see something odd in the explain statement?Cheers,\nMichael", "username": "michael_hoeller" }, { "code": "db.collection.stats()", "text": "Hi @michael_hoeller11 secs for scanning 6000 docs seems excessive. Can you provide a bit more details so we can reproduce this?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thank you for following up @kevinadi\nI will provide all information right after the MDBW22", "username": "michael_hoeller" }, { "code": "", "text": "Hello @kevinadi\nthanks again for your response. While gathering the data I realized that on (not needed field) in each document has > 5MB data. I created a view and could get rid of the problem.\nThanks a lot!\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to optimize a charts aggregation?
2022-05-20T13:19:12.544Z
How to optimize a charts aggregation?
2,612
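A sketch of the fix Michael describes: a view over the collection that projects away the oversized field so Charts never has to scan it. "bigField" is a placeholder for the >5 MB field he found.

```javascript
// Sketch: create a view for Charts that excludes the large field.
db.createView("list_for_charts", "list", [
  { $project: { bigField: 0 } }
]);
```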
null
[ "api" ]
[ { "code": " CODE=`curl --user \"${{ secrets.PUBLIC_KEY }}:${{ secrets.PRIVATE_KEY }}\" \\\n --digest --include \\\n --header \"Accept: application/json\" \\\n --header \"Content-Type: application/json\" \\\n --request POST \"https://cloud.mongodb.com/api/atlas/v1.0/groups/${{ secrets.PROJECT_ID }}/clusters/cluster-1/backup/snapshots?pretty=true\" \\\n --data '{ \"description\" : \"On Demand Snapshot\", \"retentionInDays\" : 3 }'`\n", "text": "I took the backup of the database using below commandNow I am looking for a way to use the above snapshot the restore back the database using the curl command", "username": "Vikas_Rathore" }, { "code": "", "text": "Hi @Vikas_Rathore and welcome in the MongoDB Community !I think you are looking for this:Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
How to restore the last on demand snapshot taken via api?
2022-06-12T23:25:33.752Z
How to restore the last on demand snapshot taken via api?
2,388
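For readers who land here looking for the concrete call, a restore job can be created against the endpoint Maxime links, in the same curl style as the snapshot request above. The field names below are assumptions based on the Atlas v1.0 backup restore-jobs API and should be verified against the linked docs; the snapshot ID comes from listing the snapshots first.

```sh
# Sketch only - verify the endpoint and fields against the restore-jobs docs.
# <SNAPSHOT-ID> comes from GET .../backup/snapshots on the same cluster.
curl --user "${PUBLIC_KEY}:${PRIVATE_KEY}" --digest \
  --header "Accept: application/json" \
  --header "Content-Type: application/json" \
  --request POST "https://cloud.mongodb.com/api/atlas/v1.0/groups/${PROJECT_ID}/clusters/cluster-1/backup/restoreJobs" \
  --data '{ "snapshotId": "<SNAPSHOT-ID>", "deliveryType": "automated", "targetClusterName": "cluster-1", "targetGroupId": "'"${PROJECT_ID}"'" }'
```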
null
[ "dot-net", "data-modeling", "replication" ]
[ { "code": "", "text": "Hi ,\nAm unable to add secondary node into replica set .getting below error.Can anyone help.{\n“operationTime” : Timestamp(1654863211, 1),\n“ok” : 0,\n“errmsg” : “Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2”,\n“code” : 103,\n“codeName” : “NewReplicaSetConfigurationIncompatible”,\n“$clusterTime” : {\n“clusterTime” : Timestamp(1654863211, 1),\n“signature” : {\n“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),\n“keyId” : NumberLong(0)\n}\n}", "username": "Rojalin_Das1" }, { "code": "", "text": "Hi @Rojalin_Das1 and welcome in the MongoDB Community !Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2Error message seem pretty clear. Can you share your cluster config and how you are trying to add the new node?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Below is the config filesystemLog:\ndestination: file\nlogAppend: true\npath: /database/log/mongodb/mongod.logstorage:\ndbPath: /database/data/mongodb\njournal:\nenabled: trueprocessManagement:\nfork: true # fork and run in background\npidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\ntimeZoneInfo: /usr/share/zoneinfonet:\nport: 27019\nbindIp: 127.0.0.1,hostname # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.#security:#operationProfiling:#replication:\nreplication:\nreplSetName: “SNT-NP-DEV-rs01”#sharding:#auditLog:#snmp:One thing to share here, while doing rs.status(), host name showing as localhost not the actual hostname.\n“members” : [\n{\n“_id” : 0,\n“name” : “localhost:27019”,\n“health” : 1,\n“state” : 1,\n“stateStr” : “PRIMARY”,\n“uptime” : 5876,\n“optime” : {\n“ts” : Timestamp(1654867891, 1),\n“t” : NumberLong(1)\n},", "username": "Rojalin_Das1" }, { "code": "", "text": "Adding node: rs.add(“hostname:27019”)", "username": "Rojalin_Das1" }, { "code": "rs.init()", "text": "The problem was when you did the first rs.init(). I guess you didn’t provide a config there and it took localhost by default.Send a new config with the correct hostname or IP address. Then add the new node.", "username": "MaBeuLux88" }, { "code": "", "text": "Send a new config with the correct hostnamI didn’t get you to send a new config with the correct hostnam. 
Could you please elaborate it", "username": "Rojalin_Das1" }, { "code": "test [direct: primary] test> conf = rs.conf()\n{\n _id: 'test',\n version: 2,\n term: 1,\n members: [\n {\n _id: 0,\n host: 'localhost:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n }\n ],\n protocolVersion: Long(\"1\"),\n writeConcernMajorityJournalDefault: true,\n settings: {\n chainingAllowed: true,\n heartbeatIntervalMillis: 2000,\n heartbeatTimeoutSecs: 10,\n electionTimeoutMillis: 10000,\n catchUpTimeoutMillis: -1,\n catchUpTakeoverDelayMillis: 30000,\n getLastErrorModes: {},\n getLastErrorDefaults: { w: 1, wtimeout: 0 },\n replicaSetId: ObjectId(\"62a35084f93f180c51796d9d\")\n }\n}\ntest [direct: primary] test> conf.members[0].host = 'hafx:27017'\nhafx:27017\ntest [direct: primary] test> rs.reconfig(conf)\n{\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1654870331, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1654870331, i: 1 })\n}\ntest [direct: primary] test> rs.conf()\n{\n _id: 'test',\n version: 3,\n term: 1,\n members: [\n {\n _id: 0,\n host: 'hafx:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n }\n ],\n protocolVersion: Long(\"1\"),\n writeConcernMajorityJournalDefault: true,\n settings: {\n chainingAllowed: true,\n heartbeatIntervalMillis: 2000,\n heartbeatTimeoutSecs: 10,\n electionTimeoutMillis: 10000,\n catchUpTimeoutMillis: -1,\n catchUpTakeoverDelayMillis: 30000,\n getLastErrorModes: {},\n getLastErrorDefaults: { w: 1, wtimeout: 0 },\n replicaSetId: ObjectId(\"62a35084f93f180c51796d9d\")\n }\n}\nrs.status()test [direct: primary] test> rs.status()\n{\n set: 'test',\n date: ISODate(\"2022-06-10T14:12:21.947Z\"),\n myState: 1,\n term: Long(\"1\"),\n syncSourceHost: '',\n syncSourceId: -1,\n heartbeatIntervalMillis: Long(\"2000\"),\n majorityVoteCount: 1,\n writeMajorityCount: 1,\n votingMembersCount: 1,\n writableVotingMembersCount: 1,\n optimes: {\n lastCommittedOpTime: { ts: Timestamp({ t: 1654870331, i: 1 }), t: Long(\"1\") },\n lastCommittedWallTime: ISODate(\"2022-06-10T14:12:11.371Z\"),\n readConcernMajorityOpTime: { ts: Timestamp({ t: 1654870331, i: 1 }), t: Long(\"1\") },\n appliedOpTime: { ts: Timestamp({ t: 1654870331, i: 1 }), t: Long(\"1\") },\n durableOpTime: { ts: Timestamp({ t: 1654870331, i: 1 }), t: Long(\"1\") },\n lastAppliedWallTime: ISODate(\"2022-06-10T14:12:11.371Z\"),\n lastDurableWallTime: ISODate(\"2022-06-10T14:12:11.371Z\")\n },\n lastStableRecoveryTimestamp: Timestamp({ t: 1654870319, i: 1 }),\n electionCandidateMetrics: {\n lastElectionReason: 'electionTimeout',\n lastElectionDate: ISODate(\"2022-06-10T14:09:08.971Z\"),\n electionTerm: Long(\"1\"),\n lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1654870148, i: 1 }), t: Long(\"-1\") },\n lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1654870148, i: 1 }), t: Long(\"-1\") },\n numVotesNeeded: 1,\n priorityAtElection: 1,\n electionTimeoutMillis: Long(\"10000\"),\n newTermStartDate: ISODate(\"2022-06-10T14:09:09.484Z\"),\n wMajorityWriteAvailabilityDate: ISODate(\"2022-06-10T14:09:09.565Z\")\n },\n members: [\n {\n _id: 0,\n name: 'hafx:27017',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 197,\n optime: { ts: Timestamp({ t: 1654870331, i: 1 }), t: Long(\"1\") },\n optimeDate: 
ISODate(\"2022-06-10T14:12:11.000Z\"),\n lastAppliedWallTime: ISODate(\"2022-06-10T14:12:11.371Z\"),\n lastDurableWallTime: ISODate(\"2022-06-10T14:12:11.371Z\"),\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1654870148, i: 2 }),\n electionDate: ISODate(\"2022-06-10T14:09:08.000Z\"),\n configVersion: 3,\n configTerm: 1,\n self: true,\n lastHeartbeatMessage: ''\n }\n ],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1654870331, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1654870331, i: 1 })\n}\n", "text": "You have to use the rs.reconfig() command like below. In this example I change “localhost” to “hafx” which is my actual hostname that other computers on my network can find.Final rs.status()", "username": "MaBeuLux88" }, { "code": "rs.conf()", "text": "rs.conf()Thanks. It woked.\nShall i follow same process for other nodes as well ?So that i can add member into replica set.", "username": "Rojalin_Das1" }, { "code": "", "text": "Thanks a lot . My problem is solved. I have created Replica set successfully.", "username": "Rojalin_Das1" }, { "code": "rs.add(XXX)rs.initiate(xx){\n _id: \"replicaSetName\",\n members: [\n { _id: 0, host: \"hostnameX:27017\" },\n { _id: 1, host: \"hostnameY:27017\" },\n { _id: 2, host: \"hostnameZ:27017\" }\n ]\n}\n", "text": "When you send the rs.add(XXX) command, the hostname and port that you send will be used to add the node in the config. Make sure to use the right hostname / IP address when you add them and you won’t have to reconfigure.Next time, directly pass the right config when you send the rs.initiate(xx) command:Something like this for example:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "conf = rs.conf()", "text": "conf = rs.conf()Hi ,\nAgain having issue today to add nodes into replica set .\nI stopped mongod services and while starting all nodes went secondary .To make one node primary, i have removed all nodes except one node by command(conf.members.splice(1,2) ).\nThen that node automativally come to primary node and when i start to add other nodes, its not adding now.Getting below error. Please help on this issue.Error:\nSNT-DEV-rs01:PRIMARY> rs.add(“hostname:27019”)\n{\n“operationTime” : Timestamp(1655109480, 1),\n“ok” : 0,\n“errmsg” : “Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: hostname:27019; the following nodes did not respond affirmatively: hostname:27019 failed with Error connecting to hostname:27019 (10.36.7.42:27019) :: caused by :: Connection refused”,\n“code” : 74,\n“codeName” : “NodeNotFound”,\n“$clusterTime” : {\n“clusterTime” : Timestamp(1655109480, 1),\n“signature” : {\n“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),\n“keyId” : NumberLong(0)\n}\n}\n}", "username": "Rojalin_Das1" }, { "code": "", "text": "Did you replace hostname with actual hostname in your rs.add command?\nPlease show output of\nhostname\nAlso did you try to add nodes using IP?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Please please please, do not use hostname as your host name. We usually use hostname as place holder to indicate that a real host name should be specified. 
Using hostname as an actual hostname is very confusing and error prone.\nWhat is funny in your error message is that it says that only hostname:27019 responded, and at the same time it says that hostname:27019 did not respond.\nThis makes me think that hostname (did I mention it is a bad choice?) does not resolve to the same IP on all of your hosts.\nIs there any reason why you are not using the default port number?", "username": "steevej" }, { "code": "", "text": "There is no reason not to use the default port, but I want to use a different port. Regarding the error message, I have just replaced the server name with \"hostname\", and the server name is the same in all places.\nI am not sure why the error is coming. Could you please guide me on what I can check and correct?", "username": "Rojalin_Das1" }, { "code": "", "text": "Hi,\nDue to the errors, I uninstalled the mongod package and installed it again on the servers, but while initiating the replica set by passing all the host details with the command you advised earlier, it is throwing the error below. Kindly help me to correct it.rs.initiate({\n… _id : “rs0”,\n… members: [\n… { _id : 0, host: “abc0a:27019” },\n… { _id : 1, host: “abc1a:27019” },\n… { _id : 2, host: “abc2a:27019” }\n… ]\n… })\n{\n“operationTime” : Timestamp(0, 0),\n“ok” : 0,\n“errmsg” : \"replSetInitiate quorum check failed because not all proposed set members responded affirmatively: abc1a:27019 failed with Error connecting to abc1a:27019 (XX.XX.X.XX:27019) :: caused by :: Connection refused, abc2a:27019 failed with Error connecting to abc2a:27019 (XX.XX.X.XX:27019) :: caused by :: Connection refused,\n“code” : 74,\n“codeName” : “NodeNotFound”,\n“$clusterTime” : {\n“clusterTime” : Timestamp(0, 0),\n“signature” : {\n“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),\n“keyId” : NumberLong(0)\n}\n}\n}", "username": "Rojalin_Das1" }, { "code": "", "text": "Hi all,\nMy issue has been resolved after starting mongod with the config file. Replication was set up successfully.\nThank you", "username": "Rojalin_Das1" } ]
Unable to add nodes into Replicaset
2022-06-10T12:39:23.919Z
Unable to add nodes into Replicaset
4,491
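Given the repeated "Connection refused" errors in this thread, a quick reachability check from mongosh before adding members can narrow things down. A sketch, reusing the thread's placeholder hostnames:

```javascript
// Sketch: verify each intended member is reachable before rs.add().
const hosts = ["abc0a:27019", "abc1a:27019", "abc2a:27019"];
for (const h of hosts) {
  try {
    const conn = new Mongo(h); // throws if the host is unreachable
    conn.getDB("admin").runCommand({ ping: 1 });
    print(`${h}: reachable`);
  } catch (e) {
    print(`${h}: NOT reachable - ${e.message}`);
  }
}
```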
null
[ "time-series" ]
[ { "code": "", "text": "Hello,\nLet me start with saying that we are using Mongo and have been doing it for more then sever year, and it is an excellent product.Since the CPU requirements changed in 5.5, my managers tasked me with finding the answers to these questions:If anyone could shed any light on this, that would be awesome\nThank you\nVera Dobryanskaya", "username": "Vera_Dobryanskaya" }, { "code": "", "text": "Hello Vera_Dobryanskaya,Welcome to our forms!Let us know if you have any further questions!", "username": "Xander_Neben" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
One more time about Mongo 5 CPU requirements
2022-06-09T22:54:42.870Z
One more time about Mongo 5 CPU requirements
4,226
https://www.mongodb.com/…3_2_1024x512.png
[]
[ { "code": "", "text": "We have downloaded and created the give applicationContribute to realm/RChat development by creating an account on GitHub.We have followed the following stepsWe need that if user is already login then it should not ask to login again and want to see the conversation had with [email protected] even if mobile network is off. expecting the data should persist locally in device or realm SDKin Firebase / AWS Amplify this functionality is working so can you please help to resolve this problem as soon as possible.", "username": "Kashee_Kushwaha" }, { "code": "userconst app = new Realm.App(appConfig);\n\nconst credentials = Realm.Credentials.jwt(accessToken);\n\ntry {\n // Try to login, but ignore the error to allow continue offline.\n await app.logIn(credentials);\n} catch (err) {\n console.warn(err);\n}\n\nif (app.currentUser) {\n // Ensure that exists a cached or logged user\n throw new Error('Realm login failed: current user not found');\n}\n\nconst realm = await Realm.open({\n schema: config.schema,\n schemaVersion: config.schemaVersion,\n sync: {\n user: app.currentUser,\n partitionValue: config.partition,\n ...\n },\n });\n\n// Done!\n", "text": "I know this is an old thread, but just for the record, Realm caches de user when you log in for the first time.Sou, you can do things like this (React Native version):Docs: https://www.mongodb.com/docs/realm/sdk/node/examples/open-and-close-a-realm/#open-a-synced-realm-while-offline", "username": "Douglas_Junior" } ]
Offline support and handle session if user is already login
2021-01-17T19:32:50.404Z
Offline support and handle session if user is already login
1,959
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Hello,\nwe are exploring React Native + Realm option. We are planning to use SYNC feature. Here is scenario i am trying to solve:any suggestions ?", "username": "Bapu_Hirave" }, { "code": "", "text": "If you leave out 4. (User logout from the app), the user will be able to access data offline (6.)", "username": "Kenneth_Geisshirt" }, { "code": "", "text": "Hi @Kenneth_Geisshirt we see possibility that user will logout from app, when in the network. And when user goes out of network, and user needs to access app/offline data, user wont be able to login to access app/offline data. what do you recommend ?", "username": "Bapu_Hirave" }, { "code": "userconst app = new Realm.App(appConfig);\n\nconst credentials = Realm.Credentials.jwt(accessToken);\n\ntry {\n // Try to login, but ignore the error to allow continue offline.\n await app.logIn(credentials);\n} catch (err) {\n console.warn(err);\n}\n\nif (app.currentUser) {\n // Ensure that exists a cached or logged user\n throw new Error('Realm login failed: current user not found');\n}\n\nconst realm = await Realm.open({\n schema: config.schema,\n schemaVersion: config.schemaVersion,\n sync: {\n user: app.currentUser,\n partitionValue: config.partition,\n ...\n },\n });\n\n// Done!!!\n", "text": "I know this is an old thread, but just for the record, Realm caches de user when you log in for the first time.Sou, you can do something like this (React Native version):Docs: https://www.mongodb.com/docs/realm/sdk/node/examples/open-and-close-a-realm/#open-a-synced-realm-while-offline", "username": "Douglas_Junior" } ]
REALM Offline Login
2020-12-18T04:45:53.049Z
REALM Offline Login
3,706
null
[ "atlas-device-sync" ]
[ { "code": "sync_metadata.realm...\nexports.UserMetadata = {\n name: 'UserMetadata',\n properties: {\n identity: 'string',\n local_uuid: 'string',\n marked_for_removal: 'bool',\n refresh_token: 'string?',\n provider_type: 'string',\n access_token: 'string?',\n identities: 'UserIdentity[]',\n profile: 'profile',\n state: 'int',\n device_id: 'string'\n }\n}\n\nexports.current_user_identity = {\n name: 'current_user_identity',\n properties: {\n current_user_identity: 'string'\n }\n}\n...\nUserMetadatarefresh_tokenaccess_token", "text": "One of the limitations of offline-first apps is that, if a user logs out before going offline, he/she cannot get back in while offline (having no internet access) as authentication happens on the server.When looking through the files Realm Sync uses on the device I noticed a database called sync_metadata.realm, which seems to contain authentication details:Within UserMetadata there is a refresh_token and access_token available after a user logs in and these are cleared when a user logs out. The user identity of each previously logged in user is also saved in this db file.This gave me the idea that this data could possibly be used to re-login previously authenticated users while they are offline. Is there an API available which officially supports this? (Based on previous discussions here, I assume there is not, but if there already is please point me to the API and the rest of my post would be moot).(getting a 403 error repeatedly when trying to post the second part of this… )", "username": "Christian_Wagner" }, { "code": "refresh_tokenaccess_token", "text": "If there is no official API which supports re-login in an offline situation, I thought that maybe I could access and cache the refresh_token and access_token in on-device storage while the user is online and use it to re-login while the user is offline. I know this might have security implications, but this is not a concern for the current app.As a proof of concept, I tested the following using iOS Simulator with the offline-enabled demo Task Tracker React Native app:(still getting a 403 error every when trying to post the remaining part of this… )", "username": "Christian_Wagner" }, { "code": "cd $HOME/Library/Developer/CoreSimulator/Devices/8B6.../data/Containers/Data/Application/2CC.../Documents/mongodb-realm/tasktracker-q.../server-utility/metadatacp -v sync_metadata.realm sync_metadata.loggedin.realmsync_metadata.realmmv -v sync_metadata.realm sync_metadata.loggedout.realmmv -v sync_metadata.loggedin.realm sync_metadata.realmconsole.log('* REALM PATH: ' + Realm.defaultPath);Realm.open(config)sync_metadata.realmrefresh_tokenaccess_token", "text": "(continued… due to 403 errors)The path for the Realm files can be found for example by using: console.log('* REALM PATH: ' + Realm.defaultPath); somewhere after Realm.open(config). And the Realm files can be opened on the desktop using MongoDB Realm Studio.This seems to work in principle, but obviously copying and swapping the sync_metadata.realm file would not be feasible to do from within the app. Yet possibly re-inserting the refresh_token and access_token could be done programmatically while the user is logged out and offline.I am only interested to re-login the most recently logged out user. Would this be feasible and how could I go about doing that in javascript on React Native? 
What side effects or problems could this cause?", "username": "Christian_Wagner" }, { "code": "", "text": "Offline re-login for an offline-first app should actually be built into the Realm framework. Apps like 1Password and some others allow re-login on the same device while offline. As shown above, Realm actually has a database - sync_metadata.realm - which has the potential to cache credentials.\nI will file offline re-login capability as a feature request as soon as I have confirmation that there is no API to do that in the current version.\nAn answer about this, and with regard to the other questions, would really be helpful.", "username": "Christian_Wagner" }, { "code": "@realm/react package", "text": "I thought I was alone on this; it has really nagged me for weeks now, and I’m surprised to see no answer from the MongoDB Realm team.\nI’m using it in React Native with the @realm/react package.\nIf you ever find a solution, please let me know. Cheers from Tanzania", "username": "Raymond_Michael" }, { "code": "const app = new Realm.App(appConfig);\n\nconst credentials = Realm.Credentials.jwt(accessToken);\n\ntry {\n // Try to log in, but ignore the error to allow continuing offline.\n await app.logIn(credentials);\n} catch (err) {\n console.warn(err);\n}\n\nif (!app.currentUser) {\n // No cached or logged-in user exists, so we cannot continue.\n throw new Error('Realm login failed: current user not found');\n}\n\nconst realm = await Realm.open({\n schema: config.schema,\n schemaVersion: config.schemaVersion,\n sync: {\n user: app.currentUser,\n partitionValue: config.partition,\n ...\n },\n });\n\n// Done!\n", "text": "I know this is an old thread, but just for the record, Realm caches the user when you log in for the first time.\nSo, you can do something like this (React Native version):\nDocs: https://www.mongodb.com/docs/realm/sdk/node/examples/open-and-close-a-realm/#open-a-synced-realm-while-offline", "username": "Douglas_Junior" } ]
Re-Login to App while still offline? Could sync_metadata.realm be utilized?
2021-07-23T09:14:41.133Z
Re-Login to App while still offline? Could sync_metadata.realm be utilized?
5,745
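A minimal sketch of the approach this thread converges on: rely on the SDK's own credential cache instead of copying sync_metadata.realm by hand. The app ID, partition value, and schema below are placeholder assumptions, not values from the thread:

```javascript
const Realm = require('realm');

// Hypothetical app ID; substitute your own.
const app = new Realm.App({ id: 'my-app-id' });

async function openRealmOfflineFirst(schema) {
  // If a user logged in on a previous (online) run, the SDK has cached
  // their tokens on disk, so no network round-trip is needed here.
  if (app.currentUser) {
    return new Realm({
      schema,
      sync: { user: app.currentUser, partitionValue: 'my-partition' },
    });
  }
  // First run ever: the device must be online once to authenticate.
  const user = await app.logIn(Realm.Credentials.anonymous());
  return Realm.open({
    schema,
    sync: { user, partitionValue: 'my-partition' },
  });
}
```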
null
[ "atlas-device-sync" ]
[ { "code": " const config = {\n schema: [sessionSchema],\n sync: {\n user,\n partitionValue: 'Test',\n }\n} \n\n Realm.open(config)\n .then((openedRealm) => {\n if (canceled) {\n openedRealm.close();\n return;\n }\n realmRef.current = openedRealm;\n const syncCampaigns = openedRealm.objects('session');\n});", "text": "After I authenticate and sync my react native app with my database cluster using MongoDB Realm Sync, I want to open my app completely offline. How can I do that ? It works fine when I open it online. But, when I open it offline, it does not return anything , neither it gives any error.\nBelow is my code:", "username": "Promit_Rai" }, { "code": "let configuration = AppConfiguration(\n\t\t\tbaseURL: // Custom base URL\n\t\t\ttransport: nil, // Custom RLMNetworkTransportProtocol\n\t\t\tlocalAppName: \"AppName\",\n\t\t\tlocalAppVersion: \"1.0\",\n\t\t\tdefaultRequestTimeoutMS: 30000\n\t\t)\nlet app = App(id: \"someId\", configuration: configuration)\n\nif let user = app.currentUser {\n\t\t\tvar config = user.configuration(partitionValue: \"SHARED\")\n\t\t\tconfig.objectTypes = kSyncedObjectTypes\n\t\t\tdo {\n\t\t\t\tlet realm = try Realm.init(configuration: config, queue: .main)\n\t\t\t}\n\t\t} else {\n\t\t\n let creds = Credentials.jwt(token: \"someToken\"\n\n\t\t\tapp.login(credentials: creds) { result in\n\t\t\t\tswitch result {\n\t\t\t\tcase .success(let user):\n\t\t\t\t\tvar config = user.configuration(partitionValue: \"SHARED\")\n\t\t\t\t\tconfig.objectTypes = kSyncedObjectTypes\n\t\t\t\t\tRealm.asyncOpen(configuration: config, callbackQueue: .main, callback: completion)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n", "text": "Hi,\nI had similar issue regarding using offline only with the iOS SDK. And I found the following solution. Not quite sure what you should use in the React Native SDK, but maybe this can help you anyways.First time I login in the app I’m using the Realm.OpenAsync() function (after successfully logon), to get data from the server. Once the data is received the user is automatically stored behind the scenes in the keychain by the SDK. When I open the app next time I won’t do a Realm.OpenAsync(). I first look at the Realm.App instance and try to get the user from that by doing app.currentUser. If a user is available then I can use that user to initialize Realm.\nSee below:The important thing is that you always need the user to create the configuration and to open the Realm db.\nSo Realm.asyncOpen for the initial load/first time login and from that point you will always need to use Realm.init() from the locally stored user.Hope this helps,\nSujeevan", "username": "Sujeevan_Kuganesan" }, { "code": "", "text": "Do you know how to manage the same situation in React Native SDK. There is no method Realm.init()", "username": "Lukasz_Stachurski" }, { "code": "async function getRealm() {\n const app = new Realm.App(\"your-app-id\");\n if (app.currentUser) {\n // A user had already logged in - open the Realm synchronously\n return new Realm(getConfig(app.currentUser));\n }\n\n // We don't have a user - login a user and open the realm async\n const user = await app.logIn(Realm.Credentials.anonymous());\n return await Realm.open(getConfig(user));\n}\n\nfunction getConfig(user) {\n return {\n sync: {\n user,\n partitionValue: \"my-partition\"\n }\n };\n}\n", "text": "Do you know how to manage the same situation in React Native SDK. 
There is no method Realm.init()I haven't tried this in the React Native SDK, but I think it would be something like this:Maybe you can also look at the discussion following the link: Local Realm open after synchronized on realm cloud - #4 by Ian_Ward", "username": "Sujeevan_Kuganesan" }, { "code": "", "text": "yeah, that works. Thank you so much! It solved my problem", "username": "Lukasz_Stachurski" }, { "code": "function getConfig(user, appId) {\n  return {\n    schema: [Account.schema],\n    sync: {\n      user,\n      partitionValue: `${appId}`,\n    },\n    error: function(session, syncError) {\n      console.log('session ------>', session)\n      console.log('syncError ------>', syncError)\n    },\n  }\n}\n\nasync function handleLogin(){\n  const user = await app.logIn(credentials)\n  const finalUser = await user.functions.getUserData({email})\n  await Realm.open(getConfig(user, finalUser._appId))\n}\n\nconst config = {\n  // schema: [Account.schema],\n  sync: {\n    user,\n    partitionValue: 'abc-123',\n  },\n  error: function(session, syncError) {\n    console.log('session ------>', session)\n    console.log('syncError ------>', syncError)\n  },\n}\ntry {\n  const realm = new Realm(config)\n  const all = realm.objects('Account')\n  console.log('all', all)\n} catch (error) {\n  console.log('error', error)\n}", "text": "Hi @Lukasz_Stachurski, could you please share how you solved it to work offline? I've been having the same problem here with react-native.", "username": "Virmerson" }, { "code": "", "text": "Hi @Lukasz_Stachurski, I'm facing the same problem with RN. I already have my app.currentUser but it's not enough to complete the connection with Realm MongoDB offline. Do you have any working offline example that you can share with us?", "username": "Leonel_Ceolin_Farias" }, { "code": "app.currentUser", "text": "Hi, the idea is to not open a new connection if you already have a user in app.currentUser.\nTake a look at the example given by Sujeevan_Kuganesan", "username": "Lukasz_Stachurski" }, { "code": "", "text": "I did it but it's still not working.Realm.open() only works with a connection. If I want to get all data in "Account", which is a synced collection, how do I do it offline?", "username": "Leonel_Ceolin_Farias" }, { "code": "const realmRef = useRef(null);\nconst config = {\n    schema: [accountSchema],\n    sync: {\n      user,\n      partitionValue: projectId,\n    },\n};\ntry {\n    if (isConnected) {\n      Realm.open(config)\n        .then((openedRealm) => {\n          if (canceled) {\n            openedRealm.close();\n            return;\n          }\n          realmRef.current = openedRealm;\n          const syncAccount = openedRealm.objects('account');\n          setAccount(syncAccount);\n        })\n        .catch((error) => console.warn('Failed to open realm:', error));\n    } else {\n      const localRealm = new Realm(config);\n      realmRef.current = localRealm;\n      const syncAccount = localRealm.objects('account');\n      setAccount(syncAccount);\n    }\n} catch (e) {\n    console.log('ERROR', e);\n}", "text": "This is how I did it:", "username": "Promit_Rai" }, { "code": "userconst app = new Realm.App(appConfig);\n\nconst credentials = Realm.Credentials.jwt(accessToken);\n\ntry {\n  // Try to log in, but ignore the error so the app can continue offline.\n  await app.logIn(credentials);\n} catch (err) {\n  console.warn(err);\n}\n\nif (!app.currentUser) {\n  // Ensure that a cached or logged-in user exists before opening the Realm.\n  throw new Error('Realm login failed: current user not found');\n}\n\nconst realm = await Realm.open({\n    schema: config.schema,\n    schemaVersion: config.schemaVersion,\n    sync: {\n      user: app.currentUser,\n      partitionValue: config.partition,\n      ...\n    },\n  });\n\n// Done!\n", "text": "I 
know this is an old thread, but just for the record, Realm caches the user when you log in for the first time.So, you can do something like this (React Native version):Docs: https://www.mongodb.com/docs/realm/sdk/node/examples/open-and-close-a-realm/#open-a-synced-realm-while-offline", "username": "Douglas_Junior" } ]
Open synced local database when completely offline
2020-11-02T13:12:44.514Z
Open synced local database when completely offline
6,223
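Condensing the answers above into one helper: the decisive branch is a synchronous new Realm(config) when offline versus Realm.open(config) when online. The NetInfo import is an assumption about how connectivity is detected; any boolean flag would do:

```javascript
import Realm from 'realm';
import NetInfo from '@react-native-community/netinfo'; // assumed connectivity source

async function openSyncedRealm(config) {
  const { isConnected } = await NetInfo.fetch();
  if (isConnected) {
    // Online: let the SDK pull the latest server state before returning.
    return Realm.open(config);
  }
  // Offline: open the cached local copy synchronously for the stored user.
  return new Realm(config);
}
```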
null
[ "php" ]
[ { "code": " CURLOPT_POSTFIELDS => '{\n \"dataSource\": \"' . $this->dataSource . '\",\n \"database\": \"' . $this->database . '\",\n \"collection\": \"' . $this->collection . '\",\n \"filter\": { \"type\": { \"$eq\": \"'.$query.'\" } },\n \"createdOn\": {\n \"$gte\": \"2022-04-11T00:08:54Z\",\n \"$lt\": \"2022-04-11T00:08:54Z\"\n }\n}',\nInvalid parameter(s) specified: createdOn", "text": "I am trying to Fetch MongoDB data between specific Date using CURL in PHP. My code is like below.But I am getting message Invalid parameter(s) specified: createdOn.", "username": "foysal_foysal" }, { "code": "", "text": "I am not sure but I suspect that the last closing brace on the filter line should be after the closing brace of the createdOn object.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Fetch MongoDB data between specific Date
2022-06-13T10:18:16.969Z
Fetch MongoDB data between specific Date
2,697
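Spelling out steevej's point: the date-range condition sits outside the filter object, so the service rejects createdOn as an unknown top-level parameter. A corrected request body would nest it inside filter, roughly like this (shown as a plain object for clarity; whether the values need the extended-JSON $date wrapper depends on how createdOn is stored):

```javascript
// Sketch of the corrected Data API "find" body; dataSource/database/collection
// placeholders stand in for the $this-> properties in the PHP snippet.
const body = {
  dataSource: 'myDataSource',
  database: 'myDatabase',
  collection: 'myCollection',
  filter: {
    type: { $eq: query },
    createdOn: {
      $gte: '2022-04-11T00:08:54Z', // note: identical $gte/$lt bounds match nothing,
      $lt: '2022-04-12T00:08:54Z',  // so the upper bound is widened here
    },
  },
};
```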
null
[ "compass" ]
[ { "code": "", "text": "Having a very strange experience with a web server connected to a MongoDB database. I am using NextAuth with MongoDB to store user sessions and account data automatically. It’s creating 3 collections in a database, the “users” “accounts” and “sessions” collections. It is infact working great and storing and retrieving data perfectly (as proven by some debugging i did).However, when I try to find these collections in MongoDB compass they are NOWHERE to be found. I have triple checked my connection string, database name, etc and everything is correct. I am using a DigitalOcean MongoDB database server which makes me think that may I am lacking permissions to view these collections? I can’t think of any other reason why this might be?I’ve tried switching out the connection string to one for a free cluster from MongoDB and it creates the 3 collections as normally and I can see them clear as day. But when I use this digitalocean server i cannot seem to see them in MongoDB compass.If anybody has any insight into this I would greatly appreciate it. I am baffled as to how this is possible.", "username": "Thomas_Peters" }, { "code": "", "text": "If you are able to connect to your server you should see the colllections\nWhat do you see?\nCan you see your collections from shell?\nCheck step 2 from this docMongoDB is the most popular NoSQL database. Let’s quickly setup a single MongoDB server on an DigitalOcean Droplet in 4 easy steps. We can…\nReading time: 3 min read\n", "username": "Ramachandra_Tummala" } ]
Unable to see some collections in MongoDB compass?
2022-06-11T06:41:39.050Z
Unable to see some collections in MongoDB compass?
4,007
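A quick way to separate a permissions problem from a Compass problem is to run the same checks from mongosh with the exact connection string Compass uses; if these list the collections but Compass does not, the issue is the Compass connection, not the data (the database name is an assumption, the collection names are the ones from the thread):

```javascript
// Run in mongosh, connected with the same URI as Compass.
db.getMongo().getDBNames();   // which databases does this user actually see?
use('myDatabase');            // assumed database name
db.getCollectionNames();      // do users/accounts/sessions appear?
db.sessions.countDocuments(); // is the data actually there?
```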
null
[ "swift" ]
[ { "code": "configurationqueue", "text": "thisRealm is the synchronized MongoDB I got from asyncOpen.let rc = thisRealm.configuration\nlet rq = DispatchQueue(label: “readQueue”, qos: .userInteractive, autoreleaseFrequency: .workItem)\nlet readRealm: RealmSwift.Realm // not defined yet.\nrq.sync {\nreadRealm = try! Realm(configuration: rc, queue: rq)\n}In the Xcode quick helpconfigurationA configuration value to use when creating the Realm.queueAn optional dispatch queue to confine the Realm to. If given, this Realm instance can be used from within blocks dispatched to the given queue rather than on the current thread.I’m confused:Am I tempting Realm fatalErrors if I try to use readRealm in multiple serial dispatch blocks? If so, what is the point of the initializer with the queue parameter?", "username": "Adam_Ek" }, { "code": "", "text": "May I ask what the intended use case is? (why are you asking?)I think if we have some clarity on what you’re attempting to do we can provide a more accurate answer or propose some alternatives.", "username": "Jay" }, { "code": "rq.sync {\n readrealm.write {\n ....\n }\n}\nRealm@MainActorqueue:", "text": "I was hoping to be able to use:within some other async code blocks. Avoiding having to usetry! Realm(configuration: rc)within every DispatchQueue block.However, I just found this in the release notes from 10.19.0Looks like I should just ignore the Realm(configuration: rc, queue: rq) initilalizer, it’s been deprecated.", "username": "Adam_Ek" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does the Realm(configuration: rc, queue: rq) queue parameter work like I hope?
2022-05-26T00:01:15.259Z
Does the Realm(configuration: rc, queue: rq) queue parameter work like I hope?
2,109
null
[]
[ { "code": "", "text": "Hi thereI’ve just started learning about MongoDB and I’ll probably go through university first.\nMy main focus is on database administration.\nIn the future I will support MongoDB in the company.\nI am currently working as an Oracle DBA.Cheers \nStephan", "username": "TeMatuaNgahere" }, { "code": "", "text": "Hello @TeMatuaNgahere,Welcome to the community!! Happy to hear that you are learning MongoDB, do post in community in case you get any roadblocks in your learning journey and we would be happy to help… All the best!! Cheers,\nTarun Gaur", "username": "Tarun_Gaur" }, { "code": "", "text": "Hello my name is Elizabeth Williams", "username": "Elizabeth_Sore" } ]
Hello World - Greetings from Dubendorf, Switzerland
2022-06-08T10:45:38.933Z
Hello World - Greetings from Dubendorf, Switzerland
2,565
https://www.mongodb.com/…63e90f650246.png
[ "golang" ]
[ { "code": "go.mongodb.org/mongo-driver/bson/primitive.E composite literal uses unkeyed fieldsgo-vet\nfunc getMovie(c *gin.Context) {\n\tif err := godotenv.Load(); err != nil {\n\t\tlog.Println(\"No .env file found\")\n\t}\n\turi := os.Getenv(\"MONGODB_URI\")\n\tif uri == \"\" {\n\t\tlog.Fatal(\"You must set your 'MONGODB_URI' environmental variable. See\\n\\t https://www.mongodb.com/docs/drivers/go/current/usage-examples/#environment-variable\")\n\t}\n\tclient, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(uri))\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer func() {\n\t\tif err := client.Disconnect(context.TODO()); err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t}()\n\tcoll := client.Database(\"axelrod\").Collection(\"movies\")\n\ttitle := \"Back to the Future\"\n\tvar movie Movie\n\terr = coll.FindOne(context.TODO(), bson.D{{\"title\", title}}).Decode(&movie)\n\tif err == mongo.ErrNoDocuments {\n\t\tfmt.Printf(\"No document was found with the title %s\\n\", title)\n\t\treturn\n\t}\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tjsonData, err := json.MarshalIndent(movie, \"\", \" \")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Printf(\"%s\\n\", jsonData)\n\tc.JSON(http.StatusOK, movie)\n}\n\n\n", "text": "I copied the example of the find operation from the Go driver documentation. When I do I get a warning foron the find field. I created a struct to use in the decode and the document is returned without issue. Can anyone explain what this warning is about? Here is the full function\nimage765×116 6.18 KB\n", "username": "tapiocaPENGUIN" }, { "code": "bson.Ebson.Ebson.Ebson.D{{Key: \"title\", Value: title}}\n", "text": "Hey @tapiocaPENGUIN, thanks for the question! The Go Driver team has also noticed that the default linters installed with the VSCode Go plugin warns about unkeyed fields in composite literals when using the BSON document literal syntax used in most of the Go Driver examples. While it’s reasonably safe to ignore that warning for the bson.E types (we consider changing the order and type of fields in bson.E an API breaking change that would only happen in a different major version of the Go Driver), it’s still annoying to get those linter warnings. Check out GODRIVER-2271, which is a proposal to modify the BSON document literal declaration syntax in a way that doesn’t cause linter warnings in most editors.As far as an immediate solution, you can get around those linter warnings by using keyed fields in the bson.E literals. For example:However, that’s fairly verbose, so many people chose to accept the linter warnings in exchange for more terse syntax.We don’t currently have a timeline for implementing any of the proposed BSON document literal syntax improvements, but please comment on GODRIVER-2271 if you have an opinion about any of them!", "username": "Matt_Dale" }, { "code": "", "text": "Thanks for your reply Matt_Dale", "username": "tapiocaPENGUIN" }, { "code": "func bsonFilter(key string, value string) bson.D {\n\treturn bson.D{{Key: key, Value: value}}\n", "text": "Here’s what I did in my code. It’s not general, but it works for single key/value strings.and I just say bsonFilter(“K”, “V”) instead of bson.d{{“K”, “V”}}", "username": "Spencer_Brown" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go primitive.E
2022-06-10T01:39:05.926Z
MongoDB Go primitive.E
5,162
null
[ "containers" ]
[ { "code": "kubectl exec -it mongodb-0 -- sh\nmongo mongodb://admin:[email protected]/platforms?authSource=admin MongoDB shell version v5.0.9 connecting to: mongodb://mongodb-headless-service.svc.cluster.local:27017/platforms?authSource=admin&compressors=disabled&gssapiServiceName=mongodb Error: couldn't connect to server mongodb-headless-service.svc.cluster.local:27017, connection attempt failed: HostNotFound: Could not find address for mongodb-headless-service.svc.cluster.local:27017: SocketException: Host not found (authoritative) : connect@src/mongo/shell/mongo.js:372:17 @(connect):2:6 exception: connect failed exiting with code 1\nkubectl exec -it mongodb-deployment-c896cf876-djhvl -- sh\nmongo \"mongodb://admin:abc123@localhost/platforms?authSource=admin\"\nMongoDB shell version v4.4.6\nconnecting to: mongodb://localhost:27017/platforms?authSource=admin&compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"23c1e0ce-3369-4449-8895-1a6791246b67\") }\nMongoDB server version: 4.4.6\n---\n....\n....\nmongo mongodb://admin:[email protected]/platforms?authSource=admin\nkubectl describe deplomentapiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mongodb-deployment\n namespace: labs\n labels:\n app: mongodb\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: mongodb\n template:\n metadata:\n labels:\n app: mongodb\n spec:\n containers:\n - name: mongodb\n image: docker.io/bitnami/mongodb:4.4.6-debian-10-r0\n imagePullPolicy: \"IfNotPresent\"\n command:\n - \"mongod\"\n - \"--bind_ip\"\n - \"0.0.0.0\"\n securityContext:\n runAsNonRoot: true\n runAsUser: 1001\n ports:\n - containerPort: 27017\n env:\n - name: MONGO_INITDB_ROOT_USERNAME\n valueFrom:\n secretKeyRef:\n name: mongodb-secret-amended\n key: mongo-root-username\n - name: MONGO_INITDB_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mongodb-secret-amended\n key: mongo-root-password\n volumeMounts: \n - mountPath: /data/db\n name: mongodb-vol\n volumes:\n - name: mongodb-vol\n persistentVolumeClaim:\n claimName: mongodb-claim\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: mongodb-service\n namespace: labs\nspec:\n type: NodePort\n selector:\n app: mongodb\n ports:\n - protocol: TCP\n port: 27017\n targetPort: 27017 \n---\napiVersion: v1\nkind: Service\nmetadata:\n name: mongodb-headless-service\n namespace: labs\nspec:\n selector:\n app: mongodb\n ports:\n - protocol: TCP\n port: 27017\n targetPort: 27017 \n{\"t\":{\"$date\":\"2022-06-12T10:23:14.337+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.339+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.339+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.340+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"mongodb-deployment-c896cf876-djhvl\"}}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.340+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.6\",\"gitVersion\":\"72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7\",\"openSSLVersion\":\"OpenSSL 1.1.1d 10 Sep 2019\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"debian10\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.340+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 10 (buster)\\\"\",\"version\":\"Kernel 5.8.0-43-generic\"}}}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.340+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"net\":{\"bindIp\":\"0.0.0.0\"}}}}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.341+00:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db\"}}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.341+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":10000}}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.341+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.341+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.341+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.341+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.341+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.341+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.341+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.341+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.341+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.342+00:00\"},\"s\":\"I\", 
\"c\":\"STORAGE\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.342+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.342+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.342+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.342+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-06-12T10:23:14.342+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n", "text": "I am trying to connect to a MongoDB deployment from another deployment which is in the same namespace but when I attempt to do so I am getting an error :However when I test the connection in the original pod it authenticates successfullyI have tried specifying namespace but still I get same error :I have checked with kubectl describe deploment whether bind-ip is specified (is this even necessary ?) and I observe its not configured.\nWhen I specify bind-ips in the yml definition for the container as below :I also get an error on container startup :What am I missing ?", "username": "Edmore_Tshuma" }, { "code": "", "text": "What am I missing ?Write permission for the directory /data/db.", "username": "steevej" }, { "code": "", "text": "@steevej is my issue about bind-ips really though. As it is I cannot assign bind-ips , those permissions are not there, bitnami mongo images run as non-root by default and overriding is not possible.Is it not that the containers should discover each other by default (They are already in same namespace and I have used the recommened Mongo-URI pattern", "username": "Edmore_Tshuma" }, { "code": "{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db\"}", "text": "You have to fix your write permission issue first{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db\"}If mongod does not start it cannot bind to any address.", "username": "steevej" } ]
HostNotFound: error when connecting to Mongo pod in same namespace
2022-06-12T13:06:39.854Z
HostNotFound: error when connecting to Mongo pod in same namespace
4,542
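One more thing worth checking alongside the /data/db write permission: in-cluster DNS names for a Service are <service>.<namespace>.svc.cluster.local, and the failing URI above omits the namespace (labs in the manifests). A client in another pod would use the fully qualified name, e.g. in Node.js (the driver usage is a sketch; the credentials and names are the ones from the thread):

```javascript
const { MongoClient } = require('mongodb');

// Fully qualified Service DNS name: <service>.<namespace>.svc.cluster.local
const uri =
  'mongodb://admin:abc123@mongodb-headless-service.labs.svc.cluster.local:27017/platforms?authSource=admin';

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  console.log(await client.db('platforms').command({ ping: 1 })); // { ok: 1 } on success
  await client.close();
}

main().catch(console.error);
```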
null
[ "aggregation", "data-modeling", "many-to-many-relationship" ]
[ { "code": "", "text": "I’m working on a domain in which documents in a few collections (say 4) can have many-to-many relationships with documents in many other collections (say 40). I won’t go into the details of this domain as it’s quite specialised. But to illustrate the problem, you can think about visits to locations.For example, it’s Saturday and I’ve got a lot on! In the morning, I leave home to go to my doctor to get a flu shot. Then I go to the local mall where I visit a few shops, including a restaurant where I have lunch. Next, I go to the beach for the afternoon, followed by dinner at a friend’s place. Then in the evening, I go to the movies before returning home.So there are many different types of location that I could visit (say 40), and in terms of modelling, each type needs its own separate collection because of its specialised data. But it’s easy for each location to have a to-many relationship to all the people that visit it, and to answer questions like how many people visited this doctor or that restaurant on Saturday.But for me, how do I get a list of all the locations I visited on Saturday? I also need a to-many relationships to each type of location (say, to many shops or friends’ homes). So with 40 types of location, that’s 40 different many-to-many relationships. Can I list the locations I visited on Saturday without issuing 40 different queries and then aggregating them?And in-person visits aren’t the only option, as I could also visit the websites of most of those locations, or else I could phone them. So I would have three separate collections for in-person and website visits and phone calls, but in each case I need to answer the same questions.I realise this problem is more suited to a graph database. But I have other needs (such as multi-field unique constraints and change streams) which graph DBs can’t meet.", "username": "Denis_Vulinovich" }, { "code": "", "text": "many other collections (say 40)andmany different types of location that I could visit (say 40),makes me guess that you will have one collection per location type because each location type has different attributes. If that is the case then it is a big NO NO. It is very SQLish.The flexible-schema nature of MongoDB, allows you to have all location types within the same collection and each type have their own set of attributes.", "username": "steevej" } ]
Many-to-many relationships between multiple collections
2022-06-12T01:04:04.404Z
Many-to-many relationships between multiple collections
3,265
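What steevej's suggestion looks like in practice: one locations collection whose documents carry a type plus type-specific attributes, and one visits collection referencing it, so "everywhere I went on Saturday" becomes a single query instead of 40. A mongosh sketch with illustrative names:

```javascript
// One collection for every location type; each document keeps its own attributes.
db.locations.insertMany([
  { _id: 1, type: 'restaurant', name: 'Mall Bistro', cuisine: 'Italian' },
  { _id: 2, type: 'beach', name: 'North Beach', lifeguard: true },
]);

// One visit document per (person, location, time), the same shape for all types.
db.visits.insertOne({
  person: 'denis',
  location: 1,
  channel: 'in-person', // or 'website' / 'phone'
  at: ISODate('2022-06-11T12:30:00Z'),
});

// All locations one person visited on Saturday, regardless of type:
db.visits.aggregate([
  { $match: { person: 'denis', at: { $gte: ISODate('2022-06-11'), $lt: ISODate('2022-06-12') } } },
  { $lookup: { from: 'locations', localField: 'location', foreignField: '_id', as: 'where' } },
]);
```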
null
[ "swift" ]
[ { "code": "", "text": "Hi,\nOur project is pretty large so I’m trying to optimise the build steps.\nI’m seeing that if Realm is added into the project as a SPM dependency the build time increases with at least 3 minutes in CircleCI over importing the framework as a .xcframework file.\nI’ve implemented the caching strategy as explained here: Swift Package Manager(SPM) And How To Cache It With CI | by Volodymyr Bondar | Uptech | Medium\nCan you please let me know if I am missing something, or is this a normal behaviour ?", "username": "Mihai_Dumitrache" }, { "code": "", "text": "Are you saying it increased by 3 minutes AFTER you implemented a caching strategy?There are a LOT of reasons the time could have increased so it’s going to be hard to say - are they located in the project directory? Are they located a folder somewhere else? Did you set up the CI Config file? There are lots of variables that could slow down the process - if that’s the question.", "username": "Jay" }, { "code": "steps:\n - checkout\n - restore_cache: \n key: spm-cache-{{ checksum \"TestCircleci/TestCircleci.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved\" }}\n - run: bundle check || bundle install --path vendor/bundle\n - run:\n name: fastlane\n command: bundle exec fastlane build_beta\n - save_cache:\n paths:\n - SourcePackages/\n key: spm-cache-{{ checksum \"TestCircleci/TestCircleci.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved\" }}\nlane :build_beta do |options|\n build_app(\n project: \"TestCircleci/TestCircleci.xcodeproj\",\n clean: false,\n cloned_source_packages_path: \"SourcePackages\"\n )\n end\n", "text": "Hi @Jay,\nThanks for your reply.I’ll try to give you a few more details about the problem. In the meantime I’ve created a basic project with a single ViewController and just realm added as dependency. 
I’ve attached a screenshot from circleci with the following info:My .circleci/config.yml looks like this:And fastlane/Fastfile looks like this:", "username": "Mihai_Dumitrache" }, { "code": "", "text": "@Mihai_DumitracheFor clarity…You’ve got an XCode 13 Project (not Xcode Cloud), Swift, with a single view, using SPM and the project takes either 7 minutes to build or 13 minutes to build?As in… you’re looking at your code, press Command-B and it’s done compiling 7 minutes later?If so, I would like to insert a head exploding emojii here as it would make my head explode waiting that long in between builds - that would indicate you’ve got something else going on…If that’s not the case, please advise.", "username": "Jay" }, { "code": "", "text": "These times are from building the app in a CI environment where the project needs to be cloned + compiled from the start.\nLocally you’ll have DerivedData that will speed up the compile process for next builds.", "username": "Mihai_Dumitrache" }, { "code": "", "text": "I am prepared to share with you the CircleCI compile logs but I’m not allowed to add attachments here.\nI’ve uploaded it here in the meantime: file", "username": "Mihai_Dumitrache" }, { "code": "", "text": "@Mihai_DumitracheI don’t have first hand experience with CircleCI but with the number of reports of projects timing out during builds (non-Realm ones) or having long build times, it seems the issue is common.One suggestion was to throw more CircleCI resources at it (Ram & CPU) by upgrading to a different tier and then a second option was parallelism to split the project build among multiple resources - some reports indicate it improves testing and build times.In a nutshell, while your are using Realm, there are a lot of others that are not and they are having a similar issue with crazy long build times.If nobody else here can help with that, it may be a good idea to open a support ticket for guidance - perhaps there’s an (another) issue with SPM (I am not an SPM fan at this time)https://support.circleci.com/hc/en-us/requests/new", "username": "Jay" } ]
Realm cache when added with SPM in CircleCI
2022-06-08T20:23:04.987Z
Realm cache when added with SPM in CircleCI
2,307
https://www.mongodb.com/…_2_472x1024.jpeg
[ "react-native", "android", "typescript" ]
[ { "code": "reanimated 2realm/react<AppProvider id={SYNC_CONFIG.appId}>\n <UserProvider fallback={SignIn}>\n <AppSync>\n <AppContextProvider> <-- this is my own context\n <PortalProvider>\n <App />\n </PortalProvider>\n </AppContextProvider>\n </AppSync>\n </UserProvider>\n</AppProvider>\n\nconst formsResult = useQuery(Form);\n\n// ... other code\n\nuseEffect(() => {\n function selectInitialForm() {\n if (currentUser) {\n const dbForms = formsResult.sorted('name');\n\n if (dbForms.length > 0) {\n setCurrentForm(dbForms[0]); //<----- Logs inform this line is causing the crashes.\n }\n }\n }\n selectInitialForm();\n\n }, [currentUser, formsResult]);\nTypeError: object._objectId is not a function. (In 'object._objectId()', 'object._objectId' is undefined)\n\nThis error is located at:\n in UserStatisticsProvider (created by AppContextProvider)\n in AppContextProvider (created by RealmAppWrapper)\n in Unknown (created by AppSync)\n in AppSync (created by RealmAppWrapper)\n in UserProvider (created by RealmAppWrapper)\n in AppProvider (created by RealmAppWrapper)\n in ThemeProvider (created by RealmAppWrapper)\n in GestureHandlerRootView (created by GestureHandlerRootView)\n in GestureHandlerRootView (created by RealmAppWrapper)\n in RealmAppWrapper (created by ExpoRoot)\n in ExpoRoot\n in RCTView (created by View)\n in View (created by AppContainer)\n in RCTView (created by View)\n in View (created by AppContainer)\n in AppContainer\nat node_modules/react-native/Libraries/Core/ExceptionsManager.js:104:6 in reportException\nat node_modules/react-native/Libraries/Core/ExceptionsManager.js:172:19 in handleException\nat node_modules/react-native/Libraries/Core/ReactFiberErrorDialog.js:43:2 in showErrorDialog\n\nTypeError: object._objectId is not a function. (In 'object._objectId()', 'object._objectId' is undefined)\n\nThis error is located at:\n in UserStatisticsProvider (created by AppContextProvider)\n in AppContextProvider (created by RealmAppWrapper)\n in Unknown (created by AppSync)\n in AppSync (created by RealmAppWrapper)\n in UserProvider (created by RealmAppWrapper)\n in AppProvider (created by RealmAppWrapper)\n in ThemeProvider (created by RealmAppWrapper)\n in GestureHandlerRootView (created by GestureHandlerRootView)\n in GestureHandlerRootView (created by RealmAppWrapper)\n in RealmAppWrapper (created by ExpoRoot)\n in ExpoRoot\n in RCTView (created by View)\n in View (created by AppContainer)\n in RCTView (created by View)\n in View (created by AppContainer)\n in AppContainer\nat node_modules/react-native/Libraries/Core/ExceptionsManager.js:104:6 in reportException\nat node_modules/react-native/Libraries/Core/ExceptionsManager.js:172:19 in handleException\nat node_modules/react-native/Libraries/Core/setUpErrorHandling.js:24:6 in handleError\nat node_modules/expo-dev-launcher/build/DevLauncherErrorManager.js:44:19 in errorHandler\nat node_modules/expo-dev-launcher/build/DevLauncherErrorManager.js:49:24 in <anonymous>\nat node_modules/expo-error-recovery/build/ErrorRecovery.fx.js:12:21 in ErrorUtils.setGlobalHandler$argument_0\n", "text": "Hello @Andrew_Meyer i think i need you help again please. Since my last problem related here i have decided to remake my app from scratch without using reanimated 2.But the app now is crashing once, on the first data loading. 
Then, if I reopen it, things go as they should.I'm currently using the hooks exported by the realm/react lib and I think I'm doing something wrong.This is how I'm using 'realm/react':This is the code that crashes my app, just the first time it is opened.These are the logs when it happens:\nrealm-error738×1600 60.8 KB\n", "username": "Diego_Jose_Goulart" }, { "code": "", "text": "I think this is related to [Needs help] Flexible sync not working on expo 45 · Issue #4625 · realm/realm-js · GitHub\nThere appears to be a conflict with how we link the Android lib and Expo 45. Hopefully we will have a solution in the near future. Until then, the only workaround I can provide is to use Expo 44.", "username": "Andrew_Meyer" }, { "code": "", "text": "@Andrew_Meyer I'm currently using Expo 44 and partition-based sync. \nShould I try flexible? Is it production ready?Are these errors occurring on React Native apps without Expo?", "username": "Diego_Jose_Goulart" }, { "code": "-beta@realm/react", "text": "@Diego_Jose_Goulart flexible sync is officially released this week, so we are directing all users to migrate there.\nThe issues you are currently having are only with Expo. We just need to do a new release of our -beta branch that is compatible with @realm/react and then that will also be compatible with the newest Expo.", "username": "Andrew_Meyer" }, { "code": "", "text": "@Andrew_Meyer I will make a git repository with a runnable example of my issue. Maybe you can give it a try, please?\nIn the meantime, I will test a version of my app without Expo and with flexible sync; hopefully it will be more stable.", "username": "Diego_Jose_Goulart" }, { "code": "", "text": "Can we use flexible sync in production now?", "username": "Zubair_Rajput" }, { "code": "", "text": "Yes, it has been officially released for production.", "username": "Andrew_Meyer" } ]
App is crashing one time before the realm data loads
2022-06-09T21:51:28.286Z
App is crashing one time before the realm data loads
4,392
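Since the thread ends on flexible sync being released, this is roughly what the switch looks like in realm-js: a flexible-sync config plus an initial subscription for the Form objects queried above (a sketch; the schema object name is assumed, and the current SDK docs should be checked for exact options):

```javascript
const realm = await Realm.open({
  schema: [FormSchema], // assumed schema object for the Form class
  sync: {
    user: app.currentUser,
    flexible: true, // flexible sync instead of partitionValue
  },
});

// Subscribe to the Form objects the app reads with useQuery(Form).
await realm.subscriptions.update((mutableSubs) => {
  mutableSubs.add(realm.objects('Form'), { name: 'allForms' });
});
```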
null
[ "mdbw22-communitycafe" ]
[ { "code": "", "text": "A “retrospective” on the MongoDB Hackathon!", "username": "TimSantos" }, { "code": "", "text": "@Shane_McAllister and @Mark_Smith announcing the MongoDB World 2022 Hackathon winners!\nimage1920×1440 155 KB\n", "username": "TimSantos" }, { "code": "", "text": "Shoutouts to the MongoDB Forums! \nimage1920×1440 137 KB\n", "username": "TimSantos" }, { "code": "", "text": "622 Participants in 57 countries! What an awesome community built!\nimage1920×1440 115 KB\n", "username": "TimSantos" }, { "code": "", "text": "I was able to join the live stream after 1/2 hour. Would like to know when can we get the recordings of the event?\nIn WebEx it’s only showing live events.Also would like to know which team became the winner and which team became runner up?", "username": "Avik_Singha" }, { "code": "", "text": "Hey @Avik_Singha We’re trying to get the recording to share from our. events team. I will post it here when we have it.I’ve just posted the winners HERE", "username": "Shane_McAllister" }, { "code": "", "text": "Thanks a lot for posting.", "username": "Avik_Singha" } ]
#MDBW22 Hackathon Roundup!
2022-06-06T14:07:50.587Z
#MDBW22 Hackathon Roundup!
4,559
null
[ "node-js", "realm-web" ]
[ { "code": "realm-web", "text": "Can Realm Web SDK be used on the server?\nWe have concerns that realm-web exposes the data in the straightforward way that looks similar to MongoDB query including database and collection names in request payload when used on the client.", "username": "rkazakov" }, { "code": "", "text": "Hi Ruslan,Welcome to the forum.Can Realm Web SDK be used on the server?Short answer is yes - see the README for information on two additional peer dependencies that you need to install to use Realm Web from a Node.js process.The question is if you want to do this. The main incentive for Realm Web being able to run in a Node.js process is to make it easier to write SSR React apps and to make your components testable. If you want to simply access the data stored in your MongoDB cluster, you have multiple alternatives available for you (besides Realm Web):Hope this helps.", "username": "kraenhansen" }, { "code": "", "text": "Hi Kræn,Thank you for your response! We are aware of different ways to access MonogDB data. However, Realm JS Node SDK is not an option in case of using Next.js/AWS Lambda or similar functions as Realm Node SDK seems to be requiring file system access.We like the simplicity of Realm Web SDK, but thinking to run it on the server. In this case we do not expose the database details and this seems more secure.Apologies, if this is a different question and I am happy to edit my answer here and create a new question.\nDo you have recommendation on how to set up authentication with Realm App Users in this case? Is it an option to run Realm Web SDK on the client to authenticate and then pass credentials to server requests and use Realm Web on the server to read the data from MongoDB?Thanks,Ruslan", "username": "rkazakov" }, { "code": "", "text": "Realm JS Node SDK is not an option in case of using Next.js/AWS Lambda or similar functions as Realm Node SDK seems to be requiring file system access.I am curious why you see it this way. Realm JS does require fille system access when storing an authenticated users access and refresh tokens. To my knowledge AWS Lambda does provide an ephemeral file system and using Realm JS should be possible in that case too.Just to make it clear, Realm JS also includes a MongoDB client which allows accessing data without having to use its sync capabilities (which would store data on the filesystem) with an API that should be equivalent to Realm Web.Do you have recommendation on how to set up authentication with Realm App Users in this case? Is it an option to run Realm Web SDK on the client to authenticate and then pass credentials to server requests and use Realm Web on the server to read the data from MongoDB?Generally speaking, a strength of the MongoDB Realm platform, is that it doesn’t need a server component. Not that it wouldn’t work with in a combination with a server component, but it’s not its primary use case.The Realm Web SDK doesn’t provide a public (dehydrate & hydrate) APIs enabling transferring the access and refresh tokens of an authenticated user from the client to a server. Is this what you’re thinking of?\nOne alternative might be to enable the API key authentication provider, create an API key on the client-side and pass that to the server which can then authenticate on behalf of the user and make requests.", "username": "kraenhansen" }, { "code": "realm-webrealmsync_metadata.realmERROR\tError: make_dir() failed: Read-only file system Path: /var/task/mongodb-realm/", "text": "I am curious why you see it this way. 
Realm JS does require fille system access when storing an authenticated users access and refresh tokens. To my knowledge AWS Lambda does provide an ephemeral file system and using Realm JS should be possible in that case too.We are using Realm with NextJS and realm-web seems to work fine. However, realm does not work. Locally I can see that it creates a number of files like sync_metadata.realm, etc in the project folder. It must be doing the same on the server causing this error in Next.js logs:ERROR\tError: make_dir() failed: Read-only file system Path: /var/task/mongodb-realm/Just to make it clear, Realm JS also includes a MongoDB client which allows accessing data without having to use its sync capabilities (which would store data on the filesystem) with an API that should be equivalent to Realm Web.Yes, on the server, we are looking to use just MongoDB access.The Realm Web SDK doesn’t provide a public (dehydrate & hydrate) APIs enabling transferring the access and refresh tokens of an authenticated user from the client to a server. Is this what you’re thinking of?Yes, something like that.One alternative might be to enable the API key authentication provider, create an API key on the client-side and pass that to the server which can then authenticate on behalf of the user and make requests.Thanks, we will investigate this further.", "username": "rkazakov" }, { "code": "// client\nconst app = new Realm.App({ id: '<ID>' });\nawait app.logIn(Realm.Credentials.emailPassword('[email protected]', 'password'));\nawait app.currentUser?.apiKeys.create('testKey');\nawait app.currentUser?.apiKeys.enable('testKey');\nconst apiKey = await app.currentUser?.apiKeys.fetch('testKey');\n// pass apiKey to the server?\n// server\nconst clientKey = getKeyFromClient();\nawait app.logIn(Realm.Credentials.apiKey(clientKey));\n // invalid API key (status 401)\n", "text": "@kraenhansenOne alternative might be to enable the API key authentication provider, create an API key on the client-side and pass that to the server which can then authenticate on behalf of the user and make requests.I am trying to experiment with this, but I am not sure how to create an apiKey on the client:What is the right way to do that?", "username": "rkazakov" }, { "code": "", "text": "I realize this topic is over a year old but trying to use Realm with an app that requires SSR does not work. Even after installing the additional packages in order to use realm-web in node. The only way I can get Realm to work with Next.js is to use a wacky architecture that makes it difficult for me to reuse code between web and node executions. In a world where everyone is doing SSR, Realm is alienating developers because of the need for separate SDK’s. Look at all the successful JAMStack databases available today, they all have one js SDK for both node and browser.I’m just about ready to punt on this MongoDB idea and switch to something that is more friendly to modern tech. I’ve wasted way too much time on this already.As for any Angular developers looking to use Realm in your SSR app, don’t even bother.", "username": "Jesse_Beckton" }, { "code": "", "text": "Have you managed to find a solution? I have tried to follow your path, but unfortunately, no progress. It seams that you can obtend the key secret only at time of creating. It is, however, solvable. You can delete the key and create it again. But does it make sense? 
I think it is easier to implement the authentication from the ground up with a sessions approach than to set up crutches.", "username": "Alexandru_Tocar" }, { "code": "", "text": "Hello, I am currently trying to solve authentication for a Remix app that will be hosted via Cloudflare Pages. As such, I have no choice but to use the Realm Web SDK. And it is perfectly fine, until the very essential part of the application: user authentication. The thing with Remix is that, following the principles of SSR, it pre-fetches data on the server side and sends the user an already-hydrated HTML page. It means that everything database-related must happen on the server “on behalf of the user”. Well, I get it: whenever there is something done on behalf of a user, we use API Key authentication. That matches the previous answers I got so far. Now, because I switched to Realm authentication for the very reason of not having to deal with complicated authentication flows, I have a question about how I proceed with the idea of User API Keys.Let's say I have logged in my user via the Web SDK on the front end. How do I communicate my user's authentication to my server-side function? Is it even possible?", "username": "Alexandru_Tocar" } ]
Can Realm Web SDK be used on the server?
2021-01-21T12:53:24.108Z
Can Realm Web SDK be used on the server?
5,470
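For reference, the server-side half of the API-key idea discussed above, with realm-web running under Node (after installing its documented peer dependencies). The key's secret is only present on the object returned by apiKeys.create(); fetching the key later returns metadata without the secret, which is one way to end up with the "invalid API key" 401 shown earlier. The app ID and transport function are assumptions:

```javascript
// Client side: capture the secret at creation time; it is not retrievable later.
const keyDoc = await app.currentUser.apiKeys.create('serverKey');
sendToServer(keyDoc.key); // hypothetical transport to your backend

// Server side (Node): authenticate with the received key and query Atlas.
const Realm = require('realm-web');
const serverApp = new Realm.App({ id: 'my-app-id' }); // assumed app ID

async function findDocs(clientKey) {
  const user = await serverApp.logIn(Realm.Credentials.apiKey(clientKey));
  const coll = user.mongoClient('mongodb-atlas').db('mydb').collection('mycoll');
  return coll.find({}); // App Services rules still apply to this user
}
```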
null
[ "dot-net", "java", "100daysofcode" ]
[ { "code": "", "text": "Hello everyone,I am currently working as a Full Stack Developer with focus on Angular, JS, Java and .Net Core, yet sometimes I find myself skeptical in the fundamentals of development. Hence, I have been meaning to improve my skillset starting from the basics Thanks to my dear friend and tech enthusiast @SourabhBagrecha for inviting me to this community and motivating me to take this #100daysofcode challenge.I am looking forward to share my daily updates and progress over the time.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Day 1: Implemented pagination with filter in Angular.\nAlso went back to the basics ! Revised and did hands on at JS Fundamentals :- Keywords, Destructuring arrays/objects, Rest & Spread operators.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Day 2: Continued the course on JS , topics covered were type conversions, controlling loops, Operators, Function and Block scopes, IIFE’s and Closures.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Day 3: Continuing with JS Fundamentals - this, call, apply and bind methods, Arrow functions, Constructor functions, Prototypes, JSON, Array Methods.\nScreenshot (60)1920×1080 190 KB\n", "username": "Shreya_Bhanot" }, { "code": "", "text": "Day 4: Learnt about JS classes, methods, constructors and inheritance. Uses of Module, window object, timers, BOM, DOM and modifying elements of it. Lastly, Error Handling and user defined errors and Promises.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Day 5 : Learnt few things in Angular :- Creating validation for input field using dynamic list, setting conditions for cascading dropdown. Keyup, keydown events with keycode.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Learnt about Callback functions and its limitations, use of Promises. Dug deeper into how JS executes & Execution Context. Concept of Hoisting in JS and also explored some of the developer tools.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Concepts on how JS functions work, their scope, global context and its associated window object and this keyword. Learnt about lexical environment and the working of scope chain, undefined v/s not defined.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Gone through concepts of let, const and temporal dead zone. Also their scope, block scope, shadowing and illegal shadowing in JS.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Went deeper into understanding Closures.\nAlso learnt setTimeout + closures, but would need more clarity on this.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Finally got closure on Closures with setTimeout functions, and that would be enough for the weekend.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Learnt some of the cool jargons of JS, difference in function statement and function expression, anonymous functions, named function expression and first class functions.", "username": "Shreya_Bhanot" }, { "code": "", "text": "How Event Listeners and callback functions work, blocking of the main thread, closure along with event listener, garbage collection & event listeners.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Learnt about Event Loops, calling of Web APIs, CallBack Queue, MicroTask Queue and how JS works asynchronously in web browsers. 
setTimeout functions working along with fetch and event listeners.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Theory of JS Engine Architecture, JavaScript Runtime Environment, JIT compilation, Syntax Parsers, Garbage Collector, and how things work behind the scenes in Google's V8 JS Engine and its architecture.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Learnt about Higher Order Functions in JS and its functional programming, polyfill using map.\nAlso about the concurrency model in JS, using the example of setTimeout and setTimeout with 0 delay.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Use cases of Higher Order Functions like Map, Reduce & Filter, with chaining.", "username": "Shreya_Bhanot" }, { "code": "", "text": "Started a much-talked-about course on Udemy for Web development, beginning with an introduction to the Web, the Internet, the Request/Response cycle and the very basics of HTML.\nScreenshot (64)1920×1080 340 KB\n", "username": "Shreya_Bhanot" }, { "code": "", "text": "Continued with HTML basics on block and inline elements, HTML entities, tables and some of the shortcut tricks.\n\nScreenshot (66)1920×1080 263 KB\n", "username": "Shreya_Bhanot" }, { "code": "", "text": "Completed the last section of HTML; it included forms, different input types and validation.\nDid hands-on practice by creating a Google search form and a marathon registration form.\nScreenshot (68)1920×1080 229 KB\n", "username": "Shreya_Bhanot" } ]
The Journey of #100DaysOfCode (@Shreya_Bhanot)
2022-05-21T07:06:30.853Z
The Journey of #100DaysOfCode (@Shreya_Bhanot)
7,930
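For the closures-with-setTimeout topic that recurs in this log, the classic demonstration is short enough to keep inline (plain JS; this example is not taken from the original posts):

```javascript
// var: one shared binding, so every callback sees the final value: 6 6 6 6 6.
for (var i = 1; i <= 5; i++) {
  setTimeout(() => console.log(i), i * 1000);
}

// let: a fresh binding per iteration, so each closure keeps its own value: 1..5.
for (let j = 1; j <= 5; j++) {
  setTimeout(() => console.log(j), j * 1000);
}
```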
null
[ "java", "atlas-cluster", "spring-data-odm" ]
[ { "code": "spring.data.mongodb.authentication-database=admin\nspring.data.mongodb.username=rootuser\nspring.data.mongodb.password=rootpass\nspring.data.mongodb.database=dz-database\nspring.data.mongodb.port=27017\nspring.data.mongodb.host=localhost\nspring.data.mongodb.auto-index-creation=true\nserver.error.include-message=always\nspring.data.mongodb.uri=mongodb+srv://redouane59:<**my_password**>@dz-cluster.ijmc9sx.mongodb.net/?retryWrites=true&w=majority\norg.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'pronounService' defined in file [C:\\Users\\Perso\\Documents\\GitHub\\dz-dialect-api-spring\\target\\classes\\io\\github\\dzdialectapispring\\pronoun\\PronounService.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'pronounRepository' defined in io.github.dzdialectapispring.pronoun.PronounRepository defined in @EnableMongoRepositories declared on PronounService: Cannot resolve reference to bean 'mongoTemplate' while setting bean property 'mongoOperations'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'mongoTemplate' defined in class path resource [org/springframework/boot/autoconfigure/data/mongo/MongoDatabaseFactoryDependentConfiguration.class]: Unsatisfied dependency expressed through method 'mongoTemplate' parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'mongoDatabaseFactory' defined in class path resource [org/springframework/boot/autoconfigure/data/mongo/MongoDatabaseFactoryConfiguration.class]: Unsatisfied dependency expressed through method 'mongoDatabaseFactory' parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongo' defined in class path resource [org/springframework/boot/autoconfigure/mongo/MongoAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.mongodb.client.MongoClient]: Factory method 'mongo' threw exception; nested exception is com.mongodb.MongoConfigurationException: Failed looking up TXT record for host dz-cluster.ijmc9sx.mongodb.net\n\n", "text": "Hello,I followed some tutorials to learn how to make MongoDB work with Spring and everything was really easy locally.Today, I decided to deploy my database to MongoDB Atlas and I’m struggling for few hours without understanding what is happening.Before, I had this application.properties :After, I had this one :I took it from the MongoDB Atlas page after clicking to the “connect” button.When I want to start my Application, here is the exception I get after a few seconds :It’s really not clear for me and I’m really lost.Any idea ?Full GitHub repository is available here : GitHub - redouane59/dz-dialect-api-spring: spring fork of dz dialect api project (only application.properties file is not up to date).Thanks !", "username": "Redouane_Bali" }, { "code": "", "text": "Please refer to my answer on Unable to look up TXT record for host ****.****.mongodb.net from GCP ap - #2 by Jeffrey_Yemin and see if that helps to resolve the issue.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "May be you made the same error as in Bad auth : Authentication failed. 
- Nodejs / MongoDB atlas", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB working locally but MongoDB Atlas not working / Java Spring
2022-06-07T20:56:20.088Z
MongoDB working locally but MongoDB Atlas not working / Java Spring
10,831
null
[]
[ { "code": "", "text": "I have enabled one HTTP endpoint without any authorization and published it.\nI am now accessing it using the postman. I am unable to call that get a request. I am getting the below error. Please help me out to fix it.{\n“error”: “no authentication methods were specified”,\n“error_code”: “InvalidParameter”,\n“link”: “App Services”\n}", "username": "Rajan_Dhinoja" }, { "code": "", "text": "Hi Rajan - yes functions by default are enabled with ‘application authentication’ (see your function) you can either enable an authentication provider (e.g. API key) or change to System (but this will not enforce any permissioning or authenticationso is opening up access to your data)", "username": "Sumedha_Mehta1" }, { "code": "", "text": "With the secret key, I am also not able to call it. After enabling secret I am getting this error. I am sure that I have entered the correct secret.\n{\n“error”: “invalid secret”,\n“link”: “App Services”\n}", "username": "Rajan_Dhinoja" }, { "code": "", "text": "If I am going to use the API key then also I am getting the same issue with same error.\n{\n“error”: “no authentication methods were specified”,\n“error_code”: “InvalidParameter”,\n“link”: “App Services”\n}", "username": "Rajan_Dhinoja" }, { "code": "", "text": "@Sumedha_Mehta1 Can we schedule a quick zoom call?", "username": "Rajan_Dhinoja" }, { "code": "", "text": "What is the full curl snippet you are sending (with the key obsucred)", "username": "Sumedha_Mehta1" }, { "code": "", "text": "I am able to call from postman but my angular application still gets cors error.", "username": "Rajan_Dhinoja" }, { "code": "", "text": "the CORS error is due to invoking from the browser - you can upvote here, but we’re working on it: Please Support CORS from the Data API – MongoDB Feedback Engine", "username": "Sumedha_Mehta1" } ]
Getting InvalidParameter error in the GET request of the atlas app services
2022-06-07T13:19:21.616Z
Getting InvalidParameter error in the GET request of the atlas app services
3,165
null
[ "aggregation", "queries", "node-js", "atlas-search", "text-search" ]
[ { "code": "if (query1 || query2 || query3 || query4) {\n\n const examples = await db\n\n .collection(\"examples\")\n\n .find({\n\n $or: [\n\n {\n\n $text: { $search: `${query1}` },\n\n },\n\n { field2: query2 },\n\n { field3: query3 },\n\n { field4: { $gte: Number(query4) } },\n\n ],\n\n verified: true,\n\n })\n\n .project({ anotherField: 0})\n\n .sort({ createdAt: -1 })\n\n .toArray();\n\n return {\n\n props: {\n\n examples: JSON.parse(JSON.stringify(examples)),\n\n },\n\n };\n\n }\n", "text": "I can’t get to query multiple optional values on several fields. Here is my code:I receive the following error message:MongoServerError: error processing query: ns=examplesdb.examplesTree: $and\n$or\nfield3 $eq null\nfield2 $eq “some-keyword”\nfield4 $gte nan.0\nTEXT : query=undefined, language=english, caseSensitive=0, diacriticSensitive=0, tag=NULL\nverified $eq true\nSort: { createdAt: -1 }\nProj: { anotherField: 0 }\nplanner returned error :: caused by :: No query solutionsExpected outcome:\nWhichever query1 or 2 or 3 or 4 comes in, I want mongodb to find that document where query1 or 2 or 3 or 4 matches with $text or field2 or field3 or field4 (respectively).The important thing is that the query1/2/3/4 are optional and sometimes all of them would be used, sometimes only 2 and so on… So my question points at $or as well.I already know that $text search can’t be used with $or. And I’m on the free tier at the moment so I can’t use aggregation either (due to limitations, it returns that error message).What else should I try? Please someone lead me in the right direction.Thank you!", "username": "davidm92" }, { "code": "", "text": "The main issue with your code is you *OR your 4 queries but you use all of them even if only one is true.You have to test each query indivudually and build your complete $or array accordingly. This is standard JS because the $or field is a standard JS array where each clause is a standard JS object.", "username": "steevej" }, { "code": "let filter = { verified: true };\n\nif (query1) filter = { ...filter, $text: { $search: `${query1}` } };\nif (query2) filter = { ...filter, field2: query2 };\nif (query3) filter = { ...filter, field3: query3 };\nif (query4) filter = { ...filter, field4: { $gte: Number(query4) } };\n\nconst examples = await db\n .collection('examples')\n .find(filter)\n .project({ anotherField: 0 })\n .sort({ createdAt: -1 })\n .toArray();\n\nreturn {\n props: {\n examples: JSON.parse(JSON.stringify(examples)),\n },\n};\n", "text": "I have a solution that works. I think it’s similar to what you advised.", "username": "davidm92" }, { "code": "", "text": "It is what I had in mind.editedJust note that in the new code you are ANDind the optional queries while you were ORing in the original.", "username": "steevej" }, { "code": "", "text": "Great!Actually you’re right. I need to and them after all. The query comes from the url. And that changes frequently. Just realized that’s the correct method, took so long… ", "username": "davidm92" } ]
Find - $text search with $or + other optional values
2022-06-10T06:19:25.955Z
Find - $text search with $or + other optional values
4,521
https://www.mongodb.com/…e246b39ff221.png
[ "crud" ]
[ { "code": " const updatedProduct = await ProductsModel.findOneAndUpdate(\n {_id: productId},\n {\n $set: input,\n },\n {new: true}\n ).exec()\n", "text": "Hello,Could you kindly tell me if it is normal that modifying an embedded document changes its _id; or am I doing something wrong? I am assuming it is because updating the embedded document effectively replaces the original one?Before change:Changing the name to James Brown:\nIs this intended or a sign of poor design on my part? Should this be avoided?Thank you!", "username": "RENOVATIO" }, { "code": "console.log(input)", "text": "Hi,Can you console.log(input) and add the result to the question?", "username": "NeNaD" }, { "code": "{\n recipient: { name: 'James Browley' },\n}\n", "text": "Hello, the input is just the following:", "username": "RENOVATIO" }, { "code": "recipient$set_idrecipientconst updatedProduct = await ProductsModel.findOneAndUpdate(\n { _id: productId },\n { \"recipient.name\": input.name },\n { new: true }\n).exec()\n", "text": "I see. So you probably defined sub-schema model for the recipient so when you use $set the whole new document of Recipient model is created, and that new document get it’s own _id.Try not to set the whole recipient sub-property, but only the fields that you really want.", "username": "NeNaD" }, { "code": "{ \"recipient.name\": input.name } \n", "text": "Hey again,Thanks for the explanation.Would you happen to know how to properly describe what you wrote:In a standalone Typescript object and interface?Thanks!", "username": "RENOVATIO" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Updating an embedded document changes its _id?
2022-06-10T15:39:59.352Z
Updating an embedded document changes its _id?
2,935
null
[ "compass", "connecting" ]
[ { "code": "", "text": "Hi dear community,I created a database project and then tried to connect it with Compass as instructed. It did not work. After writing the password inside of the <…> also paying attention to the URL encoded writing it still said “bad auth”. Any clues how to fix this in 2022? Or any community question you can recommend where this problem was answered well?Best regards and many thanks", "username": "Marcel_Ohrenschall" }, { "code": "", "text": "<…>bad authentication means wrong userid or password\nYou have to remove <…> .Just keep the password", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thank you for making this so easy!", "username": "Karan_Pal_Singh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Compass Authentication failed "bad auth"
2022-01-31T17:29:39.113Z
MongoDB Compass Authentication failed &ldquo;bad auth&rdquo;
10,400
null
[ "mongodb-shell", "installation" ]
[ { "code": "Current Mongosh Log ID: 62a418c7740d6e8837461793\nConnecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.5.0\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\nmongodb-tools-binmonodb-binmongosh-binsudo systemctl start mongodbmongodbmongodsudo systemctl start mongodmongodbmongosh", "text": "I have referenced ArchWiki and checked the official one for UBUNTU too.Steps I have taken for installation till now\nInstalled mongodb-tools-bin, monodb-bin and mongosh-bin from aur.\nsudo systemctl start mongodb here is a conflict archWiki asks to do mongodb while everyone else on StackOverflow and even official ubuntu installation guide asks for mongod.\nHence, I tried sudo systemctl start mongod too and got that didn’t found error tried daemon-reload but nothing changed(still not found).\nAssuming starting mongodb have worked because it didn’t throw any error. I went for mongosh and the error is posted at the top.", "username": "20BEC002_Amit_Kumar_Mishra" }, { "code": "", "text": "May be your mongodb is not up\nWhat does sudo systemctl status mongodb shows?\nYou can check mongodb.log also\nUnder installed dir/bin you can check what is your binary mongodb or mongod", "username": "Ramachandra_Tummala" }, { "code": "where mongodmongodb.servicemongod", "text": "I have tried checking status earlier too but was getting lots of variables only. Probably that time it was taking time for it’s first setup. Right now it is getting failed with “core-dumped”\nwhere mongod says /usr/bin/mongod, mongodb wasn’t found. So this seems now clear service name is mongodb.service while daemon is mongod.", "username": "20BEC002_Amit_Kumar_Mishra" }, { "code": "flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_\nsse", "text": "Following this,I think i do have sse and i hope i can conclude this should work on my cpu architecture(however, i do have low computing power)", "username": "20BEC002_Amit_Kumar_Mishra" } ]
Install and set up MongoDB on Arch Linux
2022-06-11T04:37:11.189Z
Install and set up MongoDB on Arch Linux
10,884
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "{\n _id: ObjectId(...),\n date: [ISODate('2022-05-27T00:00:00.000+00:00'), ISODate('2022-05-28T00:00:00.000+00:00')]\n}\n", "text": "I have a document like this one:I need to calculate the difference between each date and the next one in days.\nCan you help me with it?", "username": "pseudo.charles" }, { "code": "", "text": "I tried using unwind, keeping the index, and grouping by the index and the next value, but it didn’t work", "username": "pseudo.charles" }, { "code": "date$dateDiffarrayElemAt0-1db.collection.aggregate([\n {\n $project: {\n dateDiff: {\n $dateDiff: {\n startDate: {\n $arrayElemAt: [\n \"$date\",\n 0\n ]\n },\n endDate: {\n $arrayElemAt: [\n \"$date\",\n -1\n ]\n },\n \"unit\": \"day\",\n \n }\n }\n }\n }\n])\n", "text": "Hi @pseudo.charles,Welcome to the MongoDB Community Forums! If I understand the question correctly, you need the difference between the two dates in your date field. Can you please let us know why you are trying to do this? Based on the document you provided, you can try the $dateDiff which would return the difference between the two dates. To select elements from the array, you can use $arrayElemAt which returns the element at the specified array index.This is what the query would look like, there are other fields in $dateDiff too that you can use according to your use case. Also, note the use of arrayElemAt indexes as 0 (the first element) and -1 (the last element), you can adjust the indexes as per your data:Please let us know if you have any further questions. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "$dateDiffarrayElemAt0-1\n", "text": "Can you please let us know why you are trying to do this? Based on the document you provided, you can try the $dateDiff which would return the difference between the two dates. To select elements from the array, you can use $arrayElemAt which returns the element at the specified array index.This is what the query would look like, there are other fields in $dateDiff too that you can use according to your use case. Also, note the use of arrayElemAt indexes as 0 (the first element) and -1 (the last element), you can adjust the indexes as per your data:Thank you a lot @Satyam! The thing is that the date array can have more than 2 elements, and if so, I’d like to know the date difference between each index and the current one. In the case you mentioned, it’s always the first and the last one", "username": "pseudo.charles" }, { "code": "", "text": "I’d like to know the date difference between each index and the current one.Actually, the date difference between each index and the next one.", "username": "pseudo.charles" }, { "code": "", "text": "You do the same logic but within a $map. You will end up with an array od date differences.", "username": "steevej" }, { "code": "{\n $project: {\n _id: 0,\n dateDiff: {\n $map: {\n input: {\n $range: [0, { $size: '$date' }]\n },\n as: 'this',\n in: {\n $dateDiff: {\n startDate: {\n $arrayElemAt: ['$date', '$$this']\n },\n endDate: {\n $arrayElemAt: ['$date', {\n $add: ['$$this', 1]\n }\n ]\n },\n unit: 'day'\n }\n }\n }\n }\n }\n}\n", "text": "Thank you @steevej! This solves the issue:", "username": "pseudo.charles" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Date diff between data in different documents
2022-06-03T17:47:42.973Z
Date diff between data in different documents
4,916
https://www.mongodb.com/…b_2_1023x263.png
[ "monitoring" ]
[ { "code": "", "text": "I keep receiving this error in my email.\nThe Perfomance Advisor does not give me any recommendation.\nMy Profiler looks like this: (I dont know exactly if thats good or not).\n\nCaptura de Tela 2022-06-06 às 20.11.312004×516 47.8 KB\nWhat is the problem?", "username": "foco_radiante" }, { "code": "", "text": "It says Keys Examined more than 100k and return 10, but my query is limiting to 10 so thats what I want. Is it a problem?", "username": "foco_radiante" }, { "code": "", "text": "Hi Foco!Thank you for reaching out! Without understanding your system more, it’s hard to say exactly what the problem is. However, in general, I’d recommend starting with the Metrics tab and taking a look at the “Query Targeting” graph to get an understanding of how much work your database is doing versus how much work you’re getting out. I would also recommend taking a look at the Query Profiler and setting the Display dropdown in the top left to “Examined:Returned Ratio”. Here, if you see any slow queries with a high ratio, you can click into the individual operation to learn more.\nTo do this, click on the dot on the scatter plot and then click on the “View More Details” button.In the View More Details page, you will see some JSON with more information about the slow operation. In this JSON, I typically like to focus in on a few fields.Performance Advisor will look at the last 24 hours of your slow logs and make index suggestions. These recommendations may take a bit to show in the Performance Advisor page. However, without understanding your system fully, it’s hard to say whether an index will definitely improve your performance. I’d start with the steps above to see if you have any collection scans and dig in deeper from there.\nIf you have any further questions, please feel free to reach out to me and/or open a support case so we can provide you further support!Thanks,\nFrank", "username": "Frank_Sun" }, { "code": "Mint.aggregate([\n {\n $match: {\n \"createdAt\": {\n $gte: finalUnix\n }\n }\n },\n {\n $project: {\n contract: 1,\n }\n },\n {\n $group: {\n _id: '$contract',\n totalMints: { $sum: 1 }\n }\n },\n {\n $sort: {\n \"totalMints\": -1\n }\n },\n {\n $limit: 10\n }\n ])\n", "text": "Perfomance advisor advised me to create one new index, but it is not on the collection that is getting this warnings.Here is my profiler:\n\nCaptura de Tela 2022-06-08 às 21.49.39694×492 63.7 KB\nBut the point is, in my query, Im limiting to return only 10 items. So it is kinda obvious that it will scan a lot of items to return only a few.Here is my query:", "username": "foco_radiante" }, { "code": "", "text": "Hi Foco,Thanks for your followup! Performance Advisor also factors in how often a query is being run as well as a variety of other factors. This way it doesn’t suggest any extraneous indexes for one-off or infrequent queries. Having too many indexes on a collection can be detrimental and impact write performance. This is also why we also have the “Drop Index Recommendations” capability to catch any unused, duplicate, or hidden indexes.To address your main point, I understand you’re limiting the query to only return 10 items, which is why the query targeting alert is triggered. In general, query targeting should be kept as low as possible, under the 1k threshold. You could achieve this either by improving the query and/or by adding an effective index to that namespace to bring down the number of documents scanned. 
My personal recommendation would be to investigate these two options, as it looks like the operational latency for your namespaces is quite high (based on the Profiler screenshot).Thanks,\nFrank", "username": "Frank_Sun" } ]
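A minimal sketch of the "effective index" suggestion from this thread, assuming the Mint collection and the field names from the posted pipeline; the compound shape is an editor's assumption, not a Performance Advisor output:

// Supports the $gte range on createdAt; including contract lets the
// $group stage read its values from the index entries instead of
// fetching whole documents.
db.Mint.createIndex({ createdAt: 1, contract: 1 })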
Alert all the time - Query Targeting: Scanned Objects / Returned has gone above 1000
2022-06-06T23:13:25.150Z
Alert all the time - Query Targeting: Scanned Objects / Returned has gone above 1000
3,477
null
[ "replication", "atlas-cluster", "containers" ]
[ { "code": "services:\n mongo1:\n hostname: mongo1\n image: mongo\n expose:\n - 27017\n ports:\n - 30001:27017 \n restart: always\n networks:\n - mongo\n command: mongod --replSet my-mongo-set\n mongo2:\n hostname: mongo2\n image: mongo\n expose:\n - 27017\n ports:\n - 30002:27017\n restart: always\n networks:\n - mongo\n command: mongod --replSet my-mongo-set\n mongo3:\n hostname: mongo3\n image: mongo\n expose:\n - 27017\n ports:\n - 30003:27017\n restart: always\n networks:\n - mongo\n command: mongod --replSet my-mongo-set\n\n# finally, we can define the initialization server\n# this runs the `rs.initiate` command to intialize\n# the replica set and connect the three servers to each other\n mongoinit:\n image: mongo\n # this container will exit after executing the command\n restart: \"no\"\n networks:\n - mongo\n depends_on:\n - mongo1\n - mongo2\n - mongo3\n command: >\n mongo --host mongo1:27017 --eval \n '\n db = (new Mongo(\"mongo1:27017\")).getDB(\"CropWatch\");\n config = {\n \"_id\" : \"my-mongo-set\",\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"mongo1:27017\"\n },\n {\n \"_id\" : 1,\n \"host\" : \"mongo2:27017\"\n },\n {\n \"_id\" : 2,\n \"host\" : \"mongo3:27017\"\n }\n ]\n };\n rs.initiate(config);\n '\nnetworks:\n mongo:\n driver: bridge\n", "text": "Hello!\nSorry, I have tried to find an answer to this, but have not had much luck.\nI have a self-hosted MongoDB Replica-set, and it has been working very well. But I would like to have additional safty by adding MongoDB Atlas into my replica-set. It would be great to have my primary still on the self hosted boxes.I have this docker-compose:This works, well, but I am hoping to add Atlas into the “members” list.\nThe problem is that this never seems to work.Has anyone tried this? Succeeded?\nThanks for the help!!!\n-Kevin", "username": "Kevin_Cantrell" }, { "code": "", "text": "Hi @Kevin_Cantrell and welcome in the MongoDB Community :muscle !It’s impossible and (I’m pretty sure) will never happen.MongoDB Atlas provides MongoDB clusters as a service. You can’t alter the configuration or adds nodes in it without using the Atlas UI (or API) and definitely not external nodes that wouldn’t be under the control of Atlas. This defeats the entire purpose of being “as a Service”.Everything is automated in Atlas.It generated way more problems that it solves. So to sum up. Not happening.It will be a lot safer for you to have the Primary managed by Atlas as well in Atlas and identical to the 2 other nodes, monitored, secured, backed up, automated, etc.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "I see, Thank you for the response!", "username": "Kevin_Cantrell" }, { "code": "", "text": "To add extra safety, while you cannot add Atlas node to your RS you may have a change stream process that replicate the on-premise operations into a separate Atlas cluster.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Atlas part of a replica set with primary on-prem
2022-06-09T07:20:00.787Z
MongoDB Atlas part of a replica set with primary on-prem
2,411
null
[ "atlas-cluster" ]
[ { "code": "com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=MONGODB-AWS, userName='ASIAXXXXXXXX', source='$external', password=<hidden>, mechanismProperties=<hidden>}\n\nCaused by: com.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): 'Authentication failed.' on server XXXXShard.XXXX.mongodb.net:27017. The full response is {\"ok\": 0.0, \"errmsg\": \"Authentication failed.\", \"code\": 18, \"codeName\": \"AuthenticationFailed\", \"$clusterTime\": {\"clusterTime\": {\"$timestamp\": {\"t\": 1654860159, \"i\": 1}}, \"signature\": {\"hash\": {\"$binary\": {\"base64\": \"yBS9VFbLkDTcGYirWgQPzZAnUTY=\", \"subType\": \"00\"}}, \"keyId\": 7106759389712744452}}, \"operationTime\": {\"$timestamp\": {\"t\": 1654860159, \"i\": 1}}}\n", "text": "I’ve successfully connected my application/pod in AWS-EKS to a MongoDB Atlas cluster using this authentication method: https://www.mongodb.com/docs/atlas/security/passwordless-authentication/#aws-eks. Logs indicate that the client successfully connects. But when doing a write operation it fails with:Any tips what this can be?", "username": "Kristoffer_Almas" }, { "code": "", "text": "Post can be deleted - solution found. Hadn’t URLEncoded AWS_SESSION_TOKEN param", "username": "Kristoffer_Almas" }, { "code": "", "text": "It is best to keep the post and marked them as solved. This way others facing the same issue have an idea on how to solve it.", "username": "steevej" } ]
MongoCommandException: AuthenticationFailed - With MongoDB Atlas and AWS-EKS
2022-06-10T11:40:25.107Z
MongoCommandException: AuthenticationFailed - With MongoDB Atlas and AWS-EKS
2,444
null
[ "dot-net", "atlas-cluster", "containers", "serverless" ]
[ { "code": "Unhandled exception: MongoDB.Driver.MongoConnectionPoolPausedException: The connection pool is in paused state for server tenantserverless-lb.dsets.mongodb.net:27017.\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.PoolState.ThrowIfNotReady()\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnectionHelper.StartCheckingOut()\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnectionHelper.AcquireConnectionAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnectionAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.GetChannelAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadContext.InitializeAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadContext.CreateAsync(IReadBinding binding, Boolean retryRequested, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.ReadCommandOperation`1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.ListDatabasesOperation.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoClient.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoClient.ListDatabaseNamesAsync(IClientSessionHandle session, ListDatabaseNamesOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoClient.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at TenantMongoDb.Api.MongoDbTools.CreateDatabase(String connectionString, String databaseName, Boolean clobber) in E:\\GitHub\\daciertech\\carbon\\src\\CarbonEntities\\TenantMongoDb.Api\\MongoDbTools.cs:line 24\n\nThe code is (last line is line 24 of MongoDbTools.cs):\n\n var settings = MongoClientSettings.FromConnectionString(connectionString);\n settings.ApplicationName = \"Appname\";\n settings.ServerApi = new ServerApi(ServerApiVersion.V1);\n settings.UseTls = true;\n settings.SdamLogFilename = @\"sdam.log\";\n var client = new MongoClient(settings);\n var dbNames = (await client.ListDatabaseNamesAsync().ConfigureAwait(false)).ToList();\n", "text": "I have been using a local Docker image of Mongo for development. Now I’m trying to use a Serverless instance and the code is failing with “The connection pool is in paused state…”. The code works with local Docker image and with a free instance of Atlas (only changing connection string).I’m using V2.15.1 of the .NET Driver.The stack trace is:The sdam.log file is empty.The connection string that works is:mongodb+srv://<username>:<password>@jvcluster.wcszc.mongodb.net/?retryWrites=true&w=majorityThe connection string that fails is:mongodb+srv://<username>:<password>@tenantserverless.dsets.mongodb.net/?retryWrites=true&w=majorityWhat am I doing wrong? Thanks!", "username": "John_Vottero1" }, { "code": "", "text": "Hi @John_Vottero1 and welcome in the MongoDB Community !It might be a problem with the .NET driver. 
Can you try the latest version (aka 2.16.0 at the moment) and confirm you still have the problem?I think it was fixed by CSHARP-3947.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks! Upgrading to 2.16.0 resolved the issue.", "username": "John_Vottero1" }, { "code": "", "text": "Oh I like the sound of that ", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Code that works with free instance fails with Serverless
2022-06-08T21:44:02.429Z
Code that works with free instance fails with Serverless
2,484
null
[]
[ { "code": "script: {\n\n script: {\n\n source: \"doc['location'].arcDistance(params.lat, params.lon) / 1000 < doc['marketplace.distanceRadius'].value\",\n\n lang: 'painless',\n\n params: {\n\n lat: Number(body.lat),\n\n lon: Number(body.lon),\n\n }\n\n }\n\n }\n", "text": "Is it possible to use a painless script for search?\nI’m thinking to move from ES to Mongo Atlas, but i kind of need some flexibility with my queries.One example is:\nI need to find nearby stores to my customers, but the store has the possibility to define the max distance it will delivery.\nAnd i’m using this script in ES for it.\nIs it possible to do that with atlas search?", "username": "Rodrigo_Real" }, { "code": "", "text": "Hi Rodrigo,Painless is a scripting language designed specifically for Elasticsearch, which lets you write inline/stored scripts and use them in ES (reference: documentation here). As far as I know, there is no way to use Painless scripts within Atlas Search.However, what the Painless script in question is doing might be achievable in Atlas Search as well. Please refer to this Atlas Search Tutorial: How to Run a Compound Geo JSON Query that discusses how to retrieve documents within a specified polygon using coordinates. Also, you may find the documentation on the Atlas Search geoWithin, geoShape, and near operators useful and relevant.Hope this helps.Thanks,\nHarshad", "username": "Harshad_Dhavale" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it possible to use a painless script?
2022-06-08T19:50:50.509Z
Is it possible to use a painless script?
2,043
null
[ "data-modeling" ]
[ { "code": "", "text": "I have to develop the product from the scratch, designing the whole of the backend by myself.\nThe repetitive problem I face is how, to begin with, data modeling.\nAs sometimes clients provide vague information about the product, let us take an example if the client wants to develop a school management SAAS product.\nThen my approach will be", "username": "Anshul_Negi" }, { "code": "", "text": "@Anshul_Negi your use case is very much a MongoDB use case. You don’t have to design schema as rigorously in MongoDB as in a SQL RDBMS. Imagine each major part of your application as a kind of document, and code to that document. The individual documents in a MongoDB collection do not have to be entirely uniform and consistent like the rows in a RDBMS table are. The collection metaphor is very flexible and allows you to experiment broadly.", "username": "Jack_Woehr" } ]
Designing data model
2022-06-09T15:47:46.032Z
Designing data model
1,386
null
[ "mongodb-shell" ]
[ { "code": "mongoshmongomongoshmongomongosh$ time mongosh --eval 'db.adminCommand({ping: 1})'\nCurrent Mongosh Log ID:\t62a21572cfd011d5ac021e05\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.5.0\nUsing MongoDB:\t\t5.0.9\nUsing Mongosh:\t\t1.5.0\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n{\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1654789485, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"a32d1165b14260c59870e293075b7d56d9ee9fbf\", \"hex\"), 0),\n keyId: Long(\"7107233936354312197\")\n }\n },\n operationTime: Timestamp({ t: 1654789485, i: 1 })\n}\n\nreal\t0m2.403s\nuser\t0m1.334s\nsys\t0m0.178s\nmongosh:\n enableTelemetry: false\n snippetAutoload: false\n", "text": "Himongosh client is taking about 10 times longer than previous mongo client when using --eval to query the DB.For instance, the following mongosh comand is taking around 2.26 seconds in one of our environments and only 0.16 seconds if we use mongo instead of mongoshThe tests were executed with the following configuration:Is there any way to improve the performance of mongosh?", "username": "Fran_Mulero" }, { "code": "", "text": "You might do best filing an issue on mongosh to get developer attention.", "username": "Jack_Woehr" }, { "code": "", "text": "Thanks @Jack_Woehr for your quick response.It seems the issue is there, MONGOSH-1240, since 2 week ago.", "username": "Fran_Mulero" }, { "code": "", "text": "@Fran_Mulero that just proves that your are in the vanguard!", "username": "Jack_Woehr" } ]
Mongosh performance
2022-06-09T16:03:48.073Z
Mongosh performance
2,150
null
[ "aggregation", "node-js" ]
[ { "code": " {\n $lookup: {\n from: \"users\",\n localField:\"_id\",\n foreignField:\"s_id\",\n as: \"user\"\n }\n },\nconst keyword = req.query.keyword\n ? {\n user_name: {\n $regex: req.query.keyword,\n $options: 'i',\n },\n }\n : {}\n", "text": "I am getting all data using localField and foreignField but how to seach data by nameI am receiving search keyword in", "username": "Avish_Pratap_Singh" }, { "code": "$match$lookupusers$lookup", "text": "Hi @Avish_Pratap_Singh,If you want to filter the parent collection, you can add a $match stage before the $lookup stage in your pipeline. If you want to filter the docs from the children collection users, you have to use the other format of the $lookup stage that includes a sub-pipeline that can help filter or transform the docs you bring from the children collection.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
How to search data
2022-06-09T04:43:03.220Z
How to search data
1,277
null
[]
[ { "code": "replaceOne", "text": "i read from the mongodb manual: “If a write operation modifies an indexed field, MongoDB updates all indexes that have the modified field as a key.”. however, is a replaceOne always considered to be a modification on the indexed field? even though the replacement actually did not change the indexed field?", "username": "Wei_Sun" }, { "code": "", "text": "Hi @Wei_Sun,“Before” and “after” docs are compared to determine if index changes are necessary. This is true for every write operations. So indexes are only updated if necessary.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
replaceOne and index
2022-06-08T02:59:47.063Z
replaceOne and index
1,151
https://www.mongodb.com/…4_2_1024x156.png
[ "atlas-search", "text-search" ]
[ { "code": "", "text": "I am evaluating full-text search on Atlas. I need to know that MongoDB Atlas supports partial matching. I created a database on the free tier and the search index config appears in the console. I created an index as per the instructions. I need to confirm that if a field has “text” and I search for “text” that there is a match. I also need to confirm that it is using the Lucene engine.\nimage1536×234 14 KB\nI can do searches but partial match does not work. I.e. searching for “text” yields a result, but “tex” yields nothig. I then saw this article and now I’m wondering if the free tier supports partial matching. Does it?I tried creating a serverless instance but there is no config for searching there at all.So, is it possible to do partial text matching on searches on the free tier? If not, what do I need to do to see this functionality? Which minimum level tier do I need?Alternatively, is there a MongoDB docker image I can use that comes with Lucene search?", "username": "Christian_Findlay" }, { "code": "", "text": "CorrectionI need to confirm that if a field has “text” and I search for “text” that there is a match.Should be\nI need to confirm that if a field has “text” and I search for “tex” that there is a match.", "username": "Christian_Findlay" }, { "code": "", "text": "I have also tried searching with tex* but this doesn’t yield results either.", "username": "Christian_Findlay" }, { "code": "", "text": "\nimage1264×292 16.1 KB\n", "username": "Christian_Findlay" }, { "code": "", "text": "\nimage1257×204 12.6 KB\n", "username": "Christian_Findlay" }, { "code": "", "text": "\nimage1239×195 12.5 KB\n", "username": "Christian_Findlay" }, { "code": "", "text": "Perhaps I need the autocomplete operator? If so, how would I even do an autocomplete query through the portal? Do you I need to use the API client SDK for that?", "username": "Christian_Findlay" }, { "code": "", "text": "Hey there! Yes you can use the autocomplete operator - I would make sure it is also set up correctly in your index https://docs.atlas.mongodb.com/reference/atlas-search/index-definitions/#autocomplete (can set up using our visual index builder directions here) and then use the right query in either Compass Aggregation Builder, on the Atlas Collections tab in Aggregation Builder or in the MongoshellLet us know if you get it!", "username": "Elle_Shwer" }, { "code": "db.collection.aggregate([{ $search: { wildcard: { query: '*tex*', path: { 'wildcard': '*' }, allowAnalyzedField: true }} } ]);\n", "text": "You can use search with wildcard to achieve this. Once you’ve created a search index, you can do something like this:", "username": "Philip_Braswell" }, { "code": "", "text": "Beware !!!\nUsing this appraoch seems good. But it puts way too high load on your servers, CPU exhaustion, memory exhaustion becomes frequent if you have a decent amount of data.\nOnly index the fields you need and write the aggregation in a way which you think would actually be used.\nFor your use case :Hope it helps.", "username": "Pawan_Saxena" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Full Text Search (Partial Match)
2021-10-27T20:50:32.807Z
Full Text Search (Partial Match)
14,969
null
[ "queries" ]
[ { "code": "db.ProductV2.find(\n{\n $expr: {\n $gt: [\n {$size:\n {$cond:\n [\n {$ne:['sizes',null]},\n {$filter: \n {\n input:'$sizes.sku',\n as:'sku',\n cond:{$ne:[{$substr:['$$sku',0,11]},'$optionId']}\n }\n },\n []\n ]\n }\n },\n 0\n ]\n }\n}\n)\n", "text": "Hi,I get an error because the argument to $size is null\nI add a $cond control to check if argument is null, but still have an error messageI don’t know how to solve my problem", "username": "emmanuel_bernard" }, { "code": "", "text": "Hi @emmanuel_bernard and welcome to the community forum!!Can you help us with few information on the above mentioned error:Thanks\nAasawari", "username": "Aasawari" }, { "code": "{$ne:['sizes',null]}", "text": "Inside $expr you must use $ in front of field name to access the field value.With{$ne:['sizes',null]}you are testing if the string sizes is not null. You want $sizes instead.", "username": "steevej" }, { "code": "db.ProductV2.find(\n{\n $and: [\n {sizes:{$exists:true}},\n {\n $expr: {\n $gt: [\n {$size:\n {$filter: \n {\n input:'$sizes.sku',\n as:'sku',\n cond:{$ne:[{$substrCP:['$$sku',0,11]},'$optionId']}\n }\n }\n },\n 0\n ]\n }\n }\n ]\n}\n)\n", "text": "Hi Steeve,Thanks a lot.I found this comment in a post “You cannot use $exists within a $expr; only aggregation expressions are allowed”.Here is my solution", "username": "emmanuel_bernard" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
The argument to $size must be an array, but is of type: null
2022-05-19T22:18:47.448Z
The argument to $size must be an array, but is of type: null
4,883
null
[ "data-modeling" ]
[ { "code": "", "text": "in our project ,i use the moding of SchemaVersion, and my leader ask me a question about, why schema_version 's type use string ,not use int type.", "username": "zhao_bin" }, { "code": "", "text": "Hi @zhao_bin welcome to the community!why schema_version 's type use string ,not use int type.Are you talking about the pattern in Model Data for Schema Versioning?It’s just a pattern that you can implement and not a hard rule, so you can use any type that’s required by your use case (int, string, or even sub-document).If this is not what you meant, could you elaborate further?Best regards\nKevin", "username": "kevinadi" } ]
Why does schema_version's type use string, not int
2022-06-08T06:56:09.847Z
Why schema_version&rsquo; s type use string, not use int
1,390
null
[ "queries", "rust" ]
[ { "code": "use futures::{TryStream, TryStreamExt};\n use anyhow::Result;\n use mongodb::results::InsertOneResult;\n use mongodb::{Database, bson::document::Document, options::FindOptions};\n use serde::{Deserialize, Serialize};\n\n pub trait MongoDbModel {\n fn collection_name() -> String;\n }\n\n pub async fn get_all_vec<I: MongoDbModel>(db: &Database, filter: Option<Document>, options: Option<FindOptions>) -> Vec<I> {\n let col = db.collection::<I>(&I::collection_name());\n let mut cursor = match col.find(None, None).await {\n Ok(cursor) => cursor,\n Err(_) => return vec![],\n };\n \n let mut documents: Vec<I> = Vec::new();\n while let Ok(Some(doc)) = cursor.try_next().await {\n documents.push(doc);\n }\n \n documents\n }\nerror[E0599]: the method `try_next` exists for struct `mongodb::Cursor<I>`, but its trait bounds were not satisfied\n --> src/lib/mongodb_client.rs:39:42\n |\n39 | while let Ok(Some(doc)) = cursor.try_next().await {\n | ^^^^^ method cannot be called on `mongodb::Cursor<I>` due to unsatisfied trait bounds\n |\n ::: /Users/r/.cargo/registry/src/github.com-1ecc6299db9ec823/mongodb-2.2.2/src/cursor/mod.rs:94:1\n |\n94 | pub struct Cursor<T> {\n | --------------------\n | |\n | doesn't satisfy `mongodb::Cursor<I>: TryStreamExt`\n | doesn't satisfy `mongodb::Cursor<I>: TryStream`\n |\n = note: the following trait bounds were not satisfied:\n `mongodb::Cursor<I>: TryStream`\n which is required by `mongodb::Cursor<I>: TryStreamExt`\n\nwarning: unused import: `TryStreamExt`\n --> src/lib/mongodb_client.rs:12:30\n |\n12 | use futures::{TryStream, TryStreamExt};\n | ^^^^^^^^^^^^\nfutures::TryStreamExt", "text": "Hi,I am trying to write generic CRUD functions on structs that modelate my data.\nI have the following code right now.Trying to compile this results inAs you can see I’m importing futures::TryStreamExt and still this does not seem to be making use of it.Can someone help me with this?\nThanks in advance!", "username": "Raymundo_63313" }, { "code": "Cursor<T>StreamTDeserializeOwnedUnpinSendSyncIMongoDbModelMongoDbModeltrait MongoDbModel: DeserializeOwned + Sync + Send + Unpin { ... }\nget_all_vecpub async fn get_all_vec<I>(\n db: &Database,\n filter: Option<Document>,\n options: Option<FindOptions>,\n) -> Vec<I>\nwhere\n I: MongoDbModel + DeserializeOwned + Unpin + Send + Sync,\n{\n // impl here\n}\n", "text": "Hey @Raymundo_63313!The reason you’re seeing an error here is that the Cursor<T> type only implements Stream if the T implements DeserializeOwned, Unpin, Send, and Sync. You can see this requirement here: Cursor in mongodb - Rust. In your example, the I type is only required to implement MongoDbModel though, so all the trait requirements aren’t satisfied.To fix this, you can update your MongoDbModel trait to inherit from those traits:Or, you can update the constraints of the get_all_vec function:As a side note, it may be possible for us to relax some of these trait requirements in a future version. I filed https://jira.mongodb.org/browse/RUST-1358 to track the work for investigating this. Thanks for bringing this to our attention!", "username": "Patrick_Freed" }, { "code": "", "text": "Thank you @Patrick_Freed! I took the approach of the boundaries in the function.Besides solving the issue it has helped me to improve in Rust ", "username": "Raymundo_63313" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Rust driver: help writing a generic find method
2022-06-09T20:24:22.719Z
Rust driver: help writing a generic find method
5,248
null
[ "dot-net" ]
[ { "code": "db.student.insert({\n studentnum: getSequenceNextValue(\"studentnum\"),\n student_id: \"02940409\",\n course: \"Bachelor of Science in Computer Engineering\",\n year_level: \"2nd\",\n department: \"engineering\"\n});\n", "text": "As part of our project implementation, we are using Counters to auto increment certain fields of the collection as needed. We are using C# as programming language and using Mongo DB Native Drivers to connect to Mongo DB.If I am using Mongo DB Command in Mongo Shell to insert a document which uses a function to auto increment a value, we run the below command. Here, you can see that we are calling the function.getNextSequenceValue is a function which gives me the next counter value for use. But C# does not accept use of the function, instead, it expects a value.var document = new BsonDocument { { “studentnum”, getSequenceNextValue(“studentnum”), }};Is there a way in C# using Mongo DB Driver to achieve this? Or is there a way to run native Mongo DB Shell commands from C#?Thanks,\nVikram", "username": "Vikram_Bade" }, { "code": "getSequenceNextValueJsonCommandvar command = new JsonCommand<BsonDocument>(\"{ dropDatabase: 1 }\");\ndb.RunCommand(command);\nCommandDocumentvar command = new CommandDocument(\"dropDatabase\", 1);\ndb.RunCommand<BsonDocument>(command);\n", "text": "getSequenceNextValueThere is db.RunCommand discussed in https://stackoverflow.com/questions/38671771/run-mongodb-commands-from-c-sharpYou can use a JsonCommand like this:or use a CommandDocument like this:", "username": "Mat_Forsberg" }, { "code": "var stringCommand = \"{\\\"_id\\\": getSequenceNextValue(\\\"itemId\\\"),\\\"student_id\\\": \\\"02740305\\\",\\\"course\\\": \\\"Bachelor of Science in Finance\\\",\\\"year_level\\\": \\\"2nd\\\",\\\"department\\\": \\\"Business Administration\\\"}\";\n var command = new JsonCommand<BsonDocument>(stringCommand);\n database.RunCommand(command);\nvar stringCommand = \"db.student.insert({\\\"_id\\\": getSequenceNextValue(\\\"itemId\\\"),\\\"student_id\\\": \\\"02740305\\\",\\\"course\\\": \\\"Bachelor of Science in Finance\\\",\\\"year_level\\\": \\\"2nd\\\",\\\"department\\\": \\\"Business Administration\\\"});\";\n var command = new JsonCommand<BsonDocument>(stringCommand);\n database.RunCommand(command);\n", "text": "Unfortunately, it does not work that way. I have already looked at this post and tried it. It works for commands where there is no function involved. For example, if I try either of the below, it does not work and expects a JSON:So, that Run Command does not run native Mongo Shell commands unfortunately.Thanks.", "username": "Vikram_Bade" } ]
Auto Increment Function - How to call from C# Code while inserting a document?
2022-06-08T05:48:14.839Z
Auto Increment Function - How to call from C# Code while inserting a document?
4,829
null
[ "aggregation", "time-series" ]
[ { "code": "/** Document layout for BTM4208SD */\nconst document = {\n time: Date,\n channel1: Number || null, // Temperature\n channel2: Number || null,\n channel3: Number || null,\n channel4: Number || null,\n channel5: Number || null,\n channel6: Number || null,\n channel7: Number || null,\n channel8: Number || null,\n channel9: Number || null,\n channel10: Number || null,\n channel11: Number || null,\n channel12: Number || null,\n};\n\n[\n [ new Date(\"2009/07/12\"), 20, 24, 23 ],\n [ new Date(\"2009/07/19\"), 18, 16, 22 ]\n]\n[\n [ \n new Date(\"2009/07/12\"), \n 20, // (A collection, channel 1) \n 24, // (B collection, channel 4) \n .... \n ],\n [ \n new Date(\"2009/07/19\"), \n 18, // (A collection, channel 1) \n 16, // (B collection, channel 4) \n ... \n ]\n]\n", "text": "Hey guys, have a challenge for the experienced Mongo guys out there.Have two collections, A and B, each a time series. Following is the schema:We display this data on the client with dygraphs which requires the time series data to be in the following format:I have an aggregate that will produce the data in the above format for 1 collection at a time. Works great. However, I need to combine data from both collections in the above format using one aggregate. For examples:How can I do this? I believe I need a way to combine the two collections and do some sort of conditional statement like:if (collection = A)\nselect { channel1 }if (collection = B)\nselect { channel2 }Any suggestions?\nI can change the schema if needs be.", "username": "Daniel_Smyth" }, { "code": "$lookup", "text": "You want the aggregation stage called $lookup", "username": "Jack_Woehr" }, { "code": "localFieldforeignFieldfrom$lookupfromhourlocalFieldforeignFieldhour", "text": "Thanks, Jack.My localField and foreignField are timestamps recorded every 1 or 2 seconds, they may match.Can I perform an aggregation on my from collection first in a $lookup ?The aggregation on my from collection would group documents by the hour, adding a field hour. Then perform the same aggregation on destination collection. My localField and foreignField would be the new hour field.", "username": "Daniel_Smyth" }, { "code": "$lookup", "text": "I think to achieve your objectives you’ll need to work diligently through the $lookup documentation and especially the examples. MongoDB documentation is rather “thick” but all the information is there.", "username": "Jack_Woehr" }, { "code": "", "text": "All good. Thank you.", "username": "Daniel_Smyth" } ]
Performing Aggregate on Two Collections
2022-06-09T23:51:51.565Z
Performing Aggregate on Two Collections
7,506
null
[ "compass" ]
[ { "code": "", "text": "Hi, I want to restrict some certain IPs for connection with MongoDB Compass, but for the Database itself Access from anywhere. I’ve tried Add a connection IP address but it restricts only to the Database, what I want is the restriction for MongoDB Compass. How can I do that?Thanks\nTai", "username": "acb7881" }, { "code": "", "text": "I do not think it would make sense to leave your database open to any application and block one of the legitimate one.", "username": "steevej" }, { "code": "", "text": "Hi @acb7881 welcome to the community!To add a bit of colour to @steevej’s answer, I don’t think at this moment you can restrict connection by application, only by IP. I believe what you need is to use proper authentication & authorization, where you only give enough privileges for an account to do what they need, and no more. See Configure Database Users and Role-Based Access Control for more information.Best regards\nKevin", "username": "kevinadi" } ]
Restrict IP MongoDB Compass connection
2022-06-09T01:13:58.819Z
Restrict IP MongoDB Compass connection
1,791
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "", "text": "I have a MongoDB database in which I have a topic, MCQs question, and true-false question schema. The topicId is present in the other two tables(MCQs and T/F). Now my question is if I delete the topic all the questions related to that topic in the other two tables should also be deleted. Just like a cascade in SQL but as I read MongoDB does not support cascade. I used some middleware after going through some tutorials but I am very new to MongoDB I could not figure that out. Can anybody please help me? I am really stuck with that problem.\nHere is my topic Schema in topic.model.js file.const mongoose = require(‘mongoose’);\nconst Schema = mongoose.Schema;\nconst True_false = require(‘…/models/true_false’);\nconst Mcqs = require(‘…/models/mcqs’);\nconst Open_Ended = require(‘…/models/open-ended’);const topicSchema = new mongoose.Schema({\ntopic: {\ntype: String,\nrequired: true\n},\nageGroup: {\ntype: String,\nrequired: true\n},\ngrade: {\ntype: String,\nrequired: true\n},\nnoOfQuestions: {\ntype: String,\nrequired: true\n}\n});// I wrote this middleware but I do not know that it is correct and how to use it in the delete topic route.topicSchema.pre(‘remove’, function(next) {\nMcqs.remove({topicId: this._id}).exec();\nOpen_Ended.remove({topicId: this._id}).exec();\nTrue_false.remove({topicId: this._id}).exec();\nnext();\n});\nconst Topic = mongoose.model(‘Topic’, topicSchema);\nmodule.exports = Topic;// My route in topic.route.js filefunction topic_delete(req, res) {\nTopic.findByIdAndRemove(req.params.id)\n.then(data => {\nif(!data) {\nreturn res.status(404).send(‘Topic not found!’)\n} else\nreturn res.send(200).send(‘deleted’)\n}).catch(err => {\nconsole.log(err)\n});\n};router.delete(‘/delete/:id’, topic_delete);Right now with that API the topic is deleted but not the questions so it means that my middleware is not working. Can Anyone please help me that how can I use my middleware in my route file?", "username": "Naila_Nosheen" }, { "code": "", "text": "Hi @Naila_NosheenWould this be able to help you? No easy way to cascade delete documents · Issue #9152 · Automattic/mongoose · GitHub There are a couple of examples in the linked page that might be useful.Best regards\nKevin", "username": "kevinadi" } ]
How can I perform cascade in MongoDB? If I delete the parent, the child documents should also be deleted
2022-06-09T10:54:27.476Z
How can I perform cascade in MongoDB? If I delete the parent, the child documents should also be deleted
8,929
null
[ "atlas-cluster", "database-tools", "backup" ]
[ { "code": "*mongoexport --uri \"mongodb+srv://m001-student:************@sandbox.xxxxxxxxx.mongodb.net/myFirstDatabase\" --collection sales --out sales.json *\n*or*\nmongodump --uri=\"mongodb+srv://m001-student:m001-mongodb:***********@sandbox.xxxxxxxxxx.mongodb.net/myFirstDatabase\"", "text": "Hi Community,\nI am currently joining course m001 and learning how to import and export files with mongodump and mongoexport using the --uri option.\nIf I try mongodump or mongoexport with – uri like this:mongodump --uri=\"mongodb+srv://m001-student:m001-mongodb:***********@sandbox.xxxxxxxxxx.mongodb.net/myFirstDatabase\"I get this error message: error parsing command line options: error parsing uri (mongodb+srv://m001-student:m001-mongodb:****@sandbox.xxxxxxxx.mongodb.net/myFirstDatabase): scheme must be \"mongodb\"As this is already known I tried bing and googe, bur I couldn’t found a solution to fix that. Any help appreciated. Thanks in advance. Uli", "username": "Ulrich_Kleemann1" }, { "code": "", "text": "Please show the exact commands you used as screenshot\nYour first command seems to be ok but second one is not correct\nIt has 3 colons.Should look likemongodb+srv://user:pwd@sandbox… but your string is like\nmongodb+srv://user:pwd:xxx@sandbox…Also check you are using straight quotes and no invalid characters or spaces in your connect string\nTry to type and see if it works instead copy & paste", "username": "Ramachandra_Tummala" } ]
Error parsing command line options: error parsing uri
2022-06-09T15:19:49.118Z
Error parsing command line options: error parsing uri
7,112
null
[ "aggregation", "node-js" ]
[ { "code": "const pipe = [\n {\n $match: {\n is_active: true,\n status: 'active',\n entity_type: 'offer',\n entity_count: {\n $gt: 0\n }\n }\n },\n {\n $lookup: {\n from: 'offers',\n localField: 'offers',\n foreignField: '_id',\n as: 'offer'\n }\n },\n {\n $project: {\n offer: 1, name: 1,\n liveArray: { $cond: { if: { $eq: ['$offer.publish_status.value', '2'] }, then: '$offer._id', else: null } },\n notLiveArray: { $cond: { if: { $ne: ['$offer.publish_status.value', '2'] }, then: '$offer._id', else: null } }\n }\n }\n];\n", "text": "this offer array of offer_id", "username": "Ghalib_Ansari" }, { "code": "", "text": "We have no clue about what you want to do.Sample source and result documents are needed with more details on the logic you want.", "username": "steevej" } ]
Want to create a new array on condition from an old array
2022-06-09T12:42:06.564Z
Want to create a new array on condition from an old array
1,116
null
[ "aggregation" ]
[ { "code": "", "text": "Hello guys. I have some users collection, for example:\n{\nfirst_name:“test”,\n“events”: [\n{‘name’: ‘test1’, amount: 300},\n{‘name’:‘test2’, amount: 500}\n]\n}How to make a query without aggregation framework where for every users summary amount from all events is greather than 600 for example?\nI haven’t any poccibility to use aggregation functions without aggregation framework.", "username": "111891" }, { "code": "", "text": "Your requirement is not clear.What do you mean by summary amount from all events?We are not event sure if the sample document you supplied should be matched or not. No amount is greater than 600 but 300+500 is.Sample documents from the collection and same resulting documents with more explications.Read Formatting code and log snippets in posts before posting more code or sample documents.", "username": "steevej" }, { "code": "", "text": "Hi. Thank you for reply. In my case, i have array of purchases. So i want to query by customers who has at least 3 purchases or at least 100$ spent on purchases. Can i make such a query without using aggregation pipiline and groups?", "username": "111891" }, { "code": "", "text": "Still not clear enough.Sample documents from the collection and same resulting documents with more explications.Read Formatting code and log snippets in posts before posting more code or sample documents.", "username": "steevej" } ]
Query by aggregates
2022-06-08T15:05:08.083Z
Query by aggregates
3,476
null
[ "aggregation" ]
[ { "code": "[{\n category: \"ABC\",\n sections: [\n {\n section_hod: \"x111\",\n section_name: \"SECTION A\",\n section_staff_count: \"v11111\",\n section_id: \"a1111\",\n :\n },\n {\n section_hod: \"x2222\",\n section_name: \"SECTION B\",\n section_staff_count: \"v2222\",\n section_id: \"a2222\",\n :\n }\n ]\n}\n:\n:\n]\ndb.getSiblingDB(\"departments\").getCollection(\"DepartmentDetails\")\n.aggregate([\n { $unwind : \"$sections\"},\n { $match : { $and : [{ \"sections.section_name\" : \"SECTION A\"},\n { $or : [{ \"category\" : \"ABC\"}]}]}},\n {\n $project : {\n \"name\" : \"$sections.section_name\",\n \"hod\" : \"$sections.section_hod\",\n \"staff_count\" : \"$sections.section_staff_count\",\n \"id\" : \"$sections.section_id\"\n }\n },\n {\n $facet: {\n metaData: [{\n $count: 'total'\n }],\n records: [\n {$skip: 0},\n {$limit: 10}\n ]\n }\n }\n]);\ncategorysections.section_name$unwind$match", "text": "I have a mongodb data with a structure like as shown belowI am using the below query for getting the total record counts and the recordsThe above aggregation query is working fine but takes almost 10 seconds to return the results. I have total documents of 70K size in that collection. I have even indexed category and sections.section_name still it takes 6-10 seconds. I have noticed one key difference, which is when I put the $unwind after $match the results came super fast and took only 1-2 seconds…but the count was different.Can someone please help me on this", "username": "AlexMan" }, { "code": "$match$match", "text": "That actually make sense. $match can use an index to filter documents if $match is the first stage in a pipeline.", "username": "NeNaD" } ]
$unwind is taking more time
2022-06-09T18:37:03.043Z
$unwind is taking more time
1,705
null
[ "security" ]
[ { "code": "", "text": "Hi folks,We have been trying to connect with Mongo Atlas using the IAM auth feature from a POD in EKS. For some reason, its not detecting the IAM role associated with the pod via IRSA. Instead its trying to use the role associated with the worker node. Any thoughts on what could be wrong?We are using the mongo-go-driver.Thanks in advance!", "username": "Arun_Mathew" }, { "code": "sts:AssumeRoleWithWebIdentity", "text": "I’d love to see this supported as well.Currently, I’m looking at is to emulate behavior by calling sts:AssumeRoleWithWebIdentity to get the access key ID, secret access key, and session token. That all then is passed in when creating the Mongo client. It’d be nice to have this taken care of in the client so I don’t have to roll a custom implementation.", "username": "Josh_Scherschel" }, { "code": "", "text": "See IRSA support · Issue #8 · mongodb/pymongo-auth-aws · GitHub for an issue and proposed patch.", "username": "Josh_Scherschel" } ]
Mongo IAM auth with IRSA
2021-03-31T17:12:20.110Z
Mongo IAM auth with IRSA
3,754
null
[]
[ { "code": "malloc_ptr classes(objc_copyClassList(&numClasses), &free);", "text": "Hi,I seem to be crashing on iOS 13 (simulator too) from some reason, on this line:\nmalloc_ptr classes(objc_copyClassList(&numClasses), &free);When initializing realm. Can’t figure out the cause. Any idea?", "username": "donut" }, { "code": "", "text": "HI @donut,is there some code you can share that reproduces this issue?", "username": "Andrew_Morgan" }, { "code": "", "text": "I am having the same issue, my code last goes through let realm = try ? Realm() and then it crashes on the same line as yours.Crash happens on iOS 13 simulator, same code was working in previous months and still works in iOS 14.\nI didn’t find a solution yet", "username": "Elliott_Chkayben" }, { "code": "", "text": "Just an update, for me the crash was not related to Realm, it is the strangest thing that xcode breaks at a line in realm but after further investigation in the console, the crash was cause by me having a coordinator using OpenURL call which is unavailable in iOS 13, although the class is marked platform ios14 and higher. Cannot find the slightest link between the crash on realm and the actual cause of the crash as that piece of code isn’t even called on runtime but just removing that coordinator fixed the crash.\nTL;DR the crash might very well be irrelevant to Realm and might be linked to SwiftUI binary linking causing issue in iOS 13, check previous commits to know exactly when did it break.", "username": "Elliott_Chkayben" }, { "code": "", "text": "Hi @Elliott_Chkayben – many thanks for circling back with that update", "username": "Andrew_Morgan" }, { "code": "", "text": "Hi everybodyI have the same crash. I am using realm 10.25.2. If I remove realm of my project or use with device iOS 14 this works, but If I use device iOS 13 don’t works. I use Xcode 13.2.1\nAlso I try use other version like 10.20 or 10.17 but I don’t success.Anybody can help me?Thank you.", "username": "Alejandro_Acosta" }, { "code": "", "text": "@Alejandro_AcostaSince this crash reported in the original question was not related to Realm, creating a new thread and question describing your issue - be sure to include the code you’re using and troubleshooting steps you’ve done, along with XCode version and how Realm was included (CocoaPods? Version? etc) so we can duplicate the issue and maybe come up with a solution", "username": "Jay" } ]
Weird crashing on iOS 13
2021-07-14T07:13:46.567Z
Weird crashing on iOS 13
4,426
null
[ "crud" ]
[ { "code": "", "text": "I’m having doubt with respect to inserting document to a collection…It showed WriteResult as ‘1’ but still i couldn’t find that document in collection.Please tell me what could be the issue?", "username": "sai_rohith" }, { "code": "", "text": "If it showed WriteResult:1, then a document has been written somewhere, so most likely you are NOT looking at the same place where it has been written or you are using a query that does not match your new document.Share you connection string.Share the code you used to insert the document.Post a screenshot of what ever tool you use to see the document.", "username": "steevej" }, { "code": "", "text": "Thank you so much Steeve for answering. Sadly, I don’t have the screenshot. But will try it out once! ", "username": "sai_rohith" } ]
Document Insertion into a collection
2022-06-08T17:18:46.612Z
Document Insertion into a collection
1,326
null
[ "swift", "atlas-device-sync", "flexible-sync" ]
[ { "code": "SyncSubscriptionStateSyncSubscriptionSetSyncSubscriptionSyncSubscriptionStateSyncSubscriptionSetapplyWhen{}andrules: {}ObjectList<Object>PersonList<Dogs>", "text": "@Ian_Ward\nHey there - I hope you are doing well! I have some questions regarding Flexible Sync’s usage. I will ask from a Swift SDK point of view.I could not find a built-in way to “watch” (aka: observe) a ‘SyncSubscriptionState’ on either a ‘SyncSubscriptionSet’ nor on an individual ‘SyncSubscription’ - Can you please advise?Also, [why] is a ‘SyncSubscriptionState’ only for a ‘SyncSubscriptionSet’? -if so, I would then assume that each set is grouped for all its subscriptions’ sync-states within that set as a whole, am I correct?For Permissions, the ‘applyWhen{}’ is the first condition that has to be met, and then proceeds with read & write conditions, correct? -I see it that way for order (beyond also being an ‘and’ conjoin) but would like confirmation.For the JSON in ‘Permissions’, what does wrapping all the conditions inside ‘rules: {}’ do versus not? -I have seen examples both ways:\n• rules wrapping example link (both for same app’s tutorial)\n• rules not wrapping link (both for same app’s tutorial)I’d like to confirm what I am seeing in my testing… that a related ‘Object’ or ‘List<Object>’ will be prevented in a ‘Flexible Sync’ query if it has nested relationships that do not meet the conditions in Sync Permission’s (JSON) rules, set in cloud-side. Therefore, we must ensure that nested relationships also meet conditions (just like the top-level does) for the complete result. For example, if a ‘Person’ object has a ‘List<Dogs>’, but one of the dog objects does not meet the rules’ conditions, then that one dog object is not supposed to show up in that query (while other dog objects meeting conditions will), correct? - I want to make sure that is the intended behavior in Flexible Sync.Thank you!", "username": "Reveel" }, { "code": "rules: {}defaultRoles: []", "text": "For #5 - I think I have figured it out - for any future readers…Basically, the 2nd link was pulled from ‘Permission’ JSON from the server/cloud side; which is different than our usage that we define in Flexible Sync’s “Define Permissions” area.Therefore, we should just follow as documented, which to summarize/re-cap is… we basically use ‘rules: {}’ for individual collection ‘roles’ and use ‘defaultRoles: []’ for ‘roles’ across all collections.", "username": "Reveel" }, { "code": "", "text": "For #2 - I received confirmation via github - for any future readers…Currently (as of 2022-06-09), observing/watching is not available.", "username": "Reveel" } ]
Flexible Sync Usage Clarifications
2022-06-06T16:35:59.325Z
Flexible Sync Usage Clarifications
2,257
null
[]
[ { "code": "", "text": "Hello guys. I need to implement advanced funnel analytic, as descibed here: Funnel analysis - Wikipedia.\nPlease tell, does mongoDb is appropriate for this? And maybe any solution exists?\nI can’t find anything in google about this topic.", "username": "111891" }, { "code": "$match$group$sort$skip$limit$project$addFields$lookup$unwind", "text": "Hi @111891 and welcome back !This picture from your Wikipédia article:\nimage1678×922 68 KB\nReminds me of the MongoDB Aggregation Pipeline:\nimage1920×1280 81.9 KB\nWhich you can basically sum up like this:Each stage will apply a transformation to the documents in the pipeline in order to produce the final desired output. You can checkout the list of the 31 (more to code) aggregation pipeline stages in the doc.The most common ones are:So to sum up, yes, I think MongoDB is a good tool for this job. But you will need to learn the MongoDB Aggregation pipeline because it’s a very powerful querying language.Good news: there is a free training for that available on the MongoDB University website: M121.As a bonus, if you host your data in MongoDB Atlas, you also get access to Atlas Charts which will help you build dashboards which can be customized with… the aggregation pipeline !Here is an example dashboard I build for the Open Data COVID-19 project.I hope this helps.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi Maxime, Thank you for your reply. I know aggregation framework very well. But what you can say about such functionality\nParametric Aggregate Functions | ClickHouse Docs ?\nFunnels, retentions, flows analytic. Mongodb not provide this type of functionality and haven’t any examples hot to do. It can’t be fetched using $group in example above, so your example woul’d not work. I read a lot of about mongodb aggregations and possibility but can’t find cases where peoples do such analytic using mongodb.", "username": "111891" }, { "code": "", "text": "Your example wouldn’t work because the events must be in correct order.", "username": "111891" }, { "code": "", "text": "Please read about Parametric Aggregate Functions | ClickHouse Docs and say does mongodb solve this cases and does mongodb is the right choice for that?", "username": "111891" }, { "code": "", "text": "Maybe I’m misunderstanding but the first one at least in your link (Histogram) reminds me of the $bucket and $bucketAuto stages that we have in the aggregation pipeline.I’m not sure which part you are referring to. But the aggregation pipeline is a Turing Complete language.I even implemented the Game of Life with it. So really, you can do anything with it and manipulate the docs the way you want. There are usually several ways to achieve the same result though and one of them is optimal. I already saw many sub-optimal pipeline out there but there are a few tricks to achieve perfection. I’m still not sure exactly what you are trying to do or implement so I might be completely off topic. At least I tried !Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you for reply.", "username": "111891" }, { "code": "", "text": "I need a way how to calculate that some events are in correct order. 
For example, we have five events:\na,b,c,d,e\nI dont know how to write query, that generated the next :a - 100%\na → b 50%\na → b → c 30%\na->b->c->d 25%\na->b->c->d->e - 5%all this events should be in orded as above.\nThis type of query is called “Conversion rate” or funnel.\nMaybe you can helm to understand how to implement such thing using mongodb.", "username": "111891" }, { "code": "", "text": "Your example would’t work correct because order of events must be considered too.\n\nimage726×485 56.8 KB\n", "username": "111891" }, { "code": "{ _id: 'a', date: ISODate(\"2022-06-08T19:11:59.221Z\") }\n", "text": "Can you provide a few sample JSON documents to illustrate your use case and the expected output that you expect from these docs?You can insert Markdown code blocks in here like so:", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you for help. I have done this task using $reduce function from aggregation framework and some code", "username": "111891" } ]
Funnel analytic
2022-06-03T12:00:11.502Z
Funnel analytic
2,877
null
[ "licensing" ]
[ { "code": "", "text": "Hey,I’m currently going through some license compatiblity issues details and would like to get some input. Specifically I’m wondering about the issue that SSPL wants someone to open source their management code with a number of very explicit things like backup, UIs, APIs, etc.What’s the idea about this kind of infectiousness WRT to underlying technology. E.g. creating a management platform that leverages other open source tools. Obviously people will use Linux to build those platforms but Linux won’t be available as SSPL. Or someone builds a platform, open sources it with the SSPL but also uses a backup utility like borg backup under the hood.What’s the idea here (besides “I know it when I see it” when something absolutely needs to be SSPL vs. other open source licenses). From my point of view: others who create services providing MongoDB community server and do that using an open source approach could open source (and potentially dual license) their own code with the SSPL and IMHO they should be able to build upon the same open source utilities that the “community at large” rlies on … however, from a wording perspective this isn’t clear how this should be handled.Cheers!", "username": "Christian_Theune" }, { "code": "", "text": "And a quick follow-up: does anyone know of prior art that is considered a valid practice?", "username": "Christian_Theune" } ]
SSPL compatibility with other licenses
2022-06-09T14:52:39.308Z
SSPL compatibility with other licenses
2,297
null
[ "sharding", "mongodb-shell", "storage" ]
[ { "code": "Active: failed (Result: signal) since Sat 2022-06-04 16:37:36 CEST; 51s ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 31801 (code=killed, signal=KILL)\nInstalled:\n mongodb-org-database.x86_64 0:5.0.8-1.el7\n\nDependency Installed:\n mongodb-mongosh.x86_64 0:1.5.0-1.el8\n\nUpdated:\n mongodb-org.x86_64 0:5.0.8-1.el7\n mongodb-org-mongos.x86_64 0:5.0.8-1.el7\n mongodb-org-server.x86_64 0:5.0.8-1.el7\n mongodb-org-shell.x86_64 0:5.0.8-1.el7\n mongodb-org-tools.x86_64 0:5.0.8-1.el7\n\nComplete!\nJun 04 16:48:38 server3.f.hr systemd[1]: Starting MongoDB Databas...\nJun 04 16:48:38 server3.f.hr mongod[31525]: about to fork child p...\nJun 04 16:48:38 server3.f.hr mongod[31525]: forked process: 31527\nJun 04 16:48:42 server3.f.hr mongod[31525]: ERROR: child process ...\nJun 04 16:48:42 server3.f.hr mongod[31525]: To see additional inf...\nJun 04 16:48:42 server3.f.hr systemd[1]: mongod.service: control ...\nJun 04 16:48:42 server3.f.hr systemd[1]: Failed to start MongoDB ...\nJun 04 16:48:42 server3.f.hr systemd[1]: Unit mongod.service ente...\nJun 04 16:48:42 server3.f.hr systemd[1]: mongod.service failed.\nHint: Some lines were ellipsized, use -l to show in full.\n[root@mongodbp3er ~]# journalctl -xe\n-- Unit mongod.service has begun starting up.\nJun 04 16:48:38 server3.f.hr mongod[31525]: about to fork child process, waiting unti\nJun 04 16:48:38 server3.f.hr mongod[31525]: forked process: 31527\nJun 04 16:48:42 server3.f.hr mongod[31525]: ERROR: child process failed, exited with\nJun 04 16:48:42 server3.f.hr mongod[31525]: To see additional information in this out\nJun 04 16:48:42 server3.f.hr systemd[1]: mongod.service: control process exited, code\nJun 04 16:48:42 server3.f.hr systemd[1]: Failed to start MongoDB Database Server.\n-- Subject: Unit mongod.service has failed\n-- Defined-By: systemd\n-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel\n--\n-- Unit mongod.service has failed.\n--\n-- The result is failed.\nJun 04 16:48:42 server3.f.hr polkitd[1995]: Unregistered Authentication Agent for uni\nJun 04 16:48:42 server3.f.hr systemd[1]: Unit mongod.service entered failed state.\nJun 04 16:48:42 server3.f.hr systemd[1]: mongod.service failed.\nJun 04 16:48:42 server3.f.hr logger[31554]: root[/root] : systemctl start mongod\nJun 04 16:48:57 server3.f.hr polkitd[1995]: Registered Authentication Agent for unix-\nJun 04 16:48:57 server3.f.hr polkitd[1995]: Unregistered Authentication Agent for uni\nJun 04 16:48:57 server3.f.hr logger[31576]: root[/root] : systemctl stop mongod\nJun 04 16:49:03 server3.f.hr logger[31584]: root[/root] : systemctl status mongod\nJun 04 16:49:39 server3.f.hr logger[31618]: root[/root] : journalctl -xe\n", "text": "Hello,in our company we have mongodb community deployment with three nodes (Primary, Secondary and Secondary on DR site).We are experiencing problem stopping mongod instance and starting.\nOS RHAT 7.7, 64bit, RAM 32 GB, wt cache default, numCores 4. 
MongoDB version 4.4.2.The idea was to upgrade to version 5 and we started by the book, from secondary on DR site.When we issue systemctl stop mongod , the process does not stop in 5 minutes and something kills it.Normal systemctl status would be ‘inactive (dead)’ but we have ‘failed’.Previously (around two years ago) we could start it again by issuing ‘start’ command multiple times.\nWe noticed that journal -xe shows some kind of counter that increase after every start and in the end it starts.\nIt was strange but since this instance is rarely stopped or restarted, we did not have opportunity to repeat the test.Now, upgrade to version 5.0.8 was done via yum install command. The software upgraded but could not start.Then inside mongod.log we found:Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.featureCompatibility was set to 4.4 earlier.Then we also found: Upgrading from a WiredTiger version 10.0.0 database that was not shutdown cleanly is not allowed. Perform a clean shutdown on version 10.0.0 and then upgrade.So, it was killed during stop but this time multiple start command does not help.\nBase on some googling… we decided to move back to 4.4.14. (latest version 4).We did that but we could not start again.Final solution was - cold sync that is now going on (around 4 days to finish).We have 32 TB of data online but users access only last 1% of data or less… (last 2 year available, but 99% of time they use last 7 days RW, 7-30 days RO in history).Now we think that problem could be related to wiredTiger Cache that is 50% of RAM -1 GB : in our setup it’s 32 GB /2 = 16 GB -1 = 15 GB.Our plan was to:wait to see is it going to be stopped now or killed againif stopped correctly (clean shutdown), systemctl status mongod should be inactive (dead)… that I suppose it can start normallyPlease advise what do you think about our situation and plan for Friday (expected date of cold sync finish).Thank you very much.Best regards,\nBranimir Putniković", "username": "Branimir_Putnikovic" }, { "code": "", "text": "The SIGKILL indicates the shutdown was not clean. Likely this is a SIGKILL from systemd after a stop timeout.A clean shutdown is a prerequisite for the upgrade. If you are not getting a clean shutdown from systemctl then you can try one of the other methods on https://www.mongodb.com/docs/manual/tutorial/manage-mongodb-processes/#stop-mongod-processesI did a test jumping from 4.2 through 4.4 to 5.0 without updating the FCV and mongod 5.0 gave a good informational message about FCV being on 4.2. So I think “something bad” ™ happened to your installation on that replica.", "username": "chris" }, { "code": "", "text": "Good morning,Thank you Mr. Dellaway for answer and advice.Tomorrow is our cold sync going to be finished.\nWe shall use ‘use admin db.shutDownServer’ after increasing RAM/wt cache as described and see result.\nIf successful, we plan to do this on other two nodes and after that upgrade to 5.x.Best regards,\nBranimir Putniković", "username": "Branimir_Putnikovic" } ]
MongoDB community stop/start problem
2022-06-06T12:53:45.610Z
MongoDB community stop/start problem
2,655
null
[ "sharding", "atlas-search" ]
[ { "code": "?search_type=dfs_query_then_fetch", "text": "Hi, we are currently migrating from ES and we wanted to know if Atlas Search will allow us to do some custom configuration into the scoring without using the constant scoring.\nThe cases we want to customize are:Thank you!", "username": "Diana" }, { "code": "indexOptions=docsnorms=omit", "text": "Searching on docs for another thing, already discovered the way to do the first case.For string fields only, it can be set the index config to indexOptions=docs to ignore the term frequency and position (see docs here), which acts like the parameter k on Elasticsearch. With norms=omit we can also ignore the field length, so it’s like the b parameter from Elasticsearch.", "username": "Diana" }, { "code": "", "text": "Currently, there is no support for calculating the frequencies for the whole collection, so I opened a case for enhancement here.", "username": "Diana" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Okapi scoring in sharded clusters
2022-06-02T11:19:32.512Z
Okapi scoring in sharded clusters
1,558
null
[]
[ { "code": "", "text": "We are using MongoDB 4.0.4 version for our project. The database configuration is done as below:", "username": "Nandkishor_Chavan" }, { "code": "", "text": "Check this link.May help", "username": "Ramachandra_Tummala" } ]
How to manage mongodb log file
2022-06-08T08:46:30.758Z
How to manage mongodb log file
2,686
null
[]
[ { "code": "", "text": "I am installing mongodb 3.6 on RHEL 8.5, it worked fine, I am using custom directories for data and logs, after implementing the step on the guide (Install MongoDB Community Edition on Red Hat or CentOS — MongoDB Manual) for ‘Configure SELinux’ is does not want to start anymore.sudo systemctl start mongod\nJob for mongod.service failed because the control process exited with error code.\nSee “systemctl status mongod.service” and “journalctl -xe” for details.systemctl status mongod.service\n● mongod.service - MongoDB Database Server\nLoaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\nActive: failed (Result: exit-code) since Wed 2022-06-08 19:50:49 UTC; 1min 2s ago\nDocs: https://docs.mongodb.org/manual\nProcess: 32032 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=1/FAILURE)\nProcess: 32031 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\nProcess: 32029 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\nProcess: 32027 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\nMain PID: 22352 (code=exited, status=0/SUCCESS)Would you know what the problem might be, or how can I rolled back the ‘Configure SELinux’ steps,Thanks a lot\nIvan", "username": "Ivan_Flores" }, { "code": "~/.bash_historymongod", "text": "That procedure is very complex. It’s quite possible you made a mistake. You might check your ~/.bash_history file and make sure you did it correctly. Or it could be some other cause.In any case, probably the easiest “sanity check” is to set SELinux to non-enforcing mode and see if mongod starts up correctly. Then you know the problem is SELinux and not something else.", "username": "Jack_Woehr" }, { "code": "", "text": "non-enforcingJack thank you kindly, disabling SELinux allow it to start. I followed the steps exactly as stated, I might have misinterpreted something. I might need to re-enable SELinux at some point, would you recommend to re-execute the SELinux steps as described in the guide again?", "username": "Ivan_Flores" }, { "code": "~/.bash_history", "text": "I would compare them to what was recorded in your ~/.bash_history file and see if a mistake was made, before I’d try doing them again. SELinux is tricky, it’s easy to make a mistake. And how one corrects mistakes depends on the kind of mistake that was made.", "username": "Jack_Woehr" } ]
Mongo db does not start after Configure SELinux step was completed
2022-06-08T19:53:21.505Z
Mongo db does not start after Configure SELinux step was completed
2,967
null
[ "dot-net", "mdbw22-communitycafe" ]
[ { "code": "", "text": "Discussions about Coffe Roulette: C# Developers", "username": "TimSantos" }, { "code": "", "text": "Photos from @Harshit", "username": "TimSantos" }, { "code": "", "text": "The @James_Kovacs and @James_Turner show. Thank you for the amazing .NET/C# talks !", "username": "wan" } ]
Coffee Roulette: C# Developers
2022-06-06T14:09:47.435Z
Coffee Roulette: C# Developers
3,154
null
[ "java" ]
[ { "code": "", "text": "Hello, my documents have multiple fields, one of them is the “mailReceived”. When a user sends a message to another user, I want to add that message to the existing messages stored under mailReceived. Although this is an insertion (new message), in the context of MongoDB, I have to do an update instead of post (I’m not adding a new document, just updating an existing document using $set). But for the mailReceived field, doing an update will replace all the existing mail messages with the new message, no, I don’t want to do that. I just want to add the new message to the existing messages stored in mailReceived. Is there a quick way to achieve this ? Right now, I have to read all the existing messages out and then store them as a list, then append the new message to the list, then save everything back in, it’s cumbersome. Hopefully, there’s a quick way to do it in Java. I appreciate any help, thanks so much!", "username": "John_Doe4" }, { "code": "", "text": "Look at the $push update operator.", "username": "steevej" }, { "code": "", "text": "Steeve,Thanks a lot ! You are the man. I appreciate your assistance. It worked !", "username": "John_Doe4" }, { "code": "", "text": "Hi Steeve,During an update, to add a new message to the mailReceived field, I use $push. But now, instead of adding a new message, I want to delete an existing message from the mailReceived field while updating my document, is there something like $push for deletion? Could not find $delete/$remove from the MongoDB site. Thanks!", "username": "John_Doe4" }, { "code": "", "text": "See the array update operations.", "username": "steevej" }, { "code": "", "text": "Thanks Steeve! (I just want to say thanks but it complains that my post has less than 20 characters Hopefully, my submission goes through this time)", "username": "John_Doe4" }, { "code": "", "text": "less than 20 charactersI have to fight that do. I would write smaller post more often if I could. B-)~~ /) ~~", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to add new mail message to the mail field in an update?
2022-06-04T20:47:10.935Z
How to add new mail message to the mail field in an update?
1,919
null
[ "mdbw22-communitycafe" ]
[ { "code": "", "text": "Discussions about MongoDB Technical Trivia", "username": "TimSantos" }, { "code": "", "text": "Congratulations to our winners! Photos from @Harshit", "username": "TimSantos" } ]
MongoDB Technical Trivia!
2022-06-06T14:10:15.593Z
MongoDB Technical Trivia!
2,664
null
[ "serverless", "mdbw22-communitycafe" ]
[ { "code": "", "text": "Serverless allows developers to focus on the thing they like to do the most - development - and leave the rest of the attributes including infrastructure and maintenance to the platform offerings. In this discussion, we are going to see how Cloud Run and MongoDB Atlas come together to enable a completely serverless application development experience.", "username": "TimSantos" }, { "code": "", "text": "Photos from @Harshit", "username": "TimSantos" } ]
Discussion: Serverless application development with Cloud Run and MongoDB Atlas
2022-06-06T14:49:55.677Z
Discussion: Serverless application development with Cloud Run and MongoDB Atlas
2,770
null
[ "replication", "backup" ]
[ { "code": "Snapshot was taken: 10AM\nOplog: continuos sync\nCluster crash: 10:20 AM\n", "text": "Im planning to create a backup policy for my mongo DB setup. It has 5 nodes in the cluster and 30TB of data on each node.I want to meet the RTO for 5minutes. Im using AWS Cloud. So planning to use EBS snapshots for this. In my case, the journaling files are located in the same disk where data files are located. But I have one more cluster(5TB data) where journaling is on another disk.Need your suggestions to implement a better backup solution.This is what Im thinking. Take a snapshot every 1hr on the primary node(the script will check who is primary). Then keep archiving the oplog data into somewhere(continuos sync) . But don’t know how it is possible.Recovery scenario:And if I want to perform point in time recovery then I can restore the most recent AWS snapshot but on top of it I need to apply the oplog, But at which point do I need to start restoring the oplog? Because during the snapshot I don’t know the last committed transition. So Any pointers to know about this?", "username": "Bhuvanesh_R" }, { "code": "db.fsyncLock()mongod", "text": "Hi @Bhuvanesh_R and welcome in the MongoDB Community !First, let’s talk about RPO vs RTO:Recovery Point Objective (RPO) generally refers to the amount of data that can be lost within a period most relevant to a business, before significant harm occurs, from the point of a critical event to the most preceding backup.Recovery Time Objective (RTO) often refers to the quantity of time that an application, system and/or process, can be down for without causing significant damage to the business as well as the time spent restoring the application and its data.I think you meant RPO instead of RTO. Because restoring an entire 30 TB 5 nodes cluster in 5 min… Good luck with that.Now I’ll assume the goal is a 5 min RPO (=maximum 5 minutes of lost data).But first of all, 30 TB of data in a single Replica Set (RS) is HUGE. Usually MongoDB clients are recommended to shard their cluster when they reach 2TB of data. Sometimes, depending on the use case and after some discussions with the Technical Service Engineer (TSE), they can push to 4TB of data but not rarely over that.Usually a healthy MongoDB Cluster needs about 15 to 20% of its storage amount in RAM. So if you have 30 TB, I would recommend you to have ~6 TB of RAM on each machine in your RS…So to sum up, you should shard.That being said, let’s get back to the backup problem.I’m not super familiar with AWS and EBS snapshots. But just to be on the safe side, I would db.fsyncLock() (doc) the node you want to snapshot before the snapshot. This forces the node to flush to disk all the pending write operations and lock the entire mongod instance. I think this would be better to ensure consistency of the snapshot.Let’s talk about the oplog now. If you want a 5 min RPO, you will have to be able to replay the oplog from the time of the last snapshot to the desired timestamp. Which means that you have to record the oplog in another cluster elsewhere.Which now brings us to this question:But at which point do I need to start restoring the oplog? Because during the snapshot I don’t know the last committed transition. So Any pointers to know about this?The oplog is idempotent.This means you can replay the entirety of the oplog you have, whatever the snapshot time, you will always end up in the same state.Let’s say your snapshot was done at 10am and you had a crash at 10:20am. 
At 11am you have restored your 10am snapshot to 5 brand new machines (good luck with 30TB… that’s why sharding is also important for the RTO strategy). You can now apply the oplog from 9am => the last oplog entry you got (so probably 10:19 and 55 sec am and the final result will be in the same state than the collection was at 10:19 and 55sec exactly. You could also choose to replay ALL the oplog you have since 3 days ago or just replay from 9:59am, you would be in the same state.As long as you make sure that you don’t start to replay the oplog after the snapshot time (like 10:01am), you are good to replay whatever you like.It’s also the reason why it’s important to keep a large amount of oplog (like 3 days). So you can restore the cluster in any state during these 3 days, given that you have a snapshot before that date and still covered by the oplog.I hope it makes sense. Just a closing comment about why sharding helps the RTO: it’s easier to bring back up 2TB on 15 shards (15 * 3 nodes) with 2TB on each than restoring 30TB on 3 nodes. When you are sharded, you can start all the data transfer in parallel and your final RTO will be (transfer time of 2TB + 1h of maintenance & machine provisioning + time to replay oplog). If you are on a single RS, then your RTO starts with (transfer time of 30TB).Oh and closing statement: Everything that we talked about here is entirely automated, coded, carefully designed and implemented in MongoDB Atlas.Restoring an entire cluster with snapshot + replay the oplog is like 10 mouse clicks top.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "fsynclock()", "text": "Thanks a lot for your detailed answer. One last question about replaying oplog.My backup process will be,Lets say, the max oplog position from the above backup is 10:02AM.Then the crash happened at 10:30AM, and i have oplog backup till 10:29AM.Now restoring the oplog,I’ll set the start time as 10:00 AM.\nWhat happens if any transaction at that particular time is already captured by snapshot(till 10:02AM is already covered in snapshot) and I’m replaying them again via oplog.Will it cause any duplicates?", "username": "Bhuvanesh_R" }, { "code": "{name: \"Max\"}coll{_id: ObjectId(xx), name: \"Max\"}collpersons_id", "text": "No because the oplog is idempotent. You could replay the oplog entirely 10 times, the databases and collection at the end will always end up in the same state.If you want, the oplog only contains the result of the transaction, not the transaction itself. Each operation you run are transformed into an idempotent entry in the oplog.For example.insert doc {name: \"Max\"} in coll becomes doc {_id: ObjectId(xx), name: \"Max\"} inserted in coll. When you replay the oplog, it will replay this operation as an upsert with the unique ID. If the doc exists => does nothing. If it doesn’t, it’s created in that state.Another example. Let’s say you want to $inc an age by 1 (birthday of someone in your person collection).\nThe result in the oplog is: in persons collection, SET age of doc with _id= ObjectId(xx) to 10.When you executed that update query, age was 9 => $inc +1 to 10. Only the result of the query is stored in the oplog. Not the query itself. If you replay the oplog. The command is SET AGE 10. Not $inc age by 1.You can replay this 10 times => Age will still be 10. If the oplog was storing the $inc query, it would be 19. which would not work.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Backup strategy for the replica set with Filesystem backup
2022-06-01T18:09:09.299Z
Backup strategy for the replica set with Filesystem backup
3,300
null
[ "aggregation", "python" ]
[ { "code": "client['chat']['commissionLedger'].aggregate([{ $match : { 'commissionTo' : j}}])", "text": "I am trying to run this specific query\nclient['chat']['commissionLedger'].aggregate([{ $match : { 'commissionTo' : j}}])but i am getting the error statingclient[‘chat’][‘commission’].aggregate([{ $match : { ‘commissionTo’ : ‘12345’}}])\n^\nSyntaxError: invalid syntaxI am new to community so please ignore if my question is not in format, sorry in advance", "username": "Harsh_Patel" }, { "code": "", "text": "The python language is very picky.Without the code block that surround the line of code that you share it is hard to tell. The best is to share the whole function.But I would try a few things.Please read Formatting code and log snippets in posts before posting more code or documents.", "username": "steevej" } ]
$match stage getting error
2022-06-08T13:09:26.126Z
$match stage getting error
3,653
null
[ "aggregation" ]
[ { "code": "const ownerAddress = String(resultCardOwner.get(\"owner\"));\n\n const pipeline = [\n {\n match: {\n $expr: {\n $and: [\n { $eq: [\"$owner\", ownerAddress] },\n { $eq: [\"$tokenId\", cardId] },\n\n ],\n },\n },\n },\n {\n lookup: {\n from: \"CardOwners\",\n localField: \"owner\",\n foreignField: \"owner\",\n as: \"cardOwner\",\n },\n },\n {\n project: { \n tokenId:1,\n name:1,\n image:1,\n owner: 1,\n cardType:1,\n \n \"cardOwner.name\":1,\n \"cardOwner.email\": 1,\n \"cardOwner.avatar\": 1, \n\n }\n }\n ];\n \n const result = query.aggregate(pipeline);\n return result; \n{ $eq: [\"$owner\", ownerAddress] },\n{ $eq: [\"$tokenId\", cardId] },\n localField: \"owner\",\n foreignField: \"owner\",\n", "text": "Hi everyone, I’m struggling with a query lookup with two collections where I’m comparing strings of owners. Both the fields are on the two collections are named “owner”.I’d need to match the collections based on an exadecimal value of these owner fields, type String, that is written often in a different way.Let’s imagine 0xfba… in the first collection and on the other one 0xFBa…If they are identical the query works fine, if not, it fails and is unable to retrieve data from the second collection.I’m not figuring out the right syntax to use for a case insensitive $eq.I’m using a backend service, and I’m trying to pass an external variable to the query, the owner field needs to match that external value, for both the collections.This is part of the queryI’d like to bring all the strings of owner for both the collections (Cards and CardOwners) to be case insensitiveHereand here ( I think)Any suggestion? Thank you", "username": "Gerry" }, { "code": "i", "text": "Hello @Gerry ,Welcome to the community!! In mentioned scenario for case insensitive search, you can use $regex with option “i”.From a developer’s prespective, If you frequently run case-insensitive regex queries (utilizing the i option), you should create a case-insensitive index to support your queries. You can specify a collation on an index to define language-specific rules for string comparison, such as rules for lettercase and accent marks. A case-insensitive index greatly improves performance for case-insensitive queries. Here is an example on how to achieve this.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Case insensitive in an aggregate / lookup query with external variable
2022-05-20T05:30:56.294Z
Case insensitive in an aggregate / lookup query with external variable
4,179
null
[ "java", "mdbw22-communitycafe" ]
[ { "code": "", "text": "Discussions about the talk: Hello, Java World!", "username": "TimSantos" }, { "code": "", "text": "Hello Java developers!I will be chatting with @Jeffrey_Yemin on MongoDB World 22 Community Café tomorrow about all things MongoDB + Java. If you have any topic that you would like to be mentioned, or questions that you would like to be discussed please post here.The session would be approximately 20 minutes long, and if time allows we can discuss it on the session.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "\nimage1920×1440 145 KB\n", "username": "TimSantos" } ]
Hello, Java World!
2022-06-06T14:09:24.237Z
Hello, Java World!
2,936
null
[]
[ { "code": "", "text": "Hello guys,\nI’m trying to join two collections in mongodb to get all data from a collection and only the commun ones FROM the other collection any idea if it’s possible to do this in mongodb? if yes how to?\nthanks in advance", "username": "BOULLOUS_Laila1" }, { "code": "$unionWith", "text": "Hi,Yes, you can join 2 collections with Aggregation Framework and $unionWith stage.Here are the docs with syntax and examples, so you can check how to do it. ", "username": "NeNaD" } ]
Left Join: join two collections in mongodb to get all data from a collection and only the commun ones FROM the other collection
2022-06-08T15:59:17.712Z
Left Join: join two collections in mongodb to get all data from a collection and only the commun ones FROM the other collection
6,082
null
[ "mdbw22-communitycafe" ]
[ { "code": "", "text": "Discussions about the MongoDB Green Team and Sustainable Sustainability", "username": "TimSantos" }, { "code": "", "text": "Starting now! Lydia is talking about the initiatives that the Green Team do here in MongoDB\nimage1920×1440 170 KB\n", "username": "TimSantos" } ]
MongoDB Green Team and Sustainable Sustainability
2022-06-06T14:08:49.446Z
MongoDB Green Team and Sustainable Sustainability
2,632
null
[ "mdbw22-communitycafe" ]
[ { "code": "", "text": "Meet the folks in charge of innovation.", "username": "TimSantos" }, { "code": "", "text": "@ThomasR and @Michael_Cahill live now!\n\nimage1920×1440 208 KB\n", "username": "Jason_Tran" }, { "code": "", "text": "MongoDB Labs focuses on research and machine learning to understand MongoDB customers further and get ahead of the market for MongoDB. They build proof of concepts for product leads so they can become more informed when it comes to product decisions\nimage1920×1440 183 KB\n", "username": "TimSantos" } ]
Meet the Team: MongoDB Labs
2022-06-06T14:08:16.910Z
Meet the Team: MongoDB Labs
3,325
null
[ "graphql", "mdbw22-communitycafe" ]
[ { "code": "", "text": "Hasura is an open-source product that gives you instant GraphQL & REST APIs out of the box on all your data. In this live demo, we will show you how you can connect Hasura to your data sources and create a unified data API with simplicity, speed & security!Join Chris Toth, senior solutions engineer at Hasura, and learn how building a unified Data API can:Simplify your application architectureDramatically improve your API & UI teams productivitySeamlessly work with your organization’s governance standards", "username": "TimSantos" }, { "code": "", "text": "This session is starting now!\nimage1920×1440 116 KB\n", "username": "TimSantos" }, { "code": "", "text": "Chris Toth did a demo based on the diagram below:\nimage1920×1440 108 KB\n", "username": "TimSantos" }, { "code": "", "text": "\nImage from iOS (10)1920×2560 419 KB\n–\nImage from iOS (9)1920×2560 158 KB\n", "username": "Harshit" } ]
Partner Showcase: Instant GraphQL APIs on your data
2022-06-06T14:07:16.042Z
Partner Showcase: Instant GraphQL APIs on your data
3,708
null
[ "transactions" ]
[ { "code": "public class Amestec : RealmObject\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n}\n public static void RemoveAmestecFromDB(ObjectId id)\n {\n var realm = Realm.GetInstance(_syncConfiguration);\n\n var selectedAmestec = realm.All<Amestec>().FirstOrDefault(c => c.Id == id);\n if(selectedAmestec == null)\n return;\n using (var transaction = realm.BeginWrite()) {\n realm.Remove(selectedAmestec);\n transaction.Commit();\n }\n}\n public static void RemoveAmestecFromDB(ObjectId id)\n { \n var amestec = _realm.Find<Amestec>(id);\n _realm.Write(() => {\n _realm.Remove(amestec);\n });\n }\nRealmInvalidObjectException: Attempted to access detached row\nRealms.NativeException.ThrowIfNecessary (System.Func`2[T,TResult] overrider) (at <599fd848fd9040f0a59e4106e4838256>:0)\nRealms.ObjectHandle.GetValue (System.String propertyName, Realms.RealmObjectBase+Metadata metadata, Realms.Realm realm) (at <599fd848fd9040f0a59e4106e4838256>:0)\nRealms.RealmObjectBase.GetValue (System.String propertyName) (at <599fd848fd9040f0a59e4106e4838256>:0)\nAmestec.get_Id () (at Assets/Scripts/Model/AmestecModel.cs:10)\n...\n", "text": "This is my model:I’m trying to remove an Amestec using:orThe object gets removed but I get the following error:", "username": "Vlad_Miu" }, { "code": "", "text": "Can you post the entire stacktrace? It looks like something is trying to access the Id of the just-removed object. This can be for example if you have wired up object change notifications and are not correctly filtering-out deletions.", "username": "nirinchev" }, { "code": "RealmInvalidObjectException: Attempted to access detached row\nRealms.NativeException.ThrowIfNecessary (System.Func`2[T,TResult] overrider) (at <599fd848fd9040f0a59e4106e4838256>:0)\nRealms.ObjectHandle.GetValue (System.String propertyName, Realms.RealmObjectBase+Metadata metadata, Realms.Realm realm) (at <599fd848fd9040f0a59e4106e4838256>:0)\nRealms.RealmObjectBase.GetValue (System.String propertyName) (at <599fd848fd9040f0a59e4106e4838256>:0)\nAmestec.get_Id () (at Assets/Scripts/Model/AmestecModel.cs:10)\nDeleteAmestecOnClick.DeleteCurrentAmestec () (at Assets/Scripts/Utility/OnClick/Amestec/DeleteAmestecOnClick.cs:27)\nUnityEngine.Events.InvokableCall.Invoke () (at <07c89f7520694139991332d3cf930d48>:0)\nUnityEngine.Events.UnityEvent.Invoke () (at <07c89f7520694139991332d3cf930d48>:0)\nUnityEngine.UI.Button.Press () (at Library/PackageCache/[email protected]/Runtime/UI/Core/Button.cs:68)\nUnityEngine.UI.Button.OnPointerClick (UnityEngine.EventSystems.PointerEventData eventData) (at Library/PackageCache/[email protected]/Runtime/UI/Core/Button.cs:110)\nUnityEngine.EventSystems.ExecuteEvents.Execute (UnityEngine.EventSystems.IPointerClickHandler handler, UnityEngine.EventSystems.BaseEventData eventData) (at Library/PackageCache/[email protected]/Runtime/EventSystem/ExecuteEvents.cs:50)\nUnityEngine.EventSystems.ExecuteEvents.Execute[T] (UnityEngine.GameObject target, UnityEngine.EventSystems.BaseEventData eventData, UnityEngine.EventSystems.ExecuteEvents+EventFunction`1[T1] functor) (at Library/PackageCache/[email protected]/Runtime/EventSystem/ExecuteEvents.cs:262)\nUnityEngine.EventSystems.EventSystem:Update() (at Library/PackageCache/[email protected]/Runtime/EventSystem/EventSystem.cs:385)\n", "text": "Heya, I don’t think I have anything like object change notifications but here is the stacktrace:", "username": "Vlad_Miu" }, { "code": "DeleteCurrentAmestecAmestec", "text": "Could you show 
the code of DeleteCurrentAmestec?I don’t think I have anything like object change notificationsIf you have used any method from this page that concerns Amestec, we’d like to see that/those.", "username": "Andrea_Catalini" }, { "code": "DeleteAmestecOnClick.cs:27Amestec.Id", "text": "More specifically, it looks like in DeleteAmestecOnClick.cs:27 you’re accessing Amestec.Id after the object has been deleted.", "username": "nirinchev" }, { "code": "private void DeleteCurrentAmestec()\n {\n var amestecController = AmestecController.Instance;\n var currentAmestec = amestecController.CurrentAmestec;\n if (currentAmestec != null) {\n RealmController.RemoveAmestecFromDB(currentAmestec.Id);\n amestecController.RefreshNamesView();\n amestecController.GetAmestecViewDataInstance().ResetFieldsNull();\n amestecController.ClearIstoricView();\n amestecController.SetActiveBtns(false);\n amestecController.GetSliderInstance().GetComponent<AmestecViewSlider>().RefreshSliderView();\n }\n } \n", "text": "Hm, I’m passing the ID to the RemoveAmestecFromDB witch is above, but this should happen before deletion right?", "username": "Vlad_Miu" }, { "code": "", "text": "sorry, line 27 is RealmController.RemoveAmestecFromDB(currentAmestec.Id)", "username": "Vlad_Miu" }, { "code": "", "text": "I have fixed it by giving amestecController.CurrentAmestec = null right after RemoveAmestecFromDB … I don’t understand exactly what was going on but thank you guys a lot for the help!", "username": "Vlad_Miu" }, { "code": "Amestec.IdamestecController.CurrentAmestecRealmObjectamestecController.CurrentAmestecDeleteCurrentAmestecif (currentAmestec != null) {", "text": "I don’t understand exactly what was going onThat feels like something was reacting to the deletion and trying to access fields of the deleted object (Amestec.Id as Nikola said). What puzzles me is that it seems that part of the “reaction” is to call the delete method again. You may want to investigate this.\nAnyway, I assume that amestecController.CurrentAmestec is basically caching a RealmObject. If so, by not setting it manually to null when an object is deleted, any subsequent reference to amestecController.CurrentAmestec throws as you see. In fact, when you set it to null, the next call to DeleteCurrentAmestec won’t throw because you have the following check if (currentAmestec != null) {If my assumptions are correct this should help you to have a better overview of what’s going on.", "username": "Andrea_Catalini" }, { "code": "", "text": "Yeah it’s just a reference to a realm object, I thought it was something like calling the function again but I can’t figure out why this was happening, I might have to double check if it’s calling the function onmousedown or taking multiple inputs, if not I have no idea", "username": "Vlad_Miu" } ]
Removing object from realm based on Id works but throws error
2022-06-06T11:47:28.759Z
Removing object from realm based on Id works but throws error
3,144
null
[ "aggregation", "php" ]
[ { "code": "{\n \"_id\": \"QQoMlgFhfoAGuVYsVcGCGhWBZMpNBfjOTbp\",\n \"cost\": 0.0103,\n \"url\": \"https://content-delivery.pro/in.php?tcid=f0e2590aae1cc442458eb30a7495554eb91ab2b6\",\n \"traffic\": \"XXAds\",\n \"campaigns\": \"123456 FFF\",\n \"browser\": \"Safari\",\n \"c1\": 52029,\n \"geo\": \"CA\",\n \"log_time\": {\n \"$date\": {\n \"$numberLong\": \"1654319236000\"\n }\n }\n}\n[\n {\n '$match': {\n 'log_time': {\n '$lt': new Date('Sat, 04 Jun 2022 04:27:59 GMT'), \n '$gte': new Date('Sat, 28 May 2022 04:27:59 GMT')\n }\n }\n }, {\n '$group': {\n '_id': {\n '$dateToString': {\n 'format': '%Y-%m-%d', \n 'date': '$log_time'\n }\n }, \n 'totalPayout': {\n '$sum': '$payout'\n }, \n 'totalClicks': {\n '$sum': '$click'\n }, \n 'totalConversions': {\n '$sum': '$conversions'\n }, \n 'totalCost': {\n '$sum': '$cost'\n }, \n 'totalVisitor': {\n '$sum': 1\n }\n }\n }, {\n '$sort': {\n '_id': 1\n }\n }\n]\n{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"namespace\": \"click_data.clicks\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"$and\": [\n {\n \"log_time\": {\n \"$lt\": {\n \"$date\": {\n \"$numberLong\": \"1654317033667\"\n }\n }\n }\n },\n {\n \"log_time\": {\n \"$gte\": {\n \"$date\": {\n \"$numberLong\": \"1653712233667\"\n }\n }\n }\n }\n ]\n },\n \"queryHash\": \"D218AE91\",\n \"planCacheKey\": \"6A370F76\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"transformBy\": {\n \"click\": 1,\n \"conversions\": 1,\n \"cost\": 1,\n \"log_time\": 1,\n \"payout\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"filter\": {\n \"$and\": [\n {\n \"log_time\": {\n \"$lt\": {\n \"$date\": {\n \"$numberLong\": \"1654317033667\"\n }\n }\n }\n },\n {\n \"log_time\": {\n \"$gte\": {\n \"$date\": {\n \"$numberLong\": \"1653712233667\"\n }\n }\n }\n }\n ]\n },\n \"direction\": \"forward\"\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 1448441,\n \"executionTimeMillis\": 4634,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 1512268,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 1448441,\n \"executionTimeMillisEstimate\": 292,\n \"works\": 1512270,\n \"advanced\": 1448441,\n \"needTime\": 63828,\n \"needYield\": 0,\n \"saveState\": 1599,\n \"restoreState\": 1599,\n \"isEOF\": 1,\n \"transformBy\": {\n \"click\": 1,\n \"conversions\": 1,\n \"cost\": 1,\n \"log_time\": 1,\n \"payout\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"filter\": {\n \"$and\": [\n {\n \"log_time\": {\n \"$lt\": {\n \"$date\": {\n \"$numberLong\": \"1654317033667\"\n }\n }\n }\n },\n {\n \"log_time\": {\n \"$gte\": {\n \"$date\": {\n \"$numberLong\": \"1653712233667\"\n }\n }\n }\n }\n ]\n },\n \"nReturned\": 1448441,\n \"executionTimeMillisEstimate\": 197,\n \"works\": 1512270,\n \"advanced\": 1448441,\n \"needTime\": 63828,\n \"needYield\": 0,\n \"saveState\": 1599,\n \"restoreState\": 1599,\n \"isEOF\": 1,\n \"direction\": \"forward\",\n \"docsExamined\": 1512268\n }\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 1448441,\n \"executionTimeMillisEstimate\": 2636\n },\n {\n \"$group\": {\n \"_id\": {\n \"$dateToString\": {\n \"date\": \"$log_time\",\n \"format\": {\n \"$const\": \"%Y-%m-%d\"\n }\n }\n },\n \"totalPayout\": {\n \"$sum\": \"$payout\"\n },\n \"totalClicks\": {\n 
\"$sum\": \"$click\"\n },\n \"totalConversions\": {\n \"$sum\": \"$conversions\"\n },\n \"totalCost\": {\n \"$sum\": \"$cost\"\n },\n \"totalVisitor\": {\n \"$sum\": {\n \"$const\": 1\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"totalPayout\": 504,\n \"totalClicks\": 504,\n \"totalConversions\": 504,\n \"totalCost\": 504,\n \"totalVisitor\": 504\n },\n \"totalOutputDataSizeBytes\": 4291,\n \"usedDisk\": false,\n \"nReturned\": 7,\n \"executionTimeMillisEstimate\": 4630\n },\n {\n \"$sort\": {\n \"sortKey\": {\n \"_id\": 1\n }\n },\n \"totalDataSizeSortedBytesEstimate\": 4403,\n \"usedDisk\": false,\n \"nReturned\": 7,\n \"executionTimeMillisEstimate\": 4630\n }\n ],\n \"serverInfo\": {\n \"host\": \"ubuntu-s-2vcpu-2gb-sgp1-01\",\n \"port\": 27017,\n \"version\": \"5.0.9\",\n \"gitVersion\": \"6f7dae919422dcd7f4892c10ff20cdc721ad00e6\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"command\": {\n \"aggregate\": \"clicks\",\n \"pipeline\": [\n {\n \"$match\": {\n \"log_time\": {\n \"$lt\": {\n \"$date\": {\n \"$numberLong\": \"1654317033667\"\n }\n },\n \"$gte\": {\n \"$date\": {\n \"$numberLong\": \"1653712233667\"\n }\n }\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"$dateToString\": {\n \"format\": \"%Y-%m-%d\",\n \"date\": \"$log_time\"\n }\n },\n \"totalPayout\": {\n \"$sum\": \"$payout\"\n },\n \"totalClicks\": {\n \"$sum\": \"$click\"\n },\n \"totalConversions\": {\n \"$sum\": \"$conversions\"\n },\n \"totalCost\": {\n \"$sum\": \"$cost\"\n },\n \"totalVisitor\": {\n \"$sum\": 1\n }\n }\n },\n {\n \"$sort\": {\n \"_id\": 1\n }\n }\n ],\n \"allowDiskUse\": true,\n \"cursor\": {},\n \"maxTimeMS\": 60000,\n \"$db\": \"click_data\"\n },\n \"ok\": 1\n}\n{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"namespace\": \"click_data.clicks\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"$and\": [\n {\n \"log_time\": {\n \"$lt\": {\n \"$date\": {\n \"$numberLong\": \"1654317109655\"\n }\n }\n }\n },\n {\n \"log_time\": {\n \"$gte\": {\n \"$date\": {\n \"$numberLong\": \"1653712309655\"\n }\n }\n }\n }\n ]\n },\n \"queryHash\": \"D218AE91\",\n \"planCacheKey\": \"3A53374A\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"transformBy\": {\n \"click\": 1,\n \"conversions\": 1,\n \"cost\": 1,\n \"log_time\": 1,\n \"payout\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"log_time\": 1\n },\n \"indexName\": \"Timestamp\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"log_time\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"log_time\": [\n \"[new Date(1653712309655), new Date(1654317109655))\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 1448472,\n 
\"executionTimeMillis\": 7844,\n \"totalKeysExamined\": 1448472,\n \"totalDocsExamined\": 1448472,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 1448472,\n \"executionTimeMillisEstimate\": 1436,\n \"works\": 1448473,\n \"advanced\": 1448472,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 1536,\n \"restoreState\": 1536,\n \"isEOF\": 1,\n \"transformBy\": {\n \"click\": 1,\n \"conversions\": 1,\n \"cost\": 1,\n \"log_time\": 1,\n \"payout\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 1448472,\n \"executionTimeMillisEstimate\": 1221,\n \"works\": 1448473,\n \"advanced\": 1448472,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 1536,\n \"restoreState\": 1536,\n \"isEOF\": 1,\n \"docsExamined\": 1448472,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 1448472,\n \"executionTimeMillisEstimate\": 314,\n \"works\": 1448473,\n \"advanced\": 1448472,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 1536,\n \"restoreState\": 1536,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"log_time\": 1\n },\n \"indexName\": \"Timestamp\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"log_time\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"log_time\": [\n \"[new Date(1653712309655), new Date(1654317109655))\"\n ]\n },\n \"keysExamined\": 1448472,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 1448472,\n \"executionTimeMillisEstimate\": 5734\n },\n {\n \"$group\": {\n \"_id\": {\n \"$dateToString\": {\n \"date\": \"$log_time\",\n \"format\": {\n \"$const\": \"%Y-%m-%d\"\n }\n }\n },\n \"totalPayout\": {\n \"$sum\": \"$payout\"\n },\n \"totalClicks\": {\n \"$sum\": \"$click\"\n },\n \"totalConversions\": {\n \"$sum\": \"$conversions\"\n },\n \"totalCost\": {\n \"$sum\": \"$cost\"\n },\n \"totalVisitor\": {\n \"$sum\": {\n \"$const\": 1\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"totalPayout\": 504,\n \"totalClicks\": 504,\n \"totalConversions\": 504,\n \"totalCost\": 504,\n \"totalVisitor\": 504\n },\n \"totalOutputDataSizeBytes\": 4291,\n \"usedDisk\": false,\n \"nReturned\": 7,\n \"executionTimeMillisEstimate\": 7842\n },\n {\n \"$sort\": {\n \"sortKey\": {\n \"_id\": 1\n }\n },\n \"totalDataSizeSortedBytesEstimate\": 4403,\n \"usedDisk\": false,\n \"nReturned\": 7,\n \"executionTimeMillisEstimate\": 7842\n }\n ],\n \"serverInfo\": {\n \"host\": \"ubuntu-s-2vcpu-2gb-sgp1-01\",\n \"port\": 27017,\n \"version\": \"5.0.9\",\n \"gitVersion\": \"6f7dae919422dcd7f4892c10ff20cdc721ad00e6\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"command\": {\n \"aggregate\": \"clicks\",\n \"pipeline\": [\n {\n \"$match\": {\n \"log_time\": {\n \"$lt\": {\n \"$date\": {\n \"$numberLong\": \"1654317109655\"\n }\n },\n \"$gte\": {\n \"$date\": {\n \"$numberLong\": \"1653712309655\"\n }\n }\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n 
\"$dateToString\": {\n \"format\": \"%Y-%m-%d\",\n \"date\": \"$log_time\"\n }\n },\n \"totalPayout\": {\n \"$sum\": \"$payout\"\n },\n \"totalClicks\": {\n \"$sum\": \"$click\"\n },\n \"totalConversions\": {\n \"$sum\": \"$conversions\"\n },\n \"totalCost\": {\n \"$sum\": \"$cost\"\n },\n \"totalVisitor\": {\n \"$sum\": 1\n }\n }\n },\n {\n \"$sort\": {\n \"_id\": 1\n }\n }\n ],\n \"allowDiskUse\": true,\n \"cursor\": {},\n \"maxTimeMS\": 60000,\n \"$db\": \"click_data\"\n },\n \"ok\": 1\n}``` \n\n\nWould love some help here!!! \n\nThanks a lot in advance,\n\nKK", "text": "Hi all,I am new to MongoDB. I’ve been trying to optimise my database schema, and queries. Reading on how to what to create index on.In this collection, I would have large number of documents like this (currently at 1.5m+):A typical query would be an aggregate, that would match one of the attributes or compare timestamp, and create sums of records/payout/cost etc, like this:I have created indexes on “campaign”, and when I ran a query that compared campaign first before grouping, it didn’t make a huge difference. Then I created an index on log_time(timestamp), and the query would actually take much longer (from 11s → 21s). I’m trying to get the query time down, as 11s is a bit long for my application, and the number of records are just going to grow from here.Running explain on this query without index:With log_time as index (21s):", "username": "Jantzen_Chow" }, { "code": "\nWould love some help here!!! \n\nThanks a lot in advance,\n\nKK\n", "text": "This went to the code block, and I couldn’t find the edit post button…", "username": "Jantzen_Chow" }, { "code": "", "text": "You have an IXSCAN and it looks adequate.You have something else slowing the overall system. The same collection with an extra index might require more RAM to cache both the index and the data. So if your system is low in resource, which seems to be the case with “host”: “ubuntu-s-2vcpu-2gb-sgp1-01” , the extra disk I/O eats up all the performance.Also your date selection is not really selective since nReturned:1448441 is almost the whole collection which iscurrently at 1.5m+You also do $sum of fields that are not present (except cost) in the sample document.", "username": "steevej" }, { "code": "", "text": "Thanks for your reply.I had resized the VM from 2GB 2CPU to 4CPU 8GB prior to running those test. I don’t think it was swapping but I will confirm. Afterall the whole database was under 100MB.\nI’ll report in if it wasn’t swapping.Only some of the documents has the field in $sum, would it be better if they all had 0?So I guess, to get better performance I will need to change the schema / database strategy.THanks!", "username": "Jantzen_Chow" }, { "code": "", "text": "would it be better if they all had 0?I don’t think so. A non-existing field take a lot less space than a field with 0. I just mentioned it in case the documents have been redacted to hide some details.change the schemaBut before changing anything, you have to make sure that you do not have another issue., like write traffic occurring at the same time?What are the total databases size?\nWhat are the total indexes size?", "username": "steevej" }, { "code": "", "text": "I had loaded the database with test data. Just to confirm speed/usability and how much space I need to provide for each collection.So no other queries was going on at the same time.I was mistaken earlier, total database size for 1.5M record was about 150MB. 
Index size is about 15 MB.", "username": "Jantzen_Chow" }, { "code": "", "text": "I’ve done some more tests. It seems that if a query uses the index to select a small portion of the documents, the index helps, but if the query involves 80%+ of the documents, it is actually slower. Makes sense.", "username": "Jantzen_Chow" } ]
A query that compares timestamps runs slower with an index created on the timestamp?
2022-06-05T04:34:42.169Z
A query that compares timestamps runs slower with an index created on the timestamp?
3,080
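The behaviour reported in this thread (the date-range aggregation slowing from 11s to 21s once the Timestamp index is used) can be checked directly with explain. A minimal mongosh sketch, assuming the thread's clicks collection and its { log_time: 1 } index; the date values are placeholders for a week-long window:

```javascript
// A low-selectivity range: per the explain output above, ~1.45M of ~1.5M
// documents match, so the index saves almost no filtering work.
const range = { log_time: { $gte: new Date("2022-05-28"), $lt: new Date("2022-06-04") } };

// Plan the optimizer picks: IXSCAN followed by a FETCH for every document.
db.clicks.find(range).explain("executionStats");

// Force a sequential collection scan for comparison.
db.clicks.find(range).hint({ $natural: 1 }).explain("executionStats");
```

When nearly every document matches, the per-document FETCH after the IXSCAN amounts to random lookups, while the $natural scan reads the collection sequentially, which is consistent with the slowdown Jantzen measured.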
null
[ "sharding", "change-streams", "ruby", "beta" ]
[ { "code": "", "text": "The Ruby driver team is please to announce the release of version 2.18.0.beta1.This beta release of the Ruby driver supports MongoDB version 5.2 and 6.0. This release can be used for early testing of new features, but are not suitable for production deployments.Please note that MongoDB 6.0 binaries are currently available only as release candidates. Release candidates can be used for early testing of new features, but are not suitable for production deployments.This release includes the following new features:Added support for queryable encryption.Added support for Azure Key Vault, Google Cloud Key Management, and anyKMIP compliant Key Management System to be used as master key storage forclient side encryption.Following issues were also implemented or fixed:RUBY-2922 Use the count command instead of collStats to implement estimatedDocumentCountRUBY-2909 Snapshot Query Examples for the ManualRUBY-2682 Specify 5.0 cursor behaviourRUBY-2920 “ChangeStream Spec: fullDocument field in ChangeStreamOptions should be an optional to handle ““default”” case.”RUBY-2766 Allow hint for unacknowledged writes using OP_MSG when supported by the serverRUBY-2746 Provide options to limit number of mongos servers used in connecting to sharded clustersRUBY-2736 Add server connectionId to command monitoring eventsRUBY-2528 Permit inserting dollar-prefixed or dotted keysRUBY-2525 Expose the Reason an Operation Fails Document ValidationRUBY-2737 Allow custom service names with srvServiceName URI optionRUBY-2990 Terminate push monitor when it failsRUBY-2973 Add the acknowledged? field to the BulkWrite::Result classRUBY-2966 QueryCache: Add ActiveJob wrapperRUBY-2845 JRuby 9.3 SupportRUBY-2969 Always report ‘wallTime’ in the change stream event outputRUBY-2959 Clustered Indexes for all CollectionsRUBY-2956 Make allowDiskUse opt-out rather than opt-in in 6.0RUBY-2891 Change streams support for user-facing PIT pre- and post-imagesRUBY-2822 Add support for the comment field to all helpersRUBY-2814 Support authorizedCollections option for listCollections helpersRUBY-2742 “Remove OP_KILL_CURSORS, OP_INSERT, OP_UPDATE, OP_DELETE implementations + tests”RUBY-2715 Document Time-Series CollectionsRUBY-2480 to_bson getting called 6 times on a collection insertRUBY-2443 Adjust batch size based on limit & remaining # of documentsRUBY-2090 Move driver DBRef class to bson-rubyRUBY-2834 Rails 7 SupportRUBY-2590 Driver Handling of DBRefsRUBY-2978 Ruby 3.1: Synchronize can’t be called from trap contextRUBY-2972 Calling find_one_and_* methods with write_concern 0 causes an errorRUBY-2923 Slow spawn of mongocryptdRUBY-3009 Update min_pool_size documentation for populationRUBY-2961 Zero & negative limits not emulated correctly by QueryCacheRUBY-2996 Implement update and replace validationRUBY-2869 ruby 3.1.0: finalizer can’t synchronize using mutexesRUBY-2843 count_documents is not documented/tested to take session option", "username": "Dmitry_Rybakov" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Ruby driver 2.18.0.beta1
2022-06-08T11:57:02.425Z
Ruby driver 2.18.0.beta1
2,673
null
[]
[ { "code": "", "text": "I am looking for differences between { } & [ ] , when to use which type of braces?", "username": "pen_so" }, { "code": "", "text": "{ } objects\n[ ] arrayRead JSON, JSON - Wikipedia and then take MongoDB Courses and Trainings | MongoDB University.", "username": "steevej" } ]
Diff between { } & [ ]
2022-06-08T05:15:38.419Z
Diff between { } &amp; [ ]
1,114
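To make the one-line answer above concrete, a small mongosh sketch (the people collection and its fields are made up for illustration):

```javascript
// { } builds a document (an object with named fields),
// [ ] builds an array (an ordered list of values).
db.people.insertOne({
  name: { first: "Ada", last: "Lovelace" }, // { } -> embedded document
  hobbies: ["math", "poetry"]               // [ ] -> array
});

// Objects are queried by key, using dot notation for nested fields:
db.people.findOne({ "name.first": "Ada" });

// Arrays match if any element equals the queried value:
db.people.findOne({ hobbies: "math" });
```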
null
[ "queries" ]
[ { "code": "{\n \"_id\": \"609f36c35931ac30173d920d\",\n \"firstName\": \"John\",\n \"lastName\": \"Doe\",\n \"accounts\": [\n {\n \"_id\": \"6288f730595169ab83742200\",\n \"bankName\": \"Chase\",\n \"snapshot\": {\n \"balances\": [\n {\n \"ticker\": \"USD\",\n \"amount\": 1200,\n \"usdValue\": 1200\n },\n {\n \"ticker\": \"EUR\",\n \"amount\": 1000,\n \"usdValue\": 1100\n }\n ]\n }\n },\n {\n \"_id\": \"6288f730595169ab83742201\",\n \"bankName\": \"Bank of America\",\n \"snapshot\": {\n \"balances\": [\n {\n \"ticker\": \"USD\",\n \"amount\": 500,\n \"usdValue\": 500\n },\n {\n \"ticker\": \"GBP\",\n \"amount\": 500,\n \"usdValue\": 600\n }\n ]\n }\n }\n ]\n},\n{\n \"_id\": \"609f36c35931ac301883492\",\n \"firstName\": \"Jane\",\n \"lastName\": \"Doe\",\n \"accounts\": [\n {\n \"_id\": \"6288f730595169ab83742200\",\n \"bankName\": \"Wells Fargo\",\n \"snapshot\": {\n \"balances\": [\n {\n \"ticker\": \"USD\",\n \"amount\": 0,\n \"usdValue\": 0\n },\n {\n \"ticker\": \"AUD\",\n \"amount\": 1000,\n \"usdValue\": 800\n }\n ]\n }\n },\n {\n \"_id\": \"6288f730595169ab83742201\",\n \"bankName\": \"TD Bank\",\n \"snapshot\": {\n \"balances\": [\n {\n \"ticker\": \"CAD\",\n \"amount\": 500,\n \"usdValue\": 650\n },\n {\n \"ticker\": \"HKD\",\n \"amount\": 0,\n \"usdValue\": 0\n }\n ]\n }\n }\n ]\n}\n\n[\"USD\", \"GBP\", \"AUD\", \"CAD\", \"HKD\", \"EUR\"]\n", "text": "I have a users collection with the following structureI want to get the following output:I’m completely new to MongoDB. Can you please help?", "username": "Akram_Mnif" }, { "code": "db.collection.aggregate({\n \"$group\": {\n \"_id\": \"$accounts.snapshot.balances.ticker\"\n }\n},\n{\n \"$unwind\": \"$_id\"\n},\n{\n \"$unwind\": \"$_id\"\n},\n{\n \"$group\": {\n \"_id\": null,\n \"values\": {\n \"$addToSet\": \"$_id\"\n }\n }\n},\n{\n \"$project\": {\n \"values\": 1,\n \"_id\": 0\n }\n})\n", "text": "You can do it like this:Working example", "username": "NeNaD" }, { "code": "", "text": "Thank you so much, that’s really overwhelming for a beginner.", "username": "Akram_Mnif" }, { "code": "", "text": "Kudos for the double $unwind after the $group.Instinctively, I would have $unwind first but by doing $group first I feel it is more efficient as the working set and RAM requirement is cut right at the first stage.I like it.", "username": "steevej" }, { "code": "$group", "text": "Hi @steevej,Thanks! Yup, I also this it’s more efficient to do $group first. ", "username": "NeNaD" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to get an array of unique tickers
2022-06-07T15:01:41.621Z
How to get an array of unique tickers
1,041
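For comparison, an equivalent sketch that unwinds first and de-duplicates with $addToSet, assuming the same users collection from the question; as noted in the thread, the accepted group-first version keeps the working set smaller, but this form is easier to read:

```javascript
db.users.aggregate([
  { $unwind: "$accounts" },                    // one doc per account
  { $unwind: "$accounts.snapshot.balances" },  // one doc per balance entry
  { $group: {
      _id: null,
      values: { $addToSet: "$accounts.snapshot.balances.ticker" } // distinct tickers
  } },
  { $project: { _id: 0, values: 1 } }
]);
```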
https://www.mongodb.com/…d40f6b92dd4.jpeg
[]
[ { "code": "", "text": "\nimage937×547 77 KB\n", "username": "Ashish_Wanjare" }, { "code": "", "text": "Unix Team and DB team are able to connect to the servers able to do the configuration.Current Issue: Developer is unable to connect to the Mongo DB from his laptop through app via Global Protect.", "username": "Ashish_Wanjare" }, { "code": "", "text": "Please let me know if anyone have a solution", "username": "Ashish_Wanjare" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Qual environment is running as a 3 Node cluster
2022-06-08T11:42:40.467Z
Qual environment is running as a 3 Node cluster
1,284
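The thread closed without a posted fix, but a reachability probe from the affected laptop can separate a Global Protect/firewall problem from a database problem. A sketch only; the hostname is a placeholder for one of the three cluster nodes:

```javascript
// Run in mongosh from the developer's laptop while connected to Global Protect:
//   mongosh "mongodb://<node-host>:27017/?serverSelectionTimeoutMS=5000"
db.adminCommand({ ping: 1 });  // fails fast if the VPN or firewall blocks the port
rs.status().members.map(m => ({ name: m.name, state: m.stateStr }));
```

If the ping itself times out, the problem is the network path through the VPN rather than the MongoDB configuration the Unix and DB teams verified from their side.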
null
[ "rust", "beta" ]
[ { "code": "v2.3.0-betamongodb", "text": "The MongoDB Rust driver team is pleased to announce the v2.3.0-beta release of the mongodb crate. This release contains a number of new features, bug fixes, and improvements, most notably support for MongoDB 6.0.To see the full set of changes, check out the release notes. If you run into any issues, please file an issue on JIRA or GitHub.", "username": "kmahar" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Rust driver 2.3.0-beta released
2022-06-07T15:35:06.659Z
Rust driver 2.3.0-beta released
2,438
null
[ "production", "c-driver" ]
[ { "code": "", "text": "Announcing 1.21.2 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.Announcing libbson 1.21.2.No changes since 1.21.1; release to keep pace with libmongoc’s version.Bug Fixes:Thanks to everyone who contributed to this release.", "username": "eramongodb" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB C Driver 1.21.2 Released
2022-06-07T18:36:02.184Z
MongoDB C Driver 1.21.2 Released
2,701
null
[ "php", "field-encryption", "beta" ]
[ { "code": "encryptedFieldsMapbypassQueryAnalysisMongoDB\\Driver\\Manager::__construct()queryTypeMongoDB\\Driver\\ClientEncryption::encrypt()MongoDB\\Driver\\BulkWriteMongoDB\\Driver\\Queryletcommentcommentpecl install mongodb-1.14.0beta1\npecl upgrade mongodb-1.14.0beta1\n", "text": "The PHP team is happy to announce that version 1.14.0beta1 of the mongodb PHP extension is now available on PECL. This release introduces support for MongoDB 6.0 and Queryable Encryption.Release HighlightsTo support Queryable Encryption, encryptedFieldsMap and bypassQueryAnalysis auto encryption options have been added to MongoDB\\Driver\\Manager::__construct() . Additionally, new algorithms and a queryType option have been added to MongoDB\\Driver\\ClientEncryption::encrypt() . MongoDB\\Driver\\BulkWrite and MongoDB\\Driver\\Query support a let option for defining variables that can be accessed within query filters and updates. Additionally, both classes now support a comment option of any type (previously a string comment was only supported for queries).This release upgrades our libbson and libmongoc dependencies to 1.22.0-beta0. The libmongocrypt dependency has been upgraded to 1.5.0-rc1.A complete list of resolved issues in this release may be found at: https://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12484&version=34002DocumentationDocumentation is available on PHP.net: http://php.net/set.mongodbInstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL: http://pecl.php.net/package/mongodb", "username": "jmikola" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB PHP Extension 1.14.0-beta1 Released
2022-06-08T01:47:38.359Z
MongoDB PHP Extension 1.14.0-beta1 Released
2,749
null
[ "php", "beta" ]
[ { "code": "MongoDB\\Database::createCollectionMongoDB\\Database::dropCollectionMongoDB\\Collection::dropencryptedFieldsfindfindAndModifydeleteupdateletcommentwatch()fullDocumentfullDocumentBeforeChangefullDocumentBeforeChangeMongoDB\\Database::createCollection()MongoDB\\Database::modifyCollection()changeStreamPreAndPostImagesMongoDB\\Database::createCollection()createMongoDB\\Collection::estimatedDocumentCount()countaggregate$collStatscountestimatedDocumentCount()countcountestimatedDocumentCountMongoDB\\Driver\\ServerApimongodbcomposer require mongodb/mongodb:1.13.0-beta1@dev\nmongodb", "text": "The PHP team is happy to announce that version 1.13.0-beta1 of the MongoDB PHP library is now available. This release introduces support for MongoDB 6.0 and Queryable Encryption.Release HighlightsMongoDB\\Database::createCollection, MongoDB\\Database::dropCollection, and MongoDB\\Collection::drop now support an encryptedFields option. This is used by the library to manage internal collections used for queryable encryption.Helper methods for find , findAndModify , delete , and update commands now support a let option, which can be used to define variables that can be accessed within query filters and updates. Additionally, all helpers now support a comment option of any type (previously a string comment was only supported for queries).Change Streams with Document Pre- and Post-Images are now supported. Change stream watch() helpers now accept “whenAvailable” and “required” for the fullDocument option and support a new fullDocumentBeforeChange option, which accepts “whenAvailable” and “required”. Change events may now include a fullDocumentBeforeChange response field. Additionally, MongoDB\\Database::createCollection() and MongoDB\\Database::modifyCollection() now support a changeStreamPreAndPostImages option to enable this feature on collections.MongoDB\\Database::createCollection() now supports creating clustered indexes and views. Clustered indexes were introduced in MongoDB 5.3. Views date back to MongoDB 3.4 but the corresponding options for the create command were never added to the library’s helper method.MongoDB\\Collection::estimatedDocumentCount() has been changed to always use the count command. In a previous release (1.9.0), the method was changed to use aggregate with a $collStats stage instead of the count command, which did not work on views. Reverting estimatedDocumentCount() to always use the count command addresses the incompatibility with views. Due to an oversight, the count command was omitted from the Stable API in server versions 5.0.0–5.0.8 and 5.1.0–5.3.1. Users of the Stable API with estimatedDocumentCount are advised to upgrade their MongoDB clusters to 5.0.9+ or 5.3.2+ (if on Atlas) or disable strict mode when using MongoDB\\Driver\\ServerApi.This release upgrades the mongodb extension requirement to 1.14.0-beta1.A complete list of resolved issues in this release may be found at: https://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12483&version=34003DocumentationDocumentation for this library may be found at: https://www.mongodb.com/docs/php-library/current/InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.", "username": "jmikola" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB PHP Library 1.13.0-beta1 Released
2022-06-08T01:49:49.551Z
MongoDB PHP Library 1.13.0-beta1 Released
2,386
null
[ "dot-net" ]
[ { "code": "", "text": "Hello all, i have issue with Mongoclient in C#, always error “The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception.” please help if any solution for this issue,", "username": "Gudang_Distributor" }, { "code": "DnsClientWrapperLookupClientLookupClient", "text": "Hi, @Gudang_Distributor,Welcome to the MongoDB Community Forums. In diagnosing this error, it would be helpful to know the following:The static constructor (aka type initializer) for DnsClientWrapper instantiates a LookupClient object from DnsClient.NET. The full stack trace should provide some clues as to why the LookupClient can’t be successfully instantiated.James", "username": "James_Kovacs" }, { "code": "", "text": "Hello, @James_Kovacs thanks for your attention,Here, I put the requirement that I used for mongo DB :\nMongoDB .NET/C# Driver version : 2.11.5\n.NET version : netcoreapp3.1 (Azure Function)\nOperating system and version : Windows 11 Pro, Version 21H2\nFull stack trace : The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception. ThanksBest Regards,\nAndry Sistiawan", "username": "Gudang_Distributor" } ]
Error The type initializer for 'MongoDB.Driver.Core.Misc.DnsClientWrapper' threw an exception
2022-06-01T15:29:03.462Z
Error The type initializer for &lsquo;MongoDB.Driver.Core.Misc.DnsClientWrapper&rsquo; threw an exception
3,176
null
[ "node-js" ]
[ { "code": "", "text": "“message”:“Server selection timed out after 30000 ms”,“reason”:{“type”:“ReplicaSetNoPrimary”,“servers”:{},“stale”:false,“compatible”:true,“heartbeatFrequencyMS”:10000,“localThresholdMS”:15,“setName”:“atlas-uhi03q-shard-0”}This is error I am receiving when I connect to mongo atlas instance from ECS tasks (EC2 instance running on private subnet). Runs well when I run this application from localhost. I have also enabled VPC peering and IP under network access is open to the world. Cannot figure out the issue. Appreciate some help here.", "username": "Ajith_Nair" }, { "code": "0.0.0.0Server selection timed out", "text": "Hi @Ajith_Nair welcome to the community!Runs well when I run this application from localhost. I have also enabled VPC peering and IP under network access is open to the world.Do you mean your Atlas instance is open to 0.0.0.0 in the network access tab from Atlas? If yes, then it sounds like it’s a network connectivity issues from the AWS setup instead of something that was Atlas-specific. In my experience, this is typically caused by misconfigured AWS security policies, but there could be other reasons as well.Having said that, the Server selection timed out message is consistent with the app unable to connect to the server.You might need to troubleshoot this by following the suggestions in How do I troubleshoot connectivity issues that I’m experiencing while using an Amazon VPC?, or posting in AWS re:Post forum for better AWS-related answers.Best regards\nKevin", "username": "kevinadi" }, { "code": "0.0.0.0", "text": "Thanks for that, Yes Atlas instance is open to 0.0.0.0. let me check those forum for some help.", "username": "Ajith_Nair" } ]
Mongo Atlas instance timeout from ECS tasks of AWS
2022-06-07T10:33:22.482Z
Mongo Atlas instance timeout from ECS tasks of AWS
2,647
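A minimal Node.js probe (this thread is tagged node-js) that can be run from inside the ECS task to confirm the network path; the connection string is a placeholder, and a short serverSelectionTimeoutMS makes the failure show up quickly instead of after 30 seconds:

```javascript
// Assumes the official 'mongodb' driver; run from inside the ECS task.
const { MongoClient } = require("mongodb");

async function probe() {
  const client = new MongoClient(
    "mongodb+srv://user:[email protected]/test", // placeholder URI
    { serverSelectionTimeoutMS: 5000 }                          // fail fast
  );
  try {
    await client.connect();
    console.log(await client.db("admin").command({ ping: 1 }));
  } catch (err) {
    // 'ReplicaSetNoPrimary' with an empty servers map usually means the
    // task cannot reach the Atlas hosts at all (routing/security groups).
    console.error(err);
  } finally {
    await client.close();
  }
}

probe();
```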
null
[]
[ { "code": "", "text": "Currently Using Mongo Db Community edition 4.4.14 in Development environment. Planning to move the same in Production environment .\nHave couple of Initial questions\n1.Is it advisable to move to Production with version 4.4.14.\nAlso suggest,\n2. best practices to be followed in setting Mongo Db in production environment.", "username": "Arunkumar_s" }, { "code": "", "text": "Hi Team any updates or suggestion on this?", "username": "Arunkumar_s" }, { "code": "", "text": "Hi @Arunkumar_s and welcome to the community!!.Is it advisable to move to Production with version 4.4.14.Yes, this version can be used in the production environment. For more information please visit https://www.mongodb.com/docs/manual/release-notes/4.4/best practices to be followed in setting Mongo Db in production environment.Below are a few recommended links which would be helpful to run MongoDB Community in production environment.Let us know if you have any further questions.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migrating from Dev environment to Production with Mongo version 4.4.14
2022-05-11T07:03:08.768Z
Migrating from Dev environment to Production with Mongo version 4.4.14
1,591
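As a concrete illustration of one item from the security checklist referenced above, a hedged mongosh sketch for enabling access control before going to production; the user name and roles are examples only:

```javascript
// Create an administrative user first, then turn authorization on.
use admin
db.createUser({
  user: "siteAdmin",                 // example name
  pwd: passwordPrompt(),             // prompts instead of storing the password in history
  roles: [
    { role: "userAdminAnyDatabase", db: "admin" },
    { role: "readWriteAnyDatabase", db: "admin" }
  ]
});
// Then set `security.authorization: enabled` in mongod.conf and restart mongod.
```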
null
[ "containers" ]
[ { "code": "msg\\\":\\\"Received signal\\\",\\\"attr\\\":{\\\"signal\\\":15,\\\"error\\\":\\\"\nmsg\\\":\\\"Signal was sent by kill(2)\\\",\\\"attr\\\":{\\\"pid\\\":\nmsg\\\":\\\"will terminate after current cmd ends\\\"}\\n\",\"stream\":\"\nmsg\\\":\\\"Stepping down the ReplicationCoordinator for shutdown\\\",\\\"attr\\\":{\\\"waitTimeMillis\\\":\nmsg\\\":\\\"Shutting down the MirrorMaestro\\\"}\\n\",\"stream\":\"stdout\",\"\nmsg\\\":\\\"Shutting down the WaitForMajorityService\\\"}\\n\",\"stream\":\"stdout\",\"\nmsg\\\":\\\"Shutting down the LogicalSessionCache\\\"}\\n\",\"stream\":\"stdout\",\"\nmsg\\\":\\\"Shutdown: going to close listening sockets\\\"}\\n\",\"stream\nmsg\\\":\\\"removing socket file\\\",\\\"attr\\\":{\\\"path\\\":\\\"/tmp/mongodb-27017.sock\n", "text": "Hi,\nI’m running Mongo 4.4 in docker among with a dockerize frontend and backend, and mongo suddenly shuts down, not even in the middle of an operation. These are the verbose logs:and so on til is down…\nI’ve read here that some other process send signal 15 to terminate mongo.\nWhat could possibly be going on?\nThank’s in advance for your help.", "username": "juan_martin2" }, { "code": "mongodmongodmongod", "text": "Hi @juan_martin2 welcome to the community!It appears that the mongod received a terminate signal, and duly obeyed by killing itself. In that end, the mongod is behaving as expected, and correctly.Unfortunately beyond that knowledge, we have no other hints as to what sends this signal, and why.You may be able to know more by maybe looking at the logs of other apps you have? Perhaps you can find the culprit by selectively turning on your apps, and check at what point the mongod process shuts down? This way you can narrow down the possibilities.Best regards\nKevin", "username": "kevinadi" } ]
Mongo shuts down with signal 15
2022-06-07T16:27:42.538Z
Mongo shuts down with signal 15
3,588
null
[ "sharding", "storage" ]
[ { "code": "{\"t\":{\"$date\":\"2022-06-06T06:08:04.545+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22649, \"ctx\":\"thread24\",\"msg\":\"Creating distributed lock ping thread\",\"attr\":{\"processId\":\"config\",\"pingIntervalMillis\":30000}}\n{\"t\":{\"$date\":\"2022-06-06T06:08:04.545+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigStartingUp\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2022-06-06T06:08:04.546+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280500, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to create internal replication collections\"}\n{\"t\":{\"$date\":\"2022-06-06T06:08:04.547+00:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":-31802,\"message\":\"[1654495684:547876][1:0x7f3f6a96ac80], file:collection-2-5837153500946542875.wt, WT_SESSION.open_cursor: __posix_file_read, 428: /data/db/collection-2-5837153500946542875.wt: handle-read: pread: failed to read 4096 bytes at offset 32768: WT_ERROR: non-specific WiredTiger error\"}}\n{\"t\":{\"$date\":\"2022-06-06T06:08:04.548+00:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":50882, \"ctx\":\"initandlisten\",\"msg\":\"Failed to open WiredTiger cursor. This may be due to data corruption\",\"attr\":{\"uri\":\"table:collection-2-5837153500946542875\",\"config\":\"\",\"error\":{\"code\":8,\"codeName\":\"UnknownError\",\"errmsg\":\"-31802: WT_ERROR: non-specific WiredTiger error\"},\"message\":\"Please read the documentation for starting MongoDB with --repair here: http://dochub.mongodb.org/core/repair\"}}\n{\"t\":{\"$date\":\"2022-06-06T06:08:04.548+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":50882,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_session_cache.cpp\",\"line\":109}}\n{\"t\":{\"$date\":\"2022-06-06T06:08:04.548+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.220+00:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":-31802,\"message\":\"[1654496630:220679][1:0x7f0d18c36c80], file:WiredTiger.wt, connection: __wt_block_read_off, 309: WiredTiger.wt: fatal read error: WT_ERROR: non-specific WiredTiger error\"}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.220+00:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":-31804,\"message\":\"[1654496630:220691][1:0x7f0d18c36c80], file:WiredTiger.wt, connection: __wt_block_read_off, 309: the process must exit and restart: WT_PANIC: WiredTiger library panic\"}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.220+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23089, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":50853,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp\",\"line\":538}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.220+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23090, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.220+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"initandlisten\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 
(Aborted).\\n\"}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"initandlisten\",\"msg\":\"BACKTRACE\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"563853EB7405\",\"b\":\"563850008000\",\"o\":\"3EAF405\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.357\",\"s+\":\"215\"},{\"a\":\"563853EB9E99\",\"b\":\"563850008000\",\"o\":\"3EB1E99\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"},{\"a\":\"563853EB24D6\",\"b\":\"563850008000\",\"o\":\"3EAA4D6\",\"s\":\"abruptQuit\",\"s+\":\"66\"},{\"a\":\"7F0D19AF43C0\",\"b\":\"7F0D19AE0000\",\"o\":\"143C0\",\"s\":\"funlockfile\",\"s+\":\"60\"},{\"a\":\"7F0D1993103B\",\"b\":\"7F0D198EE000\",\"o\":\"4303B\",\"s\":\"gsignal\",\"s+\":\"CB\"},{\"a\":\"7F0D19910859\",\"b\":\"7F0D198EE000\",\"o\":\"22859\",\"s\":\"abort\",\"s+\":\"12B\"},{\"a\":\"5638513CF06B\",\"b\":\"563850008000\",\"o\":\"13C706B\",\"s\":\"_ZN5mongo25fassertFailedWithLocationEiPKcj\",\"s+\":\"F6\"},{\"a\":\"563850EC1522\",\"b\":\"563850008000\",\"o\":\"EB9522\",\"s\":\"_ZN5mongo12_GLOBAL__N_141mdb_handle_error_with_startup_suppressionEP18__wt_event_handlerP12__wt_sessioniPKc.cold.1216\",\"s+\":\"16\"},{\"a\":\"5638516D1F03\",\"b\":\"563850008000\",\"o\":\"16C9F03\",\"s\":\"__eventv\",\"s+\":\"403\"},{\"a\":\"563850ED3E84\",\"b\":\"563850008000\",\"o\":\"ECBE84\",\"s\":\"__wt_panic_func\",\"s+\":\"114\"},{\"a\":\"563850EE2590\",\"b\":\"563850008000\",\"o\":\"EDA590\",\"s\":\"__wt_block_read_off.cold.5\",\"s+\":\"D3\"},{\"a\":\"5638517FC966\",\"b\":\"563850008000\",\"o\":\"17F4966\",\"s\":\"__wt_block_extlist_read\",\"s+\":\"96\"},{\"a\":\"5638517FCE8B\",\"b\":\"563850008000\",\"o\":\"17F4E8B\",\"s\":\"__wt_block_extlist_read_avail\",\"s+\":\"2B\"},{\"a\":\"563851806560\",\"b\":\"563850008000\",\"o\":\"17FE560\",\"s\":\"__wt_block_checkpoint_load\",\"s+\":\"1E0\"},{\"a\":\"5638517FE007\",\"b\":\"563850008000\",\"o\":\"17F6007\",\"s\":\"__bm_checkpoint_load\",\"s+\":\"37\"},{\"a\":\"56385172A662\",\"b\":\"563850008000\",\"o\":\"1722662\",\"s\":\"__wt_btree_open\",\"s+\":\"D62\"},{\"a\":\"563851650412\",\"b\":\"563850008000\",\"o\":\"1648412\",\"s\":\"__wt_conn_dhandle_open\",\"s+\":\"8D2\"},{\"a\":\"5638516D07D9\",\"b\":\"563850008000\",\"o\":\"16C87D9\",\"s\":\"__wt_session_get_dhandle\",\"s+\":\"E9\"},{\"a\":\"5638516D0E82\",\"b\":\"563850008000\",\"o\":\"16C8E82\",\"s\":\"__wt_session_get_\ndhandle\",\"s+\":\"792\"},{\"a\":\"5638516D1184\",\"b\":\"563850008000\",\"o\":\"16C9184\",\"s\":\"__wt_session_get_btree_ckpt\",\"s+\":\"154\"},{\"a\":\"56385166848A\",\"b\":\"563850008000\",\"o\":\"166048A\",\"s\":\"__wt_curfile_open\",\"s+\":\"5A\"},{\"a\":\"5638516CB7BB\",\"b\":\"563850008000\",\"o\":\"16C37BB\",\"s\":\"__session_open_cursor_int\",\"s+\":\"2DB\"},{\"a\":\"5638516CB478\",\"b\":\"563850008000\",\"o\":\"16C3478\",\"s\":\"__wt_open_cursor\",\"s+\":\"58\"},{\"a\":\"563851696E3E\",\"b\":\"563850008000\",\"o\":\"168EE3E\",\"s\":\"__wt_metadata_cursor_open\",\"s+\":\"6E\"},{\"a\":\"563851696F3B\",\"b\":\"563850008000\",\"o\":\"168EF3B\",\"s\":\"__wt_metadata_cursor\",\"s+\":\"4B\"},{\"a\":\"56385164CD97\",\"b\":\"563850008000\",\"o\":\"1644D97\",\"s\":\"wiredtiger_open\",\"s+\":\"28C7\"},{\"a\":\"5638515F8239\",\"b\":\"563850008000\",\"o\":\"15F0239\",\"s\":\"_ZN5mongo18WiredTigerKVEngine15_openWiredTigerERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_\",\"s+\":\"B9\"},{\"a\":\"563851603738\",\"b\":\"563850008000\",\"o\":\"15FB738\",\"s\":\"_ZN5m
ongo18WiredTigerKVEngineC2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mmbbbb\",\"s+\":\"1138\"},{\"a\":\"5638515DA121\",\"b\":\"563850008000\",\"o\":\"15D2121\",\"s\":\"_ZNK5mongo12_GLOBAL__N_117WiredTigerFactory6createEPNS_16OperationContextERKNS_19StorageGlobalParamsEPKNS_21StorageEngineLockFileE\",\"s+\":\"171\"},{\"a\":\"5638523BC289\",\"b\":\"563850008000\",\"o\":\"23B4289\",\"s\":\"_ZN5mongo23initializeStorageEngineEPNS_16OperationContextENS_22StorageEngineInitFlagsE\",\"s+\":\"419\"},{\"a\":\"56385154382F\",\"b\":\"563850008000\",\"o\":\"153B82F\",\"s\":\"_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi.isra.1896\",\"s+\":\"47F\"},{\"a\":\"5638515461AF\",\"b\":\"563850008000\",\"o\":\"153E1AF\",\"s\":\"_ZN5mongo11mongod_mainEiPPc\",\"s+\":\"CDF\"},{\"a\":\"5638513E237E\",\"b\":\"563850008000\",\"o\":\"13DA37E\",\"s\":\"main\",\"s+\":\"E\"},{\"a\":\"7F0D199120B3\",\"b\":\"7F0D198EE000\",\"o\":\"240B3\",\"s\":\"__libc_start_main\",\"s+\":\"F3\"},{\"a\":\"5638515406AE\",\"b\":\"563850008000\",\"o\":\"15386AE\",\"s\":\"_start\",\"s+\":\"2E\"}],\"processInfo\":{\"mongodbVersion\":\"5.0.7\",\"gitVersion\":\"b977129dc70eed766cbee7e412d901ee213acbda\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Linux\",\"release\":\"5.4.0-113-generic\",\"v\nersion\":\"#127-Ubuntu SMP Wed May 18 14:30:56 UTC 2022\",\"machine\":\"x86_64\"},\"somap\":[{\"b\":\"563850008000\",\"elfType\":3,\"buildId\":\"FD1987341101CD5988745C190BB51D0DA505C680\"},{\"b\":\"7F0D19AE0000\",\"path\":\"/lib/x86_64-linux-gnu/libpthread.so.0\",\"elfType\":3,\"buildId\":\"F0983025F0E0F327A6DA752FF4FFA675E0BE393F\"},{\"b\":\"7F0D198EE000\",\"path\":\"/lib/x86_64-linux-gnu/libc.so.6\",\"elfType\":3,\"buildId\":\"9FDB74E7B217D06C93172A8243F8547F947EE6D1\"}]}}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"563853EB7405\",\"b\":\"563850008000\",\"o\":\"3EAF405\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.357\",\"s+\":\"215\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"563853EB9E99\",\"b\":\"563850008000\",\"o\":\"3EB1E99\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"563853EB24D6\",\"b\":\"563850008000\",\"o\":\"3EAA4D6\",\"s\":\"abruptQuit\",\"s+\":\"66\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F0D19AF43C0\",\"b\":\"7F0D19AE0000\",\"o\":\"143C0\",\"s\":\"funlockfile\",\"s+\":\"60\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F0D1993103B\",\"b\":\"7F0D198EE000\",\"o\":\"4303B\",\"s\":\"gsignal\",\"s+\":\"CB\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, 
\"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F0D19910859\",\"b\":\"7F0D198EE000\",\"o\":\"22859\",\"s\":\"abort\",\"s+\":\"12B\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638513CF06B\",\"b\":\"563850008000\",\"o\":\"13C706B\",\"s\":\"_ZN5mongo25fassertFailedWithLocationEiPKcj\",\"s+\":\"F6\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"563850EC1522\",\"b\":\"563850008000\",\"o\":\"EB9522\",\"s\":\"_ZN5mongo12_GLOBAL__N_141mdb_handle_error_with_startup_suppressionEP18__wt_event_handlerP12__wt_sessioniPKc.cold.1216\",\"s+\":\"16\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638516D1F03\",\"b\":\"563850008000\",\"o\":\"16C9F03\",\"s\":\"__eventv\",\"s+\":\"403\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"563850ED3E84\",\"b\":\"563850008000\",\"o\":\"ECBE84\",\"s\":\"__wt_panic_func\",\"s+\":\"114\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"563850EE2590\",\"b\":\"563850008000\",\"o\":\"EDA590\",\"s\":\"__wt_block_read_off.cold.5\",\"s+\":\"D3\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638517FC966\",\"b\":\"563850008000\",\"o\":\"17F4966\",\"s\":\"__wt_block_extlist_read\",\"s+\":\"96\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638517FCE8B\",\"b\":\"563850008000\",\"o\":\"17F4E8B\",\"s\":\"__wt_block_extlist_read_avail\",\"s+\":\"2B\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"563851806560\",\"b\":\"563850008000\",\"o\":\"17FE560\",\"s\":\"__wt_block_checkpoint_load\",\"s+\":\"1E0\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638517FE007\",\"b\":\"563850008000\",\"o\":\"17F6007\",\"s\":\"__bm_checkpoint_load\",\"s+\":\"37\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56385172A662\",\"b\":\"563850008000\",\"o\":\"1722662\",\"s\":\"__wt_btree_open\",\"s+\":\"D62\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"563851650412\",\"b\":\"563850008000\",\"o\":\"1648412\",\"s\":\"__wt_conn_dhandle_open\",\"s+\":\"8D2\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, 
\"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638516D07D9\",\"b\":\"563850008000\",\"o\":\"16C87D9\",\"s\":\"__wt_session_get_dhandle\",\"s+\":\"E9\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638516D0E82\",\"b\":\"563850008000\",\"o\":\"16C8E82\",\"s\":\"__wt_session_get_dhandle\",\"s+\":\"792\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638516D1184\",\"b\":\"563850008000\",\"o\":\"16C9184\",\"s\":\"__wt_session_get_btree_ckpt\",\"s+\":\"154\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56385166848A\",\"b\":\"563850008000\",\"o\":\"166048A\",\"s\":\"__wt_curfile_open\",\"s+\":\"5A\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638516CB7BB\",\"b\":\"563850008000\",\"o\":\"16C37BB\",\"s\":\"__session_open_cursor_int\",\"s+\":\"2DB\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638516CB478\",\"b\":\"563850008000\",\"o\":\"16C3478\",\"s\":\"__wt_open_cursor\",\"s+\":\"58\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"563851696E3E\",\"b\":\"563850008000\",\"o\":\"168EE3E\",\"s\":\"__wt_metadata_cursor_open\",\"s+\":\"6E\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"563851696F3B\",\"b\":\"563850008000\",\"o\":\"168EF3B\",\"s\":\"__wt_metadata_cursor\",\"s+\":\"4B\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56385164CD97\",\"b\":\"563850008000\",\"o\":\"1644D97\",\"s\":\"wiredtiger_open\",\"s+\":\"28C7\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638515F8239\",\"b\":\"563850008000\",\"o\":\"15F0239\",\"s\":\"_ZN5mongo18WiredTigerKVEngine15_openWiredTigerERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_\",\"s+\":\"B9\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"563851603738\",\"b\":\"563850008000\",\"o\":\"15FB738\",\"s\":\"_ZN5mongo18WiredTigerKVEngineC2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES8_PNS_11ClockSourceES8_mmbbbb\",\"s+\":\"1138\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, 
\"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638515DA121\",\"b\":\"563850008000\",\"o\":\"15D2121\",\"s\":\"_ZNK5mongo12_GLOBAL__N_117WiredTigerFactory6createEPNS_16OperationContextERKNS_19StorageGlobalParamsEPKNS_21StorageEngineLockFileE\",\"s+\":\"171\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638523BC289\",\"b\":\"563850008000\",\"o\":\"23B4289\",\"s\":\"_ZN5mongo23initializeStorageEngineEPNS_16OperationContextENS_22StorageEngineInitFlagsE\",\"s+\":\"419\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56385154382F\",\"b\":\"563850008000\",\"o\":\"153B82F\",\"s\":\"_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi.isra.1896\",\"s+\":\"47F\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638515461AF\",\"b\":\"563850008000\",\"o\":\"153E1AF\",\"s\":\"_ZN5mongo11mongod_mainEiPPc\",\"s+\":\"CDF\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638513E237E\",\"b\":\"563850008000\",\"o\":\"13DA37E\",\"s\":\"main\",\"s+\":\"E\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F0D199120B3\",\"b\":\"7F0D198EE000\",\"o\":\"240B3\",\"s\":\"__libc_start_main\",\"s+\":\"F3\"}}}\n{\"t\":{\"$date\":\"2022-06-06T06:23:50.369+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5638515406AE\",\"b\":\"563850008000\",\"o\":\"15386AE\",\"s\":\"_start\",\"s+\":\"2E\"}}}\n", "text": "My mongodb sharding config server and share-0 could not start, maybe cause a unscheduled power off, and I try to repair it, but it still not work. Could same one please help me to resolve this.Here is config server start log.And here is share-0 start log", "username": "Alex_Zhou" }, { "code": "failed to read 4096 bytes at offset 32768", "text": "Hi @Alex_Zhou welcome to the community!From the logs you posted, I believe the main issue is hardware. This is due to this line: failed to read 4096 bytes at offset 32768, meaning that WT tried to read a file, but failed. Power issues are definitely a possible source of corruption, but this could also be caused by bad hardware. Unfortunately, there’s not much anyone can do if the actual hardware is having issues.The ideal resolution to this is to perform a clean resync from an unaffected node in the replica set. Alternatively, using mongod --repair might also work, but a clean resync is much preferable & safer.In light of this event, I would recommend you to audit your remaining hardware and ensure that none of them are corrupt in this manner.Best regards\nKevin", "username": "kevinadi" } ]
MongoDB config server and shard-0 could not start
2022-06-06T06:29:02.590Z
MongoDB config server and shard-0 could not start
2,630
null
[ "node-js", "atlas-device-sync", "app-services-user-auth" ]
[ { "code": "Expected signal to be an instanceof AbortSignal\nimport Realm from 'realm';\n\nimport {ProfileSchema} from './schemas';\n\nconst OpenRealmBehaviorConfiguration = {\n\ttype: 'openImmediately' as any,\n};\n\nexport const realmApp: Realm.App = new Realm.App({id: 'app id'});\n\nexport const getRealm = async (): Promise<Realm> => {\n\tconst conn = new Realm({\n\t\tschema: [ProfileSchema],\n\t\tsync: {\n\t\t\tuser: realmApp.currentUser,\n\t\t\tpartitionValue: 'moises',\n\t\t\tnewRealmFileBehavior: OpenRealmBehaviorConfiguration,\n\t\t\texistingRealmFileBehavior: OpenRealmBehaviorConfiguration,\n\t\t},\n\t});\n\n\treturn Promise.resolve(conn);\n};\nimport Realm from 'realm';\n\nimport {handleIpcInvokeResponse, ISignInInfo} from '@types';\nimport {realmApp} from '../Database';\nimport {logEverywhere} from '../main';\n\nexport const signIn = async (user: ISignInInfo): Promise<handleIpcInvokeResponse> => {\n\tconsole.log(user);\n\tlogEverywhere(JSON.stringify(realmApp));\n\n\tconst credentials = Realm.Credentials.emailPassword('[email protected]', '1234567');\n\n\tawait realmApp.logIn(credentials);\n\n\treturn Promise.resolve({\n\t\tsuccess: true,\n\t\tmessage: 'Login correcto!',\n\t\tresult: realmApp,\n\t});\n};\n", "text": "Hi! i am using realm node sdk with electron-forge configuration and also creating an api with contextBridge.when i load the application in development mode it runs normally but when i copy the application to an electron package i get the following error in the try catchthis is my configuration where the error occursrealm js instanceuser authentication function that is called when using the api I exposed with electron preloadI would appreciate a reply and thank you in advance.", "username": "Moises_Barillas" }, { "code": "error.stack", "text": "Can you share the error.stack as well? I suspect the error is thrown from this line, but I’d like to be certain.", "username": "kraenhansen" }, { "code": "externals: {\n\t\t'react-native': 'react-native',\n\t},\n", "text": "Hello!Thanks for replying, I had problems using electron-forge, I switched to vite js with a configuration for electron and it works without problems when I package the application.I don’t know if it’s a problem with electron-forge or some webpack configuration or when externalizing react-native debido a que me dio problemas al empquetar tambien.I don’t know if it’s because of those lines I put in the webpack configuration or if it’s a problem with realm js itself.I am new to using these technologies, what do you mean by error.stack ?", "username": "Moises_Barillas" } ]
Realm js: Expected signal to be an instanceof AbortSignal
2022-05-30T01:01:55.976Z
Realm js: Expected signal to be an instanceof AbortSignal
2,600
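One detail worth flagging in the configuration posted above: in realm-js, the newRealmFileBehavior/existingRealmFileBehavior settings are honoured by the asynchronous Realm.open(), not by the synchronous new Realm() constructor. A hedged sketch of the same setup (the app id is a placeholder and ProfileSchema is the poster's own module):

```javascript
const Realm = require("realm");
const { ProfileSchema } = require("./schemas"); // the poster's schema module

const realmApp = new Realm.App({ id: "<your-app-id>" }); // placeholder id

const behavior = { type: "openImmediately" };

async function getRealm() {
  // Realm.open() (rather than `new Realm()`) is what applies the
  // new/existingRealmFileBehavior options.
  return Realm.open({
    schema: [ProfileSchema],
    sync: {
      user: realmApp.currentUser,
      partitionValue: "moises",
      newRealmFileBehavior: behavior,
      existingRealmFileBehavior: behavior,
    },
  });
}
```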
null
[ "queries", "replication", "sharding", "transactions" ]
[ { "code": "errmsg\" : \"expected entry { _id: \\\"foo.bar\\\", lastmodEpoch: ObjectId('5af632cfbf0364e88776eccb'), lastmod: new Date(4294967304), dropped: false, key: { app_key: 1.0, created_at: 1.0 }, unique: false } in config.collections for foo.bar to have a UUID, but it did not\",\ncache.chunks.foo.bar\ncache.collections\ntransactions\n\nmongodb_shard_rs01:PRIMARY> db.cache.collections.find();\n{ \"_id\" : \"foo.bar\", \"epoch\" : ObjectId(\"5af632cfbf0364e88776eccb\"), \"key\" : { \"app_key\" : 1, \"created_at\" : 1 }, \"unique\" : false, \"refreshing\" : false, \"lastRefreshedCollectionVersion\" : Timestamp(171, 1) }\n\n", "text": "I upgraded our production cluster from 3.4 to 3.6 by swapping binaries all running on debian 9.13 EC2 instances. I followed this upgrade 3.6 upgrade replica set document.\nOnce all the mongos, mongod, and mongo config services were running as 3.6, I tried to set featureCompatibilityVersion to 3.6 and ran into the following errorI looked into the config database and found the following collectionsIn the cache.chunks.foo.bar there are some old records from 2016.in cache.collections I see the followingMy question is it safe to clear out this document or should I modify and add a UUID to the doc?", "username": "Drew_Pierce" }, { "code": "", "text": "So it looks like the foo.bar collection was sharded and I needed to follow 3.6-upgrade-sharded-cluster.\nI was able to shutdown the balancer and start the balancer and I needed to run the FCV update command from mongos vs running this on each primary in the replication sets.", "username": "Drew_Pierce" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upgrade 3.4.15 to 3.6.23 featureCompatibilityVersion error config.collection UUID missing
2022-06-06T19:27:09.705Z
Upgrade 3.4.15 to 3.6.23 featureCompatibilityVersion error config.collection UUID missing
1,639
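Spelled out as shell commands, the sequence the poster describes for a sharded 3.4 → 3.6 upgrade looks roughly like this (run against a mongos; the comment marks where the binary swap happens):

```javascript
sh.stopBalancer();          // disable the balancer before swapping any binaries
sh.getBalancerState();      // confirm it reports false
// ...upgrade config servers, then shards, then mongos binaries to 3.6...
sh.startBalancer();         // re-enable chunk migrations
// Run from a mongos, not from each replica set primary:
db.adminCommand({ setFeatureCompatibilityVersion: "3.6" });
```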