image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---|
null |
[] |
[
{
"code": "{\n _id: 3,\n name: 'feta',\n description: \"Greek brined white cheese made from sheep's milk or from a mixture of sheep and goat's milk.\",\n countryOfOrigin: 'Greece',\n egVector: [\n -0.015739429742097855,\n 0.04937680810689926,\n -0.1067470908164978,\n 0.1293928325176239,\n -0.03162907809019089\n ],\n aging: 'about 3 months',\n yearProduced: 2021,\n brined: true\n },\n",
"text": "Hi All,MongoDB atlas vector search seems to be a game changer and would greatly improve the ease of working with data on my LLM application.However I have noticed in the documentation, it seems that only one vector embeddings can be stored in one document?e.g. the vector embeddings for the “description” of the cheese document, what if i also have “reviews”, “recipes” fields for the cheese document, can I also embed these in the same document?https://www.mongodb.com/docs/atlas/atlas-search/knn-beta/#std-label-knn-beta-egsThx in advance!",
"username": "Keith_Kwan"
},
{
"code": "Atlas atlas-b8d6l3-shard-0 [primary] vertorSearch> db.textSearch.find()\n[\n {\n _id: 1,\n name: 'mozzarella',\n description: \"Italian cheese typically made from buffalo's milk.\",\n countryOfOrigin: 'Italy',\n egVector: [\n -0.09950768947601318,\n -0.02402166835963726,\n -0.046839360147714615,\n 0.06274440884590149,\n -0.0920015424489975\n ],\n aging: 'none',\n yearProduced: 2022,\n brined: false,\n reviews: 'The recipie turned out to be really nice.',\n egVector2: [\n -0.11875683814287186,\n 0.027652710676193237,\n -0.0073554981499910355,\n 0.030328862369060516,\n -0.04793226718902588\n ]\n },\n {\n _id: 2,\n name: 'parmesan',\n description: \"Italian hard, granular cheese produced from cow's milk.\",\n countryOfOrigin: 'Italy',\n egVector: [\n -0.04228218272328377,\n -0.024080513045191765,\n -0.029374264180660248,\n -0.04369240626692772,\n -0.01295427419245243\n ],\n aging: 'at least 1 year',\n yearProduced: 2021,\n brined: false,\n reviews: 'This cheeze is good for making pizza',\n egVactor2: [\n -0.029639432206749916,\n 0.0437360517680645,\n 0.0022121944930404425,\n 0.018038751557469368,\n -0.16932083666324615\n ]\n },\n {\n _id: 3,\n name: 'feta',\n description: \"Greek brined white cheese made from sheep's milk or from a mixture of sheep and goat's milk.\",\n countryOfOrigin: 'Greece',\n egVector: [\n -0.015739429742097855,\n 0.04937680810689926,\n -0.1067470908164978,\n 0.1293928325176239,\n -0.03162907809019089\n ],\n aging: 'about 3 months',\n yearProduced: 2021,\n brined: true,\n egVector2: [\n 0.04778356850147247,\n -0.027836525812745094,\n 0.013962717726826668,\n 0.0071579585783183575,\n -0.05239229276776314\n ],\n reviews: 'This is good for salads and authentic Italian pizza'\n }\n]\n\n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"egVector\": {\n \"dimensions\": 5,\n \"similarity\": \"euclidean\",\n \"type\": \"knnVector\"\n },\n \"egVector2\": {\n \"dimensions\": 5,\n \"similarity\": \"euclidean\",\n \"type\": \"knnVector\"\n }\n }\n }\n}\n[\n {\n '$search': {\n 'index': 'default', \n 'knnBeta': {\n 'vector': [\n -0.015739429742097855, 0.04937680810689926, -0.1067470908164978, 0.1293928325176239, -0.03162907809019089\n ], \n 'path': 'egVector2', \n 'k': 5\n }\n }\n }\n]\n",
"text": "Hi @Keith_Kwan and welcome to MongoDB community forums!!However I have noticed in the documentation, it seems that only one vector embeddings can be stored in one document?The documentation for the vector search have been crafted to explain the concepts in more simpler way. The documentation explain the vector embedding for the single field but the vector embedding can be added for multiple fields as well.I tried to update the sample data in the Atlas using some additional fields like:and added the following index search for the same.finally I tried the query for the egVector2 field values like:and I was able to get two documents that matched the vector values.Please note that the vector in the search query has been randomly generated to retrieve the output from the $search operation.Vector embeddings are added to the text fields on which the $search operation is applied. It’s important to note that having multiple vector embeddings wouldn’t be meaningful; rather, it would increase the document size.Since the $search operation can only be utilised in the first stage of the aggregation pipeline, it is recommended to use vector embeddings for a single field. However, to better understand your specific requirements and use case, could you help me understand the requirement to use multiple vector embedding?Please reach out if you have any further questions.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi,\nI have my schema like{\ngroupName: “name1”,\nmessages:[\n{“messageId”:101,\n“messageText”:“hi there”\n“messageEmbedded”:[embedded message data]\n},\n{“messageId”:102,\n“messageText”:“hello”\n“messageEmbedded”:[embedded message data]\n}\n],\n}How can I create search index for my vector embedded message data?correct this search Index{\n“mappings”: {\n“dynamic”: true,\n“fields”: {\n“messages.messageEmbedded”: {\n“dimensions”: 1536,\n“similarity”: “cosine”,\n“type”: “knnVector”\n}\n}\n}\n}",
"username": "Koushik_Sherugar"
},
{
"code": "",
"text": "Hi @Koushik_Sherugar and welcome to MongoDB community forums!!For better visibility within the community, we encourage creating a fresh topic with all the details and appropriate tags and then posting on the community forum.In general it is preferable to start a new discussion to keep the details of different environments/questions separate and improve visibility of new discussions. That will also allow you to mark your topic as “Solved” when you resolve any outstanding questions.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "hi Koushik, did you find a solution for your problem? I have the same problem currently",
"username": "Patrick_Treppmann"
},
{
"code": "",
"text": "Hi Aasawari. It would be awesome to get a response to the second question provided here. Im searching for a solution for days!",
"username": "Patrick_Treppmann"
},
{
"code": "",
"text": "Hi @Patrick_Treppmann and welcome to MongoDB community forums!!For better visibility within the forum, we encourage creating a fresh topic with all the details and appropriate tags and then posting on the community forum.\nIn the new topic, could you share your complete requirement with supported sample document and the index definition?Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "",
"username": "Aasawari"
}
] |
Multiple Vector Embeddings in one document?
|
2023-07-02T05:21:25.503Z
|
Multiple Vector Embeddings in one document?
| 1,190 |
null |
[
"crud",
"indexes",
"atlas-search"
] |
[
{
"code": "",
"text": "Hi everyone, I am facing this weird issue and I was wondering if this was intended.BasePost which includes most common attributes for this type of document.(e.g. title, content…)\nMarketPost which is a discriminator of “BasePost” which includes extra attributes.I have search index setup with title and content.I have post creation code with “UpdateOne” using MarketPost schema with “upsert” option.Once I create “MarketPost” this post is not searchable until I query this post at least once. Once I query this “MarketPost”, then “MarketPost” is searchable. Let me give more verbose scenario.My theory isUpon creating a document with “update and upsert”, document is not properly indexed.I wonder if my theory is correct, hopefully someone here is able to answer my question.Thank you.",
"username": "D_Kim"
},
{
"code": "$search",
"text": "Hi @D_Kim,Once I create “MarketPost” this post is not searchable until I query this post at least once. Once I query this “MarketPost”, then “MarketPost” is searchable. Let me give more verbose scenario.My interpretation may be incorrect here but this might be due to Eventual Consistency and Indexing Latency. More specifically:Atlas Search supports eventual consistency and does not provide any stronger consistency guarantees. This means that data inserted into a MongoDB collection and indexed by Atlas Search will not be available immediately for $search queries.How long in seconds approximately between step 1 and 2?Regards,\nJason",
"username": "Jason_Tran"
}
] |
Document under discriminator key won't be searched
|
2023-08-10T22:31:03.846Z
|
Document under discriminator key won’t be searched
| 454 |
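A minimal sketch (Node.js, with a hypothetical `posts` collection and an index named `default`) of the polling approach implied by the eventual-consistency answer above: instead of expecting a freshly upserted document to be searchable immediately, retry the `$search` query with a short delay until it appears.

```javascript
// Sketch only: poll an Atlas Search query until a freshly written document
// shows up, since Atlas Search indexing is eventually consistent.
const { MongoClient } = require("mongodb");

async function waitUntilSearchable(uri, title, { attempts = 10, delayMs = 1000 } = {}) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const posts = client.db("blog").collection("posts"); // hypothetical names
    for (let i = 0; i < attempts; i++) {
      const hits = await posts
        .aggregate([
          { $search: { index: "default", text: { query: title, path: "title" } } },
          { $limit: 1 },
        ])
        .toArray();
      if (hits.length > 0) return hits[0];
      await new Promise((resolve) => setTimeout(resolve, delayMs)); // wait and retry
    }
    return null; // still not indexed after all attempts
  } finally {
    await client.close();
  }
}
```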
null |
[
"atlas-search"
] |
[
{
"code": "{\n mappings: {\n dynamic: false,\n fields: {\n title: {\n type: 'string',\n norms: 'omit',\n },\n content: {\n type: 'string',\n norms: 'omit',\n },\n },\n },\n name: 'DocumentSearch',\n}\n[\n { \n _id: 1, \n title: 'My first document', \n content: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit'\n },\n { \n _id: 2, \n title: 'My first_document', \n content: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit'\n },\n]\n{\n $search: {\n index: 'DocumentSearch',\n compound: {\n minimumShouldMatch: 1,\n should: [\n {\n phrase: {\n path: 'title',\n query,\n slop: 4,\n score: { boost: { value: 3 } },\n },\n },\n {\n phrase: {\n path: 'content',\n query,\n slop: 4,\n },\n },\n ],\n },\n highlight: {\n path: 'content',\n },\n },\n}\n",
"text": "Given the following Atlas Search Index and example data…Atlas Search IndexExample DataI’m trying to searching for “my document” and I’m expecting to get back both documents but getting back none. Additionally, searching for “my fir” returns no documents and I’d expect it to return both.Search QueryWhat changes do I need to make to my query or index to support my desired outcome?",
"username": "Brandon"
},
{
"code": "\"norms\"\"omit\"autocomplete\"my fir\"documets> a\n{\n '$search': {\n compound: {\n minimumShouldMatch: 1,\n should: [\n { autocomplete: { query: 'my fir', path: 'content' } },\n { autocomplete: { query: 'my fir', path: 'title' } }\n ]\n }\n }\n}\ndocumets> db.collection.aggregate(a)\n[\n {\n _id: 1,\n title: 'My first document',\n content: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit'\n },\n {\n _id: 2,\n title: 'My first_document',\n content: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit'\n }\n]\n\"my document\"documets> b\n{\n '$search': {\n compound: {\n minimumShouldMatch: 1,\n should: [\n { autocomplete: { query: 'my document', path: 'content' } },\n { autocomplete: { query: 'my document', path: 'title' } }\n ]\n }\n }\n}\n\n\ndocumets> db.collection.aggregate(b)\n[\n {\n _id: 1,\n title: 'My first document',\n content: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit'\n },\n {\n _id: 2,\n title: 'My first_document',\n content: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit'\n }\n]\nautocompleteautocomplete{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"content\": {\n \"type\": \"autocomplete\"\n },\n \"title\": {\n \"type\": \"autocomplete\"\n }\n }\n }\n}\nphrase",
"text": "Hi @Brandon,I’m trying to searching for “my document” and I’m expecting to get back both documents but getting back none. Additionally, searching for “my fir” returns no documents and I’d expect it to return both.I understand you have \"norms\" set to \"omit\" for your current index but i’m wondering if using autocomplete suits your use case here? Some examples below for the search terms you mentioned above:Using \"my fir\" as the search term:Using \"my document\" as the search term:This is just a basic example of autocomplete based off the search terms and sample documents provided but you could probably alter it accordingly to fit your use case. More information regarding the autocomplete type and operator linked For reference, the index used for the above examples in my test environment:Additionally, the reason why you may not be getting any results returned is due to the phrase opeator. You can see an example of this here as well which describes the behaviour for a single phrase.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
}
] |
Atlas search not matching results that contain _
|
2023-08-02T15:18:57.784Z
|
Atlas search not matching results that contain _
| 560 |
null |
[
"compass"
] |
[
{
"code": "",
"text": "Hello everyone,Actually I’m new to mongo and facing the following issue: Trying to update existing mongo collections, upon messages published from a message broker. Update returns success, reading data directly from database after updating shows the new values applied, but actually in the database the value isn’t updated as shown in Compass.Any help regarding this issue?Regards,",
"username": "Haitham_Timani"
},
{
"code": "",
"text": "Many possible reasons. Maybe replication takes too long, maybe secondaries fail to write, maybe there is a roll back happened, maybe…You need to think about what can go wrong end to end.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks dear for your reply!\nIs there any tool I may use to figure out what’s going on internally?",
"username": "Haitham_Timani"
}
] |
Update List of collections in Mongo leading to update failure
|
2023-08-14T11:27:50.965Z
|
Update List of collections in Mongo leading to update failure
| 340 |
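One way to narrow down an "update reports success but Compass shows the old value" situation is to confirm the write was acknowledged with a majority write concern and then read the document back from the primary, which rules out unacknowledged writes, rollbacks, and stale secondary reads. This is only a sketch; the database, collection, and field names are made up.

```javascript
const { MongoClient, ReadPreference } = require("mongodb");

async function updateAndVerify(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const coll = client.db("app").collection("orders"); // hypothetical names

    // Require the write to be replicated to a majority before it is acknowledged.
    const res = await coll.updateOne(
      { _id: 42 },
      { $set: { status: "processed" } },
      { writeConcern: { w: "majority" } }
    );
    console.log("matched:", res.matchedCount, "modified:", res.modifiedCount);

    // Read back from the primary so a lagging secondary cannot show stale data.
    const doc = await coll.findOne(
      { _id: 42 },
      { readPreference: ReadPreference.PRIMARY, readConcern: { level: "majority" } }
    );
    console.log("value after update:", doc && doc.status);
  } finally {
    await client.close();
  }
}
```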
null |
[
"node-js",
"connecting"
] |
[
{
"code": "",
"text": "Hey guys,I am ripping out my hair and I hope somebody can help me.At current i am using nodeJS to connect to Mongodb via the Mongoclient and it works perfectly…However…I need to run it at work which is behind a corporate proxy, and mongoclient doesn’t seem to have the ability to authenticate behind a proxy…please please help",
"username": "Nathan_Azzi"
},
{
"code": "",
"text": "Hi,\nI think I have exactly the same problem and I can’t get it to work.Did you find a solution ?Thanks,\nCamilo",
"username": "Camilo_Torres"
}
] |
MongoClient + Corporate proxy + nodeJs
|
2020-10-26T05:01:56.086Z
|
MongoClient + Corporate proxy + nodeJs
| 3,009 |
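For anyone who lands on this thread today: recent versions of the Node.js driver accept SOCKS5 proxy options on the MongoClient, which can help when the corporate proxy speaks SOCKS5 (an HTTP-only proxy still needs a different approach). A minimal sketch, with a placeholder proxy host and credentials:

```javascript
const { MongoClient } = require("mongodb");

// Sketch: route the connection through a SOCKS5 proxy.
// The proxy host, port, and credentials below are placeholders.
const client = new MongoClient("mongodb+srv://cluster0.example.mongodb.net/test", {
  proxyHost: "proxy.corp.example.com",
  proxyPort: 1080,
  proxyUsername: "proxyUser",
  proxyPassword: "proxyPass",
});

async function run() {
  await client.connect();
  const ping = await client.db("admin").command({ ping: 1 });
  console.log(ping); // { ok: 1 } if the tunnel works
  await client.close();
}

run().catch(console.error);
```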
null |
[] |
[
{
"code": "inserts*queryupdatedeletegetmore",
"text": "https://www.mongodb.com/docs/database-tools/mongostat/#fieldsBelow is an excerpt from the mongostat document.inserts\nThe number of objects inserted into the database per second. If followed by an asterisk (e.g. *), the datum refers to a replicated operation.query\nThe number of query operations per second.update\nThe number of update operations per second.delete\nThe number of delete operations per second.getmore\nThe number of get more (i.e. cursor batch) operations per second.If I have passed 2 as polling interval, does these values still mean “… per second”? Or “… per interval”? Thanks.",
"username": "Yang_Shuai1"
},
{
"code": "",
"text": "per second since your last polling time.",
"username": "Kobe_W"
}
] |
Understanding of mongostat results
|
2023-08-15T20:51:37.208Z
|
Understanding of mongostat results
| 421 |
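To see the same "per second since the last poll" arithmetic without mongostat, you can diff serverStatus opcounters yourself. A rough sketch (Node.js; the interval and connection string are placeholders), matching the reading given in the reply above:

```javascript
const { MongoClient } = require("mongodb");

// Sketch: sample opcounters twice and report per-second rates over the
// polling interval, i.e. the delta divided by the elapsed seconds.
async function opRates(uri, intervalMs = 2000) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const admin = client.db("admin");
    const first = await admin.command({ serverStatus: 1 });
    await new Promise((r) => setTimeout(r, intervalMs));
    const second = await admin.command({ serverStatus: 1 });

    const seconds = intervalMs / 1000;
    for (const op of ["insert", "query", "update", "delete", "getmore"]) {
      const delta = second.opcounters[op] - first.opcounters[op];
      console.log(`${op}: ${(delta / seconds).toFixed(1)} per second`);
    }
  } finally {
    await client.close();
  }
}
```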
[
"queries",
"atlas-search",
"text-search"
] |
[
{
"code": "namedescription$commandcollection->createindex(\n ['$**' => 'text'],\n [\n \"weights\" => [\"name\" => 3, \"description\" => 2],\n \"name\" => \"textIndex\",\n \"default_language\" => \"english\"\n ]\n);\n$results = $collection->find(\n [\n '$text' => [\n '$search' => $searchPattern,\n '$language' => 'english',\n '$caseSensitive' => false,\n ]\n ],\n [\n 'projection' => ['score' => ['$meta' => 'textScore']],\n 'limit' => 10, 'sort' => ['score' => ['$meta' => 'textScore']]\n ]\n);\nphar file decompressandor",
"text": "I am using MongoDB to store all kinds of Linux CLI commands (~ 15,000). Now I want to use the full text search to find the right command. Every command has a name, a short description and an explanation. I use weights in the index:When I now run the search, the score of the results is irritating:In this example, the search query phar file decompress is literally the description of the second document, but the score is lower than the one of the first document.\nScreenshot of MongoDB results1412×1470 157 KB\nWhat is wrong here? Is it the search string that is not searching for and but or?",
"username": "Nils_Langner"
},
{
"code": "andor$textOR",
"text": "Hi @Nils_Langner,Are you able to provide the MongoDB version in use and some sample documents which I can use in my test environment? It’s possible that the terms are maybe in other fields but from a quick glimpse it does appear there are more occurrences in the second document although I cannot see the full contents (which is why I am requesting for the documents). Difficult to say at the moment but hopefully we’ll get more insight into why the second document is generating a lower score.What is wrong here? Is it the search string that is not searching for and but or?If the search string is a space-delimited string, $text operator performs a logical OR search on each term and returns documents that contains any of the terms.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "Cluster: Standalone\nEdition: MongoDB 6.0.6 Community\n",
"text": "@Jason_Tran thanks for your feedback. You can see the search result here: https://forrestcli.com/forrest.commands_search.json",
"username": "Nils_Langner"
},
{
"code": "",
"text": "Thanks Nils, I’ll run some tests soon and get back to you once I have further information.",
"username": "Jason_Tran"
},
{
"code": "createindex(\n ['$**' => 'text']\n\"file\"\"file\"phar file decompress\"phar file decompress\"db.text.find({$text:{$search:'phar file decompress'}},{score:{'$meta':'textScore'}})\n[\n {\n _id: ObjectId(\"64888e3944169f5b3203fcd8\"),\n name: 'files:file:delete',\n description: 'Delete a given file.',\n prompt: 'rm ${filename}',\n parameters: { filename: { type: 'forrest_filename' } },\n tool: 'rm',\n created: '2023-06-13 15:41:34',\n explanation: 'This is a command for Linux or Unix-based systems using the shell command line interface. \"rm\" stands for \"remove\" and this command is used to delete a file or multiple files. \\n' +\n '\\n' +\n '\"${filename}\" is a variable that contains the name of the file you want to remove. The syntax of ${variable_name} is used to refer to the value stored in a variable. So in this case, the command is telling the system to remove the file whose name is stored in the variable filename. \\n' +\n '\\n' +\n 'For example, if the variable filename is set to \"myFile.txt\", the command would remove that file from the system. \\n' +\n '\\n' +\n \"It's important to note that this command is permanent, meaning that the file will be permanently deleted from the system and cannot be recovered. So, make sure you are absolutely sure that you want to delete the file before using this command.\",\n normalizedPrompt: 'rm ${p}',\n questions: [\n 'remov file?',\n 'delet log file',\n 'delet txt file',\n 'delet file',\n 'delet excel file',\n 'remov excel file',\n 'delet empti file',\n 'wie kann ich ein datei löschen',\n 'remov file'\n ],\n context: 'files',\n plainQuestions: [\n 'How to delete an excel file?',\n 'How to remove an excel file?',\n 'How to delete a file?',\n 'How to delete an empty file?',\n 'wie kann ich eine Datei löschen',\n 'How to remove a file?'\n ],\n score: 16.252709740990987\n },\n {\n _id: ObjectId(\"6476ebd479b96036dc032fa9\"),\n name: 'php:phar:decompress',\n description: 'Decompress a phar file to a given directory',\n prompt: 'php -r \\'$phar = new Phar(\"${phar_file}\"); $phar->extractTo(\"${directory_to_decompress_to}\");\\'',\n parameters: {\n phar_file: {\n name: 'phar file',\n description: 'Phar file that should be extracted',\n type: 'forrest_filename',\n 'file-formats': [ 'phar' ]\n },\n directory_to_decompress_to: []\n },\n 'file-formats': [ 'phar' ],\n tool: 'php',\n normalizedPrompt: \"php -r '$phar = new Phar(${p}); $phar->extractTo(${p});'\",\n context: 'php',\n runs: 2,\n success_count: 2,\n score: 14.86060606060606\n }\n]\n\"file\"plainQuestionsquestions.find()$text\"phar file decompress\"description field[\n {\n _id: ObjectId(\"6476ebd479b96036dc032fa9\"),\n name: 'php:phar:decompress',\n description: 'Decompress a phar file to a given directory',\n prompt: 'php -r \\'$phar = new Phar(\"${phar_file}\"); $phar->extractTo(\"${directory_to_decompress_to}\");\\'',\n parameters: {\n phar_file: {\n name: 'phar file',\n description: 'Phar file that should be extracted',\n type: 'forrest_filename',\n 'file-formats': [ 'phar' ]\n },\n directory_to_decompress_to: []\n },\n 'file-formats': [ 'phar' ],\n tool: 'php',\n normalizedPrompt: \"php -r '$phar = new Phar(${p}); $phar->extractTo(${p});'\",\n context: 'php',\n runs: 2,\n success_count: 2,\n score: 14.86060606060606\n },\n {\n _id: ObjectId(\"64888e3944169f5b3203fcd8\"),\n name: 'files:file:delete',\n description: 'Delete a given file.',\n prompt: 'rm ${filename}',\n parameters: { filename: { type: 'forrest_filename' } },\n tool: 'rm',\n created: '2023-06-13 15:41:34',\n 
explanation: 'This is a command for Linux or Unix-based systems using the shell command line interface. \"rm\" stands for \"remove\" and this command is used to delete a file or multiple files. \\n' +\n '\\n' +\n '\"${filename}\" is a variable that contains the name of the file you want to remove. The syntax of ${variable_name} is used to refer to the value stored in a variable. So in this case, the command is telling the system to remove the file whose name is stored in the variable filename. \\n' +\n '\\n' +\n 'For example, if the variable filename is set to \"myFile.txt\", the command would remove that file from the system. \\n' +\n '\\n' +\n \"It's important to note that this command is permanent, meaning that the file will be permanently deleted from the system and cannot be recovered. So, make sure you are absolutely sure that you want to delete the file before using this command.\",\n normalizedPrompt: 'rm ${p}',\n context: 'files',\n score: 7.169376407657658 /// <--- lowered score after changing the document\n }\n]\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { _fts: 'text', _ftsx: 1 },\n name: 'textIndex',\n weights: { '$**': 1, description: 2, name: 3 },\n default_language: 'english',\n language_override: 'language',\n textIndexVersion: 3\n }\n]\ntest> db.text.createIndex({'$**':'text'},{weights:{description:6,name:8}})\n$**_text\ndescription test> db.text.find({$text:{$search:'phar file decompress'}},{score:{'$meta':'textScore'}})\n[\n {\n _id: ObjectId(\"6476ebd479b96036dc032fa9\"),\n name: 'php:phar:decompress',\n description: 'Decompress a phar file to a given directory',\n prompt: 'php -r \\'$phar = new Phar(\"${phar_file}\"); $phar->extractTo(\"${directory_to_decompress_to}\");\\'',\n parameters: {\n phar_file: {\n name: 'phar file',\n description: 'Phar file that should be extracted',\n type: 'forrest_filename',\n 'file-formats': [ 'phar' ]\n },\n directory_to_decompress_to: []\n },\n 'file-formats': [ 'phar' ],\n tool: 'php',\n normalizedPrompt: \"php -r '$phar = new Phar(${p}); $phar->extractTo(${p});'\",\n context: 'php',\n runs: 2,\n success_count: 2,\n score: 28.727272727272727\n },\n {\n _id: ObjectId(\"64888e3944169f5b3203fcd8\"),\n name: 'files:file:delete',\n description: 'Delete a given file.',\n prompt: 'rm ${filename}',\n parameters: { filename: { type: 'forrest_filename' } },\n tool: 'rm',\n created: '2023-06-13 15:41:34',\n explanation: 'This is a command for Linux or Unix-based systems using the shell command line interface. \"rm\" stands for \"remove\" and this command is used to delete a file or multiple files. \\n' +\n '\\n' +\n '\"${filename}\" is a variable that contains the name of the file you want to remove. The syntax of ${variable_name} is used to refer to the value stored in a variable. So in this case, the command is telling the system to remove the file whose name is stored in the variable filename. \\n' +\n '\\n' +\n 'For example, if the variable filename is set to \"myFile.txt\", the command would remove that file from the system. \\n' +\n '\\n' +\n \"It's important to note that this command is permanent, meaning that the file will be permanently deleted from the system and cannot be recovered. 
So, make sure you are absolutely sure that you want to delete the file before using this command.\",\n normalizedPrompt: 'rm ${p}',\n questions: [\n 'remov file?',\n 'delet log file',\n 'delet txt file',\n 'delet file',\n 'delet excel file',\n 'remov excel file',\n 'delet empti file',\n 'wie kann ich ein datei löschen',\n 'remov file'\n ],\n context: 'files',\n plainQuestions: [\n 'How to delete an excel file?',\n 'How to remove an excel file?',\n 'How to delete a file?',\n 'How to delete an empty file?',\n 'wie kann ich eine Datei löschen',\n 'How to remove a file?'\n ],\n score: 25.169376407657666\n }\n]\n",
"text": "Hi @Nils_Langner,Thanks for your patience. I believe the scoring you’re seeing (in specific reference to the 2 documents in your screenshot) is due to the other fields containing the \"file\" term. I did a quick find on the document with the highest score and found there are 31 instances of the word \"file\" which would impact the score since you have indexed all fields.In this example, the search query phar file decompress is literally the description of the second document, but the score is lower than the one of the first document.For example, in my test environment, I have the same 2 documents where the higher scoring document does not contain contain \"phar file decompress\" in the description:There are multiple instances of the term \"file\" inside the plainQuestions and questions field for the first document. After removing this and performing the same .find() with the $text operator, we can now see the score is lower than the file that contains the terms \"phar file decompress\" in the description field:Note: The above test collection contained the following indexes:Although I understand removing those fields is not what you’re after, I hope it helps in understanding the scoring behaviour you had seen. In comparison to this, with the unaltered versions of the same 2 documents, I created a similar index with following weights:Which then resulted in the document containing all the terms of the text search in description to score higher:In saying all the above, hopefully the Assign Weights to Text Search Results documentation may be of use to you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
MongoDB score is irritating
|
2023-08-07T21:49:55.394Z
|
MongoDB score is irritating
| 554 |
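For reference, the PHP index and query from this thread translate fairly directly to the Node.js driver. This is a rough sketch; the database name `forrest` and collection name `commands` are assumed.

```javascript
const { MongoClient } = require("mongodb");

async function weightedTextSearch(uri, searchPattern) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const commands = client.db("forrest").collection("commands"); // assumed names

    // Wildcard text index with heavier weights on name and description,
    // mirroring the PHP createindex call above.
    await commands.createIndex(
      { "$**": "text" },
      { weights: { name: 3, description: 2 }, name: "textIndex", default_language: "english" }
    );

    // $text performs a logical OR over space-delimited terms and scores by relevance.
    return commands
      .find(
        { $text: { $search: searchPattern, $caseSensitive: false } },
        { projection: { score: { $meta: "textScore" } } }
      )
      .sort({ score: { $meta: "textScore" } })
      .limit(10)
      .toArray();
  } finally {
    await client.close();
  }
}
```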
|
null |
[
"atlas-functions"
] |
[
{
"code": "",
"text": "HiI have a Realm app with quite a number of functions. I recently started creating some subdirectories (which I understand is a relatively new feature). Creating the subdirectories through the UI works fine - however, when I try and sync back to GitHub, all the sub directories disappear from the UI - although they are present in GitHub.Similarly, if I create a function via GitHub, it will work as long as it isn’t in a sub-directory. If it is in a sub-directory then it just won’t appear in the UI and I won’t be able to call it.It took me a while to track the problem down as I originally created the function for a https end-point and when syncing from GitHub I got a message in the Deployment status field saying “Failed: failed to import app: unable to find function: …”anyone else getting this or know of any workarounds?",
"username": "ConstantSphere"
},
{
"code": "",
"text": "Hi Simon,I recently started creating some subdirectories (which I understand is a relatively new feature). Creating the subdirectories through the UI works fineI’m not sure what you mean by this.\nCould you please provide a screenshot?The article below states that subdirectories under /functions will not be recognised.Regards",
"username": "Mansoor_Omar"
},
{
"code": "/functions/functionsutils/addfunctions/utils/add.js",
"text": "Hi Manny,thank you for your response; apologies, if I didn’t explain too well. I see from the link you provided (which I hadn’t seen before) that “All of your function source code files must be in the /functions directory. Realm does not recognize functions nested in subdirectories of /functions .”However, it also says here:https://www.mongodb.com/docs/realm/functions/define-a-function/“You can define functions inside of nested folders. Function names are slash-separated paths, so a function named utils/add maps to functions/utils/add.js in the app’s configuration files.”I found out about it from this post:which says the sub folders work and as per the screenshot I can quite easily create a function in a sub folder from the UI and reference it in a function call.\nimage990×865 33.8 KB\nIt looks like the feature has only been part implemented. Is it possible to get a ticket raised to finish the feature off so that it works with GitHub and make the documentation consistent? The danger is that someone creates a bunch of functions in sub-folders via the UI and then tries to sync with GitHub only to lose everything.Many thanks.",
"username": "ConstantSphere"
},
{
"code": "",
"text": "Hi Simon,Thanks for providing a link to the documentation.\nThe article I referred to seems to be outdated and I will bring this to the attention of our docs team.I was able to reproduce the behaviour you’re seeing with github auto deploy and this appears to be either a bug or unsupported in github auto deploy.Unfortunately I did not see a potential workaround in this workflow to get around this issue but I have raised this with our Realm team to be investigated and will update this thread if there’s any progress.Regards",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Hi Manny,Many thanks for your investigation and raising it with the team internally.Any updates would be much appreciated.Regards",
"username": "ConstantSphere"
},
{
"code": "",
"text": "Is there any update on this issue?",
"username": "Surender_Kumar"
},
{
"code": "",
"text": "Hi Surender,There is no update on this yet.Regards",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Hi Manny,\nExperiencing same issue, any updates or any work around.",
"username": "Tekeleselassie_Leul"
},
{
"code": "",
"text": "@Mansoor_Omar Any update on this?! I’m amazed that ten months later, this still exists. The issue was well-documented above, showing inconsistent documentation and broken functionality on your product. I just wasted hours trying to debug this until finding this thread.Are you understanding that your UI (still) says “Use forward slashes to denote nesting in the Functions directory…” when creating the function, but then they never show in that very same UI?Are you no longer officially supporting GitHub deployments?? Is this getting fixed, and/or is there any workaround?Thanks.",
"username": "Gregory_Fay"
},
{
"code": "",
"text": "Hi All,There is no update on this yet but I will follow it up with our product team.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "This is still an issue. It would be nice to at least see clarification in the admin UI text where it says subfolders are supported.",
"username": "Eric_Summers"
},
{
"code": "",
"text": "@Mansoor_Omarre:There is no update on this yet but I will follow it up with our product team.Where are we with this? If Functions are meant to be a key aspect of MongoDB Atlas, then it should support something as simple as grouping under folders. Projects could have dozens or hundreds of functions, and it needs to be more manageable.Thanks.",
"username": "Gregory_Fay"
},
{
"code": "",
"text": "Hi All,Apologies for the delay however the good news is our team have a fix for this issue which will should be released in a week or so.I will update again when the fix is deployed.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Looking forward to this getting fixed. I’ve been using a clunky workaround of naming my functions with underscores to represent the directory structure to help me find stuff. It will be great to finally tidy up the mess I have created.",
"username": "ConstantSphere"
},
{
"code": "",
"text": "@ConstantSphere @Gregory_Fay @Surender_Kumar @Tekeleselassie_Leul @Eric_SummersThank you for your patience on this, I’m pleased to announce that the fix for this issue has been released today!Please test and let us know if there are any problems.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "@Mansoor_Omar Thank you, sir! Preliminary testing has this working great!",
"username": "Gregory_Fay"
},
{
"code": "",
"text": "I can also confirm that this is now working for me. Many thanks for your efforts on this. As a side note it would be good to show the functions in their folder structure within the UI (as you do in the hosting UI) but this is more of an enhancement than a defect. Thanks again!",
"username": "ConstantSphere"
},
{
"code": "",
"text": "Hi Simon,Good to hear it’s working now!As a side note it would be good to show the functions in their folder structure within the UI (as you do in the hosting UI) but this is more of an enhancement than a defect.I would recommend raising this in our feedback portal.\nhttps://feedback.mongodb.com/forums/945334-atlas-app-servicesRegards",
"username": "Mansoor_Omar"
}
] |
Realm functions in subdirectory disappear with automatic GitHub deplyment
|
2022-04-29T19:47:43.777Z
|
Realm functions in subdirectory disappear with automatic GitHub deplyment
| 4,792 |
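To illustrate the slash-separated naming described in the documentation quoted in this thread: a file at functions/utils/add.js defines a function named `utils/add`, and other functions can call it through context.functions. This is a sketch; the `utils/add` helper and its caller are made-up examples.

```javascript
// functions/utils/add.js  -- the function's name becomes "utils/add"
exports = function (a, b) {
  return a + b;
};

// functions/sumExample.js -- hypothetical caller of the nested function
exports = async function () {
  // Call the nested function by its slash-separated name.
  const total = context.functions.execute("utils/add", 2, 3);
  return { total };
};
```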
null |
[
"react-js"
] |
[
{
"code": "exports = async function (args) {\n const bucket = \"my-s3-bucket\";\n\n const S3 = require(\"aws-sdk/clients/s3\");\n const s3 = new S3({\n accessKeyId: context.values.get(\"awsAccessKeyId\"),\n secretAccessKey: context.values.get(\"awsSecretAccessKey\"),\n region: \"ap-southeast-2\"\n });\n\n const presignedUrl = await s3.getSignedUrlPromise(\"putObject\", {\n Bucket: bucket,\n Key: args.Key,\n ContentType: args.ContentType,\n // Duration of the lifetime of the signed url, in milliseconds\n Expires: 900000\n });\n return presignedUrl;\n};\nimport { realmUser } from \"../../main\";\nconst axios = require(\"axios\");\n\nexport default class UploadFile {\n async handleFileUpload(file) {\n const returnObj = {\n s3Path: \"\",\n s3: {},\n };\n if (file.size > 26214400) {\n alert(\"No files over 25 MB supported\");\n return false;\n }\n\n const key = `files/${file.name}`;\n returnObj.s3Path = key;\n // AWS S3 Request\n const args = {\n ContentType: file.type,\n Key: key,\n };\n\n try {\n const presignedUrl = await this.getPresignedS3URL(args);\n const options = {\n headers: {\n \"Content-Type\": file.type,\n },\n };\n // Saves the file to S3\n await axios.put(presignedUrl, file, options);\n returnObj.s3.key = key;\n } catch (error) {\n console.log(error);\n }\n\n // Return the data back to a componenet\n return returnObj;\n }\n\n async getPresignedS3URL(args) {\n return new Promise((resolve, reject) => {\n realmUser.functions\n .uploadTestBuyFile(args)\n .then((doc) => {\n resolve(doc);\n })\n .catch((err) => {\n reject(err);\n });\n });\n }\n}\nexports = async function(newId, newPicture, bucket) {\n \n const S3 = require('aws-sdk/clients/s3');\n const s3 = new S3({\n accessKeyId: context.values.get(\"AWS_ACCESS_KEY\"),\n secretAccessKey: context.values.get(\"AWS_SECRET_ACCESS_KEY_LINKED\"),\n region: \"region\",\n })\n \n const putResult = await s3.putObject({\n Bucket: bucket,\n Key: newId,\n ContentType: \"image/jpeg\",\n Body: Buffer.from(newPicture, 'base64'),\n ContentEncoding: 'base64'\n }).promise();\n\n}\n",
"text": "Howdy,With 3rd party services being deprecated August 1, 2023 I wanted to reopen this conversation so the community can best understand how we should migrate from 3rd party services in MongoDB Realm, specifically AWS S3 in this case.The conversation ended with the 3rd party services being extended an additional year and a community member @Adam_Holt kindly sharing an implementation to on how to use AWS S3 successfully and efficiently without 3rd party services.Given that timeline, I was under the impression that MongoDB would have published some tutorial or made some comments by now as to the best way to use AWS services going forward. Perhaps I missed this in my search.My question, what is the recommended approach to getObjects and putObjects to AWS S3? Without the 3rd party services and @Adam_Holt’s suggestion, I am only aware of 1 other solution:Adam’s SolutionRealm functionFrontend JS codeLegacy Solution (This is remarkably slower than the MongoDB 3rd Party Services)Thanks for your help! @henna.s tagging you as you were involved in the prior conversations.Legacy Deprecated 3rd Party Services Conversation",
"username": "Jason_Tulloch1"
},
{
"code": "",
"text": "Hello @Jason_Tulloch1 ,Its been a long while How are you? Thank you so much for raising your concern.Please allow me some time to talk to teams internally and I will get back to you as soon as I can.Cheers, \nhenna",
"username": "henna.s"
},
{
"code": "",
"text": "Hi @Jason_Tulloch1,As per my discussion with the App Services Team, here are some guidelines on how to move from third-party services to npm modules. For uploading images, you may have to be mindful of the image size as the memory limit imposed on function runtime may become a limiting factor.Unfortunately, at this time, there isn’t any documentation available on uploading images to AWS S3.Thanks,",
"username": "henna.s"
},
{
"code": "{\n \"name\": \"functions\",\n \"version\": \"1.0.0\",\n \"description\": \"\",\n \"scripts\": {\n \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\"\n },\n \"dependencies\": {\n \"@aws-sdk/client-lambda\": \"^3.369.0\",\n \"@aws-sdk/client-s3\": \"^3.369.0\",\n \"@aws-sdk/s3-request-presigner\": \"^3.369.0\",\n \"@aws-sdk/client-ses\": \"^3.369.0\"\n },\n \"author\": \"\",\n \"license\": \"ISC\"\n}\nexports = async function () {\n const AWS_CONFIG = {\n credentials: {\n accessKeyId: context.values.get('AWS_ACCESS_ID'),\n secretAccessKey: context.values.get('AWS_SECRET_KEY'),\n },\n region: 'ap-southeast-2',\n }\n return AWS_CONFIG\n}\nexports = async function (key) {\n const AWS_CONFIG = await context.functions.execute('aws_getConfig')\n const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3')\n const { getSignedUrl } = require('@aws-sdk/s3-request-presigner')\n const s3Service = new S3Client(AWS_CONFIG)\n\n const presignedUrl = await getSignedUrl(\n s3Service,\n new PutObjectCommand({\n Bucket: 'BUCKET_NAME',\n Key: key,\n Method: 'PUT',\n ExpirationMS: 120000,\n })\n )\n return presignedUrl\n}\nconst preSigned = await s3.uploadFile(fileName) // This is calling your mongo app function\nawait axios.put(presignedUrl, file, {\n headers: {\n 'Content-Type': file.type\n },\n})\nexports = async function (key) {\n const AWS_CONFIG = await context.functions.execute('aws_getConfig')\n const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3')\n const { getSignedUrl } = require('@aws-sdk/s3-request-presigner')\n const s3Service = new S3Client(AWS_CONFIG)\n \n const presignedUrl = await getSignedUrl(\n s3Service,\n new GetObjectCommand({\n Bucket: 'BUCKET',\n Key: key,\n Method: 'GET',\n ExpirationMS: 120000,\n })\n )\n\n return presignedUrl\n}\nconst AWS_CONFIG = await context.functions.execute('aws_getConfig')\nconst { InvokeCommand, LambdaClient } = require('@aws-sdk/client-lambda')\nconst lambda = new LambdaClient(AWS_CONFIG)\n\nconst lambdaPayload = {\n foo: 'bar',\n}\n\nconst lambdaCommand = new InvokeCommand({\n FunctionName: 'FUNC_NAME',\n Payload: JSON.stringify(lambdaPayload)\n})\n\nconst lambdaResult = await lambda.send(lambdaCommand)\nconst result = EJSON.parse(Buffer.from(lambdaResult.Payload).toString())\nreturn result\nconst AWS_CONFIG = await context.functions.execute('aws_getConfig')\nconst { SESClient, SendEmailCommand } = require('@aws-sdk/client-ses')\nconst ses = new SESClient(AWS_CONFIG)\nconst sendEmailCommand = new SendEmailCommand({\n Source: `${settings.CompanyName}<[email protected]>`,\n Destination: {\n ToAddresses: [user.email],\n CcAddresses: [settings.Orders_EmailCC]\n },\n Message: {\n Body: {\n Html: {\n Charset: 'UTF-8',\n Data: email,\n },\n },\n Subject: {\n Charset: 'UTF-8',\n Data: subject,\n },\n },\n})\nconst send = await ses.send(sendEmailCommand)\n",
"text": "@Jason_Tulloch1I had to move my own application over this weekend. It was a long weekend here in NZ and I figured it was a good time to do it before the August cut off. The one I moved previously was for a client.As AWS v3 SDK is now recommended over v2, I thought I would try this as it was likely to be much lighter than the v2 package as every service is its own package.Here are some examples from code I have put together which should give you a good idea of how you can use v3. It seems to be working really well.The one thing I would avoid is trying to pull data into the function’s memory. That is where you will get bad performance. So no uploading and downloading from S3. Instead, use presigned URLs and do the downloading/uploading in the browser from the client side.functions/package.jsonfunctions/aws_getConfig.jsfunctions/s3_put.jsThen you can use something like this in your client code.functions/s3_get.jsRun Lambda in a functionSend an email via SES",
"username": "Adam_Holt"
},
{
"code": "",
"text": "Thanks @Adam_Holt for sharing these examples with the community!@Jason_Tulloch1 I’ve been updating some of the other threads on this - the team has decided to extend the deprecation timeline for 3rd party services from Aug 1st, 2023 to November 1, 2024. So there will be no impact to your application on Aug 1st. We will be updating the banners in product as well as in documentation, this week, to reflect these changes.",
"username": "Laura_Zhukas1"
},
{
"code": "",
"text": "Thanks @Adam_HoltA quick note for you and others:As of the time of this message, the only “officially” supported version 3 of the AWS SDK is 3.100.0 (more recent versions, at the very least, do not work when trying to send messages to SQS)",
"username": "Dima"
}
] |
Deprecated AWS 3rd Party Services
|
2023-06-28T14:40:33.183Z
|
Deprecated AWS 3rd Party Services
| 1,086 |
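As a small follow-up to the examples above, the GET side mirrors the PUT side on the client: ask the App Services function for a presigned URL and fetch it directly from the browser, so no file bytes pass through the function's memory. A sketch, assuming the `s3_get` function defined above and a logged-in `realmUser`:

```javascript
const axios = require("axios");

// Download an object via a presigned GET URL generated server-side.
async function downloadFromS3(realmUser, key) {
  const presignedUrl = await realmUser.functions.s3_get(key); // App Services function above
  const response = await axios.get(presignedUrl, { responseType: "blob" });
  return response.data; // Blob with the object's contents
}
```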
[
"serverless"
] |
[
{
"code": "",
"text": "Hello everyone,I am trying to perform a query to a serverless MongoDB cluster from a lambda function in a private subnet, through a peered connection following the steps suggested in the provided Set Up a Network Peering Connection — MongoDB Atlas. However, it always returns the message ‘connection 1 to *IP:27017 closed’ .Here is the configuration I have:MongoDB Peering Connection active.AWS Peering Connection active.In the route table of VPC-xxx, the CIDR of the MongoDB VPC has been added. In the route table of the private subnet xxx, the CIDR of the MongoDB VPC has been added.\nimage1270×1018 130 KB\nWhen I add access from all sources to the IP Access list, it connects successfully. I have followed the steps for the connection, but I am unable to achieve successful communication. Any ideas on what configuration steps might be missing?",
"username": "Angello_Ignacio"
},
{
"code": "",
"text": "Hi @Angello_Ignacio - Welcome to the community I am trying to perform a query to a serverless MongoDB cluster from a lambda function in a private subnet, through a peered connection following the steps suggested in the provided Set Up a Network Peering Connection — MongoDB Atlas. However, it always returns the message ‘connection 1 to *IP:27017 closed’If the MongoDB Atlas cluster is of the serverless instance type, then unfortunately you won’t be able to connect to it through a peering connection as it’s currently unsupported as per the Serverless Limitations documentation.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
AWS lambda peering connection error
|
2023-08-15T22:55:59.756Z
|
AWS lambda peering connection error
| 619 |
|
[
"installation"
] |
[
{
"code": "sudo service mongod start\nCould not find /usr/bin/mongod\n",
"text": "I’m trying to install MongoDB in WSL2 but always getting this error:but I don’t know what is the problem…plz help me \n\nScreenshot (8)1920×1080 238 KB\n",
"username": "Tathagat_Tiwari"
},
{
"code": "",
"text": "I’m following the official documentation: https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-ubuntu/#std-label-install-community-ubuntu-pkg",
"username": "Tathagat_Tiwari"
}
] |
Unable to install MongoDB on WSL2 (Ubuntu 22.04)
|
2023-08-15T21:34:21.830Z
|
Unable to install MongoDB on WSL2 (Ubuntu 22.04)
| 486 |
|
null |
[
"dot-net",
"change-streams"
] |
[
{
"code": "",
"text": "Hello,I installed standalone MongoDb and converted it to ReplSet with single instance. On clusters with 3 instances I’m getting 10-15 ms performance from change streams but in my single instance replset I’m getting 60-70ms event in the same message traffic with same server.I’m using dotnet MongoDB.Driver with version 2.20.0 and MongoDb server with version 6.0.2. WriteConcern is W1 and ReadConcern is Majority since MongoDB does not allow another read concern on change stream.Why single instance replset is giving much worse performance on change streams?",
"username": "Ozgun_Ozdemir"
},
{
"code": "",
"text": "Hopefully someone with internal knowledge of change stream will be able to shed some light on this.I see one reason why it could be the case. May be the change stream is fed from secondaries so more stuff can happen in parallel. Which might explain why ReadConcern:Majority is the only read concern on change stream.Just to be sure your single instance is running on the same hardware/VM setup as the multiple instances?",
"username": "steevej"
},
{
"code": "",
"text": "Which might explain why ReadConcern:Majority is the only read concern on change stream.I’m guessing using a majority read the monitored changes will never be rolled back later. So this is a guarantee to the client side.Not sure why it’s slower with single instance though.",
"username": "Kobe_W"
}
] |
ChangeStream performance issue on single instance Mongodb
|
2023-08-10T10:21:04.476Z
|
ChangeStream performance issue on single instance Mongodb
| 510 |
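For anyone trying to reproduce the measurement, here is a rough sketch (Node.js rather than .NET, with made-up names) of timing change stream delivery: write a document carrying a timestamp and measure how long the matching change event takes to arrive.

```javascript
const { MongoClient } = require("mongodb");

// Sketch: measure end-to-end change stream latency on a replica set.
async function measureChangeStreamLatency(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const coll = client.db("test").collection("latency"); // hypothetical names

  const stream = coll.watch([], { fullDocument: "updateLookup" });
  stream.on("change", (event) => {
    if (event.fullDocument && event.fullDocument.sentAt) {
      console.log("latency ms:", Date.now() - event.fullDocument.sentAt);
    }
  });

  // Insert a document stamped with the current time; the handler above
  // reports how long the corresponding event took to arrive.
  await coll.insertOne({ sentAt: Date.now() });
}
```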
null |
[
"sharding",
"mongodb-shell"
] |
[
{
"code": "",
"text": "First of all, a bit of background: I’m currently working through a Udemy course. The instructor demos installing MongoDB 4.0, then running mongod in hyper (as the instructor is using a Mac), then opening a new shell and running mongo.I’m trying to achieve the same outcome with MongoDB 6.0.3. I know that now MongoDB uses a separate shell, mongosh. I downloaded the shell and put it in the bin with the other MongoDB downloads. I also set the path environmental variables. When I run the mongod command in the Bash terminal, it seems to be working, but when I open a new Bash shell and run mongosh, it’s giving me errors. I tried the commands for mongo or mongos, and still not working.I wanted to share screencaps here, as I think that’d be more helpful, but it looks like I’m going to need to clean off some diskspace to run Photoshop, so for now I hope these descriptions will help. Here is what I’m seeing in the Bash shell when I run each command:“$ mongosh\nbash: mongosh: command not found”“$ mongo\nBadValue: error: no args for --configdb\ntry ‘C:\\Program Files\\MongoDB\\Server\\6.0\\bin\\mongos.exe --help’ for more information”I double-checked, and my mongosh file is definitely in “C:\\Program Files\\MongoDB\\Server\\6.0\\bin”, which is the path I set in the environmental variables section.I followed what this guy did in the video: Mongo command is not working/found in MongoDB 6.0+ || MongoDB error fix - YouTube – with the one difference being that I put my mongosh file with the other MongoDB files in the bin folder… though I don’t see that this would make a difference. But when I run the “mongosh” command in Bash, I get the error above. I’m confused, as it seems to work fine for him in the tutorial video, so I came here, as I’m quite frustrated with trying to troubleshoot at this point. I’ve been troubleshooting MongoDB installations issues for a while now, between this and other differences in the updated version, and browsing here and Stack Overflow, but still confused as to why it’s not working.Any help would be appreciated! Thanks!Edit: Oh my gosh. I just decided to try the command CLI rather than the Bash shell, and… it works on the command prompt, but not the Bash shell. I suppose I can use the command prompt, but now I’m confused why it’s not working in the Bash shell. I feel a little silly realizing this after typing all that, but I do prefer to work in the Bash shell if possible.",
"username": "Amber_Adamson"
},
{
"code": "echo $PATHwhich mongoshtype mongoshwhich mongowhich mongodhash -rmongoshmongosh$ mongo\nBadValue: error: no args for --configdb\ntry ‘C:\\Program Files\\MongoDB\\Server\\6.0\\bin\\mongos.exe --help’ for more information\nmongomongosmongomongosmongos",
"text": "Hi @Amber_Adamson! I don’t have a complete answer for you, but here’s the troubleshooting that I’d do if I were in your situation:Finally, this is not related to troubleshooting but points to something being wrong in general:mongo and mongos are different executables, so if mongo self-identifies as mongos and reports an error that only mongos should ever really report, there’s most likely something else wrong with the MongoDB installation here as well.",
"username": "Anna_Henningsen"
},
{
"code": "",
"text": "How are you running bash on Windows? Is this in WSL?You would need to install mongosh inside WSL if you are.",
"username": "chris"
},
{
"code": "",
"text": "I found this post looking for a way to make the mongoDB shell work properly in bash, which I installed on Windows 10 as part of a git installation. Everything works, but whilst mongosh works perfectly using the Windows CLI, all the nice features of mongosh are lost in bash. Indeed, I found it quite error prone in bash, so I just keep on using it with the CLI.",
"username": "Gunter_Seydack"
},
{
"code": "",
"text": "What I did on windows was, I installed mongosh and created an alias and placed it’s location in .bash_profile using vim, just below mongod and mongos",
"username": "Krimier_Dan_Sanz"
},
{
"code": "",
"text": "Thanks, Krimier_Dan_Sanz. Your response prompted me to put some more effort to the problem, and I now found the solution that works best for me. I can simply run the mongosh shell in a Visual Studio Code terminal, and it works perfectly.",
"username": "Gunter_Seydack"
},
{
"code": "cd ~sudo apt updatewget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -echo \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.listsudo apt-get updatesudo apt-get install -y mongodb-orgmongod --versionmkdir -p ~/data/dbsudo mongod --dbpath ~/data/dbps -e | grep 'mongod'",
"text": "Just putting the solution here for anyone looking:To install MongoDB (version 5.0) on WSL (Ubuntu 20.04):Source",
"username": "Vitally_Zilber"
},
{
"code": "",
"text": "I’ve been asked to post my solution for running the mongosh shell in VS Code. I’ll gladly do so, with the understanding that this is not specifically a solution to the OP’s question about “mongosh command not working in Bash shell”. I understand that the OP’s question has been resolved. Running mongosh in a Bash terminal was suboptimal to me, and running it in a VS Code terminal became my preferred solution; not least, because I’m also using this for learning web development. Assuming a problem-free installation of VS Code on Windows 10, my solution was really quite simple. Just add the mongosh.exe path to the sytem’s Path environment variable; in my case C:\\Program Files\\MongoDB\\Shell\\bin. Open a terminal in VS Code, which in my case by default runs Powershell. At the PS prompt, enter: mongosh. The terminal now runs the mongosh shell, and displays the usual mongosh prompt. Two advantages:",
"username": "Gunter_Seydack"
},
{
"code": "",
"text": "I’m the OP, coming back to this a few months later. First of all, I have to say, I’m really embarrassed to admit I used some imprecise terminology in my original post… which undoubtedly led to confusion. I was working with the Gitbash terminal on my Windows, which I was referring to as “Bash” as short for GITBASH… which was undoubtedly confusing, since anyone would assume I was using a Linux Bash shell, maybe via an Ubuntu VM or something. I have since worked with a Linux Bash shell, so I would be a lot more precise with my terminology and differentiation now. Though I know Git Bash reproduces Bash features, I still think I should’ve been a bit more precise about what I was working with, as I basically led everyone to assume I was working with a Linux shell.I’m not sure I ever figured out why mongosh wasn’t working on Gitbash. I just started using the command prompt instead, as it worked.",
"username": "Amber_Adamson"
},
{
"code": "",
"text": "Hello! I think I solved this problem. After downloading “mongoDB shell”, make sure that it is saved in “Program Files” on your C drive. You should be able to access this from “My PC” then you have to go to that file “mongosh” and go to bin, then make a copy of the file that has a little green leaf on it. Take the copy and go back into “Program Files” where you originally downloaded your monggDB-different from mongosh. Open that folder until you get to bin. Add a coy of mongosh to the bin of your mongoDB. Also, Make sure your mongoDB bin is in “system enviorenmental variables” I hope this helps someone!",
"username": "Sachi_N_A"
},
{
"code": "",
"text": "I think I have it figured out.Just writing this here because I had similar problems with the mongosh shell in a vs code terminal. This was the solution I mustered up after thinking about how to solve the problem for a while.",
"username": "Piano_Dan"
}
] |
Mongosh command not working in Bash shell
|
2022-11-18T03:01:51.501Z
|
Mongosh command not working in Bash shell
| 12,194 |
[
"python"
] |
[
{
"code": "date_format = \"%Y-%m-%dT%H:%M:%SZ\" \nfor item in results['value']:\n item['createdDateTime'] = datetime.strptime(item['createdDateTime'], date_format)\nmongo_collection.insert_many(results['value'], ordered=True) \n{ createdDateTime: { $gte: '2023-08-03', $lt: '2023-08-05' } } ",
"text": "Hello I am using this python code to insert documents that contain createdDateTime field:However, I get different types for the filed in my local DB on Windows and Atlas:Local:\nAtlas:\ncreatedDateTime: date\n(sorry, not allowed to post two images)That makes it difficult to develop queries, for example\n{ createdDateTime: { $gte: '2023-08-03', $lt: '2023-08-05' } } \nworks in local but returns 0 results in Atlas.Any ideas? I am on my first week of MongoDb journey, so apologies for a newbie question ",
"username": "dmb.uk"
},
{
"code": "",
"text": "You are clearly converting native date to string.So both Atlas and local server should store your createdDateTime in the same string format. It would be a major flaw if they would.So what ever happens is not related to where the server is running. Most likely the document where createdDateTime is stored in date format have been inserted before the code was changed or are inserted by another piece of code.By the way it is more efficient to store dates in the date format compared to string. Date takes less space than the string version, date is much faster to compare than string and provides a richer API.",
"username": "steevej"
},
{
"code": "",
"text": "Hey Steeve\nThanks for the replyYou are clearly converting native date to string.It is quite opposite, I am converting a string to datetime object. I use the same code to insert data into my local DB and Atlas. But get different types for the field, string in local but datetime in Atlas.\nnot sure why.",
"username": "dmb.uk"
},
{
"code": "",
"text": "It is quite oppositeMy bad. I am not into python. Like I wrote it would be a major and know flaw if the same code (including mongodb driver, python and imported library versions) would produce different data type in the local server versus Atlas. May be if your local server is a very very old version where date data type was not existent. But I doubt. Something else is at stake and there is not enough information to help find the issue.I still suspect that the documents with the wrong datatype have been created before the code that does the conversion of items.I am not into python but I know it depends on spaces for indentation. May be the invisible space based indentation has been modified between the correct and wrong data type.I will hand over the baton to any one with python experience because I am 99% certain that the issue is with the input data or the python code. May be you date_format is not correct anymore and datetime.strptime silently do not convert the string to date. May be you can update the code and check what strptime really modified the string to date. You could also check the return code of insert_many to see if documents are really inserted, because you might be simply looking at old documents that were not converted to date.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Stevee,Thanks for the comprehensive answer.I agree that it would be a major flow to save the same data in different format. Nevertheless, I’ve recreated the database and it now inserts dateTime as dateTime \nCould be indeed an issue on the data source type as you said, just coincided with the new db creation.Thanks again for replying.\nDmitry",
"username": "dmb.uk"
}
] |
Local DB vs Atlas: Different field type for dateTime
|
2023-08-05T20:21:33.563Z
|
Local DB vs Atlas: Different field type for dateTime
| 676 |
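A minimal sketch of the fix discussed in this thread, assuming a pymongo client and a hypothetical items collection (connection string and collection names are placeholders, not from the thread). Parsing the string into a datetime before insert_many makes the server store a BSON date, and range queries must then compare against datetime objects rather than strings:

    from datetime import datetime
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
    coll = client["test"]["items"]                     # placeholder collection

    date_format = "%Y-%m-%dT%H:%M:%SZ"
    results = {"value": [{"name": "example", "createdDateTime": "2023-08-04T10:15:00Z"}]}

    # Convert the string field to a datetime object so MongoDB stores a BSON date.
    for item in results["value"]:
        item["createdDateTime"] = datetime.strptime(item["createdDateTime"], date_format)

    coll.insert_many(results["value"], ordered=True)

    # Because the field is now a BSON date, compare against datetime objects,
    # not strings such as '2023-08-03'.
    docs = coll.find({"createdDateTime": {"$gte": datetime(2023, 8, 3), "$lt": datetime(2023, 8, 5)}})
    print(list(docs))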
|
null |
[
"node-js",
"flexible-sync"
] |
[
{
"code": "schema = {\n name: 'Node',\n primaryKey: '_id',\n properties: {\n _id: 'uuid',\n text: 'string',\n children: 'Node[]'\n }\n}\nNodeNodeNodechildrenNodeNodeNodeNoderealm.objects().addListener()childrenchildrenNode",
"text": "Hi MongoDB team! In our app we’ve seen and been able to consistently reproduce some data loss that happens during a flexible sync schema update, and are hoping to gain a little insight into what is causing the issue and determine what we should be doing differently to fix the issue.We’re working on a minimal reproduction, but in the meantime, here is the general setup:The issue we’re seeing comes up when we deploy a schema update of our production app that adds a field to the Node schema. Specifically we have seen issues adding embedded object and array fields. Here is the sequence of events:A couple important things we have noted as we’ve been debugging this issue:My best guess as to what is happening is that when the client determines that its own schema does not match the server schema, it begins to replace the local data with data from the server. Any local changes to existing data, such as adding a new object to the children array of an existing Node, are overwritten with the data that exists on the server.Is this what is happening? And if so, what steps can we take to ensure that updates to documents during a schema change are persisted? If not, does anyone have thoughts on why we’re seeing this type of data loss? Again, we’re working on a minimally reproducible example right now, but any more insight in the meantime would be very helpful!",
"username": "Tristan_Dyer"
},
{
"code": "",
"text": "Hi, unfortunately, it sounds like you are running into a consequence of the additive-only initial sync we perform when changes are made to a collection. When a field is added to a synced collection, we start an asynchronous process of searching through all of the documents in your cluster, identifying documents that have those fields set, and pushing those values into Device Sync.It sounds like what is happening is the following:There are a few things you can ideally do here:Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi Tyler,Thanks for your response! Unfortunately I’m not sure either of those suggestions are options for us. We have a lot of end users who essentially never close their app, so we cannot guarantee that no clients are connected before making changes to the schema. The second suggestion is already true. The schema changes we make are always associated with added functionality on the client app. When we roll out a new feature, we update the database schema first, and then update client apps to work with the new fields in the schema. Doing it this way ensures that there are no documents with the new field in Atlas until the schema has been updated.",
"username": "Tristan_Dyer"
},
{
"code": "",
"text": "Got it. It looks like the issue stems from the fact that this is in a list of embedded objects if I understand correctly. If you add a top-level field we will only sync changes for that field, but if you add to a list of embedded objects then we have to re-write the entire list as that is the only safe thing to do to ensure the fields are present.If you are certain that when you update your schema the new fields are not present in your documents, we have a feature flag that we can add to your application to skip this additive initial sync. If you send the URL to your application (or applications) I can apply it to the app.The danger of this flag is that if you do happen to add a new field that is a list property, and then we skip this additive initial sync, and then that new list is appended to, it will generate an invalid history since we never synced down the initial state of the list. That being said, if your process is that the field is never populated when you add to the schema, then this is a safe change.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "childrenembeddedconst NodeSchema = {\n name: 'Node',\n primaryKey: '_id',\n properties: {\n _id: 'uuid',\n text: 'string',\n children: 'Node[]'\n }\n}\nconst NodeSchema = {\n name: 'Node',\n primaryKey: '_id',\n properties: {\n _id: 'uuid',\n text: 'string',\n children: 'Node[]',\n // new field added to the schema\n newField: 'number[]'\n }\n}\nNodeNodechildrenchildrennewField'number[]'newField'number''string'",
"text": "It looks like the issue stems from the fact that this is in a list of embedded objects if I understand correctly. If you add a top-level field we will only sync changes for that field, but if you add to a list of embedded objects then we have to re-write the entire list as that is the only safe thing to do to ensure the fields are present.So the field where we’re specifically seeing data loss is not a list of embedded objects as far as I can tell. It is the children field in this schema, which is an array of documents (note the embedded flag is not present):If we change the schema above to beand then a user adds a new Node document to an existing Node document’s children field, it is the update to the existing document’s children field that ends up being lost.We’ve been able to reproduce the data loss when the type of newField is an array (e.g., 'number[]', like above) or an embedded object. We do not see the same data loss when newField is a simple type like 'number' or 'string'.",
"username": "Tristan_Dyer"
},
{
"code": "",
"text": "Hi,\nPlease share your app ID (you’ll find this in the url) via the DM I just sent you and I can look into this issue for you.Thanks,\nNiharika",
"username": "Niharika_Pujar"
},
{
"code": "",
"text": "Just reached out, thanks Niharika!",
"username": "Tristan_Dyer"
}
] |
Data loss during flexible sync schema change
|
2023-08-01T16:19:10.723Z
|
Data loss during flexible sync schema change
| 749 |
null |
[
"aggregation",
"time-series"
] |
[
{
"code": "",
"text": "Hi, I am pretty new to MongoDB, and I am developing a web app where I query large time series and aggregate them. For that, I have an identifier variable which I am filtering by using the following match condition in the query (example):{‘$match’: ‘id’: ‘123X’}Currently, I have all the time series (one for each identifier) in the same collection. Running the query this way shows a too-large latency. I wonder if a design with a collection for every identifier would improve the performance (as I would avoid the match condition and could define this condition in a previous step in the API service).Basically:A unique collection with all identifiers VS Several collections, one for every identifier.What is better (if any)?",
"username": "Rodrigo_Vazquez"
},
{
"code": "",
"text": "A unique collection with all identifiers VS Several collections, one for every identifier.What is better (if any)?It all depends on your use-cases so only you can really answer that after doing some performance tests.With new aggregation operators like $unionWith it is less important to keep together documents that are involved in the same use-cases.In your case, having multiple collections might help since this field can be removed from all documents which mean more documents fit in RAM.An alternative from splitting into many collection is to have partial indexes where your id:123X query is the partialFilterExpression. This way specific and smaller indexes will be used whenever your $match includes id:123X.This being said may your too-large latency issue is simply the lack of indexes. Something that often happen with someone that is new to MongoDB.",
"username": "steevej"
}
] |
Multiple collections vs one collection for the same data
|
2023-08-15T07:53:55.775Z
|
Multiple collections vs one collection for the same data
| 527 |
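As a hedged illustration of the partial-index suggestion above, here is a pymongo sketch; the connection string, collection, and the timestamp/value field names are assumptions made for the example, not details from the thread:

    from pymongo import ASCENDING, MongoClient

    client = MongoClient("mongodb://localhost:27017")  # assumed connection string
    series = client["metrics"]["timeseries"]           # assumed collection name

    # Partial index covering only the documents of one identifier, so queries
    # that match on id: '123X' can use a much smaller index.
    series.create_index(
        [("timestamp", ASCENDING)],
        partialFilterExpression={"id": "123X"},
        name="ts_for_123X",
    )

    # The $match must include the partial filter condition for the index to be eligible.
    pipeline = [
        {"$match": {"id": "123X", "timestamp": {"$gte": 0}}},
        {"$group": {"_id": "$id", "avgValue": {"$avg": "$value"}}},
    ]
    print(list(series.aggregate(pipeline)))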
null |
[
"transactions",
"storage"
] |
[
{
"code": "systemctl start mongod.servicectrl+csudo systemctl status --full --lines=50 mongod> Aug 12 01:39:29 xxxxx.xxxxxx.net systemd[1]: mongod.service: Trying to enqueue job mongod.service/start/replace\n> Aug 12 01:39:29 xxxxx.xxxxxx.net systemd[1]: mongod.service: Installed new job mongod.service/start as 27570\n> Aug 12 01:39:29 xxxxx.xxxxxx.net systemd[1]: mongod.service: Enqueued job mongod.service/start as 27570\n> Aug 12 01:39:29 xxxxx.xxxxxx.net systemd[1]: mongod.service: Failed to set 'blkio.weight' attribute on '/system.slice/mongod.service' to '500': No such file or directory\n> Aug 12 01:39:29 xxxxx.xxxxxx.net systemd[1]: mongod.service: Passing 0 fds to service\n> Aug 12 01:39:29 xxxxx.xxxxxx.net systemd[1]: mongod.service: About to execute: /usr/bin/mongod --config /etc/mongod.conf\n> Aug 12 01:39:29 xxxxx.xxxxxx.net systemd[1]: mongod.service: Forked /usr/bin/mongod as 3002149\n> Aug 12 01:39:29 xxxxx.xxxxxx.net systemd[1]: mongod.service: Changed dead -> start\n> Aug 12 01:39:29 xxxxx.xxxxxx.net systemd[1]: Starting MongoDB Database Server...\n> Aug 12 01:39:29 xxxxx.xxxxxx.net systemd[1]: mongod.service: User lookup succeeded: uid=111 gid=113\n> Aug 12 01:39:29 xxxxx.xxxxxx.net systemd[3002149]: mongod.service: Executing: /usr/bin/mongod --config /etc/mongod.conf\n> Aug 12 01:39:29 xxxxx.xxxxxx.net mongod[3002149]: {\"t\":{\"$date\":\"2023-08-11T23:39:29.560Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":7484500, \"ctx\":\"-\",\"msg\":\"Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding \\\"processManagement.fork\\\" to false\"}\nAfter=network.target mongod.service# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb2\n# journal:\n# enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n# logRotate: rename\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1,mongo0.xxxxxxx.com\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n# keyFile: /var/lib/mongo-security/keyfile.txt\n\n#operationProfiling:\n\n#replication:\n# replSetName: rs0\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.470+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"in\n> comingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.471+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.474+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpF\n> astOpenQueueSize.\"}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.487+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.te\n> nantMigrationDonors\"}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.487+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"confi\n> g.tenantMigrationRecipients\"}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.487+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantS\n> plitDonors\"}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.487+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.488+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":2999311,\"port\":27017,\"dbPath\":\"/var/lib/mongodb2\",\"architecture\":\"64-bit\"\n> ,\"host\":\"xxxxxx.xxxxxxx.net\"}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.488+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.8\",\"gitVersion\":\"3d84c0dd4e5d99be0d69003652313e7eaf4cdd74\n> \",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.488+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.488+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1,mo\n> ngo0.xxxxxxxxx.com\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb2\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":false,\"path\":\"/var/log/mongodb/mongod.\n> log\"}}}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.490+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongodb2\",\"storageEngine\":\"wired\n> Tiger\"}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.490+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mo\n> ngodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:45.490+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=31625M,session_max=33000,eviction=(threads_min=4,\n> threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=\n> 10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rt\n> s:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.061+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":1571}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.061+02:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.083+02:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrest\n> ricted\",\"tags\":[\"startupWarnings\"]}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.084+02:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":5123300, \"ctx\":\"initandlisten\",\"msg\":\"vm.max_map_count is too low\",\"attr\":{\"currentValue\":65530,\"recommendedMinimum\":102400,\"maxConns\":51200},\"\n> tags\":[\"startupWarnings\"]}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.087+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersi\n> on\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersio\n> n\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.087+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\n> \"}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.087+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.089+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.089+02:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/var/lib/mongodb2/diagnostic.dat\n> a\"}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.097+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, 
\"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStar\n> t\"}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.097+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.104+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.104+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n> {\"t\":{\"$date\":\"2023-08-12T01:37:47.104+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\nActive: RunningActive: activating",
"text": "Hello, I really struggle since days, running systemctl start mongod.service - as it always hangs up. I can cancle the command with ctrl+c. The funny thing is, that the mongodb server is reachable and I can filter, add and delete items. The problem is, that when I run sudo systemctl status --full --lines=50 mongod I get following output:● mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\nDrop-In: /etc/systemd/system/mongod.service.d\n└─mongod.conf\nActive: activating (start) since Sat 2023-08-12 01:39:29 CEST; 13s ago\nDocs: https://docs.mongodb.org/manual\nMain PID: 3002149 (mongod)\nMemory: 166.3M\nCGroup: /system.slice/mongod.service\n└─3002149 /usr/bin/mongod --config /etc/mongod.confAs you can see here, “Active” is remaining on “activating” and is not changing at all. That is why the command hangs, and that is, why other services, which should run with After=network.target mongod.service does not run at all, as mongod is never finishing the start up…Here is my config - pretty much default, I only changed the dbPath to create a new one.This is the log:It would be really appreciated, if someone of you knows how to fix that problem, that systemctl start will run completely and Active: Running is appearing instead of Active: activatingThank you in advanced!",
"username": "Markus_Elsner"
},
{
"code": "",
"text": "Try setting the fork option under processManagement.",
"username": "steevej"
}
] |
Systemd start hangs up, but mongod is functional
|
2023-08-12T00:09:02.908Z
|
Systemd start hangs up, but mongod is functional
| 612 |
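For reference, steevej's suggestion would look roughly like this in /etc/mongod.conf. Treat it as a sketch to test rather than a confirmed fix: the posted log shows MONGODB_CONFIG_OVERRIDE_NOFORK == 1 forcing fork back to false, so the outcome also depends on how the systemd unit is set up.

    # how the process runs
    processManagement:
      fork: true
      timeZoneInfo: /usr/share/zoneinfo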
null |
[] |
[
{
"code": "",
"text": "I use SQLSERVER and because of the high speed of Mongo DB, I want to change the program’s database.\nBut with the test I did, I found out that the speed of MongoDB in reading many records is not high and requires a lot of time and uses a lot of RAM memory.\nDo you agree?",
"username": "Hossein_Mahdavi"
},
{
"code": "",
"text": "Your question is way to generic.If your tests really reflects your use-cases and your tests indicates that one is better than the other, then use the better one. We cannot make this decision for you.uses a lot of RAM memory.Any server that requires performance should run on dedicated hardware/VM and should use all the resources available. If it does not, then your system is over provisioned or the server is not optimized. RAM is way faster than disk and everything that fits in RAM should be in RAM.",
"username": "steevej"
}
] |
The speed of reading many records is faster in MongoDB or SQLSERVER?
|
2023-08-13T10:17:58.951Z
|
The speed of reading many records is faster in MongoDB or SQLSERVER?
| 265 |
null |
[
"swift"
] |
[
{
"code": " func savePOIs(pois:[POI], implan: String) { <<<That work good>>>\n autoreleasepool {\n // check there are results to store or return\n guard pois.count > 0 else { return }\n \n // build the set of new pois\n var newPois: Set<String> = []\n for p in pois { newPois.insert(p.uid ?? \"\")}\n\n // delete only the pois in the database which are not in the set\n // write all new pois\n DispatchQueue.global(qos: .background).async {\n let realmBackground = try! Realm()\n for oldPoi in realmBackground.objects(POI.self).filter(\"name_implan == %@\", implan) {\n if !newPois.contains(oldPoi.uid ?? \"\") {\n try! realmBackground.write { realmBackground.delete(oldPoi) }\n }\n }\n }\n\n // keep the uis of the ephemeral sorting order\n let oldSortingOrder = realm.objects(POI.self).filter(\"ephemeral_sorting_order > 0\")\n var dicSortingOrder: [Int: Int] = [:]\n for poi in oldSortingOrder {\n dicSortingOrder[poi.ephemeral_sorting_order] = poi.ephemeral_sorting_order\n }\n\n // write all new pois\n DispatchQueue.global(qos: .background).async {\n let realmBackground = try! Realm()\n try! realmBackground.write {\n for poi in pois { realmBackground.add(poi, update: .all) }\n }\n try! realmBackground.write {\n for (uid, order) in dicSortingOrder {\n if let poi = realmBackground.objects(POI.self).filter(NSPredicate(format: \"ephemeral_sorting_order = %d\", uid)).first {\n poi.ephemeral_sorting_order = order\n }\n }\n }\n }\n\n let finalSize = realm.objects(POI.self).count\n print(\"\\(LOG_TAG) - savePOIs - final size (all implans) = \\(finalSize), new pois = \\(pois.count), \")\n\n // prefetch images for all pois\n if Utils.hasInternetConnection() { ImageClient.sharedInstance.prefetchAllImages(prefetchType: .pois, implan: implan) }\n }\n }\nfunc saveSmallPOIs(pois: [SmallPOI], implan: String) { <<<That throw error>>>\n autoreleasepool {\n // check there are results to store or return\n guard pois.count > 0 else { return }\n \n // build the set of new pois\n var newPois: Set<String> = []\n for p in pois { newPois.insert(p.smallPOIUID ?? \"\")}\n\n // delete only the pois in the database which are not in the set\n // write all new pois\n DispatchQueue.global(qos: .background).async {\n let realmBackground = try! Realm()\n for oldPoi in realmBackground.objects(SmallPOI.self).filter(\"name_implan == %@\", implan) {\n if !newPois.contains(oldPoi.smallPOIUID ?? \"\") {\n try! realmBackground.write { realmBackground.delete(oldPoi) }\n }\n }\n }\n // keep the uis of the ephemeral sorting order\n let oldSortingOrder = realm.objects(SmallPOI.self).filter(\"ephemeral_sorting_order > 0\")\n var dicSortingOrder: [Int: Int] = [:]\n for poi in oldSortingOrder {\n dicSortingOrder[poi.ephemeral_sorting_order] = poi.ephemeral_sorting_order\n }\n \n // write all new pois\n DispatchQueue.global(qos: .background).async {\n let realmBackground = try! Realm()\n \n try! realmBackground.write {\n for poi in pois { realmBackground.add(poi, update: .all) }\n }\n try! 
realmBackground.write {\n for (uid, order) in dicSortingOrder {\n if let poi = realmBackground.objects(SmallPOI.self).filter(NSPredicate(format: \"ephemeral_sorting_order = %d\", uid)).first {\n poi.ephemeral_sorting_order = order\n }\n }\n }\n }\n \n let finalSize = realm.objects(SmallPOI.self).count\n print(\"\\(self.LOG_TAG) - savePOIs - final size (all implans) = \\(finalSize), new pois = \\(pois.count), \")\n\n // prefetch images for all pois\n if Utils.hasInternetConnection() { ImageClient.sharedInstance.prefetchAllImages(prefetchType: .smallPOIs, implan: implan) }\n }\n }\n",
"text": "I have 2 almost identical methods, they differ only in the model that is stored in the realm. And for some reason, one of them manipulates its model without problems, while the other throws me the error “Realm accessed from incorrect thread”. I can’t figure out what the problem is.",
"username": "Igor_Stasiv"
},
{
"code": "",
"text": "There are two different Realm models being used so understanding what those look like may help.Also note that throwing an error often times happens on a specific line of code - knowing what line that is may also help.Lastly, doing some basic troubleshooting may reveal more info: add a breakpoint and step through the code line by line, inspecting the vars and code execution until you spot something unexpected. Then provide those details.",
"username": "Jay"
},
{
"code": "for oldPoi in realmBackground.objects(SmallPOI.self).filter(\"name_implan == %@\", implan) {",
"text": "for oldPoi in realmBackground.objects(SmallPOI.self).filter(\"name_implan == %@\", implan) {I already tried setting a breakpoint before and it didn’t help me at all. The model returns an error in the string\nfor oldPoi in realmBackground.objects(SmallPOI.self).filter(“name_implan == %@”, implan)this line uses a model created in the background in exactly the same way as in the other method, but in the method with an error the model for some reason crashes when trying to get data and the error says that the model is called from the wrong queue, I think it is not privileged to use it from the main queue because it freezes the interface even if it was working, but it is not.",
"username": "Igor_Stasiv"
},
{
"code": " DispatchQueue.global(qos: .background).async {\n let realmBackground = try! Realm()\n for oldPoi in realmBackground.objects(SmallPOI.self).filter(\"name_implan == %@\", implan) {\n if !newPois.contains(oldPoi.smallPOIUID ?? \"\") {\n try! realmBackground.write { realmBackground.delete(oldPoi) }\n }\n }\n }\nlet realmBackground = try! Realm()\nlet implanResults = realmBackground.objects(SmallPOI.self).filter(\"name_implan == %@\", implan)\nfor implan in implanResults {\n print(implan)\n}\n",
"text": "What happens if you do this as a test; comment out all of the above and replace it with thisAnd see if it crashes and if so, on what line.",
"username": "Jay"
},
{
"code": "DispatchQueue.global(qos: .background).asynctry await realm.asyncWrite",
"text": "Had another thought/questions as well. Why are you using DispatchQueueDispatchQueue.global(qos: .background).asyncwhen Realm support asynchronous writes on background threads?try await realm.asyncWriteUsing the asyncWrite does not block or perform I/O on the calling thread so it won’t block your UI.Take a look at Actor Isolated Realms.",
"username": "Jay"
},
{
"code": "DispatchQueue.global(qos: .background).asynctry await realm.asyncWrite",
"text": "I used DispatchQueue.global(qos: .background).async because I got an old project that already used realm with a similar style and since it was my first experience with realm I, not knowing all the SDK capabilities, just kept creating new ones methods according to this sample. They all worked until the case I presented earlier. (I found what the problem was and now it works, long story short: I called the realm method after an internet request, forgetting to transfer the queue -_-).Now I dug a little deeper into the documentation of your SDK and found a description of the feature you proposed try await realm.asyncWrite. Tried refactoring one of the methods using Task {} so as not to turn the whole program into async code, and surprisingly that works too. Over time, I will redo the rest of the methods in this way.\nThanks for the advice.",
"username": "Igor_Stasiv"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Realm accessed from incorrect thread
|
2023-08-07T15:01:00.951Z
|
Realm accessed from incorrect thread
| 1,035 |
null |
[
"node-js",
"mongoose-odm"
] |
[
{
"code": "const productSchema = new mongoose.Schema(\n {\n name: { type: String, required: true, unique: false },\n slug: { type: String, required: true, unique: false },\n category: { type: String, required: true, unique: false },\n image: { type: String, required: true, unique: false },\n price: { type: Number, unique: false },\n countInStock: { type: Number, required: true, unique: false },\n brand: { type: String, required: true, unique: false },\n rating: { type: Number, required: true, unique: false },\n reviews: { type: Number, required: true, unique: false },\n description: { type: String, required: true, unique: false },\n },\n {\n timestamp: true,\n }\n);\n\nMongoBulkWriteError: E11000 duplicate key error collection: react-store.Products index: price_1 dup key: { price: 123 } ",
"text": "i have the following schema:Even though i’ve set the unique: false property everywhere, i always get MongoBulkWriteError: E11000 duplicate key error collection: react-store.Products index: price_1 dup key: { price: 123 } \nHow does that even happen? There is no restraint on the unique, I don’t want price to be unique and i’ve explicitly said so in the schema.\nI’m losing my mind here",
"username": "rygel_hyn"
},
{
"code": "> db.Products.dropIndex('price_1') ",
"text": "FIXED IT!!! Oh my GOD What a disgusting error it turns out Mongoose doesn’t remove existing indexes so you’ll need to explicitly drop the index to get rid of it. In the shell:\n> db.Products.dropIndex('price_1') What a miserable experience this has been i was losing it! If you initially set it to unique:true you can’t set it to false anymore. It stays as true forever until you drop the index manually from the shell and",
"username": "rygel_hyn"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Constant duplicate key error even when unique is set to false in the schema
|
2023-08-15T07:55:14.699Z
|
Constant duplicate key error even when unique is set to false in the schema
| 411 |
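The same cleanup can be done from a script instead of the shell; a small pymongo sketch (the database and collection names are taken from the error message above, the connection string is a placeholder):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
    products = client["react-store"]["Products"]

    # List existing indexes to confirm the leftover unique index is still present.
    print(products.index_information())  # expect an entry named 'price_1'

    # Drop the stale unique index that Mongoose created earlier and never removed.
    products.drop_index("price_1")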
[
"java",
"android"
] |
[
{
"code": "",
"text": "Started a new project and wanted to use realm, works on IOS but if I want to build on a android to help with development I keep getting this error\nimage2523×179 45.5 KB\nI even tried some template with realm but get the same error.Any help is appreciated.",
"username": "JoaoNMBernardo"
},
{
"code": "",
"text": "New user so can’t post more than 1 image per postReally new to RN and realm. I’ll post my package.json and how I’m doing my db\nimage499×1193 86.1 KB\n",
"username": "JoaoNMBernardo"
},
{
"code": "",
"text": "\nimage664×1056 73.1 KB\n",
"username": "JoaoNMBernardo"
},
{
"code": "",
"text": "I am having same problem. Did you find any solution ?",
"username": "JiuJitsuBOX_N_A"
}
] |
Error on RN project startup if using realm (java.lang.UnsatisfiedLinkError: couldn't find DSO to load)
|
2023-01-20T10:09:37.330Z
|
Error on RN project startup if using realm (java.lang.UnsatisfiedLinkError: couldn’t find DSO to load)
| 2,291 |
|
[] |
[
{
"code": "",
"text": "Pls i have an issue with my connection url on mongodb that’s what i think ao though, pls can check the images provided below for better understanding\n\nIMG-20230808-WA00021049×269 66.2 KB\n",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "You might want to post the code with passwords removed that show how you’re connecting. Most likely there’s a problem with your connection string.Also not sure if this is your problem but with the password section of your uri string, it’s safer to wrap it in encodeURIComponent in case you have any special charactersencodeURIComponent(process.env.MONGODB_PASSWORD)",
"username": "Justin_Jaeger"
},
{
"code": "",
"text": "Eeii pls if u can break it down small especially what u said last",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "I don’t get what u said here “You might want to post the code with passwords removed that show how you’re connecting”, I know that am meant to input my password in the connection string with username though, so putting password in it can affect it how it would connect right?",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "I was also talking about the last line the encode something u said, I understand when it is been used but I don’t know how or where it should be put in my connection string or my code",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "My recommendation is, go back to the very beginning and follow the instructions step by step. Start here:Otherwise you’ll keep getting stuck. I’ve been there, it’s tough in the beginning. YouTube walkthroughs can help as well. But I promise all of the answers are in the documentation – MongoDB’s is very good – and the best skill you can develop is learning how to read through them",
"username": "Justin_Jaeger"
}
] |
Mongodb connection url
|
2023-08-09T16:43:11.103Z
|
Mongodb connection url
| 835 |
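The escaping advice above is not specific to Node.js. As a hedged illustration, the same idea in Python percent-encodes the username and password before they go into the URI (credentials and host below are placeholders):

    from urllib.parse import quote_plus
    from pymongo import MongoClient

    username = quote_plus("myUser")                   # placeholder credentials
    password = quote_plus("p@ss/word:with#specials")  # special characters get escaped
    host = "cluster0.example.mongodb.net"             # placeholder Atlas host

    uri = f"mongodb+srv://{username}:{password}@{host}/?retryWrites=true&w=majority"
    client = MongoClient(uri)
    print(client.admin.command("ping"))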
|
null |
[
"aggregation",
"transactions",
"views"
] |
[
{
"code": "a1: {\n _id: 1,\n b: { },\n c: { }\n},\na2: {\n _id: 2,\n b: { },\n c: { }\n}\na_collection.aggregate([{\n b: { $exists: true } // match\n}, {\n $lookup: {\n from: \"b_collection\",\n localField: \"b\", // overly simplified example, b on the local doc would be an id\n foreignField: \"_id\"\n as: \"b\"\n },\n $merge: {\n into: \"a_collection\",\n on: \"_id\"\n }\n}])\n",
"text": "Hi all,\nI’m attempting to build a distributed, denormalization service. My concerns are centered around concurrency. I have collections a, b, and c. The relationships between a and b/c are many to many. I want to write the contents of b and c into a in a structure like this:The pipeline I would like to write would look something like:My understanding is that writes to single documents using findAndUpdate are atomic and optimistically concurrent but I’m looking for a findManyAndUpdateMany that would provide the same guarantees.I’ve tried using an aggregation pipeline with a $lookup and $merge operator (but the documentation state that aggregation pipelines only seem to apply a read (intent shared) lock). Does the $merge operator apply a write (intent exclusive) lock? Does it apply it only at the final stage or does Mongo detect an update and apply the lock to all the docs that are match in the aggregation pipeline? Or does it apply locks as it’s processing each individual document in the pipeline?I’ve also tried using an update() but I can’t use $lookup and so I’m not able to lookup from b and c in one atomic operation.Is there another approach I should be taking? Should I just use a transaction? Is there a way for me to explicitly/implicitly apply different locks?Thanks in advance!",
"username": "Nathan_Toung"
},
{
"code": "",
"text": "To clarify, there would be c version of the same aggregation pipeline that would run in parallel. In theory, one of the lookups could have stale data and overwrite the update of the other pipeline.",
"username": "Nathan_Toung"
}
] |
Concurrency with Aggregation Pipelines
|
2023-08-14T20:19:27.999Z
|
Concurrency with Aggregation Pipelines
| 484 |
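For readers following along, here is the pseudocode pipeline from the question written out as a runnable pymongo aggregation; this only makes the $match / $lookup / $merge shape concrete and says nothing new about the locking behaviour being asked about (collection and field names are the poster's simplified ones, the connection string is a placeholder):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
    db = client["test"]                                # assumed database name

    pipeline = [
        # Only denormalize documents that actually reference a b document.
        {"$match": {"b": {"$exists": True}}},
        # Pull the referenced b document(s) into the a document.
        {"$lookup": {
            "from": "b_collection",
            "localField": "b",
            "foreignField": "_id",
            "as": "b",
        }},
        # Write the enriched documents back onto a_collection, matching on _id.
        {"$merge": {
            "into": "a_collection",
            "on": "_id",
            "whenMatched": "merge",
            "whenNotMatched": "discard",
        }},
    ]
    db["a_collection"].aggregate(pipeline)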
null |
[] |
[
{
"code": "",
"text": "I have the following schema:\nunique_doc_id: {\ntitle: title1,\ndesc: desc\nlistField:\nmapField:\n}I have a system1which hosts an API used to take inputs for above document…then notifies system2 using an unordered queue which is supposed to update this document in the DB.I want to maintain field level timestamps for concurrent updates on same field for the following scenario:\nLet’s say someone calls API at T1 with title1, at T2 (T1 + 1ms) with title2…but the updates reach system2 out of order i.e. the T1 update reaches AFTER T2 update. i.e. let’s say the write of title2 happens first, so, when the title1 update reaches system2, that write should fail…I beleive this can be done with field level timestamps…Does mongoDB have support for this ??\nAny doc links / example code chunks would help.Thanks!",
"username": "Yash_Verma2"
},
{
"code": "",
"text": "Could have a last updated field or checksum stored in the document, when you write you pass in the last known checksum. If it’s different then you know something else has modified it since your last read and can deal with it appropriately.",
"username": "John_Sewell"
},
{
"code": "db.books.insertOne({\n title: 'title0',\n modificationsReceived: 1691360130000\n});\ndb.books.updateOne(\n { \n modificationsReceived: {\n $lt: 1691360130002\n },\n },\n {\n $set: {\n title: 'title2',\n modificationsReceived: 1691360130002\n }\n }\n);\ndb.books.updateOne(\n { \n modificationsReceived: {\n $lt: 1691360130001\n },\n },\n {\n $set: {\n title: 'title1',\n modificationsReceived: 1691360130001\n }\n }\n);\n",
"text": "Hello, @Yash_Verma2 and welcome to the community! To solve your problem, you can add a field in your document, that would represent the timestamp when the update command is received. Update the document only if your update command has the greater timestamp.Example dataset:If the following command reach the database first:Then this one will not make any changes to your data",
"username": "slava"
},
{
"code": "",
"text": "Using a timestamp to order the events can work most of the time, but not 100% guaranteed.This is because wall clock time is not reliable and you can have clock drift in a distributed system. To achieve 100% guarantee, you will have to create your own virtual clock for ordering.Check my comments in I am curious about how $currentDate works - #3 by YounWan_Kim",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thank you. I will try this one using java driver",
"username": "Vasu_Bathina"
}
] |
Concurrent update to same field in mongodb
|
2023-08-06T08:24:19.139Z
|
Concurrent update to same field in mongodb
| 517 |
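The guard described above maps directly onto a conditional update in any driver. Here is a hedged pymongo sketch following the same example data (collection name and connection string are assumptions); checking modified_count tells the caller whether a stale, out-of-order update was rejected:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
    books = client["test"]["books"]                    # collection from the example above

    def apply_update(received_at_ms: int, new_title: str) -> bool:
        """Apply the update only if no newer modification has already been stored."""
        result = books.update_one(
            {"modificationsReceived": {"$lt": received_at_ms}},
            {"$set": {"title": new_title, "modificationsReceived": received_at_ms}},
        )
        return result.modified_count == 1

    apply_update(1691360130002, "title2")         # the newer T2 update arrives first and wins
    print(apply_update(1691360130001, "title1"))  # the late T1 update matches nothing -> False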
null |
[
"field-encryption"
] |
[
{
"code": "",
"text": "I’m studying queryable encryption and trying to understand how it works, but the MongoDB documentations are about how to use it, it doesn’t mention much about how exactly it works. I wonder if the source code is available or if there are any white books or something to explain it in detail.",
"username": "Yun_WANG"
},
{
"code": "",
"text": "Hello Yun Wang - You should see more information coming out about it very soon.Cynthia",
"username": "Cynthia_Braund"
}
] |
Is the source code of queryable encryption publicly available
|
2023-08-14T16:37:02.492Z
|
Is the source code of queryable encryption publicly available
| 454 |
null |
[
"golang",
"field-encryption"
] |
[
{
"code": "AWS_WEB_IDENTITY_TOKEN_FILE",
"text": "Hi, I’m trying to set up CSFLE with AWS KMS using the go driver. As far as I can see, libmongocrypt only accepts setting up credentials by passing access key/secret through the AWS kms provider options.\nIn our org we use EKS and try to not be setting credentials explicitly but rather use the EKS integration for using roles for pods automatically through Service Accounts, which sets up the AWS_WEB_IDENTITY_TOKEN_FILE in each pod for assuming roles (docs here: Configuring Pods to use a Kubernetes service account - Amazon EKS)Is there any way for making AWS KMS provider to work with this built into the driver? Or plans for doing that?In my opinion, it would be great because it can enforce better security practices, and having the driver manage assuming the role and session refreshing would abstract a lot of hassle from the client-side code.Thanks!",
"username": "Alejo_Abdala"
},
{
"code": "",
"text": "Hello Aleja_Abdala,Welcome to the MongoDB Community! In the CSFLE AWS tutorial here, under the Grant Permissions section, there are instructions on how to use an IAM role instead of a user, which will support assume roles. It is in the 2nd yellow box titled “Authenticate with IAM Roles in Production”. I hope that helps.Cynthia",
"username": "Cynthia_Braund"
},
{
"code": "",
"text": "Thanks for the response Cynthia! I’ve seen that section, but it leaves it to the end user to refresh credentials and recreate clients every time.\nIf that’s that, I understand the limitation and can share my feedback on it: as a user I’d find it very convenient that the mongo driver used the same environment variables as the AWS SDKs (ideally requiring no passage of credentials if exported in the environment). It would be great if it also handled credentials refreshing (which I think - I haven’t checked all of them - AWS SDK’s already handle automatically when using AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_ARN env vars).Thanks!",
"username": "Alejo_Abdala"
},
{
"code": "",
"text": "Hi Alejo,We do have support for automatic refresh, meaning the client doesn’t need to refresh credentials or recreate the clients. It is possible that you are using an older driver version that does not yet support the auto refresh. Are you using the latest version of the driver?Cynthia",
"username": "Cynthia_Braund"
},
{
"code": "",
"text": "Oh, nice, thanks! could you link me to the docs for setting that up?I haven’t tried it myself and assumed it didn’t work like that as the documentation you linked in the previous message states:Your application must include logic to get new temporary credentials and recreate your CSFLE-enabled MongoClient instance when each set of temporary credentials expires.",
"username": "Alejo_Abdala"
},
{
"code": "{ \"accessKeyId\":\"<temporary access key ID>\", \"secretAccessKey\":\"<temporary secret access key>\", \"sessionToken\":\"<temporary session token>\" }",
"text": "Hi Alejo,Thank you for pointing that out. I thought we had removed that from the documentation already and will get that fixed. You should be able to set it up just by using providing the elements listed in the KMS Provider object\n{ \"accessKeyId\":\"<temporary access key ID>\", \"secretAccessKey\":\"<temporary secret access key>\", \"sessionToken\":\"<temporary session token>\" }\nMake sure to get the most up to date driver versions to ensure that it has the support. Try it out and let me know how it goes.Thanks,Cynthia",
"username": "Cynthia_Braund"
},
{
"code": "DEBU[0000] Using mongo driver version: v1.12.1 \nINFO[0011] 08-11-2023 11:43:05.317336 => GET /configurations/3f488da7-1427-4c8b-acf5-6e2782db2c35/versions/0 from: 127.0.0.1:56262 RequestId=ded44e43-3fdb-4062-a172-02a98b6874f2\nDEBU[0012] decoding key: /password owner: name: config-with-secret, version: 0, scope[], labels[] RequestId=ded44e43-3fdb-4062-a172-02a98b6874f2\nINFO[2520] 08-11-2023 13:05:43.187757 => GET /configurations/3f488da7-1427-4c8b-acf5-6e2782db2c35/versions/0 from: 127.0.0.1:59922 RequestId=0a229383-bf64-4bdd-93ad-133e56b3642f\nDEBU[2521] decoding key: /password owner: name: config-with-secret, version: 0, scope[], labels[] RequestId=0a229383-bf64-4bdd-93ad-133e56b3642f\nDEBU[2524] mongo version.Driver: v1.12.1 RequestId=0a229383-bf64-4bdd-93ad-133e56b3642f\nDEBU[2524] failed to fetch secret mongocrypt error 1: Error in KMS response. HTTP status=400. Response body=\n{\"__type\":\"ExpiredTokenException\",\"message\":\"The security token included in the request is expired\"} <nil> RequestId=0a229383-bf64-4bdd-93ad-133e56b3642f\n",
"text": "Thanks Cynthia, I’ve tried it out as instructed and couldn’t get it to work as I’d understand it should. I’ll be specific about the setup, my understanding about expectations and outcomes.Steps 0-3 work as expected, I can confirm the temporary credentials and CSFLE related workflows work well during the first <60 mins (session expiration), though step 5 fails.Here is an excerpt from the application logs which reflect the error I’m facing, I’d understand my assumption of refreshing credentials is wrong and that’s not happening.Perhaps I’m missing some step on how to configure the driver for auto-refreshing?\nThanks",
"username": "Alejo_Abdala"
},
{
"code": "// Passing an empty document results in the driver fetching AWS credentials from the environment.\nkmsProviders := map[string]map[string]interface{}{\n \"aws\": {},\n}\n\n\n",
"text": "Hi Alejo,You were correct in your understanding, I just pointed you to the outdated part of the docs. Instead of providing the accesskeyid, secretaccesskey and session token use this instead.That tells the driver to fetch new credentials and will not require a restart.Thank you again for pointing out the outdated text in the docs and let me know how it goes.Cynthia",
"username": "Cynthia_Braund"
},
{
"code": "",
"text": "Hey Cynthia, thanks for following up with this. That works perfectly well! Thanks for the help.",
"username": "Alejo_Abdala"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
CSFLE with AWS KMS using AWS_WEB_IDENTITY_TOKEN_FILE
|
2023-08-10T15:09:25.975Z
|
CSFLE with AWS KMS using AWS_WEB_IDENTITY_TOKEN_FILE
| 671 |
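For anyone reading this thread from a different driver: the "empty AWS provider map" resolution above has an equivalent in PyMongo's client-side encryption options. This is only a hedged sketch; it assumes a driver and pymongocrypt version with on-demand AWS credential support, and the URI and key-vault namespace are placeholders:

    from pymongo import MongoClient
    from pymongo.encryption_options import AutoEncryptionOpts

    # An empty dict asks the driver to fetch AWS credentials from the environment
    # (for example via the web identity token file on EKS) instead of embedding keys.
    kms_providers = {"aws": {}}

    auto_encryption_opts = AutoEncryptionOpts(
        kms_providers=kms_providers,
        key_vault_namespace="encryption.__keyVault",   # placeholder namespace
    )

    client = MongoClient(
        "mongodb+srv://cluster0.example.mongodb.net",  # placeholder URI
        auto_encryption_opts=auto_encryption_opts,
    )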
null |
[
"aggregation",
"queries",
"java"
] |
[
{
"code": "{\n \"data\": [\n {\n \"MainId\": 1111,\n \"firstName\": \"Sherlock\",\n \"lastName\": \"Homes\",\n \"categories\": [\n {\n \"CategoryID\": 1,\n \"CategoryName\": \"Example\"\n }\n ]\n },\n {\n \"MainId\": 122,\n \"firstName\": \"James\",\n \"lastName\": \"Watson\",\n \"categories\": [\n {\n \"CategoryID\": 2,\n \"CategoryName\": \"Example2\"\n }\n ]\n }\n ],\n \"messages\": [], // blank json\n \"success\": true // boolean value\n}\nCategoryIDNote: each document structure might vary so need to find whether the key is present in the whole collection or not",
"text": "Let us take my document in collection looks like this bellowso i need to search whether the key exists CategoryID in the document or not , It is just an example my search key can be anything and can be nested to any levelwhat is an better and optimal way to find whether the key exists or not ?\nNote: each document structure might vary so need to find whether the key is present in the whole collection or noti can iterate through all the documents in the collection and recursively go as deep as inside to check the existing of the key but that is brute force solution how can i optimise it…?What i need is:",
"username": "Divakar_V1"
},
{
"code": "",
"text": "Hi @Divakar_V1 and welcome to MongoDB community forums!!i can iterate through all the documents in the collection and recursively go as deep as inside to check the existing of the key but that is brute force solution how can i optimise it…?As you are aware that Embedding documents in MongoDB gives you the flexibility to model the data in an efficient manner. Please correct me if my understanding of your use-case is not right or if I am missing something, your document schema has long nested levels which could make the use of dot notation mechanism to access the embedded documents difficult and as you mentioned would be a brute force method.To avoid this, you can make use of the Extended Reference Patten to model the data and test your use-case as per your requirements to see if this solves your purpose.Let us know if the above solution works for you.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "@Aasawari Thanks for your answer but i need to do without altering the document",
"username": "Divakar_V1"
}
] |
What is the better and optimal way to find the key (can be n-level nested ) is present in the mongodb collection?
|
2023-07-16T06:20:10.414Z
|
What is the better and optimal way to find the key (can be n-level nested ) is present in the mongodb collection?
| 605 |
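Two hedged sketches of the options discussed above, assuming a pymongo client (connection string and collection name are placeholders): a dot-notation $exists query when the nesting path is known, and a client-side recursive scan, the brute-force baseline the poster mentions, when the key can sit at any depth:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
    coll = client["test"]["items"]                     # placeholder collection

    # 1) Known path: dot notation walks nested objects and arrays.
    has_key = coll.find_one({"data.categories.CategoryID": {"$exists": True}}, {"_id": 1}) is not None
    print(has_key)

    # 2) Arbitrary nesting: recursive client-side scan (brute-force baseline).
    def contains_key(value, key):
        if isinstance(value, dict):
            return key in value or any(contains_key(v, key) for v in value.values())
        if isinstance(value, list):
            return any(contains_key(v, key) for v in value)
        return False

    print(any(contains_key(doc, "CategoryID") for doc in coll.find({})))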
null |
[
"compass",
"etl"
] |
[
{
"code": "",
"text": "I have been trying to connect my Talend 7.3 version to the MongoDB atlas but it is giving errors. However , I am able to connect the MongoDB Cluster using Compass.Error : “Timeout after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=… , type=UNKNOWN …}]}”\nI need to work in my current assignment to update and retrieve records from MongoDB atlas.This is very urgent , any help would be highly appreciated.Regards,\nKumar",
"username": "satya_kumar"
},
{
"code": "",
"text": "Hi @satya_kumar ,Welcome to the MongoDB community.It looks like you’ve connectivity issue specifically from Talend to Atlas. However, You’re able to connect to Atlas from your local.Based on that info, One of the reasons could be network settings of your Atlas cluster. You would have whitelisted your IP and wouldn’t have added Talend server IP or CIDR range that it flows traffic through.I think you can follow one of the two things here:Feel free to post here if you’re facing issues in any of above steps or still have connectivity issues from your Talend instance.All the best!",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Looks like Talend 7.3 version can’t connect to MongoDB Atlas , that’s what told by the Tech Support. Only Cloud version of Talend can connect to Atlas.If anyone has any workaround , please suggest.",
"username": "Satyendra_Kumar2"
}
] |
Talend 7.3 not able to connect to MongoDB Atlas
|
2023-04-03T18:37:28.132Z
|
Talend 7.3 not able to connect to MongoDB Atlas
| 967 |
null |
[] |
[
{
"code": "",
"text": "Hello,We are using MongoDB Community Edition on our production environment. I just wanna learn if it is legally safe to use it on a prod environment.Thanks in advance",
"username": "Ilker_Demirci"
},
{
"code": "",
"text": "In short , yes.If you’re creating a Public DBaaS offering you’d not be in compliance.Disclaimer: Not a lawyer and specifically not your lawyer.",
"username": "chris"
}
] |
A Question About Using Community Edition
|
2023-08-14T09:00:03.574Z
|
A Question About Using Community Edition
| 363 |
null |
[
"atlas-device-sync"
] |
[
{
"code": "",
"text": "Whether Realm sync can be work with IonicFramework?",
"username": "Es_Degan_N_A"
},
{
"code": "",
"text": "Hi and welcome to the Community.Currently we only support React Native (both iOS & Android) and Node.js (on MacOS and Linux) but we are considering adding support for Cordova/PhoneGap/Ionic as well, but we have no exact timelines as yet. Stay tuned here, and via @realm on Twitter too for any updates.",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "Hi Shane,\nThe ionic community is huge and it would be awesome to have Ionic integrate with Realm.\nWe look forward to it.\nAny news?",
"username": "Alan_Claudio_Melo"
},
{
"code": "",
"text": "Hi Alan - good timing. We’ve just completed a proof of concept with Ionic and we are looking forward to sharing that with the community very soon - stay tuned! And thanks for following up on this thread - I should have kept it updated!",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "Hi Shane,I look forward to it.Thank you for yoir reply.",
"username": "Alan_Claudio_Melo"
},
{
"code": "",
"text": "No problem - as soon as we can share details - I’ll update this post.",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "Hi ShaneAm also very interested in this development.\nAre you able to say whether you will be supporting cordova or capacitormany thanks\nDarren",
"username": "Darren_Hunter"
},
{
"code": "",
"text": "Hi @Shane_McAllister\nJust prompting on this query again. Are you guys making progress ? Any forward looking dates ? Are you using Cordova or Capacitor for the native interface ?\nthanks\nDarren",
"username": "Darren_Hunter"
},
{
"code": "",
"text": "Hi @Darren_Hunter - apologies for the delay - was on leave. Yes, we’ve been making progress here and are aiming to release sample app and demos in September.",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "thanks for the update - you didn’t mention if your solution was using Cordova or Capacitor (or both maybe) - would you be able to let us know thanks",
"username": "Darren_Hunter"
},
{
"code": "",
"text": "Hi @Darren_HunterAttached you’ll find an Ionic Web React App sample that uses Realm to login (you’ll have to register a user). This looks like a Capacitor App. Developed by the Ionic team with our help. A blog post and a decent repo with instruction should follow, but wanted to give you something to work with.Hope this helps you.Google Drive file.",
"username": "Diego_Freniche"
},
{
"code": "",
"text": "@Darren_Hunter Ionic have published their Blog Building an app with Ionic React and Realm - Ionic Blog - and we will be following up shortly with taking this further to iOS & Android",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "We should be in a position to publish this blog next week.",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "Any plans to add Ionic Vue and Realm?",
"username": "Gerald_Owens"
},
{
"code": "",
"text": "Hi McAllister,Any news about Realm with Ionic?\nI searched the Blog and didn’t find any publication as you indicated.\nIonic is a framework that hj runs under Angular, Vue and React, I hope the solution is universal and not just for React.\nI eagerly await information.\nThanks",
"username": "Alan_Claudio_Melo1"
},
{
"code": "",
"text": "@Alan_Claudio_Melo1 Apologies - it got a little waylaid! But no fear, it’s in final review and will be out very shortly.",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "Looking forward to it.\nI have a project stopped, waiting for this solution.\nI don’t want to use SQLite or another solution, Realm is perfect for use with MongoDB.I await brief news, thank you.",
"username": "Alan_Claudio_Melo1"
},
{
"code": "",
"text": "Awesome to hear. However can you give and estimate of “shortly”. Is this days, weeks or months?",
"username": "Gerald_Owens"
},
{
"code": "",
"text": "Hi @Gerald_Owens, Diego here, working on aforementioned postWe aim for it to be published this week, or next one. But take this with a grain of salt: this is a developer’s estimation and you know…Thanks for your patience and interest.",
"username": "Diego_Freniche"
},
{
"code": "",
"text": "@Gerald_Owens @Alan_Claudio_Melo1 @Es_Degan_N_A @Darren_HunterHi all - glad to add that we’ve published Let’s Give Your Realm-Powered Ionic Web App the Native Treatment on iOS and Android! | MongoDB which builds upon Ionic’s previous post with Realm. Please do check it out, and let us know what you think.",
"username": "Shane_McAllister"
}
] |
Whether Realm sync can be work with IonicFramework?
|
2020-09-06T04:38:29.852Z
|
Whether Realm sync can be work with IonicFramework?
| 14,652 |
null |
[
"node-js",
"mongoose-odm"
] |
[
{
"code": "users.insertone()",
"text": "Hi guy i get this error code can you help me?this is my source codes = GitHub - widror31/carrental1",
"username": "Ryan_Gosling"
},
{
"code": "users.insertone()users.insertOne()10000ms",
"text": "Hey @Ryan_Gosling,Thank you for reaching out to the MongoDB Community forums Mongoose Error: operation users.insertone() buffering timed out after 10000mThe shared error message indicates that the operation users.insertOne() is getting timed out after 10000ms, which means the connection to MongoDB is not being established correctly or is taking too long to process. You can refer to a similar thread here:Hope it helps. In case you have any further questions or concerns, feel free to reach out.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "execute: async () => {\n\t\tlet timeout = 25;\n\t\twhile (mongoose.connection.readyState === 0) {\n\t\t\tif (timeout === 0) {\n\t\t\t\tconsole.log('timeout');\n\t\t\t\tthrow new Error(\n\t\t\t\t\t'timeout occured with mongoose connection',\n\t\t\t\t);\n\t\t\t}\n\t\t\tawait mongoose.connect(mongoToken, {\n\t\t\t\tuseNewUrlParser: true,\n\t\t\t\tuseUnifiedTopology: true,\n\t\t\t});\n\t\t\ttimeout--;\n\t\t}\n\t\tconsole.log(\n\t\t\t'Database connection status:',\n\t\t\tmongoose.connection.readyState,\n\t\t);\n\t},\n",
"text": "In case you didnt find a fix. I had same issue. I could have identical code and sometimes the connection would just drop out. I solved this by making a new file the checks connections and if its dropped it reconnects.",
"username": "Human_N_A"
},
{
"code": "",
"text": "A post was split to a new topic: AtlasError:8000 and buffering timeout on my query",
"username": "Kushagra_Kesav"
}
] |
Mongoose Error: operation `users.insertone()` buffering timed out after 10000m
|
2023-06-08T10:10:14.095Z
|
Mongoose Error: operation `users.insertone()` buffering timed out after 10000m
| 2,131 |
null |
[
"c-driver"
] |
[
{
"code": "",
"text": "async api would be more effective, and less threads. why not supply in mongo c driver?there are async api in mongo java driver",
"username": "kuku_super"
},
{
"code": "",
"text": "Hi @kuku_super, thanks for reaching out!An async API on the C driver has been considered before. It is not outside the realm of possibility. Notably, the investigation in CDRIVER-27 concluded the current multi-threaded API could maintain throughput well as the number of application threads increased.Do you have a use case in mind for an async API? Has the number of threads spawned by libmongoc been problematic?Sincerely,\nKevin",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "Hi @Kevin_Albertson , thx for ur reply.if changed the request waits 0 - 2 ms, then we compared the thread pool solution and the async solution with 1 cpu. the async solution would give more qps, more than 20% better.more quickly the request, there would be more context switch.",
"username": "kuku_super"
},
{
"code": "",
"text": "hi, @Kevin_Albertson, give a use case in game server.we service 10k player in a server process with only one cpu. every player would save data to mongodb once 1 second. so , there would be 10k qps ops. suppose 10ms per op, if we use thread pool solution, we need 1000 threads only one cpu. we would deploy 5-6 processes on one server with 8 cpus. the context switch would cost too much cpu.so async solution would be better.would async solution be considered again?look forward to your reply , @Kevin_Albertson.",
"username": "kuku_super"
},
{
"code": "",
"text": "Hi, sorry for bumping the thread.I’m a maintainer of drogon framework. We’ve investigated MongoDB support a while ago. But stopped by the fact that the C API does not support async mode. The main reason being we want to keep the high concurrency all time in our framework and reduce context switches. From our experience, running DB on the same thread as the HTTP request can improve throughput up to 20%. And we try hard to keep all threads not blocked under all circumstances. Otherwise performance tuning becoming a loose-loose game of balancing number of DB threads and dealing with the drawbacks of having too many threads.Hopefully the feedback can push this forward.",
"username": "marty1885"
},
{
"code": "",
"text": "Hello, just bumping this thread again. Are there some news here? Really surprised that the C driver has no async feature. Still nothing planned in a near future? As has been mentioned earlier, in a high concurrency context like can be for some web services, async model usually performs a lot better than managing a high number of threads. I would be worried to begin my developments with a sync model…\n@kuku_super @marty1885 do you have any feedback that you can share when using sync model with threads?\nThanks a lot!",
"username": "Michael_El_Baki"
},
{
"code": "",
"text": "Our application uses one worker thread per CPU core present on the system, with an interpreter which allows coroutinelike/greenthreadlike behaviour. On I/O that would block, the current request yields control back to the interpreter which services other requests whilst waiting for a response. With this model, blocking I/O introduces severe reliability issues where if a worker thread is blocked, its input queues are quickly overwhelmed and we start dropping packets. The only way we can work around this, is to have a pool of threads dedicated to performing blocking I/O. This introduces a significant performance penalty, and greatly increases code complexity. Supporting async I/O isn’t just about performance, it’s required when working within an event driven programming paradigm.",
"username": "Arran_Cudbard-Bell"
},
{
"code": "",
"text": "Hi all,I am the Product Manager for C/C++ drivers and developer experience. I wanted to take a moment to express my gratitude for everyone’s active participation and feedback in this discussion.We understand the importance of async/event driven paradigms in modern software development, and are committed to continuously improving our offerings to better meet the evolving needs of our users.Your inputs help us prioritise our efforts and align our roadmap with the features and enhancements that matter most to you. Rest assured, we are actively monitoring the feedback and votes on this suggestion, and your voice will play a significant role in shaping the future direction of our MongoDB C Driver.I’d highly encourage to cast your votes on the MongoDB Feedback Forum for this request - Asynchronous variant of MongoDB C Driver – MongoDB Feedback Engine .-Rishabh",
"username": "Rishabh_Bisht"
}
] |
Why not supply async api in mongo C driver?
|
2021-02-20T10:54:25.069Z
|
Why not supply async api in mongo C driver?
| 5,800 |
[
"replication",
"transactions"
] |
[
{
"code": "",
"text": "Hi all. Who worked with mongodb transactions/replications. I need your help.\nI configured my mongodb config file (1 screen)\nI’m getting the error “Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding \"processManagement.fork\" to false”I changed (screen 2) my mongo service file (/usr/lib/systemd/system/mongod.service)And changed the value of “MONGODB_CONFIG_OVERRIDE_NOFOR” to zero (it didn’t help). Then I just deleted the Environment value (didn’t help).Please help, it’s very important to me\n1585×722 53.5 KB\n",
"username": "Danish_M1_N_A"
},
{
"code": "",
"text": "Hi @Danish_M1_N_A,Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding “processManagement.fork” to falseAs mentioned here, i think it’ s enough remove this enviroment variable.From the documentation:Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "It doesn’t work for me. The error disappeared when I removed all authorization or/and keyFile from security",
"username": "Danish_M1_N_A"
}
] |
I'm getting the error "Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding \"processManagement.fork\" to false"
|
2023-07-31T13:16:44.239Z
|
I’m getting the error “Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding \”processManagement.fork\” to false”
| 1,121 |
|
null |
[
"atlas-cluster",
"php",
"field-encryption"
] |
[
{
"code": "<?php\nuse MongoDB\\Client;\n\n// Include the Composer autoloader to load the MongoDB PHP Library\nrequire_once __DIR__ . '/vendor/autoload.php';\n\n// Replace the placeholder with your actual Atlas connection string\n$uri = 'mongodb+srv://username:[email protected]/?retryWrites=true&w=majority';\n\n// Create a new client and connect to the server\n$client = new MongoDB\\Client($uri);\n\ntry {\n // Send a ping to confirm a successful connection\n $client->selectDatabase('dbname)->command(['ping' => 1]);\n echo \"Pinged your deployment. You successfully connected to MongoDB!\\n\";\n} catch (Exception $e) {\n printf(\"Failed to connect to MongoDB: %s\", $e->getMessage());\n}\nFailed to connect to MongoDB: No suitable servers found ( set): [connection closed calling hello on 'ac-3jxjrla-shard-00-00.fme2j91.mongodb.net:27017'] [connection closed calling hello on 'ac-3jxjrla-shard-00-01.fme2j91.mongodb.net:27017'] [connection closed calling hello on 'ac-3jxjrla-shard-00-02.fme2j91.mongodb.net:27017']|MongoDB extension version|1.14.0|\n|---|---|\n|MongoDB extension stability|stable|\n|libbson bundled version|1.22.0|\n|libmongoc bundled version|1.22.0|\n|libmongoc SSL|enabled|\n|libmongoc SSL library|OpenSSL|\n|libmongoc crypto|enabled|\n|libmongoc crypto library|libcrypto|\n|libmongoc crypto system profile|disabled|\n|libmongoc SASL|enabled|\n|libmongoc ICU|enabled|\n|libmongoc compression|enabled|\n|libmongoc compression snappy|disabled|\n|libmongoc compression zlib|enabled|\n|libmongoc compression zstd|enabled|\n|libmongocrypt bundled version|1.5.0|\n|libmongocrypt crypto|enabled|\n|libmongocrypt crypto library|libcrypto|\n",
"text": "Hi,I am trying to connect to my atlas cluster from my WordPress website. I am using the basic example recommended to me from my mongoDB atlas dashboard. Below is the code with swapped out username & password. After the code is the error I am receiving along with my phpinfo.I am able to connect to the cluster using other apps like react apps etc. There are no outgoing hosting firewalls or IP blocking from my cluster.ResultFailed to connect to MongoDB: No suitable servers found (serverSelectionTryOnce set): [connection closed calling hello on 'ac-3jxjrla-shard-00-00.fme2j91.mongodb.net:27017'] [connection closed calling hello on 'ac-3jxjrla-shard-00-01.fme2j91.mongodb.net:27017'] [connection closed calling hello on 'ac-3jxjrla-shard-00-02.fme2j91.mongodb.net:27017']PHP info",
"username": "Martin_Kariuki"
},
{
"code": "",
"text": "Any mongodb side log to share?looks like the server side closes the connection upon “hello”.",
"username": "Kobe_W"
},
{
"code": "",
"text": "I was unable to find any logs via cpanel. I have written to my hosting company and asked about this.",
"username": "Martin_Kariuki"
},
{
"code": "",
"text": "My host wrote me withI am sorry, but it is not possible to use MongoDB with our hosting plans. If you want to use mongodb the best solution is to use av VPS.So dead end to my little project I guess I should delete this post from the forum?Thanks",
"username": "Martin_Kariuki"
},
{
"code": "",
"text": "@Martin_Kariuki, welcome to the MongoDB Community forums!\nIt looks like the MongoDB Extension is up and running, and the PHP code can access the PHP driver and create an instance. Instead, it seems like a connection issue when connecting to your MongoDB instance.If you use Atlas, did you allow the server’s IP to access the cluster?Can you share which hosting company and server product/plan you’re using?I wrote an article about MongoDB PHP errors. There are a few tests that may be of help:https://www.mongodb.com/developer/languages/php/php-error-handling/\nLet me know,Thanks!",
"username": "Hubert_Nguyen1"
},
{
"code": "",
"text": "Hi Hubert,Thanks for your response! Turns out it was a hosting problem. Everything worked well after I switched to a new hosting provider ",
"username": "Martin_Kariuki"
},
{
"code": "",
"text": "Great! I can’t wait to see what you’re doing with WordPress and MongoDB. Keep us posted~",
"username": "Hubert_Nguyen1"
}
] |
Connecting to MongoDB from WordPress ( PHP )
|
2023-07-26T14:01:27.088Z
|
Connecting to MongoDB from WordPress ( PHP )
| 972 |
null |
[
"replication",
"java"
] |
[
{
"code": "",
"text": "Hi Team ,Mongo java Driver : 3.12.9 (storage engine : MMapV1) , mongo server - 3.6 , PSA replica set , journalling enabledMy application uses writeConcern “w:1” while writing to database. During some load condition, with FullGC trigerred intermittenly causing App Pauses , we observed that application upserts some records in the DB successfully but after few hrs when appln tries to read the same the record it is not available in the database.My Question: Does the “w:1” writeConcern not ensure the record is written to disk successfully ? is there a possiblity that record is created in memory but there can be failures while writing to the disk ?",
"username": "Udaya_Bhaskar_chimak"
},
{
"code": "",
"text": "Does the “w:1” writeConcern not ensure the record is written to disk successfully ?With journaling enabled, yes. However the disk write here means journal file, not db data file.is there a possiblity that record is created in memory but there can be failures while writing to the disk ?this is of course possible, hardware can fail at any time. That’s why journaling is needed for prod servers.Check out dirty pages and journaling concepts on Internet and you will notice the difference of their purposes.In terms of why they are not there after hours, i am not sure. Maybe disk is full ? maybe it fails to flush dirty data to disk file? Check error logs.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks Kobe for your reply.Can you please indicate what logs I can check for failures for dirtty data flushes.Udaya",
"username": "Udaya_Bhaskar_chimak"
},
{
"code": "w:1rollbackw:'majority'",
"text": "The most likely scenario for data loss with w:1 is because of a change of primary and subsequent rollback on the host that was previously primary.You can read more about them here. If you have a rollback directory under the datadirectory, you have encountered a rollback at some point.For best data durability w:'majority' is recommended.",
"username": "chris"
},
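To make the recommendation concrete, here is a hedged mongosh-style sketch (not part of the original reply; the collection name and document are hypothetical) of writing with majority write concern and journaling:

// Acknowledged only after a majority of replica set members have the write
// and it is journaled, so an acknowledged write survives a primary failover
// without being rolled back.
db.orders.insertOne(
  { orderId: 1, status: "created" },
  { writeConcern: { w: "majority", j: true, wtimeout: 5000 } }
);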
{
"code": "",
"text": "Thanks a lot Chris for your clarification !",
"username": "Udaya_Bhaskar_chimak"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Records missing after successful write to DB with writeConcern w:1
|
2023-08-09T17:41:40.447Z
|
Records missing after successful write to DB with writeConcern w:1
| 541 |
null |
[
"queries",
"dot-net",
"indexes"
] |
[
{
"code": "",
"text": "I am interacting with MongoDB using C# driver. It threw duplicate key error like below for one indexthe highlighted value 0x141214141a24 is not a actual value in the error message of location.path. how to get the actual value of it? (location.target is actual value I see)Message : “E11000 duplicate key error collection: Database.Collection index: location.path_1_location.target_1 collation: { locale: “en_US”, caseLevel: false, caseFirst: “off”, strength: 1, numericOrdering: false, alternate: “non-ignorable”, maxVariable: “punct”, normalization: false, backwards: false, version: “57.1” } dup key: { location.path: “0x141214141a24”, location.target: 457 }” } ].",
"username": "Souvik_Sardar"
},
{
"code": "location.target457location.path",
"text": "Hi Souvik,the highlighted value 0x141214141a24 is not a actual value in the error message of location.path. how to get the actual value of it? (location.target is actual value I see)Assuming that value you’ve noted doesn’t exist, would you mind advising how many documents have the location.target value of 457?Although the value of the field location.path looks like a hex value, from the error message I’m assuming it’s stored as a string type.Have you tried catching this duplicate key error in your code and handling it by e.g. printing the document on the logs when it happened?Lastly, can you provide the following information:Regards,\nJason",
"username": "Jason_Tran"
},
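As a hedged aside (not part of the original reply): because the index uses a collation, the duplicate key value is reported as an ICU sort key in hex rather than the original string. One way to surface the actual colliding values is to group under the same collation the index was built with; the collection name below is taken from the error message and everything else is illustrative:

// Group by location.path under the index's strength-1 (case/diacritic-insensitive)
// collation to find values that collide for the same location.target.
db.Collection.aggregate(
  [
    { $unwind: "$location" },
    { $match: { "location.target": 457 } },
    { $group: { _id: "$location.path", count: { $sum: 1 }, ids: { $push: "$_id" } } },
    { $match: { count: { $gt: 1 } } }
  ],
  { collation: { locale: "en_US", strength: 1 } }
);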
{
"code": "location.target457{\n \"target\" : 1,\n \"path\" : 1\n}\n{\n \"_id\" : \"635448f52be262cf567a6\",\n \"links\" : null,\n \"location\" : [ \n {\n \"path\" : 123,\n \"target\" : \"67555276\",\n }\n ],\n \"path\" : 123,\n \"entityid\" : null\n}\n",
"text": "Thank you @Jason_Tran for your response.would you mind advising how many documents have the location.target value of 457 ? - we have 868144 documents.Have you tried catching this duplicate key error in your code and handling it by e.g. printing the document on the logs when it happened? – I have not handled from code. it was thrown from C# mongo driver.The command which generated this error. - await collection.BulkWriteAsync(write);The MongoDB version in use. - 6.0.8",
"username": "Souvik_Sardar"
}
] |
How to get actual value for duplicate index MongoDB?
|
2023-08-12T15:19:52.257Z
|
How to get actual value for duplicate index MongoDB?
| 499 |
null |
[
"replication",
"mongodb-shell",
"migration"
] |
[
{
"code": "Unknown, Last error: connection() error occured during connection handshake mongomirror --host \"Replicaset-Name/host-0.domain.com:27017,host-1.domain.com:27017,host-2.domain.com:27017\" \\\n --username \"<username>\" \\\n --password \"<password>\" \\\n --authenticationDatabase \"admin\" \\\n --destination \"Replicaset-Name/mongodb-0.mongodb-svc.mongodb.svc.cluster.local:27017,mongodb-1.mongodb-svc.mongodb.svc.cluster.local:27017,mongodb-2.mongodb-svc.svc.cluster.local:27017\" \\\n --destinationUsername \"<username>\" \\\n --destinationPassword \"<password>\"\n mongomirror version: 0.12.5\n git version: 6e5a5489944845758420e8762dd5e5a89d2e8654\n Go version: go1.16.9\n os: linux\n arch: amd64\n compiler: gc\n 2022-08-16T16:53:29.353+0000\tSource isMaster output: {IsMaster:true MinWireVersion:0 MaxWireVersion:7 Hosts:[host-0.domain.com:27017 host-1.domain.com:27017 host-2.domain.com:27017}\n 2022-08-16T16:53:29.353+0000\tSource buildInfo output: {Version:4.0.12 VersionArray:[4 0 12 0] GitVersion:5776e3cbf9e7afe86e6b29e22520ffb6766e95d4 OpenSSLVersion: SysInfo: Bits:64 Debug:false MaxObjectSize:16777216}\n 2022-08-16T16:55:29.356+0000\tError initializing mongomirror: could not initialize destination connection: could not connect to server: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-1.mongo-svc.mongodb.svc.cluster.local:27017, Type: Unknown, Last error: connection() error occured during connection handshake: EOF }, { Addr: mongo-0.mongo-svc.mongodb.svc.cluster.local:27017, Type: Unknown, Last error: connection() error occured during connection handshake: EOF }, { Addr: mongo-2.mongo-svc.mongodb.svc.cluster.local:27017, Type: Unknown, Last error: connection() error occured during connection handshake: EOF }, ] }\n mongosh --host mongodb-1.mongodb-svc.mongodb.svc.cluster.local:27017 --username <username>telnet mongodb-1.mongodb-svc.mongodb.svc.cluster.local 27017",
"text": "I want to migrate a database from one cluster to another. The source and destination are withing the same VPC, but I keep getting Unknown, Last error: connection() error occured during connection handshake error.Output:I’ve tested the connection with mongosh. It has no issues while establishing the connection. mongosh --host mongodb-1.mongodb-svc.mongodb.svc.cluster.local:27017 --username <username>telnet mongodb-1.mongodb-svc.mongodb.svc.cluster.local 27017So what might I have missed here?Thanks!",
"username": "mhmtsvr"
},
{
"code": "",
"text": "Is your target an Atlas cluster?\nCheck below thread\nThe mongosh connection you have shown is connecting to only one node\nWhat about connection to replicaset?\nWhat does rs.status() show?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hello Mongo Support,We want to migrate our community mongo databases running on AWS to Atlas Cluster . Since it will be a different version movement(3.2 → 5.0) , We would like to make use of mongomirror utility to do a near to zero downtime migration.However the mongo dbs on AWS are running on Docker containers.\nWhat will be the exact syntax to specify the source host address ./mongomirror --host replsetname\\ip:port,ip:port,ip:port\n–username xxxx \n–password xxxx \n–authenticationDatabase admin \n–destination clusterxxx-shard-00-00.xxxx.mongodb.net:27017 \n–destinationUsername xxx \n–destinationPassword xxx \n–includeDB=xxx \n–ssl \n–forceDumpis giving me error\n2023-08-08T12:31:09.332+0000 Error initializing mongomirror: could not initialize source connection: could not connect to server: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: xxx:27017, Type: Unknown, Last error: connection() error occured during connection handshake: EOF }, ] }Any idea how to achieve this.\nThanks",
"username": "gargee_dutta"
},
{
"code": "",
"text": "What type is your cluster?\nThere are several restrictions like it cannot be free M0 or M2/M5 cluster\nAre you giving correct cluster string?\nFor host you should pass all your replica host names.In the target/destination also you gave single hostname\nPlease go thru this link",
"username": "Ramachandra_Tummala"
}
] |
Why can't mongomirror initiate connection to the destination source? [server selection timeout]
|
2022-08-17T14:16:31.514Z
|
Why can’t mongomirror initiate connection to the destination source? [server selection timeout]
| 2,865 |
null |
[
"php"
] |
[
{
"code": "",
"text": "Joomla doesn’t support MongoDB.Any recommendations for a PHP framework to use with MongoDB?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hello @Jack_Woehr , I am curious about what you would like to build. I haven’t looked at Joomla for many years, but I did use MongoDB within WordPress if that’s something which might be related to your question.",
"username": "Hubert_Nguyen1"
},
{
"code": "",
"text": "@Hubert_Nguyen1 that might indeed be an option. I’ll take a look at WordPress.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Awesome! Let me know if you bump into something, or want to exchange notes.",
"username": "Hubert_Nguyen1"
},
{
"code": "define('DB_HOST', 'mongodb://localhost:27017');\nmysqli",
"text": "Hi @Hubert_Nguyen1\nI am trying now to migrate from MySQL to MongoDB in educational purpose.On my WordPress local website I installed MongoDB, mongoDB PHP driver, but my WordPress still asks about MySQL PHP extension to be enabled. I am not good in PHP so not sure from which angle chase this issue.In wp-config.php I made the change ofStill without success. WordPress asks me:Your PHP installation appears to be missing the MySQL extension which is required by WordPress.\nPlease check that the mysqli PHP extension is installed and enabled.Will appreciate any suggestions. Thanks in advance.",
"username": "Michael_Litvinenko"
},
{
"code": "",
"text": "Hi @Michael_Litvinenko welcome to the MongoDB community forums!It does seem very tempting to migrate WordPress from MySQL to MongoDB!Your immediate issue seems related to the MySQL PHP extension not being installed. I don’t know which PHP/WordPress stack you’re using, but setting up the PHP stack for WordPress has a lot of different tutorials. The Ubuntu one is just an example:Ubuntu is an open source software operating system that runs from the desktop, to the cloud, to all your internet connected things.Now to the main point of your question: “How to make WordPress run on MongoDB instead of MySQL.” Unfortunately, it is impossible because WordPress and thousands of plugins rely on SQL queries written directly into the code.Even if we could emulate or translate all the SQL queries, it would not be optimum for MongoDB because NoSQL has been created to do away with some of the limitations of SQL. Check this video. It’s a fascinating story!MongoDB has a different architecture and a powerful query API different from SQL.On a related note, @Jack_Woehr Jack_Woehr, check Laravel. There’s excellent MongoDB support there. I tried it, and it looks terrific.",
"username": "Hubert_Nguyen1"
}
] |
PHP Framework that supports MongoDB
|
2022-11-08T02:35:04.411Z
|
PHP Framework that supports MongoDB
| 2,380 |
null |
[
"aggregation",
"queries",
"atlas-search"
] |
[
{
"code": "let pipeline = [{\n $search: {\n index: \"index\",\n 'compound': {\n 'must': {\n 'text': {\n 'path': 'x',\n 'query': arg.x\n }\n },\n 'filter': {\n text: {\n query: arg.filterBy,\n path: {\n wildcard: '*'\n }\n }\n }\n }\n }\n },\n {\n $project: {\n \n }\n }\n]\n",
"text": "I have this query, I would like to search in all query but exclude some fields",
"username": "Ibrahim_ALSURKHI"
},
{
"code": "",
"text": "Hello, @Ibrahim_ALSURKHI! Welcome to the MongoDB community To understand better your case, please provide:",
"username": "slava"
},
{
"code": "{\n $project: { unwantedField1: 0, unwantedField2: 0 }\n}\n",
"text": "Would it not be as simple as:",
"username": "Justin_Jaeger"
}
] |
Filter In all fields exclude some fields
|
2023-08-13T12:04:42.996Z
|
Filter In all fields exclude some fields
| 480 |
null |
[
"aggregation",
"queries"
] |
[
{
"code": "approver_iddocument_approval_line_arraydocument_approver_idx{\n \"_id\" : ObjectId(\"64cc5a9501b7f613484ca9f9\"),\n \"document_submitter\" : \"woong\",\n \"document_approval_line_id\" : ObjectId(\"64ca0ab82cd80b2168a44736\"),\n \"document_approval_line_array\" : [\n [\n {\n \"approver_type\" : \"2\",\n \"approver_id\" : \"woong\",\n \"approver_name\" : \"\",\n \"approver_department\" : \"IT본부\",\n \"approver_title\" : \"사원\",\n \"approver_shownas\" : \"테스터\"\n }\n ],\n [\n {\n \"approver_type\" : \"2\",\n \"approver_id\" : \"denny\",\n \"approver_name\" : \"데니\",\n \"approver_department\" : \"IT본부\",\n \"approver_title\" : \"대리\",\n \"approver_shownas\" : \"UI/UX팀\"\n }\n ],\n [\n {\n \"approver_type\" : \"1\",\n \"approver_id\" : \"reggie\",\n \"approver_name\" : \"레지\",\n \"approver_department\" : \"IT본부\",\n \"approver_title\" : \"팀장\",\n \"approver_shownas\" : \"개발팀장\"\n },\n {\n \"approver_type\" : \"1\",\n \"approver_id\" : \"luke\",\n \"approver_name\" : \"루크\",\n \"approver_department\" : \"IT본부\",\n \"approver_title\" : \"사원\",\n \"approver_shownas\" : \"테스터\"\n }\n ],\n [\n {\n \"approver_type\" : \"0\",\n \"approver_id\" : \"test\",\n \"approver_name\" : \"대장\",\n \"approver_department\" : \"IT본부\",\n \"approver_title\" : \"대표이사\",\n \"approver_shownas\" : \"대표이사\"\n }\n ]\n ],\n \"document_approver_idx\" : NumberInt(2),\n \"document_approver_nextIdx\" : NumberInt(3)\n\n}\ndb.getCollection(\"company_document\").aggregate([\n {\n $addFields: {\n temp: {\n $arrayElemAt: ['$document_approval_line_array', '$document_approver_idx']\n }\n }\n },\n {\n $match: {\n 'temp.approver_id': 'reggie'\n }\n }\n ]\n)\n",
"text": "I’m sorry. I may not be very good at English, so my explanation might be inadequate. Please understandI want to search for the approver_id within the document_approval_line_array using the document_approver_idx value.Although this query works correctly, but I don’t want to use addFields. Is it possible?",
"username": "reggie_han"
},
{
"code": "db.approvals.insertMany([\n {\n _id: 'A',\n document_approval_line_array: [\n [\n {\n \"approver_id\": \"M1\",\n },\n {\n \"approver_id\": \"M2\",\n }\n ],\n [\n {\n \"approver_id\": \"M3\"\n }\n ]\n ],\n document_approver_idx: NumberInt(1),\n document_approver_ids: ['M3'] // new field\n },\n {\n _id: 'B',\n document_approval_line_array: [\n [\n {\n \"approver_id\": \"N1\",\n }\n ],\n [\n {\n \"approver_id\": \"N2\",\n },\n {\n \"approver_id\": \"N3\",\n }\n ],\n ],\n document_approver_idx: NumberInt(1),\n document_approver_ids: ['N2, N3'] // new field\n },\n]);\ndb.approvals.aggregate([\n {\n $match: {\n document_approver_ids: 'N3'\n }\n }\n]);\ndb.approvals.find({\n document_approver_ids: 'N3'\n});\n[\n {\n _id: 'B',\n document_approval_line_array: [\n [ { approver_id: 'N1' } ],\n [ { approver_id: 'N2' }, { approver_id: 'N3' } ]\n ],\n document_approver_idx: 1,\n document_approver_ids: [ 'N2', 'N3' ]\n }\n]\n",
"text": "Hello, @reggie_han and welcome to the community! From the description and examples you provided it is clear, that your aggregation pipeline perform $match using the calculated data. For that you need additional stage, like $addFields or $project. If you want to omit those stages, you may want to add some changes to your model. For example:Later you can achieve same results by using this aggregation pipeline:Or even with a simple find operation:Both above methods would return the same result:",
"username": "slava"
}
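If changing the model is not an option, a hedged alternative sketch (not from the original reply) is to move the computed expression into the $match itself with $expr, so no extra field or $addFields stage is needed:

db.getCollection("company_document").aggregate([
  {
    $match: {
      $expr: {
        $in: [
          "reggie",
          {
            // Pick the approver group at document_approver_idx and project
            // just its approver_id values, then test membership.
            $map: {
              input: { $arrayElemAt: ["$document_approval_line_array", "$document_approver_idx"] },
              in: "$$this.approver_id"
            }
          }
        ]
      }
    }
  }
]);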
] |
I want to find using the object value
|
2023-08-04T10:11:25.129Z
|
I want to find using the object value
| 402 |
null |
[
"atlas-search"
] |
[
{
"code": "{\nCompanyName: \"Test 1\",\nCompanyKeywords: \"key1, key2, key3\",\nCompanyScore: 454.44\n},\n{\nCompanyName: \"Test 2\",\nCompanyKeywords: \"key1, key2, key3\",\nCompanyScore: 51.44\n},\n",
"text": "HiI created a search index containing 2 text fields and 1 number field.\nDocument look like this:I need to sort the result of the $search by the CompanyScore field.\nWhen I search for “Test” it brings “Test 2”, but I want it to sort by CompanyScore and bring “Test 1” as it should.",
"username": "Forie_Forie"
},
{
"code": "",
"text": "Please share the code you’ve used for your query and sorting.",
"username": "Erik_Hatcher"
},
{
"code": "db.companiesRating.insertMany([\n {\n _id: 'A',\n companyName: 'Test 1',\n companyScore: 500,\n },\n {\n _id: 'B',\n companyName: 'Test 2',\n companyScore: 40,\n },\n {\n _id: 'C',\n companyName: 'Test 3',\n companyScore: 1500,\n },\n {\n _id: 'D',\n companyName: 'Test 4',\n companyScore: 99,\n }\n]);\ncompanied-search-test{\n \"mappings\": {\n \"dynamic\": true\n }\n}\ndb.companiesRating.aggregate([\n {\n $search: {\n index: 'companies-search-test',\n text: {\n query: 'test',\n path: 'companyName'\n }\n }\n }\n]);\n[\n { _id: 'D', companyName: 'Test 4', companyScore: 99 },\n { _id: 'C', companyName: 'Test 3', companyScore: 1500 },\n { _id: 'A', companyName: 'Test 1', companyScore: 500 },\n { _id: 'B', companyName: 'Test 2', companyScore: 40 }\n]\ndb.companiesRating.aggregate([\n {\n $search: {\n index: 'companies-search-test',\n text: {\n query: 'test',\n path: 'companyName',\n score: {\n function:{\n multiply: [\n {\n path: {\n value: 'companyScore',\n undefined: 1\n }\n },\n {\n score: 'relevance'\n }\n ]\n }\n }\n }\n }\n },\n // this last $addFields stage is added to display the modified score\n // and can be removed\n {\n $addFields: {\n documentScore: { $meta: 'searchScore' },\n }\n }\n]);\n[\n {\n _id: 'C',\n companyName: 'Test 3',\n companyScore: 1500,\n documentScore: 71.83670806884766\n },\n {\n _id: 'A',\n companyName: 'Test 1',\n companyScore: 500,\n documentScore: 23.94556999206543\n },\n {\n _id: 'D',\n companyName: 'Test 4',\n companyScore: 99,\n documentScore: 4.741222858428955\n },\n {\n _id: 'B',\n companyName: 'Test 2',\n companyScore: 40,\n documentScore: 1.9156455993652344\n }\n]\ncompanyScore",
"text": "Hello, @Forie_Forie To better understand your situation, I extended your example dataset a bit, so it would look like this:And created a simple Atlas Search index, named companied-search-test, with a default configuration:Then, I used $search stage with “test” query in the aggregation pipeline to reproduce your case:Output:We can see, that documents order in the output is more random, than predictable.To make it more predictable, you need add some additional logic that would modify document’s search score in the aggregation pipeline, like this:Output:Now, the sorting of the documents is greatly affected by companyScore.",
"username": "slava"
}
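Depending on the Atlas Search version available to the cluster, there may also be a dedicated sort option on $search that orders results by an indexed numeric field directly, without manipulating the relevance score. A hedged sketch, assuming companyScore is indexed as a number and the deployment supports the option:

db.companiesRating.aggregate([
  {
    $search: {
      index: 'companies-search-test',
      text: { query: 'test', path: 'companyName' },
      // Assumes an Atlas Search release that supports sort inside $search.
      sort: { companyScore: -1 }
    }
  }
]);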
] |
Sorting $search results
|
2023-05-30T09:09:09.914Z
|
Sorting $search results
| 783 |
[
"node-js"
] |
[
{
"code": "",
"text": "Can anyone help me explain this error and how to fix it? I’m working on a bookstore site project but I’m having a hard time figuring out how to create the interface!\n\nimage1634×744 54.8 KB\n",
"username": "Hung_Viet"
},
{
"code": "",
"text": "Hey @Hung_Viet, a quick search of these forums for the error you listed would surface an answer to your question.See Option usecreateindex is not supported - #4 by Stennie_X for further details.",
"username": "alexbevi"
},
{
"code": "",
"text": "To fix the error, please comment out line numbers 41 and 42.This error occurred because useCreateIndex and useFindAndModify are no longer supported.",
"username": "ABHISHEK_KUMAR_SINGH5"
}
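For reference, a hedged sketch (not from the original reply; the URI variable is a placeholder) of what the connection call looks like once the unsupported options are removed:

const mongoose = require('mongoose');

// useCreateIndex and useFindAndModify are not supported in Mongoose 6+,
// and useNewUrlParser / useUnifiedTopology are no longer necessary.
mongoose.connect(process.env.MONGO_URI)
  .then(() => console.log('Connected to MongoDB'))
  .catch((err) => console.error('Connection error:', err));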
] |
Error and need everyone's help!
|
2023-03-23T12:53:08.237Z
|
Error and need everyone’s help!
| 466 |
|
[
"node-js",
"connecting",
"sharding",
"containers"
] |
[
{
"code": "",
"text": "Our company has a frontend and backend node-js application which is up and running on AWS EC2 instance and successfully connecting to MongoDB Atlas. When we tried to docrise the application we had trouble connecting to MongoDB. The dockerised application is running on the same AWS EC2 but opening a different port. (We are unable to deploy it to another EC2 instance.)I have inspected the docker container and confirmed it has internet access, checked the connection string and AWS EC2 security group to confirm the network setting, but couldn’t resolve issue.\nWX20230808-142324@2x1920×557 126 KB\nDoes anyone have any idea what could go wrong?Thank you for your help in advance.Abby",
"username": "Abby_Kuo"
},
{
"code": "",
"text": "Did you manage to fix this ?",
"username": "Vlad_Tkachuk"
}
] |
MongoError: no mongos proxy available
|
2023-08-08T04:38:09.640Z
|
MongoError: no mongos proxy available
| 554 |
|
null |
[
"aggregation",
"dot-net",
"crud"
] |
[
{
"code": "// var request = ...\n\nvar result = MyDbCollection.UpdateOneAsync(\n a => otherConditions && a.MyObject != null,\n Builders<MyDocument>.Update\n .Set(a => a.MyObject!.A, request.A)\n .Set(a => a.MyObject!.B, request.B)\n .Set(a => a.MyObject!.C, request.C));\n\nif (result.ModifiedCount <= 0)\n{\n MyDbCollection.UpdateOneAsync(\n a => otherConditions && a.MyObject == null,\n Builders<MyDocument>.Update\n .Set(a => a.MyObject, new MyDocument.MyObject()\n {\n A = request.A,\n B = request.B,\n C = request.C\n }));\n}\na.MyObject != nulla.MyObject == nullMyDbCollection.UpdateOneAsync(\n a => otherConditions\n Builders<MyDocument>.Update\n .Condition(a => a.MyObject != null)\n .Then(builder => builder\n .Set(a => a.MyObject!.A, request.A)\n .Set(a => a.MyObject!.B, request.B)\n .Set(a => a.MyObject!.C, request.C)\n )\n .Else(builder => builder\n .Set(a => a.MyObject, new MyDocument.MyObject()\n {\n A = request.A,\n B = request.B,\n C = request.C\n })\n )\nSystem.InvalidCastException:\nUnable to cast object of type 'MongoDB.Bson.BsonArray' to type 'MongoDB.Bson.BsonDocument'.\n",
"text": "Originally posted at: MongoDB C# Driver - Conditional `UpdateDefinition`? - Stack OverflowWhat is the simpler way of doing this?Notice the differing a.MyObject != null and a.MyObject == null conditions, ideally I want to do something like in this pseudocode:I’ve wasted hours at this point trying to do it with aggregation pipelines where I keep getting this error:I don’t have that code anymore so I can’t copy paste it here. Sorry!",
"username": "HVFOU"
},
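Language aside, the server-side feature this is reaching for is an update that uses an aggregation pipeline, where $cond can branch on whether MyObject exists. A hedged mongosh-style sketch of that approach (collection name, filter and values are placeholders) that a C# pipeline-style update would mirror:

db.MyDocuments.updateOne(
  { /* otherConditions */ },
  [
    {
      $set: {
        MyObject: {
          $cond: [
            { $eq: [{ $ifNull: ["$MyObject", null] }, null] },
            // MyObject missing or null: set the whole embedded document.
            { A: "a", B: "b", C: "c" },
            // MyObject present: merge the new values into the existing one.
            { $mergeObjects: ["$MyObject", { A: "a", B: "b", C: "c" }] }
          ]
        }
      }
    }
  ]
);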
{
"code": "",
"text": "Latest version of this post is at:",
"username": "HVFOU"
}
] |
Conditional `UpdateDefinition`
|
2023-08-01T17:29:43.321Z
|
Conditional `UpdateDefinition`
| 678 |
null |
[
"queries",
"node-js",
"crud"
] |
[
{
"code": "{\n_id: \"63492520b5ef3f0bf00282ca\",\nusers: [ {\n_id:\"64d6c6cd2b758fd95dceeedd\"\nuserId: \"AB211C5F-9EC9-429F-9466-B9382FF61035\"\n},{\n_id:\"64d6cccd2b818fd95ccefedd\"\nuserId: \"AC412D5H-8GE6-631R-8581-C1452JF60029\"\n}]\n\n const pullUsers= await Subscribed.updateOne({_id:user._id},\n {$pull: {\n \"users\": {\n \"userId\":{in:[...input.pullUsers]}\n }\n }},);\ninput.pullUsersinput {\n pullUsers: [ 'AB211C5F-9EC9-429F-9466-B9382FF61035' ]\n}\n \"message\": \"Cast to string failed for value \\\"{ in: [ 'AB211C5F-9EC9-429F-9466-B9382FF61035' ] }\\\" (type Object) at path \\\"userId\\\"\"pullUsersuserId let pulledUsers = [];\n for (let i=0; i<input.pullUsers?.length; i++){\n const data = input.pullUsers[i].toString()\n pulledUsers.push(data)\n }\n\nconst pullUsers= await Subscribed.updateOne({_id:user._id},\n {$pull: {\n \"users\": {\n \"userId\":{in:[pulledUsers]}\n }\n }},);\n\n",
"text": "Hi, I’m working on a project where I need to be able to pull the objects containing certain userIds from an array of objects:I’ve been trying to do this with versions of:where input.pullUsers is:but I keep getting an error: \"message\": \"Cast to string failed for value \\\"{ in: [ 'AB211C5F-9EC9-429F-9466-B9382FF61035' ] }\\\" (type Object) at path \\\"userId\\\"\", which I don’t understand because pullUsers is a string and so is userId. I’ve tried:to double check that my input would be sending strings to be pulled from mongodb, but I’m getting the same error. I’ve also tried looking at other instances of castErrors that people have had, but I couldn’t find one that would helped me with this issue. If anyone knows why this might be happening, I would really appreciate any help or advice. Thank you!",
"username": "p_p1"
},
{
"code": "userIdconst pullUsers= await Subscribed.updateOne(\n { _id: \"63492520b5ef3f0bf00282ca\" },\n {\n $pull: {\n users: {\n userId: { $in: [\"AB211C5F-9EC9-429F-9466-B9382FF61035\"] }\n }\n }\n }\n)\n",
"text": "Hello @p_p1,Are you using Mongoose NPM, if yes then can you show your schema declaration? might be the type of the userId is different than your input value.If not then,Can you try a static value in your query as below, if it is working then there is some issue in your input variables,As you can see the query is working here,Mongo playground: a simple sandbox to test and share MongoDB queries online",
"username": "turivishal"
},
{
"code": "pullUserInput {pullUsers: [String]}",
"text": "Hi Vishal,Thank you for your response! I tried the static version that you gave me, and it worked, so like you mentioned it must be something with my input variables, but I have no idea what it could be. I’m using graphql and I have: pullUserInput {pullUsers: [String]} (this is equal to the input variable), so my backend should be getting an array of strings, but for some reason it’s not registering that.",
"username": "p_p1"
},
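One hedged observation that may explain the cast error (not confirmed in the original thread): the failing update uses a bare in key rather than the $in operator, so Mongoose treats { in: [...] } as a literal value and tries to cast it to the String type of userId. A sketch with the operator spelled correctly:

// `$in` (with the dollar sign) is the query operator; without it, the object
// is treated as a literal value and fails to cast to the schema's String type.
const pullUsers = await Subscribed.updateOne(
  { _id: user._id },
  { $pull: { users: { userId: { $in: input.pullUsers } } } }
);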
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
CastError: Cast to string failed for { in: [ ...] }
|
2023-08-13T03:58:20.238Z
|
CastError: Cast to string failed for { in: [ …] }
| 704 |
null |
[
"queries",
"node-js",
"compass",
"atlas-search",
"text-search"
] |
[
{
"code": "db.getCollection('test').find({$text:{$search: \"apple\"}})await db\n .collection('test')\n .find({\n $text: { $search: 'apple' }\n })\n .toArray();\nError: error processing query: ns=db.usersTree: TEXT : query=hi, language=english, caseSensitive=0, diacriticSensitive=0\nSort: {}\nProj: {}\n planner returned error :: caused by :: need exactly one text index for $text query\n",
"text": "I created a text index on a collection. I can query it from the MongoDB compass:db.getCollection('test').find({$text:{$search: \"apple\"}})but not from the mongodb node driver (I have the latest version, “^5.7.0”)It throws this error:Steps to reproduce:I’m using a serverless instance, which doesn’t support Atlas search, so just regular text search.Please let me know if able to reproduce and if this will be addressed soon. Thank you",
"username": "Justin_Jaeger"
},
{
"code": "",
"text": "Any MongoDB staff that can confirm this bug? Seems like a pretty big problem if all node.js users can’t use text indexes",
"username": "Justin_Jaeger"
},
{
"code": "",
"text": "WORKAROUND:If you create the index within Node.js, it works. To do this, temporarily disable strict mode on your connection before you run the createIndex command.This still appears to be a bug with MongoDB but at least you won’t have to forego using the text indexes.",
"username": "Justin_Jaeger"
},
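For completeness, a hedged sketch (database, collection and field names are illustrative) of the workaround described above - creating the text index and then querying it from the Node.js driver:

const { MongoClient } = require('mongodb');

async function run(uri) {
  const client = new MongoClient(uri); // connected without Stable API strict mode
  try {
    const coll = client.db('mydb').collection('test');

    // Create the text index from the driver itself.
    await coll.createIndex({ name: 'text', description: 'text' });

    // $text queries then resolve against that index.
    const results = await coll.find({ $text: { $search: 'apple' } }).toArray();
    console.log(results);
  } finally {
    await client.close();
  }
}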
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
BUG with reproducing instructions - Node.js driver error when using text index
|
2023-08-09T18:58:42.678Z
|
BUG with reproducing instructions - Node.js driver error when using text index
| 531 |
null |
[] |
[
{
"code": "",
"text": "Hi,\nI want to fetch all records in a table in batches. I can use skip() and limit() but performance wise skip() would be slow. Does MongoDb have the equivalent of BigQuery’s paginationToken concept? - Read data with BigQuery API using pagination | Google CloudAlternately, I came across this KeySet Pagination approach, which relies on ObjectId being monotonically increasing - MongoDB Pagination, Fast & Consistent | by Mosius | The Startup | Medium. Is that assumption about ObjectId likely to remain valid in future?",
"username": "Pramod_Biligiri"
},
{
"code": "",
"text": "One question and one idea:",
"username": "Luis_Osta1"
},
{
"code": "",
"text": "Regarding the first question on BigQuery, that’s a good one. I don’t have official docs about this, but one blog post says that pageToken approach is much faster than skip+limit: https://medium.com/bloggingtimes/the-page-token-method-in-bigquery-an-efficient-approach-to-pagination-7d8f2443c69d . I don’t know how internally this is implemented. But important to note that this pageToken can be reused in a subsequent call to BigQuery even after disconnecting (How long is the pageToken valid for? No idea I haven’t checked yet).Regarding the second question on Mongodb paging, my problem is not with batchSize. I’m concerned about the fact that it will skip the Offset no. of records each time. Linearly skipping the offset no. of records might get progressively slower if you are fetching the latter parts of a large collection?My usecase requires me to disconnect after fetching each page and then reconnecting. Hence a Page Token kind of approach is what I was looking for.",
"username": "Pramod_Biligiri"
},
{
"code": "page tokenpage token",
"text": "@Pramod_Biligiri Ah I see, I don’t 100% know if in Mongo limit + skip does a linear skip of the offset but that sounds unlikely. Considering that blog posts describes pageTokens as:The page token method is a cursor-based mechanism for pagination in BigQuery that allows you to efficiently retrieve a specific page of results from a query. The page token is generated after the first query and passed in the subsequent queries to determine the starting point for the next page of results.And all MongoDB queries are cursor-based, it seems that they would be roughly equivalent. I think possibly older MongoDB versions had to manually go through each part of a collection though so I’m unsure which version you’re running. I think the only way to know for sure is to do some load testing with something like https://locust.io/ or something else equivalent. And see what the performance looks like for you. I’ve personally never encountered any performance problems with skip + limit in my usage.Edit:\nJust went through the docs and I was not able to find anything that wasn’t just skip to move cursors. So please do let me know how your investigation goes!",
"username": "Luis_Osta1"
},
{
"code": "",
"text": "It does look like skip() and limit() are inefficient, and one way around this is to iterate by sorting over one of the indexed fields explicitly. Check out the section titled “Avoiding large skips” in this page - 4. Querying - MongoDB: The Definitive Guide, 2nd Edition [Book]",
"username": "Pramod_Biligiri"
},
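To make the "avoid large skips" idea concrete, a hedged sketch of keyset (range) pagination on _id - each page continues from the last _id seen instead of skipping, so the bound is resolved through the index (collection name is hypothetical):

const pageSize = 20;

// First page: plain sort + limit.
let page = await db.collection('events')
  .find({})
  .sort({ _id: 1 })
  .limit(pageSize)
  .toArray();

// Subsequent pages: resume from the last _id of the previous page.
// No documents are scanned and discarded the way skip() would.
while (page.length === pageSize) {
  const lastId = page[page.length - 1]._id;
  page = await db.collection('events')
    .find({ _id: { $gt: lastId } })
    .sort({ _id: 1 })
    .limit(pageSize)
    .toArray();
}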
{
"code": "",
"text": "Hi @Pramod_BiligiriJust to add my 2c: I found this community blog post API Paging Built The Right Way, with the corresponding Github repo implementing the idea in GitHub - mixmaxhq/mongo-cursor-pagination: Cursor-based pagination for Mongo that may be of interest to you.Note that those are community resources and not official MongoDB code/product, so use at your own risk Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Skip and limit will cause performance impact with large values, and i bave seen this on our own on premise clustersOfficial doc has some explanation about how skip and limit are dealt with in a sharded cluster.I think thats because index is built with b plus tree. And with a large skip value, the scan still has to fo from root and one by one until the number is reached. Theres no number of subnodes info in the tree.But with the obj id filter , the indexes smaller than that can be easily skipped, resulting in O height runtime.",
"username": "Kobe_W"
}
] |
Can I do paging over all records in a collection without using skip and limit?
|
2023-06-21T09:55:30.124Z
|
Can I do paging over all records in a collection without using skip and limit?
| 1,036 |
null |
[
"mongodb-shell",
"database-tools",
"backup",
"devops"
] |
[
{
"code": "Failed: error writing data for collection to disk: error reading collection: connection pool for 127.0.0.1:6666 was cleared because another operation failed with: connection() error occurred during connection handshake: connection(127.0.0.1:6666[-26]) incomplete read of message header: read tcp 127.0.0.1:57313->127.0.0.1:6666: i/o timeout",
"text": "Hi Mongo Community,We use mongodb utility tools in our everyday workflow to copy production data in a local development environment (e.g. to troubleshoot existing functionality or develop new features). To streamline or workflow, we have written bash scripts, which we run from our development machines, and that use ssh tunneling to access prod.Two use cases we’re having issues with:calls mongodump over an ssh tunnel (dumps prod data in a local folder),drops the local databaseruns mongorestore to load prod data in the local databaseruns mongosh commands to delete sensitive data that isn’t necessary for developmentISSUE #1:ISSUE #2:Failed: error writing data for collection mydbname.mycollectionname to disk: error reading collection: connection pool for 127.0.0.1:6666 was cleared because another operation failed with: connection() error occurred during connection handshake: connection(127.0.0.1:6666[-26]) incomplete read of message header: read tcp 127.0.0.1:57313->127.0.0.1:6666: i/o timeoutWould love to hear how other dev teams are tackling the problem of copying production data to dev. I would love to brainstorm different solutions with other teams facing similar challenges.Thanks!XavierFeather Finance",
"username": "Xavier_Robitaille"
},
{
"code": "master mode",
"text": "I thought I’d revive this post given that I’m still experiencing the issues described…Some further thoughts:",
"username": "Xavier_Robitaille"
}
] |
SSH Tunnelling Issues with mongoexport, mongorestore and mongosh
|
2023-01-22T14:59:19.002Z
|
SSH Tunnelling Issues with mongoexport, mongorestore and mongosh
| 1,842 |
null |
[
"queries",
"atlas-search",
"text-search"
] |
[
{
"code": "20044BarcodeBarcodeItem.Item.Item.Barcodefind({$text: {$search: '20044'}})\nfind({'Item.Item.Item.Barcode': '20044'})\n{\n \"_id\": {\n \"$oid\": \"633d7cc238d7f8dafeace6f5\"\n },\n \"Number\": \"2\",\n \"Item\": [\n {\n \"Type\": \"FrameElement\",\n \"Item\": [\n {\n \"Type\": \"Frame\",\n \"Barcode\": \"20011\"\n },\n {\n \"Type\": \"Frame\",\n \"Barcode\": \"20012\"\n },\n {\n \"Type\": \"SashElement\",\n \"Item\": [\n {\n \"Type\": \"Sash\",\n \"Barcode\": \"20021\"\n },\n {\n \"Type\": \"Sash\",\n \"Barcode\": \"20022\"\n },\n {\n \"Type\": \"GlassBarElement\",\n \"Item\": [\n {\n \"Type\": \"GlassBar\",\n \"Barcode\": \"20031\"\n },\n {\n \"Type\": \"GlassBar\",\n \"Barcode\": \"20032\"\n }\n ]\n }\n ]\n },\n {\n \"Type\": \"Glass\",\n \"Barcode\": \"20016\"\n },\n {\n \"Type\": \"GlassBarElement\",\n \"Item\": [\n {\n \"Type\": \"GlassBar\",\n \"Barcode\": \"20043\"\n },\n {\n \"Type\": \"GlassBar\",\n \"Barcode\": \"20044\"\n }\n ]\n }\n ]\n }\n ]\n}\n",
"text": "How i can search for the value “20044” in all fields “Barcode” and just in field “Barcode” in a nested object without specifying an absolute path e.g. \" Item.Item.Item.Barcode \" in MongoDB?My current solutions:Search in all text fields, not only in “Barcode” fieldsSearch in one specifying absolute path and not in all “Barcode” fieldsThis is my databse object:",
"username": "Igor_Lovrinovic"
},
{
"code": "",
"text": "Hi @Igor_Lovrinovic , thanks for the question and welcome to the MongoDB community!You can search over nested objects using Atlas Search. After creating Atlas Search search indexes, you can use $search directly to query nested objects instead of $text.Once using Atlas Search, you may also consider looking into embeddedDocs or, specifying multiple absolute paths in your search index, ie. Item.Barcode, Item.Item.Barcode, Item.Item.Item.Barcode, etc.Hope this helps! Please do not hesitate to reply back to this thread if you have more quesitons.",
"username": "amyjian"
},
{
"code": "",
"text": "Hello @amyjian , thank you for your answer. The nesting of objects “Item.Item.Item…” can be infinite until “Barcode” comes along. I can’t create absolute paths. I have no idea what this query should look like?I use:MongoDB Community Server\n6.0.2MongoDB Compass\nVersion 1.33.1\nimage944×244 10.4 KB\n",
"username": "Igor_Lovrinovic"
},
{
"code": "",
"text": "Hi @Igor_Lovrinovic , thanks for providing more details. I understand now that you are using MongoDB Community, which is not supported by MongoDB’s full text search solution, Atlas Search.If you’re interested in addressing your search needs via Atlas Search, you can read more about it here. You can also try it with a free MongoDB Atlas cluster! Sign up here. Hope this helps.",
"username": "amyjian"
}
] |
How to search in nested objects?
|
2022-10-07T10:12:18.463Z
|
How to search in nested objects?
| 5,021 |
null |
[] |
[
{
"code": "",
"text": "I have a set of events that have set times and recur each week on a given day. These events are global, and for each event we track the local time and the time zone. Simply converting to UTC would normally work, but not in our use case because the events always occur in local time (i.e., 3pm in New York whether Daylight Savings Time is in effect or not).Looking closer at a simplified example:Assume the entries for each of these three events were added in January for Mondays. Everyone is using standard time (i.e., no DST is in effect)E1, UTC+00, 2300 local, DST not observed, 2300 UTC\nE2, UTC-07, 1615, DST not observed, 2315 UTC\nE3, UTC-08, 1530, DST observed, 2330 UTCPeople around the globe might access these events based on their own time zones, and we want to present them with, say, the next 20 sequential events.If the data were sorted simply based on UTC, the order would be E1, E2, E3.However, when DST kicks in for E3, the offset shifts to UTC-07 and 1530 local would be 2230 UTC. At that point, the correct order to present to the user would be E3, E1, E2.My initial thought to solve this was to track an offset adjustment in the database if DST is observed for the event. But, it is my understanding the indexes can’t have calculated fields in them.So now I’m diving into Views (standard and on demand) but it isn’t clear to me this is a solution for my problem. Is it?The only other solution I can come up with is to periodically rewrite the database with updated UTC times based on if DST is in effect for an event’s time zone.Am I missing other potential solutions?Thanks.Tim",
"username": "Tim_Rohrer"
},
{
"code": "eventseventsdb.events.insertMany([\n {\n _id: 'E1',\n eventTime: ISODate('2023-08-11T08:00:00.000Z'), // 8:00\n },\n {\n _id: 'E2',\n eventTime: ISODate('2023-08-11T14:00:00.000Z'), // 14:00\n },\n {\n _id: 'E3',\n eventTime: ISODate('2023-08-11T20:00:00.000Z'), // 20:00\n },\n]);\nusersdb.users.insertOne({\n _id: 'U1',\n timezoneIdentifier: 'America/Argentina/Cordoba', // timezone offset = -3\n});\nusers// convert somehow user's local time (16:00) to UTC (13:00)\nvar userTimeInUTC = ISODate('2023-08-11T13:00:00.000Z') \ndb.events.find({\n eventTime: {\n $gte: userTimeInUTC\n }\n});\n[\n { _id: 'E2', startTime: ISODate(\"2023-08-11T14:00:00.000Z\") }, // 14:00\n { _id: 'E3', startTime: ISODate(\"2023-08-11T20:00:00.000Z\") } // 20:00\n]\ndb.events.aggregate([ \n {\n $project: {\n eventTimeUTC: {\n $dateToString: {\n format: '%Y-%m-%d %H:%M:%S',\n date: '$eventTime',\n }\n },\n eventTimeLocal: {\n $dateToString: {\n format: '%Y-%m-%d %H:%M:%S',\n date: '$eventTime',\n timezone: user.timezoneIdentifier, // app variables is used\n }\n },\n }\n }\n]);\n[\n {\n _id: 'E1',\n eventTimeUTC: '2023-08-11 08:00:00',\n eventTimeLocal: '2023-08-11 05:00:00'\n },\n {\n _id: 'E2',\n eventTimeUTC: '2023-08-11 14:00:00',\n eventTimeLocal: '2023-08-11 11:00:00'\n },\n {\n _id: 'E3',\n eventTimeUTC: '2023-08-11 20:00:00',\n eventTimeLocal: '2023-08-11 17:00:00'\n }\n]\n",
"text": "Hello, @Tim_Rohrer !It is extremely important to have a consistency in your datetime-related data. This will cut off lots of headaches when working with dates later.I’d suggest to stick to the following rules of thumb when working with date and time:It should not be that hard to follow those rules, as MongoDB stores all datetime data in UTC by default.Now, returning to your specific case:\nIs local time really that valuable, so you are storing it with each event document? What if user moves around much (a traveler or active businessman)? What if user works with your platform from different devices that can have different time zone setup? If the local time is the same for all events created by same user, does it worth it to store with each event?In case you decide, that storing local time in events doucments is not an option, have a look at the example flow below.Imagine, we have events documents stored like this, storing dates only in UTC:And users documents would store the timezone offset:Later, when some client requests nearest event, we determine his timezone offset from users collection (or maybe from request data) and this is how we get the UTC-representation of user’s local time:Now, queries to get the list of events is quite simple:Ouput (only two events are still available to the user):You can handle this data as with date is UTC format - it should be easy to convert it to local time there. But, in case you want to do some datetime conversions from UTC to user’s local time, you can do it with some server application-level middleware. Also it can be done with an aggregation pipeline:Output:Note: you can have different date srtring format using other datetime format specifiers.",
"username": "slava"
},
{
"code": "0T2300ZlastUpdatedcreateView",
"text": "Hi Slava!I really appreciate you looking at this. I had written a bunch in reply but then realized that my opening paragraph above may not have adequately explained the challenge. Please let me try again.Unfortunately, the convention for how to establish events has been in place for decades, and changing it isn’t going to happen. I shouldn’t say never to the extend we might be able to change it server-side, but today even that would be tough to do. I’m trying to find a solution that can work with the current conventions.An event is established in local time. What I should have been clearer on is that the event is set for a day of the week, and not set for a particular date. For example, an event occurs on Mondays (day 1) in Seattle at 1500 local time. This event repeats every Monday unless it is removed. The host will start it at 1500 local, summer, spring, winter or fall, regardless of DST. But because this convention isn’t date dependent, it also isn’t Daylight Savings Time-aware. Using UTC times becomes more complicated as it is more difficult to determine correct offsets.If the event is created while DST is in effect, when standard time returns, the offset changes again but in the opposite direction. For this reason, we are not presently seeing how simply using UTC works.Now, returning to your specific case:\nIs local time really that valuable, so you are storing it with each event document? What if user moves around much (a traveler or active businessman)? What if user works with your platform from different devices that can have different time zone setup? If the local time is the same for all events created by same user, does it worth it to store with each event?Today, in one of the prototype databases, local time and the week day of the event are stored. No other time-related data is stored (but I may be able to affect that). Events can be hosted from basically any timezone, and can be attended by people all over the globe.So, how this is presented to users today who want to see the next X events (regardless of timezone) is that we provide the entire contents of the data set (~2500 recurring events I think) and then sort in the client browser. So yeah, we believe there is a performance hit from this and that is why I’m trying to shift this process to server side and limit the results returned.But, the bugger is the lack of DST awareness. Even if an event can be represented in UTC time for just the specified day of the week (i.e, Mondays), something like 0T2300Z, then we still have the issue of the UTC time needing to be adjusted when DST goes into force for whatever time zone the event is being hosted.I think that if we capture the lastUpdated for a particular event’s time, and I add a collection that keeps track of when a particular timezone springs forwards/falls back, we can insert a form of DST-awareness. At least we can adjust the offset used to determine the UTC time equivalent of the local time. But this will require server side calculations. You mentioned middleware, which I was not aware of. I’ll research that further.Hopefully this better explains why I started looking closer at MongoDB Views. I just don’t know how complex calculations can be. The best option I’m coming up with is to introduce offset adjustments to the UTC-based time of the event based on whether the hosting timezone observes DST and if it is/is not in effect. I’m planning to experiment with that using createView.Again, I apologize and hope this explanation is clearer.Tim",
"username": "Tim_Rohrer"
}
] |
If not with an index, how can I solve this challenge?
|
2023-08-11T03:50:06.812Z
|
If not with an index, how can I solve this challenge?
| 261 |
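A note on the DST problem discussed in the thread above: MongoDB's date expression operators accept Olson/IANA timezone names and apply whatever offset (standard or daylight) is in force for the date being evaluated, which can remove the need for a hand-maintained spring-forward/fall-back collection. Below is a minimal, hedged sketch only; the field names `hostTimezone`, `localHour` and `localMinute` are hypothetical and not part of the poster's schema, and the "roll forward to the event's weekday" arithmetic is deliberately omitted for brevity.

```javascript
// Sketch: a view that derives a DST-aware UTC instant from a stored local
// wall-clock time plus an assumed IANA timezone field on each event.
db.createView("eventsWithUtc", "events", [
  {
    $addFields: {
      // Take today's date in the host timezone and combine it with the
      // event's local start time; $dateFromParts returns the equivalent
      // UTC instant using the offset that is actually in force that day.
      startUtcToday: {
        $dateFromParts: {
          year: { $year: { date: "$$NOW", timezone: "$hostTimezone" } },
          month: { $month: { date: "$$NOW", timezone: "$hostTimezone" } },
          day: { $dayOfMonth: { date: "$$NOW", timezone: "$hostTimezone" } },
          hour: "$localHour",
          minute: "$localMinute",
          timezone: "$hostTimezone"
        }
      }
    }
  },
  { $sort: { startUtcToday: 1 } }
]);
```

Because the pipeline runs when the view is read, the computed offset tracks DST transitions automatically; what remains is only the weekday adjustment, which could be added in the same pipeline with $dayOfWeek and $dateAdd.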
null |
[
"aggregation",
"dot-net"
] |
[
{
"code": "db.Webs.aggregate([\n {\n $lookup:{\n from: \"Task1\", \n localField: \"Task1\",\n foreignField: \"_id\", \n as: \"Task1\"\n }\n },\n { $unwind:\"$Task1\" },\n {\n $lookup:{\n from: \"Task2\", \n localField: \"Task1.Task2Id\", \n foreignField: \"_id\",\n as: \"Task2\"\n }\n },\n { $unwind:\"$Task2\" },\n {\n $lookup:{\n from: \"Campaigns\", \n localField: \"Task2.CampaignId\", \n foreignField: \"_id\",\n as: \"Campaign\"\n }\n },\n { $unwind:\"$Campaign\" },\n { \n $project: {\n _id: 1,\n Status : 1,\n Description:1,\n Url:1\n PF : 1,\n ST :\"$Task1.Field1\",\n WSU : \"$Task1.Field2\",\n RefName : \"$Task1.Field3\",\n CampaignName : \"$Campaign.Name\"\n }\n }\n]);\n",
"text": "How can I replicate a query like this with C#. NetDriver?",
"username": "Alberto_Pacheco"
},
{
"code": "",
"text": "Any update of this yet?",
"username": "Jose_Jeino_Solivio"
}
] |
How to join multiple collections with $lookup using c# .Net drive
|
2021-07-30T00:11:12.419Z
|
How to join multiple collections with $lookup using c# .Net drive
| 3,871 |
null |
[
"aggregation",
"flutter"
] |
[
{
"code": "CartItem {\n late List<Product>products;\n}\n",
"text": "Hi @Desislava_St_Stefanova. I would appreciate some clarification on Synced Realm. My question are:Is querying synced real from client flutter app would be count as paid read ? As I do understand It would not as realm installed locally.Is it better to store just ObjectId or Reference with synced Realm?\nlate Product product: … => reference\nlate ObjectId productId: <some ObjectId(AAA)>Does synced Realm perform automatic $lookup(s) when I use reference? If yes, does it count as paid read query? E.g.:Every time I query CartItem products is accessible.",
"username": "shyshkov.o.a"
},
{
"code": "realm.subscriptions.waitForSynchronization()realm.syncSession.waitForUpload()realm.syncSession.waitForDownload()CartItemProductCartItem.productsProduct.cartItems",
"text": "Hi @shyshkov.o.a!\nWe are glad to have your interest on the realm!Flutter Realm SDK supports Flexible sync, which means that the developer is able to control the data that are downloaded to the client by query filter (subscription) and by user permissions.\nI you use flexible sync you have to define subscriptions as it is described in the documentation. Filtering by permissions is an addition option where you can use roles with custom user data.\nWe create a realm file on the device per user. The data for each user is in different file.\nAll the realm objects that match the subscriptions and user permissions for the current user are downloaded in background initially or if there is some data changed on the server. If you need to wait for the sync processes to complete you can use the following methods realm.subscriptions.waitForSynchronization(), realm.syncSession.waitForUpload() and realm.syncSession.waitForDownload().I hope the described will answer your questions!",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "Thanks a lot @Desislava_St_Stefanova. Really appreciate that you replied to me. As far as I know SYNCED Realm don’t support BackLinks(Inverse Relationships), and as I do understand N-N as well. Right? Just need to understand is this is the same thing N-N and Inverse Relationships.Do I have a chance to ask some of MongoDb expert to take a look on my schema design (UML diagram)? I’ve herd that you are helping start-up(s). A bit rude request form my side, just iggore it if it’s not appropriate.",
"username": "shyshkov.o.a"
},
{
"code": "",
"text": "As far as I know SYNCED Realm don’t support BackLinks(Inverse Relationships)Sync’d Realms operate and work exactly the same as a local only Realm; and ‘backlinks’ operate in the same fashion. You can think of a backlink as a computed property as they are not “stored” in the same way as a forward relationship. When an object has a backlink property, it’s automagically ‘filled in’ when the forward link is created.Backlinks allow you to traverse the object graph inversely, so they are an inverse relationship.Do I have a chance to ask some of MongoDb expertYou can always post coding questions here on the forums or on StackOverflow; keeping in mind that questions about object models (Schema), relationships between objects, what queries will be run etc are really difficult to answer without knowing the entire use case.Best bet is to crank up your development environment and write some code! Go through the getting started guides and work through the example projects. Then when you get stuck, post a question containing the code you’re stuck on and the troubleshooting.Welcome to the Realm Forums!",
"username": "Jay"
},
{
"code": "",
"text": "@Jay im a bit confused, docs has this statement:\n\nimage1125×1388 218 KB\n",
"username": "shyshkov.o.a"
},
{
"code": "",
"text": "Yes, that’s correct!They are not present anywhere! e.g. you can’t actually “see” an inverse relationship or set it’s value whether it be a sync’d or local Realm.It’s essentially a “computed property”, as mentioned, and that’s done at the device level.In other words, if there’s a Person and Dog class, and the Person has a one to many relationship with Dog, and Dog has a LinkingObjects (inverse relationship) to Person, of you use the Realm Browser and look at the schema, the Dog object doesn’t actually show the LinkingObjects property back to the person, even though it’s defined on the model.The key to the note you linked is that if you’re logged into the realm console and look at the Dog object, you will not see the property that provides the inverse relationship as it “doesn’t exist” on the server (either) - it’s computed at the device level, not server.",
"username": "Jay"
},
{
"code": "",
"text": "Thanx @Jay , that is an excellent explanation. Now I finally got it.",
"username": "shyshkov.o.a"
}
] |
Realm Sync clarifications
|
2023-08-10T13:53:53.854Z
|
Realm Sync clarifications
| 677 |
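To make the Person/Dog explanation above concrete, here is a rough sketch of how an inverse relationship is declared. It uses Realm's JavaScript SDK schema syntax purely for illustration (the thread is about the Flutter SDK, which expresses the same idea with its own annotations); the point is that the backlink is declared on the model but never assigned to or stored as its own column.

```javascript
// Forward relationship: a Person owns a list of Dogs.
const PersonSchema = {
  name: "Person",
  properties: {
    name: "string",
    dogs: "Dog[]",
  },
};

// Inverse relationship: computed from Person.dogs. You never write to
// `owners`; it is filled in automatically when a Person links to a Dog.
const DogSchema = {
  name: "Dog",
  properties: {
    name: "string",
    owners: { type: "linkingObjects", objectType: "Person", property: "dogs" },
  },
};
```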
null |
[
"serverless"
] |
[
{
"code": "",
"text": "Hi everyone.It’s been 9 months since the following post: When will serverless instances support Private Endpoints on Google Cloud?\nand there are still no signs of it coming up.When can we expect “the feature is being worked on and coming soon”?Thank you.",
"username": "robin4542"
},
{
"code": "",
"text": "Hi thereSupporting private endpoints on Google Cloud is still on our roadmap. Unfortunately, we will not be able to provide you with an estimated date at this time.",
"username": "Anurag_Kadasne"
}
] |
MongoDB Atlas Serverless (Private Endpoints on GCP)
|
2023-08-11T14:20:02.104Z
|
MongoDB Atlas Serverless (Private Endpoints on GCP)
| 495 |
null |
[] |
[
{
"code": "",
"text": "We have a case where we want to inspect the data of existing record before updating. While this is done, we do not want another thread to update the document. Basically, we need a way to synchronize the updates coming from different threads on a per document basis.\nInstead of implementing something outside to lock the document for update, wondering if there is any inherent support",
"username": "Vasu_Bathina"
},
{
"code": "",
"text": "Give us an example? (e.g. what will be checked)\notherwise it’s difficult to give suggestions.",
"username": "Kobe_W"
},
{
"code": "",
"text": "We have a collection that holds any type of data. We are API providers and we wrap the consumer data into our own wrapper and the API deals with only attributes of that wrapper. We want to give a feature to the consumers to provide a hook such that say they can decide to skip the update if in case a older version comes in and the document in the collection already has a newer version. Not everyone need that but some of our consumers may need that control over updates.\nBasically its like acquiring a lock, process and release lock",
"username": "Vasu_Bathina"
},
{
"code": "modificationsRequestedmodificationsRequested",
"text": "Hello, @Vasu_Bathina! Welcome to the community! You can assign to each update operation the modificationsRequested timestamp and simply do not update documents that have modificationsRequested with a higher value. This solution is described with example in this topic.UpdateOne method returns update result on which it is clear if the doc is updated, or not. Then act accordingly.",
"username": "slava"
}
] |
Checking a condition before updating q document in a single threaded manner
|
2023-08-11T17:02:03.578Z
|
Checking a condition before updating q document in a single threaded manner
| 352 |
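For reference, here is a minimal mongosh sketch of the timestamp-gated update described in the thread above. The collection and field names (`wrappedDocs`, `modifiedAt`, `payload`) are made up for illustration; the idea is simply that the filter refuses to match when the stored document already carries a newer timestamp, and the caller inspects the result to decide what to do.

```javascript
// Incoming change, carrying the timestamp of the version it represents.
const incoming = {
  _id: 1,
  payload: { status: "shipped" },
  modifiedAt: ISODate("2023-08-11T10:00:00Z")
};

const result = db.wrappedDocs.updateOne(
  // Match only if the stored version is older than the incoming one.
  { _id: incoming._id, modifiedAt: { $lt: incoming.modifiedAt } },
  { $set: { payload: incoming.payload, modifiedAt: incoming.modifiedAt } }
);

if (result.matchedCount === 0) {
  // Either the document does not exist, or it already holds a newer version;
  // a consumer-supplied hook could decide which case applies and how to react.
  print("update skipped");
}
```

Because the check and the write happen in a single atomic updateOne, no separate document lock is needed for this particular pattern.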
null |
[] |
[
{
"code": "",
"text": "Hello everyone. I’m relatively new to working with such a large amount of data in MongoDB, and I have a query that retrieves around 70,000 items. However, this query is taking approximately 40 seconds to complete. I’ve created a compound index for this query, and it appears to be in use. While everything seems to be functioning correctly, I’m wondering if this execution time of 40 seconds is reasonable for fetching this amount of data, or if there’s potential to improve its speed. Does anyone have any insights on this matter?",
"username": "satoru_turing"
},
{
"code": "mongosh",
"text": "Hey @satoru_turing,Welcome to the MongoDB Community!I’m relatively new to working with such a large amount of data in MongoDB, and I have a query that retrieves around 70,000 items. However, this query is taking approximately 40 seconds to complete.Could you please shareThis information will help us to assist you further.Please also let us know which version of MongoDB you are working with and whether you are executing the queries through mongosh or using any language driver.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] |
Query is taking 40 seconds to complete and find 70k items
|
2023-08-08T12:25:17.567Z
|
Query is taking 40 seconds to complete and find 70k items
| 340 |
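One way to produce the kind of detail requested above is an explain with execution statistics. A small mongosh sketch follows; the collection name, filter and field names are placeholders, not the poster's actual query.

```javascript
// Run the real query with explain("executionStats") to see whether the
// compound index is used and where the 40 seconds are being spent.
const stats = db.items
  .find({ category: "books", year: { $gte: 2020 } })
  .explain("executionStats");

printjson({
  winningStage: stats.queryPlanner.winningPlan.stage,    // e.g. FETCH over IXSCAN vs COLLSCAN
  keysExamined: stats.executionStats.totalKeysExamined,
  docsExamined: stats.executionStats.totalDocsExamined,  // ideally close to nReturned
  returned: stats.executionStats.nReturned,
  serverMillis: stats.executionStats.executionTimeMillis
});
```

If executionTimeMillis on the server is small but the overall call still takes tens of seconds, the time is going into transferring and deserialising ~70,000 documents on the client side, and a tighter projection or pagination will usually help more than further indexing.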
null |
[] |
[
{
"code": "",
"text": "Hello Team,\nLab test failing in the Intro to Mongo DB course even after successfully authenticating using the one time code .\nRegards,\nM.R.Meghana",
"username": "Meghana_M_R"
},
{
"code": "",
"text": "Hi Meghana,Welcome to the forums! Sorry you’re experiencing this issue with the labs. Please reach out to [email protected] with your issue and they’ll be able to help you. Thanks!",
"username": "Aiyana_McConnell"
}
] |
Lab test failing in the Intro to Mongo DB course
|
2023-08-11T12:34:08.714Z
|
Lab test failing in the Intro to Mongo DB course
| 471 |
null |
[
"queries",
"node-js",
"next-js"
] |
[
{
"code": "import { MongoClient, Db } from \"mongodb\";\nimport { NextApiRequest, NextApiResponse } from \"next\";\n\nconst resolveGetRequest = async (\n req: NextApiRequest,\n res: NextApiResponse,\n db: Db\n) => {\n const collection = db.collection(\"KnowledgePointComments\");\n const data = await collection.find({}).toArray();\n res.status(200).json({\n code: \"0\",\n messgae: \"success\",\n data: \"hello-world!\",\n });\n};\n\nconst resolvePostRequest = async (\n req: NextApiRequest,\n res: NextApiResponse,\n db: Db\n) => {};\n\nexport default async function handler(\n req: NextApiRequest,\n res: NextApiResponse\n) {\n const url = \"mongodb://localhost:27017\";\n const dbName = \"TEST_FRONTEND_EVERYDAY\";\n const client = new MongoClient(url);\n // console.log(client);\n try {\n const __res = await client.connect();\n console.log(\"@\", __res);\n console.log(\"connect mongodb success!\");\n\n const db: Db = client.db(dbName);\n\n // 获取随机知识点\n if (req.method === \"GET\") {\n resolveGetRequest(req, res, db);\n }\n\n // 发布知识点\n if (req.method === \"POST\") {\n resolvePostRequest(req, res, db);\n }\n } catch (err) {\n res.status(500).json({\n code: \"-1\",\n message: \"unable to connect mongodb\",\n });\n } finally {\n await client.close();\n }\n}\n\n",
"text": "environmentdetails:my guess:src/pages/api/v1/knowledge below:",
"username": "HC_P"
},
{
"code": "",
"text": "ps:I had tried these solution.pss:",
"username": "HC_P"
}
] |
Can not connet to mongodb database in nextjs
|
2023-08-11T15:20:43.047Z
|
Can not connet to mongodb database in nextjs
| 498 |
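Two things commonly bite in handlers like the one above, so here is a hedged sketch (not necessarily the poster's actual problem): the per-method handlers should be awaited so the finally block does not close the client mid-query, and the MongoClient is usually cached across requests and hot reloads instead of being created and closed on every call. The module path and global property name below are just conventions, not required names.

```javascript
// lib/mongo.js - share one MongoClient across Next.js API route invocations.
import { MongoClient } from "mongodb";

const url = "mongodb://localhost:27017"; // same URI as in the thread

// Cache the connection promise on globalThis so dev hot reloads and
// subsequent requests reuse it instead of reconnecting every time.
let clientPromise = globalThis._mongoClientPromise;
if (!clientPromise) {
  clientPromise = new MongoClient(url).connect();
  globalThis._mongoClientPromise = clientPromise;
}

export default clientPromise;

// In the API route:
//   const client = await clientPromise;
//   const db = client.db("TEST_FRONTEND_EVERYDAY");
//   // ...and do not call client.close() per request.
```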
null |
[
"java"
] |
[
{
"code": "",
"text": "Current Setup\nWe have a single node Mongodb (v4.4.2) installed and configured in an EC2 machine in us-east-1 region. This DB is connected by a frontend java app in same region with the total data of around 1.2 TB and total index of around 583675023360Due to cost cutting we didn’t go for replication/clustering yet, and the above setup is working fine.Issue\nWe wanted to plan for DR. Our backup process for DR wasWith above plan, DR site was up. Small datasets were loading successfully, but, for larger datasets data loading was too slow - we got 504 gateway timeout in our frontend app.Findings/Debugging\nWe tried to verify the total db size along with index in our DR and existing Mongodb setup - they were all same.\nAlso, checking on the Mongodb logs in DR, index scanning was seen.Could anyone please suggest what additional configuration we need to do for Mongodb that was created from an existing AMI if there is any to avoid this slowness? Are we missing something here?",
"username": "Ayush_2025"
},
{
"code": "",
"text": "What’s DR?Show us the explain output of your query?",
"username": "Kobe_W"
}
] |
MongoDB slow when created from a backup AMI in AWS
|
2023-08-11T10:20:06.031Z
|
MongoDB slow when created from a backup AMI in AWS
| 380 |
null |
[
"server",
"security",
"atlas"
] |
[
{
"code": "",
"text": "Which edition onwards support Audit feature?\nThis needs to be done at collection level or at document level also its possible?\nCan only selected documents be audited?",
"username": "ABC_DEF1"
},
{
"code": "",
"text": "",
"username": "Kobe_W"
}
] |
Where and how to enable Audit in MongoDB
|
2023-08-11T11:42:29.837Z
|
Where and how to enable Audit in MongoDB
| 470 |
null |
[
"aggregation"
] |
[
{
"code": " {\n mytoken: 'foo',\n attributes: [\n { attribute: 'Attr-A', value: 'value-for-a', op: ':=' }\n { attribute: 'Attr-B', value: 'value-for-b', op: ':=' },\n { attribute: 'Attr-C', value: 'value-for-c', op: '-=' },\n { attribute: 'Attr-D', value: 'value-for-d', op: '+=' },\n { attribute: 'Attr-E', value: 'value-for-e', op: ':=' }\n ]\n }\n// It doesn't work\n[[\n {\n \"$match\":{\n \"mytoken\":\"foo\"\n }\n },\n {\n \"$addFields\":{\n \"attributes.Attr-C\":\"$value\",\n \"attributes.Attr-E\":\"$value\",\n }\n },\n {\n \"$project\":{\n \"_id\":0,\n \"attributes\":{\n \"$objectToArray\":\"$attributes\"\n }\n }\n },\n {\n \"$unwind\":\"$attributes\"\n },\n {\n \"$project\":{\n \"_id\":0,\n \"attribute\":\"$attributes.k\",\n \"value\":\"$attributes.v\",\n \"op\":\"$op\"\n }\n }\n]]\n[\n { attribute: 'Attr-C', value: 'value-for-c', op: '-=' },\n { attribute: 'Attr-E', value: 'value-for-e', op: ':=' }\n]\nattributes: []",
"text": "I have the Collection with the below data.Therefore, I’d to get only the ‘attributes’ array result based on which attributes values I want.e.g: I was trying something like…My goal is to have a result like this:and also a possibility to get the entire attributes: [] array.Appreciate any help.",
"username": "Jorge_Pereira"
},
{
"code": "db.getCollection(\"Test\").aggregate([\n{\n $match:{\n \"mytoken\":\"foo\" \n }\n},\n{\n $project:{\n filteredArray:{\n $filter:{\n input:'$attributes',\n as:'theItem',\n cond:{$in:['$$theItem.attribute', ['Attr-B', 'Attr-A']]}\n }\n }\n }\n},\n{\n $unwind:'$filteredArray'\n},\n{\n $replaceRoot:{\n newRoot:'$filteredArray'\n }\n}\n])\ndb.getCollection(\"Test\").aggregate([\n{\n $match:{\n \"mytoken\":\"foo\" \n }\n},\n{ \n $unwind:'$attributes'\n},\n{\n $match:{\n 'attributes.attribute':{\n $in:[\n 'Attr-B', 'Attr-A'\n ]\n }\n }\n},\n{\n $replaceRoot:{\n newRoot:'$attributes'\n }\n}\n])\n",
"text": "Note I made the assumption that $value in your sample query was passed in…I can see two main options:This operator lets you filter an array, so we could do this:Alternatively with the other option, you could do something like this:",
"username": "John_Sewell"
},
{
"code": "[\n { attribute: 'Attr-A', value: 'value-for-a', op: ':=' }\n { attribute: 'Attr-B', value: 'value-for-b', op: ':=' },\n { attribute: 'Attr-C', value: 'value-for-c', op: '-=' },\n { attribute: 'Attr-D', value: 'value-for-d', op: '+=' },\n { attribute: 'Attr-E', value: 'value-for-e', op: ':=' }\n]\n",
"text": "Thank you @John_SewellEven it it possible to get only the attributes array content?eg: the entire attributes field:",
"username": "Jorge_Pereira"
},
{
"code": "db.Test.aggregate([\n{\n $match:{\n \"mytoken\":\"foo\" \n }\n},\n{ \n $unwind: '$attributes'\n},\n{\n $replaceRoot:{\n newRoot:'$attributes'\n }\n}\n])\n",
"text": "I am showing only the attributes: [] like: Not sure if it is the more fast and better way. therefore, it is working as well.",
"username": "Jorge_Pereira"
}
] |
Is it possible to get a array based on the fields using aggregate?
|
2023-08-11T04:18:12.567Z
|
Is it possible to get a array based on the fields using aggregate?
| 299 |
null |
[
"replication"
] |
[
{
"code": "5.0.6\n{\"t\":{\"$date\":\"2022-03-30T13:42:57.825+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn20\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:57.825+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn20\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.0.1.3:37808\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:57.825+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn21\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:57.825+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn21\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.0.1.3:37810\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:57.902+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn20\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\nncipalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.0.1.3:37808\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:57.979+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn20\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:57.979+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn20\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.0.1.3:37808\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.056+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn20\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.056+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn20\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.0.1.3:37808\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.382+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn23\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.382+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn22\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.382+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn22\",\"msg\":\"Authentication 
succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.0.1.2:46998\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.382+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn23\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.0.1.2:47000\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.454+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn23\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.455+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn23\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.0.1.2:47000\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.527+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn23\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.527+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn23\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.0.1.2:47000\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.600+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn23\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2022-03-30T13:42:58.600+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn23\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.0.1.2:47000\",\"extraInfo\":{}}}\n\n",
"text": "I’m running MongoDB 5.0.6 in our newly setup staging environment as a 3 member replica set.Comparing to our ancient production replica set running a much older version of MongoDB, we noticed our logs keep filling with entries like these:Trying to Google for any explanation yields zero results and it seems I’m the only person in this universe with this problem.Any help in understanding the root cause is much appreciated and I will gladly attempt to provide any information about our setup.",
"username": "ilari"
},
{
"code": "",
"text": "\nScreenshot 2022-03-30 at 20.51.131920×1215 677 KB\nI noticed these log messages appear at exactly 5 min intervals ",
"username": "ilari"
},
{
"code": "",
"text": "Anyone? This seems odd, since my setup was rather simple - just following the replica set deployment guide and applying some of the “production notes” optimisations.",
"username": "ilari"
},
{
"code": "",
"text": "I switched the replica set from keyfile auth to clusterAuthMode x509 and the errors disappeared ",
"username": "ilari"
},
{
"code": "{\"t\":{\"$date\":\"2023-02-27T19:45:33.609+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn37\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-02-27T19:45:33.609+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn37\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.2.1.5:58803\",\"extraInfo\":{}}}\n",
"text": "Hello, were you ever able to resolve this using a keyfile? I’m looking at the same thing. I tried x509 certs briefly, but saw similarly verbose logs. Made me think it was a network-related thing, like a drop and reconnect, but I see no evidence of that in my network logs.I converted a 5.0.14 stand-alone using authorization to a rs. Used a brain-dead simple configuration; rs config, keyfile on all nodes, bind using DNS name, matching the primary with auth.Everything is working as expected, but there are endless “Client has attempted to reauthenticate as a single user” logs coming from the IPs of the secondary nodes in ACCESS.So it is not an issue with the keyfile auth, as the replica set is started, the nodes have initiated, and the data is replicated. The admin DB users are replicating on all nodes as well.I tried turning off authorization on all nodes and restarting, and it still logs these reauthentication from the nodes to the __system user and local db messages.There is minimal documentation on the internal SCRAM communication between replica nodes. I looked at the source code, and there is one reference to this log as a warning that the same credentials were reused. So it seems 100% benign in nature. I’d like to figure out what I did wrong, if anything.",
"username": "AllenI"
},
{
"code": "{\"t\":{\"$date\":\"2023-02-27T21:25:25.184+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286300, \"ctx\":\"conn33\",\"msg\":\"Starting authentication step\",\"attr\":{\"step\":\"SaslSupportedMechanisms\"}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.184+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5859101, \"ctx\":\"conn33\",\"msg\":\"Set user name for session\",\"attr\":{\"userName\":{\"user\":\"__system\",\"db\":\"local\"},\"oldName\":{\"user\":\"\",\"db\":\"\"}}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.184+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286301, \"ctx\":\"conn33\",\"msg\":\"Finished authentication step\",\"attr\":{\"step\":\"SaslSupportedMechanisms\"}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.185+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286300, \"ctx\":\"conn32\",\"msg\":\"Starting authentication step\",\"attr\":{\"step\":\"SaslSupportedMechanisms\"}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.185+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5859101, \"ctx\":\"conn32\",\"msg\":\"Set user name for session\",\"attr\":{\"userName\":{\"user\":\"__system\",\"db\":\"local\"},\"oldName\":{\"user\":\"\",\"db\":\"\"}}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.185+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286301, \"ctx\":\"conn32\",\"msg\":\"Finished authentication step\",\"attr\":{\"step\":\"SaslSupportedMechanisms\"}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.191+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286300, \"ctx\":\"conn33\",\"msg\":\"Starting authentication step\",\"attr\":{\"step\":\"SaslStart\"}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.191+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286203, \"ctx\":\"conn33\",\"msg\":\"Updating user name for session\",\"attr\":{\"userName\":{\"user\":\"\",\"db\":\"local\"},\"oldName\":{\"user\":\"\",\"db\":\"\"}}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.191+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286200, \"ctx\":\"conn33\",\"msg\":\"Setting mechanism name\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\"}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.191+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286304, \"ctx\":\"conn33\",\"msg\":\"Determined mechanism for authentication\"}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.191+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286301, \"ctx\":\"conn33\",\"msg\":\"Finished authentication step\",\"attr\":{\"step\":\"SaslStart\"}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.191+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286300, \"ctx\":\"conn32\",\"msg\":\"Starting authentication step\",\"attr\":{\"step\":\"SaslStart\"}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.191+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286203, \"ctx\":\"conn32\",\"msg\":\"Updating user name for session\",\"attr\":{\"userName\":{\"user\":\"\",\"db\":\"local\"},\"oldName\":{\"user\":\"\",\"db\":\"\"}}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.191+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286200, \"ctx\":\"conn32\",\"msg\":\"Setting mechanism name\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\"}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.191+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286304, \"ctx\":\"conn32\",\"msg\":\"Determined mechanism for authentication\"}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.191+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286301, \"ctx\":\"conn32\",\"msg\":\"Finished authentication step\",\"attr\":{\"step\":\"SaslStart\"}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.196+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286300, 
\"ctx\":\"conn33\",\"msg\":\"Starting authentication step\",\"attr\":{\"step\":\"SaslContinue\"}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.196+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn33\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.196+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn33\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.2.1.4:56091\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-02-27T21:25:25.196+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286305, \"ctx\":\"conn33\",\"msg\":\"Marking as cluster member\"}\n\n",
"text": "I set the access network, and replica level logs to verbose, and I do see “Error while waiting for hello response” where the time limit was exceeded in the replica log. I’m assuming that just means it timed out and nothing changed in the topology though.Would adjusting timeout settings from the default help here, or is this normal behavior?I tried doubling the values a few times, then reverted to the default values in the replication cfg as this seems unrelated to the reauth messages and made no difference on the hello messages.I also see good information in level-3 ACCESS logs. It appears to behave as though it has no user/pass (empty strings for both fields), and it falls back to the system default and is able to proceed? The nodes are obviously not authenticating properly. Did I miss some step in setting up the nodes?",
"username": "AllenI"
},
{
"code": "{\"t\":{\"$date\":\"2023-02-28T17:03:27.528+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286300, \"ctx\":\"conn33\",\"msg\":\"Starting authentication step\",\"attr\":{\"step\":\"SaslStart\"}}\n{\"t\":{\"$date\":\"2023-02-28T17:03:27.529+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286203, \"ctx\":\"conn33\",\"msg\":\"Updating user name for session\",\"attr\":{\"userName\":{\"user\":\"\",\"db\":\"local\"},\"oldName\":{\"user\":\"\",\"db\":\"\"}}}\n{\"t\":{\"$date\":\"2023-02-28T17:03:27.529+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286200, \"ctx\":\"conn33\",\"msg\":\"Setting mechanism name\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\"}}\n{\"t\":{\"$date\":\"2023-02-28T17:03:27.529+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286304, \"ctx\":\"conn33\",\"msg\":\"Determined mechanism for authentication\"}\n{\"t\":{\"$date\":\"2023-02-28T17:03:27.529+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286301, \"ctx\":\"conn33\",\"msg\":\"Finished authentication step\",\"attr\":{\"step\":\"SaslStart\"}}\n{\"t\":{\"$date\":\"2023-02-28T17:03:27.533+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286300, \"ctx\":\"conn34\",\"msg\":\"Starting authentication step\",\"attr\":{\"step\":\"SaslContinue\"}}\n{\"t\":{\"$date\":\"2023-02-28T17:03:27.533+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn34\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-02-28T17:03:27.533+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn34\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.2.1.5:52348\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-02-28T17:03:27.534+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286305, \"ctx\":\"conn34\",\"msg\":\"Marking as cluster member\"}\n{\"t\":{\"$date\":\"2023-02-28T17:03:27.534+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286203, \"ctx\":\"conn34\",\"msg\":\"Updating user name for session\",\"attr\":{\"userName\":{\"user\":\"__system\",\"db\":\"local\"},\"oldName\":{\"user\":\"\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-02-28T17:03:27.534+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286306, \"ctx\":\"conn34\",\"msg\":\"Successfully authenticated\",\"attr\":{\"client\":\"10.2.1.5:52348\",\"isSpeculative\":false,\"isClusterMember\":true,\"mechanism\":\"SCRAM-SHA-256\",\"user\":\"__system\",\"db\":\"local\"}}\n{\"t\":{\"$date\":\"2023-02-28T17:03:27.534+00:00\"},\"s\":\"D3\", \"c\":\"ACCESS\", \"id\":5286301, \"ctx\":\"conn34\",\"msg\":\"Finished authentication step\",\"attr\":{\"step\":\"SaslContinue\"}}\n",
"text": "I am using Windows for my deployment (not by my choice), so it appears my issue are probably related to that.I tried to force SCRAM-SHA-1 authorization because I was seeing Sasl mechanisms being used (despite not setting those up), but the nodes still authorize with SHA-256 regardless, so I can’t even rule that out. But I also tried 1/2 a dozen other configuration settings (mongod.cfg and windows security and network) to try to get rid of these messages, and they are present no matter what. I also upgraded to Mongo 6 from 5.0.14 to see if it was a bug or something.Below is the node auth process I am seeing.",
"username": "AllenI"
},
{
"code": "",
"text": "I just upgraded my MongoDB replica set from v4.4.16 to MongoDB v5.0.14 and these message started showing up in mongod.log after the upgrade:{“t”:{“$date”:“2023-03-07T15:21:02.259-05:00”},“s”:“W”, “c”:“ACCESS”, “id”:5626700, “ctx”:“conn29”,“msg”:“Client has attempted to reauthenticate as a single user”,“attr”:{“user”:{“user”:“__system”,“db”:“local”}}}Any idea how to resolve this ?Please help.Sally",
"username": "Yook_20450"
},
{
"code": "",
"text": "Hi,Did this issue went away for you?\nI just upgraded my replica set to 5.0.14 and I got the same warning messages showing up every 5 min in mongod.log.Please let me know.Thanks !Sally",
"username": "Yook_20450"
},
{
"code": "",
"text": "The tl;dr here is that this appears to be normal behavior now in all mongo replica set deployments using keyfile/auth in Mongo 5 and beyond, both Windows and Linux.Per the documentation, the nodes use the keyfile to initiate and gain access to the replica set, then they do not use it for further auth, and auth is required when using a keyFile now in Mongo 5 and beyond.I see a reauthorization log entry for each secondary node and for each DB in my deployment, so it appears the nodes reauth to each DB every 5 minutes.The documentation does not explain the keyfile SCRAM internal reauthentication mechanism between replica set nodes in any great detail. The source code references this warning message in one place in a way that seems it could/should be avoided. There are no steps that I am aware of to “authenticate” the nodes internally. Technically, they all share the admin DB, so you would assume they would default to that for db credentials, but you cannot specify a config or service flag value anywhere.Even x509 is chatty in this regard, logging constant messages every 5 minutes when the nodes connect.I had upgraded from a Mongo 4.4 install to a Mongo 5.0.14 prior to creating the replica set myself from a stand-alone instance. However, even a fresh install with 3 locally hosted instances of mongo using auth and a keyfile in either Windows or Linux will cause these log messages to appear endlessly when using Mongo 5 or greater, following the deployment instructions exactly. In Mongo 4, you could use keyfile without auth. In Mongo 5 and greater, auth is required, so we will see these logs.I tried things like altering all kinds of config values, manual auth from the secondary nodes, creating new users with various roles on the primary in multiple ways, reconfig of replica set configuration settings, etc. Unless there is some step to authorize these nodes to each other that I am missing, this is a lost cause chasing this.I’m guessing nobody cares about this because keyfile is not used in enterprise, and not recommended in production environments. In my particular case, this is a 100% internal use db and I am using TLS already, so I didn’t want to go through the trouble/complication of using x509, as they would be self-signed anyway. Keeping things simple makes it much easier to maintain and debug.",
"username": "AllenI"
},
{
"code": "",
"text": "Thanks for the info Allen.\nI had opened a Jira ticket with MongoDB and they are now investigating.\nI will share the info with you when I have the result from MongoDB support.Thanks !\nSally",
"username": "Yook_20450"
},
{
"code": "",
"text": "Thanks Sally, appreciated. Please share any information you receive from that ticket in this thread to help direct anyone else with this issue as well.",
"username": "AllenI"
},
{
"code": "",
"text": "Anything new on this Sally?Thanks",
"username": "AllenI"
},
{
"code": "",
"text": "I am assuming this isn’t going anywhere. I am planning to move away from using MongoDB now anyway.",
"username": "AllenI"
},
{
"code": "",
"text": "So what did the MongoDB support said?",
"username": "sebtheone"
}
] |
"Client has attempted to reauthenticate as a single user" messages from replica set members filling our logs
|
2022-03-30T13:53:01.267Z
|
“Client has attempted to reauthenticate as a single user” messages from replica set members filling our logs
| 4,683 |
null |
[] |
[
{
"code": "",
"text": "Hi all,Is there a way that I can add a Trigger function via Terraform?I am able to create a set of triggers via Terraform but not the App function.thanks",
"username": "gabrieleravanelli"
},
{
"code": "mongodbatlas_event_trigger",
"text": "Hi @gabrieleravanelli, welcome to the community Have you tried using the following mongodbatlas_event_trigger resource to see if it works for you / suits your use case?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Sorry, just misread. I believe you may have already tried that resource / it’s not what you’re after.I’ll try find out if creation of the trigger function via terraform is possible (correct me if i’m wrong here).",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I don’t believe this is available as of yet but appears to be planned for future. You can refer to the following feedback post regarding this : Add Terraform resource for functions – MongoDB Feedback EngineYou can keep track of the feedback post link and check occasionally for any updates.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "thanks for the response I will try to use the app function API v3 instead for the time being",
"username": "gabrieleravanelli"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Terraform API adding a Trigger function
|
2023-08-10T14:57:56.516Z
|
Terraform API adding a Trigger function
| 307 |
null |
[] |
[
{
"code": "Package 'mongodb' has no installation candidate",
"text": "I am following this guide: Create installer script for Debian/Ubuntu by TurtleIdiot · Pull Request #985 · Grasscutters/Grasscutter · GitHubI was trying to find out why MongoDB error saying: Package 'mongodb' has no installation candidate didn’t exist or something in the same folder am I mistaken?",
"username": "Samet_Takao"
},
{
"code": "",
"text": "Hi @Samet_Takao,\nYou can try to follow the official documentation:For Debian:For Ubuntu:Regards",
"username": "Fabio_Ramohitaj"
}
] |
Package 'mongodb' has no installation candidate
|
2023-08-10T20:34:39.524Z
|
Package ‘mongodb’ has no installation candidate
| 1,529 |
null |
[
"aggregation",
"node-js"
] |
[
{
"code": "{\n ...,\n guides: [\n // array of object ids\n ]\n}\n{\n ...,\n guides: [\n {\n ...,\n sections: [\n {\n ...\n }\n ]\n }\n ]\n}\n\n {\n $match: {\n $expr: {\n $eq: [\"$name\", cookbookName],\n },\n },\n },\n {\n $lookup: {\n from: \"guides\",\n localField: \"guides\",\n foreignField: \"_id\",\n as: \"guides\",\n },\n },\n {\n $unwind: {\n path: \"$guides\",\n },\n },\n {\n $lookup: {\n from: \"sections\",\n localField: \"guides.sections\",\n foreignField: \"_id\",\n as: \"guides.sections\",\n },\n },\n {\n $group: {\n _id: \"$_id\",\n name: {\n $first: \"$name\",\n },\n avatar_url: {\n $first: \"$avatar_url\",\n },\n banner_url: {\n $first: \"$banner_url\",\n },\n game: {\n $first: \"$game\",\n },\n preview: {\n $first: \"$preview\",\n },\n roles: {\n $first: \"$roles\",\n },\n streams: {\n $first: \"$streams\",\n },\n guides: {\n $push: \"$guides\",\n },\n },\n },\n",
"text": "Is it possible to populate nested arrays of object ids while preserving order? I understand that lookup is essentially random but is there a way to use the initial array of object ids to set the order for the populated ids after the lookups?My initial model looks like this:and the final would look likeCurrent pipeline - works but items are unordered.",
"username": "steffan"
},
{
"code": "",
"text": "I guess that for efficient reasons it has been decided to not guaranty the order of the localField array.I wrote I guess because I am not too sure it is the reason but it makes sense to me.1 - it is more efficient to return a document that is already in RAM if it matches rather than flushing it out and the reread it later to return it in the correct order.2 - it is more efficient to NOT return duplicate documents when the source array mentioned it multiple times.3 - they could do the lookup more efficiently by just doing a find in the looked up collection using a simple $in query with the localField array as argument, and a find does not sort the document and does not return duplicates.So it is simpler, more efficient and sufficient for most use-case to return an array without a specific order. And with aggregation you can add stages to get the order you want with $sortArray, sometimes you would use $map on the source array.In your case, your first localField array is guides and you do $unwind the result. So in your case to preserve the original guides order, you could $unwind before the $lookup rather than $unwind after.",
"username": "steevej"
},
{
"code": "$map$filter$unwind$group$group$unwind {\n $lookup: {\n from: \"guides\",\n localField: \"guides\",\n foreignField: \"_id\",\n as: \"guides_arr\"\n }\n },\n {\n $lookup: {\n from: \"sections\",\n localField: \"guides_arr.sections\",\n foreignField: \"_id\",\n as: \"sections_arr\"\n }\n },\n {\n $addFields: {\n guides_arr: {\n $map: {\n input: \"$guides_arr\",\n as: \"g\",\n in: {\n $mergeObjects: [\n \"$$g\",\n {\n sections: {\n $map: {\n input: \"$$g.sections\",\n as: \"s\",\n in: {\n $first: {\n $filter: {\n input: \"$sections_arr\",\n cond: { $eq: [\"$$s\", \"$$this._id\"] }\n }\n }\n }\n }\n }\n }\n ]\n }\n }\n }\n }\n },\n {\n $addFields: {\n guides: {\n $map: {\n input: \"$guides\",\n as: \"g\",\n in: {\n $first: {\n $filter: {\n input: \"$guides_arr\",\n cond: { $eq: [\"$$g\", \"$$this._id\"] }\n }\n }\n }\n }\n }\n }\n },\n {\n $unset: [\n \"guides_arr\",\n \"sections_arr\"\n ]\n }\n",
"text": "Hello @steffan, Welcome to the MongoDB community forum,Is it possible to populate nested arrays of object ids while preserving order?It will result documents in natural/stored order from the lookup collection.is there a way to use the initial array of object ids to set the order for the populated ids after the lookups?You can do it by using $map and $filter operators after $lookup, but I can see you are doing $unwind and then the $group stage, also the $group stage does not preserve the order of documents!I think you don’t need to $unwind here, see the below pipeline,Playground",
"username": "turivishal"
},
{
"code": "",
"text": "Wow thank you so much, this was amazing!",
"username": "steffan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Populate Array of ObjectIds While Preserving Order
|
2023-08-08T06:20:41.510Z
|
Populate Array of ObjectIds While Preserving Order
| 446 |
null |
[
"connector-for-bi"
] |
[
{
"code": "",
"text": "Hi together,first of: we’ve been using mongodb now for a couple of years without any major problems, awesome, thank you guys!We do currently have a bi-connector instance installed to, to be able to connect multiple bi-tools like powerbi and tableau.\nThis does work like a charm within desktop environments using the mongo-odbc-driver.\nWe internally rolled this out to multiple departments without further issues.Now to the problem: We are unable to install the driver on any windows 2012 r2 server instance.The following error occurs while installing, after downloading the msi and following the installer:“The setup routines for the MongoDB ODBC 1.4.1 Unicode Driver ODBC driver could not be loaded due to system error code 126: The specified module could not be found. (C:\\Program\\Files\\MongoDB\\ODBC\\1.4\\bin\\mdbodbcS.dll)”Does anyone have any idea on what to look for here?Thanks!\nMarkus",
"username": "bla_blubb"
},
{
"code": "",
"text": "Similar issue here. Solved by installing Visual C++ Redistributable for Visual Studio 2015, as indicated in the instructions.",
"username": "SergioMagnettu"
}
] |
Mongodb odbc driver win server 2012
|
2020-08-10T07:28:01.490Z
|
Mongodb odbc driver win server 2012
| 2,745 |
null |
[
"storage"
] |
[
{
"code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /data/var/lib/mongodb\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n{\"t\":{\"$date\":\"2023-08-09T12:28:18.565+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-09T12:28:18.565+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-09T12:28:18.565+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-08-09T12:28:18.565+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.215+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.215+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingIn>\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.216+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.217+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueu>\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.225+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrati>\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.225+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMig>\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.225+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\">\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.225+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.225+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":73698,\"port\":27017,\"dbPath\":\"/data/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\">\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.225+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.8\",\"gitVersion\":\"3d84c0dd4e5d99be0d69003652313e7eaf4cdd74\",\"openSSLV>\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.225+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"22.04\"}}}\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.225+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017>\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.226+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/c>\n{\"t\":{\"$date\":\"2023-08-09T12:36:18.226+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=15484M,session_max=33000,eviction=(threads_min=4,threads_max>\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.079+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":853}}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.079+00:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.480+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. 
Read and write access to data and configuration is unrestricted\",\"ta>\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.480+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":5123300, \"ctx\":\"initandlisten\",\"msg\":\"vm.max_map_count is too low\",\"attr\":{\"currentValue\":65530,\"recommendedMinimum\":102400,\"maxConns\":51200},\"tags\":[\"sta>\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.500+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"admin.system.version\",\"uuidDisposition\":\"provided\",\"uuid\":{\"uuid\":{\"$uuid\":\"4>\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.690+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"458a0e57-e541-4be3-bc87-a9be>\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.690+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":20459, \"ctx\":\"initandlisten\",\"msg\":\"Setting featureCompatibilityVersion\",\"attr\":{\"newVersion\":\"6.0\"}}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.690+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"setFCV\"}}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.690+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"in>\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.690+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"in>\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.690+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.690+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.690+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.691+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/data/var/lib/mongodb/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.693+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.startup_log\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"c9a>\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.899+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"c9a0e516-d2c8-4526-a0c5-7a87>\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.899+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration 
state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.900+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.901+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20712, \"ctx\":\"LogicalSessionCacheReap\",\"msg\":\"Sessions collection is not set up; waiting until next sessions reap interval\",\"attr\":{\"error\":\"NamespaceNo>\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.902+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"config.system.sessions\",\"uuidDisposition\":\"generated\",\"uuid\":{\"u>\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.902+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.902+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2023-08-09T12:36:19.902+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-08-09T12:36:20.200+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"420a6230-492e-4>\n{\"t\":{\"$date\":\"2023-08-09T12:36:20.200+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"420a6230-492e-4>\n{\"t\":{\"$date\":\"2023-08-09T12:36:20.200+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"config.system.sessions\",\"command\":{\"createIndexes\":\"system.s>\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.394+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"thread26\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":{\"ts_sec\":1691584988,\"ts_usec\":394320,\"thread\":\"73698:0x7fcde9c086>\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.394+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"thread26\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":{\"ts_sec\":1691584988,\"ts_usec\":394483,\"thread\":\"73698:0x7fcde9c086>\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.394+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"thread26\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":{\"ts_sec\":1691584988,\"ts_usec\":394538,\"thread\":\"73698:0x7fcde9c086>\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.394+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"thread26\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":-31804,\"message\":{\"ts_sec\":1691584988,\"ts_usec\":394599,\"thread\":\"73698:0x7fcde9>\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.394+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23089, \"ctx\":\"thread26\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":50853,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp\",\"line\":712}}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.394+00:00\"},\"s\":\"F\", 
\"c\":\"ASSERT\", \"id\":23090, \"ctx\":\"thread26\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.394+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"thread26\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Aborted).\\n\"}}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"thread26\",\"msg\":\"BACKTRACE\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"5611E42FDD14\",\"b\":\"5611DF499000\",\"o\":\"4E64D14\",\"s\":\"_ZN5mongo18stack_trace_de>\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5611E42FDD14\",\"b\":\"5611DF499000\",\"o\":\"4E64D14\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__>\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5611E4300259\",\"b\":\"5611DF499000\",\"o\":\"4E67259\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"C\":\"mongo>\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5611E42F9F36\",\"b\":\"5611DF499000\",\"o\":\"4E60F36\",\"s\":\"abruptQuit\",\"s+\":\"66\"}}}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FCDEE662520\",\"b\":\"7FCDEE620000\",\"o\":\"42520\",\"s\":\"__sigaction\",\"s+\":\"50\"}}}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FCDEE6B6A7C\",\"b\":\"7FCDEE620000\",\"o\":\"96A7C\",\"s\":\"pthread_kill\",\"s+\":\"12C\"}}}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FCDEE662476\",\"b\":\"7FCDEE620000\",\"o\":\"42476\",\"s\":\"raise\",\"s+\":\"16\"}}}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FCDEE6487F3\",\"b\":\"7FCDEE620000\",\"o\":\"287F3\",\"s\":\"abort\",\"s+\":\"D3\"}}}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5611E13BB518\",\"b\":\"5611DF499000\",\"o\":\"1F22518\",\"s\":\"_ZN5mongo25fassertFailedWithLocationEiPK>\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5611E0F640BA\",\"b\":\"5611DF499000\",\"o\":\"1ACB0BA\",\"s\":\"_ZN5mongo12_GLOBAL__N_141mdb_handle_erro>\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5611E1F444C1\",\"b\":\"5611DF499000\",\"o\":\"2AAB4C1\",\"s\":\"__eventv\",\"s+\":\"E61\"}}}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, 
\"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5611E0F783F9\",\"b\":\"5611DF499000\",\"o\":\"1ADF3F9\",\"s\":\"__wt_panic_func\",\"s+\":\"13A\"}}}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5611E1EC06D5\",\"b\":\"5611DF499000\",\"o\":\"2A276D5\",\"s\":\"__log_server\",\"s+\":\"485\"}}}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FCDEE6B4B43\",\"b\":\"7FCDEE620000\",\"o\":\"94B43\",\"s\":\"pthread_condattr_setpshared\",\"s+\":\"513\"}}}\n{\"t\":{\"$date\":\"2023-08-09T12:43:08.470+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"thread26\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FCDEE746A00\",\"b\":\"7FCDEE620000\",\"o\":\"126A00\",\"s\":\"__xmknodat\",\"s+\":\"23\nIt looks like the MongoDB server encountered a fatal error and has crashed. The log you've provided shows a sequence of events leading up to the crash. Here's a summary of what's happening:\n\n1. The server starts and goes through its initialization process.\n2. Various components and services are registered, and the server initializes its storage engine (WiredTiger).\n3. The server starts listening on certain addresses and ports for incoming connections.\n4. Several index builds and collections are created.\n5. Various configuration options and settings are applied.\n\nHowever, at some point, the server encounters a critical error:\n\n1. There are several error messages from the WiredTiger storage engine, indicating various types of errors.\n2. The error messages are related to file system operations, thread operations, and other internal operations of the storage engine.\n\nFinally, the server crashes:\n\n1. An \"assertion\" failure is triggered, which is a mechanism in programming to catch logical inconsistencies.\n2. The assertion failure leads to the server generating a \"fatal message\" and ultimately aborting.\n\nThe exact cause of this crash could be due to a variety of factors, such as hardware issues, misconfiguration, corruption of data files, or software bugs. To troubleshoot and resolve this issue, you might need to analyze the error messages in more detail, inspect the MongoDB configuration settings, and investigate the health of the underlying hardware and file system. If you have access to MongoDB support or a community forum, reaching out for assistance can also be helpful.\n",
"text": "its a clean install of latest ver as per instructions for installation on ubuntu server 22.04.I have a /data directory which is an (unlocked) LUKS partition.i have updated mongod.conf to store data in subdirs of /data :Mongod when started. Logs read:Since I feel out of my depth, here, I resorted to using GPT 3.5, it tells me:Please can someone shed light on this. I’m stoked to go live, but can’t proceed without mongo! Thanks all,\nSam",
"username": "sam_ames"
},
{
"code": "abort()",
"text": "That was a great demonstration of how useless ChatGPT can be \nEverything it told you is right there in the error dump.SIG 6 is abort() called by the C++ library.It has some problem with your file system or its understanding of the layout, I am inferring.If you set the configuration back the way it was before you changed it, does it work?This is like the old joke:\n“Doctor, it hurts when I do this.”\n“Don’t do that!”",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I absolutely must use my /data dir.I will revert the db path and get back to you in a couple of hours.Many thanks for your support",
"username": "sam_ames"
},
{
"code": "",
"text": "Okay, yes: The db server stays up when using the default path.(Now I have an error regarding my multer-gridfs-storage connection from my express api, which doesn’t exist in my local version… I set the env vars when launching in pm2, so not sure what th- e issue is, but I’ll investigate tomorrow. Now back to topic!)Please tell me how I can keep the server up, whilst storing all my user data in subdirs of /dataThanks for your support \nSam",
"username": "sam_ames"
},
{
"code": "",
"text": "Please tell me how I can keep the server up, whilst storing all my user data in subdirs of /dataI am afraid I don’t know what the problem is. All I can suggest is that you carefully read all information on MongoDB configuration and try to determine what causes the problem.If your file system is unusual, some kind of mount, that might be the problem. I suspect that the engine wants direct file system access.But I’m not really sure. Perhaps a MongoDB employee will have a thought on this.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Oh dear. I’ll do some more work on this tomorrow, then. It is an encrypted volume, that is part of a vg, mounted at /dataMy host os is ubuntu 22.04I really hope I can get this completedHopefully someone knows…Thanks,\nSam",
"username": "sam_ames"
},
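The WiredTiger messages in the posted log report error 13, which on Linux is EACCES (permission denied), so the most likely culprit is ownership/permissions (or an AppArmor/SELinux policy) on the new dbPath rather than the LUKS volume itself. A minimal diagnostic sketch, assuming the dbPath from the log (/data/var/lib/mongodb) and the default mongodb service user from the Ubuntu package:

```bash
# Confirm who owns the custom dbPath and whether mongod's user can write to it
ls -ld /data/var/lib/mongodb

# Give the mongodb user (default for the Ubuntu/Debian packages) ownership
sudo chown -R mongodb:mongodb /data/var/lib/mongodb
sudo chmod 0750 /data/var/lib/mongodb

# If AppArmor is enforcing a profile for mongod, the new path may also need to
# be allowed there; check for recent denials:
sudo dmesg | grep -i denied
```

This is only a sketch based on the error code in the log, not a confirmed root cause.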
{
"code": "",
"text": "@Tarun_Gaur do you have any suggestions for this user ? ^^^^",
"username": "Jack_Woehr"
}
] |
Why does mongo crash?
|
2023-08-09T13:18:10.380Z
|
Why does mongo crash?
| 556 |
null |
[
"queries",
"compass"
] |
[
{
"code": "",
"text": "Hello, please, I need to make a query to rescue the information from yesterday. Example \"{type: “REFUND”, Date: “yestersay”} in mongodb compass",
"username": "Alvaro_Gallardo"
},
{
"code": "",
"text": "Unless I’m being stupid I have no idea what you need from your post.Are you trying to recover deleted data or use Compass to find data or see what queries you ran yesterday?If you want help with queries, please attach sample documents so people can see exactly what’s going on without making (too many) assumptions.",
"username": "John_Sewell"
},
{
"code": "",
"text": "I need to make a query to retrieve data from yesterday from a database, but I don’t want to add the date",
"username": "Alvaro_Gallardo"
},
{
"code": "{\n _id:'XXXXXXX123456',\n documentDate:ISODate('2023-08-10 11:23:000000')\n}\n",
"text": "Ahh, ok so you have a database with a collection, lets call them Database and Collection and the collection has documents like this:And you want a query that does always gets data from yesterday without needing to update it every day to enter yesterdays date?Or…Do you want to extract the document date from the ID field and then use that to look for documents that were created yesterday as opposed to a field you’ve set on the document being for yesterday?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Exactly, I need to execute this query every day to extract data from the previous day",
"username": "Alvaro_Gallardo"
},
{
"code": "",
"text": "And what do your documents look like? What field determines documents from the previous day? The ID ObjectID or a field that’s set on the document?",
"username": "John_Sewell"
},
{
"code": "",
"text": "This is the query {type: “REFUND”, accountingDate: “2023-08-09”} and it works but I have to run it daily and I want the date to be added automatically from the previous day",
"username": "Alvaro_Gallardo"
},
{
"code": "",
"text": "That’s not an actual date then, it’s a string? And what format is the date, I’m assuming it’s ISO as opposed to something exotic like YYYY-DD-MM.",
"username": "John_Sewell"
},
{
"code": "",
"text": "example\nimage736×486 24.2 KB\n",
"username": "Alvaro_Gallardo"
},
{
"code": "db.getCollection(\"Test\").explain().find(\n{\n $expr:{\n $eq:[\n '$accountingDate',\n {\n $dateToString:{\n date:{\n $dateAdd:{\n startDate:'$$NOW',\n unit:'day',\n amount:-1\n }\n },\n format:'%Y-%m-%d'\n }\n } \n ]\n }\n}\n)\n\n",
"text": "Ok, so if you want to compare to a calculated date you want to use $expr operation:This means you can use all the aggregation options and compare data (similar to if you want to compare two fields within a document against each other in a find).With the aggregation style query comes the ability to use aggregation variables such as $$NOW:You’ll want to compare the field in the doc against the current date, with one day taken away then re-formatted to YYYY-MM-DD format:Something like this, if doing this on a lot of data, obviously create an index…",
"username": "John_Sewell"
},
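A minimal sketch of a supporting index for the query above, using the collection and field names from this thread (Test, type, accountingDate). Whether the $expr comparison can actually use the index depends on the server version and the shape of the expression, so verify with explain():

```javascript
// Compound index covering the equality filter on "type" and the string date
// field "accountingDate" used in the $expr comparison above.
db.getCollection("Test").createIndex({ type: 1, accountingDate: 1 })

// Check whether the planner picks it up for your query:
db.getCollection("Test").find({
  type: "REFUND",
  $expr: {
    $eq: [
      "$accountingDate",
      {
        $dateToString: {
          date: { $dateAdd: { startDate: "$$NOW", unit: "day", amount: -1 } },
          format: "%Y-%m-%d"
        }
      }
    ]
  }
}).explain("executionStats")
```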
{
"code": "",
"text": "And you can put that query into compass:\nimage973×440 24.4 KB\n",
"username": "John_Sewell"
},
{
"code": "",
"text": "thank you very much problems solved thank you",
"username": "Alvaro_Gallardo"
}
] |
Query mongo yesterday
|
2023-08-10T16:13:42.091Z
|
Query mongo yesterday
| 512 |
null |
[
"swift",
"flexible-sync"
] |
[
{
"code": "class Place: Object {\n @Persisted(primaryKey: true) public var _id = UUID().uuidString\n @Persisted public var ownerId: String\n @Persisted public var lastVisits: List<LastVisit> = .init()\n}\n\nclass LastVisit: Object {\n @Persisted(primaryKey: true) public var _id = UUID().uuidString\n @Persisted public var ownerId: String\n @Persisted public var date = Date()\n}\n\"roles\": [\n {\n \"name\": \"Place\",\n \"apply_when\": {},\n \"document_filters\": { \"read\": true, \"write\": true },\n \"read\": true, \"write\": true, \"insert\": true, \"delete\": true, \"search\": true\n },\n {\n \"name\": \"LastVisit\",\n \"apply_when\": {},\n \"document_filters\": { \"read\": { \"owner_id\": \"%%user.id\" }, \"write\": { \"owner_id\": \"%%user.id\" } },\n \"read\": true, \"write\": true, \"insert\": true, \"delete\": true, \"search\": true\n }\n]\nlastVisit",
"text": "Hello community!I’ve been working on a project where I have a realm model to keep track of every user’s last visit to a specific place-object. Here’s a simple description of my classes:I set up two sync rules:The corresponding roles look like this:However, I’ve run into a rather puzzling issue:\nI have a first client and user that adds a first LastVisit to the Place-Object.\nWhen a second user reads the Place-Object, the lastVisit list is empty, which is expected. But, when this client appends a new LastVisit Object to the List, it doesn’t append an object; instead, it overrides the whole List! This leaves me with one LastVisit-Object on the server-side collection, and the first client and user have an empty list.What I would expect is that the server has all objects of all users stored and not that the Client overrides the whole List.Is this flexible sync and list behavior a bug or intended? If anyone has any insights or has faced a similar issue, please let me know how you resolved it or if there are any workarounds.",
"username": "Dan_Ivan"
},
{
"code": "\"roles\": [\n {\n \"name\": \"Place\",\n \"apply_when\": {},\n \"document_filters\": { \"read\": true, \"write\": true },\n \"read\": true, \"write\": true, \"insert\": true, \"delete\": true, \"search\": true\n },\n {\n \"name\": \"LastVisit\",\n \"apply_when\": {},\n \"document_filters\": { \"read\": { \"owner_id\": \"%%user.id\" }, \"write\": { \"owner_id\": \"%%user.id\" } },\n \"read\": true, \"write\": true, \"insert\": true, \"delete\": true, \"search\": true\n }\n]\nrulesrolesPlaceLastVisitlastVisitlastVisitsErrorCompensatingWritehttps://realm.mongodb.com/groups/<groupID>/apps/<appID>",
"text": "Hello @Dan_Ivan ,Wanted to confirm a few things:I set up two sync rules:The corresponding roles look like this:Did you setup two rules or two roles? To be clear, a “rule” contains “roles” and rules apply at the MongoDB collection level – there’s also the concept of a “default rule”, which applies to any collection that doesn’t have a rule defined (you can tell if a rule is defined for a collection in the Rules page in the UI if you see that the name of the collection is not greyed out). Based on your description, I think that for the collection corresponding to Place you want to have a collection rule with a single read-all/write-all role. And then for the collection corresponding to LastVisit, there will be a collection rule with the “LastVisit” role that you’ve specified above.I have a first client and user that adds a first LastVisit to the Place-Object.\nWhen a second user reads the Place-Object, the lastVisit list is empty, which is expected. But, when this client appends a new LastVisit Object to the List, it doesn’t append an object; instead, it overrides the whole List! This leaves me with one LastVisit-Object on the server-side collection, and the first client and user have an empty list.Do you mind checking whether the first client is running into a compensating write error when it attempts to append to the lastVisits variable (you can check for this by examining the logs for the client’s session, an ErrorCompensatingWrite will log entry will show up)? The TLDR is that if there is a permissions violation, then a compensating write will be issued on the server, which effectively undos the illegal write.If you want, you can also link your app URL here (looks like https://realm.mongodb.com/groups/<groupID>/apps/<appID>). I can then poke around to see if there’s anything suspicious going on.Regards,\nJonathan",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "In the project I am currently working on and where we have this use case, the following error occurs on the server:OtherSessionError:\nError:Failed to apply received changeset: ArrayInsert: Invalid prior_size (list size = 1, prior_size = 0) (instruction target: CBProject[“6485DBFE-2F4F-41C1-A103-83B26B561DB8”].metadata[0], version: 1734, last_integrated_remote_version: 42, origin_file_ident: 316, timestamp: 271621410470). Please contact support. (ProtocolErrorCode=201)In the logic of the example at the beginning, Location here would be CBProject and LastVisit would be be saved in the metadata array…We have two roles set - both as described above.If it helps, I’ll set up an example project with the simplified use case I mentioned at the beginning!",
"username": "Dan_Ivan"
}
] |
Lists and Flexible Sync Permissions
|
2023-08-09T13:45:53.885Z
|
Lists and Flexible Sync Permissions
| 479 |
[
"sanfrancisco-mug",
"vancouver-mug",
"mug-virtual-us-west"
] |
[
{
"code": "Meeting ID: 985 6392 2855\n\nPasscode: 407403\nTechnical Marketing Manager @Grafana LabsLead Product Marketing @ MongoDBSr. Product Manager @ MongoDB",
"text": "\nMUG - West1920×1080 222 KB\nJoin us for the inaugural Americas West Virtual MongoDB Meetup, tailored for DevOps enthusiasts on August 10th. Get ready for an informative and engaging meetup with practical demonstrations.Learn alternative methods of utilizing MongoDB Atlas beyond the UI in our first session with Darshana and Zubair. Discover powerful approaches like the MongoDB Atlas Terraform Provider, CloudFormation Resources, CDK, Quick Start Partner Solution Deployments, and the Atlas Kubernetes Operator. Experience a live demo on swiftly creating your first MongoDB Atlas cluster using the Atlas Terraform Provider.Enhance your MongoDB expertise with two lightning sessions. Vijay Tolani from Grafana Labs will showcase connecting MongoDB data and other sources to a unified Grafana dashboard. Gain valuable business insights by visualizing metrics with charts, gauges, geo-maps, and more. Additionally, receive real-time alerts and query MongoDB and MongoDB Atlas data without migration or ingestion.MongoDB Champion @Roman_Right will delve into the world of Race Conditions. Gain a comprehensive understanding and explore strategies to mitigate them. Wrap up the event with a thrilling Trivia session, an opportunity to network, and the chance to win some cool MongoDB Swag! Connect with other passionate MongoDB enthusiasts, share ideas, and establish long-lasting connections. Event Type: Online Join Zoom Meeting (passcode is encoded in the link) Find your local number: Zoom International Dial-in Numbers - ZoomDetailed agenda to be announced soon!\nimage1500×1999 178 KB\nMongoDB Champion | User Group Leader | Author of Beanie - MongoDB ODM`–Technical Marketing Manager @Grafana Labs–Lead Product Marketing @ MongoDB–Sr. Product Manager @ MongoDB",
"username": "Harshit"
},
{
"code": "",
"text": "Hey there,Thank you for confirming to attend the meetup tomorrow at 11:00 AM PST. We are thrilled to have you join us.Here are the different ways for you to join the event:We want to make sure everyone has a fantastic time, so please join us at 11:00 AM to ensure you don’t miss any of the sessions. We can also have some time to chat before the talks begin.If you have any questions, please don’t hesitate to ask by replying to this Looking forward to seeing you all at the event!",
"username": "Harshit"
},
{
"code": "",
"text": "Hi there @Harshit, @Zuhair_Ahmed, @Roman_Rightdue to some unexpected long running jobs I might miss the MUG or parts of it. I will definitely consume the recording. In case I can’t join one question before hand:Best, Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hey Everyone! Gentle Reminder - We will be starting in 5 mins!",
"username": "Harshit"
},
{
"code": "",
"text": "Bummed that I have a meeting landing right in the middle - but hoping to not miss too much.",
"username": "Nuri_Halperin"
}
] |
Americas West vMUG: MongoDB DevOps Integrations: Terraform, CloudFormation, Grafana Plug-in & Race Conditions
|
2023-07-17T21:34:43.108Z
|
Americas West vMUG: MongoDB DevOps Integrations: Terraform, CloudFormation, Grafana Plug-in & Race Conditions
| 2,076 |
|
null |
[
"aggregation",
"queries"
] |
[
{
"code": "$$NOW$$NOW",
"text": "First off, I love Mongo Atlas Charts, and they are extremely valuable.I’m just stuck on how to make charts use relative dates.For example, I’d love to be able to make a dashboard that’s only of today’s current metrics.I’ve seen there’s a built in $$NOW variable, but I’m not sure how to use it in an aggregation as a date instead of full datetime.Because if you were to query on $$NOW you’d be filtering only records that have taken place right now.Additionally, it would be great to know how I could make month over month charts, or even month comparison charts.How do you perform date math in Mongo Atlas Charts?",
"username": "Dylan_Pierce"
},
{
"code": "$NOWdate2024$$NOWdate$lte$$NOW[{ $match: { $expr: { $lte: [\"$date\", \"$$NOW\"] } } }]\n$$NOW",
"text": "Hi @Dylan_Pierce Thanks for the great feedback regarding Charts!I’ve seen there’s a built in $NOW variable, but I’m not sure how to use it in an aggregation as a date instead of full datetime.Not sure if this is what you’re after in terms of using it an aggregation but please view the below example based off my test environment. I have the following chart:\nimage5448×3010 464 KB\nYou can see there is a total of 3 documents to the far right all with a date year value of 2024 and onwards.Now, I set the following filter using $$NOW to only display / use documents with a date value of $lte $$NOW:Which results in the following (where the 3 documents on the far right in the above image are no longer counted):\n\nimage5460×2984 461 KB\nIf you still require further assistance, could you please provide some sample document(s) and clarification on how you would like the $$NOW variable used against those? This is just so I can get a better idea of what you’re after Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "[{ $match: { $expr: { $lte: [\"$date\", \"$NOW\"] } } }]\n$created_at$created_at$created_at$match$expr",
"text": "Thanks for the reply @Jason_TranThat is very close, but say you wanted to make a relative match. Consider these scenarios:How could you use the aggregation pipeline to only $match documents based on a relative date like the scenarios above?The answer here says use math within an $expr, which seems processing heavy. Is there a more elegant way?",
"username": "Dylan_Pierce"
},
{
"code": "",
"text": "hoursu can use $subtract to compare date different",
"username": "Shuai_Aaron_Shaw"
}
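As a concrete illustration of that suggestion, here is a minimal sketch of a "last 24 hours" filter that can go in the Charts query bar. The field name "date" is an assumption; $subtract on a date and a number of milliseconds yields a date, and $$NOW is the current datetime:

```javascript
// Keep only documents whose "date" falls within the last 24 hours.
[
  {
    $match: {
      $expr: {
        $gte: [
          "$date",
          { $subtract: ["$$NOW", 1000 * 60 * 60 * 24] } // 24 hours in ms
        ]
      }
    }
  }
]
```

For month-style comparisons, $dateTrunc (MongoDB 5.0+) applied to both $$NOW and the document date is another option.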
] |
How can I make a query or aggregation for a chart that's relative to today's date?
|
2023-05-11T20:56:06.530Z
|
How can I make a query or aggregation for a chart that’s relative to today’s date?
| 758 |
null |
[
"queries",
"time-series"
] |
[
{
"code": " {\n \"metadata\": {\n \"id\": \"abcd-12\",\n \"type\": \"type1\"\n },\n \"timestamp\": \"2023-06-27T03:46:49.786099495Z\",\n \"data\": {\n \"ids\": [\n \"abcd-12\",\n \"efdg-33\"\n ],\n\n \"attribute_1\": \"abc\"\n }\n }\ndb.ts.find({\"metadata.type\":\"type1\",\"data.ids\":{\"$in\":[\"efdg-33\"]}}).sort({\"_id\":-1}).limit(1)\n{\n \"t\": {\n \"$date\": \"2023-06-27T19:10:04.850+00:00\"\n },\n \"s\": \"I\",\n \"c\": \"COMMAND\",\n \"id\": 51803,\n \"ctx\": \"conn9\",\n \"msg\": \"Slow query\",\n \"attr\": {\n \"type\": \"command\",\n \"ns\": \"sampledb.ts\",\n \"command\": {\n \"find\": \"ts\",\n \"filter\": {\n \"metadata.type\": \"type1\",\n \"data.ids\": {\n \"$in\": [\n \"efdg-33\"\n ]\n }\n },\n \"limit\": 1,\n \"projection\": {\n \"data\": 0\n },\n \"singleBatch\": true,\n \"sort\": {\n \"_id\": -1\n },\n \"lsid\": {\n \"id\": {\n \"$uuid\": \"e55dfa5f-d438-4767-862a-d4912efc03e3\"\n }\n },\n \"$db\": \"sampledb\"\n },\n \"planSummary\": \"COLLSCAN\",\n \"resolvedViews\": [\n {\n \"viewNamespace\": \"sampledb.ts\",\n \"dependencyChain\": [\n \"ts\",\n \"system.buckets.ts\"\n ],\n \"resolvedPipeline\": [\n {\n \"$_internalUnpackBucket\": {\n \"timeField\": \"timestamp\",\n \"metaField\": \"metadata\",\n \"bucketMaxSpanSeconds\": 86400\n }\n }\n ]\n }\n ],\n \"keysExamined\": 0,\n \"docsExamined\": 2606504,\n \"hasSortStage\": true,\n \"cursorExhausted\": true,\n \"numYields\": 2703,\n \"nreturned\": 1,\n \"queryHash\": \"130CB8DF\",\n \"planCacheKey\": \"130CB8DF\",\n \"queryFramework\": \"classic\",\n \"reslen\": 194,\n \"locks\": {\n \"FeatureCompatibilityVersion\": {\n \"acquireCount\": {\n \"r\": 2819\n }\n },\n \"Global\": {\n \"acquireCount\": {\n \"r\": 2819\n }\n },\n \"Mutex\": {\n \"acquireCount\": {\n \"r\": 116\n }\n }\n },\n \"storage\": {\n \"data\": {\n \"bytesRead\": 7279476780,\n \"timeReadingMicros\": 7278730\n },\n \"timeWaitingMicros\": {\n \"cache\": 2822\n }\n },\n \"remote\": \"129.0.168.1:60736\",\n \"protocol\": \"op_msg\",\n \"durationMillis\": 14869\n }\n}\nsampldb> db.ts.createIndex( { \"data.ids\": 1 } )\nMongoServerError: Index build failed: b38519f2-ca8f-4244-91fe-845f87f19137: Collection sampldb.system.buckets.ts ( a17c3bff-c74f-4a3d-b18f-aee3bca3404b ) :: caused by :: Indexed measurement field contains an array value\n",
"text": "Sample data:query:log from mongoDB:it becomes worse when the number of document grows ( about 70 for same metadata.id, 4M for all documents)\nIndexing on the data.ids should improve the performance but it looks like the mongoDB ( time series doesn’t support indexing on arrays.mongoDB version:6.0.6Please let me know if there is a solution , Thanks.",
"username": "Vincent_Zhou"
},
{
"code": "",
"text": "show output of explain ?",
"username": "Kobe_W"
},
{
"code": "sampledb> db.ts.find({\"metadata.type\":\"type1\",\"data.ids\":{\"$in\":[\"efdg-33\"]}}).sort({\"_id\":-1}).limit(1).explain()\n{\n explainVersion: '1',\n stages: [\n {\n '$cursor': {\n queryPlanner: {\n namespace: 'sampledb.system.buckets.ts',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n { 'meta.type': { '$eq': 'type1' } },\n {\n 'control.max.data.ids': { '$_internalExprGte': 'efdg-33' }\n },\n {\n 'control.min.data.ids': { '$_internalExprLte': 'efdg-33' }\n }\n ]\n },\n queryHash: '130CB8DF',\n planCacheKey: '130CB8DF',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'COLLSCAN',\n filter: {\n '$and': [\n { 'meta.type': { '$eq': 'type1' } },\n {\n 'control.max.data.ids': { '$_internalExprGte': 'efdg-33' }\n },\n {\n 'control.min.data.ids': { '$_internalExprLte': 'efdg-33' }\n }\n ]\n },\n direction: 'forward'\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 75371,\n executionTimeMillis: 8123,\n totalKeysExamined: 0,\n totalDocsExamined: 2709723,\n executionStages: {\n stage: 'COLLSCAN',\n filter: {\n '$and': [\n { 'meta.type': { '$eq': 'type1' } },\n {\n 'control.max.data.ids': { '$_internalExprGte': 'efdg-33' }\n },\n {\n 'control.min.data.ids': { '$_internalExprLte': 'efdg-33' }\n }\n ]\n },\n nReturned: 75371,\n executionTimeMillisEstimate: 3849,\n works: 2709725,\n advanced: 75371,\n needTime: 2634353,\n needYield: 0,\n saveState: 2861,\n restoreState: 2861,\n isEOF: 1,\n direction: 'forward',\n docsExamined: 2709723\n },\n allPlansExecution: []\n }\n },\n nReturned: Long(\"75371\"),\n executionTimeMillisEstimate: Long(\"7420\")\n },\n {\n '$_internalUnpackBucket': {\n exclude: [],\n timeField: 'timestamp',\n metaField: 'metadata',\n bucketMaxSpanSeconds: 86400,\n assumeNoMixedSchemaData: true,\n usesExtendedRange: true\n },\n nReturned: Long(\"509841\"),\n executionTimeMillisEstimate: Long(\"7768\")\n },\n {\n '$match': { 'data.ids': { '$eq': 'efdg-33' } },\n nReturned: Long(\"75\"),\n executionTimeMillisEstimate: Long(\"8121\")\n },\n {\n '$sort': { sortKey: { _id: -1 }, limit: Long(\"1\") },\n totalDataSizeSortedBytesEstimate: Long(\"0\"),\n usedDisk: false,\n spills: Long(\"0\"),\n nReturned: Long(\"1\"),\n executionTimeMillisEstimate: Long(\"8121\")\n }\n ],\n serverInfo: {\n host: 'mongodb-7556b474f-mgmb7',\n port: 27017,\n version: '6.0.6',\n gitVersion: '26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n command: {\n aggregate: 'system.buckets.ts',\n pipeline: [\n {\n '$_internalUnpackBucket': {\n timeField: 'timestamp',\n metaField: 'metadata',\n bucketMaxSpanSeconds: 86400,\n assumeNoMixedSchemaData: true,\n usesExtendedRange: true\n }\n },\n {\n '$match': {\n 'metadata.type': 'type1',\n 'data.ids': { '$in': [ 'efdg-33' ] }\n }\n },\n { '$sort': { _id: -1 } },\n { '$limit': Long(\"1\") }\n ],\n cursor: {},\n collation: {}\n },\n ok: 1\n}\n",
"text": "Sure.",
"username": "Vincent_Zhou"
},
{
"code": "",
"text": "so the planner is using a collection scan. that’s why it’s slow. You need to create index for your query filter. But i’m not familiar with time series and array index, so i can’t help further.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks. I tried create the index on data.ids, but it looks like index on array is not allowed for time series.",
"username": "Vincent_Zhou"
},
{
"code": "",
"text": "work around:\nmove ids to metadata, then it can be indexed.",
"username": "Vincent_Zhou"
}
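A minimal sketch of that workaround, using the sample document shape from the start of this thread; the exact collection options and field layout are assumptions to illustrate the idea:

```javascript
// Time-series collection with "metadata" as the metaField.
db.createCollection("ts", {
  timeseries: { timeField: "timestamp", metaField: "metadata" }
})

// Store "ids" under the metaField instead of under "data".
db.ts.insertOne({
  metadata: { id: "abcd-12", type: "type1", ids: ["abcd-12", "efdg-33"] },
  timestamp: new Date(),
  data: { attribute_1: "abc" }
})

// Secondary index on metaField subfields; queries can then filter on them.
db.ts.createIndex({ "metadata.type": 1, "metadata.ids": 1 })

db.ts.find({ "metadata.type": "type1", "metadata.ids": "efdg-33" })
  .sort({ _id: -1 })
  .limit(1)
```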
] |
$in query of time series > 10 seconds
|
2023-06-27T21:05:45.557Z
|
$in query of time series > 10 seconds
| 644 |
[
"connector-for-bi"
] |
[
{
"code": "",
"text": "Hey Everyone,\ni was trying to setup a BI connector setup for a project i’m working. i have mongosqld running locally on the windows laptop and i have downloaded and installed the odbc driver but i am unable to establish a connection. There is authentication enabled on either mongod,mongosqld processes.bi connector version:\nmongosqld.exe --version\nmongosqld version: v2.14.4odbc driver version : 1.4.1error i get :-\ndriver couldnot be loaded due to system error 126: the specified module couldnot be found \"MongoDB ODBC … mdbodbcw.dll in the installed location.But i do have the mdbodbcw.dll file in the installed bin location.\nimage680×744 103 KB\n",
"username": "Arun_guptha_1"
},
{
"code": "",
"text": "@Arun_guptha_1 , two thoughts:",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Yes @Jack_Woehr , i did follow the exact steps from the documentation provided.there was typo , that’s why it didnot match the text and the error of snapshot. However i do get the mdbodbcw.dll not found when i choose Mongodb odbc unicode driver.",
"username": "Arun_guptha_1"
},
{
"code": "",
"text": "Can you copy and paste in the terminal data that shows the error message, please?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "i am getting error while conneccting to the mongosqld process running in the local host from the odbc driver end. i don’t see any error messages in the terminal/cmd related to odbc. if there is a place that i can look into, please let me know, i can get those details. mongosqld process is running fine and able to map the schema to my mongo instance running in aws ec2.",
"username": "Arun_guptha_1"
},
{
"code": "",
"text": "“connecting to the mongosqld process” from what? Does whatever application you are using do any logging? Have you checked the mongosqld logs? (I don’t know where mongosqld logs errors on Windows.)",
"username": "Jack_Woehr"
},
{
"code": "\tDIAG [IM003] Specified driver could not be loaded due to system error 126: The specified module could not be found. (MongoDB ODBC 1.4.2 ANSI Driver, C:\\Program Files\\MongoDB\\ODBC\\1.4\\bin\\mdbodbca.dll). (160) \n\tDIAG [08003] [Microsoft][ODBC Driver Manager] Connection not open (0) \n",
"text": "I have the tracing logs if that helps from odbc driver. I’m trying to establish a connection from mongodb odbc driver to mongosqld process.odbcad32 2a7c-1ac4\tENTER SQLAllocHandle\nSQLSMALLINT 1 <SQL_HANDLE_ENV>\nSQLHANDLE 0x0000000000000000\nSQLHANDLE * 0x0000000BBDF8AB30odbcad32 2a7c-1ac4\tEXIT SQLAllocHandle with return code 0 (SQL_SUCCESS)\nSQLSMALLINT 1 <SQL_HANDLE_ENV>\nSQLHANDLE 0x0000000000000000\nSQLHANDLE * 0x0000000BBDF8AB30 ( 0x000001331ABF6C50)odbcad32 2a7c-1ac4\tENTER SQLSetEnvAttr\nSQLHENV 0x000001331ABF6C50\nSQLINTEGER 200 <SQL_ATTR_ODBC_VERSION>\nSQLPOINTER 3 <SQL_OV_ODBC3>\nSQLINTEGER 0odbcad32 2a7c-1ac4\tEXIT SQLSetEnvAttr with return code 0 (SQL_SUCCESS)\nSQLHENV 0x000001331ABF6C50\nSQLINTEGER 200 <SQL_ATTR_ODBC_VERSION>\nSQLPOINTER 3 <SQL_OV_ODBC3>\nSQLINTEGER 0odbcad32 2a7c-1ac4\tENTER SQLAllocHandle\nSQLSMALLINT 2 <SQL_HANDLE_DBC>\nSQLHANDLE 0x000001331ABF6C50\nSQLHANDLE * 0x0000000BBDF8AB28odbcad32 2a7c-1ac4\tEXIT SQLAllocHandle with return code 0 (SQL_SUCCESS)\nSQLSMALLINT 2 <SQL_HANDLE_DBC>\nSQLHANDLE 0x000001331ABF6C50\nSQLHANDLE * 0x0000000BBDF8AB28 ( 0x000001331ABFA0B0)odbcad32 2a7c-1ac4\tENTER SQLDriverConnectW\nHDBC 0x000001331ABFA0B0\nHWND 0x0000000000000000\nWCHAR * 0x00007FFCBAEA7210 [ -3] “******\\ 0”\nSWORD -3\nWCHAR * 0x00007FFCBAEA7210\nSWORD -3\nSWORD * 0x0000000000000000\nUWORD 0 <SQL_DRIVER_NOPROMPT>odbcad32 2a7c-1ac4\tEXIT SQLDriverConnectW with return code -1 (SQL_ERROR)\nHDBC 0x000001331ABFA0B0\nHWND 0x0000000000000000\nWCHAR * 0x00007FFCBAEA7210 [ -3] “******\\ 0”\nSWORD -3\nWCHAR * 0x00007FFCBAEA7210\nSWORD -3\nSWORD * 0x0000000000000000\nUWORD 0 <SQL_DRIVER_NOPROMPT>odbcad32 2a7c-1ac4\tENTER SQLGetDiagRecW\nSQLSMALLINT 2 <SQL_HANDLE_DBC>\nSQLHANDLE 0x000001331ABFA0B0\nSQLSMALLINT 1\nSQLWCHAR * 0x0000000BBDF89E28\nSQLINTEGER * 0x0000000BBDF89E24\nSQLWCHAR * 0x0000000BBDF89E40\nSQLSMALLINT 512\nSQLSMALLINT * 0x0000000BBDF89E20odbcad32 2a7c-1ac4\tEXIT SQLGetDiagRecW with return code 0 (SQL_SUCCESS)\nSQLSMALLINT 2 <SQL_HANDLE_DBC>\nSQLHANDLE 0x000001331ABFA0B0\nSQLSMALLINT 1\nSQLWCHAR * 0x0000000BBDF89E28 [ 5] “IM003”\nSQLINTEGER * 0x0000000BBDF89E24 (160)\nSQLWCHAR * 0x0000000BBDF89E40 [ 189] “Specified driver could not be loaded due to system error 126: The specified module could not be found. 
(MongoDB ODBC 1.4.2 ANSI Driver, C:\\Program Files\\MongoDB\\ODBC\\1.4\\bin\\mdbodbca.dll).”\nSQLSMALLINT 512\nSQLSMALLINT * 0x0000000BBDF89E20 (189)odbcad32 2a7c-1ac4\tENTER SQLGetDiagRecW\nSQLSMALLINT 2 <SQL_HANDLE_DBC>\nSQLHANDLE 0x000001331ABFA0B0\nSQLSMALLINT 2\nSQLWCHAR * 0x0000000BBDF89E28\nSQLINTEGER * 0x0000000BBDF89E24\nSQLWCHAR * 0x0000000BBDF89E40\nSQLSMALLINT 512\nSQLSMALLINT * 0x0000000BBDF89E20odbcad32 2a7c-1ac4\tEXIT SQLGetDiagRecW with return code 100 (SQL_NO_DATA_FOUND)\nSQLSMALLINT 2 <SQL_HANDLE_DBC>\nSQLHANDLE 0x000001331ABFA0B0\nSQLSMALLINT 2\nSQLWCHAR * 0x0000000BBDF89E28\nSQLINTEGER * 0x0000000BBDF89E24\nSQLWCHAR * 0x0000000BBDF89E40\nSQLSMALLINT 512\nSQLSMALLINT * 0x0000000BBDF89E20odbcad32 2a7c-1ac4\tENTER SQLGetDiagRecW\nSQLSMALLINT 2 <SQL_HANDLE_DBC>\nSQLHANDLE 0x000001331ABFA0B0\nSQLSMALLINT 1\nSQLWCHAR * 0x0000000BBDF8AB40\nSQLINTEGER * 0x0000000BBDF8AB38\nSQLWCHAR * 0x00000133381522E0\nSQLSMALLINT 494\nSQLSMALLINT * 0x0000000BBDF8AB20odbcad32 2a7c-1ac4\tEXIT SQLGetDiagRecW with return code 0 (SQL_SUCCESS)\nSQLSMALLINT 2 <SQL_HANDLE_DBC>\nSQLHANDLE 0x000001331ABFA0B0\nSQLSMALLINT 1\nSQLWCHAR * 0x0000000BBDF8AB40 [ 5] “IM003”\nSQLINTEGER * 0x0000000BBDF8AB38 (160)\nSQLWCHAR * 0x00000133381522E0 [ 189] “Specified driver could not be loaded due to system error 126: The specified module could not be found. (MongoDB ODBC 1.4.2 ANSI Driver, C:\\Program Files\\MongoDB\\ODBC\\1.4\\bin\\mdbodbca.dll).”\nSQLSMALLINT 494\nSQLSMALLINT * 0x0000000BBDF8AB20 (189)odbcad32 2a7c-1ac4\tENTER SQLDisconnect\nHDBC 0x000001331ABFA0B0odbcad32 2a7c-1ac4\tEXIT SQLDisconnect with return code -1 (SQL_ERROR)\nHDBC 0x000001331ABFA0B0odbcad32 2a7c-1ac4\tENTER SQLFreeHandle\nSQLSMALLINT 2 <SQL_HANDLE_DBC>\nSQLHANDLE 0x000001331ABFA0B0odbcad32 2a7c-1ac4\tEXIT SQLFreeHandle with return code 0 (SQL_SUCCESS)\nSQLSMALLINT 2 <SQL_HANDLE_DBC>\nSQLHANDLE 0x000001331ABFA0B0odbcad32 2a7c-1ac4\tENTER SQLFreeHandle\nSQLSMALLINT 1 <SQL_HANDLE_ENV>\nSQLHANDLE 0x000001331ABF6C50odbcad32 2a7c-1ac4\tEXIT SQLFreeHandle with return code 0 (SQL_SUCCESS)\nSQLSMALLINT 1 <SQL_HANDLE_ENV>\nSQLHANDLE 0x000001331ABF6C50",
"username": "Arun_guptha_1"
},
{
"code": "",
"text": "Well, it’s pretty clear why the program is unhappy … it can’t load that .dll.\nWhy not?\nYou say it is present in the directory.\nHave you looked at the permissions on the .dll file?\nHave you checked to see that the .dll is not somehow damaged, e.g., did it become a zero-length file somehow?\nEtc.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Yes @Jack_Woehr , file do exist with data(size is non zero bytes definitely) . permissions are open as well on the file. tried to change the path from default installation place to a different location re-installed with different location. still same error.",
"username": "Arun_guptha_1"
},
{
"code": "",
"text": "Hmm that makes no sense at all \nSomeone from MongoDB staff will have to help you.\n@Stennie_X ?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Similar issue here. Solved by installing Visual C++ Redistributable for Visual Studio 2015, as indicated in the instructions suggested by Jack.",
"username": "SergioMagnettu"
}
] |
MongoDB BI connector Driver DSN error - mdbodbcw.dll module not found
|
2022-05-18T22:45:51.574Z
|
MongoDB BI connector Driver DSN error - mdbodbcw.dll module not found
| 4,961 |
|
null |
[
"backup"
] |
[
{
"code": "",
"text": "We’re excited to announce the release of a new backup feature in MongoDB Atlas, snapshot distribution. Snapshot distribution allows you to easily copy your backup snapshots across multiple geographic regions within your primary cloud provider with just the click of a button.You can configure how snapshots are distributed directly within your backup policy and Atlas will automatically copy them to other regions as selected - no manual process necessary.While it may not always be required to store additional snapshot copies in varying places, this can be extremely useful in several situations, such as:If you fall into either of these categories, snapshot distribution may be a valuable feature addition to your current backup policy - allowing you to automate prior manual processes and free up development time to focus on innovation.Learn More",
"username": "Ashley_George"
},
{
"code": "",
"text": "How to I terraform Atlas to run snapshot backup to another region in additional to the local (default) region ?",
"username": "Alex_Leong"
},
{
"code": "",
"text": "Hi @Alex_Leong . As of the Terraform MongoDB Atlas Provider v.1.8.0 you can add snapshot distribution in your cloud backup schedules.You can find an example with further details here in the documentation.Best regards,\nEvin",
"username": "Evin_Roesle"
}
] |
New: Backup your Atlas snapshots to additional regions
|
2022-09-30T20:05:42.539Z
|
New: Backup your Atlas snapshots to additional regions
| 2,650 |
null |
[
"dot-net"
] |
[
{
"code": "",
"text": "Hello everyone!\nNote: I have been playing with MongoDB C# driver for quite some time but still cannot consider myself an expert.I am trying to figure out how to programmatically extract the fields used when using linq query in the MongoDB C# drivers.For example, I have a code working codecollection.AsQueryable().Where(c => c.Region == “AU” && c.Rank > 0 && c.Rank < 100).ToListAsync();Is there anyway to extract from the resulting IMongoQueryable (or any other possible type casting) the fields used in the conditon? (For the given example, the result should be “Region” & “Rank”)I cannot change the structure of the LINQ so far, nor can I change it to use Builder.FilterHoping for a positive response or even just a nudge in the right direction.Thanks!",
"username": "Charles_Stephen_Vice"
},
{
"code": "",
"text": "On the return value from the above, the Expression object should have the query parameters, it seems to be split into left and right which can be recursively parsed to pull out all fields used in the query along with types etc.\nimage1089×693 46.3 KB\n",
"username": "John_Sewell"
}
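A minimal, driver-agnostic sketch of that recursive walk using an ExpressionVisitor to collect the member names referenced by a predicate. MyClass with Region and Rank is the assumed model taken from the question; in practice you would capture the predicate expression before (or as well as) handing it to Where():

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// Collects the names of all members (properties/fields) referenced in an expression tree.
class MemberNameCollector : ExpressionVisitor
{
    public HashSet<string> Members { get; } = new HashSet<string>();

    protected override Expression VisitMember(MemberExpression node)
    {
        Members.Add(node.Member.Name);
        return base.VisitMember(node);
    }
}

class MyClass
{
    public string Region { get; set; }
    public int Rank { get; set; }
}

class Program
{
    static void Main()
    {
        Expression<Func<MyClass, bool>> predicate =
            c => c.Region == "AU" && c.Rank > 0 && c.Rank < 100;

        var collector = new MemberNameCollector();
        collector.Visit(predicate);

        // Typically prints: Region, Rank
        Console.WriteLine(string.Join(", ", collector.Members));
    }
}
```

The same visitor can also be pointed at the Expression property of the IMongoQueryable if the predicate is not available separately, though that tree will additionally contain the Where/AsQueryable method-call nodes.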
] |
Get used where/sort condition from IMongoQueryable<TDocument>
|
2023-08-10T03:00:03.210Z
|
Get used where/sort condition from IMongoQueryable<TDocument>
| 485 |
null |
[
"aggregation",
"queries"
] |
[
{
"code": "{\n “name\": “test”,\n \"data\": {\n “statusOne”: “enabled”,\n “statusTwo”: “active”\n }\n}\n{\n “name\": “test”,\n \"data\": {\n “statusOne”: “disabled”,\n “statusTwo”: “active”\n }\n}\n{\n “name\": “another-test”,\n \"data\": {\n “statusOne”: “disabled”,\n “statusTwo”: “active”\n }\n}\n“output”: [\n{\n\t“name”: “test”,\n\t\"data\": [\n\t\t{\n\t\t\t“status”: “active”,\n\t\t\t“count”: 2\n\t\t},\n\t\t{\n\t\t\t“status”: “disabled”,\n\t\t\t”count”: 1\n\t\t},\n\t\t{\n\t\t\t“status”: “enabled”,\n\t\t\t”count”: 1\n\t\t}\n\t]\n},\n{\n\t“name”: “another-test”,\n\t\"data\": [\n\t\t{\n\t\t\t“status”: “active”,\n\t\t\t”count”: 1\n\t\t},\n\t\t{\n\t\t\t“status”: “disabled”,\n\t\t\t”count”: 1\n\t\t}\n\t]\n}\n]\n",
"text": "have following data in my collectionHow to write an aggregation query to display the data like below",
"username": "Devabalan_Arumugam"
},
{
"code": "db.getCollection('Test').aggregate([\n{\n $addFields:{\n statusAt:[\n '$data.statusOne',\n '$data.statusTwo',\n ] \n }\n},\n{\n $unwind:'$statusAt'\n},\n{\n $group:{\n _id:{\n 'name':'$name',\n status:'$statusAt'\n },\n total:{$sum:1} \n }\n},\n{\n $project:{\n _id:0,\n 'name':'$_id.name',\n status:'$_id.status',\n total:1\n }\n},\n{\n $group:{\n _id:'$name',\n data:{$push:{'status':'$status', 'count':'$total'}}\n }\n}\n])\n",
"text": "Are those names hard-coded and fixed? If so, you can:Playing about, something like this:You didn’t say about data volumes etc, so assume with a big dataset you will want to make use of ordering of the stages to take advantage of an index by sorting before you group.Anyway that’s one option…give you an idea of how it could be done.You could make use of the attribute pattern for the status data instead to make things a bit more generic and maintainable. Or just have an array of status, your app / model may have special meaning to those names though…",
"username": "John_Sewell"
}
] |
Aggregation to group by multiple fields in a document
|
2023-08-10T11:50:49.929Z
|
Aggregation to group by multiple fields in a document
| 258 |
[
"charts"
] |
[
{
"code": "",
"text": "I’d like to be able to modify the ‘series’ label that appears when I build an applicable chart, but there doesn’t seem to be a way to do it. See the example screenshot below, which would make more sense if I could modify ‘series’ to ‘characters’, for example:\nimage706×668 35.8 KB\n",
"username": "Phil_Warner"
},
{
"code": "runtimemetacrtic",
"text": "Hi, this is a little tricky (and I agree it should be easier), but it is in fact possible to do this.\nFor context, there are two ways of building a multi-series chart:If you go with approach #1, you can override the series title in the Customize tab. But if you go with option #2, the label is always “Series”.Normally the shape of the data will dictate which approach you use to build the chart, and presumably your data is aligned with the second approach. However you can use an aggregation pipeline to transform the shape of the data, allowing you to use a different approach to encoding the chart.In this example I’m using the Movies sample data to build a chart showing the average runtime against the average metacrtic scores. The normal way of building this chart would just be to encode those two fields (in my case in the X Axis since it’s a bar chart). But you can see I put a pipeline in the query bar which puts these values in an array, allowing me to build the chart using a Series channel, and I can override the label.\nimage1858×952 88.1 KB\nHTH!\nTom",
"username": "tomhollander"
},
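The exact pipeline is only visible in the screenshot, but a sketch of the kind of reshaping described might look like the following; the field names runtime and metacritic come from the movies example, and the label/value names are assumptions:

```javascript
// Fold the two measures into an array of { label, value } documents and unwind,
// so the chart can use "label" in the Series channel and "value" as the measure.
[
  {
    $project: {
      values: [
        { label: "Runtime", value: "$runtime" },
        { label: "Metacritic", value: "$metacritic" }
      ]
    }
  },
  { $unwind: "$values" },
  { $project: { label: "$values.label", value: "$values.value" } }
]
```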
{
"code": "",
"text": "Hi Tom,Excellent, thanks! I had a multi-stage pipeline in place to get to where I was, so I just added your final stage . Regards, Phil",
"username": "Phil_Warner"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Can I override the 'series' label on by stacked bar chart in MongoDb Charts?
|
2023-08-09T15:42:00.199Z
|
Can I override the ‘series’ label on by stacked bar chart in MongoDb Charts?
| 505 |
|
null |
[
"flexible-sync"
] |
[
{
"code": "{\n \"_id\": {\n \"$in\": \"%%user.custom_data.allowed_ids\"\n }\n}\n",
"text": "Hello, I am trying to define Rules in my app service for the following use case:\nI want to limit the pool of documents in a collection that each user can access. I have set up custom user data for each user. One of the fields in the custom user data is an array with the ids of the documents that the user can access from the collection. My attempt to define Document Permissions on the collection seems to not be working:Any ideas?",
"username": "Damian_Danev"
},
{
"code": "https://realm.mongodb.com/groups/<group-id>/apps/<app-id>",
"text": "Hi,Do you mind sending me your app URL (looks like https://realm.mongodb.com/groups/<group-id>/apps/<app-id>) so I can take a further look? For context, we recently changed where and how flexible sync permissions are defined for new apps, and I’m interested in seeing if your app was affected by that change at all.Thanks,\nJonathan",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "https://realm.mongodb.com/groups/63e10b2a13c97e5c75b7c5d063e10b2a13c97e5c75b7c5d0/apps/63fa4c3a5a96b348b929fa1d",
"username": "Damian_Danev"
},
{
"code": "",
"text": "Hey @Damian_Danev,I took a look at your app, and didn’t find anything immediately suspicious. Based off the way that you described it, I’m assuming that the above expression corresponds to the “Read Document Filter”. What was the specific behavior that you were seeing that caused a problem? Were there documents that were being sent down to the client that were not expected? Was there an error that your app was running into?A few things I’d also like to confirm:\nScreenshot 2023-03-02 at 9.22.36 AM1498×251 28.1 KB\nJonathan",
"username": "Jonathan_Lee"
},
{
"code": "{\n \"_id\":{\n \"$oid\":\"63fe58578fd1b60e9636478a\"\n },\n \"name\":\"asdnd\",\n \"gro_id\":\"63fb8bebe2f721ef732ad540\"\n}\n{\n \"_id\":{\n \"$oid\":\"63fa9a794c4b020beaa5ff86\"\n },\n \"user_id\":\"63fa9a794c4b020beaa5ff81\",\n \"email\":\"someemail\",\n \"role\":\"scout\",\n \"user_ids\":[\n \"\"\n ],\n \"plots\":[\n \"63fe58578fd1b60e9636478a\"\n ]\n}\n{\n \"roles\": [\n {\n \"name\": \"scout\",\n \"apply_when\": {\n \"%%user.custom_data.role\": \"scout\"\n },\n \"document_filters\": {\n \"write\": {\n \"_id\": {\n \"$in\": \"%%user.custom_data.plots\"\n }\n },\n \"read\": {\n \"_id\": {\n \"$in\": \"%%user.custom_data.plots\"\n }\n }\n },\n \"read\": true,\n \"write\": false,\n \"insert\": false,\n \"delete\": false,\n \"search\": true\n }\n ]\n}\n\"_id\":{\n \"$oid\":\"63fa9a794c4b020beaa5ff86\"\n },\n",
"text": "Sorry, I should’ve been more specific.\nThis is a document inside collection plots:This is a document inside the custom user data collection called users:This is a Rule I’ve applied to plots collection:I just want users of a certain role to only be able to read documents whose ids are inside a field in the user’s custom data. With this setup, I do not get any documents on my React Native realm. I think I might be failing on the point you made about object ids being a string because I am comparing it to the plot object id:",
"username": "Damian_Danev"
},
{
"code": "plots",
"text": "Yeah, I think converting plots in the custom user data collection to be a list of ObjectIDs would potentially resolve the issue then. Let me know if that works!",
"username": "Jonathan_Lee"
},
{
"code": "{\n \"_id\":{\n \"$oid\":\"63fb8bebe2f721ef732ad540\"\n },\n \"user_id\":\"63fb8bebe2f721ef732ad534\",\n \"email\":\"manager\",\n \"role\":\"manager\",\n \"user_ids\":[\n \"63fa9a794c4b020beaa5ff81\"\n ]\n}\n{\n \"_id\":{\n \"$oid\":\"63fa9a794c4b020beaa5ff86\"\n },\n \"user_id\":\"63fa9a794c4b020beaa5ff81\",\n \"email\":\"intern\",\n \"role\":\"intern\",\n \"user_ids\":[]\n}\n{\n \"roles\":[\n {\n \"name\":\"manager\",\n \"apply_when\":{\n \"%%user.custom_data.role\":\"manager\"\n },\n \"document_filters\":{\n \"write\":{\n \"user_id\":{\n \"$in\":\"%%user.custom_data.user_ids\"\n }\n },\n \"read\":{\n \"user_id\":{\n \"$in\":\"%%user.custom_data.user_ids\"\n }\n }\n },\n \"read\":true,\n \"write\":true,\n \"insert\":false,\n \"delete\":false,\n \"search\":true\n },\n ]\n}\n",
"text": "I managed to solve the problem with the plots rules using strings only.\nNow I have a similar problem with just the users collection. I want realm clients to be able to read user documents based on a list of user ids I provide for them in the same collection (user_ids)\nExample:\ninside the users collection I have these two users:I want to make a rule so that a manager can only read intern users that are associated with him through the user_ids field. (I am also open to other suggestions). Just in case it’s not clear: users is the custom user data collection that’s linked with the app users. user_id is the linked field that holds the id of the app user and is also a queryable field.\nI thought this would be the correct rule:But my mobile app just gets stuck loading when I want to sync as a manager user.",
"username": "Damian_Danev"
},
{
"code": "{\n \"roles\":[\n {\n \"name\":\"manager\",\n \"apply_when\":{\n \"%%user.custom_data.role\":\"manager\"\n },\n \"document_filters\":{\n \"write\":{\n \"user_id\":{\n \"$in\":[\"63fa9a794c4b020beaa5ff81\"]\n }\n },\n \"read\":{\n \"user_id\":{\n \"$in\":[\"63fa9a794c4b020beaa5ff81\"]\n }\n }\n },\n \"read\":true,\n \"write\":true,\n \"insert\":false,\n \"delete\":false,\n \"search\":true\n },\n ]\n}\n\"$in\":\"%%user.custom_data.user_ids\"",
"text": "When I set the document rule to:Then the app functions properly, but when using \"$in\":\"%%user.custom_data.user_ids\" then it doesn’t work. Any idea why?",
"username": "Damian_Danev"
},
{
"code": "user_idsqueryable field",
"text": "Okay. I think I found out what the problem was, even tho the solution puzzles me…\nI had to add user_ids as a queryable field.",
"username": "Damian_Danev"
},
{
"code": "_id{\n \"owner_id\": \"%%user.custom_data._id\"\n}\nowner_id_oid_id{\n \"owner_id\": \"%%user.custom_data._oid\"\n}\n%stringToOid%oidToString",
"text": "@Jonathan_Lee , A new problem arouse. The _id of my custom user data collection is created as an ObjectId but it seems to me that it is passed as a string in the rule expression:This doesn’t return any documents even tho the ids match (owner_id is an ObjectId as well). Further when I created a new field _oid and made it the exact same ObjectId as _id then this expression works and returns the correct documents:What is happening? This smells of a poor design or maybe I am not understanding something. Is there a way to cast ObjectId to string and string to ObjecId inside the expression of the rule?PS: I found %stringToOid and %oidToString",
"username": "Damian_Danev"
},
{
"code": "_id%stringToOid%oidToString%stringToOid{\n \"owner_id\": { \"%stringToOid\": \"%%user.custom_data._id\" }\n}\n\"%stringToOid\"user_idsuser_idsuser_ids",
"text": "Hi,What is happening? This smells of a poor design or maybe I am not understanding something. Is there a way to cast ObjectId to string and string to ObjecId inside the expression of the rule?Our system is designed to allow usage of custom user data in functions, which requires us to convert that data into js values. ObjectIDs are mapped to strings in this process, and the _id field is treated as a string under-the-hood for the purposes of compatibility.PS: I found %stringToOid and %oidToStringYou should be able to use %stringToOid like:Alternatively, you could go with the other approach you mentioned or by using the “user_id” field with \"%stringToOid\" instead.Also, regarding the previous comment about user_ids – could you elaborate on the behavior you were seeing before adding user_ids as a queryable field? Were there any errors in the logs? It seems pretty unexpected to me that user_ids would have to be added as a queryable field to resolve the issues you were encountering, so any additional information would be helpful in diagnosing the situation.Jonathan",
"username": "Jonathan_Lee"
},
{
"code": "%stringToOid{\n { \"%stringToOid\": \"%%user.custom_data._id\" }: { '$in': 'user_oids'}\n}\nuser_idsRulesuser_idsRulesRulesRules",
"text": "You should be able to use %stringToOidYes, but cases like looking for the object id in a list of object ids doesn’t work:Also, regarding the previous comment about user_ids – could you elaborate on the behavior you were seeingUnfortunately, I can’t state an exact problem. I am experiencing weird stuff with Rules and I hope it’s only because I am running on the free tier. It’s an important IF because this is scary in production. After changing the rules I wasn’t able to sync until I added user_ids as a queryable field. Often I have trouble syncing after making changes to the rules. Sometimes I have to wipe device data and start of a fresh realm. One time I made changes to the Rules, and some documents weren’t syncing, made an exact copy of them and the new ones got synced. My brain started hurting, took 2 hour’s break. When I came back everything was working magically without any changes. If I have to be honest changing the Rules scares me. I often am not able to sync back in the mobile app. I am considering leaving the Rules more open so that in the future I don’t have to do a lot of changes to them in production and possibly run into unsuccessful client resets. I hope this is happening only because I am on the free tier and things take some longer to happen at times.",
"username": "Damian_Danev"
},
{
"code": "{\n \"owner_id\": {\n \"%stringToOid\": \"%%user.id\"\n }\n}\n const { _id, owner_id, ...restofAccount } = draftAccount \n await accountItemCollection.updateOne(\n { owner_id: { $oid: app.currentUser.id } },\n { $set: { ...restofAccount } }\n )\n const { _id, owner_id, ...restofAccount } = draftAccount\n\n await accountItemCollection.updateOne(\n { owner_id: app.currentUser.id },\n { $set: { ...restofAccount } }\n )\n const { _id, owner_id, ...restofAccount } = draftAccount\n await accountItemCollection.updateOne(\n { owner_id: new BSON.ObjectId(app.currentUser.id) },\n { $set: { ...restofAccount } }\n )\n",
"text": "Hi Jonathan, looks like there are other ways. I got it working by doing this:My Rule on Account Collection:\nowner_id is of type ObjectIdBut to get this to work in React I needed to this in my update function:Before, when I had the owner_id as String I was able to use it like the documents say, and I thought this kind of magic happened in the background So I wonder if this is the correct way of handling this.Cheers CatoUpdate: this is probably the corrected way of sending a ObjectId object:",
"username": "Cato_Paus_Skrede"
}
] |
Document Permissions
|
2023-03-01T14:23:05.591Z
|
Document Permissions
| 1,709 |
[
"storage"
] |
[
{
"code": "",
"text": "Hello, everyone. I’m new to the MongoDB family and still have limited knowledge about MongoDB. I would appreciate your assistance.\nI have been dealing with an unusual performance issue in MongoDB. To investigate the root cause of the performance problem, I used sysdig to record the system conditions during the time of the issue. The problem occurs every Saturday around 11:00 AM, when disk I/O becomes highly busy, reaching over 100%. Since the default logs didn’t provide any insights into the issue, I resorted to using sysdig to capture the system conditions during the faulty period. Initially, I wanted to examine system calls like pwrite or write to see what data was being written. However, I realized that the data written to WiredTiger (wt) is compressed or encrypted and cannot be directly read. Is there a way to convert this text into readable content? Additionally, I’m curious about the compression or encryption algorithms used in this process.\n\nPasted image 202308081457091517×877 122 KB\n",
"username": "wu_feng1"
},
{
"code": "mongostat",
"text": "Hey @wu_feng1,Welcome to the MongoDB Community!The problem occurs every Saturday around 11:00 AM, when disk I/O becomes highly busy, reaching over 100%Since you’re seeing high disk I/O utilization around the same time every Saturday, is it possible that you have a batch process that happens at that time regularly?Few things to check:Could you confirm if you have any such cron jobs, scripts, or other periodic tasks configured to run around the specified time interval? Seeing what jobs overlap with the performance spike could help identify the cause.Could you share the output of mongostat and the MongoDB server logs of that particular timeframe?Please provide the MongoDB version, disk size, and deployment configuration. Also, are you utilizing Docker, a VM, or a similar solution to host your MongoDB server?Looking forward to your response.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] |
WiredTiger write contents Decode
|
2023-08-08T07:04:58.429Z
|
WiredTiger write contents Decode
| 488 |
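The question above asks how the data WiredTiger writes to disk is encoded. One way to see which block compressor a collection uses, without trying to decode the .wt files, is to read the collection's storage stats. Below is a minimal pymongo sketch; the connection string and the database/collection names are assumptions for the example (by default MongoDB collections use snappy block compression).

```python
# Sketch (assumed names): inspect which block compressor a collection was created with,
# rather than trying to read the compressed .wt files on disk directly.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
db = client["mydb"]                                 # assumed database name

stats = db.command("collStats", "mycollection")     # assumed collection name
creation = stats.get("wiredTiger", {}).get("creationString", "")

# The creationString typically contains an entry like "block_compressor=snappy".
for part in creation.split(","):
    if "block_compressor" in part:
        print(part)
```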
|
null |
[
"aggregation",
"queries",
"dot-net"
] |
[
{
"code": "{_id: 1, \"name\": \"Hello World\"}\n{_id: 2, \"name\": \"Hello\"}\n{_id: 3, \"name\": \"Tree and grass\"}\n{_id: 4, \"name\": \"Cat and Dog\"}\n{_id: 5, \"name\": \"and\"}\nvar filter = Builders<MyClass>.Search.Text(x => x.Name, new List<string> { \"Hello\", \"and\" });\nvar result = _collection.Aggregate().Search(filter, indexName: \"WordIdx\").ToList();\n{_id: 1, \"name\": \"Hello World\"}\n{_id: 2, \"name\": \"Hello\"}\n{_id: 3, \"name\": \"Tree and grass\"}\n{_id: 4, \"name\": \"Cat and Dog\"}\n{_id: 5, \"name\": \"and\"}\n",
"text": "How can I create a search query in c# to match exact word that I specify. For eg lets say my dataset is in the following way:The result here will contain all the elements that isBut I only want {_id: 2, “name”: “Hello”} and {_id: 5, “name”: “and”}.\nShould I use _collection.Aggregate().Search(filter, indexName: “WordIdx”).Limit(2).ToList()? But I am worried about my results being inaccurate",
"username": "Akshay_Katoch"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"name\": {\n \"type\": \"string\"\n }\n }\n }\n}\n[\n {\n $search: {\n index: \"serachIndex\",\n text: {\n query: \"and, hello\",\n path: \"name\"\n }\n }\n }\n]\ndb.collection.createIndex({\"name\":\"text\"}){\n \"name\": {\n $in: [\"and\", \"Hello\"]\n }\n}\nusing MongoDB.Bson;\nusing MongoDB.Driver;\n\nnew BsonDocument(\"name\", new BsonDocument(\"$in\", new BsonArray\n {\n \"and\",\n \"Hello\"\n }))\n",
"text": "Hi @Akshay_Katoch and welcome to MongoDB community forums!!If I understand your question correctly, you are trying to extract the documents from the collection which has names and either as “hello” or “and”.While Atlas Search gives you the benefit of faster query execution, this feature is helpful if you wish to get the documents from the collection with text that contain the specific words in between.I tried to replicate the sample documents given as above and use the Atlas Search and Text Search\nUsing Atlas Search:\nSearch Index:Search Query:As you you have mentioned correctly, this gives all the documents in the collection and adding a $limit operator would certainly have the chance of returning unexpected results (especially if your dataset is much larger).On the other hand, since it seems you are after the exact matches, you can create the following index db.collection.createIndex({\"name\":\"text\"}) which would support the below query that returns the documents you are after:The equivalent c# code would look like the following:Please let me know if I have been mistaken in understanding your concern.Warm Regards\nAasawari",
"username": "Aasawari"
}
] |
Search using search index in c#
|
2023-08-06T11:49:23.887Z
|
Search using search index in c#
| 591 |
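For readers following along in Python rather than C#, here is a minimal pymongo sketch of the same exact-match `$in` filter suggested above; the connection string and database/collection names are assumptions for the example.

```python
# Sketch of the exact-match idea in pymongo (assumed connection/collection names):
# an $in filter returns only documents whose name equals one of the listed values.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["test"]["words"]

exact_matches = list(coll.find({"name": {"$in": ["Hello", "and"]}}))
# Expected to return only {_id: 2, name: "Hello"} and {_id: 5, name: "and"}.
print(exact_matches)
```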
null |
[
"python",
"time-series"
] |
[
{
"code": "create_collectionlist_collection_names\"collmod.timeseries.timefield\" is an unknown field",
"text": "I am using pymongo and a command which automatically creates a collection for each customer in my system. Normally I would just insert one document and delete it, which assured that the collection exists or is created if not. Now I would like to switch to timeseries collections, however this complicates things. For creation I need to use the create_collection command, which is already unhandy, as I need to call list_collection_names before to confirm that the collection doesn’t exist already. The two questions I have are:",
"username": "conrad"
},
{
"code": "create_collectionlist_collection_nameslist_collection_names",
"text": "Hi @conrad and welcome to MongoDB community forums!! I am using pymongo and a command which automatically creates a collection for each customer in my system.It would be helpful for us to provide you could help me understand why do you need to create a collection for every customer as this would result in multiple collections having redundant field names.Now I would like to switch to timeseries collections,Could you also help me understand the use case in order to move from simple collection to a time series collection.I need to use the create_collection command, which is already unhandy, as I need to call list_collection_names before to confirm that the collection doesn’t exist already.I believe this would be needed even if you create a non time series collection, you would need to use list_collection_names to view before you are trying to insert into new collection.However, to answer your questions,is it possible to update a timeseries collections options?At this point as mentioned in the documentation for Limitations In Time Series collection, the delete and update have some limitations. Therefore, this might not be available at the supported drivers as well.If you have a specific feature request or suggestion that you would like to see implemented in MongoDB, we encourage you to share it in the MongoDB Feedback Engine.Let us know if you have any further questions.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "list_collection_names",
"text": "Thank you for your answer.It would be helpful for us to provide you could help me understand why do you need to create a collection for every customer as this would result in multiple collections having redundant field names.This is a security measure on our end so that programmers on our side can only access by definition the collection for the specific customer. Separating the data made our whole process a lot simpler.ould you also help me understand the use case in order to move from simple collection to a time series collection.We only store sensor data in mongoDB, but we are doing this for the last couple of years and now that the TimeSeries collection exist, we would like to use it to improve performance of queries etc.I believe this would be needed even if you create a non time series collection, you would need to use list_collection_names to view before you are trying to insert into new collection.No it actually isn’t. At the moment we add and delete a specific document, which automatically creates a collection if it doesn’t exist.At this point as mentioned in the documentation for Limitations In Time Series collection, the delete and update have some limitations. Therefore, this might not be available at the supported drivers as well.I am not talking about update and deletion calls, but update on the collection itself. The driver shouldn’t be an issue as I could just plainly use the mongodb commands if it would be possible.",
"username": "conrad"
},
{
"code": "convertToTimeseries",
"text": "Hi @conradApologies for writing late.To answer your question regarding convertToTimeseries, as of today we do not have the capability to do so. This is because the time Series collection in MongoDB is implemented in a way that it is a view to system.bucket.collection.Could you kindly verify if the proposed insert and delete workflow aims to prevent excessive privileges? Are you using the built-in readWrite role in MongoDB? Have any custom roles been established, or are there other security measures in place for this purpose?\nIn saying so, there could one potential solution if it works for your use case is to create an API layer between MongoDB and the user. That way, you can restrict database operations in a more granular manner (e.g. you can ensure a collection exists without the need to do the insert-delete workflow).\nHowever this flexibility comes with additional maintenance, so this may not work for all use cases.Let us know if this work for your use case.Regards\nAasawari",
"username": "Aasawari"
}
] |
How to update an existing timeseries collection?
|
2023-07-07T19:53:00.080Z
|
How to update an existing timeseries collection?
| 772 |
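As a rough sketch of the create-if-missing workflow discussed above: pymongo's create_collection accepts a timeseries option and raises CollectionInvalid when the collection already exists, so the existence check can be a try/except instead of an insert-and-delete round trip. The database, field, and collection names below are assumptions for the example, not taken from the thread.

```python
# Sketch (pymongo, assumed names): create a time series collection only if it
# does not exist yet, without a separate list_collection_names call.
from pymongo import MongoClient
from pymongo.errors import CollectionInvalid

client = MongoClient("mongodb://localhost:27017")
db = client["sensors"]  # assumed database name

def ensure_timeseries(name: str) -> None:
    try:
        db.create_collection(
            name,
            timeseries={"timeField": "time", "metaField": "sensor_id"},  # assumed fields
        )
    except CollectionInvalid:
        # Raised when the collection already exists; safe to ignore here.
        pass

ensure_timeseries("customer_42")  # hypothetical per-customer collection name
```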
[] |
[
{
"code": "",
"text": "Pls i have an issue with my connection url on mongodb that’s what i think ao though, pls can check the images provided below for better understanding\n\nIMG-20230808-WA00021049×269 66.2 KB\n",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "You might want to post the code with passwords removed that show how you’re connecting. Most likely there’s a problem with your connection string.Also not sure if this is your problem but with the password section of your uri string, it’s safer to wrap it in encodeURIComponent in case you have any special charactersencodeURIComponent(process.env.MONGODB_PASSWORD)",
"username": "Justin_Jaeger"
},
{
"code": "",
"text": "Eeii pls if u can break it down small especially what u said last",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "I don’t get what u said here “You might want to post the code with passwords removed that show how you’re connecting”, I know that am meant to input my password in the connection string with username though, so putting password in it can affect it how it would connect right?",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "I was also talking about the last line the encode something u said, I understand when it is been used but I don’t know how or where it should be put in my connection string or my code",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "My recommendation is, go back to the very beginning and follow the instructions step by step. Start here:Otherwise you’ll keep getting stuck. I’ve been there, it’s tough in the beginning. YouTube walkthroughs can help as well. But I promise all of the answers are in the documentation – MongoDB’s is very good – and the best skill you can develop is learning how to read through them",
"username": "Justin_Jaeger"
}
] |
Mongodb connection url
|
2023-08-09T16:35:56.202Z
|
Mongodb connection url
| 314 |
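The advice above about encodeURIComponent generalizes to any driver: special characters in the username or password must be percent-encoded before they go into the connection URI. Here is a minimal Python sketch using urllib.parse.quote_plus; the credentials and host below are placeholders, not real values.

```python
# Sketch: percent-encode credentials before building the connection string,
# the same idea as encodeURIComponent above (all values here are placeholders).
from urllib.parse import quote_plus
from pymongo import MongoClient

username = quote_plus("myUser")
password = quote_plus("p@ss/word!")  # special characters must be escaped in the URI

uri = f"mongodb+srv://{username}:{password}@cluster0.example.mongodb.net/?retryWrites=true"
client = MongoClient(uri)
print(client.admin.command("ping"))
```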
|
null |
[
"golang"
] |
[
{
"code": "package main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"go.mongodb.org/mongo-driver/mongo\"\n\t\"go.mongodb.org/mongo-driver/mongo/options\"\n\t\"gopkg.in/mgo.v2/bson\"\n)\n\nfunc main() {\n\t// Create a MongoDB connection\n\tclientOptions := options.Client().ApplyURI(\"mongodb://localhost:27017\")\n\tclient, err := mongo.Connect(context.Background(), clientOptions)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\t// defer client.Disconnect(context.Background()) // Close the connection when done\n\n\t// Ping the MongoDB server and check for errors\n\terr = client.Ping(context.Background(), nil)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfmt.Println(\"Connected to MongoDB!\")\n\n\t// Get a pointer to the database and collection\n\tdb := client.Database(\"test\")\n\tcollection := db.Collection(\"users\")\n\n\t// Run a query on the collection\n\t// For example, insert a document\n\tres, err := collection.InsertOne(context.Background(), map[string]string{\"name\": \"Alice\", \"age\": \"25\"})\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfmt.Println(\"Inserted document with ID:\", res.InsertedID)\n\tvar resp bson.M\n\terr = collection.FindOne(context.TODO(), bson.M{\"_id\": res.InsertedID}).Decode(&res)\n\tfmt.Println(\"res\", resp, err)\n}\n",
"text": "I have written the below program (sample for connection & query run) and not Disconnect the client to check open files with mongodb query. When I use simple connection & after than 10 queries in loop then it will open 3 resources. 2 for connection & 1 for multiple queries that I will run with that client.\nBut when I use convert the same program with go-routines then open connections increases 7-8. Can you please expalin how mongodb open resources ? & why pooling not work as expected in case of go routine.",
"username": "Swati_Sharma"
},
{
"code": "",
"text": "Using default clientOptions the maxPoolSize is 100. So I think this is consistent with the options you provided on the Connect()",
"username": "chris"
},
{
"code": "",
"text": "@Swati_Sharma the Go Driver connection pool will open a new connection anytime there is an operation waiting to run and there are no idle connections, unless the maximum pool size is reached.When operations are run sequentially (i.e. in the same goroutine), the next operation never begins until the previous one completes, so there is only one request for a connection at a time. However, when operations are run in separate goroutines, there may be many simultaneous requests for connections. In response, the Go Driver creates more connections to serve the waiting operations. Those connections will still be reused for subsequent operations.As @chris mentioned, you can limit the number of connections each connection pool will maintain using maxPoolSize or SetMaxPoolSize.",
"username": "Matt_Dale"
},
{
"code": "",
"text": "ok thank you for your clarification",
"username": "Swati_Sharma"
}
] |
Need to understand open connection concept of mongodb with golang
|
2023-08-04T12:21:55.982Z
|
Need to understand open connection concept of mongodb with golang
| 665 |
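The pool behavior described above is not specific to the Go driver: every official driver caps its pool with a maxPoolSize option and reuses idle connections for later operations. As a rough cross-driver illustration, here is a pymongo sketch; the connection string and names are assumptions for the example.

```python
# Sketch: cap the connection pool size, the pymongo equivalent of SetMaxPoolSize in Go.
from pymongo import MongoClient

# Limit the pool to 20 connections; extra concurrent operations wait for a free one.
client = MongoClient("mongodb://localhost:27017", maxPoolSize=20)

coll = client["test"]["users"]
coll.insert_one({"name": "Alice", "age": 25})
print(coll.find_one({"name": "Alice"}))
```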
null |
[
"node-js",
"mongoose-odm"
] |
[
{
"code": "users.findOne()// api/auth/register.js\nimport bcrypt from 'bcrypt'\nimport User from '@/models/User'\nimport { connectToDatabase } from '@/utils/mongodb'\n\nexport default async function handler(req, res) {\n \n await connectToDatabase() // no db object to destructure\n \n if (req.method === 'POST') {\n const { username, password } = req.body\n\n try {\n const existingUser = await User.findOne({ username })\n\n if (existingUser) {\n return res\n .status(400)\n .send({ success: false, message: 'Username already exists' })\n }\n\n const hashedPassword = await bcrypt.hash(password, 10)\n const newUser = await new User({ \n username, \n password: hashedPassword, \n role: 'user' \n }).save()\n\n return res.status(201).send({ success: true, data: newUser })\n } catch (error) {\n console.error(error) // Log the error to console\n return res\n .status(500)\n .send({ success: false, message: 'Something went wrong', error: error.message }) // Send error message in response\n }\n } else {\n res.setHeader('Allow', ['POST'])\n return res\n .status(405)\n .send({ success: false, message: `Method ${req.method} not allowed` })\n }\n}\n\n// /models/User.js\n\nimport mongoose from 'mongoose'\nimport bcrypt from 'bcrypt'\n\nconst userSchema = new mongoose.Schema({\n username: { type: String, required: true, unique: true },\n password: { type: String, required: true },\n role: { type: String, default: 'user', enum: ['user', 'admin'] },\n})\n\nuserSchema.pre('save', async function (next) {\n if (this.isModified('password')) {\n this.password = await bcrypt.hash(this.password, 10)\n }\n next()\n})\n\nexport default mongoose.models.User || mongoose.model('User', userSchema)\n\n// /utils/mongodb.js\nimport { MongoClient } from 'mongodb';\n\nlet cached = global.mongo;\n\nif (!cached) cached = global.mongo = {};\n\nexport async function connectToDatabase() {\n if (cached.conn) return cached.conn;\n if (!cached.promise) {\n const opts = {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n serverSelectionTimeoutMS: 30000 // Server selection timeout set to 30000ms\n };\n cached.promise = MongoClient.connect(process.env.MONGODB_URI, opts).then(\n (client) => {\n return { client, db: client.db(process.env.DB_NAME) };\n },\n );\n }\n cached.conn = await cached.promise;\n return cached.conn;\n}\n\n",
"text": "Implementing mongodb and the connection to mongodb through mongoshell showed it was ok.I have a register function that imports a Mongoose schema which gives me this error:MongooseError: Operation users.findOne() buffering timed out after 10000msPlease help me resolve this. Thanks in advance!",
"username": "Alan_Wunsche"
},
{
"code": "users.findOne()",
"text": "Hi @Alan_Wunsche,Welcome to the MongoDB Community.Implementing mongodb and the connection to mongodb through mongoshell showed it was ok.\nI have a register function that imports a Mongoose schema which gives me this error:\nMongooseError: Operation users.findOne() buffering timed out after 10000msCan you please verify whether it was functioning correctly before? If it was, have there been any recent changes?How are you currently testing the API? Could you share the specific error message from the log when executing the code?Can the application code establish a connection to MongoDB? Is your IP address whitelisted?Have you attempted testing a different API? Is the same error appearing there as well?In case of further assistance, please provide details regarding the MongoDB version and the Mongoose version in use. It would also be helpful to include the error logs you’ve encountered.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] |
Mongoose timeout issue on registering a new user to MongoDB (Nextjs app)
|
2023-08-02T12:38:32.223Z
|
Mongoose timeout issue on registering a new user to MongoDB (Nextjs app)
| 635 |
null |
[
"aggregation",
"queries",
"python"
] |
[
{
"code": "{\n htsc_code: 'AAPL',\n time: ISODate('2023-08-09T12:34:056Z'),\n trading_phase_code: \"1\",\n exchange: 'NASDAQ',\n security_type: 'stock',\n price_max: '100',\n price_min: '1',\n prev_close: '12',\n num_trades: \"100\",\n volume: 100,\n value: 10000,\n last: 20,\n open: 2,\n high: 10,\n low: 1,\n close: 20\n}\nstocks = db[collection].distinct('htsc_code')\ndoc_ls = []\nfor stock in stocks:\n doc_ls.append(db[col].find({'htsc_code': stock'}, {'last': 1, 'time': 1, 'stock_code': 1, '_id': 0}).limit(1))\n[\n {'$group': {\n '_id': '$htsc_code',\n 'price_last': {'$last': '$last'},\n 'time_last': {'$last': '$time'} \n }\n]\n{\n \"explainVersion\": \"2\",\n \"queryPlanner\": {\n \"namespace\": \"htsc.tick_stock\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {},\n \"queryHash\": \"AACE9B53\",\n \"planCacheKey\": \"AACE9B53\",\n \"optimizedPipeline\": true,\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"queryPlan\": {\n \"stage\": \"GROUP\",\n \"planNodeId\": 2,\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"planNodeId\": 1,\n \"filter\": {},\n \"direction\": \"forward\"\n }\n },\n \"slotBasedPlan\": {\n \"slots\": \"$$RESULT=s13 env: { s1 = TimeZoneDatabase(Etc/GMT-1...Europe/Kyiv) (timeZoneDB), s3 = Timestamp(1691633168, 163) (CLUSTER_TIME), s2 = Nothing (SEARCH_META), s4 = 1691633168828 (NOW) }\",\n \"stages\": \"[2] mkbson s13 [_id = s8, price_last = s10, time_last = s12] true false \\n[2] group [s8] [s10 = last (fillEmpty (s9, null)), s12 = last (fillEmpty (s11, null))] \\n[2] project [s11 = getField (s5, \\\"time\\\")] \\n[2] project [s9 = getField (s5, \\\"last\\\")] \\n[2] project [s8 = fillEmpty (s7, null)] \\n[2] project [s7 = getField (s5, \\\"htsc_code\\\")] \\n[1] scan s5 s6 none none none none [] @\\\"2fc8cb4f-41e2-4005-9f02-738da2081a4f\\\" true false \"\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 5381,\n \"executionTimeMillis\": 10428,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 6901554,\n \"executionStages\": {\n \"stage\": \"mkbson\",\n \"planNodeId\": 2,\n \"nReturned\": 5381,\n \"executionTimeMillisEstimate\": 10427,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 6901,\n \"restoreState\": 6901,\n \"isEOF\": 1,\n \"objSlot\": 13,\n \"fields\": [],\n \"projectFields\": [\n \"_id\",\n \"price_last\",\n \"time_last\"\n ],\n \"projectSlots\": [\n 8,\n 10,\n 12\n ],\n \"forceNewObject\": true,\n \"returnOldObject\": false,\n \"inputStage\": {\n \"stage\": \"group\",\n \"planNodeId\": 2,\n \"nReturned\": 5381,\n \"executionTimeMillisEstimate\": 10426,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 6901,\n \"restoreState\": 6901,\n \"isEOF\": 1,\n \"groupBySlots\": [\n 8\n ],\n \"expressions\": {\n \"10\": \"last (fillEmpty (s9, null)) \",\n \"12\": \"last (fillEmpty (s11, null)) \"\n },\n \"usedDisk\": false,\n \"spilledRecords\": 0,\n \"spilledBytesApprox\": 0,\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 2,\n \"nReturned\": 6901554,\n \"executionTimeMillisEstimate\": 8903,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 6901,\n \"restoreState\": 6901,\n \"isEOF\": 1,\n \"projections\": {\n \"11\": \"getField (s5, \\\"time\\\") \"\n },\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 2,\n \"nReturned\": 6901554,\n \"executionTimeMillisEstimate\": 8674,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 6901,\n \"restoreState\": 6901,\n \"isEOF\": 
1,\n \"projections\": {\n \"9\": \"getField (s5, \\\"last\\\") \"\n },\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 2,\n \"nReturned\": 6901554,\n \"executionTimeMillisEstimate\": 8174,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 6901,\n \"restoreState\": 6901,\n \"isEOF\": 1,\n \"projections\": {\n \"8\": \"fillEmpty (s7, null) \"\n },\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 2,\n \"nReturned\": 6901554,\n \"executionTimeMillisEstimate\": 8010,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 6901,\n \"restoreState\": 6901,\n \"isEOF\": 1,\n \"projections\": {\n \"7\": \"getField (s5, \\\"htsc_code\\\") \"\n },\n \"inputStage\": {\n \"stage\": \"scan\",\n \"planNodeId\": 1,\n \"nReturned\": 6901554,\n \"executionTimeMillisEstimate\": 7627,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 6901,\n \"restoreState\": 6901,\n \"isEOF\": 1,\n \"numReads\": 6901554,\n \"recordSlot\": 5,\n \"recordIdSlot\": 6,\n \"fields\": [],\n \"outputSlots\": []\n }\n }\n }\n }\n }\n }\n },\n \"allPlansExecution\": []\n },\n \"command\": {\n \"aggregate\": \"tick_stock\",\n \"pipeline\": [\n {\n \"$group\": {\n \"_id\": \"$htsc_code\",\n \"price_last\": {\n \"$last\": \"$last\"\n },\n \"time_last\": {\n \"$last\": \"$time\"\n }\n }\n }\n ],\n \"allowDiskUse\": true,\n \"cursor\": {},\n \"maxTimeMS\": 60000,\n \"$db\": \"htsc\"\n },\n \"serverInfo\": {\n \"host\": \"IT-3\",\n \"port\": 27017,\n \"version\": \"6.0.3\",\n \"gitVersion\": \"f803681c3ae19817d31958965850193de067c516\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"ok\": 1,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1691633179,\n \"i\": 137\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": 0\n }\n },\n \"operationTime\": {\n \"$timestamp\": {\n \"t\": 1691633179,\n \"i\": 137\n }\n }\n}\n{\n '$sort': {\n 'htsc_code': 1, 'time': -1, 'last': 1\n }\n}\n{\n \"explainVersion\": \"2\",\n \"queryPlanner\": {\n \"namespace\": \"htsc.tick_stock\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {},\n \"queryHash\": \"9B48956E\",\n \"planCacheKey\": \"9B48956E\",\n \"optimizedPipeline\": true,\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"queryPlan\": {\n \"stage\": \"GROUP\",\n \"planNodeId\": 4,\n \"inputStage\": {\n \"stage\": \"SORT\",\n \"planNodeId\": 3,\n \"sortPattern\": {\n \"htsc_code\": 1,\n \"time\": 1,\n \"last\": 1\n },\n \"memLimit\": 104857600,\n \"type\": \"simple\",\n \"inputStage\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"planNodeId\": 2,\n \"transformBy\": {\n \"htsc_code\": true,\n \"last\": true,\n \"time\": true,\n \"_id\": false\n },\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"planNodeId\": 1,\n \"filter\": {},\n \"direction\": \"forward\"\n }\n }\n }\n },\n \"slotBasedPlan\": {\n \"slots\": \"$$RESULT=s27 env: { s1 = TimeZoneDatabase(Etc/GMT-1...Europe/Kyiv) (timeZoneDB), s3 = 
Timestamp(1691633545, 439) (CLUSTER_TIME), s2 = Nothing (SEARCH_META), s4 = 1691633545927 (NOW) }\",\n \"stages\": \"[4] mkbson s27 [_id = s22, price_last = s24, time_last = s26] true false \\n[4] group [s22] [s24 = last (fillEmpty (s23, null)), s26 = last (fillEmpty (s25, null))] \\n[4] project [s25 = getField (s7, \\\"time\\\")] \\n[4] project [s23 = getField (s7, \\\"last\\\")] \\n[4] project [s22 = fillEmpty (s21, null)] \\n[4] project [s21 = getField (s7, \\\"htsc_code\\\")] \\n[3] sort [s14, s17, s20] [asc, asc, asc] [s7] \\n[3] project [s20 = fillEmpty (s19, undefined)] \\n[3] traverse s19 s18 s10 {if (s18 <=> s19 < 0, s18, s19)} {} \\nfrom \\n [3] project [s17 = fillEmpty (s16, undefined)] \\n [3] traverse s16 s15 s9 {if (s15 <=> s16 < 0, s15, s16)} {} \\n from \\n [3] project [s14 = fillEmpty (s13, undefined)] \\n [3] traverse s13 s12 s8 {if (s12 <=> s13 < 0, s12, s13)} {} \\n from \\n [3] project [s11 = isArray (s8) <=> false + isArray (s9) <=> false + isArray (s10) <=> false <= 1 || fail ( 2 ,cannot sort with keys that are parallel arrays)] \\n [3] project [s8 = fillEmpty (getField (s7, \\\"htsc_code\\\"), null), s9 = fillEmpty (getField (s7, \\\"time\\\"), null), s10 = fillEmpty (getField (s7, \\\"last\\\"), null)] \\n [2] mkbson s7 s5 [htsc_code, last, time] keep [] true false \\n [1] scan s5 s6 none none none none [] @\\\"2fc8cb4f-41e2-4005-9f02-738da2081a4f\\\" true false \\n in \\n [3] project [s12 = s8] \\n [3] limit 1 \\n [3] coscan \\n \\n in \\n [3] project [s15 = s9] \\n [3] limit 1 \\n [3] coscan \\n \\nin \\n [3] project [s18 = s10] \\n [3] limit 1 \\n [3] coscan \\n\"\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 5381,\n \"executionTimeMillis\": 45345,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 7054503,\n \"executionStages\": {\n \"stage\": \"mkbson\",\n \"planNodeId\": 4,\n \"nReturned\": 5381,\n \"executionTimeMillisEstimate\": 45343,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"objSlot\": 27,\n \"fields\": [],\n \"projectFields\": [\n \"_id\",\n \"price_last\",\n \"time_last\"\n ],\n \"projectSlots\": [\n 22,\n 24,\n 26\n ],\n \"forceNewObject\": true,\n \"returnOldObject\": false,\n \"inputStage\": {\n \"stage\": \"group\",\n \"planNodeId\": 4,\n \"nReturned\": 5381,\n \"executionTimeMillisEstimate\": 45341,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"groupBySlots\": [\n 22\n ],\n \"expressions\": {\n \"24\": \"last (fillEmpty (s23, null)) \",\n \"26\": \"last (fillEmpty (s25, null)) \"\n },\n \"usedDisk\": false,\n \"spilledRecords\": 0,\n \"spilledBytesApprox\": 0,\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 4,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 43614,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"projections\": {\n \"25\": \"getField (s7, \\\"time\\\") \"\n },\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 4,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 43235,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"projections\": {\n \"23\": \"getField (s7, \\\"last\\\") \"\n },\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 4,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 42795,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n 
\"isEOF\": 1,\n \"projections\": {\n \"22\": \"fillEmpty (s21, null) \"\n },\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 4,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 42429,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"projections\": {\n \"21\": \"getField (s7, \\\"htsc_code\\\") \"\n },\n \"inputStage\": {\n \"stage\": \"sort\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 41871,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"memLimit\": 104857600,\n \"totalDataSizeSorted\": 988911916,\n \"usedDisk\": true,\n \"spills\": 10,\n \"orderBySlots\": {\n \"14\": \"asc\",\n \"17\": \"asc\",\n \"20\": \"asc\"\n },\n \"outputSlots\": [\n 7\n ],\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 14191,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"projections\": {\n \"20\": \"fillEmpty (s19, undefined) \"\n },\n \"inputStage\": {\n \"stage\": \"traverse\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 13967,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"innerOpens\": 0,\n \"innerCloses\": 0,\n \"inputSlot\": 10,\n \"outputSlot\": 19,\n \"outputSlotInner\": 18,\n \"correlatedSlots\": [],\n \"nestedArraysDepth\": 1,\n \"fold\": \"if (s18 <=> s19 < 0, s18, s19) \",\n \"outerStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 13326,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"projections\": {\n \"17\": \"fillEmpty (s16, undefined) \"\n },\n \"inputStage\": {\n \"stage\": \"traverse\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 13138,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"innerOpens\": 0,\n \"innerCloses\": 0,\n \"inputSlot\": 9,\n \"outputSlot\": 16,\n \"outputSlotInner\": 15,\n \"correlatedSlots\": [],\n \"nestedArraysDepth\": 1,\n \"fold\": \"if (s15 <=> s16 < 0, s15, s16) \",\n \"outerStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 12561,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"projections\": {\n \"14\": \"fillEmpty (s13, undefined) \"\n },\n \"inputStage\": {\n \"stage\": \"traverse\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 12351,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"innerOpens\": 0,\n \"innerCloses\": 0,\n \"inputSlot\": 8,\n \"outputSlot\": 13,\n \"outputSlotInner\": 12,\n \"correlatedSlots\": [],\n \"nestedArraysDepth\": 1,\n \"fold\": \"if (s12 <=> s13 < 0, s12, s13) \",\n \"outerStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 11430,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"projections\": {\n \"11\": \"isArray (s8) <=> false + isArray (s9) <=> false + isArray (s10) <=> false <= 1 || fail ( 2 ,cannot sort with keys that are parallel arrays) \"\n },\n \"inputStage\": {\n \"stage\": \"project\",\n 
\"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 10261,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"projections\": {\n \"8\": \"fillEmpty (getField (s7, \\\"htsc_code\\\"), null) \",\n \"9\": \"fillEmpty (getField (s7, \\\"time\\\"), null) \",\n \"10\": \"fillEmpty (getField (s7, \\\"last\\\"), null) \"\n },\n \"inputStage\": {\n \"stage\": \"mkbson\",\n \"planNodeId\": 2,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 9114,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"objSlot\": 7,\n \"rootSlot\": 5,\n \"fieldBehavior\": \"keep\",\n \"fields\": [\n \"htsc_code\",\n \"last\",\n \"time\"\n ],\n \"projectFields\": [],\n \"projectSlots\": [],\n \"forceNewObject\": true,\n \"returnOldObject\": false,\n \"inputStage\": {\n \"stage\": \"scan\",\n \"planNodeId\": 1,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 6932,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 1,\n \"numReads\": 7054503,\n \"recordSlot\": 5,\n \"recordIdSlot\": 6,\n \"fields\": [],\n \"outputSlots\": []\n }\n }\n }\n },\n \"innerStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 634,\n \"opens\": 7054503,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 0,\n \"projections\": {\n \"12\": \"s8 \"\n },\n \"inputStage\": {\n \"stage\": \"limit\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 301,\n \"opens\": 7054503,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 0,\n \"limit\": 1,\n \"inputStage\": {\n \"stage\": \"coscan\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 132,\n \"opens\": 7054503,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 0\n }\n }\n }\n }\n },\n \"innerStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 456,\n \"opens\": 7054503,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 0,\n \"projections\": {\n \"15\": \"s9 \"\n },\n \"inputStage\": {\n \"stage\": \"limit\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 239,\n \"opens\": 7054503,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 0,\n \"limit\": 1,\n \"inputStage\": {\n \"stage\": \"coscan\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 122,\n \"opens\": 7054503,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 0\n }\n }\n }\n }\n },\n \"innerStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 481,\n \"opens\": 7054503,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 0,\n \"projections\": {\n \"18\": \"s10 \"\n },\n \"inputStage\": {\n \"stage\": \"limit\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 227,\n \"opens\": 7054503,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n \"isEOF\": 0,\n \"limit\": 1,\n \"inputStage\": {\n \"stage\": \"coscan\",\n \"planNodeId\": 3,\n \"nReturned\": 7054503,\n \"executionTimeMillisEstimate\": 142,\n \"opens\": 7054503,\n \"closes\": 1,\n \"saveState\": 7055,\n \"restoreState\": 7055,\n 
\"isEOF\": 0\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n },\n \"allPlansExecution\": []\n },\n \"command\": {\n \"aggregate\": \"tick_stock\",\n \"pipeline\": [\n {\n \"$sort\": {\n \"htsc_code\": 1,\n \"time\": 1,\n \"last\": 1\n }\n },\n {\n \"$group\": {\n \"_id\": \"$htsc_code\",\n \"price_last\": {\n \"$last\": \"$last\"\n },\n \"time_last\": {\n \"$last\": \"$time\"\n }\n }\n }\n ],\n \"allowDiskUse\": true,\n \"cursor\": {},\n \"maxTimeMS\": 60000,\n \"$db\": \"htsc\"\n },\n \"serverInfo\": {\n \"host\": \"IT-3\",\n \"port\": 27017,\n \"version\": \"6.0.3\",\n \"gitVersion\": \"f803681c3ae19817d31958965850193de067c516\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"ok\": 1,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1691633591,\n \"i\": 78\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": 0\n }\n },\n \"operationTime\": {\n \"$timestamp\": {\n \"t\": 1691633591,\n \"i\": 78\n }\n }\n}\n",
"text": "I am currently storing the tick data of about 5000 stocks into a collection, and I am trying to find the last doc inserted into the database of each stock. The collection holds about 6m docs on average, and all stock have the following format:with some extra fields and values: {bid/ask}{price/size}{i} for i in range(1, 11): {some float values}.\nI need only the last value of the ‘last’ field and the ‘time’ field. I did the following:\n(1) First create the UNIQUE index htsc_code_1_time_-1, then run the following code:It took 1.3 seconds to generate the list\n(2) Do not create any index, perform the following aggregation:The query took 5 seconds. The explained result is:\n(3) Similar to (2), but create the UNIQUE index htsc_code_1_time_-1 in advance, and perform the aggregation in (2). The query took 10 seconds. The explain result is:and no index is used.\n(4) Similar to (2), but add the unique index mentioned and sort by the fields in advance, i.e. add a stage:It took 45 seconds, the explain result is:The database is running on a PC with i5-13600K and 64GB of RAM. If I managed to make the query faster, it can be performed on a server with 48 cores and 500G of RAM. Is there any other ways I can try to perform the $group faster than querying with a for-loop in Python? Thank you!",
"username": "l_ym"
},
{
"code": "{\n \"explainVersion\": \"2\",\n \"queryPlanner\": {\n \"namespace\": \"htsc.tick_stock\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {},\n \"queryHash\": \"42EF333D\",\n \"planCacheKey\": \"42EF333D\",\n \"optimizedPipeline\": true,\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"queryPlan\": {\n \"stage\": \"GROUP\",\n \"planNodeId\": 2,\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"planNodeId\": 1,\n \"filter\": {},\n \"direction\": \"forward\"\n }\n },\n \"slotBasedPlan\": {\n \"slots\": \"$$RESULT=s13 env: { s3 = Timestamp(1691633763, 781) (CLUSTER_TIME), s1 = TimeZoneDatabase(Etc/GMT-1...Europe/Kyiv) (timeZoneDB), s2 = Nothing (SEARCH_META), s4 = 1691633763969 (NOW) }\",\n \"stages\": \"[2] mkbson s13 [_id = s8, price_last = s10, time_last = s12] true false \\n[2] group [s8] [s10 = last (fillEmpty (s9, null)), s12 = last (fillEmpty (s11, null))] \\n[2] project [s11 = getField (s5, \\\"time\\\")] \\n[2] project [s9 = getField (s5, \\\"last\\\")] \\n[2] project [s8 = fillEmpty (s7, null)] \\n[2] project [s7 = getField (s5, \\\"htsc_code\\\")] \\n[1] scan s5 s6 none none none none [] @\\\"2fc8cb4f-41e2-4005-9f02-738da2081a4f\\\" true false \"\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 5381,\n \"executionTimeMillis\": 7239,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 7126488,\n \"executionStages\": {\n \"stage\": \"mkbson\",\n \"planNodeId\": 2,\n \"nReturned\": 5381,\n \"executionTimeMillisEstimate\": 7236,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7126,\n \"restoreState\": 7126,\n \"isEOF\": 1,\n \"objSlot\": 13,\n \"fields\": [],\n \"projectFields\": [\n \"_id\",\n \"price_last\",\n \"time_last\"\n ],\n \"projectSlots\": [\n 8,\n 10,\n 12\n ],\n \"forceNewObject\": true,\n \"returnOldObject\": false,\n \"inputStage\": {\n \"stage\": \"group\",\n \"planNodeId\": 2,\n \"nReturned\": 5381,\n \"executionTimeMillisEstimate\": 7236,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7126,\n \"restoreState\": 7126,\n \"isEOF\": 1,\n \"groupBySlots\": [\n 8\n ],\n \"expressions\": {\n \"10\": \"last (fillEmpty (s9, null)) \",\n \"12\": \"last (fillEmpty (s11, null)) \"\n },\n \"usedDisk\": false,\n \"spilledRecords\": 0,\n \"spilledBytesApprox\": 0,\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 2,\n \"nReturned\": 7126488,\n \"executionTimeMillisEstimate\": 6191,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7126,\n \"restoreState\": 7126,\n \"isEOF\": 1,\n \"projections\": {\n \"11\": \"getField (s5, \\\"time\\\") \"\n },\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 2,\n \"nReturned\": 7126488,\n \"executionTimeMillisEstimate\": 6002,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7126,\n \"restoreState\": 7126,\n \"isEOF\": 1,\n \"projections\": {\n \"9\": \"getField (s5, \\\"last\\\") \"\n },\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 2,\n \"nReturned\": 7126488,\n \"executionTimeMillisEstimate\": 5621,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7126,\n \"restoreState\": 7126,\n \"isEOF\": 1,\n \"projections\": {\n \"8\": \"fillEmpty (s7, null) \"\n },\n \"inputStage\": {\n \"stage\": \"project\",\n \"planNodeId\": 2,\n \"nReturned\": 7126488,\n \"executionTimeMillisEstimate\": 5474,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7126,\n \"restoreState\": 7126,\n \"isEOF\": 1,\n \"projections\": {\n \"7\": 
\"getField (s5, \\\"htsc_code\\\") \"\n },\n \"inputStage\": {\n \"stage\": \"scan\",\n \"planNodeId\": 1,\n \"nReturned\": 7126488,\n \"executionTimeMillisEstimate\": 5177,\n \"opens\": 1,\n \"closes\": 1,\n \"saveState\": 7126,\n \"restoreState\": 7126,\n \"isEOF\": 1,\n \"numReads\": 7126488,\n \"recordSlot\": 5,\n \"recordIdSlot\": 6,\n \"fields\": [],\n \"outputSlots\": []\n }\n }\n }\n }\n }\n }\n },\n \"allPlansExecution\": []\n },\n \"command\": {\n \"aggregate\": \"tick_stock\",\n \"pipeline\": [\n {\n \"$project\": {\n \"htsc_code\": 1,\n \"time\": 1,\n \"last\": 1\n }\n },\n {\n \"$group\": {\n \"_id\": \"$htsc_code\",\n \"price_last\": {\n \"$last\": \"$last\"\n },\n \"time_last\": {\n \"$last\": \"$time\"\n }\n }\n }\n ],\n \"allowDiskUse\": true,\n \"cursor\": {},\n \"maxTimeMS\": 60000,\n \"$db\": \"htsc\"\n },\n \"serverInfo\": {\n \"host\": \"IT-3\",\n \"port\": 27017,\n \"version\": \"6.0.3\",\n \"gitVersion\": \"f803681c3ae19817d31958965850193de067c516\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"ok\": 1,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1691633771,\n \"i\": 45\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": 0\n }\n },\n \"operationTime\": {\n \"$timestamp\": {\n \"t\": 1691633771,\n \"i\": 45\n }\n }\n}\n",
"text": "And the fifth attempt:\n(5) Similar to (2), but use the unique index and keep only used fields with ‘$project’.\nIt took about 7 seconds, and the explain result is:",
"username": "l_ym"
}
] |
MongoDB slow $group operation
|
2023-08-10T02:31:52.246Z
|
MongoDB slow $group operation
| 457 |
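Approach (1) above issues one query per stock. A variant of that same approach, sketched below in pymongo, adds an explicit descending sort on time so each per-stock lookup can be served by the {htsc_code: 1, time: -1} index; the connection string is an assumption, and whether this beats the aggregation on a given dataset would need to be measured.

```python
# Sketch (assumed connection string): fetch the latest tick per stock using the
# {htsc_code: 1, time: -1} index, one small indexed lookup per distinct code.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["htsc"]["tick_stock"]

latest = []
for code in coll.distinct("htsc_code"):
    doc = coll.find_one(
        {"htsc_code": code},
        {"_id": 0, "htsc_code": 1, "last": 1, "time": 1},
        sort=[("htsc_code", 1), ("time", -1)],
    )
    if doc is not None:
        latest.append(doc)

print(len(latest))
```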
null |
[
"node-js",
"replication",
"mongoose-odm"
] |
[
{
"code": "PoolClearedError [MongoPoolClearedError]: Connection pool for db2.prod.someDomain.com:27017 was cleared because another operation failed with: \"connection <monitor> to [ip:v6:add:ress::]:27017 timed out\"\n at ConnectionPool.processWaitQueue (/var/www/api/prod/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection_pool.js:520:82)\n at ConnectionPool.clear (/var/www/api/prod/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection_pool.js:251:14)\n at updateServers (/var/www/api/prod/node_modules/mongoose/node_modules/mongodb/lib/sdam/topology.js:461:29)\n at Topology.serverUpdateHandler (/var/www/api/prod/node_modules/mongoose/node_modules/mongodb/lib/sdam/topology.js:332:9)\n at Server.<anonymous> (/var/www/api/prod/node_modules/mongoose/node_modules/mongodb/lib/sdam/topology.js:444:77)\n at Server.emit (node:events:513:28)\n at Server.emit (node:domain:489:12)\n at markServerUnknown (/var/www/api/prod/node_modules/mongoose/node_modules/mongodb/lib/sdam/server.js:298:12)\n at Monitor.<anonymous> (/var/www/api/prod/node_modules/mongoose/node_modules/mongodb/lib/sdam/server.js:58:46)\n at Monitor.emit (node:events:513:28) {\n address: 'db.prod.someDomain.com:27017',\n [Symbol(errorLabels)]: Set(1) { 'RetryableWriteError' }\n}\n",
"text": "Hi,\nI have a problem with my database.\nI frequently (several time a day) have crash during queries. It throws this error:I’ve tried several changes, from decreasing my write rates and read rates over the db, but can’t get why it’s crashing … I can’t reproduce the bug, since it happens randomly, and when i read my server monitoring informations, CPUs RAM and SSD I/O are at low levelsHere is my configuration :",
"username": "Johan_Maupetit"
},
{
"code": "",
"text": "Hey @Johan_Maupetit,Welcome to the MongoDB Community forums!PoolClearedError [MongoPoolClearedError]: Connection pool for db2.prod.someDomain.com:27017 was cleared because another operation failed with: “connection to [ip:v6:add:ress::]:27017 timed out”Based on the error logs shared, it appears that your connection is experiencing a timeout. I suspect that it may be occurring either due to network errors or the PRIMARY is getting re-elected in your configuration, causing it to lose connection. I would suggest you implement a try-catch behavior in the application code, allowing it to easily switch to the new PRIMARY node.If the issue still persists after verifying the above, it would be helpful to share the code snippet of your connection code and the logs from the MongoDB server of the same timeframe. This will allow us to assist you better.Furthermore, please provide the exact versions of MongoDB, Mongoose, and the Node.js driver you are using.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "{\"t\":{\"$date\":\"2023-07-20T16:59:21.354+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn757\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":7592396}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.355+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn757\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:59106\",\"uuid\":\"d935431a-6989-40c4-967c-b8397fe3f211\",\"connectionId\":757,\"connectionCount\":431}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.484+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn418\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"monitohr.taggedkeywordchains\",\"command\":{\"find\":\"taggedkeywordchains\",\"filter\":{\"keyWords\":{\"$in\":[[\"responsable\"],[\"technique\"],[\"projet\"],[\"responsable\",\"technique\"],[\"technique\",\"projet\"],[\"responsable\",\"technique\",\"projet\"]]},\"deleted\":{\"$ne\":true}},\"projection\":{\"_id\":false,\"salaryStat\":true},\"hint\":{\"keyWords\":1},\"lsid\":{\"id\":{\"$uuid\":\"458f8939-ebf9-41bd-8e8f-8f01aab3738f\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1689872360,\"i\":9}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"E4mTMhpQD7h5384ODZBFCUbiJIo=\",\"subType\":\"0\"}},\"keyId\":7214879625111928841}},\"$db\":\"monitohr\",\"$readPreference\":{\"mode\":\"nearest\"}},\"planSummary\":\"IXSCAN { keyWords: 1 }\",\"keysExamined\":54648,\"docsExamined\":53354,\"cursorExhausted\":true,\"numYields\":54,\"nreturned\":6,\"queryHash\":\"0E10326B\",\"queryFramework\":\"classic\",\"reslen\":824,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":55}},\"Global\":{\"acquireCount\":{\"r\":55}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"readConcern\":{\"level\":\"local\",\"provenance\":\"implicitDefault\"},\"storage\":{},\"remote\":\"[2001:41d0:203:999c::]:41312\",\"protocol\":\"op_msg\",\"durationMillis\":154}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.520+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn839\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60776\",\"uuid\":\"4f2c0d54-3d6c-48bb-b620-2161aa05a282\",\"connectionId\":839,\"connectionCount\":430}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.520+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn846\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:32814\",\"uuid\":\"f8033146-6b75-4ffd-af4d-1f86ace27e22\",\"connectionId\":846,\"connectionCount\":429}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.520+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn881\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:33658\",\"uuid\":\"96ab2d84-7fac-431b-9cd2-7b50da073040\",\"connectionId\":881,\"connectionCount\":428}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.520+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn878\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:33602\",\"uuid\":\"103d1ee4-de39-40ee-b277-cf332485293a\",\"connectionId\":878,\"connectionCount\":427}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.520+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn819\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60546\",\"uuid\":\"ae357dea-abb4-4046-aa3a-e85e4616c0fc\",\"connectionId\":819,\"connectionCount\":426}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.520+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", 
\"id\":22944, \"ctx\":\"conn874\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:33528\",\"uuid\":\"674f095e-4fc8-48c7-9e24-11813c7d6347\",\"connectionId\":874,\"connectionCount\":425}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.520+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn827\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60626\",\"uuid\":\"aa7d34f1-bd95-4621-8da5-adf81b928889\",\"connectionId\":827,\"connectionCount\":424}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.520+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn805\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60238\",\"uuid\":\"012c38d7-f4f4-45a7-b06a-c1a715502ac8\",\"connectionId\":805,\"connectionCount\":423}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.520+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn854\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:33122\",\"uuid\":\"b70c7281-f213-469c-a3c5-fdb79572dc18\",\"connectionId\":854,\"connectionCount\":422}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn838\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60774\",\"uuid\":\"1d9c81ed-63b7-4c72-b2ce-77b6ad3b3cf4\",\"connectionId\":838,\"connectionCount\":421}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn773\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60006\",\"uuid\":\"54fa478c-2207-400e-abdb-b5a79f95592d\",\"connectionId\":773,\"connectionCount\":420}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn884\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:33678\",\"uuid\":\"ab988b2f-3e42-48eb-9b51-6672facf9ddb\",\"connectionId\":884,\"connectionCount\":419}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn883\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:33676\",\"uuid\":\"9c0499de-f89e-4a58-8cb0-8877fdf235d4\",\"connectionId\":883,\"connectionCount\":418}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn850\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:32976\",\"uuid\":\"ac05d36e-13a6-46fa-aae0-b520ecaef303\",\"connectionId\":850,\"connectionCount\":417}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn781\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60024\",\"uuid\":\"8555dba0-e14a-433e-b842-f391c36098cd\",\"connectionId\":781,\"connectionCount\":416}}\n\n...a lot more Connection ended...\n\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn777\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60014\",\"uuid\":\"e2dcdb9e-a1df-40ff-9cb5-c1632545cb5b\",\"connectionId\":777,\"connectionCount\":348}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn900\",\"msg\":\"Connection 
ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:34058\",\"uuid\":\"0c9bf1b7-0993-47fd-aa12-998fd233c068\",\"connectionId\":900,\"connectionCount\":346}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn785\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60058\",\"uuid\":\"dae7facd-b880-4c77-9998-6fad781f521e\",\"connectionId\":785,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn798\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60210\",\"uuid\":\"3115266f-0016-46c5-b8a3-f1a575015cbb\",\"connectionId\":798,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn787\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60086\",\"uuid\":\"e17a8b75-56d3-4c3e-9414-b1dd01f1f819\",\"connectionId\":787,\"connectionCount\":343}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn903\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:34076\",\"uuid\":\"82b59976-2c7f-48d0-975d-fa2f5d4fd794\",\"connectionId\":903,\"connectionCount\":342}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn880\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:33656\",\"uuid\":\"5ea12002-e89d-4dc1-9be3-8cd1d50ff284\",\"connectionId\":880,\"connectionCount\":341}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn788\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60092\",\"uuid\":\"c32626ad-56d7-47ef-a4c0-b31e22a9ab13\",\"connectionId\":788,\"connectionCount\":340}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn816\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60522\",\"uuid\":\"6ee5fe5c-d43b-48da-9cda-0f2610b41dc0\",\"connectionId\":816,\"connectionCount\":339}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn896\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:33954\",\"uuid\":\"b87815a3-8c47-49e5-af9b-7b062f0344a8\",\"connectionId\":896,\"connectionCount\":338}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn818\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60544\",\"uuid\":\"435f4187-fc99-4d0a-bb28-e668039229b1\",\"connectionId\":818,\"connectionCount\":337}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.528+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn774\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60008\",\"uuid\":\"7b8bfb03-8e08-403c-9d0c-decb58059082\",\"connectionId\":774,\"connectionCount\":336}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.528+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn830\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60668\",\"uuid\":\"dbacc587-89aa-4e96-89c6-c8dbddcb2892\",\"connectionId\":830,\"connectionCount\":335}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.528+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", 
\"id\":22944, \"ctx\":\"conn801\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60216\",\"uuid\":\"f9403a40-a51c-4dcd-b8db-8011ce0bcfb5\",\"connectionId\":801,\"connectionCount\":334}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.528+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn784\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60032\",\"uuid\":\"e8ed8812-5e8c-4abe-b077-25b0dadd0abc\",\"connectionId\":784,\"connectionCount\":333}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.528+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn869\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:33340\",\"uuid\":\"0c9f0d0f-fcd6-403a-ae50-8198e626a85f\",\"connectionId\":869,\"connectionCount\":332}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.528+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn803\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:60232\",\"uuid\":\"503f95ba-7b29-473e-8c25-6d497b92f6f1\",\"connectionId\":803,\"connectionCount\":331}}\n\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.704+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn1250\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":14029542}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.712+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1250\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57062\",\"uuid\":\"c04f2662-f6f3-439e-b6ee-726ce885ff2c\",\"connectionId\":1250,\"connectionCount\":480}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1366\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57550\",\"uuid\":\"a267c008-3d53-4178-807a-07d14c4d7905\",\"connectionId\":1366,\"connectionCount\":479}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1348\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57510\",\"uuid\":\"009eff11-d781-410a-8de5-2e623a4a1bc2\",\"connectionId\":1348,\"connectionCount\":478}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1301\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57368\",\"uuid\":\"3e643872-ab14-4088-a900-3ce5635c83eb\",\"connectionId\":1301,\"connectionCount\":477}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1362\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57542\",\"uuid\":\"d89d6807-c218-46fb-b44d-31f3e3b03832\",\"connectionId\":1362,\"connectionCount\":476}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1304\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57378\",\"uuid\":\"f34572fb-bc0e-4fe4-ba10-b306e8e812a8\",\"connectionId\":1304,\"connectionCount\":474}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1380\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57588\",\"uuid\":\"ba6a3ba9-b142-428d-b8c8-45f279082248\",\"connectionId\":1380,\"connectionCount\":473}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, 
\"ctx\":\"conn1330\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57464\",\"uuid\":\"c0ef0bd8-9732-48c6-8047-7716a4aa74eb\",\"connectionId\":1330,\"connectionCount\":472}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1354\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57522\",\"uuid\":\"e2ab2241-c519-42f1-a507-56ced9d4bcb7\",\"connectionId\":1354,\"connectionCount\":471}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1383\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57594\",\"uuid\":\"7681f484-54cf-410a-86dd-d5164a688e02\",\"connectionId\":1383,\"connectionCount\":475}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1349\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57512\",\"uuid\":\"8e9b84ac-cdcd-4317-b000-509c51e3cdbf\",\"connectionId\":1349,\"connectionCount\":470}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1316\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57428\",\"uuid\":\"b3df15a8-48d7-4842-b57f-0d459d6bdcce\",\"connectionId\":1316,\"connectionCount\":468}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1364\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57546\",\"uuid\":\"edeef05a-f99f-4995-9f05-039eeb42a829\",\"connectionId\":1364,\"connectionCount\":467}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1375\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57578\",\"uuid\":\"db4a1abe-7e4e-4c82-ae57-52212b7482b2\",\"connectionId\":1375,\"connectionCount\":466}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.835+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1295\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57356\",\"uuid\":\"25dc7af4-739f-4c83-9125-fcd2411aaa85\",\"connectionId\":1295,\"connectionCount\":469}}\n\n...a lot more Connection ended...\n\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.840+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1262\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57108\",\"uuid\":\"d8d1e8a6-38d1-4528-9d15-752162cd8bb5\",\"connectionId\":1262,\"connectionCount\":398}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.840+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1332\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57470\",\"uuid\":\"931e46d7-5ca1-4e59-9b6b-7bbaa574f280\",\"connectionId\":1332,\"connectionCount\":397}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.840+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1259\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57100\",\"uuid\":\"7a295c8f-b937-422d-b806-4a7dba7bf14a\",\"connectionId\":1259,\"connectionCount\":394}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.840+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1373\",\"msg\":\"Connection 
ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57568\",\"uuid\":\"e77431f9-45a2-4ea0-ac6a-75e33b446bb1\",\"connectionId\":1373,\"connectionCount\":393}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.840+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1263\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57112\",\"uuid\":\"25b888dc-f080-4a83-9ad9-e7cf8c51dc73\",\"connectionId\":1263,\"connectionCount\":396}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.840+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1253\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57076\",\"uuid\":\"8fbbafdc-2f1b-4418-a413-087ee3fa591b\",\"connectionId\":1253,\"connectionCount\":392}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.840+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1265\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57126\",\"uuid\":\"5e0160bc-a173-4149-a225-d5e0cc820b29\",\"connectionId\":1265,\"connectionCount\":391}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.841+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1343\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57500\",\"uuid\":\"c2c63feb-455d-4204-ba10-170abdba1334\",\"connectionId\":1343,\"connectionCount\":390}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.841+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1377\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57582\",\"uuid\":\"057d10db-6e9d-4d6d-a8f8-1da8cdf422bc\",\"connectionId\":1377,\"connectionCount\":389}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.841+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1357\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57528\",\"uuid\":\"0fec7fbd-0e28-48ec-92e7-e0bd86eefa66\",\"connectionId\":1357,\"connectionCount\":388}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.841+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1308\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57386\",\"uuid\":\"37b704c9-afc0-4e1f-bd25-a3a93f46bd93\",\"connectionId\":1308,\"connectionCount\":387}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.841+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1371\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57562\",\"uuid\":\"b615cf5f-5094-4339-bc62-0ffcb5554728\",\"connectionId\":1371,\"connectionCount\":386}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.841+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1350\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57514\",\"uuid\":\"21bc770e-05f1-4586-b7da-498b816214cb\",\"connectionId\":1350,\"connectionCount\":385}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.841+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1360\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57534\",\"uuid\":\"ae1d1305-aea5-4058-9865-8b4436f67284\",\"connectionId\":1360,\"connectionCount\":384}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.841+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1359\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57532\",\"uuid\":\"16a5fb12-f7bf-43a2-84a6-fd79067fb019\",\"connectionId\":1359,\"connectionCount\":383}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.841+00:00\"},\"s\":\"I\", 
\"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1374\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57576\",\"uuid\":\"faaf342a-ae3f-4148-932d-38011469fbe8\",\"connectionId\":1374,\"connectionCount\":382}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.841+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1257\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57096\",\"uuid\":\"9ec4aa68-753c-4c7c-b94b-be92bccef7a8\",\"connectionId\":1257,\"connectionCount\":380}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.841+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1303\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57372\",\"uuid\":\"2fdacb29-453e-4c01-84e3-e485f0169291\",\"connectionId\":1303,\"connectionCount\":381}}\n{\"t\":{\"$date\":\"2023-07-20T16:59:21.840+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1336\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[2001:41d0:203:999c::]:57482\",\"uuid\":\"8c5ca766-0b93-478c-acfc-038cab9d7473\",\"connectionId\":1336,\"connectionCount\":395}}\n",
"text": "Hi @Kushagra_Kesav thank you for your answer.\nI took some time to check if it was due to a re-election … i don’t think so. My primary remains the same node before and after the bug.\nAlso i can’t try catch every line of code where i interact with the db I need to thrust it when i write or read from it\nHere is an extract of the logs i get at the time of the error :On my Secondary:On my Primary:",
"username": "Johan_Maupetit"
},
{
"code": "",
"text": "I’ve found what is causing this error, and share it here if anybody is in my situation.In fact the problem does not come from my database neither from my network … but from my client server.Sometimes my client server get its CPU overloaded, all cores going to 100%. In those cases i guess it can’t handle in time its mongoDB requests which leads to this MongoPoolClearedError.Anyone having this error should check on client side if the server is not overloaded.",
"username": "Johan_Maupetit"
},
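For anyone chasing a similar problem, a minimal Node.js sketch that listens to the driver's connection-pool (CMAP) events can help correlate pool clears with client-side CPU spikes. This assumes the official mongodb driver v4+; the connection string below is a placeholder:

const { MongoClient } = require('mongodb');

// Placeholder URI; replace with your own replica-set connection string.
const client = new MongoClient('mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0');

// The driver emits CMAP monitoring events; logging them shows when and why a pool is cleared.
client.on('connectionPoolCleared', (event) => {
  console.log(new Date().toISOString(), 'pool cleared for', event.address);
});
client.on('connectionCheckOutFailed', (event) => {
  console.log(new Date().toISOString(), 'checkout failed:', event.reason);
});

async function main() {
  await client.connect();
  // Run the normal workload here and compare the timestamps above with CPU usage on the client host.
}

main().catch(console.error);

If the pool-cleared timestamps line up with CPU saturation on the application host, that supports the diagnosis above.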
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] |
MongoPoolClearedError
|
2023-07-10T14:37:08.876Z
|
MongoPoolClearedError
| 1,048 |
null |
[] |
[
{
"code": " \"document_filter\": {\n \"read\": {\n \"team\": \"%%user.custom_data.team\"\n },\n \"write\": {\n \"team\": \"%%user.custom_data.team\"\n }\n },\n",
"text": "Am I reading it right that you shouldn’t store information about permissions in a user’s custom_data?You cannot store permissions information (such as “which document IDs may this user access?”) in the user object. Changes would not be re-evaluated until the next user session, and updates would cause a client reset.That’s from the end of this section: https://www.mongodb.com/docs/atlas/app-services/rules/sync-compatibility/#sync-compatible-expansionsBut in the Tiered Privileges example, it seems to me that’s exactly what’s being done.Snippet:Am I missing something?My plan to implement collaboration in my app was to store an array of team IDs in custom_data, and filter data based on thatTo make sure it’s fresh, I was going to make a subscription to the user object and manually call refreshCustomDataWithCompletion when there are relevant changes.Thank you,\nJon",
"username": "Jonathan_Czeck"
},
{
"code": "",
"text": "Hi Jonathan,You cannot store permissions information (such as “which document IDs may this user access?”) in the user object. Changes would not be re-evaluated until the next user session, and updates would cause a client reset.I agree this part could be worded more clearly. What it’s trying to say is that the custom user data is cached and will cause a client reset when it changes. You can technically use custom data this way but if the permission depends on custom data that dynamically updates it will not take effect until the next session. Every time it changes there will be a client reset.If the user’s team is static and doesn’t change often then it should be fine to use.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Thank you Manny, that really clears things up!I’d gladly implement this another better way but I just can’t think of one. I’m trying to implement permissions where a user can be in many teams (kind of like Slack workspaces, etc.) and only has access to the associated data if they’re on the team.The only place you can call a function in a permissions rule is in the apply_when section, which I’ve read is only evaluated at the start of the session.But it seems, based on your answer, that the custom_data approach might be similar in the end? Either way you have to start a new session and go through a client reset.Does calling refreshCustomDataWithCompletion start a new session that could trigger the client reset? I’ll be able to test that myself shortly.Thanks,\n-Jon",
"username": "Jonathan_Czeck"
},
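For readers on the JavaScript SDK, here is a small sketch of manually re-reading custom user data (the counterpart of refreshCustomDataWithCompletion). The app ID and the teams field are made-up examples, and refreshing only updates the locally cached copy; it does not avoid the client reset described above:

const Realm = require('realm-web');

// Hypothetical App Services app ID; replace with your own.
const app = new Realm.App({ id: 'myapp-abcde' });

async function showTeams() {
  const user = app.currentUser; // assumes a user is already logged in
  // Re-fetch the custom user data document so recently written fields become visible locally.
  const customData = await user.refreshCustomData();
  console.log('Teams for this user:', customData.teams);
}

showTeams().catch(console.error);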
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Contradiction in docs about custom_data?
|
2023-08-09T03:07:50.800Z
|
Contradiction in docs about custom_data?
| 450 |
[
"java",
"kotlin"
] |
[
{
"code": "",
"text": "Every issue and blockage I had in the MongoDb driver was because I didn’t had and still don’t an insight into Mongodb’s internal workings that deal with serialization, so I have prepared a Sequence Diagram of how I think the Mongodb driver is working…Is this how Mongodb driver actually works or am I missing something???\nHere is the link where you can edit this image… Plant UML diagram editor\ncodec1032×647 47.3 KB\nAll help will be much appreciated.",
"username": "Uros_Jarc"
},
{
"code": "",
"text": "This might be a better way to visualize it:\nimage798×365 18.3 KB\n\nPlantUML linkI just did it for the encoding step but it should help.Ross",
"username": "Ross_Lawley"
},
{
"code": "org.bson.codecs.kotlinx.KotlinSerializerCodec@SerializablemongoClient.getDatabase(db_name).withCodecRegistry(codecRegistry)",
"text": "This is much better way to visualize the flow! Great!!! This clarifies many errors that I was getting!I have questions…org.bson.codecs.kotlinx.KotlinSerializerCodec is internal BSON serializer that tries to convert entities/values to Bson format AND he “ONLY” uses native Kotlinx serializers that are attached via @Serializable to actually convert internal values/entities to the corresponding Bson format? Is that true?What happens if KotlinSerializerCodec does not find appropriate kotlinx serializer will he raise an serialization error or start searching for the appropriate Codec serializer? I could not see the implementation logic so I joust assuming that based on your flow graf there should be an Error.If you register custom codecs via mongoClient.getDatabase(db_name).withCodecRegistry(codecRegistry) what status\ndoes that classes get is it ordinary Codec or KotlinSerializerCodec? And what should I do to register the other one?Can you provide some helpful src links from GitHub - mongodb/mongo-java-driver: The official MongoDB drivers for Java, Kotlin, and Scala\nwhich you think are crucial for the understanding of how serialization/deserialization works\nso that I can take one day off and really study the internal workings of conversion…Thank you so much!",
"username": "Uros_Jarc"
}
] |
Confusion how kotlinx and codec serialization flow is working in kotlin/java Mongodb driver
|
2023-08-09T12:09:10.623Z
|
Confusion how kotlinx and codec serialization flow is working in kotlin/java Mongodb driver
| 631 |
|
null |
[
"node-js",
"atlas-cluster"
] |
[
{
"code": "app.post(\"/AddUser\",async (req,res)=>{\n\n const credintials= await req.body\n console.log(credintials)\n const result=await AddUser(credintials)\n \n \nimport { MongoClient } from \"mongodb\";\n\nconst uri =\n \"mongodb+srv://******@cluster0.xerwiw2.mongodb.net/?maxIdleTimeMS=5000\";\n\nconst client = new MongoClient(uri);\nasync function AddUser(Credintials) {\n try {\n \n const res = await client\n .db(\"Projectman\")\n .collection(\"Projectma\")\n .insertOne({ email: Credintials.email, pass: Credintials.Password })\n .then((res) => {\n return res;\n })\n .catch((err) => {\n \n console.log(err);\n \n });\n \n if (res.acknowledged === true) {\n const user = await client\n .db(\"Projectman\")\n .collection(\"Projectma\")\n .findOne({ email: Credintials.email })\n .then((res) => {\n return res;\n })\n .catch((err) => {\n return \"no user found\";\n });\n\n return user;\n } else {\n return \"error\";\n }\n } finally {\n \n \n console.log(\"done\");\n }\n}\n\n\nexport default AddUser;\n",
"text": "Hi\nI have the following code to add user to mongodb data base\nin my server.js file i have following codeand AddUser function has the following codeI have noticed that the nodejs server is always connected to mongodb database even if no\nclient.connect() function is used\nis this correct",
"username": "Ali_Aboubkr"
},
{
"code": ".insertOneexecuteOperationAsync",
"text": "It happens because the NodeJs mongo driver execute a connect function when is executing an operation.In your case, the pool connection is opened when you call .insertOne method.On internal methods the driver call a executeOperationAsync that this one perform a call to connect a db.",
"username": "Jennysson_Junior"
},
{
"code": "",
"text": "And when the connection will be closed or it will remain open forever\nand please advice is this the good practice or there is another better practice I can follow",
"username": "Ali_Aboubkr"
},
{
"code": "maxPoolSize",
"text": "It will be opened without time to close. Creating a connection is an operation that cost memory and CPU and Mongo prefers to keep them alive. The driver will use these connections in other operations, in case that need more connections will create them until maxPoolSize length.\nIt’s important to keep in mind that the driver has responsible to handle these connections and choose the connection to made an operation. You just need to configure the maxPoolSize that defines the maximum of connections opened at the same time. The default maxPoolSize is 100 connections, So mongo can keep 100 connections opened at the same time. In avg you will need 1Mb RAM for each connection opened. You must use these options with carefully.The good practice can be use as the docs recommend.",
"username": "Jennysson_Junior"
}
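To make the pooling behaviour described above concrete, here is a rough sketch (official Node.js driver v4+; the URI, database, and option values are placeholders) that creates one shared client, caps the pool size, and closes it on shutdown:

const { MongoClient } = require('mongodb');

// Placeholder URI; maxPoolSize caps how many connections the driver keeps open,
// and maxIdleTimeMS closes pooled connections that sit idle too long.
const client = new MongoClient('mongodb+srv://user:pass@cluster0.example.mongodb.net', {
  maxPoolSize: 20,
  maxIdleTimeMS: 60000,
});

async function addUser(credentials) {
  // The first operation opens the pool; an explicit client.connect() at startup is optional
  // but surfaces configuration errors earlier.
  const users = client.db('Projectman').collection('Projectma');
  const result = await users.insertOne({ email: credentials.email, pass: credentials.password });
  return users.findOne({ _id: result.insertedId });
}

// Close the pool when the process exits instead of leaving connections open.
process.on('SIGINT', async () => {
  await client.close();
  process.exit(0);
});

module.exports = addUser;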
] |
Why nodejs server connects to Mongodb without connect command
|
2023-08-09T11:24:46.446Z
|
Why nodejs server connects to Mongodb without connect command
| 407 |
null |
[
"server",
"release-candidate"
] |
[
{
"code": "",
"text": "MongoDB 5.0.20-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.19. The next stable release 5.0.20 will be a recommended upgrade for all 5.0 users.Fixed in this release:5.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Maria_Prinus"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] |
MongoDB 5.0.20-rc1 is released
|
2023-08-07T16:29:05.024Z
|
MongoDB 5.0.20-rc1 is released
| 541 |
[
"queries",
"node-js",
"compass",
"indexes",
"text-search"
] |
[
{
"code": "{\n v: 2,\n key: { _fts: 'text', _ftsx: 1 },\n name: 'name_text',\n sparse: true,\n weights: { name: 1 },\n default_language: 'english',\n language_override: 'language',\n textIndexVersion: 3\n }\nMongoServerError: error processing query: ns=db.usersTree: TEXT : query=hi, language=english, caseSensitive=0, diacriticSensitive=0 Sort: {} Proj: {} planner returned error :: caused by :: need exactly one text index for $text queryconst searchRes = db\n .collection('users')\n .find({ $text: { $search: 'hi' } });\n",
"text": "I created a text index on a table “users” on a field “name”. The shell command “getIndexes” returns the following:I’m confused why the following query gives me the error:\nMongoServerError: error processing query: ns=db.usersTree: TEXT : query=hi, language=english, caseSensitive=0, diacriticSensitive=0 Sort: {} Proj: {} planner returned error :: caused by :: need exactly one text index for $text query\nI do not have multiple text indexes on my collection – just one.This is exactly what my input is:Which is exactly what the documentation says to doMore details: I’m using serverless so Atlas Search is not supported, so I’m doing a self-managed deployment (using SST). This query works perfectly well in my mongoDB Compass shell, just not from my node.js driver.Thank you for your time!",
"username": "Justin_Jaeger"
},
{
"code": "[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n { v: 2, key: { email: 1 }, name: 'email_1', unique: true },\n {\n v: 2,\n key: { oauthId: 1 },\n name: 'oauthId_1',\n unique: true,\n sparse: true\n },\n {\n v: 2,\n key: { _fts: 'text', _ftsx: 1 },\n name: 'name_text',\n weights: { name: 1 },\n default_language: 'english',\n language_override: 'language',\n textIndexVersion: 3\n }\n]\n",
"text": "When I run “getIndexes()” on the users collection, it prints the following (just to show I don’t have multiple text indexes):Additionally, I tested deleting the index, then running the query to see what the error is, and the error is accurate: “text index required for $text query”Then upon creating the index again, the same error persists",
"username": "Justin_Jaeger"
}
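One thing worth ruling out (a guess, not a confirmed diagnosis) is that the driver is connected to a different database or deployment than the one where the index was created, for example because the connection string has no default database. A small sketch that lists the indexes the driver actually sees before running the $text query; the URI and database name are placeholders:

const { MongoClient } = require('mongodb');

// Placeholder URI; make sure it points at the same deployment Compass is connected to.
const client = new MongoClient('mongodb+srv://user:pass@cluster0.example.mongodb.net');

async function checkTextSearch() {
  await client.connect();
  // Name the database explicitly; db() without an argument falls back to the URI default.
  const users = client.db('myDatabase').collection('users');

  console.log(await users.indexes()); // the name_text index should appear here

  const hits = await users.find({ $text: { $search: 'hi' } }).toArray();
  console.log(hits);

  await client.close();
}

checkTextSearch().catch(console.error);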
] |
When using text index in node.js driver, error: "need exactly one text index for $text query"
|
2023-08-09T10:23:19.290Z
|
When using text index in node.js driver, error: “need exactly one text index for $text query”
| 547 |
|
null |
[
"node-js",
"replication"
] |
[
{
"code": "",
"text": "Hello All,\nWe have mongodb enabled with replica set.\nWhen we tried to connect our db from visual studio code mongodb extension, facing error as server timeout.\nBut if i am using directionconnection=true in connection string, it is working but this directionconnection suppress replicaset enabled in db so my application is not getting connected.\nCan someone help us to give mongodb connection string with replicaset to connect db.\nThanks",
"username": "Santhosh_Sekar"
},
{
"code": "",
"text": "Could you give more information about the problem?Like versions of your extension, database version, and error stack trace.",
"username": "Jennysson_Junior"
}
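In the meantime, a generic replica-set connection string looks like the sketch below; the hostnames, port, set name, and credentials are placeholders. Every listed member (using the exact hostnames from the replica set configuration) must be resolvable and reachable from the machine running VS Code, otherwise server selection times out even though directConnection=true works against a single node:

mongodb://dbUser:dbPass@host1.example.com:27017,host2.example.com:27017,host3.example.com:27017/?replicaSet=rs0&authSource=admin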
] |
Could not connect Mongodb enabled with Replicaset from Visual studio code mongodb externsion without using direct connections
|
2023-08-09T08:00:30.213Z
|
Could not connect Mongodb enabled with Replicaset from Visual studio code mongodb externsion without using direct connections
| 457 |
null |
[
"dot-net"
] |
[
{
"code": "",
"text": "TLDR: The Atlas Search indexes don’t seem to show up on the list when checking all indexes on a collection via the C# driver.I really did try to search(mongo help/google/etc) but either my search terms are terrible or I am just bad at searching…Anyway, is there a way to determine if a Mongo Atlas Search(the fancy new text search) index exists on a table?I am using:\nC# Driver\nLatest NuGet Version\nI tried the standard Collection.Indexes.List() and it does not show up there even though it exists\nI can 100% confirm it does indeed exist\nCode to use it runs and works and I can verify it used the index. So the index is fine, I just can’t figure out via code if it exists.Justification:\nNot all tables or search terms have or are valid for my full text search. So one of my checks in code is a pretty basic “if table has Search index, use it, otherwise do a regex(or something else)”. I understand I should probably convert the regex to another Search Index or an added field, but there are a few reasons to not do this.",
"username": "Mark_Mann"
},
{
"code": "listIndexes$listSearchIndexes",
"text": "Hi @Mark_Mann,Anyway, is there a way to determine if a Mongo Atlas Search(the fancy new text search) index exists on a table?To my knowledge the atlas search index won’t show up with the standard listIndexes command. I believe you can use the $listSearchIndexes stage in an aggregation pipeline to retrieve the Atlas Search indexes created on a particular collection instead.An example of the aggregation stage for returning all atlas search indexes on a collection.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
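For reference, a minimal mongosh sketch of the stage mentioned above; the collection and index names are placeholders:

// List every Atlas Search index defined on the collection.
db.myCollection.aggregate([ { $listSearchIndexes: {} } ])

// Or check for one index by name, e.g. to decide between $search and a regex fallback.
db.myCollection.aggregate([ { $listSearchIndexes: { name: "default" } } ])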
{
"code": "",
"text": "Additionally, we have added helper methods to aid working with Atlas search indexes (creation/management/search). They should be part of next release (version 2.21.0).\nReference: CSHARP-4660",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "Thank you both. Looks like I have a workaround for now and sometime next year some additional helper functions. Thank you.",
"username": "Mark_Mann"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
coll.Indexes + Atlas Search Index?
|
2023-08-09T00:45:59.933Z
|
coll.Indexes + Atlas Search Index?
| 403 |
[
"node-js",
"app-services-user-auth",
"graphql",
"react-js"
] |
[
{
"code": "",
"text": "Hi there , I am Sourabh Bagrecha from the city of lakes Udaipur, currently working as a Software Engineer at MongoDB. One of the things I enjoy the most is building full-stack apps using React.js, Node.js, and MongoDB.And I certainly believe that our time as a developer should be spent on implementing features that matter to our customers and not reinventing the wheel on our own.Authentication & Authorization, CRUD operations, Analytics, etc are some of the things that are common to every full-stack application, and whenever I get a billion-dollar-app idea, my muscle memory immediately launches a new express.js server with all the server and mongoose configs.Most of the time, I just get lost in the process of building the APIs rather than thinking about features that are unique and important to my app.Last week, I published an 8 part blog series explaining How to Build a Full-stack Web App without creating a server at all.We will learn how we can utilize the Atlas GraphQL API and App Services to implement an expense manager in React.js. We will learn how we can eliminate the burden of creating and maintaining APIs to perform Authentication, CRUD operations, Analytics, etc, just by utilizing the Free tier Atlas Database.We will also learn how to host our React App for free using MongoDB’s Atlas Static Website Hosting.Learn how to perform Authentication, Authorization, CRUD operations, Analytics & Web App Deployment by building an Expense Manager in…\nReading time: 6 min read\nUse this topic to post your doubts, concerns, and any error that you are facing while following the tutorial.",
"username": "SourabhBagrecha"
},
{
"code": "/expense/${_id}/\" seems to miss \"edit\", if I add /edit <Typography variant=\"h6\" component={Link} to={",
"text": "Hi Sourabh, thanks for taking so much time to put together a tutorial.I had trouble accessing the edit form,\nin ExpenseCard.component.js on line 51 \"{Link} to={/expense/${_id}/\" seems to miss \"edit\", if I add /edit <Typography variant=\"h6\" component={Link} to={/expense/${_id}/edit`}>\nit works fine.Hope you put more tutorials online, excellent stuff !",
"username": "Juust_Out"
},
{
"code": "",
"text": "Hi Sourabh,\nI have followed your instructions but when i try to register, i get this error\nError: Request failed (POST https://<–My app ID–>/local-userpass/register): invalid json(status 400)\nhow do i resolve this?",
"username": "I_am_Prime_N_A"
},
{
"code": "",
"text": "I too have the same error. Did you manage to resolve it?",
"username": "John_Harrison"
},
{
"code": "",
"text": "Hi John, No I haven’t solved it yet.",
"username": "I_am_Prime_N_A"
},
{
"code": "",
"text": "@SourabhBagrecha hey! I want to make some data from this app public without having any user authentication. Is there a way I can put some universal authentication into my code? Thanks.",
"username": "Isabella_Lawson"
},
{
"code": "exports = async function onUserCreation(user) {\n const collection = context.services\n .get(\"mongodb-atlas\")\n .db(\"myApp\")\n .collection(\"users\");\n \n try { \n await collection.insertOne({\n user_auth_id: user.id,\n email: user.data.email,\n createdate: new Date()\n });\n } catch (e) {\n console.error(`Failed to create custom user data document for user:${user.id}`);\n throw e\n }\n};\n // Function to signup user into our Realm using their email & password\n const emailPasswordSignup = async (email, password, username) => {\n try {\n await app.emailPasswordAuth.registerUser(email, password);\n\n // Since we are automatically confirming our users we are going to login\n // the user using the same credentials once the signup is complete.\n const authedUser = await emailPasswordLogin(email, password);\n\n // Call this function to get user document by email and update with username\n await updateUserSignup(email, username, authedUser);\n return authedUser;\n } catch (error) {\n throw error;\n }\n };\n\n // Function to update Custom User Data with fields that aren't managed \n // by email/password authentication provider\n const updateUserSignup = async (email, username, authedUser) => {\n const editUserMutation = gql`\n mutation EditUser($query: UserQueryInput!, $set: UserUpdateInput!) {\n updateOneUser(query: $query, set: $set) {\n email\n }\n }\n `;\n // Getting user by email and setting username\n const queryAndUpdateVariables = {\n query: {\n email: email\n },\n set: {\n username: username,\n },\n };\n const headers = { Authorization: `Bearer ${authedUser._accessToken}` };\n try {\n await request(GRAPHQL_ENDPOINT, editUserMutation, queryAndUpdateVariables, headers);\n } catch (error) {\n throw error;\n }\n };\n",
"text": "Hi,I created a collection called ‘users’ and enabled Custom User Data.\nI create a User Creation Function in order to add new signup user to ‘users’ collection and assign user.data.email send by email/password provider :In my signup form, I added a username field in order to complet Custom User Data. But app.emailPasswordAuth.registerUser function seems to only manage email data, so I modify user.context.js like this :Is it the only way to update Custom User Data ?Thanks",
"username": "Vincent_Boulet"
},
{
"code": "const emailPasswordSignup = async (email, password) => {\n try {\n await app.emailPasswordAuth.registerUser({email, password});\n return emailPasswordLogin(email, password);\n } catch (error) {\n throw error;\n }\n }\n",
"text": "A little late here but if the json(status 400) error you were referring to applied to the sign up form. I resolved this by making a small change to the function.",
"username": "Alex_Ritz"
}
] |
Build a full-stack app using MongoDB Realm GraphQL (without worrying about servers at all)
|
2022-04-08T09:00:35.416Z
|
Build a full-stack app using MongoDB Realm GraphQL (without worrying about servers at all)
| 4,675 |
|
null |
[
"queries",
"node-js",
"data-modeling",
"mongoose-odm"
] |
[
{
"code": "",
"text": "Hi, I have created a mongoose model as below, and set recordId field to “unique: true” to handle duplicate entry. But when through API multiple calls are happening within milliseconds the collection is allowing duplicate entries. Can anyone please tell how to handle this error.const RecordList = mongoose.model(\n‘record_list’,\nmongoose.Schema({\nrecordId: {\ntype: String,\nunique: true,\nrequired: [true, ‘Please enter record id!’],\n},\ncampaign: {\ntype: String,\n}\n})\n);Thank You.",
"username": "Krushna_Chandra_Rout"
},
{
"code": "unique:truemongoshdb.collection.getIndexes()name: 'john'MongoError: E11000 duplicate key error collection: test.records index: name_1 dup key: { name: \"john\" }const mongoose = require('mongoose');\nconst url = 'mongodb://127.0.0.1:27017/test';\nmongoose.connect(url, { useNewUrlParser: true, useUnifiedTopology: true });\n\nconst Schema = mongoose.Schema;\nvar RecordSchema = new Schema({\n name: { type: String, required: true, unique: true }\n}, { collection: 'records' })\n\nconst Record = mongoose.model('Record', RecordSchema);\n\nconst r1 = Record({ name: 'john' });\nr1.save(function(err) { \n\tif (err) throw err;\n\tconsole.log('r1 record saved.');\n});\n",
"text": "Hello @Krushna_Chandra_Rout, Welcome to the MongoDB Community forum!With Mongoose ODM, when you create a field in a schema with property unique:true, it means that a unique constraint be created on that field. In fact it does create such an index in the database for that collection.For example, the following code creates such data and inserts one document. I can verify the collection, the document and the unique index from mongosh or Compass. In the shell, db.collection.getIndexes() prints the newly created index details.When I run the same program again, or try to insert another document with the same name: 'john', there is an error: MongoError: E11000 duplicate key error collection: test.records index: name_1 dup key: { name: \"john\" }.Please include the version of MongoDB and Mongoose you are working with.",
"username": "Prasad_Saya"
},
{
"code": "background: truemongoshbackground:true {\n \"v\" : 2,\n \"unique\" : true,\n \"key\" : {\n \"name\" : 1\n },\n \"name\" : \"name_1\",\n \"ns\" : \"test.records\",\n \"background\" : true // <---- [*]\n }\n",
"text": "But when through API multiple calls are happening within milliseconds the collection is allowing duplicate entriesThis may be that the index is created through Mongoose with background: true option. This option may not create the index immediately, and this allows duplicate entries on the indexed field.An option for you is to create the index from mongosh or Compass initially. You can still keep the Mongoose definition as it is. This will definitely trigger duplicate data error immediately.A quick query on my index data showed that the index is created with background:true option [*]:NOTE: This was with using Mongoose 6.2.4 and MongoDB v.5.0.6 (Atlas Cluster).",
"username": "Prasad_Saya"
},
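As a concrete example of creating the index from the shell up front, a mongosh sketch along these lines should work; note that the actual collection name may differ from the model name depending on how Mongoose pluralises it:

// Build the unique index from mongosh; duplicate inserts fail immediately once it exists.
db.record_list.createIndex({ recordId: 1 }, { unique: true })

// Verify the index is present.
db.record_list.getIndexes()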
{
"code": "",
"text": "Hi, @Prasad_Saya Thank You for this solution.\nActually one of unique field was deleted from compass bymistakely that’s why the above error was comming.Thank You.",
"username": "Krushna_Chandra_Rout"
},
{
"code": "",
"text": "Actually one of unique field was deleted from compass bymistakelyThat is a bad situation. Database objects like collections, databases, indexes, etc., need to be carefully and securely managed.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi @Prasad_Saya ,\nI am having similar kind of issue. My operation is based on the last generated call on mongodb. I am generating a key in incremental order and for that I need to fetch the last generated key and then I am incrementing it based on last generated. Issue I am getting is when i hit the api in 0 sec for 50 keys the keys are not uniquely generated it points to the same thread. So is there any lock mechanism in mongodb to generate only one key at a time? Please suggest",
"username": "Vishal_Yadav3"
},
{
"code": "",
"text": "to fetch the last generated key and then I am incrementing it based on last generatedThe issue is that 2 or more processes/tasks/threads will read the same value and increment it to the same value and store back that same value. This is a typical issue in bad distributed system. This is not the way you do thing in a distributed system. You need to read and increment in a ACID way. It is not clear how you fetch the last generated key but if you do it with a sort in your collection then I do not know how you could do it. May be with a transaction. May be with repeating an upsert until you do not get a duplicate. One way to do without sort, it is to have the value in a collection and you use findOneAndUpdate to increment your value in an ACID way. But sorting or findOneAndUpdate is a big NO NO as it involves 2 or more server requests. Why don’t you use an $oid, UUID, GUID?",
"username": "steevej"
},
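To illustrate the atomic read-and-increment suggested above, here is a rough Node.js sketch that keeps the last issued number in a counters collection; the URI, database, and field names are made up, and the $inc happens server-side, so concurrent callers cannot read the same value:

const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017'); // placeholder URI

async function nextSequence(name) {
  const counters = client.db('test').collection('counters');
  // Atomically increment and return the new value; upsert creates the counter on first use.
  const result = await counters.findOneAndUpdate(
    { _id: name },
    { $inc: { seq: 1 } },
    { upsert: true, returnDocument: 'after' }
  );
  // Driver v4/v5 returns { value: doc }; v6+ returns the document directly.
  return (result.value ?? result).seq;
}

async function main() {
  await client.connect();
  const n = await nextSequence('recordKey');
  console.log('next sequence number:', n); // derive the 6-character alphanumeric key from n here
  await client.close();
}

main().catch(console.error);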
{
"code": "",
"text": "Hi @steevej ,\nThanks for the answer.\nWhy don’t you use an $oid, UUID, GUID?\nActually, our requirement is that the alphanumeric key be generated once in a lifetime, never repeated and its size is 6 which will be later on incremented when the permutations will complete. I tried to lock the key using pessimistic locking, but I don’t know if it’s the correct way. According to the requirement, I am generating the key in incremental order using modulus operations I am dividing some incremental number/alphabets.size() to get the alphanumeric key of 6 digits in incremental order and it never repeats, where I need to have the last generated number. Can I use pessimistic locking?",
"username": "Vishal_Yadav3"
},
{
"code": "",
"text": "From what I understand you are not using mongodb to generate your unique key. It looks like you are developing your own function in some kind of library. If this is the case then you have to make sure that your function will not return 2 identical keys. This seems to be a JS question rather than a Mongo question.It would be best to share your code so that we really understand. But SO might be a better venue since it is JS.",
"username": "steevej"
}
] |
Duplicate data getting added into collection
|
2022-03-03T06:56:21.739Z
|
Duplicate data getting added into collection
| 22,893 |
[] |
[
{
"code": "",
"text": "I am using render to deploy the backend of my app.I am using mongoDB atlas’s cluster to store data.When I m trying to connect to db in vscode, no issue but when I deploy it on render , connection fails.\n\nimage1500×437 23.4 KB\nI have updated my Ip address , but still issue remains. How to solve it ?",
"username": "Akash_N_A3"
},
{
"code": "",
"text": "Try to allow access from anywhere to see if it makes a difference. If it does then it means you have to specify a different IP than the one you use toupdated my Ip address",
"username": "steevej"
}
] |
Render deployment db connection issue
|
2023-08-09T12:18:44.558Z
|
Render deployment db connection issue
| 374 |
|
null |
[
"queries",
"node-js",
"mongoose-odm"
] |
[
{
"code": "2.41.23.5999999999999996",
"text": "Hi all, I have a question about the decimal support. I want to work with decimals and I know Mongo suggests to use Decimal128 type. It solves the rounding format on db but on the other hand I need to write decimal(BSON) to number converter getters while getting data.I want to use decimals without changing data type. I use mongoose’s $inc, $dec operators a lot with bulk operations. If I continue to use Number(on mongoose, Double on Mongo) type, I can see very long decimals end of the some operations. For example, if I $inc 2.4 with 1.2 result will be 3.5999999999999996 (floating point numbers problem).Is there a way to say Mongo to round selected fields’ final values’ to max 2 via mongoose model or directly on db somehow? Setters don’t work because there is no problem with 2.4 or 1.2, the problem is the result of the inc operation. Or post hooks don’t work with bulks.Thanks in advance.",
"username": "Furkan_Uyar1"
},
{
"code": "",
"text": "Hi Furkan,Can you provide a small example code sample showing exactly how you are inserting and updating the documents in Mongoose please?",
"username": "Ronan_Merrick"
}
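One possible approach (a sketch, not necessarily what Mongoose hooks would generate) is to do the increment with an update pipeline and round the stored result server-side with $round; the collection and field names are illustrative, and this needs MongoDB 4.2+:

// Increment price by 1.2 and round the stored result to 2 decimal places in the same update.
db.items.updateOne(
  { _id: 1 },
  [ { $set: { price: { $round: [ { $add: [ "$price", 1.2 ] }, 2 ] } } } ]
)

Keep in mind that doubles still cannot represent every 2-decimal value exactly, so Decimal128 remains the more robust option when exactness matters.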
] |
How to round decimals on DB without using Decimal128 type
|
2023-08-07T23:18:55.090Z
|
How to round decimals on DB without using Decimal128 type
| 484 |
null |
[] |
[
{
"code": "",
"text": "can I upload MS word document (.docx) to MongoDB in order to search strings?",
"username": "Shlomo_Nachmani"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @Shlomo_Nachmani !MongoDB does not have any built-in features to interpret document formats like Word doc/docx, so you would have to use a third party library to extract strings and metadata to save into MongoDB. Data stored in MongoDB can be indexed and searched (with the most flexible option for text search being Atlas Search).If your use case is focused on indexing digital documents like Word files, you may be better served looking into a Document Management System (DMS) which includes support for the document formats you want to index.However, if you have more modest (or custom) requirements you can certainly build your own solution.Regards,\nStennie",
"username": "Stennie_X"
},
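As a rough illustration of the "extract text with a third-party library, then index it in MongoDB" approach described above, here is a Node.js sketch; mammoth is just one example of a docx text extractor, and the URI, database, and collection names are placeholders:

const mammoth = require('mammoth');            // npm install mammoth
const { MongoClient } = require('mongodb');

async function importDocx(path) {
  const client = new MongoClient('mongodb://localhost:27017'); // placeholder URI
  await client.connect();
  const docs = client.db('dms').collection('documents');

  // Pull the raw text out of the .docx file.
  const { value: text } = await mammoth.extractRawText({ path });
  await docs.insertOne({ filename: path, text, importedAt: new Date() });

  // A text index (or an Atlas Search index) makes the extracted text searchable.
  await docs.createIndex({ text: 'text' });
  console.log(await docs.find({ $text: { $search: 'invoice' } }).toArray());

  await client.close();
}

importDocx('./example.docx').catch(console.error);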
{
"code": "",
"text": "Could you give some DMS examples ?\nThanks",
"username": "jeff_chen"
}
] |
Upload MS word document (.docx) to MongoDB
|
2022-06-24T13:18:07.557Z
|
Upload MS word document (.docx) to MongoDB
| 1,927 |
null |
[] |
[
{
"code": "",
"text": "Peace be upon you!\nI would like to know that how can we ask questions or post question to the community when we have problems in our development?",
"username": "Noman_Mangalzai_N_A"
},
{
"code": "",
"text": "Hey @Noman_Mangalzai_N_A - Welcome to the community I would like to know that how can we ask questions or post question to the community when we have problems in our development?I’d definitely recommend checking out the Getting Started with the MongoDB Community: README.1ST Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": " \"virtual_hostname\":\"$_id.hostname\",\n \"sourceType\":\"$_id.sourceType\",\n \"virtual_name\":\"$name\",\n \"virtual_address\":\"$address\",\n \"virtual_port\":\"$port\",\n \"virtual_enabled\":\"$servers.enabled\",\n \"node_address\":\"$servers.address\",\n \"node_name\":\"$servers.nodeName\",\n \"node_port\":\"$servers.port\",\n \"pool\":\"$pool\",\n[\n {\n \"_id\": {\n \"hostname\": \"rflukf502\",\n \"ipAddress\": \"7.191.12.19\",\n \"name\": \"caremobile_prod_80_pool\",\n \"fullPath\": \"/Common/caremobile_prod_80_pool\",\n \"sourceType\": \"F5MgmtTmLtmPool\"\n },\n \"name\": \"caremobile_prod_80_pool\",\n \"fullPath\": \"/Common/caremobile_prod_80_pool\",\n \"clientMnemonic\": \"RFL_UK\",\n \"servers\": [\n {\n \"nodeName\": \"RFLUKSODRP01\",\n \"displayName\": \"RFLUKSODRP01.rfl_uk.cernuk.com\",\n \"enabled\": true,\n \"up\": false,\n \"address\": \"104.170.243.75\",\n \"port\": \"80\",\n \"dnsA\": \"RFLUKSODRP01.rfl_uk.cernuk.com\"\n },\n {\n \"nodeName\": \"RFLUKSODRP02\",\n \"displayName\": \"RFLUKSODRP02.rfl_uk.cernuk.com\",\n \"enabled\": true,\n \"up\": false,\n \"address\": \"104.170.243.76\",\n \"port\": \"80\",\n \"dnsA\": \"RFLUKSODRP02.rfl_uk.cernuk.com\"\n }\n ],\n \"lastUpdated\": ISODate(\"2023-08-07T09:28:27.171Z\"),\n \"_class\": \"com.cerner.cts.oss.nethawk.engine.business.f5VipStatus.F5ConfigPool\"\n },\n {\n \"_id\": {\n \"hostname\": \"rflukf502\",\n \"ipAddress\": \"7.191.12.19\",\n \"name\": \"caremobile_prod_80_vs\",\n \"pool\": \"/Common/caremobile_prod_80_pool\",\n \"sourceType\": \"F5MgmtTmLtmVirtual\"\n },\n \"name\": \"caremobile_prod_80_vs\",\n \"address\": \"104.170.243.112\",\n \"port\": \"80\",\n \"pool\": \"/Common/caremobile_prod_80_pool\",\n \"clientMnemonic\": \"RFL_UK\",\n \"deviceGroup\": [],\n \"policiesNames\": [],\n \"isInternetVip\": false,\n \"lastUpdated\": ISODate(\"2023-08-08T09:57:07.844Z\"),\n \"_class\": \"com.cerner.cts.oss.nethawk.engine.business.f5VipStatus.F5ConfigVirtual\"\n },\n {\n \"_id\": {\n \"hostname\": \"rflukf502\",\n \"ipAddress\": \"7.191.12.19\",\n \"name\": \"caremobile_prod_443_vs\",\n \"pool\": \"/Common/caremobile_prod_80_pool\",\n \"sourceType\": \"F5MgmtTmLtmVirtual\"\n },\n \"name\": \"caremobile_prod_443_vs\",\n \"address\": \"104.170.243.112\",\n \"port\": \"443\",\n \"pool\": \"/Common/caremobile_prod_80_pool\",\n \"clientMnemonic\": \"RFL_UK\",\n \"deviceGroup\": [],\n \"policiesNames\": [],\n \"isInternetVip\": false,\n \"lastUpdated\": ISODate(\"2023-08-08T09:57:07.844Z\"),\n \"_class\": \"com.cerner.cts.oss.nethawk.engine.business.f5VipStatus.F5ConfigVirtual\"\n }\n]\n",
"text": "i need aggregate result of the below query –\nif you see the Json results -\nTwo from “_id.sourceType”: “F5MgmtTmLtmVirtual”\nAnd one from “_id.sourceType”: “F5MgmtTmLtmPool”I need total of four records from this.\nhere we have to Unwind servers as well.\nFields in result :::After aggregation we should have four set of the above fields.\nI was trying in https://mongoplayground.net/",
"username": "Alok_Kumar7"
}
] |
How to ask question regarding mongoDB
|
2023-02-26T17:58:00.486Z
|
How to ask question regarding mongoDB
| 1,479 |
null |
[
"atlas-cluster",
"containers"
] |
[
{
"code": "",
"text": "Hello Mongo Support,We want to migrate our community mongo databases running on AWS to Atlas Cluster . Since it will be a different version movement(3.2 → 5.0) , We would like to make use of mongomirror utility to do a near to zero downtime migration.However the mongo dbs on AWS are running on Docker containers.\nWhat will be the exact syntax to specify the source host address ./mongomirror --host replsetname\\ip:port,ip:port,ip:port\n–username xxxx\n–password xxxx\n–authenticationDatabase admin\n–destination clusterxxx-shard-00-00.xxxx.mongodb.net:27017\n–destinationUsername xxx\n–destinationPassword xxx\n–includeDB=xxx\n–ssl\n–forceDumpis giving me error\n2023-08-08T12:31:09.332+0000 Error initializing mongomirror: could not initialize source connection: could not connect to server: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: xxx:27017, Type: Unknown, Last error: connection() error occured during connection handshake: EOF }, ] }Any idea how to achieve this.\nThanks",
"username": "gargee_dutta"
},
{
"code": "",
"text": "Any update on this. Will be really appreciatedThanks",
"username": "gargee_dutta"
}
] |
Mongomirror failing to establish connection from AWS to Atlas
|
2023-08-08T13:52:27.729Z
|
Mongomirror failing to establish connection from AWS to Atlas
| 418 |
null |
[
"queries",
"crud"
] |
[
{
"code": "var List = ['XXXXXXXXX','XXXXXXXX','XXXXXXX','XXXXXXX','XXXXX'];\nfor (var i = 0; i < List.length; i++) {\n var lst = List[i];\n try {\n db.getCollection('XXX').find({\n XXX: lst,\n XXX: 'XXX',\n XXX:'XXX'\n }).forEach(\n function (cyuDoc) {\n db.getCollection('cyuquestion')\n .updateMany({\n XXX: lst,\n XXX: 'XXX',\n 'XXXX' : cyuDoc._id.valueOf()\n }, {\n $set: {\n XXXX: 'XXX'\n }\n });\n })\n printjson(`${lst} : Success`)\n } catch (e) {\n printjson(`${lst} : Error Found`)\n printjson(\"ERR:: \" + e);\n }\n}\n",
"text": "Is there any better way to optimize below query.",
"username": "A_W"
},
{
"code": "$infind",
"text": "Hello @A_W,It is hard to understand your query because of unclear fields/key names or different collections, can you please add an example document and the expected behavior that you wanted to perform, little bit of explanation would be good to understand the question.As a prediction, you can avoid for loop and use use $in operator to check your list of values in the find query.",
"username": "turivishal"
},
{
"code": "",
"text": "As said above, use an $in and package up the updates into a bulk update so you just make one server call./edit typo from phone corrected",
"username": "John_Sewell"
}
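Putting both suggestions together, here is a rough mongosh sketch of the reshaped query; the field names below are stand-ins for the redacted XXX fields in the question:

const list = ['value1', 'value2', 'value3'];

// One find over all values instead of one query per value.
const parents = db.getCollection('XXX')
  .find({ fieldA: { $in: list }, fieldB: 'XXX', fieldC: 'XXX' })
  .toArray();

// Collect all updates and send them to the server in a single round trip.
const ops = parents.map((doc) => ({
  updateMany: {
    filter: { parentId: doc._id.valueOf(), fieldB: 'XXX' },
    update: { $set: { status: 'XXX' } },
  },
}));

if (ops.length > 0) {
  db.getCollection('cyuquestion').bulkWrite(ops);
}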
] |
Query optimization loop
|
2023-08-08T12:26:49.892Z
|
Query optimization loop
| 417 |