image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"dot-net"
] | [
{
"code": "ObjectSerializer",
"text": "We have a very dynamic data model and persist our data untyped (the model’s value property is of type object). Inserts work well, since then our custom serializer is at work which knows which type it is handling and can serialize lists very well.Now we previously had mixed Ids. ObjectId for document Ids and usually GUID as Ids for finding the correct element in nested arrays. For references in those dynamic data fields we use List. This worked well so far.Now we need to refactor this into using ObjectIds everywhere since we want to build a system which needs all those Ids and having to check and store whether it’s Guid or ObjectId is tedious and imperformant.But now when we want to serialize a List into a field that’s just “object”, it fails. First due to it being an not allowed type for the ObjectSerializer. But configuring it to accept a List works around that. Now when we update a field, the value remains null, altough MongoDB says everything worked fine.Why is this so? Can someone explain me how I can serialize this list?Thanks!",
"username": "Manuel_Eisenschink"
},
{
"code": "ObjectSerializer.AllowedTypesList<object>",
"text": "Hi, @Manuel_Eisenschink,Thanks for reaching out to us about this issue. It sounds like you’ve configured ObjectSerializer.AllowedTypes correctly if you are able to serialize the IDs as a List<object>. It’s not clear why the value would remain null. I would encourage you to file a CSHARP issue with the following so we can investigate further:Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Hi James, thanks for the quick response.\nI forked the driver and made a test project reproducing the issue. In this isolated environment I could actually find the source (our data model is huge and there’s a few steps to produce the situation, making it hard to isolate).I found the issue was not the serializer but the array filter we’re using. Since those have no support for Expressions we had to hard-code them. Previously we used Guids for nested entities’ Ids, so the field was named differently to separate it from regular “Id” (or in DB: _id) fields.Due to the refactoring we also changed them to use ObjectIds and thus MongoDB automatically applied the ‘_id’ naming even though these were nested documents and no unique fields. This broke the array filter.Sadly MongoDB does not communicate issues with array filters very well in some scenarios. That’s maybe something that could be improved. Obviously “Id” or “id” is a valid name but it does not match if the field’s actually called “_id” inside MongoDB.Long story short: Not a bug, just invalid usage with too few feedback to identify the issue correctly.",
"username": "Manuel_Eisenschink"
},
{
"code": "",
"text": "Hi, @Manuel_Eisenschink,Glad that you were able to find the root cause of the problem. Interesting about the unclear errors/feedback related to array filters. This sounds more like a server query issue than a .NET/C# Driver issue. If it is in fact the former, you can file a https://jira.mongodb.org/browse/SERVER issue. If it does turn out to be the latter (that the driver is not communicating array filter problems in a meaningful way), my team would be happy to investigate further if you file a https://jira.mongodb.org/browse/CSHARP issue.Thanks for closing the loop on the investigation and confirming that this isn’t inherently a driver bug, but more a result of unclear error messages.Sincerely,\nJames",
"username": "James_Kovacs"
}
] | Serializing untyped List<ObjectId> with Update statements | 2023-04-27T14:37:19.291Z | Serializing untyped List<ObjectId> with Update statements | 647 |
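A minimal illustration of the array-filter pitfall described in the thread above, written in Python with pymongo rather than the C# driver; the connection string, database, collection, and field names are placeholders. The point is only that the identifier used in the array filter must match the field name as actually stored in MongoDB (here _id), not the mapped property name:

from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder deployment
coll = client["testdb"]["entities"]                # placeholder names

doc_id = ObjectId()
nested_id = ObjectId()
coll.insert_one({"_id": doc_id, "Values": [{"_id": nested_id, "Flag": False}]})

# The identifier inside array_filters must use the stored field name ("_id").
# Filtering on "elem.Id" would report success but modify nothing, as in the thread.
coll.update_one(
    {"_id": doc_id},
    {"$set": {"Values.$[elem].Flag": True}},
    array_filters=[{"elem._id": nested_id}],
)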
null | [
"compass",
"mongodb-shell",
"database-tools",
"backup"
] | [
{
"code": "",
"text": "Hi there,I recently changed my system to Apple Mac Pro M2 on latest MacOS 13.3.1.\nI’m able to install MongoDB successfully, but whenever I’m trying to mongorestore from a dump on my local, I’m always facing either of these issues:\n1.Failed:<collection_name>: error creating collection <collection_name>: error running create command: (TooManyFilesOpen) 24: Too many open files - where <collection_name> is random every time. or,\n2. Failed:<collection_name>: error reading database: connection() error occurred during connection handshake: connection(localhost:27017[-8]) socket was unexpectedly closed: EOFMostly it’s “too many open files” and in this restore process few or more collections get missed and service also gets stopped.\nI tried all 4.4.19, 5.0 and 6.0 versions of MongoDB. All results in same. My current database tools version is 100.7.0 and file descriptors value is 64000 “ulimit -S -n 64000” in .zshrc.I also realised that the service also itself gets stopped when I’m just using Compass to access DB.Can anyone please provide any help on it? I’m starting to believe that this is related to M2 chips only.",
"username": "Abhishek_Vaishnav"
},
{
"code": "sudo launchctl limit maxfiles 64000 524288/Library/LaunchDaemons/private.maxfiles.plist<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">\n<plist version=\"1.0\">\n <dict>\n <key>Label</key>\n <string>private.maxfiles</string>\n <key>ProgramArguments</key>\n <array>\n <string>launchctl</string>\n <string>limit</string>\n <string>maxfiles</string>\n <string>64000</string>\n <string>524288</string>\n </array>\n <key>RunAtLoad</key>\n <true/>\n </dict>\n</plist>\n",
"text": "I was able to fix it by running\nsudo launchctl limit maxfiles 64000 524288but since this maxfiles limit was being reset on system reboot. I had to create a file /Library/LaunchDaemons/private.maxfiles.plist with following codeThis created a daemon so that on system boot it can set the maxfiles limit. My belief is that this problem is only with M2 systems, because I have many colleagues who’re using an M1 mac and no one had ever faced such issue with same open files or maxfiles limit.In case if someone has better explanation do let me know. Thanks!",
"username": "Abhishek_Vaishnav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Facing issue "too many open files" with all versions of MongoDB on Apple M2 | 2023-04-26T18:18:59.075Z | Facing issue “too many open files” with all versions of MongoDB on Apple M2 | 1,035 |
null | [
"crud"
] | [
{
"code": "db.survey.insertMany([\n {\n _id: 1,\n results: [\n { item: \"A\", score: 5 },\n { item: \"B\", score: 8 }\n ]\n },\n {\n _id: 2,\n results: [\n { item: \"C\", score: 8 },\n { item: \"B\", score: 4 }\n ]\n }\n] )\ndb.survey.updateMany(\n { },\n { $pull: { results: { $elemMatch: { score: { $lte: 4 } } } } }\n )\n{ acknowledged: true,\n insertedId: null,\n matchedCount: 2,\n modifiedCount: 0,\n upsertedCount: 0 }\n",
"text": "I’ve been reading through the documentation on $pull, $elemMatch and arrayFilters but can’t seem to find the right way to pull elements from an array.Given this initial data:And let’s say we want to pull (remove) elements from the results array in all documents - where the score is less than or equal to 4.According to the documentation this would seem like the right way:But this doesn’t do the trick. Result as follows:Appreciate any help on the matter.",
"username": "Kristoffer_Almas"
},
{
"code": "db.survey.updateMany(\n { },\n { $pull: { results: { $elemMatch: { score: { $lte: 4 } } } } }\n )\n$pull$elemMatch$pull$elemMatch$elemMatch$pull$elemMatchdb.survey.updateMany(\n {},\n { $pull: { results: { score: { $lte: 4 } } } }\n)\n[\n {\n _id: 1,\n results: [\n { item: \"A\", score: 5 },\n { item: \"B\", score: 8 }\n ]\n },\n {\n _id: 2,\n results: [\n { item: \"C\", score: 8 },\n ]\n }\n]\n",
"text": "Hi @Kristoffer_Almas,Welcome back to the Community forums To remove elements from a collection using the $pull operator, you don’t need to use the $elemMatch operator. This is because $pull treats each element as a top-level object and applies the query to each element individually. Therefore, you can simply specify the match conditions in the expression without using $elemMatch.If you do use $elemMatch, the $pull operation will not remove any elements from the original collection.Here’s an updated version of your query without $elemMatch:And it will return the following output:For more information, please refer to the $pull - Remove Items from an Array of Documents documentation.I hope it helps.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Pull elements from array - how? | 2023-04-28T06:41:46.447Z | Pull elements from array - how? | 568 |
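For completeness, the same corrected update issued from Python with pymongo; the connection string and database name below are placeholders:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder deployment
survey = client["testdb"]["survey"]

# $pull applies the condition to each array element directly, so no $elemMatch is needed.
result = survey.update_many({}, {"$pull": {"results": {"score": {"$lte": 4}}}})
print(result.modified_count)  # 1 with the sample data above (only _id: 2 has a score <= 4)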
null | [
"atlas-cluster"
] | [
{
"code": "",
"text": "Morning! I am having issues with getting clear information on why my lab continues to error. Would someone be able to troubleshoot with me, or point me to someone who can help? Any assistance is greatly appreciated.SM",
"username": "Stephanie_Mason"
},
{
"code": "",
"text": "Can you please post some screen shots of the error or what you are trying to do, this will make it easier to help. Otherwise we are just guessing at which part you are having issues with.",
"username": "tapiocaPENGUIN"
}
] | Lab: Connecting to a MongoDB Atlas Cluster with the Shell | 2023-04-28T14:00:12.157Z | Lab: Connecting to a MongoDB Atlas Cluster with the Shell | 389 |
null | [] | [
{
"code": "const twilio = context.services.get(\"twilio\");\n",
"text": "Hi, I am trying to write a function to send a SMS using Twilio.\nI have been succesful doing this using Third Party Services by selecting the Twilio service. During this process you need to enter your Account SID and Auth Token, no problem. However Third Party Services are to be deprecated in favor of creating HTTP endpoints that use external dependencies in functions.\nI am able to add the twilio dependency (named twilio) and have used:and the send() function with the to, from and body information, but of cource this does not work.\nQuestion, how do I apply the Account SID and Auth Token information, I can not find the information or an example.",
"username": "Glenn_TAYLOR"
},
{
"code": "exports = async function(arg){\n \nconst accountSid = context.values.get(\"twilio_accountSID\");\nconst authToken = context.values.get(\"twilio_authToken\");\nconst client = require('twilio')(accountSid, authToken);\n\nclient.messages\n .create({\n body: 'testing alert',\n from: '+000000',\n to: '+000000'\n })\n .then(message => console.log(message.sid));\n\n};\n",
"text": "Hi Glenn,Thanks for posting and welcome to the community forum!Although third party services is deprecated you can still use Twilio in app services functions.Please first Install the Twilio package as an external dependency.\nFunctions App Services 2023-04-28 at 12.05.29 pm2016×632 88.8 KB\nYou can now import the package in your function and use Twilio as per the example in their documentation.Note that I have used App Services Secrets to safely store the accountSID and auth code and retrieving it in the function via context.values.get().I’ve tried this code and it worked for me.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Hi Manny,\nmany thanks for the quick and concise response. This works for me; I can now go ahead and learn additional functionality.",
"username": "Glenn_TAYLOR"
}
] | App Services, Functions, Twilio | 2023-04-27T14:30:11.634Z | App Services, Functions, Twilio | 787 |
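For comparison, the equivalent call pattern from a standalone Python script using the Twilio SDK; the credentials and phone numbers below are placeholders, and inside an App Services function you would still read them from Values/Secrets as shown above:

from twilio.rest import Client

account_sid = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder
auth_token = "your_auth_token"                       # placeholder

client = Client(account_sid, auth_token)
message = client.messages.create(
    body="testing alert",
    from_="+15550000000",  # placeholder numbers
    to="+15551111111",
)
print(message.sid)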
null | [
"database-tools",
"kotlin",
"flexible-sync",
"schema-version-pattern"
] | [
{
"code": " suspend fun addSet(x: ExerciseSet): Boolean {\n return try {\n realm.write {\n copyToRealm(\n x.apply {\n this.owner_id = [email protected]?.id ?: \"\"\n }\n )\n }\n true\n } catch(e: Exception) {\n println(\"Failed to add set.\")\n println(e.toString())\n false\n }\n }\n",
"text": "Hi all, I’m working on my first project with Realm Sync and MongoDB in Kotlin Multiplaform and it’s been working great so far. My question about the Realm version is the following: is it normal that every time I write data to my database my Realm version updates (I get the following message in my logs):D/REALM: Updating Realm version: VersionId(version=52) → VersionId(version=53)Is the realm version different than the schema version? If it updates everytime the realm version is going to grow quite fast. As far as I understand, the schema version is supposed to update automatically only when there have been changes to the model.My code to add data is the following:",
"username": "Christian_Patience"
},
{
"code": "",
"text": "Hi,This is a different kind of version. Device Sync stores each write transaction that a client makes as a single “changeset” with a “version” (incrementing counter). This information is used by the sync protocol to ensure that all changes are eventually uploaded and that only the necessary deltas are exchanged. The docs page here has a good summary of the protocol if you are interested: https://www.mongodb.com/docs/atlas/app-services/sync/details/protocol/Thanks,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm version vs Schema version? in Realm Sync with Kotlin Multiplatform | 2023-04-28T12:38:18.228Z | Realm version vs Schema version? in Realm Sync with Kotlin Multiplatform | 767 |
null | [] | [
{
"code": "",
"text": "Hi guys , so I want load entire Mongodb database (full load) into GCS (Google cloud storage) in json or parquet files and once the full load is done , I want to capture the CDC (change data capture) and load it into gcs for each collection. This is similar to what AWS DMS service. But I want to do this in GCP and AWS DMS does not support GCS storage. How do I achieve this ?",
"username": "R_C"
},
{
"code": "",
"text": "We also needed a similar functionality, do we have anything for this?",
"username": "Srinivasan_Subramanian"
}
] | Load "Full load" as well as CDC (change data capture) in GCS buckets | 2022-10-06T13:36:30.998Z | Load “Full load” as well as CDC (change data capture) in GCS buckets | 1,050 |
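One way to approach the CDC half of this from Python is a change stream consumer that appends events to newline-delimited JSON files, which a separate job can then upload to GCS. This is only a sketch: it assumes a replica set or Atlas deployment (change streams are not available on standalone servers), the connection string and collection names are placeholders, and the GCS upload step (e.g. via the google-cloud-storage client) is omitted:

import json
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder
coll = client["mydb"]["mycollection"]                              # placeholder names

# Tail the change stream and persist each event as one JSON line; a separate process
# would rotate these files and ship them to a GCS bucket.
with coll.watch(full_document="updateLookup") as stream, open("changes.jsonl", "a") as out:
    for change in stream:
        out.write(json.dumps(change, default=str) + "\n")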
null | [
"aggregation"
] | [
{
"code": "db={\n \"accounts\": [\n {\n \"_id\": {\n \"$oid\": \"644798f79ad5e97dd4296152\"\n },\n \"name\": \"The James Trust\",\n \"schools:\": [\n {\n \"_id\": {\n \"$oid\": \"6447998d9ad5e97dd429615a\"\n },\n \"name\": \"Jamestown Academy\"\n },\n {\n \"_id\": {\n \"$oid\": \"644799d99ad5e97dd4296161\"\n },\n \"name\": \"Jamestown College\"\n }\n ]\n }\n ],\n \"users\": [\n {\n \"_id\": {\n \"$oid\": \"63e27b3483161a984f19fcce\"\n },\n \"accountId\": {\n \"$oid\": \"644798f79ad5e97dd4296152\"\n },\n \"name\": \"James Boing\",\n \"email\": \"[email protected]\",\n \"schools\": [\n {\n \"roles\": [\n \"admin\",\n \"safeguard\"\n ],\n \"schoolId\": {\n \"$oid\": \"6447998d9ad5e97dd429615a\"\n }\n },\n {\n \"roles\": [\n \"staff\"\n ],\n \"schoolId\": {\n \"$oid\": \"644799d99ad5e97dd4296161\"\n }\n }\n ]\n }\n ]\n}\ndb.users.aggregate([\n {\n $match: {\n _id: ObjectId(\"63e27b3483161a984f19fcce\")\n }\n },\n {\n $lookup: {\n from: \"accounts\",\n localField: \"accountId\",\n foreignField: \"_id\",\n as: \"account\"\n }\n },\n {\n $unwind: {\n path: \"$account\"\n }\n },\n {\n $addFields: {\n z: {\n $arrayElemAt: [\n \"$account.schools\",\n 0\n ]\n }\n }\n }\n])\nnullz",
"text": "Hi,I have the following data:I don’t seem to be able to use the $account.schools reference in the following example:Instead, I always get null for z. I think I’m missing something fundamental here in the way that $unwind is operating. I’m only using $unwind to flatten the array because it’s a 1-1 relationship between a user and an account.I’ve been stuck for hours on this, and any help would be appreciated.",
"username": "James_N_A2"
},
{
"code": " \"schools:\":accounts\"schools:\"\"schools\"[\n {\n _id: ObjectId(\"63e27b3483161a984f19fcce\"),\n accountId: ObjectId(\"644798f79ad5e97dd4296152\"),\n name: 'James Boing',\n email: '[email protected]',\n schools: [\n {\n roles: [ 'admin', 'safeguard' ],\n schoolId: ObjectId(\"6447998d9ad5e97dd429615a\")\n },\n {\n roles: [ 'staff' ],\n schoolId: ObjectId(\"644799d99ad5e97dd4296161\")\n }\n ],\n account: {\n _id: ObjectId(\"644798f79ad5e97dd4296152\"),\n name: 'The James Trust',\n 'schools:': [\n {\n _id: ObjectId(\"6447998d9ad5e97dd429615a\"),\n name: 'Jamestown Academy'\n },\n {\n _id: ObjectId(\"644799d99ad5e97dd4296161\"),\n name: 'Jamestown College'\n }\n ]\n },\n z: null\n }\n]\n\"$account.schools\"\"$account.schools:\"$addFields[\n {\n _id: ObjectId(\"63e27b3483161a984f19fcce\"),\n accountId: ObjectId(\"644798f79ad5e97dd4296152\"),\n name: 'James Boing',\n email: '[email protected]',\n schools: [\n {\n roles: [ 'admin', 'safeguard' ],\n schoolId: ObjectId(\"6447998d9ad5e97dd429615a\")\n },\n {\n roles: [ 'staff' ],\n schoolId: ObjectId(\"644799d99ad5e97dd4296161\")\n }\n ],\n account: {\n _id: ObjectId(\"644798f79ad5e97dd4296152\"),\n name: 'The James Trust',\n 'schools:': [\n {\n _id: ObjectId(\"6447998d9ad5e97dd429615a\"),\n name: 'Jamestown Academy'\n },\n {\n _id: ObjectId(\"644799d99ad5e97dd4296161\"),\n name: 'Jamestown College'\n }\n ]\n },\n z: {\n _id: ObjectId(\"6447998d9ad5e97dd429615a\"),\n name: 'Jamestown Academy'\n }\n }\n]\n",
"text": "Hi @James_N_A2, \"schools:\":I think this might be the issue here above (based off the sample data at least). The field name here (for the accounts collection) is \"schools:\" as opposed to \"schools\".I managed to replicate what you were experiencing:Output ater changing the value from \"$account.schools\" to \"$account.schools:\" in the $addFields stage:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "\"schools:\"\"schools\"",
"text": "\"schools:\" as opposed to \"schools\".Nice catch. When they say the devil hides in the details. This is what they mean.",
"username": "steevej"
},
{
"code": "",
"text": "I can’t believe I missed that :shame:. Thank you for your eagle eyes!",
"username": "James_N_A2"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Behaviour of $addFields after $unwind | 2023-04-27T15:51:05.189Z | Behaviour of $addFields after $unwind | 471 |
null | [
"aggregation",
"queries",
"compass",
"sharding"
] | [
{
"code": "db.A.aggregate([ { \"$project\": {\n \"data\": { \"$objectToArray\": \"$$ROOT\" }\n }},\n { \"$project\": { \"data\": \"$data.k\" }},\n { \"$unwind\": \"$data\" },\n { \"$group\": {\n \"_id\": null,\n \"keys\": { \"$addToSet\": \"$data\" }\n }}\n])\n",
"text": "We are looking for a query which will get in getting the list of unique attributes in a collection by scanning all the documents .We are not going with MongoDB Compass Schema Option as we are having the schema being analyzed for 1000 records .\nWe have tried something to pull manually by using the below query .\nThis works in local but not working as expected in the Sharded collection.When we tried to run by each stage there is some issue with the $group stage .Any suggestion s would be greatly appreciated",
"username": "Geetha_M"
},
{
"code": "$group",
"text": "Hello @Geetha_M,Welcome to the MongoDB Community forums We are looking for a query that will get in getting the list of unique attributes in a collection by scanning all the documents. We are not going with the MongoDB Compass Schema Option as we are having the schema being analyzed for 1000 records.Could you please clarify whether you are looking for the query for all the documents or only 1000 of them? I understand that Compass analyzes only 1000 documents, but I would appreciate it if you could confirm your approach.This works locally but not working as expected in the Sharded collection.Could you share what is not working, and if possible, share the error message you are receiving?When we tried to run by each stage there is some issues with the $group stage.Could you please explain the issue you are encountering with the $group stage?Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "We need all the records to be scanned .Compass wont help because it uses only 1000 documents.There is neither an error message nor any results.As mentioned in the above line there is no error message but the cursor just comes out of the execution.",
"username": "Geetha_M"
},
{
"code": "",
"text": "Hi @Geetha_M,There is neither an error message nor any results.Could you please provide additional clarification? Are you able to see a returned cursor?As mentioned in the above line there is no error message but the cursor just comes out of the execution.Did you attempt to iterate on the cursor to execute it, and is the cursor empty? Please provide the steps that you did so we can reproduce what you’re seeing.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | To get the unique attributes by scanning all the records in a Collection | 2023-04-25T18:42:47.283Z | To get the unique attributes by scanning all the records in a Collection | 681 |
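When the same key-collecting pipeline has to scan a large or sharded collection, running it from the driver with allowDiskUse enabled lets the $group stage spill to disk instead of hitting in-memory limits; a sketch in Python with pymongo, with placeholder connection details:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder deployment
coll = client["testdb"]["A"]                       # placeholder names

pipeline = [
    {"$project": {"data": {"$objectToArray": "$$ROOT"}}},
    {"$project": {"data": "$data.k"}},
    {"$unwind": "$data"},
    {"$group": {"_id": None, "keys": {"$addToSet": "$data"}}},
]

# allowDiskUse permits the $group stage to use temporary files for large inputs.
for doc in coll.aggregate(pipeline, allowDiskUse=True):
    print(sorted(doc["keys"]))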
null | [
"compass"
] | [
{
"code": "",
"text": "Hello, I am trying the free tier of MongoDB Atlas.In the Security->Database Access panel I have generated a mema_admin user with a X.509 certificate that I have downloaded on my local machine.On this machine I am trying to connect to Atlas via the MongoDB Compass client.In the Database Deployments I press the Connect button of my database and click on the Compass button (which I already have and use to access my own on-prem databases) and copy the (user_passowrd based) connection string.On my local Compass I create a new connection and paste the connection string, then in the Authentication method choose X.509 and in the TLS/SSL tab I choose the X.509 certificate I have generated in the first step above, but when I click connect I get a red error box “A Client Certificate is required with with X509 authentication.”What am I doing wrong? Thanks a lot",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "Hi @Robert_Alexander,The steps you’ve mentioned appear correct based off my interpretation.On my local Compass I create a new connection and paste the connection string, then in the Authentication method choose X.509 and in the TLS/SSL tab I choose the X.509 certificate I have generated in the first step above, but when I click connect I get a red error box “A Client Certificate is required with with X509 authentication.”Did you select the .PEM file that was downloaded for the X.509 user created in Atlas? You’ll need to select this file from the below highlighted box:\nimage738×613 38.6 KB\nIt should look like the following after selecting the file:\nimage699×132 13.7 KB\nRegards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks @Jason, that’s exactly what I do but when I click connect I get the error. ",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "Hey Robert - Thanks for confirming.Could you clarify the steps exactly taken so that I can replicate the error? In addition to that, can you provide the Compass version in use?Seems quite odd that you’re getting this error if you’re supplying the certificate.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I tried to see what could have caused the error but couldn’t get the exact same error.I tried the following:But both of these provided different errors to what you had received.Could you also send a screenshot similar to the ones I have provided for the TLS/SSL screen (Please redact any personal or sensitive information before posting here).",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason thanks a lot.Compass is 1.36.4 running on Mac OS 13.3.1 on an M1 chip.So this is my database users certificates (I have two now because of tests)\nScreenshot 2023-04-28 at 06.56.352090×846 54.7 KB\nI use the DB Connect button to generate the connection string which I paste into Compass\nScreenshot 2023-04-28 at 06.55.592104×1474 439 KB\nthen ask for X.509\nScreenshot 2023-04-28 at 06.56.042104×1474 447 KB\nand finally select one of the certs I have generated and downloaded\nScreenshot 2023-04-28 at 06.56.202104×1474 449 KB\nI tried accessing with userid/password and that works well as expected.",
"username": "Robert_Alexander"
},
{
"code": "Certificate Authority (.pem)Client Certificate and Key (.pem)Client Certificate and Key (.pem)",
"text": "In your TLS/SSL screenshot, you have chosen the file for Certificate Authority (.pem). Is there an option available for you under that called Client Certificate and Key (.pem)?You’ll need to select the file for Client Certificate and Key (.pem)Example:\n\nimage1402×820 71 KB\nPerhaps you may need to scroll down a bit although this is just a guess.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "There we go! Scrolling solved the mistery! Thanks a lot!!!",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "Glad to hear Robert thanks for marking the solution + updating the post with confirmation!",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Confused about X.509 authentication | 2023-04-27T19:12:18.763Z | Confused about X.509 authentication | 706 |
null | [
"atlas-triggers"
] | [
{
"code": "Error:\nError sending event to AWS EventBridge: SerializationError: failed to unmarshal response error\n\tstatus code: 413, request id: \ncaused by: UnmarshalError: error message missing \n",
"text": "HelloI have created a MongoDB trigger on a collection with Create, Update, Replace and Delete enabled. Document Preimage is also enabled.\nI have connected the trigger to AWS EventBridge. From the EventBridge I’m capturing the events and triggering a Lambda Function with an SQS between the Event bus and the Lambda.I was testing the the trigger but noticed that the Lambda is not triggering and went on to see the Mongo DB trigger logs from MongoDB Atlas. It showed errors with the following messageI’m not sure what’s causing this. Is it something wrong on MongoDB end or AWS end. I couldn’t find anything related to this by googling.",
"username": "schach_schach"
},
{
"code": "",
"text": "Hello, I have exactly the same problem, did you manage to correct it?",
"username": "David_Emmanuel_Escalante"
},
{
"code": "",
"text": "actually no.\nwe thought it was because our mongo was on a free cluster. upgraded and it disappeared\nbut don’t really no why it happened",
"username": "schach_schach"
},
{
"code": "",
"text": "Hello, i’m experiencing the same problem. did you manage to solve this error?",
"username": "fatin_hjulaihi"
},
{
"code": "",
"text": "Hi Fatin,Please see the KB below.https://support.mongodb.com/article/000021040Due to this error, the trigger may fail to run for the specific change event, and in some cases, it could result in the trigger being suspended, preventing future instances from running on change events.The 413 error indicates that the processed change event by the trigger was larger than the allowed limit in AWS. AWS EventBridge triggers have a maximum event size that the AWS ecosystem can accept. The total size of the event for an AWS put entry is set to 256 KB. If the event exceeds this size, the mentioned error will be thrown.To reduce the size of the event sent to AWS, specify a Project Expression in the trigger configuration. This limits the number of fields from the document to only the necessary ones.Regards",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Hey @Mansoor_Omar Manny, I hope all is well.FYI that KB isn’t publicly accessible. I keep getting sent to the MongoDB employee sign in to see it. It asks for my Okta….",
"username": "Brock"
},
{
"code": "",
"text": "Hi Brock, updated with details.",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "can you let us view the KB as well ",
"username": "schach_schach"
},
{
"code": "",
"text": "Hi Schach,I believe that requires a support subscription, however I have already updated my first comment to include the details of the kb to answer this particular question.Regards\nManny",
"username": "Mansoor_Omar"
}
] | "Error sending event to AWS EventBridge: SerializationError: failed to unmarshal response error" with MongoDB trigger connected to AWS EventBridge | 2022-06-29T08:28:29.148Z | “Error sending event to AWS EventBridge: SerializationError: failed to unmarshal response error” with MongoDB trigger connected to AWS EventBridge | 3,758 |
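As an illustration of the Project Expression approach mentioned in the reply above, a projection document like the following (the field names here are hypothetical) limits the change event to a handful of fields so the payload stays under the 256 KB EventBridge limit:

{
  "_id": 1,
  "operationType": 1,
  "fullDocument.status": 1,
  "fullDocument.updatedAt": 1
}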
null | [
"replication",
"java"
] | [
{
"code": "",
"text": "Hi Team,I am using mongo java driver 3.12.9 and server as mongo 5.0.15 in PSA replica set.I did not find any API or method to execute rs.reconfigForPSASet command in mongo driver.Kindly help for the same!Thanks,\nKapil",
"username": "Kapil_Gupta"
},
{
"code": "rs.reconfigForPSASetrs.reconfigForPSASetmongoshmongosh",
"text": "Hey @Kapil_Gupta,Welcome to the MongoDB Community forums The rs.reconfigForPSASet is a mongosh method, and thus may not have a direct equivalent in the Java driver. While the driver is intended for data access and manipulation, and rs.reconfigForPSASet is an administrative command. Therefore, it’s recommended to run such administrative commands using the mongosh shell, instead of using the driver to do so.Additionally, for my own understanding, would you please explain why you’re not running the command from mongosh?Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Not able to execute rs.reconfigForPSASet via mongo java driver | 2023-03-28T13:56:01.016Z | Not able to execute rs.reconfigForPSASet via mongo java driver | 748 |
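For context, administrative commands can still be issued from a driver through the generic command interface. The sketch below uses PyMongo with a placeholder connection string and only shows that mechanism; it does not reproduce the two-step safety procedure that rs.reconfigForPSASet performs, which is exactly why the shell helper is recommended above:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=repset")  # placeholder

# Fetch the current replica set configuration document...
config = client.admin.command("replSetGetConfig")["config"]
print(config["version"])

# ...and, only after modifying it carefully, apply it with replSetReconfig:
# client.admin.command({"replSetReconfig": config})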
null | [
"sharding",
"transactions",
"time-series"
] | [
{
"code": "sh.shardCollection(\n \"ds.KLineTrade_10min\",\n {\"MetaData.MarketID\":1, \"MetaData.NameID\":1, \"MetaData.BidAsk\": 1, \"Time\":1},\n {\n timeseries: {\n timeField: \"Time\",\n metaField: \"MetaData\",\n granularity: \"seconds\"\n }\n }\n)\n{\n \"_id\" : ObjectId(\"6049b848242c00005e005cb3\"),\n \"Time\" : 2021-04-21T05:00:00.000+00:00, // Date\n \"MetaData\":\n {\n \"MarketID\" : 0, // int32\n \"NameID\": 1, // int32\n \"BidAsk\": 0 // int32, 0: 1-level quote, 1: 5-level quote\n },\n \"Open\": 123.34, // double\n \"Close\": 123.2, // double\n \"High\": 123, // double\n \"Low\": 1.2, // double\n \"Volume\": 123.1, // double\n \"Opi\": 122.1, // double\n \"AVPrice\": 12.1, // double\n \"Scale\": 0.5 // double\n}\n{\"t\":{\"$date\":\"2023-04-25T03:39:04.287+08:00\"},\"s\":\"I\", \"c\":\"SH_REFR\", \"id\":4619901, \"ctx\":\"CatalogCache-7850\",\"msg\":\"Refreshed cached collection\",\"attr\":{\"namespace\":\"ds.system.buckets.KLineTrade_30min\",\"lookupSinceVersion\":\"13|1||64468e997daa9be5ee619980||Timestamp(1682345625, 41)\",\"newVersion\":\"{ chunkVersion: { t: Timestamp(1682345625, 41), e: ObjectId('64468e997daa9be5ee619980'), v: Timestamp(13, 1) }, forcedRefreshSequenceNum: 269733, epochDisambiguatingSequenceNum: 269668 }\",\"timeInStore\":\"{ chunkVersion: \\\"None\\\", forcedRefreshSequenceNum: 269732, epochDisambiguatingSequenceNum: 269667 }\",\"durationMillis\":2}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:04.287+08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":5087102, \"ctx\":\"conn112710\",\"msg\":\"Recipient failed to copy oplog entries for retryable writes and transactions from donor\",\"attr\":{\"namespace\":\"ds.system.buckets.KLineTrade_10min\",\"migrationSessionId\":\"rs0_rs1_6446dacb7daa9be5ee17e06c\",\"fromShard\":\"rs0\",\"error\":\"Destructor cleaning up thread\"}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:04.288+08:00\"},\"s\":\"I\", \"c\":\"MIGRATE\", \"id\":22000, \"ctx\":\"migrateThread\",\"msg\":\"Starting receiving end of chunk migration\",\"attr\":{\"chunkMin\":{\"meta.MarketID\":0,\"meta.NameID\":11064,\"meta.BidAsk\":1,\"control.min.Time\":{\"$date\":\"2020-11-17T01:00:00.000Z\"}},\"chunkMax\":{\"meta.MarketID\":0,\"meta.NameID\":11276,\"meta.BidAsk\":1,\"control.min.Time\":{\"$date\":\"2018-03-16T01:00:00.000Z\"}},\"namespace\":\"ds.system.buckets.KLineTrade_30min\",\"fromShard\":\"rs0\",\"epoch\":{\"$oid\":\"64468e997daa9be5ee619980\"},\"sessionId\":\"rs0_rs1_6446dad87daa9be5ee17e172\",\"migrationId\":{\"uuid\":{\"$uuid\":\"db42e46f-ca26-453f-9290-cbae2139a93a\"}}}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:04.292+08:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ShardRegistry\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"shard0-primary.mg.com:23450\"}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:05.289+08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn112710\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"command\":{\"_recvChunkStatus\":\"ds.system.buckets.KLineTrade_30min\",\"waitForSteadyOrDone\":true,\"sessionId\":\"rs0_rs1_6446dad87daa9be5ee17e172\",\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1682365144,\"i\":5}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"crIebqqm/pL9InFc9MaG9wQGCqY=\",\"subType\":\"0\"}},\"keyId\":7163442577083990019}},\"$configTime\":{\"$timestamp\":{\"t\":1682365144,\"i\":2}},\"$topologyTime\":{\"$timestamp\":{\"t\":0,\"i\":1}},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":853,\"locks\":{},\"remote\":\"<ip_addr>:60426\",\"protocol\":\"op_msg\",\"durationMillis\":1000}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:06.084+08:00\"},\"s\":\"I\", \"c\":\"MIGRATE\", \"id\":21999, \"ctx\":\"batchApplier\",\"msg\":\"Batch application failed\",\"attr\":{\"error\":\"DuplicateKey{ keyPattern: { _id: 1 }, keyValue: { _id: ObjectId('5e6adb107db2a2e5ee632b22') } }: Insert of { _id: ObjectId('5e6adb107db2a2e5ee632b22'), control: { version: 2, min: { _id: ObjectId('64469f35db1f646f8c56b871'), Time: new Date(1584061200000), Open: 7.699999809265137, Close: 8.260000228881836, High: 8.279999732971191, Low: 7.690000057220459, Volume: 2180056.0, Opi: 88467440.0, AVPrice: 7.971197605133057, Scale: 0.3896368145942688 }, max: { _id: ObjectId('64469f35db1f646f8c56b878'), Time: new Date(1584081000000), Open: 8.510000228881836, Close: 8.5, High: 8.640000343322754, Low: 8.399999618530273, Volume: 11104572.0, Opi: 273989728.0, AVPrice: 8.545262336730957, Scale: 0.636924147605896 }, count: 8 }, meta: { BidAsk: 1, MarketID: 0, NameID: 11142 }, data: { Time: BinData(7, 0900C02D83D170010000830DE86E03000000000D000000002E9302FDB74C0A000000000E0000000000000000), Volume: BinData(7, 010000000080272E6541937D76A00D10BA0D00BD243C02D08505003D930A005CDF5A008EA6D3010000000000), _id: BinData(7, 070064469F35DB1F646F8C56B87180080200000000000000), Scale: BinData(7, 0100000000C0AE61E43F86FEFFFFFFBB811D000E000000D06E0800FEFFFFFF47A53B00FEFFFFFF2B5559000E00000078D05300FEFFFFFF2BC37F00FEFFFFFFDB3B980000), High: BinData(7, 0100000000205C8F2040856800000070C4F5080E0000007C140E00FEFFFFFF0BD703000E00000034330300FEFFFFFF67660600FEFFFFFFAB47010000), Opi: BinData(7, 0100000000C09F179541930DB99B3D406A15100D62D82E8096F10B0D6EAA2F801FCD080E046C320000000000), Low: BinData(7, 0100000000608FC21E40846887999982295C0068475C8F705C8F020E00000064660600FEFFFFFF2F330300FEFFFFFFAF47010000), AVPrice: BinData(7, 0100000000A081E21F408468C721DC70A41F086818AE0B70D82E020E000000B8D60200FEFFFFFF47130500FEFFFFFF3BB8020000), Close: BinData(7, 0100000000205C8F204085FEFFFFFFAB4701000E000000C0F50800FEFFFFFF6366060068879999705C8F020E000000D8A30000FEFFFFFF13AE070000), Open: BinData(7, 0100000000C0CCCC1E40850E000000EC513800FEFFFFFFAB4701000E000000EC510800FEFFFFFF8FC2050068C8F508703433030E000000B047010000) } } failed. 
:: caused by :: E11000 duplicate key error collection: ds.system.buckets.KLineTrade_30min dup key: { _id: ObjectId('5e6adb107db2a2e5ee632b22') }\"}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:06.084+08:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22566, \"ctx\":\"ShardRegistry\",\"msg\":\"Ending connection due to bad connection status\",\"attr\":{\"hostAndPort\":\"shard0-primary.mg.com:23450\",\"error\":\"CallbackCanceled: Callback was canceled\",\"numOpenConns\":1}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:06.084+08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22080, \"ctx\":\"migrateThread\",\"msg\":\"About to log metadata event\",\"attr\":{\"namespace\":\"changelog\",\"event\":{\"_id\":\"SHDX-I7-73:23450-2023-04-25T03:39:06.084+08:00-6446dada9394fa0f1351f214\",\"server\":\"SHDX-I7-73:23450\",\"shard\":\"rs1\",\"clientAddr\":\"\",\"time\":{\"$date\":\"2023-04-24T19:39:06.084Z\"},\"what\":\"moveChunk.to\",\"ns\":\"ds.system.buckets.KLineTrade_30min\",\"details\":{\"step 1 of 8\":1,\"step 2 of 8\":0,\"step 3 of 8\":2,\"min\":{\"meta.MarketID\":0,\"meta.NameID\":11064,\"meta.BidAsk\":1,\"control.min.Time\":{\"$date\":\"2020-11-17T01:00:00.000Z\"}},\"max\":{\"meta.MarketID\":0,\"meta.NameID\":11276,\"meta.BidAsk\":1,\"control.min.Time\":{\"$date\":\"2018-03-16T01:00:00.000Z\"}},\"to\":\"rs1\",\"from\":\"rs0\",\"note\":\"aborted\"}}}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:06.103+08:00\"},\"s\":\"I\", \"c\":\"MIGRATE\", \"id\":21998, \"ctx\":\"migrateThread\",\"msg\":\"Error during migration\",\"attr\":{\"error\":\"migrate failed: Location51008: _migrateClone failed: :: caused by :: operation was interrupted\"}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:06.104+08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":5087102, \"ctx\":\"migrateThread\",\"msg\":\"Recipient failed to copy oplog entries for retryable writes and transactions from donor\",\"attr\":{\"namespace\":\"ds.system.buckets.KLineTrade_30min\",\"migrationSessionId\":\"rs0_rs1_6446dad87daa9be5ee17e172\",\"fromShard\":\"rs0\",\"error\":\"migrate failed: Location51008: _migrateClone failed: :: caused by :: operation was interrupted\"}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:06.104+08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn112710\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"command\":{\"_recvChunkStatus\":\"ds.system.buckets.KLineTrade_30min\",\"waitForSteadyOrDone\":true,\"sessionId\":\"rs0_rs1_6446dad87daa9be5ee17e172\",\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1682365145,\"i\":14082}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"tNt3TIhZF2HtKCEhbfHI8f0kDaY=\",\"subType\":\"0\"}},\"keyId\":7163442577083990019}},\"$configTime\":{\"$timestamp\":{\"t\":1682365144,\"i\":2}},\"$topologyTime\":{\"$timestamp\":{\"t\":0,\"i\":1}},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":960,\"locks\":{},\"remote\":\"<ip_addr>:60426\",\"protocol\":\"op_msg\",\"durationMillis\":814}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:06.104+08:00\"},\"s\":\"I\", \"c\":\"MIGRATE\", \"id\":5004703, \"ctx\":\"migrateThread\",\"msg\":\"clearReceiveChunk \",\"attr\":{\"currentKeys\":\"[{ meta.MarketID: 0, meta.NameID: 11064, meta.BidAsk: 1, control.min.Time: new Date(1605574800000) }, { meta.MarketID: 0, meta.NameID: 11276, meta.BidAsk: 1, control.min.Time: new Date(1521162000000) })\"}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:06.122+08:00\"},\"s\":\"I\", \"c\":\"MIGRATE\", \"id\":22026, \"ctx\":\"range-deleter\",\"msg\":\"Submitting range deletion 
task\",\"attr\":{\"deletionTask\":{\"_id\":{\"$uuid\":\"db42e46f-ca26-453f-9290-cbae2139a93a\"},\"nss\":\"ds.system.buckets.KLineTrade_30min\",\"collectionUuid\":{\"$uuid\":\"ec1c91b4-e024-4a50-9edc-45dc9873ee18\"},\"donorShardId\":\"rs0\",\"range\":{\"min\":{\"meta.MarketID\":0,\"meta.NameID\":11064,\"meta.BidAsk\":1,\"control.min.Time\":{\"$date\":\"2020-11-17T01:00:00.000Z\"}},\"max\":{\"meta.MarketID\":0,\"meta.NameID\":11276,\"meta.BidAsk\":1,\"control.min.Time\":{\"$date\":\"2018-03-16T01:00:00.000Z\"}}},\"whenToClean\":\"now\",\"timestamp\":{\"$timestamp\":{\"t\":1682365144,\"i\":5}},\"numOrphanDocs\":63381},\"migrationId\":{\"uuid\":{\"$uuid\":\"db42e46f-ca26-453f-9290-cbae2139a93a\"}}}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:06.122+08:00\"},\"s\":\"I\", \"c\":\"MIGRATE\", \"id\":21990, \"ctx\":\"range-deleter\",\"msg\":\"Scheduling deletion of the collection's specified range\",\"attr\":{\"namespace\":\"ds.system.buckets.KLineTrade_30min\",\"range\":\"[{ meta.MarketID: 0, meta.NameID: 11064, meta.BidAsk: 1, control.min.Time: new Date(1605574800000) }, { meta.MarketID: 0, meta.NameID: 11276, meta.BidAsk: 1, control.min.Time: new Date(1521162000000) })\"}}\n{\"t\":{\"$date\":\"2023-04-25T03:39:07.830+08:00\"},\"s\":\"E\", \"c\":\"SHARDING\", \"id\":6419611, \"ctx\":\"range-deleter\",\"msg\":\"Cached orphan documents count became negative, resetting it to 0\",\"attr\":{\"collectionUUID\":{\"uuid\":{\"$uuid\":\"ec1c91b4-e024-4a50-9edc-45dc9873ee18\"}},\"numOrphanDocs\":-816,\"delta\":-64197,\"numRangeDeletionTasks\":1}}\n",
"text": "My MongoDB sharded cluster consists of two shards, rs0 and rs1, each with two nodes, one primary and one secondary (originally three nodes, but one was temporarily removed for performance reasons). The issue is that even after the insert operation has completed, rs1(shard1) continues to execute insert operations. The logs show that rs1(shard1) is performing a shard migration, but it has not been successful over and over again. Here are the specific reference details:MongoDB version is 6.0.5\nMongoDB Client: Mongocxx driver v3.7.1(with mongoc driver v1.23.2)Command used to create the time-series collection:Example data:Error log:",
"username": "Tong_Zhang2"
},
{
"code": "'ds.system.buckets.KLineTrade_1h': {\n shardKey: {\n 'meta.MarketID': 1,\n 'meta.NameID': 1,\n 'meta.BidAsk': 1,\n 'control.min.Time': 1\n },\n unique: false,\n balancing: true,\n chunkMetadata: [\n {\n shard: 'rs0',\n nChunks: 1\n },\n {\n shard: 'rs1',\n nChunks: 4\n }\n ],\n chunks: [\n {\n min: {\n 'meta.MarketID': MinKey(),\n 'meta.NameID': MinKey(),\n 'meta.BidAsk': MinKey(),\n 'control.min.Time': MinKey()\n },\n max: {\n 'meta.MarketID': 0,\n 'meta.NameID': 927,\n 'meta.BidAsk': 1,\n 'control.min.Time': 2022-09-07T00:00:00.000Z\n },\n 'on shard': 'rs1',\n 'last modified': Timestamp({ t: 2, i: 0 })\n },\n {\n min: {\n 'meta.MarketID': 0,\n 'meta.NameID': 927,\n 'meta.BidAsk': 1,\n 'control.min.Time': 2022-09-07T00:00:00.000Z\n },\n max: {\n 'meta.MarketID': 0,\n 'meta.NameID': 10302,\n 'meta.BidAsk': 1,\n 'control.min.Time': 2018-12-07T00:00:00.000Z\n },\n 'on shard': 'rs1',\n 'last modified': Timestamp({ t: 3, i: 0 })\n },\n {\n min: {\n 'meta.MarketID': 0,\n 'meta.NameID': 10302,\n 'meta.BidAsk': 1,\n 'control.min.Time': 2018-12-07T00:00:00.000Z\n },\n max: {\n 'meta.MarketID': 0,\n 'meta.NameID': 10532,\n 'meta.BidAsk': 1,\n 'control.min.Time': 2023-02-08T00:00:00.000Z\n },\n 'on shard': 'rs1',\n 'last modified': Timestamp({ t: 4, i: 0 })\n },\n {\n min: {\n 'meta.MarketID': 0,\n 'meta.NameID': 10532,\n 'meta.BidAsk': 1,\n 'control.min.Time': 2023-02-08T00:00:00.000Z\n },\n max: {\n 'meta.MarketID': 0,\n 'meta.NameID': 10826,\n 'meta.BidAsk': 1,\n 'control.min.Time': 2018-12-26T00:00:00.000Z\n },\n 'on shard': 'rs1',\n 'last modified': Timestamp({ t: 5, i: 0 })\n },\n {\n min: {\n 'meta.MarketID': 0,\n 'meta.NameID': 10826,\n 'meta.BidAsk': 1,\n 'control.min.Time': 2018-12-26T00:00:00.000Z\n },\n max: {\n 'meta.MarketID': MaxKey(),\n 'meta.NameID': MaxKey(),\n 'meta.BidAsk': MaxKey(),\n 'control.min.Time': MaxKey()\n },\n 'on shard': 'rs0',\n 'last modified': Timestamp({ t: 5, i: 1 })\n }\n ],\n tags: []\n },\n",
"text": "Hi team!\nI tried setting retryWrites=false, but the sharding still remains imbalanced. I have a large amount of data to insert, with each of the 20 collections having billions of documents and different time aggregation granularities. From the monitoring, it can be seen that rs1 (shard1) has been executing many insert operations.\n\n微信截图_202304260933121874×336 70.9 KB\nAnd here is a part result of “sh.stats()”:",
"username": "Tong_Zhang2"
},
{
"code": "failed. :: caused by :: E11000 duplicate key error collection:\nds.system.buckets.KLineTrade_30min dup key: \n{ _id: ObjectId('5e6adb107db2a2e5ee632b22') }\"}}\n const problemBucketId = ObjectId('5e6adb107db2a2e5ee632b22');\n \n // Ensure that this returned two bucket objects.\n const problemBuckets = db[\"system.buckets.KLineTrade_30min\"].aggregate([{$match: {_id: problemBucketId}}]).toArray();\n \n // Save the measurements corresponding to the buckets.\n const problemMeasurements = db[\"system.buckets.KLineTrade_30min\"].aggregate([{$match: {_id: problemBucketId}}, {$_unpackBucket: {timeField: \"recordStartDate\", metaField: \"mdn\"}}]).toArray();\n print(problemMeasurements);\n \n // Delete the two buckets.\n db[\"system.buckets.KLineTrade_30min\"].deleteMany({_id: problemBucketId});\n \n // Re-insert the buckets to create new buckets with different Object Ids.\n db[\"KLineTrade_30min\"].insertMany(problemMeasurements);\n",
"text": "Hello @Tong_Zhang2,Welcome to the MongoDB Community forums The issue is that even after the insert operation has been completed, rs1(shard1) continues to execute insert operations. The logs show that rs1(shard1) is performing a shard migration, but it has not been successful over and over again.The number of chunks created depends on the configured chunk size. After the initial chunk creation, the balancer migrates these initial chunks across the shards as appropriate as well as manages the chunk distribution. Please refer to this Initial Chunks docs for more details.The error message in the shared logs indicates that a duplicate key error occurred during the batch migration of the buckets.I believe you’re seeing the effect of SERVER-76554, which is a known issue. There are currently efforts to fix this, but there is a workaround that may be helpful in your use case.Note that this workaround requires you to modify internal collections, and should be executed with great care, with the data backed up just in case something went wrong.I hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 6.0.5 TimeSeries collection shard migration failed | 2023-04-25T00:39:25.957Z | MongoDB 6.0.5 TimeSeries collection shard migration failed | 1,135 |
null | [
"queries",
"data-modeling"
] | [
{
"code": "{\n\t\"t\": {\n\t\t\"$date\": \"2023-03-14T18:02:45.788+05:30\"\n\t},\n\t\"s\": \"I\",\n\t\"c\": \"WRITE\",\n\t\"id\": 51803,\n\t\"ctx\": \"conn1830718\",\n\t\"msg\": \"Slow query\",\n\t\"attr\": {\n\t\t\"type\": \"update\",\n\t\t\"ns\": \"db_name.XXXXXX\",\n\t\t\"command\": {\n\t\t\t\"q\": {\n\t\t\t\t\"uid\": 307126413\n\t\t\t},\n\t\t\t\"u\": {\n\t\t\t\t\"$push\": {\n\t\t\t\t\t\"psfd\": {\n\t\t\t\t\t\t\"$each\": [{\n\t\t\t\t\t\t\t\"mid\": 5373,\n\t\t\t\t\t\t\t\"aid\": 1488,\n\t\t\t\t\t\t\t\"trid\": \"89461-5373-307126413-1488-230313105553\",\n\t\t\t\t\t\t\t\"guid\": \"f0aeafbe-a05b-4519-b2bd-277f383febce\",\n\t\t\t\t\t\t\t\"st\": \"drop\",\n\t\t\t\t\t\t\t\"dt\": 230313105553,\n\t\t\t\t\t\t\t\"adw\": 2,\n\t\t\t\t\t\t\t\"ad\": 230313,\n\t\t\t\t\t\t\t\"at\": 105553\n\t\t\t\t\t\t}],\n\t\t\t\t\t\t\"$slice\": -5000\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"multi\": false,\n\t\t\t\"upsert\": false\n\t\t},\n\t\t\"planSummary\": \"IXSCAN { uid: 1 }\",\n\t\t\"keysExamined\": 1,\n\t\t\"docsExamined\": 1,\n\t\t\"nMatched\": 1,\n\t\t\"nModified\": 1,\n\t\t\"nUpserted\": 0,\n\t\t\"numYields\": 0,\n\t\t\"queryHash\": \"B34121E2\",\n\t\t\"planCacheKey\": \"CFF4BBD8\",\n\t\t\"locks\": {\n\t\t\t\"ParallelBatchWriterMode\": {\n\t\t\t\t\"acquireCount\": {\n\t\t\t\t\t\"r\": 320\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"ReplicationStateTransition\": {\n\t\t\t\t\"acquireCount\": {\n\t\t\t\t\t\"w\": 333\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"Global\": {\n\t\t\t\t\"acquireCount\": {\n\t\t\t\t\t\"r\": 12,\n\t\t\t\t\t\"w\": 320\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"Database\": {\n\t\t\t\t\"acquireCount\": {\n\t\t\t\t\t\"w\": 320\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"Collection\": {\n\t\t\t\t\"acquireCount\": {\n\t\t\t\t\t\"w\": 320\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"Mutex\": {\n\t\t\t\t\"acquireCount\": {\n\t\t\t\t\t\"r\": 700\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"flowControl\": {\n\t\t\t\"acquireCount\": 180,\n\t\t\t\"acquireWaitCount\": 13,\n\t\t\t**\"timeAcquiringMicros\": 12288598**\n\t\t},\n\t\t\"storage\": {\n\t\t\t\"data\": {\n\t\t\t\t\"bytesRead\": 31243557,\n\t\t\t\t\"timeReadingMicros\": 64674\n\t\t\t}\n\t\t},\n\t\t\"remote\": \"172.31.22.28:35548\",\n\t\t\"durationMillis\": 794\n\t}\n}\n",
"text": "I got this below the log in the MongoDB slow query log. I have the MongoDB version running 5.0 with 3 shards in PSS mode.The query is running on the primary key and the total document size is 800KB, in the below query, I see time spent on locks is around 12 seconds.How can I reduce this time?",
"username": "Kathiresh_Nadar"
},
{
"code": "\"timeAcquiringMicros\": 12288598\ntimeAcquiringMicros",
"text": "Hey @Kathiresh_Nadar,I suspect that there could be several different reasons for the high timeAcquiringMicros value, e.g. hardware, other operations, or other factors.The query is running on the primary key and the total document size is 800KB, in the below query, I see time spent on locks is around 12 seconds.Can you please share the following information in order to repro it in my environment:Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Lock acquiring time for a MongoDB Document | 2023-03-15T14:27:09.506Z | Lock acquiring time for a MongoDB Document | 909 |
null | [
"atlas",
"serverless"
] | [
{
"code": "",
"text": "Hi all,\nI am currently using the Serverless Atlas deployment. The application i am trying to run on it, has many update operations on multiple collections. I am running a loadtest on my application and I am generating a constant load of 100-200 VU per second on the application and each thread perform 2 or 3 update operations. I am facing timeouts(as the database is taking a much longer time to perform designated task) after around 25-30 seconds the test starts, and then the timeout stops happening after around 240 seconds, same happens again after a considerable amount of load on the server. I am assuming that the database scales up, during that time which is around 3 minutes.My question is, is that the standard scale up time required for serverless deployments? Should it happen faster, vis-à-vis is this a configuration malfunction?\nAlso, given a constant rate of traffic I am assuming on my application, does that mean i should look for dedicated deployments and not serverless?\nThanks in advance!",
"username": "Abhinaba_Mitra"
},
{
"code": "",
"text": "Hi @Abhinaba_Mitra and welcome to MongoDB community forums!!I am running a loadtest on my application and I am generating a constant load of 100-200 VU per second on the application and each thread perform 2 or 3 update operations.To improve my comprehension of the workflow, it would be greatly beneficial if you could assist me with the following information:As mentioned in the MongoDB Serverless documentationServerless is a next-gen cloud-native development model that allows developers to build applications and run code without thinking about server provisioning, management and scaling.In saying so, when you are on a serverless architecture, scaling up or down is never a consideration.same happens again after a considerable amount of load on the server.Lastly, can you also help me understand, what server is being implied here, an app server or a database server?Regards\nAasawari",
"username": "Aasawari"
}
] | MongoDB Serverless Instances | 2023-04-24T14:45:46.390Z | MongoDB Serverless Instances | 928 |
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "Hello,Since I foresee more than 1 trigger sending email reports, I have separated email sender function. However, I cannot find information/ figure out how to import the said function into my trigger?Thank you",
"username": "Vladimir"
},
{
"code": "",
"text": "Hi Vladimir,The trigger itself is already connected to a function, if you are asking how to call a second function from that initial trigger function please see documentation below.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB triggers - accessing a function from a function | 2023-04-25T15:06:23.945Z | MongoDB triggers - accessing a function from a function | 887 |
null | [
"aggregation",
"python"
] | [
{
"code": "pipeline = [\n {\n \"$match\": {\n \"$or\": [\n {\"sender\": {\"$eq\": account_string}},\n {\"receiver\": {\"$eq\": account_string}},\n ]\n }\n },\n {\"$sort\": {\"height\": DESCENDING}},\n {\"$project\": {\"_id\": 1}},\n {\n \"$facet\": {\n \"metadata\": [{\"$count\": \"total\"}],\n \"data\": [{\"$skip\": skip}, {\"$limit\": 20}],\n }\n },\n {\n \"$project\": {\n \"data\": 1,\n \"total\": {\"$arrayElemAt\": [\"$metadata.total\", 0]},\n }\n },\n ]\nresult = (\n await db.my_collection\n .aggregate(pipeline)\n .to_list(20)\n )\n",
"text": "Using Motor in Python, I have a Collection that has 10M documents, with 4 fields: sender, receiver, amount and height.I’m trying to fetch documents where either sender or receiver matches a predefined string. As I need to use pagination, I need the total count as well.Current pipeline:My collection has indices on sender and receiver.I execute this pipeline using:This executes at very acceptable speeds to almost all values of account_string. However, there is 1 account_string that has 9M out of 10M matches. For this account_string, execution time nears 20 sec. What’s worse, if I paginate (increase/decrease the skip amount), I again have to wait 20 sec.I believe the slowness is due to the fact that it needs to read the entire collection to calculate the count?My question: is there a quicker way to achieve paginated results?",
"username": "Sander_de_Ruiter"
},
{
"code": "",
"text": "This executes at very acceptable speeds to almost all values of account_string. However, there is 1 account_string that has 9M out of 10M matches. For this account_string, execution time nears 20 sec. What’s worse, if I paginate (increase/decrease the skip amount), I again have to wait 20 sec.I believe the slowness is due to the fact that it needs to read the entire collection to calculate the count?The match in this case is almost a full scan of an index tree, so definitely takes longYou sort at the second stage on height, do you have index on height ?Use limit + skip is not an ideal way for pagination in this case. This is because every time you use a different skip and limit, the index tree still has to be traversed from the very beginning until those many elements are skipped. And as a result, when skip is a big number, you will get a performance pain. What i would suggest is to create a compound index according to ESR rule, and then use {height: {$gte: last_recorded_height}} + limit for pagination. Now you have a bounded lowest value for height, smaller entries will just directly be skipped without ever visiting.If you only need an estimate number, you can try estimated count in driver APIs. Otherwise, the overhead on an accurate count has to be there. (But why do you need total count for pagination? something like number of pages on screen? )",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thank you.",
"username": "Sander_de_Ruiter"
},
{
"code": "",
"text": "If thats only for end page, then sort by height in reversed order and query with a limit.BTW, total number of results is subject to change unless you use a snapshot query",
"username": "Kobe_W"
},
{
"code": "",
"text": "I may be fussy, but if I have 85 results, with pagination showing 20 items per page, I want the last button to only show me the last 5. Doing this reverse query won’t get me there, I think?Also, I just really like the “showing items 21-40 from 85” information regarding pagination. For this I would need to total count.",
"username": "Sander_de_Ruiter"
},
{
"code": "",
"text": "in that case, you will have to count the total number, but as i said, the number is subject to change if not using snapshot data.",
"username": "Kobe_W"
}
] | Cache the count for query? Quicker way to get paginated results for imbalanced collection | 2023-04-26T16:00:30.173Z | Cache the count for query? Quicker way to get paginated results for imbalanced collection | 856 |
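A sketch of the range-based ("keyset") pagination suggested in the thread above, written with pymongo for brevity (the Motor calls are analogous with await). It assumes a compound index and covers only the single-field sender case, since the $or over sender and receiver in the original query complicates a pure keyset approach; the connection details and index are placeholders:

from pymongo import DESCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder deployment
coll = client["testdb"]["my_collection"]
coll.create_index([("sender", DESCENDING), ("height", DESCENDING)])

def next_page(account_string, last_height=None, page_size=20):
    query = {"sender": account_string}
    if last_height is not None:
        # Resume strictly below the last height already shown instead of using skip.
        query["height"] = {"$lt": last_height}
    return list(coll.find(query).sort("height", DESCENDING).limit(page_size))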
null | [
"aggregation",
"python",
"transactions",
"views"
] | [
{
"code": "coll = mongo_conn['reports']['myreport_change_history']\ncoll.create_index([\n ('all_keys', pymongo.DESCENDING),\n ('my_unique_id', pymongo.DESCENDING)\n])\nall_keys_-1_goblin_unique_id_-1'_-1> show collections\n__schema__\nmyreport\nmyreport_change_history\ncoll = mongo_conn['reports']['myreport']\nresult = coll.aggregate([\n [...]\n {'$merge': {\n 'into': 'myreport_change_history',\n 'on': ['all_keys', 'my_unique_id']\n }\n }\nOperationFailure: Cannot find index to verify that join fields will be unique, full error: {'ok': 0.0, 'errmsg': 'Cannot find index to verify that join fields will be unique', 'code': 51183, 'codeName': 'Location51183'}_-1",
"text": "I have a collection of transaction documents uploaded weekly. Its necessary for me to track changes to the fields/schema, and with the generous help of StackOverflow and user nimrod serok I have an aggregation pipeline which can produce a document of changes.Following what I’ve learned from the Aggregation course M121, I now wish to ‘merge for Rollup’ these results.I created a new index:[Strangely, pymongo has this string output when I create the change_history collection: all_keys_-1_goblin_unique_id_-1', as if the field names were saved with _-1 appended to the name.]I used Mongo Shell to log into my mongodb server to discover this step did create a new collection:Then I added to the end of the successful aggregation pipeline the merge stage:This produces an error:OperationFailure: Cannot find index to verify that join fields will be unique, full error: {'ok': 0.0, 'errmsg': 'Cannot find index to verify that join fields will be unique', 'code': 51183, 'codeName': 'Location51183'}But I just created the index. What’s going on?\nI also tried the merge stage with the mysterious _-1 appended to each. No luck. Finally, both of these new fields in the index will be unique as part of the result of the aggregation pipeline. No issue with the data, so it’s just a matter of syntax or …?",
"username": "xtian_simon"
},
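A hedged sketch of the kind of index $merge expects for its "on" fields: it has to be a unique index whose keys match those fields exactly. The collection and field names are reused from the question; the unique flag is the detail being illustrated, not something taken from the original post:

```python
# Hedged sketch: connection is a placeholder; names come from the question above.
import pymongo
from pymongo import MongoClient

mongo_conn = MongoClient()
target = mongo_conn["reports"]["myreport_change_history"]

# $merge's "on" fields need a *unique* index whose keys match them exactly.
target.create_index(
    [("all_keys", pymongo.DESCENDING), ("my_unique_id", pymongo.DESCENDING)],
    unique=True,
)

source = mongo_conn["reports"]["myreport"]
source.aggregate([
    # ... the preceding stages of the original pipeline would go here ...
    {"$merge": {
        "into": "myreport_change_history",
        "on": ["all_keys", "my_unique_id"],
    }},
])
```

With a unique index on those exact fields in place, the same $merge stage should be able to verify the join fields.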
{
"code": "",
"text": "hey lately even im facing the same issue are you able to resolve this?",
"username": "sai_pavan"
}
] | Create Index and Merge to create a change history for document schema | 2022-05-22T00:14:53.107Z | Create Index and Merge to create a change history for document schema | 3,110 |
null | [
"sharding"
] | [
{
"code": "",
"text": "Hi ,\nWe have 3 node sharded mongodb server and 2 mongos server. We want to route queries coming from DWH ETL jobs to read only standby on mongos. Is this possibe and how to configure this?\nRegards.",
"username": "baki_sahin"
},
{
"code": "",
"text": "read only standby on mongoswhat’s this?If you want to route requests to a specific mongodb node, you can use zone (for shards )and/or tag list (for replica sets).",
"username": "Kobe_W"
},
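A hedged pymongo sketch of the tag-set approach mentioned in the post above: the ETL client asks for tagged secondaries via its read preference. The URI, credentials, database, and tag values are placeholders, and the shard replica set members must actually carry matching tags (members[n].tags) for this to route anywhere:

```python
# Hedged illustration only: URI, credentials, db/collection and tag values are placeholders.
from pymongo import MongoClient

# Reads go to secondaries tagged e.g. {"workload": "ETL"} in each shard's replica set.
client = MongoClient(
    "mongodb://etl_user:etl_pass@mongos1:27017,mongos2:27017/"
    "?authSource=admin"
    "&readPreference=secondary"
    "&readPreferenceTags=workload:ETL"
)

docs = list(client["dwh"]["orders"].find({"status": "open"}))
print(len(docs))
```

Zone sharding is a separate mechanism: it pins chunk ranges to tagged shards, rather than steering reads toward particular replica set members.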
{
"code": "",
"text": "Hi Kobe\ni mean secondary nodes on shards.\ncan you send me link for using zone?\nRegards.",
"username": "baki_sahin"
}
] | How to route read queries from spesific user to standby nodes on mongos? | 2023-04-26T08:24:32.534Z | How to route read queries from spesific user to standby nodes on mongos? | 650 |
null | [
"replication",
"connecting"
] | [
{
"code": "10.1.0.7 - mongodb-1.com (primary)\n10.1.0.8 - mongodb-2.com (secondary)\nnet:\n port: 27017\n bindIp: 0.0.0.0\n tls:\n mode: allowTLS\n certificateKeyFile: /etc/mongodb/certificates/mongodb.pem\nsecurity:\n authorization: enabled\n keyFile: /home/dbuser/repset\nreplication:\n replSetName: \"repset\"\n{\n _id: 'repset',\n version: 4,\n term: 3,\n members: [\n {\n _id: 0,\n host: '10.1.0.7:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 1,\n host: '10.1.0.8:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n }\n ],\n protocolVersion: Long(\"1\"),\n writeConcernMajorityJournalDefault: true,\n settings: {\n chainingAllowed: true,\n heartbeatIntervalMillis: 2000,\n heartbeatTimeoutSecs: 10,\n electionTimeoutMillis: 10000,\n catchUpTimeoutMillis: -1,\n catchUpTakeoverDelayMillis: 30000,\n getLastErrorModes: {},\n getLastErrorDefaults: { w: 1, wtimeout: 0 },\n replicaSetId: ObjectId(\"641aebeb4623f6fd0e700cf2\")\n }\n}\n",
"text": "I have 2 node setuphere is my mongod.confhere is my rs.conf()Now my proposed setup would be there are 2 application servers that will connect to this replica set. 1 is via private IP and the other is via DNS name. Now when I connect via private IP. it is working fine. I cant do the same when connecting via DNS name.",
"username": "sg_irz"
},
{
"code": "",
"text": "any help here guys? ",
"username": "sg_irz"
},
{
"code": "",
"text": "are you trying the two methods from the same computer or two separate ones in two separate networks? can you connect to any other service through DNS? it might be an issue from there. or the firewall setting might be preventing such a connection.you may spin up an HTTP server and try DNS on it first. (ngix?, apache? python or nodejs?). if connections on http ports success, then check MongoDB ports you have set up.by the way, 2-member replica set with these settings might also be the issue. even numbers are not recommended as primary selection event will not properly decide who to go.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Our ideal setup would be two methods from the same computer. Because we would be using 2 app servers 1 being with the same vnet as where our DB is and will communicate via private IP. while the other app server would be connecting to the DNS since the 2nd app server would be on premise.",
"username": "sg_irz"
},
{
"code": "/etc/hosts",
"text": "you might be confusing what is a DNS. it is just a way to connect to an IP address, but using a name such as “my.private.me”.the name and the address have to be set in somewhere on the chain of name resolvers. and in return, the target machine at the address must be accessible through any number of routers correctly with redirection.the simplest DNS setting would be changing the /etc/hosts file in the client machine, to an address that resolves immediately to an IP address of another machine on the same network.Have you put this simplest setup to test? this will tell you if you have any problems on the network level. if it succeeds then you can set the next level on the router you use if it has its own external IP (you will need port forwarding). else, it means you cannot connect at any level of name resolutions and you have to first check the firewall and related security settings of your servers.PS: VNET provides a common virtual network so you won’t see this mess and I think you should have a success at network level. But I could be wrong and your VNET setup might have already added required firewall setting for its own network. you may peek at its settings if you fail at network-level access.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "No I’m not confused at the DNS part. What I need was to enable TLS for my secondary app server (which is located on our onprem env) when communicating to my mongodb server\n\nimage1161×467 6.97 KB\nExcuse my diagram (correction on the connectionstring on the ON PREM side it should be replicaSet=repset)While not requiring it for my 1st app server which is in the same environment as my mongodb servers hence the allowTLS flag. and thus connectionstrings are using private IP only.Both my 2 mongodb nodes each have their LetsEncrypt cert.",
"username": "sg_irz"
},
{
"code": "",
"text": "LetsEncryptThe diagram is great but now I am confused with your word selections: have you solved the issue?I have another suggestion to try first, to identify a point of failure. If it is not tied too tight to your setup, remove certificate requirements from the config and restart your servers, then retry connection from the on-prem app. (you can stop these servers and create/run test servers for the purpose, tell me if you need that but can’t figure it out)If this fails, we can say it is on network settings. If you connect, then we can check about the use of the certificate.",
"username": "Yilmaz_Durmaz"
},
{
"code": "{\n _id: 'repset',\n version: 4,\n term: 3,\n members: [\n {\n _id: 0,\n host: '10.1.0.7:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 1,\n host: '10.1.0.8:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n }\n ],\n protocolVersion: Long(\"1\"),\n writeConcernMajorityJournalDefault: true,\n settings: {\n chainingAllowed: true,\n heartbeatIntervalMillis: 2000,\n heartbeatTimeoutSecs: 10,\n electionTimeoutMillis: 10000,\n catchUpTimeoutMillis: -1,\n catchUpTakeoverDelayMillis: 30000,\n getLastErrorModes: {},\n getLastErrorDefaults: { w: 1, wtimeout: 0 },\n replicaSetId: ObjectId(\"641aebeb4623f6fd0e700cf2\")\n }\n}\nmongodb://username:[email protected],mongodb-2.com/?authSource=admin&tls=true&replicaSet=repset\nmongodb://username:[email protected]/?authSource=admin&tls=true\nmongodb://username:[email protected]/?authSource=admin&tls=true\n",
"text": "My problem is that on my rs.conf() the host are pointed to the private IPs of each replicaset nodes.When the setup is like this. From my onprem servers, i could not connect to my replicaset using this connstring:but when I try to connect to either node 1 or node 2 as a standalone, it is working fine.",
"username": "sg_irz"
},
{
"code": "",
"text": "The fact that you can cannot connection with option replicaSet=repset and you can without it is explain in https://www.mongodb.com/docs/drivers/node/current/fundamentals/connection/connect/#connect-to-a-replica-set.With replicaSet=repset, once an initial connection to mongodb-1.com or mongodb-2.com is established, the driver reads the replica set configuration. It then tries to reconnect to all members of the replica set. In your case, you are specifying IPs in 10.x.x.x network which is part of the non globally routable networks. So you cannot connect.",
"username": "steevej"
},
{
"code": "",
"text": "but when I try to connect to either node 1 or node 2 as a standalone, it is working fine.nice, this eliminates the DNS problems, and the remaining possibility is your choice for the number of nodes, namely “2-members”.Your servers refuse to connect to the replica set because the configuration for the set is not something acceptable. You see, your servers have both the same level of priorities and they race against each other and vote for themselves to become the primary node, but that makes a tie and no primary is selected. hence there is no replica set running.try connecting with “mongo” or “mongosh” or “Compass”, and the node you connect should show “secondary”. it will show “primary” when one of them won the election.try changing their priority levels first to see if they will resolve with only 2 members. but the recommended setup is at least 3 members, 2 of them active, and 3rd is only there for voting (an arbiter, holds no data). there are many other probable settings including making one of them a low priority delayed node that will not be actively used (a passive backup).",
"username": "Yilmaz_Durmaz"
},
{
"code": "members: [\n {\n _id: 0,\n name: '10.1.0.7:27017',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 11860,\n optime: { ts: Timestamp({ t: 1682488120, i: 1 }), t: Long(\"5\") },\n optimeDurable: { ts: Timestamp({ t: 1682488120, i: 1 }), t: Long(\"5\") },\n optimeDate: ISODate(\"2023-04-26T05:48:40.000Z\"),\n optimeDurableDate: ISODate(\"2023-04-26T05:48:40.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-04-26T05:48:40.274Z\"),\n lastDurableWallTime: ISODate(\"2023-04-26T05:48:40.274Z\"),\n lastHeartbeat: ISODate(\"2023-04-26T05:48:40.318Z\"),\n lastHeartbeatRecv: ISODate(\"2023-04-26T05:48:40.267Z\"),\n pingMs: Long(\"1\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '10.1.0.10:27017',\n syncSourceId: 1,\n infoMessage: '',\n configVersion: 12,\n configTerm: 5\n },\n {\n _id: 1,\n name: '10.1.0.8:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 11928,\n optime: { ts: Timestamp({ t: 1682488120, i: 1 }), t: Long(\"5\") },\n optimeDate: ISODate(\"2023-04-26T05:48:40.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-04-26T05:48:40.274Z\"),\n lastDurableWallTime: ISODate(\"2023-04-26T05:48:40.274Z\"),\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1682476269, i: 1 }),\n electionDate: ISODate(\"2023-04-26T02:31:09.000Z\"),\n configVersion: 12,\n configTerm: 5,\n self: true,\n lastHeartbeatMessage: ''\n },\n {\n _id: 2,\n name: 'mongodb-1.com:27017',\n health: 0,\n state: 8,\n stateStr: '(not reachable/healthy)',\n uptime: 0,\n optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n optimeDate: ISODate(\"1970-01-01T00:00:00.000Z\"),\n optimeDurableDate: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastAppliedWallTime: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastDurableWallTime: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastHeartbeat: ISODate(\"2023-04-26T05:48:32.849Z\"),\n lastHeartbeatRecv: ISODate(\"1970-01-01T00:00:00.000Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: \"Couldn't get a connection within the time limit\",\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n configVersion: -1,\n configTerm: -1\n },\n {\n _id: 3,\n name: 'mongodb-2.com:27017',\n health: 0,\n state: 8,\n stateStr: '(not reachable/healthy)',\n uptime: 0,\n optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n optimeDate: ISODate(\"1970-01-01T00:00:00.000Z\"),\n optimeDurableDate: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastAppliedWallTime: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastDurableWallTime: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastHeartbeat: ISODate(\"2023-04-26T05:48:29.994Z\"),\n lastHeartbeatRecv: ISODate(\"1970-01-01T00:00:00.000Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: \"Couldn't get a connection within the time limit\",\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n configVersion: -1,\n configTerm: -1\n }\n ]\n",
"text": "hello @Yilmaz_Durmaz @steevejI have an update. Basically I tried to add the domain name to the rs members and here is my replicaset config now.Now when I try to connect from my app outside the VM network using this script. it is now working.mongodb://user:[email protected],mongodb-2.com/?authSource=admin&tls=true&replicaSet=repsetAnd also via my app within the network (thru private IP)mongodb://user:[email protected],10.1.0.8/?authSource=admin&tls=true&replicaSet=repsetMy question is if my replica set config is valid? I dont remember the link where I found to try to add each domain name to the replicaset.",
"username": "sg_irz"
},
{
"code": "stateStr: 'PRIMARY'stateStr: '(not reachable/healthy)'_idrs.stat()",
"text": "you have corrected some parts for your configuration so the set now has stateStr: 'PRIMARY' meaning main conditions to start are met.but some other parts are not working yet hence stateStr: '(not reachable/healthy)'from _id portions, you now seem somehow created “4 member” set, 2 IP and 2 DNS name, but only IP ones are reachable.I don’t think this was your intention anyway, and this output is collective info from within the server ( rs.stat()? ).so, can you share the “actual” config files for each member (remove sensitive info, if any)?",
"username": "Yilmaz_Durmaz"
},
{
"code": "rs.stat()rs.status{\n set: 'repset',\n date: ISODate(\"2023-04-26T08:03:44.377Z\"),\n myState: 2,\n term: Long(\"6\"),\n syncSourceHost: '10.1.0.7:27017',\n syncSourceId: 0,\n heartbeatIntervalMillis: Long(\"2000\"),\n majorityVoteCount: 2,\n writeMajorityCount: 2,\n votingMembersCount: 2,\n writableVotingMembersCount: 2,\n optimes: {\n lastCommittedOpTime: { ts: Timestamp({ t: 1682496214, i: 1 }), t: Long(\"6\") },\n lastCommittedWallTime: ISODate(\"2023-04-26T08:03:34.364Z\"),\n readConcernMajorityOpTime: { ts: Timestamp({ t: 1682496214, i: 1 }), t: Long(\"6\") },\n appliedOpTime: { ts: Timestamp({ t: 1682496224, i: 1 }), t: Long(\"6\") },\n durableOpTime: { ts: Timestamp({ t: 1682496224, i: 1 }), t: Long(\"6\") },\n lastAppliedWallTime: ISODate(\"2023-04-26T08:03:44.364Z\"),\n lastDurableWallTime: ISODate(\"2023-04-26T08:03:44.364Z\")\n },\n lastStableRecoveryTimestamp: Timestamp({ t: 1682496184, i: 1 }),\n electionParticipantMetrics: {\n votedForCandidate: true,\n electionTerm: Long(\"6\"),\n lastVoteDate: ISODate(\"2023-04-26T06:02:50.797Z\"),\n electionCandidateMemberId: 0,\n voteReason: '',\n lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1682488970, i: 1 }), t: Long(\"5\") },\n maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1682488970, i: 1 }), t: Long(\"5\") },\n priorityAtElection: 1,\n newTermStartDate: ISODate(\"2023-04-26T06:02:54.146Z\"),\n newTermAppliedDate: ISODate(\"2023-04-26T06:02:54.769Z\")\n },\n members: [\n {\n _id: 0,\n name: '10.1.0.7:27017',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 19964,\n optime: { ts: Timestamp({ t: 1682496214, i: 1 }), t: Long(\"6\") },\n optimeDurable: { ts: Timestamp({ t: 1682496214, i: 1 }), t: Long(\"6\") },\n optimeDate: ISODate(\"2023-04-26T08:03:34.000Z\"),\n optimeDurableDate: ISODate(\"2023-04-26T08:03:34.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-04-26T08:03:34.364Z\"),\n lastDurableWallTime: ISODate(\"2023-04-26T08:03:34.364Z\"),\n lastHeartbeat: ISODate(\"2023-04-26T08:03:43.485Z\"),\n lastHeartbeatRecv: ISODate(\"2023-04-26T08:03:44.069Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1682488970, i: 2 }),\n electionDate: ISODate(\"2023-04-26T06:02:50.000Z\"),\n configVersion: 12,\n configTerm: 6\n },\n {\n _id: 1,\n name: '10.1.0.8:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 20032,\n optime: { ts: Timestamp({ t: 1682496224, i: 1 }), t: Long(\"6\") },\n optimeDate: ISODate(\"2023-04-26T08:03:44.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-04-26T08:03:44.364Z\"),\n lastDurableWallTime: ISODate(\"2023-04-26T08:03:44.364Z\"),\n syncSourceHost: '10.1.0.7:27017',\n syncSourceId: 0,\n infoMessage: '',\n configVersion: 12,\n configTerm: 6,\n self: true,\n lastHeartbeatMessage: ''\n },\n {\n _id: 2,\n name: 'mongodb-1.com:27017',\n health: 0,\n state: 8,\n stateStr: '(not reachable/healthy)',\n uptime: 0,\n optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n optimeDate: ISODate(\"1970-01-01T00:00:00.000Z\"),\n optimeDurableDate: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastAppliedWallTime: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastDurableWallTime: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastHeartbeat: ISODate(\"2023-04-26T08:03:32.851Z\"),\n lastHeartbeatRecv: ISODate(\"1970-01-01T00:00:00.000Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: \"Couldn't get a connection within the time limit\",\n 
syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n configVersion: -1,\n configTerm: -1\n },\n {\n _id: 3,\n name: 'mongodb-2.com:27017',\n health: 0,\n state: 8,\n stateStr: '(not reachable/healthy)',\n uptime: 0,\n optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n optimeDate: ISODate(\"1970-01-01T00:00:00.000Z\"),\n optimeDurableDate: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastAppliedWallTime: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastDurableWallTime: ISODate(\"1970-01-01T00:00:00.000Z\"),\n lastHeartbeat: ISODate(\"2023-04-26T08:03:40.158Z\"),\n lastHeartbeatRecv: ISODate(\"1970-01-01T00:00:00.000Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: \"Couldn't get a connection within the time limit\",\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n configVersion: -1,\n configTerm: -1\n }\n ],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1682496224, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"ccd6e271e288862a6e4176fdcfa5d1055cf169c2\", \"hex\"), 0),\n keyId: Long(\"7216257184332513282\")\n }\n },\n operationTime: Timestamp({ t: 1682496224, i: 1 })\n}",
"text": "I don’t think this was your intention anyway, and this output is collective info from within the server ( rs.stat()? ).Yes, but somehow my actual use-case is now working lol. I just do not know the drawback in this.Here is my rs.status",
"username": "sg_irz"
},
{
"code": "",
"text": "Only 2 members should be in that replica set, not 4. I would suggest try fixing that.Before you can use a DNS name (FQDN or similar), you need to make sure the mapped ip is reachable from app2. Once it is, you will need to set up a DNS record for the domain name to ip address and make sure mongodb is listening on that domain name (bind_ip).I’m guessing you also need to use the same name when adding the member with rs.add or rs.initiate",
"username": "Kobe_W"
},
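A hedged sketch of what "use the same name" could look like in practice: replacing the member hosts in the replica set config with the DNS names via replSetReconfig, here driven from pymongo. Credentials and the URI are placeholders; the host names are the ones used elsewhere in this thread:

```python
# Hedged sketch only: run against the current primary; credentials/URI are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://admin:secret@10.1.0.7:27017/?authSource=admin&directConnection=true"
)

# Fetch the current config, swap the member hosts to DNS names, bump the version.
cfg = client.admin.command("replSetGetConfig")["config"]
cfg["members"][0]["host"] = "mongodb-1.com:27017"
cfg["members"][1]["host"] = "mongodb-2.com:27017"
cfg["version"] += 1
client.admin.command("replSetReconfig", cfg)
```

Both applications then use the DNS names in their connection strings, so those names have to resolve correctly from inside the vnet as well as from on-prem.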
{
"code": "",
"text": "Only 2 members should be in that replica set, not 4. I would suggest try fixing that.Yes. But my app2 would not be able to connect to the replicaset if the hostname in rs.conf() is pointing to 10.1.0.7 and 10.1.0.8. Now when I try to change its config to the domain name (mongodb-1.com, mongodb-2.com) my app1 would be using DNS and not via private IP (since they share the same vnet) which will affect its latency that’s why we prefer using that.Before you can use a DNS name (FQDN or similar), you need to make sure the mapped ip is reachable from app2. Once it is, you will need to set up a DNS record for the domain name to ip address and make sure mongodb is listening on that domain name (bind_ip).For this part, we do not have much control since we are using DNS of our cloud provider (Azure). Although this is reachable for my app2.I just want to have an understanding why is it that my current setup in rs.conf() is working (having 4 members, 2 private IP and 2 domain name) even if the 2 domain name members are in an unhealthy state.",
"username": "sg_irz"
},
{
"code": "",
"text": "This Link is similar to my post here if I cause any confusions.",
"username": "sg_irz"
}
] | How can I connect to my replicaset via DNS? | 2023-03-28T06:55:36.652Z | How can I connect to my replicaset via DNS? | 1,891 |
null | [
"queries"
] | [
{
"code": "",
"text": "Hi everyone,I recently purchased an Intel Arc laptop and I’m interested in using MongoDB for a project I’m working on. I’ve heard that MongoDB is a popular NoSQL database that can handle large amounts of data, but I’m not sure how it will perform on my new laptop.Has anyone here used MongoDB on an Intel Arc laptop before? How did it perform? Were there any issues or limitations I should be aware of?Also, I’m curious about the security features of MongoDB. I know that Oracle NoSQL DB provides enterprise-class security1, but what about MongoDB? Are there any best practices or tips for securing a MongoDB database?Thanks in advance for your help!Best regards,\nStella",
"username": "Stella_Gomez"
},
{
"code": "",
"text": "Hello @Stella_Gomez ,Welcome to The MongoDB Community Forums! Intel Arc is a GPU technology, which typically doesn’t affect the performance of a database, as far as I know. Can you please share your expectations here?To learn about MongoDB, I would recommend you to start from below resource available on MongoDB University. All the courses available here are free and also provide hands-on labs.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Moreover, MongoDB does provide enterprise level security, and additionally also have field level encryption and queryable encryption on top of the typical enterprise-level security features. To learn more about Security features in MongoDB, kindly referRegards,\nTarun",
"username": "Tarun_Gaur"
}
] | Using MongoDB on Intel Arc Laptops | 2023-04-25T06:17:38.079Z | Using MongoDB on Intel Arc Laptops | 621 |
[
"chennai-mug"
] | [
{
"code": "Principal Software Engineer @ ZoomInfoSenior Manager @ ZoomInfoMongoDB User Group Leader - Chennai",
"text": "\nChennai MUG: MongoDB Inaugural Kickoff Meetup1920×1083 86.4 KB\nChennai MongoDB User Group is thrilled to host its inaugural meetup on Saturday, April 29, 2023, at 10:30 AM, located at The Hive Co-Working Space. The gathering will feature two engaging presentations complete with demonstrations, entertaining games, a delightful lunch, and the opportunity to win some exciting swag! This meetup is designed for a diverse audience, including seasoned developers, architects, and startup founders. It’s the perfect setting for exchanging insights, showcasing case studies, and connecting with fellow community members. We invite you to join us for an enlightening and enjoyable day filled with learning, networking, and entertainment! To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are RSVPed. You need to be signed in to access the button.\n1670523937054512×512 47.6 KB\nPrincipal Software Engineer @ ZoomInfoSenior Manager @ ZoomInfo\nrishi360×510 83.3 KB\nMongoDB User Group Leader - ChennaiEvent Type: In-Person\nLocation: 6th Floor, The Hive Coworking Space, SRP Stratford, PTK Nagar, Kottivakkam, Chennai - 600041",
"username": "jeyaraj"
},
{
"code": "",
"text": "Excited to see MongoDB User Group in Chennai. Looking forward to it.",
"username": "Ahamed_Basha_N"
},
{
"code": "",
"text": "I registered the rvsp button, but I didn’t receive any confirmation mail for this event, is I successfully registered?",
"username": "Giri_Prasath"
},
{
"code": "",
"text": "Yes you are successfully registered if you see your name in the count of the RSVP. But I guess we will soon receive a confirmation email on that as well.",
"username": "Ahamed_Basha_N"
},
{
"code": "",
"text": "Hey @Giri_Prasath, As @Ahamed_Basha_N mentioned you are successfully registered if you see your name in the count of the RSVP.No confirmation email is triggered at the moment when you RSVP. We are working on it. However, a confirmation/reminder email would be sent out to you a couple of days before the event with all the details. ",
"username": "Harshit"
},
{
"code": "",
"text": "Hey Everyone,Thank you for confirming your interest in attending the Chennai, MongoDB User Group Event! this Saturday at 10:30 AM on the 6th Floor, The Hive Coworking Space, PTK Nagar, Kottivakkam. We are thrilled to have you all join us.We want to make sure everyone has a fantastic time, so please arrive on time at 10:30 AM to ensure you don’t miss any of the sessions, and we can all have some time to chat before the talks begin.There are a few essential things to keep in mind:If you have any questions, please don’t hesitate to ask by replying to this thread. If you face any issues in accessing the building particularly, please reach out to Ijas(+919446817131)/Jeyaraj(+918639214187)Looking forward to seeing you all at the event We can’t wait to see you all at the event! Thank you,",
"username": "Ijas_Ahamed"
},
{
"code": "",
"text": "Really looking forward to meeting you all!!",
"username": "manimuthuraj_T"
}
] | Chennai MUG: MongoDB Inaugural Kickoff Meetup | 2023-04-07T14:04:10.391Z | Chennai MUG: MongoDB Inaugural Kickoff Meetup | 2,870 |
|
[
"ops-manager"
] | [
{
"code": "",
"text": "AUTOMATION DOES NOT START IN OPSMANAGER AND I SEE ERROR.\nI have left more details, I would be glad if you could help. Then I get this error in the log.\n\nimage1031×636 38.6 KB\n\n\nautttt832×576 76.3 KB\n",
"username": "Ayberk_Cengiz"
},
{
"code": "",
"text": "Make sure you have the binaries (.tar.gz) for the version you are trying to deploy in the default OpsManager directory.",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "The binaries are available in the ops manager’s release-directory.",
"username": "Ayberk_Cengiz"
},
{
"code": "",
"text": "Hello LeandroThe problem is caused by the operating system. When I reinstalled ops manager in centos, the error did not appear. For now, ops manager has been automated.",
"username": "Ayberk_Cengiz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Automation cannot start | 2022-10-26T08:21:12.518Z | Automation cannot start | 2,193 |
|
null | [
"atlas-search"
] | [
{
"code": "[\n { \"item\": \"Pens\", \"quantity\": 350, \"tags\": [ \"school\", \"office\" ] },\n { \"item\": \"Erasers\", \"quantity\": 15, \"tags\": [ \"home\", \"school\" ] },\n { \"item\": \"Maps\", \"tags\": [ \"office\", \"storage\" ] },\n { \"item\": \"Books\", \"quantity\": 5, \"tags\": [ \"school\" ] },\n { \"item\": \"Another Maps\", \"quantity\": 5, \"tags\": [ \"storage\" ] }\n]\n \"tags\": {\n \"analyzer\": \"lucene.keyword\",\n \"indexOptions\": \"docs\",\n \"norms\": \"omit\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n{\n $search: {\n \"compound\": {\n \"should\": [\n \"queryString\": {\n \"defaultPath\": \"tags\",\n \"query\": \"school OR storage\"\n }]\n }\n }\n }\n",
"text": "Hi Team,What I am trying to do with Atlas search is essentially something similar to the $in query.So my documents are like:And my search index is configured as:And my query is like:Expected results:\nReturning all the results with the same scoreActual results:\nThere are two score groups, and records with “school” tag and records with “storage” tag are having different scores.(My actual data is a little different from the above, but the query is the same)If I want all those records having the same score, given that I am using an OR query, how would I change my configuration here?Thank you!",
"username": "williamwjs"
},
{
"code": "constant\"storage\"\"tags\"\"school\"\"tags\"tags[\n { tags: [ 'office', 'storage' ], score: 0.47005030512809753 },\n { tags: [ 'storage' ], score: 0.47005030512809753 },\n { tags: [ 'school', 'office' ], score: 0.2893940806388855 },\n { tags: [ 'home', 'school' ], score: 0.2893940806388855 },\n { tags: [ 'school' ], score: 0.2893940806388855 }\n]\n\"storage\"\"tags\"tags[\n[\n { tags: [ 'school', 'office' ], score: 0.3767103850841522 },\n { tags: [ 'home', 'school' ], score: 0.3767103850841522 },\n { tags: [ 'office', 'storage' ], score: 0.3767103850841522 },\n { tags: [ 'school' ], score: 0.3767103850841522 },\n { tags: [ 'storage' ], score: 0.3767103850841522 },\n { tags: [ 'storage', 'home' ], score: 0.3767103850841522 }\n]\n\"tagsindex\"constantvalue1db.tags.aggregate({\n \"$search\": {\n \"index\": \"tagsindex\",\n \"compound\": {\n \"should\": [{\n \"queryString\": {\n \"defaultPath\": \"tags\",\n \"query\": \"school OR storage\",\n \"score\": { \"constant\" : { \"value\" : 1} }\n }\n }\n ]\n }\n }\n},\n{\n \"$project\": {\n \"_id\": 0,\n \"tags\": 1,\n \"score\": { \"$meta\": \"searchScore\"}\n }\n})\n[\n { tags: [ 'storage' ], score: 1 },\n { tags: [ 'school' ], score: 1 },\n { tags: [ 'office', 'storage' ], score: 1 },\n { tags: [ 'home', 'school' ], score: 1 },\n { tags: [ 'school', 'office' ], score: 1 }\n]\n",
"text": "Hi @williamwjs,Have you tried using the constant scoring option? I believe the behaviour you’ve described in terms of the scoring is expected (at least from analyzing the sample documents). Based off your example, there are a total of 5 documents of which:As per the Score the Documents in the Results documentation:Many factors can influence a document’s score, including:In this particular case, I believe the frequency of occurrence is one of the main factors for why you are seeing the results having different scores even though they each only contain each of the terms once. Example below from my test environment based off your sample documents provided:5 documents total, for the tags array - 2 documents containing “storage”, 3 documents containing “school”:We can see here the first 2 documents have a higher score (probably what you are experiencing).Now, let’s add in another document that contains the \"storage\" value inside of the \"tags\" array and perform the same search.6 documents total, for the tags array - 3 documents containing “storage”, 3 documents containing “school”:We can see here that the scores are now all the same for this result set.(Test environment i’m using for this has the index named \"tagsindex\") - Reverting back to the original 5 documents, when using constant scoring value of 1:Output:Wondering if this would work for you / your use case and if the explanation above helps with the scoring differences you may be seeing.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "scoreDetails$search",
"text": "Also you may wish to check out Return the Score Details which is relatively new that provides the scoreDetails boolean option in your $search stage for a detailed breakdown of the score for each document in the query results.",
"username": "Jason_Tran"
},
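A small, hedged sketch of what requesting that breakdown can look like with pymongo, reusing the "tagsindex" index name and "tags" field from this thread; the connection details and database name are placeholders:

```python
# Hedged example: Atlas URI and db name are placeholders.
from pymongo import MongoClient

db = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")["test"]

pipeline = [
    {"$search": {
        "index": "tagsindex",
        "queryString": {"defaultPath": "tags", "query": "school OR storage"},
        "scoreDetails": True,  # ask Atlas Search to keep the scoring breakdown
    }},
    {"$project": {
        "_id": 0,
        "tags": 1,
        "score": {"$meta": "searchScore"},
        "scoreDetails": {"$meta": "searchScoreDetails"},
    }},
]

for doc in db["tags"].aggregate(pipeline):
    print(doc["tags"], doc["score"])
```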
{
"code": "\"storage\"\"tags\"tags[\n[\n { tags: [ 'school', 'office' ], score: 0.3767103850841522 },\n { tags: [ 'home', 'school' ], score: 0.3767103850841522 },\n { tags: [ 'office', 'storage' ], score: 0.3767103850841522 },\n { tags: [ 'school' ], score: 0.3767103850841522 },\n { tags: [ 'storage' ], score: 0.3767103850841522 },\n { tags: [ 'storage', 'home' ], score: 0.3767103850841522 }\n]\n",
"text": "Now, let’s add in another document that contains the \"storage\" value inside of the \"tags\" array and perform the same search.6 documents total, for the tags array - 3 documents containing “storage”, 3 documents containing “school”:We can see here that the scores are now all the same for this result set.@Jason_Tran Thank you for your reply!!!Interesting to see that by adding one more document, it could return all records with the same score. Do you know why that would make a difference here?",
"username": "williamwjs"
},
{
"code": "\"storage\"\"school\"\"storage\"\"school\"\"storage\"\"storage\"",
"text": "Hi @williamwjs Interesting to see that by adding one more document, it could return all records with the same score. Do you know why that would make a difference here?As per my previous reply:In this particular case, I believe the frequency of occurrence is one of the main factors for why you are seeing the results having different scores even though they each only contain each of the terms once.I believe in the example with 6 documents (3 docs containing \"storage\" and 3 docs containing \"school\"), the amount of docs matching \"storage\" is the same as the amount of docs matching \"school\" out of the total of 6 docs (3/6 vs 3/6). In the example with 5 docs, its (2/5 for “storage” and 3/5 for “school”).I believe you will then get a different set of results yet again if I had added another document with \"storage\" (total of 7 documents, 4 containing \"storage\").You can test to verify if you wish.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "[\n { tags: [ 'school' ], score: 0.44143030047416687 },\n { tags: [ 'home', 'school' ], score: 0.44143030047416687 },\n { tags: [ 'school', 'office' ], score: 0.44143030047416687 },\n { tags: [ 'storage' ], score: 0.3072332739830017 },\n { tags: [ 'office', 'storage' ], score: 0.3072332739830017 },\n { tags: [ 'storage' ], score: 0.3072332739830017 },\n { tags: [ 'storage', 'test' ], score: 0.3072332739830017 }\n]\n",
"text": "Tested it out with 7 docs just now :",
"username": "Jason_Tran"
},
{
"code": "\"score\": { \"constant\" : { \"value\" : 1} }{\n $search: {\n \"compound\": {\n \"should\": [\n \"near\": {\n \"path\": \"quantity\",\n \"origin\": 350,\n \"pivot\": 400\n },\n \"queryString\": {\n \"defaultPath\": \"tags\",\n \"query\": \"school OR storage\"\n }]\n }\n }\n }\n",
"text": "Also, one issue with \"score\": { \"constant\" : { \"value\" : 1} } is that, it would set the score to be the same for all the matching ones.But my intention is to do a search with multiple factors, for example:so that the result score is the combined factoring of the two “should” clauses here.\n(Let me know if this makes sense to you )Any suggestions on this?Thank you!",
"username": "williamwjs"
},
{
"code": "\"score\": { \"constant\" : { \"value\" : 1} }constantnearconstantqueryStringnearconstantqueryString[\n { quantity: 350, tags: [ 'school', 'office' ], score: 2 },\n {\n tags: [ 'office', 'storage' ],\n quantity: 50,\n score: 1.5714285373687744\n },\n {\n quantity: 15,\n tags: [ 'home', 'school' ],\n score: 1.5442177057266235\n },\n { quantity: 5, tags: [ 'school' ], score: 1.5369126796722412 },\n { quantity: 5, tags: [ 'storage' ], score: 1.5369126796722412 }\n]\n$searchdb.tags.aggregate({\n \"$search\": {\n \"index\": \"tagsindex\",\n \"compound\": {\n \"should\": [{\n \"near\": {\n \"path\": \"quantity\",\n \"origin\": 350,\n \"pivot\": 400\n }\n },{\n \"queryString\": {\n \"defaultPath\": \"tags\",\n \"query\": \"school OR storage\",\n \"score\": { \"constant\" : { \"value\" : 1} }\n }\n }\n ]\n }\n }\n},\n{\n \"$project\": {\n \"_id\": 0,\n \"tags\": 1,\n \"quantity\": 1,\n \"score\": { \"$meta\": \"searchScore\"}\n }\n})\n",
"text": "Also, one issue with \"score\": { \"constant\" : { \"value\" : 1} } is that, it would set the score to be the same for all the matching ones.But my intention is to do a search with multiple factorsAre you saying that the constant scoring option would cause the near to return the same results? I think you would just need the constant on the queryString portion and not the near operator. For example, output using constant only on queryString:The $search used for above:",
"username": "Jason_Tran"
},
{
"code": "",
"text": "You are right! It could reflect.\nInitially I set the constant value to be 10, thus, for some reason, all the results would have the same score. After changing to 1, it would work now.Thank you so much for all your help!!!",
"username": "williamwjs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to use Atlas Search to query in list of strings | 2023-04-27T01:43:13.184Z | How to use Atlas Search to query in list of strings | 674 |
null | [] | [
{
"code": "---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mongo-pvc\nspec:\n accessModes: [ReadWriteOnce]\n resources: { requests: { storage: 9Gi } }\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: mongodb\nspec:\n serviceName: mongodb\n replicas: 1\n selector:\n matchLabels:\n app: mongodb\n template:\n metadata:\n labels:\n app: mongodb\n selector: mongodb\n spec:\n volumes:\n - name: pvc\n persistentVolumeClaim:\n claimName: mongo-pvc\n containers:\n - name: mongodb\n image: mongo\n env:\n - name: MONGODB_INITDB_ROOT_USERNAME\n value: \"myuser\"\n - name: MONGODB_INITDB_ROOT_PASSWORD\n value: \"mypassword\"\n ports:\n - containerPort: 27017\n volumeMounts:\n - name: pvc\n mountPath: /data/db\n",
"text": "I’ve had a lot of trouble getting the operator working, but I really don’t need it. I only want a very simple setup where I have an app that needs to connect to a standalone mongodb instance. This works fine, but it does not enable any sort of authentication. I’ve tried adding MONGODB_INITDB_ROOT_USERNAME and MONGODB_INITDB_ROOT_PASSWORD but when I try to connect to the db using those credentials, it doesn’t find the user. Here’s my basic example yaml files:From everything I’ve seen, the presence of those env vars should cause it to create that user and password, but it doesn’t seem to work. And it doesn’t even enable authentication at all!\nI found some more examples that say I need to modify the command run inside the image, basically to make it run mongod --auth. This does successfully enable authentication, but doesn’t create any user, so it’s not helpful.Any help or guidance would be greatly appreciated. I’ve spent over a week trying dozens of things that everyone claims to work on various forums, and maybe they used to work in 2017 when they were written, but they don’t seem to work with the containers today.\nIt’s just a small db needed by a web app. I have no need of replicatsets, sharding, or any advanced features. I just want something simple without introducing operators, helm charts or the like.",
"username": "Miguel_Hernandez"
},
{
"code": "",
"text": "Hi @Miguel_Hernandez and welcome to MongoDB community forums!!The Kubernetes Secrets hold confidential information that pods need to access services.Therefore, you may have to include secrets in your deployment to store usernames and passwords.In case the above solution does not work, can you provide me with all the necessary YAML files to replicate the issue in my local environment?Regards\nAasawari",
"username": "Aasawari"
}
] | K8s statefulset auth not working | 2023-04-25T00:02:53.760Z | K8s statefulset auth not working | 600 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "",
"text": "As the title says, i would like to test my app using server less instance because I would like to see how much would it cost me to run my app for few days/weeks but without being charged.is this somehow possible?",
"username": "Nikola_Bozin"
},
{
"code": "",
"text": "Hey @Nikola_Bozin,Welcome to the MongoDB Community Forums! Currently, it isn’t possible to test serverless for free. That being said, you can upvote a similar feature request in the feedback engine.Regards,\nSatyam",
"username": "Satyam"
}
] | Is there a way i can test mongoDB server-less pricing for my app without actually being charged? | 2023-04-22T09:28:17.833Z | Is there a way i can test mongoDB server-less pricing for my app without actually being charged? | 432 |
[
"unity"
] | [
{
"code": "",
"text": "My son made a coding mistake in his Unity game code and was updating a game players information in MongoDB every frame. We fixed the issue in minutes but there was a large spike of 39k writes and the “Compute Runtime” reach 49 hours/s and exceeded the 500 hour per month limit in one 2 minute spike.\nimage777×271 23.6 KB\nMy question: Is there a way to cap costs on M0? It seems that a bug in coding could run into big bucks if gone unchecked. I would rather hit a cap that give my application an error then it auto-scale to absorb the load.Would an M2 instance also spike in price if “Compute Runtime” is exceeded?Thank you",
"username": "Jerry_C"
},
{
"code": "M0",
"text": "Hey @Jerry_C,Welcome to the MongoDB Community Forums! Please note that the bill you’re seeing is not from M0 but from App Services. All App Services Apps in a MongoDB Atlas project share a single monthly free tier. All usage below the free tier thresholds in a given month is not billed. As soon as a project exceeds any monthly free tier threshold, App Services starts billing for additional usage of any kind for that project.You can use billing alerts to help manage your billing quotas. Billing alerts notify a designated person when a bill has exceeded a USD limit, or when a credit card is about to expire.\nTo configure billing alerts, click Organization Alerts in the Organization view. Set up consumption alerts to track how much a project or team is spending. You’ll be notified via email whenever your specific trigger is set off.Hope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Thank you for the reply.Please note that I understand that if my son uses significant resources then we should be billed for it. We appreciate MongoDB providing a free tier for developers. My concern is that in our case all the resources ($14+) were used in under one minute. A simple coding bug had my son’s game writing out the players status to MongoDB every frame of the game. The game writes to MongoDB from a server and my son leaves the server running all time so others can test. If this bug had not been fixed within the development session I could be looking at a $500 or much more bill. Even with alerts I may not catch an issue until the next day.It is my current understanding that no matter what tier I choose (M0 or M1) my bill could be much higher than the free or $9.99 stated price if my application exceeds usage limits.I would like MongoDB to consider an optional cost cap (limit) setting that will prevent resources from getting used if the cap for the month is reached. The application would receive an error when the cap is reached. This would prevent a runaway application from running up huge charges.Please note my bill was $2 for a couple of days. Now it is $14 which is consistent with the cost of $10 per 500 hours of compute. The under a minute peak used 1.23K hours of compute.These graphs show one major and one other peak in usage:\n\nimage987×641 37.4 KB\nHere I hover over the compute runtime plot to see a peak usage of 38.8 min/s. At this compute rate for 1 hour the cost would be $20,000! I hope you see my concern.\nThanks again",
"username": "Jerry_C"
},
{
"code": "",
"text": "Hey @Jerry_C,There currently isn’t a feature where services are stopped upon hitting a certain billing threshold. In saying so, you can submit a feature request in our Feedback engine.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | I was charged $2.60 when using M0 (Forever Free Tier) | 2023-04-21T07:10:46.261Z | I was charged $2.60 when using M0 (Forever Free Tier) | 1,134 |
|
null | [] | [
{
"code": "",
"text": "Is there any difference in chunk migration performance between atlas and community edition?I am using the “community edition”, but the chunk migration performance is not up to my expectations, so I am trying to tune it.\nIt is introduced that “atlas” is capable of scale-out. Is the architecture or logic different from community?",
"username": "noisia"
},
{
"code": "",
"text": "following is my guess.Atlas is a managed mongodb service, community edition is a managed service, but by clients using it. So no difference.Chunk migration is part of core db logic, and i don’t think logic will be different between unpaid and paid version. Of course paid version will have some more powerful features, but chunk migration performance shouldn’t be one of them.",
"username": "Kobe_W"
}
] | Is there any difference in chunk migration performance between atlas and community edition? | 2023-04-26T10:55:39.970Z | Is there any difference in chunk migration performance between atlas and community edition? | 777 |
null | [] | [
{
"code": "Data Explorer operation for request ID's [XXXXXXXXXXXXXXXX] timed out after 45 secs Check your query and try again.STORAGE SIZE: 26.5MBLOGICAL DATA SIZE: 17.36MBTOTAL DOCUMENTS: 18INDEXES TOTAL SIZE: 36KB",
"text": "Hello there, I am currently facing the same issue as this post:I am currently receiving an error when I try to open/load one of the collections from my cluster:\nData Explorer operation for request ID's [XXXXXXXXXXXXXXXX] timed out after 45 secs Check your query and try again.I am not sure what the issue is, as it was loading properly moments ago.\nI have 3 other collections in the same cluster and they load properly.Here’s the stats of the cluster in question:\nSTORAGE SIZE: 26.5MB\nLOGICAL DATA SIZE: 17.36MB\nTOTAL DOCUMENTS: 18\nINDEXES TOTAL SIZE: 36KBThis cluster is using the “Shared Tier”.Any help/advice will help, please and thank you!EDIT:\nOn my live project, it took about 3 minutes and 18 seconds to finally load the data from that collection which is significantly longer than usual. Is there any insights on to why? Normally it would take at most 10~15 seconds to load.Thank you!",
"username": "Dong_Lee"
},
{
"code": "",
"text": "Hi @Dong_Lee , I am facing the a very similar situation. Have you found out what the issue was? I would like to understand the problem before there’s a major production meltdown…",
"username": "David_Lazar"
},
{
"code": "",
"text": "Hi @David_Lazar,Best to connect with the Atlas in-app chat support team if this happens again.However in saying so, you may wish to try if via MongoDB compass to see if the same behaviour is replicated. This information may be of use to the Atlas support team as well.Additionally, if it’s happening intermittently and you’re on a shared tier cluster (similar to the OP), it could possibly be that you have exceeded the Data Transfer Limits as per the Atlas M0 (Free Cluster), M2, and M5 Limitations documentation in some throttling occurs.Regards,\nJason",
"username": "Jason_Tran"
}
] | Data Explorer operation to browse Collections times out after 45s | 2022-11-08T22:34:06.287Z | Data Explorer operation to browse Collections times out after 45s | 1,172 |
null | [
"replication",
"storage"
] | [
{
"code": "",
"text": "Hello Everyone,I get frequent as below and due to this mongod instance was crashed. Could anyone please suggest in which case we print this message “WT_SESSION.checkpoint: attempt to remove non-exist\nent offset from an extent list: Invalid argument” because i din’t find in the error handling of wired-tiger Storage engine codebase .PS: We have plans to upgrade to higher version meanwhile we need to support the service till then.logs :2023-04-20T10:48:49.857-0500 E STORAGE [thread1] WiredTiger (22) [1682005729:857288][16553:0x7fc3fc297700], file:test_stats/index-46508–3641956376256280223.wt, WT_SESSION.checkpoint: attempt to remove non-exist\nent offset from an extent list: Invalid argumentMongoDB version 3.2.1\nOS - Centos 6\nTopology - Replica Set",
"username": "Vinay_Jaiswal"
},
{
"code": "corrupt:\n\tWT_BLOCK_RET(session, block, EINVAL,\n\t \"attempt to remove non-existent offset from an extent list\");\n}\n",
"text": "Hi @Vinay_JaiswalIn the 3.2.1 source, I found the exact string in this location: mongo/block_ext.c at r3.2.1 · mongodb/mongo · GitHubIt appears that this is a message displayed when there is a corruption detected in the data files.Since you’re using a replica set, if this happens in only one of the members, you might want to resync that member from a non-affected node. Maybe there’s a hardware issue on this particular node?However this is on MongoDB 3.2.1, which was out of support since September 2018. I would encourage you to upgrade to a supported version as soon as possible, since no bugfix will be done on this version anymore.Best regards\nKevin",
"username": "kevinadi"
}
] | Mongo Crashes due to "WT_SESSION.checkpoint: attempt to remove non-exist ent offset from an extent list | 2023-04-20T16:22:46.651Z | Mongo Crashes due to “WT_SESSION.checkpoint: attempt to remove non-exist ent offset from an extent list | 884 |
null | [
"python",
"compass",
"sharding",
"mongodb-shell"
] | [
{
"code": "",
"text": "Since home-brew is not supported on this version of macOS, I installed mongodb by downloading mongodb-macos-x86_64-6.0.5.tar; extracting; copying /bin/mongod (and mongos) to /usr/local/bin. I created /usr/local/etc/mongod.conf.I also downloaded mongosh-1.8.1-darwin-x64 and copied the bin folder to /usr/local/bin. I have started mongodb daemon (mongod) (either directly or via launchctl)When I started mongosh, the output includes:\nUsing MongoDB: 6.0.5\nUsing Mongosh: 1.8.1The prompt is “test>”. From mongosh, I enter the command: “use mydb” (and the prompt changes accordingly). But when I then execute “show dbs” only the system ones (admin, config, local).I have the similar problem using python/pymongo and compass.I’ve installed mongo on another macOS ventura without any problems and everything works very well. I’m just stumped why I’m having this problem on catalina. Any help would be appreciated. tia.",
"username": "rollin_rollin"
},
{
"code": "show dbstest> show dbs\nadmin 8.00 KiB\nconfig 12.00 KiB\nlocal 8.00 KiB\n\ntest> use mydb\nswitched to db mydb\n\nmydb> show dbs //note that there's no \"mydb\" here\nadmin 8.00 KiB\nconfig 12.00 KiB\nlocal 8.00 KiB\n\nmydb> db.mycoll.insertOne({})\n{\n acknowledged: true,\n insertedId: ObjectId(\"6449c8158ef002c5093eae9f\")\n}\n\nmydb> show dbs //note that it appears after I did a write\nadmin 8.00 KiB\nconfig 12.00 KiB\nlocal 8.00 KiB\nmydb 8.00 KiB\nmydb/usr/local/etc/mongod.confdb.adminCommand({getCmdLineOpts: 1})",
"text": "Hi @rollin_rollin welcome to the community!If you’re just changing the database without writing anything to it, it’s not created and will not show up in the output of show dbs. For example, I started a fresh install of MongoDB 6.0.5 locally:Note that mydb only shows up after I write to a collection under it.This is expected behaviour, but if you’re seeing something different (e.g. an error os something), then please provide more details such as:Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thanks very much for info and test. Very stupid on my part; but thanks of providing the test and explanation.",
"username": "rollin_rollin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't create db after installing mongodb 6.0.5 on macOS catalina 10.15.7 | 2023-04-26T20:18:35.110Z | Can’t create db after installing mongodb 6.0.5 on macOS catalina 10.15.7 | 734 |
null | [
"crud"
] | [
{
"code": "",
"text": "I do not like to tag people directly but in this case I felt it was important.So @Asya_Kamsky, @Tarun_Gaur, @Jan_de_Wilde, @Hasan_Kumar, @Takis, @MaBeuLux88, @Daniel_Coupal, @kevinadi, @Jason_Tran please forgive me for tagging you. I am forgetting a lot of other people I enjoy reading so do not feel less appreciated if you are not in the list.Please read the following 2 threads and provide your always insightful input.For the record, I have been preaching the array approach.Let’s the battle begin.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @steevejThanks for all the contributions you made to the forum! I see that there’s no taker for this yet, so let me start with a small thought of my own.It’s very difficult to say which method is superior vs another when we’re talking about schema design. As I’m sure you’re aware, generally in MongoDB the schema design follows the use case, and not the other way around like in most SQL data design. Over there, you decide how your data will be stored, and later figure out how to query all those connected tables into a single entity that then you can use in your app.In contrast, MongoDB allows you to store an entity into a single document, speeding up the query process, and the flexible schema model allow you to store differently shaped documents in a single collection. This gives you the flexibility to change if the earlier design doesn’t work, or there are changes in your requirements in the future.What I would advocate in the most general terms is that: how will the schema help simplify your workflow, and at the same time, allow you to create indexes that make those workloads run faster?In some cases, arrays are the obvious choice, and in other cases, embedded documents is the way to go. One may find that using arrays simplify one use case, but complicates others (and vice versa), and thus the onus is on the user to determine the right balance that will satisfy all workloads, while still being able to perform well using indexes.One example is using the attribute pattern when you have varying field names. Since normal indexes in MongoDB requires a static field name (not counting wilcard indexes which is another discussion), it’s difficult to create a good index if your documents are varied like that. Thus, using the attribute pattern in an array can be considered. Note that I’m not saying it’s the best thing to do, because it depends There are other patterns as well, listed in the Building with patterns series of articles that may be interesting for some use cases. Ultimately I feel that MongoDB gives you this flexibility in choosing how to model your data as you see fit, so your database design can be super specialized & customized for the workload.Best regards\nKevin",
"username": "kevinadi"
}
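For readers following the attribute-pattern point in the reply above, a hedged pymongo sketch of what that shape looks like in practice; all names and values here are illustrative, not taken from the threads being debated:

```python
# Hedged illustration of the attribute pattern: varying keys stored as {k, v}
# pairs in an array so one multikey index covers them all.
import pymongo
from pymongo import MongoClient

products = MongoClient()["shop"]["products"]

products.insert_one({
    "name": "widget",
    # A "map" shape would be {"color": "red", "size": "XL"}; attribute pattern instead:
    "attributes": [
        {"k": "color", "v": "red"},
        {"k": "size", "v": "XL"},
    ],
})

# One compound multikey index serves queries on any attribute key/value pair.
products.create_index([("attributes.k", pymongo.ASCENDING), ("attributes.v", pymongo.ASCENDING)])

found = products.find_one({"attributes": {"$elemMatch": {"k": "color", "v": "red"}}})
print(found)
```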
] | Pros and cons of map vs array | 2023-04-20T13:52:22.991Z | Pros and cons of map vs array | 981 |
null | [] | [
{
"code": "# network interfaces\nnet:\n tls:\n FIPSMode: true\n port: 27017\n# bindIp: 127.0.0.1\n bindIp: 0.0.0.0\nUnrecognized option: net.tls.FIPSMode\ntry '/usr/bin/mongod --help' for more information\ntest> use admin\nswitched to db admin\nadmin> db.version()\n5.0.14\nadmin> db.getSiblingDB(\"admin\").runCommand({getCmdLineOpts: 1}).parsed.net.tls.FIPSMode\ntrue\nadmin>\n# network interfaces\nnet:\n tls:\n FIPSMode: true\n mode: requireTLS\n certificateKeyFile: /etc/ssl/mongodb.pem\n CAFile: /etc/ssl/ca.pem\n allowConnectionsWithoutCertificates: false\n",
"text": "We are moving to v6 on rhel9 from Community v5 on rhel8. This parameter is fine on community. What am I missing here?Here is part of the v6 config:Error logCommunity version v5Comminuty v5 config",
"username": "Eric_Barberan"
},
{
"code": "",
"text": "FIPSMode is documented as an Enterprise Edition only feature.Surprising that the option worked on 5.0.x, if the v6 is Community Edition I would say this is working as expected.",
"username": "chris"
},
{
"code": "[root@]# yum list installed | grep -i mongo\nmongodb-database-tools.x86_64 100.6.1-1 @@commandline\nmongodb-mongosh.x86_64 1.6.2-1.el8 @@commandline\nmongodb-org.x86_64 5.0.14-1.el8 @@commandline\nmongodb-org-database.x86_64 5.0.14-1.el8 @@commandline\nmongodb-org-database-tools-extra.x86_64 5.0.14-1.el8 @@commandline\nmongodb-org-mongos.x86_64 5.0.14-1.el8 @@commandline\nmongodb-org-server.x86_64 5.0.14-1.el8 @@commandline\nmongodb-org-shell.x86_64 5.0.14-1.el8 @@commandline\nmongodb-org-tools.x86_64 5.0.14-1.el8 @@com\n",
"text": "Thanks! Yeah it’s odd to be working on 5.0.14 version.",
"username": "Eric_Barberan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unrecognized option: net.tls.FIPSMode | 2023-04-24T15:35:09.695Z | Unrecognized option: net.tls.FIPSMode | 653 |
null | [
"java",
"crud",
"atlas-cluster",
"transactions"
] | [
{
"code": "Caused by: com.mongodb.MongoCommandException: Command failed with error 251 (NoSuchTransaction): 'Given transaction number 4 does not match any in-progress transactions. The active transaction number is 3' on server cluster0-shard-00-02.*****.mongodb.net:27017. The full response is {\"errorLabels\": [\"TransientTransactionError\"], \"ok\": 0.0, \"errmsg\": \"Given transaction number 4 does not match any in-progress transactions. The active transaction number is 3\", \"code\": 251, \"codeName\": \"NoSuchTransaction\", \"$clusterTime\": {\"clusterTime\": {\"$timestamp\": {\"t\": 1682465400, \"i\": 14}}, \"signature\": {\"hash\": {\"$binary\": {\"base64\": \"vqwdne/1kG8A2qNJ4nsqFsx+6V0=\", \"subType\": \"00\"}}, \"keyId\": 7187829697743421442}}, \"operationTime\": {\"$timestamp\": {\"t\": 1682465400, \"i\": 14}}}\n ClientSession clientSession = _mongoClient.startSession();\n clientSession.startTransaction(\n TransactionOptions.builder().writeConcern(WriteConcern.MAJORITY).maxCommitTime(1L, TimeUnit.MINUTES).build());\n return /* A code block to async calling many _mongoCollection.updateOne(clientSession, filter, document) */\n .map(success -> {\n session.commitTransaction();\n return (Void) null;\n }).onFailure(t -> {\n session.abortTransaction();\n });\nclientSession",
"text": "Hi Team,We are frequently encountering the below error message when trying to transactionally update documents together:We are using mongodb java package org.mongodb:mongodb-driver-core:4.9.1And the example code is as following:Although we did not do a retry in the above logic, I am just wondering why it would hit an issue with transaction ID mismatch, given that the clientSession is passed in the mongo update?Thank you!",
"username": "williamwjs"
},
{
"code": "ClientSessionupdateOnebulkWrite",
"text": "ClientSession instances can’t be used concurrently like it appears you’re doing. Trying doing one updateOne at a time, or else batch them using the bulkWrite method.",
"username": "Jeffrey_Yemin"
},
{
"code": "/* A code block to async calling many _mongoCollection.updateOne(clientSession, filter, document) */",
"text": "/* A code block to async calling many _mongoCollection.updateOne(clientSession, filter, document) */it may help a bit if you van paste the code for this part and/or other related info. (indeed, the clientSession instance is not thread-safe, however without more info on request handling, hard to say if you use multi threading or not).",
"username": "Kobe_W"
},
{
"code": "",
"text": "@Jeffrey_Yemin @Kobe_W Thank you for your reply!\nYeah, my code involves multi-threads to update the mongo data across multiple collections.One more question: May I ask if I could synchronize the clientSession to make it working here?",
"username": "williamwjs"
},
{
"code": "bulkWrite",
"text": "Yes, but in that case it’s not really worth it to do anything asynchronously. Better to just loop in a single thread and not worry about synchronization. But do consider batching updates using bulkWrite. The latency savings of that can be substantial.",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Yeah make sense!! Thank you for your suggestions!",
"username": "williamwjs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Given transaction number does not match any in-progress transactions with Mongo Java | 2023-04-25T23:58:07.011Z | Given transaction number does not match any in-progress transactions with Mongo Java | 1,191 |
[
"montreal-mug"
] | [
{
"code": "Solution Architect - MongoDBTechnical Lead - Altitude Sports",
"text": "\nMontreal Banner960×540 42.3 KB\nThe Montréal MongoDB User Group is excited to host its inaugural meetup in Montreal!Make sure you join the Montréal Group to introduce yourself and stay abreast with future meetups and discussions. To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button. Have meetup.com? You can also RSVP for this event there.Sessions will be presented in English, with bilingual Q&AAltitude Sports Corporate Office\n90 Rue Beaubien O #601 A,\nMontréal, QC H2S 1V6Solution Architect - MongoDB–Technical Lead - Altitude Sports",
"username": "Nestor_Daza"
},
{
"code": "",
"text": "Hi all - Thanks for attending the MUG! I’m sharing @Benoit_Lacharite’s deck here. He has added a Q&A section based on the discussion at the MUG.Data Modeling 101",
"username": "Veronica_Cooley-Perry"
}
] | Montréal Inaugural Meetup | 2023-03-23T15:08:25.646Z | Montréal Inaugural Meetup | 2,339 |
|
[] | [
{
"code": "",
"text": "{ message: “Unexpected Error” }We’re sorry, there is a problem with your organization.Please contact our support team at [email protected] for assistance.What is this error when I try to login to the service?",
"username": "Sung_K"
},
{
"code": "",
"text": "Hello @Sung_K ,Welcome to The MongoDB Community Forums! Thanks for posting your issue.Service errors like this are often transient so retrying in a few minutes may resolve the error without intervention.If an Atlas service error persists or further investigation is needed, please contact the Atlas support team directly via the Chat bubble at the lower right of your Atlas UI or by opening a support case if you have a support plan. The support team should be able to provide more insight into any errors or issues that are affecting your clusters.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Error: We're sorry, there is a problem with your organization | 2023-04-21T06:08:21.648Z | Error: We’re sorry, there is a problem with your organization | 336 |
|
null | [
"time-series"
] | [
{
"code": "",
"text": "I was reading about the time-series collection. I am confused about a few terms, can someone please help me to understand?So time series will work in this scenario? Will it create a single collection always for different types of sensor data OR every time a new collection will be created? how to keep common metadata for each sensor without repeating its value in data fields?\nSince mongo allow the size of a document up to 16MB, then does it takes care of splitting data?",
"username": "Prasanna_Sasne"
},
{
"code": "splitting data",
"text": "Hello @Prasanna_Sasne ,A time-series collection in MongoDB is a special type of collection that is optimized for storing time-series data. Time-series data is typically data that is captured over time and has a timestamp associated with each data point. Examples include sensor data, log data, and financial market data.Looking at the use case you shared, I think the time-series collection would be a good fit for storing sensor data at different temperatures, with a timestamp associated with each data point. You can create a single TS collection to hold all the sensor data with metadata fields, such as sensor name and city, as document fields.Will it create a single collection always for different types of sensor data OR every time a new collection will be created?It is not necessary to do so as you can store all the data in a single collection. Internally, the data is stored in a bucket format based on its metadata and timestamp. By default, each bucket can store up to 1000 documents. As you add data from different sensors, internally the bucket will be created for each sensor based on its metadata and timestamp.Please refer to the linked post to understand the bucketing pattern in MongoDB’s Time-Series collections.how to keep common metadata for each sensor without repeating its value in data fields?It is common practice to include metadata fields, such as sensor name and city, as fields within each document when storing data. I hope this addresses your query, but please let me know if you require any additional clarification.Since mongo allow the size of a document up to 16MB, then does it takes care of splitting data?The maximum BSON document size is 16 megabytes and it’s still applicable. Can you please clarify what you meant by splitting data here?For more information, you can refer to the below resourcesRegards,\nTarun",
"username": "Tarun_Gaur"
},
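To make the bucketing discussion above concrete, here is a minimal mongosh sketch of a time-series collection with per-sensor metadata (all names and values are hypothetical):

```js
// Create a time-series collection; metaField holds the common per-sensor metadata
// so it does not have to be repeated as separate measurement fields.
db.createCollection("sensorReadings", {
  timeseries: { timeField: "timestamp", metaField: "metadata", granularity: "minutes" }
})

// Each measurement carries its timestamp, the metadata object, and the reading itself.
db.sensorReadings.insertOne({
  timestamp: new Date(),
  metadata: { sensorName: "thermo-1", city: "Pune" },
  temperature: 23.7
})

// Filtering on metadata lets the server work bucket-by-bucket internally.
db.sensorReadings.find({ "metadata.sensorName": "thermo-1" })
```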
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Timeseries collection | 2023-04-21T06:58:39.898Z | Timeseries collection | 953 |
[
"react-native"
] | [
{
"code": "peopleslugplatformskeyvalue\"react-native\": \"0.70.6\",\n\"realm\": \"^11.3.1\",\nconst getCollaboratorsByPlatform = (platformName, selectedLocationId) => {\n return getPeoples()\n .filtered(\n `roles.slug LIKE \"Collaborator\" \n AND roles.platforms LIKE $0\n AND extras.key LIKE \"collaborator_registration_data_location_id\"\n AND extras.values.value LIKE $1`,\n platformName,\n selectedLocationId\n )\n .sorted('created_at', true);\n };\n{\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"_partitionKey\": {\n \"bsonType\": \"string\"\n },\n \"active\": {\n \"bsonType\": \"bool\"\n },\n \"birthdate\": {\n \"bsonType\": \"date\"\n },\n \"country_id\": {\n \"bsonType\": \"string\"\n },\n \"created_at\": {\n \"bsonType\": \"date\"\n },\n \"created_by\": {\n \"bsonType\": \"objectId\"\n },\n \"deleted_at\": {\n \"bsonType\": \"date\"\n },\n \"emails\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"type\": {\n \"bsonType\": \"string\"\n },\n \"value\": {\n \"bsonType\": \"string\"\n },\n \"verified\": {\n \"bsonType\": \"bool\"\n }\n }\n }\n },\n \"externals\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"platform\": {\n \"bsonType\": \"string\"\n },\n \"platform_id\": {\n \"bsonType\": \"string\"\n },\n \"registered_at\": {\n \"bsonType\": \"date\"\n }\n }\n }\n },\n \"extras\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"key\": {\n \"bsonType\": \"string\"\n },\n \"updated_at\": {\n \"bsonType\": \"date\"\n },\n \"values\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"coll_name\": {\n \"bsonType\": \"string\"\n },\n \"value\": {\n \"bsonType\": \"string\"\n },\n \"value_id\": {\n \"bsonType\": \"objectId\"\n },\n \"value_type\": {\n \"bsonType\": \"string\"\n }\n }\n }\n }\n }\n }\n },\n \"first_name\": {\n \"bsonType\": \"string\"\n },\n \"full_name\": {\n \"bsonType\": \"string\"\n },\n \"ids\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"country_id\": {\n \"bsonType\": \"objectId\"\n },\n \"expiration_at\": {\n \"bsonType\": \"date\"\n },\n \"issued_at\": {\n \"bsonType\": \"date\"\n },\n \"type\": {\n \"bsonType\": \"string\"\n },\n \"url\": {\n \"bsonType\": \"string\"\n },\n \"value\": {\n \"bsonType\": \"string\"\n }\n }\n }\n },\n \"last_name\": {\n \"bsonType\": \"string\"\n },\n \"middle_name\": {\n \"bsonType\": \"string\"\n },\n \"nationality_id\": {\n \"bsonType\": \"objectId\"\n },\n \"person_type\": {\n \"bsonType\": \"string\"\n },\n \"phones\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"calling_code\": {\n \"bsonType\": \"string\"\n },\n \"phone_number\": {\n \"bsonType\": \"string\"\n },\n \"type\": {\n \"bsonType\": \"string\"\n },\n \"verified\": {\n \"bsonType\": \"bool\"\n }\n }\n }\n },\n \"relationships\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"person_id\": {\n \"bsonType\": \"objectId\"\n },\n \"type\": {\n \"bsonType\": \"string\"\n }\n }\n }\n },\n \"roles\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"platforms\": {\n \"bsonType\": 
\"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"slug\": {\n \"bsonType\": \"string\"\n }\n }\n }\n },\n \"second_last_name\": {\n \"bsonType\": \"string\"\n },\n \"status\": {\n \"bsonType\": \"string\"\n },\n \"updated_at\": {\n \"bsonType\": \"date\"\n }\n },\n \"required\": [\n \"_id\",\n \"first_name\",\n \"active\",\n \"created_by\",\n \"created_at\",\n \"_partitionKey\"\n ],\n \"title\": \"people\"\n}\n",
"text": "Hello,I’m new posting in this forum. I hope you can help me, guys. I have this bug about a query. It says that a comparison cannot be done with objectId type, but this schema called people with these attributes slug, platforms, key, value are not objectId type, they are all string type.I’m usingThis is the functionAnd this is the schema definitionI’ll be thankful if anyone can help me with this issue.\nimage386×765 49.7 KB\n",
"username": "Carlos_Ivan_Montesinos_Munoz"
},
{
"code": "selectedLocationId$1",
"text": "What is the type of the selectedLocationId variable? It looks like that is being provided as the value for the $1 placeholder in the query, and based on the name it seems like maybe that’s not a string, which could be the cause of the error.",
"username": "mpobrien"
},
{
"code": "",
"text": "Thanks a lot. Sometimes the type of selectedLocationId was objectdId and string.",
"username": "Carlos_Ivan_Montesinos_Munoz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Comparison error with LIKE but it doesn't have objectIds | 2023-04-05T23:52:26.373Z | Comparison error with LIKE but it doesn’t have objectIds | 1,109 |
|
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "Hi,I finished my first mongo function. I want to know where I call this function so I can access the data it calls. Would I call it in sagas.js? Or directly into my code where I need it called? Anything helps!Thanks,-Josh",
"username": "Josh_Stout"
},
{
"code": "",
"text": "Hi @Josh_Stout,A function can be called from verious places. If you specify its as private it can only be called internally from your hosted application components like other functions, rules or triggers.If you specify it as NOT private it can be called via one of our SDKs used in your client codehttps://docs.mongodb.com/realm/functions/call-a-function/#call-from-a-client-applicationYou can also build functions to interact with data via http or other services\nLet me know if that helps.Pavel",
"username": "Pavel_Duchovny"
},
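For reference, a minimal sketch of the client-side call with the Realm Web SDK (the app id and function name are placeholders, and anonymous auth is just one of the available options):

```js
import * as Realm from "realm-web";

async function callMyFunction() {
  const app = new Realm.App({ id: "<your-app-id>" }); // placeholder app id
  const user = await app.logIn(Realm.Credentials.anonymous());
  // Call a non-private function by name, passing arguments as usual.
  return user.functions.myFunction("arg1", 42); // placeholder function name
}

callMyFunction().then(console.log).catch(console.error);
```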
{
"code": "",
"text": "Hello,Could you tell me please how can a function be called from other functions internally?",
"username": "Vladimir"
},
{
"code": " // To call other named functions:\n // var result = context.functions.execute(\"function_name\", arg1, arg2);\n",
"text": "You can find an example of it in the placeholder function content when creating one via the UI:",
"username": "Jan_Schwenzien"
},
{
"code": "",
"text": "Thank you so much, I have completely missed that line!",
"username": "Vladimir"
}
] | How to call a MongoDB function | 2020-07-30T16:54:41.320Z | How to call a MongoDB function | 2,988 |
null | [
"node-js"
] | [
{
"code": "",
"text": "I don’t know if anyone (official) from MongoBD are here, but I wish you had RSS feed one some parts of your website, I’ve found that some of the articles there are really interesting and useful to read. E.g.: Nodejs | MongoDB\nThanks!",
"username": "Ader_Chox"
},
{
"code": "",
"text": "Hi there! We are here. I’m the product leader for our documentation and Developer Center. This seems like a reasonable request, so I will put it in the queue. Thanks for reaching out!Rachelle",
"username": "Rachelle"
}
] | Add RSS feed to website articles: Suggestion | 2023-04-26T15:15:39.189Z | Add RSS feed to website articles: Suggestion | 793 |
null | [
"queries",
"python"
] | [
{
"code": "for histarticlenumber in location[\"inarticle\"]:\n mdbfilter = {\"articleid\": histarticlenumber}\n projection = {\"articletitle\": 1, \"dateEdiISO\": 1, \"_id\": 0}\n articledoc = artcoll.find_one(mdbfilter, projection)\n if articledoc is not None:\n edidate = datetime2yyyymmddstr(articledoc[\"dateEdiISO\"])\n arturiref = articles_ns[\n edidate + \"-\" + slugify(articledoc[\"articletitle\"])\n ]\nTraceback (most recent call last):\n File \"/home/bob/code/mema/src/utilities/transcoders/mdb2rdf_loc.py\", line 117, in <module>\n main()\n File \"/home/bob/code/mema/src/utilities/transcoders/mdb2rdf_loc.py\", line 98, in main\n loc_graph, locmentions_graph = build_two_rdf_graphs(\n File \"/home/bob/code/mema/src/utilities/transcoders/mdb2rdf_loc.py\", line 40, in build_two_rdf_graphs\n for location in loccoll.find(mdbfilter):\n File \"/home/bob/.cache/pypoetry/virtualenvs/mema-NHKGf2LO-py3.10/lib/python3.10/site-packages/pymongo/cursor.py\", line 1248, in next\n if len(self.__data) or self._refresh():\n File \"/home/bob/.cache/pypoetry/virtualenvs/mema-NHKGf2LO-py3.10/lib/python3.10/site-packages/pymongo/cursor.py\", line 1188, in _refresh \n self.__send_message(g)\n File \"/home/bob/.cache/pypoetry/virtualenvs/mema-NHKGf2LO-py3.10/lib/python3.10/site-packages/pymongo/cursor.py\", line 1052, in __send_message\n response = client._run_operation(\n File \"/home/bob/.cache/pypoetry/virtualenvs/mema-NHKGf2LO-py3.10/lib/python3.10/site-packages/pymongo/_csot.py\", line 105, in csot_wrapper\n return func(self, *args, **kwargs)\n File \"/home/bob/.cache/pypoetry/virtualenvs/mema-NHKGf2LO-py3.10/lib/python3.10/site-packages/pymongo/mongo_client.py\", line 1330, in _run_operation\n return self._retryable_read(\n File \"/home/bob/.cache/pypoetry/virtualenvs/mema-NHKGf2LO-py3.10/lib/python3.10/site-packages/pymongo/_csot.py\", line 105, in csot_wrapper\n return func(self, *args, **kwargs)\n File \"/home/bob/.cache/pypoetry/virtualenvs/mema-NHKGf2LO-py3.10/lib/python3.10/site-packages/pymongo/mongo_client.py\", line 1448, in _retryable_read\n return func(session, server, sock_info, read_pref)\n File \"/home/bob/.cache/pypoetry/virtualenvs/mema-NHKGf2LO-py3.10/lib/python3.10/site-packages/pymongo/mongo_client.py\", line 1326, in _cmd\n return server.run_operation(\n File \"/home/bob/.cache/pypoetry/virtualenvs/mema-NHKGf2LO-py3.10/lib/python3.10/site-packages/pymongo/server.py\", line 134, in run_operation\n _check_command_response(first, sock_info.max_wire_version)\n File \"/home/bob/.cache/pypoetry/virtualenvs/mema-NHKGf2LO-py3.10/lib/python3.10/site-packages/pymongo/helpers.py\", line 179, in _check_command_response\n raise CursorNotFound(errmsg, code, response, max_wire_version)\npymongo.errors.CursorNotFound: cursor id 7243068530859290909 not found, full error: {'ok': 0.0, 'errmsg': 'cursor id 7243068530859290909 not found', 'code': 43, 'codeName': 'CursorNotFound'}\n",
"text": "Have a little python/pymongo program that essentially does the following:This morning found it had broken during the night with a CursorNotFound error.I am confused since I am using find_one, which returns a Document (or None), not a cursor.What’s happening? ThanksHere’s the whole trace:",
"username": "Robert_Alexander"
},
{
"code": " File \"/home/bob/code/mema/src/utilities/transcoders/mdb2rdf_loc.py\", line 40, in build_two_rdf_graphs\n for location in loccoll.find(mdbfilter):\n# Retry twice.\nfor i in range(3):\n try:\n for location in loccoll.find(mdbfilter):\n process(location)\n break\n except CursorNotFound:\n if i == 2:\n raise\n continue\n",
"text": "Hi @Robert_Alexander, you are correct that CursorNotFound is not possible in a simple find_one(). CursorNotFound can happen when the cursor is closed on the server side while the client is still in the middle of reading the results. One potential cause is that the server itself was restarted.From the trackback I see that the error is coming from a find() call, not a find_one():One way to make your application robust to this type of error is to catch CursorNotFound errors and retry the find():",
"username": "Shane"
},
{
"code": "",
"text": "One way to make your application robust to this type of error is to catch CursorNotFound errors and retry the find():Normally what can cause this exception?I would actually prefer focusing more on “how to avoid/minimize that CursorNotFound exception” rather than “retry when that exception happens”. That’s because processing data takes time, especially with a large data set from a find query. Better to be done only once and successfully.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks Shane and Kobe.As a matter of fact I had overlooked that the find_one() happened within a find() loop !!!While I do understand what Kobe says, I am handling really messy data from an historical archive and some fields can a) not exists at all, b) exist but their content is null, c) exists but content is an empty string (unless I still habe to discover new ways of data inflicting pain on the poor programmer ;)).So try/except with a logging of the problem while not ideal is good enough for me.Thanks a lot",
"username": "Robert_Alexander"
},
{
"code": "with client.start_session() as mdbsession:\n # Get the collection from the database\n mdatab = client[cfgdata[\"mongodb\"][\"mongodbname\"]]\n loccoll = mdatab[cfgdata[\"mongodb\"][\"loccoll\"]]\nwith loccoll.find({}, no_cursor_timeout=True, session=mdbsession).batch_size(\n 100\n ) as cursor:\n",
"text": "FWIW I went through your suggestions but it wasn’t enough.What seems to be working so far was a multiple approach:a) Declare a sessionb) Loop with optionsc) manage the try/except as aboveWill discover tomorrow morning if all of this was enough ! Fingers crossed!",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "Thanks for the updates here. Is it possible your original code was keeping a cursor idle for longer than 30 minutes? If so you are likely running into this issue: https://www.mongodb.com/docs/manual/reference/method/cursor.maxTimeMS/#session-idle-timeout-overrides-maxtimemsAs the page explains, one workaround is to create an explicit session (start_session) and periodically refresh the session to make sure the cursor is not discarded due to inactivity.Another workaround is to lower the cursor batch size so that the cursor is iterated more frequently.If this is indeed the issue then you may wait to follow DRIVERS-1602 which tracks solving this issue.",
"username": "Shane"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"641ddac2685f9e1cbbc7324b\"\n },\n \"placename\": \"Fruitvale Station\",\n \"geonamesid\": 5351154,\n \"inarticle\": [\n \"2003287499\",\n \"2003224697\",\n \"2003224545\",\n \"2003219236\",\n \"2003288015\"\n ],\n \"timestamp\": {\n \"$date\": \"2023-04-12T14:23:21.241Z\"\n }\n}\n# setup MDB connection to WP articles\nclient = MongoClient(cfgdata[\"mongodb\"][\"mongouri\"])\n # Start an explicit mongodb session\n with client.start_session() as mdbsession:\n # Get the collection from the database\n mdatab = client[cfgdata[\"mongodb\"][\"mongodbname\"]]\n loccoll = mdatab[cfgdata[\"mongodb\"][\"loccoll\"]]\n artcoll = mdatab[cfgdata[\"mongodb\"][\"artcoll\"]]\nmentionsgraph = build_mentions_graph(\n mdbsession=mdbsession, loccoll=loccoll, artcoll=artcoll, logger=logger\n )\ndef build_mentions_graph(\n mdbsession: ClientSession,\n loccoll: Collection,\n artcoll: Collection,\n logger: logging.Logger,\n) -> Graph:\nwith loccoll.find({}, no_cursor_timeout=True, session=mdbsession).batch_size(\n 100\n ) as cursor: # this ensures cursor is closes when finished using it and limits batches\n for location in cursor:\n requests.get(HEALTHCHECKS_PING, timeout=10)\n geonames_slug = slugify(location[\"placename\"])\n loc_uriref = locations_ns[geonames_slug]\n for errcounter in range(3):\n try:\n for histarticlenumber in location[\"inarticle\"]:\n mdbfilter = {\"articleid\": int(histarticlenumber)}\n projection = {\n \"articletitle\": 1,\n \"dateEdiISO\": 1,\n \"articleid\": 1,\n \"_id\": 0,\n }\n articledoc = artcoll.find_one(mdbfilter, projection)\n if articledoc is not None:\n #### perform my graph building ####\n break\n except CursorNotFound as nocursor:\n logger.error(\n \"CursorNotFound %s while processing %s for article %s\",\n nocursor,\n location[\"placename\"],\n histarticlenumber,\n )\n if errcounter == 2:\n raise\n continue\nreturn mygraph\n",
"text": "On the brink of tears Went to bed hopeful I had nailed it only to find that while I was sleeping my program failed yet another time.Thanks for your hints, I believe they will help me, but I’ll try to clarify the structure of my data and program to see if our ideas are the right ones. Back to the basics My python 3.10/pymongo prog is perusing a “locations” collection with mentions of geographical places. This “loccoll” collection is 36.337 documents with the following structure:Please note that each of the 36K loccoll documents has a ‘inarticle’ field with an array of strings. In the example above this only has 5 entries, but some of them might have 20-30.000 entries (meaning that certain places such as ‘Germany’ are mentioned in 25.000 news articles identified by the inarticle field item).The second collection used in this program is ‘artcoll’ and holds around 120K documents that represent news articles of which we’re interested only about three of its fields “articleid” (matching those “2003288015” strings from the loccoll documents), “title” and “dateEdiISO” a date object for that article’s publication.The following code in my main() defines the connection to MDB and the 'loccoll\" (and ‘artcoll’ more about the latter in a short while) collections :and after a few lines main() calls the function I need to reconcile the locations (from the ‘loccoll’ collection) with the news articles (from the ‘artcoll’ collection) as follows:The mentionsgraph function is the workhorse function and the one that dies perhaps after a few hours of running This is its declaration:This sessions codes as follows:So my suspicion is that the program is looping as expected through the loccoll collection, but when it stumbles on a location that has tens of thousand of ‘inarticle’ strings and spends time fetching each one (to retrieve the title and publication date) from the artcoll collection then the cursor of the loccoll gets killed and I wake up with the program having being killed.Any further suggestions?I really appreciate your support. thanks a lot.",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "Update: fingers crossed. Been 5 hours and the prog is still running.I guess what did hopefully make the difference is the refresh of the session every 10 minutes within the innermost loop (which goes through the 'inarticle\" Array of article ids).Again thanks a bunch!",
"username": "Robert_Alexander"
},
{
"code": " articles = list(artcoll.find({\"articleid\": {\"$in\": [int(i) for i in location[\"inarticle\"]]}}, projection))\n",
"text": "@Robert_Alexander it sounds like you should look into using $lookup to have the server join the documents. This would likely be many, many times faster than performing the lookup in Python using find_one:Another way would be to perform a find() with all the inarticle ids (although still not as good as $lookup):You’ll also want to ensure you have an index on “articleid” to make sure the query is fast.",
"username": "Shane"
},
{
"code": "",
"text": "Thanks a lot Shane. As of now the code works albeit slowish (but it is computing 36K * 128K documents) Will look into your suggestions to speed up things.Take care and ciao from Rome, Italy",
"username": "Robert_Alexander"
},
{
"code": "def build_per_mentions_graph(\n mdbsession: ClientSession,\n sessiontime: datetime,\n mdbsessionid,\n percoll: Collection,\n logger: logging.Logger,\n) -> Graph:\n \"\"\"\n Input the PER MDB collection\n Build the mentions graph from its data\n \"\"\"\n mentionsgraph = Graph()\n with percoll.aggregate(\n [\n {\"$unwind\": {\"path\": \"$inarticle\", \"preserveNullAndEmptyArrays\": True}},\n {\"$addFields\": {\"inarticle_int\": {\"$toInt\": \"$inarticle\"}}},\n {\n \"$lookup\": {\n \"from\": \"art\",\n \"let\": {\"inarticle_int\": \"$inarticle_int\"},\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\"$eq\": [\"$articleid\", \"$$inarticle_int\"]}\n }\n },\n {\"$match\": {\"title\": {\"$exists\": True, \"$ne\": \"\"}}},\n {\n \"$project\": {\n \"_id\": 0,\n \"title\": 1,\n \"dateEdiISO\": 1,\n \"articleid\": 1,\n }\n },\n ],\n \"as\": \"art_info\",\n }\n },\n ]\n ).batch_size(100) as cursor:\n for person in cursor:\n if (\n datetime.now() - sessiontime\n ).total_seconds() > 600: # does the MDBsession needs to be refreshed?\n logger.info(\"Refreshing MDB session\")\n mdbsession.client.admin.command(\"refreshSessions\", [mdbsessionid])\n sessiontime = datetime.now()\n ### do my conversions on the person object here ###\n return mentionsgraph\n",
"text": "The problem was that the article id in the art collection is an int32 while in the per collection is a string.So the following pipeline works well in returning what I need, but the problem is that now the cursor seems to timeout even though my code should prevent it:this function receives a MongoDB session object with its timestamp but apparently after the first 100 persons in the iteration it pauses then resumes but person is not pointing to valid objects so the code dies with exceptions.",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Pymongo find_one in a loop broke with CursorNotFound | 2023-04-22T06:44:47.032Z | Pymongo find_one in a loop broke with CursorNotFound | 1,296 |
null | [
"cxx",
"c-driver"
] | [
{
"code": "",
"text": "Hello! The MongoDB Developer Experience team is running a survey for C and C++ driver users, and we’d love to hear from you! The survey will take about 5 minutes to complete. We’ll be using the feedback from this survey to help us improve the installation experience and fix pain points that developers have when getting started with the MongoDB C and C++ drivers.If you use or are considering using the MongoDB C and/or C++ driver, please give use feedback via the survey here: https://forms.gle/z63hDXJ6mqLmBAat5",
"username": "Kyle_Kloberdanz"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB C and C++ Driver Survey | 2023-04-26T14:56:45.287Z | MongoDB C and C++ Driver Survey | 1,250 |
null | [
"queries",
"node-js"
] | [
{
"code": "async function getCollectionsNamesFromAtlas(){\n try {\n await client.connect() ;\n let arrayOfCollectionsInfoObjects = await client.db('test').listCollections().toArray();\n arrayOfCollectionsNamesAndDocsInThem =arrayOfCollectionsInfoObjects.map( ({name})=> name) /* name of the collection*/\n .map(async (collectionName)=> {\n console.log(`collectionName maped is : ${collectionName}`); // array\n\n \n let cursorOfDocsInEachCollection = client.db('test').collection(collectionName).find({}, {projection:{ _id: 0 }});\n console.log(cursorOfDocsInEachCollection) // prints the cursor\n let docsInEachCollection = await cursorOfDocsInEachCollection.toArray();\n console.log(docsInEachCollection)//MongoExpiredSessionError\n // return {collectionName ,docsInEachCollection}\n // })\n // console.log( arrayOfCollectionsNamesAndDocsInThem);\n })\n}\n catch(error){console.log(error)}\n finally { await client.close()}\n}\n",
"text": "I have been trying to solve the problem with the following code for 2 days now with no success , the code gives me the error ‘MongoExpiredSessionError: Cannot use a session that has ended’ :why I am getting this error ?",
"username": "Ali_M_Almansouri"
},
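One plausible reading of the snippet above (an assumption, not a confirmed diagnosis): the async callbacks passed to .map() return promises that are never awaited, so the finally block closes the client before the inner find() calls run. A minimal sketch of awaiting them before closing, meant to slot into the existing async function:

```js
// Inside getCollectionsNamesFromAtlas(), before the finally block closes the client:
const infos = await client.db('test').listCollections().toArray();
const results = await Promise.all(
  infos.map(async ({ name }) => {
    const docs = await client.db('test')
      .collection(name)
      .find({}, { projection: { _id: 0 } })
      .toArray();
    return { collectionName: name, docs };
  })
);
console.log(results);
// Only after this point: await client.close();
```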
{
"code": "",
"text": "Any help please ? I’m still stuck !!",
"username": "Ali_M_Almansouri"
}
] | MongoExpiredSessionError when I use cursor.toArray()? | 2023-04-25T19:36:43.424Z | MongoExpiredSessionError when I use cursor.toArray()? | 439 |
null | [
"java",
"change-streams",
"spring-data-odm",
"kotlin"
] | [
{
"code": "",
"text": "I was just wondering if anyone can help me with my question. I want to use Change Streams to observe my collection if any change (CRUD) has occurred, then send a message to rabbitmq broker (including fields id and market). Notice: I’m using a Spring Boot application with Kotlin. I did not find any similiar solutions so far. Thank you in advance!",
"username": "Max_W"
},
{
"code": "",
"text": "Hello Max: @wan has suggested this could be nice example in Java for you to get started, but easy enough that you can port in Kotlin.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "\nimage1920×640 53.4 KB\n",
"username": "Max_W"
},
{
"code": "try (MongoClient mongoClient = MongoClients.create(uri)) {\n\n MongoDatabase database = mongoClient.getDatabase(\"sample_mflix\");\n MongoCollection<Document> collection = database.getCollection(\"movies\");\n\n List<Bson> pipeline = Arrays.asList( Aggregates.match( Filters.in(\"operationType\", \n Arrays.asList(\"insert\", \"update\"))));\n ChangeStreamIterable<Document> changeStream = database.watch(pipeline)\n .fullDocument(FullDocument.UPDATE_LOOKUP);\n // variables referenced in a lambda must be final; final array gives us a mutable integer\n final int[] numberOfEvents = {0};\n changeStream.forEach(event -> {\n System.out.println(\"Received a change to the collection: \" + event);\n if (++numberOfEvents[0] >= 2) {\n System.exit(0);\n }\n });\n }\n",
"text": "Oops!",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "\nBild 21.04.23 um 11.511914×1242 254 KB\n",
"username": "Max_W"
},
{
"code": "",
"text": "\nBild 26.04.23 um 16.24 (1)1374×610 79.7 KB\n",
"username": "Max_W"
},
{
"code": "",
"text": "\nBild 26.04.23 um 16.241766×888 172 KB\n",
"username": "Max_W"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Change Streams Spring Boot Application (language: Kotlin) | 2023-04-20T13:43:58.219Z | MongoDB Change Streams Spring Boot Application (language: Kotlin) | 1,466 |
null | [
"node-js",
"atlas"
] | [
{
"code": "",
"text": "Hello, I’m following How To Use MERN Stack: A Complete Guide | MongoDB step-by-step and I don’t get the message “Successfully connected to MongoDB.”\nWhen I copy the “server/node_modules” folder from the referenced git repository (GitHub - mongodb-developer/mern-stack-example: Mern Stack code for the Mern Tutorial), it does work and I do get the message.\nDoes the step-by-step guide omit the installation of a package? Which would that be and how would I install it by hand?",
"username": "53ac1da52812ebe8d38b9c0f38be5d9"
},
{
"code": "server/node_modules",
"text": "Hello @53ac1da52812ebe8d38b9c0f38be5d9,Welcome to the MongoDB Community forums I don’t get the message “Successfully connected to MongoDB.”Could you please let us know the error message you are receiving?When I copy the “server/node_modules” folder from the referenced git repository (GitHub - mongodb-developer/mern-stack-example: Mern Stack code for the Mern Tutorial), it does work and I do get the message.I cannot find the server/node_modules folder in the given git repository. Could you please share the link to the folder that you copied?Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hi, I get no error message. I only see “Server is running on port: 5000” when in the guide it says that I should seeServer is running on port: 5000\nSuccessfully connected to MongoDB.Regarding your second question, apologies, I have to be more precise: when I set up a project with the code from the repo, I move to its “server” folder and type “npm install mongodb express cors dotenv” That creates a server/nodes_modules folder. When I then copy that server/nodes_module folder into my project that I built following the step-by-step guide, it works and shows “Server is running on port: 5000 Successfully connected to MongoDB.”To summarize:I build a project following the step-by-step guide → “node server.js” shows only “Server is running on port: 5000”I build a project using the code from the repository, install the packages in its “server” folder with “npm install mongodb express cors dotenv”, copy the server/node_modules folder from there into the project that was built following the step-by-step guide and replace its server/node_modules folder → “node server.js” shows “Server is running on port: 5000 Successfully connected to MongoDB.”Why is the server/node_modules folder different when I build it in the project with the repo and what is the difference?",
"username": "53ac1da52812ebe8d38b9c0f38be5d9"
},
{
"code": "\"mongodb\": \"^3.6.6\"\"mongodb\": \"^5.3.0\"[email protected]_modulepackage.json\"mongodb\": \"^4.16.0\"callbackPromiseasync/awaitdb/conn.jsasync/awaitconst { MongoClient } = require(\"mongodb\");\nconst Db = process.env.ATLAS_URI;\nconst client = new MongoClient(Db, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nlet _db;\n\nmodule.exports = {\n connectToServer: async function () {\n try {\n const db = await client.connect();\n // Verify we got a good \"db\" object\n if (db) {\n _db = db.db(\"employees\");\n console.log(\"Successfully connected to MongoDB.\");\n }\n return _db;\n } catch (err) {\n throw err;\n }\n },\n\n getDb: function () {\n return _db;\n },\n};\nroutes/record.js",
"text": "Hello @53ac1da52812ebe8d38b9c0f38be5d9,Thanks for sharing the detailed explanation.The cause for this issue is that the repository uses an older version of the MongoDB node.js driver, specifically, version \"mongodb\": \"^3.6.6\". On the other hand, when you create your own project and install the MongoDB node.js driver package, it installs the latest version, which is \"mongodb\": \"^5.3.0\".However, the latest version is incompatible with the code written in the tutorial because it uses the callback function which is deprecated in the new MongoDB node.js driver 5.0.0.Reference: Changelog - node-mongodb-native.If you want to follow the tutorial and build the project successfully, you have to use MongoDB version [email protected] or a lower version.To resolve this problem and make your project functional, you must delete the node_module directory and modify the MongoDB version in your package.json file to \"mongodb\": \"^4.16.0\" or a lower version.Another workaround is to use the latest node.js driver version and modify the functions that use a callback by switching to the Promise and async/await syntax instead.As an example, you can update the content of the db/conn.js file to incorporate async/await. Sharing code snippet for your reference:Similarly, you can modify the function in routes/record.js.I hope this helps. If you have any further questions, please let us know.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
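For reference, a minimal example of what the pinned dependency entry in server/package.json could look like before deleting node_modules and reinstalling (only the mongodb entry is shown; the other dependencies stay as they are):

```json
{
  "dependencies": {
    "mongodb": "^4.16.0"
  }
}
```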
{
"code": "",
"text": "Hi @Kushagra_Kesav ,I pasted your code for db/conn.js into the project I created according to the guide on the website and it worked! I see the message “Successfully connected to MongDB.” now. Thank you for your help.You mention to also change the code in routes/records.js. I have tried to understand how to do that, but since I’m new to this topic, I wasn’t sure how. What changes would I make? Or do you have a resource that explains (to beginners) how they would go about it?",
"username": "53ac1da52812ebe8d38b9c0f38be5d9"
},
{
"code": "",
"text": "Yo Kushi,\nI just tried something:This is neither here, nor there… just experimenting and thought I’d write it up here. No clue how to change routes/record.js.",
"username": "53ac1da52812ebe8d38b9c0f38be5d9"
},
{
"code": "routes/record.jsrecordRoutes.route(\"/record\").get(function (req, res) {\n let db_connect = dbo.getDb(\"employees\");\n db_connect\n .collection(\"records\")\n .find({})\n .toArray(function (err, result) {\n if (err) throw err;\n res.json(result);\n });\n});\nrecordRoutes.route(\"/record\").get(async function (req, res) {\n try {\n const db_connect = await dbo.getDb(\"employees\");\n const result = await db_connect.collection(\"records\").find({}).toArray();\n res.json(result);\n } catch (err) {\n throw err;\n }\n});\n",
"text": "Hello @53ac1da52812ebe8d38b9c0f38be5d9,No clue how to change routes/record.js.since I’m new to this topic, I wasn’t sure how. What changes would I make?The routes/record.js file contains five functions that handle CRUD operations. All of these functions are written as callback functions, as follows:To use the latest version of the MongoDB Node.js driver, it’s recommended to convert these callback functions to use async/await syntax. Here’s an example of how to convert the above function:You’ll need to convert all five functions in the same pattern to be compatible with the latest Node.js driver module.do you have a resource that explains (to beginners) how they would go about it?Please refer to the documentation to learn the fundamentals of Promises and Callback.I hope this is helpful! Let us know if you have any questions.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "require('dotenv').config()\nconst express = require('express')\nconst mongoose = require('mongoose')\n\nconst app = express()\napp.use(express.json())\n\napp.get('/api/pat', (req, res) => {\n\tres.json({msg: \"build a restful api with nodejs + express + mongodb\"})\n})\n\nconst port = process.env.PORT || 5000\napp.listen(port, () => {\n\tconsole.log('Server is running on port', port)\n})\n\n\n// Connect MongoDB\nconst mongoURI = process.env.MONGODB_URL\n\n",
"text": "I don’t get the message → “Successfully connected to MongoDB.”this message has nothing to do with routing, and controllers.Look this example:",
"username": "Patrick_Biyaga"
},
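A hedged completion of the connection step the snippet above stops short of (variable names follow the snippet; the success log mirrors the tutorial's message):

```js
mongoose
  .connect(mongoURI)
  .then(() => console.log('Successfully connected to MongoDB.'))
  .catch((err) => console.error(err))
```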
{
"code": "const express = require(\"express\");\n \n// recordRoutes is an instance of the express router.\n// We use it to define our routes.\n// The router will be added as a middleware and will take control of requests starting with path /record.\nconst recordRoutes = express.Router();\n \n// This will help us connect to the database\nconst dbo = require(\"../db/conn\");\n \n// This help convert the id from string to ObjectId for the _id.\nconst ObjectId = require(\"mongodb\").ObjectId;\n \n \n// This section will help you get a list of all the records.\nrecordRoutes.route(\"/record\").get(async function (req, res) {\n try {\n const db_connect = await dbo.getDb(\"employees\");\n const result = await db_connect.collection(\"records\").find({}).toArray();\n res.json(result);\n } catch (err) {\n throw err;\n }\n});\n \n// This section will help you get a single record by id\nrecordRoutes.route(\"/record/:id\").get(async function (req, res) {\n try {\n const db_connect = await dbo.getDb();\n const myquery = { _id: ObjectId(req.params.id) };\n const result = await db_connect.collection(\"records\").findOne(myquery);\n res.json(result);\n } catch (err) {\n throw err;\n }\n});\n \n// This section will help you create a new record.\nrecordRoutes.route(\"/record/add\").post(async function (req, res) {\n try {\n const db_connect = await dbo.getDb();\n const myobj = {\n name: req.body.name,\n position: req.body.position,\n level: req.body.level,\n };\n const result = await db_connect.collection(\"records\").insertOne(myobj);\n res.json(result);\n } catch (err) {\n throw err;\n }\n});\n \n// This section will help you update a record by id.\nrecordRoutes.route(\"/update/:id\").post(async function (req, res) {\n try {\n const db_connect = await dbo.getDb();\n const myquery = { _id: ObjectId(req.params.id) };\n const newvalues = {\n $set: {\n name: req.body.name,\n position: req.body.position,\n level: req.body.level,\n },\n };\n const result = await db_connect.collection(\"records\").updateOne(myquery, newvalues);\n console.log(\"1 document updated\");\n res.json(result);\n } catch (err) {\n throw err;\n }\n});\n \n// This section will help you delete a record\nrecordRoutes.route(\"/:id\").delete(async function (req, res) {\n try {\n const db_connect = await dbo.getDb();\n const myquery = { _id: new ObjectId(req.params.id) };\n const result = await db_connect.collection(\"records\").deleteOne(myquery);\n console.log(\"1 document deleted\");\n res.json(result);\n } catch (err) {\n throw err;\n }\n});\n\n \nmodule.exports = recordRoutes;\n",
"text": "Who marked this as the solution? Shouldn’t I be the judge of that?At any rate: it’s the solution! I changed routes/record.js. Check this out:Started the server, started the frontend, and it works.\nThe “Create Record” link moved which makes me slightly uneasy, but I’ll deal with it later on. The functionality seems to be there.Thank you for your help!",
"username": "53ac1da52812ebe8d38b9c0f38be5d9"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MERN Stack Tutorial Guide Doesn't Work | 2023-04-20T08:51:16.730Z | MERN Stack Tutorial Guide Doesn’t Work | 1,965 |
null | [
"queries",
"crud",
"compass"
] | [
{
"code": "No array filter found for identifier 'match' in path 'tournament.currentOngoigMatches.$[match].options.$[option].votes'\n",
"text": "THIS WAS MY MONGODB JASON“_id”: {\n“$oid”: “64482a7242564473bac925d5”\n},\n“tournament”: {\n“currentRound”: 1,\n“previousOngoigMatches”: ,\n“currentOngoigMatches”: [\n{\n“round”: 1,\n“totalvotes”: 0,\n“options”: [\n{\n“optionId”: “64482a7242564473bac925d7”,\n“text”: “que2”,\n“image”: “https://”,\n“votes”: 0,\n“_id”: {\n“$oid”: “64482a7242564473bac925da”\n}\n},\n{\n“optionId”: “64482a7242564473bac925d6”,\n“text”: “que1”,\n“image”: “https://”,\n“votes”: 0,\n“_id”: {\n“$oid”: “64482a7242564473bac925db”\n}\n}\n],\n“_id”: {\n“$oid”: “64482a7242564473bac925d9”\n}\n}\n],\n“totalRounds”: 1\n},THIS WAS MY QUERY I USINGconst data = await Polls.findOneAndUpdate({\n“tournament.currentOngoigMatches._id”: new ObjectId(“64482a7242564473bac925d9”),\n“tournament.currentOngoigMatches.options.optionId”: ‘64482a7242564473bac925d7’\n},\n{\n$inc: {\n“tournament.currentOngoigMatches.$[match].options.$[option].votes”: 1\n}\n},\n{\narrayFilters: [\n{\n“match._id”: new ObjectId(“64482a7242564473bac925d9”)\n},\n{\n“option.optionId”: “64482a7242564473bac925d7”\n}\n],\nupsert: true\n})THIS IS THE VERSIONS I USING\nmongodb - v6.0.1\nmongodb compass - 1.36.3DONT KNOW WHY AM GETTING THIS ERROR -IS THERE ANYONE CAN HELP ME TO SOLVED THIS ISSUE IT WOULD BE A GREAT HELP !\nTHANKS IN ADVANCE",
"username": "Anas_siddiqui"
},
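For readers hitting the same error, a minimal self-contained mongosh example of how each $[identifier] in the update path must be matched by an entry in arrayFilters (collection and field names here are hypothetical, not taken from the documents above):

```js
db.students.insertOne({ _id: 1, grades: [ { id: "a", score: 80 }, { id: "b", score: 90 } ] })

db.students.updateOne(
  { _id: 1 },
  { $inc: { "grades.$[g].score": 5 } },   // $[g] refers to the "g" identifier below
  { arrayFilters: [ { "g.id": "a" } ] }   // every identifier used in the path needs a filter here
)
```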
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and update your sample documents so that we can experiment with them.",
"username": "steevej"
}
] | No array filter found for identifier 'match' in path 'tournament.currentOngoigMatches.$[match].options.$[option].votes' | 2023-04-26T10:54:46.680Z | No array filter found for identifier ‘match’ in path ‘tournament.currentOngoigMatches.$[match].options.$[option].votes’ | 1,134 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hello and a happy and hopefully healthy 2022 to you all! I am despirately trying to work with dates with timezone support in Realm functions. Unfortunately I always get the error “Intl is not defined”, meaning the node environment is lacking support of DateTime standard functions.I`ve tried different polyfills via npm as a workaround, but without any luck.Is there a security concern, which led you to disable / not support Intl in the functions environment or could this maybe be enabled in a future version of the functions?Intl seems like a fairly common package and is supported by most browsers and by node since v6. What I’ve tried so far:If the issue could not be resolved, I’d have to look for other alternatives (lambda, netlify, etc.), but that would contradict the value of the MongoDB eco system. Especially since other packages are working just fine.Could you please help me or point me in the right direction?Thanks in advance!\nBest\nMathias",
"username": "Mathias_Gerdt"
},
{
"code": "",
"text": "Hello @Mathias_Gerdt,Welcome to Community Forums If the issue could not be resolved, I’d have to look for other alternatives (lambda, netlify, etc.), but that would contradict the value of the MongoDB eco system. Especially since other packages are working just fine.We have an ongoing project for supporting timezones. Could you please share a code snippet that could help us know what you are trying to do?I look forward to your response.Kind Regards,\nHenna",
"username": "henna.s"
},
{
"code": "const {DateTime} = require('luxon');\nlet dt = DateTime.now().setZone('Europe/Berlin').toISO();\n",
"text": "Hello Henna,\nthank you for your reply!A basic example is this:We are trying to calculate time differences of JS date objects across timezones, because we are building a rather complex crew scheduling system.Talking about the ongoing project, will there be a major change or is there maybe even a quick fix? Maybe another underlying node environment with Intl support as flag enabled?We would like to use the luxon library in realm functions, maybe there’s another workaround for timezone support?Thanks in advance! Looking forward to hearing from you.\nBest\nMathias",
"username": "Mathias_Gerdt"
},
{
"code": "IntlIntl",
"text": "Apologies for the delay in getting back to you. Unfortunately, there isn’t any quick fix for this. The support for Intl is an ongoing project but we may not have an ETA available yet and some libraries providing time zone support still rely underneath on Intl.Not ideal, but could you check if there can be APIs available to resolve time differences or work with timezones in general?Look forward to your response.Kind Regards,\nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "Hi Henna,thank you for the update.In our case, external APIs are not suitable for the task, I’m afraid, since outgoing calls generate a lot more overhead than the simple on-machine timezone calculation. Especially for hundreds of date objects per calculation.As a workaround, we’ve resorted to generating an aggregation pipeline, which takes in all date objects via an {$addFields: …} stage and then calculates the correct timestamps via a {$dateToString: …} stage, but that seems very hacky. A more direct solution via Intl in the near future would be much appreciated!",
"username": "Mathias_Gerdt"
},
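For anyone looking for the aggregation workaround mentioned above, a minimal sketch (collection, field and zone are placeholders; $dateToString accepts an IANA timezone):

```js
db.events.aggregate([
  {
    $addFields: {
      localTime: {
        $dateToString: {
          date: "$createdAt",
          format: "%Y-%m-%dT%H:%M:%S",
          timezone: "Europe/Berlin"
        }
      }
    }
  }
])
```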
{
"code": "",
"text": "Hi there,Is there any update on that? Was trying to generate a time string from a Date and can’t find a way to format it into the correct time zone.",
"username": "Daniel_Bali"
},
{
"code": "",
"text": "And another thread illustrating the need for updating Node to a version that didn’t reach EOL (no security updates even since more than one year now). Please help by upvoting Upgrade Node.js version (from v10 to v14/v16) – MongoDB Feedback Engine",
"username": "Jan_Schwenzien"
}
] | Missing Intl support in Realm Functions | 2022-01-12T12:50:43.698Z | Missing Intl support in Realm Functions | 4,489 |
null | [
"node-js",
"graphql"
] | [
{
"code": "",
"text": "Is there a way to upgrade what version of node is used on Atlas? Says it’s running Node 10, trying to pass JSON Web tokens.It’s not even supporting the passage of JSON webtokrns in GraphQL so I’m seeing it’s unsupported entirely for Atlas GQL.Anyone got a fix so far?EDIT:\nCame to realize why GQL couldn’t pass a JWT, it’s because it doesn’t support Custom Scalars in Atlas still, now realizing that. And it can’t pass payloads from one scalar to another within it still. So that answers the GQL side, but still figuring out JWT side for functions. As it should work but doesn’t want to, for some reason it’s on Node 10.",
"username": "Brock"
},
{
"code": "{\"message\":\"Value is not an object: undefined\",\"name\":\"TypeError\"}\n",
"text": "I know for a fact it’s possible to authenticate and pass JWT via Functions, I’ve literally seen others do it.Did something happen recently with Functions for passing JWT?Right now I’m trying to setup a function to pass user login for a friends inventory and sales website, and for the life of me I can’t get JWT to work for the function.This is the error I keep getting, and I know it’s lying to me lol. I swear to all that is holy that I have literally seen people pass JWT in functions.EDIT:This was the issue, we weren’t using an old enough version of JWT. Not too thrilled due to security issues, but you have to use JWT 8 to pass through Atlas Functions.Not sure when Atlas will support JWT 9 released last year, but that’s what the issue was.JSON Web Token implementation (symmetric and asymmetric). Latest version: 9.0.0, last published: 3 months ago. Start using jsonwebtoken in your project by running `npm i jsonwebtoken`. There are 23150 other projects in the npm registry using...",
"username": "Brock"
},
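A minimal sketch of the kind of function described above — the value/secret name and function shape are placeholders, and it assumes jsonwebtoken@8 has been added as a dependency of the app:

```js
// Atlas Function (placeholder names): verifies a JWT passed in by the caller.
exports = function (token) {
  const jwt = require("jsonwebtoken");
  const secret = context.values.get("jwtSecret"); // placeholder value name
  try {
    return jwt.verify(token, secret); // returns the decoded payload
  } catch (err) {
    throw new Error(`Invalid token: ${err.message}`);
  }
};
```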
{
"code": "",
"text": "Just to add: the outdated Node version is becoming a pain for others too. Please help by upvoting here Upgrade Node.js version (from v10 to v14/v16) – MongoDB Feedback Engine",
"username": "Jan_Schwenzien"
}
] | Functions on Node 10? | 2023-04-01T03:30:45.504Z | Functions on Node 10? | 1,057 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "I have an order collection. I want to get order data of the last 7 days, last 7 weeks, last 7 months and last 7 years. For example, if today is Tuesday, I want to know how many orders did I have on tuesday, how many orders on monday and similarly all days until last tuesday. Similarly I want data for all units I mentioned above. How do I achieve this?I tried using $addFields with $dayOfTheYear, $week, $month and then grouping it inside facet pipeline accordingly but I ran into some issues cause I can only go within a year. I want a better, more straight forward method that just gives me the result of last 7 units without restriction of current year.",
"username": "Anand_S2"
},
{
"code": "",
"text": "Hi @Anand_S2,Please provide the following information:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "{ \"_id\": { \"$oid\": \"625d08b94c6acf367a54ef48\" }, \"orderId\": \"PC000002\", \"customerId\": { \"$oid\": \"625d0766717de05eb3d9efb8\" }, \"franchiseId\": { \"$oid\": \"622c9ad0c576441a1ff392c1\" }, \"serviceId\": { \"$oid\": \"622bfbc867851da6b0341d0c\" }, \"timeSlot\": \"01:00 PM - 02:00 PM\", \"price\": 999, \"mode\": \"COD\", \"status\": \"Completed\", \"address\": { \"_id\": \"625d08a9420441a445605863\", \"name\": \"Ajay Nellikode\", \"mobile\": 7799627766, \"house\": \"Pullat House \", \"street\": \"Punkunnam, Thrissur, Thrissur, Kerala\", \"city\": \"Thrissur\", \"state\": \"Kerala\", \"pincode\": 680002, \"landmark\": \"punkunnam Railway Station \", \"type\": \"Home-1\", \"location\": [ 10.535196599999999, 76.20149140000001 ], \"user\": \"625d0766717de05eb3d9efb8\", \"isDefault\": true, \"createdAt\": \"2022-04-18T06:43:53.389Z\", \"updatedAt\": \"2022-04-18T06:43:53.772Z\", \"__v\": 0 }, \"addOn\": [], \"date\": \"2022-04-18\", \"discountAmount\": 0, \"grandTotal\": 999, \"createdAt\": { \"$date\": { \"$numberLong\": \"1650264249642\" } }, \"updatedAt\": { \"$date\": { \"$numberLong\": \"1677146342875\" } }, \"__v\": 0, \"completedReport\": { \"serviceId\": { \"$oid\": \"622bfbc867851da6b0341d0c\" }, \"addOn\": [], \"grandTotal\": 999, \"completedBy\": { \"name\": \"adminVendor\", \"username\": \"adminVendor\", \"role\": \"admin\", \"userId\": \"622c6f86c576441a1ff38cc7\", \"phone\": \"+919605795642\" } }, \"workerId\": { \"$oid\": \"622daaacc525e950351ed9a5\" }}\n{\n// only count documents which have status as \"Completed'\norderCountsOfLastSevenDays: { tue: 1, wed: 3, thu: 5, fri: 3, sat: 7, sun: 10, mon: 6, tue: 5 },\norderCountsOfLastSevenMonths: { oct: 20, nov: 30, dec: 35, jan: 60, feb: 30, mar: 30, apr: 43 },\n//a week is from monday to sunday\norderCountsOfLastSevenWeeks: { week1: 10, week2: 15, week3: 13, week4: 20, week5: 10, week6: 21, week7: 25 }\n}\n salesData() {\n return new Promise((resolve, reject) => {\n const pipeline: PipelineStage[] = [\n {\n $addFields: {\n day: {\n $dayOfYear: '$createdAt'\n },\n week: {\n $week: '$createdAt'\n },\n month: {\n $month: '$createdAt'\n },\n year: {\n $year: '$createdAt'\n },\n currentDay: {\n $dayOfYear: new Date()\n },\n currentWeek: {\n $week: new Date()\n },\n currentMonth: {\n $month: new Date()\n },\n currentYear: {\n $year: new Date()\n },\n dayRangeStart: {\n $cond: {\n if: {\n $and: [\n {\n $eq: ['$year', '$currentYear']\n },\n {\n $gte: [\n {\n $subtract: ['$currentDay', 7]\n },\n 0\n ]\n }\n ]\n },\n then: {\n $subtract: ['$currentDay', 7]\n },\n else: 0\n }\n },\n weekRangeStart: {\n $cond: {\n if: {\n $gte: [\n {\n $subtract: ['$currentWeek', 7]\n },\n 0\n ]\n },\n then: {\n $subtract: ['$currentWeek', 7]\n },\n else: 0\n }\n },\n monthRangeStart: {\n $cond: {\n if: {\n $gte: [\n {\n $subtract: ['$currentMonth', 7]\n },\n 0\n ]\n },\n then: {\n $subtract: ['$currentMonth', 7]\n },\n else: 0\n }\n }\n }\n },\n {\n $facet: {\n daily: [\n {\n $match: {\n day: {\n $gt: '$dayRangeStart'\n },\n status: Statuses_Enum.COMPLETED\n }\n },\n {\n $group: {\n _id: 'day',\n total: {\n $sum: '$grandTotal'\n }\n }\n }\n ]\n }\n }\n ]\n Order.aggregate(pipeline)\n .then((res) => resolve(res))\n .catch((err) => reject(err))\n })\n }\n",
"text": "Sample documentExpected Output//or it can be as simple as each orderCounts being an array of numbers without those keysOutput from what I’ve attempted\nI am not getting the desired output, but this is my attempted code:Clarification on ‘for admin dashboard’\nI am not refering to MongoDB charts, I was just saying this data is for charts in a dashboard for our businessMongoDB version: 6.0.4",
"username": "Anand_S2"
}
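The year-boundary problem described above can be avoided by filtering on the raw createdAt value and truncating it to the unit of interest. A minimal mongosh sketch, assuming the collection is named orders (not the poster's exact pipeline):

```js
// Count completed orders per calendar day for the last 7 days.
// $dateTrunc (MongoDB 5.0+) buckets by real dates, so year boundaries don't matter.
const now = new Date();
const sevenDaysAgo = new Date(now.getTime() - 7 * 24 * 60 * 60 * 1000);

db.orders.aggregate([
  { $match: { status: "Completed", createdAt: { $gte: sevenDaysAgo } } },
  { $group: {
      _id: { $dateTrunc: { date: "$createdAt", unit: "day" } },
      orders: { $sum: 1 },
      total: { $sum: "$grandTotal" }
  } },
  { $sort: { _id: 1 } }
]);
```

The same shape with unit: "week" or unit: "month" (and a wider $match window) can be placed in separate $facet branches to get all three series in one round trip.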
] | How do I get data of last few days, weeks, months, years in mongodb for admin dashboard? | 2023-04-25T07:43:25.453Z | How do I get data of last few days, weeks, months, years in mongodb for admin dashboard? | 1,322 |
null | [
"storage"
] | [
{
"code": "net:\n port: 27017\n bindIp: localhost,127.0.0.1\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.095Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.112Z\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.112Z\"},\"s\":\"D2\", \"c\":\"CONNPOOL\", \"id\":22558, \"ctx\":\"main\",\"msg\":\"Initializing connection pool controller\",\"attr\":{\"pool\":\"NetworkInterfaceTL-ReplNetwork\",\"controller\":\"LimitController\"}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.112Z\"},\"s\":\"D1\", \"c\":\"NETWORK\", \"id\":22940, \"ctx\":\"main\",\"msg\":\"file descriptor and connection resource limits\",\"attr\":{\"hard\":64000,\"soft\":64000,\"conn\":51200}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.112Z\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":31002,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"linsrv150\"}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.112Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.19\",\"gitVersion\":\"9a996e0ad993148b9650dc402e6d3b1804ad3b8a\",\"openSSLVersion\":\"OpenSSL 1.1.1n 15 Mar 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"debian10\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.113Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 10 (buster)\\\"\",\"version\":\"Kernel 4.19.0-23-amd64\"}}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.113Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"localhost,127.0.0.1\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\",\"directoryPerDB\":true,\"engine\":\"wiredTiger\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":false,\"path\":\"/var/log/mongodb/mongod.log\",\"timeStampFormat\":\"iso8601-utc\",\"verbosity\":5}}}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.113Z\"},\"s\":\"D1\", \"c\":\"NETWORK\", \"id\":22940, \"ctx\":\"initandlisten\",\"msg\":\"file descriptor and connection resource limits\",\"attr\":{\"hard\":64000,\"soft\":64000,\"conn\":51200}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.113Z\"},\"s\":\"D1\", \"c\":\"EXECUTOR\", \"id\":23104, \"ctx\":\"OCSPManagerHTTP-0\",\"msg\":\"Starting thread\",\"attr\":{\"threadName\":\"OCSPManagerHTTP-0\",\"poolName\":\"OCSPManagerHTTP\"}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.113Z\"},\"s\":\"D3\", \"c\":\"EXECUTOR\", \"id\":23108, \"ctx\":\"OCSPManagerHTTP-0\",\"msg\":\"Waiting for work\",\"attr\":{\"numThreads\":1,\"minThreads\":1}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.113Z\"},\"s\":\"D2\", \"c\":\"-\", \"id\":23323, \"ctx\":\"initandlisten\",\"msg\":\"Starting periodic job 
{job_name}\",\"attr\":{\"job_name\":\"FlowControlRefresher\"}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.113Z\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22222, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Trimmed samples. Num: {numTrimmed}\",\"attr\":{\"numTrimmed\":0}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:03.114Z\"},\"s\":\"D4\", \"c\":\"-\", \"id\":20518, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Refreshing tickets. Before: {tickets} Now: {numTickets}\",\"attr\":{\"tickets\":1000000000,\"numTickets\":1000000000}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:04.113Z\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22222, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Trimmed samples. Num: {numTrimmed}\",\"attr\":{\"numTrimmed\":0}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:04.113Z\"},\"s\":\"D4\", \"c\":\"-\", \"id\":20518, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Refreshing tickets. Before: {tickets} Now: {numTickets}\",\"attr\":{\"tickets\":1000000000,\"numTickets\":1000000000}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:05.113Z\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22222, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Trimmed samples. Num: {numTrimmed}\",\"attr\":{\"numTrimmed\":0}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:05.113Z\"},\"s\":\"D4\", \"c\":\"-\", \"id\":20518, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Refreshing tickets. Before: {tickets} Now: {numTickets}\",\"attr\":{\"tickets\":1000000000,\"numTickets\":1000000000}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:06.113Z\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22222, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Trimmed samples. Num: {numTrimmed}\",\"attr\":{\"numTrimmed\":0}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:06.113Z\"},\"s\":\"D4\", \"c\":\"-\", \"id\":20518, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Refreshing tickets. Before: {tickets} Now: {numTickets}\",\"attr\":{\"tickets\":1000000000,\"numTickets\":1000000000}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:07.113Z\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22222, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Trimmed samples. Num: {numTrimmed}\",\"attr\":{\"numTrimmed\":0}}\n{\"t\":{\"$date\":\"2023-03-02T09:16:07.113Z\"},\"s\":\"D4\", \"c\":\"-\", \"id\":20518, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Refreshing tickets. Before: {tickets} Now: {numTickets}\",\"attr\":{\"tickets\":1000000000,\"numTickets\":1000000000}}\n\n",
"text": "Hello,We are running a MongoDB server at Debian 10.13.\nThe MongoDB version is 4.4.19.\nAfter the restart of one MongoDB server, the process does not open a network port anymore.\nI have already checked the configuration file:Even when I try to use another port it does not work.\nThe mondogb user and group is the owner of the dataarea and of the socket.\nI already tried to use the repair command but after it starts, it does nothing to the database, even after running for up to 12 hours.When I unmap the dataarea, a new one will be created and the server starts as I would expect it.\nI tried to use a backup of the server (without restoring the DB) to check whether there has been a change to the os which prevents the server from opening the port.In the log files you can see this:Even reinstalling the mongodb packages does not solve the issue.\nDo you have any idea?Thanks for your help!\nBest regards\nFlorian",
"username": "Florian_Streppel"
},
{
"code": "rs.status()rs.conf()dataareaunmapmongod --repair",
"text": "Hello @Florian_Streppel ,Welcome to The MongoDB Community Forums! To understand your use-case better, please share more details such as:After the restart of one MongoDB server, the process does not open a network port anymore.When I unmap the dataareaI already tried to use the repair command but after it starts, it does nothing to the database, even after running for up to 12 hours.Warning: Only use mongod --repair if you have no other options. The operation removes and does not save any corrupt data during the repair process and should not be run in a replica set as mentioned in the repairDatabase manual.If you’re having issues starting and connecting to a self-hosted MongoDB deployment, you might find hints in the following topic:Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "rs.status()rs.conf()dataareaunmap",
"text": "Hello Tarun,sorry for my late reply.When I let me show the open ports on the server there is no mongodb port opened.\nIn the logfiles it look like the port is opened. So there are no hints in the log for me.I cannot share the output because I cannot even connect to the server with the cli.Yes I mean the dbpath. The dbpath is a mapped nfs share and I unmounted the share to test, whether there is an issue with the data.Thats my issue as well. I cannot see any error. But the mongod process does not open a port I can connect to.Best regards\nFlorian",
"username": "Florian_Streppel"
},
{
"code": "netstat",
"text": "Did you run something like netstat ?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Yes I ran netstat -tulpen but there is no port opened for mongod.",
"username": "Florian_Streppel"
},
{
"code": "systemctl status mongod.servicetelnet 127.0.0.1 27017netstat -an | grep LISTENsystemctl status ufw.serviceiptables -L",
"text": "Please post the output of the below to investigate further:systemctl status mongod.servicetelnet 127.0.0.1 27017netstat -an | grep LISTENsystemctl status ufw.serviceiptables -L",
"username": "Abdullah_Madani"
},
{
"code": "systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: active (running) since Wed 2023-03-22 09:29:51 CET; 13s ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 20523 (mongod)\n Memory: 16.1M\n CGroup: /system.slice/mongod.service\n └─20523 /usr/bin/mongod --config /etc/mongod.conf\n\nMär 22 09:29:51 linsrv150 systemd[1]: Started MongoDB Database Server.\nMär 22 09:29:51 linsrv150 mongod[20523]: {\"t\":{\"$date\":\"2023-03-22T08:29:51.761Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20697, \"ctx\":\"main\",\"msg\":\"Renamed existing log file\",\"attr\":{\"oldLogPath\":\"/var/log/mongodb/mon\n\n\ntelnet 127.0.0.1 27017\nTrying 127.0.0.1...\ntelnet: Unable to connect to remote host: Connection refused\n\nnetstat -an |grep LISTEN\ntcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:60623 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:42131 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN\ntcp6 0 0 :::46575 :::* LISTEN\ntcp6 0 0 :::111 :::* LISTEN\ntcp6 0 0 :::56625 :::* LISTEN\ntcp6 0 0 :::22 :::* LISTEN\n\n\nsystemctl status ufw\nUnit ufw.service could not be found.\n\niptables -L\nChain INPUT (policy ACCEPT)\ntarget prot opt source destination\n\nChain FORWARD (policy ACCEPT)\ntarget prot opt source destination\n\nChain OUTPUT (policy ACCEPT)\ntarget prot opt source destination\n\n",
"text": "",
"username": "Florian_Streppel"
},
{
"code": "/etc/mongod.conf\n",
"text": "Please share the content ofand of the log file. I cannot tell you the exact path of the log file because for some reason the message is truncated.1 dumb question: Are you trying to connect from the same machine where you start mongodb or from another machine?",
"username": "steevej"
},
{
"code": "storage:\n dbPath: /var/lib/mongodb\n directoryPerDB: true\n engine: wiredTiger\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: false\n verbosity: 5\n path: /var/log/mongodb/mongod.log\n timeStampFormat: iso8601-utc\n\n# network interfaces\nnet:\n port: 27017\n bindIp: localhost,127.0.0.1,<private IP>\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.763Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.836Z\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.836Z\"},\"s\":\"D2\", \"c\":\"CONNPOOL\", \"id\":22558, \"ctx\":\"main\",\"msg\":\"Initializing connection pool controller\",\"attr\":{\"pool\":\"NetworkInterfaceTL-ReplNetwork\",\"controller\":\"LimitController\"}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.836Z\"},\"s\":\"D1\", \"c\":\"NETWORK\", \"id\":22940, \"ctx\":\"main\",\"msg\":\"file descriptor and connection resource limits\",\"attr\":{\"hard\":64000,\"soft\":64000,\"conn\":51200}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.836Z\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":20523,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"linsrv150\"}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.836Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.19\",\"gitVersion\":\"9a996e0ad993148b9650dc402e6d3b1804ad3b8a\",\"openSSLVersion\":\"OpenSSL 1.1.1n 15 Mar 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"debian10\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.836Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 10 (buster)\\\"\",\"version\":\"Kernel 4.19.0-23-amd64\"}}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.836Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"localhost,127.0.0.1,10.54.9.10\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\",\"directoryPerDB\":true,\"engine\":\"wiredTiger\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":false,\"path\":\"/var/log/mongodb/mongod.log\",\"timeStampFormat\":\"iso8601-utc\",\"verbosity\":5}}}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.836Z\"},\"s\":\"D1\", \"c\":\"NETWORK\", \"id\":22940, \"ctx\":\"initandlisten\",\"msg\":\"file descriptor and connection resource limits\",\"attr\":{\"hard\":64000,\"soft\":64000,\"conn\":51200}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.836Z\"},\"s\":\"D1\", \"c\":\"EXECUTOR\", \"id\":23104, \"ctx\":\"OCSPManagerHTTP-0\",\"msg\":\"Starting 
thread\",\"attr\":{\"threadName\":\"OCSPManagerHTTP-0\",\"poolName\":\"OCSPManagerHTTP\"}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.836Z\"},\"s\":\"D3\", \"c\":\"EXECUTOR\", \"id\":23108, \"ctx\":\"OCSPManagerHTTP-0\",\"msg\":\"Waiting for work\",\"attr\":{\"numThreads\":1,\"minThreads\":1}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.838Z\"},\"s\":\"D2\", \"c\":\"-\", \"id\":23323, \"ctx\":\"initandlisten\",\"msg\":\"Starting periodic job {job_name}\",\"attr\":{\"job_name\":\"FlowControlRefresher\"}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.838Z\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22222, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Trimmed samples. Num: {numTrimmed}\",\"attr\":{\"numTrimmed\":0}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:51.838Z\"},\"s\":\"D4\", \"c\":\"-\", \"id\":20518, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Refreshing tickets. Before: {tickets} Now: {numTickets}\",\"attr\":{\"tickets\":1000000000,\"numTickets\":1000000000}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:52.838Z\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22222, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Trimmed samples. Num: {numTrimmed}\",\"attr\":{\"numTrimmed\":0}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:52.838Z\"},\"s\":\"D4\", \"c\":\"-\", \"id\":20518, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Refreshing tickets. Before: {tickets} Now: {numTickets}\",\"attr\":{\"tickets\":1000000000,\"numTickets\":1000000000}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:53.838Z\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22222, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Trimmed samples. Num: {numTrimmed}\",\"attr\":{\"numTrimmed\":0}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:53.838Z\"},\"s\":\"D4\", \"c\":\"-\", \"id\":20518, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Refreshing tickets. Before: {tickets} Now: {numTickets}\",\"attr\":{\"tickets\":1000000000,\"numTickets\":1000000000}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:54.838Z\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22222, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Trimmed samples. Num: {numTrimmed}\",\"attr\":{\"numTrimmed\":0}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:54.838Z\"},\"s\":\"D4\", \"c\":\"-\", \"id\":20518, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Refreshing tickets. Before: {tickets} Now: {numTickets}\",\"attr\":{\"tickets\":1000000000,\"numTickets\":1000000000}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:55.838Z\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22222, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Trimmed samples. Num: {numTrimmed}\",\"attr\":{\"numTrimmed\":0}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:55.838Z\"},\"s\":\"D4\", \"c\":\"-\", \"id\":20518, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Refreshing tickets. Before: {tickets} Now: {numTickets}\",\"attr\":{\"tickets\":1000000000,\"numTickets\":1000000000}}\n{\"t\":{\"$date\":\"2023-03-22T08:29:56.838Z\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22222, \"ctx\":\"FlowControlRefresher\",\"msg\":\"Trimmed samples. Num: {numTrimmed}\",\"attr\":{\"numTrimmed\":0}}\n\n",
"text": "I try to connect from the same machine.\nI just connect for monitoring from remote.",
"username": "Florian_Streppel"
},
{
"code": "bindIp: 0.0.0.0\n\n",
"text": "@Florian_StreppelFor troubleshooting, please set the bindIP as:Restart the mongod services and try to connect first locally and then remotely",
"username": "Abdullah_Madani"
},
{
"code": "netstat -an |grep LISTEN\ntcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:60623 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:42131 0.0.0.0:* LISTEN\ntcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN\ntcp6 0 0 :::46575 :::* LISTEN\ntcp6 0 0 :::111 :::* LISTEN\ntcp6 0 0 :::56625 :::* LISTEN\ntcp6 0 0 :::22 :::* LISTEN\n\nTrying 127.0.0.1...\ntelnet: Unable to connect to remote host: Connection refused\n\n",
"text": "I did change it:But there ist still no open port.\nWhen I try to connect with telnet it does not work as well.",
"username": "Florian_Streppel"
},
{
"code": "ps -ef | grep mongod\n\n",
"text": "@Florian_Streppel Please let me know by which user the mongod service is running",
"username": "Abdullah_Madani"
},
{
"code": "ps -ef | grep mongodmongodb 18573 1 0 15:04 ? 00:00:00 /usr/bin/mongod --config /etc/mongod.conf\n\n",
"text": "ps -ef | grep mongodIt is running with the mongodb user:The dbpath is owned by the mongodb user.",
"username": "Florian_Streppel"
},
{
"code": "<private IP>{\"bindIp\":\"localhost,127.0.0.1,10.54.9.10\",\"port\":27017}",
"text": "When you redact<private IP>redact all{\"bindIp\":\"localhost,127.0.0.1,10.54.9.10\",\"port\":27017}What else did you redact? May be it stops us from investigating.",
"username": "steevej"
},
{
"code": "ls -l /var/lib/mongodb/*",
"text": "Please share output ofls -l /var/lib/mongodb/*If your data set is huge, I mean huge, you might need to let mongod open all the files before it starts to listen.",
"username": "steevej"
},
{
"code": "",
"text": "Are you running SELinux?",
"username": "steevej"
},
{
"code": "",
"text": "Is your /var/lib/mongodb storage NAS, SAN or local?",
"username": "steevej"
},
{
"code": "cat /proc/mounts\n\ngetenforce\n\n",
"text": "In addition to what Steeve has asked, do you have free space in dbPath directory to write Journal files?Did the directory get mounted as read write after reboot? Because as you have mentioned earlier, the problem started after you rebooted the machine.Check if there are any mount points as read only (ro)As Steeve mentioned, check for SELinux if enabled",
"username": "Abdullah_Madani"
},
{
"code": "\"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 10 (buster)\\\"\",\"version\":\"Kernel 4.19.0-23-amd64\"}}}\n\"s\":\"D4\", \"c\":\"STORAGE\"",
"text": "it is Debian, so not a Mac issue.When I let me show the open ports on the server there is no mongodb port opened.\nIn the logfiles it look like the port is opened. So there are no hints in the log for me.I believe you were quick to judge, are looking at the wrong place and thus leading us to the wrong ideas. MongoDB will start serving at that port only when everything is fine, else will shutdown with an error. And, as happens all the time, there can be certain cases that are not anticipated by developers that cause lingering states without a timeout. And I believe this is one of those situations: Your config file is fine so the server passes the initial phases, but gets into a lingering state at storage stage \"s\":\"D4\", \"c\":\"STORAGE\"When I unmap the dataarea, a new one will be created and the server starts as I would expect it.Yes I mean the dbpath. The dbpath is a mapped nfs share and I unmounted the share to test, whether there is an issue with the data.Here, you should now have some new sights. The problem is in the NFS portion of your setup and in one of these sections: nfs driver on this pc (did you changed any related settings?), network itself (any heavy load recently added that slow network?), nfs data host pc (is nfs host healthy?), nfs shared data drive/folder (is data hard drive healthy? data folder?). check all these parts and those I could not think of that are in this pipeline.PS: as noted above, when you attach/write log files, check for possible sensitive information in them too.",
"username": "Yilmaz_Durmaz"
},
{
"code": "Apr 24 10:44:24 Server kernel: [4830863.552178] lockd: server <NFS Server> not responding, still trying\nApr 24 10:44:24 Server kernel: [4830863.554797] lockd: server <NFS Server> OK\n",
"text": "Hi there,I had conact with the MongoDB Support and we solved the issue.\nWe are running MongoDB with the DBPath set to an NFS share.\nThe server is able to access the share and can create and read files there.When I unmounted the NFS share and started MongoDB again, it was working.\nThe DBPath was now pointing on the partition on the server itself.\nAfter this I created an empty NFS Share and tried it there as well, but with the result, that it did not work.\nNo database was created. Just the mongod.lock file was written.I found this error message in /var/log/syslog.I added a rule at the firewall between the server and the NFS server that allowed anything.\nAfter restarting the service it did work and I got this message in the syslog.In the log of the Firewall I found a connection from the MongoDB server to the NFS server on port 4045 UDP.\nThe Port is used for NFS lock daemon/manager.\nI added the port to the firewall rule and restarted the service.Now everything is working just fine again.Thank you for having a look at our issue.Best regards Florian",
"username": "Florian_Streppel"
}
] | MongoDB does not open Port | 2023-03-02T09:18:59.240Z | MongoDB does not open Port | 3,087 |
null | [
"node-js",
"crud",
"mongoose-odm"
] | [
{
"code": "{\n _id: something,\n property:value,\n animals: [\n {\n dogs: [\n { name: 'Dog1' },\n { name: 'Dog2' }\n ],\n cats: [\n { name: 'Cat1' },\n { name: 'Cat2' }\n ]\n }\n ]\n}\n{\n _id: something,\n property:value,\n animals: [\n {\n dogs: [\n { name: 'Dog1', _id: uniqueID },\n { name: 'Dog2', _id: uniqueID }\n ],\n cats: [\n { name: 'Cat1', _id: uniqueID },\n { name: 'Cat2', _id: uniqueID }\n ]\n }\n ]\n}\ndb.collection('collectionName').updateMany(\n { animals: { $ne: [] } },\n {\n $set: {\n 'animals.$[].cats.$[]._id': mongoose.Types.ObjectId(),\n 'animals.$[].dogs.$[]._id': mongoose.Types.ObjectId()\n }\n }\n );\nmongoose.Types.ObjectId()",
"text": "Hello.I’m using mongoose in Express and I need to add IDs to many objects inside of array inside of array. Assume that my schema looks like this:I want to achieve something like this:My codeIt works but IDs are exactly the same. Now I know mongoose.Types.ObjectId() is a client-side function and it returns only once when running query so I have the same IDs in all objects.How to achieve unique IDs?",
"username": "Patryk_Luczak"
},
{
"code": "mongoose.connect('<ConnectionString>', { useNewUrlParser: true, useUnifiedTopology: true })\n .then(() => {\n const AnimalSchema = new mongoose.Schema({\n animals: [{\n cats: [{ name: String, _id: mongoose.Types.ObjectId }],\n dogs: [{ name: String, _id: mongoose.Types.ObjectId }]\n }]\n },\n { collection: \"<collectionName>\"});\n const AnimalModel = mongoose.model('Animal', AnimalSchema);\n\n AnimalModel.find({ animals: { $ne: [] } }).then(animals => {\n const bulkWriteOperations = [];\n\n animals.forEach(animal => {\n animal.animals.forEach(({ cats, dogs }) => {\n cats.forEach(cat => {\n bulkWriteOperations.push({\n updateOne: {\n filter: { _id: animal._id, 'animals.cats.name': cat.name },\n update: { $set: { 'animals.$[i].cats.$[j]._id': mongoose.Types.ObjectId() } },\n arrayFilters: [{ 'i.cats.name': cat.name }, { 'j.name': cat.name }]\n }\n });\n });\n\n dogs.forEach(dog => {\n bulkWriteOperations.push({\n updateOne: {\n filter: { _id: animal._id, 'animals.dogs.name': dog.name },\n update: { $set: { 'animals.$[i].dogs.$[j]._id': mongoose.Types.ObjectId() } },\n arrayFilters: [{ 'i.dogs.name': dog.name }, { 'j.name': dog.name }]\n }\n });\n });\n });\n });\n\n AnimalModel.bulkWrite(bulkWriteOperations).then(result => {\n console.log(`${result.modifiedCount} documents updated`);\n mongoose.connection.close();\n });\n });\n })\n .catch(err => console.error(err));\n[\n {\n _id: ObjectId(\"6448ba860bc375d62e0654fd\"),\n property: 'sampletest1',\n animals: [\n {\n dogs: [\n { name: 'Dog1', _id: ObjectId(\"6448bf41f1fa51ee3fa874b2\") },\n { name: 'Dog2', _id: ObjectId(\"6448bf41f1fa51ee3fa874b3\") }\n ],\n cats: [\n { name: 'Cat1', _id: ObjectId(\"6448bf41f1fa51ee3fa874b0\") },\n { name: 'Cat2', _id: ObjectId(\"6448bf41f1fa51ee3fa874b1\") }\n ]\n }\n ]\n }\n]\n",
"text": "Hi @Patryk_Luczak and welcome to MongoDB community forums!!I believe we can achieve this by using application-level code. Therefore, I have attempted to achieve the same using the following code. I’m sharing the code for your reference:The output for the following would be:In the above code, forEach has been used with BulkWrite to achieve the results.Additionally, for my own understanding, could you please explain the reason why you are modeling the _id in such a way? Generally in MongoDB, the _id field in a subdocument is added as a reference to another collection in the document.Let us know if you have any further queries.Regards\nAasawari",
"username": "Aasawari"
}
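As a quick sanity check after running a migration like the one above, a hedged mongosh query (collection name assumed to be animals) can confirm that no nested _id was assigned twice:

```js
// Flatten both nested arrays and report any _id that occurs more than once.
db.animals.aggregate([
  { $unwind: "$animals" },
  { $project: { pets: { $concatArrays: ["$animals.cats", "$animals.dogs"] } } },
  { $unwind: "$pets" },
  { $group: { _id: "$pets._id", count: { $sum: 1 } } },
  { $match: { count: { $gt: 1 } } }
]); // an empty result means every generated _id is unique
```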
] | Setting Object IDs in updateMany | 2023-04-21T07:08:51.063Z | Setting Object IDs in updateMany | 1,325 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hi there,I attended the MongoDB Associate Developer Node certification exam this April 25, 2023 in the Examity portal. I have got the result breakdown percentage for each section but I was not able to save those percentage. I am not sure where I can find it. I do need it to show to my boss.I will be glad to receive those percentages via my email address.\nHow can I get it?Thanks",
"username": "JUNIOR-OREOL_NOUMODONG_TINDJONG"
},
{
"code": "",
"text": "Hey @JUNIOR-OREOL_NOUMODONG_TINDJONG,Welcome to the MongoDB Community Forums! I see you have already reached out to the Certification team and gotten your copy of the results and gotten more information about the developer path too.Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How can I get the result breakdown percentage for each section | 2023-04-25T16:42:47.443Z | How can I get the result breakdown percentage for each section | 932 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "",
"text": "I am trying to query a collection that has documents of different categories. However, i don’t know which would be a cheaper and efficient expression, using $bucketAuto and $match or quering multiple times but only using $match?If I haven’t made anything clear do let me know ",
"username": "Nirmal_Bhandari"
},
{
"code": "",
"text": "I just thought about it. I think the best idea would be use $match before $bucketAuto, which will ensure only the documents I want are sorted into buckets and only one query is needed.",
"username": "Nirmal_Bhandari"
},
{
"code": "",
"text": "Hey @Nirmal_Bhandari,Welcome to the MongoDB Community Forums! I just thought about it. I think the best idea would be use $match before $bucketAuto, which will ensure only the documents I want are sorted into buckets and only one query is needed.Glad that you have found the answer you’re looking for, but do you mind sharing why you think it’s the best solution for you?In general, the scenario you described depends on the size of your collection and the distribution of the categories within it.If your collection is small, or if the categories are relatively evenly distributed, querying multiple times using $match may be simpler and not should not take up much time too since the collection is small.\nOn the other hand, if your collection is large, using $bucketAuto and $match may be a better choice. This is because $bucketAuto can automatically group documents based on their values, which can save you the effort of manually querying multiple times.Please note that these are general pointers in nature and it’s always a good idea to test both approaches with your specific data set and measure their performance. This will help you determine which approach is more efficient for your use case.Hope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
}
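A minimal sketch of the stage order being discussed, with hypothetical collection and field names; the point is only that $match runs first (and can use an index) so $bucketAuto only sees the documents that matter:

```js
db.items.aggregate([
  { $match: { category: "electronics" } },        // filter first, ideally index-backed
  { $bucketAuto: {
      groupBy: "$price",                           // bucket only the matched documents
      buckets: 5,
      output: { count: { $sum: 1 }, avgPrice: { $avg: "$price" } }
  } }
]);
```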
] | What would be cheaper query-- querying using $bucketAuto to categorize data and then using $match (with indexes) to filter again OR querying mutiple times for data of different categories (using compound index)? | 2023-04-19T12:36:41.802Z | What would be cheaper query– querying using $bucketAuto to categorize data and then using $match (with indexes) to filter again OR querying mutiple times for data of different categories (using compound index)? | 537 |
null | [
"node-js"
] | [
{
"code": "socketTimeoutMS",
"text": "The mongodb documentation states that connectionPoolCleared is “Created when a connection pool is cleared”.\nMy question is that when does a connection pool is cleared?Documentation also states that If socketTimeoutMS is set to n milliseconds, it means that Mongodb driver will close sockets if they are inactive more than n milliseconds.Does this mean that if all the connections(sockets) in a connection pool timeout, then this event is emitted?\nAre there any other cases in which this event is emitted?",
"username": "Sadegh_Hosseini"
},
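For reference, the Node.js driver exposes the pool lifecycle through CMAP monitoring events, and idle sockets can be closed with maxIdleTimeMS. A minimal sketch (connection string and timeout value are placeholders):

```js
const { MongoClient } = require("mongodb");

// Close pooled connections that sit idle for more than 60 seconds (example value).
const client = new MongoClient("mongodb://localhost:27017", { maxIdleTimeMS: 60000 });

client.on("connectionPoolCleared", (event) => {
  console.log("pool cleared for", event.address);
});
client.on("connectionClosed", (event) => {
  // reason is a string such as "idle", "stale" or "poolClosed"
  console.log("connection closed:", event.address, event.reason);
});
```

Note there is no dedicated event for "all sockets in a pool timed out"; connectionPoolCleared is tied to the pool being cleared (for example after certain server errors), so per-connection connectionClosed events with reason "idle" are probably the closer signal for the use case below.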
{
"code": "",
"text": "Hello @Sadegh_Hosseini ,Welcome to The MongoDB Community Forums! Connection Pool’s primary benefit is to have ready-to-use database connections which helps reduce application latency and the number of times new connections are created.My question is that when does a connection pool is cleared?There can be many cases when a connection pool is cleared such as:Does this mean that if all the connections(sockets) in a connection pool timeout, then this event is emitted?It could be one of the reason but mostly MongoDB instances try to have a cache of open, ready-to-use database connections maintained by the driver. You can actually specify the min and max pool size in your MongoDB URI. Please refer to the Connection Pool Configuration Settings documentation.Can you please share your requirements to help me understand your use-case in more detail? Specifically, do you want to clear the connection pool, or do you have any other requirements?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Thanks for answering.\nI am using Nodejs and Mongoose.\nFor a “Database per tenant” multitenancy approach. There are two solutions:In order to implement the 2nd approach, I was thinking it would be best if there was a way to remove the connection objects of tenants(who have been inactive for certain amount of time) from the connection cache.Of course it is possible to keep records of tenant’s most recent activity time and schedule a task for this. But I think it would be easier if there were some event which got emitted by nodejs mongodb drive whenever all the connections in a connection pool timed out, that I could listen to and remove the connection object from the connection cache.",
"username": "Sadegh_Hosseini"
},
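Absent such an event, one hedged sketch of the scheduled-task approach mentioned above: a plain JavaScript cache keyed by tenant that records last use and periodically closes connections idle past a threshold (names, threshold, and createConn are illustrative assumptions, not a Mongoose API):

```js
const cache = new Map(); // tenantId -> { conn, lastUsed }

function getTenantConnection(tenantId, createConn) {
  const entry = cache.get(tenantId) || { conn: createConn(tenantId), lastUsed: 0 };
  entry.lastUsed = Date.now();
  cache.set(tenantId, entry);
  return entry.conn;
}

// Sweep every 5 minutes and evict tenants idle for more than 30 minutes.
setInterval(async () => {
  const cutoff = Date.now() - 30 * 60 * 1000;
  for (const [tenantId, entry] of cache) {
    if (entry.lastUsed < cutoff) {
      cache.delete(tenantId);
      await entry.conn.close(); // Mongoose connections expose close()
    }
  }
}, 5 * 60 * 1000);
```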
{
"code": "MongoClientminPoolSize",
"text": "All official MongoDB drivers create a connection pool, and the individual connections inside the pool are automatically managed by the driver themselves using typically a MongoClient object. This object is expected to last for the lifetime of the application. This is done to help and make things easy so that developers do not need to worry about managing the connections along with other things and it also helps in reducing application latency. You can take a look at Create and Use a Connection Pool for more information.You can tune your connection pool settings as most users try to adjust minPoolSize or maxPoolSize as per their application requirements and resources available. You can also take a look at Tuning Your Connection Pool Settings for more information regarding this. To learn about connection and authentication options supported by the Node.js driver please visit MongoDB connection and authentication options.Lastly, please refer below link in case you are interested in implementing Multi-Tenant Architecture for your application in MongoDB Atlas",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "For a “Database per tenant” multitenancy approach. There are two solutions:In order to implement the 2nd approach, I was thinking it would be best if there was a way to remove the connection objects of tenants(who have been inactive for certain amount of time) from the connection cache.From the view point of “connection management in MongoClient”, it doesn’t (and shouldn’t) care how many databases will be used by this application or which one has more/less traffic. They are just treated equivalently during connection management. The connection is just a virtual tcp connection between your driver and the other party (be it a mongodb server or mongos or similar).Whenever you need a connection to read-from/write-to the mongodb deployment, the driver helps you to find one from the pool if any is available.So in a nutshell, mongo drivers will manage then in a mostly transparent way, and generally you can just rely on it.",
"username": "Kobe_W"
}
] | When does a connection pool gets cleared? | 2023-04-14T16:30:53.762Z | When does a connection pool gets cleared? | 1,871 |
null | [
"node-js",
"crud"
] | [
{
"code": "const timeoutError = new MongoServerSelectionError(\nMongoServerSelectionError: connect ENOBUFS 127.0.0.1:27017 - Local (undefined:undefined)\n...\n",
"text": "Hello, I have a strange error, and can’t find similar solutions to fix it. I would be incredibly grateful if someone could help me with it.\nMy environment: windows 11, mongodb community server (v5.0.10), nodejs 16.3.0 version.I wrote a script that makes requests through a proxy to a specific endpoint in a for loop, receives the data, and writes it to a collection. At first, I used the updateOne() method, then I switched to the bulkWrite() method, although I’m not sure if it can help in any way. The problem is that I make 25 requests per minute (and this number will grow in the future), and after 4000 or 5000 requests, my code throws the following error:I want the code to work for at least 24 hours without errors.\nAfter research I guess that the problem was that mongod was not running, but I started it through cmd and I’m not sure that that’s a reason.I can provide code examples/logs/any additional info. Thanks for any help.",
"username": "Mykhailo_Popovych"
},
{
"code": "poolSizeretryWritestrueretryWritesmongodb://localhost:27017/mydatabase?retryWrites=true\n",
"text": "Hey @Mykhailo_Popovych,Welcome to the MongoDB Community Forums! The ENOBUFS error usually indicates that the system’s network buffer is full, and it cannot establish a new connection. You mentioned that you make 25 requests per minute, which translates to one request every 2.4 seconds. It’s possible that you are overwhelming the MongoDB server with too many requests. You can try adding a delay between requests to reduce the load on the server. You can also try and increase the maximum number of connections that your Node.js application can create to the MongoDB server by increasing the poolSize option.You can also use a MongoDB connection string with a retryWrites option set to true. This option enables your application to retry failed to write operations automatically, which might help in scenarios where the server is temporarily unavailable. Here’s an example connection string with the retryWrites option:I hope these suggestions help you resolve the issue. If not, it would be good if you can provide your code, sample documents, and logs.Regards,\nSatyam",
"username": "Satyam"
}
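ENOBUFS is typically the operating system running out of socket buffers rather than a MongoDB-side limit, and it often points to sockets being opened faster than they are released (the proxy requests count too). A hedged sketch of the usual mitigations in script form: reuse one MongoClient for the whole run, batch the writes, and pace the loop (database, collection, and fetchBatch are placeholders):

```js
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://127.0.0.1:27017"); // create once, reuse everywhere
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function run(fetchBatch /* hypothetical: fetches documents via the proxy */) {
  await client.connect();
  const col = client.db("mydb").collection("results");
  for (;;) {
    const docs = await fetchBatch();
    if (docs.length > 0) {
      await col.bulkWrite(docs.map((d) => ({
        updateOne: { filter: { _id: d._id }, update: { $set: d }, upsert: true }
      })));
    }
    await sleep(2500); // pace requests instead of bursting
  }
}
```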
] | Writing to DB stops after some time (MongoServerSelectionError: connect ENOBUFS 127.0.0.1:27017) | 2023-04-22T19:47:23.558Z | Writing to DB stops after some time (MongoServerSelectionError: connect ENOBUFS 127.0.0.1:27017) | 1,323 |
null | [
"aggregation",
"views"
] | [
{
"code": "",
"text": "I have multiple collections for data related to a server. I am trying to add all the collection into one big collection as that will save the number of time I will have to query the database for data related to a server. I am thinking $lookup and $merge might be the way to go about it. However, I don’t have full knowledge of how to use $lookup and $merge yet. If I am going the wrong way about it or if you have a better way, do let know.Or should I do it mannually?If you got any questions do let me know.Thank you in advance",
"username": "Nirmal_Bhandari"
},
{
"code": "",
"text": "Hello @Nirmal_Bhandari ,Before we try to figure out if you should merge all the collections into one, can you please answer few questions?Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Adding multiple collection into one collection? | 2023-04-24T11:16:30.254Z | Adding multiple collection into one collection? | 905 |
null | [
"aggregation"
] | [
{
"code": "[\n {\n \"_id\": 1,\n \"price\": 1\n },\n {\n \"_id\": 2,\n \"price\": 2\n },\n {\n \"_id\": 3,\n \"price\": 3\n },\n {\n \"_id\": 4,\n \"price\": 4\n },\n {\n \"_id\": 5,\n \"price\": 5\n },\n {\n \"_id\": 6,\n \"price\": 6\n }\n]\nconst aggregation1 = [\n {\n $setWindowFields: {\n sortBy: {\n _id: 1\n },\n output: {\n mean: {\n $avg: \"$price\",\n window: {\n documents: [-4, 0]\n }\n }\n }\n }\n },\n {\n $setWindowFields: {\n sortBy: {\n _id: 1\n },\n output: {\n field_new: {\n $sum: [\n \"$price\",\n { $last: \"$mean\" } //Gives error\n ],\n window: {\n documents: [-4, 0]\n }\n }\n }\n }\n }\n];\ndb.collection.aggregate(aggregation);\n[\n {\n \"_id\": 1,\n \"price\": 1\n },\n {\n \"_id\": 2,\n \"price\": 2\n },\n {\n \"_id\": 3,\n \"price\": 3\n },\n {\n \"_id\": 4,\n \"price\": 4\n },\n {\n \"_id\": 5,\n \"price\": 5,\n \"field_new\": 8 // 5 + 3 (3=(1+2+3+4+5)/5 mean from last 5 docs)\n },\n {\n \"_id\": 6,\n \"price\": 6,\n \"field_new\": 10 // 6 + 4 (4=(2+3+4+5+6)/5 mean from last 5 docs)\n }\n]\n",
"text": "I have a MongoDB collection like this:I want to calculate standard deviation myself (I know there’s a built in operator but I want to change some parameters, so implementing it myself).I calculated the running mean, but how do use the last mean in a setWindowFields stage:I’m looking to perform an operation on each price field in a document (sum), with the last mean.\ne.g. x1 + mean at x5 , x2 + mean at x5, … , x6 + mean at x10, x7 + mean at x10, …Like we do in a standard deviation formula: Summation of square of difference between price and average price.Here’s how the expected output should look like:",
"username": "Anuj_Agrawal"
},
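For comparison while developing the custom version, the built-in rolling standard deviation over the same five-document window is a one-stage pipeline; a minimal sketch:

```js
db.collection.aggregate([
  { $setWindowFields: {
      sortBy: { _id: 1 },
      output: {
        sd: { $stdDevPop: "$price", window: { documents: [-4, 0] } }
      }
  } }
]);
```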
{
"code": "$addFieldsfield_newsetWindowFieldsfield_newpricemeansd>const a =\n[\n {\n '$setWindowFields': {\n sortBy: { _id: 1 },\n output: {\n mean: { '$avg': '$price', window: { documents: [ -4, 0 ] } }\n }\n }\n },\n {\n '$addFields': { field_new: { '$sum': [ '$price', '$mean' ] } }\n },\n { '$project': { _id: 1, price: 1, field_new: 1 } }\n]\nsd>db.collection.aggregate(a)\n[\n { _id: 1, price: 1, field_new: 2 },\n { _id: 2, price: 2, field_new: 3.5 },\n { _id: 3, price: 3, field_new: 5 },\n { _id: 4, price: 4, field_new: 6.5 },\n { _id: 5, price: 5, field_new: 8 },\n { _id: 6, price: 6, field_new: 10 }\n]\n$project$addFields$project",
"text": "Hi @Anuj_Agrawal,Apologies as I’m still trying to understand the calculations explained but briefly scanning your expected output, would using an $addFields to calculate field_new after the initial setWindowFields is used (to calculate the running mean) work for you? Based off the expected output you had provided, field_new just appears to be a sum of the current price and running mean value. Is this correct?Pipeline used in my test environment:Output:I used a $project at the end to get it closer to your expected output but I believe the $addFields and $project could probably be combined Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
}
] | How to do operations to a document other than the current one in MongoDB's setWindowFields | 2023-04-24T06:46:06.193Z | How to do operations to a document other than the current one in MongoDB’s setWindowFields | 381 |
null | [
"atlas-cluster"
] | [
{
"code": "",
"text": "How can we create and export cluster architecture diagram in atlas?",
"username": "Gopal_Sharma"
},
{
"code": "",
"text": "Hi @Gopal_Sharma - Welcome to the community Are you after something similar to what is shown in the Set Up a Private Endpoint documentation? (as an example):\nimage801×621 43.5 KB\nExporting the architecture in Atlas is not feature available. If you want something like this in future then perhaps provide your use case details in a feedback post.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Export cluster architecture diagram | 2023-04-25T13:42:35.153Z | Export cluster architecture diagram | 479 |
null | [] | [
{
"code": "exports = function(changeEvent) {\n const { Client } = require('pg')\n \n const client = new Client({\n host: 'xxxxx',\n port: xxx,\n dbname: 'xxxxxxx',\n user: 'xxxxxxx',\n password: 'xxxxxxx',\n ssl: true,\n });\n \n client.connect(err => {\n\tif (err) {\n\t console.error('Connection error', err)\n\t} else {\n\t console.log('Connected')\n\t}\n });\n .....\n};\n> error logs: \nconnection error FunctionError: 'tls' module: not a TLS connection\n",
"text": "Hello All,We are trying to push selective data from Atlas DB to PostgreSQL database using triggers. Below is the snippet of code used to connect to PostgreSQL.As seen above, node-postgres is being used to connect to the database. While this code works fine when executed through a standalone program, we are getting the below TLS error from MongoDB trigger function.Appreciate any help to resolve this issue. Thanks!",
"username": "Ananth"
},
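One thing that may be worth trying (not a confirmed fix for the Atlas Functions TLS error) is passing an ssl options object instead of ssl: true; node-postgres accepts an object with fields such as rejectUnauthorized and ca. As an aside, the pg Client config key for the database name is database rather than dbname, if I recall the API correctly. A sketch with placeholder credentials:

```js
const { Client } = require("pg");

const client = new Client({
  host: "example.postgres.database.azure.com", // placeholder host
  port: 5432,
  database: "mydb",
  user: "myuser",
  password: "secret",
  ssl: {
    rejectUnauthorized: true,
    // ca: "<PEM of the Azure CA bundle>", // only if a custom CA is required
  },
});

client.connect((err) => {
  if (err) {
    console.error("Connection error", err);
  } else {
    console.log("Connected");
  }
});
```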
{
"code": "",
"text": "Is the Postgresql instance set up for TLS?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Yes, we are using Azure PostgreSQL server and it is setup with TLS. Here is a reference to the relevant documentation.",
"username": "Ananth"
},
{
"code": "connection error FunctionError: 'tls' module: not a TLS connection",
"text": "connection error FunctionError: 'tls' module: not a TLS connectionHave you carefully read https://www.mongodb.com/docs/atlas/triggers/ and https://www.mongodb.com/docs/atlas/triggers/external-dependencies/#std-label-external-dependencies ?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi Jack_Woehr,Yes, have gone through those and checked them again, but couldn’t find any related information. Please let me know if I’m missing something.Thanks!",
"username": "Ananth"
},
{
"code": "[Atlas cloud] ---> [Postgresql somewhere]\n ^\n |\n[Your workstation running a standalone program]\n",
"text": "@Ananth is this the setup that works?or with the standalone program do you fetch the data from Atlas to your workstation and then from your workstation write it to Postgresql?",
"username": "Jack_Woehr"
},
{
"code": "┌──────────────────────┐\n│ node.js standalone │\n│ program from local │\n│ machine │\n└──────────────────────┘\n \t │ Connects over \n\t │ SSL successfully\n\t ▼\n ┌────────────┐\n │ PostgreSQL│\n └────────────┘\n┌────────────┐\n│ MongoDB │\n│ Atlas │\n└────────────┘\n\t │ Push using\n\t │ triggers \n\t │(fails with TLS error)\n\t ▼\n┌───────────────┐\n│ PostgreSQL │\n│(same instance│\n│ as above) │\n└───────────────┘\n\n",
"text": "@Jack_WoehrTo validate the PostgreSQL SSL setup, I’ve tried the below mode and it works fine.What we intend to do is below, which is not working:",
"username": "Ananth"
},
{
"code": "",
"text": "Sounds like the outgoing connection from Atlas is blocked in some fashion.\nAre you a paying Atlas customer? If so, file a support ticket. The menu is in the upper right of the Atlas web interface.\nIf you’re on the free plan, I suspect that sort of triggered callout to a 3rd-party site may not be supported.\nI’m not a MongoDB employee, but I think it’s a question Atlas support will have to answer for you.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I had the same doubt and had tried connecting to an external service over HTTPS, that works fine.\nWe will explore reaching out to Atlas Support.Thanks very much for your time!",
"username": "Ananth"
},
{
"code": "",
"text": "Hi,I have the same problem. Have you find the solution?Thanks in advance for your reply.",
"username": "Franck_Anso"
},
{
"code": "",
"text": "No, we were not able to resolve this and hence moved the sync logic to our application code.",
"username": "Ananth"
},
{
"code": "",
"text": "Hi Folks – What version of the pg npm package are you using? We do test with v8.7.1 and the hope is that is supported.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "I am also trying to connect to a remote Postgres DB from Atlas trigger. I downgraded pg npm package to 8.7.1 but still having the same error:Connection error FunctionError: ‘tls’ module: not a TLS connection",
"username": "Kashif_Aziz"
},
{
"code": "",
"text": "Hi, what is the version of your Postgres DB server?",
"username": "Lijie_Zhang"
},
{
"code": "",
"text": "I am using Postgres 11.18",
"username": "Kashif_Aziz"
},
{
"code": "",
"text": "It should work now. Please check it out.",
"username": "Lijie_Zhang"
}
] | Connecting to PostgreSQL from MongoDB Atlas trigger | 2022-09-27T17:13:04.126Z | Connecting to PostgreSQL from MongoDB Atlas trigger | 3,794 |
null | [
"swift"
] | [
{
"code": "Begin processing pending FLX bootstrap for query version xxx\n",
"text": "I get this print in my console log a lot. What does it mean?",
"username": "Itamar_Gil"
},
{
"code": "",
"text": "Hi, this is just a normal log message that is printed when a change of subscription (query) is made by the client (or if it is connecting for the first time). FLX stands for “flexible sync” and the query version is an incrementing counter for the number of times your subscription has changed.You can update the log level if you would prefer not to see these, but during development it might be more helpful than not to see logs at the default level.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "@Tyler_Kaye do you know how I can terminate these tasks, especially if I run them on the @MainActor? I understand it will lead to partial queries being executed but I wonder if it’s possible/recommended.",
"username": "Itamar_Gil"
},
{
"code": "",
"text": "I am not quite sure why you would want to terminate them. They are being run in the background and are only happening when you are making changes to your query. I believe there is no way to terminate them, but you also should see no effects of this happening. Is there a reason you are looking to do this?",
"username": "Tyler_Kaye"
},
{
"code": "async/await@MainActor",
"text": "I’ll give you a scenario:In that case, I may want to terminate the leftover work from the previous subscription update, or at least de-prioritize it.\nDoes that make sense?",
"username": "Itamar_Gil"
},
{
"code": "",
"text": "Got it. Thank you for providing that detail. Unfortunately, there is no way to do this currently, but I have forwarded along a link to this post to the team and they have added it to their list of feature requests.Thanks,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Begin processing pending FLX bootstrap for query version xxx | 2023-04-22T14:44:10.287Z | Begin processing pending FLX bootstrap for query version xxx | 968 |
null | [
"python"
] | [
{
"code": "from pymongo import MongoClient\n\nmongo_url = \"mongodb://localhost:27017\"\n\nmongo_db = MongoClient(mongo_url)\n\n#print(mongo_db.list_database_names())\n\n#Create db\ndb = mongo_db[\"TestDB5\"]\n#Create collection\ncol1 = db.get_collection(\"TestColl5\")\n\n#Create document\n\ndata = dict()\ndata[\"Name\"] = \"John\"\ndata[\"Age\"] = 17\ndata[\"Id\"] = 7\ndata[\"Class\"] = 8\ndata[\"FinalScore\"] = 76\n\nx= col1.insert_one(data).inserted_id\n\nprint(\"Data created successfully\")\n",
"text": "",
"username": "Sathyan_R"
},
{
"code": "#Create collection\ncol1 = db.get_collection(\"TestColl5\")\ncol1 = db.create_collection(\"TestColl5\")\n",
"text": "Rather:",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "@Sathyan_R if you’re still facing this error please post the entire Python traceback so we can see which line(s) the error gets raised on.",
"username": "Shane"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
] | Typeerror 'mongoclient' object is not callable | 2023-04-22T03:31:18.143Z | Typeerror ‘mongoclient’ object is not callable | 771 |
null | [
"sharding"
] | [
{
"code": "",
"text": "When following the MongoDB docs to install version 6.0 I receive a few errors that I cannot find the solution too.The errors:The following packages have unmet dependencies:\nmongodb-org-mongos : Depends: libssl1.1 (>= 1.1.1) but it is not installable\nmongodb-org-server : Depends: libssl1.1 (>= 1.1.1) but it is not installable\nE: Unable to correct problems, you have held broken packages.I get these when I attempt to do: sudo apt-get install -y mongodb-org",
"username": "Tyler_Kitts"
},
{
"code": "",
"text": "However it is possible that the RPi 4 may not meet the minimum specs of ARMv8.2-AIn which case you’re limited to MongoDB 4.4 or try compiling from source.",
"username": "chris"
},
{
"code": "",
"text": "Indeed the Pi doesn’t meet the minimum officially supported specs as it runs on ARMv8.0-A.In my personal capacity I have built from source and have been running MongoDB on a Pi at home for a few months now.You can find my custom-build binaries on Github.",
"username": "Matt_Kneiser"
}
] | Error while installing mongoDB on a raspberry pi 4 using ubuntu 22.04 | 2022-12-31T18:28:22.321Z | Error while installing mongoDB on a raspberry pi 4 using ubuntu 22.04 | 3,139 |
null | [
"server",
"installation"
] | [
{
"code": "> mongod\n[1] 695793 illegal hardware instruction (core dumped) mongod\n Static hostname: <hide>\n Icon name: computer\n Machine ID: <hide>\n Boot ID: <hide>\nOperating System: Ubuntu 22.04.2 LTS\n Kernel: Linux 5.15.0-1027-raspi\n Architecture: arm64\nDistributor ID: Ubuntu\nDescription: Ubuntu 22.04.2 LTS\nRelease: 22.04\nCodename: jammy\n",
"text": "i has try all ver of MongoDB from 4.2.X to 6.0.X and none of them has run with outand I has try all way I have\nraspberry infoand ubuntu info",
"username": "Tuan_Duong"
},
{
"code": "",
"text": "Hello!The Pi platform is not officially supported at this time.In my personal capacity, I have built MongoDB 6.x from source and published the binaries on Github. Feel free to raise an issue on that repo if you run into issues. I have been running these binaries on a Pi 4 running a 64-bit OS for ~6 months now.",
"username": "Matt_Kneiser"
}
] | Raspberry pi 4 (4GB ram) can not install mongodb server on any ver | 2023-04-25T15:35:22.243Z | Raspberry pi 4 (4GB ram) can not install mongodb server on any ver | 1,056 |
null | [
"replication",
"upgrading"
] | [
{
"code": "",
"text": "Hello all,I am a little stuck with a database that is in production which I want to “secure” and “update”.The actual crazy situation Is the following (don’t ask me why…):My first issue is that I cannot sync the secondary with the primary:My following plans will be:Have you any idea how I can proceed or help the sync?Error when it occurs:E REPL [replication-27] Initial sync attempt failed – attempts left: 9 cause: Networ\nkInterfaceExceededTimeLimit: error fetching oplog during initial sync :: caused by :: error in fetcher batch callback:\nOperation timed outThanks a lot for your help.",
"username": "PierreP3"
},
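To see where the syncing member is getting stuck and how far behind it is, the standard shell helpers are usually enough; a minimal sketch to run in the mongo shell against the replica set (on older shells the second helper is named rs.printSlaveReplicationInfo()):

```js
// Summarise each member's state and last heartbeat message.
rs.status().members.forEach(function (m) {
  print(m.name + "  " + m.stateStr + "  " + (m.lastHeartbeatMessage || ""));
});

// Replication lag per secondary (run this on the primary).
rs.printSecondaryReplicationInfo();
```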
{
"code": "",
"text": "Did you check network usage during the sync? the error looks like to be caused by network timeout. Is the sync source overloaded during the sync?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hello, thanks for your answer.At the time of the issue:Finally, I was able to stabilize the cluster but not using mongod:Then I added another replica that syncs normally (80 mbit/s) for several hours.Now I have the production “stable” and started to test the migration to newer versions.",
"username": "PierreP3"
}
] | Resync Replicaset always restarting from scratch | 2023-04-16T23:38:02.044Z | Resync Replicaset always restarting from scratch | 897 |
null | [
"python",
"containers"
] | [
{
"code": "",
"text": "Good afternoon Mongo Grey Beards!I am looking to compile the latest version of mongoDB for Raspberry pi 4 armv7.\nI have custom compiled GCC 11.3 as Ubuntu 20.04 (WSL) does not meet the 11.3 minimum requirement for compilation.I need to be able to pass the GCC binary directory to python as I opted not to overwrite my current GCC binary but install it to a different directory. However I am not sure how to do so.I also don’t even know if 6.x support armv7 as a potential compilation target. From my google foo I have seen that 5.x is possible.The end goal is to throw it into a docker container and use it for learning.\nI am not an expert when it comes to compiling software by any means. This is a learning experience for me and I have set mongoDB in my sights as a learning experience. I am a network engineer by trade so please be gentle if I ask dumb questions.–edit\nScrolling through the SConstruct file on github I don’t see any add_options or environment variables to specify a GCC binary location. It is looking more and more like I am going to be forced into overwriting my base GCC install which is undesired.",
"username": "Andrew_Creigh"
},
{
"code": "aarch64-linux-gnu-gccAR=/usr/bin/aarch64-linux-gnu-arCC=/usr/bin/aarch64-linux-gnu-gcc-${compiler_version}CXX=/usr/bin/aarch64-linux-gnu-g++-${compiler_version}CCFLAGS=\"-march=armv8-a+crc -moutline-atomics -mtune=cortex-a72\"-moutline-atomics",
"text": "Hello!It looks like you’re on the right track. When you say you have custom compiled GCC, what do you mean by that? To build on WSL for the Pi you will need a cross-compiler toolchain - i.e. a compiler that runs on x86 but produces ARM binaries. You can use aarch64-linux-gnu-gcc for this, and it is distributed on Ubuntu 20.04’s package manager.One point of clarification is that the Raspberry Pi 4 uses an ARMv8-A Cortex A72. I’m not aware of a Pi 4 that uses ARMv7.In order to compile with SCons, you will need to point the build system to several necessary binaries:I would recommend building with GCC 9.4+ which will recognize the -moutline-atomics flag.To cut to the chase, in my personal capacity I have compiled MongoDB from source and published the binaries on Github. I have similarly built a Docker container with these binaries here.I hope this helps!",
"username": "Matt_Kneiser"
}
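{
"note": "Building on the list of binaries above, a hedged sketch of how those values are passed on the SCons command line. This also answers the earlier question about pointing the build at a GCC installed outside the default path, since CC/CXX/AR accept absolute paths. The compiler version and exact flags are assumptions; check the building docs shipped with your MongoDB source tree.",
"example": ```bash
# Hypothetical invocation from a MongoDB source checkout; adjust versions and paths.
python3 buildscripts/scons.py install-mongod \
    AR=/usr/bin/aarch64-linux-gnu-ar \
    CC=/usr/bin/aarch64-linux-gnu-gcc-10 \
    CXX=/usr/bin/aarch64-linux-gnu-g++-10 \
    CCFLAGS="-march=armv8-a+crc -moutline-atomics -mtune=cortex-a72" \
    --disable-warnings-as-errors
```
},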
] | Mongo 6.x on raspberry PI 4 compilation | 2023-01-20T17:28:10.740Z | Mongo 6.x on raspberry PI 4 compilation | 3,849 |
null | [] | [
{
"code": "",
"text": "According to the documentation MongoDB requires a ARMv8.2-A processor.The Pi 4 has Cortex-A72 cores (I can’t link that), which has a ARMv8.0-A architecture, which seems to indicate that it should not work. There are however guides for installing MongoDB on the Pi. Does it actually work (and the documentation is problematic) or was the instructions adapted from other platforms without actually being tested?(That note is there at least back to the MongoDB 4.4 documentation)(The Pi 3 has an even older Cortex-A53 core, which should have the same issue)",
"username": "Gert_van_den_Berg"
},
{
"code": "",
"text": "trying this build out and indeed getting illegal instruction with a core dump - am thinking it needs older version than the compatibility matrix (6.0.4+, and 6.0.3+ not sure how to read that) suggests, or it flat out just doesn’t work https://www.mongodb.com/docs/manual/administration/production-notes/#std-label-prod-notes-supported-platforms-ARM64 , so not really clear which version for the Cortex A72 in the RaspberryPI 4B",
"username": "ZebraZebraZebra"
},
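{
"note": "One way to see why a prebuilt binary dies with an illegal instruction is to compare what the CPU advertises against what the build targets. A rough sketch, assuming Linux on the Pi (exact feature names vary by kernel): a Cortex-A72 (ARMv8.0-A) will not list the ARMv8.2-era features, such as the LSE “atomics” flag, that official arm64 mongod builds assume.",
"example": ```bash
# What the CPU actually supports:
grep -m1 -i 'features' /proc/cpuinfo
lscpu | grep -iE 'architecture|model name'
```
},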
{
"code": "systemctl2020-08-09 08:09:07 UTC; 4s ago",
"text": "The blog post from Sept 2022 has systemctl output showing 2020-08-09 08:09:07 UTC; 4s ago, so I suspect that someone might have adapted the instructions from another platform without actually testing it…",
"username": "Gert_van_den_Berg"
},
{
"code": "",
"text": "An official response would be useful - does Mongo require a ARMv8.2 CPU or does it work on the Raspberry Pi 3/4?",
"username": "Gert_van_den_Berg"
},
{
"code": "",
"text": "Hello,The Pi platform is not officially supported at the time of this writing.However, in my personal capacity I have built the server from source and have instructions and binaries sitting on Github. I personally have been running one of these 6.x binaries on a Pi 4 for ~6 months with sensors writing to a time-series collection. Feel free to file an issue on that repo if you run into issues with the binaries or instructions.",
"username": "Matt_Kneiser"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB and the Pi 4 on Ubuntu 64-bit (aka ARMv8.0-A support) | 2023-04-04T09:48:04.972Z | MongoDB and the Pi 4 on Ubuntu 64-bit (aka ARMv8.0-A support) | 1,706 |
null | [
"data-modeling",
"indexes"
] | [
{
"code": "",
"text": "I have multiple games I want to store data for. The data for each player is stored in a single document with a UUID to identify it. The two options I have thought of so far is:The problem with option 1 is that I do not know how I would get all stats for all games per player while having good performance. Sending a query to each collection simultaneously to get all their stats seems wrong, although I think this is the best option so far.For option 2, I would not be able to create indexes on the fields that needs to be indexed because the number of games and fields per game could be expanded at any time. It would also be very many read and write operations on a single collection which seems very wrong.Does anyone have any advice on this?",
"username": "Henrik"
},
{
"code": "",
"text": "Hi @Henrik ,Having a collection per game doesn’t sound right. We have a known antipatterns of having to many collections and indexes and we should avoid it as much as possible.The MongoDB locking is per document so writing similar objects to the same collection should not interfere with concurrency as long as you hit different documents.Additionally, as your database grow and you wish going to sharding the deployment having single collection to be sharded makes more sense than lots of small ones which doesn’t.You might consider having a hybrid solutions if the single collection doesn’t work like collection per month or per 6 months for all games.Now if you have a large number of attributes that you need to index for search you may consider the attribute pattern for each gameLearn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.I suggest reading the following articleshttps://www.mongodb.com/article/schema-design-anti-pattern-summary/https://www.mongodb.com/article/mongodb-schema-design-best-practices/Best\nPavel",
"username": "Pavel_Duchovny"
},
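{
"note": "To make the attribute pattern mentioned above concrete, here is a hedged sketch; the collection and field names are invented for illustration. Per-game stats become an array of key/value pairs so a single multikey index covers every stat you may want to query.",
"example": ```javascript
// One document per player per game, stats stored as { k, v } pairs
db.playerStats.insertOne({
  playerId: UUID(),            // the per-player UUID from the question
  game: "space-race",
  stats: [
    { k: "wins",  v: 12 },
    { k: "score", v: 4512 },
    { k: "playTimeMinutes", v: 640 }
  ]
});

// One compound multikey index instead of one index per stat field
db.playerStats.createIndex({ game: 1, "stats.k": 1, "stats.v": 1 });
```
},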
{
"code": "\n\"Paths\":\n{\n \"PathOrder\": [Path1, Path3, Path4, Path2] \n \"Path1\":\n { \t\n \"Stage1\":\n {\n \"GameName\": \"SpeedDrill\", \n\t\"PlayTime\":92,\n\t\"FirstPlayTime\": 168123123,\n\t\"LastPlayTime\": 168124567,\n\t\"SpecData\": \n\t\t{[\n\t\t{\"key\": \"Crashes\", \"value\": 2},\n\t\t{\"key\": \"AvgSpeed\", \"value\": 65},\n\t\t{\"key\": \"Laps\", \"value\": 7}\n\t\t]}\n \"completed\": false,\n \"attempts\": 1\n }, \n \"Stage2\":\n {\n\t\"GameName\": \"AccuracyDrill\", \n\t...\n }\n\n }\n \"Path2\":\n ...\n",
"text": "HI @Pavel_Duchovny,Thanks for the good reference to Henriks problem. I have bit similar, and did some research about patterns.I have a followup question and thought perhaps it would be ok to continue in this thread, feel free to move or ask me to create a new topic.I’m tracking users progress like following. User has few paths to complete. Paths have few stages to complete that are basically different games.The data is used to give feedback to the user and also for analytical purposes and exported to olap(bq).I was also thinking about having different games data\nin their own collections, but Pavel didn’t suggest that approach. Different games amount would be less than fifty.I was thinking something like following, but started thinking that maybe all the stage data should be as key-value pairs? Also if want at some point more detailed data about gameplay for analytical purposes,\nmaybe that data should be in its own collection to prevent not to fill 15 mb limit and because it is not accessed by the app?",
"username": "Jussi_R1"
},
{
"code": "",
"text": "I flagged the first post as spam rather than the last one.I do not see the option to undo my mistake.",
"username": "steevej"
}
] | Advice for modeling a database for multiple games | 2021-06-24T13:14:29.441Z | Advice for modeling a database for multiple games | 3,457 |
null | [
"swift",
"flexible-sync"
] | [
{
"code": "",
"text": "I want to integrate the conflict resolution for my iOS app.\nLet us suppose I have 2 users.\nUserA: Admin role\nUserB: Normal UserBoth the users are updating the same records at the same time in offline mode. User A is updating first then User B is updating the same record when internet is available then changes made by the User A should remain and it should sync to the Altas i.e. it should work opposite to Last Update Wins rule of conflict resolution.Apart from this I want to maintain the History of the operation performed in the app. How can I maintain the history of the users of a app.\nThanks in advanvce.",
"username": "Aditya_Kumar8"
},
{
"code": "",
"text": "Hi, the best way to implement the custom resolution in this way is probably to structure your data as a list. Then you can have the different roles and do different things with the data. IE, the Admin can insert to the front of the list and the Normal user can insert to the back of the list. Then, you also have the history of the changes to this field.Docs: https://www.mongodb.com/docs/atlas/app-services/sync/details/conflict-resolution/#custom-conflict-resolutionAnother way of achieving this is to have the Admin user always fully replace an object (delete it and re-insert it). That will result in the Admin writes always winning.Let me know if this works.\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "@Tyler_Kaye Could you please help me out via giving sample code for the above approach.\nAnd my next question was after conflicts resolution how we can keep all previous updates in a log that could be viewed as a version history in the application if needed. I’ll be really thank full to you.",
"username": "Aditya_Kumar8"
},
{
"code": "type AuditInfo struct {\n UserId string \n NewValue string\n UpdatedAt Time\n}\n\ntype Person struct {\n ID ObjectId \n Age int \n Name string \n NameHistory []AuditInfo\n}\nrealm.write({\n person.Name = \"new name\" \n person.AuditInfo.push({\n UserId = \"my user\", \n NewValue = \"new name\", \n UpdatedAt = time.Now(), \n })\n})\n// For the admin \nrealm.write({\n personCopy := person.Copy()\n person.Delete() \n personCopy.Name = \"new name\"\n personCopy.Insert()\n})\n",
"text": "Hi,I, unfortunately, do not have as much time to respond here as I would like ideally and a full code sample might take a bit so I can pseudo-code it here if that works.There are two things going on. the first is how to have some sort of “audit”. That can be done by storing not just a primitive but also a “list”. Therefore you could have something like this:Now, using the above, if we wanted to capture some sort of “audit” of a person as we change their name, we can update the code to be something like:Then you can have the “Name” field represent the name of the person, and if you are interested in observing the history of changes made you can look at the NameHistory field.Now for the second issue of “how do you structure better semantics for an admin who always wants their writes to win”, you can do something like this to delete the object and re-create it to force its updates to “win”Hope this helps out!\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Thanks for the reply. My question is can we make Class type object in place of Struct type. i.e in Realm Swift SDK we are creating realm objects in Class type when we are creating Struct type then we are getting error 'wrappedValue’ is unavailable: @Persisted can only be used as a property on a Realm object.",
"username": "Aditya_Kumar8"
},
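{
"note": "Since the error quoted above comes from applying @Persisted to a struct, here is a hedged Swift sketch of the class-based shape the Realm Swift SDK expects; the type and property names are invented, and the API usage is from memory, so verify it against the SDK docs. Realm models are classes deriving from Object or EmbeddedObject, not structs.",
"example": ```swift
import RealmSwift

// Audit entries kept as an embedded list, mirroring the pseudo-code earlier in the thread
class AuditInfo: EmbeddedObject {
    @Persisted var userId: String
    @Persisted var newValue: String
    @Persisted var updatedAt: Date
}

class Person: Object {
    @Persisted(primaryKey: true) var _id: ObjectId
    @Persisted var name: String
    @Persisted var nameHistory: List<AuditInfo>
}
```
},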
{
"code": "",
"text": "Hi, that seems like an unrelated question (correct me if I am wrong). I do not have familiarity with the Swift SDK but I would recommend creating a new post for this with the proper tags and keywords and it will get routed to the proper team.",
"username": "Tyler_Kaye"
}
] | Custom Conflict Resolution i.e Role based conflict resolution | 2023-04-17T13:26:37.502Z | Custom Conflict Resolution i.e Role based conflict resolution | 984 |
null | [
"transactions"
] | [
{
"code": "",
"text": "Hi, is there a way to find what happens for an X document on the X date?\nI am trying to find out when the document was modified and what was modified.\nThanks.",
"username": "Ed_Durguti"
},
{
"code": "local.oplog.rs_id",
"text": "Hi @Ed_Durguti,If you are running a Replica Set and if your oplog window is large enough, you can search in the collection local.oplog.rs based on the _id of this document and you will find the latest operations if they haven’t been overwritten already.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "@MaBeuLux88,\nThanks for your reply. I am running the DB in Atlas. Sorry forgot to mention. Is it possible to get these kind of logs from Atlas?",
"username": "Ed_Durguti"
},
{
"code": "use local\ndb.oplog.rs.find()\nAtlas pre-prod-shard-0 [primary] local> rs.printReplicationInfo()\nactual oplog size\n'10000 MB'\n---\nconfigured oplog size\n'10000 MB'\n---\nlog length start to end\n'23828 secs (6.62 hrs)'\n---\noplog first event time\n'Mon Dec 12 2022 13:07:13 GMT+0100 (Central European Standard Time)'\n---\noplog last event time\n'Mon Dec 12 2022 19:44:21 GMT+0100 (Central European Standard Time)'\n---\nnow\n'Mon Dec 12 2022 19:44:24 GMT+0100 (Central European Standard Time)'\n",
"text": "Hi @Ed_Durguti,Yes of course!\nMongoDB on Atlas is like MongoDB anywhere else, it’s the same thing.But you’ll probably want to filter, depending what you are looking for.You will probably also need to check the size of your Oplog Window. The more you write (Insert / Update / Delete) docs, the smaller the window. Here is an example of my Open Data COVID Project:As you can see here, my window is quite short because I clear and reimport everything every 4 hours.If you want, you can enlarge the oplog size in the cluster configuration in the Additional Options > More Configuration Options. (I already did that for this particular cluster for instance).Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
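{
"note": "A hedged sketch of the kind of filter mentioned above, run against a replica set member; the namespace and ObjectId are placeholders. Update and delete entries reference the document _id under o2, while inserts and replaces carry it under o.",
"example": ```javascript
use local
db.oplog.rs.find({
  ns: "mydb.mycoll",
  $or: [
    { "o._id":  ObjectId("5f1d3b9e2c8a4e0012345678") },   // inserts / replaces
    { "o2._id": ObjectId("5f1d3b9e2c8a4e0012345678") }    // updates / deletes
  ]
}).sort({ ts: -1 })
```
},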
{
"code": "",
"text": "Hey @MaBeuLux88 I am facing similar issue. I am trying to find why the MongoDb transaction is failing. But when i look at the opLog i don’t see any errors related to the transactions failure. Where can i find this information?",
"username": "Anagh_Hegde"
},
{
"code": "",
"text": "when i look at the opLog i don’t see any errors related to the transactions failureI believe oplog is not for failure information. You should check mongodb server log file.",
"username": "Kobe_W"
}
] | Transaction Logging | 2022-12-08T08:49:44.879Z | Transaction Logging | 3,285 |
null | [] | [
{
"code": "deleteOnetimestamp_1__id_1deleteOne",
"text": "Hello community,I am wondering whether a simple deleteOne requires a FETCH operation or not.To give some more context:\nGiven a database cluster with a WT cache size capable of holding 100mio documents. The single collection of this cluster contains 200mio documents.Now my application picks 100mio random documents (out of the 200mio) and updates a timestamp for each document. I’d expect to have almost all of this 100mio “touched” documents in the WT cache now.My applications task is now to DELETE all documents with an untouched timestamp. Given an index timestamp_1__id_1 I can query for the documents _id by reading from the index only.But what happens next, when I call a deleteOne for every _id of my queries result set? Does MongoDB FETCH the to be deleted documents into the cache, evicting all my previously touched documents I’d like to keep and work with or is no FETCH required for the delete operation?Best,\nJens",
"username": "Jens_Lippmann"
},
{
"code": "",
"text": "You can see what “explain” says first.Also use bulk write for multiple deletes and it should run faster than deleteOne.",
"username": "Kobe_W"
},
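{
"note": "One way to answer the FETCH question empirically, as hinted above, is to ask the planner for executionStats on a representative delete; a hedged sketch with placeholder collection name and _id. The presence or absence of a FETCH stage in the winning plan is what to look for.",
"example": ```javascript
db.runCommand({
  explain: {
    delete: "mycoll",
    deletes: [
      { q: { _id: ObjectId("5f1d3b9e2c8a4e0012345678") }, limit: 1 }
    ]
  },
  verbosity: "executionStats"
})
```
},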
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does a DELETE operation require a document FETCH? | 2023-04-25T06:26:42.817Z | Does a DELETE operation require a document FETCH? | 389 |
[] | [
{
"code": "",
"text": "I want to resume my database but when I click resume button it says mongodb backup snapshots are too old.\nHow to resume my database.\n\nScreenshot_20230425-1900331080×1920 126 KB\n",
"username": "Fahim_Sium"
},
{
"code": "",
"text": "Did you contact MongoDB Support like it says to ?",
"username": "chris"
}
] | Resume my database | 2023-04-25T16:13:54.622Z | Resume my database | 350 |
|
null | [
"replication"
] | [
{
"code": "",
"text": "I have 2 servers with long distance. I want to have data backup, I don’t need auto failover. And I don’t want performance decrease by write ack since it’s over internet and long distance.So I setup a replica set with 2 members, with secondary member priority=0 and votes=0. Is this setup OK for my expectation?",
"username": "Hong_zhi_guo"
},
{
"code": "",
"text": "I don’t need auto failoverpriority=0 and votes=0 can work.And I don’t want performance decrease by write ackTo achieve that, you can use w=1 write concern. (with journaling enabled)",
"username": "Kobe_W"
}
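{
"note": "A hedged sketch of the layout described above (hostnames are placeholders): the second member carries data but cannot vote or become primary, and writes are acknowledged by the primary alone.",
"example": ```javascript
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "site-a.example.com:27017" },
    { _id: 1, host: "site-b.example.com:27017", priority: 0, votes: 0 }
  ]
})

// w:1 with journaling, so the long-distance replication never blocks the client
db.mycoll.insertOne({ x: 1 }, { writeConcern: { w: 1, j: true } })
```
},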
] | 2 members rs for data backup | 2023-04-25T05:35:42.598Z | 2 members rs for data backup | 646 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "[\n {\n $search: {\n compound: {\n should: [\n {\n phrase: {\n query: \"bonding\",\n path: \"name\",\n score: { boost: { value: 1.0 } },\n },\n },\n {\n phrase: {\n query: \"bonding\",\n path: \"sections.content\",\n score: {\n boost: {\n value: 0.10000000000000001,\n },\n },\n },\n },\n {\n phrase: {\n query: \"bonding\",\n path: \"tags\",\n score: {\n boost: {\n value: 0.59999999999999998,\n },\n },\n },\n },\n ],\n },\n highlight: {\n path: [\n \"name\",\n \"sections.content\",\n \"tags\",\n ],\n },\n index: \"law_summary_search\",\n },\n },\n {\n $match: {\n isDeleted: false,\n isActive: true,\n $or: [\n { excludedFromSearchResult: false },\n { excludedFromSearchResult: null },\n ],\n \"state.key\": { $in: [\"CA\"] },\n },\n },\n {\n $project: {\n name: 1,\n sections: 1,\n tags: 1,\n state: 1,\n categoryId: 1,\n country: 1,\n county: 1,\n city: 1,\n topicId: 1,\n epic: 1,\n score: { $meta: \"searchScore\" },\n highlights: { $meta: \"searchHighlights\" },\n },\n },\n { $sort: { score: { $meta: \"textScore\" } } },\n { $skip: NumberLong(0) },\n { $limit: NumberLong(12) },\n]\n",
"text": "This issue occurred from 19:04/13/2023 UTC and is no longer happening now",
"username": "Long_Van"
},
{
"code": "",
"text": "Hello @Long_Van ,Welcome to The MongoDB Community Forums! Also, could you contact the Atlas in-app chat support team regarding this? Please provide them with the errors you’re receiving as well.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi @Tarun_Gaur ,Yes, this is the first time I see this issue.I didn’t do any changes to resolve this issue.Thanks,\nLong",
"username": "Long_Van"
}
] | Command aggregate failed: $search is not allowed with a non-simple collation | 2023-04-14T03:54:03.829Z | Command aggregate failed: $search is not allowed with a non-simple collation | 656 |
null | [
"aggregation"
] | [
{
"code": "$bucket[{\n \"_id\": {\n \"$oid\": \"6447544a4e512379dee1ced4\"\n },\n \"name\": \"Mike Trout\",\n \"uniformNumber\": 27,\n \"team\": \"Los Angeles Angels\",\n \"date\": \"2023/02\",\n \"totalHomeruns\": 333\n},{\n \"_id\": {\n \"$oid\": \"644755b54e512379dee1ced5\"\n },\n \"name\": \"Shohei Ohtani\",\n \"uniformNumber\": 17,\n \"team\": \"Los Angeles Angels\",\n \"date\": \"2023/03\",\n \"totalHomeruns\": 337\n},{\n \"_id\": {\n \"$oid\": \"644755b54e512379dee1ced6\"\n },\n \"name\": \"Aaron Judge\",\n \"uniformNumber\": 99,\n \"team\": \"New York Yankees\",\n \"date\": \"2023/01\",\n \"totalHomeruns\": 295\n},{\n \"_id\": {\n \"$oid\": \"644755b54e512379dee1ced7\"\n },\n \"name\": \"Mookie Betts\",\n \"uniformNumber\": 11,\n \"team\": \"Los Angeles Dodgers\",\n \"date\": \"2023/01\",\n \"totalHomeruns\": 251\n},{\n \"_id\": {\n \"$oid\": \"644755b54e512379dee1ced8\"\n },\n \"name\": \"Bryce Harper\",\n \"uniformNumber\": 25,\n \"team\": \"Philadelphia Phillies\",\n \"date\": \"2023/02\",\n \"totalHomeruns\": 217\n},{\n \"_id\": {\n \"$oid\": \"644755b54e512379dee1ced9\"\n },\n \"name\": \"Ronald Acuna Jr.\",\n \"uniformNumber\": 23,\n \"team\": \"Atlanta Braves\",\n \"date\": \"2023/02\",\n \"totalHomeruns\": 229\n},{\n \"_id\": {\n \"$oid\": \"644755b54e512379dee1cee1\"\n },\n \"name\": \"Marcus Semien\",\n \"uniformNumber\": 19,\n \"team\": \"Texas Rangers\",\n \"date\": \"2023/03\",\n \"totalHomeruns\": 196\n},{\n \"_id\": {\n \"$oid\": \"644755b54e512379dee1cee2\"\n },\n \"name\": \"Vladimir Guerrero Jr.\",\n \"uniformNumber\": 6,\n \"team\": \"Toronto Blue Jays\",\n \"date\": \"2023/03\",\n \"totalHomeruns\": 188\n},{\n \"_id\": {\n \"$oid\": \"644755b54e512379dee1cee3\"\n },\n \"name\": \"Yordan Alvarez\",\n \"uniformNumber\": 10,\n \"team\": \"Houston Astros\",\n \"date\": \"2023/02\",\n \"totalHomeruns\": 137\n},{\n \"_id\": {\n \"$oid\": \"644755b54e512379dee1cee4\"\n },\n \"name\": \"Andrew Benintendi\",\n \"uniformNumber\": 23,\n \"team\": \"Chicago Whitesox\",\n \"date\": \"2023/02\",\n \"totalHomeruns\": 116\n},{\n \"_id\": {\n \"$oid\": \"644755b54e512379dee1cee5\"\n },\n \"name\": \"Wander Franco\",\n \"uniformNumber\": 33,\n \"team\": \"Tampa Bay Rays\",\n \"date\": \"2023/02\",\n \"totalHomeruns\": 97\n}]\n$bucket{\n groupBy: '$totalHomeruns',\n boundaries: [\n 0, 95, 105,115,125,135,145,155,165,175,185,195,\n 205,215,225,235,245,255,265,275,285,295,\n 305,315,325,335,345,355,365,375,385,395,405,\n ],\n default: 'Other',\n output: {\n count: {\n $sum: 1,\n },\n playerData: {\n $push: {\n name: '$name',\n uniformNumber: '$uniformNumber',\n team: '$team',\n totalHomeruns: '$totalHomeruns',\n date: '$date'\n },\n },\n },\n}\n$bucket[\n {\n _id: 95,\n count: 1,\n playerData: [\n {\n name: 'Wander Franco',\n uniformNumber: 33,\n team: 'Tampa Bay Rays',\n totalHomeruns: 97,\n date: '2023/02',\n },\n ],\n },\n {\n _id: 115,\n count: 1,\n playerData: [\n {\n name: 'Andrew Benintendi',\n uniformNumber: 23,\n team: 'Chicago Whitesox',\n totalHomeruns: 116,\n date: '2023/02',\n },\n ],\n },\n\n ...\n \n {\n _id: 335,\n count: 1,\n playerData: [\n {\n name: 'Shohei Ohtani',\n uniformNumber: 17,\n team: 'Los Angeles Angels',\n totalHomeruns: 337,\n date: '2023/03',\n },\n ],\n }\n]\n$bucket",
"text": "Having migrated to Amazon DocumentDB, I suppose I am in in dire straits as it does not support $bucket stage operator, as mentioned in Amazon DocumentDB - Supported MongoDB APIs, Operations, and Data Types.– Data –This is raw data, one can insert it in MongoDB :– Aggregation –The $bucket stage looks like this :– Expected Result –This is what $bucket stage above produces :– Question –Is it possible to get the expected output above without $bucket stage operator?Thank you in advance!",
"username": "marc"
},
{
"code": "$group{\n _id: {\n $switch: {\n branches: [\n { case: { $lt: [ \"$totalHomeruns\", 95] }, then: 0 },\n { case: { $lt: [ \"$totalHomeruns\", 105] }, then: 95 },\n { case: { $lt: [ \"$totalHomeruns\", 115] }, then: 105 },\n { case: { $lt: [ \"$totalHomeruns\", 125] }, then: 115 },\n { case: { $lt: [ \"$totalHomeruns\", 135] }, then: 125 },\n { case: { $lt: [ \"$totalHomeruns\", 145] }, then: 135 },\n { case: { $lt: [ \"$totalHomeruns\", 155] }, then: 145 },\n { case: { $lt: [ \"$totalHomeruns\", 165] }, then: 155 },\n { case: { $lt: [ \"$totalHomeruns\", 175] }, then: 165 },\n { case: { $lt: [ \"$totalHomeruns\", 185] }, then: 175 },\n { case: { $lt: [ \"$totalHomeruns\", 195] }, then: 185 },\n { case: { $lt: [ \"$totalHomeruns\", 205] }, then: 195 },\n { case: { $lt: [ \"$totalHomeruns\", 215] }, then: 205 },\n { case: { $lt: [ \"$totalHomeruns\", 225] }, then: 215 },\n { case: { $lt: [ \"$totalHomeruns\", 235] }, then: 225 },\n { case: { $lt: [ \"$totalHomeruns\", 245] }, then: 235 },\n { case: { $lt: [ \"$totalHomeruns\", 255] }, then: 245 },\n { case: { $lt: [ \"$totalHomeruns\", 265] }, then: 255 },\n { case: { $lt: [ \"$totalHomeruns\", 275] }, then: 265 },\n { case: { $lt: [ \"$totalHomeruns\", 285] }, then: 275 },\n { case: { $lt: [ \"$totalHomeruns\", 295] }, then: 285 },\n { case: { $lt: [ \"$totalHomeruns\", 305] }, then: 295 },\n { case: { $lt: [ \"$totalHomeruns\", 315] }, then: 305 },\n { case: { $lt: [ \"$totalHomeruns\", 325] }, then: 315 },\n { case: { $lt: [ \"$totalHomeruns\", 335] }, then: 325 },\n { case: { $lt: [ \"$totalHomeruns\", 345] }, then: 335 },\n { case: { $lt: [ \"$totalHomeruns\", 355] }, then: 345 },\n { case: { $lt: [ \"$totalHomeruns\", 365] }, then: 355 },\n { case: { $lt: [ \"$totalHomeruns\", 375] }, then: 365 },\n { case: { $lt: [ \"$totalHomeruns\", 385] }, then: 375 },\n { case: { $lt: [ \"$totalHomeruns\", 395] }, then: 385 },\n { case: { $lt: [ \"$totalHomeruns\", 405] }, then: 395 },\n { case: { $lt: [ \"$totalHomeruns\", 415] }, then: 405 },\n { case: { $lt: [ \"$totalHomeruns\", 425] }, then: 415 },\n { case: { $lt: [ \"$totalHomeruns\", 435] }, then: 425 },\n { case: { $lt: [ \"$totalHomeruns\", 445] }, then: 435 },\n { case: { $lt: [ \"$totalHomeruns\", 455] }, then: 445 },\n { case: { $lt: [ \"$totalHomeruns\", 465] }, then: 455 },\n { case: { $lt: [ \"$totalHomeruns\", 475] }, then: 465 },\n { case: { $lt: [ \"$totalHomeruns\", 485] }, then: 475 },\n { case: { $lt: [ \"$totalHomeruns\", 495] }, then: 485 },\n { case: { $lt: [ \"$totalHomeruns\", 505] }, then: 495 },\n ],\n default: \"Other\"\n }\n },\n count: { $sum: 1 },\n playerData: {\n $push: {\n name: \"$name\",\n uniformNumber: \"$uniformNumber\",\n team: \"$team\",\n totalHomeruns: \"$totalHomeruns\",\n date: \"$date\"\n }\n }\n}\n_id$bucket$sort{\n _id: 1\n}\n$bucket[\n {\n \"_id\": 95,\n \"count\": 1,\n \"playerData\": [\n {\n \"date\": \"2023/02\",\n \"name\": \"Wander Franco\",\n \"team\": \"Tampa Bay Rays\",\n \"totalHomeruns\": 97,\n \"uniformNumber\": 33\n }\n ]\n },\n {\n \"_id\": 115,\n \"count\": 1,\n \"playerData\": [\n {\n \"date\": \"2023/02\",\n \"name\": \"Andrew Benintendi\",\n \"team\": \"Chicago Whitesox\",\n \"totalHomeruns\": 116,\n \"uniformNumber\": 23\n }\n ]\n },\n {\n \"_id\": 135,\n \"count\": 1,\n \"playerData\": [\n {\n \"date\": \"2023/02\",\n \"name\": \"Yordan Alvarez\",\n \"team\": \"Houston Astros\",\n \"totalHomeruns\": 137,\n \"uniformNumber\": 10\n }\n ]\n },\n {\n \"_id\": 185,\n \"count\": 1,\n \"playerData\": [\n {\n \"date\": 
\"2023/03\",\n \"name\": \"Vladimir Guerrero Jr.\",\n \"team\": \"Toronto Blue Jays\",\n \"totalHomeruns\": 188,\n \"uniformNumber\": 6\n }\n ]\n },\n {\n \"_id\": 195,\n \"count\": 1,\n \"playerData\": [\n {\n \"date\": \"2023/03\",\n \"name\": \"Marcus Semien\",\n \"team\": \"Texas Rangers\",\n \"totalHomeruns\": 196,\n \"uniformNumber\": 19\n }\n ]\n },\n {\n \"_id\": 215,\n \"count\": 1,\n \"playerData\": [\n {\n \"date\": \"2023/02\",\n \"name\": \"Bryce Harper\",\n \"team\": \"Philadelphia Phillies\",\n \"totalHomeruns\": 217,\n \"uniformNumber\": 25\n }\n ]\n },\n {\n \"_id\": 225,\n \"count\": 1,\n \"playerData\": [\n {\n \"date\": \"2023/02\",\n \"name\": \"Ronald Acuna Jr.\",\n \"team\": \"Atlanta Braves\",\n \"totalHomeruns\": 229,\n \"uniformNumber\": 23\n }\n ]\n },\n {\n \"_id\": 245,\n \"count\": 1,\n \"playerData\": [\n {\n \"date\": \"2023/01\",\n \"name\": \"Mookie Betts\",\n \"team\": \"Los Angeles Dodgers\",\n \"totalHomeruns\": 251,\n \"uniformNumber\": 11\n }\n ]\n },\n {\n \"_id\": 295,\n \"count\": 1,\n \"playerData\": [\n {\n \"date\": \"2023/01\",\n \"name\": \"Aaron Judge\",\n \"team\": \"New York Yankees\",\n \"totalHomeruns\": 295,\n \"uniformNumber\": 99\n }\n ]\n },\n {\n \"_id\": 325,\n \"count\": 1,\n \"playerData\": [\n {\n \"date\": \"2023/02\",\n \"name\": \"Mike Trout\",\n \"team\": \"Los Angeles Angels\",\n \"totalHomeruns\": 333,\n \"uniformNumber\": 27\n }\n ]\n },\n {\n \"_id\": 335,\n \"count\": 1,\n \"playerData\": [\n {\n \"date\": \"2023/03\",\n \"name\": \"Shohei Ohtani\",\n \"team\": \"Los Angeles Angels\",\n \"totalHomeruns\": 337,\n \"uniformNumber\": 17\n }\n ]\n }\n]\n",
"text": "Fortunately, $group operation did the trick :However, the output above does not get sorted based on _id like $bucket’s.\nTherefore, $sort comes in to play :To have a look at entire aggregation, please have a look at MongoDB Playground - $bucket alternative.– Result –This might not be the best answer but better than crying over spilled milk xD",
"username": "marc"
}
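{
"note": "If the long $switch gets unwieldy, the same regular 10-wide boundaries can be computed arithmetically. A hedged variant, assuming DocumentDB supports $cond, $subtract and $mod inside a $group key (check its supported-operator list); the 0–95 edge bucket is handled by the $cond, and values beyond the last boundary keep bucketing rather than falling into an “Other” bucket.",
"example": ```javascript
{
  $group: {
    _id: {
      $cond: [
        { $lt: ["$totalHomeruns", 95] },
        0,
        { $subtract: [
            "$totalHomeruns",
            { $mod: [ { $subtract: ["$totalHomeruns", 95] }, 10 ] }
        ] }
      ]
    },
    count: { $sum: 1 },
    playerData: { $push: { name: "$name", totalHomeruns: "$totalHomeruns" } }
  }
}
```
},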
] | MongoDB - Are there any `$bucket` aggregation operator alternatives for Amazon DocumentDB? | 2023-04-25T07:05:08.790Z | MongoDB - Are there any `$bucket` aggregation operator alternatives for Amazon DocumentDB? | 403 |
null | [
"replication",
"atlas-cluster",
"database-tools",
"atlas"
] | [
{
"code": "",
"text": "In a replica set, I run mongo import twice a day on primary (spaced out - 12 hours) - which upserts 290k records, causing oplog window to fall below 1 hour. It automatically resolves as write frequency is nominal except these two times a day. Should I increase oplog size to keep the window higher?How does oplog entries work, do the old ones get removed after they are successfully sent and run on secondary mongod-s? Any info on that would also be helpful",
"username": "Arun_S_R"
},
{
"code": "",
"text": "This manual covers some aspects of oplog",
"username": "Kobe_W"
},
{
"code": "",
"text": "do the old ones get removed after they are successfully sent and run on secondary mongod-s?No, check above manual for more information. (Basically rule is: 1. more space needed for new oplog entries and 2. the retention period has passed)Should I increase oplog size to keep the window higher?I can’t think of any harm (other than slightly more disk use on oplogs) of doing that.One related thing is possibly flow control. You can tune it a bit so the oplogs are not filled too quickly.",
"username": "Kobe_W"
},
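{
"note": "For reference, the window check and the resize itself look roughly like the sketch below in mongosh. On Atlas the size change is normally made from the cluster configuration UI rather than by running the command yourself, and the command needs admin privileges on each data-bearing member; the 20480 MB value is just an example.",
"example": ```javascript
rs.printReplicationInfo()   // shows the configured size and the current window in hours

db.adminCommand({ replSetResizeOplog: 1, size: 20480 })   // new size in MB
```
},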
{
"code": "",
"text": "I only find this help doc for increasing (resizing without restart) oplog size, and I am having trouble connecting to secondaries. Is there a way to do the resizing via GUI? please let me know - also pls let me know, what is the URI for connecting to secondary and primary - where we can get it from the atlas portal?",
"username": "Arun_S_R"
},
{
"code": "",
"text": "What tier is your cluster. Some features are not available on the shared tiers: M0,M2,M5",
"username": "chris"
}
] | Mongoimport causing oplog window alert - should I increase size? | 2023-04-22T08:02:24.784Z | Mongoimport causing oplog window alert - should I increase size? | 867 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I have 282 Collections in MongoDB database and I want to make ER Diagram of those collections.\nAre there any Tools where I can just import collections and generate ER diagram.",
"username": "Aditya_Dalvi"
},
{
"code": "",
"text": "Hi @Aditya_Dalvi\nplease check out my posting Data Modeling Tools for a starting point.Further links you can checkout:\nGenMyModel\nMoon Modeler\nHackolade\nGleek_\nPlanUMLHope that helps\nMichael",
"username": "michael_hoeller"
}
] | ER Diagram of Collections | 2023-04-25T11:19:49.353Z | ER Diagram of Collections | 873 |
null | [
"kafka-connector"
] | [
{
"code": "curl -i -X POST -H \"Accept:application/json\" -H \"Content-Type:application/json\" localhost:8083/connectors/ -d '''{\n \"name\": \"source_mysql_connector\", \n \"config\": { \n \"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\",\n \"tasks.max\": \"1\", \n \"database.hostname\": \"host.docker.internal\", \n \"database.port\": \"3306\",\n \"database.user\": \"test\",\n \"database.password\": \"$apr1$o7RbW.GvrPIY1\",\n \"database.server.id\": \"8111999\", \n \"database.server.name\": \"db_source\", \n \"database.include.list\": \"example\", \n \"database.history.kafka.bootstrap.servers\": \"broker:29092\", \n \"database.history.kafka.topic\": \"schema-changes.example\",\n \"database.allowPublicKeyRetrieval\":\"true\",\n \"include.schema.changes\": \"true\"\n }\n}'''\ncurl -i -X POST -H \"Accept:application/json\" -H \"Content-Type:application/json\" localhost:8083/connectors/ -d '''{\n \"name\": \"sink_mongodb_connector\", \n \"config\": { \n \"connector.class\": \"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"tasks.max\":\"1\",\n \"topics\":\"db_source.example.employees\",\n \"connection.uri\":\"mongodb://172.17.0.1:27017/example?w=1&journal=true\",\n \"database\":\"example\",\n \"collection\":\"employees\",\n \"value.converter\": \"io.confluent.connect.avro.AvroConverter\",\n \"value.converter.schema.registry.url\": \"http://schema-registry:8081\"\n }\n}'''\n{ \"_id\" : ObjectId(\"60d0e6939e00e22f274ccac1\"), \"before\" : null, \"after\" : { \"id\" : NumberLong(11), \"name\" : \"Steve Shining\", \"team\" : \"DevOps\", \"birthday\" : 11477 }, \"source\" : { \"version\" : \"1.5.0.Final\", \"connector\" : \"mysql\", \"name\" : \"db_source\", \"ts_ms\" : NumberLong(\"1624303251000\"), \"snapshot\" : \"false\", \"db\" : \"example\", \"sequence\" : null, \"table\" : \"employees\", \"server_id\" : NumberLong(6030811), \"gtid\" : null, \"file\" : \"mysql-bin.000003\", \"pos\" : NumberLong(5445), \"row\" : 2, \"thread\" : null, \"query\" : null }, \"op\" : \"c\", \"ts_ms\" : NumberLong(\"1624303251190\"), \"transaction\" : null }{ \"_id\" : ObjectId(\"60d0e6939e00e22f274ccac2\"), \"before\" : null, \"after\" : { \"id\" : NumberLong(12), \"name\" : \"John\", \"team\" : \"Support\", \"birthday\" : 6270 }, \"source\" : { \"version\" : \"1.5.0.Final\", \"connector\" : \"mysql\", \"name\" : \"db_source\", \"ts_ms\" : NumberLong(\"1624303251000\"), \"snapshot\" : \"false\", \"db\" : \"example\", \"sequence\" : null, \"table\" : \"employees\", \"server_id\" : NumberLong(6030811), \"gtid\" : null, \"file\" : \"mysql-bin.000003\", \"pos\" : NumberLong(5445), \"row\" : 3, \"thread\" : null, \"query\" : null }, \"op\" : \"c\", \"ts_ms\" : NumberLong(\"1624303251190\"), \"transaction\" : null }mysql> select * from employees;\n+----+---------------+-----------+------------+------------+\n| id | name | team | birthday |\n+----+---------------+-----------+------------+------------+\n| 1 | Peter Smith | DevOps | 2003-07-21 |\n| 11 | Steve Shining | DevOps | 2001-06-04 |\n| 12 | John | Support | 1987-03-03 |\n+----+---------------+-----------+------------+------------+\n{ \"_id\" : ObjectId(\"60d0e6939e00e22f274ccac2\"), \"name\" : \"John\", \"team\" : \"Support\", \"birthday\" : \"1987-03-03 \"}",
"text": "I am currently using MySQL database as source connector using this config below, I want to monitor changes to a database and send it to mongoDB,Here’s my source connector config,Here’s my sink connector (mongodb) config,Using this I was able to establish the connection and catch the data changes and store them onto mongodb collection for a table called employees,But the problem here is when I checked the collections in mongodb the documents were saved like this,{ \"_id\" : ObjectId(\"60d0e6939e00e22f274ccac1\"), \"before\" : null, \"after\" : { \"id\" : NumberLong(11), \"name\" : \"Steve Shining\", \"team\" : \"DevOps\", \"birthday\" : 11477 }, \"source\" : { \"version\" : \"1.5.0.Final\", \"connector\" : \"mysql\", \"name\" : \"db_source\", \"ts_ms\" : NumberLong(\"1624303251000\"), \"snapshot\" : \"false\", \"db\" : \"example\", \"sequence\" : null, \"table\" : \"employees\", \"server_id\" : NumberLong(6030811), \"gtid\" : null, \"file\" : \"mysql-bin.000003\", \"pos\" : NumberLong(5445), \"row\" : 2, \"thread\" : null, \"query\" : null }, \"op\" : \"c\", \"ts_ms\" : NumberLong(\"1624303251190\"), \"transaction\" : null }{ \"_id\" : ObjectId(\"60d0e6939e00e22f274ccac2\"), \"before\" : null, \"after\" : { \"id\" : NumberLong(12), \"name\" : \"John\", \"team\" : \"Support\", \"birthday\" : 6270 }, \"source\" : { \"version\" : \"1.5.0.Final\", \"connector\" : \"mysql\", \"name\" : \"db_source\", \"ts_ms\" : NumberLong(\"1624303251000\"), \"snapshot\" : \"false\", \"db\" : \"example\", \"sequence\" : null, \"table\" : \"employees\", \"server_id\" : NumberLong(6030811), \"gtid\" : null, \"file\" : \"mysql-bin.000003\", \"pos\" : NumberLong(5445), \"row\" : 3, \"thread\" : null, \"query\" : null }, \"op\" : \"c\", \"ts_ms\" : NumberLong(\"1624303251190\"), \"transaction\" : null }But my mysql database looks like this,I want my collections to look like this,{ \"_id\" : ObjectId(\"60d0e6939e00e22f274ccac2\"), \"name\" : \"John\", \"team\" : \"Support\", \"birthday\" : \"1987-03-03 \"}What am I doing wrong here? Even the delete message is stored in collection like this, it is not able to identify the message and all. How do I fix it? Even the dates are not stored properly?",
"username": "Abhi"
},
{
"code": "",
"text": "You might be able to use a post-processor in the sink to allow only those columns like after.name, after.team, etc",
"username": "Robert_Walters"
},
{
"code": "\"change.data.capture.handler\": \"com.mongodb.kafka.connect.sink.cdc.debezium.rdbms.RdbmsHandler\"",
"text": "Hi @AbhiWhen you want the MongoDB Sink Connector to process CDC events as created by Debezium - in your example from a MySQL instance - you have to make sure to configure the sink connector properly.Read about the options here: https://docs.mongodb.com/kafka-connector/current/kafka-sink-cdc/#change-data-capture-using-debeziumThe most important thing for your example is to configure the following sink connector property:\"change.data.capture.handler\": \"com.mongodb.kafka.connect.sink.cdc.debezium.rdbms.RdbmsHandler\"This should do the trick to insert/update/delete the actual data into the target collection the sink connector writes to.",
"username": "hpgrahsl"
},
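{
"note": "Putting that together with the sink configuration from the original post, the registered config would look roughly like this; every value other than the added change.data.capture.handler line is copied from the thread above.",
"example": ```json
{
  "name": "sink_mongodb_connector",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "tasks.max": "1",
    "topics": "db_source.example.employees",
    "connection.uri": "mongodb://172.17.0.1:27017/example?w=1&journal=true",
    "database": "example",
    "collection": "employees",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "change.data.capture.handler": "com.mongodb.kafka.connect.sink.cdc.debezium.rdbms.RdbmsHandler"
  }
}
```
},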
{
"code": "",
"text": "Hi @Abhi I am also doing a similar thing. I followed @hpgrahsl solution and now data is being inserted in the way I want. But there is problem with date field. In MySQL, I have date stored in datetime format and I want the date to get stored as as string literal datetime expression in mongodb but it is getting stored as long timestamp expression. Can you please help.Thanks",
"username": "Sudhir_Daga"
},
{
"code": "",
"text": "Assuming that you are using Debezium MySQL Source connector, right? If it captures your changes from the mysql table it will convert temporal types accordingly. So if you look into the corresponding kafka topic you should see that the field in question is already stored with a numeric type. You find details about this in the Debezium docs here Debezium connector for MySQL :: Debezium Documentation (make sure to verify which version of Debezium you are running so that you don’t read docs for a different version). So I’m afraid the mongodb sink connector in this case cannot directly give you the temporal type conversion you want to have. You can look into kafka connect transformations (SMTs) and check if you can configure something you need. The TimestampConverter is a good starting point. Your problem will most likely be that the date/time field is a nested field within the kafka records payload and I think this isn’t supported by the TimestampConverter directly. You can combine multiple SMTs. If you cannot do it with the existing pre-built SMTs you might fallback to writing your own custom SMT for that.",
"username": "hpgrahsl"
},
{
"code": "",
"text": "Thank you @hpgrahsl. My issue was resolved by using the Debezium MySQL connector’s built-in support for converting MySQL’s temporal types to Kafka Connect’s temporal types. I added these to parameter’s in my mysql connector config and issue was resolved.“time.precision.mode”: “connect”,\n“time.integral.types”: “timestamp,date”",
"username": "Sudhir_Daga"
},
{
"code": "",
"text": "If the built-in temporal conversions can help you it’s definitely the easiest and in that case better option. Happy to hear you got it working according to you requirements!",
"username": "hpgrahsl"
}
] | MongoDB as sink connector not capturing data as expected - kafka? | 2021-06-21T20:11:15.264Z | MongoDB as sink connector not capturing data as expected - kafka? | 6,168 |
null | [
"compass",
"atlas-cluster",
"connector-for-bi"
] | [
{
"code": "systemLog:\n\n logAppend: false\n\n path: C:/data/mongosqld.log\n\n \n\nsecurity:\n\n enabled: true\n\n \n\nmongodb:\n\n net:\n\n uri: \"mongodb://spt-dev.iody2.azure.mongodb.net/\"\n\n auth:\n\n username: \"<databaseuser>\"\n\n password: \"<password>\"\n\n source: \"admin\"\nschema:\n- db: smart_pricing_tool\n tables:\n - table: PowerBICartReportData\n collection: PowerBICartReportData\n pipeline: []\n columns:\n - Name: _id\n MongoType: bson.ObjectId\n SqlName: _id\n SqlType: varchar\n - Name: answers\n MongoType: string\n SqlName: answers\n SqlType: varchar\n - Name: cartId\n MongoType: string\n SqlName: cartId\n SqlType: varchar\n - Name: cartItemAdditionalName\n MongoType: string\n SqlName: cartItemAdditionalName\n SqlType: varchar\n - Name: cartItemId\n MongoType: string\n SqlName: cartItemId\n SqlType: varchar\n - Name: cartName\n MongoType: string\n SqlName: cartName\n SqlType: varchar\n - Name: client\n MongoType: string\n SqlName: client\n SqlType: varchar\n - Name: createdAt\n MongoType: date\n SqlName: createdAt\n SqlType: timestamp \n - Name: createdBy\n MongoType: bson.ObjectId\n SqlName: createdBy\n SqlType: varchar\n - Name: input\n MongoType: float64\n SqlName: input\n SqlType: numeric \n - Name: inputType\n MongoType: string\n SqlName: inputType\n SqlType: varchar\n - Name: model\n MongoType: string\n SqlName: model\n SqlType: varchar\n - Name: questionFullName\n MongoType: string\n SqlName: questionFullName\n SqlType: varchar\n - Name: questionPrice\n MongoType: float64\n SqlName: questionPrice\n SqlType: numeric \n - Name: questionShortName\n MongoType: string\n SqlName: questionShortName\n SqlType: varchar\n - Name: questionValue\n MongoType: float64\n SqlName: questionValue\n SqlType: numeric \n - Name: section\n MongoType: string\n SqlName: section\n SqlType: varchar\n - Name: version\n MongoType: float64\n SqlName: version\n SqlType: numeric\n",
"text": "I’m issuing the following command to start the bi connector on my local machine: .\\mongosqld.exe --config mongosqld.conf --schema schema.drdl. I can connect to the database using compass so I know I can get to the cluster.I’m getting the following error when I execute the command:unable to load MongoDB information: failed to create admin session for loading server cluster information: unable to execute command: server selection error: context deadline exceeded, current topology: { Type: Unknown, Servers: [{ Addr: spt-dev.iody2.azure.mongodb.net:27017, Type: Unknown, Average RTT: 0, Last error: connection() error occured during connection handshake: dial tcp: lookup spt-dev.iody2.azure.mongodb.net: no such host }, ] }Thanks in advance!!!mongosqld.confschema.drdl",
"username": "Frank_Segarra"
},
{
"code": "",
"text": "I am also facing the same error …is there any solution for the same issue.",
"username": "satyendra_kumar1"
}
] | Connecting to atlas cluster from bi connector running on my local machine | 2022-08-01T18:38:29.696Z | Connecting to atlas cluster from bi connector running on my local machine | 2,558 |
null | [
"atlas",
"atlas-triggers"
] | [
{
"code": "",
"text": "Is there is any way to monitor Atlas Trigger from other monitoring tools like ELK, New relic, Datadog?",
"username": "Sri_Ram3"
},
{
"code": "",
"text": "Hi @Sri_Ram3,You can set up log forwarding to send App Services logs to external monitoring tools. See this blog post for an example of how to integrate with Datadog.You can also poll the App Services Admin API metrics endpoint to retrieve metrics about your app, which includes metrics for your triggers’ executions.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Hi @Kiro_Morkos,Thanks for your suggestions. I have integrated MongoDB Atlas with Prometheus but there I can’t able to get server CPU metrics also I need to forward DB Trigger logs to Prometheus",
"username": "Sri_Ram3"
}
] | How to manage Atlas trigger health from other monitoring tool like ELK, Newrelic | 2023-04-12T09:47:56.248Z | How to manage Atlas trigger health from other monitoring tool like ELK, Newrelic | 1,236 |
null | [
"cxx",
"c-driver"
] | [
{
"code": "collection collectionPointer = conn[DB_NAME][collectionName];connacquire()/path/to/source/mongo-c-driver-1.23.0/src/libmongoc/src/mongoc/mongoc-client.c:1339 mongoc_client_get_database(): precondition failed: clientBSON_ASSERT (client);(gdb) bt\n#0 0x00007f048b149e87 in raise () from /lib/x86_64-linux-gnu/libc.so.6\n#1 0x00007f048b14b8cb in abort () from /lib/x86_64-linux-gnu/libc.so.6\n#2 0x00007f048a84bc12 in mongoc_client_get_database (client=<optimized out>, name=<optimized out>)\n at /path/to/source/mongo-c-driver-1.23.0/src/libmongoc/src/mongoc/mongoc-client.c:1340\n#3 0x00007f04900d2fd6 in mongocxx::v_noabi::database::database(mongocxx::v_noabi::client const&, bsoncxx::v_noabi::string::view_or_value) ()\n from /usr/lib/libmongocxx.so._noabi\n#4 0x00007f04900be6a8 in mongocxx::v_noabi::client::database(bsoncxx::v_noabi::string::view_or_value) const & () from /usr/lib/libmongocxx.so._noabi\n#5 0x000055f10d4cbea0 in mongocxx::v_noabi::client::operator[](bsoncxx::v_noabi::string::view_or_value) const & (this=0x7eff7d2cc494, name=...)\n at /usr/include/mongocxx/v_noabi/mongocxx/client.hpp:406\n#6 0x000055f10d4c2f79 in zmongoPutObject (conn=..., <removed args from here but the are all fine>)\n at zios/zmongo/zmongo_utils.cpp:204\n#7 0x000055f10d4f6589 in zmongo_put_object (client_pool=0x7f0340189630, <removed args from here but the are all fine>) at zios/zmongo/zmongo_c.cpp:35\n<below trace is irrelevant>\n(gdb) p conn\n$1 = (mongocxx::v_noabi::client &) @0x7eff7d2cc494: {_impl = std::unique_ptr<mongocxx::v_noabi::client::impl> = {get() = 0x7eff66612054}}\n",
"text": "Hi,\nNoticed a an issue I haven’t seen before and not sure how to explain.\nUsing:\nMongoDB 6.0.5\nmongo-cxx-driver 3.7.0 (with mongo-c-driver 1.23.0)In part of the code I am initiating a collection pointer using the mongodriver syntax:\ncollection collectionPointer = conn[DB_NAME][collectionName];The conn is arriving from a connection pool (using acquire() ).\nAnd when reaching this line above the application gets abort() called on it.\nFrom looking at the logs, saw this:\n/path/to/source/mongo-c-driver-1.23.0/src/libmongoc/src/mongoc/mongoc-client.c:1339 mongoc_client_get_database(): precondition failed: client\nWhich seems like its failing in the BSON_ASSERT in that function:\nBSON_ASSERT (client);And from the core file noticed this trace:I changed the trace a bit to not have internal info, but the arguments came in to my function just fine.\nAlso, in frame 5 I can see that the DB_NAME is also ok.When printing conn in frame 6 I see this:Not sure if its ok or not.Above that all the mongocxx/mongoc code will not show me the argument values.At start I thought it was a one time thing but then saw it again a few days later occurring at the same line, which crashes the application.\nThis is a code path which runs many times (millions per day), so its not something that I can re-create since its not repeating.Do you have any insight to this? any help will be appreciated.Thanks!",
"username": "Oded_Raiches"
},
{
"code": "mongocxx::clientmongoc_client_get_databasemongoc_client_get_databasemongocxx::pool::entry.acquire().acquire()",
"text": "Hi @Oded_Raichesmongocxx::client is designed to perform a null check before calling mongoc_client_get_database. This check is supposed to throw an “invalid client object” exception if the internal pointer is null. However, this null check is not thread-safe. If the client object is being modified concurrently by another thread, this may cause a race condition that defeats the null check and allows a null pointer to be passed to mongoc_client_get_database. This would explain the unpredictable application crashes as well as the apparent inconsistencies in the resulting stack trace.The mongocxx::pool::entry object returned by .acquire() is likely being destroyed (which sets the client object’s internal pointer to null) while the entry’s corresponding client object is still being used by another thread. The pool entry object owns the provided client object: the lifetime of the pool entry object returned by .acquire() must be greater than the scope of the provided client object’s use.Have you ensured that the pool entry object providing the client object remains valid for the duration of the client object’s use?",
"username": "Rishabh_Bisht"
},
{
"code": "zmongo_put_object (client_pool=0x7f0340189630, <removed args from here but the are all fine>)\n{\n\t....\n\tauto entry = client_pool.acquire();\n\tif (!entry) {\n\t\treturn some_error;\n\t}\n\t... \n\treturn zmongoPutObject(*entry, <removed args from here but the are all fine>);\n}\n\nzmongoPutObject (conn=..., <removed args from here but the are all fine>)\n{\n\t...\n\tcollection collectionPointer = conn[DB_NAME][collectionName]; <-- failure here\n\t...\n}\nzmongo_put_objectzmongo_put_object",
"text": "Hi @Rishabh_Bisht\nThe flow is rather simple:zmongo_put_object function and its incoming pool is used by parallel threads here, but it suppose to be thread safe, right?\nThe conn acquired here is never used by parallel threads.\nAFAIU, the conn acquired by the pool will be released once exiting zmongo_put_object.\nDo you see any issue with this? also the scope of the entry and its lifetime is surly larger than the user of the conn.",
"username": "Oded_Raiches"
},
{
"code": "mongocxx::poolmongocxx::client",
"text": "A mongocxx::pool can be used across multiple threads and used to create clients. However, each mongocxx::client can only be used in a single thread.\nUnfortunately, given the description so far, there isn’t enough information to determine the root cause.Please take a look at Connection pools for reference - this might be a helpful resource.",
"username": "Rishabh_Bisht"
},
{
"code": "auto threadfunc = [](mongocxx::client& client, std::string dbname) {\n auto col = client[dbname][\"col\"].insert_one({});\n};\n\nstd::thread t1 ([&]() {\n auto c = pool.acquire();\n threadfunc(*c, \"db1\");\n threadfunc(*c, \"db2\");\n});\n\nstd::thread t2 ([&]() {\n auto c = pool.acquire();\n threadfunc(*c, \"db2\");\n threadfunc(*c, \"db1\");\n});\n\nt1.join();\nt2.join();\n",
"text": "Thanks for the reply @Rishabh_Bisht\nLet me try to explain again, the pool in my application is used by multiple threads, calling the acquire function, but the client connections acquired are used only withing the thread and released at the end of the calling function within the thread, just like this:Using it this way for over a year now without any issue, but I spotted the above issue 3 times already this past 2 weeks, which makes me feel it is some issue new to mongo 6.0.5 version.",
"username": "Oded_Raiches"
}
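{
"note": "A hedged C++ sketch (not the poster's code; database and collection names are placeholders) of the ownership rule described earlier in the thread: the pool entry must outlive every use of the client reference it provides, because destroying the entry is what returns the client to the pool.",
"example": ```cpp
#include <bsoncxx/builder/basic/document.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/pool.hpp>

void put_object(mongocxx::pool& pool) {
    auto entry = pool.acquire();          // entry owns the client for this scope
    mongocxx::client& conn = *entry;      // reference is valid only while `entry` is alive
    auto coll = conn["mydb"]["mycol"];
    coll.insert_one(bsoncxx::builder::basic::make_document());
}   // entry destroyed here, after all use of `conn`, releasing the client back to the pool
```
},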
] | Mongoc_client_get_database(): precondition failed: client | 2023-04-20T12:32:05.763Z | Mongoc_client_get_database(): precondition failed: client | 1,083 |
null | [
"kafka-connector",
"time-series"
] | [
{
"code": "(com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-source-mongo-condensed-cam-connector-0]\n2023-04-24 11:48:23,103 INFO [source-mongo-condensed-cam-connector|task-0] Started MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-source-mongo-condensed-cam-connector-0]\n2023-04-24 11:48:23,104 INFO [source-mongo-condensed-cam-connector|task-0] WorkerSourceTask{id=source-mongo-condensed-cam-connector-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask) [task-thread-source-mongo-condensed-cam-connector-0]\n2023-04-24 11:48:23,108 INFO [source-mongo-condensed-cam-connector|task-0] Watching for collection changes on 'TestDB.TestCollection' (com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-source-mongo-condensed-cam-connector-0]\n2023-04-24 11:48:23,111 INFO [source-mongo-condensed-cam-connector|task-0] New change stream cursor created without offset. (com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-source-mongo-condensed-cam-connector-0]\n2023-04-24 11:48:23,964 WARN [source-mongo-condensed-cam-connector|task-0] Failed to resume change stream: Namespace TestDB.TestCollection is a timeseries collection 166\n",
"text": "Hi,\nI have a MongoDB time series collection that I need to publish to Kafka. To achieve this, I used the Mongo source connector. However, when I attempted to do so, I received the following message:Upon researching timeseries, I discovered that timeseries collections do not support change streams, as outlined in the MongoDB documentation here: https://www.mongodb.com/docs/manual/core/timeseries/timeseries-limitations/.Is there a way to use the MongoDB connector for timeseries data? If not, are there any other alternatives available?",
"username": "Kanishka_Divan"
},
{
"code": "",
"text": "You are correct, currently time series collections do not support change streams. While this support may come in a future version of MongoDB, today you’ll have to move the data to a regular collection. What do you do with the data once it is in Kafka? Is there processing you could do within the aggregation framework that might help ?",
"username": "Robert_Walters"
},
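{
"note": "One hedged way to do the “move the data to a regular collection” step mentioned above is a periodic aggregation that copies new measurements into a collection the connector can watch. The collection names and the lastSyncedTs bookkeeping are placeholders, and behaviour should be verified against your MongoDB version.",
"example": ```javascript
const lastSyncedTs = ISODate("2023-04-24T00:00:00Z");   // tracked by the caller

db.TestCollection.aggregate([
  { $match: { timestamp: { $gt: lastSyncedTs } } },
  { $merge: {
      into: "TestCollection_stream",      // regular collection the source connector watches
      whenMatched: "replace",
      whenNotMatched: "insert"
  } }
])
```
},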
{
"code": "",
"text": "aggregation frameworkHi Robert,\nThe data will be sent to multiple devices via MQTT. A large volume of time-series data is continuously being received by the time-series database, which needs to be published to the devices. However, I’m not quite sure if the aggregation framework would be super helpful in this situation.",
"username": "Kanishka_Divan"
}
] | Use mongo source connector with timeseries collections | 2023-04-24T12:01:17.598Z | Use mongo source connector with timeseries collections | 880 |
null | [
"replication"
] | [
{
"code": "bindIp: localhost,m.xxx.commember[0].host127.0.0.1:27017member[0].hostMongoServerError: Our replica set config is invalid or we are not a member of itisSelf could not authenticate internal user\n\"code\":74,\"codeName\":\"NodeNotFound\",\"errmsg\":\"No host described in new configuration with {version: 2, term: 7} for replica set rs0 maps to this node\"}\n",
"text": "I setup a single member replica set with bindIp: localhost,m.xxx.com. I got member[0].host = 127.0.0.1:27017. Then I changed member[0].host to “m.xxx.com:27017” by rs.reconf().After that, I’m stuck in REMOVED state. rs.conf() is OK, but rs.status() returns error: MongoServerError: Our replica set config is invalid or we are not a member of it.I found some logs:How can I fix it?",
"username": "Hong_zhi_guo"
},
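{
"note": "For reference, the rename itself is usually done like the sketch below; once a node is in the REMOVED state and no longer recognizes itself, a forced reconfig from that node is typically what is needed (the hostname is a placeholder, and force:true should be used with care).",
"example": ```javascript
cfg = rs.conf()
cfg.members[0].host = "m.xxx.com:27017"
rs.reconfig(cfg, { force: true })
```
},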
{
"code": "",
"text": "Could somebody help? It’s easy to reproduce.",
"username": "Hong_zhi_guo"
},
{
"code": "",
"text": "I see the article about converting replica set to standalone.How to convert a MongoDB Replica Set into a Standalone server.\nEst. reading time: 5 minutes\nCould anyone confirm is there risk of data loss doing so ?",
"username": "Hong_zhi_guo"
},
{
"code": "",
"text": "I added the self IP into clusterIpSourceAllowlist, then converted the rs to standalone, then converted it to rs. After that, it’s fixed.",
"username": "Hong_zhi_guo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Stuck in REMOVED state after change member hostname of a single member replica set | 2023-04-24T17:14:28.060Z | Stuck in REMOVED state after change member hostname of a single member replica set | 731 |
null | [
"queries",
"sharding"
] | [
{
"code": "",
"text": "We have sharded collection with ~104 million documents and want to remove around 12M documents from the collection based on a filter condition. I have tried Unordered Bulk but still query is getting timedout. Please provide some better approach\nvar bulk = db.coll.initializeUnorderedBulkOp();\nbulk.find({“periodId”: “”, Source: { $exists : false }}).remove();\nbulk.execute();",
"username": "Geetha_M"
},
{
"code": "",
"text": "Make sure you have index in place for periodId.Also, you can instead do the remove op in batches so that each batched remove can finish within operation timeout if any (Mongodb has a number of timeout values in different scenarios)",
"username": "Kobe_W"
},
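{
"note": "A hedged mongosh sketch of the batching suggested above, using the filter from the question: each pass deletes a bounded chunk by _id so no single operation runs long enough to time out. The batch size is arbitrary, and an index supporting the filter is still required.",
"example": ```javascript
let removed = 0;
do {
  const ids = db.coll
    .find({ periodId: "", Source: { $exists: false } }, { _id: 1 })
    .limit(10000)
    .toArray()
    .map(d => d._id);
  removed = ids.length ? db.coll.deleteMany({ _id: { $in: ids } }).deletedCount : 0;
  print(`deleted ${removed} documents in this batch`);
} while (removed > 0);
```
},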
{
"code": "",
"text": "Hi Kobe …Index is already available",
"username": "Geetha_M"
}
] | Delete Bulk Records in a sharded collection | 2023-04-24T07:56:15.211Z | Delete Bulk Records in a sharded collection | 573 |
null | [
"queries",
"java"
] | [
{
"code": "",
"text": "Based on the business requirement we are saving a few columns at MongDb from a structured database. We want to maintain the composite key we mentioned in the structured database in Mongo as well. Here I am using Reactive Java. Is there any command which will add the key for the particular collection?",
"username": "Soumi_Paul"
},
{
"code": "",
"text": "",
"username": "Kobe_W"
}
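{
"note": "The usual MongoDB counterpart of a relational composite key is a compound unique index. A hedged Reactive Streams Java sketch with invented field names; the Publisher returned by createIndex still has to be subscribed to, shown here via Reactor.",
"example": ```java
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.Indexes;
import com.mongodb.reactivestreams.client.MongoCollection;
import org.bson.Document;
import reactor.core.publisher.Mono;

public final class CompositeKey {
    // Enforces uniqueness of the (orderId, lineNo) pair, mirroring the relational composite key
    public static Mono<String> ensure(MongoCollection<Document> coll) {
        return Mono.from(coll.createIndex(
                Indexes.compoundIndex(Indexes.ascending("orderId"), Indexes.ascending("lineNo")),
                new IndexOptions().unique(true)));
    }
}
```
},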
] | Create Composite key | 2023-04-24T07:24:01.132Z | Create Composite key | 898 |
null | [
"replication",
"kubernetes-operator",
"cluster-to-cluster-sync"
] | [
{
"code": "",
"text": "We are running two mongoDbs in same kubernetes cluster on different namespace, need to sync these mongodbs.Tried mongosync but it is unsupported as we are using 5.x.x version of mongo.Using mongodb replicaset we are able initiate rs.initiate(), but while adding the slave/another mongodb (i.e. rs.add(“hostname:port”) it is getting stuck and after couple of minutes throwing errors.Initially I thought of connectivity issue but Using mongoshell I can remotely access both the databases. Both the databases exposed via kubernetes cluster-ip.Please help.",
"username": "Prashant_Ingle"
},
{
"code": "",
"text": "I can remotely access both the databases.It doesn’t mean these two mongodb servers can talk to each other. Did you verify this ?",
"username": "Kobe_W"
},
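{
"note": "A hedged way to verify the point above from inside each pod rather than from your workstation; the pod, service and namespace names are placeholders, and it assumes the mongosh shell is available in the image. Every replica set member must be able to reach the exact host:port string used in rs.add().",
"example": ```bash
kubectl exec -n ns-a mongo-a-0 -- \
  mongosh "mongodb://mongo-b-0.mongo-b-svc.ns-b.svc.cluster.local:27017" \
  --eval 'db.adminCommand({ ping: 1 })'
```
},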
{
"code": "",
"text": "Both mongoDb instances are running in kubernetes but in a different namespace, From one kubernetes mongoDB pod I can able to access another mongodb via mongoshell and vice-versa.",
"username": "Prashant_Ingle"
},
{
"code": "",
"text": "it is getting stuck and after couple of minutes throwing errorswhat is the error like?",
"username": "Kobe_W"
}
] | Mongo Db replication || Kubernetes | 2023-04-23T14:11:43.935Z | Mongo Db replication || Kubernetes | 892 |
null | [] | [
{
"code": "",
"text": "We faced an issue in our prod env, where one Primary VM was accessible, but It was not accepting any connections. All application that tried to connect to mongo were failing. Attempt to Mongo login to the primary member was not successful. Manual try to start up MongoDB in the VM was also unsuccessful.\nSince, Mongo did not went down completely, No election happened. The problemed VM was showing as “primary” according to rs.status. We had to restart the server and then the issue got resolved. We need to find RCA on this.We are using Mongo 4.4.7 community version.\nAnd we are having below configuration:\nconfig replicaset - 1 Primary, 2 Secondary\nshard1 - 1 Primary, 2 Secondary\nshard2 - 1 Primary, 2 Secondary\nshard3 - 1 Primary, 2 Secondary\n2 query router.errorMessage\":\"NetworkInterfaceExceededTimeLimit: Couldn’t get a connection within the time limit .We checked our query router available connection:\nQR1\n“current” : 7654,\n“available” : 43546,\n“totalCreated” : 134309236,\n“active” : 2890,\n“exhaustIsMaster” : 487,\n“exhaustHello” : 229,\n“awaitingTopologyChanges” : 716QR2\n“current” : 7746,\n“available” : 43454,\n“totalCreated” : 134299931,\n“active” : 2997,\n“exhaustIsMaster” : 487,\n“exhaustHello” : 229,\n“awaitingTopologyChanges” : 716Also we checked logs thoroughly , connection was getting accepted till 07:09 UTC. The error “NetworkInterfaceExceededTimeLimit” was not present at 07:11 UTC.But the error suddenly started exact at 2023-03-29T07:12 utc.",
"username": "Debalina_Saha"
},
{
"code": "",
"text": "Hi @Debalina_Saha and welcome to MongoDB community forums!!As mentioned in the MongoDB documentation:Each sharded cluster must have its own config servers. Do not use the same config servers for different sharded clusters.It’s important to note that applying administrative operations could potentially have an impact on the performance of the sharded cluster. In this particular case, we would like to suggest considering the use of more than one config server to handle any potential performance impacts that may arise from the deployment. This could help to mitigate any potential issues and ensure that the cluster continues to run smoothly.Also, the forum post mentions a workaround solution for the similar issue.If the above recommendations does not work, could you help us with the output for the following:Regards\nAasawari",
"username": "Aasawari"
}
] | Mongo primary stopped accepting connections | 2023-03-31T07:42:54.060Z | Mongo primary stopped accepting connections | 1,026 |
null | [
"replication"
] | [
{
"code": "",
"text": "I setup a single member rs with bindIp: localhost,m.xxx.com. I got member[0].host “127.0.0.1:27017”.How can I control which name to be used as member.host ?",
"username": "Hong_zhi_guo"
},
{
"code": "",
"text": "You do this through your config file, try putting the m.xxx.com first before the local host in the bindip.When adding new members to the replica set make sure you do the rs.add with the hostname.",
"username": "tapiocaPENGUIN"
},
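In other words, the name stored in members[n].host comes from the replica set configuration, so the most direct way to control it is to set it explicitly when initiating, or to rewrite it with a reconfig. A mongosh sketch, assuming the replica set name is rs0 (it must match replication.replSetName in your config file):

```javascript
// Initiate with an explicit host name instead of letting the node pick a default.
rs.initiate({
  _id: "rs0",
  members: [{ _id: 0, host: "m.xxx.com:27017" }]
})

// For an already-initiated set, rewrite the host and reconfigure:
const cfg = rs.conf();
cfg.members[0].host = "m.xxx.com:27017";
rs.reconfig(cfg);
```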
{
"code": "",
"text": "Thanks. I think this information should be in the official doc.",
"username": "Hong_zhi_guo"
}
] | How to control replica set member hostname? | 2023-04-24T17:02:11.815Z | How to control replica set member hostname? | 587 |
null | [
"queries",
"dot-net",
"replication",
"crud",
"performance"
] | [
{
"code": "{\n \"_id\" : UUID(\"9a7b2d8b-0648-4381-9317-c69e2c4d21b7\"),\n \"Data\" : {\n \"CustomAttributes\" : [\n .....................\n {\n \"k\" : NumberInt(6),\n \"v\" : \"J\"\n },\n ....................... \n\t\t\t{\n \"k\" : NumberInt(160),\n \"v\" : \"00:18\"\n },\n ],\n \"IsEvent\" : false,\n\t\t....................\n\t\t\"Prop18\" : someValue\n }\n}\n var flightCollection = db.GetCollection<BsonDocument>(stateStore);\n\n var resultFlights = flightCollection.UpdateMany(\n Builders<BsonDocument>.Filter.Eq(\"Data.IsEvent\", false),\n Builders<BsonDocument>.Update.Set(\"Data.CustomAttributes.$[elem].v\", \"Z\"), \n new UpdateOptions \n {\n ArrayFilters = new[]\n {\n new BsonDocumentArrayFilterDefinition<BsonDocument>(new BsonDocument(\"elem.k\", 6))\n }\n });\n",
"text": "Hi community,We are facing a huge performance issue during migration. There is a collection with 2B records, and we would like to change each document’s property value that satisfies the filter query (It will be around 78% of all documents). The current performance of that operation is terrible.\nIt is updating 55 documents per second. That means for 2B records, it will take around 420 days .Is there any way to optimize this to the few hours instead?Here are the details.C# Driver: 2.10.4\nMongo DB version: 4.4.2 (Replicaset with 3 nodes)\nDocument structure:CustomAttributes is a dictionary with 160 items with the indexes from 1 to 160the actual query",
"username": "Azat_TAZAYAN"
},
{
"code": "processedDocuments\nvar collection = database.GetCollection<BsonDocument>(\"myCollection\");\n\nvar updates = new List<WriteModel<BsonDocument>>();\n\n// Build the update operation for each document\nforeach (var doc in documentsToUpdate)\n{\n var filter = Builders<BsonDocument>.Filter.Eq(\"_id\", doc[\"_id\"]);\n var update = Builders<BsonDocument>.Update.Set(\"myField\", \"myNewValue\");\n var updateModel = new UpdateOneModel<BsonDocument>(filter, update);\n updates.Add(updateModel);\n}\n\n// Execute the bulk write operation\nBulkWriteResult result = collection.BulkWrite(updates);\nint numUpdated = result.ModifiedCount;\nvar collection = database.GetCollection<BsonDocument>(\"myCollection\");\n\nList<BsonDocument> documentsToUpdate = // retrieve documents to update\n\nint batchSize = 5000;\nint totalDocs = documentsToUpdate.Count;\nint currentDocIndex = 0;\nint numUpdated = 0;\n\nwhile (currentDocIndex < totalDocs) {\n List<UpdateOneModel<BsonDocument>> updates = new List<UpdateOneModel<BsonDocument>>();\n\n // Build the update operation for each document in the batch\n for (int i = currentDocIndex; i < currentDocIndex + batchSize && i < totalDocs; i++) {\n BsonDocument doc = documentsToUpdate[I];\n var filter = Builders<BsonDocument>.Filter.Eq(\"_id\", doc.GetValue(\"_id\"));\n var update = Builders<BsonDocument>.Update.Set(\"myField\", \"myNewValue\");\n var updateModel = new UpdateOneModel<BsonDocument>(filter, update);\n updates.Add(updateModel);\n }\n\n // Execute the bulk write operation for the batch\n BulkWriteResult result = collection.BulkWrite(updates);\n numUpdated += result.ModifiedCount;\n\n // Move to the next batch\n currentDocIndex += batchSize;\n}\n\nConsole.WriteLine($\"Total documents updated: {numUpdated}\");\nconst collection = db.collection('myCollection');\n\nconst updates = documentsToUpdate.map((doc) => ({\n updateOne: {\n filter: { _id: doc._id },\n update: { $set: { myField: 'myNewValue' } },\n },\n}));\n\n// Execute the bulk write operation\ncollection.bulkWrite(updates).then((result) => {\n const numUpdated = result.modifiedCount;\n});\n\nconst MongoClient = require('mongodb').MongoClient;\n\nconst batchSize = 5000;\nconst dbName = 'myDatabase';\nconst collectionName = 'myCollection';\n\nasync function updateDocumentsSequentially() {\n const client = await MongoClient.connect('mongodb://localhost:27017');\n const db = client.db(dbName);\n const collection = db.collection(collectionName);\n\n const totalDocuments = await collection.countDocuments({});\n\n let processedDocuments = 0;\n while (processedDocuments < totalDocuments) {\n const documentsToUpdate = await collection.find({}).limit(batchSize).toArray();\n const updates = documentsToUpdate.map((doc) => ({\n updateOne: {\n filter: { _id: doc._id },\n update: { $set: { myField: 'myNewValue' } },\n },\n }));\n\n const result = await collection.bulkWrite(updates);\n processedDocuments += result.modifiedCount;\n }\n\n await client.close();\n}\n\nupdateDocumentsSequentially().catch((err) => console.error(err));\n\nlet collection = database.collection(\"myCollection\");\n\nlet documents_to_update: Vec<BsonDocument> = // retrieve documents to update\n\nlet batch_size = 5000;\nlet total_docs = documents_to_update.len();\nlet mut current_doc_index = 0;\nlet mut num_updated = 0;\n\nwhile current_doc_index < total_docs {\n let mut updates: Vec<UpdateOneModel<BsonDocument>> = Vec::new();\n\n // Build the update operation for each document in the batch\n for i in 
current_doc_index..std::cmp::min(current_doc_index + batch_size, total_docs) {\n let doc = &documents_to_update[i];\n let filter = doc.get(\"_id\").and_then(|id| Some(doc! {\"_id\": id.to_owned()})).unwrap();\n let update = doc! {\"$set\": {\"myField\": \"myNewValue\"}};\n let update_model = UpdateOneModel::new(filter, update, None);\n updates.push(update_model);\n }\n\n // Execute the bulk write operation for the batch\n let result = collection.bulk_write(updates, None);\n num_updated += result.modified_count;\n\n // Move to the next batch\n current_doc_index += batch_size;\n}\n\nprintln!(\"Total documents updated: {}\", num_updated);\n\nfrom pymongo import MongoClient, UpdateOne\n\nclient = MongoClient(\"mongodb://localhost:27017/\")\ndatabase = client[\"myDatabase\"]\ncollection = database[\"myCollection\"]\n\ndocuments_to_update = # retrieve documents to update\n\nbatch_size = 5000\ntotal_docs = len(documents_to_update)\ncurrent_doc_index = 0\nnum_updated = 0\n\nwhile current_doc_index < total_docs:\n updates = []\n\n # Build the update operation for each document in the batch\n for i in range(current_doc_index, min(current_doc_index + batch_size, total_docs)):\n doc = documents_to_update[i]\n filter = {\"_id\": doc[\"_id\"]}\n update = {\"$set\": {\"myField\": \"myNewValue\"}}\n update_model = UpdateOne(filter, update)\n updates.append(update_model)\n\n # Execute the bulk write operation for the batch\n result = collection.bulk_write(updates)\n num_updated += result.modified_count\n\n # Move to the next batch\n current_doc_index += batch_size\n\nprint(f\"Total documents updated: {num_updated}\")\n",
"text": "For the sake of the fact I’ve dealt with DBs with upward of 7 billion documents before, I pulled out some of my old scripts to do stuff like this. Call me cocky, but I love stuff like this…These use drivers to connect to the database, fetch documents in batches of 5000, and update them sequentially using the BulkWrite API. The processedDocuments variable keeps track of how many documents have been updated so far, and the loop continues until all documents have been processed. Doing the 2BN in batches of 5k at a time will keep your systems from freezing, and honestly you’ll probably be done in a few days.Here are some tips:\nTo optimize the update performance for the given scenario, here are some suggestions:1. Use a multi-updates approach - Instead of updating all 2B records in one go, you can split them into smaller batches and update them in parallel. This will make use of the available resources and improve the overall update performance. - This is why I also recommend doing it in C# or Node.JS, but also I fixed your C# stuff.2. Use a dedicated server - You can use a separate server with high processing power and memory to execute the update operation. This will ensure that other database operations are not impacted due to the high load generated by the update.3. Use bulk writes - Use bulk write operations to perform multiple update operations in a single request. This can help reduce network latency and improve performance.4. Use a covered query - Use a covered query to fetch only the necessary fields from the collection. This will help reduce the amount of data transferred between the database and the application, and improve performance.5. Optimize index usage - Ensure that the collection has an appropriate index that is being used to execute the update operation. This can help improve query performance by reducing the amount of time taken to search for records.6. Optimize query filters - Ensure that the query filters are optimized and are using the appropriate operators to retrieve only the necessary records. This can help reduce the number of records being updated and improve performance.7. Monitor database resources - Monitor the database resources such as CPU, memory, and network usage during the update operation. This can help identify any performance bottlenecks and optimize the update process accordingly.Compare these to what you have, and let me know what you think.C#Now to do it all in batches that will always keep running until all 2B are done:For JavaScript/Node:I’m also going to give you my Python and Rust versions of the 5000 batch. The RUST VERSION has the LOWEST impact to system resource in its operation almost no memory issues at all comparatively. And is what I’d recommend overall above the others for that much workload, but it’s up to you.RUSTPYTHONNote that this requires the PyMongo driver to be installed.",
"username": "Brock"
},
{
"code": "",
"text": "Hi Brock,Thank you very much for your replay, I will try and let you know.",
"username": "Azat_TAZAYAN"
},
{
"code": "",
"text": "Just remember to run it all in batches, that’s key, so then it’s going batch by batch instead of all at once.It’ll greatly improve the performance. You may scale up or down the batches, but 5000 docs per batch is usually a sweet spot for a lot of systems, even a Raspberry Pi 4 running 4.4 can handle 5000 at a time.",
"username": "Brock"
},
{
"code": " public void Up(IMongoDatabase database)\n {\n var db = dBHelper.GetDatabase(\"name\"));\n\n long numUpdated = 0;\n int batchSize = 5000;\n var documentsToUpdate = db.GetCollection<BsonDocument>(stateStore);\n \n System.Diagnostics.Debug.WriteLine($\"Total: --- {documentsToUpdate.EstimatedDocumentCount()}\");\n\n string query = @\"{ \n $set:\n { \n 'Data.CustomAttributes' : \n {\n $map:\n {\n input: '$Data.CustomAttributes',\n as:'this',\n in:\n {\n $cond:\n {\n if: { $eq: ['$$this.k', 6] },\n then:\n {\n 'k' : NumberInt(6),\n 'v' : {$toString: '$$this.v'}\n },\n else: '$$this'\n }\n }\n }\n }\n }\n }\";\n\n var pipelineDefinition = new BsonDocumentStagePipelineDefinition<BsonDocument, BsonDocument>(new[] { BsonDocument.Parse(query) });\n\n var all = db.GetCollection<BsonDocument>(stateStore).Find(Builders<BsonDocument>.Filter.Empty, new FindOptions { BatchSize = batchSize, }).ToCursor();\n\n List<UpdateOneModel<BsonDocument>> updates = new List<UpdateOneModel<BsonDocument>>();\n \n Stopwatch stopwatch = Stopwatch.StartNew();\n\n while (all.MoveNext())\n {\n updates.Clear();\n\n foreach (var document in all.Current)\n {\n var filter = Builders<BsonDocument>.Filter.Eq(\"_id\", document.GetValue(\"_id\"));\n updates.Add(new UpdateOneModel<BsonDocument>(filter, pipelineDefinition));\n }\n\n // Execute the bulk write operation for the batch\n BulkWriteResult result = documentsToUpdate.BulkWrite(updates);\n numUpdated += result.ModifiedCount;\n System.Diagnostics.Debug.WriteLine($\"Count: --- {numUpdated}\");\n System.Diagnostics.Debug.WriteLine($\"Time: --- {stopwatch.Elapsed}\");\n\n }\n }\n",
"text": "Hi @Brock,I’m using following code for update but seems that it will take 300 hours for 1B records:\n6 sec for each 5000 items (1 batch); How can we improve it? If it will be 1 sec per batch, then during two weekends I can apply this to our PROD db.",
"username": "Azat_TAZAYAN"
}
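Much of the 6 s per batch here is client round-trips: every document is read to the client and then sent back as its own UpdateOneModel carrying an identical pipeline. Since that pipeline does not depend on any per-document data fetched client-side, one alternative worth measuring is a single server-side updateMany that applies the same pipeline to all matching documents (update-with-aggregation-pipeline requires MongoDB 4.2+, which the 4.4.2 deployment above satisfies). A mongosh sketch; the collection name is a placeholder for the real stateStore collection, and the filter and pipeline are taken from the earlier posts:

```javascript
// One server-side pass; no documents are shipped to the client.
db.stateStore.updateMany(
  { "Data.IsEvent": false },
  [{
    $set: {
      "Data.CustomAttributes": {
        $map: {
          input: "$Data.CustomAttributes",
          as: "attr",
          in: {
            $cond: {
              if: { $eq: ["$$attr.k", 6] },
              then: { k: "$$attr.k", v: { $toString: "$$attr.v" } },
              else: "$$attr"
            }
          }
        }
      }
    }
  }]
)
```

Whether this is faster in practice depends on write load, oplog and replication pressure, so it is worth trying on a copy of the data first.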
] | Update on 2 billion records will take 1.5 year | 2023-04-11T04:08:30.418Z | Update on 2 billion records will take 1.5 year | 1,606 |
null | [
"aggregation",
"queries"
] | [
{
"code": "$group$group$unwind$group$group$group$project$group'ings[\n{courier: \"John Brown\", productType: \"Package\", status: \"DELIVERED\"},\n{courier: \"John Brown\", productType: \"Insured Parcel\", status: \"DELIVERED\"},\n{courier: \"John Brown\", productType: \"Package\", status: \"DELIVERY_RESCHEDULED\"},\n{courier: \"Eve White\", productType: \"Bubble Mailer\", status: \"DELIVERY_FAILED\"}\n]\n$group{\ndata: [\n{\n courier: \"John Brown\",\n products: [\n {\n productType: \"Package\",\n status: \"DELIVERED\",\n count: 45\n },\n {\n productType: \"Package\",\n status: \"DELIVERY_RESCHEDULED\",\n count: 2,\n },\n {\n productType: \"Insured Parcel\",\n status: \"DELIVERED\",\n count: 21,\n }\n ]\n },\n {\n courier: \"Eve White\",\n products: [\n <...this courier's listings follow the same data structure...>\n ]}\n]\n",
"text": "Every single time I need to use $group I find myself perplexed by how limited and wasteful this stage is in an otherwise really powerful pipeline system.Why is it that $group MUST destroy whatever was garnered in previous stages? Why was it not possible to design it to be akin to how $unwind works, i.e. why can’t $group store its results onto a new field without ruining whatever was aggregated in previous stages? This way a user would have a way to preserve their fields, and most importantly - easily perform chained $group stages, and if they desired to obtain the shape that $group does today, all they would need to do is throw in a $project stage at the end.What should one do to aggregate data with fields that have multiple $group'ings on them? Suppose I have delivery status data:How do I run $group in order to obtain the following:Group by courier and and then group by delivery status:For instance:",
"username": "Vladimir"
},
{
"code": "$group$group",
"text": "Why is it that $group MUST destroyIt must because it groups multiple documents into groups.why can’t $group store its results onto a new fieldBecause not single way will fit all the use cases. But you may store the documents you want using thing like $first, $last together with $$ROOT.Your first $group _id will be { “courier”:“$courier” , “productType”:“$productType” , “status”:“$status” }. You would then $group on the groups with _id:{“courier”:“$_id.courier”,“productType”:“$_id.productType”} and finally you $group on _id:{ “courier”:“$_id.courier”}. Not exactly like you shown but with a little bit more information because could also get a count per productType.",
"username": "steevej"
},
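A sketch of the chained grouping described above, producing roughly the courier/products shape asked for in the question; the collection name deliveries is assumed:

```javascript
db.deliveries.aggregate([
  // 1. Count each (courier, productType, status) combination.
  { $group: {
      _id: { courier: "$courier", productType: "$productType", status: "$status" },
      count: { $sum: 1 }
  } },
  // 2. Fold the combinations back under each courier.
  { $group: {
      _id: "$_id.courier",
      products: { $push: {
        productType: "$_id.productType",
        status: "$_id.status",
        count: "$count"
      } }
  } },
  // 3. Reshape to the desired field names.
  { $project: { _id: 0, courier: "$_id", products: 1 } }
])
```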
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $group stage - a wasted opportunity | 2023-04-24T16:27:46.617Z | $group stage - a wasted opportunity | 365 |
null | [
"swift",
"atlas",
"flexible-sync"
] | [
{
"code": "",
"text": "I want to integrate Event Library for iOS App. I have enabled the Event Recording and set Default Event Configuration. Now I was expecting AuditEvent Collection in my Atlas. So I looked all over the Atlas but could not find the collection AuditEvent. I following this tutorial to integrate the Event Library.\nPlease help me in finding the AuditEvent in my Atlas. Thanks in Advance.",
"username": "Aditya_Kumar8"
},
{
"code": "sleep()",
"text": "Hey Adityha - what tutorial were you following to integrate the Event Library? Is there a link I can check out to see what you were doing?Without more information, it’s hard to say what the issue might be. Here are a few things you could check out:Hopefully one of these things helps solve your issue. Otherwise, let us know what tutorial you’re trying to follow so we can try to spot the issue.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "Hi Dachary_Carey,\nThanks for your replay. I was following the tutorial [https://www.mongodb.com/docs/realm/sdk/swift/sync/event-library/](https://Event Library Tutorial)…",
"username": "Aditya_Kumar8"
},
{
"code": "Package Dependencies> Realm 10.38.0> RealmDatabase 13.9.0Update PackageFile > Packages > Update to Latest Package VersionsData ServicesCollections",
"text": "If you’re using Xcode you should be able to see the version of the SDK you’ve currently got installed under Package Dependencies in the Project navigator. It should give you a version number that would look something like:\n> Realm 10.38.0\n> RealmDatabase 13.9.0If you’re using the default configuration and your Realm version is between 10.26.0 and 10.37.2, or if you’re not using the default configuration but are using async open and your realm version is 10.37.2, you should update to the latest version and try running your code again. You can update just the Realm package by right-clicking Realm in the Package Dependencies and selecting Update Package, or you can update all of your project’s dependencies from File > Packages > Update to Latest Package Versions.The AuditEvent collection should appear in Atlas. If you go to your Atlas project where you’ve got your App Services App set up, and select the Data Services tab, then select Collections and open the development database you’re using in your Device Sync configuration. After you successfully sync some AuditEvent objects, you will see the collection in your development database, and when you open the collection, you should see AuditEvent objects.",
"username": "Dachary_Carey"
},
{
"code": "> Realm 10.38.0> RealmDatabase 13.9.0let eventSyncUser = try await app.login(credentials: Credentials.anonymous)\nvar config = user.configuration(partitionValue: \"Some partition value\")\nconfig.eventConfiguration = EventConfiguration(metadata: [\"username\": \"Jason Bourne\"], syncUser: eventSyncUser, partitionPrefix: \"event-\")\n",
"text": "Hi Dachary_Carey,\nI have checked and found that I using below versions of Realm.\n> Realm 10.38.0\n> RealmDatabase 13.9.0\nBut I still can not see AuditEvent collection in under Data Services tab. I’m using below code for enabling recoding and configuratin:To enable event recording, set the Event.Configuration property on the Realm.ConfigurationBut still I can not see the AuditEvent collection under DataServices tab in Atlas. I’m attaching some screenshots for your reference.\n\nScreenshot 2023-04-24 2324141625×820 53.2 KB\n. also I want to add that I’m using free version M0 of Atlas sync.",
"username": "Aditya_Kumar8"
},
{
"code": "partitionValueEventConfiguration",
"text": "The example code you’re using from the documentation has an arbitrary partitionValue and EventConfiguration which may not match how your App Services app is configured.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "Hi,\n1.I’m using Flexible Sync for sync.\n2. No error message while writing or syncing the data.\n3. Yes, In my case I was able to sync the Task and Notes and sync was working perfectly fine.\nI Integrated this Event Library before adding Subscription.\nCould you please share a sample code so that I can understand, what mistake I have done.",
"username": "Aditya_Kumar8"
},
{
"code": "let eventSyncUser = try await app.login(credentials: Credentials.anonymous)\nvar config = user.configuration(partitionValue: \"Some partition value\")\nconfig.eventConfiguration = EventConfiguration(metadata: [\"username\": \"Jason Bourne\"], syncUser: eventSyncUser, partitionPrefix: \"event-\")\nvar config = user.flexibleSyncConfiguration().flexibleSyncConfiguration().configuration(partitionValue: [YOUR PARTITION VALUE])AuditEvent",
"text": "Ahh, then it sounds like the user configuration you posted above isn’t getting used at all.This example from the docs initializes a Sync configuration for Partition-Based Sync:If you’re using Flexible Sync successfully, then this configuration isn’t getting used at all. This EventLibrary configuration example builds on the example in the docs of opening a Synced Realm for Partition-Based Sync. If you’re using Flexible Sync and the objects are syncing, you’ve probably got something like this configuration somewhere else in your codebase:var config = user.flexibleSyncConfiguration()The .flexibleSyncConfiguration() opens a realm for Flexible Sync, and the .configuration(partitionValue: [YOUR PARTITION VALUE]) from the Event Library example opens a realm for Partition-Based Sync. Those two things don’t work together. If your App Services App uses Flexible Sync and you are successfully syncing objects, then the Partition-Based Sync config you’ve pasted here isn’t getting used at all and that’s why you’re not seeing the AuditEvent collection in Atlas.I’ve just confirmed with the Product team that the Event Library does not work with Flexible Sync, so this functionality isn’t currently supported.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to see AuditEvent collection in for my app in Atlas | 2023-04-23T17:16:05.069Z | How to see AuditEvent collection in for my app in Atlas | 906 |
null | [] | [
{
"code": "",
"text": "Hi,We have been trying to get a realm encryption solution created across our whole application. This has worked successfully across the application in android and on an ios emulator. However, when we try to launch the application on an ios device, the first realm we try to open throws an error:_ERROR: Connection[2]: Session[2]: In finalize_state_transfer, the realm /path_to_file/partially_downloaded.realm could not be opened, msg = Realm file decryption failed Path: /path_to_file/partially_downloaded.realmSince the realm decryption fails, we cannot access the local realm file and so we can’t progress into the application. The code used:const userConfig = user.createConfiguration({\nsync: { url: REALM_URL + “/~/userProfile”, error: (err) => console.log(err), fullSynchronization: true },\nschema: userSchema\n});\nuserConfig.encryptionKey = key;Realm.open(userConfig).then((realm) => {We get the key from the ios keychain using the react-native-sensitive-info package. What do you think could be the issue we are facing? Is there a workaround or solution to the problem?Thanks in advance!",
"username": "Matthew_Hughes"
},
{
"code": "",
"text": "Have you had any solution for this issue?",
"username": "Ram_N_A"
}
] | Realm decryption failed on ios device but not on emulator (ios 13 & react native) | 2020-06-23T14:21:36.530Z | Realm decryption failed on ios device but not on emulator (ios 13 & react native) | 2,510 |
[
"node-js",
"compass"
] | [
{
"code": "",
"text": "Hello, i’m having a lot of issues with the connection from node to mongodb (in windows and endeavour os) but in compass mongo seems to work, idk what to do\n\nScreenshot_20230423_1325521920×1080 172 KB\n",
"username": "Matteo_Nicola_D_Amato"
},
{
"code": "",
"text": "Try 127.0.0.1 instead of localhost in your uri",
"username": "Ramachandra_Tummala"
},
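A minimal Node connection sketch of the 127.0.0.1 suggestion (database name is a placeholder). On recent Node versions, localhost can resolve to the IPv6 ::1 address while mongod listens only on IPv4, which produces exactly this "works in Compass and mongosh but not from the driver" symptom, so spelling out the IPv4 address is worth trying:

```javascript
// Minimal connectivity check with the official Node driver.
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://127.0.0.1:27017/testdb");

async function main() {
  await client.connect();
  console.log(await client.db().command({ ping: 1 }));
  await client.close();
}

main().catch(console.error);
```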
{
"code": "",
"text": "I already tried 127.0.0.1 and 0.0.0.0",
"username": "Matteo_Nicola_D_Amato"
},
{
"code": "",
"text": "Is mongod up?\nAny other program using the same port?\nCould be firewall blocking your connection\nTry with a different internet connection like mobile hotspot",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "i don’t think that it’s a firewall problem because when from the browser i go to localhost:27017 there is writing “It looks like you are trying to access MongoDB over HTTP on the native driver port.” and mongodb compass and mongosh work",
"username": "Matteo_Nicola_D_Amato"
}
] | Cannot connect to NODE.JS EXPRESS from LOCALHOST | 2023-04-23T11:30:27.890Z | Cannot connect to NODE.JS EXPRESS from LOCALHOST | 792 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "Is it possible I can have access to the previous Labs I have done already?",
"username": "Olatunde_Adebayo"
},
{
"code": "",
"text": "Hi @Olatunde_Adebayo,Welcome to the MongoDB Community forums Can you please send an email to [email protected] with the URL for the lab that you are encountering difficulties with!\nBest,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "I got a response that I have to register or enroll again for the course to have access to the Labs completed already. Thank you.",
"username": "Olatunde_Adebayo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | I Can't Access The Labs I have Done Before | 2023-02-02T19:46:19.550Z | I Can’t Access The Labs I have Done Before | 1,242 |
null | [
"crud",
"php"
] | [
{
"code": "{\n \"_id\": {\n \"$oid\": \"644377c8bda636ff6f04ceb2\"\n },\n \"acct_id\": {\n \"$oid\": \"6427eead36cef4611c0ab772\"\n },\n \"co_contacts\": [\n {\n \"con_id\": \"000000001\",\n \"con_type\": \"address\",\n \"con_label\": \"Main Office\",\n \"con_add1\": \"51 Maple Dr\",\n \"con_city\": \"Somewhere\",\n \"con_state\": \"FL\",\n \"con_zip\": \"12323\",\n \"con_country\": \"United States\"\n },\n {\n \"con_id\": \"000000002\",\n \"con_type\": \"address\",\n \"con_label\": \"2nd Office\",\n \"con_add1\": \"123 First St.\",\n \"con_city\": \"Kansas City\",\n \"con_state\": \"KS\",\n \"con_zip\": \"13222\",\n \"con_country\": \"United States\"\n },\n {\n \"con_id\": \"000000003\",\t\n \"con_type\": \"person\",\n \"con_label\": \"Manager\",\n \"con_fname\": \"Bob\",\n \"con_lname\": \"Smith\",\n \"con_person_num\": \"232 2212 222\",\n \"con_person_email\": \"[email protected]\"\n }\n ],\n \"co_name\": \"Big Sky Co\",\n \"co_size\": \"small\",\n}\n<?php\n\t$updateResult = $collection->findOneAndUpdate(['_id' => $oid, 'co_contacts.con_id' => '00000002'],['$set' => ['co_contacts.$' => $data]],['returnDocument' => MongoDB\\Operation\\FindOneAndUpdate::RETURN_DOCUMENT_AFTER]);\n\n\tif (is_null($updateResult)) $updateResult = $collection->updateOne(['_id' => $oid],['$push' => ['co_contacts' => $data]]);\n",
"text": "Greetings All,I am using the latest version of MongoDB 6.0.5 Community version on a standalone server and I’m using the latest version of the PHP MongoDB Driver library (mongodb/mongodb - Packagist) to interact with the database.I have a collection of documents that resemble the following example:I would like to be able to either update or insert (upsert) documents into the co_contacts array with one call to the database, but I can’t seem to get it right. My workaround is to use two calls as follows:Can someone help me optimize my PHP code into one call (e.g. upsert)?Thanks in advance,\nAlec",
"username": "Alexander_Dean"
},
{
"code": "",
"text": "Hello @Alexander_Dean, Welcome to the MongoDB developer forum,I don’t know the syntax in PHP, you can refer to the following topic,",
"username": "turivishal"
},
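One way to collapse the two calls into a single round trip is an update that uses an aggregation pipeline (MongoDB 4.2+), deciding replace-vs-append on the server. Below is a mongosh sketch of the shape; the collection name, oid and newContact are placeholders for the real collection, document _id and contact subdocument, and the PHP library accepts the same pipeline as an array of stage documents in place of the update document:

```javascript
// newContact is the subdocument to write; con_id decides whether we replace or append.
const newContact = { con_id: "000000002", con_type: "address", con_label: "2nd Office" };

db.companies.updateOne(
  { _id: oid },
  [{
    $set: {
      co_contacts: {
        $cond: [
          { $in: [newContact.con_id, { $ifNull: ["$co_contacts.con_id", []] }] },
          // Replace the element whose con_id matches.
          { $map: {
              input: "$co_contacts",
              as: "c",
              in: { $cond: [{ $eq: ["$$c.con_id", newContact.con_id] }, newContact, "$$c"] }
          } },
          // Otherwise append it to the array.
          { $concatArrays: [{ $ifNull: ["$co_contacts", []] }, [newContact]] }
        ]
      }
    }
  }]
)
```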
{
"code": "",
"text": "@turivishal Thanks, I’ll have a look at that thread.",
"username": "Alexander_Dean"
}
] | Upsert Embedded Array | 2023-04-24T07:57:24.538Z | Upsert Embedded Array | 769 |
null | [
"java",
"connecting",
"containers",
"spring-data-odm"
] | [
{
"code": "2021-05-30 02:43:51.909 INFO 1 --- [nio-9090-exec-3] org.mongodb.driver.connection : Closed connection [connectionId{localValue:5}] to mongo-db:27017 because there was a socket exception raised by this connection.\n\n2021-05-30 02:43:51.924 ERROR 1 --- [nio-9090-exec-3] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.data.mongodb.UncategorizedMongoDbException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='benz', source='admin', password=<hidden>, mechanismProperties=<hidden>}; nested exception is com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='benz', source='admin', password=<hidden>, mechanismProperties=<hidden>}] with root cause\n\ncom.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): 'Authentication failed.' on server mongo-db:27017. The full response is {\"ok\": 0.0, \"errmsg\": \"Authentication failed.\", \"code\": 18, \"codeName\": \"AuthenticationFailed\"}\n at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175) ~[mongodb-driver-core-4.2.3.jar!/:na]\n at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:358) ~[mongodb-driver-core-4.2.3.jar!/:na]\n at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:279) ~[mongodb-driver-core-4.2.3.jar!/:na]\n",
"text": "when I try to connect spring boot application with the mongo-db container then the authentication failed exception has thrown.please refer to this link to get more details authentication-failed",
"username": "Nafaz_M_N_M"
},
{
"code": "nested exception is com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-",
"text": "nested exception is com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-Have you tried with authSource in URI string?",
"username": "Ramachandra_Tummala"
},
{
"code": "mongodb://benz:14292@mongo-db/producer_db?authSource=admin",
"text": "yes, this is my URLmongodb://benz:14292@mongo-db/producer_db?authSource=admin",
"username": "Nafaz_M_N_M"
},
{
"code": "",
"text": "Can you connect by shell or Compass using same connection details?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "yes, it works with shell",
"username": "Nafaz_M_N_M"
},
{
"code": "",
"text": "Hi @Nafaz_M_N_M,We hope that the problem has been resolved. So, we can close this thread for now.!!\nIn case of any further issues or queries, please feel free to reach out by creating new post in a relevant category.All the Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "KushagraThe problem hasn’t been solved, it works with shell but not with application. Do not close the thread before getting an answer",
"username": "Nafaz_M_N_M"
},
{
"code": "",
"text": "Hi @Nafaz_M_N_M,Be sure, the thread will not be closed before getting resolved.\nI have moved the thread to a specific category and added some tags for broader visibility.All the Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "@Nafaz_M_N_M did you resolve the issue?\nI get the same error as you did. I can connect by shell using the same connection details, but not via Spring application.\nMy application.properties specify:\nspring.data.mongodb.authentication-database=admin",
"username": "Ciz"
},
{
"code": "",
"text": "UPDATE:\nI have now resolved my problem. This is what happend:\nI wanted to learn how to use a MongoDB instance from a Docker container instead of using my local MongoDB. I mapped the MongoDB container’s port 27017 to the computers localhost 27017. When running the Spring Application, MongoDB seemed to respond. It was only when I tried to performe CRUD-operations that authentication failed. Finally I realized that the localhost 27017 was already in use by the local MongoDB. Thus, I mapped the MongoDB container to port 8094 on localhost, and now it seems to work fine. I hope this could be of any help to anyone.",
"username": "Ciz"
},
{
"code": "",
"text": "I seem to be having the same problem, I just fired a spring jpa mogo app and I have authentication issues when I try to perform any crud operation, I am running mongo on docker and I dont have any instances running locally also I checked my ports and one other app is using it. I am able to log on successfully when I using the mongo app.By the way running spring 2.7.4 with java 17 on an m1 macThe error I get is shown below2022-12-24 13:30:35.002 ERROR 86462 — [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path threw exception [Request processing failed; nested exception is org.springframework.data.mongodb.UncategorizedMongoDbException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName=‘adebola’, source=‘onecard’, password=, mechanismProperties=}; nested exception is com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName=‘adebola’, source=‘onecard’, password=, mechanismProperties=}] with root causecom.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): ‘Authentication failed.’ on server localhost:27017. The full response is {“ok”: 0.0, “errmsg”: “Authentication failed.”, “code”: 18, “codeName”: “AuthenticationFailed”}",
"username": "Adebola_51132"
},
{
"code": "",
"text": "Hello,\nDid you resolve this error? Im stuck on a similar error with code 18",
"username": "Splatted_I0I"
},
{
"code": "?authSource=adminmongodb://<username>:<password>@<host>:27017/image_hosting_db?authSource=admin\n",
"text": "I faced similar error. And i resolve it by adding ?authSource=admin to the end of my mongo connection string.",
"username": "Ken_Low"
},
{
"code": "",
"text": "I had the same problem with spring boot and solved it like this:\nreplicaset “rset”\nusername user\npassword pw\nDtatabase workdb\ni have mongodb on pc “mini-pc” and 2 replica “hostReplica1” “hostReplica2” .THIS NOT WORKERROR 18 authentication failTHIS NOT WORKIN THIS CASE IT START BUT DIE AFTER SOME TIME FOR REPLICASET ERRORTHIS WORKHopefully it helps someone",
"username": "Filippo_Pratesi"
}
] | MongoDB Authentication Failed With Spring Data URI | 2021-05-30T03:00:06.138Z | MongoDB Authentication Failed With Spring Data URI | 30,924 |
null | [] | [
{
"code": "",
"text": "HI ,Could you confirm Polling Or Webhook is supported by Mongo DB Atlas Rest Endpoints or JDBC.\nCan you provide doumentation for this .Thanks,\nShubhangi",
"username": "Shubhangi_Pawar"
},
{
"code": "",
"text": "Hello @Shubhangi_Pawar .As per this documentation on Third-Party ServiceWebhooks have been renamed to HTTPS Endpoints with no change in behavior.Third party services and push notifications in App Services have been deprecated in favor of creating HTTP endpoints that use external dependencies in functions.Let me know if you have any additional questions, would be happy to help! Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Thanks, Tarun, for your quick response.Yes. We have few more queries, It would be helpful if you answer it.1.Do we have rest endpoint for also listing db names and listing all collection name2.SCRAM,X509 Authentication, LDAP, Kerberos Authentication mechanism – this Authentication mechanism is for Database User login, right? We are not using it externally when we connect to db with Rest Endpoints.3.For rest endpoints Authentication mechanism (Authentication Provider) is :API key, email-password, JWT token – From https://www.mongodb.com/docs/atlas/app-services/users/sessions/ I am understanding we can get bearer token and refresh token.I tried from postman ,getting below response. “Authentication via local-userpass is unsupported”, same observed with apikey and annon-user.Thanks,\nShubhangi",
"username": "Shubhangi_Pawar"
},
{
"code": "",
"text": "Hello @Shubhangi_Pawar ,I saw that Andrew replied on your other similar post.\nPlease feel free to open a new thread for any additional queries.",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "",
"username": "Tarun_Gaur"
}
] | Could you confirm Polling and Webhook is supported by Mongo DB Atlas Rest Endpoints or JDBC | 2023-04-11T05:41:00.876Z | Could you confirm Polling and Webhook is supported by Mongo DB Atlas Rest Endpoints or JDBC | 669 |
null | [
"queries"
] | [
{
"code": "{\n_id: 'abb',\nprojectId:\"1\"\nproject:{\nname:'abc',\nid:'1'\n}\n}\n",
"text": "Given the following schemaA senior dev told me that if I query for projectId it would be a faster query compared to\nif I query for “project.id”.\nIs this true?\nI just wanted some confirmation and explanation as to why this is",
"username": "AbdulRahman_Riyaz"
},
{
"code": "{\n user : { id : 1 , name : steevej }\n project : { id : 2 , name : data-warehouse }\n}\n{\n userId : 1 ,\n userName : steevej , \n projectId : 2 ,\n projectName : data-warehouse\n}\n{\n user : { id : 1 , name : steevej }\n projects :\n [\n { id : 2 , name : data-warehouse } ,\n { id : 3 , name : web-interface } ,\n ]\n}\n{\n userId : 1 ,\n userName : steevej , \n projectId_0 : 2 ,\n projectName_0 : data-warehouse ,\n projectId_1 : 3 ,\n projectName_1 : web-interface\n}\n",
"text": "confirmation and explanation as to why this isWell, you should definitively invite yoursenior devhere in order for him/her to supply the reasoning behind his/her remark. There are factors more important than that for performance. A indexed project.id will be a lot faster compared to a non-indexed projectId.Functionally, I prefer to have a project: object.vsvsThen each element of projects: could be processed by the same project functions directly. Otherwise you have un-mangle the name before calling the functions.So please ask your senior dev to provide his/her input.",
"username": "steevej"
},
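A quick way to check both points above in mongosh (the collection name items is assumed): dot notation indexes a nested field exactly like a top-level one, and it is the query plan, not the nesting, that decides speed:

```javascript
// Index the nested field with dot notation.
db.items.createIndex({ "project.id": 1 });

// Compare the plans: IXSCAN on the indexed field vs COLLSCAN on the unindexed one.
db.items.find({ "project.id": "1" }).explain("executionStats");
db.items.find({ projectId: "1" }).explain("executionStats");
```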
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is query speed faster if I query for non-nested fields? | 2023-04-24T06:01:18.649Z | Is query speed faster if I query for non-nested fields? | 429 |
[
"kafka-connector"
] | [
{
"code": "",
"text": "Hey all,\nI connected a mongo collection with a kafka topic using MongoDB Atlas Source in confluent cloud.\nI’m using JSON as the output format, however, when fetching messages from the topic, some fields in the JSON are being stringified and not being passed as an object. See attached screenshot:The message from the topic (see the _id and engagement.history fields) :\n\nimage1257×661 128 KB\nWould love some help with understanding how to fix it.\nThanks!",
"username": "Jonathan_Alumot"
},
{
"code": "",
"text": "Regarding the above issue this is the doc in the collection:\n",
"username": "Jonathan_Alumot"
},
{
"code": "",
"text": "We’re experiencing the same issue and are working around it with custom serializers. I found another post on the same issue, but it looks like the referenced Jira item was resolved as “Works as Designed”, so it looks we’re going to have to continue working around it.",
"username": "Robert_Blackwood"
},
{
"code": "ObjectId",
"text": "Hello @Jonathan_Alumot, were you able to find a workaround for your issue?@Robert_Blackwood, while the ticket you mention explains the issue with arrays, it does not for ObjectIds class. I’m still trying to figure out why it is being serialized as string.",
"username": "Henrique_Tensfeld"
},
{
"code": "",
"text": "Hi @Henrique_Tensfeld, it looks like this is the Simplified Json formatter’s expected behavior if you are using that formatter.",
"username": "Robert_Blackwood"
},
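If you want the ObjectId and array structure preserved rather than the simplified string forms, the source connector's JSON output formatter is configurable. A hedged sketch of the relevant properties (property and class names as listed in the MongoDB Kafka connector documentation; verify them against your connector version and whether Confluent Cloud exposes them):

```
output.format.value=json
output.json.formatter=com.mongodb.kafka.connect.source.json.formatter.ExtendedJson
```

With the extended formatter, an ObjectId should come through in Extended JSON form (e.g. {"$oid": "..."}) rather than as a bare string.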
{
"code": "simplified json",
"text": "True, the simplified json formatter would transform the ObjectId object into a string containing the ID itself as string. In my case, I wasn’t using it though I was getting a “stringuified” JSON object.",
"username": "Henrique_Tensfeld"
}
] | Kafka-connector: Some fields in the JSON are being stringified in the topic | 2022-07-20T09:39:29.241Z | Kafka-connector: Some fields in the JSON are being stringified in the topic | 2,978 |
|
null | [
"replication"
] | [
{
"code": "rs.add(\"<IP>:<PORT>\")db.adminCommand({ setFeatureCompatibilityVersion: \"3.6\" })rspp1:PRIMARY> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )\n{\n \"featureCompatibilityVersion\" : {\n \"version\" : \"3.4\",\n \"targetVersion\" : \"3.6\"\n },\n \"ok\" : 1,\n \"$gleStats\" : {\n \"lastOpTime\" : {\n \"ts\" : Timestamp(1681888293, 1),\n \"t\" : NumberLong(27)\n },\n \"electionId\" : ObjectId(\"7fffffff000000000000001b\")\n },\n \"$configServerState\" : {\n \"opTime\" : {\n \"ts\" : Timestamp(1681889207, 1),\n \"t\" : NumberLong(5)\n }\n }\n}\ndb.adminCommand({ setFeatureCompatibilityVersion: \"3.4\" })",
"text": "I want to add new servers to my replica set.\nSo I used rs.add(\"<IP>:<PORT>\") to add the servers.\nBut the servers became not reachable. On checking Logs I found out it was feature compatibility version issue. So I updated the feature Compatibility Version using, db.adminCommand({ setFeatureCompatibilityVersion: \"3.6\" }).\nBut on checking status I got,.\nThis means for some reason my upgrade is stuck and not finished.\nI need a way to revert it back to 3.4.\nAlso db.adminCommand({ setFeatureCompatibilityVersion: \"3.4\" }) doesn’t work because the first upgrade is not completed yet.\nI want ot stop the update and revert it back to 3.4\nKindly help me.",
"username": "19_231_Chirag_Sharma"
},
{
"code": "db.adminCommand({ setFeatureCompatibilityVersion: \"3.6\" })rs.conf()rs.status()rs.add(\"<IP>:<PORT>\")hostname",
"text": "Hello @19_231_Chirag_Sharma,I updated the feature Compatibility Version using, db.adminCommand({ setFeatureCompatibilityVersion: \"3.6\" }).I see that you are currently using an older version of MongoDB. The said version has reached its end of life (refer Legacy Support Policy). Therefore, it’s highly recommended that you upgrade to a newer version, such as 4.4.20, which is the latest stable release at the time of this writing.To perform the upgrade, you can use the mongodump and mongorestore utilities to backup and restore your data. While there is no guarantee that this will work, it may be worth considering. However, before proceeding with the upgrade, it’s important to take a complete backup of your data to avoid any potential data loss.On checking Logs I found out it was a feature compatibility version issue.So I used rs.add(\"<IP>:<PORT>\") to add the servers.Also, I’ll recommend using hostname instead of an actual IP address as per the recommendation in the documentation.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Feature Compatibility Version Issue | 2023-04-19T08:17:24.062Z | Feature Compatibility Version Issue | 844 |
null | [
"node-js",
"mongoose-odm",
"atlas-cluster"
] | [
{
"code": "/home/runner/Cat-1/node_modules/mongoose/lib/connection.js:755\n err = new ServerSelectionError();\n ^\n\nMongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://www.mongodb.com/docs/atlas/security-whitelist/\n at _handleConnectionErrors (/home/runner/Cat-1/node_modules/mongoose/lib/connection.js:755:11)\n at NativeConnection.openUri (/home/runner/Cat-1/node_modules/mongoose/lib/connection.js:730:11)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'ac-pmec9fr-shard-00-00.jcedrpx.mongodb.net:27017' => ServerDescription {\n address: 'ac-pmec9fr-shard-00-00.jcedrpx.mongodb.net:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 3256436,\n lastWriteDate: 0,\n error: MongoNetworkError: getaddrinfo ENOTFOUND ac-pmec9fr-shard-00-00.jcedrpx.mongodb.net\n at connectionFailureError (/home/runner/Cat-1/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:383:20)\n at TLSSocket.<anonymous> (/home/runner/Cat-1/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:307:22)\n at Object.onceWrapper (node:events:628:26)\n at TLSSocket.emit (node:events:513:28)\n at TLSSocket.emit (node:domain:489:12)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n cause: Error: getaddrinfo ENOTFOUND ac-pmec9fr-shard-00-00.jcedrpx.mongodb.net\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {\n errno: -3008,\n code: 'ENOTFOUND',\n syscall: 'getaddrinfo',\n hostname: 'ac-pmec9fr-shard-00-00.jcedrpx.mongodb.net'\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n },\n 'ac-pmec9fr-shard-00-01.jcedrpx.mongodb.net:27017' => ServerDescription {\n address: 'ac-pmec9fr-shard-00-01.jcedrpx.mongodb.net:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 3256433,\n lastWriteDate: 0,\n error: MongoNetworkError: getaddrinfo ENOTFOUND ac-pmec9fr-shard-00-01.jcedrpx.mongodb.net\n at connectionFailureError (/home/runner/Cat-1/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:383:20)\n at TLSSocket.<anonymous> (/home/runner/Cat-1/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:307:22)\n at Object.onceWrapper (node:events:628:26)\n at TLSSocket.emit (node:events:513:28)\n at TLSSocket.emit (node:domain:489:12)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n cause: Error: getaddrinfo ENOTFOUND ac-pmec9fr-shard-00-01.jcedrpx.mongodb.net\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {\n errno: -3008,\n code: 'ENOTFOUND',\n syscall: 'getaddrinfo',\n hostname: 'ac-pmec9fr-shard-00-01.jcedrpx.mongodb.net'\n },\n 
[Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n },\n 'ac-pmec9fr-shard-00-02.jcedrpx.mongodb.net:27017' => ServerDescription {\n address: 'ac-pmec9fr-shard-00-02.jcedrpx.mongodb.net:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 3255939,\n lastWriteDate: 0,\n error: MongoNetworkError: getaddrinfo ENOTFOUND ac-pmec9fr-shard-00-02.jcedrpx.mongodb.net\n at connectionFailureError (/home/runner/Cat-1/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:383:20)\n at TLSSocket.<anonymous> (/home/runner/Cat-1/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:307:22)\n at Object.onceWrapper (node:events:628:26)\n at TLSSocket.emit (node:events:513:28)\n at TLSSocket.emit (node:domain:489:12)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n cause: Error: getaddrinfo ENOTFOUND ac-pmec9fr-shard-00-02.jcedrpx.mongodb.net\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {\n errno: -3008,\n code: 'ENOTFOUND',\n syscall: 'getaddrinfo',\n hostname: 'ac-pmec9fr-shard-00-02.jcedrpx.mongodb.net'\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-w46576-shard-0',\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined\n}\n\nNode.js v18.12.1\nexit status 1\n \n",
"text": "Hello,I’ve been experiencing this issue with Mongo (currently on a free m0). I’m currently running a node/express/mongoose server on repl.it and occasionally run into this issue. Usually it lasts permanently until I recreate the project again on a new repl.itThis happens when I run the project.Thing’s i’ve done:Strangely: Using tools like ReTool, I can still pull queries from the database even while experiencing this. And the database seems to work fine when I recreate the app from scratch again as a new project. But this only lasts for maybe 5-7 days. Then same issue again.",
"username": "Phung_Dao"
},
{
"code": "",
"text": "Hi,\nI’d recommend you to change the question title for something that clearly states the problem you’re facing. The current one is far too generic and may not draw enough attention.My suggestion:Random “Could not connect to any servers in your MongoDB Atlas cluster” while connecting from white-listed IPs",
"username": "Ricardo_Montoya"
},
{
"code": "repl.itrepl.itrepl.itrepl.it",
"text": "Hey @Phung_Dao,Welcome to the MongoDB Community forums I’m currently running a node/express/mongoose server on repl.it and occasionally run into this issue. Usually, it lasts permanently until I recreate the project again on a new repl.itIt appears that other members of the community had encountered the same issue before, and it was determined that the problem does not lie with MongoDB, but with repl.it. Would you kindly consider sending a bug report to the repl.it team to bring their attention to the issue? Additionally, for further details, please refer to the thread I have linked below.Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Continuing to get this error | 2023-04-15T02:15:36.113Z | Continuing to get this error | 1,447 |