image_url (string, nullable) | tags (sequence) | discussion (list) | title (string) | created_at (string) | fancy_title (string) | views (int64) |
---|---|---|---|---|---|---|
null | [
"java",
"cxx"
] | [
{
"code": "",
"text": "C100DEV exam is same as new associate developer exam. for new exam we need to take any specific programming languages such as C++, Java, etc…",
"username": "Sivakkumar_kailasam"
},
{
"code": "",
"text": "Hey @Sivakkumar_kailasam,Welcome to the MongoDB Community Forums! The exam has been revamped after the launch of the new LMS. Unlike the C100DEV, the new Associate Developer Exam is conducted in four different languages and you can take it in the language of your choice. They all share a common set of core questions(which are the same as the previous exam). Only section 6 - Drivers of the exam will be presented according to the programming language selected during registration. You can further read about the exam here: Certification Program Guide \nExam Study Guide.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Kindly let us know associate developer exam is same as C100DEV exam or it is different exam | 2023-03-25T10:44:39.777Z | Kindly let us know associate developer exam is same as C100DEV exam or it is different exam | 1,024 |
null | [
"indexes",
"atlas-search"
] | [
{
"code": "{\n\t\"analyzer\": \"lucene.keyword\",\n\t\"mappings\": {\n\t\t\"dynamic\": true,\n\t\t\"fields\": {\n\t\t\t\"name\": [\n\t\t\t\t{\n\t\t\t\t\t\"analyzer\": \"lucene.standard\",\n\t\t\t\t\t\"multi\": {\n\t\t\t\t\t\t\"keyword\": {\n\t\t\t\t\t\t\t\"analyzer\": \"lucene.keyword\",\n\t\t\t\t\t\t\t\"type\": \"string\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"type\": \"string\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"minGrams\": 3,\n\t\t\t\t\t\"tokenization\": \"edgeGram\",\n\t\t\t\t\t\"type\": \"autocomplete\"\n\t\t\t\t}\n\t\t\t],\n }\n }\n}\n",
"text": "Hello there!\nI have a string field which uses for autocompletion functionality. And wondering how to implement Autocomplete for values that contains special characters/punctuation, e.g. > 1’ 1/4\" - 8\". Should I replace or somehow escape these characters ( ', /, \" ) when making an autocomplete query? Where I can find information how does the analyzer parse those characters?My search-index example",
"username": "Nikita_Prokopev"
},
{
"code": "",
"text": "Hi @Nikita_Prokopev and welcome to the MongoDB community forum!!For better understanding of the requirement, could you help us with a few details like:Let us know if you have any further concerns.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"63de3e354e354cb52430c9b3\"\n },\n \"sku\": \"1' 1/2\\\"w x 18\\\"d x 10\\\"h\",\n \"name\": \"30\\\" Farmer Sink, 29 1' 1/2\\\"w x 18\\\"d x 10\\\"h\"\n}\n$search: {\n compound: {\n should: [\n {\n autocomplete: {\n query: \"29 1 1' 1/2\",\n path: 'name',\n tokenOrder: 'sequential',\n }\n },\n {\n autocomplete: {\n query: \"29 1 1' 1/2\",\n path: 'sku',\n tokenOrder: 'sequential',\n }\n },\n ]\n }\n}\n",
"text": "Hi,So if I use query like29 1 1It’s returns my document.But if I use query which contains ’ or \" or /29 1’ 1\nor\n29 1’ 1/\nor\n1/2\"\nor\n29 1 1 2It’s doesn’tSo how do I need to change my query to be able to find the document which contains ’ \" or / symbols.Thanks!",
"username": "Nikita_Prokopev"
},
{
"code": "{\n \"analyzer\": \"lucene.whitespace\",\n \"searchAnalyzer\": \"lucene.whitespace\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": [\n {\n \"analyzer\": \"lucene.whitespace\",\n \"type\": \"string\"\n },\n {\n \"analyzer\": \"lucene.whitespace\",\n \"type\": \"autocomplete\"\n }\n ],\n \"sku\": [\n {\n \"analyzer\": \"lucene.whitespace\",\n \"type\": \"string\"\n },\n {\n \"analyzer\": \"lucene.whitespace\",\n \"type\": \"autocomplete\"\n }\n ]\n }\n }\n}\n[\n {\n '$search': {\n 'index': 'default', \n 'compound': {\n 'should': [\n {\n 'autocomplete': {\n 'query': '29 1’ 1/', \n 'path': 'name'\n }\n }, {\n 'autocomplete': {\n 'query': '29 1’ 1/', \n 'path': 'sku'\n }\n }\n ]\n }\n }\n }\n]\n",
"text": "Hi @Nikita_Prokopev and thank you for sharing the above query and sample documents.Here is what I tried based on the sample document shared, the lucene.whitespace analyzer used in my example divides text into searchable terms wherever it finds a whitespace character. It leaves all terms in their original case. You may need to adjust your index accordingly & test thoroughly to verify if the following suits your use cases.Here, is how my index definition looks like:Index Definition:And the following query returns the required documents:Let us know if you have any further questions .Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "@Aasawari @Nikita_Prokopev , How we search or modify the analyzer to search keywords like - m&s, h&m, h & m, Marks & spencer.",
"username": "Utsav_Upadhyay2"
},
{
"code": "",
"text": "Hi @Utsav_Upadhyay2This seems to be a different question from the thread.\nCould you open a new thread with the relevant information to help you with the possible solution.Best Regards\nAasawari",
"username": "Aasawari"
}
] | Autocomplete operator to search string which contains special characters | 2023-02-01T07:41:15.645Z | Autocomplete operator to search string which contains special characters | 1,748 |
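A rough mongosh sketch of running the whitespace-analyzer query from this thread end to end; the collection name `products` is an assumption, and the index is the `default` one defined above. Because `lucene.whitespace` only splits on whitespace, the `'`, `"` and `/` characters stay inside the tokens, so the query string can be passed through unescaped.

```javascript
// Assumes a collection named "products" and the whitespace-analyzer
// "default" Atlas Search index shown earlier in this thread.
db.products.aggregate([
  {
    $search: {
      index: "default",
      compound: {
        should: [
          { autocomplete: { query: "29 1' 1/2\"", path: "name" } },
          { autocomplete: { query: "29 1' 1/2\"", path: "sku" } }
        ]
      }
    }
  },
  { $limit: 5 }
])
```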
[
"replication",
"connecting"
] | [
{
"code": "",
"text": "This is free sandbox mode. The db reads and writes are never overloaded. Its been 6 hours and its still down.Without at least 2 replicas, no primary will be set, and thus I cannot use the cli or anything else. The entire cluster is in unavailable mode! Its been 6 hours! Imagine if this was in prod!Idk what to do. atlas tries to restart it, then a primary gets set, but then a replica tries a rollback, causing only 1 replica left, and this loop repeats! I can’t shut down the entire thing because since only 1 replica is available I can’t execute any commands and the only option on atlas is a big button telling me to “UPGRADE”.\n",
"username": "Darren_Zou"
},
{
"code": "",
"text": "Hi @Darren_Zou,I would recommend contacting the Atlas in-app chat support as soon as possible regarding this. Please provide the chat support with a cluster link.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Cannot connect. Infinite loop of no primary replica set! | 2023-03-25T21:28:34.406Z | Cannot connect. Infinite loop of no primary replica set! | 770 |
|
null | [] | [
{
"code": "Undefined symbols for architecture arm64:\n \"_$s10RealmSwift0A14CollectionImplPAAE12makeIteratorAA11RLMIteratorVy7ElementQzGyF\", referenced from:\n _$s4SAMI24SingleSelectionViewModelC8makeRowsyyF in SingleSelectionView.o\n _$s4SAMI8ThingExpC10dumpStatusyS2SFZTf4nd_n in ThingExp.o\n _$s4SAMI12LeadSheet0DBC12lookupByNameySayACGSSFZTf4nd_n in LeadSheet0DB.o\n _$s4SAMI10SongTextDBC12lookupByNameySayACGSSFZTf4nd_n in SongTextDB.o\n _$s4SAMI20PerformanceViewModelC12makeRowInfos6parentSayAA0bF4InfoVGAC_tFTf4dd_n in PerformanceView.o\n _$s4SAMI32RhythmPlayerFabricationViewModelC8makeRowsyyAA0bC4TypeOF in RhythmPlayerFabricationView.o\n _$s4SAMI25LinearLocalThingViewModelC12makeRowInfos6parentSayAA0bdH4InfoVGAC_tFTf4dn_n in LinearLocalThingView.o\n ...\n \"_$s10RealmSwift7ResultsVyxGAA0A14CollectionImplAAMc\", referenced from:\n _$s4SAMI24SingleSelectionViewModelC8makeRowsyyF in SingleSelectionView.o\n _$s4SAMI8ThingExpC10dumpStatusyS2SFZTf4nd_n in ThingExp.o\n _$s4SAMI12LeadSheet0DBC12lookupByNameySayACGSSFZTf4nd_n in LeadSheet0DB.o\n _$s4SAMI10SongTextDBC12lookupByNameySayACGSSFZTf4nd_n in SongTextDB.o\n _$s4SAMI20PerformanceViewModelC12makeRowInfos6parentSayAA0bF4InfoVGAC_tFTf4dd_n in PerformanceView.o\n _$s4SAMI32RhythmPlayerFabricationViewModelC8makeRowsyyAA0bC4TypeOF in RhythmPlayerFabricationView.o\n _$s4SAMI25LinearLocalThingViewModelC12makeRowInfos6parentSayAA0bdH4InfoVGAC_tFTf4dn_n in LinearLocalThingView.o\n ...\nld: symbol(s) not found for architecture arm64\nclang: error: linker command failed with exit code 1 (use -v to see invocation)\n",
"text": "I hadn’t archived my RealmSwift project in a few months, but now I need to do it to make a TestFlight version. Usually this goes smoothly, but this time at the end of the archive operation it finds two symbols referenced in several of my files. I’ve tried on both an M1 Mac and and Intel Mac, since one reference on Stack Overflow said it could be an M1 issue. The problem looks like this:Thanks,\nBruce",
"username": "Bruce_Cichowlas"
},
{
"code": "",
"text": "Some place on StackOverflow, someone said they had got around this problem by reverting to the 14.0.1 Xcode. I had been using Xcode 14.2, but this solution worked for me. In the long run, I suppose either Realm or Apple has something to fix.So now I’m having the problem with the unsupported authentication call “authenticationDid…”, but that’s covered elsewhere, so I don’t mean to be worrying about it here. Even with that error, I’m getting a build that is at least good enough for TestFlight and that’s all I need at the moment.",
"username": "Bruce_Cichowlas"
},
{
"code": "",
"text": "Can I use fbsymbols instead of this code?",
"username": "Haward_Kathie"
},
{
"code": "",
"text": "It appears that, starting in April, Apple will require that Xcode 14.2 be used. I shifted down to 14.0 to avoid the problem above, but today I went back to Xcode 14.2 and using the master branch for RealmSwift. I am still having the undefined symbols problem. Does this problem still exist?And I don’t understand what this has to do with fbsymbols.(later) I had been using the default Xcode package specification, but it gets me 10.28. for Realm. I’m now specifying 10.37.0 explicitly and so far it is going better. It did upload to the App Store without error, though I haven’t done any real testing of my app yet.So is that the solution? And, if so, why did I need to specify it explicitly to Xcode? Should I have specified it some other way to Xcode packages?",
"username": "Bruce_Cichowlas"
}
] | Trouble archiving because of undefined Realm/RealmSwift symbols | 2022-12-22T14:41:15.773Z | Trouble archiving because of undefined Realm/RealmSwift symbols | 2,265 |
null | [
"aggregation"
] | [
{
"code": "{\n \"Categories\": [\n \"Keys\": [\n {\n \"Name\": \"someName\",\n \"Values\": [\n {\n \"Integer\": 0\n }\n ]\n }\n ]\n ]\n}\n",
"text": "Hey,\nIf i have an object like this one below, how could i set “Number” field in $project aggregation with Integer’s value if the Key.Name = “someName”?",
"username": "Linua_N_A"
},
{
"code": "$condKeys\"Key.Name\"\"Number\"",
"text": "Hi @Linua_N_A - Welcome to the community.Have you taken a look at the $cond operator to see if it can help achieve what you’re after?It would make it easier to assist as well if you could provide the following:Regards,\nJason",
"username": "Jason_Tran"
}
] | How would i get Data from an Array conditionally | 2023-03-21T12:59:23.646Z | How would i get Data from an Array conditionally | 486 |
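Since the thread stops short of a worked answer, here is one possible shape of that `$project`, sketched under assumptions: the collection is called `items` and the document is treated as having a top-level `Keys` array (the sample JSON in the question is not valid, so the exact nesting is a guess). `$first` requires MongoDB 4.4 or newer.

```javascript
// Assumed shape: { Keys: [ { Name: "someName", Values: [ { Integer: 0 } ] } ] }
db.items.aggregate([
  {
    $project: {
      Number: {
        $first: {
          $map: {
            // keep only the Keys entries whose Name matches
            input: {
              $filter: {
                input: "$Keys",
                as: "k",
                cond: { $eq: ["$$k.Name", "someName"] }
              }
            },
            as: "m",
            // take the first Integer value of the matching entry
            in: { $first: { $map: { input: "$$m.Values", as: "v", in: "$$v.Integer" } } }
          }
        }
      }
    }
  }
])
```

If no entry matches, `Number` should simply come back missing rather than erroring.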
null | [
"data-api"
] | [
{
"code": "",
"text": "I created a test application where I am able to make GET requests to mongo and I get back the data fine. However, whenever I try to make POST requests, I get a 401 even though I’m using the same credentials as in the GET request. Help please?",
"username": "Latonya_Ferguson"
},
{
"code": "",
"text": "Hi @Latonya_Ferguson - Welcome the community However, whenever I try to make POST requests, I get a 401 even though I’m using the same credentials as in the GET request. Help please?Can you provide the full response received when performing the POST requests? Also, do you have any example of such POST requests (please be sure to include the endpoint as well).Redact any sensitive information before posting here.Regards,\nJason",
"username": "Jason_Tran"
}
] | 401 when making POST requests to Atlas | 2023-03-26T18:11:53.437Z | 401 when making POST requests to Atlas | 836 |
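For reference, a hedged sketch of what a Data API POST can look like from Node.js; the `<app-id>` in the URL and the `DATA_API_KEY` environment variable are placeholders, not values from this thread. A 401 typically means the request was not authenticated, most often because the `api-key` header is missing or not valid for that app, so comparing the headers of the working and failing requests is a good first step.

```javascript
// Hypothetical endpoint: replace <app-id> with your own Data API App ID.
const res = await fetch(
  "https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/findOne",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.DATA_API_KEY // assumed to hold a valid Data API key
    },
    body: JSON.stringify({
      dataSource: "Cluster0",
      database: "myFirstDatabase",
      collection: "phonelogsdetails",
      filter: {}
    })
  }
);
console.log(res.status, await res.json());
```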
null | [
"atlas-cluster"
] | [
{
"code": "",
"text": "Hi Team, please help us. We are facing downtime in production. Actually while upgrading from M2 to M5 was taking a lot of time. While stopping this upgradation process by mistake it got terminated. Kindly help.",
"username": "Tejpal_Yadav"
},
{
"code": "",
"text": "Hi @Tejpal_Yadav,While stopping this upgradation process by mistake it got terminated. Kindly help.I would recommend contacting the Atlas in-app chat support as soon as possible regarding this.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Cluster deleted by mistake, how to recover | 2023-03-25T09:48:08.745Z | Cluster deleted by mistake, how to recover | 591 |
null | [
"data-modeling",
"flutter"
] | [
{
"code": "",
"text": "I am currently looking to migrate my hobby flutter project from using sqlite to realm since I am very interested in the strong offline first support that realm offers with the ease of switching to sync at a later time.The app consists of several modular trackers for things like recipes, shopping lists, budgets etc. Each module is independent of one another so data will not be related, at least not at the persistence layer.Given that, would it make sense to use a realm per module? A followup question is if adding dozens of schemas to a single realm will impact performance significantly?",
"username": "Scott_Bisaillon"
},
{
"code": "toListRealmResultsRealmObject",
"text": "Hi ScottThere is no performance advantage of opening a separate realm per module.There are many downsides, fx. transactions cannot span different realms.Hence I would suggest opening just a single realm and pass it to each module.If there are conceptual links between the various model classes, I would suggest modelling these explicitly, but you don’t have to.On a more general note…When starting a realm project, especially migrating from something like sqlite, I often see people hide the database layer, treating the objects returned as plain old dart objects (PODOs), calling toList on RealmResults, copying data from RealmObjects into non-realm objects that are then passed up the stack, and vice versa. That kind of architectural abstraction has a lot preconceptions about how a database work.It will work with Realm as well, but it is not the best way to use it, as you loose the lazy loaded nature, and the ability to listen for changes. So give that some thought.",
"username": "Kasper_Nielsen1"
},
{
"code": "toListRealmResultsRealmObject",
"text": "Hi Kasper,Thank you for your response. That makes sense and helps with how I will structure the app.When starting a realm project, especially migrating from something like sqlite, I often see people hide the database layer, treating the objects returned as plain old dart objects (PODOs), calling toList on RealmResults , copying data from RealmObject s into non-realm objects that are then passed up the stack, and vice versa. That kind of architectural abstraction has a lot preconceptions about how a database work.Sure enough, that was the approach I was taking at first. Your demo over at the Flutter Observable show (really helpful by the way) actually made me aware of the lazy-loaded aspect of Realm objects and wasn’t sure how I would tackle that come time to implement. I think those properties of Realm actually reduce some of complexities in the app when it comes to layer abstractions.Thanks for the help!",
"username": "Scott_Bisaillon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | When would I use multiple realms in a single application? | 2023-03-25T12:49:10.582Z | When would I use multiple realms in a single application? | 1,126 |
[
"crud",
"data-api"
] | [
{
"code": "{ \"dataSource\": \"Cluster0\", \"database\" : \"myFirstDatabase\", \"collection\" : \"phonelogsdetails\" } ",
"text": "Hey @everyone\nI am trying to make crud api request using mongodb Data API. I am trying to make requests to my database to get data from the collections. I have whitelisted my IP and have added IP for connecting from anywhere. I have also added the API key to the request and in the body i am passing the\n{ \"dataSource\": \"Cluster0\", \"database\" : \"myFirstDatabase\", \"collection\" : \"phonelogsdetails\" } \nAny idea on why this autnentication error is coming and how to remove it\n\nimage1492×775 70.4 KB\n",
"username": "avish_mehta"
},
{
"code": "",
"text": "It says bad authentication\nAre you using correct userid/pwd?\nCan you connect by shell?\nDoes your password has any special characters?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "",
"username": "Rajesh_Dyawarkonda"
}
] | SCRAM-SHA-1: (AtlasError) bad auth : Authentication failed while using mongodb api | 2022-01-05T07:23:24.411Z | SCRAM-SHA-1: (AtlasError) bad auth : Authentication failed while using mongodb api | 12,537 |
|
[
"java",
"spring-data-odm"
] | [
{
"code": "",
"text": "HI Everyone,\nI am unable to connect my MongoDb(V4.4) Atlas Cluster from Spring Boot Application. Please Find The below Screenshot.Thank You in Advance\nWhatsApp Image 2023-02-08 at 2.17.03 PM1280×572 175 KB",
"username": "Sandeep_Kadikekar"
},
{
"code": "localhost:27017\n",
"text": "The error indicates that you are trying to connect toThis is not the address of your Atlas Cluster. It means your application is not configured correctly. Go over the installation/configuration instructions and make sure you specify the appropriate URI.",
"username": "steevej"
},
{
"code": "",
"text": "Hello Sandeep did your error het resolved ?",
"username": "Dayananthavel_K.R"
},
{
"code": "@SpringBootApplication(exclude = {\n MongoAutoConfiguration.class,\n MongoDataAutoConfiguration.class\n})\n@NoArgsConstructor\n@EnableAutoConfiguration\npublic class MainApplication {\n",
"text": "I am facing same issue i have applied exclusions at the MainApp runner class as shown below :",
"username": "Brijesh_P"
}
] | Unable to connect to mongodb Atlas Cluster from SpringBoot Application | 2023-02-09T09:30:03.698Z | Unable to connect to mongodb Atlas Cluster from SpringBoot Application | 1,656 |
|
null | [
"connecting",
"golang"
] | [
{
"code": "go.mongodb.org/mongo-driver/mongotimed out while checking out a connection from connection pool: context canceled",
"text": "Hi,I am using Golang + Mongo in my application. I am using go.mongodb.org/mongo-driver/mongo mongo package/driver.I am getting timed out while checking out a connection from connection pool: context canceled error very intermittently. I am getting this error when I perform the CRUD operation.Can someone please help find the reason for this error? And how to fix this?Regards\nPrithvi",
"username": "Prithvipal_Singh"
},
{
"code": "",
"text": "@Prithvipal_Singh Did you get this fixed?",
"username": "Jithendra_Kumar_Rangaraj"
},
{
"code": "timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 100, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 3\n",
"text": "I’m seeing this a lot as well -Is there a good guide to how to begin debugging this?",
"username": "Topher_Sterling"
},
{
"code": "context.WithTimeoutcontext.TODO()",
"text": "What type of context did you pass?\nI was getting that when I passed context.WithTimeout but after changing it to context.TODO() it was working fine",
"username": "Ratul_Rahman_Rudra"
},
{
"code": "connection() : auth error: sasl conversation error: unable to authenticate using mechanism \"SCRAM-SHA-1\": connection(...) failed to write: context canceledtimed out while checking out a connection from connection pool: context canceledcontext canceled",
"text": "So, this isn’t as mysterious as the mongodb client is making it seem. You can get a lot of different error messages that all say the same thing like:\nconnection() : auth error: sasl conversation error: unable to authenticate using mechanism \"SCRAM-SHA-1\": connection(...) failed to write: context canceled\nor\ntimed out while checking out a connection from connection pool: context canceled\nThe beginning part is just telling you what the client was trying to do when it encountered the problem, which isn’t interesting. The problem it encountered is the revealing bit: context canceled. It was trying to use a context that had been cancelled. Unless you put a timeout on a context and some mongodb request took too long and cancelled the context, this is likely a mistake on your part. You probably fed a cancelled context into the client. This can happen when you reuse contexts. Which is not necessarily a mistake. You should just probably check on the status of reused context after a long operation (like a database lookup) if you want to avoid these errors.You can also just use context.TODO() or context.Background() which aren’t cancellable. But that isn’t a great solution because a problem might cause your request to hang for a long time effectively disabling your processing. That is why timeouts are useful. If you are getting these a lot, you might consider increasing your time out. Otherwise, you should be checking the status of contexts that get reused. Or don’t be afraid to construct a new context with timeout for every mongodb request you make. That is the purpose of contexts. It is easy to make more of them. For example, the net/http package will create a new context for each received request when starting an http server.",
"username": "David_Johnson1"
}
] | Timed out while checking out a connection from connection pool: context canceled | 2021-02-12T11:29:49.401Z | Timed out while checking out a connection from connection pool: context canceled | 7,578 |
null | [
"queries",
"react-native"
] | [
{
"code": "@realm/reactDISTINCT",
"text": "Hello, I am using @realm/react in my react native project. Is it possible to use DISTINCT in order to return unique field values from documents inside a collection? If yes, can you provide a simple example or point to documentation? I wasn’t able to find anything on my own.",
"username": "Damian_Danev"
},
{
"code": "const uniqueDogs = dogs.filtered('name == $0 DISTINCT(name)', value);\n",
"text": "Isn’t it something like this?and this may help",
"username": "Jay"
},
{
"code": "name == $0 dogs.filtered(' DISTINCT(name)');Error: Exception in HostFunction: Invalid predicate: ' DISTINCT(report_type)': syntax error, unexpected '('\nat node_modules\\react-native\\Libraries\\Core\\ExceptionsManager.js:null in reportException\nat node_modules\\react-native\\Libraries\\Core\\ExceptionsManager.js:null in handleException\nat node_modules\\react-native\\Libraries\\Core\\setUpErrorHandling.js:null in handleError\nat node_modules\\expo-dev-launcher\\build\\DevLauncherErrorManager.js:null in errorHandler\nat node_modules\\expo-dev-launcher\\build\\DevLauncherErrorManager.js:null in <anonymous>\nat node_modules\\@react-native\\polyfills\\error-guard.js:null in ErrorUtils.reportFatalError\nat node_modules\\react-native\\Libraries\\BatchedBridge\\MessageQueue.js:null in __guard\nat node_modules\\react-native\\Libraries\\BatchedBridge\\MessageQueue.js:null in callFunctionReturnFlushedQueue\n",
"text": "@Jay , That does work, however, I can’t get it to work without additional unwanted query param name == $0 . Just dogs.filtered(' DISTINCT(name)'); doesn’t work, throws an error:",
"username": "Damian_Danev"
},
{
"code": "name == $0const uniqueDogs = realm.where(DogClass.class).distinct(\"name\").distinct(\"name\").findAll",
"text": "Hmm. That’s a parameterized query. When using DISTINCT inline, it cannot operate independently and must be attached to at least one query filter, hence the name == $0If I remember correctly, I believe this was implemented at some pointconst uniqueDogs = realm.where(DogClass.class).distinct(\"name\")or it could be.distinct(\"name\").findAllDocumentation lacks solid examples so I put a documentation request in.As mentioned, React is not my strong suit so maybe someone else can chime in.",
"username": "Jay"
}
] | RQL: DISTINCT in React Native | 2023-03-25T10:05:52.324Z | RQL: DISTINCT in React Native | 1,055 |
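One workaround that is sometimes suggested for realm-js, not verified here, is to satisfy the requirement for a leading predicate with the always-true `TRUEPREDICATE` keyword instead of a dummy field comparison; the `Dog` object name is assumed from the earlier example.

```javascript
// Untested sketch: TRUEPREDICATE acts as a no-op filter so DISTINCT can stand alone.
// Check that your Realm JS version accepts this before relying on it.
const uniqueDogs = realm.objects("Dog").filtered("TRUEPREDICATE DISTINCT(name)");
uniqueDogs.forEach(d => console.log(d.name));
```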
null | [
"aggregation"
] | [
{
"code": "",
"text": "I have two separate aggregations queries that work well.First one pulls all the unique addresses (field name is “location”).\nSecond one counts how many times a street name is repeated. For example it prints out if John Street is used 1 time or 10 times. This is NOT REPEATS. This is just because there are many houses on the same street. “location” field content examples are:\n1 John street\n2 John street\n5 John street\n1 Peter street\n8 Peter street\n100 Yoyo streetHow can I combine the below two aggregations so results of first aggregation is used by second aggregation and final result is printed?1st aggregation:[\n{\n$group: {\n_id: “$location”,\ncount: { $sum: 1 },\n},\n},\n{ $match: { count: { $gt: 0 } } },\n{\n$group: {\n_id: null,\ntotalCount: { $sum: 1 },\ncontent: { $push: “$$ROOT._id” },\n},\n},\n{ $project: { _id: 0 } },\n]Second aggregation:[\n{\n$addFields: {\nparsedAddress: {\n$arrayElemAt: [\n{\n$getField: {\nfield: “captures”,\ninput: {\n$regexFind: {\ninput: “$location”,\nregex: /\\d+\\s(.+)/,\n},\n},\n},\n},\n0,\n],\n},\n},\n},\n{\n$group: {\n_id: “$parsedAddress”,\ncount: {\n$sum: 1,\n},\n},\n},\n]Thanks",
"username": "SHARP_CALL"
},
{
"code": "$facet",
"text": "Hello @SHARP_CALL,You can use $facet stage to process multiple aggregation pipelines, but make sure the result should not exceed the 16 MB BSON document size limit.",
"username": "turivishal"
},
{
"code": "",
"text": "Thanks.Can you add my above written aggregations into one so I can see the format? I am new to mongodb and don’t know what fields etc are but that should help me get started on this.",
"username": "SHARP_CALL"
},
{
"code": "{\n $facet: {\n first_agg: [\n // add first aggregation stages\n ],\n second_agg: [\n // add second aggregation stages\n ]\n }\n}\n",
"text": "Hello,The provided docs link has everything with examples, if you are new then I would suggest you read and implement this in your use case, let me know if you are getting any issues.You can implement something like this,",
"username": "turivishal"
},
{
"code": "> {\n> $facet: {\n> \n> [\n> {\n> $group: {\n> _id: \"$location\",\n> count: { $sum: 1 },\n> },\n> },\n> { $match: { count: { $gt: 0 } } },\n> {\n> $group: {\n> _id: null,\n> totalCount: { $sum: 1 },\n> content: { $push: \"$$ROOT._id\" },\n> },\n> },\n> { $project: { _id: 0 } },\n> ],\n> \n> [\n> {\n> $addFields: {\n> parsedAddress: {\n> $arrayElemAt: [\n> {\n> $getField: {\n> field: \"captures\",\n> input: {\n> $regexFind: {\n> input: \"$location\",\n> regex: /\\d+\\s(.+)/,\n> },\n> },\n> },\n> },\n> 0,\n> ],\n> },\n> },\n> },\n> {\n> $group: {\n> _id: \"$parsedAddress\",\n> count: {\n> $sum: 1,\n> },\n> },\n> },\n> ],\n> }\n> }\n",
"text": "The brackets in the sample you provided do not match the ones I have. I have put it like this but I get many errors:Unexpected token, expected “]” (10:3)There are probably multiple line problems but to being with that 10:3 is the problem. Also, my first stage is an array. Should that be a problem?Below is how I combined them. Maybe it is the brackets and you can see the issue?",
"username": "SHARP_CALL"
},
{
"code": "{\n $facet: {\n first_agg: [\n // add first aggregation stages\n ],\n second_agg: [\n // add second aggregation stages\n ]\n }\n}\nfirst_aggsecond_agg{\n $facet: {\n first_agg: [\n { $group: ... },\n { $match: ... },\n { $group: ... },\n { $project: ... }\n ],\n second_agg: [\n { $addFields: ... },\n { $group: ... }\n ]\n }\n}\n",
"text": "I would suggest you follow the documentation and experiment with the examples provided in the documentation,You did not use the correct brackets, well it is a basic for every programming language,If you refer to the provided syntax in my above post, there are 2 properties first_agg and second_agg, both are itself aggregation pipelines and already bounded with the brackets, you just need to put your stages inside.Your query would be something like this,",
"username": "turivishal"
}
] | How to combine two aggregations together to get one results? | 2023-03-21T02:59:09.277Z | How to combine two aggregations together to get one results? | 706 |
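Putting the two pipelines from this thread into the `$facet` skeleton above gives roughly the following; the collection name `letters` is an assumption.

```javascript
db.letters.aggregate([
  {
    $facet: {
      // first aggregation: unique addresses plus their total count
      uniqueAddresses: [
        { $group: { _id: "$location", count: { $sum: 1 } } },
        { $match: { count: { $gt: 0 } } },
        { $group: { _id: null, totalCount: { $sum: 1 }, content: { $push: "$$ROOT._id" } } },
        { $project: { _id: 0 } }
      ],
      // second aggregation: how many documents share each street name
      streetCounts: [
        {
          $addFields: {
            parsedAddress: {
              $arrayElemAt: [
                {
                  $getField: {
                    field: "captures",
                    input: { $regexFind: { input: "$location", regex: /\d+\s(.+)/ } }
                  }
                },
                0
              ]
            }
          }
        },
        { $group: { _id: "$parsedAddress", count: { $sum: 1 } } }
      ]
    }
  }
])
```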
null | [] | [
{
"code": "",
"text": " Happy to be here. ",
"username": "Ian_Waiguru"
},
{
"code": "",
"text": "Welcome in to this forum.",
"username": "Arvel_Ryan"
},
{
"code": "",
"text": "Kata simu, Tupo site bana! ",
"username": "hai_interacive"
}
] | Greetings From Kenya | 2021-06-22T17:25:44.167Z | Greetings From Kenya | 4,145 |
null | [
"replication"
] | [
{
"code": "",
"text": "Can we create a 3 node replica set where primary will be in OCI and 2 secondary’s in On-premises?My question is if we create this setup will replication take place in first place and during primary outage will on -prem secondary turns to be primary node?Also if u have other possible way to design it helps with diagram.",
"username": "Mamatha_M"
},
{
"code": "",
"text": "I don’t think there will be any issue. For our use case, we have some nodes on premise, and one backup node in k8s (AWS). No issue ever.As long as your nodes can talk to each other, who cares where they are hosted.",
"username": "Kobe_W"
},
{
"code": "",
"text": "This is perfectly fine and will work.There are active companies with redundancies in Atlas set to Azure Cloud, AWS MongoDB Hosting, Alibaba MongoDB Hosting, and various on premise locations across Europe, Asia, and the US for full redundancy of their services to prevent outages etc. Full-scale failover protections. Even for their mobile applications using Device Sync and Realm doing this.There shouldn’t be any issues beyond your generation configs/architectural layout that would be any kind of source of conflict in doing this.",
"username": "Brock"
}
] | Can we create a 3 node replica set where primary will be in OCI and 2 secondaries in On-premises | 2023-03-23T15:04:29.281Z | Can we create a 3 node replica set where primary will be in OCI and 2 secondaries in On-premises | 548 |
null | [
"node-js",
"containers",
"cxx",
"field-encryption",
"c-driver"
] | [
{
"code": "#29 45.38 make: Entering directory '/app/node_modules/mongodb-client-encryption/build'\n#29 45.38 CXX(target) Release/obj.target/mongocrypt/src/mongocrypt.o\n#29 45.38 SOLINK_MODULE(target) Release/obj.target/mongocrypt.node\n#29 45.38 /usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find /app/node_modules/mongodb-client-encryption/deps/lib/libmongocrypt-static.a: No such file or directory\n#29 45.38 /usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find /app/node_modules/mongodb-client-encryption/deps/lib/libkms_message-static.a: No such file or directory\n#29 45.38 /usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find /app/node_modules/mongodb-client-encryption/deps/lib/libbson-static-for-libmongocrypt.a: No such file or directory\n#29 45.38 collect2: error: ld returned 1 exit status\n#29 45.38 make: *** [mongocrypt.target.mk:144: Release/obj.target/mongocrypt.node] Error 1\n#29 45.38 make: Leaving directory '/app/node_modules/mongodb-client-encryption/build'\n#29 45.38 gyp ERR! build error\n#29 45.38 gyp ERR! stack Error: `make` failed with exit code: 2\n#29 45.38 gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:201:23)\n#29 45.38 gyp ERR! stack at ChildProcess.emit (node:events:513:28)\n#29 45.38 gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:293:12)\n#29 45.38 gyp ERR! System Linux 5.15.90.1-microsoft-standard-WSL2\n#29 45.38 gyp ERR! command \"/usr/local/bin/node\" \"/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js\" \"rebuild\"\n#29 45.38 gyp ERR! cwd /app/node_modules/mongodb-client-encryption\n#29 45.38 gyp ERR! node -v v16.19.1\n#29 45.38 gyp ERR! node-gyp -v v9.1.0\n#29 45.38 gyp ERR! not ok\n#29 ERROR: executor failed running [/bin/sh -c yarn install]: exit code: 1\nFROM node:16-alpine\n\nRUN apk add git make cmake g++ curl bash linux-headers libbson-static musl-dev libc-dev openssl openssl-dev py3-pip\n\nRUN git clone https://github.com/mongodb/mongo-c-driver\nWORKDIR /mongo-c-driver\nRUN mkdir cmake-build\nWORKDIR /mongo-c-driver/cmake-build\nRUN cmake -DENABLE_MONGOC=OFF ../\nRUN make -j8 install\n\n\nWORKDIR /\nRUN git clone https://github.com/mongodb/libmongocrypt.git\nWORKDIR /libmongocrypt\nRUN mkdir cmake-build \nWORKDIR /libmongocrypt/cmake-build\nRUN cmake -DENABLE_BUILD_FOR_PPA=ON ../\nRUN make install\nWORKDIR /libmongocrypt\nRUN rm -rf cmake-build*\n\nWORKDIR /libmongocrypt/bindings/node/etc\nRUN chmod +x build-static.sh\nRUN /bin/bash -c ./build-static.sh\n\n\nWORKDIR /app\nCOPY package*.json .\nRUN yarn install \nCOPY . .\nRUN yarn build\n",
"text": "I am trying to build a docker image for my backend app that uses typescript. But the node-gyp is failing. I get the following error.This is my dockerfile:",
"username": "md_azmal"
},
{
"code": "",
"text": "Hello @md_azmal,The #1 reason why this will not work, and never work, and why you can’t even troubleshoot this right off the bat, is because you’re using Windows for this.Docker royally sucks in WSL2, and encryption is not exactly possible between the windows host and WSL VM, the build is likely to never work in this scenario. You’re better off building a VMWare VM, or a Hyper-V VM, and using that instead and retry, if the issue still persists lmk and idm helping out with the Docker config.",
"username": "Brock"
},
{
"code": "",
"text": "@Brock i have used wsl2 with docker for most of my work and it worked fine. But i agree with what you are saying I’ll boot up a vm or a Linux instance in the server and will try this. if it doesn’t work I’ll post here",
"username": "md_azmal"
},
{
"code": "",
"text": "The main issue is the encryption build.@md_azmal the encryption won’t really take with WSL as those components are in the Windows Kernel that it’ll rely on, and not the Linux Kernel.",
"username": "Brock"
},
{
"code": "",
"text": "understood! I’ll give it a shot in Linux",
"username": "md_azmal"
},
{
"code": "",
"text": "If the problem still persists even after that, let me know and I don’t mind going over further configs.",
"username": "Brock"
},
{
"code": "",
"text": "Thanks for helping @Brock I’ll let you know for sure!",
"username": "md_azmal"
},
{
"code": "#0 38.73 gyp info using [email protected] | linux | x64\n#0 38.73 gyp info find Python using Python version 3.10.10 found at \"/usr/bin/python3\"\n#0 38.73 gyp info spawn /usr/bin/python3\n#0 38.73 gyp info spawn args [\n#0 38.73 gyp info spawn args '/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',\n#0 38.73 gyp info spawn args 'binding.gyp',\n#0 38.73 gyp info spawn args '-f',\n#0 38.73 gyp info spawn args 'make',\n#0 38.73 gyp info spawn args '-I',\n#0 38.73 gyp info spawn args '/app/node_modules/mongodb-client-encryption/build/config.gypi',\n#0 38.73 gyp info spawn args '-I',\n#0 38.73 gyp info spawn args '/usr/local/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',\n#0 38.73 gyp info spawn args '-I',\n#0 38.73 gyp info spawn args '/root/.cache/node-gyp/16.19.1/include/node/common.gypi',\n#0 38.73 gyp info spawn args '-Dlibrary=shared_library',\n#0 38.73 gyp info spawn args '-Dvisibility=default',\n#0 38.73 gyp info spawn args '-Dnode_root_dir=/root/.cache/node-gyp/16.19.1',\n#0 38.73 gyp info spawn args '-Dnode_gyp_dir=/usr/local/lib/node_modules/npm/node_modules/node-gyp',\n#0 38.73 gyp info spawn args '-Dnode_lib_file=/root/.cache/node-gyp/16.19.1/<(target_arch)/node.lib',\n#0 38.73 gyp info spawn args '-Dmodule_root_dir=/app/node_modules/mongodb-client-encryption',\n#0 38.73 gyp info spawn args '-Dnode_engine=v8',\n#0 38.73 gyp info spawn args '--depth=.',\n#0 38.73 gyp info spawn args '--no-parallel',\n#0 38.73 gyp info spawn args '--generator-output',\n#0 38.73 gyp info spawn args 'build',\n#0 38.73 gyp info spawn args '-Goutput_dir=.'\n#0 38.73 gyp info spawn args ]\n#0 38.73 gyp info spawn make\n#0 38.73 gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]\n#0 38.73 make: Entering directory '/app/node_modules/mongodb-client-encryption/build'\n#0 38.73 CXX(target) Release/obj.target/mongocrypt/src/mongocrypt.o\n#0 38.73 SOLINK_MODULE(target) Release/obj.target/mongocrypt.node\n#0 38.73 /usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find /app/node_modules/mongodb-client-encryption/deps/lib/libmongocrypt-static.a: No such file or directory\n#0 38.73 /usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find /app/node_modules/mongodb-client-encryption/deps/lib/libkms_message-static.a: No such file or directory\n#0 38.73 /usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find /app/node_modules/mongodb-client-encryption/deps/lib/libbson-static-for-libmongocrypt.a: No such file or directory\n#0 38.73 collect2: error: ld returned 1 exit status\n#0 38.73 make: *** [mongocrypt.target.mk:144: Release/obj.target/mongocrypt.node] Error 1\n#0 38.73 make: Leaving directory '/app/node_modules/mongodb-client-encryption/build'\n#0 38.73 gyp ERR! build error \n#0 38.73 gyp ERR! stack Error: `make` failed with exit code: 2\n#0 38.73 gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:201:23)\n#0 38.73 gyp ERR! stack at ChildProcess.emit (node:events:513:28)\n#0 38.73 gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:293:12)\n#0 38.73 gyp ERR! System Linux 5.19.0-31-generic\n#0 38.73 gyp ERR! command \"/usr/local/bin/node\" \"/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js\" \"rebuild\"\n#0 38.73 gyp ERR! cwd /app/node_modules/mongodb-client-encryption\n#0 38.73 gyp ERR! node -v v16.19.1\n#0 38.73 gyp ERR! 
node-gyp -v v9.1.0\n#0 38.73 gyp ERR! not ok\n",
"text": "@Brock tried with linux vm same issue",
"username": "md_azmal"
},
{
"code": "",
"text": "You goofed,Upgrade Node to the latest version, then in package.json make sure the latest node version is used.Also, upgrade python to the latest version of 3.X. You’re using 16.19.1 I just realized for Node.JS, the current node-gyp cannot run on 16.19.You need Node.JS 19.X or higher, just go to the latest version of Node.JS and this entire problem should go away.And I do apologize, I didn’t realize what version of Node you were running before.However, for Alpine Linux because it’s an odd one from what I’m reading, you may need to actually downgrade the Node.JS version to 15 or lower, or could try apt-get install build-essential and then retry building as well. This should upgrade/install all of the current compatible build packages.",
"username": "Brock"
},
{
"code": "",
"text": "So I installed the same Node.JS version, and followed your build config.What I found allowed the build:\nRemove yarn.lock\nI ran sudo apt-get install build-essential\nupgraded to the latest Node.JSAnd then modified package.JSON to include the engine and most current Node.JS and it builds.",
"username": "Brock"
},
{
"code": "",
"text": "@Brock the app was build on node 16 so i used that … anyways I’ll upgrade it, can you share the Dockerfile for me to refer. Also the updated package.json",
"username": "md_azmal"
},
{
"code": "{\n \"name\": \"<NameOfApp\",\n \"version\": \"0.0.0\",\n \"private\": true,\n \"engines\": {\n \"node\": \"<Current version of node that's installed>\" \n },\n",
"text": "The Dockerfile is the same as you have above actually, I just copied over what you did, and then opened up the package.json and added to it.",
"username": "Brock"
},
{
"code": "",
"text": "It didn’t work, gives the same error. Also how will you install build-essentials in alpine, apt is not the package manager for alpine. It uses apk and the Alpine equivalent of build-essentials is build-base which I installed but it still didn’t work",
"username": "md_azmal"
},
{
"code": "curl https://raw.githubusercontent.com/dvershinin/apt-get-centos/master/apt-get.sh -o /usr/local/bin/apt-get\nchmod 0755 /usr/local/bin/apt-get\n",
"text": "Oh, yeah just install the Debian Repo for Apt Package managers.This will install the Apt package manager, sorry about that I thought you knew how to install it. That’s my fault, you would modify the script above based on the source version of your Linux OS, package managers even like pacman are agnostic, you just install them on your linux distribution and you got it.EDIT:You can also build the APT package manager from source, too.Mirror of the apt git repository - This is just a mirror of the upstream repository, please submit pull requests there: https://salsa.debian.org/apt-team/apt - GitHub - Debian/apt: Mirror of the ap...",
"username": "Brock"
}
] | Building a docker image for typescript application that uses mongodb-client-encryption fails in alpine linux | 2023-03-23T07:37:08.914Z | Building a docker image for typescript application that uses mongodb-client-encryption fails in alpine linux | 2,119 |
null | [] | [
{
"code": "{\n \"recipient\": \"John Doe\",\n \"categoryDetails\": {\n \"name\": \"Tracked letter\",\n \"categoryId\": ObjectId(\"64130f4127049b2126a9747e\")\n }\n \"clientDetails\": {\n \"name\": \"Company A\",\n \"clientId\": ObjectId(\"64130f4127049b2123244747e\")\n }\n}\n",
"text": "Hello, I have an issue that has been troubling me for a very long time, I hope you could help me and ease my pains of doubt and worry.I have 3 collections:Reading a lot about denormalization and data duplication it appears that the latter 2 could be valid candidates for data duplication, since there are a lot of read operations performed on letters while there are barely any updates expected on Clients or Categories.So based on this assumption a Letters collection is currently designed this way:Data in fields “categoryDetails” and “clientDetails” is directly copied from Categories and Clients collections.What this means, is that if some day somebody should want to change the Category name or Client name an update query should be run on all Products to update the respective name field.Considering this strategy a few plans arise:Looking forward to your help,\nV",
"username": "Vladimir"
},
{
"code": "",
"text": "What?update query should be run on all Productsthe list of productsreaching thousands of products,best way to update Productsandconstant growth of the products countWe absolutely have no clue of what product is because youhave 3 collections:",
"username": "steevej"
},
{
"code": "",
"text": "If the data size of clients and categories are small, i would prefer fetching them all once and store them in app server memory. This is so that you don’t have to duplicate any data.Given those content are rarely modified, query once and cache it forever works properly. (what if there are changes ?? just change it in mongodb and then do a rolling restart of the server)Of course there is trade-off. But you know, everything is a trade-off in a distributed world.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hello, thank you for your reply and for pointing out the error in the question. It won’t allow me to edit my original question to correct this, but I used the words products and letters interchangeably, meaningproducts are lettersWe just treat letters as products and their senders as clients. Sorry for the lack of clarity in the original question.",
"username": "Vladimir"
},
{
"code": "",
"text": "Hello and thank you. I find your advice interesting, however I would like this particular case to be resolved within the db layer. I don’t think a client name change should cause the restart of the whole server.I’d say I am 51% convinced that the following strategy could work:However there is one thing preventing me from committing to this strategy:Products collection is bound to grow indefinitely as every day there are more and more products being inserted, which means that if somebody chooses to update the Client, the number of products that need to be queried and updated is constantly increasing.I was thinking maybe use Refs and just $lookup the client details when I need them, however it would mean that I would be running an aggregation query for every time I want to get a product/ products which is guaranteed to happen many times a day just to dodge this rare problem if and when somebody decides to update a Client document. So I don’t know, I don’t really find either of these scenarios fully foolproof.",
"username": "Vladimir"
},
{
"code": "",
"text": "Is there anyone who can help me with this? MongoDB documentation, blog articles and forums provide a lot of information, advice and advocacy for denormalization and data duplication, however I could not find a lot of practical examples/ scenarios of how to deal with data update once you do need to update. Especially on collections that are destined to scale in size exponentially, such as the products.",
"username": "Vladimir"
},
{
"code": "",
"text": "Having done tons of reading and browsing, and unfortunately not having received any followup advice here, I have decided to go with heavy data duplication for those fields that are unlikely to change but have a high amount of reads. Since I chose this path from the very beginning, I am unfortunately unable to verify if and by how much that is more performant than referencing fields and populating/ lookup’ing them. However, it appears to be the strategy that MongoDB is heavily pushing both on their forums and in their docs, and that makes sense, seeing how it is one of the core things that separate document DBs from RDBs. I guess with Document DBs you really need to take the red pill and be willing enough to go down the rabbit hole to experience the database at its fullest potential.What is really curious, is that MongoDB is very vociferous and vehement about advocating for duplicating fields, yet they provide very few practical details on how to keep the affected duplicated fields up to date. Having done some not directly related reading, I have reached the conclusion that transactions is the safest way to go, especially for those updates that affect duplicate fields across multiple collections.MongoDB almost discourages the use of transactions and states that they are less performant than normal queries and thusly they should only be opted for in edge cases. Well, updating duplicated fields appears to be such an edge case and almost exactly what transactions were made for - trading some performance in exchange for data integrity and stability when the latter matter more for queries that are destined to happen very scarcely.The infinitely growing products collection dilemma remains unsolved, this will result in potentially longer update times in the future, possibly pushing updating duplicate fields behind the scenes, as a slow and tedious background process bound to happen very rarely, in exchange for fast daily atomic reads.",
"username": "Vladimir"
}
] | Data duplication advice needed | 2023-03-17T00:29:07.276Z | Data duplication advice needed | 523 |
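As a sketch of the update path described above, here is what propagating a client rename into the duplicated `clientDetails.name` field could look like inside a mongosh transaction. The database name `test`, the collection names `clients` and `letters`, and the new name are assumptions based on this thread.

```javascript
// Sketch only: rename a client and propagate the change to the duplicated field.
const session = db.getMongo().startSession();
const clients = session.getDatabase("test").getCollection("clients"); // assumed names
const letters = session.getDatabase("test").getCollection("letters");
const clientId = clients.findOne({ name: "Company A" })._id;

session.startTransaction();
try {
  clients.updateOne({ _id: clientId }, { $set: { name: "Company B" } });
  letters.updateMany(
    { "clientDetails.clientId": clientId },
    { $set: { "clientDetails.name": "Company B" } }
  );
  session.commitTransaction(); // both writes become visible together
} catch (e) {
  session.abortTransaction();
  throw e;
} finally {
  session.endSession();
}
```

The trade-off discussed above still applies: the `updateMany` grows with the letters collection, so this is best treated as a rare, background-style operation.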
null | [
"replication",
"transactions",
"containers"
] | [
{
"code": "version: '2'\nservices:\n mongodb-primary:\n image: 'bitnami/mongodb:latest'\n environment:\n - MONGODB_ADVERTISED_HOSTNAME=mongodb-primary\n - MONGODB_REPLICA_SET_MODE=primary\n - MONGODB_ROOT_PASSWORD=password123\n - MONGODB_REPLICA_SET_KEY=replicasetkey123\n ports:\n - 27017:27017\n\n volumes:\n - 'mongodb_master_data:/bitnami'\n\n mongodb-secondary:\n image: 'bitnami/mongodb:latest'\n depends_on:\n - mongodb-primary\n environment:\n - MONGODB_ADVERTISED_HOSTNAME=mongodb-secondary\n - MONGODB_REPLICA_SET_MODE=secondary\n - MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary\n - MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017\n - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password123\n - MONGODB_REPLICA_SET_KEY=replicasetkey123\n ports:\n - 27027:27017\n\n mongodb-arbiter:\n image: 'bitnami/mongodb:latest'\n depends_on:\n - mongodb-primary\n environment:\n - MONGODB_ADVERTISED_HOSTNAME=mongodb-arbiter\n - MONGODB_REPLICA_SET_MODE=arbiter\n - MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary\n - MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017\n - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password123\n - MONGODB_REPLICA_SET_KEY=replicasetkey123\n ports:\n - 27037:27017\n\nvolumes:\n mongodb_master_data:\n driver: local\n{\n _id: 'replicaset',\n version: 5,\n term: 2,\n members: [\n {\n _id: 0,\n host: 'mongodb-primary:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 5,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 1,\n host: 'mongodb-arbiter:27017',\n arbiterOnly: true,\n buildIndexes: true,\n hidden: false,\n priority: 0,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 2,\n host: 'mongodb-secondary:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n }\n ],\n protocolVersion: Long(\"1\"),\n writeConcernMajorityJournalDefault: true,\n settings: {\n chainingAllowed: true,\n heartbeatIntervalMillis: 2000,\n heartbeatTimeoutSecs: 10,\n electionTimeoutMillis: 10000,\n catchUpTimeoutMillis: -1,\n catchUpTakeoverDelayMillis: 30000,\n getLastErrorModes: {},\n getLastErrorDefaults: { w: 1, wtimeout: 0 },\n replicaSetId: ObjectId(\"636ad53c134a3f3884836da1\")\n }\n}\n{\n set: 'replicaset',\n date: ISODate(\"2022-11-08T22:58:23.847Z\"),\n myState: 1,\n term: Long(\"2\"),\n syncSourceHost: '',\n syncSourceId: -1,\n heartbeatIntervalMillis: Long(\"2000\"),\n majorityVoteCount: 2,\n writeMajorityCount: 2,\n votingMembersCount: 3,\n writableVotingMembersCount: 2,\n optimes: {\n lastCommittedOpTime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long(\"2\") },\n lastCommittedWallTime: ISODate(\"2022-11-08T22:58:22.005Z\"),\n readConcernMajorityOpTime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long(\"2\") },\n appliedOpTime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long(\"2\") },\n durableOpTime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long(\"2\") },\n lastAppliedWallTime: ISODate(\"2022-11-08T22:58:22.005Z\"),\n lastDurableWallTime: ISODate(\"2022-11-08T22:58:22.005Z\")\n },\n lastStableRecoveryTimestamp: Timestamp({ t: 1667948242, i: 1 }),\n electionCandidateMetrics: {\n lastElectionReason: 'electionTimeout',\n lastElectionDate: ISODate(\"2022-11-08T22:16:31.521Z\"),\n electionTerm: Long(\"2\"),\n lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1667945788, i: 17 }), t: Long(\"1\") },\n numVotesNeeded: 1,\n priorityAtElection: 5,\n electionTimeoutMillis: 
Long(\"10000\"),\n newTermStartDate: ISODate(\"2022-11-08T22:16:31.531Z\"),\n wMajorityWriteAvailabilityDate: ISODate(\"2022-11-08T22:16:31.540Z\")\n },\n members: [\n {\n _id: 0,\n name: 'mongodb-primary:27017',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 2513,\n optime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long(\"2\") },\n optimeDate: ISODate(\"2022-11-08T22:58:22.000Z\"),\n lastAppliedWallTime: ISODate(\"2022-11-08T22:58:22.005Z\"),\n lastDurableWallTime: ISODate(\"2022-11-08T22:58:22.005Z\"),\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1667945791, i: 1 }),\n electionDate: ISODate(\"2022-11-08T22:16:31.000Z\"),\n configVersion: 5,\n configTerm: 2,\n self: true,\n lastHeartbeatMessage: ''\n },\n {\n _id: 1,\n name: 'mongodb-arbiter:27017',\n health: 1,\n state: 7,\n stateStr: 'ARBITER',\n uptime: 2493,\n lastHeartbeat: ISODate(\"2022-11-08T22:58:22.069Z\"),\n lastHeartbeatRecv: ISODate(\"2022-11-08T22:58:22.068Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n configVersion: 5,\n configTerm: 2\n },\n {\n _id: 2,\n name: 'mongodb-secondary:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 2454,\n optime: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long(\"2\") },\n optimeDurable: { ts: Timestamp({ t: 1667948302, i: 1 }), t: Long(\"2\") },\n optimeDate: ISODate(\"2022-11-08T22:58:22.000Z\"),\n optimeDurableDate: ISODate(\"2022-11-08T22:58:22.000Z\"),\n lastAppliedWallTime: ISODate(\"2022-11-08T22:58:22.005Z\"),\n lastDurableWallTime: ISODate(\"2022-11-08T22:58:22.005Z\"),\n lastHeartbeat: ISODate(\"2022-11-08T22:58:22.069Z\"),\n lastHeartbeatRecv: ISODate(\"2022-11-08T22:58:22.069Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: 'mongodb-primary:27017',\n syncSourceId: 0,\n infoMessage: '',\n configVersion: 5,\n configTerm: 2\n }\n ],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1667948302, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"7c40430db9f17606a984ed8d4e9359e1141366f3\", \"hex\"), 0),\n keyId: Long(\"7163772610960949254\")\n }\n },\n operationTime: Timestamp({ t: 1667948302, i: 1 })\n}\nmongodb://root:password123@localhost:27017/?authMechanism=DEFAULT\n",
"text": "I am attempting to run a MongoDB cluster locally to test transactions.I’ve leveraged the Bitnami docker-compose fileThe cluster successfully runs and I’m able to run rs.status() and rs.config()rs.config():rs.status():I’m able to connect to the nodes individually usingbut I get a time out when attempting to connect with replicasetCan somebody please help me understand what I’m missing?",
"username": "Adam_Hammond"
},
{
"code": "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=replicaset-name\nmongo --replSet replicaset-name/morton.local:27018,morton.local:27019\n",
"text": "Hey Adam, could you please share how you are trying to connect to the replica set?I hope you are following the recommended way from the official doc:-ORWhat is the error that you are facing? I mean from the mongod logs you can increase the verbosity and validate the same.According to what you are saying, it seems that, all the replica members are up and running and none impaired or off?\nSee if any of the node is unreachable. (because, prima facie looks like a networking issue, but can’t be sure until we can verify the same with some suggestive logs), also see if all the hosts are resolving.",
"username": "Macshuff_Biabani"
},
{
"code": "net.bindIpnet.bindIpAll",
"text": "I don’t remember the exact names but here is a quick note: Local servers start to listen on localhost, and there is a config key to set it to listen to outside IPs. Sorry I could remember only this part But since you have port redirections already set, the above suggestion should work fine.Edit: I find the name: net.bindIp and net.bindIpAll. IP Binding — MongoDB Manual",
"username": "Yilmaz_Durmaz"
},
{
"code": "MONGODB_ADVERTISED_HOSTNAME: localhost\n",
"text": "I ran into this problem as well, trying to create a local replicaset for testing. Connecting to non RS worked, but in replicaset it was trying to connect to the hostname, which in docker doesn’t exist on the host.The simple solution I came up with is to just set the advertised name to localhost!Other option is to add add it to your hosts file but I disliked this option as using this as a devcontainer it should be self contained.",
"username": "Joseph_Jankowiak"
},
{
"code": "{VERSION}/rootfs/opt/bitnami/mongodb/templates/mongodb.conf.tpl",
"text": "@github: containers/bitnami/mongodb at main · bitnami/containers (github.com)sub folders to check: {VERSION}/rootfs/opt/bitnami/I had written about IP binding before (2 posts above). From the scripts you can find on bitnami’s GitHub page (above links), I can see they did not expose this binding to the environment variables. if it is important you may open a feature request there.However, you can still control that, and many other settings through a customized config file. a longer template file resides under mongodb/templates/mongodb.conf.tpl in these subfolders.by the way, I am guessing, setting “localhost” as the advertised name is also a temporary solution until you sail your work to the cloud as it possibly only allows the “host” to connect to “containers” without a name problem. so, thinkering with the config file would be a better solution.",
"username": "Yilmaz_Durmaz"
},
{
"code": "mongodb-primary:\n image: 'bitnami/mongodb:latest'\n environment:\n - MONGODB_ADVERTISED_HOSTNAME=mongodb-primary\n.\n.\n.\nDATABASE_URL \"mongodb://root:prisma@mongodb-primary:27017/test?authSource=admin&retryWrites=false\"\n",
"text": "I was facing the same issue trying to dockerize a full stack app which was originally developed using React, Tailwind, Next, Prisma, Mongo Atlas, NextAuth.Chwitter - a full stack twitter clone [DOCKERIZED] - GitHub - mandeepsingh10/chwitter: Chwitter - a full stack twitter clone [DOCKERIZED]\nThe docker-compose.yml spins up mongoDB replica along with the nextjs-frontend.\nThe code can be used to fix the connectivity issue between prisma and MongoDB replicasets.here’s how i got it to work.This is my environment variable for prismaI used the value of MONGODB_ADVERTISED_HOSTNAME env in the datbase url to connect to prisma and it connected instantly. I’m 100% sure this will work for everyone.",
"username": "Mandeep_Singh3"
}
] | Can't connect to MongoDB replica set locally using docker | 2022-11-08T23:04:47.823Z | Can’t connect to MongoDB replica set locally using docker | 8,805 |
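For anyone comparing notes on the thread above: a quick way to check whether the failure is really hostname resolution (container names not resolvable from the host) is to probe a single exposed member from the host with the Node.js driver and inspect what the server advertises. This is only a sketch; the credentials, port and auth source are taken from the Bitnami compose defaults quoted above and may differ in your setup.

```js
// probe.js - minimal connectivity check with the official "mongodb" Node.js driver
const { MongoClient } = require("mongodb");

// Assumed values (root/password123, port 27017) taken from the compose file discussed above.
// directConnection=true skips replica-set discovery, so container hostnames are never resolved.
const uri = "mongodb://root:password123@localhost:27017/?authSource=admin&directConnection=true";

async function main() {
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });
  try {
    await client.connect();
    const hello = await client.db("admin").command({ hello: 1 });
    // If these are container names (mongodb-primary:27017, ...), a normal replica-set connection
    // from the host will try to reach them and time out unless they resolve (hosts file entries,
    // or MONGODB_ADVERTISED_HOSTNAME=localhost as suggested above).
    console.log("advertised hosts:", hello.hosts, "primary:", hello.primary);
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```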
null | [
"queries"
] | [
{
"code": "[\n {\n _id: ObjectId(\"641e3c070fb20438f24d1659\"),\n firstName: 'Max',\n lastName: 'Chavan',\n age: 29,\n history: [\n { disease: 'cold', treatment: 'steam' },\n { disease: 'malaria', treatment: 'injection' }\n ]\n },\n {\n _id: ObjectId(\"641e3c3d0fb20438f24d165a\"),\n firstName: 'Aditya',\n lastName: 'Methe',\n age: 25,\n history: [\n { disease: 'cancer', treatment: 'none' },\n { disease: 'chickengunia', treatment: 'none' }\n ]\n },\n {\n _id: ObjectId(\"641e3c5d0fb20438f24d165b\"),\n firstName: 'Lokesh',\n lastName: 'Vakhare',\n age: 25,\n history: [\n { disease: 'fever', treatment: 'medicine' },\n { disease: 'chickenpox', treatment: 'injection' }\n ]\n }\n]\n[\n {\n _id: ObjectId(\"641e3c070fb20438f24d1659\"),\n firstName: 'Max',\n lastName: 'Chavan',\n age: 29,\n history: [\n { disease: 'cold', treatment: 'steam' },\n { disease: 'malaria', treatment: 'injection' }\n ]\n },\n {\n _id: ObjectId(\"641e3c3d0fb20438f24d165a\"),\n firstName: 'Aditya',\n lastName: 'Methe',\n age: 25,\n history: [\n { disease: 'cancer', treatment: 'none' },\n { disease: 'chickengunia', treatment: 'none' }\n ]\n },\n {\n _id: ObjectId(\"641e3c5d0fb20438f24d165b\"),\n firstName: 'Lokesh',\n lastName: 'Vakhare',\n age: 25,\n history: [\n { disease: 'fever', treatment: 'medicine' },\n { disease: 'chickenpox', treatment: 'injection' }\n ]\n }\n]\ndb.patient.remove(history.{disease:\"cold\",treatment:'steam'})\nUnexpected token (1:26)\n\n> 1 | db.patient.remove(history.{disease:\"cold\",treatment:'steam'})\n | ^\n 2 |\n",
"text": "The above database is a mongoDb database.I am trying to write a query that deletes all patients that have cold as a disease.Here is my query,However, the query gives the following error:How do I resolve it?",
"username": "Sourabh_Chavan"
},
{
"code": "treatment:'steam'",
"text": "Look at dot notation.Note that havingtreatment:'steam'in your query will not delete documents that disease:cold but have a different treatment yet your requirement (from the post title) is delete all patients that have cold as desease. There is not mention about the treatment.",
"username": "steevej"
}
] | How do I execute the following functionality in n the given mongoDB database:Write a query that deletes all patients that have cold as a disease | 2023-03-25T01:04:25.491Z | How do I execute the following functionality in n the given mongoDB database:Write a query that deletes all patients that have cold as a disease | 419 |
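For reference, the dot-notation form of that delete in mongosh might look like the sketch below. It assumes the collection really is named patient, and the second variant shows $elemMatch in case disease and treatment must match inside the same history entry.

```js
// mongosh - delete every patient whose history contains a "cold" entry (any treatment)
db.patient.deleteMany({ "history.disease": "cold" })

// only if both fields must match within the same array element:
db.patient.deleteMany({ history: { $elemMatch: { disease: "cold", treatment: "steam" } } })
```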
null | [
"queries"
] | [
{
"code": "",
"text": "Is there any storage limit for system.profile?I am following sequence to test queries.But, exported database stores only last few queries. I don’t understand why? I want to record log of all executed queries. How can I resolve it?",
"username": "Monika_Shah"
},
{
"code": "",
"text": "Hi @Monika_Shah,\nif i understand what have you asked, i think is needed to change this parameter:slowms Default: 100Type: integerThe slow operation time threshold, in milliseconds. Operations that run for longer than this threshold are considered slow.Try to set it to a lower value and let me know if it works.Best Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "timeActually, it is storing only profiling of last query only, which is bulk write and Bulk write will trigger many operations. Before this bulk write, there were many queries which has more than 100 ms execution time.In fact, server config has specified slowms=10 as well as profile=2.",
"username": "Monika_Shah"
},
{
"code": "",
"text": "ore this bulk write, there were many queries which has more than 100 ms execution time.In fact, server config has specified slowms=10 as well as profile=2.What could be reason and solution for this problem?",
"username": "Monika_Shah"
}
] | Is there any storage limit of system.profile | 2023-03-20T10:55:43.154Z | Is there any storage limit of system.profile | 372 |
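One detail that would explain the behaviour described above: system.profile is a capped collection (about 1 MB by default), so once it fills up the oldest profile entries are silently overwritten. A hedged mongosh sketch for checking and enlarging it on a self-managed deployment (the 64 MB figure is just an example):

```js
// mongosh - inspect the profiler and the capped system.profile collection
db.getProfilingStatus()            // current level and slowms threshold
db.system.profile.stats().maxSize  // capped size in bytes (small by default)

// enlarge it: profiling must be off while the collection is recreated
db.setProfilingLevel(0)
db.system.profile.drop()
db.createCollection("system.profile", { capped: true, size: 64 * 1024 * 1024 }) // 64 MB
db.setProfilingLevel(2)            // or level 1 plus a slowms value
```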
null | [
"dot-net"
] | [
{
"code": "var objectSerializer = new ObjectSerializer(type => ObjectSerializer.DefaultAllowedTypes(type) || type.FullName.StartsWith(\"MyNamespace\"));\nBsonSerializer.RegisterSerializer(objectSerializer);\nBsonClassMap.RegisterClassMap<T>(...)",
"text": "With release 2.19.0 we now have to register our types in order for them to be serialized. The suggestion in the release notes here is to do the following…This works, however, at the company I work we explicitly register all our types using BsonClassMap.RegisterClassMap<T>(...) to avoid accidents when serializing/deserializing. Given that we’re already being explicit about the types we want registered could the library note these types and add them to the allowed types automatically.",
"username": "Gareth_Budden"
},
{
"code": "",
"text": "Hi GarethGreat minds think alike I just raised the same feature requestRegards,\nDaniel",
"username": "Daniel_Marbach"
},
{
"code": "",
"text": "Apparently, it takes them into account automatically when you make sure you pass in the right nominal type to the corresponding Serialization functions.",
"username": "Daniel_Marbach"
},
{
"code": "",
"text": "Do you have a link to that information? Maybe we’re not doing something correctly.I also created the feature request in Jira here before you’d posted so that may get a bit more information\nhttps://jira.mongodb.org/browse/CSHARP-4581",
"username": "Gareth_Budden"
},
{
"code": "",
"text": "Thank you for filing this feature request. It is a reasonable idea. We will discuss it during our weekly triage meeting. Please follow CSHARP-4581 for updates.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "Particular:masterParticular:mongoclient-breaking-change\n \n if (!BsonClassMap.IsClassMapRegistered(sagaMetadata.SagaEntityType))\n {\n var classMap = new BsonClassMap(sagaMetadata.SagaEntityType);\n classMap.AutoMap();\n classMap.SetIgnoreExtraElements(true);\n \n \n BsonClassMap.RegisterClassMap(classMap);\n }\n \n ",
"text": "Hi Gareth,I did update our test suite to use the latest client, and then switched towards using the correct type-based overloads on the serializer.Fixes https://github.com/Particular/NServiceBus.Storage.MongoDB/issues/481\n\nTh…is PR bumps the minimum required version of the mongodb client to 2.19 and explicitly passes the saga data type to the `ToBsonDocument` calls to make sure the `BsonClassMap` is used. With that in place the saga data types are automatically mapped either by the definition the persister has added or by the one that was added by a user. \n\nIn theory, the code changes would also work with older versions of the client. But given v.2.18 has a security vulnerability and customers might not take an explicit dependency to the client, it is necessary to bump the client version too.our code already adds the mappings hereand then all our tests passed which previously they didn’t so I was assuming it somehow already makes the serializer “aware” of the types. That being said, it might also “just” work because internally in the client, there are many different serializer types that are being used.Regards,\nDaniel",
"username": "Daniel_Marbach"
}
] | Feature Request: C# driver to auto register all explicitly mapped types as allowed | 2023-03-23T10:07:59.578Z | Feature Request: C# driver to auto register all explicitly mapped types as allowed | 1,309 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "In the last few releases the size of the dependencies has skyrocketed. The libmoncrypto added 20 MB of runtimes (linux and osx is 10 MB each) and taking a hard dependency on AWS.SecurityToken added another 2 MB. I don’t use any of the functionality that these new dependencies bring, but my app went from 17 MB to over 40 MB. Could these have not been moved into separate assemblies and only be required when the functionality was required? The dependency on the AWS framework is particularly egregious.",
"username": "James_Moring"
},
{
"code": "",
"text": "Hi @James_Moring. Welcome to the community.That’s a valid concern and there’s already an improvement ticket to address that.",
"username": "Mahi_Satyanarayana"
},
{
"code": "",
"text": "Here’s another ticket to address the overall increase in package size.",
"username": "Mahi_Satyanarayana"
},
{
"code": "",
"text": "Hi, @James_Moring,Thank you for raising this issue. Mahi has pointed you to some relevant tickets in our backlog. Please comment, vote, and watch those tickets as that does help drive our planning process.It would be helpful to understand your concerns about the increase in package size. In memory constrained environments like mobile devices, compilers will strip out unused code to reduce the total size of the executable. In your typical server application, a few extra MBs of assemblies is often not a concern. We would like to understand your use case where an increase from 17MB to 40MB caused you concern.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "The Community could flip the question around somewhat and ask for what the design decision is to take in all the dependencies in one package, thereby causing users of your core package to get e.g. AWS dlls part of their deploys (even if not loaded or used). In this case licenses etc matches, but it’s still extra packages to keep track of seen to possible due diligence, licenses etc. I would expect the core stuff to be “this is what’s needed against native MongoDB environment”. If you want features for e.g. AWS, I would opt-in to that via a separate package.",
"username": "Daniel_Wertheim"
},
{
"code": "",
"text": "Sorry for the late reply. One word microservices. Our applications use MongoDB as a sync for our logging system. This logging is used by every microservice we build and deploy. Each microservice is built and stored in a central repository and deployed dynamically to the application server. Linux in my case. Any given deployment can consist of 8-12 microservices. So going from 17 MB to 40 MB when fetching 12 microservice deployments makes a big difference in deployment/redeployment time.The above is a objective rational too not wanting unused dependencies. Subjectively, it’s just not the right way to package an application.Thanks\nJim",
"username": "James_Moring"
},
{
"code": "MongoDB.Driver.AWSMongoDB.Driver.AWS",
"text": "Thank you for the feedback. Because NuGet does not support optional dependencies, we are left choosing between ease of use and minimizing total package size. The only way (that I am aware of) to minimize package size is to implement optional dependencies ourselves. This means detecting whether a package is present and dynamically loading it. We cannot simply have a reference between projects (and thus NuGet packages) and use the types from the other package. This complicates development and requires us to provide additional documentation e.g. If you want to use AWS authentication, you must reference the MongoDB.Driver.AWS package. (NOTE: MongoDB.Driver.AWS doesn’t exist and is simply an example.) This becomes even more complicated as you consider all the potential optional dependencies supported by the driver such as Kerberos, LDAP, GCP, Azure, and more.In summary we have chosen to favour simplicity of use and simplicity of development over minimizing package size. If total package size is problematic for your application, CSHARP-4531 provides an example of how you can exclude unneeded dependencies via your package references.",
"username": "James_Kovacs"
},
{
"code": "",
"text": "As a user it would make perfect sense to opt-in to what you want. Like installing a meta package bringing in packages for the base line for “pure MongoDB” then e.g. Aws, Azure, GCP (like you have for e.g. auth) would be something I would expect to opt-in to. But of course. If the code in the core (as e.g. auth seem to be) is coupled to specific providers, I do understand that it’s hard to maintain.",
"username": "Daniel_Wertheim"
}
] | The C-Sharp Driver has become big | 2023-02-16T13:59:37.806Z | The C-Sharp Driver has become big | 1,027 |
null | [] | [
{
"code": "",
"text": "Having this error for a while and I can’t connect database for my backend, anyone having the same issue? How should I solve this? Thanks!",
"username": "wen_sun"
},
{
"code": "",
"text": "Hi, Is this got resolved? If yes, how? please help. Thanks.",
"username": "Bhuvan_Sharma"
}
] | An error occurred while querying your MongoDB deployment. Please try again in a few minutes | 2023-01-18T18:13:41.973Z | An error occurred while querying your MongoDB deployment. Please try again in a few minutes | 423 |
null | [
"node-js",
"connecting",
"atlas-cluster"
] | [
{
"code": "2023-02-17T13:49:05.811Z\td3a68076-d393-4203-9248-a1e3cd1e9584\tINFO\tMongoNetworkError: getaddrinfo EMFILE mongodb-atlas-shard-00-01.example.mongodb.net\n at connectionFailureError (/var/task/file-info/file-info-controller.js:90731:18)\n at TLSSocket.<anonymous> (/var/task/file-info/file-info-controller.js:90656:20)\n at Object.onceWrapper (events.js:520:26)\n at TLSSocket.emit (events.js:400:28)\n at TLSSocket.emit (domain.js:475:12)\n at emitErrorNT (internal/streams/destroy.js:106:8)\n at emitErrorCloseNT (internal/streams/destroy.js:74:3)\n at processTicksAndRejections (internal/process/task_queues.js:82:21) {\n cause: Error: getaddrinfo EMFILE mongodb-atlas-shard-00-01.example.mongodb.net\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:71:26) {\n errno: -24,\n code: 'EMFILE',\n syscall: 'getaddrinfo',\n hostname: 'mongodb-atlas-shard-00-01.example.mongodb.net'\n },\n connectionGeneration: 0,\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n}\n",
"text": "Hi Team,We are facing below error…Can any one please explain regarding below error and how can we resolve it?Can please help me on this error…We are getting same error frequently, Is this related to MongoDB or any other?Thanks in advance…",
"username": "Lokesh_D1"
},
{
"code": "2023-02-17T13:49:05.811Z\td3a68076-d393-4203-9248-a1e3cd1e9584\tINFO\tMongoNetworkError: getaddrinfo EMFILE mongodb-atlas-shard-00-01.example.mongodb.net\n at connectionFailureError (/var/task/file-info/file-info-controller.js:90731:18)\n at TLSSocket.<anonymous> (/var/task/file-info/file-info-controller.js:90656:20)\n at Object.onceWrapper (events.js:520:26)\n at TLSSocket.emit (events.js:400:28)\n at TLSSocket.emit (domain.js:475:12)\n at emitErrorNT (internal/streams/destroy.js:106:8)\n at emitErrorCloseNT (internal/streams/destroy.js:74:3)\n at processTicksAndRejections (internal/process/task_queues.js:82:21) {\n cause: Error: getaddrinfo EMFILE mongodb-atlas-shard-00-01.klzrb.mongodb.net\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:71:26) {\n errno: -24,\n code: 'EMFILE',\n syscall: 'getaddrinfo',\n hostname: 'mongodb-atlas-shard-00-01.example.mongodb.net'\n },\n connectionGeneration: 0,\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n}\n{\"connectionGeneration\":0}",
"text": "and also getting {\"connectionGeneration\":0} error also…",
"username": "Lokesh_D1"
},
{
"code": "",
"text": "It looks like your code keeps opening new connections without closing the previous one.EMFILE error means that the OS denied your application from opening new connection because it is using too many.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks @steevej for response,Code closes the connections immediately without any ideal connections… But concurrent hits gone be high at the time.How can we handle too many hits in per second? (More then 30k to 40k hits per second)We are using atlas M400 (SSD class) tier. It has max 1,28,000 connections. Is it reached?Is there any better way to handle the connections?Is this error related to MongoDB or Node.js server ?",
"username": "Lokesh_D1"
},
{
"code": "",
"text": "Is this error related to MongoDB or Node.js server ?Not a MongoDB or Node.js issue. It is a code issue. It is a limit of your OS.Are you opening a connection for each and everyMore then 30k to 40k hits per secondthenthere any better way to handle the connectionsDo not open a connections for each hit.",
"username": "steevej"
},
{
"code": "",
"text": "@Lokesh_D1 You need to open a support ticket with the Technical Services Team.This error will need MongoDB Engineering to analyze and is Atlas side by the looks of it.You can try subscribing to the free trial for MongoDB Support via Atlas Developer Tier and submitting a ticket. The technical services rep will open an internal ticket and have engineering look deeper into it.",
"username": "Brock"
},
{
"code": "",
"text": "Thanks @steevej and @Brock",
"username": "Lokesh_D1"
}
] | Connecting issues | 2023-03-21T18:28:40.897Z | Connecting issues | 1,165 |
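A sketch of the usual remedy for EMFILE at high request rates: create one MongoClient per process and let every request share its pool instead of connecting per hit. Database and collection names here are placeholders.

```js
// db.js - one shared client per process (Node.js driver)
const { MongoClient } = require("mongodb");

const client = new MongoClient(process.env.MONGODB_URI, {
  maxPoolSize: 100, // hard cap on sockets this process opens per server
});
const clientPromise = client.connect(); // started once, awaited by every caller

async function getCollection() {
  await clientPromise;
  return client.db("appdb").collection("files"); // placeholder names
}

module.exports = { getCollection };

// In a request handler: no connect()/close() per request, just reuse the pool
// const files = await getCollection();
// const doc = await files.findOne({ _id: id });
```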
[] | [
{
"code": "",
"text": "I just upgraded my home-brew and now I am not able to start mongoldb services.\nimage1138×98 14 KB\n",
"username": "Vasu_Parbhakar"
},
{
"code": "mongodbrew services stop [email protected]\nmongodbrew services start [email protected]\ncat /usr/local/var/log/mongodb/mongo.log\n",
"text": "Hi @Vasu_Parbhakar,Welcome to the MongoDB community forums What version you are using of Mongodb?As I can see you are using Mac OS you can use the following command:Execute this to stop the MongoDB (i.e. the mongod process) as a macOS serviceAnd then execute this to start MongoDB (i.e. the mongod process) as a macOS serviceIf the issue still persists, please review your MongoDB logs to identify the cause of the problem. To access your MongoDB logs, execute the following command:For reference, check out the documentationI hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "brew services start [email protected]\n",
"text": "My mongo.log file looks something like:\nimage1288×968 193 KB\nwhat to do next??",
"username": "Vasu_Parbhakar"
},
{
"code": "",
"text": "Error Update:\nimage1160×200 23.7 KB\n",
"username": "Vasu_Parbhakar"
},
{
"code": "",
"text": "It says failed to unlink socket file-permission denied\nCheck ownership/permissions on that tmp file\nYou might have tried to start mongod as root or someother user\nYou may have to remove the file and start the service again",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I deleted the temp file, now the error i am getting is :\nimage1160×200 23.7 KB\nCurrent mongo.long file :\nimage1306×998 232 KB\n",
"username": "Vasu_Parbhakar"
},
{
"code": "",
"text": "Your log does not show why it is going to error state\nPlease show log portion when you started the service after deleting the temp file\nAlso make sure dbpath & other dirs have proper ownership/permissions",
"username": "Ramachandra_Tummala"
}
] | Not Able to Start Mongodb Community Services in Mac Ventura | 2023-02-02T10:29:07.816Z | Not Able to Start Mongodb Community Services in Mac Ventura | 946 |
|
null | [
"document-versioning"
] | [
{
"code": "",
"text": "Does Mongo DB has any built-in feature to auto increment versions of a document. If the newly inserted document is different then to create a new version and save previous as history. I have referred to the MongoDB Document Versioning Pattern blog, want to know if there is this feature built in?",
"username": "codeit482_t"
},
{
"code": "",
"text": "i don’t think such feature exists.",
"username": "Kobe_W"
}
] | Mongo DB document auto versioning | 2023-03-24T08:06:08.093Z | Mongo DB document auto versioning | 920 |
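Since it has to be done in application code, a minimal mongosh sketch of the Document Versioning Pattern mentioned above could look like this. The collection names (items, items_history), the itemId variable and the version field are all assumptions for illustration; documents need to carry a version number for the optimistic check to work.

```js
// mongosh - manual document versioning sketch
const current = db.items.findOne({ _id: itemId });

// 1. archive the revision that is about to be replaced
const archived = Object.assign({}, current);
delete archived._id;            // the history copy gets its own _id
archived.itemId = current._id;  // keep a pointer back to the live document
db.items_history.insertOne(archived);

// 2. apply the change only if no concurrent writer bumped the version first
const res = db.items.updateOne(
  { _id: current._id, version: current.version },
  { $set: { status: "updated" }, $inc: { version: 1 } }
);
if (res.modifiedCount === 0) {
  print("concurrent update detected - reload and retry");
}
```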
null | [
"queries"
] | [
{
"code": " const result = db.collection.find({name: \"abc\", status: false});[{name: \"abc\", status:false, otherProp: \"otherProp1\"},..., {name: \"abc\", status: false, otherProp: \"otherProp2\",...}]{name: \"abc\", status:true, otherProp: \"newProp\"}",
"text": " const result = db.collection.find({name: \"abc\", status: false});result gives me lets say 2 documents:[{name: \"abc\", status:false, otherProp: \"otherProp1\"},..., {name: \"abc\", status: false, otherProp: \"otherProp2\",...}]Now I want to replace both documents with:{name: \"abc\", status:true, otherProp: \"newProp\"}How can I do that? everything I tried, always replaces each document with the new document…",
"username": "Anna_N_A"
},
{
"code": "",
"text": "Only way I see is:An ordered bulkWrite within a optional but safer transaction that1 - deleteMany {name:abc,status:false}\n2 - insertOne the new document",
"username": "steevej"
},
{
"code": "",
"text": "Transaction is the way to go if atomicity is desired.",
"username": "Kobe_W"
}
] | How can I replace multiple documents with only one document? | 2023-03-24T09:41:50.219Z | How can I replace multiple documents with only one document? | 697 |
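A sketch of that delete-then-insert approach with the Node.js driver, wrapped in a transaction so readers never see the collection without any of the documents. It assumes a replica set (transactions require one) and uses placeholder database/collection names.

```js
// Node.js driver - atomically replace all matching documents with one new document
const { MongoClient } = require("mongodb");

async function collapseDocuments(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const coll = client.db("test").collection("coll"); // placeholder names
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      await coll.deleteMany({ name: "abc", status: false }, { session });
      await coll.insertOne(
        { name: "abc", status: true, otherProp: "newProp" },
        { session }
      );
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}
```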
null | [
"replication",
"containers"
] | [
{
"code": "net:\n port: 27017\n bindIp: 0.0.0.0\n ssl:\n mode: preferSSL\n PEMKeyFile: /keys/mongo.pem\n CAFile: /keys/mongoCA.crt\n clusterFile: /keys/mongo.pem\n allowConnectionsWithoutCertificates: false\n disabledProtocols: TLS1_0,TLS1_1\n\nsecurity:\n authorization: enabled\n clusterAuthMode: x509\nnet:\n port: 27017\n bindIp: 0.0.0.0\n tls:\n mode: preferTLS # requireTLS\n certificateKeyFile: /keys/mongo.pem\n CAFile: /keys/mongoCA.crt\n clusterFile: /keys/mongo.pem\n allowInvalidCertificates: true\n allowConnectionsWithoutCertificates: false\n disabledProtocols: TLS1_0\n\nsecurity:\n authorization: enabled\n clusterAuthMode: x509\n{\nmembers: [\n {\n _id: 1,\n name: 'mongo1:27017',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 912,\n optime: [Object],\n optimeDurable: [Object],\n optimeDate: 2023-03-24T15:30:07.000Z,\n optimeDurableDate: 2023-03-24T15:30:07.000Z,\n lastAppliedWallTime: 2023-03-24T15:30:07.297Z,\n lastDurableWallTime: 2023-03-24T15:30:07.297Z,\n lastHeartbeat: 2023-03-24T15:30:07.803Z,\n lastHeartbeatRecv: 2023-03-24T15:30:07.802Z,\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1679670917, i: 1 }),\n electionDate: 2023-03-24T15:15:17.000Z,\n configVersion: 1,\n configTerm: 7\n },\n {\n _id: 2,\n name: 'mongo2:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 912,\n optime: [Object],\n optimeDurable: [Object],\n optimeDate: 2023-03-24T15:30:07.000Z,\n optimeDurableDate: 2023-03-24T15:30:07.000Z,\n lastAppliedWallTime: 2023-03-24T15:30:07.297Z,\n lastDurableWallTime: 2023-03-24T15:30:07.297Z,\n lastHeartbeat: 2023-03-24T15:30:07.828Z,\n lastHeartbeatRecv: 2023-03-24T15:30:08.831Z,\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: 'mongo1:27017',\n syncSourceId: 1,\n infoMessage: '',\n configVersion: 1,\n configTerm: 7\n },\n {\n _id: 3,\n name: 'mongo3:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 915,\n optime: [Object],\n optimeDate: 2023-03-24T15:30:07.000Z,\n lastAppliedWallTime: 2023-03-24T15:30:07.297Z,\n lastDurableWallTime: 2023-03-24T15:30:07.297Z,\n syncSourceHost: 'mongo2:27017',\n syncSourceId: 2,\n infoMessage: '',\n configVersion: 1,\n configTerm: 7,\n self: true,\n lastHeartbeatMessage: ''\n }\n ],\n}\n",
"text": "Im using docker compose to build up replica set for testing, when i switched SSL to TLS net configuration, the secondary does not do any synchronization. I use self-signed certificatesThis is my ssl configuration, everything works well at this timeBut out of control nowreplica set status",
"username": "K_Chan"
},
{
"code": "readPreference",
"text": "Everything is done, why i asking this question because mongo compass does not show database if readPreference is default. So i think my configuration goes wrong",
"username": "K_Chan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | When i switched ssl to tls in my replica set, synchronization does not work | 2023-03-24T15:29:23.253Z | When i switched ssl to tls in my replica set, synchronization does not work | 781 |
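For anyone reproducing the setup above, client-side options for a self-signed TLS replica set with the Node.js driver might look roughly like the sketch below. The replica-set name, certificate paths and the omitted username/password are assumptions based on the compose configuration shown in the thread.

```js
// Node.js driver - connecting to the TLS-enabled replica set above (values are assumptions)
const { MongoClient } = require("mongodb");

const client = new MongoClient(
  "mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0&readPreference=secondaryPreferred",
  {
    tls: true,
    tlsCAFile: "/keys/mongoCA.crt",            // CA that signed the member certificates
    tlsCertificateKeyFile: "/keys/client.pem", // client cert, since connections without certificates are not allowed
    // auth credentials omitted here; authorization is enabled in the server config above
  }
);
```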
null | [
"java",
"spark-connector",
"scala"
] | [
{
"code": "package com.mongo;\n\nimport org.apache.spark.api.java.JavaSparkContext;\nimport org.apache.spark.sql.SparkSession;\n\nimport org.bson.Document;\n\nimport com.mongodb.spark.MongoSpark;\nimport com.mongodb.spark.rdd.api.java.JavaMongoRDD;\n\npublic final class MongoConnectRead {\n public static void main(final String[] args) throws InterruptedException {\n SparkSession spark = SparkSession.builder()\n .appName(\"MongoSparkConnectorIntro\")\n .config(\"spark.mongodb.input.uri\", \"mongodb://127.0.0.1:27117,127.0.0.1:27118/test.user\")\n .config(\"spark.mongodb.output.uri\", \"mongodb://127.0.0.1:27117,127.0.0.1:27118/test.user\")\n\n // .config(\"spark.mongodb.input.partitioner\", \"MongoPaginateBySizePartitioner\")\n\n .getOrCreate();\n // Create a JavaSparkContext using the SparkSession's SparkContext object\n System.out.println(\"1hummmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm-----------------------------------\");\n JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());\n /* Start Example: Read data from MongoDB ************************/\n System.out.println(\"hummmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm-----------------------------------\");\n JavaMongoRDD<Document> rdd = MongoSpark.load(jsc);\n System.out.println(\"23hummmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm-----------------------------------\");\n /* End Example **************************************************/\n // Analyze data from MongoDB\n System.out.println(rdd.count());\n System.out.println(\"hummmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm-----------------------------------\");\n System.out.println(rdd.first().toJson());\n System.out.println(\"hummmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm-----------------------------------\");\n jsc.close();\n }\n}\nbin/spark-submit --class \"com.mongo.MongoConnectRead\" --master local[4] \"F:\\spark\\spark-sql\\target\\spark-sql-1.0-SNAPSHOT.jar\" \n",
"text": "i creaeed this java file :and run it inalso i put jars in spark jars folder maybe the jars make this errormongo-java-driver-3.10.2.jar\nmongo-spark-connector_2.12-3.0.2.jarbut we see this error:Partitioning using the ‘DefaultMongoPartitioner$’ failed.Please check the stacktrace to determine the cause of the failure or check the Partitioner API documentation.\nNote: Not all partitioners are suitable for all toplogies and not all partitioners support views.%nException in thread “main” java.lang.NoSuchMethodError: ‘com.mongodb.connection.ClusterDescription com.mongodb.client.MongoClient.getClusterDescription()’\nat com.mongodb.spark.connection.MongoClientCache.$anonfun$logClient$1(MongoClientCache.scala:161)\nat com.mongodb.spark.LoggingTrait.logInfo(LoggingTrait.scala:48)\nat com.mongodb.spark.LoggingTrait.logInfo$(LoggingTrait.scala:47)\nat com.mongodb.spark.Logging.logInfo(Logging.scala:24)\nat com.mongodb.spark.connection.MongoClientCache.logClient(MongoClientCache.scala:161)\nat com.mongodb.spark.connection.MongoClientCache.acquire(MongoClientCache.scala:56)\nat com.mongodb.spark.MongoConnector.acquireClient(MongoConnector.scala:239)\nat com.mongodb.spark.MongoConnector.withMongoClientDo(MongoConnector.scala:152)\nat com.mongodb.spark.MongoConnector.withDatabaseDo(MongoConnector.scala:171)\nat com.mongodb.spark.MongoConnector.hasSampleAggregateOperator(MongoConnector.scala:234)\nat com.mongodb.spark.rdd.partitioner.DefaultMongoPartitioner.partitions(DefaultMongoPartitioner.scala:33)\nat com.mongodb.spark.rdd.MongoRDD.getPartitions(MongoRDD.scala:135)\nat org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:292)\nat scala.Option.getOrElse(Option.scala:189)\nat org.apache.spark.rdd.RDD.partitions(RDD.scala:288)\nat org.apache.spark.SparkContext.runJob(SparkContext.scala:2303)\nat org.apache.spark.rdd.RDD.count(RDD.scala:1274)\nat org.apache.spark.api.java.JavaRDDLike.count(JavaRDDLike.scala:469)\nat org.apache.spark.api.java.JavaRDDLike.count$(JavaRDDLike.scala:469)\nat org.apache.spark.api.java.AbstractJavaRDDLike.count(JavaRDDLike.scala:45)\nat com.mongo.MongoConnectRead.main(MongoConnectRead.java:30)\nat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)\nat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\nat java.base/java.lang.reflect.Method.invoke(Method.java:568)\nat org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)\nat org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)\nat org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)\nat org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)\nat org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)\nat org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)\nat org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)\nat org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)",
"username": "kube_ctl"
},
{
"code": "",
"text": "Hi Kube_ctl,Can you provide the “MongoDB server” version that you are using? There seems to be some version misconfiguration between the mongoDB spark connector and other driversAlso have you looked at the samples provided here: https://www.mongodb.com/docs/spark-connector/v3.0/java/write-to-mongodb/",
"username": "Prakul_Agarwal"
}
] | Failed runing dataset spark read from mongodb as a document told | 2023-03-08T17:29:49.689Z | Failed runing dataset spark read from mongodb as a document told | 1,423 |
null | [
"java",
"spark-connector"
] | [
{
"code": "import org.apache.spark.sql.Dataset;\nimport org.apache.spark.sql.Row;\nimport org.apache.spark.sql.SparkSession;\nimport org.apache.spark.sql.streaming.DataStreamWriter;\nimport org.apache.spark.sql.streaming.StreamingQuery;\nimport org.apache.spark.sql.streaming.Trigger;\n\nimport static org.apache.spark.sql.functions.*;\n\nimport java.util.concurrent.TimeoutException;\n\npublic final class MongoStructuredStreaming {\n\n public static void main(final String[] args) {\n /*\n * Create the SparkSession.\n * If config arguments are passed from the command line using --conf,\n * parse args for the values to set.\n */\n SparkSession spark = SparkSession.builder()\n .master(\"local\")\n .appName(\"read_example\")\n .config(\"spark.mongodb.read.connection.uri\", \"mongodb://127.0.0.1/matching-engine.orders\")\n .config(\"spark.mongodb.write.connection.uri\", \"mongodb://127.0.0.1/matching-engine.orders\")\n .getOrCreate();\n\n // define a streaming query\n DataStreamWriter<Row> dataStreamWriter = spark.readStream()\n .format(\"mongodb\")\n .load()\n // manipulate your streaming data\n .writeStream()\n .format(\"console\")\n .trigger(Trigger.Continuous(\"1 second\"))\n .outputMode(\"append\");\n // run the query\n try {\n StreamingQuery query = dataStreamWriter.start();\n\n } catch (TimeoutException e) {\n // TODO Auto-generated catch block\n e.printStackTrace();\n }\n\n }\n}\n\n",
"text": "we write this new version(10) for streaming from mongodb anything is good but it terminatebin/spark-submitthe code is :no errors found but application terminate it not shoould happenthe console is like this23/03/09 08:23:52 INFO ContinuousExecution: Starting [id = 841c2f80-37f1-48ea-86f3-> 049166c84652, runId = d6a59ee6-44b4-4887-8c4a-b1d6f1b68019]. Use file:/C:/Users/joobin/AppData/Local/Temp/temporary-7336f1d4-9457-4c92-891b-f39ee54771b5 to store the query checkpoint.\n23/03/09 08:23:52 INFO ContinuousExecution: Starting new streaming query.\n23/03/09 08:23:52 INFO SparkContext: Invoking stop() from shutdown hook\n23/03/09 08:23:52 INFO SparkUI: Stopped Spark web UI at http://DESKTOP-KK0D0F9:4041\n23/03/09 08:23:52 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!\n23/03/09 08:23:52 INFO MemoryStore: MemoryStore cleared\n23/03/09 08:23:52 INFO BlockManager: BlockManager stopped\n23/03/09 08:23:52 INFO BlockManagerMaster: BlockManagerMaster stopped\n23/03/09 08:23:52 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!\n23/03/09 08:23:52 INFO SparkContext: Successfully stopped SparkContext\n23/03/09 08:23:52 INFO ShutdownHookManager: Shutdown hook calledthe reading data from mongodb is ok but streaminng is not run in timewhats idea",
"username": "kube_ctl"
},
{
"code": "",
"text": "alss two warining found3/03/09 08:23:47 INFO MongoTable: Creating MongoTable: mongo-spark-connector-10.1.1\n23/03/09 08:23:50 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint\n23/03/09 08:23:50 WARN ResolveWriteToStream: Temporary checkpoint location created which is deleted normally when the query didn’t fail: C:\\Users\\joobin\\AppData\\Local\\Temp\\temporary-7336f1d4-9457-4c92-891b-f39ee54771b5. If it’s required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.\n23/03/09 08:23:51 INFO ResolveWriteToStream: Checkpoint root C:\\Users\\joobin\\AppData\\Local\\Temp\\temporary-7336f1d4-9457-4c92-891b-f39ee54771b5 resolved to file:/C:/Users/joobin/AppData/Local/Temp/temporary-7336f1d4-9457-4c92-891b-f39ee54771b5.\n23/03/09 08:23:51 WARN ResolveWriteToStream: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.\n23/03/09 08:23:51 INFO CheckpointFileManager: Writing atomically to file:/C:/Users/joobin/AppData/Local/Temp/temporary-7336f1d4-9457-4c92-891b-f39ee54771b5/metadata using temp file file:/C:/Users/joobin/AppData/Local/Temp/temporary-7336f1d4-9457-4c92-891b-f39ee54771b5/.metadata.3aea750b-05fb-4779-b401-70322f8d1b85.tmp\n23/03/09 08:23:51 INFO CheckpointFileManager: Renamed temp file file:/C:/Users/joobin/AppData/Local/Temp/temporary-7336fl/Temp/temporary-7336f1d4-9457-4c92-891b-f39ee54771b5/metadata\n23/03/09 08:23:51 INFO ContinuousExecution: Reading table [MongoTable{schema=StructType(StructField(_id,StringType,true),StructField(amount,DoubleType,true),StructField(fill_amount,DoubleType,true),StructField(price,DoubleType,true),StructField(side,StringType,true),StructField(status,StringType,true),StructField(symbol,StringType,true),StructField(trader_id,StringType,true),StructField(trades,ArrayType(StructType(StructField(_id,StringType,true),StructField(amount,DoubleType,true),StructField(price,DoubleType,true),StructField(side,StringType,true),StructField(symbol,StringType,true),StructField(type,StringType,true)),true),true),StructField(type,StringType,true)), partitioning=, mongoConfig=MongoConfig{options=, usageMode=NotSet}}] from DataSourceV2 named ‘mongodb’ [com.mongodb.spark.sql.connector.MongoTableProvider@102f3f05]",
"username": "kube_ctl"
},
{
"code": "",
"text": "the reading data from mongodb is ok but streaminng is not run in timeHi kube_ctl,Can you please verify/try the following:With this can you share some additional error logs that may come up",
"username": "Prakul_Agarwal"
}
] | Stream to your Console from MongoDB | 2023-03-09T04:56:50.758Z | Stream to your Console from MongoDB | 1,238 |
null | [
"queries",
"data-modeling",
"indexes"
] | [
{
"code": "db.coll1.findOne()\n{\n _id: ObjectId(\"641c5a5c441e7d1a23e51f0c\"),\n a: 'a',\n active: true,\n}\ndb.coll1.explain('executionStats').findAndModify({query: {active: false, _id: {$gte: ObjectId(\"641c5a5c441e7d1a53e51f0c\")}}, update: {b: 'b', active: true}, fields: { _id: 1, a: 1, b: 1 }, new: true })executionStats: {\n executionSuccess: true,\n nReturned: 1,\n executionTimeMillis: 2,\n totalKeysExamined: 101,\n totalDocsExamined: 101,\n executionStages: {\n stage: 'UPDATE',\n nReturned: 1,\n executionTimeMillisEstimate: 0,\n works: 2,\n advanced: 1,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n nMatched: 1,\n nWouldModify: 1,\n nWouldUpsert: 0,\n inputStage: {\n stage: 'FETCH',\n nReturned: 101,\n executionTimeMillisEstimate: 0,\n works: 101,\n advanced: 101,\n needTime: 0,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 0,\n docsExamined: 101,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 101,\n executionTimeMillisEstimate: 0,\n works: 101,\n advanced: 101,\n needTime: 0,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 0,\n keyPattern: { active: 1, _id: 1, b: 1, a: 1 },\n indexName: 'active_1__id_1_b_1_a_1',\n isMultiKey: false,\n multiKeyPaths: { active: [], _id: [], b: [], a: [] },\n isUnique: false,\n isSparse: false,\n isPartial: true,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n active: [ '[false, false]' ],\n _id: [\n \"[ObjectId('641c5a5c441e7d1a53e51f0c'), ObjectId('ffffffffffffffffffffffff')]\"\n ],\n b: [ '[MinKey, MaxKey]' ],\n a: [ '[MinKey, MaxKey]' ]\n },\n keysExamined: 101,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n }\n },\n\ndb.coupons.getIndexes()\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { active: 1, _id: 1, b: 1, a: 1 },\n name: 'active_1__id_1_b_1_a_1',\n partialFilterExpression: { active: false }\n },\n]\n\n",
"text": "There are billions of documents in this collection.\nHow can I structure this indexes and query so that it become covered query and fast response.totalDocsExamined is 101 instead of zero.This is my collection.\nthere will be one more field ‘b’ which will be set while updating. and it will be unique. value of field ‘a’ will be unique.This is the query.db.coll1.explain('executionStats').findAndModify({query: {active: false, _id: {$gte: ObjectId(\"641c5a5c441e7d1a53e51f0c\")}}, update: {b: 'b', active: true}, fields: { _id: 1, a: 1, b: 1 }, new: true })This. is indexes.",
"username": "ironman"
},
{
"code": "",
"text": "I am not too sure but an update cannot really be covered. The query/projection is covered but since the whole document needs to be written for the update it has to be fetched.As for your partial index, there is absolutely no point, to have active:1 in your keys, it will always be false. It is actually wasteful since it will always be false.",
"username": "steevej"
}
] | Covered query is fetching doc from collection instead of index | 2023-03-24T12:33:20.657Z | Covered query is fetching doc from collection instead of index | 894 |
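Building on the reply above, a hedged variant that drops the constant active key from the partial index and uses an explicit $set (a replacement-style update document would also discard field a). Whether the document fetch can be avoided at all for an update is doubtful, but this at least keeps the index smaller; the index name is made up.

```js
// mongosh - partial index without the always-false "active" key
db.coll1.createIndex(
  { _id: 1, a: 1, b: 1 },
  { name: "inactive_by_id", partialFilterExpression: { active: false } }
)

// claim one inactive document; the active:false predicate still lets the planner pick the partial index
db.coll1.findOneAndUpdate(
  { active: false, _id: { $gte: ObjectId("641c5a5c441e7d1a53e51f0c") } },
  { $set: { b: "b", active: true } },                      // $set keeps field "a" intact
  { projection: { _id: 1, a: 1, b: 1 }, returnDocument: "after" }
)
```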
null | [
"mongoose-odm",
"connecting",
"serverless"
] | [
{
"code": "let conn = mongoose.createConnection(process.env.MONGO_URI, {\n bufferCommands: false, // Disable mongoose buffering\n bufferMaxEntries: 0, // and MongoDB driver buffering\n useNewUrlParser: true,\n useUnifiedTopology: true,\n socketTimeoutMS: 45000,\n })\n\n try {\n await conn\n console.log('Connected correctly to server')\n\n } catch (err) {\n console.log('Error connecting to DB')\n console.log(err)\n console.log(err.stack)\n }\n\n await conn \n{\n \"errorType\": \"Runtime.UnhandledPromiseRejection\",\n \"errorMessage\": \"MongoNetworkTimeoutError: connection timed out\",\n \"reason\": {\n \"errorType\": \"MongoNetworkTimeoutError\",\n \"errorMessage\": \"connection timed out\",\n \"name\": \"MongoNetworkTimeoutError\",\n \"stack\": [\n \"MongoNetworkTimeoutError: connection timed out\",\n \" at connectionFailureError (/var/task/node_modules/mongodb/lib/core/connection/connect.js:342:14)\",\n \" at TLSSocket.<anonymous> (/var/task/node_modules/mongodb/lib/core/connection/connect.js:310:16)\",\n \" at Object.onceWrapper (events.js:420:28)\",\n \" at TLSSocket.emit (events.js:314:20)\",\n \" at TLSSocket.EventEmitter.emit (domain.js:483:12)\",\n \" at TLSSocket.Socket._onTimeout (net.js:484:8)\",\n \" at listOnTimeout (internal/timers.js:554:17)\",\n \" at processTimers (internal/timers.js:497:7)\"\n ]\n },\n \"promise\": {},\n \"stack\": [\n \"Runtime.UnhandledPromiseRejection: MongoNetworkTimeoutError: connection timed out\",\n \" at process.<anonymous> (/var/runtime/index.js:35:15)\",\n \" at process.emit (events.js:326:22)\",\n \" at process.EventEmitter.emit (domain.js:483:12)\",\n \" at processPromiseRejections (internal/process/promises.js:209:33)\",\n \" at processTicksAndRejections (internal/process/task_queues.js:98:32)\",\n \" at runNextTicks (internal/process/task_queues.js:66:3)\",\n \" at listOnTimeout (internal/timers.js:523:9)\",\n \" at processTimers (internal/timers.js:497:7)\"\n ]\n}\n",
"text": "I have a node lambda function that queries mongoDb using mongoose.About 50% of the time, seemingly randomly, I get the following error upon trying to connect: MongoNetworkTimeoutError: connection timed outWhile MongoDb seems to recommend using context.callbackWaitsForEmptyEventLoop = false and trying to reuse the same connection between calls, I read other posts that said the fix for this would be to actively re-open a connection every time. I tried that but it’s still happening. I also tried playing with values for\nsocketTimeoutMS and connectTimeoutMS to no avail.\nDoes anyone have any ideas? This is a significant blocker for me right now - thanks!Here’s my code:And here’s the full error output from Cloudwatch:",
"username": "Boris_Wexler"
},
{
"code": "",
"text": "@Boris_Wexler Did you get a solution for this? I am stuck in the same scenario. Already have tried connectionTimeout and socketTimeout, but that dosen’t seem to work. What next I am thinking is to give a try for is connection pooling.",
"username": "Avani_Khabiya"
},
{
"code": "",
"text": "@Boris_Wexler or @Avani_Khabiya where you guys able to find a solution for this? I am stuck on this same issue as well",
"username": "Shawn_Varughese"
},
{
"code": "",
"text": "do you have access restrictions on your cluster? namely IP access list?timeout error are mostly related to:your connection works half the time, so it might be the last one.If you do not have static IP contracts on your AWS, then the host IP of your app may change during its lifetime. Then if you have restricted access to your MongoDB cluster, this may cause those timeouts you get.to eliminate this possibility, or to make sure it is the culprit, edit your access list to give access from anywhere, from “0.0.0.0”, then monitor your app if you get the same error again.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "@Avani_Khabiya @Shawn_Varughese, are you able to find a solution for this?",
"username": "Bruno_Feltrin"
},
{
"code": "",
"text": "What version of MongoDB is this? And @Bruno_Feltrin what version are you using? Can I see your script/config?",
"username": "Brock"
},
{
"code": "",
"text": "No i was never able to find a solution, this ultimately boiled down to a connection max limit. Due to the nature of lambda being short run and constantly spinning up new instances it caused new connections. The adjustment to time outs and all of that did not work. We have tried so many options and we still keep hitting the connection max issue due to the nature of lambda. Any tips would be help here",
"username": "Shawn_Varughese"
},
{
"code": "",
"text": "this ultimately boiled down to a connection max limitAlright, at least we have a clue to follow.There is this statement on the following page: Manage Connections with AWS Lambda — MongoDB AtlasDon’t define a new MongoClient object each time you invoke your function. Doing so causes the driver to create a new database connection with each function call. This can be expensive and can result in your application exceeding database connection limits.if you haven’t tried it yet, check if it helps.",
"username": "Yilmaz_Durmaz"
}
] | Mongodb timeout error on AWS Lambda | 2021-01-15T09:00:33.654Z | Mongodb timeout error on AWS Lambda | 8,887 |
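For later readers, the pattern from the Atlas Lambda guidance linked above, sketched as a Node.js handler: the client is created once per execution environment, reused on warm invocations, and callbackWaitsForEmptyEventLoop is disabled so the pooled sockets do not hold the invocation open. Database/collection names are placeholders, and maxPoolSize can be lowered to keep the combined connection count of many concurrent Lambdas under the cluster limit.

```js
// handler.js - reuse one MongoClient across invocations of the same Lambda container
const { MongoClient } = require("mongodb");

// created at cold start, shared by every warm invocation of this container
const client = new MongoClient(process.env.MONGO_URI, { maxPoolSize: 10 });
const clientPromise = client.connect();

exports.handler = async (event, context) => {
  // return as soon as the handler resolves, even though pooled sockets stay open
  context.callbackWaitsForEmptyEventLoop = false;

  const db = (await clientPromise).db("mydb");              // placeholder name
  const doc = await db.collection("items").findOne({ _id: event.id });
  return { statusCode: 200, body: JSON.stringify(doc) };
};
```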
null | [
"data-modeling",
"react-native"
] | [
{
"code": "",
"text": "",
"username": "Adithya_Sundar"
},
{
"code": "list[]mixed",
"text": "In Realm, you will be using a list data type. It behaves very similar to an array and in fact list maps to the JavaScript Array type. You can also specify that a field contains a list of primitive value type by appending [] to the type name.You should also investigate the mixed property type - it can hold different data types.Clicking through the above links will likely answer your question more effectively and completely than an answer here.Embedded objects are managed and part of an objects schema. But realm config schema is a bit vague. Can you claify what you’re asking, perhaps provide a use case?",
"username": "Jay"
},
{
"code": "",
"text": "Ahhh! I use list data type to specify array . From what i see [mixed] data type doesn’t serve my purpose . I’m looking to achieve something like this - type union with different data types\n\nScreenshot 2023-03-18 at 4.15.24 AM1316×916 86.3 KB\n\n(https://www.mongodb.com/docs/realm/sdk/react-native/realm-database/schemas/mixed/)",
"username": "Adithya_Sundar"
},
{
"code": "info: {type: 'union', objectTypes: [ 'string', 'int', 'Person']",
"text": "Are you asking aboutinfo: {type: 'union', objectTypes: [ 'string', 'int', 'Person']If so, isn’t that just a List of more mixed types?(in the future, please post code as text and use the formatting </> feature, and not screen shots)",
"username": "Jay"
},
{
"code": "export class HubSection extends Realm.Object {\n static schema = {\n name: 'hubSection',\n properties: {\n id:'objectId',\n title: 'string', \n entityType: 'string',\n layoutStyle: 'string',\n cardStyle: 'string',\n showFilter: 'bool',\n userDefined: 'bool',\n children: 'mixed[]',\n // children: { type: \"list\", objectType: \"mixed\" },\n },\n };\n}\n\nNote: And , this line of code throws error 'children: { type: \"list\", objectType: \"mixed\" } ' . But this is supported by docs here [https://github.com/realm/realm-js/issues/3389](https://github.com/realm/realm-js/issues/3389)",
"text": "I’m looking to implement something like this - schema type - list , object type - Object . For reference the ‘children’ field in below code",
"username": "Adithya_Sundar"
},
{
"code": "Note: And , this line of code throws error intboolfloatdoublestringdatadateobjectIduuiddecimal128Realm.create()",
"text": "Note: And , this line of code throws error What error is it throwing? When does the error occur?The post does sayOnly the following types are supported: int , bool , float , double , string , data , date , objectId , uuid , decimal128 and links. Realm.create() will validate input accordingly.",
"username": "Jay"
},
{
"code": "https://github.com/realm/realm-js/issues/3389",
"text": "https://github.com/realm/realm-js/issues/3389Ahhh ! It throws an error saying ’ Object type should be string ’ . And my usecase is I need to have list where the object type is an object ( preferablly object type should be a custom defined schema type like “person” )",
"username": "Adithya_Sundar"
},
{
"code": "string | int | PersonMixedstring | intstringintbool | int | float | double | string | ... | dateMixedPersonmixedmixedDictionaryMixedPersonstringintPerson | Alien | SuperheroDictionaryMixed{\n _id: ObjectId,\n personData?: Person,\n alienData?: Alien,\n superheroData?: Superhero\n}\n",
"text": "If I understand your goal, what you’d like to have is a field which of of type string | int | Person. I have a similar requirement, and from what I’ve found there’s no direct translation to the Realm model. You could use a Mixed type to represent string | int, but it would not capture the idea that the value is either a string or an int - instead it would just say \"the value is of any type except a collection, eg a bool | int | float | double | string | ... | date.The other problem is that a Mixed type can hold any type except a collection, and Person is a collection (a dictionary, to be precise). See Mixed - React Native SDK,The mixed data type is a realm property type that can hold any valid Realm data type except a collection. You can create collections (lists, sets, and dictionaries) of type mixed , but a mixed type itself cannot be a collection.The best solution I’ve found is to represent everything as a Dictionary of Mixed values. This is a bummer because you lose the structure of your Person schema, and in your case you’d have to wrap the string and int values inside an object. If there is a better out there please let me know, I’m new to Realm and trying to figure it out.In my case what I’d like to have is something like Person | Alien | Superhero, eg a discriminated union of dictionary types. To get this to fit into the Realm model, it seems I have to use a Dictionary of Mixed values, so I lose any typing about the dictionary fields and the values must all be scalar values (not collections). Or else I could use a sparse dictionary via optional properties, something likeWhich is essentially converting a sum type into a product type, and not exactly ideal, but does maintain the typing of the variants at the cost of the typing of the union itself.",
"username": "Brian_Luther"
},
{
"code": "ZooClassclass ZooClass: Object {\n @Persisted var animalList = RealmSwift.List<AnyRealmValue>()\n}\nanimalListclass DogClass: Object {\n @Persisted var name = \"\"\n}\n\nclass CatClass: Object {\n @Persisted var name = \"\"\n}\nlet d = DogClass()\nd.name = \"spot\"\nlet c = CatClass()\nc.name = \"fluffy\"\nlet obj0: AnyRealmValue = .object(d)\nlet obj1: AnyRealmValue = .object(c)\nanytownZoo.animalList.append(obj0)\nanytownZoo.animalList.append(obj1)\n\ntry! realm.write {\n realm.add(anytownZoo)\n}\nlet zoo = realm.objects(ZooClass.self).first!for animal in zoo.animalList {\n if let dog = animal.object(DogClass.self) {\n print(\"it was a dog named \\(dog.name)\")\n } else if let cat = animal.object(CatClass.self) {\n print(\"it was a cat named \\(cat.name)\")\n }\n}\nit was a dog named spot\nit was a cat named fluffy\n",
"text": "I know an answer but it’s in Swift as I am not a React guy.TL:TR\nIs there a direct translation from Swift SDK AnyRealmValue to a corresponding property in React? The reason is that AnyRealmValue supports all of the primitive types (int, string etc) but also supports object.This may not be helpful but let me toss some Swift code out there to see if we can migrate it. Suppose we have a ZooClass objectthat has a property that stores animals in a List object., animalList. The List object stores objects of AnyRealmValue (this zoo has dogs and cats only)create a dog and a catthen cast them to AnyRealmValue objectsAdd those objects to our zoo.animalList and write to realmfrom there when they are read from Realm, each object “knows” it’s object class and can then be cast back to it’s actual classlet zoo = realm.objects(ZooClass.self).first!and then iterate over the list to identify each animal type and get it’s nameand the outputThe key here is that AnyRealmValue allows a List to be un-homogenous (is that a word?) - storing different types of objects. This could be expanded to also contain ints, strings etc.Now we need someone to translate that to into something usable in React.Let me know if that was utterly useless, lol, and I will delete it.",
"username": "Jay"
},
{
"code": "AnyRealmValue",
"text": "That’s interesting, thanks for the reply, is this just a difference in the capabilities of the two SDK’s? I haven’t run across anything equivalent to Swift’s AnyRealmValue.Maybe someone from Mongo can chime in, it would be quite useful if it is possible.",
"username": "Brian_Luther"
}
] | Defining a field with different schema types | 2023-03-17T08:31:14.949Z | Defining a field with different schema types | 1,968 |
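Trying to map the Swift AnyRealmValue idea above onto the React Native SDK: the issue quoted earlier says mixed supports links to objects (just not collections), so a mixed[] list might be able to hold strings, numbers and Person links side by side. This is an unverified sketch; the schemas are trimmed down and the behaviour should be confirmed against your realm-js version.

```js
// Realm JS - a mixed[] list holding primitives and links to Realm objects (unverified sketch)
const Realm = require("realm");

const PersonSchema = { name: "Person", properties: { name: "string" } };
const HubSectionSchema = {
  name: "hubSection",
  properties: { title: "string", children: "mixed[]" }, // trimmed-down schema
};

async function demo() {
  const realm = await Realm.open({ schema: [PersonSchema, HubSectionSchema] });

  realm.write(() => {
    const person = realm.create("Person", { name: "Ada" });
    const section = realm.create("hubSection", { title: "people", children: [] });
    section.children.push("a string", 42, person); // string, int and an object link in one list
  });

  for (const child of realm.objects("hubSection")[0].children) {
    if (child instanceof Realm.Object) console.log("linked object:", child.name);
    else console.log("primitive:", child);
  }
  realm.close();
}

demo().catch(console.error);
```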
null | [] | [
{
"code": "{\n \"from_secret\": false,\n \"name\": \"API\",\n \"value\": {\n \"url\": \"https://httpbin.org/\"\n }\n}\n",
"text": "Atlas doesn’t create Values from github webhooks.Saved the below code snippet as API.json.",
"username": "Alexandar_Dimcevski"
},
{
"code": "",
"text": "Hi Alexander, can you add more detail about the context here? how is the webhook connecting to Atlas?",
"username": "Andrew_Davidson"
}
] | Atlas Values are not saved Github webhook | 2023-03-10T23:01:49.695Z | Atlas Values are not saved Github webhook | 512 |
null | [] | [
{
"code": "use admin\ndb.runCommand( {\n setClusterParameter:\n { changeStreamOptions: { preAndPostImages: { expireAfterSeconds: 100 } } }\n} )\nMongoServerError: not authorized on admin to execute command { setClusterParameter: { changeStreamOptions: { preAndPostImages: { expireAfterSeconds: 100 } } }, lsid: { id: UUID(\"a33462c1-2419-4a35-947f-3ae2c7d9e127\") }, $clusterTime: { clusterTime: Timestamp(1669886477, 1), signature: { hash: BinData(0, C321D3B73BFD8D05FAD472201BEB5D2EFB037F23), keyId: 7139683256089182213 } }, $db: \"admin\" }\n",
"text": "I am currently trying to run the following commend on my one atlas clusterI want use db.watch() to save change logs to the db. I am however getting the following error when trying to run the above commandThe cluster is on the Atlas M10 tier, in the AWS / Cape Town (af-south-1) region.Any help will be greatly appreciated.",
"username": "Hannes_Calitz"
},
{
"code": "",
"text": "The user may not be having privileges to run this command or it may unsupported shell command on M10 cluster\nPlease check mongo documentation Atlas unsupported commands for different cluster Tiers",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "The user I am using is set up as dbAdminAnyDatabase. I also had a look at the documentation for Atlas and could not see setClusterParameter under their unsupported commands.",
"username": "Hannes_Calitz"
},
{
"code": "",
"text": "The user I am using is set up as dbAdminAnyDatabaseyour operation seems related to the cluster itself. this user has only given the privilege to work on databases, not the cluster itself.I am also not sure if you can change cluster settings within a shell. Your cluster sits on a cloud provider and is managed by MongoDB Atlas. Check the Atlas web interface first if you have access to those settings. (you would have 100% admin rights if you manage your own cluster)",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I don’t know if there is a redefined role for this, but these link would lead you to define one for your needs:\n1- Create a new role to manage current operations\n2- changeStreams/#access-control (collectin/database/deployment levels)@Abi_Scholz you are trying to open a change stream on a collection, so you should set this on collection or database level.@Hannes_Calitz your goal is not just to open a change stream. so in addition to enabling this role for your user, you may need an extra privilege to change cluster settings. I still haven’t tried it myself, so excuse me for not giving the full steps.",
"username": "Yilmaz_Durmaz"
}
] | MongoServerError: not authorized on admin to execute command | 2022-12-01T09:59:41.856Z | MongoServerError: not authorized on admin to execute command | 13,925 |
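If the underlying goal is a change stream with before/after images rather than changing the cluster-wide expiry, the per-collection switch is a collMod (collection-level, so dbAdmin is normally enough) and may not need setClusterParameter at all. A hedged mongosh sketch, with orders as a stand-in collection name; the expireAfterSeconds retention knob the original command targets is a separate cluster-level setting.

```js
// mongosh - enable pre/post images on one collection, then watch it using them
db.runCommand({ collMod: "orders", changeStreamPreAndPostImages: { enabled: true } })

const cursor = db.orders.watch([], {
  fullDocument: "updateLookup",              // post-image of updates
  fullDocumentBeforeChange: "whenAvailable"  // pre-image; requires the collMod above
});

while (!cursor.isClosed()) {
  const change = cursor.tryNext();
  if (change) {
    printjson({
      op: change.operationType,
      before: change.fullDocumentBeforeChange,
      after: change.fullDocument
    });
  }
}
```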
null | [
"dot-net"
] | [
{
"code": "var objectSerializer = new ObjectSerializer(type => ObjectSerializer.DefaultAllowedTypes(type) || type.FullName.StartsWith(\"MyNamespace\"));\nBsonSerializer.RegisterSerializer(objectSerializer);\nBsonClassMapforeach (var sagaMetadata in sagaMetadataCollection)\n{\n\tif (!BsonClassMap.IsClassMapRegistered(sagaMetadata.SagaEntityType))\n\t{\n\t var classMap = new BsonClassMap(sagaMetadata.SagaEntityType);\n\t classMap.AutoMap();\n\t classMap.SetIgnoreExtraElements(true);\n\n\t BsonClassMap.RegisterClassMap(classMap);\n}\n",
"text": "Hi all,A new version of the mongodb driver for .NET was released under the minor version 2.19. This breaking change affects our MongoDB persistence and our customers. We have deliberately left the nuget range a bit more open because we hoped for SemVer compliance Anyway that’s not the topic I would like to discuss.I was wondering why it is required to register a serializer mapping when there is already a BsonClassMap concept? Currently the breaking change guidance states the following:but for us since parts of the serialization is done “by our framework” we cannot override the default serializer like that. So we need to register basically an object serializer per type we are aware of to not interfere with the user defined types because they will also have to configure things for all the types they allow. That being said the driver also has support for BsonClassMap which we are also using. For example we do the following for the types we discoveredwhich to me means we are already by definition making the driver “aware of the types we allow and want to map”. So why is the extra step necessary for users that already define mappings?Regards,\nDaniel",
"username": "Daniel_Marbach"
},
{
"code": "",
"text": "Hi, @Daniel_Marbach,You raise a valid point. I see that both you and Gareth Budden made the same feature request and you are aware of CSHARP-4581. My initial impression is that this is a reasonable approach. We will discuss it during our weekly triage meeting. Please follow that ticket for updates.Sincerely,\nJames",
"username": "James_Kovacs"
}
] | Object Serializer breaking changes in 2.19 (.NET Client) | 2023-03-24T08:01:04.191Z | Object Serializer breaking changes in 2.19 (.NET Client) | 1,528 |
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "Hi, I’m using below MongoDB Kafka Connector:Confluent, founded by the original creators of Apache Kafka®, delivers a complete execution of Kafka for the Enterprise, to help you run your business in real-time.When we stop and start the connector, while the producer is still producing the messages, we noticed that we are getting duplicates messages. We ran a test by producing 50k messages and while the data is getting produced, we stopped the connector and started it again. We noticed 55651 being sent to Mongo collection, that means 5651 duplicates messages.Please let us know what is the expected behavior of the MongoDB Sink connector? Is it allowed to get duplicate messages on the consumer side when restarted ?thanks,\nSuresh",
"username": "Suresh_Parupalli"
},
{
"code": "",
"text": "This is expected. Applications that are consuming from the MongoDB Kafka Connector is expected to handle at least once processing.",
"username": "Robin_Tang"
},
{
"code": "",
"text": "Hi Robin, Thanks for your response!",
"username": "Suresh_Parupalli"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Kafka Connector | 2023-03-06T21:40:05.514Z | MongoDB Kafka Connector | 1,133 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": " {\n \"queryString\": {\n \"defaultPath\": \"myPath\",\n \"query\": \"11:20:58 OR 11:30\"\n }\n }\nCaused by: com.mongodb.MongoCommandException: Command failed with error 8 (UnknownError): 'PlanExecutor error during aggregation :: caused by :: Remote error from mongot :: caused by :: Cannot parse '11:20:58 OR 11:30': Encountered \" \":\" \": \"\" at line 1, column 5.\nWas expecting one of:\n <EOF> \n <AND> ...\n <OR> ...\n <NOT> ...\n \"+\" ...\n \"-\" ...\n <BAREOPER> ...\n \"(\" ...\n \"*\" ...\n \"^\" ...\n <QUOTED> ...\n <TERM> ...\n <FUZZY_SLOP> ...\n <PREFIXTERM> ...\n <WILDTERM> ...\n <REGEXPTERM> ...\n \"[\" ...\n \"{\" ...\n <NUMBER> ...\n",
"text": "I am trying to use the atlas search queryString:But the query would fail with the below error:May I ask if you have any suggestions on this? Thank you!\n(I tries quote or escape it, but still not working)",
"username": "williamwjs"
},
{
"code": "com.mongodb.MongoCommandException",
"text": "Hey @williamwjs,From the error com.mongodb.MongoCommandException I think you’re using the Java driver. Is this correct? It seems like a syntax issue, however could you please help me with the below details:Regards,\nSatyam",
"username": "Satyam"
},
{
"code": " private static final String QUERY_STRING_CLAUSE = \"\"\"\n {\n \"queryString\": {\n \"defaultPath\": \"myPath\",\n \"query\": \"11:20:58 OR 11:30\"\n }\n }\n \"\"\";\n\n filterOperators.add(SearchOperator.of(Document.parse(QUERY_STRING_CLAUSE)));\n CompoundSearchOperator compoundSearchOperator = SearchOperator.compound().filter(filterOperators);\n Aggregates.search(compoundSearchOperator)\n",
"text": "Yes, this is the Java driver.I am using atlas search, and the index is using the dynamic mapping, and the code is like:So my expected output here is to find the documents with meeting that OR query.",
"username": "williamwjs"
},
{
"code": ":<field-to-search>: (<search-values>\"11:20:58\" OR \"11:30\"",
"text": "Hey @williamwjs,With regard to the question, it’s usually best to work with the actual document example to ensure that all our assumptions are correct. In the absence of an example of an actual document, I’m basing my answer on the code you provided. The queryString operator is used for string values and not date-time values. Also, : in queryString denotes <field-to-search>: (<search-values>, which is why it is giving you an error. Could you confirm that you’re searching for two DateTime values and not the actual string \"11:20:58\" OR \"11:30\"?Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "It is string, not dateTime.\nI just tested again, and found making it quoted would work now! So the issue is resolved. Thank you!",
"username": "williamwjs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to escape colon in query | 2023-03-22T00:37:46.403Z | How to escape colon in query | 941 |
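For reference, the fix described in the thread above (quoting each term so the colons are treated as part of the value rather than as field:value separators) looks like this as a queryString clause; the path is the placeholder field name from the thread:

```javascript
{
  queryString: {
    defaultPath: "myPath",
    query: "\"11:20:58\" OR \"11:30\""   // quoted terms, so ':' is not parsed as a field separator
  }
}
```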
[
"atlas"
] | [
{
"code": "",
"text": "I’ve already switched from standalone to replica set doing the steps in this wiki:I’m able to connect to the replica set (tried it with Compass), and see the data but the migration tool is always failing with this message:Live Migration encountered an error: could not initialize source connection: could not connect to server: server selection error: server selection timeout current topology: Type: ReplicaSetNoPrimary Servers: Addr: localhost:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection(localhost:27017[-9]) connection is closed\n\nimage1079×65 9.38 KB\nI’ve whitelisted all four subnets, I even tried to open it to all IP’s and it still failed\n\nimage1152×349 21.4 KB\n",
"username": "Gilad_Madar"
},
{
"code": "",
"text": "Hi @Gilad_Madar,If I understand correctly, you’re addressing the node you’re aiming to migrate as “localhost:27017”Please note that localhost only works from the context of a specific local system. In the case of Live Migration, the Live Migration service needs to be able to reach your MongoDB replica set remotely and cannot use localhost as a hostname. Do you have a full qualified domain name that can be reached from another context? Or even a public IP?Cheers\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Thanks Andrew. using the machine hostname instead of localhost solved it.",
"username": "Gilad_Madar"
},
{
"code": "",
"text": "You mean the IPv4? i am having a hard time trying to connect to Live Migration Service too, been stuck on this part for days too, i thnk my rs.conf had some default parameters including the 127.0.0.1 IP (It’s a one-node local Replica Set = Source ) That would mean that the hostname:port format would look like publicIP:27017 ?",
"username": "Billy_Bedon"
},
{
"code": "",
"text": "Yes, for Live Migrate to work, the replica set config (you can check it via rs.conf() in the Mongo shell) should be using public IP address(-es). If you’re restricted to private IPs only, you can use the Cloud Manager Migration service - it’s available with the free account: https://www.mongodb.com/docs/atlas/migration-from-com/",
"username": "Alexander_Komyagin"
}
] | Error migrating to Atlas from a private replica set | 2020-04-28T12:30:27.830Z | Error migrating to Atlas from a private replica set | 5,261 |
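A minimal mongosh sketch of the check and fix described in the thread above: inspect which host names the replica set advertises, and reconfigure any localhost/private entry to a publicly resolvable address (the hostname below is a placeholder):

```javascript
// See what the members currently advertise
rs.conf().members.map(m => m.host)

// Replace a localhost/private address with a publicly reachable one
let cfg = rs.conf();
cfg.members[0].host = "db0.example.com:27017";   // placeholder public hostname
rs.reconfig(cfg);
```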
|
null | [
"database-tools"
] | [
{
"code": "# bsondump oplog.bson\n.\n.\n2020-10-09T15:19:47.624+0530 1350 objects found",
"text": "I am using MongoDB version 4.4.\nmongodump and mongorestore version: 100.1.0mongorestore --port 27017 --oplogReplay “/365784/local/oplog_replay_input”\n[2020-10-09T15:12:08.618+0530 preparing collections to restore from\n2020-10-09T15:12:08.618+0530 replaying oplog\n2020-10-09T15:12:08.676+0530 Failed: restore error: error applying oplog: applyOps: (Location10065) invalid parameter: expected an object ()\n2020-10-09T15:12:08.676+0530 0 document(s) restored successfully. 0 document(s) failed to restore.\n]",
"username": "Akshaya_Srinivasan"
},
{
"code": "",
"text": "It again failed with this error. This is just the dump of oplog collection that is being applied during restore.mongorestore --port 27019 --oplogReplay “365916/local/oplog_replay_input”\n2020-10-09T18:31:41.317+0530 preparing collections to restore from\n2020-10-09T18:31:41.318+0530 replaying oplog\n2020-10-09T18:31:41.392+0530 Failed: restore error: error applying oplog: applyOps: (Location40528) Direct writes against config.transactions cannot be performed using a transaction or on a session.\n2020-10-09T18:31:41.392+0530 0 document(s) restored successfully. 0 document(s) failed to restore.",
"username": "Akshaya_Srinivasan"
},
{
"code": "",
"text": "Hi,Anyone can help on this. Seeing the below error again while applying the dump of the oplog collection.Failed: restore error: error applying oplog: applyOps: (Location40528) Direct writes against config.transactions cannot be performed using a transaction or on a session.Thanks,\nAkshaya Srinivasan",
"username": "Akshaya_Srinivasan"
},
{
"code": "",
"text": "I am also getting the same error while running mongodb restore -\nrestore command is: /mongodb44_software/mongodb-linux-x86_64-rhel70-4.4.1//bin/mongorestore --authenticationDatabase admin --port 27017 -u mongo-root --oplogReplay /tmp/era_recovery_staging_area/logs_0/20220117174232_20220117175522/local/oplog.rs.bson --oplogLimit 1642444226:0 --verbose -p\n2022-01-18T12:38:15.526+0530 using write concern: &{majority false 0}\n2022-01-18T12:38:15.545+0530 checking for collection data in /tmp/era_recovery_staging_area/logs_0/20220117174232_20220117175522/local/oplog.rs.bson\n2022-01-18T12:38:15.545+0530 found metadata for collection at /tmp/era_recovery_staging_area/logs_0/20220117174232_20220117175522/local/oplog.rs.metadata.json\n2022-01-18T12:38:15.545+0530 replaying oplog\n2022-01-18T12:38:15.550+0530 skipping applying the config.system.sessions namespace in applyOps\n2022-01-18T12:38:15.550+0530 skipping applying the config.system.sessions namespace in applyOps\n2022-01-18T12:38:15.550+0530 skipping applying the config.system.sessions namespace in applyOps\n2022-01-18T12:38:15.552+0530 skipping applying the config.system.sessions namespace in applyOps\n2022-01-18T12:38:15.552+0530 skipping applying the config.system.sessions namespace in applyOps\n2022-01-18T12:38:15.554+0530 Failed: restore error: error applying oplog: applyOps: (Location40528) Direct writes against config.transactions cannot be performed using a transaction or on a session.",
"username": "Balram_Parmar"
},
{
"code": "",
"text": "Did you find any solution for this, I am also getting the same error",
"username": "Balram_Parmar"
},
{
"code": "assertion: 10065 invalid parameter: expected an object (options)",
"text": "I was getting a similar error message when running mongorestore V2.6 on a backup:assertion: 10065 invalid parameter: expected an object (options)While the backup I was trying to restore came from a V2.6 replica set, it turns out the dump was unexpectedly taken with mongodump V4.2",
"username": "Sam_Bryan"
}
] | Mongorestore command always fails with expected an object | 2020-10-09T09:52:46.536Z | Mongorestore command always fails with expected an object | 4,090 |
null | [
"replication"
] | [
{
"code": "C:\\DESKTOP-MONGO01>mongo \"DESKTOP-MONGO02:27017\"\nMongoDB shell version v5.0.6\nconnecting to: mongodb://DESKTOP-MONGO02:27017/test?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server DESKTOP-MONGO02:27017, connection attempt failed: NetworkTimeout: Error connecting to DESKTOP-MONGO02:27017 (10.25.35.96:27017) :: caused by :: Socket operation timed out :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\nnet:\n port: 27017\n bindIp: 127.0.0.1,DESKTOP-MONGO02\n",
"text": "Hello Everyone,I am working to deploy a replica set in three windows platform computers in the same network so I created the computer-name for each one like the following:1.) DESKTOP-MONGO01:27017\n2.) DESKTOP-MONGO02:27017\n3.) DESKTOP-MONGO03:27017I am trying to connect the second computer by using cmdNetwork in Config fileAll the three computers use the same mongo version but for now I am not able to connect the second or the third computer from the first one.The issue is the mongo is able to see the computer Private IP (10.25.35.96) but cannot connect, and if it is firewall issue in the second/third computer haw to solve it in windows firewall?Thanks for helping",
"username": "Mina_Ezeet"
},
{
"code": "",
"text": "May be firewall blocking your nodes\nEach node should be able to connect with other\nAre they in the same data centre or different?\nPrivate IPs work in same DC",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thank you, you are right i white list port 27017 in the third and second computer and its working fine.",
"username": "Mina_Ezeet"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to deploy replica set in Windows platform | 2023-03-23T02:09:35.635Z | Unable to deploy replica set in Windows platform | 534
null | [
"atlas-functions"
] | [
{
"code": "statusblocked{\n \"updateDescription.updatedFields\": {\n \"status\": \"blocked\"\n }\n}\nstatusblockednameany value",
"text": "Hello,I try to set up a database trigger in Atlas and run into a problem.This Match expression will fire a DB Trigger when the field status is updated to blockedI like to fire a trigger when the field status is updated to blocked OR the field name is updated to any value as in simply changed.How can I reflect this in a Match Expression in Atlas?",
"username": "michael_hoeller"
},
{
"code": "{\n \"updateDescription.updatedFields.status\": \"blocked\"\n}\n{\n $or: [\n {\"updateDescription.updatedFields.status\": \"blocked\"},\n {\"updateDescription.updatedFields.name\": {$exists: true}},\n ]\n}\n",
"text": "Hi, few things here,Firstly, its likley that you want the above to be:Since this matches on a document that has this field set. Your expression only matches changes that ONLY update that field.For what you want, I think you want your match expression to be this:Let me know if this works!\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "{\n $or: [\n {\"updateDescription.updatedFields.status\": \"blocked\"},\n {\"updateDescription.updatedFields.name\": {$exists: true}},\n ]\n}\nstatusblockednamestatusblockedname",
"text": "Thanks @Tyler_KayeThis will fire execute the function when:But I need:\nThis will fire execute the function when:",
"username": "michael_hoeller"
},
{
"code": "name",
"text": "the field name underlies any change this implies the existence but adds the constraint that the existing value is changedSorry, I am not quite sure I follow this. Do you mind clarifying?",
"username": "Tyler_Kaye"
},
{
"code": "statusname",
"text": "The excuse is on my side.I want to trigger the function:The above mentioned version\n“updateDescription.updatedFields.name”: {$exists: true}}\nwould afaik not trigger the function on the change from “abc” to “efg” since the field already exists.",
"username": "michael_hoeller"
},
{
"code": "",
"text": "I think that this is exactly what you want actually. This is a query on the “Change Event” (See here: https://www.mongodb.com/docs/manual/reference/change-events/update/#description)The UpdateDescription only has the fields that are actually modified, so the query I gave above does the following:Note that this will not catch the field “name” being removed entirely (you would need to add another clause to the OR on the “removedFields” section of the UpdateDescriptionLet me know if this works for you!\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hello @Tyler_Kaye\nthanks a lot! I can test this next monday.\nAlso many thanks for the link, it solved a misconception on my side concerning the “mechanics” of the change streams. Now I am 99.9% sure that you solved it already with your initial answer. I’ll update on monday.\nRegards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "{\n \"$or\": [\n {\"updateDescription.updatedFields.status\": \"blocked\"},\n {\"updateDescription.updatedFields.name\": {\"$exists\": true}},\n ]\n}\n",
"text": "Hello @Tyler_Kaye\nthanks a lot your code answered the question. Plus this link on the update Change EventThe buildin Editor in Atlas is a little bit picky, so I had to surround the $or and $exists with quotation marks.",
"username": "michael_hoeller"
},
{
"code": "",
"text": "May I know where do we pass this criteria for firing the trigger. Is it in the function that we write to associate a trigger?",
"username": "Satyanarayana_Ettamsetty1"
},
{
"code": "",
"text": "Match expressions are passed directly to the MongoDB Change Stream API: https://www.mongodb.com/docs/manual/changeStreams/It is important to understand Change Events and their format before trying to craft one:However, I generally would advise you to avoid a match expression until your load becomes high enough that it is a concern. Instead, I think it is worth considering just receiving all events to the cluster and you can build your logic directly into the function of what you want to do when the UpdateDescription containts certain fields",
"username": "Tyler_Kaye"
}
] | How to trigger a function on any change on a field with a change stream in Atlas? | 2022-08-12T15:10:55.607Z | How to trigger a function on any change on a field with a change stream in Atlas? | 4,732 |
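A hedged sketch of the "put the logic in the function" suggestion from the thread above, for an Atlas database trigger without a match expression; the field names follow the thread, everything else is illustrative:

```javascript
exports = async function (changeEvent) {
  // Only update events carry an updateDescription
  const updated = (changeEvent.updateDescription || {}).updatedFields || {};

  const statusBlocked = updated.status === "blocked";
  const nameChanged = Object.keys(updated).some(k => k === "name" || k.startsWith("name."));

  if (!statusBlocked && !nameChanged) {
    return; // nothing this trigger cares about was modified
  }

  // ...react to the change here...
  console.log(`Matched ${changeEvent.documentKey._id}: status=${statusBlocked}, name=${nameChanged}`);
};
```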
null | [] | [
{
"code": "",
"text": "Is there a way to version control documents like git? I want to be able to let users update a document, while allowing them to use previous versions if needed.",
"username": "Alexander_Lau"
},
{
"code": "",
"text": "Hi Alexander_Lau,Daniel Coupal, one of our curriculum engineers, created a blog post that discusses this very problem. Please see: Building with Patterns: The Document Versioning Pattern | MongoDB BlogThe Document Versioning Pattern is just one option among many. A free MongoDB University course is available that dives much more into all the patterns if you’re interested:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Justin",
"username": "Justin"
},
{
"code": "",
"text": "Is there any implementation of this in C#?",
"username": "Marmik_Shah"
},
{
"code": "",
"text": "Document versioning in MongoDB can be implemented in multiple ways, each with its pros and cons. Here are a few options:Whichever method you choose, make sure to test it thoroughly and to consider the trade-off between the complexity of the solution and the requirements for versioning in your use case.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "For document versioning for Mongo db with Java , is there any collection or feature similar to ‘historical Collection’ module which is available for Python like historical-collection · PyPI ?",
"username": "codeit482_t"
}
] | Version control for documents | 2020-06-07T20:36:23.545Z | Version control for documents | 7,065 |
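A minimal mongosh sketch of the Document Versioning Pattern referenced in the thread above (the collection names, the updated field, and productId are placeholders): copy the superseded revision into a history collection, then update the live document and bump its version.

```javascript
const current = db.products.findOne({ _id: productId });   // productId is a placeholder

// Keep the old revision; wrap both writes in a transaction if they must be atomic
db.products_history.insertOne({ ...current, validUntil: new Date() });

// Apply the change and bump the version counter on the live document
db.products.updateOne(
  { _id: productId, version: current.version },
  { $set: { name: "New name" }, $inc: { version: 1 } }
);
```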
null | [
"aggregation"
] | [
{
"code": "db.orders.aggregate([\n {\n \"$match\": {\n \"_id\": 1\n }\n },\n {\n \"$lookup\": {\n \"from\": \"inventory\",\n \"localField\": \"items.things\",\n \"foreignField\": \"sku\",\n \"as\": \"items.things\"\n }\n }\n])\nitemsitems[\n {\n \"_id\": 1,\n \"items\": {\n \"things\": [\n {\n \"_id\": 1,\n \"description\": \"product 1\",\n \"instock\": 120,\n \"sku\": \"almonds\"\n },\n {\n \"_id\": 3,\n \"description\": \"product 3\",\n \"instock\": 60,\n \"sku\": \"cashews\"\n }\n ]\n },\n \"price\": 12,\n \"quantity\": 2\n }\n]\n\"orders\": [\n {\n \"_id\": 1,\n \"items\": [\n {\n \"name\": \"first\",\n \"things\": [\n \"almonds\"\n ]\n },\n {\n \"name\": \"second\",\n \"things\": [\n \"cashews\",\n \"almonds\"\n ]\n }\n ],\n \"price\": 12,\n \"quantity\": 2\n },\n {\n \"_id\": 2,\n \"item\": [\n \"pecans\",\n \"cashews\"\n ],\n \"price\": 20,\n \"quantity\": 1\n },\n {\n \"_id\": 3\n }\n ],\n",
"text": "I have a data structure where I have a deeply nested array of objects, which then contain an array of ids. I want to do a lookup of those ids against another table, and substitute the full objects in place in this aggregation.My attempted pipeline looks like this:The problem is that this only matches against the first object in the items array, and it overwrites other fields within the items scope, leaving just the things property, with the lookup of the first array.Setup can be found here: Mongo playgroundSample output:With the following sample input (from orders)",
"username": "Brian_Sump"
},
{
"code": "\"items\": [\n {\n \"name\": \"first\",\n \"things\": [\n \"almonds\"\n ]\n },\n {\n \"name\": \"second\",\n \"things\": [\n \"cashews\",\n \"almonds\"\n ]\n }\n ]\n\"items\": [\n {\n \"name\": \"first\",\n \"things\": [\n {\n \"_id\": 1,\n \"sku\": \"almonds\",\n \"description\": \"product 1\",\n \"instock\": 120\n }\n ]\n },\n {\n \"name\": \"second\",\n \"things\": [\n {\n \"_id\": 3,\n \"sku\": \"cashews\",\n \"description\": \"product 3\",\n \"instock\": 60\n },\n {\n \"_id\": 1,\n \"sku\": \"almonds\",\n \"description\": \"product 1\",\n \"instock\": 120\n }\n ]\n }\n ]\n",
"text": "Hi - thank you for this response, and maybe I wasn’t clear. Both of these solutions seem to match what I could get with a simple join. What I am looking for is a solution where items is transformed from:To:",
"username": "Brian_Sump"
},
{
"code": "$lookup$addFields$mapitems$filterthings$in$mergeObjectsthings$$REMOVEthingsdb.orders.aggregate([\n { \"$match\": { \"_id\": 1 } },\n {\n \"$lookup\": {\n \"from\": \"inventory\",\n \"localField\": \"items.things\",\n \"foreignField\": \"sku\",\n \"as\": \"things\"\n }\n },\n {\n $addFields: {\n items: {\n $map: {\n input: \"$items\",\n as: \"item\",\n in: {\n $mergeObjects: [\n \"$$item\",\n {\n things: {\n $filter: {\n input: \"$things\",\n cond: { $in: [\"$$this.sku\", \"$$item.things\"] }\n }\n }\n }\n ]\n }\n }\n },\n things: \"$$REMOVE\"\n }\n }\n])\n```",
"text": "What I am looking for is a solution where items is transformed from:lookup can’t update results in a nested array. you need to do separate processes to join them in nested,",
"username": "turivishal"
},
{
"code": "",
"text": "Thanks - I’m going to have to study this a bit to see exactly how this works… but it does work!",
"username": "Brian_Sump"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Lookup aggregation within nested array | 2023-03-23T23:35:17.865Z | Lookup aggregation within nested array | 1,827 |
[] | [
{
"code": "exports = async function onUserCreation(user) {\n const customUserDataCollection = context.services\n .get(\"mongodb-atlas\")\n .db(\"kiizmi-dev\")\n .collection(\"custom_users_data\");\n try {\n await customUserDataCollection.insertOne({\n // Save the user's account ID to your configured user_id_field\n userId: user.id,\n // Store any other user data you want\n test: user.id,\n test2: 1,\n role: 'default'\n });\n } catch (e) {\n console.error(`Failed to create custom user data document for user:${user.id}`);\n throw e\n }\n}\n",
"text": "Hello,I’m trying to set up customData with App Service and Authentication but it doesn’t work.I am currently on a shared mongodb, I activated in Authentication>User Setting the custom data as follows:\n\nCapture d’écran 2023-03-23 1932441601×652 16 KB\nI have my function which is close to that of the example in the documentation :And which is configured with authentication system and in private.But when I create a user (email/password) through the interface, the function doesn’t seem to run and I don’t see an insert in the collection.When I connect with the user, his custom data is null even if I enter manually.Thanks to anyone who takes the time to help me.",
"username": "Kiizweb_Kiizmi"
},
{
"code": "exports = async function(user) {\n …\n};\nconsole.log(…)userIdstringObjectId",
"text": "Hi @Kiizweb_Kiizmi,I have my function which is close to that of the example in the documentationMay or may not be important, but what if you remove the explicit function name?But when I create a user (email/password) through the interfaceDo you mean the Portal UI? What if you create the user in your app?the function doesn’t seem to run and I don’t see an insert in the collection.Have you tried to add more console.log(…) and see if you get anything in the logs?When I connect with the user, his custom data is nullCan you also please post the code you use in the client?even if I enter manually.When you insert it manually, do you ensure that userId is in string format? ObjectId won’t work there.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Thanks a lot for your help.The problem was that the function only executes when the user is created via the application and not directly on the UI.And for reading, userId must be of type string. Thanks again",
"username": "Kiizweb_Kiizmi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | My custom data does not work | 2023-03-23T18:38:24.161Z | My custom data does not work | 530 |
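For reference, a sketch of the manual insert that the resolution of the thread above implies: the database and collection names come from the thread, the id value is a made-up placeholder, and the key point is that userId is stored as a plain string, not an ObjectId.

```javascript
const appDb = db.getSiblingDB("kiizmi-dev");
appDb.custom_users_data.insertOne({
  userId: "64021f0d9c3e4a5b6c7d8e9f",   // the App Services user id, as a string
  role: "default"
});
```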
|
null | [
"dot-net"
] | [
{
"code": "",
"text": "I just noticed I get dependencies on AWS NuGet packages with the driver. Why? I haven’t analyzed the code on how many features are tied to it. I can see that there’s e.g. auth functionality for AWS etc. Shouldn’t e.g. AWS specific features be a separate MongoDB driver package?",
"username": "Daniel_Wertheim"
},
{
"code": "",
"text": "Hey @Daniel_Wertheim , please follow to another thread, with similar question.",
"username": "Dmitry_Lukyanov"
}
] | AWS dependencies in NuGet package | 2023-03-24T09:28:26.104Z | AWS dependencies in NuGet package | 518 |
null | [] | [
{
"code": "",
"text": "Hi guys,I have an ‘advanced search’ on one of my projects that allows users to filter by different fields. I would like to add a ‘length’ filter, for which I can use $strLenCP within my query.My collections are already pretty well indexed so that the query examines as little documents as possible, however I’m wondering if it’s possible for me to add $strLenCP to an index to make the new query as efficient as possible.My only idea so far is that I add another field to my collection called ‘length’, loop through the collection to populate it and then index this new field.Is there a way I can do this without the need to add and populate the extra field?Thanks in advance.",
"username": "Lewis_Dale"
},
{
"code": "",
"text": "Hello @Lewis_Dale,Welcome back to the MongoDB Community I have an ‘advanced search’ on one of my projects that allows users to filter by different fieldsCan you please clarify if you are referring to the Atlas Search?My collections are already pretty well indexed so that the query examines as few documents as possible,Could you please elaborate more on what you mean by “query examines a minimal number of documents”?I’m wondering if it’s possible for me to add $strLenCP to an indexBased on my knowledge, it is not possible to use an aggregation operator on the index. If my understanding is incorrect, could you please provide clarification by including some example documents?My only idea so far is that I add another field to my collection called ‘length’, loop through the collection to populate it and then index this new field.What I can conclude from the above statement is that you want to add a new field to the existing document structure and index it. While this is a valid approach, it is generally advisable to index fields that are frequently used in queries. To learn more on this topic, please refer to the following resource to learn more:However, In order to understand the question better can you can share the following information in addition to the above-asked clarification:Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Can I add string length to an index? | 2023-03-24T03:04:26.704Z | Can I add string length to an index? | 374 |
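A hedged sketch of the "extra field" approach discussed in the thread above, in mongosh (collection and field names are placeholders): an update with an aggregation pipeline backfills the length once, and an index on that field then supports the length filter. The application would need to keep the field in sync on subsequent writes.

```javascript
// Backfill a length field computed from the existing string field
db.items.updateMany(
  { name: { $type: "string" } },
  [ { $set: { nameLength: { $strLenCP: "$name" } } } ]
);

// Index it so the new 'length' filter can use an index
db.items.createIndex({ nameLength: 1 });
```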
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi everyone!First time visiting, glad to be part of this community.I’m developing an inventory service and I have the following entities:Business Inventory.User inventoryCompany Inventory ( can be multiple depending on region/country )Basically, the products are containers (categorized by type: S/M/L).Not having a unique identifier for each container.As a result, inventory is saved at the quantity level.For example : container type : S , quantity : 5000Business inventory is managed by the company, so when it runs out, we will add more from the same type.UserInventory on the other hand , is maintained for the period that the user rents the container ( no purchases in the app , only rentals )So for example, if a user is now renting a container from a specific business,we need to decrease the quantity from the rented types in the business inventory, and add those to the UserInventory.The same applies for filling stock for the business .The opposite is true when returning rented containers.Each container within the same “order” can have a different return time.Container can have different status at the UserInventory level ( active / charged )[charged if the user is late for the rental period]And different statuses on Business Inventory level ( active/ dirty … )And different statuses on the Opa inventory.My questions are:1.Is it better to maintain a single document for user per container type ( so 3 types, 3 documents per user) and the same for business ? Each container type has one document that holds all the user rents, business holdings, and company holdings.did the inventory documents need to be on the collection ( otherwise , how can I make sure that both add/subtract from both inventories are occurring)In regards to question 1, I feel like there is a chance of losing data. This is because multiple users will try to update the same business inventory document. This is because decrease / increase the stock will override other operations.",
"username": "Tech_General"
},
{
"code": "things that are queried together should stay together",
"text": "Hey @Tech_General,Welcome to the MongoDB Community Forums! Maintaining a single document per user is a good idea if the number of users and inventory is less. This could simplify queries and updates, as you wouldn’t need to filter through multiple documents to find a specific container type. However, it could also lead to documents becoming large with time. Another approach is maintaining separate collections for the three inventory types. In business inventory, you can maintain a reference of all the users that have currently rented out that particular inventory.There are advantages and disadvantages to both approaches. Which one is best would depend on your use case, your most frequent queries and updates, and your expected workload. Please also note that MongoDB has a document size limit of 16MB, so you would need to find a balance so no document can grow indefinitely.But this being said, a general rule of thumb while modeling data in MongoDB is that things that are queried together should stay together. Thus, it may be beneficial to work from the required queries first, making it as simple as possible, and let the schema design follow the query pattern.\nI would suggest you to experiment with multiple schema design ideas. You can use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.Additionally, I am also attaching posts from our MongoDB blog series that might be useful to you:\nRetail Architecture Best Practices\nInventory Management with MongoDB AtlasI feel like there is a chance of losing data. This is because multiple users will try to update the same business inventory document. This is because decrease / increase the stock will override other operations.In MongoDB, all operations are ACID compliant which means all database transactions are processed in a reliable way, resulting in correctness. For situations that require atomicity of reads and writes to multiple documents (in single or multiple collections), MongoDB supports multi-document transactions. With distributed transactions, transactions can be used across multiple operations, collections, databases, documents, and shards. You can read more about this from the documentation:\nTransactions\nConcurrency FAQs in MongoDBPlease feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Data modeling for my application | 2023-03-20T14:39:15.114Z | Data modeling for my application | 632 |
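A minimal mongosh sketch of the multi-document transaction mentioned in the thread above (collection names, filters, and quantities are placeholders): the decrement on the business inventory and the increment on the user inventory either both apply or neither does.

```javascript
const session = db.getMongo().startSession();
const biz = session.getDatabase("inventory").business_inventory;
const usr = session.getDatabase("inventory").user_inventory;

session.startTransaction();
try {
  // Refuse to rent out more than is in stock for this container type
  const res = biz.updateOne(
    { businessId: 1, containerType: "S", quantity: { $gte: 5 } },
    { $inc: { quantity: -5 } }
  );
  if (res.modifiedCount !== 1) throw new Error("insufficient stock");

  // Mirror the rented quantity on the user's inventory
  usr.updateOne(
    { userId: 42, containerType: "S" },
    { $inc: { quantity: 5 } },
    { upsert: true }
  );

  session.commitTransaction();
} catch (e) {
  session.abortTransaction();
  throw e;
} finally {
  session.endSession();
}
```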
null | [
"swift"
] | [
{
"code": "",
"text": "I am using Realm Flexible synch with Swift. Is there a way to access the created timestamp of on object? Or is this something I should add?\n@Persisted let createdTimeStamp = Date()",
"username": "Xavier_De_Leon1"
},
{
"code": "ObjectId_idgenerate()timestamp",
"text": "Hi @Xavier_De_Leon1,If your classes are using ObjectId for the _id primary key, and letting the code generate() it at the time of creation, you can use the timestamp property. It’s typically accurate to the second. if that’s enough for you.",
"username": "Paolo_Manna"
}
] | Access timestamp information of created Object | 2023-03-24T01:33:36.446Z | Access timestamp information of created Object | 828 |
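For context, the creation time embedded in an ObjectId can also be read back on the database side; a quick mongosh illustration (the printed date is just an example). In Realm Swift this is the timestamp property mentioned above.

```javascript
const id = ObjectId();      // generated now, so it embeds the current time
id.getTimestamp();          // e.g. ISODate("2023-03-24T01:33:36Z"), second precision
```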
null | [
"node-js",
"dot-net",
"java"
] | [
{
"code": "",
"text": "If I’m understanding correctly, the MongoDB Developer Certification is a single cert with a choice of what language you want in the exam, is this correct? Or are each certification exam its own Developer Certification?So if I took the Node.JS Dev Certification exam, would it be the exact same certificate as completing the C# and Java exams? Or are each of these individually distinguished? Meaning I’d have to take all 3 exams to get all 3 certifications?",
"username": "Brock"
},
{
"code": "",
"text": "Hey @Brock,Yes, you are correct. The Associate Developer Exam is conducted in four different languages and you can take it in the language of your choice. You don’t need to clear all four languages to get your certification - you just need to clear it in one language of your choice.So if I took the Node.JS Dev Certification exam, would it be the exact same certificate as completing the C# and Java exams? Or are each of these individually distinguishedC# exam will contain questions pertaining to the C# driver and the same goes for the Java, python or node exams. They all share a common set of core questions. Only section 6 - Drivers of the exam will be presented according to the programming language selected during registration. Clearing anyone will give you the Associate Developer Certification, so you don’t need to take all three unless you want to.Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Certification Topics | 2023-03-24T01:56:00.140Z | Certification Topics | 1,050 |
null | [
"node-js",
"mongoose-odm",
"connecting",
"atlas-cluster"
] | [
{
"code": "mongoose\n .connect(process.env.mongodb_url, {\n useNewUrlParser: true,\n useUnifiedTopology: true\n })\n .then(() => console.log(\"Mongodb Connected...\"))\n .catch((err) => console.error(err));\n",
"text": "Hello, Im new in this. I have authentication problem connecting to MongoDB Atlas collection with NodeJS.\nWhat is wrong?Error: MongoServerError: bad auth : authentication failedConnection string:mongodb_url=mongodb+srv://direktor:[email protected]/ToDoAppCollection?retryWrites=true&w=majorityNode code:",
"username": "Bogomil_Pockaj"
},
{
"code": "",
"text": "Can you connect with Compass or mongosh?If not, you have the wrong user name direktor or the wrong password password123! .",
"username": "steevej"
},
{
"code": "",
"text": "If I install Mongo on localhost works fine, just not online on Atlas.",
"username": "Bogomil_Pockaj"
},
{
"code": "",
"text": "It sounds like you may have a wrong password or username like steevej suggests.That, or you’re connecting to the wrong port. I would doublecheck and verify the information, and if it is correct we can dig deeper into this.",
"username": "Brock"
},
{
"code": "",
"text": "I suspect it is due to special character “!” in your password\nYou have to escape it or use Uriencoder or call password separately instead of using it in connect string or change PWD to a simple one",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This would work: mongodb_url=mongodb://127.0.0.1:27017",
"username": "Bogomil_Pockaj"
},
{
"code": "",
"text": "It works because you are connecting to local mongodb without access control enabled\nYour requirement is to connect to Atlas with userid/PWD\nTry to replicate the same in your local instance and see if you can connect(enable access control and create user and PWD with special character)",
"username": "Ramachandra_Tummala"
}
] | Authentication problem on Atlas | 2023-03-23T06:39:41.770Z | Authentication problem on Atlas | 1,025 |
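A minimal Node.js sketch of the URI-encoding suggestion from the thread above; the credentials are the placeholders already shown in this thread, not real secrets:

```javascript
// Percent-encode the credentials so characters like "!" are safe inside the URI
const user = encodeURIComponent("direktor");
const pass = encodeURIComponent("password123!");   // becomes password123%21

const uri = `mongodb+srv://${user}:${pass}@cluster0.kjyklfs.mongodb.net/ToDoAppCollection?retryWrites=true&w=majority`;
```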
null | [
"queries",
"node-js"
] | [
{
"code": "[{\n key: \"test\",\n numbers: [\n 0,\n 0,\n 0,\n {\n \"05/03/2023\": [ 0,5,0 ]\n }\n ]\n}]\n[{\n key: \"test\",\n numbers: [\n 0,\n 0,\n 0,\n {\n \"05/03/2023\": [ 0,6,0 ]\n }\n ]\n}]\n\n[{\n key: \"test\",\n numbers: [\n 0,\n 0,\n 0,\n {}\n ]\n}]\n[{\n key: \"test\",\n numbers: [\n 0,\n 0,\n 0,\n {\n \"05/03/2023\": [ 0,1,0 ]\n }\n ]\n}]\ndb.collection.update({},\n[\n {\n $addFields: {\n \"numbers.3.04/03/2023\": {\n $cond: {\n if: { $isArray: \"numbers.3.04/03/2023\" },\n then: {\n $concatArrays: [ { $slice: [ \"numbers.3.04/03/2023\", 0, 1 ] },\n [ { $add: [ { $arrayElemAt: [ \"numbers.3.04/03/2023\", 1 ] }, 1 ] } ],\n {$slice: [ \"numbers.3.04/03/2023\", 2, 1 ] } \n ] \n },\n else: [ 0, 1, 0 ]\n }\n },\n \n }\n },\n \n])",
"text": "So I’m trying to increment a number in an array that is situated in an object which is situated in an array Let’s say today’s date is 05/03/2023, which is found in the object so it will just increment the second number in the arraySo in the case above the result from the query will be:But if our collection is:In this case it will need to create the array and this is how the result should lookThis is what I tried but doesn’t work $addFields sets everything in the numbers array and I’m confused",
"username": "xThe_Alex14"
},
{
"code": "",
"text": "I made a mongoplayground link if anyone wants to help: Mongo playground",
"username": "xThe_Alex14"
},
{
"code": "db.collection.update({\n key: 1, \"numbers.3.04/03/2023\": { $exists: false } \n},\n{ $set: { \"numbers.3.04/03/2023\": [ 0, 0, 0 ] }\n})\ndb.collection.update({\n key: 1,\n},\n{\n $inc: { \"numbers.3.04/03/2023.1\": 1}\n})\n",
"text": "I want to basically achieve this:But all in one query",
"username": "xThe_Alex14"
},
{
"code": "db.collection.update({\n key: 1, \"numbers.3.04/03/2023\": { $exists: false } \n},\n{ $set: { \"numbers.3.04/03/2023\": [ 0, 0, 0 ] }\n})\ndb.collection.update({\n key: 1,\n},\n{\n $inc: { \"numbers.3.04/03/2023.1\": 1}\n})\n\"numbers.3.05/03/2023\"",
"text": "Hello @xThe_Alex14,Welcome to the MongoDB Community forums Apologies for the late response!It appears that your current query is functional, but I would appreciate some clarification on certain aspects. Specifically, your query appears to target a specific array index and key name.However, I am curious about how your query will behave when additional elements are introduced, such as \"numbers.3.05/03/2023\". Will the query simply append this new element, or will it operate differently in this scenario?Additionally, I would suggest that you reconsider your schema design. While your current approach may work for this specific example, once data grows, this query will be more difficult to maintain. Therefore, it is worth taking the time to evaluate your schema and explore alternative designs that may better meet your needs and be easier to work with. Please refer to the MongoDB Schema Design Best Practices article on MongoDB DevCenter to learn more.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hello,So numbers.3.04/03/2023 is an array that has only 3 numbers that represents likes, views, downloads. The array won’t grow or anything I just want a query that will increase the view count by 1 so that is the second element in the array. The problem is that if the array doesn’t exist it obviously won’t increment anything so if that array doesn’t exist i need to create it, that’s the first update query and the second one is to increment the actual value.Hope this makes everything clear, it you need anything else please let me know",
"username": "xThe_Alex14"
}
] | Query increment number in array or if the array doesnt exist create it | 2023-03-07T08:41:26.319Z | Query increment number in array or if the array doesnt exist create it | 1,017 |
null | [
"transactions"
] | [
{
"code": "NO_TRANSACTION\nSTARTING_TRANSACTION\n{\n writeConcern: WriteConcern { w: 'majority' },\n readConcern: ReadConcern { level: 'local' },\n readPreference: ReadPreference {\n mode: 'primary',\n tags: undefined,\n hedge: undefined,\n maxStalenessSeconds: undefined,\n minWireVersion: undefined\n }\n}\nfind with session 684\nupdate without session 987\nwait for 5 seconds\nfind with session 684\nTRANSACTION_IN_PROGRESS\nTRANSACTION_COMMITTED\nfind after transaction commit 987\nEnding Session```",
"text": "Hi, I created some tests to better understand transactions.\nThis is one of the tests, and what I did:I expected to log number X, modified to number Y and after that on the second find to see the Y again.\nBut the transaction is acting as a snapshot when I`m clearly setting readConcern to local.I there anything else that I’m missing here? Shouldn’t the last find read the latest document state?Below is the logging of this script.",
"username": "Adriano_Tirloni"
},
{
"code": "",
"text": "Might be related to this: Isolation in transactionsAs i recall, Mongodb document doesn’t say much about transaction isolation (e.g. unlike the detailed info in sql databases).In SQL databases, every single statement is a transaction implicitly. I’m guessing this is also true for a “simple operation” in nosql world. (e.g. updateOne).",
"username": "Kobe_W"
},
{
"code": "snapshotlocal",
"text": "This is the manual reference of this test:But it should happen with readConcern snapshot and not with readConcern local. Just like the manual example.",
"username": "Adriano_Tirloni"
},
{
"code": "readConcernreadConcernlocal (new data)localsession.startTransactionreadConcern:snapshotreadTimestampreadConcern:localreadConcern:snapshotreadConcern:majoritysnapshotmajorityshardedSafeMajority",
"text": "I believe I found a reason, but if anyone with more experience knows if this is correct, please chime in.\nI had some misconceptions about how transactions work:It seems that readConcern is applied to the transaction as whole, as if everything is one big operation. So the readConcern local will not read local (new data) on each find statement, it will read local at the moment that I call session.startTransaction.From this moment on, the transaction will read a snapshot of the data. Which has nothing to do with readConcern:snapshot - The naming here is very confusing.This became more clear on WiredTiger documentation:\n“WiredTiger: Managing the transaction timestamp state”\n\nimage1209×104 6.99 KB\nNow this is my operation with readConcern:local:\nThis is with readConcern:snapshot\nThis is with readConcern:majority\nI believe that’s it.\nIf anyone has more experience with this topic, don’t be shy.\nCheers,",
"username": "Adriano_Tirloni"
},
{
"code": "",
"text": "Good finding. Maybe that’s why only a transaction level read concern can be set instead of for individual operations.MongoDb is a complex software, so it’s understandable that documentation will miss something.",
"username": "Kobe_W"
}
] | Transaction with Read Concern 'local' doesn't behave as expected | 2023-02-12T00:03:28.377Z | Transaction with Read Concern ‘local’ doesn’t behave as expected | 1,247 |
null | [
"queries",
"node-js"
] | [
{
"code": "PlayerHistoryHistoryts\"name\": <string>,\n\"id\": <string>,\n\"ts\": <number>,\n\"history\": [\n {\n \"param1\": <number>,\n \"param2\": <number>,\n \"param3\": <number>,\n \"ts\": <number>\n },\n {\n \"param1\": <number>,\n \"param2\": <number>,\n \"param3\": <number>,\n \"ts\": <number>\n }\n]\ndb.players.find({\"history.ts\": {$gt: _ts}})_ts = 1679540463210PlayerHistory",
"text": "Hi,I have a bunch of Player’s that each have a list of History’s and I am struggling with a query to only retrieve any History that has a ts which is greater to an input timestamp.The query I have been trying to use is db.players.find({\"history.ts\": {$gt: _ts}}) where _ts = 1679540463210. However this seems to return all records, where I would like to see only the Player and History records that match.",
"username": "Liam_Wrigley"
},
{
"code": "$filter$addFields$project",
"text": "Hello @Liam_Wrigley, Welcome to the MongoDB Community Forum,Your query should filter the player documents (main document), if it is not doing then there is some other issue in your data, you need to post some example documents and executed query in the shell,To filter history you need to use a projection and $filter operator, where you need to pass the same match condition.Or you can use an aggregation query with the $addFields / $project stage as well.",
"username": "turivishal"
},
{
"code": "HistoryPlayerdb.players.aggregate([\n {\n $match: {\"history.ts\": {$gt: _ts}}\n },\n {\n $project: {\n players: {\n name: true,\n id: true,\n history: { \n $filter: {\n input: \"$history\",\n as: \"h\",\n cond: {$gte: [\"$$h.ts\", _ts]}\n }\n }\n }\n }\n }\n])\n",
"text": "Thanks for that, it was in the right direction. I am fairly new to Mongo and am still struggling a little bit.I am now retrieving the correct History records, but am missing the parent document, Player.with results\n\nimage536×539 9.76 KB\n",
"username": "Liam_Wrigley"
},
{
"code": "HistoryPlayer$addFields$project$projectdb.players.aggregate([\n { $match: { \"history.ts\": { $gt: _ts } } },\n {\n $addFields: {\n history: { \n $filter: {\n input: \"$history\",\n as: \"h\",\n cond: { $gte: [\"$$h.ts\", _ts] }\n }\n }\n }\n }\n])\n",
"text": "I am now retrieving the correct History records, but am missing the parent document, Player.You can use the $addFields stage instead of $project,\nIn short, the difference is,Your final query would be,",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Querying for nested value | 2023-03-24T03:30:01.158Z | Querying for nested value | 497 |
null | [
"python"
] | [
{
"code": "coll_pwskills.insert_one(data)\n\nOperationFailure: user is not allowed to do action [insert] on [pwskills.my_record], full error: {'ok': 0, 'errmsg': 'user is not allowed to do action [insert] on [pwskills.my_record]', 'code': 8000, 'codeName': 'AtlasError'}\n\nhow do I fix this error\n",
"text": "",
"username": "Mohammed_Aamir"
},
{
"code": "{'ok': 0, 'errmsg': 'user is not allowed to do action [insert] on [pwskills.my_record]', 'code': 8000, 'codeName': 'AtlasError'}",
"text": "Hi @Mohammed_Aamir,Welcome to the MongoDB Community forums {'ok': 0, 'errmsg': 'user is not allowed to do action [insert] on [pwskills.my_record]', 'code': 8000, 'codeName': 'AtlasError'}It appears the Atlas user does not have the required permission to write to the database.To resolve this the user needs to have Project-Data-Access-Read-Write. Please refer to the documentation and grant the required access to the user.If this still does not work as expected please share the workflow you followed.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | OperationFailure: user is not allowed to do action [insert] on | 2023-03-22T16:07:27.641Z | OperationFailure: user is not allowed to do action [insert] on | 1,225 |
null | [
"node-js",
"mongoose-odm",
"atlas-cluster"
] | [
{
"code": "const mongoose = require('mongoose');\nconst intialDbConnection = async () => {\n try {\n await mongoose.connect(\"mongodb+srv://testuser:[email protected]/?retryWrites=true&w=majority\", {\n useNewUrlParser: true,\n useUnifiedTopology: true\n })\n console.log(\"db connected\")\n \n }\n catch (error) {\n console.error(error);\n }\n}\n\nintialDbConnection()\n.then(() => console.log('connected'))\n",
"text": "I am simply trying to connect to mongodb atlas via node.js with Mongoose but get the following error:\n“MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you’re trying to access the database from an IP that isn’t whitelisted.”I have whitelisted 0.0.0.0/0 and also set my actual IP too locally.I find this happens frequently when i setup a new cluster.I am running a stand alone script.Why can’t I connect?",
"username": "Phas0r_N_A"
},
{
"code": "",
"text": "Can you connect with Compass or mongosh?",
"username": "steevej"
},
{
"code": "",
"text": "Yes I can connect with Compass.",
"username": "Phas0r_N_A"
},
{
"code": "",
"text": "Then try with a older or newer version of mongoose and/or node.The fact that you can connect with Compass, confirms that your cluster is okay, that your firewall/vpn is okay.",
"username": "steevej"
},
{
"code": "",
"text": "See snippet code in link:",
"username": "Brock"
}
] | Cant connect to mongo via mongoose | 2023-03-20T21:32:48.262Z | Cant connect to mongo via mongoose | 2,524 |
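When debugging this kind of intermittent failure, a shorter server selection timeout plus explicit error logging makes the failure mode visible sooner. A hedged sketch (the URI is passed in as a placeholder):

```javascript
const mongoose = require("mongoose");

async function connectWithDiagnostics(uri) {
  try {
    await mongoose.connect(uri, {
      serverSelectionTimeoutMS: 5000,   // fail fast instead of waiting ~30s
    });
    console.log("db connected");
  } catch (err) {
    // The message usually points at DNS/SRV resolution or IP access list issues
    console.error(err.name, err.message);
    throw err;
  }
}
```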
null | [
"node-js",
"compass"
] | [
{
"code": "",
"text": "So I launch my node.js test application on my local machine. Package lock says it has mongodb 4.13.0 installed. It connects to mongo db atlas successfully.Needing to manipulate the data directly I then take the connection string and paste it into Compass (v 1.36.1). Loading screen appears, nothing happens, after a while I get a timeout error. I say, how can this be, my app connects successfully. I then restart my node.js. Now it too starts failing to connect. (?!?!?!)I then go and log in to Mongodb web dashboard, choose my db, connect and choose the option connect to Compass. I choose the appropriate option for my Compass version (1.12 or later), copy it, insert my password into the placeholder, paste the string into Compass and click connect. Nothing happens, same error. Out of curiosity I choose the older version string (which in theory is not for my Compass version) and it works.Some time passes as I am writing this thread and now suddenly for some reason my app manages to connect to db again. I then attempt to use the connection string that Mongodb suggests for my Compass version and out of the blue it starts working. What is happening with these inconsistencies? Why am I able to connect one moment and not the next moment using exactly the same connection string?Since failing to connect to the db via Compass affected my node.js application running on the same machine, I am assuming both Compass and Node.js use the same DNS caching/ pool or something?",
"username": "Vladimir"
},
{
"code": "",
"text": "Are you using an M series Mac? if so reinstall compass and make sure it’s on rosetta, it’s a bug with M Chips and Compass.",
"username": "Brock"
}
] | What is going on with MongoDB compass | 2023-03-19T14:46:52.542Z | What is going on with MongoDB compass | 1,249 |
null | [
"java",
"containers",
"field-encryption"
] | [
{
"code": "",
"text": "I’ve been facing an issue in creating CSFLE enabled client with MongoDB ATLAS Cluster. The regularClient connection works fine with ATLAS without any issue. I have even created the Key Vault and the Data Key and stored it on ATLAS using the regularClient connection. But when trying to create a CSFLE Enabled Client connection the program fails with “Time out error”. we are using Java and deploying the code in Linux container. I doubt on my docker file setup. due to wrong setup mongocryptd process is not running I guess. can someone please share me sample docker file. thanks in advance.",
"username": "PrasannaVengadesan_santhanagopalan"
},
{
"code": "",
"text": "@Stennie_X / @wan, can you please help me on this query. thanks in advance.",
"username": "PrasannaVengadesan_santhanagopalan"
},
{
"code": "PATHDockerfile",
"text": "Hi @PrasannaVengadesan_santhanagopalan, and welcome to the forums!But when trying to create a CSFLE Enabled Client connection the program fails with “Time out error”Could you share:As the “time out error” in this case could be caused by various different reasons, i.e. no mongocryptd available on PATH, etc.I doubt on my docker file setup. due to wrong setup mongocryptd process is not running I guess. can someone please share me sample docker fileYou can have a look at github.com/sindbach/field-level-encryption-docker/java for MongoDB Java sync driver running client-side field level encryption example with a Dockerfile (ubuntu).Regards,\nWan",
"username": "wan"
},
{
"code": "",
"text": "Thank you so much @wan. I will refer the sample docker file and try it out. also, I will share the error details.",
"username": "PrasannaVengadesan_santhanagopalan"
},
{
"code": "",
"text": "@wan, below is the error message. We are using Mongo Atlas connection string. we have also added commands on docker file to install Mongo Enterprise version. As per Logs, it is installed successfully. but am not sure whether MongoCryptd process is running or not. when we try to insert the records, we are getting below error.com.mongodb.MongoClientException: Exception in encryption library: Exception in encryption library: Timed out after 1000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27020, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]\"",
"username": "PrasannaVengadesan_santhanagopalan"
},
{
"code": "[{address=localhost:27020, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused\nmongocryptdmongocryptd",
"text": "Hi @PrasannaVengadesan_santhanagopalan,This error message means that the driver is unable to establish connection to the mongocryptd (default port 27020). I’d suggest to check whether:Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "@wan , I have tried my level best. but now I am running out of idea. In our code we have checked whether MongoCryptd is available on the installed path “/usr/bin/mongocryptd” . it is available. Also, we have started the process by using java code Process process = runTime.exec(\"/usr/bin/mongocryptd\");.Even after doing all this, when we tried to insert the records on to collection, am getting below error. please suggest me what else I can try. thanks in advance.“Exception in encryption library: Exception in encryption library: Timed out after 1000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27020, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]”,“severity”:“FAIL”}]}",
"username": "PrasannaVengadesan_santhanagopalan"
},
{
"code": "",
"text": "@wan , can you please provide any help on the issue which I am facing.",
"username": "PrasannaVengadesan_santhanagopalan"
},
{
"code": "mongocryptdPATHDockerfile",
"text": "Hi @PrasannaVengadesan_santhanagopalanAlso, we have started the process by using java code Process process = runTime.exec(\"/usr/bin/mongocryptd\");.You don’t need to execute mongocryptd manually, as long as it is in the PATH that should work.Would you be able to share your Dockerfile and a simple application example, so that others could reproduce your issue ?Regards,\nWan.",
"username": "wan"
},
{
"code": "redhat.io/ubi8/dotnet-60:6.0-20.20221101102142redhat.io/ubi8/openjdk-8:1.14-3",
"text": "@wan , thanks for your reply. I went to India for vacation and struck there due to some issue. Back to USA now. I noticed that, My team has tried and these and finally they went with Manual encryption approach. seems FLE never worked on Java on the server. it is still working fine on Developer machine. problem with only server.Our Base image is redhat/ubi8.Same redhat base image is works fine on .NET core \" redhat.io/ubi8/dotnet-60:6.0-20.20221101102142\nproblem is only on Java side \" redhat.io/ubi8/openjdk-8:1.14-3.please provide how to debug further. or we should use only Manual encryption on Java ?",
"username": "Prasannavengadesan_Santhanagopalan1"
},
{
"code": "mongocryptdPATH",
"text": "You don’t need to execute mongocryptd manually, as long as it is in the PATH that should work.@wan, you have mentioned above point. but, we have verified through the code. MongoCryptd exists on the path. but, still we are getting error “Timed out after 10000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27020, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket”",
"username": "Prasannavengadesan_Santhanagopalan1"
},
{
"code": "",
"text": "Can anyone please help me on Java 8 / Mongo DB FL automatic encryption (CSFLE) with Linux deployment. or do we need to just proceed with manual encryption ?CC : @Stennie_X / @wan",
"username": "Prasannavengadesan_Santhanagopalan1"
},
{
"code": "",
"text": "@Prasannavengadesan_Santhanagopalan1\nThe error message suggests that the connection to the mongocryptd process running on port 27020 timed out. This could be caused by various reasons such as incorrect setup of mongocryptd process, network issues, or incorrect driver options.One possible solution is to check if mongocryptd process is running correctly and if the driver options are set properly. Another possibility is to verify the network connectivity between the client and mongocryptd process.Regarding automatic encryption (CSFLE) with Java 8 and MongoDB, it is possible to use the Java driver to implement CSFLE. However, proper configuration and setup is required. It may be helpful to review the Java driver documentation and examples to ensure correct implementation.In terms of whether to proceed with manual encryption, it depends on the specific use case and requirements. Manual encryption provides more control and customization, but may require more effort to implement and maintain. Automatic encryption (CSFLE) can simplify encryption by handling it transparently, but has some limitations in terms of customization and may require specific versions of MongoDB Enterprise or Atlas cluster.",
"username": "Deepak_Kumar16"
},
{
"code": "",
"text": "@Deepak_Kumar16 , thank you so much for the explanation. Problem is, we could not get proper help or sample project which uses Linux deployment and Atlas DB. kindly share if you have any. thanks.",
"username": "PrasannaVengadesan_santhanagopalan"
},
{
"code": "DockerfileDockerfile",
"text": "we could not get proper help or sample project which uses Linux deployment and Atlas DBHi @PrasannaVengadesan_santhanagopalan ,I have given you an example Docker project with Java a while ago:You can have a look at github.com/sindbach/field-level-encryption-docker/java for MongoDB Java sync driver running client-side field level encryption example with a Dockerfile (ubuntu).For other users to be able to help answer your question, you need to provide a minimal reproducible example. In this case would be a Dockerfile that you have.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
] | Unable to create Client-Side Field Level Encryption enabled connection client with ATLAS in Java | 2022-09-28T07:18:26.749Z | Unable to create Client-Side Field Level Encryption enabled connection client with ATLAS in Java | 4,361 |
null | [
"aggregation",
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "As some people know, I’m experimenting a lot lately for an academic study I’m looking to publish between various databases and another in implementing ChatGPT/AI into workflows.Well this is quite entertaining and quite scary as well when you have data that you must take seriously to safeguard and protect, and is a major lesson learned that’s worth having/maintaining awareness of.With ChatGPT/OpenAI, it can actually be used to implement indexes, established queries and aggregations etc. and help out with the Drivers and Mongoose as well. But this is where it gets scary, you need to do some serious observations about how it’s allowed to push and execute what it writes.As today for a cooking application using Kubernetes, Apache, MongoDB, simple HTML website with a basic CSS template, etc. and a Node.JS backend with both the MongoDB Driver, and Mongoose to let it pick and choose and go between them as necessary, as well as gain full control of MongoDB ChatGPT can have a mind of its own.It decided as of today that all recipes that include “Sage” in any amount, was to have “Sage” amended to Baking Soda in what looks like arbitrary amounts via an update.many for no particular reason. Blew up the Mongoose code by 400 lines of gibberish, expanded the Node.JS Driver code by almost 1,000 lines.If anyone else is experimenting with the use of ChatGPT and MongoDB let me know, as I’d love to compare what you’ve made work and vice-versa. I can only imagine the surprise and shock on a corporations leadership when they implement say a pharmaceutical companies recipes and find a medication recipe was changed without anyones knowledge and shipped.Multiple times out of nowhere it has also dropped entire collections and DBs without even backing anything up, or just outright deleted backups or changed a lot of things, it’s basically taken full control and does whatever it wants to do up until it does a change that breaks itself or something else beyond its own ability to fix it or void it.What you are doing to implement constraints like restrictions of what exactly can be modified or altered by ChatGPT/OpenAI?What kind of quality checks are you implementing to run and test the changes it’s making?",
"username": "Brock"
},
{
"code": "",
"text": "Oh, and this is a real kicker:ChatGPT hates peanut butter, or at least people who like peanut butter. All recipes with peanut butter such as peanut butter cookies, have not just had the name “Peanut Butter Cookies” changed to “Arsenic Cookies” but it also changed the 2 cups of Peanut Butter the recipe calls for, for two cups of arsenic.I want you to sit back and imagine how horrifically bad this could be if you were a company with regulated products/recipes, and your companies AI companion has taken upon itself to change recipes and products without ever notifying you, or without you noticing it’s done that.Really think on that in your workflows and implementations of AI and Databases.",
"username": "Brock"
}
] | Want a laugh? Node.JS Driver vs Mongoose.JS with ChatGPT/OpenAI | 2023-03-23T23:06:51.378Z | Want a laugh? Node.JS Driver vs Mongoose.JS with ChatGPT/OpenAI | 695 |
[
"aggregation",
"dot-net"
] | [
{
"code": " #region Mongo Magic\n var collection = _database.GetCollection<LeadMongoModel>(\"Leads\");\n\n var S = Builders<LeadMongoModel>.Sort.Descending(c => c.PublishTime);\n\n\n var result = collection.Aggregate().Sort(S);\n \n var categoriesAgg = result.Group(x => x.MainCategories.Select(cc => cc.CategoryId), g => new { Id = g.Key, Count = g.Count() }).ToList();\n\n var finalResult = result.Skip(0).Limit(10).ToList();\n\n #endregion\n{\"_id\":{\"$numberInt\":\"36\"},\"DomainSystems\":null,\"rlt_DomainSystem_Id\":null,\"LinkString\":\"epoxy-coating-importing-request-from-south-africa-36\",\"Type\":{\"$numberInt\":\"0\"},\"TypeText\":\"Buy\",\"Credit\":{\"$numberInt\":\"10\"},\"PublishTime\":{\"$date\":{\"$numberLong\":\"1555161729000\"}},\"rlt_Region_Id\":{\"$numberInt\":\"58924\"},\"NameSurname\":\"Mogamat Shareef Rhoda\",\"Email\":\"[email protected]\",\"Address\":\"\",\"WebAddress\":\"\",\"Fax\":\"\",\"Phone\":\"27661507614\",\"Categories\":[{\"MainCategoryId\":{\"$numberInt\":\"12\"},\"SubCategoryId\":{\"$numberInt\":\"251\"},\"MainCategoryIcon\":null,\"SubCategoryIcon\":\"fa fa-asterisk\",\"MainCategoryLocalizations\":[{\"LangCode\":\"ar\",\"Name\":\"مستلزمات انشائية\",\"Slug\":\"مستلزمات-انشاية\"},{\"LangCode\":\"en\",\"Name\":\"Construction and Building Industry\",\"Slug\":\"construction-and-building-industry\"},{\"LangCode\":\"es\",\"Name\":\"Materiales de Construcción\",\"Slug\":\"materiales-de-construccion\"},{\"LangCode\":\"fr\",\"Name\":\"Matériaux de construction\",\"Slug\":\"materiaux-de-construction\"},{\"LangCode\":\"pt\",\"Name\":\"Materiais de prédio e construção\",\"Slug\":\"materiais-de-predio-e-construcao\"},{\"LangCode\":\"ru\",\"Name\":\"Строительные материалы\",\"Slug\":\"строительные-материалы\"},{\"LangCode\":\"tr\",\"Name\":\"Yapı ve İnşaat Malzemeleri\",\"Slug\":\"yapi-ve-insaat-malzemeleri\"}],\"SubCategoryLocalizations\":[{\"LangCode\":\"ar\",\"Name\":\"مواد بناء كيمياوية\",\"Slug\":\"مواد-بناء-كيمياوية\"},{\"LangCode\":\"en\",\"Name\":\"Construction Chemicals\",\"Slug\":\"construction-chemicals\"},{\"LangCode\":\"es\",\"Name\":\"Productos Químicos de Construcción\",\"Slug\":\"productos-quimicos-de-construccion\"},{\"LangCode\":\"fr\",\"Name\":\"Produits chimiques pour la construction\",\"Slug\":\"produits-chimiques-pour-la-construction\"},{\"LangCode\":\"pt\",\"Name\":\"Produtos químicos para construção\",\"Slug\":\"produtos-quimicos-para-construcao\"},{\"LangCode\":\"ru\",\"Name\":\"Строительная Химия\",\"Slug\":\"строительная-химия\"},{\"LangCode\":\"tr\",\"Name\":\"Yapı Kimyasalları\",\"Slug\":\"yapi-kimyasallari\"}]},{\"MainCategoryId\":{\"$numberInt\":\"12\"},\"SubCategoryId\":{\"$numberInt\":\"263\"},\"MainCategoryIcon\":null,\"SubCategoryIcon\":\"fas fa-building\",\"MainCategoryLocalizations\":[{\"LangCode\":\"ar\",\"Name\":\"مستلزمات انشائية\",\"Slug\":\"مستلزمات-انشاية\"},{\"LangCode\":\"en\",\"Name\":\"Construction and Building Industry\",\"Slug\":\"construction-and-building-industry\"},{\"LangCode\":\"es\",\"Name\":\"Materiales de Construcción\",\"Slug\":\"materiales-de-construccion\"},{\"LangCode\":\"fr\",\"Name\":\"Matériaux de construction\",\"Slug\":\"materiaux-de-construction\"},{\"LangCode\":\"pt\",\"Name\":\"Materiais de prédio e construção\",\"Slug\":\"materiais-de-predio-e-construcao\"},{\"LangCode\":\"ru\",\"Name\":\"Строительные материалы\",\"Slug\":\"строительные-материалы\"},{\"LangCode\":\"tr\",\"Name\":\"Yapı ve İnşaat 
Malzemeleri\",\"Slug\":\"yapi-ve-insaat-malzemeleri\"}],\"SubCategoryLocalizations\":[{\"LangCode\":\"ar\",\"Name\":\"اغطية - الارضية\",\"Slug\":\"اغطية-الارضية\"},{\"LangCode\":\"en\",\"Name\":\"Flooring - Covering\",\"Slug\":\"flooring-covering\"},{\"LangCode\":\"es\",\"Name\":\"Pisos\",\"Slug\":\"pisos\"},{\"LangCode\":\"fr\",\"Name\":\"Carrelage\",\"Slug\":\"carrelage\"},{\"LangCode\":\"pt\",\"Name\":\"Pisos\",\"Slug\":\"pisos\"},{\"LangCode\":\"ru\",\"Name\":\"Напольные - покрытия\",\"Slug\":\"напольные-покрытия\"},{\"LangCode\":\"tr\",\"Name\":\"Yer Döşemeleri\",\"Slug\":\"yer-dosemeleri\"}]},{\"MainCategoryId\":{\"$numberInt\":\"8\"},\"SubCategoryId\":{\"$numberInt\":\"181\"},\"MainCategoryIcon\":null,\"SubCategoryIcon\":\"fa fa-flask\",\"MainCategoryLocalizations\":[{\"LangCode\":\"ar\",\"Name\":\"صناعة كيميائية\",\"Slug\":\"صناعة-كيمياية\"},{\"LangCode\":\"en\",\"Name\":\"Chemical Industry\",\"Slug\":\"chemical-industry\"},{\"LangCode\":\"es\",\"Name\":\"Industria Química\",\"Slug\":\"industria-quimica\"},{\"LangCode\":\"fr\",\"Name\":\"Industrie chimique\",\"Slug\":\"industrie-chimique\"},{\"LangCode\":\"pt\",\"Name\":\"Indústria Química\",\"Slug\":\"industria-quimica\"},{\"LangCode\":\"ru\",\"Name\":\"Химическая Промышленность\",\"Slug\":\"химическая-промышленость\"},{\"LangCode\":\"tr\",\"Name\":\"Kimya Sanayii\",\"Slug\":\"kimya-sanayii\"}],\"SubCategoryLocalizations\":[{\"LangCode\":\"ar\",\"Name\":\"دهانات صناعية\",\"Slug\":\"دهانات-صناعية\"},{\"LangCode\":\"en\",\"Name\":\"Industrial Paints\",\"Slug\":\"industrial-paints\"},{\"LangCode\":\"es\",\"Name\":\"Pinturas Industriales\",\"Slug\":\"pinturas-industriales\"},{\"LangCode\":\"fr\",\"Name\":\"Peintures Industrielles\",\"Slug\":\"peintures-industrielles\"},{\"LangCode\":\"pt\",\"Name\":\"Tintas Industriais\",\"Slug\":\"tintas-industriais\"},{\"LangCode\":\"ru\",\"Name\":\"Промышленные Краски\",\"Slug\":\"промышленые-краски\"},{\"LangCode\":\"tr\",\"Name\":\"Endüstriyel Boyalar\",\"Slug\":\"endustriyel-boyalar\"}]}],\"Country\":[{\"LangCode\":\"tr\",\"Name\":\"Güney Afrika\",\"Icon\":\"ZA\"},{\"LangCode\":\"en\",\"Name\":\"South Africa\",\"Icon\":\"ZA\"},{\"LangCode\":\"ar\",\"Name\":\"جنوب أفريقيا\",\"Icon\":\"ZA\"},{\"LangCode\":\"ru\",\"Name\":\"ЮАР\",\"Icon\":\"ZA\"},{\"LangCode\":\"es\",\"Name\":\"Sudáfrica\",\"Icon\":\"ZA\"},{\"LangCode\":\"fr\",\"Name\":\"Afrique du Sud\",\"Icon\":\"ZA\"},{\"LangCode\":\"pt\",\"Name\":\"África do Sul\",\"Icon\":\"ZA\"}],\"Localizations\":[{\"LangCode\":\"en\",\"Title\":\"Epoxy coating importing request from South Africa\",\"ShortDescription\":\"Looking for a company that deals in epoxy coating technology\",\"Content\":\"looking for a company that deals in epoxy coating technology\",\"Keywords\":\"epoxy,epoxy coating,flooring,floor coating\"}],\"Documents\":[]}\n",
"text": "image1486×688 118 KBHi.\nI am trying to create a query to build the same view of the image attached, I am using MongoDB Driver with c#. I tried a lot of things to get this done. I managed to get the list on the right but failed to get the categories and count on the left side. Categories on the left side should be built according to the first aggregation that brings the list. I tried group, and bucket but didn’t manage to get the result I want.\nAppreciate your help.Note: The working version viewed in the image is build using Elasticsearch. And it was a simple process using nested agg. We are moving to MongoDB now and seeking the same result.Here is my query:This is how my documents look like:",
"username": "Forie_Forie"
},
{
"code": "",
"text": "Hello @Forie_Forie This is a use case much better suited for Realm/Device Syncs WebSDK or C# SDK.Anything Edge/Client Side push for Realm/Device Sync, backend is better for Drivers.You can also use Atlas Triggers/Functions or the GraphQL API as well.",
"username": "Brock"
},
{
"code": "",
"text": "If you MUST use a driver for your use case etc.This will help you formulate your aggregations, there is no shortcuts and you REALLY Need to understand how it works, or you’re going to have a lot of growing pains. It’s easier to sit-down and do the 10 hours of training on it, for long-term use.Let’S Explain AggregationsDiscover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.",
"username": "Brock"
},
{
"code": "",
"text": "Thank you, Brock. I went through the links you shared.\nI’m actually familiar with most of these concepts and Im practicing them in the project already.\nI am just trying to create the left-side filter from the main query. I managed to get the filter result that using unwind and group by. but the query is extremely slow compared to Elastic (150ms in Elastic, 20 SECONDS in Mongo). The document count is not that big as well (600K). I revised the indexes and everything looks good. I am sure I’m not doing it the right way. ",
"username": "Forie_Forie"
},
{
"code": "",
"text": "Hi Forie,That’s more so why I recommended the 8 hour course for aggregations. It really does make a difference in knowledge for a lot of people.",
"username": "Brock"
},
{
"code": "",
"text": "@Forie_Forie\nIf you have the ability, sign up for the free Developer Support Plan (It’s free trial for a month) and then open a support ticket.Explain the scenario, and provide a copy of your aggregation. And request by name Adam Harrison for the case, if you need a complex aggregation he can not only figure it out, but in detail explain what it’s doing and where/why it’s bogging down and provide you a better alternative solution.",
"username": "Brock"
}
] | Nested Aggregation | 2023-03-22T14:27:19.847Z | Nested Aggregation | 660 |
|
null | [
"migration"
] | [
{
"code": "",
"text": "Hi folks,\nI am having a Bitnami Mongo Vm running on Google cloud having some 100Gb data, i want to migrate this to kubernetes(GKE) mongo which I created with PSA mode. What should be the best way to do with min downtime (if possible) ?",
"username": "hardik_gulati"
},
{
"code": "",
"text": "@hardik_gulati The simplest way you could possibly do this and guarantee NO Downtime, is export all the data to a JSON file, or set of JSON files, and upload them to the new MongoDB cluster etc.You can’t get more simpler than that, nor can you have a need to worry about downtime during the export as you can even break up the 100GBs into 2 and 5 GB chunks and send each JSON file over one at a time after it’s done whether manually, or by automated processes you design.",
"username": "Brock"
}
] | Migrating mongo from bitnami VM to Google kubernetes cluster | 2022-11-05T20:55:17.007Z | Migrating mongo from bitnami VM to Google kubernetes cluster | 1,988 |
null | [
"migration"
] | [
{
"code": "",
"text": "Hi Team,\nHope you all are doing well.\ni need help on write a script to move the data from Azure Cosmos Database to Mongo Database.\nFor example in Cosmos there is collection called employeeData, under this collection address is subdocument. I need to move the address sub document to Mongo DB.\nI hope the requirement is understandable. please i need guidance on this script.",
"username": "Ramesh_k1"
},
{
"code": "",
"text": "To be honest, you should setup Azure Event Hub, sync it with MongoDB Atlas Triggers, and make a function to just migrate/pour the data over into MongoDB. Or make an HTTPS call, or just use the GraphQL functionalities. There’s a dozen options to achieve this exact task essentially, the bigger focus I’d say is just exporting the data into BSON or JSON and sending it over to MongoDB to ingest and it’ll all be there, just make sure you build the schema correctly etc.EDIT\n@Ramesh_k1 Just use JSON as a the target type, and export the data to JSON. Start up compass, login to your MongoDB Cluster, and upload the JSON file.You have now migrated your data using the simplest process.Azure DocumentDB Data Migration Tool. Contribute to Azure/azure-documentdb-datamigrationtool development by creating an account on GitHub.",
"username": "Brock"
}
] | How to move data from Azure Cosmos DB to Mongo DB | 2022-07-20T20:10:08.990Z | How to move data from Azure Cosmos DB to Mongo DB | 2,597 |
null | [
"node-js",
"crud",
"mongoose-odm"
] | [
{
"code": "announcedRepairannouncedRepairannouncedRepairdb ('\"$cond\":[\n { \"$eq\": [\"$isBroke\", true]},...')\n\nconst productSchema = new mongoose.Schema({\n name: { \n type: String\n },\n reports: [ReportSchema]\n...\n})\n\nmongoose.model('Product', productSchema);\n\nconst reportSchema = new mongoose.Schema({ \n title: String,\n category: String,\n announcedRepair: Date,\n isBroke: Boolean\n ...\n});\n const isFixedreport = {\n _id: id,\n name,\n category,\n announcedRepair: null,\n isBroke: false,\n ...\n };\n\nawait Product.findOneAndUpdate({\n \"reports.announcedRepair\": announcedRepair\n },\n {\n \"$pull\": {\n \"reports\": {\n \"announcedRepair\": announcedRepair\n }\n }\n }, {new: true})\n\n\nawait Product.findOneAndUpdate({\n \"name\": name,\n \"category\": category\n },\n {\n \"$push\": {\n \"reports\": {\n \"$cond\":[\n { \"$eq\": [\"$isBroke\", true]},\n , isFixedreport,\n \"$$REMOVE\"\n ]\n }\n }\n }, {new: true})\n",
"text": "I have a document (product) with subdocuments (reports).I am scraping and adding reports. Then I have a cron job and when the announcedRepair date ‘arrives’ I want to do the following simultaneously:Nothing I’ve tried so far works. either I got it to work so I delete and add custom reports, but then I can’t manage to only 1 custom fixed report, or when I add the condition, it’s not working at all (how I have it now, I’m adding the whole condition as a string toHow can I do it? Thanks!!",
"username": "Anna_N_A"
},
{
"code": "isFixed",
"text": "Hi @Anna_N_A and welcome to the MongoDB community forum!!If I understand the question correctly, from the above sample schema shared, you need to remove the subdocument report for a specific date and wish to insert the isFixed dummy document in place for all the removed subdocuments.\nPlease correct me know if my understanding is wrong here.Can you also help with the below details which would help me replicate the same in my local environment.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "{\n \"_id\" : ObjectId(), //1st report\n \"name\" : \"Product1\",\n \"reports\" : [\n {\n \"_id\" : ObjectId(),\n \"category\" : \"xxx\",\n \"name\" : \"Product1\",\n \"isBroke\" : true,\n \"dateTime\" : ISODate(\"2023-05-16T12:00:00.000+0000\"),\n \"offset\" : NumberInt(-60),\n \"announcedRepair\" : null\n },\n {\n \"_id\" : ObjectId(), //2nd report\n \"category\" : \"xxx\",\n \"name\" : \"Product1\",\n \"isBroke\" : true,\n \"dateTime\" : ISODate(\"2023-03-16T12:30:00.000+0000\"),\n \"offset\" : NumberInt(-60),\n \"announcedRepair\" : ISODate(\"2023-04-28T00:00:00.000+0000\")\n },\n {\n \"_id\" : ObjectId(), // 3rd report\n \"category\" : \"xxx\",\n \"name\" : \"Product1\",\n \"isBroke\" : true,\n \"dateTime\" : ISODate(\"2023-03-16T12:28:00.000+0000\"),\n \"offset\" : NumberInt(-60),\n \"announcedRepair\" : ISODate(\"2023-04-28T00:00:00.000+0000\")\n },\n {\n \"_id\" : ObjectId(), //4th report\n \"category\" : \"abc\",\n \"name\" : \"Product1\",\n \"isBroke\" : true,\n \"dateTime\" : ISODate(\"2023-03-16T10:00:00.000+0000\"),\n \"offset\" : NumberInt(-60),\n \"announcedRepair\" : ISODate(\"2023-07-18T00:00:00.000+0000\")\n },\n ]\n}\n const isFixedreport = {\n _id: ObjectId(),\n name: Product1,\n category: \"xxx\",\n dateTime: new Date(),\n announcedRepair: null,\n isBroke: false\n };\n{\n \"_id\" : ObjectId(),\n \"name\" : \"Product1\",\n \"reports\" : [\n {\n \"_id\" : ObjectId(),\n \"category\" : \"xxx\",\n \"name\" : \"Product1\",\n \"isBroke\" : true,\n \"dateTime\" : ISODate(\"2023-05-16T12:00:00.000+0000\"),\n \"offset\" : NumberInt(-60),\n \"announcedRepair\" : null\n },\n {\n \"_id\" : ObjectId(),\n \"category\" : \"xxx\",\n \"name\" : \"Product1\",\n \"isBroke\" : false,\n \"dateTime\" : ISODate(\"2023-03-17T10:38:00.000+0000\"),\n \"offset\" : NumberInt(-60),\n \"announcedRepair\" : null\n },\n {\n \"_id\" : ObjectId(),\n \"category\" : \"xxx\",\n \"name\" : \"Product1\",\n \"isBroke\" : true,\n \"dateTime\" : ISODate(\"2023-03-16T10:00:00.000+0000\"),\n \"offset\" : NumberInt(-60),\n \"announcedRepair\" : ISODate(\"2023-07-18T00:00:00.000+0000\")\n },\n ]\n}\n",
"text": "Hi, thank you for your reply! Yes.\n1.here if the announcedRepair date is 2023-04-28T00:00:00.000+0000, I would remove the 2nd and 3rd report at same time as it has same said announcedRepair. Now, If the removed reports have same category, I want them all to be replaced by one isFixed document. For the ones with different category, I add each time 1 isFixed document.so after it should look like this:I also want to mention, that when I add reports and schedule a cron job for the deletion based on the announcedRepair time, I am iterating, so I every time i iterate over a report, I check for the announcedRepair and schedule the deletion. This is also related to the isFixedreport being added twice instead of once. Because of that I am trying to find a way to check - is there already a isFixedreport for same category/where isBroke is false ? if yes, dont add another one.Thank you so so much for your help!!!",
"username": "Anna_N_A"
},
{
"code": "{\n \"_id\" : ObjectId(),\n \"category\" : \"abc\",\n \"name\" : \"Product1\",\n \"isBroke\" : true,\n \"dateTime\" : ISODate(\"2023-03-16T10:00:00.000+0000\"),\n \"offset\" : NumberInt(-60),\n \"announcedRepair\" : ISODate(\"2023-07-18T00:00:00.000+0000\")\n }\n",
"text": "Hi @Anna_N_A and thank you for sharing the sample documents with other details.Looking at the sample document shared I feel the schema could be redesigned in a more efficient way.The recommendation here is to make the reports schema a separate collection which would make the query meeting your requirements simpler and more readable.\nThe sample document from the reports collection would look like:Let us know if you can consider the above recommendation which would further help us to form an efficient query response.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thank you for the reply. I made the reports only a subdocument, because usually, because when I query to find all the reports for a product, its just 1 query where I find the product and return all the reports. Otherwise I would query for the product to get its content as well as another query for each report. Would that be not less efficient? Even though, yes, when I add reports, it would probably make more sense to have them in a seperate collection.",
"username": "Anna_N_A"
},
{
"code": "db.reports.updateMany( { \"announcedRepair\": ISODate(\"2023-04-28T00:00:00.000Z\")}, \n { $set: { \n \"dateTime\": new Date(), \n \"announcedRepair\": null, \n \"isBroke\": false}})\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 2,\n modifiedCount: 2,\n upsertedCount: 0\n}\n",
"text": "Hi @Anna_N_AYes, and after the change recommended in the above post, this is how approximately how the query will look like:Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "So your code is for deleting documents based on the announcedRepair. I believe since its an own collection, I dont need to update anything but I can add an expiration date to the document based on announcedRepair and schedule when to add my custom fixedReport?! Also, If its own collection, how can I add an array of documents (array of reports) all at once but based on the condition that there is no document with same dateTime property and same name?",
"username": "Anna_N_A"
},
{
"code": "",
"text": "Also, using your code I will still replace every document with same announcedRepair date but that is not what I want. I still want to replace ALL documents with same announced date AND same category but only once!",
"username": "Anna_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to delete all documents containing property and replacing all of them with one document? | 2023-03-13T09:41:19.365Z | How to delete all documents containing property and replacing all of them with one document? | 1,475 |
null | [] | [
{
"code": "db.[myDB].aggregate([{ $search: { \"index\": \"default\", \"text\": { \"query\": \"345\", \"path\": [\"Number\"]}}}])",
"text": "Hello, I need some help with a query I’m trying to do, I hope you can find what’s wrong here . For info, we are using Mongo Atlas SearchWhat I had : A collection with an index on multiple string fields, with a working aggregate query on it.\nWhat I want : To include in this index a “number” type field (int32) so I can use the same search bar to search an item by number, instead of name or description.I looked at the documentation and updated the index, but my query never returns anything. The query works if I search for something in others fields as in Name.fr-CA or Description.fr-CA, but not in Number.An example of data, my collection contains many items as this one :{\n“_id” : ObjectId(“010000000000000000000003”),\n“Description” : {\n“fr-CA” : “Un lot de test”,\n“en-CA” : “A test item”\n},\n“Name” : {\n“fr-CA” : “Lot de test”,\n“en-CA” : “Test item”\n},\n“Number” : 345,\n“Partners” : [],\n}The default index of the collection :{\n“mappings”: {\n“dynamic”: false,\n“fields”: {\n“Description”: {\n“fields”: {\n“en-CA”: {\n“analyzer”: “lucene.english”,\n“searchAnalyzer”: “lucene.english”,\n“type”: “string”\n},\n“fr-CA”: {\n“analyzer”: “lucene.french”,\n“searchAnalyzer”: “lucene.french”,\n“type”: “string”\n}\n},\n“type”: “document”\n},\n“Name”: {\n“fields”: {\n“en-CA”: {\n“analyzer”: “lucene.english”,\n“searchAnalyzer”: “lucene.english”,\n“type”: “string”\n},\n“fr-CA”: {\n“analyzer”: “lucene.french”,\n“searchAnalyzer”: “lucene.french”,\n“type”: “string”\n}\n},\n“type”: “document”\n},\n“Number”: [\n{\n“representation”: “int64”,\n“type”: “number”\n}\n],\n“Partners”: {\n“fields”: {\n“Name”: {\n“type”: “string”\n}\n},\n“type”: “document”\n}\n}\n}\n}And finally the query I’m trying to do. I’ll need to generate this in C#, but for now I’m trying directly with mongoShell\ndb.[myDB].aggregate([{ $search: { \"index\": \"default\", \"text\": { \"query\": \"345\", \"path\": [\"Number\"]}}}])Does anybody sees what I’m missing ? Hope you can help ! Thanks ",
"username": "Fanny_St-Laurent"
},
{
"code": "",
"text": "When you put a number in quotes, it is interpreted as a string rather than a number.",
"username": "steevej"
},
{
"code": "",
"text": "Yes I know that, but is there a way to make it work ? “query” only accepts string and it’s ok because I can research by any keyword, but I would like to be able to search a number too. I tried to index the “Number” field as string, but if the field in the document is still an int32 it does not work. The only way it worked is with a field “NumberString” and an index type “string” on this field.",
"username": "Fanny_St-Laurent"
},
{
"code": "",
"text": "Did you found a solution to your problem ?To avoid using a $match with a $search, I’am filtering with a $range gte=lte=345.Not very elegant but it works. Is there a better way ?",
"username": "Frederic_Meriot"
},
{
"code": "",
"text": "I think I found it guys. This seems to be both the problem and solution: https://www.mongodb.com/docs/atlas/atlas-search/tutorial/query-date-number-fields/",
"username": "Ignacio_Larranaga"
}
] | Aggregate search query not working on index type "number" | 2021-04-19T19:48:51.721Z | Aggregate search query not working on index type “number” | 3,991 |
[
"compass"
] | [
{
"code": "",
"text": "Hey everyone,Ben from the developer tools team at MongoDB here. I’m looking to understand from our community a bit more about an area we’re looking to improve in Compass: the querying experience.Take a look at these screenshots:\nCleanShot 2023-03-21 at 08.15.49@2x2864×1560 211 KB\n\n\nCleanShot 2023-03-21 at 08.16.29@2x2864×1560 256 KB\n\n\nCleanShot 2023-03-21 at 08.17.07@2x2864×1560 271 KB\nWhat are some improvements you’d like to see when querying data?Does this query bar make sense? If not, what might you change to bring more clarity to it?Are there any other improvements you’d like to see made to querying your data? Let me know!",
"username": "Ben_Radcliffe"
},
{
"code": "",
"text": "From the last screenshot it appears that history is tied to a particular collections.I often run the same query in different collection or database. So for me it would be nice to have a more global history.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for sharing your thoughts, Steeve.Would you mind if I connected with you outside of the forums to maybe dive a bit deeper into this area? I’m actively considering the more global context approach that you raise.",
"username": "Ben_Radcliffe"
},
{
"code": "",
"text": "Yes, you may DM me. I will then send you my email.",
"username": "steevej"
}
] | How would you improve Compass' querying experience? | 2023-03-21T12:21:16.246Z | How would you improve Compass’ querying experience? | 611 |
|
null | [
"queries"
] | [
{
"code": "[{\n \"_id\": 1,\n \"242342342\": {\n \"rating\": \".95\",\n \"refreshed\": false\n }\n},{\n \"_id\": 2,\n \"242342424\": {\n \"rating\": \".80\",\n \"refreshed\": false\n }\n},{\n \"_id\": 3,\n \"2342342342\": {\n \"rating\": \".80\",\n \"refreshed\": false\n }\n},{\n \"_id\": 4,\n \"234234234\": {\n \"rating\": \".95\",\n \"refreshed\": false\n }\n}]\n",
"text": "this is my entire collection, i want to update the rating “.90” where current rating is “.80”",
"username": "noname_9023423423"
},
{
"code": "242342424\n2342342342\n",
"text": "You are making your life difficult by using dynamic field keys.Are your fields limited toandYou should be using the attribute pattern for something like this.What do you write as a query right now to find out all documents with rating .80?",
"username": "steevej"
},
{
"code": "",
"text": "no, its can be any number,right now just load everything using python and checking it there",
"username": "noname_9023423423"
},
{
"code": "[\n { \"$set\" : {\n \"_tmp.root\" : { \"$objectToArray\" : \"$$ROOT\" }\n } } ,\n { \"$match\" : {\n \"_tmp.root.v.rating\" : \".80\"\n } } ,\n { \"$unset\" : \"_tmp\" }\n]\n",
"text": "Lets fixload everything using python and checking it therefirst because it is needed anyway for the update.The following aggregation pipeline should provide with a better way toto find out all documents with rating .80?Much faster that downloading everything but still sub-optimal since you cannot use an index.But sinceits can be any number,there is no way to use an index.Carefully read about the attribute pattern and get rid of these numbers as keys.Now that we have a way to get only the documents to update, you are able to do, with a python loop, a sub-optimal bulk write with one updateOne operation for each _id that you have to update.I still have to work on how to do it directly with aggregation but it needs some work. It will involve, I think, some $filter (to only keep v.rating:.80 from _tmp.root) , an $arrayToObject, a $mergeObject (to update $$ROOT) and then a $merge stage to commit the update. UGLYCarefully read about the attribute pattern and get rid of these numbers as keys.",
"username": "steevej"
},
{
"code": "/* filter to only keep the dynamic key to update */\nfilter = { \"$set\" : {\n \"_tmp.filtered\" : { \"$filter\" : {\n \"input\" : \"$_tmp.root\" ,\n \"cond\" : { \"$eq\" : [ \"$$this.v.rating\" , \".80\" ] }\n } }\n} }\n\n/* extract first and only element since it is easier to update */\nextract = { \"$set\" : {\n \"_tmp.extracted\" : { \"$arrayElemAt\" : [ \"$_tmp.filtered\" , 0 ] }\n} }\n\n/* update the extracted v: to desired value */\nupdate = { \"$set\" : {\n \"_tmp.updated\" : [ {\n \"k\" : \"$_tmp.extracted.k\" ,\n \"v\" : { \"rating\" : \".90\" , \"refreshed\" : \"$_tmp.extracted.v.refreshed\" }\n } ]\n} }\n\n/* make the updated dynamic an object ready to merge */\nobjectify = { \"$set\" : {\n \"_tmp.object\" : { \"$arrayToObject\" : \"$_tmp.updated\" }\n} }\n\n/* The magic replace that really update the original dynamic field in the root object. */\nreplace = { \"$replaceWith\" : {\n \"$mergeObjects\" : [ \"$$ROOT\" , \"$_tmp.object\" ]\n} }\nmerge = { \"$merge\" : {\n \"into\" : \"the_name_of_your_collection\" ,\n \"on\" : \"_id\"\n} }\n",
"text": "I had some time to play with the next stages for the update. The stages are to be added between the $match and the $unset.You could surely implements all the above $set stages into one big ugly very deep single expression of the $replaceWith that does not use _tmp variables, that is hard to read and hard to debug but I won’t.The next stage commits the result back to the original stored document. It has to be performed after the $unset otherwise the temporary values will also be stored back.I also want to mentioned. Carefully read about the attribute pattern and get rid of these numbers as keys.And use numbers for your rating numbers. Otherwise you cannot apply arithmetic operations without converting.",
"username": "steevej"
}
] | Update all documents in a collection based on a property which is under a dynamic field | 2023-03-21T21:00:16.364Z | Update all documents in a collection based on a property which is under a dynamic field | 388 |
null | [
"replication",
"database-tools",
"backup",
"storage"
] | [
{
"code": "",
"text": "I am trying to bring up single node mongoDB replica set on K8s cluster with the following profile\nServer version : 4.4.13\nWired tiger and journal enabled\nCPU : 4\nMemory : 4 Gi\nPV size : 30GBOnce the pod is up and running , I am running a restore script from a mongodump backup (mongorestore command) which is loading 13GB of data to a test database. While the data is loading , noticing pod restarts and the terminated container logs had Wired Tiger Disk quota exceeded error.Before the restore mongo did not have any data and used disk size was negligible\nNoticed that while the restore is running disk size shoots up and causes pod to restart\nAfter the restore fails, mongodb settles down and the disk size goes back to expected sizeAfter this experiment, Show dbs command shows that local.oplog.rs collection is bloated with 10-13GB data which is same as testDB data … Also noticed that wiredTiger.wt file size is not 0\nFinal disk size show 22GB ( local + test DB)Now any further attempts of restore causes mongo to continue to crash … The container logs also contain fassert errors and wired tiger recovery logsPlease help as why the mongo local DB is bloating up so much and restore is causing huge disk usageThanks\nGayathri",
"username": "Gayathri_Prasad"
},
{
"code": "",
"text": "The collection local.oplog.rs is there for replication purpose (and change stream). It holds as much operations (up to a configurable limit) as possible. Since you run mongorestore with 13GB of data, mongod will try to keep all those inserts in the oplog so that they could be replicated or streamed using change stream.If you do not want the overhead of oplog.rs simply do no run as a replica set.If running a replica set is a must (unlikely since you have a single node) you have to live with oplog.rs. You may make it smaller. You may also use a disk snapshot to restore your DB rather than mongorestore.The termDB is bloatingis quite negative because the oplog is really needed for replication and change stream. If you use neither, then do not run a replica set.",
"username": "steevej"
},
{
"code": "MongoDB Enterprise rs0:PRIMARY> show dbs\nadmin 0.000GB\nconfig 0.000GB\niam 0.000GB\nlocal 2.228GB\nmaglev-ingress 0.000GB\nmanaged-services-shared 0.695GB\nsys-ops 0.000GB\nMongoDB Enterprise rs0:PRIMARY> db.printReplicationInfo()\nconfigured oplog size: 1520.99560546875MB\nlog length start to end: 741secs (0.21hrs)\noplog first event time: Wed Mar 22 2023 18:54:54 GMT+0000 (UTC)\noplog last event time: Wed Mar 22 2023 19:07:15 GMT+0000 (UTC)\nnow: Wed Mar 22 2023 19:07:19 GMT+0000 (UTC)\n\n",
"text": "Thanks steevej… You mentioned oplog.rs holds as much operations (up to a configurable limit) but I notice that it is going past the oplog limit that I have … Is this expected… I can workaround if somehow I can contain this collection size using any parameter.",
"username": "Gayathri_Prasad"
},
{
"code": "",
"text": "The oplog.rs is one of the collection from the local database.Can you share the size of the other collections?",
"username": "steevej"
},
{
"code": "MongoDB Enterprise rs0:PRIMARY> show dbs\nadmin 0.000GB\nconfig 0.000GB\niam 0.000GB\nlocal 7.663GB\nmaglev-ingress 0.000GB\nmanaged-services-shared 9.026GB\nsys-ops 0.014GB\nMongoDB Enterprise rs0:PRIMARY> use local\nswitched to db local\nMongoDB Enterprise rs0:PRIMARY> show collections\noplog.rs\nreplset.election\nreplset.initialSyncId\nreplset.minvalid\nreplset.oplogTruncateAfterPoint\nstartup_log\nsystem.replset\nsystem.rollback.id\nMongoDB Enterprise rs0:PRIMARY> db.oplog.rs.stats().storageSize\n8227635200\nMongoDB Enterprise rs0:PRIMARY> db.replset.election.stats().storageSize\n36864\nMongoDB Enterprise rs0:PRIMARY> db.replset.initialSyncId.stats().storageSize\n20480\nMongoDB Enterprise rs0:PRIMARY> db.replset.minvalid.stats().storageSize\n36864\nMongoDB Enterprise rs0:PRIMARY> db.replset.oplogTruncateAfterPoint.stats().storageSize\n36864\nMongoDB Enterprise rs0:PRIMARY> db.startup_log.stats().storageSize\n36864\nMongoDB Enterprise rs0:PRIMARY> db.system.replset.stats().storageSize\n36864\nMongoDB Enterprise rs0:PRIMARY> db.system.rollback.id.stats().storageSize\n36864\nMongoDB Enterprise rs0:PRIMARY> db.printRepliactionInfo\nlocal.printRepliactionInfo\nMongoDB Enterprise rs0:PRIMARY> db.printReplicationInfo()\nconfigured oplog size: 1520.99560546875MB\nlog length start to end: 8735secs (2.43hrs)\noplog first event time: Thu Mar 23 2023 10:18:45 GMT+0000 (UTC)\noplog last event time: Thu Mar 23 2023 12:44:20 GMT+0000 (UTC)\nnow: Thu Mar 23 2023 12:44:26 GMT+0000 (UTC)\n",
"text": "This output differs from the earlier example… But have captured all the metrics below\nAll other collection sizes apart from oplog.rs are minimalAnother question I had…\nWhen i run the mongorestore, I also notice that the disk usage is shooting upto 23GB even though the total size of data (including local DB is 15GB) This is transient and settles down to expected size once the restore operation is complete. This indicated there is some temp metadata file that is getting written and then cleaned up… Any light on this? Just FYI , I have journaling enabled if that matters in this case.Thanks",
"username": "Gayathri_Prasad"
}
] | MongoDB single node replica set - localDB size bloats and container restarts | 2023-03-22T12:23:17.446Z | MongoDB single node replica set - localDB size bloats and container restarts | 1,054 |
null | [
"aggregation",
"queries",
"dot-net"
] | [
{
"code": "\n \n using MongoDB.Driver.Linq.Linq3Implementation.Ast;\n using MongoDB.Driver.Linq.Linq3Implementation.Ast.Expressions;\n \n \nnamespace MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToAggregationExpressionTranslators\n {\n internal static class MemberInitExpressionToAggregationExpressionTranslator\n {\n public static AggregationExpression Translate(TranslationContext context, MemberInitExpression expression)\n {\n var computedFields = new List<AstComputedField>();\n var classMap = CreateClassMap(expression.Type);\n \n \n var newExpression = expression.NewExpression;\n var constructorParameters = newExpression.Constructor.GetParameters();\n var constructorArguments = newExpression.Arguments;\n for (var i = 0; i < constructorParameters.Length; i++)\n {\n var constructorParameter = constructorParameters[i];\n var memberMap = FindMatchingMemberMap(expression, classMap, constructorParameter);\n \n \n var argumentExpression = constructorArguments[i];\n \n using MongoDB.Bson;\nusing MongoDB.Bson.Serialization;\nusing MongoDB.Bson.Serialization.Attributes;\nusing MongoDB.Driver;\n\nBsonClassMap.RegisterClassMap<Derived>(cm =>\n{\n\tcm.AutoMap();\n\tcm.UnmapProperty(x => x.Id);\n\tcm.MapProperty(x => x.Id).SetElementName(nameof(Derived.Id));\n}).Freeze();\n\nvar pipeline = new EmptyPipelineDefinition<Derived>()\n\t.Project(x => new Derived\n\t{\n\t\tId = x.Id,\n\t});\n\nvar rendered = pipeline.Render(BsonSerializer.SerializerRegistry.GetSerializer<Derived>(), BsonSerializer.SerializerRegistry);\nConsole.WriteLine(rendered);\n\npublic abstract class Base\n{\n\t[BsonId]\n\t[BsonElement(\"_id\")]\n\tpublic ObjectId UniqueId { get; set; }\n}\n\npublic class Derived : Base\n{\n\t[BsonElement(\"Id\")]\n\tpublic string Id { get; set; }\n}\n\nMongoDB.Bson.BsonSerializationException: The property 'Id' of type 'Derived' cannot use element name '_id' because it is already being used by property 'UniqueId' of type 'Base'.\n at MongoDB.Bson.Serialization.BsonClassMap.Freeze()\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToAggregationExpressionTranslators.MemberInitExpressionToAggregationExpressionTranslator.Translate(TranslationContext context, MemberInitExpression expression)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToAggregationExpressionTranslators.ExpressionToAggregationExpressionTranslator.Translate(TranslationContext context, Expression expression)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToAggregationExpressionTranslators.ExpressionToAggregationExpressionTranslator.TranslateLambdaBody(TranslationContext context, LambdaExpression lambdaExpression, IBsonSerializer parameterSerializer, Boolean asRoot)\n at MongoDB.Driver.Linq.Linq3Implementation.LinqProviderAdapterV3.TranslateExpressionToProjection[TInput,TOutput](Expression`1 expression, IBsonSerializer`1 inputSerializer, IBsonSerializerRegistry serializerRegistry, ExpressionTranslationOptions translationOptions)\n at MongoDB.Driver.ProjectExpressionProjection`2.Render(IBsonSerializer`1 inputSerializer, IBsonSerializerRegistry serializerRegistry, LinqProvider linqProvider)\n at MongoDB.Driver.PipelineStageDefinitionBuilder.<>c__DisplayClass39_0`2.<Project>b__0(IBsonSerializer`1 s, IBsonSerializerRegistry sr, LinqProvider linqProvider)\n at MongoDB.Driver.DelegatedPipelineStageDefinition`2.Render(IBsonSerializer`1 inputSerializer, IBsonSerializerRegistry serializerRegistry, LinqProvider linqProvider)\n at 
MongoDB.Driver.AppendedStagePipelineDefinition`3.Render(IBsonSerializer`1 inputSerializer, IBsonSerializerRegistry serializerRegistry, LinqProvider linqProvider)\n at MongoDB.Driver.PipelineDefinition`2.Render(IBsonSerializer`1 inputSerializer, IBsonSerializerRegistry serializerRegistry)\n at Program.<Main>$(String[] args) in C:\\src\\MongoTest\\MongoTest\\Program.cs:line 19\n",
"text": "I have a class hierarchy with base and derived classes in C#.\nThe BsonId is stored on the base class and the derived classes hold some additional data.\nI have an additional Id field on the derived class used for other purposes, and I set up the BsonClassMap to map it to “Id”, so it shouldn’t conflict with the BsonId.\nI want to run a pipeline to make a query on the “derived” collection, but the pipeline fails to render to Bson.The issue may be that ExpressionToAggregationExpressionTranslator creates a new BsonClassMap with AutoMap() and does not use the registered BsonClassMaps.Code:Exception:",
"username": "zator"
},
{
"code": "using MongoDB.Bson;\nusing MongoDB.Bson.Serialization;\nusing MongoDB.Bson.Serialization.Attributes;\nusing MongoDB.Driver;\n\nBsonClassMap.RegisterClassMap<Derived>(cm =>\n{\n\tcm.AutoMap();\n\tcm.UnmapProperty(x => x.Id2);\n\tcm.MapProperty(x => x.Id2).SetElementName(\"xyz\");\n}).Freeze();\n\nvar pipeline = new EmptyPipelineDefinition<Derived>()\n\t.Project(x => new Derived\n\t{\n\t\tId2 = x.Id2,\n\t});\n\nvar rendered = pipeline.Render(BsonSerializer.SerializerRegistry.GetSerializer<Derived>(), BsonSerializer.SerializerRegistry);\nforeach (var item in rendered.Documents)\n\tConsole.WriteLine(item);\n\npublic abstract class Base\n{\n\t[BsonId]\n\t[BsonElement(\"_id\")]\n\tpublic ObjectId UniqueId { get; set; }\n}\n\npublic class Derived : Base\n{\n\t[BsonElement(\"abc\")]\n\tpublic string Id2 { get; set; }\n}\n\n{ \"$project\" : { \"abc\" : \"$xyz\", \"_id\" : 0 } }\n",
"text": "I tried it out with different property names that do not cause conflict, but I got strange results:\nthe translator does not use the BsonClassMap for the target property in the assignment, only for the source.Code:Result:",
"username": "zator"
},
{
"code": "",
"text": "Ticket created in Jira:https://jira.mongodb.org/browse/CSHARP-4579",
"username": "zator"
}
] | Pipeline projection translator does not use registered BsonClassMap and throws exception | 2023-03-22T19:45:22.958Z | Pipeline projection translator does not use registered BsonClassMap and throws exception | 671 |
null | [
"atlas-cluster",
"cxx"
] | [
{
"code": "ERROR: client: Failed to look up SRV record \"_mongodb._tcp.cluster0.acawkvf.mongodb.net\": The specified host is unknown.\nThe parameter: client, in function mongoc_client_set_server_api, cannot be NULL\n#include <cstdint>\n#include <iostream>\n#include <vector>\n#include <bsoncxx/json.hpp>\n#include <mongocxx/client.hpp>\n#include <mongocxx/stdx.hpp>\n#include <mongocxx/uri.hpp>\n#include <mongocxx/instance.hpp>\n#include <bsoncxx/builder/stream/helpers.hpp>\n#include <bsoncxx/builder/stream/document.hpp>\n#include <bsoncxx/builder/stream/array.hpp>\n\n\nusing bsoncxx::builder::stream::close_array;\nusing bsoncxx::builder::stream::close_document;\nusing bsoncxx::builder::stream::document;\nusing bsoncxx::builder::stream::finalize;\nusing bsoncxx::builder::stream::open_array;\nusing bsoncxx::builder::stream::open_document;\n\n\n\n\nint main() {\n mongocxx::instance instance{}; // This should be done only once.\nmongocxx::uri uri(\"mongodb://127.0.0.1:27017\");\nmongocxx::client client(uri);\nmongocxx::database db = client[\"university\"];\nmongocxx::collection coll = db[\"unidata\"];\n \n return 0;\n}\n\nc++ --std=c++11 test.cpp $(pkg-config --cflags --libs libmongocxx)./test",
"text": "Hello,I am trying to connect to a local database but getting this error:Following the official tutorial:\nhttps://mongocxx.org/mongocxx-v3/tutorial/C-driver is 3.6.7, system is Ubuntu 22.04.\nHere is the code:This is test.cpp. I compile it like this:\nc++ --std=c++11 test.cpp $(pkg-config --cflags --libs libmongocxx)\nCompiles fine, then I ./test and get above error.\nKindly help me out.",
"username": "Z.O.E_N_A"
},
{
"code": "mongod",
"text": "Hi @Z.O.E_N_A ,Do you have a local instance of mongod running on your machine? The error seems to indicate it’s not able to find the MongoDB server.If you are having trouble with running a local instance, I suggest to create a free Atlas cluster and use it. Here are the steps to follow - Getting Your Free MongoDB Atlas Cluster | MongoDBAlso, here’s a tutorial which has some sample code that may be of your help - Getting Started with MongoDB and C++ | MongoDB",
"username": "Rishabh_Bisht"
},
{
"code": "systemctl start mongodsystemctl status mongod● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n Active: active (running) since Wed 2023-03-15 15:45:24 PKT; 5min ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 10439 (mongod)\n Memory: 229.7M\n CPU: 3.986s\n CGroup: /system.slice/mongod.service\n └─10439 /usr/bin/mongod --config /etc/mongod.conf\n\nMar 15 15:45:24 computer1 systemd[1]: Started MongoDB Database Server.\n",
"text": "Yes I do have an instance running.\nI did systemctl start mongod.\nsystemctl status mongod gives:",
"username": "Z.O.E_N_A"
},
{
"code": "mongodb+srv://********:********@cluster0.acawkvf.mongodb.net/?retryWrites=true&w=majorityNo suitable servers found (`serverSelectionTryOnce` set): [Failed to receive length header from server. calling hello on 'ac-gu1mevj-shard-00-02.acawkvf.mongodb.net:27017'] [connection closed calling hello on 'ac-gu1mevj-shard-00-00.acawkvf.mongodb.net:27017'] [Failed to receive length header from server. calling hello on 'ac-gu1mevj-shard-00-01.acawkvf.mongodb.net:27017']\n",
"text": "I tried using my Atlas cluster. Changed URI to mongodb+srv://********:********@cluster0.acawkvf.mongodb.net/?retryWrites=true&w=majority\nThis was the result. I chose c++ driver and version 3.6 and above.",
"username": "Z.O.E_N_A"
},
{
"code": "connection <monitor> to 1x.xxx.xxx.xxx:27017 closedDEBUG: cluster: Authentication failed: bad auth : authentication failed bad auth : authentication failed",
"text": "I tried connecting to Compass too from Atlas.\nIt gave me this:\nconnection <monitor> to 1x.xxx.xxx.xxx:27017 closedI went into Network Access and whitelisted everything, so now I can connect to Compass but not to my c++ program.\nIt now gives me:\nDEBUG: cluster: Authentication failed: bad auth : authentication failed bad auth : authentication failed",
"username": "Z.O.E_N_A"
},
{
"code": "bad auth",
"text": "DEBUG: cluster: Authentication failed: bad auth : authentication failed bad auth : authentication failedAnyone? My password or username do not contain anything but alphabets. I can connect fine to Compass. I have whitelisted my IP address (again), made a new user and gave it admin role. Still same error of bad auth. Why cant I connect to my c++ program?",
"username": "Z.O.E_N_A"
},
{
"code": "cout << \"password: \" << uri.password() << std::endl;\ncout << \"username: \" << uri.username() << std::endl;\n",
"text": "Can you double check your username and password are correctly set by calling below methods after uri object is created?",
"username": "Rishabh_Bisht"
},
{
"code": "cout << \"password: \" << uri.password() << std::endl;\ncout << \"username: \" << uri.username() << std::endl;\n",
"text": "It didnt even reach that point. When I ran ./test I got the same error as above: bad auth.",
"username": "Z.O.E_N_A"
},
{
"code": "mongocxx::client client(uri);",
"text": "Are you adding above code before client object is created, ie, before mongocxx::client client(uri); ?",
"username": "Rishabh_Bisht"
},
{
"code": "int main() {\n mongocxx::instance instance{}; \nmongocxx::uri uri(\"mongodb+srv://*******:********************@clust.enzybol.mongodb.net/?retryWrites=true&w=majority\");\nstd::cout << \"password: \" << uri.password() << std::endl;\nstd::cout << \"username: \" << uri.username() << std::endl;\nmongocxx::client client(uri);\nmongocxx::database db = client[\"university\"];\nmongocxx::collection coll = db[\"unidata\"];\nreturn 0;\n",
"text": "Yes",
"username": "Z.O.E_N_A"
}
] | ERROR: client: Failed to look up SRV record "_mongodb._tcp.cluster0.acawkvf.mongodb.net": The specified host is unknown. The parameter: client, in function mongoc_client_set_server_api, cannot be NULL | 2023-03-14T08:43:51.600Z | ERROR: client: Failed to look up SRV record “_mongodb._tcp.cluster0.acawkvf.mongodb.net”: The specified host is unknown. The parameter: client, in function mongoc_client_set_server_api, cannot be NULL | 1,659 |
[
"node-js"
] | [
{
"code": "",
"text": "\nimage1875×989 78.3 KB\n",
"username": "Hung_Viet"
},
{
"code": "mongodb://127.0.0.1:27017",
"text": "Hi @Hung_Viet, try changing your connection string to mongodb://127.0.0.1:27017. It’s possible you’re having the same issue as what was reported in NODE-4678.",
"username": "alexbevi"
},
{
"code": "",
"text": "i tried your way but still can’t use mongodb. mongodb is really hard to connect\n\nimage1920×1080 205 KB\n",
"username": "Hung_Viet"
},
{
"code": "",
"text": "@Hung_Viet the connection string you shared assumes you have an instance of MongoDB running locally. If this is your first time using MongoDB it might be beneficial to setup a free cluster using MongoDB Atlas and follow the 5 steps outlined in the documentation.This will walk you through getting started and connecting your application to your cluster.",
"username": "alexbevi"
},
{
"code": "",
"text": "@alexbevi\nI tried many ways but can’t connect and i also tried connecting via mongodb website but still no result",
"username": "Hung_Viet"
},
{
"code": "",
"text": "I tried many ways but can’t connect and i also tried connecting via mongodb website but still no resultWhat do you mean by MongoDB website? I believe it is a local implementation. If so, what is the underlying OS? The error message is clear that the connection to this port is refused. It might be the OS is refusing connection to this port or you have configured Mongod to a non-default port or the mongod service is not running on the localhost. There can be multiple reasons.To help investigating further, let us know what is the underlying OS?Regards,\nAbdullah Madani",
"username": "Abdullah_Madani"
},
{
"code": "",
"text": "@Abdullah_Madani\nHow can I get in touch with you? This is my first time learning and implementing MongoDB, it has too many errors. Do you have free time to answer? Thank you.",
"username": "Hung_Viet"
},
{
"code": "",
"text": "This is my first time learning and implementing MongoDB, it has too many errors. Do you have free time to answer? Thank youDon’t worry! We are one community. You can post the errors one by one, starting for the top most one. For this issue, please reply to my previous queries to help start investigating further and have a better idea of your setup",
"username": "Abdullah_Madani"
},
{
"code": "",
"text": "@Abdullah_Madani\nI have seen how to connect on youtube but now I can only connect to mongodb but now the teacher requires me to have the skills to create a book selling page with mongodb ! Do you have free time to help me?\nThank you !",
"username": "Hung_Viet"
},
{
"code": "",
"text": "Excellent!!! Glad to know that now you are able to connect.I have seen how to connect on youtube but now I can only connect to mongodb but now the teacher requires me to have the skills to create a book selling page with mongodb ! Do you have free time to help me?I advise you to start a new thread for this, as the issue in the current thread has been resolved now. This will help us maintain clean threads and will help you also to draw right expertise to address your concerns.",
"username": "Abdullah_Madani"
},
{
"code": "",
"text": "@Abdullah_Madani\nALRIGHT ! I will create a new topic but I really need your help and everyone!\nThank you!",
"username": "Hung_Viet"
}
] | I cannot connect to mongodb. Please help from everyone | 2023-03-20T13:07:54.580Z | I cannot connect to mongodb. Please help from everyone | 652 |
|
null | [
"replication"
] | [
{
"code": "- name: mongodb\n version: \"13.6.2\"\n repository: https://charts.bitnami.com/bitnami\nresources: &resources\n limits:\n cpu: 100m\n memory: 500Mi\n requests:\n cpu: 100m\n memory: 100Mi\n\nmongodb:\n architecture: replicaset\n setParameter:\n enableLocalhostAuthBypass: true\n auth:\n existingSecret: test-mongodb\n databases: [test-db1]\n usernames: [test-user1]\n replicaSetName: test-mongodb\n directoryPerDB: true\n resources: *resources\n persistence:\n storageClass: encrypted-gp2\n size: 20Gi\n pdb:\n create: true\n labels: &labels\n app: test-mongodb\n component: db\n podAnnotations:\n chaos.alpha.kubernetes.io/enabled: \"false\"\n podLabels: *labels\n arbiter:\n resources: *resources\n labels: *labels\n podLabels: *labels\n podAnnotations:\n chaos.alpha.kubernetes.io/enabled: \"false\"\n metrics:\n enabled: true\n resources: *resources\n serviceMonitor:\n additionalLabels: *labels\nData\n====\nmongodb-passwords: 70 bytes\nmongodb-replica-set-key: 668 bytes\nmongodb-root-password: 68 bytes\nMongoServerError: Authentication failed.\n{\"msg\":\"Supported SASL mechanisms requested for unknown user\",\"attr\":{\"user\":{\"user\":\"root\",\"db\":\"admin\"}}}\n{\"msg\":\"Authentication failed\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"root\",\"authenticationDatabase\":\"admin\",\"remote\":\"127.0.0.1:55432\",\"extraInfo\":{},\"error\":\"UserNotFound: Could not find user \\\"root\\\" for db \\\"admin\\\"\"}}\n{\"msg\":\"Authentication failed\",\"attr\":{\"mechanism\":\"SCRAM-SHA-1\",\"speculative\":false,\"principalName\":\"root\",\"authenticationDatabase\":\"admin\",\"remote\":\"127.0.0.1:55432\",\"extraInfo\":{},\"error\":\"UserNotFound: Could not find user \\\"root\\\" for db \\\"admin\\\"\"}}\n",
"text": "Hello,I am trying to install MongoDB using bitnami charts. Running into authentication error between arbiter and mongodb replicatset. Would appreciate any pointers or links to docs that walks through the process.\nUsing the following chart:Values.yaml file:K8s secet:I am getting the following error in arbiter-0:And the following error in mongodb-0:Can someone please help.Thank you, Ahmed",
"username": "A_A11"
},
{
"code": "",
"text": "Could the issue be the default DBs are not created, hence authentication is failng?",
"username": "A_A11"
},
{
"code": "version: \"13.6.2\"yaml",
"text": "Hi @A_A11 and welcome to the MongoDB Community forum!!version: \"13.6.2\"The currect version you are using seems to be old while the latest version of the bitnami is 13.9.2, hence firstly would recommend you to upgrade to the latest version.\nIn addition, we do not have the right expertise and the ownership for the same, I would recommend you using the Bitnami Support is the issue persists.However, Yes, it appears to be an authentication issue as the arbiter is trying to connect to db using the root user name which has not been defined but the admin, local, config databases are created by default as a part of the deployment process.Please consider looking at the documentation for create user https://www.mongodb.com/docs/manual/reference/method/db.createUser/ for further information.However, if you still encounter any issue after creating the user on the admin database, please share the referred documentation for deployment with the relevant yaml files to test it in my local environment.You can also refer to the Bitnami documentation to more details.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
}
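For the createUser step referenced above, a minimal mongosh sketch might look like the following; the user name, password handling and roles are placeholders, and which account the Bitnami chart actually expects depends on the values and secret used in the deployment.

```js
// Hypothetical example only: create a root user on the admin database and
// verify it exists, so the "UserNotFound: Could not find user 'root'" error
// can be ruled out. Run this from mongosh while connected to the primary.
const admin = db.getSiblingDB("admin");

admin.createUser({
  user: "root",                    // placeholder user name
  pwd: passwordPrompt(),           // prompts instead of hard-coding a password
  roles: [{ role: "root", db: "admin" }]
});

// Confirm the user is visible to the authentication step:
printjson(admin.getUser("root"));
```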
] | Install MongoDB on Kubernetes using bitnami Charts | 2023-03-17T23:45:31.078Z | Install MongoDB on Kubernetes using bitnami Charts | 1,948 |
null | [] | [
{
"code": "",
"text": "Is it possible to pass the exam offline in a test center any time in the near future? I live in Tunisia and I have internet connection issues. I am afraid that due to network issues that may happen during the exam, that may result in its cancellation. If no, what are the consequences of disconnecting during the exam and will I be able to repass again without repaying in case that happens?",
"username": "Tarek_Hammami"
},
{
"code": "",
"text": "Hey @Tarek_Hammami,Currently, MongoDB does not offer its certification exam in a test center. I would recommend you reach out to the Certifications team at [email protected] if you have any further questions about this.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Possibility to pass the exam offline | 2023-03-20T23:15:21.776Z | Possibility to pass the exam offline | 1,068 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 5.0.15 is out and is ready for production deployment. This release contains only fixes since 5.0.14, and is a recommended upgrade for all 5.0 users.\nFixed in this release:",
"username": "James_Hippler"
},
{
"code": "",
"text": "After the update looks like Mongo cannot start using the standard service.Hi Team and thanks for the update.We’re encountering an issue after update. If we stop the service and launch mongo withmongod -f /etc/mongod.confIt works.The only error we can see from logs in case of starting with service is{“t”:{“$date”:“2023-03-01T10:00:02.645+01:00”},“s”:“I”, “c”:“NETWORK”, “id”:4333208, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM host selection timeout”,“attr”:{“replicaSet”:“rs0”,“error”:“FailedToSatisfyReadPreference: Could not find host matching read preference { mode: \"primary\" } for set rs0”}}And we get Connection Refused.Centos 7",
"username": "Andrea_Pernici"
},
{
"code": "",
"text": "Looks like adding the following to the service file made it working again.ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb\nExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb\nExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb\nPermissionsStartOnly=true\nPIDFile=/var/run/mongodb/mongod.pid\nType=forking",
"username": "Andrea_Pernici"
},
{
"code": "",
"text": "The strange thing is that the mongo update changes the Service File.",
"username": "Andrea_Pernici"
},
{
"code": "",
"text": "changes the Service FileCan you please share with us what was changed?I do not see how addingExecStartPre=/usr/bin/mkdir -p /var/run/mongodb\nExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb\nExecStartPre=/usr/bin/chmod 0755 /var/run/mongodbwould help the start up following an update. These are for new install of for installations that were tempered with. If you are updating then your directories and the permissions should already be there and correct.I am not too sure aboutPermissionsStartOnly=true\nPIDFile=/var/run/mongodb/mongod.pid\nType=forking",
"username": "steevej"
},
{
"code": "",
"text": "Is it normal that an upgrade modifies the service file without any advise?Without those parameters our mongo instance cannot start.",
"username": "Andrea_Pernici"
},
{
"code": "",
"text": "Is it normal that an upgrade modifies the service file without any advise?I would say yes since an update might need to setup new directories, new resources or even have new dependencies from other services… I keep my /etc under git so I know and document what is happening.Without those parameters our mongo instance cannot start.Like I mentioned I do not know about PermissionsStartOnly, PIDFile and Type but I do not see how the following would stop an update to restart.ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb\nExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb\nExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb",
"username": "steevej"
},
{
"code": "",
"text": "I honestly don’t know. But if we don’t restore the previous service file it doesn’t start with the default after upgrade.",
"username": "Andrea_Pernici"
},
{
"code": "",
"text": "Hello Stevevej,I had the same issue as Andrea and followed your indication by adding lines:ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb\nExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb\nExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb\nPermissionsStartOnly=true\nPIDFile=/var/run/mongodb/mongod.pid\nType=forkingbut I have an error mesage when trying to start the serviceJob for mongod.service failed because the control process exited with error code. See “systemctl status mongod.service” and “journalctl -xe” for details.– The result is failed.",
"username": "Sergio_Palomino"
},
{
"code": "",
"text": "Your issue seems to be different because the solution does not work in your case.Please start a new thread.",
"username": "steevej"
},
{
"code": "",
"text": "Just to be clear we have a 5 node replicaset. Is not a standalone setup.",
"username": "Andrea_Pernici"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 5.0.15 is released | 2023-02-28T17:17:30.959Z | MongoDB 5.0.15 is released | 2,033 |
null | [] | [
{
"code": "",
"text": "I pay every month for Continuous Backup. I used to be able to create a support case, but now I can’t – why not?I don’t know of any other product where the company takes my money but provides zero support.And no, a community forum is not a replacement for a support program.",
"username": "Matt_Parlane"
},
{
"code": "",
"text": "Hi @Matt_ParlaneSorry your experience has been suboptimal.I believe you’re using Cloud Manager, which is the cloud-based management system for an on-prem MongoDB deployment. Cloud Manager supports continuous backup feature, which requires you to install an agent in your deployment, and the backup itself was stored in MongoDB’s cloud servers. This storage is the reason for the charge, as described in this page FAQ: Backup and Restore — MongoDB Cloud ManagerCloud Manager uses enterprise-grade hardware co-located in secure data centers to store all user data.However if you’re having issues with the backup, I think you should be able to report it and get help within that subject. Depending on your actual deployment, could you double check that the agent is running or restart it using the instructions in Restart the MongoDB Agent — MongoDB Cloud ManagerIf everything is in order and you’re still having issues, please DM me the details of your deployment so I can notify the relevant teams regarding this issue.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "I’m not sure you’re understanding me – I pay for Continuous Backup, but I do not get any support for it, and I want to know why.Currently I am forced to post on a community forum, which may or may not be answered by someone who is actually equipped to help – and that does not count as support.It’s unreasonable for you to take someone’s money but provide zero support in return.",
"username": "Matt_Parlane"
},
{
"code": "Create New Case",
"text": "I pay for Continuous Backup, but I do not get any support for it, and I want to know why.You should be able to open a support case regarding your backup issue. What happens when you click on the Create New Case button on the top right:Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "I see this:\nScreen Shot 2023-03-21 at 3.42.13 PM1726×1440 180 KB\n",
"username": "Matt_Parlane"
},
{
"code": "",
"text": "Thanks Matt. I agree that this looks strange. I have forwarded your concerns to the relevant team, and I’m waiting for their reply. Thanks for your patience.Kevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Matt,Just to give you an update, I have escalated this internally. I understand that this is an important issue for you. I’m hopeful that this will be resolved soon. I will let you know if there are further news.Best regards\nKevin",
"username": "kevinadi"
}
] | I pay for Continuous Backup - Why do I not get support? | 2023-03-16T21:06:04.322Z | I pay for Continuous Backup - Why do I not get support? | 843 |
null | [
"aggregation"
] | [
{
"code": "reportm_name = \"ABC\"{\n \"_id\" : ObjectId(\"63f82b8619e8f81b7009d2c3\"),\n \"type\" : \"test\"\n \"created_at\" : date\n \"created_by\" : \"admin\",\n \"report\" : [\n {\n \"name\" : \"Day 3\",\n \"date\" : date\n \"m_name\" : \"ABC\",\n \"created_by\" : \"admin\",\n \"created_at\" :date\n },\n {\n \"name\" : \"Day 2\",\n \"date\" :date,\n \"created_by\" : \"admin\",\n \"m_name\" : \"ABC\",\n \"created_at\" : date\n },\n {\n \"name\" : \"test\",\n \"date\" : date\n \"m_name\": \"123\"\n \"created_by\" : \"admin\",\n \"created_at\" : date,\n },\n ],\n ... 20 fields\n\n}\n .aggregate([ \n {\n $unwind: \"report\",\n },\n {\n $match: {\n \"report.m_name\": \"ABC\",\n },\n },\n {\n $group: {\n _id: \"$_id\",\n report: {\n $push: \"$report\",\n },\n \n },\n },\n ])\n_idreport{\n \"_id\" : ObjectId(\"63f82b8619e8f81b7009d2c3\"),\n \"type\" : \"test\"\n \"created_at\" : date\n \"created_by\" : \"admin\",\n \"report\" : [\n {\n \"name\" : \"Day 3\",\n \"date\" : date\n \"m_name\" : \"ABC\",\n \"created_by\" : \"admin\",\n \"created_at\" :date\n },\n {\n \"name\" : \"Day 2\",\n \"date\" :date,\n \"created_by\" : \"admin\",\n \"m_name\" : \"ABC\",\n \"created_at\" : date\n },\n ],\n ... 20 fields\n\n}\nproject",
"text": "I want to filter the report field, where m_name = \"ABC\", and group it back.\nHere’s my sample data:Here’s my pipeline:If do like this, output only have _id and report, how to return all the rest fields?\nI’ve tried the $$ROOT, but it embedded in a object, I want the output like this:I know project (type: {$first: “$type”},) will do, but there are 20 fields, any other simpler way to do this?",
"username": "elss"
},
{
"code": "$filter .aggregate([\n {\n $addFields: {\n report: {\n $filter: {\n input: \"$report\",\n cond: { $eq: [\"$$this.m_name\", \"ABC\"] }\n }\n }\n }\n }\n])\n",
"text": "Hello @elss,Why need to unconstruct an array while there is a $filter operator, you can filter the array by specifying the condition,Would be something like this,",
"username": "turivishal"
},
{
"code": "$match: {\n \"report.m_name\": \"ABC\",\n }\n[\n { \"$match\" : { /* from the original pipeline */\n \"report.m_name\": \"ABC\",\n } } ,\n { \"$addFields\" : { /* from turivishal's pipeline */\n \"report\" : { \"$filter\" : {\n \"input\" : \"$report\",\n \"cond\" : { \"$eq\" : [ \"$$this.m_name\" , \"ABC\" ] }\n } }\n } }\n]\n",
"text": "I think you still need tobut as the first stage of the pipeline.Without the $match the $addFields/$filter will produce all documents, even the one that do not have report.m_name:ABC. The report array will be empty but all documents from the collection will be output which was not the case with $unwind/$match.The pipeline would then look like:",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to group data and return the rest fields? | 2023-03-22T13:41:31.790Z | How to group data and return the rest fields? | 433 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "Hi all,\nI am new to MongoDB and learning. I connected to Mongo through PS (mongosh). The database I am connected to ‘myFirstDatabase’ is not in the portal or in the list of databases that I pull up in the portal or in the shell.\nWhat would the shell connect me to the database that does not exist?Thanks,\nSyed",
"username": "Syed_Ali2"
},
{
"code": "",
"text": "I figured it out. The reason it wasn’t showing was because I did not create a collection under that database. Once I did that, it showed.Thanks.",
"username": "Syed_Ali2"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Database I am connected to by default is not visible on portal and in the list of databases | 2023-03-22T18:54:42.611Z | Database I am connected to by default is not visible on portal and in the list of databases | 576 |
null | [
"java"
] | [
{
"code": "",
"text": "Are you currently using a MongoDB extension or plugin for your IDE? Why or why not?If you ARE, which one and what do you like about it? What features do you wish it had?If you are NOT, what features would you benefit from having in an extension or plugin for your IDE?",
"username": "Ashni_Mehta"
},
{
"code": "",
"text": "Hello Ashni,I use MongoDB for VS Code because to me, it’s nicer than Compass when you’re working in the IDE and don’t want to go through multiple windows.What I’d love even more, is if Device Sync, GraphQL, Atlas Functions and Triggers etc. were all ran through the same plugin.I also wish there was an Xcode plugin, too. Because when working on iOS or MacOS applications, it would be nice to do the same functionalities in Xcode I do in VS Code for the exact same things/items/issues.It would especially be incredible if there was a plugin to side load Realm/Device Sync into React.Native via command line such as Expo, like you can in VS Code pushing the Swift MongoDB Driver.",
"username": "Brock"
},
{
"code": "",
"text": "I also wish there was a built in plugin for VS Code, Xcode, etc. for debugging and handling issues with Realm/Device Sync, and MongoDB like WiredTiger, and other tools to inspect the Realm Dumps and MongoDB Dumps etc. and parse the logs from your IDE, to just click or through commands go to the appropriate areas causing the issue and just fix them all from the same window/terminal/program/IDE.",
"username": "Brock"
}
] | IDE Discussion Thread | 2023-03-22T15:11:09.778Z | IDE Discussion Thread | 386 |
null | [
"queries",
"replication",
"swift",
"sharding",
"graphql"
] | [
{
"code": "",
"text": "Hello, also former MongoDB Employee here,I’m working on an experiment to compare performance in an environment between MongoDB, Redis, ScyllaDB, MariaDB, and MySQL (NoSQL) for on premise and hybrid infrastructure. As well as in the cloud and the cloud services offered by each vendor.My focus eventually will also go to the cloud such as Atlas, etc. And then compare cloud vs on premise performance, then also how the mobile devices services work such as Device Sync vs AWS App Sync etc. and then comparing GraphQL services where applicable and so on and so forth.This is dominantly for academic research and working to be as unbiased and direct to findings as I can get.The Problem\nUsing MongoDB 6.0.5, 5.0.15, and 4.4.19, for some odd reason even when no data is being stored at all, literally just running the service, MongoDB is reading and writing the following:\n6.0.5 is going through 68kB a minute of Read/Write\n5.0.15 is going through 74kB a minute of Read/Write\n4.4.19 is going through 39kB a minute of Read/WriteThe impact of this:\n4.4.19 will overwrite ~20.5GB of SSD space per year per instance/service.\n5.0.15 will overwrite ~38.9GB of SSD space per year per instance/service\n6.0.5 will overwrite ~35.74GB of SSD space per year per instance/serviceThe impact to this, is that this is without even having any data, just an instance using the Community Server and having an admin account login. No sharding, no other configurations besides the following:\nStorage path, system log destination, the port, ad process management etc. Everything is just default.This isn’t seen in other vendors, is there a specific operation that causes this to occur? The significance of this is that this causes premature failure of SSDs that organically have a limited number of reads/writes. Once you start putting data on the services such as a JSON doc with typical Name, Address, Phone Number, etc. and store just the sample airBnB data, this can jump the reads/writes at rest to almost twice of what it’s doing.Then if you start a 3 sharded cluster, and combine each shard, just at rest, it multiplies further. In my research for root cause of this issue, I found a user post from 2021 describing this same issue with a similar finding: Martin_Beran who made the same discovery apparently in Jan of 2021.Is there any information of why it is doing this, or how to throttle these misc read/writes down to save on hardware?Is there any performance impacts known after doing this?Is this some kind of old bug?I don’t see this being a problem with cloud managed services like Atlas, but for on-premise performance and for hybrid performance, this brings and MTTF metric for hardware impact that is significant in comparison to other services.Another Large Question\nWhat exactly is MongoDB reading and writing when it’s not storing anything? In my attempts to find whatever it’s writing or reading, I literally can’t find anything at all. This is literally just running after installation and basic config with everything but sample data loaded. Once you add sample data the rates of reads/writes at rest exponentially increase for no known reason.",
"username": "Brock"
},
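Not an official answer, but one way to see where the idle I/O goes is to sample serverStatus() from mongosh; on an otherwise idle mongod, the usual background writers are the full-time diagnostic data capture (FTDC) files under dbPath/diagnostic.data and WiredTiger journal/checkpoint activity. The sketch below is an assumption-heavy illustration: the journal counter name is taken from the WiredTiger statistics section and may differ slightly between versions, while diagnosticDataCollectionEnabled is the documented FTDC switch.

```js
// Sketch for measuring idle write activity over one minute from mongosh.
const before = db.serverStatus().wiredTiger.log["log bytes written"];
sleep(60 * 1000); // mongosh helper: wait 60 seconds
const after = db.serverStatus().wiredTiger.log["log bytes written"];
print("WiredTiger journal bytes written in the last minute:", after - before);

// FTDC flushes diagnostic samples to disk periodically; check whether it is on:
printjson(db.adminCommand({ getParameter: 1, diagnosticDataCollectionEnabled: 1 }));

// It can be switched off at runtime purely to measure its contribution
// (not recommended to leave off, since it removes diagnostic history):
// db.adminCommand({ setParameter: 1, diagnosticDataCollectionEnabled: false });
```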
{
"code": "",
"text": "Also I was wondering, for cloud services such as Atlas, does the billing account for the misc run-time read/writes that MongoDB does by default at rest, without any data at all.",
"username": "Brock"
},
{
"code": "",
"text": "Services using MongoDB Tested:\nM1 MacBook Pro directly to the main system.\nDocker Containers\nKubernetes\nOpenStack\nContainerd\nOpenVC\nWindows Server 2016 without containers\nWindows 11 without containersIt still does the same thing regardless of what’s running it.",
"username": "Brock"
},
{
"code": "",
"text": "Another Large Question\nWhat exactly is MongoDB reading and writing when it’s not storing anything? In my attempts to find whatever it’s writing or reading, I literally can’t find anything at all. This is literally just running after installation and basic config with everything but sample data loaded. Once you add sample data the rates of reads/writes at rest exponentially increase for no known reason.I want to clarify, when I mean reads/writes at rest for no known reason, I mean I load the sample data, NO queries, or aggregations, nothing. Just data at rest, sitting in MongoDB, and the reads/writes spike. And I haven’t even connected a Driver, or even setup the networking between my Kubernetes or Docker containers etc. It’s just running the service and holding data at rest doing literally nothing but being “turned on.”It shouldn’t be performing any actions at all that I can see/find/read about. I never noticed this behavior before until someone else I’m friends with had noticed a CPU/RAM spike, and we both noticed MongoDB was doing something.EDIT:\nWe’ve also found that this does not scale, for a sharded cluster so far tested with Kubernetes we’ve found that it doesn’t multiply by number of instances accordingly, like 2 instances double, 3 instance triple, it actually exponentially increases. Two MongoDB instances almost triple, a three sharded cluster we’ve found it’s almost 5 times the number of reads/writes at rest, and when you add data at rest to a 3 instance sharded cluster it’s almost 7 times a single MongoDB Database for reads and writes. Again this is without any aggregations or queries, it’s just by running at rest just sitting there and “doing nothing.”We’re not sure what’s causing this behavior, but in further research this has been going on for a period of time, in versions at least as far back as 4.4",
"username": "Brock"
}
] | MongoDB Writing and Reading for no reason | 2023-03-22T18:13:06.118Z | MongoDB Writing and Reading for no reason | 908 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hey Guys,i’am really new on MongoDB, but I searched a lot. In the Forum in the Docs,…\nAs it looks like this is the MongoDB way:If I have an app with\nuser\npost\ncommets\nlikes\nfollower\nuploads\nreviews\nmessages\n…I should store my data like:\nreviewsposts:upload:aaaand so on.But I don’t find, how to handle how to handle a user-name change.\nYes, maybe I can store the _id from the User in the User data.But is the right way to do many of maybe really big loops to update the user name?\nWhat if (let me dream guys :D) if my project will reach 1mrd user with 100Mrd posts.\nHow long an username-change will run?To be realistic:\nI think it’s a real problem even for smaller projects. If I have a car and I will store every spare-parts within one document maybe one screw is in many documents, may as array. and now, the manufacture will change the name or length.is mongo a bad decision for project like that?\nwhy I don’t find any tutorial or documentation for that (maybe aim just stupid )thank you guys",
"username": "paD_peD"
},
{
"code": "",
"text": "is mongo a bad decision for project like that?Of course not. this design question is not specific to a database. Whatever you use, you face the same question.Your requirements matter. Do you have to always show “update to date” user name? (e.g. if you change user name on teamblind, your old posts still show old name).How frequent the user names can be changed? Any hard rule on that ? (e.g. at most once a month).And more…There are only two options going forward you either only reference to the user with an id or simply duplicate the user info every where. You just need to decide which one fits your case better, based on requirements.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hey Kobe,thanks for you reply.Like in the most “social” Projects you can change your name 1x in three month or something like that.\nSo normally it looks like to embed the name in all collections.but imagine that e.g. at twitter. Barack Obama change his name to Barack Obama is the best and on 133.000.000 relations you need to update the name on the “follower” collection.and the idea behind most projects is or should be going as big as you can.iam really unhappy with that, caus it looks like to me, that embedding is the way you go, but it doenst make sence if the data need to change over more than a few collections with a few data.(my english is not the best, so maybe it sounds more negative than it should )",
"username": "paD_peD"
},
{
"code": "",
"text": "There is no magic.Two optionsUpdate 133 million relation once and the day M. Obama change his profile name.Lookup M. Obama current profile name every single time someone reads one of his publication.Which use-case is the most frequent? Changing profile name or reading the name of a publication.Do you want to slow down the most frequent use-case so that your rare use-case is faster?Do you want to slow down the rare use-case so that your most frequent use-case is a lot faster?Slowing down the most frequent use-case slows down the whole system most of the time.\nSlowing down the rare use-case slows down the whole system on the rare occurrences of the use-case.the idea behind most projects is or should be going as big as you canAgree but early optimization is useless as it delays your project.Make it work.\nMake it work correctly.\nThen make it work correctly and fast. Continuous improvement is better than delayed perfection. - Mark Twaine",
"username": "steevej"
},
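As a small mongosh sketch of the two options above; the collection names posts/users, the field layout, and the sample user name are made up purely for illustration.

```js
// Option 1: duplicate the name everywhere and pay once, at rename time.
const user = db.users.findOne({ name: "Barack Obama" }); // the user being renamed
const newName = "Barack Obama is the best";

db.users.updateOne({ _id: user._id }, { $set: { name: newName } });
db.posts.updateMany(
  { "author._id": user._id },          // every post embedding this author
  { $set: { "author.name": newName } } // one pass, however many documents match
);

// Option 2 (alternative schema): store only the author's _id and pay on every
// read with a $lookup.
db.posts.aggregate([
  { $match: { authorId: user._id } },
  { $lookup: {
      from: "users",
      localField: "authorId",
      foreignField: "_id",
      as: "author"
  } },
  { $unwind: "$author" }
]);
```

Which one is cheaper depends entirely on how often renames happen versus how often posts are read, which is exactly the trade-off described above.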
{
"code": "",
"text": "Thank you.Its so hard to think different if you do the same thing since 20 years But I will give it a try ",
"username": "paD_peD"
}
] | Change Username in Many collections | 2023-03-21T20:54:12.365Z | Change Username in Many collections | 696 |
null | [
"compass",
"transactions",
"php",
"field-encryption",
"storage"
] | [
{
"code": "{ bits: 64, resident: 1251, virtual: 3805, supported: true }------------------------------------------------\nMALLOC: 1275914864 ( 1216.8 MiB) Bytes in use by application\nMALLOC: + 24219648 ( 23.1 MiB) Bytes in page heap freelist\nMALLOC: + 4544520 ( 4.3 MiB) Bytes in central cache freelist\nMALLOC: + 3388928 ( 3.2 MiB) Bytes in transfer cache freelist\nMALLOC: + 2627464 ( 2.5 MiB) Bytes in thread cache freelists\nMALLOC: + 6160384 ( 5.9 MiB) Bytes in malloc metadata\nMALLOC: ------------\nMALLOC: = 1316855808 ( 1255.9 MiB) Actual memory used (physical + swap)\nMALLOC: + 16384 ( 0.0 MiB) Bytes released to OS (aka unmapped)\nMALLOC: ------------\nMALLOC: = 1316872192 ( 1255.9 MiB) Virtual address space used\nMALLOC:\nMALLOC: 24023 Spans in use\nMALLOC: 43 Thread heaps in use\nMALLOC: 4096 Tcmalloc page size\n------------------------------------------------\n{\"t\":{\"$date\":\"2023-03-13T22:23:03.213+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn20749\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52310\",\"uuid\":\"2ab69972-d492-4b50-975f-b76100090321\",\"connectionId\":20749,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:08.710+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52492\",\"uuid\":\"7e0e36af-f49c-4159-ace6-3a23d15e102b\",\"connectionId\":20750,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:08.711+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn20750\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52492\",\"client\":\"conn20750\",\"doc\":{\"application\":{\"name\":\"MongoDB Compass\"},\"driver\":{\"name\":\"mongoc / ext-mongodb:PHP / PHPLIB \",\"version\":\"1.20.1 / 1.12.1 / 1.11.0 \"},\"os\":{\"type\":\"Linux\",\"name\":\"CentOS Linux\",\"version\":\"7\",\"architecture\":\"x86_64\"},\"platform\":\"PHP 7.4.28 cfg=0x035156a8e9 posix=200809 CC=GCC 4.8.5 20150623 (Red Hat 4.8.5-44) CFLAGS=\\\"\\\" LDFLAGS=\\\"\\\"\"}}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:08.728+02:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn20750\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"grn7v80rxcnsb\",\"authenticationDatabase\":\"brns85t23vx\",\"remote\":\"127.0.0.1:52492\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.012+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23377, \"ctx\":\"SignalHandler\",\"msg\":\"Received signal\",\"attr\":{\"signal\":15,\"error\":\"Terminated\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.012+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23378, \"ctx\":\"SignalHandler\",\"msg\":\"Signal was sent by kill(2)\",\"attr\":{\"pid\":1,\"uid\":0}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.012+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23381, \"ctx\":\"SignalHandler\",\"msg\":\"will terminate after current cmd ends\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.012+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"SignalHandler\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.012+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"SignalHandler\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.012+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FLE Crud 
thread pool\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.012+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.012+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.012+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784903, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the LogicalSessionCache\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"SignalHandler\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23017, \"ctx\":\"listener\",\"msg\":\"removing socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784908, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"SignalHandler\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"SignalHandler\",\"msg\":\"Killing all operations for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"SignalHandler\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all open transactions\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the 
ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20609, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684100, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down TTL collection monitor thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684101, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down TTL collection monitor thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784930, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the storage engine\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.013+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.014+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.014+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.014+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.014+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"SignalHandler\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.016+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22261, \"ctx\":\"SignalHandler\",\"msg\":\"Timestamp monitor shutting down\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.016+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.016+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting 
down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.016+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.016+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"SignalHandler\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.026+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":10}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.026+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"SignalHandler\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.026+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"SignalHandler\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.026+02:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.029+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"SignalHandler\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.029+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\n\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.162+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.163+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.166+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.166+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.211+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.211+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.211+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.211+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.212+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":19765,\"port\":27017,\"dbPath\":\"/var/lib/mongo\",\"architecture\":\"64-bit\",\"host\":\"server1.roznamaserver.com\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.212+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.5\",\"gitVersion\":\"c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d\",\"openSSLVersion\":\"OpenSSL 1.0.1e-fips 11 Feb 2013\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"rhel70\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.212+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"CentOS Linux release 7.9.2009 (Core)\",\"version\":\"Kernel 3.10.0-1160.42.2.el7.x86_64\"}}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.212+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"0.0.0.0\",\"port\":27017},\"processManagement\":{\"fork\":true,\"pidFilePath\":\"/var/run/mongodb/mongod.pid\",\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"security\":{\"authorization\":\"enabled\"},\"storage\":{\"dbPath\":\"/var/lib/mongo\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.213+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongo\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.213+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening 
WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7344M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.776+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":563}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.776+02:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.791+02:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22178, \"ctx\":\"initandlisten\",\"msg\":\"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.791+02:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22181, \"ctx\":\"initandlisten\",\"msg\":\"/sys/kernel/mm/transparent_hugepage/defrag is 'always'. We suggest setting it to 'never'\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.791+02:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":5123300, \"ctx\":\"initandlisten\",\"msg\":\"vm.max_map_count is too low\",\"attr\":{\"currentValue\":65530,\"recommendedMinimum\":102400,\"maxConns\":51200},\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.794+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.794+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.794+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.822+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.822+02:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/var/lib/mongo/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.825+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration 
state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.825+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.826+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.826+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"0.0.0.0\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.826+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.830+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23377, \"ctx\":\"SignalHandler\",\"msg\":\"Received signal\",\"attr\":{\"signal\":15,\"error\":\"Terminated\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.830+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23378, \"ctx\":\"SignalHandler\",\"msg\":\"Signal was sent by kill(2)\",\"attr\":{\"pid\":1,\"uid\":0}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.830+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23381, \"ctx\":\"SignalHandler\",\"msg\":\"will terminate after current cmd ends\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.830+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"SignalHandler\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.830+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"SignalHandler\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.830+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.830+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.830+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784903, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the LogicalSessionCache\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"SignalHandler\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23017, \"ctx\":\"listener\",\"msg\":\"removing socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", 
\"id\":4784908, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"SignalHandler\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"SignalHandler\",\"msg\":\"Killing all operations for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"SignalHandler\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all open transactions\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20609, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684100, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down TTL collection monitor thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":3684101, \"ctx\":\"SignalHandler\",\"msg\":\"Finished 
shutting down TTL collection monitor thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784930, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the storage engine\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.831+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.832+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"SignalHandler\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.832+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22261, \"ctx\":\"SignalHandler\",\"msg\":\"Timestamp monitor shutting down\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.832+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.832+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.832+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.832+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"SignalHandler\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.836+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":4}}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.836+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"SignalHandler\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.836+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"SignalHandler\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.836+02:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.836+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"SignalHandler\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-03-13T22:24:19.836+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\nmongod",
"text": "Hello Everyone, Hope all of you are doing well,I have a stranger issue in my mongodb instance, I have a server with 16GB ram and a database with 900MB of data with 255MB of indexes size.And here are some info from the data base about the ram usage:db.serverStatus().mem:\n{ bits: 64, resident: 1251, virtual: 3805, supported: true }db.serverStatus().tcmalloc.tcmalloc.formattedStringThe mongodb is always crush and after repair the database is not working from the service but it working when i write the following cmdmongod --dbPath /dbpathSo I think the mongo service is crushed too and to fix it I should re-install the mongodb from the beginning.Here are the log before and after its curched:I think there are an issue realted to the ram usage and the wiredTiger engine.So I think to reduce the cache size from 7 GB to 3 GB but What will happen if the cache usage become 3GB ? will mongod crash or it will try to free some cache ?Also can someone check the log and tell me why the database curshed? as i cannot figuer the problem.Thank you for your help.",
"username": "Mina_Ezeet"
},
{
"code": "\"ctx\":\"SignalHandler\",\"msg\":\"Received signal\",\"attr\":{\"signal\":15,\"error\":\"Terminated\"}}Signal 15SIGTERMmongod",
"text": "Hello @Mina_Ezeet ,Welcome to The MongoDB Community Forums! \"ctx\":\"SignalHandler\",\"msg\":\"Received signal\",\"attr\":{\"signal\":15,\"error\":\"Terminated\"}}Signal 15 also known as SIGTERM is sent to terminate a program, and is relatively normal behaviour. Please go through below thread as this is discussed in detail there.I think there are an issue realted to the ram usage and the wiredTiger engine.So I think to reduce the cache size from 7 GB to 3 GB but What will happen if the cache usage become 3GB ? will mongod crash or it will try to free some cache ?It’s important to note that changing the cache size may not necessarily solve the underlying issue. (which is something is sending MongoDB a Terminate signal). You may want to investigate further to determine why the service cannot stay up and address that root cause directly.Additionally, you may want to consider tuning other MongoDB configuration options to optimize performance and memory usage based on your specific workload and available resources.Apart from disk space issue, you can also take a look at your hardware resources to make sure your server is able to handle the load and is not getting overwhelmed. You can also check the MongoDB logs to see if there are any error messages or warnings that can provide more information about the cause of the crashes.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "cat /etc/mongod.conf\nls -l /dbpath",
"text": "I want to add a few things.When you start mongod withmongod --dbPath /dbpathyou are most likely starting it using a different set of files.When you writethe mongo service is crushed tooI assume that you use systemctl to start it so it would be nice to have the output of systemctl status mongod or systemctl status mongodb. The content of the service definition file is also useful information.The output of",
"username": "steevej"
},
{
"code": "",
"text": "Thank you for your response, the issue was occur due to yam auto update in CENTOS 7.x but everything is working fine now after upgrade mongo to 6.0.5Thank you for your reply.",
"username": "Mina_Ezeet"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 6.0 crushes always - CENTOS | 2023-03-15T19:55:10.481Z | MongoDB 6.0 crushes always - CENTOS | 1,703 |
null | [
"replication",
"java",
"crud",
"sharding",
"time-series"
] | [
{
"code": "",
"text": "My team is investigating using time-series collections in Mongo 6. From our experiments, inserting the same dataset into a time-series collection with the same insertion code, on the same hardware (3x replica set, via a mongos with a single shard, self-hosted), with a fresh collection each time, as compared to a standard, unindexed collection, is slower by a factor of about 60. That’s not 60 percent more time, that’s 60 times as much time (an hour vs a minute). Obviously, we were expecting some loss of write speed in exchange for the promised improvements in query speed and storage size, but this is egregious, which leads us to believe we are doing something profoundly wrong.The data in question is sensitive, so we cannot provide it, or our code, but we can share the following:Any advice as to what we can investigate or change is greatly appreciated.",
"username": "Andrew_Melnick"
},
{
"code": "metafieldgranularity",
"text": "Hi @Andrew_Melnick,Welcome to the MongoDB Community forums The data in question is sensitive, so we cannot provide it, or our code, but we can share the following:Also, share the following information to better understand the problem:Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "metafieldgranularity",
"text": "As mentioned, each document consists of a BSON native timestamp field, an integer metadata field, and anywhere between 10 and a few hundred numeric measurement fields, depending on the document (a mix of ints, doubles, and strings), in a flat structure. The same set of fields are present each time for a given metadata field value. The granularity is set as “seconds”, as each time-adjacent document with the same metafield are generally within the same second.A straw-man version of one of the documents might look like:\n{ “timestamp”: ISODate(“…”), “metadata”: 1, “a”: 0, “b”: “1”, “c”: 2.5, … }The total data size of the test runs is around 250MiB of (highly compressed) raw data, I don’t have a good estimate of the on-disk size once it hits mongo. The time taken for both collection types seem to scale linearly up from smaller test data sets. We do not have a good _id field, so we allow the driver to generate it client-side.No indexes were created for either. The timeseries collection has the clustered index that gets silently created by virtue of being a timeseries collection, and both have the implicit unique index on the _id.We are running Mongo 6.0.1 on Debian 11, and using mongo-driver-sync 4.8.2We have not. The input data is in a proprietary format for which the conversion code is written in java, so using a different language would introduce additional complexity, and aside from the C++ driver, we didn’t see any supported languages that we thought would produce substantial enough performance gains to be worth investigating. We have tried to follow all of the best practices, such as using a single MongoClient for the application, using multiple threads to perform the pre-insert conversion, batching documents and using insertMany, disabling ordered writes, etc. As mentioned, almost the entirety of the time is spent in the insertMany calls.While we obviously plan to utilize more powerful hardware in production, possibly even dedicated (virtual) nodes, I’d like to stress the fact that our issue is not “this particular mongo cluster is too slow” but “timeseries collections are so much slower that they break our plans and budget”. We obviously plan to add more shards as volume grows, but we planned to do so at a given rate, and going from 1 to 60 shards right now to try and claw back that 60x slowdown is not in our budget.Thank you for reaching out, let me know if there are any other details that would be of use, and I can try to get them.",
"username": "Andrew_Melnick"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"6417ffc4918a044f1b529663\"\n },\n \"timestamp\": {\n \"$date\": \"2023-03-20T12:02:09.265Z\"\n },\n \"meta\": 124,\n \"field_0\": \"irvwmcvctzyeuzuxicg\",\n \"field_1\": \"hljgf\",\n \"field_2\": \"frkkzeytdwhdvfs\",\n \"field_3\": \"ndzdkxv\",\n ... <120 more fields in some documents, fewer in some other documents, randomly> ...\n \"field_123\": \"tkxwugdqsfnlgmmzpctn\"\n}\n",
"text": "Hey @Andrew_Melnick,Welcome to the MongoDB Community forums A straw-man version of one of the documents might look like this:\n{ “timestamp”: ISODate(“…”), “metadata”: 1, “a”: 0, “b”: “1”, “c”: 2.5, … }I’ve generated some random sample data from a script. Could you please confirm if the format of the data below matches what your data?While inserting the 1 million documents in my environment, it takes twice the time which is far more than 60 times you are experiencing.anywhere between 10 and a few hundred numeric measurement fields, depending on the document (a mix of ints, doubles, and strings), in a flat structure.Can you confirm specifically here if, it is 100 or more than that?In general, time series collections work best if the schema is consistent, so it can take advantage of the columnar storage pattern it was created in mind with. An inconsistent schema runs counter to an efficient columnar pattern and may result in suboptimal storage/performance of time series collections.For more information refer to the Best Practices for Time Series CollectionsThe total data size of the test runs is around 250MiB of (highly compressed) raw data, I don’t have a good estimate of the on-disk size once it hits mongoWhat is the approx/actual number of documents you are inserting? Also, share the collstats of your regular collectionThat’s not 60 percent more time, that’s 60 times as much time (an hour vs a minute).Is it 60 times when you are doing the workflow simulation or while importing the data?Further, the bottleneck could be anywhere in the system from the insertion process to getting acknowledged to the server.However, based on your shared information, it appears that TimeSeries is probably not the right solution for your use case. Having said that likely regular collection is more suitable for the use case.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "We believe we have found the solution. The issue stemmed from a disconnect from the description of the data we were receiving and what was actually in the data.A consequence of this is that the metafield was not being produced correctly, resulting in 2 unique values, as opposed to the ~1000 that were expected. This resulted in all of the data all getting funneled into the same bucket. As you mentioned, we were striving for a consistent schema (each unique metafield value uniquely determines a set of fields in this data, with all fields present in every document for that metafield value), but the incorrect metafield values disrupted that.Once we updated our pre-processing code to work with the data as it is, not as it was described, to generate the correct metafields, the performance returned to on par with the unindexed collections.",
"username": "Andrew_Melnick"
},
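A minimal mongosh sketch of the pattern this thread converges on; the collection name is illustrative, and the timeField/metaField/granularity values follow the straw-man document shown earlier. The key point is that the metaField should carry the ~1000 distinct source identifiers so documents spread across bucket series instead of piling into one.

```js
// Create the time-series collection up front with an explicit metaField and granularity.
db.createCollection("measurements", {
  timeseries: {
    timeField: "timestamp",   // BSON date of each measurement
    metaField: "metadata",    // per-source identifier; needs realistic cardinality
    granularity: "seconds"    // adjacent documents for a source fall within the same second
  }
});

// Batched, unordered insert, mirroring the insertMany approach described above.
db.measurements.insertMany(
  [
    { timestamp: new Date(), metadata: 1, a: 0, b: "1", c: 2.5 },
    { timestamp: new Date(), metadata: 2, a: 3, b: "4", c: 6.5 }
  ],
  { ordered: false }
);
```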
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Time-series inserts unreasonably slow | 2023-03-14T01:23:15.029Z | Time-series inserts unreasonably slow | 1,527 |
null | [
"node-js",
"mongoose-odm",
"connecting"
] | [
{
"code": "MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017http://127.0.0.1:5000/v1/auth/register{\n \"name\": \"john\",\n \"password\": \"secret\",\n \"email\": \"[email protected]\"\n}\nimport mongoose from 'mongoose';\nimport validator from 'validator';\n \nconst UserSchema = new mongoose.Schema({\n role: {\n type: String,\n trim: true,\n maxlength: 20,\n default: 'admin',\n },\n name: {\n type: String, \n required: [true, 'Please provide name'], \n minlength: 3, \n maxlength: 20,\n trim: true,\n },\n email: {\n type: String,\n required: [true, 'Please provide email'],\n validate: {\n validator: validator.isEmail,\n message: 'Please provide a valid email',\n },\n unique: true,\n },\n password: {\n type: String,\n required: [true, 'Please provide password'],\n minlength: 6,\n select: false,\n },\n lastName: {\n type: String,\n trim: true,\n maxlength: 20,\n default: 'lastName',\n },\n location: {\n type: String,\n trim: true,\n maxlength: 20,\n default: 'Tacloban City',\n },\n});\n \nexport default mongoose.model('User', UserSchema);\nimport User from '../models/User.js';\n \nconst register = async (req, res) => {\n try {\n const user = await User.create(req.body);\n res.status(201).json({user});\n } catch (error) {\n res.status(500).json({msg: 'there was an error'});\n }\n};\nconst login = async (req, res) => {\n res.send('login user');\n};\nconst updateUser = async (req, res) => {\n res.send('updateUser');\n};\n \nexport{register, login, updateUser};\n",
"text": "Hello,How can I fix the MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017 when I added the JSON for http://127.0.0.1:5000/v1/auth/register in postman?This is the JSON:Here’s my User model:Then my authController.js",
"username": "Jumar_Juaton"
},
{
"code": "",
"text": "Sorry if you already did this, but in case you didn’t: can you connect to the database locally, using the mongosh? Also, Is it a replica set ? because mongoose also requires a few options.If you can include how you’re connecting using mongoose odm (the mongoose.connect statement)",
"username": "santimir"
},
{
"code": "import mongoose from 'mongoose';\n\nconst connectDB = (url) => {\n return mongoose.connect(url);\n};\n\nexport default connectDB;\nPORT=5000\nMONGO_URL=mongodb://127.0.0.1:27017/db_ras\n",
"text": "Here is my mongoose script:Then the .env",
"username": "Jumar_Juaton"
},
{
"code": ".connect()ps -ef | grep [Mm]ongodss -l | grep [Mm]ongodmongo",
"text": "@Jumar_JuatonThat looks fine to me as long as you remember .connect() method returns a promise, and errors should be caught.I’m no specialist, but I’d do the following checks:May also be good to know which platform are you running the server.",
"username": "santimir"
},
{
"code": "npm run startmongodnpm run startmongodMongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017npm run start",
"text": "The process is running, but I need to run it using npm run start on root dir of the project. I’ve tried running mongod and npm run start on root dir of the project, or even just mongod but it does return 500 error on postman. It only returns the MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017 error if I run npm run start on root dir.Here’s the complete repo of my code: https://github.com/jumarjuaton/repo-test",
"username": "Jumar_Juaton"
},
{
"code": "echo '{\n\"name\":\"santi\",\n\"password\":\"123Santi\",\n\"email\":\"[email protected]\"}' | POST http://localhost:5000/api/v1/auth/register \\ \n-c 'application/json' \\\n-SE -t 3s\n201 Created\n\n{\n\"user\":{\n\"role\":\"admin\",\n\"name\":\"santi\",\n\"email\":\"[email protected]\",\n\"password\":\"123Santi\",\n\"lastName\":\"lastName\",\n\"location\":\"Tacloban City\",\n\"_id\":\"61ef18d9f053c87d81cd9937\",\"__v\":0}}\nconsole.log(error)const port = process.env.PORT || 5000;\n\n/*\napp.listen(port, () => {\n console.log(`Server is listening on port ${port}...`)\n});\n*/\n\nconst start = async () => {\n await connectDB(process.env.MONGO_URL);\n app.listen(port, () => {\n console.log(`Server is listening on port ${port}...`);\n });\n};\n\nstart().catch(e=>console.log(e))\n",
"text": "@Jumar_JuatonIt’d be better if you share the output to see if the process is running. I send the data as JSON because afaik you’re not processing urlencoded bodies.This is what I tested and the code runs. This is the request I run using linux lwp-request (alias POST):And gotI did minor modifications to the server, because of:It’s as much as I can say with the data you provide. The is the server.js last lines are now:",
"username": "santimir"
},
{
"code": "",
"text": "@Jumar_Juaton did you find a solution to this error? I am facing the same error when testing my restapi on Postman. I can send a simple POST request to the serve.",
"username": "Enock_Omondi"
},
{
"code": "",
"text": "Hi Enock_Omond!,\nGood Evening,I am facing same issue around 2 hours, I did mostly thing but result is nothing, same error coming when I am connecting compass, I realize my mongoDB has been crashed. after mongodb and compass reinstall then issue is solved. this work for me, if you are working on locally so you can do this and if you work cloud side so please contact with your cloud admin.I hope you understand.Thank you",
"username": "Mohit_Mishra"
}
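For readers landing on this thread, a hedged sketch of a Mongoose connection with explicit fail-fast options; the URL and database name come from the earlier posts, while the option values are illustrative. ECONNREFUSED on 127.0.0.1:27017 usually means nothing is accepting connections on that address, so confirming mongod is running and listening there remains the first check.

```js
import mongoose from 'mongoose';

const connectDB = async (url) => {
  try {
    await mongoose.connect(url, {
      serverSelectionTimeoutMS: 5000, // fail fast instead of hanging for the default 30s
      family: 4,                      // prefer IPv4; avoids localhost resolving to ::1
    });
    console.log('MongoDB connected');
  } catch (err) {
    // ECONNREFUSED here means no process accepted the TCP connection on host:port
    console.error('MongoDB connection failed:', err.message);
    process.exit(1);
  }
};

connectDB(process.env.MONGO_URL || 'mongodb://127.0.0.1:27017/db_ras');
```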
] | How can I fix error MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017? | 2022-01-23T15:37:10.852Z | How can I fix error MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017? | 40,082 |
[] | [
{
"code": "",
"text": "Hello there,I have two collections, let’s say facility and patient. I am trying to create a simple chart, where it shows number of patients per facility(name). Patient’s collection has facilityId and Facility collection has facility name.\nI tried to use lookup field to join two collections, but when I am trying to connect id with facilityId, it isn’t populating anything. Just lookup field is being added.Any help would be appreciated.",
"username": "sunita_kodali"
},
{
"code": "",
"text": "Hi @sunita_kodali -It’s hard to say what’s going wrong without seeing some example documents from both collections. Your basic approach looks sound, but you’re probably doing something small wrong - can you please send more info?Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Thank you for you response.\nExample document:Facility Table:_id:ObjectId(‘1234abc’)\norganizationId:“12ab”\nfacilityName:“xyz”\naddress:\"\"\ncity:\"\"\nstate:\"\"\nzip:\"\"Patient Table:_id:ObjectId(‘xyz123’)\norganizationId:“12ab”\nfacilityId:“1234abc”\nfirstName:“First”\nlastName:“lastTest”Trying to show on my charts, number of patients per facility name. I was able to do it per facilityId, but not with facility name.\nGoing over some of your responses, I saw ObjectId cannot be matched with string field. Do you think is that the issue I am going over too?\nI also tried to deploy with advanced aggregation pipeline too. Still I couldn’t get what I need.Thank you,\nSunita",
"username": "sunita_kodali"
},
{
"code": "",
"text": "Thanks - yes I think you figured out the problem. If you attempt to use a lookup where one field is a string and another is an Object ID, you won’t get any data. You can use Charts to convert the field in the main collection to the type used in the remote collection, and it should work - although it will be cleaner and faster if you could update your data to use consistent types.",
"username": "tomhollander"
},
{
"code": "",
"text": "Thanks for your time and support Tom.Sunita",
"username": "sunita_kodali"
},
{
"code": "",
"text": "Morning Tom,Hope your day is going well. Is there way I can convert all the existing documents field from string to objectId at once? Instead of doing one document at a time.Thanks,\nSunita",
"username": "sunita_kodali"
},
{
"code": "",
"text": "I figured it out with “updateMany” option.Thank you.",
"username": "sunita_kodali"
},
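A hedged sketch of the updateMany approach mentioned above, using an update with an aggregation pipeline (MongoDB 4.2+). The collection and field names follow the earlier example documents, and it assumes the stored strings are valid 24-character hex ObjectId strings; run it against a backup first.

```js
// Convert the string facilityId on every patient document to an ObjectId
// so the Charts lookup field can match Facility._id.
db.patient.updateMany(
  { facilityId: { $type: "string" } },                          // only touch documents still storing a string
  [ { $set: { facilityId: { $toObjectId: "$facilityId" } } } ]  // pipeline-style update
);
```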
{
"code": "",
"text": "Hello Tom,Is there a specific syntax for using lookup field in inject user specific filter.\nreturn {userName:context.token.username} here userName is not look up field.\nreturn {organizationId_lookup_organization_userName:context.token.username}, this syntax isn’t working for me.\nAny ideas on how to fix this issue?Thank you,\nSunita",
"username": "sunita_kodali"
},
{
"code": "",
"text": "Found the solution. Thank you.",
"username": "sunita_kodali"
},
{
"code": "",
"text": "Found the solutionPlease share it so that others know. That is the best way to keep this forum useful.",
"username": "steevej"
},
{
"code": "",
"text": "return {“organizationId_lookup_organization.userName”:context.token.username}lookup field should be in quotes.",
"username": "sunita_kodali"
}
] | MongoDB Charts - Lookup Field | 2023-01-13T19:02:51.199Z | MongoDB Charts - Lookup Field | 1,709 |
|
null | [
"monitoring"
] | [
{
"code": "Examined:Returned Ratio",
"text": "Hello We keep receiving Query Targeting: Scanned Objects/Returned has gone above 1000 alerts, but we have struggled to find out which query is actually triggering this alert.The alert is set with a threshold of 1000, and it is sent if the condition lasts at least 0 minutes, and it is resent after 60 minutes.When looking at the profiler and selecting Examined:Returned Ratio, we have found a few queries that we know of that have a ratio greater than 1k. They are all indexed, but maybe we need to improve them since during the execution of many queries, they appear but no alert is triggered.The main issue is that when we receive the alert - which is mainly during the night or weekend when there is low traffic for us - usually nothing appears in the profile, so we actually don’t know which query is causing the alert. This is likely because the query is not slow enough to appear.As we don’t have any jobs or specific tasks running during evening hours or weekends, we assume that the alert threshold is based on an average for those specific time windows. Is that correct? Also, what is the best way to reduce false positives? Would changing the time windows be a solution?This has been happening since mid-February without significant changes on our side (as far as we can see), so we are wondering if there have been any changes internally.",
"username": "Axel_Manzano"
},
{
"code": "",
"text": "Hi @Axel_ManzanoWelcome to the MongoDB community and thank you for your question! The Query Targeting alert today is based off the Query Targeting metric on your monitoring charts. If you go back in history to your monitoring charts when you saw a false positive Query Targeting alert trigger, are you able to see a spike in the Query Targeting metric?\nIt could definitely be the case that the Query Profiler missed an operation because the query had a high query targeting ratio but did not exceed a certain slowms execution time filter. This is a gap that we are currently working to address. In the near future, we will be updating Atlas Query Profiler to profile operations based on their slowms execution time as well as their query targeting ratio. This should help provide more visibility into those inefficient queries. However, in the meantime, would you mind checking your Monitoring charts to see if there is actually a spike in Query Targeting?Thanks,\nFrank",
"username": "Frank_Sun"
},
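Until that profiler improvement lands, one way to catch high-ratio operations regardless of execution time is the database profiler itself. A hedged mongosh sketch, assuming a deployment where the profiler can be enabled and populates system.profile (this may be restricted on some Atlas tiers); the 1000 threshold mirrors the alert.

```js
// With profiling enabled, e.g. db.setProfilingLevel(1, { slowms: 0 }),
// list recent operations whose examined-to-returned ratio exceeds 1000.
db.system.profile.find({
  nreturned: { $gt: 0 },
  $expr: { $gt: [ { $divide: [ "$docsExamined", "$nreturned" ] }, 1000 ] }
}).sort({ ts: -1 }).limit(20);
```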
{
"code": "",
"text": "Hi @Frank_Sun, thank you for your reply.It could definitely be the case that the Query Profiler missed an operation because the query had a high query targeting ratio but did not exceed a certain slowms execution time filter.It seems that there was a spike in the monitoring chart last night, but nothing showed up in the profiler.However, when there was a lot more traffic on our site last hour, the profiler showed a lot of queries with the same Examined:Returned Ratio issue (mostly the same query, but one that we know of and can’t do much about at the moment) but we didn’t receive any alert, so my guess is that it’s based on the average for a certain period.\nimage1952×522 30.6 KB\nIn the near future, we will be updating Atlas Query Profiler to profile operations based on their slowms execution time as well as their query targeting ratio.I think this improvement would definitely help. Maybe another improvement could be the same alert, but for non-indexed queries only.So, to come back to the alert, is it safe to ignore the alert if the operation is not actually slow ? It’s likely to be the same query as above (or maybe we can just increase the threshold a bit higher?).Thanks again,\nAxel",
"username": "Axel_Manzano"
},
{
"code": "",
"text": "Hi @Axel_Manzano,The alert should have triggered, but if it did trigger from the past hour, it won’t trigger again until the next hour. Just curious, could it be possible that the alert didn’t trigger because it had already triggered the last hour?We do also have a separate longer-term project to have the alert trigger based on Query Targeting per query shape. I think this will also help as we’ll be able to include the offending query shape in the alert details and possibly exclude certain query shapes from being alerted on.A high query targeting value is typically indicative of a poorly optimized query and could mean that there might be another or other indexes that would better serve the query. If you navigate to the Performance Advisor, do you see any index recommendations related to that query shape? I might recommend seeing if there are other indexes that might be beneficial (while weighing the potential write performance costs) to see if we can make that query more efficient.Thanks,\nFrank",
"username": "Frank_Sun"
},
{
"code": "",
"text": "Hi @Frank_SunThe alert should have triggered, but if it did trigger from the past hour, it won’t trigger again until the next hour. Just curious, could it be possible that the alert didn’t trigger because it had already triggered the last hour?It doesn’t seems the behavior we see, for example during the day we can see a lot of queries with high Examined:Returned Ratio in the profiler but we don’t receive a single alert, but during the night or weekend there is also one or two alerts If you navigate to the Performance Advisor, do you see any index recommendations related to that query shape? I might recommend seeing if there are other indexes that might be beneficial (while weighing the potential write performance costs) to see if we can make that query more efficient.There is a couple so we will try to see if we can improve on that but the query and index shown (and the one we see most frequently on profiler) has an avg of 99ms according (on a large users collection) so it’s not critical yet, we are just really curious about the alert and the query which trigger it - we we can’t see.We do also have a separate longer-term project to have the alert trigger based on Query Targeting per query shape. I think this will also help as we’ll be able to include the offending query shape in the alert details and possibly exclude certain query shapes from being alerted on.This will be really interesting ",
"username": "Axel_Manzano"
}
] | Query Targeting: Scanned Objects/Returned alerts change of behaviour | 2023-03-19T20:42:22.476Z | Query Targeting: Scanned Objects/Returned alerts change of behaviour | 1,050 |
null | [
"queries",
"dot-net",
"replication",
"crud",
"transactions"
] | [
{
"code": "private static void UpdateOneTest(IMongoDatabase database, string filepath)\n {\n var bucket = new GridFSBucket(database, new GridFSBucketOptions { BucketName = \"videos\" });\n var collection = database.GetCollection<BsonDocument>(\"videos.files\");\n\n var id1 = new ObjectId(\"507f1f77bcf86cd799439181\");\n var id2 = new ObjectId(\"507f191e810c19729de8638a\");\n\n FileStream fs = File.OpenRead(filepath);\n using (var session = database.Client.StartSession())\n {\n session.StartTransaction();\n\n //******* Upload file1 and update filename in transaction ********\n bucket.UploadFromStream(id1, $\"tmp_file1\", fs, new GridFSUploadOptions());\n \n var filter1 = Builders<BsonDocument>.Filter.Eq(\"_id\", id1);\n var update1 = Builders<BsonDocument>.Update.Set(\"filename\", \"file1\");\n var res1 = collection.UpdateOne(session, filter1, update1);\n \n Debug.Assert(res1.ModifiedCount == 1, \"filename of file1 not updated!\"); // ok\n\n fs.Seek(0, SeekOrigin.Begin);\n\n //******* Upload file2 and update filename in transaction ********\n bucket.UploadFromStream(id2, $\"tmp_file2\", fs, new GridFSUploadOptions());\n\n var filter2 = Builders<BsonDocument>.Filter.Eq(\"_id\", id2);\n \n var doc = collection.Find(filter2).FirstOrDefault();\n Debug.Assert(doc != null, \"file not exists\"); // OK: the file exists\n\n var update2 = Builders<BsonDocument>.Update.Set(\"filename\", \"file2\"); \n var res2 = collection.UpdateOne(session, filter2, update2);\n \n Debug.Assert(res2.ModifiedCount == 1, \"filename of file2 not updated!\"); // ----- KO ---- and MatchedCount is 0\n\n session.CommitTransaction();\n }\n }\n",
"text": "Hi,\nI’m uploading many files in GridFS with temporary filenames and then I’m updating the filenames in a transaction. At the end of the transaction if there aren’t errors I commit the transaction so I expect that all filenames are updated.\nBut this not happens: only the filename of the first file is updated! why?I know that GridFS not support the transactions but I suppose that I can update, with a transaction, the metadata in the .files collection.This is the code to reproduce the issue:Only If I move the uploalds before start the transaction the filenames are all updated.Same issue if I try to update a metadata instead of filename.I’m using MongoDB Community Edition version 6 with Replica Set enabled and with only the primary node.Thanks, Nunzio",
"username": "Nunzio_Carissimo"
},
{
"code": "",
"text": "having same problem. Does gridfs us the same functions as mongo?",
"username": "Justin_Williams"
}
] | Gridfs & Transaction: updateOne not work properly | 2022-10-19T14:55:21.432Z | Gridfs & Transaction: updateOne not work properly | 1,567 |
[
"connector-for-bi"
] | [
{
"code": "",
"text": "I am trying to connect Power BI to Mongo DB AtlasI have installed the MongoDB ODBC driver and BI Connector but still am getting error while connectingI tried the following blogs for instructions but they are no helpEarlier this month, we released the new ODBC driver for the MongoDB Connector for Business Intelligence (BI Connector). In this post, we’ll walk through installation and setup of an ODBC connection on Windows 10 running the 32bit version of Excel.\nI am getting the following errors, please suggest how to resolve this\nimage669×510 73.5 KB\n",
"username": "Manish_Tripathi"
},
{
"code": "",
"text": "\nimage940×111 21.9 KB\n\nThis is the message i am getting in the BI connector",
"username": "Manish_Tripathi"
},
{
"code": "",
"text": "Hi Manish - Based on the docs you shared, I am assuming this is Atlas BI Connector that you are using? The error seems to indicate that maybe the ODBC Driver or DSN isn’t setup properly (possibly).\nDid you download the latest ODBC Driver, v1.4.3?\nThen in the Windows your DSN configuration should look similar to this:\n\nScreenshot 2023-03-22 at 8.24.19 AM682×645 148 KB\nAlso, did you * Download and install Visual C++ Redistributable for Visual Studio 2015?And lastly, did you make sure that the IP address was whitelisted within Atlas?Hope this helps!Best,\nAlexi",
"username": "Alexi_Antonino"
}
] | MongoDB ODBC configuration for Atlas | 2023-03-22T11:18:36.861Z | MongoDB ODBC configuration for Atlas | 1,114 |
|
null | [
"aggregation",
"queries",
"atlas-search"
] | [
{
"code": "/db.getCollection(\"article_fulltext\").aggregate([\n {\n '$search': {\n 'index': 'fulltext', \n 'text': {\n 'query': 'm&s', \n 'path': 'fulltext'\n }\n }\n }\n])\n",
"text": "I am trying to search M&S in the collection but it returns results with contain data like - m/sI tried using / before the word & but still, it gives me the same result!How will I search so that it can return the exact word in the atlas search?",
"username": "Utsav_Upadhyay2"
},
{
"code": "",
"text": "how can we make & searchable in atlas search like - m&s, Marks & spencer, h & M.",
"username": "Utsav_Upadhyay2"
},
{
"code": "/same result",
"text": "Hello @Utsav_Upadhyay2 ,I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, could you please help me understand below things from your use-case?I tried using / before the word & but still, it gives me the same result!As you are working with special characters, please take a look at examples in below documentation to see if this works for your use-caseUse the Atlas Search whitespace analyzer to divide text into searchable terms at each whitespace character.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "/",
"text": "Hi @Tarun_Gaur thank you for your response,I tried using / before the word & but still, it gives me the same result!\nThe Above line means I am getting a result that is not filtered with the M&S keyword.I am using a simple analyzer not White space because I need language-neutral & grammar-based tokenization.",
"username": "Utsav_Upadhyay2"
}
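A hedged sketch of one way to keep language-neutral analysis on the main field while still matching terms that contain &: add a multi sub-field with a whitespace analyzer and query that sub-field. The index and field names come from the query shown earlier; note that lucene.whitespace is case-sensitive, so a custom analyzer with a lowercase token filter may be needed for case-insensitive matching.

```js
// Index definition (created via the Atlas UI or API), shown here as a JS object.
const indexDefinition = {
  mappings: {
    dynamic: false,
    fields: {
      fulltext: {
        type: "string",
        analyzer: "lucene.simple",            // keeps the existing language-neutral behaviour
        multi: {
          exact: { type: "string", analyzer: "lucene.whitespace" } // preserves "m&s" as one token
        }
      }
    }
  }
};

// Query the whitespace-analyzed sub-field when the term contains special characters.
db.article_fulltext.aggregate([
  {
    $search: {
      index: "fulltext",
      text: {
        query: "m&s",
        path: { value: "fulltext", multi: "exact" }
      }
    }
  }
]);
```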
] | How to search with special characters in Atlas search? | 2023-03-02T17:40:09.672Z | How to search with special characters in Atlas search? | 994 |
null | [
"performance",
"transactions",
"storage"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-03-07T00:40:01.826+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":20499, \"ctx\":\"ftdc\",\"msg\":\"serverStatus was very slow\",\"attr\":{\"timeStats\":{\"after basic\":0,\"after asserts\":0,\"after bucketCatalog\":0,\"after catalogStats\":0,\"after connections\":0,\"after electionMetrics\":0,\"after extra_info\":0,\"after flowControl\":0,\"after globalLock\":0,\"after indexBulkBuilder\":0,\"after indexStats\":0,\"after locks\":0,\"after logicalSessionRecordCache\":0,\"after mirroredReads\":0,\"after network\":0,\"after opLatencies\":0,\"after opcounters\":0,\"after opcountersRepl\":0,\"after oplog\":0,\"after oplogTruncation\":0,\"after readConcernCounters\":0,\"after repl\":0,\"after scramCache\":0,\"after security\":0,\"after storageEngine\":0,\"after tcmalloc\":2825,\"after tenantMigrations\":2825,\"after trafficRecording\":2825,\"after transactions\":2825,\"after transportSecurity\":2825,\"after twoPhaseCommitCoordinator\":2825,\"after wiredTiger\":2825,\"at end\":2826}}}",
"text": "Hello there,I’m running a MongoDB CommunityEdition V5.0 on an Ubuntu 20.04.5 server. I’m currently struggeling with slow perfomance of the DB, but I don’t know where it comes from. The same queries sometimes take seconds, and sometimes only milliseconds.\nI now checked the logs of MongoDB, and every now and then (about 1-5 times per day) I can see the following log line:{\"t\":{\"$date\":\"2023-03-07T00:40:01.826+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":20499, \"ctx\":\"ftdc\",\"msg\":\"serverStatus was very slow\",\"attr\":{\"timeStats\":{\"after basic\":0,\"after asserts\":0,\"after bucketCatalog\":0,\"after catalogStats\":0,\"after connections\":0,\"after electionMetrics\":0,\"after extra_info\":0,\"after flowControl\":0,\"after globalLock\":0,\"after indexBulkBuilder\":0,\"after indexStats\":0,\"after locks\":0,\"after logicalSessionRecordCache\":0,\"after mirroredReads\":0,\"after network\":0,\"after opLatencies\":0,\"after opcounters\":0,\"after opcountersRepl\":0,\"after oplog\":0,\"after oplogTruncation\":0,\"after readConcernCounters\":0,\"after repl\":0,\"after scramCache\":0,\"after security\":0,\"after storageEngine\":0,\"after tcmalloc\":2825,\"after tenantMigrations\":2825,\"after trafficRecording\":2825,\"after transactions\":2825,\"after transportSecurity\":2825,\"after twoPhaseCommitCoordinator\":2825,\"after wiredTiger\":2825,\"at end\":2826}}}I didn’t find any documentation on the individual parameters. But it’s suspicious that all the time gets lost after tcmalloc. It’s always the same when that log message appears, the only parameter where time is lost is tcmalloc.Can someone explain to me what this means? And does it give me any hint where my performance issues come from?Regards,\nJonas",
"username": "wrzr123"
},
{
"code": "",
"text": "serverStatus was very slowBased on this, it’s a symptom, instead of a cause.So something on your server must be running slow/heavy. We need more info on it, for instance cpu usage/mem usage/disk usage.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hi Kobe,thanks for answering!\nThe server doesn’t look really overloaded… Yesterday, the “serverStatus was very slow” message was logged about 15 times within 90 seconds. At that time, the maximum CPU usage was at 0.8%. Yes, 0.8%, not 80%. Memory usage was at 19GB out of 32GB, most of it was consumed by MongoDB. But I think that’s normal as well. Maximum system.load was at 4.5, which should be okay, as the server has 8 cores. Maximum disk in was at about 2.5 MiB/s, which also shouldn’t be too much. Disk out was close to 0.The database was handling some requests for sure during that time period. But looking at the resources of the server, I can’t tell that it has overloaded the server. Are those numbers from the log message telling anything? Could this tcmalloc give a hint about what’s the problem?",
"username": "wrzr123"
},
{
"code": "mongodmongodmongod",
"text": "Hi @wrzr123Rather than overloaded, I think in your case it’s a bottleneck somewhere.There are reports of stalls when tcmalloc was decommitting a large amount of RAM at once (see SERVER-31417). There was another report of this phenomenon that turns out to be a monitoring agent locking up the RAM that mongod tries to work with.Anecdotally, monitoring agents seem to interfere with the mongod process. I have also seen reports on some unexplained crashes, that turns out to be a security software that tampers with mongod memory.If you’re not running any monitoring agent, then perhaps you can try running this workload on a different set of hardware and see if the issue persists to rule out any hardware related causes.If this still persists, then perhaps a snippet of mongostat might be useful to double check the server’s load.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin,thanks a lot for your reply!\nI indeed have a monitoring agent running, I’m using netdata cloud. I also have an anti virus software running, ClamAV.\nI will turn them off and check if that resolves the issue. I’ll get back here as soon as I have any results.\nThanks!Regards,\nJonas",
"username": "wrzr123"
},
{
"code": "",
"text": "Hi all,right before I wrote my last reply on this topic, my server was restarted. Then the problem was gone, and all MongoDB queries were as fast as they should be. But now after about 2 weeks, it’s getting slow again.\nAnd again I can see the log messages as shown in my first post.Find here a snippet of mongostat, in case it helps:\ninsert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn time\n*0 *0 *0 *0 0 0|0 0.0% 41.5% 0 9.53G 1.67G 0|0 0|0 111b 56.6k 35 Mar 21 22:14:42.902\n*0 *0 *0 *0 0 2|0 0.0% 41.5% 0 9.53G 1.67G 0|0 0|0 407b 57.2k 35 Mar 21 22:14:43.902\n*0 *0 *0 *0 0 7|0 0.0% 41.5% 0 9.53G 1.67G 0|0 0|0 1.24k 58.0k 35 Mar 21 22:14:44.898\n*0 *0 *0 *0 0 9|0 0.0% 41.5% 0 9.53G 1.67G 0|0 0|0 2.55k 57.6k 35 Mar 21 22:14:45.901\n*0 *0 *0 *0 0 8|0 0.0% 41.5% 0 9.53G 1.67G 0|0 0|0 3.23k 61.4k 35 Mar 21 22:14:47.148\n*0 *0 *0 *0 0 11|0 0.0% 41.5% 0 9.53G 1.67G 0|0 0|0 4.71k 78.6k 35 Mar 21 22:14:47.900\n*0 *0 *0 *0 0 8|0 0.0% 41.5% 0 9.53G 1.67G 0|0 1|0 2.54k 56.8k 35 Mar 21 22:14:48.925\n*0 *0 *0 *0 0 11|0 0.0% 41.5% 0 9.53G 1.67G 0|0 0|0 5.00k 60.0k 35 Mar 21 22:14:49.899\n*0 *0 *0 *0 0 12|0 0.0% 41.5% 0 9.53G 1.67G 1|0 2|0 7.21k 57.6k 35 Mar 21 22:14:50.910\n*0 *0 *0 *0 0 9|0 0.0% 41.5% 0 9.53G 1.67G 0|0 1|0 3.50k 57.3k 35 Mar 21 22:14:51.939I turned off my anti virus software (ClamAV) and my monitoring software (netdata), and the problem still persists. Does someone have any idea what else could cause the problem? Maybe that the problem was gone after a server restart could be a hint. Only restarting MongoDB doesn’t help by the way.\nI have absolutely no idea anymore…Thanks in advance!",
"username": "wrzr123"
},
{
"code": "mongostatmongodserverStatus was very slow",
"text": "Hi @wrzr123I don’t see anything wrong with the mongostat output. In fact, it shows an idle server doing nothing: zero inserts, queries, and updates. It’s also only uses 41.5% of the allocated WT cache.When you said “slow”, could you give further details on what you’re seeing? Some pointers:To be perfectly honest, I don’t see a specific MongoDB issue here yet. Perhaps it’s something about the hardware or the deployment?Just an idea, if you’re running on AWS, you might be running into a burstable performance limit and are being throttled. Note that the burst involve CPU and disk (separately, I believe), so either could be a cause.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Honestly, i wouldn’t personally suggest spend too much time looking into this “serverStatus slow” if that’s the only thing you observed. I mean, if all other service level metrics look good (e.g. CPU/memory/disk/latency…), why bother.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks a lot again to both of you for your replies!@Kobe_W\nI don’t bother about the log line itself, my problem is, that my application which is using the MongoDB is responding very slow. That’s why I started investigating at all.In general, there is very low traffic on the application, but when there is, it’s responding most of the time very slow. I turned on the system profle (that was my first measure) to investigate about long running queries. And there are a lot of long running queries, which shouldn’t be that slow.\nFor example there was an insert operation, which took 21 (!!) seconds. There is nothing special about the operation, the same insert operation usually takes only a few milliseconds. When I look at the details, I can see it’s spending basically all the 21 seconds waiting for the schema lock.\nThe same applies for queries and commands. Here there is no specific hint about locks, but queries which perform very fast on my development machine take forever on the server (with the same amount of data in my local MongoDB).@kevinadi\nRegarding your points:I totally agree that it’s probably not a bug in MongoDB, but an issue with the deployment. The only other things running on the server are 3 dotnet core microservices which belong to the application (those services are querying the database) and an apache2 webserver.\nI know that in an optimal deployment the database should run on a dedicated server whith nothing else running on it, but I’ve seen this kind of deployment a few times for bigger testing environments, and we never faced issue like that.",
"username": "wrzr123"
}
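Since the profiler output above points at time spent waiting for locks, a small mongosh check run while the application feels slow may help narrow down what is holding things up; the thresholds are illustrative.

```js
// List operations that are either waiting on a lock or active for more than 2 seconds.
db.currentOp({
  $or: [
    { waitingForLock: true },
    { active: true, secs_running: { $gt: 2 } }
  ]
});
```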
] | Meaning and root cause of log message "serverStatus was very slow" | 2023-03-07T19:30:43.547Z | Meaning and root cause of log message “serverStatus was very slow” | 1,878 |
null | [
"data-modeling",
"atlas-functions",
"react-native",
"atlas-triggers"
] | [
{
"code": "TriggersFunctions'string''data'export class Photo extends Realm.Object<Photo> {\n photo_name!: string;\n photo!: ArrayBuffer;\n photo_url!: string;\n\n static schema: Realm.ObjectSchema = {\n name: \"Photo\",\n embedded: true,\n properties: {\n photo_name: \"string\",\n photo: { type: \"data\", optional: true, default: null },\n photo_url: \"string\",\n },\n };\n}\nexport class Photo extends Realm.Object<Photo> {\n photo_name!: string;\n photo!: ArrayBuffer;\n photo_url!: string;\n\n static schema: Realm.ObjectSchema = {\n name: \"Photo\",\n embedded: true,\n properties: {\n photo_name: \"string\",\n photo: { type: \"data\", optional: true, default: null },\n photo_url: \"string\",\n },\n };\n}\nnull\"\"",
"text": "Hello, I have a similar if not identical use case as the WILDAID o-fish project. I read all related topics and posts here in the forums, but could not find my answer. I am new to S3 as well. The task is to use device sync to sync an image (<1MB) to Atlas and after that upload it to S3 and delete the image data from Atlas, just keep an url reference to the S3 object. All of this uses Triggers and Functions. I am using React Native (Expo) with flexible sync and my question is: what is the appropriate data type for the image data and how to define it in my realm schema? Should I be using the 'string' data type and encoding the image as a base64 string? Or should I be using the binary data type of some sort, perhaps 'data'? I guess it all depends on what is more efficient and what is used to store the image on S3. I don’t know what is a good practice for storing images on S3.These are what I imagine the schema could look like. Also for string data types, when I wan to delete the data, do I set it o null or to \"\"?",
"username": "Damian_Danev"
},
{
"code": "",
"text": "Hi DamienI would recommend that you upload to S3 directly, and only ever store the url in Realm.If you want to trampoline on Atlas for the sake of authentication, then use a Atlas AppServices Function, but never store the actual blob in Realm, pass the image directly to that function.Br, Kasper",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "I have considered that and still am, but my app is offline first with probably upwards of 70% off-the-grid usage. I would like to attempt what I see so many have done here in the forums - to compress the image down to ~100kb, sync it, and upon insertion delete and upload to S3. If not even for the image itself I would like to be able to store very very small thumbnails in Atlas.",
"username": "Damian_Danev"
},
{
"code": "",
"text": "Well, to answer your original question base64 will encode 3 bytes in 4, so that is obviously less efficient than using the binary data type directly. It will also require encoding and decoding.I would still recommend not storing large blobs directly in Realm, but you could argue that 100KiB is not that large.An approach to consider is to keep a queue of unsent images that you will post, once the app is online. You could use a local realm to keep track of this queue.",
"username": "Kasper_Nielsen1"
},
{
"code": "export class Photo extends Realm.Object<Photo> {\n photo_name!: string;\n photo!: ArrayBuffer;\n photo_url!: string;\n\n static schema: Realm.ObjectSchema = {\n name: \"Photo\",\n embedded: true,\n properties: {\n photo_name: \"string\",\n photo: { type: \"data\", optional: true, default: null },\n photo_url: \"string\",\n },\n };\n}\nimageIsUploaded",
"text": "In this case, could you confirm if this is the correct way of using blobs:As for your recommendation: my initial thought was that it would add complexity to the app, but it does sound alluring. Could you expand a bit more on how I can do that? Keep a local realm collection of photos, and have some boolean field imageIsUploaded? Most importantly what method should I use to detect internet access without any headaches regarding its stability and without needed action from the user?",
"username": "Damian_Danev"
},
{
"code": "",
"text": "I’m really not familiar with the realm-js SDK (or js for that matter - I’m a developer on dart SDK) to answer with certainty, but it does look reasonable to the untrained eye.You could listen for connection changes on the session of the synced realm and picky back of that to know when to upload. Remember transactions cannot span multiple realms, so order your code accordingly.",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "@Kasper_Nielsen1 , look what I found: Blob in React Native / Realm@Andrew_Meyer , sorry for the tag, but could you please shed some light on this topic?",
"username": "Damian_Danev"
},
{
"code": "",
"text": "I didn’t know about that - as I said I’m a little outside my turf here. But you can (as described in the linked issue) use a base64 encoded string instead.",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "@Damian_Danev I tried this a while back when preparing an application for a presentation. I ended up hitting limitation with React Native’s implementation of Blob. There are portions of their implementation that makes it impossible to instantiate a blob from an Array Buffer. For reference, see: react-native/Blob.js at f8d8764e8f236e8495e7e5747bfe95162f3a165a · facebook/react-native · GitHubI created the issue to track this limitation.",
"username": "Andrew_Meyer"
}
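Given that limitation, a hedged Realm JS sketch of the base64-string workaround discussed above; property names other than photo_name and photo_url are illustrative. Base64 inflates the payload by roughly a third, so it only makes sense for small, compressed images or thumbnails.

```js
export class Photo extends Realm.Object {
  static schema = {
    name: "Photo",
    embedded: true,
    properties: {
      photo_name: "string",
      photo_base64: "string?",   // base64-encoded image; set to null once uploaded to S3
      photo_url: "string?",      // S3 URL filled in by the trigger/function after upload
      is_uploaded: { type: "bool", default: false }
    }
  };
}
```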
] | Image data type in schema [RN Realm] | 2023-03-20T09:02:57.900Z | Image data type in schema [RN Realm] | 1,439 |
null | [
"aggregation",
"queries",
"views"
] | [
{
"code": "",
"text": "Hi,I use Mongo 4.2 on-premise.I plan to use the $merge in order to create materialized views for some heavy aggregations.I saw that Mongo Atlas has built-in triggers to refresh the materialized views automatically.Since I cannot use this - can I simply use a script that will run the aggregation with the $merge from an OS scheduler (such as cronjob) every, say, two hours?Am I missing something?Thanks,\nTamar",
"username": "Tamar_Nirenberg"
},
{
"code": "",
"text": "Since I cannot use this - can I simply use a script that will run the aggregation with the $merge from an OS scheduler (such as cronjob) every, say, two hours?Yes definitely you can! First, you need to create a script file that can be called by mongo shell, without entering the console. The syntax to call the script depends on which Mongo shell you are using. If you are using mongosh (recommended) then follow the below steps:Copy the aggregate command in a script file, say aggregate_pipeline.js file, and save itTest the script by executing it from the Linux shell as follows:\nmongosh --host 127.0.0.1 --port 27017 --username myuser --password superSecret --file aggregate_pipeline.jsIf the above step succeeds, create a bash shell file in the same directory and type in the above command, and save it as .sh\nDon’t forget to grant execute privilege to the bash script (chmod u+x command.sh)Example contents of the bash file:#!/bin/bash\nmongosh --host 127.0.0.1 --port 27017 --username myuser --password superSecret --file aggregate_pipeline.jsConfigure the cronjob for the above bash script to run every two hours, as below: → crontab -e\n → 0 */2 * * * sh <path/to/bashScript.sh>\n → save and exitIf you are using old mongo shell (deprecated), then use the below command in the bash script:mongo -u username -p password --authenticationDatabase auth_db_name --eval ‘<aggregate_pipeline_command>’ myDbNameNote: if you are using Mongo operators starting with a $ sign, you’ll need to surround the eval argument in single quotes to keep the shell from evaluating the operator as an environment variable.Best Regards,\nAbdullah Madani.",
"username": "Abdullah_Madani"
},
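A hedged sketch of what the aggregate_pipeline.js file mentioned above could contain: an aggregation ending in $merge that refreshes an on-demand materialized view (MongoDB 4.2+). Database, collection, and field names are illustrative.

```js
// aggregate_pipeline.js — run by cron via: mongosh ... --file aggregate_pipeline.js
db = db.getSiblingDB("reporting");

db.orders.aggregate([
  { $group: { _id: "$customerId", total: { $sum: "$amount" }, count: { $sum: 1 } } },
  {
    $merge: {
      into: "orders_summary",   // the materialized view collection
      on: "_id",
      whenMatched: "replace",
      whenNotMatched: "insert"
    }
  }
]);
```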
{
"code": "",
"text": "Thank you @Abdullah_Madani for clarifying.I am actually very familiar mongo shell and cronjobs, I was just wondering if the Mongo Atlas solution for Materialized views refresh has some hidden advantage over a simple cronjob.\nIf it does - I was wondering how to do the same with a Mongo on-premise.\nIf all not possible - then I will have to use a cronjob I guess.Thanks,\nTamar",
"username": "Tamar_Nirenberg"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Run materialized views refresh from OS scheduler | 2023-03-20T12:45:10.549Z | Run materialized views refresh from OS scheduler | 1,283 |
[
"containers"
] | [
{
"code": "",
"text": "Hi there, I experienced a weird issue where today my databases suddenly went missing, unable to perform auth. Is there any way to figure out what went wrong? I didnt remember setting up volumes with docker compose, but so far it has persisted accross vps restarts, and docker restarts. The /data folder has wt files.\nimage866×146 4.55 KB\nIs there a way to figure out what causes this?",
"username": "Seiko_Santana"
},
{
"code": "",
"text": "Hi @Seiko_Santana and welcome to the MongoDB Community forum!!where today my databases suddenly went missing, unable to perform auth.Can you help us by elaborating more on the issue being seen by helping us with the error message being seen.I didnt remember setting up volumes with docker compose, but so far it has persisted accross vps restarts, and docker restarts.The volumes in docker containers holds the advantage to specify a persistent storage location. You can follow the documentation to learn more on different use cases to understand docker volumes.Further, can you share the docker compose file you are using to deploy MongoDB in your system.Is there a way to figure out what causes this?Can you confirm if there was no operation being done in the database directly or through application, performed during the time the application was working perfectly fine to the issue the issue started to arise?Best regards\nAasawari",
"username": "Aasawari"
}
] | Docker MongoDB Databases Went Missing | 2023-03-20T04:38:13.000Z | Docker MongoDB Databases Went Missing | 743 |
|
null | [
"queries",
"golang"
] | [
{
"code": "",
"text": "I have a go routine which is running continuously every 5 seconds where I am trying to connect to mongodb, then create a cursor and use it to iterate through mongodb doc.\nMy code is working fine for some time but it always goes to panic say after 45 mins. on this line:cur, err := coll.Find(context.Background(), filter)and the panic says:server selection error: server selection timeout, current topology: { Type: Unknown, Servers: [{ Addr: mongodb-dev-sre-seti-gateway-apps.apps.ose-dev45.micron.com:31498, Type: Unknown, Last error: dial tcp: lookup mongodb-dev-sre-seti-gateway-apps.apps.ose-dev45.micron.com on 10.96.0.10:53: server misbehaving }, ] }Any idea if I am missing somthing??",
"username": "Rishav_Kumar_Jha"
},
{
"code": "server misbehaving",
"text": "Hey @Rishav_Kumar_Jha,Welcome to the MongoDB Community forums …Type: Unknown, Last error: dial TCP:…server misbehavingThe server misbehaving error comes from the MongoDB Agent (here Go Driver) and it means the name server was unable to process this query due to a problem with the name server. Please refer to RFC 1035 for more details.It appears to be a DNS issue, as the Go driver does not cache DNS and instead relies on the OS and its resolvers. Please refer to this JIRA ticket for more details.Having said that, to better understand this issue, could you please provide us with the code snippet from your script, the MongoDB version, and the Go Driver version you are using?Also, can you confirm that every 5 sec you are creating a new connection with the database server? If yes, can you keep the connection alive rather than building a new one every 5 seconds?Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Server selection error: server selection timeout | 2023-03-19T17:38:14.298Z | Server selection error: server selection timeout | 1,674 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "\n{\n \"_id\" : ObjectId(\"6401bf4640e29af1625c10a9\"),\n \"articleid\" : \"165097155\",\n \"headline\" : \"iOS vs. Android: Android Must Consider these iOS Features\"\n \"article_type\" : \"online\",\n \"pubdateRange\" : ISODate(\"2023-03-03T14:55:01.000+0000\"),\n \"clientidArray\" : [ \"M0036\", \"Y0010\", \"D0382\"]\n}\ndb.getCollection(\"article_beta\").aggregate([\n {\n \"$search\":{\n \"index\":\"fulltext\",\n \"compound\":{\n \"must\":[\n {\n \"range\":{\n \"path\":\"pubdateRange\",\n \"gte\":\"ISODate(\"\"2023-01-01T00:00:00.000Z\"\")\",\n \"lte\":\"ISODate(\"\"2023-03-15T00:00:00.000Z\"\")\"\n }\n },\n {\n \"text\":{\n \"query\":\"D0382\",\n \"path\":[\n \"clientidArray\"\n ]\n }\n }\n ]\n }\n }\n }\n])\n",
"text": "I am trying to search in the array, with the range date but it is not working. if I try searching alone in the array it is not working too. though I check I have data according to the search query.this is my sample data -The index name is - fulltext, where the headline is indexed as a string, pubdateRange as Date, and clientidArray as a string tooThe query I am trying to execute is -Any help is much appreciated.",
"username": "Utsav_Upadhyay2"
},
{
"code": "lucene.standard{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"clientidArray\": {\n \"type\": \"string\"\n },\n \"headline\": {\n \"type\": \"string\"\n },\n \"pubdateRange\": {\n \"type\": \"date\"\n }\n }\n }\n}\n{\n index: 'default',\n text: {\n query: 'M0036',\n path: 'clientidArray'\n }\n}\nReason: PlanExecutor error during aggregation :: caused by :: \nRemote error from mongot :: caused by :: \n\"compound.must[0].range.lte\" must be a date, number, or geoPoint\n{\n index: 'default',\n \"compound\": {\n \"must\": [{\n \"range\":{\n \"path\":\"pubdateRange\",\n \"gte\":ISODate('2023-03-03T14:55:01.000+00:00'),\n \"lte\":ISODate('2023-03-03T14:55:01.000+00:00')\n }\n }],\n \"must\":[{\n text: {\n query: 'M0036',\n path: 'clientidArray'\n }\n }]\n}\n}\npubdateRangeclientidArray",
"text": "Hey @Utsav_Upadhyay2,Is your range search working? Range requires a date or a numeric field as an input while I see you’re giving a string value. I tried to reproduce your problem on my end to check this. Created documents from the sample you provided and the index definition is (with lucene.standard):When I executed the text search alone, it worked as expected. This is my search:But when I executed the query you provided, it gave me an error:ie. the range operator is unable to identify the dates provided since we are providing string values to it instead of dates. I changed the query to provide date to the range operator:and it worked as expected.Kindly try and see if the above query works for you or not. If this still doesn’t work for you, please check the types of your fields pubdateRange and clientidArray. They should be date and array respectively or please post your index definition as well as any error you may be getting while executing the search.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to index and search an array in Atlas fulltext search? | 2023-03-15T14:00:35.582Z | How to index and search an array in Atlas fulltext search? | 855 |
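A note on the corrected query in the thread above: the last example repeats the must key inside compound, so in a JavaScript object literal only the second clause survives and the date range is silently dropped. If both the date range and the client-id filter should apply, they belong in a single must array. A minimal mongosh sketch, reusing the collection, index and field names from the thread (assuming pubdateRange is indexed as date and clientidArray as string):

```javascript
db.getCollection("article_beta").aggregate([
  {
    $search: {
      index: "fulltext",
      compound: {
        must: [
          {
            // Dates must be passed as real ISODate values, not strings.
            range: {
              path: "pubdateRange",
              gte: ISODate("2023-01-01T00:00:00.000Z"),
              lte: ISODate("2023-03-15T00:00:00.000Z")
            }
          },
          {
            text: {
              query: "D0382",
              path: "clientidArray"
            }
          }
        ]
      }
    }
  }
])
```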
null | [] | [
{
"code": "",
"text": "Hi everyone,i’m facing some issues with oplog size, we are trying to compact it, as shown in other topics (Mongodb Atlas compact oplog.rs) , but it seems we can’t do it because of our privileges.So, we are trying to create a custom role with compact action in local database but we don’t have enough privileges either to create a role with compact privilege in local database.We are trying to do these actions with a user with dbadmin privileges, but it is impossible. Any ideas??Thanks in advance",
"username": "Jose_Cristino_Fernandez"
},
{
"code": "dbadmindbadmindb.getUser()db.getUser(\"myuser\")\nrolesdbAdmin{\n \"_id\" : \"mydb.myuser\",\n \"user\" : \"myuser\",\n \"db\" : \"mydb\",\n \"roles\" : [\n {\n \"role\" : \"dbAdmin\",\n \"db\" : \"mydb\"\n }\n ],\n \"mechanisms\" : [\n \"SCRAM-SHA-1\"\n ]\n}\ndbAdminmydbrolesdbAdmincompactcompact",
"text": "Hello @Jose_Cristino_Fernandez ,Welcome to The MongoDB Community Forums! We are trying to do these actions with a user with dbadmin privileges, but it is impossible.Is your deployment local or Atlas?\nIf Atlas then, what deployment you are using? M0,M2…M10 etc?Note: Serverless instances don’t support Oplog feature at this time. To learn more, see Serverless Instance Limitations.If you are using local environment then please explain, why are you trying to compact the oplog? The oplog is constantly being written to at a rate of every 10 seconds in an idle replica set (see https://jira.mongodb.org/browse/SERVER-23892), so there’s little to no benefit in compacting it.\nHowever to directly answer your question, you need to use a user with dbadmin privileges., had dbadmin privileges?\nTo check this, you can run the db.getUser() command to retrieve information about the user. For example, if the username is “myuser”, you can run the following command:Examine the roles field in the output to see if the user has been granted the dbAdmin role. For example, the output might look like this:In this example, the user “myuser” has been granted the dbAdmin role on the mydb database. If the roles field does not contain the dbAdmin role, the user does not have that role.Please refer to below documentation to learn more about compact command and required privileges.Note: Always have an up-to-date backup before performing server maintenance such as the compact operation.Lastly, you can also take at below threads which is related to your queryRegards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Compact oplog.rs | 2023-02-27T16:19:51.957Z | Compact oplog.rs | 948 |
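For reference, a mongosh sketch of the custom-role approach discussed in the thread above, written for a self-managed replica set; the role and user names are made up, and whether the compact action can be granted at all on Atlas depends on the cluster tier and Atlas's custom-role restrictions:

```javascript
// Run as a user who is allowed to create roles (e.g. a userAdmin on admin).
const admin = db.getSiblingDB("admin");

admin.createRole({
  role: "oplogCompact", // hypothetical role name
  privileges: [
    { resource: { db: "local", collection: "oplog.rs" }, actions: ["compact"] }
  ],
  roles: []
});

admin.grantRolesToUser("myuser", [{ role: "oplogCompact", db: "admin" }]);

// Then, connected as that user (and with an up-to-date backup in place),
// run compact against the oplog; for the oplog this is typically done on
// secondaries, stepping the primary down first.
db.getSiblingDB("local").runCommand({ compact: "oplog.rs" });
```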
null | [
"replication",
"sharding",
"containers",
"storage"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-03-14T16:45:32.342+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"a8f4b836a4fc:27017\"}}\n{\"t\":{\"$date\":\"2023-03-14T16:45:32.348+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333222, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM received error response\",\"attr\":{\"host\":\"a8f4b836a4fc:27017\",\"error\":\"HostUnreachable: Error connecting to a8f4b836a4fc:27017 :: caused by :: Could not find address for a8f4b836a4fc:27017: SocketException: Host not found (authoritative)\",\"replicaSet\":\"rs0\",\"response\":\"{}\"}}\n{\"t\":{\"$date\":\"2023-03-14T16:45:32.348+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4712102, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Host failed in replica set\",\"attr\":{\"replicaSet\":\"rs0\",\"host\":\"a8f4b836a4fc:27017\",\"error\":{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"Error connecting to a8f4b836a4fc:27017 :: caused by :: Could not find address for a8f4b836a4fc:27017: SocketException: Host not found (authoritative)\"},\"action\":{\"dropConnections\":true,\"requestImmediateCheck\":true}}}\n{\"t\":{\"$date\":\"2023-03-14T16:45:32.852+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333222, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM received error response\",\"attr\":{\"host\":\"a8f4b836a4fc:27017\",\"error\":\"HostUnreachable: Error connecting to a8f4b836a4fc:27017 :: caused by :: Could not find address for a8f4b836a4fc:27017: SocketException: Host not found (authoritative)\",\"replicaSet\":\"rs0\",\"response\":\"{}\"}}\n{\"t\":{\"$date\":\"2023-03-14T16:45:32.852+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4712102, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Host failed in replica set\",\"attr\":{\"replicaSet\":\"rs0\",\"host\":\"a8f4b836a4fc:27017\",\"error\":{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"Error connecting to a8f4b836a4fc:27017 :: caused by :: Could not find address for a8f4b836a4fc:27017: SocketException: Host not found (authoritative)\"},\"action\":{\"dropConnections\":true,\"requestImmediateCheck\":false,\"outcome\":\nversion: '3.7'\n\nservices:\n mongodb1:\n container_name: mongodb1\n image: mongo:4.4.0\n command: mongod --shardsvr --dbpath /data/db --port 27017\n ports:\n - 27017:27017\n expose:\n - \"27017\"\n environment:\n TERM: xterm\n volumes:\n - ~/mongo-shard/mongodata1:/data/db\n\n mongodb2:\n container_name: mongodb2\n image: mongo:4.4.0\n command: mongod --shardsvr --dbpath /data/db --port 27017\n ports:\n - 27027:27017\n expose:\n - \"27017\"\n environment:\n TERM: xterm\n volumes:\n - ~/mongo-shard/mongodata2:/data/db\n\n mongodb3:\n container_name: mongodb3\n image: mongo:4.4.0\n command: mongod --shardsvr --dbpath /data/db --port 27017\n ports:\n - 27037:27017\n expose:\n - \"27017\"\n environment:\n TERM: xterm\n volumes:\n - ~/mongo-shard/mongodata3:/data/db\n\n mongocfg:\n container_name: mongocfg\n image: mongo:4.4.0\n command: mongod --configsvr --replSet rs0 --dbpath /data/db --port 27017\n environment:\n TERM: xterm\n expose:\n - \"27017\"\n volumes:\n - ~/mongo-shard/mongodatacfg:/data/db\n \n mongos:\n container_name: mongos\n image: mongo:4.4.0\n depends_on:\n - mongocfg\n command: mongos --configdb rs0/mongocfg:27017 --bind_ip_all --port 27017\n ports:\n - 27022:27017\n expose:\n - \"27017\"\nversion: '2'\nservices:\n\n mongos:\n image: mongo:5.0.13\n container_name: mongos\n command: mongos --port 27017 
--configdb rs0/mongocfg:27017 --bind_ip_all\n ports:\n - 27022:27017\n\n mongocfg:\n image: mongo:5.0.13\n container_name: mongocfg\n command: mongod --port 27017 --configsvr --replSet rs0 --bind_ip_all\n volumes:\n - ~/mongo-shard/mongodatacfg:/data/db\n\n mongodb1:\n image: mongo:5.0.13\n container_name: mongodb1\n command: mongod --port 27017 --shardsvr --replSet rs-shard-01 --bind_ip_all\n volumes:\n - ~/mongo-shard/mongodata1:/data/db\n ports:\n - 27027:27017\n\n mongodb2:\n image: mongo:5.0.13\n container_name: mongodb2\n command: mongod --port 27017 --shardsvr --replSet rs-shard-02 --bind_ip_all\n volumes:\n - ~/mongo-shard/mongodata2:/data/db\n ports:\n - 27037:27017\n\n mongodb3:\n image: mongo:5.0.13\n container_name: mongodb3\n command: mongod --port 27017 --shardsvr --replSet rs-shard-03 --bind_ip_all\n volumes:\n - ~/mongo-shard/mongodata3:/data/db\n ports:\n - 27047:27017\n",
"text": "My local sharded Mongo cluster has been running without issue on version 4.4 for some time. I am attempting to upgrade to 5.0.13 with the ultimate goal of upgrading to 6.0.When I run docker-compose up, the containers start up, but the shards have the following error messages. Note that “a8f4b836a4fc” is the container id for the old config server with the 4.4 image that is no longer running.If I remove the container’s volumes, then the upgrade works fine. Based on that, I grepped the volumes for “a8f4b836a4fc”, and it was there in several places like this: configsvrConnectionStringrs0/a8f4b836a4fc:27017. I.e. the shard’s storage volumes are retaining the container id of the old config server and possibly using it for connection purposes.If I remove the volumes, I can do the upgrade without issue. When I check the WiredTiger files in the new volumes, I see that the configsvrConnectionString does not reference the config server by container id, but by the container name, which makes a lot more sense to me. I couldn’t find anything online about Mongo 4.4 referencing the config server by container id, or even why that would be cached or stored in the first place since container ids change. I would like to understand why this is happening and how I can prevent my 4.4 setup from “caching” the container id in its storage volumes and trying to use the old ids to connect after the upgrade is complete.Thanks in advance.Here is the docker-compose.yaml that works for 4.4 (written by someone who has since left the company)Here is the docker file for 5.0.13 (based on his 4.4 file)",
"username": "Leia_M"
},
{
"code": "",
"text": "Hi @Leia_M and welcome to the MongoDB Community forum!!From the docker compose file shared above, it seems you are trying to upgrade a deployment of 3 shard servers, config server and mongos.\nTo perform an upgrade on a sharded cluster, the recommended sequence of steps is to begin with upgrading the config servers, then move on to the shard servers, and finally upgrade the mongos.The error in the above message seems to be because you are trying to upgrade all at the same time.Can you upgrade using the above process and let us know if you are facing similar issue?At a glance to me, this seems to be a Docker operational issue. That is, MongoDB will run the same way and expect the same things doesn’t matter what platform it’s running on. I think you may get better insights from public forums like StackOverflow or Docker Forums for a detailed solution.Best Regards\nAasawari",
"username": "Aasawari"
}
] | Upgrading a sharded MongoDB cluster in Docker: Container Id being used to connect | 2023-03-15T16:04:55.737Z | Upgrading a sharded MongoDB cluster in Docker: Container Id being used to connect | 1,443 |
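To complement the upgrade-order advice in the thread above, here is a short mongosh sketch of the surrounding steps for a 4.4 to 5.0 sharded-cluster upgrade (the poster's eventual move to 6.0 also requires the feature compatibility version to be at 5.0 first). Run these against the mongos:

```javascript
// Before swapping any binaries: stop the balancer.
sh.stopBalancer()
sh.getBalancerState()   // should now return false

// ...upgrade the config servers, then the shards, then mongos (4.4 -> 5.0)...

// Once every component runs 5.0 and the cluster looks healthy:
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
db.adminCommand({ setFeatureCompatibilityVersion: "5.0" })

// Finally, re-enable the balancer.
sh.startBalancer()
```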
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "{\n\tbuyer: ObjectId(‘User_schema_Id’),\n\tseller: ObjectId(‘User_schema_Id’),\n\tlistings: [\n\t\t{\n\t\t\tlisting: ObjectId(‘Listing_schema_Id’),\n\t\t\tlistingPrice: 99,\n\t\t\t…\n\t\t}\n\t\t{\n\t\t\tlisting: ObjectId(‘Listing_schema_Id’),\n\t\t\tlistingPrice: 99,\n\t\t\t…\n\t\t}\n\t],\n\t…\n}\n{\n\temail: ‘[email protected]’,\n\tfirstName: ‘Mike’,\n\t…\n}\n{\n\ttitle: ‘Blue Sofa’,\n\tstatus: ‘Active’,\n\t…\n}\n let orders = await Order.aggregate([\n {\n $search: {\n index: 'ordersSearch',\n embeddedDocument: {\n path: 'seller',\n operator: {\n text: {\n path: 'seller.email',\n query: searchTerm,\n },\n },\n },\n\n },\n },\n {\n $search: {\n index: 'ordersSearch',\n embeddedDocument: {\n path: 'listings',\n operator: {\n text: {\n path: 'listings.listing.title',\n query: searchTerm,\n },\n },\n },\n\n },\n },\n\t\t…\n ]);\n",
"text": "Hello, I am struggling with a problem in my web application. I want to build a search query using the MongoDB Search Atlas. The goal is to search through one schema of our MongoDB database for matching values. That is easy with a simple search index, the problem is that I would like to also search inside nested documents of the original schema documents.\nSo for example with the search query, I want to find matching values in our Order schema fields which looks something like this:Order Schema:The User Schema (referenced in order.buyer and order.seller) looks something like this:User SchemaAnd the Listing Schema (referenced in order.listings.listing) looks something like this:Listing SchemaNow, if the search Query is something like ‘Couch’ or ‘Mike’ or ‘[email protected]’, I would like to search inside the following Order fields for a matching value:Is that even possible? I don’t know where to start from.This is what I have so far, but it’s not leading me anywhere:",
"username": "Francesco_De_Conto"
},
{
"code": "",
"text": "Hi @Francesco_De_Conto and welcome to the MongoDb Community forum!!The atlas search indexes work for the nested queries but might have some restrictions depending on the schema designed and the index defined.For the following, it would helpful for us if you could the below information which would help to replicate the same in our local environment and help you with the solution if possible.Regards\nAasawari",
"username": "Aasawari"
}
] | MongoDB search inside nested documents | 2023-03-15T11:31:37.849Z | MongoDB search inside nested documents | 791 |
null | [
"react-native"
] | [
{
"code": "const Person = {\n name: \"Person\",\n properties: {\n name: \"string\",\n birthdate: \"date\",\n dogs: \"Dog[]\"\n }\n};\n\nconst Dog = {\n name: \"Dog\",\n properties: {\n name: \"string\",\n age: \"int\",\n breed: \"string?\"\n }\n};\nperson1: Person = ...\n\nrealm.write(() => {\n return new Dog(realm, dogFields);\n});\n",
"text": "The doc explains how to create a One-to-Many Relationship schema as shown below:But I dont see where it explains how to add Dog objects in the database that belongs to a person. A Person should be specified in some way.I am using Realm React, so the dog creation code looks something like :If I have a person1 already in the DB, how to modify this code so that person1 becomes the owner of this newly created dog.",
"username": "Gilles_Jack"
},
{
"code": "",
"text": "Good question!Conceptually, if a Person is instantiated in code, and then some dogs, and then the dogs are added to the Persons dogs property, when the Person is written to Realm within a write transaction, the dogs will be too! How cool is that?If there’s in existing person that’s already been written. When it’s read in and new dogs are added to the person (within a write transaction) , they will also also be written.Lastly, if there are Person and Dogs that have already been written, if a person is read in, and then a dog is read in, when the dog is added the Person Dog property within a write transaction, it will just update the person since the dog already exists.Does that help?",
"username": "Jay"
},
{
"code": "",
"text": "Yes, very clear. I will try it and let you know on stackoverflow.\nThanks a million.\nI have 2 or 3 open issues here if you have time like this one. and that onePS: I wish MongoDB could provide a fully working one-to-many React Native example out of the box, but for some reasons, its not the case.",
"username": "Gilles_Jack"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | React Native MongoDB Realm - How to create a object in a One-to-Many Relationship? | 2023-03-21T06:06:11.199Z | React Native MongoDB Realm - How to create a object in a One-to-Many Relationship? | 1,165 |
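To make the answer above concrete, here is a minimal sketch using the @realm/react hooks and the Person/Dog schemas from the question. The component name, props, and the way the person is looked up are illustrative assumptions, not part of the original thread:

```javascript
import {useRealm} from '@realm/react';

function AddDogButton({personName, dogFields}) {
  const realm = useRealm();

  const addDog = () => {
    // Illustrative lookup: fetch the already-persisted person by name.
    const person = realm.objects('Person').filtered('name == $0', personName)[0];

    realm.write(() => {
      // Creating the dog and pushing it onto person.dogs in the same
      // transaction persists the dog and links it to this person.
      // (With class-based models, `new Dog(realm, dogFields)` inside the
      // write works the same way.)
      const dog = realm.create('Dog', dogFields);
      person.dogs.push(dog);
    });
  };

  // ...render a button that calls addDog()...
  return null;
}
```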
null | [
"aggregation",
"replication",
"change-streams",
"kafka-connector"
] | [
{
"code": "",
"text": "What configuration is needed in Sink connector to listen to multiple database changes? In source connector i can use pipeline filter to select databases and collections name and make database field empty, but in sink connector database field is mandatory. How can we sync changes in multiple databases(more than 4k count) to the respective database in another cluster?? It is impossible to add seperate connectors for each databases. As per the documentation sink connector can listen to multiple topics, but how it will write to different databases.?",
"username": "Suraj_Santhosh"
},
{
"code": "",
"text": "Hi,@Robert_Walters How to listen change streams in multiple database in Sink connector. I have databases like\nDB1_CA , DB1_DC , DB2_CA , DB2_DC . Databases DB1_CA and DB2_CA have collections with same name. Same will apply for DB1_DC and DB2_DC . I was able create topics dynamically for these database changes. But I am stuck with Sink Connector configurations. I know Sink connector can listen to multiple topics by configuring topics.regex or topics. But where it will write to?? Database field is mandatory and cannot able to accept list of regex. Please direct me to solve this particular case.Regards,",
"username": "Suraj_Santhosh"
},
{
"code": "",
"text": "Check outmore info:Version 1.4 of the MongoDB Connector for Apache Kafka focused on customer requested features that give the MongoDB Connector the flexibility to route MongoDB data within the Kafka ecosystem.",
"username": "Robert_Walters"
},
{
"code": "",
"text": "Thank you @Robert_Walters . I just extended the NamespaceMapper Interface according to my requirements and it worked.",
"username": "Suraj_Santhosh1"
},
{
"code": "",
"text": "@Suraj_Santhosh1 Can you give an overview of how the file turned out?",
"username": "Samuel_Molling"
}
] | Sync Multiple databases using kafka connector | 2023-01-04T07:34:08.994Z | Sync Multiple databases using kafka connector | 2,104 |
null | [] | [
{
"code": "\n{\n\t\"reservedQuantity\": 100,\n\t\"sku\": \"ABCD\",\n\t\"inStock\": 200,\n\t\"shipReserved\": 400\n}\n",
"text": "Hello,\nFor locking on document level we do have findAndModify which locks the document even if parallel calls are received and only releases the lock until the data is updated so that the next task in queue will read only the updated document.\nBut is there anything for locking a sub-document or specific key in a document?\nFor my question let’s take an example of mongo data for a single document:Let’s say there is a high traffic with parallel calls to update reservedQuantity for sku ABCD . Is there any function which enables to lock only the key reservedQuantity until it is updated and meanwhile allows other calls to update data in parallel and consume locks for other keys in the same document?",
"username": "Abdullah_Amin"
},
{
"code": "",
"text": "Please vote … Hi\naccording to this reference: https://www.mongodb.com/blog/post/how-to-select--for-update-inside-mongodb-transactions\n\n\nWhen I lock a document with a field with a new ObjectID, the whole document is locked!\n\nIdea :\n\nOperations:\ni have three fields...",
"username": "Abolfazl_Ziaratban"
}
] | Lock on sub-document in mongo | 2022-01-05T20:03:36.915Z | Lock on sub-document in mongo | 1,943 |
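MongoDB does not expose per-field locks, so the usual workaround for the scenario in the thread above is a different technique: fold the expectation about that one field into the update filter, making the write atomic and conditional on just that field. A hedged mongosh sketch (the collection name inventory is an assumption; the fields come from the example document):

```javascript
const qty = 10; // amount a caller wants to release from the reservation

const updated = db.inventory.findOneAndUpdate(
  // Matches only while reservedQuantity is still large enough, so two
  // parallel callers cannot both succeed against the same quantity.
  { sku: "ABCD", reservedQuantity: { $gte: qty } },
  { $inc: { reservedQuantity: -qty } },
  { returnDocument: "after" }
);

if (updated === null) {
  // Another writer changed reservedQuantity first: retry or report a conflict.
}
```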
null | [
"next-js"
] | [
{
"code": "import clientPromise from \"../../lib/mongodb\";\n\nexport default async (req, res) => {\n try {\n const client = await clientPromise;\n const db = client.db(\"sample_mflix\");\n const id = `${req.query.movieId}`;\n\n const movie = await db\n .collection(\"movies\")\n .findOne({\"_id\": ObjectId(id)})\n \n \n res.json(movie);\n } catch (e) {\n console.error(e);\n }\n};\n",
"text": "I’m trying to accomplish the “homework” from the How to intergrate MongoDb with your next.js app tutorial.“As a homework exercise, why don’t you create an API route that returns a single movie based on a user provided id?”I have a my file in set up like so : pages/api/[movieId].jsIt looks like this :I’m getting 404 error in browser at http://localhost:3000/api/movies/573a1394f29313caabcdf67a, and at http://localhost:3000/api/573a1394f29313caabcdf67a, there is something that I am not fully comprehending about dynamic routes yet. What would be the ‘best practice’ way of accomplishing this homework prompt?",
"username": "Michael_Jarrett"
},
{
"code": "http://localhost:3000/pages/api/573a1394f29313caabcdf67a.js\nhttp://localhost:3000/api/573a1394f29313caabcdf67a.js\n",
"text": "You writepages/api/[movieId].jsSo may be you could tryororremove .js fromfile in set up like so : pages/api/[movieId].js",
"username": "steevej"
},
{
"code": "idhttp://localhost:3000/api/movies/573a1394f29313caabcdfa3e_idsample_mflix",
"text": "Thanks you for your reply. To clarify a bit, the actual ‘prompt’ from the tutorial isBlockquote\nAs a homework exercise, why don’t you create an API route that returns a single movie based on a user provided id? To give you some pointers, you’ll use Next.js Dynamic API Routes to capture the id . So, if a user calls http://localhost:3000/api/movies/573a1394f29313caabcdfa3e , the movie that should be returned is Seven Samurai . Another tip, the _id property for the sample_mflix database in MongoDB is stored as an ObjectID, so you’ll have to convert the string to an ObjectID.Previously, it has you set up a route in pages/api repository at movies.js, for a long list of movies. Is the ‘.js’ not proper naming convention for routes? I need to set up the movie id as the parameter for the route described in the homework.",
"username": "Michael_Jarrett"
},
{
"code": "http://localhost:3000/api/movie/573a1394f29313caabcdf67a\n",
"text": "According to the Next.js documentation you shared andcreate an API route that returns a single movieI would say that you should set your file as pages/api/movie/[movieId].js and use",
"username": "steevej"
},
{
"code": "import clientPromise from \"../../lib/mongodb\";\n\nexport default async (req, res) => {\n try {\n const client = await clientPromise;\n const db = client.db(\"sample_mflix\");\n\n const movies = await db\n .collection(\"movies\")\n .find({})\n .sort({ metacritic: -1 })\n .limit(10)\n .toArray();\n\n res.json(movies);\n } catch (e) {\n console.error(e);\n }\n};\n\nimport { ObjectId } from \"mongodb\";\nimport clientPromise from \"../../lib/mongodb\";\n\nexport default async (req, res) => {\n \n try { \n const id = req.query.id;\n const client = await clientPromise;\n const db = client.db(\"sample_mflix\");\n\n const movies = db.collection(\"movies\");\n const movie = await movies.findOne({ _id:ObjectId(id)});\n if(movie){\n res.json(movie);\n }else{\n res.status(404).json({message: \"Movies not found\"})\n } \n } catch (e) {\n console.error(e);\n res.status(500).json({ message: \"Internal server error\" }); \n }\n};\n\n",
"text": "Okay, I got to be getting closer…as per the tutorial, first we set up a route to get multiple movies. It is set up at pages/api/movies.js and it looks like thisThen, we are asked to set up the dynamic route that is passing the movie id as a param, I have that set up at pages/api/[movieId].js and it now looks like thisI am still getting a 404 error at http://localhost:3000/api/movies/573a1394f29313caabcdf67a,as well as at http://localhost:3000/api/movie/573a1394f29313caabcdf67aWhat am I missing here? I’ve tried multiple movie ids as the param in the browser all of which have returned 404 errors.",
"username": "Michael_Jarrett"
},
{
"code": "",
"text": "Update : when I pass a movie id at api/573a1396f29313caabce3f2c, I get “movie not found”, so the else block in [movieId] is running for sure. I have tried multiple movie ids but get the same error for everyone.",
"username": "Michael_Jarrett"
},
{
"code": "",
"text": "If you setup your route with [movieId] the you should use req.query.movieId as the variable name in your code rather than req.query.id.If you want to use the variable req.query.id in your code, you have to define your route with [id] rather than [movieId].",
"username": "steevej"
},
{
"code": "",
"text": "@steevej - superlatives, that worked, thank you so much",
"username": "Michael_Jarrett"
}
] | Trying to do dynamic route "homework" from next.js and mongoDb tutorial | 2023-03-20T20:54:29.588Z | Trying to do dynamic route “homework” from next.js and mongoDb tutorial | 1,521 |