image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [] | [
{
"code": "",
"text": "Hi,I have to create a new atlas cluster with admin api key, it works when i deploy the terraform code on my local as my API key access list has my ip added, and when i call the same through azure yaml pipeline it fails as the pipeline agent IP is dynamic ex if i use Europe west i cannot white list all the range, is there a way to address this issue (i cannot use self hosted on azure devops)?Thanks in advance!",
"username": "Kasirajan_Sethuraman"
},
{
"code": "",
"text": "Hi @Kasirajan_Sethuraman - Welcome to the community the pipeline agent IP is dynamic ex if i use Europe west i cannot white list all the range, is there a way to address this issue (i cannot use self hosted on azure devops)?Just to clarify, when you state you “cannot white list all the range”, is this something you are unable to do in Atlas (e.g. error message, greyed out buttons, etc) or is this a security requirement on your end?Curious to know if there’s a list or set of IP ranges (per region) for each of the pipeline agents and if you’ve tried adding that to the API key access list.Regards,\nJason",
"username": "Jason_Tran"
}
] | I have a new use case that i have to create a mongo cluster with azure pipeline terraform Iac | 2023-04-04T11:45:04.871Z | I have a new use case that i have to create a mongo cluster with azure pipeline terraform Iac | 597 |
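One way to work around the dynamic agent IP, sketched below, is to have an early pipeline step look up the agent's current egress IP and add it to the API key's access list through the Atlas Administration API before Terraform runs. This is only an illustrative Node 18+ sketch: the org ID, API key ID, environment variable names and the accessList endpoint path are assumptions to verify against the Atlas Administration API documentation, and the call itself needs HTTP digest authentication, so the script only prints an equivalent curl command for a pipeline step.

```js
// Sketch (Node 18+): look up the pipeline agent's egress IP and prepare an Atlas Admin API
// call that adds it to the API key's access list before `terraform apply` runs.
// ORG_ID, API_KEY_ID and the endpoint path are assumptions/placeholders - verify them
// against the Atlas Administration API docs for your Atlas version.
const ORG_ID = process.env.ATLAS_ORG_ID;          // hypothetical env vars set in the pipeline
const API_KEY_ID = process.env.ATLAS_API_KEY_ID;

async function main() {
  const agentIp = (await (await fetch('https://api.ipify.org?format=json')).json()).ip;
  const body = JSON.stringify([{ ipAddress: agentIp, comment: 'azure-devops-agent' }]);
  // The Admin API uses HTTP digest auth, which the built-in fetch does not handle,
  // so this sketch only prints an equivalent curl command to run as a pipeline step.
  console.log(
    `curl --digest -u "$ATLAS_PUB_KEY:$ATLAS_PRIV_KEY" -H "Content-Type: application/json" ` +
    `-X POST "https://cloud.mongodb.com/api/atlas/v1.0/orgs/${ORG_ID}/apiKeys/${API_KEY_ID}/accessList" ` +
    `-d '${body}'`
  );
}
main().catch(console.error);
```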
null | [
"dot-net",
"api"
] | [
{
"code": "",
"text": "Hello All,\nis it possible to create the collection usigng MongoDB.Driver’s CreateCollection method, that will have the analytical storage enabled? , I cannot find options for that in CreateCollectionOptions class.Regards,\nTomasz",
"username": "Tomasz_Lubocki"
},
{
"code": "db.RunCommand<BsonDocument>(command)",
"text": "Hi, @Tomasz_Lubocki,CosmosDb is a third-party product that is not supported by MongoDB. Please reach out to Microsoft Technical Support about how to configure analytical storage when creating a collection. You will likely have to use db.RunCommand<BsonDocument>(command) as it is unlikely that the driver supports the necessary custom fields for a CosmosDb-specific feature.Sincerely,\nJames",
"username": "James_Kovacs"
}
] | .Net Api for CosmosDb | 2023-04-04T18:59:13.623Z | .Net Api for CosmosDb | 907 |
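For reference, the kind of raw command James points to would look roughly like the sketch below when written for mongosh; the same document can be passed to db.RunCommand&lt;BsonDocument&gt;() from the .NET driver. The customAction and analyticalStorageTtl fields are Cosmos DB extension commands rather than MongoDB options, so their exact names and accepted values should be confirmed against Microsoft's Cosmos DB for MongoDB documentation.

```js
// mongosh sketch of the raw command approach; the "customAction" / "analyticalStorageTtl"
// fields are Cosmos DB-specific extensions, not MongoDB ones - verify against Microsoft docs.
db.runCommand({
  customAction: "CreateCollection",   // Cosmos DB-specific action (assumption: check MS docs)
  collection: "myCollection",
  analyticalStorageTtl: -1            // assumed to mean "keep analytical store data indefinitely"
});
```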
null | [
"python",
"connecting"
] | [
{
"code": "",
"text": "Hi,\nI have a python script that connects to a database on MongoDB Atlas. Yesterday everything worked fine but today i’m working from a different location and my script can no longer make a connection to the database. this is the error: “ServerSelectionTimeoutError( pymongo.errors.ServerSelectionTimeoutError…”\nI’ve found some previous posts stating the same issue, but the solution there is to whitelist the IP address. I’ve whitelisted all IP addresses (it eve says in the UI that my current IP address is included) but still the same error.\nDoes anybody have an idea what i could be missing?\nThanks in advance",
"username": "Sam_Vermeir"
},
{
"code": "",
"text": "Can you connect by shell or other tool like Compass?\nCould be network related or firewall issue",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I tried connecting with Compass but it also gave a connection error. that’s why i thought that the issue was the IP but i added the “0.0.0.0/0” IP, so that should not be a problem, right? it is also not temporary.",
"username": "Sam_Vermeir"
},
{
"code": "",
"text": "Switch to another internet connection like your mobile hotspot and see if it works\nMay be your new location ISP not allowing/blocking the connection",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "On my hotspot it does seem to work so it must be related to the settings of the wifi there.\nI would like to use Heroku or AWS for my script that then adds stuff to the database in the cloud.\nHow can i be sure that the same thing doesn’t happen there?",
"username": "Sam_Vermeir"
},
{
"code": "mongodb://<username>:<password>@**edited**.mongodb.net:27017,**edited**.mongodb.net:27017,**edited**.mongodb.net:27017/?ssl=true&replicaSet=**edited*&authSource=admin&retryWrites=true&w=majority\nmongodb+srv://%3Cusername%3E:%3Cpassword%3E@**edited**j.mongodb.net/?retryWrites=true&w=majority\n",
"text": "Hi! I don’t know if this can help you, but today we need to change all the connection strings of our containers because something (I don’t know what) changes. You can check the string connection from your Cluster page and then in the “Connect” button.For example, for Python 3.4 or later, the connection string is:and for Python 3.6 or later is:Hope it helpsRegards,\nVíctor",
"username": "Victor_Merino"
},
{
"code": "",
"text": "On my hotspot it does seem to work so it must be related to the settings of the wifi there.Interesting. Based off this information and the fact you’re able to connect from the original location, it leads me to believe that the failure to connect might be due to a network setting/configuration from the “wifi there” location you’ve noted.I’ve seen a few posts in the pasts regarding connectivity failures from cafe wifi’s and such in which the wifi in these particular spots did not allow outbound traffic to/from port 27017 (as one example).How can i be sure that the same thing doesn’t happen there?The following blog post How to Deploy MongoDB on Heroku | MongoDB may be of use to you. It does have details regarding Configuring Heroku IP addresses in MongoDB Atlas as well.Regards,\nJason",
"username": "Jason_Tran"
}
] | Connection timed out | 2023-04-04T11:20:21.267Z | Connection timed out | 889 |
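A quick way to confirm whether a given network blocks the cluster, along the lines of the suggestion to test with another tool, is a short script that lowers the server selection timeout and pings the deployment. This is a minimal Node.js sketch; the connection string is a placeholder.

```js
// Minimal Node.js sketch to fail fast and surface the underlying network error when a
// wifi network blocks outbound traffic to the cluster (e.g. port 27017).
const { MongoClient } = require('mongodb');

async function checkConnectivity(uri) {
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 }); // 5s instead of 30s
  try {
    await client.db('admin').command({ ping: 1 });
    console.log('Reachable: this network allows connections to the cluster.');
  } catch (err) {
    console.error('Not reachable from this network:', err.message);
  } finally {
    await client.close();
  }
}

checkConnectivity('mongodb+srv://<username>:<password>@<cluster>.mongodb.net/');
```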
null | [
"production",
"golang",
"transactions"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to release version 1.11.4 of the MongoDB Go Driver.This release includes optimizations to reduce memory consumption in reading compressed wire messages. The release also offers codec support for decoding struct container fields as either map or document types, rather than an ancestor type. Additionally, the mongo package will support a closed approach for checking transaction error labels. For more information please see the 1.11.4 release notes.You can obtain the driver source from GitHub under the v1.11.4 tag.`Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team",
"username": "Preston_Vasquez"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Go Driver 1.11.4 Released | 2023-04-04T22:43:12.731Z | MongoDB Go Driver 1.11.4 Released | 894 |
null | [
"queries"
] | [
{
"code": "{\n _id: ObjectId(\"642c0ec1cb1151ed3126c87c\"),\n id: 171,\n name: 'Papua new Guinea',\n iso3: 'PNG',\n iso2: 'PG',\n numeric_code: '598',\n phone_code: '675',\n capital: 'Port Moresby',\n currency: 'PGK',\n currency_name: 'Papua New Guinean kina',\n currency_symbol: 'K',\n tld: '.pg',\n native: 'Papua Niugini',\n region: 'Oceania',\n subregion: 'Melanesia',\n timezones: [\n {\n zoneName: 'Pacific/Bougainville',\n gmtOffset: 39600,\n gmtOffsetName: 'UTC+11:00',\n abbreviation: 'BST',\n tzName: 'Bougainville Standard Time[6'\n },\n {\n zoneName: 'Pacific/Port_Moresby',\n gmtOffset: 36000,\n gmtOffsetName: 'UTC+10:00',\n abbreviation: 'PGT',\n tzName: 'Papua New Guinea Time'\n }\n ],\n latitude: '-6.00000000',\n longitude: '147.00000000',\n emoji: '🇵🇬',\n emojiU: 'U+1F1F5 U+1F1EC',\n translations: {\n kr: '파푸아뉴기니',\n 'pt-BR': 'Papua Nova Guiné',\n pt: 'Papua Nova Guiné',\n nl: 'Papoea-Nieuw-Guinea',\n hr: 'Papua Nova Gvineja',\n fa: 'پاپوآ گینه نو',\n de: 'Papua-Neuguinea',\n es: 'Papúa Nueva Guinea',\n fr: 'Papouasie-Nouvelle-Guinée',\n ja: 'パプアニューギニア',\n it: 'Papua Nuova Guinea',\n cn: '巴布亚新几内亚',\n tr: 'Papua Yeni Gine'\n }\n },\n",
"text": "How can i get all the documents for which the length of timezones is greater than 2 ?",
"username": "khemchand_N_A"
},
{
"code": "",
"text": "You may use $size to query based on the size of an array. Also look at https://www.mongodb.com/docs/manual/tutorial/query-arrays/.",
"username": "steevej"
},
{
"code": "",
"text": "I was using it with $gt, as i want documents for which the length of the “timezones” array is greater than 2.\nerrorinmongodb1045×100 29.6 KB\nbut i am getting error.",
"username": "khemchand_N_A"
},
{
"code": "db.countries.find( { \"timezones.2\" : { \"$exists\" : true } } )\n",
"text": "Hummm! Don’t know.You may try my hack",
"username": "steevej"
}
] | I have a collection of documents in my MongoDB database, and each document contains an array of timezones. I want to retrieve all the documents for which the length of the "timezones" array is greater than 2 | 2023-04-04T21:46:36.028Z | I have a collection of documents in my MongoDB database, and each document contains an array of timezones. I want to retrieve all the documents for which the length of the “timezones” array is greater than 2 | 637 |
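For reference, the error most likely comes from combining $size with $gt directly in the filter, since $size in a plain find() filter only matches an exact array length. Wrapping the comparison in $expr, as sketched below, returns the documents whose timezones array has more than two elements (the collection name is assumed to be countries, as in the earlier example).

```js
// $size in a find() filter only accepts an exact number, so { $size: { $gt: 2 } } fails.
// Comparing the computed size inside $expr works:
db.countries.find({ $expr: { $gt: [ { $size: "$timezones" }, 2 ] } });

// Equivalent trick from the thread: match documents whose third array element exists.
db.countries.find({ "timezones.2": { $exists: true } });
```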
null | [
"aggregation",
"queries",
"node-js",
"data-modeling"
] | [
{
"code": "const analytics = await appointments\n .aggregate([\n {\n $unwind: \"$details\",\n },\n {\n $unwind: \"$details.employees\",\n },\n {\n $unwind: \"$payment\",\n },\n {\n $group: {\n _id: {\n month: {\n $month: {\n $dateFromString: {\n dateString: \"$details.date\",\n format: \"%Y-%m-%d\",\n },\n },\n },\n year: {\n $year: {\n $dateFromString: {\n dateString: \"$details.date\",\n format: \"%Y-%m-%d\",\n },\n },\n },\n },\n employees: { $sum: { $size: \"$details.employees\" }},\n amountMade: { $sum: \"$payment.amount\" },\n count: { $sum: 1 },\n },\n },\n { $sort: { \"_id.month\": 1 } },\n ])\n .toArray();\n{\n \"_id\": {\n \"$oid\": \"6391da7061126c0016580b9d\"\n },\n \"details\": {\n \"company\": {\n \"id\": \"M70120\",\n \"name\": \"My Clinic\"\n },\n \"date\": \"2022-12-09\",\n \"purchaseOrderNumber\": \"435476657\",\n \"clinic\": \"Churchill\",\n \"ndaAccepted\": true,\n \"employees\": [\n {\n \"id\": \"bUfj8N3hhZ3dqo3A9HGLAE\",\n \"name\": \"Someone\",\n \"idNumber\": \"88788758751\",\n \"comments\": [],\n \"occupation\": \"Worker\",\n \"services\": [\n {\n \"price\": {\n \"$numberDouble\": \"37.43\"\n },\n \"id\": \"cannabis\"\n },\n {\n \"price\": {\n \"$numberInt\": \"445\"\n },\n \"id\": \"clearance\"\n }\n ],\n \"sites\": [\n {\n \"id\": \"kxnhmvU1UqFcnUxFUMHGNQ\",\n \"name\": \"Proud Mines\",\n \"hasAccessCard\": true\n }\n ]\n }\n ]\n },\n \"usersWhoCanEdit\": [],\n \"usersWhoCanManage\": [\n {\n \"id\": \"DAV17421\",\n \"name\": \"David Davies\"\n }\n ],\n \"payment\": {\n \"proofOfPayment\": \"\",\n \"amount\": {\n \"$numberDouble\": \"517.4300000000001\"\n }\n },\n \"isVoided\": false,\n \"isComplete\": true,\n \"messages\": [\n {\n \"message\": \"Hi My Clinic team\",\n \"author\": {\n \"id\": \"ADM81947\",\n \"name\": \"Admin \",\n \"role\": \"admin\"\n },\n \"createdAt\": \"2022-12-08 14:41:02\"\n },\n {\n \"message\": \"\",\n \"author\": {\n \"id\": \"ADM81947\",\n \"name\": \"Admin \",\n \"role\": \"admin\"\n },\n \"createdAt\": \"2022-12-08 14:41:02\"\n }\n ],\n \"status\": \"approved\",\n \"id\": \"WILI751191CHU\",\n \"invoice\": {\n \"id\": \"506dada3-d92a-4ee8-bc23-0a7129196c24\",\n \"amount\": {\n \"$numberDouble\": \"517.4300000000001\"\n },\n \"date\": \"2022-12-08T12:39:29.788Z\"\n }\n}\n[\n { _id: { month: 2, year: 2023 }, amountMade: 179.2, count: 2, employeesCaterdTo: 10, servicesRendered: 3 },\n { _id: { month: 3, year: 2023 }, amountMade: 179.2, count: 2, employeesCaterdTo: 5, servicesRendered: 8 },\n { _id: { month: 4, year: 2023 }, amountMade: 7130, count: 5,employeesCaterdTo: 2, servicesRendered: 9 }\n]\n",
"text": "I currently have an appointments booking system for clinics on my ReactJS website supported by NodeJS and ExpressJS as my server.\nI am trying to calculate the amount paid, number of appointments and number of employees which services were rendered to for each month.I have the following codeThis does not work as expected, I keep on getting errors such as an error that tells me $size is not an operator for group.I am not going to post a specific error message but what I would like is some help which would tell me how I can accomplish what I am trying to achieveHere is an example document copied from my atlas dbwith the js code I want to get some aggregate analytics data in the form",
"username": "Ayabonga_Qwabi"
},
{
"code": "",
"text": "First thing that jumps to the eyes is that you $unwind details but details is not an array.I would correct that and try again.",
"username": "steevej"
}
] | How to use Mongodb aggregate group to count the size of a nested array? | 2023-04-04T16:02:26.199Z | How to use Mongodb aggregate group to count the size of a nested array? | 910 |
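A corrected pipeline along the lines of steevej's remark might look like the sketch below: details and payment are embedded documents rather than arrays, so the $unwind stages are dropped and the lengths of the embedded arrays are summed directly. This is only a sketch based on the sample document in the thread, not a tested solution; servicesRendered is computed by summing the size of each employee's services array.

```js
// Sketch: no $unwind on embedded documents; sum array sizes per document instead.
const analytics = await appointments.aggregate([
  {
    $group: {
      _id: {
        month: { $month: { $dateFromString: { dateString: "$details.date", format: "%Y-%m-%d" } } },
        year:  { $year:  { $dateFromString: { dateString: "$details.date", format: "%Y-%m-%d" } } }
      },
      amountMade: { $sum: "$payment.amount" },
      count: { $sum: 1 },
      employeesCaterdTo: { $sum: { $size: { $ifNull: ["$details.employees", []] } } },
      servicesRendered: {
        $sum: {
          $sum: {
            $map: {
              input: { $ifNull: ["$details.employees", []] },
              as: "e",
              in: { $size: { $ifNull: ["$$e.services", []] } }
            }
          }
        }
      }
    }
  },
  { $sort: { "_id.year": 1, "_id.month": 1 } }
]).toArray();
```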
null | [
"c-driver"
] | [
{
"code": "aligned_alloc()helloconnectionIddouble",
"text": "Announcing 1.23.3 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.Fixes:Fixes:Thanks to everyone who contributed to this release.",
"username": "Colby_Pike"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB C Driver 1.23.3 Released | 2023-04-04T21:55:34.257Z | MongoDB C Driver 1.23.3 Released | 863 |
null | [
"queries",
"crud",
"performance",
"transactions"
] | [
{
"code": "22261 {\n \"name\": \"a name of a category\",\n \"categoryId\": ObjectId(\"xxxxx\"),\n }\nnamecategoryIddb.products.updateMany({\"categoryDetails.categoryId\": ObjectId(\"63d224e007e09cd3c6526038\")},\n{$set: {\"categoryDetails.name\": \"New category name\"}})\n4.412sec{\n \"acknowledged\" : true,\n \"insertedId\" : null,\n \"matchedCount\" : 9538.0,\n \"modifiedCount\" : 9538.0,\n \"upsertedCount\" : 0.0\n}\n20sec4.412sec20secdb.products.find({\"categoryDetails.categoryId\": ObjectId(\"63d224e007e09cd3c6526038\")}).explain(){\n \"explainVersion\" : \"1\",\n \"queryPlanner\" : {\n \"namespace\" : \"Preproduction.products\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"categoryDetails.categoryId\" : {\n \"$eq\" : ObjectId(\"63d224e007e09cd3c6526038\")\n }\n },\n \"collation\" : {\n \"locale\" : \"it\",\n \"caseLevel\" : false,\n \"caseFirst\" : \"off\",\n \"strength\" : 2.0,\n \"numericOrdering\" : false,\n \"alternate\" : \"non-ignorable\",\n \"maxVariable\" : \"punct\",\n \"normalization\" : false,\n \"backwards\" : false,\n \"version\" : \"57.1\"\n },\n \"maxIndexedOrSolutionsReached\" : false,\n \"maxIndexedAndSolutionsReached\" : false,\n \"maxScansToExplodeReached\" : false,\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"categoryDetails.categoryId\" : 1.0\n },\n \"indexName\" : \"categoryDetails.categoryId_1\",\n \"collation\" : {\n \"locale\" : \"it\",\n \"caseLevel\" : false,\n \"caseFirst\" : \"off\",\n \"strength\" : 2.0,\n \"numericOrdering\" : false,\n \"alternate\" : \"non-ignorable\",\n \"maxVariable\" : \"punct\",\n \"normalization\" : false,\n \"backwards\" : false,\n \"version\" : \"57.1\"\n },\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"categoryDetails.categoryId\" : [\n\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2.0,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"categoryDetails.categoryId\" : [\n \"[ObjectId('63d224e007e09cd3c6526038'), ObjectId('63d224e007e09cd3c6526038')]\"\n ]\n }\n }\n },\n \"rejectedPlans\" : [\n\n ]\n },\n \"command\" : {\n \"find\" : \"products\",\n \"filter\" : {\n \"categoryDetails.categoryId\" : ObjectId(\"63d224e007e09cd3c6526038\")\n },\n \"$db\" : \"Preproduction\"\n },\n \"serverParameters\" : {\n \"internalQueryFacetBufferSizeBytes\" : 104857600.0,\n \"internalQueryFacetMaxOutputDocSizeBytes\" : 104857600.0,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\" : 16793600.0,\n \"internalDocumentSourceGroupMaxMemoryBytes\" : 104857600.0,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\" : 33554432.0,\n \"internalQueryProhibitBlockingMergeOnMongoS\" : 0.0,\n \"internalQueryMaxAddToSetBytes\" : 104857600.0,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : 104857600.0\n },\n \"ok\" : 1.0,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1680093107, 6),\n \"signature\" : {\n \"keyId\" : NumberLong(7183337578563633158)\n }\n },\n \"operationTime\" : Timestamp(1680093107, 6)\n}\n\n",
"text": "Hello,I have 22261 documents in a collection.Inside each of these documents, among other things, there is a subdocument:I am attempting to update the category name, by a matching categoryId by running this query:The query takes 4.412sec to execute and returns the following:Given the parameters provided is this considered good? Where things get really bad is when I try to run this update query inside a transaction. Even as the only query inside the transaction, it takes around 20sec to execute.Is this normal? Are 4.412sec as a standalone query and 20sec within transaction normal performance numbers? For reference, I am attaching thedb.products.find({\"categoryDetails.categoryId\": ObjectId(\"63d224e007e09cd3c6526038\")}).explain()output below:Thank you",
"username": "Vladimir"
},
{
"code": "\"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n4.412sec22261 \"matchedCount\" : 9538.0,\n \"modifiedCount\" : 9538.0,\n",
"text": "Given that you getthe following indeed looks slow.4.412sec as a standalone queryConsidering that you only have22261 documents in a collectionand only updatingI suspect that your hardware setup is insufficient for your use-case and data set.",
"username": "steevej"
},
{
"code": "",
"text": "You mentioned that “given that you get Input Stage as IXSCAN”. What do you mean by that? Could you share what information does this give you that allows you to draw further conclusions in your comment?",
"username": "Vladimir"
},
{
"code": "",
"text": "Please readand thenhttps://www.google.com/search?q=mongodb+IXSCAN+vs+COLLSCAN",
"username": "steevej"
},
{
"code": "",
"text": "I am aware that IXSCAN is generally better than COLLSCAN since it means the query has to traverse fewer documents. I was just wondering if there was something more to consider when reading your previous comment.",
"username": "Vladimir"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is this Update performance ok? | 2023-03-29T12:34:24.570Z | Is this Update performance ok? | 965 |
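The explain output above only contains the queryPlanner section. Re-running it in executionStats mode, as sketched below, adds totalKeysExamined, totalDocsExamined and executionTimeMillis, which makes it easier to tell whether the 4.4 seconds is spent fetching the ~9,538 matching documents or elsewhere (for example cache or disk pressure).

```js
// mongosh sketch: gather execution statistics for the same filter used by the updateMany.
db.products.explain("executionStats").find(
  { "categoryDetails.categoryId": ObjectId("63d224e007e09cd3c6526038") }
);
```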
null | [
"replication",
"python"
] | [
{
"code": "",
"text": "I have been trying to create a replica set for a work project, but I can’t find any help, only, all videos are from MongoDB 4.0. I have been using ChatGPT to create a replica set using pymongo he made this code:`from pymongo import MongoClient\nfrom pymongo.errors import OperationFailurereplica_set_name = ‘myReplicaSet’\nhosts = [‘localhost:27017’, ‘localhost:27018’, ‘localhost:27019’]client = MongoClient(hosts[0])config = {\n‘_id’: replica_set_name,\n‘members’: [\n{‘_id’: 0, ‘host’: hosts[0]},\n{‘_id’: 1, ‘host’: hosts[1]},\n{‘_id’: 2, ‘host’: hosts[2]}\n]\n}try:\nclient.admin.command(‘replSetInitiate’, config)\nprint(f\"Replica set ‘{replica_set_name}’ created successfully\")\nexcept OperationFailure as e:\nprint(f\"Failed to create replica set: {e}\")`But once I run it in VS Code gives me this error:\npymongo.errors.OperationFailure: This node was not started with replication enabled., full error: {‘ok’: 0.0, ‘errmsg’: ‘This node was not started with replication enabled.’, ‘code’: 76, ‘codeName’: ‘NoReplicationEnabled’}",
"username": "Henrique_Eira"
},
{
"code": "",
"text": "How did you start your 3 mongods?\nYou should add replsetname in the config file or mention the same if you started from command line\nI am guessing you most likely connected to default mongod whic runs on port 27017 as service\nThis will not have any replset param setRefer to mongodb documentation",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "The config file is the one here “C:\\Program Files\\MongoDB\\Server\\6.0\\bin” if so, what parameter do I edit and how?\nCurrently, it’s like this.\n\nimage893×894 14.9 KB\n\nAnother question is it easier to create a replica set using MongoDB compass, I’m also having problems with that approach. If you know how to do it in compass, please tell me.",
"username": "Henrique_Eira"
},
{
"code": "",
"text": "Hi @Henrique_Eira,\nas mentioned from @Ramachandra_Tummala, you did not add the correct parameters to initialize the repl set.\n&this should help you!BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "I managed to follow a guide and created a replica set with 2 secondary nodes from this YouTube video: (4) 8.MongoDB DBA Tutorials: MongoDB Replication Setup on Windows - YouTube\nBut after I restarted my pc I can’t access any of the nodes, what do I need to do?",
"username": "Henrique_Eira"
},
{
"code": "",
"text": "When you reboot your server all mongods will be terminated\nTry to bring up your mongods",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "How do I do that?\nIn mongo Compass i put in the URI: mongodb://localhost:27017, localhost:27020, localhost:27021/?replicaSet=r2schools\nand it gives me the error with TLS/SSL as default: getaddrinfo ENOTFOUND localhost\nwith TLS/SSL on gives me this error: read ECONNRESET.\nI believe that I have to put “r2schools” in the URI because of how I created the replicas.",
"username": "Henrique_Eira"
},
{
"code": "",
"text": "What you have put us the connect string to connect to your replica\nFirst your replica should be up & running to connect to it\nPlease follow steps you used to start mongods\nYou have to stop default mongod which came up as service\nThen start 3 mongods of your replica starting with primary then secondaries\nOnce all 3 are up try to connect with your connect string",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "After I restart my pc to star the 3 mongods I just have to go into cmd and type this?\nmongod --dbpath “c:\\data1\\db” --logpath “c:\\data1\\log\\mongod.log” --port 27020 --storageEngine=wiredTiger --journal --replSet r2schools\nOr is it something else?",
"username": "Henrique_Eira"
},
{
"code": "",
"text": "Yes from cmd line run it\nHave you captured rs status() when your replica was working?\nI think port 27017 was primary as per that video\nWhat port numbers you have used,?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Port 27030 now is primary for some reason, what do I need to retrieve from rs.staus()?",
"username": "Henrique_Eira"
},
{
"code": "",
"text": "So is your replica up?\nrs.status shows status of your replica like which node is primary,node status etc",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "It is now I’m afraid if I restart my pc all the replicas will be gone. This is what shows after rs.status()\n\nimage1918×1037 95.5 KB\n\n\nimage1920×1025 83.7 KB\n\n\nimage1920×913 73.4 KB\n\nAfter I restart, I just run this line in cmd for the two replicas\nfor port 27020–> mongod --dbpath “c:\\data1\\db” --logpath “c:\\data1\\log\\mongod.log” --port 27020 --storageEngine=wiredTiger --journal --replSet r2schools\nfor port 27030 -->mongod --dbpath “c:\\data2\\db” --logpath “c:\\data2\\log\\mongod.log” --port 27030 --storageEngine=wiredTiger --journal --replSet r2schoolsAnd why did port 27030 became primary?",
"username": "Henrique_Eira"
},
{
"code": "",
"text": "Quick update I restarted my pc and ran two cmd windows with the lines: for port 27030 -->mongod --dbpath “c:\\data2\\db” --logpath “c:\\data2\\log\\mongod.log” --port 27030 --storageEngine=wiredTiger --journal --replSet r2schools and for port 27020–> mongod --dbpath “c:\\data1\\db” --logpath “c:\\data1\\log\\mongod.log” --port 27020 --storageEngine=wiredTiger --journal --replSet r2schools\nNow when I do rs.status() the port 27017 shows like this\n\nSince it’s working, I guess there is nothing to worry.\nReplica sets are now working, thank you, Ramachandra_Tummala for your patience and time.",
"username": "Henrique_Eira"
},
{
"code": "",
"text": "Please refer to mongodb documentation on replica\nWhen one of the node in 3 node replica goes down an election takes places and new primary is elected\nRefer to my earlier reply.All 3 nodes should be up for high availability\nMake sure the default mongod on port 27017 is down(if it came up) and bringup mongod on port 27017 from cmd line similar to other 2 nodes",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Henrique_Eira,\nyou’ re using the same dbpath & the same logpath for 3 different instance in the same host…\nas suggested from @Ramachandra_Tummala read from docs how to create correctly a repl set in only one host for test purpose.BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Since I’m running multiple instances on my local machine, there’s not a problem for them to be in different ports?",
"username": "Henrique_Eira"
},
{
"code": "",
"text": "Your dbpath,logpath dirs look good.\nThey are all different like c:\\data1,c:\\data2 etc\nAs long as your mongods use their own dbpath & logpath and port_number you are fine",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "You’ re right, I had read it wrong😂",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Create a replica set on windows with mongo db comunity edition 6.0 | 2023-04-03T08:16:18.974Z | Create a replica set on windows with mongo db comunity edition 6.0 | 2,143 |
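For anyone following this thread later: the original pymongo error ("This node was not started with replication enabled") goes away once each mongod is started with --replSet, after which the set is initiated once from mongosh. A sketch using the ports and set name from this thread is shown below.

```js
// mongosh sketch: run once, connected to one of the mongods started with --replSet r2schools.
rs.initiate({
  _id: "r2schools",
  members: [
    { _id: 0, host: "localhost:27017" },
    { _id: 1, host: "localhost:27020" },
    { _id: 2, host: "localhost:27030" }
  ]
});
rs.status();   // verify one PRIMARY and two SECONDARY members after the election
```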
null | [
"data-modeling",
"python",
"time-series"
] | [
{
"code": "\"Failed to insert document: FunctionError: Failed to insert documents: bulk write exception: write errors: ['time' must be present and contain a valid BSON UTC datetime value]\"import datetime as dt\nurl = \"XXXX/app/data-ozscf/endpoint/data/v1/action/insertOne\"\nvar data = {}\ndata[\"time\"] = dt.datetime.now()\npayload = json.dumps({\n \"collection\": \"time-series\",\n \"database\": \"db\",\n \"dataSource\": \"Cluster0\",\n \"document\": data,\n})\nheaders = {\n 'Content-Type': 'application/json',\n 'Access-Control-Request-Headers': '*',\n 'api-key': \"XXX\", \n}\nresponse = requests.request(\"POST\", url, headers=headers, data=payload, verify=False)\n",
"text": "Hi,We need to use Time Series feature in MongoDB & currently we are using MongoDB Data API for adding values in our Database.We need to add time in the body in the POST method but we are getting errors as\"Failed to insert document: FunctionError: Failed to insert documents: bulk write exception: write errors: ['time' must be present and contain a valid BSON UTC datetime value]\"Code:without the time it is working perfectly fine, Please help",
"username": "Ankit_Arora"
},
{
"code": "from bson import json_util\n...\npayload = json_util.dumps(...) # <--- use bson.json_util.dumps to encode MongoDB Extended JSON\n\nheaders = {\n 'Content-Type': 'application/ejson', # <--- use ejson\n 'Access-Control-Request-Headers': '*',\n 'api-key': \"XXX\", \n}\n",
"text": "Could you try using MongoDB Extended JSON which preserves type information: https://www.mongodb.com/docs/atlas/api/data-api/#specify-the-request-data-format",
"username": "Shane"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Insert date time using Data API in Python | 2023-04-04T12:56:21.626Z | Insert date time using Data API in Python | 1,201 |
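For completeness, the same request written as a plain HTTP call with an Extended JSON body might look like the Node 18+ sketch below; the URL and API key are placeholders, and the time value is sent as an EJSON $date with Content-Type application/ejson so the Data API stores a real BSON datetime.

```js
// Sketch (Node 18+): insert a time-series document through the Data API using Extended JSON.
(async () => {
  const payload = {
    dataSource: "Cluster0",
    database: "db",
    collection: "time-series",
    document: { time: { $date: new Date().toISOString() } }   // relaxed EJSON date
  };

  const res = await fetch("https://<data-api-url>/endpoint/data/v1/action/insertOne", {
    method: "POST",
    headers: { "Content-Type": "application/ejson", "api-key": "<API_KEY>" },
    body: JSON.stringify(payload)
  });
  console.log(await res.json());
})();
```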
null | [
"node-js"
] | [
{
"code": "BinaryUUIDObjectIdclass ObjectId {\n static createFromHexString(hex: string): ObjectId;\n static createFromBase64(base64: string): ObjectId;\n}\n\nclass Binary {\n static createFromHexString(hex: string, subType? number): Binary;\n static createFromBase64(base64: string, subType? number): Binary;\n}\n\nclass UUID extends Binary {\n static override createFromHexString(hex: string): UUID;\n static override createFromBase64(base64: string): UUID;\n}\n",
"text": "The MongoDB Node.js team is pleased to announce version 5.2.0 of the mongodb package!This release includes driver support for automatically obtaining Azure credentials when using automatic client side encryption. You can find a tutorial for using Azure and automatic encryption here: Use Automatic Queryable Encryption with AzureAdditionally, we have a number of minor bug fixes listed below.NOTE: This release includes some experimental features that are not yet ready for use. As a reminder, anything marked experimental is not a part of the stable driver API and is subject to change without notice.With this release we have pulled in BSON 5.2.0 which has added APIs to create BSON Binary / UUID / ObjectId types from hex and base64 strings.We invite you to try the mongodb library immediately, and report any issues to the NODE project.",
"username": "neal"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Node.js Driver 5.2.0 Released | 2023-04-04T18:21:07.348Z | MongoDB Node.js Driver 5.2.0 Released | 993 |
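A quick usage sketch of the new static constructors mentioned in the release notes is shown below; the example values are arbitrary, and UUID exposes the same helpers per the signatures above.

```js
// Usage sketch of the static constructors added in BSON 5.2.0 (example values are arbitrary).
const { ObjectId, Binary } = require('bson');

const id  = ObjectId.createFromHexString('642c0ec1cb1151ed3126c87c'); // 24-character hex string
const bin = Binary.createFromBase64('aGVsbG8gd29ybGQ=');              // default BSON subtype

console.log(id.toHexString(), bin.sub_type);
```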
null | [
"queries",
"node-js"
] | [
{
"code": "/node_modules/mongodb/lib/operations/add_user.js:16\n this.options = options ?? {};\nconst { MongoClient } = require(\"mongodb\").MongoClient;\nconst http = require('http');\nconst hostname = 'preworn.co.uk';\nconst port = 3000;\nconst uri =\"mongodb://\"SERVER IP ADDRESS\":27017/\";\nconst client = new MongoClient(uri);\nconst server = http.createServer((req, res) => \n{\n res.statusCode = 200;\n res.setHeader('Content-Type', 'text/html');\n async function run() \n {\n try \n {\n await client.connect();\n const db = client.db(\"preworn_market\");\n const specifics = db.collection(\"product_specifics\");\n const cursor = specifics.find();\n await cursor.forEach(function(myDoc) { res.write(\"<h4>\"+myDoc.category+\"</h4>\"); });\n } \n finally \n {\n await client.close();\n res.end(\"\");\n }\n }\n run().catch(console.dir);\n\n});\n\nserver.listen(port, hostname, () => \n{\n console.log(`Server running at http://${hostname}:${port}/`);\n});\n\n",
"text": "I have tested the code with a localhost and it works fine however when I try to test the connection on the server I get the errorThe process is npm init and give the project a name npm install mongodb then run node file, it seems like the error is on the package as when I run node without the MongoClient the node runs and running this script (locally on a window machine the server is linux) it works even with the same ip address.\nHere is the code I use for the connection",
"username": "Aneurin_Jones"
},
{
"code": "const URI =\"mongodb://\"SERVER IP ADDRESS\":27017/\";\nconst SERVER_IP_ADDRESS = xx.x.x.xxx;\nconst uri = `mongodb://${SERVER_IP_ADDRESS}:27017/`;\n.envconst uri = `mongodb://${process.env.SERVER_IP_ADDRESS}:27017/`;\n",
"text": "Hi @Aneurin_Jones,Welcome to the MongoDB Community forums Looking at this line, it seems that there is a syntactical error.I’ll recommend modifying it to:Alternatively, if the value is being fetched from the .env file:then please modifies it to:I hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Connecting node to mongodb getting error from node module | 2023-03-31T13:37:22.629Z | Connecting node to mongodb getting error from node module | 1,477 |
null | [
"react-native"
] | [
{
"code": "",
"text": "Hello ! IOS App crashes while i perform a write to the db , the record gets added and the app crashes .here is the crash report for referencecrashRelam.txt (57.5 KB)Any light is much appreciated .",
"username": "Adithya_Sundar"
},
{
"code": ".ipshermes…\nThread 7 Crashed:: com.facebook.react.JavaScript\n0 hermes \t 0x108c18a20 0x108c14000 + 18976\n1 hermes \t 0x108e852d8 0x108c14000 + 2560728\n2 hermes \t 0x108e3a3d0 0x108c14000 + 2253776\n3 hermes \t 0x108c650f0 0x108c14000 + 332016\n4 hermes \t 0x108c592c8 0x108c14000 + 283336\n5 hermes \t 0x108c57c34 0x108c14000 + 277556\n6 hermes \t 0x108c3a728 0x108c14000 + 157480\n7 hermes \t 0x108cde9d0 0x108c14000 + 829904\n8 hermes \t 0x108c39a50 0x108c14000 + 154192\n9 hermes \t 0x108c58734 0x108c14000 + 280372\n10 hermes \t 0x108c57c34 0x108c14000 + 277556\n11 hermes \t 0x108c39cf8 0x108c14000 + 154872\n12 hermes \t 0x108c385fc 0x108c14000 + 148988\n13 hermes \t 0x108d0a564 0x108c14000 + 1008996\n14 hermes \t 0x108c39a50 0x108c14000 + 154192\n15 hermes \t 0x108c58734 0x108c14000 + 280372\n16 hermes \t 0x108c57c34 0x108c14000 + 277556\n17 hermes \t 0x108c39cf8 0x108c14000 + 154872\n18 hermes \t 0x108c39560 0x108c14000 + 152928\n19 hermes \t 0x108c58758 0x108c14000 + 280408\n20 hermes \t 0x108c57c34 0x108c14000 + 277556\n21 hermes \t 0x108c39cf8 0x108c14000 + 154872\n22 hermes \t 0x108c39560 0x108c14000 + 152928\n23 hermes \t 0x108c1e5cc 0x108c14000 + 42444\n24 Paperflite \t 0x104e2fae8 facebook::jsi::RuntimeDecorator<facebook::jsi::Runtime, facebook::jsi::Runtime>::call(facebook::jsi::Function const&, facebook::jsi::Value const&, facebook::jsi::Value const*, unsigned long) + 76 (decorator.h:303)\n25 Paperflite \t 0x104e2e198 facebook::jsi::WithRuntimeDecorator<facebook::react::(anonymous namespace)::ReentrancyCheck, facebook::jsi::Runtime, facebook::jsi::Runtime>::call(facebook::jsi::Function const&, facebook::jsi::Value const&, facebook::jsi::Value const*, unsigned long) + 88 (decorator.h:709)\n26 Paperflite \t 0x104ed1180 facebook::jsi::Function::call(facebook::jsi::Runtime&, facebook::jsi::Value const*, unsigned long) const + 100 (jsi-inl.h:234)\n27 Paperflite \t 0x104ed10fc facebook::jsi::Function::call(facebook::jsi::Runtime&, std::initializer_list<facebook::jsi::Value>) const + 112 (jsi-inl.h:239)\n28 Paperflite \t 0x104eee760 facebook::jsi::Value\n…\n",
"text": "Hi @Adithya_Sundar,Just as a reference, if you give the crash report a .ips suffix, it will be properly formatted.It looks like the crash happens well inside the hermes Javascript engineIt’s then difficult to understand what may have gone wrong, without looking at what the Javascript was doing, i.e. your code, including the proper setup of Realm before writing to the DB.Can you share some more info on that?",
"username": "Paolo_Manna"
},
{
"code": "import Realm from 'realm'\nimport {createRealmContext} from '@realm/react';\nimport { Assets } from './schema/asset';\nimport { Collections } from './schema/collection';\nimport { Sections } from './schema/section';\nimport { Columns, HubLayout, Rows } from './schema/hubLayout';\nimport { HubSection } from './schema/hubSection';\nimport { Groups } from './schema/group';\nimport { Streams } from './schema/stream';\nimport { Banner, CreatedBy, Icon, StreamAnalytics, assetAnalytics, assetMetaData, assetSettings, customFields, sectionChildren } from './schema/commons';\nimport { GroupAssetMapping, StreamAssetMapping } from './schema/mappingTable';\n\n\nexport const RealmContext = createRealmContext({\n schema: [ Collections, Sections, Assets, assetAnalytics, assetMetaData, assetSettings, customFields, HubLayout, Rows, Columns, HubSection, sectionChildren, Streams, Groups, StreamAssetMapping, GroupAssetMapping, Banner, Icon, CreatedBy, StreamAnalytics ],\n deleteRealmIfMigrationNeeded: true\n});\n\nimport Realm, { BSON } from 'realm';\n\nexport class HubLayout extends Realm.Object {\n static schema = {\n name: 'hubLayout',\n primaryKey: 'id',\n properties: {\n id: 'objectId',\n rows: 'rows[]',\n customLayout: 'bool',\n empty:'bool',\n },\n };\n}\n\nexport class Rows extends Realm.Object {\n static schema = {\n name: 'rows',\n embedded: true,\n properties: {\n columns: 'columns[]'\n },\n };\n}\n\nexport class Columns extends Realm.Object {\n static schema = {\n name: 'columns',\n embedded: true,\n properties: {\n width: 'int',\n section: 'hubSection'\n },\n };\n}\n\n\nimport Realm from 'realm';\n\nexport class HubSection extends Realm.Object {\n static schema = {\n name: 'hubSection',\n primaryKey: 'id',\n properties: {\n id: 'objectId',\n entityId: 'string',\n title: 'string?', \n entityType: 'string?',\n layoutStyle: 'string?',\n cardStyle: 'string?',\n showFilter: 'bool?',\n userDefined: 'bool?',\n children: 'sectionChildren?'\n },\n };\n}\n\nAPI call and feed api data into db\n\n const fetchSectionById = async id => {\n let url = buildUrl(HUB_SECTION_URL_V2, {\n path: id,\n });\n try {\n const response = await APIKit.get(url);\n let section = await response.data;\n let existingSection = sections.filtered(`entityId = \"${id}\"`); // returns array\n if (section && existingSection) {\n let updatedSection = updateHubSectionService(section, existingSection[0]);\n if(updatedSection) {\n updateHubLayout(updatedSection, realm)\n }\n }\n } catch (error) {\n console.log(error, 'error');\n }\n };\n\nModel where all realm write occurs . \nimport {RealmContext} from '@/db';\n\nexport const useHubModel = () => {\n const {useRealm, useQuery} = RealmContext;\n const realm = useRealm();\n\n const createHubLayout = ( data ) => {\n try {\n realm.write(() => {\n let createdLayout = realm.create('hubLayout', data, 'modified');\n });\n } catch (error) {\n console.log(error);\n }\n };\n\n const updateHubLayout = ( data ) => {\n if (data) {\n try {\n realm.write(() => {\n let createdSection = realm.create('hubSection', data, 'modified');\n console.log(createdSection, 'createdSection');\n });\n } catch (error) {\n console.log(error);\n }\n }\n };\n\n return {createHubLayout, updateHubLayout};\n};\n\n",
"text": "Sure .Schema setupHope this helps . And will attach crash report with .ips next time . thanks",
"username": "Adithya_Sundar"
},
{
"code": "",
"text": "Hi @Adithya_SundarIt’s a common issue that can occur when reading or writing data in React Native. The trick is to make sure you have all the necessary dependencies installed and configured correctly.Make sure the react-native-fetch-blob library is installed for files I/O operations, as this will alleviate any issues caused by problems with file permissions or overwriting existing files. You might also need to install an async library like redux-thunk in order to process fetch requests asynchronously.Finally, verify that your code runs without error before running it on device by using Jest for unit tests. Taking these steps should help you troubleshoot the app crashing while read/write operation in React Native.P.s - I’m Aliena Jose works as a app developer at Techmango, React native app development company",
"username": "Aliena_Jose"
}
] | App crashes while read/write operation in react native | 2023-03-30T11:41:23.893Z | App crashes while read/write operation in react native | 1,306 |
null | [
"queries",
"java"
] | [
{
"code": "final List<DocumentData> data = new ArrayList<>();\ntry (final MongoClient mongoClient = MongoClients.create(cosmosDBConnectionString)) {\n\n mongoClient\n .getDatabase(\"assessmentDB\")\n .withCodecRegistry(\n CodecRegistries.fromRegistries(\n MongoClientSettings.getDefaultCodecRegistry(),\n CodecRegistries.fromProviders(PojoCodecProvider.builder().automatic(true).build())\n )\n )\n .getCollection(\"assessmentDataCollection\", DocumentData.class)\n .find(Filters.expr(Document.parse(\"{ $gt: [ '$modificationDateTime', '$lastExportDateTime' ] }\")))\n .into(data);\n}\n",
"text": "Hi,I have an Azure CosmosDB (version 4.2) resource where I run the following query against and gives me the correct response:\ndb.assessmentDataCollection.count({$expr: { $gt: [ “$modificationDateTime” , “$lastExportDateTime” ] }})When I use the Java Mongo Client than I get no result. Any idea what I’m doing wrong?",
"username": "Dominique_Claes"
},
{
"code": "",
"text": "you are not using the same database name",
"username": "steevej"
},
{
"code": "",
"text": "I’m so embarrassed to overlook this ‘small’ difference…",
"username": "Dominique_Claes"
},
{
"code": "",
"text": "I have been there and done that, so this is one of the first thing I look.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Azure CosmosDB query - MongoDB API | 2023-04-04T10:06:27.634Z | Azure CosmosDB query - MongoDB API | 433 |
null | [
"node-js",
"data-modeling"
] | [
{
"code": "DATABASE NAME (database)\n— SCHOOL USERNAME (collection)\n— — STUDENTS (subcollection)\n— — — STUDENT (document)\n— — — STUDENT (document)\n— — — STUDENT (document)\n— ANOTHER SCHOOL USERNAME (collection)\n— — STUDENTS (subcollection)\n— — — STUDENT (document)\n— — — STUDENT (document)\n— — — STUDENT (document)\n",
"text": "I'm creating a website which schools can signup and manage their students' information and data. I want every school to have his own collection of data in the database. The structure of the database which I want to create is like:DATABASE NAME (database)\n— SCHOOL USERNAME (collection)\n— — STUDENTS (subcollection)\n— — — STUDENT (document)\n— — — STUDENT (document)\n— — — STUDENT (document)\n— ANOTHER SCHOOL USERNAME (collection)\n— — STUDENTS (subcollection)\n— — — STUDENT (document)\n— — — STUDENT (document)\n— — — STUDENT (document)\nThat’s how I want my database to look like. But I’m wondering how to do. Can anyone help me.ns",
"username": "Mohamed_Abdillahi"
},
{
"code": "things that are queried together should stay togetherDatabase 1 (School 1)\n Collection 1 (Students)\n Collection 2 (Teachers)\n ...\n\nDatabase 2 (School 2)\n Collection 1 (Students)\n Collection 2 (Teachers)\n ...\n...\n\nDatabase \n Collection 1 (School 1)\n Collection 2 (School 2)\n...\n",
"text": "Hey @Mohamed_Abdillahi,Welcome to the MongoDB Community Forums! A general rule of thumb while modeling data in MongoDB is that things that are queried together should stay together. Thus, it may be beneficial to work from the required queries first, making it as simple as possible, and then let the schema design follow the query pattern.\nReading from your description, I think that you can create a data model where the database name is the name of the school and then this database has different collections like- students, teachers, administrative staff, etc and within these collections, you add the details of the corresponding entity:This would also help in faster reads and writes since one school would want to read and update records related to their students only.\nOr if there are just students’ record in your database, then you can model it by having a database and having separate collections for each school and for each collection, having the students’ documents corresponding to that school collection.Ultimately, as mentioned earlier, it would boil down to the queries that you are intending to use in your application. You can use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.Regarding creating collections and documents in MongoDB, you can refer to this guide: Create a CollectionI’m also attaching some more resources that you might find useful:\nModel Data Model Design\nMongoDB Fundamentals FAQsHope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "hopefully, this source is addressing this thread - Build a school management from scratch using MongoDB & Express | by Primerose Katena | Medium .ravi verma",
"username": "Ravi_Verma3"
},
{
"code": "",
"text": "Thanks @Satyam,\nBy the first approach you mentioned, the database name isn’t same as the school name. There is only one database. It has collections where collection name is same as the school username. Every collection/school has subcollections like students, teachers, classes, etc. How can I do that with mongoDB?",
"username": "Mohamed_Abdillahi"
},
{
"code": " school1.students //collection 1\n school1.teachers //collection 2\n school1.classes. // collection 3\n school2.students // collection 4\n school2.teachers //collection 5\n .... so on\nstudentsteachersSchool{\n \"_id\": ObjectId(\"61501c6d24a680f9947d36f9\"),\n \"name\": \"John Doe\",\n \"age\": 18,\n \"grade\": 12,\n \"school\": \"XYZ High School\"\n},\n{\n \"_id\": ObjectId(\"61501c6d24a680f9947d36fb\"),\n \"name\": \"Paul Doe\",\n \"age\": 17,\n \"grade\": 11,\n \"school\": \"ABC High School\"\n}\nuse <db_name>things that are queried together should stay together",
"text": "Hey @Mohamed_Abdillahi,Thanks for the reply.The approach you described seems a bit tricky to me. One suggestion I would give is to name your collections starting with the name of the school and then the entity that you want to include in that collection, ie.:With this approach, you can use lookup operator when needing to search across many collections of the same school.It has collections where the collection name is the same as the school username. Every collection/school has subcollections like students, teachers, classes, etcYou can also do this by creating a students and teachers etc collection, then in the documents, specify the school name, ie. keep a field named School in the documents itself. This way, your student document might look like this:and similarly, you can store a school field in your other collections too like teachers, staff, etc.By the first approach you mentioned, the database name isn’t same as the school name. There is only one database.I still didn’t get why keeping different schools as different databases can’t be done. Is there any particular reason why we are dismissing this thought? With this approach, it would look like this:\nDBs516×738 22.1 KB\n\nSchool_1, School_2, etc are the databases’ names. And within them are your collections of students, teachers, etc. You can switch between different databases using the use <db_name> command. If you have any doubts about how to create or use multiple databases or collections, you can go through the following documentation: Databases and CollectionAs you can see, there are many approaches that you can use while designing your model. I would suggest you explore all the alternatives before deciding there has to be only one database or limit your other options.\nA general rule of thumb while modeling data in MongoDB is that things that are queried together should stay together. Thus, it may be beneficial to work from the required queries first, making it as simple as possible, and let the schema design follow the query pattern. You can use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.I’m also attaching some more resources that you should find useful:\nMongoDB Data Modelling Course\nMongoDB Documents\nFAQ: MongoDB FundamentalsHope this helps.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | I want to create a database for a school management system using mongodb | 2023-03-30T17:56:32.484Z | I want to create a database for a school management system using mongodb | 2,239 |
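A minimal mongosh sketch of the single-collection variant described above (students kept in one collection with a school field, indexed so each school's reads stay selective) could look like this; all names are illustrative.

```js
// Shared "students" collection with the school stored as a field and indexed.
db.students.createIndex({ school: 1 });

db.students.insertOne({
  name: "John Doe",
  age: 18,
  grade: 12,
  school: "XYZ High School"
});

// Each school only ever queries its own documents:
db.students.find({ school: "XYZ High School" });
```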
null | [
"sharding",
"ruby"
] | [
{
"code": " database_name.collection_name\n shard key: { \"_id\" : \"hashed\" }\n unique: false\n balancing: true\n chunks:\n chunk-1\t121520\n chunk-2\t121520\n\nclient.command(serverStatus: 1)\nclient.command(getShardMap: 1)\nclient.command(listShards: 1)\nclient.command(listCollections: 1)\n",
"text": "Hi. I want to get the results of sh.status() but using the ruby mongo driver. Specifically I need to see the shard keys for each collection, not just the hosts for each shard.I’m looking for something similar to the output of running sh.status() on a mongos:How do I do this using Ruby mongo?I’ve already triedThe first 3 give only the hosts of the shards and the 4th only collection data, but not the shard keys of the collections. I’ve also tried getting the contents of the config.databases collection, that doesn’t include the shard keys either.",
"username": "Neil_Ongkingco"
},
{
"code": "require 'mongo'\nclient = Mongo::Client.new(\"mongodb://localhost:27017/test\")\nconfigdb = client.use(:config)\nconfigdb[:collections].find.each do |c|\n puts c[\"_id\"]\n puts \"\\tshard key: #{c['key']}\"\n puts \"\\tunique: #{c['unique']}\"\n puts \"\\tbalancing: #{!c['noBalance']}\"\nend\ntest.foo\n\tshard key: {\"_id\"=>\"hashed\"}\n\tunique: false\n\tbalancing: true\ntest.bar\n\tshard key: {\"a\"=>1.0, \"b\"=>1.0}\n\tunique: false\n\tbalancing: true\nconfig.system.sessions\n\tshard key: {\"_id\"=>1}\n\tunique: false\n\tbalancing: true\nconfig.collectionsconfig.chunks",
"text": "@Neil_Ongkingco, the information you’re looking for is in the Config Database, which you can query directly using the Ruby driver.For example, to list the sharded collections in your cluster you could try something similar to the following:For the test cluster I set up the output was:All we’re doing here is querying the config.collections collection and formatting the results. If you wanted to show chunk details, as you iterate over each sharded collection above you could query the config.chunks collection to collect the necessary information.",
"username": "alexbevi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Sharding status with shard keys (sh.status()) using mongo ruby driver | 2023-04-03T06:40:22.123Z | Sharding status with shard keys (sh.status()) using mongo ruby driver | 1,000 |
null | [
"node-js",
"data-modeling",
"crud"
] | [
{
"code": "...\nwithTransaction(async (session) => {\n...\n const coupon = await this.storage.coupons\n .findOneAndUpdate(\n {\n referralId: { $exists: false },\n },\n { $set: { referralId: referralDoc.id } },\n { session, new: true },\n )\n .lean();\n\n if (!coupon || !coupon.referralId) {\n throw newError(ErrorCode.INVALID_REQUEST);\n }\n\n return { code: coupon.code, ok: true };\n})\nretry",
"text": "Hello guys,\nThe title may be cloudy but here the scenario:My problem: I am not sure whether that coupon does not conflict if there are many requests? What is the better approach if I dont want to use retry?\nThank you all for reading. I’m looking forward to your question and response.",
"username": "Thanh_An_Nguy_n"
},
{
"code": "",
"text": "Hey guys, actually MongoDB Transaction handles it for me, so there will be no problem, right?",
"username": "Thanh_An_Nguy_n"
},
{
"code": "retry",
"text": "Hi @Thanh_An_Nguy_n,Welcome to the MongoDB Community forums My problem: I am not sure whether that coupon does not conflict if there are many requests. What is the better approach if I don’t want to use retry?Can you provide more details regarding your goals and your issues? For example:actually, MongoDB Transaction handles it for meWhat MongoDB Transaction has handled for you. Can you please provide more details to better understand the issue you are facing?Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "filter{\n referralId: { $exists: false },\n },\n",
"text": "Hey @Kushagra_Kesav , I have canceled this referral system due to security problem. However, I’m pleased to discuss with you more about this.Background: We buy 3rd party coupons and insert them to our db, and then let users collect them and redeem.“pre-saved” means that we have inserted coupons, not generating on users’ demand.Every coupon is unique and is 1 time use. Meaning that users should not receive the same coupon if the request is duplicated or there are many requests.\nSince the filter on query is just like thisSo the issue is about concurrent requests and after raising this post, I realized that mongodb transaction has\nperfectly handled this case for me.Thanks for your concern Kushagra.Best,\nThanh An",
"username": "Thanh_An_Nguy_n"
}
] | Data Model for Coupons storing and redeeming | 2023-02-03T09:08:48.406Z | Data Model for Coupons storing and redeeming | 1,160 |
null | [
"monitoring"
] | [
{
"code": "",
"text": "\n微信图片_202304031513521324×371 34 KB\ni have done below\nproject ---- setting — Reset Duplicates\ndeployment — more — hostmapping (delete )but the incorrect entry of host mapping will be add again ,\nnslookup hostname(duplicated) , answer corretly , at mms serverDo I clean the cache in the mms’s database ? what should i do ? thanks .",
"username": "feng_deng"
},
{
"code": "",
"text": "Hi @feng_deng,Welcome to The MongoDB Community Forums! I presume that you are working with MongoDB Ops Manager. The issue seems to be with your configuration/environment.I would recommend you open a support case at MongoDB Support Portal as they have the required expertise and could help you with root cause analysis and provide the best solutions as per your use-case.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Mongodb ops can't update hostname | 2023-04-03T07:29:11.471Z | Mongodb ops can’t update hostname | 960 |
null | [
"compass"
] | [
{
"code": "",
"text": "Not able to connect to mongoDB from a different machine using compass or cmd.Ports are opened.\nAble to ping destination.\nUse 127.0.0.1 as the connection string with port 27017.\nmongoDB: v6",
"username": "ko_lamizana"
},
{
"code": "",
"text": "127.0.0.1 means the localhost not a different machine.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I should have rather specified that I’m using 127.0.0.1 in the net.bindIp, when testing the connection from another machine I used the ip address of the machine where my db instance is.I also tried replacing me net.bindIp with the machine’s ip address.",
"username": "ko_lamizana"
},
{
"code": "",
"text": "note the net.bindIp configuration option",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "For /tmp/mongod.sock, I do not see an tmp folder or a mongod.sock file, I have the /server/6.0/bin/mongod.exe and mongos.exe.\nThe mongos.exe does not run. The mongod does run, but I am not sure what role it has since I am able to access the database through compass and through my app when it isn’t runningI’m not using ipv6.",
"username": "ko_lamizana"
},
{
"code": "",
"text": "To connect to your mongodb from shell you need mongosh.Download and install it\nmongod is to start a mongod instance\nYou should not run it since you already can connect with Compass\nIt will start a local mongod on the machine you started it provided it has required default params like default dbpath dir",
"username": "Ramachandra_Tummala"
}
] | MongoDB compass connection - ECONNREFUSED 172.20.2.159:27010 | 2023-03-31T17:12:53.380Z | MongoDB compass connection - ECONNREFUSED 172.20.2.159:27010 | 630 |
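For reference, a typical fix for this symptom is to make mongod listen on an address the other machine can reach, then connect to that address rather than 127.0.0.1. The commands below are a sketch; &lt;server-ip&gt; is a placeholder, widening bindIp has security implications that should be reviewed first, and the Windows firewall must allow inbound TCP 27017.

```sh
# On the database host: listen on the server's address as well as localhost.
mongod --port 27017 --dbpath "C:\data\db" --bind_ip 127.0.0.1,<server-ip>

# From the other machine (Compass or mongosh), connect to the server's address, not 127.0.0.1:
mongosh "mongodb://<server-ip>:27017"
```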
null | [
"queries",
"node-js"
] | [
{
"code": "Cannot stringify arbitrary non-POJOs ",
"text": "Hi, I use Sveltekit with load function who need a stringifiable object for send data to the view.\nMongo query with nodejs adapter return a nested document full of ObjectId(string) value who cannot be stringified.Cannot stringify arbitrary non-POJOs How can I get a query result with string instead of ObjectId?\nI have spend hours to try and find a solution without success.The only way is to use JSON.parse and JSON.stringify everytime.",
"username": "Axel_B"
},
{
"code": "",
"text": "Hi @Axel_B and welcome to the MongoDB community forum!!If I understand your concern correctly, you are looking an operator to convert the ObjectID returned from Find() and FindOne() query to return in the form of String and not ObjectID.The documentation for Objectid.toString would be a good reference to start with.Please let me know If my understanding is wrong here.\nFor better understanding, could you share some details like:Regards\nAasawari",
"username": "Aasawari"
}
] | Access id from find() or findOne() result return an ObjectId not a string | 2023-04-02T20:31:04.251Z | Access id from find() or findOne() result return an ObjectId not a string | 2,270 |
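A common pattern for the SvelteKit load() case described above is to convert ObjectId values to strings before returning the data, as sketched below; db, the collection name and the field names are placeholders.

```js
// Sketch: make the query result serializable by stringifying the ObjectId values.
export async function load() {
  const docs = await db.collection('items').find({}).toArray();
  return {
    items: docs.map(({ _id, ...rest }) => ({ ...rest, _id: _id.toString() }))
  };
}

// For deeply nested ObjectIds, Extended JSON is another option:
//   import { EJSON } from 'bson';
//   EJSON.serialize(docs);   // every ObjectId becomes { $oid: "..." }
```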
null | [
"node-js"
] | [
{
"code": "console.log('Starting MongoDB connection...');\nconst { MongoClient } = require('mongodb');\n\n// Connection URL with the database name\nconst url = 'mongodb://127.0.0.1:27017/fruitsDB';\n\n// Create a new MongoClient\nconst client = new MongoClient(url, { useUnifiedTopology: true });\n\nsetTimeout(() => {\n\t// Use connect method to connect to the Server\n\tclient.connect(function (err) {\n\t\tif (err) {\n\t\t\tconsole.log(err);\n\t\t\treturn;\n\t\t}\n\n\t\tconsole.log(\"Connected successfully to server\");\n\n\t\tconst db = client.db('fruitsDB');\n\n\t\t// Perform further operations on the database here\n\n\t});\n}, 2000); // Wait 5 seconds before connecting\n{\"t\":{\"$date\":\"2023-03-30T09:42:31.973+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:63528\",\"uuid\":\"1593aa60-b0b1-46d3-b011-47e8ec2db558\",\"connectionId\":34,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-03-30T09:42:31.979+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn34\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:63528\",\"client\":\"conn34\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"5.1.0\"},\"os\":{\"type\":\"Windows_NT\",\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.19045\"},\"platform\":\"Node.js v18.15.0, LE (unified)|Node.js v18.15.0, LE (unified)\"}}}\n",
"text": "Could someone help with this problem?\nI am using MongoDB version 6. The driver version is 5.1\nMongoDB is running as a service on Windows 10.\nWhen I run the following simple code to connect to the database the code hangs.\nThe fruitsDB has already been set up and has one collection called ‘fruits’\nI will show you the code and the appropriate portion of the logs.\nThanking youapp.jsHere are the logs:",
"username": "Greg_Guy"
},
{
"code": "client.close()asyncawaitawaitconsole.log('Starting MongoDB connection...');\nconst { MongoClient } = require('mongodb');\n\n// Connection URL with the database name\nconst url = 'mongodb://127.0.0.1:27017/fruitsDB';\n\n// Create a new MongoClient\nconst client = new MongoClient(url, { useUnifiedTopology: true });\n\nasync function main() {\n\ttry {\n\t\t// Use connect method to connect to the Server\n\t\tawait client.connect();\n\t\tconsole.log(\"Connected successfully to server\");\n\n\t\tconst db = client.db('fruitsDB');\n\n\t\t// Perform further operations on the database here\n\n\t} catch (err) {\n\t\tconsole.log(err);\n\t} finally {\n\t\tawait client.close();\n\t}\n}\n\nmain();\nasync/awaitclient.connect()try/catchfinallyclient.close()",
"text": "Hello @Greg_Guy ,Welcome to The MongoDB Community Forums! Based on the provided code and logs, it seems that the connection to MongoDB is being established successfully and there are no errors reported. I think your code looks like in hang state because the connection was never closed.However, I noticed that you are not using Promises and client.close() statement to connect or to close the already opened connection. Below blobs are from documentation regarding PromisesPromisesA Promise is an object returned by the asynchronous method call that allows you to access information on the eventual success or failure of the operation that they wrap. The Promise is in the Pending state if the operation is still running, Fulfilled if the operation completed successfully, and Rejected if the operation threw an exception. For more information on Promises and related terminology, see the MDN documentation on Promises.If you are using async functions, you can use the await operator on a Promise to pause further execution until the Promise reaches either the Fulfilled or Rejected state and returns. Since the await operator waits for the resolution of the Promise, you can use it in place of Promise chaining to sequentially execute your logic. For additional information, see the MDN documentation on await.Below is an example on how you can use such syntax and you may update the same as per your requirementsThis code uses async/await to wait for the client.connect() method to resolve before proceeding with the rest of the code. The try/catch block is used to catch any errors that may occur, and the finally block is used to ensure that the client.close() method is called, regardless of whether an error occurred or not.I hope this helps resolve the issue with your code hanging.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Code Hangs on Connection | 2023-03-30T07:47:04.465Z | Code Hangs on Connection | 1,500 |
null | [] | [
{
"code": "",
"text": "hi i am taking a cource that talks about mongodb and it comes to a part that the instructor talks about but he didnt get into details but i want to know beifly what is the rule of it which is the BI Connectors i read about it in the documnetation but i didnt understand what are the rule of it is it reporting or other things can any one explain to me",
"username": "mina_remon"
},
{
"code": "mongodmongos",
"text": "Hello @mina_remon ,Are you asking about the role of BI connector in MongoDB?Traditional business intelligence tools work with flat, tabular data. These tools aren’t sophisticated enough to understand three-dimensional data stored in MongoDB databases.The MongoDB Connector for Business Intelligence (BI) allows you to create queries with SQL to visualize, graph, and report on your three-dimensional MongoDB data using relational business intelligence tools such as Tableau and Power BI. It acts as a layer that translates queries and data between a mongod or mongos instance and your reporting tool. The BI Connector stores no data, and purely serves to bridge your MongoDB cluster with business intelligence tools.Note: As an alternative to using third-party data visualization tools and the BI Connector, you can use MongoDB Charts to create data visualizations directly from your MongoDB collections.For more information, please checkRegards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What are the BI Connectors? | 2023-04-02T10:34:20.599Z | What are the BI Connectors? | 613 |
null | [] | [
{
"code": "",
"text": "I had this in one of my previous “problems with new university” posts, but it seems to put many problems in a single post was not a good idea because the problem in the title is still a thing.Before the “Learn”, we were able to help newcomers with the problems they faced in the labs and onward if they needed because we could easily open those labs and check what could go wrong and advise about solutions. When someone hits a problem, we were around to help.Now, the labs are lost to anyone who completes a stage in the lab, step by step.there is no going back, no reset/restart, no redo, or no replay. That is it. Even the resources used in them are lost if a developer “decided” otherwise.When someone hits a problem, they need to get the attention of an employee which is a limited official human resource. But since Learn is a “free” resource, it is not always easy to get that attention. (if it was a paid service we could squish immediate help, right?)In other words, Labs are no more a part of the “community”.In order to make things right (ok, at least “right” as in the sense of “community”), Labs should be accessible after their completion for the community to supply their help.At least, a “reset lab” option to start fresh would benefit a lot. even though it would require completing steps again, we could test and see any problem at any step (and on any browser) as the community. I am sure you may come up with a better solution, or just do this and allow us to “reset” the labs.I hope you implement this change pretty soon for the sake of the community.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hey @Yilmaz_Durmaz,Thanks a lot for your feedback. I have raised this with the internal team.Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | New University Labs should be accessible after completion | 2023-04-01T05:45:20.583Z | New University Labs should be accessible after completion | 926 |
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "I have a replset, but its oplog is too big , even more biger than the db’s storage size, I want to dump the db , then restore to a instance, add the restored instance to the replset at last, I don’t know the detail of operating ,so could somebody tell me the detail and show me the refer and code ? thanks very much .",
"username": "feng_deng"
},
{
"code": "",
"text": "Hi @feng_deng and welcome back to MongoDB community forum!!I have a replset, but its oplog is too big , even more biger than the db’s storage size, IThe issue of growing oplog size could be possibly resolved by upgrading to version 4.4 or more, whereMongoDB 4.4 supports specifying a minimum oplog retention period in hours, where MongoDB only removes an oplog entry if:Please refer to the Replica Set Oplog for more details.Now, forI want to dump the db , then restore to a instance, add the restored instance to the replset at last,Regarding the dumping and restoration, you can visit the documentation for MongoDB tools for Backup and restore.Let us know if you have more queries.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "mongodb version 4.4.18\nreplSetResizeOplog , I reconfig oplog size sucessfully , but when i do the action of compact at the second optlog resized (mongo shell connect to sencond directly, not a replset connect character), it prompt auth failed, even i give the user and password correct,so it doesn’t do the compact . so what should i do to resolve the problem\ni’m sure the user is cluster admin and have the privilege of root .\nrefer https://www.mongodb.com/docs/manual/tutorial/change-oplog-size/",
"username": "feng_deng"
},
{
"code": "",
"text": "Hi @feng_dengIf the secondary required authentication, you need to provide the right privileges on the local database to perform the operation. You can follow the documentation for the same.Regards\nAasawari",
"username": "Aasawari"
}
] | What do i do to add a second with a backup (mongodump --oplog )? | 2023-03-30T05:50:57.944Z | What do i do to add a second with a backup (mongodump –oplog )? | 838 |
null | [
"queries"
] | [
{
"code": "",
"text": "Hello good morning.Could you please help me,I want to insert the data from one field to another field within the same collection in a massive way that applies to all documents in the collection.example:“trail”: {\n“idUsuarioCreacion”: “499”,\n“idUsuarioActualizacion”: “”,\n“fechaHoraCreacion”: “2023-03-13T18:17:59.532Z”\n“fechaHoraActualizacion”: “”,\n}the information from the “idUsuarioCreacion” field must be replicated in the “idUsuarioActualizacion” field\nas well as the information of “fechaHoraCreacion” in “fechaHoraActualizacion”.First of all, Thanks.",
"username": "edith_t"
},
{
"code": "DB>db.collection.find({},{_id:0})\n[\n {\n trail: {\n idUsuarioCreacion: '599',\n idUsuarioActualizacion: '',\n fechaHoraCreacion: '2022-01-11T18:17:59.532Z',\n fechaHoraActualizacion: ''\n }\n },\n {\n trail: {\n idUsuarioCreacion: '499',\n idUsuarioActualizacion: '',\n fechaHoraCreacion: '2023-03-13T18:17:59.532Z',\n fechaHoraActualizacion: ''\n }\n }\n]\nupdateMany()$setDB>db.collection.updateMany({},\n[\n {\n '$set': {\n 'trail.idUsuarioActualizacion': '$trail.idUsuarioCreacion',\n 'trail.fechaHoraActualizacion': '$trail.fechaHoraCreacion'\n }\n }\n])\nDB>db.collection.find()\n[\n {\n _id: ObjectId(\"64251cbb6e5491f12577f41f\"),\n trail: {\n idUsuarioCreacion: '599',\n idUsuarioActualizacion: '599',\n fechaHoraCreacion: '2022-01-11T18:17:59.532Z',\n fechaHoraActualizacion: '2022-01-11T18:17:59.532Z'\n }\n },\n {\n _id: ObjectId(\"64251cbe6e5491f12577f420\"),\n trail: {\n idUsuarioCreacion: '499',\n idUsuarioActualizacion: '499',\n fechaHoraCreacion: '2023-03-13T18:17:59.532Z',\n fechaHoraActualizacion: '2023-03-13T18:17:59.532Z'\n }\n }\n]\n",
"text": "Hi Edith,Not sure if this suits your use case, but I believe the following may work:Test data:the updateMany() used with a pipeline containing $set:Documents mentioned above after the update:I want to insert the data from one field to another field within the same collection in a massive way that applies to all documents in the collection.If you believe this may work for your use case, I would test and verify it meets your requirements in a test environment thoroughly beforehand as you mentioned you need to update the whole collection. In my example, I have only done this on 2 sample documents.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you very much, it was very helpful.",
"username": "edith_t"
},
{
"code": "",
"text": "Awesome. Glad to hear it helped!",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Insert data from one existing field to another so that they contain the same information | 2023-03-29T22:54:05.115Z | Insert data from one existing field to another so that they contain the same information | 452 |
null | [] | [
{
"code": "",
"text": "Hi, I interested to use Mongodb Atlas App Services for new projectsbut i’m not found some elements to ban, block ip or use trigger on failed login to ban ip trying to brute force password/login for example.Do some features exist to secure the app services ?",
"username": "DomC"
},
{
"code": "",
"text": "A lot of that is logic you need to build. Otherwise you have white listing and black listing.",
"username": "Brock"
},
{
"code": "",
"text": "Thanks you Brock, a lot a information with your answer.So you confirm my first impressions I will read again the Mongo Docs to see what i can do on the Grapqhl / API app services authentification part.Thank a lot Brock.",
"username": "DomC"
},
{
"code": "",
"text": "Not quite, the email/password functionality works fine, but the controls to block or ban failed login attempts is your job to implement and build within your app like any other services.Just as logging failed login attempts and determining timeouts and so on.",
"username": "Brock"
},
{
"code": "",
"text": "Of course the service is ok but i need to ask first before considering some features as ready to use.\nThis one is not production ready because the features need some adjustments.I says so because some mongodb youtube official videos suggest the “ready to go” but it’s just introduction and commercial promotional material. I needed some confirmation than i need to add it by myself.If the features was already on the services, i wanted to be sure i was not missing it ",
"username": "DomC"
},
{
"code": "",
"text": "On that I can agree, there are a lot of company produced materials that are introductory. But that’s also where experience comes in to know the pieces you need to build on your own, and how things intercommunicate.On the flip side, the fact as a corporation everything is introductory, you can make your projects and tutorials stand out even more.Same reason my GraphQL tutorial has turned so far into a book for interfacing MongoDB and Atlas with GraphQL, etc.But also you need to understand there’s a lot of features like GraphQL that are partially supported, but not fully supported. (custom scalars, enum scalars, passing payloads, etc) which does create its own challenges, but you can build around it. But it does test how much you actually know about GraphQL in that example.",
"username": "Brock"
},
{
"code": "",
"text": "@DomC as some things were brought to my attention from elsewhere, (Not MongoDB, barely ever talk to anyone from it these days) I just want to be clear I am not employed by MongoDB, nor sponsored by it. Separation from MongoDB was back in end of Nov due to fictitious drama from someone external to MBD that I won’t get into here.But in large I am very objective when it comes to MongoDB or any company for that matter, I just want to make it clear I’m not white knighting, nor am I worried or afraid of producing criticism or opinions about products or services, or actual known public facts. (Much of the time I wait until someone else exposes a problem I found previously before talking about it.)In relation to your GraphQL that I’d like to make clear, yes, there are a lot of functionalities that aren’t supported, despite Apollo GraphQL fully supporting it. Yes, I can see where you can feel it isn’t production ready, in some aspects I do agree with you and on other aspects I have to heavily disagree. Because you should not be focusing on just GraphQL to secure your environment, nor should you be making Atlas itself the core focus of securing your environment.Security is a culture, not a product. You need to make sure that you firmly understand that, in which your database is only one layer of the onion whether the feature is organic to it or not, there is always other ways to implement what you need.This is coming from someone who’s worked on NIPRNET and SIPRNET networks and applications, who’s also worked on and developed applications used I can’t even discuss or go into specifics of that are used in high security environments.I’ve also worked on penetration testing tools, RATs, vulnerability assessment tools, even reverse engineering tools to identify and break down an application to its machine code functionalities from compilation for exploitation.When I say this, I do mean it. You will never have a fully secured environment, and there will always be a security vulnerability that you need to make the conscious effort to either accept the liability or mitigate the threat.You also need to make stronger efforts besides password/email auth to really evaluate whether an application or service does, or can meet your applications needs. Coming from someone who’s assessed many applications for various reasons, even when people were panicking Seagate Harddrives had spyware built into them, you’re talking to one of the men who evaluated whether or not the situation was hyperbole. I also know the security team from the Air Force that handled the Ukraine power plant hack, and later worked in the evaluation of vulnerabilities and root causes of what allowed it to be breached in the first place. That’s my name among several in the report to Congress in 2016, and the report given to the EU by the company I contracted with when I say this.If you would like help in working out the needs requirements for your application, and mitigations that could potentially workout what services may help you out, just let me know as I don’t mind. But “Atlas with GraphQL makes Brute forcing easy.” isn’t entirely accurate as you’re focusing on a server authentication and login error count without a handler. Which if MongoDB did implement said kind of handler feature, you then need to work on, and ensure it’s in sync with your SSO/LDAP/ETC. systems. Have a way to handle or mitigate false positives, and so on. 
Which honestly, your SSO solution, not Atlas should be responsible for that, but that’s just MHO.And I would have edited and added this to my other post, but for some reason the edit button disappeared early? I’m not exactly sure.Just my $0.02",
"username": "Brock"
},
{
"code": "",
"text": "Yes, we are talking about the same thing actually.EDIT: My bad, I got a web scraper that’s pulling my posts and posting for me, it didn’t indicate your IR was deleted.",
"username": "Brock"
}
] | App Service secure login impossible? | 2023-04-02T18:50:04.989Z | App Service secure login impossible? | 508 |
null | [
"queries",
"dot-net"
] | [
{
"code": "var data = collection\n .AsQueryable()\n .Select(x => new\n {\n A1 = (string)x.Data[\"a1\"],\n B2 = (string)x.Data[\"b2\"],\n C3 = (string)x.Data[\"c3\"],\n D4 = (long?)x.Data[\"d4\"][\"d5\"],\n E5 = (int?)x.Data[\"e5\"][\"e6\"],\n F6 = x.Data[\"f6\"][\"7\"] == null\n ? null\n : ((IEnumerable<BsonValue>)x.Data[\"f6\"][\"f7\"]).Select(_ => (string)_[\"f8\"])\n })\n .OrderByDescending(x => x.E5)\n .FirstOrDefault();\n",
"text": "Hi, I upgraded from 2.17 to 2.19 the C# driver and I have a query like:This worked fine with 2.17, after upgrading (I found out you folks switched to LINQv3 provider which might be root cause) it fails with expression not supported exceptions for Nullable (long? and int?) casts.Any ideas how to fix it and keep new LINQv3 provider?I looked at related:Thank you,\nV.",
"username": "Vedran_Mandic"
},
{
"code": "",
"text": "Switching to LINQv2 provider resolves the issue, but I still think this should work with v3.",
"username": "Vedran_Mandic"
},
{
"code": "Data",
"text": "Hi, @Vedran_Mandic,Thank you for reaching out to us about this issue. In LINQ2, we discarded all casts in the LINQ AST, which is not always the right thing to do. In LINQ3 we try to be more purposeful when removing casts - only removing them when we know the removal is safe and correct.Please file a CSHARP bug in our issue tracker with a repro so that we can investigate further. While quickly trying to repro the issue with the code above, it wasn’t clear to me the exact type of the Data property. Providing a self-contained repro with all the necessary data models will assist us greatly in quickly reproducing and investigating this issue.Sincerely,\nJames",
"username": "James_Kovacs"
}
] | Upgrading from 2.17 to 2.19 causes LINQ based queries to fail with not supported expression | 2023-03-23T14:18:47.284Z | Upgrading from 2.17 to 2.19 causes LINQ based queries to fail with not supported expression | 2,060 |
null | [
"aggregation"
] | [
{
"code": "tablestable_rowstables = {\n \"_id\": \"641ce65852a7ccd2f4a7b298\",\n \"name\": \"table name\",\n \"description\": \"table description\",\n \"columns\": [{\n \"_id\": \"641ce65852a7ccd2f4a7b299\",\n \"name\": \"column 1\",\n \"dataType\": \"String\"\n }, {\n \"_id\": \"641cf95543a5f258bfaf69e3\",\n \"name\": \"column 2\",\n \"dataType\": \"Number\"\n }]\n}\n\ntable_rows = {\n \"tableId\": \"641ce65852a7ccd2f4a7b298\",\n \"641ce65852a7ccd2f4a7b299\": \"Example string\",\n \"641cf95543a5f258bfaf69e3\": 101\n}\n$lookup_idcolumnstable_rowscard_rows_id",
"text": "Having two collections tables and table_rows like following:That in a nutshell represent regular tabular data of 2 columns and 1 row like this:Is there any way to use $lookup operator to join those two collection based on _id from columns array in tables collection and based on the key name from table_rows collection? What i’m trying is to somehow join columns definitions (name, datatype, etc) along with cell values.As you can see actual key name in card_rows collection is _id of column itself.Ideally this would be single collection, but those tables can grow to hundred of columns and 10K of rows, so it is modeled as two collections to avoid unbound arrays in mongo.",
"username": "Srdjan_Cengic"
},
{
"code": "table_rows = {\n \"tableId\": \"641ce65852a7ccd2f4a7b298\",\n \"columns\": [\n { \"column_id\" : \"641ce65852a7ccd2f4a7b299\", \"value\": \"Example string\" } ,\n { \"column_id\" : \"641cf95543a5f258bfaf69e3\", \"value\" : 101 }\n ]\n}\ntables = {\n \"_id\": \"641ce65852a7ccd2f4a7b298\",\n \"name\": \"table name\",\n \"description\": \"table description\",\n \"columns\": [{\n \"name\": \"column 1\",\n \"dataType\": \"String\"\n }, {\n \"name\": \"column 2\",\n \"dataType\": \"Number\"\n }]\n}\n\ntable_rows = {\n \"tableId\": \"641ce65852a7ccd2f4a7b298\",\n \"columns\" : [\n \"Example string\",\n 101\n ]\n}\n",
"text": "The use of data value (like columns _id) as field name (in your table_rows) is usually a bad idea. With the attribute pattern the table_rows collection will avoid using dynamic value as field name like inBut then why don’t you simply completely remove the columns _id and use another array for your table_rows? The size of the data and direct 1-to-1 mapping between columns definition and values would be much simpler, much faster and must smaller.Like:Are you going to access a table without its table_rows?Are you going to access a table_row without its table?If you are going to $lookup from table_rows all the rows from a table whenever you access the table you will still risk to hit the maximum size of a document.",
"username": "steevej"
},
{
"code": "card_rows",
"text": "Thank you very much @steevej . You dont know how much i appreciated your answer. Being not that experienced with mongodb, for quite sometime im trying to find some good-enough solution that would cover standard operation that user can do with regular spreadsheet or table of data, including ability to support:I have tried to seek some answers and shared my thoughts in this forum on following link, regarding how this could be designed in mongodb: Database design for tabular data (user defined columns with potentially lof of rows of data to maintain) - #3 by SatyamJust to answer to your questions:Are you going to access a table without its table_rows? => rarely, if ever\nAre you going to access a table_row without its table? => rarely, if everIm totally aware that data that should be queried together should be part of same collection, and that was my first attempt as explained in above referenced question. My concern about that approach is that if everything is in same collection, more or less i will hit unbound array problem with mongodb, correct?The reasoning behind, solution i proposed here where dynamic object id in card_rows collection is actually columnId, is that i need ability to sort, filter and do other aggregate operation on card rows.Now looking at your proposed solution it looks much more better.\nIn your opinion with your proposed solution, would these operations like grouping, sorting, etc, work with elements of array and does this come with some drawbacks in your opinion?Again, really really thank you for your time and for your answer.",
"username": "Srdjan_Cengic"
},
{
"code": "",
"text": "I saw the other thread and Satyam answer was more than appropriate.For the record, I am an independent consultant. I do not design and code for free. I help, I point to resource, I bring questions to think about and I provide solutions to tricky aggregations. But I will not design and code for free. Why would my customer contract me if they can come here and have the work done for free?",
"username": "steevej"
},
{
"code": "",
"text": "A little bi surprised by your answer. I had absolutely nothing in mind more than just having discussion about two approaches, and seeking for other opinions as well, nothing more, which i guess community like this is all about. Anyway as i said earlier thank you very very much for your time and for your opinion, it helped really a lot. Really appreciated. Kind regards",
"username": "Srdjan_Cengic"
},
{
"code": "card_rows",
"text": "if everything is in same collection, more or less i will hit unbound array problemThe goal of the following questions and answersAre you going to access a table without its table_rows? => rarely, if ever\nAre you going to access a table_row without its table? => rarely, if everwas that if you access everything together most of the time you still risk having a huge array. A $lookup simply build an array of documents. So if rows are too big to fit in a single document they will also be too big when you $lookup.But there is nothing wrong about limiting the number of columns and rows. I am sure there are some in Excell and LibreOffice.The reasoning behind, solution i proposed here where dynamic object id in card_rows collection is actually columnId, is that i need ability to sort, filter and do other aggregate operation on card rows.Doing sort and filtering in documents is probably more efficient since you might be able to define indexes while sorting and filtering an array will always be a memory sort.",
"username": "steevej"
},
{
"code": "tables = [{\n _id: 1,\n tenantId: 1,\n name: “Parking spots”,\n description: “List of our parking spots”.\n columns: [{\n _id: 2,\n columnName: “parkingType”,\n order: 1\n }, {\n _id: 3,\n columnName: “numberOfSpots”,\n order: 2\n }],\n]\n\nrows = [\n {\n \"_id\": ...,\n \"table_id\": 1,\n \"tenant_id\": 1,\n \"cells\": [\n {\n \"_id\": ...,\n \"columnId\": 2,\n \"value\": \"Garage\",\n // other possible fields about cell itself\n },\n {\n \"_id\": ...,\n \"columnId\": 3,\n \"value\": 1000\n }\n ]\n },\n ...\n]\ndb.rows.aggregate([\n $match: { tenant_id: 1, table_id: 1 },\n $sort: {\n \"cells.0.value\": 1\n }\n]);\n",
"text": "Thank you @steevej once again really, really appreciated.In the end based on some testing and different approaches and based on some very good inputs by you and similar threads, i will go with two collections. One to keep data about tables and column definitions, another one to keep rows of the tables. Some sort of attribute pattern as you mentioned:With some nice compound index on { table_id, tenant_id} in rows, i have found that queries like sorting performs not that badOnce again really thank you for your time and answer.",
"username": "Srdjan_Cengic"
}
] | Mongodb $lookup operator based on key name | 2023-03-28T23:56:23.618Z | Mongodb $lookup operator based on key name | 862 |
null | [
"python",
"motor-driver"
] | [
{
"code": "multiprocessing",
"text": "We are pleased to announce the 3.1.2 release of Motor - MongoDB’s Asynchronous Python Driver. This release fixes a bug preventing Motor from working when using multiprocessing.See the changelog for a high-level summary of what is in this release or see the Motor 3.1.2 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!",
"username": "Steve_Silvester"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | Motor 3.1.2 Released | 2023-04-03T20:25:43.347Z | Motor 3.1.2 Released | 1,137 |
null | [
"java",
"containers",
"field-encryption"
] | [
{
"code": "",
"text": "@wan, this is continuation of my previous thread. I am not able to reply further on that thread.As you have mentioned last, please find below our docker steps to deploy on Linux server.ARG SWARM_REGISTRY\nFROM ${SWARM_REGISTRY}/bi-common-baseimages/openjdk:8.0.342 AS build-envUSER root\nRUN microdnf install gnupg -y\nRUN microdnf install wget -y\nRUN microdnf install gpg -y#Adding file contents to the microdnf repository for mongodb-enterprise\nSHELL [“/bin/bash”, “-c”]\nRUN echo $‘[mongodb-enterprise-4.4] \\n\nname=MongoDB Enterprise Repository \\n\nbaseurl=https://repo.mongodb.com/yum/redhat/$releasever/mongodb-enterprise/4.4/$basearch/ \\n\ngpgcheck=1 \\n\nenabled=1 \\n\ngpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc’ > /etc/yum.repos.d/mongodb-enterprise-4.4.repoRUN microdnf install -y mongodb-enterprise-cryptd#Adding file contents to the yum repository for libmongocrypt\nSHELL [“/bin/bash”, “-c”]\nRUN echo $‘[libmongocrypt] \\n\nname=libmongocrypt repository \\n\nbaseurl=https://libmongocrypt.s3.amazonaws.com/yum/redhat/$releasever/libmongocrypt/1.6/x86_64 \\n\ngpgcheck=1 \\n\nenabled=1 \\n\ngpgkey=https://www.mongodb.org/static/pgp/libmongocrypt.asc’ > /etc/yum.repos.d/libmongocrypt.repoRUN microdnf install -y libmongocryptRUN whereis mongocryptd\nRUN whereis libmongocryptCOPY /target/execjar-java-encryption-1.0.2-SNAPSHOT.jar /usr/src/myapp/\nWORKDIR /usr/src/myappCMD [“java”, “-jar”, “execjar-java-encryption-1.0.2-SNAPSHOT.jar”]#CMD [“/bin/mongod.exe”]#RUN mongocryptd.exe",
"username": "PrasannaVengadesan_santhanagopalan"
},
{
"code": "",
"text": "@wan. we have our dedicated devops team there to support us. they own base image file. this is the content i see in there. our server is Linux. base image is ubi8. we have maven dependency. please check and provide any input if you have. thanks.FROM docker-frameworkimage-rel.prodlb.travp.net/redhat.io/ubi8/openjdk-8:1.14-3WORKDIR /usr/src/appUSER rootARG MAVENVERSION=3.8.7#Only for openjdk image, it seems to be missing yum\nRUN microdnf install -y ca-certificates curl jq git unzip wget tzdataENV TZ America/New_York\nRUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone\nRUN date#Download and Upgrade Maven\nRUN rm -Rf /usr/share/maven && \nwget https://dlcdn.apache.org/maven/maven-3/${MAVENVERSION}/binaries/apache-maven-${MAVENVERSION}-bin.tar.gz && \ntar -xvf apache-maven-${MAVENVERSION}-bin.tar.gz && \nmv apache-maven-${MAVENVERSION} /usr/share/maven && \nrm apache-maven-${MAVENVERSION}-bin.tar.gzENV PIPELINEGID=1001 PIPELINE_PRODUID=8530906 PIPELINE_PRODUSER=biwspipeline_p\nRUN groupadd -g ${PIPELINEGID} biwsusers && \nuseradd -u ${PIPELINE_PRODUID} ${PIPELINE_PRODUSER} -g biwsusers && \nmkdir -p /usr/src/app && \nmkdir -p /home/${PIPELINE_PRODUSER}/.m2 && \nchown -R ${PIPELINE_PRODUSER}:biwsusers /home/${PIPELINE_PRODUSER} && \nchown -R ${PIPELINE_PRODUSER}:biwsusers /home/${PIPELINE_PRODUSER}/.m2 && \nchmod g+s /home/${PIPELINE_PRODUSER} && \nchmod -R 774 /home/${PIPELINE_PRODUSER} && \nchgrp -R biwsusers /usr/src/app && \nchmod g+s /usr/src/app && \nchmod -R 774 /usr/src/appENV HOME /home/${PIPELINE_PRODUSER}#Install maven settings\nENV MAVEN_CONFIG “/home/${PIPELINE_PRODUSER}/.m2”\nRUN mkdir -p $MAVEN_CONFIG\nCOPY java/ubi8/conf/settings-docker.xml /usr/share/maven/ref/\nCOPY java/ubi8/conf/settings-ent.xml $MAVEN_CONFIG\nRUN mv ${MAVEN_CONFIG}/settings-ent.xml ${MAVEN_CONFIG}/settings.xmlCOPY --from=docker-appimage-bi-snp.prodlb.travp.net/bi-common-builder/trv-certs:latest /certs /etc/pki/ca-trust/source/anchors\nRUN update-ca-trustUSER ${PIPELINE_PRODUID}:biwsusers",
"username": "PrasannaVengadesan_santhanagopalan"
},
{
"code": "",
"text": "@wan, Redhat UBI8 is our base image. we have same base image for .NET core as well. it is working flawlessly. problem is only with Java and Linux. Request you to check above code and please let me know If you have any findings. thanks.",
"username": "PrasannaVengadesan_santhanagopalan"
},
{
"code": "",
"text": "@wan, any update on this? were you able to check on this ?",
"username": "PrasannaVengadesan_santhanagopalan"
}
] | Unable to create Client-Side Field Level Encryption enabled connection client with ATLAS in Java - Part2 | 2023-03-27T18:23:09.925Z | Unable to create Client-Side Field Level Encryption enabled connection client with ATLAS in Java - Part2 | 1,124 |
null | [
"react-native"
] | [
{
"code": "ActivitiesPublicUserexport class Activity extends Realm.Object<Activity> {\n _id: Realm.BSON.ObjectId = new Realm.BSON.ObjectId()\n name: string = 'Unnamed activity'\n creator!: PublicUser\n\n static primaryKey = '_id'\n }\n}\nActivitycreatorPublicUserexport class Activity extends Realm.Object<Activity> {\n _id: Realm.BSON.ObjectId = new Realm.BSON.ObjectId()\n name: string = 'Unnamed activity'\n creator!: PublicUser\n creatorId: Realm.BSON.ObjectId\n\n static primaryKey = '_id'\n}\nObjectIdLink",
"text": "In my application a user can create Activities, which for the sake of simplicity just have a name, an id, and a creator, where the creator field is a linked object in the PublicUser collection. I’m using @realm/react and typescript, so the model looks like this:I would like every user to be subscribed to documents in the Activity collection where they are the creator. However, this seems to be disallowed, as the PublicUser is a linked collection. To set up the subscription, all I need is the ID, which is exactly what is stored in the database itself. But Realm only sees a link, as far as I can tell, and there is no way to use the ID itself when establishing a sync subscription.To get around this, I can duplicate the ID in a separate field, egThis duplication allows me to establish a subscription on the basis of the creators ID. I plan on using this approach, but I wanted to make sure that I am not missing some way to simply look at the link as an ObjectId rather than a Link, as it would be much cleaner. There are several similar cases in my app so I’d rather avoid the duplication strategy if possible.Thanks,\nBrian",
"username": "Brian_Luther"
},
{
"code": "",
"text": "Hello @Brian_Luther,This is actually the same approach I came up with back in 2021, and have used ever since for Realm except for when I use CoreData in which case I sync Realm to CoreData or Room to store the IDs and reference them, and use aggregations and indexes in Atlas to handle things from outside of the clients/reference them and so on.But you are on track, as I also take the IDs and make them their own associated collection that ties the username to the ID numbers and make other realm apps as separated collections hold the other associated information.Such as transactions etc. So that way if my app(s) using Realm were ever breached no one attacker would have the whole picture of any one users. Addresses were separated from phone numbers, and SSNs were separated from names and addresses, names were held separately from address and phone numbers and SSNs, their purchase histories were all separate and so on.So the internal, non-displayed ID is what tied everything together for any one user. This also helped with a CDC audit due to COVID to determine how many and who were at location Y, I could provide the CDC with the exact info requested for my friends app, instead of giving them all of the users info.It actually made things a lot easier, same goes for his bookkeeping. He’s using a different platform now, but same kind of a setup.",
"username": "Brock"
},
{
"code": "",
"text": "Hi @Brian_Luther,You are correct that this is not currently supported. The workaround you proposed is how we’d recommend doing this for now.",
"username": "Kiro_Morkos"
}
] | Sanity check: Subscriptions based on the ID of a linked object are not supported | 2023-03-30T16:56:57.348Z | Sanity check: Subscriptions based on the ID of a linked object are not supported | 1,021 |
null | [
"connector-for-bi"
] | [
{
"code": "",
"text": "Hello all,I use Power Bi to connect to MongoDB Atlas using ODBC connector : BI connector version 2.14 I don’t receive all the columns from the Data Sources.I tried many time to refresh the connection, clean all the connection setting and isn’t work ?\nI contact Microsoft support team and after checking they told that the problem come from the connector BI.Any one please can help me to solve this issues ? It’s very Urgent.Thank you for your help.",
"username": "MARWEN_FATNASSI"
},
{
"code": "",
"text": "Can you see all columns from command line? (mongosqld or whichever tool you use)\nMay be permissions issue\nCheck data source settings",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Ramachandra_Tummala ,Thank you for your answer.\nYes I get all columns using MongoDB Compass with the same permissions.\nSo the problem is not come from Data Source Settings.Best,",
"username": "MARWEN_FATNASSI"
},
{
"code": "",
"text": "I am not referring to mongodb user permissions\nIn your Power BI under datasource tab something like global & file permissions radio tab must be there",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Yes, it’s there :\n\nimage794×680 18.7 KB\n",
"username": "MARWEN_FATNASSI"
},
{
"code": "",
"text": "It could be filters or other settings causing issue\nWhat is the query you are using in editor\nCheck BI forums.You may get more helpA few of my columns are not comming into Power BI from the data source. I have selected \"get data\" and selected the table I would like from SQL Server I then selected the table view on the left side of screen. I would expect to see the full...",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hello @Ramachandra_Tummala,\nI already check all this but the problem is not come from Power BI, I had a discussion with Power BI support Team. And we conclude that the problem come more the MongoDB BI connector using ODBC.\nThis is why I am here to understand the issues.\nIt will be very helpful for me if you make me in contact with the Engineer Team.\nThank you.Best,",
"username": "MARWEN_FATNASSI"
},
{
"code": "",
"text": "HI, @MARWEN_FATNASSI\ndid you get any solution to this problem?when you update/add new data at MongoDB, it is not reflecteing in powerbi.\nand there is intermediate connector between mongo and powerbi i.e MongoDB bi connector",
"username": "heart_hacker"
},
{
"code": "",
"text": "Hello, same issue for me. Didn’t found the solution. I use ODBC Connector on Power BI, works good, after that i add one collections but i can’t see this collection on Power BI. I delete and recreate the ODBC Connector and refresh, but did’nt work. Thx",
"username": "Nicolas_Ehrhard"
},
{
"code": "",
"text": "ThxI have having the exact same problem, and I have done all the steps you provided. I am experiencing this anytime I use the ODBC connector (Excel, Tableau), I can use the exact same permissions in Compass and see the missing collections.To me, this narrows down that is in fact the ODBC connector.How do I fix this???",
"username": "Jake_Shivers"
},
{
"code": "",
"text": "problemWere you able to resolve this? I’m having the same problem.",
"username": "Jake_Shivers"
},
{
"code": "",
"text": "Does anyone know how to fix it? I had the same problem",
"username": "Alice_Nguy_n"
},
{
"code": "",
"text": "I have not found a solution. However, I opened a case with MongoDB about a week ago. They’re currently investigating.",
"username": "Shmub"
},
{
"code": "",
"text": "@Shmub @Alice_Nguy_n @Jake_Shivers\ninstead of using the ODBC connector, I moved to an alternative option i.e Python\nBy using pymongo library in python, we can connect to the MongoDB collections and pull the data.\nyou just need a search on the internet - how to connect MongoDB to python using pymongo?\nonce the data is fetched to the python (i have used visual studio software)environment, you can copy-paste the same python code in the powerbi ,so the data fetches into powerquery.Run Python scripts in Power BI Desktop - Power BI | Microsoft Learn.you can read this document to activate python supported option in powerbi and to fetch the data.",
"username": "heart_hacker"
},
{
"code": "",
"text": "The problem here is probably the sampling size system variable on the BI Connector:If you are using MongoDB Atlas, this can be configured within the Atlas admin web interface in an optional configuration text box when you first set up your BI Connector. It’s set my default to 100 but you can change this to 0 to force the connector to sample all the objects in a collection.",
"username": "Thomas_Russell"
},
{
"code": "",
"text": "This worked for me . change the value to 0 in atlasThank you @Thomas_Russell",
"username": "PMED_1"
},
{
"code": "",
"text": "Hi, Having the exact same problem but in SSIS. We are losing metadata a bit randomly. We do not have “MongoDB Atlas.” I havnt been able to understand how the sample size can be set to 0. Someone who knows how to do it?",
"username": "Fredrik_Soderlind"
},
{
"code": "",
"text": "I can confirm setting this to Zero within your Atlas configuration has resolved the problem. It may take a bit longer to pull down data given some collections are quite large and it’s resampling all collections, however, that is better than it failing when attempting to work on the data.",
"username": "Seth_Helgeson"
},
{
"code": "",
"text": "Hello @Seth_Helgeson how mush time it takes?",
"username": "Sapna_Upreti"
},
{
"code": "",
"text": "Very helpful @Thomas_Russell, thanks for sharing \nThis fixed the issue.",
"username": "Travis_Gillespie"
}
] | I don't receive all fields in Power BI from MongoDB Atlas using ODBC | 2021-07-07T12:31:38.050Z | I don’t receive all fields in Power BI from MongoDB Atlas using ODBC | 10,172 |
[] | [
{
"code": "",
"text": "Getting Started with MongoDB AtlasOn clicking the Atlas Registration tab to log in to the MongoDB Atlas webpage, I do not see any place/option to paste the verification code. I see standard page to create the Atlas account. I created a new account from this page also but my submission fails. Please suggest how I can complete this assignment.\nimage636×822 20.9 KB\n",
"username": "Vikram_Modi1"
},
{
"code": "",
"text": "Check this link",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi Vikram and welcome to the forums! Are you trying to use your Atlas promo code or are you trying to complete a lab? If you’re trying to find where you can apply your Atlas promo code, you can find instructions for applying your code here.If you are having issues with completing the lab portion of your course, I recommend you reach out to [email protected] with your issue. For quicker resolution of the issue, try to provide the name of the lesson and the name of the lab you’re experiencing issues with.Hope this answers your question!",
"username": "Aiyana_McConnell"
}
] | Issue with Lesson2 Lab: Creating and Deploying an atlas cluster | 2023-03-31T21:36:37.542Z | Issue with Lesson2 Lab: Creating and Deploying an atlas cluster | 1,429 |
|
null | [] | [
{
"code": "",
"text": "I have done some of the course on Mongo University and I am a professional experienced user and I want to increase skill level and learn more. I am trying to do the courses and when I try to start the labs and I am unable to find the correct connection string to be able to anything. I redid the intro course and could not find they very simple information I need.",
"username": "Joe_Creaney"
},
{
"code": "",
"text": "Hi @Joe_Creaney,Welcome to the MongoDB Community forums I am trying to do the courses and when I try to start the labsPlease share the link to the course and the lab you are facing issues with.Also, can you confirm if you have created a dedicated Atlas account for MongoDB University learning?Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "I am having a similar issue! First lab, I get a bash shell, enter my authentication code, then click CHECK and get invalid response. It is not clear, at least to me, what we are supposed to do in this window! This is before the lesson #3 quiz.",
"username": "Jeff_Westman"
},
{
"code": "",
"text": "Hi @Jeff_Westman,Welcome to the MongoDB Community forums!Could you email this to [email protected] with the quiz/labs link? The team will get back to you!Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "I am new to this so forgive me if this is incorrect. I am having same issue and wondered if you got an answer?",
"username": "Ann_Duncan"
},
{
"code": "",
"text": "Hi @Ann_Duncan,Welcome back to the MongoDB Community forums Apologies for the late response!Could you please provide me with the link to the lab you are having trouble with? Additionally, can you explain in detail the specific issues you are encountering while attempting the labs?Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Dear, I’m having the same issue. I’ve already created ticket but maybe answer to my issue will be a help:my link",
"username": "Pawczak_pawel"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to do labs | 2023-01-31T18:05:24.215Z | Unable to do labs | 1,570 |
null | [
"aggregation",
"atlas-triggers",
"charts"
] | [
{
"code": "",
"text": "Hi,How can I create a scheduled trigger based on aggregation pipeline in Atlas → Charts → data source?I have multi stage aggregation pipeline that I need to execute on a daily basis to populate a collection (MV).While creating a Scheduled Trigger, I see that there is a LINK button that says “Link Data Source(s)” but it does not allow me to select a data source, it just allowed me to link the cluster. Am I doing something wrong?Any help or advice is greatly appreciated.\nThanks in Advance.Vidya",
"username": "Vidya_Swar"
},
{
"code": "",
"text": "Hi @Vidya_Swar -The “data source” term is a bit overloaded… in Charts it refers to a specific collection, but in App Services and Triggers it refers to the cluster.If you want to refresh a materialised view via a trigger, there is a tutorial for this at How to create and manage Mongo DB Materialized Views using triggers. | by Boni Gopalan | MediumTom",
"username": "tomhollander"
},
{
"code": "",
"text": "Thanks a lot for your time to bring clarity to my questions Tom, much appreciate it. Wish I could actually link DataSource (with my heavy aggregation pipeline code in it) and be able to schedule it. Hope MongoDB builds this feature. Am more of a T-SQL and PL-SQL programmer and less of object oriented programmer now teaching myself this powerful MongoDB. So was hoping that the aggregation-pipeline (stored procedure) could be scheduled.Using database trigger will cost a lot as we get millions of records monthly.\nSo, I did try to schedule a heavy aggregation pipeline code into MV including Spirits in the Materialized View: Automatic Refresh of Materialized Views | MongoDB Blog. But I keep getting undefined error in the scheduled trigger and I cannot see the detailed error message.Best regards and have a great week ahead.\nVidya",
"username": "Vidya_Swar"
}
] | Create Scheduled Trigger using Atlas Charts Data source | 2023-03-31T18:34:36.356Z | Create Scheduled Trigger using Atlas Charts Data source | 896 |
null | [
"queries",
"indexes"
] | [
{
"code": "Partial Indexdb.reviews.createIndex(\n {\n catalog_id: 1,\n product_id: 1,\n score: -1,\n created_at: -1\n },\n {\n name: \"reviews_only_fetch_by_catalog_product\",\n partialFilterExpression: {\n $or: [\n { comments: { $exists: true } },\n { images: { $exists: true } },\n { videos: { $exists: true } }\n ]\n }\n }\n)\n{\n $and: [\n {\n catalog_id: '100'\n },\n {\n $or: [\n {\n comments: {\n $exists: true\n }\n },\n {\n images: {\n $exists: true\n }\n },\n {\n videos: {\n $exists: true\n }\n }\n ]\n }\n ]\n}\n\nexplain planCOLLSCAN$or{\n catalog_id: '100',\n comments: {\n $exists: true\n }\n}\n6.0.4",
"text": "I have a MongoDB collection with documents having fields as we would see below. I’m trying to build a partial index with an expression that would be used in my filter queries.Here’s the Partial Index creation commandWhen I run the below query, I was expecting the partial index getting leveraged as the filter expression is a subset of partial index expression.But surprisingly, the explain plan makes a COLLSCAN instead of using the index. Why would the $or filter that is exactly as defined in the index definition not work for the query?While the below query is able to leverage the index.MongoDB version - 6.0.4",
"username": "Shriyog_Ingale"
},
{
"code": "",
"text": "It would help us help you if you share sample documents from your collection. We need sample documents to experiment your use-case on our system.",
"username": "steevej"
}
] | Partial Index not covering OR query filter | 2023-04-03T07:34:49.233Z | Partial Index not covering OR query filter | 758 |
null | [
"node-js",
"graphql",
"graphql-api"
] | [
{
"code": "{\n \"graphQLErrors\": [],\n \"clientErrors\": [],\n \"networkError\": {\n \"name\": \"ServerError\",\n \"response\": {},\n \"statusCode\": 404,\n \"result\": {\n \"error\": \"cannot find app using Client App ID $MY_CLIENT_ID\n }\n },\n \"message\": \"Response not successful: Received status code 404\"\n}\nasync function getValidAccessToken() {\n if (!app.currentUser) {\n console.log(\"getting api key creds\");\n await app.logIn(Realm.Credentials.apiKey(API_KEY));\n console.log(\"got credentials!\");\n } else {\n await app.currentUser.refreshCustomData();\n console.log(\"refreshed\");\n }\n return app.currentUser?.accessToken;\n}\nexport const apolloClient = new ApolloClient({\n link: new HttpLink({\n uri: `https://realm.mongodb.com/api/client/v2.0/app/${APP_ID}/graphql`,\n\n fetch: async (uri, options) => {\n console.log(\"fetching\");\n const accessToken = await getValidAccessToken();\n console.log(accessToken);\n (\n options?.headers as Record<string, string>\n ).Authorization = `Bearer ${accessToken}`;\n return fetch(uri, options);\n },\n }),\n cache: new InMemoryCache(),\n});\n",
"text": "Hello!I am attempting to execute queries on a newly created collection via the GraphQL api using Apollo.The frontend auths via an API key, the call to get an access token is successful.\nWhen attempting to run any queries after this I can see the bearer token appended however every call returns a 404 and the error:cannot find app using Client App ID $CLIENT_IDThe full error details are here:I’ve verified that the client ID is correct. I’ve attempted to delete the entire cluster and re-create it in another region with no luck.\nIs there some issue with locating newly created clusters or is there some configuration in the UI I am missing?\nI am able to access the console at http://cloud.mongodb.com/ and I can successfully run queries in GraphiQL there.\nI am aware there is an old thread about this though this issue seems to have resurfacedfwiw auth code is as follows:",
"username": "Andrew_Karasek"
},
{
"code": "",
"text": "I’m having the same issue.Did you have any luck with this?\nWhat was your outcome?Thanks",
"username": "Zanek_shaw"
},
{
"code": "",
"text": "I am having the same issue intermittently. This is a BIG showstopper if this error continues.Need to know if Graphql will be reliable for my app, as this is the main reason I’m using Atlas App Service. If this cannot be resolved soon, I may find a completely different solution.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "Where I went wrong was deploying in a single reagon but referencing it globally. It only made error sometimes on some computers when on some networks.Having an application deployed in a single region is better in terms of performance than having the application deployed globally.Therefore, the recommendation (i think) is to keep the app in a single region and use a local URL\nEG:\nhttps://ap-southeast-2.aws.realm.mongodb.com/api/client/v2.0/app/(APP_ID)/graphql\nINSTEAD OF:\nhttps://realm.mongodb.com/api/client/v2.0/app/(APP_ID)/graphql",
"username": "Zanek_shaw"
},
{
"code": "",
"text": "This actually worked.\nThe official docs definitely need to be updated to call this out. Thank you.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
},
{
"code": "",
"text": "Hi @Zanek_shaw, @Try_Catch_Do_Nothing ,Thank you for helping and bringing this to our attention. Your feedback is important to us and this has been reported to the concerned team.The section “Setting up Apollo Client” will be updated to have both local and global urls.Thank you again for your contributions.Cheers, \nHenna",
"username": "henna.s"
}
] | Realm GraphQL error when executing queries: "cannot find app using Client App ID $CLIENT_ID" | 2022-11-29T00:31:46.859Z | Realm GraphQL error when executing queries: “cannot find app using Client App ID $CLIENT_ID” | 2,374 |
[] | [
{
"code": "",
"text": "The button in this category () “MongoDB University FAQ”\ncurrently leads to the following error.\nI am guessing it was linked to the old “Learn” category which was seemingly removed when the new university fully replaced the old one.I don’t know where else this button is shown so it might not be wise to just remove it. Asking for advice from the education team would be best.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hey @Yilmaz_Durmaz,Thanks for letting us know about this! This issue has been resolved and the link is now working.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | "MongoDB University FAQ" button is not working | 2023-04-03T06:23:46.018Z | “MongoDB University FAQ” button is not working | 858 |
|
null | [
"android",
"flexible-sync"
] | [
{
"code": "subscriptions.add(Subscription.create(\"userSubscription\", realm.where(my_user.class) .equalTo(\"email\",\"[email protected]\")));",
"text": "If I modify data directly in Atlas and the android App is not running, then on the next run of the App; Client Reset is fired. Is it an expected behavior?No Schema Change happening in this case.Steps to reproduce:Is it a normal behavior or is it a BUG?",
"username": "Santosh_Kumar4"
},
{
"code": "",
"text": "Client reset logic is wha you need to have implemented.Then terminate sync, wait 10 minutes, re-enable sync. It’s a type of bad changeset.",
"username": "Brock"
},
{
"code": "",
"text": "To clarify on this as some over at DevCalibration reached out to me.This issue is caused because it’s a mislabeling of a bad changeset, you have a document on the local client which has been updated and that’s trying to push at the same time the server is sending another version of the document elsewhere, basically making the client and server both have different versions of the document.The only known and seen solution to this error, is a full termination and resync, as deleting the document in the server has proven to not work. Same with just deleting the document on the local client because it’s something going on with the writer processes and the server not completing the upload/write/syncing.By terminating sync, AFTER client reset logic is in place to keep unsynced documents this issue is corrected once sync is re-enabled.It won’t make a difference if you delete the doc locally, or on server at all, because it will not abort the “upload” even if the writer doesn’t even see it.It’s only when the term and resync happens that the upload is aborted, and upon resync the new version of the document makes it to be synced, again to reiterate, a term and resync is the only thing you can really do in this situation for the reasons outlined.",
"username": "Brock"
}
] | Data modification directly in Atlas fires Client Reset if App is not running (No breaking Schema) - Flexible Sync | 2022-06-15T12:49:59.059Z | Data modification directly in Atlas fires Client Reset if App is not running (No breaking Schema) - Flexible Sync | 2,614 |
null | [] | [
{
"code": "",
"text": "We are looking to write a mobile app that will be available within mainland China and running on servers in that country. Anyone know if there a provider/partner that offers Atlas Device Sync there now or in the near future?Thanks,Alvin",
"username": "Alvin_Chan"
},
{
"code": "",
"text": "Hello @Alvin_ChanYou would need to speak to Alibaba, and you would also need to consult 中华人民共和国工业和信息化部 China’s Ministry of Industry and Information Technology for any app compliance standards you need to meet to launch and deploy the App across mainland China. Because the Great Firewall of China has serious restrictions your app is going to have to meet.They would be the go-to for anything app deployment related, but as far as hosted services go Alibaba does host and service MongoDB, you can also run Apollo GraphQL servers within Mainland China or on Alibaba virtual infrastructure, and then route the appropriate GraphQL clients on the mobile apps.MongoDB may launch Realm/Device Sync on Alibaba, but that’d yet to be determined. You can also establish HTTP and JSON based APIs and run them to the Alibaba hosted MongoDB instances as well. There’s a myriad of ways to ensure data gets from mobile to the cloud that way.But again, prior to you launching your app, I’d highly encourage that you communicate with 中华人民共和国工业和信息化部 and get a solid, approved-in writing guide out, be sure to explain you’re looking to comply with Chinese law, what the app is, what it does, why you’re making it, and why you want to deploy it across mainland China and the consumers there, as well as what private information of Chinese citizens will be recorded, and what will happen to that data.The PRC (People’s Republic of China) takes foreign software to serious extremes in what they view as security or safety risks to protect PRC citizens from what they may perceive as a threat to the public. You need to be sure to navigate those waters prior to committing to any services, however if you do need help navigating how to get in contact with 中华人民共和国工业和信息化部, you’re welcome to DM me your business contact e-mail and I can walk you through the steps and processes. I’ve worked on, launched, and deployed 11 mobile apps that operate within China, Vietnam, Siam (Thailand, just some still know it as Siam), and Myanmar. All of which are still operating today.Again, you need to focus more on being highly respectful to any requests or wishes by the PRC Government, and make it clear that’s why you’re reaching out to them, and why you want to launch your app and they will be more than happy to help, but they will have requirements of things that will have to be implemented within the applications versions that will be operational and available to the Chinese public.EDIT:@Alvin_ChanAlibaba Cloud ApsaraDB for MongoDB is a secure, reliable, and elastically scalable cloud database service.MongoDB does support the above, but am unsure if you would pay for support directly with Alibaba, or if it’s direct with MongoDB.There is the OpenAPI by Alibaba that can link your mobile app to the MongoDB hosted in Alibaba.SDK中心是阿里云OpenAPI开发者门户支持9门SDK语言的主要平台,为每种SDK语言提供demo、完整工程、部署指南、调试平台和场景化示例。You would need to select the correct SDK for you, etc.It’s known when or if Alibaba will also host Realm/Device Sync, but I’m sure they may cook something up in time.",
"username": "Brock"
},
{
"code": "",
"text": "Thank you Brock for the detailed guidance! Much appreciated! We are hoping to launch over the next couple months to regions outside China first. If successful, then into China. Navigating all these obstacles is a real challenge but thanks to people like you in the community, it encourages us to keep going.Regards,Alvin",
"username": "Alvin_Chan"
},
{
"code": "",
"text": "@Alvin_ChanVietnam and Myanmar are the same boats, both countries you want to check with their equivalents to the FCC about your app(s).Myanmar was an extremely interesting process compared to China, unknown if you’re trying to deploy there but anything that’s “new” or could be considered controversial run it through their government first as well. Or things can get extremely ugly and overly problematic, as they also will go through local governments. It took almost a year to clear things up and get rid of labels a client company I worked for was an espionage platform to overthrow the Junta. That labeling for my client had intense ramifications in dealing with other governments in the region to allow the apps deployments.India is also an interesting process, just to be aware of.South Korea was probably the best experience to handle if your company is US Based, or if it’s based in Singapore. Japan probably had the strictest user data compliances Ive ever seen from any nation in that region, even down to when data had to be destroyed related to a consumer if that consumer hasn’t accessed the app in so long.That said though, in the Far East Asia landscape AWS is everywhere, it’s also in Beijing, but I haven’t seen Realm offered out off Beijing. That said, Alibaba has dozens of datacenter within China, compared to I think one ore two AWS data centers.",
"username": "Brock"
},
{
"code": "",
"text": "",
"username": "henna.s"
}
] | Device Sync in mainland China | 2023-02-15T11:40:37.476Z | Device Sync in mainland China | 1,335 |
null | [
"aggregation",
"replication"
] | [
{
"code": "",
"text": "I encountered a problem when I was working, but because I am a self-taught program and have no professional training, I have no way to judge whether there is a better solution, so I come up to ask everyoneSystem limitations:Problems encountered:\nAt present, Mongodb is running very normally when the traffic is low, and it is also working very well. The amount of data may overflow to 144TB after 3-5 years. The current concern is whether it will cause the search process. Mongodb’s Timeout or low search efficiencyQuestion:\nWould like to ask if there is a way to improve the search efficiency?\nFor example: I put all the birthday categories in the birthday DB, and when I need to find the birthday, I go to that DB to findOr will using Cluster/Sharding improve efficiency?Thank you very much for your answers and help. I will keep every answer of yours in mind. Best wishes.",
"username": "William_Lyu"
},
{
"code": "",
"text": "Hey @William_Lyu,Welcome to the MongoDB Community Forums! The current concern is whether it will cause the search process. Mongodb’s Timeout or low search efficiencyCan you please explain this further? Like how are you measuring the slowness in performance or if a timeout is happening since this is not very clear at the moment.Would like to ask if there is a way to improve the search efficiency?There are a few ways to improve search based on what you described such as:Please note that the above points are suggestions only and the specific solution or combination of solutions will depend on a variety of factors, including the types of queries being executed, the data model, and the hardware and infrastructure available. It would be good if you can share your current schema design, any sample documents, and queries that you’re executing(since you have mentioned aggregation with the use of $in,$nin,$regex operators which can also impact performance) along with the indexing, the output of explain(‘executionStats’) and the expected outputs to further pinpoint exact solutions.\nIf this is not a possibility, I’m attaching some documentation and other useful links that you can go through:\nAggregation Optimization\nBest Practices for MongoDB Performance\n$regex and Index Use\nPerformance Tuning in MongoDB\nIndex SelectivityHope this helps.Regards,\nSatyam",
"username": "Satyam"
},
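To make the indexing suggestion above concrete, here is a minimal PyMongo sketch of the pattern being recommended (hedged: the connection string, database name and 30-day window are illustrative assumptions; the "time" field and "testCollection" name are taken from the poster's own example later in this thread):

```python
from datetime import datetime, timedelta
from pymongo import MongoClient, DESCENDING

# Placeholder connection string / names, for illustration only.
client = MongoClient("mongodb://localhost:27017")
col = client["testdb"]["testCollection"]

# An index that matches the query's sort ({time: -1, _id: -1}) lets MongoDB
# walk the index instead of sorting documents in memory.
col.create_index([("time", DESCENDING), ("_id", DESCENDING)])

# Keeping queries time-bounded keeps the index scan small even as the
# collection grows toward the 144 TB estimate mentioned above.
since = datetime.utcnow() - timedelta(days=30)
cursor = (
    col.find({"time": {"$gte": since}})
    .sort([("time", DESCENDING), ("_id", DESCENDING)])
    .limit(50)
)
for doc in cursor:
    print(doc["_id"])
```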
{
"code": "",
"text": "OMG, I’m very excited to see your reply\nFor me, it really means a lot, especially as someone who is self-taught in programming.\nSometimes I really feel at a loss for certain problems.To the First problem:\nRight now there is less than 10TB of data, Mongodb is working normally, but I’m not quite sure if there is a large amount of data,find() and aggregate() whether it’s right or wrong, it’s possible to operate smoothly as usually.For example:\nWhen I am searching for Data with time index , Does MongoDB search for data lead to poor system performance due to lengthy search times?I’m sorry for the misunderstanding. This is a hypothetical question.Second problem:\nI’m currently using find() or is $match’s weather capital available index\nFor example:\nWhen Mongodb initial , createIndex() time addition time : 1 index\nAfter this time, the search for time with find() and aggregate() ,$match: …However, I don’t know, it’s a good way to use find() and indexExtraordinary quetion:\nare there any books or articles you would recommend for self-learners like me?\nCurrently, I have read through all the MongoDB tutorials, but there is little information about performance and scalability.Thank you again, I’m really impressed!\nI am forever in mind\nBest Wish",
"username": "William_Lyu"
},
{
"code": "",
"text": "Hey @William_Lyu,However, I don’t know, it’s a good way to use find() and indexRegarding not knowing if find() is useful with an index or not, you can use explain output to analyze if your queries are using index or not and if there’s scope for further optimization of the index.\nI have linked all the useful resources in my previous reply that should help you out with your first two questions.Coming to books or articles, MongoDB Documentation is the best resource to know more about MongoDB. It is the most up-to-date resource of MongoDB. You can also learn more from MongoDB’s University Platform. It hosts a lot of free, amazing courses from basics to advanced topics that you can take up to increase your knowledge of MongoDB. You can also refer to MongoDB Developer Center which has all the latest MongoDB tutorials, videos, and code examples in different languages and tools.Hope this helps.Regards,\nSatyam",
"username": "Satyam"
},
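As a rough, self-contained illustration of the explain advice above (PyMongo shown; the names and filter are hypothetical and only mirror the earlier sketch in this thread), one way to check whether a query is actually using an index is:

```python
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["testdb"]

since = datetime.utcnow() - timedelta(days=30)
stats = db.command({
    "explain": {"find": "testCollection", "filter": {"time": {"$gte": since}}},
    "verbosity": "executionStats",
})

# Look for an IXSCAN (index scan) rather than a COLLSCAN in the winning plan,
# and compare totalDocsExamined against nReturned.
print(stats["queryPlanner"]["winningPlan"])
print(stats["executionStats"]["totalDocsExamined"],
      stats["executionStats"]["nReturned"])
```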
{
"code": "",
"text": "This is an example I use aggregate() and match:\nat the begiing:\ncol.create_index([(“time”, 1)])\ncol.create_index([(“time”, -1)])use aggregate:for i in db.testCollection.aggregate(\n[\n{“$match”: { query }},\n{“$sort”: {“time”: -1, “_id”: -1}},\n{“$skip”: skip},\n{“$limit”: limit},\n{\n“$project”: {\n“_id”: {“$toString”: “$_id”},\n“FileName”: 1,\n“type”: 1,\n“time”: {\n“$dateToString”: {\n“date”: “$time”,\n“format”: “%Y-%m-%d %H:%M:%S”,\n“onNull”: “”,\n}\n},\n},\n},\n]\n)find() :\ntest = db.testCollection.find_one({“_id”: ObjectId(search[“ID”])}, {“_id”: 0})Like I said , this is a statement I learned from the Mongodb document. It works great when the data volume is low. However, I am concerned that it may lead to long search times when the data volume is large. I don’t know if there is a better way to use the find() and aggregate() statements.In the past few days, I also found the courses in Mongodb College while searching for documents, and I have started taking classes. The Mongodb community is really great, both in terms of your answers and learning resources. It is really helpful for beginners.Thank for your reply. It’s help a lot to me.",
"username": "William_Lyu"
}
] | Mongodb in Big Data Search | 2023-03-27T03:56:03.073Z | Mongodb in Big Data Search | 892 |
null | [
"replication",
"connecting",
"golang",
"containers"
] | [
{
"code": "v1.10.3mongo.db.0mongo.db.0replicaSetserver selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: <OLD_IP_ADDR>:27017, Type: Unknown, Last error: connection() error occured during connection handshake: dial tcp <OLD_IP_ADDR>:27017: i/o timeout }, ] }",
"text": "Hello,I am using Monstache which under the hood uses the Go MongoDB driver. Driver version is v1.10.3 & connects to Mongo 4.4.The issue I am facing is as follows:My questions are:Thank you,\nMax",
"username": "Max_Dudzinski"
},
{
"code": "",
"text": "I don’t know how your specific driver behaves, but probably something is wrong with your configuration/use on the driver-server communication.The error makes sense because the old ip is long gone and it’s still trying to access it.Maybe the driver is not periodically refreshing the mapped ip address? i don’t know.",
"username": "Kobe_W"
},
{
"code": "server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: <OLD_IP_ADDR>:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp <OLD_IP_ADDR>:27017: i/o timeout }, ] }",
"text": "Hello @Max_Dudzinski,Welcome to the MongoDB Community forums server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: <OLD_IP_ADDR>:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp <OLD_IP_ADDR>:27017: i/o timeout }, ] }As @Kobe_W also mentioned if you change the IP, it’s gone and this will probably happen.To better understand the issue, can you please share the output of rs.conf() and rs.status().At first glance, I don’t think this is related to the Go driver, but rather the change in DNS in your environment.As far as I know, the go driver uses the default resolver from the net package. Also, as per the JIRA ticket - the Go driver does not cache DNS and instead relies on the OS and its resolvers.So if the IP is stale, the DNS cache is the possible issue as it could be in the OS, network, etc.Furthermore, if you are looking to integrate the search solution into the MongoDB Atlas Dataset, I’ll recommend using Atlas search for better compatibility and using the combination of three systems database, search engine, and sync mechanisms into one, delivering application search experiences much faster.For more information, please visit the MongoDB Atlas Search documentation.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "solidatus-db-0.dbrs0:PRIMARY> rs.status()\n{\n\t\"set\" : \"rs0\",\n\t\"date\" : ISODate(\"2023-03-22T08:30:34.395Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(2),\n\t\"syncingTo\" : \"\",\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 1,\n\t\"writeMajorityCount\" : 1,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1679473830, 7),\n\t\t\t\"t\" : NumberLong(2)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"2023-03-22T08:30:30.674Z\"),\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1679473830, 7),\n\t\t\t\"t\" : NumberLong(2)\n\t\t},\n\t\t\"readConcernMajorityWallTime\" : ISODate(\"2023-03-22T08:30:30.674Z\"),\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1679473830, 7),\n\t\t\t\"t\" : NumberLong(2)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1679473830, 7),\n\t\t\t\"t\" : NumberLong(2)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2023-03-22T08:30:30.674Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2023-03-22T08:30:30.674Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1679473810, 1),\n\t\"lastStableCheckpointTimestamp\" : Timestamp(1679473810, 1),\n\t\"electionCandidateMetrics\" : {\n\t\t\"lastElectionReason\" : \"electionTimeout\",\n\t\t\"lastElectionDate\" : ISODate(\"2023-01-12T08:13:22.040Z\"),\n\t\t\"electionTerm\" : NumberLong(2),\n\t\t\"lastCommittedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\"t\" : NumberLong(-1)\n\t\t},\n\t\t\"lastSeenOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1673511115, 1),\n\t\t\t\"t\" : NumberLong(1)\n\t\t},\n\t\t\"numVotesNeeded\" : 1,\n\t\t\"priorityAtElection\" : 1,\n\t\t\"electionTimeoutMillis\" : NumberLong(10000),\n\t\t\"newTermStartDate\" : ISODate(\"2023-01-12T08:13:22.042Z\"),\n\t\t\"wMajorityWriteAvailabilityDate\" : ISODate(\"2023-01-12T08:13:22.094Z\")\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"10.1.95.24:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 5962708,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1679473830, 7),\n\t\t\t\t\"t\" : NumberLong(2)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-03-22T08:30:30Z\"),\n\t\t\t\"syncingTo\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1673511202, 1),\n\t\t\t\"electionDate\" : ISODate(\"2023-01-12T08:13:22Z\"),\n\t\t\t\"configVersion\" : 182936,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1679473830, 7),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1679473830, 7)\n}\nrs0:PRIMARY> rs.config()\n{\n\t\"_id\" : \"rs0\",\n\t\"version\" : 182936,\n\t\"protocolVersion\" : NumberLong(1),\n\t\"writeConcernMajorityJournalDefault\" : true,\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"host\" : \"10.1.95.24:27017\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 1,\n\t\t\t\"tags\" : {\n\t\t\t\t\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 1\n\t\t}\n\t],\n\t\"settings\" : {\n\t\t\"chainingAllowed\" : true,\n\t\t\"heartbeatIntervalMillis\" : 2000,\n\t\t\"heartbeatTimeoutSecs\" : 
10,\n\t\t\"electionTimeoutMillis\" : 10000,\n\t\t\"catchUpTimeoutMillis\" : -1,\n\t\t\"catchUpTakeoverDelayMillis\" : 30000,\n\t\t\"getLastErrorModes\" : {\n\t\t\t\n\t\t},\n\t\t\"getLastErrorDefaults\" : {\n\t\t\t\"w\" : 1,\n\t\t\t\"wtimeout\" : 0\n\t\t},\n\t\t\"replicaSetId\" : ObjectId(\"6311ccaab8d114d352e0655e\")\n\t}\n}\n$ date \nWed Mar 22 08:29:50 UTC 2023\n$ curl solidatus-db-0.db:27017\nIt looks like you are trying to access MongoDB over HTTP on the native driver port.\nrs0:PRIMARY> rs.status()\n{\n\t\"set\" : \"rs0\",\n\t\"date\" : ISODate(\"2023-03-22T08:34:24.906Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(3),\n\t\"syncingTo\" : \"\",\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 1,\n\t\"writeMajorityCount\" : 1,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1679474063, 6),\n\t\t\t\"t\" : NumberLong(3)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"2023-03-22T08:34:23.612Z\"),\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1679474063, 6),\n\t\t\t\"t\" : NumberLong(3)\n\t\t},\n\t\t\"readConcernMajorityWallTime\" : ISODate(\"2023-03-22T08:34:23.612Z\"),\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1679474063, 6),\n\t\t\t\"t\" : NumberLong(3)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1679474063, 6),\n\t\t\t\"t\" : NumberLong(3)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2023-03-22T08:34:23.612Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2023-03-22T08:34:23.612Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1679473965, 6),\n\t\"lastStableCheckpointTimestamp\" : Timestamp(1679473965, 6),\n\t\"electionCandidateMetrics\" : {\n\t\t\"lastElectionReason\" : \"electionTimeout\",\n\t\t\"lastElectionDate\" : ISODate(\"2023-03-22T08:34:14.203Z\"),\n\t\t\"electionTerm\" : NumberLong(3),\n\t\t\"lastCommittedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\"t\" : NumberLong(-1)\n\t\t},\n\t\t\"lastSeenOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1679473965, 6),\n\t\t\t\"t\" : NumberLong(2)\n\t\t},\n\t\t\"numVotesNeeded\" : 1,\n\t\t\"priorityAtElection\" : 1,\n\t\t\"electionTimeoutMillis\" : NumberLong(10000),\n\t\t\"newTermStartDate\" : ISODate(\"2023-03-22T08:34:14.206Z\"),\n\t\t\"wMajorityWriteAvailabilityDate\" : ISODate(\"2023-03-22T08:34:14.249Z\")\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"name\" : \"10.1.95.4:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 86,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1679474063, 6),\n\t\t\t\t\"t\" : NumberLong(3)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-03-22T08:34:23Z\"),\n\t\t\t\"syncingTo\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1679474054, 1),\n\t\t\t\"electionDate\" : ISODate(\"2023-03-22T08:34:14Z\"),\n\t\t\t\"configVersion\" : 276040,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1679474063, 6),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1679474063, 6)\n}\nrs0:PRIMARY> rs.config()\n{\n\t\"_id\" : \"rs0\",\n\t\"version\" : 276040,\n\t\"protocolVersion\" : NumberLong(1),\n\t\"writeConcernMajorityJournalDefault\" : 
true,\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"host\" : \"10.1.95.4:27017\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 1,\n\t\t\t\"tags\" : {\n\t\t\t\t\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 1\n\t\t}\n\t],\n\t\"settings\" : {\n\t\t\"chainingAllowed\" : true,\n\t\t\"heartbeatIntervalMillis\" : 2000,\n\t\t\"heartbeatTimeoutSecs\" : 10,\n\t\t\"electionTimeoutMillis\" : 10000,\n\t\t\"catchUpTimeoutMillis\" : -1,\n\t\t\"catchUpTakeoverDelayMillis\" : 30000,\n\t\t\"getLastErrorModes\" : {\n\t\t\t\n\t\t},\n\t\t\"getLastErrorDefaults\" : {\n\t\t\t\"w\" : 1,\n\t\t\t\"wtimeout\" : 0\n\t\t},\n\t\t\"replicaSetId\" : ObjectId(\"6311ccaab8d114d352e0655e\")\n\t}\n}\n$ date\nWed Mar 22 08:35:02 UTC 2023\n$ curl solidatus-db-0.db:27017\nIt looks like you are trying to access MongoDB over HTTP on the native driver port.\nError starting change stream. Will retry: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 10.1.95.24:27017, Type: Unknown, Last error: connection() error occured during connection handshake: dial tcp 10.1.95.24:27017: i/o timeout }, ] }\n10.1.95.2410.1.95.4",
"text": "Hi @Kobe_W, Hello @Kushagra_Kesav,Thank you for your replies.Go driver successfully connected, no errors. Host in connection string is: solidatus-db-0.dbMongo reachable as:Then MongoDB restarts, comes up under a new IP address:Mongo still reachable under same host:MongoDB Go driver reportsFrom what I understand, the driver has itself cached the topology with single member at IP address 10.1.95.24 which has now become stale.\nShouldn’t the driver go back and use it’s provided connection string to re-discover the topology & new IP of member i.e. 10.1.95.4?Thank you for your time,\nMax",
"username": "Max_Dudzinski"
},
{
"code": "\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"name\" : \"10.1.95.4:27017\",\n\"solidatus-db-0.db.\"\"10.1.95.4:27017\"member1.example.commember2.example.commember3.example.com",
"text": "Hi @Max_Dudzinski,Thanks for sharing the details.As per the shared information,I believe the reason why the driver cannot reconnect to the new host is that the replica set was configured with the actual IP address instead of using a hostname as per the recommendations in the documentation.This is because if you use IP addresses, any changes in the IP address of a member will require updating the configuration file of all other members, which can be time-consuming and error-prone. On the other hand, if you use a DNS hostname, the IP address of a member can change without requiring any configuration updates on other members.For example, If you use hostnames like member1.example.com, member2.example.com, and member3.example.com to configure a MongoDB replica set, any changes in their IP addresses will be automatically resolved by DNS without needing to update the configuration file.To resolve this - I will recommend you change the config of your replica set to use hostnames instead of IP addresses.For detailed information please refer:I hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
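For anyone who can change the replica set configuration, the hostname recommendation above boils down to a reconfig of the member's "host" field. This would normally be done from mongosh with rs.conf()/rs.reconfig(); the sketch below shows the equivalent admin commands with PyMongo purely for consistency with the other examples here, and the URI and hostname are placeholders taken from this thread:

```python
from pymongo import MongoClient

# Connect directly to the current primary (placeholder address from the thread).
client = MongoClient("mongodb://10.1.95.4:27017/?directConnection=true")

cfg = client.admin.command("replSetGetConfig")["config"]
cfg["members"][0]["host"] = "solidatus-db-0.db:27017"  # hostname instead of raw IP
cfg["version"] += 1                                    # a reconfig needs a bumped version

client.admin.command({"replSetReconfig": cfg})
```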
{
"code": "",
"text": "This issue with the driver not being able to recover when the cluster topology might be related to this issue that was fixed in 1.10.4 onwards.Can you try the latest 1.10.x driver and see if the issue still happens?",
"username": "Mavericks2022"
},
{
"code": "curl solidatus-db-0.db:27017solidatus-db-0.db:27017",
"text": "Hi @Mavericks2022,Thanks for your comment. I’ve looked at the issue & resultant commit, and I believe the fix in the issue is strictly to do with SRV polling, which is not what I’m using.I will try out the latest driver later on in the hope that it works.Hi @Kushagra_Kesav,Thanks for the follow up.The DNS record within the environment is correctly up to date - you can see that from the second curl solidatus-db-0.db:27017 command I ran, after MongoDB has restarted & come up with a new IP address & the RS was reconfigured.The problem is that the driver itself is caching & not updating the stale topology - if it simply re-connected to solidatus-db-0.db:27017 & rediscovered the updated topology with the updated IP addrs & connected to it instead, all would be fine.To resolve this - I will recommend you change the config of your replica set to use hostnames instead of IP addresses.I appreciate this is the recommended best practise - in my case however, the 3rd party tool responsible for maintaining RS in my changing environment unfortunately does not support the use of hostnames.Further, RS reconfigurations should be expected to happen, for various reasons.\nWhile the use of IP addresses in RS config’s may not be the best, it is a valid configuration option.\nI simply believe it is a pretty bad oversight from the MongoDB Go driver to not do something smarter, like re-try connecting to one of the nodes via the original connection string.I have resolved my issue by manually detecting this type of connection error via log parsing & triggering a restart of the entire process which was using the MongoDB Go driver, causing a fresh connect via connection string to MongoDB & thus a fresh discovery of updated topology",
"username": "Max_Dudzinski"
},
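The "detect the error and reconnect" workaround described above is language-agnostic; Max's actual setup is Go/Monstache, but the same pattern sketched in Python (placeholder URI; a real deployment would add retries and backoff) looks roughly like this:

```python
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

URI = "mongodb://solidatus-db-0.db:27017/?replicaSet=rs0"  # placeholder seed address

def healthy_client(existing=None):
    """Return a client that can still reach the replica set, rebuilding it from
    the original connection string when server selection has gone stale."""
    client = existing or MongoClient(URI, serverSelectionTimeoutMS=5000)
    try:
        client.admin.command("ping")
        return client
    except ServerSelectionTimeoutError:
        # The cached topology points at members that no longer exist (e.g. all
        # nodes moved to new IPs), so discard it and rediscover via the seed host.
        client.close()
        return MongoClient(URI, serverSelectionTimeoutMS=5000)
```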
{
"code": "rs.status()",
"text": "@Max_Dudzinski thanks for all the information, this is a really interesting problem! There is actually a section from the MongoDB driver specification that describes the expected behavior of drivers under similar circumstances, and some rationale for those decisions:…An alternative proposal is for clients to continue using the hostnames in the seed list. It could add new hosts from the hello or legacy hello response, and where a host is known by two names, the client can deduplicate them using the “me” field and prefer the name in the seed list.This proposal was rejected because it does not support key features of replica sets: failover and zero-downtime reconfiguration.In our example, if “host1” and “host2” are not reachable from the client, the client continues to use “host_alias” only. If that server goes down or is removed by a replica set reconfig, the client is suddenly unable to reach the replica set at all: by allowing the client to use the alias, we have hidden the fact that the replica set’s failover feature will not work in a crisis or during a reconfig.…Basically, MongoDB drivers connect to the replica set nodes as described by the replica set (i.e. the information that rs.status() returns) because they depend on timely and accurate topology change info from the MongoDB replica set to support “failover and zero-downtime reconfiguration”.When a driver has completely lost connection to a replica set, there are two possible circumstances:Drivers could simultaneously attempt to connect to the last known MongoDB replia set and re-initialize using the connection string to see which succeeds first. However, that may not always be the best behavior for all use cases, so we have historically assumed case #1 (the more common case) and required users to implement their own recovery logic for case #2.Another section from the specification seems to suggest that using arbiter nodes can help with the case where all replica set members are moved in a short period of time:… in the rare case that all data members are moved to new hosts in a short time, an arbiter may be the client’s last hope to find the new replica set configuration.Do you have the option of running arbiter nodes that could help the Go driver keep track of the replica set node after it is moved? If not, it sounds like your solution to detect the error and reconnect is the correct solution.",
"username": "Matt_Dale"
},
{
"code": "host_alias",
"text": "Hi @Matt_Dale,Thank you very much for the detailed answer. The links posted are extremely helpful, I unfortunately missed them during my Googling I’m not sure I agree with this part of the docs:… by allowing the client to use the alias, we have hidden the fact that the replica set’s failover feature will not work in a crisis or during a reconfig.Imo the reconfig is an internal change that should be invisible to users who only know host_alias - if mongo is still reachable under the alias, the driver should reconnect.Anyway, thanks again for taking the time to answer, so far the manual reconnect seems to be working fine (i don’t really have/want arbiter nodes :))",
"username": "Max_Dudzinski"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Go driver: gracefully reconnecting when RS topology changes due to pod recreation | 2023-03-20T16:31:52.662Z | Go driver: gracefully reconnecting when RS topology changes due to pod recreation | 1,565 |
null | [
"etl"
] | [
{
"code": "",
"text": "Hi All,We currently use cloud manager for maintain our mongodb . so we have plan for build data extraction from spesific Hidden Secondary Node in production . is it safe ? is it ok if we run it in every hour ?",
"username": "Abdul_Haris"
},
{
"code": "secondarysecondaryPreferred",
"text": "Hi @Abdul_Haris welcome to the community!“Safe” is a relative term here, and I don’t think Cloud Manager comes into effect in this case (it is a management tool after all).Typically a hidden secondary is used to perform tasks not associated with the replica set’s usual workload, so client’s won’t see it, and won’t route queries to it even when secondary or secondaryPreferred read preference is used.However it still behaves like a secondary, so all the caveats regarding a normal secondary still applies:Thus, the suitability of the hidden secondary to do what you asked for is dependent on how much work you’re asking it to do, on top of its regular work of keeping up with the primary. If it can handle it, it’s should be ok. If it cannot, then you might see some issues with it.I would suggest you to experiment with some example workloads to start with, and see if the hardware can keep up with demand.Best regards\nKevin",
"username": "kevinadi"
}
] | Batching Datas From Hidden Secondary Node | 2023-04-01T02:30:12.570Z | Batching Datas From Hidden Secondary Node | 590 |
null | [] | [
{
"code": "",
"text": "It seems you have cleared/deleted the old “Learn” category, but now everything goes to “Certification Exams” sub category even though the posts are not related to certification exams.It is even impossible (currently) to create a new post under “MongoDB University” category anymore. There is simply no button to do that.Under the main category, there is only “New Event” button which requires some permission as if it is a MUG topic.please correct this issue.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hey @Yilmaz_Durmaz,Thank you so much for the feedback! We have updated the settings and users should now be able to create a post that is related to the University learning under the MongoDB University main category.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Thanks for the fix @Satyam Changes in the forum itself may have this kind of undesired result. A side note to moderators would be nice as it is easy to miss out on these effects.PS: I will now move this to the main category ",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | "MongoDB University" forum sub categories are missing and everything goes to "Certification Exams" | 2023-04-01T05:56:52.520Z | “MongoDB University” forum sub categories are missing and everything goes to “Certification Exams” | 983 |
[
"data-modeling"
] | [
{
"code": "",
"text": "I am trying to attempt M100 course.\nNew Mongo DB University UI is not proper. its not showing more quiz options.\nRight side Scroll-bar is visible but no scroller found. Seems some issue with UI.\nquiz-ui-not-responding1866×866 72.3 KB\n",
"username": "Harpal_Singh"
},
{
"code": "",
"text": "it might be something browser-dependent (chrome, firefox, safari, etc) or screen size in effect. try to zoom in/out of the page and see if you can view it better. pressing the ctrl key while turning the mouse wheel is one way to do that (don’t turn it fast).",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo University new UI is not showing Quiz options | 2023-03-28T11:24:45.615Z | Mongo University new UI is not showing Quiz options | 1,058 |
|
null | [] | [
{
"code": "",
"text": "Hi MongoDbMy name is Byron Odhiambo from a small village called Asembo in Kenya. My journey learning about MongoDb has been epic from its vast documentation and an engaging free course material at MongoDb Unversity. I’m currently Mongodb ambasador in my home region and I believe in its potential in transforming the way developers work with data. The only request I’d want to make is for Mongodb to have a consideration for students coming from LMIC. The whole examination process was challenging and partly affected the outcome of the exam. Scored a 75 of which I believe I would have done better. Hope you take this into consideration when grading for the exams.",
"username": "hue_man_N_A"
},
{
"code": "",
"text": "Hey @hue_man_N_A,It’s great to see you back on MongoDB Community Forums! You can send in your feedback and any certification-related requests/information to [email protected]. There, someone from the concerned team should be able to guide and help you out with your concerned query.Regarding finding the exam challenging, I would recommend going through MongoDB University’s Learning Path and practice exam questions before appearing for the exam to be best prepared for the exam. Kindly also refer to the Program Guide and Exam Study Guide for more details on how to best prepare for the exam and other details.Hope this helps. Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDb Exam Grading for students from LMIC | 2023-04-02T19:35:30.249Z | MongoDb Exam Grading for students from LMIC | 955 |
[
"compass"
] | [
{
"code": "",
"text": "\n2023-03-31 (1)1920×1080 243 KB\nRegion that i am trying to connect to - Mumbai AWS\nI am using the free service\nI have whitelisted all IPs and uinstalled VPN , ANTI VIRUS , Turned off the Windows Firewall.\nThe Compass Version that i am using is 1.36.2.",
"username": "MR_JINGOIST"
},
{
"code": "",
"text": "",
"username": "kevinadi"
},
{
"code": "",
"text": "@MR_JINGOIST if I try exactly the same connection string I get an “Authentication Failed” error, which makes sense as probably the credentials in your screenshot are not the actual credentials. So the cluster is reachable? Is it possible that when you posted your message you had not set the access rules but now they are set and things work?",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "",
"username": "kevinadi"
}
] | Cannot connect atlas cluster to Compass | 2023-04-03T05:53:10.892Z | Cannot connect atlas cluster to Compass | 472 |
|
null | [] | [
{
"code": "",
"text": "I’m considering a version upgrade 4.2 to (4.4 or 5.0)\nMay I use “stable api” in community edition 5.0 version?( In MongoDB 5.0, “stable api” only used in atlas? or it can use in community edition? )",
"username": "noisia"
},
{
"code": "",
"text": "Hello @noisia ,Welcome back to The MongoDB Community Forums! Yes, you can use the Stable API feature in MongoDB Community Edition 5.0. This feature previously labeled the Versioned API, lets you upgrade your MongoDB server at will, and ensure that behavior changes between MongoDB versions do not break your application.The default behavior for your driver connection will continue to function as expected, even if you do not explicitly specify an apiVersion. The Stable API encompasses the subset of MongoDB commands that applications use to read and write data, create collections and indexes, and perform other common tasks.For more information, please refer below documentationRegards,\nTarun",
"username": "Tarun_Gaur"
},
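For reference, opting into the Stable API from a driver works the same against a self-hosted Community Edition deployment as it does against Atlas. A minimal PyMongo sketch (the URI is a placeholder) might look like this:

```python
from pymongo import MongoClient
from pymongo.server_api import ServerApi

# Pin the connection to Stable API version 1; strict=True makes the server
# reject any command that is not part of the Stable API.
client = MongoClient(
    "mongodb://localhost:27017",
    server_api=ServerApi("1", strict=True),
)
print(client.admin.command("ping"))
```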
{
"code": "",
"text": "Hello @Tarun_Gaur,\nThanks you for the reply.",
"username": "noisia"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | In MongoDB 5.0, "stable api" only used in atlas? or it can use in community edition? | 2023-03-28T08:55:12.825Z | In MongoDB 5.0, “stable api” only used in atlas? or it can use in community edition? | 637 |
null | [
"atlas-cluster"
] | [
{
"code": "",
"text": "This create cluster button is not working can somebody fix it.Serverless and Dedicated works just fine only when its free , nothing is happening when create is clicked ( a captcha normally pops up )",
"username": "MR_JINGOIST"
},
{
"code": "M0",
"text": "Hi @MR_JINGOIST - Welcome to the community I’d raise this with the Atlas in-app chat support. However, please note you can create only one M0 free cluster per project.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Now i can , but yeah once i wasnt able to",
"username": "MR_JINGOIST"
},
{
"code": "",
"text": "Glad to hear it. Not sure if it was possibly related to the incident on March 30 noted here. You can check the link in future if you encounter this issue again.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Cant create new cluster in atlas | 2023-03-30T14:47:05.310Z | Cant create new cluster in atlas | 917 |
null | [
"aggregation",
"queries",
"atlas-search"
] | [
{
"code": "",
"text": "I have a list of candidates (with first name, last name and date of births) who been tested on multiple occasions and I now want to find those duplicate occasions however names may well have spelling mistakes and synonyms may have been used. Ideally I’d like to do a group by using information an Atlas Search index. Is this possible?",
"username": "Brian_Henderson"
},
{
"code": "$group",
"text": "Hi @Brian_Henderson - Welcome to the community I now want to find those duplicate occasions however names may well have spelling mistakes and synonyms may have been used.I’m not entirely sure of the use case details (including how to verify which documents would be duplicates) but perhaps the following documentation may help:Ideally I’d like to do a group by using information an Atlas Search index. Is this possible?Would this work flow for identifying “duplicates” be used on an every now and then basis? Or would it be considered part of your application’s standard workload? Curious to know if it’s just going to be used irregular to update / remove duplicates or just for identifying duplicates.If you could also provide some examples / demonstrations using sample documents that would be helpful as well.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Sorry I should give a bit more information. We work with schools and assess the same cohorts of students over a number of years but do not normally have a unique student identifier so we need the synonyms lookup (e.g. Elizabeth, Beth, Liz) as well as fuzzy logic matching. Ideally we would want to be able to group ‘on the fly’ to take the top table and run a query to generate the second:\n\nScreenshot 2023-03-31 at 08.56.251094×514 51.3 KB\nThanksIs this possible?Thanks",
"username": "Brian_Henderson"
},
{
"code": "\"Elizabeth\"\"assessment\"compoundtextfuzzysynonymstestdb> db.school.find({},{_id:0})\n[\n {\n firstName: 'Elizabeth',\n lastName: 'Apple',\n DOB: ISODate(\"2000-01-01T00:00:00.000Z\"),\n assessment: { a1: 70 }\n },\n {\n firstName: 'Liz',\n lastName: 'Apple',\n DOB: ISODate(\"2000-01-01T00:00:00.000Z\"),\n assessment: { a2: 80 }\n },\n {\n firstName: 'ElizabethX',\n lastName: 'Apple',\n DOB: ISODate(\"2000-01-01T00:00:00.000Z\"),\n assessment: { a3: 90 }\n }\n]\n\"scores\"\"nameSynonyms\"testdb> db.schoolsynonyms.find({},{_id:0})\n[ {\n mappingType: 'equivalent',\n synonyms: [ 'elizabeth', 'liz' ]\n } ]\n\"Elizabeth\"\"queryFirstName\"$searchavar queryFirstName = 'Elizabeth'\nvar a = \n{\n\t\t$search: {\n\t\t\tindex: 'scores',\n\t\t\tcompound: {\n\t\t\t\tshould:[\n\t\t\t\t{\n\t\t\t\t\ttext: {\n\t\t\t\t\t\tquery: queryFirstName,\n\t\t\t\t\t\tpath: 'firstName',\n\t\t\t\t\t\tfuzzy : {}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\ttext: {\n\t\t\t\t\t\tquery: queryFirstName,\n\t\t\t\t\t\tpath: 'firstName',\n\t\t\t\t\t\tsynonyms:'nameSynonyms'\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t}\n}\n\"_id\"testdb> db.school.aggregate(a)\n[\n {\n firstName: 'Elizabeth',\n lastName: 'Apple',\n DOB: ISODate(\"2000-01-01T00:00:00.000Z\"),\n assessment: { a1: 70 }\n },\n {\n firstName: 'Liz',\n lastName: 'Apple',\n DOB: ISODate(\"2000-01-01T00:00:00.000Z\"),\n assessment: { a2: 80 }\n },\n {\n firstName: 'ElizabethX',\n lastName: 'Apple',\n DOB: ISODate(\"2000-01-01T00:00:00.000Z\"),\n assessment: { a3: 90 }\n }\n]\n$search",
"text": "Thanks for providing those details Brian. Firstly, I’m not sure of all other requirements here but based purely off the sample documents and information provided, it may be possible to obtain the second table assuming the first table is a collection where the atlas search index exists.In my example below for the student with a first name of \"Elizabeth\", I have just added an \"assessment\" field which contains the score for a particular assessment. This is not what I am recommending the schema to be and is only for demonstration purposes for Atlas search (and in particular, the compound operator, the text operator with fuzzy and synonyms options used).Sample documents used in the test:I am using a default Atlas Search index (named the \"scores\" index) but have defined Synonym Mappings. For the synonym mappings, I’ve created the \"nameSynonyms\" mapping below:Assigning \"Elizabeth\" to variable \"queryFirstName\" and details of the $search portion of the pipeline (assigned to var a):Output (removed \"_id\" for brevity):I presume the question here was more based off identifying duplicate names using $search. How you uniquely group and identify a student is a different matter. Hopefully the above example provides a bit of help in determining if this works for your use case.You may wish to consider cleaning duplicates prior to them being entered into MongoDB if possible. This may assist with inaccuracies that may appear with the above method (for example - consider if an entry had the first name “Lizx”).Regards,\nJason",
"username": "Jason_Tran"
}
] | Identify duplicate names in a collection that may have been entered with spelling mistakes or synonyms | 2023-03-27T12:51:45.182Z | Identify duplicate names in a collection that may have been entered with spelling mistakes or synonyms | 829 |
null | [] | [
{
"code": "",
"text": "Hello,\nI am currently evaluating Atlas Device Sync for a mobile app with respect to GDPR requirements for a German company.\nThe data that should be synced is non-personal, but it seems that the Atlas Sync service itself collects personal identifiable data like the client’s IP address or device identifiers.There are two solutions that came to my mind, in order to prevent personal data from being send to MongoDB:\n1: Encrypt Atlas with a custom KMS key, that my company owns. Does this encryption include metadata (like the IP) from Atlas Device Sync?\n2: Proxy the connection between the device and Atlas and remove the client’s IP address before it reaches Atlas. Is this possible? How can I configure a custom URL in the Realm Sync Configuration?I would appreciate your thoughts and ideas on that topic. Thanks in advance.",
"username": "Markus_Kieselmann"
},
{
"code": "",
"text": "Hello @Markus_Kieselmann I hope someone already answered this at least externally to the forums.Basically Device Sync is in fact compliant for GDPR, you can find all the things it’s compliant for here: Trust Center — MongoDB Cloud Services | MongoDBIt even recently got CJIS and a higher level of FedRAMP certification, too. I’m not sure at what level CJIS is for Atlas, but it exceeds Interpol’s RPD and CCF requirements, meaning it well exceeds GDPR compliance requirements, it’s going to come down to how YOU choose to design the collections and infra around it.The FedRAMP is also heavier than Germany’s BDSG, you could probably talk to the provisional authorities in your region of Germany even, about a German Government Atlas instance if enough manpower were available etc. You never know, and honestly I’m sure MongoDB wouldn’t mind said kind of conversation.When I worked with NATO back in 2012 I know Atlas and Firebase were both infrastructure that would have been amazing to have had available. But without the special stuff involved with FEDRAMP and CJIS, it still is GDPR compliant just on its own accord in a stand alone constraint, it’s even HIPAA compliant, which HIPAA exceeds needs of GDPR, and is marginally even more strict than GDPR, which also exceeds the CFRA and other consumer data privacy acts.That said, in Atlas the data is locked away from even employees unless exclusive circumstances and permissions from the customer directly are given for emergency situations.And to prevent data from being transferred to MongoDB in the states? The data is local to the regional datacenter you choose. If you pick Frankfurts DC, that’s where the data is, which is ran and operated by German’s.Amazon FRA50 Kleyerstrasse 88-90. Frankfurt am Main 60326 is the Atlas Datacenter in Germany, hosted by Amazon, manned by German born citizens, which you could always reach out to Amazon directly for any specific restrictions you need to have in place, and communicate that with MongoDB for what’s needed.Given MongoDB is GDPR compliant, I honestly wouldn’t worry as much about that.EDIT\nAnd hey @Markus_Kieselmann, if you’re needing NATO compliance I do have the contact information for the Joint Information Security, Systems, and Data Command out of Ulm, as well as NCISG out of Mons, in the Joint Support and Enabling Command Headquarters. I do have the direct lines if you need questions answered of what AWS services and third party services are approved.Which the Datacenter Europe Central - 1 regions 50, 53, ETC. are all approved datacenters, they do have channels within Amazon you can be directed through if you’re asking because of a NATO need. If this request of info isn’t related to NATO, you can freely just use the general consumer Atlas and normal AWS channels and be GDPR compliant based upon your configurations and what data you choose to collect.",
"username": "Brock"
},
{
"code": "",
"text": "@Markus_Kieselmann Also, if you’re asking for something related to MAD, BND, BfV, or the BKA, your companies government liaison will already have a pre-determine list of services for you to use for your mobile application if maintaining it within the borders of Germany is required.The BND Headquarters out of Berlin can give you the proper guidance and oversight if that’s the case.",
"username": "Brock"
},
{
"code": "",
"text": "@Markus_Kieselmann if you see this, just to let you know I reached out to some of my contacts in Germany and whether Atlas or Realm can be used for GDPR sensitive items.Realm was deemed acceptable per BDSG guidelines, and is acceptable enough to meet the needs of the 2021 Cyber Security Strategy, but is not approved for military or militia use as per Heer’s publication for Germany’s KdoCIR, who also had findings determined Realm was not a threat to Germany’s consumer population.Germany as a nation has the means to lock down foreign internet traffic to and from any mobile application on any devices not using a Germany sourced SIM card. Or geographically lock access to the application. This would be done by the BND.Do note a caveat to this, is that MongoDB support personnel are not located in Germany, so doing this would lock out MongoDBs ability to directly provide support to your mobile applications Realm infrastructure. But still maintain access to your company and its services, but I’m sure you could work something out with MongoDB.KdoCIR Is experimenting with Realm like it is other systems like it, but there’s no comment on whether or not the Heer will adopt Realm, or consider how to develop their own version of it.EDIT\nAlso under the Verbandssanktionengesetz, MongoDB cannot put in place efforts to undermine GDPR within Germany, or it can violate the EU international laws, and Germany’s laws which would bring in the US State Department, and DoJ. Basically that means MongoDB as an American Company can face very scary people in the US Government who can push very severe punishments domestically, in addition to EU company sanctions should they violate international laws.Simply put: All of this in summary, I wouldn’t have GDPR concerns when even the German Government has approved its use, and it’s used presently in a postal application to deliver and track mail, with potential other areas of the German government may formally use Realm pending stability fixes.",
"username": "Brock"
},
{
"code": "",
"text": "",
"username": "henna.s"
}
] | Atlas Device Sync + GDPR | 2022-11-01T10:29:18.696Z | Atlas Device Sync + GDPR | 1,719 |
null | [
"atlas-cluster"
] | [
{
"code": "",
"text": "I followed the docs here: https://www.mongodb.com/docs/atlas/security-private-endpoint/#make-sure-that-your-security-groups-are-configured-properly-1I have a VPC with a private and a public subnet. The VPC endpoint is configured and both active on the AWS and the Atlas side.In the Atlas UI I got the connection string for the PrivateLink connection.When I try to connect a AWS Lambda function, residing in the private subnet, the connection times out.Connection string looks like this: mongodb+srv://test-user:[email protected]/?retryWrites=true&w=majorityAny help would be appreciated.",
"username": "Florian_Bischoff"
},
{
"code": "",
"text": "Hi @Florian_Bischoff,When I try to connect a AWS Lambda function, residing in the private subnet, the connection times out.Are you still having issues with connecting via the private endpoint? If so, I was wondering if you could provide the following details:Regards,\nJason",
"username": "Jason_Tran"
}
] | Lambda Private Link | 2023-03-18T15:02:37.558Z | Lambda Private Link | 840 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "$search$search$search",
"text": "I learned that when using atlas search with aggregation, it would require $search to be the first stage in the pipeline, thus we cannot use $match to filter the results first.As a result, I am looking to use a way to do exact term (string) filter to help me filter the results in the $search stage.\nI was looking at https://www.mongodb.com/docs/atlas/atlas-search/phrase/, but unfortunately, it is not exact filter, which may over select the results.Question is: Is there currently a way to do exact term search within $search stage? If not, is there a plan to expand https://www.mongodb.com/docs/atlas/atlas-search/equals/ to support string as well?Thank you!",
"username": "williamwjs"
},
{
"code": "$searchfiltercompound",
"text": "Hi William,I learned that when using atlas search with aggregation, it would require $search to be the first stage in the pipeline, thus we cannot use $match to filter the results first.Have you looked into filter option for the compound operator to see if it works for your use case?If you’ve tested it, can you highlight the document(s) from your testing that should / shouldn’t be returned?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "filterequalsphrase[\n {\n \"fieldA\": \"abc\",\n ...\n },\n {\n \"fieldA\": \"abc\",\n ...\n },\n {\n \"fieldA\": \"abcde\",\n ...\n },\n {\n \"fieldA\": \"abc def\",\n ...\n }\n]\n",
"text": "Hi @Jason_Tran , thank you for your quick reply!!My understanding with filter option is that, it would still require to pick an operator to do the actual filter work, like using equals or phrase as what I mentioned. So my question is on how to pick the right operator to help me do exact term filter.As for the document example, consider this:I would like to do exact term filter, so that when I filter by fieldA to be “abc”, it should only return the first two documents.",
"username": "williamwjs"
},
{
"code": "keywordtestdb> db.collection.find({},{_id:0})\n[\n { fieldA: 'abc' },\n { fieldA: 'abc' },\n { fieldA: 'abcde' },\n { fieldA: 'abc def' }\n]\n\"ftindex\"*{\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"mappings\": {\n \"dynamic\": true\n }\n}\n$searchtestdb> db.collection.aggregate([{$search:{index:'ftindex',phrase:{query:'abc',path:'fieldA'}}},{$project:{_id:0}}])\n[ \n{ fieldA: 'abc' }, \n{ fieldA: 'abc' } \n]\n",
"text": "Gotcha! Thanks for providing those sample documents and noting which ones you expect to be returned.Will the keyword analyzer work for you?Example based off your sample docs:Index definition for my test environment (*called \"ftindex\"*):$search pipeline and output:You may also find the following blog post useful too regarding Exact Matches in Atlas Search.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Yes! It works!!\nThank you @Jason_TranGood to learn how analyzer would help with it",
"username": "williamwjs"
},
{
"code": "",
"text": "Glad to hear that helped out ",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | How to do exact term search using atlas search | 2023-03-30T23:43:35.199Z | How to do exact term search using atlas search | 718 |
null | [
"queries"
] | [
{
"code": "",
"text": "My mongodb cluster was deactivated after inactivity . Fast forward>> and i have activated the cluster, but i notice that data posted to my oldest database before the activation does not show up in GET request, while data that i posted to the database after activation is available after GET request.\nHas this happened to anyone else, any help would be appreciated",
"username": "Lerin_Owoade"
},
{
"code": "",
"text": "How much data does it say is being stored? What tier do you have?",
"username": "Brock"
}
] | Mongodb disturbing problem | 2023-04-02T20:10:08.739Z | Mongodb disturbing problem | 355 |
null | [
"node-js"
] | [
{
"code": "",
"text": "I restated my laptop and when I tried running command mongod, I’m getting this error:\n{“t”:{“$date”:“2023-04-02T21:15:51.525+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“-”,“msg”:“Automatically disabling TLS 1.0, to\nforce-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{“$date”:“2023-04-02T21:15:51.529+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“-”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“outgoing”:{“minWireVersion”:6,“maxWireVersion”:17},“isInternalClient”:true}}}",
"username": "Vishwajeet_Singh4"
},
{
"code": "",
"text": "when I tried running mongosh, it threw me this error:\nCurrent Mongosh Log ID: 6429a44bcecf092414b0e00c\nConnecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.8.0\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017",
"username": "Vishwajeet_Singh4"
},
{
"code": "",
"text": "Hi @Vishwajeet_Singh4,\nTry to see if the service of mongodb Is on.BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "No, it’s not on. when I try to run mongod. it shows\n{“t”:{“$date”:“2023-04-02T21:15:51.525+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“-”,“msg”:“Automatically disabling TLS 1.0, to\nforce-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{“$date”:“2023-04-02T21:15:51.529+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“-”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“outgoing”:{“minWireVersion”:6,“maxWireVersion”:17},“isInternalClient”:true}}}",
"username": "Vishwajeet_Singh4"
},
{
"code": "",
"text": "Hi @Vishwajeet_Singh4 ,\nI don’ t undestand from that log, if your mongod instance Is up and running …Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Well, this is what I’m receiving when trying to run the mongod command and I don’t understand it as well. Can you please help me, I’m new here.",
"username": "Vishwajeet_Singh4"
},
{
"code": "",
"text": "Thanks, @Fabio_Ramohitaj I resolved this by manually starting the MongoDB server from services .",
"username": "Vishwajeet_Singh4"
},
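For anyone hitting the same thing on Windows, starting the server by hand looks roughly like this; "MongoDB" is the default service name created by the MSI installer, so adjust it if you chose a different name during setup.

```
rem From an elevated Command Prompt / PowerShell:
net start MongoDB

rem Or, if mongod was not installed as a service, run it with an explicit data path:
mongod --dbpath "C:\data\db"
```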
{
"code": "",
"text": "Hi @Vishwajeet_Singh4,\nGreat!\nCheck the answer i gave you as solution.BR",
"username": "Fabio_Ramohitaj"
}
] | MongoNetworkError: | 2023-04-02T15:49:43.069Z | MongoNetworkError: | 489 |
null | [
"realm-web"
] | [
{
"code": "const credentials = Realm.Credentials.google({ idToken });\napp.logIn(credentials).then((user) => {\n console.log(`Logged in with id: ${user.id}`);\n});\n",
"text": "We’ve recently upgraded from Stitch to Realm and are authenticating to Google using the official documentation guide. I receive the id_token from Google and pass it along to MongoDB using this code:That’s when I receive a 401 error, with this message in the logs: error fetching info from OAuth2 providerIf I switch on OpenID in Realm, suddenly it works, however that’s not helpful for my situation, as I need to retrieve the user’s email address.I’m on version 1.7.0. I’m using this official package.",
"username": "Noora_Chahine"
},
{
"code": "",
"text": "Hi I’ve run in the same problem and it’s funny how bad the documentation is… Never had a problem implementing auth in firebase , supabase ecc… with expo and also with react native it self. But now I’m stuck with this problem. Have you fixed it in the end ? Hopefully i can make it work, realm seems nice from the outside but get lost in this stuff that should be at the base of an app.",
"username": "Vasile_Andrei_Calin"
},
{
"code": "",
"text": "@Vasile_Andrei_Calin I submitted a bug report on Github. this bug is currently on their to-do list, though not priority, because there is a workaround: it requires switching to logging in with the MongoDB JWT function, instead of the Google function. Here’s the solution from one of the developers. It’s been working fine for us and for someone logging in with their Google account, they’ll notice no difference.",
"username": "Noora_Chahine"
},
{
"code": "",
"text": "This issue has been around the last two years, you have to work around and use JWT. The same with the Apple Login, and Facebook Login. Overall it’s never worked.You got to also use JWT 8.0 you can’t use the new JWT 9.0 FYI.EDIT:\nNora beat me to it, didn’t see that. But they are correct.",
"username": "Brock"
}
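For readers looking for the shape of that workaround, here is a minimal sketch of logging in through a Custom JWT provider with realm-web. It assumes the App Services app already has a Custom JWT authentication provider configured to verify the incoming token (for example against Google's signing keys); the app ID and the helper function name are placeholders, not part of the original thread.

```javascript
import * as Realm from "realm-web";

const app = new Realm.App({ id: "<your-app-id>" }); // placeholder app ID

// Hypothetical helper; "idToken" is the ID token obtained from Google Sign-In.
async function logInWithIdToken(idToken) {
  // Custom JWT credentials instead of Realm.Credentials.google(),
  // which avoids the "error fetching info from OAuth2 provider" failure.
  const credentials = Realm.Credentials.jwt(idToken);
  const user = await app.logIn(credentials);
  console.log(`Logged in with id: ${user.id}`);
  return user;
}
```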
] | Error fetching info from OAuth2 provider (realm-web) | 2022-10-10T05:46:10.315Z | Error fetching info from OAuth2 provider (realm-web) | 2,163 |
null | [
"storage"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-03-25T05:04:26.904+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"WTCheckpointThread\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1679720666:903992][803:0x7f0b1dc21700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 21472, snapshot max: 21472 snapshot count: 0, oldest timestamp: (1679720661, 1) , meta checkpoint timestamp: (1679720666, 1) base write gen: 2878249\"}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.027+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.027+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): FileStreamFailed: Failed to write to interim file buffer for full-time diagnostic data capture: /var/lib/mongodb/diagnostic.data/metrics.interim.temp\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)39, mongo::AssertionException>\\n\"}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.347+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31431, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE: {bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"55876002346A\",\"b\":\"55875D226000\",\"o\":\"2DFD46A\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.606\",\"s+\":\"1EA\"},{\"a\":\"558760024EF9\",\"b\":\"55875D226000\",\"o\":\"2DFEEF9\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"},{\"a\":\"5587600220C6\",\"b\":\"55875D226000\",\"o\":\"2DFC0C6\",\"s\":\"_ZN5mongo12_GLOBAL__N_111myTerminateEv\",\"s+\":\"A6\"},{\"a\":\"5587601B26D6\",\"b\":\"55875D226000\",\"o\":\"2F8C6D6\",\"s\":\"_ZN10__cxxabiv111__terminateEPFvvE\",\"s+\":\"6\"},{\"a\":\"558760246739\",\"b\":\"55875D226000\",\"o\":\"3020739\",\"s\":\"__cxa_call_terminate\",\"s+\":\"39\"},{\"a\":\"5587601B20C5\",\"b\":\"55875D226000\",\"o\":\"2F8C0C5\",\"s\":\"__gxx_personality_v0\",\"s+\":\"275\"},{\"a\":\"7F0B2630DBEF\",\"b\":\"7F0B262FD000\",\"o\":\"10BEF\",\"s\":\"_Unwind_GetTextRelBase\",\"s+\":\"1E7F\"},{\"a\":\"7F0B2630E281\",\"b\":\"7F0B262FD000\",\"o\":\"11281\",\"s\":\"_Unwind_RaiseException\",\"s+\":\"331\"},{\"a\":\"5587601B2837\",\"b\":\"55875D226000\",\"o\":\"2F8C837\",\"s\":\"__cxa_throw\",\"s+\":\"37\"},{\"a\":\"55875E160F60\",\"b\":\"55875D226000\",\"o\":\"F3AF60\",\"s\":\"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE\",\"s+\":\"1B72\"},{\"a\":\"55875E1751FD\",\"b\":\"55875D226000\",\"o\":\"F4F1FD\",\"s\":\"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj\",\"s+\":\"27B\"},{\"a\":\"55875DECDC6F\",\"b\":\"55875D226000\",\"o\":\"CA7C6F\",\"s\":\"_ZN5mongo14FTDCController6doLoopEv.cold.395\",\"s+\":\"2D\"},{\"a\":\"55875E70770C\",\"b\":\"55875D226000\",\"o\":\"14E170C\",\"s\":\"_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZN5mongo4stdx6threadC4IZNS3_14FTDCController5startEvEUlvE0_JELi0EEET_DpOT0_EUlvE_EEEEE6_M_runEv\",\"s+\":\"5C\"},{\"a\":\"5587601CE19F\",\"b\":\"55875D226000\",\"o\":\"2FA819F\",\"s\":\"execute_native_thread_routine\",\"s+\":\"F\"},{\"a\":\"7F0B262E2609\",\"b\":\"7F0B262DA000\",\"o\":\"8609\",\"s\":\"start_thread\",\"s+\":\"D9\"},{\"a\":\"7F0B26207133\",\"b\":\"7F0B260E8000\",\"o\":\"11F133\",\"s\":\"clone\",\"s+\":\"43\"}],\"processInfo\":{\"mongodbVersion\":\"4.4.15\",\"gitV
ersion\":\"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Linux\",\"release\":\"5.4.0-144-generic\",\"version\":\"#161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023\",\"machine\":\"x86_64\"},\"somap\":[{\"b\":\"55875D226000\",\"elfType\":3,\"buildId\":\"EE0334AB46B2152536232E843AA38EAFC636FE8F\"},{\"b\":\"7F0B262FD000\",\"path\":\"/lib/x86_64-linux-gnu/libgcc_s.so.1\",\"elfType\":3,\"buildId\":\"4ABD133CC80E01BB388A9C42D9E3CB338836544A\"},{\"b\":\"7F0B262DA000\",\"path\":\"/lib/x86_64-linux-gnu/libpthread.so.0\",\"elfType\":3,\"buildId\":\"7B4536F41CDAA5888408E82D0836E33DCF436466\"},{\"b\":\"7F0B260E8000\",\"path\":\"/lib/x86_64-linux-gnu/libc.so.6\",\"elfType\":3,\"buildId\":\"1878E6B475720C7C51969E69AB2D276FAE6D1DEE\"}]}}}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.348+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55876002346A\",\"b\":\"55875D226000\",\"o\":\"2DFD46A\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.606\",\"s+\":\"1EA\"}}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.348+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"558760024EF9\",\"b\":\"55875D226000\",\"o\":\"2DFEEF9\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"}}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.348+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":3\n{\"t\":{\"$date\":\"2023-03-25T18:30:00.556+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"main\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-03-25T18:30:00.597+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.037+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.037+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.038+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.209+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":804,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"ahus3\"}}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.209+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.15\",\"gitVersion\":\"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.209+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.209+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"replication\":{\"replSetName\":\"rs0\"},\"security\":{\"authorization\":\"enabled\",\"keyFile\":\"/home/developer/mongo-security/mongodb-key\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"logRotate\":\"reopen\",\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.214+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22271, \"ctx\":\"initandlisten\",\"msg\":\"Detected unclean shutdown - Lock file is not empty\",\"attr\":{\"lockFile\":\"/var/lib/mongodb/mongod.lock\"}}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.215+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.215+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22302, \"ctx\":\"initandlisten\",\"msg\":\"Recovering data from the last clean checkpoint.\"}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.215+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-25T18:30:01.221+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3466M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2023-03-25T18:30:02.505+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1679769002:505835][804:0x7f21853c7cc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 18 through 19\"}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.027+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.027+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): FileStreamFailed: Failed to write to interim file buffer for full-time diagnostic data capture: /var/lib/mongodb/diagnostic.data/metrics.interim.temp\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)39, mongo::AssertionException>\\n\"}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.347+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31431, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE: 
{bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"55876002346A\",\"b\":\"55875D226000\",\"o\":\"2DFD46A\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.606\",\"s+\":\"1EA\"},{\"a\":\"558760024EF9\",\"b\":\"55875D226000\",\"o\":\"2DFEEF9\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"},{\"a\":\"5587600220C6\",\"b\":\"55875D226000\",\"o\":\"2DFC0C6\",\"s\":\"_ZN5mongo12_GLOBAL__N_111myTerminateEv\",\"s+\":\"A6\"},{\"a\":\"5587601B26D6\",\"b\":\"55875D226000\",\"o\":\"2F8C6D6\",\"s\":\"_ZN10__cxxabiv111__terminateEPFvvE\",\"s+\":\"6\"},{\"a\":\"558760246739\",\"b\":\"55875D226000\",\"o\":\"3020739\",\"s\":\"__cxa_call_terminate\",\"s+\":\"39\"},{\"a\":\"5587601B20C5\",\"b\":\"55875D226000\",\"o\":\"2F8C0C5\",\"s\":\"__gxx_personality_v0\",\"s+\":\"275\"},{\"a\":\"7F0B2630DBEF\",\"b\":\"7F0B262FD000\",\"o\":\"10BEF\",\"s\":\"_Unwind_GetTextRelBase\",\"s+\":\"1E7F\"},{\"a\":\"7F0B2630E281\",\"b\":\"7F0B262FD000\",\"o\":\"11281\",\"s\":\"_Unwind_RaiseException\",\"s+\":\"331\"},{\"a\":\"5587601B2837\",\"b\":\"55875D226000\",\"o\":\"2F8C837\",\"s\":\"__cxa_throw\",\"s+\":\"37\"},{\"a\":\"55875E160F60\",\"b\":\"55875D226000\",\"o\":\"F3AF60\",\"s\":\"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE\",\"s+\":\"1B72\"},{\"a\":\"55875E1751FD\",\"b\":\"55875D226000\",\"o\":\"F4F1FD\",\"s\":\"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj\",\"s+\":\"27B\"},{\"a\":\"55875DECDC6F\",\"b\":\"55875D226000\",\"o\":\"CA7C6F\",\"s\":\"_ZN5mongo14FTDCController6doLoopEv.cold.395\",\"s+\":\"2D\"},{\"a\":\"55875E70770C\",\"b\":\"55875D226000\",\"o\":\"14E170C\",\"s\":\"_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZN5mongo4stdx6threadC4IZNS3_14FTDCController5startEvEUlvE0_JELi0EEET_DpOT0_EUlvE_EEEEE6_M_runEv\",\"s+\":\"5C\"},{\"a\":\"5587601CE19F\",\"b\":\"55875D226000\",\"o\":\"2FA819F\",\"s\":\"execute_native_thread_routine\",\"s+\":\"F\"},{\"a\":\"7F0B262E2609\",\"b\":\"7F0B262DA000\",\"o\":\"8609\",\"s\":\"start_thread\",\"s+\":\"D9\"},{\"a\":\"7F0B26207133\",\"b\":\"7F0B260E8000\",\"o\":\"11F133\",\"s\":\"clone\",\"s+\":\"43\"}],\"processInfo\":{\"mongodbVersion\":\"4.4.15\",\"gitVersion\":\"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Linux\",\"release\":\"5.4.0-144-generic\",\"version\":\"#161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023\",\"machine\":\"x86_64\"},\"somap\":[{\"b\":\"55875D226000\",\"elfType\":3,\"buildId\":\"EE0334AB46B2152536232E843AA38EAFC636FE8F\"},{\"b\":\"7F0B262FD000\",\"path\":\"/lib/x86_64-linux-gnu/libgcc_s.so.1\",\"elfType\":3,\"buildId\":\"4ABD133CC80E01BB388A9C42D9E3CB338836544A\"},{\"b\":\"7F0B262DA000\",\"path\":\"/lib/x86_64-linux-gnu/libpthread.so.0\",\"elfType\":3,\"buildId\":\"7B4536F41CDAA5888408E82D0836E33DCF436466\"},{\"b\":\"7F0B260E8000\",\"path\":\"/lib/x86_64-linux-gnu/libc.so.6\",\"elfType\":3,\"buildId\":\"1878E6B475720C7C51969E69AB2D276FAE6D1DEE\"}]}}}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.348+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55876002346A\",\"b\":\"55875D226000\",\"o\":\"2DFD46A\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.606\",\"s+\":\"1EA\"}}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.348+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ftdc\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"558760024EF9\",\"b\":\"55875D226000\",\"o\":\"2DFEEF9\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"}}}\n{\"t\":{\"$date\":\"2023-03-25T05:04:33.348+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":3\n",
"text": "Hello everyone,We are facing some issues with mongoDB and still have no clue what is going on, so I am posting here for some suggestion/help. The issue is that mongod service is getting stopped/down a few times a day or once a week - completely random. This is what we can see in the mongo logs:These log lines should help:Aftrer those, mongodb is down…I have found some really similar issues in this forum, but it was not the same. For most of them, the problem was “no space left on device” but in this instance, we have plenty of space (and we do not have that line in the logs). The only way to fix this so far is to restart mongd service.Thanks in advance for any kind of help!",
"username": "Benjamin_Beganovic"
},
{
"code": "ulimit",
"text": "What’s the platform? If it’s Linux you might check ulimit",
"username": "Jack_Woehr"
},
{
"code": "user@xxxxxx:~$ ulimit -a\ncore file size (blocks, -c) 0\ndata seg size (kbytes, -d) unlimited\nscheduling priority (-e) 0\nfile size (blocks, -f) unlimited\npending signals (-i) 31470\nmax locked memory (kbytes, -l) 65536\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 1024\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nreal-time priority (-r) 0\nstack size (kbytes, -s) 8192\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 31470\nvirtual memory (kbytes, -v) unlimited\nfile locks (-x) unlimited\nreturn-limits(){\n for process in $@; do\n process_pids=`ps -C $process -o pid --no-headers | cut -d \" \" -f 2`\n if [ -z $@ ]; then\n echo \"[no $process running]\"\n else\n for pid in $process_pids; do\n echo \"[$process #$pid -- limits]\"\n cat /proc/$pid/limits\n done\n fi\n done\n}\n[mongod #260286 -- limits]\nLimit Soft Limit Hard Limit Units\nMax cpu time unlimited unlimited seconds\nMax file size unlimited unlimited bytes\nMax data size unlimited unlimited bytes\nMax stack size 8388608 unlimited bytes\nMax core file size 0 unlimited bytes\nMax resident set unlimited unlimited bytes\nMax processes 64000 64000 processes\nMax open files 64000 64000 files\nMax locked memory unlimited unlimited bytes\nMax address space unlimited unlimited bytes\nMax file locks unlimited unlimited locks\nMax pending signals 31470 31470 signals\nMax msgqueue size 819200 819200 bytes\nMax nice priority 0 0\nMax realtime priority 0 0\nMax realtime timeout unlimited unlimited us\ndone\n",
"text": "Good point! I can see there are some lower values, and maybe it can help by increasing them.Based on the recommended ulimit settings from mongodb documentation, the number of open files and threads should be much higher. But if I runIt looks like mongo can get recommended resources. Here is the output:",
"username": "Benjamin_Beganovic"
},
{
"code": "[Unit]\nDescription=MongoDB Database Server\nDocumentation=https://docs.mongodb.org/manual\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nUser=mongodb\nGroup=mongodb\nEnvironmentFile=-/etc/default/mongod\nExecStart=/usr/bin/mongod --config /etc/mongod.conf\nPIDFile=/var/run/mongodb/mongod.pid\n# file size\nLimitFSIZE=infinity\n# cpu time\nLimitCPU=infinity\n# virtual memory size\nLimitAS=infinity\n# open files\nLimitNOFILE=64000\n# processes/threads\nLimitNPROC=64000\n# locked memory\nLimitMEMLOCK=infinity\n# total threads (user+kernel)\nTasksMax=infinity\nTasksAccounting=false\n\n# Recommended limits for mongod as specified in\n# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings\n\n[Install]\nWantedBy=multi-user.target\n",
"text": "Turned out that the values from the service configuration (/usr/lib/systemd/system/mongod.service) are taking effect, as shown in the output above (return-limits function). This is the mongod.service content:Sure, there is a super dirty workaround. I can modify mongod.service so it restarts on the failures but it does not help us to reveal the root cause of “random” crashes.",
"username": "Benjamin_Beganovic"
},
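For reference, the "restart on failure" stopgap mentioned above is a small systemd override (created with sudo systemctl edit mongod.service); it only masks the crashes rather than explaining them, so it is a workaround, not a fix.

```
[Service]
Restart=on-failure
RestartSec=5s
```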
{
"code": "",
"text": "Perhaps open an Issue?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "You could also try updating to Ubuntu 22.04",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Still, there are no reproduction steps, it is only random. If I open an issue I believe it will be there just hanging for a while… today I have noticed the same problem on the windows platform, so it might be something internal with mongo 4.4 rather than the environment configuration or limitation.",
"username": "Benjamin_Beganovic"
},
{
"code": "",
"text": "You’re probably right, @Benjamin_Beganovic … can you upgrade to a later version of MongoDB?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Upgrade needs to be done anyway in a couple of months (some other things have to be upgraded first), but for now, I have to come up with some at least good enough workaround.",
"username": "Benjamin_Beganovic"
},
{
"code": "",
"text": "I would build a Docker with MongoDB 6.0 and get out of 4.4, as you’re not the only person in the last couple of weeks who’s brought up 4.4 crashing on Ubuntu 20 and above.After people started 6.0 services they haven’t experienced problems since that I know of.After building the 6.0, export your indexes to it and the aggregations, then export the BSON over to it. That would be a lot more ideal than trying to troubleshoot what is essentially going to be EOL in Feb of 2024 anyway, in the next 10 months you’ll be in a worse situation support wise, so it makes sense to be ahead of the curve for the next two years instead of 10 months. You could even upgrade to the latest ops manager as well on top of it.",
"username": "Brock"
}
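As a hedged sketch of that suggestion, the move could look something like the commands below. The container name, host port, volume name and credentials are placeholders, and mongodump/mongorestore carry index definitions along with the data, so rehearse the whole thing on a copy before touching production.

```
# Stand up MongoDB 6.0 in Docker alongside the existing 4.4 instance.
docker run -d --name mongo6 -p 27018:27017 -v mongo6-data:/data/db mongo:6.0

# Dump from the 4.4 deployment and restore into the 6.0 container.
mongodump --uri "mongodb://user:pass@localhost:27017" --archive=backup.archive
mongorestore --uri "mongodb://localhost:27018" --archive=backup.archive
```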
] | MongoDB random crash from time to time, version 4.4.15, Ubuntu 20.04.5 LTS | 2023-03-29T14:52:28.331Z | MongoDB random crash from time to time, version 4.4.15, Ubuntu 20.04.5 LTS | 1,923 |
null | [
"java",
"android",
"kotlin"
] | [
{
"code": "",
"text": "Has any of you migrated a large project from Realm Java to Realm Kotlin? How was your experience? Did it take a lot of effort or was it a smooth migration?My app is currently written in Kotlin, but I am still using Realm-java. However, it does seem like a good idea to migrate and I have also noticed that the development on Realm-java is lagging behind a little (for example Realm java doesn’t support file format 23 yet whereas Realm-kotlin has done so for a month. )",
"username": "Simon_Persson"
},
{
"code": "",
"text": "No response, so I guess this hasn’t been done a lot yet. I started on the migration. Unfortunately it is a HUGE effort. Everything is changed, from threading to query language and you even need to switch out the date types used. I am sure this will simplify the code base in the long run, but it is a risky project to do a full migration. I haven’t been able to compile my project for weeks, so the risk of introducing errors is big",
"username": "Simon_Persson"
},
{
"code": "",
"text": "Hello Simon,Best advice? wipe all the realm packages, and start the Realm components from scratch with your application if you’re going to do this.Either way, you’re going to end up with a completely different application backend wise doing this.",
"username": "Brock"
},
{
"code": "",
"text": "I did remove the Realm Java package and replaced it with Realm Kotlin. The whole migration of my app took about a month. Some takeaways1: Migration is completely different. Make sure you have good tests for this or you will run into trouble.2: Testing in general is a pain with the Kotlin SDK. MongoDb has no documentation on testing and if you use the recommended way to write to the database (Realm.write) you will inevitably end up with a bunch of deadlocks in your tests. There doesn’t seem to be a way to override the default write dispatcher in your tests. I ended up using a custom write function that calls realm.write in production but realm.writeblocking in tests.3: Database objects are no longer automatically updated. This is nice once you get the hang of it, but can lead to significant rewrites in your viewmodelsIn general I think the transition will be worth it for me. The new SDK makes it easier to use coroutines and I am much more confident regarding threading than before. The code base is simpler and more consistent. For larger apps/organizations I am not sure if the migration path I took is feasible since it will completely break your app for a long long time, so make sure to have a strategy before doing the migration.",
"username": "Simon_Persson"
},
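A rough sketch of the test-friendly write wrapper mentioned in takeaway 2 might look like this. The RealmWriter object and the useBlockingWrites flag are invented for illustration; only realm.write and realm.writeBlocking are the actual SDK calls.

```kotlin
import io.realm.kotlin.MutableRealm
import io.realm.kotlin.Realm

// Hypothetical wrapper so production code uses the suspending write while
// unit tests can opt into blocking writes and avoid dispatcher deadlocks.
object RealmWriter {
    var useBlockingWrites = false // set to true from test setup code

    suspend fun <T> write(realm: Realm, block: MutableRealm.() -> T): T =
        if (useBlockingWrites) realm.writeBlocking(block) // synchronous path for tests
        else realm.write(block)                           // suspending path for production
}
```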
{
"code": "",
"text": "A different approach I would have done, is build a new realm app and make a secondary branch of your app and make the changes there. So then your app is fully functional and running, but you’re able to work on the new version of your app with the new SDK.Then once you finish everything, merge it into the main branch and archive the old main branch. Then in app stores push the new update.",
"username": "Brock"
},
{
"code": "",
"text": "Not sure what you mean by creating a new Realm app, but I assume this refers to a new app on MongoDb Realm/Atlas? I only use Realm locally so far (although the plan is to eventually move to Atlas), so no need to involve the server side components at all for me. I guess the migration would be even harder for those that use sync",
"username": "Simon_Persson"
},
{
"code": "",
"text": "Hello Simon,Yes,Basically just cloning the main apps repo to a new repo, so your main app in production isn’t being impacted, and then just migrate on the second version of things with the new realm app using the Kotlin SDK.",
"username": "Brock"
},
{
"code": "",
"text": "@Simon_Persson Sorry it won’t let me edit my last post anymore, not sure why the edit buttons gone.But anyways.You have right now say:\nApp 1\nRealm Java App1.If you make a replica of the GitHub/GitLab repo and rename it:\nApp 1 clone - > App 2\nFlexible Sync Kotlin App1You can do all of your work in the cloned repo, and not even touch your app that’s in production. So your users are unaffected.Then, when you’re done working on the Flexible sync version of your app, merge it and replace the old app using Java with your Kotlin version. Then the users who already have your app downloaded will download a “big update,” and not be the wiser at all what happened.Then on Atlas before this, migrate all the data from one collection to the new collection, make it all the same and drop the old collection after you’re done pushing the Kotlin version of your app, and no one will ever be the wiser via user base that anything happened. But you get to have all the ample time etc. to workout and figure out your moves without having any app downtime for your users.",
"username": "Brock"
},
{
"code": "",
"text": "As mentioned. I don’t use Atlas, so no need for me to think about that, but good info for others looking to migrate ",
"username": "Simon_Persson"
}
] | Anyone migrated a large project from Realm Java to Realm Kotlin? | 2023-02-27T14:02:22.417Z | Anyone migrated a large project from Realm Java to Realm Kotlin? | 1,221 |
null | [
"aggregation",
"node-js"
] | [
{
"code": "\n const Followers = await followers.aggregate([\n { \"$match\": { \"userId\": userId }},\n {\n $project:{\n _id: 1,\n f_id: {\"$toObjectId\": \"$followerId\"}\n }\n },\n {\n $lookup:{\n from: 'users',\n localField: 'f_id', \n foreignField: 'id',\n as: 'user'\n }\n }]);\n",
"text": "this is my code:",
"username": "Pana_MIA"
},
{
"code": "",
"text": "Please share sample documents from both collections.Without the data it is impossible to tell what is wrong with your pipeline.",
"username": "steevej"
},
{
"code": "\nimport mongoose from \"mongoose\";\nimport { stripVTControlCharacters } from \"util\";\nconst Schema = mongoose.Schema;\n\nconst followersSchema = new Schema(\n {\n followerId: {\n type: String,\n required: true,\n unique: false\n },\n userId: {\n type: String,\n required: true,\n unique: false\n }\n }\n)\n\nconst followers = mongoose.models.followers || mongoose.model(\"followers\", followersSchema);\nexport default followers;\nimport mongoose from \"mongoose\";\nimport { stripVTControlCharacters } from \"util\";\nconst Schema = mongoose.Schema;\n\nconst usersSchema = new Schema(\n {\n username: {\n type: String,\n required: true,\n unique: true\n },\n fullname: {\n type: String,\n required: false,\n unique: false\n },\n pronouns: {\n type: String,\n required: false,\n unique: false\n },\n email: {\n type: String,\n required: true,\n unique: true\n },\n instagramHandle: {\n type: String,\n required: false,\n unique: false\n },\n twitterHandle: {\n type: String,\n required: false,\n unique: false\n },\n link1: {\n type: String,\n required: false,\n unique: false\n },\n link2: {\n type: String,\n required: false,\n unique: false\n },\n bio: {\n type: String,\n required: false,\n unique: false\n },\n category: {\n type: [],\n required: false,\n unique: false\n },\n avatar: {\n type: String,\n required:false,\n unique:false\n },\n bannerImage: {\n type: String,\n required:false,\n unique:false\n },\n hashedPassword: {\n type: String,\n required: true,\n minlength: 5,\n unique:false\n },\n admin: {\n type: Boolean,\n required: false,\n unique:false\n },\n featured:{\n type: Boolean,\n required: false,\n unique:false\n },\n onboardingFormComplete:{\n type: Boolean,\n required: false,\n unique:false\n },\n location: {\n type: String,\n required: false,\n unique:false\n },\n dateJoined: {\n type: Date,\n required: false,\n unique:false\n }\n\n\n }\n)\n\nconst users = mongoose.models.users || mongoose.model(\"users\", usersSchema);\nexport default users;\n",
"text": "Follower Model:Users Model:",
"username": "Pana_MIA"
},
{
"code": "",
"text": "I do not use mongoose, I do not use schema. I asked forPlease share sample documents from both collections.The schema does not provide us with real data to experiment with. Time is limited and most of us will not take the time to created our own documents from your schema. You certainly have documents that we can cut-n-paste directly into our system.However what I can say is that you are using foreignField : id but your users schema does not specify a field named id.",
"username": "steevej"
},
{
"code": "{\"_id\":{\"$oid\":\"6423d23d8dc1c6ee2ef05ca7\"},\"username\":\"genwav\",\"email\":\"[email protected]\",\"category\":[\"Art\",\"Services\"],\"hashedPassword\":\"$2...\",\"onboardingFormComplete\":false,\"dateJoined\":{\"$date\":{\"$numberLong\":\"1680069181027\"}},\"__v\":{\"$numberInt\":\"0\"},\"bio\":\"Software Engineer, Creative Coder, Artist, Producer\",\"instagramHandle\":\"gen.wav\",\"link1\":\"http://genesisbarrios.co\",\"location\":\"Miami, FL\",\"twitterHandle\":\"gendotwav\",\"avatar\":\"data:image/jpeg;base64,/...\"}\n{\"_id\":{\"$oid\":\"6423ce8b8dc1c6ee2ef05b65\"},\"followerId\":\"6423d23d8dc1c6ee2ef05ca7\",\"userId\":\"64062bdf13bc30624bddcdde\",\"__v\":{\"$numberInt\":\"0\"}}\n",
"text": "user:followers",
"username": "Pana_MIA"
},
{
"code": " foreignField: 'id', foreignField: '_id',\n",
"text": "Same conclusion:you are using foreignField : id but your users schema does not specify a field named id.But I can see that followerId matches the _id from the user. So simply replacing foreignField: 'id',withshould work.I strongly recommend that you store all your ids, such as userId and followerId as $oid because $oid takes less space than the string representation, $oid is faster to compare and you would be able to avoid the constant conversion you need to do for your $lookup.",
"username": "steevej"
}
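Putting both suggestions together, the corrected pipeline from the original post would look roughly like this (mongoose/Node syntax kept from the question):

```javascript
// $toObjectId converts the stored string id, and $lookup now joins on users._id.
const Followers = await followers.aggregate([
  { $match: { userId: userId } },
  { $project: { _id: 1, f_id: { $toObjectId: "$followerId" } } },
  {
    $lookup: {
      from: "users",
      localField: "f_id",
      foreignField: "_id", // users documents have _id, not id
      as: "user"
    }
  }
]);
```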
] | Aggregate returning empty array | 2023-03-29T06:49:23.487Z | Aggregate returning empty array | 1,530 |
null | [
"swift"
] | [
{
"code": "import Foundation\nimport RealmSwift\n\nclass User: Object, Identifiable {\n @objc dynamic var id: String = UUID().uuidString\n @objc dynamic var name: String = \"\"\n @objc dynamic var age: Int = 0\n\n override static func primaryKey() -> String? {\n return \"id\"\n }\n}\nimport Foundation\nimport RealmSwift\nimport Combine\n\nclass RealmDatabaseManager: ObservableObject {\n private var realm: Realm\n private var cancellables: Set<AnyCancellable> = []\n\n @Published var users: [User] = []\n\n init() {\n realm = try! Realm()\n fetchUsers()\n observeUsers()\n }\n\n private func fetchUsers() {\n users = Array(realm.objects(User.self))\n }\n\n private func observeUsers() {\n realm.objects(User.self)\n .observe { [weak self] (changes: RealmCollectionChange) in\n DispatchQueue.main.async {\n switch changes {\n case .initial, .update:\n self?.fetchUsers()\n default:\n break\n }\n }\n }\n }\n\n func addUser(_ user: User) {\n do {\n try realm.write {\n realm.add(user)\n }\n } catch {\n print(\"Error adding user: \\(error)\")\n }\n }\n\n func updateUser(_ user: User, withName name: String, age: Int) {\n do {\n try realm.write {\n user.name = name\n user.age = age\n }\n } catch {\n print(\"Error updating user: \\(error)\")\n }\n }\n\n func deleteUser(_ user: User) {\n do {\n try realm.write {\n realm.delete(user)\n }\n } catch {\n print(\"Error deleting user: \\(error)\")\n }\n }\n}\nimport SwiftUI\n\nstruct UserRowView: View {\n let user: User\n\n var body: some View {\n HStack {\n if !user.isInvalidated { // Add this check\n VStack(alignment: .leading) {\n Text(user.name)\n .font(.headline)\n Text(\"Age: \\(user.age)\")\n .font(.subheadline)\n }\n }\n }\n }\n}\nimport SwiftUI\n\nstruct UserListView: View {\n @ObservedObject private var userManager = RealmDatabaseManager()\n\n var body: some View {\n NavigationView {\n List {\n ForEach(userManager.users) { user in\n if !user.isInvalidated { // Add this check\n NavigationLink(destination: UserDetailView(user: user, userManager: userManager)) {\n UserRowView(user: user)\n }\n }\n }\n }\n .navigationTitle(\"Users\")\n .toolbar {\n ToolbarItem(placement: .navigationBarTrailing) {\n Button(action: addUser) {\n Image(systemName: \"plus\")\n }\n }\n }\n }\n }\n\n private func addUser() {\n let newUser = User()\n newUser.name = \"New User\"\n newUser.age = 0\n userManager.addUser(newUser)\n }\n}\nimport SwiftUI\n\nstruct UserDetailView: View {\n let user: User\n let userManager: RealmDatabaseManager\n @State private var name: String = \"\"\n @State private var age: String = \"\"\n @Environment(\\.presentationMode) var presentationMode\n\n var body: some View {\n Form {\n Section {\n TextField(\"Name\", text: $name)\n TextField(\"Age\", text: $age)\n .keyboardType(.numberPad)\n }\n\n Section {\n Button(\"Save\") {\n if let ageInt = Int(age), !user.isInvalidated {\n userManager.updateUser(user, withName: name, age: ageInt)\n }\n presentationMode.wrappedValue.dismiss()\n }\n\n Button(\"Delete\") {\n if !user.isInvalidated {\n userManager.deleteUser(user)\n presentationMode.wrappedValue.dismiss()\n }\n }\n .foregroundColor(.red)\n }\n }\n .onAppear {\n if !user.isInvalidated {\n name = user.name\n age = String(user.age)\n } else {\n presentationMode.wrappedValue.dismiss()\n }\n }\n .navigationBarTitle(user.isInvalidated ? 
\"\" : user.name, displayMode: .inline)\n }\n}\nimport SwiftUI\n\n@main\nstruct SomeTestApp: App {\n var body: some Scene {\n WindowGroup {\n UserListView()\n }\n }\n}\ndeleteUserfunc deleteUser(_ user: User) {\n guard let thawedItem = user.thaw() else {\n return\n }\n\n if thawedItem.isInvalidated == false { //ensure it's a valid item\n let thawedRealm = thawedItem.realm! //get the realm it belongs to\n do {\n try thawedRealm.write {\n thawedRealm.delete(thawedItem)\n }\n } catch {\n print(\"Error deleting user: \\(error)\")\n }\n }\n }\n",
"text": "Hello guys, given the following implementation of a RealmDatamanager and some basic View, I end up with this code; similarly, after asking chatgpt, I ended up with the exact same example code.\nIt keeps crashing when deleting an element from the database.User.swiftRealmDatabaseManager.swiftUserRowView.swiftUserListView.swiftUserDetailView.swiftApp.swiftI am starting to give up on Realm. Working with it with no backtrace on the error is becoming very frustrating.\nThis code above, by simply adding it in a new swift/swiftui project, will compile and replicate the issue. (do not forget to add the realmswift repo in the package depedency)Do you have any recommendations to improve that code not to make it crash?\nI checked the other existing topics, with for example using thaw. should I add the thaw to the delete function in the realm manager? it seems weird, why wouldn’t it be by default instead of crashing?Edit 1: I tried by changing my deleteUser function to the followingstill crashing",
"username": "Oleg_Gorbatchev"
},
{
"code": "",
"text": "What does TestFlight say?@Oleg_Gorbatchev Apple TestFlight will literally tell you at what point it’s crashing, connect it to your app and run your app on an iPhone or iPad, or in the emulator and post the TestFlight logs.It would then make it a lot faster to look through the code and see what we can change up.",
"username": "Brock"
},
{
"code": "",
"text": "I am not sure what you are asking me here. Are you saying I have to deploy a test flight to be able to access a decent crash log?\nThis is happening in a demo project with no other code than the one above.\nI already shared the entire code that makes it crash above. if you copy and paste all of it into a project, it compiles.",
"username": "Oleg_Gorbatchev"
},
{
"code": "",
"text": "To be honest? Yes. Or Crashalytics etc but TestFlight is ridiculously incredible at literally saying the exact point your app crashes, and even the exact process that crashed it. As well as it will tell you other environment factors that are causing it to crash.",
"username": "Brock"
},
{
"code": "",
"text": "\nimage1920×836 87.6 KB\n\nHey, it gives the information that SwiftUI is using the variable when the object has been deleted or invalidated.",
"username": "Oleg_Gorbatchev"
},
{
"code": "",
"text": "it is the same error as in the simulator\n\nimage1920×745 79.5 KB\n",
"username": "Oleg_Gorbatchev"
},
{
"code": "",
"text": "Take out the USER and just do system, and see if something changes. If the same error comes out with system.",
"username": "Brock"
},
{
"code": "",
"text": "So what’s happening, is Realm is not letting go of this User object, because it’s still being referenced somewhere even after being deleted via a Var, and for the life of me I’m not catching it.Where are you deleting the user object?",
"username": "Brock"
},
{
"code": "UserDetailView.swiftButton(\"Delete\") {\n if !user.isInvalidated {\n userManager.deleteUser(user)\n presentationMode.wrappedValue.dismiss()\n }\n }\n .foregroundColor(.red)\nRealmDatabaseManager.swiftfunc deleteUser(_ user: User) {\n do {\n try realm.write {\n realm.delete(user)\n }\n } catch {\n print(\"Error deleting user: \\(error)\")\n }\n }\n",
"text": "In UserDetailView.swiftIn RealmDatabaseManager.swift",
"username": "Oleg_Gorbatchev"
},
{
"code": "",
"text": "After this deletion where then is it being referenced? Because that’s what causing your error.",
"username": "Brock"
},
{
"code": "UserDetailView.swiftUserRowView.swift",
"text": "it is referenced in UserDetailView.swift and in the associated row UserRowView.swift",
"username": "Oleg_Gorbatchev"
},
{
"code": "",
"text": "I’m getting a bit confused, I apologize, just one of these are causing your error. We figure out which it is, which I’d recommend changing out one, see if the error persists, then do the same for the other. If it still does it for both, remove both and see if the error is still there.But Realm isn’t liking it being referenced after it’s being deleted.",
"username": "Brock"
},
{
"code": "@objc dynamic var isDeleted = falseUserdeleteUseruser.isDeleted = truerealm.objects(User.self).filter(\"isDeleted == false\")isDeleted",
"text": "Well, this is kinda sad for a big framework like this not to have an easy way to delete an object without crashing half the app.\nThe solution I ended up using now is to:then at each app launch, I will clear the database from all objects that have isDeleted set to truethat’s it\nI shouldn’t have to do something like that, but I guess that is what it comes down to",
"username": "Oleg_Gorbatchev"
},
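For completeness, a minimal sketch of that soft-delete workaround could look like this; isDeleted is the flag added to the User model, and the function names are made up for illustration.

```swift
// Mark instead of delete, then purge once at app launch.
func softDeleteUser(_ user: User) {
    try? realm.write { user.isDeleted = true }
}

func purgeSoftDeletedUsers() {
    let doomed = realm.objects(User.self).filter("isDeleted == true")
    try? realm.write { realm.delete(doomed) }
}

// Views then query only the live users:
let visibleUsers = realm.objects(User.self).filter("isDeleted == false")
```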
{
"code": "",
"text": "Honestly I do have to agree with you, but I do apologize that’s what it ends up having to be.I’m going to look at reproducing this in the future and see if there’s possibly maybe simpler ways to do this, as this can be replicated in Flutter and React.Native SDKs as well. I’ll post some findings when I get around to them.",
"username": "Brock"
},
{
"code": "private func fetchUsers() {\n users = Array(realm.objects(User.self))\n}\nRealmDatabaseManager.countResults",
"text": "I took a quick look a the question and here’s a possibility:@BrockWhen you’re getting the users from Realm, instead of returning a Realm Collection (List or Results), they are instead being returned as a Swift ArrayThere are two issues with that, the first is Results are lazily loaded so casting them to an Array could have an impact on memory with large datasets.The more important thing is that deleting a user from Realm does not also remove that index from the Array. Looking at the deleteUser function in the RealmDatabaseManager, I am not seeing anything being done with the array when a user is removed.In other words if Realm has three usersuser_0\nuser_1\nuser_2which is a .count of 3, and user_1 is deleted, the collection would have a count of 2 - so all is good.However, the Array that contained those elements would still have a count of 3 - causing a crash as the iterator is attempting to access an object that no longer exists.Two possible fixes:1 - When deleting the object from Realm, also delete it from the Arrayor preferably2 - return Realm Results object instead of the array - the Results object always reflects the status of the underlying data.I could be incorrect here but in our experience, casting Realm collections to Arrays is always a red flag for us.",
"username": "Jay"
},
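To make fix 2 concrete, here is one possible shape of the manager keeping a live Results<User> and forwarding change notifications to SwiftUI; it is a sketch of Jay's suggestion, not the only option (the @ObservedResults property wrapper is another).

```swift
import RealmSwift
import Combine

// Keep a live Results<User> instead of a Swift Array so deletions are
// reflected immediately and no stale index can ever be accessed.
class RealmDatabaseManager: ObservableObject {
    private let realm: Realm
    let users: Results<User>              // lazily loaded, always up to date
    private var token: NotificationToken?

    init() {
        realm = try! Realm()
        users = realm.objects(User.self)
        // Forward Realm collection changes to SwiftUI views observing this object.
        token = users.observe { [weak self] _ in
            self?.objectWillChange.send()
        }
    }

    deinit { token?.invalidate() }

    func deleteUser(_ user: User) {
        guard !user.isInvalidated else { return }
        try? realm.write { realm.delete(user) }
    }
}
```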
{
"code": "",
"text": "@Jay THANK YOU!!!THAT is what I was missing! I completely spaced on the array, I kept trying to figure out why it wasn’t zeroing out the object, which I know causes that error. I didn’t think about the array referencing it!Now that makes sense! Thank you Jay! I didn’t even think about the array.",
"username": "Brock"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | When deleting an object with SwiftUI, it crashes even if I check for invalidated objects | 2023-03-29T23:04:15.400Z | When deleting an object with SwiftUI, it crashes even if I check for invalidated objects | 1,839 |
null | [
"crud",
"mongodb-shell"
] | [
{
"code": "db.devices.updateOne({\"_id\": ObjectId(\"6427f848b39a9135e7b7e63e\")}, {\"$set\" :{\"value\": \"on\"}})",
"text": "I am a MongoDB newbie and the examples I am seeing is using other fields as a filter but I want to update using the ObjectId\nHow do you update by ObjectId in mongosh?\nI tried\ndb.devices.updateOne({\"_id\": ObjectId(\"6427f848b39a9135e7b7e63e\")}, {\"$set\" :{\"value\": \"on\"}})but no rows were updated.What should be the correct query in this case?",
"username": "Nel_Neliel"
},
{
"code": "",
"text": "The code is good.You might be connected on the wrong server.You might be using the wrong database.You might be using the wrong collection.You might be using a non-existing _id on the correct collection of the correct database on the correct server.",
"username": "steevej"
}
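A few quick mongosh checks for each of those possibilities (the collection name and _id are taken from the question; adjust them to your setup):

```javascript
db.getMongo()        // which server am I connected to?
db.getName()         // which database am I in?
show collections     // does the devices collection exist here?
db.devices.findOne({ _id: ObjectId("6427f848b39a9135e7b7e63e") }) // does this _id exist?
```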
] | Update by ObjectId | 2023-04-01T10:57:25.564Z | Update by ObjectId | 805 |
[
"kotlin"
] | [
{
"code": "",
"text": "This is extremely anemic.How do developers implement client reset logic using the Kotlin SDK?It says what the error means, how would you even correct the error? How would you setup client reset logic?The Kotlin SDK has been hyper focused on Flexible Sync since its launch with the Multiplatform Kotlin SDK, how would you have a wrong sync type? Is that an error? What would cause Flexible Sync in Kotlin to think Partitioned sync has been called?",
"username": "Brock"
},
{
"code": "",
"text": "Hi, the Kotlin SDK should automatically select the new client reset logic as of 1.7. Please see this article by an engineer on the team: Realm Kotlin 1.7. We just released Realm Kotlin 1.7.0 to… | by Christian Melchior | Realm Blog | Mar, 2023 | Medium",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "@Tyler_Kaye Wait a minute… Why does Kotlin get automatic reset logic, but not React.Native, Flutter, or C#?Yo, that ain’t fair, but I’m extremely thrilled that automatic implementations are in the works!!!@Tyler_Kaye What is Wrong Sync type though? If you have flex sync setup as the default, how would you have it think it’s partition? Or is this related to some kind of migration issue that’s been seen?",
"username": "Brock"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Documentation, Kotlin Client Reset Logic | 2023-04-01T02:31:48.597Z | Documentation, Kotlin Client Reset Logic | 882 |
|
[
"flutter"
] | [
{
"code": "",
"text": "So friends of mine and myself are looking to launch a Flutter application, and were considering Device Sync in lieu of, or in addition to Firebase/Firestore.The main issue we have right now, is where is the documentation to initiate Client Reset Logic?It’s not in the documentation, and there’s no chance we’re going to risk not having reset logic in place.",
"username": "Brock"
},
{
"code": "",
"text": "Nevermind we found it, it’s under a weird heading compared to the C# and Swift sections.Instead of a dedicated section called “Client Reset Logic,” it’s titled “Handle Sync Errors.”",
"username": "Brock"
},
{
"code": "",
"text": "Sounds good. Glad you were able to find it!",
"username": "Tyler_Kaye"
}
] | Where is client reset logic for Flutter SDK? | 2023-04-01T00:35:37.566Z | Where is client reset logic for Flutter SDK? | 823 |
null | [
"mongodb-shell",
"app-services-user-auth"
] | [
{
"code": "sudo systemctl start mongod/usr/bin/mongod --config /etc/mongod.confadmin> show users\n[\n {\n _id: 'admin.superuser',\n userId: new UUID(\"554176216d-5d9f-47ab-91c6-f0bf73ffb30asdc\"),\n user: 'superuser',\n db: 'admin',\n roles: [ { role: 'root', db: 'admin' } ],\n mechanisms: [ 'SCRAM-SHA-1', 'SCRAM-SHA-256' ]\n }\n]\n mongosh \"mongodb://localhost:27019/admin\" -u superuser --authenticationDatabase admin/etc/mongod.confsecurity:\n authorization: \"enabled\"\nsudo systemctl restart mongodmongosh \"mongodb://localhost:27019/admin\" -u superuser --authenticationDatabase adminMongoNetworkError: connect ECONNREFUSED",
"text": "Hello, I have launched my mongodb using sudo systemctl start mongod which runs /usr/bin/mongod --config /etc/mongod.conf.I have created a superuser user in my admin database:I am able to connect using mongosh \"mongodb://localhost:27019/admin\" -u superuser --authenticationDatabase admin and then giving “my_pw” (giving the wrong password fails).After this setup, I add the following lines to my /etc/mongod.conf:and restart mongodb service with sudo systemctl restart mongod.But then, the same password that worked before is not working anymore:\nmongosh \"mongodb://localhost:27019/admin\" -u superuser --authenticationDatabase admin\n“my_pw” fails, even though I just set it!\nMongoNetworkError: connect ECONNREFUSEDIs there anything I’m missing here?",
"username": "Hendrik_Klug"
},
{
"code": "MongoNetworkError: connect ECONNREFUSEDsecurity:\n authorization: \"enabled\"\n",
"text": "I just realized that MongoNetworkError: connect ECONNREFUSED actually means that my mongodb is not running… My tab indentation in the .conf file was the issue.Entry should be:Very confusing that the error comes after asking for the password and not before.",
"username": "Hendrik_Klug"
},
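After fixing the indentation, a quick way to confirm the fix took effect is to restart and check the service before retrying the password (same host, port and user as in the original post):

```
sudo systemctl restart mongod
sudo systemctl status mongod    # should report "active (running)" before you try to log in
mongosh "mongodb://localhost:27019/admin" -u superuser --authenticationDatabase admin
```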
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Trouble enabling authorization to my self hosted mongodb | 2023-04-01T10:54:58.081Z | Trouble enabling authorization to my self hosted mongodb | 909 |
[
"sharding",
"connector-for-bi",
"bengaluru-mug"
] | [
{
"code": "",
"text": "\nMUG_1stApr1920×1080 387 KB\n Attention all Bengaluru MongoDB Enthusiasts! Get ready for an unforgettable night of learning, networking, and fun at the upcoming Bengaluru MongoDB User group meetup on April 1st, 2023. Join us at the Slice office in Bengaluru for a day packed with exciting topics and speakers! The first topic of the evening will be BI-connector, presented by Santhosh, a Consulting Engineer at MongoDB. In this talk, you’ll learn how to integrate MongoDB with your BI tools to unlock the power of your data. Whether you’re a seasoned MongoDB user or new to the platform, this session is sure to be informative and engaging. Next up, we have the opportunity to hear from the, Sarthak Dalabehera SDE III at Slice! He’ll share insights and tips on MongoDB Indexing. Learn from the best in the business and take away valuable knowledge to apply to your own projects. And, as if that wasn’t enough, we’ll also have a talk on Scaling MongoDB with Horizontal and Vertical Sharding by Manosh, the CTO of MyDBOPS. With his expertise and experience, you’re sure to come away with a deeper understanding of this topic. Plus, the meetup will be hosted at the Slice office in Bengaluru, making for a unique and exciting environment to learn and connect with other MongoDB enthusiasts. But wait, there’s more! We’ll also be hosting a quiz and trivia session, giving you the chance to put your MongoDB knowledge to the test and win some cool swag . Who knows, you may just walk away with some awesome prizes!Don’t miss out on this incredible opportunity to learn, network, and have some fun with other MongoDB users. Be sure to mark your calendars for April 1st, 2023, 10:00AM Onwards and we’ll see you there! To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are RSVPed. You need to be signed in to access the button.Event Type: In-Person\nLocation: Slice, Bangalore\n IndiQube Ashford\nAddress: 6/B, Mahatyagi Laksmidevi Rd, Koramangala 1A Block, Koramangala 3 Block, Koramangala, Bengaluru, Karnataka 560034",
"username": "DarshanJayarama"
},
{
"code": "",
"text": "Hi, how can I register to the event.",
"username": "Channaveer_Hakari1"
},
{
"code": "",
"text": "How to register? Where to register?",
"username": "Harsha_Vardhana"
},
{
"code": "",
"text": "If you are logged in - you can click on the green “RSVP button” on the top of the post to register for the event \nScreenshot 2023-03-14 at 12.53.10 PM768×125 18.7 KB\n",
"username": "Harshit"
},
{
"code": "",
"text": "How do we vote for the topic of interest?",
"username": "N_A_N_A14"
},
{
"code": "",
"text": "Hi @N_A_N_A14 ,You should be able to see the polling in this page, can you please refresh the page.Thanks,\nDarshan",
"username": "DarshanJayarama"
},
{
"code": "",
"text": "After registering do we receive any mail with invite. As I did not get any confirmation mail.",
"username": "vishwanath_kumbi"
},
{
"code": "",
"text": "Hey @viswanatha_k,\nYou don’t receive any emails at the moment after RSVPing. (we are working on it as a platform feature) However, we will be sending out event reminder emails 24 hours before the event ",
"username": "Harshit"
},
{
"code": "",
"text": "Hey All,\nWe are excited to remind you that Bengaluru MongoDB User Group Meetup! is tomorrow at Slice Office, IndiQube Ashford, Koramangala! We are thrilled to have you all join us.We want to make sure everyone has a fantastic time, so please arrive on time at 09:30 AM to ensure you don’t miss any of the sessions, and we can all have some time to chat before the talks begin.There are a few important things to keep in mind:Please bring along one of your Government-Issued IDs, which you’ll need to present at the reception to access the building and event premises. From there, just follow the signs to the cafeteria area.Please stay within the designated event premises and maintain a respectful and professional atmosphere throughout the office. We also kindly request that you throw away any used plates and cans to keep the space clean.If you have any questions, please don’t hesitate to ask by replying to this Looking forward to seeing you at the event The MongoDB User Group Leaders, Bengaluru",
"username": "Harshit"
},
{
"code": "",
"text": "Thanks everyone for coming, it was interacting with you all.Please find the indexing internals presentation link: here",
"username": "Sarthaka_Dalabehera"
}
] | Bengaluru Meetup: From MongoDB Atlas Bi-Connectors to Sharding, Performance Strategies for Data Driven World | 2023-03-14T12:13:54.044Z | Bengaluru Meetup: From MongoDB Atlas Bi-Connectors to Sharding, Performance Strategies for Data Driven World | 3,796 |
null | [
"field-encryption"
] | [
{
"code": "C:\\Programme\\mongodb\\Server\\6.0\\bin>mongod\n{\"t\":{\"$date\":\"2023-03-25T08:47:40.365+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-03-25T08:47:40.369+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.907+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.909+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.909+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.909+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.910+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.911+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":13736,\"port\":27017,\"dbPath\":\"C:/data/db/\",\"architecture\":\"64-bit\",\"host\":\"URANUS\"}}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.911+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.911+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.4\",\"gitVersion\":\"44ff59461c1353638a71e710f385a566bcd2f547\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.911+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 19044)\"}}}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.912+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.914+01:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory C:\\\\data\\\\db\\\\ not found. 
Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.915+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.915+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.916+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.916+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.917+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.917+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.918+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.922+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.922+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.922+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.923+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.924+01:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.924+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.924+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.925+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.926+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.926+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.927+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.927+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, 
\"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.927+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-03-25T08:47:41.928+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n",
"text": "Hello. I´m would like to create a new data base for a new website. In this computer this has been done many times and now there are two versions of mongo installed 5.0 and 6.0I cannot run any of the two as when I try the command returns.",
"username": "Gladys_Sobrido_Sambade"
},
{
"code": "\"error\":\"NonExistentPath: Data directory C:\\\\data\\\\db\\\\ not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"",
"text": "Hi @Gladys_Sobrido_Sambade\"error\":\"NonExistentPath: Data directory C:\\\\data\\\\db\\\\ not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"Here Is your error.\nYou Need to modify your configuration file with the correct db Path or you Need to add the option --dbpath directory.Above i link the reference to manual.I Hope Is useful.B.R.",
"username": "Fabio_Ramohitaj"
},
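For reference, a minimal sketch of the two options from that error message; the path shown is only an example and the directory must already exist on disk:

```yaml
# mongod.cfg (sketch) -- option (2): point storage.dbPath at an existing directory
storage:
  dbPath: "D:/A2Projekt/data/db"   # example path; create it first
```

```sh
# option (1): pass the path on the command line instead
mongod --dbpath "D:/A2Projekt/data/db"
```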
{
"code": "C:\\Windows\\System32>mongod --port 27017 --dbpath D:/A2Projekt/data/db\n{\"t\":{\"$date\":\"2023-03-26T20:08:46.881+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.545+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.546+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.548+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.548+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.548+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.548+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.550+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":3536,\"port\":27017,\"dbPath\":\"D:/A2Projekt/data/db\",\"architecture\":\"64-bit\",\"host\":\"URANUS\"}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.550+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.550+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.5\",\"gitVersion\":\"c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.550+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 19044)\"}}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.551+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"net\":{\"port\":27017},\"storage\":{\"dbPath\":\"D:/A2Projekt/data/db\"}}}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.615+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data 
files\",\"attr\":{\"dbpath\":\"D:/A2Projekt/data/db\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:48.615+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=15812M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:49.470+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":853}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:49.470+02:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:49.740+02:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-26T20:08:49.741+02:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-26T20:08:49.811+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:49.812+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:49.816+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-03-26T20:08:49.854+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-03-26T20:08:50.282+02:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"D:/A2Projekt/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:50.363+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:50.364+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-03-26T20:08:50.374+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2023-03-26T20:08:50.374+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n",
"text": "Unfortunately after using path command, issue persists",
"username": "Gladys_Sobrido_Sambade"
},
{
"code": " \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n",
"text": "Change the port for one of them, or rename it etc.You’re still going into the same MongoDB setup when both databases share the same IP and Port, so change the port and IP it’s config’d with, and assign a DNS as well, then you can reference either of them as separate.",
"username": "Brock"
},
{
"code": "",
"text": "Hi @Gladys_Sobrido_Sambade,\nI don’ t see any issue from your log.\nOnly things i suggest you to change is the bind ip to 0.0.0.0 or the correct nic.Best Regards",
"username": "Fabio_Ramohitaj"
}
] | Error: Could not find any releases for the requested version | 2023-03-25T07:49:02.175Z | Error: Could not find any releases for the requested version | 1,107 |
null | [
"golang"
] | [
{
"code": "",
"text": "I recently started using the official driver instead of the globalsign project and one of the things I miss most is the ability to count how many active connections I have going with mongodb. I can see it on the server side in the monitoring section of the mongodb.com website. But I don’t know which of my servers is misbehaving without restarting them and watching the number go down. It would be nice if the mongo client gave health info like this. Does it already exist and I just can’t find it?",
"username": "David_Johnson1"
},
{
"code": "",
"text": "Have I posted in the wrong forum? Is there something I can do better to get a response?",
"username": "David_Johnson1"
},
{
"code": "",
"text": "I don’t think such thing exists. You may need to “emit metric” from all nodes in your cluster and do sort of aggregation. Host identifier is needed so that you know where the data is from.In our system. We use k8s + prometheus, so we know which k8s pod is sending what data.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Sure, I can wrap the mongodb client and track what requests are made and when. But I can’t see the internal workings of the client. So, I can’t know how many connections currently exist (which matters because we can run out0. Is there really nothing to check the internal health of the client?",
"username": "David_Johnson1"
},
{
"code": "globalsign/mgomgovar openConns int32\npoolMonitor := &event.PoolMonitor{\n\tEvent: func(evt *event.PoolEvent) {\n\t\tswitch evt.Type {\n\t\tcase event.ConnectionReady:\n\t\t\tatomic.AddInt32(&openConns, 1)\n\t\tcase event.ConnectionClosed:\n\t\t\tatomic.AddInt32(&openConns, -1)\n\t\t}\n\t},\n}\n\nmongo.Connect(\n\tcontext.Background(),\n\toptions.Client().\n\t\tApplyURI(\"myURI\").\n\t\tSetPoolMonitor(poolMonitor))\n",
"text": "@David_Johnson1 thanks for the question! Which APIs from the globalsign/mgo driver did you find useful getting telemetry information? Is there information other than just connection count that mgo provided that you found useful?As far as how to measure it in the official MongoDB Go Driver, the best way currently is using a PoolMonitor via ClientOptions.SetPoolMonitor configuration.Here’s an example of counting total open connections:",
"username": "Matt_Dale"
}
] | What monitoring/health checking options does the golang mongodb driver support? Like connection count? | 2023-03-09T15:36:38.960Z | What monitoring/health checking options does the golang mongodb driver support? Like connection count? | 1,257 |
null | [] | [
{
"code": "WriteResultupdate{ muitl: true }_id",
"text": "Is it possible to get the WriteResult from within an Atlas Trigger?Specifically, I have an update statement with { muitl: true }, I need to know when more than one document has been updated.Also, would it be possible to get the documents that were updated? Or at least their _id’ s?Thanks.",
"username": "Dev_Ops"
},
{
"code": "",
"text": "Yes, in log forwarding for the Data API you can get this information.",
"username": "Brock"
}
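As a possible alternative, here is a minimal sketch of reading the write result directly inside an Atlas Function invoked by the trigger. The service name "mongodb-atlas" is the usual default linked data source, and the database, collection and filter names below are placeholders, not taken from this thread:

```javascript
// Atlas Function sketch (names are illustrative)
exports = async function () {
  const coll = context.services
    .get("mongodb-atlas")        // default linked cluster name (assumption)
    .db("myDb")
    .collection("myCollection");

  // updateMany is the function-API equivalent of update with { multi: true }
  const result = await coll.updateMany(
    { status: "pending" },
    { $set: { status: "processed" } }
  );

  // result.matchedCount / result.modifiedCount report how many documents were touched
  if (result.modifiedCount > 1) {
    console.log(`More than one document updated: ${result.modifiedCount}`);
  }
  return result;
};
```

To also get the _ids of the affected documents, one option is to run a find() with the same filter before the update, though that is not atomic with the update itself.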
] | Get WriteResult in Atlas Trigger | 2023-03-22T13:05:21.867Z | Get WriteResult in Atlas Trigger | 694 |
null | [] | [
{
"code": "",
"text": "Hello,\nI have a requirement to move certain data from 1 project cluster to another project cluster through triggers. When I’m creating a trigger, it is giving me an option to Linked data source for different clusters within same project.Is there a way I can link cluster from the different project as target for trigger?Thanks",
"username": "Nikhil_Chawla"
},
{
"code": "",
"text": "Were you able to find out how to do this ? I have a similar requirement",
"username": "Vishal_Thakur1"
},
{
"code": "",
"text": "I have a similar requirement too",
"username": "Rafael_Martins"
},
{
"code": "",
"text": "Short answer no.\nLong answer yes.Build GraphQL API to interface between the clusters.\nBuild an API with Data API between the clusters\nBuild an HTTP Endpoint between each cluster.\nBuild an API with functions between clusters and setup triggers to fire as events happen over the API.",
"username": "Brock"
}
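To illustrate the "Data API between the clusters" option above, here is a rough sketch of an Atlas Function (attached to a trigger in the source project) that forwards the changed document to a cluster in another project via that project's Data API endpoint. The URL, app ID, API key value and database/collection names are all placeholders:

```javascript
// Trigger function sketch -- forwards inserts to another project's cluster through the Data API
exports = async function (changeEvent) {
  const response = await context.http.post({
    url: "https://data.mongodb-api.com/app/<target-data-api-app-id>/endpoint/data/v1/action/insertOne",
    headers: {
      "Content-Type": ["application/json"],
      "api-key": [context.values.get("targetProjectApiKey")], // stored as a secret value
    },
    body: JSON.stringify({
      dataSource: "Cluster0",          // cluster name in the target project
      database: "targetDb",
      collection: "targetCollection",
      document: changeEvent.fullDocument,
    }),
  });
  return response.statusCode;
};
```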
] | Can we write into collection from one project collection to other project collection through trigger? | 2022-08-05T15:16:41.198Z | Can we write into collection from one project collection to other project collection through trigger? | 2,174 |
null | [
"python"
] | [
{
"code": "{\n\t\"field1\": \"value 1\",\n\t\"field2\": \"value 2\",\n\t\"field3\": \"value 3\",\n\t\"field4\": \"value 4\",\n\t\"field5\": \"value 5\",\n\t\"field6\": \"value 6\",\n}\[email protected](\"/api/devices/<id>\")\ndef update_devices(id):\n _json = request.json\n db.sensors.update_one({'_id': ObjectId(id)}, {\"$set\", _json})\n\n resp = jsonify({\"message\": \"Devices updated successfully\"})\n resp.status_code = 200\n return resp\n{\n\t\"field1\": \"value 1\",\n\t\"field3\": \"value 3\",\n\t\"field4\": \"value 4\",\n}\n{\n\t\"field2\": \"value 2\",\n\t\"field4\": \"value 4\",\n\t\"field5\": \"value 5\",\n}\n",
"text": "Hi,\nI have a device collection that stores dynamic fields per documentUsing a Flask PyMongo REST server, I am updating my document by passing the JSON from the client\nusing update_one()But I am getting “TypeError: unhashable type: ‘dict’”The input from the client could be different only like this,or could beIs there a way to handle dynamic field updates in PyMongo? Or could I be doing something wrong?",
"username": "Nel_Neliel"
},
{
"code": "db.sensors.update_one({'_id': ObjectId(id)}, {\"$set\": _json})",
"text": "db.sensors.update_one({'_id': ObjectId(id)}, {\"$set\": _json})Sorry, made a silly mistake on this line. I put a “,” instead of “:”.\nAll is well now.Thanks",
"username": "Nel_Neliel"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | PyMongo Dynamic Fields on Update | 2023-04-01T00:47:43.021Z | PyMongo Dynamic Fields on Update | 870 |
null | [] | [
{
"code": " \"productsData\": {\n \"totalPrice\": \"0\",\n \"totalPriceBeforeModifiers\": \"0\",\n \"count\": 0,\n \"products\": []\n },\ncountproducts",
"text": "Is there a way to make it so that a field is not projected at all if a certain condition is not met?Example:Because the value of the field count is 0, products should not be projected.",
"username": "Vladimir"
},
{
"code": "countproductscount1count0myFirstDatabase> db.data.find()\n[\n {\n _id: ObjectId(\"6420f52f85731f19c94ceb58\"),\n productsData: {\n totalPrice: '0',\n totalPriceBeforeModifiers: '0',\n count: 0,\n products: []\n }\n },\n {\n _id: ObjectId(\"6420fabe85731f19c94ceb59\"),\n productsData: {\n totalPrice: '0',\n totalPriceBeforeModifiers: '0',\n count: 1,\n products: [ 'testproduct' ]\n }\n }\n]\nmyFirstDatabase> db.data.aggregate({\n '$set': {\n 'productsData.products': {\n '$cond': [\n { '$eq': [ '$productsData.count', 0 ] },\n '$$REMOVE',\n '$productsData.products'\n ]\n }\n }\n})\n[\n {\n _id: ObjectId(\"6420f52f85731f19c94ceb58\"),\n productsData: { totalPrice: '0', totalPriceBeforeModifiers: '0', count: 0 }\n },\n {\n _id: ObjectId(\"6420fabe85731f19c94ceb59\"),\n productsData: {\n totalPrice: '0',\n totalPriceBeforeModifiers: '0',\n count: 1,\n products: [ 'testproduct' ]\n }\n }\n]\n$project$set$cond$$REMOVE$project$cond",
"text": "Hi @Vladimir,Because the value of the field count is 0, products should not be projected.Not sure if the following suits your use case, I’ve only tested it on the 2 sample documents in my own test environment (One document with a count value of 1 and the other document with a count value of 0Aggregation used:Output:I believe you could use a similar form with $project instead of $set for the same output but the main thing here would be the following operators / variables:If you’re having trouble with the $project stage with $cond, let me know what you’ve tried and the output / errors you’re getting.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you very much!",
"username": "Vladimir"
},
{
"code": " { $match: { \"count \": { $gt : 0}}}",
"text": "This may be a bit simplistic but why not simply $match on “count” > 0 at the beginning of your pipeline { $match: { \"count \": { $gt : 0}}}",
"username": "Marshall_Giguere"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Conditional field projection | 2023-03-26T18:19:17.846Z | Conditional field projection | 1,126 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 5.0.16-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.15. The next stable release 5.0.16 will be a recommended upgrade for all 5.0 users.\nFixed in this release:",
"username": "James_Hippler"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 5.0.16-rc0 is released | 2023-03-31T20:08:59.785Z | MongoDB 5.0.16-rc0 is released | 1,016 |
null | [
"aggregation"
] | [
{
"code": "[{\n _id: {\n \"$oid\": \"641c81364e238225b6362cfb\"\n },\nparent: {\n \"$oid\": \"641cd6eb5d09817ea0222ecc\"\n },\nname: \"Towable\",\n...other fields\n}]\n[{\n{\n \"_id\": {\n \"$oid\": \"641c81364e238225b6362cfb\"\n },\n \"inventoryCategory\": \"Towable\",\n \"ancestors\": [\n \"Boom Lifts\",\n \"Material Handling\"\n ]\n },\n}]\n[{\n{\n \"_id\": {\n \"$oid\": \"6425b1f0f7c12b4f67bdbd51\"\n },\n \"inventoryCategoryId\": {\n \"$oid\": \"641c81364e238225b6362cfb\"\n },\n...other fields\n}]\n[{\n{\n \"_id\": {\n \"$oid\": \"6425b1f0f7c12b4f67bdbd51\"\n },\n \"inventoryCategoryId\": {\n \"$oid\": \"641c81364e238225b6362cfb\"\n },\n \"inventoryCategory\": \"Towable\",\n \"ancestors\": [\n \"Boom Lifts\",\n \"Material Handling\"\n ]\n}]\n const pipeline = [\n {\n $graphLookup: {\n from: 'inventoryCategories',\n startWith: '$parent',\n connectFromField: 'parent',\n connectToField: '_id',\n depthField: 'order',\n as: 'ancestor',\n },\n },\n {\n $unwind: '$ancestor',\n },\n {\n $sort: {\n _id: 1,\n 'ancestor.order': 1,\n },\n },\n {\n $group: {\n _id: '$_id',\n name: { $first: '$name' },\n parent: { $first: '$parent' },\n ancestor: { $push: '$ancestor' },\n },\n },\n {\n $project: {\n inventoryCategory: '$name',\n ancestors: '$ancestor.name',\n },\n },\n ];\n\n const categoriesWithAncestors = await categories.aggregate(pipeline).toArray();\n\n const inventoryPipeline = [\n {\n $lookup: {\n localField: 'inventoryCategoryId',\n foreignField: '_id',\n as: 'category',\n pipeline: [\n {\n $documents: [...categoriesWithAncestors],\n },\n ],\n },\n },\n {\n $set: {\n inventoryCategory: { $arrayElemAt: ['$category.name', 0] },\n },\n },\n {\n $unset: 'category',\n },\n {\n $limit: 5,\n },\n {\n $merge: {\n into: 'inventoryWithCategory',\n whenMatched: 'replace',\n whenNotMatched: 'insert',\n },\n },\n ];\n",
"text": "The docs says I can perform a $lookup without the ‘from’ field and passing an array to $documents stage inside a pipeline. But I’m getting an error trying that: “MongoServerError: missing ‘from’ option to $lookup stage specification”.Is that really supported?reference: https://www.mongodb.com/docs/manual/reference/operator/aggregation/documents/#use-a--documents-stage-in-a--lookup-stageMy use case is I have to perform an aggregation in one collection using $graphLookup to build an array with the document ancestors. That is in one collection called ‘inventoryCategories’\nNow I need to get this data and do a $lookup in the “inventory’ collection to add that information in each document.\nThe solution I imagined was to perform the first aggregation on ‘inventoryCategories’” save the data on a array, and use this array in the second aggregation in the ‘inventory’ collection. That makes sense?InventoryCategory:Array after first aggregationInventory:Desired output (Materialized View - Inventory)Aggregations:",
"username": "Romulo_Melo"
},
{
"code": "$group: {\n _id: '$_id',\n$_id$documents$lookup$documents accepts any valid expression that resolves to an array of objects.[...categoriesWithAncestors]",
"text": "First of all, you should never group on $_id - that’s an antipattern you can just sort the array in each document.As far as $documents inside $lookup it absolutely works but $documents accepts any valid expression that resolves to an array of objects. and I can’t tell from your post, what is [...categoriesWithAncestors] - it seems like it’s a local variable?What version are you using?Asya",
"username": "Asya_Kamsky"
},
{
"code": "missing ‘from’ option$documents$lookup",
"text": "This is a bit strange - I cannot reproduce the error you say you are getting - are you sure this exact syntax gave you the missing ‘from’ option error? Are you using version 5.0 or older by chance? $documents was only introduced in 5.1/6.0 so maybe the parser is giving that error before getting to unrecognized stage inside $lookup?",
"username": "Asya_Kamsky"
},
{
"code": "$documentsfrom$lookup",
"text": "If you are on older version and don’t have $documents you can emulate the same thing you are doing by defining a read only view on the pipeline that does the $graphLookup. Now you can just specify the view name in the from field of $lookup and it should all work.Asya",
"username": "Asya_Kamsky"
},
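For anyone following along, a minimal sketch of the read-only view approach on MongoDB 5.0. The view and collection names follow this thread, and `ancestorsPipeline` is a placeholder standing for the $graphLookup aggregation from the first post:

```javascript
// Run once in mongosh: a view stores only the pipeline definition, not a copy of the data
db.createView("categoriesWithAncestors", "inventoryCategories", ancestorsPipeline);

// The inventory aggregation can then $lookup from the view by name
db.inventory.aggregate([
  {
    $lookup: {
      from: "categoriesWithAncestors",
      localField: "inventoryCategoryId",
      foreignField: "_id",
      as: "category",
    },
  },
  { $set: { inventoryCategory: { $arrayElemAt: ["$category.inventoryCategory", 0] } } },
  { $unset: "category" },
]);
```

Creating the view is a one-time operation; its pipeline is evaluated each time the view is read, so it does not permanently consume extra storage.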
{
"code": "",
"text": "Yes, is a local variable where I stored an array with the output of the previous aggregation.\nYou are totally right, I’m using atlas shared cluster and didn’t realize it only supports mongo 5.0",
"username": "Romulo_Melo"
},
{
"code": "",
"text": "how can I use a read only view? It will not consume resources creating a view every time I run the aggregation?",
"username": "Romulo_Melo"
},
{
"code": "",
"text": "The docs says that the results of $graphLookup are not in order, and I need the ancestors list to be ordered. How can I do that without grouping by _id?\nThanks for the help! I didn’t know it was an anti-pattern. Where can I found more information about patterns and anti-patterns?",
"username": "Romulo_Melo"
}
] | $lookup using $documents instead of 'from' | 2023-03-31T00:06:53.229Z | $lookup using $documents instead of ‘from’ | 651 |
null | [
"java",
"atlas-cluster"
] | [
{
"code": "",
"text": "HelloùI havw a java progran and I want to connect to my mongo db\nhere the connection string I got from the site\nString uri = “mongodb+srv://cipollan:xxxxxx%40xxxxxx%[email protected]/?retryWrites=true&w=majority”;\nThe folllowing fails\nMongoClient mongoClient = MongoClients.create(uri);\nHow can I connect to my atlas mongo db?\nThanks",
"username": "andre_cipo"
},
{
"code": "",
"text": "That URI is, I believe, incorrect. I suspect you copied or typed it wrong. There should not be those special characters in your password.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I have modified the connectionstring by removing any special characted but it doesn’t wark anyway.\nDo you have a working example of a java program connecting to a mongodb database on atlas platform? Thanks",
"username": "andre_cipo"
},
{
"code": "",
"text": "Did you get the URL for the connection exactly from what Atlas offers you in the connection page?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Also, if you’re having trouble connecting, I would suggest first you try connecting with mongosh and make sure everything works, and then try a language driver like Java.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi\nI tried to connect with mongosh and I got the following error (I used the conneciton string got from the site)\nCurrent Mongosh Log ID: 64268d897beeb0e40a749f48\nConnecting to: mongodb+srv://@cluster0.53g76t2.mongodb.net/?retryWrites=true&w=majority&appName=mongosh+1.8.0\nMongoServerSelectionError: connect ETIMEDOUT 63.33.255.135:27017\nPress any key to exit:Any Idea on what I should look for?\nThanks a lot",
"username": "andre_cipo"
},
{
"code": "",
"text": "Maybe I should allow connections from my client IP? But How could I handle a conectione from a Mobile cient?",
"username": "andre_cipo"
},
{
"code": "",
"text": "I have added in the network panel permission for connecting from any client but the error of connection timeout is still happening",
"username": "andre_cipo"
},
{
"code": "",
"text": "It sounds like you are being blocked by a firewall. Are you in a corporate environment and behind a firewall?",
"username": "Jack_Woehr"
}
] | Connection to MongoDB in the cloud | 2023-03-26T10:02:23.012Z | Connection to MongoDB in the cloud | 1,207 |
null | [
"node-js",
"atlas-cluster"
] | [
{
"code": "MongoServerSelectionError: connection <monitor> to 35.187.27.116:27017 closed at Timeout._onTimeouttype: 'ReplicaSetNoPrimary'const { MongoClient } = require(\"mongodb\");\n\nconst uri = \"mongodb+srv://NBirarov:<password>@rperception.m8wjs1d.mongodb.net/?retryWrites=true&w=majority\";\nconst mongoDBclient = new MongoClient(uri);\n\n\nasync function writeNewUser(email, tokenid){ //Calling this function after user logs in with email\n try {\n const database = mongoDBclient.db('RPerception');\n const users = database.collection('Users');\n const user = {\n \"email\": email,\n \"tokenID\": tokenid\n }\n \n await users.insertOne(user);\n }\n finally{\n console.log('Email saved');\n }\n}\n",
"text": "Hello, I started learning how to use MongoDB and I am trying now to connect to my database from a Google App Engine instance (flexible). I’m using NodeJS v16 and MongoDB v5.1 driver. I took the most basic code from here https://www.mongodb.com/docs/drivers/node/current/quick-start/connect-to-mongodb/ , yet the connection fails with the error MongoServerSelectionError: connection <monitor> to 35.187.27.116:27017 closed at Timeout._onTimeout and type: 'ReplicaSetNoPrimary'. From what I’ve read it has something to do with the IP address being blocked?I saw saw a solution to use VPC network peering, but I’m using an M0 cluster for now so it doesn’t let me. Do I have to use a paid subscription to be able to connect from App Engine to the database? or is there a way to do it with the free tier, at least for testing?This is my code, the connection part taken from the link above. Am I missing something?Thank you for the help",
"username": "Yoav_Banitt"
},
{
"code": "Outbound IP addresses for App Engine services",
"text": "Hello @Yoav_Banitt ,Welcome to The MongoDB Community Forums! As mentioned in your update, the issue was that your IP was not added in your IP Access List at MongoDB Atlas side. This is a security feature and Atlas only allows client connections to the database deployment from entries in the project’s IP access list.A quick google search provides below result which might be helpful in figuring out Outbound IP addresses for App Engine services. I am not familiar with Google App Engine hence won’t be able to help regarding the configuration of outbound ip address from App Engine.For more details regarding MongoDB Atlas IP Access Entries, please checkRegards,\nTarun",
"username": "Tarun_Gaur"
}
] | How to connect to MongoDB Atlas from Google App Engine | 2023-03-30T10:36:46.095Z | How to connect to MongoDB Atlas from Google App Engine | 1,445 |
null | [
"sharding"
] | [
{
"code": "mongomongodmongos",
"text": "I feel like I’m losing my mind. I realized that my server installation through yum did not include the mongo shell. Then when I go to install it manually from Download MongoDB Community Server | MongoDB it also does not include mongo, just mongod and mongos. How can I get the CLI client?",
"username": "Ornj_SWGR"
},
{
"code": "",
"text": "You have to install mongosh",
"username": "Ramachandra_Tummala"
},
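For completeness, a couple of typical install commands for the standalone shell; these assume the MongoDB package repository (or Homebrew) is already set up on the machine:

```sh
# Debian/Ubuntu, with the MongoDB apt repository configured
sudo apt-get install -y mongodb-mongosh

# macOS with Homebrew
brew install mongosh
```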
{
"code": "mongo",
"text": "Ah okay there is a lot of documentation that is outdated then:The mongo shell command man page.All these things say that mongo is included in the Server distribution, but it is not.",
"username": "Ornj_SWGR"
}
] | Mongo Shell not in distribution package | 2023-03-31T15:51:15.338Z | Mongo Shell not in distribution package | 709 |
null | [
"aggregation",
"performance"
] | [
{
"code": "$group$sort$group$match$group",
"text": "I have a dataset of 3 million documents and I want to perform a $group query.I find that if $sort before I $group my query only touches the indexes it goes pretty fast - but if my query looks at the contents of the documents it takes forever. Worse, if I attempt to $match to reduce the query size it goes slower - even though my $match is on the same index as the $sort.Are there some way to coax mongo into handling $group gueries better? For example can I hit to the $group that the records are in-order so that it knows what once it finds a document that belongs to a different group that the previous group is complete?",
"username": "Matthew_Shaylor"
},
{
"code": "",
"text": "Hi @Matthew_Shaylor ,Welcome to MongoDB community.The official way of optimizing group stages is by sorting and then Grouping based on appropriate index:Now if you match and group it will work if the order of the fields is Equality Sort Range. Since you group I believe there might be range or other fields not in that order, additionally make sure to add a sort by the index between match and group.Otherwise I need an explain plan and your indexes to assist further.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
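A small sketch of the sort-then-group pattern described above, using placeholder collection and field names (`customerId` stands in for the group key):

```javascript
// Index that can satisfy the $sort so no in-memory sort is needed
db.orders.createIndex({ customerId: 1 });

db.orders.aggregate([
  // an optional equality $match on indexed fields would come first (Equality, Sort, Range)
  { $sort: { customerId: 1 } },                           // answered from the index above
  { $group: { _id: "$customerId", count: { $sum: 1 } } }  // documents arrive ordered by the group key
]);
```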
{
"code": " return $collection->aggregate([\n ['$match' => ['date'=> [\n '$gte'=>'2022-01-01',\n '$lte'=> '2022-12-31'\n ]]],\n\n ['$group' => [\n '_id' => '$importer',\n 'success_num' => ['$sum' => 1],\n\n ]],\n\n ['$sort'=>[\n 'success_num'=>-1\n ]],\n [ '$limit' => 50 ],\n ]);\n",
"text": "here is an example. you can change it base on object query:",
"username": "kaveh_zhian"
}
] | Aggregate $group then $limit queries on large datasets | 2021-07-30T12:57:39.127Z | Aggregate $group then $limit queries on large datasets | 6,423 |
[
"dot-net",
"replication",
"database-tools",
"graphql",
"android"
] | [
{
"code": "",
"text": "Discussion about this site, its organization, how it works, and how we can improve it.",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | 1 of 3 Most Simplest Ways to "Upgrade" and "Migrate" MongoDB Clusters - Migration and upgrade of 6.2TB | 2023-03-24T22:17:19.645Z | 1 of 3 Most Simplest Ways to “Upgrade” and “Migrate” MongoDB Clusters - Migration and upgrade of 6.2TB | 1,905 |
|
null | [] | [
{
"code": "",
"text": "Hi,I have a database with mongo version 6.0.4 and I need to populate it with information that will be obtained from other environments, however, not all of them have the same version, some of them have versions 4.4 and 5 implemented, due to this I intend to upgrade the environments to 6.0.4, therefore, is it necessary to have the same version of mongo in the source and destination to import the information? Or is it possible to do it without carrying out the upgrade?Regards.",
"username": "Bet_TS"
},
{
"code": "",
"text": "From old to new? Super easy, new to old? Not advised because then you have data types and things that aren’t supported in the older versions.Do this instead:Easy guide with multiple ways to get the data over:",
"username": "Brock"
}
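One common way to do this (an assumption on my part, not spelled out in the post above) is mongodump/mongorestore from the older source into the newer destination; the URIs and database names below are placeholders:

```sh
# Dump from the 4.4 / 5.0 source...
mongodump --uri="mongodb://user:pass@source-host:27017/sourceDb" --archive=sourceDb.archive

# ...and restore into the 6.0.4 destination
mongorestore --uri="mongodb+srv://user:pass@destination-cluster.example.mongodb.net" \
             --archive=sourceDb.archive --nsInclude="sourceDb.*"
```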
] | Import data with different mongo versions | 2023-03-31T06:43:40.859Z | Import data with different mongo versions | 1,199 |
null | [
"python"
] | [
{
"code": "def load_pickle_raw():\n\n # Lade das\n with open(savefile + 'pickle_raw.txt', 'rb') as l_file:\n load_raw = pickle.load(l_file)\n\n return load_raw\n\nload_raw = load_pickle_raw()\n\nfor lo in range(0, (len(load_raw))):\n print(load_raw[lo])\n{'resort': 'Lorem', 'headline1': '\"Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis \nparturient montes, nascetur ridiculus mus.\"', 'headline2': 'Lorem ipsum dolor sit amet, consectetuer adipiscing elit.', 'snippet': 'ddd', 'link': 'https://www.webseite.com', \n'linkhash': '1e5e6b27bde47251aa01c590368cff430d1dd3d2', 'timestamp': '2023-03-29 20:02:20', 'status_create': 'False', 'status_release': 'False'}\nmany many Dict.....\nuserID = ctx.message.author.id\nif conn.mydb.mycol.count_documents({ 'userID': userID }):\n await ctx.send(\"**Error: You're already in the database**\")\nelse:\n await ctx.send(\"**Adding new inventory to the database**\")\n mydict = { \"userID\": userID, \"coin\": \"0\", \"inv\": {\"inv1\": \"\", \"inv2\": \"\", \"inv3\": \"\", \"inv4\": \"\", \"inv5\": \"\"} }\n y = conn.mydb.mycol.insert_one(mydict)\n",
"text": "Hello everyone,\nI would like to fill my mangoDB, but I don’t have an exact plan how to do it, maybe someone of the Python nerds here can give me some tips or show me the way how to do it.Output situation.I have a pickle file that was saved as a list with a lot of dictionaries. About 230 pieces plus minu more.how exactly do i get the dictionaries from the list into mongoDB,it is important to know that I have created a HASH and a dict which is already saved in the DB should not be saved again. I have already found a solution for this on https://stackoverflow.com/, but my question is this really the best and most effective method?I decided to save with pickle because I didn’t like the handling of json and the pickle files are much smaller.",
"username": "Rainer_Schmitz"
},
{
"code": "import json \n# importing the module to show pickles\n\nPickles_dictionary ={ \n \"pickles_id\": \"1\", \n \"pickles_yeah\": \"Rick\", \n \"eatspicypickle\": \"sweet\"\n} \n \n\n# Dictionaries output\nprint(\"The dictionary is as: \\n\", pickles, \"\\n\")\njson_object = json.dumps(pickles, indent = 4) \n\n# JSON output for inspection\nprint(\"Conversion Completed:\", json_object)\n",
"text": "This is super easy, try not to over think this friend,So remember any and all Document DBs or NoSQL DBs essentially devour JSON.The easiest way to do this, instead of figuring out how to get the dictionaries into MongoDB, figure out how to convert the dictionaries to JSON instead. And then use the import function to import the JSON into MongoDB.We can accomplish the Conversion from dictionaries to JSON by doing the following in Python:",
"username": "Brock"
},
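If the converted dictionaries are then written out with json.dump() as a JSON array file, the import step could look roughly like this (URI, database, collection and file names are placeholders):

```sh
mongoimport --uri="mongodb://localhost:27017/myDb" \
            --collection=articles \
            --file=pickle_raw.json \
            --jsonArray
```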
{
"code": "",
"text": "Basically the main goal is just to convert the dictionaries to JSON and then just import the JSON.There are like 20 ways to convert entire dictionaries with even thousands of entries into JSON and accomplish this.This is the better way to do things, and then in your application you match the fields to your outputs.Then you don’t care about the format stored in MongoDB at all.",
"username": "Brock"
},
{
"code": "def load_pickle_raw():\n\n with open(savefile + 'pickle_raw.txt', 'rb') as l_file:\n load_raw = pickle.load(l_file)\n\n return load_raw\n\nload_raw = load_pickle_raw()\n\nres = myCollection.insert_many(load_raw)\n",
"text": "I have now already managed, here mail my codeI will stay away from JSON, because the data I store with the Pickle Libery is much more compact. and I have now found another way to synchronize the already stored data.I still thank you for the quick and friendly response here in the forum.",
"username": "Rainer_Schmitz"
},
{
"code": "",
"text": "All good, if you feel that works better for you, then all good.",
"username": "Brock"
}
] | Sava a list with many dictionary to mongoDB | 2023-03-29T18:41:29.010Z | Sava a list with many dictionary to mongoDB | 1,412 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "let query = {};\n if (req.query.location) {\n query.location = { \"$regex\": req.query.location, \"$options\": \"i\" };\n }\nif (req.query.noOfGuests) {\n query.guests_included = { \"$where\": Number(req.query.noOfGuests) };\n }\n\nconst searchResults = await Listing.find(query);\n\n{\n \"stringValue\": \"\\\"3\\\"\",\n \"valueType\": \"number\",\n \"kind\": \"number\",\n \"value\": 3,\n \"path\": \"guests_included\",\n \"reason\": null,\n \"name\": \"CastError\",\n \"message\": \"Cast to number failed for value \\\"3\\\" (type number) at path \\\"guests_included\\\" for model \\\"Listing\\\"\"\n}\n{\n\"_id\": {\"$oid\":\"64250f9a01625ffc9d744ac9\"},\n\n\"name\": \"Stunning 1 Bedroom Apartment in Holborn, London\",\n\n\"description\": \"Located in the heart of Holborn, this beautiful apartment is the ideal base to explore central London. The apartment comprises a large open-plan living space, a fully equipped kitchen, a spacious bedroom with a king bed, and a bathroom with complimentary toiletries and fluffy towels. There is space for up to 3 guests with the use of the sofa bed. In the heart of the city, you are within walking distance of many of London's most famous attractions - the ideal base for your trip!\",\n\n\"beds\":{\"$numberInt\":\"2\"},\n\n\"numReviews\":{\"$numberInt\":\"0\"},\n\n\"rating\":{\"$numberInt\":\"5\"},\n\n\"bathrooms\":{\"$numberInt\":\"1\"},\n\n\"amenities\":[\"Kitchen\", \"TV\", \"Wifi\", \"Washer\", \"Air conditioner\"],\n\n\"price\":{\"$numberInt\":\"4498\"},\n\n\"guests_included\":{\"$numberInt\":\"3\"},\n\n\"images\":[\"https://a0.muscache.com/im/pictures/prohost-api/Hosting-810793423389859668/original/6c93c696-0e4c-48fb-ad06-fde74dc946d4.jpeg?im_w=1200\",\"https://a0.muscache.com/im/pictures/prohost-api/Hosting-810793423389859668/original/381a0099-2409-4be8-b1f9-f5e4b954a601.jpeg?im_w=720\",\"https://a0.muscache.com/im/pictures/prohost-api/Hosting-810793423389859668/original/a04a5821-fc07-4122-8c20-c373db7f1794.jpeg?im_w=720\",\"https://a0.muscache.com/im/pictures/prohost-api/Hosting-810793423389859668/original/9b7a1e06-bd86-475c-bb79-79e8ee469849.jpeg?im_w=720\",\"https://a0.muscache.com/im/pictures/prohost-api/Hosting-810793423389859668/original/a2c5daa3-866f-4c75-b014-6c35e1727038.jpeg?im_w=720\"],\n\n\"reviews\":[],\n\n\"createdAt\":{\"$date\":{\"$numberLong\":\"1680150426657\"}},\"\n\nupdatedAt\":{\"$date\":{\"$numberLong\":\"1680150426657\"}},\n\n\"__v\":{\"$numberInt\":\"0\"},\n\n\"address\":{\"street\":\"Brooklyn, NY, United States\", \"suburb\":\"Brooklyn\", \"government_area\": \"Bushwick\", \"market\":\"New York\",\"country\":\"United States\"},\n\n\"location\":\"Brooklyn\"}\n\n",
"text": "Hello guys, I’m trying to perform a search operation for a data model below: The query is to return every property where the number of guests matches the number of guests specified in the query for example 3, and the location should match the provided location in the query, for example, Brooklyn in this case.I have tried the regex expression like this:This query returns a Cast Error like so below:This is a sample data:I have read the MongoDB documentation and couldn’t find a way around this yet.",
"username": "Rex_Joseph"
},
{
"code": "let query = {};\n if (req.query.location) {\n query.location = { \"$regex\": req.query.location, \"$options\": \"i\" };\n }\n if (req.query.noOfGuests) {\n query.guests_included = { $in: Number(req.query.noOfGuests) };\n }\n\n try {\n const searchResults = await Listing.find(query);\n res.status(200).json(searchResults);\n } catch (err) {\n console.log(err);\n res.status(500).json(err);\n }\n",
"text": "Okay! I just fixed this using $in like so below:If anyone else has a better method to reduce computation time and increase efficiency, please share. Thanks!",
"username": "Rex_Joseph"
},
{
"code": "$where$in$eq{a:5}{a:{$eq:5}}",
"text": "You should absolutely never use $where. $in works but you only need it when you want to match one of several possible values, when it’s a simple equality match just use $eq or implicit equality (that is {a:5} is the same as {a:{$eq:5}}.",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Thank you Asya, this solves it.",
"username": "Rex_Joseph"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Need help performing a string and integer query | 2023-03-31T09:52:16.866Z | Need help performing a string and integer query | 623 |
null | [
"queries",
"storage"
] | [
{
"code": "",
"text": "I am working on a task in which I am running a cron job, work of cron job is to complete a task and send notifications(emails and sms).\nTime interval for cron job is in every 8 minutes.CronJob is running for 25000 merchants, and there was around 4-5 tasks for each merchant and around 60-70 db queries are running for each merchant, in which around 15-20 db queries are insertion and updation and around 40-50 queries are get queries.I have run the cron job using goroutines, in which I have implemented worker pool, and I have set the worker to 200, which means 200 merchants are completing their tasks parallely.I have setup a server for database, for all merchants exists and there was seperate db for each merchant. And I have used mongoDb as Database, and mongoDb is running on it’s default settings.My servers system specifications are,Database server:-\nRAM:-192GB\nDatabase size:- 570GB\nOS:- Ubuntu 22.04Cron are running on different server and specifications of that server is:-\nRAM:- 16GB\nOS:-Ubuntu 22.04My problem is that, whenever I am starting the cron service, for first few merchants the db is working fine. all the Db queries including insert, update, delete, Get are running fast, but after a period of time, db becomes slow, all the queries run very slow.The db becomes slow for every operations including cronJob or other operations. I have noticed that mongoDb goes into the locking condition for certain period of time.\nAnd this locking time is increasing rapidly, i.e. Whenever it was stopped for first time it was again started in 1-2 seconds, but after some time the time is increased.\nAfter 2-3 hours, It goes to a state in which db got locked for more than 5 minutes and run queries for only 1 minute after that again goes to the locking state.I have noticed a log which was logged frequently whenever db is stopped{“t”:{“$date”:“2023-03-31T06:38:04.021+00:00”},“s”:“W”, “c”:“COMMAND”, “id”:20525, “ctx”:“conn60701”,“msg”:“Failed to gather storage statistics for slow operation”,“attr”:{“opId”:2317177,“error”:“lock acquire timeout”}}I have noticed the locking condition by examining the logs, whenever the db is started after the lock, I am seeing these type of slow query logs in which handleLock and schemaLock is high.{“t”:{“$date”:“2023-03-31T06:40:34.908+00:00”},“s”:“I”, “c”:“COMMAND”, “id”:51803, “ctx”:“conn59118”,“msg”:“Slow query”,“attr”:{“type”:“command”,“ns”:“ausloc678_bk_db.providers”,“command”:{“find”:“providers”,“filter”:{“uid”:7},“limit”:1,“projection”:{“_id”:1,“show_payment_method_and_price”:1,“show_payment_method_and_price_for”:1,“is_team_member”:1,“who_see_payment_method_and_price”:1,“team_lead_id”:1,“hide_provider_payments”:1,“hidden_provider_payments”:1,“show_booking_price”:1,“show_booking_price_for”:1,“who_see_booking_price”:1},“singleBatch”:true,“lsid”:{“id”:{“$uuid”:“c6c4c42b-216c-48c4-92bf-8ca3b1db93f7”}},“$db”:“ausloc678_bk_db”},“planSummary”:“COLLSCAN”,“keysExamined”:0,“docsExamined”:52,“cursorExhausted”:true,“numYields”:1,“nreturned”:0,“queryHash”:“B89C5911”,“planCacheKey”:“B89C5911”,“reslen”:114,“locks”:{“FeatureCompatibilityVersion”:{“acquireCount”:{“r”:2}},“ReplicationStateTransition”:{“acquireCount”:{“w”:2}},“Global”:{“acquireCount”:{“r”:2}},“Database”:{“acquireCount”:{“r”:2}},“Collection”:{“acquireCount”:{“r”:2}},“Mutex”:{“acquireCount”:{“r”:1}}},“storage”:{“data”:{“bytesRead”:28496,“timeReadingMicros”:13},“timeWaitingMicros”:{“handleLock”:122143,“schemaLock”:15285487}},“protocol”:“op_msg”,“durationMillis”:15899}}Can someone help me to find the solution to 
prevent these locking conditions? I have optimized all the db queries; no lookups or joins are used in any query.\nAnd I have some questions:-",
"username": "sahil_garg1"
},
{
"code": "",
"text": "What would be the reasons for this issue I am facing?You haveseperate db for each merchantand25000 merchantsso your server haas at least 50000 open files. At least, because if you have indexes you even have more files. So you effectively implemented the Massive Number of Collections anti-pattern.I have optimized all the db queriesI am not too sure about that, since the logs you shared show a“planSummary”:“COLLSCAN”",
"username": "steevej"
}
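As an illustration of the COLLSCAN in the slow-query log above, an index on the filtered field would avoid the collection scan for that particular query (though note it would have to be created in every per-merchant database, adding yet more open files to the count mentioned above):

```javascript
// mongosh sketch, using the namespace from the logged slow query
use ausloc678_bk_db
db.providers.createIndex({ uid: 1 })
```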
] | MongoDb becomes slow after sometime | 2023-03-31T07:05:52.132Z | MongoDb becomes slow after sometime | 1,083 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi everyone,I am having a json as,\n{\n“_id” : 1,\n“item” : “TBD”,\n“stock” : 0,\n“info” : { “publisher” : “1111”, “pages” : 430 },\n“tags” : [ “technology”, “computer” ],\n“ratings” : [ { “by” : “ijk”, “rating” : 4 ,“_id”:“2”}, { “by” : “lmn”, “rating” : 5 ,“id”= “3”} ],\n“publicratings” : [ { “by” : “ijk1”, “rating” : 4 ,“_id”:“4”}, { “by” : “lmn1”, “rating” : 5 ,“id”= “5”} ],\n“reorder” : false\n}I want to add new field “status” to parent and “Active” to child object based on id{\n“_id” : 1,\n“item” : “TBD”,\n“stock” : 0,\n“info” : { “publisher” : “1111”, “pages” : 430 },\n“tags” : [ “technology”, “computer” ],\n“ratings” : [ { “by” : “ijk”, “rating” : 4 ,“_id”:“2”,“Active”:0}, { “by” : “lmn”, “rating” : 5 ,“id”= “3”,Active=“0”\n} ],\n“publicratings” : [ { “by” : “ijk1”, “rating” : 4 ,“_id”:“4”,“Active”:0}, { “by” : “lmn1”, “rating” : 5 ,“id”= “5”,“Active”:0} ],\n“reorder” : false,\n“STATUS”: “A”\n}Please help in query to update as above",
"username": "Rajalakshmi_R"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and update your documents so that we can cut-n-paste into our systems.",
"username": "steevej"
}
] | Update main document and nested document based on its objectid | 2023-03-31T09:28:10.880Z | Update main document and nested document based on its objectid | 1,281 |
null | [
"queries"
] | [
{
"code": "{\n _id: \"FupuYjXeWooeTV0dEBE\",\n timestamp: 1667507338,\n group: \"VXF\",\n tags: {\n format: \"binary\",\n version: \"0.1.0\",\n action: \"store\"\n },\n data: \"\"\n}\n{ 'tags.action': 'store' }\n",
"text": "I have an Atlas cluster that I have been using for quite a while now and never had any issues. Over the weekend I installed MongoDB 6 locally to validate some data before it went on to Atlas. When I query my data and I try to filter by any nested field, there are no results. It will only return results for top level fields.\nIf I click on the Schema tab, and expand one of the nested fields and click on a value, it populates the filter for the query but upon searching it also returns no results. I can see the fields there but haven’t found any way of querying them yet.My data has docs that look like this:An example query looks likebut that returns no results. {‘tags.action’: {$exists: true}} returns no results also…If anyone has any troubleshooting recommendations I would greatly appreciate it!Thank you",
"username": "Scott_N_A2"
},
{
"code": "",
"text": "The problem turned out to be that I was inserting them as bson.E instead of bson.M from golang, and although the nested fields were showing up as expected in Compass, behind the scenes they were a different type, so I just recreated the collection and made sure to insert the whole object and nested fields as bson.M.",
"username": "Scott_N_A2"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | No results for simple nested field query on Mongo 6 | 2023-03-31T11:17:59.677Z | No results for simple nested field query on Mongo 6 | 470 |
null | [
"aggregation",
"queries"
] | [
{
"code": "// Document one\n{\n \"jobs\": [{\n \"jobId\": 1,\n \"predictedCompletionStatus\": \"Late\"\n },\n {\n \"jobId\": 2,\n \"predictedCompletionStatus\": \"Early\"\n },\n {\n \"jobId\": 3,\n \"predictedCompletionStatus\": \"Early\"\n }\n ]\n}\n// Document two \n{\n \"jobs\": [{\n \"jobId\": 1,\n \"predictedCompletionStatus\": \"Early\"\n },\n {\n \"jobId\": 2,\n \"predictedCompletionStatus\": \"OnTime\"\n }\n ]\n}\n// Merged Document\n{\n\t\"merged\": [{\n\t\t\t\"jobId\": 1,\n\t\t\t\"predictedCompletionStatus1\": \"Late\",\n\t\t\t\"predictedCompletionStatus2\": \"Early\"\n\t\t},\n\t\t{\n\t\t\t\"jobId\": 2,\n\t\t\t\"predictedCompletionStatus1\": \"Early\",\n\t\t\t\"predictedCompletionStatus2\": \"OnTime\"\n\t\t}\n\t]\n}\n$lookup",
"text": "I have a requirement to merge 2 to 3 documents in the same collection and produce a new document containing copies of data from fields in both. Sample data:The documents should be joined on the jobId and then alias the columns from each document in order to differentiate them.\nI’m used to doing this kind of thing in SQL where a join would get me what I need. Is it possible to do this? I’ve looked at $lookup but that seems to need the douments in different collections.",
"username": "mc_m0ng0"
},
{
"code": "predictedCompletionStatus1predictedCompletionStatus2// Merged Document\n{ \"merged\" :\n [\n {\n \"jobId\" : 1 ,\n \"predictedCompletionStatus\" : [ 'Late' , 'Early' ] \n } ,\n {\n \"jobId\" : 2 ,\n \"predictedCompletionStatus\" : [ 'Early' , 'OnTime' ]\n }\n ] \n}\n",
"text": "need the douments in different collectionsYou may $lookup from: the same collection you started with.Please update your sample documents so that they are valid JSON documents we can cut-n-paste directly into our system. You are missing commas. You are also missing field names for your arrays.You have jobId:3 in the first document but it is absent from the result. Is it because it is not present in document 2.Having dynamic field names such aspredictedCompletionStatus1andpredictedCompletionStatus2is a bad idea. I suggest that you aim for something like:Naturally, $lookup will produce an array anyway and in most programming languages it is easier to iterate over an array.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for the reply. I’ve updated the OP to include proper JSON. Your suggested output looks better. Can we guarantee the ordering of the items in the array so that index 0 is always the first documents value and index 1 is the second documents value?And yes jobID: 3 is abset as it doesn’t exist in the other document.",
"username": "mc_m0ng0"
},
{
"code": "",
"text": "guarantee the ordering of the items in the arrayIn an array or document list, the order is only guaranty if we $sort. So even withalias the columns from each documenta $sort would be needed.Do you have a field in the source document that you can sort?",
"username": "steevej"
},
{
"code": "{\n\t\"_id\": {\n\t\t\"$oid\": \"6422918c0e9f34f2000ab941\"\n\t},\n\t\"created\": \"2023-03-29T08:04:44Z\",\n\t\"name\": \"Test Doc 1\",\n\t\"jobs\": [{\n\t\t\t\"jobId\": 1,\n\t\t\t\"predictedCompletionStatus\": \"Early\"\n\t\t},\n\t\t{\n\t\t\t\"jobId\": 2,\n\t\t\t\"predictedCompletionStatus\": \"OnTime\"\n\t\t}\n\t]\n}\n",
"text": "Each document has a name,created and an id field at the root level. For example:Perhaps the created column can be used as generally the older document would be the “left hand side” and the newer the “right hand side”.",
"username": "mc_m0ng0"
},
{
"code": "[\n {\n $match: {\n name: {\n $in: [\"Test Doc 1\", \"Test Doc 2\"],\n },\n },\n },\n {\n $sort:\n {\n created: 1,\n },\n },\n {\n $lookup: {\n from: \"sandbox\",\n //localField: \"jobId\",\n //foreignField: \"jobId\",\n let: {\n job_id: \"$jobId\",\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n {\n $eq: [\"$jobId\", \"$$job_id\"],\n },\n ],\n },\n },\n },\n {\n $project: {\n created: 0,\n },\n },\n ],\n as: \"result\",\n },\n },\n {\n $group:\n {\n _id: \"$jobId\",\n merged: {\n $addToSet: \"$result\",\n },\n },\n },\n]\n",
"text": "If you have some time to provide a working aggregation I’d really appreciate it. I’m new to this so chasing my tail a bit with the documentation and trying to understand stuff when coming from a SQL background.The above mess is my attempt to get something going but it’s no where near what I want.",
"username": "mc_m0ng0"
},
{
"code": "[\n {\n $match: {\n name: {\n $in: [\"Test Doc 1\", \"Test Doc 2\"],\n },\n },\n },\n {\n $sort: {\n created: 1,\n },\n },\n {\n $unwind: {\n path: \"$jobs\",\n },\n },\n {\n $group:\n\n {\n _id: \"$jobs.jobId\",\n predictedcompletionStatus: {\n $addToSet:\n \"$jobs.predictedCompletionStatus\",\n },\n },\n },\n {\n $addFields:\n {\n jobId: \"$_id\",\n },\n },\n {\n $project: {\n _id: 0,\n },\n }\n]\n",
"text": "Okay so I made some progress with the following aggregation. How can I filter out the item with jobID 3 with only one element in the array?",
"username": "mc_m0ng0"
},
{
"code": "{\n $match: {\n name: {\n $in: [\"Test Doc 1\", \"Test Doc 2\"],\n },\n },\n },\n{\n \"jobs\": [{\n \"jobId\": 1,\n \"predictedCompletionStatus\": \"Failed\"\n }\n ]\n}\n{ \"merged\" :\n [\n {\n \"jobId\" : 1 ,\n \"predictedCompletionStatus\" : [ 'Late' , 'Early' , 'Failed' ] \n } ,\n {\n \"jobId\" : 2 ,\n \"predictedCompletionStatus\" : [ 'Early' , 'OnTime' ]\n }\n ] \n}\n{ \"merged\" :\n [\n {\n \"jobId\" : 1 ,\n \"predictedCompletionStatus\" : [ 'Late' , 'Early' , 'Failed' ] \n }\n /* jobId:2 absent since there is not 3rd status */\n ] \n}\nmatch = { \"$match\" : {\n \"name\" : \"Test Doc 1\"\n} }\n\nunwind = { \"$unwind\" : \"$jobs\" }\n\nlookup = { \"$lookup\" : {\n \"from\" : \"sandbox\" ,\n \"as\" : \"_lookup\" ,\n \"let\" : {\n \"d1_jobId\" : \"$jobs.jobId\" ,\n \"d1_created\" : \"$created\"\n } ,\n \"pipeline\" : [\n { \"$match\" : { \"$expr\" : { \"$and\" : [\n { \"$eq\" : [ \"$jobs.jobId\" , \"$$d1_jobId\" ] } ,\n { \"$gt\" : [ \"$created\" , \"$$d1_created\" ] }\n ] } } }\n ]\n} }\n",
"text": "Any reason why you $match as follow?This will make both matching documents as the source of a $lookup which does not seem to be what you want. From the merged document, it looks like you want to start with Document one and $lookup the matching jobId in Document 2. May be you also want to do that with Document 2 and $lookup the matching jobId in Document 3 if any? What if there is a Document 3 likeWhat would you want for Merge document?or thisYour localField:/foreignField: (or let:) should be jobs.jobId rather than simply jobId because you are inside the array jobs:.To order Document 2, the {$sort:{created:1}} needs to be in the $lookup: pipeline. Since you want the document that follows Document 1 in time you will need to let: document_one_created:“$created” and use $gt to $match. If you only want to consider Document 2, you then have to $limit:1.Personally, since the source (localField) and target (foreignField) of the $lookup is an array, I think it would be simpler to $unwind before the $lookup.Assuming that you only want to start from Document 1, I would go along the following pipeline stages.I just saw that you were replying to the thread stop I am stopping here. I do not want to shoot on a moving target.",
"username": "steevej"
},
{
"code": "",
"text": "Any reason why you $match as follow?The collection can have 100’s potentially thousands of documents so I want to limit the comparison to 2 or 3 specific documents which will be known at the start.What would you want for Merge document?Good question. I’ll need to nail that down. Initially I was going for the second option.Assuming that you only want to start from Document 1, I would go along the following pipeline stages.I’ll have a play with that pipeline and see how it goesThanks for the help so far.",
"username": "mc_m0ng0"
},
{
"code": "[\n {\n $match: {\n name: {\n $in: [\"Test Doc 1\", \"Test Doc 2\"],\n },\n },\n },\n {\n $sort: {\n created: 1,\n },\n },\n {\n $unwind: {\n path: \"$jobs\",\n },\n },\n {\n $group: {\n _id: \"$jobs.jobId\",\n predictedcompletionStatus: {\n $push: \"$jobs.predictedCompletionStatus\",\n },\n },\n },\n {\n $sort: {\n _id: 1,\n },\n },\n {\n $project: {\n _id: 0,\n jobId: \"$_id\",\n predictedcompletionStatus:\n \"$predictedcompletionStatus\",\n },\n },\n {\n $addFields:\n {\n len: {\n $size: \"$predictedcompletionStatus\",\n },\n },\n },\n {\n $match: {\n len: {\n $eq: 2,\n },\n },\n },\n {\n $project:\n {\n len: 0,\n },\n },\n]\n",
"text": "I tried your pipeline but I wasn’t getting any data in the lookup output so not sure where that’s going wrong. $lookup seems like it might a cleaner approach though.I ploughed on with my approach and I’ve got it filtered down to the ones I care about but I’m wondering what gotchas are in there. The hardcoded check on the size of the predictedcompletionStatus array is not ideal. It would be great if this could be filtered with reference to the number of documents.I changed from using $addToSet to $push in the $group stage as when I went to 3 documents and a status was repeated [Late,Early,Late] the second late was dropped. Does push guarantee the ordering or am I just lucky that it seems to work even when I go to 3 documents?My current pipeline:",
"username": "mc_m0ng0"
},
{
"code": "",
"text": "I wasn’t getting any dataLike I wrote I stop right away because you were replying to the post while I was working on the pipeline. I did not wanted to waste time working on a moving target so I stop until I can see your replies.",
"username": "steevej"
},
{
"code": "",
"text": "Like I wrote I stop right away because you were replying to the post while I was working on the pipeline. I did not wanted to waste time working on a moving target so I stop until I can see your replies.Thanks. I figured that’s why I got no data I’ve replied to your queries and posted where I’m at. I’m not sure if my current approach is a good one as I’m very new to this. Appreciate the input so far. I think I’ll park it for the evening now as I’m getting cross eyed looking at documentation and data.",
"username": "mc_m0ng0"
},
{
"code": "\"predictedCompletionStatus\" : { \"$push\" : {\n \"date\" : \"$created\" ,\n \"status\" : \"$jobs.predictedCompletionStatus\"\n} }\n{ \"$set\" : {\n \"predictedCompletionStatus\" : { \"$sortArray\" : {\n \"input\" : \"predictedCompletionStatus\" ,\n \"sortBy\" : { \"date\" : 1 }\n } }\n} }\n{\n _id: 1,\n predictedCompletionStatus: [\n {\n date: 2023-03-30T13:11:54.921Z,\n status: 'Late'\n },\n {\n date: 2023-03-30T13:12:33.269Z,\n status: 'Early'\n }\n ]\n}\n{\n _id: 2,\n predictedCompletionStatus: [\n {\n date: 2023-03-30T13:11:54.921Z,\n status: 'Early'\n },\n {\n date: 2023-03-30T13:12:33.269Z,\n status: 'OnTime'\n }\n ]\n}\n{\n $addFields:\n {\n len: {\n $size: \"$predictedcompletionStatus\",\n },\n },\n },\n {\n $match: {\n len: {\n $eq: 2,\n },\n },\n },\n{\n $sort: {\n _id: 1,\n },\n },\n{\n $addFields:\n {\n len: {\n $size: \"$predictedcompletionStatus\",\n },\n },\n },\n {\n $match: {\n len: {\n $eq: 2,\n },\n },\n }\n {\n $match: {\n $expr : {\n $eq : [ 2 , { $size : \"$predictedcompletionStatus\" } ] ,\n },\n },\n }\n{\n $project:\n {\n len: 0,\n },\n }\n",
"text": "Does push guarantee the orderingAs far as I know it does add at the end but I cannot find any documentation in this effect. Hopefully someone will point us to the specification. As a solution, if the $push order cannot be guaranteed by the previous $sort stage, you couldand then update predictedCompletionStatus withwhich might provide you with more information about the evolution as the result would look likeEfficiency wise, I would movebeforebecause why sort something that will be removed later.You could also replacewiththen you could remove",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Combine specific documents, join/merge | 2023-03-29T12:04:02.470Z | Combine specific documents, join/merge | 690 |
null | [
"replication",
"transactions",
"containers",
"storage"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-03-30T20:21:21.402+00:00\"},\"s\":\"F\", \"c\":\"REPL\", \"id\":21128, \"ctx\":\"BackgroundSync\",\"msg\":\"Rollback failed with unrecoverable error\",\"attr\":{\"error\":{\"code\":127,\"codeName\":\"UnrecoverableRollbackError\",\"errmsg\":\"not willing to roll back more than 86400 seconds of data. Have: 238302 seconds.\"}}}\nrollbackTimeLimitSecsdb.adminCommand({ getParameter: 1, rollbackTimeLimitSecs:1}){ \"rollbackTimeLimitSecs\" : 240000, \"ok\" : 1 }\nAdvertised Hostname: internal-ae46bb59569474ccd9253fd9f2ee8bbd-784996542.us-east-1.elb.amazonaws.com\nPod name matches initial primary pod name, configuring node as a primary\nmongodb 20:21:16.14 \nmongodb 20:21:16.14 Welcome to the Bitnami mongodb container\nmongodb 20:21:16.15 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb\nmongodb 20:21:16.15 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues\nmongodb 20:21:16.16 \nmongodb 20:21:16.16 INFO ==> ** Starting MongoDB setup **\nmongodb 20:21:16.21 INFO ==> Validating settings in MONGODB_* env vars...\nmongodb 20:21:16.25 INFO ==> Initializing MongoDB...\nmongodb 20:21:16.32 INFO ==> Enabling authentication...\nmongodb 20:21:16.34 INFO ==> Deploying MongoDB with persisted data...\nmongodb 20:21:16.34 INFO ==> Writing keyfile for replica set authentication...\nmongodb 20:21:16.43 INFO ==> ** MongoDB setup finished! **\n\nmongodb 20:21:16.48 INFO ==> ** Starting MongoDB **\n\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.526+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"main\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.529+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.531+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.532+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.532+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.595+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1,\"port\":27017,\"dbPath\":\"/bitnami/mongodb/data/db\",\"architecture\":\"64-bit\",\"host\":\"scd-open-banking-apps-mongodb-0\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.595+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":20720, \"ctx\":\"initandlisten\",\"msg\":\"Available memory is less than system memory\",\"attr\":{\"availableMemSizeMB\":2048,\"systemMemSizeMB\":3884}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.595+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.6\",\"gitVersion\":\"72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7\",\"openSSLVersion\":\"OpenSSL 1.1.1d 10 Sep 2019\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"debian10\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.595+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 10 (buster)\\\"\",\"version\":\"Kernel 5.4.219-126.411.amzn2.x86_64\"}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.595+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/opt/bitnami/mongodb/conf/mongodb.conf\",\"net\":{\"bindIp\":\"*\",\"ipv6\":false,\"port\":27017,\"unixDomainSocket\":{\"enabled\":true,\"pathPrefix\":\"/opt/bitnami/mongodb/tmp\"}},\"processManagement\":{\"fork\":false,\"pidFilePath\":\"/opt/bitnami/mongodb/tmp/mongodb.pid\"},\"replication\":{\"enableMajorityReadConcern\":true,\"replSetName\":\"rs0\"},\"security\":{\"authorization\":\"enabled\",\"keyFile\":\"/opt/bitnami/mongodb/conf/keyfile\"},\"setParameter\":{\"enableLocalhostAuthBypass\":\"false\"},\"storage\":{\"dbPath\":\"/bitnami/mongodb/data/db\",\"directoryPerDB\":false,\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"logRotate\":\"reopen\",\"path\":\"/opt/bitnami/mongodb/logs/mongodb.log\",\"quiet\":false,\"verbosity\":0}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.605+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22271, \"ctx\":\"initandlisten\",\"msg\":\"Detected unclean shutdown - Lock file is not empty\",\"attr\":{\"lockFile\":\"/bitnami/mongodb/data/db/mongod.lock\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.607+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/bitnami/mongodb/data/db\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.608+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22302, \"ctx\":\"initandlisten\",\"msg\":\"Recovering data from the last clean checkpoint.\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:16.611+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening 
WiredTiger\",\"attr\":{\"config\":\"create,cache_size=512M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:18.430+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1680207678:430654][1:0x7ffa8ab55140], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 922 through 923\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:18.484+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1680207678:484759][1:0x7ffa8ab55140], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 923 through 923\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:18.905+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1680207678:905601][1:0x7ffa8ab55140], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 922/256 to 923/256\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:18.913+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1680207678:913723][1:0x7ffa8ab55140], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 922 through 923\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.074+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1680207679:74512][1:0x7ffa8ab55140], file:collection-9-4797836291474800180.wt, txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 923 through 923\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.115+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1680207679:115808][1:0x7ffa8ab55140], file:collection-9-4797836291474800180.wt, txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (1679068524, 1)\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.115+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1680207679:115959][1:0x7ffa8ab55140], file:collection-9-4797836291474800180.wt, txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (1679068524, 1)\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.142+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1680207679:142252][1:0x7ffa8ab55140], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 5, snapshot max: 5 snapshot count: 0, oldest timestamp: (1679068524, 1) , meta checkpoint timestamp: (1679068524, 1)\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.234+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":2623}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.234+00:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger 
recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":1679068524,\"i\":1}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.246+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4366408, \"ctx\":\"initandlisten\",\"msg\":\"No table logging settings modifications are required for existing WiredTiger tables\",\"attr\":{\"loggingEnabled\":false}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.263+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22383, \"ctx\":\"initandlisten\",\"msg\":\"The size storer reports that the oplog contains\",\"attr\":{\"numRecords\":4318444,\"dataSize\":611921725}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.263+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22386, \"ctx\":\"initandlisten\",\"msg\":\"Sampling the oplog to determine where to place markers for truncation\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.292+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22389, \"ctx\":\"initandlisten\",\"msg\":\"Sampling from the oplog to determine where to place markers for truncation\",\"attr\":{\"from\":{\"$timestamp\":{\"t\":1635945730,\"i\":1}},\"to\":{\"$timestamp\":{\"t\":1679306827,\"i\":1}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.292+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22390, \"ctx\":\"initandlisten\",\"msg\":\"Taking samples and assuming each oplog section contains\",\"attr\":{\"numSamples\":11,\"containsNumRecords\":3788797,\"containsNumBytes\":536870964}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.689+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22393, \"ctx\":\"initandlisten\",\"msg\":\"Oplog sampling complete\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.689+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22382, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger record store oplog processing finished\",\"attr\":{\"durationMillis\":425}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.695+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.799+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.819+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":20997, \"ctx\":\"initandlisten\",\"msg\":\"Refreshed RWC defaults\",\"attr\":{\"newDefaults\":{}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:19.819+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/bitnami/mongodb/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.002+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21529, \"ctx\":\"initandlisten\",\"msg\":\"Initializing rollback ID\",\"attr\":{\"rbid\":303}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.002+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":501401, \"ctx\":\"initandlisten\",\"msg\":\"Incrementing the rollback ID after unclean shutdown\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.013+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21532, \"ctx\":\"initandlisten\",\"msg\":\"Incremented the rollback ID\",\"attr\":{\"rbid\":304}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.025+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21544, \"ctx\":\"initandlisten\",\"msg\":\"Recovering from stable 
timestamp\",\"attr\":{\"stableTimestamp\":{\"$timestamp\":{\"t\":1679068524,\"i\":1}},\"topOfOplog\":{\"ts\":{\"$timestamp\":{\"t\":1679306827,\"i\":1}},\"t\":1782},\"appliedThrough\":{\"ts\":{\"$timestamp\":{\"t\":0,\"i\":0}},\"t\":-1},\"oplogTruncateAfterPoint\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.025+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21545, \"ctx\":\"initandlisten\",\"msg\":\"Starting recovery oplog application at the stable timestamp\",\"attr\":{\"stableTimestamp\":{\"$timestamp\":{\"t\":1679068524,\"i\":1}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.025+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21550, \"ctx\":\"initandlisten\",\"msg\":\"Replaying stored operations from startPoint (inclusive) to endPoint (inclusive)\",\"attr\":{\"startPoint\":{\"$timestamp\":{\"t\":1679068524,\"i\":1}},\"endPoint\":{\"$timestamp\":{\"t\":1679306827,\"i\":1}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.030+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21536, \"ctx\":\"initandlisten\",\"msg\":\"Completed oplog application for recovery\",\"attr\":{\"numOpsApplied\":5,\"numBatches\":1,\"applyThroughOpTime\":{\"ts\":{\"$timestamp\":{\"t\":1679306827,\"i\":1}},\"t\":1782}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.045+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20714, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Failed to refresh session cache, will try again at the next refresh interval\",\"attr\":{\"error\":\"NotYetInitialized: Replication has not yet been configured\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.046+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40440, \"ctx\":\"initandlisten\",\"msg\":\"Starting the TopologyVersionObserver\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.046+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20711, \"ctx\":\"LogicalSessionCacheReap\",\"msg\":\"Failed to reap transaction table\",\"attr\":{\"error\":\"NotYetInitialized: Replication has not yet been configured\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.047+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40445, \"ctx\":\"TopologyVersionObserver\",\"msg\":\"Started TopologyVersionObserver\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.047+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/opt/bitnami/mongodb/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.047+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"0.0.0.0\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.047+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.115+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21315, \"ctx\":\"ReplCoord-0\",\"msg\":\"\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.116+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21316, \"ctx\":\"ReplCoord-0\",\"msg\":\"** WARNING: This replica set has a Primary-Secondary-Arbiter architecture, but readConcern:majority is enabled \",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.116+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21317, \"ctx\":\"ReplCoord-0\",\"msg\":\"** for this node. This is not a recommended configuration. 
Please see \",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.116+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21318, \"ctx\":\"ReplCoord-0\",\"msg\":\"** https://dochub.mongodb.org/core/psa-disable-rc-majority\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.116+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21319, \"ctx\":\"ReplCoord-0\",\"msg\":\"\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.116+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21392, \"ctx\":\"ReplCoord-0\",\"msg\":\"New replica set config in use\",\"attr\":{\"config\":{\"_id\":\"rs0\",\"version\":3,\"term\":2141,\"protocolVersion\":1,\"writeConcernMajorityJournalDefault\":true,\"members\":[{\"_id\":0,\"host\":\"scd-open-banking-apps-mongodb-0.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\",\"arbiterOnly\":false,\"buildIndexes\":true,\"hidden\":false,\"priority\":5.0,\"tags\":{},\"slaveDelay\":0,\"votes\":1},{\"_id\":1,\"host\":\"scd-open-banking-apps-mongodb-arbiter-0.scd-open-banking-apps-mongodb-arbiter-headless.opbx.svc.cluster.local:27017\",\"arbiterOnly\":true,\"buildIndexes\":true,\"hidden\":false,\"priority\":0.0,\"tags\":{},\"slaveDelay\":0,\"votes\":1},{\"_id\":2,\"host\":\"scd-open-banking-apps-mongodb-1.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\",\"arbiterOnly\":false,\"buildIndexes\":true,\"hidden\":false,\"priority\":1.0,\"tags\":{},\"slaveDelay\":0,\"votes\":1}],\"settings\":{\"chainingAllowed\":true,\"heartbeatIntervalMillis\":2000,\"heartbeatTimeoutSecs\":10,\"electionTimeoutMillis\":10000,\"catchUpTimeoutMillis\":-1,\"catchUpTakeoverDelayMillis\":30000,\"getLastErrorModes\":{},\"getLastErrorDefaults\":{\"w\":1,\"wtimeout\":0},\"replicaSetId\":{\"$oid\":\"61828d01f5d2776cd79eab3e\"}}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.116+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21393, \"ctx\":\"ReplCoord-0\",\"msg\":\"Found self in config\",\"attr\":{\"hostAndPort\":\"scd-open-banking-apps-mongodb-0.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.116+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21358, \"ctx\":\"ReplCoord-0\",\"msg\":\"Replica set state transition\",\"attr\":{\"newState\":\"STARTUP2\",\"oldState\":\"STARTUP\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.117+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21306, \"ctx\":\"ReplCoord-0\",\"msg\":\"Starting replication storage threads\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.117+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ReplNetwork\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"scd-open-banking-apps-mongodb-arbiter-0.scd-open-banking-apps-mongodb-arbiter-headless.opbx.svc.cluster.local:27017\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.117+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ReplNetwork\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"scd-open-banking-apps-mongodb-1.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.120+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21358, \"ctx\":\"ReplCoord-0\",\"msg\":\"Replica set state transition\",\"attr\":{\"newState\":\"RECOVERING\",\"oldState\":\"STARTUP2\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.148+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21299, \"ctx\":\"ReplCoord-0\",\"msg\":\"Starting replication fetcher thread\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.148+00:00\"},\"s\":\"I\", 
\"c\":\"REPL\", \"id\":21300, \"ctx\":\"ReplCoord-0\",\"msg\":\"Starting replication applier thread\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.148+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21301, \"ctx\":\"ReplCoord-0\",\"msg\":\"Starting replication reporter thread\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.148+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21224, \"ctx\":\"OplogApplier-0\",\"msg\":\"Starting oplog application\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.148+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21783, \"ctx\":\"BackgroundSync\",\"msg\":\"Waiting for pings from other members before syncing\",\"attr\":{\"pingsNeeded\":4}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.148+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21358, \"ctx\":\"OplogApplier-0\",\"msg\":\"Replica set state transition\",\"attr\":{\"newState\":\"SECONDARY\",\"oldState\":\"RECOVERING\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.149+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21106, \"ctx\":\"OplogApplier-0\",\"msg\":\"Resetting sync source to empty\",\"attr\":{\"previousSyncSource\":\":27017\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.192+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.116.39.216:36484\",\"connectionId\":5,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.193+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn5\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.116.39.216:36484\",\"client\":\"conn5\",\"doc\":{\"driver\":{\"name\":\"NetworkInterfaceTL\",\"version\":\"4.4.6\"},\"os\":{\"type\":\"Linux\",\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 10 (buster)\\\"\",\"architecture\":\"x86_64\",\"version\":\"Kernel 5.4.219-126.411.amzn2.x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.218+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn5\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.116.39.216:36484\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.219+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21401, \"ctx\":\"conn5\",\"msg\":\"Scheduling heartbeat to fetch a newer config\",\"attr\":{\"configTerm\":2142,\"configVersion\":3,\"senderHost\":\"scd-open-banking-apps-mongodb-1.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.300+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21392, \"ctx\":\"ReplCoord-1\",\"msg\":\"New replica set config in 
use\",\"attr\":{\"config\":{\"_id\":\"rs0\",\"version\":3,\"term\":2142,\"protocolVersion\":1,\"writeConcernMajorityJournalDefault\":true,\"members\":[{\"_id\":0,\"host\":\"scd-open-banking-apps-mongodb-0.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\",\"arbiterOnly\":false,\"buildIndexes\":true,\"hidden\":false,\"priority\":5.0,\"tags\":{},\"slaveDelay\":0,\"votes\":1},{\"_id\":1,\"host\":\"scd-open-banking-apps-mongodb-arbiter-0.scd-open-banking-apps-mongodb-arbiter-headless.opbx.svc.cluster.local:27017\",\"arbiterOnly\":true,\"buildIndexes\":true,\"hidden\":false,\"priority\":0.0,\"tags\":{},\"slaveDelay\":0,\"votes\":1},{\"_id\":2,\"host\":\"scd-open-banking-apps-mongodb-1.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\",\"arbiterOnly\":false,\"buildIndexes\":true,\"hidden\":false,\"priority\":1.0,\"tags\":{},\"slaveDelay\":0,\"votes\":1}],\"settings\":{\"chainingAllowed\":true,\"heartbeatIntervalMillis\":2000,\"heartbeatTimeoutSecs\":10,\"electionTimeoutMillis\":10000,\"catchUpTimeoutMillis\":-1,\"catchUpTakeoverDelayMillis\":30000,\"getLastErrorModes\":{},\"getLastErrorDefaults\":{\"w\":1,\"wtimeout\":0},\"replicaSetId\":{\"$oid\":\"61828d01f5d2776cd79eab3e\"}}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.300+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21393, \"ctx\":\"ReplCoord-1\",\"msg\":\"Found self in config\",\"attr\":{\"hostAndPort\":\"scd-open-banking-apps-mongodb-0.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.301+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21215, \"ctx\":\"ReplCoord-5\",\"msg\":\"Member is in new state\",\"attr\":{\"hostAndPort\":\"scd-open-banking-apps-mongodb-1.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\",\"newState\":\"PRIMARY\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.301+00:00\"},\"s\":\"I\", \"c\":\"ELECTION\", \"id\":4615601, \"ctx\":\"ReplCoord-5\",\"msg\":\"Scheduling priority takeover\",\"attr\":{\"when\":{\"$date\":\"2023-03-30T20:21:31.639Z\"}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.301+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21215, \"ctx\":\"ReplCoord-3\",\"msg\":\"Member is in new state\",\"attr\":{\"hostAndPort\":\"scd-open-banking-apps-mongodb-arbiter-0.scd-open-banking-apps-mongodb-arbiter-headless.opbx.svc.cluster.local:27017\",\"newState\":\"ARBITER\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.335+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.116.39.216:36498\",\"connectionId\":8,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.336+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn8\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.116.39.216:36498\",\"client\":\"conn8\",\"doc\":{\"driver\":{\"name\":\"NetworkInterfaceTL\",\"version\":\"4.4.6\"},\"os\":{\"type\":\"Linux\",\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 10 (buster)\\\"\",\"architecture\":\"x86_64\",\"version\":\"Kernel 5.4.219-126.411.amzn2.x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.365+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.116.42.115:55010\",\"connectionId\":9,\"connectionCount\":3}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.366+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn9\",\"msg\":\"client 
metadata\",\"attr\":{\"remote\":\"10.116.42.115:55010\",\"client\":\"conn9\",\"doc\":{\"driver\":{\"name\":\"NetworkInterfaceTL\",\"version\":\"4.4.6\"},\"os\":{\"type\":\"Linux\",\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 10 (buster)\\\"\",\"architecture\":\"x86_64\",\"version\":\"Kernel 5.4.219-126.411.amzn2.x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.378+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn8\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.116.39.216:36498\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.402+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn9\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.116.42.115:55010\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.492+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.116.42.115:55026\",\"connectionId\":10,\"connectionCount\":4}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.493+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn10\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.116.42.115:55026\",\"connectionId\":10,\"connectionCount\":3}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.570+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.116.42.115:55032\",\"connectionId\":11,\"connectionCount\":4}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.571+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn11\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.116.42.115:55032\",\"client\":\"conn11\",\"doc\":{\"application\":{\"name\":\"MongoDB Shell\"},\"driver\":{\"name\":\"MongoDB Internal Client\",\"version\":\"4.4.6\"},\"os\":{\"type\":\"Linux\",\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 10 (buster)\\\"\",\"architecture\":\"x86_64\",\"version\":\"Kernel 5.4.219-126.411.amzn2.x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.620+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn11\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"root\",\"authenticationDatabase\":\"admin\",\"remote\":\"10.116.42.115:55032\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:20.631+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn11\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.116.42.115:55032\",\"connectionId\":11,\"connectionCount\":3}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.148+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21799, \"ctx\":\"BackgroundSync\",\"msg\":\"Sync source candidate chosen\",\"attr\":{\"syncSource\":\"scd-open-banking-apps-mongodb-1.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.149+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ReplCoordExternNetwork\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"scd-open-banking-apps-mongodb-1.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.189+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21088, \"ctx\":\"BackgroundSync\",\"msg\":\"Changed sync 
source\",\"attr\":{\"oldSyncSource\":\"empty\",\"newSyncSource\":\"scd-open-banking-apps-mongodb-1.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.276+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21098, \"ctx\":\"BackgroundSync\",\"msg\":\"Starting rollback due to fetcher error\",\"attr\":{\"error\":\"OplogStartMissing: Our last optime fetched: { ts: Timestamp(1679306827, 1), t: 1782 }. source's GTE: { ts: Timestamp(1679306845, 2), t: 1784 }\",\"lastCommittedOpTime\":{\"ts\":{\"$timestamp\":{\"t\":0,\"i\":0}},\"t\":-1}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.276+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21102, \"ctx\":\"BackgroundSync\",\"msg\":\"Rollback using 'recoverToStableTimestamp' method\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.277+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21104, \"ctx\":\"BackgroundSync\",\"msg\":\"Scheduling rollback\",\"attr\":{\"syncSource\":\"scd-open-banking-apps-mongodb-1.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.277+00:00\"},\"s\":\"I\", \"c\":\"ROLLBACK\", \"id\":21593, \"ctx\":\"BackgroundSync\",\"msg\":\"Transition to ROLLBACK\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.277+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21340, \"ctx\":\"BackgroundSync\",\"msg\":\"State transition ops metrics\",\"attr\":{\"metrics\":{\"lastStateTransition\":\"rollback\",\"userOpsKilled\":0,\"userOpsRunning\":4}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.277+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21484, \"ctx\":\"BackgroundSync\",\"msg\":\"Canceling priority takeover callback\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.277+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21358, \"ctx\":\"BackgroundSync\",\"msg\":\"Replica set state transition\",\"attr\":{\"newState\":\"ROLLBACK\",\"oldState\":\"SECONDARY\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.277+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22991, \"ctx\":\"BackgroundSync\",\"msg\":\"Skip closing connection for connection\",\"attr\":{\"connectionId\":9}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.277+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22991, \"ctx\":\"BackgroundSync\",\"msg\":\"Skip closing connection for connection\",\"attr\":{\"connectionId\":8}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.277+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22991, \"ctx\":\"BackgroundSync\",\"msg\":\"Skip closing connection for connection\",\"attr\":{\"connectionId\":5}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.278+00:00\"},\"s\":\"I\", \"c\":\"ROLLBACK\", \"id\":21606, \"ctx\":\"BackgroundSync\",\"msg\":\"Finding common point\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.301+00:00\"},\"s\":\"I\", \"c\":\"ELECTION\", \"id\":4615601, \"ctx\":\"ReplCoord-5\",\"msg\":\"Scheduling priority takeover\",\"attr\":{\"when\":{\"$date\":\"2023-03-30T20:21:31.368Z\"}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.402+00:00\"},\"s\":\"I\", \"c\":\"ROLLBACK\", \"id\":21607, \"ctx\":\"BackgroundSync\",\"msg\":\"Rollback common point\",\"attr\":{\"commonPointOpTime\":{\"ts\":{\"$timestamp\":{\"t\":1679068524,\"i\":1}},\"t\":1781}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.402+00:00\"},\"s\":\"I\", \"c\":\"ROLLBACK\", \"id\":21612, \"ctx\":\"BackgroundSync\",\"msg\":\"Rollback 
summary\",\"attr\":{\"startTime\":{\"$date\":\"2023-03-30T20:21:21.277Z\"},\"endTime\":{\"$date\":\"2023-03-30T20:21:21.402Z\"},\"syncSource\":\"scd-open-banking-apps-mongodb-1.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\",\"lastOptimeRolledBack\":{\"ts\":{\"$timestamp\":{\"t\":1679306827,\"i\":1}},\"t\":1782},\"commonPoint\":{\"ts\":{\"$timestamp\":{\"t\":1679068524,\"i\":1}},\"t\":1781},\"lastWallClockTimeRolledBack\":{\"$date\":\"2023-03-20T10:07:07.373Z\"},\"firstOpWallClockTimeAfterCommonPoint\":{\"$date\":\"2023-03-17T15:55:24.609Z\"},\"wallClockTimeDiff\":238302,\"shardIdentityRolledBack\":false,\"configServerConfigVersionRolledBack\":false,\"affectedSessions\":[],\"affectedNamespaces\":[],\"rollbackCommandCounts\":{},\"totalEntriesRolledBackIncludingNoops\":5}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.402+00:00\"},\"s\":\"I\", \"c\":\"ROLLBACK\", \"id\":21611, \"ctx\":\"BackgroundSync\",\"msg\":\"Transition to SECONDARY\"}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.402+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21358, \"ctx\":\"BackgroundSync\",\"msg\":\"Replica set state transition\",\"attr\":{\"newState\":\"SECONDARY\",\"oldState\":\"ROLLBACK\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.402+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21106, \"ctx\":\"BackgroundSync\",\"msg\":\"Resetting sync source to empty\",\"attr\":{\"previousSyncSource\":\"scd-open-banking-apps-mongodb-1.scd-open-banking-apps-mongodb-headless.opbx.svc.cluster.local:27017\"}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.402+00:00\"},\"s\":\"F\", \"c\":\"REPL\", \"id\":21128, \"ctx\":\"BackgroundSync\",\"msg\":\"Rollback failed with unrecoverable error\",\"attr\":{\"error\":{\"code\":127,\"codeName\":\"UnrecoverableRollbackError\",\"errmsg\":\"not willing to roll back more than 86400 seconds of data. Have: 238302 seconds.\"}}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.402+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23095, \"ctx\":\"BackgroundSync\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":50666,\"error\":\"UnrecoverableRollbackError: not willing to roll back more than 86400 seconds of data. Have: 238302 seconds.\",\"file\":\"src/mongo/db/repl/bgsync.cpp\",\"line\":807}}\n{\"t\":{\"$date\":\"2023-03-30T20:21:21.402+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23096, \"ctx\":\"BackgroundSync\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n",
"text": "Hi,We are getting the fatal error below in our secondary replica.We then changed the admin parameter rollbackTimeLimitSecs from the primary node, as the secondary won’t stay up due to its unhealthiness.\nResult of command db.adminCommand({ getParameter: 1, rollbackTimeLimitSecs:1}) after changing the parameter:But the error kept occurring with the exact same message, as if we haven’t had changed it.MongoDB version: 4.4.6\nWe are running on Kubernetes, using the primary-secondary-arbiter architecture.\nComplete log:",
"username": "Missael_DeNadai"
},
{
"code": "Have: 238302 secondsrollbackTimeLimitSecstest> db.adminCommand( { setParameter: 1, rollbackTimeLimitSecs:240000 } )\n{ was: 1, ok: 1 }\ntest> db.adminCommand( { getParameter: 1, rollbackTimeLimitSecs:1 } )\n{ rollbackTimeLimitSecs: 240000, ok: 1 }\n<-- SERVER RESTARTED -->\ntest> db.adminCommand( { getParameter: 1, rollbackTimeLimitSecs:1 } )\n{ rollbackTimeLimitSecs: 86400, ok: 1 }\nsetParameterdb.adminCommand( { setParameter: 1, rollbackTimeLimitSecs:86400 } )--setParametermongod --setParameter rollbackTimeLimitSecs=86400 ...setParametermongod setParameter:\n rollbackTimeLimitSecs: 86400\n",
"text": "Hi @Missael_DeNadai,Welcome to the MongoDB Community forums mongodb 20:21:16.48 INFO ==> ** Starting MongoDB **Have: 238302 secondsIt seems that once the MongoDB server is restarted, the rollbackTimeLimitSecs parameter may revert to its default value of 24 hours. I confirmed it in my local environment where I observed the parameter being set to its default value upon restarting the server.You can set this value using setParameter which can be done:At runtime:\ndb.adminCommand( { setParameter: 1, rollbackTimeLimitSecs:86400 } )Via the --setParameter command-line option:\nmongod --setParameter rollbackTimeLimitSecs=86400 ...Via the setParameter setting in a mongod configuration file:However, I would also note that 238302 seconds is almost 66 hours of data that will be rolled back. Before doing so, I would make sure you understand the downtime incident you had over a couple of days and why this secondary appears to have diverged by 66 hours of writing.I hope it helps!Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "db.adminCommand( { getParameter: 1, rollbackTimeLimitSecs:1 } ){ \"rollbackTimeLimitSecs\" : 1153219, \"ok\" : 1 }{\"t\":{\"$date\":\"2023-03-31T12:20:18.550+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23095, \"ctx\":\"BackgroundSync\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":50666,\"error\":\"UnrecoverableRollbackError: not willing to roll back more than 86400 seconds of data. Have: 1152219 seconds.\",\"file\":\"src/mongo/db/repl/bgsync.cpp\",\"line\":807}}\n",
"text": "Hi @Kushagra_Kesav , thank you for your reply.Sorry if my question was misleading, but the thing is: I did change the parameter from the primary replica, but the secondary replica kept logging the same error, as if I haven´t changed.Output of db.adminCommand( { getParameter: 1, rollbackTimeLimitSecs:1 } ) after changing to 1153219:\n{ \"rollbackTimeLimitSecs\" : 1153219, \"ok\" : 1 }Secondary replica log after I changed the parameter:Is there anything else I must do so the secondary replica gets the updated parameter? Is there any cache of admin parameters kept in the secondary replica?",
"username": "Missael_DeNadai"
}
] | rollbackTimeLimitSecs parameter not being applied | 2023-03-30T20:52:05.541Z | rollbackTimeLimitSecs parameter not being applied | 1,214 |
[] | [
{
"code": "",
"text": "Why am I receiving this error even when I configure the federated database from CLI?\nimage777×482 34.1 KB\n",
"username": "Wuerike"
},
{
"code": "",
"text": "This looks like a bug as you don’t seem to have any regular expressions in the storage configuration in the first place.Would you be able to send me an email at [email protected] so I can get this resolved?(Just a note, if you’re configuring the storage configuration via the MongoDB shell then this error should not prevent you from doing anything in the mean time.)",
"username": "Benjamin_Flast"
}
] | Data federation regex data source path | 2023-03-30T21:12:14.328Z | Data federation regex data source path | 332 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "I have users and their email ids,creation date in the document……i need to get company name from email id and group them by company…and then need to get count of users created per month…and this count should be cumulative every month and also should be showing count of each company’s count and name in the chart… I am able to get company name and get the cumulative count by using aggregate by count in charts but I need each company’s split in the total count and shown in the charts along with their names…its happening if i choose aggregate count by value but count is not correct…what can be done in this scenario…",
"username": "Kranthi_Rayala"
},
{
"code": "_id",
"text": "Hi @Kranthi_Rayala, it would be helpful to see some screenshots that explain your problem. But as an alternative to using Count By Value, you could try using the regular Count aggregation for the _id field in the value axis and then put the thing you want to count by in the Series channel.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Hi Tom,\nI tried the solution you have provided but I am not able to get cumulative total … I need to get cumulative total and names of company should be displayed.",
"username": "Kranthi_Rayala"
},
{
"code": "",
"text": "\nScreenshot 2023-03-28 at 10.22.42 pm1660×740 86.1 KB\n",
"username": "Kranthi_Rayala"
},
{
"code": "",
"text": "\nScreenshot 2023-03-28 at 10.22.19 pm1654×718 77.6 KB\n",
"username": "Kranthi_Rayala"
},
{
"code": "",
"text": "Thanks for the extra info! Unfortunately it looks like Count By Value and Compare Periods don’t work well together. We generally restrict Compare Periods to single-series charts, and missed the fact that it can be enabled with Count By Value (which is another way of building multi-series charts).The team is looking into what to do about this bug, but in the short term I suspect the best fix is not to rely on either feature and instead preprocess the data with a custom aggregation pipeline that uses window functions to calculate the cumulative total, as per this example.HTH\nTom",
"username": "tomhollander"
},
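Tom's suggestion above refers to an aggregation pipeline built on window functions. A minimal sketch of such a pre-processing pipeline is shown below; it assumes MongoDB 5.0+, a users collection, and hypothetical company and createdAt fields (none of these names come from the thread):

```js
// Sketch: cumulative user count per company per month using $setWindowFields (MongoDB 5.0+).
// "users", "company" and "createdAt" are assumed names, not taken from the thread.
db.users.aggregate([
  // Count new users per company per month.
  { $group: {
      _id: {
        company: "$company",
        month: { $dateToString: { format: "%Y-%m", date: "$createdAt" } }
      },
      monthly: { $sum: 1 }
  } },
  // Turn the monthly counts into a running total within each company.
  { $setWindowFields: {
      partitionBy: "$_id.company",
      sortBy: { "_id.month": 1 },
      output: {
        cumulative: {
          $sum: "$monthly",
          window: { documents: ["unbounded", "current"] }
        }
      }
  } },
  // One flat document per company per month, ready to chart.
  { $project: { _id: 0, company: "$_id.company", month: "$_id.month", cumulative: 1 } }
])
```

The output could then be charted with the month on the x-axis, the cumulative count on the y-axis, and the company in the Series channel, without relying on Count By Value or Compare Periods.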
{
"code": "",
"text": "Hi Tom,Thanks for explanation. Our Mongo version is 4.4 and setwindows function is not available in this version. could you please suggest me any other workaround for this.Thanks",
"username": "Kranthi_Rayala"
},
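For the MongoDB 4.4 question above, where $setWindowFields is not available, one possible workaround is to build the running totals with ordinary array expressions. This is only a sketch under the same assumed collection and field names as before (users, company, createdAt):

```js
// Sketch of a 4.4-compatible cumulative count: no window functions, just array expressions.
// Collection and field names are assumptions, not taken from the thread.
db.users.aggregate([
  { $group: {
      _id: {
        company: "$company",
        month: { $dateToString: { format: "%Y-%m", date: "$createdAt" } }
      },
      monthly: { $sum: 1 }
  } },
  { $sort: { "_id.company": 1, "_id.month": 1 } },
  // Collect each company's months ($push keeps the incoming, already-sorted order).
  { $group: {
      _id: "$_id.company",
      months: { $push: { month: "$_id.month", monthly: "$monthly" } }
  } },
  // For month i, the cumulative value is the sum of the first i+1 monthly counts.
  { $set: {
      months: {
        $map: {
          input: { $range: [0, { $size: "$months" }] },
          as: "i",
          in: {
            month: { $arrayElemAt: ["$months.month", "$$i"] },
            cumulative: { $sum: { $slice: ["$months.monthly", { $add: ["$$i", 1] }] } }
          }
        }
      }
  } },
  // Back to one document per company per month.
  { $unwind: "$months" },
  { $project: { _id: 0, company: "$_id", month: "$months.month", cumulative: "$months.cumulative" } }
])
```

The $slice/$sum step is quadratic in the number of months per company, which is fine for a few years of data; for long histories a $reduce carrying a running total would scale better.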
{
"code": "",
"text": "Aggregate count by value in Mongo Charts is a useful feature for visualizing data in MongoDB. To use this feature, you’ll first need to create a chart in Mongo Charts that is based on a data source containing the values you want to aggregate. Then, select the “Aggregate” option from the “Data” dropdown in the chart configuration panel. In the “Aggregate” panel, you can specify the field you want to aggregate by selecting it from the dropdown menu. You can then choose the type of aggregation you want to perform to remini, such as a count of the number of occurrences of each unique value. Finally, you can customize the appearance of your chart to display the aggregated data in a way that makes sense for your needs. With this powerful feature, you can quickly gain insights into the distribution of values in your MongoDB data and use that information to make informed decisions about your business or project reels",
"username": "Aisha_Rizwan"
}
] | Aggregate count by value in mongo charts | 2023-03-28T09:38:30.076Z | Aggregate count by value in mongo charts | 1,230 |
null | [
"mongodb-shell"
] | [
{
"code": "db.adminCommandMongoServerError: not authorized on admin to execute command...db.grantRolesToUser",
"text": "Hi, I am trying to resize the oplog and increase the min retention hrs of my Atlas cluster (M30) via db.adminCommand in the mongosh shell, but am receiving MongoServerError: not authorized on admin to execute command... . The user I’m connecting with has the ‘atlasAdmin’ and ‘dbAdminAnyDatabase’ roles.I have also attempted to grant the ‘clusterAdmin’ role to the user via db.grantRolesToUser but receive the same authorization error.Any help in achieving the aim of reconfiguring the oplog settings would be much appreciated, thanks.",
"username": "Nye_Jones"
},
{
"code": "",
"text": "You may be running the command on admin db\noplog is a collection under local db\nYou have to switch to local db\nAlso there are unsupported commands on Atlas clusters\nCheck mongo documentation",
"username": "Ramachandra_Tummala"
},
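If the immediate goal is only to inspect the current oplog configuration rather than resize it, that can be done read-only from mongosh; a small sketch (the exact numbers will of course differ per cluster):

```js
// Read-only checks of the oplog configuration; can be run against an Atlas replica set member.
db.getSiblingDB("local").oplog.rs.stats().maxSize   // configured oplog size in bytes
rs.printReplicationInfo()                            // configured size plus the current oplog window
```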
{
"code": "",
"text": "Thanks for the reply, in end I was able to set the minimum oplog window by editing the cluster config in the Atlas UI as per dos here https://www.mongodb.com/docs/atlas/cluster-additional-settings/#set-minimum-oplog-window",
"username": "Nye_Jones"
},
{
"code": "replSetResizeOplogM10+",
"text": "Thanks for posting your solution Nye.For reference, if you attempted to use the replSetResizeOplog then this is listed as one of the Unsupported Commands in M10+ Clusters.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "replSetResizeOplog",
"text": "Thanks Jason, yes I was using replSetResizeOplog so that explains it.",
"username": "Nye_Jones"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoServerError: not authorized on admin to execute command when resizing oplog | 2023-03-24T15:17:45.991Z | MongoServerError: not authorized on admin to execute command when resizing oplog | 1,228 |
null | [
"mdbw22-communitycafe"
] | [
{
"code": "",
"text": "A “mini mentoring” session on breaking into tech. Come to share tips on how you got into tech, or if you’re just getting started, come learn what’s worked for others.",
"username": "TimSantos"
},
{
"code": "",
"text": "@Diego_Freniche and @henna.s ready to talk about Breaking into Tech!\nimage1920×2168 271 KB\n",
"username": "TimSantos"
},
{
"code": "",
"text": "@Michael_Lynn sharing his story on breaking into tech: his first program was the Pong game!\nimage1920×1440 186 KB\n",
"username": "TimSantos"
},
{
"code": "",
"text": "@shrey_batra wanted to become a game developer, which peaked his interest in coding!\nimage1920×2560 293 KB\n",
"username": "TimSantos"
},
{
"code": "",
"text": "More photos and @webchick sharing her story!\nImage from iOS (11)1920×1440 125 KB\n–\n\nImage from iOS (12)1920×2560 295 KB\n–\nImage from iOS (13)1920×1440 179 KB\n–\n\nImage from iOS (14)1920×1440 160 KB\n–\n\nImage from iOS (15)1920×2560 467 KB\n",
"username": "Harshit"
},
{
"code": "",
"text": "This was one of my very FAVOURITE sessions in all of the Community Café. Thanks so much for everyone who participated, and for the AMAZING job facilitating as well! Well done, everyone! ",
"username": "webchick"
},
{
"code": "",
"text": "Do you think that the development of gaming technologies is one of the most important? For me, as a game developer, this question is extremely interesting",
"username": "Sofya_Kassina"
}
] | Coffee Roulette: Breaking Into Tech | 2022-06-06T14:07:31.596Z | Coffee Roulette: Breaking Into Tech | 3,422 |
null | [
"compass",
"atlas",
"vscode"
] | [
{
"code": "",
"text": "Hi Everyone,Today, I tried accessing my collection from VS Code (Ubuntu) and when I connected to the project, there would be the standard databases: config, local, and admin. However, the database and its collections were not shown. I tried going to Atlas and Compass, and both were showing the database and the collection I want to make modifications to.Have you guys experienced this recently? If so, what could be the issue?",
"username": "Michael_Xie1"
},
{
"code": "v.0.11.1",
"text": "Hi @Michael_Xie1 ,May I suggest you update your extension to version v.0.11.1 to see if this will fix the issue for you.",
"username": "vgmda"
}
] | VS Code not able to retrieve database and its collections | 2023-03-30T16:20:24.576Z | VS Code not able to retrieve database and its collections | 1,213 |
null | [
"atlas-search"
] | [
{
"code": " $search: {\n index: \"global\",\n compound: {\n should: [\n {\n autocomplete: {\n path: \"name\",\n query: searchTerm,\n score: {\n boost: {\n value: 3,\n },\n },\n },\n },\n {\n text: {\n path: \"name\",\n query: searchTerm,\n fuzzy: {\n maxEdits: 1,\n },\n },\n },\n ],\n minimumShouldMatch: 1,\n filter: [\n {\n equals: {\n value: 0,\n path: \"type\",\n },\n },\n ],\n },\n },\nfilterequals$in",
"text": "I have created a collection which contains documents from other collections to enable a sort of global search. It works fine but now I’d like to be able to filter out a set of types that should be included in the search results.I have a compound query that I’m expanding with a filter operator which works fine when I just what to filter one specific type like thisWhat I really want is to have the the filter accept an array of accepted values but equals don’t work like that. I need something like $in but I’m not sure if that exists in search.I could do the filtering afterwords in the pipeline but that seems inefficient. Any ideas?",
"username": "Daniel_Reuterwall"
},
{
"code": "shouldfilter",
"text": "Hey @Daniel_Reuterwall,Welcome to the MongoDB Community Forums! You’re correct. Equals does not accept multiple values or arrays. I understand this may not be optimal when there are a large amount of values to be filtered. This behavior may change in the future but for now, the only workaround you can try is to use compound with multiple should clauses inside the filter to achieve what you want to do.Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": " filter: [\n {\n compound: {\n should: [\n {\n equals: {\n value: 0,\n path: \"type\",\n },\n },\n {\n equals: {\n value: 4,\n path: \"type\",\n },\n },\n ],\n },\n },\n ],\n",
"text": "Works great, thanks for the guidance!Ended up with a filter like this:",
"username": "Daniel_Reuterwall"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas search filter on multiple values | 2023-03-24T13:28:14.323Z | Atlas search filter on multiple values | 1,502 |
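Editor's note: a minimal mongosh sketch combining the pieces of the thread above into one $search stage. The index name "global", the "name" and "type" paths, and the type values 0 and 4 come from the thread; the collection name and searchTerm are placeholders.

```javascript
// Minimal sketch (mongosh). The collection name and searchTerm are assumptions.
const searchTerm = "acme";

db.globalSearch.aggregate([
  {
    $search: {
      index: "global",
      compound: {
        should: [
          {
            autocomplete: {
              path: "name",
              query: searchTerm,
              score: { boost: { value: 3 } }
            }
          },
          {
            text: {
              path: "name",
              query: searchTerm,
              fuzzy: { maxEdits: 1 }
            }
          }
        ],
        minimumShouldMatch: 1,
        // A nested compound with one "should" clause per accepted value
        // emulates an $in-style filter, since "equals" takes a single value.
        filter: [
          {
            compound: {
              should: [
                { equals: { value: 0, path: "type" } },
                { equals: { value: 4, path: "type" } }
              ]
            }
          }
        ]
      }
    }
  },
  { $limit: 20 }
]);
```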
null | [] | [
{
"code": "db.adminCommand(\n {\n setFeatureCompatibilityVersion: <version>\n }\n)\nUPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: BadValue: Invalid value for version, found 3.6, expected '4.2' or '4.0'.\ndb.adminCommand( \n { \n getParameter: 1, \n featureCompatibilityVersion: 1 \n } \n)\n",
"text": "Hi, I am working on a task to automate upgrade an existing mongo 3.6 container based database to mongo 4.4. According to the documentation I need to first upgrade to 4.0, then to 4.2 and only then to 4.4, and In each step I need to run the command:Where “version” indicates the current mongo version.\nIn each case the container have a binded volume with the mongo data on the host, so we use the same data each time\nThese are the steps I performed:This is even even though the setFeatureCompatibilityVersion command returned:\n{‘ok’: 1.0}\nThe only way I can get it to work, is if I put a “sleep” command for 60+ seconds after the setFeatureCompatibilityVersion command (the amount of sleep seems to related to the amount of data in the DB)\nI have tried running the command:To verify the setFeatureCompatibilityVersion command worked, but it always return the value given by the setFeatureCompatibilityVersion, and the container 4.2 fails nonetheless.\nI would appreciate any help ",
"username": "Dana_Pascal"
},
{
"code": "",
"text": "Best advice? Don’t even do this.Keep things simple.Build a 4.4 container, take the indexes and aggregations from the 3.6 container, and upload them into the 4.4 container. Then just export the data via exporting the data as BSON, and import the BSON documents.Save yourself huge amounts of time, and troubleshooting.Then when you have the database connected to network and services, and you are satisfied, then you can just delete the old 3.6 container.",
"username": "Brock"
}
] | Upgrade from 3.6 to 4.4, setFeatureCompatibilityVersion give wrong answer | 2023-03-27T15:30:57.871Z | Upgrade from 3.6 to 4.4, setFeatureCompatibilityVersion give wrong answer | 606 |
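Editor's note: a minimal mongosh sketch of the polling approach implied above, as an alternative to a fixed sleep. It assumes the commands are run against the admin database right after the FCV change; the 3.6 to 4.0 step is shown and the same pattern would be repeated for "4.2".

```javascript
// Minimal sketch (mongosh / legacy shell): set the FCV, then poll until the
// server actually reports it before stopping the container and starting the
// next binary version.
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" });

let fcv = "";
while (fcv !== "4.0") {
  const res = db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });
  // Depending on the server version the value is either a plain string
  // or a { version: "..." } sub-document.
  fcv = (typeof res.featureCompatibilityVersion === "string")
    ? res.featureCompatibilityVersion
    : res.featureCompatibilityVersion.version;
  if (fcv !== "4.0") sleep(1000); // shell helper; waits one second
}
print("featureCompatibilityVersion is now " + fcv);

// A clean shutdown before swapping container images also helps ensure the
// change is flushed to the data files: db.adminCommand({ shutdown: 1 })
```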
null | [] | [
{
"code": "",
"text": "We currently have the package version **mongodb-linux-x86_64-ubuntu1804-4.4.7 ,**the community version for current Ubuntu Linux 18.04 LTS .Since , we need to upgrade to Ubuntu Linux 20.04 LTS.\nWe need the confirmation if existing mongodb-linux-x86_64-ubuntu1804-4.4.7 will work in Ubuntu Linux 20.04 LTS, or we would need mongodb-linux-x86_64-ubuntu2004-4.4.7.",
"username": "Debalina_Saha1"
},
{
"code": "",
"text": "What version of MongoDB are you using?I use a virtual machine on my M1 MacBook using Ubuntu 22.10 and have MongoDB 6.0, and 5.15 installed and working perfectly fine.MongoDB is also in the Ubuntu App Store, so I don’t think you should have any problems.To be honest with you, to make sure you have the best results, I would just build a new Linux VM with the latest Ubuntu, and the latest MongoDB, and the latest Ops Manager, and then migrate the indexes and other things to it, build it out, and then just move your data over and run your tests/deploy to production when satisfied.That way you can keep your current setup in production and not have fears of environmental disruptions if something was wrong while you make your determinations of when to finish things.having said this, A much more effective approach would be using Docker Containers with MongoDB, which makes these things a lot more agnostic. Then you can just upgrade Ubuntu as much as you want, and either leave the container as is, or build the latest and greatest MongoDB Docker container, rinse repeat the above.",
"username": "Brock"
}
] | Will mongodb-linux-x86_64-ubuntu1804-4.4.7 work in Ubuntu Linux 20.04 LTS, or we would need mongodb-linux-x86_64-ubuntu2004-4.4.7 | 2023-03-30T07:38:27.614Z | Will mongodb-linux-x86_64-ubuntu1804-4.4.7 work in Ubuntu Linux 20.04 LTS, or we would need mongodb-linux-x86_64-ubuntu2004-4.4.7 | 602 |
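Editor's note: a minimal mongosh sketch of the "take the indexes and recreate them" step suggested in the reply above. The script is only one reasonable way to do it; it copies the key pattern plus the unique/TTL options, so any other index options would need to be carried over by hand.

```javascript
// Minimal sketch (mongosh): print createIndex() statements for the current
// database so the indexes can be recreated on a freshly built server.
// Run once per database; the automatic _id_ index is skipped.
db.getCollectionNames().forEach(function (coll) {
  db.getCollection(coll).getIndexes().forEach(function (idx) {
    if (idx.name === "_id_") return;
    const options = {};
    if (idx.unique) options.unique = true;
    if (idx.expireAfterSeconds !== undefined) options.expireAfterSeconds = idx.expireAfterSeconds;
    print('db.getCollection("' + coll + '").createIndex(' +
          JSON.stringify(idx.key) + ', ' + JSON.stringify(options) + ');');
  });
});
```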
null | [] | [
{
"code": "",
"text": "Hi,Is it possible to connect to Azure MongoDB via an Azure VM using managed identity?Example for Azure PostgreSQL for your reference.Learn about how to connect and authenticate using Managed Identity for authentication with Azure Database for PostgreSQLThanks,\nKahn",
"username": "Khanh_Quach"
},
{
"code": "",
"text": "Hello @Khanh_Quach ,Welcome to The MongoDB Community Forums! Unfortunately, this is not supported at the moment and we are aware of some requests for this feature. However unfortunately I cannot provide any timeline or comment regarding this request.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Actually, through Azures services, or any identity management services, yes.You just set it up the same way you would any other services to do it.I prefer building an LDAP server and pushing it through MongoDBs built in authentication mechanisms. and have the LDAP coincide with AAD.MongoDB I just MongoDB, no need to care whether or not they have a built in integration when it’s still just MongoDB at its core and it’s hard to find something that doesn’t have ways around connecting to it.There’s also CosmoDB which really, is just an overlay on MongoDB Choose between RU-based and vCore-based models - Azure Cosmos DB for MongoDB | Microsoft Learn and other NoSQL databases it can build a container of and connect that way, too.Choose whether the RU-based or vCore-based option for Azure Cosmos DB for MongoDB is ideal for your workload.You can also just build an Atlas DB and do this:Learn how to configure single sign-on between Azure Active Directory and MongoDB Atlas - SSO.Then there’s this:\nJust instead of AD have it go to AAD, principles work the same with Atlas.",
"username": "Brock"
},
{
"code": "",
"text": "Hello @Khanh_Quach I just want to clarify something for yourself and other readers.@Tarun_Gaur actually is correct, it’s not supported what you’re asking for, but he’s referring to organically built into MongoDB itself presently with Azure stand alone.I just want to make clear, all the methods above are you constructing other means to integrate MongoDB into an SSO/AAD system to make everything communicate and talk to each other, essentially. Basically, traditional DevOps work.In DevOps, you care a lot less what’s organic to any one specific piece of your environment, vs all the pieces that you have in play and in use. I just know of these workarounds because I used to work for Microsoft on the Azure platform, and built these forms of relations between systems for my job.But he is correct on what he had stated, just to clear up any confusions.",
"username": "Brock"
}
] | Does Azure MongoDB support managed identity authentication? | 2023-03-27T22:31:48.900Z | Does Azure MongoDB support managed identity authentication? | 937 |
null | [] | [
{
"code": "",
"text": "During database backup, it was confirmed that information about the entire database was downloaded.\nHowever, I would like to download backup data for a specific collection, is it possible?",
"username": "YE_LEE_KANG"
},
{
"code": "",
"text": "What type of backup you took?mongodump or mongoexport?\nWhat exactly you mean by download?\nYou want to take backup of a collection or restore a specific collection from dump?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "However, as shown in the picture below, when I click download from backup snapshots on my db in atlas, I can see that all collections on my db are downloaded.However, can I download only the selected collections?\nScreenshot 2023-03-31 at 10.16.23 AM1063×122 17.3 KB\n",
"username": "YE_LEE_KANG"
},
{
"code": "",
"text": "Check this link",
"username": "Ramachandra_Tummala"
}
] | How to get a specific collection backup data download | 2023-03-30T09:25:46.489Z | How to get a specific collection backup data download | 1,056 |
null | [] | [
{
"code": "",
"text": "Hi:I ran into a problem with our mongodb when balancer is performing move chuck among the shards.Our application sometimes hang for few minutes when it creates an object into a gridfs db. I saw in the db log the recently created object id went into chuck migration as soon as it was inserted. The app went into pause mode (no cpu usage) until chuck migration is completed, the application resumes itself.I stopped the balancer manually for now. No more application hang during object creation.So somehow the sharding action on move chunk on the db is causing the app went into pause mode.I want to see if you guys out there has any suggestion on where I should check for the application hang?My mongo version: 3.6. I know Thanks in advance.\nEric",
"username": "Eric_Wong"
},
{
"code": "",
"text": "Everyone would say the same thing: upgrade your server first.",
"username": "Kobe_W"
}
] | Stop the world access when moveChunk taking place | 2023-03-30T10:49:10.071Z | Stop the world access when moveChunk taking place | 862
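Editor's note: a minimal mongosh sketch (run against a mongos) of the two options relevant to the thread above: stopping the balancer outright, or keeping it enabled but confining chunk migrations to a quiet window. The 02:00-05:00 window is only an example value.

```javascript
// Minimal sketch (mongosh, connected to a mongos).
sh.getBalancerState();   // is the balancer currently enabled?
sh.stopBalancer();       // disable it entirely (what the poster did manually)

// Alternative: leave it enabled but only allow migrations during a quiet window.
db.getSiblingDB("config").settings.updateOne(
  { _id: "balancer" },
  { $set: { activeWindow: { start: "02:00", stop: "05:00" } } },
  { upsert: true }
);
sh.startBalancer();      // re-enable so the window takes effect
```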
null | [
"replication",
"transactions"
] | [
{
"code": "",
"text": "Hi GuysWe have mongo 3.6.4 running in PSA architecture in centos. All three nodes are running separately on physical severs. Today the secondary nodes was suddenly crashed and was not available in the Replicaset. When I checked rs.status(), the Secondary node was with status no route to Host. When this happened, even though primary node was up and running, my entire application became slow in processing the transactions.My application basically reads message from Rabbitmq and my application internally has multiple modules which communicate via ActiveMQ. When the Secondary node gone down, my app throughput reduced from processing 500 Messages per second to 50, 60 or 100 Message per second. Even some time totally idel. This resulted in Queue pileup in both RabbitMQ and in ActiveMQ.After Restarting all the application nodes, Rabbitmq, Activemq, Nothing was helping in returning to my application actual throughput. After all the try, I just randomly thought and removed the Secondary node from Replication and suddenly my app started processing the messages to 450, 480 500 messages per second.Question is : How is Non-availability of the Secondary node impacted my application performance even though Primary node was up and running and was fully healthy. This today’s behaviour was totally agaist the basic understanding on the mongo replication.Is there anything that I should be looking at or I forgot to look at so that this kind of issue doesn’t happen in the future ???",
"username": "Dilip_D"
},
{
"code": "No Route to HostWiredTigerLAS.wtdbpathWiredTigerLAS.wt",
"text": "Hi @Dilip_D and welcome to the MongoDB community forum!!We have mongo 3.6.4 running in PSA architecture in centos.The arbiters are useful to allow a replica set to have a primary when the secondary goes down. Although this deployment is supported, there are some caveats with regard to operations and maintenance.\nThe recommended way here would be to have a PSS architecture, with no arbiters unless compulsory for the deployment.Also, the version you are using, is quite old(Almost 6 years ago), I would recommend you to upgrade to the latest version with bug fixes and new features added.To add here, the error No Route to Host could be one of the reason for networking issues in the deployment.\nCould you help me understand how the replica set was configured or how the secondaries were added?When the Secondary node gone down, my app throughput reduced from processing 500 Messages per second to 50, 60 or 100 Message per second.As mentioned in the MongoDB release notes documentation:Starting in MongoDB 3.6, MongoDB enables support for “majority” read concern by default.What we suspect in your case, when the secondary node goes down, the majority commit point (information about the latest version of the data in all data-bearing nodes) cannot move forward due to the unavailable secondary. Consequently, the primary needs to keep old versions of data as long as the secondary stays offline. This will lead to a cache full scenario, where WiredTiger will spill it’s cache content to disk in the form of WiredTigerLAS.wt file in the dbpath.Can you confirm is you can see larger size of the WiredTigerLAS.wt file? Also, can you try disabling the majority read concern and see the similar issue.\nThe server ticket here mentions the similar behaviour in the past releases.Let us know if you have any other concerns.Best regards\nAasawari",
"username": "Aasawari"
}
] | Issue in Replication | 2023-03-23T14:29:08.762Z | Issue in Replication | 876 |
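Editor's note: a minimal mongosh sketch of the checks suggested in the reply above, run on the primary while the secondary is down. The exact serverStatus field names can differ slightly between server versions, and disabling majority read concern itself is a startup option (replication.enableMajorityReadConcern: false in mongod.conf), not something set from the shell.

```javascript
// Minimal sketch (mongosh, on the primary).
// 1. Member health as seen by the replica set.
rs.status().members.forEach(function (m) {
  print(m.name, m.stateStr, "health:", m.health);
});

// 2. WiredTiger cache pressure: if the majority commit point cannot advance,
//    cached/dirty bytes tend to climb and WiredTigerLAS.wt grows in the dbpath.
var cache = db.serverStatus().wiredTiger.cache;
print("bytes currently in the cache:", cache["bytes currently in the cache"]);
print("tracked dirty bytes in the cache:", cache["tracked dirty bytes in the cache"]);
print("maximum bytes configured:", cache["maximum bytes configured"]);
```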
null | [
"aggregation",
"queries"
] | [
{
"code": "txn_count_log_detailstxnDateshardkey \"planSummary\":\"IXSCAN \n{ productBadgeId: 1, storeBadgeId: 1, pranthId: 1, txnDate: 1 }\"\n \"\"protocol\":\"op_msg\",\"durationMillis\":35122, \n keysExamined\":6725351,\"docsExamined\":6725295,\",\n$sort",
"text": "Hi Team,Greetings, I am having a collection called txn_count_log_details, Where the collection size is around 70GB and the count of the records in the collection is 102549448, And I am using Sharding in my environment, using the txnDate field as a shardkey. I observed the following parameters in the logsand Here I am using the query to get the results which I attached as follows. but the query is taking around 1.5 minutes to get the results and take the right index. Kindly give me suggestions to get the results fast either by creating the correct index or by modifying the query or adding some parameter like the $sort parameter …Kindly help me in this matter.txn_count_log_details_log.txt (11.8 KB)",
"username": "MERUGUPALA_RAMES"
},
{
"code": "txnDatenumYields",
"text": "Hi @MERUGUPALA_RAMES,Welcome to the MongoDB Community forums Firstly, it’s important to note that the issue is difficult to reproduce reliably since it is highly dependent on the specific data and hardware.However, by looking into your log, I can deduce the following:Based on the client’s driver info, which suggests that a Spring Boot app using MongoDB Java driver is being used, I suggest looking into the app’s code and configuration to evaluate optimization opportunities that may be the cause of the slow query.The query examined 6.7 million documents, which represents approximately 6% of the total collection size of 102 million documents. While this is still a significant number of documents, it may not necessarily indicate a major performance issue on its own. However it’s possible that the query cannot be made more selective, so it’s important to focus on optimizing other aspects of the query.Note that the shard key txnDate is not the prefix of the index used for the query.Furthermore, to gain a better understanding of the issue, please provide the following information:Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "Mongos : RAM - 30 GB, CPU cores- 8, HardDisk-100 GB\nConfig: RAM - 15 GB , CPU cores - 4 HardDisk - 100GB\n3_Data_Nodes_Each: 30 GB , CPU cores - 8 and HardDisk - 500GB\n{\n \"_id\" : ObjectId(\"6401cff277a245291028fd82\"),\n \"storeId\" : NumberLong(1360255),\n \"storeName\" : \"New Kumbharwada UHC\",\n \"storeBadgeId\" : NumberLong(279),\n \"storeBadgeName\" : \"MCCCP\",\n \"productId\" : NumberLong(3345885),\n \"productName\" : \"bOPV (dose)\",\n \"productBadgeId\" : NumberLong(2),\n \"productBadgeName\" : \"RI Vaccines\",\n \"txnStringDate\" : \"2023-03-03\",\n \"txnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"txnTypeId\" : 4,\n \"txnTypeName\" : \"Stock-Discards\",\n \"stateId\" : NumberLong(351),\n \"stateName\" : \"Gujarat\",\n \"districtId\" : NumberLong(2030),\n \"districtName\" : \"Bhavnagar\",\n \"isDeleted\" : false,\n \"stock\" : NumberLong(40),\n \"pranthId\" : NumberLong(1344239),\n \"month\" : \"Mar\",\n \"year\" : \"2023\",\n \"initialTxnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"initialTxnStringDate\" : \"2023-03-03\",\n \"_class\" : \"com.dipl.evinae.reports.mongo.entity.TxnCountLogDetails\"\n}\n{\n \"_id\" : ObjectId(\"6401cff277a245291028fd81\"),\n \"storeId\" : NumberLong(1360255),\n \"storeName\" : \"New Kumbharwada UHC\",\n \"storeBadgeId\" : NumberLong(279),\n \"storeBadgeName\" : \"MCCCP\",\n \"productId\" : NumberLong(10),\n \"productName\" : \"OPEN bOPV (vial)\",\n \"productBadgeId\" : NumberLong(3),\n \"productBadgeName\" : \"OPEN Vials\",\n \"txnStringDate\" : \"2023-03-03\",\n \"txnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"txnTypeId\" : 4,\n \"txnTypeName\" : \"Stock-Discards\",\n \"stateId\" : NumberLong(351),\n \"stateName\" : \"Gujarat\",\n \"districtId\" : NumberLong(2030),\n \"districtName\" : \"Bhavnagar\",\n \"isDeleted\" : false,\n \"stock\" : NumberLong(3),\n \"pranthId\" : NumberLong(1344239),\n \"month\" : \"Mar\",\n \"year\" : \"2023\",\n \"initialTxnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"initialTxnStringDate\" : \"2023-03-03\",\n \"_class\" : \"com.dipl.evinae.reports.mongo.entity.TxnCountLogDetails\"\n}\n{\n \"_id\" : ObjectId(\"6401cff277a245291028fd80\"),\n \"storeId\" : NumberLong(1360255),\n \"storeName\" : \"New Kumbharwada UHC\",\n \"storeBadgeId\" : NumberLong(279),\n \"storeBadgeName\" : \"MCCCP\",\n \"productId\" : NumberLong(3345907),\n \"productName\" : \"Pentavalent (dose)\",\n \"productBadgeId\" : NumberLong(2),\n \"productBadgeName\" : \"RI Vaccines\",\n \"txnStringDate\" : \"2023-03-03\",\n \"txnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"txnTypeId\" : 4,\n \"txnTypeName\" : \"Stock-Discards\",\n \"stateId\" : NumberLong(351),\n \"stateName\" : \"Gujarat\",\n \"districtId\" : NumberLong(2030),\n \"districtName\" : \"Bhavnagar\",\n \"isDeleted\" : false,\n \"stock\" : NumberLong(20),\n \"pranthId\" : NumberLong(1344239),\n \"month\" : \"Mar\",\n \"year\" : \"2023\",\n \"initialTxnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"initialTxnStringDate\" : \"2023-03-03\",\n \"_class\" : \"com.dipl.evinae.reports.mongo.entity.TxnCountLogDetails\"\n}\n{\n \"_id\" : ObjectId(\"6401cf7a934eee407add02d8\"),\n \"storeId\" : NumberLong(1356656),\n \"storeName\" : \"Akhlol UHC\",\n \"storeBadgeId\" : NumberLong(279),\n \"storeBadgeName\" : \"MCCCP\",\n \"productId\" : NumberLong(10),\n \"productName\" : \"OPEN bOPV (vial)\",\n \"productBadgeId\" : NumberLong(3),\n \"productBadgeName\" : \"OPEN Vials\",\n \"txnStringDate\" : \"2023-03-03\",\n \"txnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"txnTypeId\" : 
4,\n \"txnTypeName\" : \"Stock-Discards\",\n \"stateId\" : NumberLong(351),\n \"stateName\" : \"Gujarat\",\n \"districtId\" : NumberLong(2030),\n \"districtName\" : \"Bhavnagar\",\n \"isDeleted\" : false,\n \"stock\" : NumberLong(3),\n \"pranthId\" : NumberLong(1344239),\n \"month\" : \"Mar\",\n \"year\" : \"2023\",\n \"initialTxnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"initialTxnStringDate\" : \"2023-03-03\",\n \"_class\" : \"com.dipl.evinae.reports.mongo.entity.TxnCountLogDetails\"\n}\n{\n \"_id\" : ObjectId(\"6401cf7a934eee407add02d7\"),\n \"storeId\" : NumberLong(1356658),\n \"storeName\" : \"Bharatnagar UHC\",\n \"storeBadgeId\" : NumberLong(279),\n \"storeBadgeName\" : \"MCCCP\",\n \"productId\" : NumberLong(3345885),\n \"productName\" : \"bOPV (dose)\",\n \"productBadgeId\" : NumberLong(2),\n \"productBadgeName\" : \"RI Vaccines\",\n \"txnStringDate\" : \"2023-03-03\",\n \"txnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"txnTypeId\" : 4,\n \"txnTypeName\" : \"Stock-Discards\",\n \"stateId\" : NumberLong(351),\n \"stateName\" : \"Gujarat\",\n \"districtId\" : NumberLong(2030),\n \"districtName\" : \"Bhavnagar\",\n \"isDeleted\" : false,\n \"stock\" : NumberLong(40),\n \"pranthId\" : NumberLong(1344239),\n \"month\" : \"Mar\",\n \"year\" : \"2023\",\n \"initialTxnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"initialTxnStringDate\" : \"2023-03-03\",\n \"_class\" : \"com.dipl.evinae.reports.mongo.entity.TxnCountLogDetails\"\n}\n{\n \"_id\" : ObjectId(\"6401cf7a934eee407add02d6\"),\n \"storeId\" : NumberLong(1356658),\n \"storeName\" : \"Bharatnagar UHC\",\n \"storeBadgeId\" : NumberLong(279),\n \"storeBadgeName\" : \"MCCCP\",\n \"productId\" : NumberLong(3345907),\n \"productName\" : \"Pentavalent (dose)\",\n \"productBadgeId\" : NumberLong(2),\n \"productBadgeName\" : \"RI Vaccines\",\n \"txnStringDate\" : \"2023-03-03\",\n \"txnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"txnTypeId\" : 4,\n \"txnTypeName\" : \"Stock-Discards\",\n \"stateId\" : NumberLong(351),\n \"stateName\" : \"Gujarat\",\n \"districtId\" : NumberLong(2030),\n \"districtName\" : \"Bhavnagar\",\n \"isDeleted\" : false,\n \"stock\" : NumberLong(20),\n \"pranthId\" : NumberLong(1344239),\n \"month\" : \"Mar\",\n \"year\" : \"2023\",\n \"initialTxnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"initialTxnStringDate\" : \"2023-03-03\",\n \"_class\" : \"com.dipl.evinae.reports.mongo.entity.TxnCountLogDetails\"\n}\n{\n \"_id\" : ObjectId(\"6401cf7a934eee407add02d5\"),\n \"storeId\" : NumberLong(1356656),\n \"storeName\" : \"Akhlol UHC\",\n \"storeBadgeId\" : NumberLong(279),\n \"storeBadgeName\" : \"MCCCP\",\n \"productId\" : NumberLong(3345907),\n \"productName\" : \"Pentavalent (dose)\",\n \"productBadgeId\" : NumberLong(2),\n \"productBadgeName\" : \"RI Vaccines\",\n \"txnStringDate\" : \"2023-03-03\",\n \"txnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"txnTypeId\" : 4,\n \"txnTypeName\" : \"Stock-Discards\",\n \"stateId\" : NumberLong(351),\n \"stateName\" : \"Gujarat\",\n \"districtId\" : NumberLong(2030),\n \"districtName\" : \"Bhavnagar\",\n \"isDeleted\" : false,\n \"stock\" : NumberLong(20),\n \"pranthId\" : NumberLong(1344239),\n \"month\" : \"Mar\",\n \"year\" : \"2023\",\n \"initialTxnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"initialTxnStringDate\" : \"2023-03-03\",\n \"_class\" : \"com.dipl.evinae.reports.mongo.entity.TxnCountLogDetails\"\n}\n{\n \"_id\" : ObjectId(\"6401cf7a934eee407add02d4\"),\n \"storeId\" : NumberLong(1356658),\n \"storeName\" : \"Bharatnagar UHC\",\n 
\"storeBadgeId\" : NumberLong(279),\n \"storeBadgeName\" : \"MCCCP\",\n \"productId\" : NumberLong(10),\n \"productName\" : \"OPEN bOPV (vial)\",\n \"productBadgeId\" : NumberLong(3),\n \"productBadgeName\" : \"OPEN Vials\",\n \"txnStringDate\" : \"2023-03-03\",\n \"txnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"txnTypeId\" : 4,\n \"txnTypeName\" : \"Stock-Discards\",\n \"stateId\" : NumberLong(351),\n \"stateName\" : \"Gujarat\",\n \"districtId\" : NumberLong(2030),\n \"districtName\" : \"Bhavnagar\",\n \"isDeleted\" : false,\n \"stock\" : NumberLong(3),\n \"pranthId\" : NumberLong(1344239),\n \"month\" : \"Mar\",\n \"year\" : \"2023\",\n \"initialTxnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"initialTxnStringDate\" : \"2023-03-03\",\n \"_class\" : \"com.dipl.evinae.reports.mongo.entity.TxnCountLogDetails\"\n}\n{\n \"_id\" : ObjectId(\"6401cf7a934eee407add02d3\"),\n \"storeId\" : NumberLong(1356656),\n \"storeName\" : \"Akhlol UHC\",\n \"storeBadgeId\" : NumberLong(279),\n \"storeBadgeName\" : \"MCCCP\",\n \"productId\" : NumberLong(3345885),\n \"productName\" : \"bOPV (dose)\",\n \"productBadgeId\" : NumberLong(2),\n \"productBadgeName\" : \"RI Vaccines\",\n \"txnStringDate\" : \"2023-03-03\",\n \"txnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"txnTypeId\" : 4,\n \"txnTypeName\" : \"Stock-Discards\",\n \"stateId\" : NumberLong(351),\n \"stateName\" : \"Gujarat\",\n \"districtId\" : NumberLong(2030),\n \"districtName\" : \"Bhavnagar\",\n \"isDeleted\" : false,\n \"stock\" : NumberLong(40),\n \"pranthId\" : NumberLong(1344239),\n \"month\" : \"Mar\",\n \"year\" : \"2023\",\n \"initialTxnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"initialTxnStringDate\" : \"2023-03-03\",\n \"_class\" : \"com.dipl.evinae.reports.mongo.entity.TxnCountLogDetails\"\n}\n{\n \"_id\" : ObjectId(\"6401cefd77a2452910289d52\"),\n \"storeId\" : NumberLong(1356656),\n \"storeName\" : \"Akhlol UHC\",\n \"storeBadgeId\" : NumberLong(279),\n \"storeBadgeName\" : \"MCCCP\",\n \"productId\" : NumberLong(3345879),\n \"productName\" : \"OPEN Pentavalent (vial)\",\n \"productBadgeId\" : NumberLong(3),\n \"productBadgeName\" : \"OPEN Vials\",\n \"txnStringDate\" : \"2023-03-03\",\n \"txnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"txnTypeId\" : 2,\n \"txnTypeName\" : \"Stock-In\",\n \"stateId\" : NumberLong(351),\n \"stateName\" : \"Gujarat\",\n \"districtId\" : NumberLong(2030),\n \"districtName\" : \"Bhavnagar\",\n \"isDeleted\" : false,\n \"stock\" : NumberLong(10),\n \"pranthId\" : NumberLong(1344239),\n \"month\" : \"Mar\",\n \"year\" : \"2023\",\n \"initialTxnDate\" : ISODate(\"2023-03-03T00:00:00Z\"),\n \"initialTxnStringDate\" : \"2023-03-03\",\n \"_class\" : \"com.dipl.evinae.reports.mongo.entity.TxnCountLogDetails\"\n}\nISODate(\"2021-08-02T00:00:00Z\") to txnDate\" : ISODate(\"2023-03-03T00:00:00Z\")\n4.4.16",
"text": "Hi Kesav,\nThanks for the reply,As per the discussion, the following is the requested data. Please review it once.For WT cache changes I didn’t change anything yet, and for hardware resources, yes we are using separate hardware resources for each node as follows2)Furthermore, to gain a better understanding of the issue, please provide the following information:The sample documents of your collections, and the expected output that you are seeking.Shard Key provided as a data key, hence shard key existing values from “txnDate” :currently, it’s 4.4.16Thanks & Regards,\nRamesh.",
"username": "MERUGUPALA_RAMES"
},
{
"code": "txnDate{ productBadgeId: 1, storeBadgeId: 1, pranthId: 1, txnDate: 1 }\ntxnDate",
"text": "Hi @MERUGUPALA_RAMES,Thanks for sharing the information Note that the shard key txnDate is not the prefix of the index used for the query.I think this indexis not very efficient for this particular query because the shard key txnDate is not the prefix of the index.It is important to choose a shard key that is frequently used in queries and ensure that it is the leading element of the index to leverage the full benefits of sharding. Please refer to the Shard Key Index for more details.If you need further help, can you please share the explain db.collection.explain(‘executionStats’) output, and the output of db.collection.getIndexes()?Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | How to build a perfect index to get the results fast | 2023-02-13T14:13:42.479Z | How to build a perfect index to get the results fast | 759 |
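Editor's note: a minimal mongosh sketch of the diagnostics requested in the last reply, plus one candidate index. The filter values below are a hypothetical stand-in for the query in the attached log, and the candidate key order (shard key first) only illustrates the reply's suggestion; whether it actually helps depends on whether txnDate is an equality or a range predicate in the real query, which the explain output would show.

```javascript
// Minimal sketch (mongosh). The filter is a hypothetical stand-in for the
// query in the attached log file.
db.txn_count_log_details.getIndexes();

db.txn_count_log_details.find({
  productBadgeId: NumberLong(2),
  storeBadgeId: NumberLong(279),
  pranthId: NumberLong(1344239),
  txnDate: { $gte: ISODate("2023-03-01"), $lte: ISODate("2023-03-31") }
}).explain("executionStats");

// Candidate index illustrating the reply's suggestion of leading with the
// shard key; validate against the real query shape before building it.
db.txn_count_log_details.createIndex(
  { txnDate: 1, productBadgeId: 1, storeBadgeId: 1, pranthId: 1 }
);
```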
null | [
"replication",
"java",
"ops-manager",
"morphia-odm"
] | [
{
"code": "",
"text": "Good morning!I’ve recently deployed a replica set with TLS authentication in a network restricted environment. I configured the Ops Manager utility and encountered some problems on the pre-flight checks Backup daemon startup, since I start the service it stays looping in the backup daemon, looking into the daemon logs I’ve discovered the following errors:[Starting Logging - App Version: 6.0.11.100.20230310T0146Z]\n[main] INFO com.xgen.svc.mms.dao.mongo.MongoSvcUriImpl [MongoSvcUriImpl.java:initMorphiaMapper:189] - Initialized Morphia in 3183ms\n[main] INFO com.xgen.svc.mms.dao.mongo.MongoSvcUriImpl [MongoSvcUriImpl.java::96] - Created MongoSvc with 1 client(s)\n[main] INFO com.xgen.svc.brs.grid.BackupDaemonPreFlightCheck [BackupDaemonPreFlightCheck.java:check:107] - Waiting for a system upgrade to complete before attempting to start the Backup Daemon. Retrying in 60 secondWe’ve currently deployed on MongoDB 4.4.16 Enterprise version and Ops Manager 6.0 version. I’ve seen that that version is “deprecated”. But before the upgrade we would like to setup the utility properly.",
"username": "LJroman"
},
{
"code": "",
"text": "Hello @LJroman ,Welcome to The MongoDB Community Forums! Waiting for a system upgrade to complete before attempting to start the Backup Daemon. Retrying in 60 secondBased on the logs you provided, it seems like the Backup Daemon is waiting for a system upgrade to complete before it can start. This could be due to a variety of reasons, such as missing dependencies, configuration errors, or incompatible versions of MongoDB or Ops Manager.As you are working with MongoDB Ops Manager and MongoDB Enterprise Edition. I would recommend you open a support case at MongoDB Support Portal as they have the required expertise and could help you with Root Cause Analysis and provide best solutions as per your use-case.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Backup daemon not starting | 2023-03-27T11:48:43.449Z | Backup daemon not starting | 1,006 |