image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
[
"server"
] | [
{
"code": "",
"text": "I am having a problem updating mongodb. It was first version 4.2.8 (If I recall correctly). Mongodb @5.0.6 is my current version and it has been updated but I cannot start the services with Homebrew.Similar Problem, but this user’s problem did not include an error 3854, still I followed the solution and it still isn’t resolved. Below is an image of my terminal:\nScreen Shot 2022-03-24 at 11.31.37 AM1364×100 55.4 KB\n",
"username": "Salvador_Joshua_Enrick"
},
{
"code": "",
"text": "Does mongod.log show more details?",
"username": "Ramachandra_Tummala"
},
{
"code": "/usr/local/var/log/mongodb/mongo.logFailed to unlink socket file\nFatal assertion\naborting after fassert() failure\n",
"text": "I am reading /usr/local/var/log/mongodb/mongo.log and I find logs along the lines ofBelow is an image of some of the logs today.\n\nScreen Shot 2022-03-25 at 8.29.18 AM1880×872 187 KB\n",
"username": "Salvador_Joshua_Enrick"
},
{
"code": "",
"text": "Check permissions on that file\nls -ls /tmp/mongodb-27017.sockDo you have any other mongod running?\nps -ef|grep mongod\nstop all mongod\nTry to change permissions (it should be owned by mongod)",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "How do I do this? I removed /tmp/mongodb-27017.sock, but how do I end all mongod processes.\n\nScreen Shot 2022-03-25 at 11.10.22 AM1002×174 74.7 KB\n",
"username": "Salvador_Joshua_Enrick"
},
{
"code": "",
"text": "So after removing the file how did you start mongod?\nps -ef |grep mongod will show only those started manually from command lineJust issue mongo and see if you can connect to default mongod running from your service",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I tried to run mongod, and this is what it returns.\nScreen Shot 2022-03-25 at 2.58.32 PM1920×1200 454 KB\n",
"username": "Salvador_Joshua_Enrick"
},
{
"code": "",
"text": "When you run mongod without any parameters it will try to start on default port 27017 and default dirpath / data/db\nSince dir is missing it failed to start\nFollow the error message-create the missing dir or give alternate path where mongod can write\nWhat was the result if you try to start from service after deleting .sock file?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thank you, I will try to create the dir path.After delete the .sock file (no errors), I restarted the services according to Install MongoDB Community Edition on macOS, and the error is still the same.",
"username": "Salvador_Joshua_Enrick"
},
{
"code": "--dbpathmongod --dbpath ~/data/db\n",
"text": "I ran into a similar error and ended up resolving by starting mongod with a --dbpath option",
"username": "Arun_Ramaiah"
},
{
"code": "",
"text": "Please start a new thread. Post the error messages you get.",
"username": "steevej"
},
{
"code": "mongod --dbpath ~/data/db",
"text": "mongod --dbpath ~/data/dbI can’t make the service to run, I restarted services, uninstalled and installed back… Can make it work. im a Mac user",
"username": "Marcelo_Rocha"
}
] | Running MongoDB as a MacOS Service: error 3854 | 2022-03-24T03:33:35.355Z | Running MongoDB as a MacOS Service: error 3854 | 5,209 |
|
null | [
"cxx"
] | [
{
"code": "",
"text": "I have Qt application that uses mongocxx 3.4.0 driver. I am trying to update the driver to 3.7.1.On Windows 10, successfully built 64-bit mongocxx 3.7.1 from source code, using MSYS with gcc, g++ 11.20.\nSubstituted 3.4.0 dll’s with freshly built ones, compiled the application with new libraries.\nCompilation is successful, however, when I run the app, the following error occurs at the startup:“The procedure entry point _ZN7bsoncxx7v_noabi5types10bson_value4viewD1Ev could not be located in the dynamic link library”.Demagled symbol in the error is bsoncxx::v_noabi::types::bson_value::view::~view().\nI experience the same issue with mongocxx 3.6.0, but not with 3.4.2. I see there was folders structure reorganisation in include/bsoncxx/v_noiabi/bsoncxx/types between these versions.I get the same error when I try to execute bsoncxx/view_and_value.exe example code inside mongocxx build. That probably proves that the problem is not with Qt application.\nPlease advise what am I missing to make this work. Thank you for your help.In case you need building info,\ncmake flags used to build 3.7.1 driver:-G “MSYS Makefiles” -DCMAKE_BUILD_TYPE=Release -DBSONCXX_POLY_USE_BOOST=1 -DBUILD_VERSION=3.7.1 -DBOOSTROOT=C:/dev/boost/boost_1_82_0 -DCMAKE_PREFIX_PATH=C:/dev/mongodb/mongo-c-driver/install -DCMAKE_INSTALL_PREFIX=C:/dev/mongodb/mongo-cxx-driver/installconfiguration output:– The C compiler identification is GNU 11.2.0\n– Detecting C compiler ABI info\n– Detecting C compiler ABI info - done\n– Check for working C compiler: C:/Qt/Tools/mingw1120_64/bin/gcc.exe - skipped\n– Detecting C compile features\n– Detecting C compile features - done\nbsoncxx version: 3.7.1\nfound libbson version 1.23.3\n– Found Boost: C:/dev/boost/boost_1_82_0 (found suitable version “1.82.0”, minimum required is “1.56.0”)\nmongocxx version: 3.7.1\nfound libmongoc version 1.23.3\n– Performing Test CMAKE_HAVE_LIBC_PTHREAD\n– Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success\n– Found Threads: TRUE\n– Build files generated for:\n– build system: MSYS Makefiles\n– Configuring done (2.2s)\n– Generating done (1.0s)\n– Build files have been written to: C:/dev/mongodb/mongo-cxx-driver/build",
"username": "Yana_K"
},
{
"code": "",
"text": "Hi @Yana_KAre you using same headers for compiling the library and using the dll?",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "Yes, I copied everything from installation folder to my app resources. And, application aside, I see the same error with bsonxx view_and_value example, where my app is not involved.",
"username": "Yana_K"
}
] | "Procedure entry point cannot be located" error with mongocxx 3.7.1 | 2023-05-09T21:21:42.569Z | “Procedure entry point cannot be located” error with mongocxx 3.7.1 | 1,069 |
null | [
"replication",
"performance"
] | [
{
"code": "",
"text": "Hi,We are migrating from one of our data store (LDAP) to MongoDB. We have built migration utility which simply reads data from existing data store and writes to MongoDBIt is observed that migration utility gets completed in 1 hour (with current data store) with Self hosted MongoDB servers (2x servers are setup in AWS) however same migration utility takes 2 hours and 10 minutes while we are using Atlas MongoDB cloud (1x primary, 2x secondary nodes)Profiler doesn’t show any missing indexes and utility is deployed in same AWS account/ region with vpc peering with atlas mongodb cloudWe have tried with M10, M20 and M30 in Atlas cloud to eliminate IOPS issues but still it is showing almost 100% disk utilization in all above cloud configurationsWe are using replica set configurationCan you please suggest any specific performance settings required for Atlas MongoDB cloud vs self hosted?Thanks,",
"username": "Chintan_Chokshi"
},
{
"code": "",
"text": "Hi @Chintan_Chokshi and welcome to MongoDB community forums!!It would be helpful for us to understand the concerns in detail if you could help me with some information for the issues foreseen:We are migrating from one of our data store (LDAP) to MongoDB.The documentation or migration script you are following to move the data from source to the destination.It is observed that migration utility gets completed in 1 hour (with current data store) with Self hosted MongoDB servers (2x servers are setup in AWS) however same migration utility takes 2 hours and 10 minutes while we are using Atlas MongoDB cloud (1x primary, 2x secondary nodes)The deployment configuration for the self hosted MongoDB on AWS like RAM, CPU etc.\nAlso, are the destination deployment in both the above cases using the same configuration.?We have tried with M10, M20 and M30 in Atlas cloud to eliminate IOPS issues but still it is showing almost 100% disk utilization in all above cloud configurationsThe 100% disk utilisation alerts are generated when the requests reaches the threshold. You can learn more about the Disk IO utilisation for further understanding.any specific performance settings required for Atlas MongoDBSince Atlas is self managed, from my understanding, you do not need to do settings explicitly. Please refer to Performance Management Tools for MongoDB AtlasAre there are any indexes created prior to moving the data?Lastly, what are writeconcern set for the self hosted and Atlas deployments.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "To add to Aasawari’s response, you may want to try the following:",
"username": "Alexander_Komyagin"
}
] | Performance of read/write operations in self-hosted MongoDB instances vs Atlas MongoDB is better | 2023-05-02T04:39:56.940Z | Performance of read/write operations in self-hosted MongoDB instances vs Atlas MongoDB is better | 1,123 |
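One of the follow-up questions in the thread above concerns write concern and batching, which often dominate bulk-load throughput. A minimal pymongo sketch of the kind of bulk insert being benchmarked, with the write concern made explicit (the URI, database, collection and field names are placeholders, not from the thread):

```python
# A minimal sketch (not from the thread): bulk-loading documents with an explicit
# write concern. The URI, database, collection and field names are placeholders.
from pymongo import MongoClient, WriteConcern

client = MongoClient("mongodb+srv://user:[email protected]")  # placeholder URI
coll = client["migration_db"].get_collection(
    "users",
    write_concern=WriteConcern(w=1),  # compare against w="majority" when benchmarking
)

batch = [{"uid": i, "source": "ldap"} for i in range(10_000)]  # stand-in for LDAP records
result = coll.insert_many(batch, ordered=False)  # unordered batches can be applied in parallel
print(len(result.inserted_ids))
```

Comparing w=1 against w="majority" on the same batch is a quick way to see how much of the gap is acknowledgment latency rather than disk throughput.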
null | [
"golang"
] | [
{
"code": "\t\t\tgames, e = h.service.GameRepo().Find(bson.M{\n\t\t\t\t\"slug\": bson.M{\"$regex\": primitive.Regex{\n\t\t\t\t\tPattern: \"^[0-9]\",\n\t\t\t\t\tOptions: \"i\",\n\t\t\t\t}},\n\t\t\t}, &options.FindOptions{\n\t\t\t\tSort: bson.D{{\"name\", 1}},\n\t\t\t})\n\t\t\tgames, e = h.service.GameRepo().Find(bson.D{{\n\t\t\t\tKey: \"slug\", Value: primitive.Regex{\n\t\t\t\t\tPattern: \"^[0-9]\",\n\t\t\t\t\tOptions: \"i\",\n\t\t\t\t},\n\t\t\t}}, &options.FindOptions{\n\t\t\t\tSort: bson.D{{\"name\", 1}},\n\t\t\t})\n\t\t\tgames, e = h.service.GameRepo().Find(bson.D{{\n\t\t\t\tKey: \"slug\", Value: bson.E{\n\t\t\t\t\tKey: \"$regex\",\n\t\t\t\t\tValue: primitive.Regex{\n\t\t\t\t\t\tPattern: \"^[0-9]\",\n\t\t\t\t\t\tOptions: \"i\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t}}, &options.FindOptions{\n\t\t\t\tSort: bson.D{{\"name\", 1}},\n\t\t\t})\n",
"text": "I haveThe result is it returns ALL documents.I also hadsame resultbut alsoWhat do I have to do so the filter actually works?The mgo driver was straightforward, this driver is just awful!\nWhy are there bson.M bson.D bson.A bson.E\nWhy this overcomplication? Development time has increased 10fold with this driver compared to the mgo driver.\nI don’t like SQL but lately I’ve been considering to completely get rid of mongodb in favor of Postgresql, because the drivers available are actually straightforward and don’t let the user jump through hoops of ridiculous data types.\nAlso you never answered my Rust question.",
"username": "dalu"
},
{
"code": "",
"text": "Hi @dalu,Can you let us know what the query looked like with mgo? We can try to help you translate it based on that information.– Divjot",
"username": "Divjot_Arora"
},
{
"code": "\t\t\tgames, e = h.service.GameRepo().Find(bson.M{\n\t\t\t\t\"slug\": bson.M{\"$regex\": \"^[0-9]\", \"$options\": \"i\"}\n\t\t\t}, &options.FindOptions{\n\t\t\t\tSort: bson.D{{\"name\", 1}},\n\t\t\t})\n",
"text": "Topic is a bit old, but I stumbled upon this query while looking for regex query example for my own usecase.\nI think the following would work (without the primitive.Regex):",
"username": "Sharath_K"
}
] | Regex query with the Go driver | 2020-11-03T00:50:44.120Z | Regex query with the Go driver | 10,464 |
null | [] | [
{
"code": "{\n \"cities\":{\n \"toronto\":{\n \"code\": \"TO\",\n \"pop\": 10000\n },\n \"newyork\":{\n \"code\": \"NY\",\n \"pop\": 234000\n },\n ......\n }\n}\n",
"text": "Hi, how do I index the “code” and “pop” fields in the documents like the following in Atlas? Note that the field under the “cities” is dynamic and can be any city in the world. Thanks.",
"username": "Andrew_Wang3"
},
{
"code": "{\n\"cities\" : [\n { cityName : \"toronto\", code : \"TO\" , \"pop\" : 10000 },\n { cityName : \"newyork\", code : \"NY\" , \"pop\" : 234000} ...\n]}\n",
"text": "Hi @Andrew_Wang3 ,The Atlas search UI and API allow you to use a “dynamic” mapping when the index is created.So you can map dynamically anything below “cities”…Learn how to include specific fields in your search index or how to configure Atlas Search to automatically include all supported field types.Now the problem is that the names you mentioned are field names and not values. This is not being indexed for text searching, perhaps you should use a model like this:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks, @Pavel_Duchovny.We cannot change the model as our customers have been using the model for quite a long time. The problem with using dynamic mapping is that we have a lot of fields under each city and we only want to index a few of them. Also, there are fields under each city that are nested documents and we want to index some of the fields in those nested documents as well.Do you see a way to resolve it?Thanks.",
"username": "Andrew_Wang3"
},
{
"code": "",
"text": "@Andrew_Wang3 ,Can you share some examples of what kind of searches you try to achieve?Ty\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": " term_display_name: customer_id\n name (of data class): Customer Number\n column_type: string\n term_display_name: Contact Identifier\n{\n \"entity\":{\n \"column_info\": {\n \"CUSTOMER_ID\": {\n \"column_terms\": [\n {\n \"term_id\": \"436fee03-ba36-4627-b5af-96af19cddce0\",\n \"term_display_name\": \"customer_id\",\n \"confidence\": 1,\n \"specification\": \"NAME_MATCHING\"\n }\n ],\n \"data_class\": {\n \"selected_data_class\": {\n \"name\": \"Customer Number\",\n \"id\": \"436fee03-ba36-4627-b5af-96af19cddce0\",\n \"setByUser\": false\n }\n },\n \"type\": \"string\",\n \"rejected_terms\": [],\n ...\n },\n \"CONTACT_ID\": {\n \"column_terms\": [\n {\n \"term_id\": \"436fee03-ba36-4627-b5af-96af19cddce0\",\n \"term_display_name\": \"Contact Identifier\",\n \"confidence\": 1,\n \"specification\": \"ML based term assignment\"\n }\n ]\n }\n }\n }\n}\n",
"text": "Thanks, @Pavel_Duchovny.I tried to simplify the document so that it is easier to communicate; however, it seems that it actually led to more confusion.The example at the bottom is a more realistic snippet of our documents. The fields like “CUSTOMER_ID” and “CONTACT_ID” in the example are dynamically extracted from the customer’s data, which is unknown to us. If I search for documents that meet the following criteria, I would expect the sample document to return.The sample document is also expected to return with the following criteria:Thanks for looking into it!Example Document:",
"username": "Andrew_Wang3"
},
{
"code": "\"*.<FIELD_NAME>\"[{\n $search: {\n compound: {\n must: [\n {\n text: {\n query: 'string',\n path: {\n wildcard: '*.type'\n }\n }\n },\n {\n text: {\n query: 'Customer Number',\n path: {\n wildcard: '*.name'\n }\n }\n },\n {\n text: {\n query: 'customer_id',\n path: {\n wildcard: '*.term_display_name'\n }\n }\n }\n ]\n }\n }\n}]\n*.type*.name*. term_display_name",
"text": "Hi @Andrew_Wang3 ,You can use a wild card path to force that each of the terms will be looked at a field with a specific name using the following “regex” as the path\"*.<FIELD_NAME>\":See how the query using compound to form 3 different compound conditions and using *.type , *.name and *. term_display_name to only hit specific nested fields.In this case my search index was completely dynamic mapping. But if you know that only specific field paths are dynamically queried then only map those with dynamic toggle/flag.hope that helps…Ty\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "{\n \"fields\": {\n \"product.$**.name\": {\n \"maxGrams\": 9,\n \"minGrams\": 1,\n \"type\": \"autocomplete\"\n }\n }\n}\n",
"text": "I’m having similar problem. Is it possible to define index using wildcard in the middle of the path, like this…?",
"username": "Jakub"
}
] | Atlas search index fields nested in fields with unknown name | 2022-11-10T00:26:47.596Z | Atlas search index fields nested in fields with unknown name | 2,071 |
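The accepted approach in the thread above is a compound $search with wildcard paths. A hedged sketch of the same pipeline issued through pymongo (the index name, database and collection names are assumptions, not from the thread):

```python
# A sketch of the wildcard-path $search from the reply above, issued via pymongo.
# The index name, database and collection names are assumptions, not from the thread.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:[email protected]")  # placeholder
coll = client["metadata_db"]["assets"]

pipeline = [
    {
        "$search": {
            "index": "default",  # assumed Atlas Search index name
            "compound": {
                "must": [
                    {"text": {"query": "string", "path": {"wildcard": "*.type"}}},
                    {"text": {"query": "Customer Number", "path": {"wildcard": "*.name"}}},
                    {"text": {"query": "customer_id", "path": {"wildcard": "*.term_display_name"}}},
                ]
            },
        }
    },
    {"$limit": 5},
]
for doc in coll.aggregate(pipeline):
    print(doc["_id"])
```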
[
"queries",
"node-js",
"crud"
] | [
{
"code": "await shopCollection.updateOne(\n { Barcode: elem.Barcode },\n {\n $set: {\n Quantity: { $subtract: [\"$Quantity\", -elem.Quantity] },\n SalesQuantity: {\n $add: [{ $ifNull: [\"$SalesQuantity\", 0] }, elem.Quantity],\n },\n Profit: { $subtract: [\"$SalesPrice\", \"$CostPrice\"] },\n TotalProfit: {\n $add: [\"$TotalProfit\", { $multiply: [\"$Profit\", elem.Quantity] }],\n },\n SalesTotal: {\n $add: [\n \"$SalesTotal\",\n { $multiply: [\"$SalesPrice\", elem.Quantity] },\n ],\n },\n },\n },\n { upsert: true, returnOriginal: false }\n );\n const items = req.body.Items;\n const PhoneNumber = req.body.phoneno;\n const paymentMethod = req.body.paymentMethod;\n\n let failureArr = [],\n billingData = [],\n TotalAmount = 0,\n TotalProfit = 0;\n\n for(let j = 0;j<items.length;j++){\n const elem = items[j];\n try {\n const doc = await shopCollection.findOne({ Barcode: elem.Barcode });\n if (!doc) {\n failureArr.push({ message: \"cant find barcode \" + elem.Barcode });\n return;\n }\n\n await shopCollection.updateOne(\n { Barcode: elem.Barcode },\n {\n $set: {\n Quantity: { $subtract: [\"$Quantity\", -elem.Quantity] },\n SalesQuantity: {\n $add: [{ $ifNull: [\"$SalesQuantity\", 0] }, elem.Quantity],\n },\n Profit: { $subtract: [\"$SalesPrice\", \"$CostPrice\"] },\n TotalProfit: {\n $add: [\"$TotalProfit\", { $multiply: [\"$Profit\", elem.Quantity] }],\n },\n SalesTotal: {\n $add: [\n \"$SalesTotal\",\n { $multiply: [\"$SalesPrice\", elem.Quantity] },\n ],\n },\n },\n },\n { upsert: true, returnOriginal: false }\n );\n\n const updatedDoc = await shopCollection.findOne({ Barcode: elem.Barcode });\n\n billingData.push({\n Product: updatedDoc.Product,\n Price: updatedDoc.SalesPrice,\n Quantity: elem.Quantity,\n Amount: elem.Quantity * updatedDoc.SalesPrice,\n });\n TotalAmount += elem.Quantity * updatedDoc.SalesPrice;\n TotalProfit += (doc.CostPrice - updatedDoc.SalesPrice) * elem.Quantity;\n } catch (err) {\n failureArr.push({\n code: err,\n message: `failed to subtract quantity from ${elem.Barcode}`,\n });\n }\n };\n\n const date = new Date();\n const day = date.getDate().toString().padStart(2, \"0\");\n const month = (date.getMonth() + 1).toString().padStart(2, \"0\");\n const year = date.getFullYear().toString().slice(-2);\n const formattedDate = `${day}-${month}-${year}`;\n\n const filter = { date: formattedDate };\n\nconst setOnInsertUpdate = {\n $setOnInsert: {\n Date: formattedDate,\n Volume: 0,\n Profit: 0,\n Customers: 0,\n Log: [],\n }\n};\n\nconst incAndPushUpdate = {\n $inc: {\n Customers: 1,\n Volume: TotalAmount,\n Profit: TotalProfit,\n },\n $push: {\n Log: {\n Bill: billingData,\n Amount: TotalAmount,\n Profit: TotalProfit,\n Id: PhoneNumber ? PhoneNumber : \"Unknown\",\n },\n },\n};\n\nconst options = { upsert: true, returnOriginal: false };\n\n// find the entry and update or create it\nshoppingLog.findOneAndUpdate(filter, setOnInsertUpdate, options)\n .then(() => {\n return shoppingLog.findOneAndUpdate(filter, incAndPushUpdate, options);\n })\n .catch((err) => {\n console.error(err);\n });\n\n\n if (PhoneNumber) {\n const upd1 = {\n $set: {\n LastVisited: formattedDate,\n TimesVisited: {\n $add: [{ $ifNull: [\"$TimesVisited\", 0] }, 1],\n },\n PurchaseVolume: {\n $add: [{ $ifNull: [\"$PurchaseVolume\", 0] }, TotalAmount],\n },\n Profit: {\n $add: [{ $ifNull: [\"$Profit\", 0] }, TotalProfit],\n },\n },\n $push: {\n Log: {\n Bill: billingData,\n Amount: TotalAmount,\n Profit: TotalProfit,\n Id: PhoneNumber ? 
PhoneNumber : \"Unknown\",\n Method: paymentMethod,\n },\n },\n };\n\n const opts = { returnOriginal: false };\n\n // find the entry and update or create it\n usersCollection.findOneAndUpdate(\n { PhoneNumber: PhoneNumber },\n upd1,\n opts,\n function (err, result) {\n if (err) {\n failureArr.push({\n message: \"Updaing UsersCollection Failed\",\n code: err,\n result,\n upd1,\n });\n }\n }\n );\n }\n\n if (failureArr.length === 0) {\n return res\n .status(200)\n .json({ message: \"Success\", billingData, TotalAmount });\n } else {\n return res.status(400).json({\n message: \"Somestuff failed\",\n billingData,\n failureArr,\n TotalAmount,\n });\n }\n});\n",
"text": "",
"username": "Ds_Adithya"
},
{
"code": "",
"text": "To update a document while referring to other fields you need to use the aggregation syntax.Few things about your code.You return if findOne fails but then you updateOne with upsert:true. The only time the document will be upserted is if findOne succeed and then some other process/thread deletes the said document before the updateOne starts. But in this rare case your $set will not make sense as there will be no fields named, SalesPrice, CostPrice.I do not see any option for updateOne that looks like returnOriginal, there is some close to that for findOneAndUpdate but it is named returnNewDocument which might be much better that doing a 3rd access to the database with the findOne that follows.",
"username": "steevej"
}
] | $Subtract and other artihmetic operators returning Object instead of int | 2023-05-11T07:02:28.804Z | $Subtract and other artihmetic operators returning Object instead of int | 476 |
|
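The reply above points at the aggregation-pipeline form of update as the fix. A minimal sketch of that form in pymongo, assuming MongoDB 4.2+ (connection string, database/collection names and the quantity value are placeholders):

```python
# A minimal sketch of the aggregation-pipeline update the reply above refers to,
# assuming MongoDB 4.2+ and pymongo. Field names follow the thread; the barcode
# and quantity values are stand-ins.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder
shop = client["store"]["shop"]

qty = 3  # stand-in for elem.Quantity
shop.update_one(
    {"Barcode": "12345"},
    [  # passing a list makes this a pipeline update, so "$Quantity" etc. resolve to field values
        {
            "$set": {
                "Quantity": {"$subtract": ["$Quantity", qty]},
                "SalesQuantity": {"$add": [{"$ifNull": ["$SalesQuantity", 0]}, qty]},
                "Profit": {"$subtract": ["$SalesPrice", "$CostPrice"]},
            }
        }
    ],
)
```

With the plain (non-pipeline) update form, the operator expressions are stored literally as sub-documents, which is why the original code ended up with objects instead of numbers.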
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "Hi Team,I have created a read only user but when I login with the credentials and try to test of Read only user is able to do write operations, It is allowing the user to do all write operations.Why is read user able to write as well.When i checked the configuration file it doesn’t have authorization enabled , So should we enable authorization for read user to be created?Now that the server is running without auth enabled and in case I enable what will be the risk to existing user and application connected to it?Will there be breakdown and users cant connect?\nPlease provide info on the same how to get the user created as READ only.",
"username": "Mamatha_M"
},
{
"code": "",
"text": "Please provide required information as to how to get this user created?",
"username": "Mamatha_M"
},
{
"code": "",
"text": "I think a lot of reading is in order.Enjoy the documentation.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Sir,I hv created the user with role as READ for a db , that read access is also able to do write operations. My ask was why it is allowing to do write operations when i hv given READ only? My query was should we enable authorization in config file ?",
"username": "Mamatha_M"
},
{
"code": "",
"text": "My query was should we enable authorization in config file ?The link I provide answer this question with YES.",
"username": "steevej"
}
] | Creation of read user | 2023-05-11T05:50:20.850Z | Creation of read user | 617 |
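The short version of the thread above is that a read-only user only behaves as read-only once security.authorization is enabled on the server. A hedged sketch of creating such a user from a driver (pymongo assumed; host, credentials, database and user names are placeholders):

```python
# A sketch of creating a user that only has the "read" role on one database,
# assuming an admin connection and that security.authorization is enabled in
# mongod.conf. Host, credentials, database and user names are placeholders.
from pymongo import MongoClient

admin_client = MongoClient("mongodb://admin:adminpass@localhost:27017/?authSource=admin")
admin_client["reporting"].command(
    "createUser",
    "report_reader",  # hypothetical user name
    pwd="change-me",
    roles=[{"role": "read", "db": "reporting"}],
)
```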
null | [
"atlas-device-sync",
"kotlin",
"flexible-sync"
] | [
{
"code": "class EmployeeModel : RealmObject {\n @PrimaryKey\n var _id: ObjectId = BsonObjectId()\n var store_id: String = \"\"\n\n @PersistedName(\"is_admin\")\n var isAdmin: Boolean = false\n var username: String = \"\"\n var name: String = \"\"\n @PersistedName(\"password\")\n var password: PasswordModel? = null\n var email: String? = null\n var phone: String? = null\n @PersistedName(\"is_activated\")\n var isActivated: Boolean = true\n @PersistedName(\"permission_role\")\n var permissionRole: PermissionRoleModel? = null\n @PersistedName(\"date_created\")\n var dateCreated: Double = System.currentTimeMillis().toDouble()\n @PersistedName(\"date_modified\")\n var dateModified: Double = 0.0\n @PersistedName(\"joining_date\")\n var joiningDate: Double = 0.0\n var kyc: RealmList<KycModel> = realmListOf()\n @PersistedName(\"salary_detail\")\n var salaryDetail: EmployeeSalaryDetail? = null\n @PersistedName(\"salary_account\")\n var salaryAccount: PaymentAccountModel? = null\n @PersistedName(\"salary_advance\")\n var salaryAdvance: EmployeeSalaryAdvance? = null\n var dob: String? = null\n @PersistedName(\"reset_password_on_login\")\n var resetPasswordOnLogin: Boolean = false\n @PersistedName(\"last_logged_on\")\n var lastLoggedOn: Double = 0.0\n}\nclass PasswordModel : EmbeddedRealmObject {\n var salt: String = \"\"\n var hash: String = \"\"\n}\n",
"text": "I am using Flexible sync in my kotlin application.I have a Employee model with the following schemaand Password model asI am using FlexibleSync and my Employee models are getting synced but all the embedded objects such as password is syncing as null",
"username": "Gaurav_Bordoloi"
},
{
"code": "",
"text": "Hello @Gaurav_Bordoloi , Welcome to MongoDB Community!Thank you for raising your concern. The provided information may not be enough to know why the embedded objects are syncing null.Could you please share the code snippets of the Realm Write Transactions as well?I look forward to your response.Cheers, \nHenna",
"username": "henna.s"
}
] | Kotlin realm setting embedded object as null with Flexible Sync | 2023-05-09T19:46:28.670Z | Kotlin realm setting embedded object as null with Flexible Sync | 832 |
null | [
"atlas-functions",
"schema-validation"
] | [
{
"code": "2023-03-03T00:00:-0.000+00:002023-03-03T10:30:30.734+00:00new Date()moment(new Date()).format(\"YYYY-MM-DD\")new Date()import moment from \"moment\";\nexports = function () {\n context.values.get(\"somecluster\");\n const mongodb = context.services.get(\"mongodb-atlas\");\n const collection = mongodb.db(\"some database\").collection(\"collection\");\n collection .insertOne({\n \tnew_date: moment(new Date()).format(\"YYYY-MM-DD\"),\n \tnext_date: new Date(),\n \tcreatedAt: new Date(),\n \tlastUpdatedAt: new Date(),\n });\n\treturn {\n\t\tstatus: \"ok\",\n\t};\n\n};\n",
"text": "I am creating a MongoDB atlas function for an insert operation and I want to know how to validate the data before the insert operation. One more thing I am having an issue with the date as I am saving some fields with a date but the format I want to save is 2023-03-03T00:00:-0.000+00:00 but the date is saving like this 2023-03-03T10:30:30.734+00:00 with new Date().this function moment(new Date()).format(\"YYYY-MM-DD\") is giving the format I need but Mongodb taking it as a string and saving it as a string.\nBut MongoDB takes new Date() as a date but the date is coming with timezones that I don’t need.So I want to know how to make the date as date not as a string in the MongoDB atlas functionHere is my sample function:",
"username": "Zubair_Rajput"
},
{
"code": "exports = function () {\n context.services.get(\"mongodb-cluster\").db(\"test_db\").collection(\"test_coll\").insertOne({\n \tnew_date: new Date(\"2023-05-02\"),\n \tnext_date: new Date(),\n \tcreatedAt: new Date(),\n \tlastUpdatedAt: new Date(),\n });\n\treturn {\n\t\tstatus: \"ok\",\n\t};\n};\ndatemongoshISODateISODate",
"text": "Hello @Zubair_Rajput,Welcome back to the MongoDB Community forums I want to know how to validate the data before the insert operation.Before inserting or updating data using App Services, you can validate the documents by enforcing the schema. Additionally, it’s also possible to validate any modifications made to the documents by specifying a validation expression in the schema.To enforce a schema in Atlas App Services, you can follow the six-step procedure provided.\nOne more thing I am having an issue with the date as I am saving some fields with a date but the format I want to save is 2023-03-03T00:00:-0.000+00:00 but the date is saving like this 2023-03-03T10:30:30.734+00:00 with new Date().You can insert the date in your desired format by executing the following command:It will return the following output:\n\nHowever, just for my understanding, can you explain the reason for the exact requirement for the date field and what further operations you expect to perform?but the date is coming with timezones that I don’t need.Can you please elaborate more, on what you mean by the above statement?Typically mongosh wraps the Date object with the ISODate helper and the ISODate is in UTC format.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hey @Kushagra_Kesav To enforce a schema in Atlas App Services, you can follow the six-step procedure provided.\nThis procedure only validate the existing documents into our collection but didn’t validate the document before insertion. Please help me that how can I enforce a schema on insertOne(), insertMany(), updateOne() and updateMany() methods. Thanks",
"username": "Shahzad_Safdar"
}
] | How to set field validation in mongodb atlas function for crud operation | 2023-03-03T13:02:23.050Z | How to set field validation in mongodb atlas function for crud operation | 1,074 |
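The key point in the thread above is that the value must remain a Date, even when truncated to midnight, rather than a formatted string. Outside of an Atlas Function, the same idea with a driver might look like this sketch (pymongo assumed; names are placeholders):

```python
# A sketch of keeping the value a real Date while truncating it to midnight UTC,
# shown with pymongo rather than an Atlas Function. Names are placeholders.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder
coll = client["test_db"]["dates"]

now = datetime.now(timezone.utc)
midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)  # still a datetime, not a string
coll.insert_one({"new_date": midnight, "createdAt": now})
```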
null | [
"aggregation",
"sharding",
"database-tools",
"backup",
"time-series"
] | [
{
"code": "",
"text": "Hi All,\nNeed your help in below.\nWe are migrating the sharded regular collections into sharded time series collection. We will be following below approach:My question here:\nAs we are restoring non sharded dump into empty sharded time series collection, where shard keys are already set into empty time series collection.Is mongorestore takes care of sharding and data will be restored to multiple instances as per shardKey and zoneKeyRange into timeseries collection?Or I need to restore the non sharded dump into empty time series collection and shard the collections (using shardCollection) after restore completes?Thank you in advance.",
"username": "Yogesh_Sonawane1"
},
{
"code": "",
"text": "In my experience option one:Is mongorestore takes care of sharding and data will be restored to multiple instances as per shardKey and zoneKeyRange into timeseries collection?is the best way to do this, I’ve found it splits the keys quicker as it moves it while it is restoring instead of restoring to one shard, then having to migrate data as a second action. I have restored a collection with 300+ million documents to an empty sharded collection (geo tags enabled) and it correctly put all documents on the right shard.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Thank you so much for your reply.",
"username": "Yogesh_Sonawane1"
},
{
"code": "",
"text": "This seems very slow. Do you know what can help to use mogorestore to sharded timeseries collection and it takes minimum time?\nRight now, it is taking 50-60 minute to restore 20m records.",
"username": "Yogesh_Sonawane1"
}
] | Migrate data from a standalone (non-sharded) instance to sharded mongodb instances using mongorestore | 2023-05-08T13:02:15.437Z | Migrate data from a standalone (non-sharded) instance to sharded mongodb instances using mongorestore | 812 |
null | [
"aggregation",
"dot-net"
] | [
{
"code": "private IMongoCollection<Alerts> _alert;\nprivate async Task<List<AndonResponse>> AlertGetByStationIdInternalAsync(string assemblyId, string stationId)\n { \nvar result = await _alert.Aggregate()\n .Match(r => r.AssemblyId == assemblyId)\n .Match(r => r.Andon.StationId == stationId)\n .Project(r => new AndonResponse\n {\n Id = r.Id,\n AndonId = r.Andon.AndonId,\n StationId = r.Andon.StationId,\n Comments = r.Andon.Comments,\n Status = r.Andon.Status,\n })\n .ToListAsync().ConfigureAwait(false);\n return result;\n}\n",
"text": "Hi Team,\nTrrying to write Xunit test cases for the query here Not able to mock the Project in aggregate query.",
"username": "Rishabh_Soni1"
},
{
"code": "AlertGetByStationIdInternalAsyncIAggregateFluent<TResult>AlertGetByStationIdInternalAsyncAlertGetByStationIdInternalAsync",
"text": "Hi, @Rishabh_Soni1,Welcome to the MongoDB Community Forums. I understand that you’re attempting to mock parts of the MongoDB .NET/C# Driver. A widely accepted testing practice is “Don’t mock what you don’t own”. In other words, don’t mock third-party dependencies, but only your own abstractions. In your example, you should mock AlertGetByStationIdInternalAsync (your code), not IAggregateFluent<TResult> (driver code).Using this strategy you can integration test AlertGetByStationIdInternalAsync, which would talk to the database and use the actual driver implementation. You can then separately unit test your code that depends on AlertGetByStationIdInternalAsync without talking to a database or setting up hard-to-configure error conditions. I would recommend full system tests that exercise the full stack, but you wouldn’t have to test all the edge cases (such as your database being offline or containing unexpected data or other errors).I hope that helps in developing your testing strategy.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "@James_KovacsThe intent is how to do unit testing for the function AlertGetByStationIdInternalAsync itself by mocking mongdb related calls inside that function.How you have done unit testing for your mongodb library? can you please share if there is any github repository code where you have done any unit testing?Also, is there any C# library for mocking mongodb?Thanks and Regards\nKeerthi",
"username": "Keerthi_J"
},
{
"code": "AlertGetByStationIdInternalAsyncAlertGetByStationIdInternalAsyncAlertGetByStationIdInternalAsyncIMongoCollection<T>IMongoCollection<T>",
"text": "For code that uses AlertGetByStationIdInternalAsync, you should mock AlertGetByStationIdInternalAsync itself. To test AlertGetByStationIdInternalAsync, you should integration test that against an actual MongoDB database.The problem with trying to mock the driver is that you must make assumptions about the internal implementation of the driver and the underlying MongoDB cluster. This makes your test suite extremely brittle and unreliable. For example let’s say you do mock IMongoCollection<T> and all the associated types, but you use a server feature not supported by your current MongoDB version. Your mocked tests would pass - because you’ve made assumptions about return values - but the actual query would fail in production.There are many mocking frameworks available that would allow you to mock IMongoCollection<T> and its associated interfaces. Even though it is possible, mocking dependencies that you don’t own is a code smell that will result in a brittle test suite. I strongly encourage you to integration test your code that directly interacts with the .NET/C# Driver rather than attempting to unit test it.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Thanks for the clarification",
"username": "Keerthi_J"
}
] | Mongo Aggregate query with project unable to Mock data, need to write test cases for It | 2023-05-08T07:22:56.503Z | Mongo Aggregate query with project unable to Mock data, need to write test cases for It | 1,223 |
null | [
"queries",
"python",
"motor-driver"
] | [
{
"code": "--reactor asyncioasync def print_results():\n client = AsyncIOMotorClient(test_db_uri)\n db = client.get_default_database()\n col = db.get_collection(\"user\")\n async for doc in col.find({}):\n print(doc)\n\n\[email protected]\ndef test_motor():\n createSomeUsersWithMongoEngine()\n yield defer.ensureDeferred(print_results())\n\n File \"/usr/local/lib/python3.10/site-packages/twisted/internet/defer.py\", line 1697, in _inlineCallbacks\n result = context.run(gen.send, result)\n File \"test/test_reporting.py\", line 318, in print_results\n async for doc in col.find({}):\n File \"/usr/local/lib/python3.10/site-packages/motor/core.py\", line 1158, in next\n if self.alive and (self._buffer_size() or await self._get_more()):\nRuntimeError: await wasn't used with future\n_get_more",
"text": "Hey folks, I’m using motor for the first time, version 2.5.1. I have a twisted app that is using the asyncio reactor. My tests uses the pytest_twisted plugin with --reactor asyncio. I’m able to reproduce the error with the following code.The error I get is:It’s almost as if _get_more is not returning a future. Any ideas? This is using Python 3.10.11, twisted 22.10.0, and PyMongo 3.12.0.Thanks in advance for any pointers.",
"username": "Robert_DiFalco"
},
{
"code": "import asyncio\nfrom motor.motor_asyncio import AsyncIOMotorClient\n\n\ntest_db_uri = 'mongodb://localhost/test'\n\nasync def print_results():\n client = AsyncIOMotorClient(test_db_uri)\n db = client.get_default_database()\n col = db.get_collection(\"test\")\n async for doc in col.find({}):\n print(doc)\n\n\ndef test_this():\n asyncio.run(print_results())",
"text": "Hi @Robert_DiFalco, I believe the error you are seeing is the same as python - RuntimeError: await wasn't used with future when using twisted, pytest_twisted plugin, and asyncio reactor - Stack Overflow, and is related to pytest_twisted itself. The following pure-asyncio code runs with pytest without error:",
"username": "Steve_Silvester"
},
{
"code": "async def print_results():\n client = AsyncIOMotorClient(test_db_uri)\n db = client.get_default_database()\n col = db.get_collection(\"user\")\n async for doc in col.find({}):\n print(doc)\n\n\[email protected]\ndef test_motor():\n createSomeUsersWithMongoEngine()\n yield Deferred.fromFuture(print_results())\n",
"text": "Yes turns out you are right! Nothing to do with Motor at all. I have to turn the Motor future into a Deferred and then it works fine.Well, one thing I learned in this process is that Motor is actually not an asyncio driver for MongoDB. It’s actually just a wrapper on sync sockets (a wrapper of PyMongo actually) that simply runs the IO bound methods in a ThreadExecutorPool. Kind of disappointing. I was using txmongo that while old and dusty at least is a real async socket driver implementation for Mongo. I wonder if Mongo will ever have a modern trully async driver. I guess I can hope!",
"username": "Robert_DiFalco"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error performing an async for on an AgnosticCursor with _get_more method | 2023-05-05T16:12:24.122Z | Error performing an async for on an AgnosticCursor with _get_more method | 1,115 |
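Since the issue in the thread above turned out to be the pytest_twisted/Deferred bridging rather than Motor itself, one way to isolate the driver side is to run the same query under pytest-asyncio instead. A hedged sketch (assumes pytest-asyncio is installed; the URI and collection names are placeholders):

```python
# A sketch of exercising the same query with pytest-asyncio instead of
# pytest_twisted, which sidesteps the Deferred/Future bridging entirely.
# Assumes pytest-asyncio is installed; the URI and collection names are placeholders.
import pytest
from motor.motor_asyncio import AsyncIOMotorClient

@pytest.mark.asyncio
async def test_find_users():
    client = AsyncIOMotorClient("mongodb://localhost/test")  # placeholder URI
    count = 0
    async for _doc in client.get_default_database()["user"].find({}):
        count += 1
    assert count >= 0  # the point is that the cursor iterates without the Future error
```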
[
"data-modeling",
"mobile-bytes"
] | [
{
"code": "",
"text": "Note: additive and destructive schema changes are now referred to as non-breaking and breaking changes but the concepts are identical.Hello Everybody I hope you are having a good time getting information on the forums. As you are aware, I started as Community Manager for Realm in Dec and I have been going through some questions on the forum and I found Schema Changes to be frequently asked questions and hence the birth of the idea Realm BytesEvery Week, I would be selecting anyone such topic and along with your collaboration, we would be talking more on that.Schema Changes can be additive (backwards compatible) or destructive (not backwards compatible) and care should be taken when making changes to Schema for a running application.A diagram is presented below to help identify whether a schema change is additive or destructive:\nimage1712×663 74.8 KB\nAdditive Changes: These changes do not trigger a re-sync on the server and your SDK data models should be able to start using the new schema seamlessly. Your client-code (schema on mobile) can be a subset of your Cloud UI (schema on the server) Schema. Learn moreNote: Additive Changes can only Remove Fields if they are optional, removing Required fields will be a destructive change.When you make an additive schema change there will be a brief update that takes place on the Realm Sync backend and changes may be delayed in propagating to MongoDB Atlas for a short period of time.Destructive Changes: These changes will require a re-sync i.e Terminate and Re-enable Sync to map the new schema in place, and we strongly recommend making sure you have client reset logic in place first. These changes are preferably made directly in the cloud UI. Learn moreNote: Destructive changes are not allowed to be done via the Realm CLI or Code Deployment.I hope the provided information is helpful.I would love to hear from your experience, what worked and what didn’t. If there is a topic you want me to cover next week, please feel free to reach out. Happy Realming!Cheers,\nHenna",
"username": "henna.s"
},
{
"code": "An exception has been thrown: The following changes cannot be made in additive-only schema mode:\n - Property 'User._id' has been changed from 'object id' to 'string'.\n_idclient file not found. The server has forgotten about the client-side file presented\n by the client. This is likely due to using a synchronized realm after terminating \nand re-enabling sync. Please wipe the file on the client to resume \nsynchronization. { sessionIdent: 1, clientFileIdent: 25 } (ProtocolErrorCode=208)\nclient-resetclient-reset",
"text": "G’Day, Folks, I would love to hear your experience with Additive and Destructive Schema changes I would like to share some more insights today that I discovered while experimenting with my restaurant app.When you have development mode on, you can only make additive changes. If you try to make a destructive change, it will throw an error like this:I wanted to change the type of the _id field in my User Schema and this was a destructive change and I was not able to do it from within the application.I changed the schema directly on the cloud UI and it prompted me to resync my data I had to do that. Now when I try to sync my mobile client app, the server logs show the below error:This is client-reset message. I have not implemented client-reset in my code as this was a test app, so I uninstalled the application and synced again and I am able to run the app without any error.This is not a friendly method for production apps, so the recommendation is to have client-reset implementation in the application.Cheers, ",
"username": "henna.s"
},
{
"code": "",
"text": "Hi @henna.s,I’ve been having this issue on destructive change during development mode on and off and it seems the only way to resolve it has been to uninstall the app.My issue is described here: How to update breaking-change schema of a Synced Realm during development?In your example you mentioned that you didn’t have a client reset code, so you had to uninstall the app. It was the same for me, so I wanted to write the client reset code to handle the reset. But it seems that Realm only detects a single client reset, and subsequent resets no longer invokes the clientResetHandler. Which means i will have to uninstall the app anyway.My question is: if we fail to handle the initial client reset, is the only option is to reinstall the app as per your example?",
"username": "lHengl"
},
{
"code": "",
"text": "Oooohhh Did somebody say destructive schema change?Oh boy!!! I got two years of stories!!!But, unfortunately a lot of them may be NDA protected, so I’ll go with what my own findings are in my own tests:Additive changes are always best, unless you plan an outage to then make all of your breaking changes at the same time, and then reinitiate sync.",
"username": "Brock"
},
{
"code": "",
"text": "Client Reset Logic needs to seriously be in the tutorial get started app materials for Realm/Device Sync, even on outside forums and discord servers etc.Next to additive vs destructive schema issues, it’s the lack of knowledge Client Reset Logic even exists, and is not automatic is still something nobody ever knows about until they break something and they don’t understand why a term and resync isn’t working right.I was on a discord call last night as a matter of fact for a food delivery application company out of the UK using Realm, because someone wanted to change a spelling of a field and launched it. Brought the entire app down, they terminated and resync’d the app to find nothing fixed.Lo and behold, they had no client reset logic, I had to walk them through it. Of course I didn’t mention a lot of things I would have liked to, staying professional and all, but 2 years, going on 3 years, it’s STILL a problem.",
"username": "Brock"
},
{
"code": "",
"text": "Hello @lHengl ,Thank you for raising your concern. Please allow me some time to talk to the team and I will be back with feedback.Appreciate your patience in the meantimeCheers, \nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "Hi @lHengl,I am an engineer on the Device Sync team. I hear you and we agree that this is not an ideal experience for our developers. We have just begun a project to allow clients in Development Mode to make breaking schema changes that will automatically handle terminating/re-enabling sync and discarding the local realm file.As for the best procedure to do this in the meantime, we agree that what you are doing is unfortunately the way to do it. Assuming you are testing in an emulator, the best way to go about making breaking schema changes is to:We are very excited to fix this behavior and are actively working on it! I will try to reach back out here when this work is completed.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "@Tyler_Kaye you’re awesome!!! Can it also please be looked into for automated Client Reset Logic?",
"username": "Brock"
},
{
"code": "",
"text": "Thanks @Tyler_Kaye, @Brock , @henna.s, I appreciate and look forward to the improvements. Is there some notifications I can subscribe to for such upcoming changes. E.g. is there an issue tracker?",
"username": "lHengl"
},
{
"code": "",
"text": "Hi All - Caleb here, an engineer on the docs and tutorial projects. I’d love feedback on how we can improve this page on client resets: https://www.mongodb.com/docs/atlas/app-services/sync/error-handling/client-resets/.",
"username": "Caleb_Thompson"
},
{
"code": "",
"text": "Hi @Caleb_Thompson ,Here are my thoughts on the documentation…Client ResetThe Client Resets page should link the reader back the Make Breaking Schema page. to avoid making the mistake in the first place.Making Breaking Schema ChangesThis page needs to be updated to take the reader through a step by step guide on how to correctly perform breaking changes in multiple ways: Create a Data ModelThis page explains how to create models from existing data and from realm objects created by the SDK. Because of this, new developers like me would be relying heavily on this method to generate the models. But what if this fails? How does one manage this? The following sections should be introduced:That’s all for now. Hope to see some updates soon. Thanks!",
"username": "lHengl"
}
] | Mobile Bytes #1: Additive and Destructive Schema Changes | 2022-01-19T09:18:58.993Z | Mobile Bytes #1: Additive and Destructive Schema Changes | 7,009 |
|
null | [
"compass",
"mongodb-shell",
"containers"
] | [
{
"code": "",
"text": "Hi Team,Container is up and running, it show dbs for the command “docker exec -it mongosh”, but unable to connect from MongoDB Compass. Any solution Team…",
"username": "Gowtham_Chendra"
},
{
"code": "",
"text": "but unable to connect from MongoDB Compass.what’s the error? or time out?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hi Kobe, got solution, need to edit the conf file.",
"username": "Gowtham_Chendra"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb container is up and running, but unable to connect with Mongodb compass | 2023-05-05T12:02:10.363Z | Mongodb container is up and running, but unable to connect with Mongodb compass | 925 |
null | [
"replication",
"kubernetes-operator"
] | [
{
"code": "secrets-store.csi.k8s.ioapiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: pgadmin\n namespace: {{ .Values.namespace }}\n labels:\n app: pgadmin\n version: v1\nspec:\n revisionHistoryLimit: 0\n selector:\n matchLabels:\n app: pgadmin\n replicas: 1\n template:\n metadata:\n labels:\n app: pgadmin\n istio_version: {{ .Values.istioVersion }}\n spec:\n serviceAccountName: pgadmin-sa\n automountServiceAccountToken: false\n containers:\n - name: pgadmin\n image: {{ .Values.pgAdmin.imageRepository }}:{{ .Values.pgAdmin.imageVersion }}\n imagePullPolicy: IfNotPresent\n env:\n - name: PGADMIN_DEFAULT_EMAIL\n value: {{ .Values.pgAdmin.defaultEmail }}\n - name: PGADMIN_DEFAULT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: pgadmin-credentials\n key: password\n - name: PGADMIN_LISTEN_PORT\n value: \"8080\"\n resources:\n limits:\n cpu: 100m\n memory: 256Mi\n requests:\n cpu: 50m\n memory: 128Mi\n volumeMounts:\n - name: pgadmin-secret\n mountPath: /mnt/secrets-store\n - name: pgadmin-data\n mountPath: /var/lib/pgadmin\n volumes:\n - name: pgadmin-secret\n csi:\n driver: secrets-store.csi.k8s.io\n readOnly: true\n volumeAttributes:\n secretProviderClass: \"pgadmin-secret-spc\"\n - name: pgadmin-data\n emptyDir: {}\ndeployment.spec.template.specapiVersion: mongodbcommunity.mongodb.com/v1\nkind: MongoDBCommunity\nmetadata:\n name: mongo-db\n namespace: {{ .Values.namespace }}\nspec:\n members: 3\n type: ReplicaSet\n version: \"6.4.0\"\n security:\n authentication:\n modes: [\"SCRAM\"]\n users:\n - name: my-admin\n db: admin\n passwordSecretRef: # a reference to the secret that will be used to generate the user's password\n name: mongodb-credentials\n key: password\n roles:\n - name: root\n db: admin\n scramCredentialsSecretName: my-admin-scram\n - name: my-user\n db: admin\n passwordSecretRef: # a reference to the secret that will be used to generate the user's password\n name: mongodb-credentials\n key: password\n roles:\n - name: readWriteAnyDatabase\n db: admin\n scramCredentialsSecretName: my-user-scram\n serviceAccountName: pgadmin-sa\n automountServiceAccountToken: false\n",
"text": "I’m trying to deploy the Kubernetes operator and integrate AWS secrets manager. When I do this in a deployment, I use the secrets-store.csi.k8s.io driver to mount the secret as a volume like below:The service account called out in deployment.spec.template.spec is associated to a role which has the required policy to fetch the secret from AWS secrets manager.I’m trying to accomplish the same thing inside the operator, so that I can use the AWS secret as the user’s password that is setup as part of the operator. The operator deployment as it stands now looks like this:I think I will still need the volumes setup in each pod and the volumeMounts in each container because I think that is how the CSI driver creates the Kubernetes secrets objects (but I’m not sure). I’m sure I will need to be able to run each pod with the following, in order for the pod to be able to access the secret. Otherwise I will get an authorization error:",
"username": "Dan_Haws"
},
{
"code": "",
"text": "Hi @Dan_Haws and welcome to MongoDB community forums!!If I understand your concern correctly, you are trying to integrate AWS secret manager with the Kubernetes community operator.Currently we do not have the direct integration of the AWS secret manager and the Kubernetes Community or the Enterprise Operator.\nHowever, the recommendation would be to use the script to extract the secrets and circulate over the pods in the operator.The other method would be to use Using AWS Secrets Manager secrets with Kubernetes - Amazon EKS to mange the secrets.However, if my understanding for the topic is incorrect, could you help me understand in more brief about the requirements.Regards\nAasawari",
"username": "Aasawari"
}
] | How to run the Kubernetes community operator under the context of a service account | 2023-05-02T12:23:55.003Z | How to run the Kubernetes community operator under the context of a service account | 905 |
null | [] | [
{
"code": "",
"text": "My cluster has 3 nodes but can’t connect atlas, when I check telnet, only 2 nodes are running, the other node is failing. How to handle this error guys?",
"username": "Tung_Ph_m"
},
{
"code": "",
"text": "",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "",
"username": "Tung_Ph_m"
}
] | Cannot access Mongo Atlas from Ubuntu server | 2023-05-10T11:31:20.847Z | Cannot access Mongo Atlas from Ubuntu server | 596 |
null | [
"monitoring"
] | [
{
"code": "",
"text": "Cluster: 5 instances split between 2 regionsCreated an integration for prometheus. After applying the config and restarting prometheus I am now seeing metrics flow to prometheus from the cluster. But they are incomplete. Only 3 of the 5 nodes in the cluster are getting scraped. The other 2 are not. The 2 that are not represented are not in the region that is hosting prometheus (the 3 that reporting are). Does the prometheus atlas integration not support cross-region? If not, please document. If it does, please provide information on how it can be achieved.",
"username": "andrew_morcomb"
},
{
"code": "AWS VPC (Prometheus server) Region 1 <-- peering connection 1 --> Atlas VPC Region 1\nAWS VPC (Promeheus server) Region 1 <-- peering connection 2 --> Atlas VPC Region 2\n",
"text": "Hi @andrew_morcomb - Welcome to the community Cluster: 5 instances split between 2 regionsFor the discussion i’ll just reference the following based off your above information but please correct me if this interpretation / example is incorrect:One thing I can think off of the top of my head is if you have a VPC peering connection configured to/from the AWS VPC where the prometheus server exists to each of the Atlas region VPC’s. That is:Note: I am assuming vpc peering due to the prometheus agent being able to scrape off only a singular region:Deployments in Multiple Regions\nAtlas deployments in multiple regions must have a peering connection for each Atlas region.If you believe you have this set up with 2 vpc peering connections then you can maybe do some network tests to ensure the client from the region where the prometheus server exists is able to connect to both regions on the Atlas end via the vpc peering connections just as a troubleshooting step.If this is not the case, you may wish to check with the Atlas in-app chat support team to see if there are any limitations on this set up from the Atlas end. Although private endpoints are currently not supported for the prometheus integration as of today. Please note they won’t have any insight into your AWS networking configurations on your end.Hope the above hopes in any way.Regards,\nJason",
"username": "Jason_Tran"
}
] | Prometheus Integration: Support for Cross-Region Atlas Clusters | 2023-05-10T22:15:28.068Z | Prometheus Integration: Support for Cross-Region Atlas Clusters | 763 |
null | [
"golang"
] | [
{
"code": "",
"text": "Hi all, I noticed that the ability to connect from an EKS workload to MongoDB using EKS IRSA was added to the Golang driver and was planned as a feature in version 1.12.0. This feature would be especially useful for me. Is there an official release date for version 1.12.0 of the MongoDB Golang driver?",
"username": "Long_Bui"
},
{
"code": "",
"text": "Hello @Long_Bui, the release date is subject to change, but 1.12.0 is tentatively planned for mid June.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Release date for MongoDB Golang 1.12.0? | 2023-05-05T00:33:33.098Z | Release date for MongoDB Golang 1.12.0? | 849 |
null | [
"replication",
"sharding"
] | [
{
"code": "",
"text": "“Good morning,\nI’m trying to set up clustering using only Sharding, without replicaSet, so I created 3 instances with the following replSetName shard1, shard2, shard3, and sharding.clusterRole “shardsvr” on all 3. I created an instance with the replSetName config and sharding.clusterRole “configsvr”. And finally, I created a route instance that I didn’t make any specific configuration in the mongod.conf file.\nI opened a connection to this route and ran:\n~$mongos --configdb config/mongo_config_ip:27017\nI opened another connection to this route and ran:\nmongos>sh.addShard(“shard1/mongo_shard1_ip:27017”)\nmongos>sh.addShard(“shard1/mongo_shard2_ip:27017”)\nmongos>sh.addShard(“shard1/mongo_shard3_ip:27017”)\nSo far, everything seems to be working. My question now is how do I create a connection string to connect, should I connect only to the “route” mongo?”",
"username": "Alexandre_Souza_de_Oliveira"
},
{
"code": "#if you don't have authentication enabled\nmongodb://mongos_host:mongos_port\n\n#if you have authentication\nmongodb://username:password@mongos_host:mongos_port/?authSource=<authdb>\n",
"text": "With a sharded cluster you should only connect via the mongos, so it would be like this if you have 1 mongos only:",
"username": "tapiocaPENGUIN"
}
] | How to create sharding cluster server and connection string | 2023-05-10T18:14:59.919Z | How to create sharding cluster server and connection string | 675 |
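To make the reply above concrete, a driver only needs the mongos address(es) in its URI. A minimal pymongo sketch (host names are placeholders; add credentials and authSource if authentication is enabled):

```python
# A minimal sketch of connecting only to the mongos router(s), per the reply above.
# Host names are placeholders; add credentials and authSource if auth is enabled.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos_host:27017")  # single router

# With several routers, list them all so the driver can fail over between them:
# client = MongoClient("mongodb://mongos1:27017,mongos2:27017")

print(client.admin.command("ping"))
```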
null | [
"aggregation",
"queries",
"indexes"
] | [
{
"code": "$lookup$lookup_id$lookup_id",
"text": "The base collection has 100,000 documents. All of which will be run through a $lookup stage. The foreign collection that will be looked up is empty.This query takes 30 seconds.When I remove the $lookup stage, this query takes 3 seconds.What explains why it is so slow, despite the foreign collection being empty, having no documents?Similarly, If I join based _id, which is indexed of course, the $lookup could increase the time of the query by 4 times. I know that the index is used because if I join by some other field that is not indexed, the query actually timeout and returns an error. So how come looking up using an indexed field, _id, increase the query time by 4x?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Can you explain why you are doing a $lookup on an empty collection? What is the purpose of this, a $lookup is for joining two collections but in this case it is empty?",
"username": "tapiocaPENGUIN"
},
{
"code": "blocked",
"text": "It is empty now. But it will be populated. The foreign collection is to store blocked users. The base collection is the users collection. After filtering the users collection, I further have to remove the blocked users.",
"username": "Big_Cat_Public_Safety_Act"
}
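A quick way to narrow down where the time goes in a pipeline like this is to run it through explain and check whether the $lookup's probes into the foreign collection use an index. A minimal mongosh sketch; the collection names (users, blocked_users) and the field blockedUserId are placeholders, not taken from the original post:

```js
// Show per-stage execution stats, including whether the $lookup
// sub-queries against the foreign collection hit an index.
db.users.explain("executionStats").aggregate([
  { $match: { active: true } }, // placeholder filter
  {
    $lookup: {
      from: "blocked_users",
      localField: "_id",
      foreignField: "blockedUserId", // this field should be indexed on blocked_users
      as: "blocks"
    }
  },
  { $match: { blocks: { $size: 0 } } } // keep only users who are not blocked
])

// If the foreign field is not indexed yet:
db.blocked_users.createIndex({ blockedUserId: 1 })
```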
] | $lookup is super slow despite the foreign collection being empty | 2023-05-10T08:28:12.224Z | $lookup is super slow despite the foreign collection being empty | 630 |
null | [] | [
{
"code": "",
"text": "This is related to the post Core dump on MongoDB 5.0 on RPi 4\"Version 4.4.19 is now also causing a core dump, which basically means that 4.4 can’t be used on a Raspberry Pi 4 as well unless 4.4.18 will be used.Seriously?Kind regards,\nDaniel Faust",
"username": "dfaust"
},
{
"code": "sudo apt-get install mongodb-org-mongos=4.4.18 mongodb-org-tools=4.4.18 mongodb-org-shell=4.4.18 mongodb-org-database-tools-extra=4.4.18 mongodb-org=4.4.18 mongodb-org-server=4.4.18\n\nsudo apt-mark hold mongodb-org-mongos mongodb-org-tools mongodb-org-shell mongodb-org-database-tools-extra mongodb-org mongodb-org-server\n",
"text": "Same problem here.\nFixed it by downgrading to 4.4.18 and holding updates for the next releases.",
"username": "mcury"
},
{
"code": "",
"text": "If you would like to run the latest version(s) of MongoDB on a Pi, check out the Github repo where I have built it from source for the Pi. Keep in mind this is not officially supported and something I have done in my personal capacity.",
"username": "Matt_Kneiser"
},
{
"code": "",
"text": "Very nice, really thanks Matt_Kneiser, awesome news.\nI’ll give it a try pretty soon… This will allow me to update my Graylog server to version 5.0 Edit: mongod launches, changed the port to 27017 and Graylog starts, perfect.\nBut when I try to use mongorestore my old database to the new mongodb, some documents fails but mostly of them are successfully…\nHowever graylog doesn’t start… I’m still checking.",
"username": "mcury"
},
{
"code": "",
"text": "Everything is working perfectly, thank you very much for this binary, much appreciated.",
"username": "mcury"
},
{
"code": "",
"text": "Thank you for your effort.",
"username": "dfaust"
}
] | Core dump on MongoDB 4.4.19 on RPi 4 | 2023-02-27T15:04:14.690Z | Core dump on MongoDB 4.4.19 on RPi 4 | 2,075 |
null | [
"node-js",
"flexible-sync",
"devops",
"app-services-cli"
] | [
{
"code": "cp data_sources/mongodb-atlas/db-development/<collection_name>/schema.json data_sources/mongodb-atlas/db-qa/<collection_name>/",
"text": "Hello,I have three realm synced Apps with the same name, one for “development”, one for “qa” and one for “production”.\n“development” is on a shared cluster and has its own database.\n“qa” and “production” are on a dedicated cluster and have both their own database.I use GitHub to deploy “development” to Atlas bi-directionnaly.\nI use realm-cli to deploy to “qa” and “production” with a GitHub actions script.Pardon my words, but honestly this is a pain in the “a…” to maintain.\nTo have a single source of truth, and because it wouldn’t work otherwise anyway,\nI have my schemas only on the “development” folder in my GitHub repository.When deploying to “qa” or “production” I do the following:cp data_sources/mongodb-atlas/db-development/<collection_name>/schema.json data_sources/mongodb-atlas/db-qa/<collection_name>/And this has worked once to deploy to “qa” and “prod”.But now that I retry deploying to “qa”, it doesn’t work I get the following error:\npush failed: error fetching schema provider for schemas: two schemas have the same title “CollectionTitle”And every time I try re-deploying I get a different “CollectionTitle” in the error.When I look at my apps in Atlas, for each of the apps, only the corresponding database/schemas are defined. The other ones are greyed out. And schemas are not defined for the other apps.\nHas anyone been successful in managing schemas with a single source of truth?Can anyone make sense of this error ?\nThanks.",
"username": "Benoit_Werner"
},
{
"code": "",
"text": "Hi,Sorry to hear you aren’t having a great experience with this. I will try to respond to a few things in parallel:I think this docs page might do a better job explaining this situation: https://www.mongodb.com/docs/atlas/app-services/sync/data-model/data-model-map/#overviewIf the issue is still occurring, I would be happy to look into it more if you can send me your GroupId (you can see this in your URL when on atlas/app services). You can reply here (it is safe and only MongoDB employees can look it up), or you can send it privately to me in the forums if that would make you more comfortable.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "titleDog",
"text": "ld recommend having different clusters for QA and Production. This is great for isolation and will actually improve the performance of some aspects of sync (especially if you are using partition-based sync)Thanks for your reply. So the doc you shared clearly states that “You could not have another schema whose title was Dog in the same cluster.”. So I created a new cluster for my “qa” and now everything seems to be working.",
"username": "Benoit_Werner"
},
{
"code": "",
"text": "I spoke too fast, It was working during a few deploys. And out of nowhere the same error came back, even though my apps databases are on three different clusters.",
"username": "Benoit_Werner"
},
{
"code": "",
"text": "Hi, what is the error now? One thing that definitely gets rid of the issue is that you no longer need to have the database names have -dev -qa -prod as a suffix. The issue before was the fact that you were most likely uploading your QA schemas to the Prod app (which has duplicate titles), but if you just have the same set of schemas/collections across all apps I suspect you will have a much better time.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm app deployment schema issues between env | 2023-05-05T10:02:24.441Z | Realm app deployment schema issues between env | 964 |
null | [
"queries",
"change-streams"
] | [
{
"code": " public async Task RealtionalCollectionCollectionChange(CancellationToken cancellationToken)\n {\n var options = new ChangeStreamOptions\n { \n FullDocument = ChangeStreamFullDocumentOption.UpdateLookup,\n BatchSize = 2\n };\n \n string logHistory = string.Empty;\n var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>().Match(\"{operationType: { $in: [ 'replace', 'insert', 'update', 'delete' ] } }\");\n using (var cursor = await collection.WatchAsync(pipeline, options, cancellationToken))\n {\n \n while (await cursor.MoveNextAsync(cancellationToken))\n { \n if (cancellationToken.IsCancellationRequested)\n {\n break;\n }\n\n foreach (var change in cursor.Current)\n {\n if (change.OperationType == ChangeStreamOperationType.Invalidate)\n {\n _logger.LogWarning(\"Change stream cursor has been invalidated\"); \n break;\n }\n\n var key = change.DocumentKey.GetValue(\"_id\").ToString();\n\n switch (change.OperationType)\n {\n case ChangeStreamOperationType.Insert:\n await InsertIntoHistoryCollection(change);\n await TriggerEmail(change);\n break;\n\n case ChangeStreamOperationType.Delete: \n _logger.LogInformation(\"{Key} has been deleted from Mongo DB\", key); \n var filter = Builders<BsonDocument>.Filter.Eq(\"_id\", ObjectId.Parse(key.ToString()));\n var document = await collectionHistory.Find(filter).FirstOrDefaultAsync();\n\n try\n {\n await _mailService.SendEmail(change, document, logHistory);\n }\n catch (Exception ex)\n {\n _logger.LogError(ex, \"An error occurred while sending email for {Key} for operation type {OperationType}\", key, change.OperationType);\n }\n break;\n }\n }\n }\n cursor.Dispose();\n } \n }\n",
"text": "Hi Team,I am using ChangeStream for catch my backend deletions and send mails which is working fine.\nIn case of bulk deletions I want to send only few e-mails hence I am using BatchSize, some how it is not working.\nAs per my understanding, depending on BatchSize setting those many changes should capture.\nI set BatchSize is 2, when I delete 5 records from collection it should only send 2 mails as I set BatchSize is 2, however it is sending all 5 mails.\nPlease help me to fix this issue. Below is my code:Thanks,\nLalitha.C",
"username": "Lalitha_Chevuru"
},
{
"code": "",
"text": "What i understand on cursor.batchSize() is that, it decides the number of results you get for each batch call, but not the total number of results.So say you want 1000 docs in total, setting batch size to 10 means you will call mongo api 100 times, not necessarily mean that you only get 10 results back.So in your case, you may get the 5 events as 2 + 2 + 1, and if you omit batch size, you may just get 5 in one go, so same result.",
"username": "Kobe_W"
},
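To illustrate the distinction: batchSize only controls how many documents (or change events) are fetched per server round trip; it never caps the total the cursor delivers, so a 'send at most N mails' rule has to live in application code (for example, a counter in the loop). A rough mongosh sketch on an ordinary cursor; the collection name and filter are made up for illustration:

```js
// batchSize(2): the cursor still returns ALL matching documents,
// they are just fetched from the server two at a time.
db.orders.find({ status: "deleted" }).batchSize(2).toArray().length; // e.g. 5

// limit(2): the cursor returns at most two documents in total.
db.orders.find({ status: "deleted" }).limit(2).toArray().length; // 2
```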
{
"code": "",
"text": "Thank you Kobe.\nMy bad, I understood concept wrongly.\nThanks for clarification.Thanks,\nLalitha.C",
"username": "Lalitha_Chevuru"
}
] | BatchSize option is not working in ChangeStream WatchAsync | 2023-05-09T09:52:30.354Z | BatchSize option is not working in ChangeStream WatchAsync | 837 |
null | [
"compass",
"mongodb-shell",
"containers"
] | [
{
"code": "",
"text": "For some days now I have been trying to setup remote access for mongodb server which i am running in our 64gb ram azure machine. I tried using docker, docker-compose and using nginx as reverse proxy for mongodb, but this didnt work. Basically i found out that :So I switched over to not using docker instead running them as services and still configuring nginx as reverse proxy, I still noticed that while I was able to see the nginx default home page when i access the server public ip address (or dns which i have set up in azure) but I am still not able to connect to my mongodb using compass or mongosh using the public ip address which nginx is running on.Now, I am no longer using nginx and I am still not able to configure remote access to mongodb by adding the 0.0.0.0 access anywhere ipaddress in the bindIp parameter. Also i have added a rule in the firewall to allow any ip address to access the port 27017 for mongodb. So i dont know what really is happening with mongodb, Isnt it possible to access mongodb remotely ?",
"username": "Ben_Gab"
},
{
"code": "net:\n port: 27017\n bindIp: 0.0.0.0\n",
"text": "It works. There’s something wrong in your setup. Here’s all it takes in the mongod.conf :You were right as a debugging step to eliminate any proxying.\nIt’s almost certainly something wrong with your Azure setup.\nI recommend you set up MongoDB on a local machine and play with it and see how easy it is to set up for remote access, and then tackle your Azure setup.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "@Jack_Woehr Hello Sir, Thanks for your help. I actually solved the issue, and yes it was something with azure. I had to open the 27017 from my azure network board for that vm.",
"username": "Ben_Gab"
}
] | Unable to setup remote access for mongodb | 2023-05-05T16:13:28.452Z | Unable to setup remote access for mongodb | 1,454 |
null | [
"atlas-search"
] | [
{
"code": "{\n \"searchAnalyzer\": \"lucene.keyword\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"lrn\": {\n \"analyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"title\": {\n \"analyzer\": \"aks_ngram\",\n \"type\": \"string\"\n }\n }\n },\n \"analyzers\": [\n {\n \"charFilters\": [],\n \"name\": \"aks_ngram\",\n \"tokenFilters\": [\n {\n \"type\": \"lowercase\"\n }\n ],\n \"tokenizer\": {\n \"maxGram\": 10,\n \"minGram\": 2,\n \"type\": \"nGram\"\n }\n }\n ]\n}\n{\n $search: {\n index: \"default\",\n compound: {\n should: [\n {\n text: {\n query: \"558988b6b1661680917206\",\n path: \"title\",\n }\n },\n {\n text: {\n query: \"558988b6b1661680917206\",\n path: \"lrn\"\n }\n }\n ]\n }\n }\n }\n[\n {\n title: 'WXYZQP',\n lrn: '558988b6b1661680917206'\n }\n]\n[\n {\n title: '75669860766',\n lrn: '5832748721661680917174'\n },\n {\n title: 'WXYZQP',\n lrn: '558988b6b1661680917206'\n }\n]\n",
"text": "Hello everyone,I’m new with atlas search and I don’t know how I can build an ‘OR’ operator through multiple fields.Here is what I did:Search index configuration:My search query:What do I expect to get:What do I get:I want on the “title” field to search only by substrings and “lrn” to be perfect match.Thanks! ",
"username": "Catalin_Radu"
},
{
"code": "[\n {\n _id: ObjectId(\"6458ee0dbe43b85019d199e4\"),\n title: 'WXYZQP',\n lrn: '558988b6b1661680917206'\n },\n {\n _id: ObjectId(\"6458ee30be43b85019d199e5\"),\n title: '75669860766',\n lrn: '5832748721661680917174'\n }\n]\ntitlelrn{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"lrn\": {\n \"type\": \"string\"\n },\n \"title\": {\n \"type\": \"string\"\n }\n }\n }\n}\n[\n {\n '$search': {\n 'compound': {\n 'should': [\n {\n 'text': {\n 'query': '75669860766', \n 'path': 'title'\n }, \n 'text': {\n 'query': '558988b6b1661680917206', \n 'path': 'lrn'\n }\n }\n ]\n }\n }\n }\n]\n[\n {\n _id: ObjectId(\"6458ee0dbe43b85019d199e4\"),\n title: 'WXYZQP',\n lrn: '558988b6b1661680917206'\n }\n]\n",
"text": "Hi @Catalin_Radu and welcome to MongoDB community forums!!Based on my understanding for the above post, I tried to replicate the issue in my local environment with the same dataset as:If I understand correctly, you need to create search indexes on title and lrn fields and trying to search the documents using the should operator.The search index for the above sample document looks like:and the search query is:returns the document as:If the above query does not work, could you share the sample dataset which would help me reproduce and provide the query in a better way.For reference, you can start learning about how Atlas search works in MongoDB using the University from MongoDB Courses and Trainings | MongoDB University and Atlas search Docs.To learn how SHOULD, MUST and NOT works in Atlas search, you can refer to the documentation for compound operators.Let us know if you have any further queries.Regards\nAasawari",
"username": "Aasawari"
}
] | Using OR operator for a ngram field and keyword field | 2023-05-08T11:40:18.674Z | Using OR operator for a ngram field and keyword field | 572 |
null | [
"java",
"transactions"
] | [
{
"code": "Caused by: com.mongodb.MongoCommandException: Command failed with error 112 (WriteConflict): 'WriteConflict error: this operation conflicted with another operation.\n",
"text": "I am getting from Mongodb Exception:Got this error when used @Transactional Spring Boot Annotation. Tried to insert two documents, one should be inserted successfully and other should roll back because of duplicate key (Unique Index).\nBut this exception was not expected, instead DuplicateKeyException was expected. Before using @Transactional, I was getting DuplicateKeyException.",
"username": "Neha_Maheshwari2"
},
{
"code": "",
"text": "Hello @Neha_Maheshwari2 ,Welcome to The MongoDB Community Forums! Can you share the MongoDB version being used?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi @Tarun_Gaur,Thank you !!\nVersion of MongoDB we are currently using is 4.6.1.Regards,\nNeha Maheshwari",
"username": "Neha_Maheshwari2"
}
] | 'WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.' | 2023-05-09T12:05:25.121Z | ‘WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.’ | 951 |
null | [] | [
{
"code": "{\n \"t\": {\n \"$date\": \"2023-05-01T18:01:49.049+00:00\"\n },\n \"s\": \"I\",\n \"c\": \"COMMAND\",\n \"id\": 51803,\n \"ctx\": \"conn53\",\n \"msg\": \"Slow query\",\n \"attr\": {\n \"type\": \"command\",\n \"ns\": \"BI.SubscriptionEventSummaryDaily\",\n \"command\": {\n \"insert\": \"SubscriptionEventSummaryDaily\",\n \"documents\": 999,\n \"ordered\": true,\n \"writeConcern\": {\n \"w\": \"majority\"\n },\n \"lsid\": {\n \"id\": {\n \"$uuid\": \"7c1c9486-634b-0331-b745-9da678b7u7af\"\n }\n },\n \"txnNumber\": 73,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1682964101,\n \"i\": 19\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"WtDDg8+6iS3ynWsDw4vE5kgD2XXX\",\n \"subType\": \"0\"\n }\n },\n \"keyId\": 7178507862000077377\n }\n },\n \"$db\": \"AirBI\"\n },\n \"ninserted\": 999,\n \"keysInserted\": 1998,\n \"numYields\": 0,\n \"reslen\": 230,\n \"locks\": {\n \"ParallelBatchWriterMode\": {\n \"acquireCount\": {\n \"r\": 4\n }\n },\n \"FeatureCompatibilityVersion\": {\n \"acquireCount\": {\n \"w\": 4\n }\n },\n \"ReplicationStateTransition\": {\n \"acquireCount\": {\n \"w\": 5\n }\n },\n \"Global\": {\n \"acquireCount\": {\n \"w\": 4\n }\n },\n \"Database\": {\n \"acquireCount\": {\n \"w\": 4\n }\n },\n \"Collection\": {\n \"acquireCount\": {\n \"w\": 4\n }\n },\n \"Mutex\": {\n \"acquireCount\": {\n \"r\": 4\n }\n }\n },\n \"flowControl\": {\n \"acquireCount\": 2,\n \"timeAcquiringMicros\": 4\n },\n \"readConcern\": {\n \"level\": \"local\",\n \"provenance\": \"implicitDefault\"\n },\n \"writeConcern\": {\n \"w\": \"majority\",\n \"wtimeout\": 0,\n \"provenance\": \"clientSupplied\"\n },\n \"storage\": {\n \"data\": {\n \"bytesRead\": 25111,\n \"timeReadingMicros\": 1369\n }\n },\n \"remote\": \"XXX\",\n \"protocol\": \"op_msg\",\n \"durationMillis\": 350\n }\n}\n",
"text": "Hello,In my log file there some commands that inserted 999 documents in one of my collection. However, there are no field described in the log file. How I find these documents in my collection?This is an example:",
"username": "Rafael_Martins"
},
{
"code": "",
"text": "I can’t think of a way to do this on mongodb server side other than checking local.oplog.rs (it’s capped though).oplog should have all necessary information as it’s for replication. Try it. (maybe the info is still there.)",
"username": "Kobe_W"
}
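If the entries have not rolled off the oplog yet, the inserted documents can usually be recovered from it, since each insert entry (op: 'i') carries the full document in its o field. A minimal mongosh sketch; the namespace is taken from the slow-query log above and the timestamp bound is a placeholder:

```js
// Look at recent insert entries for that collection; adjust ns and ts as needed.
db.getSiblingDB("local").oplog.rs.find(
  {
    ns: "BI.SubscriptionEventSummaryDaily",            // "<db>.<collection>"
    op: "i",                                            // i = insert
    ts: { $gte: Timestamp({ t: 1682964000, i: 0 }) }    // placeholder lower bound
  },
  { ts: 1, o: 1 }                                       // o holds the inserted document
).sort({ $natural: -1 }).limit(10)
```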
] | Find documents inserted in a command | 2023-05-09T08:59:38.955Z | Find documents inserted in a command | 311 |
null | [
"aggregation",
"indexes"
] | [
{
"code": "A$sortAA",
"text": "Mongo playground: a simple sandbox to test and share MongoDB queries onlineI have created the above query. There is an index on A, but before the $sort stage of the aggregation pipeline, data is modified. However, the field that is modified is not A.How it could use the index on A to perform the sort.Will it?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "$sort$match",
"text": "Hello @Big_Cat_Public_Safety_Act,The $sort operator can take advantage of an index if it’s used in the first stage of a pipeline or if it’s only preceded by a $match stage.",
"username": "turivishal"
},
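A small illustration of that rule; the collection and field names are hypothetical rather than taken from the playground link:

```js
// Index on { A: 1 } can be used: $sort is preceded only by $match.
db.items.aggregate([
  { $match: { status: "active" } },
  { $sort: { A: 1 } }
])

// Here a reshaping stage ($addFields on B) sits before the $sort, so in most
// cases the optimizer will not push the sort ahead of it and the sort runs
// in memory, even though field A itself is untouched.
db.items.aggregate([
  { $match: { status: "active" } },
  { $addFields: { B: { $toUpper: "$B" } } },
  { $sort: { A: 1 } }
])
```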
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Ensuring that the $sort stage uses indexes | 2023-05-08T19:20:49.529Z | Ensuring that the $sort stage uses indexes | 695 |
[] | [
{
"code": "",
"text": "hello,\nI’m using a custom score result from Atlas Search.\n\nimage799×455 16.3 KB\nI want the result score to go negative.",
"username": "wrb"
},
{
"code": "",
"text": "Hi @wrb,I want the result score to go negative.Could you provide further use case details regarding the score going negative? If you can also provide some examples to help clarify this then that would also be great.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "we are working on changing the existing recommendation list to atlas search,the user’s activity score is differentiated and given from 10 to -100 in the form of penalty points.\nthe intent is to push you to the bottom of a specific recommendation list.\nof course, i know that it is also possible to give a large base score and reduce it. but I’d like to fix it without changing the existing system significantly, so I’d like to know if it’s possible.\nimage904×800 70.1 KB\n",
"username": "wrb"
}
] | Minimum score for atlas search | 2023-05-09T19:07:23.463Z | Minimum score for atlas search | 539 |
|
null | [
"replication",
"compass",
"containers"
] | [
{
"code": "",
"text": "I have used the MongoDB 6.0 version image in Docker to create three containers, forming a replica set and mapping ports 30001, 30002, and 30003 respectively. While trying to connect using MongoDB Compass, I found that I can successfully connect using the following connection strings: mongodb://localhost:30001/?replicaSet=vision-set, mongodb://localhost:30002/?replicaSet=vision-set, and mongodb://localhost:30003/?replicaSet=vision-set. However, when I tried to connect using the connection string mongodb://localhost:30001,localhost:30002,localhost:30003/?replicaSet=vision-set, it failed to connect. I seek the help of experts to understand what is causing this issue.",
"username": "xx630133368"
},
{
"code": "",
"text": "While logged into a DB successfully with compass can you do a rs.status() and rs.status().members command from the shell command window that is on the bottom of compass?Also can you please provide the error you are getting with the last connection string that has all hosts?",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "connect ETIMEDOUT 172.30.0.4:27017",
"username": "xx630133368"
},
{
"code": "rs.status()\n{\n set: 'vision-set',\n date: ISODate(\"2023-05-10T01:46:15.515Z\"),\n myState: 2,\n term: Long(\"1\"),\n syncSourceHost: '172.30.0.4:27017',\n syncSourceId: 0,\n heartbeatIntervalMillis: Long(\"2000\"),\n majorityVoteCount: 2,\n writeMajorityCount: 2,\n votingMembersCount: 3,\n writableVotingMembersCount: 3,\n optimes: {\n lastCommittedOpTime: { ts: Timestamp({ t: 1683683174, i: 1 }), t: Long(\"1\") },\n lastCommittedWallTime: ISODate(\"2023-05-10T01:46:14.549Z\"),\n readConcernMajorityOpTime: { ts: Timestamp({ t: 1683683174, i: 1 }), t: Long(\"1\") },\n appliedOpTime: { ts: Timestamp({ t: 1683683174, i: 1 }), t: Long(\"1\") },\n durableOpTime: { ts: Timestamp({ t: 1683683174, i: 1 }), t: Long(\"1\") },\n lastAppliedWallTime: ISODate(\"2023-05-10T01:46:14.549Z\"),\n lastDurableWallTime: ISODate(\"2023-05-10T01:46:14.549Z\")\n },\n lastStableRecoveryTimestamp: Timestamp({ t: 1683683154, i: 1 }),\n electionParticipantMetrics: {\n votedForCandidate: true,\n electionTerm: Long(\"1\"),\n lastVoteDate: ISODate(\"2023-05-09T15:52:23.287Z\"),\n electionCandidateMemberId: 0,\n voteReason: '',\n lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1683647532, i: 1 }), t: Long(\"-1\") },\n maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1683647532, i: 1 }), t: Long(\"-1\") },\n priorityAtElection: 1,\n newTermStartDate: ISODate(\"2023-05-09T15:52:23.366Z\"),\n newTermAppliedDate: ISODate(\"2023-05-09T15:52:23.976Z\")\n },\n members: [\n {\n _id: 0,\n name: '172.30.0.4:27017',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 35642,\n optime: { ts: Timestamp({ t: 1683683174, i: 1 }), t: Long(\"1\") },\n optimeDurable: { ts: Timestamp({ t: 1683683174, i: 1 }), t: Long(\"1\") },\n optimeDate: ISODate(\"2023-05-10T01:46:14.000Z\"),\n optimeDurableDate: ISODate(\"2023-05-10T01:46:14.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-05-10T01:46:14.549Z\"),\n lastDurableWallTime: ISODate(\"2023-05-10T01:46:14.549Z\"),\n lastHeartbeat: ISODate(\"2023-05-10T01:46:14.708Z\"),\n lastHeartbeatRecv: ISODate(\"2023-05-10T01:46:13.855Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1683647543, i: 1 }),\n electionDate: ISODate(\"2023-05-09T15:52:23.000Z\"),\n configVersion: 102042,\n configTerm: -1\n },\n {\n _id: 1,\n name: '172.30.0.2:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 35642,\n optime: { ts: Timestamp({ t: 1683683174, i: 1 }), t: Long(\"1\") },\n optimeDurable: { ts: Timestamp({ t: 1683683174, i: 1 }), t: Long(\"1\") },\n optimeDate: ISODate(\"2023-05-10T01:46:14.000Z\"),\n optimeDurableDate: ISODate(\"2023-05-10T01:46:14.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-05-10T01:46:14.549Z\"),\n lastDurableWallTime: ISODate(\"2023-05-10T01:46:14.549Z\"),\n lastHeartbeat: ISODate(\"2023-05-10T01:46:14.709Z\"),\n lastHeartbeatRecv: ISODate(\"2023-05-10T01:46:14.754Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '172.30.0.4:27017',\n syncSourceId: 0,\n infoMessage: '',\n configVersion: 102042,\n configTerm: -1\n },\n {\n _id: 2,\n name: '172.30.0.3:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 35648,\n optime: { ts: Timestamp({ t: 1683683174, i: 1 }), t: Long(\"1\") },\n optimeDate: ISODate(\"2023-05-10T01:46:14.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-05-10T01:46:14.549Z\"),\n lastDurableWallTime: ISODate(\"2023-05-10T01:46:14.549Z\"),\n syncSourceHost: '172.30.0.4:27017',\n syncSourceId: 0,\n infoMessage: '',\n 
configVersion: 102042,\n configTerm: -1,\n self: true,\n lastHeartbeatMessage: ''\n }\n ],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1683683174, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1683683174, i: 1 })\n}\n\n",
"text": "",
"username": "xx630133368"
},
{
"code": "",
"text": "I have found and resolved the issue by myself. The problem was caused by using the Docker’s internal network IP addresses for the replica set nodes, while my MongoDB Compass was running on my local machine. When using the connection string mongodb://localhost:30001,localhost:30002,localhost:30003/?replicaSet=vision-set, it returned the internal IP address of the Docker nodes, which is not accessible from my local machine.To fix the issue, I modified the IP addresses of the replica set nodes to my external IP address during the replica set creation, which allowed me to successfully connect to the replica set from my local MongoDB Compass. I appreciate your response, which gave me some inspiration to solve the problem.",
"username": "xx630133368"
}
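For anyone hitting the same thing: drivers replace the seed list with the host names stored in the replica set configuration, so those names must be resolvable and reachable from the client machine. One way to fix an already-initiated set is to reconfigure the member hosts from mongosh; the hostnames below are placeholders for whatever address the client can actually reach:

```js
// Run against the PRIMARY. Point each member at an externally reachable
// address (and the mapped port) instead of the Docker-internal IP.
cfg = rs.conf()
cfg.members[0].host = "my-docker-host.example.com:30001" // placeholder
cfg.members[1].host = "my-docker-host.example.com:30002" // placeholder
cfg.members[2].host = "my-docker-host.example.com:30003" // placeholder
rs.reconfig(cfg)
```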
] | MongoDB-Compass connect error, why? | 2023-05-09T16:24:54.748Z | MongoDB-Compass connect error, why? | 795 |
null | [] | [
{
"code": "",
"text": "Hi, we are looking to migrate our hosted DB to Atlas. We went to download the cluster-to-cluster sync tool from Download MongoDB Command Line Database Tools | MongoDB but it looks like only Ubuntu 18 and 20 are available. Is there any timeline on the Ubuntu 22.04? Is the Ubuntu 20 version expected to work fine on 22?",
"username": "AmitG"
},
{
"code": "",
"text": "Hi Amit,Unfortunately i’m not aware of any timeline for the cluster-to-cluster sync tool being available for Ubuntu 22.04. In saying so, have you taken a look at the Live Migrate documentation for migrating data to Atlas?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks for the pointer!",
"username": "AmitG"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Any timelines on cluster-to-cluster sync for Ubuntu 22.04? | 2023-05-01T21:21:08.243Z | Any timelines on cluster-to-cluster sync for Ubuntu 22.04? | 498 |
null | [
"atlas-functions",
"atlas-triggers"
] | [
{
"code": "",
"text": "Hi,I noticed that in the Trigger logs there is a Request ID field. I’d like to include that Request ID in the metadata of an external request I make from inside my scheduled trigger function so that I’m able to correlate that external system to the specific execution of the trigger function that ran. Is there a way to get the Request ID of a function from inside that running function? I checked the context object but wasn’t able to find it there.Thanks!\n-George",
"username": "George_Price"
},
{
"code": "",
"text": "Hey @George_Price - Welcome to the community I’m currently not aware of any features or workarounds for how this would be possible (i.e. Getting the current requestID whilst the function is running).You could raise a feedback request and include your use case details so that others could vote for it too.Regards,\nJason",
"username": "Jason_Tran"
}
] | Getting requestId from inside a trigger function | 2023-05-03T22:21:00.544Z | Getting requestId from inside a trigger function | 818 |
null | [
"node-js",
"mongoose-odm",
"server",
"react-js"
] | [
{
"code": "mongod --version6.0.1brew services start mongodb-communityDatabase connected: mongodb://localhost:27017/wheelPuzzlesIt looks like you are trying to access MongoDB over HTTP on the native driver port{\"t\":{\"$date\":\"2022-11-06T16:41:17.247-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"[::1]:59904\",\"uuid\":\"02520f8e-5a07-43b0-bfb2-64c48b07359a\",\"connectionId\":3,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2022-11-06T16:41:17.248-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn3\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[::1]:59904\",\"uuid\":\"02520f8e-5a07-43b0-bfb2-64c48b07359a\",\"connectionId\":3,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2022-11-06T16:41:17.248-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"[::1]:59905\",\"uuid\":\"03a595a8-c6b8-4f9d-b3e9-9dbc2987c4f2\",\"connectionId\":4,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2022-11-06T16:41:17.249-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22988, \"ctx\":\"conn4\",\"msg\":\"Error receiving request from client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":17,\"codeName\":\"ProtocolError\",\"errmsg\":\"Client sent an HTTP request over a native MongoDB connection\"},\"remote\":\"[::1]:59905\",\"connectionId\":4}}\n{\"t\":{\"$date\":\"2022-11-06T16:41:17.249-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn4\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"[::1]:59905\",\"uuid\":\"03a595a8-c6b8-4f9d-b3e9-9dbc2987c4f2\",\"connectionId\":4,\"connectionCount\":0}}\n",
"text": "I’m totally new to node/mongodb… I’m trying to retrieve data for my react app using Mongoose on Mac OS Catalina. Running mongod --version gives me 6.0.1. I have MongoDB running (I ran brew services start mongodb-community and my server consoles out Database connected: mongodb://localhost:27017/wheelPuzzles) but when I try to hit my GET endpoint using axios both in browser and Postman, I get:\nIt looks like you are trying to access MongoDB over HTTP on the native driver port.\nI’ve been searching for 2 days now and cannot figure out why I’m getting this error. What am I missing please? Can anyone help? Thanks!This is my error log, in case that helps:",
"username": "TeeEm"
},
{
"code": "axios.get()",
"text": "Welcome to the MongoDB community @TeeEm !The error message you are encountering indicates a misconfigured client is making a connection to your MongoDB deployment and sending HTTP requests instead of using the MongoDB Wire Protocol.How is your Axios app loading Mongoose to connect to your deployment? Can you share a snippet of code with any credentials redacted?I suspect you may have an axios.get() request that is trying to fetch a Mongoose model or MongoDB URI (which would result in this error).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "const app = require(\"express\")();\nconst PORT = 27017;\nconst express = require(\"express\");\nconst Puzzle = require(\"./src/models/puzzle.js\");\n\nconst mongoose = require(\"mongoose\");\nconst url = \"mongodb://localhost:27017/wheelPuzzles\";\nconst db = mongoose.connection;\nconst bodyParser = require(\"body-parser\");\nmongoose.connect(url, { useNewUrlParser: true });\n\ndb.once(\"open\", (_) => {\n console.log(\"Database connected:\", url);\n});\n\ndb.on(\"error\", (err) => {\n console.error(\"connection error:\", err);\n});\n\napp.use(express.json());\napp.use(bodyParser.urlencoded({ extended: true }));\napp.listen(PORT, () => console.log(`IT'S WORKING on http://localhost:${PORT}`));\n\napp.get(\"/puzzle\", (req, res) => {\n Puzzle.aggregate([{ $sample: { size: 1 } }]).then((data) => {\n res.send(data);\n });\n});\nconst mongoose = require(\"mongoose\");\nconst Schema = mongoose.Schema;\n\nconst PuzzleSchema = new Schema({\n category: String,\n puzzle: String,\n});\n\nmodule.exports = mongoose.model(\"Puzzle\", PuzzleSchema);\nuseEffect(() => {\n axios\n .get(\"puzzle\")\n .then(function (response) {\n console.log(response);\n })\n .catch(function (error) {\n console.log(error);\n });\n }, []);\n",
"text": "Thank you for your response Stennie!\nI’m providing some code that’ll hopefully help.This is my index.js:puzzle.js:and useEffect from App.js:",
"username": "TeeEm"
},
{
"code": "",
"text": "Bumping this question, in case anyone has any ideas about how to solve this error? Thanks!",
"username": "TeeEm"
},
{
"code": "",
"text": "The problem is you are using the same port for Node/Express server as MongoDB default port : const PORT = 27017;\nReplace this constant with common port number used for Node/Express, such as 3000, 8000, 9000, etc.\nYou can then try to get data/resource from available endpoint (route) defined by your Node/Express app in the browser, like so :\nlocalhost:3000/puzzle\nassuming you are using 3000 as the port number for your Node server. If this works, the data fetching in your React app should work too.",
"username": "Lex_Soft"
},
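For reference, a minimal corrected setup along those lines (the port number is arbitrary; anything other than 27017 works):

```js
const express = require("express");
const mongoose = require("mongoose");

const app = express();
const PORT = 8000; // Express's own port, must not collide with MongoDB's 27017
const url = "mongodb://localhost:27017/wheelPuzzles"; // MongoDB keeps its default port

mongoose.connect(url, { useNewUrlParser: true });
app.listen(PORT, () => console.log(`API listening on http://localhost:${PORT}`));
```

The React app then calls http://localhost:8000/puzzle, while the driver keeps talking to MongoDB on port 27017.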
{
"code": "",
"text": "That was it! I changed the port for Node/Express to 8000 and it works beautifully now.Your response also helped me solve a second problem I had, where the front end wasn’t connecting properly with the backend. I was using the wrong port in the URL for my API call too and changing the port number there fixed that problem also.Thanks so much for your help!",
"username": "TeeEm"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error in react app: "It looks like you are trying to access MongoDB over HTTP on the native driver port" | 2022-11-06T21:51:46.341Z | Error in react app: “It looks like you are trying to access MongoDB over HTTP on the native driver port” | 11,832 |
null | [] | [
{
"code": "",
"text": "We use partition-based sync for our mobile app. In our public realm, we have an Event (ID, name, start time, end time, etc). In our user realm, we have EventRegistration (EventID, etc). We have a requirement to show a page with all Events grouped by whether or not the user is registered for it. I would love to be able to use GroupedRealmCollection because we could rely completely upon the Realm library to sort the Events into groups. However, it doesn’t seem possible because Event and EventRegistration are in different realms. Am I missing something?I also realize we could use backlinks for this use case. But again: do they work across realms?Probably what I need is some best-practices guidance for designing realms to simplify querying across them.",
"username": "Philip_Hadley"
},
{
"code": "",
"text": "Hello Philip,Short no, long answer yes.\nYou can build and put in an infinite number of Realm apps provided they all use different keys, yada yada, but they don’t talk to each other.So if you want them to interact with each other, it wouldn’t be directly with each other. What you need to do is build packages to take the information Realm is bringing in, and convert/modify accordingly to new JSON documents, with a third Realm to ingest the documents and project the changes your service is creating.So far this is the only means I’ve successfully created results like you’re asking for. (Mind you this was in April of 2022) In a lot of cases this can be a hindrance, while some it can be a major security feature as some compliance requirements even if the app holds the same information, that information has to be segregated somehow.How I did it:\nI synced MongoDB Realm instances to two separate Core Data’s, and then synced the two core data’s so they were “one” core data but both Realms could see the information. This is how I made two Realm instances in the same app see each others collections and from there, I used Swift packages I built to transform results as necessary which then were ingested by a third RealmDB, and then whatever data wasn’t necessary from the other two collections were deleted.Now thinking about it, you could do this without the third, but the third just made it easier to organize.",
"username": "Brock"
},
{
"code": "",
"text": "Also, you can do the above I mentioned to make two Atlas DBs communicate with one another via Realm in the same mobile application. You take the Realm App from one Atlas cluster, and a Realm App from another, implement device sync, implement both Realm apps locally into the app and then connect both Realms to the Core Data in iOS or Rooms in Android, and then each Atlas DB can be synced to each other + the app.You can also do this to make a failover. So if Realm in one Atlas has a Bad changeset, you won’t care so much because the other Realm app will keep you running, and CoreData will maintain integrity between both Realms, so when you term and resync the failed Realm, you won’t lose unsynced data.",
"username": "Brock"
},
{
"code": "",
"text": "Whoa. Seems complicated just to achieve grouping data in a front-end UI.Something I didn’t explain in my original question: We currently have both Events and EventRegistrations in the “user” realm, but we don’t like this configuration because it means the size of the data in Mongo grows so fast as you add more users. Because the Event data is duplicated for each user. So, the idea is to put Events into a “public” realm - this way, we can just store one copy in the back end, and it will sync down to all devices. Problem is, it seems we can’t group the Events using out-of-the-box Realm functionality (e.g. GroupedRealmCollection or backlinks). So, we may end up with a custom solution for this very common problem, which seems messed up. Realm should support this, no?So, this is the Mongo/Realm design issue that we are up against.",
"username": "Philip_Hadley"
},
{
"code": "",
"text": "Oh, I would just use one collection app for users and one for event, and just use app logic to confirm if a user has paid for, checked into, etc an event as the second to just display the events.To be honest with you, there’s actually a lot with an app like that you need to consider, you also should implement another collection for the user demographic data, another for user and event metrics like how often a user attends, how often an event is viewed in the app etc as well.",
"username": "Brock"
},
{
"code": "",
"text": "@Philip_HadleyYou actually need separate apps anyway that can’t talk to each other, because you’re going to need a separate collection/realm app for transactions for a myriad of compliance regulations, and making accounting easier.Such as referencing the user object ID and tying transactions to the user, as you can’t have credit card information etc. be exposed to other user data, should a breach occur on say a users private info, that’s going to be different from a breach occurring and exposing a users credit card and having it also identify the user and the users other private info.Keeping these separated can save you potentially millions in fines by the FCRA and FDIC when they turn around and go after the source of a breach that caused CC Fraud. You do want to make sure that you set this up if you plan to store anything credit card related. Otherwise you’d want to make sure you use a service like Stripe, or WooCommerce, Apple Pay, etc.Another reason is you want to have an easy way to see user financial transactions and make things easier for your bookkeeper for tax withholdings and so on. you’ll want another collection with aggregations in Atlas to compute everything for you. As you’ll also need to make sure all federal and state taxes are observed on the users, too which vary state by state that you’ll need to make sure are in place. This is all easier to organize with separate collections and applicable Realm apps to just route it all where needed, but you can of course keep that within the user category but it would mean more computing needed, more aggregations and indexes to separate the information from the rest of the user information and event information.In recap, have each user get assigned a specific user ID, and have that User ID be what ties a user to transactions in a separate collection, separate realm app.Transaction data must be kept for 7 years for compliance, and will always be auditable by the IRS and any state revenue department. Planning your Realm Apps for this app accordingly will actually save you a lot of time and prevent a lot of problems if you play this smart. And it’ll keep all of your realm data small respectively.",
"username": "Brock"
},
{
"code": "",
"text": "Hi there! Apologies if I’m missing something, but it’s still not clear to me from this why you’re splitting the data across two separate Realms. Nothing you’ve described requires it, I think, and as you’ve pointed out grouping the Events would be much easier if all the data was in the same Realm.\nAs Brock suggests, you can avoid duplication of Events if you had separate Tables for Events and Users, instead of duplicating each Event under each user.\nCould you elaborate why they are separated?",
"username": "Sudarshan_Muralidhar"
},
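To make that concrete, if both object types are opened in the same realm the grouping needs no cross-realm links at all. A rough Realm JS sketch; the class names, field names, and the realm and userId variables are illustrative assumptions, not taken from the app in question:

```js
// Partition all events by whether the current user has a registration for them.
const events = realm.objects("Event");
const regs = realm.objects("EventRegistration").filtered("userId == $0", userId);
const registeredIds = new Set(Array.from(regs).map(r => String(r.eventId)));

const registeredEvents = Array.from(events).filter(e => registeredIds.has(String(e._id)));
const otherEvents = Array.from(events).filter(e => !registeredIds.has(String(e._id)));
```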
{
"code": "",
"text": "I think what they are trying to do is shorten the size of the collections themselves, which splitting events from users would be ideal, as then the event you just tie in via transactions who attended the event by the unique user IDs.But realistically though, at least for financial data, they would want that on another collection for accounting and regulatory simplicity. A lot of aspects in an app like that if it involves commercial sales, particularly for the metrics and user activity. What events a user views more, how long do they spend reading an event description etc.I can see where multiple realm apps in the same mobile app would be ideal like that, as organically on the cluster each Realm App is a collection of its own, and just a single app can balloon in size which is what the OP is realizing.",
"username": "Brock"
},
{
"code": "",
"text": "Thanks all, we really appreciate all the feedback. By the way I’m on the same team as Philip Hadley and what we are really looking for here is to make sure our understanding is correct and possibly get some recommendations.To recap: the business application simply deals with events (in that any user can see all current and upcoming events) as well as registrations (a user can be registered to 0 or many events and only they can see their registration info).As Philip explain in the first iteration in the app we design our documents to contain both the event info, the user contact info, and the registration if the user was registered for an event. Note that we still created a document event when a user was not registered (just without the registration Id). This approach was to ensure that every user of the app fetch all events via the user partition-based sync. As Brock mentioned this meant that the size of the collection on the server would quickly grow and become a burden to manage is that updating 1 event end date for example meant we needed to update the entire set of documents for all users.For this reason, we are changing the implementation now such that events will be in a global partition while registrations would remain as a user partition. While this is solving the initial problem of data proliferation on the mongo side, we came to understand that it is not possible to directly query the data against multiple realms even when there is “loose relationship” (a user registration contains the event code/ID). From the information we have seen this far it would seem that we need to perform 2 queries and relate the data within the app with some custom code. While this is possible it does feel like the sync created a problem that Realm was design to solve (in that Realm can maintain relationships within a data model).Is this conclusion correct? Meaning does the partition sync take away the data relationships that previously could be define in realm? Are there any work arounds? The current example is somewhat simple, however as the app grows, for future use cases we are thinking of migrating to the flexible sync which would create more partitions and further “break” the data model relationships.",
"username": "Patrick_Timothee"
},
{
"code": "",
"text": "This is why I was suggesting having Events separate from users, and you use app logic to then tie the users to the event so then you only have one event for everyone, and the event itself keeps a list of users who signed up to attend.Personally would go Flexible Sync outright from the beginning as it’s simpler and solves a lot of problems you can experience otherwise.",
"username": "Brock"
},
{
"code": "",
"text": "Hello @Patrick_Timothee,Thanks for raising the concern. We have not heard from you in a while, could you please confirm this issue has been resolved for you?s this conclusion correct? Meaning does the partition sync take away the data relationships that previously could be defined in realm?Partition Sync does not take away relationships. It’s defined differently on the cloud i.e. in MongoDB. You can read more information on Realm Relationships here.The difference between Flexible and Partition Sync has also been highlighted in the bytes.Note: There may be a few changes since the time the topic was posted but the general idea is still the same.I hope the provided information is helpful.I look forward to your response.Cheers, \nHenna",
"username": "henna.s"
}
] | Grouping data queried from two realms | 2023-03-28T21:53:41.713Z | Grouping data queried from two realms | 1,539 |
null | [] | [
{
"code": "",
"text": "Mongo 6.0.5 I previously used pre-images in 2 triggers. I have since refactored and redeployed my triggers removing the need for preimages and toggled preimage OFF for both. I had 2 collections set under my Linked Data Sources - Data Source Configuration - Advanced which I disabled preimage collection.1 of my 2 collections re-enables itself after a number of hours. I have attempted to disable both from the command line and from the atlas GUI. I have done this 5 times now. I have verified none of my triggers have preimage collection ON.What else can be causing the automatic re-enablment?",
"username": "Kristen_Varona"
},
{
"code": "",
"text": "Hi @Kristen_Varona - Welcome to the community I previously used pre-images in 2 triggers.1 of my 2 collections re-enables itself after a number of hours.Trying to see if I can replicate this behaviour - Just want to confirm the scenario above to see if my understanding is correct:Have you also checked the deployment history to see if there were any deployments you aren’t aware of that could’ve possibly made this change?I’ll check to see if there are any other possibilities of how this could be occurring as well.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason,Today, I re-verified that my pre-image for my collection is still OFF. So this is good news. I thought I was going crazy, but I did have to disable this collection pre-image a total of 5 times. Currently its OFF, I am continuing to monitor to be sure it doesn’t come back.\nEven if it stays off right now, I am very curious of the root cause to ensure I do not get into this situation again.",
"username": "Kristen_Varona"
},
{
"code": "console.log()",
"text": "Hi @Kristen_Varona,Thanks for confirming the information. I created 2 triggers with the same configuration regarding pre-images. I’ve set it to operate on all operations but it just performs a console.log() - This probably won’t matter too much but it would at least allow me to know the trigger is functioning.If the issue occurs again, I would recommend contacting Atlas in-app chat support providing the trigger links although I am hoping it won’t change again.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you Jason, I appreciate the effort.\nI still don’t understand my root cause as to why it was re-enabling and that bothers me. I didn’t do/make any differences in the 5 attempts to turn it off. It seems obvious that I did something to cause it and am hopeful I can eventually figure out what it was to avoid it happening again. If I figure it out, I will most certainly update this thread.",
"username": "Kristen_Varona"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Preimage on collection re-enables itself - why? | 2023-05-05T14:48:03.043Z | Preimage on collection re-enables itself - why? | 924 |
null | [] | [
{
"code": "",
"text": "Saw this error today:translator failed to complete processing batch: failed to update resume token document: connection(ac-tc9kf7s-shard-00-02.hc0ptvn.mesh.mongodb.net:30448[-314285]) socket was unexpectedly closed: EOF",
"username": "vybzteam"
},
{
"code": "",
"text": "Hello @vybzteam ,Thank you for raising your concern. I have moved your question into a new topic, for better visibility and separation of concerns.Could you please confirm if you are still facing this error? Is your app working fine or what impact are you observing?I look forward to your response.Cheers, \nHenna",
"username": "henna.s"
}
] | Atlas Device Sync: Translator Error | 2023-05-04T15:11:01.284Z | Atlas Device Sync: Translator Error | 602 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 5.0.18-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.17. The next stable release 5.0.18 will be a recommended upgrade for all 5.0 users.Fixed in this release:5.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 5.0.18-rc1 is released | 2023-05-09T11:22:59.080Z | MongoDB 5.0.18-rc1 is released | 848 |
[
"serverless",
"hyderabad-mug"
] | [
{
"code": "",
"text": "\nMUG_FEB_41440×810 120 KB\nHyderabad MongoDB User Group is organizing a meetup on Saturday, February 4, 2023, 12:00 PM IST at Microsoft Hyderabad office .The whole event is modelled to help you understand Serverless Architecture and MERN-stack development. In the beginning, Vipul Chakravarthy , (Full Stack Developer at BrowserStack) will introduce you to MERN-Stack development.In the second session Atulpriya Sharma , (Developer Advocate at InfraCloud) will be sharing his knowledge on Demystifying Serverless with Fission Open Source Framework. Lunch would be on us from the best restaurant on the list. We will also have a Fun Trivia after the sessions and Networking Time to meet some of the regional developers, customers, architects, and experts. Not to forget there will also be swags to win!To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you RSVPed correctly. You need to be signed in to access the button.Event Type: In-Person\nLocation: Microsoft Building 3, Gachibowli, Telangana 500032\nvipul1920×2025 300 KB\n\n1640086094674800×800 80.6 KB\n\n1655742220279800×800 156 KB\nEvent Type: In-Person\nLocation: Microsoft Building 3, Gachibowli, Hyderabad, Telangana",
"username": "Archy_Gupta"
},
{
"code": "",
"text": "To RSVP - Please click on the “ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you RSVPed. You need to be logged in to access the button. ",
"username": "Harshit"
},
{
"code": "",
"text": "Unable to do RSVP\nIs there any other way to register for this event…?",
"username": "Gangadhar_Bhuvan"
},
{
"code": "",
"text": "Hey @Gangadhar_Bhuvan,\nThe seats are currently full. However, we are confirming availability with the current RSVPs and will open slots based on it.Will post here as soon as more slots open. ",
"username": "Harshit"
},
{
"code": "",
"text": "Is there any possibility for applying now",
"username": "Rahul_Rs2"
},
{
"code": "",
"text": "Hi @Rahul_Rs2. As @Harshit mentioned, we will be announcing new slots very soon (early next week). Do watch out!",
"username": "Yashraj_Kakkad"
},
{
"code": "",
"text": "Will there be any on-spot registrations??",
"username": "Tarun_Aswini"
},
{
"code": "",
"text": "Hi Everyone \nWe have opened 50 more slots Please RSVP if you are planning to attend.Thanks\nHarshit",
"username": "Harshit"
},
{
"code": "",
"text": "I’ve RSVPed. Will I get any confirmation ticket to enter the event on 4 Feb 2023?",
"username": "Abbas_Hussain"
},
{
"code": "",
"text": "Hey @Abbas_Hussain,\nYou will receive a confirmation email from us a couple of days before the event.",
"username": "Harshit"
},
{
"code": "",
"text": "When will we get the confirmation email? Only 3 days left for the event…",
"username": "Mohtasham_Sayeed_Mohiuddin"
},
{
"code": "",
"text": "The confirmation emails will be sent our by Thursday eod ",
"username": "Harshit"
},
{
"code": "",
"text": "i do also want to register but the slots are full ",
"username": "sanjay_Prajapati"
},
{
"code": "",
"text": "Hey @sanjay_Prajapati,\nVery sorry for that, we have limited venue capacity and won’t be able to accommodate more for this event. In the future, we will try and host bigger events to accommodate everyone Please join the group to stay abreast of the events in the future.",
"username": "Harshit"
},
{
"code": "",
"text": "What is the complete agenda and duration of the event?\n@Harshit",
"username": "Mohtasham_Sayeed_Mohiuddin"
},
{
"code": "",
"text": "I have RSVPed, but didn’t got confirmation mail yet. Some of my friends got the mail. What should i do now?",
"username": "Mohd_Abdul_Aleem"
},
{
"code": "",
"text": "Is there…\nAny Whatsapp Group",
"username": "GANAGANI_SAI_CHARAN"
},
{
"code": "",
"text": "Hey All,\nWe have a meetup planned for this Sunday 14th May and have few spots left, please RSVP if you are interested in attending:",
"username": "Harshit"
}
] | Hyderabad MUG: Demystifying Serverless & MERN stack! | 2023-01-20T08:52:45.021Z | Hyderabad MUG: Demystifying Serverless & MERN stack! | 5,011 |
|
[
"atlas-functions",
"app-services-user-auth"
] | [
{
"code": "",
"text": "Hi,I am trying to implement email/pwd authentication and would like to call the confirmUser sdk method from within an existing Realm function. I don’t want to call confirmUser from the web, a React app, etc. How to I get access to the function?Thank You",
"username": "Herb_Ramos"
},
{
"code": "",
"text": "Anyone from Mongo want to chime in? Someone had asked a similar question several months ago and never got a reply.",
"username": "Herb_Ramos"
},
{
"code": "",
"text": "@Herb_Ramos : Have you considered exploring our admin api’s for this purpose?",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "I have not. I will give it a try and report back.",
"username": "Herb_Ramos"
},
{
"code": "",
"text": "Wow. I had no idea Realm Apis existed. Mongo needs to do a better job at documentation. Even Google searches don’t mention the Realm Api’s. Thanks Mohit!",
"username": "Herb_Ramos"
},
{
"code": "",
"text": "Why not use a confirmation function instead? You certainly can call the admin API instead, but if you’re just trying to confirm users based on some backend information, that’s what confirmation functions are instead. You can select it in the options for email/password authentication.",
"username": "Nathan_Contino"
},
{
"code": "",
"text": "Hi Nathan. I guess I could, but I don’t want the client to confirm. I want to complete the user confirmation in a Mongo function. I’m not sure if I am clear without giving more details on my use case.",
"username": "Herb_Ramos"
},
{
"code": "",
"text": "Did you have any luck on solving this problem?",
"username": "Silas_Jeydo"
},
{
"code": "",
"text": "I have the same problem: I didn’t really understood where the confirmUser function should be executed from.\nMy best guess was to create a https endpoint with a function, and passing that endpoint address to my Email URL. However when i try to add realm as a dependency to the endpoint, i get the following error:",
"username": "Riccardo_Cuccia"
}
] | Call confirmUser in Realm function | 2021-08-12T12:22:54.284Z | Call confirmUser in Realm function | 5,444 |
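A note for readers of this thread: the Admin API approach suggested above can be scripted from a backend service. The sketch below is only an illustration in Python using the requests library; the group/app/user IDs and API keys are placeholders, and the exact confirm endpoint path is an assumption that should be verified against the current App Services Admin API reference.

```python
# Hypothetical sketch: confirming a pending email/password user via the
# App Services Admin API. The confirm endpoint path is an assumption --
# check the Admin API reference for your deployment before relying on it.
import requests

ADMIN_BASE = "https://realm.mongodb.com/api/admin/v3.0"
GROUP_ID = "<atlas-project-id>"   # placeholder
APP_ID = "<app-internal-id>"      # placeholder (the app's internal _id)

# Exchange a programmatic API key pair for an admin access token.
login = requests.post(
    f"{ADMIN_BASE}/auth/providers/mongodb-cloud/login",
    json={"username": "<public-api-key>", "apiKey": "<private-api-key>"},
)
token = login.json()["access_token"]

# Confirm a pending user (endpoint path assumed, not verified here).
user_id = "<pending-user-id>"
resp = requests.post(
    f"{ADMIN_BASE}/groups/{GROUP_ID}/apps/{APP_ID}/users/{user_id}/confirm",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
```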
|
null | [
"java"
] | [
{
"code": "Mar 15, 2023, 2:06:35 PM org.apache.catalina.core.StandardWrapperValve invoke\nSEVERE: Servlet.service() for servlet [appServlet] in context with path [/ws] threw exception [Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timeout waiting for a pooled item after 30000 MILLISECONDS; nested exception is com.mongodb.MongoTimeoutException: Timeout waiting for a pooled item after 30000 MILLISECONDS] with root cause\ncom.mongodb.MongoTimeoutException: Timeout waiting for a pooled item after 30000 MILLISECONDS\n<mongo:mongo-client credentials=\"${ddi.mongo.app.user}:${ddi.mongo.app.password}@${ddi.mongo.app.authenticationDatabase}\" id=\"mongoClient\" replica-set=\"${ddi.mongo.app.machine.one}:${ddi.mongo.app.port},${ddi.mongo.app.machine.two}:${ddi.mongo.app.port},${ddi.mongo.app.machine.three}:${ddi.mongo.app.port}\">\n <mongo:client-options connections-per-host=\"5\"\n threads-allowed-to-block-for-connection-multiplier=\"10\"\n connect-timeout=\"100000\"\n max-wait-time=\"30000\"\n socket-keep-alive=\"true\"\n socket-timeout=\"1000000\"\n write-concern=\"NORMAL\"\n read-preference=\"PRIMARY_PREFERRED\"/>\n</mongo:mongo-client>\n",
"text": "Hi Guys,Looking for MongoDB’s help with the issue we are facing\nIssue: MongoDB instance connection timeout issue with instance unavailable message in the log file when MongoDB primary instance goes into lock status while backupNote : We are not able to replicate the issue from the dev instance with the same Mongo client configuration and we are able to read the data from the DB instance from Intellij Idea Database Pane.Error from the tomcat log:MongoDB Client configuration :",
"username": "Prasad_Basutkar"
},
{
"code": "fsyncLock()",
"text": "Hi @Prasad_Basutkar and welcome to MongoDB community forums!!I see that your post hasn’t been addressed yet.\nAre you still facing the issue with the system?MongoDB instance connection timeout issue with instance unavailable message in the log file when MongoDB primary instance goes into lock status while backupBased on the above statement, I understand that you are using the fsync process on the primary member of the replica set?\nAlthough using fsyncLock() is one method to perform a backup, it’s disruptive to the server’s operation since all writes are forbidden while the server is in this state.If you wish to backup a replica set configuration, you could follow the steps mentioned in the documentation Restore a Replica Set from MongoDB Backups.Regards\nAasawari",
"username": "Aasawari"
}
] | mongoDB timeout issue from Java Client | 2023-03-16T09:05:28.759Z | mongoDB timeout issue from Java Client | 1,307 |
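To make the fsyncLock() point in the reply above concrete: while a node is fsync-locked for a file-level backup, writes are blocked, and client applications can pile up waiting for pooled connections and hit timeouts like the one in the question. A minimal sketch (shown in Python with PyMongo rather than the Java driver used in the thread; the URI is a placeholder):

```python
# Sketch: fsync-lock a node for a file-level backup, then unlock it.
# While the lock is held, writes are blocked, so connected apps may time out.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI

client.admin.command("fsync", lock=True)   # flush to disk and block writes
try:
    pass  # take the filesystem snapshot / copy the data files here
finally:
    client.admin.command("fsyncUnlock")    # release the lock as soon as possible
```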
null | [] | [
{
"code": "",
"text": "Hi\nI’m building my first back-end project, that will process a lot of data daily.\nIt will get all ongoing streams from Twitch API, make some processing and save detailed data into DB. From what I know already - there will be a lot of documents in some Models. Here is a question:I’m curious if I should probably make some kind of coding/decoding stage for keys, so they will be stored in database as:\n{\na: value,\nb: value,\n…,\nz: value\n}instead of:\n{\nstreamId: value,\nstartedAt: value,\nconcurrentViewers: value\n…\n}",
"username": "Rafal_Nawojczyk"
},
{
"code": "",
"text": "Does short keys really save some disk spaceYes.if I should probably make some kind of coding/decoding stage for keysTradeoff: human readability vs storage cost/overhead",
"username": "Kobe_W"
},
{
"code": "",
"text": "Make yourself a favour. Use readable names while you develop it will help you. Do this kind of optimization later and only if you have an issue. Early optimization is never a good idea.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for replies guys.\nI will have probably like 10MIL+ documents in channels collection, lots more in streams collection. That’s why I’m looking for options that early.\nOf course - I will run my app without these optimizations yet, and will check if they make sense, but I will know where to seek these optimizations now. Thanks once again",
"username": "Rafal_Nawojczyk"
}
] | Are short key names saving space in mongoDB? | 2023-05-08T15:24:26.029Z | Are short key names saving space in mongoDB? | 315 |
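One quick way to see the per-document effect discussed above is to compare encoded BSON sizes directly, since field names are stored in every document. A small sketch in Python (the field names are just the ones from the question):

```python
# Compare the encoded BSON size of long vs. short field names.
import bson

long_keys = {"streamId": 12345, "startedAt": "2023-05-08T00:00:00Z", "concurrentViewers": 987}
short_keys = {"a": 12345, "b": "2023-05-08T00:00:00Z", "c": 987}

print(len(bson.encode(long_keys)))   # larger: every document repeats the key names
print(len(bson.encode(short_keys)))  # smaller: same values, shorter keys
```

Note that WiredTiger block compression usually absorbs much of this difference on disk, which supports the advice above to optimize only if it turns out to be a real issue.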
null | [
"queries",
"java",
"spring-data-odm"
] | [
{
"code": "spring-boot-starter-data-mongodb 2.7.13.0.0 <mongo:db-factory id=\"mongoDbFactory\" connection-string=\"mongodb+srv://user:pwd@host/database?retryWrites=true\"/>\n <bean id=\"mongoTemplate\" class=\"org.springframework.data.mongodb.core.MongoTemplate\" >\n <constructor-arg ref=\"mongoDbFactory\" />\n </bean>\n <bean class=\n \"org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor\"/>\n java.lang.IllegalStateException: state should be: open\njava.lang.IllegalStateException: state should be: open\n\tat com.mongodb.assertions.Assertions.isTrue(Assertions.java:79)\n\tat com.mongodb.internal.connection.BaseCluster.getDescription(BaseCluster.java:165)\n\tat com.mongodb.internal.connection.AbstractMultiServerCluster.getDescription(AbstractMultiServerCluster.java:50)\n\tat com.mongodb.client.internal.MongoClientDelegate.getConnectedClusterDescription(MongoClientDelegate.java:144)\n\tat com.mongodb.client.internal.MongoClientDelegate.createClientSession(MongoClientDelegate.java:101)\n\tat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.getClientSession(MongoClientDelegate.java:291)\n\tat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:183)\n\tat com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135)\n\tat com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92)\n\tat org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:2968)\n\tat org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:2692)\n\tat org.springframework.data.mongodb.core.ExecutableFindOperationSupport$ExecutableFindSupport.doFind(ExecutableFindOperationSupport.java:220)\n\tat org.springframework.data.mongodb.core.ExecutableFindOperationSupport$ExecutableFindSupport.oneValue(ExecutableFindOperationSupport.java:132)\n\tat org.springframework.data.mongodb.repository.query.AbstractMongoQuery.lambda$getExecution$6(AbstractMongoQuery.java:188)\n\tat org.springframework.data.mongodb.repository.query.AbstractMongoQuery.doExecute(AbstractMongoQuery.java:152)\n\tat org.springframework.data.mongodb.repository.query.AbstractMongoQuery.execute(AbstractMongoQuery.java:127)\n\tat org.springframework.data.repository.core.support.RepositoryMethodInvoker.doInvoke(RepositoryMethodInvoker.java:137)\n\tat org.springframework.data.repository.core.support.RepositoryMethodInvoker.invoke(RepositoryMethodInvoker.java:121)\n\tat org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.doInvoke(QueryExecutorMethodInterceptor.java:160)\n\tat org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.invoke(QueryExecutorMethodInterceptor.java:139)\n\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\n\tat org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:81)\n\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\n\tat org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)\n\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\n\tat org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)\n\tat com.sun.proxy.$Proxy39.findByName(Unknown Source)\n\tat 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.base/java.lang.reflect.Method.invoke(Method.java:566)\n\tat org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)\n\tat org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)\n\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)\n\tat org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:137)\n\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\n\tat org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)\n\tat com.sun.proxy.$Proxy44.findByName(Unknown Source)\n\tat edu.illinois.techservices.cloudbroker.core.http.Oauth2StateRepoService.findByName(Oauth2StateRepoService.java:22)\n\tat edu.illinois.techservices.cloudbroker.box.CoreConnectionState$TokenReader.call(CoreConnectionState.java:360)Preformatted text\n",
"text": "Hello,I have a Java application that uses spring-boot-starter-data-mongodb 2.7.1(also tries 3.0.0) to connect to MongoDB. The configuration is in XML.The connection has been established when the application starts and I do see that the data has been retrieved from the mongo atlas database. Once, the user request is sent, and I try to get the data I seePlease find the log trace here:This used to work before. I tried to figure out where the connection has lost. Any help would be appreciated.Thanks,\nPrasanna",
"username": "Prasanna_Bale"
},
{
"code": "2023-04-27 15:39:13,498 : INFO : [cluster-ClusterId{value='644a971fd5dd176dd4946323', description='null'}-cloud-dashboard-ohio-shard-00-00.sa7ls.mongodb.net:27017] : org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=cloud-dashboard-ohio-shard-00-00.sa7ls.mongodb.net:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1240940200, setName='atlas-yyxdyg-shard-0', canonicalAddress=cloud-dashboard-ohio-shard-00-00.sa7ls.mongodb.net:27017, hosts=[cloud-dashboard-ohio-shard-00-00.sa7ls.mongodb.net:27017, cloud-dashboard-ohio-shard-00-01.sa7ls.mongodb.net:27017, cloud-dashboard-ohio-shard-00-02.sa7ls.mongodb.net:27017], passives=[], arbiters=[], primary='cloud-dashboard-ohio-shard-00-01.sa7ls.mongodb.net:27017', tagSet=TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='US_EAST_2'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=2, topologyVersion=TopologyVersion{processId=64494b2a9e2222a038e462d8, counter=4}, lastWriteDate=Thu Apr 27 15:39:12 UTC 2023, lastUpdateTimeNanos=1881230175253897}\n2023-04-27 15:39:13,504 : INFO : [cluster-ClusterId{value='644a971fd5dd176dd4946323', description='null'}-cloud-dashboard-ohio-shard-00-01.sa7ls.mongodb.net:27017] : org.mongodb.driver.cluster : Setting max election id to 7fffffff0000000000000010 from replica set primary cloud-dashboard-ohio-shard-00-01.sa7ls.mongodb.net:27017\n2023-04-27 15:39:13,504 : INFO : [cluster-ClusterId{value='644a971fd5dd176dd4946323', description='null'}-cloud-dashboard-ohio-shard-00-01.sa7ls.mongodb.net:27017] : org.mongodb.driver.cluster : Setting max set version to 2 from replica set primary cloud-dashboard-ohio-shard-00-01.sa7ls.mongodb.net:27017\n2023-04-27 15:39:13,505 : INFO : [cluster-ClusterId{value='644a971fd5dd176dd4946323', description='null'}-cloud-dashboard-ohio-shard-00-01.sa7ls.mongodb.net:27017] : org.mongodb.driver.cluster : Discovered replica set primary cloud-dashboard-ohio-shard-00-01.sa7ls.mongodb.net:27017\n\n",
"text": "Here are the logs that shows the connection to mongo atlas is made:",
"username": "Prasanna_Bale"
},
{
"code": "MongoClientMongoClientMongoClientMongoClient",
"text": "Hello @Prasanna_Bale, welcome to the MongoDB community forums.Based on the error message you provided, it appears that you might be encountering a similar issue to the one described in JAVA-2609: mongo throws IllegalStateException: state should be: open. The error could be caused by another process or thread closing the connection with your MongoDB client, which later results in an error when trying to connect again.To resolve this issue, you may need to review your application’s code to determine when to keep the connection open and when to close the MongoClient connection. It is recommended to create a single MongoClient instance with a required connection pool and reuse it across multiple threads instead of creating a new MongoClient instance each time you need to access the MongoDB Server deployment.I tried to replicate the error message by querying on MongoDb after the connection was closed and ran into the same error log It would be helpful if you could trace any changes made to the application code, such as the scope of the MongoClient. For instance, there could be a possible addition or removal of try{}… catch{} code snippet.Please reach out to us for further questions.Best regards,\nAasawari",
"username": "Aasawari"
}
] | IllegalStateException: state should be : open | 2023-04-26T23:05:31.121Z | IllegalStateException: state should be : open | 4,324 |
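To make the single-client advice in the reply above concrete (illustrated in Python; the same principle applies to the Java/Spring setup in the question): keep one long-lived client at application scope and never close it per request, since using a client after something has closed it is a typical way to end up with this kind of "state should be: open" failure. Names and URI below are placeholders.

```python
# Sketch: one application-wide client reused by every request handler.
# Do not create and close a client per request; operations issued after
# the client has been closed fail instead of reconnecting.
from pymongo import MongoClient

_client = MongoClient("mongodb+srv://user:pwd@host/db")  # placeholder URI

def find_state(name):
    # Reuses the shared client's connection pool; safe across threads.
    # Database/collection/field names are placeholders for illustration.
    return _client["app"]["states"].find_one({"name": name})
```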
null | [
"aggregation",
"queries",
"indexes",
"performance",
"atlas-search"
] | [
{
"code": "// Stores schema:\n{\n _id: ObjectId,\n name: String\n}\n\n// Products schema:\n{\n _id: ObjectId,\n name: String,\n store: ObjectId\n}\n{\n collectionName: 'products',\n name: 'productName_index',\n mappings: {\n dynamic: false,\n fields: {\n store: {\n type: \"objectId\",\n },\n name: [\n { type: \"string\" },\n { type: \"autocomplete\" }\n ]\n }\n }\n}\n // Known store _id\nconst storeId = new ObjectId()\n\nconst searchQuery = \"someProductName\"\n\nconst pipeline = {\n $search: {\n index: \"productName_index\",\n compound: {\n filter: [\n { equals: {\n path: \"store\",\n query: storeId\n }}\n ],\n should: [\n { text: {\n path: \"name\",\n query: searchQuery\n }},\n { autocomplete: {\n path: \"name\",\n query: searchQuery\n }}\n ],\n minimumShouldMatch: 1\n }\n }\n}\nproductsName_index{ store: 1, name: 1 }",
"text": "Is it possible to use the $search aggregation pipeline stage to efficiently filter out documents with some field not containing, for example, a certain object id, and then perform full-text search on other fields? By efficient I mean that the documents with the desired object id in the given field can be found without scanning all indexed data, similar to how a database index on the given field would avoid a collection scan.To better explain my question, I’ll present a simplified scenario.Consider a stores collection and a products collection. The document schema for both collections is as follows:Every product has a name and belongs to a store.Consider an application where the user is able to choose a store, and then full-text seach for products in that store by name.To achieve this, I’d create the following search index:And use the following aggregation pipeline to query:I think that for this query, all indexed data for the productsName_index is scanned.If instead, I were to use a compound database index: { store: 1, name: 1 }, I could use an aggregation pipeline with a $match stage to filter out products that do not belong to a store, without performing a collection scan. But then, I would no longer be able to full-text search.So then, how does it work with search indexes? Would the above query have to check every indexed store field? If so, I’m curious if it’d ever be possible to build a search index that supports this kind of queries more efficiently.",
"username": "Octavio_Araiza"
},
{
"code": "mongod$searchfiltermongod$search$search$match",
"text": "Hi @Octavio_Araiza,Apologies as i’m a bit confused with the title as you have stated “collection(index) scan” - Can you clarify what you are referring to here? A collection scan indicates that the mongod had to scan the entire collection document by document to identify the results.In terms of the $search pipeline, the use of filter helps reduce the amount of documents that the mongod needs to fetch. As per the Atlas Search Query Performance documentation:Using a $match aggregation pipeline stage after a $search stage can drastically slow down query results. If possible, design your $search query so that all necessary filtering occurs in the $search stage to remove the need for a $match stage. The Atlas Search compound operator is helpful for queries that require multiple filtering operations.Although I understand you’ve noted this is a simplified scenario - I’m wondering just for context on this topic, is the pipeline you’ve provided executing in an abnormal or unexpected time frame? If so, please provide some further details regarding this if possible.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "filterObjectIdfilter$match$searchfilter",
"text": "Regarding “collection(index) scan” in the title; I’m not sure how search index queries work, but I’m assuming that when the filter stage is used (e.g., to only retrieve documents where some field’s value equals some ObjectId), all the indexed data have to be ‘scaned’, which I think would be — in a practical way — similar to a collection scan for a database index.Then, I understand the filter stage is more performant with respect to using $match after $search, but I’d like to know if, for the presented case, all indexed data is ‘scanned’.The pipeline is not executing in an abnormal or unexpected time frame. There’s no problem with the pipeline itself. I’m just curious about the filter stage .",
"username": "Octavio_Araiza"
},
{
"code": "filterObjectIdfilter_idmongod",
"text": "Regarding “collection(index) scan” in the title; I’m not sure how search index queries work, but I’m assuming that when the filter stage is used (e.g., to only retrieve documents where some field’s value equals some ObjectId), all the indexed data have to be ‘scaned’, which I think would be — in a practical way — similar to a collection scan for a database index.It could possibly be my misunderstanding / interpretation of your statements above but I do not believe the filter stage works in the same manner as a collection scan. To my knowledge, the _id’s of the matching documents returned from the search process are passed to the mongod process for the documents to be retrieved so a collection scan is not performed here.Regards,\nJason",
"username": "Jason_Tran"
}
] | Avoid collection(index) scan for search index queries. Database index query performace optimizations for search indexes | 2023-02-24T17:47:17.958Z | Avoid collection(index) scan for search index queries. Database index query performace optimizations for search indexes | 1,368 |
null | [] | [
{
"code": "{\n\t\"message\" : \"not authorized on local to execute command { compact: 'oplog.rs', $clusterTime...\n",
"text": "Hi.\nI use MongoDB Atlas M20 instance.\nI had an issue with huge oplog size(50GB) with storage auto scale option enabled.\nSo I turned off auto scaling and set Maximum Oplog Size as 990MB to prevent oplog size going to large.\nNext, I wanted to run compact command on oplog collection to make disk size smaller but failed to run with authentication failed message.I found related articles but nobody seems to have an authentication issue like me.Can anyone give me an advice?",
"username": "Kyle_Yoon"
},
{
"code": "",
"text": "This is a more detailed status of my instance.VERSION5.0.15REGIONAWS / Tokyo (ap-northeast-1)CLUSTER TIERM20 (General)TYPEReplica Set - 3 nodes",
"username": "Kyle_Yoon"
},
{
"code": "",
"text": "Atlas admin role may not be having privilege to run compact command\nMay be you have to create a custom role giving explicit privileges/actions",
"username": "Ramachandra_Tummala"
},
{
"code": "dbAdmin@local",
"text": "You can try dbAdmin@local to see if that works for you.",
"username": "Jason_Tran"
}
] | [Atlas] Authentication failed for running compact command on oplog.rs with atlasAdmin role | 2023-04-27T05:07:41.214Z | [Atlas] Authentication failed for running compact command on oplog.rs with atlasAdmin role | 568 |
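Following up on the role suggestions above: once the connecting database user has a role that actually grants the compact action on the local database (for example via an Atlas custom role), the operation itself is just a database command. A minimal sketch in Python with a placeholder connection string:

```python
# Sketch: run compact on the oplog once the user has a role granting
# the "compact" privilege action on the "local" database.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pwd@cluster.example.mongodb.net")  # placeholder

result = client["local"].command({"compact": "oplog.rs"})
print(result)
```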
null | [
"aggregation",
"queries",
"indexes"
] | [
{
"code": "db.users.aggregate([{ \n $match: { \n audience_id: {$in: [ObjectId('633eaba80cb7a3b1d910e98b'),ObjectId('63b6a5325bf3f04f18170e84')]\n }, \n status: 'Subscribed' \n } \n }, { \n $group: {\n _id: '$contact_id',\n contact_meta: {\n $first: '$ROOT'\n }\n }\n }, {\n $lookup: {\n from: 'user_data',\n 'let': {\n id: '$_id'\n },\n pipeline: [\n {\n $match: {\n $and: [\n {\n $expr: {\n $eq: [\n '$_id',\n '$id'\n ]\n }\n },\n {\n email: {\n $exists: true,\n $ne: ''\n }\n },\n {\n $or: [\n {\n is_bounced: {\n $exists: false\n }\n },\n {\n is_bounced: {\n $ne: true\n }\n }\n ]\n },\n {\n $or: [\n {\n not_valid: {\n $exists: false\n }\n },\n {\n not_valid: {\n $ne: true\n }\n }\n ]\n }\n ]\n }\n }\n ],\n as: 'contact' }\n }, \n{\n $unwind: {\n path: '$contact',\n preserveNullAndEmptyArrays: false\n }\n},\n{\n$addFields: {'contact.audience_id': '$contact_meta.audience_id'\n}\n}, \n{\n $replaceRoot: {newRoot: '$contact'\n}\n}]) \n",
"text": "I have two collection named as users and user_data, users is a primary collection and the user_data is a secondary collection. I have perform an aggregate query for retriving millions of data but the query is taking too much time for execution. So, i try to use compound indexes on the fields of both the collections for e’g: audience_id: 1, contact.is_bounced: 1, contact.not_valid: 1 but on the time of exection secondary collection indexes are not use. Is there any way to use indexes on joined collection.Db query :-",
"username": "Aditya_Sharma7"
},
{
"code": "db.collection.getIndexes()",
"text": "Hi @Aditya_Sharma7 and welcome to MongoDB community forums!!Depending on the version you are on, index being used by the stages of the aggregation pipeline changes.For instance, before MongoDB version 3.6, the index is only used by the first stage of the aggregation pipeline.\nSee the Aggregation Pipeline — MongoDB Manual for further reference.As of latest MongoDB version 6.0, the following are the limitations on using index with $lookup stages.However, to understand your case efficiently, could you help me withRegards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hey @Aasawari i have added the screenshots of the indexes and output, added the sample dataset in the json format and mentioned the steps i have tried and the problem i have faced. Please check it once.\nIndexes of users_data collection :-\n\nIndexes of users collection :-\n\nSample dataset of user_data collection :-\ncontacts.json (2.3 KB)\nSample dataset of users collection :-\ncontacts_meta.json (3.0 KB)Basically i have tried a match on audience_id and status of users collection. After that i use the group stage on the basis of $contact_id of users collection and use $ROOT, than i use let lookup with the collection users_data. As explained in the above query. The problem is the query is not time efficient it is taking to much time so i use indexes on users collection which reduce some time and after that for more time efficient i want to use the indexes on the fields of users_data collection as “is_bounced”,“not_valid” but i am not able to do so the indexes are created but they does not use in the query and there is no difference in the time.\nThe output i needed with the audience_id [ObjectID(“62199c94eb3129e94bdeacb5”)] is as\n:-\n\nMongodb version - 6.0.5\nWith Regards.",
"username": "Aditya_Sharma7"
},
{
"code": "$exists: false$ne: true$ne: true$eq: falsedb.users.aggregate([\n {\n '$lookup': {\n 'from': 'users_data', \n 'localField': '_id', \n 'foreignField': 'contact_id', \n 'as': 'audience_id'\n }\n }, {\n '$unwind': {\n 'path': '$audience_id'\n }\n }, {\n '$addFields': {\n 'audience_id': '$audience_id.audience_id'\n }\n }\n ])\n....\n{\n _id: ObjectId(\"63fc6a7ae47bdd0489381512\"),\n first_name: 'Last',\n last_name: 'Test',\n full_name: 'Last Test',\n phone_number: '9816428369',\n email: '[email protected]',\n created_on: 1677486714,\n date: ISODate(\"2023-02-27T08:31:54.881Z\"),\n is_new: 0,\n note: '',\n is_bounced: false,\n not_valid: false,\n is_verified: true,\n audience_id: ObjectId(\"62199c94eb3129e94bdeacb5\")\n },\n....\nindexUsed{\n '$lookup': {\n from: 'users_data',\n as: 'audience_id',\n localField: '_id',\n foreignField: 'contact_id',\n unwinding: { preserveNullAndEmptyArrays: false }\n },\n totalDocsExamined: Long(\"8\"),\n totalKeysExamined: Long(\"8\"),\n collectionScans: Long(\"0\"),\n indexesUsed: [ 'contact_id_1' ],\n nReturned: Long(\"9\"),\n executionTimeMillisEstimate: Long(\"1\")\n },\n",
"text": "Hi @Aditya_Sharma7Thank you for sharing the sample data and the necessary information.There are a few points which I would like to mention after we have triaged the issue.From the sample data posted and the query posted, the query does not seem to return any data as a response. Can you confirm, if the query is expected to perform in the similar way?The aggregation query mentioned in he post looks complicated. I noted that you used $exists: false , and $ne: true . Note that in general, indexes and databases don’t perform best when you ask it to find things that doesn’t exist or doesn’t match. For example, instead of $ne: true , would it be possible to use $eq: false ? Can you help me understand to breakdown the query?Based on the sample data and the expected response posted, I tried to create the aggregation pipeline with a new index created on contact_id.\nThe aggregation query looks lie:and return the response as:As mentioned in the SERVER-22622: Improve $lookup explain to indicate query plan on the “from” collection, the indexUsed parameter of the explain output defines the index used in the $lookup stage of the pipeline.Can you help me by confirming if the above query help in improving the performance of the query for your use case. If not, please share more details on what you are trying to achieveLet us know if you have any further questions.\nRegards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "e responThe previous query execution time for the query is 237 milliSeconds.\nAs per the changes according to you the new execution time is 112 milliSeconds.\nNote - Total docs in primary collection - 19339, secondary collection - 662.\nHey @Aasawari it effect the query, the execution time is decreased by this query.\nThankyou so much for your efforts ma’am. It means aloat.\nRegards\nAditya Sharma",
"username": "Aditya_Sharma7"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | I want to use indexes on the secondary collection in mongodb | 2023-04-26T05:19:47.535Z | I want to use indexes on the secondary collection in mongodb | 1,080 |
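For anyone reproducing the accepted approach above, the index on the foreign-key field and the explain check can be done like this (a rough sketch in Python; the URI and database name are placeholders, while the collection and field names are taken from the thread):

```python
# Sketch: index the $lookup foreign-key field, then inspect "indexesUsed"
# for the $lookup stage in the executionStats explain output.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["test"]  # placeholders

db["users_data"].create_index("contact_id")

pipeline = [
    {"$lookup": {
        "from": "users_data",
        "localField": "_id",
        "foreignField": "contact_id",
        "as": "audience_id",
    }},
    {"$unwind": "$audience_id"},
]

explain = db.command({
    "explain": {"aggregate": "users", "pipeline": pipeline, "cursor": {}},
    "verbosity": "executionStats",
})
# Look for the $lookup stage entry and its "indexesUsed" array in `explain`.
```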
null | [
"aggregation",
"queries",
"node-js",
"mongoose-odm"
] | [
{
"code": "{\n\tamount:{ type:Number} ,\n\t_entityBuyer: {\n\t\ttype: mongoose.Schema.Types.ObjectId,\n\t\tref: 'Entity',\n\t},\n}\n{\n\tamount:150,\n\t_entityBuyer: {\n\t\t_id:'1234',\n\t\tname:'lee',\n\t\thandle:'lee'\n\t}\n}\n{\n\tindex: 'search-index-name',\n\tcompound: {\n\t\tshould: [\n\t\t\t{\n\t\t\t\tembeddedDocument: {\n\t\t\t\t\tpath: '_entityBuyer',\n\t\t\t\t\toperator: {\n\t\t\t\t\t\tcompound: {\n\t\t\t\t\t\t\tshould: [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tautocomplete: {\n\t\t\t\t\t\t\t\t\tquery: 'lee',\n\t\t\t\t\t\t\t\t\tpath: '_entityBuyer.name',\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t],\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t},\n\t],\n},\n{\n \"mappings\": {\n \"fields\": {\n \"_entityBuyer\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"autocomplete\"\n }\n },\n \"type\": \"embeddedDocuments\"\n }\n }\n }\n}\n",
"text": "Im attempting to Search a collection by a populated subdocument, however Im unable to achieve this.My model (purchases) is the following:Example of model:So I’m wanting to lookup all purchases by the _entityBuyer.name via the following:My Search Index is the following:Currently running the above returns an empty result - however I have correctly seeded my database with _entityBuyer.name with the value of ‘lee’Any help or guidance would be greatly appreciated ",
"username": "Lee_Marshall"
},
{
"code": "_entityBuyer.name{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"_entityBuyer\": {\n \"fields\": {\n \"name\": {\n \"type\": \"autocomplete\"\n }\n },\n \"type\": \"document\"\n }\n }\n }\n}\n[\n {\n '$search': {\n 'autocomplete': {\n 'query': 'le', \n 'path': '_entityBuyer.name'\n }\n }\n }\n]\n_entityBuyer_entityBuyerembeddedDocuments{\n\tamount:150,\n\t_entityBuyer: [\n {\n\t\t_id:'1234',\n\t\tname:'lee',\n\t\thandle:'lee'\n\t},\n {\n\t\t_id:'8990',\n\t\tname:'xxx',\n\t\thandle:'xxx'\n\t}]\n}\n{\n \"mappings\": {\n \"fields\": {\n \"_entityBuyer\": {\n \"fields\": {\n \"name\": {\n \"type\": \"autocomplete\"\n }\n },\n \"type\": \"embeddedDocuments\"\n }\n }\n }\n}\n[\n {\n '$search': {\n 'embeddedDocument': {\n 'path': '_entityBuyer', \n 'operator': {\n 'compound': {\n 'should': [\n {\n 'autocomplete': {\n 'path': '_entityBuyer.name', \n 'query': 'lee'\n }\n }\n ]\n }\n }\n }\n }\n }\n]\n",
"text": "Hi @Lee_Marshall and welcome to MongoDB community forums!!Based the sample documents provided, below are a few examples that describe the difference between the use of autocomplete and embeddedDocuments operators.I tried to create and autocomplete index on the _entityBuyer.name and used the following search query that returned the data.Index Definition:and the search query:From the documents shared, it seems there is only one sub document inside the _entityBuyer field. Can you confirm if there are more subdocuments to the field or that all documents within your collection only have a single subdocument?\nIf there are multiple entities within the _entityBuyer field, you can also consider converting it to an array so that you can use the embeddedDocuments in the index definition. However, please ensure this suits your use case and test thoroughly before doing so.For example:If the sample document looks like:The index definition in this case would look like:and the search query would be:Let us know if you have further queries.Regards\nAasawari",
"username": "Aasawari"
}
] | Index Search by subdocument with autocomplete | 2023-05-04T10:09:26.195Z | Index Search by subdocument with autocomplete | 794 |
[
"sharding"
] | [
{
"code": "_id",
"text": "Environment Setup:\nthree 8 core 16GB Mongos\nfive shards each with 8 core 16GB MongodWorkload:\nSYSBENCH\nDatabase: YCSB; Collection: t_0;\nSet {field0: 1} as shard key, perform pure read workload with _idWe were surprised to find that when setting taskExecutorPoolSize to 8, the SYSBENCH qps ~1300, but when setting taskExecutorPoolSize to 1, the performance improve to ~5400 qps. Then we found this jira, but this jira explains little, can someone explain why setting taskExecutorPoolSize will cause such a difference.We analyze the Flame Graph( it’s too big to upload, so we just show the pic)\nwhen setting taskExecutorPoolSize = 8, we found there is a lot of futex syscall.\n\n图片1920×1216 387 KB\n\nwhen setting taskExecutorPoolSize = 1, there is less lock operation.",
"username": "issazhang_zhang"
},
{
"code": "",
"text": "this is the flame graph when setting taskExecutorPoolSize = 1\n\n图片1920×1249 280 KB\n",
"username": "issazhang_zhang"
}
] | Mongos: setting taskExecutorPoolSize larger than 1 cause performance drop significantly in mongodb 5.0 | 2023-05-09T02:53:58.524Z | Mongos: setting taskExecutorPoolSize larger than 1 cause performance drop significantly in mongodb 5.0 | 656 |
|
null | [
"swift"
] | [
{
"code": "@ObservedResults(AccountType.self) var accountTypes\nprint(\"Account Types: \\(accountTypes), Count: \\(accountTypes.count)\")\nCustomPickerView(itemIndex: $accountTypeIndex, items: accountTypes.map { return $0.typeDescription }, length: accountTypes.count, pickerName: Constants.accountTypePicker, offsetX: -95)\n",
"text": "I’m having issues after changing the Realm to a Sync Realm Db, when using the local realm it was working well, the issue here is that I’m using an ObservedResults object and passing it to the custom picker view to display the content of the results, if I print the content in the init of the first view, I can see the data but when I pass it to the custom Picker View the array is empty.This is when I’m getting the result with the ObservedResultsWhen I print the results in the init it prints the data correctly, it shows that I have 3 items in the collection, which is correctHere I’m mapping the data from the results to pass it into the custom picker view, here is the issue, the result at this point is empty, any thoughts?And the data that is coming in the array I cannot see it in the Atlas server, it’s supposed to be a sync realm but for some reason is not working. Any help would be greatly appreciated.If any other code snippet is needed I can add it.",
"username": "Waner_Pena"
},
{
"code": "",
"text": "This is working well now, I was using Partition Sync, but I changed that to use Flexible Sync and it’s working correctly.",
"username": "Waner_Pena"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | ObservedResults is empty when using it in the body RealmSync SwiftUI | 2023-04-30T23:53:39.379Z | ObservedResults is empty when using it in the body RealmSync SwiftUI | 744 |
[
"swift"
] | [
{
"code": "",
"text": "Hey all, I’m working in a SwiftUI app and I want the user only to read/write its own data, but when I add the role to be read/write for the ownerId is not working, the data is not coming from the server, if I remove the role then I can see the data correctly, this is what I have in the roles section in Atlas server:\n\nScreenshot 2023-05-06 at 6.15.39 PM919×231 14.9 KB\nAnd this is in my queryable fields:\n\nScreenshot 2023-05-06 at 6.16.12 PM749×172 11.5 KB\nI’m following this documentation to set the role:",
"username": "Waner_Pena"
},
{
"code": "",
"text": "I was able to fix it, but the issue was that I had ownerId in my local models, changing that to be the same as the json in the rules fixed the issue, so I put owner_id in my models and everything is working good now.",
"username": "Waner_Pena"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | FlexibleSync data not coming down when using role ownerId read/write | 2023-05-06T22:17:38.750Z | FlexibleSync data not coming down when using role ownerId read/write | 680 |
|
null | [
"storage"
] | [
{
"code": "",
"text": "Hi,I saw this storage engine for the first time today and was only able to find documentation for it here. Which seems to only for version 4.0Edit: I did additional research on it, and it seems to be deprecated since version 4.0 of the community version. Anyone has any more information on whether it is supported by other version?",
"username": "Khoa_Bui1"
},
{
"code": "",
"text": "Hi @Khoa_Bui1 welcome to the community!MMAPv1 was deprecated in MongoDB 4.0, and removed in MongoDB 4.2 (see Storage Engines). WiredTiger has been the default (and now only) builtin storage engine since MongoDB 3.2.Since WiredTiger is a much more modern storage engine with massively upgraded capabilities vs. MMAPv1, is this for historical purposes, or curiosity? I do hope that you’re not using MMAPv1 in production today Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin,Thanks for the response. No just out of curiosity, I saw it on an old configuration file and wasn’t sure what it was.",
"username": "Khoa_Bui1"
}
] | When is the last version that supports MMAPv1 storage engine? | 2023-05-05T14:32:17.523Z | When is the last version that supports MMAPv1 storage engine? | 739 |
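For anyone auditing an old deployment like the one that prompted this thread, the engine actually in use is reported by serverStatus. A small sketch in Python with a placeholder URI:

```python
# Sketch: report which storage engine a running mongod is using.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
status = client.admin.command("serverStatus")
print(status["storageEngine"]["name"])  # "wiredTiger" on modern servers; "mmapv1" only on 4.0 and earlier
```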
null | [
"cluster-to-cluster-sync"
] | [
{
"code": "curl localhost:27182/api/v1/commit -XPOST --data '{ }'\n{\"progress\":{\"state\":\"COMMITTED\",\"canCommit\":false,\"canWrite\":true,\"info\":\"commit completed\",\"lagTimeSeconds\":0,\"collectionCopy\":{\"estimatedTotalBytes\":39744723179,\"estimatedCopiedBytes\":39744864710},\"directionMapping\":{\"Source\":\"cluster0: mongo.acme.com:27017\",\"Destination\":\"cluster1: mongo.acme-staging.com:27017\"},\"mongosyncID\":\"coordinator\",\"coordinatorID\":\"coordinator\"}}\n{\"success\":false,\"error\":\"InvalidStateTransition\",\"errorDescription\":\"Invalid state transition, expected Current State PAUSED, current State: COMMITTED, target State: RUNNING\"}\n{\"success\":false,\"error\":\"InvalidStateTransition\",\"errorDescription\":\"Invalid state transition, expected Current State IDLE, current State: COMMITTED, target State: RUNNING\"}\n",
"text": "Hi, we are using mongosync to migrate data from one cluster to another cluster.We have gone through the process and the data migrated fine and finally we committed everything.The mongosync is now in the following state:curl localhost:27182/api/v1/progress -XGETNow that we know this works, we want to continue the process after it has been commited, since commiting, new data has been incoming and we want to fetch that new data (preferably without having to refetch everything again is that possible?).Now we would like to continue and keep on syncing data, how do we “resume” or “start” the process again and continue fetching new data.Trying to “start” or “restume” again using the api does not workWe also tried starting the mongosync web server again but it knows its already in a commited state.",
"username": "Kay_Khan"
},
{
"code": "",
"text": "Hi Kay,The COMMIT operation is irreversible, as its purpose is to enable the cutover to the new cluster. If you want to keep the sync going, then you don’t need to call Commit (you can still Pause the sync or shutdown the mongosync process as needed to pause the data replication). In the current state, unfortunately, you will have to do a full resync by dropping all the data on the destination database, including the “mongosync_reserved_for_internal_use” namespace.-Alex",
"username": "Alexander_Komyagin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongosync how to continue after commited state | 2023-05-08T12:29:11.636Z | Mongosync how to continue after commited state | 967 |
null | [
"queries",
"dot-net",
"backup"
] | [
{
"code": "",
"text": "Hello,\nI have a problem for moving data from server A to server B.\nHow to move millions of records between two MongoDB servers? I look for fast solution.",
"username": "Ammar"
},
{
"code": "",
"text": "",
"username": "tapiocaPENGUIN"
},
{
"code": "mongodumpmongorestore",
"text": "Hello @Ammar,Welcome to the MongoDB Community forums I have a problem for moving data from server A to server B.Could you please elaborate on the problem and the issue you are facing? Additionally, could you share the following details to better understand the case:How to move millions of records between two MongoDB servers?As @tapiocaPENGUIN mentioned, please refer to Back Up and Restore with MongoDB Tools to read more on how to leverage mongodump and mongorestore to migrate the data.Furthermore, could you share if you have considered any method so far? If yes, please share the workflow with us.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hello,\nSize: 200 millions record\nVer: MongoDB 6\nOn-premise setup\nI want move 100 millions record to another server for backup.",
"username": "Ammar"
},
{
"code": "",
"text": "I’m not sure what query or pattern the documents follow but you can do a mongodump with the --query flag to specify which ones get backed up.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "If the server can be temporarily stopped, direct file copy can be also an option. (Of course, more requirements on the version/hardware…)",
"username": "Kobe_W"
},
{
"code": "",
"text": "We can’t stop the server. We should move data to another server every 2 days",
"username": "Ammar"
}
] | How to move millions of records between two mongodb servers? | 2023-05-08T12:45:17.018Z | How to move millions of records between two mongodb servers? | 695 |
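Since the requirement above is a recurring copy every two days without stopping the source, one option besides a filtered mongodump is a small incremental copy job. The sketch below is only an illustration in Python and assumes a hypothetical indexed updatedAt field that identifies documents added or changed since the cutoff; server names, database, and collection are placeholders.

```python
# Sketch: incrementally copy recent documents from server A to server B.
# Assumes a hypothetical indexed "updatedAt" field on the documents.
from datetime import datetime, timedelta
from pymongo import MongoClient

src = MongoClient("mongodb://serverA:27017")["mydb"]["mycoll"]  # placeholder
dst = MongoClient("mongodb://serverB:27017")["mydb"]["mycoll"]  # placeholder

cutoff = datetime.utcnow() - timedelta(days=2)
batch = []
for doc in src.find({"updatedAt": {"$gte": cutoff}}):
    batch.append(doc)
    if len(batch) == 1000:
        dst.insert_many(batch, ordered=False)
        batch = []
if batch:
    dst.insert_many(batch, ordered=False)
# For repeat runs that may re-copy documents, consider ReplaceOne upserts
# via bulk_write instead of insert_many to avoid duplicate _id errors.
```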
null | [
"queries"
] | [
{
"code": "const requests = documents.map(doc => doc.save())\nawait Promise.allSettled(requests)\n",
"text": "Hi!I need to query for multiple documents in my app concurrently, and I will use Model.Find() for that.\nThere will be around 5000 documents to get, process and then save.\nI would rather avoid saving them one after another.\nThere is an option to make an array of requests as:And it should make this task way quicker.\nIs there an option to save multiple documents at once?\nUnfortunately I’m not updating to the same values, so I need to query for documents, then update them one by one and on the end save them.",
"username": "Rafal_Nawojczyk"
},
{
"code": "",
"text": "Technically, there’s no “save” method in mongodb. Only “write”.You can check bulk write.",
"username": "Kobe_W"
}
] | Optimizing mutliple documents saves | 2023-05-08T15:29:18.040Z | Optimizing mutliple documents saves | 367 |
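To make the bulk write pointer above concrete: the per-document updates can be batched into a single bulkWrite round trip instead of one save per document. A minimal sketch in Python with PyMongo (Mongoose has an equivalent Model.bulkWrite); the filter and field names are made up for illustration:

```python
# Sketch: batch many different per-document updates into one bulk_write call.
from pymongo import MongoClient, UpdateOne

coll = MongoClient("mongodb://localhost:27017")["app"]["docs"]  # placeholders

ops = []
for doc in coll.find({"needsUpdate": True}):            # hypothetical filter
    new_value = doc.get("counter", 0) + 1               # per-document computation
    ops.append(UpdateOne({"_id": doc["_id"]}, {"$set": {"counter": new_value}}))

if ops:
    result = coll.bulk_write(ops, ordered=False)
    print(result.modified_count)
```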
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.4.22-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.21. The next stable release 4.4.22 will be a recommended upgrade for all 4.4 users.Fixed in this release:4.4 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.4.22-rc0 is released | 2023-05-08T18:43:21.286Z | MongoDB 4.4.22-rc0 is released | 795 |
null | [] | [
{
"code": "",
"text": "Can someone please suggest on how beneficial creating view would be.I have a collections with ~30M documents. When i created a view it is not even being listed.but i got a message saying {OK} but db.currentOp() is also Inprog as empty array.Any suggestions would be greatly appreciated.",
"username": "Geetha_M"
},
{
"code": "",
"text": "Show the code you used to create the view.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Here is the script which I have used @Jack_Woehrdb.createCollection(\n“order_trans_view”,\n{\n“viewOn” : “order_trans”,\n“pipeline” : [\n{\n$unwind: {\npath: “$lineItems”,\npreserveNullAndEmptyArrays: false,\n},\n},\n{\n$project:\n/**\n* specifications: The fields to\n* include or exclude.\n*/\n{\nlineItemsdeviceId: {\n$ifNull: [\n“$lineItems.deviceId”,\n“null”,\n],\n},\n…Required field like above\n},\n},\n],\n// “collation” : { }\n}\n)//db.order_trans.find()",
"username": "Geetha_M"
},
{
"code": "",
"text": "This code snippet is messed up. Can you please paste in a little more carefully and use the code widget “</>” to make it easier to read? And please do not use Windows “smart quotes” in your samples.",
"username": "Jack_Woehr"
},
{
"code": "db.createCollection(\n \"PMP_view\",\n {\n \"viewOn\" : \"payee_mapped_transactions\",\n \"pipeline\" : [\n {\n $match:\n {\n periodId: \"xxxx\",\n },\n },\n \n {\n $project: \n {\n _id: \"$_id\",\n periodId: \"$periodId\",\n repAttuid: \"$repAttuid\",\n },\n },\n \n],\n // \"collation\" : { <collation> }\n }\n)\n",
"text": "",
"username": "Geetha_M"
},
{
"code": "",
"text": "I have tried the same script in local and it is working fine but in the actual environment where the collection are sharded it is not inserting records only view is getting created.",
"username": "Geetha_M"
},
{
"code": "",
"text": "Hmm, I am not sure what could be causing that. Misconfiguration of the remote environment?",
"username": "Jack_Woehr"
}
] | Creating Views in Mongodb is not working | 2023-05-05T17:08:51.488Z | Creating Views in Mongodb is not working | 451 |
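One detail that may explain the "not inserting records" observation in this thread: a standard view does not store any documents; it is a saved aggregation that runs each time the view is read, so creating it returns { ok: 1 } immediately and nothing appears in currentOp. A small sketch in Python of creating and then querying such a view, using the names from the thread (URI and database are placeholders):

```python
# Sketch: create a non-materialized view and query it.
# Nothing is written at creation time; the pipeline runs on each read.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["mydb"]  # placeholders

db.command({
    "create": "PMP_view",
    "viewOn": "payee_mapped_transactions",
    "pipeline": [
        {"$match": {"periodId": "xxxx"}},
        {"$project": {"_id": 1, "periodId": 1, "repAttuid": 1}},
    ],
})

# Reading the view executes the pipeline against the source collection.
for doc in db["PMP_view"].find().limit(5):
    print(doc)
```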
null | [
"aggregation",
"queries"
] | [
{
"code": ">> db.foo.find() \n[ { _id: } ] \n>> db.foo.aggregate({$project: {_id: , isNumber: {$isNumber: NaN} }})\n[ { _id: , isNumber: true } ]\n",
"text": "MongoDB Web Shell$isNumber must return false as NaN is not a number",
"username": "Abhishek_Chaudhary1"
},
{
"code": "$inNumberNaNtest> db.collection.aggregate({$project: {_id: 1, isNumber: {$isNumber: NaN} }})\n[ { _id: ObjectId(\"645919bea5f91f2bb30ec1ab\"), isNumber: true } ]\n\"decimal\"\"long\"\"int\"\"double\"\"isNumber\": true,",
"text": "Hello @Abhishek_Chaudhary1,Welcome to the MongoDB Community forums NaN is not a numberAs per the NaN - MDN Web Docs,The NaN (Not a Number) is a numeric data type that means an undefined value or value that cannot be represented, especially results of floating-point calculations.$isNumber must return false as NaN is not a numberTherefore, when using the $inNumber operator in MongoDB, it will return true for NaN.Also, as per the $isNumeric - documentation, \"decimal\", \"long\", \"int\", \"double\" all leads to \"isNumber\": true, because they all are of numeric type.Hope it clarifies your doubts. Feel free to reach out in case you have any further queries.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $isNumber (aggregate) produces incorrect result when NaN is passed to it | 2023-05-08T13:37:51.560Z | $isNumber (aggregate) produces incorrect result when NaN is passed to it | 486 |
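The explanation above can be verified directly: NaN is encoded as a BSON double, so both $isNumber and $type treat it as numeric. A small sketch in Python (URI, database, and collection are placeholders):

```python
# Sketch: NaN is a BSON double, so $isNumber returns true for it.
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["test"]["foo"]  # placeholders
coll.insert_one({})

result = coll.aggregate([{"$project": {
    "_id": 0,
    "isNumber": {"$isNumber": float("nan")},
    "bsonType": {"$type": float("nan")},
}}])
print(list(result))  # [{'isNumber': True, 'bsonType': 'double'}]
```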
null | [
"queries",
"python",
"atlas-cluster"
] | [
{
"code": "ServerSelectionTimeoutError\npymongo.errors.ServerSelectionTimeoutError: ac-44cplg1-shard-00-00.hpw8h6y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992),ac-44cplg1-shard-00-01.hpw8h6y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992),ac-44cplg1-shard-00-02.hpw8h6y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992), Timeout: 30s, Topology Description: <TopologyDescription id: 6458fac66937013406d9805d, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('ac-44cplg1-shard-00-00.hpw8h6y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-44cplg1-shard-00-00.hpw8h6y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')>, <ServerDescription ('ac-44cplg1-shard-00-01.hpw8h6y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-44cplg1-shard-00-01.hpw8h6y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')>, <ServerDescription ('ac-44cplg1-shard-00-02.hpw8h6y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-44cplg1-shard-00-02.hpw8h6y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')>]>\n\nTraceback (most recent call last)\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/flask/app.py\", line 2486, in __call__\nreturn self.wsgi_app(environ, start_response)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/flask/app.py\", line 2466, in wsgi_app\nresponse = self.handle_exception(e)\n ^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/flask/app.py\", line 2463, in wsgi_app\nresponse = self.full_dispatch_request()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/flask/app.py\", line 1760, in full_dispatch_request\nrv = self.handle_user_exception(e)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/flask/app.py\", line 1758, in full_dispatch_request\nrv = self.dispatch_request()\n ^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/flask/app.py\", line 1734, in dispatch_request\nreturn self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Users/kelly/Desktop/argonne/git lab repos/savvyshopper/frontend/src/app.py\", line 13, in index\nreturn render_template('index.html', data = data)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/flask/templating.py\", line 147, in render_template\nreturn _render(app, template, context)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/flask/templating.py\", line 130, in _render\nrv = template.render(context)\n ^^^^^^^^^^^^^^^^^^^^^^^^\nFile 
\"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/jinja2/environment.py\", line 1304, in render\nself.environment.handle_exception()\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/jinja2/environment.py\", line 925, in handle_exception\nraise rewrite_traceback_stack(source=source)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Users/kelly/Desktop/argonne/git lab repos/savvyshopper/frontend/src/templates/index.html\", line 53, in <module>\n{% for row in data %}\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymongo/cursor.py\", line 1248, in next\nif len(self.__data) or self._refresh():\n ^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymongo/cursor.py\", line 1139, in _refresh\nself.__session = self.__collection.database.client._ensure_session()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymongo/mongo_client.py\", line 1712, in _ensure_session\nreturn self.__start_session(True, causal_consistency=False)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymongo/mongo_client.py\", line 1657, in __start_session\nself._topology._check_implicit_session_support()\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymongo/topology.py\", line 538, in _check_implicit_session_support\nself._check_session_support()\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymongo/topology.py\", line 554, in _check_session_support\nself._select_servers_loop(\n^\nFile \"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymongo/topology.py\", line 238, in _select_servers_loop\nraise ServerSelectionTimeoutError(\n^\npymongo.errors.ServerSelectionTimeoutError: ac-44cplg1-shard-00-00.hpw8h6y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992),ac-44cplg1-shard-00-01.hpw8h6y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992),ac-44cplg1-shard-00-02.hpw8h6y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992), Timeout: 30s, Topology Description: <TopologyDescription id: 6458fac66937013406d9805d, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('ac-44cplg1-shard-00-00.hpw8h6y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-44cplg1-shard-00-00.hpw8h6y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')>, <ServerDescription ('ac-44cplg1-shard-00-01.hpw8h6y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-44cplg1-shard-00-01.hpw8h6y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')>, <ServerDescription ('ac-44cplg1-shard-00-02.hpw8h6y.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-44cplg1-shard-00-02.hpw8h6y.mongodb.net:27017: [SSL: 
CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')>]>\nThe debugger caught an exception in your WSGI application. You can now look at the traceback which led to the error.\nTo switch between the interactive traceback and the plaintext one, you can click on the \"Traceback\" headline. From the text traceback you can also create a paste of it. For code execution mouse-over the frame you want to debug and click on the console icon on the right side.\n\nYou can execute arbitrary Python code in the stack frames and there are some extra helpers available for introspection:\n\ndump() shows all variables in the frame\ndump(obj) dumps all that's known about the object\n\nfrom flask import Flask, render_template\nfrom flask_pymongo import PyMongo\n\napp = Flask(__name__)\napp.config[\"MONGO_URI\"] = \"mongodb+srv://<hidingusernameforpost>:<hidingpwdforpost>@cluster0.hpw8h6y.mongodb.net/dbName\"\n\nmongo = PyMongo(app)\n\[email protected](\"/\")\ndef index():\n collection = mongo.db.dbName\n data = collection.find()\n return render_template('index.html', data = data)\n\n\nif __name__ == \"__main__\":\n app.run(host=\"0.0.0.0\", port=8005, debug=True) # nosec\n\n?ssl=true?ssl=true&ssl_cert_reqs=CERT_NONEssl=true&ssl_ca_certs=/path/to/ca.pem",
"text": "I keep getting this error when running my app:Here is my code:Things I’ve tried:Instructions from similar topicsConnecting to my db successfully through atlasConnecting to it successfully via vscode mongodb pluginUtilizing ?ssl=true, ?ssl=true&ssl_cert_reqs=CERT_NONE, ssl=true&ssl_ca_certs=/path/to/ca.pem, in the trailing uriThank you in advance for helping me. I know discourse support is not easy ",
"username": "Kelly_Moreira"
},
{
"code": "",
"text": "Hi @Kelly_Moreira, which versions of PyMongo and Python are you using?",
"username": "Steve_Silvester"
},
{
"code": "",
"text": "Hi @Steve_Silvester,Thanks so much for helping me!Python 3.11.2\nPyMongo 4.2.0",
"username": "Kelly_Moreira"
},
{
"code": "",
"text": "I also just tried it with these version and it continues to throw the same error:Python 3.11.2\nPyMongo 4.3.3",
"username": "Kelly_Moreira"
},
{
"code": "",
"text": "Okay, that explains it. Those URI options were renamed in PyMongo 4: PyMongo 4 Migration Guide — PyMongo 4.3.3 documentation",
"username": "Steve_Silvester"
},
{
"code": "?tls=true&tlsAllowInvalidCertificates=true",
"text": "It’s working!Thanks, @Steve_Silvester !For those reading I added this to the end of my uri ?tls=true&tlsAllowInvalidCertificates=true",
"username": "Kelly_Moreira"
},
{
"code": "",
"text": "Great, thanks for following up!",
"username": "Steve_Silvester"
},
{
"code": "pip install --upgrade certififrom urllib.parse import quote_plus\n\napp.config[\"MONGO_URI\"] = (\n f\"mongodb+srv://<hidingusernameforpost>:<hidingpwdforpost>@cluster0.hpw8h6y.mongodb.net/dbName?tlsCAFile={quote_plus(certifi.where())}\")\n",
"text": "Using tlsAllowInvalidCertificates=true or ssl_cert_reqs=CERT_NONE makes your TLS connection insecure. Instead your should install certifi (pip install --upgrade certifi) and pass tlsCAFile=certifi.where():",
"username": "Shane"
}
] | ServerSelectionTimeoutError with PyMongo and Flask | 2023-05-08T13:59:27.258Z | ServerSelectionTimeoutError with PyMongo and Flask | 1,177 |
null | [
"node-js",
"data-modeling",
"mongoose-odm",
"schema-validation"
] | [
{
"code": "const { ObjectId } = require(\"mongodb\");\nconst mongoose = require(\"mongoose\");\n\nconst userSchema = new mongoose.Schema(\n {\n first_name: {\n type: String,\n required: [true, \"first name is required\"],\n trim: true,\n //za search se koristi\n text: true,\n },\n last_name: {\n type: String,\n required: [true, \"last name is required\"],\n trim: true,\n text: true,\n },\n email: {\n type: String,\n required: [true, \"Email is required\"],\n trim: true,\n unique: true,\n match: [/^\\S+@\\S+\\.\\S+$/, \"Please enter a valid email address\"],\n },\n password: {\n type: String,\n required: [true, \"password is required\"],\n },\n //referenca na bazu klubova?\n current_club: {\n club: {\n type: ObjectId,\n ref: \"Club\",\n required: [true, \"Club is required\"],\n },\n start_date: {\n year: {\n type: Number,\n required: [true, \"Start year is required\"],\n },\n season: {\n type: String,\n enum: [\"Leto\", \"Zima\"],\n required: [true, \"Start season is required\"],\n },\n },\n },\n primary_position: {\n type: String,\n enum: [\n \"Golman\",\n \"Štoper\",\n \"Levi Bek\",\n \"Levi Krilni Bek\",\n \"Desni Krilni Bek\",\n \"Desni Bek\",\n \"Zadnji Vezni\",\n \"Centralni Vezni\",\n \"Prednji Vezni\",\n \"Levo Krilo\",\n \"Desno Krilo\",\n \"Špic\",\n ],\n required: true,\n },\n secundary_position: {\n type: String,\n enum: [\n \"Golman\",\n \"Štoper\",\n \"Levi Bek\",\n \"Levi Krilni Bek\",\n \"Desni Krilni Bek\",\n \"Desni Bek\",\n \"Zadnji Vezni\",\n \"Centralni Vezni\",\n \"Prednji Vezni\",\n \"Levo Krilo\",\n \"Desno Krilo\",\n \"Špic\",\n ],\n validate: [\n (arrayLimit) => arrayLimit.length <= 2,\n \"Positions array must contain no more than 2 elements\",\n ],\n },\n picture: {\n type: String,\n //ovde nemoj da zaboravis da ubacis link ka slici\n default: \"defaultpicture\",\n trim: true,\n },\n cover: {\n type: String,\n trim: true,\n },\n gender: {\n //zasto string zasto ne enum?\n type: String,\n enum: [\"Muški\", \"Ženski\", \"Ostalo\"],\n required: [true, \"gender is required\"],\n trim: true,\n },\n birthday: {\n year: {\n type: Number,\n required: true,\n },\n month: {\n type: Number,\n required: true,\n },\n day: {\n type: Number,\n required: true,\n },\n },\n verified: {\n type: Boolean,\n default: false,\n },\n open_for_transfer: {\n type: {\n play_for_free: {\n type: Boolean,\n required: [true, \"Play for free is required\"],\n },\n salary: {\n type: {\n minimum: {\n type: Number,\n },\n contract: {\n type: String,\n },\n },\n required: [true, \"Salary is required\"],\n },\n },\n required: [true, \"Open for transfer is required\"],\n },\n //obrati paznju ovde verovatno ti ne trebaju sva 3 vec samo connections posebno za pocetak\n connections: {\n type: Array,\n default: [],\n },\n following: {\n type: Array,\n default: [],\n },\n followers: {\n type: Array,\n default: [],\n },\n requests: {\n type: Array,\n default: [],\n },\n searchHistory: [\n {\n user: {\n type: ObjectId,\n ref: \"User\",\n },\n },\n ],\n details: {\n bio: {\n type: String,\n },\n phone: {\n type: {\n number: {\n type: String,\n trim: true,\n },\n show_number: {\n type: Boolean,\n default: false,\n },\n },\n },\n strongerFoot: {\n type: String,\n enum: [\"Right\", \"Undefined\", \"Left\"],\n },\n player_qualities: {\n type: String,\n enum: [\n \"Šut\",\n \"Dribling\",\n \"Komunikacija\",\n \"Liderstvo\",\n \"Vazdušni duel\",\n \"Brzina\",\n \"Snaga\",\n \"Kondicija\",\n \"Igra glavom\",\n \"Defanziva\",\n \"Prekid\",\n \"Tehnika\",\n \"Penali\",\n \"Kreator\",\n \"Završnica\",\n \"Igra leđima\",\n 
\"Duga lopta\",\n \"Dodavanje\",\n \"Agresivnost\",\n \"Slabija noga\",\n \"Centaršut\",\n ],\n },\n goalkeeper_qualities: {\n type: String,\n enum: [\n \"Odbrana penala\",\n \"Refleksi\",\n \"Istrčavanje\",\n \"Komunikacija\",\n \"Liderstvo\",\n \"Fizička sprema\",\n \"Pozicioniranje\",\n \"Timski igrač\",\n \"Distribucija lopte\",\n \"Fokus\",\n \"Slabija noga\",\n \"Rad nogu\",\n \"Brzina\",\n \"Praćenje igre\",\n \"Igra nogom\",\n ],\n },\n experience: {\n type: [\n {\n club: {\n type: ObjectId,\n ref: \"Club\",\n required: [true, \"Club is required\"],\n },\n start_date: {\n year: {\n type: Number,\n required: [true, \"Start year is required\"],\n },\n season: {\n type: String,\n enum: [\"Leto\", \"Zima\"],\n required: [true, \"Start season is required\"],\n },\n },\n end_date: {\n year: {\n type: Number,\n required: [true, \"End year is required\"],\n },\n season: {\n type: String,\n enum: [\"Leto\", \"Zima\"],\n required: [true, \"End season is required\"],\n },\n },\n },\n ],\n default: [],\n },\n height: {\n type: Number,\n },\n weight: {\n type: Number,\n },\n current_location: {\n city: {\n type: ObjectId,\n ref: \"City\",\n },\n },\n nationality: {\n type: ObjectId,\n ref: \"Nationality\",\n },\n contract_with_club: {\n type: Boolean,\n default: false,\n },\n videos: {\n type: Array,\n default: [],\n },\n },\n\n next_game: {\n location: {\n city: {\n type: {\n name: {\n type: ObjectId,\n ref: \"City\",\n required: [true, \"City name is required\"],\n },\n },\n required: [true, \"City is required\"],\n },\n country: {\n type: {\n name: {\n type: ObjectId,\n ref: \"Nationality\",\n required: [true, \"Country name is required\"],\n },\n },\n required: [true, \"Country is required\"],\n },\n },\n club: {\n type: {\n name: {\n type: ObjectId,\n ref: \"Club\",\n required: [true, \"Club name is required\"],\n },\n },\n required: [true, \"Club is required\"],\n },\n date: {\n type: Date,\n required: [true, \"Date is required\"],\n },\n time: {\n type: String,\n required: [true, \"Time is required\"],\n },\n },\n \n savedPosts: [\n {\n post: {\n type: ObjectId,\n ref: \"Post\",\n },\n savedAt: {\n type: Date,\n default: new Date(),\n },\n },\n ],\n },\n {\n timestamps: true,\n }\n);\n\nmodule.exports = mongoose.model(\"User\", userSchema);\n",
"text": "Hello everyone, im developing a social media app for Footballers, im new to programing and mongoDB, so i would like to hear different opinions on my schema. Thanks!",
"username": "Filip_Trivan"
},
{
"code": "",
"text": "Hey @Filip_Trivan,Welcome to the MongoDB Community Forums! Going over your schema, it seems to be well thought out and comprehensive. You have included validation for most fields to ensure data integrity and correctness which seems great too. As a next step, I would recommend you start figuring out the kind of queries that you will be using in your app the most. This will help you decide if any changes to your schema are needed or not since schema design is very much dependent on the query you would be using. This would also help you figure out which fields you need to index and the indexing strategy that you can use, as this will improve your query performance.Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
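To make the indexing advice above concrete, here is a minimal mongosh sketch. The field names come from the schema in the original post, but the query patterns (players filtered by club and position, plus simple name lookups) are assumed for illustration; the collection name `users` follows Mongoose's default pluralisation of the `User` model, and `someClubId` is a placeholder.

```js
// Illustrative only: the right indexes depend on the queries your app actually runs.

// e.g. "find players by current club and primary position", newest first
db.users.createIndex({ "current_club.club": 1, primary_position: 1, createdAt: -1 });

// e.g. simple name lookups
db.users.createIndex({ last_name: 1, first_name: 1 });

// check what the planner does for a representative query
const someClubId = ObjectId();  // placeholder club _id
db.users.find({ "current_club.club": someClubId, primary_position: "Špic" })
        .sort({ createdAt: -1 })
        .explain("executionStats");
```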
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Football Social Media Schema | 2023-05-04T17:43:53.054Z | Football Social Media Schema | 846 |
[
"queries",
"flutter"
] | [
{
"code": "",
"text": "\n343937949_182903264698442_2989345518697270551_n2694×1020 208 KB\nIn Realm studio i want to knowing the command rql for sort of totalso total get value from s1+s2+s3 then sort desc“TRUEPREDICATE SORT(s1 DESC)”\nhow to adapt to this command or thank you if you suggest for me.thank you for help me.realm + flutter mobile",
"username": "zmonx_gg"
},
{
"code": "",
"text": "This is not currently possible with RQL. You’ll need to either store the sum as a separate column or sort the results in memory.",
"username": "nirinchev"
}
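To illustrate the first option above: if the object stores a persisted total field that is kept equal to s1 + s2 + s3 whenever the object is written (the field name `total` is assumed here), the descending sort becomes plain RQL in the same form as the query in the question:

```
TRUEPREDICATE SORT(total DESC)
```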
] | Sort the collection so sum between 3 column to calculate | 2023-05-08T12:48:35.698Z | Sort the collection so sum between 3 column to calculate | 692 |
|
null | [
"php"
] | [
{
"code": "$r = new ReflectionExtension('mongodb');\nprint ($r->getVersion()); \\\\ currently prints '1.15.1'\n",
"text": "I can fetch the version string for the MongoDB PHP Extension …Is there also a way to fetch the version of the MongoDB PHP Library?\nOr are the extension and library in lock-step?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "The major/minor version number of the library and extension are now kept in sync, but the patch version may differ depending on whether there’s a bug fix to make. If you are using Composer 2 (which you most likely are), there is a built-in way to get versions for installed packages: Knowing the version of package X.",
"username": "Andreas_Braun"
},
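For reference, one way to check the installed library version with Composer 2, assuming Composer is available on the command line and the library was installed as the mongodb/mongodb package, is:

```sh
composer show mongodb/mongodb   # prints the installed version of the MongoDB PHP Library
```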
{
"code": "",
"text": "Thanks Andreas, wfm. Didn’t know that facility was there in Composer ",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Get MongoDB PHP Library Version? | 2023-05-07T03:00:03.592Z | Get MongoDB PHP Library Version? | 757 |
null | [
"queries"
] | [
{
"code": "find({_id: 12345})find({_id: Long(\"12345\")})find({_id: NumberLong(\"12345\")})",
"text": "I have a document that is storing the _id field as: Long(“12345”).When querying that document using find({_id: 12345})it’s returning incorrect results.I’ve tested by wrapping the find logic in a Long and NumberLong function and that returns the correct row.\nExamples below work:I’m mostly wondering that once a document is storing a field as Long(“12345”), Mongo must not be able to implicitly convert the find logic to get the correct row, unless you wrap it in the Long or NumberLong function?",
"username": "mongo_maas"
},
{
"code": "find({_id: 12345})mongosh{_id: Long(\"12345\")}test> db.new_coll.insertOne({ _id: Long(\"12345\")})\n{ acknowledged: true, insertedId: Long(\"12345\") }\ntest> db.new_coll.findOne()\n{ _id: Long(\"12345\") }\ntest> db.new_coll.findOne({_id: 12345})\n{ _id: Long(\"12345\") }\ntest> db.new_coll.findOne({_id: Long(\"12345\")})\n{ _id: Long(\"12345\") }\ntest> db.new_coll.findOne({_id: NumberLong(\"12345\")})\n{ _id: Long(\"12345\") }\n\ntest> db.new_coll.insertOne({ _id: Long(\"1234554344343353\")})\n{ acknowledged: true, insertedId: Long(\"1234554344343353\") }\ntest> db.new_coll.findOne({_id: 1234554344343353})\n{ _id: Long(\"1234554344343353\") }\n",
"text": "Hello @mongo_maas,Welcome back to the MongoDB Community forums Apologies for the late response.I have a document that is storing the _id field as: Long(“12345”).\nWhen querying that document using find({_id: 12345}) it’s returning incorrect results.I tested it in my environment using mongosh in MongoDB 6.0.5 after inserting {_id: Long(\"12345\")} and it worked fine for me. Sharing the command snippet for your reference:I’m mostly wondering that once a document is storing a field as Long(“12345”), Mongo must not be able to implicitly convert the find logic to get the correct row, unless you wrap it in the Long or NumberLong function.Could you please share how you executed those commands and which specific MongoDB version you are using? Also, let me know if you have any further questions or doubts.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Querying Long() data returning wrong results | 2023-04-05T15:17:12.811Z | Querying Long() data returning wrong results | 734 |
null | [
"ruby",
"mongoid-odm"
] | [
{
"code": "",
"text": "Hi,\nWe are testing Mongoid 8.0.3 using MongoDB 6 Community. We noticed that database insertions using rspec are happening multitudes slower. It’s really odd. The older version Mongoid 7.5 inserts 150 records in 0.550355 seconds but Mongoid 8.0.3 would take over 2 seconds.\nQueryCache is disabled in both cases.\nAnyone else noticing this issue?",
"username": "netwire"
},
{
"code": "",
"text": "Hi netwire,This sounds very serious! We’ll try to reproduce the issue; it would be helpful if you can provide us with some additional info:",
"username": "Dmitry_Rybakov"
}
] | Mongoid 8 slow insertions? | 2023-05-07T15:00:30.109Z | Mongoid 8 slow insertions? | 799 |
null | [
"aggregation",
"compass",
"mongodb-shell",
"atlas-search",
"graphql"
] | [
{
"code": "{\n \"analyzer\": \"lucene.standard\",\n \"searchAnalyzer\": \"lucene.standard\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"attributes\": {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n \"createdAt\": {\n \"type\": \"date\"\n },\n \"isActive\": {\n \"type\": \"boolean\"\n },\n \"isFavourite\": {\n \"type\": \"boolean\"\n },\n \"note\": {\n \"analyzer\": \"htmlStrippingAnalyzer\",\n \"type\": \"string\"\n },\n \"title\": {\n \"multi\": {\n \"keywordAnalyzer\": {\n \"analyzer\": \"ngramShingler\",\n \"type\": \"string\"\n }\n },\n \"type\": \"string\"\n },\n \"typeOfAsset\": {\n \"type\": \"string\"\n },\n \"updatedAt\": {\n \"type\": \"date\"\n }\n }\n },\n \"analyzers\": [\n {\n \"charFilters\": [],\n \"name\": \"ngramShingler\",\n \"tokenFilters\": [\n {\n \"maxShingleSize\": 3,\n \"minShingleSize\": 2,\n \"type\": \"shingle\"\n }\n ],\n \"tokenizer\": {\n \"maxGram\": 5,\n \"minGram\": 2,\n \"type\": \"nGram\"\n }\n },\n {\n \"charFilters\": [\n {\n \"ignoredTags\": [\n \"a\",\n \"div\",\n \"p\",\n \"strong\",\n \"em\",\n \"img\",\n \"figure\",\n \"figcaption\",\n \"ol\",\n \"ul\",\n \"li\",\n \"span\"\n ],\n \"type\": \"htmlStrip\"\n }\n ],\n \"name\": \"htmlStrippingAnalyzer\",\n \"tokenFilters\": [],\n \"tokenizer\": {\n \"type\": \"standard\"\n }\n }\n ]\n}\nreturn new Promise( async (resolve, reject) => {\n try {\n\n const search = {\n $search: {\n index: 'assets',\n compound: { \n should: [{\n text: {\n query: args.phraseToSearch,\n path: [{ value: 'title', multi: 'keywordAnalyzer' }],\n score: { boost: { value: 3 } }\n }\n }, {\n text: {\n query: args.phraseToSearch,\n path: 'note'\n }\n }]\n }\n }\n }\n\n const project = {\n $project: {\n _id: 0,\n id: '$_id',\n userId: 1,\n folderId: 1,\n title: 1,\n note: 1,\n typeOfAsset: 1,\n isFavourite: 1,\n createdAt: 1,\n updatedAt: 1,\n isActive: 1,\n attributes: 1,\n preferences: 1,\n score: {\n $meta: 'searchScore'\n }\n }\n }\n\n const match = {\n $match: {\n userId: args.userId\n }\n }\n\n const skip = {\n $skip: args.skip\n }\n\n const limit = {\n $limit: args.first\n }\n\n const group = {\n $group: {\n _id: null,\n count: { $sum: 1 }\n }\n }\n\n const sort = {\n $sort: {\n [args.orderBy]: args.orderDirection === 'asc' ? 1 : -1\n }\n }\n\n const searchAllAssets = await Models.Assets.schema.aggregate([\n search, project, match, sort, skip, limit\n ])\n\n const [ totalNumberOfAssets ] = await Models.Assets.schema.aggregate([\n search, project, sort, match, group\n ])\n\n return await resolve({\n searchAllAssets: searchAllAssets,\n totalNumberOfAssets: totalNumberOfAssets.count\n })\n\n } catch (exception) {\n return reject(new Error(exception))\n }\n})\nmaxClauseCount",
"text": "Hi, I’m in the process of migrating the search in an application from Elastic to MongoDB Atlas, and the results have been good.I’m now attempting to replicate some of the features I’ve been using in Elastic, such as the analyzers:… and the code in Node is:rWhen I use the search I get the following error:[GraphQL error]: Message: MongoServerError: PlanExecutor error during aggregation :: caused by :: Remote error from mongot :: caused by :: query has expanded into too many sub-queries internally: maxClauseCount is set to 1024I’ve Googled maxClauseCount but found nothing useful.I don’t have a lot of experience debugging queries (I’m using the Compass client, and make occasional use of MONGOSH), and it’s possible I’ve got something wrong with the index (I’ve copied and pasted the two analyzers from the documentation and then made a few tweaks).Any advice would be much appreciated.",
"username": "Wayne_Smallman"
},
{
"code": "db.assets.aggregate([\n {\n $search: {\n index: 'assets',\n compound: { \n should: [{\n text: {\n query: 'machine learning',\n path: [{ value: 'title', multi: 'keywordAnalyzer' }],\n score: { boost: { value: 3 } }\n }\n }, {\n text: {\n query: 'machine learning',\n path: 'note'\n }\n }]\n }\n }\n }\n])\n",
"text": "I ran the following statement in MONGOSH in Compass:So that at least eliminates the remaining parts of the aggregate method.",
"username": "Wayne_Smallman"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"createdAt\": {\n \"type\": \"date\"\n },\n \"isActive\": {\n \"type\": \"number\"\n },\n \"isFavourite\": {\n \"type\": \"boolean\"\n },\n \"note\": {\n \"analyzer\": \"htmlStrippingAnalyzer\",\n \"searchAnalyzer\": \"htmlStrippingAnalyzer\",\n \"type\": \"string\"\n },\n \"title\": {\n \"analyzer\": \"lucene.keyword\",\n \"multi\": {\n \"keywordAnalyzer\": {\n \"analyzer\": \"ngramShingler\",\n \"searchAnalyzer\": \"ngramShingler\",\n \"type\": \"string\"\n }\n },\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"typeOfAsset\": {\n \"type\": \"string\"\n },\n \"updatedAt\": {\n \"type\": \"date\"\n }\n }\n },\n \"analyzers\": [\n {\n \"charFilters\": [],\n \"name\": \"ngramShingler\",\n \"tokenizer\": {\n \"type\": \"standard\"\n },\n \"tokenFilters\": [\n {\n \"type\": \"englishPossessive\"\n },\n {\n \"type\": \"nGram\",\n \"minGram\": 4,\n \"maxGram\": 7\n }\n ]\n },\n {\n \"charFilters\": [\n {\n \"ignoredTags\": [\n \"a\",\n \"div\",\n \"p\",\n \"strong\",\n \"em\",\n \"img\",\n \"figure\",\n \"figcaption\",\n \"ol\",\n \"ul\",\n \"li\",\n \"span\"\n ],\n \"type\": \"htmlStrip\"\n }\n ],\n \"name\": \"htmlStrippingAnalyzer\"\n }\n ]\n}\n",
"text": "I’m still struggling with this problem, and it seems to be the index, which I’ve since tweaked:The problem is, this is also failing in the Search Tester, in spite of having copied and pasted from the official documentation to create the analyzer in this index.",
"username": "Wayne_Smallman"
},
{
"code": "BooleanQueryminGramnGramedgeGram",
"text": "Hi @Wayne_Smallman,query has expanded into too many sub-queries internally: maxClauseCount is set to 1024Thanks for providing the search details and the error message. I assume you’re getting this error against an M0, M2 or M5 tier cluster - please correct me if I am wrong here. This assumption is based off the Atlas Search M0 (Free Cluster), M2, and M5 Limitations documentation which states:More details on the clause error and its possible cause(s) can be found in the Lucene documentation here.There a few options you may wish to consider or test out:Just to also dive a bit deeper into the issue, could you also advise what search terms you’re using that are generating this error?Regards,\nJason",
"username": "Jason_Tran"
},
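One way the clause expansion can often be reduced, sketched below, is to tokenize only from the start of each term with an edgeGram tokenizer instead of the bidirectional nGram/shingle combination used in the index above. The analyzer name and gram sizes here are illustrative assumptions, not a drop-in replacement for the original index definition:

```json
{
  "analyzers": [
    {
      "name": "edgePrefixAnalyzer",
      "charFilters": [],
      "tokenizer": { "type": "edgeGram", "minGram": 3, "maxGram": 7 },
      "tokenFilters": []
    }
  ]
}
```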
{
"code": "",
"text": "Hi @Jason_Tran, much appreciated!",
"username": "Wayne_Smallman"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error querying Atlas index | 2022-06-15T10:47:44.575Z | Error querying Atlas index | 3,250 |
[] | [
{
"code": "",
"text": "I was trying to attend a quiz in mongodb but the page is getting stuck and I am not able to complete my assessment\n\nScreenshot 2023-05-06 1034091882×717 52.2 KB\n",
"username": "Adharv_V_P"
},
{
"code": "",
"text": "Hey @Adharv_V_P,Welcome to the MongoDB Community Forums! Did trying to clear the cookies or cache or refreshing the page help? If you’re still experiencing the issue, kindly mail [email protected] describing the problem for our university team to investigate this further.Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Cannot attend quiz in mongodb course | 2023-05-06T05:05:17.902Z | Cannot attend quiz in mongodb course | 910 |
|
null | [
"node-js"
] | [
{
"code": "",
"text": "Hi all,\nI am running into an issue connecting mongo-express to mongo db. Could any help me in heree if I am missing anything?I have already asked here - MongoNetworkError: failed to connect to server - Mongo-Express keeps restartingThank you!",
"username": "Adam_Petera"
},
{
"code": "",
"text": "Check your connect string again\nWhen you are connecting to your replica with replicaset name you have to pass all the 3 hostnames in your string",
"username": "Ramachandra_Tummala"
},
{
"code": "ME_CONFIG_MONGODB_URL: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongo1:27017,mongo2:27017,mongo3:27017/?authSource=admin&replicaSet=replicaSetConfig",
"text": "Hi @Ramachandra_Tummala,I just tried adding all hostnames - ME_CONFIG_MONGODB_URL: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongo1:27017,mongo2:27017,mongo3:27017/?authSource=admin&replicaSet=replicaSetConfigAnyway the startap still fails ",
"username": "Adam_Petera"
},
{
"code": "network",
"text": "Okay, found out the issue:It was a missconfig with defined network. Just deleted that and added all hostnames to connectionString and everything seems to be okay from now on.Thank you @Ramachandra_Tummala for your time and minly help! ",
"username": "Adam_Petera"
},
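For anyone hitting the same thing, here is a minimal docker-compose sketch of the working setup described above. The service names (mongo1/mongo2/mongo3), the replica set name and the credential variables come from this thread; everything else is assumed:

```yaml
mongo-express:
  image: mongo-express
  restart: unless-stopped
  ports:
    - "8081:8081"
  environment:
    ME_CONFIG_MONGODB_URL: "mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongo1:27017,mongo2:27017,mongo3:27017/?authSource=admin&replicaSet=replicaSetConfig"
  depends_on:
    - mongo1
    - mongo2
    - mongo3
  # note: no custom "networks:" block here, matching the fix described above
```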
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoNetworkError: failed to connect to server - Mongo-Express keeps restarting | 2023-05-07T18:14:44.513Z | MongoNetworkError: failed to connect to server - Mongo-Express keeps restarting | 917 |
null | [
"aggregation",
"database-tools",
"backup"
] | [
{
"code": "mongodump --uri=\"mongodb://127.0.0.1:57777\" --db memaback --archive=/home/bob/mongoback/${timestamp}_memaback.archive.gz --gzip-rw-r--r-- 1 root root 2.3G May 7 07:05 20230507-070001_memaback.archive.gzmongorestore --gzip --host 127.0.0.1 --port 57777 --db memaback --collection art 20230507-070001_memaback.archive.gzFailed: error scanning filesystem: file 20230507-070001_memaback.archive.gz does not have .bson or .bson.gz extensionFailed: memaback.art: error restoring from 20230507-070001_memaback.archive.bson.gz: reading bson input: invalid BSONSize: -2120621459 bytes is less than 5 bytesmongorestore --nsInclude=memaback.art --port 57777 20230507-070001_memaback.archive.bsonmongorestore version: 100.7.0\ngit version: 17946f45f5fabfcdd99f8960b9109f976d370631\nGo version: go1.19.3\n os: linux\n arch: amd64\n compiler: gc\nmongos version v4.4.19\nBuild Info: {\n \"version\": \"4.4.19\",\n \"gitVersion\": \"9a996e0ad993148b9650dc402e6d3b1804ad3b8a\",\n \"openSSLVersion\": \"OpenSSL 1.0.2g 1 Mar 2016\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"ubuntu1604\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n",
"text": "I am in trouble an aggregation pipeline I ran corrupted one of my collections. Thought I was lucky since I have a backup of the DB dating only one hour earlier. Currently not able to restore. Here are the details:The backup is created as follows:mongodump --uri=\"mongodb://127.0.0.1:57777\" --db memaback --archive=/home/bob/mongoback/${timestamp}_memaback.archive.gz --gzipin a cron shell script that runs daily (and provides a timestamp for the datetime)Resulting file is:-rw-r--r-- 1 root root 2.3G May 7 07:05 20230507-070001_memaback.archive.gzNow as I destroyed only my art collection in the memaback db I tried restoring the above file with the following command:mongorestore --gzip --host 127.0.0.1 --port 57777 --db memaback --collection art 20230507-070001_memaback.archive.gzwhich gives an error:Failed: error scanning filesystem: file 20230507-070001_memaback.archive.gz does not have .bson or .bson.gz extensionso tried to rename the file to .bson.gz and ran the same command with the new filename but get:Failed: memaback.art: error restoring from 20230507-070001_memaback.archive.bson.gz: reading bson input: invalid BSONSize: -2120621459 bytes is less than 5 bytesThe file is 2.3GB. The Linux ‘file’ command confirms its a gzipped file.I also tried gunzipping it and the top of the resulting file seems promising (hopefully) and tried with this other syntax:mongorestore --nsInclude=memaback.art --port 57777 20230507-070001_memaback.archive.bson\nbut get the same size error.mongorestore --version:running on a VPS with Ubuntu 16.04 (yes stuck there unluckily)Thanks for any help !!!",
"username": "Robert_Alexander"
},
{
"code": "mongorestore--archive=filename",
"text": "As the backup is an archive format the mongorestore will also need to use the --archive=filename",
"username": "chris"
},
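Putting that together with the paths and port from the original post, the restore of just the damaged collection would look roughly like this:

```sh
# restore only the damaged collection from the gzipped archive
# (add --nsFrom="memaback.art" --nsTo="memaback.art_restored" if you want it
#  restored under a temporary name instead of the original one)
mongorestore --host 127.0.0.1 --port 57777 --gzip \
  --archive=20230507-070001_memaback.archive.gz \
  --nsInclude="memaback.art"
```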
{
"code": "",
"text": "Wow ok thanks so there’s hope As I’ve started rebuilding that collection via my programs, but it’s a very long process lasting almost a week, how would you advise I proceed to avoid mixing docs from the restore with those from the restore? Maybe using the --collection parameter with a temporary new name?Thanks a lot.",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "Well, first renamed the rebuilding collection and then thanks to your correct suggestion restore 150587 documents (over 8GB) in under a minute.What a relief. Lesson learned.Thanks a LOT !!!",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Huge problem with a restore of backup | 2023-05-07T06:46:00.705Z | Huge problem with a restore of backup | 1,192 |
null | [
"database-tools",
"backup"
] | [
{
"code": "2023-04-28T12:48:43.064+0000 Failed: <redacted>.games: error creating indexes for redacted.games: createIndex error: connection(mongo.redacted-test.com:27017[-6]) socket was unexpectedly closed: EOF\nmongodump --host=mongo.<redacted>.com --port=27017 --authenticationDatabase=\"admin\" -u=\"admin\" -p=\"<redacted>\" --gzip --out=new\n mongorestore --verbose --host=mongo.redacted.com --port=27017 --convertLegacyIndexes --authenticationDatabase=\"admin\" -u=\"databaseAdmin\" -p=\"<redacted>\" --gzip ./new/\n",
"text": "I have 1 mongodb cluster im am backing up and attempting to restore into another mongodb cluster. The data seems to all restore successfully, but we are having a problem with restoring the indexes.dump\nmongodb: v5restore\nmongodb: v6",
"username": "Kay_Khan"
},
{
"code": "est> db.sample.findOne()\n{\n _id: ObjectId(\"6450e271f1a41838152c7349\"),\n DaysSinceLastSale: 96,\n IsResidential: true,\n Owner1NameFull: 'Adrian Griffith',\n AddressCity: 'Oakland',\n AddressState: 'Oregon',\n AddressZIP: 890484\n}\ntest> db.sample.getIndexes()\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n { v: 2, key: { Owner1NameFull: 1 }, name: 'Owner1NameFull_1' }\n]\n mongodump --db=test --collection=sample --gzip --out=newmongorestore --db=test --collection=sampleNew --gzip new/test/sample.bson.gz --port 27018",
"text": "Hi @Kay_Khan and welcome to MongoDB community forums!!Based on the above posts, it seems you are trying to do a mongodump and mongorestore between version 5.0 and 6.0\nI tried to replicate the issue in my local environment with a sample dataset between the latest patch version for version 5.0 and version 6.0 which are version 5.0.17 and version 6.0.5 respectively.I tried the following mongodump and mongorestore with index being created in the collection. mongodump --db=test --collection=sample --gzip --out=newmongorestore --db=test --collection=sampleNew --gzip new/test/sample.bson.gz --port 27018Please note that featureCompatibilityVersion for both the version are set to 5.0 and 6.0 respectively.If you are still following issues could you help me with the folloing information:Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "H thank you for the response,Your example is quite small. If we are dealing with gb’s in size of data. The index takes long to complete and maybe thats why we are getting an error socket was unexpectedly closed ? What would be the solution to this problem?",
"username": "Kay_Khan"
},
{
"code": "mongorestoremongodkill -9 <PID>2023-05-04T12:33:42.648+0530\tFailed: test.samplerestore2: error creating indexes for test.samplerestore2: createIndex error: connection(localhost:27018[-7]) socket was unexpectedly closed: EOF\n2023-05-04T12:33:42.648+0530\t24000000 document(s) restored successfully. 0 document(s) failed to restore.\n\nmongodmongodmongodumpmongorestoremongodmongorestore--drop",
"text": "Hi @Kay_KhanBased on the error logs provided, I tried to replicate the issue in my local environment.Can you confirm if there has been any termination in the mongod process while the index was restored.I tried to replicate using the latest patch versions of mongoDB (5.0.17 and 6.0.5), could you help with the versions of mongod, mongodump and mongorestore ?If you find that the mongod process was shut down unexpectedly by the Linux OOMkiller, you might be seeing the effect of SERVER-68125: Index build on multi-key fields can consume more memory than limit especially if you have a lot of data and multi-key indexes. This was fixed in MongoDB 6.0.4, so if you’re not using the latest version, please upgrade to the latest one for bug fixes and improvementsLastly, this issue mainly would effect on the large collections and one possible workaround we could suggest would be to pre-create the collection and indexes before mongorestore. This would avoid large index building process.\nAlso, --drop parameter should not be used in the mongorestore command if you decide to pre-create the collection and indexes.Let us know if you have any further questions.Regards\nAasawari",
"username": "Aasawari"
}
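A rough mongosh sketch of the pre-creation workaround mentioned above. The database and collection names are the redacted ones from this thread, and the index key is a placeholder; the real definitions should be copied from the source cluster (e.g. via db.games.getIndexes()):

```js
// on the target cluster, before running mongorestore
use redactedDb                       // placeholder database name
db.createCollection("games")
// recreate each index definition taken from the source cluster
db.games.createIndex({ someField: 1 }, { name: "someField_1" })  // placeholder key
// then run mongorestore WITHOUT --drop so the pre-built collection and indexes are kept
```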
] | Mongorestore indexes error socket was unexpectedly closed: EOF | 2023-04-28T12:54:55.923Z | Mongorestore indexes error socket was unexpectedly closed: EOF | 1,878 |
[] | [
{
"code": "",
"text": "Hey,I´m not a programmer yet and I wanted to 24/7 my second discord bot but it wont let me.\n\nScreenshot 2023-05-06 0842071920×1080 92.4 KB\nI just want a free cluster again.",
"username": "Leo_Konig"
},
{
"code": "",
"text": "Only one free cluster allowed per project\nTry to create another project and a new cluster under it",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "In addition to what Ramachandra has mentioned, I would also go over the Atlas M0 (Free Cluster), M2, and M5 Limitations documentation which may help you in future.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | I get a error code when creating my second free cluster | 2023-05-06T06:46:18.208Z | I get a error code when creating my second free cluster | 593 |
|
[] | [
{
"code": "",
"text": "During Friday 5th of May around 4pm UTC, my MongoDB M0 shared cluster’s connections randomly went from ~6 to 500 connections, despite there being no activity on the app. The database completely stopped working during that time, I couldn’t connect to it or even view the collections in Atlas UI. I turned off the app server and DataGrip on my machine, waited one hour and by then the connections had went to 0 again. This app has been running for months without this ever happening before so this was definitely out of the ordinary.I am concerned because I was not able to debug from where those connections were coming from, as logs do not work for shared tiers, and as the DB was unresponsive I couldn’t run any queries to get active connections. This makes me vary about continuing on shared tier in the future, as the app is currently not yet in production but will be soon.Is there anyone else who can share their experiences related to this?\nScreenshot 2023-05-07 at 16.47.52884×550 15.2 KB\n",
"username": "Kormakur_N_A"
},
{
"code": "",
"text": "Hi @Kormakur_N_A,I would contact the Atlas in-app chat support regarding this. The team would have more insight into your project / cluster where the connection spike occurred. Please provide them with the cluster name / link when you open a chat with them and the same time frames when the connection spike occured similar to what you have provided here on the post.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Unexpected Surge in MongoDB Shared Cluster Connections; Concerns About Debugging and Reliability | 2023-05-07T16:59:57.036Z | Unexpected Surge in MongoDB Shared Cluster Connections; Concerns About Debugging and Reliability | 427 |
|
null | [
"app-services-user-auth"
] | [
{
"code": "[BsonDiscriminator(\"app_user_data\")]\npublic class AppUserData\n{\n [BsonElement(\"user_permissions\")]\n public Dictionary<string, UserPermissionObject> UserPermissions { get; }\n}\nUserPermissionObject",
"text": "I want to store a Dictionary in my Custom User Data object for realm sync, something like the following:Where UserPermissionObject is just a regular object with a couple of properties.Is this possible, and if so how would it be implemented in the mongoDb side and how could I insert a dictionary into an entry programatically (i.e. using a .NET application or the Realm UI) if required?",
"username": "ScribeDev"
},
{
"code": "",
"text": "Any ideas for this question?",
"username": "ScribeDev"
},
{
"code": "",
"text": "Did you managed to this?I was able to bringin IDictonary for usual (realmbased) collections. But not sure about user customdata?",
"username": "Marvin_the_martian"
}
] | Custom user data - create a dictionary property | 2021-10-28T18:08:33.126Z | Custom user data - create a dictionary property | 2,806 |
null | [
"java"
] | [
{
"code": "",
"text": "Currently our application is developed using 3.12.x driver . I understand from the documentation there would not be any more development on this branch and mongo recommends to migrate to 4.x.In 4.x , I see 2 different drivers maintained by Mongohttps://mvnrepository.com/artifact/org.mongodb/mongodb-driver-legacy\nhttps://mvnrepository.com/artifact/org.mongodb/mongodb-driver-syncUsing mongodb-driver-sync requires to use new API ( https://www.mongodb.com/docs/drivers/java/sync/current/legacy/) in our application which is a lot of effort.If I use 4.9.1 mongodb-driver-legacy driver , can the old legacy API will continue to work seamlessly ? { Mongo server 4.4 } [ JFYI - I don’t intend to use any new features supported on the mongo server . There is a roadmap to move to 5.x and then to 6.x]. I don’t intend to making changes to application code or want to keep the changes to minimum.What would I lose , if I continue to use mongodb-driver-legacy driver and not migrate to mongodb-driver-sync driver ?What is the difference between the 2 driver flavours in 4.x train ? I have not been able to find any explanation regarding this.Any help on this is much appreciated.",
"username": "Udaya_Bhaskar_chimak"
},
{
"code": "com.mongodb.MongoClientcom.mongodb.client.MongoClientgetDatabasecom.mongodb.client.MongoDatabasegetDBcom.mongodb.DBMongoDatabaseDB",
"text": "Hi @Udaya_Bhaskar_chimak,Except for what’s mentioned in the Upgrading Guide, you should be able to upgrade your your application from 3.12 to 4.9 without any noticeable issues.In terms of packaging, the legacy driver is more of a superset of the sync driver than an alternative to it. Both the legacy com.mongodb.MongoClient class and the com.mongodb.client.MongoClient interface have a getDatabase method that returns a com.mongodb.client.MongoDatabase instance, where new features are generally added. The main difference is that the legacy class also has a getDB method that returns a com.mongodb.DB instance. This is where the bulk of the “legacy” API lives and what is generally not enhanced with support for new features.So if your application is already using MongoDatabase then making the switch to driver-sync is quite simple. But if it’s using DB, then it’s more complicated and likely easiest to just stay on driver-legacy.Hope this helps.Regards,\nJeff Yemin",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Thanks a lot Jeff for your reply !My Application code is written using DB. So I suppose, I will stay with driver-legacy for now to avoid the code changes.I assume there would not be any performance-related issues with the the above usage . Hopefully Mongo internally tests the latest driver / server versions with this combination too.Thanks\nudaya",
"username": "Udaya_Bhaskar_chimak"
}
] | Driver upgrade from 3.12.x to 4.9 mongodb-driver-legacy driver | 2023-05-04T16:05:11.382Z | Driver upgrade from 3.12.x to 4.9 mongodb-driver-legacy driver | 843 |
null | [] | [
{
"code": "",
"text": "the username and password was right, but mongodb keep give feedback could connect using that connectionString, any idea of this?Thanks",
"username": "Bisma_Wahyu_Anaafie"
},
{
"code": "",
"text": "Your connect string is not correct\nWhere is your mongodb hosted?\nLocal machine or Atlas?",
"username": "Ramachandra_Tummala"
}
] | Could not connect to database using connectionString: mongodb://admin:xxxxxxxx@mongo:27017/" | 2023-05-07T11:59:43.346Z | Could not connect to database using connectionString: mongodb://admin:xxxxxxxx@mongo:27017/” | 2,491 |
[
"queries",
"node-js",
"data-modeling",
"mongoose-odm"
] | [
{
"code": "const express = require('express');\nconst cors = require('cors');\nconst dotenv = require('dotenv');\nconst mongoose = require('mongoose');\n\nconst Product = require('./models/Product');\nconst productRoutes = require('./routes/productRoutes');\n\ndotenv.config();\n\nconst app = express();\nconst PORT = process.env.PORT;\n\napp.use(cors());\napp.use(express.json());\n\napp.use('/api/products', productRoutes);\n\napp.get('/', (req, res) => {\n res.send('Server is running');\n});\n\n// connectiion to db\nconst MONGO_URI = process.env.MONGO_URI;\nmongoose.connect(MONGO_URI, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n // useCreateIndex: true,\n // useFindAndModify: false\n}).then(() => {\n console.log('index.js MongoDB Atlast connected');\n\n app.listen(PORT, () => { // listen to request only if already connected to db\n console.log(`index.js Server running on port ${PORT}`);\n });\n\n // Close the database connection when the Node.js process is exiting\n process.on('SIGINT', () => {\n mongoose.connection.close(() => {\n console.log('index.js Mongoose connection closed');\n process.exit(0);\n });\n });\n })\n .catch((err) => console.log(err));\nconst express = require('express');\nconst router = express.Router();\nconst { getProducts } = require('../controllers/productController');\n// GET /api/products\nrouter.get('/', getProducts);\nmodule.exports = router;\nconst Product = require('../models/Product');\nconst getProducts = async (req, res) => {\n try {\n const products = await Product.find({});\n console.log(\"products\");\n res.json(products);\n } catch (error) {\n console.error(error);\n res.status(500).json({ message: 'Server Error' });\n }\n}\nmodule.exports = { getProducts };\n",
"text": "How do i display data or did i fetch any data from API request?\n\nimage865×336 21.6 KB\nat index.jsat productRoutes.jsat productController.jsjust trying to test API.",
"username": "cashdomains"
},
{
"code": "",
"text": "I have collections in my db.\n\nimage1118×429 31.9 KB\n",
"username": "cashdomains"
},
{
"code": "console.log",
"text": "Hello @cashdomains, Welcome to the MongoDB community forum,Make sure you are connected with the right database and fetching data from the right collection name in your model, can you show the model code?I can help you with specific errors, but I can’t see any specific issues in your code, you have to debug your code by console.log step by step where is the exact issue.",
"username": "turivishal"
},
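A small example of the kind of step-by-step logging meant here, which would also have surfaced the database-name problem found later in this thread. It assumes mongoose is required in the controller file; the statements go inside getProducts before the query:

```js
// which database is this connection actually using?
console.log('connected db:', mongoose.connection.name);
// which collection is the model bound to?
console.log('model collection:', Product.collection.name);
// how many documents can this model see?
const count = await Product.countDocuments({});
console.log('documents visible to this model:', count);
```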
{
"code": "const mongoose = require('mongoose');\n\nconst productSchema = new mongoose.Schema({\n name: { type: String, required: true },\n description: { type: String, required: true },\n price: { type: Number, required: true },\n image: { type: String, required: true },\n countInStock: { type: Number, required: true },\n});\n\nconst Product = mongoose.model('Product', productSchema);\n\nmodule.exports = Product;\n",
"text": "I just want to test API and if it can fetch data but it seems it fetches nothing because POSTMAN only returns empty square brackets ( ). I know for sure that I’m connected to the database since atlast chart shows some data that was fetched. There is also a collection called “products” under cluster ecomm. I have no clue where did I miss.here is my model but I think it has nothing to do since my atlast collection is inserted to database itself and not from vscode.at models/Product.jsThanks for the help anyways.",
"username": "cashdomains"
},
{
"code": "res.json(products);res.json({ products: products });\n",
"text": "Hello @cashdomains,If you understand JSON, it requires a key/property, here you are returning a whole array in JSONres.json(products);instead, you need to respond in with a key/property,I don’t see any other issues, otherwise, you need to debug yourself first as I have told you in a previous reply.",
"username": "turivishal"
},
{
"code": "const mongoose = require(\"mongoose\");\nrequire('dotenv').config();\n\nconst connectDB = async () => {\n try {\n await mongoose.connect(process.env.MONGO_URI, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n });\n console.log(`Connected to MongoDB Atlas cluster: ${mongoose.connection.host}, database: ${mongoose.connection.db.databaseName}`);\n } catch (err) {\n console.log(err.message);\n process.exit(1);\n }\n\n mongoose.connection.on('disconnected', () => {\n console.log('Mongoose disconnected');\n });\n mongoose.connection.on('error', (err) => {\n console.log(`Mongoose connection error: ${err}`);\n });\n};\n\nmodule.exports = connectDB;\n",
"text": "Thank you for answering @turivishal, I think I had found the problem. Can you teach me how to fix this?\nimage1333×548 43.7 KB\nat db.jsin console.log it tells that it is connected to test and not on ecomm. I dont know also where that test come from, i just inserted datas to test.products and it works. Here is the question now, how do i fetch ecomm.product instead of test.products?",
"username": "cashdomains"
},
{
"code": "const mongoose = require(\"mongoose\");\nrequire('dotenv').config();\n\nconst connectDB = async () => {\n try {\n await mongoose.connect(process.env.MONGO_URI, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n dbName: 'ecomm' // specify the database name here\n });\n console.log(`Connected to MongoDB Atlas cluster: ${mongoose.connection.host}, database: ${mongoose.connection.db.databaseName}`);\n } catch (err) {\n console.log(err.message);\n process.exit(1);\n }\n\n mongoose.connection.on('disconnected', () => {\n console.log('Mongoose disconnected');\n });\n mongoose.connection.on('error', (err) => {\n console.log(`Mongoose connection error: ${err}`);\n });\n};\n\nmodule.exports = connectDB;\n",
"text": "@turivishal, i found the solution… thank you very much. I just need to specify the database name.",
"username": "cashdomains"
},
{
"code": "mongodb+srv://<username>:<password>@<host>/<db name>\nmongodb://localhost:27017/<db name>\n",
"text": "Glad you solved your problem, Usually can pass the DB name in the connection string, like thisAnd for local setupRefer to the reference doc,\nMongoDB Connection String | Introduction | MongoDB | MongoDB.",
"username": "turivishal"
}
] | Can't show data after API request, I use POSTMAN | 2023-05-06T13:07:03.548Z | Can’t show data after API request, I use POSTMAN | 1,396 |
|
null | [
"queries",
"indexes",
"atlas-search",
"text-search"
] | [
{
"code": "namefamilyName{\n \"v\": 2,\n \"key\": {\n \"_fts\": \"text\",\n \"_ftsx\": 1\n },\n \"name\": \"personsFullname\",\n \"weights\": {\n \"familyName\": 1,\n \"name\": 1\n },\n \"default_language\": \"es\",\n \"language_override\": \"language\",\n \"textIndexVersion\": 3\n}\n[\n {\n \"_id\": \"aaaaaaa\",\n \"name\": \"Roberto \",\n \"familyName\": \"Torres García \"\n },\n {\n \"_id\": \"bbbbbbb\",\n \"name\": \"Ruben A\",\n \"familyName\": \"Parras García\"\n },\n {\n _id:\"ccccc\",\n \"name\": \"Karla\",\n \"familyName\": \"Rosas García\"\n }\n]\nGarcíadb.getCollection(\"personsData\").find({ \"$text\": { \"$search\": \"García\" } })Garciadb.getCollection(\"personsData\").find({ \"$text\": { \"$search\": \"Garcia\" } })",
"text": "Hi,I’m struggling to find out how to correctly set a diacritic insensitive text index for my collection of persons. It’s a normal collection without collation.The MongoDB version is 5.0.15I need a text index (not using mongo atlas) for the name and familyName fields. I created an index with this config:The problem is that even though the MongoDB manual says that from version 3 the text search is diacritic insensitive it doesn’t work that way.Suppose I have these 3 records:If I search for García (using diacritic for i):db.getCollection(\"personsData\").find({ \"$text\": { \"$search\": \"García\" } })It finds the 3 records.But if I search for Garcia (Not using diacritic for i):db.getCollection(\"personsData\").find({ \"$text\": { \"$search\": \"Garcia\" } })It finds no records.What am I missing here?Thank you in advance.",
"username": "Ricardo_Montoya"
},
{
"code": "\"v\": 2,\"textIndexVersion\": 3",
"text": "I’m not sure if the version refers too the prop \"v\": 2, or \"textIndexVersion\": 3.\nAny help or hint is pretty much appreciated.",
"username": "Ricardo_Montoya"
},
{
"code": "db.persons.createIndex(\n { name: \"text\", familyName: \"text\" },\n { default_language: \"es\",\n language_override: \"language\",\n textIndexVersion: 3,\n collation: { locale: \"es\", strength: 2 }\n }\n);\ndb.persons.find(\n { $text: { $search: \"SubjectorName\" } },\n { score: { $meta: \"textScore\" } }\n).collation({ locale: \"es\", strength: 2 })\nexports.searchArticles = function(searchTerm) {\n const articlesCollection = context.services.get(\"mongodb-atlas\").db(\"mydb\").collection(\"articles\");\n return articlesCollection.find({$text: {$search: searchTerm, $language: \"zh\"}});\n};\n",
"text": "I made this last year for a Realm customer who needed to route Mandarin, it should “just work” with Spanish. I know this works with Chinese in making things diacritic, and Korean too, but do let me know if this also helps you for your Spanish. I literally just changed the items to es in default_language etc. for Spanish.This is what you’d use to query your collection.Oh, @Ricardo_Montoya This will help you do this in Realm, too. Just change the language to Spanish.",
"username": "Brock"
},
{
"code": "exports.searchArticles = function(searchTerm) {\n const articlesCollection = context.services.get(\"mongodb-atlas\").db(\"mydb\").collection(\"articles\");\n const pipeline = [\n {\n $match: {\n $text: {\n $search: searchTerm,\n $language: \"zh\"\n }\n }\n },\n {\n $project: {\n title: 1,\n author: 1,\n publicationDate: 1\n }\n },\n {\n $sort: {\n publicationDate: -1\n }\n }\n ];\n \n try {\n return articlesCollection.aggregate(pipeline).toArray();\n } catch (error) {\n console.error(\"Error executing searchArticles pipeline:\", error);\n throw new Error(\"An error occurred while searching for articles.\");\n }\n};\n\n",
"text": "@Ricardo_MontoyaI forgot to also add the Realm aggregation for it, too.This is what you can use in a Realm app if you’re using one of those, also, just again, change the language to Spanish. It also has error handling so you will get an error response if something isn’t quite right. (I like my error handlers when I use Realm lol)",
"username": "Brock"
},
{
"code": "MongoServerError: Error in specification { default_language: \"es\", language_override: \"language\", textIndexVersion: 3, key: { name: \"text\", familyName: \"text\" }, name: \"name_text_familyName_text\", v: 2, collation: { locale: \"es\", caseLevel: false, caseFirst: \"off\", strength: 2, numericOrdering: false, alternate: \"non-ignorable\", maxVariable: \"punct\", normalization: false, backwards: false, version: \"57.1\" } } :: caused by :: Index type 'text' does not support collation: { locale: \"es\", caseLevel: false, caseFirst: \"off\", strength: 2, numericOrdering: false, alternate: \"non-ignorable\", maxVariable: \"punct\", normalization: false, backwards: false, version: \"57.1\" }\nIndex type 'text' does not support collation",
"text": "Thank you.Unfortunately I get and error when I try to execute de index creation.The summary is Index type 'text' does not support collation",
"username": "Ricardo_Montoya"
},
{
"code": "{\n \"name\": \"name_text_familyName_text\",\n \"key\": {\n \"name\": \"text\",\n \"familyName\": \"text\"\n },\n \"default_language\": \"es\",\n \"language_override\": \"language\",\n \"textIndexVersion\": 3\n}\n",
"text": "What version of MDB are you using?Ok, so oddly enough the collation works on Alibaba… I don’t know why, but removing the collation then fixes the error.This is working on my local 6.0@Ricardo_MontoyaNeed to go through and remove the collations.",
"username": "Brock"
},
{
"code": "db.consultas.createIndex(\n { diagnostico: \"text\" },\n);\nuse(\"clinica\");\n\ndb.consultas.insertMany([\n {\n nombre: \"Juan Perez\",\n especialidad: \"general\",\n diagnostico: \"Dolor abdominal, Fiebre alta, tos, posible caso de COVID\",\n },\n {\n nombre: \"María Pelaez\",\n especialidad: \"general\",\n diagnostico: \"Tensión alta, posible episodio de ataque de ansiedad\",\n },\n {\n nombre: \"Javier Garcia\",\n especialidad: \"cardiología\",\n diagnostico: \"Arritmias, acompañado de tensión alta, enfermería\",\n },\n {\n nombre: \"Manuel Gómez\",\n especialidad: \"general\",\n diagnostico: \"Fiebre alta, tos y mucosidades, enfermería\",\n },\n]);\ndb.consultas.createIndex(\n { diagnostico: \"text\" },\n { defaultLanguage: \"es\"}\n);\ndb.consultas.find({ $text: { $search: \"enfermeria\" } });\ndb.consultas.createIndex(\n { diagnostico: \"text\" },\n {\n defaultLanguage: \"es\",\n textIndexVersion: 3,\n }\n);\ndb.consultas.find({\n $text: {\n $search: \"enfermeria\",\n $diacriticSensitive: false,\n },\n});\n",
"text": "Following this thread from StackOverflowIs this a known bug on MongoDB or am I doing something wrong ?On the online mongoPlayground, works:The playground link: Mongo playgroundAnd if you want to reproduce the example in a local instance (I’m using docker 6.0.5):Creating the indexAnd launching the query (you can try both options enfermería and enfermeria you get resultsI didn’t need to go for the ellaborated versionI read on other posts to tryTHIS SEEMS NOT TO BE NEEDED ON VERSION 6And in the query indicate to ignore diacritics:",
"username": "Braulio_Diez_Botella"
}
] | How to correctly set diacritic insensitive $text index for spanish lang | 2023-04-12T04:21:39.050Z | How to correctly set diacritic insensitive $text index for spanish lang | 1,270 |
[
"queries",
"atlas-search"
] | [
{
"code": "$search: {\n index: 'User',\n autocomplete: {\n query: values.query,\n path: 'username',\n }\n }\n$search: {\n index: 'User',\n autocomplete: {\n query: values.query,\n path: ['username', 'fname', 'lname']\n }\n}\n\n$search: {\n index: 'User',\n filter {\n $or {\n autocomplete: {\n query: values.query,\n path: 'username'\n },\n autocomplete: {\n query: values.query,\n path: 'fname'\n },\n autocomplete: {\n query: values.query,\n path: 'lname'\n }\n }\n }\n}\n",
"text": "I have this search index:\n\nScreenshot 2023-05-04 at 15.02.432028×1374 257 KB\nI’m trying to create a search query that receives a string and returns all users of whom username OR fname OR lname includes that string.This is what I have for username search:How do I add OR fname OR lname?What I’ve tried so far (none worked):",
"username": "Itamar_Gil"
},
{
"code": "OR",
"text": "When you say “none worked”, do you mean you got not results or it didn’t work like an OR statement?If the latter, try formatting your query using compound with should statements.",
"username": "Elle_Shwer"
},
{
"code": "$search: {\n index: \"User\",\n compound: {\n should: [{\n autocomplete: {\n query: \"marty\",\n path: \"username\"\n }\n }],\n should: [{\n autocomplete: {\n query: \"marty\",\n path: \"fname\"\n }\n }],\n should: [{\n autocomplete: {\n query: \"marty\",\n path: \"lname\"\n }\n }]\n }\n }\n",
"text": "I meant that these attempts resulted in an error.To your suggestion - did you mean to run such query:It doesn’t give me all possible results (I use Search Tester which should retrieve up to 10 documents, yet it only retrieves 3 - satisfying one of each should).",
"username": "Itamar_Gil"
},
{
"code": "autocompleteshould\n{\n\t'$search': {\n 'index': 'User',\n 'compound':{\n should: [\n \t{\n autocomplete: {\n query: 'marty',\n path: 'username'\n }\n },\n {\n autocomplete: {\n query: 'marty',\n path: 'fname'\n }\n },\n {\n autocomplete: {\n query: 'marty',\n path: 'lname'\n }\n }\n ]\n }\n\t}\n}\n",
"text": "It doesn’t give me all possible results (I use Search Tester which should retrieve up to 10 documents, yet it only retrieves 3 - satisfying one of each should).Are you able to provide some sample documents and the expected output? This would make it easier to reproduce what you’re seeing on your side.From your example, you can also try putting all the autocomplete’s into a single should operator, something like:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "'$search': {\n 'index': 'User',\n 'compound':{\n should: [\n \t{\n autocomplete: {\n query: 'marty',\n path: 'username'\n }\n },\n {\n autocomplete: {\n query: 'marty',\n path: 'fname'\n }\n },\n {\n autocomplete: {\n query: 'marty',\n path: 'lname'\n }\n }\n ]\n }\n\t}\n",
"text": "This one worked, thank you!",
"username": "Itamar_Gil"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas Search - how to create OR query on multiple autocomplete fields | 2023-05-04T12:06:30.455Z | Atlas Search - how to create OR query on multiple autocomplete fields | 1,020 |
|
null | [
"python"
] | [
{
"code": "",
"text": "This morning I got bitten by a stupid mistake. I have two similarly named python dictionary fields: articledata[‘editiondate’] which holds string values such as ‘20230505’ and articledata[‘dateISO’] which holds a datetime value.By mistake I badly edited my program and wound up inserting new documents with a dateISO field which wrongly the string value and not the intended date value.Is there a pymongo/mongodb way of raising an exception in cases such as this. A way of stating that a given field is of a given type? A way that if the above is not respected I can rais an exception and perhaps log what happened without actually inserting?Thanks a lot",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "You can configure server side schema validation to prevent this kind of bug via: https://www.mongodb.com/docs/manual/core/schema-validation/specify-json-schema/#std-label-schema-validation-json",
"username": "Shane"
},
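For illustration, a minimal validator along those lines, written for mongosh (the same command can also be sent from PyMongo via db.command). The collection name articles is a placeholder, since the post does not name the collection:

```js
db.runCommand({
  collMod: "articles",                       // hypothetical collection name
  validator: {
    $jsonSchema: {
      bsonType: "object",
      properties: {
        dateISO:     { bsonType: "date" },   // must be a real BSON date
        editiondate: { bsonType: "string" }  // stays a plain string like "20230505"
      }
    }
  },
  validationLevel: "strict",
  validationAction: "error"                  // reject writes that violate the schema
})
```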
{
"code": "",
"text": "Thanks a lot this is great!",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Pymongo: can a field type be enforced with insertOne? | 2023-05-05T18:48:01.104Z | Pymongo: can a field type be enforced with insertOne? | 544 |
null | [
"java",
"atlas-cluster",
"serverless"
] | [
{
"code": "[04:47:15 INFO]: [org.mongodb.driver.cluster] Cluster created with id ClusterId{value='64531cb384d088474196fcc1', description='null'} and settings {hosts=[127.0.0.1:27017], srvHost=portalbox.2lfg5zm.mongodb.net, mode=LOAD_BALANCED, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}\n[04:47:15 INFO]: [org.mongodb.driver.cluster] SRV resolution completed with hosts: [ac-fsuln4x-lb.2lfg5zm.mongodb.net:27017]\n[04:47:15 WARN]: com.mongodb.MongoSocketException: ac-fsuln4x-lb.2lfg5zm.mongodb.net: Temporary failure in name resolution\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:211)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:75)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:165)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.connection.UsageTrackingInternalConnection.open(UsageTrackingInternalConnection.java:53)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.open(DefaultConnectionPool.java:495)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.connection.DefaultConnectionPool$OpenConcurrencyLimiter.openOrGetAvailable(DefaultConnectionPool.java:855)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.connection.DefaultConnectionPool$OpenConcurrencyLimiter.openOrGetAvailable(DefaultConnectionPool.java:805)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.connection.DefaultConnectionPool.get(DefaultConnectionPool.java:154)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.connection.DefaultConnectionPool.get(DefaultConnectionPool.java:144)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.connection.LoadBalancedServer.getConnection(LoadBalancedServer.java:130)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.binding.ClusterBinding$ClusterBindingConnectionSource.getConnection(ClusterBinding.java:141)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.client.internal.ClientSessionBinding$SessionBindingConnectionSource.getConnection(ClientSessionBinding.java:163)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.operation.CommandOperationHelper.lambda$executeCommand$4(CommandOperationHelper.java:190)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.operation.OperationHelper.withReadConnectionSource(OperationHelper.java:583)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:189)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:184)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.internal.operation.CommandReadOperation.execute(CommandReadOperation.java:58)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:184)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.client.internal.MongoDatabaseImpl.executeCommand(MongoDatabaseImpl.java:195)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.client.internal.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:164)\n[04:47:15 WARN]: at 
core-1.0.0-all.jar//com.mongodb.client.internal.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:159)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.client.internal.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:149)\n[04:47:15 WARN]: at core-1.0.0-all.jar//vegas.pvp.core.storage.Database.<clinit>(Database.kt:26)\n[04:47:15 WARN]: at core-1.0.0-all.jar//vegas.pvp.core.player.manager.PlayerManager.register(PlayerManager.kt:37)\n[04:47:15 WARN]: at core-1.0.0-all.jar//vegas.pvp.core.player.manager.PlayerManager.onPlayerJoin(PlayerManager.kt:63)\n[04:47:15 WARN]: at com.destroystokyo.paper.event.executor.asm.generated.GeneratedEventExecutor14.execute(Unknown Source)\n[04:47:15 WARN]: at org.bukkit.plugin.EventExecutor$2.execute(EventExecutor.java:77)\n[04:47:15 WARN]: at co.aikar.timings.TimedEventExecutor.execute(TimedEventExecutor.java:80)\n[04:47:15 WARN]: at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:70)\n[04:47:15 WARN]: at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:678)\n[04:47:15 WARN]: at net.minecraft.server.players.PlayerList.postChunkLoadJoin(PlayerList.java:372)\n[04:47:15 WARN]: at net.minecraft.server.players.PlayerList.lambda$placeNewPlayer$0(PlayerList.java:309)\n[04:47:15 WARN]: at net.minecraft.server.TickTask.run(TickTask.java:18)\n[04:47:15 WARN]: at net.minecraft.util.thread.IAsyncTaskHandler.d(IAsyncTaskHandler.java:153)\n[04:47:15 WARN]: at net.minecraft.util.thread.IAsyncTaskHandlerReentrant.d(IAsyncTaskHandlerReentrant.java:24)\n[04:47:15 WARN]: at net.minecraft.server.MinecraftServer.b(MinecraftServer.java:1368)\n[04:47:15 WARN]: at net.minecraft.server.MinecraftServer.d(MinecraftServer.java:185)\n[04:47:15 WARN]: at net.minecraft.util.thread.IAsyncTaskHandler.x(IAsyncTaskHandler.java:126)\n[04:47:15 WARN]: at net.minecraft.server.MinecraftServer.bh(MinecraftServer.java:1345)\n[04:47:15 WARN]: at net.minecraft.server.MinecraftServer.x(MinecraftServer.java:1338)\n[04:47:15 WARN]: at net.minecraft.util.thread.IAsyncTaskHandler.c(IAsyncTaskHandler.java:136)\n[04:47:15 WARN]: at net.minecraft.server.MinecraftServer.a(MinecraftServer.java:1416)\n[04:47:15 WARN]: at net.minecraft.server.MinecraftServer.v(MinecraftServer.java:1194)\n[04:47:15 WARN]: at net.minecraft.server.MinecraftServer.lambda$spin$0(MinecraftServer.java:310)\n[04:47:15 WARN]: at java.base/java.lang.Thread.run(Thread.java:833)\n[04:47:15 WARN]: Caused by: java.net.UnknownHostException: ac-fsuln4x-lb.2lfg5zm.mongodb.net: Temporary failure in name resolution\n[04:47:15 WARN]: at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)\n[04:47:15 WARN]: at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:934)\n[04:47:15 WARN]: at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1543)\n[04:47:15 WARN]: at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:852)\n[04:47:15 WARN]: at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1533)\n[04:47:15 WARN]: at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1385)\n[04:47:15 WARN]: at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306)\n[04:47:15 WARN]: at core-1.0.0-all.jar//com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:203)\n[04:47:15 WARN]: ... 45 more\n[04:47:15 INFO]: [org.mongodb.driver.cluster] Cluster closed with id ClusterId{value='64531cb384d088474196fcc1', description='null'}\n",
"text": "I am trying to connect to my Atlas cluster through my Minecraft plugin.I cannot get in. When I join the server (and the first query runs), I get this error:My dedicated server uses Ubuntu 22.04, and the game servers are using Pterodactyl. I’m using the Java MongoDB driver sync 4.3.1 (I tried latest: same result). My cluster is serverless.This does not happen locally, so I assume it’s something to do with the settings of my dedi. I have little to no experience in networking, ports etc. Is there something I need to change to allow my MC plugin to connect to the Mongo?",
"username": "Stephen_N_A"
},
{
"code": "[04:47:15 WARN]: Caused by: java.net.UnknownHostException: ac-fsuln4x-lb.2lfg5zm.mongodb.net: Temporary failure in name resolution\n",
"text": "Hi @Stephen_N_A,Welcome to the MongoDB Community forums My dedicated server uses Ubuntu 22.04The particular error seems to be related to the Linux OS and DNS, and it can occur due to various reasons such as network connectivity issues, DNS configuration problems, or issues with the local host file.Here are few links that discuss similar problems and their resolutions, which might be helpful for you:Best,\nKushagra",
"username": "Kushagra_Kesav"
},
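Since the stack trace above fails inside name resolution rather than in the driver, a quick way to confirm whether the host (or the Pterodactyl container) can resolve the Atlas hostnames at all is a small DNS check. A minimal sketch, assuming Node.js is available on the Ubuntu host; the hostnames below are the ones from the log above and are shown for illustration only:

```js
// Illustration only: verify that the host can resolve the Atlas SRV record
// and the hosts it points to (assumes Node.js is installed on the server).
const dns = require("dns").promises;

async function checkAtlasDns() {
  try {
    // A mongodb+srv:// connection string resolves an SRV record first...
    const srv = await dns.resolveSrv("_mongodb._tcp.portalbox.2lfg5zm.mongodb.net");
    console.log("SRV targets:", srv.map(r => `${r.name}:${r.port}`));

    // ...and each target host must then resolve to an address.
    for (const record of srv) {
      const addrs = await dns.lookup(record.name, { all: true });
      console.log(record.name, "->", addrs.map(a => a.address));
    }
  } catch (err) {
    // ENOTFOUND or EAI_AGAIN here points at the host's DNS setup, not the driver.
    console.error("DNS resolution failed:", err.code, err.message);
  }
}

checkAtlasDns();
```

If this fails inside the container but succeeds on the host itself, the container's DNS configuration is the place to look.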
{
"code": "",
"text": "Hi there,Thanks for the warm welcome!Update: I’ve tried running my Java plugin in a screen in my dedicated server, and it works no problem. So my suspicion is that the fact it’s on Pterodactyl is causing the issue.I know it isn’t exactly your domain - but have you heard of people having this error within Ptero servers?Best,\nStephen",
"username": "Stephen_N_A"
}
] | "Temporary failure in name resolution" when connecting to Atlas | 2023-05-04T03:45:06.766Z | “Temporary failure in name resolution” when connecting to Atlas | 1,859 |
null | [
"node-js",
"react-native"
] | [
{
"code": "",
"text": "I have a fullStack mobile application that interfaces with a node backend to communicate to my mongodb atlas, meaning I hit an endpoint to process any request from users. I would like to add offline capabilities to it through realm, does that mean i do not need the endpoints any more? How do i go about it (any rough description will help me a mile).Thank you!",
"username": "louis_Muriuki"
},
{
"code": "",
"text": "Hi, I will try my best to answer your question, but it is a little open-ended. The short answer is “Yes, Atlas Device Sync should be able to help you out here”. I would recommend reading through a tutorial in the language of your choice: https://www.mongodb.com/docs/atlas/app-services/sync/get-started/One key is that you can migrate functionality slowly to use ADS by starting to define a subset of your schemas in ADS/Realm and add more and more as you remove functionality from your node backend.As an aside, it sounds like some of the friction in your questions is around the question of “what exactly is ADS and does it really solve my use case?”. I will selfishly point you to an article I wrote recently that I think might be a good place to start: MongoDB Realm Flexible Sync: A primer by an engineer, for an engineer (Part 1) | by Tyler Kaye | Realm Blog | MediumIf you have any more specific questions about your use case and challenges you are running into I would be happy to chat more and think through some issues with you.Best,\nTyler",
"username": "Tyler_Kaye"
},
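For a rough picture of what the client side looks like once Device Sync replaces the REST endpoints, here is a minimal JavaScript (React Native SDK) sketch; the app ID, schema and query are placeholders for illustration, not values from this thread:

```js
import Realm from "realm";

// Hypothetical schema for illustration only; replace with your own models.
const TaskSchema = {
  name: "Task",
  primaryKey: "_id",
  properties: { _id: "objectId", owner_id: "string", text: "string" },
};

async function openSyncedRealm() {
  // Placeholder app id; use the one from your App Services app.
  const app = new Realm.App({ id: "<your-app-id>" });
  const user = await app.logIn(Realm.Credentials.anonymous());

  // Data is read and written locally; the SDK syncs it when online.
  const realm = await Realm.open({
    schema: [TaskSchema],
    sync: { user, flexible: true },
  });

  // Flexible Sync only syncs what you subscribe to.
  await realm.subscriptions.update(subs => {
    subs.add(realm.objects("Task").filtered("owner_id == $0", user.id));
  });

  return realm;
}
```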
{
"code": "",
"text": "Thank you Tyler,\nlet me go through the resources when I get stuck I will reach out for further assistance.",
"username": "louis_Muriuki"
},
{
"code": "",
"text": "Hello again @Tyler_Kaye ,\nyour resources were of great help, I have successfully been able to implement the basic react native features to get my device sync working,(I know this cause i can see my sync events in the Device sync section) however I have not tested it and I wanted to try it out first by the login process and specifically using data from my collection, I tried using custom user data and imported the collection containing the data but i see zero user list, Also as it is on my application it is convenient for my users to login with phone number and password instead of email.\nhow do I go about this\nthanks",
"username": "louis_Muriuki"
},
{
"code": "",
"text": "Hi, we support various authentication providers; however, you are correct in noting that a lot of people want to have their own authentication service (or another external one like Auth0). For these use cases, I would recommend using the Custom JWT integration. This just lets App Services be a bit more passive and allows you to use a different authentication provider.Let me know if that works.\nTyler",
"username": "Tyler_Kaye"
},
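For the phone-number plus password requirement, the shape of a Custom JWT login in the React Native (JavaScript) SDK is roughly the following. This is a sketch only: the backend call that verifies the phone number and issues the JWT is a hypothetical placeholder, not an existing API.

```js
import Realm from "realm";

// Sketch of the Custom JWT flow described above.
// "fetchJwtFromMyBackend" is a hypothetical helper standing in for your own
// auth service (or Auth0 etc.) that verifies phone number + password and
// returns a signed JWT that App Services is configured to trust.
async function logInWithPhoneNumber(app, phoneNumber, password) {
  const token = await fetchJwtFromMyBackend(phoneNumber, password); // hypothetical
  const credentials = Realm.Credentials.jwt(token);
  return app.logIn(credentials); // resolves to a Realm.User
}
```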
{
"code": "",
"text": "Okay , i will check it out.\nDo you have any Idea why I cannot see my users from the imported collection, or what can lead to them not being seen",
"username": "louis_Muriuki"
},
{
"code": "",
"text": "I am not entirely sure what you are asking about. Can you add some more detail about what you are doing, what you are seeing, and what you would expect to be seeing?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "\nimage1160×637 26.1 KB\nI selected the custom user data and I have passed the collection that has my user data…i would like to use that data to handle the logins…Is it not supposed to show that list under the “Users” section or how exactly does it work?",
"username": "louis_Muriuki"
},
{
"code": "",
"text": "Hi, I think the issue here might just be a slight misunderstanding in what custom user data is. I would recommend reading through this docs page: https://www.mongodb.com/docs/atlas/app-services/users/custom-metadata/The TLDR is that we allow you to use “rules” to set permissions on who has access to what data. If you want additional information about a user to be present at the time of rules evaluation (you have roles, access lists, etc) you can use custom user data, and when a user logs in we will get that user with that user_id field (if it exists) and you can have a rule that references it in some way.Therefore, the issue seems to be that you think this is seeding the user’s list, but really this is just metadata that is used as a lookup during permissions evaluation.Another page that might be helpful is this: https://www.mongodb.com/docs/atlas/app-services/sync/app-builder/device-sync-permissions-guide/#std-label-restricted-news-feedAs a last note, I generally tend to push people to spend more time developing their app before diving too far into permissions. That should be considered in your development, but I think it would be valuable for you to initially build your app, play with Realm, design your schema, etc before you think about permissions and custom user data (a lot of people do not need to use custom user data).Best,\nTyler",
"username": "Tyler_Kaye"
}
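To make the distinction concrete, this is roughly how custom user data surfaces in the JavaScript SDK. Illustration only: `app` is assumed to be an already-configured Realm.App, and the login provider here is just an example.

```js
// Custom user data is looked up by user id at login and exposed as metadata
// on the logged-in user; it does not populate the "Users" list in the UI.
const user = await app.logIn(Realm.Credentials.anonymous());

console.log(user.customData); // the matching custom-data document, if any

// The cached copy is read at login; it can also be refreshed manually.
await user.refreshCustomData();
```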
] | Adding mongorealm to my existing FullStack mobile App | 2023-05-05T07:14:02.157Z | Adding mongorealm to my existing FullStack mobile App | 845 |
null | [
"connecting"
] | [
{
"code": "mongo -u user my_url/admin",
"text": "I have followed several guides on installing and configuring mongodb. None are working, but I think I’m close… I’ve have managed to get to a point that I can connect to the db server from the local server using the following command:mongo -u user my_url/adminHowever, I’m not able to connect to the server using the above command from a remote machine.I’ve tried both with authentication enabled and disabled, and I’ve tried setting the bindIP to 0.0.0.0, or adding the remote url after a comma, or commenting out the line but nothing worked. I also created the firewall rules for port 127 with the following commands:sudo iptables -A INPUT -p tcp --destination-port 27017 -m state --state NEW,ESTABLISHED -j ACCEPT\nsudo iptables -A OUTPUT -p tcp --source-port 27017 -m state --state ESTABLISHED -j ACCEPTWhat am I doing wrong? I am a beginner and it might be some stupid error ",
"username": "Filippo_Ferrari"
},
{
"code": "",
"text": "remote machineBy remote, do you mean a different machine on the same network or a different machine from another location?If a different machine on the same network, it should work easily.As for the other case, you need to connect using the public IP address of the server. That is not necessarily the IP address of the machine running mongod, if behind a NAT router or VPN. You may find the current public IP using https://www.whatismyip.com/.",
"username": "steevej"
},
{
"code": " # network interfaces\n net:\n port: 27017\n bindIp: 127.0.0.1,1.2.3.4\nsudo ifconfigens160mongo -u user 1.2.3.4/adminMongoDB shell version v4.4.6\nconnecting to: mongodb://1.2.3.4:27017/admin?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 131.159.56.49:27017, connection attempt failed: NetworkTimeout: Error connecting to 131.159.56.49:27017 :: caused by :: Socket operation timed out :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\n",
"text": "Hi Steeve, thank you for your answer. I am using a VM from the data center of my university and in order to access it I need to be connected to a VPN from the same data center. As I am still connected to the VPN when I try to access the server on the VM, I guess that we should be on the same network. Nevertheless as none of what I read is working I am not sure.This is how I configured the mongo.conf file:I got 1.2.3.4 by running sudo ifconfig on the virtual machine and looking in the result for ens160When I run mongo -u user 1.2.3.4/admin on the VM it works, however when I do it from my pc I get this error:Am I using the wrong urls in the mongo.conf?",
"username": "Filippo_Ferrari"
},
{
"code": "connecting to: mongodb://1.2.3.4:27017/...\nError: couldn't connect to server 131.159.56.49:27017...\n",
"text": "The first thing I would try would be to bindIp:0.0.0.0. 1.2.3.4 might not be the address the correct address. The messageseems to indicate that there is some mapping happening between the client and the server.Then, Socket operation timed out might means a routing or firewall issue.Is the VM host your PC or another machine? If another machine then the 1.2.3.4 mapping might be different from the one on the VM host.\nCan you ping 131.159.56.49?\nWhere did you create the firewall rules?",
"username": "steevej"
},
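A quick way to separate a routing/firewall problem from a MongoDB problem is a bare TCP reachability test run from the remote PC. This is an illustration only, assuming Node.js is installed on that machine; the host below is the address mentioned in this thread and should be replaced with the server's real address.

```js
// Bare TCP check against the mongod port, independent of MongoDB auth.
const net = require("net");

const socket = net.connect({ host: "131.159.56.49", port: 27017, timeout: 5000 });

socket.on("connect", () => {
  console.log("TCP connection succeeded: mongod is reachable, check URI/auth next.");
  socket.end();
});

socket.on("timeout", () => {
  console.log("Timed out: usually routing, VPN or firewall, not mongod itself.");
  socket.destroy();
});

socket.on("error", (err) => {
  // ECONNREFUSED means the host is reachable but nothing is listening / bound there.
  console.log("Connection error:", err.code);
});
```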
{
"code": "1.2.3.4",
"text": "Is it really 1.2.3.4 or is that to obfuscate your actual ip address ?",
"username": "chris"
},
{
"code": "",
"text": "It is my obscurated IP address",
"username": "Filippo_Ferrari"
},
{
"code": "",
"text": "I just noticed I didn’t obscurate my IP address properly. 131.159.56.49 is 1.2.3.4",
"username": "Filippo_Ferrari"
},
{
"code": "",
"text": "Security through obscurity (Security through obscurity - Wikipedia) has never been favourable.And it does not help us helping you because we do not have the real facts.",
"username": "steevej"
},
{
"code": "",
"text": "I am experiencing this same issue Unable to setup remote access for mongodb",
"username": "Ben_Gab"
}
] | Cannot connect remotely to mongodb | 2021-09-13T10:10:42.249Z | Cannot connect remotely to mongodb | 14,275 |
[] | [
{
"code": "{\n \"_id\": <ObjectId>\n \"userId\": \"6441afd1b22e3758a2d4910c\",\n \"folder\": \"cc\",\n}\n{\n \"folder\": \"%%user.custom_data.folder\"\n}\n",
"text": "Hi,we are exploring a rather dynamic permission use-case with MongoDB realm.The idea is, that some resources that a given user should have access to would be stored as a custom_data field (“folder”); and in document permissions we would use these as expensions.E.g. custom data object isA document permission for an “Item” collection is:In a “static way” (i.e. custom_data is not changed) this works like a charm, if we update an “Item” document (or the rule itself).However, if we update the custom_data document itself (e…g changing folder value) then this is not reflected via sync, only if the user logs out and logs back in within the Realm app.According to the docs, Flexible sync should be capabile to iniate a “client reset” in such case:“Your rules reference custom user data to determine permissions dynamically, and the value of that custom user data has changed since the last Sync session.”What do we miss? Is “Flexible Sync” capable of this, or we need to do a user logout-login to trigger a client reset for such a use-case?Any hint would be greatly appreciated \nGabor",
"username": "Gabor_Rendes"
},
{
"code": "",
"text": "Hi @Gabor_Rendes,However, if we update the custom_data document itself (e…g changing folder value) then this is not reflected via sync, only if the user logs out and logs back in within the Realm app.Could you elaborate on what you mean by “this is not reflected via sync”? Ie, are you trying to sync on the custom user data collection, and you are not seeing changes to a field in that collection propagate?According to the docs, Flexible sync should be capabile to iniate a “client reset” in such case:https://www.mongodb.com/docs/atlas/app-services/rules/sync-compatibility/#permission-changes“Your rules reference custom user data to determine permissions dynamically, and the value of that custom user data has changed since the last Sync session.”What do we miss? Is “Flexible Sync” capable of this, or we need to do a user logout-login to trigger a client reset for such a use-case?What’s the question here exactly? If a client establishes a sync session with evaluated read permissions P1, and then the client disconnects, ends their session, and tries to establish a new one with evaluated read permissions P2 != P1, then the behavior is for the server to send a client reset error to the client, as you linked to in the docs.Which SDK are you using? The SDKs provide a way to pause / resume synchronization (for example, see here for the Swift docs) – the end result of doing this is the client will terminate it’s current session and then attempt to establish a new one. If the read permissions have changed in between the two sessions, then the client should see a client reset error message in the logs. We also have a docs page on how to handle client resets here, which you may find useful.Jonathan",
"username": "Jonathan_Lee"
},
{
"code": "{\n \"name\": \"a\"\n \"folder\": \"a\"\n}\n{\n \"name\": \"b\"\n \"folder\": \"b\"\n}\n{\n \"userId\": <user's id>\n \"folder\": \"a\"\n}\n",
"text": "Hi @Jonathan_Lee ,thanks for the quick response.Not exactly i try to elaborate.Let’s say we have two Items:Item A:Item B:Furthermore we have a user X with following custom_data:Now if I start a Realm app, and login with user X, we’ll see Item a.\nIf we would add other items with “folder”: “a”, they would be synced “real-time” in the app, that’s fine so far.However, if we change in user X’s custom_data the folder value to “b”, then our expection would be that instead of Item A, Item B would be synced. But this never happens. I have to log out & login to trigger a “client reset” and see Item B only.I hope i could made the use-case / idea clear. Is this how Realm/Flexible sync supposed to work, or we do miss something?(btw we use Flutter SDK, but I am not sure whether this is important/relevant)",
"username": "Gabor_Rendes"
},
{
"code": "",
"text": "Ah ok, I see what you’re saying now. So every time a new sync connection comes in, we lookup the custom user data for the user associated with the connecting device. As the sync session becomes established for the connection, that custom user data is used in evaluating permissions. When custom user data changes, we do not refresh the permissions mid-session; instead, the changes to custom user data will only be reflected on the next sync connection to the server. This is works as designed, although may be subject to change in the future – we have project planned to handle permission changes gracefully on the server without causing a client reset.One way of potentially getting around this is to manually force a reconnect by pausing and then resuming synchronization; that way, on session resume (restart), permissions will be evaluated again using the updated custom user data. The Flutter SDK has docs on how to accomplish this here.Let me know if you have any more questions,\nJonathan",
"username": "Jonathan_Lee"
},
{
"code": "realm.syncSession.pause();\nawait currentUser!.refreshCustomData();\nrealm.syncSession.resume();\n",
"text": "Hi Jonathan!Thank you for your detailed response, I really appreciate Yes, I can confirm that pause & resume can resolve this “refresh” issue, although it required an extra nudge:(excerpt from Flutter code)I am really not sure, why “refreshCustomData()” is needed but it does the trick Thank you once again, have a nice day!",
"username": "Gabor_Rendes"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Change CustomData for Document Permissions? | 2023-05-04T16:36:29.396Z | Change CustomData for Document Permissions? | 681 |
|
null | [] | [
{
"code": "",
"text": "“I am in a big trouble. Can someone please help me to solve this? When I try to connect MongoDB to my local server, it’s not connecting but the code is still running. So, I decided to change the IP address from “127.0.0.1:27017” to “localhost”. However, this time it shows an error “ECONNREFUSED”. What should I do? I have watched more than 30 videos on YouTube and tried everything, but still have not found a solution. I have also checked after turning off the firewall and tried another network connection. Please help.”",
"username": "Sayandh_Ms"
},
{
"code": "",
"text": "How did you start the mongod server (e.g. what options)?How did you connect to 127.xxx ? what error did you see?",
"username": "Kobe_W"
},
{
"code": "",
"text": "\nScreenshot (3)1920×1080 189 KB\n\nsorry for the late reply\nThis is my source code. My terminal is stuck like this",
"username": "Sayandh_Ms"
},
{
"code": "",
"text": "Did you verify that mongodb server is indeed listening on 27017? you can use something like netstat to check that.And if yes, does mongodb log show anything?",
"username": "Kobe_W"
},
{
"code": "",
"text": "",
"username": "Sayandh_Ms"
},
{
"code": "",
"text": "It says ‘Established’, but my terminal is not showing any further output or response, or it appears to be still running.",
"username": "Sayandh_Ms"
},
{
"code": "mongosh mongodb://127.0.0.1:27017\n",
"text": "I would download the Mongo Shell (mongosh) and try to run the following command to eliminate the nodejs code. This way we can work around some possible issues. (https://www.mongodb.com/docs/mongodb-shell/install/)Another test could be to use MongoDB Compass and connect to 127.0.0.1.Please post any errors you get when doing these tests",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "You cold also try following this blog post from MongoDB on connecting with NodeJS and seeing if there is an issue with your code.Node.js and MongoDB is a powerful pairing and in this Quick Start series we show you how",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Now, I am able to connect with the MongoDB cluster, but I am still unable to connect with my local server.",
"username": "Sayandh_Ms"
},
{
"code": "",
"text": "Thankyou for recommending this blog",
"username": "Sayandh_Ms"
},
{
"code": "\"127.0.0.1\"\"localhost\"\"::1\"",
"text": "did your app/code/server work with \"127.0.0.1\"? if it does, then you may need to change your settings for IPv6 as the \"localhost\" would be \"::1\", or check your system for this. there are other discussions about this IPv6 effect on the Forums. check them if you need.if this is not the case, then you need to supply more info about your configuration.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "\nScreenshot (4)1920×1080 191 KB\n\nI solved my problem with this code .Thankyou all for your support ",
"username": "Sayandh_Ms"
},
{
"code": "\"new MongoClient(uri)\"\"localhost\"\"127.0.0.1\"",
"text": "To make better use of async/await, you can say this \"new MongoClient(uri)\" is the modern way to use the driver. Keep that in mind when you read old tutorials with lots of “callback” functions.anyways, with this working code, will you please also use \"localhost\" instead of \"127.0.0.1\" to check for IPv6, because part of your resolution is not clear about this statement:I decided to change the IP address from “127.0.0.1:27017” to “localhost”. However, this time it shows an error “ECONNREFUSED”",
"username": "Yilmaz_Durmaz"
},
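A minimal, self-contained version of that async/await pattern with the Node.js driver; the database and collection names are placeholders for illustration only:

```js
// Modern driver usage: create the client, then await connect/queries.
// 127.0.0.1 keeps the connection on IPv4; on machines where "localhost"
// resolves to ::1, a mongod bound only to 127.0.0.1 answers with ECONNREFUSED.
const { MongoClient } = require("mongodb");

const uri = "mongodb://127.0.0.1:27017";
const client = new MongoClient(uri);

async function run() {
  try {
    await client.connect();
    // "test" / "items" are placeholder names.
    const docs = await client.db("test").collection("items").find().toArray();
    console.log(docs);
  } finally {
    await client.close();
  }
}

run().catch(console.error);
```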
{
"code": "",
"text": "Hello @Sayandh_MsLook at this example: if you have any other questions I am available.Successfully connected to MongoDB. Contribute to patbi/Successfully_connected_to_MongoDB development by creating an account on GitHub.",
"username": "Patrick_Biyaga"
}
] | MongoDB connection refused | 2023-04-27T17:48:28.554Z | MongoDB connection refused | 2,301 |
[
"node-js",
"server"
] | [
{
"code": "",
"text": "Hello,PLEASE PLEASE PLEASE HELP!!!It’s been 4 days since I’m trying to install and run the [email protected] on my new machine and trying to run my old projects I was working on.But after trying several methods (even from the official documentation), nothing is working. I’m able to install the [email protected] but as soon as I start my app, it disconnects automatically.\nScreenshot 2023-05-01 at 1.40.55 PM1856×490 240 KB\nI’m using Nodejs@18 and macOS 13.3.1 (Ventura). Please help if anyone could! I have tried to install the older versions as well but they’re also seem to be not working. The same issue with [email protected], [email protected] as well.I’ll really appreciate it.",
"username": "Meghraj_Suthar"
},
{
"code": "",
"text": "\nScreenshot 2023-05-01 at 1.40.35 PM1920×768 156 KB\n",
"username": "Meghraj_Suthar"
},
{
"code": "mongod.logmongod",
"text": "Hey @Meghraj_Suthar, what does the mongod.log contain. It should contain all the information you need to troubleshoot why the mongod process isn’t starting on this machine. Once you get that sorted out your node application should work again as expected ",
"username": "alexbevi"
},
{
"code": "",
"text": "Thanks for reply…I checked but there isn’t much to know what’s going wrong. Here it is:{“t”:{“$date”:“2023-05-02T13:19:48.411+05:30”},“s”:“I”, “c”:“COMMAND”, “id”:51803, “ctx”:“conn2”,“msg”:“Slow query”,“attr”:{“type”:“command”,“ns”:“grocify.products”,“command”:{“createIndexes”:“products”,“indexes”:[{“name”:“inStock_1”,“key”:{“inStock”:1},“background”:true}],“writeConcern”:{“w”:1},“lsid”:{“id”:{“$uuid”:“0201d79c-7b15-4cb2-8b7f-ead337d3a050”}},“$db”:“grocify”},“numYields”:0,“reslen”:114,“locks”:{“ParallelBatchWriterMode”:{“acquireCount”:{“r”:5}},“FeatureCompatibilityVersion”:{“acquireCount”:{“r”:4,“w”:2}},“ReplicationStateTransition”:{“acquireCount”:{“w”:6}},“Global”:{“acquireCount”:{“r”:4,“w”:2}},“Database”:{“acquireCount”:{“r”:4,“w”:1}},“Collection”:{“acquireCount”:{“r”:4,“W”:1}},“Mutex”:{“acquireCount”:{“r”:5}}},“flowControl”:{“acquireCount”:2,“timeAcquiringMicros”:6},“writeConcern”:{“w”:1,“wtimeout”:0,“provenance”:“clientSupplied”},“storage”:{},“remote”:“[::1]:49232”,“protocol”:“op_msg”,“durationMillis”:248}}\n{“t”:{“$date”:“2023-05-02T13:19:48.411+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22237, “ctx”:“TimestampMonitor”,“msg”:“Completing drop for ident”,“attr”:{“ident”:“index-63-638419270593250689”,“dropTimestamp”:{“$timestamp”:{“t”:0,“i”:0}}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.415+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20438, “ctx”:“conn2”,“msg”:“Index build: registering”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“28768239-7ca0-45da-b3e7-1ed82dd1979c”}},“namespace”:“grocify.purchase-invoices”,“collectionUUID”:{“uuid”:{“$uuid”:“3c993a50-fe5d-4298-9914-d3cc62fc8666”}},“indexes”:1,“firstIndex”:{“name”:“supplier.country_1”},“command”:{“createIndexes”:“purchase-invoices”,“v”:2,“indexes”:[{“name”:“supplier.country_1”,“key”:{“supplier.country”:1},“background”:true}],“ignoreUnknownIndexOptions”:false}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.432+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4715500, “ctx”:“conn2”,“msg”:“Too many index builds running simultaneously, waiting until the number of active index builds is below the threshold”,“attr”:{“numActiveIndexBuilds”:3,“maxNumActiveUserIndexBuilds”:3,“indexSpecs”:[{“name”:“supplier.country_1”,“key”:{“supplier.country”:1},“background”:true,“v”:2}],“buildUUID”:{“uuid”:{“$uuid”:“28768239-7ca0-45da-b3e7-1ed82dd1979c”}},“collectionUUID”:{“uuid”:{“$uuid”:“3c993a50-fe5d-4298-9914-d3cc62fc8666”}}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.455+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20345, “ctx”:“conn3”,“msg”:“Index build: done building”,“attr”:{“buildUUID”:null,“collectionUUID”:{“uuid”:{“$uuid”:“b653e4f1-66da-483e-ad3c-d8e9f4d53a85”}},“namespace”:“grocify.purchase-returns”,“index”:“supplier.city_1”,“ident”:“index-171–7699691461290141377”,“collectionIdent”:“collection-84-260806588614309943”,“commitTimestamp”:null}}\n{“t”:{“$date”:“2023-05-02T13:19:48.456+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20440, “ctx”:“conn3”,“msg”:“Index build: waiting for index build to complete”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“1007a6ed-3965-47e7-9f82-3b8fc308f668”}},“deadline”:{“$date”:{“$numberLong”:“9223372036854775807”}}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.474+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20447, “ctx”:“conn3”,“msg”:“Index build: completed”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“1007a6ed-3965-47e7-9f82-3b8fc308f668”}}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.474+05:30”},“s”:“I”, “c”:“COMMAND”, “id”:51803, “ctx”:“conn3”,“msg”:“Slow 
query”,“attr”:{“type”:“command”,“ns”:“grocify.purchase-returns”,“command”:{“createIndexes”:“purchase-returns”,“indexes”:[{“name”:“supplier.city_1”,“key”:{“supplier.city”:1},“background”:true}],“writeConcern”:{“w”:1},“lsid”:{“id”:{“$uuid”:“5a8bad22-0dcf-4f41-a142-cad57a30a5ca”}},“$db”:“grocify”},“numYields”:0,“reslen”:114,“locks”:{“ParallelBatchWriterMode”:{“acquireCount”:{“r”:5}},“FeatureCompatibilityVersion”:{“acquireCount”:{“r”:4,“w”:2}},“ReplicationStateTransition”:{“acquireCount”:{“w”:6}},“Global”:{“acquireCount”:{“r”:4,“w”:2}},“Database”:{“acquireCount”:{“r”:4,“w”:1}},“Collection”:{“acquireCount”:{“r”:4,“W”:1}},“Mutex”:{“acquireCount”:{“r”:5}}},“flowControl”:{“acquireCount”:2,“timeAcquiringMicros”:8},“writeConcern”:{“w”:1,“wtimeout”:0,“provenance”:“clientSupplied”},“storage”:{},“remote”:“[::1]:49233”,“protocol”:“op_msg”,“durationMillis”:269}}\n{“t”:{“$date”:“2023-05-02T13:19:48.478+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20438, “ctx”:“conn3”,“msg”:“Index build: registering”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“f0ee55ea-a036-45df-8573-5ddc9f4241d2”}},“namespace”:“grocify.products”,“collectionUUID”:{“uuid”:{“$uuid”:“3d29a7fb-3076-4ddf-ae98-1dc67243a85f”}},“indexes”:1,“firstIndex”:{“name”:“isComingSoon_1”},“command”:{“createIndexes”:“products”,“v”:2,“indexes”:[{“name”:“isComingSoon_1”,“key”:{“isComingSoon”:1},“background”:true}],“ignoreUnknownIndexOptions”:false}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.496+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4715500, “ctx”:“conn3”,“msg”:“Too many index builds running simultaneously, waiting until the number of active index builds is below the threshold”,“attr”:{“numActiveIndexBuilds”:3,“maxNumActiveUserIndexBuilds”:3,“indexSpecs”:[{“name”:“isComingSoon_1”,“key”:{“isComingSoon”:1},“background”:true,“v”:2}],“buildUUID”:{“uuid”:{“$uuid”:“f0ee55ea-a036-45df-8573-5ddc9f4241d2”}},“collectionUUID”:{“uuid”:{“$uuid”:“3d29a7fb-3076-4ddf-ae98-1dc67243a85f”}}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.497+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20345, “ctx”:“conn1”,“msg”:“Index build: done building”,“attr”:{“buildUUID”:null,“collectionUUID”:{“uuid”:{“$uuid”:“5f1a7cff-b5d7-4847-bf92-f3ef8f1ef771”}},“namespace”:“grocify.subscriptions”,“index”:“outlet_1”,“ident”:“index-172–7699691461290141377”,“collectionIdent”:“collection-54-260806588614309943”,“commitTimestamp”:null}}\n{“t”:{“$date”:“2023-05-02T13:19:48.498+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20440, “ctx”:“conn1”,“msg”:“Index build: waiting for index build to complete”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“0b07f312-46cf-4e87-90a9-3c34b4678a5c”}},“deadline”:{“$date”:{“$numberLong”:“9223372036854775807”}}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.517+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20447, “ctx”:“conn1”,“msg”:“Index build: completed”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“0b07f312-46cf-4e87-90a9-3c34b4678a5c”}}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.517+05:30”},“s”:“I”, “c”:“COMMAND”, “id”:51803, “ctx”:“conn1”,“msg”:“Slow 
query”,“attr”:{“type”:“command”,“ns”:“grocify.subscriptions”,“command”:{“createIndexes”:“subscriptions”,“indexes”:[{“name”:“outlet_1”,“key”:{“outlet”:1},“background”:true}],“writeConcern”:{“w”:1},“lsid”:{“id”:{“$uuid”:“e4782b10-699f-4adf-82e6-a624db46e36f”}},“$db”:“grocify”},“numYields”:0,“reslen”:114,“locks”:{“ParallelBatchWriterMode”:{“acquireCount”:{“r”:5}},“FeatureCompatibilityVersion”:{“acquireCount”:{“r”:4,“w”:2}},“ReplicationStateTransition”:{“acquireCount”:{“w”:6}},“Global”:{“acquireCount”:{“r”:4,“w”:2}},“Database”:{“acquireCount”:{“r”:4,“w”:1}},“Collection”:{“acquireCount”:{“r”:4,“W”:1}},“Mutex”:{“acquireCount”:{“r”:5}}},“flowControl”:{“acquireCount”:2,“timeAcquiringMicros”:6},“writeConcern”:{“w”:1,“wtimeout”:0,“provenance”:“clientSupplied”},“storage”:{},“remote”:“[::1]:49231”,“protocol”:“op_msg”,“durationMillis”:251}}\n{“t”:{“$date”:“2023-05-02T13:19:48.522+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20438, “ctx”:“conn1”,“msg”:“Index build: registering”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“bca587d7-b942-4344-a8c1-b0b3ed4a422f”}},“namespace”:“grocify.subscriptions”,“collectionUUID”:{“uuid”:{“$uuid”:“5f1a7cff-b5d7-4847-bf92-f3ef8f1ef771”}},“indexes”:1,“firstIndex”:{“name”:“slot_1”},“command”:{“createIndexes”:“subscriptions”,“v”:2,“indexes”:[{“name”:“slot_1”,“key”:{“slot”:1},“background”:true}],“ignoreUnknownIndexOptions”:false}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.538+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4715500, “ctx”:“conn1”,“msg”:“Too many index builds running simultaneously, waiting until the number of active index builds is below the threshold”,“attr”:{“numActiveIndexBuilds”:3,“maxNumActiveUserIndexBuilds”:3,“indexSpecs”:[{“name”:“slot_1”,“key”:{“slot”:1},“background”:true,“v”:2}],“buildUUID”:{“uuid”:{“$uuid”:“bca587d7-b942-4344-a8c1-b0b3ed4a422f”}},“collectionUUID”:{“uuid”:{“$uuid”:“5f1a7cff-b5d7-4847-bf92-f3ef8f1ef771”}}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.540+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20345, “ctx”:“conn4”,“msg”:“Index build: done building”,“attr”:{“buildUUID”:null,“collectionUUID”:{“uuid”:{“$uuid”:“8726a9ab-4c25-4927-a7d7-9f0a0342fa7a”}},“namespace”:“grocify.users”,“index”:“createdAt_-1”,“ident”:“index-173–7699691461290141377”,“collectionIdent”:“collection-7-260806588614309943”,“commitTimestamp”:null}}\n{“t”:{“$date”:“2023-05-02T13:19:48.540+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20440, “ctx”:“conn4”,“msg”:“Index build: waiting for index build to complete”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“af46421f-b327-4b0f-b659-c2e1d1842e68”}},“deadline”:{“$date”:{“$numberLong”:“9223372036854775807”}}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.559+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20447, “ctx”:“conn4”,“msg”:“Index build: completed”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“af46421f-b327-4b0f-b659-c2e1d1842e68”}}}}\n{“t”:{“$date”:“2023-05-02T13:19:48.559+05:30”},“s”:“I”, “c”:“COMMAND”, “id”:51803, “ctx”:“conn4”,“msg”:“Slow 
query”,“attr”:{“type”:“command”,“ns”:“grocify.users”,“command”:{“createIndexes”:“users”,“indexes”:[{“name”:“createdAt_-1”,“key”:{“createdAt”:-1},“background”:true}],“writeConcern”:{“w”:1},“lsid”:{“id”:{“$uuid”:“f2d416c4-264d-4048-815a-a003a982c8c5”}},“$db”:“grocify”},“numYields”:0,“reslen”:114,“locks”:{“ParallelBatchWriterMode”:{“acquireCount”:{“r”:5}},“FeatureCompatibilityVersion”:{“acquireCount”:{“r”:4,“w”:2}},“ReplicationStateTransition”:{“acquireCount”:{“w”:6}},“Global”:{“acquireCount”:{“r”:4,“w”:2}},“Database”:{“acquireCount”:{“r”:4,“w”:1}},“Collection”:{“acquireCount”:{“r”:4,“W”:1}},“Mutex”:{“acquireCount”:{“r”:5}}},“flowControl”:{“acquireCount”:2,“timeAcquiringMicros”:7},“writeConcern”:{“w”:1,“wtimeout”:0,“provenance”:“clientSupplied”},“storage”:{},“remote”:“[::1]:49234”,“protocol”:“op_msg”,“durationMillis”:228}}\n{“t”:{“$date”:“2023-05-02T13:19:48.560+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22237, “ctx”:“TimestampMonitor”,“msg”:“Completing drop for ident”,“attr”:{“ident”:“index-49-638419270593250689”,“dropTimestamp”:{“$timestamp”:{“t”:0,“i”:0}}}}\n{“t”:{“$date”:“2023-05-02T13:19:49.077+05:30”},“s”:“F”, “c”:“CONTROL”, “id”:6384300, “ctx”:“ftdc”,“msg”:“Writing fatal message”,“attr”:{“message”:“terminate() called. An exception is active; attempting to gather more information\\n”}}\n{“t”:{“$date”:“2023-05-02T13:19:49.078+05:30”},“s”:“F”, “c”:“CONTROL”, “id”:6384300, “ctx”:“ftdc”,“msg”:“Writing fatal message”,“attr”:{“message”:“DBException::toString(): FileNotOpen: Failed to open interim file /usr/local/var/mongodb/diagnostic.data/metrics.interim.temp\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)38, mongo::AssertionException>\\n\\n”}}\n{“t”:{“$date”:“2023-05-02T13:19:49.098+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31380, “ctx”:“ftdc”,“msg”:“BACKTRACE”,“attr”:{“bt”:{“backtrace”:[{“a”:“10E5AFC25”,“b”:“10B4FA000”,“o”:“30B5C25”,“s”:“_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE”,“C”:“mongo::stack_trace_detail::(anonymous namespace)::printStackTraceImpl(mongo::stack_trace_detail::(anonymous namespace)::Options const&, mongo::StackTraceSink*)”,“s+”:“D5”},{“a”:“10E5B0DE8”,“b”:“10B4FA000”,“o”:“30B6DE8”,“s”:“_ZN5mongo15printStackTraceEv”,“C”:“mongo::printStackTrace()”,“s+”:“28”},{“a”:“10E5AC6C0”,“b”:“10B4FA000”,“o”:“30B26C0”,“s”:“_ZN5mongo12_GLOBAL__N_111myTerminateEv”,“C”:“mongo::(anonymous namespace)::myTerminate()”,“s+”:“F0”},{“a”:“7FF818C2D6DB”,“b”:“7FF818C1F000”,“o”:“E6DB”,“s”:“_ZSt11__terminatePFvvE”,“C”:“std::__terminate(void ()())“,“s+”:“6”},{“a”:“7FF818C2D696”,“b”:“7FF818C1F000”,“o”:“E696”,“s”:”_ZSt9terminatev\",“C”:“std::terminate()”,“s+”:“36”},{“a”:“10B7FE3AA”,“b”:“10B4FA000”,“o”:“3043AA”,“s”:“_ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5mongo4stdx6threadC1IZNS7_14FTDCController5startEvE3$0JELi0EEET_DpOT0_EUlvE_EEEEEPvSJ”,“C”:\"void std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_deletestd::__1::__thread_struct>, mongo::stdx::thread<mongo::FTDCController::start()::$_0, 0>(mongo::FTDCController::start()::$_0)::‘lambda’()>>(void*)”,“s+”:“3A”},{“a”:“7FF818C741D3”,“b”:“7FF818C6E000”,“o”:“61D3”,“s”:“_pthread_start”,“s+”:“7D”},{“a”:“7FF818C6FBD3”,“b”:“7FF818C6E000”,“o”:“1BD3”,“s”:“thread_start”,“s+”:“F”}],“processInfo”:{“mongodbVersion”:“6.0.5”,“gitVersion”:“c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d”,“compiledModules”:,“uname”:{“sysname”:“Darwin”,“release”:“22.4.0”,“version”:“Darwin 
Kernel Version 22.4.0: Mon Mar 6 21:00:17 PST 2023; root:xnu-8796.101.5~3/RELEASE_X86_64”,“machine”:“x86_64”},“somap”:[{“path”:“/usr/local/Cellar/mongodb-community/6.0.5/bin/mongod”,“machType”:2,“b”:“10B4FA000”,“vmaddr”:“100000000”,“buildId”:“94A4BEFC5CC931EB890C4148A64D85E8”},{“path”:“/usr/lib/libc++abi.dylib”,“machType”:6,“b”:“7FF818C1F000”,“vmaddr”:“7FF8003A3000”,“buildId”:“4053AFDD601E3205A89A82B38A77514A”},{“path”:“/usr/lib/system/libsystem_pthread.dylib”,“machType”:6,“b”:“7FF818C6E000”,“vmaddr”:“7FF8003F2000”,“buildId”:“86DFA54395FA36B483C6BF03D01B2AAD”}]}}},“tags”:}\n{“t”:{“$date”:“2023-05-02T13:19:49.098+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“10E5AFC25”,“b”:“10B4FA000”,“o”:“30B5C25”,“s”:“_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE”,“C”:“mongo::stack_trace_detail::(anonymous namespace)::printStackTraceImpl(mongo::stack_trace_detail::(anonymous namespace)::Options const&, mongo::StackTraceSink*)”,“s+”:“D5”}}}\n{“t”:{“$date”:“2023-05-02T13:19:49.098+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“10E5B0DE8”,“b”:“10B4FA000”,“o”:“30B6DE8”,“s”:“_ZN5mongo15printStackTraceEv”,“C”:“mongo::printStackTrace()”,“s+”:“28”}}}\n{“t”:{“$date”:“2023-05-02T13:19:49.098+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“10E5AC6C0”,“b”:“10B4FA000”,“o”:“30B26C0”,“s”:“_ZN5mongo12_GLOBAL__N_111myTerminateEv”,“C”:“mongo::(anonymous namespace)::myTerminate()”,“s+”:“F0”}}}\n{“t”:{“$date”:“2023-05-02T13:19:49.098+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7FF818C2D6DB”,“b”:“7FF818C1F000”,“o”:“E6DB”,“s”:“_ZSt11__terminatePFvvE”,“C”:“std::__terminate(void ()())“,“s+”:“6”}}}\n{“t”:{”$date\":“2023-05-02T13:19:49.098+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7FF818C2D696”,“b”:“7FF818C1F000”,“o”:“E696”,“s”:“_ZSt9terminatev”,“C”:“std::terminate()”,“s+”:“36”}}}\n{“t”:{“$date”:“2023-05-02T13:19:49.098+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“10B7FE3AA”,“b”:“10B4FA000”,“o”:“3043AA”,“s”:“_ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5mongo4stdx6threadC1IZNS7_14FTDCController5startEvE3$0JELi0EEET_DpOT0_EUlvE_EEEEEPvSJ”,“C”:\"void std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_deletestd::__1::__thread_struct>, mongo::stdx::thread<mongo::FTDCController::start()::$_0, 0>(mongo::FTDCController::start()::$_0)::‘lambda’()>>(void*)”,“s+”:“3A”}}}\n{“t”:{“$date”:“2023-05-02T13:19:49.098+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7FF818C741D3”,“b”:“7FF818C6E000”,“o”:“61D3”,“s”:“_pthread_start”,“s+”:“7D”}}}\n{“t”:{“$date”:“2023-05-02T13:19:49.098+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7FF818C6FBD3”,“b”:“7FF818C6E000”,“o”:“1BD3”,“s”:“thread_start”,“s+”:“F”}}}\nScreenshot 2023-05-02 at 1.22.59 PM1920×989 397 KB\n",
"username": "Meghraj_Suthar"
},
{
"code": "# double check this command before running it so you\n# don't accidentally delete all your data ;)\nrm -rf /usr/local/var/mongodb/diagnostic.data\nbrew services restart [email protected]\ndiagnostic.datamongod",
"text": "@Meghraj_Suthar the issue appears to be due to a temporary file not being open during startup.{“t”:{“$date”:“2023-05-02T13:19:49.077+05:30”},“s”:“F”, “c”:“CONTROL”, “id”:6384300, “ctx”:“ftdc”,“msg”:“Writing fatal message”,“attr”:{“message”:“terminate() called. An exception is active; attempting to gather more information\\n”}}\n{“t”:{“$date”:“2023-05-02T13:19:49.078+05:30”},“s”:“F”, “c”:“CONTROL”, “id”:6384300, “ctx”:“ftdc”,“msg”:“Writing fatal message”,“attr”:{“message”:“DBException::toString(): FileNotOpen: Failed to open interim file /usr/local/var/mongodb/diagnostic.data/metrics.interim.temp\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)38, mongo::AssertionException>\\n\\n”}}This is odd, but you can flush that path out completely as it only contains diagnostic data for MongoDB Support to use when troubleshooting a node (see this blog post for more info).Try the following to see if it addresses the issue:This will remove the diagnostic.data entirely and allow the mongod process to recreate it. This should hopefully address the issue if the only issue that was affecting you was this file not being accessible.",
"username": "alexbevi"
},
{
"code": "{\"t\":{\"$date\":\"2023-05-03T13:08:39.048+05:30\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20438, \"ctx\":\"conn3\",\"msg\":\"Index build: registering\",\"attr\":{\"buildUUID\":{\"uuid\":{\"$uuid\":\"5b6da264-368f-4ab5-90c4-beda2202aa23\"}},\"namespace\":\"grocify.orders\",\"collectionUUID\":{\"uuid\":{\"$uuid\":\"2de2bfd8-5f95-47b4-b40a-ee39c2c57078\"}},\"indexes\":1,\"firstIndex\":{\"name\":\"community_1\"},\"command\":{\"createIndexes\":\"orders\",\"v\":2,\"indexes\":[{\"name\":\"community_1\",\"key\":{\"community\":1},\"background\":true}],\"ignoreUnknownIndexOptions\":false}}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.065+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4715500, \"ctx\":\"conn3\",\"msg\":\"Too many index builds running simultaneously, waiting until the number of active index builds is below the threshold\",\"attr\":{\"numActiveIndexBuilds\":3,\"maxNumActiveUserIndexBuilds\":3,\"indexSpecs\":[{\"name\":\"community_1\",\"key\":{\"community\":1},\"background\":true,\"v\":2}],\"buildUUID\":{\"uuid\":{\"$uuid\":\"5b6da264-368f-4ab5-90c4-beda2202aa23\"}},\"collectionUUID\":{\"uuid\":{\"$uuid\":\"2de2bfd8-5f95-47b4-b40a-ee39c2c57078\"}}}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.074+05:30\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\\n\"}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.086+05:30\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): FileNotOpen: Failed to open interim file /usr/local/var/mongodb/diagnostic.data/metrics.interim.temp\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)38, mongo::AssertionException>\\n\\n\"}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.087+05:30\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"conn4\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"b653e4f1-66da-483e-ad3c-d8e9f4d53a85\"}},\"namespace\":\"grocify.purchase-returns\",\"index\":\"supplier._id_1\",\"ident\":\"index-170--5176069391161173466\",\"collectionIdent\":\"collection-84-260806588614309943\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.088+05:30\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20440, \"ctx\":\"conn4\",\"msg\":\"Index build: waiting for index build to complete\",\"attr\":{\"buildUUID\":{\"uuid\":{\"$uuid\":\"67cdf602-3b18-430b-9589-2f16ff06fce3\"}},\"deadline\":{\"$date\":{\"$numberLong\":\"9223372036854775807\"}}}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.096+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"10AA2FC25\",\"b\":\"10797A000\",\"o\":\"30B5C25\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE\",\"C\":\"mongo::stack_trace_detail::(anonymous namespace)::printStackTraceImpl(mongo::stack_trace_detail::(anonymous namespace)::Options const&, mongo::StackTraceSink*)\",\"s+\":\"D5\"},{\"a\":\"10AA30DE8\",\"b\":\"10797A000\",\"o\":\"30B6DE8\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"C\":\"mongo::printStackTrace()\",\"s+\":\"28\"},{\"a\":\"10AA2C6C0\",\"b\":\"10797A000\",\"o\":\"30B26C0\",\"s\":\"_ZN5mongo12_GLOBAL__N_111myTerminateEv\",\"C\":\"mongo::(anonymous 
namespace)::myTerminate()\",\"s+\":\"F0\"},{\"a\":\"7FF817BCD6DB\",\"b\":\"7FF817BBF000\",\"o\":\"E6DB\",\"s\":\"_ZSt11__terminatePFvvE\",\"C\":\"std::__terminate(void (*)())\",\"s+\":\"6\"},{\"a\":\"7FF817BCD696\",\"b\":\"7FF817BBF000\",\"o\":\"E696\",\"s\":\"_ZSt9terminatev\",\"C\":\"std::terminate()\",\"s+\":\"36\"},{\"a\":\"107C7E3AA\",\"b\":\"10797A000\",\"o\":\"3043AA\",\"s\":\"_ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5mongo4stdx6threadC1IZNS7_14FTDCController5startEvE3$_0JELi0EEET_DpOT0_EUlvE_EEEEEPvSJ_\",\"C\":\"void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mongo::stdx::thread::thread<mongo::FTDCController::start()::$_0, 0>(mongo::FTDCController::start()::$_0)::'lambda'()>>(void*)\",\"s+\":\"3A\"},{\"a\":\"7FF817C141D3\",\"b\":\"7FF817C0E000\",\"o\":\"61D3\",\"s\":\"_pthread_start\",\"s+\":\"7D\"},{\"a\":\"7FF817C0FBD3\",\"b\":\"7FF817C0E000\",\"o\":\"1BD3\",\"s\":\"thread_start\",\"s+\":\"F\"}],\"processInfo\":{\"mongodbVersion\":\"6.0.5\",\"gitVersion\":\"c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Darwin\",\"release\":\"22.4.0\",\"version\":\"Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:17 PST 2023; root:xnu-8796.101.5~3/RELEASE_X86_64\",\"machine\":\"x86_64\"},\"somap\":[{\"path\":\"/usr/local/Cellar/mongodb-community/6.0.5/bin/mongod\",\"machType\":2,\"b\":\"10797A000\",\"vmaddr\":\"100000000\",\"buildId\":\"94A4BEFC5CC931EB890C4148A64D85E8\"},{\"path\":\"/usr/lib/libc++abi.dylib\",\"machType\":6,\"b\":\"7FF817BBF000\",\"vmaddr\":\"7FF8003A3000\",\"buildId\":\"4053AFDD601E3205A89A82B38A77514A\"},{\"path\":\"/usr/lib/system/libsystem_pthread.dylib\",\"machType\":6,\"b\":\"7FF817C0E000\",\"vmaddr\":\"7FF8003F2000\",\"buildId\":\"86DFA54395FA36B483C6BF03D01B2AAD\"}]}}},\"tags\":[]}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.107+05:30\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20447, \"ctx\":\"conn4\",\"msg\":\"Index build: completed\",\"attr\":{\"buildUUID\":{\"uuid\":{\"$uuid\":\"67cdf602-3b18-430b-9589-2f16ff06fce3\"}}}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.107+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"10AA2FC25\",\"b\":\"10797A000\",\"o\":\"30B5C25\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE\",\"C\":\"mongo::stack_trace_detail::(anonymous namespace)::printStackTraceImpl(mongo::stack_trace_detail::(anonymous namespace)::Options const&, mongo::StackTraceSink*)\",\"s+\":\"D5\"}}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.107+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"10AA30DE8\",\"b\":\"10797A000\",\"o\":\"30B6DE8\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"C\":\"mongo::printStackTrace()\",\"s+\":\"28\"}}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.107+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"10AA2C6C0\",\"b\":\"10797A000\",\"o\":\"30B26C0\",\"s\":\"_ZN5mongo12_GLOBAL__N_111myTerminateEv\",\"C\":\"mongo::(anonymous namespace)::myTerminate()\",\"s+\":\"F0\"}}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.107+05:30\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn4\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"grocify.purchase-returns\",\"command\":{\"createIndexes\":\"purchase-returns\",\"indexes\":[{\"name\":\"supplier._id_1\",\"key\":{\"supplier._id\":1},\"background\":true}],\"writeConcern\":{\"w\":1},\"lsid\":{\"id\":{\"$uuid\":\"bfa4c556-4614-4283-a309-8e95fd10720f\"}},\"$db\":\"grocify\"},\"numYields\":0,\"reslen\":114,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":5}},\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":4,\"w\":2}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":6}},\"Global\":{\"acquireCount\":{\"r\":4,\"w\":2}},\"Database\":{\"acquireCount\":{\"r\":4,\"w\":1}},\"Collection\":{\"acquireCount\":{\"r\":4,\"W\":1}},\"Mutex\":{\"acquireCount\":{\"r\":5}}},\"flowControl\":{\"acquireCount\":2,\"timeAcquiringMicros\":6},\"writeConcern\":{\"w\":1,\"wtimeout\":0,\"provenance\":\"clientSupplied\"},\"storage\":{},\"remote\":\"[::1]:52581\",\"protocol\":\"op_msg\",\"durationMillis\":255}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.107+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF817BCD6DB\",\"b\":\"7FF817BBF000\",\"o\":\"E6DB\",\"s\":\"_ZSt11__terminatePFvvE\",\"C\":\"std::__terminate(void (*)())\",\"s+\":\"6\"}}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.107+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF817BCD696\",\"b\":\"7FF817BBF000\",\"o\":\"E696\",\"s\":\"_ZSt9terminatev\",\"C\":\"std::terminate()\",\"s+\":\"36\"}}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.108+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"107C7E3AA\",\"b\":\"10797A000\",\"o\":\"3043AA\",\"s\":\"_ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5mongo4stdx6threadC1IZNS7_14FTDCController5startEvE3$_0JELi0EEET_DpOT0_EUlvE_EEEEEPvSJ_\",\"C\":\"void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, mongo::stdx::thread::thread<mongo::FTDCController::start()::$_0, 0>(mongo::FTDCController::start()::$_0)::'lambda'()>>(void*)\",\"s+\":\"3A\"}}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.108+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF817C141D3\",\"b\":\"7FF817C0E000\",\"o\":\"61D3\",\"s\":\"_pthread_start\",\"s+\":\"7D\"}}}\n{\"t\":{\"$date\":\"2023-05-03T13:08:39.108+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF817C0FBD3\",\"b\":\"7FF817C0E000\",\"o\":\"1BD3\",\"s\":\"thread_start\",\"s+\":\"F\"}}}\n",
"text": "Thanks @alexbevi! But still the issue is same…not resolved.\nScreenshot 2023-05-03 at 1.13.41 PM1920×1061 402 KB\n",
"username": "Meghraj_Suthar"
},
{
"code": "",
"text": "{“t”:{“$date”:“2023-05-03T13:08:39.048+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20438, “ctx”:“conn3”,“msg”:“Index build: registering”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“5b6da264-368f-4ab5-90c4-beda2202aa23”}},“namespace”:“grocify.orders”,“collectionUUID”:{“uuid”:{“$uuid”:“2de2bfd8-5f95-47b4-b40a-ee39c2c57078”}},“indexes”:1,“firstIndex”:{“name”:“community_1”},“command”:{“createIndexes”:“orders”,“v”:2,“indexes”:[{“name”:“community_1”,“key”:{“community”:1},“background”:true}],“ignoreUnknownIndexOptions”:false}}}\n{“t”:{“$date”:“2023-05-03T13:08:39.065+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4715500, “ctx”:“conn3”,“msg”:“Too many index builds running simultaneously, waiting until the number of active index builds is below the threshold”,“attr”:{“numActiveIndexBuilds”:3,“maxNumActiveUserIndexBuilds”:3,“indexSpecs”:[{“name”:“community_1”,“key”:{“community”:1},“background”:true,“v”:2}],“buildUUID”:{“uuid”:{“$uuid”:“5b6da264-368f-4ab5-90c4-beda2202aa23”}},“collectionUUID”:{“uuid”:{“$uuid”:“2de2bfd8-5f95-47b4-b40a-ee39c2c57078”}}}}\n{“t”:{“$date”:“2023-05-03T13:08:39.074+05:30”},“s”:“F”, “c”:“CONTROL”, “id”:6384300, “ctx”:“ftdc”,“msg”:“Writing fatal message”,“attr”:{“message”:“terminate() called. An exception is active; attempting to gather more information\\n”}}\n{“t”:{“$date”:“2023-05-03T13:08:39.086+05:30”},“s”:“F”, “c”:“CONTROL”, “id”:6384300, “ctx”:“ftdc”,“msg”:“Writing fatal message”,“attr”:{“message”:“DBException::toString(): FileNotOpen: Failed to open interim file /usr/local/var/mongodb/diagnostic.data/metrics.interim.temp\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)38, mongo::AssertionException>\\n\\n”}}\n{“t”:{“$date”:“2023-05-03T13:08:39.087+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20345, “ctx”:“conn4”,“msg”:“Index build: done building”,“attr”:{“buildUUID”:null,“collectionUUID”:{“uuid”:{“$uuid”:“b653e4f1-66da-483e-ad3c-d8e9f4d53a85”}},“namespace”:“grocify.purchase-returns”,“index”:“supplier._id_1”,“ident”:“index-170–5176069391161173466”,“collectionIdent”:“collection-84-260806588614309943”,“commitTimestamp”:null}}\n{“t”:{“$date”:“2023-05-03T13:08:39.088+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20440, “ctx”:“conn4”,“msg”:“Index build: waiting for index build to complete”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“67cdf602-3b18-430b-9589-2f16ff06fce3”}},“deadline”:{“$date”:{“$numberLong”:“9223372036854775807”}}}}\n{“t”:{“$date”:“2023-05-03T13:08:39.096+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31380, “ctx”:“ftdc”,“msg”:“BACKTRACE”,“attr”:{“bt”:{“backtrace”:[{“a”:“10AA2FC25”,“b”:“10797A000”,“o”:“30B5C25”,“s”:“_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE”,“C”:“mongo::stack_trace_detail::(anonymous namespace)::printStackTraceImpl(mongo::stack_trace_detail::(anonymous namespace)::Options const&, mongo::StackTraceSink*)”,“s+”:“D5”},{“a”:“10AA30DE8”,“b”:“10797A000”,“o”:“30B6DE8”,“s”:“_ZN5mongo15printStackTraceEv”,“C”:“mongo::printStackTrace()”,“s+”:“28”},{“a”:“10AA2C6C0”,“b”:“10797A000”,“o”:“30B26C0”,“s”:“_ZN5mongo12_GLOBAL__N_111myTerminateEv”,“C”:“mongo::(anonymous namespace)::myTerminate()”,“s+”:“F0”},{“a”:“7FF817BCD6DB”,“b”:“7FF817BBF000”,“o”:“E6DB”,“s”:“_ZSt11__terminatePFvvE”,“C”:“std::__terminate(void 
()())“,“s+”:“6”},{“a”:“7FF817BCD696”,“b”:“7FF817BBF000”,“o”:“E696”,“s”:”_ZSt9terminatev\",“C”:“std::terminate()”,“s+”:“36”},{“a”:“107C7E3AA”,“b”:“10797A000”,“o”:“3043AA”,“s”:“_ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5mongo4stdx6threadC1IZNS7_14FTDCController5startEvE3$0JELi0EEET_DpOT0_EUlvE_EEEEEPvSJ”,“C”:\"void std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_deletestd::__1::__thread_struct>, mongo::stdx::thread<mongo::FTDCController::start()::$_0, 0>(mongo::FTDCController::start()::$_0)::‘lambda’()>>(void*)”,“s+”:“3A”},{“a”:“7FF817C141D3”,“b”:“7FF817C0E000”,“o”:“61D3”,“s”:“_pthread_start”,“s+”:“7D”},{“a”:“7FF817C0FBD3”,“b”:“7FF817C0E000”,“o”:“1BD3”,“s”:“thread_start”,“s+”:“F”}],“processInfo”:{“mongodbVersion”:“6.0.5”,“gitVersion”:“c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d”,“compiledModules”:,“uname”:{“sysname”:“Darwin”,“release”:“22.4.0”,“version”:“Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:17 PST 2023; root:xnu-8796.101.5~3/RELEASE_X86_64”,“machine”:“x86_64”},“somap”:[{“path”:“/usr/local/Cellar/mongodb-community/6.0.5/bin/mongod”,“machType”:2,“b”:“10797A000”,“vmaddr”:“100000000”,“buildId”:“94A4BEFC5CC931EB890C4148A64D85E8”},{“path”:“/usr/lib/libc++abi.dylib”,“machType”:6,“b”:“7FF817BBF000”,“vmaddr”:“7FF8003A3000”,“buildId”:“4053AFDD601E3205A89A82B38A77514A”},{“path”:“/usr/lib/system/libsystem_pthread.dylib”,“machType”:6,“b”:“7FF817C0E000”,“vmaddr”:“7FF8003F2000”,“buildId”:“86DFA54395FA36B483C6BF03D01B2AAD”}]}}},“tags”:}\n{“t”:{“$date”:“2023-05-03T13:08:39.107+05:30”},“s”:“I”, “c”:“INDEX”, “id”:20447, “ctx”:“conn4”,“msg”:“Index build: completed”,“attr”:{“buildUUID”:{“uuid”:{“$uuid”:“67cdf602-3b18-430b-9589-2f16ff06fce3”}}}}\n{“t”:{“$date”:“2023-05-03T13:08:39.107+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“10AA2FC25”,“b”:“10797A000”,“o”:“30B5C25”,“s”:“_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE”,“C”:“mongo::stack_trace_detail::(anonymous namespace)::printStackTraceImpl(mongo::stack_trace_detail::(anonymous namespace)::Options const&, mongo::StackTraceSink*)”,“s+”:“D5”}}}\n{“t”:{“$date”:“2023-05-03T13:08:39.107+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“10AA30DE8”,“b”:“10797A000”,“o”:“30B6DE8”,“s”:“_ZN5mongo15printStackTraceEv”,“C”:“mongo::printStackTrace()”,“s+”:“28”}}}\n{“t”:{“$date”:“2023-05-03T13:08:39.107+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“10AA2C6C0”,“b”:“10797A000”,“o”:“30B26C0”,“s”:“_ZN5mongo12_GLOBAL__N_111myTerminateEv”,“C”:“mongo::(anonymous namespace)::myTerminate()”,“s+”:“F0”}}}\n{“t”:{“$date”:“2023-05-03T13:08:39.107+05:30”},“s”:“I”, “c”:“COMMAND”, “id”:51803, “ctx”:“conn4”,“msg”:“Slow 
query”,“attr”:{“type”:“command”,“ns”:“grocify.purchase-returns”,“command”:{“createIndexes”:“purchase-returns”,“indexes”:[{“name”:“supplier._id_1”,“key”:{“supplier._id”:1},“background”:true}],“writeConcern”:{“w”:1},“lsid”:{“id”:{“$uuid”:“bfa4c556-4614-4283-a309-8e95fd10720f”}},“$db”:“grocify”},“numYields”:0,“reslen”:114,“locks”:{“ParallelBatchWriterMode”:{“acquireCount”:{“r”:5}},“FeatureCompatibilityVersion”:{“acquireCount”:{“r”:4,“w”:2}},“ReplicationStateTransition”:{“acquireCount”:{“w”:6}},“Global”:{“acquireCount”:{“r”:4,“w”:2}},“Database”:{“acquireCount”:{“r”:4,“w”:1}},“Collection”:{“acquireCount”:{“r”:4,“W”:1}},“Mutex”:{“acquireCount”:{“r”:5}}},“flowControl”:{“acquireCount”:2,“timeAcquiringMicros”:6},“writeConcern”:{“w”:1,“wtimeout”:0,“provenance”:“clientSupplied”},“storage”:{},“remote”:“[::1]:52581”,“protocol”:“op_msg”,“durationMillis”:255}}\n{“t”:{“$date”:“2023-05-03T13:08:39.107+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7FF817BCD6DB”,“b”:“7FF817BBF000”,“o”:“E6DB”,“s”:“_ZSt11__terminatePFvvE”,“C”:“std::__terminate(void ()())“,“s+”:“6”}}}\n{“t”:{”$date\":“2023-05-03T13:08:39.107+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7FF817BCD696”,“b”:“7FF817BBF000”,“o”:“E696”,“s”:“_ZSt9terminatev”,“C”:“std::terminate()”,“s+”:“36”}}}\n{“t”:{“$date”:“2023-05-03T13:08:39.108+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“107C7E3AA”,“b”:“10797A000”,“o”:“3043AA”,“s”:“_ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5mongo4stdx6threadC1IZNS7_14FTDCController5startEvE3$0JELi0EEET_DpOT0_EUlvE_EEEEEPvSJ”,“C”:\"void std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_deletestd::__1::__thread_struct>, mongo::stdx::thread<mongo::FTDCController::start()::$_0, 0>(mongo::FTDCController::start()::$_0)::‘lambda’()>>(void*)”,“s+”:“3A”}}}\n{“t”:{“$date”:“2023-05-03T13:08:39.108+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7FF817C141D3”,“b”:“7FF817C0E000”,“o”:“61D3”,“s”:“_pthread_start”,“s+”:“7D”}}}\n{“t”:{“$date”:“2023-05-03T13:08:39.108+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:31445, “ctx”:“ftdc”,“msg”:“Frame”,“attr”:{“frame”:{“a”:“7FF817C0FBD3”,“b”:“7FF817C0E000”,“o”:“1BD3”,“s”:“thread_start”,“s+”:“F”}}}",
"username": "Meghraj_Suthar"
},
{
"code": "",
"text": "\nScreenshot 2023-05-03 at 1.09.29 PM1860×582 129 KB\n",
"username": "Meghraj_Suthar"
},
{
"code": "/user/local/var/mongodbmaggiechmod 777 -R /usr/local/var/mongodb\nchown -R maggie /usr/local/var/mongodb\n",
"text": "@Meghraj_Suthar this smells like it might just be a permissions error. Can you ensure the /user/local/var/mongodb path has read/write/execute recursively and is owned by the same user that owns the process.For example, if the user is maggie try something like:",
"username": "alexbevi"
},
{
"code": "",
"text": "Still the same problem (attached screenshot of steps I’ve tried you mentioned above)\nScreenshot 2023-05-03 at 9.47.30 PM1920×1159 270 KB\n",
"username": "Meghraj_Suthar"
},
{
"code": "",
"text": "Here’s the screenshot of the directory permission:\nScreenshot 2023-05-03 at 9.50.07 PM1152×264 88.4 KB\n",
"username": "Meghraj_Suthar"
},
{
"code": "",
"text": "Please help so we can start our development work.",
"username": "Meghraj_Suthar"
},
{
"code": "mongodmkdir data\nmongod --dbpath data\nmongoddata",
"text": "@Meghraj_Suthar if this issue is holding up development you can setup a free cluster in MongoDB Atlas as well.Unfortunately I’m not familiar with managing a node in OSX that was installed via homebrew, however you could try just running the mongod process manually to see if it starts successfully.This will just start a mongod on port 27017 without authentication and write to the data directory you just created.",
"username": "alexbevi"
},
{
"code": "",
"text": "I tried with this too…but still doesn’t work.\nScreenshot 2023-05-05 at 8.01.19 PM1920×923 198 KB\nThe reason I’m setting up is because of development. I cannot check on mongodb cloud for local development changes. It becomes very time consuming. That’s why setting up on local machine makes sense.Could you please check with your team who’s experienced in this macOS?",
"username": "Meghraj_Suthar"
}
] | MongDB Community @ 6.0 is not working with macOS 13.3.1 (Ventura) | 2023-05-01T08:18:39.747Z | MongDB Community @ 6.0 is not working with macOS 13.3.1 (Ventura) | 1,061 |
|
null | [] | [
{
"code": "",
"text": "Good afternoon all,\nI keep receiving every few minutes this warnings into the log:W STORAGE [FlowControlRefresher] Flow control is engaged and the sustainer point is not moving. Please check the health of all secondaries.What can I check to be sure that is not causing problem? Storage check? Replica check? Secondaries?Many thanks in advance,\nA",
"username": "Antonella_Viteritti"
},
{
"code": "",
"text": "Flow control engaged means your secondaries can not catch up with write speed from primary node, and primary has to slow down, so you should check everything that may cause “slow replication”, including but not limited to disk use, network use, connection health…",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thank you. I have checked network performance and is okay.\nCould you please provide some useful commands that I can run from mongo db perspective, please? For example db.printSecondaryReplicationInfo()\nThank you in advance",
"username": "Antonella_Viteritti"
}
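A minimal mongosh sketch of checks commonly used to chase replication lag behind a flow-control warning; the exact stats available depend on the server version, so treat it as illustrative rather than exhaustive:

// How far each secondary is behind the primary
rs.printSecondaryReplicationInfo()
// Member state, health and optime for every replica set member
rs.status().members.forEach(m => print(m.name, m.stateStr, m.optimeDate))
// Oplog size and time span (the replication window on the primary)
rs.printReplicationInfo()
// Flow control counters on the primary, e.g. targetRateLimit and isLagged
db.serverStatus().flowControl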
] | Replica check on secondaries | 2023-05-04T13:47:47.834Z | Replica check on secondaries | 821 |
null | [
"production",
"golang"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to release version 1.11.5 of the MongoDB Go Driver.This release fixes a bug that can squash the FullDocument configuration value when merging multiple ChangeStreamOptions structs. For more information please see the 1.11.5 release notes.You can obtain the driver source from GitHub under the v1.11.5 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,\nThe Go Driver Team",
"username": "Qingyang_Hu1"
},
{
"code": "",
"text": "It seems there is something goes wrong with:\ngo mod tidy\ngo: finding module for package go.mongodb.org/mongo-driver/internal/assert\nXXXXXXXXXX imports\ngo.mongodb.org/mongo-driver/mongo/options tested by\ngo.mongodb.org/mongo-driver/mongo/options.test imports\ngo.mongodb.org/mongo-driver/internal/assert: module go.mongodb.org/mongo-driver@latest found (v1.11.5), but does not contain package go.mongodb.org/mongo-driver/internal/assert",
"username": "Jerome_LAFORGE"
},
{
"code": "",
"text": "Hello @Jerome_LAFORGE, thank you for your post.\nYes, we noticed the import failure. We are going to retract 1.11.5 and release 1.11.6 with a fix.\nWe will post the new release when it is ready.",
"username": "Qingyang_Hu1"
},
{
"code": "",
"text": "Do let us know in this thread when 1.11.6 is released",
"username": "Ajoy_Das"
},
{
"code": "",
"text": "We’ve released Go driver v1.11.6. Meanwhile, v1.11.5 has been retracted due to the import failure. Please use version 1.11.6 or higher.",
"username": "Qingyang_Hu1"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Go Driver 1.11.5 Released | 2023-05-03T13:38:34.536Z | MongoDB Go Driver 1.11.5 Released | 1,037 |
null | [] | [
{
"code": "",
"text": "I have a collection which stores documents worth 1 or 5 mb each. I do not want mongodb caching these images in ram, as there are 100s of thousands of these which may kill performance.\nIs there a way to specifically disable caching of documents in ram for specific collections? I couldnt find it in the documentation sorry.",
"username": "Vishwa_Mithra"
},
{
"code": "",
"text": "I couldnt find it in the documentation sorry.Do not be sorry. There is no such feature. If a document is not recently in use, then it will not take any space in RAM. When it is needed and not in RAM it will be read from disk. If your 100s of thousands of 1 to 5 mb documents are constantly needed, then you want them in RAM. Reading them from disk will kill performance an order of magnitude compared to have them in RAM.",
"username": "steevej"
}
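There is no per-collection cache switch, but it is possible to see how much of the WiredTiger cache a collection currently occupies. A rough mongosh sketch; the database and collection names are placeholders, and the stat names are WiredTiger internals that can differ between versions:

// Total bytes held in the WiredTiger cache for the whole mongod
db.serverStatus().wiredTiger.cache["bytes currently in the cache"]
// Cache bytes attributable to a single collection
db.getSiblingDB("mydb").images.stats().wiredTiger.cache["bytes currently in the cache"]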
] | How do I disable document caching in ram for certain collections? | 2023-05-05T08:33:09.526Z | How do I disable document caching in ram for certain collections? | 493 |
null | [
"compass",
"containers"
] | [
{
"code": "docker run --name mongodb -d -p 64000:64000 mongomongodb://localhost:64000connection <monitor> to 127.0.0.1:64000 closed{\"t\":{\"$date\":\"2022-03-23T01:04:58.599Z\"},\"s\":\"I\",\"c\":\"COMPASS-CONNECT-UI\",\"id\":1001000004,\"ctx\":\"Connection UI\",\"msg\":\"Initiating connection attempt\"}\n{\"t\":{\"$date\":\"2022-03-23T01:04:58.602Z\"},\"s\":\"I\",\"c\":\"COMPASS-DATA-SERVICE\",\"id\":1001000014,\"ctx\":\"Connection 2\",\"msg\":\"Connecting\",\"attr\":{\"url\":\"mongodb://localhost:64000/?readPreference=primary&appname=MongoDB+Compass&directConnection=true&ssl=false\"}}\n{\"t\":{\"$date\":\"2022-03-23T01:04:58.616Z\"},\"s\":\"I\",\"c\":\"COMPASS-CONNECT\",\"id\":1001000010,\"ctx\":\"Connect\",\"msg\":\"Resolved SRV record\",\"attr\":{\"from\":\"mongodb://localhost:64000/?readPreference=primary&appname=MongoDB+Compass&directConnection=true&ssl=false\",\"to\":\"mongodb://localhost:64000/?readPreference=primary&appname=MongoDB+Compass&directConnection=true&ssl=false\"}}\n{\"t\":{\"$date\":\"2022-03-23T01:04:58.618Z\"},\"s\":\"I\",\"c\":\"COMPASS-CONNECT\",\"id\":1001000009,\"ctx\":\"Connect\",\"msg\":\"Initiating connection\",\"attr\":{\"url\":\"mongodb://localhost:64000/?readPreference=primary&appname=MongoDB+Compass&directConnection=true&ssl=false\",\"options\":*{\"monitorCommands\":true}*}}\n",
"text": "I am running an empty mongodb instance in a docker desktop container using: docker run --name mongodb -d -p 64000:64000 mongoAm running Compass v1.30.1 and attempting to connect to the mongodb running on localhost using the connection string: mongodb://localhost:64000In the Compass UI, I receive the following visible error: connection <monitor> to 127.0.0.1:64000 closedI have looked at the Compass log file and can see these entries:Also disabled my firewall temporarily, with no effect.Any help appreciated. Thanks in advance.",
"username": "kidcoconut"
},
{
"code": "docker run --name -mongodb -d -p 64000:27017 mongo \ndocker exec -it mongodb\nmongodb://localhost:64000",
"text": "Hello @kidcoconut\nWelcome to the community!!By default the MongoDB runs on port 27017, if you wish to change the port for your machine, you can use the command as:and login to the shell.\nFor Compass, the following URL would work:mongodb://localhost:64000Please let us know, if you are able to connect in the above mentioned way or if more information is needed from our end.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi @Aasawari ,\nThankyou for the warm welcome.Some updates since my last note. In summary … I followed your tips with no success, then performed more analysis and troubleshooting. Things are working now, but I am left to think something was wrong with my Compass install and/or node_module libraries and dependencies.For simplicity, I recreated a new docker container for mongodb using 27018:27017. There was no change re ability to connect through Compass. Same connection-monitor-closed error and log.I then went into Compass and inspected the Dev Tools (toggle) and attempted to run some of the troubleshooting/test scripts. Loading the script did not work as there were errors/warnings re missing libraries under node_modules. I then shut everything down (docker/compass) and explicitly installed nvm and node (latest). I reinstalled Compass (win_msi), recreated the docker container for mongodb@27018:27017, and launched Compass. Success.I’ll keep an eye on it and report back if I find anything else.Pls note for the benefit of others reading this, your sample commands need some adjustment to make them run properly (typos, missing flags, etc).Thanks again.",
"username": "kidcoconut"
},
{
"code": "",
"text": "Hi Maam,\nI have some kind of similar issue. Mongodb can be accessible from the container (with the command docker exec -it mongodb mongosh\"). I can see dbs with the command “show dbs”, but the mongodb is not accessible from MongoDB Compass. The error it throwing is “timedout”. I have used container IP and 27017 port. What I could try to resolve the issue…\nThanks you in advance…",
"username": "Gowtham_Chendra"
}
] | Unable to connect to mongodb running in a docker container using Compass | 2022-03-23T02:13:08.676Z | Unable to connect to mongodb running in a docker container using Compass | 39,089 |
[
"upgrading"
] | [
{
"code": "",
"text": "Hello Team,To upgrade MongoDB from 4.4 to 5.0 on Ubuntu 20.04 LTS , I found the below link.This method of upgrade tells that we need to replace existing binary with new new binary.How to achieve this replacing of binaries. Does this mean we will simply over write the existing one with new one just by copying the new binary to MongoDB installation path.Is there any alternate method to upgrade MongoDB other than binary replacement.Thanks and Regards\n@Satya_Oradba",
"username": "Satya_Oradba"
},
{
"code": "",
"text": "Follow the upgrade documentation in the release notes applicable to your deployment:",
"username": "chris"
},
{
"code": "",
"text": "Is there any alternate method to upgrade MongoDB other than binary replacement.If you install via tarball the binaries can exist in different directories. Installing and upgrading via distribution repos and tools is common and they almost always replace the existing binaries, there is no real issue doing it.",
"username": "chris"
},
{
"code": "Upgrade the instance by replacing the existing binaries with new binaries.",
"text": "Hi @chris\nThanks for the update.It seems you did not get what I am asking.In this link pasted below:It was mentioned thatUpgrade the instance by replacing the existing binaries with new binaries.What is meant by “replace existing binaries with new binaries” . Is it simply over writing the existing one with new one just by copying the new binary to MongoDB installation path.??ORInstallation of new version will automatically upgrade the existing one.??Please shed some light on it.Thanks and Regards\n@Satya_Oradba",
"username": "Satya_Oradba"
},
{
"code": "",
"text": "Either, it depends on your installation method.Package managers(yum, dnf, apt) will replace the existing binaries. With manual installation the operator can overwrite the existing binaries or install them in a separate directory.",
"username": "chris"
},
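Whichever way the binaries are swapped, the release notes also have you manage the feature compatibility version around the upgrade. A hedged mongosh sketch of the two commands typically involved in a 4.4 to 5.0 upgrade:

// Before upgrading, confirm every member reports featureCompatibilityVersion 4.4
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// After all members run the 5.0 binaries and look healthy, enable 5.0 features
db.adminCommand({ setFeatureCompatibilityVersion: "5.0" })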
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to upgrade MongoDB from 4.4 to 5.0 on Ubuntu 20.04 LTS | 2023-05-03T10:06:33.243Z | How to upgrade MongoDB from 4.4 to 5.0 on Ubuntu 20.04 LTS | 2,503 |
|
null | [
"node-js",
"crud",
"mongoose-odm",
"compass"
] | [
{
"code": "{\n \"businessId\": \"Bike1\",\n \"name\": \"Bert\",\n \"properties\": {\n \"type\": \"Electric\",\n \"weight\": \"1337mg\"\n }\n}\n{\n \"businessId\": \"Bike2\",\n \"name\": \"Ernie\",\n}\n{\n \"_id\": \"6453cb0914251323e1b9f40e\",\n \"businessId\": \"Bike2\",\n \"name\": \"Ernie\",\n \"__v\": 0\n}\n const doc = await bikeModel.findOneAndUpdate({ businessId: businessid }, { $set: { 'properties.type': 'Gas', 'properties.weight': '1kg' } }, { returnDocument: \"after\", runValidators: true });{\n \"_id\": \"6453cb0914251323e1b9f40e\",\n \"businessId\": \"Bike2\",\n \"name\": \"Ernie\",\n \"__v\": 0,\n \"properties\": {\n \"type\": \"Gas\",\n \"weight\": \"1kg\",\n \"_id\": \"6453d38e14251323e1b9f415\"\n }\n}\n{\n \"items\": [\n {\n \"_id\": \"6453cb0914251323e1b9f40e\",\n \"businessId\": \"Bike2\",\n \"name\": \"Ernie\",\n \"__v\": 0,\n \"properties\": {\n \"type\": \"Gas\",\n \"weight\": \"1kg\"\n }\n }\n ],\n \"size\": 1,\n \"hasMore\": false\n}\n{\n \"businessId\": \"Bike2\",\n \"name\": \"Ernie\",\n \"properties\": { }\n}\n{\n \"_id\": \"6453cb0914251323e1b9f40e\",\n \"businessId\": \"Bike2\",\n \"name\": \"Ernie\",\n \"properties\": {\n \"_id\": \"6454bfdd9bb9cb2afd60b6e0\"\n },\n \"__v\": 0\n}\n$set$set",
"text": "Context\nWhenever a new document is created without an embedded object the update with the $set operator does not generate the _id. For this discussion I’ve used the nodejs driver. Consider the following JSON as an example of the bike modelSuppose I create a new record without specifying the properties like:This results in a new record.At this point the properties are not known. Later on the properties object might be added to the document. Now I want to add the type and weight of Bike2. For this I use a findOneAndUpdate. const doc = await bikeModel.findOneAndUpdate({ businessId: businessid }, { $set: { 'properties.type': 'Gas', 'properties.weight': '1kg' } }, { returnDocument: \"after\", runValidators: true });This findOneAndUpdate results in:Note that the returnDocument contains an _id in the properties. Whenever I retrieve this record the _id does not exist on my document. Neither does the _id exist in the when viewing this record in Compass. See below the retrieved recordWhenever I add the properties to the initial creation of the record it does work. Instead of the previous create I now add an empty object. Suppose I create a new record without specifying the properties like:This results in a record with _id on the properties object. With this situation the $set operation works as expected. It results in the creation of a type and weight on the existing properties object.According to the $set documentation:“If the field does not exist, $set will add a new field with the specified value, provided that the new field does not violate a type constraint. If you specify a dotted path for a non-existent field, $set will create the embedded documents as needed to fulfill the dotted path to the field.”Questions\nTo me it’s unclear why the $set operator is not able to generate a properties object with _id. Perhaps I misinterpreted the documentation.My apologies for the long description.\nThank you in advance.",
"username": "CdR"
},
{
"code": "\"__v\": 0",
"text": "The following seems to indicate that you are using Mongoose.1 - The presence of __v field, which is not a feature of pure MongoDB.\"__v\": 02 - Using model rather than collection for your CRUDawait bikeModel.findOneAndUpdate3 - The presence of an automatic _id in sub-documentNote that the returnDocument contains an _id in the properties.So you seem to be confused between Mongoose layer and native MongoDB because despite using Mongoose’s API, you link to the native MongoDB’s $set documentation.Just to be clear, pure MongoDB has no automatic concept of __v and no automatic concept of _id inside embedded document. The only automatic concept in pure MongoDB is _id in top level document.Note that what ever automatic field generation or update performed by Mongoose will not be done when using Compass. However, the fields generated and updated by Mongoose should be visible.I have tagged your post with mongoose to attract attention to mongoose user.",
"username": "steevej"
}
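If the automatic subdocument _id that Mongoose generates is unwanted, it can be disabled on the embedded schema. A minimal sketch, assuming a schema shaped like the bike documents above (the schema itself is illustrative, not the poster's actual code):

const mongoose = require("mongoose");

// Sub-schema with { _id: false } so Mongoose does not add an ObjectId to "properties"
const propertiesSchema = new mongoose.Schema(
  { type: { type: String }, weight: String },  // "type" needs the nested form because it is a reserved key
  { _id: false }
);

const bikeSchema = new mongoose.Schema({
  businessId: String,
  name: String,
  properties: propertiesSchema,
});

const BikeModel = mongoose.model("Bike", bikeSchema);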
] | $set operator missing _id on new embedded object | 2023-05-05T09:06:35.486Z | $set operator missing _id on new embedded object | 547 |
null | [] | [
{
"code": "",
"text": "Hello,\nI am trying to upgrade an on premise mongo DB cluster from 4.4.3 to 4.4.20 on Ubuntu 18.04 and I am getting the following error:sudo apt-get install gnupg\"----- ERROR ----\nSetting up grub-efi-amd64-signed (1.187.2~18.04.1+2.06-2ubuntu14) …\nInstalling for x86_64-efi platform.\ngrub-install: error: cannot find EFI directory.\ndpkg: error processing package grub-efi-amd64-signed (–configure):\ninstalled grub-efi-amd64-signed package post-installation script subprocess returned error exit status 1\ndpkg: dependency problems prevent configuration of shim-signed:\nshim-signed depends on grub-efi-amd64-signed (>= 1.167~) | grub-efi-arm64-signed (>= 1.167~); however:\nPackage grub-efi-amd64-signed is not configured yet.\nPackage grub-efi-arm64-signed is not installed.dpkg: error processing package shim-signed (–configure):\ndependency problems - leaving unconfigured\nNo apport report written because the error message indicates its a followup error from a previous failure.\nErrors were encountered while processing:\ngrub-efi-amd64-signed\nshim-signedI know it is an Ubuntu issue but just checking to see if anyone had this issue and how they were resolve it\nThanks",
"username": "Premchand_Budhu"
},
{
"code": "grub-install: error: cannot find EFI directory.\n",
"text": "Hi @Premchand_Budhu welcome to the community!It’s been some time since you posted this question. Have you managed to solve it?If not, the error that caught my eye is this one:Quick search on that message leads me to this topic in Unix StackExchange: linux - Cannot find EFI directory: issue with grub-install - Unix & Linux Stack Exchange and this topic in ServerFault Ubuntu 18.04 \"grub-install: error: cannot find EFI directory\" - Server FaultI’m not a Linux expert, so this might be way off and doesn’t solve your issue. I’m hoping this should provide you a good start if you’re still seeing this issue though Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thank Kevin.\nYes , I was able to solve my issue\nI followed the solution from sbasurto",
"username": "Premchand_Budhu"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error running gnupg command on ubuntu 18.04 | 2023-04-24T18:13:16.172Z | Error running gnupg command on ubuntu 18.04 | 897 |
null | [
"aggregation",
"queries",
"data-modeling",
"replication"
] | [
{
"code": "{\n \"_id\":{\n \"$binary\":{\n \"base64\":\"EyYO+A68T2WJm/p+ny+buw==\",\n \"subType\":\"04\"\n }\n },\n \"marketIntel\":{\n \"general\":{\n \"sentiment\":{\n \"totalPositiveness\":{\n \"$numberDouble\":\"0.501\"\n },\n \"sentimentBySource\":[\n {\n \"countries\":[\n {\n \"monthly\":[\n {\n \"date\":\"2018-10\",\n \"negative_count\":{\n \"$numberInt\":\"0\"\n },\n \"positive_count\":{\n \"$numberInt\":\"1\"\n }\n }\n ],\n \"country\":\"DE\",\n \"positive_polarity\":{\n \"$numberDouble\":\"0.9865\"\n }\n },\n {\n \"monthly\":[\n {\n \"date\":\"2018-07\",\n \"negative_count\":{\n \"$numberInt\":\"1\"\n },\n \"positive_count\":{\n \"$numberInt\":\"1\"\n }\n },\n {\n \"date\":\"2018-08\",\n \"negative_count\":{\n \"$numberInt\":\"0\"\n },\n \"positive_count\":{\n \"$numberInt\":\"1\"\n }\n },\n {\n \"date\":\"2018-09\",\n \"negative_count\":{\n \"$numberInt\":\"1\"\n },\n \"positive_count\":{\n \"$numberInt\":\"0\"\n }\n },\n {\n \"date\":\"2018-10\",\n \"negative_count\":{\n \"$numberInt\":\"0\"\n },\n \"positive_count\":{\n \"$numberInt\":\"1\"\n }\n },\n {\n \"date\":\"2018-11\",\n \"negative_count\":{\n \"$numberInt\":\"1\"\n },\n \"positive_count\":{\n \"$numberInt\":\"1\"\n }\n },\n {\n \"date\":\"2018-12\",\n \"negative_count\":{\n \"$numberInt\":\"0\"\n },\n \"positive_count\":{\n \"$numberInt\":\"1\"\n }\n },\n {\n \"date\":\"2019-01\",\n \"negative_count\":{\n \"$numberInt\":\"0\"\n },\n \"positive_count\":{\n \"$numberInt\":\"1\"\n }\n },\n {\n \"date\":\"2019-02\",\n \"negative_count\":{\n \"$numberInt\":\"1\"\n },\n \"positive_count\":{\n \"$numberInt\":\"1\"\n }\n },\n {\n \"date\":\"2019-03\",\n \"negative_count\":{\n \"$numberInt\":\"1\"\n },\n \"positive_count\":{\n \"$numberInt\":\"1\"\n }\n },\n {\n \"date\":\"2019-04\",\n \"negative_count\":{\n \"$numberInt\":\"1\"\n },\n \"positive_count\":{\n \"$numberInt\":\"0\"\n }\n },\n {\n \"date\":\"2019-05\",\n \"negative_count\":{\n \"$numberInt\":\"1\"\n },\n \"positive_count\":{\n \"$numberInt\":\"1\"\n }\n }\n ],\n \"country\":\"N/A\",\n \"positive_polarity\":{\n \"$numberDouble\":\"0.479\"\n }\n }\n ],\n \"source\":\"All\"\n }\n ],\n \"lastModified\":\"2020-06-08 20:10:33.627029\",\n \"totalNegativeCount\":{\n \"$numberInt\":\"10\"\n },\n \"totalPositiveCount\":{\n \"$numberInt\":\"13\"\n }\n },\n \"vindowCorporateScore\":false,\n \"vindowScore\":{\n \"$numberDouble\":\"0.2388\"\n },\n \"reviewsCount\":{\n \"$numberInt\":\"15\"\n }\n },\n \"sentiment\":{\n \"totalPositiveness\":{\n \"$numberDouble\":\"0.6771\"\n },\n \"sentimentBySource\":[\n {\n \"countries\":[\n {\n \"country\":\"N/A\",\n \"monthly\":[\n {\n \"date\":\"2020-02\",\n \"positive_count\":{\n \"$numberInt\":\"7\"\n },\n \"negative_count\":{\n \"$numberInt\":\"3\"\n }\n },\n {\n \"date\":\"2020-03\",\n \"positive_count\":{\n \"$numberInt\":\"1\"\n },\n \"negative_count\":{\n \"$numberInt\":\"1\"\n }\n },\n {\n \"date\":\"2020-04\",\n \"positive_count\":{\n \"$numberInt\":\"1\"\n },\n \"negative_count\":{\n \"$numberInt\":\"0\"\n }\n },\n {\n \"date\":\"2020-09\",\n \"positive_count\":{\n \"$numberInt\":\"1\"\n },\n \"negative_count\":{\n \"$numberInt\":\"0\"\n }\n },\n {\n \"date\":\"2020-10\",\n \"positive_count\":{\n \"$numberInt\":\"1\"\n },\n \"negative_count\":{\n \"$numberInt\":\"0\"\n }\n }\n ],\n \"positive_polarity\":{\n \"$numberDouble\":\"0.6771\"\n }\n }\n ],\n \"source\":\"All\"\n }\n ],\n \"totalNegativeCount\":{\n \"$numberInt\":\"4\"\n },\n \"totalPositiveCount\":{\n \"$numberInt\":\"11\"\n },\n \"lastModified\":\"2021-02-09 
16:10:36.835867\"\n },\n \"reviewsBySourceCount\":{\n \"All\":{\n \"$numberInt\":\"15\"\n },\n \"Google\":{\n \"$numberInt\":\"15\"\n },\n \"Booking\":{\n \"$numberInt\":\"8\"\n },\n \"TripAdvisor\":{\n \"$numberInt\":\"2\"\n },\n \"Booking_com\":{\n \"$numberInt\":\"3\"\n }\n },\n \"scores\":[\n {\n \"source\":\"All\",\n \"score\":{\n \n },\n \"lastModified\":\"2021-03-02T20:22:32.181Z\",\n \"vindowScore\":{\n \"$numberDouble\":\"0.2388\"\n }\n }\n ],\n \"recommendation\":{\n \"reducedFeaturesVector\":[\n {\n \"$numberDouble\":\"2.2315\"\n },\n {\n \"$numberDouble\":\"14.7779\"\n }\n ],\n \"clusterId\":{\n \"$numberInt\":\"29\"\n }\n },\n \"corporateScores\":[\n {\n \"source\":\"All\",\n \"score\":{\n \n },\n \"lastModified\":\"2021-02-09 16:10:37.204984\",\n \"vindowCorporateScore\":false\n }\n ],\n \"topicAnalysis\":[\n {\n \"source\":\"All\",\n \"negative\":[\n {\n \"word\":\"bed\",\n \"relevance\":{\n \"$numberInt\":\"1\"\n },\n \"phrases\":[\n \"And the bed I had in my room was just... bad.\"\n ]\n },\n {\n \"word\":\"room\",\n \"relevance\":{\n \"$numberInt\":\"1\"\n },\n \"phrases\":[\n \"And the bed I had in my room was just... bad.\"\n ]\n }\n ],\n \"positive\":[\n {\n \"word\":\"room\",\n \"relevance\":{\n \"$numberInt\":\"4\"\n },\n \"phrases\":[\n \"The rooms are large.\",\n \"Loved the wine and food.\",\n \"I'm sure that the bed thing can be fixed easily and probably just my room.\",\n \"Great rooms.\",\n \"The view from the window of my room 15 is gorgeous.\"\n ]\n },\n {\n \"word\":\"bed\",\n \"relevance\":{\n \"$numberInt\":\"3\"\n },\n \"phrases\":[\n \"It had straight up a hole in the centre of the bed spring and every night felt like sleeping in a hill.\",\n \"I'm sure that the bed thing can be fixed easily and probably just my room.\",\n \"If the beds are fixed in the future I'd be glad to come back !\"\n ]\n },\n {\n \"word\":\"place\",\n \"relevance\":{\n \"$numberInt\":\"2\"\n },\n \"phrases\":[\n \"Perfect place.\",\n \"Perhaps one of the best places on Rubinstein.\"\n ]\n },\n {\n \"word\":\"hotel\",\n \"relevance\":{\n \"$numberInt\":\"2\"\n },\n \"phrases\":[\n \"A very good hotel with a very friendly staff.\",\n \"In the end : a very good hotel if you're not on an unlimited budget and want to save.\"\n ]\n },\n {\n \"word\":\"staff\",\n \"relevance\":{\n \"$numberInt\":\"2\"\n },\n \"phrases\":[\n \"A very good hotel with a very friendly staff.\",\n \"Location is good staff are nice.\"\n ]\n },\n {\n \"word\":\"location\",\n \"relevance\":{\n \"$numberInt\":\"2\"\n },\n \"phrases\":[\n \"Location is good staff are nice.\",\n \"Great location in the city center.\"\n ]\n },\n {\n \"word\":\"wine\",\n \"relevance\":{\n \"$numberInt\":\"1\"\n },\n \"phrases\":[\n \"The rooms are large.\",\n \"Loved the wine and food.\"\n ]\n },\n {\n \"word\":\"food\",\n \"relevance\":{\n \"$numberInt\":\"1\"\n },\n \"phrases\":[\n \"The rooms are large.\",\n \"Loved the wine and food.\"\n ]\n },\n {\n \"word\":\"cleanliness\",\n \"relevance\":{\n \"$numberInt\":\"1\"\n },\n \"phrases\":[\n \"The cleanliness is perfect.\"\n ]\n },\n {\n \"word\":\"hole\",\n \"relevance\":{\n \"$numberInt\":\"1\"\n },\n \"phrases\":[\n \"It had straight up a hole in the centre of the bed spring and every night felt like sleeping in a hill.\"\n ]\n }\n ],\n \"lastModified\":\"2021-02-09 16:10:37.009042\"\n }\n ]\n },\n \"distanceToAirports\":{\n \"32d5f078-aa9b-4200-8d2b-128bf4c5dcb3\":{\n \"distanceInMiles\":{\n \"$numberDouble\":\"12.052283170140424\"\n },\n \"drivingTimeInMinutes\":{\n 
\"$numberDouble\":\"40.25\"\n },\n \"drivingDistanceInMiles\":{\n \"$numberDouble\":\"21.452833775\"\n },\n \"airportId\":\"32d5f078-aa9b-4200-8d2b-128bf4c5dcb3\"\n },\n \"d429db4d-0b1e-41a3-8350-dd5c64cb8c41\":{\n \"distanceInMiles\":{\n \"$numberDouble\":\"16.88938793551284\"\n },\n \"drivingTimeInMinutes\":{\n \"$numberDouble\":\"42.21666666666667\"\n },\n \"drivingDistanceInMiles\":{\n \"$numberDouble\":\"21.501300713000003\"\n },\n \"airportId\":\"d429db4d-0b1e-41a3-8350-dd5c64cb8c41\"\n },\n \"89c8b15f-87d5-41de-89dc-caeab5bab3c9\":{\n \"distanceInMiles\":{\n \"$numberDouble\":\"9.361319890017072\"\n },\n \"drivingTimeInMinutes\":{\n \"$numberDouble\":\"31.083333333333332\"\n },\n \"drivingDistanceInMiles\":{\n \"$numberDouble\":\"13.494935378000001\"\n },\n \"airportId\":\"89c8b15f-87d5-41de-89dc-caeab5bab3c9\"\n },\n \"88c86070-05f3-4c83-8e0d-8fa5437c4018\":{\n \"distanceInMiles\":{\n \"$numberDouble\":\"41.00256449186227\"\n },\n \"drivingTimeInMinutes\":{\n \"$numberDouble\":\"76.55\"\n },\n \"drivingDistanceInMiles\":{\n \"$numberDouble\":\"50.31862358\"\n },\n \"airportId\":\"88c86070-05f3-4c83-8e0d-8fa5437c4018\"\n }\n },\n \"hotelId\":\"13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb\",\n \"foods\":{\n \"restaurants\":[\n \n ],\n \"breakfastIds\":[\n \"080d7030-591e-4844-8077-238cd02778ed\",\n \"080d7030-591e-4844-8077-238cd02778ed\"\n ]\n },\n \"active\":true,\n \"expediaId\":{\n \"$numberInt\":\"18698320\"\n },\n \"ntmCode\":\"EXP-18698320\",\n \"hotelAmenityIds\":[\n \"49f6b262-1b2a-4b7d-85d6-af4d08fb55fc\",\n \"909470da-7cae-490c-849b-47965aef2f44\",\n \"39a61cb3-79f1-4a6e-8627-e951aafb0fcc\",\n \"ad0949dd-c696-4695-8502-19f4ceebb4e3\",\n \"930c10e5-8f52-4154-835c-87319f4113ff\",\n \"c518cc99-911a-47c1-8054-765855c4ba62\",\n \"944bbf7e-8c8d-4afd-8f2f-8dcee8ae972b\",\n \"ae74d71f-38bc-4720-8715-be4a142a1f96\",\n \"cda480a3-407d-41f2-8341-2c6dae7a1bb0\",\n \"9badeede-2a67-4261-85c2-56defcce81ad\"\n ],\n \"updatedAt\":{\n \"$date\":{\n \"$numberLong\":\"1678807851107\"\n }\n },\n \"rooms\":{\n \"amenityIds\":[\n \"3e39107f-398f-4247-88ef-899f2780d20e\",\n \"3aeb7e4e-6127-4105-8364-b6d72d05a60a\",\n \"f6f0c79b-a5a7-4b50-858e-bff7777cc759\",\n \"973bf297-f87d-4781-856d-c922e0ce64bb\",\n \"9871e1d3-34bc-4900-8279-aa89a540fc66\",\n \"40e3709e-fd0a-41ae-8d74-d1cbbfbd7d70\",\n \"12b046db-dca7-4b83-81fe-207594d1da41\",\n \"28641076-8eb6-4fd6-8b01-ca644889447e\",\n \"560f5c2c-2bda-4422-86be-bf3ab3f8fc9b\",\n \"2429f72e-9227-4ce0-8820-fa287ec19f2d\",\n \"f4a81c00-8dfd-497f-8dcc-e680a8c86c65\",\n \"c24f3bc4-df77-424d-8b44-364be4a44a46\"\n ],\n \"roomTypes\":[\n \n ]\n },\n \"crawling\":{\n \"googlePlaces\":{\n \"cid\":\"15923351505077507338\",\n \"lastDateCrawledBySource\":{\n \"tripadvisor\":\"2022-09-20T03:31:36.780Z\",\n \"priceline\":\"2022-09-20T03:30:40.754Z\",\n \"agoda\":\"2022-09-20T03:29:39.112Z\",\n \"booking\":\"2022-09-20T03:31:36.734Z\",\n \"hotels\":\"2022-09-20T03:30:33.983Z\",\n \"orbitz\":\"2022-09-20T03:30:37.895Z\",\n \"marriot\":\"2022-09-20T03:29:40.412Z\",\n \"travelocity\":\"2022-09-20T03:30:40.988Z\",\n \"google\":\"2022-09-20T03:29:39.227Z\",\n \"expedia\":\"2022-09-20T03:31:36.786Z\"\n },\n \"lastReviewIdBySource\":{\n \"google\":\"105258523960198986909\"\n },\n \"reviewsCountBySource\":{\n \"tripadvisor\":{\n \"$numberInt\":\"0\"\n },\n \"priceline\":{\n \"$numberInt\":\"0\"\n },\n \"agoda\":{\n \"$numberInt\":\"0\"\n },\n \"booking\":{\n \"$numberInt\":\"0\"\n },\n \"hotels\":{\n \"$numberInt\":\"0\"\n },\n \"orbitz\":{\n \"$numberInt\":\"0\"\n },\n \"marriot\":{\n 
\"$numberInt\":\"0\"\n },\n \"travelocity\":{\n \"$numberInt\":\"0\"\n },\n \"google\":{\n \"$numberInt\":\"1\"\n },\n \"expedia\":{\n \"$numberInt\":\"0\"\n }\n }\n },\n \"tripAdvisor\":{\n \"url\":\"https://www.tripadvisor.com/Hotel_Review-g298507-d12643830-Reviews-LARGO_Hotel-St_Petersburg_Northwestern_District.html\",\n \"lastDateCrawled\":\"2021-02-09T14:37:51.199Z\",\n \"lastReviewId\":{\n \"$numberInt\":\"624661546\"\n },\n \"reviewsCount\":{\n \"$numberInt\":\"2\"\n }\n }\n },\n \"images\":{\n \"41d05eee-4dfe-4a11-8e66-5b6aea5a0589\":{\n \"title\":\"Coffee and/or Coffee Maker\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/41d05eee-4dfe-4a11-8e66-5b6aea5a0589.jpg\",\n \"imageId\":\"41d05eee-4dfe-4a11-8e66-5b6aea5a0589\"\n },\n \"abd4fd88-bfcf-43d6-8c84-fa0849299e91\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/abd4fd88-bfcf-43d6-8c84-fa0849299e91.jpg\",\n \"imageId\":\"abd4fd88-bfcf-43d6-8c84-fa0849299e91\"\n },\n \"a026b9fb-a2f2-4f5f-832c-c5dc5394f5a6\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/a026b9fb-a2f2-4f5f-832c-c5dc5394f5a6.jpg\",\n \"imageId\":\"a026b9fb-a2f2-4f5f-832c-c5dc5394f5a6\"\n },\n \"9f8c941d-5d4e-4547-8ee9-354ef4644137\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/9f8c941d-5d4e-4547-8ee9-354ef4644137.jpg\",\n \"imageId\":\"9f8c941d-5d4e-4547-8ee9-354ef4644137\"\n },\n \"55c81c75-6999-4ad8-8c47-f2ee91e93c3c\":{\n \"title\":\"Breakfast Area\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/55c81c75-6999-4ad8-8c47-f2ee91e93c3c.jpg\",\n \"imageId\":\"55c81c75-6999-4ad8-8c47-f2ee91e93c3c\"\n },\n \"4d1f2ad1-6428-4f35-85ef-375cbbd5e5cc\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/4d1f2ad1-6428-4f35-85ef-375cbbd5e5cc.jpg\",\n \"imageId\":\"4d1f2ad1-6428-4f35-85ef-375cbbd5e5cc\"\n },\n \"b0352a6b-ccd8-43c8-83a8-9593c247123d\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/b0352a6b-ccd8-43c8-83a8-9593c247123d.jpg\",\n \"imageId\":\"b0352a6b-ccd8-43c8-83a8-9593c247123d\"\n },\n \"c36ae604-bcf4-4a6a-8ac0-4d664e31c510\":{\n \"title\":\"View from Property\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/c36ae604-bcf4-4a6a-8ac0-4d664e31c510.jpg\",\n \"imageId\":\"c36ae604-bcf4-4a6a-8ac0-4d664e31c510\"\n },\n \"db3ce258-5f4b-4e22-8a77-4c0d932e1853\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/db3ce258-5f4b-4e22-8a77-4c0d932e1853.jpg\",\n \"imageId\":\"db3ce258-5f4b-4e22-8a77-4c0d932e1853\"\n },\n \"fafc74d5-f413-4b42-8f2c-ce82ea6ee2f8\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/fafc74d5-f413-4b42-8f2c-ce82ea6ee2f8.jpg\",\n \"imageId\":\"fafc74d5-f413-4b42-8f2c-ce82ea6ee2f8\"\n },\n \"e059dda4-ef7b-4132-8b80-6f6f9a38b116\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/e059dda4-ef7b-4132-8b80-6f6f9a38b116.jpg\",\n \"imageId\":\"e059dda4-ef7b-4132-8b80-6f6f9a38b116\"\n },\n \"575b6ce4-e4d5-4875-8a95-b6170b411804\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/575b6ce4-e4d5-4875-8a95-b6170b411804.jpg\",\n \"imageId\":\"575b6ce4-e4d5-4875-8a95-b6170b411804\"\n },\n \"14811ac0-a6ce-43e0-8dd0-dff22b8ed18b\":{\n \"title\":\"Bathroom\",\n 
\"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/14811ac0-a6ce-43e0-8dd0-dff22b8ed18b.jpg\",\n \"imageId\":\"14811ac0-a6ce-43e0-8dd0-dff22b8ed18b\"\n },\n \"d292bf33-36ca-460c-8624-bc5c906aa23a\":{\n \"title\":\"Front of Property\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/d292bf33-36ca-460c-8624-bc5c906aa23a.jpg\",\n \"imageId\":\"d292bf33-36ca-460c-8624-bc5c906aa23a\"\n },\n \"3de7facb-cfa7-4dbf-801e-d7524841c082\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/3de7facb-cfa7-4dbf-801e-d7524841c082.jpg\",\n \"imageId\":\"3de7facb-cfa7-4dbf-801e-d7524841c082\"\n },\n \"9b92c046-2702-4eae-8e01-8d2e2ba7c236\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/9b92c046-2702-4eae-8e01-8d2e2ba7c236.jpg\",\n \"imageId\":\"9b92c046-2702-4eae-8e01-8d2e2ba7c236\"\n },\n \"99264784-0c4a-4400-8783-9236e661ac69\":{\n \"title\":\"Featured Image\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/99264784-0c4a-4400-8783-9236e661ac69.jpg\",\n \"imageId\":\"99264784-0c4a-4400-8783-9236e661ac69\"\n },\n \"cb0269c7-c406-4d4f-898b-173e0e3577b1\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/cb0269c7-c406-4d4f-898b-173e0e3577b1.jpg\",\n \"imageId\":\"cb0269c7-c406-4d4f-898b-173e0e3577b1\"\n },\n \"e4a4d44a-fcc8-4a62-87c4-c863dbdd1b8e\":{\n \"title\":\"Property Entrance\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/e4a4d44a-fcc8-4a62-87c4-c863dbdd1b8e.jpg\",\n \"imageId\":\"e4a4d44a-fcc8-4a62-87c4-c863dbdd1b8e\"\n },\n \"6ff14e0f-b130-46a6-813b-2d040c323c22\":{\n \"title\":\"Property Entrance\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/6ff14e0f-b130-46a6-813b-2d040c323c22.jpg\",\n \"imageId\":\"6ff14e0f-b130-46a6-813b-2d040c323c22\"\n },\n \"9609cc92-dc9b-485f-8831-6edd6b4ef268\":{\n \"title\":\"Room\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/9609cc92-dc9b-485f-8831-6edd6b4ef268.jpg\",\n \"imageId\":\"9609cc92-dc9b-485f-8831-6edd6b4ef268\"\n },\n \"637e78f4-c7d2-4ce4-8dd8-7ac4696f6326\":{\n \"title\":\"Breakfast Area\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/637e78f4-c7d2-4ce4-8dd8-7ac4696f6326.jpg\",\n \"imageId\":\"637e78f4-c7d2-4ce4-8dd8-7ac4696f6326\"\n },\n \"c5f16d1b-f750-4705-8a03-0b809df4ef4a\":{\n \"title\":\"Lobby Lounge\",\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/c5f16d1b-f750-4705-8a03-0b809df4ef4a.jpg\",\n \"imageId\":\"c5f16d1b-f750-4705-8a03-0b809df4ef4a\"\n }\n },\n \"general\":{\n \"crewFriendly\":{\n \"crewInHouse\":false,\n \"hadCrewRecently\":false\n },\n \"thumbnail\":{\n \"fileName\":\"/hotels/13260ef8-0ebc-4f65-899b-fa7e9f2f9bbb/images/thumbnail.jpg\",\n \"imageId\":\"thumbnail\"\n },\n \"address\":{\n \"address\":\"Ulitsa Rubinshteina 26\",\n \"cityId\":\"russia__saintpetersburg__saintpetersburg\",\n \"countryId\":\"russia\",\n \"stateId\":\"russia__saintpetersburg\"\n },\n \"totalNumberOfRooms\":{\n \"$numberInt\":\"15\"\n },\n \"phone\":\"+7 812 438-03-31\",\n \"rating\":{\n \"$numberInt\":\"3\"\n },\n \"location\":{\n \"lat\":{\n \"$numberDouble\":\"59.929426\"\n },\n \"lon\":{\n \"$numberDouble\":\"30.344232\"\n }\n },\n \"hotelName\":\"Largo hotel\",\n \"standardCheckOutTime\":\"1200\",\n \"hotelDescription\":\"The hotel offers a coffee shop/café. A complimentary breakfast is offered each morning. Wireless Internet access is complimentary. 
For a surcharge, an airport shuttle (available 24 hours) is offered to guests. This bu...\",\n \"standardCheckInTime\":\"1400\",\n \"brandId\":\"822c07db-57d9-4f8c-9588-e2c03795eb75\"\n },\n \"vindow15Id\":{\n \"$numberInt\":\"379583\"\n },\n \"updatedBy\":\"SCRIPT-V20T-8310\",\n \"hotelOpenStatus\":\"Open\",\n \"vervotechId\":{\n \"$numberInt\":\"32363897\"\n },\n \"vervotechModifiedAt\":{\n \"$date\":{\n \"$numberLong\":\"1650514753000\"\n }\n },\n \"vervotechIdUpdatedAt\":{\n \"$date\":{\n \"$numberLong\":\"1671625756523\"\n }\n },\n \"primero\":{\n \n },\n \"externalIds\":{\n \"seqId\":{\n \"$numberInt\":\"1\"\n }\n }\n}\nPlanExecutor error during aggregation:: caused by:: Sort exceeded memory limit of 104857600 bytes, but did not opt into external sorting. Aborting operation. Pass allowDiskUse: true to opt-in.\nallowDiskUse: true",
"text": "I have data Like this (shown below).When I run the AWS Glue crawler on this data, I get the error mentioned in the topic nameThe error I get:Since I am running an AWS Crawler using AWS’s inbuilt MongoDB connector, I am not sure where I will add allowDiskUse: true. It would be very helpful if I can get some input on how to fix this issue.Looking forward to your response. Thank you in Advance!Best,Prasanna",
"username": "Prasanna_Sundarajan"
},
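For context, allowDiskUse is an option passed with the aggregation call itself rather than a cluster setting, which is why it is awkward to inject through a managed connector. A minimal mongosh sketch of where it would normally go; the database, collection and pipeline below are illustrative:

db.getSiblingDB("mydb").hotels.aggregate(
  [
    { $sort: { "general.hotelName": 1 } }  // a sort big enough to exceed the 100 MB in-memory limit
  ],
  { allowDiskUse: true }  // lets blocking stages such as $sort spill to temporary files on disk
)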
{
"code": "",
"text": "Seems more like an issue with the server capacity. Pls share with us the MongoDB Atlas cluster type you are using for this and also confirm, if AWS Glue connections are successful.",
"username": "Babu_Srinivasan1"
},
{
"code": "",
"text": "Hello Babu.1.) Yes AWS Glue connections are successful when I connect to different collections within the same database. I am able to run the crawlers successfully.2.) I am using a “dedicated cluster” in AWS (us-east-1 region). Cluster tier → M10 General. Would “serverless” cluster be a better option for this implementation?Some additional details for this particular collection.\nSTORAGE SIZE: 3.83GB\nLOGICAL DATA SIZE: 7.03GB\nTOTAL DOCUMENTS: 407734\nINDEXES TOTAL SIZE: 599.47MBPlease let me know if you need any additional information from my side. Really appreciate your help. Happy to get on a call if that would work for you.Best Regards,Prasanna",
"username": "Prasanna_Sundarajan"
},
{
"code": "",
"text": "Thanks for sharing the details. Please ensure you are selecting the “Enable data sampling” option while creating the data source of the crawler. This will avoid the crawler to look out for the entire document.\nimage816×660 82.3 KB\n",
"username": "Babu_Srinivasan1"
},
{
"code": "",
"text": "Yes. This option has been checked. I use this option as default and I am not crawling the entire table. The error message I posted above is what I get when I have this option checked.Please advise.Best Regards,Prasanna",
"username": "Prasanna_Sundarajan"
},
{
"code": "",
"text": "Would advise raising a ticket with the MongoDB Support team to deep dive into the dumps and identify the core issue. Meanwhile, as a workaround, you can try to reduce the number of documents and check if the crawler is running successfully.",
"username": "Babu_Srinivasan1"
},
{
"code": "",
"text": "yes that would work great ! can you please provide me the steps/link to raise a ticket with MongoDB Support?",
"username": "Prasanna_Sundarajan"
},
{
"code": "",
"text": "",
"username": "Babu_Srinivasan1"
}
] | AWS Glue crawler x MongoDB Atlas Connector issue | 2023-05-03T15:50:50.218Z | AWS Glue crawler x MongoDB Atlas Connector issue | 998 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hello team,I am using nodejs mongodb driver 5.3.0.\nPlease can you help me with sample program for below functionalities. THe only thing is I need blocking calls and not async call.",
"username": "Anand_Vaidya1"
},
{
"code": "",
"text": "It looks like most of this has been already answered with",
"username": "steevej"
},
{
"code": "",
"text": "What is missing isand",
"username": "steevej"
},
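The Node.js driver only exposes asynchronous APIs, but awaiting each call gives the sequential, blocking-style flow the linked pages describe. A minimal sketch against driver 5.x; the connection string, database and collection names are placeholders:

const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const coll = client.db("test").collection("items");

  // Each awaited call finishes before the next line runs
  await coll.insertOne({ _id: 1, status: "new" });
  const doc = await coll.findOne({ _id: 1 });
  await coll.updateOne({ _id: 1 }, { $set: { status: "done" } });
  await coll.deleteOne({ _id: 1 });

  await client.close();
  return doc;
}

main().catch(console.error);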
{
"code": "",
"text": "The Nodejs mongodb driver form “5.3.0” upholds obstructing Programming interface calls. This implies that while an obstructing call is made, it will hinder the execution of some other code until the call has finished. It is essential to take note of that impeding calls can adversely affect the general presentation of your application. Thusly, it is suggested that you use non-hindering tasks any place workable for better versatility and responsiveness of your application.",
"username": "Jacelyn_Sia"
}
] | Nodejs mongodb driver@^5.3.0" blocking API calls | 2023-05-03T12:00:42.501Z | Nodejs mongodb driver@^5.3.0” blocking API calls | 613 |
null | [] | [
{
"code": "",
"text": "I have a MongoDB server (4.4.19) that has a M40 Tier and maximum of 3000 connections.\nIn the past few weeks, the connections have been filled up and we had no other choice but to Test Resilience and failover to a secondary node to get free connections again.\nBut as soon as we have available connections, they are filled up pretty quick and in less than a day, it goes up to 2700 and the cycle continues.\nWe want to find out what are these connections filling up the server and work with the appropriate clients and limit their connection parameters.\nMy question is: How do we find out these consuming connections that are filling up the available connection slots? We would like to have the output by clients names (application names) rather than IPs. It would be great to have the top 10 consuming connections (e.g.: If 1 particular client, has 500+ connections, it would be #1 )\nAny input is greatly appreciated.\nThank you!",
"username": "Juan_Polanco"
},
{
"code": "",
"text": "Hi @Juan_Polanco - Welcome to the community.My question is: How do we find out these consuming connections that are filling up the available connection slots?Have you checked the logs for the cluster to help possibly narrow down where the issue connection(s) are coming from?The Atlas in-app chat support may be able to assist you with this in terms of providing the application / driver’s connecting if you can specify approximately the time frames when it occurs although you may need to double check with them. Alternatively, you could raise a support case if you have a support subscription (above Basic Support level) as well.The following page may be of use to you regarding connection issues.Regards,\nJason",
"username": "Jason_Tran"
}
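One way to attribute connections to applications is the $currentOp aggregation stage on the admin database, grouping idle and active connections by the appName each driver reports. A hedged mongosh sketch (it needs sufficient privileges, and appName is empty for clients that do not set one):

db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: true, idleSessions: true } },
  { $group: { _id: "$appName", connections: { $sum: 1 } } },  // one bucket per reported application name
  { $sort: { connections: -1 } },
  { $limit: 10 }  // top 10 consumers
])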
] | Print out the top consuming connections on my server | 2023-04-20T09:10:03.682Z | Print out the top consuming connections on my server | 470 |
null | [
"replication"
] | [
{
"code": "local-mongodb-one:SECONDARY> db.test_catch.stats()\n{\n \"ns\" : \"mdworkflow.test_catch\",\n \"size\" : NumberLong(\"5555713675259\"),\n \"count\" : 18994123,\n \"avgObjSize\" : 292496,\n \"storageSize\" : 811467612160,\n \"freeStorageSize\" : 30761287680,\n \"capped\" : false\n}\nlocal-mongodb-one:SECONDARY> db.runCommand({ compact:\"test_catch\" } )\n{ \"bytesFreed\" : 0, \"ok\" : 1 }\nlocal-mongodb-one:SECONDARY> db.test_catch.stats()\n{\n \"ns\" : \"mdworkflow.test_catch\",\n \"size\" : NumberLong(\"5555714175181\"),\n \"count\" : 18994126,\n \"avgObjSize\" : 292496,\n \"storageSize\" : 811467612160,\n \"freeStorageSize\" : 30761205760,\n \"capped\" : false\n}\n",
"text": "MongoDB three-node replica set, version v4.4.5.\nI removed some data in the collection on the PRIMARY node, and then I executed the compact command on the SECONDARY node, but the freeStorageSize was not released after execution, why?",
"username": "Yanfei_Wu"
},
{
"code": "",
"text": "You could try updating to the latest patch release and attempt the compact again. 4.4.5 contains a patched critical issue and is not recommended for production use.Your database could also be hitting https://jira.mongodb.org/browse/SERVER-41596As an alternative to compaction you could resync the node.",
"username": "chris"
},
{
"code": "",
"text": "You could try updating to the latest patch release and attempt the compact again. 4.4.5 contains a patched critical issue and is not recommended for production use.Your database could also be hitting https://jira.mongodb.org/browse/SERVER-41596As an alternative to compaction you could resync the node.I have deployed a replica set of MongoDB with the same version, and it has been verified that executing “compact” releases disk space in the new environment.The data in this environment was originally migrated from v3.4.24 to v4.4.5. I am not sure if this has any impact on the issue. Resyncing the replica node data does release space, but it is too time-consuming. My intention is to solve the problem of why “compact” does not release disk space in this environment.If you have any better ideas, please let me know. Thank you very much.",
"username": "Yanfei_Wu"
}
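When compact reports bytesFreed: 0, it can help to compare the reported free space with what WiredTiger itself considers reusable in the collection's data file. A minimal mongosh sketch; the stat names are WiredTiger internals and may vary across versions:

const s = db.test_catch.stats();
// Space MongoDB reports as reclaimable inside the data file
print("freeStorageSize:", s.freeStorageSize);
// The underlying WiredTiger block-manager view of the same thing
print("reusable bytes:", s.wiredTiger["block-manager"]["file bytes available for reuse"]);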
] | Compact doesn't free space | 2023-05-04T07:51:44.694Z | Compact doesn’t free space | 925 |
null | [
"dot-net"
] | [
{
"code": "ISet<>\"array\"\"uniqueItems\": \"true\"PropertyChangedIList<>\"uniqueItems\"PropertyChangedCollectionChanged",
"text": "I have Realm objects with members that are sets (c# ISet<>, schema type \"array\" with \"uniqueItems\": \"true\"). When an element is added to or removed from the set, I used to get PropertyChanged events. Now (dot-net driver v10.21.1) I don’t.Is this change intentional? Do I now have to register for CollectionChanged as well as PropertyChanged?Update: Hmm, no, maybe this is a bug after all? Because I also have some IList<> members (without the \"uniqueItems\" constraint) that no longer get PropertyChanged events either, and they don’t seem to support CollectionChanged…?",
"username": "polymath74"
},
{
"code": "CollectionChangedas INotifyCollectionChanged",
"text": "Ok, maybe they do support CollectionChanged - casting with as INotifyCollectionChanged seems to work, even though the types are not actually labelled as supporting the interface!",
"username": "polymath74"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | ISet<> PropertyChanged no longer raised for membership changes | 2023-05-04T23:42:28.501Z | ISet<> PropertyChanged no longer raised for membership changes | 460 |
[
"replication",
"ops-manager",
"kubernetes-operator"
] | [
{
"code": "",
"text": "Hi everybody,i have a k3s cluster with six nodes on which I deployed MongoDB Enterprise Kubernetes Operator. The Operator ist working fine and behaves generally as expected.My problem is that MongoDB Ops Manager says the primary member of my replica set unavailable:\nBildschirmfoto 2023-04-27 um 09.43.431651×224 25.3 KB\nChecking the “Servers” tab in MongoDB Ops Manager tells otherwise:\nBildschirmfoto 2023-04-27 um 09.44.511663×358 58.4 KB\nAs you can see the MongoDB Members are working as expected with all the features enabled. Also, the “Metrics” tab shows all the metrics I am interested in. Furthermore, these findings imply that MongoDB Automation Agents are working correctly too. So I come to the conclusion that all components, that is mongod and mongodb agent, are well and healthy which is confirmed by the logs of MongoDB Operator:\nBildschirmfoto 2023-04-27 um 09.47.201206×21 5.51 KB\nIs anybody familiar with this issue? If so, can anybody hint me one or maybe two options on how to solve this?Thanks in advance.",
"username": "Marco_80669"
},
{
"code": "",
"text": "Well, I restarted the deployment and that solved the problem.But I don’t think this is a long term solution. Especially in enterprise environments. If anybody out there is familiar with this situation please have me know. I appreciate any hint.",
"username": "Marco_80669"
},
{
"code": "",
"text": "Hi @Marco_80669Glad you have found the solution to the issue.I don’t think this is a long term solution. Especially in enterprise environments.I agree this is not an ideal situation in mission-critical enterprise environment. Frequently, these kind of issues are caused by the environment, and a specialized 1-1 support is usually needed to resolve it. Since Ops Manager is part of the enterprise advanced subscription, you will have access to support and thus would be able to contact support when these type of issue surfaces.If you’re evaluating Ops Manager and would like to know more, please feel free to send a DM to me so I can connect you to the right people.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin,thanks for replying to my issue. Do you have any hints on what environmental topics might be reason for this to occur? We are focussed to understand our environment deeply, thus we’d like to do some analysis on ourself so we can better explain what might be the reason for this.We’d appreciate any hint. Each is of great value for us.Thanks in advance.",
"username": "Marco_80669"
},
{
"code": "",
"text": "thanks for replying to my issue. Do you have any hints on what environmental topics might be reason for this to occur?That’s impossible to say without exact knowledge of the infrastructure and deployment methods. However in a very, very general sense, I would say it can be caused by the Ops Manager installation itself (i.e. it was not installed properly), or perhaps network issues. My first suggestion is to check with support since they’ll have more experience troubleshooting Ops Manager deployments, but if you can provide them with observed patterns when/if these issues are occuring, that would be one of the first steps as well.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Ops Manager shows Primary Member of MongoDB Replicaset is not available although it is! | 2023-04-27T07:52:23.120Z | MongoDB Ops Manager shows Primary Member of MongoDB Replicaset is not available although it is! | 989 |
|
[] | [
{
"code": "expires: { $lt: new Date() }\nconst expiresWhen = (time * 60) * 60\nconst expires = new Date()\nexpires.setSeconds(expires.getSeconds() + expiresWhen)\n",
"text": "Hello,When using:it works, however, when I set the time:if expires becomes over 2.88 hours (around 10,000 seconds) it doesnt work.How expires is stored:\n\nScreenshot 2023-04-28 at 19.08.09792×38 7.75 KB\nAny help or advice would be helpful, thanks!",
"username": "Hufeepufee_123"
},
{
"code": "const expiresWhen = (time * 60) * 60time",
"text": "Hi @Hufeepufee_123 - Welcome to the communityconst expiresWhen = (time * 60) * 60What is the value of time here?Can you also provide full sample documents in JSON format in case we need to reproduce this on a test environment. In addition to this, can you also detail where / how you’re running the code snippets? I assume from a MongoDB driver but please provide details of those.Regards,\nJason",
"username": "Jason_Tran"
}
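For reference while reproducing this, a minimal mongosh sketch that computes the expiry in milliseconds (sidestepping any setSeconds ambiguity) and queries it back with $lt; the collection and field names are placeholders:

const hours = 5;  // works the same for values above 2.88 hours
const expires = new Date(Date.now() + hours * 60 * 60 * 1000);
db.sessions.insertOne({ expires });  // stored as a BSON date, not a string
db.sessions.find({ expires: { $lt: new Date() } })  // documents whose expiry has already passed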
] | $lt: new Date() help | 2023-04-28T18:10:27.657Z | $lt: new Date() help | 398 |
|
null | [
"node-js",
"mongoose-odm",
"connecting",
"atlas-cluster"
] | [
{
"code": "",
"text": "Hi everyone,I am currently trying to connect an app to the MongoDB Driver connection through Node.js, but I have had some errors in the code.First, the error was pointing that the use of “URL=mongodb+srv://” was not valid and I should instead use just “URL=mongodb://”. When I corrected as suggested by the console I started to get this errorMongoose disconnected from database\nMongoose connection error: MongoError: failed to connect to server [finstearchcluster1.gmhq7qx.mongodb.net:27017] on first connect [Error: getaddrinfo ENOTFOUND finstearchcluster1.gmhq7qx.mongodb.net\nat GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {\nname: ‘MongoError’\n}]Not quite sure what is wrong on my side. If someone can give me a hint or lead me to a related topic would be awesome. Thank you!",
"username": "Cristian_Barria"
},
{
"code": "finstearchcluster1.gmhq7qx.mongodb.netmongodb+srv://<user>:<pass>@finstearchcluster1.gmhq7qx.mongodb.net/",
"text": "Hi @Cristian_Barria,finstearchcluster1.gmhq7qx.mongodb.net is an SRV record, so you need to ensure your connection string in Mongoose is something like mongodb+srv://<user>:<pass>@finstearchcluster1.gmhq7qx.mongodb.net/.The error you likely got is because you tried to append a port (27017) to the connection string when connecting using SRV.",
"username": "alexbevi"
},
{
"code": "",
"text": "Hi @alexbevi ,I am just using the string provided by the driver connectormongodb+srv://CristianBarria:@finstearchcluster1.gmhq7qx.mongodb.net/?retryWrites=true&w=majorityI am changing the password when I place it in VS CodeShould the URL go like this then?\nmongodb+srv://CristianBarria:@finstearchcluster1.gmhq7qx.mongodb.net/",
"username": "Cristian_Barria"
},
{
"code": "mongodb+srv://CristianBarria:<password>@finstearchcluster1.gmhq7qx.mongodb.net/mongosh",
"text": "mongodb+srv://CristianBarria:<password>@finstearchcluster1.gmhq7qx.mongodb.net/ would be correct. Does this work when you connect via the mongosh shell? Have you configured the appropriate IP Access List to ensure you can connect to the cluster?",
"username": "alexbevi"
},
{
"code": "D:\\..\\server\\node_modules\\muri\\lib\\index.js:28\n throw new Error('Invalid mongodb uri \"' + str + '\". Must begin with \"mongodb://\"'); \n ^\n\nError: Invalid mongodb uri mongodb+srv://CristianBarria:[email protected]/.\". Must begin with \"mongodb://\"\n at muri (D:\\MERN Dashboard\\server\\node_modules\\muri\\lib\\index.js:28:11)\n at Connection.openUri (D:\\MERN Dashboard\\server\\node_modules\\mongoose\\lib\\connection.js:766:18)\n at Mongoose.connect (D:\\MERN Dashboard\\server\\node_modules\\mongoose\\lib\\index.js:262:17)\n at file:///D:/MERN%20Dashboard/server/index.js:24:10\n at ModuleJob.run (node:internal/modules/esm/module_job:193:25)\nEmitted 'error' event on NativeConnection instance at:\n at Connection.error (D:\\MERN Dashboard\\server\\node_modules\\mongoose\\lib\\connection.js:673:8)\n at Connection.openUri (D:\\MERN Dashboard\\server\\node_modules\\mongoose\\lib\\connection.js:775:10)\n at Mongoose.connect (D:\\MERN Dashboard\\server\\node_modules\\mongoose\\lib\\index.js:262:17)\n at file:///D:/MERN%20Dashboard/server/index.js:24:10\n at ModuleJob.run (node:internal/modules/esm/module_job:193:25)\n\nNode.js v19.9.0\n",
"text": "Hi @alexbevi ,with the new URL I am receiving the following error:I have set the IP access list prior to the connection and even set up to public sharing for this try out. I haven´t look mongosh shell yet, but I´ll try it now",
"username": "Cristian_Barria"
},
{
"code": "",
"text": "Hi @alexbevi ,I solved the issue. There was an issue with the versioning on the packages I installed in my code. Now its running. Thank you!",
"username": "Cristian_Barria"
},
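For anyone hitting the same “Must begin with mongodb://” error: it typically comes from a Mongoose version old enough to predate SRV support. A minimal sketch with a current Mongoose release; the credentials, host and database name are placeholders:

const mongoose = require("mongoose");

async function connect() {
  // mongodb+srv:// URIs are accepted by Mongoose 5.x and later
  await mongoose.connect(
    "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/mydb?retryWrites=true&w=majority"
  );
  console.log("connected");
}

connect().catch(console.error);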
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Fail in connection to MongoDB Node.JS Driver in Atlas | 2023-05-04T16:42:11.899Z | Fail in connection to MongoDB Node.JS Driver in Atlas | 966 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "",
"text": "I found on this thread that we can’t use $search with the find method: Looking for a way to use $search fuction via Rest APIand that the aggregation pipeline must be used instead.Ok but I need to limit and skip because i’m doing a search input in a UI listing.\nHow would you do that ?",
"username": "David_N_A"
},
{
"code": "pipeline$limit$skip$search$projectcurl --location --request POST 'https://data.mongodb-api.com/app/<REDACTED>/endpoint/data/v1/action/aggregate' \\\n--header 'Content-Type: application/json' \\\n--header 'Access-Control-Request-Headers: *' \\\n--header 'api-key: <REDACTED>' \\\n--data-raw '{\n \"collection\":\"location\",\n \"database\":\"myFirstDatabase\",\n \"dataSource\":\"Cluster0\",\n \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"default\",\n \"range\": {\n \"gte\": 1,\n \"path\": \"a\"\n }\n }\n }.\n {\n \"$project\": {\"_id\": 0}\n }\n ]\n}'\n{\"documents\":[{\"a\":19},{\"a\":18},{\"a\":17},{\"a\":16},{\"a\":15},{\"a\":14},{\"a\":13},{\"a\":12},{\"a\":11},{\"a\":10},{\"a\":9},{\"a\":8},{\"a\":7},{\"a\":6},{\"a\":5},{\"a\":4},{\"a\":3},{\"a\":2},{\"a\":1}]}%\n$limitcurl --location --request POST 'https://data.mongodb-api.com/app/<REDACTED>/endpoint/data/v1/action/aggregate' \\\n--header 'Content-Type: application/json' \\\n--header 'Access-Control-Request-Headers: *' \\\n--header 'api-key: <REDACTED>' \\\n--data-raw '{\n \"collection\":\"location\",\n \"database\":\"myFirstDatabase\",\n \"dataSource\":\"Cluster0\",\n \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"default\",\n \"range\": {\n \"gte\": 1,\n \"path\": \"a\"\n }\n }\n },\n {\n \"$limit\": 3\n },\n {\n \"$project\": {\"_id\": 0}\n }\n ]\n}'\n{\"documents\":[{\"a\":19},{\"a\":18},{\"a\":17}]}%\n$skip",
"text": "Hey David Ok but I need to limit and skip because i’m doing a search input in a UI listing.\nHow would you do that ?Not sure if this is what you are after but have you tested in the pipeline with $limit and $skip?As an example similar to the post you linked, using $search and $project only:Which has a response:Using $limit in the pipeline:That gives the following response:I’ve not tested with $skip in the above example but you can test it out and let me know if this works for you.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
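Since $skip was not tested above, here is a hedged sketch of how page-based skipping and limiting might be combined with $search; the index name, field, and page values are the same illustrative ones used in the examples above, and the driver call in the comments is only one possible way to run the pipeline:

```javascript
// Hedged sketch: paginating Atlas Search results with $skip + $limit.
// The "default" index, field "a", and the page values are illustrative only.
const page = 2;       // 1-based page number coming from the UI
const pageSize = 3;   // results per page

const pipeline = [
  { $search: { index: 'default', range: { gte: 1, path: 'a' } } },
  { $skip: (page - 1) * pageSize }, // drop the earlier pages
  { $limit: pageSize },             // keep one page of results
  { $project: { _id: 0 } },
];

// With the Node.js driver this could be run as:
// const docs = await collection.aggregate(pipeline).toArray();
// The same array can also be sent as the "pipeline" field of a Data API aggregate request.
```

Deep offsets make $skip increasingly expensive, so keeping page sizes modest is usually the safer design for a search-driven listing.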
{
"code": "",
"text": "Thanks i will try it, i hope it will work i did not see this in the doc for pipeline.\nI’ll let you know.",
"username": "David_N_A"
},
{
"code": "pipelineMongoDB Aggregation Pipeline",
"text": "Sounds good, hope it works for you.Thanks i will try it, i hope it will work i did not see this in the doc for pipeline.Could you link what documentation you were referring to? From the Data API documentation I have taken a look at the pipeline request body states a MongoDB Aggregation Pipeline to be passed through - I’m not sure if there would examples in the docs for each available aggregation stage within Data API context.I just wanted to clarify in case the docs could be improved.",
"username": "Jason_Tran"
}
] | Aggregate pipeline with limit and skip | 2023-05-04T22:05:35.371Z | Aggregate pipeline with limit and skip | 939 |
null | [] | [
{
"code": "MongoServerError: not authorized on admin to execute command\n",
"text": "Hello, I have a mongo instance where there is only one user, which is admin. I am unable to run any commands such as db.getUsers() or even anything. I am trying to make the user root, so that it would be able to run any command but I am unable to. I am stuck with one user that cannot run any commands, and am unsure what to do next:any advice on this would be greatly appreciated",
"username": "Edward_Lee2"
},
{
"code": "",
"text": "How did you connect to your instance\nDid you authenticate against Authorization database?\nDoes other commands work like\nDb\nSho dbs\nShow users etc",
"username": "Ramachandra_Tummala"
},
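One way to check the authentication side, sketched for mongosh; the user name and password are placeholders, and the commands only report what the current connection is allowed to do:

```javascript
// Hedged sketch (mongosh): authenticate against the admin database, then inspect
// which users and privileges the current connection actually has.
db.getSiblingDB('admin').auth('admin', 'yourPassword'); // placeholder credentials

// Reports the authenticated users/roles and, with showPrivileges, the resolved privileges.
db.runCommand({ connectionStatus: 1, showPrivileges: true });
```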
{
"code": "",
"text": "Hi @Edward_Lee2,\nAre you sure this user had the admin privileges?\nIt was created in the admin database?\nI suggest you to create a new user in the admin database by following this steps:BR",
"username": "Fabio_Ramohitaj"
}
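The steps themselves are not spelled out in the post above, so here is a hedged mongosh sketch of creating a user with the root role; it assumes you can reach the instance either with an existing privileged user or through the localhost exception, and the user name is a placeholder:

```javascript
// Hedged sketch (mongosh): create an administrative user with the "root" role.
// Run while connected with sufficient privileges or via the localhost exception.
db.getSiblingDB('admin').createUser({
  user: 'rootAdmin',       // placeholder name
  pwd: passwordPrompt(),   // prompts for the password instead of hard-coding it
  roles: [{ role: 'root', db: 'admin' }],
});

// Afterwards, reconnect and authenticate as that user before running admin commands
// such as db.getUsers().
```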
] | Admin User in Instance Cannot Execute Any Commands | 2023-05-04T15:09:01.788Z | Admin User in Instance Cannot Execute Any Commands | 393 |
null | [
"aggregation",
"node-js",
"change-streams"
] | [
{
"code": "mongodbChangeStream.tryNextTChangeDocumentdb.command()readConcernwriteConcerncommentChangeStream.tryNext()mongodb",
"text": "The MongoDB Node.js team is pleased to announce version 5.4.0 of the mongodb package!We have corrected the tryNext method on ChangeStream to use the TChange schema generic instead of the untyped Document interface. This may increase strictness for existing usages but aligns with the rest of the methods on the change stream class to accurately reflect the type returned from the driver.The db.command() API has a number of options deprecated that were incorrectly included in the typescript interface the method reportedly accepts. A majority of the options relate to fields that must be attached to the command directly: readConcern, writeConcern, and comment.Additionally, the collStats helper has been deprecated in favor of using database aggregations to get the same result: https://www.mongodb.com/docs/manual/reference/operator/aggregation/collStats/NOTE: This release includes some experimental features that are not yet ready for production use. As a reminder, anything marked experimental is not a part of the stable driver API and is subject to change without notice.We invite you to try the mongodb library immediately, and report any issues to the NODE project.",
"username": "neal"
},
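For the collStats deprecation mentioned above, a hedged sketch of the aggregation-based replacement with the Node.js driver; the collection name is illustrative, and the snippet assumes an existing db handle inside an async function:

```javascript
// Hedged sketch (Node.js driver 5.4+): replacing the deprecated collection.stats()
// helper with a $collStats aggregation stage.
const [stats] = await db
  .collection('location') // illustrative collection name
  .aggregate([{ $collStats: { storageStats: {} } }])
  .toArray();

// storageStats carries the figures stats() used to report, e.g. document count and size.
console.log(stats.storageStats.count, stats.storageStats.size);
```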
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Node.js Driver 5.4.0 Released | 2023-05-04T19:47:12.907Z | MongoDB Node.js Driver 5.4.0 Released | 838 |