image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"aggregation"
] | [
{
"code": "",
"text": "My database is a collection of items on ebay. I want to list the number of related items with a certain item. I only want the number of related items as the answer. So far i have the code as shown below which gives me an error. Could someone please help me.",
"username": "Jain_Shah"
},
{
"code": "db.getCollection(\"products\").aggregate([\n\t{ $match: { product: \"Stop Watch\" } },\n\t{\n\t\t$group: {\n\t\t\t_id: { related: \"$related\", product: \"$title\" },\n\t\t\tcount: { $sum: 1 }\n\t\t}\n\t}\n]);\n",
"text": "Hey @Jain_ShahI think you are looking for something like the following",
"username": "Natac13"
},
{
"code": "",
"text": "Hey @Jain_ShahSo the code I provided is just your code but with the syntax corrected.I would need to see what your collection documents look like to attempt to write the aggregation pipeline fully.",
"username": "Natac13"
},
{
"code": "",
"text": "Some general comments:1.It is very hard to implement collection specific code from a screenshot as we have to type by hand sample documents. A straight plain ascii list of json documents is much more easier and faster to import in our server. Preferably within a pre html tag.2.You write product called 'Stop Watch’ but I do not see any fields bearing a name close to product. If you had shown at least one document with the string ‘Stop Watch’ we could have infer the name of the field.",
"username": "steevej"
}
] | Aggregation pipeline | 2020-03-03T18:54:11.009Z | Aggregation pipeline | 1,590 |
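A minimal, hedged sketch of a count-only pipeline for the question above. The thread never shows the actual documents, so the `products` collection and the `title`/`related` field names are assumptions, not a confirmed schema:

```javascript
// Hedged sketch only: count the items related to one product.
// Assumes documents shaped roughly like { title: "Stop Watch", related: [ ... ] }.
db.getCollection("products").aggregate([
  { $match: { title: "Stop Watch" } },
  { $project: { _id: 0, relatedCount: { $size: { $ifNull: ["$related", []] } } } }
]);

// Equivalent approach with $unwind, returning a single { count: N } document:
db.getCollection("products").aggregate([
  { $match: { title: "Stop Watch" } },
  { $unwind: "$related" },
  { $count: "count" }
]);
```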
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "The previous forums had a pinned topic How to report Realm Cloud operational issues. Is this still the proper way to report operational issues?I reported an urgent issue hours ago, making our service entirely unusable (issue # 5847), and I am still waiting to hear back.I know we are on the standard $30 plan, but when we were selecting technology, you had, and a $200 Pro plan that we wanted to go for, and a self-hosting option that we considered a safety net. By the time we were ready, you had taken these two options away…",
"username": "Yves-Eric_Martin"
},
{
"code": "",
"text": "The previous forums had a pinned topic How to report Realm Cloud operational issues. Is this still the proper way to report operational issues?Yes, creating a support ticket is still the proper way to report operational issues for the Standard Realm Cloud plan.The Standard $30 plan does not currently include a support SLA, but you can contact [email protected] to discuss options for a support plan.I see that our support team is already investigating the issue you raised earlier today.Regards,\nStennie",
"username": "Stennie_X"
}
] | How to report urgent Realm Cloud operational issues | 2020-03-04T11:25:14.691Z | How to report urgent Realm Cloud operational issues | 1,499 |
null | [
"aggregation"
] | [
{
"code": "---------- c_accounts --------------\n// 01 \n{ \n \"accountId\" : \"12345\", \n \"customerId\" : \"1234\", \n \"accountNumber\" : \"AC12345\", \n \"balance\" : 3242.2, \n \"balanceAed\" : 32423.23\n}, \n// 02\n{ \n \"accountId\" : \"12346\", \n \"customerId\" : \"1234\", \n \"accountNumber\" : \"AC12346\", \n \"balance\" : 12131, \n \"balanceAed\" : 123.1\n}\n---------c_transactions----------\n// 01 \n{\n \"customerId\" : \"1234\", \n \"accountId\" : \"12345\", \n \"transactionId\" : \"T12345\", \n \"referenceNumber\" : \"R12345\"\n}, \n// 02\n{\n \"customerId\" : \"1234\", \n \"accountId\" : \"12346\", \n \"transactionId\" : \"T12346\", \n \"referenceNumber\" : \"R12346\"\n}\n-------------c_cards---------------\n// 01\n{\n \"customerId\" : \"1234\", \n \"accountId\" : \"12345\", \n \"cardId\" : \"C1234\", \n \"cardHolderName\" : \"John Doe\",\n \"LimitAmount\" : 15000.5, \n \"PaymentAmount\" : 5000.5\n\n},\n// 02\n{\n \"customerId\" : \"1234\", \n \"accountId\" : \"12346\", \n \"cardId\" : \"C1236\", \n \"cardHolderName\" : \"John Doe\",\n \"LimitAmount\" : 15000.5, \n \"PaymentAmount\" : 5000.5\n\n}\n db.getCollection(\"accounts\").aggregate(\n [\n { \n \"$match\" : { \n \"customerId\" : \"1234\"\n }\n }, \n { \n \"$lookup\" : { \n \"from\" : \"transactions\", \n \"localField\" : \"accountId\", \n \"foreignField\" : \"accountId\", \n \"as\" : \"Transactions\"\n }\n }, \n { \n \"$lookup\" : { \n \"from\" : \"cards\", \n \"localField\" : \"accountId\", \n \"foreignField\" : \"accountId\", \n \"as\" : \"Cards\"\n }\n }, \n { \n \"$unwind\" : { \n \"path\" : \"$Cards\"\n }\n }, \n { \n \"$project\" : { \n \"_id\" : 0.0, \n \"Transactions._id\" : 0.0, \n \"Cards._id\" : 0.0\n }\n }, \n { \n \"$project\" : { \n \t\t\t\t\t// C-Accounts\n \t\t\t\t\t\t\t\"accountId\" : 1.0, \n \t\t\t\t\t\t\t\"customerId\" : 1.0, \n \t\t\t\t\t\t\t\"accountNumber\" : 1.0, \n \t\t\t\t\t\t\t\"balance\" : 1.0, \n \t\t\t\t\t\t\t\"balanceAed\" : 1.0, \n \t\t\t\t\t// C-Transactions\n \t\t\t\t\t\t\t\"Transactions.accountId\" : 1.0, \n \t\t\t\t\t\t\t\"Transactions.customerId\" : 1.0, \n \t\t\t\t\t\t\t\"Transactions.transactionId\" : 1.0, \n \t\t\t\t\t\t\t\"Transactions.referenceNumber\" : 1.0, \n \t\t\t\t\t// C-Cards\n \t\t\t\t\t\t\t\"Cards.customerId\" : 1.0, \n \t\t\t\t\t\t\t\"Cards.accountId\" : 1.0, \n \t\t\t\t\t\t\t\"Cards.cardId\" : 1.0, \n \t\t\t\t\t\t\t\"Cards.cardHolderName\" : 1.0, \n\n \t\t\t\t\t\t\t\"Cards.LimitAmount\" : { \n \t\t\t\t\t\t\t\t\"$divide\" : [\n \t\t\t\t\t\t\t\t\t\"$Cards.LimitAmount\", \n \t\t\t\t\t\t\t\t\t5.0\n \t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t}, \n \t\t\t\t\t\t\t\"Cards.PaymentAmount\" : { \n \t\t\t\t\t\t\t\t\"$divide\" : [\n \t\t\t\t\t\t\t\t\t\"$Cards.PaymentAmount\", \n \t\t\t\t\t\t\t\t\t5.0\n \t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t}\n \t\t\t\t\t}\n }\n ], \n { \n \"allowDiskUse\" : false\n }\n );\t\t\t\t\n[\n {\n \"accountId\" : \"12345\", \n \"customerId\" : \"1234\", \n \"accountNumber\" : \"AC12345\", \n \"balance\" : 3242.2, \n \"balanceAed\" : 32423.23\n },{\n \"accountId\" : \"12346\", \n \"customerId\" : \"1234\", \n \"accountNumber\" : \"AC12346\", \n \"balance\" : 12131, \n \"balanceAed\" : 123.1\n },\nTransactions:[\n {\n \"customerId\" : \"1234\", \n \"accountId\" : \"12345\", \n \"transactionId\" : \"T12345\", \n \"referenceNumber\" : \"R12345\"\n}, \n{\n \"customerId\" : \"1234\", \n \"accountId\" : \"12346\", \n \"transactionId\" : \"T12346\", \n \"referenceNumber\" : \"R12346\"\n }\n],\nCards: [{\n \"customerId\" : \"1234\", \n \"accountId\" : \"12345\", \n \"cardId\" : 
\"C1234\", \n \"cardHolderName\" : \"John Doe\",\n \"LimitAmount\" : 5000.5, \n \"PaymentAmount\" : 1000.5\n\n},\n {\n \"customerId\" : \"1234\", \n \"accountId\" : \"12346\", \n \"cardId\" : \"C1236\", \n \"cardHolderName\" : \"John Doe\",\n \"LimitAmount\" : 3000.5, \n \"PaymentAmount\" : 1000.5\n }\n]\n]\n\n",
"text": "Hello everyone, I’ve 3 collections and i want to show all of them and for that i use lookup.And after that i want to use divide method on one of the collection to divide some fields by some value(i.e. 5) but it returns an error that $divide can’t be used on array, so i used unwind and apply the divide method and it was working perfectly but the issue is that i want to merge them back in one array. Followed are the collections and query and required output.----------------------- Query --------------------------- Expected Output ----------",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "I am not to sure if I understand your requirements but I would take a look at the $push in https://docs.mongodb.com/manual/reference/operator/update/push/#up._S_push",
"username": "steevej"
},
{
"code": "$map$project$unwind$map$map$unwind$group",
"text": "Have you considered using the $map operator in the $project stage instead of $unwind?$map will apply an operation to each element in the array:Using $map will remove the need to $unwind with a subsequent $group.",
"username": "Justin"
},
{
"code": "",
"text": "$push with $group, but the issue is with the main collection. here is the group clause:\n{\n_id: “customerId”,\n“Accounts” : {$push: “$accounts”},\n“Transactions” : {$push: “$Transactions”},\n“Cards” : {$push: “$Cards”}\n}",
"username": "Nabeel_Raza"
},
{
"code": "$unwind$group",
"text": "Don’t use $unwind or $group - just use $divide inside of $map, like Justin suggested.",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Then it show null value in the field.",
"username": "Nabeel_Raza"
},
{
"code": "$map$unwind$unwind$divide$mapdb.accounts.aggregate([\n {\"$match\" : { \"customerId\" : \"1234\" } }, \n { \n \"$lookup\" : { \n \"from\" : \"transactions\", \n \"localField\" : \"accountId\", \n \"foreignField\" : \"accountId\", \n \"as\" : \"Transactions\"\n }\n }, \n { \n \"$lookup\" : { \n \"from\" : \"cards\", \n \"localField\" : \"accountId\", \n \"foreignField\" : \"accountId\", \n \"as\" : \"Cards\"\n }\n }, \n {\"$addFields\": {\n \"Cards\": {\n \"$map\":{\n \"input\":\"$Cards\", \n \"in\": {\n \"customerId\": \"$$this.customerId\",\n \"accountId\": \"$$this.accountId\",\n \"cardId\": \"$$this.cardId\",\n \"cardHolderName\": \"$$this.cardHolderName\",\n \"LimitAmount\": { \"$divide\": [ \"$$this.LimitAmount\", 5.0 ] }, \n \"PaymentAmount\": { \"$divide\": [ \"$$this.PaymentAmount\", 5.0 ] }, \n }\n }\n }, \n }\n }, \n {\"$project\": {\n \"_id\":0, \n \"Transactions._id\":0\n }\n }\n])\n$unwindaccountstransactionscardscustomerId$lookupdb.accounts.aggregate([\n {\"$match\" : { \"customerId\" : \"1234\"} }, \n {\n \"$group\": {\n \"_id\": null, \n \"Accounts\":{\"$push\":\"$$ROOT\"}\n }\n }, \n { \n \"$lookup\" : { \n \"from\" : \"transactions\", \n \"localField\" : \"Accounts.accountId\", \n \"foreignField\" : \"accountId\", \n \"as\" : \"Transactions\"\n }\n }, \n { \n \"$lookup\" : { \n \"from\" : \"cards\", \n \"localField\" : \"Accounts.accountId\", \n \"foreignField\" : \"accountId\", \n \"as\" : \"Cards\"\n }\n }, \n {\"$addFields\": {\n \"Cards\": {\n \"$map\":{\n \"input\":\"$Cards\", \n \"in\": {\n \"customerId\": \"$$this.customerId\",\n \"accountId\": \"$$this.accountId\",\n \"cardId\": \"$$this.cardId\",\n \"cardHolderName\": \"$$this.cardHolderName\",\n \"LimitAmount\": { \"$divide\": [ \"$$this.LimitAmount\", 5.0 ] }, \n \"PaymentAmount\": { \"$divide\": [ \"$$this.PaymentAmount\", 5.0 ] }, \n }\n }\n }, \n }\n }, \n {\"$project\" : { \n \"_id\" : 0, \n \"Accounts._id\": 0,\n \"Transactions._id\" : 0, \n \"Cards._id\" : 0\n }\n }\n ])\n",
"text": "Hi @Nabeel_Raza,Thanks for providing the example input, the expected output and the aggregation pipeline that you’ve tried.i used unwind and apply the divide method and it was working perfectly but the issue is that i want to merge them back in one array.As mentioned by Justin and Asya, you can utilise $map instead of $unwind here. So replacing all of the pipeline stages after $unwind from your example, below is an example on how you could utilise $divide inside of $map :This should have solved your issue with using $unwind and having to merge them back in. See also $addFields for more information.However looking at your desired output, it looks like there is another issue. It seems that you’re trying to group all documents in accounts, transactions and cards collections matching customerId to a single document. If this is the case, then you need to group before the $lookup, for example:Having said all the above, after looking at the collection schemas and the desired output of your aggregation pipeline, if this is a frequently used query I’d recommend to reconsider the collection schemas. Please review Schema Design: Summary for examples of different patterns.Regards,\nWan.",
"username": "wan"
},
{
"code": "{\n \"customerId\" : \"1234\",\n \"accountId\" : \"12345\",\n \"cardId\" : \"C1238\",\n \"cardHolderName\" : \"John Moe\",\n \"LimitAmount\" : null,\n \"PaymentAmount\" : null\n }\n",
"text": "@wan it through null values. how can we avoid that.",
"username": "Nabeel_Raza"
},
{
"code": "db.getCollection(\"accounts\").aggregate(\n [\n { \n \"$match\" : { \n \"customerId\" : \"1234\"\n }\n }, \n { \n \"$lookup\" : { \n \"from\" : \"transactions\", \n \"localField\" : \"accountId\", \n \"foreignField\" : \"accountId\", \n \"as\" : \"Transactions\"\n }\n }, \n { \n \"$lookup\" : { \n \"from\" : \"cards\", \n \"localField\" : \"accountId\", \n \"foreignField\" : \"accountId\", \n \"as\" : \"Cards\"\n }\n }\n , \n { \n \"$unwind\" : { \n \"path\" : \"$Cards\"\n }\n }, \n { \n \"$project\" : { \n \"accountId\" : 1.0, \n \"customerId\" : 1.0, \n \"accountNumber\" : 1.0, \n \"balance\" : 1.0, \n \"balanceAed\" : 1.0, \n\n \"Transactions.accountId\" : 1.0, \n \"Transactions.customerId\" : 1.0, \n \"Transactions.transactionId\" : 1.0, \n \"Transactions.referenceNumber\" : 1.0, \n\n \"Cards.customerId\" : 1.0, \n \"Cards.accountId\" : 1.0, \n \"Cards.cardId\" : 1.0, \n \"Cards.cardHolderName\" : 1.0, \n \"Cards.LimitAmount\" : { \n \"$divide\" : [\n \"$Cards.LimitAmount\", \n 5.0\n ]\n }, \n \"Cards.PaymentAmount\" : { \n \"$divide\" : [\n \"$Cards.PaymentAmount\", \n 5.0\n ]\n }, \n }\n },\n {\n $group:\n {\n _id: {\n \"accountId\" : \"$accountId\", \n \"customerId\" : \"$customerId\", \n \"accountNumber\" : \"$accountNumber\", \n \"balance\" : \"$balance\", \n \"balanceAed\" : \"$balanceAed\", \n },\n Transactions: { $addToSet: \"$Transactions\" }\n ,Cards: { $addToSet: \"$Cards\" }\n }\n },\n { \n \"$project\" : { \n \"_id\" : 0, \n \"Accounts\":\"$_id\",\n \"Transactions\" : \"$Transactions\", \n \"Cards\" : \"$Cards\"\n }\n }\n ], \n { \n \"allowDiskUse\" : true\n }\n);",
"text": "Here is the solution of the above problem.",
"username": "Nabeel_Raza"
},
{
"code": "db.accounts.aggregate( [\n { \"$match\" : { \"customerId\" : \"1234\"} }, \n { \n \"$lookup\" : { \n \"from\" : \"transactions\", \n \"localField\" : \"accountId\", \n \"foreignField\" : \"accountId\", \n \"as\" : \"Transactions\"\n }\n }, \n { \n \"$lookup\" : { \n \"from\" : \"cards\", \n \"localField\" : \"accountId\", \n \"foreignField\" : \"accountId\", \n \"as\" : \"Cards\"\n }\n }, \n {\"$addFields\": {\n \"Accounts.accountId\": \"$accountId\",\n \"Accounts.customerId\": \"$customerId\", \n \"Accounts.accountNumber\": \"$accountNumber\", \n \"Accounts.balance\":\"$balance\", \n \"Accounts.balanceAed\":\"$balanceAed\", \n \"Cards\": {\n \"$map\":{\n \"input\":\"$Cards\", \n \"in\": {\n \"customerId\": \"$$this.customerId\",\n \"accountId\": \"$$this.accountId\",\n \"cardId\": \"$$this.cardId\",\n \"cardHolderName\": \"$$this.cardHolderName\",\n \"LimitAmount\": { \"$divide\": [ \"$$this.LimitAmount\", 5.0 ] }, \n \"PaymentAmount\": { \"$divide\": [ \"$$this.PaymentAmount\", 5.0 ] }, \n }\n }\n }, \n }\n }, \n { \n \"$project\" : { \n \"_id\" : 0, \n \"Transactions._id\" : 0, \n \"Cards._id\" : 0,\n \"accountId\":0, \n \"customerId\":0, \n \"accountNumber\":0, \n \"balance\":0, \n \"balanceAed\":0\n }\n }, \n])\n$unwindAccountsTransactionsLimitAmountPaymentAmount",
"text": "@Nabeel_Raza, glad that you’ve found a solution.I noticed that the output of the aggregation pipeline is different to the expected output from the original post. Given your collection/document examples from the original post, an example pipeline to get similar output to the pipeline above is below:This should avoid using $unwind and grouping by Accounts collection. Also, avoiding a result of an array of arrays for Transactions field.Worth mentioning that if you’re dealing with monetary data i.e. LimitAmount and PaymentAmount and you need to emulate decimal rounding with exact precision I would recommend to look into Decimal BSON type. See also Model Monetary Data for more information.Still related to data modelling, I would suggest to review Schema Design: Summary for example of different patterns.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks @wan. Your query is still showing null vlaues. ",
"username": "Nabeel_Raza"
}
] | Reverse $unwind into $lookup array | 2020-02-28T05:36:20.026Z | Reverse $unwind into $lookup array | 12,862 |
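The null values left unresolved at the end of this thread are consistent with card documents that have no numeric LimitAmount/PaymentAmount (an arithmetic expression such as $divide yields null when an operand is null or missing). As a hedged sketch only, not a confirmed fix from the thread, the $map body could guard each division, for example:

```javascript
// Hypothetical variation of the $addFields/$map stage above: divide only when the
// field is numeric, otherwise pass the original value (possibly null) through.
db.accounts.aggregate([
  { "$match": { "customerId": "1234" } },
  { "$lookup": { "from": "cards", "localField": "accountId",
                 "foreignField": "accountId", "as": "Cards" } },
  { "$addFields": {
      "Cards": {
        "$map": {
          "input": "$Cards",
          "in": {
            "$mergeObjects": [
              "$$this",
              {
                "LimitAmount": {
                  "$cond": [
                    { "$in": [ { "$type": "$$this.LimitAmount" },
                               [ "double", "int", "long", "decimal" ] ] },
                    { "$divide": [ "$$this.LimitAmount", 5.0 ] },
                    "$$this.LimitAmount"
                  ]
                },
                "PaymentAmount": {
                  "$cond": [
                    { "$in": [ { "$type": "$$this.PaymentAmount" },
                               [ "double", "int", "long", "decimal" ] ] },
                    { "$divide": [ "$$this.PaymentAmount", 5.0 ] },
                    "$$this.PaymentAmount"
                  ]
                }
              }
            ]
          }
        }
      }
  } }
]);
```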
null | [
"production",
"cxx"
] | [
{
"code": "cxx-driver",
"text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.4.1. This patch release addresses issues when compiling without a polyfill with C++17 supporting compilers.Please note that this version of mongocxx requires the MongoDB C driver 1.13.0 or newer.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.NOTE: The mongocxx 3.4.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx-driver. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB C++11 Driver 3.4.1 Released | 2020-03-03T22:36:19.763Z | MongoDB C++11 Driver 3.4.1 Released | 1,954 |
null | [
"golang"
] | [
{
"code": "cur, err := collection.Find(context.TODO(), bson.D{{}}).Select(bson.M{\"id\": 1, \"_id\": 1, \"title\": 1, \"description\": 1, \"date\": 1, \"formatteddate\": 1})",
"text": "I want to filter fields in a find query. For example, I have a collection called Users, where a field is an image in base64. I need to get all columns except image column.I tried the next code, but Select not exist:\ncur, err := collection.Find(context.TODO(), bson.D{{}}).Select(bson.M{\"id\": 1, \"_id\": 1, \"title\": 1, \"description\": 1, \"date\": 1, \"formatteddate\": 1})Any help please?",
"username": "Miguel_Rodriguez_Cre"
},
{
"code": "foobaropts := options.Find().SetProjection(bson.D{{\"foobar\", 0}})\ncursor, err := collection.Find(context.TODO(), bson.M{}, opts)\n",
"text": "Hi @Miguel_Rodriguez_Cre, welcome!You can utilise mongo/options to specify query projection. Please see Project Fields To Return From Query for more information on query projection.An example to omit a field foobar:See also Collection.Find()Regards,\nWan.",
"username": "wan"
}
] | Select fields on Find function | 2020-03-03T18:54:03.784Z | Select fields on Find function | 2,210 |
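For readers following along in the mongo shell rather than Go, the same projection idea can be sketched directly in a find() call. The `users` collection and `image` field names below are assumptions based on the question, not a confirmed schema:

```javascript
// Exclude only the large base64 field and return everything else.
db.users.find({}, { image: 0 });

// Or include just the fields you need (_id is returned by default).
db.users.find({}, { title: 1, description: 1, date: 1, formatteddate: 1 });
```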
null | [
"golang"
] | [
{
"code": "",
"text": "First, the context. Then, the question.I have the need for a database to exist in a sort of “pre-provisioned” state, with certain - call them “original” - collections existing (with no documents) that have uniqueIndexes applied (prior to any writes, of course). Then, at runtime, I am dynamically creating collections - call them “novel” - to which I apply uniqueIndexes immediately.The problem is that multiple applications will be starting independently, each with write access to the “original” collections. There’s no clear point at which unique-createIndexes should be applied, unlike the “novel” collections which can be uniquely-indexed by the application which creates them at runtime. I am considering 2 approaches to the “provisioning” of these “original” collections’ indexes:The 1st approach has the disadvantage of an extra and awkward step in the deployment procedure. Also, if someone forgets to run this application (i.e. ./buildIndexes) then there’s no failsafe. The disadvantage of the second case is two-pronged: 1) it relies on the error string (this can probably be avoided using an error value I’m not aware of); 2) it (kind of) requires each application to take the responsibility of indexing these collections, duplicating work.My question is this: What is the best practice/recommended procedure for establishing a unique set of indexes on a set of collections that a developer knows will be written to independently by multiple applications?",
"username": "John_Rinehart"
},
{
"code": "IndexView.CreateOneIndexView.CreateManyIndexView.CreateOne",
"text": "Hi John,I tried running some code to create the same index multiple times and did not get back any errors from a 4.2 server. My understanding is that IndexView.CreateOne/IndexView.CreateMany should be a no-op if all of the specified indexes already exist. Note that specifying an index with the same name as one that already exists but a different key pattern will return an error because the index specifications don’t match.Can you try running some code against your servers to call IndexView.CreateOne multiple times to see if you get back any errors? If you do, please post your code and the error in a reply to this conversation.– Divjot",
"username": "Divjot_Arora"
},
{
"code": "*mongo.Collection.Indexes().CreateOnemongo.IndexModelindexCollectionindexcnt == 1➜ ./mongo_test\n2020/03/03 12:50:24 indexed the collection 1 time(s).\n2020/03/03 12:50:24 failed to index DB: (IndexOptionsConflict) Inde\nx with name: param3_1_param1_1_param2_1 already exists with a diffe\nrent name\nmongo-go-driver v1.3.1go 1.14mongod➜ ./bin/mongod -version\ndb version v4.2.2\ngit version: a0bbbff6ada159e19298d37946ac8dc4b497eadf\nallocator: system\nmodules: none\nbuild environment:\n distarch: x86_64\n target_arch: x86_64\npackage main\n\nimport (\n\t\"context\"\n\t\"flag\"\n\t\"log\"\n\t\"time\"\n\n\t\"go.mongodb.org/mongo-driver/mongo\"\n\t\"go.mongodb.org/mongo-driver/mongo/options\"\n\t\"go.mongodb.org/mongo-driver/mongo/readpref\"\n)\n\nfunc main() {\n\tflagURI := flag.String(\"uri\", \"mongodb://localhost:27017\", \"URI of the MongoDB host (e.g. mongodb://localhost:27017\")\n\tflag.Parse()\n\tclient, err := mongo.NewClient(options.Client().ApplyURI(*flagURI))\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to obtain a MongoDB client: %s\", err)\n\t}\n\tctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)\n\tdefer cancel()\n\terr = client.Connect(ctx)\n\tif err != nil {\n\t\tlog.Fatalf(\"failed to connect to mongod: %s\", err)\n\t}\n\tif err := client.Ping(ctx, readpref.Primary()); err != nil {\n\t\tlog.Fatalf(\"failed to ping mongod: %s\", err)\n\t}\n\tdb := client.Database(\"brand-new-db\")\n\tcoll := db.Collection(\"brand-new-collection\")\n\tvar indexcnt int\n\tif err := indexCollection(coll); err != nil {\n\t\tlog.Printf(\"indexed the collection %d time(s).\", indexcnt)\n\t\tlog.Fatalf(\"failed to index DB: %s\", err)\n\t}\n\tindexcnt++\n\tif err := indexCollection(coll); err != nil {\n\t\tlog.Printf(\"indexed the collection %d time(s).\", indexcnt)\n\t\tlog.Fatalf(\"failed to index DB: %s\", err)\n\t}\n}\n\nfunc indexCollection(coll *mongo.Collection) error {\n\tctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)\n\tdefer cancel()\n\tt := true\n\t_, err := coll.Indexes().CreateOne(ctx, mongo.IndexModel{\n\t\tKeys: map[string]int{\n\t\t\t\"param1\": 1,\n\t\t\t\"param2\": 1,\n\t\t\t\"param3\": 1,\n\t\t},\n\t\tOptions: &options.IndexOptions{\n\t\t\tUnique: &t,\n\t\t},\n\t})\n\treturn err\n}\n",
"text": "To be clear, I’m trying to replicate this behavior: https://docs.mongodb.com/manual/core/index-unique/#unique-constraint-across-separate-documents.I’m calling *mongo.Collection.Indexes().CreateOne with a mongo.IndexModel argument. My code is below. It errors on the second call to indexCollection (indexcnt == 1) with:Ahh, so let me give this index a common name across my applications (and in this test code) to see if it doesn’t break.I’m using mongo-go-driver v1.3.1 with go 1.14. My mongod version is",
"username": "John_Rinehart"
},
{
"code": "indexCollectionname➜ go build && ./mongo_test\n2020/03/03 13:14:56 indexed the collection 1 time(s).\n2020/03/03 13:14:56 failed to index DB: (IndexKeySpecsConflict) Index must have unique name.The existing index: { v: 2, unique: true, key: { param1: 1, param2: 1, param3: 1 }, name: \"3params\", ns: \"brand-new-db.brand-new-collection\" } has the same name as the requested index: { v: 2, unique: true, key: { param3: 1, param1: 1, param2: 1 }, name: \"3params\", ns: \"brand-new-db.brand-new-collection\" }\nmongo.Dfunc indexCollection(coll *mongo.Collection) error {\n\tctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)\n\tdefer cancel()\n\t_, err := coll.Indexes().CreateOne(ctx, mongo.IndexModel{\n\t\tKeys: map[string]int{\n\t\t\t\"param1\": 1,\n\t\t\t\"param2\": 1,\n\t\t\t\"param3\": 1,\n\t\t},\n\t\tOptions: options.Index().SetName(\"3params\").SetUnique(true),\n\t})\n\treturn err\n}",
"text": "I have modified the indexCollection function to the below, to use a name. It still fails with the following error:I think the problem is that I’m not using an ordered mongo.D (document) type. I’ll run one more test and report back.",
"username": "John_Rinehart"
},
{
"code": "func indexCollection(coll *mongo.Collection) error {\n\tctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)\n\tdefer cancel()\n\t_, err := coll.Indexes().CreateOne(ctx, mongo.IndexModel{\n\t\tKeys: bson.D{\n\t\t\t{\"param1\", 1},\n\t\t\t{\"param2\", 1},\n\t\t\t{\"param3\", 1},\n\t\t},\n\t\tOptions: options.Index().SetName(\"3params\").SetUnique(true),\n\t})\n\treturn err\n}",
"text": "Got eeee. Thanks Divjot. No errors, now. Maybe the name isn’t necessary? But, I’ll keep it.",
"username": "John_Rinehart"
},
{
"code": "Keysbson.D",
"text": "Hi John,Seems like you’ve figured it out. For completeness, the reason it was erroring before is because Go’s maps do not guarantee any ordering and the driver iterates over the Keys field to create the index specification document. This meant that different attempts would generate different documents, which actually represent different indexes on the server. Using bson.D is the way to go for these kinds of things as it guarantees ordering.As for the name, you’re right that it isn’t necessary. If not specified, the driver will generate a name from the specification (e.g. for your Keys field, the name would be param1_1_param2_1_param3_1). The name option is there for you to override this behavior if you want to give the index a more meaningful name.",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "You’re the man Divjot. Thanks for confirming everything.",
"username": "John_Rinehart"
}
] | Dynamic collection creation with unique indexes | 2020-03-02T23:58:57.860Z | Dynamic collection creation with unique indexes | 6,569 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi All,We are doing bulk inserts continuously. With PSA Architecture with enableMajorityReadConcern false parameter, Secondary goes down after an hour or so and is blacklisted . Please recommend to solve this issueRegards,\nSaurav",
"username": "saurav_gupta"
},
{
"code": "",
"text": "The first thing to do is to look at the logs.",
"username": "steevej"
},
{
"code": "",
"text": "Hi All,Below are the logs and parameters and Kindly advise :2020-01-29T17:21:47.124+0530 I REPL [replication-0] We are too stale to use 10.95.147.92:27017 as a sync source. Blacklisting this sync source because our last fetched timestamp: Timestamp(1580295549, 1) is before their earliest timestamp: Timestamp(1580297909, 30285) for 1min until: 2020-01-29T17:22:47.124+0530replication:\nreplSetName: rs1\nenableMajorityReadConcern: falseRegards,\nSaurav",
"username": "saurav_gupta"
},
{
"code": "",
"text": "I think the log from blacklisted host will be more useful. Are your machine using the same NTP servers?",
"username": "steevej"
},
{
"code": "w:1w:2wtimeoutw:2w:majority",
"text": "We are doing bulk inserts continuouslyWhat write concern are you using for your bulk inserts? If you use w:1, write operations will only wait for acknowledgement from the primary. Your secondary will eventually become stale if continuous writes can be acknowledged faster on the primary than they can be applied via replication on the secondary.If you instead use a write concern of w:2, write operations will wait for acknowledgement from your secondary so it should not become stale. Since you only have one secondary, you should also set a wtimeout value to avoid a w:2 write concern blocking indefinitely if your secondary is down.If you want a more resilient configuration, upgrade your replica set to a Primary-Secondary-Secondary configuration and use a w:majority write concern.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I liked the post because we are learning so much by hanging around.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Stennie,I am using w:1 in PSA Architecture , here w=1 is recommended by MongoDB , still it is blacklisted after the secondary or primary is down for 10 minutes. Please adviseRegards,\nSaurav",
"username": "saurav_gupta"
},
{
"code": "w:1",
"text": "I am using w:1 in PSA Architecture , here w=1 is recommended by MongoDB , still it is blacklisted after the secondary or primary is down for 10 minutes.Per my earlier comment, w:1 only waits for acknowledgement from the primary so your secondary will eventually become stale if continuous writes are acknowledged faster on the primary than they can be applied via replication on the secondary.You need to increase your write concern (and ideally upgrade to a Primary-Secondary-Secondary configuration) to avoid this issue.If there are pauses in your write activity that might allow your secondary to catch up, you could also try to mitigate the issue by increasing your oplog sizes. However, increased write concern is a better fix to ensure writes don’t outpace replication.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello Stennie,\nWe’re facing a similar issue with PSS architecture that receives a heavy influx of writes into the primary. Increasing the oplog size seems to have fixed the issue for now. Changing to w:2 was not a viable option for us due to the latency. In fact we had to switch to w:0 (although it provides no guarantee of write acknowledgement by even the primary) since the insertion speeds were too slow. Are there any wt settings that can be adjusted to speed up the insertions?\nThe 3 servers host 3TB data, 256GB RAM, SSD, RAID5",
"username": "Kanishka_Ramamoorthy"
},
{
"code": "",
"text": "RAID5 is not a great choice for a performant database(of any type) it just does not give the throughput, you’ll likely see this as IO wait. You’ll want to switch that up to RAID 10.w:0 can be a scary place to be if you care about your data.",
"username": "chris"
},
{
"code": "",
"text": "Thank you Chris. We’re in the works of getting a raid10 cluster set up. In the meantime are there any mongodb specific configs that can be altered to improve write throughput?\nFor instance, mysql acquires global mutex on it’s key cache so turning off query cache and key cache gave us a heavy performance boost for our write heavy apps. Does mongodb have any similar cache settings that can be altered?",
"username": "Kanishka_Ramamoorthy"
},
{
"code": "",
"text": "Hi Stennie,I increased the opsize to 200 GB .After 1 hour of Primary Shutdown.,I started Mongo again but this time it goes into recovering state. It goes into this state for 5-6 hours.\nPlease suggest how to recover fast",
"username": "saurav_gupta"
},
{
"code": "w:0",
"text": "I increased the opsize to 200 GB .After 1 hour of Primary Shutdown.,I started Mongo again but this time it goes into recovering state.@saurav_gupta Increased oplog sizing will buffer more write operations for replication, but if your continuous write load is outpacing how quickly writes can be replicated and applied on your secondaries, this will not address the underlying problem. You should also review your deployment metrics and consider whether your replication throughput is being limited by resources such as network or I/O bandwidth.If you want to ensure reliable replication to secondaries, you need to throttle writes to what can reasonably be replicated to your secondaries given the buffer provided by your oplog sizing.If you can upgrade to MongoDB 4.2, there’s a new replication flow control feature which is enabled by default and provides administrative control over the maximum secondary lag before queueing writes on the primary. This would impose a similar effect to using a majority write concern but with a bit more tolerance on acceptable lag as well as server-side admin control. This feature has an associated group of flowControl metrics in serverStatus that provides more insight into the activity on the primary.Changing to w:2 was not a viable option for us due to the latency. In fact we had to switch to w:0 (although it provides no guarantee of write acknowledgement by even the primary) since the insertion speeds were too slow@Kanishka_Ramamoorthy Please create a separate thread if you’d like to discuss issues specific to your deployment and use case.Changing write concern to w:0 will make a replication throughput problem even worse: you’ll be writing without any acknowledgement and sending requests at the primary as fast as possible. If you have continuous writes this will exacerbate any problems due to secondary lag (your application isn’t even waiting to confirm the primary accepted the write).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you @Stennie_X I’m opening a new thread",
"username": "Kanishka_Ramamoorthy"
}
] | PSA Architecture with enableMajorityReadConcern false- Secondary down | 2020-02-19T21:39:32.100Z | PSA Architecture with enableMajorityReadConcern false- Secondary down | 3,589 |
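The write concern advice in this thread can be exercised directly from the mongo shell. A hedged sketch with placeholder collection and document names: w:2 makes the insert wait for the primary plus one secondary, and wtimeout stops it from blocking indefinitely if the secondary is down or lagging.

```javascript
// Hypothetical example only: wait for 2 members to acknowledge each bulk insert.
db.items.insertMany(
  [ { sku: "a1", qty: 10 }, { sku: "a2", qty: 25 } ],
  { writeConcern: { w: 2, wtimeout: 5000 } }   // wtimeout is in milliseconds
);

// With the default { w: 1 } only the primary acknowledges, which is why a
// continuous load can outrun replication and leave the secondary too stale.
```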
null | [
"production",
"golang"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to announce the release of v1.3.1 of the MongoDB Go Driver.This release contains several bug fixes. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.3.1 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Go Driver 1.3.1 Released | 2020-03-03T18:47:03.390Z | MongoDB Go Driver 1.3.1 Released | 2,262 |
null | [
"java",
"transactions"
] | [
{
"code": "class MongoSpecification extends Specification {\n\n MongoClient mongoClient;\n MongoCollection collection;\n\n def setup() {\n mongoClient = MongoClients.create();\n MongoDatabase db = mongoClient.getDatabase(\"test\");\n collection = db.getCollection(\"test\");\n\n AsyncConditions conditions = new AsyncConditions(1)\n\n // make sure to drop the db to start with a clean state\n Single.fromPublisher(db.drop()).subscribe({ s ->\n conditions.evaluate({\n s != null\n })\n }, { t -> t.printStackTrace()})\n conditions.await()\n }\n\n def 'test'() {\n when:\n AsyncConditions conditions = new AsyncConditions(1)\n Single.fromPublisher(mongoClient.startSession()).flatMap({session ->\n session.startTransaction();\n return Single.fromPublisher(collection.insertOne(new Document(\"_id\", 1).append(\"msg\", \"test\")))\n .map({ s ->\n System.out.println(\"aborting transaction\");\n session.abortTransaction();\n return s;\n })\n }).subscribe({ success ->\n System.out.println(\"insert result: \" + success);\n conditions.evaluate({\n success != null\n })\n }, { t ->\n t.printStackTrace()\n })\n\n then:\n conditions.await();\n\n when:\n conditions = new AsyncConditions(1)\n\n Single.fromPublisher(mongoClient.startSession()).flatMap({session ->\n return Single.fromPublisher(collection.find(new Document(\"_id\", 1)).first())\n }).subscribe({ document ->\n System.out.println(\"found document: \" + document);\n conditions.evaluate({\n document != null\n })\n }, { t ->\n t.printStackTrace();\n })\n\n then:\n conditions.await()\n }\n}\naborting transaction\ninsert result: SUCCESS\nfound document: [_id:1, msg:test]\n",
"text": "I was finally trying out mongodb transactions with a local single node replicaset (version 4.2.3 MacOS), however I was unable to rollback any modifications I made within that transaction. I am using the reactive java driver version 1.13.0.As I was unable to reproduce expected behaviour, I have created a simple Spock test which would insert a document, abort the transaction and then try to find the just inserted document in a new session (just to make sure, however I am able to see identical results if I add breakpoints and use the CLI to verify)The output is:If I check session.hasActiveTransaction() before aborting the transaction, it returns true.The mongo driver lists transaction support since version 1.9.0, so there must be something wrong how I try to handle transaction within the reactive context. Unfortunately I was unable to find any documentation regarding the use of the reactive driver together with transactions.Does anyone have an idea what is going on here?",
"username": "st-h"
},
{
"code": "session.abortTransaction()abortTransaction",
"text": "Hi @st-h,I think I see the issue, session.abortTransaction() returns a publisher, so that must be subscribed to for the transaction to be aborted.Note: With the reactive streams driver all publishers are Cold publishers, so nothing happens unless they are subscribed to and data is requested. In this case the abortTransaction command is never actually requested.I hope that helps,Ross",
"username": "Ross_Lawley"
},
{
"code": "static Completable insert(MongoClient mongoClient, MongoCollection<Document> collection) {\n return Single.fromPublisher(mongoClient.startSession())\n .flatMapCompletable(clientSession -> {\n clientSession.startTransaction();\n return Single.fromPublisher(collection.insertOne(new Document(\"_id\", 1).append(\"msg\", \"test\")))\n .flatMapCompletable(success -> Completable.fromPublisher(clientSession.abortTransaction()));\n });\n}\n\nstatic Single<Document> findWithSession(MongoClient mongoClient, MongoCollection<Document> collection) {\n return Single.fromPublisher(mongoClient.startSession()).flatMap(clientSession -> {\n clientSession.startTransaction();\n return Single.fromPublisher(collection.find(new Document(\"_id\", 1)).first()).map(document -> {\n clientSession.close();\n return document;\n });\n });\n}\ndef 'test2' () {\n when: 'make sure the document is not present'\n MongoTest.findWithSession(mongoClient, collection).blockingGet()\n\n then:\n NoSuchElementException e1 = thrown()\n\n when:\n Throwable t = MongoTest.insert(mongoClient, collection).blockingGet()\n\n then:\n t == null\n\n when: 'insert transaction has been aborted, document still should not be present'\n Document document = MongoTest.findWithSession(mongoClient, collection).blockingGet()\n System.out.println(\"found document: \" + document)\n\n then: 'test fails here, as no exception is thrown'\n NoSuchElementException e2 = thrown()\n}\n2020-03-02 12:20:56.819 [INFO ] [main ] cluster - Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}\n2020-03-02 12:20:56.924 [INFO ] [cluster-ClusterId{value='5e5cec18ef235923e7964245', description='null'}-localhost:27017] connection - Opened connection [connectionId{localValue:1, serverValue:295}] to localhost:27017\n2020-03-02 12:20:56.929 [INFO ] [cluster-ClusterId{value='5e5cec18ef235923e7964245', description='null'}-localhost:27017] cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 3]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=3316268, setName='local', canonicalAddress=127.0.0.1:27017, hosts=[127.0.0.1:27017], passives=[], arbiters=[], primary='127.0.0.1:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000001, setVersion=1, lastWriteDate=Mon Mar 02 12:20:49 CET 2020, lastUpdateTimeNanos=433887177766044}\n2020-03-02 12:20:56.999 [INFO ] [Thread-4 ] connection - Opened connection [connectionId{localValue:2, serverValue:296}] to localhost:27017\nfound document: [_id:1, msg:test]\n",
"text": "Hi @Ross_Lawley,thanks a lot for your comment. I totally overlooked that abortTransaction would return a publisher after startTransaction just returns void. I have rewritten my code so that it would subscribe to the abortTransaction publisher. I also added a breakpoint within ClientSessionImpl::abortTransaction and it looks like the callback there is called. However, I still see the same results:this is the Spock test, calling the two methodsthis is the full output including info statements form the mongo driverDo you have any idea what I am still doing wrong?On another note: It would be incredibly helpful for people like me, who just started to get into the reactive world as a means to get rid of the deprecated mongo driver, if there would be more examples on how to do things like making use of transactions within the docs of the reactive driver.",
"username": "st-h"
},
{
"code": "insertOnecollection.insertOne(clientSession, new Document(\"_id\", 1).append(\"msg\", \"test\"));Publishers",
"text": "Hi @st-h,Nearly there - you just need to add the session in the insertOne call, so that the session is used and then it should work as expected. eg:collection.insertOne(clientSession, new Document(\"_id\", 1).append(\"msg\", \"test\"));Regarding documentation, I agree and this is something we as a team are looking at improving in the future as well as potentially a friendlier API. The main issue is using raw Publishers would make the documentation extremely verbose and confusing. However, I can see using a library like Reactor or RxJava should really reduce the complexity of the code and that may be the way forward.All the best,Ross",
"username": "Ross_Lawley"
},
{
"code": "thisnotifyMessageSentCannot create namespace test.test in multi-document transaction.",
"text": "Ah, awesome. I was just debugging through ClientSession and noticed that I am dealing with different versions of this when starting and aborting the transaction and when notifyMessageSent is called. I was just thinking that I probably need to tell the operation to use the session somehow, as that somehow got lost and it wasn’t clear to me how the operation would even know about the session (there are no guarantees that it would be called from the same thread etc.)Thanks a lot for pointing that out. I probably still would have missed it as I would have been looking for builder style setter and not an optional first argument The following error was way easier to solve:\nCannot create namespace test.test in multi-document transaction.\nwhich is causes by the statement to drop the collection in the setup of the test. replacing it with remove all elements for the collection and everything works as expected. Wohooo. That was a wild ride though Regarding documentation, I think something like the GridFS tour for reactive streams, which could make use of RxJava or Reactive or both would, on its own, be extremely helpful. Just to show some examples how to use stuff - and it shouldn’t cause much bloat to the documentation. I think the learning curve when switching to the reactive world is quite steep and often there is a lot of doubt involved if the reactive stuff on its own is applied correctly. More examples would be totally awesome here.Thanks again, Ross.",
"username": "st-h"
},
{
"code": "if (session) { \n collection.find(session, ..... )\n} else {\n collection.find(....)\n}\ncollection.withSession(session).find(...)\n",
"text": "Ross, if possible at all, could you please add some information about why the session/transaction has been added as an optional first parameter to all methods? I just would like to understand, as this not only means to have each mongo api method twice, but this also continues in the consuming application (if an app mixes transactional and non transactional queries)As far as I understand, opening a transaction/session is not necessary to read consistent data (so that the client never sees any uncommitted writes). Speaking of a simple web application, one might want to use no transaction/session for all the methods that would only retrieve data. Atomic writes would not need a transaction as well, so the only case would be methods that perform multi document updates/inserts. For reads, the only case that would require the session seems to be when one wants to read data, that has been written in the same session, but hasn’t been committed yet?Now, when writing some sort of data access layer, most cases that use transactions somewhere would be dealing with writing methods that would be available to be used within a session/transaction and a similar method to be used without a session.So, it’s either:or doubling those methods, to provide one that has a client session parameter and one that doesn’t (just like the mongo client does).Long story short:\nWhy did you choose to not do something like:I am mainly asking, because I feel like there is something fundamentally wrong with my understanding and there probably has been a very good reason for doing it this way.",
"username": "st-h"
}
] | session.abortTransaction does not rollback elements inserted after startTransaction | 2020-03-01T22:03:18.591Z | session.abortTransaction does not rollback elements inserted after startTransaction | 8,324 |
null | [
"charts",
"on-premises",
"licensing"
] | [
{
"code": "MongoDB Enterprise Server on-premises,MongoDB Charts requires a MongoDB database to store Charts users, dashboards, data sources, etc",
"text": "According to this link Mongodb charts is now generally available it requires a MongoDB Enterprise Server on-premises, in order to run.However when I visit: https://docs.mongodb.com/charts/19.12/installation/ it says that only a MongoDB Charts requires a MongoDB database to store Charts users, dashboards, data sources, etcSo I’m confused on which it is, can I run it locally for free or do I require a enterprise license?",
"username": "Jack_Dalrymple_Hamil"
},
{
"code": "",
"text": "Welcome @Jack_Dalrymple_HamilI set one of these up the other day. Did not require enterprise.",
"username": "chris"
},
{
"code": "",
"text": "@Jack_Dalrymple_Hamil I am running charts on a docker container and it connects to MongoDB Community Server 4.3.2 (both on EC2)Check out @tomhollander 's reply about licensing → Student project : Charts free licensing - #2 by tomhollander",
"username": "coderkid"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How does pricing work for MongoDB Charts on prem? | 2020-03-03T10:53:51.160Z | How does pricing work for MongoDB Charts on prem? | 6,038 |
[
"configuration"
] | [
{
"code": "",
"text": "Hi, I’m trying to setup a replication cluster, but on my oracle cloud vm I can’t set the hostname without giving an error.I used nslookup to make sure if the A record was correct in the vm’s dns server and it was, on another vps I have somewhere else it works fine.\nThis is the config file template I’m using for all my servers and the error log: https://gist.github.com/GameMaster2030/b714d164577d4f1808aa3cfa0e53ef1dXX is a number, so 01, 02 etc.\n\n1041×155 8.89 KB\nHere is my DNS config.\nAny idea why it is giving this error and how I can fix it?",
"username": "GameMaster2030"
},
{
"code": "01.mongodb.DOMAIN.nlpingnslookupDOMAIN.nlifconfig -a | grep \"inet\"",
"text": "@GameMaster2030The message “Cannot assign requested address” suggests that the hostname/IP you are trying to bind does not resolve to a local network interface.A few things to check:Are the full hostnames (eg 01.mongodb.DOMAIN.nl) resolvable on all of your replica set members? For a quick test, try ping or nslookup using the expected hostname. Your screenshot of DNS details suggests that the replica set members would need to include DOMAIN.nl in their DNS search domain in order to resolve the full hostname.Does the hostname resolve to an IP address that is also local to the machine (eg on Linux the IP should be listed in the output of ifconfig -a | grep \"inet\").If the name resolution appears to check out, can you confirm what O/S version is running in your VMs?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "All the hostnames are resolvable on all the hosts:\nHost 1:\n\nHost 2:\n\nHost 3:\nThe ip isn’t listed on host 2 and 3, as seen on the screenshots. Host 1 does have the ip listed and is the only host that works. Maybe that is the issue, how can I add the IP?All of them are running Ubuntu 18.04",
"username": "GameMaster2030"
},
{
"code": "10.*",
"text": "The ip isn’t listed on host 2 and 3, as seen on the screenshots. Host 1 does have the ip listed and is the only host that works. Maybe that is the issue, how can I add the IP?It sounds like DNS resolution is fine on all your VMs, but two of them are missing the expected local IPs.If an IP isn’t associated with a local network interface, the original error message that you encountered would be expected. The MongoDB process cannot bind (aka “listen to”) an IP address that isn’t local.The fix for this would be outside of MongoDB: your VMs need to have the extra network interfaces assigned & configured.However, I also note that the IPs you are trying to add are public IPs (the existing 10.* IPs are private IPv4 addresses). Typically you would only want to bind to private IPs and have access to the database servers secured via VPN or SSH tunnel from trusted application servers. For more information on security measures please see the MongoDB Security Checklist.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "E QUERY [js] Error: couldn't connect to server 02.mongodb.DOMAIN.nl:27017, connection attempt failed: NetworkTimeout: Error connecting to 02.mongodb.DOMAIN.nl:27017 (X.X.X.X:27017) :: caused by :: Socket operation timed out :\nconnect@src/mongo/shell/mongo.js:344:17\n@(connect):2:6\nexception: connect failed\n",
"text": "So I got it to work and it’s up now, but now when I try to connect to hosts 2 and 3 externally, it gives the following error:But when I connect to the hostname on the host itself it works fine.",
"username": "GameMaster2030"
},
{
"code": "Socket operation timed outmongodmongodreplicaSet",
"text": "Error connecting to 02.mongodb.DOMAIN.nl:27017 (X.X.X.X:27017) :: caused by :: Socket operation timed outI assume the expected IP is logged here, so the issue is probably related to firewall rather than DNS.Drivers/clients connecting to a replica set will use the hostnames listed in the replica set config for server discovery & monitoring. A Socket operation timed out error is expected if a driver cannot connect to a mongod instance using the hostnames and ports discovered via the replica set configuration.Possible solutions are:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "2020-02-17T15:30:48.483+0000 E QUERY [js] Error: couldn't connect to server 03.mongodb.DOMAIN.nl:27017, connection attempt failed: SocketException: Error connecting to 03.mongodb.DOMAIN.nl:27017 (X.X.X.X:27017) :: caused by :: No route to host :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n2020-02-17T15:30:42.322+0000 F - [main] exception: connect failed\n2020-02-17T15:30:42.322+0000 E - [main] exiting with code 1\n",
"text": "\n973×557 17.4 KB\n\nFigured out where the firewall is on Oracle, so I added this rule. (Will change the IP range later to only allow trusted IP’s)\nBut now I get the following error:",
"username": "GameMaster2030"
},
{
"code": "No route to host10.*",
"text": "03.mongodb.DOMAIN.nl:27017 (X.X.X.X:27017) :: caused by :: No route to hostNo route to host indicates a networking problem: there currently isn’t a valid network route to communicate with the target IP address.If the target IP in that message is as expected and your firewall rules are open, one likely cause is that you may be trying to connect to private IPs (for example, 10.* in one of your earlier output examples) from a public IP. Private IPs are only routable within the same local network or VPN.If the destination IPs are definitely not private IPs, this may be a networking or firewall issue to follow up on with your ISP/hosting providers.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "no route to host> rs.initiate( { _id : \"rs0\", members: [ { _id: 0, host: \"01.mongodb.DOMAIN.nl:27017\" }, { _id: 1, host: \"02.mongodb.DOMAIN.nl:27017\" }, { _id: 2, host: \"03.mongodb.DOMAIN.nl:27017\" }\n ] })\n{\n \"ok\" : 0,\n \"errmsg\" : \"replSetInitiate quorum check failed because not all proposed set members responded affirmatively: 02.mongodb.DOMAIN.nl:27017 failed with Error connecting to 02.mongodb.DOMAIN.nl:27017 (10.8.0.3:27017) :: caused by :: No route to host, 03.mongodb.DOMAIN.nl:27017 failed with Error connecting to 03.mongodb.DOMAIN.nl:27017 (10.8.0.2:27017) :: caused by :: No route to host\",\n \"code\" : 74,\n \"codeName\" : \"NodeNotFound\"\n",
"text": "Hi @Stennie_X,I took the time to set up a VPN to connect the servers together, but I still get the no route to host error.They can ping each other just fine and connect to each other with ssh.",
"username": "GameMaster2030"
},
{
"code": "",
"text": "@Stennie_X Do you know what’s causing this? I’m using OpenVPN",
"username": "GameMaster2030"
}
] | Failed to set up listener: SocketException: Cannot assign requested address when using hostname | 2020-02-09T20:56:32.424Z | Failed to set up listener: SocketException: Cannot assign requested address when using hostname | 43,152 |
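The thread above ends unresolved. As a hedged diagnostic sketch only (hostnames copied from the thread, not a confirmed fix), the same connectivity can be tested from the mongo shell, which separates VPN routing or firewall problems from the replica set configuration itself:

```javascript
// On each member, confirm what the running mongod actually binds to:
db.serverCmdLineOpts().parsed.net;

// From the shell on host 1, try opening plain connections to the other members.
// If these also fail with "No route to host", the problem is VPN routing or a
// firewall rather than anything in the replSetInitiate document.
new Mongo("02.mongodb.DOMAIN.nl:27017");
new Mongo("03.mongodb.DOMAIN.nl:27017");
```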
null | [
"replication"
] | [
{
"code": "",
"text": "Hello everyone,We are planning to upgrade our mongodb cluster and I may need some advices . Our current setup is:These are managed by Cloud Manager (or at least used to be managed as mongodb 2.6 is no longer supported by Cloud Manager).What we want to do is to “merge” these two RS into a single one running mongodb 3.6. We also want to update our OS, hardware and all.If I had only the mongodb 3.4 RS, it would be “easy”. I would just add 3 more nodes to the RS, wait for it to synchronize and remove old servers from the RS. Then just upgrade to version 3.6 using cloud manager.However here I also want to deal with the mongodb 2.6 RS. Not sure how can I proceed without “downtime”. Any ideas ?",
"username": "Robin_Monjo"
},
{
"code": "mongodumpmongorestore",
"text": "Welcome @Robin_Monjo!What we want to do is to “merge” these two RS into a single one running mongodb 3.6.Members of a single replica set need to share a common history of changes via the oplog. For members of two distinct replica sets the only way to merge data would be dumping & restoring data from the replica set you want to retire into the replica set you want to keep.You could backup using mongodump & mongorestore, but may need something more bespoke depending on the overlap of data between your two replica sets. For example, if both replica sets have common databases you may want to merge, rename, or ignore duplicate databases & collections. Merging data would be more straightforward if both replica sets contain different database names.We also want to update our OS, hardware and all.I would approach this in several stages so you don’t conflate potential issues due to hardware or O/S upgrades with your major version upgrade and replica set consolidation. Changing many variables at once may save on elapsed time, but could dramatically complicate diagnostics and troubleshooting should anything not go to plan.For example, you could:The upgrade of your 3.4 replica set as well as the major version upgrade from 3.4 to 3.6 should have negligible impact on availability so long as you follow rolling maintenance procedures and keep a strict majority of configured voting members online during replica set upgrades.You could perform some of these steps in a different order (such as migrating documents from 2.6 to 3.4 prior to the 3.6 upgrade) if that better suits your business requirements.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello @Stennie_X,Thank you very much for your answer. That confirms what I thought, I will need to schedule some maintenance time for the apps running on the 2.6 RS. Databases between the 2 RS do not overlap so that’s already a good thing !Other question, I do not find an AMI maintained by mongodb that automates what’s described in here : Maximizing MongoDB Performance on AWS | MongoDB BlogThat would be great if mongodb provided an AMI for AWS, without any mongodb version installed, just one that is optimized to run mongo.Anyway thank you for your answer,Kind regards",
"username": "Robin_Monjo"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Merging 2 Replicaset into one | 2020-03-02T17:48:02.524Z | Merging 2 Replicaset into one | 3,226 |
null | [] | [
{
"code": "",
"text": "Just migrated to this forum from the google group and unfortunately I have to say that the limitation of not being able to ask questions without approval is quite annoying as it delays ongoing conversations and finding solutions to issues. I can understand that it might be necessary to keep spammers out of the forum. However, if a spammer is able to fake one post that is approved, the spammer is likely able to repeat that. I would like to put the idea out there, to relax the rule so that one just needs 1 approved post, to be able to post without approval.",
"username": "st-h"
},
{
"code": "",
"text": "@st-h Thanks for the feedback!New user restrictions are part of minimising drive-by spam and encouraging quality posts as users become more familiar with the community. Google Groups has similar moderation for initial posts which has worked well to mitigate drive-by spam (which is unfortunately problematic).It doesn’t take long to earn the next trust level. See: Trust Levels in the community welcome post for more information.We have a global team of moderators so there shouldn’t be too much delay between initial post & approval, but we will definitely consider adjusting policies as the community grows.If someone does earn a trust level and starts spamming, there is also an option to flag suspect posts for review by the moderation team.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "thanks for your reply. It really did not take too long until the trust level was raised. It was just such a bummer, switching over from google group, while I was dealing with a mongo related issue and I was not able to even post the issue and reply to any suggestions. However, I understand that it is nearly impossible to find a solution which will work for everyone here.\nI had a look at trust levels before posting though, and I did not find anything related to “posts need to be approved”, but maybe I just overlooked.",
"username": "st-h"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Relax Posting needs approval rule | 2020-03-02T11:55:39.576Z | Relax Posting needs approval rule | 4,438 |
null | [] | [
{
"code": "",
"text": "Hello,So, i was looking at GitHub and there is Terraform Provider for MongoDB Cloud Manager but it’s not supported by MongoDB. GitHub - mongodb-labs/terraform-provider-mongodb: Terraform Provider for MongoDB Cloud resources (This repository is NOT a supported MongoDB product) (last push to master was 4 months ago)On the other hand, I just saw that you released Terraform Provider for MongoDB Atlas. Do you have plans to start supporting both?Best Regards,\nMario Pereira",
"username": "Mario_Pereira"
},
{
"code": "mongodb-labs",
"text": "@Mario_Pereira The mongodb-labs GitHub org is for experimental projects from MongoDB, Inc. These projects & prototypes are shared for feedback, but are not officially supported, recommended, or actively maintained.The Terraform Provider in mongodb-labs is a project created at our internal Engineering Skunkworks (aka “hack week”) last year, and it isn’t production ready or related to the Terraform Provider for MongoDB Atlas.If we do officially support a provider like this in future, it would live in a more public GitHub org without the support disclaimers :).FYI: the repo for the officially maintained Terraform MongoDB Atlas provider is GitHub - hashicorp/terraform-provider-mongodbatlas: This is moved to https://github.com/mongodb/terraform-provider-mongodbatlas.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Cloud Manager Terraform | 2020-02-26T16:15:57.193Z | MongoDB Cloud Manager Terraform | 2,384 |
null | [
"compass"
] | [
{
"code": "",
"text": "after setting up the first stage $match…\nis there no resulting doc count?\nI see the 'Output after $match stage (Sample of 20…) pane on right but cannot seem to see anywhere the count of docs that result from the match",
"username": "James_Bailie"
},
{
"code": "$match$count$match",
"text": "I see the 'Output after $match stage (Sample of 20…) pane on right but cannot seem to see anywhere the count of docs that result from the matchCompass’ Aggregation Pipeline Builder currently only shows a preview of documents for a $match stage to help you develop your pipeline. Where possible, Compass is trying to avoid executing the full pipeline (for example, sampling the collection by default) so the editing experience is more responsive.If you want to see a full count of matching documents you can add a $count stage after the $match, but I don’t believe there is an equivalent UI toggle for this in the current stable version of Compass (1.20.5).I suggest posting this as a feature suggestion on the MongoDB Feedback site for others to upvote & watch.Regards,\nStennie",
"username": "Stennie_X"
},
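For illustration, the extra stage suggested above would look like this in the pipeline (the collection name and filter are hypothetical):

    db.movies.aggregate([
      { $match: { year: { $gte: 2000 } } },
      { $count: "matchingDocs" }   // returns a single document such as { matchingDocs: 1234 }
    ])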
{
"code": "",
"text": "I get what you’re saying…running the query in a large db when building a pipeline would be an issue…doing a $count stage is viable…but when you do want to run it - no run button? it’s running the 20 samples on the fly …so for a full result review…you have to copy the code over to shell command I guess…maybe have it write to a file… hmmmmm…no designer friendly view of query results…whether command shell or that small window with the horizontal slider in Compass…kind of have morphed into a different topic though…still learning …",
"username": "James_Bailie"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Compass aggregate pipeline doc count | 2020-03-02T02:51:46.652Z | Compass aggregate pipeline doc count | 2,565 |
null | [
"golang"
] | [
{
"code": "\terr = session.DB(\"\").C(\"place\").EnsureIndex(mgo.Index{\n\t\tKey: []string{\"$2dsphere:location\", \"user_id\"},\n\tsecondIndex := mongo.IndexModel{\n\t\tKeys: bson.M{\"location\": \"2dsphere\", \"user_id\": 1},\n\t}\n\tif _, err := db.Collection(\"place\").Indexes().CreateOne(context.TODO(), secondIndex); err != nil {\n\t\treturn err\n\t} \n\tif _, err := db.Collection(place.Namespace).Indexes().CreateOne(context.TODO(), secondIndex); err != nil {\n\t\tif !strings.Contains(err.Error(), \"IndexKeySpecsConflict\") {\n\t\t\treturn err\n\t\t}\n\t}\n",
"text": "Hello, I am replacing the globalsign/mgo by mongo-driver. I got a problem when I replacing ensureIndex in mgo by CreateIndex in mongo-driver.One of the collections has an index created by mgo with default name (without setting the name of the index when create it), for example, in mgo the index is created as:and I replace by the code below:when I run the code I will get error:panic: (IndexKeySpecsConflict) Index must have unique name.The existing index: { v: 2, key: { location: “2dsphere”, user_id: 1 }, name: “location_2dsphere_user_id_1”, ns: “place”, 2dsphereIndexVersion: 3 } has the same name as the requested index: { v: 2, key: { user_id: 1, location: “2dsphere” }, name: “location_2dsphere_user_id_1”, ns: place\", 2dsphereIndexVersion: 3 }For the other indexes created with a name, I can SetName in IndexOption (with the same name as mgo) and no such error.The database has been running on product server, so I want to skip the IndexKeySpecsConflict error and keep going forward, i.e.is that a correct way? any other suggestions?Thanks for your help!James",
"username": "Zhihong_GUO"
},
{
"code": "mongo.CommandErrorIndexKeySpecsConflict",
"text": "Hi James,Per mongo/error_codes.yml at master · mongodb/mongo · GitHub, the error code for this error is 86. Given this, you can type-cast your error to mongo.CommandError and check that the Code field is 86 or that the Name field is IndexKeySpecsConflict.",
"username": "Divjot_Arora"
},
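Not part of the answer above, but one way to see the index specification the server already holds (and therefore why the requested key order collides with the existing name) is to list the indexes from the mongo shell:

    db.place.getIndexes()
    // compare the "key" and "name" of the existing index with the one your code requests;
    // if they differ only in key order, reuse the existing spec or drop and recreate it deliberately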
{
"code": "",
"text": "OK, thanks for the quick answer.",
"username": "Zhihong_GUO"
}
] | Issue of creating index with default name using Go driver | 2020-03-02T22:06:10.073Z | Issue of creating index with default name using Go driver | 6,712 |
null | [
"atlas",
"charts"
] | [
{
"code": "{\n \"status\": {\n \"active\": true,\n \"admin\": true,\n ...\n },\n \"firstName\": \"Sean\",\n \"lastName\": \"Campbell\",\n \"courses\": [\n {\n \"courseId\": \"some id\",\n \"courseName\": \"Working At Heights\",\n \"courseType\": \"SAFETY\",\n \"datesTrained\": [\n \"1976-05-11T08:19:05.000Z\",\n \"1987-04-12T10:29:45.000Z\",\n \"2019-01-12T00:00:00.000Z\"\n ],\n \"expiryWarningSent\": false\n },\n {\n \"courseId\": \"some other id\",\n \"courseName\": \"WHMIS 2015\",\n \"courseType\": \"SAFETY\",\n \"datesTrained\": [\n \"1974-01-02T22:47:37.000Z\",\n \"2017-06-30T14:24:18.796Z\"\n ],\n \"expiryWarningSent\": false\n }\n ]\n },\ncourseNamecoursesdatesTrained[\n {\n '$unwind': {\n 'path': '$courses', \n 'preserveNullAndEmptyArrays': false\n }\n }, {\n '$match': {\n 'courses.courseType': 'SAFETY', \n 'courses.datesTrained.0': {\n '$exists': true\n }\n }\n }, {\n '$sortByCount': '$courses.courseName'\n }\n]\n",
"text": "I have a collection of member records that look like the followingI am looking to build a column chart with MongoDB Charts that will show the number of members who have been trained in each of the courses.\nWhere the x-axis is the courseName from the unwound (unwind option with charts). But I cannot get the y-axis right. I am given the option to unwind the courses array, but cannot find a way to get the graph to display the count of datesTrained length greater than 1.I have an aggregation pipeline that returns the values I am looking for from the members collection.Any help is very much appreciated.",
"username": "Natac13"
},
{
"code": "",
"text": "\nScreen Shot 2020-03-02 at 17.47.242126×1888 371 KB\nWhy you want to count dates? Just count the _idIs this something you are looking for?",
"username": "coderkid"
},
{
"code": "datesTrainedcoursecoursesdatesTraineddateTrainedcourses",
"text": "@coderkidYour suggestion returned the same values. Which are different then what the aggregation pipeline is returning.\nWhen the datesTrained array of any course is empty then the member has not been trained in the course, but they may have the course record since I am using that array (courses on the member document) to track other thing which may leave the datesTrained empty.Then to complicate the situation more, in the future I would like to test the last dateTrained is not passed the expiry date, which I would like to calculate based off a different collection, the courses collection. Which I am not sure if Chart will be able to do…",
"username": "Natac13"
},
{
"code": "",
"text": "Hi @Natac13. When you have unusual requirements like this, it’s often useful to preprocess the data with your own custom aggregation pipeline. You can take the pipeline you’ve already written and paste it directly into the Charts query bar, or if you want to build multiple charts with this data you could save the pipeline against the data source.Does that work for you?\nTom",
"username": "tomhollander"
},
{
"code": "$sortByCountcourseName_id",
"text": "BINGO!!Thanks @tomhollander, I just copy and pasted my pipeline above without the $sortByCount stage. Then using the suggestion from @coderkid I did x-axis courseName and y-axis _id and the chart looks beautiful!",
"username": "Natac13"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Help building column charts | 2020-03-02T22:01:49.905Z | Help building column charts | 2,597 |
null | [
"charts"
] | [
{
"code": "",
"text": "I didn’t try other chart types; I am trying to create donut charts, and “Limit Results” under “Label” doesn’t work as expected.First of all, Up/Down arrow does change the number but it doesn’t update the chart, if you type the number in and hit enter it updates the chart.Most importantly, If I type 3, it shows me 12, If I type 2; it shows 2, If I type 6, it doesn’t do anything. I couldn’t find a pattern.",
"username": "coderkid"
},
{
"code": "",
"text": "Hi KayThanks for the question. The numeric entry text box is a bit flaky on some browsers. We do have a new version of that component which we’ll push out on that card. Apologies for the inconvenience.The rendering behaviour you’re seeing is likely caused by having multiple data points with the same value. Charts will render more points than you ask for if there are additional points with the same value as those within the limit. For example if your data points are 9, 7, 6, 5, 4, 4, 4, 3 and you ask for the top 5, you’ll get 7 as all the 4s are of equal size. It is a little strange but we thought it was better than arbitrarily choosing which values to render.HTH\nTom",
"username": "tomhollander"
},
{
"code": "",
"text": "Hello @tomhollander,Thank you so much for fast reply and explanation. Now, I do understand the behavior.Best,\n-Kay",
"username": "coderkid"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | "Limit Results" doesn't work as expected | 2020-03-02T19:47:54.697Z | “Limit Results” doesn’t work as expected | 1,821 |
null | [
"atlas-functions",
"stitch"
] | [
{
"code": "",
"text": "Hi, In the docs it is written that it shall be possible to use specific user id or a run a script to determine user id before calling a function: https://docs.mongodb.com/stitch/functions/define-a-function/It does not seem to work - at least for triggers and for “test” runs.Example:Am I doing something wrong ?",
"username": "Dimitar_Kurtev"
},
{
"code": "",
"text": "Hi Dimitar –Triggers aren’t called by a specific user, and therefore will always run as “System”. As for the console, it will default to using the System User. However, you can “Change User” in the UI and run as any user within your Stitch Authentication. Have you tried this?\nimage904×150 9.57 KB\n",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Hi Drew,thanks for you reply. Yes I tried and it works, executing the function as a different user using “Change User”.What confuses me is the following.\n\nAccording to the docs and Stitch UI, I shall be able to choose how to authenticate a function. For example, this is what the docs state for User Id - “This type of authentication configures a function to always run as a specific application user”.\nWhy can I set this if it is not used? Maybe more info in the docs will be good.To clarify why I need this to work:\nMy use case is that I would like periodically to send emails to my users. For that I need their email addresses. The email address of a user is stored in the internal stitch database to which I don’t have direct access (except with Stitch API). I was hoping that I can use the “Script” authentication method when I run my “sendEmail” function. The script should get the parameters of the “sendEmail” function and return the user id to which I want to send the email to. Does this make any sense ? Thanks again!",
"username": "Dimitar_Kurtev"
},
{
"code": "",
"text": "Appreciate the feedback, we’ll look into clarifying this in documentation. Since Triggers don’t use Authentication – they are just backend functions that respond to set events – they don’t have an associated user or use authentication.Specifically, for the case that you mention, you could use the Stitch Admin API or keep information about the Stitch user in a separate collection.",
"username": "Drew_DiPalma"
}
] | Setting function’s custom execution user does work | 2020-02-29T20:39:50.288Z | Setting function’s custom execution user does work | 2,223 |
null | [
"morphia-odm"
] | [
{
"code": "",
"text": "As the title suggests, 2.0.0-BETA1 is out. I have a somewhere more complete announcement here (https://twitter.com/evanchooly/status/1234468735282565120). But please do try it out and file issues against any bugs or incompatibilities you might find.",
"username": "Justin_Lee"
},
{
"code": "",
"text": "@Justin_Lee I was not aware of this project. That is great! I am looking for to use it on my next project. Thank you!",
"username": "coderkid"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Morphia 2.0.0-BETA1 released | 2020-03-02T14:14:33.247Z | Morphia 2.0.0-BETA1 released | 3,553 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi\nI’m an experienced software dev and have worked with every major RDBMS, always been interested in Mongo but never really felt like I had something that fit with Mongo. However, I think I do now, so I wanted to double check. Regardless of the answer I’m enrolled and doing the Mongo courses and really enjoying them and will continue to do so.So my next task that I am seriously considering using Mongo for, is building a reporting system. I’ve built a few before and really enjoy them. But this one is a little different. We don’t directly own any of the data. I’m basically going to have to aggregate/combine results from Jira and TestRail. Because of the scale this data, I can’t just make new requests and then aggregate/combine responses results sets in the 1000s. The performance will be poor. So I’m looking at persisting the data in Mongo.Reasons being:a) The responses from both APIs can be custom, or somewhat flexible\nb) They’re JSON to start with and being potentially custom, mapping them into DB tables would be tedious and probably error prone due to the various types that can be returned.\nc) The potential scale, so I’ll elaborate. In Jira, we’ll have many projects, a project is a system in our case. So those will have 100s/1000s of stories, then subsequently in TestRail, each story will have multiple test cases, and each test will have a test result for each test run. These are all automated, so may run nightly, so quickly a test will have dozens of results so quickly you have a load of data.So although I’m not through the data modelling course, I am thinking that there is a lot of scale potential, especially when you start trying to do work around trends over time. There’s what appear to me currently going to be lots of 1:N relations.\nProjects 1:N Stories\nStories 1:N Test Cases\nTest Cases 1:N Test ResultsSo I’m at this early stage thinking of collections for each major item (Stories, Test Cases, Test Results). I just wondered what peoples thoughts were on this approach and whether Mongo is a suitable choice here? As mentioned either way, I’m going to continue with learning Mongo as it’s cool. I certainly see that this could be modeled in an RDBMS, but the flexibility and lack of control I have on what custom things people put into TR and Jira make modelling that in something like Postgres a bit harder. PG was my original idea and just use json columns to store the responses, but I’m not convinced reporting through the json columns would be that easy/performant.Anyway, I’d be interested to hear some thought ",
"username": "Jonny"
},
{
"code": "",
"text": "The main entities are Projects, Stories, Test Cases and Test Results and each with one-to-many relationship with the following entity. I am assuming you already have some idea about how the data is collected into the MongoDB database.There are couple of questions that come to mind, to start with. How much data? And, how this data is going to be used (what are the main queries or reports)?These are some relevant references:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks for the links, I’ve just read the 6 rules. In terms of data we’ll have dozens of systems, with 1000s of stories, each with 1-15 associated tests, the test results for these test cases will be automated, so probably run weekly. So a 50 test results per test case in a given year.Query wise, the report is basically the state of testing for applications, I am considering making an aggregated document with the number of requirements, number of tests, requirement coverage, test coverage etc.",
"username": "Jonny"
}
] | Mongo as API aggregator | 2020-02-29T20:27:25.124Z | Mongo as API aggregator | 1,811 |
null | [] | [
{
"code": "",
"text": "Why we need it… real time use… please share your expertise solution",
"username": "bhushan_22818"
},
{
"code": "",
"text": "Compass is a Graphical user interface for interacting with a MongoDB cluster.\nI has:Compass is a very powerful tool!",
"username": "Natac13"
},
{
"code": "",
"text": "Hi @bhushan_22818,In addition to @natac13,For further information you can also refer our documentation.Thanks,\nShubham Ranjan\nCurriculum Services Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | WHat is mongodb Compass | 2020-03-01T10:08:26.825Z | WHat is mongodb Compass | 1,023 |
null | [
"graphql",
"stitch"
] | [
{
"code": "",
"text": "I have a really hard decision to make for an App I currently Develop please help me. I have 2 options one is to use WordPress as a Backend via WP-REST API/WP-GraphQL second is to use Stitch with the new GraphQL feature.Option 1:\nWordPress as headless cms for my Application. It has good API integration and WooCommerce will make it easy to make in-app charges (Recurring Payments). WordPress Hosting will be cheap since it only will function as a backend so no one will go to this site, only my App will get data from it via API.PROS:CONSOPTION 2:\nI’m a geek, I love JavaScript, React, MongoDB, Nodejs so everything that is cool today\nI was so lucky when I heard GraphQL coming to Stitch but there are also a few pros and cons that I need to consider.PROSbut the cons are big here…CONSSo I hope anyone with experience here can give me some good advice on what I should do here?Thank you.",
"username": "Ivan_Jeremic"
},
{
"code": "",
"text": "I like NoSQL more.I’m a geek, I love JavaScript, React, MongoDB, Nodejs so everything that is cool todayThis sounds like you have already made your choice.My experience:\nRight now I am rebuilding my SaaS application to use AWS Lambda to access mongo db through an Apollo GraphQL backend using the Serverless Framework. Therefore I somewhat avoid your second con for options 2 and avoid your point one for option 2 since Apollo is the Graphql layer.\nI choose this route to try and keep costs as low as possible.\nI do not have any experience with Stitch at the moment and would be interesting in hearing anyone else’s thoughts on this matter as wellI would also be interested in asking MongoDB if Stitch is run off AWS Lambda or equivalent fro other cloud providers?",
"username": "Natac13"
}
] | Hard decision Stitch + GraphQL or WordPress? | 2020-03-01T22:04:15.876Z | Hard decision Stitch + GraphQL or WordPress? | 2,780 |
null | [
"python"
] | [
{
"code": "{\n \"_id\" : ObjectId(\"5e57d89d5a7f537828c12a53\"),\n \"field\" : \"my \\\"string\\\" with escaped chars\"\n}\ndoc = collection.find_one({})\nprint(doc)\n{'_id': ObjectId('5e57d89d5a7f537828c12a53'), 'field': 'my \"string\" with escaped chars'}",
"text": "I’m using pymongo to access MongoDB from python.I have a collection with a document that include escaped quotes:However ,when I retrieve the document like this:I get this:{'_id': ObjectId('5e57d89d5a7f537828c12a53'), 'field': 'my \"string\" with escaped chars'}which is not what I want as the escaped quotes have disappeared.What should I do to retrieve the document as is in the database, that is, with the escaped quotes?Thanks",
"username": "Didac_Busquets"
},
{
"code": "",
"text": "Any reason why you want the backslash? In principle you use the backslash in your code because you want the quotes. I am not sure in python but in some other languages you put 3 backslashes. The first one escapes the second one and the third escape the quotes.",
"username": "steevej"
},
{
"code": "",
"text": "When you use it, why don’t you define your string as raw using r’text’? I should keep escape characters.",
"username": "coderkid"
},
{
"code": "",
"text": "Thanks @steevej and @coderkid.I do need the backslashes. Actually, the documents I want to retrieve are much more complicated than\nthe simple example I gave, and they are in a database that I cannot modify.The documents are correctly retrieved by a Java application, but for some reason pymongo “unescapes” the backslashes.",
"username": "Didac_Busquets"
},
{
"code": "return str(doc)strreturn json.dumps(doc)",
"text": "Ok, so I’ve found what the problem was. It’s the “print” bit, that is not displaying the backslashes. My code was actually doing a return str(doc) as it was a REST API server. The str was then removing the backslashes.I’ve changed that for return json.dumps(doc) and it keeps them.So problem solved!",
"username": "Didac_Busquets"
}
] | Retrieving escaped characters using PyMongo | 2020-02-27T15:15:53.966Z | Retrieving escaped characters using PyMongo | 4,884 |
[
"sharding",
"transactions"
] | [
{
"code": "",
"text": "i have a question.\n\nSnipaste_2020-02-23_23-10-051520×199 18.2 KB\n\n\n微信图片_202002232309271530×256 96.3 KB\n",
"username": "1116"
},
{
"code": "",
"text": "@1116 This is expected as per the documented Production considerations for transactions in sharded clusters.Arbiters are not supported for multi-shard transactions:On a sharded cluster, transactions that span multiple shards will error and abort if any involved shard contains an arbiter.The reasoning behind this is that arbiters help maintain a quorum of voting members if a voting secondary is unavailable, but cannot contribute to acknowledging majority write concern since they do not store any data. Without majority write acknowledgement, data could potentially be rolled back which would go against one of the guarantees of transactions (all-or-nothing commit semantics).The solution for this would be to replace your arbiters with data-bearing secondaries, which will provide more resilience and consistent majority write behaviour in the event a voting member of a replica set is unavailable.Regards,\nStennie",
"username": "Stennie_X"
}
] | Shard cluster transaction aborted! abortCause:ReadConcernMajorityNotEnabled | 2020-02-23T20:13:31.300Z | Shard cluster transaction aborted! abortCause:ReadConcernMajorityNotEnabled | 2,394 |
|
[] | [
{
"code": "",
"text": "Hi ! I use the iOS Discourse app on a daily basis (more than 30 forums). Unfortunately, I cannot log in this forum.\n\nIMG_4567.PNG1242×2208 179 KB\nDoes anyone has a clue about this issue ?Thanks in advance folks ! AA",
"username": "Amalik_Amriou"
},
{
"code": "",
"text": "I’ve asked on Discourse forum, I’ll let you know \nhttps://meta.discourse.org/t/i-cannot-sign-in-mongodb-forum-through-discourse-ios-app/142386?u=amalik",
"username": "Amalik_Amriou"
},
{
"code": "",
"text": "My suspicion would be that this is related to the fact that we are a self-hosted implementation and not a Discourse-hosted instance. I’ll see if I can find out more. + @Deepak_Shah",
"username": "Jamie"
},
{
"code": "",
"text": "@Amalik_Amriou What authentication method are you using to log in (Google auth or Single Sign-On with email)?I’m able to login using Single Sign-On and the Android Discourse app.Also, what happened with your thread on the Discourse Meta? It appears to have been removed or made private.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X !I’ve tried to log in through email SSO and Google Auth.The thread has been closed and removed on the Discourse Meta indeed. It wasn’t the right place to be for the topic…Regards,AA",
"username": "Amalik_Amriou"
},
{
"code": "",
"text": "Hi @Amalik_Amriou,I’m able to sign-in successfully using IOS Discourse Hub app through SSO and also Google Auth (using another account).Could you elaborate more on which stage of the authentication process you are getting the error ?Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | I cannot sign into MongoDB forum through Discourse iOS app | 2020-02-23T11:59:11.065Z | I cannot sign into MongoDB forum through Discourse iOS app | 3,936 |
|
null | [
"sharding"
] | [
{
"code": "",
"text": "How to terminate a query ran from mongos on a 3 shards sharded cluster, without terminating from individual nodes.\nVersion: 4.0.14",
"username": "Dheeraj_Gunda"
},
{
"code": "",
"text": "You can use the killOp command.Starting in MongoDB 4.0, the killOp command can be run on a mongos and can kill queries (i.e. read operations) that span shards in a cluster.",
"username": "Prasad_Saya"
}
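A minimal sketch of how that typically looks from a mongos shell (the filter values are only examples):

    // list operations that have been running for a while
    db.currentOp({ active: true, secs_running: { $gt: 60 } })

    // kill a specific operation using the opid reported by currentOp
    db.killOp(opid)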
] | How to terminate query started from mongos? | 2020-03-01T22:05:50.908Z | How to terminate query started from mongos? | 1,994 |
null | [
"c-driver"
] | [
{
"code": "",
"text": "Hey, im a newbie and im want to use scrot to make screenshots and than add them to my mongodb. I’ve read the guides on GridFS and C Drivers but im still kinda lost in what to add to the scrot script so it can send them to mongodb.",
"username": "Vasil_N_A"
},
{
"code": "mongoc_gridfs_bucket_t",
"text": "Hi @Vasil_N_A!I’m not too familiar with scrot, but if you want to use it to capture screenshots, and store the screenshot contents in a MongoDB collection via the C driver, perhaps start with an application that takes a filename as an argument and uses the C driver’s mongoc_gridfs_bucket_t to save it? There is an example of that GridFS code here: mongoc_gridfs_bucket_t — libmongoc 1.23.2Best,\nKevin",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "Thanks for the answer it was somewhat useful, but i was wondering if you can save all existing photos from a folder to mongodb?",
"username": "Vasil_N_A"
}
] | GridFS image insert with C driver | 2020-02-24T19:41:18.828Z | GridFS image insert with C driver | 2,067 |
null | [
"security"
] | [
{
"code": "",
"text": "when i try to generate ssl keyfile for replSet authentication after specifying the path of the keyfile\na message that goes like “Is a directory” shows up with the path I’ve already specifiedhow to fix this ?",
"username": "Karrar_Mohammed"
},
{
"code": "",
"text": "The path you have given for the keyfile is a directory.\nRename the directory or rename the keyfile.",
"username": "chris"
}
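For reference, a common way to generate a keyfile (the path is just an example); the keyFile option must then point at this file, not at a directory:

    openssl rand -base64 756 > /etc/mongodb/keyfile
    chmod 400 /etc/mongodb/keyfile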
] | "Is a directory" message when specifying keyfile path | 2020-03-01T06:42:17.647Z | “Is a directory” message when specifying keyfile path | 1,446 |
null | [] | [
{
"code": "",
"text": "How can I find the replicaset of my cluster? kindly help",
"username": "Giriraj_AR"
},
{
"code": "db.isMaster().setName",
"text": "Using the mongo cli:\ndb.isMaster().setName\nThis does not require authenticaton.",
"username": "chris"
}
] | How to find the replicaset? | 2020-03-01T06:45:54.968Z | How to find the replicaset? | 1,441 |
null | [
"swift",
"release-candidate"
] | [
{
"code": "MongoSwiftMongoSwiftSyncNIOThreadPoolthreadPoolSizeClientOptionsExamples/MongoClientPackage.swiftMongoSwiftSyncimport MongoSwiftimport MongoSwiftSynccSettingsPackage.swiftswift package generate-xcodeproj.xcodeprojmake project.xcodeprojErrorstructenumstructResultMongoCursorChangeStreamMongoSwiftSyncResult<T>?next()T?ResultResulterrorfor result in cursor {\n switch result {\n case let .success(doc):\n // do something with doc\n case let .failure(error):\n // handle error \n }\n}\ngetResultfor result in cursor {\n let doc = try result.get()\n // do something with doc\n}\nerrornext()ChangeStreamMongoCursornext()nilnext()killtryNext()MongoCursornext()tryNext()MongoCursorChangeStreamSequenceforSequencenext()LazySequenceProtocolMongoCursorChangeStreamLazySequenceProtocolSequencemapfilterSequencemap// 1. Create a cursor\nlet cursor = try myCollection.find()\n\n// 2. Add a call to `map` that transforms each result in the cursor by adding a new key\nlet transformed = cursor.map { result in\n // try to get the result, and if we succeed add a key \"a\" to it. if we fail, return\n // a failed result containing the error\n Result { () throws -> Document in\n var doc = try result.get()\n doc[\"a\"] = 1\n return doc\n }\n}\n\n// 3. Iterate the transformed cursor\nfor result in transformed {\n // ...\n}\nLazySequenceProtocolmapfilterSequenceMongoClientNotificationCenterClientOptionsNotificationCenterNotificationCenterCommandEventHandlerSDAMEventHandlerMongoClient.addCommandEventHandlerMongoClient.addSDAMEventHandlerNotificationCenterclient.addCommandEventHandler { event in\n print(event)\n // be sure not to strongly capture client in here!\n}\nstructenumSDAMEventCommandEventenumstructfindOnetoArrayforEachResult<T>MongoCursorResult<T>ChangeStreamautoIndexId",
"text": "We are very excited to announce the first release candidate for our upcoming 1.0.0 release.This release contains a number of major changes to the driver, as detailed in the following sections.The driver now contains both asynchronous and synchronous APIs for working with MongoDB from Swift.\nThese APIs are contained in two modules, named MongoSwift (async) and MongoSwiftSync (sync). Depending on which API you would like to use, you can depend on either one of those modules.The asynchronous API is implemented by running all blocking code off the calling thread in a SwiftNIO NIOThreadPool. The size of this thread pool is configurable via the threadPoolSize property on ClientOptions.Vapor developers: please note that since we depend on SwiftNIO 2, as of this reelase the driver will not be compatible with Vapor versions < 4, as Vapor 3 depends on SwiftNIO 1.0.All of the web framework examples in the Examples/ directory of this repository have now been updated to use the asynchronous API.The synchronous API has been reimplemented as a wrapper of the asynchronous API. You may also configure the size of the thread pool when constructing a synchronous MongoClient as well.If you are upgrading from a previous version of the driver and would like to continue using the synchronous API, you should update your Package.swift to make your target depend on MongoSwiftSync, and replace every occurrence of import MongoSwift with import MongoSwiftSync.Previously, the driver would link to a system installation of the MongoDB C driver, libmongoc. We have now vendored the source of libmongoc into the driver, and it is built using SwiftPM.libmongoc does link to some system libraries itself for e.g. SSL suport, so depending on your operating system and system configuration you may still need to install some libraries. Please see the updated installation instructions for more details.Note: Unfortunately, due to an issue with the Xcode SwiftPM integration where Xcode ignores cSettings (necessary for building libmongoc), as of Xcode 11.3 the driver currently cannot be added to your project as a dependency in that matter. Please see #387 and SR-12009 for more information. In the meantime, you can work around this by:Alternatively, as described in #387 you can clone the driver, run make project from its root directory to generate a corresponding .xcodeproj, and add that to an Xcode workspace.Like many Swift libraries, the driver previously used enums to represent a number of different error types. However, over time we realized that enums were a poor fit for modeling MongoDB errors.\nAnytime we wished to add an additional associated value to one of the error cases in an enum, or to add a new case to one of the enums, it would be a breaking change.\nOver time the MongoDB server has added more and more information to the errors it returns, and has added various new categories of errors. Enums made it difficult for our errors to evolve gracefully along with the server.Now, each type of error that was previously an enum case is represented as a struct, and similar errors are grouped together by protocols rather than by being cases in the same enum.Please see the updated error handling guide for more information on the types of errors and best practices for working with them.The synchronous variants of MongoCursor and ChangeStream (defined in MongoSwiftSync) now return a Result<T>? 
from their next() methods rather than a T?.\nYou can read more about the Swift Standard Library’s Result type here.\nThis change enables to propagate errors encountered while iterating, for example a network error, via a failed Result. Previously, users had to inspect the error property of a cursor/change stream, which was unintuitive and easy to forget.Iterating over a cursor would now look like this:Alternatively, you may use the get method on Result:Since errors are now propagated in this way, the error property has been removed from both types and inspecting it is no longer necessary.This change only affects ChangeStreams and tailable MongoCursors. (See: change streams, tailable cursors.) (By default, cursors are not tailable.)\nThese types will stay alive even after their initial results have been exhausted, and will continue to receive new matching documents (or events in the case of change streams) if and when they become available.In the past, next() would simply return nil immediately if a next result was not available. This would require a user who wants to wait for the next result to continuously loop and check for a non-nil result.\nNow, next() will internally poll until a new result is obtained or the cursor is killed (you can trigger this yourself by calling kill).If you wish to use the old behavior where the method would not continuously poll and look for more results, you can use the newly introduced tryNext() which preserves that behavior.For non-tailable MongoCursors, the cursor is automatically killed as soon as all currently available results are retrieved, so next() will behave exactly the same as tryNext().Note that a consequence of this change is that working with a tailable MongoCursor or a ChangeStream via Sequence methods or a for loop can block while waiting for new results, since many Sequence methods are implemented via next().MongoCursor and ChangeStream now conform to LazySequenceProtocol, which inherits from Sequence (which these types conformed to previously).This allows the standard library to defer applying operations such as map and filter until the elements of the resulting Sequence are actually accessed. This is beneficial for cursors and change streams as you can transform their elements without having to load the entire result set into memory at once. For example, consider the following snippet. The map call will be lazily applied as each element is read from the cursor in step 3:Note: If you wish to take advantage of LazySequenceProtocol, you cannot throw from the closure passed to map / filter / etc. Those variants only exist on Sequence, and calling them will result in the sequence being eagerly loaded into memory before the closure is applied.Prior to this release, MongoClient posted all monitoring events to a NotificationCenter, either one provided to it via ClientOptions or the application’s default center. This was overly restrictive, as it required you to interface with NotificationCenter in order to receive monitoring events, even if NotificationCenter wasn’t used anywhere else in your application.Starting in this release, you can attach your own handler types that conform to the new CommandEventHandler and SDAMEventHandler protocols to via MongoClient.addCommandEventHandler and MongoClient.addSDAMEventHandler respectively. The appropriate monitoring events will then be passed to them directly via the protocol requirement methods. 
From there, you can do whatever processing of the events you want, including, but not limited to, posting to a NotificationCenter.Also, there are ergonomic overloads for both of the handler adding methods that take in callbacks if you don’t want to define your own handler type:Prior to this release, all monitoring events were defined as their own structs, and extracting the right event type required lots of downcasting. Starting in this release, common event types are grouped into enums, namely into the SDAMEvent and CommandEvent enums, whose cases’ associated values are the existing event structs. This models events in a way that makes better use of the Swift type system by removing the need for downcasting, allowing like events to be grouped together, and enabling relevant event types to be switched over exhaustively.",
"username": "kmahar"
},
{
"code": "",
"text": "This is great as a SSS fan! ",
"username": "Jonny"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Swift Driver 1.0.0-rc0 released | 2020-02-27T16:36:43.851Z | MongoDB Swift Driver 1.0.0-rc0 released | 2,748 |
null | [
"sharding"
] | [
{
"code": "",
"text": "Thinking to manage clustering of mongodb thinking use shards.I am confuse if we use _id as shard key is it a best practice can we later change shard key?",
"username": "Coder_codaira"
},
{
"code": "_id_id",
"text": "can we later change shard key?Also from my learning a Mongo University the _id field is not a good selection for the shard key as it is based off time which is a monotonically increasing value. You could make a compound shard key with the _id field though.",
"username": "Natac13"
},
{
"code": "_id_id",
"text": "Thinking to manage clustering of mongodb thinking use shards.Sharding is about horizontal scaling; i.e., distributing a collection’s data across shards in a cluster. There are two types of sharding: Ranged and Hashed sharding.I am confuse if we use _id as shard key is it a best practice can we later change shard key?Yes, you can use _id as shard key or part of the shard key. Before that, you have to find if it makes a good shard key for your application. A good shard key will allow the data to be distributed evenly among the shards. And, the important queries that access the data must use the shard key as part of the query filter criteria. Note that without the shard key as part of the query criteria the queries will be very slow and inefficient.The shard key field is used to shard a collection. Once a collection is sharded you cannot change the shard key field (e.g., if a collection is sharded using the “product_id” field, later you cannot change the shard key to “product_name”).Also, see documentaion about Choosing a Shard Key and Monotonically Changing Shard Keys.Note, the _id field is of the category of “Monotonically Changing Shard Keys”.",
"username": "Prasad_Saya"
},
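A small illustration of the targeted-vs-scatter-gather point above, using made-up database, collection, and field names:

    sh.enableSharding("shop")
    sh.shardCollection("shop.orders", { customerId: 1 })

    // targeted: the filter includes the shard key, so only the owning shard(s) are queried
    db.orders.find({ customerId: 12345, status: "open" })

    // scatter-gather: no shard key in the filter, so every shard is queried
    db.orders.find({ status: "open" })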
{
"code": "_id",
"text": "Also from my learning a Mongo University the _id field is not a good selection for the shard key as it is based off time which is a monotonically increasing valueThe _id can be an absolutely fine shard key as it is not required to be an objectID. It can be any BSON except an array. https://docs.mongodb.com/manual/core/document/#the-id-field",
"username": "chris"
},
{
"code": "_id_id",
"text": "The default _id is monotonically increasing based off time, as @Prasad_Saya stated. So if you use a hashed shard key with _id then it is a good selection, but not as the default value.",
"username": "Natac13"
},
{
"code": "",
"text": "Thank you for your replies actually supporting lagacy code and queries to mongo are not optimized I want to horizontally scale the server what are the options?",
"username": "Coder_codaira"
},
{
"code": "",
"text": "supporting lagacy code and queries to mongo are not optimizedApplications with not optimized queries are hard on the users (and their work). This should be a definite and immediate concern. MongoDB has tools and techniques to optimize the existing queries and make them perform better.I want to horizontally scale the serverI think having little more information about the data and code (the application) will help to suggest and discuss. The amount of data and the kind of application are important factors. Sharding is recommended for certain amounts of capacities where vertical scaling becomes expensive.Sharding also means determining the shard key, analyzing and building the queries, and of course the new hardware configuration. The hardware and the software to put together a sharded cluster is more complex. So is its building and maintainance. And then the budget.These are just some initial thoughts.Also see:",
"username": "Prasad_Saya"
},
{
"code": "_id",
"text": "you can shard a collection on _id using hashed sharding. In my honest opinion, that is the easiest way to shard a collection and it will distribute your data across shards and get you additional performance and storage. However, as @Prasad_Saya stated, you need to build the sharded cluster which involves additional hardware and configuration.Keep in my mind the below two caveats with sharding:-More details about shard keys can be found here.",
"username": "errythroidd"
},
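For illustration, hashed sharding on _id as suggested above looks like this in the mongo shell (database and collection names are examples):

    sh.enableSharding("mydb")
    sh.shardCollection("mydb.orders", { _id: "hashed" })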
{
"code": "_id",
"text": "you can shard a collection on _id using hashed sharding. In my honest opinion, that is the easiest way to shard a collection and it will distribute your data across shards and get you additional performance and storage.Not necessarily. If majority of your queries are not by _id then you will run into a scaling problem because all of those queries will be “scatter gather” - meaning they will be sent to every shard.Now your system is always as slow as the slowest of all the shards…",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Scatter gather queries are not necessarily always bad. I guess it really depends on the data sets, but in some cases, if you shard the collection then the data set is smaller on each shard and easily fits in the memory making the queries faster despite being scatter gather. The goal here should be to fit the working data set in memory.Below are my assumptions behind my theory:-\na. All the shards in the cluster are identical (equally sized with similar IOPS and Memory).\nb. the data is more or less evenly distributed across shards. (in most cases, hashed sharding does give even distribution, although there are cases when it does not happen).\nc. unable to choose a shard key which satisfies all the queries .",
"username": "errythroidd"
},
{
"code": "",
"text": "Welcome aboard @errythroidd :",
"username": "Amalik_Amriou"
},
{
"code": "",
"text": "Scatter gather queries are not necessarily always bad. I guess it really depends on the data sets, but in some cases, if you shard the collection then the data set is smaller on each shard and easily fits in the memory making the queries faster despite being scatter gather. The goal here should be to fit the working data set in memory.I’m afraid that’s not really correct. It seems like having less data on each shard will help, but think about linear scaling - if you double the number of shards, you want things to get twice as fast. And while the amount of data on each shard will be half what it was before, a scatter gather query will actually generate double the number of queries you had before.The “magic” of sharding as a method for horizontal scaling lies in targeted queries, not scatter-gather queries.Asya",
"username": "Asya_Kamsky"
}
] | Choosing a shard key | 2020-02-27T10:56:39.747Z | Choosing a shard key | 2,422 |
null | [
"charts"
] | [
{
"code": "",
"text": "I installed MongoDB Charts on an EC2 instance for development environment, and I created some charts; some trial-errors and some good ones… then I embedded the good ones into our web app.This looks great and promising!Now I need to create a demo environment, what is the best way to export charts and import into new environment?p.s. I do not want to export/import the development database, since there so so many garbage data in it.Any feedback is highly appreciated, thank you.",
"username": "coderkid"
},
{
"code": "",
"text": "Hi KayGlad you’re enjoying Charts! A feature to export and import individual dashboards is on our roadmap for later in the year. In the meantime I’m afraid the only options are to copy your entire metadata database, or recreate the charts you want from scratch. If you choose to do the former, make sure to copy all databases in the cluster, as well as the mongodb-charts_keys Docker volume.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What is the best way to export/import chart metadata | 2020-02-28T16:44:25.502Z | What is the best way to export/import chart metadata | 2,177 |
null | [
"golang"
] | [
{
"code": "",
"text": "How to handle if document structure after production changes.Suppose I had 500 documents like this:\n{\nname : ‘n1’\nheight : ‘h1’\n}Later if I decide to add all the documents in below format:\n{\nname : ‘n501’\nheight : ‘h501’\nweight : ‘w501’\n}I am using cursor.All(&userDetails) to decode(deserialize) in GoLang to get the output of the query in struct userDetails. If I modify the structure of further documents and userDetails accordingly, it will fail for the first 500 documents?How to handle this change?",
"username": "Sahil_Chimnani"
},
{
"code": "WeightWeightWeight != \"\"*stringWeight != nil",
"text": "Hi Sahil,The driver will only decode the fields that exist in the document and the struct, so you should be able to add a Weight field to your struct and everything should work. For the first 500 documents, the driver will leave the Weight field empty. You can use a regular string and check for Weight != \"\" or a *string and check for Weight != nil to determine if the field was empty depending on your use case.",
"username": "Divjot_Arora"
}
] | Managing schema changes with MongoDB | 2020-02-28T12:45:21.251Z | Managing schema changes with MongoDB | 1,932 |
null | [] | [
{
"code": "",
"text": "the new version of compass has (as it seems) a new Interface: everything in one line. Exzellent. So I typedmongodb://m001-student:[email protected]:27017and got (automatism, Wonderful automatism)mongodb://m001-student:[email protected]:27017/admin?authSource=admin&readPreference=primary&appname=MongoDB%20Compass&ssl=trueAnd this leads to:An error occurred while loading navigation: ‘not master and slaveOk=false’: It is recommended to change your read preference in the connection dialog to Primary Preferred or Secondary Preferred or provide a replica set name for a full topology connection.And now I’m stuck.\nWhat is the universes master-key?",
"username": "Otto"
},
{
"code": "",
"text": "Include the other two nodes in the connection string separated by commas. Also look into including the name of the replica set.",
"username": "007_jb"
},
{
"code": "",
"text": "I’m in lesson:\nCloud: MongoDB Cloud. there an no other notes (I#ve never ever heart the word “node”)only:\n4. Use the following information to complete this form, but do not click “Connect” yet. Hostname: cluster0-shard-00-00-jxeqq.mongodb.net Username: m001-student Password: m001-mongodb-basics Replica Set Name: Cluster0-shard-0 Read Preference: Primary Preferred\n5. Click “Add to Favorites” and enter M001 RS as the Favorite Name . Adding this connection as a favorite will enable you to easily reconnect to our class MongoDB deployment after closing and restarting Compass.\n6. Now, click “Connect” and load the databases in the M001 class MongoDB deployment.",
"username": "Otto"
},
{
"code": "mongodb://m001-student:[email protected]:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?authSource=admin&replicaSet=Cluster0-shard-0%2Ftest&readPreference=primary&appname=MongoDB%20Compass&ssl=true",
"text": "The lectures are a bit behind with the new version. Here, try this one:mongodb://m001-student:[email protected]:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?authSource=admin&replicaSet=Cluster0-shard-0%2Ftest&readPreference=primary&appname=MongoDB%20Compass&ssl=truePaste it here:\n\nimage1014×299 16.9 KB\n",
"username": "007_jb"
},
{
"code": "",
"text": "If you are not overwhelmed about connection strings I would like to encourage you to read more about it and in particular about the new SRV style. You can find some information at https://docs.mongodb.com/manual/reference/connection-string/ even if it is outside the scope of this course.In the case of the shared cluster for M001 the SRV URI is simply mongodb+srv://m001-student:[email protected]",
"username": "steevej"
},
{
"code": "",
"text": "worked like a charm (or cat or Schoko-Eiscreme)\nthank you.",
"username": "Otto"
},
{
"code": "",
"text": "Closing this thread as the issue has been resolved.",
"username": "Shubham_Ranjan"
}
] | Connect: 'not master and slaveOk=false': | 2020-02-27T19:50:29.901Z | Connect: ‘not master and slaveOk=false’: | 4,649 |
null | [
"replication",
"performance"
] | [
{
"code": "",
"text": "just trialing out the forum…a very welcomed area…\nover at google.group - Stennie has seen - there is a user stating their primary’s RAM maxes out when their secondaries are down.I was wondering: if the write concern exists with a value that requires that secondary…but without any timeout value - - so now they hang… how does something like that affect RAM consumption?..if at all…",
"username": "James_Bailie"
},
{
"code": "",
"text": "The wait is indefinite in that case, but I wouldn’t expect it to max out CPU…",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "maxing out RAM is what is said…",
"username": "James_Bailie"
},
{
"code": "",
"text": "Ah, yes, sorry. Generally if there are any queries that normally go to secondaries (secondaryPreferred read preference) that now go to primary they are likely using the extra RAM.",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "noted - I copy/pasted your comment over to google.groups…on the write side: if the secondary being down does not allow the write concern verification AND there is no timeout parameter - - does that create an unsatisfied ‘pending verification’ hang in RAM too? … such that it also would contribute to the burden? …or does it just go away perhaps…",
"username": "James_Bailie"
},
{
"code": "",
"text": "I think the answer depends on a few things - obviously more connections hanging around will contribute to RAM pressure somewhat, since each connection uses some amount of RAM on the OS level, but things also depend on whether these are synchronous or asynchronous requests…I would recommend that if you’re using any write concern that involves other nodes (and therefore may not be satisfiable in some cases) use a non-default wtimeout to allow the application decide what to do in these cases, rather than hanging for long periods of time.",
"username": "Asya_Kamsky"
},
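A small example of the non-default wtimeout suggestion (values are illustrative): instead of hanging indefinitely, the write fails with a write concern error after 5 seconds so the application can decide what to do.

    db.orders.insertOne(
      { item: "abc", qty: 1 },
      { writeConcern: { w: "majority", wtimeout: 5000 } }
    )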
{
"code": "",
"text": "could read concern majority be a cause here. When running replicaSet in PSA architecture and if the secondary is down, there is going to be considerable amount of cache pressure on Primary and eventually it overflows to the wiredTigerLAS.wt file. I would try setting enableReadConcernMajority: false and see if the issue can be reproduced.",
"username": "errythroidd"
}
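For reference, on the MongoDB versions discussed here (pre-5.0) that setting can be disabled in the mongod configuration file, for example:

    replication:
      replSetName: rs0                     # example replica set name
      enableMajorityReadConcern: false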
] | Primary RAM maxes when secondary down | 2020-02-06T15:12:17.822Z | Primary RAM maxes when secondary down | 2,045 |
null | [] | [
{
"code": "",
"text": "I have specified a directory for my databases, but i cannot see the files. I have created 2 databases, “test” and “eieio”. I can see them at the command prompt using “db”, and i can do CRUD operations on them. But, when i look at my folder in windows explorer, they are not there, neither can i search for them in Windows taskbar search. Can someone tell me where to look and what to look for?Thanks!!",
"username": "RMS"
},
{
"code": "",
"text": "They are in the directory specified by your dbpath. However the name of the files do not reflect the name of the collections.",
"username": "steevej"
},
{
"code": "use local\ndb.startup_log.find().sort({startTime:-1}).limit(1).pretty()\ndb.startup_log.find({},{cmdLine:1}).sort({startTime:-1}).limit(1).pretty()\n{\n\t\"_id\" : \"5ca6d0cdf984-1582162763917\",\n\t\"cmdLine\" : {\n\t\t\"net\" : {\n\t\t\t\"bindIpAll\" : true\n\t\t},\n\t\t\"replication\" : {\n\t\t\t\"replSet\" : \"s0\"\n\t\t},\n\t\t\"security\" : {\n\t\t\t\"authorization\" : \"enabled\",\n\t\t\t\"keyFile\" : \"/keyfile\"\n\t\t}\n\t}\n}\n",
"text": "just in case it was not set to what you thought you did you can look in the local.startup_log collectionOr slightly modified for brevity:",
"username": "chris"
},
{
"code": "mongod",
"text": "mongod has an option to create separate directories for each database. For your existing deployment, you can create separate directories for each database using the option –directoryperdb (and following instructions in the documentation).",
"username": "Prasad_Saya"
},
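For reference, the option can be given on the command line or in the config file (paths are examples); note that an existing deployment has to be dumped and restored (or resynced) into the new layout rather than just restarted with the flag:

    mongod --dbpath /data/db --directoryperdb

    # or in the YAML config file
    storage:
      dbPath: /data/db
      directoryPerDB: true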
{
"code": "",
"text": "This is my db folder. I searched documentation for .wt, but tons of info comes up. Is there some page that gives brief overview of these various files and folders that gives some explanation of structure and use?Thanks!",
"username": "RMS"
},
{
"code": "",
"text": "That is your database files.What exactly are you trying to achieve?",
"username": "chris"
},
{
"code": "",
"text": "would just like to understand what i am seeing. i have done a lot of MS Access work over the years and i can open tables, see the text in queries, etc. I was just wondering if these various files can be opened with a text editor, which file contains my data, and so forth.Thinking about backup too. Would i just backup everything in the folder and call it a day? Just trying to understand what i am seeing.Thanks!",
"username": "RMS"
},
{
"code": "mongo",
"text": "If you are wanting to ad-hoc navigate and query your database you’ll want to use a GUI client like MongoDB Compass or the mongo command line interface.Your instinct on backup is essentially accurate, there are a few other ways and considerations that make good reading https://docs.mongodb.com/manual/core/backups/",
"username": "chris"
},
{
"code": "",
"text": "That makes sense. I tried Compass, it works great. I guess that i was thinking that MongoDB stored data in a text file full of json. Really, i am just trying to get familiar with the workings of Mongo.Thanks!",
"username": "RMS"
},
{
"code": "",
"text": "I guess that i was thinking that MongoDB stored data in a text file full of json.MongoDB stores data and indexes on disk in a compressed binary format. Individual documents are represented in BSON, a binary JSON-like serialisation format that supports additional data types such as dates, 32-bit integers, 64-bit integers, 128-bit decimals, ObjectIDs, and binary data.Data is written to disk using block compression for documents and prefix compression for indexes. Compression defaults are configurable at a global level and can also be set on a per-collection and per-index basis during collection and index creation.For further reading that might provide helpful background info, see:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Where is my data stored? | 2020-02-19T21:39:35.814Z | Where is my data stored? | 27,350 |
null | [
"queries",
"sharding"
] | [
{
"code": "",
"text": "i have to find data by filtering specific dates range in a sharded database having total of 800 million document.what is the structure or the way to optimize query to get result in such case",
"username": "Priyanka_Saxena"
},
{
"code": "",
"text": "Hi @Priyanka_SaxenaFrom what I remember from Mongo University you would want to include your shard key in the query so that the Mongos can route the query to only the replica sets which store that portion of the data. However you have mentioned querying based off a date range. Again from what I remember any Monotonically Changing value is not good for a shard key as it will not provide an even distribution of data.\nI will leave some links to the documents to help you out as well since my reply does not give an solution to your problem.\nWhich includes hash and range sharding",
"username": "Natac13"
},
{
"code": "",
"text": "To query a sharded collection efficiently the query filter criteria must include the shard key. Without the shard key usage, the query will be scatter-gather operation, i.e., all the shards in the collection will be accessed to find the data. The query will be very slow.To efficiently get the data, the query need to be a targeted operation, and the filter uses the shard key.Is the specified date field part of the shard key? If not, is the shard key part of the query filter?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "I have not used shard key yet and answering your question:-\nno date field is not part of shard key and not even used in query filter yet.I am using below intermediate querycursordata=event.aggregate([{\"$match\":{“name”:“values”}},{\"$unwind\":“detailArray”},{\"$project\":{“detailArray.date”:1,“detailArray,msg”:1}},{$group:{_id:null,count:{$sum:1}}}])i am not able to get event count of unwinded documents",
"username": "Priyanka_Saxena"
},
{
"code": "${ \"$unwind\": “$detailArray” }",
"text": "i am not able to get event count of unwinded documentsThe $unwind usage requires you prefix the $ to the field name like this in your aggregation query:\n{ \"$unwind\": “$detailArray” }",
"username": "Prasad_Saya"
}
] | How to query on a large sharded database of 800 million documents | 2020-02-27T10:56:15.763Z | How to query on a large sharded database of 800 million documents | 4,736 |
[
"queries"
] | [
{
"code": "",
"text": "Is it something we have to concern in version as of 4.0?Read Isolation, Consistency, and Recency — MongoDB ManualMongoDB cursors can return the same document more than once in some situations. As a cursor returns documents other operations may interleave with the query. If some of these operations change the indexed field on the index used by the query; then the cursor will return the same document more than once.",
"username": "felipe_dos_santos"
},
{
"code": "",
"text": "The docs you are quoting are for the current version (4.2) so yes, there is no guarantee of a single point in time view when you iterate over a cursor unless you are in a transaction or otherwise force a stronger readConcern.",
"username": "Asya_Kamsky"
}
] | Can cursors return the same document more than once in MongoDB 4.0? | 2020-02-27T21:40:52.414Z | Can cursors return the same document more than once in MongoDB 4.0? | 2,526 |
|
null | [] | [
{
"code": "",
"text": "We are currently using Enterprise mongodb replica set and want to downgrade to Community version. Are there any steps documented to perform the downgrade with no downtime on existing enterprise instance? Any precautions we need to take before performing the downgrade?",
"username": "Kiran_K"
},
{
"code": "",
"text": "If you are not using any Enterprise version options, you can switch to exact same version of Community as they are binary compatible. Just follow these directions but in reverse: https://docs.mongodb.com/manual/administration/upgrade-community-to-enterprise/",
"username": "Asya_Kamsky"
}
] | Enterprise to Community downgrade | 2020-02-26T19:09:16.174Z | Enterprise to Community downgrade | 2,441 |
null | [
"replication"
] | [
{
"code": "",
"text": "My first post here!How can MongoDB 4.0.12 be configured to SLOW its rate of re-connecting to its replica members when they are responding with “connection refused”?I had MongoDB replica members die. The online replica members get “connection refused” responses from the down members, then they re-attempt to connect as fast as the CPU and network allow, without any delay. This crushes the CPU and makes the log files grow huge, as Mongo retries so much.Is there a way to configure MongoDB to wait for some seconds or minutes before attempting to reconnect to a down replica member? I can’t find any such setting in the server & replica settings, and this seems like it would be an important setting to avoid crushing the CPU and network.Thank you!!",
"username": "Jon_Spewak"
},
{
"code": "",
"text": "Unless anyone has another idea, I will try rate-limiting outbound connections on the OS-level.",
"username": "Jon_Spewak"
},
{
"code": "",
"text": "Ran some tests by taking down a replica set member, rate-limiting outbound connections seems to work very well on the OS-level. CPU usage, log entries, and network traffic remain low when a replica member dies. Hope this helps someone who may experience the same issue I had.",
"username": "Jon_Spewak"
}
] | MongoDB Replica Set Reconnects to Down Secondaries Too Fast | 2020-02-13T21:57:41.969Z | MongoDB Replica Set Reconnects to Down Secondaries Too Fast | 1,684 |
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "I have been working with Kafka and Mongodb for a few months now. I am running into problems with the Kafka -> Mongodb sink connector, in that it cannot cope with the number of records I’m throwing at it. I’m processing around 100,000 records per second. I dont expect mongo to keep up with that but I’m only hitting about 1000-2000 records a second into mongo. I’m using upserting, using the primary key, any tips would be helpful. When we dump data straight in from SQL server we get much better through put, so its not server spec. I have increased batch size and max tasks, to no avail.Thanks in advance.",
"username": "derek_henderson"
},
{
"code": "",
"text": "Hi @derek_henderson,Are you doing any post processing of messages? Are you watching multiple topics with the connector?The connector only supports a single task, so changing max tasks won’t change the throughput. Did setting a batch size make any difference at all?Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Hi Ross,Thanks for the response, I am watching about 50 topics, but am not doing any post processing, simply pushing it straight into a collections. Batch size did not make any significant improvement.Derek",
"username": "derek_henderson"
}
] | MongoDB Kafka Sink Connector | 2020-02-20T12:02:58.797Z | MongoDB Kafka Sink Connector | 2,784 |
null | [
"compass",
"schema-validation"
] | [
{
"code": "",
"text": "I am aware that Compass will analyze the schema of a collection, but I am struggling to find a tool that will report that schema as a JSON file or preferably as a schema diagram. Is anyone aware of any tools that can achieve this?",
"username": "Tim_Busfield"
},
{
"code": "",
"text": "Here is a tool for that: Variety, a Schema Analyzer for MongoDB.",
"username": "Prasad_Saya"
},
{
"code": "mongodrdlProduce a schema.drdlmongosqldUpload.drdlDownloadDeleteNamemongodrdl sample --db <db-name> --collection <collection-name> --out <filename>.drdl",
"text": "Hello Tim,you can use one of the MongoDB tools: mongodrdl is a relational schema management tool for the MongoDB Connector for BI. The mongodrdl binary can:usage:\nmongodrdl sample --db <db-name> --collection <collection-name> --out <filename>.drdlThe output is not json, what you are asking for, but since the drdl file is almost pure xml you can use one of the many online tools to convert xml to json.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "mongodb-schema",
"text": "I am aware that Compass will analyze the schema of a collection, but I am struggling to find a tool that will report that schema as a JSON file or preferably as a schema diagram.@Tim_Busfield MongoDB Compass actually has a feature to “Share schema as JSON” but it is easily missed because you have to use a menu option at the moment:If you want to use the same schema analysis outside of Compass, it is open source (Apache 2.0 license) and usable as a Node.js library or mongodb-schema command line tool: https://www.npmjs.com/package/mongodb-schema.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks everyone for all of your suggestions, Stennie’s is the one that I used and I can only mark one as the “solution” but they were all helpful.",
"username": "Tim_Busfield"
}
] | Mongo Schema Diagram or Report | 2020-02-11T09:12:16.354Z | Mongo Schema Diagram or Report | 7,620 |
null | [] | [
{
"code": "MongoDB.Driver.MongoCommandException: Command 'mapreduce' failed: too much data for in memory map/reduce (response: { \"ok\" : 0.0, \"errmsg\" : \"too much data for in memory map/reduce\", \"code\" : 13604, \"codeName\" : \"Location13604\"\n",
"text": "I used MapReduce but during the execution, I got an error:What is the root cause of this problem, and is there any solution?Thank you",
"username": "T_Y"
},
{
"code": "{out:{inline:1}}aggregateaggregatemapReduce",
"text": "Hi there!I suspect you may be using the option {out:{inline:1}} which performs MR in memory and then attempts to return the result as a single document. I have some questions for you:If you provide the full command you are running, I can confirm the cause of the error message, and maybe help you use aggregate instead of mapReduce for this?Asya",
"username": "Asya_Kamsky"
}
] | Command ‘mapreduce’ failed: too much data for in memory map/reduce | 2020-02-23T20:14:26.617Z | Command ‘mapreduce’ failed: too much data for in memory map/reduce | 2,186 |
null | [
"compass"
] | [
{
"code": "",
"text": "I was able, using MongoDB Compass to connect to my local database and after the update, I can’t anymore.I am using a mongodb docker image (library/mongo:4.0.12) using the querystring mongodb://db:27017/admin to connect my app to the db. My app connects fine. The port 27017 is being exposed to the outside world from the docker image.Before the update, I was able, using MongoDB Compass, to create a new connection using the default values (localhost and port 27017). Now, after then update, I can still create a new connection, I can see the databases but the databases have no collections! Has anything changed in the update that would cause this issue?Thanks!",
"username": "Andre_Oliveira"
},
{
"code": "",
"text": "Can you connect on commandline using the mongo program, use a database and do “show collections” to see what collections are there? Compass should show what’s there…if it’s different from what you see in Compass, report back.",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "I found the solution!What I have on my local machine:By changing the port that the docker image was exposing the mongodb instance to, for example, 28000, and then using Compass to connect to it, it worked fine.I really don’t know how it was working previously (before the update). Maybe the docker’s instance was overriding the local mongodb instance and then after the update, that was reversed.Everything works fine now!Thanks!",
"username": "Andre_Oliveira"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Compass Update - Empty Databases | 2020-02-25T17:48:48.783Z | MongoDB Compass Update - Empty Databases | 3,595 |
null | [
"scala"
] | [
{
"code": "",
"text": "Hello,I’m writing a Scala app that will need to handle big spikes in load. I was quite surprised to see an exception that says the “Max number of threads (maxWaitQueueSize) of 500 has been exceeded”\n500! Why on earth would I need 500 threads?The documentation for Scala driver (which apparently uses the Java driver) says that it’s non-blocking and asynchronous. I’ve always thought that these two concepts are usually implemented by reusing a limited number of threads running on a thread pool of a particular size. Upon looking into the driver code I noticed that the mongo driver is creating a thread for each operation! How can that be efficient? Thread creation is after all, to the best of my knowledge, absurdly expensive.I anticipate that for sure there is a reason why is this is implemented this way, so I’d love if you could enlighten me on the following questions:Thanks,\nLukasz",
"username": "Lukasz_Myslinski"
},
{
"code": "MongoClient",
"text": "Hi @Lukasz_Myslinski,\nWelcome to the forum!Could you provide the version of mongo-scala-driver you are using ? Could you also provide a code snippet on how the MongoClient is initiated ?The MongoClient instance represents a pool of connections for a given MongoDB server deployment; you will only need one instance of class MongoClient even with multiple concurrently executing asynchronous operations.Regards,\nWan.",
"username": "wan"
},
{
"code": "class MongoClientFactory(val config: Configuration) extends DefaultBsonTransformers {\n\nprivate val mongoClient: MongoClient = MongoClient(config.dbUrl)\nprivate val database = mongoClient.getDatabase(config.dbName)\n\ndef getDatabase(codecProviders: Seq[CodecProvider]): MongoDatabase = {\n val codecRegistry = fromRegistries(fromProviders(codecProviders: _*), DEFAULT_CODEC_REGISTRY)\n database.withCodecRegistry(codecRegistry)\n }\n }\n",
"text": "Hi Wan,\nthanks for the reply. I’m using driver version 2.6.0. Also I’m quite sure I’m only using a single instance of MongoClient:In the meantime I think I’ve managed to anwer my own question, but I hope you could clarify I got this right. As I stated above, I was trying to perform a lot of concurrent write operations to the DB. Creating a thread for each operation and queueing it makes sense speed wise - if we can spawn and initialize a thread before a connection pool slot becomes available we save time on the potential context switch to reuse an existing thread. Am I reasoning this correctly?Regards,\nLukasz",
"username": "Lukasz_Myslinski"
},
{
"code": "insertMany()insert()",
"text": "Yes, the concept is to re-use threads in the connection pool. Please be aware that there is a limit for a thread to wait for a connection to become available. ConnectionPoolSettings: maxWaitTime.I’m writing a Scala app that will need to handle big spikes in load.You may have already done this, but if possible it’s best to batch operations together if they can be batched. For example, using insertMany() instead of multiple insert() or utilise Bulk Write Operations. This should reduce the number of operations waiting in queue.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "I get the connection pooling, and yes I am using bulk operations, but my question still stands - why is each read/write operation spawned as a separate thread? Can’t we just queue Runnables instead of Threads and supply them to existing connection pool threads from the queue? What’s the benefit here?",
"username": "Lukasz_Myslinski"
},
{
"code": "",
"text": "Hi @Lukasz_Myslinski,The Connection Pool is used as the resource of connections to send operations to MongoDB. Each operation does not necessarily spawn a new connection (thread) from the pool, but will do so if required up to the max connection pool size. If the connection pool size is at maximum and all connections are in use then pending operations are added to the wait queue. If the wait queue exceeds the configured size an error is thrown.This pool and wait queue behaviour is the same in the sync and async version of the underlying java driver. What is different is the sync driver blocks on waiting for the result of an operation, this generally makes it less likely that app code will exhaust the pool and wait queue (although certainly not impossible).With the async driver, it is far easier to write app code that does multiple concurrent database operations in a request. Unlike the sync version each will use a connection from the pool at the same time and that can exasperate the issue of connection exhaustion and need the wait queue.Please note in the next major release of the Scala driver the max wait queue size limitation is being removed but until then you can configure it via the waitQueueMultiple connection string setting.To understand fully what is going on in your scenario and to ensure nothing else is exasperating the issue I’d need to see some example code.I hope that helps,Ross",
"username": "Ross_Lawley"
}
] | Why does the Scala Driver spawn a thread for each operation? | 2020-02-11T21:04:34.082Z | Why does the Scala Driver spawn a thread for each operation? | 5,843 |
null | [] | [
{
"code": "",
"text": "I’m working on a project that is comparable with what Wireshark does. In my tests with MongoKitten, all is well. However, once I use the MongoDB CLI (on macOS), all commands work well except the insert command. It’s not only failing, it’s never sent to begin with.Is there any way to debug this?",
"username": "Joannis_Orlandos"
},
{
"code": "",
"text": "Hi Joannis!\nCan you post the result of the command? How do you know the command is not sent?",
"username": "DavidSol"
},
{
"code": "insertmongomongoversion()db.version()insertmongomongoOP_MSGmongoinsert",
"text": "I’m working on a project that is comparable with what Wireshark does.Can you elaborate on what you mean by this? Are you referring to analysing or decoding packets over the wire? WireShark passively captures/analyses network traffic so shouldn’t interfere with client compatibility.Given the description of an insert command not working, it sounds like you perhaps are developing a proxy which isn’t relaying commands/responses as expected. I would normally recommend a tool like WireShark to inspect the network communication and determine what is different between your working driver environment and the mongo shell ;-).once I use the MongoDB CLI (on macOS), all commands work well except the insert command.What specific version of the mongo shell and server are you using (as reported by version() and db.version()) and how are you detecting that the insert command is not sent? What is the result in the mongo shell?If you are developing something like a proxy, can you confirm how you are altering the wire protocol communication and what version(s) of the wire protocol you are supporting?The mongo shell relies on the wire protocol version for context on the connected server which may also alter the message sent (for example, older servers do not support the modern OP_MSG opcode). Modern versions of the mongo shell also convert inserts into bulk write commands, so you may not see an insert command if you are filtering network traffic to an expected opcode.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "0",
"text": "Hey. I know the command it not sent since there’s no TCP data leaving the client regarding insert. This is during the time the CLI is polling for a response.@Stennie_X I most definitely am interfering, but I found the issue yesterday. MongoDB’s C library expects the minimum wire version to be 0. My handshake had a minimum wire version of 2, 3 or 4 (have to read the source code for that).I am indeed building a proxy for MongoDB for a variety of reasons & use cases So far it also taught me about some details about using OP_MSG sequences chunks.",
"username": "Joannis_Orlandos"
}
] | Insert commands never send from the MongoDB CLI | 2020-02-23T20:13:48.749Z | Insert commands never send from the MongoDB CLI | 1,942 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "In MongodDB atlas there are a variety of options for picking a data type for a Number, however there is no generic Number option, unlike the ability to do so using mongoose to create the schema, so which would be best to use in MongoDB atlas?",
"username": "Quozzo_Crozzy"
},
{
"code": "number",
"text": "Hey @Quozzo_CrozzyHow big of a number are you thinking?I think mongoose stores the number as a Int-32 BSONIf you are looking at a large number then use 64-bit Int\nIf you are looking at financial data and need to be extremely precise then use the Decmial128 which is also a Mongoose type.Hope this helps",
"username": "Natac13"
},
{
"code": "",
"text": "As far as I know Mongoose does not natively support long and double data types. “number” is a “schema type”, it just for input validation…It is same on Atlas, these types for input validation, unless you pick Decimal128, mongodb stores everything same way generic number.",
"username": "coderkid"
},
{
"code": "DoubleNumbernumberInt32Decimal128",
"text": "In MongodDB atlas there are a variety of options for picking a data type for a Number, however there is no generic Number optionAssuming you are referring to Inserting documents using the Atlas UI, a Double is the equivalent BSON data type to choose for JavaScript’s generic Number type (a JavaScript Number uses the same 64-bit floating point representation as a BSON Double).The Atlas user interface uses BSON data types for field selection since they have more granular numeric representation than JavaScript including Int32, Int64, Double, and Decimal128 types.Typically you would choose the most appropriate data type for your use case. For example, if your values will fit in the range of 32-bit integers (Int32) these use 4 bytes of storage (plus BSON overhead) versus 8 bytes for a 64-bit value. For numeric values requiring precise floating point representation (like currency with fractional values), you would use Decimal128 (which uses 16 bytes).If you want to validate document inserts/updates using JSON Schema validation, you can either match specific numeric types or use the number type alias to match any numeric representation while still getting the advantages of appropriate storage size and precision.As far as I know Mongoose does not natively support long and double data types. “number” is a “schema type”, it just for input validation…Mongoose builds on the extended BSON data type support provided by the official MongoDB Node.js driver & BSON library. The driver provides object wrappers such as Int32 and Decimal128 so extended data types can be created and used from client code where possible. The MongoDB server will save values using the BSON types indicated by the driver.For one example of how this works in Node.js/Mongoose, see: A Node.js Perspective on MongoDB 3.4: Decimal Type.Regards,\nStennie",
"username": "Stennie_X"
}
] | What is the best type for a Number? | 2020-02-20T19:07:08.747Z | What is the best type for a Number? | 13,236 |
null | [
"replication"
] | [
{
"code": "",
"text": "I’ve noticed that by setting collection read preference to secondary read query response times have improved at least 10x. Shouldn’t this be default?",
"username": "Tyrel_Barstow"
},
{
"code": "",
"text": "Right setting of read preference is based on your use case. Reading from secondaries is faster, but can be tricky when some data is not replicated to secondaries - than eg. old version of document can be returned.See behavior section in docs - https://docs.mongodb.com/manual/core/read-preference/",
"username": "Lukas_Caldr"
}
] | Should Read Preference be Secondary by default? | 2020-02-20T19:07:15.815Z | Should Read Preference be Secondary by default? | 1,621 |
null | [] | [
{
"code": "",
"text": "Hi,\nIn the audit-message page I see that the JSON includes only remote ip and remote port.Is there any way to configure mongodb audit logs to include remote hostname and remote os as well?\nOr any other way to get these details?Thanks",
"username": "Ofer_Haim"
},
{
"code": "mongoapplication.name",
"text": "Is there any way to configure mongodb audit logs to include remote hostname and remote os as well?Remote O/S and hostname are not part of the current audit information and may be less deterministic than other audit details. Reliable identification of the remote O/S relies on information provided by the client, and hostname resolution from the MongoDB server’s point of view may differ from the source’s canonical hostname.any other way to get these details?Assuming your applications are connecting with modern drivers (updated for MongoDB 3.4+), you can find information about the remote O/S in the client metadata logged when a driver/client establishes a connection to MongoDB.For example, a connection from the mongo shell would be logged similar to:2020-02-27T10:03:37.510+1100 I NETWORK [conn1] received client metadata from 127.0.0.1:50319 conn2: { application: { name: “MongoDB Shell” }, driver: { name: “MongoDB Internal Client”, version: “4.2.3” }, os: { type: “Darwin”, name: “Mac OS X”, architecture: “x86_64”, version: “18.7.0” } }If modifying your application code is a possibility, you could include hostname and other details as part of the application.name in the client metadata.Since extracting information from logs probably isn’t ideal, I suggest you create a feature suggestion describing the desired auditing improvements on the MongoDB Feedback site. If you do submit a suggestion, please comment on this thread with a link so others can upvote & watch for updates.Regards,\nStennie",
"username": "Stennie_X"
}
] | Get remote hostname and operating system | 2020-02-26T19:09:19.818Z | Get remote hostname and operating system | 1,368 |
null | [
"node-js"
] | [
{
"code": "\n \n return;\n }\n \n \nif (self.state === DESTROYED || self.state === UNREFERENCED) {\n intervalId.stop();\n return;\n }\n \n \n// Filter out all called intervaliIds\n self.intervalIds = self.intervalIds.filter(function(intervalId) {\n return intervalId.isRunning();\n });\n \n \n// Initial sweep\n if (_process === Timeout) {\n if (\n self.state === CONNECTING &&\n ((self.s.replicaSetState.hasSecondary() &&\n self.s.options.secondaryOnlyConnectionAllowed) ||\n self.s.replicaSetState.hasPrimary())\n ) {\n \n \n \n return this;\n };\n \n \n this.stop = function() {\n clearTimeout(timer);\n timer = false;\n return this;\n };\n \n \n this.isRunning = function() {\n if (timer && timer._called) return false;\n return timer !== false;\n };\n }\n \n \nfunction diff(previous, current) {\n // Difference document\n var diff = {\n servers: []\n };\n \n \n\n nodejs:masterapapirovski:patch-priority-queue",
"text": "We where having some memory filling up and some base line CPU usage increase in our Node.js applications. The cause seems to be the MongoB driver. After some investigation we found out that the “mongodb-core”-package (being used in mongodb v3.0.x - v3.2.x) is leaking timers due to changes in timers for Node.js v12.The MongoDB Node.js driver versions 3.0.x till 3.2.x are reported to support Node.js v12.x.x here: https://docs.mongodb.com/ecosystem/drivers/driver-compatibility-reference/#reference-compatibility-language-nodeIs it possible to change the compatibility matrix or is the the bug going to be fixed? So people won’t run into the same issues while using older driver versions.Details:Timer list being filtered by “isRunning”:“isRunning” implementation using “_called”:Node.js pull request containing the removal of the “_called” variable (lib/timers.js line 282):This PR moves almost the full entirety of timers into JS land and hangs them all… off one TimerWrap. This simplifies a lot of code related to timer refing/unrefing and significantly improves performance.\n\n```\n confidence improvement accuracy (*) (**) (***)\ntimers/timers-breadth.js n=5000000 * 3.24 % ±2.84% ±3.78% ±4.92%\ntimers/timers-cancel-pooled.js n=5000000 *** -16.38 % ±2.06% ±2.74% ±3.57%\ntimers/timers-cancel-unpooled.js n=1000000 *** 47.81 % ±1.33% ±1.79% ±2.37%\ntimers/timers-depth.js n=1000 -0.24 % ±0.89% ±1.19% ±1.56%\ntimers/timers-insert-pooled.js n=5000000 *** 16.08 % ±7.09% ±9.49% ±12.47%\ntimers/timers-insert-unpooled.js n=1000000 *** 162.84 % ±4.82% ±6.49% ±8.59%\ntimers/timers-timeout-pooled.js n=10000000 *** 15.90 % ±1.59% ±2.11% ±2.76%\ntimers/timers-timeout-unpooled.js n=1000000 *** 948.32 % ±21.44% ±28.89% ±38.36%\n```\n\nFixes: https://github.com/nodejs/node/issues/16105\n\n<!--\nThank you for your pull request. Please provide a description above and review\nthe requirements below.\n\nBug fixes and new features should include tests and possibly benchmarks.\n\nContributors guide: https://github.com/nodejs/node/blob/master/CONTRIBUTING.md\n-->\n\n##### Checklist\n\n\n- [x] `make -j4 test` (UNIX), or `vcbuild test` (Windows) passes\n- [x] tests and/or benchmarks are included\n- [x] documentation is changed or added\n- [x] commit message follows [commit guidelines](https://github.com/nodejs/node/blob/master/doc/guides/contributing/pull-requests.md#commit-message-guidelines)",
"username": "Wilco_Waaijer"
},
{
"code": "{ useUnifiedTopology: true }MongoClientuseUnifiedTopology=true",
"text": "Hi @Wilco_Waaijer!\nThanks for bringing this to our attention. I’ve actually already been working on a fix for this on NODE-2460, and hope to have a fix out today.Regarding the compatibility matrix: we’re up to v3.5.4 of the driver now, so it’s unlikely that we will backport the fix three minor versions back. Generally we don’t backport changes within a major unless it is security related. In this case the recommendation is to upgrade to v3.5.x, where the fix will land shortly. I’ll get in touch with our docs team to correct the compatibility matrix.Additionally, you can avoid this issue today by upgrading to v3.5.4 and using the “unified topology” by passing { useUnifiedTopology: true } to your MongoClient constructor, or useUnifiedTopology=true to your connection string.",
"username": "mbroadst"
},
{
"code": "",
"text": "Perfect. I was not aware that there was an issue for that already. We are upgrading and depending on the release of the fix look into the unified topology option.It would indeed be nice if others are warned about the possible leak while running in Node.js v12 with the older versions. Is it okay if i comment on the issue with this thread or just the way to avoid the issue with the unified topology option? An extra way for people to find a solution if they’re searching for more info regarding the leak.Thank you for taking the time to provide some more information!",
"username": "Wilco_Waaijer"
},
{
"code": "",
"text": "The issue has been fixed in this commit. Feel free to link to this page on the ticket.We are looking for as much feedback as possible on the unified topology, since we’ll be completely removing the legacy topology types in the upcoming v4 major release. Please let us know if you decide to use it, and about your experience with it!",
"username": "mbroadst"
}
] | Node.js driver 3.2.x leaking timers running in Node.js v12 | 2020-02-25T17:48:53.213Z | Node.js driver 3.2.x leaking timers running in Node.js v12 | 3,154 |
null | [] | [
{
"code": "",
"text": "Hello everyone,My name is Juliette. I am a developer and musician originally from Chicago. I am currently a high desert dweller in Southern California. I became interested in MongoDB when I began to work with Node.js/Fullstack applications after having completed the Front End Tech Degree program @ Treehouse. I have been a fan and student of MongoDB/Mongo University for a few years now. My journey with MongoDB and Mongo U has taken me to some interesting and unexpected places such as learning about working with virtual environments (like Vagrant, and now Docker and Kubernetes) and Linux based commands. I’m enjoying this new path that I have embarked upon and I am super grateful to everyone at MongoDB that has made it possible for me to continue on this path of learning.Thanks for having me here:-)Juliette",
"username": "Juliette_Tworsey"
},
{
"code": "",
"text": "Welcome, Juliette! We’re glad you’re here. ",
"username": "Jamie"
},
{
"code": "",
"text": "Thank you Jamie. I am glad to be here:-)",
"username": "Juliette_Tworsey"
}
] | Hello from the Mojave Desert! | 2020-02-24T22:42:48.874Z | Hello from the Mojave Desert! | 2,036 |
null | [
"php"
] | [
{
"code": " foreach($dataset as $key=>$data){\n $filter['id'] = $data['id'];\n $options = array('upsert'=>true);\n $mongo->collection->replaceOne($fiter,$data,$options);\n }\n$array_data = getData(); // here we collect updated data docs\n$array_data_ids = getIds(); // here we collect data_ids for filter\n$filter['id'] = array('$in'=>$array_data_ids);\n$mongo->collection->deleteMany($filter); // delete old docs with ids of new data\n$mongo->collection->insertMany($array_data); // insert new data\n",
"text": "Hello everyone. I have a set of data (around 100 docs every one sec).\nI use php Driver and Mongo 3.2\nSo, when I need update data I was using:But when I have around 200 docs per second, I see, my Mongo is loaded a lot. So I changed in:And I see, that this method works better than previous and not overload the mongo.\nBut sometimes I get errors for Duplicate key error. Because the same collection uses other one process for same data.The question is:\nIf there is any way to update docs without delete and insert or update/replace one?",
"username": "1114"
},
{
"code": "updateMany",
"text": "Hey @1114The question is:\nIf there is any way to update docs without delete and insert or update/replace one?It seems that you are looking for updateMany ← php driver\nmongo shell updateMany",
"username": "Natac13"
},
{
"code": "",
"text": "Maybe I don’t understand clear how can I use it.\nBut as I know, updateMany > Updates all documents that match the specified filter for a collection.\nAnd there is no specified filter for a collection.\nWe speak for every one document in collection.\nI looking for replaceMany function with upsert:true. So when I have the same unique key in collection, function should update data. When it doesn’t have -> just simple insertOne execute.\nI think, something like this.\nAnybody know something like this?",
"username": "1114"
},
{
"code": "updateOne_idupsert",
"text": "Then maybe do a bulk write with an array of updateOne calls.\nThat way you can have the _id as the query conditions and have access to an upsert flag.",
"username": "Natac13"
},
{
"code": "",
"text": "Thank you very much. It was exactly what I was looking for. And I did some test to see, how much time it takes. The same time with delete and insert many in 1000 records.\nThanks a lot!",
"username": "1114"
}
] | If there is any way to update docs without delete and insert or update/replace one? | 2020-02-22T18:29:26.784Z | If there is any way to update docs without delete and insert or update/replace one? | 7,359 |
null | [
"golang"
] | [
{
"code": "meetingService {\n collection : meetingCol\n}\nfunc (meetingService meetingService) GetMeeting(ctx context.Context, id int64) {\n collection = meetingService.collection //here I will use the collection\n //.. to find content by id in the collection\n}\n",
"text": "Hello,I connect the database then get the client, then I use the client to get the collection in the db, i.e. a collection called meetingCol. Can I save this meetingCol in an object, i.e. meetingService, then in the func of meetingService, like GetMeeting(), I want to use the meetingCol to retrieve contents in this collection. The code looks like:The code can work but I am not sure if the GetMeeting is thread safe in this way, if there are hundreds of client call GetMeeting, will that be OK for this kind of using and accessing the collection? do I need to transfer the mongo.Client as parameter to the meetingService, or to transfer the mongo.Database to the meetingService?",
"username": "Zhihong_GUO"
},
{
"code": "mongo.Collectionmongo.CollectionGetMeeting",
"text": "Hi,It is safe to use a mongo.Collection instance concurrently. The mongo.Collection type is immutable, so you can have multiple clients call GetMeeting concurrently without any issues.",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "Hello Divjot, many thanks for your answer.",
"username": "Zhihong_GUO"
}
] | Can I transfer the collection as parameter of func in golang | 2020-02-21T05:44:57.669Z | Can I transfer the collection as parameter of func in golang | 2,651 |
null | [] | [
{
"code": "2020-02-23T12:31:03.902+0000 I NETWORK [conn730] received client metadata from 192.168.0.10:45858 conn730: { driver: { name: \"mongoc / ext-mongodb:PHP\", version: \"1.16.1 / 1.7.2\" }, os: { type: \"Linux\", name: \"Gentoo\", version: \"2.6\", architecture: \"x86_64\" }, platform: \"PHP 7.4.2cfg=0x01d15ea8e9 posix=200809 stdc=201710 CC=GCC 9.2.0 CFLAGS=\"\" LDFLAGS=\"\"\" }\n2020-02-23T12:31:04.883+0000 I NETWORK [conn731] received client metadata from 192.168.0.10:46014 conn731: { driver: { name: \"mongoc / ext-mongodb:PHP\", version: \"1.16.1 / 1.7.2\" }, os: { type: \"Linux\", name: \"Gentoo\", version: \"2.6\", architecture: \"x86_64\" }, platform: \"PHP 7.4.2cfg=0x01d15ea8e9 posix=200809 stdc=201710 CC=GCC 9.2.0 CFLAGS=\"\" LDFLAGS=\"\"\" }\n2020-02-23T12:31:06.653+0000 I NETWORK [conn732] received client metadata from 192.168.0.10:46168 conn732: { driver: { name: \"mongoc / ext-mongodb:PHP\", version: \"1.16.1 / 1.7.2\" }, os: { type: \"Linux\", name: \"Gentoo\", version: \"2.6\", architecture: \"x86_64\" }, platform: \"PHP 7.4.2cfg=0x01d15ea8e9 posix=200809 stdc=201710 CC=GCC 9.2.0 CFLAGS=\"\" LDFLAGS=\"\"\" }\n2020-02-23T12:31:07.174+0000 I NETWORK [conn733] received client metadata from 192.168.0.1:57228 conn733: { driver: { name: \"mongoc / ext-mongodb:PHP\", version: \"1.16.1 / 1.7.2\" }, os: { type: \"Linux\", name: \"Gentoo\", version: \"2.6\", architecture: \"x86_64\" }, platform: \"PHP 7.3.14cfg=0x01d15ea8e9 posix=200809 stdc=201710 CC=GCC 9.2.0 CFLAGS=\"\" LDFLAGS=\"\"\" }\n2020-02-23T12:31:07.921+0000 I NETWORK [conn734] received client metadata from 192.168.0.11:32932 conn734: { driver: { name: \"mongoc / ext-mongodb:PHP\", version: \"1.16.1 / 1.7.2\" }, os: { type: \"Linux\", name: \"Gentoo\", version: \"2.6\", architecture: \"x86_64\" }, platform: \"PHP 7.4.2cfg=0x01d15e20c9 posix=200809 stdc=201710 CC=GCC 9.2.0 CFLAGS=\"\" LDFLAGS=\"\"\" }\n",
"text": "GreetingsI’ve recently added an arbiter to my cluster, but I was suprised to see that it appears to be receiving client connections:Is this normal / expected?",
"username": "Thomas_Mettam"
},
{
"code": "",
"text": "Yes it is going to connect. You should include it in the connection string/srv records. It can be used as a node for cluster discovery.",
"username": "chris"
},
{
"code": "",
"text": "I\"ve had some problems with arbiters receiving client connections since Mongo 3.6, if I’m not mistaken.\nErrors like connection timeout in the APIs.\nSince then, I always set my arbiters to hidden:true in the replicaSet config.",
"username": "Felipe_Esteves"
}
] | Arbiter receiving client connections | 2020-02-23T20:14:20.920Z | Arbiter receiving client connections | 2,437 |
null | [
"golang"
] | [
{
"code": "collection := client.Database(\"\").Collection(\"\")Collection()type Collection struct {\n\tclient *Client\n\tdb *Database\n\tname string\n\treadConcern *readconcern.ReadConcern\n\twriteConcern *writeconcern.WriteConcern\n\treadPreference *readpref.ReadPref\n\treadSelector description.ServerSelector\n\twriteSelector description.ServerSelector\n\tregistry *bsoncodec.Registry\n}\nCollectionCollection()",
"text": "Hello World,I’m currently updating a golang library that’s used in all our micro services that incorporates all the database objects and operations we allow them to use. Within every function that makes a call to the database there is a simple line collection := client.Database(\"\").Collection(\"\") which obviously declares the collection being used. Now, below I have the struct returned by Collection()-I have not dug into the methods associated with Collection, but am I able to globally define collections so multiple functions can use it or should I keep the code the way it is? Keeping in mind the Collection() function will find the lowest latency node for both reading and writing. Also, if you can, is there any decreased performance due to field writes locking the object?Thanks,\nSamuel",
"username": "Samuel_Archibald"
},
{
"code": "mongo.Collectionmongo.Collection",
"text": "Hi Samuel,You can define the collection globally. One pattern we see often is to store the mongo.Collection instance in a struct that has helper methods to do database operations. The mongo.Collection type is thread-safe, so multiple functions can use the same instance concurrently. As for performance, there shouldn’t be any impact from the driver code itself as the collection is immutable so there are no locks to modify collection fields when executing an operation.",
"username": "Divjot_Arora"
}
] | Golang Collection Struct Usage | 2020-02-21T23:41:19.215Z | Golang Collection Struct Usage | 2,055 |
null | [
"mongoose-odm",
"indexes"
] | [
{
"code": "const postSchema = new mongoose.Schema(\n {\n title: { type: String },\n description: { type: String },\n image: { type: String },\n price: { type: String },\n location: { type: String },\n image: { type: Array },\n author: {\n type: String,\n ref: 'User'\n },\n authorPremium: {\n type: Boolean,\n default: false,\n index:true\n },\n reported: {\n type: Boolean,\n default: false\n },\n reportClear: {\n type: Boolean,\n default: false\n }\n },\n {\n timestamps: true\n }\n);\n\n// users who are premium will keep post for 120 days\n// postSchema.index({createdAt: 1},{expireAfterSeconds: 360,partialFilterExpression : {authorPremium: true}});\n\n// users who are not premium will have posts deleted after 20 seconds\npostSchema.index({ createdAt: 1 }, { expireAfterSeconds: 20, partialFilterExpression: { authorPremium: false } });\n\nmodule.exports = mongoose.model('Post', postSchema);\n",
"text": "does anyone have experience with ttl for mongoose\nI am trying to set it for my schema to delete documents if the user is not premium at certain seconds, using the partialFilterExpression\nbut the document is being deleted\nregardless the state of the userthis is my schema",
"username": "_3x"
},
{
"code": "indexauthorPremium: {\n type: Boolean,\n default: false,\n index:true\n },\n",
"text": "Hey @_3xI think (could be wrong) by setting index here.and the partial at the end is making 2 indexes. Maybe only do the one at the end with the partial filter.I found a few stackover flows (including yours) about this.\nJust want to make sure you are on a version of mongodb that supports partial filters. As Other than that I am not sure why the above is not working as expected.",
"username": "Natac13"
}
] | Partial filter expression is deleting post regardless of filter | 2020-02-25T22:02:02.567Z | Partial filter expression is deleting post regardless of filter | 2,764 |
null | [
"containers",
"upgrading"
] | [
{
"code": "==================================================================\n2020-02-12T20:42:12.953Z W - [conn606] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 172.31.63.194:49488\n2020-02-12T20:42:12.958Z I - [conn606] \n 0x155c5e2 0x155c40d 0x14d8a50 0x150e006 0x150e71b 0x150e731 0x150e78d 0x14ff75d 0x150242e 0x7fdc06e0a6db 0x7fdc06b3388f\n----- BEGIN BACKTRACE -----\n{\"backtrace\":[{\"b\":\"400000\",\"o\":\"115C5E2\",\"s\":\"_ZN5mongo15printStackTraceERSo\"},{\"b\":\"400000\",\"o\":\"115C40D\",\"s\":\"_ZN5mongo15printStackTraceEv\"},{\"b\":\"400000\",\"o\":\"10D8A50\",\"s\":\"_ZN5mongo11DBException13traceIfNeededERKS0_\"},{\"b\":\"400000\",\"o\":\"110E006\",\"s\":\"_ZN5mongo6Socket15handleRecvErrorEii\"},{\"b\":\"400000\",\"o\":\"110E71B\",\"s\":\"_ZN5mongo6Socket5_recvEPci\"},{\"b\":\"400000\",\"o\":\"110E731\",\"s\":\"_ZN5mongo6Socket11unsafe_recvEPci\"},{\"b\":\"400000\",\"o\":\"110E78D\",\"s\":\"_ZN5mongo6Socket4recvEPci\"},{\"b\":\"400000\",\"o\":\"10FF75D\",\"s\":\"_ZN5mongo13MessagingPort4recvERNS_7MessageE\"},{\"b\":\"400000\",\"o\":\"110242E\",\"s\":\"_ZN5mongo17PortMessageServer17handleIncomingMsgEPv\"},{\"b\":\"7FDC06E03000\",\"o\":\"76DB\"},{\"b\":\"7FDC06A12000\",\"o\":\"12188F\",\"s\":\"clone\"}],\"processInfo\":{ \"mongodbVersion\" : \"3.2.22\", \"gitVersion\" : \"105acca0d443f9a47c1a5bd608fd7133840a58dd\", \"compiledModules\" : [], \"uname\" : { \"sysname\" : \"Linux\", \"release\" : \"4.15.0-1057-aws\", \"version\" : \"#59-Ubuntu SMP Wed Dec 4 10:02:00 UTC 2019\", \"machine\" : \"x86_64\" }, \"somap\" : [ { \"elfType\" : 2, \"b\" : \"400000\", \"buildId\" : \"C2070FF92CF0E7C7AF25D84027F691037262CEA2\" }, { \"b\" : \"7FFD040E5000\", \"path\" : \"linux-vdso.so.1\", \"elfType\" : 3, \"buildId\" : \"D05895E5E385880D40A2B0A20CF7D8C9B06423D6\" }, { \"b\" : \"7FDC07E27000\", \"path\" : \"/usr/lib/x86_64-linux-gnu/libssl.so.1.0.0\", \"elfType\" : 3, \"buildId\" : \"0D054641049B9747C05D030262295DFDFDD3055D\" }, { \"b\" : \"7FDC079E4000\", \"path\" : \"/usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0\", \"elfType\" : 3, \"buildId\" : \"9C228817BA6E0730F4FCCFAC6E033BD1E0C5620A\" }, { \"b\" : \"7FDC077DC000\", \"path\" : \"/lib/x86_64-linux-gnu/librt.so.1\", \"elfType\" : 3, \"buildId\" : \"9826FBDF57ED7D6965131074CB3C08B1009C1CD8\" }, { \"b\" : \"7FDC075D8000\", \"path\" : \"/lib/x86_64-linux-gnu/libdl.so.2\", \"elfType\" : 3, \"buildId\" : \"25AD56E902E23B490A9CCDB08A9744D89CB95BCC\" }, { \"b\" : \"7FDC0723A000\", \"path\" : \"/lib/x86_64-linux-gnu/libm.so.6\", \"elfType\" : 3, \"buildId\" : \"A33761AB8FB485311B3C85BF4253099D7CABE653\" }, { \"b\" : \"7FDC07022000\", \"path\" : \"/lib/x86_64-linux-gnu/libgcc_s.so.1\", \"elfType\" : 3, \"buildId\" : \"41BDC55C07D5E5B1D8AB38E2C19B1F535855E084\" }, { \"b\" : \"7FDC06E03000\", \"path\" : \"/lib/x86_64-linux-gnu/libpthread.so.0\", \"elfType\" : 3, \"buildId\" : \"28C6AADE70B2D40D1F0F3D0A1A0CAD1AB816448F\" }, { \"b\" : \"7FDC06A12000\", \"path\" : \"/lib/x86_64-linux-gnu/libc.so.6\", \"elfType\" : 3, \"buildId\" : \"B417C0BA7CC5CF06D1D1BED6652CEDB9253C60D0\" }, { \"b\" : \"7FDC0808F000\", \"path\" : \"/lib64/ld-linux-x86-64.so.2\", \"elfType\" : 3, \"buildId\" : \"64DF1B961228382FE18684249ED800AB1DCEAAD4\" } ] }}\n mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x155c5e2]\n mongod(_ZN5mongo15printStackTraceEv+0xDD) [0x155c40d]\n mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0x140) [0x14d8a50]\n mongod(_ZN5mongo6Socket15handleRecvErrorEii+0xEE6) [0x150e006]\n mongod(_ZN5mongo6Socket5_recvEPci+0x5B) [0x150e71b]\n 
 mongod(_ZN5mongo6Socket11unsafe_recvEPci+0x11) [0x150e731]\n mongod(_ZN5mongo6Socket4recvEPci+0x3D) [0x150e78d]\n mongod(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0x9D) [0x14ff75d]\n mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x2EE) [0x150242e]\n libpthread.so.0(+0x76DB) [0x7fdc06e0a6db]\n libc.so.6(clone+0x3F) [0x7fdc06b3388f]\n==============================================================================\n",
"text": "HI there,\nI am in the middle of a complex upgrade for a customer. Essentially updating 3 clusters form 3.0.12 to a minimum of 3.4 .I have hit an issue with logs, which others have hit, but having tried all suggestions I have no solution working… any help really appreciated.On a small Dev cluster, running nodes on 3.0.12, in Primary/Secondary/Arbiter, unsharded, I have added a 3.2.22 data node as a secondary, and it is pushing out a lot of these, with very few connections/activity happening:I have:Any other suggestions greatfully received. I am sort of at a loss to know what is causing this… is it a version incompatability?Many thx in advance",
"username": "Peter_Colclough"
},
{
"code": "",
"text": "The upgrade procedure is to update the binary in place. Not to add a new node. Have you tried that ?",
"username": "chris"
},
{
"code": "",
"text": "There is also likely a related log on one/all of the existing nodes. Take a look.",
"username": "chris"
},
{
"code": "",
"text": "Also… nope there aren’t related log entries on the other existing nodes… which admittedly is strange , but leads me to thing it may be version conflicts.",
"username": "Peter_Colclough"
},
{
"code": "",
"text": "Sadly this is not a possibility… due to teh way the live applications are written. Would have to call for a site outage to do that. So am building the cluster to have new nodes, along with old nodes, and get teh connection parameters (in many places) to be changed to pint to new nodes.Should also say that the current nodes are hugely overprovisioned, and need to go… which precludes upgrading in place.",
"username": "Peter_Colclough"
},
{
"code": "",
"text": "One of the benefits of replica set is that you can do this operations on the secondaries in a rolling fashion. So no site outage. I’ve done version upgrades and maintenance in this fashion many times.The primary still has to be stepped down at some point for the upgrade anyway.I guess you could add right-sized 3.0 nodes and try and upgrade them.",
"username": "chris"
},
{
"code": "",
"text": "I understand that… sorry… have had a 50 node cluster before. This is the first time I have seen this. I am getting this error on one of teh new nodes (right sized) … and its baffling me.The idea was to add teh right sized nodes, with a new version… the change the app access IPs… then take out the old nodes.Guess I am going to have to try putting th old vestion onto the new node… see if that gets rid of it, then upgrade the new nodes once the app connects to them properly.THx for the input though… appreciated",
"username": "Peter_Colclough"
},
{
"code": "",
"text": "NP. At least adding a 3.0 will rule out the version incompatibility.Also. Update here for prosperity ",
"username": "chris"
},
{
"code": "",
"text": "Ok… so for me… this is the scenario… and my take on it.What we have is (and dont laugh please… its not my setup) :Also, the 3.0.14 nodes did not report the error when communicating to teh 3.2.22 nodes. They just reported connection opens and closes.I have deduced from this it is a version issue with the heartbeat. Thee errors happen every 30 secinds , like clockwork. Data gets through from Primary to secondaries Ok, so it is not affecting operations.I tried installing the older version on teh new nodes, but hit an issue as I have stopped automatic upgrades. I suspect that would have fixed it… but then , for me, I would have hit it again as a part of a rolling update.Consequently I am leaving it, as teh old nods will be disappearing soon.Its a version issue between 3.0.14 and 3.2.22… unless it crops up again.Hope this helps someone, as it may not be limited to these versuions… and also I am not 100% suere docker isn;t playing a part in this either… but thats going too.",
"username": "Peter_Colclough"
},
{
"code": "",
"text": "A final (ish) update. I have managed to kill that message , in a bad way I think.If you set systemLog.traceAllExceptions:false in teh config file, it goes away. I beleive this will also stop any other DBEXceptionBacktraces as well… so its not a good solution, but the only one I have found.I am hoping this is fixed in a later version, or at least set up to be ‘ignorable’ , as stopping all DBEXceptions form logging seems to be a bad move.The 9001 socket error, is purely a closed connection reported, so a new one starts. So it is not really an ‘Exception’ … but is reported as such.Maybe mongodb engineers could answer that… or I will report back here if it goes away in later versions.",
"username": "Peter_Colclough"
}
] | DBException thrown :: caused by :: 9001 socket exception | 2020-02-12T22:06:19.752Z | DBException thrown :: caused by :: 9001 socket exception | 4,701 |
null | [
"dot-net"
] | [
{
"code": "var result= await _dataServiceJobONSITE.QueryAsync(j => j.Id.Contains(\"a\")).ConfigureAwait(false);\n public async Task<IList<T>> QueryAsync(Expression<Func<T, bool>> predicate)\n {\n try\n {\n //var result = await Collection.FindAsync(predicate).ConfigureAwait(false);\n var result = Collection.AsQueryable().Where(predicate);\n var batch = await result.ToListAsync().ConfigureAwait(false);\n\n return batch;\n }\n catch (Exception ex)\n {\n return new List<T>() { };\n }\n }",
"text": "In context of dot net core c# Linq queries\nI want to query for part of the object Id.\nThe ID is represented in Mongodb as object and in POCO as string.\nQuery the full Id works fine but the following does not workwhere QueryAsync is",
"username": "Graeme_Henderson"
},
{
"code": "_idObjectObjectIdObjectId0IdIAsyncEnumerable",
"text": "The _id in Mongo isn’t an Object but an ObjectId, it’s a unique identity that auto-increments, they are generated in a certain way which allows you to guarantee that certain parts will look the same (see: https://docs.mongodb.com/manual/reference/method/ObjectId/)As such the only thing I can think of without testing is do a query that looks for a range if you want specific documents where you fill in the first part of the ObjectId and replace the later section with 0, an example I found on StackOverFlow that should work: mongodb - mongo objectid 'contains' query - Stack OverflowAlternatively, you’d need to return all documents then parse the Id in code and extract only the documents you actually want, not great but using IAsyncEnumerable you could keep the memory overhead down by processing in batches…It might be someone else has a better answer, but that’s my understanding of things ",
"username": "Will_Blackburn"
}
] | Querying ObjectId | 2020-02-26T03:13:16.784Z | Querying ObjectId | 4,114 |
null | [] | [
{
"code": "_items_details_detailsproduct_description_itemsproduct_barcode_items",
"text": "I have two collections ( _items and _details ), and within the _details database I have a field called product_description , how would I be able to embed that field to the _items database by matching the product_barcode field as both of the databases have that field. I am trying to get the descriptions to match the barcodes in the _items databaseAny help would be appreciated",
"username": "samhuss123"
},
{
"code": "product_barcode$lookupproductitems$project$out_itemsdb._items.aggregate([\n {\n $lookup: {\n \n from: _products,\n localField: 'product_barcode',\n foreignField: 'product_barcode',\n as: products\n }\n },\n {\n $project: {\n .... fields you want in collection\n }\n },\n {\n $out: '_items'\n }\n])\n",
"text": "Hey @samhuss123The aggregation pipeline can help you with this.Has you have stated both collection have a product_barcode field. Therefore, you can use $lookup to get the product associated with the items, then use $project to filter down the fields you want in the collection then use $out to save the new collection of _items.\nFYI Compass makes building aggregation pipelines very easy.",
"username": "Natac13"
},
{
"code": "",
"text": "Thanks! Appreciate it, got it to work",
"username": "samhuss123"
}
] | How to import a field from one database to another? | 2020-02-25T17:48:40.302Z | How to import a field from one database to another? | 1,679 |
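Expanding the accepted approach above into a full, hedged pipeline (field and collection names are taken from the question; the output collection name _items_enriched is an assumption so that the original _items is left untouched):

```js
db._items.aggregate([
  { $lookup: {
      from: "_details",
      localField: "product_barcode",
      foreignField: "product_barcode",
      as: "detail"
  } },
  // pull product_description out of the joined array (first match, if any)
  { $addFields: { product_description: { $arrayElemAt: ["$detail.product_description", 0] } } },
  // drop the temporary join array before writing the result
  { $project: { detail: 0 } },
  { $out: "_items_enriched" }
])
```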
null | [
"node-js"
] | [
{
"code": "cursor.forEachconst cursor = client.db().collection('properties').find().limit(15);\ncursor.forEach(\n async function(row){\n return knex('properties').insert(row).then(console.log).catch(console.error);\n }\n ,async function(err){\n if(err) console.error(err);\n await client.close();\n console.log('done');\n process.exit();\n })\nfor await ( let row of cursor ) {\n console.log(row._id)\n return Promise.delay(2000);\n }\n while(await cursor.hasNext()) {\n \tconst row = await cursor.next();\n \tconsole.log(row._id);\n \treturn Promise.delay(2000);\n \t}\n",
"text": "I’m using the latest mongodb driver for nodejs 3.5.3 but I’m having issues with cursors.I’m planning on processing a table of 450k+ rows and doing some async operations so obviously I won’t want to use toArray() first.A simple cursor.forEach isn’t working – the async function isn’t being called at all.When I try simply using Bluebird’s Promise.map (which should handle async and concurrency), I get UnhandledPromiseRejectionWarning: TypeError: expecting an array or an iterable object but got [object Null]When I use npm: mongo-iterable-cursor to convert the cursor, I get the same error with Bluebird. (maybe that’s for driver v2?)When I use a native iterator code found on stack exchange (javascript - Async Cursor Iteration with Asynchronous Sub-task - Stack Overflow)We never get to the second row.Similar with this code, we never get to the second row: node.js - Iterating over a mongodb cursor serially (waiting for callbacks before moving to next document) - Stack Overflow(The Promise.delay is the same as my issues with calling knex with DB commands)Besides for the cursor simply not working, I want to use bluebird for processing the cursor since it has a concurrency setting. I’ve never seen an explanation of how cursor.forEach handled awaits.What am I doing wrong? Suggestions? Thanks!",
"username": "Avi_Marcus"
},
{
"code": "returnreturnlet get_docs = async function(collection) {\n const cursor = collection.find({})\n for await (const doc of cursor) {\n console.log(doc)\n }\n}\nlet run = async function() {\n ... connect to database ...\n await get_docs(conn.db('test').collection('test'))\n await conn.close()\n console.log('db closed')\n}().catch(console.error)\n\ndb closed{ _id: 0 }\n{ _id: 1 }\n{ _id: 2 }\ndb closed\nreturnlet get_docs = async function(collection) {\n const cursor = collection.find({})\n for await (const doc of cursor) {\n console.log(doc)\n return\n }\n}\n{ _id: 0 }\ndb closed\ndb closedfor await()Promise.delay(2000)awaitsetTimeoutlet get_docs = async function(collection) {\n const sleep = util.promisify(setTimeout)\n const cursor = collection.find({})\n for await (const doc of cursor) {\n console.log(doc)\n await sleep(2000)\n }\n}\n{ _id: 0 }\n...2 seconds wait...\n{ _id: 1 }\n...2 seconds wait...\n{ _id: 2 }\n...2 seconds wait...\ndb closed\n",
"text": "For your 4th and 5th example, you never see the second result because you have a return inside the loop, which will cut short the loop’s operation.For example, if I have no return:called from this main function:it will print the whole collection as expected and then print db closed (using my test data):However if I put a return like in your example:it will instead print:It only printed the first document, then it printed db closed because the for await() function returned after the first (and only) run through the loop.I don’t know why you return Promise.delay(2000) in your code, but if you want to wait a specific time between loop iteration, you can await on a promisified setTimeout function. For example:this will print:For more examples, see ways to iterate on a cursor, async or otherwise, in the node driver manual page.Best regards,\nKevin",
"username": "kevinadi"
}
] | Iterable/cursor issues in Nodejs | 2020-02-24T19:41:33.083Z | Iterable/cursor issues in Nodejs | 5,286 |
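As a follow-up to the iteration examples above, here is a dependency-free sketch of processing a large cursor with a bounded number of in-flight async operations (the connection string, database and collection names, and processRow are placeholders; the original post used Bluebird and knex for this part):

```js
const { MongoClient } = require('mongodb');

// Placeholder for the per-document async work (e.g. an SQL insert via knex).
async function processRow(row) {
  console.log(row._id);
}

async function run() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const cursor = client.db('test').collection('properties').find();
  const concurrency = 10;

  let batch = [];
  while (await cursor.hasNext()) {
    batch.push(await cursor.next());
    if (batch.length === concurrency) {
      await Promise.all(batch.map(processRow)); // at most `concurrency` rows in flight
      batch = [];
    }
  }
  if (batch.length) await Promise.all(batch.map(processRow));

  await client.close();
}

run().catch(console.error);
```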
[
"queries"
] | [
{
"code": "{\n\"_id\": 123,\n\"somekey\": \"example\",\n\"arrayOfObjects\": [\n {\n \"objectName\": \"example1\",\n \"objectData\": \"somedata\",\n },\n { },\n { },\n { }\n],\n\"otherinfo\": \"something else\"\n}\narrayOfObjects{ \"objectName\": \"example2\", \"objectData\": \"newdata\" }const callback = (element) => element.constructor === Object && Object.entries(element).length === 0;\nconst index = documentData.arrayOfObjects.findIndex(callback);\ndocumentData$set",
"text": "I’m trying to create update query for document, which would update first empty object it finds in specified array in that document.For example, document would be something like this:So I would like to update only first empty object in arrayOfObjects with some data, for example { \"objectName\": \"example2\", \"objectData\": \"newdata\" }. Rest of the document should remain same.I found out how to check if there is empty object in that array, with this kind of code:In above code documentData is whole document returned from MongoDB. However I have feeling that it is possible to have some filter/selector in $set, but I can’t quite put my finger in it.Any help would be appreciated. I am using nodejs as programming language, if that makes any difference.",
"username": "kerbe"
},
{
"code": "{arrayOfObjects:{}}arrayUpdate",
"text": "The programming language doesn’t matter since you want the update filter to be applied on the DB and you want the update mutation also to happen in the server.So if your query is for the first document which has an empty object in that array you can query for {arrayOfObjects:{}} now you can update it in a couple of different ways. You can use the positional operator ($) with $set, you can use arrayUpdate option with set with [index]` or you can use aggregation with $addFields (alias $set) stage in the update pipeline.I’m curious why the empty objects are already present - more common would be to add data as it becomes available via $push. Is this to preallocate a specific number of elements in each array?Asya",
"username": "Asya_Kamsky"
},
{
"code": "\"maxNumberOfObjects\": 4_idarrayOfObjects[index]arrayFiltersdb.collection.update(\n { \"_id\": \"abc123\" },\n { $set: { \"objectName\": \"example2\", \"objectData\": \"newdata\" } },\n {\n arrayFilters: $[ how_to_filter_empty_object ]\n }\n)\n",
"text": "I’m curious why the empty objects are already present - more common would be to add data as it becomes available via $push. Is this to preallocate a specific number of elements in each array?Yes, I want to pre-allocate specific number of elements in array, and keep number of elements static from there on. I was thinking of having element like \"maxNumberOfObjects\": 4, but counting array elements in many places felt also pretty convenient, so defaulted to that.So if your query is for the first document which has an empty object in that arrayActually, document I would be updating, I know, so I have _id of it at hand. So I can target specific document, and update empty object in arrayOfObjects. Like you said more common way would be to use $push, this is kind of what I want to do, though instead of pushing new object in, updating first empty object, and ideally failing if there are no empty objects. If “nice” failing isn’t possible, I could first check that there is empty object before updating.More traditional database/programming way I would most likely go finding [index] in first query, then make another query which does update. However that update method seems to have arrayFilters which I have feeling could get this done in one query. Didn’t find anyone trying to filter empty object, so trying to figure out how to write such query.So I believe it would be something like:But I’m a bit lost what should be that actual filter there, and a bit unsure how correct is that $set line.\nDo you catch my idea here, and have some pointers how to make that update as working one @Asya_Kamsky?\nAlso if you see some big drawbacks (other than being uncommon way to have empty objects), I’m eager to hear. My idea with this document structure has been that I need less queries and document structure contains meaningful information itself too.",
"username": "kerbe"
},
{
"code": "{}",
"text": "Would it help to know that you can compare something to {} to match an empty object?I can show example of how to do this with all three methods, just wanted to check if you want to see the answer or prefer to just be pointed towards the answer ",
"username": "Asya_Kamsky"
},
{
"code": "NEW_OBJ = { \"objectName\": \"example2\", \"objectData\": \"newdata\" }\n\ndb.collection.updateOne(\n { \n _id: 123, \n arrayOfObjects: { }\n },\n [\n { \n $addFields: {\n // ix is index of the first element which is an empty object\n ix: { $indexOfArray: [ \"$arrayOfObjects\", { } ] },\n ixs: { $range: [ 0, { $size: \"$arrayOfObjects\" } ] } \n } \n },\n { \n $project: { \n arrayOfObjects: { \n $map: {\n input: \"$ixs\",\n in: { \n $cond: [ \n { $eq: [ \"$ix\", \"$$this\" ] },\n NEW_OBJ,\n { $arrayElemAt: [ \"$arrayOfObjects\", \"$$this\" ] }\n ] \n }\n }\n }\n }\n }\n ] \n)\n",
"text": "I tried this using the aggregation for the update (MongoDB version 4.2). The query looks for the first empty object in the array and updates it with the new object.This works fine from the Mongo Shell.",
"username": "Prasad_Saya"
},
{
"code": "db.test.update( \n { \"_id\": \"abc123\" }, \n { $set: { \"objectName\": \"example1\", \"objectData\": \"newdata\"}},\n { arrayFilters: $arrayOfObjects[ {} ]});\n$arrayOfObjects",
"text": "Thank you @Prasad_Saya, that looks interesting approach. I’ll give it a go. It looks more complex than I had expected it to be, as I was fiddling around something like this:But I was running into error that $arrayOfObjects wasn’t defined, and was just coming to ask a bit of example from @Asya_Kamsky how that filter works, and am I even in right path.",
"username": "kerbe"
},
{
"code": "db.test.update( \n { \"_id\": \"abc123\" }, \n { $set: { \"arrayOfObjects.$[i]\": {\"objectName\": \"example1\", \"objectData\": \"newdata\"}}},\n { arrayFilters: [ { i: {$eq: {}} } ] }\n);\n",
"text": "I think you were looking for something more likeps edited to fix syntax",
"username": "Asya_Kamsky"
},
{
"code": "db.coll.update(\n {_id:x, arrayOfObjects:{}},\n {$set: {“arrayOfObjects.$”:{new:”whatever”}}})\n",
"text": "You can also do it using positional operator:",
"username": "Asya_Kamsky"
},
{
"code": "db.test.update( \n { \"_id\": \"abc123\" }, \n { $set: { \"arrayOfObjects.$[i]\": {\"objectName\": \"example1\", \"objectData\": \"newdata\"}}},\n { arrayFilters: [ { i: { $eq: {} }} ] });\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\n",
"text": "Thank you very much @Asya_Kamsky, both are nice and clean solutions. Tried both, and I think there is small typo on that arrayFilter example, missing { there maybe? What worked for me is like this (for future references when someone tries same solutions)Both seem to work similarly when there are empty objects which can be updated, giving following results:However when document doesn’t have empty objects, first one still gives Matched: 1, but doesn’t of course modify anything. Second one gives zeroes to both Matched and Modified.So I think I’ll go with first one, and see if I can get that result checked in code, and hop into different usecase if there ain’t empty objects anymore.Thank you once more Asya, this allows me to continue onward in my project. ",
"username": "kerbe"
},
{
"code": "",
"text": "If you add the check for presence of empty object to the query in the first query then the result would be the same - it says I didn’t find a matching record.",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "I must have overlooked how these when I was testing them in mongo shell. Now that I tried implementing in nodejs, I noticed that arrayFilter way actually doesn’t work as intented. Instead of updating only first empty object, it updates all empty objects in that array in given document.As first one seems to be updating only first object, I can work with that. However is there some modification for arrayFilter to match only one/first, or is it actually meant to match all elements in array always?",
"username": "kerbe"
},
{
"code": "",
"text": "To overcome this issue of updating all the empty objects with the update operation, the aggregation solution is implemented. The aggregation first checks if there is an empty object in the array, then, gets the index of the first empty object, and, replaces that empty object with the supplied new object.The query does look complex, but it is a solution for the requirement.",
"username": "Prasad_Saya"
},
{
"code": "$concatArrays$map{}[ {$set:{ arrayOfObjects:{$let: {\n vars: { ix: { $indexOfArray: [ \"$arrayOfObjects\", { } ] } },\n in: {$concatArrays: [\n {$slice:[ \"$arrayOfObjects\", 0, \"$$ix\"] },\n [ { newObjectHere } ],\n {$slice:[ \"$arrayOfObjects\", {$add:[1, \"$$ix\"]}, 4] }\n ]\n}}}}]\nupdate",
"text": "Instead of updating only first empty object, it updates all empty objects in that array in given documentAh, I see what you’re saying - you don’t want to update all the matching entries, just the first one. Positional update (first syntax) is the one to use then - while aggregation in update works as well (as long as you’re on 4.2+) unfortunately there is no syntax to easily short-circuit from $map iteration over the array - with a small array it’s not a big deal, but for larger arrays it might be noticeably slower.It can actually be done with $concatArrays rather than $map by concatenating using the index (location of the first {} element in the array rather than for iteration but like this:This would go as the second argument to update. I didn’t run this so I’m sure there’s a paren missing somewhere or an off-by-one error… ",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "I’ll give that a testing in few days, once I return to that part of project. If I don’t see performance difference between first working one and this arrayFilter one, I might continue using first one. It is much shorter and easier to read I’m fairly sure that if I need to return that query at later date, or someone new tries to start understanding it, it will be appreciated. That’s however fine example of how powerfull things can be done directly in query, and I might need that at later time. Thank you very much @Asya_Kamsky for taking time with this. ",
"username": "kerbe"
},
{
"code": "db.collection.updateOne(\n { \n _id: 123, \n arrayOfObjects: { }\n },\n [\n { \n $set: { \n arrayOfObjects: { \n $map: {\n input: { $range: [ 0, { $size: \"$arrayOfObjects\" } ] },\n in: { \n $cond: [ \n { $eq: [ { $indexOfArray: [ \"$arrayOfObjects\", { } ] }, \"$$this\" ] },\n { \"objectName\": \"example2\", \"objectData\": \"newdata\" }, // this is the new object to be inserted\n { $arrayElemAt: [ \"$arrayOfObjects\", \"$$this\" ] }\n ] \n }\n }\n }\n }\n }\n ] \n)",
"text": "This is the same update query I had posted earlier with some refinement:",
"username": "Prasad_Saya"
},
{
"code": "$map",
"text": "Thank you @Prasad_Saya, that query starts to look like understandable now. I’ll try to give it a go as well… Though at a glance it looks like having similar elements as Asya’s previous query:\nThat one seemed to work without need to do any $map, updating only first empty object it found, so is it necessary to have this more complex query Prasad? As I haven’t tested yet, I don’t know do they behave differently in edge situations… like when there is no arrayOfObjects{} in document for some reason, or if it doesn’t have any empty objects. Or performance wise big differences? In my current use case arrayOfObjects is usually around 10 elements, most likely at maximum rare cases 30-50 elemenets, so quite small. But concurrent operations by different users to different documents can be significant, if this product takes off properly. But I’ll test them out…",
"username": "kerbe"
},
{
"code": "{arrayOfObjects:{}}",
"text": "Note that in all examples, you should include the test for {arrayOfObjects:{}} so that it’s guaranteed that there is an empty object in the array.",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "@Prasad_Saya you are correct that it’s the same approach with aggregation update, I was simply pointing out that $map is less efficient in the case where you want to replace a single element in an array.",
"username": "Asya_Kamsky"
},
{
"code": "$map_id",
"text": "Yes, iterating over the entire array (i.e., using $map) is less efficient just to update one array element (especially if the array has a lot of elements). Also, noted that the update is for one document only (as it is queried by its _id).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "I’m not really sure of the business and technical logic behind “pre-allocating” empty array elements.Thanks so muchBob",
"username": "Robert_Cochran"
}
] | How to update first empty object in array? | 2020-02-04T15:23:40.629Z | How to update first empty object in array? | 13,426 |
|
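Following up on the plan above to "get that result checked in code", a minimal Node.js driver sketch (the collection handle, document id and error handling are placeholders) that runs the positional-operator update and uses matchedCount to detect that no empty slot was found:

```js
async function fillFirstEmptySlot(collection, docId, newObject) {
  const result = await collection.updateOne(
    { _id: docId, arrayOfObjects: {} },          // only matches if at least one empty object remains
    { $set: { 'arrayOfObjects.$': newObject } }  // replaces the first empty object found
  );
  if (result.matchedCount === 0) {
    // either the document does not exist or it has no empty objects left
    throw new Error('No empty slot available in arrayOfObjects');
  }
  return result.modifiedCount;
}
```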
null | [] | [
{
"code": "",
"text": "Hi everybody,I’m Antonio from Spain. I work as software developer from more than 10 years, always developing with sql databases.\nTwo years ago I started to know about non-sql databases and I heard about MongoDB so I started to test and study it and today I would like to keep improving knowledge and knowing about real projects.Regards,\nAntonio",
"username": "antonio_baena"
},
{
"code": "",
"text": "Hi Antonio - welcome to the community!",
"username": "Jamie"
},
{
"code": "",
"text": "Hola Antonio\nSaludos desde México.\nSoy un caso muy similar, después de una vida de RDBMS aventurandonos en el oceano de Bases de Datos Documentales.",
"username": "DavidSol"
}
] | Hello from Spain | 2020-02-20T19:07:12.720Z | Hello from Spain | 1,812 |
null | [] | [
{
"code": "",
"text": "Hello, and thanks for the invitation.\nI hope we grow a great community here.",
"username": "DavidSol"
},
{
"code": "",
"text": "Bienvenido, David! Happy to have you here ",
"username": "Jamie"
}
] | Saludos de México | 2020-02-21T05:44:48.264Z | Saludos de México | 1,813 |
null | [
"atlas"
] | [
{
"code": "",
"text": "When setting up a new project with programatic APIs , the alerts are being sent to the group owner, which is the oldest user in the organization. This is not what we want, and I’m wondering what others are doing. I’m deleting all the alerts and then recreating them. Are you updating the exiting alerts? Are you creating/assigning another group owner?I would love to be able to set up a default email address (Typically an distribution list) for the alerts to go to when I create the project.",
"username": "Brian_Jones"
},
{
"code": "",
"text": "If you bootstrap a user into the Project when programmatically creating it so that that user has the Project Owner role, then that user’s email address will receive email-based alerts.",
"username": "Andrew_Davidson"
}
] | Creating Projects and Clusters with APIs - Alerts | 2020-02-21T22:49:16.114Z | Creating Projects and Clusters with APIs - Alerts | 1,475 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "Hi, I’ve done some research on the problem I’m working on, specifically reading over the docs for $all, $in, $elemMatch etc. but I still don’t understand how to write a query to do the following:DataUser 1 has groups: [A, B, C]\nUser 2 has groups: [C, D, E]Ticket 1 groups required: [A, B] <— matches User 1. Does not match User 2.\nTicket 2 groups required: [B, C] <— matches User 1. Does not match User 2.\nTicket 3 groups required: [C, D] <— does not match User 1, because it contains one required group (D) not in the user groups. Matches User 2.ExplanationThe “groups” on the ticket are required for a user to take a ticket. If there are any groups in the group array on the ticket not in the user’s group list, the user can’t take the ticket. In other words, the required groups on a ticket have to be equal to a user’s group list, or a subset of a user’s group list for the ticket to be matched/returned.If it matters/makes a difference, this is being added as a $match stage of an existing aggregation query.Is this possible? Can you please offer some pointers for how to go about this?Thanks for any help you can offer ",
"username": "Perry_Trinier"
},
{
"code": "",
"text": "I have a potential solution, which feels like a hack but I’m pretty sure will get me the results that I want:Thoughts?",
"username": "Perry_Trinier"
},
{
"code": "user1 = [ \"A\", \"B\", \"C\" ]\nuser2 = [ \"C\", \"D\", \"E\" ]\ndb.collection.find( { $expr: { $setIsSubset: [ \"$groups\", user1 ] } } )db.collection.find( { $expr: { $setIsSubset: [ \"$groups\", user2 ] } } )user1user2{ _id: 3, groups: [ \"C\", \"D\" ] }",
"text": "I tried this from Mongo Shell, and it looks like this is what you are looking for:I created a collection with 3 documents:\n{ _id: 1, groups: [ “A”, “B” ] }\n{ _id: 2, groups: [ “B”, “C” ] }\n{ _id: 3, groups: [ “C”, “D” ] }The query:db.collection.find( { $expr: { $setIsSubset: [ \"$groups\", user1 ] } } )returns:\n{ _id: 1, groups: [ “A”, “B” ] }\n{ _id: 2, groups: [ “B”, “C” ] }and,\ndb.collection.find( { $expr: { $setIsSubset: [ \"$groups\", user2 ] } } )\n[ edit: Corrected the user1 to user2 ]returns:\n{ _id: 3, groups: [ \"C\", \"D\" ] }",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks, this in combination with $setEquals looks like just what I need!",
"username": "Perry_Trinier"
},
{
"code": "",
"text": "$setIsSubset includes equality condition.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Ah, the nested example there in the docs confused me.",
"username": "Perry_Trinier"
}
] | How to $match documents with an array with a subset of given values | 2020-02-25T15:03:41.814Z | How to $match documents with an array with a subset of given values | 4,406 |
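Since the original question was about a $match stage in an existing aggregation, here is the same idea in pipeline form as a hedged sketch (the collection and variable names are assumptions). $setIsSubset returns true when every required group on the ticket is contained in the user's groups, including the case where the two sets are equal:

```js
var userGroups = ["A", "B", "C"]; // the current user's groups

db.tickets.aggregate([
  { $match: { $expr: { $setIsSubset: ["$groups", userGroups] } } }
  // ... remaining stages of the existing pipeline ...
]);
```

Because this is an $expr comparison rather than a plain equality match, it will not be served by an index on groups, so it is best placed after any selective indexed stages.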
null | [
"atlas"
] | [
{
"code": "",
"text": "Hi, everyone.How can I know how much I am “spending” per month on Mongob Atlas?There is the billing section but is shows per day invoices. How can I measure how much I would spend per month out of the startup program?Thank you in advance.",
"username": "programad"
},
{
"code": "",
"text": "Hi Daniel,In the billing section, you should be able to look at previous invoices on the Invoices tab which will have a detailed Summary by Service section. There you can see the itemized cost of your instances and other add-ons like support.",
"username": "Perry_Trinier"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to measure monthly Atlas spend during startup phase? | 2020-02-21T16:10:35.560Z | How to measure monthly Atlas spend during startup phase? | 4,807 |
null | [
"python"
] | [
{
"code": "",
"text": "Hello,I’m creating some functions with Python using PyMongo, create database and create collection. I’ve read the docs and it says that in order to databases to be created (committed) a document must be written.Is there a way to only create a database without having collections and documents? Or is there a way to handle this kind of situation?",
"username": "William_GM"
},
{
"code": "show dbs\nuse new_database;\ndb.new_collection.insertOne({});\ndb.new_collection.drop();\nshow dbs\n",
"text": "Short answer no… MongoDB doesn’t provide any command to create “database”if you drop all collections in a database, you delete the database too.you may try yourself;List all databases.switch the database you want to create (non-exists database name will work)Insert an empty document into the collection you want to create (again, non-exists collection name will work)drop the collection, you just created and inserted a document.List all databases, again.",
"username": "coderkid"
},
{
"code": "",
"text": "Why do you want/need to create an empty database? It doesn’t feel very MongoDB-ish…",
"username": "Asya_Kamsky"
}
] | PyMongo - Create database with no collections or documents | 2020-02-24T22:21:18.148Z | PyMongo - Create database with no collections or documents | 6,537 |
null | [
"production",
"php"
] | [
{
"code": "pecl install mongodb\npecl upgrade mongodb\n",
"text": "The PHP team is happy to announce that version 1.7.3 of the mongodb PHP extension is now available on PECL.Release HighlightsThis release fixes a compilation issue on Alpine Linux. The libmongoc dependency has been updated to 1.16.2 to fix this issue.A complete list of resolved issues in this release may be found at: Release Notes - MongoDB JiraDocumentationDocumentation is available on PHP.net:\nPHP: MongoDB - ManualFeedbackWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb",
"username": "Andreas_Braun"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB PHP Extension 1.7.3 released | 2020-02-25T09:49:33.686Z | MongoDB PHP Extension 1.7.3 released | 2,193 |
null | [
"production",
"c-driver"
] | [
{
"code": "",
"text": "I’m pleased to announce version 1.16.2 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.No change since 1.16.1; released to keep pace with libmongoc’s version.Bug fixes:– Kevin Albertson",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Announcing libbson and libmongoc 1.16.2 | 2020-02-25T05:56:37.421Z | Announcing libbson and libmongoc 1.16.2 | 1,752 |
[
"charts"
] | [
{
"code": "",
"text": "It is not a really big deal, but I cannot access collection actions button on “Data Sources” page, Intercom button hovers on it.\nChats-ScreenShot1462×1416 217 KB\n",
"username": "coderkid"
},
{
"code": "",
"text": "Thanks, we have this on our backlog to look at. Agree it’s not great. In the meantime you can change the sort order to work around this.",
"username": "tomhollander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Charts UI/UX problem | 2020-02-25T02:40:25.841Z | Charts UI/UX problem | 2,284 |
|
null | [
"swift"
] | [
{
"code": "platform :ios, '13.0'\n\ntarget 'RealmTest' do\n use_frameworks!\n pod 'RealmSwift'\nend\nPODS:\n - Realm (4.3.2):\n - Realm/Headers (= 4.3.2)\n - Realm/Headers (4.3.2)\n - RealmSwift (4.3.2):\n - Realm (= 4.3.2)\n\nDEPENDENCIES:\n - RealmSwift\n\nSPEC REPOS:\n trunk:\n - Realm\n - RealmSwift\n\nimport Foundation\nimport RealmSwift\n\nclass DataItem: Object {\n dynamic var dataId = UUID().uuidString\n dynamic var value = \"\"\n \n override static func primaryKey() -> String? {\n return \"dataId\"\n }\n}\nimport RealmSwift\n\nclass ViewController: UIViewController {\n\n @IBOutlet weak var table:UITableView!\n \n var realm:Realm?\n var dataItems: Results<DataItem>?\n var token: NotificationToken? = nil\n \n override func viewDidLoad() {\n super.viewDidLoad()\n realm = try! Realm()\n if let realm = realm {\n token = realm.observe { notification, realm in\n self.updateUI() \n }\n }\n }\n\n private func updateUI() {\n if let realm = realm {\n dataItems = realm.objects(DataItem.self)\n table.reloadData()\n }\n }\n \n @IBAction func addItem() {\n // get a string representation of the date/time\n let now = Date()\n let formatter = DateFormatter()\n formatter.timeStyle = .medium\n \n // create a DataItem\n var dataItem = DataItem()\n dataItem.value = formatter.string(from: now)\n \n // store in the Realm\n addToRealm(dataItem)\n }\n private func addToRealm(_ item: DataItem) {\n guard let _realm = realm else {\n print(\"No realm is configured\")\n return\n }\n try! _realm.write() {\n _realm.add(item)\n }\n }\n\n}\n\nextension ViewController:UITableViewDelegate {\n \n}\n\nextension ViewController:UITableViewDataSource {\n func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {\n if let collection = dataItems {\n return collection.count\n }\n return 0\n }\n \n func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {\n let cell = tableView.dequeueReusableCell(withIdentifier: \"DataItemCell\", for: indexPath)\n guard let dataCell = cell as? DataItemCell else {\n return cell\n }\n if let collection = dataItems {\n let item = collection[indexPath.row]\n dataCell.configure(item)\n }\n return dataCell\n } \n}\n",
"text": "I am upgrading an iOS app from iOS9 to iOS13 (yeah, I should have kept up with the changes, but well . . .). It is based on Swift and uses RealmSwift for the database. After clearing out 2400 errors and updating the pods I got to the point of launching the app in the iOS 13 simulator (iPhone 11 Pro Max) in Xcode 11 (version 11.3.1) only to have Realm crash with an exception of 'Primary key property {some id} does not exist on object {some object}.I figured I messed something up and built a simple single view test application to see whether this was a cocoapods update issue or I had missed something. At anyr ate, in the end, I found my test app failed with the same error. I’m sure (hoping) it is something simple I have overlooked.I did find a few stack overflow issues for when Xcode 11 was in beta (back in July of 2019), and those pointed to test versions of RealmSwift in the 3.x range. But we are way past that now and I figure this has been solved. So any help would be much appreciated:My podfile looks like this:I checked my Podfile.lock and it ends up looking like this (which appears to be the latest):Now for the app code.My model looks like this:And my view controller that uses this model looks like this:That’s the entirety of the app other than the AppDelegate, the storyboard, and the cell definition.As the app starts up I get this exception2020-02-23 17:28:59.987449-0500 RealmTest[1923:130948] *** Terminating app due to uncaught exception ‘RLMException’, reason: 'Primary key property ‘dataId’ does not exist on object ‘DataItem’'This happens on initial app startup. I have done the usual (XCode hates me, clean build folder, delete derived data folder, restart XCode, restart Mac) but nothing has helped.I’m really at a loss as to how to solve this, so any help would be greatly appreciated",
"username": "Mark_Astin"
},
{
"code": "class DataItem: Object {\n @objc dynamic var dataId = UUID().uuidString\n @objc dynamic var value = \"\"\n \n override static func primaryKey() -> String? {\n return \"dataId\"\n }\n}\n@objcMembers class DataItem: Object {\n dynamic var dataId = UUID().uuidString\n dynamic var value = \"\"\n \n override static func primaryKey() -> String? {\n return \"dataId\"\n }\n}",
"text": "You forgot the @objc dynamic in your Realm classdynamic var dataId = UUID().uuidStringShould beor you can make the whole class managed with",
"username": "Jay"
},
{
"code": "\nimport Foundation\nimport RealmSwift\n\n@objcMembers class DataItem: Object {\n dynamic var dataId = UUID().uuidString\n dynamic var value = \"\"\n \n override static func primaryKey() -> String? {\n return \"dataId\"\n }\n \n}\n\n",
"text": "Thanks so much!! That solved it. I made the whole class managed. For future viewers the class modifier is “@objcMembers”, notice the plural part. So the model that worked for me was:Apparently when moving from Swift (I’m not sure which version, it was at iOS8 though) I didn’t have to have the @objc annotiations, so that is how I missed it when moving to Swift 5.1 (at least that’s the story I am going with). Again, thanks for your help",
"username": "Mark_Astin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Crash when app starts for the first time (Primary key property ... does not exist on object | 2020-02-23T23:14:24.779Z | Crash when app starts for the first time (Primary key property … does not exist on object | 6,600 |
null | [
"charts"
] | [
{
"code": "",
"text": "What category we have for product problems?I just found an UI/UX problem on Charts and I don’t know where to post it.",
"username": "coderkid"
},
{
"code": "",
"text": "https://jira.mongodb.org",
"username": "chris"
},
{
"code": "",
"text": "@chris Thank you, I am kind of new in mongo community, what else we have?",
"username": "coderkid"
},
{
"code": "",
"text": "Strange… I cannot find Charts project in Jira listhttps://jira.mongodb.org/secure/BrowseProjects.jspa",
"username": "coderkid"
},
{
"code": "",
"text": "Charts JIRA is not public. Please post here in the MongoDB Cloud forum.",
"username": "tomhollander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Where should I report a UI/UX problem with Charts? | 2020-02-24T03:32:52.853Z | Where should I report a UI/UX problem with Charts? | 2,777 |
null | [] | [
{
"code": "",
"text": "Hello, MongoDB community,My name is Mario, currently working as Database Engineer at Vopak (Oil and Gas storage company).I started working with MongoDB in 2015, but not 100%. Previously as working most of the time with SQL Server and MySQL DBs in AWS and GCP.Have a good week!",
"username": "Mario_Pereira"
},
{
"code": "",
"text": "Welcome to the MongoDB community @Mario_Pereira.I hope you’re week is going to be amazing Be sure to let the community know if you have any questions, or give an answer to someone else’s question if you happen to know it.",
"username": "Peter"
},
{
"code": "",
"text": "Hi @Mario_Pereira and welcome to these parts.Like you, and many others, I too came from SQL Server and MySQL to MongoDB. Feel free to reach out if you have questions.Cheers!",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @Mario_Pereira, welcome!!",
"username": "Jamie"
},
{
"code": "",
"text": "Hi @Mario_Pereira, welcome to the community, looking forward to more folks with a operations database background joining the fun ",
"username": "Michael_Grayson"
},
{
"code": "",
"text": "Hi @Mario_Pereira,Welcome!",
"username": "Juliette_Tworsey"
}
] | Hi peeps, Mario here! | 2020-02-04T15:23:37.666Z | Hi peeps, Mario here! | 2,377 |
null | [] | [
{
"code": "",
"text": "A couple of days ago, one of our servers hosting our mongoDB containers experienced an incredibly high load (more than 200x higher than the usual average, we’re still investigating where that came from). However, as all our apps relying on the databases seemed to behave normally and the general metrics of the server were still fine (it was responsive to requests, total cpu was < 20%, total memory < 50%, total disk usage < 30%, no active swapping), we decided to wait a while until we restarted the whole server to bring it back into a stable state (we had to, as we didn’t have ssh or console access at that point anymore due to the high load, but that is a different issue).Again, during the incident all our apps behaved normally (nodejs + mongoose) and claimed that data had been written to the database and could also be retrieved normally.The interesting (as in “wtf”) thing happened once we restarted the whole server once there was no user-interaction anymore, gained back control, restarted the containers and everything seemed to be back to normal. The only problem: All the data, that should’ve been written (according to the apps), was gone.So, my question:Is it possible, that mongoDB stores data in memory only, even without writing to the journal for a long time (1h+) in case there are problems like an extremely high load?For me it simply doesn’t make any sense that the apps were behaving normally, users could interact with it as they usually do, were able to “save” data to the database and retrieve it later on (no client-side caching involved), but the data was not persisted to the database at all. We were also able to confirm that there had been no overwriting of data on our side.We’re using an Ubuntu 18_04 Server that hosts vanilla mongo 4.0.7 instances (journaling enabled, no replica sets). Client is nodeJs + mongoose.",
"username": "B_S"
},
{
"code": "",
"text": "Have you ruled out that it hasn’t been filesystem / hard drive error? Meaning, if server has certain RAID setup, which has write cache. Writes could appear in MongoDB perspective being written to disk, but instead they are in write cache. Those usually should have backup to prevent data loss on server crash, but they do fail sometimes.\nAs you speak about containers, it could be also problem in container layer, in similar fashion. Container filesystem thinks it has done write properly, but it hasn’t actually been written to persistent storage.Haven’t experienced this kind of situations with MongoDB, but in the past those happened. We remedied situation somewhat by disabling write caches from servers, so problems would emerge immediately, and not noticed after server reboot.",
"username": "kerbe"
},
{
"code": "{w:0}{w:1, j:false}w:majority",
"text": "Check the drivers’ write concern. Just because journalling is enabled on the server does not mean it is being requested to journal or acknowledge a write.{w:0} or {w:1, j:false} for example may not persist to disk immediately.If you’re really concerned about persistence use a replica set and w:majority",
"username": "chris"
},
{
"code": "{w:1}j",
"text": "Thank you both for your help!@kerbe Currently, we can not rule out a harddrive problem, but the few pointers I have seem to point in that direction. I still don’t know if its the disc itself, but it seems to point in that direction.@chris\nI looked into that and the default write concern is by default {w:1} (we haven’t had the need manually tune the write concern yet, so it’s not defined in the code anywhere), but I couldn’t really find out what the default value for the journal (j) parameter is. Is there some documentation on this somewhere?",
"username": "B_S"
},
{
"code": "",
"text": "Should be indicated in the driver api. I think most of them default to w:1 j:true.",
"username": "chris"
},
{
"code": "",
"text": "@chris Thank you once again for your response, I am currently investigation this, but I believe mongoose seems not to set a default write concern unless it’s a bulk write. It’s possible to set it but I believe the mongoDB defaults are used by default.Also, @kerbe, it turns out that our cloud provider had an issue on the hypervisor level, that resulted in the high load and that weird behaviour we experienced.‘It is possible to commit no mistakes and still lose. That is not weakness , that is life.’",
"username": "B_S"
},
{
"code": "",
"text": "Also, @kerbe, it turns out that our cloud provider had an issue on the hypervisor level, that resulted in the high load and that weird behaviour we experienced.Did they also confirm that this caused dataloss, or that i/o was not behaving correctly, or is that still a mystery how data wasn’t there after restart?",
"username": "kerbe"
}
] | Is it possible possible that mongoDB doesn't write to disk/journal at all under high load? | 2020-02-21T10:15:57.316Z | Is it possible possible that mongoDB doesn’t write to disk/journal at all under high load? | 3,714 |
null | [
"performance"
] | [
{
"code": "collection.updateOne({_id: id}, {$set: theDocument})collection.replaceOne({_id: id}, {theDocument})",
"text": "Hey community,I’ve been developing a pattern for my application where I write get the document, edit it in memory the whole document back to the db.\nThis is really convenient because I don’t need to make a custom update for every bit of functionality. And this is becomes really testable in my app.From what I can see there are two ways to update a whole collection\ncollection.updateOne({_id: id}, {$set: theDocument})\nor\ncollection.replaceOne({_id: id}, {theDocument})Is any method better or faster than the other?\nAlso I realise this might not be safe, but these documents are not operated on many times within a small time frame, for sensitive updates I write atomic updating functions.",
"username": "Ihsan_Mujdeci"
},
{
"code": "",
"text": "Honestly I would insert the whole document as new, and then delete the old one.",
"username": "coderkid"
},
{
"code": "",
"text": "If… these documents are not operated on many times within a small time frameyou should not worry about being faster. That being written. If I remember correctly, existing fields with unchanged values are not modified during $set (sorry I do not remember exactly where I read that). If those unchanged fields are part of an index the replaceOne or insertThenDelete might have a much bigger impact than updateOne.",
"username": "steevej"
},
{
"code": "",
"text": "But then you can’t use the same _id, because its a unique index and now that is 2 operations. I don’t think it’s an ideal solution.",
"username": "Ihsan_Mujdeci"
},
{
"code": "db.collection.updateOne$set",
"text": "Hi @Ihsan_Mujdeci,I’d strongly encourage not doing an entire document update or replacement.Sending an entire document to update one or two fields (when many more fields remain the same) can quickly cause scalability problems. For example, as documents grow, sending an entire document across the network is a lot of unnecessary network traffic. Also, the oplog becomes unnecessarily bloated.Using db.collection.updateOne with the appropriate operator (e.g. $set) is smaller and efficient when only updating fields that change.Justin",
"username": "Justin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | updateOne vs replaceOne performance | 2020-02-18T21:53:18.948Z | updateOne vs replaceOne performance | 18,398 |
null | [
"indexes"
] | [
{
"code": "",
"text": "Hi All,I have two environments STAGE and PROD and the indexes in both environments are same, But while firing a query the indexes are not loading diffrently , due to that facing severe issue. Could any tell me what to do in this case.ThanksEswar",
"username": "eswar_sunny"
},
{
"code": "",
"text": "Hi @eswar_sunny you’ll need to provide more details for us to help.But while firing a query the indexes are not loading diffrentlyWhat do you mean the indexes are not loading differently? How are you verifying this?I have two environments STAGE and PROD and the indexes in both environments are same,If you’re seeing different indexes being used for the same query in STAGE and PROD, then you could have similar indexes that can be used for a given query. I would suggest auditing your indexes to make sure you only have the indexes necessary for the queries you run. Note that if there is a difference in amount of data and a difference in the data distribution between your STAGE and PROD systems different indexes may be used which could lead to slow queries.due to that facing severe issue.What is the severe issue you’re facing. Read queries are slow? Write queries are slow? Without this piece of information anything would be a guess. Trying to apply any type of suggestion at this point could make things worse.Other things we would need to know is what is the actual query/queries that are having problems. What index is being used and what index do you expect to be used.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi Doug,Thanks for your response.\nAs my indexes are loading differently my read queries are getting socket timeout.\nAnd also in stage if i execute the query and observe my query planner it is scanning on one index and the same query if i execute on prod and observe query planner it is scanning diffrent index.\nNote: indexes are same in both prod and stageThanks\nEswar",
"username": "eswar_sunny"
},
{
"code": "{\n$match: {\"crOn\":{\n '$gte': ISODate(\"2019-12-10 15:33:47.000Z\"),\n '$lt': ISODate(\"2019-12-20 15:33:47.000Z\")\n} }\n},\n{\n$match: { $and: [{ $or: [\n { \n $and : [ \n {\"quote.quoteTeam\" : { \"$elemMatch\": {\n\"Id\": { $in:[1,2]},\n\"role\": { $in:[\"A\",\n \"B\",\n \"C\"]},\n\"active\":1\n}},\n \"type\" : { $in:[1,2,3,5,7]}\n }\n \n ]\n },\n{ \n $and : [ \n {\"quote.partnerExtn.ptnrrBeGeoId\":{ $in:[1,2]},\n \"type\" : { $in:[3,5,7]}\n \n }\n \n ]\n }\n\n \n] }\n]}\n},\n// Dynamic FIlter for status\n {\n $match: {\"quote.qteStatus\":{ $nin:[-1,17]} }\n },\n",
"text": "I am using below aggregates, Could you please suggest what are the compound indexes that i can createThanks\nEswar",
"username": "eswar_sunny"
},
{
"code": "$match$match",
"text": "And also in stage if i execute the query and observe my query planner it is scanning on one index and the same query if i execute on prod and observe query planner it is scanning diffrent index.\nNote: indexes are same in both prod and stageWhat are the indexes presently you have created on the collection? Which ones are being utilized in prod and in the stage, respectively (as per the query plan)?In general, aggregation queries utilize indexes in $match stage, when the stage occurs early (definitely, as first stage) in the pipeline. Your aggregation query has the $match stage; is the stage at the start of the pipeline or down? This detail is required for study.Also, a sample structure of the document will help study the query; consider posting one.",
"username": "Prasad_Saya"
}
] | My queries are using available indexes differently in stage vs prod environments | 2020-02-16T08:56:04.589Z | My queries are using available indexes differently in stage vs prod environments | 2,406 |
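A hedged starting point for the index question above (the collection name is a placeholder, and only the pipeline shape posted earlier is assumed): the first $match stage is a range on crOn, so an index leading with crOn is the safest candidate, while the later $or / $elemMatch / $nin conditions are difficult to cover with any single compound index. Comparing the explain output in both environments will show which index each plan actually chooses:

```js
db.getCollection("requests").createIndex({ crOn: 1 })

// run in both STAGE and PROD to compare the winning plans
db.getCollection("requests").explain("executionStats").aggregate([ /* the pipeline from above */ ])
```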
null | [] | [
{
"code": "",
"text": "This notification bar on top of page seems a bit over enthusiastic. I’ve dismissed it now multiple time, but always it seems to be back when I return this site. Luckily it isn’t popup every time, but still it pushes whole page downwards, and requires extra clicking to get rid of it. Maybe something could be done to it?",
"username": "kerbe"
},
{
"code": "",
"text": "@kerbe I think I found a work around; I do not see that notification bar anymore…go to https://www.mongodb.com/community/forums/u/coderkid/preferences/notificationsand click on Enable Notification\nScreen Shot 2020-02-23 at 10.39.24972×690 43.4 KB\nThen accept browser notification prompt, then on the page Enable Notification turns to \"Disable Notification and click on it.",
"username": "coderkid"
},
{
"code": "",
"text": "Mhh… I haven’t been seeing that notification recently, or at least I haven’t been paying any more attention to it if it does so. Maybe it was tweaked silently, or there was some internal counter for initial days to try get it activated, but it has stopped.",
"username": "kerbe"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Site notifications reminder | 2020-02-07T03:03:01.448Z | Site notifications reminder | 2,877 |
null | [
"replication"
] | [
{
"code": "",
"text": "How to set featureCompatibilityVersion on secondary replica set member which does not startup due to this error? I am basically trying to resync a broken secondary. I have removed all the data and restarted the service but keep getting this error. I can see that featureCompatibilityVersion is set to 3.6 which the other are set to 3.4. I know this from starting the server up outside the set. However the second I add the server to replica set again I get the same error meaning I can’t change featureCompatibilityVersion to 3.4 since the server will not start. How do I fix this this catch 22 situation? Do I have to remove auth settings during syncing?Thanks in advance for any help.----ErrorInitial sync attempt failed – attempts left: 9 cause: IncompatibleServerVersion: Sync source had unsafe feature compatibility version: downgrading to 3.42020-02-11T15:46:56.115+0000 I REPL [replication-1] Initial Sync Attempt Statistics: { failedInitialSyncAttempts: 1, maxFailedInitialSyncAttempts: 10, initialSyncStart: new Date(1581436015001), initialSyncAttempts: [ { durationMillis: 0, status: “IncompatibleServerVersion: Sync source had unsafe feature compatibility version: downgrading to 3.4”, syncSource: “10.35.5.65:27017” } ] }",
"username": "Alex_Morton"
},
{
"code": "",
"text": "Can you post the error?",
"username": "chris"
},
{
"code": "",
"text": "Added error. This is what I see when it just loops round and round",
"username": "Alex_Morton"
},
{
"code": "",
"text": "Looks strange. I’d be tempted to change the binary to 3.4, Sync, and take it back to the 3.6 binary.Once back on 3.6 you should complete upgrading by setting the featureCompatibility to 3.6",
"username": "chris"
},
{
"code": "",
"text": "Thanks. The other servers are set to binary=3.6 but feature=3.4. Should I not leave it like that? I would have to update all of them, is this correct?Is there any way this will corrupt data?",
"username": "Alex_Morton"
},
{
"code": "",
"text": "Try setting the fetaureCompatibilityVersion on the cluster to 3.6 and add the node.The data on this node is useless anyway, as it is attempting initial syncs(no data loss as it is alreay gone). You should only need to use 3.4 on this node.Your last option(?) is to seed this nodes data from backup/snapshot from a current node.",
"username": "chris"
}
] | How to set featureCompatibilityVersion on secondary replica set member which does not startup due to this error? | 2020-02-20T17:32:16.804Z | How to set featureCompatibilityVersion on secondary replica set member which does not startup due to this error? | 3,004 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hello.\nWould be possible to create a database programmatically using the c# driver, if not possible then to run a shell command via c#.\nI like to create a database per use and don’t have many users, just that it is only known at runtime.\nAny examples would be grateful.\nThanks.",
"username": "binary_storm"
},
{
"code": "",
"text": "MongoDB creates databases and collections on first write. This means they don’t need to be precreated. MongoDB will allow you to connect to a database that doesn’t exist. Because of this, you should be able to make the connection with the new database name and then send a write command to it and the database and collection will get created.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Thank you Doug. Yes, learnt that the hard way.\nWould be nice if that existed in the c# driver manual.",
"username": "binary_storm"
},
{
"code": " var index = new CreateIndexModel<T>(\n Builders<T>\n .IndexKeys\n .Descending(i => i.Feild1)\n .Descending(i => i.Feild2)\n );\nvar ttlIndex = new CreateIndexModel<T>(\n Builders<T>\n .IndexKeys\n .Ascending(i => i.DateTime),\n new CreateIndexOptions { ExpireAfter = TimeSpan.FromHours(21) }\n );",
"text": "Don’t forget to create the indexes as well if you’re scripting the database, it’s fairly straight forward to do in the C# driver but let me know if you want some help with it, below are a couple of examples:An index on two fieldsA Time to Live index (lasting 21 hours), object T will need to have a DateTime property…",
"username": "Will_Blackburn"
}
] | How to create a database programmatically | 2020-02-22T18:29:36.280Z | How to create a database programmatically | 6,087 |
null | [] | [
{
"code": "",
"text": "Hi there i want to submit my assignment but first help me where can i find them.",
"username": "Muhammad_15946"
},
{
"code": "",
"text": "Assignments are nothing but labs\nPlease check explorer view on left side of your course page\nAlso see overview page",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Muhammad_15946,I hope you found @Ramachandra_37567’s response helpful. Please let me know if you have any other queries.Thanks,\nShubham Ranjan\nCurriculum Services Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | How to find m001 assignment | 2020-02-21T17:16:49.337Z | How to find m001 assignment | 1,122 |
null | [] | [
{
"code": "",
"text": "Hi,I would like to know how could I drop the current course schedule that I am in since I’ve already missed two deadlines of the first two chapters for some unexpected circumstances. I would also like to ask when will be the next schedule so that I can prepare my schedule in advance.Hoping for any response.",
"username": "ASTROFIL_HYDE_69184"
},
{
"code": "",
"text": "Hey @ASTROFIL_HYDE_69184I can only speak from experience; I had dropped out of the M040 course at the start of the summer since I was taking M001 & M103. However I was able to register again for the next M040 and M042 that are coming up! Dropping the course can be done on the My Courses page and select the Unregister Course button associated with the course you wish to leave.The next start date for M001 looks to be July 16th 2019\n\nnextM001course.png3200×1800 378 KB\n\nBest of luck to you when you take the course again. ",
"username": "Natac13"
},
{
"code": "",
"text": "Hi @ASTROFIL_HYDE_69184,You are right @natac13!!The next course starts from 16 July 2019 at 17:00 UTC. You can register for the the course now.Good luck !!!Thanks,\nSonali",
"username": "Sonali_Mamgain"
},
{
"code": "",
"text": "Hi @natac13 and @Sonali_Mamgain,Thanks for the info Big help!",
"username": "ASTROFIL_HYDE_69184"
},
{
"code": "",
"text": "Thank you for this. ",
"username": "SB_78958"
},
{
"code": "",
"text": "Hello, As you have already sat M001 and I dropped of the last round due to the issues with installing Mongo Shell on a Mac. You wouldn’t happen to know the secret to performing this installation, as I followed the video to the letter? I have raced through week one, in order to allow more time for this blocker. Any help would be appreciated.",
"username": "SB_78958"
},
{
"code": "",
"text": "@SB_78958There should be no secrets for the install. Mongo Shell should be included when you download MongoDB Community Edition",
"username": "Natac13"
},
{
"code": "",
"text": "I have experienced a great deal of issues with the installation of MongoDB shell on MacOS 10.11. I wondered if HomeBrew was required, however the update has not made a difference. The echo $PATH step only shows anaconda3, not the expected mongo part. I have used the MongoDB installation documentation, the course video to no avail.\nIs there a piece of the instruction that is missing?",
"username": "SB_78958"
},
{
"code": "",
"text": "Hey @SB_78958I am not a Mac user so I cannot say however the MongoDB docs are usually very good.On that note I found 2 blog post that may help you. The process seems simple enough but I do know that is not always the case Best of luck!Post 1\nPost 2",
"username": "Natac13"
},
{
"code": "",
"text": "Hi @SB_78958,Please use this link to download the MongoDB Enterprise Server.Now follow the steps mentioned below to set up the mongo shell in your system.Note : The folder name (Highlighted above) could be different for you if you have downlaoded any other version.To test if the path has been successfully set up or not, try running this commandmongo --nodbIf all went well you should be able to see something like this :MongoDB shell version v4.0.10\nWelcome to the MongoDB shell.If you still face any issue, feel free to get back to us. Happy Learning Thanks,\nShubham Ranjan\nCurriculum Support Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "@Shubham_13709, You are the best! These steps have worked. I have now put this to the test. I can finally proceed with the course. Thank you very much.",
"username": "SB_78958"
},
{
"code": "",
"text": "passwordWhat could be the password please?",
"username": "oluwatosinjosh"
},
{
"code": "",
"text": "Hi @oluwatosinjosh,You are supposed to enter the password of your MAC system.If you have any other query, feel free to get back to us. Happy Learning Thanks,\nShubham Ranjan\nCurriculum Support Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Do I need to still keep the old .bash_profile file?",
"username": "oluwatosinjosh"
},
{
"code": "",
"text": "Hi @oluwatosinjosh,Don’t worry about the old .bash_profile file. Just go ahead and execute the instructions mentioned in the post and you should be able to successfully set up the mongo shell on your system.Thanks,\nShubham Ranjan\nCurriculum Support Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "I followed the steps above as mentioned… attached is my result.\n \nScreen Shot 2019-07-29 at 2.27.32 PM.png685×506 12.7 KB\n Thanks.",
"username": "oluwatosinjosh"
},
{
"code": "",
"text": "Can you please remove one “/” at the beginning of the path that you have added and then save the file.Thanks,\nShubham Ranjan\nCurriculum Support Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Just remove one of the “/” from my previous images and I still can’t use the “mongo --nodb”\n\n\nScreen Shot 2019-07-29 at 2.43.52 PM.png679×525 15.4 KB\n",
"username": "oluwatosinjosh"
},
{
"code": "",
"text": "Hi @oluwatosinjosh,Can you try this path and see if it works.~/mongodb-osx-x86_64-enterprise-4.0.11/binAnd also after adding this path please restart the terminal.Thanks,\nShubham Ranjan\nCurriculum Support Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Thanks so much for your support… Am very grateful\nKind Regards.",
"username": "oluwatosinjosh"
}
] | Dropping and Retaking the Course | 2019-07-10T09:16:45.159Z | Dropping and Retaking the Course | 6,750 |
null | [
"stitch"
] | [
{
"code": "",
"text": "I am using MongoDB Stitch to build the backend for my app. I noticed that deleteMany() operation in MongoDB Stitch is very slow and whenever I have to delete a huge number of documents (500+) then I get “Execution Time out error”. I see that MongoDB Stitch operations don’t run longer than 90s. Any good workarounds or solutions to this?",
"username": "Salman_Alam"
},
{
"code": "",
"text": "Did you try “remove”?",
"username": "coderkid"
},
{
"code": "",
"text": "Hi @Salman_Alam, welcome!I have to delete a huge number of documents (500+) then I get “Execution Time out error”Are you able to distribute the delete operations ? i.e. instead of 500+ documents in a X period of time, do X/5 period of time. Performing smaller number of delete operations but more frequent.Alternatively, depending on your use case, see also Expire Data From Collections by Setting TTL.Regards,\nWan.",
"username": "wan"
}
] | Issue with deleteMany() in MongoDB Stitch | 2020-02-23T04:04:56.139Z | Issue with deleteMany() in MongoDB Stitch | 2,262 |
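To make the batching suggestion in the thread above concrete, here is a minimal sketch of a server-side Stitch function that deletes matching documents in small batches so each invocation stays well under the 90-second execution limit. The service name ("mongodb-atlas"), database ("mydb"), collection ("orders") and the { status: "expired" } filter are placeholder assumptions, not values from the thread.

```javascript
// Hypothetical Stitch function: delete old documents in small batches so each
// run finishes quickly; re-run it (e.g. via a scheduled trigger) until done.
exports = async function deleteBatch() {
  // "mongodb-atlas", "mydb" and "orders" are placeholders — use your own
  // service, database and collection names.
  const coll = context.services.get("mongodb-atlas").db("mydb").collection("orders");

  // Fetch the _ids of a limited batch of matching documents...
  const batch = await coll.find({ status: "expired" }, { _id: 1 }).limit(100).toArray();
  if (batch.length === 0) return 0;

  // ...and delete only that batch, keeping each invocation short.
  const result = await coll.deleteMany({ _id: { $in: batch.map(d => d._id) } });
  return result.deletedCount;
};
```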
null | [
"transactions"
] | [
{
"code": "given transaction number does not match any in-progress transactions\n",
"text": "I am getting above error when trying to create transactions for multiple orders and I am using async/await to create multiple transactions for each order one by one.Kindly help me to solve this error.",
"username": "Naif_Almuqbel"
},
{
"code": "TransientTransactionError",
"text": "Hi @Naif_Almuqbel, welcome!In order to help others answer your question, could you answer the following to provide more information:Regards,\nWan.",
"username": "wan"
},
{
"code": "TransientTransactionErrorfunction checkOrder() {\n const timeStampNow = new Date().getTime();\n const query = {\n status: {\n $in: [ Constants.PAID_PAYMENT,\n Constants.RESCHEDULE_REQUEST_ACCEPTED ]\n },\n expiryDate: {\n $lte: new Date(timeStampNow)\n },\n isActive: true\n };\n\n await Request.find(query, async (err, docs) => {\n if (err || Utility.isEmptyObject(docs)) {\n // console.log('error ' + err);\n }\n else {\n try {\n for (let i in docs) {\n let requestData = docs[i];\n\n const data = {\n referenceId: requestData.referenceId,\n status: Constants.COMPLETED\n };\n\n await updateRequestCronJob(data);\n }\n }\n catch (e) {\n console.log('CRON: error -job', e);\n }\n }\n });\n}\n\nasync function updateRequestCronJob(data) {\n\n const status = data.status; // to be updated\n const referenceId = data.referenceId;\n\n const query = { referenceId: referenceId, isActive: true };\n\n await Request\n .findOne(query)\n .then(async (request) => {\n let session = await mongoose.startSession();\n session.startTransaction();\n try {\n const opts = { session };\n const currentTime = Utility.currentTimeNow();\n\n let transactionId = Utility.generateUniqueRandomNumber();\n\n let transactionBody = {\n transactionId: transactionId,\n requestId: referenceId\n };\n\n let transactionsAr = [];\n let artistIncome;\n\n // deduct money from admin & credit to artist\n\n let orderAmount = request.paymentAmount;\n const artistId = request._artistId._id;\n\n let commissionAmount = Utility.getCompletionCommission(orderAmount,\n request.commissionPercent);\n\n artistIncome = orderAmount - commissionAmount;\n\n transactionBody['amount'] = artistIncome;\n transactionBody['status'] = Constants.TRANSACTION_STATUS_CREDITED;\n transactionBody['from'] = Constants.ADMIN_MONGO_ID;\n transactionBody['fromDesc'] = Constants.TRANSACTION_FROM_SERVICE_INCOME;\n transactionBody['to'] = artistId;\n transactionBody['desc'] = Constants.ARTIST_SERVICE_INCOME;\n transactionBody['type'] = Constants.TRANSACTION_TYPE_DEBIT;\n transactionBody['mode'] = Constants.TRANSACTION_MODE_WALLET;\n transactionBody['createdAt'] = Utility.currentTimeNow();\n\n transactionsAr.push(transactionBody);\n\n // save completion commission \n transactionBody = {};\n\n transactionBody['transactionId'] = Utility.generateUniqueRandomNumber();\n transactionBody['requestId'] = referenceId;\n\n transactionBody['amount'] = commissionAmount;\n transactionBody['status'] = Constants.TRANSACTION_STATUS_JAMELAH_COMPLETION_COMMISSION;\n transactionBody['from'] = Constants.ADMIN_MONGO_ID;\n transactionBody['to'] = Constants.ADMIN_MONGO_ID;\n transactionBody['desc'] = Constants.COMPLETION_COMMISSION;\n transactionBody['type'] = Constants.TRANSACTION_TYPE_CREDIT;\n transactionBody['mode'] = Constants.TRANSACTION_MODE_WALLET;\n transactionBody['createdAt'] = Utility.currentTimeNow();\n\n transactionsAr.push(transactionBody);\n\n await Transaction.create(transactionsAr, opts)\n .then(res => {\n //console.log(res);\n if (Utility.isEmptyObject(res)) {\n throw new Error('Error in creating transaction.');\n }\n });\n\n // b) deposit service amount in artist wallet\n await User.findOneAndUpdate({ _id: artistId },\n {\n $inc: { 'wallet.balance': artistIncome }\n },\n { session, new: true }).\n then(res => {\n //console.log(res);\n if (Utility.isEmptyObject(res)) {\n throw new Error('Error in crediting artist wallet.');\n }\n return res;\n });\n\n let orderBody = {\n status: status,\n updatedAt: currentTime,\n reviewNotificationTime: 
Utility.addHoursToCurrentTime(process.env.NOTIFICATION_HOURS_AFTER_COMPLETION ||\n Config.NOTIFICATION_HOURS_AFTER_COMPLETION),\n artistIncome: artistIncome\n };\n\n await Request.findOneAndUpdate(\n {\n referenceId: referenceId\n },\n {\n $set: orderBody,\n $push: {\n statusStages: {\n status: status,\n time: currentTime\n }\n }\n },\n opts)\n .then(res => {\n if (Utility.isEmptyObject(res)) {\n throw new Error('Error in updating order.');\n }\n return res;\n });\n\n await session.commitTransaction();\n session.endSession();\n return true;\n }\n catch (error) {\n await session.abortTransaction();\n session.endSession();\n throw error; \n }\n })\n .catch(e => {\n return;\n });\n}",
"text": "Hi @wan,I have provided my configuration below:MongoDB server version is 4.2.3 and it is hosted on atlas.\nDriver - NodeJS and version v12.13.1No TransientTransactionError is occuring and actual error is Given transaction number 2 does not match any in-progress transactions. The active transaction number is 1Also I am not performing any operations like creating collection, an index, dropping collection and other types of DDL operationsI have provided my code below.\nI am running a job to update order status as completed, creating and updating documents in transaction.",
"username": "Naif_Almuqbel"
},
{
"code": "updateRequestCronJob catch{ MongoError: Given transaction number 3 does not match any in-progress transactions. The active transaction number is 2\n at Connection.<anonymous> (/vbox/mongo/nodejs/node_modules/mongoose/node_modules/mongodb/lib/core/connection/pool.js:450:61)\n at Connection.emit (events.js:182:13)\n at processMessage (/vbox/mongo/nodejs/node_modules/mongoose/node_modules/mongodb/lib/core/connection/connection.js:384:10)\n at Socket.<anonymous> (/vbox/mongo/nodejs/node_modules/mongoose/node_modules/mongodb/lib/core/connection/connection.js:586:15)\n at Socket.emit (events.js:182:13)\n at addChunk (_stream_readable.js:283:12)\n at readableAddChunk (_stream_readable.js:264:11)\n at Socket.Readable.push (_stream_readable.js:219:10)\n at TCP.onread (net.js:639:20)\n errorLabels: [ 'TransientTransactionError' ],\n operationTime:\n Timestamp { _bsontype: 'Timestamp', low_: 8, high_: 1582506018 },\n ok: 0,\n errmsg:\n 'Given transaction number 3 does not match any in-progress transactions. The active transaction number is 2',\n code: 251,\n codeName: 'NoSuchTransaction',\n '$clusterTime':\n { clusterTime:\n Timestamp { _bsontype: 'Timestamp', low_: 14, high_: 1582506018 },\n signature: { hash: [Binary], keyId: 0 } },\n name: 'MongoError',\n [Symbol(mongoErrorContextSymbol)]: {} }\nasync function runTransactionWithRetry(txnFunc, data) {\n let session = await mongoose.startSession(); \n session.startTransaction(); \n try {\n await txnFunc(data, session);\n } catch (error) {\n console.log('Transaction aborted. Caught exception during transaction.');\n\n // If transient error, retry the whole transaction\n if (error.errorLabels && error.errorLabels.indexOf('TransientTransactionError') >= 0) {\n console.log('TransientTransactionError, retrying transaction ...');\n await runTransactionWithRetry(txnFunc, data);\n } else {\n session.abortTransaction();\n console.log(\"runTransactionWithRetry error: \");\n throw error;\n }\n }\n}\n\nasync function commitWithRetry(session) {\n try {\n await session.commitTransaction();\n console.log('Transaction committed.');\n } catch (error) {\n if (\n error.errorLabels &&\n error.errorLabels.indexOf('UnknownTransactionCommitResult') >= 0\n ) {\n console.log('UnknownTransactionCommitResult, retrying commit operation ...');\n await commitWithRetry(session);\n } else {\n console.log('Error during commit ...');\n throw error;\n }\n }\n}\n",
"text": "Thanks for providing more information and code snippets.It is likely that you’re not seeing the full error message because at the end of function updateRequestCronJob the catch does not throw error. The full error message should look something similar as below:This is likely related to mongoose issue #7502, and SERVER-36428.You can try to incorporate logic to retry the transaction for transient errors, and also retry the commit for unknown commit error. For example:See more information and snippets on Transactions In Applications: Core API (Switch the code tab to Node.JS)Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Given transaction number does not match any in-progress transactions | 2020-02-19T05:37:38.302Z | Given transaction number does not match any in-progress transactions | 16,099 |
null | [] | [
{
"code": "var express = require('express');\nvar app = express();\n\n// \"/\" => \"Hi there\"\napp.get('/', function(req, res) {\n res.send('hi there');\n});\n\napp.listen(27017, function() {\n console.log('i heard you');\n});\nnode ./app.js",
"text": "I have MongoDB installed and I can view it through Compass. Now i want to create an Express script that will connect to MongoDB and show data on my browser. I am working through the Web Developers Bootcamp on Udemy, and so far i have this:When I run this from PowerShell, using: node ./app.js, I get “i heard you” (from my listen function above. So I am apparently “connected” to port 27017. Now I want to see “i heard you” in the browser. I assume it should look, in browser address bar, 192.168.0.1/localhost:27017, with my “i heard you” in the browser window.So sorry if this is all over the place.Thanks for any help!",
"username": "RMS"
},
{
"code": "// app.js (or index.js)\nconst express = require('express')\nconst app = express()\nconst port = 3000\n\napp.listen(port, () => console.log('Example app listening on port ' + port))\n\n// This line of code is for printing Hello World! in the browser,\n// when you enter in the browser url bar: http://localhost:3000/\napp.get('/', (req, res) => res.send('Hello World!'))\n\n// This variable is populated in the findDocuments function (see below \"Using Mongo\")\nvar mongoDocsToDisplay = null;\n\n// This line of code will print the collection's documents in the browser,\n// when you enter in the browser url bar: http://localhost:3000/mongo\napp.get('/mongo', (req, res) => res.send(\n mongoDocsToDisplay\n));\n\n// Using MongoDB:\n\nconst MongoClient = require('mongodb').MongoClient;\nconst assert = require('assert');\nconst url = 'mongodb://localhost:27017';\n\nconst dbName = 'test';\nconst client = new MongoClient(url, { useNewUrlParser: true, useUnifiedTopology: true } );\n\n// Connect to MongoDB server, run the findDocuments function and close the connection.\nclient.connect(function(err) {\n\n assert.equal(null, err);\n console.log('Connected successfully to MongoDB server on port 27017');\n const db = client.db(dbName);\n\n findDocuments(db, function() {\n client.close();\n });\n});\n\nconst findDocuments = function(db, callback) {\n\n const collection = db.collection('test');\n\n collection.find({}).toArray(function(err, docs) {\n assert.equal(err, null);\n console.log('Found the following documents:');\n console.log(docs)\n\tmongoDocsToDisplay = docs;\n callback(docs);\n });\n}\n",
"text": "i want to create an Express script that will connect to MongoDB and show data on my browser.Okay, this about using Express (a web framework for NodeJS) and MongoDB. I put together this basic code from samples at: Getting started with Express and Quick Start Examples with MongoDB NodeJS DriverThe code works fine, and try to follow it by perusing the code comments.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This is helpful! Thanks!",
"username": "RMS"
},
{
"code": "3000http://localhost:27017http://localhost:3000/localhost192.168.0.127017console.log()res.send()",
"text": "So I am apparently “connected” to port 27017.Your Node application is actually listening to port 27017. Since 27017 is the default port used by a local MongoDB installation, typically you’d use a different port for your application (such as 3000 in @Prasad_Saya’s example).Now I want to see “i heard you” in the browser. I assume it should look, in browser address bar, 192.168.0.1/localhost:27017,The correct reference would be http://localhost:27017 for your original config or http://localhost:3000 for Prasad’s example. Your original URI is requesting the contents of the /localhost path from the host 192.168.0.1 on port 27017, which probably isn’t what you were expecting.The console.log() command logs to the scripting console. To return a response to a browser/client using Express, you should call methods on Express’ response object. For example, your original script used res.send().Regards,\nStennie",
"username": "Stennie_X"
}
] | Using a nodejs script to connect to localhost 27017 | 2020-02-21T22:49:42.993Z | Using a nodejs script to connect to localhost 27017 | 12,839 |
null | [
"upgrading"
] | [
{
"code": "sudo apt-get --only-upgrade install mongodb-org\nReading package lists... Done\nBuilding dependency tree\nReading state information... Done\nmongodb-org is already the newest version (4.2.3)\nMongoDB shell version v4.2.2\n\nMongoDB server version: 4.2.2\n\n/usr/bin/mongod --version\ndb version v4.2.2\n",
"text": "Hi! I have upgraded MongoDB community edition to 4.2.3 on Ubuntu 18.04. Here is result of the command ran second time, I don’t have access to the original command as I rebooted the machine:But when I run any tool, it says 4.2.2:",
"username": "Dmytro_Bogdanov"
},
{
"code": "",
"text": "Reinstalled, no avail:Preparing to unpack …/mongodb-org_4.2.3_amd64.deb …\nUnpacking mongodb-org (4.2.3) …\nSetting up mongodb-org (4.2.3) …MongoDB shell version v4.2.2\nMongoDB server version: 4.2.2",
"username": "Dmytro_Bogdanov"
},
{
"code": "sudo apt-get install --reinstall mongodb-org 4.2.3",
"text": "Did you try to re-install the newest version?Type:\nsudo apt-get install --reinstall mongodb-org 4.2.3",
"username": "coderkid"
},
{
"code": "sudo apt-get purge mongodb-org*\n",
"text": "OK, the answer is to purge the package:",
"username": "Dmytro_Bogdanov"
}
] | Upgraded to 4.2.3 but my tools still say 4.2.2 | 2020-02-22T18:29:05.811Z | Upgraded to 4.2.3 but my tools still say 4.2.2 | 1,924 |
null | [] | [
{
"code": "ObjectId(1) 123\nid 1\nname shubham\nCreateDate 10/11/12\n\nObjetId(2) 456\nid 2\nname anant\nCreateDate: 10/11/12\nvar cursor =db.getCollection('dummydata').find({name : {$in : \n[/shubham/i,/anant/i]}})\nwhile (cursor.hasNext()) {\nvar record = cursor.next();\nprint(record.id +'|'+ record.name + '|' + record.CreateDate)\n} \n1|shubham|10/11/12\n2|anant|undefined\n",
"text": "I am trying to fetch data in field which contains expression \" : \", but i am getting error of Unexpected token :For eg:\nMy collection contains 2 documents like-while fetching data using qurey -I am getting this output-even though there is CreatDate field in id =2,It is showing undefined,\nPlease let me know how to troubleshoot this or some other query to fetch data which include\" : \" expression in the field.",
"username": "shubham_udata"
},
{
"code": "",
"text": "If I were you I would correct my data first. I do not think it is a good idea to have a field named “CreateDate” in one document while it is named “CreateDate:” in another one. Especially if you want to loop and manipulate your data in a consistent way.",
"username": "steevej"
},
{
"code": "print(record[\"id\"] +'|'+ record[\"name\"] + '|' + (record[\"CreateDate\"] || record[\"CreateDate:\"]))\n",
"text": "print(record.id +’|’+ record.name + ‘|’ + record.CreateDate)try",
"username": "coderkid"
},
{
"code": "",
"text": "Thank you for your reply. My approach was aslo same as yours.But sometime we are bound to not use update query …\nbut anyways thanks…I got result with another approach,…",
"username": "shubham_udata"
},
{
"code": "",
"text": "Hi Thank you,\nThis worked fine…",
"username": "shubham_udata"
}
] | How to fetch data using query in field which contains " : " | 2020-02-22T18:29:38.736Z | How to fetch data using query in field which contains ” : ” | 3,361 |
null | [] | [
{
"code": "",
"text": "Hi\nthis should be simple, starting with Saplings:Taken from the documentation:via the invite button on your user page, or at the bottom of a topic.I unfortunately do not find the button, is this keep “off” in the beta or did I over see something?\nThanks a lot\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hmm let me take a look…",
"username": "Jamie"
},
{
"code": "",
"text": "On initial check, it looks like the invite tokens might not play nicely with our SSO, but if this is important, I can investigate further. As we don’t have an invite-only forum, I don’t expect this is critical, but happy to hear use cases if we think this is vital to our community.",
"username": "Jamie"
},
{
"code": "",
"text": "Guess this is not a blocker…\nA nice use case would be to invite straight from a topic to a person in mind\nAn other, for a general invitation button: it is more comfortable to click a button and enter an address than writing a message in a third tool and adding a link …Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "I will look into what our options might be.",
"username": "Jamie"
},
{
"code": "",
"text": "Or just easy way to signup to forum.Every time I share a posting here with my colleagues who is not member, i need to explain them how to sign up…",
"username": "coderkid"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to invite a friend? | 2020-02-11T07:40:43.985Z | How to invite a friend? | 4,494 |
null | [
"queries"
] | [
{
"code": "",
"text": "I need to remove the data in one of the collection in dev enrionment but only remove the data that is created until yesterday.id field documents are numeric values(like 1,2,3 etc)… i am working in mongo shell looking for help on this. There are no date fields in my json that i could use filter criteria to remove.",
"username": "nagaraju_voora"
},
{
"code": "",
"text": "@nagaraju_vooraThere are no date fields in my json that i could use filter criteria to remove.id field documents are numeric values(like 1,2,3 etc)This may make things difficult.However…the collection in dev enrionmentAs this is a dev environment and I am assuming by ‘yesterday’ you mean any giving yesterday and not exactly Feb 18? Could you not just clear the database and await fresh data?I could be wrong but I think you would need some date field stored in the documents you wish to remove by a certain date",
"username": "Natac13"
},
{
"code": "_idObjectId$natural",
"text": "Timestamp would be ideal. If _id is the default ObjectID then you can use that(but I think you said this was an int, unless that was id not _id):sorting on an _id field that stores ObjectId values is roughly equivalent to sorting by creation time.\nhttps://docs.mongodb.com/manual/reference/bson-types/#objectidPossibly also using $natural in sort.",
"username": "chris"
},
{
"code": "{\n \"created_on\": {\n \"$lte\": new Date((new Date().getTime() - (24 * 60 * 60 * 1000)))\n }\n}",
"text": "I am assuming you have a field called “created_on” and it is date field, so you can subtract 24 hours and find the dates less than that value.",
"username": "coderkid"
},
{
"code": "",
"text": "I need ability remove the documents for a given date…my documents in mongo sb collection does not have any date related elements",
"username": "nagaraju_voora"
},
{
"code": "",
"text": "I really do not understand. You do not have or want any field with a date in your documents yet you want to be able to delete document for a given date. But this is comparable to I want delete all Blue items but my items have no colour. Impossible",
"username": "steevej"
},
{
"code": "",
"text": "Not entirely impossible in this case. It depends. I gave two possible alternatives when there is no deicated timestamp field.Given a replica set and using the oplog I am sure there are other possibilities.Is the dataset poorly designed for this operation? Almost certainly.",
"username": "chris"
},
{
"code": "$natural$naturalcreated_atcreated_at",
"text": "Timestamp would be ideal. If _id is the default ObjectID then you can use that(but I think you said this was an int, unless that was id not _id):id field documents are numeric values(like 1,2,3 etc)…Possibly also using $natural in sort.The $natural parameter returns items according to their natural order within the database. This ordering is an internal implementation feature, and you should not rely on any particular structure within it.I am assuming you have a field called “created_on” and it is date fieldI think this can only be assumed if @nagaraju_voora was using Mongoose as an ORM which adds created_at with timestampsI think the best way is to add a created_at field or some sort of date field. Keeping in mind this is only a dev environment things should be easy to change no?",
"username": "Natac13"
},
{
"code": "import pymongo\nfrom bson.objectid import ObjectId\nfrom datetime import datetime, timedelta\n\nclient = pymongo.MongoClient()\noplog = client.local.oplog.rs\ndb = client['dev']\nprevious_day = datetime.now() - timedelta(days=1)\n\nops = oplog.find({\"ns\":\"dev.users\", \"op\": \"i\", \"wall\": {\"$lt\": previous_day}})\n\nfor op in list(ops):\n db[\"users\"].delete_one({\"_id\": ObjectId(op['o']['_id'])})\n",
"text": "Hey @nagaraju_voora while I can do this, you really probably shouldn’t. At least not this way, I strongly agree with the previous posters that the prefered method would be to add a timestamp to your documents. That being said, you can probably do what you’ve asked using the oplogThis code is incredibly hacky and I do not recommend you run it in any environment, not even dev… but as a purely academic exercise of “is it possible”, then the answer is yes.You can find out more about the oplog in the docs",
"username": "aaronbassett"
},
{
"code": "",
"text": "I think you just filled the:Given a replica set and using the oplog I am sure there are other possibilities.",
"username": "chris"
},
{
"code": "",
"text": "How does the document in your collection look like?",
"username": "Victory_Osikwemhe"
}
] | How to remove documents by certain date (like until yesterday) | 2020-02-19T16:41:41.527Z | How to remove documents by certain date (like until yesterday) | 26,734 |
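As a rough illustration of the ObjectId-based idea chris mentions in the thread above, here is a mongo shell sketch. It only applies if _id holds default ObjectIds (not the numeric id values the original poster describes), because an ObjectId embeds its creation time; the collection name "users" is a placeholder.

```javascript
// mongo shell sketch: build an ObjectId whose embedded timestamp is "yesterday"
// and remove everything created before it. Only valid for default ObjectId _ids.
var cutoff = new Date();
cutoff.setDate(cutoff.getDate() - 1);                    // yesterday
var cutoffId = ObjectId(
    Math.floor(cutoff.getTime() / 1000).toString(16) + "0000000000000000"
);

// Preview what would be removed before actually deleting:
db.users.find({ _id: { $lt: cutoffId } }).count();
db.users.deleteMany({ _id: { $lt: cutoffId } });
```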
null | [] | [
{
"code": "",
"text": "Hi all,I would like to compare and sync collections manually and automatically. By syncing, I mean adding new documents to the target DB and removing obsolete documents from it.\nI have found 2 tools, that seem to do the job, at least manually:https://studio3t.com/knowledge-base/articles/compare-mongodb-collections/?utm_source=post&utm_medium=fb&utm_campaign=3tslpage\nhttps://mingo.io/feature/compare-n-syncAre there any other ways to do it? If possible by using free tools.",
"username": "N_E"
},
{
"code": "",
"text": "What is the purpose of this?Replica set already sync the data from primary to secondary servers.However if you want an offline copy I would recommend to use https://docs.mongodb.com/manual/reference/program/mongodump/",
"username": "steevej"
},
{
"code": "",
"text": "and check this page out https://docs.mongodb.com/manual/tutorial/backup-and-restore-tools/",
"username": "coderkid"
},
{
"code": "",
"text": "Thanks for the answers!@steevejWhat is the purpose of this?I don’t think it’s necessary but I’ll give an example: 2 mongoDB servers A,B which in the beginning are synced with some blogs. On server A new blogs are being added or removed. I would like to know which of them were added and which were removed in comparison to server B. Then accordingly add them or removed them from B. Simple.Replica set already sync the data from primary to secondary servers.No, I don’t mean replication.@coderkid: I did not tried it yet, but the idea now is to use mongoexport to export the collections in subsequent times in JSON format and compare the JSON files. Then import or remove the documents from the other server depending on the diff file.",
"username": "N_E"
},
{
"code": "",
"text": "Thanks. Your explications help provide a better answer. I think you will like what the change stream API provides. Take a look at https://docs.mongodb.com/manual/changeStreams/",
"username": "steevej"
},
{
"code": "",
"text": "@steevej:\nThanks. It looks interesting.",
"username": "N_E"
}
] | Syncing Collections from different servers | 2020-02-21T16:10:47.267Z | Syncing Collections from different servers | 7,304 |
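For the change stream suggestion in the thread above, here is a minimal mongo shell sketch of watching inserts and deletes on one deployment so they can be replayed on the other. The database and collection names ("blogApp", "blogs") are placeholders, the replay is only printed rather than applied to server B, and change streams require the watched deployment to be a replica set.

```javascript
// mongo shell sketch: mirror inserts/deletes from server A's "blogs" collection.
var sourceColl = db.getSiblingDB("blogApp").blogs;
var cursor = sourceColl.watch([], { fullDocument: "updateLookup" });

while (cursor.hasNext()) {
    var event = cursor.next();
    if (event.operationType === "insert") {
        // replay the insert on server B (e.g. via a second connection)
        printjson({ addToB: event.fullDocument });
    } else if (event.operationType === "delete") {
        // remove the corresponding document from server B
        printjson({ removeFromB: event.documentKey });
    }
}
```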
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hey everyone. So currently I store my json data in a collection unique to that type of information. So all entries are just the same json data structure with varying values for my inner keys. I was wondering if I ever wanted to map a more advanced relationship with other collection data or or other types of data how might I go about doing this? Or how should I organize this structure?",
"username": "Faraz_Ahmad"
},
{
"code": "",
"text": "You might want to look at https://docs.mongodb.com/manual/aggregation/.",
"username": "steevej"
},
{
"code": "",
"text": "MongoDB’s flexible schema allows data design based upon your needs. The common data relationships one-to-one, one-to-many and many-to-many can be modeled with MongoDB data. The relationships between various data entities can be organized by embedding or referencing.If you already have the data in place or in the process of designing, there is the option to design / re-design the model using the modeling techniques specified (See Data Models).The modeled data can be queried using the MongoDB query language, Aggregation framework or your favorite programming language / platform (like Java, Python, NodeJS, etc.).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "I highly recommend you take an introductory class from MongoDB University. The introductory courses explain how to structure a document, which is basically Extended JSON. These course are offered in popular programming languages such as .NET, Node.js, and so on. They are free of charge and I feel sure you will get a lot of benefit from such a course.Thanks so muchBob",
"username": "Robert_Cochran"
}
] | Best practices for mongodb json data | 2020-02-20T17:32:29.811Z | Best practices for mongodb json data | 2,388 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "I have a database with over 1000 reviews, I am trying to list all of the reviewers who have written at least 5 reviews, I am trying to show the reviewer ID, reviewer name, and number of reviews for each user and then list the names in alphabetical order, I have attempted it several times and just do not understand, my code is below;db.P_reviews.aggregate([{\ngroup:{ _id: { \"_reviewerID\" : \"_reviewerID\" },\n{ $match: { “_reviewCount”:{$gt:4}}},\n{ $project : { “_reviewerID”:1, “_reviewerName”:1, “_reviewerCount”:1}}}}])I do not understand what is wrong here",
"username": "samhuss123"
},
{
"code": "",
"text": "First it is $group not group.Second please concentrate your question in one thread. Please implement correctly what 3 different persons proposed. The $group is far from what was given, it is missing the $sum operation and the way the to have the name in the next stage.As already mentioned in the 2 other threads, $project and $match are useless as after the $group you are only left with _reviewerID.",
"username": "steevej"
}
] | What is wrong with this aggregate pipeline method? | 2020-02-21T16:10:09.805Z | What is wrong with this aggregate pipeline method? | 1,944 |
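For reference, a minimal corrected version of the pipeline discussed in this thread might look like the following, assuming the collection and field names shown in the original post (P_reviews, _reviewerID, _reviewerName); adjust them to the actual schema.

```javascript
db.getCollection('P_reviews').aggregate([
    { $group: {
        _id: "$_reviewerID",                       // one bucket per reviewer
        reviewerName: { $first: "$_reviewerName" },// carry the name forward
        reviewCount: { $sum: 1 }                   // count their reviews
    } },
    { $match: { reviewCount: { $gte: 5 } } },      // at least 5 reviews
    { $sort: { reviewerName: 1 } }                 // alphabetical by name
])
```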
null | [] | [
{
"code": "",
"text": "I have started working on mongodb a month ago and going well. I fell in love with this actually.going to the topic:I need to create a new collection by joining 4 collections. Successfully did this, but I don’t want to execute this query on entire table every time. So I thought of using Merge. So the question here is , data in these 4 collections can change at any time. How can I bring data that was changed in the collections that are joined to main collection when no data changed in the main collection?For ex: Let’s say I am joining collection A (this is the main one with 15 million docs) with collections B,C, and D. If there is any changes in A that can be pulled and joined with other collections… no issues here.But take case where collection A doesn’t have any changes in past 1 hour, but collection C has some changes. Now how can I bring changes in the collection C (it is only used in Lookup) ?Is there a better way to achieve this ?Thanks in advance.\nChuck",
"username": "Chuck_Paul"
},
{
"code": "",
"text": "If I understand you right, I do not think there is another way to do it without scanning all collections… UNLESS, there is a field indicates that document has changed since last merge… a flag; change = true or timestamp; changed_at …",
"username": "coderkid"
}
] | Merge operation when using multiple collections | 2020-02-21T16:10:59.934Z | Merge operation when using multiple collections | 1,480 |
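A sketch of the timestamp idea suggested in the thread above: if the source collections carry an updatedAt field (an assumption, not something stated in the thread), the $merge pipeline can be re-run over only the documents changed since the last run. Collection names A, C, "joined" and the field cId are placeholders.

```javascript
var lastRun = ISODate("2020-02-21T00:00:00Z");   // persisted from the previous run

// Refresh driven by changes in the main collection A:
db.A.aggregate([
    { $match: { updatedAt: { $gt: lastRun } } },          // only changed docs in A
    { $lookup: { from: "C", localField: "cId", foreignField: "_id", as: "c" } },
    { $merge: { into: "joined", on: "_id", whenMatched: "replace", whenNotMatched: "insert" } }
]);

// When only C changed, drive the refresh from C instead, so the affected A
// documents are rebuilt even though A itself has no recent changes:
db.C.aggregate([
    { $match: { updatedAt: { $gt: lastRun } } },
    { $lookup: { from: "A", localField: "_id", foreignField: "cId", as: "a" } },
    { $unwind: "$a" },
    { $replaceRoot: { newRoot: "$a" } },                   // back to A documents
    { $lookup: { from: "C", localField: "cId", foreignField: "_id", as: "c" } },
    { $merge: { into: "joined", on: "_id", whenMatched: "replace", whenNotMatched: "insert" } }
]);
```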
[
"aggregation"
] | [
{
"code": "",
"text": "Hello,I have a database with over 1000 reviews of products, I need to find all of the people with at least 3 reviews, and I need to show their reviewer ID, reviewer name, and the number of reviews they’ve writtenI understand that I have to use an aggregate method but I am very confused on how to do it, so any help would be appreciated\nScreenshot 2020-02-20 at 20.53.171103×510 46.8 KB\n",
"username": "samhuss123"
},
{
"code": "",
"text": "If you are configured confused with aggregation, my best advice is to take the free M121 course from MongoDB university. If your data is spread into more than one collection then the aggregation stage $lookup will be useful. The $group stage could also come in handy to group reviews by reviewer.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks ill look into it",
"username": "samhuss123"
},
{
"code": "$group$match$project$addFields",
"text": "MongoDB database data can be queried can be queried in different ways to get the data you want and in the required format.When there is a need for something likeI need to find all of the people with at least 3 reviews, and I need to show…it is the Aggregation query. With aggregation query, the collection’s documents are processed through stages (together called as pipeline) to get the desired output.For example, in this case, (1) it is grouping by people and counting, (2) matching (the count is at least 3), and finally, (3) projecting the required fields (reviewer ID, reviewer name, and the number of reviews).The aggregation pipeline stages in this case are: $group, $match and the $project (or $addFields). Aggregation Pipeline Quick Reference has links to the stages and examples.Also, MongoDB Compass GUI tool has this Aggregation TAB, where one can build the query using GUI (like list boxes and buttons). A nice feature with this is as you build each stage, you can see the transformed data at that stage in an adjacent window (and it will be input to the next stage in the pipeline).",
"username": "Prasad_Saya"
},
{
"code": "db.coll.aggregate([{$group:{\n _id:\"$reviewerID\",\n name:{$first:\"reviewerName\"}.\n count:{$sum:1}\n}},\n{$match:{\n count:{$gt:2}\n}}\n])",
"text": "Sounds like you want something like this:",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Thanks but does not work",
"username": "samhuss123"
},
{
"code": "",
"text": "It cannot work as is because the field names in your collection are not the same. From your screenshot in https://www.mongodb.com/community/forums/t/aggregate-pipeline-help/47329?u=steevej-1495 it looks like your fields start with an underline and may be another character (the d is _id) that you did not include.",
"username": "steevej"
}
] | Aggregate Pipeline Method Help | 2020-02-20T20:50:28.675Z | Aggregate Pipeline Method Help | 2,747 |
|
[] | [
{
"code": "",
"text": "Attached sceenshot illustrates the use case. If I have to go to exact previous page where I can see the list of new topics, I can’t do that. I will need to click on MongoDB logo and then going again to the New section. From the usability prespective, giving back arrow is good option. isn’t it?Just sharing as I caught it and felt like it would be good option.\nNo_way_back1489×813 73.4 KB\nEither back button or having that homepage menu on all other pages can improve this.",
"username": "viraj_thakrar"
},
{
"code": "u",
"text": "@viraj_thakrar Thanks for the suggestion.This is the same request discussed on Easily Return to Previous Page. One available option is the u keyboard shortcut (go back to previous page). There are a few other tips in the earlier discussion.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you @Stennie_X.I have searched through the older topics before posting this but didn’t realize it was already there.Yes I found that shortcuts list which can make things really easy.Thanks ",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "And also, did you try the “hamburger” menu?\nScreen Shot 2020-02-21 at 10.10.011276×936 83.6 KB\n",
"username": "coderkid"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | There is no way of getting back to the last page or switching to other page if we visit detail page of any topic | 2020-02-21T12:12:18.210Z | There is no way of getting back to the last page or switching to other page if we visit detail page of any topic | 4,254 |
|
null | [] | [
{
"code": "",
"text": "Hi, I’m Moises, happy to be a part of this community and deeply grateful of MongoDB efforts to help the magnificent Australia to bounce back from these challenging times.",
"username": "Moises_Jafet_Corneli"
},
{
"code": "",
"text": " also from Toronto. Welcome to the community!",
"username": "alexbevi"
},
{
"code": "",
"text": "not .to but W on the 401 ",
"username": "chris"
},
{
"code": "",
"text": "Hi Moises - we’re glad to have you here!",
"username": "Jamie"
},
{
"code": "",
"text": "Does MongoDB have an office in Toronto? I am from London On.",
"username": "Natac13"
},
{
"code": "",
"text": "Some of the team is currently working out of the WeWork space at 1 University (see Office Locations).",
"username": "alexbevi"
}
] | Greetings from Toronto! | 2020-02-20T17:32:20.430Z | Greetings from Toronto! | 2,016 |
null | [] | [
{
"code": "",
"text": "As many developer or students asked me why to choose mongodb over other databases",
"username": "Pramod_Kumar_59437"
},
{
"code": "",
"text": "Hey @Pramod_Kumar_59437My reasons:",
"username": "Natac13"
},
{
"code": "",
"text": "Here are some of mine:",
"username": "steevej"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Why we choose mongodb | 2020-02-21T13:12:53.453Z | Why we choose mongodb | 1,473 |
null | [] | [
{
"code": "",
"text": "Hi all,I’m Kevin Adistambha, part of the MongoDB Developer Relations team based in Sydney, Australia. I joined MongoDB in 2015. Previously, I was an academic for a number of years doing research in the Multimedia space (specifically human motion detection and MPEG-7). I am originally from Jakarta, Indonesia, and feel equally at home in Sydney and in Jakarta.Our aim as a team is to help you to be successful with MongoDB. Feel free to ask me about schema design considerations, MongoDB official drivers, or general troubleshooting tips.Kevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin, and everyone,I’m Venkat.I’ve been a MongoDB fan for the past 5+ years. I look\nforward to interact with you all.Thank you & Best Regards,\nVenkat.",
"username": "Venkat_Ramakrishnan"
},
{
"code": "",
"text": "Hi Kevin,Im happy to know about this community. I’m heavily using mongodb in my daily task.Regards,\nSyukur",
"username": "Syukur_Md_Kassim"
}
] | 🌱 Hi I'm Kevin from MongoDB | 2020-02-17T00:04:04.886Z | :seedling: Hi I’m Kevin from MongoDB | 2,517 |
null | [
"atlas"
] | [
{
"code": "",
"text": "Trying to connect to a free-tier cluster to begin Mongo training. The cluster created successfully, but I cannot connect to it in MongoDB Atlas. I enter my email and password, and then the Login box greys out and stays that way. If I position the cursor on the Login box, I get the circle-slash symbol.Suggestions?",
"username": "Ralph_Jones"
},
{
"code": "",
"text": "What browser you are using? If you check browser developer console, do you see some errors there? I just tried myself logging in to my MongoDB Atlas with few different browsers, and it seems to be working. Have you tried using some external client, for example MongoDB Compass?",
"username": "kerbe"
},
{
"code": "",
"text": "What is your browser? and version? do you see a symbol to similar to this ∅ ???",
"username": "coderkid"
}
] | Atlas connect issue | 2020-02-20T22:31:24.249Z | Atlas connect issue | 1,822 |
[] | [
{
"code": "",
"text": "My MongoDB is using way too much CPU usage on my server. So I’m wondering on what I am doing wrong and what I can fix.\nmongodb700%1317×218 30.6 KB\n \nsysteminfo1348×503 51.8 KB\n",
"username": "cai_martin"
},
{
"code": "",
"text": "Hey @cai_martinI think you will have to shared your config setup if you are looking for help to diagnose the CPU usage problem.\nOr how many applications do you have connected to the db?",
"username": "Natac13"
},
{
"code": "",
"text": "Hi @cai_martin!Your server has 16 cores. Your mongod process is using about 7 out of 16 cores. Why do you think 700% is way too much CPU? I’m just trying to understand the context behind your question.",
"username": "logwriter"
}
] | mongodb using 700% cpu usage | 2020-02-19T22:57:08.868Z | mongodb using 700% cpu usage | 2,294 |