Columns: image_url (string, lengths 113-131), tags (list), discussion (list), title (string, lengths 8-254), created_at (string, length 24), fancy_title (string, lengths 8-396), views (int64, 73-422k)
null
[ "data-modeling", "schema-validation" ]
[ { "code": "", "text": "Can we have a field (in json schema) and make sure it will be IPV4 or IPV6 only. I know RegEx can work here - the question is: is there something out of the box?\nThanks", "username": "Avishay_Balderman" }, { "code": "{\n\"$schema\": \"http://json-schema.org/draft-03/schema#\",\n\"title\": \"test\",\n\"type\": \"object\",\n\"properties\": {\n \"type\": {\"enum\": [\"spice\", \"vnc\"]},\n \"listen\": {\n \"type\": \"string\",\n \"format\": \"ip-address\"\n }\n}, \n\"additionalProperties\": false\n}\n", "text": "Like the below:", "username": "Avishay_Balderman" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Schema validation at the collection level - IP address
2022-06-28T12:56:42.551Z
Schema validation at the collection level - IP address
2,696
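For reference, MongoDB's $jsonSchema validator implements only a subset of JSON Schema and does not support the "format" keyword, so a "pattern" regex on the string field is the practical route. A minimal mongosh sketch, assuming a hypothetical "hosts" collection and an IPv4-only field (an IPv6 alternation could be appended to the same pattern):

db.createCollection("hosts", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["listen"],
      properties: {
        listen: {
          bsonType: "string",
          // IPv4 dotted-quad only; each octet restricted to 0-255
          pattern: "^((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$"
        }
      }
    }
  }
})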
https://www.mongodb.com/…_2_1024x340.jpeg
[ "atlas", "flutter", "lebanon-mug" ]
[ { "code": "Backend & System Engineer | MUG Leader | GDG Organizer | GDSC FounderToters CTOICT ConsultantUI/UX Engineer | Front End Engineer | Women Techmakers Ambassador | GDG OrganizerSenior Application DeveloperFront end Engineer | GDG Organizer", "text": "The MongoDB User Group in Lebanon is pleased to invite you to the first on-site collaborative conference between MUG, GDG and the Women Tech makers team in Lebanon. Join us, next Saturday the 2nd of July, for a full day of technology , where we will connect, learn and network together in an amazing on-site experience . With a lot of workshops, tech talks, and panel interviews covering many emerging technologies, starting from MongoDB, to the role of UI/UX, Flutter, and more. .  2022-07-02T07:00:00Z→2022-07-02T12:00:00ZOn-Site : Chamber of Commerce, Industry, Agriculture of Tripoli and North CCIAT Tripoli, North Lebanon Online - via Google Meet\nimage1400×466 78.6 KB\n   Backend & System Engineer | MUG Leader | GDG Organizer | GDSC Founder   Toters CTO   \nICT Consultant   \nUI/UX Engineer | Front End Engineer | Women Techmakers Ambassador | GDG Organizer   \nSenior Application Developer   \nFront end Engineer | GDG OrganizerJoin the Lebanon MUG Group to stay updated with our upcoming events.", "username": "eliehannouch" }, { "code": "", "text": "So excited to be a speaker and also a part of the organizers team representing the woman techmakers ambassadors, having the opportunity to give back to this amazing community, aiming to empower every tech enthusiast and specially the woman’s in tech .", "username": "Darine_Tleiss" } ]
Lebanon MUG: I/O Extended Conference
2022-06-28T16:18:07.821Z
Lebanon MUG: I/O Extended Conference
4,628
null
[ "queries", "node-js" ]
[ { "code": "var MongoClient = require('mongodb').MongoClient;\nvar url = \"mongodb://localhost:27017/\";\n\nMongoClient.connect(url, function(err, db) {\n if (err) throw err;\n var dbo = db.db(\"Testingdb\");\n\n dbo.collection(\"regextest\").find( { $where: function() { return this.myregex.test('a') } } ).toArray(function (err,result) {\n if (err) {\n console.log(err);\n } else {\n console.log(result);\n }\n });\n});\n", "text": "I have a database and collection set up which has a BSONRegExp field named “myregex”. I have a number of entries each with different regex strings.When I run the following against the database via shell I get a correct response back:db.testcol.find(function() { return this.myregex.test(‘a’); } )As you can see I am using the JavaScript test() method to check whether or not there is a match returned. I have attempted to migrate this over to NodeJS but I am unable to find any documentation as to how this query should be formed. I have been using the MongoDB Node.js Driver MongoDB Node.js Driver The following is the cloest I am able to get however it returns the entire collection and not just those matching the regex match to the inputted string (the string, in this case, being “a”).Any assistance or suggestions would be appreciated.", "username": "Hans-Eirik_Hanifl" }, { "code": "", "text": "I tried to edit my initial post to correct the break-in script formatting but I believe I don’t have permission to do so. My apologies.", "username": "Hans-Eirik_Hanifl" } ]
Unable to Convert Find Command from Shell into Node.js Query against MongoDB
2022-06-28T15:49:20.175Z
Unable to Convert Find Command from Shell into Node.js Query against MongoDB
1,171
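One detail worth noting for the thread above: $where also accepts the JavaScript expression as a string, which sidesteps having the driver serialize a client-side function. A sketch against the database and collection names from the post, assuming server-side JavaScript is enabled on the deployment:

const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  try {
    const result = await client
      .db("Testingdb")
      .collection("regextest")
      // same predicate as the shell version, passed as a string
      .find({ $where: "this.myregex.test('a')" })
      .toArray();
    console.log(result);
  } finally {
    await client.close();
  }
}

main().catch(console.error);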
null
[ "aggregation", "queries" ]
[ { "code": "'item': [{'adjudication': [{'amount': {'code': 'USD',\n 'system': '4217',\n 'value': 22.51},\n 'category': {'coding': [{'code': 'bb.org/paid_amt',\n 'system': 'bb.org/adjudication'}]},\n 'reason': {'coding': [{'code': 'C',\n 'system': 'bb.org/drug_cvrg_stus_cd'}]}},\n {'amount': {'code': 'USD',\n 'system': '4217',\n 'value': 0},\n 'category': {'coding': [{'code': 'bb.org/discount_amt',\n 'system': 'bb.org/adjudication'}]}}\n ]\n }]\nadjudication: {paid_amt: 22.51, discount_amt: 0}\n", "text": "Hi everyone Vikram here. This is my first question ever so super excited to learn and apologies if the syntax is not up to mark, I will improve with time.Output desired", "username": "Vikram_Jindal" }, { "code": "", "text": "without hardcoding as the valuesSo neither adjudication, category, coding nor code can be hard coded? Are they at least parameters of the query?And you want the value of the field named value of the amount object within the element? Is amount field name hard coded?You also want the field name of the label to be the substring paid_amt from the code that has the prefix bb.org/? Is it always bb.org/ ? Is it always the value after the /?", "username": "steevej" } ]
Variables from values in MongoDB Arrays with variable length
2022-06-26T00:05:08.172Z
Variables from values in MongoDB Arrays with variable length
1,827
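For the sample document above, one way to build the desired object without hard-coding the labels is $map plus $arrayToObject, taking the key from whatever follows "bb.org/" in the first coding code. A sketch assuming a single element in "item" and a single entry in each "category.coding" (the collection name "claims" is a placeholder):

db.claims.aggregate([
  { $set: {
      adjudication: {
        $arrayToObject: {
          $map: {
            // adjudication array of the first (only) item element
            input: { $arrayElemAt: ["$item.adjudication", 0] },
            as: "adj",
            in: {
              // "bb.org/paid_amt" -> "paid_amt"
              k: {
                $arrayElemAt: [
                  { $split: [
                      { $arrayElemAt: ["$$adj.category.coding.code", 0] },
                      "/"
                  ] },
                  1
                ]
              },
              v: "$$adj.amount.value"
            }
          }
        }
      }
  } }
])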
null
[ "java" ]
[ { "code": "", "text": "Hello! The MongoDB Developer Experience team is running a survey of Java developers, and we’d love to hear from you. The survey will take about 5-10 minutes to complete. We’ll be using the feedback in this survey to help us make changes that matter to you.As a way of saying thank you, we’ll be raffling off gift cards (five randomly chosen winners will receive a $150 gift card to a retailer of their choosing).You can find the survey here.Let us know if you have any questions!", "username": "Ashni_Mehta" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
The 2022 MongoDB Java Developer Survey
2022-06-28T14:43:10.056Z
The 2022 MongoDB Java Developer Survey
1,573
null
[ "aggregation" ]
[ { "code": "", "text": "I want to use $lookup to join two collections in my database. I need to take primary keys(\"_id\") from one collection and add them to another collection as “id2” in order to use lookup. How can I perform this operation?", "username": "SR123" }, { "code": "", "text": "I want to copy the “_id” for this collection to another collection and save it as “_id2”in another collection. And them perform $lookup and get both the collections together with my foreignField being “_id2”.", "username": "SR123" }, { "code": "collection_one = ...\ncollection_two = ...\n\ndocument_one = { ... }\ncollection_one.insertOne( document_one ) \ndocument_two = { \"_id2\" : document_one._id }\ncollection_two.insertOne( document_two )\n\nlookup = { \"$lookup\" : {\n \"from\" : \"collection_two\" ,\n \"localField\" : \"_id\" ,\n \"foreignField\" : \"_id2\" ,\n \"as\" : \"two\" \n} } \n\ncollection_one.aggregate( lookup )\n", "text": "Untested:", "username": "steevej" } ]
How do I copy the primary keys of documents from one collection and save them in another collection in a similar database
2022-06-28T08:24:36.464Z
How do I copy the primary keys of documents from one collection and save them in another collection in a similar database
1,584
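If the goal is to back-fill "_id2" for documents that already exist, the copy step from the reply above can also be done server-side in one pass with $merge before running the $lookup. A sketch using the placeholder collection names from the reply:

db.collection_one.aggregate([
  // keep _id and add _id2 carrying the same value
  { $project: { _id2: "$_id" } },
  { $merge: { into: "collection_two", on: "_id", whenNotMatched: "insert" } }
])

// the $lookup from the reply then works unchanged:
db.collection_one.aggregate([
  { $lookup: {
      from: "collection_two",
      localField: "_id",
      foreignField: "_id2",
      as: "two"
  } }
])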
null
[ "monitoring" ]
[ { "code": "", "text": "Hi Team,Can someone guide me on how to collect the historical data of sessions that are running in mongodb server ?\nFor Instance, If I need the queries that are running in a span of 5 min on a particular day, where can I get those details ?Mongodb Profiling is a good option, but we need to capture each and every query.Can we save the db.CurrentOp() to a collection in another database ?Thanks in Advance\nGelli", "username": "Rama_Kiran" }, { "code": "", "text": "Hi Team,Appreciate If we have any response for the above mentioned problem.Thanks", "username": "Rama_Kiran" }, { "code": "", "text": "Hello @Rama_Kiran ,Welcome to the community!! If you are using MongoDB enterprise edition then you can refer to auditing in MongoDB.Else if you are using Atlas cluster M10 or greater, then you can refer to database auditing in Atlas.From developer perspective, you can add a logging layer in between your application and database which could log the queries being sent to database.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hello @Tarun_GaurThank you for your response. However, If we are using community version of MongoDB, Is there a way to collect the historical data of sessions that are running?Thank you\nRama Kiran", "username": "Rama_Kiran" }, { "code": "", "text": "Hello @Rama_Kiran,For community version I would suggest you to go with Developer’s approach and add a logging layer to save session’s data.Tarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Collecting historical data of sessions running in mongodb server
2022-01-05T20:00:58.496Z
Collecting historical data of sessions running in mongodb server
3,552
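On a community deployment, a lightweight option is to poll the $currentOp aggregation stage on a schedule and write each snapshot to a collection; note this only captures what happens to be running at the moment of each poll, so it is not a substitute for the profiler or query logs if every statement must be recorded. A mongosh sketch, with "opshistory.currentOpSnapshots" as a made-up target collection:

const ops = db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleSessions: false } },
  { $match: { active: true } }
]).toArray();

db.getSiblingDB("opshistory").currentOpSnapshots.insertOne({
  capturedAt: new Date(),
  ops: ops
});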
null
[ "sharding", "change-streams" ]
[ { "code": "sizesizeold_sizedeletedsizeold_sizedeleted{'operationType': 'insert', 'clusterTime': Timestamp(1655982833, 4), 'fullDocument': {'_id': 'testobjbar', 'created_at': '1655982833.69878', 'deleted': 0, 'old_size': 0, 'size': 5}, 'ns': {'db': 'test', 'coll': 'col_test'}, 'documentKey': {'_id': 'testobjbar'}}\n{'operationType': 'update', 'clusterTime': Timestamp(1655982835, 1), 'ns': {'db': 'test', 'coll': 'col_test'}, 'documentKey': {'_id': 'testobjbar'}, 'updateDescription': {'updatedFields': {'created_at': '1655982835.02931', 'old_size': 5, 'size': 3}, 'removedFields': []}, 'fullDocument': {'_id': 'testobjbar', 'created_at': '1655982835.02931', 'deleted': 1, 'old_size': 0, 'retain_until': 0.0, 'size': 3}}\n{'operationType': 'update', 'clusterTime': Timestamp(1655982835, 2), 'ns': {'db': 'test', 'coll': 'col_test'}, 'documentKey': {'_id': 'testobjbar'}, 'updateDescription': {'updatedFields': {'deleted': 1, 'old_size': 0}, 'removedFields': []}, 'fullDocument': {'_id': 'testobjbar', 'created_at': '1655982835.02931', 'deleted': 1, 'old_size': 0, 'retain_until': 0.0, 'size': 3}}\ndeletedfullDocumentdeletedupdatedFieldsfullDocumentupdatedFieldsupdatedFieldsfullDocument{'operationType': 'insert', 'clusterTime': Timestamp(1655984572, 3), 'fullDocument': {'_id': 'testobjbar', 'created_at': '1655984572.96063', 'deleted': 0, 'old_size': 0, 'retain_until': 0.0, 'size': 5}, 'ns': {'db': 'test', 'coll': 'col_test'}, 'documentKey': {'_id': 'testobjbar'}}\n{'operationType': 'update', 'clusterTime': Timestamp(1655984581, 1), 'ns': {'db': 'test', 'coll': 'col_test'}, 'documentKey': {'_id': 'testobjbar'}, 'updateDescription': {'updatedFields': {'created_at': '1655984581.21706', 'old_size': 5, 'size': 3}, 'removedFields': []}, 'fullDocument': {'_id': 'testobjbar', 'created_at': '1655984581.21706', 'deleted': 0, 'old_size': 5, 'retain_until': 0.0, 'size': 3}}\n{'operationType': 'update', 'clusterTime': Timestamp(1655984591, 2), 'ns': {'db': 'test', 'coll': 'col_test'}, 'documentKey': {'_id': 'testobjbar'}, 'updateDescription': {'updatedFields': {'deleted': 1, 'old_size': 0}, 'removedFields': []}, 'fullDocument': {'_id': 'testobjbar', 'created_at': '1655984581.21706', 'deleted': 1, 'old_size': 0, 'retain_until': 0.0, 'size': 3}}\n", "text": "I have decentralized application working on several nodes (with sharded collections) and I’m using mongo change-streams to gather inserts/updates for statistics etc.\nDuring some testing I caught a case where the change-stream updates are mixed (and reverse).The test:In the change stream I expect 3 rows:On the other hand, this is what I get: (removed and changed some fields to show only what is relevant)As seen above:Some points to observe:To compare, if I run the test line by line and wait, I get good results in the change stream:", "username": "Oded_Raiches" }, { "code": "fullDocumentfullDocument", "text": "What you observed might be the following:The fullDocument document represents the most current majority-committed version of the updated document. 
The fullDocument document may vary from the document at the time of the update operation depending on the number of interleaving majority-committed operations that occur between the update operation and the document lookup.Which is an extract fromalso of interest isMongoDB triggers, change streams, database triggers, real time", "username": "steevej" }, { "code": "updatedFieldsfullDocumentupdatedFieldsfullDocument", "text": "thanks a lot for the response!so you are saying that for example:Or is it always preferable to look only in the updatedFields section to lookup for updates and ignore fullDocument?", "username": "Oded_Raiches" }, { "code": "updatedFieldsfullDocument", "text": "Or is it always preferable to look only in the updatedFields section to lookup for updates and ignore fullDocument ?Only you can answer that. It depends of your use-cases. The documentation is clear about the fact that the fullDocument may differ. What ever you do you have to do it by taking into account that the fullDocument may differ.", "username": "steevej" } ]
Change stream race causes reversing and mixing of updates
2022-06-23T12:57:55.844Z
Change stream race causes reversing and mixing of updates
2,282
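A common way to live with the behaviour described above is to treat updateDescription as the authoritative, ordered delta and use fullDocument only as a point-in-time lookup. A Node.js sketch, assuming "collection" is an already-obtained Collection and handleInsert/handleDelta are your own handlers:

const changeStream = collection.watch([], { fullDocument: "updateLookup" });

changeStream.on("change", (event) => {
  switch (event.operationType) {
    case "insert":
      handleInsert(event.fullDocument);
      break;
    case "update":
      // reflects exactly what this operation changed, in oplog order,
      // regardless of later majority-committed writes
      handleDelta(
        event.documentKey._id,
        event.updateDescription.updatedFields,
        event.updateDescription.removedFields
      );
      break;
  }
});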
null
[ "aggregation" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"62b5de54925ed2a48da3b6b2\"\n },\n \"markets\": [1, 2],\n \"sales\": [{\n \"item\": \"A\",\n \"marketsToImplant\": [1, 2]\n }, {\n \"item\": \"B\",\n \"marketsToImplant\": [1, 3]\n }]\n}\n{\n \"_id\": {\n \"$oid\": \"62b5de54925ed2a48da3b6b2\"\n },\n \"markets\": [\n {\n \"market\":1,\n \"sales\": [\"A\", \"B\"]\n },\n {\n \"market\":2,\n \"sales\": [\"A\"]\n }\n ]\n}\n", "text": "Hi,Using a pipeline I need to modify the document :for each element in “markets” array to give this result :I tried with $map and $elemMatch but it’s wrong", "username": "emmanuel_bernard" }, { "code": "", "text": "I tried with $map and $elemMatch but it’s wrongplease share the exact code you tried, we may help better by knowing what you tried and how it failed. it gives us a starting point to experiment.", "username": "steevej" }, { "code": "[\n{\n$set: {\n duplicateMarkets: {\n $reduce: {\n input: '$salePeriods.panel.marketsToImplant',\n initialValue: [],\n 'in': {\n $let: {\n vars: {\n markets: {\n $map: {\n input: '$$this',\n 'in': {\n marketId: '$$this'\n }\n }\n }\n },\n 'in': {\n $concatArrays: [\n '$$value',\n '$$markets'\n ]\n }\n }\n }\n }\n }\n}\n}, \n{\n$set: {\n markets: {\n $setUnion: '$duplicateMarkets'\n }\n}\n}, \n{\n$project: {\n salePeriods: 1,\n markets: 1\n}\n}, \n{\n$set: {\n markets: {\n $let: {\n vars: {\n salePeriods: '$salePeriods'\n },\n 'in': {\n $map: {\n input: '$markets',\n as: 'market',\n 'in': {\n $let: {\n vars: {\n marketSalePeriods: {\n $filter: {\n input: '$$salePeriods',\n as: 'salePeriod',\n cond: {\n $in: [\n '$$market.marketId',\n '$$salePeriod.panel.marketsToImplant'\n ]\n }\n }\n }\n },\n 'in': {\n $mergeObjects: [\n '$$market',\n {\n salePeriods: '$$marketSalePeriods'\n }\n ]\n }\n }\n }\n }\n }\n }\n }\n}\n}\n]\n", "text": "Hi Steeve,Here is the code I triedThe 1st stage $set create an array duplicateMarkets extract from salePeriods.panel.marketsToImplant where each element is formated {marketId:<value of the market>}The 2nd stage $set eliminate duplicate values and create $markets arrayThe 3rd stage $project reduce the size of the documentsThe 4th stage $set merge the salePeriods to the $market array if the marketId matchs with the salePeriods.panel.marketsToImplant", "username": "emmanuel_bernard" }, { "code": "", "text": "This query seems complex\nDo you know a way to simplify ?", "username": "emmanuel_bernard" } ]
How to use $elemMatch to add elements from an array to another one
2022-06-24T16:12:00.357Z
How to use $elemMatch to add elements from an array to another one
1,790
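For the simpler sample document in the first post, the same inversion can be written with a single $map over "markets" and a $filter over "sales", which may be easier to maintain than the multi-stage version; field names follow that sample:

db.collection.aggregate([
  { $set: {
      markets: {
        $map: {
          input: "$markets",
          as: "m",
          in: {
            market: "$$m",
            sales: {
              $map: {
                // keep only the sales that target this market...
                input: {
                  $filter: {
                    input: "$sales",
                    as: "s",
                    cond: { $in: ["$$m", "$$s.marketsToImplant"] }
                  }
                },
                as: "s",
                // ...and project just their item codes
                in: "$$s.item"
              }
            }
          }
        }
      }
  } }
])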
null
[ "aggregation", "node-js", "mongoose-odm" ]
[ { "code": "const results = await Models.NFTs.aggregate([])\n\n .lookup({\n\n as: \"creators.minter\",\n\n from: \"users\",\n\n let: { minter: \"$creators.minter\", lazy_minter: \"$lazy_mint_creator\" },\n\n pipeline: [\n\n {\n\n $match: {\n\n $expr: {\n\n $or: [\n\n { $in: [\"$$minter\", \"$wallets\"] },\n\n { $in: [\"$$lazy_minter\", \"$wallets\"] },\n\n ],\n\n },\n\n },\n\n },\n\n {\n\n $project: { _id: 1, profile: 1, name: 1 },\n\n },\n\n ],\n\n })\n\n .unwind(\"creators.minter\")\n\n .sort({ _id: -1 })\n\n .project({\n\n _id: 1,\n\n nft_id: 1,\n\n creators: 1,\n\n editions: 1,\n\n name: 1,\n\n listing: 1,\n\n url: 1,\n\n })\n\"data\":[\n {\n \"_id\":\"62ba8e36cc78e90041e3829c\",\n \"creators\":{\n \n },\n \"editions\":{\n \n },\n \"name\":\"fgfg\",\n \"listing\":{\n \n },\n \"url\":\"link\"\n },\n {\n \"_id\":\"62ba8e36cc78e90041e3829c\",\n \"creators\":{\n \n },\n \"editions\":{\n \n },\n \"name\":\"fgfg\",\n \"listing\":{\n \n },\n \"url\":\"link\"\n },\n {\n \"_id\":\"62ba8e36cc78e90041e3829c\",\n \"creators\":{\n \n },\n \"editions\":{\n \n },\n \"name\":\"fgfg\",\n \"listing\":{\n \n },\n \"url\":\"link\"\n }\n]\n", "text": "the result:", "username": "Mohammad_Ahmad" }, { "code": "", "text": "If you gave me more detailed documents, I can find a more accurate reason, but if you look at it simply, it seems to be coming out because of $unwind.Refer to the example of a URL.", "username": "Kim_Hakseon" }, { "code": "", "text": "Thanks, I solve it.\nI just used “$unwind” after “$sort”", "username": "Mohammad_Ahmad" }, { "code": "", "text": "I found out thanks to you.\nHave a nice day ", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why does this code return duplicated docs?
2022-06-28T06:16:04.177Z
Why does this code return duplicated docs?
1,579
null
[ "node-js" ]
[ { "code": "", "text": "My project is in nodejs platform. I got this error and server destroy. Error is occur in my CRON job function.", "username": "Zil_D" }, { "code": "", "text": "Hi @Zil_D,\nI believe that the connection is being closed because you may have incorrectly scoped your MongoClient instance or running async functions and might be closing the connection before those functions return.If that’s not the case, can you please provide the following details in order to help us reproduce this issue:Also, take a look at this guide to learn how to connect your Node.js server to the cluster.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "Hi @SourabhBagrecha ,\nThanks for your reply.", "username": "Zil_D" }, { "code": "mongoose.connect(process.env.DB_URL, {\n useCreateIndex: true,\n useNewUrlParser: true,\n useFindAndModify: false,\n ignoreUndefined: true,\n});\n\nconst connection = mongoose.connection;\napp.use(\"*\", express.static(drname));\nconnection.once(\"open\", function () {\n console.log(\"MongoDB database connection established successfully\");\n});\nserver.listen(3002, () => {\n console.log(\"Started your server on PORT 3002\");\n});\n", "text": "@SourabhBagrecha ,\nI’m using nodejs version 14 & moongoose as ODM. Let me share connection code snippet.// Connection setupAlso DB is of mongoDB altas.One more thing @SourabhBagrecha , My co-worker has used async/await for DB operations in the js map function in cron job. I know this is apart from mongoDB but just informing you if you have idea.", "username": "Zil_D" }, { "code": "mongoose", "text": "Hi @Zil_D,\nThanks for sharing that. Can you help us by providing some the following details:Also, can you elaborate more on this?My co-worker has used async/await for DB operations in the js map function in cron job. I know this is apart from mongoDB but just informing you if you have idea.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "const refund = async () => {\n try {\n let sessions = await Session.find({\n type: \"PUBLISHED\",\n });\n sessions.map(async (session) => {\n await refundOrder(session._id);\n await saveHistory(session._id);\n await Session.findByIdAndUpdate(session._id, {\n students_ids: [],\n });\n });\n } catch (error) {\n logger.log(error)\n console.log(\"Error in CRON\", error);\n }\n};\n", "text": "Hi @SourabhBagrecha ,I have updated this map function into for of loop but verifying whether this is causing an error or not.", "username": "Zil_D" } ]
MongoError: Pool was force destroyed
2022-06-17T13:40:44.508Z
MongoError: Pool was force destroyed
6,315
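For completeness, the reworked cron job the poster mentions would look roughly like this; awaiting each operation (instead of firing them inside Array.map) keeps all the work inside the try/catch and avoids the pool being torn down while queries are still pending. It assumes refundOrder and saveHistory return promises, as in the original snippet:

const refund = async () => {
  try {
    const sessions = await Session.find({ type: "PUBLISHED" });
    for (const session of sessions) {
      await refundOrder(session._id);
      await saveHistory(session._id);
      await Session.findByIdAndUpdate(session._id, { students_ids: [] });
    }
    // or run them concurrently but still wait for completion, e.g. with
    // Promise.all over a helper that wraps the three awaits per session
  } catch (error) {
    logger.log(error);
    console.log("Error in CRON", error);
  }
};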
null
[ "queries" ]
[ { "code": "{\"_id\":{\"$oid\":\"5ec7cfa44aead20ddc9b785e\"}, \"name\":\"Course 1\", \"users\":[ {\"$ref\":\"user\",\"$id\":{\"$oid\":\"5eb0d50b564ded7137aa5472\"}}, {\"$ref\":\"user\",\"$id\":{\"$oid\":\"5ffefrf9735a14d92480c46\"}}, {\"$ref\":\"user\",\"$id\":{\"$oid\":\"626dedfrg556d48db6d1b95\"}} ]}collection.find()collection.find(\"name\": \"Course 1\")", "text": "I’ve got a collection where items look as follows:{\"_id\":{\"$oid\":\"5ec7cfa44aead20ddc9b785e\"}, \"name\":\"Course 1\", \"users\":[ {\"$ref\":\"user\",\"$id\":{\"$oid\":\"5eb0d50b564ded7137aa5472\"}}, {\"$ref\":\"user\",\"$id\":{\"$oid\":\"5ffefrf9735a14d92480c46\"}}, {\"$ref\":\"user\",\"$id\":{\"$oid\":\"626dedfrg556d48db6d1b95\"}} ]}How do I query for this item by users’ user id (oid)? I’m using collection.find() but no combination of queries seems to return anything. Finding by the name works perfectly (eg. collection.find(\"name\": \"Course 1\").Any help would be much appreciated.", "username": "Rafael_ME" }, { "code": "ObjectIddb.collection.find({\n \"users.$id\": ObjectId(\"5eb0d50b564ded7137aa5472\")\n})\n", "text": "Hi,You should query it like ObjectId:Working example", "username": "NeNaD" }, { "code": "Realm.BSON.ObjectId()", "text": "Thanks for the reply. I had seen that syntax before, but ObjectId isn’t defined in Realm Web.That said, you helped me figure it out. It’s Realm.BSON.ObjectId().", "username": "Rafael_ME" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query DBRef in Realm Web in React
2022-06-27T21:27:50.480Z
Query DBRef in Realm Web in React
1,659
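A compact version of the fix for anyone using the Realm Web SDK, assuming "collection" came from user.mongoClient(...).db(...).collection(...):

import * as Realm from "realm-web";

// the 24-character hex string is the user's ObjectId taken from the DBRef
async function findCourseByUserId(collection, idHex) {
  const userId = new Realm.BSON.ObjectId(idHex);
  return collection.findOne({ "users.$id": userId });
}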
null
[ "node-js", "mongoose-odm" ]
[ { "code": "const mongoose = require(\"mongoose\");\n\nconst RiskCatalogSchema = mongoose.Schema({\n RiskId: {\n type: String,\n required: true,\n unique: true\n },\n\n RiskGrouping: {\n type: String,\n required: true\n },\n\n dateAdded: {\n type: Date,\n default: Date.now\n },\n\n Risk: {\n type: String\n },\n\n CSFFunction: {\n type: String\n },\n\n Description: {\n type: String\n }\n});\n\nmodule.exports = mongoose.model(\"riskCatalog\", RiskCatalogSchema);\nconst mongoose = require(\"mongoose\");\n\nconst RiskSchema = mongoose.Schema({\n ID: {\n type: Number,\n required: true,\n unique: true\n },\n title: {\n type: String,\n required: true,\n unique: true\n },\n\n description: {\n type: String,\n required: false\n },\n\n dateAdded: {\n type: Date,\n default: Date.now\n },\n\n riskRating: {\n type: Number,\n default: 1\n },\n\n riskCategory: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"riskCatalog\"\n }\n});\n\nmodule.exports = mongoose.model(\"risks\", RiskSchema);\n\nriskRouter.get(\"/:riskId\", async (req, res) => {\n try {\n //const risk = await Risk.findById(req.params.riskId).populate();\n const risk = await Risk.findById(req.params.riskId).populate(\n \"riskCategory\"\n );\n\n res.status(200).json(risk);\n } catch (err) {\n console.log(\"Something is Wrong, \" + err);\n res.status(444).send(\"No risk found with the given criteria!\");\n }\n});\n", "text": "Hello,I am trying to use Mongoose populate function to get the data from another collection, but what ever I do it returns null in the returned document. I dropped both collections and some of the forums suggested, but still no luck. This is the first time I am using this feature, I am pretty sure there is something I am messing.First Schema:Second Schema:Express route /Function:", "username": "Sam_Al_Shami" }, { "code": "{\n\t\"_id\": \"62b89dc77a0a2ef69aee19cf\",\n\t\"ID\": 66,\n\t\"title\": \"Test Risk 3\",\n\t\"description\": \"Description, referncing Risk Catalog\",\n\t\"riskRating\": 1,\n\t\"riskCategory\": null,\n\t\"dateAdded\": \"2022-06-26T17:56:23.382Z\",\n\t\"__v\": 0\n}\n", "text": "This is what is returned:I am also sure that the object ID exists in RiskCatalog collection.", "username": "Sam_Al_Shami" }, { "code": "", "text": "Figured it out.I am was using a collection with the name RiskCatalog, Apparetly mongoose creates the name and adds “s” at the end by default. so the collection name is created as RiskCatalogs instead, not sure why mongoose does that.", "username": "Sam_Al_Shami" }, { "code": "", "text": "not sure why mongoose does that.Because they decided likewise and it is documented:https://mongoosejs.com/docs/guide.html#collection", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongoose Populate Function
2022-06-26T18:35:30.250Z
Mongoose Populate Function
13,239
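The pluralization can also be overridden instead of renaming the collection: mongoose.model accepts the collection name as a third argument, so an existing "RiskCatalog" collection keeps working. A sketch based on the schema above:

// pin the collection name so Mongoose does not pluralize it
module.exports = mongoose.model("riskCatalog", RiskCatalogSchema, "RiskCatalog");

// the referencing field stays the same:
// riskCategory: { type: mongoose.Schema.Types.ObjectId, ref: "riskCatalog" }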
https://www.mongodb.com/…6_2_1024x145.png
[ "aggregation", "indexes", "atlas-search" ]
[ { "code": "router.get('/searchuser', userController.userNameCitySearchAutocomplete);\n\n//autocomplete search on user name and city\nexports.userNameCitySearchAutocomplete = async function (req, res) {\n try {\n const { userNameCityQueryparam } = req.query;\n console.log(\"search query param\", userNameCityQueryparam);\n const agg = [\n {\n $search: {\n 'compound': {\n \"should\": [{\n //search on user name\n index: \"userName\",\n autocomplete: {\n query: userNameCityQueryparam,\n path: 'name',\n fuzzy: {\n maxEdits: 2,\n prefixLength: 3\n }\n },\n //search on user city\n index: \"userCity\",\n autocomplete: {\n query: userNameCityQueryparam,\n path: 'city',\n fuzzy: {\n maxEdits: 2,\n prefixLength: 3\n }\n },\n }]\n }\n }\n }\n ]\n const response = await User.aggregate(agg);\n return res.json(response);\n // res.send(response);\n } catch (error) {\n console.log(\"autocomplete search error\", error);\n return res.json([]);\n }\n};\n", "text": "\nindex unrecognized1497×212 13.8 KB\n", "username": "Manoranjan_Bhol" }, { "code": "", "text": "\nimage1444×238 6.59 KB\nInstead of two indexes, created one index on both these fields", "username": "Manoranjan_Bhol" }, { "code": "", "text": "Hi @Manoranjan_Bhol", "username": "varun_garg" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to specify custom index params in a compound Atlas Search query
2022-06-26T19:12:13.504Z
How to specify custom index params in a compound Atlas Search query
1,511
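The shape that resolves the "index unrecognized" error is to pass "index" as an option of $search itself (one index per stage) and keep only the operators inside compound.should. A sketch assuming the combined index created above is named "userSearch":

const agg = [
  {
    $search: {
      index: "userSearch",
      compound: {
        should: [
          {
            autocomplete: {
              query: userNameCityQueryparam,
              path: "name",
              fuzzy: { maxEdits: 2, prefixLength: 3 }
            }
          },
          {
            autocomplete: {
              query: userNameCityQueryparam,
              path: "city",
              fuzzy: { maxEdits: 2, prefixLength: 3 }
            }
          }
        ],
        // require at least one of the two clauses to match
        minimumShouldMatch: 1
      }
    }
  }
];
const response = await User.aggregate(agg);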
null
[ "atlas-triggers" ]
[ { "code": "context.services.get(<SERVICE_NAME>)<SERVICE_NAME>const service = context.services.get(\"Cluster0\");{\"version\":1}serviceconst db = service.db(\"prod\");db{}<SERVICE_NAME>serviceundefined", "text": "When calling the context.services.get(<SERVICE_NAME>) function, what does <SERVICE_NAME> get set with?I’ve seen that it should be set to your cluster name. And that seems to work best, but still not useful. When doing:const service = context.services.get(\"Cluster0\");service gets this as a return object:{\"version\":1}Then is using service to get a db instance, it doesn’t work. So this…const db = service.db(\"prod\");creates db as {} - which is unusable.If I try anything other than “Cluster0” as the <SERVICE_NAME>, then the service object is undefined.I can’t find anything in the documentation on this. Does anyone have a read into this? Much appreciated.", "username": "Kevin_Horio" }, { "code": "", "text": "If you go to “Linked Data Sources” in the UI then you will see the Service Name you have set your cluster(s) to and that is what is supposed to go there.As for your other comments, does doing DB call like this work? https://www.mongodb.com/docs/atlas/app-services/functions/mongodb/#query-mongodb-atlasWe may intentionally not allow you to inspect the contents of the database object within the function", "username": "Tyler_Kaye" }, { "code": " // 2. Get a database & collection const db = mongodb.db(\"myDatabase\")", "text": "Tyler, thanks for confirming that my context.services.get() call is working properly. I wasn’t aware, but it makes sense that the return object may not be entirely visible. It may be a permissions issue I’m having in terms of the user the trigger is running under - may not have access to my db. But I am issuing the db command like this: // 2. Get a database & collection const db = mongodb.db(\"myDatabase\")and that has worked for me in the past. I’ll have to look into setting up and confirming the auth for the trigger.", "username": "Kevin_Horio" }, { "code": "", "text": "I think you may have meant to post more code than that. If you want to test if it is just a permissions issue, you should be able to set your function to “Run as System User” in which case the permissions of the user are not evaluated.Let me know if I can help in any way!\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What should <SERVICE_NAME> be set to in the context.services.get(<SERVICE_NAME>) function call?
2022-06-27T17:49:17.194Z
What should <SERVICE_NAME> be set to in the context.services.get(<SERVICE_NAME>) function call?
5,001
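For anyone landing here, a minimal function body that ties the pieces together; the service name must match whatever appears under Linked Data Sources (often "mongodb-atlas", sometimes the cluster name such as "Cluster0"), and "prod"/"notes" are placeholders:

exports = async function () {
  const mongodb = context.services.get("mongodb-atlas");
  const coll = mongodb.db("prod").collection("notes");
  // the returned collection handle is opaque, but queries against it work
  return coll.findOne({});
};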
null
[]
[ { "code": "", "text": "Hi everyone, sorry if my question seems stupid, I’m new to mongo db. …\nHere I have a project in which I have 3 Collection so basically:\n>Project 0\n>Collection 1\n>Collection 2\n>Collection 3I would like to share only Collection 1 with someone, being sure that he does not have access to the other collections, I do not know how to do it.\nThank you for your help .", "username": "Bruno_Benhamou" }, { "code": "", "text": "you simply create a database user with the appropriate privileges on the database and collection you want to share. if you are using self hosted instances you use db.createUser(), if using Atlas you do it via the web interface.for general security take M150 from university.mongodb.com and for atlas A300.", "username": "steevej" } ]
Share Collection on Project
2022-06-27T13:08:48.022Z
Share Collection on Project
4,153
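A sketch of what that looks like on a self-hosted deployment (mongosh), assuming the three collections live in a database called "Project0"; on Atlas the equivalent is a custom role scoped to the collection, created in the Database Access UI:

use admin
db.createRole({
  role: "readCollection1Only",
  privileges: [
    // read-only access limited to this one collection
    { resource: { db: "Project0", collection: "Collection1" }, actions: ["find"] }
  ],
  roles: []
})
db.createUser({
  user: "guest",
  pwd: passwordPrompt(),
  roles: [{ role: "readCollection1Only", db: "admin" }]
})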
null
[ "indexes" ]
[ { "code": "{\n \"name\" : \"item_1_quantity_1\",\n \"key\" : \"kolp_key\"\n \"host\" : \"examplehost.local:27018\",\n}\n{\n \"properties\" : [ \n {\n \"k\" : \"name\",\n \"v\" : \"item_1_quantity_1\"\n }, \n {\n \"k\" : \"key\",\n \"v\" : kolp_key\n }\n", "text": "It would be great to be able to do a match query for any of the collection fields. Suppose I have a collection of documents like -I want to allow a match for name, key, and host. I have ~20 fields, ~100000 documents.I heard of a strategy for inserting fields into an array of properties, that is indexed using MultiKey indexes instead of many indexes/compound indexes, such as:The goal here is to save writing time.\nDoes that make sense?", "username": "Shani_Cohen" }, { "code": "", "text": "It would be great to be able to do a match query for any of the collection fields.Nothing stops you from doing that.Then you optimize the most used queries by creating indexes.", "username": "steevej" } ]
How to index a collection so unknown fields can be matched by query?
2022-06-23T15:07:42.131Z
How to index a collection so unknown fields can be matched by query?
1,647
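If the attribute-pattern layout from the question is adopted, a single multikey compound index covers equality matches on any k/v pair; for the original flat shape, a wildcard index is the other out-of-the-box option for querying arbitrary field names. A sketch with a placeholder "items" collection:

// attribute pattern: one index serves every property
db.items.createIndex({ "properties.k": 1, "properties.v": 1 })
db.items.find({
  properties: { $elemMatch: { k: "host", v: "examplehost.local:27018" } }
})

// flat documents: a wildcard index supports single-field queries on any field
db.items.createIndex({ "$**": 1 })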
null
[]
[ { "code": "", "text": "Good day.\nI’m reaching out as I’m a tad frustrated trying to understand what’s happening.\nI have a REALM sync app. Pretty simple, locations have a partion they sync to and that seemed to work ok minust some math issues I have to work on.Admitedly I made a few code changes as I hacked away, but nothing new I put out syncs. I can see all the local data it writes, but what is should have is ALL it’s partiion data which it no longer does.So, in the local REALM I can see all their entries but it never syncs to the entire partition and the entries for the specific location just sit there on the local system. Meaning they never get to Atlas anymore? Suddenly?I don’t seem to use a whole log of data and fall under my $9 testging subscription. However I’m not sure as Mongo people have been messing with my account for various reasons (Tried to buy consulting) and I wonder if that has anything to do with it?I did some sniffing and there was a post about resetting/pausing sync and re-enabling it. Which I did and at first ONE of the locations DID sync, but the others did not. Just writting to the local REALM and no sync.Anywho, if anyone has a clue on stepps to resolve this that’d be amazing!!CPT", "username": "Colin_Poon_Tip" }, { "code": "", "text": "Hi. Can you send a link to your app in the Atlas / Realm console? I can try to poke around a bit to see what might be going on. Also, if you could send me a link to a log for one of the sync connections you are making, that would be great.Thanks,\nTyler", "username": "Tyler_Kaye" } ]
Possible REALM Capacity issue?
2022-06-24T21:53:47.339Z
Possible REALM Capacity issue?
1,360
null
[ "queries", "node-js", "data-modeling" ]
[ { "code": "", "text": "Hi, I’m a beginner in using MongoDB and databases in general.I’m using MongoDB for a game, where players can select from a list of levels obtained from the database.Some levels are single-player, others are multiplayer. Users will only be able to search for either single or multiplayer levels, never both at the same time.I have a schema and collection called “levels”, only for single-player at the moment. Multiplayer levels could potentially use the same schema, the info is the same.My question is, should I create a new collection for multiplayer levels, so search/query is potentially faster? Or can I just add a “multiplayer” boolean field in the existing level schema, without significant performance penalties?It would be much easier for me to just add the multiplayer field, but I fear that if/when I have around a million documents, queries for a list of levels will be slower because it will have to check in a larger collection if each document is of the required “multiplayer” or “non-multiplayer” type. Does this make sense, or are there optimizations in place, or that I can make, to keep it performant?Thanks.", "username": "Vasco_Freitas" }, { "code": "", "text": "I would not add a second collection for multi-player levels.A flag is good.Implement the simpler design and figure how to optimize it only if you need to.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Should I create a new collection for better performance in this case?
2022-06-24T17:52:18.268Z
Should I create a new collection for better performance in this case?
1,235
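To make the single-collection-plus-flag design concrete, an index that leads with the flag keeps those filtered listings fast even at millions of documents; the field names here are illustrative:

db.levels.createIndex({ multiplayer: 1, name: 1 })
db.levels.find({ multiplayer: false }).sort({ name: 1 })

// if multiplayer levels are a small subset queried differently,
// a partial index only indexes those documents
db.levels.createIndex(
  { name: 1 },
  { partialFilterExpression: { multiplayer: true } }
)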
null
[ "aggregation", "change-streams" ]
[ { "code": "$$CLUSTER_TIMEdb.myCollection.update({}, [{ $set: { updatedTs: '$$CLUSTER_TIME } }]);\nclusterTimeupdatedTsmongohighTimestampupdatedTschangeEvent.clusterTime", "text": "Let’s imagine that I am updating a document with a pipeline instead of modifier and I am using $$CLUSTER_TIME variable to record the time when the update happened. For example:If I am also observing the same collection with a changes stream, the above operation will result in a change event which will have its own clusterTime property. I already know that it will not be the same as the value of updatedTs, but I am wondering if mongo can guarantee any relation between the two values. For example, is it correct to assume that the high component of the corresponding Timestamp objects will be the same for both updatedTs and changeEvent.clusterTime?", "username": "Tomasz_Lenarcik" }, { "code": "$$CLUSTER_TIMEchangeEvent.clusterTime$$CLUSTER_TIMEchangeEvent.clusterTimeclusterTime1 < clusterTime2updatedTs<<=", "text": "Unfortunately, the experimentation shows that there can be no connection at all. I guess the only thing one can assume is that $$CLUSTER_TIME will always be smaller than changeEvent.clusterTime, but I am not even sure if $$CLUSTER_TIME will be monotonic as a function of changeEvent.clusterTime, i.e. given two subsequent operations with clusterTime1 < clusterTime2, can one expect that the corresponding updatedTs (as in the example above) will also satisfy < or (at least <=) condition?", "username": "Tomasz_Lenarcik" } ]
Relation between $$CLUSTER_TIME and corresponding changeEvent.clusterTime
2022-06-26T21:33:37.532Z
Relation between $$CLUSTER_TIME and corresponding changeEvent.clusterTime
1,899
https://www.mongodb.com/…e_2_1024x512.png
[ "queries", "transactions" ]
[ { "code": "snapshotatClusterTimemongosnapshotmajorityatOperationTimemongo", "text": "I have already asked this question on StackOverflow but then I thought this forum would be more appropriate.There’s been a really handy addition in MongoDB 5.x that allows enforcing snapshot read concern in some read operations outside transactions and specify the timestamp at which the snapshot is taken via atClusterTime . If the timestamp is not provided, mongo will select it automatically and kindly return its value to the user for future reference. See:read concern, snapshot read concern, read isolation, transactions, multi-document transactionsSince MongoDB 4.x also supports read concern snapshot - but only inside transactions with write concern majority - I am wondering if it is also possible to obtain some information about when exactly the snapshot was taken, similarly to what 5.x is doing. I understand that I cannot explicitly specify atOperationTime but somehow mongo needs to select the timestamp on its own, so it seems reasonable to expect that this information should be available somewhere.", "username": "Tomasz_Lenarcik" }, { "code": "import { promisify } from 'util';\n\nimport { Binary, Document, Long, Timestamp } from './bson';\nimport type { CommandOptions, Connection } from './cmap/connection';\nimport { ConnectionPoolMetrics } from './cmap/metrics';\nimport { isSharded } from './cmap/wire_protocol/shared';\nimport { PINNED, UNPINNED } from './constants';\nimport type { AbstractCursor } from './cursor/abstract_cursor';\nimport {\n AnyError,\n MongoAPIError,\n MongoCompatibilityError,\n MONGODB_ERROR_CODES,\n MongoDriverError,\n MongoError,\n MongoErrorLabel,\n MongoExpiredSessionError,\n MongoInvalidArgumentError,\n MongoRuntimeError,\n MongoServerError,\nsession.operationTimesnapshotmajority", "text": "After reading a little bit of code here:it seems to be that it would be fair to assume that inside a read-only transaction within a causally consistent session, the timestamp I am looking for is simply session.operationTime after the read was performed. This is of course under the assumption that transaction’s read concern was set to snapshot and write concern to majority (even though there was no actual write).I would appreciate if someone with deep knowledge of MongoDB internals could confirm if my assumptions are correct. I also feel like this information is somehow missing from documentation as I wasn’t able to find it anywhere.", "username": "Tomasz_Lenarcik" } ]
Read concern "snapshot" and the corresponding clusterTime
2022-06-26T21:38:10.454Z
Read concern “snapshot” and the corresponding clusterTime
1,736
null
[ "node-js", "connecting" ]
[ { "code": "", "text": "hallo,i have a React and angular app that connect to mongodb.but it not showing the data.i have to log in to mongodb, and then my app shows the data.So before i tell the user to see my app, i have to log in first into mongo db.i missed something in configuration ?best regards,\nStev de DEV", "username": "Stev_80585" }, { "code": "", "text": "Are there any errors in the browser developer console? Are you accessing MongoDB via a microservice that the web calls or directly from the client?", "username": "Robert_Walters" }, { "code": "", "text": "thanks for responding,i got no error.this is how i connect Angular to Node JS\nand this is how node JS take the data from mongodb\n\nnode990×561 72.8 KB\n", "username": "Stev_80585" }, { "code": "", "text": "There are a few things to check, firewall set up correctly? Check out this tutorial How To Use MERN Stack: A Complete Guide | MongoDB. Also try to debug the code and read the error if any when it creates the connection.", "username": "Robert_Walters" }, { "code": "", "text": "Please share the steps you take whenlog in to mongodWhat commands you use?", "username": "steevej" }, { "code": "", "text": "i just log into mongodb website", "username": "Stev_80585" }, { "code": "", "text": "That’s ODD.How do you determine that your app isbut it not showing the dataDo you have a screenshot that shows your app not showing data before and showing data after login in?", "username": "steevej" }, { "code": "", "text": "this is image empty\n\nempty1798×355 10.3 KB\nthis is image with data\n\nAngular1862×920 129 KB\n", "username": "Stev_80585" }, { "code": "", "text": "Can we see an un-cropped image of the one with empty data?", "username": "steevej" }, { "code": "", "text": "I took the liberty to check the address and then your backend. your reset-insert-difuse logic possibly has this bug you mention.\nit is very sluggish to return data, it is possible you have forgotten to close the response stream after sending data, or a problem with the codesandbox.\nyou have to call insert once to populate the table, and it seems it works well after that. but this operation gives timeout, again a possible non-closed stream. the difuse operation seems to respond fast so I will assume it is not a read operation problem from mongo.while writing the above finding, I noticed codesandbox stops the container if idle for a while, and starts a new container if that happens with new responses. this might be directly related to your first empty list issue.I suggest you try your code on your pc running both backend and frontend (change corresponding addresses) to rule out any cloud service delays. and btw you need to implement a security measure to your back end when you solve the problem. though it does not much, insert operation currently can fill your db infinitely.post edit: codesandbox goes idle in a few minutes. and whenever you open that page in that idle time, your page first returns an empty dataset, and after another 15-20 sec full data returns showing codesandbox is back online. it fetches data well after that.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thanks, it is Codesandbox pb that is in idle mode.", "username": "Stev_80585" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
DB is off, but when I log in to MongoDB, it is on again
2022-06-21T12:41:02.175Z
DB is off, but when I log in to MongoDB, it is on again
2,145
null
[]
[ { "code": "", "text": "Hi All,I need an assistance,Setup environment:Server’s\n-Replica 1\n-Replica 2\nArbiter - acting as heartbeat\nDoes anyone have a experience how to backup this setup? And also in case of catastrophic failure you loose all the servers. How do you rebuild this environment and restore your data?", "username": "Thuso_Ramosu" }, { "code": "", "text": "Hi @Thuso_RamosuYou might find the relevant procedures in these pages: MongoDB Backup Methods for general overview on supported backup methods, along with links to more specific method’s page Restore a Replica Set from MongoDB Backups for restoring your backup into a new replica setHowever I find this sentence in your post curious:Arbiter - acting as heartbeatWhat do you mean by “heartbeat”, exactly? An arbiter in a replica set functions as a tiebreaker for primary election. They allow you to have a primary in case your secondary is down in a PSA (primary-secondary-arbiter) setup. However they also come with their own disadvantages (cannot confirm majority write, writes to the primary can be rolled back, etc.). This setup is quite inferior to a PSS (primary-secondary-secondary) setup in terms of high availability & data integrity, so you might want to deploy them with caution and know their limitations.Best regards\nKevin", "username": "kevinadi" } ]
How to back up and restore a replica set environment in case of catastrophic failure
2022-06-23T09:55:32.553Z
How to back up and restore a replica set environment in case of catastrophic failure
1,406
null
[ "data-modeling", "compass" ]
[ { "code": "", "text": "Hi\nI would like to programmatically activate the “Share Schema As JSON” feature (via API or CLI).\nIs that possible?\nThanksAvishay", "username": "Avishay_Balderman" }, { "code": "", "text": "Hi @Avishay_Balderman! The package that Compass uses internally for this is GitHub - mongodb-js/mongodb-schema: Infer a probabilistic schema for a MongoDB collection.. You can also use it outside of Compass as a standalone module.", "username": "Anna_Henningsen" } ]
Programmatic access (API/CLI) to Compass "Share Schema As JSON"
2022-06-26T08:52:48.127Z
Programmatic access (API/CLI) to Compass “Share Schema As JSON”
1,518
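A rough Node.js sketch of driving that package outside Compass; the mongodb-schema API has changed between major versions, so treat the call shape as illustrative and check the README of the version you install:

const { MongoClient } = require("mongodb");
const parseSchema = require("mongodb-schema");

async function main() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const docs = await client
    .db("test")
    .collection("users")
    .find()
    .limit(1000)   // sample instead of scanning everything
    .toArray();
  // callback form used by older releases; newer releases may be promise-based
  parseSchema(docs, (err, schema) => {
    if (err) throw err;
    console.log(JSON.stringify(schema, null, 2));
    client.close();
  });
}

main().catch(console.error);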
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "", "text": "Hi there,\nI want some help to get data from a collection. For example, the collection has the following documents:\n{ _id: 1, Name: ab, month: JAN, achieved_score: 20%}\n{ _id: 2, Name: ab, month: FEB, achieved_score: 40%}\n{ _id: 3, Name: ab, month: MAR, achieved_score: 50%}\n{ _id: 4, Name: cd, month: JAN, achieved_score: 50%}\n{ _id: 5, Name: cd, month: FEB, achieved_score: 70%}\n{ _id: 6, Name: cd, month: MAR, achieved_score: 60%}\n{ _id: 7, Name: ef, month: FEB, achieved_score: 30%}\n{ _id: 8, Name: ef, month: MAR, achieved_score: 40%}now I want to get data in a single object where the name match and month and achieved_score marge into an object of an array as following:[\n{Name:ab, [{month: JAN, achieved_score: 20%},{month: FEB, achieved_score: 40%},{month: MAR, achieved_score: 50%}]},\n{Name:cd, [{month: JAN, achieved_score: 50%},{month: FEB, achieved_score: 70%},{month: MAR, achieved_score: 60%}]},\n{Name:ef, [{month: FEB, achieved_score: 30%},{month: MAR, achieved_score: 40%}]},\n]Please help me to solve this query by using mongoose\nThanks.", "username": "Muhammad_Bilal_Jamil" }, { "code": "$groupdb.collection.aggregate([\n {\n \"$group\": {\n \"_id\": \"$Name\",\n \"values\": {\n \"$addToSet\": {\n \"month\": \"$month\",\n \"achieved_score\": \"$achieved_score\"\n }\n }\n }\n }\n])\n", "text": "Hi,You can do it with Aggregation Framework and $group stage:Working example", "username": "NeNaD" } ]
Get data from a collection as a single object where the name matches, merging other properties into an array of objects
2022-06-26T12:20:01.042Z
Get data from a collection as a single object where the name matches, merging other properties into an array of objects
2,438
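A Mongoose version of the accepted approach, assuming a model named "Score" over this collection; $push keeps one entry per document (swap in $addToSet to drop exact duplicates), and the final stages just rename _id and order the output:

const result = await Score.aggregate([
  {
    $group: {
      _id: "$Name",
      months: { $push: { month: "$month", achieved_score: "$achieved_score" } }
    }
  },
  { $project: { _id: 0, Name: "$_id", months: 1 } },
  { $sort: { Name: 1 } }
]);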
null
[ "atlas-device-sync" ]
[ { "code": "notesnotesnotesteamssharedViewerssharedEditors", "text": "I am a little confused about partition keys to get MongoDB Realm and Atlas working together. In particular, I am referencing the material at https://docs.mongodb.com/realm/sync/partitioning/Whilst there are a couple of examples in the documentation, my use case is a little different. My users create documents - notes, and often they will share those notes (with either read or write permissions) to one or more other users. They may also not share with any other users, likewise, they may also add new shares with additional users or un-share existing users during the lifecycle of a notes document.The Team Realms example wouldn’t suit here because the users that a document is shared with is not always going to be consistent across documents. So having this alternative teams collection would not suit.So my thought process…I currently have two fields in a notes document - sharedViewers and sharedEditors that contain the relevant user IDs where the document is shared with other users.But I’m then confused about how to deal with the partition key and the realms that are then created locally on users devices.In addition I noted that the documentation states that:Avoid changing a partition value in a Realm object using the Realm SDK because it triggers a client reset.Any help would be appreciated as I’m surely not the first that is trying to deal with this type of use case.", "username": "Anthony_CJ" }, { "code": "", "text": "Hi Anthony,I found this useful in this matter.", "username": "Benjamin_Storrier" }, { "code": "", "text": "Thanks @Benjamin_Storrier\nI did see that and have posted a follow up comment there also because it’s still not 100% clear to me. I really want to use MongoDB as my backend but this ‘complexity’ is leaving me doubtful.", "username": "Anthony_CJ" }, { "code": "", "text": "Don’t worry. It’ll grok.\nThe docs are pretty good. Keep re-reading them and trying things out.\nI found the node example to be helpful - try installing and running that.\nTake note that the walk through is not complete and you’ll need to refer to the readme on the github.", "username": "Benjamin_Storrier" }, { "code": "", "text": "Thanks. I’ve gone through the tutorials etc. And have watched countless hours of MongoDB videos from past keynotes etc. Some of the documentation is a little unclear between partitioning and then permissions.", "username": "Anthony_CJ" }, { "code": "", "text": "@Anthony_CJ I guess the way to think about it is that Realm has now become a mapping on a mapping. The first mapping function is Altas itself that maps objects into collections that conform to a schema. This by the way is similar to the way a Realm Cloud instance worked in the old Realm Cloud product. With MongoDB Realm, the partition key value introduces a second mapping. So basically all objects in all collections with a specific partition key value map into a specific Realm. At first it was a little confusing, but now I have the swing of it.", "username": "Richard_Krueger" }, { "code": "sharedViewerssharedViewers", "text": "Thanks @Richard_Krueger. That makes sense actually. So if I’m understanding this correctly, can’t I just map MongoDB Realms by setting the partition key to documents in Atlas where a userID is listed in the documents sharedViewers? 
That way, as the users are removed from or added to sharedViewers, the realms are ‘updated’?", "username": "Anthony_CJ" }, { "code": "", "text": "@Anthony_CJ so basically MongoDB Atlas organizes a data base as set of collections that contain objects that conform to a specific schema. This is the way Realm used to do it before the merger with MongoDB. But with the new system, you have Realm on the front end and MongoDB Atlas on the back end. Since the two models are not completely isomorphic, MongoDB needed a way to map Realm onto Atlas. The way this is done in through a partition key. For a specific app, the developer defines a partition key; usually this is called _partition or _realmId, but it can be called anything you want. There is only one partition per Realm app. The partition key is a property that is defined in the collections in Atlas that Realm maps on to. It’s the partition key value that specifically defines which Realm the object in the collection belongs to. For example, your app might want a Realm called ‘shared_object_realm’, which would be readable by all users. In that case the partition key value would be ‘shared_object_realm’. Similary, the app might want a realm that is only readable/writable by the logged in user and no one else, that partition key value would be the user id of the logged in user. In the older Realm world, this was called a private user realm. The read/write privileges for Realms are controlled through the sync partitions.I hope this was usefulRichard Krueger", "username": "Richard_Krueger" }, { "code": "", "text": "Thanks @Richard_Krueger. It does. What I’m struggling to wrap my head around is a scenario where…\nUser_A, User_B, User_C\nEach can see the documents they’ve created:\nDocument_1 - created by User_A\nDocument_2 - created by User_B\nDocument_3 - created by User_CSo if I had the partition key set to who created the documents, then the realm on each users device has their items. Got that.But if User_A wants to share Document_1 with User_B. I’m struggling with that concept. I had throughout I could just have a field ‘sharedWith’ and in that field I would store the creating user_id as well as anyone they’ve shared with. So for Document_1 it would hold User_A and User_B that that ‘sharedWith’ field would be the realm sync partition key but I can’t do that. So struggling with the concept of how to structure things in a way that allows documents to be randomly (as selected by a user) shared with 0…n other users.", "username": "Anthony_CJ" }, { "code": "", "text": "@Anthony_CJ I apologize for having taken an hiatus from these forums lately. My regular 9-5 job has had me buried in a documentation effort, away from programming, for the last two weeks. Let me give a stab at your problem.At present, MongoDB Realm still has not implemented fine grain rules permissions, so there is no way to give read/write permissions to a particular document to a particular set of users. I am sure that they will get this feature in, but for the moment you have to rely on SYNC level permissions to achieve the same goal.What you can do now is control whether a user has access to a specific partition key through the use of custom data associated with the user, and SYNC based rules that access that custom data. The custom data would contain a list of all the partitions a user can read from and/or write to (perhaps two separate lists). 
So coming back to your problem.Let’s say you have Document_1 that you want to share between User_A and User_B, you would create another partition key value named Team_AB and assign it to the document. You would then include Team_AB in the custom data list for both User_A and User_B. Maybe User_A would have write permission and User_B would only have read permission. I know this is clunky, but it is a work around that works right now.", "username": "Richard_Krueger" }, { "code": "collection user: [\n { user_id : \"user_1\", displayName : \"John\", email : \"[email protected]\" }\n { user_id : \"user_2\", displayName : \"Jane\", email : \"[email protected]\" }\n { user_id : \"user_3\", displayName : \"Adam\", email : \"[email protected]\" } \n]\n\ncollection document: [\n { createdBy: user_1, title : \"Document A\", sharedWith: [user_1] }\n { createdBy: user_1, title : \"Document B\", sharedWith: [user_1, user_2] }\n { createdBy: user_2, title : \"Document C\", sharedWith: [user_2, user_1] }\n]\nrealm user_1: [\n { user_id : \"user_1\", displayName : \"John\", email : \"[email protected]\" }\n { createdBy: user_1, title : \"Document A\", sharedWith: [user_1] }\n { createdBy: user_1, title : \"Document B\", sharedWith: [user_1, user_2] }\n { createdBy: user_2, title : \"Document C\", sharedWith: [user_2, user_1] }\n]\n\nrealm user_2: [\n { user_id : \"user_2\", displayName : \"Jane\", email : \"[email protected]\" }\n { createdBy: user_1, title : \"Document B\", sharedWith: [user_1, user_2] }\n { createdBy: user_2, title : \"Document C\", sharedWith: [user_2, user_1] }\n]\n\nrealm user_3: [\n { user_id : \"user_3\", displayName : \"Adam\", email : \"[email protected]\" } \n]\nsharedWith", "text": "Thanks @Richard_Krueger. I understand we aren’t all on here all the time. But I appreciate your help.So putting aside the permissions aspect, the actual realm partition syncing is still problematic for me. (sorry)So with these collections:The desired realm results would be as follows:Is there a way to do this so that the partition key could be the sharedWith? That way, as users add or remove users from a documents sharing privileges, then the realm syncing would update accordingly.", "username": "Anthony_CJ" }, { "code": "", "text": "I get what you are attempting to do here. This is not an answer but did you read through the Define Sync Permissions guide? The function rules section may let you provide/define user access based on a function that evaluates if that use has permission or not.We poked around with it a bit last week and it looks like it would enable that functionality.Also, cross posting can cause us to do extra work as the question may have already been answered on the other post. If you cross post, include a link so we’re all on the same page", "username": "Jay" }, { "code": "", "text": "Thanks Jay. Yes I did read it. I’ve read it all and that seems to be about accessing the data rather than what gets synced so you’re right, it’s not an answer.Re cross posting - didn’t realise that was an issue (and not sure why it is) but apologies. 
I’ve just been trying to get to the bottom of this for weeks and can’t seem to get to an outcome.", "username": "Anthony_CJ" }, { "code": "", "text": "Cross posting isn’t an issue but it’s much more efficient when all of the data is in one place for future readers - it’s funnels the energies and eyeballs on that topic.Let me see if I can clarify a bit - What you’re actually asking is about accessing the data, not setting up a sync…But if User_A wants to share Document_1 with User_B. I’m struggling with that concept.Generically imagine a case where you have Tasks app. Some tasks are personal and are either stored locally on the device or sync’d but only that user can access them (it’s tied to their _partitionKey) for example.Expand on that and and suppose you have tasks that can be shared amongst a group - (sharing Document_1 in your case).In that situation you would have a groups collection that would map user id’s to group id’s and then realm could determine if that user can access that partitions data. So a partition key for users that can access my tasks would be _partitionKey = “group_id=jay_group”. So essentially each shared document would have that groups partition key “jay_group”. That then leads to the users in that group sync’ing with just that groups tasks (e.g. sharing the document).There’s several ways of implementing this but using prefixes in the partition keys separates the data into natural groups and realm can parse those prefixes to limit accessThe way I look at it is not trying to set up sync’ing - its setting up access to those documents you want to share via appropriate partition keys - the sync then just ‘works’ and sharing documents within groups is ‘easy’", "username": "Jay" }, { "code": "", "text": "Thanks @Jay. Makes sense. Only issue I see is being able to update those groups dynamically where users might be removed and/or added to a document. So effectively being able to update who has access to the _partitionKey = “group_id=jay_group”. That’s the challenging part…", "username": "Anthony_CJ" }, { "code": "", "text": "That’s kind of the purpose of the group - it can be dynamically changed as you want allow/deny access for users to your data.you would have a groups collection that would map user id’s to group id’sSo if the user was not in the group (collection), they could not access that groups data.", "username": "Jay" }, { "code": "", "text": "Just to chip in my 2 cents. As there are limits on what partitions can and can’t filter (e.g. you sync the whole document or none of it), you need to consider your schema as well as your partition key (e.g., in some cases you might need an additional collection and/or duplicate some data using triggers). This article steps through how some reasonably complex partitioning requirements were implemented (together with the though process behind the decisions).", "username": "Andrew_Morgan" }, { "code": "", "text": "lm still has not implemented fine grain rules permissions, so there is no way to give read/write permissions to a particular document to a particular set of users. I am sure that they will get this feature in, but for the moment you have to rely on SYNC level permissions to achieve the same goal.What you can do now is control whether a user has access to a specific partition key through the use of custom data associated with the user, and SYNC based rules that access that custom data. The custom data would contain a list of all the partitions a user can read from and/or write to (perhaps two separate lists). 
So coming back to your problem.Hi @Anthony_CJ, I am having a very similar scenario as yours. How did you end up implementing this? Thanks in advance.", "username": "Benoit_Werner" }, { "code": "", "text": "I’m working out a very similar architecture which may end up being implemented via Flexible Sync when we get the greenlight from MongoDB to trust it in production. (I’m hanging out for an announcement at MongoDB World in a month.)In the meantime, a summary of my thinking:The context for the app I’m porting is that a small number of appointments and work shifts can be shared with other people. The fan-out is likely to be a max of 8, as a GUI limitation and also social experience.A limitation that hasn’t come up in this discussion is a restriction on the number of Realms you can have open at the same time. An answer relayed from Andrew Morgan was that 10 was a safe limit, derived from 6-8 file handles per Realm.There’s no hard-coded limit in Realm, it’s down to the number of file descriptors that various mobile devices allow an app to use (each open Realm uses 6-8). More modern devices allow more, and there are differences between iOS and Android (the lower limits tend to be on iOS).That limit has informed my thinking - it’s not feasible to have every single tiny shared item in its own Realm (partition).Without requiring users to define explicit teams, I’m planning to create our own sync pools which contain every person who has agreed to share some stuff with another. Whilst that sounds like it might leak into being a large group, I doubt it in practice.So a single partition key will be used for each pool. Due to this flood fill approach to defining the pool boundaries, a single user is unlikely to open more than one of them.Within the pool, access to individual shared items will be further restricted by having an Alice shared with Bob pair of identifiers in a Realm table.Most of a user’s data will be stored in another individual synced Realm. This provides backups and the chance to have mirroring with a spouse, as a feature.Initially, people will start completely offline, only when they pay for sharing will they migrate to a Synced Realm.", "username": "Andy_Dent" }, { "code": "", "text": "There is only one partition per Realm app.Pretty sure this is mis-stated. There’s only one partition per open Realm. Your app can have many Realms open (I’ve seen 10 simultaneously as a suggested limit).", "username": "Andy_Dent" } ]
Understanding Partition Keys
2020-08-25T06:19:13.688Z
Understanding Partition Keys
15,147
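
A rough sketch of the function-based Sync rule discussed in the thread above (partition-based sync): the user's custom data lists the partitions they may read, and the Sync rule calls a function that checks the requested partition against that list. The function name, the `readPartitions` custom-data field, and the `user=` partition naming convention are illustrative assumptions, not taken from the thread.

```javascript
// App Services function, e.g. named "canReadPartition". Wire it into the
// partition-based Sync "Read" rule as:
//   { "%%true": { "%function": { "name": "canReadPartition", "arguments": ["%%partition"] } } }
// Assumes custom user data is enabled and contains an array field `readPartitions`.
exports = async function (partition) {
  const customData = context.user.custom_data || {};
  const shared = customData.readPartitions || [];

  // Always allow the user's own private partition, plus any shared ones
  // (e.g. "group_id=jay_group" or "Team_AB" style keys from the thread).
  return partition === `user=${context.user.id}` || shared.includes(partition);
};
```

Sharing Document_1 with User_B then amounts to writing the shared partition value into User_B's `readPartitions` array, for example from a trigger or another function.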
null
[ "indexes" ]
[ { "code": "{\nheader : \"898288283\",\nitems: [\n { itemNo: \"1\", name: \"dpcxed\"},\n { itemNo: \"2\", name: \"edfic\"},\n ]\n}\n", "text": "Hi Experts,I have document with below structure:I need set up autocomplete index for the items.name fields. May u know what is the properly index definition for it?thanks in advance.", "username": "Joshua_Wang" }, { "code": "itemsitems.nametype:autocomplete", "text": "Hey Joshua,This exact ask is a feature we are working on currently (updates to come soon!), but one option is you could use Embedded Documents (query, index definition) with type as “autocomplete”. I believe items is embedded and under it items.name is type:autocomplete", "username": "Elle_Shwer" }, { "code": "{\n \"mappings\": {\n \"fields\": {\n \"items\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"autocomplete\"\n }\n },\n \"type\": \"embeddedDocuments\"\n }\n }\n }\n}\n{\n index: 'embedded',\n autocomplete: {\n query: 'dp',\n path: 'name',\n tokenOrder: 'sequential',\n fuzzy: {\n // maxEdits: 1,\n // \"maxExpansions\": 100,\n }\n }\n}\n", "text": "Hi ,\ni created a new search index with below json config.But when i try to execute .I got below error\nPlanExecutor error during aggregation :: caused by :: Remote error from mongot :: caused by :: autocomplete index field definition not present at path materialNo", "username": "Joshua_Wang" }, { "code": "{\n embeddedDocument: {\n path: \"items\",\n operator: {\n autocomplete: {\n query: 'dp',\n path: 'items.name',\n tokenOrder: 'sequential',\n fuzzy: {}\n }}\n}}\n", "text": "Hey Josh, you have to also run the embeddeddocument query. I tested it with this and it worked for me:", "username": "Elle_Shwer" }, { "code": "", "text": "Hi Elie,\nIn compass’s aggregrations tab, I do not have the option to choose an operator with “embeddedDocument”.\nThere is $search $searchMeta…\nI am using mongo db 5.0. Or you are using different version?", "username": "Joshua_Wang" }, { "code": "[{$search: {\n embeddedDocument: {\n path: \"items\",\n operator: {\n autocomplete: {\n query: 'dp',\n path: 'items.name',\n tokenOrder: 'sequential',\n fuzzy: {}\n }}\n}}}]\n", "text": "This is the full aggregation, you would still use $search…I think you should be able to paste that into Compass. I’m on 5.0.9. I don’t believe it is tied to a version.", "username": "Elle_Shwer" }, { "code": "", "text": "Hi Elle,\nI tried to reindex then it start works. Many thanks.", "username": "Joshua_Wang" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to set up atlas search index for field inside an array?
2022-06-17T09:54:21.755Z
How to set up atlas search index for field inside an array?
3,946
https://www.mongodb.com/…4_2_1024x323.png
[ "queries", "data-modeling", "python" ]
[ { "code": "\n def __init__(self, first_name, last_name, phone, email, uid, cart, card):\n self.first_name = first_name\n self.last_name = last_name\n self.phone = phone\n self.email = email\n self.uid = uid\n self.cart = cart\n self.card = card\n", "text": "I am new to Mongo and I need to know if I have a user collection with 2 users. Below is the document.\nScreenshot 2022-06-24 at 5.44.35 PM1332×421 58.6 KB\nMy question is two fold… First, if I insert a document, Do I need to input the fields in the order they are in the record. I.E. email first, then phone.Or can I just identify the field: value pair and it will be inserted into the field that I have built into the document?Second… I don’t need all the field upon record creation. I.E. “date_created”, or “date_updated”.I currently pass the first_name, last_name, phone, email to the class constructor in the python class to create the object…But I cannot seem to generate the object fields that I will need to update after record creation. I tried to use the postinit function, but I cannot seem to get the key: value into the object.Any ideas would be appreciated.", "username": "David_Thompson" }, { "code": "", "text": "Your issue is not with MongoDB. It is about your object mapper and validation 3rd party. From your previous message I think it is mongoengine.Mongo has a completely flexible schema and do not enforce field order, field presence or type.I am not sure if there are a lot of mongoengine users here. May mongoengine has its own forum. Stackoverflow?", "username": "steevej" }, { "code": "", "text": "Thanks @steevej ,\nIn my last message I was asking about mongoengine. I have since figured out that I am using pymongo in my project and that mongoengine is not being used in my project.So… The current issue that I am having is that I cannot figure out how to get fields to initialise in my object without passing them to the init function… So… I think my question is a Python question. I have also asked this on a facebook group for Python programmers.Thanks", "username": "David_Thompson" } ]
Document insert
2022-06-24T09:51:09.358Z
Document insert
1,600
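
A small mongosh sketch of the point made in the thread above: MongoDB itself does not enforce field order, field presence, or type, so documents can be inserted with only the fields available at creation time and extended later. Field names follow the post's example, the collection name is illustrative, and the same behaviour applies through PyMongo.

```javascript
// Field order differs and optional fields are simply omitted - both inserts are fine.
db.users.insertOne({ email: "[email protected]", phone: "555-0100", first_name: "Ada", last_name: "Lovelace" });
db.users.insertOne({ first_name: "Bob", last_name: "Smith" });

// Fields like date_created / date_updated can be added whenever they become relevant.
db.users.updateOne(
  { first_name: "Bob", last_name: "Smith" },
  { $set: { date_updated: new Date() } }
);
```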
null
[ "aggregation", "atlas-search" ]
[ { "code": "\ndb.getCollection(\"clients\").aggregate([\n {\n $search: {\n compound: {\n must: [\n {\n equals: {\n value: ObjectId(\"62aa72d4a152e94836d05dc9\"),\n path: \"externalId\"\n }\n },\n {\n equals: {\n value: false,\n path: \"isDeleted\"\n }\n },\n {\n wildcard: {\n query: '*James*',\n path: [\"email\", \"firstname\", \"lastname\"],\n allowAnalyzedField: true\n }\n }\n ]\n },\n \"count\": {\n \"type\": \"total\"\n }\n }\n },\n {\n $sort: {\n \"firstname\": 1\n }\n },\n {\n $project: {\n \"meta\": \"$$SEARCH_META\",\n \"_id\": 1,\n \"email\": 1,\n \"firstname\": 1,\n \"lastname\": 1\n }\n },\n {\n $skip: 0\n },\n {\n $limit: 10\n }])\n", "text": "We are getting “Use of undefined variable: SEARCH_META” when executing this query:Any help would be greatly appreciated", "username": "Eyad" }, { "code": "", "text": "Hi @Eyad and welcome to the community!Thank you for providing the aggregation pipeline and error details.Could you provide the following information:As per the 4.4 changelog details, specifically SERVER-58581:Add SEARCH_META variable that populates from mongotThe above change may not be available in prior versions which would explain the error/message being generated.Regards,\nJason", "username": "Jason_Tran" }, { "code": "\"meta\": \"$$SEARCH_META\"", "text": "Thanks @Jason_Tran,We just started using Atlas search, it is a new command. It runs successfully though if we don’t include \"meta\": \"$$SEARCH_META\" in the project stage.Our MongoDB version is 4.2.21Is there any workaround to count the documents for now rather than updating to a version more 4.4.x?", "username": "Eyad" }, { "code": "“$$SEARCH_META”$searchMeta", "text": "Hi @Eyad,Is there any workaround to count the documents for now rather than updating to a version more 4.4.x?Unfortunately i’m not aware of any workaround to provide the “$$SEARCH_META” data you’re after in version 4.2.21.Regarding the count, I believe this is also contained as part of the result data from the $searchMeta stage which is available from 4.4.9.Just to further understand the scenario, can you advise if there are any limitations preventing you from upgrading to 4.4?Regard,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Eventually, we are going to update MongoDB version but for now, we have a huge code base that needs to be tested thoroughly for any breaking changes if we upgrade that we are trying to avoid for this feature.\nThanks, @Jason_Tran", "username": "Eyad" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Use of undefined variable: SEARCH_META?
2022-06-21T05:48:00.468Z
Use of undefined variable: SEARCH_META?
2,932
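
One possible workaround on 4.2, not confirmed in the thread: reuse the same `$search` stage in a second aggregation that ends in `$count`, since the in-stage `count` option and `$$SEARCH_META` are only available on newer server versions. The stage below is copied from the original pipeline against the same `clients` collection.

```javascript
const searchStage = {
  $search: {
    compound: {
      must: [
        { equals: { value: ObjectId("62aa72d4a152e94836d05dc9"), path: "externalId" } },
        { equals: { value: false, path: "isDeleted" } },
        {
          wildcard: {
            query: "*James*",
            path: ["email", "firstname", "lastname"],
            allowAnalyzedField: true
          }
        }
      ]
    }
  }
};

// Page of results, as before.
db.clients.aggregate([
  searchStage,
  { $sort: { firstname: 1 } },
  { $project: { email: 1, firstname: 1, lastname: 1 } },
  { $skip: 0 },
  { $limit: 10 }
]);

// Separate round trip for the total number of matches.
db.clients.aggregate([searchStage, { $count: "total" }]);
```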
null
[ "aggregation", "node-js" ]
[ { "code": "", "text": "Hi all,\nI’m using MongoDB 5.3.1 Community and NodeJS.I have data that looks a bit like this:{\n“Specs”: [{\n“Group”: “A”,\n“Name”: “ABC” },\n{ “Group”: “A”,\n“Name”: “123” },\n{ “Group”: “B”,\n“Name”: “A12” }\n]Using an aggregation I would like to remodel it to look something like this:{\n\"Specs: [{\n“A”: [{\n“Name”: “ABC” },\n“Name”: “123” }],\n“B”: [{\n“Name”: “A12” }]\n}]I know I could use $unwind $project and $push to do this. But I’m thinking there must be a better way. The end game is to end up with a single document. I’m wondering is $setWindowFields would be a better option? But I can seem to get it to do what I want it to.Thanks.", "username": "Andy_Bryan" }, { "code": "$map$setUnion”$Specs.Group”input", "text": "You’re right that going from single document to same single document should not involve any unwinding and grouping, just field transformation.I can tell you that the transformation will probably involve $map expression and you’ll want to use $setUnion of ”$Specs.Group” expression to “seed” your input.Let me know if you’d like to see the entire solution or if this was enough to get you started in the right direction.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Hi there Asya,Thanks for the response. I’ve tried my best to work it out based on your advice, but keep running into a brick wall.Is there any chance of a little more help please?Thanks.", "username": "Andy_Bryan" }, { "code": "", "text": "I found this post by your good self and have managed to work it out from there.\nhttps://www.mongodb.com/community/forums/t/how-to-group-an-array-using-reduce-without-unwind-group/8550/7Thanks again.", "username": "Andy_Bryan" }, { "code": "", "text": "Hi @Andy_Bryan - I’m glad you were able to solve the problem!Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
A better way of Grouping an array together into a single document
2022-06-16T14:06:50.167Z
A better way of Grouping an array together into a single document
1,706
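
Since the final pipeline is only linked rather than shown in the thread above, here is a sketch of the `$setUnion` + `$map` + `$filter` shape hinted at, applied to the sample `Specs` array. The collection name is a placeholder. It produces `Specs` as an object keyed by group (`{ A: [...], B: [...] }`); wrap it in an array in the same `$project` if the original `[ { A: ..., B: ... } ]` shape is required. Note `$setUnion` deduplicates, so the distinct group names come back without a guaranteed order.

```javascript
db.collection.aggregate([
  {
    $project: {
      Specs: {
        $arrayToObject: {
          $map: {
            input: { $setUnion: ["$Specs.Group", []] }, // distinct group names, e.g. ["A", "B"]
            as: "g",
            in: {
              k: "$$g",
              v: {
                $map: {
                  input: {
                    $filter: {
                      input: "$Specs",
                      as: "s",
                      cond: { $eq: ["$$s.Group", "$$g"] }
                    }
                  },
                  as: "s",
                  in: { Name: "$$s.Name" }
                }
              }
            }
          }
        }
      }
    }
  }
]);
```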
null
[]
[ { "code": "", "text": "I'm new to MongoDB and Charts. I'm trying to create a simple table (in MongoDB Charts) to show a list of data, and MongoDB Charts says there are 4,000-plus documents and so cannot render it.\nCan't we render a table with 4,000-plus rows in MongoDB Charts?\nIs MongoDB Charts a very basic utility? Can't we use it for production purposes?", "username": "Vignesh_Venkataraman" }, { "code": "", "text": "Hi @Vignesh_Venkataraman - table charts can render a lot more than 4,000 rows. The limitation is around having a very large number of columns, as it results in poor performance and is rarely what the user intended. I suspect you are using a field in the "Dynamic Columns" channel which has a very large number of unique values. Tom", "username": "tomhollander" } ]
Issue with Mongo DB charts
2022-06-25T03:33:33.203Z
Issue with Mongo DB charts
1,969
null
[ "react-native" ]
[ { "code": "const tree = {\n branches: [\n [{ brown: true }, { brown: false }]\n [{ brown: false }]\n ]\n}\nclass Leaf {\n static schema: ObjectSchema = {\n name: 'Leaf',\n embedded: true,\n properties: {\n brown: 'boolean',\n }\n }\n}\n\nclass Tree {\n static schema: ObjectSchema = {\n name: 'Tree',\n properties: {\n // I've tested this\n branches: {\n type: 'list',\n objectType: {\n type: 'list',\n objectType: Leaf.schema.name\n }\n }\n // And also this\n branches: {\n bsonType: 'array',\n items: {\n bsonType: 'array',\n items: Leaf.schema.name\n }\n }\n }\n }\n}\n", "text": "I have the following example data structure:How can I create a schema for this? I’ve tried the following (note that this is an extracted example, so it might be slightly pseudo code):Both result in one of the following error:Error while parsing property ‘branches’ of object with name ‘Tree’. Error: objectType must be of type ‘string’, got (undefined)orError while parsing property ‘branches’ of object with name ‘Tree’. Error: objectType must be of type ‘string’, got ([Object, Object])", "username": "Adam_Gerthel" }, { "code": "MultiPolygons[[[lon, lat]]]", "text": "I had a similar issue when trying to sync geospatial types, like MultiPolygons coordinates, which need [[[lon, lat]]]. I posted to support and they came back saying that Realm sync doesn’t support array of arrays so I had to write a function to map from nested arrays of key-value objects to the required geospatial format.I hope they support these in the future.", "username": "Rob_Elliott" } ]
How to manage a list of lists (i.e. array of arrays)?
2022-05-20T18:59:25.066Z
How to manage a list of lists (i.e. array of arrays)?
2,408
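
A possible workaround, in line with the mapping approach mentioned in the last reply above (this is an assumption, not an official nested-array feature): wrap the inner list in an embedded object so the schema only ever nests list → object → list.

```javascript
class Leaf {
  static schema = {
    name: 'Leaf',
    embedded: true,
    properties: { brown: 'bool' },
  };
}

class Branch {
  static schema = {
    name: 'Branch',
    embedded: true,
    properties: { leaves: 'Leaf[]' }, // inner list lives inside the wrapper object
  };
}

class Tree {
  static schema = {
    name: 'Tree',
    properties: { branches: 'Branch[]' }, // outer list of wrapper objects
  };
}
```

`tree.branches[0].leaves` then plays the role of the original `branches[0]` inner array.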
null
[ "atlas-triggers" ]
[ { "code": "", "text": "Has anyone had trouble with database triggers? I have this trigger for an insert that should be firing, and it has been always firing. All of the sudden it just stopped and I can’t really see why.I’ve had similar troubles with triggers in the past. I’m going to implement this separately using a change stream listener through a MongoDB SDK. It’s a bit too worrisome to rely on something that will randomly stop working that I can’t really debug.", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Can you send a link to your trigger? Curious if we can help figure out what is going on?", "username": "Tyler_Kaye" }, { "code": "", "text": "Sure be my guest! Link to trigger", "username": "Lukas_deConantseszn1" }, { "code": "mongodb change stream closed with error: (BSONObjectTooLarge) Executor error during getMore :: caused by :: BSONObj size: 23576300 (0x167BEEC) is invalid. Size must be between 0 and 16793600(16MB) }\n", "text": "Hi, so you are running into a problem in MongoDB actually so I dont think using Watch will be any better unfortunately. We actually just merged in a change that will go live next week to at least show you what is going on and alert you when this happens but I can give you the gist here.MongoDB has a document size limit of 16MB but that also applies to ChangeEvents. The MongoDB server is currently working on a project to avoid or circumvent this limitation for Change Events, but the issue you are running into is that your ChangeEvent is 24MB:What can you do from here? There are a few options:Option 3 is probably your best bet here to avoid the issue entirely. Sorry for you going through this but it will be more visible in the UI in a few days and we are working on a longer-term fix.", "username": "Tyler_Kaye" }, { "code": "", "text": "Ah okay I see thanks! Yeah having some message would be really nice haha so I don’t just feel like I’m crazy. Yeah this document is massive though…", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "You should often be able to see what kind of events in general don’t get fired by the trigger. That way we can also debug match statements.", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "Yup. The error message fix will be in production in a few days. The other thing you mention is actually a project we aim to do at some point (ie, show you the 10 most recent change events and whether your match expression would return true or false to them)", "username": "Tyler_Kaye" } ]
Atlas App Service DB Trigger Blues
2022-06-23T22:29:52.797Z
Atlas App Service DB Trigger Blues
2,638
null
[ "aggregation", "atlas-search" ]
[ { "code": "", "text": "Hey Folks,We have an text field where we want to query for exact matches using Atlas Search.The analyzer Lucene.Keyword seems to be handy for doing such exact matches. However one issue that I have been facing is that Lucene.Keyword doesnt support case insensitive search which is one of our requirements. Other analyzers dont seem quite apt to our use case, as we do not want to break up the query field in punctuations/whitespaces etc.Any ideas on how i can use lucene keyword and make it case insensitve at the same time.Thanks", "username": "Shaurya_Gupta" }, { "code": "\"analyzers\": [\n {\n \"charFilters\": [],\n \"name\": \"search_keyword_lowercaser\",\n \"tokenFilters\": [\n {\n \"type\": \"lowercase\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"keyword\"\n }\n }\n ]\n", "text": "For anyone facing a similar issue as this, I was able to fix it by using a custom analyzer and combining it with the keyword tokeniser and a lowercase token filterThe sample code is as follows", "username": "Shaurya_Gupta" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Search | Lucene Keyword analyzer case insensitive
2022-06-24T18:09:50.299Z
Atlas Search | Lucene Keyword analyzer case insensitive
3,951
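
To round out the fix above, this is roughly how the custom analyzer gets attached to a field and queried; the field name `sku`, collection `products`, and index name `default` are illustrative assumptions.

```javascript
// Index definition fragment that sits alongside the "analyzers" array shown above:
// {
//   "mappings": {
//     "dynamic": false,
//     "fields": {
//       "sku": { "type": "string", "analyzer": "search_keyword_lowercaser" }
//     }
//   }
// }

// Because the same keyword + lowercase analysis is applied at index and query
// time, this matches "abc-123", "ABC-123", "AbC-123", ... but nothing partial.
db.products.aggregate([
  { $search: { index: "default", text: { query: "AbC-123", path: "sku" } } }
]);
```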
null
[ "dot-net", "connecting", "monitoring" ]
[ { "code": "db.serverStatus().connections\n{\n \"current\" : 218,\n \"available\" : 50982,\n \"totalCreated\" : 208189,\n \"active\" : 186\n}\ndb.runCommand( { \"connPoolStats\" : 1 } )\n{\n \"numClientConnections\" : 0,\n \"numAScopedConnections\" : 0,\n \"totalInUse\" : 0,\n \"totalAvailable\" : 1,\n \"totalCreated\" : 59272458,\n \"totalRefreshing\" : 0,\n ...\n", "text": "We are having an issue in production (replica set, version 4.2.1). We track email clicks and opened emails. When large email blasts are sent it slows down our website and users are unable to login.We get the following error:\nThe wait queue for acquiring a connection to server x is full.Here are some server stats when this happens:There is nothing that we have configured for connections in the config file. We must have the default settings. We use the C# driver and we can see the MaxConnectionPoolSize is 100 and the WaitQueueSize is 500.Here are my questionsThanks", "username": "Christian_Longtin" }, { "code": "", "text": "This is a recurrent problem that we have not found the answer yet.\nAny help would be appreciated.Thanks", "username": "Christian_Longtin" }, { "code": "", "text": "This is relentlessly happening to one of our apps too. There’s also seemingly no pattern to it, just suddenly nothing can connect, the wait queue builds up and then it just starts throwing wait queue exceptions and can’t seem to recover itself.", "username": "Paul_Allington" }, { "code": "", "text": "I am having the same issue on my app too. Did they found a workaround for it ?", "username": "Aneesh_S" }, { "code": "", "text": "Similar problem here, did you have any success with troubleshooting it?", "username": "Alan_Bucknum" }, { "code": "", "text": "I am having the same issue on my app too. Did you have any success with troubleshooting it?", "username": "Truong_Huynh" }, { "code": "", "text": "One thing we noticed in our logs was that the C# driver defaults to SHA-256, which was slowing the connection progress. Appending &authMechanism=SCRAM-SHA-1 to our connection string helped.", "username": "Alan_Bucknum" }, { "code": "db.currentOp(true).inprog\n", "text": "You can try to ensure that you using only 100 connections, by exploringIt has client field, that contains ip addres/es.", "username": "muphblu" } ]
Wait Queue is Full - Understand Connections
2021-02-27T19:08:43.359Z
Wait Queue is Full - Understand Connections
16,289
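
For the "how do we change these values" part of the question above, the pool limits can be raised either through MongoClientSettings in the .NET driver (MaxConnectionPoolSize / WaitQueueSize, as mentioned in the original post) or via standard connection-string options. The values below are illustrative, not recommendations, and raising them does not remove the underlying slow-query pressure that fills the queue in the first place.

```
mongodb://user:pass@host-a:27017,host-b:27017,host-c:27017/app
  ?replicaSet=rs0&maxPoolSize=200&waitQueueTimeoutMS=5000
```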
null
[ "aggregation", "queries", "indexes", "schema-validation" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"62af099d90dcb82b1f750b12\"\n },\n \"survey_id\": 205380,\n \"account_id\": 1005,\n \"ws\": 15,\n \"date_reg\": {\n \"$date\": \"2022-06-19T11:33:49.000Z\"\n },\n \"time_reg\": \"11:33:49\",\n \"read\": 0,\n \"ip\": \"x\",\n \"ent\": {\n \"person\": \"Steve\",\n \"location\": \"California\",\n \"date\": \"2022-06-01\"\n },\n \"topics\": [\"terrible\", \"quality\", \"bug\"],\n \"tags\": [10091, 15335, 235235, 23235],\n \"sentiment\": 1,\n \"country\": \"\",\n \"star\": 0,\n \"ref\": \"\",\n \"page\": \"x\",\n \"meta\": {\n \"os\": 1,\n \"dev\": 0\n },\n \"answers\": [{\n \"answer\": \"sample answer 1\",\n \"original_answer\": \"demo antwoord 1\",\n \"type\": 0,\n \"qid\": 5768,\n \"datatype\": 0,\n \"so\": 0,\n \"q\": \"Let's get started! What is your first name?\"\n }, {\n \"answer\": 2,\n \"original_answer\": \"\",\n \"type\": 4,\n \"qid\": 5770,\n \"datatype\": 16,\n \"so\": 1,\n \"q\": \"Thanks Let's get started..., how likely are you to...\"\n }, \netc]\n}\n", "text": "Hi,I’m using MongoDB 4.x and the PHP 8.1 library/extension.\nI’m building a small customer feedback SaaS where the entries are stored in MongoDB.The problem that I am facing is that I have to query pretty much every key at some point making it very hard deciding on structure and indexes.This is wat I have so far:account_id + survey_id will be in pretty much every query.\nAll the other keys are keys that I have to query specifcally or multiple/all at once (when customer is selecting filters).the answers array contain all the answers, also here is pretty much every key important except the original_answer and so.Breakdown of what each key does:answer: is the answer that is given.\nIt’s function: view the answer that was given and search queriesoriginal_answer: is the answer in the native language of the respondent.\nmy customer can choose to auto translate the answer to their language.\nIt’s function: search queriestype: is the question type. E.g. textfield is 0, dropdown=1, textarea=2, checkbox=3 etc.\nfunction: for use in summaries/reporting, to quickly select by question type.\nCome to think of it, it could be less important because I can also query by question_idqid: the question id\nfunction: for use in summaries/reporting, to quickly select entries for that specific question.datatype: a integer for storing what kind of data it is. eg. multiple choice, sensitive data etc.\nfunction: used to quickly determine which entries have sensitive data.so: sortorder, only used in viewing feedback questions in the right order.q: the question that the respondent was answering.\nfunction: If my customer changes the question in the (MySQL) surveys table, I can tell them the results might be skewed because they changed the question when there was already live feedback.\nOn the other hand, if the original question was deleted, I can still show a small snippet so even if it’s removed they can still see what the question was about.As for indexes. 
I have a index on almost every field, but this is of course a no go.\nI would make account_id + survey_id a compound index, but then when you select multiple keys it will not use that index or at least not effectively.Please help.", "username": "Ralph_van_der_Sanden" }, { "code": "survey_idaccount_id{account_id: 1, survery_id: 1} and {survey_id: 1}{survey_id: 1, account_id: 1} and {account_id: 1}$in", "text": "Hi @Ralph_van_der_Sanden and welcome in the MongoDB Community !It’s really hard to answer without more numbers like the total size of the collection, nb of docs and cardinality of survey_id and account_id.With the given information and supposing a “normal” distribution, I would just create 2 indexes (depending on cardinality to try to make them as filtering as possible). Either:My supposition is that the filter on one of these fields or both at the same time will already filter down the result set from 1 millions docs to 100 docs. The remaining 100 docs will need to be fetch from disc to resolve the rest of the query, whatever the other filters. It’s an “OK” trade off so inserts don’t suffer too much and queries are still performant.Let it run like this during a month.Final comment, remember the ESR rule: Equality => Sort => Range.\nI saw you mentioned a sort at some point. If you need your answers to be sorted, then make sure the sort is always before a range query (a $in is a range) to avoid in-memory sorts.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi Maxime,\nThe collection will have around 100 million documents and per survey (max) 1 million.\nHaving only the indexes as you suggested don’t work.For example the tags key, I have to use a aggregration pipeline to unwind that array.\nEven if account_id + survey_id brings it down to max. 1 million, it’s still too much without adding an index on “tags”.Thanks.", "username": "Ralph_van_der_Sanden" }, { "code": "tags$unwind{account_id: 1, survery_id: 1, tags: 1}\n{survery_id: 1, tags: 1}\n{account_id: 1, tags: 1}\ndb.orders.aggregate( [ { $indexStats: { } } ] )\nfind({})", "text": "The index on tags won’t help the $unwind, are we good on that?If tags is commonly used for filtering then I would add it in the indexes. It’s a trade off between the size of the indexes (which use RAM) and how often you are using these queries.Like : “Is it worth it to create this 2GB index for a query that is running 100 times a day and takes 1 sec?” Maybe yes, maybe no. 
It depends on the use case, budget, hardware, etc.You indexes could be:This is a bit sad but after a few weeks, you can run:And check which index is really useful and which one could go away and maybe be replaced by another one more strategic / worth it that would be detected by the profiler or Atlas Performance Advisor.I was consider adding wildcard indexes as a potential solution to your problem but I think they wouldn’t help as you have a specific & known schema.I think the easiest solution to your problem here is to force some filters to your users when they perform a search so you can prevent highly inefficient ad-hoc queries in the DB.\nElse they will click on the SEARCH button without any filter and basically send a find({}) to the DB… Nobody will like the result / perf of that one.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi, thank you.Will play around with indexes and $indexStats to get the right balance.Another thing I run into is a query like this:\nIn MySQL I would do SELECT key FROM table WHERE condition=x and will only get that value.\nIn MongoDB I can do that, but it will return the whole document which is logical.\nI can use projection to only view the key I need (answers.answer) but then all the other answers are returned too.For example, I want to find all (not empty) answers.answer where answers.qid=5\nI would get the answer I need but also all other answers with different qid’s. I would have to filter them in PHP but that’s not very effective.\nHow can I create a query that only returns the array/object that matched the conditions?", "username": "Ralph_van_der_Sanden" }, { "code": "", "text": "You need $filter in your projection.", "username": "steevej" }, { "code": "[\n {\n '$match': {\n 'answers.qid': 5768\n }\n }, {\n '$project': {\n 'items': {\n '$filter': {\n 'input': '$answers', \n 'as': 'item', \n 'cond': {\n '$eq': [\n '$$item.qid', 5768\n ]\n }\n }\n }\n }\n }\n]\n[\n {\n _id: ObjectId(\"62af099d90dcb82b1f750b12\"),\n items: [\n {\n answer: 'sample answer 1',\n original_answer: 'demo antwoord 1',\n type: 0,\n qid: 5768,\n datatype: 0,\n so: 0,\n q: \"Let's get started! What is your first name?\"\n }\n ]\n }\n]\n_id[\n {\n '$match': {\n 'answers.qid': 5768\n }\n }, {\n '$project': {\n 'answers': {\n '$filter': {\n 'input': '$answers', \n 'as': 'item', \n 'cond': {\n '$eq': [\n '$$item.qid', 5768\n ]\n }\n }\n }\n }\n }, {\n '$group': {\n '_id': null, \n 'all': {\n '$push': '$answers'\n }\n }\n }, {\n '$project': {\n '_id': 0, \n 'answers': {\n '$reduce': {\n 'input': '$all', \n 'initialValue': [], \n 'in': {\n '$concatArrays': [\n '$$value', '$$this'\n ]\n }\n }\n }\n }\n }\n]\n[\n {\n answers: [\n {\n answer: 'sample answer 1',\n original_answer: 'demo antwoord 1',\n type: 0,\n qid: 5768,\n datatype: 0,\n so: 0,\n q: \"Let's get started! What is your first name?\"\n },\n {\n answer: 'sample answer 1',\n original_answer: 'demo antwoord 1',\n type: 0,\n qid: 5768,\n datatype: 0,\n so: 0,\n q: \"Let's get started! 
What is your first name?\"\n }\n ]\n }\n]\n", "text": "If I insert with mongoimport the sample doc you provided in the top post, I can run this aggregation that is supported by an index on {“answers.qid”: 1}Final ouput looks like this:Does that work for you?If you prefer to only retrieve a single doc at the end with a single array, you could add an extra couple of stages to group all the arrays into a single one (and then flatten it as it’s an array of arrays).If I insert the same doc a second time in the collection with a different _id, and execute this pipeline:I get this output now:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you so much Maxime and Steeve, that works great!\nOk, last few questions if you don’t mind:Do you think the current structure for the answers (nested array/objects) is OK when it gets to 100+ million documents per collection and about 1-2 million documents per account_id & survey_id combination? I will need to perform a lot of aggregrations for analytics purpose.\nThe unwind could become the bottleneck I guess and how would I index the answers array/object the best?Searching is very problematic (at least with the things that I tried).\nI tried full text search, but that will only work if my customers are searching for a stemmed word which often will not be the case. Searching for anything else with my sample data of 2 million documents for 1 survey_id is just timing out.\nAny idea on how I can search fast in at least 500.000 - 1 million documents?\nanswers.answer is the field that I would query.Is there a way to write a query that will return a word cloud (answers.answer), the most occurring words will have to be the largest font size, least occurring smallest font size. I can do the font-size part but how would you query this so that you can get results in let’s say max 2 seconds with a list of words and the occurrence per word? Or should I just use a tokenizer when a survey submission comes in and save those words in a different array inside that document?", "username": "Ralph_van_der_Sanden" }, { "code": "explain(true)", "text": "Yes I think it’s okay as long as the size of the array “answer” is limited. If all your docs are more in the KB zone than in the MB zone, then you are fine. Data that is access together should be stored together. If you don’t need the answers each time you access these docs, maybe they could be moved to another collection then to keep the docs lightweight and reduce the size of the working set in RAM, reduce IOPS, etc.With the right index and the right hardware, you can only go so far. If you want to speed things up, maybe it’s time to consider sharding this collection if everything else is maxed out and carefully tuned. FTS isn’t designed to improve the query speed. Aggregations can be really fast if they are not manipulating 500M docs in memory. We are reaching the limits of physics at some point. In any case, always use explain(true) to check which stage is slow in your query and see if you can improve it with indexes.MongoDB Atlas Search supports Highlight Search Terms in results. But it’s only available on Atlas. The sync between your data and the (hidden) Lucene engine is completely automated in Atlas and included in the pricing. No need for another Elastic or Solr licences & servers for example.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Need help defining the right document structure and indexes
2022-06-20T12:39:07.626Z
Need help defining the right document structure and indexes
3,250
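
The word-cloud question at the end of the thread above was left without code; here is a sketch of the counting side, assuming submissions are pre-tokenized into a `words` array as the last post suggests (the `entries` collection name and `words` field are assumptions):

```javascript
db.entries.aggregate([
  { $match: { account_id: 1005, survey_id: 205380 } }, // served by the compound index
  { $unwind: "$words" },
  { $group: { _id: "$words", count: { $sum: 1 } } },
  { $sort: { count: -1 } },
  { $limit: 100 }                                      // top words to size in the cloud
]);
```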
null
[ "node-js", "mongoose-odm", "connecting", "atlas-cluster" ]
[ { "code": "", "text": "What’s the correct way to add a document to a specific mongodb database in the uri? I added <database=wdntestdb> in the uri but that did not add the item to the correct database. It added the item to a database that I deleted previously. I’m using nextjs and mongoose. The database is named “wdntestdb”.\n//env.local:\nMONGODB_CONNECTION_STRING = mongodb+srv://<USER_NAME>:@.mongodb.net/?retryWrites=true&w=majority&database=wndtestdb\"//Mongoose model\nconst AdminUser = models.AdminUser Schema || model('AdminUser ', AdminUserSchema, ‘AdminUsersCollection’);", "username": "david_h" }, { "code": "mongodb+srv://<USER_NAME>:<PASSWORD>@abcde.mongodb.net/AuthDBhere?retryWrites=true&w=majority\nconst database = client.db(\"sample_mflix\");\nconst movies = database.collection(\"movies\");\n", "text": "Hi @david_h and welcome in the MongoDB Community !“database” isn’t a valid connection string option. You can set the default authentication DB like this:But that’s it and you shouldn’t rely on this to write in the DB of your choice anyway.\nJust use this to get to the right place.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "const AdminSchema = new Schema(\n\n {\n\n name: {type: String, trim: true, required: true},\n\n email: {type: String, trim: true, required: true, unique: true, lowercase: true},\n\n password: {type: String, required: true, minlength: 8}\n\n }\n\n)\n", "text": "Thank you. It seems that using the Mongdb client directly is a better option considering the Mongoosejs documentation does not offer an obvious, compatible way of achieving this. The only thing I would miss is the ability to use models? Also, how would I do data validation on the backend. Using mongoose I can do the following:", "username": "david_h" }, { "code": "", "text": "You can definitely write to multiple databases and collection with both the MongoDB Node.js driver and Mongoose. See the doc for both but it’s 100% sure you can with both.Mongoose is an extra layer on top of the MongoDB Node.js driver though. I’m not a fan as I prefer to use directly the driver and all the features, but it’s your choice. Mongoose helps to enforce a schema as you have to work with models, but MongoDB is schemaless by design so nothing forces you to enforce a model / schema by default.That being said, if you need or want to enforce some constraints, there are a couple of ways to do so:If you do so in the back-end (like by using Mongoose or some other JSON schema modules), you can’t guarantee that another client (=back-end) or direct command lines sent to the cluster will respect these constraints.The only wait to enforce a constraint for sure, is to enforce it with the $jsonSchema directly in the MongoDB collection in the validator.Read this for more details:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to add a document to a specific database AND collection in the URI
2022-06-23T20:54:34.519Z
How to add a document to a specific database AND collection in the URI
3,878
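
A mongosh sketch of the server-side option mentioned in the thread above: a `$jsonSchema` validator that mirrors the Mongoose AdminSchema from the question. The collection name is illustrative, and uniqueness is enforced with an index rather than the validator.

```javascript
db.createCollection("adminusers", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["name", "email", "password"],
      properties: {
        name: { bsonType: "string" },
        email: { bsonType: "string" },
        password: { bsonType: "string", minLength: 8 }
      }
    }
  }
});

// `unique: true` in Mongoose corresponds to a unique index on the server.
db.adminusers.createIndex({ email: 1 }, { unique: true });
```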
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "", "text": "Hi. I am trying to retrieve total count of docs as well as paginated docs. Is there a way to retrieve everything in one request?\nI want to be sure that the total count of docs is the same as paginated docs when someone just removed the doc.\nThe problem:\nI am querying total count - I get 5. Just some ms later I fetch for paginated results\nMeanwhile someone deletes the fifth document - the total count is 5 but there are only 4 documents.\nCan I solve this with cursor or the only option is to use aggregation?", "username": "Proth7_N_A" }, { "code": "count(x)find(x)", "text": "Hi @Proth7_N_A and welcome in the MongoDB Community !I think I would wrap the entire thing in a read multi-doc transaction or the aggregation pipeline for a single command but it really depends how you organise it really.You could also send a count(x) command + your paginated find(x) each time you update the view. I think the users wouldn’t be surprised if the count changed when they click on <page 2> as they know it’s a multi users page.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Alternatively, you may use $facet, one to get the total count and the second one to get the current page.However, I can conquer with Maxime, that most are not surprise if total count change, if inventory level change between clicks.Even if you do 2 requests, with 5 total count and get only 4 docs, there is nothing that stops you from adjusting your 5 total count to 4 if only 4 docs show up.I am pretty sure that is why some came up with the infinite scroll which I hate as a user. I rather have some kind of an idea of where I am. With infinite scroll it looks like there is always more then my FOMO kicks in.", "username": "steevej" }, { "code": "", "text": "Thanks Maxime. I wiill take a look intro multi-doc transaction as well as possible aggregation pipeline for it", "username": "Proth7_N_A" }, { "code": "", "text": "I see. Thanks for setting up few ideas in my mind ", "username": "Proth7_N_A" } ]
How to query docs and retrieve total count of docs (at that time) with skip and limit?
2022-06-22T14:03:33.662Z
How to query docs and retrieve total count of docs (at that time) with skip and limit?
4,817
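
A sketch of the `$facet` suggestion from the thread above, returning the page and the total from a single aggregation; collection, filter, and sort fields are illustrative.

```javascript
const page = 0, pageSize = 10;

db.items.aggregate([
  { $match: { status: "active" } },
  { $sort: { createdAt: -1 } }, // sort before $facet so an index can be used
  {
    $facet: {
      total: [{ $count: "count" }],
      docs: [{ $skip: page * pageSize }, { $limit: pageSize }]
    }
  }
]);
// Result shape: [ { total: [ { count: N } ], docs: [ ...up to pageSize docs... ] } ]
```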
null
[ "node-js", "mongoose-odm", "server" ]
[ { "code": "", "text": "Hi.I am mongodb noob and have installed mongodb on a PC (no issues) and a Mac. I installed mongodb community edition 5.0 using BREW. I have a m1 Mac laptopMy mongoose connection to the mongodb server does not want to resolve the name ‘localhost’ in the connection stringerror is Error: connect ECONNREFUSED ::1:27017\nconnection string is MONGO_URI = mongodb://admin:admin@localhost:27017/iTunes?authSource=admin&retryWrites=true&w=majoritythings I’ve tried:\nstopping the brew service and running mongod --ipv6 --dbpath /path-to-database/I got errors about Bootstrap which I didn’t understand.I uninstalled mongodb through brew and reinstalled it (not understanding why it installed mongodb 4 first, then mongodb 5.0 - both community editions)I restated using brew services start [email protected] and to my surprise all the collections from before were there, as was the ‘admin’ userI was able to connect using my connection string (mongodb://admin:admin@localhost:27017/iTunes?authSource=admin&retryWrites=true&w=majority) and once I saw it was working I ran some calls to the API I built and tired I stopped and closed the app.I returned later and I can no longer connect. I’ve seen so many error messages my head is spinning - is there something I can get the terminal to output so I can get some help?Is there any information missing about my setup that I can provide?thanks in advance.", "username": "ed_dickins" }, { "code": "", "text": "Possibly mongod is listening on the external interface and not on localhost. Check your config.", "username": "Jack_Woehr" }, { "code": "systemLog:\n destination: file\n path: /opt/homebrew/var/log/mongodb/mongo.log\n logAppend: true\nstorage:\n dbPath: /opt/homebrew/var/mongodb\nnet:\n bindIp: 127.0.0.1\n ipv6: true\n", "text": "thanks - this is my configI added the ipv6 switch.I have read that I can also use ::,127.0.0.1 for the bindIp option.\nIt is supposed to open up the ipv6 resolution to the 127.0.0.1 IP address.", "username": "ed_dickins" }, { "code": "/bin/launchctl bootstrap gui/501 Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist", "text": "other errors I have searched on have been the following - mostly pulled from running as root in the terminal and getting lots of traces:Error: Failure while executing; /bin/launchctl bootstrap gui/501 Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist exited with 5.mongodb Bootstrap failed: 5: Input/output errorwhat I find really interesting is that after uninstalling and reinstalling it worked first try after being launched from BREW. if the issue was ipv6 based it should not have been able to resolve the ‘localhost’ in my connection string.the error with the .plist makes me think that after running once mongodb wrote some config stuff and it is now jammed up and won’t accept connections going forward from that point.to be clear I can connect via MONGOSH and can see the db’s and collections in the shell. it is just this page using mongoose that is failing - MongooseServerSelectionError: connect ECONNREFUSED ::1:27017", "username": "ed_dickins" }, { "code": "sudo rm /tmp/mongodb-27017.sock", "text": "sudo rm /tmp/mongodb-27017.sock and try to make sure you always shut down cleanly.", "username": "Jack_Woehr" }, { "code": "sudo rm /tmp/mongodb-27017.sock", "text": "sudo rm /tmp/mongodb-27017.sockthanks Jack. This was one of the solutions I tried yesterday (and just now) it seems not to make a difference. Do you think my .conf file looks OK? 
Should I maybe be binding to ::,127.0.0.1 to accommodate ipv6? I am just guessing here.", "username": "ed_dickins" }, { "code": "", "text": "this is what the terminal outputs if I run mongod with --ipv6 and --dbpath optionsseems to start chucking errors here [16514:0x2055fe600], wiredtiger_open: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 808: /opt/homebrew/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}I am not sure what wild tiger is.ed@edwards-Air itunes_restfulapi % mongod --ipv6 --dbpath /opt/homebrew/var/mongodb\n{“t”:{\"$date\":“2022-06-24T10:04:57.284+01:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:\"-\",“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{\"$date\":“2022-06-24T10:04:57.285+01:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:\"-\",“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.286+01:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{\"$date\":“2022-06-24T10:04:57.286+01:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648602, “ctx”:“main”,“msg”:“Implicit TCP FastOpen in use.”}\n{“t”:{\"$date\":“2022-06-24T10:04:57.287+01:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{\"$date\":“2022-06-24T10:04:57.288+01:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“ns”:“config.tenantMigrationDonors”}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.288+01:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“ns”:“config.tenantMigrationRecipients”}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.288+01:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{\"$date\":“2022-06-24T10:04:57.288+01:00”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:16514,“port”:27017,“dbPath”:\"/opt/homebrew/var/mongodb\",“architecture”:“64-bit”,“host”:“edwards-Air”}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.288+01:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“5.0.7”,“gitVersion”:“b977129dc70eed766cbee7e412d901ee213acbda”,“modules”:[],“allocator”:“system”,“environment”:{“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.288+01:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Mac OS X”,“version”:“21.5.0”}}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.288+01:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“net”:{“ipv6”:true},“storage”:{“dbPath”:\"/opt/homebrew/var/mongodb\"}}}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.290+01:00”},“s”:“I”, “c”:“NETWORK”, “id”:5693100, “ctx”:“initandlisten”,“msg”:“Asio socket.set_option failed with std::system_error”,“attr”:{“note”:“acceptor TCP fast open”,“option”:{“level”:6,“name”:261,“data”:“00 04 00 
00”},“error”:{“what”:“set_option: Invalid argument”,“message”:“Invalid argument”,“category”:“asio.system”,“value”:22}}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.291+01:00”},“s”:“I”, “c”:“NETWORK”, “id”:5693100, “ctx”:“initandlisten”,“msg”:“Asio socket.set_option failed with std::system_error”,“attr”:{“note”:“acceptor TCP fast open”,“option”:{“level”:6,“name”:261,“data”:“00 04 00 00”},“error”:{“what”:“set_option: Invalid argument”,“message”:“Invalid argument”,“category”:“asio.system”,“value”:22}}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.292+01:00”},“s”:“I”, “c”:“STORAGE”, “id”:22270, “ctx”:“initandlisten”,“msg”:“Storage engine to use detected by data files”,“attr”:{“dbpath”:\"/opt/homebrew/var/mongodb\",“storageEngine”:“wiredTiger”}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.292+01:00”},“s”:“I”, “c”:“STORAGE”, “id”:22315, “ctx”:“initandlisten”,“msg”:“Opening WiredTiger”,“attr”:{“config”:“create,cache_size=3584M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],”}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.394+01:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:13,“message”:\"[1656061497:394229][16514:0x2055fe600], wiredtiger_open: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 808: /opt/homebrew/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.395+01:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:13,“message”:\"[1656061497:395479][16514:0x2055fe600], wiredtiger_open: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 808: /opt/homebrew/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.395+01:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:13,“message”:\"[1656061497:395926][16514:0x2055fe600], wiredtiger_open: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char , WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE ), 808: /opt/homebrew/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.395+01:00”},“s”:“W”, “c”:“STORAGE”, “id”:22347, “ctx”:“initandlisten”,“msg”:“Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.”}\n{“t”:{\"$date\":“2022-06-24T10:04:57.395+01:00”},“s”:“F”, “c”:“STORAGE”, “id”:28595, “ctx”:“initandlisten”,“msg”:“Terminating.”,“attr”:{“reason”:“13: Permission denied”}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.395+01:00”},“s”:“F”, “c”:\"-\", “id”:23091, “ctx”:“initandlisten”,“msg”:“Fatal assertion”,“attr”:{“msgid”:28595,“file”:“src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp”,“line”:687}}\n{“t”:{\"$date\":“2022-06-24T10:04:57.396+01:00”},“s”:“F”, “c”:\"-\", “id”:23092, “ctx”:“initandlisten”,“msg”:\"\\n\\naborting after fassert() failure\\n\\n\"}\ned@edwards-Air itunes_restfulapi %", "username": "ed_dickins" }, { "code": "", "text": "LOL I fixed this whole issue really simply. 
Thanks for the help everyone but I wasn’t thinking about ‘keeping it simple, stupid’my issue was that mongodb was running fine on my PC (as localhost) and all attempts to do the same on my MAC laptop (m1 chip) were failing.I realised that the simplest thing to do since my PC install was working fine was as follows:Now the PC and the MAC can both access the same mongodb instance using the same code to connect to the newly shared mongodb server on the PC. This is clearly the best solution. I’m sorry I did not learn what was up with the mongodb install on the m1 processor Mac laptop - I would have liked to help the community but I have little understanding of the errors it was chucking.thanks to everyone who answered my original post. here is a link to the video I used to set up this solution, hopefully it will be helpful to someone else.", "username": "ed_dickins" }, { "code": "mongod", "text": "w/r/t the M1, did you try to manually start mongod or did you use the brew-recommended service startup?", "username": "Jack_Woehr" }, { "code": "", "text": "originally I tried the BREW service, and then I tried directly using mongod.Interestingly there were no issues before I added authentication to mongodb - I had added it on the P\nC because ultimately in deployment I would need it but the authentication would not work and (from what I read) this is an ipv6 issue on Macs.I uninstalled, and reinstalled (via BREW) and it worked once through the brew service but not after that. Honestly the decision to give the PC a static IP address was not only simple but also clearly the correct solution. I was being daft having two different DBs but I am new to this and it seemed obvious to have it installed locally until I thought about it and realised it was a poor solution and a shared networked instance is logically much better.", "username": "ed_dickins" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mac local installation of mongodb is unstable - issue seems to be around ipv6
2022-06-23T16:17:15.147Z
Mac local installation of mongodb is unstable - issue seems to be around ipv6
4,405
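
A common fix for the ECONNREFUSED ::1:27017 symptom in the thread above (not confirmed by the poster, who worked around it differently): newer Node versions may resolve `localhost` to the IPv6 loopback `::1` first, while the config shown binds mongod to 127.0.0.1 only, so pointing the URI at the IPv4 loopback avoids the mismatch. Credentials and database name are taken from the original post.

```javascript
const MONGO_URI =
  "mongodb://admin:admin@127.0.0.1:27017/iTunes?authSource=admin&retryWrites=true&w=majority";
```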
null
[ "crud" ]
[ { "code": "{\n _id: ObjectId('6284ea2cad10a6a1b43a7f53'),\n bookingnr: 'BA-34567-3333-92540',\n flags: ['direct', 'last-minute', 'family']\n}\n{\n _id: ObjectId(\"628614db8e29fc2a5c8e4f0c\"),\n bookingnr: 'TA-87965-48521-89809',\n flags: ['direct', 'reduced', 'external-booking', 'in-house']\n}\ndb.collection.updateMany(\n { flags: 'direct' },\n { $set: { 'flags.$': 'in-house' } }\n);\n$set", "text": "Hello there,I need to replace a string in an array field on multiple docs without creating duplicate values in the array.Example documents:What i came up with for now is:But this creates duplicate values when the value used in $set is already part of the array. So how can I achieve this without duplicate array entries?Further details/requirements:cheers", "username": "Georg_Bote" }, { "code": "", "text": "See $addToSet.", "username": "steevej" }, { "code": "$addToSetdb.collection.updateMany(\n { flags: 'direct' },\n { \n $pull: { flags: 'direct' } ,\n $addToSet: { flags: 'in-house' }\n }\n);\nMongoServerError: Updating the path 'flags' would create a conflict at 'flags'\n$addToSet$addFields/$set $project/$unset $replaceRoot/$replaceWithdb.collection.updateMany(\n { flags: 'direct' },\n [ \n { $set: { 'flags.$': 'in-house' } },\n { $addToSet: { flags: 'in-house' } }\n ]\n);\n", "text": "Thanks, I already looked into it, but $addToSet alone would not remove the one I want to replace. And I can’t figure out how to do both at the same time. I tried:which gives this error:Somewhere i read that this means I can’t modify the same field multiple times in one update query.And I tried using an aggregation pipeline instead, but I don’t unterstand how to use $addToSet in there, because as described in the docs the only stages accepted are $addFields/$set $project/$unset $replaceRoot/$replaceWith.This does not work:", "username": "Georg_Bote" }, { "code": "", "text": "What do you want to do with flags:direct when flags: in-house is already present?Using aggregation update:To remove direct you may $filter flags array.Then use $reduce to determine if in-house is present or not, then adding it or not.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
updateMany: Replace string in array field, ignore duplicates
2022-06-23T09:42:43.673Z
updateMany: Replace string in array field, ignore duplicates
2,541
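The last reply in the thread above points at an aggregation-pipeline update built from $filter and $reduce but stops short of code. A minimal sketch of that idea (not taken from the thread; it reuses the poster's collection and field names, needs MongoDB 4.2+ for pipeline updates, and note that $setUnion does not guarantee element order):

```javascript
// Replace 'direct' with 'in-house' in one pipeline update without creating
// duplicates: $filter drops the old value, $setUnion adds the new one only
// if it is not already present.
db.collection.updateMany(
  { flags: 'direct' },
  [
    {
      $set: {
        flags: {
          $setUnion: [
            { $filter: { input: '$flags', cond: { $ne: ['$$this', 'direct'] } } },
            ['in-house']
          ]
        }
      }
    }
  ]
);
```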
null
[ "replication", "java", "spring-data-odm" ]
[ { "code": "$ oc get po -n database -o wide\nNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES\nmongo-0 2/2 Running 0 2d12h 10.128.2.185 host-node1.novalocal <none> <none>\nmongo-1 2/2 Running 0 2d12h 10.128.2.186 host-node1.novalocal <none> <none>\nmongo-2 2/2 Running 0 2d12h 10.128.2.187 host-node1.novalocal <none> <none>\n$ oc get po -n database -o wide\nNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES\nmongo-0 2/2 Running 0 18s 10.128.3.53 host-node1.novalocal <none> <none>\nmongo-1 2/2 Running 0 13s 10.128.3.54 host-node1.novalocal <none> <none>\nmongo-2 2/2 Running 0 10s 10.128.3.55 host-node1.novalocal <none> <none>\norg.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@6f391070. Client view of cluster state is {type=REPLICA_SET, servers=[{address=10.128.2.187:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.NoRouteToHostException: No route to host (Host unreachable)}}, {address=10.128.2.186:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.NoRouteToHostException: No route to host (Host unreachable)}}, {address=10.128.8.138:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.NoRouteToHostException: No route to host (Host unreachable)}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@6f391070. Client view of cluster state is {type=REPLICA_SET, servers=[{address=10.128.2.187:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.NoRouteToHostException: No route to host (Host unreachable)}}, {address=10.128.8.136:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.NoRouteToHostException: No route to host (Host unreachable)}}, {address=10.128.8.138:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.NoRouteToHostException: No route to host (Host unreachable)}}]\nat org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:95)\nat org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:2874)\nat org.springframework.data.mongodb.core.MongoTemplate.executeFindOneInternal(MongoTemplate.java:2749)\nat org.springframework.data.mongodb.core.MongoTemplate.doFindOne(MongoTemplate.java:2466)\nat org.springframework.data.mongodb.core.MongoTemplate.findOne(MongoTemplate.java:799)\nat org.springframework.data.mongodb.core.MongoTemplate.findOne(MongoTemplate.java:786)\nat com.fujitsu.fnc.fums.faultMgmt.service.FaultMgmtService.getNEFormat(FaultMgmtService.java:205)\nat com.fujitsu.fnc.fums.faultMgmt.service.FaultMgmtService.getAutonomousFaultResponse(FaultMgmtService.java:129)\nat com.fujitsu.fnc.fums.faultMgmt.stream.listener.AutonomousStreamListener.process(AutonomousStreamListener.java:28)\nat sun.reflect.GeneratedMethodAccessor123.invoke(Unknown Source)\n$ oc exec -it mongo-0 -n database -- mongo 
--version\nDefaulted container \"mongo\" out of: mongo, mongo-sidecar\nMongoDB shell version v4.0.19\ngit version: 7e28f4296a04d858a2e3dd84a1e79c9ba59a9568\nOpenSSL version: OpenSSL 1.0.2g 1 Mar 2016\nallocator: tcmalloc\nmodules: none\nbuild environment:\n distmod: ubuntu1604\n distarch: x86_64\n target_arch: x86_64\n$ cat /etc/os-release\nNAME=\"Red Hat Enterprise Linux\"\nVERSION=\"8.5 (Ootpa)\"\nID=\"rhel\"\nID_LIKE=\"fedora\"\nVERSION_ID=\"8.5\"\nPLATFORM_ID=\"platform:el8\"\nPRETTY_NAME=\"Red Hat Enterprise Linux 8.5 (Ootpa)\"\nANSI_COLOR=\"0;31\"\nCPE_NAME=\"cpe:/o:redhat:enterprise_linux:8::baseos\"\nHOME_URL=\"https://www.redhat.com/\"\nDOCUMENTATION_URL=\"https://access.redhat.com/documentation/red_hat_enterprise_linux/8/\"\nBUG_REPORT_URL=\"https://bugzilla.redhat.com/\"\n\nREDHAT_BUGZILLA_PRODUCT=\"Red Hat Enterprise Linux 8\"\nREDHAT_BUGZILLA_PRODUCT_VERSION=8.5\nREDHAT_SUPPORT_PRODUCT=\"Red Hat Enterprise Linux\"\nREDHAT_SUPPORT_PRODUCT_VERSION=\"8.5\"\n", "text": "Description:\nI have configured a mongodb replicaset on a kubernetes cluster (v1.21) with 1 primary and 2 secondary nodes where the client apps (springboot) are successfully able to read/write data to mongodb using the following connection string.mongodb connection string,spring.data.mongodb.uri=mongodb://mongo.database.svc:27017/?replicaSet=rs0&readPreference=secondaryPreferred&maxStalenessSeconds=120Issue:\nThe client apps fail to reach the mongodb endpoints when the kubernetes cluster restarts (or when the mongodb pods restart) because the client app is trying to connect replicaset using the service DNS that resolves to the old mongodb pod IPaddresses.Before Restart:After Restart:mongodb connection error from springboot client,Expected Behaviour:\nThe mongodb service DNS should be properly resolved to the current pod IP addresses.Actual Behaviour:\nThe mongodb service DNS are resolving to the old pod IP addresses.mongodb version,OS Details:Any suggestions would be appreciated.", "username": "bhavaniprasad_reddy" }, { "code": "", "text": "Hi @bhavaniprasad_reddy and welcome to the community!!Could you help by providing the following details for the above mentioned issue:Also, as I understand from the version mentioned, you are using MongoDB 4.0 which is no longer supported and hence no further updates will be available for it. Please refer to the following documentation to understand the next available and supported versions and upgrades.Please help us with the above information so that we could assist you further.Thanks\nAasawari", "username": "Aasawari" } ]
Mongodb unreachable when its service DNS resolves to old IP addresses of the mongodb pods after the kubernetes cluster restart
2022-06-15T10:29:53.094Z
Mongodb unreachable when its service DNS resolves to old IP addresses of the mongodb pods after the kubernetes cluster restart
3,224
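The thread above ends without a posted fix. One common approach — an assumption on my part, not something confirmed in the thread — is to initialise the replica set with the stable per-pod hostnames that a Kubernetes headless service gives a StatefulSet, instead of pod IPs, so the member list stays valid after pods restart with new addresses. The service and namespace names below come from the thread; the manifest details are assumed:

```javascript
// Run once in mongosh against one member. Assumes a headless service named
// "mongo" in the "database" namespace governing the mongo-0/1/2 StatefulSet
// pods; these DNS names do not change when the pods are rescheduled.
rs.initiate({
  _id: 'rs0',
  members: [
    { _id: 0, host: 'mongo-0.mongo.database.svc.cluster.local:27017' },
    { _id: 1, host: 'mongo-1.mongo.database.svc.cluster.local:27017' },
    { _id: 2, host: 'mongo-2.mongo.database.svc.cluster.local:27017' }
  ]
});
```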
null
[ "replication", "kafka-connector", "spark-connector" ]
[ { "code": "", "text": "Is it possible to connect mongodb data as source to Kafka or apache structured streaming without mongodb having replica set?", "username": "Gurudev_H_Y" }, { "code": "rsconf = { _id : \"rs0\", members: [ { _id : 0, host : \"mongo1:27017\", priority: 1.0 }]};\n\nrs.initiate(rsconf);\n--replSet rs0", "text": "For testing purposes you can create a single node “replica set”. In the Mongo shell define and configure as follows. My server host name is mongo1. localhost should work here as well.Note: Be sure to start the MongoDB service with the flag indicating its a replica set. Here my replica set name is rs0.--replSet rs0", "username": "Robert_Walters" } ]
Using MongoDB Connectors without having replica set
2022-06-24T04:03:52.295Z
Using MongoDB Connectors without having replica set
2,734
null
[ "crud", "transactions" ]
[ { "code": "\nexports = async function(site){\n \n let user = context.user;\n \n const _id = new BSON.ObjectId();\n const _partition = \"site_\".concat(_id.toString());\n const userId = user.id;\n \n site._id = _id;\n site._partition = _partition;\n site.creationUserId = userId;\n \n const client = context.services.get(\"mongodb-atlas\");\n const session = client.startSession();\n\n // mettere il db name come value\n let profileCollection = client.db(\"accounts\").collection(\"profile\");\n let siteCollection = client.db(\"sias\").collection(\"site\");\n let result = {};\n \n const transactionOptions = {\n readPreference: 'primary',\n readConcern: { level: 'local' },\n writeConcern: { w: 'majority' }\n };\n \n try {\n await session.withTransaction(\n async () => {\n \n // Important:: You must pass the session to the operations\n await profileCollection.findOneAndUpdate({}, \n { \n $push: { \"writePartitions\": _partition }\n }, \n { session }\n );\n \n result = await siteCollection.insertOne(site, { session });\n \n }, transactionOptions\n );\n \n \n } catch (e) {\n await session.abortTransaction();\n throw e;\n } finally {\n await session.endSession();\n }\n \n return result;\n};\n", "text": "Hello everyone,\nI have a problem on my Atlas cluster with partition sync.in a cloud function with realm authentication I need to create a partition (string), add the partition to the “writePartitions” array of the user’s custom data and, with the same operation, create an object that has the newly created partition.I would like to put everything in a transaction to avoid that, if the insert fails, the user will end up with an “orphan” partition in his “writePartitions” list.with the code I created, the function returns a 403 error as if the user did not have permission to insert an object with the specified partition but, theoretically, inserting the partition into the “writePartitions” array happens BEFORE inserting of the object.I don’t understand where the problem is.", "username": "Armando_Marra" }, { "code": "", "text": "this is the error:Error:insert not permitted for document with _id: ObjectID(“62ab54ebf40def89cc22787f”)", "username": "Armando_Marra" }, { "code": "exports = async function(site){\n \n let user = context.user;\n const userId = user.id;\n \n const client = context.services.get(\"mongodb-atlas\");\n const session = client.startSession();\n\n // mettere il db name come value\n let profileCollection = client.db(\"accounts\").collection(\"profile\");\n let siteCollection = client.db(\"sias\").collection(\"site\");\n \n site._partition = \"user_\".concat(user.id);\n site.creationDate = new Date();\n site.creationUserId = userId;\n\n const transactionOptions = {\n writeConcern: { w: 'majority' }\n };\n \n try {\n let result = {};\n await session.withTransaction(\n async () => {\n\n const insertResult = await siteCollection.insertOne(site, { session });\n \n const siteId = insertResult.insertedId;\n const _partition = \"site_\".concat(siteId);\n \n const updateResult1 = await profileCollection.updateOne({userId: userId},{ $push: { writePartitions : _partition }}, { session });\n const updateResult2 = await siteCollection.updateOne({_id: siteId}, [{ \"$set\": { _partition: _partition }}], { session });\n \n result = await siteCollection.findOne({_id: siteId});\n \n }, transactionOptions\n );\n return result;\n \n } catch (e) {\n await session.abortTransaction();\n throw e;\n } finally {\n await session.endSession();\n }\n\n};\n", "text": "finally i managed to get all this stuff 
working.just for reference if someone else has the same problem:I hope this will help someone.", "username": "Armando_Marra" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Transaction inside realm function with sync enabled and partition strategy
2022-06-16T15:29:22.159Z
Transaction inside realm function with sync enabled and partition strategy
1,713
null
[ "sharding" ]
[ { "code": "{\ntest.item\n shard key : {\"num\" : \"hashed\"}\n unique : false\n balancing : true\n chunks: \n shard1 2\n shard2 2\n shard3 2\n shard4 2\n}\n{\ntest.item\n shard key : {\"num\" : \"hashed\"}\n unique : false\n balancing : true\n chunks: \n shard1 1\n shard3 1\n shard4 1\n}\n", "text": "I have sharded Cluster.\nI am using 4 shards.\nOther collections work fine, but one collection doesn’t.An example of what I think the normal state :Abnormal collection currently being output :Is the output now normal?\nIf it’s not normal, how can I fix it?", "username": "Park_49739" }, { "code": "", "text": "Hi @Park_49739 ,I assume you are referring to your second collection only existing on three shards at the moment?This would be expected if the collection only has three chunk ranges. Additional chunks will be created as needed when more data is inserted to the collection, and chunks will be distributed across shards by the Sharded Cluster Balancer.If you are concerned about some other aspect of this output, please provide more details.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi. @Stennie_XI’m experiencing the same symptoms.\nHowever, is this the hashed shard key, and even though there are about 10,000 total documents, can the above phenomenon occur?", "username": "Kim_Hakseon" } ]
Sharded Cluster Chunk error
2022-06-23T07:18:21.226Z
Sharded Cluster Chunk error
1,749
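Regarding the unanswered follow-up in the thread above: a collection of roughly 10,000 small documents can easily fit inside a single chunk, since chunks only split once they approach the configured chunk size (64 MB by default on older releases), so seeing only a few chunks with a hashed shard key is normal. Two quick ways to inspect this from mongosh (namespace taken from the thread):

```javascript
// Per-shard data size, document count and chunk count for the collection.
db.getSiblingDB('test').item.getShardDistribution();

// Overall sharding and balancer state for the cluster.
sh.status();
```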
null
[]
[ { "code": "MongoError: no primary found in replicaset or invalid replica set nameno primary server available", "text": "I’m using a MongoDB Atlas sandbox database to develop a Meteor application. I have for many weeks successfully connected via both a localhost version of my app and also a public version hosted on Meteor Galaxy.This morning, I keep getting this error on localhost:\nMongoError: no primary found in replicaset or invalid replica set nameAnd this error on the Galaxy version:\nno primary server available(There’s more to the error messages, can share more if that matters.)I’ve searched a lot online, but can’t figure it out. I can connect to the DB via the Atlas website, from my command line using mongodump, from my computer using MongoDB Compass. I’ve checked whitelisting, and everything else I can think of. I can see a restart in the metrics graphs overnight. Seems significant, but one of several in the last 2 months where I’ve seen no errors connecting till now.Is there are an error log to read? Does anybody have further ideas on troubleshooting?! Thanks in advance…", "username": "Kevin_Ashworth" }, { "code": "", "text": "More info: There was a period of several hours overnight where these 2 replica sets switched back and forth between primary and secondary many times. See attached screenshot.Screen Shot 2020-06-22 at 7.06.47 PM4136×998 572 KB", "username": "Kevin_Ashworth" }, { "code": "", "text": "It works now. How I solved it: I did nothin’ but wait!", "username": "Kevin_Ashworth" }, { "code": "", "text": "Hello sir I got this same problem “no primary server available” can you explain how can i resolve this issue", "username": "kishan_gopal" }, { "code": "", "text": "Sorry, it’s been too long, I have no memory of this problem anymore. Good luck!", "username": "Kevin_Ashworth" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoError replicaset / no primary server
2020-06-22T20:32:15.682Z
MongoError replicaset / no primary server
7,148
null
[ "dot-net" ]
[ { "code": "{\n \"streamName\": \"...\",\n \"bodyJson\": \"...\"\n}\n", "text": "Hi,I am trying to reproduce a very weird problem that one customer reported. I am not sure whether this is the correct category, but you will tell me, I guess.My customer uses Mongo 4.2.19 on Windows.My application is a headless CMS based on CQRS pattern and the events are stored in a MongoDB collection. Basically the schema for events looks like this:As you can see, I am not storing the events directly in MongoDB. I serialize them to JSON first and then I add them as string. The reason is that not all my JSON objects are valid BSON objects.The problem is that sometimes the JSON cannot deserialized any more because it is corrupt. Only small differences, like wrong colons and so on.The customer has sent me an example to reproduce it, which does no work on my machine. In this example he creates large document (1 MB) with array properties in the first run. In the next run he fetches all documents and clones some of the array fields and makes an update. So basically the document structure does not change, but they becomes bigger over time. After a few runs he starts to see the deserialization problem.If there would be a bug with the JSON serializer it should have happened with the first run already. It is a very popular serializer for C#, so I doubt that there is general bug. So for me it seems that something goes wrong in the network stack or on MongoDB side. Perhaps something with compression. Is there a threshold after which fields are compressed? Basically my documents have only one large field which grows over time?I know that it is not likely that something like this is in the MongoDB code, but I have no more ideas right now.", "username": "Sebastian_Stehle" }, { "code": " \"PredictionProbability\": 0.9197f6ceee565a4\"}],\"PassageType\"\n", "text": "I analyzed an example and I found this oneit is very interesting, when you compare it with a non-corrupt version you see that it looks like something is actually not written to the database:So for me it seems there are at least these places to look at:", "username": "Sebastian_Stehle" }, { "code": "", "text": "Hi @Sebastian_StehleThis is curious indeed. I tend to think that all the libraries you mentioned would be well-tested against this type of corruption issue.I would suggest you do a step-by-step testing to determine the point of failure, e.g., on every step, ensure that the output of the relevant library is as you expect, without corruption.However I think it’s likely that the error occurs in either step 1, 2, or the glue between them. You also mentioned that a customer of yours reported this. Is it possible that there is something missing from their end? At this point there’s too many variables at play and it’s difficult to determine exactly what happened.It’ll be helpful if we can have a narrower scope of the issue, e.g. if we can have a set of inputs that is reproducible.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks for the answer. I tried that and I cannot reproduce it on my machine. The customer also sent me a sample to reproduce it which didn’t help either and when we tested it via screen sharing on his machine he could not reproduce it anymore.", "username": "Sebastian_Stehle" }, { "code": "", "text": "Is there a retry mechanism in MongoDB driver? 
Lets say a package gets lost (no idea how this can happen with TCP), the mongo server detects an invalid request and the driver makes a retry, then mongodb would insert invalid content, when the result of is still a valid document.", "username": "Sebastian_Stehle" }, { "code": "", "text": "Yes this feature is retryable writes and it’s been around since MongoDB 3.6.However I don’t believe that this would be the cause of the corruption issue your client is seeing. This feature is fully specced and tested exhaustively.Since the issue is not reproducible, it’s really hard to say what’s causing it, and how. However I would look at any TCP issues last since the protocol has been around for a very long time. It’s very, very unlikely this kind of corruption would be caused by TCP, especially since it’s being used to transfer gigantic amount of data every day very reliably since the 70s. Ditto with disk errors. If there are any disk corruption, WiredTiger would know about it (we’ve seen many of these instances ).Having said that, please do update us on the situation once a reliable method of reproducing the corruption is found. I’ll be happy to try to assist in that case.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Data corruption and large fields
2022-06-15T05:19:09.492Z
Data corruption and large fields
2,717
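The advice in the thread above is to verify each step to find where the JSON string changes. One way to make that concrete — a hypothetical sketch only; the thread's application is C#, this is Node.js, and the database and collection names are invented — is to store a checksum next to each serialized event and compare it after reading back:

```javascript
const crypto = require('crypto');
const { MongoClient } = require('mongodb');

// Hash helper for the serialized event body.
const sha256 = (s) => crypto.createHash('sha256').update(s, 'utf8').digest('hex');

async function writeAndVerify(uri, streamName, bodyJson) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    // Assumed names: database "cms", collection "events".
    const events = client.db('cms').collection('events');
    const { insertedId } = await events.insertOne({
      streamName,
      bodyJson,
      checksum: sha256(bodyJson)
    });
    // Read the document back and compare checksums to detect any change
    // between the write path and the read path.
    const stored = await events.findOne({ _id: insertedId });
    if (sha256(stored.bodyJson) !== stored.checksum) {
      console.error('bodyJson differs between write and read for', insertedId);
    }
  } finally {
    await client.close();
  }
}
```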
https://www.mongodb.com/…1_2_1024x364.png
[ "schema-validation" ]
[ { "code": "", "text": "Hi,I am new here, but I am having a really hard time finding a workaround for using the keywords '“type” and “collection”. I am working on an NFT project that absolutely needs these keywords in the metadata standard. However I am getting validation errors using them. Is there a work around?\nScreen Shot 2022-06-19 at 10.58.04 AM1218×434 39.4 KB\n", "username": "Bryson_N_A" }, { "code": "", "text": "Hi @Bryson_N_A and welcome in the MongoDB Community !I don’t understand what you are trying to do. Are you trying to have a constraint on 2 fields named “collection” and “type” in your MongoDB documents?What’s the “metadata standard”?See the available keywords here:And the omissions here:I hope this helps.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Just in case, I would try to put all my literals (field names) in double quotes, in case they are reserved keywords.The exact validation error message you get would be interesting to see.", "username": "steevej" }, { "code": "", "text": "Hi!Thanks for the response. Right now I have to conform to an NFT metadata standard for Metaplex. However there are reserved keywords that are causing validation errors. “type” and “collection” are throwing me some errors. I need to see if there is a workaround. I will show you the NFT standard I need to comply with.\nScreen Shot 2022-06-21 at 7.11.54 PM1402×962 108 KB\n", "username": "Bryson_N_A" }, { "code": "collection typeproperties.files", "text": "So that’s the MongoDB document that you would store in MongoDB correct? And you need to ensure a constraint on the collection field (which is a sub doc == object) and the type field which is apparently a field in the array of subdocuments properties.files. Correct?What kind of constaints are you chasing?", "username": "MaBeuLux88" }, { "code": "", "text": "Could you please publish your document in JSON text rather than an image?Please also share your schema definition.This way we would be able to play with it.", "username": "steevej" }, { "code": "const ticketSchema = new mongoose.Schema({\n name: String,\n symbol: String,\n description: String,\n seller_fee_basis_points: Number,\n image: String,\n attributes: [\n { trait_type: String, value: String, _id: false },\n { trait_type: String, value: String, _id: false },\n { trait_type: String, value: String, _id: false },\n { trait_type: String, value: Boolean, _id: false },\n { trait_type: String, value: String, _id: false },\n {\n trait_type: String,\n value: String,\n _id: false,\n },\n { trait_type: String, value: String, _id: false },\n { trait_type: String, value: Number, _id: false },\n { trait_type: String, value: Boolean, _id: false },\n ],\n properties: {\n creators: [\n { address: String, share: Number, _id: false },\n { address: String, share: Number, _id: false },\n ],\n\n files: [{ uri: String, type: String, _id: false }],\n },\n collection: { name: String, family: String, _id: false },\n});\n", "text": "", "username": "Bryson_N_A" }, { "code": "", "text": "Yes. I just don’t know how to fix the error. If I change they words it works, but for some reason when I add collection and type it throws this error.\n\nScreen Shot 2022-06-23 at 5.23.04 PM1688×618 107 KB\n", "username": "Bryson_N_A" }, { "code": "", "text": "So this isn’t a schema validation related question actually. It’s a Mongoose related question which I have almost zero knowledge of.I just don’t recommend using it and I think we have an example of why. 
No idea what’s wrong.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @Bryson_N_AIn addition to what @MaBeuLux88 has mentioned, this is a known issue in Mongoose due to its reserved keywords: Using reserved keyword as schema key · Issue #1760 · Automattic/mongoose · GitHub and unfortunately I don’t know if it can be worked around, at least from Mongoose.However, I’d like to circle back to one of your earlier post:Right now I have to conform to an NFT metadata standard for Metaplex. However there are reserved keywords that are causing validation errors.I’d like to understand more of the goal you’re trying to achieve here. Is it to:To solve #1, you can use json schema which is a standard to describe the requirements of a JSON document. From the application side, there are node modules that can check the validity of a JSON document vs. a certain schema. The jsonschema module is but one example.To solve #2, MongoDB supports JSON schema validation via the $jsonSchema validation directive. You can create a collection and specify that only documents conforming to the schema can be inserted into the collection. Please see the linked page for examples on how to do this from the MongoDB side.To solve #3, well then you’ll have to do #1 and #2 together I looked around for a more concrete example and found their JSON schema definition from the Metaplex Github. It doesn’t appear to fully describe the example document you posted, but it may serve as a starting point for you to create a more specific schema.Best regards\nKevin", "username": "kevinadi" } ]
Unable to use Type or Collection in my Schema. Workaround?
2022-06-19T18:00:22.413Z
Unable to use Type or Collection in my Schema. Workaround?
5,035
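A small sketch of option 2 from the last reply above: server-side $jsonSchema validation has no problem with field names such as collection or type. The schema below is a trimmed-down assumption based on the NFT metadata document shown in the thread, not the full Metaplex standard:

```javascript
// Create the collection with a validator; documents that do not match the
// schema are rejected by the server regardless of the ODM used.
db.createCollection('tickets', {
  validator: {
    $jsonSchema: {
      bsonType: 'object',
      required: ['name', 'symbol'],
      properties: {
        name: { bsonType: 'string' },
        symbol: { bsonType: 'string' },
        // "collection" is just an ordinary field name here.
        collection: {
          bsonType: 'object',
          properties: {
            name: { bsonType: 'string' },
            family: { bsonType: 'string' }
          }
        },
        properties: {
          bsonType: 'object',
          properties: {
            files: {
              bsonType: 'array',
              items: {
                bsonType: 'object',
                properties: {
                  uri: { bsonType: 'string' },
                  // "type" is likewise fine as a field name.
                  type: { bsonType: 'string' }
                }
              }
            }
          }
        }
      }
    }
  }
});
```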
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "const Objects = new mongoose.Schema({\n name: {\n type: String,\n required: true,\n },\n utility: {\n type: String,\n required: true,\n },\n createdAt: {\n type: Date,\n default: Date.now,\n }\n})\n", "text": "Hey, I have the following schema:What I want to do is to pick all data, for example, and then, make a rank of the objects.\nFor example, lets suppose the collection has 5 documents with the name “Pencil”, and then 8 documents with the name “Eraser”.So the rank would be, of course:\nEraser - 8\nPencil - 5But I don’t know the names of the objects to apply some $match or something, I need to get it from the data…", "username": "foco_radiante" }, { "code": "itemsdb> db.testcoll.find()\n[\n { _id: 0, name: “Eraser” },\n { _id: 1, name: “Eraser” },\n { _id: 2, name: “Eraser” },\n { _id: 3, name: “Eraser” },\n { _id: 4, name: “Eraser” },\n { _id: 5, name: “Eraser” },\n { _id: 6, name: “Eraser” },\n { _id: 7, name: “Eraser” },\n { _id: 8, name: “Pencil” },\n { _id: 9, name: “Pencil” },\n { _id: 10, name: “Pencil” },\n { _id: 11, name: “Pencil” },\n { _id: 12, name: “Pencil” }\n]\n$group“name”itemsdb> db.testcoll.aggregate({$group:{_id:‘$name’,count:{$sum:1}}})\n[ \n{ _id: ‘Eraser’, count: 8 }, \n{ _id: ‘Pencil’, count: 5 }\n]\n$group$sum", "text": "Hi @foco_radiante - Thank you for providing the schema.So the rank would be, of course:\nEraser - 8\nPencil - 5I am not entirely sure this is the exact output you are after but please see the following from my test environment:$group based on the “name” field:I would refer to the following documentation regarding the details of the aggregation stages / operators used in my above example:Additionally, you may also find the M121 - The MongoDB Aggregation Framework course useful.Hope this helps.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query to make a rank
2022-05-25T03:22:14.859Z
Query to make a rank
2,007
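As a follow-on to the accepted answer above (an addition, not part of that reply): on MongoDB 5.0+ an explicit rank number can be attached to each group with $setWindowFields:

```javascript
// Count documents per name, then assign rank 1, 2, ... by descending count.
db.testcoll.aggregate([
  { $group: { _id: '$name', count: { $sum: 1 } } },
  { $setWindowFields: { sortBy: { count: -1 }, output: { rank: { $rank: {} } } } }
]);
// e.g. [ { _id: 'Eraser', count: 8, rank: 1 }, { _id: 'Pencil', count: 5, rank: 2 } ]
```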
null
[ "node-js", "mongoose-odm" ]
[ { "code": "const mongoose = require('mongoose');\n\nmongoose.connect('mongodb://localhost/playground',{useNewUrlParser: true, useUnifiedTopology: true })\n\n .then(() => console.log('Connected to MongoDB...'))\n\n .catch((err) => console.log('Could not connect to MongoDB...', err))\nCould not connect to MongoDB... MongooseServerSelectionError: connect ECONNREFUSED ::1:27017\n at Connection.openUri (D:\\KBG\\Ascent Class\\Node.js\\test\\mongo-demo\\node_modules\\mongoose\\lib\\connection.js:847:32)\n at D:\\KBG\\Ascent Class\\Node.js\\test\\mongo-demo\\node_modules\\mongoose\\lib\\index.js:351:10\n at D:\\KBG\\Ascent Class\\Node.js\\test\\mongo-demo\\node_modules\\mongoose\\lib\\helpers\\promiseOrCallback.js:32:5\n at new Promise (<anonymous>)\n at promiseOrCallback (D:\\KBG\\Ascent Class\\Node.js\\test\\mongo-demo\\node_modules\\mongoose\\lib\\helpers\\promiseOrCallback.js:31:10)\n at Mongoose._promiseOrCallback (D:\\KBG\\Ascent Class\\Node.js\\test\\mongo-demo\\node_modules\\mongoose\\lib\\index.js:1149:10)\n at Mongoose.connect (D:\\KBG\\Ascent Class\\Node.js\\test\\mongo-demo\\node_modules\\mongoose\\lib\\index.js:350:20)\n at Object.<anonymous> (D:\\KBG\\Ascent Class\\Node.js\\test\\mongo-demo\\index.js:3:10)\n at Module._compile (node:internal/modules/cjs/loader:1105:14)\n at Module._extensions..js (node:internal/modules/cjs/loader:1159:10)\n at Module.load (node:internal/modules/cjs/loader:981:32)\n at Module._load (node:internal/modules/cjs/loader:827:12)\n at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)\n at node:internal/main/run_main_module:17:47 {\n reason: TopologyDescription {\n type: 'Single',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map(1) { 'localhost:27017' => [ServerDescription] },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: null\n }\n}\n", "text": "{“t”:{“$date”:“2022-06-23T09:36:26.430-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23016, “ctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}=== index.js ======= error getting ===", "username": "Karnasinh_Gohil" }, { "code": "mongodb://localhost:27017/playground\n", "text": "I do see an error in your connection string. You are missing the port\nTry adding the port information and see if you can connect.", "username": "tapiocaPENGUIN" } ]
Installed MongoDb but getting error while trying to connect through index.js file please refer attached codes
2022-06-23T13:38:58.978Z
Installed MongoDb but getting error while trying to connect through index.js file please refer attached codes
1,869
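One extra observation on the thread above (not part of the replies): the error shows the driver dialling ::1:27017, i.e. the IPv6 loopback, which newer Node.js versions may prefer when resolving localhost while mongod is listening on IPv4 only. Pointing the connection string at 127.0.0.1 — with the port, as the reply suggests — is a quick way to rule this out:

```javascript
const mongoose = require('mongoose');

// Same connection code as in the question, but targeting the IPv4 loopback
// explicitly and including the port.
mongoose
  .connect('mongodb://127.0.0.1:27017/playground', { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log('Connected to MongoDB...'))
  .catch((err) => console.log('Could not connect to MongoDB...', err));
```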
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "_id: ObjectId(\"62b2fb397fda9ba6fe24aa5c\")\nday: 1\nfamily: \"AUTOMOTIVE\"\nprediction: -233.99999999999892\nanalysis: ObjectId(\"629c86fc67cfee013c5bf147\")\n// Get the key of the field that is set dynamically.\nlet dynamicKey = req.body.dynamicKey;\n\n// Perform a query using a dynamic key.\ndocuments = await Model.find().where(dynamicKey).equals(req.body.value);\ndocuments = await Model.find({ \n $and: [{analysis: req.body.analysis_id}, {family: req.body.value}] \n});\ndocuments = await Model.find().where(dynamicKey).equals(req.body.value).where('analysis').equals(req.body.analysis_id);\n\ndocuments = await Model.find().where(dynamicKey).equals(req.body.value).where('analysis').equals(req.body.analysis_id);\n", "text": "So I’m trying to query my MongoDB database using Express & mongoose to fetch documents that have a specific family AND a specific analysis ID at the same time. Here is an example of the document structure:The problem I face in this case is that the name of the key of the family field is set dynamically and could therefore have any other name such as “product_family”, “category”, etc. This is why in order to fetch documents with a dynamic key name, I tried using the where() and equals() operators like so:HOWEVER, my goal here is NOT to just fetch all the documents with the dynamic key, but rather to fetch the documents that have BOTH the dynamic key name AND ALSO a specific analysis Id.Had the family field NOT been dynamic, I could have simply used a query like so:but this does not seem possible in this case since the keys inside the find() operator are mere text strings and not variables. I also tried using the following queries with no luck:Can somebody please suggest a solution?", "username": "Nikitas" }, { "code": "await Model.find({[dynamicKey]: req.body.value, analysis: req.body.analysis_id});await Model.find({analysis:req.body.analysis_id}).where(dynamicKey).equals(req.body.value);var predictionSchema = new mongoose.Schema({ \n day: {\n type: Number,\n required: true\n },\n prediction: {\n type: Number,\n required: true\n },\n analysis: {\n type: mongoose.Schema.Types.ObjectId, \n ref: 'Analysis', // Reference the Analysis Schema\n required: true\n }\n}, { strict: false }); \n", "text": "Part of the answer is to use the [] brackets to specify your dynamic key like so:await Model.find({[dynamicKey]: req.body.value, analysis: req.body.analysis_id});I found out that another query that can work is this:await Model.find({analysis:req.body.analysis_id}).where(dynamicKey).equals(req.body.value);However, it seems that for either of these solutions to work you also need to set your schema’s strict mode to “false”, since we are working with a dynamic key value.Example:", "username": "Nikitas" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongoose Query with Dynamic Key that also contains the "and" operator
2022-06-23T15:26:25.153Z
Mongoose Query with Dynamic Key that also contains the “and” operator
9,188
null
[ "queries", "node-js", "typescript" ]
[ { "code": "Usercollection<User>collection.find({ _id: '...' })Promise<{ _id: ObjectId }>UserfindUser4.x", "text": "I have type User which I use to have a collection, my collection is therefor of type collection<User>.\nWhen I call the method find such as doing collection.find({ _id: '...' }) TypeScript says it returns the type Promise<{ _id: ObjectId }> which is not what I expect as my collection was originally templated for User.The issue goes away if I template the find method with the type User ; But is that correct ? Do we have now to type every call despite our collections being already typed ? The issue only happens with the types provided by the package in 4.x, no issue with the community package.", "username": "Adrien_Mille" }, { "code": "", "text": "i did not get a option to post a question so apology to post here . I am trying to post a question here.getting below error when trying to connect with mongo from spring boot app\ncom.mongodb.MongoSecurityException: Exception authenticating\nMongoCredential{mechanism=SCRAM-SHA-1, userName=‘test’, source=‘admin’, password=,\nmechanismProperties=}everty thing is correct because it is working with Aqua studio fine", "username": "shivam_sharma5" } ]
TypeError: find methods returning unexpected types
2021-07-15T10:24:16.355Z
TypeError: find methods returning unexpected types
3,529
null
[]
[ { "code": "", "text": "Is there currently functionality to organize realm functions into folder structures? As my app becomes more complex I find it challenging to quickly find the functions that I am looking for.", "username": "Tyler_Collins" }, { "code": "", "text": "Hi @Tyler_Collins,Looks like there is a “ticket” for this in the MongoDB Feedback website:Add folders to the Functions and Values areas and then to make it 10x better by adding permissioning by folder so we can keep our clients from altering sensitive functions/values in specific folders.You can vote for it but I’ll also surface this topic to the Realm team! I think it’s a good idea!Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @Tyler_Collins,It’s actually already possible. Here is a screenshot from the Realm UI when you try to create a new function:\nimage1311×319 37.8 KB\nIt’s going to be reflected in the folder hierarchy when you export the project (GitHub sync, realm-cli) but it’s not (yet?) reflected in the Realm UI though.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Oh my god I’ve been wanting to do this since I started using MongoDB Realm about a year ago! I checked the documentation multiple times over the first several months but it seemed to always explicitly say that functions had to be placed directly in the functions directly and not in its subdirectories.I’m so glad I searched again and found that functions can finally be organized in folders!!! Here’s the new documentation on defining functions in Atlas App Services.", "username": "Elias_Heffan1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Organize functions in folders
2022-02-21T17:36:38.024Z
Organize functions in folders
4,117
null
[ "aggregation", "queries", "crud" ]
[ { "code": "\"barcodes\": [\"1129118150100\", \"3606234836070\", [\"1129118150100150\", \"1129118150100151\"]]\n\"barcodes\": [\"1129118150100\", \"3606234836070\", \"1129118150100150\", \"1129118150100151\"]\n{\n $expr: {\n $gt: [\n {$reduce:\n {\n input:'$barcodes',\n initialValue:0,\n in:{\n $add:['$$value',{$cond:[{$eq:[{$type:'$$this'},'array']},1,0]}]\n }\n }\n },\n 0\n ]\n } \n}\n", "text": "Hi,I have an array that should contain only single value.\nAn error occured and a few elements are “arrays”, for example :I want to correct the mistakes using find() method to get this result :The filter is right, I’m trying to code an aggregation pipeline with $set but don’t succeed.Do you think there is a solution ?", "username": "emmanuel_bernard" }, { "code": "{ \"$set\" : {\n \"barcodes\" :\n {\n \"$reduce\" :\n {\n \"input\" : \"$barcodes\" ,\n \"initialValue\" : [] ,\n \"in\" : { \"$concatArrays\" : [\n \"$$value' ,\n { \"$cond\" : [\n { \"$eq\" : [ {\"$type\" : \"$$this\"} , \"array\" ] } ,\n \"$$this\" ,\n [ \"$$this\" ]\n ] }\n ] }\n }\n }\n} }\n", "text": "You were on the right track with $reduce. The enclosing $expr and $gt were not. The $add was not the correct operator. Try:In migration scenarios like yours, what I like to do is to migrate $out into a temporary collection, which I can verify before $merge-ing it back to the real collection. But you need downtime because between the $out and $merge, original data may change.", "username": "steevej" }, { "code": "{ \"$eq\" : [ {\"$type\" : \"$$this\"} , \"array\" ] }{ \"$isArray\" : \"$$this\" }\n", "text": "You may replace{ \"$eq\" : [ {\"$type\" : \"$$this\"} , \"array\" ] }with the short-cut", "username": "steevej" }, { "code": "\"barcodes\": [\"1129118150100\", \"3606234836070\", [\"1129118150100150\", \"1129118150100151\"]]\n{\n $expr: {\n $gt: [\n {$reduce:\n {\n input:'$barcodes',\n initialValue:0,\n in:{\n $add:['$$value',{$cond:[{$eq:[{$type:'$$this'},'array']},1,0]}]\n }\n }\n },\n 0\n ]\n } \n}\n'barcodes.$':{$type: 'array' } ", "text": "Hi Steeve,Thanks for your answer.\nI also need a filter to find the errors = barcodes array with elements of type “array”So here is my filter :I use “$reduce”, but I’m asking how to get the same result with a simple way ?\nI tried 'barcodes.$':{$type: 'array' } to check if an element of the array is an array, but this syntax is wrong", "username": "emmanuel_bernard" }, { "code": "{ \"barcodes\" : { $elemMatch : { \"$type\" : \"array\" }}}\n", "text": "I have done anything for the filter because in your first post you wroteThe filter is rightThe following should work:", "username": "steevej" }, { "code": "db.barcodes.updateMany(\n{\n barcodes:{$elemMatch:{$type:'array'}}\n},\n{\n $set: {\n barcodes : {\n $reduce : {\n input : '$barcodes' ,\n initialValue : [] ,\n in: { '$concatArrays' : [\n '$$value' ,\n { $cond : [\n { \"$isArray\" : \"$$this\" } ,\n '$$this' ,\n [ '$$this' ]\n ] }\n ] }\n }\n }\n }\n}\n)\n{\n _id: 'fr_150371',\n barcodes: {\n '$reduce': {\n input: '$barcodes',\n initialValue: [],\n in: {\n '$concatArrays': [ '$$value', { '$cond': [ [Object], '$$this', [Array] ] } ]\n }\n }\n }\n}\n", "text": "I have updated the collection with updateMany() operatorThe result is wrong, I got the string value of the operator but not the valueDon’t understand where is the error ?", "username": "emmanuel_bernard" }, { "code": "", "text": "To update with aggregation, you need to put your $set into [ ].", "username": "steevej" }, { "code": "db.barcodes.updateMany(\n{\n 
barcodes:{$elemMatch:{$type:'array'}}\n},\n[\n{\n $set: {\n barcodes : {\n $reduce : {\n input : '$barcodes' ,\n initialValue : [] ,\n in: { '$concatArrays' : [\n '$$value' ,\n { $cond : [\n { '$isArray' : '$$this' } ,\n '$$this' ,\n [ '$$this' ]\n ] }\n ] }\n }\n }\n }\n}\n]\n)\n", "text": "I forgot [ ] to open and close the pipeline !\nNow it’s OK", "username": "emmanuel_bernard" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to convert an array of array to an array of single value with updateMany()?
2022-06-20T16:09:23.774Z
How to convert an array of array to an array of single value with updateMany()?
2,338
null
[ "java", "production" ]
[ { "code": "", "text": "The 4.6.1 MongoDB Java & JVM Drivers release is a patch to the 4.6.0 release.The documentation hub includes extensive documentation of the 4.6 driver.You can find a full list of bug fixes here.", "username": "Valentin_Kovalenko" }, { "code": "", "text": "One of the latest versions that I tried and tested on my site. There were some bugs related to the initialization of the database, but it quickly fixed, it was only necessary to reinstall the driver, because the curve probably became", "username": "Marina_Petrenko" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Java Driver 4.6.1 Released
2022-06-09T16:28:36.289Z
MongoDB Java Driver 4.6.1 Released
3,827
null
[ "indexes" ]
[ { "code": "", "text": "is it possible to create two indexes with the same keys by changing the collation parameters each time?", "username": "Isaac_NCHO" }, { "code": "", "text": "According to my best friend named documentation,You can create multiple indexes on the same key(s) with different collations. To create indexes with the same key pattern but different collations, you must supply unique index names.", "username": "steevej" } ]
Indexes with collations
2022-06-23T07:07:20.905Z
Indexes with collations
1,600
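A minimal sketch of what the quoted documentation in the thread above describes — two indexes on the same key pattern with different collations, distinguished by explicit index names. The collection, field and locales are placeholders:

```javascript
// Same key pattern { name: 1 }, two collations, unique index names.
db.products.createIndex(
  { name: 1 },
  { name: 'name_fr', collation: { locale: 'fr', strength: 2 } }
);
db.products.createIndex(
  { name: 1 },
  { name: 'name_en', collation: { locale: 'en', strength: 2 } }
);
```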
null
[ "aggregation" ]
[ { "code": "", "text": "Document 1 : { options: [ { size: “s” }, { color: “red” } ]; }Document 2 : { options: [ { size: “m” }, { color: “green” } ]; }Result should be : { options: [ size:[“s”,“m”], { color:[“red”,“green”]} ]; }Here,key object key name are dynamic that is they can be any other name like “fit”,“fabric” etc", "username": "Good_Going" }, { "code": "{ options : [ { size : [ \"s\" , \"m\" ] } , { color : [ \"red\" , \"green\" ] } ] }\n", "text": "You expected result is not valid JSON.Could you confirm that what you want is in fact the following?But that does not seem intuitive. If I see your result documents I will assume I have the option size:s,color:green as valid choice. But it is not if you only have the 2 documents you shared.", "username": "steevej" } ]
I tried with reduct but didn't work.how can i get this?
2022-06-23T09:07:42.167Z
I tried with reduct but didn’t work.how can i get this?
948
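The thread above ends without an answer and, as the reply notes, the expected output is ambiguous. Assuming the goal is "for each dynamic key inside options, collect every value seen across documents", one possible pipeline (collection name assumed) is:

```javascript
db.items.aggregate([
  { $unwind: '$options' },
  // Turn each single-key option object into a { k, v } pair.
  { $project: { kv: { $objectToArray: '$options' } } },
  { $unwind: '$kv' },
  // Collect the distinct values per dynamic key.
  { $group: { _id: '$kv.k', values: { $addToSet: '$kv.v' } } },
  // Fold everything back into one document keyed by option name.
  { $group: { _id: null, options: { $push: { k: '$_id', v: '$values' } } } },
  { $project: { _id: 0, options: { $arrayToObject: '$options' } } }
]);
// e.g. { options: { size: [ 's', 'm' ], color: [ 'red', 'green' ] } }
```

Note the result is a single object keyed by option name rather than an array of single-key objects, which sidesteps the ambiguity the reply points out.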
null
[ "queries" ]
[ { "code": "", "text": "I have a collection name ACCOUNT where ABC-PQR-123-56789 is a key field.\neach filed separated by “-” have some meaning and all together they make a unique combination.Once the account is closed the key field is appended with open account date.\nlike ABC-PQR-123-56789-23062022 (eg. if it got closed today)But for some reason account is created again ABC-PQR-123-56789 and hence there are 2 account for same idABC-PQR-123-56789 open\nABC-PQR-123-56789-23062022 closedthis should not happen and trying to find out how many such key id exist in my collection.\ncould you please help me to find such cases where one account is open and one closed with appended account open date.", "username": "College_days_N_A" }, { "code": "", "text": "Your problem is your kludge that modifies the account number.Do you also modify documents from other collections that refer to this account? Ouch!You should leave the account number unchanged and have closed_date field, period.You should not have dates in the format DayMonthYear, use ISO-8601 date format YYYY-MM-DD.I really do not see any other way that doing it with 3 trips to the DB.", "username": "steevej" } ]
Finding duplicates from collection .where keyid differ a bit
2022-06-23T10:28:40.372Z
Finding duplicates from collection .where keyid differ a bit
1,298
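The thread above asks how to count keys that exist both in open form and with a closed-date suffix, but no query is posted. One hedged sketch — the field name accountKey is a placeholder for the real key field, and it assumes the first four dash-separated parts identify the account — is to group on those parts:

```javascript
// Group keys by their first four dash-separated segments; any group with more
// than one member has both an open key and at least one date-suffixed key.
db.ACCOUNT.aggregate([
  {
    $group: {
      _id: { $slice: [{ $split: ['$accountKey', '-'] }, 4] },
      keys: { $addToSet: '$accountKey' },
      count: { $sum: 1 }
    }
  },
  { $match: { count: { $gt: 1 } } }
]);
```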
null
[ "data-modeling" ]
[ { "code": "async def boards(ctx, map_code, level, title, query):\n \"\"\"Display boards for scoreboard and leaderboard commands.\"\"\"\n rank_number = 1\n async for entry in WorldRecords.find(query).sort(\"record\", 1).limit(10):\n # Do stuff for first record entry . .\n rank_number += 1\n", "text": "I have this code to display the top 10 time records of a race.\nHow can I change the rank # to be a part of the document, rather than an iteration count in a loop?\nI would like to add a field for rank and have that automatically update the others for that particular race.If there were 3 records:and a new one comes in say 5.12,\nit would insert it between 6.35 and 3.23 giving it rank 2 and moving the others below it to 3 and 4, and so on.Would triggers be a good solution to this? or would there be some way to do it in Python?", "username": "nebula" }, { "code": "", "text": "If you included rank as part of the document every insert would require you to update the rank field to reposition every document. This means a single write fans out to N writes. Database writes are slow so this is a bad idea.On the other hand for a well tuned database database reads (especially with repeated reads of the same data) will be very fast so the solution you have outlined will work perfectly. If you add an index on the time field the read will be near instantaneous.You could also add a project field to only return the time record as opposed to other elements of the document. This reduces the amount of data returned.", "username": "Joe_Drumgoole" }, { "code": "", "text": "So you’re saying triggers would be a good solution? Could you point me in the right direction on them? I’ve looked at the docs but I didn’t see anything too helpful for this problem. Maybe I didn’t look hard enough.", "username": "nebula" }, { "code": "", "text": "I’m not sure of the question you are asking. Do you want to only react when a new measurement comes in? In that case you would need to use a Change Stream to watch the collection for new entries. My colleague @Naomi_Pentrel has written an excellent post on the subject.", "username": "Joe_Drumgoole" }, { "code": "", "text": "Hi @nebula, Have you found anything in this case? as I am also working on something similar to this model.Please let me know if you have any solution to this.Thanks in advance.", "username": "Durvesh_Parmar" } ]
Creating a constantly changing leaderboard
2021-03-05T12:29:56.120Z
Creating a constantly changing leaderboard
3,912
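A mongosh sketch of the read-time approach recommended in the thread above (the thread's own code is Python, and the filter fields are left as placeholders): index the time field and let a sorted, limited, projected query produce the board, the rank being simply the position in the returned list.

```javascript
// One-time: index the field the board is sorted on.
db.WorldRecords.createIndex({ record: 1 });

// Per request: top 10 fastest records for a given race.
db.WorldRecords.find(
  { /* same map_code / level filter as the Python query */ },
  { record: 1 }
)
  .sort({ record: 1 })
  .limit(10);
```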
null
[ "compass", "connecting", "containers" ]
[ { "code": "FROM mongo:latest\nRUN echo \"rs.initiate({'_id':'rs', members: [{'_id':1, 'host':'127.0.0.1:27017'}]});\" > \"/docker-entrypoint-initdb.d/init_replicaset.js\"\nRUN echo \"12345678\" > \"/tmp/key.file\"\nRUN chmod 600 /tmp/key.file\nRUN chown 999:999 /tmp/key.file\n\nCMD [\"mongod\", \"--replSet\", \"rs\", \"--bind_ip_all\", \"--keyFile\", \"/tmp/key.file\", \"--enableMajorityReadConcern\"]\nServer selection timed out after 30000 ms", "text": "Hi\nI create mongoDB replica with docker fileafter run, i can’t connect to mongodb server via mongo compass and in log received this message``json\n{“t”:{\"$date\":“2022-06-20T09:46:27.333+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“152.228.145.0:46734”,“uuid”:“a7570807-78a9-4351-a963-6f785f9ea15b”,“connectionId”:7534,“connectionCount”:16}}\n{“t”:{\"$date\":“2022-06-20T09:46:29.902+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“127.0.0.1:34196”,“uuid”:“fafcf29a-25ac-4a89-aa66-5f62b963bda7”,“connectionId”:7535,“connectionCount”:17}}\n{“t”:{\"$date\":“2022-06-20T09:46:29.903+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn7535”,“msg”:“client metadata”,“attr”:{“remote”:“127.0.0.1:34196”,“client”:“conn7535”,“doc”:{“application”:{“name”:“MongoDB Shell”},“driver”:{“name”:“MongoDB Internal Client”,“version”:“5.0.9”},“os”:{“type”:“Linux”,“name”:“Ubuntu”,“architecture”:“x86_64”,“version”:“20.04”}}}}\n{“t”:{\"$date\":“2022-06-20T09:46:29.932+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:20250, “ctx”:“conn7535”,“msg”:“Authentication succeeded”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:true,“principalName”:“ramooz”,“authenticationDatabase”:“admin”,“remote”:“127.0.0.1:34196”,“extraInfo”:{}}}\n{“t”:{\"$date\":“2022-06-20T09:46:29.943+00:00”},“s”:“I”, “c”:“COMMAND”, “id”:21577, “ctx”:“conn7535”,“msg”:“Initiate: no configuration specified. Using a default configuration for the set”}\n{“t”:{\"$date\":“2022-06-20T09:46:29.944+00:00”},“s”:“I”, “c”:“COMMAND”, “id”:21578, “ctx”:“conn7535”,“msg”:“Created configuration for initiation”,“attr”:{“config”:\"{ _id: “rs”, version: 1, members: [ { _id: 0, host: “f0f4572d950a:27017” } ] }\"}}\n{“t”:{\"$date\":“2022-06-20T09:46:29.944+00:00”},“s”:“I”, “c”:“REPL”, “id”:21356, “ctx”:“conn7535”,“msg”:“replSetInitiate admin command received from client”}\n{“t”:{\"$date\":“2022-06-20T09:46:29.950+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn7535”,“msg”:“Connection ended”,“attr”:{“remote”:“127.0.0.1:34196”,“uuid”:“fafcf29a-25ac-4a89-aa66-5f62b963bda7”,“connectionId”:7535,“connectionCount”:16}}", "username": "Ja7ad" }, { "code": "~/.bashrcalias mdb='docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:5.0.9 --replSet=test && sleep 4 && docker exec mongo mongo --eval \"rs.initiate();\"'\nFROM mongo:5.0.9\nRUN echo \"rs.initiate({'_id':'rs', members: [{'_id':1, 'host':'127.0.0.1:27017'}]});\" > \"/docker-entrypoint-initdb.d/init_replicaset.js\"\nRUN echo \"12345678\" > \"/tmp/key.file\"\nRUN chmod 600 /tmp/key.file\nRUN chown 999:999 /tmp/key.file\n\nCMD [\"mongod\", \"--replSet\", \"rs\", \"--bind_ip_all\", \"--auth\", \"--keyFile\", \"/tmp/key.file\", \"--enableMajorityReadConcern\"]\ndocker build -t mabeulux/mdb . 
&& docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mabeulux/mdb:latest\ndocker exec -it mongo mongosh --quiet\nuse admin\ndb.createUser({user: \"max\", pwd:\"secret\", roles: [\"root\"]})\nmongodb://max:secret@localhost:27017\n", "text": "Hi @Ja7ad,Personally, I like to create my Single Node Replica Sets with an alias that I have in my ~/.bashrc:Less complicated in my opinion and works just fine to test stuff quickly and run some tests.In your case, you are missing a few things. I think because you added the keyFile constaints, this requires Auth activated.So I updated your Dockerfile like this:And you didn’t provide your docker run command so I made my own version:Finally, I have to create the root user. So I connect like this:Then I create the root user:And then I can finally connect using Compass with this URI:So like I said, my solution is a one liner so a bit easier !Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "I found bitnami mongodb image, make easy replica with this imagehttps://hub.docker.com/r/bitnami/mongodb", "username": "Ja7ad" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot connect to mongodb replica remote server
2022-06-20T09:52:12.507Z
Cannot connect to mongodb replica remote server
2,614
null
[]
[ { "code": "", "text": "Hi All,is it possible to change default data timezone on mongodb atlas?Thank you.", "username": "Watri_Wahab" }, { "code": "timezone", "text": "Hi @Watri_Wahab and welcome in the MongoDB Community !I’m not sure which TZ you are talking about.Each Atlas project have a custom TZ that you can modify here:\nimage1254×729 77.6 KB\nBut if you are talking about the TZ in the actual ISODate values that you are storing in your cluster, it depends on the back-end code that is generating these dates. MongoDB doesn’t interfere with the data that is being stored. Send X => Store X.The TZ in the Project Settings doesn’t affect any of the data. It’s just for the metrics, alerts, windows maintenance, etc.That being said, MongoDB can transform the ISODate you stored and present them in the TZ of your choice.If I remember correctly, all these operations support an optional timezone field.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88,\nThanks for answer my question.\nYes, I mean the timezone of data collection.\nAs per I know, the default timezone is UTC. If I want to change the timezone, I need generate the date on application side. Is it correct?Cheers\nWatri", "username": "Watri_Wahab" }, { "code": "test [direct: primary] test> db.coll.insertOne({date: new Date()})\n{\n acknowledged: true,\n insertedId: ObjectId(\"62a9e3e50c008ec5fa9c6caf\")\n}\ntest [direct: primary] test> db.coll.findOne()\n{\n _id: ObjectId(\"62a9e3e50c008ec5fa9c6caf\"),\n date: ISODate(\"2022-06-15T13:51:33.388Z\")\n}\n", "text": "Yeah I think it’s fair to say that.I’m in Paris right now so my current TZ is UTC+2. It’s currently 15:51 local time in Paris. But if I create a new date in MongoDB, it’s going to be 13:51 UTC+2.Take a look at this topic as well. This might help.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88,\nThanks for the answer and detail explain.Cheers\nWatri", "username": "Watri_Wahab" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to change the data timezone in mongodb atlas
2022-06-14T04:26:41.359Z
How to change the data timezone in mongodb atlas
15,494
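A short sketch of the conversion mentioned above: dates stay stored as UTC, and an aggregation presents them in another zone at read time. The collection and field names follow the example in the thread; the time zone is just an illustration:

```javascript
// Format the stored UTC date in a chosen IANA time zone at query time.
db.coll.aggregate([
  {
    $project: {
      localTime: {
        $dateToString: {
          date: '$date',
          timezone: 'Asia/Jakarta',
          format: '%Y-%m-%d %H:%M:%S'
        }
      }
    }
  }
]);
```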
https://www.mongodb.com/…1bd26edeb6ea.png
[ "indexes", "ops-manager", "upgrading" ]
[ { "code": "", "text": "Hi all,I thought I’d share my experiences of upgrading from M2 to M5 recently and hopefully pass some lesson’s learned on to others.So I had a production database running on an M2. I know MongoDB don’t recommend running a production system from shared infrastructure and perhaps this highlights one of the reasons it’s a bad idea. It was about 1.8GB in size with a couple of collections having more than 3M records. I chose a Saturday morning to start as it would mean I was free from distractions.Step 1 was to take a copy of the most recent back-up and download it for safe keeping if anything went wrong.\nStep 2 was to disable all of the running scheduled jobs and incoming HTTPS endpoints so that they wouldn’t be putting data into the database during the upgrade.\nStep 3 I also tried to unlink the database from the App so that no data could be written to the database. However, clicking unlink in the UI didn’t do anything (possible bug?) and trying to disable the link via a deployment also didn’t work. Given I’d already done steps 1 and 2, I wasn’t too worried about it.\nStep 4 was to click the upgrade button. A message next to the button said it would take between 7 - 10 minutes.After about an hour into the upgrade I started to get worried that it was going badly. During the upgrade process you lose access to all of the metrics so I couldn’t check to see how far through it was. I started to think about rolling back and restoring my back-up. I then created a fresh M5 instance to see if I could restore my back-up to it. However, it turns out the format of the back-up is not suitable for restoring back into MongoDB directly. I read online that I could install a local instance of Mongo and use that to convert the format but that seemed like a big job and not an area I was familiar with.After another hour, I started to wonder whether not unlinking the application was causing data to be written to the database and restarting the migration process. I hurriedly wrote a site under maintenance page for my website, which, I should have done from the beginning but I thought the site was only going to be down for 10 minutes so hadn’t bothered.I thought about contacting support but being a Saturday they weren’t available on my plan. Perhaps choosing a Saturday wasn’t a good idea.I then had the idea to switch my application linked database to the empty M5 database I created earlier to divert any remaining traffic away from the migrating database. I then just left it for a few more hours to see if the migration would complete. Finally the migration completed about 7 hours later with a message saying that it had failed.Once the database was back up, I could check the stats and saw that it had restarted a couple of times but the third time (after I had diverted the app to the empty database) succeeded. Although the migration still took several hours (much longer than the advertised 10 minutes).\nimage881×358 17.7 KB\nWhilst it said it had failed, it was up and running on an M5 and all of the collections appeared to be in place with data in them so as far as I was concerned it appeared successful. 
I linked the app back to the migrated database, took down the maintenance page and re-enabled the endpoints and schedules.I was a bit worried that the database had shrunk in size from 1.8GB to about 1GB but I put this down more efficient use of storage rather than data loss as all my data appeared to be in place.A few days later I got an email from a user saying they had lost some data and I started to dig into exactly what was missing. What I discovered was that all of the indexes on the collections had gone missing (not something I thought to check after the migration). The reason this hadn’t caused a massive performance issue is because Mongo had created some indexes automatically for me (nice feature!) however, it didn’t know that they needed to be unique. So I quickly tried to recreate the unique indexes but they failed due to the presence of duplicate records. I spend a few hours hunting down the duplicates one by one and deleting them but I wasn’t sure how many I had. Finally I wrote an aggregation to count them all and discovered that I had tens of thousands of duplicate records (not easy to spot in millions of records). I presume these came about from the migration restarting as many that I spot checked were of quite old data.I then wrote a function to find the duplicates via an aggregation then loop through and delete them using a bulk update in batches of 1000. Finally then was I able to reapply my indexes. Luckily I had written a script to create all of my indexes and views from scratch so this step was easy.The reason the user thought they had lost data was because an aggregation was failing due to the missing indexes and making it look like there was missing data in the application.I guess what I should have done and will do next time is create a fresh M5 instance then restore the M2 back-up into the M5 then delete the M2 and re-point the app to the new M5 instance. Hopefully this will be useful to anyone who is considering upgrading from M2 to M5.I’m also not entirely convinced that disabling the https endpoints really worked as I saw log entries showing them working even after I disabled themRequests from MongoDB:Sorry for the long post. Hope someone finds it useful.", "username": "ConstantSphere" }, { "code": "", "text": "Welcome aboard!I also experienced upgrading from M2 to M5 in production but I only had 1.2Go of data to migrate.I will be adding my experience of migrating Realm Cloud, maybe it will help someone, who knows…It was painful too but less than you I trust, probably because I expected the migration to go bananas. And that’s why I have the mobile clients check a status flag on my own server (serving a static file at Google Cloud but it could have been a public Github repo) before trying to connect to Realm.The flow is Launch app → Check status flag → Connect to Realm if up, else activate maintenance mode of the app.This way I am sure that nobody tries to save data that can’t be saved during server migration.Migrating data took about 2h30. It is impossible to guess how long it takes but you can track it in the Realm logs on your backend. Until you see a final log « Operation Complete, took x seconds », it is not over.This is actually not part of the migration itself, it is part of Realm DB Cloud logic to reconstruct itself whenever you modify the DB schema (in the case of migrating, you are basically creating a new schema). 
So keep in my mind that if you switch Dev Mode in prod, you will make Realm rebuild itself taking another couple of hours during which users can’t save anything to Realm cloud.The worst part is that because Realm DB Cloud is rebuilding itself, it can’t serve the correct documents to your users and your client logic will probably believe it can create a new unique document because you don’t have one but you actually already have one that can’t be served to the end users for the time being… So even if you have a unique key index, it won’t help. At least this is from my experience.What I recommend is to have a status flag endpoint so that you can kick users out of your app, stop them from connecting to Realm Cloud, and update your schema without any complication. Maybe make sure you have a query to check for conflicting entries in case something bad happens.", "username": "Jerome_Pasquier" }, { "code": "", "text": "Simon, Jerome,All I can say is WOW these posts are a gut punch to read for those us here at MongoDB working hard to make these products better and it’s disappointing that we have not responded sooner.To be intellectually honest and show some vulnerability, I think each of us that read your note line for line probably tried to respond and then felt ashamed and/or just didn’t quite know how to fully respond, and then said “I will come back to this later” and now it’s been 12 days and a second community member has responded on top with a similar albeit different issue and still 6 days have passed since then which likely caused folks who had planned or hoped to respond to further think “now how can I really even begin to respond”! None of that is to make any excuses, but only to simply start engaging with you.THANK YOU both for sharing the long and incredibly frustrating journeys you’ve been on with us. I believe that both of you have experienced different issues even if they look and sound similar on the surface.Simon, I believe you suffered from our M2 to M5 upgrade process running into an edge case in the brittleness of the backend processes that move the data (on the backend we pipe mongodump to mongorestore and have occasionally seen classes of errors that require manual intervention to fix; we have a plan to move to a more modern backend utility to power these upgrades in the future but unfortunately that utility is still in development and we’ve prioritized upgrades from our serverless environment to dedicated clusters ahead of M2 to M5 upgrades first which may have been a mistake in hindsight). The fact that you felt unable to get support when you needed it is also unacceptable – even if this was a small database, your users were counting on it and we let you down. The process you went through to pin down the data issues afterwards sounds nightmarish: I am still not 100% clear on whether you think the data issues derived from your app writing during the upgrade or restore, or if you believe the backed up data itself had the issue? if the latter that is very concerning.And then Jerome, your issue I believe may be completely different, and related to the fact that upon upgrade, the oplog is not preserved–this can cause a Sync enabled application to lose the ability to stay in sync and to need to re-initialize. We are trying to figure out how to architecturally handle this situation more elegantly: it is unfortunately a nuanced and technically complex topic to properly address. 
Your suggestion around better ergonomics for managing this state is a good one: ideally we would not need the state at all.Taking a step back I want to really celebrate both of you for taking a positive “help the community” tone instead of coming in hot and angry as I probably would have done after experiencing these really problematic experiences. Your patience and willingness to help us help the community is really an incredible sign of maturity that all of us at MongoDB appreciate.-Andrew (SVP Cloud Products)\n(we will reach out separately via email)", "username": "Andrew_Davidson" }, { "code": "", "text": "Hi @Andrew_Davidson - thank you very much for your response (and the separate email).To answer your question about the data integrity - I don’t think I lost any data; I think the duplicates came about from the restore process restarting - there were way too may records for the application to have generated them in that time and some of the timestamps went back over a year. My assumption is that the restore process takes place before the (unique) indexes are added and restarting that process multiple times can cause duplication.Thanks once again for reaching out, it is much appreciated.", "username": "ConstantSphere" }, { "code": "", "text": "Hi @Andrew_DavidsonThanks for officially answering and no problem at all, this is what forums are for It is exactly as you said: My issue was with oplog and, as a result, a bigger issue with client reset which wasn’t gracefully handled on our end.\nNot sure what the client reset problem was as we have the same code as in the complete sample provided in the iOS documentation. But I heard there is an upcoming SDK release to improve client reset internally ", "username": "Jerome_Pasquier" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Experiences in upgrading from M2 to M5
2022-06-10T11:15:31.694Z
Experiences in upgrading from M2 to M5
4,002
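The cleanup described in the thread above — finding duplicates with an aggregation, deleting them in batches of 1000, then re-creating the unique index — could look roughly like the sketch below. This is not the poster's actual code; the collection name and the `externalId` field that should be unique are assumptions:

```python
from pymongo import ASCENDING, DeleteOne, MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder
coll = client["mydb"]["mycoll"]                                               # placeholder

# Group on the field that should be unique and collect the _ids of each group.
pipeline = [
    {"$group": {"_id": "$externalId", "ids": {"$push": "$_id"}, "n": {"$sum": 1}}},
    {"$match": {"n": {"$gt": 1}}},
]

ops = []
for group in coll.aggregate(pipeline, allowDiskUse=True):
    for dup_id in group["ids"][1:]:                 # keep the first document of each group
        ops.append(DeleteOne({"_id": dup_id}))
        if len(ops) == 1000:                        # delete in batches of 1000
            coll.bulk_write(ops, ordered=False)
            ops = []
if ops:
    coll.bulk_write(ops, ordered=False)

# Only once the duplicates are gone can the unique index be rebuilt.
coll.create_index([("externalId", ASCENDING)], unique=True)
```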
https://www.mongodb.com/…7_2_1024x417.png
[ "python", "data-api" ]
[ { "code": "", "text": "I’m trying to Update/Post to my documents in an Atlas database through using the Data API via postman, I followed all the steps and filled in all the essential variables and I consistently get that errorinvalid session: error finding user for endpoint\nCapture1271154×471 62 KB\n", "username": "Amir_Adel" }, { "code": "", "text": "Here it shows that I successfully forked the MongoDB Data API to my Postman collection and filled in all the necessary variables\nCapture1261192×717 61.6 KB\n", "username": "Amir_Adel" }, { "code": "", "text": "This is me making the POST request to find a document in my database (this same exact error keeps on showing with all the other endpoints, update, insert, etc…)", "username": "Amir_Adel" }, { "code": "", "text": "And these are the header and body of the request showing that the variables were correctly setup and the the request is going through with no problems but I get that same error every time .\nCapture1291014×836 60.4 KB\n", "username": "Amir_Adel" }, { "code": "", "text": "So I figured it might be a postman thing so I tried to make the request using Python instead and I got the same exact error there too.\nCapture1301017×290 21.9 KB\n", "username": "Amir_Adel" }, { "code": "", "text": "This is the document I am trying to send the requests to,\nCapture1311071×361 23.4 KB\nAnd this is my Data API game with the cluster all being setup and given access for both reading and writing.So I am not sure what is exactly going on or what am I not seeing", "username": "Amir_Adel" }, { "code": "", "text": "Hi @Amir_Adel and welcome to the community!Based off the error that’s being generated by postman, I believe this may have to do with an invalid API key being entered. Has the same key worked for your previously?If not, can you try the testing again using a brand new Data API key?I was able to reproduce the error on my test environment by providing an incorrect API key.Please note that I was able to successfully perform the POST request to find a document before purposefully changing the API key to an invalid value:\nimage1376×700 83.8 KB\nRegards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Postman, Atlas Data API invalid session error
2022-06-20T17:40:05.981Z
Postman, Atlas Data API invalid session error
4,718
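Since the root cause in the thread above turned out to be an invalid API key, here is a minimal Python version of the same `findOne` call for comparison; the URL, key and namespace are placeholders, and the endpoint path (`beta` vs `v1`) depends on how the Data API app was generated:

```python
import requests

# All values below are placeholders for your own Data API app ID, key and namespace.
url = "https://data.mongodb-api.com/app/<data-app-id>/endpoint/data/v1/action/findOne"
headers = {
    "Content-Type": "application/json",
    "api-key": "<valid-data-api-key>",  # an invalid or revoked key produces "invalid session"
}
payload = {
    "dataSource": "Cluster0",
    "database": "mydb",
    "collection": "mycoll",
    "filter": {"name": "example"},
}

resp = requests.post(url, headers=headers, json=payload, timeout=30)
print(resp.status_code, resp.json())
```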
null
[ "app-services-cli" ]
[ { "code": "", "text": "How are MongoDB Atlas + Realm projects supposed to organize environments?Are Realm environments supposed to all use the same database?Are we expected to migrate entire Realm applications from project to project throughout a development cycle?What’s the correct to organize MongoDB and Realm intro environments?Why do I always have trouble finding documentation on this online?Thanks for any help!Double posting from old thread in other topic: How do I organize LOCAL-DEV, TEST, and PRODUCTION environments? - #3 by Tim_N_A", "username": "Tim_N_A" }, { "code": "", "text": "Hi Tim - @Lauren_Schaefer 's talk on this might be a good place to start: Building CI/CD Pipelines for MongoDB Realm Apps - YouTube", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thanks for the pointer to my talk, Sumedha! Here is a blog post that covers the same content: How to Build CI/CD Pipelines for MongoDB Realm Apps Using GitHub Actions | MongoDB", "username": "Lauren_Schaefer" }, { "code": "", "text": "How to Build CI/CD Pipelines for MongoDB Realm Apps Using GitHub Actions | MongoDB I was following the github actions tutorial from you @Lauren_Schaefer and I would still have some questions:Supposing I have a simple realm app with a trigger, trigger configuration json is bound to a specific database. How can I use a single repo to manage dev/ prod environments seamlessly, so that I only need to do a PR when I’m confident that dev environment is fine? I would totally avoid using UI to deploy changes for prod unless there is a simple setting which could only be done there.My thinking was something like this:I would have dev project connected to my repository develop branch. Once I’m confident everything’s good, I would do a PR into master for release. Merging the PR would call github actions as you described which will:This doesn’t seem like a good option to me, since setting an automated deploy on the production application for master branch would not allow me to do any changes on the realm UI without breaking the master branch I guess, since mongo will overwrite dev code… ( with this approach, master branch will have dev settings, but relying on github actions to do all the required json manipulations )I’m pretty confused about the best way of doing it. Your blog post it’s focused on the mobile application and I cannot see anything related to triggers which are tied to database / collections. Would you advise me which option would work best conceptually for my use case? Thanks a lot in advance!", "username": "Tudor_Suditu" }, { "code": "", "text": "Hi Tudor -I’m interested in learning more hereThis doesn’t seem like a good option to me, since setting an automated deploy on the production application for master branch would not allow me to do any changes on the realm UI without breaking the master branch I guess, since mongo will overwrite dev code… ( with this approach, master branch will have dev settings, but relying on github actions to do all the required json manipulations )Is there a reason or scenario why you anticipate using both Github Deploy + UI to deploy changes to your app? 
From what I can tell, your concern is mainly around when you are doing both frequently.The other thing I’m not sure I understand is why making a change to your UI on the dev app (which is tied to your dev branch), doesn’t appropriately propogate to your prod app, since the same GH action will trigger a deploy to prod, changing the environment.", "username": "Sumedha_Mehta1" }, { "code": "name: Build\n\non:\n push:\n branches:\n - \"main\"\n\njobs:\n deploy:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v2\n\n # SET ENVIRONMENT VARIABLES WE WILL USE IN LATER STEPS\n - name: \"Set env vars\"\n if: ${{ github.ref == 'refs/heads/main' }}\n run: |\n echo \"REALM_APP_ID=${{ secrets.PROD_REALM_ID }}\" >> $GITHUB_ENV\n echo \"CLUSTER_NAME=cicd-prod-cluster\" >> $GITHUB_ENV\n echo \"DATABASE_NAME=production\" >> $GITHUB_ENV\n echo \"APP_NAME=cicd-prod\" >> $GITHUB_ENV\n - name: \"Install the Realm CLI\"\n run: |\n npm install -g mongodb-realm-cli\n - name: \"Authenticate Realm CLI production\"\n if: ${{ github.ref == 'refs/heads/main' }}\n run: |\n realm-cli login --api-key=\"${{ secrets.PROD_REALM_API_PUBLIC_KEY }}\" --private-api-key=\"${{ secrets.PROD_REALM_API_PRIVATE_KEY }}\" --realm-url https://realm.mongodb.com --atlas-url https://cloud.mongodb.com\n - name: update triggers configurations\n run: |\n for filename in triggers/*.json; do\n echo \"`jq '.config.database=\"${{ env.DATABASE_NAME }}\"' $filename`\" > $filename\n cat $filename\n done\n - name: update realm-config\n run: |\n echo \"`jq '.app_id=\"${{ env.REALM_APP_ID }}\"' realm_config.json`\" > realm_config.json\n echo \"`jq '.name=\"${{ env.APP_NAME }}\"' realm_config.json`\" > realm_config.json\n cat realm_config.json\n - name: update data sources\n run: |\n echo \"`jq '.config.clusterName=\"${{ env.CLUSTER_NAME }}\"' data_sources/mongodb-atlas/config.json`\" > data_sources/mongodb-atlas/config.json\n cat data_sources/mongodb-atlas/config.json\n - name: deploy realm application\n run: |\n realm-cli push --remote=\"${{ env.REALM_APP_ID }}\" -y\n", "text": "Attaching the build.yml started from the skeleton that you did. It kinda seems to work, however, adding a trigger with preimages set to enabled, then obviously I need to change trigger configuration for preimage in each json with some scripts… I could do that , but I have some doubts that the same structure will hold for you guys. Looking forward to hearing from you how the environments issue would be solved so that it doesn’t involve human interaction to manually create triggers, which obviously would be error prone.", "username": "Tudor_Suditu" }, { "code": "", "text": "It seems to me that it’s kind of clunky to execute github actions on main branch push just so that I modify json configs and deploy altered code to production app. If at any point in time I want to add a hotfix from the UI in prod, then UI will overwrite the github content ( which will have dev configurations << github actions don’t alter the actual repository as shown above>> ) , and the main branch will eventually have prod values in configurations…That means that at the next pull request from dev to master, we ll need to solve conflicts, something like that. 
I’m just wondering whether there’s a better approach or not into solving this.", "username": "Tudor_Suditu" }, { "code": "name: Build\n\non:\n\n push:\n\n branches:\n\n - \"main\"\n\njobs:\n\n deploy-production:\n\n if: ${{github.ref == 'refs/heads/main'}}\n\n runs-on: ubuntu-latest\n\n steps:\n\n - uses: actions/checkout@v2\n\n # SET ENVIRONMENT VARIABLES WE WILL USE IN LATER STEPS\n\n - name: \"Set env vars\"\n\n run: |\n\n echo \"REALM_APP_ID=${{ secrets.PROD_REALM_ID }}\" >> $GITHUB_ENV\n\n echo \"CLUSTER_NAME=xxxxx\" >> $GITHUB_ENV\n\n echo \"DATABASE_NAME=xxxxx\" >> $GITHUB_ENV\n\n echo \"APP_NAME=xxxxxxx\" >> $GITHUB_ENV\n\n - name: \"Install the Realm CLI\"\n\n run: |\n\n npm install -g mongodb-realm-cli\n\n - name: \"Authenticate Realm CLI production\"\n\n run: |\n\n realm-cli login --api-key=\"${{ secrets.PROD_REALM_API_PUBLIC_KEY }}\" --private-api-key=\"${{ secrets.PROD_REALM_API_PRIVATE_KEY }}\"\n\n - name: update triggers configurations\n\n run: |\n\n for filename in triggers/*.json; do\n\n echo \"`jq '.config.database=\"${{ env.DATABASE_NAME }}\"' $filename`\" > $filename\n\n cat $filename\n\n done\n\n - name: update realm-config\n\n run: |\n\n echo \"`jq '.app_id=\"${{ env.REALM_APP_ID }}\"' realm_config.json`\" > realm_config.json\n\n echo \"`jq '.name=\"${{ env.APP_NAME }}\"' realm_config.json`\" > realm_config.json\n\n echo \"`jq '.environment=\"production\"' realm_config.json`\" > realm_config.json\n\n cat realm_config.json\n\n - name: update data sources\n\n run: |\n\n echo \"`jq '.config.clusterName=\"${{ env.CLUSTER_NAME }}\"' data_sources/mongodb-atlas/config.json`\" > data_sources/mongodb-atlas/config.json\n\n echo \"`jq '.config.namespacePreimageConfigs=(.config.namespacePreimageConfigs | if(type==\"array\" and length > 0) then map(.dbName=\"${{ env.DATABASE_NAME }}\") else [] end) ' data_sources/mongodb-atlas/config.json`\" > data_sources/mongodb-atlas/config.json\n\n cat data_sources/mongodb-atlas/config.json\n\n - name: deploy realm application\n\n run: |\n\n realm-cli push --remote=\"${{ env.REALM_APP_ID }}\" --include-package-json -y\n", "text": "Attaching final version of the build script. I already tested this one, and it seems to be working fine with triggers / secrets / config values for environments ( development.json, production.json ). I would be very interested in finding out if this approach is an anti pattern or not at least.Current approach for our use case:PS: in this use case, we disabled automated deployments for production, so that all the updates will be flowing through our PRs and repo main branch in a safe manner.", "username": "Tudor_Suditu" }, { "code": "{\n \"app_id\": \"%(%%environment.values.realmAppId)\",\n \"config_version\": 20210101,\n \"name\": \"%(%%environment.values.realmAppName)\",\n \"location\": \"US-VA\",\n \"provider_region\": \"aws-us-east-1\",\n \"deployment_model\": \"GLOBAL\",\n \"environment\": \"%(%%environment.values.environment)\"\n}\n", "text": "Why wouldn’t you use environment variables like this (realm_config.json)?", "username": "Kevin_18580" }, { "code": "", "text": "@Kevin_18580 I tried to use environment variables as you suggested, and it worked for app_id and name, but not for the environment. It looks like the environment is kind of self-referencing itself, so how should it know which environment to use?", "username": "Jimpy" }, { "code": "", "text": "Just to confirm – App Services does not support environment variables for the Environment field as this would create a cyclical reference. 
Our recommendation here would be to hardcode the environment in the repo/branch and then update it as a part of the code promotion / CICD process.", "username": "Drew_DiPalma" }, { "code": "{\n \"values\": {\n \"cluster\": \"my-cluster\",\n \"environment\": \"development\",\n \"realmAppId\": \"triggers-someid\",\n \"realmAppName\": \"Triggers\",\n }\n}\n", "text": "I am using the following in my environments/development.json:Each environment json would have different values.The project/cluster will have the deployment environment set and that determines which JSON file is used in the Realm application.FYI: we are using 3 clusters (one for development, qa and production).", "username": "Kevin_18580" } ]
Realm Environments?
2021-07-22T14:01:57.704Z
Realm Environments?
6,499
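The GitHub Actions job in the thread above rewrites the exported app's JSON files with jq before pushing; the same per-environment rewrite can be sketched in Python, shown only to illustrate the idea — the file paths and keys mirror the YAML above but still assume a standard exported-app layout:

```python
import glob
import json

DATABASE_NAME = "production"       # normally injected from CI secrets / env vars
REALM_APP_ID = "cicd-prod-xxxxx"   # placeholder app id

# Point every trigger at the production database.
for path in glob.glob("triggers/*.json"):
    with open(path) as f:
        cfg = json.load(f)
    cfg.setdefault("config", {})["database"] = DATABASE_NAME
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)

# Update the top-level realm_config.json in the same pass.
with open("realm_config.json") as f:
    realm_cfg = json.load(f)
realm_cfg["app_id"] = REALM_APP_ID
realm_cfg["environment"] = "production"
with open("realm_config.json", "w") as f:
    json.dump(realm_cfg, f, indent=2)
```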
null
[ "data-api" ]
[ { "code": "", "text": "having this\n“area”: { “$regex”: “17)” } }I tried escape the ) as “17)” and also tried with “17\\)” both not workingis there something I missing?Regards", "username": "Brahian_Velazquez" }, { "code": "curl --location --request POST 'https://data.mongodb-api.com/app/data-xxx/endpoint/data/beta/action/find' \\\n--header 'Content-Type: application/json' \\\n--header 'Access-Control-Request-Headers: *' \\\n--header 'api-key: xxx' \\\n--data-raw '{\n \"collection\":\"test\",\n \"database\":\"test\",\n \"dataSource\":\"rs0\"\n}'\n\n{\"documents\":[{\"_id\":0,\"area\":\"17\"},{\"_id\":1,\"area\":\"17)\"}]} \ncurl --location --request POST 'https://data.mongodb-api.com/app/data-xxx/endpoint/data/beta/action/find' \\\n--header 'Content-Type: application/json' \\\n--header 'Access-Control-Request-Headers: *' \\\n--header 'api-key: xxx' \\\n--data-raw '{\n \"collection\":\"test\",\n \"database\":\"test\",\n \"dataSource\":\"rs0\",\n \"filter\": {\"area\": {\"$regex\":\"17\\\\)\"}}\n}'\n\n{\"documents\":[{\"_id\":1,\"area\":\"17)\"}]}\n{\"$regex\": \"17\\\\)\"}", "text": "Hi @Brahian_VelazquezYou would need to escape the escape character for example, I have two documents in my test collection:and we can select the one with the bracket using regex:Please note the {\"$regex\": \"17\\\\)\"} in the filter above.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to escape special characters with the Data API
2022-06-22T23:54:38.999Z
How to escape special characters with the Data API
2,983
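The key point in the answer above is that the JSON payload has to carry a double backslash so the server receives the regex `17\)`. The same request from Python, with a placeholder URL and key — passing the body via `json=` handles the extra escaping automatically:

```python
import requests

url = "https://data.mongodb-api.com/app/<data-app-id>/endpoint/data/v1/action/find"
headers = {"Content-Type": "application/json", "api-key": "<api-key>"}

payload = {
    "dataSource": "rs0",
    "database": "test",
    "collection": "test",
    # The Python literal "17\\)" is the string 17\) ; JSON serialisation turns it into
    # "17\\)" on the wire, which is exactly what the curl example above sends.
    "filter": {"area": {"$regex": "17\\)"}},
}

resp = requests.post(url, headers=headers, json=payload, timeout=30)
print(resp.json())
```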
null
[]
[ { "code": "zsh: bad CPU type in executable: mongod", "text": "I just upgraded my developer machine to a Mac Studio, with the M1 Max chip.I seem to get zsh: bad CPU type in executable: mongod when I try to install the MongoDB developer tools with brew.Is the M1 Max supported - or can I somehow get MongoDB community working on my new machine?", "username": "Alex_Bjorlig" }, { "code": "", "text": "Search the forums for M1 questions, this stuff has been extensively discussed. Short answer: you may need to install Rosetta.", "username": "Jack_Woehr" }, { "code": "softwareupdate --install-rosetta", "text": "I would only post my question if I was in doubt - of course, I did search the forums.Finally, the only thing I missed was installing rosetta and restart the computer.Solution\nInstall rosetta with softwareupdate --install-rosetta in the terminal, and restart the computer after following the steps described here.", "username": "Alex_Bjorlig" }, { "code": "", "text": "Rosetta", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Support for Mac Studio M1 Max chip
2022-06-21T11:35:38.229Z
Support for Mac Studio M1 Max chip
1,346
null
[ "scala" ]
[ { "code": "", "text": "Hi,findOne - https://www.mongodb.com/docs/manual/reference/method/db.collection.findOne/limit - https://www.mongodb.com/docs/manual/reference/method/cursor.limit/Am I blind or does the Scala driver not support these? How is one supposed to do this with the Scala driver if not?", "username": "Kristoffer_Almas" }, { "code": "collection.find().first().printHeadResult()\n.limit(int)", "text": "Hi @Kristoffer_Almas,Looks like findOne in scala is :See the doc here.And limit is just .limit(int) on a cursor which is called a FindObservable in Scala apparently:See the doc here and search for “limit”.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Scala driver - findOne or limit?
2022-06-22T07:10:49.752Z
Scala driver - findOne or limit?
2,991
null
[ "database-tools" ]
[ { "code": "", "text": "hi, I have done importing csv file and made changes and when I am trying to export using the command “–mongoexport --db=users --collection=contacts --type=csv --fields=Name,Department,DOJ --out=contacts.csv”\nits is showing\nconnected to: mongodb://localhost/\nerror opening output stream: open contacts.csv: Access is denied.", "username": "Aman_N_A" }, { "code": "chmod 644 contacts.csv\n/tmp~/", "text": "Hi @Aman_N_A and welcome in the MongoDB Community !It’s not something like by any chance?If the file exists… Or maybe you are in a folder where you don’t have write authorization? Go to /tmp or your home ~/.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Getting error while mongoexport
2022-06-22T16:09:56.527Z
Getting error while mongoexport
2,554
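The error in the thread above comes from the output path rather than from MongoDB itself, so a quick check that the chosen output directory is actually writable can save a round trip; a trivial, purely illustrative Python check:

```python
import os
import tempfile

out_dir = os.getcwd()  # or whatever directory --out=... points at

print("writable according to os.access:", os.access(out_dir, os.W_OK))
try:
    # Actually creating (and immediately removing) a file is the definitive test.
    with tempfile.NamedTemporaryFile(dir=out_dir):
        print(f"{out_dir} is writable -- mongoexport --out should work from here")
except OSError as exc:
    print(f"{out_dir} is NOT writable ({exc}); try /tmp or your home directory instead")
```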
null
[ "serverless" ]
[ { "code": "", "text": "Hi I am using serverless and wanted to know:It the case dataSize is paid by customer, is there a way to decrease it? For our use case we don’t expect the data to go beyond 200GB. Not sure if serverless is trying to be smart on how large the cluster should be.", "username": "Ouwen_Huang" }, { "code": "", "text": "Hi @Ouwen_Huang and welcome in the MongoDB Community !The 1TB limit is per cluster. See all the other limitations here:About the costs, you can read more about them here:And you can also check your Billing tab in Atlas to check exactly what is counted. Don’t forget that a MongoDB Atlas cluster contains your data + the oplog + the indexes + system collections. All this need some space. If you want to reduce your data size, maybe you could consider archiving some data in the Data Lake. Sometimes a wrong data model (schema) can also lead to unnecessary data sizes.The WiredTiger compression can mitigate this a bit but it’s not magical either.I hope this helps a bit.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "compactmongosh", "text": "Hi @MaBeuLux88,Thanks for the quick response! I am trying to debug why the storage size is so large (4x the underlying collection). Datasize was about 400GB before and after a couple small <1GB indexes were created. The oplog + system collections should be using the defaults.I’ve also tried using the compact command on my collections via the mongosh but it doesn’t seem to have affected the size.", "username": "Ouwen_Huang" }, { "code": "rs.printReplicationInfo()", "text": "No idea without diving in the data. Check the data sizes with MongoDB Compass maybe?\nimage1221×134 9.82 KB\nThe UI gives a few informations about the collections & docs.Check the oplog sizes as well with rs.printReplicationInfo().Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "{\n db: 'dbname',\n collections: 5,\n views: 0,\n objects: Long(\"62852233\"),\n avgObjSize: 6738.123068388039,\n dataSize: Long(\"423506081077\"),\n storageSize: Long(\"104617545728\"),\n totalFreeStorageSize: Long(\"7421489152\"),\n numExtents: Long(\"0\"),\n indexes: 10,\n indexSize: Long(\"15020978176\"),\n indexFreeStorageSize: Long(\"7416700928\"),\n fileSize: Long(\"0\"),\n nsSizeMB: 0,\n ok: 1\n}\nrs.printReplicationInfo()mongos", "text": "Appreciate the debugging help, let me know if there is anything I should be doing. This is what db.stats() gives back.rs.printReplicationInfo() gives me an error for mongos is there a different connection/command for serverless to get the replication info?", "username": "Ouwen_Huang" }, { "code": "rs.printReplicationInfo()local", "text": "From what I see in this stats, you have 423 GB of uncompressed data stored in MongoDB and because of the compression of WiredTiger, it’s reduced to 104 GB. You also have 15 GB of indexes.My bad about the rs.printReplicationInfo() command, it’s actually written in the Atlas Serverless Limitations link I shared earlier: there is no access to the collections in the local database.Did you check your billing and how much data storage is billed for this cluster?", "username": "MaBeuLux88" }, { "code": "", "text": "Seems like billing is for the 423GB (I’m also unsure if the 1TB compression limit applies to compressed disk use or uncompressed)", "username": "Ouwen_Huang" }, { "code": "", "text": "I talked with the Atlas & Serverless team and they explained to me a bit more how it works.So, yes, Atlas is billing Serverless based on the uncompressed size of your BSON docs + the indexes. 
The idea of billing on the uncompressed data rather than on the compressed data is that the final price doesn’t depend on the performances of the compression algorithm that WiredTiger is using. So it’s always fair and wouldn’t change if we update the compression algorithm in the future.If Atlas Serverless was billing on the compressed size, it would be x4 or x5 more expensive so it would seem that it’s less competitive and it would be less predictable as the compression can be more or less performant depending on the schema design you are using, the field types, etc. So it would be more complicated to predict your serverless costs in advance & plan ahead your spendings.Finally, the 1TB storage limitation is based on the collection + indexes data compressed. It is expected that users would migrate to Atlas Dedicated clusters if they come close to the limit. Eventually in the future, the team wants to push this limit up or completely remove it.I hope this makes sense and is helpful !Let me know if you have more questions of course!Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "@MaBeuLux88, this was very helpful. Thanks for asking on my behalf.So far I’m quite happy with the performance of mongo serverless, our use-case is extremely bursty, so paying for compute use is very attractive. I did notice some issues when scaling from 10GB insertion to 400GB there was some downtime which we needed to code resiliency for. I’m guessing there may have been a resource allocation trigger happening behind the curtain.I think I understand what you mean on cost: if billed on compressed size, the storage cost would just be scaled up 4x - 5x, so its the same price just different way to view it. I would actually prefer pricing on compressed disk. “Uncompressed” data pricing encourages the customer side to design around it.", "username": "Ouwen_Huang" }, { "code": "", "text": "Well it’s a good thing then in my opinion because if a customer optimize their schema design, they are on the good path to awesome perfs! ", "username": "MaBeuLux88" }, { "code": "", "text": "Just for clarity, wouldn’t there be no price difference between using wiredtiger compression vs having compression turned off?", "username": "Ouwen_Huang" }, { "code": "", "text": "It could increase the number of RPU and WPU that you consume if the compression was disabled. Not completely sure about that one.\nBut using compression is definitely a big performance boost.\nIt’s also the point of serverless: you don’t need to know how it’s managed in the background !", "username": "MaBeuLux88" }, { "code": "", "text": "Update because the “me” of the past isn’t as smart as the new “me”. RPU and WPU wouldn’t be affected by a disabled compression because the calculations are done before the storage engine so again, uncompressed. ", "username": "MaBeuLux88" } ]
MongoDB Serverless Pricing and Limitations
2022-06-19T16:43:23.779Z
MongoDB Serverless Pricing and Limitations
5,007
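The dataSize / storageSize / indexSize figures quoted in the thread above can be pulled programmatically; a small pymongo sketch that prints the uncompressed size (what Serverless bills on), the compressed on-disk size and the resulting ratio — the connection string and database name are placeholders:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@serverless0.example.mongodb.net")  # placeholder
stats = client["dbname"].command("dbStats")

data_size = stats["dataSize"]        # uncompressed BSON size of the documents
storage_size = stats["storageSize"]  # WiredTiger-compressed size on disk
index_size = stats["indexSize"]

gib = 1024 ** 3
print(f"dataSize   : {data_size / gib:7.1f} GB (uncompressed, billed by Serverless)")
print(f"storageSize: {storage_size / gib:7.1f} GB (compressed on disk)")
print(f"indexSize  : {index_size / gib:7.1f} GB")
print(f"compression ratio ~ {data_size / storage_size:.1f}x")
```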
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "We have a collection in an Atlas database. This collection has a field that takes decimal value. The collection is mapped in a Realm app as a field of type float. Sync is activated on this realm app.When a document is inserted in the Atlas collection with value of this field as 0, the record fails to sync with the Realm app.“Detailed Error: could not convert MongoDB value to Realm payload for { table: work_order_section_item, path: min }, value=0 : cannot transform int value for non-int property of type float”If the value is non zero decimal, it works fine. I tried mapping the field to float as well as double. Same results.", "username": "Sanjay_Mogarkar" }, { "code": "", "text": "How are you inserting the document into Atlas? Depending on the driver you may need to explicitly cast the value type as float upon insertion", "username": "Ian_Ward" }, { "code": "{\n \"id\": {\n \"type\": \"string\",\n \"value\": \"f530dac6db8630509b3a3632f39619ca\"\n },\n \"name\": {\n \"type\": \"string\",\n \"value\": \"temperature\"\n },\n \"min\": {\n \"type\": \"decimal\",\n \"value\": \"4.2\"\n },\n \"max\": {\n \"type\": \"decimal\",\n \"value\": \"6.4\"\n }\n }\n{\n \"_id\": \"f530dac6db8630509b3a3632f39619ca\",\n \"name\": \"temperature\",\n \"min\": 4.2,\n \"max\": 6.4\n}\n", "text": "I am using the mongodb javascript driver. I have a script that calls a third party REST API to get an array of documents. The objects in array contain all values as strings but there is a type definition in the input array. If the input object isI am transforming it intoi.e. when type is decimal I am using parseFloat(value) to convert incoming string to float. Finally I am calling insertMany() on the collection with array of transformed objects as input.Insert to Atlas collection works for any value but when the value is an integer the sync to Realm fails.“Detailed Error: could not convert MongoDB value to Realm payload for { table: work_order_section_item, path: max }, value=5 : cannot transform int value for non-int property of type double”At the moment I have set the Realm attribute type as double but I was getting similar errors when the type was float.\nI reckon javascript will treat 5.0 as 5 or 0.0 as simply 0. Not sure why int is not automatically transformed to float/double.", "username": "Sanjay_Mogarkar" }, { "code": "", "text": "Hello,Were you able to find a suitable fix to this? I am experiencing the same problem.", "username": "Harry_Heffernan" } ]
Value 0 in Atlas collection field fails to sync to realm field of type float and decimal
2021-08-20T07:11:08.210Z
Value 0 in Atlas collection field fails to sync to realm field of type float and decimal
2,855
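The failure above happens because a whole-number value reaches Atlas as a BSON int, which Sync will not coerce into a float/double property. The poster's code is Node.js; purely as an illustration of the idea, the sketch below does the type-driven conversion in Python, where `float()` always serialises to a BSON double even for "0" or "5" (in the Node.js driver the analogous fix is typically an explicit BSON Double wrapper):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")      # placeholder URI
items = client["mydb"]["work_order_section_item"]      # table name taken from the error above

def convert(field):
    """Coerce an incoming {'type': ..., 'value': ...} pair into a typed Python value."""
    if field["type"] == "decimal":
        # float() always maps to a BSON double -- even for "0" or "5" --
        # so the Realm float/double property stays satisfied.
        return float(field["value"])
    return field["value"]

incoming = {
    "id": {"type": "string", "value": "f530dac6db8630509b3a3632f39619ca"},
    "name": {"type": "string", "value": "temperature"},
    "min": {"type": "decimal", "value": "0"},
    "max": {"type": "decimal", "value": "6.4"},
}

doc = {("_id" if k == "id" else k): convert(v) for k, v in incoming.items()}
items.insert_one(doc)
```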
null
[ "atlas", "graphql", "realm-web" ]
[ { "code": "", "text": "Hello Members,\nI am trying to implement Full-Text Search over a GraphQL API in Atlas, with the help of the MongoDB tutorial. But when I am trying to search the “name”, the “search” option is not showing in the graphql section also tried to query in the custom resolver, and provided me an empty array.", "username": "Debajyoti_Chowdhury" }, { "code": "", "text": "\nScreenshot from 2022-06-22 17-20-39903×1058 97.8 KB\n", "username": "Debajyoti_Chowdhury" } ]
MongoDB GraphQL Query for the Full Text Search
2022-06-22T15:20:46.637Z
MongoDB GraphQL Query for the Full Text Search
2,477
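The thread above ends without a resolution; one way to narrow down an empty custom-resolver result is to run the underlying `$search` stage directly against the collection first and confirm that the Atlas Search index itself returns hits. A hedged sketch — the index name, namespace and field are all assumptions:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder
coll = client["mydb"]["products"]                                             # placeholder

pipeline = [
    {
        "$search": {
            "index": "default",  # must match the Atlas Search index name exactly
            "text": {"query": "something to look for", "path": "name"},
        }
    },
    {"$limit": 5},
    {"$project": {"name": 1, "score": {"$meta": "searchScore"}}},
]

for doc in coll.aggregate(pipeline):
    print(doc)
```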
https://www.mongodb.com/…1_2_1023x390.png
[]
[ { "code": "sudo dnf install mongodb-org mongodb-org-server mongod$ mongod {\"t\":{\"$date\":\"2022-06-22T15:21:27.818+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.825+05:30\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.825+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":26029,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"fedora\"}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.15\",\"gitVersion\":\"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9\",\"openSSLVersion\":\"OpenSSL 1.1.1n FIPS 15 Mar 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"rhel80\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Fedora release 36 (Thirty Six)\",\"version\":\"Kernel 5.18.5-200.fc36.x86_64\"}}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /data/db not found. 
Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":10000}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down full-time data capture\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}} ", "text": "I was trying to install mogodb on fedora 36 using this\n\nimage1122×428 36.6 KB\nand then sudo dnf install mongodb-org mongodb-org-server but now when i try to run mongod i get this$ mongod {\"t\":{\"$date\":\"2022-06-22T15:21:27.818+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.825+05:30\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, 
\"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.825+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":26029,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"fedora\"}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.15\",\"gitVersion\":\"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9\",\"openSSLVersion\":\"OpenSSL 1.1.1n FIPS 15 Mar 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"rhel80\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Fedora release 36 (Thirty Six)\",\"version\":\"Kernel 5.18.5-200.fc36.x86_64\"}}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /data/db not found. 
Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":10000}} {\"t\":{\"$date\":\"2022-06-22T15:21:27.826+05:30\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down full-time data capture\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"} {\"t\":{\"$date\":\"2022-06-22T15:21:27.827+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}} ", "username": "akshat_kumar" }, { "code": "\"s\":\"E\"", "text": "In the log, errors have severity E, which you can find by searching for\"s\":\"E\"In your case it is:NonExistentPath: Data directory /data/db not found. 
Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the ‘storage.dbPath’ option in the configuration file.The cause and the solution are clear.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB not working on Fedora 36, help!
2022-06-22T10:01:18.646Z
MongoDB not working on Fedora 36, help!
3,144
https://www.mongodb.com/…b3a83a4b1834.png
[ "python" ]
[ { "code": "", "text": "When trying to use pymongo’s, myclient.list_database_names() it doesn’t return the list of databases connected to the server but rather just the MongoClient information. My team and I are making a pymongo resource file to use for robot framework tests, 2 images below are code snippets and we keep getting a timeout error when using list_database_names() even after setting up a successful connection.", "username": "Kirill_Kobyakov" }, { "code": "client = MongoClient(\"mongodb://127.0.0.1:27017/admin?readPreference=secondaryPreferred\")\ndef listDatabases():\n names = client.list_database_names()\n return names\nx = listDatabases()\nprint(x)\n['ZONES', 'address', 'admin', 'applications', 'browser', 'config', 'id_test', 'local', 'products', 'random', 'school', 'servers', 'temp_data', 'ttl', 'update', 'users', 'users2']\n", "text": "When I run the list_database_names() function I get my list of databases returned. See below the example code / output.OutputCan you verify the version of pymongo that you are using? And perhaps share the output you are getting from your command?", "username": "tapiocaPENGUIN" }, { "code": "", "text": "pymongo 4.1.1 and I get a server selection time out error, this is it:ServerSelectionTimeoutError:: [Errno 11001] getaddrinfo failed, Timeout: 30.0s, Topology Description: <TopologyDescription id: 62b30bb5ef8befdc1c4a2a93, t\nopology_type: Unknown, servers: [<ServerDescription () server_type: Unknown, rtt: None, error=AutoReconnect(: [Errno 11001\n] getaddrinfo failed’)>]>Would this be a server side problem or something with my code?\nAnd thanks for the reply!", "username": "Kirill_Kobyakov" }, { "code": "", "text": "getaddrinfo failedSeems to indicate a DNS resolution issue.What URI are you using?", "username": "steevej" } ]
Problem getting list_database_names() to return list with database names
2022-06-21T18:50:32.507Z
Problem getting list_database_names() to return list with database names
3,181
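The `getaddrinfo failed` error in the thread above is raised before any query runs — the hostname in the connection string cannot be resolved from the client machine. A short sketch that fails fast and surfaces the problem instead of waiting the default 30 seconds (the URI is a placeholder):

```python
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Placeholder URI: the hostname here must resolve from the machine running the script.
uri = "mongodb://db.example.internal:27017/?directConnection=true"

client = MongoClient(uri, serverSelectionTimeoutMS=5000)  # fail fast rather than 30 s
try:
    print(client.list_database_names())
except ServerSelectionTimeoutError as exc:
    # getaddrinfo failed => DNS resolution: check the hostname, VPN/network, or use the IP.
    print("could not reach the server:", exc)
```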
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "Hello,\nI’m running MongoDB shell version: 2.0.7 and I would like to make a backup of my database and upgrade my server to the latest version. so, I was wondering:Thanks", "username": "Fabio_Perez" }, { "code": "mongodmongo", "text": "Hi @Fabio_Perez and welcome in the MongoDB Community !Some questions for you:In theory, the “normal” way to upgrade would be to follow the upgrade instructions from each major release and upgrade your cluster from major releases to major releases… But this would mean for you: 2.0.7 => 2.2.X => 2.4.X => 2.6.X => 3.0.X => 3.2.X => 3.4.X => 3.6.X => 4.0.X => 4.2.X => 4.4.X => 5.0.X (with X being the biggest minor version number as possible each time).You can find the doc for each major releases but here is an example doc for RS 4.0: https://www.mongodb.com/docs/manual/release-notes/4.0-upgrade-replica-set/Depending on the amount of data, MAYBE using mongoexport / mongoimport could be a solution as it’s JSON instead of BSON but it depends what’s in your data as well. You might have problems with data types, etc. If you only have basic values (string, bool, int, …) then maybe it could be a solution. But that’s a lot of IFs and no guarantee without testing & checking.Maxime.", "username": "MaBeuLux88" }, { "code": "mongodump --host IP_server --db ping --out /home/mongos/mongo2x-`date +\"%m-%d-%y\"` --gzipFailed: can't create session: could not connect to server: server at 208.85.216.45:27017 reports wire version 0, but this version of the Go driver requires at least 2 (MongoDB 2.6)mongodChecking status of database: mongodb running.\n/etc/mongodb.confmaster = true\nsourse = server2_hostname\nslave = true\nsource = server1_hostname\nreplSetmongodump", "text": "Hi Maxime,\nThanks for all that information.Here are my answers to your questions:Then I went to the config file of that other server and it has this:Tho the replSet value is commented out so it does not have a name.\nDoes this mean that the replicaSet is active and the DBs are mirrored?If the data is mirrored between the 2 servers, what would be the process to update ? should try to update the mongodb version on the second server (slave) to at least 2.6 to use mongodump ?Thanks for your help with this.", "username": "Fabio_Perez" }, { "code": "mongodmongod --version\ndb version v2.0.7, pdfile version 4.5\n", "text": "Hello,\nJust as an update. I’m running mongod on the same version:", "username": "Fabio_Perez" }, { "code": "mongodumpmongodumpmongorestoremongodmongodmongorestoredb.adminCommand( { setFeatureCompatibilityVersion: \"4.2\" } )", "text": "Hi @Fabio_Perez,It’s normal that you got this error. mongodump latest version (currently packaged with the other Mongo Tools in v100.5.3) isn’t compatible with MongoDB 2.0.X.It supports MongoDB 4.0 => 5.0. So the good news is, when you reach 4.0, you should be able to mongodump your 4.0 server and mongorestore in a 5.0 final cluster directly.Since relatively recently (couple of years top but I would say one year) the tools aren’t shipped anymore directly with the mongod binaries. They now follow independent release cycles.2.0.X is so old that I never touched that version. I wouldn’t touch it with a stick! When I see the “master” and “slave” terms, I think it’s because “Replica Set” wasn’t yet a concept at the time and the replication was still really primitive. 
So yes, I think they are “mirrored” with the old school system.What I would do if I was in your shoes:I would check the Production Notes / Upgrade instructions for each versions to check if there aren’t additional checks or command to run for specific versions. But it should be more or less OK. Long and annoying but at least this should work…If you want to speed up the process, you could technically just replace the binaries and keep the same data folder when you start the new mongod version. If you choose this alternative, you definitely need to read the upgrade instructions because you’ll have to run a few additional commands. For example db.adminCommand( { setFeatureCompatibilityVersion: \"4.2\" } ) is a command that you will need for each version. But you will also have extra “problems” with this approach. For example in v 2.6 => 3.0, you have to upgrade from MMAPv1 => WiredTiger.So I think in your case, I would dump/restore until 3.0. Then starting from 3.0 it’s faster to just restart the node with the new binaries of the next major release (like 3.0.X => 3.2.X where X is as high as possible) and apply the required extra commands to finish the migration. And repeat until you reach 4.0.\nThen you could just use the final latest version of mongodump 100.5.3 and do a final dump => restore in 5.0.9.It’s a bit complicated when you have to upgrade 11 years of innovations… Am I making sense or I just confused you even more at this point? The cool thing is that if you fail, you can always start over from scratch and don’t destroy the current prod.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi Maxime,\nThanks for all that information. You didn’t confuse me a lot LOL but it does seems like a lot of work to catch up with 11 years of innovations hehehehe.I was wondering how to download the binaries. I wen to this link: Release notes for 2.x\nand at one point there, it says “Download the binaries from this link” but that link redirects to someplace else. So I’m not sure as to where I can find all the binaries I need.Since the data on the backup is changing all the time because it’s live production data. Once I made the first backup on the new server, and run all the updates until almost the latest mongo version, that would mean that the data from that backup won’t be the same as the one currently in production. assuming It take me just 1 day doing all version updates.\nIf what I’m saying makes sense and it’s true. doing this won’t help me at all because I can update the server while is life so I would lose data while doing the update. right? so maybe is not worth the time doing it? and just leave the server as is until the end of time?Thanks again for the support", "username": "Fabio_Perez" }, { "code": "", "text": "I went here to try to find the archives:\nimage2066×834 102 KB\nIt leads to this link:https://www.mongodb.com/download-center/community/releases/archiveBut…\nimage800×450 74.2 KB\nSo you can go to the source where I assume there is truly evrything!Try MongoDB Atlas products free. Developers can choose to use in the cloud or download locally. Either way, our software makes it easy to work with data.I hope you find what you need !Also, another solution could be to one shot it complete with mongoexport & mongoimport. If you export everything in JSON and these files are correct without complex types, there is no reason why it wouldn’t work. It’s gonna take a long time but… Maybe it’s worth it rather than doing the entire upgrade path. I would give it a shot maybe? 
mongoexport 2.0.7 a small collection and see if you can mongoimport 100.5.3 in 5.0.9 ?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "mongodumpmongorestoremongodmmmongodumpmongorestoremongodumpmongodump*.metadata.jsonmongorestoremetadata.json", "text": "Hi @Fabio_Perez,I joined MongoDB when 2.0 was the current release, so I have some familiarity although we’re digging into ancient history now ;-).A few questions about your environment & goals:Is 48Gb the size of your data files or the size of the data?Is downtime during upgrades acceptable?Is your cluster hosted in the cloud (or does it have great connectivity)?Do you have a representative staging/QA environment where you can test upgrade processes?Have you already looked at updating your MongoDB driver versions? There have been significant changes in wire protocol and API since MongoDB 2.0, so a driver upgrade will likely be a prerequisite to your server upgrades.Some thoughts:Given the size of your data set I would be inclined to perform in-place upgrades (take backups before and after each major version upgrade!) rather than doing a mongodump & mongorestore as those will require downtime and dumping all data through mongod in order to recreate the data files. In-place upgrades are relatively straightforward and quick – the most time consuming aspect is normally validation.Before upgrading I would change from the deprecated Master/Slave topology to the modern Replica Set topology which will allow you to do rolling upgrades to newer server versions.Follow the instructions in the documentation for doing in-place upgrades, for example Upgrading to MongoDB 2.2. Make sure you check the instructions for each major release as there will be some different steps given the wide range of versions you are upgrading through.If you are looking for older server binaries you may find m helpful: GitHub - aheckmann/m: mongodb version management. You should be able to use this to download generic Linux binaries. m downloads and unpacks the binary tarballs and I expect these will be easier to work with than older server packages which will no longer have verifiable signatures (due to expiry).Related reading: Replace mongodb binaries all at once? - #3 by Stennie. If you can get your deployment to a new enough version where automation can be used, you can potentially reduce the number of upgrade steps to get to a non-EOL server version.If downtime is acceptable and you want to try the mongodump/mongorestore path (not fully tested or supported):Use 2.2.7 version of mongodump to backup your deployment. The 2.2 version of mongodump captures index definitions in *.metadata.json files.Try using latest version of mongorestore to restore this backup into a 5.0 deployment. You may encounter some errors due to stricter validation of collection options and indexes, but they should be fixable (you may have to edit the metadata.json file).Regards,\nStennie", "username": "Stennie_X" } ]
Upgrade mongodb 2.0.7 to latest version
2022-06-20T15:14:37.059Z
Upgrade mongodb 2.0.7 to latest version
2,899
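A minimal mongosh sketch of the per-release step described in the thread above: check the current feature compatibility version, then raise it once the new binaries are healthy. The version string shown is just one hop in the 2.x to 5.0 chain, not a fixed value.

```javascript
// Check what the deployment currently reports before touching anything.
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });

// After swapping in the next major release's binaries and confirming the
// node is healthy, raise the FCV so the following upgrade step is allowed.
// Repeat with "3.6", "4.0", "4.2", "4.4", ... at each hop.
db.adminCommand({ setFeatureCompatibilityVersion: "4.2" });
```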
null
[ "database-tools", "backup" ]
[ { "code": "2022-06-12T15:01:01.937+0000 building indexes up to 4 collections in parallel \n2022-06-12T15:01:01.937+0000 starting index build routine with id=3 \n2022-06-12T15:01:01.937+0000 starting index build routine with id=0 \n2022-06-12T15:01:01.937+0000 no indexes to restore for collection \n2022-06-12T15:01:01.937+0000 restoring indexes for collection from metadata \n2022-06-12T15:01:01.937+0000 index: &idx.IndexDocument{Options:primitive.M{\"background\":true, \"name\":\"Status_EntitySubType\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"Status\", Value:1}, primitive.E{Key:\"EntitySubType\", Value:1}}, PartialFilterExpression:primitive.D(nil)} \n2022-06-12T15:01:01.937+0000 run create Index command for indexes: Status_EntitySubType \n2022-06-12T15:01:01.937+0000 starting index build routine with id=1 \n2022-06-12T15:01:01.937+0000 restoring indexes for collection from metadata \n2022-06-12T15:01:01.937+0000 index: &idx.IndexDocument{Options:primitive.M{\"background\":true, \"name\":\"date\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"SentAt\", Value:-1}}, PartialFilterExpression:primitive.D(nil)} \n2022-06-12T15:01:01.937+0000 index: &idx.IndexDocument{Options:primitive.M{\"background\":true, \"name\":\"EntityId_NotificationId\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"EntityId\", Value:1}, primitive.E{Key:\"NotificationId\", Value:1}}, PartialFilterExpression:primitive.D(nil)} \n2022-06-12T15:01:01.937+0000 run create Index command for indexes: date, EntityId_NotificationId \n2022-06-12T15:01:01.937+0000 restoring indexes for collection notifications.EmailLogs from metadata \n2022-06-12T15:01:01.937+0000 index: &idx.IndexDocument{Options:primitive.M{\"background\":true, \"name\":\"Subject\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"Subject\", Value:-1}}, PartialFilterExpression:primitive.D(nil)} \n2022-06-12T15:01:01.937+0000 index: &idx.IndexDocument{Options:primitive.M{\"background\":true, \"name\":\"SentAt\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"SentAt\", Value:-1}}, PartialFilterExpression:primitive.D(nil)} \n2022-06-12T15:01:01.937+0000 run create Index command for indexes: Subject, SentAt \n2022-06-12T15:01:01.937+0000 starting index build routine with id=2 \n2022-06-12T15:01:01.937+0000 no indexes to restore for collection notifications.EmailTemplates \n2022-06-12T15:01:01.937+0000 restoring indexes for collection notifications.SMSLogs from metadata \n2022-06-12T15:01:01.937+0000 index: &idx.IndexDocument{Options:primitive.M{\"background\":true, \"name\":\"Phone_NotificationId\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"Phone\", Value:1}, primitive.E{Key:\"NotificationId\", Value:1}}, PartialFilterExpression:primitive.D(nil)} \n2022-06-12T15:01:01.937+0000 run create Index command for indexes: Phone_NotificationId \n2022-06-12T15:01:01.944+0000 Failed: notifications.SMSLogs: error creating indexes for notifications.SMS: createIndex error: connection() error occured during connection handshake: auth error: unable to authenticate using mechanism \"SCRAM-SHA-256\": (KeyNotFound) Cache Reader No keys \n found for HMAC that is valid for time: { ts: Timestamp(1655046061, 8295) } with id: 0\n", "text": "Hi,\nI am using mongorestore in a Kubernetes job. Sometimes, the job fails randomly especially when the db I’m restoring is a big one. Restarting the exact same k8s job sometimes works if the db I’m restoring is small.\nIt seems to be a shared resource issue but I am not sure what could it be. 
I tried to increase the resources for the job but it didn’t work it’s still failing with same error.It fails with following error:", "username": "Reab_AB" }, { "code": "", "text": "Hi @Reab_AB and welcome to the community !!It would be really helpful if you could share a few details for the above mentioned issue:Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "standaloneHi @Aasawari", "username": "Reab_AB" }, { "code": "", "text": "Forgot to mention, I am using MongoDB Database Tools in the k8s job. Setting requested memory 8 Gi and memory limits 24Gi", "username": "Reab_AB" }, { "code": "mongodump mongodumpmongodump", "text": "Hi @Reab_AB and thank you for sharing the above information.I don’t believe I have enough information to reproduce the issue that you have been seeing. Could you provide more information regarding:Please help us with the above information to help you further.Thanks\nAasawari", "username": "Aasawari" } ]
Mongorestore in K8s job fails randomly when building indexes
2022-06-13T11:31:40.878Z
Mongorestore in K8s job fails randomly when building indexes
2,715
null
[]
[ { "code": "", "text": "Mongodb Atlas support database triggers, from what I understand that is that only on insert, update and delete operation? My questions is can I send either a webhook or a stitch function to be triggered based on a document date in a collection?I.e. if ccExpiryDate = Date.now() trigger a webhook to my api or trigger a stitch function.I feel like this is possible with Atlas no?My other options is to use a lambda function to poll the database daily for expired dates, (seems inefficient)", "username": "Rishi_uttam" }, { "code": "", "text": "Hi @Rishi_uttam,You can use scheduled triggers based on cron tab expressions to pull the data using Realm triggers, no need for lambda.Having said that, if you need to perform actions based on expiry of documents you can see the following workaround I offered on that thread:Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel,Cron expressions cant reach in to a document collection to check dates can they?would you mind providing an example whereby a webhook or function is run based on a document date field.When you say “Use schedule triggers based on cron tab expressions” That means run the function daily and check if any documents match\" This could mean the function could run unnecessarily but didn’t have to since no documents returned that match the criteria?The solution you provided to @ilker_cam was a good one, however that would require me creating a TTL index which will delete the document (i dont want to delete the document) i want to check dates within it and run webhook or function.a) i could run a lambda function that checks the document date every minute, but that makes no sense\nb) it would be great if mongo could trigger the function on its own without being calledwhat are my options?", "username": "Rishi_uttam" }, { "code": "scheduleINSERTschedule _idconst sc_collection = context.services.get(\"LiveMig\").db(\"applicationData\").collection(\"schedule\");\nconst doc = sc_collection.insertOne({_id:changeEvent.fullDocument._id,triggerDate: changeEvent.fullDocument.enddate });\ntriggerDatecreateIndex( { \"triggerDate\": 1 }, { expireAfterSeconds: 0 } )\nDELETEschedule_idconst user_collection = context.services.get(\"LiveMig\").db(\"applicationData\").collection(\"user_coll\");\nconst doc = await user_collection.findOne({ _id: changeEvent.documentKey._id});\nenddateconfig.scheduletriggerDateconfig.scheduleconfig.schedule", "text": "Hi @Rishi_uttam,I can offer 2 options which I once worked with a colleague on (Big kodos to @Irena_Zaidman!!!) .This option is generally similar to the TTL Idea with some tweaks.The implementation will be as following:Create a collection that will contain the schedule data. In my example: schedule collection.Create a database trigger that is activated on the INSERT of the record in the original collection. The trigger will populate the schedule collection with same _id field from and the original collectionE.gCreate a TTL index on the triggerDate in the schedule collection as following:Create another database trigger that is activated on the DELETE of the record in the schedule collection.\nUse the _id of the deleted record (which is identical to the record in the original collection) to retrieve the required record of the original collection and run the required scripts.\ne.gThe implementation will involve creating scheduled triggers, using Realm API. 
The triggers will be created when the record containing the enddate is saved to the collection.Limitations to this solution:Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel.A lot may have changed since this post was created… I am still having an issue of easily triggering a function based on a document value (i.e date value) Cron expressions are ment to be run multipole times, however I only want to run a function once (like send an email based on a date/time)Currently i poll the database daily and check the dates for all records, and launch the function (but seems very inefficient)Note that I do not want to delete the document, I only want to check if the date field is passed. Can i still use a TTl trigger for this?)Is the above solution still the best option ? Thanks", "username": "Rishi_uttam" }, { "code": "", "text": "Hi @Rishi_uttam ,You can use a triggering collection that will delete a dummy trigger document and not the real document you want to operate on.I have a similar example in this article:In this article, we will explore a trick that lets us invoke a trigger task based on a date document field in our collections.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Wonderful, I’ll try and report soon.", "username": "Rishi_uttam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Database Triggers based on document date
2021-02-10T12:25:05.457Z
Database Triggers based on document date
7,649
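A condensed sketch of the TTL-based scheduling trick from the thread above, reusing the thread's own names (the LiveMig service, applicationData database, user_coll and the schedule helper collection). TTL deletes are processed by a background monitor, so firing is accurate to roughly a minute rather than the exact second.

```javascript
// 1. Helper collection: each document expires exactly at its triggerDate.
db.schedule.createIndex({ triggerDate: 1 }, { expireAfterSeconds: 0 });

// 2. Database trigger on DELETE events of "schedule": the deleted _id
//    points back at the original document, which stays intact.
exports = async function (changeEvent) {
  const users = context.services
    .get("LiveMig")
    .db("applicationData")
    .collection("user_coll");

  const doc = await users.findOne({ _id: changeEvent.documentKey._id });
  // ...send the email / call the webhook for `doc` here...
};
```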
https://www.mongodb.com/…b_2_1023x394.png
[ "dot-net", "containers" ]
[ { "code": "Newtonsoft.JSON{\n \"payload\": \"string\"\n \"...\"\n}\nSerialized 00:00:00.0294601 - Raw 00:00:00.0319273 - Json 00:00:00.0068617 - Bson 00:00:00.0060091 - HTTP 00:00:00.0036443\nSerialized 00:00:00.0277608 - Raw 00:00:00.0312567 - Json 00:00:00.0088210 - Bson 00:00:00.0043639 - HTTP 00:00:00.0037534\nSerialized 00:00:00.0313045 - Raw 00:00:00.0319250 - Json 00:00:00.0191259 - Bson 00:00:00.0060418 - HTTP 00:00:00.0035886\nSerialized 00:00:00.0298384 - Raw 00:00:00.0316654 - Json 00:00:00.0065837 - Bson 00:00:00.0047787 - HTTP 00:00:00.0036145\nSerialized 00:00:00.0300794 - Raw 00:00:00.0310646 - Json 00:00:00.0072488 - Bson 00:00:00.0040601 - HTTP 00:00:00.0036597\nSerialized 00:00:00.0300397 - Raw 00:00:00.0322210 - Json 00:00:00.0065505 - Bson 00:00:00.0054159 - HTTP 00:00:00.0036771\nSerialized 00:00:00.0301598 - Raw 00:00:00.0314309 - Json 00:00:00.0073720 - Bson 00:00:00.0040479 - HTTP 00:00:00.0036394\nSerialized 00:00:00.0296444 - Raw 00:00:00.0313114 - Json 00:00:00.0072129 - Bson 00:00:00.0072335 - HTTP 00:00:00.0044463\nSerialized 00:00:00.0303547 - Raw 00:00:00.0338965 - Json 00:00:00.0064084 - Bson 00:00:00.0057766 - HTTP 00:00:00.0035770\nSerialized 00:00:00.0292143 - Raw 00:00:00.0318035 - Json 00:00:00.0071610 - Bson 00:00:00.0042087 - HTTP 00:00:00.0035685\npublic static class Program\n{\n public static async Task Main()\n {\n var json = File.ReadAllText(\"input.json\");\n\n var mongoClient = new MongoClient(\"mongodb://localhost\");\n var mongoDatabase = mongoClient.GetDatabase(\"test\");\n\n var serializedDocument = Newtonsoft.Json.JsonConvert.DeserializeObject<ComplexObject>(json)!;\n var serializedCollection = mongoDatabase.GetCollection<ComplexObject>(\"serialized\");\n\n var rawDocument = BsonDocument.Parse(json)!;\n var rawCollection = mongoDatabase.GetCollection<BsonDocument>(\"raw\");\n\n var httpClient = new HttpClient();\n\n for (var i = 0; i < 50; i++)\n {\n var serializedWatch = Stopwatch.StartNew();\n serializedDocument.Id = Guid.NewGuid().ToString();\n serializedCollection.InsertOne(serializedDocument);\n serializedWatch.Stop();\n\n var rawWatch = Stopwatch.StartNew();\n rawDocument[\"_id\"] = Guid.NewGuid().ToString();\n rawCollection.InsertOne(rawDocument);\n rawWatch.Stop();\n\n var jsonWatch = Stopwatch.StartNew();\n Newtonsoft.Json.JsonConvert.SerializeObject(serializedDocument);\n jsonWatch.Stop();\n\n var bsonWatch = Stopwatch.StartNew();\n BsonSerializer.Serialize(new BsonDocumentWriter(new BsonDocument()), typeof(ComplexObject), serializedDocument);\n bsonWatch.Stop();\n\n var httpWatch = Stopwatch.StartNew();\n await httpClient.PostAsJsonAsync(\"http://localhost:5005\", serializedDocument);\n httpWatch.Stop();\n\n Console.WriteLine(\"Serialized {0} - Raw {1} - Json {2} - Bson {3} - HTTP {4}\", serializedWatch.Elapsed, rawWatch.Elapsed, jsonWatch.Elapsed, bsonWatch.Elapsed, httpWatch.Elapsed);\n }\n }\n}\n", "text": "I am not sure, where to post it, but I am investigating a performance issue and It seems that the driver is a little bit slow.I am using .NET Core 6 with the new newest mongodb driver and I am inserting documents with many fields of around 1.5 MB per document. 
MongoDB is hosted in docker on the developer machine.The following screenshot is from New Relic:\nimage1388×535 51.1 KB\nFor each operation I make 3 inserts or updates:I made two observations:I am using the mongo profiler to get information about the queries and I see the following results:\nimage1091×602 42.3 KB\nWhat you can see is the update statements for the snapshot collections sorted by milliseconds in descending order. Even the slowest example has a huge difference to the shown graph above, where the update on the client side takes around 67ms.I made another test. I insert large documents into the database and compare the performance of the C# side, with Mongo side. For comparison I also make other tests:If you take the HTTP performance (3ms) and add the MongoDB time (3ms), then it should take 6ms, not 30ms.The test is very simpleIf you compare it with MongoDB you see a very big overhead (screenshot shows old result of test program)\nimage1036×529 49.5 KB\nMy assumption was that it could have been caused by serialization, but it does not make a difference whether I use BsonDocument or a custom class.", "username": "Sebastian_Stehle" }, { "code": "", "text": "Hi, @Sebastian_Stehle,I see that you also filed CSHARP-4222. To keep the analysis and discussion all in one place, we will be responding there. Please follow that conversation for updates.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Do you have indexes, other that, _id:1 on the raw collection?If you do, then it is extra work, that has to be done compared to the serialized version.You writeis almost 50% slower in all casesbut, would that be normal if you compare writing totwo snapshot collectionsvswrite an event to the event stream collectionWould writing to 2 collections be expected to take twice as much time?I am really not sure about what you compare with what.In the output with Serialized and Raw columns, I do not see a 50% difference.", "username": "steevej" }, { "code": "", "text": "Yes, you are right. I have mixed up too many things here in my post.As a summary: The profiler reports 10ms in average and the same takes 60ms on the client side.With my test application the difference is similar: 3ms reported by profiler vs 30ms on the client side.", "username": "Sebastian_Stehle" } ]
Poor Driver Performance
2022-06-20T07:06:43.857Z
Poor Driver Performance
3,568
null
[ "aggregation", "connector-for-bi" ]
[ { "code": "", "text": "Hi everyone!I’m working with some collections hosted in an atlas cluster and my mongo bi connector in tableau prep.I’m having a lot of problems when I doing aggregations in tableau due to the memory limitation set in the $group parameter.My main idea is to create views of these collections to load the data into tableau, but I can’t find a way to set the {AllowDiskUse: True} option in my pipeline. This would not be a problem when running from the mongo shell but I am trying to load these views directly from Tableau so I can run the flows automatically.I have tried to create a new collection with the already aggregated data with success, but this is not the best solution as this way I am duplicating data in a new collection and storing it twice.My questions are: Is there any way when creating these views to set the {AllowDiskUse: True} parameter? Is it possible to set by default that there is no memory limit in the aggregations? How can I query these views from Tableau prep with my mongo connector without having this problem of memory in the aggregations?Thanks in advance, I hope you can help me with this problem.", "username": "Fernando_Lumbreras" }, { "code": "find", "text": "Hi @Fernando_Lumbreras and welcome to the community!!In order to deal with the above problem, there could be two possible methods which might help. But you would need to trade off between the memory used and response time for the aggregation query being used.One would be a method where you could use the find method along side {AllowDiscUse: true}. Please refer to the following documentation in cursor.allowDiskUse().The other method could be to make use of Materialised View which is available for MongoDB version 4.2.x which creates on-demand materialised views and updates the contents with each pipeline run.I have tried to create a new collection with the already aggregated data with success,This could be one of the possible ways to solve the issue when you could compromise space compared to time.Let us know if you have further questions.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @Aasawari ! First of all, thank you very much for you reply.The first method works correctly to query the views from the mongo shell but my problem comes when querying these views from the Tableau Prep Builder application where I don’t have the mongo shell. The only thing I can do is call tables and views (without being able to add find and {AllowDiscUse: true}) or write custom mysql queries.As for the second method. In this case, the problem is that materialized views increase the size of the database. This would make me have duplicate data and increase the cost of the service.Is there no way to configure the mongo BI connector to avoid having these memory problems?Thanks", "username": "Fernando_Lumbreras" }, { "code": "$group", "text": "Hi @Fernando_LumbrerasThe allowDiskUse is used by default in the BI Connector. The following documentation on how-are-queries-processed would hopefully be useful for you.I’m having a lot of problems when I doing aggregations in tableau due to the memory limitation set in the $group parameter.Could you elaborate on how you determined that the issue was caused by the $group stage? Is there any error message that you see?Please help us with the above information to help you further.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @AasawariYes, I have read something about the BI Connector having this parameter active by default. 
But in practice I don’t see this working.I am working with a collection that contains about 55 million data (about machine errors).Once the collection is opened in tableau prep, what I am trying to do is group all this data by error type. Everything seems to work fine, but when it reaches the end of execution I get the following error on the Tableau server.Thanks", "username": "Fernando_Lumbreras" }, { "code": "GroupByStage - an allocation of 1318 bytes pushes overall allocation over the limit of 2147483647$groupallowDiskUseallowDiskUse", "text": "Hello, @Fernando Lumbreras, and thank you for sharing the information above.The error notice GroupByStage - an allocation of 1318 bytes pushes overall allocation over the limit of 2147483647 is an error message that was raised by the BI connector due the $group stage, user internally, exceeding the 2GB max allocation that is enforced by Atlas. Notably, allowDiskUse allows a stage to grow beyond 100MB, and this message was not caused by the lack of allowDiskUse option.Unfortunately, this limit is currently not customisable. Furthermore, even if the limit is customisable, I tend to think that with more data in your database, you would hit the elevated limitation again sooner rather than later. Thus, the only workaround at this moment is to limit the amount of data that the Tableau query pulls.Please let us know if you have any further questions.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Connector for BI in Tableau
2022-05-27T08:54:57.209Z
MongoDB Connector for BI in Tableau
3,147
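The "pre-aggregated collection" workaround mentioned in the thread above is usually refreshed with a $merge stage rather than by duplicating data by hand; a sketch with invented collection and field names:

```javascript
// Group once on the server and upsert the results into a small collection
// that Tableau can read without hitting the BI Connector's $group memory cap.
db.machine_errors.aggregate([
  { $group: { _id: "$errorType", count: { $sum: 1 } } },
  {
    $merge: {
      into: "machine_errors_by_type",   // the on-demand "materialized view"
      whenMatched: "replace",
      whenNotMatched: "insert"
    }
  }
]);
```

Re-running the pipeline (for example from a scheduled trigger) refreshes the target collection in place, so only the summary is duplicated, not the tens of millions of raw documents.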
null
[ "next-js", "typescript" ]
[ { "code": " 19 | // is preserved across module reloads caused by HMR (Hot Module Replacement).\n 20 | if (!global._mongoClientPromise) {\n> 21 | client = new MongoClient(uri, options);\n | ^\n 22 | global._mongoClientPromise = client.connect();\n 23 | }\n 24 | clientPromise = global._mongoClientPromise;\n", "text": "When the Nextt.js page loaded i encountered this error and am quite confused on what is happening. Could this error be thrown because of errors in my .env.local file? Could anyone help?MongoParseError: Invalid scheme, expected connection string to start with “mongodb://” or “mongodb+srv://”lib/mongodb.ts (21:13) @ eval", "username": "Manny_N_A" }, { "code": "mongodb://mongodb+srv://.env.local", "text": "Welcome to the MongoDB Community @Manny_N_A !Per the error message, a valid MongoDB Connection String is expected to start with mongodb:// or mongodb+srv://.You haven’t included a snippet showing how you are creating the connection, but if you are defining a MongoDB URI in your .env.local file it appears to be missing the expected prefix.You may want to compare against the full example in How to Integrate MongoDB into your Next.js App.A MongoDB Atlas connection string would look similar to the following as an environment variable:MONGODB_URI=mongodb+srv://:@cluster0.mongodb.net/dbname?retryWrites=true&w=majorityRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Atlas Error
2022-06-22T02:30:56.486Z
MongoDB Atlas Error
3,204
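A small guard along these lines (variable names assumed, not from the thread) surfaces the cause of that MongoParseError before the client is constructed:

```javascript
// lib/mongodb guard: fail fast with a readable message instead of the
// driver's MongoParseError when the env var is missing or malformed.
const uri = process.env.MONGODB_URI;

if (!uri || !(uri.startsWith("mongodb://") || uri.startsWith("mongodb+srv://"))) {
  throw new Error(
    "MONGODB_URI must be set in .env.local and start with mongodb:// or mongodb+srv://"
  );
}
```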
https://www.mongodb.com/…b198a2ccdacc.png
[ "aggregation", "text-search" ]
[ { "code": "", "text": "Hello Community,\nI have encountered a possible problem using synonyms.\nWhen I attempt to search for something and the search query contains many words, I get no output at all.When the query consists of a single word, I get the output that you’d expect from a normal behaviour.Removing the synonyms field also gives normal behaviour.Changing all analayzers to lucene.standard doesn’t help at all.It is written in the documentation that a synonym must consist of a single word, but nothing is said about the query.Am I doing something wrong ?", "username": "wael_kassem" }, { "code": "", "text": "Hi @wael_kassemTo get a better idea of what you are trying to achieve, could you provide the following:Regards,\nJason", "username": "Jason_Tran" }, { "code": "{\n index: 'default',\n text: {\n query: 'car ',\n path: ['text']\n\n },\n highlight:{path: ['text']}\n}\n{\n index: 'default',\n text: {\n query: 'car to the repair',\n path: ['text']\n },\n highlight:{path: ['text']}\n}\n", "text": "I am using the following equivalent synonyms :\n[“car”,“vehicle”,“automobile”]And here’s my document:\n{\"_id\":{\"$oid\":“62b1c77d5b26cca04adad1f1”},“text”:“I took my gasoline red vehicle to the repair shop”}The following pipeline doesn’t get any matches, which is expected because i didn’t include a synonyms parameter. Once I include the synonyms parameter , I get my expected output.The following pipeline gets my matches, which isn’t expected because i didn’t include a synonyms parameter. Once I include the synonyms parameter , I don’t get matches anymore.Please be aware of the multi word query .\nThank you,", "username": "wael_kassem" }, { "code": "", "text": "We are using MongoAtlas version 5.0.9", "username": "wael_kassem" }, { "code": "/// Text search index called \"synindex\"\n{\n index: 'synindex',\n text: {\n query: 'car to the repair',\n path: ['text']\n },\n highlight:{path: ['text']}\n}\nlucene.standardtextquerysourcecarquerysynonyms/// pipeline:\n[{$search: {\n index: 'synindex',\n text: {\n query: 'car to the repair',\n path: ['text'],\n synonyms:\"syn\"\n },\n highlight:{path: ['text']}\n}}]\n\n/// output document:\n {\n _id: ObjectId(\"62b23ed41585d2a73cdd92ec\"),\n text: 'I took my gasoline red vehicle to the repair shop'\n }\nquery\"car\"source {\n _id: ObjectId(\"62b25bf91585d2a73cdd92ed\"),\n mappingType: 'equivalent',\n synonyms: [ 'car', 'vehicle', 'automobile' ]\n }\n", "text": "Hi @wael_kassem - Thanks for providing those details.The following pipeline gets my matches, which isn’t expected because i didn’t include a synonyms parameter.On my test environment using the lucene.standard analyser, I do also get back the document you had mentioned using this search pipeline definition. I believe this behaviour is due to the following noted on the text operator, specific to the query field:The string or strings to search for. If there are multiple terms in a string, Atlas Search also looks for a match for each term in the string separately.Once I include the synonyms parameter , I don’t get matches anymore.This does appear a bit odd due to the synonym working on the singular query example you provided above. 
Would you be able to provide:For the troubleshooting purposes, I have the following search pipeline using the synonyms parameter which does return the document you provided:Please note that if I change the query value to \"car\", the same document is returned as well.The source collection document for the above example:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help using Search Synonyms
2022-05-25T09:42:13.656Z
Help using Search Synonyms
3,809
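For completeness, a synonym-enabled search index definition generally takes the shape below; the mapping name syn matches the pipeline in the thread, while the source collection name is illustrative.

```json
{
  "mappings": { "dynamic": true },
  "synonyms": [
    {
      "name": "syn",
      "analyzer": "lucene.standard",
      "source": { "collection": "synonyms_coll" }
    }
  ]
}
```

The name value is what the text operator's synonyms option refers to, and the source collection holds documents shaped like the equivalent-mapping example shown earlier in the thread.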
null
[ "aggregation" ]
[ { "code": "year", "text": "Let’s say we have a bunch of server logs spanning a couple of years, now it’s grouped by year and I want to sample 100 documents from each year, is it possible to do that purely with aggregation pipeline?", "username": "Fred_Wilson" }, { "code": "mongoNUMBER_OF_SAMPLES_REQUIRED$yearNUMBER_OF_SAMPLES_COLLECTED$samplevar NUMBER_OF_SAMPLES_REQUIRED = 100;\nvar NUMBER_OF_SAMPLES_COLLECTED = 150 ;\n\ndb.collection.aggregate([\n {\n $group: { \n _id: { year: \"$year\" }, \n docs: { $push: \"$$ROOT\" }, \n count: { $sum: 1 } \n }\n },\n {\n $project: { \n random_docs: {\n $let: {\n vars: {\n random_positions: {\n $slice: [ {\n $setDifference: [ {\n $map: { \n input: { $range: [ 0, NUMBER_OF_SAMPLES_COLLECTED ] }, \n in: { \n $floor: { $multiply: [ { $rand: {}}, { $floor: \"$count\" } ] }\n }\n }\n }, [] ]\n }, NUMBER_OF_SAMPLES_REQUIRED ]\n }\n },\n in: {\n $map: {\n input: \"$$random_positions\",\n in: { \n $arrayElemAt: [ \"$docs\", \"$$this\" ], \n }\n } \n },\n }\n }\n }\n },\n])\n", "text": "Hello @Fred_Wilson, welcome to the MongoDB Community forum!… is it possible to do that purely with aggregation pipeline?Yes, its possible. Here is the aggregation query which gets the desired results. The query runs from the mongo shell.Note that, this requires MongoDB v4.4.2 or greater. There are two variables defined, - the NUMBER_OF_SAMPLES_REQUIRED which is a number of random samples you are looking for, for each grouping ($year). The random numbers generated are not unique, so we generate little more than the needed 100, and remove the duplicate random numbers (and, the variable NUMBER_OF_SAMPLES_COLLECTED allows more samples).There is a remote chance that you may see one or two duplicate less documents in the samples.There is an Aggregation $sample stage, but I have not tried it in this case . Let me know how this works for you!", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks, this worked like a charm! ", "username": "Fred_Wilson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Sample X number of documents in each group with or after a $group stage?
2022-06-21T08:31:30.419Z
Sample X number of documents in each group with or after a $group stage?
1,629
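If one query per group is acceptable, $sample (mentioned at the end of the answer above) gives a shorter alternative to the $rand pipeline; a mongosh sketch, where the collection name is a placeholder and the field name follows the thread:

```javascript
// Draw up to 100 random documents per year without pushing whole groups
// through a single in-memory $group stage.
const samples = db.collection.distinct("year").flatMap((y) =>
  db.collection
    .aggregate([
      { $match: { year: y } },
      { $sample: { size: 100 } }
    ])
    .toArray()
);
```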
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "[\n {\n \"gstin\": \"07AAACH0812J1ZY\",\n \"company_name\": \"HERO MOTOCORP LIMITED\",\n \"status\": \"Approved\",\n \"createdBy\": {\n \"name\": \"BagalVarsha\",\n \"email\": \"[email protected]\",\n \"mobile_number\": 9421492671\n }\n },\n[\n {\n \"gstin\": \"07AAACH0812J1ZY\",\n \"company_name\": \"HERO MOTOCORP LIMITED\",\n \"status\": \"Approved\",\n \"name\": \"BagalVarsha\",\n \"email\": \"[email protected]\",\n \"mobile_number\": 9421492671\n \n },\n", "text": "after populating data i don’t want data like this i want data in main schema is there in a way to do it\nexample:", "username": "Kashif_Iqbal" }, { "code": "const ordersArray = Order.find().populate(\"user\");\nconst transformedOrdersArray = ordersArray.map((order) => ({\n gstin: order.gstin,\n company_name: order.company_name,\n status: order.status,\n // Transferring the following values from the \n // embedded user object to the parent object\n name: order.user.name, \n email: order.user.email,\n mobile_number: order.user.mobile_number,\n}));\nconst orders = Order.aggregate(\n [\n { $match: { _id: ObjectId(orderId) } },\n { $lookup: { from: \"users\", localField: \"createdBy\", foreignField: \"_id\", as: \"createdBy\" } },\n { $unwind: \"$createdBy\" },\n {\n $project: {\n gstin: \"$gstin\",\n company_name: \"$company_name\",\n status: \"$status\",\n // Transferring the following values from the \n // embedded user(createdBy) object to the parent object\n name: \"$order.createdBy.name\", \n email: \"$order.createdBy.email\",\n mobile_number: \"$order.createdBy.mobile_number\",\n }}\n ]\n)\n", "text": "Hi @Kashif_Iqbal,\nMongoose’s .populate() method does not provide the ability to flatten the document. There’s a feature request in the mongoose’s official GitHub repository to do that.Off the top of my head, there are 2 different ways using which you can achieve this transformation:If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Regarding the mongoose populate
2022-06-18T11:05:01.991Z
Regarding the mongoose populate
3,729
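One detail worth noting about the aggregation option above: after $unwind: "$createdBy", the joined fields live directly under $createdBy, so the $order. prefix in the projection looks like a typo. A corrected projection stage:

```javascript
{
  $project: {
    gstin: 1,
    company_name: 1,
    status: 1,
    // lift the joined user's fields to the top level of the output document
    name: "$createdBy.name",
    email: "$createdBy.email",
    mobile_number: "$createdBy.mobile_number"
  }
}
```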
null
[ "mongodb-shell" ]
[ { "code": "mongodsudo mongodsudo systemctl start mongodmongod{\"t\":{\"$date\":\"2022-06-20T14:22:53.598-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.598-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.599-04:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.599-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.600-04:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.600-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"ns\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.600-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"ns\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.600-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.601-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":21086,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"v0id\"}}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.601-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.9\",\"gitVersion\":\"6f7dae919422dcd7f4892c10ff20cdc721ad00e6\",\"openSSLVersion\":\"OpenSSL 1.1.1o 3 May 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.601-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"EndeavourOS\",\"version\":\"\\\"rolling\\\"\"}}}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.601-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.601-04:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Operation not permitted\"}}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.601-04:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23091, 
\"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1019}}\n{\"t\":{\"$date\":\"2022-06-20T14:22:53.601-04:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\nsudo mongod{\"t\":{\"$date\":\"2022-06-20T14:23:46.933-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.933-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.933-04:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.933-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.934-04:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.934-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"ns\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.934-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"ns\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.934-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.934-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":21108,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"v0id\"}}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.934-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.9\",\"gitVersion\":\"6f7dae919422dcd7f4892c10ff20cdc721ad00e6\",\"openSSLVersion\":\"OpenSSL 1.1.1o 3 May 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.934-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"EndeavourOS\",\"version\":\"\\\"rolling\\\"\"}}}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.934-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"E\", \"c\":\"CONTROL\", 
\"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.935-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, 
\"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-06-20T14:23:46.936-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\nsudo systemctl start mongodFailed to start mongod.service: Unit mongod.service not found.\nsudo systemctl enable --now mongodb.servicesudo systemctl start --now mongodb.servicemongoMongoDB shell version v5.0.9\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\nsudo systemctl status mongodb.service× mongodb.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongodb.service; enabled; vendor preset: disabled)\n Active: failed (Result: exit-code) since Mon 2022-06-20 14:26:39 EDT; 5min ago\n Duration: 38ms\n Docs: https://docs.mongodb.org/manual\n Process: 21272 ExecStart=/usr/bin/mongod --config /etc/mongodb.conf (code=exited, status=14)\n Main PID: 21272 (code=exited, status=14)\n CPU: 21ms\n\nJun 20 14:26:39 v0id systemd[1]: Started MongoDB Database Server.\nJun 20 14:26:39 v0id systemd[1]: mongodb.service: Main process exited, code=exited, status=14/n/a\nJun 20 14:26:39 v0id systemd[1]: mongodb.service: Failed with result 'exit-code'.\n", "text": "After installing mongodb with the help of the arch wiki (link in specs). I am unable to start the “mongod” service either manually with the mongod or sudo mongod commands or with systemctl, meaning with the sudo systemctl start mongod command.Here are the outputs of each commands :What is interesting is that the arch wiki says that you can enable and start the service with the following commands : sudo systemctl enable --now mongodb.service and sudo systemctl start --now mongodb.service (notice how the name of the service is different than the last output, mongodb.service instead of mongod.service). These commands yeild no error or output when executed BUT if I run the mongo command after having enabled and started (supposedly) the mongodb.service, I am met with this output :indicating that the service is most likely not running. To confirm my doubt, I would then run the following command to know what’s going on with this service sudo systemctl status mongodb.service which yeilds the following output :What am I doing wrong? 
I just want to start the service.Specs :\nOS: EndeavourOS Linux x86_64\nKernel: 5.18.5-arch1-1\nintalled mongodb through : MongoDB - ArchWiki\npackages installed : mongodb-bin, mongodb-tools-bin, mongosh-bin (with yay).", "username": "Nycola_Plaisance" }, { "code": "", "text": "This StackOverflow topic sounds like your problem …", "username": "Jack_Woehr" }, { "code": "", "text": "I suspect issue with tmp file\nCheck ownership/permissions of that sock file\nCorrect method is to start with sysctl\nWhen you try to start as mongod or sudo mongod it uses different parameters\nIn first case it complained about tmp file\nIn second case it says no /data/db directory\nSo as root it is trying to bring up mongod but since /data/db dir is not exists it failed\nAs sysctl it tries to bring up mongodb as mongod user which is same as first case but it is failing as tmp file is most likely owned by root\nSo remove this file and start the service again", "username": "Ramachandra_Tummala" }, { "code": "sudo mongodsudo systemctl start mongodb", "text": "Thanks! That was it, I had to created the path to /data/db and then I could either chown the folder or use sudo mongod OR sudo systemctl start mongodb. Again, thanks a lot!", "username": "Nycola_Plaisance" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongod service won't start
2022-06-20T18:33:54.137Z
Mongod service won't start
4,864
null
[ "configuration" ]
[ { "code": "", "text": "How much RAM does a connection use?\nBy default there are 100 connections in the pool ready to be used. From those 100 connections how much does an unused connection is using?", "username": "Mugurel_Frumuselu" }, { "code": "", "text": "Hi @Mugurel_Frumuselu,Please take a look at the following post : Memory allocated per connection - #2 by StennieBy default there are 100 connections in the pool ready to be used. From those 100 connections how much does an unused connection is using?Regarding your example with the 100 connections, the connections are created as required up to a maximum of 100 (based off your connection pool size example).You may also find the Connection monitoring and pooling details useful. In addition to this, please check out the How does connection pooling work in PyMongo? documentation as well as this may help.However, with pymongo (as stated within the docs above):The maximum number of milliseconds that a connection can remain idle in the pool before being removed and replaced can be set with maxIdleTimeMS, which defaults to None (no limit).Hope this helps!Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connection RAM usage
2022-05-17T09:15:47.714Z
Connection RAM usage
2,909
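The pool options referenced above can be tuned on the client; the thread is about PyMongo, but the option names are shared across official drivers. A Node.js-flavoured sketch with illustrative values:

```javascript
const { MongoClient } = require("mongodb");

// Connections are opened lazily up to maxPoolSize; idle connections are
// closed after maxIdleTimeMS instead of holding memory indefinitely.
const client = new MongoClient("mongodb://localhost:27017", {
  maxPoolSize: 100,
  minPoolSize: 0,
  maxIdleTimeMS: 60000
});
```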
null
[ "data-modeling", "python" ]
[ { "code": "from dataclasses import dataclass\nfrom model.cart import Cart\n\n\n@dataclass\nclass User(object):\n \n def __init__(self, first_name, last_name, phone, email, uid, cart):\n self.first_name = first_name\n self.last_name = last_name\n self.phone = phone\n self.email = email\n self.uid = uid\n self.cart = cart\n \n\n def __getattr__(self, key):\n return getattr(self, key)\n\n def __repr__(self):\n return \"first_name: {}, last_name: {}, uid: {}, cart: {}, card: {}\".format(self.first_name, self.last_name, self.uid, self.cart, )\n\n\n first_name: str\n last_name: str\n phone: str\n email: str\n uid: str\n cart: Cart\nclass User(Document):\n email = StringField(required=True)\n first_name = StringField(max_length=50)\n last_name = StringField(max_length=50)\n", "text": "I am new to Python and MongoDB and I am struggling with how to set up my classes within my project to use MongoDB. In my learning journey I came across the mongoengine and I have the following question.I have a project that has 1 database with currently 7 collections. The collections are:\ncart\ncountry\ninvoice\nproduct\nsession\nstore\nuserI am currently taking a long hard look at my collections and strongly considering eliminating the country and invoice collections. Country can be combined into both the store and user collection as individual fields. (Tax can still be figured based on where the sale is taking place… from the store country) I am unsure about the invoice collection as this can be a large collection over time. (having to keep sales records for long periods of time)But upon looking up multiple questions I came across the mongoengine… From my research, it appears that mongoengine is a third party and if I am using pymongo it seems to not be required. Am I right?My current classes are written like this:I find the documentation on mongoengine to define classes quite differently. They define classes like this:I cannot find any documentation that tells me what format I need to use… Can you point me in the right direction for clarification? I am dying to read about this and whether I need to use the schema outlined in the mongoengine or if I can use the schema like all the tutorials explain in Python. I.E. can I use the standard class definition or do I need to use the mongoengine verison.From what I can gather, the mongoengine version enforces validation. If I don’t use this format, I’m assuming I need to program in the validation. (the required portion and also max length, etc)Am I on the right track?", "username": "David_Thompson" }, { "code": "mongoengineinsert_many(X)", "text": "Hi @David_Thompson and welcome in the MongoDB Community !mongoengine is a third party Object-Mapper library. Just like Mongoose is an object mapper for Node.js that isn’t required nor mandatory, mongoengine isn’t required either. Pymongo can be used on its own, just like the MongoDB Node.js driver can be used on its own.MongoDB is a schemafree / schemaless database. I’m probably going to start a debate just by stating this already, but what I mean by this is that MongoDB doesn’t need to impose a schema on the docs by default. mongoengine and Mongoose help you impose a schema on your MongoDB collections on the back-end side.This doesn’t prevent you from modifying the docs manually by any other mean and breaking these “contracts” in your back-end.Another solution if you need or want to create some rules in your MongoDB documents is to use the JSON Schema validators. 
By adding these rules directly in the MongoDB collection, you can ensure that you cannot insert a negative price for your product (for example) whatever the mean this time (a wrong update in mongosh, etc). The validator will always keep you in check.So to sum up, mongoengine is just here to help you map your MongoDB docs to python objects. If you feel that you don’t need it, then don’t use it. It’s an extra layer / proxy to the pymongo driver which might just add a layer of complexity in your code.If you require / need more industrialisation, it might help.Example of a COVID-19 data import script I have designed. It’s only using Pymongo. I scan CSV files, build the docs in memory and send them to MongoDB with insert_many(X).If you need more help, I would recommend to check the MongoDB for Python Developer free training that we have on the MongoDB University:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Or our Python ressources / tutos on the MongoDB Developer Hub:Use Python with MongoDB! A high-level, interpreted programming language and it is used for general purpose. Find out how to use it for data-intensive tasks here.Especially this one:Learn how to perform CRUD operations using Python for MongoDB databases.I hope this helps.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "@MaBeuLux88 ,\nThanks!!! I will for sure check those out… While I have already taken the M220P course, there is limited exposure on how to set up the program… It mainly focuses on queries. This is the sticky point that I can see for the MongoDB University courses… There isn’t one that really takes one through the FULL development process. I wish there was, but sadly that is my achilles heal… I can get info on queries, and I can find information on schema… But practical application into a development cycle still remains an elusive thing.", "username": "David_Thompson" } ]
Setting up Python to use Mongo
2022-06-21T03:11:17.913Z
Setting up Python to use Mongo
2,957
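A minimal mongosh sketch of the JSON Schema validator suggested above, borrowing a few fields from the User class in the question; the exact rules are illustrative:

```javascript
// Enforced by the server for every write, regardless of whether the write
// comes from PyMongo, MongoEngine, mongosh, or anything else.
db.createCollection("user", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["email", "uid"],
      properties: {
        email: { bsonType: "string" },
        first_name: { bsonType: "string", maxLength: 50 },
        last_name: { bsonType: "string", maxLength: 50 },
        uid: { bsonType: "string" }
      }
    }
  }
});
```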
https://www.mongodb.com/…_2_1024x243.jpeg
[]
[ { "code": "", "text": "Hi, I get a problem like that :\n\nerror mongod1423×338 176 KB\nPlease Help,", "username": "Regi_Hadi_Permadi" }, { "code": "", "text": "in mongo log,\n\nerror mongod log1620×563 238 KB\n", "username": "Regi_Hadi_Permadi" }, { "code": "", "text": "Your log is giving you the solution\nCheck if /data/db exists on your system\nCd /data/db or ls /data", "username": "Ramachandra_Tummala" }, { "code": "", "text": "and then the service can running again", "username": "Regi_Hadi_Permadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Forked process: 1314, ERROR: child process failed, exited with error number 1
2022-06-21T09:08:53.334Z
Forked process: 1314, ERROR: child process failed, exited with error number 1
10,251
null
[ "crud" ]
[ { "code": "exports = async function () {\n const collection = context.services\n\t.get(cluster)\n\t.db(\"scheduler\")\n\t.collection(\"schedules\");\n\n const doc = await collection.updateMany(\n\t{\n\t\tstatus: \"processing\",\n\t\t$expr: {\n\t\t\t$gt: [{ $subtract: [\"$$NOW\", \"$scheduleDate\"] }, 30 60 1000]\n\t\t}\n\t},\n\t{\n\t\t$set: { status: \"waiting\" }\n\t}\n );\n};\n", "text": "Hi,I am using a function in scheduled trigger which in turn uses updateMany() method to update the documents based on certain criteria.I want to log the documents before update. Can somebody help me with that?Thanks", "username": "manasa_pradeep" }, { "code": "console.log(X)EJSON.stringify(doc)JSON.stringify(doc)toArray()items.forEach(console.log)", "text": "Hi @manasa_pradeep and welcome in the MongoDB Community !You can use console.log(X) and the helpers EJSON.stringify(doc) or JSON.stringify(doc).Don’t forget to resolve the promise with the toArray().Note that you actually have an alternative in the above link with items.forEach(console.log).Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "exports = async function () {\n const collection = context.services\n\t.get(\"Cluster0\")\n\t.db(\"test\")\n\t.collection(\"tests3\");\n\t\n const query = {\n status: \"processing\",\n $expr: {\n $gt: [{\n $subtract: [\"$$NOW\", \"$scheduleDate\"]\n }, 30 * 60 * 1000]\n }\n };\n const projection = {};\n\n const readRecords = await collection.find(query, projection)\n .toArray()\n .then(items => {\n console.log(`Successfully found ${items.length} documents.`)\n console.log(JSON.stringify(items));\n })\n .catch(err => console.error(`Failed to find documents: ${err}`))\n\n\n\n}\n", "text": "Thanks, that worked:)\nNow that I found the records, can I modify each item to update the status field to wait?", "username": "manasa_pradeep" }, { "code": "{$set: {status: \"wait\"}}", "text": "It should be an updateMany(X), no ?\nSame query you already have and update = {$set: {status: \"wait\"}}See the doc here:", "username": "MaBeuLux88" }, { "code": "", "text": "Can I use findandModify() method?", "username": "manasa_pradeep" }, { "code": "", "text": "findAndModify only finds and modifes a single doc.Also apparently it’s not supported in App Services Functions.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks. That clarifies.", "username": "manasa_pradeep" }, { "code": "", "text": "Can you please help me with one more question?\nI want to create a trigger that should fire when an insert happens to the nested column. Basically, it is an update to an existing document. How can I achieve that? I need to post the entire document to the HTTP end point. Thanks", "username": "manasa_pradeep" }, { "code": "", "text": "Using a MongoDB App Services Trigger, you can filter on update operation on a given collection. Then if you want to listen to just an update on a particular fields, you can check for the existance of that field ($exists) in the updateDescription of that update event.Check the doc for triggers here. And especially you have exactly this exemple in the doc here.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed after 180 days. New replies are no longer allowed.", "username": "system" } ]
Logging the data
2022-06-16T05:29:11.212Z
Logging the data
2,851
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "", "text": "This is a patch release that addresses some issues reported since 2.16.0 was released.The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.16.1%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:There are no known backwards breaking changes in this release.", "username": "James_Kovacs" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
.NET Driver 2.16.1 Released
2022-06-21T23:39:25.056Z
.NET Driver 2.16.1 Released
1,474
null
[ "golang" ]
[ { "code": "IDprimitive.ObjectIDbsonprimitive.ObjectIDstringID_id", "text": "In all the examples I find about mapping golang structs to mongodb, the ID field is defined as type primitive.ObjectID and the bson filters tag is also used.Question:Must the ID field be of type primitive.ObjectID? I prefer to use string.\nI read somewhere that field tags are not mandatory, but in that case, how will the driver know that the ID field of my struct is the same as the _id field of the bank?", "username": "Matheus_Saraiva" }, { "code": "_idstring_id_id_idtype myDocument1 struct {\n\tID string\n\tName string\n}\n\nfunc main() {\n\tb, _ := bson.Marshal(myDocument1{ID: \"abcd\", Name: \"Bob\"})\n\tfmt.Println(bson.Raw(b))\n}\n{\"id\": \"abcd\",\"name\": \"Bob\"}\ntype myDocument2 struct {\n\tID string `bson:\"_id\"`\n\tName string\n}\n\nfunc main() {\n\tb, _ := bson.Marshal(myDocument2{ID: \"abcd\", Name: \"Bob\"})\n\tfmt.Println(bson.Raw(b))\n}\n{\"_id\": \"abcd\",\"name\": \"Bob\"}\n", "text": "Hey @Matheus_Saraiva thanks for the question! According to the Document page in the MongoDB manual:The _id field may contain values of any BSON data type, other than an array, regex, or undefined.So yes, you should be able to use a string as a document _id field. As far as field tags, they aren’t mandatory if you’re OK with the default BSON document field name based on the Go struct field name. However, there’s no Go struct field name that automatically maps to BSON document field name _id, so you would need to specify a struct tag if you want to explicitly specify the document _id.For example, consider the following code without struct tags:That code prints:Then consider the following code using a struct tag:That code prints:Check out a working example of both here.", "username": "Matt_Dale" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Struct ID as string and without tags
2022-06-21T13:11:53.196Z
Struct ID as string and without tags
3,696
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.2.21 is out and is ready for production deployment. This release contains only fixes since 4.2.20, and is a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.2.21 is released
2022-06-21T20:15:43.942Z
MongoDB 4.2.21 is released
2,269
null
[ "cxx" ]
[ { "code": "", "text": "i m currently trying to install the mongocxx driver on my windows os. I followed the official guide except I git clone instead of getting from the tarball. there were no fatal errors during the build. However, I was unable to link to the header files.\nFor example, when I run: gcc -lmongoc-1.0 -lbsoncxx -lmongocxx test.cpp\nit gave me a fatal error: mongocxx/client.hpp:No such file or directory\n1 | #include <mongocxx/client.hpp>.\nIt would seem like they could not find the client.hpp file in the mongocxx file.Just to be clear, the compiler was not be able to find any header files.\nThank you for the help!", "username": "Xiaoyan_Ge" }, { "code": "/usr/local", "text": "@Xiaoyan_Ge Have you confirmed that your compiler default search patch includes the installed location of your installed MongoDB C++ driver build? You do not mention if this was done in an environment like Cygwin that would support Unix-like default directories for the compiler. You also do not mention if you modified the default target directory for the C++ driver installation. By default the build will install into a sub-directory of the directory from which you execute the build, rather than some place that could require root/administrative permissions (like /usr/local). If you could provide the complete sequence of commands you used to build the C++ driver with all the options you used, along with the complete terminal output, then perhaps we can help with better precision.", "username": "Roberto_Sanchez" } ]
Can not find header in mongocxx on OS:windows
2022-06-21T09:38:34.221Z
Can not find header in mongocxx on OS:windows
2,016
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.4.15 is out and is ready for production deployment. This release contains only fixes since 4.4.14, and is a recommended upgrade for all 4.4 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.4.15 is released
2022-06-21T20:12:10.808Z
MongoDB 4.4.15 is released
2,801
null
[]
[ { "code": "", "text": "Good day, just playing with Charts and while there is a develpment process to filter charts in embeded charts I wanted to figure out if there are URL Query string parameter abilites in the native Mongo charts?Essentially I have charts around similar data. In this case it’s “sales” data and there are locations across Canada/US. Nothing that comples, I’m not that smart. However, to this point, I’m wondering if I can dynamically filter the chart with a URL parameter.For example: A set of users us the same URL, but has a different Query parameter for someID.for example. One set of users would get this URL:\nhttps://charts.mongodb.com/charts-project/public/dashboards/xsxxxxxxxxx?some_id=629a77e337baf146addaedd5Another set can get another filterd version with a different ID:\nhttps://charts.mongodb.com/charts-project/public/dashboards/xsxxxxxxxxx?some_id=62aa213123a3074b742ffe86Basically the collection is partitioned/filtered on “some_id” but use the same dashboard.Is that possible out of the box or do I have to start hosting a website and embed the iFrames? Which I realize I’ll eventually have to do, but at this moment I’d love to know if it’s possible out of the box.Thanks in advance!!\nCPT", "username": "Colin_Poon_Tip" }, { "code": "", "text": "Hi @Colin_Poon_Tip -There isn’t any ability to filter charts or dashboards in the main product via query strings. You can however use Dashboard Filtering to provide UI controls that allow users to see different information on the charts. Using query strings to control dashboard filtering is an interesting feature request; you might want to raise it at feedback.mongodb.com to see if others are also interested in the idea.Tom", "username": "tomhollander" }, { "code": "", "text": "Thanks for your wisdom Tom!! I’m finding some quirks as i go;)Much appreciated.", "username": "Colin_Poon_Tip" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo Charts and Query parameters
2022-06-16T16:44:16.830Z
Mongo Charts and Query parameters
2,949
null
[]
[ { "code": "", "text": "Hello from UK. Pleased to be here", "username": "Mayank_Kumar2" }, { "code": "", "text": "", "username": "MaBeuLux88" } ]
Hello world from UK
2022-06-21T17:06:06.289Z
Hello world from UK
2,434
null
[ "aggregation", "queries", "dot-net" ]
[ { "code": " var dateTimeFrom = new DateTime(2022, 6, 16, 0, 3, 53, DateTimeKind.Utc);\n\n var matchStage = new BsonDocument(\"$match\",\n new BsonDocument(\"utcCreated\",\n new BsonDocument(\"$gte\", dateTimeFrom)));\n\n var facetStage = new BsonDocument(\"$facet\",\n new BsonDocument\n {\n {\n \"metadata\",\n new BsonArray\n {\n new BsonDocument(\"$count\", \"total\")\n }\n },\n {\n \"data\",\n new BsonArray\n {\n new BsonDocument(\"$skip\", 100),\n new BsonDocument(\"$limit\", 10)\n }\n }\n });\n\n var pipeline = new[]\n {\n matchStage,\n facetStage\n };\n\n var pipelineDefinition = PipelineDefinition<RawMessageDto, RawMessageCountModel>.Create(pipeline);\n\n var watch = Stopwatch.StartNew();\n\n var elapsedList1 = new List<long>();\n\n for (var i = 0;\n i < 10;\n i++)\n {\n watch.Restart();\n\n var result = await _rawMessageCollection\n .Aggregate(pipelineDefinition)\n .FirstOrDefaultAsync();\n elapsedList1.Add(watch.ElapsedMilliseconds);\n }\n\n watch.Reset();\n\n var elapsedList2 = new List<long>();\n\n for (var i = 0;\n i < 10;\n i++)\n {\n watch.Restart();\n\n var query = _rawMessageCollection.Find(x => x.UtcCreated >= dateTimeFrom);\n\n var countTotal = await query.CountDocumentsAsync();\n\n var gg = await query.Skip(100).Limit(10).ToListAsync();\n elapsedList2.Add(watch.ElapsedMilliseconds);\n }\n\n watch.Stop();\n{find({ \"utcCreated\" : { \"$gt\" : ISODate(\"2022-06-16T00:03:53Z\") } })}\n public class RawMessageCountModel\n {\n public IEnumerable<Metadata> Metadata { get; set; }\n public IEnumerable<RawMessageDto> Data { get; set; }\n }\n\n public class Metadata\n {\n public int total { get; set; }\n }\n", "text": "Hi all,I’m implementing the well-known task for pagination with a count over a collection.\nThe discussion comes down to measuring performance of two implementations:The first implementation is by using $facet, and the second is by using a simple query and performing two calls (one for the count and another for the results) on it.I’ve done some simple tests against the same collection, and the results surprised me…\nThe test is done using .Net 5 and MongoDB.Driver (2.12.3):The results from the executions are:According to some of the docs. and topics on forums:\nIn the first implementation, the $facet queries on the same set of data, thus expecting a faster execution.\nIn the second implementation, we have a query, and do two “round-trips” on the same query.Questions:NOTE: Using the $facet way, there is some serialization and model creation time of the classes below (the data property doesn’t come in this calculation since the IEnumerable exists in both cases) , but I believe that time is trivial and can be neglected observing the results of 10 subsequent executions.", "username": "Mile_Stoilovski" }, { "code": "", "text": "similar topic:", "username": "Mile_Stoilovski" }, { "code": "", "text": "A few notes.", "username": "steevej" } ]
Paging with count - Performance $facet vs. simple query
2022-06-21T10:04:43.194Z
Paging with count - Performance $facet vs. simple query
7,767
null
[ "aggregation", "data-modeling" ]
[ { "code": "/* 1 */\n{\n \"_id\" : \"1\",\n \"d\" : 4.5,\n \"c\" : 1.1,\n \"b\" : \"Nothing Special\",\n \"a\" : false\n} \n/* 2 */\n{\n \"_id\" : \"2\",\n \"a\" : true\n} \n{\n \"_id\" : \"1\",\n \"d\" : 4.5,\n \"c\" : 1.1,\n \"b\" : \"Nothing Special\",\n \"a\" : false\n}\n{\n \"_id\" : \"2\",\n \"d\" : null,\n \"c\" : null,\n \"b\" : null,\n \"a\" : true\n}\n", "text": "Hey, I want to show all the document fields in the output but some of the fields are not present in some of the document then how can i display those fields with “NULL”. Here is the example:\nSample document:Required Output:Thanks in advance.", "username": "Nabeel_Raza" }, { "code": "db.getCollection(\"abcdef\").aggregate(\n [\n \n { \n \"$project\" : { \n \n \"_id\":1,\n \"a\": { $ifNull: [ \"$a\", null ] }, \n \"b\":{ $ifNull: [ \"$b\", null ] }, \n \"c\":{ $ifNull: [ \"$c\", null ] }, \n \"d\":{ $ifNull: [ \"$d\", null ] }\n \n }\t \n }\n\n ]\n);", "text": "The below query will help to get the desired output by using $ifNull.", "username": "Nabeel_Raza" }, { "code": "/* 1 */\n{\n \"_id\": \"1\",\n \"d\": [{ \"element1\": 1.1 }, { \"element2\": 1.2 }],\n \"c\": [{ \"element1\": 2.1 }],\n \"b\": [{ \"element1\": 3.1 }, { \"element2\": 3.1 }],\n \"a\": false\n}\n/* 2 */\n{\n \"_id\" : \"2\",\n \"a\" : [{ \"element1\": 1.1.2 }]\n} \n", "text": "what if the data contained some array. For example:Required Output:\n{\n“_id” : “1”,\n“d”: [{ “element1”: 1.1 }, { “element2”: 1.2 }],\n“c”: [{ “element1”: 2.1 }, { “element2”: null }],\n“b”: [{ “element1”: 3.1 }, { “element2”: 3.1 }],\n“a”: false\n}\n{\n“_id” : “2”,\n“d” : [{ “element1”: null }, { “element2”: null }],\n“c” : [{ “element1”: null }, { “element2”: null }],\n“b” : [{ “element1”: null }, { “element2”: null }],\n“a” : [{ “element1”: 1.1.2 }, { “element2”: null }]\n}How would the query look like?Thanks for the help.", "username": "Nirav_Lah" }, { "code": "", "text": "Simply do exactly the same king of $project with $ifNull but for each value you want null.It will get complex and ugly very fast as you will need to use $range to create you array indexes, $map to map each element to the existing element or to your default null using $ifNull.But, my personal opinion, is that this king of null-ishing cosmetic manipulations are better done on the application data access layer rather than the data server. It is easier to scale the state-less data access layer rather than the server. And most of the time there is nothing to do on the application side, you just access the data and you get null or undefined in most languages. You absolutely gain nothing by having the data server do more work in order to send more useless data over the wire. This is reminiscence of SQL where all columns are there even when there is no data.", "username": "steevej" } ]
How to get all fields of a projection if they don't exist in the collection?
2020-03-26T06:44:12.205Z
How to get all fields of a projection if they don’t exist in the collection?
9,322
null
[ "node-js" ]
[ { "code": "", "text": "I have an old version of MongoDb (4.0.3) installed on a Mac that I’d like to upgrade. Checking the NodeJS guide for carrying that out, one of the prerequisites is that I should check on the compatibility of the driver before carrying out the upgrade itself.What I’ve not been able to find though, is anything that tells me how to determine what the current driver version is. Is there a command for this I can use?", "username": "Christopher_Perry" }, { "code": "", "text": "In order to check the installed version of NodeJS:\n$ node -vTo verify the version of the installed libraries\n$ npm list <LIB_NAME>\ni.e.:\n$ npm list mongodb\n└── [email protected] this information helps you", "username": "Ernesto_Valle" }, { "code": "", "text": "Thanks Ernesto,I’ll give it a go a bit later once I’ve finished work.", "username": "Christopher_Perry" } ]
How to check installed NodeJS driver version?
2022-06-20T07:43:22.747Z
How to check installed NodeJS driver version?
6,155
null
[ "kotlin", "data-api" ]
[ { "code": "", "text": "Dears,as far as i know from the document of Mongodb(https://www.mongodb.com/docs/manual/tutorial/query-documents/) you can use query to filter data. However, what is the correct syntax to do this using Data API as i will get response in retrofit in kotlin how can i handle the filter as i need to use AND OR queries.Thank you.", "username": "laith_ayyat" }, { "code": "find()$or$and", "text": "Hi @laith_ayyat,Why not use directly the Realm Kotlin SDK if you are doing a mobile app or directly the Java Driver if that’s not the case? I think this would simplify the code a lot.The Data API takes query just like the find() command so there are no tricks really. $or and $and would work as usual as far as I know.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "If i need to make an E-Commerce application, as far as i know i would need a Node.js application that would be on a server? then i would hit on the server ip. Unless i am wrong, is it ok to make a full application using the Realm SDK directly ?", "username": "laith_ayyat" }, { "code": "", "text": "Well it really depends but it can be OK.Either you do a “standard” 3 tiers app with front-end + back-end + DB. So for example this could be React + Node.js (Next.js & MDB Node.js driver) + MongoDB.Or you can design an app using a Realm SDK + an app in the Atlas App Service (was called Realm) + MongoDB.The App in the Atlas App Service will act as a backend service (auth, rules, functions, triggers, …) but you can have a direct access to MDB from the SDK. It’s up to you then if you want to call mdb command directly from the front or encapsulate this in a back-end function (for security or just to factorise the code so it’s easier to update the front-end for example - it can be tricky if it’s a mobile app).Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Data API Filter [AND, OR]
2022-06-20T12:07:53.969Z
Data API Filter [AND, OR]
4,066
null
[]
[ { "code": "", "text": "Good day!I’ve a very quick question about custom user data. The SDK docs state that changed user data may be stale for up to 30 minutes, e.g. until the next access token refresh.Is the same true for server-side functions and function rules? Or will changes be reflected immediately in server-side functions?Thanks in advance!\nBest\nMathias", "username": "Mathias_Gerdt" }, { "code": "", "text": "Hi @Mathias_Gerdt,Can you please share a direct link to the right paragraph in the doc please? I think I can answer but I’d like to confirm first.Cheers,\nMaxime", "username": "MaBeuLux88" }, { "code": "", "text": "\nThanks in advance!", "username": "Mathias_Gerdt" }, { "code": "", "text": "Are you referring to this?Custom Data May Be Stale\nAtlas App Services does not dynamically update a user’s custom data if the underlying document changes. Instead, Atlas App Services fetches a new copy of the data whenever a user refreshes their access token, such as when they log in. This may mean that the custom data won’t immediately reflect changes, e.g. updates from an authentication Trigger. If the token is not refreshed, App Services waits 30 minutes and then refreshes it on the next call to the backend, so custom user data could be stale for up to 30 minutes plus the time until the next SDK call to the backend occurs.This means that you can only retrieve the user’s custom data when he sends an authentication query which at worst happens every 30 min because the token expires every 30 min.This is because user’s custom data are stored by the Third Party Auth service (like Google OAuth) and they can be stored in MongoDB only when they are sent to MongoDB during an authentication query.Server-side functions and rules aren’t impacted by this and will be updated as soon as you push (deploy) a new one to take its place.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "context.user.custom_data.name\n", "text": "So just to clarify:\nIf I update a user’s custom data, say in the “users” collection, e.g. with a new name, and then run a function some seconds later, willbe the new name with no delay?", "username": "Mathias_Gerdt" }, { "code": "", "text": "I think we are talking about these fields (screenshot from Authentication > Authentication Providers > Google)\nimage1287×367 53.5 KB\nI think there won’t be a delay if you update manually but it will be overwritten by the value from Google OAuth when the user authenticates. I would need to test. ", "username": "MaBeuLux88" }, { "code": "", "text": "Ah, I think there is a misunderstanding. \nI didn’t mean data from authentication providers, but manually linked custom user data from my own “users” collection. Will those changes be reflected instantly when using realm functions or server-side rules?\nAgain, thanks in advance. ", "username": "Mathias_Gerdt" }, { "code": "", "text": "Ha yes, I think this should be reflected immediately. Give it a try and let me know if that’s not the case.", "username": "MaBeuLux88" } ]
Custom user data stale in functions?
2022-06-19T15:02:47.967Z
Custom user data stale in functions?
2,111
null
[ "aggregation" ]
[ { "code": "", "text": "If one has to perform aggregation with grouping and join how to approach that. also multiple collections.", "username": "Suraj_Pinjan" }, { "code": "$lookup", "text": "$lookup", "username": "Jack_Woehr" } ]
How to perform sophisticated grouping and aggregation in a pipeline?
2022-06-21T05:16:34.266Z
How to perform sophisticated grouping and aggregation in a pipeline?
1,063
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "{ \n id: 1,\n entitlements:[{ \n purchasedAt: 2022-07-20,\n expiresAt: 2022-07-27\n }] \n}, \n{ \n id: 2,\n entitlements:[{ \n purchasedAt: 2022-07-20,\n expiresAt: 2022-08-01\n }] \n}, \n$expr$dateSubtract", "text": "I have a collection that I’d like to query by an array of objects. The query needs to calculate the difference between two dates and return only documents where the difference is less than or equal 7 days.In following example, I’d like to have document 1, but not document 2.I’m using Mongoose as ODM, but I’m failing to build a query or aggregation for the case above.In detail I think I need to figure the following out:Any help is appreciated. I’d also take a solution and figure the rest out myself from there ", "username": "Thomas_Obermuller" }, { "code": "var DAYS = 7 *24 * 60 * 60 * 1000 // 7 days in milliseconds\n\ndb.collection.aggregate([\n{ \n $addFields: { \n entitlements: {\n $filter: { \n input: '$entitlements', \n cond: { $lte: [ \n { $subtract: [ { $toDate: \"$$this.expiresAt\" }, { $toDate: \"$$this.purchasedAt\" } ] }, \n DAYS\n ] }\n }\n }\n }\n},\n{ \n $match: { entitlements: { $ne: [] } } \n}\n])\n$filter$match$expr$dateSubtract$dateSubtract$dateDiff$dateDiff$addFields$set$project$addFields$project", "text": "Hello @Thomas_Obermuller, welcome to the MongoDB Community forum!The following aggregation query :return only documents where the difference is less than or equal 7 days.I will try answer your questions here:It depends upon the data. In this case the date data is string type, the query requires that the difference be calculated and the data is in an array. So, the approach is to use the $filter aggregation array operator to filter on the condition. Note that the string date is converted to a Date object for the match operation. Then, check for documents in the following $match stage.$dateSubtract is used for subtracting units of time (e.g., days, mins, etc.) from a given date. So, this may not be useful in this case. Maybe you are thinking about $dateDiff. You can try using the $dateDiff in the above query.I think you are referring to what is called as “projection”. With Aggregation queries, you can use $addFields (or its alias $set) and $project stages. They have different behavior, that, the $addFields includes all the fields and $project restricts the fields. In addition, the projections can include new fields (e.g., a calculated field value).Please see the manual for the respective operators:", "username": "Prasad_Saya" }, { "code": "db.collection.aggregate([\n{ \n $match: {\n $expr: {\n $ne: [\n { $filter: { \n input: '$entitlements', \n cond: { $lte: [ \n { $subtract: [ { $toDate: \"$$this.expiresAt\" }, { $toDate: \"$$this.purchasedAt\" } ] }, \n DAYS\n ] }\n } }, []\n ]\n }\n }\n},\n])\n", "text": "You can also try the same query with a a single stage:", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks @Prasad_Saya, this really helped me. I could now create the aggregation.", "username": "Thomas_Obermuller" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Querying array of objects with expression
2022-06-20T15:58:45.196Z
Querying array of objects with expression
4,523