image_url (string) | tags (sequence) | discussion (list) | title (string) | created_at (string) | fancy_title (string) | views (int64) |
---|---|---|---|---|---|---|
null | [
"data-modeling"
] | [
{
"code": "db.FamilyTree.insert({\"id\":\"100\", \"ParentName\":\"Jhon Smith\", \"Children\" : \"Michael Smith\" })\ndb.FamilyTree.insert({\"id\":\"100\", \"Diana Smith\":\"Jhon Smith\", \"Children\" : \"Diana Smith\"})\ndb.FamilyTree.insert({\"id\":\"101\", \"ParentName\":\"Michael Smith\", \"Children\" : \"\"})\ndb.FamilyTree.insert({\"id\":\"102\", \"ParentName\":\"Diana Smith\", \"Children\" : \"Britney Smith\"})\ndb.FamilyTree.insert({\"id\":\"301\", \"ParentName\":\"Britney Smith\", \"Children\" : \"\"})\ndb.FamilyTree.insert({\"id\":\"200\", \"ParentName\":\"Richard Smith\", \"Children\" : \"M Smith\" })\ndb.FamilyTree.insert({\"id\":\"200\", \"ParentName\":\"Richard Smith\", \"Children\" : \"D Smith\" })\ndb.FamilyTree.insert({\"id\":\"201\", \"ParentName\":\"M Smith\", \"Children\" : \"\" })\ndb.FamilyTree.insert({\"id\":\"202\", \"ParentName\":\"D Smith\", \"Children\" : \"\" })\n [\n {\n \"id\":\"100\",\n \"name\":\"Jhon Smith\",\n \"children\":[\n {\n \"id\":\"101\",\n \"name\":\"Michael Smith\",\n \"children\":null\n },\n {\n \"id\":\"102\",\n \"name\":\"Diana Smith\",\n \"children\":[\n {\n \"id\":\"301\",\n \"name\":\"Britney Smith\",\n \"children\":null\n }\n ]\n }\n ]\n },\n {\n \"id\":\"200\",\n \"name\":\"Richard Smith\",\n \"children\":[\n {\n \"id\":101,\n \"name\":\"Michael Smith\",\n \"children\":null\n },\n {\n \"id\":\"102\",\n \"name\":\"Diana Smith\",\n \"children\":null\n }\n ]\n }\n]\ndb.FamilyTree.aggregate( [\n { $project: { _id: 0}},\n { \n $graphLookup: {\n from: \"FamilyTree\",\n startWith: \"$Children\",\n connectFromField: \"Children\",\n connectToField: \"ParentName\",\n as: \"TreeSearch\"\n\n \n }\n }\n] )\n",
"text": "Hiwe have a requirement to show data in tree format.Sample Data:the expected output should be in the below format.I Have tried with $graphLookup but I didn’t get the required output:Please help me with this to achieve the required outcome.Thanks.",
"username": "vidiyala_ganesh"
},
{
"code": "",
"text": "Hi @vidiyala_ganesh, I would recommend you to use $group clause to merge the same type of records for that check this.",
"username": "Nabeel_Raza"
},
{
"code": "Childrendb.FamilyTree.insert({\"id\":200, name:\"Richard Smith\", Children:[201, 202]})\ndb.FamilyTree.insert({\"id\":201, name:\"M. Smith\", Children:[]})\ndb.FamilyTree.insert({\"id\":202, name:\"D. Smith\", Children:[]})\n",
"text": "Hi @vidiyala_ganesh, and welcome to the forums!It’s been a while since you posted this question, have you found a solution yet ?I would recommend to change the data model. For example, the Children field should contain a list of children. For example:Just by performing the above data remodelling, your example aggregation pipeline should provide the desire output.\nFor more information about modelling data in tree structures please see:Regards,\nWan.",
"username": "wan"
}
] | How to represent the data in tree structure | 2020-08-19T06:13:57.250Z | How to represent the data in tree structure | 2,307 |
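A minimal sketch of the remodelling suggested in the thread above, assuming an id/Children-array schema; the collection name and ids are illustrative, not taken verbatim from the original poster's data.

```js
// Hypothetical remodelled documents: each node lists the ids of its children
db.FamilyTree.insertMany([
  { id: 100, name: "Jhon Smith",    Children: [101, 102] },
  { id: 101, name: "Michael Smith", Children: [] },
  { id: 102, name: "Diana Smith",   Children: [301] },
  { id: 301, name: "Britney Smith", Children: [] }
]);

// Walk the tree from each node and collect all of its descendants
db.FamilyTree.aggregate([
  {
    $graphLookup: {
      from: "FamilyTree",
      startWith: "$Children",        // ids to start the traversal from
      connectFromField: "Children",  // follow each matched node's Children...
      connectToField: "id",          // ...to documents whose id is listed there
      as: "descendants"
    }
  },
  { $project: { _id: 0, id: 1, name: 1, "descendants.id": 1, "descendants.name": 1 } }
]);
```

The application can then assemble the nested output shape from the flat `descendants` array returned for each root.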
null | [
"node-js",
"transactions"
] | [
{
"code": " await this.client.connect();\n const session = this.client.startSession();\n\n try{\n \n await session.withTransaction(async () => {\n await this.client.db(\"Person\").collection(\"persons\").updateMany({ \"phone\": \"23138213\"}, {$set: {\"gender\": \"Male\"} }, function(err, res) {\n if (err) throw err;\n console.log(res.result.nModified + \" document(s) updated\");\n });\n\n await this.client.db(\"Person\").collection(\"persons\").updateMany({ \"phone\": \"23138213\"}, \"Other\" , function(err, res) {\n if (err) throw err;\n console.log(res.result.nModified + \" document(s) updated\");\n });\n\n })\n}\n finally{\n await session.endSession();\n\n }\n\"\"\"\"gender:Male",
"text": "I have to update two collections. I want to make sure that either both the collections get updated or they don’t.I created this simple example of two updates (both of them use same collection here, but they could be different):Now if the initial value of gender was empty \"\". Then after executing the above code, the final value should still be \"\" because the second update is invalid syntax and would throw exception.But the result is gender:Male",
"username": "jim_rock"
},
{
"code": "\"\"\"\"sessionawait this.client.db(\"Person\").collection(\"persons\").updateMany({ \"phone\": \"23138213\"}, {$set: {\"gender\": \"Male\"} }, {\"session\":session}, \n function(err, res) {\n if (err) throw err;\n console.log(res.result.nModified + \" document(s) updated\");\n }\n);\nasyncawaitPromisePromiselet result1 = await client.db(\"nodejs\").collection(\"one\").updateMany({ \"phone\": \"123\"}, {\"$set\": {\"gender\": \"Male\"} }, {\"session\": session});\nlet result2 = await client.db(\"nodejs\").collection(\"two\").updateMany({ \"phone\": \"123\"}, {\"$set\":{\"name\":\"foobar\"}} , {\"session\": session});\n",
"text": "Hi @jim_rock,Now if the initial value of gender was empty \"\" . Then after executing the above code, the final value should still be \"\" because the second update is invalid syntax and would throw exception.There are a few things here. You need to pass the session into the operation as a transaction options. Otherwise the operation will be performed outside of a session. For example:See also Collection.updateMany() for more information on transaction options.In addition, the update operation is structured with callbacks. In this case, what likely to happen is the first update was executed then the second thrown an exception.\nIf you’re using async functions, you could use await operator with a Promise to pause further execution until the Promise is resolved. You can do sequential logic execution in this manner, for example:Alternatively if you would like to use callbacks, you should nest them. Please see Promises and Callbacks for more information.If you would like to abort a transaction, you should explicitly call ClientSession.abortTransaction(). i.e. if error then abort. For more information, see also QuickStart Node.JS: How To Implement Transactions.Regards,\nWan.",
"username": "wan"
}
] | MongoDB transactions don't seem to be working | 2020-09-26T01:33:26.961Z | MongoDB transactions don’t seem to be working | 4,223 |
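A condensed sketch of the corrected transaction flow described in the thread above: pass the session to each operation, await the promises, and let withTransaction abort on a thrown error. Database, collection and field names follow the thread's example; error handling is simplified.

```js
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    // Both updates join the transaction because { session } is passed as an option
    await client.db("Person").collection("persons")
      .updateMany({ phone: "23138213" }, { $set: { gender: "Male" } }, { session });
    await client.db("Person").collection("persons")
      .updateMany({ phone: "23138213" }, { $set: { status: "Other" } }, { session });
  });
  // If either update throws, withTransaction aborts and neither write is committed
} finally {
  await session.endSession();
}
```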
[
"compass"
] | [
{
"code": "tabularMS Excel",
"text": "Hi,\nI want to copy the output of a query in tabular structure to paste it into MS Excel for comparison.\nI tried copy pasting, but it doesn’t work. I tried on Robo 3T, Compass but it doesn’t support this functionality. Can anybody help me to export the output into a tabular structure.\nHere is sample document.image1116×491 36 KB",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "Export to CSV. Then open the CSV using Excel program.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks @Prasad_Saya for reply, as it is mentioned that the tool we have are only compass and robo. Basically it’s on client side (production) and we can’t add or remove services.\nI know that we can import collections by using mongoExport commnad but here we have some limitation of tools. Once again thanks.",
"username": "Nabeel_Raza"
},
{
"code": "mongoexport",
"text": "You can export from Compass or command-line mongoexport.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Ahan!, i tried but didn’t found any option to export all the documents(of a query output) using compass. can you tell me how can we export/copy all the documents from compass in a one go.",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "Its pretty simple. See: Compass - Export Filtered Subset of a Collection.",
"username": "Prasad_Saya"
},
{
"code": "export the output",
"text": "If we want to export all the collection then we can use this option but I want to export the output of a collection. Kindly reread the question.",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "You have two options, and you can use one of them;",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Ahan my bad. Yes it contain two options Thanks @Prasad_Saya",
"username": "Nabeel_Raza"
},
{
"code": "CollectionExport CollectionExport Full Collectionoutput fileexporting pathexport",
"text": "Using Mongo Compass we can export the collection(full collection and result of a query) for that follow the below steps:\n1- Goto Collection menu from the menubar.\n2- Click Export Collection.\n3- Then a dialogue box will be opened.\ni) untick the Export Full Collection option\nii) choose the output file format i.e. CSV\niii) choose the exporting path and file name\niv) click export button.",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Exporting output of a query in tabular format | 2020-09-28T07:54:02.139Z | Exporting output of a query in tabular format | 12,661 |
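For completeness, the command-line alternative mentioned in the thread might look like this; the URI, database, collection, query and field names are placeholders, not taken from the original poster's data.

```sh
mongoexport --uri="mongodb://localhost:27017/mydb" \
  --collection=persons \
  --query='{ "status": 1 }' \
  --type=csv \
  --fields=name,phone,gender \
  --out=filtered_output.csv
```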
|
null | [
"app-services-user-auth",
"react-native"
] | [
{
"code": "async function loginAnonymous() {\n // Create an anonymous credential\n const credentials = Realm.Credentials.anonymous();\n try {\n // Authenticate the user\n const user = await app.logIn(credentials);\n // `App.currentUser` updates to match the logged in user\n assert(user.id === app.currentUser.id)\n return user\n } catch(err) {\n console.error(\"Failed to log in\", err);\n }\n }\ndeclare namespace Realm {\n namespace Credentials {\n /**\n * Payload sent when authenticating using the [Anonymous Provider](https://docs.mongodb.com/realm/authentication/anonymous/).\n */\n type AnonymousPayload = {};\n\n /**\n * Payload sent when authenticating using the [Email/Password Provider](https://docs.mongodb.com/realm/authentication/email-password/).\n */\n type EmailPasswordPayload = {\n /**\n * The end-users username.\n * Note: This currently has to be an email.\n */\n username: string;\n\n /**\n * The end-users password.\n */\n password: string;\n };\n",
"text": "Hello,I am trying to allow anonymous login for my React Native app.\nI have the loginAnonymous function implemented in my AuthProvider:When I call it, I can see that it is successful in the mongodb realm backend, however, it returns a null user/credentials.From my reading, I assumed that a unique ID would be returned.\n“Anonymous user objects have a unique ID value but no other metadata fields or configuration options.”However, when I was reading into the Realm library, I noticed that the credentials payload for anonymous returns an empty object {}.I was expecting some sort of ID being returned as the user. My app currently looking at the user object to dictate wheter or not they see the login screen.Can anyone assist?Thanks",
"username": "Irfan_Maulana"
},
{
"code": "credentialsuserAnonymousPayload",
"text": "Hi there - welcome to the forum it returns a null user/credentials.What is null, the credentials or the user?The AnonymousPayload that you’re referring to is the data sent to the server when authenticating, which is the empty object when authenticating anonymously. The server will create a user and respond to the auth request with a user object, including its unique id created by the server.If you think there is infact a bug in the Realm JS SDK, you might consider creating an issue on GitHub.",
"username": "kraenhansen"
}
] | Anonymous Login with React Native returning null user | 2020-09-26T22:25:01.191Z | Anonymous Login with React Native returning null user | 3,475 |
null | [
"indexes",
"text-search"
] | [
{
"code": "{\n description: some descripton here\n name: John Galt\n status: 1\n}\n",
"text": "Hi,Just as and example, say I have Collection of following items:I would like to have text index for name and description and I want to be able to find all documents, where status equals to 1.I create index this way:db.collection.createIndex({“name”:“text”, 'description\":“text”, “status”:1});but I don’t get how to query the document above: say I want to search by:$text: John\nand “status”:1I’m new to MongoDB, just 2nd day looking into it so appreciate help from community.Thanks",
"username": "T_W"
},
{
"code": "db.collection.find( { status: 1, $text: { $search: \"John\" } } )db.collection.find( { $text: { $search: \"John\" }, status: 1 } )",
"text": "but I don’t get how to query the document above: say I want to search by:$text: John\nand “status”:1Hello @T_W, welcome to the community.You can do your search as follows using the compound text index you had created:db.collection.find( { status: 1, $text: { $search: \"John\" } } )-or-db.collection.find( { $text: { $search: \"John\" }, status: 1 } )The result with both the queries is the same. That is the order of the fields in the query doesn’t matter. But, the order of the fields used with creating index matters how the index is used.",
"username": "Prasad_Saya"
}
] | How to create compound index of text fields and numeric fields | 2020-09-26T01:32:12.414Z | How to create compound index of text fields and numeric fields | 2,434 |
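A short consolidation of the answer above, assuming the field names from the example document; the question's createIndex snippet has mismatched quotes, so a corrected form is sketched here.

```js
// Compound text index on name + description, with status as an extra indexed field
db.collection.createIndex({ name: "text", description: "text", status: 1 });

// Text search combined with an equality match on status
db.collection.find({ $text: { $search: "John" }, status: 1 });

// Optional: confirm the index is actually used
db.collection.find({ $text: { $search: "John" }, status: 1 }).explain("executionStats");
```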
null | [] | [
{
"code": "",
"text": "I am new to mongodb so need some help with export and import of mongodb data with nodejs.I have a mongodb database and some collections(eg product collection,formula collection and rules collection which have reference of product id) inside it, I want to export the data from different collections based on params of api request and generate file containing the corresponding data this file will be downloaded on client browser. The exported file can be used by the user to import exported data into another db instance.Have already searched on this topic and came acros this answer not sure if i can use mongoexport for my task. Any idea how can i do that.I have also asked on stackoverflow: node.js - Export mongodb collection data and import it back using node js - Stack OverflowAny help or idea is greatly appreciated.Thanks\nYogesh",
"username": "yogesh_manjhi"
},
{
"code": "importexportconst MongoClient = require('mongodb').MongoClient;\nconst fs = require('fs');\nconst dbName = 'testDB';\nconst client = new MongoClient('mongodb://localhost:27017', { useUnifiedTopology:true });\n\nclient.connect(function(err) {\n //assert.equal(null, err);\n console.log('Connected successfully to server');\n const db = client.db(dbName);\n\n getDocuments(db, function(docs) {\n \n console.log('Closing connection.');\n client.close();\n \n // Write to file\n try {\n fs.writeFileSync('out_file.json', JSON.stringify(docs));\n console.log('Done writing to file.');\n }\n catch(err) {\n console.log('Error writing to file', err)\n }\n });\n}\n\nconst getDocuments = function(db, callback) {\n const query = { }; // this is your query criteria\n db.collection(\"inCollection\")\n .find(query)\n .toArray(function(err, result) { \n if (err) throw err; \n callback(result); \n }); \n};\noutCollectionclient.connect(function(err) {\n\n const db = client.db(dbName);\n const data = fs.readFileSync('out_file.json');\n const docs = JSON.parse(data.toString());\n \n db.collection('outCollection')\n .insertMany(docs, function(err, result) {\n if (err) throw err;\n console.log('Inserted docs:', result.insertedCount);\n client.close();\n });\n});\n",
"text": "for importing and exporting the data form database, mongodb provides import & export clause. check this out.\nHere is the code for exporting the collections:The Import:Reads the exported file “out_file.json” and inserts the JSON data into the outCollection .",
"username": "Nabeel_Raza"
}
] | Export and Import data from collection in mongodb using nodejs | 2020-08-25T08:09:24.543Z | Export and Import data from collection in mongodb using nodejs | 20,669 |
[
"compass",
"schema-validation"
] | [
{
"code": "",
"text": "This is a screenshot from Compass. I don’t understand why my document has failed validation. The validator states that description is required and the document has a description of “hello” \n\n\n",
"username": "Julie_Stenning"
},
{
"code": "{\n \"description\" : \"required\"\n}\nrather than\n{\n \"required\" : \"description\"\n}\n",
"text": "I am not very schema validation aware but my gut feeling would be that it should besimply because if you have many required field a basic JSON requirement of not having multiple fields with the same key will be broken if you had { required : f1 , required : f2 } rather than { f1 : required , f2 : required }.",
"username": "steevej"
},
{
"code": "{\n $jsonSchema: {\n required: [\n 'x'\n ]\n }\n}\n",
"text": "Thanks for your reply. That didn’t work. However, I have solved it. I needed to use $jsonSchema and put the field name into a list.",
"username": "Julie_Stenning"
}
] | Document failed validation | 2020-09-26T22:24:45.169Z | Document failed validation | 6,083 |
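A hedged example of what the working validator could look like when applied to an existing collection with collMod; the collection and property names are assumptions based on the screenshots described above.

```js
db.runCommand({
  collMod: "products",                     // hypothetical collection name
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["description"],           // the field must be present
      properties: {
        description: {
          bsonType: "string",
          description: "must be a string and is required"
        }
      }
    }
  },
  validationLevel: "strict",
  validationAction: "error"
});
```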
|
null | [] | [
{
"code": "",
"text": "We have a 3 node cluster with 1 replicaset, it was running fine for months before it started crashing every day.\nWe are stuck where the mongod process on the cluster keeps failing and we are not able to retrieve the data. We tried running the mongod --repair option but no luck. We also tried copying the data dir to another VM and starting mongod there but it did not start citing some missing rollback ID. The DB looks corrupt and we are not able to see the Databases even if we start another mongod and copy over our DB files (including ‘local’ ). Is there a way to recover this data.\nMongo DB version - 4.0.7\nOS : Ubuntu 18.04",
"username": "Nikhil_Ingole"
},
{
"code": "",
"text": "Unsure if you have resolved this issue by now as it was 6 weeks ago - I sure hope so!Did you have any errors in the mongod.log file? Were you unable to perform a stepDown within your replicaset prior to moving the data to another server?Do you not have any backups via Ops Manager or mongodump?Clive",
"username": "clivestrong"
}
] | Recover database from Mongo DB | 2020-08-17T20:31:41.845Z | Recover database from Mongo DB | 1,273 |
null | [
"atlas-triggers"
] | [
{
"code": "this service was recently updated, so it may take several minutes \nuntil the cluster can be connected to. \nError: error connecting to MongoDB service cluster: \nfailed to ping: connection() : auth error: sasl \nconversation error: unable to authenticate using \nmechanism \"SCRAM-SHA-1\": (AtlasError) bad auth \nAuthentication failed.\n",
"text": "I keep restarting a trigger and keep getting the same error:I have checked the data linked and it looks good. It seems like there might be something that goes wrong after I do a deploy from the CLI where I use the replace strategy.Any ideas on how to debug this? Thanks!",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "Hi @Lukas_deConantseszn1,Yea looks like cluster is not properly linked… I would suggest relinking the cluster. To see what caused it.in first place I would recommend contacting support…Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Trigger keeps being suspended, unclear why | 2020-09-26T04:20:07.993Z | Realm Trigger keeps being suspended, unclear why | 2,850 |
null | [] | [
{
"code": "",
"text": "Hi Team,\nI hope you guys are doing well. I wanted to set one field deleted: true based on Date field timeoutOn.\nSo when Date.Now > timeoutOn, I want to set document as deleted: true.\nDoes Mongo Atlas Provides this ??\nI can see I can call Mongo Functions on scheduled triggers but It would be same as any k8s cron job/ AWS lambda.Thanks,",
"username": "Gheri_Rupchandani"
},
{
"code": "",
"text": "Hi @Gheri_Rupchandani,I am not entiry sure I understand your use case but it seems you eant to expire documents after specific time on field timeOn . We have TTL indexes just for this purpose:If you wish to update a field instead you will need a database trigger for when the field is added or updated you can run a function with the needed logic to set deleted field. To avoid a circular trigger place a match on the timeOn field only so the the deleted field won’t trigger it.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks for inputs.\nI think you got confused with word deleted but I want to set a field when value in timeoutOn field is passed.\nFor example lets say timeoutOn = 1 sept 2020 1:00:00 AM. so i want to set field deleted: true when current time (1 sept 2020 1:01:00 AM) is greater than timeoutOn value.",
"username": "Gheri_Rupchandani"
},
{
"code": "new Date()",
"text": "@Gheri_Rupchandani, In that case you need a trigger to run every lets say min and query all “undeleted” whic times passed new Date() and updating them…",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks for answer.\nIs there a way to define the scheduled trigger in code like cloud formation in AWS ??\nAlso if I run update this field based on above criteria on collection will it update for million documents (matching query criteria) in collections ??",
"username": "Gheri_Rupchandani"
},
{
"code": "",
"text": "Hi @Gheri_Rupchandani,Yes schedule triggers runs a JS function which could be linked to your atlas cluster and performing any type of crud as long as it is faster than 90s (execution limit).I would recommend indexing the fields properly and update limited portion each time to be qble to do that in 90s.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "There is one similar use-case. Here I have created a ttl index (say expiresOn) with expirySeconds as 24 hours.\nnow we all know document will get removed when time (expiresOn + 24hours) is passed.\nI have requirement to show this document as (expired: true) when expiresOn is passed so that user can extend the time if he wants and if he does not it would automatically deleted due to TTL.\nDeleteion would be handled by mongo automatically due to TTL index.\nIs there any way to auto set the property expired: true when expiresOn reached ??\nOr I have to follow the same method that you recommended just to set the property ??Thanks,\nGheri",
"username": "Gheri_Rupchandani"
},
{
"code": "",
"text": "@Gheri_Rupchandani, than I would say schedule triggers are the best solution I can think about…",
"username": "Pavel_Duchovny"
}
] | I wanted to set some fields on documents based on field "timeoutOn" | 2020-09-24T19:38:44.590Z | I wanted to set some fields on documents based on field “timeoutOn” | 2,445 |
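A minimal sketch of the scheduled-trigger function described in the thread above, assuming a linked Atlas service named "mongodb-atlas" and illustrative database, collection and field names.

```js
// Realm scheduled trigger: mark documents whose timeoutOn has passed
exports = async function () {
  const coll = context.services
    .get("mongodb-atlas")        // name of the linked cluster service (assumption)
    .db("mydb")
    .collection("items");

  // Only touch documents not already marked, so repeated runs stay cheap and idempotent
  const result = await coll.updateMany(
    { deleted: { $ne: true }, timeoutOn: { $lte: new Date() } },
    { $set: { deleted: true } }
  );
  console.log(`Marked ${result.modifiedCount} document(s) as deleted`);
};
```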
null | [] | [
{
"code": "{ \"_id\" : \"14_0\", \"data\" : [ { \"dn\" : \"subdata=data_a\", \"userUL\" : \"0\", \"objectClass\" : \"NEWDATA\", \"userDL\" : \"5\" } ] }\n\n{ \"_id\" : \"15_0\", \"data\" : [ { \"dn\" : \"subdata=data_b\", \"userUL\" : \"0\", \"objectClass\" : \"NEWDATA\", \"userDL\" : \"3\" } ] }\n\n{ \"_id\" : \"16_0\", \"data\" : [ { \"dn\" : \"subdata=data_c\", \"userUL\" : \"0\", \"objectClass\" : \"NEWDATA\", \"userDL\" : \"9\" } ] }\ndb.testcol.find({ $expr: { $lte: [ { $toDouble: \"$data.userDL\" }, 5 ] } }).count()\n\n2020-09-18T06:26:37.010+0530 E QUERY [js] uncaught exception: Error: count failed: {\n \"ok\" : 0,\n \"errmsg\" : \"Unsupported conversion from array to double in $convert with no onError value\",\n \"code\" : 241,\n \"codeName\" : \"ConversionFailure\"\n} :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDBQuery.prototype.count@src/mongo/shell/query.js:376:11\n@(shell):1:1\n> db.testcol.aggregate([{ $project:{ adjustedGrades:{$map:{input: \"$data.userDL\",as: \"grade\",in: {$lte : [{ $toInt: \"$grade\" },5] } }}}}, {$match: {adjustedGrades: {$eq: true}}}])\n\n{ \"_id\" : \"14_0\", \"adjustedGrades\" : [ true ] }\n{ \"_id\" : \"15_0\", \"adjustedGrades\" : [ true ] }\n",
"text": "If my doc contains normal key-val pairs, the convert $toDouble/$toInt and then comparison works fine. But my doc looks like this:For an array it throws conversion error given below.So I tried with aggregate:I want to be able to convert my string to Int/Double and do the comparison inside find() and not aggregate(). I mean if the convert and compare is working for normal key-value pair using find() then, mongoDB should also be able to support for array data also right? Array data is a commonly given in docs and querying it should be as easy as — db.testcol.find({ $expr: { $lte: [ { $toDouble: “$data.userDL” }, 5 ] } }).count()",
"username": "K_S"
},
{
"code": "{\n \"$expr\": {\n \"$allElementsTrue\": {\n \"$map\": {\n \"input\": \"$data\",\n \"as\": \"d\",\n \"in\": {\n \"$gt\": [\n {\n \"$toDouble\": \"$$d.userDL\"\n },\n 5\n ]\n }\n }\n }\n }\n}\n\n{ $expr: { $lte: [ { $toDouble: “$data.userDL” }, 5 ] } }\napplyAllTrue(myarray,varname,myf)\n{\n \"$expr\": {\n \"$allElementsTrue\": {\n \"$map\": {\n \"input\": \"$\"+myarray,\n \"as\": varname,\n \"in\": myf\n }\n }\n }\n}\n\nAnd call it like\n\napplyAllTrue(\"data\",\"member\",{\"$gt\": [{\"$toDouble\": \"$$member.userDL\"},5]})\n\nwhich is one line and simple.\n\n",
"text": "Hello : )The error is because $data.userDL is an array.\nWhen an array contains documents the path,selects that field from each document member,\nand makes an array.For the first document it is [5].I don’t know why you what to use find only,and not aggregation stages,but if you want,\nto use only find you can also.This will be a filter to keep a document only if all userDL are >5.With thisYou mean check that all $data.userDL values are <5 (after type convertion).Mongo json can be wrapped in functions so you can do that,without writing all the json.\nAny time you want to check if all elements of a array satisfy a condition you can use this.Meaning run this function in all members of data array,and return true only if all\nsatisfy the function.Instead of map and allElementsTrue i could use reduce,start from true and reduce to true\nor false,map is used when i want to output an array,not a value like true/false,but map\nworks also,and looks simpler.Json code can be generated,if something is common to use,making it a function helps",
"username": "Takis"
},
{
"code": "{ \"_id\" : \"14_0\", \"data\" : [ { \"dn\" : \"subdata=data_a\", \"userUL\" : \"0\", \"objectClass\" : \"NEWDATA\", \"userDL\" : \"5\" } ] }\n{ \"_id\" : \"15_0\", \"data\" : [ { \"dn\" : \"subdata=data_b\", \"userUL\" : \"0\", \"objectClass\" : \"NEWDATA\", \"userDL\" : \"3\" }, { \"dn\" : \"subdata=data_z\", \"dummy\" : \"0\", \"objectClass\" : \"NEWDATAdummy\", \"user\" : \"something\" } ] }\n{ \"_id\" : \"16_0\", \"data\" : [ { \"dn\" : \"subdata=data_c\", \"userUL\" : \"0\", \"objectClass\" : \"NEWDATA\", \"userDL\" : \"9\" } ] }\n> db.testcol.find({\"$expr\": {\"$allElementsTrue\": {\"$map\": {\"input\": \"$data\",\"as\": \"d\",\"in\": {\"$gt\": [{\"$toInt\": \"$$d.userDL\"},0]}}}}})\n{ \"_id\" : \"14_0\", \"data\" : [ { \"dn\" : \"subdata=data_a\", \"userUL\" : \"0\", \"objectClass\" : \"NEWDATA\", \"userDL\" : \"5\" } ] }\n{ \"_id\" : \"16_0\", \"data\" : [ { \"dn\" : \"subdata=data_c\", \"userUL\" : \"0\", \"objectClass\" : \"NEWDATA\", \"userDL\" : \"9\" } ] }\n> db.testcol.find({\"$expr\": {\"$allElementsTrue\": {\"$map\": {\"input\": \"$data\",\"as\": \"d\",\"in\": {\"$lte\": [{\"$toInt\": \"$$d.userDL\"},10]}}}}})\n{ \"_id\" : \"14_0\", \"data\" : [ { \"dn\" : \"subdata=data_a\", \"userUL\" : \"0\", \"objectClass\" : \"NEWDATA\", \"userDL\" : \"5\" } ] }\n{ \"_id\" : \"15_0\", \"data\" : [ { \"dn\" : \"subdata=data_b\", \"userUL\" : \"0\", \"objectClass\" : \"NEWDATA\", \"userDL\" : \"3\" }, { \"dn\" : \"subdata=data_z\", \"dummy\" : \"0\", \"objectClass\" : \"NEWDATAdummy\", \"user\" : \"something\" } ] }\n{ \"_id\" : \"16_0\", \"data\" : [ { \"dn\" : \"subdata=data_c\", \"userUL\" : \"0\", \"objectClass\" : \"NEWDATA\", \"userDL\" : \"9\" } ] }\n",
"text": "Hi Takis,\nThank you for your response. Yes, I want to use find() only because at the moment I have created some 200+ unique queries that use find(). Switching to aggregate would mean that I need to change these queries accordingly. So I am sticking to find() at the moment. The solution you provided worked fine at 1st glance, according to the docs I provided here (which is highly simplified). However when I add another object(containing different fields) to the “data” array the find(query) doesn’t work fine for the $gt… but works for $lte… which is weird. Can you help me here? Below I have added 1 more object for 1 of the docs:When $gt is used:When $lte is used:[Update]\nI think I need to use $allElementsTrue with $lte and $anyElementTrue with $gt.",
"username": "K_S"
},
{
"code": "{\n \"$expr\": {\n \"$allElementsTrue\": {\n \"$map\": {\n \"input\": \"$data\",\n \"as\": \"d\",\n \"in\": {\n \"$cond\": [\n {\n \"$ne\": [\n {\n \"$type\": \"$$d.userDL\"\n },\n \"missing\"\n ]\n },\n {\n \"$gt\": [\n {\n \"$toInt\": \"$$d.userDL\"\n },\n 0\n ]\n },\n true\n ]\n }\n }\n }\n }\n}\n",
"text": "Hello : )The previous solution assumed that in $data array all emelemnts had userDL field.\nBut in the next data,one member doesn’t have userDL field and it doesnt work.The below meansif for all documents in $data array\n(have userDL field >0) or (dont have use usedDL field)\nkeep the documentFind() is fine,i only said it because when i started mongodb,i thought aggregation is\ncomplicated,but now i think the opposite,that aggregation is so nice and powerful,and\nyou can do all with it, query/project/update.$allElementsTrue => true is all true\n$anyElementTrue => true if at least 1 true\nPick the one that does what you need.",
"username": "Takis"
},
{
"code": "",
"text": "Thank you so much for your help and the solution provided ",
"username": "K_S"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Convert String to Int/double then compare doesn't always work with collection.find() | 2020-09-24T19:39:07.282Z | Convert String to Int/double then compare doesn’t always work with collection.find() | 8,929 |
null | [] | [
{
"code": "",
"text": "MongoDB Realm’s SDKs provide the ability to register users using an email and password.However I can’t find anything in the documentation describing how passwords are stored. What hashing scheme is used? Are passwords salted?The only information I can find is “the password must be between 6 and 128 characters”. Are there are restrictions on characters used? Are passwords expired?I’d like to know as much as possible so I can assess the security of private user data.",
"username": "Martin_Bradstreet"
},
{
"code": "",
"text": "Hi Martin,Are there are restrictions on characters used?No, but you can enforce on the client.Are passwords expired?No, they are not by default. This is logic you will have to implement yourself.Are passwords salted and hashed?Yes they are salted and hashed with SHA256.Hope that helps!",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Yes that’s very helpful. Thank you.",
"username": "Martin_Bradstreet"
},
{
"code": "",
"text": "What about password format validation? I’m able to register users with a password 123456 which is a big security issue.",
"username": "David_N_A"
}
] | Password security for Email/Password users | 2020-08-11T20:32:08.360Z | Password security for Email/Password users | 2,194 |
null | [] | [
{
"code": "",
"text": "Hi,I’m starting with Realm Web and I’m wondering how do you validate user password during registration? Should i use a realm function to validate email and password and then call the realm register api function ?I’m also wondering how do you validate user form in general, do you have to create a custom resolver, or create a validation schema on the collection(s), or something else?Thanks for your help.",
"username": "David_N_A"
},
{
"code": "",
"text": "Hi @David_N_A,Welcome and good luck with Realm.As part of email password config you are required to set the way you confirm users:https://docs.mongodb.com/realm/authentication/email-password/#configurationIt can be a built-in eMail verification or a function where you can write your own logic.However, the flow mus start with sdk registration methods.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you, that’s what I was looking for !\nAs for validating user inputs on the server side do I have to use mongodb schema validation and graphql customer resolvers. Is this how I have to validate user inputs and get error messages back that I handle on forms ?Thanks.",
"username": "David_N_A"
},
{
"code": "",
"text": "Well that depends on what you are validating… If its your documents structure then using schema validation in https://docs.mongodb.com/realm/mongodb/document-schemas/ rules make sense:Pavel.",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Do I have to wrap registration in a realm function so that I can validate the password?",
"username": "David_N_A"
},
{
"code": "",
"text": "Let’s say i’m building an api and I have a userProfile endpoint. This endpoint should be able to validate and return error messages.\nOr let’s say i have a user profile form on the website, I should be able to get validation error messages from graphql.",
"username": "David_N_A"
},
{
"code": "",
"text": "Actually with a confirmation function the password doesn’t seem to be passed. How can I validate the password format server side ?",
"username": "David_N_A"
}
] | User password validation | 2020-09-26T07:38:00.550Z | User password validation | 3,630 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "const {\n DynamicLoader\n} = require('bcdice');\n\nasync function main() {\n const loader = new DynamicLoader();\n\n console.log(loader.listAvailableGameSystems().map(info => info.id));\n\n const GameSystem = await loader.dynamicLoad('Cthulhu7th');\n const result = GameSystem.eval('CC<=54');\n\n console.log(result && result.text);\n}\n\n\n\nconst Discord = require('discord.js');\nconst client = new Discord.Client();\n\n",
"text": "I got error when mongodb work with bcdice\nSo i create a simple nodejs project, just install mongodb an bcdice,If remove all bcdice code, mongodb is work fine…",
"username": "QQ_Zeix"
},
{
"code": "",
"text": "I got error when mongodb work with bcdiceHi @QQ_Zeix,Can you share the error message you are encountering?It would also be useful to know your versions of Node.js, Mongoose, and MongoDB Node.js driver.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "node_modules\\mongodb\\lib\\core\\topologies\\replset_state.js:456\n var normalizedHosts = ismaster.hosts.map(function(x) {\n ^\n\nTypeError: ismaster.hosts.map is not a function\n at ReplSetState.update (g:\\GitHub\\helloworld\\test-bcdice-with-mongoose\\node_modules\\mongodb\\lib\\core\\topologies\\replset_state.js:456:42)\n at Server.<anonymous> (g:\\GitHub\\helloworld\\test-bcdice-with-mongoose\\node_modules\\mongodb\\lib\\core\\topologies\\replset.js:756:43)\n at Object.onceWrapper (events.js:422:26)\n at Server.emit (events.js:315:20)\n at Pool.<anonymous> (g:\\GitHub\\helloworld\\test-bcdice-with-mongoose\\node_modules\\mongodb\\lib\\core\\topologies\\server.js:384:12)\n at Pool.emit (events.js:315:20)\n at g:\\GitHub\\helloworld\\test-bcdice-with-mongoose\\node_modules\\mongodb\\lib\\core\\connection\\pool.js:581:12\n at g:\\GitHub\\helloworld\\test-bcdice-with-mongoose\\node_modules\\mongodb\\lib\\core\\connection\\pool.js:1036:7\n at callback (g:\\GitHub\\helloworld\\test-bcdice-with-mongoose\\node_modules\\mongodb\\lib\\core\\connection\\connect.js:75:5)\n at g:\\GitHub\\helloworld\\test-bcdice-with-mongoose\\node_modules\\mongodb\\lib\\core\\connection\\connect.js:148:11\n",
"text": "",
"username": "QQ_Zeix"
},
{
"code": "\"mongodb\": \"^3.6.2\"\n",
"text": "“bcdice”: “^1.6.0”,NODEJS V12",
"username": "QQ_Zeix"
}
] | Error when I use BCDice with Mongoose | 2020-09-25T14:29:02.913Z | Error when I use BCDice with Mongoose | 2,249 |
[
"typescript"
] | [
{
"code": "const app = new Realm.App({ id: \"<Your App ID>\" })npm install --save [email protected]",
"text": "According to the Quick Start Guide, one should initialize the app like this: const app = new Realm.App({ id: \"<Your App ID>\" }). However, in Typescript it gives the following warning:\nScreen Shot 2020-09-21 at 8.28.02 PM1300×356 31.7 KB\nI installed Realm using npm install --save [email protected], as directed by the Install Realm for Node.js guide.",
"username": "Brian_Burns"
},
{
"code": "",
"text": "Hey @Brian_Burns - welcome to the community!I’m seeing the same error you are. I’ll do some investigating.",
"username": "Lauren_Schaefer"
},
{
"code": "npm install --save [email protected]",
"text": "@Brian_Burns There is a newer version of Realm. I tried it out, and it resolved the problem for me. Let me know if this fixes it for you:npm install --save [email protected]’m talking with our Documentation team about getting this updated.",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "Thanks you so much for your reply @Lauren_Schaefer! I think I tried beta.12. I’ll give 13 a try.",
"username": "Brian_Burns"
}
] | Realm Node SDK | Realm.App({id: ''}) | Expected 0 arguments, but got 1.ts(2554) | 2020-09-22T00:58:27.070Z | Realm Node SDK | Realm.App({id: ”}) | Expected 0 arguments, but got 1.ts(2554) | 4,491 |
|
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi there,I’m a newbie to MongoDB and could use some help with my db.I want to build a db that stores product data. A product has a lot of attributes that I want to embed in the product itself. Additionally a product can be purchased in 1-n markets and is assigned to 1 category.In the app the user has a category list and once he selects a category all products of the category are listed in a grid. He also has the possibility to filter the products to see in which markets they’re available.I watched some tutorials and I believe it’s best to have three collections (products, markets, categories), since there are cases in the app where I would query products, markets and categories independently.In each category and each market I’d store the id’s of all assigned products in an array. In the product itself I’ll have an array with the related market id’s and an attribute with the category id.Does this sound right to you or do you have any suggestions for improvement.Any help is much appreciated.Thanks!",
"username": "tmmsdlczek"
},
{
"code": "{\nProductId: ....,\nProductName: ...,\nCategory: [ ... ],\nMarkets : [ { marketId: ... , MarketName : ...}...]\n...\n}\n",
"text": "Hi @tmmsdlczek,Welcome to MongoDB community!It sounds like a classic online store schema that you are looking for where products collection will probably be the main one. I would recommend keeping as minimum amount of collections as possible therefore embedding as much as possible into the product document sounds correct.Of course data must be also logically segregated according to the application access patterns. I think a following document might be good fit:I assume there will still be catagories and market collections but you can still index category and markets fields to search products solely on products collection…Also read the following blogUsing a reference to improve performanceBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hello : )I try to have the least collections possible,for simplicity and performance.\n1)embeded documents/arrays ,with indexes\n2)if query from many locations , separate and duplicate data\n3)if many updates on duplicated data , references in arrays and joinsSee this topic,the answer from Michael Höller\nHe gathered additional links that can help ",
"username": "Takis"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help modeling my product DB | 2020-09-26T01:33:38.961Z | Help modeling my product DB | 3,529 |
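A small sketch of the indexes that would back the access patterns discussed in the thread above; field names follow the example document in the answer, and the query values are illustrative.

```js
// Products are looked up by category, then optionally filtered by market
db.products.createIndex({ Category: 1 });
db.products.createIndex({ "Markets.marketId": 1 });

// Example query: all products of a category that are available in a given market
db.products.find({ Category: "snacks", "Markets.marketId": 42 });
```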
null | [] | [
{
"code": "contains?(\"$$mydoc\",\"$$mykey\")\nput(\"$$mydoc\",\"$$mykey\",\"randomValue\")\nremove(\"$$mydoc\",\"$$mykey\")\n....\n(and all those basic operation on maps that programming languages offer)\n",
"text": "HelloHow to do in mongoDB document operations?\nLets say that “$$mydoc” is a document\nand “$$mykey” is a key (variable key not literal)How to doThe root of the problem is that in mongo seems that you can’t use variables to refer to fields.I know that i can use $function operator and javascript,but those above are so common\nthings and so useful when processing json data.If key is literal like (“myfield”) i can construct a document {…} or use merge objects.\nBut if the key comed from variable the only way i know to do those is use object to array,and array to object,\nbut those cant be the normal way to update a document.If you need to do many updates it will be\nso slow,and code so complicated.I like mongodb alot,is there a way those to be added?\nWe can ask somewhere for new features we need to added in mongoDB query languange?Thank you",
"username": "Takis"
},
{
"code": "",
"text": "Hi @Takis,How to do in mongoDB document operations?Could you clarify where are you trying to perform these document operations ?\nIf you’re referring to MongoDB Aggregation Pipeline, perhaps you could utilise $let.Also, if it’s an aggregation pipeline could you share the context on those operations (i.e. contains, put, remove) ?Regards,\nWan.",
"username": "wan"
},
{
"code": "Lets say you are in pipeline and you have this\n\n $$mydoc = one document\n $$mykey = one key\n \nHow to do \n\n get(\"$$mydoc\",\"$$mykey\")\n contains?(\"$$mydoc\",\"$$mykey\")\n put(\"$$mydoc\",\"$$mykey\",\"randomValue\")\n remove(\"$$mydoc\",\"$$mykey\")\n ....\n (and all those basic operation on maps that programming languages offer)\nFor arrays its so easy to do a put it in mongoQL\n{\"$concatArrays\" [\"$$array1\" [\"$$newmember\"]])}\nI want something like the above but with embeded documents\n",
"text": "Hello @wan ,Use cases are so many,everytime that you want to edit one embeded document\nusing a variable key (not literal, for example you didnt know it at pipeline writing time,you found it at pipeline run time) it is a use case.Every programming language that has hash-maps provides those methods to process those\nhash-maps.I managed to make so far only put,in a fast way using mergeObjects and objectToArray.\nBut it wasn’t straightforward at all.If you want to see an use example see this post,where i tried to group an array\nusing reduce and i couldnt do it.ObjectToArray and ArrayToObject are ok if document is small or if you will do it one time.\nIf you are in a reduce,i cant convert the object to array and back to object at each step.Its like saying to a Javascript programmer,dont add to an Object make the object array first,add to array,and then back to Object,its too slow to be usable.Thank you",
"username": "Takis"
},
{
"code": "",
"text": "Hi @Takis,Thanks for providing the background context.You won’t be able to fully substitute a programming language with a database query language. A programming language will have more expressive operators, and a database can manipulate the data only to a certain extent (i.e. aggregation).Instead of attempting to shift a complex computation from the application to the database, it would be better to either keep the operations at application side, and/or re-design the schema. See also Building With Patterns: A Summary.If you want to see an use example see this post,where i tried to group an arrayIn regards to the other thread, it looks like the thread ended with an answer. If that doesn’t work, you should clarify further what’s difference between your expected outcome and the response.Regards,\nWan",
"username": "wan"
},
{
"code": "",
"text": "There’s actually a JIRA ticket for this functionality: https://jira.mongodb.org/browse/SERVER-30417Feel free to watch it (and upvote it).",
"username": "Asya_Kamsky"
},
{
"code": "\n $$mydoc = one document\n $$mykey = one key\n \n get(\"$$mydoc\",\"$$mykey\") //we need this to be added\n contains?(\"$$mydoc\",\"$$mykey\") //if we have get we can do it,with check if null/missing\n put(\"$$mydoc\",\"$$mykey\",\"randomValue\") //if we have get we can do it,with mergeObjects\n remove(\"$$mydoc\",\"$$mykey\") //we need this to be added\n\n",
"text": "Thank you i was searching for that on JIRA also it would be nice we to have those,i upvoted\nand i am watching it,but i don’t think i can create JIRA to ask for those.\nif we have get we can do contains,and put using get and mergeobjects\n(new value(we can do it even now) or updated).\nI think remove is also needed,i don’t know a way to do it.",
"username": "Takis"
}
] | MongoDB missing basic document(object) operators? | 2020-09-04T12:02:21.591Z | MongoDB missing basic document(object) operators? | 3,538 |
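Until the feature requested in SERVER-30417 exists, the workaround hinted at in the thread goes through $objectToArray/$arrayToObject; a hedged sketch of a "remove a dynamic key" expression, with variable names taken from the question.

```js
// Expression that returns $$mydoc without the key held in $$mykey
{
  $arrayToObject: {
    $filter: {
      input: { $objectToArray: "$$mydoc" },  // document as [{ k: ..., v: ... }, ...]
      as: "kv",
      cond: { $ne: ["$$kv.k", "$$mykey"] }   // drop the entry whose key matches
    }
  }
}
```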
null | [
"indexes"
] | [
{
"code": "",
"text": "I have two partial indexes, that I hope MongoDB will join two of these indexes when all the fields are specified exactly as they are in the partialFilterExpression for both indexes.I ran an explain plan, but it does not appear to intersect both indexes. Why is that? are there any limitations with index intersection? Does it do both OR and AND conditions?Referring to https://docs.mongodb.com/manual/core/index-intersection/",
"username": "Bluetoba"
},
{
"code": "",
"text": "Hi @Bluetoba,Can you please share the index definitions and your query along with the explain plan so we can investigate?Also, why not create a compound partial index at this point? This would solve your issue, no?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Unfortunately, we have removed these indexes, but we have a collection that has the following fields (for storing chat conversation messages):ID, Merchant_ID, Buyer_ID, Top_Chain_ID, Top_Chain_FLAG, Last_Msg_DateNote:We had the following indexes\nIndex 1: Merchant_ID with partialFilterExpression(Merchant_ID exists)\nIndex 2: Buyer_ID with partialFilterExpression(Buyer_ID exists)\nIndex 3: Top_Chain_ID, Last_Msg_Date: -1 with partialFilterExpression(Top_Chain__FLAG=1)What we hoped to do are two scenarios:So, yes we could have two indexes with compound keys, but it would be storage efficient if index intersect works and reuse the same index3. However, it ignores index 3 and simply just use index 1 or index 2 whether it’s merchant or buyer query.",
"username": "Bluetoba"
},
{
"code": "db.col.createIndex({\"Merchant_ID\": 1, \"Last_Msg_Date\": -1}, { partialFilterExpression: { \"Merchant_ID\": { $exists: true }, \"Top_Chain_FLAG\": 1 } })db.col.createIndex({\"Buyer_ID\": 1, \"Last_Msg_Date\": -1}, { partialFilterExpression: { \"Buyer_ID\": { $exists: true }, \"Top_Chain_FLAG\": 1 } })Top_Chain_FLAGdb.col.find({\"Merchant_ID\": \"John\", Top_Chain_FLAG: 1}).sort({\"Last_MsgDate\": -1})db.col.find({\"Buyer_ID\": \"Bobby\", Top_Chain_FLAG: 1}).sort({\"Last_MsgDate\": -1})",
"text": "From what I’m reading, I think you need:Note that you don’t need Top_Chain_FLAG in the index. See example here.This should work correctly with these queries:I guess this could even be covered queries if you add the “message” (what you actually need to retrieve here) at the end of the index and project on it. But of course this would make the index bigger and the query would be fully executed in RAM.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi @MaBeuLux88, that is exactly what I wanted to do, i.e. covered queries and what you suggested with two indexes is exactly what ended up having.However, I wondered why the index interesection did not kick in with the indexex that I had earlier.thanks,\nGus",
"username": "Bluetoba"
},
{
"code": "$hint",
"text": "I’m not sure why exactly but usually index intersections are ranked pretty low by the query planner because they find that it’s faster (or fast enough) to use only one index.Also, check out @kevinadi’s answer here.You could consider using $hint and suggest the 2 indexes but it’s currently not supported - but there is a ticket for this. But as you figured, it’s not really a priority because there is a better alternative in most cases.Hopefully, you have the performance you were looking for now !Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "thank you @MaBeuLux88",
"username": "Bluetoba"
}
] | Index Intersection | 2020-09-15T01:48:52.361Z | Index Intersection | 2,994 |
null | [
"replication",
"monitoring"
] | [
{
"code": "show databasesHistory 0.203GB\nPublicRecords 0.203GB\nStarfields 83.913GB\nadmin 0.078GB\nconfig 0.078GB\ndata_sets 0.078GB\nlocal 435.865GB\nmongo_data_local 0.078GB\nmongo_data_test 0.078GB\nnagios 0.203GB\ntest 0.078GB\nshow databasesHistory 0.078GB\nPublicRecords 0.203GB\nStarfields 83.913GB\nadmin (empty)\ndata_sets 0.078GB\nlocal 50.054GB\nnagios 0.078GB\ntest (empty)\nDataSetslocal",
"text": "I’m working on getting an old (v 2.6.9) mongo replica set healthy so that we can upgrade it. Unfortunately, there is some weird replication behavior.Primary show databases output:Secondary show databases output:If you add together the DataSets and the local db sizes, they add up to almost exactly the same amount.I can’t figure out why that is. DataSets on the Secondary is also missing a collection.But rs.status() lists them both as healthy (along with an arbiter).I want to get the data in a good state before attempting any sort of upgrade. What could cause this?",
"username": "Trevor_Rydalch"
},
{
"code": "",
"text": "Hi @Trevor_Rydalch,Replica set sizes can vary as some replica members might go through more resyncs than others. Plus 2.6.9 historical version is using mmapv1 which is much more sensetive to fragmentation.Additionally, the local database is also a subject of oplog size defined for each member. In 2.6.9 those are defined per member and the highest chance is that Primary has a much bigger oplog resulting in large local database compare to secondary.If you compact or resync all nodes all non local databases should go to the same size. However its not critical for upgrades. You can do counts on collections to see you get same number of docs.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I just performed a resync of one of the nodes. It’s nearly complete, but the average object size is considerably larger than on the primary and it’s increasing. Any idea why this might be?",
"username": "Trevor_Rydalch"
},
{
"code": "",
"text": "Additionally, what could cause document counts to not match across all members of a replica set? The other secondary in the set has fewer documents (a very small percentage) and is not behind on syncing.",
"username": "Trevor_Rydalch"
},
{
"code": "",
"text": "This is rather unexpected. When you run rs.printSlaveReplicationInfo() you see same info on both nodes?Perhaps since 2.6 is far from being supported its best to take a 3.6 Mongodump from yiur primary and restore it to a new 3.6 replica set ? Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | Why are my databases different sizes across the replica set? | 2020-09-25T06:10:17.999Z | Why are my databases different sizes across the replica set? | 4,376 |
null | [
"dot-net",
"legacy-realm-cloud"
] | [
{
"code": " Exception thrown on realm access.\n (This is always the UI thread)\n",
"text": "I have updated Realm and RealmFody to v 5.0.1 (from 4.3.0), in 2 projects.\n(Both on Realm Cloud Full Sync).Both projects now fail to run giving ‘Realm access on incorrect thread’ exceptions.\nIt behaves as if any callback is treated as a separate thread:In a WPF Page:E.G.\nvoid ok_Click(object sender, RoutedEventArgs e) {}\nThe problem happens every where, less I get a new realm instance for every query!!\nThe projects have run well until the update. Problem is, I can’t revert to 4.3.0 because I get a new error when connecting to Realm about ‘encryption specified blah’.A possible problem: How does the realm DB format on Realm cloud get updated to V10 format to be in sync with 5.0.1 ?Thanks. Richard",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "@Richard_Fairall Can you share some code snippets please? Like how you are setting up your viewModel and what your onClick function looks like? Where the Realm setup is taking place, etc.",
"username": "Ian_Ward"
},
{
"code": "public partial class SimpleLoginDialog : Window\n{\n private Realm realm;\n\n public SimpleLoginDialog(object caller)\n {\n InitializeComponent();\n }\n\n private void OnWindowLoaded(object sender, RoutedEventArgs e)\n {\n // We should be in the UI thread here\n this.realm = RealmConnector.GetRealmInstance();\n\n // The call above does: Realm.GetInstance(realmConfig) \n // the realmConfig is a static entity\n\n BuildUsersViewModels();\n }\n\n private void BuildUsersViewModels()\n {\n // !! This is Ok from this method \n var users = realm.All<LoginUser>();\n }\n\n // event from normal Click=\"okButton_Click\" in XAML\n private void okButton_Click(object sender, RoutedEventArgs e)\n {\n // this fails when called her. Can be any type of callback (TextChanged etc)\n var users = realm.All<LoginUser>();\n\n // if I use a local realm instance, it works...\n var localRealm = RealmConnector.GetRealmInstance();\n var result = localRealm.All<LoginUser>();\n\n if (realm != null)\n {\n realm.Dispose();\n }\n Close();\n }\n}",
"text": "Here’s a code snippet:",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "I cannot test at the moment due to the dreaded exception:\n“Metadata Realm encryption was specified, but no encryption key was provided.”\nThis occurs when switching back to version 4.3.0.\nI now get this using v 5.0.1 as well, when calling:\nvar user = await User.LoginAsync(credentials, new Uri(RealmAuthUrl));I have never specified using encryption. Is it mandatory on 5.0.1?",
"username": "Richard_Fairall"
},
{
"code": "realm-object-server",
"text": "I was able to reproduce the threading issue - it’s due to a peculiarity in WPF that we’ll have to find a workaround for.For the encryption issue - it’s a change of behavior between 4.3.0 and 5.0.1. It’s not required to use encryption, but 4.3.0 incorrectly attempted to encrypt some metadata information which then 5.0.1 tries to read unencrypted. Depending on your platform, this metadata is typically stored in a realm-object-server folder, either in the user’s documents folder or locally next to the .exe. You can safely delete that and the error will go away as long as you don’t switch between versions.",
"username": "nirinchev"
},
{
"code": "",
"text": "Thanks a bunch. I’m now back on 4.3.0 which is working OK.Rich",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "I filed 5.0.1 throws invalid thread access on WPF main thread · Issue #2026 · realm/realm-dotnet · GitHub to track our progress. I believe I have a reasonable workaround and if it doesn’t introduce some unexpected issues, I’ll get it released later this week.",
"username": "nirinchev"
},
{
"code": "",
"text": "This has now been fixed and published in a prerelease package: Package Realm · GitHub.Feel free to give it a try and let me know if you still experience issues. Instructions on how to use the nightly builds can be found here.",
"username": "nirinchev"
},
{
"code": "",
"text": "Hi Nikola\nThank you for the result. I’m having problems with Nuget. Cannot authenticate and will not allow me another chance to connect to GitHub.\nI’ve tried common-line method. No luck yet.The reason I went to 5.0.1 was mainly to move to MongoDB from Realm Cloud. I can wait for a 5.0.2 release. Any rough idea of release date for 5.0.2?Rich",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "Hi Richard, apologies for taking so long to reply - I must have misconfigured my notification settings. I agree that the GitHub NuGet feed is a bit of a chore, but at least they offer a way to download the package directly (the download link is under “Assets” on the package page). An official 5.0.2 is expected early next week - I was waiting on some Core fixes that have now been released.Note, however, that the 5.x series do not yet support synchronization with MongoDB Realm. A beta release with support for it is expected later this fall. If you’re feeling especially adventurous and want to give the very early alphas a spin, let me know and we can share a build out of this PR.",
"username": "nirinchev"
},
{
"code": "",
"text": "Hi Nikola, thanks for the info. I can wait for 5.0.2, or even the dotnet sync update, (Covid-19 has given us the opportunity to delay launch of our apps) I will conquer Nuget!",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Cloud project: Upgrade Realm DotNet to 5.0.1 throws Access on incorrect thread | 2020-09-12T10:09:01.866Z | Realm Cloud project: Upgrade Realm DotNet to 5.0.1 throws Access on incorrect thread | 4,645 |
null | [
"data-modeling",
"performance"
] | [
{
"code": "",
"text": "Hello MongoDB fellows,I’m currently developing an app that requires data aggregations based on dates.We have a small framework and we once decided to store all dates as strings. (UTC in ISO format)\nNow, this works fine but when it comes to aggregations on dates, the MongoDB query functions rely on BSON dates.So my question is actually: what is the best approach?Thank you already for all the expertise you can share with this new kid on the blob.",
"username": "Coding_Edge"
},
{
"code": "",
"text": "My preference is to store the native BSON date. I do not have numbers but since BSON are some kind of integer, they must use less space and comparisons must be faster. Less space means smaller working sets. This gives more chance to fit in RAM. So less I/O.This being said it should be quite easy to write a small benchmark to confirm the above claims.However if your string dates are ISO and Utc with year-month-day order you should not need to convert when comparing.",
"username": "steevej"
},
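A minimal benchmark sketch along the lines suggested above; the collection names, document count and field name are assumptions, not from the thread:

```javascript
// Write the same timestamps once as native Dates and once as ISO-8601 strings.
const base = Date.now();
const bsonDocs = [], strDocs = [];
for (let i = 0; i < 50000; i++) {
  const d = new Date(base - i * 60000);           // one document per minute
  bsonDocs.push({ createdAt: d });
  strDocs.push({ createdAt: d.toISOString() });   // e.g. "2020-09-25T07:31:40.945Z"
}
db.events_bson.insertMany(bsonDocs);
db.events_str.insertMany(strDocs);
db.events_bson.createIndex({ createdAt: 1 });
db.events_str.createIndex({ createdAt: 1 });

// Compare storage size and the plan/time of an equivalent range query.
const cutoff = new Date(base - 24 * 3600 * 1000);
db.events_bson.stats().size;
db.events_str.stats().size;
db.events_bson.find({ createdAt: { $gte: cutoff } }).explain("executionStats");
db.events_str.find({ createdAt: { $gte: cutoff.toISOString() } }).explain("executionStats");
```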
{
"code": "",
"text": "I agree with what @steevej said but i would add to it the question “how do you plan to use these dates”?You mention querying and also aggregating and you mentioned being concerned about performance but what about correctness? Do you plan to do date arithmetic? You’ll end up having to convert your string dates to BSON dates anyway.Another question to consider is what were the reasons that made you originally decide to store dates as strings? Those might be valid reasons to continue that way, or they may be reasons that didn’t really hold.Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Well, to be honest, I can’t really remember anymore why we shifted towards dates as strings.\nMy best guess would be that we noticed issues in date handling between our backend services and our web applications…Where in the past the datetimes were used just to record the time of the event, now we also have a requirement to aggregate on the parts of these dates. So basically my question can be rephrased as:Assuming I have 10M documents that are created in the past 14 days, if I would like to see the number of records per day per hour, would the $dateFromString function give me a penalty over the native type? I agree with Steeve, it is more than likely that the native type will be much faster and it can be benchmarked.So the real question is indeed your question: why would one use strings for dates?",
"username": "Coding_Edge"
},
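A sketch of the per-day/per-hour count both ways (collection and field names are assumptions). With native dates the parts can be extracted directly; with strings, $dateFromString has to run once per document before grouping, which is the extra cost being discussed:

```javascript
// Native BSON dates
db.events.aggregate([
  { $match: { createdAt: { $gte: new Date(Date.now() - 14 * 24 * 3600 * 1000) } } },
  { $group: {
      _id: {
        day:  { $dateToString: { format: "%Y-%m-%d", date: "$createdAt" } },
        hour: { $hour: "$createdAt" }
      },
      count: { $sum: 1 }
  } },
  { $sort: { "_id.day": 1, "_id.hour": 1 } }
]);

// Dates stored as ISO strings: convert first, then group the same way
db.events.aggregate([
  { $addFields: { createdAtDate: { $dateFromString: { dateString: "$createdAt" } } } },
  { $group: {
      _id: {
        day:  { $dateToString: { format: "%Y-%m-%d", date: "$createdAtDate" } },
        hour: { $hour: "$createdAtDate" }
      },
      count: { $sum: 1 }
  } }
]);
```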
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Performance penalty $dateFromString | 2020-09-25T07:31:40.945Z | Performance penalty $dateFromString | 2,389 |
[] | [
{
"code": "import Foundation\nimport RealmSwift\n\n// Private Realm\nclass User: Object {\n @objc dynamic var syncuserid = \"\"\n @objc dynamic var username = \"\"\n @objc dynamic var image = \"\"\n @objc dynamic var email = \"\"\n @objc dynamic var exp:Int = 0\n @objc dynamic var level:Int = 0\n}\n\n// Common Realm\nclass Article: Object {\n @objc dynamic var name = \"\"\n @objc dynamic var body = \"\"\n}\n\n\n",
"text": "I’m developing an ios App with Realm Cloud(Realm Platform). I create Common Realm to which all end user have access, and Private Realm where personal data of each end user are saved.\nIn my App, user can post articles. Created articles are saved in Common Realm and all user can see them.My App has three pages, ArticleListPage, ArticlePage and UserInfoPage.\n(1)I want to show users who posted the article on ArticlePage.\n(2)I want to show users a personal information of writer on ArticlePage.ArticleListPage\n828×1283 157 KB\nArticlePage\n828×1660 136 KB\nUserInfoPage\n828×1566 116 KBNow I want to add “postedBy” property to “Article” object, but “User” object are saved each Private Realm.\nWhat is the best way to create relationship between Common Realm and Private Realm? Thank you.",
"username": "Shi_Miya"
},
{
"code": "",
"text": "To put it simply, How can end user access the data stored on another user’s Private Realm?",
"username": "Shi_Miya"
},
{
"code": "",
"text": "There’s a number of ways to approach this and it would be challenging to provide a specific answer without understanding the entire use case.I create Common Realm to which all end user have accessIf you want data to be available to all end users, then it would be stored within that Common Realm.Given that each user has their own Realm as well, at least some of their info will be shared so that info would also be stored in the common realm since anyone can see it at any time (based on the article it’s attached to).So perhaps storing some user data - their screen name, their story and their favorite food would be stored on the common realm, whereas info that’s not shared is stored on their own realm.You could also leverage Realm Permissions to provide a more granular control of control, as it can be per realm or even at the object level.Take a look at the App Architecture guide as it provides some guidance on app design with a similar architecture as to what’s in your question.A Common Realm for things that are required upon log in by every user - these might be",
"username": "Jay"
},
{
"code": "// Common Realm? or Private Realm?\nclass DMRelation: Object {\n @objc dynamic var userid1 = \"\"\n @objc dynamic var userid2 = \"\"\n let chats = LinkingObjects(fromType: Message.self, property: \"dmrelation\")\n}\n\nclass Message: Object {\n @objc dynamic var relatedChatroom: DMRelation?\n @objc dynamic var userid = \"\"\n @objc dynamic var body = \"\"\n @objc dynamic var image = \"\"\n}\n",
"text": "It’s crystal clear. Thank you.\nAnd now I have another question. When I make DirectMessage function, which Realm should I use?\nI will create DMRelation model for that functionbut I feel that each end user has relation data of all users is so inappropriate and inefficient.",
"username": "Shi_Miya"
},
{
"code": "",
"text": "It’s generally best practice to ask separate questions in separate posts so the thread doesn’t get too long and convoluted.We don’t know what a DirectMessage function is, what the purpose is or the use case. Maybe you would have a Messages Realm where messages are stored?",
"username": "Jay"
},
{
"code": "",
"text": "It’s generally best practice to ask separate questions in separate posts so the thread doesn’t get too long and convoluted.I got it. I’ll be careful from now on.DirectMessage function is real-time chat function like this\nios-chat-tutorial@2x874×688 94.1 KB\nUsing this, users can communicate with each other in my app.\nI created DMRelation class to connect two users, and also created Message class to save chat log.\nIt seems a little bit inefficient to store DMRelation object of all user in Common Realm, but storing in Private Realm will be also inefficiently. Because I’m new to Realm, I don’t know what to do.",
"username": "Shi_Miya"
},
{
"code": "// Common Realm\nclass Article: Object {\n @objc dynamic var name = \"\"\n @objc dynamic var body = \"\"\n//this is where you put the partitionKeyValue the user object in the per-user realm\n @objc dynamic var userRealmPartitionKeyValue = \"\"\n}\nlet realm = try! Realm(configuration: user.configuration(partitionValue: myArticle.userRealmPartitionKeyValue!))",
"text": "@Shi_Miya So one thought I’ve had about implementing this is -A common realm which includes a list of all Articles. The ArticlesList View would contain a model of all Articles which has some denormalized data about the User, for instance their display name and description.Then a user would click on an article which would navigate to a view just for that article (and bound to a query just for that Article id) - no need to modify the data model here.Now for the trickier bit - your UserInfo page - at this point you’ll likely want to switch to a per-user realm. The user who is requesting another user’s realm will need to request the appropriate partitionKey value in order to open that realm - once they open that realm they will have read permissions for that realm and can view the users data (or as much as you want to allow, you may want to split “private” user data into another realm. )From here you would then open the user realm for this user by passing this field into a new realm open call. Such as (pseudocode for the sake of understanding)let realm = try! Realm(configuration: user.configuration(partitionValue: myArticle.userRealmPartitionKeyValue!))I hope this helps. We understand that managing different realms can become a bit of an implementation detail and we are endeavoring to fix this. It is our top priority right now to implement a flexible syncing model - a la query-based sync in the legacy realm cloud which would make this schema design much easier.@Jay I appreciate your response here and it hasn’t gone unnoticed, but I think we should direct people away from object-level permissions for the legacy realm cloud. It was a good idea but it didn’t scale and right now per-realm/partition permissions is what we have. Stay tuned for future “flexible-sync” improvements",
"username": "Ian_Ward"
},
{
"code": "",
"text": "First of all, Thank you for your detail explanation. It helps me a lot.\nHowever, I have an error while trying to open other’s per-user realm, so I’ll create another question.",
"username": "Shi_Miya"
},
{
"code": "",
"text": "If I could weigh in on this. My own sense is to partition the data base into one common realm that is readonly by everyone, and multiple private user realms that are read/write only by the logged in user and not readable by anyone else. If you make the common realm writable, you might have some scalability issues as the number of users increase. All the writing to the common realm should be done by server functions that act on behalf of each user. So essentially the super user server is the only agent updating the common realm. For that I would add an operations collection that users write in their private realms, that the server listens to, and updates the shared common realm accordingly.",
"username": "Richard_Krueger"
},
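A rough sketch of that "only the server writes to the common realm" idea as a Realm database trigger function; the service, database, collection and field names below are assumptions, not something taken from this thread:

```javascript
exports = async function (changeEvent) {
  // Fires on inserts into a per-user "operations" collection.
  const mongodb = context.services.get("mongodb-atlas");
  const shared = mongodb.db("app").collection("sharedArticles");
  const op = changeEvent.fullDocument;

  // The trigger (running server-side) is the only writer to the common
  // partition; end users only ever write to their own realm.
  await shared.insertOne({
    _partition: "common",        // partition key value of the common realm
    name: op.articleName,
    body: op.body,
    postedBy: op.ownerId
  });
};
```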
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How could I get the data of another Realm? | 2020-09-20T05:15:48.238Z | How could I get the data of another Realm? | 3,277 |
|
[
"atlas-search"
] | [
{
"code": "",
"text": "Hi,I got problem with Near Operator when i try to use date.My mapping\nMy pipeline\n\npivot was 1 week in millisecondsI got date not near in result\nThanks for your help",
"username": "Jonathan_Gautier"
},
{
"code": "new Date \"origin\": ISODate(\"1995-01-01T00:00:00.000+00:00\")",
"text": "Hi @Jonathan_Gautier,I would recommend running a simpler query as in our example. Using a direct “near” clause and replacing new Date with\n \"origin\": ISODate(\"1995-01-01T00:00:00.000+00:00\")Learn how to search near a numeric, date, or GeoJSON point value.Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny ,Same ! Near dont work with simpler query Thanks for your help.",
"username": "Jonathan_Gautier"
},
{
"code": "",
"text": "Hi @Jonathan_Gautier,Do you have documents closer to 1995?Perhaps sort as the next stage asc …",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_DuchovnyNo documents closer to 1995 ! Every time got false/positive result.",
"username": "Jonathan_Gautier"
},
{
"code": "nearnew Date(\"1995\")nearmustfiltermustmustNot",
"text": "near will return all documents, but will score those closer to a date higher. So the first result will be the document closest to new Date(\"1995\")using near in a must clause will have no effect on a filter as defined on filter or must or mustNot.If you want to filter your results, you can use a range operator.",
"username": "Doug_Tarr"
},
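A sketch of the range-based filtering described above, combined with near for ordering; the field path is an assumption (the original mapping was only shared as a screenshot), and the pivot value of one week in milliseconds comes from the thread:

```javascript
db.collection.aggregate([
  { $search: {
      compound: {
        // filter restricts the result set without influencing the score
        filter: [ {
          range: {
            path: "date",
            gte: ISODate("1995-01-01T00:00:00.000Z"),
            lte: ISODate("1995-12-31T23:59:59.999Z")
          }
        } ],
        // near then only orders the surviving documents by proximity
        must: [ {
          near: {
            path: "date",
            origin: ISODate("1995-01-01T00:00:00.000Z"),
            pivot: 604800000   // 1 week in milliseconds
          }
        } ]
      }
  } }
]);
```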
{
"code": "",
"text": "Thanks for your help, i will use range now.I understand how near works now ",
"username": "Jonathan_Gautier"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas Search "Near" Operator Date, Bug? | 2020-09-24T18:36:12.225Z | Atlas Search “Near” Operator Date, Bug? | 2,162 |
|
null | [
"data-modeling"
] | [
{
"code": "$bucketoutput",
"text": "Hi,\ni just go through the $bucket clause in mongodb and test that as well. I came to know that it was at output level while getting the records from the database. is there is any way to insert data into respective records as we do in hive(bucketing) ? if so then kindly share the syntax\nthanks",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "Here is the link: https://docs.mongodb.com/manual/reference/operator/aggregation/bucket/\ni just want to know that is there is any possibility to insert the data in buckets as in the above link we can only retrieve the data from the collections in buckets.Thanks",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "what i concluded is that we can’t insert data in mongoDB in buckets(like we used to do in hive etc) but we can retrieve the documents in form of buckets. Check this link for bucket pattern ",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "Do you mean like this: Building with Patterns: The Bucket Pattern | MongoDB Blog",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "yes @Asya_Kamsky. that is only used when we want to display the document then we can use bucketing but it is not the same as we have in HIVE etc",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "Maybe it would be easier to answer if you describe exactly what functionality you are looking for… Do you mean partitioning collections (similar to RDBMS partitioning)?",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "yes, I want to store the data(documents) in bucketed(partitioning in RDBMS) form so that when so ever I apply some find operation it should not scan the whole data.",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "so that when so ever I apply some find operation it should not scan the whole dataThat’s not how things work in MongoDB - unless you are doing an unindexed query. If you create an index on field (or fields) that you are querying by, then the index will be quickly searched and only documents matching the values you’re querying about will be fetched. This is just like regular indexes in RDMBS.Asya",
"username": "Asya_Kamsky"
},
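A small illustration of that point; the collection and field names are assumptions:

```javascript
db.orders.createIndex({ customerId: 1 });
db.orders.find({ customerId: "C-42" }).explain("executionStats");
// The winning plan should show an IXSCAN on { customerId: 1 } rather than a COLLSCAN,
// i.e. only documents matching the queried value are examined, with no need to
// pre-bucket or partition the stored data.
```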
{
"code": "ONLY",
"text": "Oh! It mean that we can’t do the same thing as we do in Hive(bucketing) so mongodb ONLY support for displaying the data in bucketed form(run time)",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Insertion of data in buckets | 2020-09-18T10:43:30.931Z | Insertion of data in buckets | 3,856 |
null | [
"aggregation"
] | [
{
"code": " {\n \"id\": \"8SDQ8\"\n \"durationInMonths\": 12,\n \"startDate\": {\n \"$date\": \"2020-07-03T09:14:46.609Z\",\n },\n \"endDate\": {\n \"$date\": \"2021-07-03T09:14:46.609Z\",\n }\n }\n const myDate = new Date();\n subs.findOneAndUpdate({\n \tid: '8SDQ8'\n }, {\n \t$set: {\n \t\tstartDate: myDate,\n \t\tendDate: {\n \t\t\t$addMonths(myDate, \"$durationInMonths\")\n \t\t}\n \t}\n });\n",
"text": "Let’s say I have a collection of subscriptions as below:How can I add $durationInMonths to a variable date without being forced to fetch $durationInMonths beforehand ? Something like this:",
"username": "Ji_Be"
},
{
"code": "[ {$set:{endDate:{$dateFromParts:{\n year:{$year:\"$startDate\"},\n month:{$add:[\"$durationInMonths\", {$month:\"$startDate\"}]},\n day:{$dayOfMonth:\"$startDate\"},\n hour:{$hour:\"$startDate\"},\n minute:{$minute:\"$startDate\"},\n second:{$second:\"$startDate\"}\n }}}}\n]",
"text": "Do you mean \"how do I increment start date by 12 (or some arbitrary number of months) to get end date?You can use aggregation pipeline update syntax (new in 4.2) like this:",
"username": "Asya_Kamsky"
}
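To show how that pipeline would actually be applied, a sketch using the collection from the question (\"subs\" and the id value come from the original snippet; requires MongoDB 4.2 or newer):

```javascript
db.subs.updateOne(
  { id: "8SDQ8" },
  [ { $set: { endDate: { $dateFromParts: {
        year:   { $year: "$startDate" },
        month:  { $add: ["$durationInMonths", { $month: "$startDate" }] },
        day:    { $dayOfMonth: "$startDate" },
        hour:   { $hour: "$startDate" },
        minute: { $minute: "$startDate" },
        second: { $second: "$startDate" }
  } } } } ]
);
// On recent server versions $dateFromParts carries a month value above 12 over into the year.
```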
] | Increment months to a date using an element attribute | 2020-07-09T10:43:53.070Z | Increment months to a date using an element attribute | 3,021 |
null | [
"compass"
] | [
{
"code": "Robo 3Tdb.collectionA.find({\"userId\":\"u123\"}).count()MongoDB Compass",
"text": "Hi\nI want to get the count of total records in a collection w.r.t to a filter so for that I typed the query in Robo 3T as\ndb.collectionA.find({\"userId\":\"u123\"}).count()\nand it works fine for me and show the output.But when I moved to MongoDB Compass\ni was confused that where to write the count clause, can anybody help me to do this?",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "In Compass, when you run a query you see the number of results without the need for a count.image1392×892 192 KBAlternatively, you can use the embedded shell and use the same command you use in Robo 3T.image1392×892 145 KB",
"username": "Massimiliano_Marcon"
},
{
"code": "(1 of 20 /N/A)",
"text": "it show N/A in total documents (1 of 20 /N/A)",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "Weird. Is it a big collection? Can you share a screenshot?",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "Yes it is. ok here is the screenshot\nimage2014×338 16.5 KB",
"username": "Nabeel_Raza"
},
{
"code": "countmaxTimeMS5000",
"text": "The size of the collection might be the reason then. It’s possible that the count times out. You can try expanding the query bar by clicking on Options. On the bottom right, you’ll be able to set maxTimeMS. By default, it’s set to 5000 but you can increase it.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "But where to write the count clause in MongoDB Compass. Can you share the screenshot, that would be great!",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "Hi, you don’t need to specify the “count” clause explicitly in Compass UI. Simply put filter criteria and hit enter - it would return all the values matching that criteria and would show:Displaying documents 1-20 of 150\nAssuming, there were 150 records that matched your filter criteria.However, as suggested by @Massimiliano_Marcon - please click to expand the OPTIONS button and try increasing the MAXTIMEMS from 5000 to 10,000 or something and then see. You could even try combination of SKIP and LIMIT if the data set count returned from your query criteria is extremely very large number.Otherwise, simply use the MongoDB shell that comes with new version of Compass UI and write the same query that you would use with Robo3T.image1879×545 27.5 KB",
"username": "ManiK"
},
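For the embedded-shell route, the same count can be given a longer server-side time limit explicitly; the filter comes from the original question, and the 60-second value is just an example:

```javascript
// countDocuments (and count) accept a maxTimeMS option
db.collectionA.countDocuments({ userId: "u123" }, { maxTimeMS: 60000 })
db.collectionA.count({ userId: "u123" }, { maxTimeMS: 60000 })
```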
{
"code": "Mongo CompassN/Asolution10000000+MAXTIMEMS",
"text": "Thanks @ManiK for detailed message.\nThe thing is that we have some limitation by the client that we can only do queries via Mongo Compass & it’s version is 1.19.X. we can’t update it.\nSecondly we have hundred and thousands of documents in a collection so it show N/A instead of total document count. (check this image)The solution is that I have around 10000000+ documents in a collection and as @Massimiliano_Marcon and @ManiK said that increase the MAXTIMEMS limit so I did the same step and got the solution.Here is this screenshot with success:\n\nimage1113×211 17.8 KB\n",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How we can use $count clause in MongoDB Compass | 2020-09-25T07:07:01.882Z | How we can use $count clause in MongoDB Compass | 50,655 |
null | [
"containers",
"kubernetes-operator"
] | [
{
"code": "",
"text": "All - I am building a Kubernetes Operator to integrate an Open Source digital asset management application with Mongo. As such - I am standing up a Mongo cluster as documented in MongoDB Enterprise Kubernetes Operator — MongoDB Kubernetes Operator 1.7 - in a dev environment to test the integration. So far I’ve created the Operator and Ops Manager. The instructions seem to indicate that it requires logging into the Ops Manager UI to perform some of the set up. In other words - it looks to me like in v1.7 it is not possible to stand up a complete Mongo environment 100% declaratively using only Kubernetes manifests. I’m asking the question in case I’ve missed some piece of documentation. Help is appreciated. Thanks",
"username": "Eric_Ace"
},
{
"code": "",
"text": "Having completed generation of the Operator, Ops Manager, and Database using manifests, the only piece that requires manual intervention is the generation of Project-level API keys in the Ops Manager UI. I don’t see any info leading me to believe this can be done declaratively. This would be a great feature for a future release of the Operator.",
"username": "Eric_Ace"
},
{
"code": "",
"text": "Hi. This is one manual step that needs to happen to authenticate with Ops manager. We are trying to get to a stage where we can have E2E model in Kubernetes, but the security model for K8S API and Ops Manager is different. Here is a feature that we track and please vote for it Headless OPS Manager deployment – MongoDB Feedback Engine\nIt is possible to create one API Key for all users but then that secret has to be created in each namespace. However, It is not a particularly secure mode.",
"username": "Andrey_Belik"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo Kubernetes cluster declaratively | 2020-09-24T20:02:59.074Z | Mongo Kubernetes cluster declaratively | 2,008 |
null | [
"containers",
"ops-manager"
] | [
{
"code": "$ oc get opsmanager\nNAME REPLICAS VERSION STATE (OPSMANAGER) STATE (APPDB) STATE (BACKUP) AGE WARNINGS\nops-manager 3 4.4.1 Running Reconciling 21m\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[2020/09/24 16:01:52.410] [.info] [cm/director/director.go:planAndExecute:544] <ops-manager-db-0> [16:01:52.410] Step=WaitFeatureCompatibilityVersionCorrect as part of Move=WaitFeatureCompatibilityVersionCorrect in plan failed : <ops-manager-db-0> [16:01:52.410] Postcondition not yet met for step WaitFeatureCompatibilityVersionCorrect because \"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[<Current FeatureCompatibilityVersion = is not equal to desired = 4.2 >].\"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\" Recomputing a plan...\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:52.197+0000 I NETWORK [listener] connection accepted from 10.116.0.110:49708 #85 (25 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:52.198+0000 I NETWORK [conn85] received client metadata from 10.116.0.110:49708 conn85: { driver: { name: \\\"mongo-java-driver|legacy\\\", version: \\\"3.10.2\\\" }, os: { type: \\\"Linux\\\", name: \\\"Linux\\\", architecture: \\\"amd64\\\", version: \\\"4.18.0-193.12.1.el8_2.x86_64\\\" }, platform: \\\"Java/AdoptOpenJDK/11.0.8+10\\\" }\"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[2020/09/24 16:01:53.110] [.info] [main/components/agent.go:LoadClusterConfig:197] [16:01:53.110] clusterConfig unchanged\"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[2020/09/24 16:01:53.515] [.info] [cm/mongoclientservice/mongoclientservice.go:func1:1364] [16:01:53.515] Testing auth with username __system db=local to ops-manager-db-0.ops-manager-db-svc.backing.svc.cluster.local:27017 (local=false) connectMode=SingleConnect ipversion=0 tls=false\"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[2020/09/24 16:01:53.602] [.info] [cm/mongoctl/processctl.go:GetKeyHashes:1742] <ops-manager-db-0> [16:01:53.602] Able to successfully auth to ops-manager-db-0.ops-manager-db-svc.backing.svc.cluster.local:27017 (local=false) using desired auth key\"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[2020/09/24 16:01:53.716] [.info] [cm/mongoctl/replsetctl.go:RsMemberCanPerformGlobalAction:468] <ops-manager-db-0> [16:01:53.716] Mongod (ops-manager-db-0.ops-manager-db-svc.backing.svc.cluster.local:27017) cannot perform Global Update Action because there is no healthy primary in the replica set.\"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[2020/09/24 16:01:53.717] [.info] [cm/mongoctl/globalaction.go:MongodCanPerformGlobalAction:296] <ops-manager-db-0> [16:01:53.716] Mongod cannot perform Global Update Action because other members of rs set are not done\"}\n> {\"logType\":\"automation-agent-stdout\",\"contents\":\"<ops-manager-db-0> [16:01:54.020] ... 
process has a plan : WaitFeatureCompatibilityVersionCorrect\"}\n> {\"logType\":\"automation-agent-stdout\",\"contents\":\"<ops-manager-db-0> [16:01:54.020] Running step 'WaitFeatureCompatibilityVersionCorrect' as part of move 'WaitFeatureCompatibilityVersionCorrect'\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.424+0000 I NETWORK [conn84] end connection 10.116.0.114:52980 (24 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.425+0000 I NETWORK [listener] connection accepted from 10.116.0.114:53046 #86 (25 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.425+0000 I NETWORK [listener] connection accepted from 10.116.0.114:53048 #87 (26 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.425+0000 I NETWORK [conn87] end connection 10.116.0.114:53048 (25 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.425+0000 I NETWORK [listener] connection accepted from 10.116.0.114:53050 #88 (26 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.425+0000 I NETWORK [conn88] received client metadata from 10.116.0.114:53050 conn88: { driver: { name: \\\"mongo-go-driver\\\", version: \\\"v1.0.1\\\" }, os: { type: \\\"linux\\\", architecture: \\\"amd64\\\" }, platform: \\\"go1.13.4\\\", application: { name: \\\"MongoDB Automation Agent v10.2.15.5958 (git: 0e81a83f7adc69fb862335072e7c36ceb868b8dd)\\\" } }\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.425+0000 I NETWORK [conn83] end connection 10.116.0.114:52978 (25 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.507+0000 I ACCESS [conn88] Successfully authenticated as principal __system on local from client 10.116.0.114:53050\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.516+0000 I NETWORK [listener] connection accepted from 10.116.0.114:53052 #89 (26 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.518+0000 I NETWORK [listener] connection accepted from 10.116.0.114:53054 #90 (27 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.518+0000 I NETWORK [conn90] received client metadata from 10.116.0.114:53054 conn90: { driver: { name: \\\"mongo-go-driver\\\", version: \\\"v1.0.1\\\" }, os: { type: \\\"linux\\\", architecture: \\\"amd64\\\" }, platform: \\\"go1.13.4\\\", application: { name: \\\"MongoDB Automation Agent v10.2.15.5958 (git: 0e81a83f7adc69fb862335072e7c36ceb868b8dd)\\\" } }\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.599+0000 I ACCESS [conn90] Successfully authenticated as principal __system on local from client 10.116.0.114:53054\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.602+0000 I NETWORK [conn89] end connection 10.116.0.114:53052 (26 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.603+0000 I NETWORK [listener] connection accepted from 10.116.0.114:53056 #91 (27 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.603+0000 I NETWORK [conn86] end connection 10.116.0.114:53046 (26 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.603+0000 I NETWORK [listener] connection accepted from 10.116.0.114:53058 #92 (27 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.603+0000 I NETWORK [conn88] end connection 
10.116.0.114:53050 (26 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.603+0000 I NETWORK [conn92] end connection 10.116.0.114:53058 (25 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.603+0000 I NETWORK [conn90] end connection 10.116.0.114:53054 (24 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.603+0000 I NETWORK [listener] connection accepted from 10.116.0.114:53060 #93 (25 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.604+0000 I NETWORK [conn93] received client metadata from 10.116.0.114:53060 conn93: { driver: { name: \\\"mongo-go-driver\\\", version: \\\"v1.0.1\\\" }, os: { type: \\\"linux\\\", architecture: \\\"amd64\\\" }, platform: \\\"go1.13.4\\\", application: { name: \\\"MongoDB Automation Agent v10.2.15.5958 (git: 0e81a83f7adc69fb862335072e7c36ceb868b8dd)\\\" } }\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:53.616+0000 I ACCESS [conn93] Successfully authenticated as principal __system on local from client 10.116.0.114:53060\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:54.094+0000 I NETWORK [listener] connection accepted from 10.116.0.109:47232 #94 (26 connections now open)\"}\n> {\"logType\":\"mongodb\",\"contents\":\"2020-09-24T16:01:54.095+0000 I NETWORK [conn94] received client metadata from 10.116.0.109:47232 conn94: { driver: { name: \\\"mongo-java-driver|legacy\\\", version: \\\"3.10.2\\\" }, os: { type: \\\"Linux\\\", name: \\\"Linux\\\", architecture: \\\"amd64\\\", version: \\\"4.18.0-193.12.1.el8_2.x86_64\\\" }, platform: \\\"Java/AdoptOpenJDK/11.0.8+10\\\" }\"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[2020/09/24 16:01:54.020] [.info] [cm/director/director.go:computePlan:269] <ops-manager-db-0> [16:01:54.020] ... 
process has a plan : WaitFeatureCompatibilityVersionCorrect\"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[2020/09/24 16:01:54.020] [.info] [cm/director/director.go:executePlan:876] <ops-manager-db-0> [16:01:54.020] Running step 'WaitFeatureCompatibilityVersionCorrect' as part of move 'WaitFeatureCompatibilityVersionCorrect'\"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[2020/09/24 16:01:54.020] [.info] [cm/director/director.go:tracef:781] <ops-manager-db-0> [16:01:54.020] Precondition of 'WaitFeatureCompatibilityVersionCorrect' applies because \"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[All the following are true: \"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\" ['currentState.Up' = true]\"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"]\"}\n> {\"logType\":\"automation-agent-verbose\",\"contents\":\"[2020/09/24 16:01:54.021] [.info] [cm/director/director.go:planAndExecute:544] <ops-manager-db-0> [16:01:54.020] Step=WaitFeatureCompatibilityVersionCorrect as part of Move=WaitFeatureCompatibilityVersionCorrect in plan failed : <ops-manager-db-0> [16:01:54.020] Postcondition not yet met for step WaitFeatureCompatibilityVersionCorrect because \"}\napiVersion: mongodb.com/v1\nkind: MongoDBOpsManager\nmetadata:\n name: ops-manager\n namespace: backing\nspec:\n adminCredentials: ops-manager-admin-secret\n applicationDatabase:\n additionalMongodConfig:\n operationProfiling:\n mode: slowOp\n members: 3\n persistent: false\n podSpec:\n cpu: \"0.25\"\n version: 4.2.6-ent\n backup:\n enabled: false\n configuration:\n automation.versions.source: mongodb\n mms.adminEmailAddr: [email protected]\n mms.fromEmailAddr: [email protected]\n mms.ignoreInitialUiSetup: \"true\"\n mms.mail.hostname: [email protected]\n mms.mail.port: \"465\"\n mms.mail.ssl: \"true\"\n mms.mail.transport: smtp\n mms.minimumTLSVersion: TLSv1.2\n mms.replyToEmailAddr: [email protected]\n replicas: 3\n statefulSet:\n spec:\n template:\n spec:\n containers:\n - name: mongodb-ops-manager\n readinessProbe:\n failureThreshold: 100\n version: 4.4.1\n",
"text": "I’m trying to bring up Ops Manager on OpenShift Code Ready Containers. I’m following instructions here: Deploy an Ops Manager Resource — MongoDB Kubernetes Operator 1.7What I consistently see is, three pods come up successfully: ops-manager-db-0,1,2. Then ops-manager-0,1,2. Then the ops-manager-db pods restart in order: 3,2,1. 3 and 2 succeed, but 1 never succeeds. If shows READY=0/1. Ops manager is also stuck in a state of “Reconciling”ops-manager-db-0 pod repeats the same log messages indefinitely:Here is the manifest for the Ops Manager. (I changed the failure threshhold because Ops Manager was triming out starting up.)Any help is appreciated. Thanks.",
"username": "Eric_Ace"
},
{
"code": "spec:\n applicationDatabase:\n featureCompatibilityVersion: \"4.2\"",
"text": "In case it helps anyone, I was able to resolve this by adding:",
"username": "Eric_Ace"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Trying to bring up Ops manager in OpenShift CRC | 2020-09-24T19:38:29.576Z | Trying to bring up Ops manager in OpenShift CRC | 3,177 |
null | [
"security",
"configuration"
] | [
{
"code": "{\"t\":{\"$date\":\"2020-09-22T18:46:03.897+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"209.17.96.210:53448\",\"connectionId\":11,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-22T19:55:23.236+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"45.227.255.224:61000\",\"connectionId\":12,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-22T20:03:18.361+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"38.88.252.187:44861\",\"connectionId\":13,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-22T21:04:01.448+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"162.142.125.35:43268\",\"connectionId\":14,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-23T03:04:55.582+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"192.241.232.202:34776\",\"connectionId\":15,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-23T04:31:06.036+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"20.36.16.23:13312\",\"connectionId\":16,\"connectionCount\":1}}\n",
"text": "I need some clarity on how the bindIp in /etc/mongod.conf works. I started a new VM instance in Azure. I opened the port 27017 (it is open to public). I started a new MongoDB instance (version is 4.4). In /etc/mongod.conf, I added the private IP of the VM instance. So it has 127.0.0.1 and the private IP like this 10.0.x.x,127.0.0.1. I restarted the server. I did not enable authentication - thought I will do all that after application is set up.The next day, I see lot of entries in the log like this:Is this expected? I thought Mongo is listening only at the private IP and 127.0.0.1 and hence only someone from within the network or access to the machine will be able to connect.",
"username": "Jayadevan_Maymala"
},
{
"code": "/etc/mongod.conf",
"text": "Hi @Jayadevan_Maymala welcome to the community!I’m not sure why MongoDB accepts incoming connection from the internet when you specifically only bind to private addresses. On bind ip, MongoDB does nothing special in terms of binding, as mentioned in the bind ip page. Having said that, I would highly recommend you use IP whitelisting on top of on MongoDB’s bind ip setting to limit incoming connections.Maybe share your /etc/mongod.conf file, and some log lines from the server when it starts up so we can see what’s going on?Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @kevinadi ,\nI am pasting the log entries which mention which IPs MongoDB is listening on.\ngrep Listening ./mongod.log\n{“t”:{\"$date\":“2020-09-22T08:45:38.054+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:\"/tmp/mongodb-27017.sock\"}}\n{“t”:{\"$date\":“2020-09-22T08:45:38.054+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“127.0.0.1”}}\n{“t”:{\"$date\":“2020-09-22T10:57:24.262+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:\"/tmp/mongodb-27017.sock\"}}\n{“t”:{\"$date\":“2020-09-22T10:57:24.262+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“10.0.1.4”}}\n{“t”:{\"$date\":“2020-09-22T10:57:24.262+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“127.0.0.1”}}May be - hackers are sending requests to standard ports on public IPs, and Azure’s network is mapping it to the private IP and forwarding it? I have opened the port once more. Would you like to trace the path and help troubleshoot? IP is 52.172.147.136.",
"username": "Jayadevan_Maymala"
},
{
"code": "",
"text": "Hi @Jayadevan_Maymala,Would you like to trace the path and help troubleshoot? IP is 52.172.147.136.Apologies, I can’t really help by connecting directly to your deployment.However, you can check if internet connectivity is allowed by Azure by trying this experiment from another server, or possibly your co-worker’s laptop.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Yes, allowing a port on a Azure Network Security Group will send it to the hosts local ip. The internet IP is not bound on the virtual machine.You can set the source address to allow only the IPs you want to access this port. If you don’t access the VM from a static IP you can use Just In Time Access to allow your ip when you need to access the vm.",
"username": "chris"
},
{
"code": "",
"text": "Thanks. I closed the port. There was no real need to open 27017 to public. Internal connections from the web server can go through the private IP anyway.",
"username": "Jayadevan_Maymala"
}
] | MongoDB bind ip and ports | 2020-09-23T04:51:19.322Z | MongoDB bind ip and ports | 4,994 |
null | [
"pymodm-odm"
] | [
{
"code": " connect(f\"mongodb://{ip}:{port}/{db_name}\", alias='inventory')\n\n class add_as_reference(fields.ReferenceField):\n name = fields.CharField\n\n class Meta:\n connection_alias='inventory'\n\n class hold_many_references(MongoModel):\n name = fields.CharField\n my_refs = fields.ListField(add_as_reference)\n\n class Meta:\n connection_alias='inventory'\n",
"text": "I am very new to pymodm, and I am stuck on trying to create a one to many linking using the models included with pymodm. Here is my example code:This fails, with the error:\nValueError: field must be an instance of MongoBaseField, not <class ‘pyuniti.common.lib.inventory.inventory.InventoryDB_API.open_db..add_as_reference’>I see that you must make this an instance of MongoBaseField, but how can I do that, but have the list hold references to other documents?Any help would be greatly appriecated.",
"username": "Adam_Pfeiffer"
},
{
"code": " class Meta:\n connection_alias='inventory'\n\n class hold_many_references(MongoModel):\n name = fields.CharField()\n my_refs = fields.ListField(ObjectIdField())\n\n class Meta:\n connection_alias='inventory'\n\n ref1 = add_as_reference('ref1').save()\n ref2 = add_as_reference('ref2').save()\n hmr1 = hold_many_references('hmr1', [ref1._id, ref2._id]).save()\n",
"text": "After doing more reading, I have identified multiple issues with my original post. I have come up with a solution, but I feel there should be a better way to manage a list of document ids than what I have come up with. Here is my new code that does allow me to store a list of document ids`class add_as_reference(MongoModel):\nname = fields.CharField()`The downside of doing this way is that I have to manually dereference the document ids in my_refs. What I would really like a field type ListOfReferenceField which you could access values from like this:\nhmr.my_refs[0].nameAm I missing something basic? I ask as I am surprised this doesn’t exist already.",
"username": "Adam_Pfeiffer"
}
] | Create a ListField that contains multiple ReferenceFields | 2020-09-25T00:32:24.860Z | Create a ListField that contains multiple ReferenceFields | 4,435 |
null | [] | [
{
"code": "",
"text": "Hello everyone !In our project we face always the same question, shall we just calculate the derived data ‘on the fly’ or shall we stock and maintain derived data (with callbacks and scripts)?Let’s say I have the collections “Projects” and “Users”. I want to know all my projects so I can\n1 Create a field userId in Projects and look for all the projects with this userId (having indexed) every time.\nOr\n2 Create an array myProjectIds in Users plus the userId in Projects. myProjects should be maintained with callbacks and scripts turning each x time.\nThis time the first option seems easier and enoughBut what if we want to look for all the projects I belong to in any way to a project and I have too many fields to query in each document (let’s say I have an array teamMembers with and object in each)?Of course we can measure the time to answer of the differente queries, but is there any rule to know in advance if it’s worth it to take the second approach?Thanks for you answers!",
"username": "Christophe_Conceicao"
},
{
"code": "",
"text": "Hi @Christophe_Conceicao - welcome to the community!I’m curious what direction you went and how that is working out for you. Have you learned anything along the way?The rule of them when modeling data in MongoDB is data that is accessed together should be stored together. The way you model your date really depends on your use case, and how the application will need to update and retrieve the data.I’m thinking the Extended Reference Pattern could be a good option for your use case. This would allow you to store relevant information in the Projects collection as well as the Users collection.A few resources that can help you on your data modeling journey:\nBlog series on schema design patterns:A summary of all the patterns we've looked at in this seriesBlog series on schema design anti-patterns\nhttps://www.mongodb.com/article/schema-design-anti-pattern-summaryFree MongoDB University Course on Data ModelingDiscover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.",
"username": "Lauren_Schaefer"
}
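A small sketch of what the Extended Reference Pattern could look like here; every name and field below is an assumption, reusing the userId/teamMembers wording from the question. Each project embeds the small, stable slice of user data it is usually displayed with, while the full profiles stay in the Users collection:

```javascript
db.projects.insertOne({
  name: "Project X",
  teamMembers: [
    { userId: "u123", fName: "Ada",   accessLevel: "owner"  },
    { userId: "u456", fName: "Grace", accessLevel: "editor" }
  ]
});

// "All the projects I belong to, in any role" then becomes one indexed query:
db.projects.createIndex({ "teamMembers.userId": 1 });
db.projects.find({ "teamMembers.userId": "u456" });
```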
] | Stock the data or calculate 'on the fly', how to choose? | 2020-06-22T13:09:08.910Z | Stock the data or calculate ‘on the fly’, how to choose? | 2,389 |
null | [] | [
{
"code": "",
"text": "Hi, I’m a newbie developer and i want to build a tinder-like app in React Native with some features like chat, geolocation…etc.\nAt first i wanted to build my own API with NodeJS and MongoDB but then i found out about MongoDB Realm. I would like to know if MongoDB Realm can do the job?",
"username": "Ali_Khodr"
},
{
"code": "",
"text": "Hi @Ali_Khodr,Welcome to the community !Yes, one of MongoDB Realm and Atlas platforms advantages is it gives you all the backend tools you need to build scalable apps like Tinder.From authentication through most popular providers (Google, Facebook, Apple etc.) To crud and aggregation capabilities with rule based access to your data from the frontend app itself as well as event driven triggers and mobile sync for offline first development.\nAlso you can integrate various aws services to store pictures or files:MongoDB Stitch allows for the easy updating and tracking of files which can then be automatically updated to a cloud storage provider like AWS S3. See the code and follow along on the live coding on Twitch._Stitch was rebranded as Realm _Please read the introduction:Good luck!Best\nPavel",
"username": "Pavel_Duchovny"
}
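For the geolocation piece of the question specifically, the usual building block is a 2dsphere index plus a $near query; the collection name, field name and coordinates below are assumptions:

```javascript
db.profiles.createIndex({ location: "2dsphere" });

// Profiles within 10 km of a given point ([longitude, latitude])
db.profiles.find({
  location: {
    $near: {
      $geometry: { type: "Point", coordinates: [2.3522, 48.8566] },
      $maxDistance: 10000   // metres
    }
  }
});
```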
] | Tinder like app with MongoDB Realm | 2020-09-25T09:58:15.965Z | Tinder like app with MongoDB Realm | 2,216 |
null | [] | [
{
"code": "class Entry: Object {\nlet date: Date = Date()\n}\nlet entries: List<Entry>\nEntryA (date: 2020-09-01 02:00 am)\nEntryB (date: 2020-09-01 03:10 pm)\nEntryC (date: 2020-09-02 03:40 am)\nEntryD (date: 2020-09-02 05:23 am)\nEntryE (date: 2020-09-02 08:42 pm)\nEntryF (date: 2020-09-03 11:04 am)\nEntryG (date: 2020-09-03 13:42 pm)\nEntryA (date: 2020-09-01 02:00 am)\nEntryC (date: 2020-09-02 03:40 am)\nEntryD (date: 2020-09-02 05:23 am)\nEntryF (date: 2020-09-03 11:04 am)\nEntryB (date: 2020-09-01 03:10 pm)\nEntryE (date: 2020-09-02 08:42 pm)\nEntryG (date: 2020-09-03 13:42 pm)\n",
"text": "There is one class inherited Object that includes one Date-type property.And there is a list of Entry objects.I’d like to divide this list into two groups: “am” and “pm”.The am group describes entries that have a date property between 12:00 am - 11:59 am.The pm group describes entries that have a date property between 12:00 pm - 11:59 pm.am Group:pm Group:How can I filter the entries from the list into the am and pm groups (as shown above)?",
"username": "T_Mock"
},
{
"code": "",
"text": "Can you confirm what SDK you’re using using? My assumption is you’re using Swift.Assuming that and you’re trying to filter from an already queried set of results - you will have to convert the AM/PM time format to the 24 hour format using DateFormatter and then filter by hours that are < 12hr and > 12hr.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "I am sorry it took so long to get back to you and thanks for your reply!Can you confirm what SDK you’re using? My assumption is you’re using Swift.Yes. I am using Realm Swift/Cocoa.Assuming that and you’re trying to filter from an already queried set of results - you will have to convert the AM/PM time format to the 24 hour format using DateFormatter and then filter by hours that are < 12hr and > 12hr.I have already tried the approach above before and it worked well but I would like to filter the entries in Realm queries, if possible. So I can use the lazy-loading feature, which is more efficient and fast. What do you think?",
"username": "T_Mock"
}
] | How to sort time in AM/ PM in Realm | 2020-09-15T01:48:21.810Z | How to sort time in AM/ PM in Realm | 3,073 |
null | [] | [
{
"code": "exports = async function(commentEvent) {\n const mongodb = context.services.get(\"fp-review\");\n const userNotifications = mongodb.db(\"Notifications\").collection(\"userNotifications\");\n const posts = mongodb.db(\"fightpandemics\").collection(\"posts\");\n const { fullDocument } = commentEvent;\n \n const specificPost = await posts.find({_id: fullDocument.postId});\n\n const newEmailNotification = {\ncomment: fullDocument,\nspecificPost,\n };\n \n await userNotifications.insertOne(newEmailNotification);\n}",
"text": "I’m creating a trigger for new entries in my comment collection. When a user leaves a comment, I want mongo to create a new entry in my notification collection with the comment and the post data. To get the data from the post collection, I’m executing post.find( ) but I’m getting an empty object. What am I doing wrong?",
"username": "Vinicius_Rodrigues"
},
{
"code": "const specificPost = await posts.find({_id: BSON.ObjectId(fullDocument.postId) });\n",
"text": "Hi @Vinicius_Rodrigues,I have an assumption that the postId in the received change document is a string while the _id is of type ObjectId.I think you have to convert it to object Id:\nhttps://docs.mongodb.com/realm/mongodb/find-documents/#query-based-on-a-document-s-idLet me know if that helpsBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "const specificPost = await posts.find();",
"text": "@Pavel_Duchovny when I convert it to BSON, I get this error:Error:\n{“message”:“ObjectId in must be a single string of 12 bytes or a string of 24 hex characters”,“name”:“Error”}I tried executing the .find without any parameter but I’m still getting an empty object.\nconst specificPost = await posts.find();\nIt works fine when I execute separate as a mongodb realm function but when I call the function from the trigger I get an empty object. This is what I get when I console.log specificPost in both case above:\nLogs:\n[\n“[object Object]”\n]I’m wondering if this issue has anything to do with permissions or access to the databases.",
"username": "Vinicius_Rodrigues"
},
{
"code": "const specificPost = await posts.find({}).toArray();\nconsole.log(JSON.stringify(specificPost));\n",
"text": "Hi @Vinicius_Rodrigues,Can you send me a link to your realm app?Also please try to print the following:Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "It’s working now that I included .toArray() at the end of the function. Thanks a lot @Pavel_Duchovny",
"username": "Vinicius_Rodrigues"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | .find( ) returning an empty object when called from a trigger | 2020-09-23T03:54:58.036Z | .find( ) returning an empty object when called from a trigger | 9,185 |
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "I am seeing some strange behavior with the logging of a realm function.When I run my function, it is supposed to query for documents in a collection and make updates to the documents it queries.For a bit now, the logs I see are telling me that the function isn’t finding any documents, however, I know that it is actually updating these documents because I can see the updates happening in Atlas. Nothing else could be making these updates, only the function. And, in terms of the logging, I am referring to the logs inside of the function editor itself when you click the “Run” button.Here’s a link to my function for context: App ServicesOne of the things it does is log the length of documents it found, and it currently keeps logging 0 even though it is obviously finding documents and making updates.Also for further context, it was logging correctly beforehand.Thanks!",
"username": "Lukas_deConantseszn1"
},
{
"code": "const occasionsCursor = await Occasions.find({ nextOccasionDate: { $gt: yesterday, $lt: tomorrow } });for",
"text": "Hey Lukas, looking at your function, it looks like you’re referring to this line:const occasionsCursor = await Occasions.find({ nextOccasionDate: { $gt: yesterday, $lt: tomorrow } });Are the print statements within each loop of your for loop working when it logs ‘0’? Another thing I can think of is that there was a bulk update to your application that caused updates and documents no longer match your query, or something about the timeliness of your query is no longer finding documents because it depends on when you run it.Another way to double check is to use the data explorer to confirm that documents are being returned for your query is to use data explorer with the same query\nhttps://cloud.mongodb.com/v2/5bc1648acf09a2891bf25a98#metrics/replicaSet/5e1b40339ccf64e656863ab3/explorer",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Hi Sumedha!Turns out I had a mistake going on. A return statement was placed incorrectly. Once I fixed that, things were essentially running as normal.I would however still say I have noticed some wonkiness with the way a Realm function logs things compared to what actually happens to my data. Is there a way to see a log of what happens to atlas data and what caused the CRUD operation?",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "I would however still say I have noticed some wonkiness with the way a Realm function logs things compared to what actually happens to my data.Can you go a bit more into detail here? Are you referring to the console output from running a function or the actual logs for Realm.Is there a way to see a log of what happens to atlas data and what caused the CRUD operation?From Realm, you can actually download the logs and grep based on what you’re looking for (e.g. Function updates, graphQL, etc.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Sorry, the console output. Not the actual logs for Realm.That looks like a nice trick. Thanks!The issue is that the console output didn’t show I was making changes, I had console logs that would have ran if I did, but my data was updated. So I know that the console output didn’t match what had actually happened. Wish I could provide more insight on this because I know that’s very vague but that’s all I’ve got.",
"username": "Lukas_deConantseszn1"
}
] | Odd logging bug from Realm Function | 2020-09-18T16:25:16.783Z | Odd logging bug from Realm Function | 1,607 |
null | [
"python",
"connecting"
] | [
{
"code": " File \"/usr/lib64/python2.7/site-packages/pymongo/pool.py\", line 810, in authenticate\n auth.authenticate(credentials, self)\n File \"/usr/lib64/python2.7/site-packages/pymongo/auth.py\", line 673, in authenticate\n auth_func(credentials, sock_info)\n File \"/usr/lib64/python2.7/site-packages/pymongo/auth_aws.py\", line 85, in _authenticate_aws\n exc, pymongo_auth_aws.__version__))\npymongo.errors.OperationFailure: temporary MONGODB-AWS credentials could not be obtained (pymongo-auth-aws version 1.0.1)\n",
"text": "We have connected from AWS EC2 machine using AWS ROLE based authentication.It throws error for some EC2 machines… some Ec2 works… But both EC2 machine is able to get temporary role authentication from AWS metadata using curl command.Below is the error when i try to use pymongo[aws]$python -c “import pymongo_auth_aws; print(pymongo_auth_aws.version)”1.0.1$ python -c “import pymongo; print(pymongo.version); print(pymongo.has_c())”3.11.0True$ python -c “import sys; print(sys.version)”2.7.18 (default, May 27 2020, 12:45:48)[GCC 7.3.1 20180712 (Red Hat 7.3.1-6)]",
"username": "Kiran_Hegde"
},
{
"code": "",
"text": "Hi @Kiran_Hegde - welcome to MongoDB Community!I’ve passed your question on to our engineering team, and we’ve tracked down the line of code throwing the error, but can’t work out why this might be happening. I’d suggest looking further into the differences between your EC2 instances to track down what’s failing.If you do work out what’s happening - please do let us know here - it would be super-helpful if someone has the same problem in future.Mark",
"username": "Mark_Smith"
},
{
"code": "headers = {'X-aws-ec2-metadata-token-ttl-seconds': '60'}\nres = ***requests.post***(_AWS_EC2_URI+'latest/api/token', headers=headers, timeout=_AWS_HTTP_TIMEOUT)\ntoken = res.content\nheaders = {'X-aws-ec2-metadata-token': token}\nres = requests.get(_AWS_EC2_URI+_AWS_EC2_PATH, headers=headers, timeout=_AWS_HTTP_TIMEOUT)\nrole = res.text\nres = requests.get(_AWS_EC2_URI+_AWS_EC2_PATH+role, headers=headers, timeout=_AWS_HTTP_TIMEOUT)\nres_json = res.json()\n",
"text": "Thanks Mark… I did some more deep down on this .\nSome of the auth code is doing a “post” call to get the temp token to connect.\nAnd it fails to do that . Looks aws not allowing “post” call there… If I do “put” call instead of post below, it works. May be you have to fix this in pymongo code…\nHappy to work more. .",
"username": "Kiran_Hegde"
},
{
"code": "python -m pip install --upgrade https://github.com/ShaneHarvey/pymongo-auth-aws/archive/PYTHON-2378.tar.gz\n",
"text": "Thanks @Kiran_Hegde, this is indeed a bug. I filed a fix for it here: https://jira.mongodb.org/browse/PYTHON-2378I also have a proposed fix that can be released shortly, here: PYTHON-2378 Use PUT for EC2 token request, not POST by ShaneHarvey · Pull Request #2 · mongodb/pymongo-auth-aws · GitHubIt would be great if you could test out this fix. To test it yourself, install the updated version like this:",
"username": "Shane"
},
{
"code": "",
"text": "Thanks Shane for a quick fix … It is working .\nBut we need this from python library directly when I install pymongo[aws]. I am using it inside docker. So it would be good to fix pymong[aws] to implement this in our environment.",
"username": "Kiran_Hegde"
},
{
"code": "python -m pip install 'pymongo[aws]'",
"text": "We’ve released pymongo-auth-aws version 1.0.2 so python -m pip install 'pymongo[aws]' will work fine now. Thanks for helping us with the fix @Kiran_Hegde!",
"username": "Shane"
},
{
"code": "",
"text": "Thanks Shane,Mark. Never thought we get a fix so quickly by just putting it in forum.",
"username": "Kiran_Hegde"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Temporary MONGODB-AWS credentials could not be obtained | 2020-09-22T19:26:40.754Z | Temporary MONGODB-AWS credentials could not be obtained | 3,306 |
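As context for the fix discussed above, a minimal Python sketch of the corrected IMDSv2 token request (PUT rather than POST, as in pymongo-auth-aws 1.0.2) could look like the following; the metadata endpoint and IAM credentials path are standard EC2 values, while the helper name and the absence of error handling are illustrative assumptions rather than the library's actual code.

    # Illustrative sketch only: the IMDSv2 session token must be requested with PUT,
    # which is the change released in pymongo-auth-aws 1.0.2 (names/paths assumed).
    import requests

    _AWS_EC2_URI = 'http://169.254.169.254/'
    _AWS_EC2_PATH = 'latest/meta-data/iam/security-credentials/'
    _AWS_HTTP_TIMEOUT = 10

    def fetch_temporary_role_credentials():
        # PUT (not POST) obtains the IMDSv2 session token.
        token = requests.put(_AWS_EC2_URI + 'latest/api/token',
                             headers={'X-aws-ec2-metadata-token-ttl-seconds': '60'},
                             timeout=_AWS_HTTP_TIMEOUT).text
        headers = {'X-aws-ec2-metadata-token': token}
        # Discover the instance role, then fetch its temporary credentials.
        role = requests.get(_AWS_EC2_URI + _AWS_EC2_PATH, headers=headers,
                            timeout=_AWS_HTTP_TIMEOUT).text
        return requests.get(_AWS_EC2_URI + _AWS_EC2_PATH + role, headers=headers,
                            timeout=_AWS_HTTP_TIMEOUT).json()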
null | [] | [
{
"code": "",
"text": "Hello.\nI am asking if there are some REST interfaces that Mongo DB exposes to integrate whatever platform with this protocol ? From what i see the REST interface to Mongo DB is not available.\nI am working on Pegasystems so i am asking if using REST Is possible to integrate with MONGO to perform Crud operations ?\nI saw that is possible to import a jar file into a Java app then connect but this way is too intrusive. I wanted a way to easy integrate with this database.\nThank you.Regards\nEliseo Olla",
"username": "Eliseo_Olla"
},
{
"code": "",
"text": "Hi @Eliseo_Olla welcome to the community!No there is no natively built REST interface in MongoDB.If your needs are specific, you can build your own REST interface using e.g. Node + Express. See The Modern Application Stack – Part 3: Building a REST API Using Express.js for an in-depth write up.Other than that, you may be able to use something like Restheart that provides a generic REST interface to a MongoDB database.Best regards,\nKevin",
"username": "kevinadi"
}
] | MongoDB Rest Connector | 2020-09-24T12:16:01.027Z | MongoDB Rest Connector | 2,003 |
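To make the "build your own REST interface" suggestion above concrete, here is a small, hedged sketch using Flask and PyMongo instead of the Express.js stack linked in the reply; the database and collection names (mydb/items) and the localhost connection string are made-up assumptions.

    # Minimal illustrative REST layer over MongoDB using Flask + PyMongo
    # (equivalent in spirit to the Express.js approach linked above; names are assumptions).
    from flask import Flask, jsonify, request
    from pymongo import MongoClient

    app = Flask(__name__)
    coll = MongoClient('mongodb://localhost:27017')['mydb']['items']

    @app.route('/items', methods=['GET'])
    def list_items():
        docs = []
        for doc in coll.find().limit(100):
            doc['_id'] = str(doc['_id'])  # ObjectId is not JSON serialisable
            docs.append(doc)
        return jsonify(docs)

    @app.route('/items', methods=['POST'])
    def create_item():
        result = coll.insert_one(request.get_json())
        return jsonify({'_id': str(result.inserted_id)}), 201

    if __name__ == '__main__':
        app.run()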
null | [
"data-modeling"
] | [
{
"code": " {\n \"id\": \"#####\",\n \"name\": \"hotel name\"\n \"state\": \"some state\",\n \"city\": \"some city\"\n \"category\": \"category\",\n \"address\": \"address\",\n \"phone\": \"phone\",\n \"email\": \"email\",\n \"users\":[\n {\n \"fName\": \"name\",\n \"lName\": \"name\",\n \"email\": \"[email protected]\",\n \"password\": \"hashedsecretthing\",\n \"accessLevel\": \"accessLevel\"\n }\n ],\n \"rooms\":[\n {\n \"name\": \"roomName\",\n \"category\": \"roomCat\",\n \"max_cap\": 0,\n \"phy_rooms\":[\n {\n \"id\": \"#######\",\n \"daily\":[\n {\n \"day\":{\n \"date\": \"todaysDate\"\n \"rate_id\": \"####\"\n \"availability\": \"reservationID/service/maintanace\",\n }\n }\n ]\n }\n ],\n \"amenities\":{\n \"amenityName\": true\n }\n }\n ]\n},\n//have 2 documents, ACTIVE and PREVIOS RATES\n{\n \"hotelID\": \"#######\"\n \"id\": \"#####\",\n \"rateName\": \"name\",\n \"rate\": \"0.00\",\n \"tier\": \"standard/promo/especial/manual\",\n \"duration\":{\n \"startDate\": \"date\",\n \"endDate\": \"date\"\n },\n \"rooms\":[\n \"roomName\",\n ...\n ]\n}\n//reservations\n{\n \"hotelID\": \"#######\"\n \"id\": \"########\",\n \"fName\": \"name\",\n \"lName\": \"name\",\n \"email\": \"email\",\n \"phone\": \"phone\",\n \"status\": \"status\",\n \"dates\":{\n \"arrival\": \"date\",\n \"departure\": \"date\",\n \"createdAt\": \"date\"\n },\n \"rooms\":[\n {\n \"roomID\": \"######\",\n \"rateID\": \"#####\",\n \"pax\": 0\n }\n ],\n \"extras\":[\n \"extraID\",\n ...\n ]\n}\n",
"text": "I have extensive experience working with SQL DBs, but this is my first time trying my hand at designing something in mongoDB. I am developing a reservation engine for a multi-hotel site. I came up with this model. I have done a lot of reading and I am not sure if I am going in the right direction or if I should take a ifferent aproach.I have a few questions. I will have multiple hotels managing their rooms, rates, and reservations. Should I enbede the Rates and Reservations documents inside each hotel document, or should I have a large Document for each, only referencing the hotel id?? Is there a better way to handle this?",
"username": "christopher_luna"
},
{
"code": "",
"text": "Hi @christopher_luna, there is a MongoDB University course on Data Modeling that provides guidance on the schema patterns.",
"username": "Katya"
}
] | I could use some tips on my DB design | 2020-06-20T05:03:51.921Z | I could use some tips on my DB design | 2,205 |
null | [
"python",
"production",
"motor-driver"
] | [
{
"code": "",
"text": "We are pleased to announce the 2.3.0 release of Motor - MongoDB’s Asynchronous Python Driver. This release adds contextvars support.See the changelog for a high-level summary of what’s new and improved or see the Motor 2.3.0 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!",
"username": "Prashant_Mital"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Motor 2.3.0 Released | 2020-09-24T23:33:47.241Z | Motor 2.3.0 Released | 3,544 |
null | [] | [
{
"code": "",
"text": "\nHi, I am not geeting any results for find query. can anyone please guide. I am absolute beginner",
"username": "Atul_Kumar"
},
{
"code": "MongoDB Universityvideoquery predicatequery predicate",
"text": "Hi @Atul_Kumar,Welcome to the MongoDB University discussion forum .Please ensure that you are connected to the right database (i.e. video) before we analyse the query predicate.If you are connected to the right database and still not getting any result then that means that the query predicate that you have specified in the find command didn’t match any document in the collection.Let’s see what you are trying to do here. Could you please explain what did you want to do here ?~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Thanks Shubham. Issue resolved.",
"username": "Atul_Kumar"
},
{
"code": "",
"text": "",
"username": "Shubham_Ranjan"
}
] | Not getting any results for find query | 2020-09-22T18:24:34.987Z | Not getting any results for find query | 1,879 |
null | [] | [
{
"code": "",
"text": "I’ve successfully completed the iOS programmatical app tutorial on MongoDB’s documentation. I’m currently trying to apply what I learned from this tutorial into my own app created in Xcode, and using storyboards. However, the following line of code will always crash the app when the respective view controller is called.// Implement this overload of init to satisfy the UIViewController subclass requirement.\nrequired init?(coder: NSCoder) {\nfatalError(“init(coder:) has not been implemented”)\n}Any help would greatly be appreciated.",
"username": "Hector_S"
},
{
"code": "",
"text": "That’s a tad brief to understand the use case. Can you expand that code a bit?Where is it being called, at what point? Is that in a viewController or perhaps a subclass of some kind?",
"username": "Jay"
},
{
"code": "",
"text": "Do you know of any other MongoDB Realm tutorials elsewhere online that do not take the Programmatical approach? I can’t seem to find any.",
"username": "Hector_S"
},
{
"code": "",
"text": "Not sure what you’re asking.MongoDB Realm is a database that requires writing code so your users can interact with your app. Are you asking about understanding database modeling or understanding programming concepts in general or something else entirely?",
"username": "Jay"
},
{
"code": "",
"text": "My apologies. I have hit a dead end developing my iOS app with MongoDB Realm. When I comment out the required init for realm, as well as the required init for NSCoder, along with the realm variables, I can display dummy data on a table view that I coded on the cellForRow at method. As soon as I uncomment both required inits, and the realm variables, the app crashes on the NSCoder required init. The required init for realm is pretty much a direct copy of the one I found in the Mongo Realm tutorial, with minor modifications to match the Realm collections in my Atlas account. What could be causing this?",
"username": "Hector_S"
},
{
"code": "",
"text": "@Hector_S Hey Hector - this initializer// Implement this overload of init to satisfy the UIViewController subclass requirement.\nrequired init?(coder: NSCoder) {\nfatalError(“init(coder:) has not been implemented”)\n}Does not have anything to do with Realm and is from UIKit which is a built in framework from iOS SDK - not sure how to help you here without more information",
"username": "Ian_Ward"
},
{
"code": "Login succeeded!\n2020-09-18 07:27:54.537156-0400 App[1379:41058] Sync: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = false\n2020-09-18 07:27:54.623370-0400 App[1379:41058] Sync: Connection[1]: Connected to endpoint 'x.x.x.x:443' (from 'x.x.x.x:50530')\n2020-09-18 07:27:55.067969-0400 App[1379:41058] Sync: Connection[1]: Disconnected\n",
"text": "Hi Ian, thank you for your response. Currently, I am trying to build a simple test app on a FREE tier Atlas server using Realm and sync. I have created several documents inside the realm that contain: _id, _partition, name, and phonenumber as a test. I have an object class in swift that initializes those same variables. In addition to this, I have set up a view controller with a tableview with the intention of listing those documents in the realm, by means of the name property on each cell.I’m aware of the fact that the initializer ultimately causing the crash is not required by realm and instead required by UIKit. It has been somewhat difficult to debug the app when the fatal error is thrown almost immediately and prior to any view appearing on the iOS simulator screen. If I comment out any variables ( partitionValue, realm, as well as well as the Results object variable), and then remove both initializers, the app does not crash. In fact, the app successfully connects to the realm as evidenced by the Realm logs and the Xcode console, which shows the following:This tells me that the Atlas service, along with the Realm, including user authentication and Sync, are set up correctly. It is at this point where a tableview successfully appears with hard coded data for the two required tableview delegate methods. numberOfRowsInSection has a return value of 1, and cellForRowAt displays “Test” as the cell’s label.There’s no easy way for me to debug anything beyond this. Due to the UIKit initializer crashing the app prior to the view controller’s view appearing on the screen, I am unable to determine what exactly is causing the crash. Perhaps the object Results variable is empty and it is causing the crash, though my level of expertise can’t ascertain that.At this point, I’m going to re-read the tutorial on MongoDB Realm’s website for the umph time hoping to catch something I may be missing. It would be great if I could find a tutorial online for a similar app using storyboard and simple features, just a simple tableview app being populated by just a simple property, so that I can build upon that.",
"username": "Hector_S"
},
{
"code": "",
"text": "We can be a lot more help when you include code in your questions - you could have a typo or missing a required component needed to initialize your app property.It may also be how you’re attempting to work with Realm asynchronously - that can be a complex process until you wrap your brain around it.Usually a short, minimal amount of code is what’s recommended. The guide on the MongoDB Realm site does contain pretty much everything you need to know about init’ing your app.",
"username": "Jay"
},
{
"code": "let partitionValue: String\nlet realm: Realm\nlet names: Results<Names>\n\nvar notificationToken: NotificationToken?\n\nrequired init(projectRealm: Realm) {\n \n guard let syncConfiguration = projectRealm.configuration.syncConfiguration else {\n fatalError(\"Sync configuration not found! Realm not opened with sync?\")\n }\n \n realm = projectRealm\n // After updating to RealmSwift10.0.0-beta.5, and Xcode to V12 \n partitionValue = (syncConfiguration.partitionValue?.stringValue!)!\n \n names = realm.objects(Names.self).sorted(byKeyPath: \"_id\")\n \n super.init(nibName: nil, bundle: nil)\n \n notificationToken = names.observe { [weak self] (changes) in\n guard let myTableView = self?.myTableView else { return }\n switch changes {\n case .initial:\n\n myTableView.reloadData()\n case .update(_, let deletions, let insertions, let modifications):\n\n myTableView.beginUpdates()\n\n myTableView.deleteRows(at: deletions.map({ IndexPath(row: $0, section: 0) }),\n with: .automatic)\n myTableView.insertRows(at: insertions.map({ IndexPath(row: $0, section: 0) }),\n with: .automatic)\n myTableView.reloadRows(at: modifications.map({ IndexPath(row: $0, section: 0) }),\n with: .automatic)\n myTableView.endUpdates()\n\n case .error(let error):\n fatalError(\"\\(error)\")\n }\n }\n}",
"text": "I hope this isn’t too much code. My apologies if it is.",
"username": "Hector_S"
},
{
"code": "required init?(coder: NSCoder) {",
"text": "It’s not clear what that code has to do with the code in the initial question. While it’s not too much code, its purpose is unclear.You initially asked aboutrequired init?(coder: NSCoder) {but the code above appears to be unrelated and is attempting to add an observer to your realm objects.I think you need to back up a step; does your app read and write data from realm at all. If yes, then it’s time to explore notifications (with the code included in the post).If not, you need to address why not and examine how you’re initializing Realm in the first place.",
"username": "Jay"
},
{
"code": "import UIKit\nimport RealmSwift\n\nclass ViewController: UIViewController {\n\n@IBOutlet weak var myTableView: UITableView!\n\nvar partitionValue: String\nlet realm: Realm\nlet names: Results<Names>\n\nvar notificationToken: NotificationToken?\n\nrequired init(projectRealm: Realm) {\n\n // Ensure the realm was opened with sync.\n guard let syncConfiguration = projectRealm.configuration.syncConfiguration else {\n fatalError(\"Sync configuration not found! Realm not opened with sync?\")\n }\n\n realm = projectRealm\n\n // Partition value must be of string type.\n partitionValue = (syncConfiguration.partitionValue?.stringValue!)!\n\n // Access all tasks in the realm, sorted by _id so that the ordering is defined.\n // Only tasks with the project ID as the partition key value will be in the realm.\n names = realm.objects(Names.self).sorted(byKeyPath: \"_id\")\n\n super.init(nibName: nil, bundle: nil)\n\n notificationToken = names.observe { [weak self] (changes) in\n guard let myTableView = self?.myTableView else { return }\n switch changes {\n case .initial:\n // Results are now populated and can be accessed without blocking the UI\n myTableView.reloadData()\n case .update(_, let deletions, let insertions, let modifications):\n // Query results have changed, so apply them to the UITableView.\n myTableView.beginUpdates()\n // It's important to be sure to always update a table in this order:\n // deletions, insertions, then updates. Otherwise, you could be unintentionally\n // updating at the wrong index!\n myTableView.deleteRows(at: deletions.map({ IndexPath(row: $0, section: 0) }),\n with: .automatic)\n myTableView.insertRows(at: insertions.map({ IndexPath(row: $0, section: 0) }),\n with: .automatic)\n myTableView.reloadRows(at: modifications.map({ IndexPath(row: $0, section: 0) }),\n with: .automatic)\n myTableView.endUpdates()\n case .error(let error):\n // An error occurred while opening the Realm file on the background worker thread\n fatalError(\"\\(error)\")\n }\n }\n}\n\nrequired init?(coder: NSCoder) {\n fatalError(\"init(coder:) has not been implemented\")\n}\n\ndeinit {\n // Always invalidate any notification tokens when you are done with them.\n notificationToken?.invalidate()\n }\n\noverride func viewDidLoad() {\n super.viewDidLoad()\n // Do any additional setup after loading the view.\n \n openRealm()\n \n }\n\nfunc openRealm() {\n \n let email = \"********\"\n let password = \"********\"\n \n app.login(credentials: Credentials(email: email, password: password)) { [weak self](user, error) in\n // Completion handlers are not necessarily called on the UI thread.\n // This call to DispatchQueue.main.sync ensures that any changes to the UI,\n // namely disabling the loading indicator and navigating to the next page,\n // are handled on the UI thread:\n DispatchQueue.main.sync {\n \n guard error == nil else {\n // Auth error: user already exists? 
Try logging in as that user.\n print(\"Login failed: \\(error!)\");\n \n return\n }\n\n print(\"Login succeeded!\");\n\n // Go directly to the Tasks page for the hardcoded project ID \"My Project\".\n // This will use a common project and demonstrate sync.\n let partitionValue = \"********\"\n\n // Open a realm.\n Realm.asyncOpen(configuration: user!.configuration(partitionValue: partitionValue)) { [weak self](realm, error) in\n guard let realm = realm else {\n fatalError(\"Failed to open realm: \\(error!.localizedDescription)\")\n }\n }\n }\n }\n}\n}\n\nextension ViewController: UITableViewDelegate, UITableViewDataSource {\n\nfunc tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {\n return names.count\n\n}\n\nfunc tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {\n \n let name = names[indexPath.row]\n let cell = tableView.dequeueReusableCell(withIdentifier: \"Cell\") ?? UITableViewCell(style: .default, reuseIdentifier: \"Cell\")\n \n cell.textLabel!.text = name.firstname\n \n return cell\n }\n}\nimport Foundation\nimport RealmSwift\n\n\nclass Names: Object {\n\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var _partition: String = \"\"\n @objc dynamic var firstname: String = \"\"\n @objc dynamic var phonenumber: String = \"\"\n\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n\n}\n",
"text": "This particular app has no real purpose, other than to apply what I learned in the MongoDB tutorial. As of right now, the only thing this app should do, is list the first name of each user. I am posting the contents of two files ViewController.swift, and Names.swift. I have entered data manually into the realm by means of the compass software. As of right now, the Realm is on Developer mode. Once I am able to correct the issue with this, I can continue to build upon this app and add more featuresThe following are the contents of Names.swift.At present, this app does not read nor write data from the realm as the app crashes instantly upon building on Xcode.",
"username": "Hector_S"
},
{
"code": "@IBOutletinit(projectRealm: …)init(coder:…)fatalError()",
"text": "One question: is your UIViewController instantiated in code, or in a Storyboard (as the @IBOutlet seems to suggest)?Thing is, the original Realm Swift tutorial creates the UI in code, hence the setup is done in init(projectRealm: …) when the initial view controller puts it on the screen, but, when a view controller is instantiated in a Storyboard, init(coder:…) is called instead, that ends up crashing the app (that’s what fatalError() does…)It is perfectly possible to use Storyboards with Realm Swift, of course, but the app flow happens in a slight different way: I’ve done it in my own take of the tutorial, feel free to adapt it to your needs.",
"username": "Paolo_Manna"
},
{
"code": "required init(projectRealm: Realm)// Open a realm.\nRealm.asyncOpen(configuration: user!.configuration(partitionValue: partitionValue)) { [weak self](realm, error) in\n guard let realm = realm else {\n fatalError(\"Failed to open realm: \\(error!.localizedDescription)\")\n }\n }\nguard let realm = realm else { return }\nself!.navigationController!.pushViewController(\n TasksViewController(projectRealm: realm), animated: true\n)\n",
"text": "At present, this app does not read nor write data from the realm as the app crashes instantly upon building on Xcode.The app crashes upon BUILDING or RUNNING?You also may be missing a line of code - if so, then the projectRealm var is nil hererequired init(projectRealm: Realm)and the rest of the code will crash. This is probably where you want to call it from and pass in a valid realm:Somewhere in that section you should be doing something with the realm object like thisI’m using a viewController class TasksViewController but that would be whatever the name of your viewController is.",
"username": "Jay"
},
{
"code": "",
"text": "Yes. The app is built using storyboard. Your code is somewhat extensive, but I will try to look into it and see how I can apply it to my simple app. Thank you.",
"username": "Hector_S"
},
{
"code": "",
"text": "I deleted everything and started over from scratch. Using a storyboard, I have placed a table view that is connected to the view controller. The tableview has a prototype cell with an identifier of “Cell”. The tableview is referenced by means of an IBOutlet in the only view controller the app currently has. I have successfully built and run the app without any RealmSwift code, and it shows a few cells with a label of “Test”.After adding all the Realm code, including the last bit of code you posted doesn’t help. Upon running the app, it crashes at the init(coder: NSCoder) line. Paolo_Manna is correct to say that a Storyboard app is setup differently, which is why I initially asked if there are any good Realm tutorials in Swift, using storyboards. On his last reply on this thread, he posted ‘TaskTraker2’, the tutorial app made with storyboards, but the code is extensive. I will have to look into his code and see how and what applies to my code.",
"username": "Hector_S"
},
{
"code": "init(coderoverride func viewDidLoad() {\n super.viewDidLoad()\n}\n\nrequired init?(coder aDecoder: NSCoder) {\n fatalError(\"init(coder:) has not been implemented\")\n}\nrequired init ?(coder aDecoder: NSCoder) {\n super.init(coder: aDecoder)\n}\n",
"text": "Yes, your app will crash at that line - every time - because you’re telling it to.init(coderIn fact you’ll see it has nothing to do with Realm. Create a new iOS App in XCode, single view and add this right after viewDidLoadrun the app. Crash.The crash is because that function is missing code (e.g. the function is not fully implemented). There are a number of options but try thisAlso, and this is just IMO, you’re got a LOT of disciplines to deal with in how you’re going about creating your simple app. It’s actually very complex! You’re dealing with Swift, tableViews, custom cells, Realm, Syncing, Notifications etc. That’s a lot to work through simultaneously.I would suggest simplifying and tackle one task at a time. So for example, loose the notifications. You don’t need that initially. Just focus on getting sync working and reading the Realm objects and understanding partitions. Add notifications later. Again, just IMO and my .02.",
"username": "Jay"
}
] | iOS required init crashing app | 2020-09-15T18:27:02.554Z | iOS required init crashing app | 6,001 |
[] | [
{
"code": "",
"text": "What is Hacktoberfest?Hacktoberfest is a month-long celebration of open source software first created by DigitalOcean in 2013. Hacktoberfest encourages experienced developers and novices alike to get involved with community open source projects by contributing code, documentation, and bug fixes to open source projects during the month of October. Participants who submit four pull requests (updates to an open source software repository) in public repositories on GitHub during the month of October will earn a limited edition Hacktoberfest t-shirt.What will MongoDB be doing for Hacktoberfest?MongoDB’s core values of “Think Big, Go Far”, “Make it Matter”, and “Build Together” come together naturally during Hacktoberfest. To make it easy for you to participate in Hacktoberfest, MongoDB will be hosting several online events throughout October. Members of the MongoDB Engineering and Product teams will be at the events and here in the forums to help those making their first pull request and providing assistance and encouragement.We are focusing our efforts around O-FISH, the open source mobile application that helps WildAid patrol for illegal fishing and keep our oceans pristine. This project has been an amazing and fulfilling opportunity for our engineers and we want to encourage others to get involved and help us with this vital cause. We will also be featuring other community open source projects that utilize MongoDB Solutions including mongod, Atlas, and Realm to help members of the community find the right opportunity to contribute and earn their shirt!We will also be awarding forum badges to community members who complete Hacktoberfest and a special “O-FISH Contributor” badge to any community member who helps improve #o-fish by submitting a pull request with improvements, bug fixes, or documentation enhancements.How can I get involved?There are several ways to get involved:Sign up to get updates on Hacktoberfest and register in October.Check out the events being hosted virtually from around the world, including MongoDB hosted events. We will be getting started with a virtual kickoff event October 6th.If you haven’t contributed to a GitHub repository before, check out this guide on making your first pull request.Recommend an open source project in this thread that utilizes MongoDB tools (MongoDB server, Atlas, Realm, etc) so we can feature them and help bring in new contributors. We will contact project maintainers and provide information about how to get the most out of Hacktoberfest. We will also promote selected projects throughout October here in the community forum, on social media and in other places.",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Hacktoberfest 2020! | 2020-09-24T20:01:19.264Z | Hacktoberfest 2020! | 3,794 |
null | [
"installation"
] | [
{
"code": "",
"text": "Hello, I am new to MongoDB and have a few questions in regards to installation and binary ownership. I am coming from an Oracle DB back ground.\nOur SysAdmin installed version 4.2 locally on RedHat Linux using RPM method. The binaries were installed to /usr/bin and owned by root. The data directory, log directory and mongod.conf file is owned by the mongod user. In order for a DBA such as myself to start and stop mongod i have to first login as mongod and then do sudo systemctl start/stop mongod. I see in the MongoDB howto guide for installation that it can be accomplished by unpacking a tar file. If that is the case can that be done by the mongod user and not by root? Does root need to own the binaries in order to run a Standalone MongoDB on premise database?",
"username": "Kelly_Maxwell"
},
{
"code": "mongod",
"text": "No, as long as it is readable by the user you start mongod.",
"username": "ken.chen"
},
{
"code": "",
"text": "So the mongod user can expand the tar file and own the binaries? Is there anything that needs to be owned by root? Trying to find out if SysAdmin have to intervene with installation/patching/upgrading in MongoDB or whether all can be accomplished by a DBA as the mongod user.",
"username": "Kelly_Maxwell"
}
] | Do MongoDB binaries need to be owned by root? | 2020-09-23T19:43:04.163Z | Do MongoDB binaries need to be owned by root? | 2,064 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.2.10-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.9. The next stable release 4.2.10 will be a recommended upgrade for all 4.2 users.\nFixed in this release:4.2 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.2.10-rc0 is released | 2020-09-24T19:15:36.641Z | MongoDB 4.2.10-rc0 is released | 1,606 |
null | [] | [
{
"code": "",
"text": "Reading package lists… Done\nBuilding dependency tree\nReading state information… Done\nNote, selecting ‘mongodb-enterprise-server’ instead of ‘./mongodb-enterprise-server_4.4.1_amd64.deb’\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:The following packages have unmet dependencies:\nmongodb-enterprise-server : Depends: libcurl3 (>= 7.16.2) but it is not installable\nDepends: libsensors4 (>= 1:3.0.0) but it is not installable\nDepends: libssl1.0.0 (>= 1.0.2~beta3) but it is not installable\nE: Unable to correct problems, you have held broken packages.My system arch —> x86_64Later tried manually installing dependency Its says upgraded versions already install.\nbelow error Iam getting:\nmisal@desktop:~/ $ sudo apt install libcurl3\nReading package lists… Done\nBuilding dependency tree\nReading state information… Done\nPackage libcurl3 is not available, but is referred to by another package.\nThis may mean that the package is missing, has been obsoleted, or\nis only available from another source\nHowever the following packages replace it:\nlibcurl4:i386 libcurl4",
"username": "Misal_Raj"
},
{
"code": "operating system",
"text": "Hi @Misal_Raj,Can you please try download the package directly from the MongoDB Download Centre ?Please select the right version of operating system.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Not able to install mongodb-enterprise-server on 20.04 LTS | 2020-09-22T17:24:10.496Z | Not able to install mongodb-enterprise-server on 20.04 LTS | 3,618 |
null | [] | [
{
"code": "",
"text": "Hi Mongo DB Team,We have a 3-node MongoDB Replication cluster with mongo version 4.0.6 We are facing issue of one of the secondary servers is lag behind the primary (sometimes 2-5 minutes).\nKindly help us and suggestions for resolving the issue.\nAs we refer log we found something like 2018-11-16T12:31:35.886-0500 I REPL [repl writer worker 13] applied op: command { … }, took 112ms on secondary nodes.We are struggling a lot kindly guide thanks in advance!!!",
"username": "Nagesh_Kamble"
},
{
"code": "",
"text": "Hi @Nagesh_Kamble and welcome in the MongoDB Community !Are the 3 nodes identical (same RAM, CPU, disks, …)?\nAre the 3 nodes identical in content? Have you created indexes on some nodes and not on others?\nWhat’s the latency between the nodes? Is one of them really far away from the others? Do you have recurrent transient network issues?Did you identify why that particular command is slow to replay on the secondary nodes? Missing an index maybe?Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | Replication Lag on Secondary in 3 node mongodb Cluster | 2020-09-24T12:16:47.094Z | Replication Lag on Secondary in 3 node mongodb Cluster | 2,018 |
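For anyone investigating a similar lag issue, one way to quantify it is to compare each member's optimeDate from the replSetGetStatus admin command. A rough PyMongo sketch follows; the connection string and replica set name are placeholders.

    # Rough sketch: report how far each member's last applied optime is behind the primary.
    # Connection string and replica set name are placeholders.
    from pymongo import MongoClient

    client = MongoClient('mongodb://localhost:27017/?replicaSet=rs0')
    status = client.admin.command('replSetGetStatus')

    primary_optime = None
    for member in status['members']:
        if member['stateStr'] == 'PRIMARY':
            primary_optime = member['optimeDate']

    for member in status['members']:
        if primary_optime is not None and 'optimeDate' in member:
            lag_seconds = (primary_optime - member['optimeDate']).total_seconds()
            print(member['name'], member['stateStr'], 'lag:', lag_seconds, 's')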
null | [] | [
{
"code": "",
"text": "I am new to MongoDB. I am currently working with cursors. I need to fetch records from the server side in batches using cursors.\nlets say I have 12 records and i want to fetch 5 at a time . The cursor iterator has to go 3 times and fetch 5,5,2 .I wanted to know how i can use while (cursor.hasNext ) loop for this.\nAny help is much appreciated.",
"username": "Rajeswari_Rathnavel"
},
{
"code": "let skipThisManyResults = 0;\nlet numberOfResultsToRetrieve = 5;\n\nlet cursor = client.db(\"Test\").collection(\"things\")\n .find({})\n .skip(skipThisManyResults)\n .limit(numberOfResultsToRetrieve);\n\nwhile (skipThisManyResults < await cursor.count()) {\n\n while (await cursor.hasNext()) {\n console.log(await cursor.next());\n }\n\n skipThisManyResults += numberOfResultsToRetrieve;\n\n cursor = client.db(\"Test\").collection(\"things\")\n .find({})\n .skip(skipThisManyResults)\n .limit(numberOfResultsToRetrieve);\n}\n",
"text": "Hi @Rajeswari_Rathnavel - welcome to the community!You can use skip() and limit() to achieve this.For example, you could do something like…There are some drawbacks to this approach, which Justin explains in this blog post: Paging with the Bucket Pattern - Part 1 | MongoDB Blog",
"username": "Lauren_Schaefer"
}
] | Fetch Records in batches by cursors | 2020-09-23T05:38:50.594Z | Fetch Records in batches by cursors | 3,060 |
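For completeness, the skip/limit pagination shown in the reply above translates directly to PyMongo; the collection name and page size below are illustrative assumptions.

    # PyMongo version of the skip/limit pagination above; names and page size are illustrative.
    from pymongo import MongoClient

    coll = MongoClient('mongodb://localhost:27017')['Test']['things']
    page_size = 5
    skip = 0

    while True:
        batch = list(coll.find({}).skip(skip).limit(page_size))
        if not batch:
            break  # no documents left
        for doc in batch:
            print(doc)
        skip += page_size  # e.g. 12 documents are processed as 5, 5 and 2

As the linked blog post notes, skip/limit becomes slower on large collections, so range-based pagination (or the bucket pattern) is usually preferred at scale.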
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "Hi all,I am trying to work through exercises in Mongodb university. One of the issues I came across is that when I use mongosh it always gives me an Syntax error, please see below:mongod --port 27000\nUncaught:\nSyntaxError: Unexpected token, expected “;” (1:9)1 | mongod --port 27000Does anyone know how to fix it? The customer support is not helping at all except for suggesting to post a question on this forum.",
"username": "Timur_Akhmadulin"
},
{
"code": "",
"text": "I have not seen your post on the MongoDB University. What is your user name at MongoDB University?MongoDB University courses are based on the legacy mongo shell. The new mongosh is not covered in the university course.Most likely, you are trying to run mongod while already connected to one of the mongo or mongosh shells.",
"username": "steevej"
},
{
"code": "connection string",
"text": "@Timur_Akhmadulin Hey, If you are doing some exercise from MongoDB University you must have seen the complete connection string in the previous lecture(s). By adding the complete string you can connect your system to their server.\nFor example the connection string for basic course is as followed:mongo “mongodb+srv://cluster0-jxeqq.mongodb.net/test” --username m001-student --password m001-mongodb-basicsIf you still have some confusing you can share the course link so that we can send you the connection string (if you are connecting your system to their server)",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "HI @Nabeel_Raza,Thank you for such prompt reply.I am doing M103 the first lab. Below is the linkhttps://university.mongodb.com/mercury/M103/2020_August_4/chapter/Chapter_1_The_Mongod/lesson/5e54545fad8aed99fcd86276/problembelow is what I try to do.\nmongod --port 27000",
"username": "Timur_Akhmadulin"
},
{
"code": "",
"text": "my user name is [email protected]",
"username": "Timur_Akhmadulin"
},
{
"code": "",
"text": "M103 has a specific forum at the university.Please post a screenshot of the issue overthere.",
"username": "steevej"
},
{
"code": "MongoDB",
"text": "@Timur_Akhmadulin make sure that you have set the MongoDB environmental variable in your system. if you are trying it from local system (using command line)",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "The error messageUncaught:\nSyntaxError: Unexpected token, expected “;” (1:9)is typical to the mongo shell javascript interpreter.Most likely scenario, he did a first mongo -nodb or mongosh --nodb and then try to start mongod at the mongo or mongosh prompt. He has to exit the current javascript interpreter and return to the bash interpreter from the in browser IDE.It is specific to the M103 course and it is best to have this discussion over there.",
"username": "steevej"
},
{
"code": "",
"text": "HI,Thank you for the suggestion. I already tried. Before I set up environmental variable the mognosh would just disappear after I try to start it. Not it does not, but the errors are the same.Uncaught:\nSyntaxError: Unexpected token, expected “;” (1:9)1 | mongod --port27000\n| ^\n2 |",
"username": "Timur_Akhmadulin"
},
{
"code": "",
"text": "HI,Not sure I understood the solution. Would you please elaborate more? Thank you.",
"username": "Timur_Akhmadulin"
},
{
"code": "",
"text": "In your latest post there is no space between port and 27000What steevejSteeve Juneau mentioned was you might have been at mongo prompt already and when you try mongod command it will fail with syntax errorSo exit from mongo prompt and run the command from os prompt(sh#4.4)I hope you are running the commands for the lab in IDE",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "command from os prompt(sh#4.4)All good now, thanks a lot to all of you. It was really a combination of all your suggestions.",
"username": "Timur_Akhmadulin"
}
] | Mongo shell is not working | 2020-09-22T11:56:58.047Z | Mongo shell is not working | 10,418 |
null | [
"queries"
] | [
{
"code": "{\n \"_id\": {\n \"$oid\": \"5f58b71b6f6d360f356a02ba\"\n },\n \"cnpjCpf\": \"97515035072\",\n \"producao\": {\n \"impressoras\": [{\n \"_id\": {\n \"$oid\": \"5f5a3c0cd84b5a2a8bd5d50c\"\n },\n \"impressora\": \"Etirama\",\n \"z\": [{\n \"_id\": {\n \"$oid\": \"5f5a4f7cd6c4678dd0a533de\"\n },\n \"z\": \"10\",\n \"modulo\": \"x\",\n \"facas\": [{\n \"_id\": {\n \"$oid\": \"5f5a51642c26e32e008ecb12\"\n },\n \"faca\": \"01\",\n \"largura\": \"x\"\n }, {\n \"_id\": {\n \"$oid\": \"5f5a516f027b588538f52914\"\n },\n \"faca\": \"02\",\n \"largura\": \"y\"\n }]\n }, {\n \"_id\": {\n \"$oid\": \"5f5a4fadd6c4678dd0a533df\"\n },\n \"z\": \"20\",\n \"modulo\": \"x\",\n \"facas\": [{\n \"_id\": {\n \"$oid\": \"5f5a502dd6c4678dd0a533e2\"\n },\n \"faca\": \"01\",\n \"largura\": \"x\"\n }, {\n \"_id\": {\n \"$oid\": \"5f5a5078d6c4678dd0a533e3\"\n },\n \"faca\": \"02\",\n \"largura\": \"y\"\n }]\n }, {\n \"_id\": {\n \"$oid\": \"5f6a303c758a0f9e8a1219ae\"\n },\n \"z\": \"100\",\n \"modulo\": \"1,5 mm\",\n \"desenvolvimento\": \"320\",\n \"encolhimento\": \"310\",\n \"distorcao\": \"96.875\"\n }],\n \"perfil\": [{\n \"_id\": {\n \"$oid\": \"5f5a4fbcd6c4678dd0a533e0\"\n },\n \"perfil\": \"papel\",\n \"cores\": {\n \"other\": {\n \"curva\": [\"g34\"],\n \"hd\": [\"hd05\"],\n \"lineatura\": [\"112\"]\n }\n }\n }, {\n \"_id\": {\n \"$oid\": \"5f5a4fded6c4678dd0a533e1\"\n },\n \"perfil\": \"bopp\",\n \"cores\": {\n \"cyan\": {\n \"curva\": [\"h40\"],\n \"hd\": [\"hd05\"],\n \"lineatura\": [\"112\"]\n }\n }\n }]\n }, {\n \"_id\": {\n \"$oid\": \"5f6a3a395179f2a3c2c7ae05\"\n },\n \"impressora\": \"Komexy\",\n \"banda\": \"Larga\",\n \"trap\": [\"0.05\", \"0.20\", \"0.25\"],\n \"espessura\": [\"1.14\", \"1.7\"]\n }]\n }\n}\n",
"text": "Hello, I would like to know how I can query a array producao.impressoras..z.._ id, and receive only the content of the object that matches the _id above? in producao.impressoras.$._id I got it.My document:",
"username": "Fabio_Bracht"
},
{
"code": "db.col.insert({\n \"_id\": ObjectId(\"5f58b71b6f6d360f356a02ba\"),\n \"cnpjCpf\": \"97515035072\",\n \"producao\": {\n \"impressoras\": [{\n \"_id\":\n ObjectId( \"5f5a3c0cd84b5a2a8bd5d50c\")\n ,\n \"impressora\": \"Etirama\",\n \"z\": [{\n \"_id\":\n ObjectId(\"5f5a4f7cd6c4678dd0a533de\")\n ,\n \"z\": \"10\",\n \"modulo\": \"x\",\n \"facas\": [{\n \"_id\":\n ObjectId(\"5f5a51642c26e32e008ecb12\")\n ,\n \"faca\": \"01\",\n \"largura\": \"x\"\n }, {\n \"_id\":\n ObjectId( \"5f5a516f027b588538f52914\")\n ,\n \"faca\": \"02\",\n \"largura\": \"y\"\n }]\n }, {\n \"_id\":\n ObjectId( \"5f5a4fadd6c4678dd0a533df\")\n ,\n \"z\": \"20\",\n \"modulo\": \"x\",\n \"facas\": [{\n \"_id\":\n ObjectId( \"5f5a502dd6c4678dd0a533e2\")\n ,\n \"faca\": \"01\",\n \"largura\": \"x\"\n }, {\n \"_id\":\n ObjectId( \"5f5a5078d6c4678dd0a533e3\")\n ,\n \"faca\": \"02\",\n \"largura\": \"y\"\n }]\n }, {\n \"_id\":\n ObjectId(\"5f6a303c758a0f9e8a1219ae\")\n ,\n \"z\": \"100\",\n \"modulo\": \"1,5 mm\",\n \"desenvolvimento\": \"320\",\n \"encolhimento\": \"310\",\n \"distorcao\": \"96.875\"\n }],\n \"perfil\": [{\n \"_id\":\n ObjectId(\"5f5a4fbcd6c4678dd0a533e0\")\n ,\n \"perfil\": \"papel\",\n \"cores\": {\n \"other\": {\n \"curva\": [\"g34\"],\n \"hd\": [\"hd05\"],\n \"lineatura\": [\"112\"]\n }\n }\n }, {\n \"_id\":\n ObjectId(\"5f5a4fded6c4678dd0a533e1\")\n ,\n \"perfil\": \"bopp\",\n \"cores\": {\n \"cyan\": {\n \"curva\": [\"h40\"],\n \"hd\": [\"hd05\"],\n \"lineatura\": [\"112\"]\n }\n }\n }]\n }, {\n \"_id\":\n ObjectId(\"5f6a3a395179f2a3c2c7ae05\")\n ,\n \"impressora\": \"Komexy\",\n \"banda\": \"Larga\",\n \"trap\": [\"0.05\", \"0.20\", \"0.25\"],\n \"espessura\": [\"1.14\", \"1.7\"]\n }]\n }\n})\n[\n {\n '$match': {\n 'producao.impressoras.z._id': new ObjectId('5f5a4f7cd6c4678dd0a533de')\n }\n }, {\n '$unwind': {\n 'path': '$producao.impressoras'\n }\n }, {\n '$match': {\n 'producao.impressoras.z._id': new ObjectId('5f5a4f7cd6c4678dd0a533de')\n }\n }, {\n '$unwind': {\n 'path': '$producao.impressoras.z'\n }\n }, {\n '$match': {\n 'producao.impressoras.z._id': new ObjectId('5f5a4f7cd6c4678dd0a533de')\n }\n }, {\n '$replaceRoot': {\n 'newRoot': '$producao.impressoras.z'\n }\n }\n]\n{\n\t\"_id\" : ObjectId(\"5f5a4f7cd6c4678dd0a533de\"),\n\t\"z\" : \"10\",\n\t\"modulo\" : \"x\",\n\t\"facas\" : [\n\t\t{\n\t\t\t\"_id\" : ObjectId(\"5f5a51642c26e32e008ecb12\"),\n\t\t\t\"faca\" : \"01\",\n\t\t\t\"largura\" : \"x\"\n\t\t},\n\t\t{\n\t\t\t\"_id\" : ObjectId(\"5f5a516f027b588538f52914\"),\n\t\t\t\"faca\" : \"02\",\n\t\t\t\"largura\" : \"y\"\n\t\t}\n\t]\n}\n",
"text": "Hi @Fabio_Bracht and welcome in the MongoDB Community !Here is the document I inserted in my collection (same as your but with ObjectId instead of $oid).Here is the aggregation pipeline I wrote:And here is the result I get:The intermediate $match are optional. Only the last one is really mandatory to get your result but the previous $match stages insure that we remove all the useless documents from the pipeline as soon as possible.\nWe can see this logic in MongoDB Compass:image1272×1202 128 KBIt’s probably not the most optimized way to solve your query. But I think it does the job done.I hope it helps and I’m curious if someone can find a better solution for this or not.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "[{\n '$match': {\n '_id': new ObjectId('5f58b71b6f6d360f356a02ba')\n }\n}, {\n '$project': {\n 'impressora': {\n '$filter': {\n 'input': '$producao.impressoras',\n 'cond': {\n '$eq': [\n '$$this._id', new ObjectId('5f5a3c0cd84b5a2a8bd5d50c')\n ]\n }\n }\n }\n }\n}, {\n '$unwind': {\n 'path': '$impressora'\n }\n}, {\n '$project': {\n 'z': {\n '$filter': {\n 'input': '$impressora.z',\n 'cond': {\n '$eq': [\n '$$this._id', new ObjectId('5f5a4f7cd6c4678dd0a533de')\n ]\n }\n }\n }\n }\n}, {\n '$unwind': {\n 'path': '$z'\n }\n}, {\n '$project': {\n 'z.facas': 0,\n '_id': 0\n }\n}, {\n '$set': {\n 'z.z': 22\n }\n}]\n",
"text": "hello, this solved the problem. Now I saw that I have another one, how to update this data?\nI updated with aggregate and $ set, but it didn’t save.",
"username": "Fabio_Bracht"
},
{
"code": "",
"text": "$set is an alias for $addFields which is just adding fields into the documents at this stage in the pipeline.Documents in a pipeline are completely disconnected from the documents in the collections. You can’t update them this way.You must use the update operation and most probably the array operators in your case.",
"username": "MaBeuLux88"
},
{
"code": "{\n \"_id\": \"5f58b71b6f6d360f356a02ba\",\n \"cnpjCpf\": \"97515035072\",\n \"producao\": {\n \"impressoras\": [\n {\n \"_id\": \"5f5a3c0cd84b5a2a8bd5d50c\",\n \"impressora\": \"Etirama\",\n \"z\": [\n {\n \"_id\": \"5f5a4f7cd6c4678dd0a533de\",\n \"z\": \"10\",\n \"modulo\": \"x\",\n \"facas\": [\n {\n \"_id\": \"5f5a51642c26e32e008ecb12\",\n \"faca\": \"01\",\n \"largura\": \"x\"\n },\n {\n \"_id\": \"5f5a516f027b588538f52914\",\n \"faca\": \"02\",\n \"largura\": \"y\"\n }\n ]\n },\n {\n \"_id\": \"5f5a4fadd6c4678dd0a533df\",\n \"z\": \"20\",\n \"modulo\": \"x\",\n \"facas\": [\n {\n \"_id\": \"5f5a502dd6c4678dd0a533e2\",\n \"faca\": \"01\",\n \"largura\": \"x\"\n },\n {\n \"_id\": \"5f5a5078d6c4678dd0a533e3\",\n \"faca\": \"02\",\n \"largura\": \"y\"\n }\n ]\n },\n {\n \"_id\": \"5f6a303c758a0f9e8a1219ae\",\n \"z\": \"100\",\n \"modulo\": \"1,5 mm\",\n \"desenvolvimento\": \"320\",\n \"encolhimento\": \"310\",\n \"distorcao\": \"96.875\"\n }\n ],\n \"perfil\": [\n {\n \"_id\": \"5f5a4fbcd6c4678dd0a533e0\",\n \"perfil\": \"papel\",\n \"cores\": {\n \"other\": {\n \"curva\": [\n \"g34\"\n ],\n \"hd\": [\n \"hd05\"\n ],\n \"lineatura\": [\n \"112\"\n ]\n }\n }\n },\n {\n \"_id\": \"5f5a4fded6c4678dd0a533e1\",\n \"perfil\": \"bopp\",\n \"cores\": {\n \"cyan\": {\n \"curva\": [\n \"h40\"\n ],\n \"hd\": [\n \"hd05\"\n ],\n \"lineatura\": [\n \"112\"\n ]\n }\n }\n }\n ]\n },\n {\n \"_id\": \"5f6a3a395179f2a3c2c7ae05\",\n \"impressora\": \"Komexy\",\n \"banda\": \"Larga\",\n \"trap\": [\n \"0.05\",\n \"0.20\",\n \"0.25\"\n ],\n \"espessura\": [\n \"1.14\",\n \"1.7\"\n ]\n }\n ]\n }\n}\n{\n \"update\": \"testcoll\",\n \"updates\": [\n {\n \"q\": {},\n \"u\": [\n {\n \"$replaceRoot\": {\n \"newRoot\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$ROOT._id\",\n \"5f58b71b6f6d360f356a02ba\"\n ]\n },\n {\n \"$mergeObjects\": [\n \"$$ROOT\",\n {\n \"producao\": {\n \"$mergeObjects\": [\n \"$$ROOT.producao\",\n {\n \"impressoras\": {\n \"$map\": {\n \"input\": \"$$ROOT.producao.impressoras\",\n \"as\": \"impressora\",\n \"in\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$impressora._id\",\n \"5f5a3c0cd84b5a2a8bd5d50c\"\n ]\n },\n {\n \"$mergeObjects\": [\n \"$$impressora\",\n {\n \"z\": {\n \"$map\": {\n \"input\": \"$$impressora.z\",\n \"as\": \"zmember\",\n \"in\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$zmember._id\",\n \"5f5a4f7cd6c4678dd0a533de\"\n ]\n },\n {\n \"$mergeObjects\": [\n \"$$zmember\",\n {\n \"facas\": {\n \"$map\": {\n \"input\": \"$$zmember.facas\",\n \"as\": \"faca\",\n \"in\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$faca._id\",\n \"5f5a51642c26e32e008ecb12\"\n ]\n },\n {\n \"$mergeObjects\": [\n \"$$faca\",\n {\n \"largura\": \"NEW LARGURA!!\"\n }\n ]\n },\n \"$$faca\"\n ]\n }\n }\n }\n }\n ]\n },\n \"$$zmember\"\n ]\n }\n }\n }\n }\n ]\n },\n \"$$impressora\"\n ]\n }\n }\n }\n }\n ]\n }\n }\n ]\n },\n \"$$ROOT\"\n ]\n }\n }\n }\n ],\n \"multi\": true\n }\n ]\n}\n{\n \"_id\": \"5f58b71b6f6d360f356a02ba\",\n \"cnpjCpf\": \"97515035072\",\n \"producao\": {\n \"impressoras\": [\n {\n \"_id\": \"5f5a3c0cd84b5a2a8bd5d50c\",\n \"impressora\": \"Etirama\",\n \"z\": [\n {\n \"_id\": \"5f5a4f7cd6c4678dd0a533de\",\n \"z\": \"10\",\n \"modulo\": \"x\",\n \"facas\": [\n {\n \"_id\": \"5f5a51642c26e32e008ecb12\",\n \"faca\": \"01\",\n \"largura\": \"NEW LARGURA!!\"\n },\n {\n \"_id\": \"5f5a516f027b588538f52914\",\n \"faca\": \"02\",\n \"largura\": \"y\"\n }\n ]\n },\n {\n \"_id\": \"5f5a4fadd6c4678dd0a533df\",\n \"z\": \"20\",\n \"modulo\": \"x\",\n \"facas\": [\n {\n \"_id\": 
\"5f5a502dd6c4678dd0a533e2\",\n \"faca\": \"01\",\n \"largura\": \"x\"\n },\n {\n \"_id\": \"5f5a5078d6c4678dd0a533e3\",\n \"faca\": \"02\",\n \"largura\": \"y\"\n }\n ]\n },\n {\n \"_id\": \"5f6a303c758a0f9e8a1219ae\",\n \"z\": \"100\",\n \"modulo\": \"1,5 mm\",\n \"desenvolvimento\": \"320\",\n \"encolhimento\": \"310\",\n \"distorcao\": \"96.875\"\n }\n ],\n \"perfil\": [\n {\n \"_id\": \"5f5a4fbcd6c4678dd0a533e0\",\n \"perfil\": \"papel\",\n \"cores\": {\n \"other\": {\n \"curva\": [\n \"g34\"\n ],\n \"hd\": [\n \"hd05\"\n ],\n \"lineatura\": [\n \"112\"\n ]\n }\n }\n },\n {\n \"_id\": \"5f5a4fded6c4678dd0a533e1\",\n \"perfil\": \"bopp\",\n \"cores\": {\n \"cyan\": {\n \"curva\": [\n \"h40\"\n ],\n \"hd\": [\n \"hd05\"\n ],\n \"lineatura\": [\n \"112\"\n ]\n }\n }\n }\n ]\n },\n {\n \"_id\": \"5f6a3a395179f2a3c2c7ae05\",\n \"impressora\": \"Komexy\",\n \"banda\": \"Larga\",\n \"trap\": [\n \"0.05\",\n \"0.20\",\n \"0.25\"\n ],\n \"espessura\": [\n \"1.14\",\n \"1.7\"\n ]\n }\n ]\n }\n}\n",
"text": "Hello : )Data,1 document,but works if many in collection,it only updates the “_id”: “5f58b71b6f6d360f356a02ba”I updated\nROOT when “_id”: “5f58b71b6f6d360f356a02ba”\n“producao”\n“impressoras” when “_id”: “5f5a3c0cd84b5a2a8bd5d50c”\n“z” when “_id”: “5f5a4f7cd6c4678dd0a533de”\nfacas when “_id”: “5f5a51642c26e32e008ecb12”\n“largura”: “NEW LARGURA!!” (i added this)In update pipelines the output of the pipeline,must be the updated document.\nIts alot nested so may look complecated but its always the same thing.When i update a document i do\n“$mergeObjects” the_doc_i_have {:updatedfield1 …}When i update a array i do\n“$map” and if match the filter,i replace the memberThis way the result is what i had + the updates i made.This looks complecated because its alot nested,and its raw json,\nwith function use that wrap the json it can be much more simple.\nIf time i might resend simpler code with functions.\nDrivers also provide query builders that help.If anyone knows a better way,would be helpful also.QueryResultHope it helps",
"username": "Takis"
},
{
"code": "const MongoClient = require('mongodb').MongoClient, { ObjectId } = require('mongodb'),\n queryDb = { 'producao.impressoras.z._id': new ObjectId('5f5a4f7cd6c4678dd0a533de') },\n updateDocument = {\n '$set': {\n 'producao.impressoras.$.z.$[pressZ].zCilindro': 1,\n 'producao.impressoras.$.z.$[pressZ].modulo': 1,\n 'producao.impressoras.$.z.$[pressZ].desenvolvimento': 1,\n 'producao.impressoras.$.z.$[pressZ].encolhimento': 1,\n 'producao.impressoras.$.z.$[pressZ].distorcao': 1\n }\n },\n options = { 'arrayFilters': [{ 'pressZ._id': new ObjectId('5f5a4f7cd6c4678dd0a533de') }] };\nMongoClient.connect(\n 'mongodb://...', { useNewUrlParser: true, useUnifiedTopology: true },\n function(connectErr, client) {\n const coll = client.db('flexograv').collection('customers'),\n r = coll.updateOne(queryDb, updateDocument, options);\n console.log(r);\n client.close();\n });\n",
"text": "thank you all! This solved.\nI did it in nodejs, not exactly like that, but I transcribed it for general use:",
"username": "Fabio_Bracht"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Query to return only matching array element | 2020-09-22T20:20:17.029Z | Query to return only matching array element | 10,241 |
[
"performance",
"monitoring"
] | [
{
"code": "",
"text": "When the business slows down, the message “serverStatus was very slow” frequently appears in the log.\nWhat does each item mean?\nAs shown in the picture, how to troubleshoot?\n微信图片_20200921200425995×525 1.09 MB",
"username": "11197"
},
{
"code": "",
"text": "Hi @11197 welcome to the community.Generally “serverStatus was very slow” means that the server is overburdened to the point that it cannot reply to status command without significant delay. I believe the serverStatus command in question was called by the full time diagnostic data capture system, once every second.Most of the time, if you know that the server is overburdened and expect that the workload will get lighter in the near future, this is not an issue. If you see this message all the time, then more in depth troubleshooting is required. However as I mentioned, typically only an overburdened server shows these messages, so upgrading the server’s hardware is one way to alleviate this.Please note that it’s best for next time if you paste the actual text message instead of posting a screenshot. A screenshot is much harder to read, and won’t show up in search results.Best regards,\nKevin",
"username": "kevinadi"
}
] | "serverStatus was very slow",What does each item mean? | 2020-09-21T13:22:11.359Z | “serverStatus was very slow”,What does each item mean? | 7,852 |
null | [] | [
{
"code": "",
"text": "2020-09-21T21:21:37.672+0000 I WRITE [conn15303751] update test.postInsightsContext query: { _id: “225264” } planSummary: IDHACK update: { _id: “225264”, accountId: 322, contextInfo: { tester: true }, _class: “com.spr.beans.d” } keysExamined:1 docsExamined:1 nMatched:1 nModified:0 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } 0msSo in above query context : conn15303751, when i want to know the actual client ip for this context, i need to parse the logs again to find something like this2020-09-21T21:21:37.184+0000 I NETWORK [conn15303751] received client metadata from 10.0.5.2:50692 conn15303751: { driver: { name: “mongo-java-driver|legacy”, version: “871df02c6e00ce83812cf2cdeb6d80e928ee794f” }, os: { type: “Linux”, name: “Linux”, architecture: “amd64”, version: “3.10.0-1127.el7.x86_64” }, platform: “Java/Oracle Corporation/1.8.0_131-b11” }So this is how i will be able to map conn15303751 with 10.0.5.2:50692.\nBut when i want to find client IP for each query in logs, it becomes tough.\nSo is there any other way to do this? also any specific reason as to why mongo doesnt put IP in context instead of string representation of thread?",
"username": "Prakhar_Jain02"
},
{
"code": "",
"text": "I think you can use jq utility on Unix serversor from shelldb.currentOp(true).inprog.forEach(function(d){if(d.client)print(d.client, d.connectionId)})\n127.0.0.1:59630 2",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Actually jq can be used when you trying to parse it manually from the logs. correct?\nBut suppose you are parsing logs from logstash on mongo box and sending it to some other place for visualization, there when you see queries they will have context in them, so here i need to map each context with a client IP.",
"username": "Prakhar_Jain02"
}
] | Parsing mongo logs to find the client IP from the context | 2020-09-24T01:01:57.718Z | Parsing mongo logs to find the client IP from the context | 4,268 |
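One practical option for the mapping problem above is a two-pass script over the log file: first record which client address each connN announced in its "received client metadata" line, then annotate the slow-operation lines that carry the same connN context. A hedged Python sketch, assuming the pre-4.4 plain-text log format quoted in the thread:

    # Sketch: resolve each "connN" context to a client address in a pre-4.4 mongod log.
    import re
    import sys

    metadata_re = re.compile(r'received client metadata from (\S+) (conn\d+):')
    context_re = re.compile(r'\[(conn\d+)\]')

    with open(sys.argv[1]) as log_file:
        lines = log_file.readlines()

    # First pass: learn which connection id belongs to which client address.
    conn_to_ip = {}
    for line in lines:
        match = metadata_re.search(line)
        if match:
            conn_to_ip[match.group(2)] = match.group(1)

    # Second pass: print WRITE/COMMAND lines together with the resolved address.
    for line in lines:
        match = context_re.search(line)
        if match and (' I WRITE ' in line or ' I COMMAND ' in line):
            ip = conn_to_ip.get(match.group(1), 'unknown')
            print(ip, line.rstrip())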
null | [
"text-search"
] | [
{
"code": "{\n title: \"The adventure of Jane doe\"\n authors: [{ name: \"john doe\" }]\n}\n",
"text": "Use case:insert this document in a collection:creating a multiple fields text index :db.mycollection.createIndex({title: “text”, “authors.name”: “text”})querying:db.mycollection.find({$text: {$search: “jan” }})returns the documentquerying:db.mycollection.find({$text: {$search: “joh”}})doesn’t return anythingquerying:db.mycollection.find({$text: {$search: “john”}})returns the documentIt seems that search text doesn’t search prefix on array of documentsIf anyone has an explanation, i would be very grateful",
"username": "Maxime_Radigue"
},
{
"code": "",
"text": "Hi @Maxime_Radigue,I actually surprised that the “jan” part worked and it might be just luck with stemming of words. Text indexes are designed to search words/phrases or delimited parts of words and not substrings.The new Atlas search service have capabilities of text regex queries , however, legacy text indexes don’t.Best\nPavel",
"username": "Pavel_Duchovny"
}
] | Text index on array of objects | 2020-09-23T19:42:33.403Z | Text index on array of objects | 7,839 |
null | [
"sharding",
"configuration"
] | [
{
"code": "db.shards.findOne(){\n\t\"_id\" : \"shard1\",\n\t\"host\" : \"shard1/XXX.XXX.XXX.XXX:27017,XXX.XXX.XXX.XXX:27017,XXX.XXX.XXX.XXX:27017\",\n\t\"state\" : 1\n}\nsh.reconfig()",
"text": "Hello,In the coming days I will be responsible for migrating an existing 4.0 sharded cluster to a new network. Unfortunately, at the time it was set up our internal DNS was not reliable enough to use hostnames instead of IP addresses. There were better workarounds even back then, but that ship has sailed and this is what you see (replacing the Xs with actual IP addresses) when you connect to the config db and run db.shards.findOne():We can accept some downtime, but not the kind of downtime associated with dumping and restoring the data into a new cluster on the new network. There are good guides for updating replica set configs and I feel confident in my ability to stand up each shard (a three member replica set) on the new network with new IP bindings, but there is no equivalent sh.reconfig() that I can use to say shard1 is now reachable at these three IP addresses or hostnames.Aside from the shards collection in the config db I can’t actually find another place where the hostnames for each shard are specified. Would it be as simple as updating these documents in the config db? Feels too good to be true and the lack of a sh.reconfig() probably means that this is way more complicated than changing a few strings.Thank you for any help or insights you might be able to provide for how to most efficiently do this while preserving the existing state of the cluster at the time that we shut it down for migration.-Daren",
"username": "Daren_McCulley"
},
{
"code": "db.shards.updateOne({_id: 'shard1'}, {$set: {host: 'shard1/s1r1:27017, s1r2:27017, s1r3:27017'})sharding:\n configDB: <configReplSetName>/cfg1.example.net:27019, cfg2.example.net:27019,...\n",
"text": "TL;DR - It might actually be as simple as db.shards.updateOne({_id: 'shard1'}, {$set: {host: 'shard1/s1r1:27017, s1r2:27017, s1r3:27017'}). Still can’t find anything in the docs to support this and I certainly wouldn’t recommend doing it live (especially with the balancer running).Hate to respond to my own question, but I’ll say what we did in case it helps anyone else who needs to rebind a sharded cluster to a new set of IPs.First we ensured that all members in the cluster had an IP + hostname entry in /etc/hosts with the current set of IPs. We also updated all config files to use bindIpAll: true. Then we stopped the balancer and shutdown the cluster except for the config replica set. We used db.shards.updateOne() on the config db to change the existing IPs in the host value to use hostnames. Finally we used rs.reconfig() to update the config replica set to use hostnames.We restarted each shard and used rs.reconfig() to use hostnames. Before restarting the query routers we updated the config files to specify the config replica set by hostnames instead of IPs for this field:Once the QRs were up, the cluster was live on the existing IP addresses, but in an IP agnostic state. Every member was using a hostname to refer to any other member of the cluster. Changing the IPs was now a simple matter of updating the networking interface on the VM and the /etc/hosts IP => hostname mapping (or using DNS and removing the hardcoded mapping in /etc/hosts). Our cluster is live again on a new network. If there really isn’t more to it than changing the host values for each of the shard documents in the config.shards collection I’m confused as to why there isn’t an sh.reconfig() method.",
"username": "Daren_McCulley"
}
] | Migrating a Sharded Cluster | 2020-09-16T17:38:50.682Z | Migrating a Sharded Cluster | 1,691 |
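A minimal mongo shell sketch of the sequence described in this thread, assuming a shard named shard1 and hypothetical hostnames s1r1–s1r3.example.net; it mirrors the poster's steps and is not an officially documented procedure:

  // 1. Stop the balancer before touching cluster metadata
  sh.stopBalancer()

  // 2. On the config server replica set, rewrite the shard's host string to use hostnames
  use config
  db.shards.updateOne(
    { _id: "shard1" },
    { $set: { host: "shard1/s1r1.example.net:27017,s1r2.example.net:27017,s1r3.example.net:27017" } }
  )

  // 3. On each shard replica set (and the config replica set itself), switch each member to its hostname
  cfg = rs.conf()
  cfg.members[0].host = "s1r1.example.net:27017"   // repeat for every member index
  rs.reconfig(cfg)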
null | [
"golang"
] | [
{
"code": "{\nmeeting_name: \"it is a test meeting name\"\nto: Array\n 0: Object\n contact: Object\n phone:\"+33123456789\"\n 1: Object\n contact: Object\n phone:\"+33987654321\"\n}\n{\nmeeting_name: \"it is a test meeting name\"\nto: Array\n 0: Object\n contact: Object\n phone:\"+33123456789\"\n name: \"john doe\"\n 1: Object\n contact: Object\n phone:\"+33987654321\"\n}\n",
"text": "Hello, I have created a document, in Mongodb compass it looks like:The field “to” is an array of object contact, where the object contact is a map, with key value pares like “phone”: “+33123456789”. I want to do something like: find the contact with phone “+33123456789”, set the name of the contact to be “john doe”, so that the data looks like:how can I implement this function by golang driver?Thanks for the help,James",
"username": "Zhihong_GUO"
},
{
"code": "",
"text": "I have examples of updates in mongo-go-examples/update_test.go at master · simagix/mongo-go-examples · GitHub. The title is misleading because I don’t see any array in action.",
"username": "ken.chen"
},
{
"code": "{ \n \"to\": [\n {\"contact\":{\"phone\":\"+33123456789\"}},\n {\"contact\":{\"phone\":\"+33987654321\"}}\n ]\n}\n{ \n \"to\": [\n {\"contact\":{\"phone\":\"+33123456789\", \"name\":\"john doe\"}},\n {\"contact\":{\"phone\":\"+33987654321\"}}\n ]\n}\n\"name\":\"john doe\"",
"text": "Hello Ken, thanks for the quick answer. The data looks like:I want to change it to:I plan to search the contact object with phone is “+33123456789”, then add a pair \"name\":\"john doe\" to it, how can I do this?Thanks,james",
"username": "Zhihong_GUO"
},
{
"code": "func TestUpdateArray(t *testing.T) {\n\tvar err error\n\tvar client *mongo.Client\n\tvar ctx = context.Background()\n\tif client, err = getMongoClient(); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdoc := bson.M{\"to\": []bson.M{\n\t\tbson.M{\"contact\": bson.M{\"phone\": \"+33123456789\"}},\n\t\tbson.M{\"contact\": bson.M{\"phone\": \"+33987654321\"}},\n\t}}\n\tdefer client.Disconnect(ctx)\n\tcollection := client.Database(\"test\").Collection(\"foo\")\n\tcollection.DeleteMany(ctx, bson.M{})\n\tcollection.InsertOne(ctx, doc)\n\n\tvar filter bson.M\n\tjson.Unmarshal([]byte(`{\"to\": { \"$elemMatch\": {\"contact.phone\": \"+33123456789\"} } }`), &filter)\n\tvar update bson.M\n\tjson.Unmarshal([]byte(`{\"$set\": {\"to.$.contact.name\": \"John Doe\"} }`), &update)\n\tif _, err = collection.UpdateOne(ctx, filter, update); err != nil {\n\t\tt.Fatal(err)\n\t}\n}\n",
"text": "I added below to update_test.go",
"username": "ken.chen"
},
{
"code": "",
"text": "Hello Ken,That’s exactly what I need! Thank you so much!James",
"username": "Zhihong_GUO"
}
] | Access the items in array in db | 2020-09-23T11:19:32.094Z | Access the items in array in db | 2,758 |
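For readers not using Go, the same update expressed directly in the mongo shell (the collection name foo is taken from the Go example above):

  db.foo.updateOne(
    { to: { $elemMatch: { "contact.phone": "+33123456789" } } },
    { $set: { "to.$.contact.name": "John Doe" } }   // positional $ targets the matched array element
  )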
null | [
"sharding"
] | [
{
"code": "mongos> db.getCollection('cases').getShardDistribution()\n\nShard shard1 at shard1/172.23.138.32:27017\n data : 1363.79GiB docs : 79164325 chunks : 171591\n estimated data per chunk : 8.13MiB\n estimated docs per chunk : 461\n\nShard shard2 at shard2/opmongodbas4:27017\n data : 4522.08GiB docs : 376106878 chunks : 171589\n estimated data per chunk : 26.98MiB\n estimated docs per chunk : 2191\n\nTotals\n data : 5885.88GiB docs : 455271203 chunks : 343180\n Shard shard1 contains 23.17% data, 17.38% docs in cluster, avg obj size on shard : 18KiB\n Shard shard2 contains 76.82% data, 82.61% docs in cluster, avg obj size on shard : 12KiB\nactioncreation",
"text": "Hi,\nWe have a quite big collection that is replicated and sharded (2 shards, 1 replica by shard)\nThe data is unbalanced betwwen the 2 shard :The shard key is {context: 1, action:1, creation 1}\ncontext has a very low cardinality (much of the data has the same value)action has a medium cardinality (some hundreds)\ncreation is the creation timestamp of the dataAny direction to dig ?",
"username": "FRANCK_LEFEBURE"
},
{
"code": "",
"text": "Hi @FRANCK_LEFEBURE and welcome in the MongoDB community !I’m wondering if your shard_key is not growing monotonically here. Are your write operations distributed evenly across all the chunks?Some good doc:Also, is the balancer activated? Is it running smoothly?sh.status() can probably give you these information.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "db.getCollection('chunks').find({ns : 'softbridge4.cases', min:{\n '$gte':{context:'orange', actionId:191, creation: ISODate('2019-06-17T03:41:41.000-0400')},\n '$lte':{context:'orange', actionId:191, creation: ISODate('2019-06-24T03:21:05.000-0400')}\n }}).sort({ns:1, min:1}) \n softbridge4.cases-context_\"orange\"actionId_191creation_new Date(1560757301000),shard1,36932,2\n softbridge4.cases-context_\"orange\"actionId_191creation_new Date(1561357092000),shard1,0,0\n softbridge4.cases-context_\"orange\"actionId_191creation_new Date(1561359957000),shard1,0,0\n softbridge4.cases-context_\"orange\"actionId_191creation_new Date(1561360865000),shard1,553980,30\ndb.adminCommand( {\n mergeChunks: \"softbridge4.cases\",\n bounds: [ {context:'orange', actionId:191, creation: ISODate('2019-06-17T03:41:41.000-0400')},\n {context:'orange', actionId:191, creation: ISODate('2019-06-24T03:21:05.000-0400')} ]\n} )\nFailed to commit chunk merge :: caused by :: DuplicateKey: chunk operation commit failed: version 97732|17||5d5efd91753aa982feb1ecfb doesn't exist in namespace: softbridge4.cases. Unable to save chunk ops. Command: { applyOps: [ { op: \"u\", b: false, ns: \"config.chunks\", o: { _id: \"softbridge4.cases-context_\"orange\"actionId_191.0creation_new Date(1560757301000)\", ns: \"softbridge4.cases\", min: { context: \"orange\", actionId: 191.0, creation: new Date(1560757301000) }, max: { context: \"orange\", actionId: 191, creation: new Date(1561360865000) }, shard: \"shard1\", lastmod: Timestamp(97732, 17), lastmodEpoch: ObjectId('5d5efd91753aa982feb1ecfb') }, o2: { _id: \"softbridge4.cases-context_\"orange\"actionId_191.0creation_new Date(1560757301000)\" } }, { op: \"d\", ns: \"config.chunks\", o: { _id: \"softbridge4.cases-context_\"orange\"actionId_191creation_new Date(1561357092000)\" } }, { op: \"d\", ns: \"config.chunks\", o: { _id: \"softbridge4.cases-context_\"orange\"actionId_191creation_new Date(1561359957000)\" } } ], preCondition: [ { ns: \"config.chunks\", q: { query: { ns: \"softbridge4.cases\", min: { context: \"orange\", actionId: 191.0, creation: new Date(1560757301000) }, max: { context: \"orange\", actionId: 191, creation: new Date(1561357092000) } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5d5efd91753aa982feb1ecfb'), shard: \"shard1\" } }, { ns: \"config.chunks\", q: { query: { ns: \"softbridge4.cases\", min: { context: \"orange\", actionId: 191, creation: new Date(1561357092000) }, max: { context: \"orange\", actionId: 191, creation: new Date(1561359957000) } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5d5efd91753aa982feb1ecfb'), shard: \"shard1\" } }, { ns: \"config.chunks\", q: { query: { ns: \"softbridge4.cases\", min: { context: \"orange\", actionId: 191, creation: new Date(1561359957000) }, max: { context: \"orange\", actionId: 191, creation: new Date(1561360865000) } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5d5efd91753aa982feb1ecfb'), shard: \"shard1\" } } ], writeConcern: { w: 0, wtimeout: 0 } }. 
Result: { applied: 1, code: 11000, codeName: \"DuplicateKey\", errmsg: \"E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { : \"softbridge4.cases\", : { context: \"orange\", actionId: 191.0, creat...\", results: [ false ], ok: 0.0, operationTime: Timestamp(1600798453, 1609), $gleStats: { lastOpTime: { ts: Timestamp(1600798453, 1609), t: 6 }, electionId: ObjectId('7fffffff0000000000000006') }, $clusterTime: { clusterTime: Timestamp(1600798453, 2328), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } :: caused by :: E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { : \"softbridge4.cases\", : { context: \"orange\", actionId: 191.0, creation: new Date(1560757301000) } }",
"text": "I tried some little chunk merge .\nEg on this little sub dataset :That corresponds to 4 chunks contiguous on the same shard :With this command (balancer stopped):But I end with this error :Failed to commit chunk merge :: caused by :: DuplicateKey: chunk operation commit failed: version 97732|17||5d5efd91753aa982feb1ecfb doesn't exist in namespace: softbridge4.cases. Unable to save chunk ops. Command: { applyOps: [ { op: \"u\", b: false, ns: \"config.chunks\", o: { _id: \"softbridge4.cases-context_\"orange\"actionId_191.0creation_new Date(1560757301000)\", ns: \"softbridge4.cases\", min: { context: \"orange\", actionId: 191.0, creation: new Date(1560757301000) }, max: { context: \"orange\", actionId: 191, creation: new Date(1561360865000) }, shard: \"shard1\", lastmod: Timestamp(97732, 17), lastmodEpoch: ObjectId('5d5efd91753aa982feb1ecfb') }, o2: { _id: \"softbridge4.cases-context_\"orange\"actionId_191.0creation_new Date(1560757301000)\" } }, { op: \"d\", ns: \"config.chunks\", o: { _id: \"softbridge4.cases-context_\"orange\"actionId_191creation_new Date(1561357092000)\" } }, { op: \"d\", ns: \"config.chunks\", o: { _id: \"softbridge4.cases-context_\"orange\"actionId_191creation_new Date(1561359957000)\" } } ], preCondition: [ { ns: \"config.chunks\", q: { query: { ns: \"softbridge4.cases\", min: { context: \"orange\", actionId: 191.0, creation: new Date(1560757301000) }, max: { context: \"orange\", actionId: 191, creation: new Date(1561357092000) } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5d5efd91753aa982feb1ecfb'), shard: \"shard1\" } }, { ns: \"config.chunks\", q: { query: { ns: \"softbridge4.cases\", min: { context: \"orange\", actionId: 191, creation: new Date(1561357092000) }, max: { context: \"orange\", actionId: 191, creation: new Date(1561359957000) } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5d5efd91753aa982feb1ecfb'), shard: \"shard1\" } }, { ns: \"config.chunks\", q: { query: { ns: \"softbridge4.cases\", min: { context: \"orange\", actionId: 191, creation: new Date(1561359957000) }, max: { context: \"orange\", actionId: 191, creation: new Date(1561360865000) } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5d5efd91753aa982feb1ecfb'), shard: \"shard1\" } } ], writeConcern: { w: 0, wtimeout: 0 } }. Result: { applied: 1, code: 11000, codeName: \"DuplicateKey\", errmsg: \"E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { : \"softbridge4.cases\", : { context: \"orange\", actionId: 191.0, creat...\", results: [ false ], ok: 0.0, operationTime: Timestamp(1600798453, 1609), $gleStats: { lastOpTime: { ts: Timestamp(1600798453, 1609), t: 6 }, electionId: ObjectId('7fffffff0000000000000006') }, $clusterTime: { clusterTime: Timestamp(1600798453, 2328), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } :: caused by :: E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { : \"softbridge4.cases\", : { context: \"orange\", actionId: 191.0, creation: new Date(1560757301000) } }",
"username": "FRANCK_LEFEBURE"
},
{
"code": "softbridge4.cases-context_\"orange\"actionId_191creation_new Date(1600775145000),shard1,0,0\nsoftbridge4.cases-context_\"orange\"actionId_191creation_new Date(1600775525000),shard1,0,0\nsoftbridge4.cases-context_\"orange\"actionId_191creation_new Date(1600775740000),shard1,0,0\nsoftbridge4.cases-context_\"orange\"actionId_191creation_new Date(1600775885000),shard1,0,0\nsoftbridge4.cases-context_\"orange\"actionId_191creation_new Date(1600776045000),shard1,0,0\n",
"text": "Hi Maxime, thanks for your quick reply,The last part of the composite shard key (the timestamp one) grows monotically. The write operations seem evenly distributed.\nThe design of the shard key was initially to optimise the read preference (Every costly read request include criteria on the 3 partition subkey). It seems effective.\nAt this moment I prefer to not think about a repartioning cause it would be a true pain.I’ve dig a little since yesterday and I understand a little more the problem.We do a lot of update/delete on this collection.\nFor some reason, one of the “action” value documents are over affected by delete.\nEach shard have ~170000 chunks\n~130000 chunks on the shad1 are about document with the same value action=191\nOn theses 130000 chunks, ~124000 are empty chunks. That explains the unbalance. As I understand the balancer only takes care of the chunks number on each shard.My first non-understanding is about all the chunks with the same “action” values that stay on the same shardWhy don’t the different values of date distribute on both shard ? Is it a consequence of the monotically increase of theses values ? What is the root cause of this strategy ?To correct the unbalance, i’m planning to :After theses operations, the number of chunks on the shard1 will be lower. May be ~50000.\nCan I hope that the balancer will redistribute the chunks evenly after this operation ?\nOr should I expect to have to write a job to move some chunks manually ?Franck",
"username": "FRANCK_LEFEBURE"
},
{
"code": "",
"text": "A missing info : the cluster was 3.6.13 based.\nIt has been upgraded today to 3.6.20 (there was some tickets in Jira related to chunks)\nBut this upgrade didn’t fix the mergeChunks problem.\nTomorrow I will try to reproduce the problem on a dev server",
"username": "FRANCK_LEFEBURE"
},
{
"code": "db.adminCommand( {\n mergeChunks: \"test.cases\",\n bounds: [ {actionId:191)},\n {actionId:250)} ]\n} )\ndb.getSiblingDB(\"config\").chunks.find({ns : 'test.cases', min:{actionId:191}}).forEach(function(chunk) {chunk1=chunk;});\ndb.getSiblingDB(\"config\").chunks.find({ns : 'test.cases', min:{actionId:250}}).forEach(function(chunk) {chunk2=chunk;});\ndb.adminCommand( { mergeChunks : 'softbridge.cases' , bounds:[chunk1.min,chunk2.min]});\n",
"text": "Well I think I have a workaround for the mergeChunks problem.\nI think this is somewhere related to a Bson de/serialization problemI’ve mounted a sharded database with a simpler partition key : {action:1}\nThe action values are integersif I execute a command like this one in a mongo shell or in Robo3T :Then the command fails on a DuplicateKey errorNow if I execute a script like this oneThen the execution is OK.I think there is at least a documentation bug here",
"username": "FRANCK_LEFEBURE"
},
{
"code": "",
"text": "I don’t know what’s happening exactly with the mergeChunks command. It could also be something that has been fixed in later versions. I would recommend to upgrade if possible.Regarding the issue with the empty / almost empty chunks. The only way to solve the problem is to merge them so the balancer could balance non-empty chunks correctly.When you massively delete data & it changes dramatically the way your data is split across the chunks, you need to consider running a few mergeChunks commands to regroup the newly empty chunks.There is no other way to solve this issue without completely redesigning the shard key and the way the documents are organised & stored in the cluster.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I came to this conclusion.\nA chunks merging job, based on mongo/sharding_utils.js at master · vmenajr/mongo · GitHub is actually running on the cluster.\nFor the possible bug, I will give a try on the latest 4.4 and will open a Jira if it’s still relevant\nThanks",
"username": "FRANCK_LEFEBURE"
}
] | Unbalanced data repartition between shard | 2020-09-21T20:36:21.761Z | Unbalanced data repartition between shard | 3,593 |
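A small, hedged helper for checking how chunks are spread across shards before and after merging; run it against a mongos. The namespace is the one used in this thread, and on a 3.6 cluster config.chunks still stores the namespace in the ns field:

  db.getSiblingDB("config").chunks.aggregate([
    { $match: { ns: "softbridge4.cases" } },
    { $group: { _id: "$shard", chunks: { $sum: 1 } } }
  ])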
null | [] | [
{
"code": "\"example-document\": {\n _id: \"example-document\",\n \"example-array1\": [],\n \"example-array2\": []\n}\n",
"text": "I have a document in the following format,and it’s not clear to me in the docs how to remove “example-array2” from this particular document since it’s an array and not an object. How can I just remove one particular array by name?",
"username": "scar9"
},
{
"code": "$unsetdb.foo.updateOne({ _id: \"example-document\" }, {$unset: { \"example-array1\": true } } )\n",
"text": "An array is an object. Use $unset.",
"username": "ken.chen"
}
] | Removing an array from a document? | 2020-09-23T19:41:15.578Z | Removing an array from a document? | 1,407 |
[
"dot-net"
] | [
{
"code": "",
"text": "I have an entity with simple enum ( LicenseType ) in my .Net Core project. I want to update LicenseType field of my entity, but receiving error: image720×98 5.5 KBAlso, when i try to unset that field i receive such error:When performing such operations on my local db instance everything is okey, but using remote fails. My entity model is:And enum:",
"username": "Kenny_Wood"
},
{
"code": "",
"text": "Hi @Kenny_Wood and welcome in the MongoDB Community !Looks like your collection is sharded in production and not in dev which is why you don’t have the issue in dev. It also looks like “LicenceType” is part of the shard key which is immutable before MongoDB 4.2. Which version are you using?If you are sure that you want to update this field, maybe you should reconsider your shard key and maybe data model.If you use MongoDB >= 4.2, then you should use a transaction if you want to make this update successful. More considerations here:Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi, Thanks for your reply.My problem was resolved by recreating the collection and importing dump data.Maybe, collection was created in some custom way",
"username": "Kenny_Wood"
},
{
"code": "",
"text": "Well, yes. My guess is that is was a sharded collection and now you recreated an un-sharded collection that now resides only in the primary shard of its database.More documentation about the sharding in MongoDB: https://docs.mongodb.com/manual/sharding/Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | Can't update immutable enum | 2020-09-23T09:55:11.610Z | Can’t update immutable enum | 3,305 |
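A minimal mongo shell sketch of the transaction approach mentioned above for MongoDB 4.2+; the database, collection, filter and values are hypothetical placeholders:

  var session = db.getMongo().startSession();
  session.startTransaction();
  try {
    var coll = session.getDatabase("mydb").entities;   // hypothetical db/collection names
    coll.updateOne(
      { _id: documentId },                             // placeholder: identify a single document
      { $set: { LicenseType: 2 } }                     // the shard key field being changed
    );
    session.commitTransaction();
  } catch (e) {
    session.abortTransaction();
    throw e;
  }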
|
null | [
"configuration"
] | [
{
"code": "",
"text": "Unable to execute admin commands on mongodb Atlas instance.The instance is a M10(General) instance.Have been using the older node versions (node version 2.2.12) connection url. And the user has the atlasAdmin permission.Any body faced similar issues with Atlas Mongodb?",
"username": "Chaitanya_Narra"
},
{
"code": "",
"text": "What command you were trying to run?\nThere are some restrictions on using some admin commands",
"username": "Ramachandra_Tummala"
},
{
"code": "setParameter",
"text": "Welcome to the community @Chaitanya_Narra!Per the link that @Ramachandra_Tummala shared, setParameter is not a supported administrative command for MongoDB Atlas. This command modifies options that are normally set on the command-line for a deployment, and any changes are not persisted.If there is a specific deployment parameter you are trying to adjust, please reach out to the Atlas support team for advice and assistance.Regards,\nStennie",
"username": "Stennie_X"
}
] | MongoError: not authorized on admin to execute command setParameter: | 2020-09-23T13:30:46.557Z | MongoError: not authorized on admin to execute command setParameter: | 7,949 |
null | [
"golang"
] | [
{
"code": "interface{}primitivesbson.Abson.E[]byte[]byteinterface{}map[string]interface{}",
"text": "I am trying to unmarshal bson into an interface{} that can be type checked to native types (no primitives or bson.A or bson.Es), and/or convert a bson []byte to a json []byte.One of the central issues I’m having with the go driver is that it pushes mongo’s type system into the business logic when using generic types like interface{}. If I write code to type check mongodb types, it won’t work if I input standard json unmarshaled map[string]interface{} fields.I would like to do these things via the UnmarshalBSON / UnmarshalBSONValue interfaces so that I’m not messing with the Default registry. Is this possible?",
"username": "Dustin_Currie"
},
{
"code": "",
"text": "Also, it would be awesome to see the driver move in this direction, or at least add an escape hatch. If the driver decodes generically to go native types other tools/code won’t have issues…and I won’t have to write a struct and unmarshaler for every weird type a javascript programmer decides to push to mongo ",
"username": "Dustin_Currie"
},
{
"code": "",
"text": "Hi @Dustin_Currie,We’re definitely open to ideas that would make this kind of thing easier for users. Can you provide an example that shows what you’d like the driver to do? We don’t necessarily need a code sample, but it’d be helpful to have an example of a document in the database and some specific information about how you’d like the driver to store it after querying and what you want to do with it in your code.– Divjot",
"username": "Divjot_Arora"
}
] | Convert bson []byte to json []byte | 2020-09-20T04:04:30.647Z | Convert bson []byte to json []byte | 3,625 |
null | [
"app-services-user-auth",
"serverless",
"next-js"
] | [
{
"code": "Authorization: Bearer <token>// on server side\n\n// receive the token in request header (or in GET query?)\nconst token = req.headers.token;\n// can we do like this?\nconst credentials = Realm.Credentials.tokenAlreadyIssuedByRealm(token);\nconst user = await app.logIn(credentials);\n",
"text": "A typical authentication flow for realm-web is:This is no problem at all. Now, my question is how to use your own api, instead of Realm’s GraphQL endpoint.(FYI: I’m using Google OAuth for the step#1)I faced this usecase with Next.js. Logins happen on client side, but later some logics may run on server side. The same user identities/roles wanted to be used.Ideally something like:Do I have to use the User Api Key for this kind of usecase, or am I missing something?",
"username": "Toshi"
},
{
"code": "",
"text": "Hi @Toshi,Why not to use Realm Google Auth to get a token and then use it for graphQL?Any auth provider can be used to get an access token.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks for response!\nBefore trying your suggestion, please allow me to clarify what I’m doing.So, I already used Realm Google Auth, and made it work successfully on the client-side (with realm-web SDK). Then, please assume I have to use a separate 3rd party serverless function. Not a GraphQL but a REST one. (More precisely, it’s a Next.js api fuction, but it doesn’t matter)User workflow is like this:\nFirst, a user clicks “Login with Google” button. Redirects to Google’s login screen. Then redirects back to the original page (localhost), successfully logged in.\nThen, the browser fetches an external api.My question – now on the API server, how I can tell the request comes from (on behalf of) the user?\nDo you still think I should somehow use Google Realm Auth on the server side again?",
"username": "Toshi"
},
{
"code": "",
"text": "Hi @Toshi,So if I understand you correctly you want to call a http webhook on behalf of the already authenticated user?If my assumption is correct I advise you to review my detailed example here on how to use payload user_id field to run a webhook by this userBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you, the post itself looks informative, but hmm…So if I understand you correctly you want to call a http webhook on behalf of the already authenticated user?For this question, the answer is yes and no – yes, I want to send a http request on behalf of the already authenticated user, but not toward the realm cloud, but to a normal NodeJS server I set up.An analogy may be described like this: So you have “React for frontend + Rails for backend” combination, but the authentication is done on React (=browser) side. You want to allow the backend to access the user object to do some CRUD operations respecting the roles.\n(*I’m not using Rails, but NodeJS of course)How can the backend (NodeJS) get access to the user object?\nIs your post mentioned above also applicable to this problem?",
"username": "Toshi"
},
{
"code": "",
"text": "@Toshi, if this node js code need to get access to the Realm resources and respect the defined roles you will need to authenticate it through realm .Now for webhooks you csn have the NodeJS code call them with the realm user_id, with the script auth this should be enough to set the excutor.If you have some unrelated to realm tasks for this node code its up to you how to store/identify the user.Pavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hmm, let me take some time to understand what you said.",
"username": "Toshi"
},
{
"code": "https://realm.mongodb.com/api/client/v2.0/app/<yourappid-abcde>/auth/providers/<provider type>/loginprovider type",
"text": "OK, let me seek for another workaround… after all I want to re-authenticate a user from my next.js code.So from the post you mentioned above, I found you said:Basically any provider can be authenticated via the following url:\nhttps://realm.mongodb.com/api/client/v2.0/app/<yourappid-abcde>/auth/providers/<provider type>/loginI’m curious about this. Can it be used with Google Auth as the provider type? Could you refer to any doc about it if any?",
"username": "Toshi"
},
{
"code": "$ curl --location --request POST 'https://realm.mongodb.com/api/client/v2.0/app/app-123/auth/providers/oauth2-google/login' \\\n> --header 'Content-Type: application/json' \\\n> --data-raw '{\n> authCode : \"\"\n> }'\n{\"error\":\"failed to load auth request\",\"er-location --request POST 'https://realm.mongodb.com/api/client/v2.0/app/friendlychat-amzgp/auth/providers/oauth2-google/login' --header 'Content-Type: application/json' --data-raw '{\n }'\n{\"error\":\"expected either accessToken, id_token or authCode in payload\",\"error_code\":\"AuthError\"\n",
"text": "Hi @Toshi,It seems to be possible if you will investigate what payload is expected by google and you con grab from Realm side…Basically it seems that it looks for one of the following:So if you have accessToken, id_token or authCode from your initial auth you should be able to reauthenticateAlso read:Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "import { withSSRContext } from 'aws-amplify'\n// ...\nexport async function getServerSideProps(context) {\n const { Auth } = withSSRContext(context)\n const user = await Auth.currentAuthenticatedUser() //← this line\n // ...\n}\n",
"text": "Thank you! But unfortunately the requirements of authCode seems not a simple solution to the problem.Meanwhile, I just found an article which does exactly what I want to do, although it features AWS Amplify.The complete guide to Next.js authentication with AWS - Client, Server, and API routesPlease check the first 2-3 sections. Those sentences really well explain how I (and other Next.js developers) want to use authentication.In short, all I want is Realm version of this:But I now suspect Realm currently does not have similar capability yet. Why I think so is because Amplify team shipped this feature only 6 days ago. You can find the discussion in this github issue.I’ve been watching the issue for weeks, but until today I didn’t think it relates to authentication. I now start thinking that implementing a client-server integrated auth system is a difficult task…What are your thoughts?",
"username": "Toshi"
},
{
"code": "",
"text": "Thank you! But unfortunately the requirements of authCode seems not a simple solution to the problem.Can you explain why this isn’t simple? Is this because you can’t re-use the auth code that’s generated from a log-in twice?At the moment it looks like a possible workaround is using JWT Authentication w/ Realm + Auth0/Cognito/any other Auth Service.You can:",
"username": "Sumedha_Mehta1"
},
{
"code": "authCodeid_tokenRealm.Credentials.google()Realm.Credentials.google(accessToken)accessTokenRealm.Credentials.google(redirectUri)const redirectUri = process.env.NEXT_PUBLIC_REALM_LOGIN_REDIRECT_URL;\n//...\nconst logIn = async () => {\n const credentials = Realm.Credentials.google(redirectUri);\n await app.logIn(credentials);\n};\n// in JSX\n<button onClick={logIn}>Log In with Google</button> \nhandleAuthRedirectrealm-webid_tokenauthCodenext-auth",
"text": "Hi Sumedha, thank you. To explain this, I need details, so you’ll have to read a little long post – but let me do:Can you explain why this isn’t simple? Is this because you can’t re-use the auth code that’s generated from a log-in twice?One reason is that I’m not using Google’s SDK, so in my current code no authCode and id_token appear.What does that mean? The first thing to mention, how I use Realm.Credentials.google() func. In Realm’s official doc, you find Realm.Credentials.google(accessToken). The argument is accessToken acquired via Google’s SDK. But I found another blog post which shows me I can use Realm.Credentials.google(redirectUri) instead. It looked simpler so I adopted this approach.The code looks something like this:This code successfully redirects user to Google’s login page. (I also used handleAuthRedirect from realm-web SDK to handle the user redirected back)So, in my current code, I don’t know about Google’s id_token or authCode. They are nicely hidden behind. Yes if I write some code I will get access to them, but I want to skip the steps if possible. Does it make sense to you?So next, the latter half, about JWT – yes, it looked very attractive so I have been trying it as a plan B in parallel. Actually, Next.js has a library called next-auth which can issue JWT as a sort of 3rd party authenticator. It also beautifully handles the client-server shared authentication.The main problem I faced was that it seemed I have to manage two different logins accordingly. For example, a user clicks “Log out” button, it triggers next-auth’s (3rd party’s) sign-out function. It doesn’t trigger Realm’s sing-out automatically and the user may remain signed-in with Realm. I may write my own code to handle this, but if there’s a bug it’s serious, so I gave up with this plan B. (I’ll maybe try again, I might be wrong)Now you might come up with further questions, if so, please ask.",
"username": "Toshi"
},
{
"code": "Auth Coderedirect_urinext-auth",
"text": "Hey Toshi – your approach for Google seems fine for only client-side authentication, but probably will not work for the case of trying to re-authenticate from the server. Even if you took the Auth Code approach with the Google SDK, an Auth Code would not work given that you’re only allowed to use it once. I just wanted to confirm how you were going about it.I also want to warn you that using the redirect_uri approach may not be the best choice if you’re taking your application into production. The reason we’ve left it out of the documentation is we’re in the process of understanding a recent change in Google’s API which no longer lets users verify their application on Google Cloud when using the realm/stitch as their Redirect URI.As for using the Realm logout simultaneously with next-auth’s 3rd party sign-out function, I don’t see why you wouldn’t be able to call the logout function after the user clicks the ‘Logout’ Button, unless there is something else that I’m not catching",
"username": "Sumedha_Mehta1"
},
{
"code": "redirect_uriremoteMongoClient",
"text": "Hi Sumedha,but probably will not work for the case of trying to re-authenticate from the server.Yes, I definitely agree with you. And that is the reason why I’m seeking (in this thread) for a general, provider-agnostic way to re-authenticate a user on server, who already logged in on browser.I also want to warn you that using the redirect_uri approach may not be the best choiceHmm that’s a very interesting story. Thank you for letting me know. I’ll definitely follow your advice.I don’t see why you wouldn’t be able to call the logout function after the user clicks the ‘Logout’ ButtonThat’s a good question, you may be right – last time I tried implementing it, I smelled some future complexity and just stopped, but maybe I was wrong. I’ll have to try again.So now – I’m feeling I’m taking too much time of you (and Pavel) for this issue. I appreciate your responsive feedbacks! At this moment it seems there’s no super elegant way to achieve client-server synced authentication, but I think I can find a way to do without it, thanks to Realm providing different approaches. (While I wanted to use remoteMongoClient on the server, Realm also allows us to run MQL on client side so next I will try make use of it)So I’ll use my brain a bit more to find a way to achieve the same user experience without server-side re-authentication. Yet just in case, may I post another feature-request regarding this on feedback.mongodb.com?",
"username": "Toshi"
}
] | How to Re-Authenticate a User with Token? | 2020-09-20T02:00:43.637Z | How to Re-Authenticate a User with Token? | 8,097 |
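A minimal sketch of the Custom JWT route discussed above, assuming the server already holds a token for the user signed by an issuer configured in Realm's Custom JWT provider; the app id and token variable are hypothetical:

  const app = new Realm.App({ id: "your-realm-app-id" });
  const credentials = Realm.Credentials.jwt(tokenFromYourIssuer);
  const user = await app.logIn(credentials);   // yields the same Realm user identity on client or server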
null | [
"replication"
] | [
{
"code": "",
"text": "Hello All -We are planning to setup 5 Node cluster. In our current setup we have ONLY 1 Node and as it is a SPOF, we wanted to setup HA by adding a secondary.However we wanted to setup 3 Arbiter nodes, as it would align with our HA cluster, of course with the standard port of 27017.The standard Mongo Documentation warns against the usage of more than 1 arbiter.Do we have any specific reasoning on why we should not use more than 1 arbiter?Thanks in advance.Regds,\nSivakumar",
"username": "siva_kumar_Rallabhan"
},
{
"code": "",
"text": "If you have 2 nodes + 3 arbiters, it means majority = 3 because you have a total of 5 nodes.So first of all, it means you need to be able to write on 3 nodes if you use the write concern = “majority”. Which is impossible in your case so each write operations emitted with w=“majority” will fail.Also, because majority = 3, it means that you won’t be able to elect a primary if 3 nodes are down. So for example if your 3 arbiters are down, you will be stuck with 2 secondaries that are not able to elect a primary. This would not happen if you had only one arbiter because the majority would be at 2 so they would be able to elect a primary.Arbiters in general are not recommended. The minimum configuration that MongoDB Atlas is deploying is 3 nodes RS with P+S+S.If you use a P+S+A, it means you can only write on 2 machines. If something happens to the P or the S, it means that you are now only able to write to one machine which means that all the write operation with w=“majority” will fail. So P+S+A is not HA for durable write operations (ie using w=“majority”).With P+S+S, you can continue to write with w=“majority” if you lose one machine (any of them).\nWith P+S+A, you can continue to write with w=“majority” only if you lose the A so it means your P & S are weak points.I don’t understand what is your motivation for using more than one arbiter. Can you please explain why you are considering 3? It’s only more troubles. Zero arbiter is the best way to go.With P+S+S, you can take one machine out for maintenance, backups and rolling upgrades and still accept durable write operations.\nWith P+S+A, you can’t afford to do this.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thank you @MaBeuLux88 for the detailed response.In summary, if there is no “write” concern of majority, then there shouldn’t be any worry overall.In our case, we have a 3 node High Availability Managing cluster, that we wanted to re-purpose with 27017 port allotted for arbiters. That way we will be able to handle the HA of Mongo as well, with the existing machines without deploying new nodes ONLY for arbiter.Also I got your feedback on the P+S+S again based on the write concern. As I mentioned we were using Single node anyways for the existing system where we were not using the write concern anyways.Also definitely we won’t run into the situation of all Arbiter nodes being down at the same time.In summary the proposal for 3 arbiter nodes is for consistency on the High Availability nodes (from a deployment stand-point) and nothing else.Please let me know if we should be considering any other factors for our case, other than write concern.Thanks a lot.Regds,\nSivakumar",
"username": "siva_kumar_Rallabhan"
},
{
"code": "majority",
"text": "Hi @siva_kumar_Rallabhan!An arbiter is only required when you’re running an even number of MongoDB nodes in a replica set, to round up the number of instances to an odd number, which is required for quorum. Running more than one arbiter doesn’t add any benefit. It doesn’t increase consistency, but it does add needless complexity to your cluster.Running more than one arbiter means you cannot use majority write concern with your cluster (as @MaBeuLux88 mentioned already). Unless you’re okay with losing data, or with downtime, you should use majority write concern, and you should have at least 3 full mongod nodes in your cluster.",
"username": "Mark_Smith"
},
{
"code": "",
"text": "To sum up:Here is an even more detailed answer from @Stennie_X.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Please listen to Max.Do not deploy more than one arbiter ever.There is no advantage to it, ever. None. I don’t know what consistency with your other system even means, but it’s a non-reason - your other system isn’t a three arbiter system so there is no consistency.Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Thanks a lot Max and Mark for your detailed replies. And Stennie too.!Asya, I agree with your team’s thoughts as you are the experts in this area.To make this thread complete, also wanted to share more details on our thoughts and reasoning around our proposal.The failover and High Availability is maintained in general by a set of servers outside of the actual data nodes. An example could be Sentinel monitoring for Redis, Or Zookeeper monitoring for Solr/Kafka etc.The difference with Mongo is that the data nodes also participate in the voting which is different than the systems mentioned above.We have a high availability setup of 3 nodes, that generally monitors the data nodes as mentioned above.So we thought we would fit in the arbiters as well into the three nodes of High availability cluster, to keep the installation consistent in our High Availability cluster nodes.But we realized late that Mongo doesn’t work this way as the data nodes (primary and secondary) also participate in voting process.As you guys suggested, we will trade-off the write concern to the deploying of additional hardware. We will have the decision made on this by next week and will keep you posted on the decision progress.Many thanks again for all your help and guidance on this. Truly insightful every feedback.Regds,\nSivakumar",
"username": "siva_kumar_Rallabhan"
},
{
"code": "",
"text": "Hello Max, Good Day !We have been testing more combinations of fail-over and arbiter combinations of 1 and 3.The main observation that stood out is, writeMajority is not affected by the number of arbiter nodes. i.e. if there are two data nodes, P and S, then the writeMajority is always 2, even if Arbiters are 1 or 3.Only difference as you pointed out, majorityvotecount would be 2 for the replicaset if there is only 1 arbiter, while it would be 3 if there are a total of 5 nodes (including 3 arbiters).Also the statement PSAAA won’t move forward is not true, as the number of data bearing nodes is unchanged from 2 irrespective of 1 arbiter or 3 arbiters.Also the hypothetical situation you mentioned that all 3 arbiter nodes might be down, would still be possible even for the single arbiter as well. So there’s no difference.Sorry for the persistence on this. But we need to vet this down to ensure there are no shortcomings in our testing.Thanks again for your help.",
"username": "siva_kumar_Rallabhan"
},
{
"code": "",
"text": "You are mistaken about arbiters being down.If you have 1 arbiter and it’s down you still have majority (two data nodes). If you have three arbiters and they are down you no longer have majority (as majority is 3).",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Hello @Asya_Kamsky - Thanks for the clarification. I see what you are saying. For testing the same, performed the following combinations:With PSAAA -> turned off 3 Arbiters;\nResult: P became S and S remained S (Impact to RS overall)With PSA -> turned off 1 Arbiter;\nResult: P remained P and S remained S (no impact)writeMajorityCount_MajorityVoteCount1000×669 89.7 KBWhile this is definitely a fetching point for a single arbiter based, this is a hypothetical situation that all three arbiter nodes would be down at the same time.On the contrary, if the primary is also down, obviously the three node PSA cannot recover and elect a new primary with single Secondary alone alive in the ReplicaSet. However if a single arbiter is down and primary is also down, the remaining two arbiters along with Secondary, can make Secondary as primary, and hence enabling the better stability and availability for the ReplicaSet.In both the cases, I have clarified with the images above, on writeMajorityCount being unchanged from 2 (with PSAAA and PSA). While majorityVoteCount is changed from 3 to 2 when it is PSAAA and PSA.Thanks again and hope the above explains the better reliability with PSAAA.Regds,\nSivakumar",
"username": "siva_kumar_Rallabhan"
},
{
"code": "",
"text": "Adding this doc in the mix to justify the writeMajorityCount you noticed: https://docs.mongodb.com/manual/reference/write-concern/#calculating-majority-countIn the case of the PSAAA you explained above, you actually do NOT want your S to become P because you are actually creating a potential massive rollback.Let’s compare side by side PSA and PSAAA.Of course my little story is a little “worst case scenario”. This kind of troubles could also happen with network partitioning.\nBut MongoDB was designed to sustain the worst case scenarios already. One arbiter is already NOT recommended (cf the doc above).\nimage1203×249 34 KB\nHaving one arbiter already increases your “chances” to be rollbacked. As I demonstrated above, with similar issues and more arbiters, you actually increased your chances of being rollbacked.Writing with w=“majority” is recommended if you want to avoid a potential rollback of your data but this is only achievable if you use 3 data bearing servers because using a PSA in this scenario would not make your replica set highly available.I really hope this helps !\nIf you want to have a more robust and forward looking replica set, I would definitely recommend a PSS rather than a PSAAA which may even cost more $ in the end as you can’t regroup these machines of course.\nWith a PSS, you would have more room for maintenance and backup operations. It would be easier to take down a node and keep accepting w=“majority” writes in the meantime so rollback operations would be definitely out of the table in every possible scenarios.Cheers,\nMax.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks again and hope the above explains the better reliability with PSAAA.It does not. PSAAA is not more reliable, it’s borderline unsupported. We do not expect anyone to be using multiple arbiters except as an extremely transient situation (I can just barely see maybe needing something like that when doing some major moving of nodes in a cluster but that would be a one time thing).While it’s unlikely for any three nodes to be down at once in a cluster, what’s not unlikely are network partitions. So if you have PSA no matter how they get partitioned (two on one side, one on the other side) the side that has two nodes will be able to elect a primary. If you have PSAAA then any partition that leaves P and S partitioned from the arbiters will not be able to elect a primary on either side of the partition.Adding more arbiters gains you nothing but it creates new complex scenarios that may fail in unpredictable ways.Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "@siva_kumar_RallabhanYou have multiple mongodb employees, some of them extremely experienced, explaining why you should not do this. You should take this advise.Adding a non-employee voice to the mix:\nYour first second and third options for a replicaSet should be 3 data nodes. Only do a PSA as a last resort.If I came across the monstrosity you propose the first order of business would be tearing it down to a PSA and prioritising making the remaining arbiter a data node.",
"username": "chris"
},
{
"code": "",
"text": "Thanks all for your inputs.To summarize the feedback (preferred options are with regards to reliability):Pros: No impact to RS with any node failures and no concerns to write acknowledgements overall, and can sustain network partitioning without any issuesCons: 3x h/w costs (cpu, memory) as compared to single nodePros: No impact to overall RS with any node failuresCons: Inconsistent topology with two data bearing nodes and one voting node (arbiter), and there are impacts of network partitioningPros: Low overall cost involved (of course this is our specific case, and shouldn’t be generalized)Cons: Less reliable for network partitioning cases.Also specific to our case, From #3 to #1 the reliability increases and so is the cost. We will have this reviewed along with our team and let you know if there are follow-up questions.Regds,\nSivakumar",
"username": "siva_kumar_Rallabhan"
},
{
"code": "",
"text": "Hi @siva_kumar_Rallabhan,Your summary is slightly incorrect.From #3 to #1 the reliability increases and so is the cost.Actually #2 is both cheaper than #3, and more reliable. There is never any reason to use more than one arbiter.",
"username": "Mark_Smith"
},
{
"code": "",
"text": "Hi Mark, thanks for the reply.Pros: Low overall cost involved (of course this is our specific case, and shouldn’t be generalized)As I mentioned earlier, this is specific to our case as we planned to repurpose the hardware, hence the reason for low cost (also inherently bringing the advantage of consistency) without deploying a new node. If you read the above thread, you will find those details.Hope this clarifies.",
"username": "siva_kumar_Rallabhan"
},
{
"code": "",
"text": "Yup - I saw.Just use one of those machines as an arbiter and use the others for something else!",
"username": "Mark_Smith"
},
{
"code": "",
"text": "Hi Mark, the challenge with that for us is we use automation for some of the vm deployments and we wanted to keep those set of high availability ensuring vms as consistent, which will simplify the deployment process.As I said, we are reviewing these options and would follow-up if there are any other questions.Thanks again for pitching in and providing your thoughts.",
"username": "siva_kumar_Rallabhan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Replica Set with Master, Secondary, 3 Arbiters | 2020-09-02T09:08:39.657Z | Replica Set with Master, Secondary, 3 Arbiters | 5,829 |
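The two counts discussed in this thread can be read directly from rs.status() on MongoDB 4.2.1 and later (as the screenshot above shows); a small sketch of what to look for:

  var s = rs.status();
  printjson({
    majorityVoteCount: s.majorityVoteCount,     // votes needed to elect a primary
    writeMajorityCount: s.writeMajorityCount    // members that must acknowledge a { w: "majority" } write
  });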
null | [
"aggregation"
] | [
{
"code": "",
"text": "How to do equi join in MongoDB, $lookup is giving left join result",
"username": "vinodkumar_Mallikarj"
},
{
"code": "",
"text": "Hi @vinodkumar_Mallikarj,$lookup is the equivalent of the SQL join which is a left outer join.\nCan you please elaborate your question so we can find a solution?Can you share maybe a few documents (just the important fields) from your 2 collections and the output documents you are expecting so we can try to write this aggregation pipeline for you or at least give you some directions?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Check this out,looks like solution,if you want more complicated join condition,see $lookup with\npipeline maybe.",
"username": "Takis"
},
{
"code": "db.Issues.aggregate(\n [\n {\n $match: {\n \"issue_type\": \"CRASH\",\n \"zipfile.zip_file_dt_tm\": {$gte: \"2015-01-01\", $lte : \"2020-01-01\"} ,\n \"client.client_mnemonic\": { $in: [\"AL\", \"CEC\"] },\n \"client.domain_name\": { $in: [\"P3\", \"P94\"] },\n \"fin.program_name\" : \"wmse.exe\"// Application column\n }\n },\n {\n $lookup: {\n from: \"TEAM\",\n localField: \"Issues.fin.program_name\",\n foreignField: \"TEAM.COMPONENT\",\n as: \"teams\"\n }\n },\n {\n $match: {\n \"teams\":{$ne:[]},\n \"teams.ORGANIZATION\": \"Recle\",\n \"teams.DEPARTMENT\": \"Dev\",\n \"teams.TEAM_NAME\": \"Sing\",\n \"teams.BUSINESS_UNIT\": \"SDev\"\n }\n },\n {\n $group: {\n _id: \"$zipfile.zip_file_dt_tm\",\n Fincnt: { $addToSet: \"$fin.fingerprint_id\" },\n\t\t\t\tclientcnt: { $addToSet: \"$client.client_mnemonic\" }\n }\n },\n {\n $project: {\n F_Count: { $size: \"$Fincnt\" },\n C_cnt: { $size: \"$clientcnt\" }\n }\n },\n {\n $sort: {_id :1}\n }\n \n ]\n)\n\"teams\":{$ne:[]}",
"text": "I have used the condition \"teams\":{$ne:[]}, still the query is returning the records.Actually the query should not return any records as the teams collection does not contain any records with the COMPONENT column value “wmse.exe”",
"username": "vinodkumar_Mallikarj"
}
] | Equi join in MongoDB | 2020-09-22T15:58:47.430Z | Equi join in MongoDB | 3,027 |
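A hedged sketch of the usual inner-join emulation. One likely reason the pipeline above still returns documents is that localField/foreignField are written with collection-name prefixes ("Issues.fin.program_name", "TEAM.COMPONENT"); $lookup resolves these as paths inside each document, and when the path is missing on both sides it matches on null, joining every foreign document. The field names below follow this thread but are otherwise assumptions:

  db.Issues.aggregate([
    { $lookup: {
        from: "TEAM",
        localField: "fin.program_name",   // path inside an Issues document
        foreignField: "COMPONENT",        // path inside a TEAM document
        as: "teams"
    } },
    // keep only documents that actually joined (inner-join behaviour)
    { $match: { "teams.0": { $exists: true } } }
  ])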
null | [
"aggregation",
"performance"
] | [
{
"code": "SELECT t.* FROM ModelEvaluation t\n WHERE t.id = (SELECT t1.id FROM ModelEvaluation t1\n WHERE t1.customer_id = t.customer_id \n ORDER BY t1.modified DESC LIMIT 1)\ndb.ModelEvaluation.aggregate([ \n { \n $lookup: { \n from: \"ModelEvaluation\", \n let: {f_customer_id: \"$customer_id\"}, \n pipeline: [ \n { \n /* This looks complex but it has to be this way. \n * \n * https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#lookup-join-let */ \n $match: { \n $expr: { \n $eq: [\"$customer_id\", \"$$f_customer_id\"], \n } \n } \n }, \n {$sort: {modified: -1}}, \n {$limit: 1}, \n {$project: {_id: true}} \n ], \n as: \"c1\", \n } \n }, \n {$out: \"foo\"}, \n]) \n",
"text": "Hi guys,I am fairly new to MongoDB so there might be obvious mistakes here. I have attemted an aggregation pipeline which takes several seconds for a mere collection size of ~2000 documents. I have isolated the bottleneck: the initial $lookup step. Hints are welcome, you are also welcome to send me your contact details if you do consulting around these kind of issues on a regular basis.I am trying to do something similar to the SQL statementMy Mongo aggregation (the part that takes +90% of the time)",
"username": "Henrik_Holst"
},
{
"code": "",
"text": "Hi @Henrik_Holst - welcome to the community!By their nature $lookup operations are slower than queries that don’t need to join data from more than one collection.If you’ll be needing to join the data together frequently, you may want to consider restructuring your data. The rule of thumb when modeling data in MongoDB is data that is accessed together should be stored together.If you’re not able to restructure the data, can you post some sample documents from each collection as well as the output you’re trying to generate, so we can see if there is a way to optimize your $lookup?",
"username": "Lauren_Schaefer"
},
{
"code": "db.ModelEvaluation.aggregate([\n {\n $sort: {\n modified: -1\n }\n },\n {\n $group: {\n _id: \"$customer_id\",\n values: {\n $first: \"$values\"\n },\n dataset_transform: {\n $first: \"$dataset_transform\"\n }\n }\n },\n {$unwind: \"$values\"},\n {\n $match: {\n \"values.target\": \"accoding\"\n }\n },\n {$unwind: \"$dataset_transform\"},\n { \n $match: {\n \"dataset_transform.tail\": {$exists: true}\n }\n },\n {\n $project: {\n customer: \"$_id\",\n accuracy: \"$values.value\",\n size: \"$dataset_transform.tail\",\n }\n },\n {$out: \"foo\"},\n])\n",
"text": "I agree that the $lookup seems to be designed to allow fetching data from other collections and this is not what I am doing. We have seen various ideas how to optimize this here but in the end of the day I found another way to solve this using $group and $first.The complete pipeline also became 50% as long:",
"username": "Henrik_Holst"
},
{
"code": "{\n \"aggregate\": \"modelevaluation\",\n \"pipeline\": [\n {\n \"$group\": {\n \"_id\": \"$customerid\",\n \"top1Customer\": {\n \"$max\": {\n \"modified\": \"$modified\", //put first on document the sorting creteria,here the modified value\n \"customerid\": \"$customerid\", //or \"$values\" on your example\n \"modelid\": \"$modelid\" //or \"$dataset_transform\" on your example\n }\n }\n }\n },\n {\n \"$addFields\": {\n \"customerid\": \"$_id\"\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n }\n ],\n \"maxTimeMS\": 0,\n \"cursor\": {}\n}\n\n",
"text": "HelloIf you want only the top1 for each customer based on $modified, $max is simple way to go.You need this i believe,mongoDB allows document comparisonEach time the $max will keep the document with the top value on $modified.Its fast and don’t need index,that would need memory.",
"username": "Takis"
}
] | Aggregation $lookup optimization to match _id for top-1 selection | 2020-09-20T23:31:24.855Z | Aggregation $lookup optimization to match _id for top-1 selection | 6,122 |
null | [
"database-tools"
] | [
{
"code": "C:\\MMSAutomation\\versions\\mongodb-windows-x86_64-4.4.1\\bin>mongodump --port 27999 --host apple --db local --collection oplog.rs --out \"C:\\Program Files\\mydump\" --query \"{\"ts\":{\"$gt\":{\"$timestamp\":{\"t\":1600853643,\"i\":1}}}}\"\n2020-09-23T15:48:43.465+0530 Failed: error parsing query as Extended JSON: invalid JSON input\n",
"text": "Hi,I was try mongodump command on windows, with MongoDB 4.4 server, but it fails withe below error.\nSame command goes fine on ubuntu.\nIs any change in timestamp format required?Thanks,\nAkshaya Srinivasan",
"username": "Akshaya_Srinivasan"
},
{
"code": "'--query '{\"ts\":{\"$gt\":{\"$timestamp\":{\"t\":1600853643,\"i\":1}}}}'\n",
"text": "You have quoting issues in your query, I highly doubt this would work on the ubuntu command line as is either.Make your outside quotes single ' .",
"username": "chris"
},
{
"code": "",
"text": "Thanks @chrisOn windows:C:\\MMSAutomation\\versions\\mongodb-windows-x86_64-4.4.1\\bin>“C:\\MMSAutomation\\versions\\mongodb-windows-x86_64-4.4.1\\bin\\mongodump” --port 27999 --host apple --db local --collection oplog.rs --out “C:\\Program Files\\2284” --query ‘{“ts”:{\"$gt\":{\"$timestamp\":{“t”:1600853643,“i”:1}}}}’2020-09-23T16:36:53.714+0530 Failed: error parsing query as Extended JSON: invalid JSON inputOn ubuntu20:mongodump --port 27017 --host aksubuntu20 --db local --collection oplog.rs --out “/1199” --query ‘{“ts”:{\"$gt\":{\"$timestamp\": {“t”: 1600851861, “i”: 1}}}}’2020-09-23T16:35:55.803+0530 writing local.oplog.rs to /1199/local/oplog.rs.bson\n2020-09-23T16:35:55.809+0530 done dumping local.oplog.rs (1700 documents)",
"username": "Akshaya_Srinivasan"
},
{
"code": "--queryFile=",
"text": "Still running into some quoting issue on the windows command line. I don’t have windows to hand to assist further.You can put your query in a file and use the the --queryFile= option if you want to avoid this and get your dump running.",
"username": "chris"
},
{
"code": "",
"text": "Thanks @chrisBut using a file is not preferable.",
"username": "Akshaya_Srinivasan"
},
{
"code": "",
"text": "The query has to be formatted on Windows\nPlease check this linkMongodump –query not able to filter using timestamp",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodump command is failing in MongoDB4.4 with JSON error | 2020-09-23T10:23:49.706Z | Mongodump command is failing in MongoDB4.4 with JSON error | 11,402 |
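One commonly suggested form for cmd.exe is to keep double quotes on the outside and escape the inner ones with backslashes; this is a hedged sketch (the output path is a hypothetical one without spaces), and --queryFile remains the most portable alternative:

  mongodump --port 27999 --host apple --db local --collection oplog.rs ^
    --out "C:\dump" ^
    --query "{\"ts\":{\"$gt\":{\"$timestamp\":{\"t\":1600853643,\"i\":1}}}}"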
null | [
"aggregation"
] | [
{
"code": "db.getCollection(\"Issues\").aggregate(\n [\n {\n \"$project\": {\n \"zipdate\" : {$month: \"$zipfile.zip_file_dt_tm\"},\n }\n },\n {\n $group: { \n _id: \"$zipdate\" ,\n totalPrice : { $sum: 1 }\n \n }\n }\n ]\n);\n{\n\t\"message\" : \"can't convert from BSON type string to Date\",\n\t\"ok\" : 0,\n\t\"code\" : 16006,\n\t\"codeName\" : \"Location16006\",\n\t\"name\" : \"MongoError\"\n}\n{\n\t\"_id\" : ObjectId(\"5f68a350621f37798e5523f4\"),\n\t\"issue_type\" : \"CH\",\n\t\"zipfile\" : [\n\t\t{\n\t\t\t\"zip_file_dt_tm_utc\" : \"2019-08-08 12:08:52\",\n\t\t\t\"zip_file_dt_tm\" : \"2019-08-08\"\n\t\t}\n\t]\n}\n",
"text": "If I run the above query. it is giving me the below error.Here is collection details.",
"username": "vinodkumar_Mallikarj"
},
{
"code": "zipfile$month$unwind$dateFromString",
"text": "The error is because zipfile is an array. The data expression used with $month must be a valid expression that resolves to a Date, a Timestamp, or an ObjectID. See https://docs.mongodb.com/manual/reference/operator/aggregation/month/. You’ll have to $unwind the array and convert to date using $dateFromString.",
"username": "ken.chen"
}
] | Group by month is giving error | 2020-09-23T10:39:37.106Z | Group by month is giving error | 3,895 |
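A sketch of the corrected pipeline for the thread above, following the answer: unwind the zipfile array and parse the string with $dateFromString before applying $month (this assumes zip_file_dt_tm is always a %Y-%m-%d string, as in the sample document):

```js
db.getCollection("Issues").aggregate([
  { $unwind: "$zipfile" },
  {
    $project: {
      zipdate: {
        $month: {
          $dateFromString: {
            dateString: "$zipfile.zip_file_dt_tm",
            format: "%Y-%m-%d"
          }
        }
      }
    }
  },
  { $group: { _id: "$zipdate", totalPrice: { $sum: 1 } } }
])
```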
[
"mongodb-shell"
] | [
{
"code": "",
"text": "Hello i have some questioning about mongodb shell connectWhen I was an opened shell and I was connected to\nmongo “mongodb://cluster0-shard-00-00-jxeqq.mongodb.net:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?replicaSet=Cluster0-shard-0” --authenticationDatabase admin --ssl --username m001-student --password m001-mongodb-basicsthis above run command and still getting an error please say me solution\nimage1358×130 35.3 KB\n",
"username": "Raj_Parmar"
},
{
"code": "mongomongo --nodb",
"text": "Hello @Raj_Parmar,You are already in mongo shell environment (you have used the mongo --nodb command). To connect to a MongoDB instance follow the instructions from Opening New Connections from within the shell you are in.",
"username": "Prasad_Saya"
}
] | Error connecting to MongoDB: unexpected token string literal | 2020-09-23T05:38:52.828Z | Error connecting to MongoDB: unexpected token string literal | 5,101 |
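For the thread above, the error comes from pasting an operating-system command inside a mongo shell that was started with --nodb: the shell tries to parse it as JavaScript and trips over the string literal. A minimal sketch of the fix:

```js
// 1. leave the shell that was started with `mongo --nodb`
quit()
// 2. then run the full `mongo "mongodb://..." --authenticationDatabase admin --ssl
//    --username m001-student --password m001-mongodb-basics` command at the
//    operating-system prompt, not inside the shell
```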
|
null | [
"python"
] | [
{
"code": "",
"text": "Is it possible to use the same connection to obtain next batch (GetMore operation) in PyMongo? Here get_socket is called and new connection is returned to obtain next batch. It causes CursorNotFound error because we use balancer over multiple mongos instances. The problem is described here: mongodb - Load Balancing Between Mongos - Stack Overflow",
"username": "Vladimir_Vladimirov"
},
{
"code": "mongodmongosmongos",
"text": "Hi @Vladimir_Vladimirov welcome to the community.Most official drivers (including pymongo) use a connection pool to manage their connection to the database (either mongod or mongos), and they are transparently managing the connections with getmores or any other network operations. Thus I’m not aware of any user-facing method to provide a low level instruction to pymongo.As mentioned in one of the StackOverflow answers, pymongo was not designed to work with a custom load balancer in front of a number of mongos processes. I concur with the answerer there that what I think happened is that you executed a query from one mongos, then called getmore from another mongos due to the load balancer.So the short answer is, no you cannot put a custom load balancer between pymongo and mongos, as they were designed to connect directly to each other.Instead, pymongo provides a feature to connect to multiple mongos and do basic load balancing between them. See mongos Load Balancing for details and examples.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thank you for answer",
"username": "Vladimir_Vladimirov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Use the same connection for GetMore operation in PyMongo | 2020-09-22T11:17:43.423Z | Use the same connection for GetMore operation in PyMongo | 2,002 |
null | [] | [
{
"code": "",
"text": "Hi,I’m unable to find the document I’ve created in a synced client realm in my connected MongoDB cluster. I’m able to query for the document I’ve created in the synced client realm, but when I check Atlas, the document isn’t in the collection.I was under the impression that any mutations conducted in the synced client realm would persist to the MongoDB data cluster.Does anyone have any insight into what’s going on?Thank you!",
"username": "Jerry_Wang"
},
{
"code": "",
"text": "@Jerry_Wang Can you please share logs on the client and server side that correspond to the time where you made the mutation?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I am also having this same issue using the node client.Creations and modifications on anything on the client side are not reflected in the collections when I view them in compass.Kinda terrifying tbh",
"username": "Benjamin_Storrier"
},
{
"code": "",
"text": "Hey @Benjamin_Storrier we’ve spoken on email - I am looking into your issue right now - I will get back to via email and I am also happy to update your forum post with the findings. Yours is actually pretty uniqueI am interested in @Jerry_Wang logs because generally there are a few simple remedies to documents not syncing - the most common culprit is a schema mismatch between the client and server schema. This can be remedied by matching the schemas and often requires a wipe of the client side as well.Jerry - if you’d rather share logs privately you can email me at [email protected]",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hi Ian,Thanks for the support! I figured out that the main culprit was a permissions error, so I’ve given every user read and write permissions to everything (currently in development mode) for now, and everything’s syncing correctly so far. I’ll figure out how to correctly configure permissions later down the line and let you know if anything comes up.Once again, I really appreciate your work and support!",
"username": "Jerry_Wang"
},
{
"code": "",
"text": "Hi @Ian_WardYes! Thanks for being so quick to respond - its really great.I added to this post because I didn’t think the other item we were discussing is related to this one.Do you think this issue and my other issue might be related somehow?B",
"username": "Benjamin_Storrier"
},
{
"code": "",
"text": "Hi @Jerry_WangWhen you say that your main culprit was a permission issue - did you see logs / messages that guided you to that conclusion?@Ian_Ward - If this is the case - I would be a little confused about write permissions.\nIf a user doesn’t have permission to write to a realm, then I would have assumed that they would not be able to write to the local version of it.\nIf not - then the suggestion here is that a user can write to a local version of a realm they don’t have permissions to without triggering an error with the only consequence being that the data does not get synced back to the cloud.This would be most unexpected.",
"username": "Benjamin_Storrier"
},
{
"code": "",
"text": "@Benjamin_Storrier Your issue does not have to do with permissions - its internal - and has to do with merging changes - I am working with engineering right now on this and will provide you with more information when I we complete our investigation.",
"username": "Ian_Ward"
}
] | React Native Realm not syncing with MongoDB cluster | 2020-09-21T04:31:58.574Z | React Native Realm not syncing with MongoDB cluster | 2,216 |
null | [] | [
{
"code": "",
"text": "I want to achieve something like that:Some important points:PS: I am using the term Realm Cloud instance because I used to use Realm Cloud before MongoDB Realm. This is still the right term?",
"username": "Robson_Barreto"
},
{
"code": "",
"text": "@Robson_Barreto Is this a hypothetical or have you already implemented this? Its definitely possible to replicate data between the legacy Realm Cloud and the MongoDB Realm cloud (which goes to Atlas). This would need some sort of server-side app to react to changes on both sides and replicate data back and forth - in general though, I would encourage you to move toward a fully MongoDB Realm sync architecture and move off legacy Realm Cloud.",
"username": "Ian_Ward"
}
] | MongoDB Realm allow this kind of architecture? | 2020-09-22T22:45:40.584Z | MongoDB Realm allow this kind of architecture? | 1,346 |
null | [] | [
{
"code": "Realm.asyncOpen(configuration: config, callbackQueue: DispatchQueue.main) { realm, error in\n if error == nil {\n// Return Realm Instance\n } else {\n // capture error and show in console. Here we are getting above error.\n }\n }\n",
"text": "Hi Team,I’m using Realm framework for one of iOS application and facing below issue while opening Realm in async open mode. When we try to open ROS using Realm studio we are able to open realm files seamlessly without any issues.This issue is happening only when we are trying to open Realm instance using Async open from iOS side , we were getting below error.Error Domain=io.realm.unknown Code=89 “Operation canceled” UserInfo={Category=realm.basic_system, NSLocalizedDescription=Operation canceled, Error Code=89}we were using below Realm SDK API to open realm.I would like to understand what scenarios we will receive these \" Operation Canceled \" error from Realm Async open API. What could be the reason for getting these error. Is it issue with ROS connectivity?Below is the Environment /SDK details.Realm-cocoa : v4.3.2\nROS Server : 3.28.5\nXcode :11.3.1\niOS :13.3Appreciate your inputs to identify root cause .Thanks,\nSeshu",
"username": "Udatha_VenkataSeshai"
},
{
"code": "",
"text": "@Udatha_VenkataSeshai Does the user have permission to open the realm? Also - asyncOpen is an asynchronous operation that goes over the network - are you able to call this on the main thread? Try removing that.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Is it possible that ROS server side compaction and/or vacuuming could cause this error?",
"username": "Benjamin_Snider"
},
{
"code": "",
"text": "Its possible - do you have more logs on the client and server side?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "We’ve opened an internal support ticket #00689754 with relevant client logs. @Udatha_VenkataSeshai can you also attach relevant server logs in the same timeframe as the client logs (i.e. with the same timestamps)?",
"username": "Benjamin_Snider"
},
{
"code": "",
"text": "Sure Ben - feel free to tag me on the ticket and support will reach out to me and I’ll have a look.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "",
"username": "wan"
}
] | Realm Async open error "Operation canceled" Realm Error Domain=io.realm.unknown Code=89 | 2020-09-22T11:03:23.056Z | Realm Async open error “Operation canceled” Realm Error Domain=io.realm.unknown Code=89 | 2,939 |
null | [] | [
{
"code": "produk:PRIMARY> show dbs\ngraylog 0.028GB\ngraylog,192 0.002GB\nlocal 0.313GB\nreproduk:SECONDARY> show dbs\n2020-09-21T14:27:51.126+0200 E QUERY [thread1] Error: listDatabases failed:{ \"ok\" : 0, \"errmsg\" : \"not master and slaveOk=false\", \"code\" : 13435 } :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nMongo.prototype.getDBs@src/mongo/shell/mongo.js:62:1\nshellHelper.show@src/mongo/shell/utils.js:769:19\nshellHelper@src/mongo/shell/utils.js:659:15\n@(shellhelp2):1:1\nreproduk:SECONDARY> rs.status()\n{\n \"set\" : \"reproduk\",\n \"date\" : ISODate(\"2020-09-21T12:37:04.748Z\"),\n \"myState\" : 2,\n \"term\" : NumberLong(65),\n \"syncingTo\" : \"192.158.20.100:27017\",\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"192.158.20.100:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 2625358,\n \"optime\" : {\n \"ts\" : Timestamp(1600691823, 20),\n \"t\" : NumberLong(65)\n },\n \"optimeDate\" : ISODate(\"2020-09-21T12:37:03Z\"),\n \"lastHeartbeat\" : ISODate(\"2020-09-21T12:37:03.658Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2020-09-21T12:37:03.128Z\"),\n \"pingMs\" : NumberLong(0),\n ...\n",
"text": "Hello,I am new to this forum.\nI am more in an admin role of graylog app on 3 nodes, that is using mongodb replica.\nmongodb replica was running fine for more than a year. Then we did some smaller upgrade of graylog sw, probably somethng else was done with mongo-not sure, but now I have a problem and I suspect it is mongodb related.All users and app config is stored in mongodb. If I run graylog app on NODE1 it starts,\nbut it shows no users, no config, all is missing.If I run graylog on the other node, NODE3, it still show fine users and config.I suspect somethig is wrong on the mongodb lebvel, but I cann not reall debug it properly.some details: here is how nodes connect to replica:mongodb_uri = mongodb://192.158.20.100/graylog,192.158.20.101/graylog,192.158.20.102/graylog?replicaSet=reproduknow if I log to NODE1(192.158.20.100) and run mongo, it shows me:I never before saw this strange db “graylog,192”\n192 is actually the first part of IPIf I run the same command on the only node still runing ok, node3, I get ERROR:But if I run commands like rs.conf() or rs.status() I get practically the same working result on both node1 and node 3:Any pointers how could I continue my debugging ?Maybe deleting this collection graylog,192 ?",
"username": "Lec_Kozol"
},
{
"code": "mongodb_uri = mongodb://192.158.20.100/graylog,192.158.20.101/graylog,192.158.20.102/graylog?replicaSet=reproduk\ngraylog,192mongodb_uri = mongodb://192.158.20.100:27017,192.158.20.101:27017,192.158.20.102:27017/graylog?replicaSet=reproduk&retryWrites=true&w=majority\nshow dbsrs.slaveOk()replicaSet=reproduk",
"text": "Hi @Lec_Kozol and welcome in the MongoDB Community !I see a few problems in here.First:This URI is not a valid URI and indeed, this is probably that which is causing this weird graylog,192 database in your replica set. You can read more about URIs in the MongoDB documentation.From what I see here, your URI should look like this:The 2 last options are not mandatory but they are good practises. I recommend you have a look to our documentation about retryable writes and write concerns.Second:The error you got when trying to show dbs on a secondary node is normal. By default Mongo Shell blocks reads from a secondary node as these reads are considered “eventually consistent” with the primary (because of the asynchronous replication process).In order to read data on a secondary node, you need to tell the Mongo Shell it’s OK to read eventually consistent data using the command: rs.slaveOk() but if your Primary and Secondary are replicating correctly to each other, you should not see any difference here - modulo the replication lag.Third:Because you didn’t use the correct URI, I don’t know what the MongoDB driver actually understood when it connected to MongoDB. I’m not sure the information replicaSet=reproduk actually went through so I don’t know how the replica set behaved at this point.Can you please share the entire rs.status() command? Are the 3 nodes configured correctly? Did you confirme that the replica process is working properly and our main database and collections are present in the 3 nodes correctly?I hope this helps !Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "reproduk:PRIMARY> use graylog\nswitched to db graylog\nreproduk:PRIMARY> rs.status()\n{\n \"set\" : \"reproduk\",\n \"date\" : ISODate(\"2020-09-22T14:12:24.499Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(65),\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"192.158.20.100:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 2776362,\n \"optime\" : {\n \"ts\" : Timestamp(1600783944, 2),\n \"t\" : NumberLong(65)\n },\n \"optimeDate\" : ISODate(\"2020-09-22T14:12:24Z\"),\n \"electionTime\" : Timestamp(1598007616, 3),\n \"electionDate\" : ISODate(\"2020-08-21T11:00:16Z\"),\n \"configVersion\" : 70477,\n \"self\" : true\n },\n {\n \"_id\" : 1,\n \"name\" : \"192.158.20.101:27017\",\n \"health\" : 1,\n \"state\" : 3,\n \"stateStr\" : \"RECOVERING\",\n \"uptime\" : 2338682,\n \"optime\" : {\n \"ts\" : Timestamp(1573169381, 10),\n \"t\" : NumberLong(36)\n },\n \"optimeDate\" : ISODate(\"2019-11-07T23:29:41Z\"),\n \"lastHeartbeat\" : ISODate(\"2020-09-22T14:12:23.922Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2020-09-22T14:12:21.911Z\"),\n \"pingMs\" : NumberLong(0),\n \"configVersion\" : 70477\n },\n {\n \"_id\" : 2,\n \"name\" : \"192.158.20.102:27017\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 2776358,\n \"optime\" : {\n \"ts\" : Timestamp(1600783944, 1),\n \"t\" : NumberLong(65)\n },\n \"optimeDate\" : ISODate(\"2020-09-22T14:12:24Z\"),\n \"lastHeartbeat\" : ISODate(\"2020-09-22T14:12:24.091Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2020-09-22T14:12:23.009Z\"),\n \"pingMs\" : NumberLong(0),\n \"syncingTo\" : \"192.158.20.100:27017\",\n \"configVersion\" : 70477\n }\n ],\n \"ok\" : 1\n}\n\n\n",
"text": "But just in case I will post here also rs.status() output.\nI know that node1 is in the state of RECOVERING. I will need to fix it sometime. Once I treid to fix it by deleting its mongodata dir and starting mongod again, but it didnt work.",
"username": "Lec_Kozol"
},
{
"code": "",
"text": "Hello Maxime,\nWow, that was quick and efficient. On the non working node1, I just typed the connection mongo uri that you suggested and bang… right after restart it joined the cluster, also the cluster side sees it,al config is there, it looks perfect!I forgot to mention, we are running a bit older version of mongodb, 3.2.\nAnd funny enought, the old, “wrong” connection string was doing just fine for few years :-).Thank you!\nGrateful greetings from Slovenija.",
"username": "Lec_Kozol"
},
{
"code": "",
"text": "MongoDB 3.2 is not supported anymore. Same for 3.4 .Whichever MongoDB product you’re using, find the support policy quickly and easily.I strongly suggest to update to 4.4 gradually, version by version and by following the production notes for upgrades for each version.Maybe doing a dump and restoring in 4.4 would be easier at this point.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to debug problem with mongodb replica | 2020-09-22T11:03:17.280Z | How to debug problem with mongodb replica | 3,075 |
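A small illustration of the secondary-read behaviour described in the answer above (rs.slaveOk() is the spelling used by shells of that era; newer shells call it rs.secondaryOk()):

```js
// on the secondary's mongo shell
rs.slaveOk()   // allow eventually consistent reads on this connection
show dbs       // now lists databases instead of failing with "not master and slaveOk=false"
```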
null | [] | [
{
"code": "",
"text": "we have a 2 node cluster\n16GB Ram on each node\nwe see that mongod is using swap space\nbeing new in the world of mongoDB\nany help or link is appreciated to find the reason of using swap space.\nthanks in advance",
"username": "Salim_Mohammad"
},
{
"code": "",
"text": "any help or link is appreciated to find the reason of using swap space.Swap is addressed in the production notes.we have a 2 node clusterWhat you have is in accident waiting to happen . You need 3 nodes to create a functional replicaSet.being new in the world of mongoDBGo and sign up for some course at https://university.mongodb.comM103 covers basic cluster administration.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.The documentation covers this nicely too:",
"username": "chris"
}
] | mongod DB using swap space | 2020-09-22T19:24:08.920Z | mongod DB using swap space | 4,658 |
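A few shell checks that can help see how much memory mongod thinks it has to work with before digging into swap behaviour; this is only a sketch, and the production notes linked above remain the reference:

```js
db.serverStatus().mem                                            // resident / virtual memory of this mongod, in MB
db.serverStatus().wiredTiger.cache["maximum bytes configured"]   // configured WiredTiger cache size
db.hostInfo().system.memSizeMB                                   // total RAM visible to the host
```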
null | [
"dot-net"
] | [
{
"code": " static void Main(string[] args)\n {\n Init();\n InsertFindHouse().GetAwaiter().GetResult();\n }\n\n private static async Task InsertFindHouse()\n {\n Guid clientId = new Guid(\"1A76AD2A-4FF6-4291-860E-51C5C34AA890\");\n\n // Insert\n House house = new House\n {\n Id = 1,\n ClientId = new Guid(\"1A76AD2A-4FF6-4291-860E-51C5C34AA890\")\n };\n await Houses.InsertOneAsync(house);\n\n // Find\n FilterDefinition<House> filter = Builders<House>.Filter.Eq(x => x.ClientId, clientId);\n var result = await Houses.Find(filter).ToListAsync();\n }\n\n private static void Init()\n {\n BsonSerializer.RegisterSerializer(new GuidSerializer(GuidRepresentation.Standard));\n\n string connectionString = ConfigurationManager.AppSettings[\"testConnectionString\"];\n MongoUrl mongoUrl = MongoUrl.Create(connectionString);\n MongoClientSettings settings = MongoClientSettings.FromUrl(mongoUrl);\n string dbName = mongoUrl.DatabaseName;\n MongoClient mongoClient = new MongoClient(settings);\n _database = mongoClient.GetDatabase(dbName);\n Houses = _database.GetCollection<House>(\"houses\");\n }\n}\n",
"text": "Hi people,\nIn a test That I had done it looks to me that Different Guid representation is used for insert and for find operations.\nThe following example that I constructed for illustration use the c# driver 2.11.1 and tested on MongoDB 4.2.5 and 4.2.9 servers.class House\n{\npublic int Id { get; set; }\npublic Guid ClientId { get; set; }\n}class Program\n{\nprivate static IMongoDatabase _database;\nprivate static IMongoCollection Houses;The Find function generates the following (taken from the MongoDB log file)\n2020-08-29T14:24:21.763+0300 D2 COMMAND [conn4] run command test.$cmd { find: “houses”, filter: { ClientId: BinData(3, 2AAD761AF64F9142860E51C5C34AA890) }, $db: “test”, lsid: { id: UUID(“4b4f1b41-0ada-4cb6-bcdc-837923e7629b”) } }This query does not return any document, probably, because the format of the Guid is BinData(3, 2AAD761AF64F9142860E51C5C34AA890).When I query it from the mongo shell, it returns the following correct answer:db.houses.find({ “ClientId” : UUID(“1a76ad2a-4ff6-4291-860e-51c5c34aa890”) })\n{ “_id” : 1, “ClientId” : UUID(“1a76ad2a-4ff6-4291-860e-51c5c34aa890”) }It looks like that in the sample application the driver saved the Guid in one format and query it in another format.\nAm I doing something wrong?Thanks,\nItzhak",
"username": "Itzhak_Kagan"
},
{
"code": "",
"text": "Hi,I have the same issue - after upgrade from 2.8.x to 2.11.2 and changing from\nBsonDefaults.GuidRepresentation = GuidRepresentation.Standard\nto\nBsonSerializer.RegisterSerializer(new GuidSerializer(GuidRepresentation.Standard))I get the issue that find (e.g. in a filter ReplaceOneAsync) does not work anymore. I guess it is the same issue as above, the find uses BinData(3,…) instead of BinData(4, …) . Is the only fix to roll back to a previous version of the driver??",
"username": "Deyan_Petrov"
},
{
"code": "#I \"../../../../.nuget/packages/\"\n#r @\"mongodb.driver/2.11.2/lib/netstandard2.0/MongoDB.Driver.dll\"\n#r @\"mongodb.driver.core/2.11.2/lib/netstandard2.0/MongoDB.Driver.Core.dll\"\n#r @\"mongodb.bson/2.11.2/lib/netstandard2.0/MongoDB.Bson.dll\"\n#r @\"mongodb.libmongocrypt/1.0.0/lib/netstandard1.5/MongoDB.Libmongocrypt.dll\"\n#r @\"dnsclient/1.3.2/lib/netstandard2.0/DnsClient.dll\"\n\nopen System\nopen MongoDB.Driver\nopen MongoDB.Driver.Core\nopen MongoDB.Bson\nopen MongoDB.Bson.Serialization.Attributes\nopen MongoDB.Bson.Serialization\nopen MongoDB.Bson.Serialization.Serializers\n\n//BsonDefaults.GuidRepresentation <- GuidRepresentation.Standard // this is deprecated but works!\nBsonSerializer.RegisterSerializer(new GuidSerializer(GuidRepresentation.Standard)) // this does not work!\n\nlet mongo = MongoClient(\"mongodb://localhost:27017\")\n\nlet db = mongo.GetDatabase \"TestDB\"\n\ntype TestEntity = {\n [<BsonId>]\n Id: Guid\n\n Message: string\n\n [<BsonElement(\"lastModOn\")>]\n LastModifiedOn: DateTime\n}\n\nlet col = db.GetCollection<TestEntity>(\"testEntities11\")\n\nlet te1 = { Id = Guid.NewGuid(); Message=\"Bla\"; LastModifiedOn = DateTime.UtcNow }\n\nlet result = col.InsertOneAsync (te1) |> Async.AwaitTask |> Async.RunSynchronously\n\n//let filter = sprintf \"\"\"{ _id:UUID('%s')}\"\"\" (te1.Id.ToString()) // this works only with BsonDefaults.GuidRepresentation <- GuidRepresentation.Standard\nlet filter = sprintf \"\"\"{ _id:BinData(4, '%s')}\"\"\" (BsonBinaryData(te1.Id, GuidRepresentation.Standard).AsByteArray |> System.Convert.ToBase64String ) // this works only with BsonDefaults.GuidRepresentation <- GuidRepresentation.Standard\nlet filterDef: BsonDocumentFilterDefinition<TestEntity> = BsonDocumentFilterDefinition(BsonDocument.Parse(filter))\n\n//let filterDef = ExpressionFilterDefinition<TestEntity>(fun x -> x.Id.Equals(Guid.Parse(\"7da6b8d1-bd8e-4df6-9129-1f9e873b779a\"))) // this does not work\n//let filterDef = Builders<TestEntity>.Filter.Eq(StringFieldDefinition<Guid>(\"Id\"), Guid.Parse(\"7da6b8d1-bd8e-4df6-9129-1f9e873b779a\")) // this does not compile\n\nlet options = ReplaceOptions()\noptions.IsUpsert <- false\n\nlet te2 = { te1 with Message=\"Bla2\"; LastModifiedOn = DateTime.UtcNow }\n\nlet result2 = col.ReplaceOneAsync (filterDef, te2, options) |> Async.AwaitTask |> Async.RunSynchronously\nresult2.ModifiedCount = 0L\n",
"text": "Here a small repro, however in F#:",
"username": "Deyan_Petrov"
},
{
"code": "",
"text": "Hi Deyan,You can add to your code at the very beginning of your program the following:\nBsonDefaults.GuidRepresentationMode = GuidRepresentationMode.V3;The point is that this property is marked as Obsolete.\nSo far, I’d not seen any other comments about this issue.Regards,\nItzhak",
"username": "Itzhak_Kagan"
},
{
"code": "",
"text": "Thanks a lot, Itzhak, I found in the meantime this:https://jira.mongodb.org/projects/CSHARP/issues/CSHARP-3179?filter=allopenissuesandhttps://jira.mongodb.org/browse/CSHARP-3195with the same recommendation … but now I have compilation warning for the next months/years to come … Strange why the driver has been changed in such a breaking (at least for our codebase) way - all of a sudden many integration tests starting failing, only change just being the package reference version …",
"username": "Deyan_Petrov"
}
] | C# driver 2.11.1 allegedly use Different Guid representation for insert and for find | 2020-08-31T13:33:37.198Z | C# driver 2.11.1 allegedly use Different Guid representation for insert and for find | 9,812 |
null | [
"app-services-cli"
] | [
{
"code": "> realm-cli import --app-id=XXXXXX \nfailed to diff app with currently deployed instance: error: exit status 1\n",
"text": "Getting an “exit status 1” error, when running the realm-cli import command, to update the Realm Application.Hard to know what the issue is with this generic error message and no (obvious) way to get more verbose error messaging?Thanks,\nMartin",
"username": "Martin_Kayser"
},
{
"code": "",
"text": "The issue was an uncaught error in an unused function.Leaving this thread open as I believe it would be very helpful to have more detailed error messaging here!",
"username": "Martin_Kayser"
},
{
"code": "",
"text": "Hi there! would you mind describing what the uncaught error in the function was? If possible, the code snippet (or at least the relevant part) would be helpful in reproducing this. Thanks!",
"username": "ChrisCap"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Deployment: failed to diff app with currently deployed instance: error: exit status 1 | 2020-09-22T14:34:23.588Z | Realm Deployment: failed to diff app with currently deployed instance: error: exit status 1 | 2,568 |
null | [
"sharding",
"upgrading"
] | [
{
"code": "",
"text": "Hello,I’m trying to understand how setting FCV on a sharded cluster works. Taking as an example version 4.0.\nAccording to the documentation setting new FCV on a shared cluster is being done through a mongos instance.\nI tried to check the code and found the following workflow:Thank you,\nCristian",
"username": "Cristi_Radan"
},
{
"code": "",
"text": "Hi Cristian, why is important to know this. This is the kind of internal detail that is likely to change in future versions of MongoDB and shouldn’t make a difference to end users?",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "Hi ,Sorry for the late reply, I’ll try to give some context on why we’re interested how this works, especially for 4.0.\nWe have some very large clusters in PROD (over 20 shards / multiple databases and collections / config database having over 1M documents) . We’ve observed impact during setting FCV to 4.0 on such a cluster while the whole cluster “freezes” when cache chunks are refreshed on each shard.\nI think that understanding this process, would help us seek/find a way to do this with minimum impact on the overall cluster. I hope to understand what would be the factors that can cause impact when setting FCV.Thank you,\nCristian",
"username": "Cristi_Radan"
},
{
"code": "",
"text": "Hi Cristian, you should raise a SERVER ticket on jira.mongodb.org. Our core engineering team may have some answers.",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "Thank you Joe.\nI will do that.Best Regards,\nCristian",
"username": "Cristi_Radan"
}
] | How Enabling FCV on Sharded Clusters works | 2020-09-15T12:47:21.177Z | How Enabling FCV on Sharded Clusters works | 1,816 |
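For reference, the user-facing side of the operation discussed in this thread: checking and setting FCV through a mongos (the internal propagation to the shards is the part that may change between versions):

```js
// connected to a mongos, admin database
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })  // read the current FCV
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })            // set the new FCV
```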
null | [
"aggregation"
] | [
{
"code": "$arrayElemAt{ $objectProp: [ <expr that evaluates to an object>, <expr that evaluates to object key> ] }\n{ $objectProp: [ { foo: 11, bar: 22 }, \"foo\" ] }11$objectToArray$filter$first$let",
"text": "In an aggregation stage is it possible to get the value of an object property when the name is in another variable?I’m looking for something like $arrayElemAt but for objects:So for example this { $objectProp: [ { foo: 11, bar: 22 }, \"foo\" ] } would evaluate to 11.I guess it’s possible to do with $objectToArray & $filter & $first & $let (to get the value), but that’s pretty cumbersome (10 lines of code?). Is there an easier way that I’m missing?",
"username": "qtax"
},
{
"code": "\nLets say you are in pipeline and you have this\n\n $$mydoc = one document\n $$mykey = one key\n \nHow to do \n\n get(\"$$mydoc\",\"$$mykey\")\n contains?(\"$$mydoc\",\"$$mykey\")\n put(\"$$mydoc\",\"$$mykey\",\"randomValue\")\n remove(\"$$mydoc\",\"$$mykey\")\n ....\n (and all those basic operation on maps that programming languages offer)\n \n",
"text": "HelloI think you are asking thisI had the same problems before sometime,in mongodb for now seems that the key can’t be in a variable.\nThe problem is not just the 10 lines,but object to array and back to object,is so slow to be done multiple times(for example hen reducing a array to a document).\nI didn’t got any solution yet,i dont think its possible.\nI managed to implement fast only put,and only if the key is new.But if the you keep the documents with known fields (known schema),you can use unwind/group/project\netc , and if you want to do those for an embeded data,you can use facet,and do those.\nJavascript is also possible with $function stage ,but its slower.If you know where to send this either to be answered or to ask to be added on mongoDB ,would be nice.",
"username": "Takis"
},
{
"code": "$objectToArray",
"text": "Yeah this is exactly what I’m wondering. Too bad that there doesn’t seem to be a (better) way. :-/In my case the fields are not known in advance, but there are not too many of them (1-100), so I hope the approach I wrote using $objectToArray and back will be fast enough when the number of fields is at the upper limit.",
"username": "qtax"
},
{
"code": "",
"text": "HelloIf the fields are known use facet and project or addFields.\nI think its easier solution,maybe test it.\nFacet allows the use of stage operators,even if the object is embeded,if you replace the root\ninside the facet with that object,and then use the stage operators.",
"username": "Takis"
},
{
"code": "$let",
"text": "In my case the fields are not knownIf the fields are know then you can use $let.",
"username": "qtax"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Accessing "dynamic" object property in aggregation stage | 2020-09-16T14:40:46.408Z | Accessing “dynamic” object property in aggregation stage | 8,422 |
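Written out once as a sketch, the roughly-ten-line workaround the thread above converges on ($$mydoc and $$mykey are placeholders for whatever variables hold the object and the key in your pipeline):

```js
// evaluates to the value stored under key $$mykey in the object $$mydoc
{
  $let: {
    vars: {
      kv: {
        $arrayElemAt: [
          {
            $filter: {
              input: { $objectToArray: "$$mydoc" },
              cond: { $eq: ["$$this.k", "$$mykey"] }
            }
          },
          0
        ]
      }
    },
    in: "$$kv.v"
  }
}
```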
null | [
"compass"
] | [
{
"code": "",
"text": "Hi everyone,I’m anxious to help contributing to Compass in order to improve my weak Javascript and MongoDB skills. I’ve been reading code and poking around various corners of the code base.It seems to me (and I could be very, very wrong) that the unit tests for Compass are a little sparse. I’d be very happy to help writing unit tests to improve the project. It appears that Compass is using Spectron / WebdriverIO for unit tests.Are there any experienced developers who could suggest some potential missing unit tests that I could take a shot at?Appreciate any feedback",
"username": "Justin_Hopkins"
},
{
"code": "",
"text": "Hey @Justin_Hopkins - it’s wonderful that you want to contribute!I’ll notify the Compass development team and see if they have any suggestions.",
"username": "Mark_Smith"
},
{
"code": "",
"text": "Just checking in. Any sugggestions from the Compass development team? Or, could I get in touch with them directly?Thanks",
"username": "Justin_Hopkins"
},
{
"code": "",
"text": "Hey @Justin_Hopkins!Sorry for the delayed response! I did pass it on to the team last week (actually @Stennie_X beat me to it) and this was the response:Right now Compass is really hard to contribute to. The codebase is split across 50+ repos/modules and even just testing the changes is hard. We are thinking to change that but it’s not something we can do in a couple of weeksOn the other hand, @Massimiliano_Marcon did suggest that it might be possible for someone to pick up his Dark Mode Hack and take that forward. I’d suggest commenting on that PR to see what’s required.",
"username": "Mark_Smith"
},
{
"code": "",
"text": "Thanks for getting back to me.I’d love to work on the Dark Mode Hack. Will check it out. Thanks.Also, is there any publicly available discussion for Compass development? I’ve scoured the mongodb community and can’t seem to find anything.",
"username": "Justin_Hopkins"
},
{
"code": "",
"text": "Compass is mostly developed internally, so I think most discussion happens in the company Slack rather than in the community forum.If you have specific questions or ideas, the GitHub repo would be a good place to start a discussion.",
"username": "Mark_Smith"
}
] | New unit tests for Compass? Looking for ideas on how to contribute | 2020-09-16T03:48:05.455Z | New unit tests for Compass? Looking for ideas on how to contribute | 1,933 |
null | [
"replication"
] | [
{
"code": "grp-replic-001:PRIMARY> rs.status()\n{\n \"set\" : \"grp-replic-001\",\n \"date\" : ISODate(\"2020-09-18T21:14:06.597Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(6),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"majorityVoteCount\" : 2,\n \"writeMajorityCount\" : 2,\n \"votingMembersCount\" : 3,\n \"writableVotingMembersCount\" : 3,\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1600463636, 1),\n \"t\" : NumberLong(6)\n },\n \"lastCommittedWallTime\" : ISODate(\"2020-09-18T21:13:56.959Z\"),\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1600463636, 1),\n \"t\" : NumberLong(6)\n },\n \"readConcernMajorityWallTime\" : ISODate(\"2020-09-18T21:13:56.959 Z\"),\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1600463636, 1),\n \"t\" : NumberLong(6)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1600463636, 1),\n \"t\" : NumberLong(6)\n },\n \"lastAppliedWallTime\" : ISODate(\"2020-09-18T21:13:56.959Z\"),\n \"lastDurableWallTime\" : ISODate(\"2020-09-18T21:13:56.959Z\")\n },\n \"lastStableRecoveryTimestamp\" : Timestamp(1600463596, 1),\n \"electionCandidateMetrics\" : {\n \"lastElectionReason\" : \"electionTimeout\",\n \"lastElectionDate\" : ISODate(\"2020-09-18T20:34:20.457Z\"),\n \"electionTerm\" : NumberLong(6),\n \"lastCommittedOpTimeAtElection\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"lastSeenOpTimeAtElection\" : {\n \"ts\" : Timestamp(1600451358, 1),\n \"t\" : NumberLong(5)\n },\n \"numVotesNeeded\" : 2,\n \"priorityAtElection\" : 1,\n \"electionTimeoutMillis\" : NumberLong(10000),\n \"numCatchUpOps\" : NumberLong(0),\n \"newTermStartDate\" : ISODate(\"2020-09-18T20:34:24.959Z\"),\n \"wMajorityWriteAvailabilityDate\" : ISODate(\"2020-09-18T20:34:26. 
581Z\")\n },\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"mongodb01:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 2874,\n \"optime\" : {\n \"ts\" : Timestamp(1600463636, 1),\n \"t\" : NumberLong(6)\n },\n \"optimeDate\" : ISODate(\"2020-09-18T21:13:56Z\"),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1600461260, 1),\n \"electionDate\" : ISODate(\"2020-09-18T20:34:20Z\"),\n \"configVersion\" : 3,\n \"configTerm\" : 6,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 2,\n \"name\" : \"mongodb03:27017\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 2395,\n \"optime\" : {\n \"ts\" : Timestamp(1600463636, 1),\n \"t\" : NumberLong(6)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1600463636, 1),\n \"t\" : NumberLong(6)\n },\n \"optimeDate\" : ISODate(\"2020-09-18T21:13:56Z\"),\n \"optimeDurableDate\" : ISODate(\"2020-09-18T21:13:56Z\"),\n \"lastHeartbeat\" : ISODate(\"2020-09-18T21:14:05.430Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2020-09-18T21:14:06.250Z\" ),\n \"pingMs\" : NumberLong(1),\n \"lastHeartbeatMessage\" : \"\",\n \"syncSourceHost\" : \"mongodb01:27017\",\n \"syncSourceId\" : 0,\n \"infoMessage\" : \"\",\n \"configVersion\" : 3,\n \"configTerm\" : 6\n },\n {\n \"_id\" : 3,\n \"name\" : \"mongodb02:27017\",\n \"health\" : 0,\n \"state\" : 8,\n \"stateStr\" : \"(not reachable/healthy)\",\n \"uptime\" : 0,\n \"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastHeartbeat\" : ISODate(\"2020-09-18T21:14:06.037Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(2),\n \"lastHeartbeatMessage\" : \"Our replica set configuration is invalid or does not include us\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -2,\n \"configTerm\" : -1\n }\n ],\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1600463636, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1600463636, 1)\n}\n> rs.status()\n{\n \"operationTime\" : Timestamp(0, 0),\n \"ok\" : 0,\n \"errmsg\" : \"Our replica set config is invalid or we are not a member of it\",\n \"code\" : 93,\n \"codeName\" : \"InvalidReplicaSetConfig\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1600463767, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\n>\n>\n",
"text": "Everyone, greetings …I am a new user of mongoDB, and I set up a laboratory to study and train the replication configuration (1 Master and 2 slaves)The initial configuration and putting everything to work, despite having gone through some errors, I managed to complete and leave the entire replication structure between the members.When I tried to delete a member and re-add it didn’t work.\nTo delete the member, I used the command rs.remove (“mongodb02: 27017”) on the master node. And on the mongodb02 node I stopped the mongodb service and then deleted all files in the mongo’s data folder.To include mongodb02 again in the replication group I did the following: on the master node, I ran the command rs.add (“mongodb02: 27017”). And on the slave node is I started the mongodb service.On the slave node (mongodb02) the files are recreated, however the member does not return for replication.In rs.status () in master node, I have the following returnBut, rs.status() in node slave, I returnCould someone help me find where I’m going wrong?",
"username": "Mauricio_Tavares_Lei"
},
{
"code": "rs.add()rs.status()",
"text": "Hello @Mauricio_Tavares_Lei, welcome to the community.When I tried to delete a member and re-add it didn’t work.\nTo delete the member, I used the command rs.remove (“mongodb02: 27017”) on the master node. And on the mongodb02 node I stopped the mongodb service and then deleted all files in the mongo’s data folder.This means you just have to add a new member to the existing replica-set.The following are the steps to add a new member to the existing replica-set. The links refer to documentation from the tutorial “Add Members to a Replica Set”. Make sure you follow these steps and see it works.Requirements:Procedures:",
"username": "Prasad_Saya"
},
{
"code": "{\"t\":{\"$date\":\"2020-09-21T00:55:59.519-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", \"id\":23974, \"ctx\":\"ReplCoord-0\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n{\"t\":{\"$date\":\"2020-09-21T00:56:00.030-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", \"id\":23974, \"ctx\":\"ReplCoord-2\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n{\"t\":{\"$date\":\"2020-09-21T00:56:00.541-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", \"id\":23974, \"ctx\":\"ReplCoord-4\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n{\"t\":{\"$date\":\"2020-09-21T00:56:01.046-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", \"id\":23974, \"ctx\":\"ReplCoord-0\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n{\"t\":{\"$date\":\"2020-09-21T00:56:01.556-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", \"id\":23974, \"ctx\":\"ReplCoord-4\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n{\"t\":{\"$date\":\"2020-09-21T00:56:02.059-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", \"id\":23974, \"ctx\":\"ReplCoord-0\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n{\"t\":{\"$date\":\"2020-09-21T00:56:02.564-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", \"id\":23974, \"ctx\":\"ReplCoord-2\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n{\"t\":{\"$date\":\"2020-09-21T00:56:03.069-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", \"id\":23974, \"ctx\":\"ReplCoord-0\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n{\"t\":{\"$date\":\"2020-09-21T00:56:03.574-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", \"id\":23974, \"ctx\":\"ReplCoord-4\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n{\"t\":{\"$date\":\"2020-09-21T00:56:04.076-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", 
\"id\":23974, \"ctx\":\"ReplCoord-2\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n{\"t\":{\"$date\":\"2020-09-21T00:56:04.593-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", \"id\":23974, \"ctx\":\"ReplCoord-4\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n{\"t\":{\"$date\":\"2020-09-21T00:56:05.123-03:00\"},\"s\":\"I\", \"c\":\"REPL_HB\", \"id\":23974, \"ctx\":\"ReplCoord-0\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb03:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":93,\"codeName\":\"InvalidReplicaSetConfig\",\"errmsg\":\"Our replica set configuration is invalid or does not include us\"}}}\n",
"text": "Hi @Prasad_Saya,I did the procedures you suggested, and the error persists …\nI checked the slave node’s log file, and you’re giving this message …Is it some kind of conflict between the two slave nodes?are the node hosts as follows:\nmongodb01 - 192.168.56.103 (node master)\nmongodb02 - 192.168.56.104 (node slave - troubled node)\nmongodb03 - 192.168.56.102 (node slave)",
"username": "Mauricio_Tavares_Lei"
},
{
"code": "",
"text": "Hi @Mauricio_Tavares_Lei.What was the status of the replica-set before adding the new member? Check the status was correct with the existing two members. Then add the new member using the above steps.What is the MongoDB version you are using? Are you using any configuration files?If possible, please post the exact commands of the steps you had tried.",
"username": "Prasad_Saya"
},
{
"code": "cluster-001:SECONDARY> rs.status()\n {\n \"set\" : \"econsular-cluster-001\",\n \"date\" : ISODate(\"2020-09-21T04:11:21.789Z\"),\n \"myState\" : 2,\n \"term\" : NumberLong(7),\n \"syncSourceHost\" : \"mongodb01:27017\",\n \"syncSourceId\" : 0,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"majorityVoteCount\" : 2,\n \"writeMajorityCount\" : 2,\n \"votingMembersCount\" : 3,\n \"writableVotingMembersCount\" : 3,\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1600478411, 1),\n \"t\" : NumberLong(7)\n },\n \"lastCommittedWallTime\" : ISODate(\"2020-09-19T01:20:11.056Z\"),\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1600478411, 1),\n \"t\" : NumberLong(7)\n },\n \"readConcernMajorityWallTime\" : ISODate(\"2020-09-19T01:20:11.056Z\"),\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1600478411, 1),\n \"t\" : NumberLong(7)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1600478411, 1),\n \"t\" : NumberLong(7)\n },\n \"lastAppliedWallTime\" : ISODate(\"2020-09-19T01:20:11.056Z\"),\n \"lastDurableWallTime\" : ISODate(\"2020-09-19T01:20:11.056Z\")\n },\n \"lastStableRecoveryTimestamp\" : Timestamp(1600478350, 1),\n \"electionParticipantMetrics\" : {\n \"votedForCandidate\" : true,\n \"electionTerm\" : NumberLong(7),\n \"lastVoteDate\" : ISODate(\"2020-09-21T03:51:25.877Z\"),\n \"electionCandidateMemberId\" : 0,\n \"voteReason\" : \"\",\n \"lastAppliedOpTimeAtElection\" : {\n \"ts\" : Timestamp(1600477027, 1),\n \"t\" : NumberLong(6)\n },\n \"maxAppliedOpTimeInSet\" : {\n \"ts\" : Timestamp(1600477027, 1),\n \"t\" : NumberLong(6)\n },\n \"priorityAtElection\" : 1,\n \"newTermStartDate\" : ISODate(\"2020-09-19T00:58:39.312Z\"),\n \"newTermAppliedDate\" : ISODate(\"2020-09-21T03:51:25.911Z\")\n },\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"mongodb01:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 1197,\n \"optime\" : {\n \"ts\" : Timestamp(1600478411, 1),\n \"t\" : NumberLong(7)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1600478411, 1),\n \"t\" : NumberLong(7)\n },\n \"optimeDate\" : ISODate(\"2020-09-19T01:20:11Z\"),\n \"optimeDurableDate\" : ISODate(\"2020-09-19T01:20:11Z\"),\n \"lastHeartbeat\" : ISODate(\"2020-09-21T04:11:21.451Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2020-09-21T04:11:20.655Z\"),\n \"pingMs\" : NumberLong(1),\n \"lastHeartbeatMessage\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1600477119, 1),\n \"electionDate\" : ISODate(\"2020-09-19T00:58:39Z\"),\n \"configVersion\" : 5,\n \"configTerm\" : 7\n },\n {\n \"_id\" : 2,\n \"name\" : \"mongodb03:27017\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 1200,\n \"optime\" : {\n \"ts\" : Timestamp(1600478411, 1),\n \"t\" : NumberLong(7)\n },\n \"optimeDate\" : ISODate(\"2020-09-19T01:20:11Z\"),\n \"syncSourceHost\" : \"mongodb01:27017\",\n \"syncSourceId\" : 0,\n \"infoMessage\" : \"\",\n \"configVersion\" : 5,\n \"configTerm\" : 7,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\nreplication:\n oplogSizeMB: 128\n replSetName: econsular-cluster-001\n enableMajorityReadConcern: true\n",
"text": "What was the status of the replica-set before adding the new member? Check the status was correct with the existing two members. Then add the new member using the above steps.when the 3 nodes were working, I had this return from the rs.status () commandWhat is the MongoDB version you are using? Are you using any configuration files?Version 4.4\nYes, I am using the replication settings via the mongo.conf fileIn the replication session I inserted the following parametersIf possible, please post the exact commands of the steps you had tried.Node mongodb01\n1.1. mongo\n1.2. rs.remove(“mongodb02:27017”)Node mongodb02\n1.1. systemctl stop mongod\n1.2. rm -rf /var/lib/mongodb/Node mongodb03\n2.1. systemctl stop mongodNode mongodb02\n3.1. rsync -arv [email protected]:/var/lib/mongo/ .\n3.2. systemctl start mongodNode mongodb01\n4.1. rs.add(“mongodb02:27017”)",
"username": "Mauricio_Tavares_Lei"
},
{
"code": "storage:\n dbPath: G:\\mongo\\mongo-stuff\\misc\\db\\node1\nnet:\n bindIp: localhost\n port: 27011\nsystemLog:\n destination: file\n path: G:\\mongo\\mongo-stuff\\misc\\db\\node1\\mongod.log\n logAppend: true\nreplication:\n replSetName: myrset\nconf2.cfgconf3.cfgmongod -f conf1.cfg\nmongod -f conf2.cfg\nmongod -f conf3.cfg\nmongo --port 27011rs.initiate()\nrs.add(\"localhost:27012\")\nrs.add(\"localhost:27013\")\nrs.status()use admin\ndb.shutdownServer()\nrs.shutdownServer()db.shutdownServer()rs.remove(\"localhost:27012\")G:\\mongo\\mongo-stuff\\misc\\db\\node2G:\\mongo\\mongo-stuff\\misc\\db\\node2mongod -f conf2.cfgrs.status()\nrs.add(\"localhost:27012\")\nrs.status()\n",
"text": "Hi @Mauricio_Tavares_Lei, I tried the same actions: From a 3 member replica-set removed a member and added it again. See the steps I followed on a Windows and MongoDB v4.2.8.conf1.cfg:Similarly, I created conf2.cfg and conf3.cfg on ports 27012 and 27013 with data directories on node2 and node3 folders (instead of node1).Started the three mongods:Connected to a member:mongo --port 27011Initiated the replica-set and added the other two members.rs.status()All okay, replica-set created and running.\nAdd some data, etc.I connected to the secondary member “localhost:27012”, and stopped the instance.[ NOTE: Corrected the typo rs.shutdownServer() to db.shutdownServer() ]From the primary, removed the member “localhost:27012” from the replica-set:rs.remove(\"localhost:27012\")Deleted the data directory of member (“localhost:27012”): G:\\mongo\\mongo-stuff\\misc\\db\\node2Prepared the Data Directory:NOTE from documentation: If your storage system does not support snapshots, you can copy the files directly using cp, rsync, or a similar tool. Since copying multiple files is not an atomic operation, you must stop all writes to the mongod before copying the files. Otherwise, you will copy the files in an invalid state.Started the instance of the tobe-added member.mongod -f conf2.cfgConnect to replica-set’s primary. Add the new member “localhost:27012”.\nCheck the status before and after adding.All okay.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "My friend, your script worked perfectly …\nThank you very much!I will use this new environment that I set up based on your script and will continue testing.There is only one thing that I was very doubtful about.\nIn the step that you tell me to copy the files from another node but with the exception of the log, what was the log you were referring to?\nIf it is opLog, help me find it, because I don’t know where it is recorded.",
"username": "Mauricio_Tavares_Lei"
},
{
"code": "",
"text": "Hi @Mauricio_Tavares_Lei , it is just the server log. But, you could be having this log at different location.",
"username": "Prasad_Saya"
},
{
"code": "systemLog:\n destination: file\n path: G:\\mongo\\mongo-stuff\\misc\\db\\node1\\mongod.log\n logAppend: true",
"text": "In this case it would be the log that is configured there in the configuration file ???",
"username": "Mauricio_Tavares_Lei"
},
{
"code": "",
"text": "Correct. That is the log file I was referring to.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Interesting!!!Well, now that my laboratory is working, I will proceed with the studies, because now I need to understand how to mount the backup and restore including the oplog. And that I already have some doubts …@Prasad_Saya , thank you very much for the help and attention you gave to my problem !!!I hope one day I can help you too.",
"username": "Mauricio_Tavares_Lei"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Re-add member in replica | 2020-09-21T02:07:08.634Z | Re-add member in replica | 13,609 |
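A condensed sketch of the procedure worked out in this thread, with the host names from the thread; the rs.* commands run on the primary, and the data-directory wipe/seed happens on the member in between:

```js
// 1. on the primary: drop the member from the replica-set config
rs.remove("mongodb02:27017")

// 2. on mongodb02: stop mongod, clear its dbPath (or seed it from a clean copy of
//    another member's data files taken while that member was stopped), start mongod

// 3. back on the primary: add the member again and watch it catch up
rs.add("mongodb02:27017")
rs.status()
```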
null | [] | [
{
"code": "{ \n name: \"Error\",\n message: \"Bad error code (ERROR)\",\n isFatal: true,\n category: \"realm::sync::Client::Error\",\n code: 114,\n ...\n}\n",
"text": "I’m having trouble debugging some errors, specifically the one that gives this helpful message that sometimes occurs on opening a realm:Is there documentation on error codes and their causes anywhere?B",
"username": "Benjamin_Storrier"
},
{
"code": "",
"text": "@Benjamin_Storrier That’s a strange/surprising one - can you share some more logs? Particularly the logs on the server side would be helpful",
"username": "Ian_Ward"
}
] | Realm Error Codes / Documentation | 2020-09-20T23:17:03.412Z | Realm Error Codes / Documentation | 2,412 |
null | [] | [
{
"code": "",
"text": "Hi,If I have two sets of data with two different partition keys, can I return both sets of data in the same realm? Or do I have to open two different realms, passing in a different partition key for both realms?For example, one set of data should be available to all users, so I’ll specify a partitionValue of “PUBLIC” in the realm config to open a new synced realm. The other set of data is user specific, so I’ll specify a partitionValue of “userId=123” in the realm config to open a new synced realm. Since these are two different partitionValues, do I have to open two separate realms? Or is there a more seamless way to acquire all the data partitioned as “PUBLIC” and acquire all the data that’s user specific and partitioned as “userId=123” in one realm?Also, if the only way is to open multiple realms, can multiple realms be opened with Promise.all? Or do they have to be opened sequentially?Thanks!",
"username": "Jerry_Wang"
},
{
"code": "",
"text": "You would need to open two separate realms by supplying each with a separate partitionKey value. You should be able to open them both at the same time.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How are multiple partitions handled in React Native Realm? | 2020-09-21T02:07:15.156Z | How are multiple partitions handled in React Native Realm? | 2,444 |
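A sketch of opening both partitions in parallel with the Realm JS SDK, as described in the answer above; app, the logged-in user and TaskSchema are assumed to exist already, and the partition values are the ones from the question:

```js
const publicConfig = {
  schema: [TaskSchema],
  sync: { user: app.currentUser, partitionValue: "PUBLIC" },
};
const userConfig = {
  schema: [TaskSchema],
  sync: { user: app.currentUser, partitionValue: "userId=123" },
};

// Realm.open() returns a promise, so the two realms can be opened concurrently
const [publicRealm, userRealm] = await Promise.all([
  Realm.open(publicConfig),
  Realm.open(userConfig),
]);
```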
null | [
"data-modeling",
"atlas-device-sync",
"android"
] | [
{
"code": "",
"text": "I am trying to develop one android application.\nApplication requirement is find near by shops and search shops by name.So i see realm sdk for android. Documentation also not clear what kind of data we need to put into model class. How model class upload into mongodb atlas. etc.So, Can you please give some examples, How to add POJO object directly into our database.\nHow to search nearby shops and how naming field search via android sdk.Maybe, if you don’t have any solution right now. Then please help how can we do that?\nLike use AWS and connect to atlas db and use lambda function etc.I am completely new with mongodb. If you have any end-to-end solutions please provide.",
"username": "Yogesh_Rathi"
},
{
"code": "",
"text": "@Yogesh_Rathi You define your model classes by extending RealmObject -From there you open a synced realm using partitionKey field -And your data would then replicate to MongoDB Atlas.You can then run a query based on store.name for instance which would be a string. For geolocation you can store lat and long as a double under the Store object fields and then compare that against a User object which has the current location lat/long",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Android Geo location handling | 2020-09-18T12:47:04.362Z | Android Geo location handling | 2,246 |
null | [
"aggregation"
] | [
{
"code": "const search = db.collection(\"search\");\n\nsearch.aggregate([\n {\n '$match': {\n 'id_int': 0\n }\n }, {\n '$project': {\n '_id': 0, \n 'collection': 1, \n 'id_int': 1\n }\n }, {\n '$lookup': {\n 'from': 'arxiv', \n 'localField': 'id_int', \n 'foreignField': 'id_int', \n 'as': 'arxiv'\n }\n }\n], function(err, cursor) ... )\ncollection:\"arxiv\"\nid_int:0\n",
"text": "I am trying to see if i can change the from in the $lookup or rearrange my query to somehow retrieve from three different collections. So far i have managed to set up the query like so:The $match and then $project pipeline stages return a result with the following properties:The collection value will always be one of three arxiv, crossref or pmc_test. Therefore i’d like my $lookup from to use this property value programmatically as opposed having it hard coded.Thanks",
"username": "natedeploys"
},
{
"code": "from",
"text": "Hi @natedeploys,Therefore i’d like my $lookup from to use this property value programmatically as opposed having it hard coded.Currently the from is accepting a collection namespace, not aggregation expressions.However, you could still do this programmatically by running two phase query on the application code. First, perform a query to find out which collection, then use the collection value into an aggregation pipeline.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Dynamic from in $lookup | 2020-03-11T18:48:11.978Z | Dynamic from in $lookup | 6,056 |
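A rough Node.js sketch of the two-phase approach suggested above: first read the collection name, then build the $lookup with it. Error handling is omitted and the collection and field names simply follow the thread.

async function searchWithDynamicLookup(db, idInt) {
  // Phase 1: find out which collection this id belongs to.
  const doc = await db.collection("search").findOne(
    { id_int: idInt },
    { projection: { _id: 0, collection: 1, id_int: 1 } }
  );

  // Phase 2: use that value ("arxiv", "crossref" or "pmc_test") as the from collection.
  return db.collection("search").aggregate([
    { $match: { id_int: idInt } },
    { $project: { _id: 0, collection: 1, id_int: 1 } },
    {
      $lookup: {
        from: doc.collection,
        localField: "id_int",
        foreignField: "id_int",
        as: "result",
      },
    },
  ]).toArray();
}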
null | [] | [
{
"code": " var description: String? = null java.lang.RuntimeException: Unable to start activity ComponentInfo{com.mongodb.tasktracker/com.mongodb.tasktracker.TaskActivity}: io.realm.exceptions.RealmMigrationNeededException: Migration is required due to the following errors:\n - Property 'Task.description' has been added.\n",
"text": "I followed the TaskTracker tutorial for Android and was successful in getting everything working and syncing.Then, I went to Realm schema for Tasks collection and added “description” property as an optional string. Then I deployed my change.In Android code, I added the following to the Task model (which I got from SDKs tab): var description: String? = nullUpon starting the app I get this exception:The documentation states that the database automatically handles all synced schema migrations.If that’s the case, why do I get this error? What is the fix for this?",
"username": "David_Boyd"
},
{
"code": "",
"text": "I figured out the problem. I got a hint from another post but it wasn’t super clear what the answer was.In TaskActivity.kt, you need to comment out the “realm = Realm.getDefaultInstance()” within onCreate().The call to getDefaultInstance() was the cause of the migration exception. With it commented out, the call to getInstanceAsync() in onStart() happens first which uses waitForInitialRemoteData(). This migrates the data before you use it.",
"username": "David_Boyd"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | TaskTracker Android tutorial fails after schema change | 2020-09-20T23:31:29.276Z | TaskTracker Android tutorial fails after schema change | 2,696 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi,We’re running through a large upgrade process and thinking about our replicaset and how many members we need for optimal uptime.We have two main data centres and our office network as a 3rd location. We should really only be serving data out of our Primary or Secondary data centre.Would you choose a 5 or 7 member replicaset with 2 or 3 data bearing nodes in each data centre and an arbiter in our office to vote in an election? Any argument for making the arbiter a data member? The only thing I can think of would be of if we lost both data centres for a significant period of time.In either case, if we lose the primary or secondary site, we still have the majority available assuming the other 3 or 4 members can communicate.Having 3 members allows a little tolerance for an initial sync should we require one. We do not currently shard our data and therefore our data size is significantly larger than the recommended 2TB limit (something we’re looking to change, but unlikely until next year). But 3 members comes with a storage cost.Thoughts?Clive",
"username": "clivestrong"
},
{
"code": "",
"text": "Hi @clivestrong,It’s hard to give a truly enlighten advice without the full picture but I will try to, at least, add a few considerations in the mix.Running with an arbiter is never optimal but in your case, as you only have 2 data centers and you don’t want to have data in the 3rd one you have, it definitely makes sense.My first question would be: which write concern are you using?Let’s suppose you are running a 5 nodes replica set like this (but it would be the same issue with 7 nodes):If you are using w=“majority”, you won’t be able to write anymore if you have a network partition that isolated one of the data centers (1 or 2). You would still have a P because 2 nodes + the Arbiter would still be able to elect a node but in this case majority = 3 and a write would be validated by the cluster only if 3 nodes were able to acknowledge the write operation which is impossible with the A in the mix.Replace the A by an S with p=0 and this issue disappear because your S with p=0 can still vote and therefore still acknowledge write operations but cannot be elected primary.That being said, w=1 (default) and w=2 operations would still be fine in this configuration but w=“majority”=3 would fail.Adding 2 nodes (one in each DC1 and 2) would not solve the issue as it would make majority = 4. But at least w=3 operations would then work fine.Write operations are more likely to be acknowledged by nodes close to the primary due to the latency between the nodes so also note that in the case of a 5 nodes RS, w=2 write operations might be way faster than w=3 - depending how far away your 2 DC are from each other.Let’s imagine that the DC3 with the A is halfway in the middle of the 2 other DC.Again, in this scenario, changing the A into an S with p=0 and v=1 would reduce the time for a w=3 write operation by 2 as the DC3 in the middle would acknowledge the write operation twice as fast than the other DC.In general, I would not recommend w=“majority” write operations in a RS with an A because then the cluster is not truly Highly Available (HA). In my example, it would fail in a network partition scenario. If w=1 or 2 write operations are also used, they could be rollbacked if the Arbiter switches side and now decides to elect a P in DC2 without sync between DC1 & DC2.I hope this helps a little !Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi Maxime,I was comfortable with the election side of things, but the notes around the S in the 3rd DC and latency for writes is very useful information for me. Food for thought!Thank you for the insight.Clive",
"username": "clivestrong"
}
] | MongoDB across multiple sites | 2020-09-21T14:27:47.476Z | MongoDB across multiple sites | 2,367 |
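An illustrative mongo shell sketch of the suggestion above: replacing the arbiter with a data-bearing member that can vote and acknowledge writes but can never become primary. The hostname and member index are placeholders.

// Add a new member in the third site with priority 0 and a vote:
rs.add({ host: "dc3-node.example.net:27017", priority: 0, votes: 1 })

// Or adjust an existing member through a reconfiguration:
cfg = rs.conf()
cfg.members[4].priority = 0   // hypothetical index of the DC3 member
cfg.members[4].votes = 1
rs.reconfig(cfg)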
null | [
"data-modeling"
] | [
{
"code": "totalRatingsScoretotalNumberOfRatings$inctotalNumberOfRatingstotalRatingsScore$avgratingsmatchIDtotalNumberOfRatingstotalRatingsScoretransaction",
"text": "Hi EveryoneI am working on an app which has a ratings system, where one set of users can rate a set of players on a per match basis.I want to be able to aggregate the data from all the users on a per match basis coupled with long term averages for each player.I am struggling to work out the best way to model this using MongoDB. I have come up with the following solution.For the averages on a per match basis:\nI would have two fields for each player sub document within a match document consisting of totalRatingsScore and totalNumberOfRatings. These would be updated each time a user submits a rating for a player using the $inc operator with +1 for the totalNumberOfRatings and adding the user rating that is between 1 - 10 for the totalRatingsScore. To get the average for that match I could then simply use $avg.Does this solution scale if in the dream scenario, thousands of people are submitting ratings, that are likely to happen at around the same time after a match finishes?For the long term averages:\nI would have a ratings field on a Player document that would have sub documents consisting of a matchID, totalNumberOfRatings and totalRatingsScore that would be updated in a similar fashion to the per match basis. To get the long term average for the player would then involve calculating the average for each match and then getting the average from all the match averages.\nBoth scenarios would be updated using a transaction.What do you think about this solution? How would you tackle it?Cheers",
"username": "Piers_Ebdon"
},
{
"code": "{\n \"user\" : <username or id> # The user who is giving a rating\n \"game\" : <game id> # The game that was being played\n \"rating\" : <rating value> # rating score\n \"player\" : <username or id> # rating score\n}\ngameplayer",
"text": "Some questions to think about:The challenge with a document per game or a document per player is if the number of players is large or the number of users is large everyone will be trying to update that document at once which will mean writes are queued up on the document.Better to have a Ratings collection which captures the follow for each rating:Now to get all the ratings for a game you can query on game. To get all the ratings per player you can query on player.You could just run aggregations on this collection to get averages. As the collection grows you could collection the data for a single game or a single player in a document in another collection.",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "HI @Joe_DrumgooleFirstly thanks for responding to my question. I really appreciate how helpful MongoDB employees and dev advocates are within these forums.Apologies for the slow response. I am in the process of moving cities within the UK and haven’t had any quality time outside of work to respectfully reply to your response.Are there a large number of players per game?\nSo I am talking about a soccer match, which has 11 starting players per team and up to 7 subs, depending on the competition, who can also participate in the game. In my scenario, each match only focuses on a user’s preferred team, so there are up 18 players per game and up tp 30 players who are likelyto play for a team in a whole season.What is the typical ratio of users to players?\nI currently don’t have any users but hope to very soon! Aiming to be out for when iOS 14 is released which I reckon will be the middle of next month.What about outliers? Can you have a game with a very large number or users?\nThere could be a large numbers of users for each game. I am building a platform where the people creating the games already have large social media followings which range from tens of thousands to hundreds of thousands. I haven’t got anyone on board yet but theoretically there could be a large number.\nThis is my biggest concern, which may turn out to be a complete pipe dream, but I would really lke to be able to structure the database correctly to handle this, as in my day job I am an iOS develper and so don’t have commerical experience handling backends and databases.In regards to your answer of having a ratings collection, one of the calls I need to be able make is to be able to fetch the long term average for every player within a team who has been rated, from every user who follows that team for the particular person who has created that match.\nWould it be possible and efficient to fetch this within a potentially large number of documents, say 1m +?\nThis seems like a quick win but as someone who is fairly new to MongoDB, how easy would it be to then move away from this structure if it is no longer performant?",
"username": "Piers_Ebdon"
},
{
"code": "",
"text": "So every player has a a list of ratings. Those ratings are created by users and appended to the ratings collection.To calculate a rolling average for each player you just need the current average and the previous number of entries. So a change stream on the ratings could update the player document with the current new average and the fields required to calculate the next rolling average.That should work for millions of entries and peformance is capped by how fast new rattings are added. I would suggest you collect the change streams entries and update the averages once a second, a minute an hour. The period can be controlled by the load and the number of ratings arriving.",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "Hi JoeThis sounds great, although I haven’t used a change stream before and will need to dig into that.I’ve got the whole weekend to give this a go and will let you know how it goes.Thanks again for the help",
"username": "Piers_Ebdon"
},
{
"code": "",
"text": "Hey @Joe_DrumgooleSo I was able to spend a bit of time looking into using change streams over the weekend just gone.Like you said I just need a change stream to watch inserts on a ratings collection. I just want to confirm, the change stream would be setup when my app begins and is not triggered by an api call right?",
"username": "Piers_Ebdon"
},
{
"code": "",
"text": "You have to call watch to setup a change stream. Then you loop over the cursor getting change events. Make sure you keep track of the resume token so when you restart you start where you finished.",
"username": "Joe_Drumgoole"
}
] | Schema design for Ratings system | 2020-09-08T19:05:30.017Z | Schema design for Ratings system | 7,084 |
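A minimal Node.js sketch of the change-stream idea discussed in this thread. Collection and field names are assumptions, the update uses a pipeline-style update (MongoDB 4.2+), and resume-token persistence is only hinted at in a comment.

function watchRatings(db) {
  const changeStream = db
    .collection("ratings")
    .watch([{ $match: { operationType: "insert" } }]);

  changeStream.on("change", async (change) => {
    const { player, rating } = change.fullDocument;
    // change._id is the resume token: persist it so the watcher can restart where it left off.

    // new average = (old average * old count + new rating) / (old count + 1)
    await db.collection("players").updateOne({ _id: player }, [
      {
        $set: {
          ratingCount: { $add: [{ $ifNull: ["$ratingCount", 0] }, 1] },
          ratingAvg: {
            $divide: [
              {
                $add: [
                  { $multiply: [{ $ifNull: ["$ratingAvg", 0] }, { $ifNull: ["$ratingCount", 0] }] },
                  rating,
                ],
              },
              { $add: [{ $ifNull: ["$ratingCount", 0] }, 1] },
            ],
          },
        },
      },
    ]);
  });
}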
null | [
"aggregation",
"python",
"atlas",
"weekly-update"
] | [
{
"code": "",
"text": "Welcome to MongoDB $weeklyUpdate, a weekly digest of MongoDB tutorials, articles, and community spotlights!Each week, we’ll be sharing the latest and greatest MongoDB content and featuring our favorite community content too, curated by Adrienne Tacke at MongoDB! (P.S. Are you as excited as we are for Hacktoberfest? Keep an eye out for some exciting announcements soon )Enjoy!Want to find the latest MongoDB tutorials and articles written for developers, by developers? Look no further than our DevHub!MongoDB.Live 2020 Keynote In Less Than 10 Minutes\nDidn’t get a chance to attend the MongoDB.Live 2020 online conference this year? Don’t worry. Karen Huaulme has compiled a quick recap of all the highlights to get you caught up!Write A Serverless Function with AWS Lambda and MongoDB\nAdo Kukic shows you how to create your first AWS Lambda function powered by MongoDB Atlas in this step-by-step tutorial!Designing a Strategy to Develop a Game with Unity and MongoDB\nWalk through the planning phase of our Game Dev series. In this article, Nic Raboy summarizes our strategic meeting for Plummeting People!We stream tech tutorials, live coding, and talk to members of our community every Friday. Sometimes, we even stream twice a week! Be sure to follow us on Twitch to be notified of every stream!#100DaysOfCode: Community Showcase with Reagan Ekhameye\nBack with another community showcase, Joe Karlsson chats with Reagan Ekhameye about vuetube, a YouTube clone that he built for #100DaysOfCode! Honestly, I couldn’t tell the difference…Reagan did an EXCELLENT job that I thought we were streaming the YouTube home page \n Watch nowNoSQL Data Modeling for the RDBMS Developer\nEntity diagrams = what, exactly, in NoSQL? Sheeri Cabral walks us through the answer as well as all of your proper data modeling needs, specifically for the SQL peeps out there!\n Watch nowEpisode 3: Extending our User Profile Store\nLast week’s episode had Adrienne, Karen, and Nic creating API endpoints for their user profile store. Creating these allowed the team to add the Twitch chat users into the game as players!They start out with a typical Express/Node setup, but then see how MongoDB Realm helps them achieve the same functionality in MUCH less time! Be sure to catch up on the series!Episode 4 is on Sep 23, 10:30am PT, where we may actually play around in Unity! Join us in the chat and help us build this game! And if you ever need to catch up on any of our streams, you can always find them on our Developer Hub or our Twitch Live Streams playlist!Episode 18: Thena with Kieran Peppiatt\nThis week, Kieran Peppiatt, Founder and CEO of Thena, chats with hosts Nic Raboy & Mike Lynn about how MongoDB powers Thena’s ability to capture notes and other action items during meetings, allowing users to focus on the meeting itself!(Not listening on Spotify? We got you! We’re most likely on your favorite podcast network, including Apple Podcasts, PlayerFM, Podtail, and Listen Notes )Every week, we pick interesting articles, questions, and more from all over the internet! Be sure to use the #MongoDB hashtag when posting on dev.to or leave a comment on my weekly Tweets. You might be featured in an upcoming edition!mongodb-transactions\nhttps://github.com\nAyush Jain has created a REST API to perform large operations on the mongodb database with flexibility to cancel the operation in between and revert the changes. 
Check it out!Help creating $group and $count query\n_https://www.mongodb.com/community/forums_Build A Laundry CRUD API with FastAPI using MongoDB - 1\nhttps://dev.to\nDEV user Totoola Kenny has written an excellent, step-by-step tutorial that shows you how tobuild a CRUD API with Fast API and MongoDB. Check it out!How to Re-Authenticate a User with Token?\n_https://www.mongodb.com/community/forums_Part 3: User Roles and Management Datastore - MongoDB\nhttps://dev.to\nPart 3 of DEV user Rachel’s series explores how to use MongoDB as a datastore for the user roles and management of her app!Watch our team do their thang at various conferences, meetups, and podcasts around the world (virtually, for now). UpcomingSept 24th: Big Data LDN\nDid you grow up on SQL databases? Are document databases a bit of a mystery to you? Then Lauren Schaefer’s talk “From Tables to Documents—Changing Your Database Mindset” is perfect for you!Sept 25th: North East Remote Programming Conference 2020\nLauren Schaefer will be giving her talk “From Tables to Documents—Changing Your Database Mindset”! Watch it again!Sept 13: PennApps XXI\nJoe Karlsson gave his first Keynote ever and absolutely killed it! Watch The Art of Computer Science now!",
"username": "yo_adrienne"
},
{
"code": "",
"text": "These are so awesome! Love getting all the updates in one place. thanks for putting this together @yo_adrienne!",
"username": "JoeKarlsson"
},
{
"code": "",
"text": " Amazing roundup!",
"username": "ado"
},
{
"code": "",
"text": "Thank YOU @JoeKarlsson for giving me amazing content to share! Still have good vibes from your keynote!",
"username": "yo_adrienne"
},
{
"code": "",
"text": "@ado Whoop whoop Thank you!",
"username": "yo_adrienne"
}
] | MongoDB $weeklyUpdate #4: Fall is just around the corner | 2020-09-21T18:39:44.414Z | MongoDB $weeklyUpdate #4: Fall is just around the corner | 1,636 |
null | [
"capacity-planning"
] | [
{
"code": "",
"text": "We want to install 50 Servers with MongoDB, I want to know the necessary requirements?",
"username": "aristides_villarreal"
},
{
"code": "",
"text": "When you want to operate MongoDB at scale you start to think in numbers of shards rather than numbers of servers. Each shard represents a single replica set with some number of replicas. Assuming you have N shards each shard can handle 1/N part of the load.All this can be more easily managed by starting with our managed service MongoDB Atlas.",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "In our case we would have our own servers.",
"username": "aristides_villarreal"
}
] | Maximum number of clusters supported by mongodb? | 2020-09-20T05:16:07.144Z | Maximum number of clusters supported by mongodb? | 2,370 |
null | [
"indexes"
] | [
{
"code": "createIndexmongodumpmongorestore# ... set up db test-1 (takes ~2s)\nmongodump --archive=\"test-1\" --db=test-1 > test_output/mongodump-log 2>&1\n\necho \"Setting up dbs via mongorestore\"\n# this takes 40s. Why?\nseq 2 $NUM_SERVERS | xargs -L 1 -I % -P $NUM_SERVERS sh -c '\\\n mongorestore --archive=test-1 test-% --nsFrom=\"test-1.*\" --nsTo=\"test-%.*\" > test_output/mongorestore-test-% 2>&1; \\\n'\necho \"Done setting up dbs\"\n",
"text": "We use mongodb for our production database. We are hoping to improve test reliability & isolation by having each test worker on a machine talk to its own db in mongodb.So, we are trying to spin up 30 identical dbs on our test server. Each db should start out with empty collections, but the collections need to have indexes & validation rules defined on them.We tried to do this in two ways:A node script, which uses the mongodb nodejs driver to connect to each db and call the createIndex and various other commands.Preparing a single correctly configured db, dumping it via mongodump, then replicating it to the other 29 dbs via mongorestore.Both of these approaches fail to parallelize. While doing the setup on a single db takes ~2s, doing it across 30 dbs takes over 40s. In the case of option 1, we were seeing very inconsistent index creation times, ranging from 40s to over 3 minutes. Since the dbs are independent of each other, we expect this to take a similar amount of time as a single db, so 2-5s.It appears that the mongo instance running on the test server is not able to run index creation across several dbs at the same time. Is this a known limitation? Are we mis-configuring mongo in some way that prevents it from running these operations in parallel?Thanks for your input!FYI, the dump/restore command we are using is:",
"username": "Denis_Lantsman"
},
{
"code": "",
"text": "As a test, I ran this procedure with an empty db mongodump (no docs or indexes) and it succeeded nearly instantly, even with 30 mongorestores. So it does seem to be something related to index creation (even for an empty collection).",
"username": "Denis_Lantsman"
},
{
"code": "",
"text": "You do not seem to start mongorestore in background so your script is starting 39 mongorestore processes, one after the other, with no parallelism at the source. So even if mongod could do it in parallel your script is not.",
"username": "steevej"
},
{
"code": "-Pxargsseq 2 $NUM_SERVERS | xargs -L 1 -I % -P $NUM_SERVERS sh -c '\\\n echo \"starting %\"; \\\n sleep 10; \\\n echo \"done %\"; \\\n'\n",
"text": "The -P option on xargs is meant to run the tasks in parallel. I tested this script with just aand it did print out all the “starting” messages first, then the “done” messages 10s later.",
"username": "Denis_Lantsman"
},
{
"code": "mongodmongodmongodmongodmongodmongodiostatmongod",
"text": "Hi @Denis_Lantsman welcome to the community.I would like to clarify some things:So, we are trying to spin up 30 identical dbs on our test server.How many mongod processes are we talking about? Are all 30 dbs live in a single mongod instance? If you have 30 mongod instances, are they on a single machine, or on separate machines?If all 30 databases live in a single mongod process or if you have 30 mongod processes in a single machine, I don’t think you can expect it to run with the same timings as preparing a single database. This is because to create an index, it would need to do a collection scan which involves reading the whole collection into the cache. Multiply this by 30, and you’re hammering the cache and the disk with IO requests. If your disk cannot handle the requests, mongod will be forced to sit idle while waiting for the disk to complete its work. You can check if disk is the bottleneck by checking the iostat numbers, and see if the disk is fully utilized during this process.Actually you can do a small experiment by trying this process with less parallelization numbers. Say start with 2 processes simultaneously, and observe the reported timings. Then gradually increase the number of parallelization until you don’t see the benefit of adding more processes. I would be interested in seeing at what point the hardware start to get overworked.If you need further help, please post more details:Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Oops. Sorry but I completely missed that.",
"username": "steevej"
},
{
"code": "",
"text": "Hey Kevin, thanks for the reply. We are looking into your suggestions and running some experiments.A few clarifications:",
"username": "Denis_Lantsman"
},
{
"code": "",
"text": "Another update, we’re sometimes seeing the db setup via mongorestore take as long as 7 minutes!",
"username": "Denis_Lantsman"
},
{
"code": "",
"text": "I also tried running this with mongod 4.4 and had similar results.",
"username": "Denis_Lantsman"
},
{
"code": "2020-09-16T20:45:36.498+0000\tchecking options\n2020-09-16T20:45:36.498+0000\t\tdumping with object check disabled\n2020-09-16T20:45:36.498+0000\twill listen for SIGTERM, SIGINT, and SIGKILL\n2020-09-16T20:45:36.500+0000\tconnected to node type: standalone\n2020-09-16T20:45:36.500+0000\tstandalone server: setting write concern w to 1\n2020-09-16T20:45:36.500+0000\tusing write concern: w='1', j=false, fsync=false, wtimeout=0\n2020-09-16T20:45:36.538+0000\tarchive prelude source-db.collection1\n2020-09-16T20:45:36.538+0000\tarchive prelude source-db.collection2\n2020-09-16T20:45:36.538+0000\tarchive prelude source-db.collection3\n2020-09-16T20:45:36.538+0000\tarchive prelude source-db.collection42020-09-16T20:47:06.479+0000\tcreating collection target-db.collection3 using options from metadata\n2020-09-16T20:47:06.479+0000\tusing collection options: bson.D{bson.DocElem{Name:\"idIndex\", Value:mongorestore.IndexDocument{Options:bson.M{\"name\":\"_id_\", \"ns\":\"target-db.collection3\"}, Key:bson.D{bson.DocElem{Name:\"_id\", Value:1}}, PartialFilterExpression:bson.D(nil)}}}\n2020-09-16T20:47:14.678+0000\trestoring target-db.collection3 from archive 'mongodump-source-db'\n2020-09-16T20:47:14.678+0000\tfinished restoring target-db.collection5 (0 documents)\n2020-09-16T20:47:14.678+0000\tfinished restoring target-db.collection34 (0 documents)\n2020-09-16T20:47:14.680+0000\tusing 1 insertion workers\n2020-09-16T20:47:14.680+0000\tdemux checksum for namespace source-db.collection3 is correct (0), 0 bytes\n2020-09-16T20:47:14.680+0000\trestoring indexes for collection target-db.collection3 from metadata\n2020-09-16T20:47:14.680+0000\tdemux namespaceHeader: {source-db collection4 false 0}\n2020-09-16T20:47:14.680+0000\tdemux Open\n2020-09-16T20:47:14.680+0000\treading metadata for target-db.collection4 from archive 'mongodump-source-db'\n2020-09-16T20:47:14.680+0000\tcreating collection target-db.collection4 using options from metadata\n2020-09-16T20:47:14.680+0000\tusing collection options: bson.D{bson.DocElem{Name:\"idIndex\", Value:mongorestore.IndexDocument{Options:bson.M{\"name\":\"_id_\", \"ns\":\"target-db.collection4\"}, Key:bson.D{bson.DocElem{Name:\"_id\", Value:1}}, PartialFilterExpression:bson.D(nil)}}}\n2020-09-16T20:47:14.680+0000\tdemux namespaceHeader: {source-db collection4 true 0}\n2020-09-16T20:47:39.042+0000\trestoring target-db.collection4 from archive 'mongodump-source-db'\ndemuxcollection4",
"text": "One more thing that might be useful - I recorded a mongoresotre log from one of the particularly long mongorestore operations (this one took ~6min).\nIt seems like there are some places where the script is sitting idle for as much as 15s at a time (see last two lines below):The demux makes me think it’s waiting on some sort of mutex lock on collection4…",
"username": "Denis_Lantsman"
},
{
"code": ": steevej @ asus-laptop ; cat create-test-collections.js\nfor( let instance = 0 ; instance < 30 ; instance++ )\n{\n\tdb = db.getSiblingDB( \"test-\" + instance ) ;\n\tdb.dropDatabase() ;\n\tdb.getCollection( \"collection-1\" ).createIndex( { a : 1 , b : 1 } , { background : true } ) ;\n\tdb.getCollection( \"collection-2\" ).createIndex( { c : 1 , d : 1 } , { background : true } ) ;\n\tdb.getCollection( \"collection-3\" ).createIndex( { e : 1 , f : 1 } , { background : true } ) ;\n}\n: steevej @ asus-laptop ; date ; mongo create-test-collections.js ; date\nWed Sep 16 19:08:08 EDT 2020\nMongoDB shell version v4.0.5\nconnecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"b186b791-9815-45d8-a61d-f45988260b73\") }\nMongoDB server version: 4.0.5\nWed Sep 16 19:08:11 EDT 2020\n: steevej @ asus-laptop ; \n",
"text": "While I do not know why it would take so long, I would like to propose another route for setting up your test databases.I admit, that everything is local but I get all the databases, all collections and some indexes within 4 seconds. My system is7886MiB System memory\nIntel® Core™ i5-2410M CPU @ 2.30GHz\nLinux asus-laptop 4.19.36-1-lts #1 SMPSince the indexes are built in the background they might not be finished building at the end.I also tried with more indexes on an Atlas Free Tier and I could create 30 dbs with 3 collections per dbs and 2 (2-fields) indexes per collections. It took around 15 seconds.",
"username": "steevej"
},
{
"code": "",
"text": "We tried this approach (creating indexes via the js driver). It seemed to have the same bottleneck - and I don’t doubt that you would find the times increasing if you created 30 collections with 60 indexes per db.This approach also experiences the same variability in performance, with some runs taking significantly more than 30s.",
"username": "Denis_Lantsman"
},
{
"code": "mongodmongodmongodecho \"Creating first db\"\necho \"creating fixture db mongod process\"\nmkdir ./mongo-dbs/db-fixture\nmongod --port $FIRST_MONGO_PORT --dbpath ./mongo-dbs/db-fixture --logpath ./test_output/mongo-fixture &\nuntil mongo mongodb://localhost:$FIRST_MONGO_PORT/db-test --eval \"{}\" > /dev/null 2>/dev/null; do\n\techo \"fixture db not ready. waiting\"\n sleep .1\ndone\necho \"Setting up indexes\"\nenv MONGODB_URL=\"mongodb://localhost:$FIRST_MONGO_PORT/db-test\" scripts/setup-database-indexes --background false > test_output/setup-database-indexes\nmongod --dbpath ./mongo-dbs/db-fixture --shutdown\necho \"done creating first db\"\necho \"Launching $NUM_SERVERS mongo dbs\"\nseq 1 $NUM_SERVERS | xargs -L 1 -I % -P $NUM_SERVERS bash -c '\\\n\tcp -r ./mongo-dbs/db-fixture/. ./mongo-dbs/db-%; \\\n MONGO_PORT=`expr $FIRST_MONGO_PORT + % - 1`; \\\n (mongod --port $MONGO_PORT --dbpath ./mongo-dbs/db-% --logpath ./test_output/mongo-% &) ; \\\n (until mongo mongodb://localhost:$MONGO_PORT/db-test --eval \"{}\" > /dev/null 2>/dev/null; do echo \"mongo % not ready...\"; sleep .1; done) ; \\\n\techo \"mongo % ready\"; \\\n'\necho \"done launching mongod processes\"\n",
"text": "For wayward travelers who might run into the same issue:We ended up giving up on using a single mongod process, since all of our efforts to parallelize it didn’t work. Instead, we went with this approach:Overall it takes about 7s for our 30 mongod processes, with indexes, to be ready for requests. A great improvement over the occasional 2min wait!",
"username": "Denis_Lantsman"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Setting up dbs with indexes in parallel | 2020-09-15T20:00:04.334Z | Setting up dbs with indexes in parallel | 4,111 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hello,Im struggeling for 3 - 4 days how I can tell this in a aggregate or a findIt seems I cannot use two times a $or because the second time overwrites the first one.\nBut how can I make this work ?",
"username": "Roelof_Wobben"
},
{
"code": "",
"text": "and",
"username": "steevej"
},
{
"code": "",
"text": "that does not work both a a string so I thought of using $ neBut still how can I make a query where I have to use two times a $or",
"username": "Roelof_Wobben"
},
{
"code": "{ rated : { $in : [ \"P\", \"PG\" ] } }\n",
"text": "A JSON document cannot have the same key twice.An array can be used to $and or $or multiple clauses.I do not understand the following:that does not work both a a stringButworks fine even if rated is a string.And since this thread is related to M121 at MongoDB University, it is best to have this thread over there in the course specific forum.",
"username": "steevej"
},
{
"code": "",
"text": "If you are still stuck and you want an aggregation solution look at that also : )https://www.mongodb.com/community/forums/t/a-pipeline-stage-specification-object-must-contain-exactly-one-field/9411/6",
"username": "Takis"
}
] | How to make this work | 2020-09-21T17:46:29.317Z | How to make this work | 2,330 |
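For readers landing on this thread: the general point made above is that two $or clauses cannot be sibling keys in the same document, but they can both live inside an $and array. The field names below are invented for illustration only.

db.movies.find({
  $and: [
    { $or: [ { rated: "P" }, { rated: "PG" } ] },
    { $or: [ { "awards.wins": { $gte: 1 } }, { "imdb.rating": { $gte: 7 } } ] }
  ]
})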
null | [
"capacity-planning"
] | [
{
"code": "",
"text": "I would like to know the best practices for installation and configuration. We are thinking of 50 servers, what would be the optimal installation and configuration and the limitations that mongo would have",
"username": "aristides_villarreal"
},
{
"code": "",
"text": "Impossible to say without some idea of how many will be active at what time, what the working set for a given user will be and what will the write load vs read load be?",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "If you want to benchmark the performances of different MongoDB configurations, you can use MongoDB Atlas to setup different clusters with different number of shards, size of instances (RAM, CPU), etc and run your benchmark to make sure you are hitting your numbers.But you can only do this once you have figured your data model, indexes, shard keys and most frequent queries which I highly recommend to plan before starting to plan anything else as this will drive the rest.",
"username": "MaBeuLux88"
}
] | I need to install a cluster for 1 million users | 2020-09-20T23:26:22.854Z | I need to install a cluster for 1 million users | 2,430 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hi TeamMongoDB Realm Sync is in beta right now. Is it recommended to use this beta version in live mobile applications?",
"username": "Saba_Siddiqui"
},
{
"code": "",
"text": "@Saba_Siddiqui This recent forum discussion might give you some guidance -",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Sync beta version | 2020-09-21T09:47:02.342Z | Realm Sync beta version | 1,834 |
null | [
"upgrading"
] | [
{
"code": "",
"text": "We are currently on mongo 3.2 and plan to upgrade the storage engine from MMAP to WiredTiger.We currently have 1 primary,1 secondary & 1 arbiter in our replicaset(for reference lets call it RS1).\nWe take daily snapshots of our secondary node. Our data size is around 500GB.In mongo3.2, upgrading the storage engine from MMAP to WiredTIger should be done in a rolling manner. This includes an initial sync of the mongo node having WiredTiger with an existing mongo node having MMAP. Initial sync fails due to renameCollection command during aggregate or map-reduce functions (JIRA ticket\nNow this issue is fixed in mongo 3.6.\nSo there are two ways to handle this:However we tried a different approach:\n1.launch a mongo node(MMAP storage engine) with the snapshot data and assigned it to a different replicaset(RS2) which was isolated from the application (ie no read/writes happening).\n2.attach a new mongo node without any data and WiredTiger storage engine to this new replicaset.\n3.Let it sync,\n4.Once sync is complete, remove this WiredTiger node from the replicaset(RS2), flush its local database which has replicaset information .\n5.Attach this node to the original replica set (RS1)\n6.Hopefully it should sync the data without doing an initial syncBut alas, step 6 didnt work and it went for an initial sync.\nThis eventually failed whenever there was a renameCollection operation with the following error:\n“OplogOperationUnsupported: Applying renameCollection not supported in initial sync”Is this expected? Are we doing something wrong?",
"username": "Anshuman_Biswal"
},
{
"code": "allowUnsafeRenamesDuringInitialSync",
"text": "Hi @Anshuman_Biswal,Yes this is expected, you cant take a node from replica B and plug it to replica A. Once it had a different replica ID it will force initial sync.I think the best approach is to make sure you are running 3.2.18+ on all replica set nodes and use a server parameter to allow initial sync to finish regardless of renames allowUnsafeRenamesDuringInitialSync.It was introduced in 3.2.18 as part of https://jira.mongodb.org/browse/SERVER-29772Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny ,\nThanks a lot of your answer. Will definitely try using the allowUnsafeRenamesDuringInitialSync parameter.\nJust one question, i thought the replicaset information resided in the local database.\nSo when we do the following in mongo shell:\nuse local\ndb.dropDatabase()\nI guess the node looses the information to which replica set it was belonging.\nIs it not correct?",
"username": "Anshuman_Biswal"
},
{
"code": "",
"text": "Hi @Anshuman_Biswal,No replication information means not belonging to replica A or B. Attaching it to both will result in initial sync.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny,Thanks for your reply.I assume the flag allowUnsafeRenamesDuringInitialSync will be set true in all the replica set nodes right?Thanks\nAnshuman",
"username": "Anshuman_Biswal"
},
{
"code": "--setParameter allowUnsafeRenamesDuringInitialSync=true\n",
"text": "Hi @Anshuman_Biswal,Yes its a parameter.It should be :Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks @Pavel_Duchovny for the help. Really appreciate it !!",
"username": "Anshuman_Biswal"
}
] | Upgrading Storage Engine from MMAP to WiredTiger in MongoDB 3.2 version | 2020-09-18T11:24:58.999Z | Upgrading Storage Engine from MMAP to WiredTiger in MongoDB 3.2 version | 3,382 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "I am looking for a query in mongodb which is similar to the below sqlSelect count(users),count(issues), count(fingercnt)\nfrom issuedetails\ngroup by created_date",
"username": "vinodkumar_Mallikarj"
},
{
"code": "",
"text": "Hi @vinodkumar_Mallikarj! Welcome to the community!MongoDB allows you to create Aggregation Pipelines. Aggregation Pipelines are handy when you want to perform analytical queries or combine data from more than one collection.You can use the $count and $group stages as part of the Aggregation Pipeline.",
"username": "Lauren_Schaefer"
},
{
"code": "covid19.global_and_us[\n {\n '$facet': {\n 'countries': [\n {\n '$group': {\n '_id': '$country'\n }\n }, {\n '$count': 'count'\n }\n ], \n 'states': [\n {\n '$group': {\n '_id': '$state'\n }\n }, {\n '$count': 'count'\n }\n ], \n 'counties': [\n {\n '$group': {\n '_id': '$county'\n }\n }, {\n '$count': 'count'\n }\n ]\n }\n }\n]\n",
"text": "Hi @vinodkumar_Mallikarj and welcome onboard !I think what you are looking for here is $facet which is part of the Aggregation Pipeline.Here is an example that I executed on the MongoDB Open Data COVID-19 cluster on the collection covid19.global_and_us.This collection contains all the COVID-19 stats from different countries in the world, sub-divided in states then counties.With the following query, I’m counting how many countries, states and counties are in this collection.The result looks like this:\nimage796×321 28.3 KB\nYou could also choose to run these 3 sub-pipelines separately in 3 separated queries but the $facet stage makes it possible to run them all at once.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "[\n {\n \"user\": \"\",\n \"issues\": \"\",\n \"finger\": \"\",\n \"created_date\": \"2019-01-01T00:00:00Z\"\n },\n {\n \"user\": \"\",\n \"issues\": null,\n \"finger\": \"\",\n \"created_date\": \"2019-01-01T00:00:00Z\"\n },\n {\n \"user\": \"\",\n \"finger\": \"\",\n \"created_date\": \"2019-01-02T00:00:00Z\"\n },\n {\n \"user\": \"\",\n \"issues\": \"\",\n \"finger\": null,\n \"created_date\": \"2019-01-02T00:00:00Z\"\n },\n {\n \"user\": \"\",\n \"issues\": \"\",\n \"created_date\": \"2019-01-02T00:00:00Z\"\n },\n {\n \"user\": \"\",\n \"issues\": \"\",\n \"created_date\": \"2019-01-02T00:00:00Z\"\n }\n]\n{\n \"aggregate\": \"testcoll\",\n \"pipeline\": [\n {\n \"$group\": {\n \"_id\": \"$created_date\",\n \"user_count\": {\n \"$sum\": {\n \"$cond\": [\n {\n \"$or\": [\n {\n \"$eq\": [\n {\n \"$type\": \"$user\"\n },\n \"missing\"\n ]\n },\n {\n \"$eq\": [\n \"$user\",\n null\n ]\n }\n ]\n },\n 0,\n 1\n ]\n }\n },\n \"issues_count\": {\n \"$sum\": {\n \"$cond\": [\n {\n \"$or\": [\n {\n \"$eq\": [\n {\n \"$type\": \"$issues\"\n },\n \"missing\"\n ]\n },\n {\n \"$eq\": [\n \"$issues\",\n null\n ]\n }\n ]\n },\n 0,\n 1\n ]\n }\n },\n \"finger_count\": {\n \"$sum\": {\n \"$cond\": [\n {\n \"$or\": [\n {\n \"$eq\": [\n {\n \"$type\": \"$finger\"\n },\n \"missing\"\n ]\n },\n {\n \"$eq\": [\n \"$finger\",\n null\n ]\n }\n ]\n },\n 0,\n 1\n ]\n }\n }\n }\n },\n {\n \"$addFields\": {\n \"created_date\": \"$_id\"\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n }\n ],\n \"maxTimeMS\": 0,\n \"cursor\": {}\n}\n\n[\n {\n \"user_count\": 4,\n \"issues_count\": 3,\n \"finger_count\": 1,\n \"created_date\": \"2019-01-02T00:00:00Z\"\n },\n {\n \"user_count\": 2,\n \"issues_count\": 1,\n \"finger_count\": 2,\n \"created_date\": \"2019-01-01T00:00:00Z\"\n }\n]\n",
"text": "HelloI think this does the same.\nGroup by created_date.And count the non null values on the fields.Example dataThe command.The pipeline is the code to use in any driver.\nIt can be much sorter if you use a function,because its 3x the same code.\nIt groups by created_date,and counts excluding missing fields or fields with null valuesResults (missing or null are excluded on counting)",
"username": "Takis"
},
{
"code": "",
"text": "Your query is different than mine in the sense that you are only grouping by one criterion and then counting different fields so it works with a single pipeline.In my case, I’m grouping by 3 different criteria “at the same time” so I need to use $facet to run 3 different $group over the same input.Maybe you will need it some day Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "db.Issues.aggregate(\n [\n {\n \"$project\": {\n \"_id\": NumberInt(0),\n \"issueid\": \"$issueid\",\n \"program_name\": \"$fin.program_name\",\n \"issue_type\": \"$issue_type\",\n \"client_name\" : \"$client.client_mnemonic\",\n \"zipfile_dt\" : \"$zipfile.zip_file_dt_tm\",\n \"user_name\" : \"$user.user_name\", \n \"finid\" : \"$fin.finid\"\n }\n },\n {\n $match: {\"client_name\" :{$in: [\"ACL\",\"CEC\"]},\"zipfile_dt\" : \"2020-01-01\"}\n },\n {\n $group: { \n _id: \"$zipfile_dt\", \n Fincnt: {$addToSet: \"$finid\"}\n }\n },\n {\n $project: {\n uniqueFinCount:{$size:\"$Fincnt\"}\n }\n }\n ]\n )\n",
"text": "I used the below query, it is giving the output.But in the above query i can only able to retrieve the distinct count of only one column, for multiple columns i am facing the issue. If I add addtoset on one more column, the previous values will overwrite and give the count as 0",
"username": "vinodkumar_Mallikarj"
},
{
"code": "",
"text": "The query is giving the overall count, assume there are total 7 records and out of 7 if we use distinct count then it should return 3.I want the count 3 instead of 7.",
"username": "vinodkumar_Mallikarj"
},
{
"code": "$group: {\n_id: “$zipfile_dt”,\nFincnt: {$addToSet: “$finid”},\nIssuecnt: {$addToSet: “$issueid”}\n}\n$group: {\n_id: “$zipfile_dt”,\nacount: {$addToSet: {\"finid\" : “$finid”, \"issueid\" : \"$issueid\"}}\n}\n",
"text": "HelloWhat do you mean overriden and get count as 0.Add fields like that?\nMaybe like that?If you give example data in,how they should become,and your query,it could help alot.",
"username": "Takis"
},
{
"code": "",
"text": "$match should be the first stage in your aggregation so it can benefit from the index {zipfile_dt:1, “client_name”:1} that you should create for this query to run fast.\nAfter a $project stage, your index won’t work anymore.\nAlways $match and $sort first if you can do so.",
"username": "MaBeuLux88"
},
{
"code": "[\n {\n \"program_name\": \"E\",\n \"user_name\": \"B\",\n \"zipfile_dt\": \"2019-01-01\"\n },\n {\n \"program_name\": null,\n \"user_name\": \"B\",\n \"zipfile_dt\": \"2020-01-01\"\n },\n {\n \"program_name\": \"E\",\n \"user_name\": \"B\",\n \"zipfile_dt\": \"2020-01-01\"\n },\n {\n \"program_name\": \"C\",\n \"user_name\": \"B\",\n \"zipfile_dt\": \"2020-01-01\"\n },\n {\n \"program_name\": \"D\",\n \"user_name\": \"A\",\n \"zipfile_dt\": \"2020-01-01\"\n },\n {\n \"program_name\": \"D\",\n \"user_name\": \"A\",\n \"zipfile_dt\": \"2020-01-01\"\n },\n {\n \"program_name\": \"D\",\n \"user_name\": \"A\",\n \"zipfile_dt\": \"2020-01-01\"\n }\n]\n{\n \"aggregate\": \"testcoll\",\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n \"$zipfile_dt\",\n \"2020-01-01\"\n ]\n }\n }\n },\n {\n \"$group\": {\n \"_id\": null,\n \"program_name_array\": {\n \"$addToSet\": \"$program_name\"\n },\n \"user_name_array\": {\n \"$addToSet\": \"$user_name\"\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n },\n {\n \"$project\": {\n \"users_count\": {\n \"$size\": \"$user_name_array\"\n },\n \"program_count\": {\n \"$size\": \"$program_name_array\"\n }\n }\n }\n ],\n \"maxTimeMS\": 0,\n \"cursor\": {}\n}\n{\n \"users_count\": 2,\n \"program_count\": 4\n}\n{\n \"$project\": {\n \"users_count\": {\n \"$size\": {\n \"$filter\": {\n \"input\": \"$user_name_array\",\n \"as\": \"m\",\n \"cond\": {\n \"$not\": [\n {\n \"$eq\": [\n \"$$m\",\n null\n ]\n }\n ]\n }\n }\n }\n },\n \"program_count\": {\n \"$size\": {\n \"$filter\": {\n \"input\": \"$program_name_array\",\n \"as\": \"m\",\n \"cond\": {\n \"$not\": [\n {\n \"$eq\": [\n \"$$m\",\n null\n ]\n }\n ]\n }\n }\n }\n }\n }\n}\n{\"users_count\":2,\"program_count\":3}\n",
"text": "Hello : )I think you want count distinct in each field,in the first answer,it was about count only.\nBecause in that sql query you didnt add the distinct.I used “$group”: {\"_id\": null …} because you already kept only 1 date on match stage,\nso here all collection will be 1 group.Maybe this is what you want.\nExample data.Data inQueryResultIf you want to not count the null valuesIn the above query replace the last project with that,it filters the array and keeps\nonly the not nulls,you can make a function and generate the ,here is the same code 2xResult (program_name null value wasn’t counted)Hope this time helps",
"username": "Takis"
},
{
"code": "",
"text": "Super. It is working fine.\nThank you",
"username": "vinodkumar_Mallikarj"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help creating $group and $count query | 2020-09-17T09:26:42.522Z | Help creating $group and $count query | 31,788 |
null | [
"ops-manager"
] | [
{
"code": "",
"text": "How to deploy a cluster using api call in mongodb ops manager how to do with ansible",
"username": "shreya_chouhan"
},
{
"code": "",
"text": "Hi @shreya_chouhan,I would recommend reviewing our Api automotion document:Automation Configuration Resource — MongoDB Ops Manager 6.0This is the resource that when adjusted should deploy the added cluster.However, I would suggest exploring our mongo-cli for ops managerFor easy operation for easy API operationsBest\nPavel",
"username": "Pavel_Duchovny"
}
] | Deploy a cluster using api call in mongodb ops manager | 2020-09-20T19:24:12.292Z | Deploy a cluster using api call in mongodb ops manager | 1,845 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I work on a ROBLOX game called RoBeats CS, which is a VSRG (vertical scrolling rhythm game). We have hundreds upon thousands of scores in one table. Every player gets his/her own Skill Rating, which is calculated by the Rating of the best 25 scores that player made. What is the best way to do a ranking system so that players can have their own rank? Do note that most of my backend is done and I mostly know what I’m doing. I just don’t know the best approach. Thanks for any help I can get.",
"username": "blizzo"
},
{
"code": "",
"text": "Hi @blizzo,I know this is a little late, but maybe this will give you some ideas?:https://www.mongodb.com/how-to/maintaining-geolocation-specific-game-leaderboard-phaser-mongodbIt’s a tutorial I wrote around leaderboards in a Phaser game.Best,",
"username": "nraboy"
},
{
"code": "",
"text": "Ah yeah, thanks. I already figured out a lot but I’ll let you know if this works.",
"username": "blizzo"
}
] | What is the best way to generate a global leaderboard for our game? | 2020-03-27T00:19:58.283Z | What is the best way to generate a global leaderboard for our game? | 3,965 |
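A small illustrative mongo shell sketch (not from the thread): with an index on the skill rating, a leaderboard page and a single player's rank can both be answered with cheap queries. Collection and field names are assumptions.

db.players.createIndex({ skillRating: -1 })

// Top 100 leaderboard page
db.players.find({}, { name: 1, skillRating: 1 }).sort({ skillRating: -1 }).limit(100)

// A single player's rank = 1 + number of players with a strictly higher rating
const me = db.players.findOne({ _id: "player123" })
const rank = db.players.countDocuments({ skillRating: { $gt: me.skillRating } }) + 1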
[
"aggregation"
] | [
{
"code": "\n{\n \"mysingle1\": 1,\n \"mysingle2\": 1,\n \"mysingle3\": 1,\n \n \"myarray1\": [1,2],\n \"myarray2\": [1,2],\n \"myarray3\": [1,2],\n \n \"myobject1\": {\"afield\": \"\"},\n \"myobject2\": {\"afield\": \"\"}, \n \"myobject3\": {\"afield\": \"\"} \n}\n\n$addFields behaviour\n\nsingle -> add single/array/document => replace\n\narray -> add single/array => replace\narray -> add document => update,all members of the array becomes that document\n\n {\"$addFields\" : {\"myarray3\" : {\"afield1\" :\"\"}}}\n became \n {\"myarray3\" : [{\"afield1\" :\"\"} {\"afield1\" :\"\"}]}\n\ndocument -> add single/array => replace\ndocument -> add document => update document,added becomes its member\n \n {\"$addFields\" : {\"myobject3\" : {\"afield1\" :\"\"}}}\n became\n {\"myobject3\" {\"afield\" \"\", \"afield1\" \"\"}}\n\ndocument -> add document => replace\n",
"text": "Hello : )I inserted those data,1 document and i tried to replace the existing fields,with a single value,\nan array,and an onbject and i got un-expected for me results,i just wanted to replace the old value.And i used $addFields.To always add a new field,that already existed.\nReading the documentatation from the IMPORTANT + example after\nseemed that new field value,will replace the existing but this is not the case.\nSometimes it replaces,sometimes it updates,and the update result looks unexpected.\nMaybe in the documentation somewhere it resolves it but this is why i thought it is always replace.\n\nScreenshot from 2020-09-18 16-48-53835×128 12.1 KB\nThen i used the same but with $project,results was like addFields butThis is complicated for me ,i just want to replace the value.\nWithout thinking what was there,and what i add.\nWhy it works this way?Where this helps?\nEspecially the update of the array with the document as all its members looks so un-expected.\nThere is a way to just replace the old field,and never update,using one stage?Thank you.",
"username": "Takis"
},
{
"code": "db.test.aggregate([\n { \"$addFields\" : { \"myobject3\" : { \"afield1\" : \"\" }}}\n])\nreturns the updated field as: { \"myobject3\": { \"afield\": \"\", \"afield1\": \"\" }}\nafield1myobject3\"myobject3\": {\"afield\": \"\" }\"myobject3\": {\"afield1\": \"\" }{\"$addFields\" : { \"myobject3\": { \"afield1\" : \"\" }, \"myobject3.afield\" : \"$$REMOVE\" } }{\"$addFields\" : { \"myobject3.afield1\" : \"\", \"myobject3.afield\" : \"$$REMOVE\" } }{\"$project\" : { \"myobject3.afield1\" : \"\" } }$project_iddb.test.aggregate([\n { \"$project\" : { \"myobject3\" : { \"afield1\" : \"\" } } }\n])\nreturns => \"myobject3\" : { \"afield1\" : \"\" }\n$project_id\"myobject3.afield1\"db.test.aggregate([\n { \"$addFields\" : { \"myarray3\" : { \"afield1\" : \"\" } } },\n])\nreturns the array updated as: {\"myarray3\" : [{\"afield1\" :\"\"} {\"afield1\" :\"\"}]}\n$project",
"text": "Hello @Takis,Yes, your observations are correct. Here are some clarifications (I think).document → add document => update document, added becomes its memberThis behavior is correct. It is adding a new field afield1 to the myobject3 sub-document.If you want to replace the \"myobject3\": {\"afield\": \"\" } with \"myobject3\": {\"afield1\": \"\" } then you can use one of the following:{\"$addFields\" : { \"myobject3\": { \"afield1\" : \"\" }, \"myobject3.afield\" : \"$$REMOVE\" } }-or-{\"$addFields\" : { \"myobject3.afield1\" : \"\", \"myobject3.afield\" : \"$$REMOVE\" } }-or-{\"$project\" : { \"myobject3.afield1\" : \"\" } }NOTE: The $project will exclude all other fields, except the _id (which is included by default).document → add document => replace\nThen i used the same but with $project,results was like addFields butThis behaviour is correct:This is because $project ignores all the fields that are not included, except the _id (which is included by default). In this case only the \"myobject3.afield1\" is included…array → add document => update, all members of the array becomes that document\n{“$addFields” : {“myarray3” : { “afield1” : “” } } }\nbecame\n{“myarray3” : [{“afield1” :“”} {“afield1” :“”}]}Yes, the aggregation:The behavior is same with $project also. Adds, the sub-document as elements of the array, replacing the existing array elements.I am afraid this is the expected behavior, according to this MongoDB JIRA: $project computed fields on arrays sets every element of the array – inconsistent with $set",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hello ,and thank you for the reply.To me the intuitive was to replace the value always.\nBut this is not the case,and from the JIRA looks like they decided to leave it like this.\nIf they update the documentation also it would be nice.I used a custom notation ,and a function that auto produce 3 stages,so i can always replace\nif i want,hopefully without big perfomance cost.We will see : )",
"username": "Takis"
}
] | Unexpected behaviour with $addFields and projection on existing field name | 2020-09-18T13:31:17.576Z | Unexpected behaviour with $addFields and projection on existing field name | 3,450 |
|
null | [
"installation"
] | [
{
"code": "",
"text": "I can’t seem to get 4.4 to start on windows 10 pro. when i run mongod.exe it simply returns with no output. i get the same result no matter what flags I pass in. It simply won’t start or give any sort of feedback. This is the same with 4.4.0 and 4.4.1. Any ideas on what’s wrong?",
"username": "Justin_Lee"
},
{
"code": "mongodmongod --dbpath <somePathToAnExistingEmptyFolder>\n--auth",
"text": "Hi @Justin_Lee,How did you install it? As a service or just the binaries? Which command line are you using to start it?Usually, for a “Hello World”, mongod just needs a path for the storage:If you are planning something more serious, I would at least recommend an additionnal --auth, log management, etc.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "i’m extracting the zip files and running them as parts of a test framework. I pass more options (via config file) in the “live” version but it fails to start. Running from the command line with no options has, traditionally, returned an error/usage message but I get nothing with 4.4.",
"username": "Justin_Lee"
},
{
"code": "",
"text": "This test framework works with the linux downloads. it’s just the windows 4.4.x versions that have never started up for me. But everything back through 3.6 works just fine.",
"username": "Justin_Lee"
},
{
"code": "",
"text": "so apparently powershell will just swallow errors but using cmd i get a pop up complaining about a missing vcruntime140_1.dll. so a clue at least if not quite a fix just yet.",
"username": "Justin_Lee"
},
{
"code": "",
"text": "Reinstalling the Visual c++ runtime fixed it.This article lists the download links for the latest versions of Visual C++ Redistributable packages.",
"username": "Justin_Lee"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Starting 4.4 on windows | 2020-09-19T16:03:19.883Z | Starting 4.4 on windows | 3,098 |
null | [
"kotlin"
] | [
{
"code": "",
"text": "I’m sure its been asked before but I didn’t see it…I have a string value for the _id I want to findIf I find() I get back {\"_id\": {\"$oid\": “5f64d2413ff0231ed8ef1b54”}, “Tag 0”: “I’m here”}\nIf I find using eq(“Tag 0” and “I’m here”) I get the jdocBut I can’t figure out the syntax for using a string and _idHelp",
"username": "Bob_Pappenhagen"
},
{
"code": "\n \n return personCollection.find().into(new ArrayList<>());\n }\n \n @Override\n public List<Person> findAll(List<String> ids) {\n return personCollection.find(in(\"_id\", mapToObjectIds(ids))).into(new ArrayList<>());\n }\n \n @Override\n public Person findOne(String id) {\n return personCollection.find(eq(\"_id\", new ObjectId(id))).first();\n }\n \n @Override\n public long count() {\n return personCollection.countDocuments();\n }\n \n @Override\n public long delete(String id) {\n return personCollection.deleteOne(eq(\"_id\", new ObjectId(id))).getDeletedCount();\n \n ",
"text": "Hi @Bob_Pappenhagen and welcome in the MongoDB Community !This is how I do it with Java. I guess it’s somewhat similar in Kotlin?Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | Kotlin find() using _id primary key with a string | 2020-09-18T18:06:25.120Z | Kotlin find() using _id primary key with a string | 4,970 |
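For reference, the equivalent lookup in the mongo shell (the collection name is hypothetical); the key point from the Java answer above is that the hex string has to be wrapped in an ObjectId before it is compared against _id:

// A bare string will not match an ObjectId _id:
db.things.find({ _id: "5f64d2413ff0231ed8ef1b54" })              // no match
// Wrapping the string in ObjectId(...) matches the document shown above:
db.things.find({ _id: ObjectId("5f64d2413ff0231ed8ef1b54") })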
[] | [
{
"code": "accounts",
"text": "Hi,\ni was just upgrading my mongodb version from 4.2 to 4.4 to explore some new features of mongoDB so i googled it and found the method but the first step was to take the backup of previous data so i did the same step and start exporting the collections but after few collections i notices that it’s only exporting 4 records of each collections. it doesn’t seems good to me.\nHere are some screenshots of mongodbexport command.\nBy the way accounts collection have only ONE record and it exported FOUR, how?image1352×485 24.7 KB\n\nonly one record in accounts collection and 4 records in accountStandingInstructions",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "i exported all the collections and i saw some collections contain .BSON extension if it contain some jibrish type language, don’t know how this happens \nI’m worried about this!",
"username": "Nabeel_Raza"
},
{
"code": "mongoexportmongodumpmongodumpdumpmongorestoredump",
"text": "Hi @Nabeel_Raza,First, your commands lines are not correct.\nimage2068×733 375 KB\nYou are always downloading the same collection over and over again but in a different file each time.Second, which version of mongoexport are you using? Did you also update it to 100.1.1?Third, you got bson + metadata.json files because you used mongodump for these.If you want to backup the entire database, the easiest solution is to use mongodump which by default will create a dump folder with all your databases and collections. mongorestore can then be used to restore these documents using this dump folder.You can read more about these tools in the MongoDB documentation.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks @MaBeuLux88 for you response which was helpful for me ",
"username": "Nabeel_Raza"
}
] | Mongodbexport does not export all collections | 2020-09-18T04:56:19.102Z | Mongodbexport does not export all collections | 4,911 |
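A minimal sketch of the backup/restore flow suggested in this thread, with a placeholder connection string and output path:

# mongodump writes one folder per database containing .bson data files
# plus .metadata.json files (which explains the files seen above).
mongodump --uri="mongodb://localhost:27017" --out=./dump
# mongorestore reads the same folder back, e.g. after upgrading to 4.4.
mongorestore --uri="mongodb://localhost:27017" ./dump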
null | [] | [
{
"code": "",
"text": "Hi ,For enterprise content management system Document database we are currently using Oracle ContentDB. Now we are thinking to use MongoDB. Can someone please give steps how to migrate it whole system from Oracle ContentDB to Mongodb. Thanks.",
"username": "Debasis_Sahu"
},
{
"code": "",
"text": "Hi @Debasis_Sahu,There is no magic tool that can best migrate your deployment and you will need to perform some extraction , transfer and load to MongoDB to leverage the document model concepts.Please review the following:Courses for SQL pros:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Best practices for delivering performance at scale with MongoDB. Learn about the importance of JSON document data modeling and memory sizing.A summary of all the patterns we've looked at in this seriesBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "also we need to know what shhould be the middleware if we use mongoDB for enterprise content management system Document database. Please help on this. Thanks.",
"username": "Debasis_Sahu"
},
{
"code": "",
"text": "Hi @Debasis_Sahu,This also depends on software architecture and language.Please read more in our Driver documentation\nhttps://docs.mongodb.com/drivers/Best\nPavel",
"username": "Pavel_Duchovny"
}
] | Migrating from Oracle Content DB to MongoDB | 2020-09-17T09:25:42.663Z | Migrating from Oracle Content DB to MongoDB | 1,782 |
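One possible shape of the load step mentioned in this thread, sketched in the mongo shell with entirely hypothetical fields; extracting the content metadata from Oracle ContentDB and transforming it would happen upstream of this call:

// Hypothetical documents produced by an upstream extract/transform step.
db.contentItems.insertMany([
  { title: "Policy.pdf", owner: "hr", tags: ["policy"], uploadedAt: new Date() },
  { title: "Invoice-2020-09.docx", owner: "finance", tags: ["invoice"], uploadedAt: new Date() }
])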