image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---
[
"aggregation"
] | [
{
"code": "",
"text": "Hi Friends,\nI am new to mongoDB and it’s really exciting.I have a scenario where i need to group on common fields(3) span across collections and have to perform summing up all the numeric columns by column wise and all string/text columns need to make “Nan” or “Null”.Help me to form the query…\nattaching two sample collections, great regards from my side.The common fields across the collections are, X, Y_DT and Z. and other columns either numeric, string/text.col1771×743 15 KB col2743×752 15.2 KB",
"username": "Murali_Muppireddy"
},
{
"code": "{\n $group: {\n \"_id\": {\n X: \"$X\",\n Y_DT: \"$Y_DT\",\n Z: \"$Z\"\n },\n adj: {\"$sum\": \"$adj\"},\n bjc: {\"$sum\": \"$bjc\"},\n ...\n }\n} \n0$sum",
"text": "Hi @Murali_Muppireddy, you would want to do something similar to the following:Any non numeric value will be converted to 0 for the purposes of the $sum operation.",
"username": "Doug_Duncan"
},
{
"code": " db.test650.aggregate(\n{\n $group: {\n \"_id\": {\n X: \"$X\",\n Y_DT: \"$Y_DT\",\n\t\t\tZ: \"$Z\"\n },\n\t\tadj: {$sum: \"$adj\" }, \n\t\tbjc: {$sum: \"$bjc\" },\n\t\tjbc: {$sum: \"$jbc\" },\n\t\tmnk: {$sum: \"$mnk\"}\n }\n }\n)\n",
"text": "Thank @Doug_Duncan, I could able to get the expected results…\nwith the following query…But in case of bigger collection with many number of fields, say like 500, it will be very complex to write all the filed names and sumup, is there any generic way we can write some thing like looping rest of the fields(other than grouping ones)?",
"username": "Murali_Muppireddy"
},
{
"code": "db.test.aggregate([\n {\n $group: {\n // specify group-by fields here\n _id: {\n x: '$x',\n y: '$y',\n z: '$z',\n },\n docsEntries: {\n $push: {\n $objectToArray: '$$CURRENT',\n },\n },\n },\n },\n {\n $addFields: {\n docKeysList: {\n $map: {\n input: {\n $arrayElemAt: ['$docsEntries', 0],\n },\n in: '$$this.k',\n },\n },\n },\n },\n {\n $addFields: {\n // gather fields, values of which we will sum-up\n filteredKeyList: {\n $filter: {\n input: '$docKeysList',\n cond: {\n $not: {\n // specify ignored fields, that you do not want to calculate\n // at minimum, here should be _id and grouping keys\n $in: ['$$this', ['_id', 'x', 'y', 'z']],\n },\n },\n },\n },\n },\n },\n {\n $addFields: {\n // collect all entries (values of same key from all docs in the group)\n // in the single array\n groupedEntries: {\n $map: {\n input: '$filteredKeyList',\n as: 'filteredKey',\n in: {\n $reduce: {\n input: '$docsEntries',\n initialValue: [],\n in: {\n $let: {\n vars: {\n targetDocEntry: {\n $filter: {\n input: '$$this',\n as: 'docEntry',\n cond: {\n $eq: ['$$docEntry.k', '$$filteredKey'],\n },\n },\n },\n },\n in: {\n $concatArrays: ['$$value', '$$targetDocEntry'],\n },\n },\n },\n },\n },\n },\n },\n },\n },\n {\n $addFields: {\n calculatedEntries: {\n $map: {\n // we need to ned to return { k, v } for each key, so we can\n // transform it to single object with custom prop names\n input: '$groupedEntries',\n as: 'groupedEntry',\n in: {\n k: {\n $let: {\n vars: {\n item: {\n $arrayElemAt: ['$$groupedEntry', 0],\n },\n },\n in: {\n // here the custom prop name is calculated\n // feel free to change the logic if needed\n $concat: ['total_', '$$item.k'],\n },\n },\n },\n v: {\n $reduce: {\n input: '$$groupedEntry',\n initialValue: 0,\n in: {\n $add: ['$$value', {\n $convert: {\n input: '$$this.v',\n to: 'double',\n // Change NaN to 0 for onError prop,\n // if your props can contain various value types\n // and you cant to calculate only number values\n // for every document field\n onError: NaN,\n onNull: 0,\n },\n }],\n },\n },\n },\n },\n },\n },\n },\n },\n {\n $addFields: {\n results: {\n $arrayToObject: '$calculatedEntries',\n },\n },\n },\n {\n $project: {\n results: true,\n },\n },\n]).pretty();\ndb.test.insertMany([\n { x: 1, y: 1, z: 1, propA: 5, propB: '5', propC: 't5' },\n { x: 1, y: 1, z: 2, propA: 10, propB: '10', propC: 't10' },\n { x: 1, y: 1, z: 2, propA: 15, propB: '15', propC: 't15' },\n]);\n[\n {\n \"_id\" : {\n \"x\" : 1,\n \"y\" : 1,\n \"z\" : 1\n },\n \"results\" : {\n \"total_propA\" : 5,\n \"total_propB\" : 5,\n \"total_propC\" : NaN\n },\n },\n {\n \"_id\" : {\n \"x\" : 1,\n \"y\" : 1,\n \"z\" : 2\n },\n \"results\" : {\n \"total_propA\" : 25,\n \"total_propB\" : 25,\n \"total_propC\" : NaN\n },\n },\n]\n",
"text": "collection with many number of fields, say like 500Seems a bit crazy is there any generic way we can write some thing like looping rest of the fields(other than grouping ones)?For the crazy request - crazy solution! And for the following documents:The aggregation will return:The solution is not ideal and can be improved, but that is another crazy story I made this just to proove, that this is possible with Mongo v4.2.",
"username": "slava"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | How to group on multiple columns and sum up individual columns | 2020-05-12T17:43:20.135Z | How to group on multiple columns and sum up individual columns | 62,133 |
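A quick illustration of the `$sum` behaviour described in the thread above (the sample values are made up; the collection and field names follow the thread):

```javascript
// Hypothetical sample data: "bjc" holds a string in one of the documents.
db.test650.insertMany([
  { X: 1, Y_DT: ISODate("2018-05-15"), Z: "a", adj: 10, bjc: 5 },
  { X: 1, Y_DT: ISODate("2018-05-15"), Z: "a", adj: 20, bjc: "text" }
]);

db.test650.aggregate([
  { $group: {
      _id: { X: "$X", Y_DT: "$Y_DT", Z: "$Z" },
      adj: { $sum: "$adj" },  // 30
      bjc: { $sum: "$bjc" }   // 5 -- the non-numeric value contributes 0
  } }
]);
```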
|
null | [
"connecting"
] | [
{
"code": "",
"text": "I try to connect to database to local port using /Users/charles/mongodb/bin/mongod —-dbpath=/charles/mongodb-data. Error returned\nAutomatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’\nNo TransportLayer configured during NetworkInterface startup\nInvalid command: —-dbpath=/charles/mongodb-data\nBut i can see that --dbpath is clearly a storage options as seen in the options menu\n–dbpath arg Directory for datafiles - defaults to\n/data/db\nThis was working perfectly for 6 months since i started working with the mongodb community server and the command just stopped working after i restarted my computer esterday",
"username": "charles_T"
},
{
"code": "",
"text": "It might be the markup language for this forum but the first dash seems odd. There is clear difference between the dash dash of your commandmongod —-dbpath=/charlesand the dash dash of the sentenceBut i can see that --dbpath …The first dash of the command is longer.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks you are right",
"username": "charles_T"
}
] | Fail to connect to local server after restarting Mac pro | 2020-06-25T19:58:43.533Z | Fail to connect to local server after restarting Mac pro | 2,403 |
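For reference, the command from the first post with plain ASCII double dashes (the long em dash was the culprit):

```
/Users/charles/mongodb/bin/mongod --dbpath=/charles/mongodb-data
```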
null | [] | [
{
"code": "{\n _id: \"user_id\",\n first: \"first name\",\n last: \"last name\",\n addresses: [\n {city: \"New York\", state: \"NY\", zip: \"10036\", address: \"229 W 43rd\"},\n {city: \"Palo Alto\", state: \"CA\", zip: \"94301\", address: \"100 Forest Ave\"}\n ]\n}\n{_id:\"user_id\",first: \"first name\", last: \"last name\"}\n{city: \"New York\", state: \"NY\", zip: \"10036\", address: \"229 W 43rd\"}\n{city: \"Palo Alto\", state: \"CA\", zip: \"94301\", address: \"100 Forest Ave\"}\n",
"text": "Hi,\nI need solution for these case, help me out.\nsub-document should be converted to new document along with main existing document.Example:Output should be like these:Please help me out on this.",
"username": "AKASH_SKY"
},
{
"code": "db.your_collection.aggregate([\n {\n $group: {\n _id: null,\n usersDocs: {\n $push: {\n _id: '_id',\n first: '$first',\n last: '$last',\n },\n },\n addrDocs: {\n $push: '$addresses',\n },\n },\n },\n // flatten array in $addressDocs prop\n {\n $set: {\n addrDocs: {\n $reduce: {\n input: '$addrDocs',\n initialValue: [],\n in: {\n $concatArrays: ['$$value', '$$this'],\n },\n },\n },\n },\n },\n]);\n{ \n usersDocs: [...],\n addrDocs: [...] \n} \n{\n $project: {\n mixedDocs: {\n $concatArrays: ['$usersDocs', '$addrDocs'],\n },\n },\n},\n{\n $unwind: '$mixedDocs',\n},\n{\n $replaceWith: '$mixedDocs',\n},\n",
"text": "Hello, @AKASH_SKY! Welcome to the community!\nHave a look at this aggregation:The above command will provide you with the result like this:This is, often, a better structure, as you group group all the objects of the same type in one array:Such approach, often, is more practical But, of course, you can get array of mixed objects, by adding in the end of that aggregation pipeline, those stages:I am curious, what is the reason, that you want to have an array of mixed objects? ",
"username": "slava"
},
{
"code": "{\n _id: 1,\n name: \"Parent product\",\n is_child: false,\n is_parent: true,\n children: [\n {\n _id: 1,\n name: \"Child 1\",\n is_child: true,\n is_parent: false,\n children : []\n },\n {\n _id: 2,\n name: \"Child 2\",\n is_child: true,\n is_parent: false,\n children : []\n }\n ] \n}\n{\n _id: 1,\n name: \"Parent product\",\n is_child: false,\n is_parent: true,\n},\n {\n _id: 1,\n name: \"Child 1\",\n is_child: true,\n is_parent: false,\n children : []\n},\n{\n _id: 2,\n name: \"Child 2\",\n is_child: true,\n is_parent: false,\n children : []\n}\n",
"text": "Hi, @slava I got your point. But, i have a parent child relationship in a same collection. So, to fetch the child objects, i’m using lookup but it returns child objects in an array.Result like this:This is my exact structure. Parent & child record should be in different objects.Output like this:@slava So can you help me out to get the exact structure of output.",
"username": "AKASH_SKY"
},
{
"code": "db.your_collection.aggregate([\n { $group: { ... } },\n { $set: { ... } },\n { $project: { ... } },\n { $unwind: { ... } },\n { $replaceWith: ... }\n]);\n{\n _id,\n parentId,\n name\n}\n",
"text": "can you help me out to get the exact structure of output.Actually, the code I provided above, will provide you the exact result that you want.\nYou just need to put everything into 1 single aggregation:Also, consider to restructure your documents like this:With this structure, every object, parent or child will be in a separate documents in the collection. If parentId is not null, then this document is a child, otherwise it is parent. Also, it will be easy to do a $lookup of children by parentId value ",
"username": "slava"
},
{
"code": "",
"text": "I have followed the step which you have mentioned\ndb.your_collection.aggregate([\n{ $group: { … } },\n{ $set: { … } },\n{ $project: { … } },\n{ $unwind: { … } },\n{ $replaceWith: … }\n]);But, i’m getting error\nScreenshot (230)493×505 9.69 KBProvide solution",
"username": "AKASH_SKY"
},
{
"code": "db.your_collection.aggregate([\n {\n $group: {\n _id: null,\n usersDocs: {\n $push: {\n _id: '$_id',\n first: '$first',\n last: '$last',\n },\n },\n addrDocs: {\n $push: '$addresses',\n },\n },\n },\n // flatten array in $addressDocs prop\n {\n $addFields: {\n addrDocs: {\n $reduce: {\n input: '$addrDocs',\n initialValue: [],\n in: {\n $concatArrays: ['$$value', '$$this'],\n },\n },\n },\n },\n },\n {\n $project: {\n mixedDocs: {\n $concatArrays: ['$usersDocs', '$addrDocs'],\n },\n },\n },\n {\n $unwind: '$mixedDocs',\n },\n {\n $replaceRoot: {\n newRoot: '$mixedDocs',\n },\n },\n]);\n",
"text": "Seems like your version of MongoDB is not the latest.\n$set and $replaceWith stages are available in MongoDB since v4.2.Try this aggregation:",
"username": "slava"
},
{
"code": "",
"text": "I’m using MongoDB version v4.0.6.\nI’ve tried $addFields & $replaceRoot, it’s working mean while i got the solution what i’ve expected.\nThank you so much @slava\nThe query may cause any performance related issue, if have lakhs of records in collection ?",
"username": "AKASH_SKY"
},
{
"code": "{\n $unwind: '$mixedDocs',\n},\n{\n $replaceRoot: {\n newRoot: '$mixedDocs',\n },\n},\n",
"text": "The query may cause any performance related issue, if have lakhs of records in collection ?Well, with this aggregation you process every single document in your collection. That means, each new inserted document will increase the time, needed for this aggregation to execute. You can remove those two stages:The result will be a bit different, but the aggregation will be a bit faster.\nThe aggregation is already well-optimised. But, sooner or later, it will become slow.In your situation it is better to rethink the structure of you document. Looks like you have a tree structure.\nI think, in your case, you should consider using Tree structures with parent refs.",
"username": "slava"
},
{
"code": "",
"text": "Thank you @slava for your kind information regarding performance related. Once again thank u.",
"username": "AKASH_SKY"
},
{
"code": ",{ explain: true},{ allowDiskUse: true}",
"text": "Hello @AKASH_SKYconcerning aggregation and performance there are some rules of thumb, I try to compile a list here. This might not be complete, but hopefully a good starter:Basically, you want to ensure that your aggregation queries are able to use indexes as much as possible.\nImportant to know: data moves through your pipeline from the first operator to the last, once the server encounters a stage that is not able to use indexes, all of the following stages will no longer be able to use indexes either.\nIn order to determine how aggregation queries are executed and whether or not indexes are being utilized, you can pass ,{ explain: true} as an option to the aggregation method. This will produce an explain output with lot of prepossessing details.The $match operator is able to utilize indexes. Operators that use indexes must be at the front of your pipelines. Similarly, you want to put $sort stages as close to the front as possible. Performance can be degraded when sorting isn’t able to use an index. For this reason, make sure that your sort stages come before any kind of transformations so that you can make sure that indexes are used for sorting.If you’re doing a $limit and doing a $sort, make sure that they’re near each other and at the front of the pipeline. Then, the server is able to do a top-k sort. This is when the server is able to only allocate memory for the final number of documents. This does not need indexes!Your results are all subject to the 16 megabyte document limit that exist in MongoDB. Aggregation generally outputs a single document, and that single document will underlie this limit. This limit does not apply to documents as they flow through the pipeline. The best way to mitigate this issue is by using $limit and $project to reduce your resulting document size.Another limitation is that for each stage in your pipeline, there’s a 100 megabyte limit of RAM usage. The best way to mitigate this is to ensure that your largest stages are able to utilize indexes. If you’re still running into this 100 megabyte limit even if you’re using indexes, then there’s an additional way to get around it. And that is by specifying ,{ allowDiskUse: true} on your aggregation query. This will allow you to spill to disk, rather than doing everything in memory. A word of warning : this is a absolute last resort measure. Hard drives are dramatically slower than memory, so by splitting to disk, you’re going to see serious performance degradation.Cheers,\nMichael",
"username": "michael_hoeller"
}
] | Sub-document to new document along with existing main document | 2020-06-24T18:30:36.295Z | Sub-document to new document along with existing main document | 8,456 |
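A minimal sketch of the two aggregation options mentioned in the last reply above; `pipeline` stands in for whatever stages you are running:

```javascript
const pipeline = [ /* your stages */ ];

// Inspect how the pipeline executes (index usage per stage).
db.your_collection.aggregate(pipeline, { explain: true });

// Last-resort option when a stage exceeds the 100 MB in-memory limit.
db.your_collection.aggregate(pipeline, { allowDiskUse: true });
```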
null | [] | [
{
"code": "",
"text": "Hi,Now mongo stitch sdk will disapear.Do you know when you will add social login ( FaceBook, Google, … ) ?Thanks.",
"username": "Jonathan_Gautier"
},
{
"code": "",
"text": "Hi Jonathan – I’m not sure which SDK you’re referring to (a few already have the providers enabled) but if you’re looking for the Web SDK you can follow this Github issue.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Thanks i have follow and put question ! But got no response or any planning about login features or phase 2 !Can you give some deadline or planning about this, if you can ?",
"username": "Jonathan_Gautier"
},
{
"code": "",
"text": "Hi Drew,we are currently using ‘Sign in with Apple’ as our only authentication method in our iOS app with Realm Cloud, will this continue to work after the move to MongoDB Realm?",
"username": "Daniel_Kristensen"
}
] | Social Login MongoDB Realm SDK | 2020-06-16T17:55:09.745Z | Social Login MongoDB Realm SDK | 2,426 |
null | [
"golang"
] | [
{
"code": "\tif err != nil {\n\t\tif aerr, ok := err.(awserr.Error); ok {\n\t\t\t// If it's an AWS error, override the dumb message to a human readable one\n\t\t\tswitch aerr.Code() {\n\t\t\tcase secretsmanager.ErrCodeDecryptionFailure:\n\t\t\t\terr = fmt.Errorf(\"Secrets Manager can't decrypt the protected secret text using the provided KMS key\")\n\t\t\tcase secretsmanager.ErrCodeInternalServiceError:\n\t\t\t\terr = fmt.Errorf(\"An error occurred on the server side\")\n\t\t\tcase secretsmanager.ErrCodeInvalidParameterException:\n\t\t\t\terr = fmt.Errorf(\"You provided an invalid value for a parameter\")\n\t\t\tcase secretsmanager.ErrCodeInvalidRequestException:\n\t\t\t\terr = fmt.Errorf(\"You provided a parameter value that is not valid for the current state of the resource\")\n\t\t\tcase secretsmanager.ErrCodeResourceNotFoundException:\n\t\t\t\terr = fmt.Errorf(\"The secret was not found or we don't have permission to view it\")\n\t\t\t}\n\t\t}\n\t}\n\terrStr := err.Error()\n\tswitch {\n\tcase strings.HasPrefix(errStr, \"(IndexKeySpecsConflict) Index must have unique name.\"):\n\tcase strings.HasPrefix(errStr, \"(DuplicateKey)\"):\n\t\t// We can ignore conflicts\n\t\t// log.WithFields(logrus.Fields{\n\t\t// \t\"index\": m.Key,\n\t\t// \t\"collection\": coll.Name(),\n\t\t// \t\"database\": coll.Database().Name(),\n\t\t// \t\"conflictErr\": err,\n\t\t// }).Debug(\"Index was previously created - this is an Ensure function, so this is fine\")\n\t\terr = nil\n\tcase strings.HasPrefix(errStr, \"(DatabaseDifferCase)\"):\n\t\t// The casing is different on the DB - we need to error out\n\t\tlog.WithFields(logrus.Fields{\n\t\t\t\"index\": m.Key,\n\t\t\t\"collection\": coll.Name(),\n\t\t\t\"database\": coll.Database().Name(),\n\t\t\t\"err\": err,\n\t\t}).Errorf(\"Failed to create index because a DB with the same name exists with a different case\")\n\tdefault:\n\t\tlog.WithFields(logrus.Fields{\n\t\t\t\"index\": m.Key,\n\t\t\t\"collection\": coll.Name(),\n\t\t\t\"database\": coll.Database().Name(),\n\t\t\t\"err\": err,\n\t\t}).Errorf(\"Failed to create index\")\n\t}\n",
"text": "In AWS SDK, they have AWS specific errors that are defined as consts that you can check against, e.g.Is there something similar to do for mongo? Are you guys using some kind of wrapped errors? How I’m having to do error checking presently involves a strings.Contains, e.g.Is there a better/recommended method for checking for errors in general in mongo?",
"username": "TopherGopher"
},
{
"code": "mongo.CommandErrorDuplicateKeymongo.WriteExceptionmongo.BulkWriteExceptionWriteExceptionInsertManyBulkWriteIsDup",
"text": "Hi @TopherGopher,Looks like all of these are server errors, so they should all be of type mongo.CommandError. The trickiest one is DuplicateKey because the format of that error has changed over server versions. For example, it might be returned as a write error from the server, in which case it’d be a mongo.WriteException or mongo.BulkWriteException, depending on the method you use (all writes return a WriteException except for InsertMany and BulkWrite).The DuplicateKey case is definitely confusing. There’s an open GODRIVER ticket to implement something like mgo’s IsDup function to handle all of that logic: https://jira.mongodb.org/browse/GODRIVER-972.– Divjot",
"username": "Divjot_Arora"
}
] | How should we be checking for errors in mongo-go-driver? | 2020-06-17T16:52:06.116Z | How should we be checking for errors in mongo-go-driver? | 6,804 |
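Until GODRIVER-972 ships an `IsDup`-style helper, something along these lines avoids the string matching shown above. This is only a sketch for driver v1.x; the duplicate-key check relies on the conventional server error code 11000.

```go
package mongoutil

import (
	"errors"

	"go.mongodb.org/mongo-driver/mongo"
)

// IsDuplicateKey reports whether err represents a duplicate-key violation
// (server error code 11000), whether it surfaced as a write exception,
// a bulk-write exception, or a command error.
func IsDuplicateKey(err error) bool {
	var we mongo.WriteException
	if errors.As(err, &we) {
		for _, e := range we.WriteErrors {
			if e.Code == 11000 {
				return true
			}
		}
	}
	var bwe mongo.BulkWriteException
	if errors.As(err, &bwe) {
		for _, e := range bwe.WriteErrors {
			if e.Code == 11000 {
				return true
			}
		}
	}
	var ce mongo.CommandError
	if errors.As(err, &ce) && ce.Code == 11000 {
		return true
	}
	return false
}
```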
null | [
"installation"
] | [
{
"code": "",
"text": "Hi, I am using the password which I created while log in to the site. But when I use it for installation of the software it says enter administrative password. I tried to enter the password which I used while creating the login, but its failing.\nplease help me with what should be the administrative password?",
"username": "tina_mistry"
},
{
"code": "",
"text": "Your title is bit misleadingYou have to use your system admin pwd while installating not mongodb site id/pwd",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Welcome to the community @tina_mistry!Please provide more information on what you are trying to install (software and O/S version).I’m guessing you might be referring to installing the MongoDB Community Server on Windows, which will allow you to setup MongoDB to run as a service.We’ll need more details in order to offer any suggestions.Thanks,\nStennie",
"username": "Stennie_X"
}
] | Password for administration for downloading the software | 2020-06-25T15:16:20.061Z | Password for administration for downloading the software | 1,575 |
null | [
"python"
] | [
{
"code": "",
"text": "Hi everyone,Here is my project :I’m creating a Python application, with a user interface. This application has to be connected to a MongoDB database, in ordre to save datas. My aim is to create an installer for this app, so that anyone can install it and access to a local MongoDB database, without having to download or install MongoDB previously.I would like to put all these steps inside my app : installing a mongodb server and connecting the app to this server.Is it possible ?\nAnd if yes, how can I do it ?\nI didn’t manage to find a way to do it !Thank you.",
"username": "Claire_Ceresa"
},
{
"code": "",
"text": "Hi Claire,It may be possible since Python can execute shell command using various methods (popen, subprocess, etc.). However, installing any server is usually more involved than just running the executable, and some people may consider auto-installing a server they don’t know about to be a little invasive.Instead, how about mentioning that your app depends on a working MongoDB server, and installing the server should be the first step in installing your app?Another idea is to use a cloud-based MongoDB deployment e.g. Atlas as your app’s database, so you’ll know it’s always available and you wouldn’t depend on the user to be able to properly install the MongoDB server by themselves.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "m",
"text": "Hi Claire,You will need to roll your own custom solution to achieve something like this. Your solution will need to do 2 things:Overall, I think building a stable solution that does what you like will take significant effort and will be hard to maintain (URLs of MongoDB binaries might change, steps required to configure a cluster might change, etc.). I think Kevin’s suggestion of requiring the user to have a working MongoDB deployment is a better idea.-Prashant",
"username": "Prashant_Mital"
}
] | Installing MongoDB server via Python | 2020-05-25T09:28:02.027Z | Installing MongoDB server via Python | 1,969 |
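If you do go down the bundled-server route, step 2 of the outline above (spawning the process) might look roughly like this. It is only a sketch and assumes the mongod binary has already been unpacked to a path the app knows about:

```python
import subprocess
from pathlib import Path

def start_local_mongod(mongod_path: str, dbpath: str = "./mongodb-data", port: int = 27017):
    """Spawn a mongod process for the application and return the Popen handle."""
    Path(dbpath).mkdir(parents=True, exist_ok=True)
    return subprocess.Popen(
        [mongod_path, "--dbpath", dbpath, "--port", str(port)],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )

# Example usage (the binary path is a placeholder):
# proc = start_local_mongod("/opt/myapp/mongodb/bin/mongod")
# ... connect with PyMongo on localhost:27017, and proc.terminate() on exit ...
```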
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Is there a way to use a synced MongoDB Realm with WidgetKit?I had a Today Widget with a local realm some years ago before synced realms\nwhere introduced, when it was introduced it wasn’t supported due to no fileURL for synced realms.",
"username": "Daniel_Kristensen"
},
{
"code": "",
"text": "@Daniel_Kristensen You cannot use a synced Realm in WidgetKit because we do not have multiprocess support for sync. You can use a non-synced Realm in WidgetKit",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Synced MongoDB Realm with WidgetKit | 2020-06-25T12:24:41.510Z | Synced MongoDB Realm with WidgetKit | 2,227 |
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "We have dynamic creation of database with fix prefix string. eg test_abc, test_xyz.They have same collections inside. So I want to watch db names with regex or some other means.How could I achieve this? I need to push every changes of collection to kafka topics in those dbs.",
"username": "Mohammed_Aadil"
},
{
"code": "",
"text": "You should be able to use a pipieline to watch for your dynamic database names.The Change Stream Output shows what these events look like. In your source connector define a pipeline…“pipeline”: \"[{\"$match\": {\"ns.db\":{\"$regex\": /^(mydb1|mydb2)$/}}}]\",you can also use $in for explicit matches but since you dynamic, use regex",
"username": "Robert_Walters"
}
] | Kafka source connector - How can I watch for dynamic db names? | 2020-03-10T05:36:41.443Z | Kafka source connector - How can I watch for dynamic db names? | 2,021 |
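Applied to the prefix from the question above, a source connector configuration might look like this. The connector name and connection URI are placeholders; the `pipeline` property is the one shown in the reply.

```json
{
  "name": "mongo-source-dynamic-dbs",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://localhost:27017",
    "pipeline": "[{\"$match\": {\"ns.db\": {\"$regex\": \"^test_\"}}}]"
  }
}
```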
[] | [
{
"code": "",
"text": "Hello!\nI’m developing an android app in JAVA and I’m using Mongo Stitch to do the Login and Authentitfication for the users.\nNow I want to have a functionality that deletes the authentifiacted user in my Stitch App, but I didn’t found any recourse how to do this.\nI just know the manual deletion in the Browser ( as you can see in the screenshot). Can anyone help me out?image1571×448 39.9 KB",
"username": "Matthias_Linder"
},
{
"code": "",
"text": "@Matthias_Linder I believe you will want to use the Realm Admin API for this -\nhttps://docs.mongodb.com/realm/admin/api/v3/index.html#delete-/groups/{groupid}/apps/{appid}/users/{uid}",
"username": "Ian_Ward"
}
] | Delete Realm/Stitch User in Android SDK JAVA | 2020-06-25T15:16:25.083Z | Delete Realm/Stitch User in Android SDK JAVA | 1,535 |
|
null | [
"node-js"
] | [
{
"code": "electron-rebuildC:\\src\\vcpkg\\installed\\x64-windows-static\\libREALM_SSL_LIBS",
"text": "When using realm-js with Electron, I have to run electron-rebuild. The rebuild recompiles realms from sources, so it needs SSL libs installed on C:\\src\\vcpkg\\installed\\x64-windows-static\\lib. It is possible to configure the path somehow or to make it relative? E.g. some env variable like REALM_SSL_LIBS etc.?\nThanks",
"username": "Ondrej_Medek"
},
{
"code": "README.mdlibssl-devREADME.md",
"text": "Moreover, I miss in README.md doc about build on Linux (need to install at least libssl-dev) and that Windows build needs Windows SDK 10 (it was Windows SDK 8.1 in the former versions).Also, the README.md starts with: “… and Node.js (on MacOS and Linux) but …” - missing Windows ",
"username": "Ondrej_Medek"
},
{
"code": "electron-rebuild",
"text": "Hi @Ondrej_Medek,It has been a while since you posted this question. Were you able to find a solution?If you still need help investigating, can you provide some more information about your environment including the versions of:The specific error encountered during electron-rebuild would also be useful.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "electron-rebuildC:\\src\\vcpkg\\installed\\x64-windows-static\\libREADME.mdREADME.md",
"text": "Hi @Stennie_X, the electron-rebuild work ok for me (realm-js 5.0.2, Electron 8) on Windows. It just needs to have SSL libs on the absolute path C:\\src\\vcpkg\\installed\\x64-windows-static\\lib, it is described in realm-js README.md in the section Building Realm I am just asking if it would be possible to have this path relative or configurable, e.g. by an env variable.Also, the README.md has quite a few obsolete information.",
"username": "Ondrej_Medek"
},
{
"code": "",
"text": "Hi Ondrej,\nFeature requests and adjustments to the readme would be very welcome as github issues. The forum here is mostly for community answers to “how do I?” kind of questions.\nThanks!",
"username": "Brian_Munkholm"
},
{
"code": "",
"text": "Ok, I’ve made a GitHub issue #2784",
"username": "Ondrej_Medek"
}
] | Electron-rebuild and ssl libs on Windows | 2020-03-18T14:42:20.394Z | Electron-rebuild and ssl libs on Windows | 3,336 |
null | [
"security"
] | [
{
"code": "",
"text": "Anyone can help please\nwhen I create admin user in mongo shell I got this problem\nuncaught exception: Error: couldn’t add user: command createUser requires authentication :",
"username": "PT_Tadib_Sejahtera"
},
{
"code": "",
"text": "You need to authenticate with the admin user to create subsequent users.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @PT_Tadib_SejahteraIf you are trying to create your first user when authentication is enabled you need to do it using mongo on the server that mongodb is running.Or you can follow the Enable Access Control Document.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't add user after creating admin user: createUser requires authentication | 2020-06-25T08:10:14.566Z | Can’t add user after creating admin user: createUser requires authentication | 30,377 |
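For reference, creating that first administrative user from a mongo shell running on the server itself (per the Enable Access Control document linked above) looks roughly like this; the user name and role are just examples:

```javascript
db.getSiblingDB("admin").createUser({
  user: "admin",
  pwd: passwordPrompt(),   // or pass a plain string on older shells
  roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
})
```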
null | [
"queries"
] | [
{
"code": "{\n\"pidNumber\" : NumberLong(12103957251), \n\"eventDate\" : ISODate(\"2018-05-15T00:00:00.000+0000\")\n} \n",
"text": "Quite an interesting case. I have an enormous MongoDB collection with lots of documents. These are two of the fields ( I changed the field names).I need to count all the instances where the date is older than 1 year but ONLY if there’s a more recent document with the same pidNumber.So for example: If there’s only one document with pidNumber 1234 and it’s from three years ago - keep it (don’t count). But if on top of that there’s another document with pidNumber 1234 and it’s from two years ago - then count the three years old one.Is it possible to do? Does anyone have on how to do it?Thanks ahead!",
"username": "Jack_Smith"
},
{
"code": "db.your_collection.aggregate([\n {\n $group: {\n _id: '$pidNumber',\n dates: {\n $push: '$eventDate',\n },\n },\n },\n {\n $addFields: {\n totalOldDates: {\n $filter: {\n input: '$dates',\n cond: {\n // replace <oneYearAgoDate> with your date value\n $gte: ['$$this', <oneYearAgoDate>], \n },\n },\n },\n },\n },\n {\n $project: {\n hasEnoughOldDates: {\n $gt: [{ $size: '$totalOldDates' }, 1],\n },\n },\n },\n {\n $match: {\n hasEnoughOldDates: true,\n },\n },\n {\n $count: 'total',\n },\n]);\n",
"text": "Hello, @Jack_Smith! Welcome to the community!If I understood you correctly, you need to count all the ‘pidNumber’ in the collection, that have two or more dates, older than specified date, right?If so, this aggregation will provide you the desired result:",
"username": "slava"
}
] | How to delete old documents only if there's a more recent document with the same value | 2020-06-25T10:56:55.282Z | How to delete old documents only if there’s a more recent document with the same value | 2,907 |
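For the `<oneYearAgoDate>` placeholder in the aggregation above, a cut-off date one year in the past can be computed in the shell like this:

```javascript
const oneYearAgoDate = new Date();
oneYearAgoDate.setFullYear(oneYearAgoDate.getFullYear() - 1);
// pass oneYearAgoDate into the $filter condition shown above
```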
null | [
"dot-net",
"compass",
"mongodb-shell"
] | [
{
"code": "{\n \"dateFrom\": {\n \"$date\": {\n \"$numberLong\": \"1569888000000\"\n }\n }\n }\n}\n",
"text": "q1) pls tell how can I do it using mongo compass and also from mongo shell in date datatypefor ex we keep date in this formate , what i have to do is, i want to keep it null\nand want to filter it in c# using mongo driver.",
"username": "Rajesh_Yadav"
},
{
"code": "const intDate = Date.now();\ndb.your_collection.insert({ d: intDate });\ndb.your_collectoin.insert({ d: null });\ndb.createCollection('your_collection', {\n validator: {\n $jsonSchema: {\n bsonType: 'object',\n required: [ 'd' ],\n properties: {\n d: {\n bsonType: ['double', 'null'],\n },\n },\n },\n },\n});\n",
"text": "Hello, @Rajesh_Yadav!\nIf I understood you correctly, you want to have property ‘d’ in the document, that would contain integer representation of date or null, if date is not provided?\nIf so, consider this mongo shell example:You can also add some validation for ‘d’ property:With the above validator, you will be able to insert either int or null as a value for property ‘d’. Property ‘d’ is required (must always be present in any inserted document).",
"username": "slava"
}
] | How can I insert null in date data type of MongoDB | 2020-03-31T10:41:28.086Z | How can I insert null in date data type of MongoDB | 19,021 |
null | [
"atlas-device-sync"
] | [
{
"code": "Realm.asyncOpen(configuration:realm = try! Realm(configuration:",
"text": "The getting started guide Open a Synced Realm indicates the first time a realm is opened, it should be with asyncOpen.Realm.asyncOpen(configuration:The reason is that the normal Realm() initializer is a write and creates the database schema which would fail on a Read Only realm.If Realm.asyncOpen is used the first time, but then the normal initializer is used thereafterrealm = try! Realm(configuration:would that still not be a write? Should synced realms only be opened with .asyncOpen?What’s the proper process here - should .asyncOpen be tossed into the AppDelegate or app opening sequence?The documentation implies this applies to all realms?If it is the first time opening that particular realm, especially if it is a read-only realmbut then says specifically for read-onlythe first time you open any given read-only realm.Any guidance is appreciated.",
"username": "Jay"
},
{
"code": "",
"text": "would that still not be a write? Should synced realms only be opened with .asyncOpen?No it will not be a write because after the first initial asyncOpen, the try Realm method will use the cached file on disk.What’s the proper process here - should .asyncOpen be tossed into the AppDelegate or app opening sequence?I would toss it in the initial app opening sequence but probably not in AppDelegate, in your first initial VC after logging in. asyncOpen is typically tied to some initial loading screen and you can tie progressListeners to it to show the user that data is downloading.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_WardJust saw the new 5.1 release which has a bit of a behavior change/addition from the above:So read-only realms can now be accessed without asyncOpen.",
"username": "Jay"
},
{
"code": "",
"text": "It sure does Jay - glad to see you are following out releases!",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This change is great news! Makes it a lot easier to work with readonly realms ",
"username": "Simon_Persson"
}
] | Opening Sync'd Realm Sequence | 2020-06-20T14:27:22.216Z | Opening Sync’d Realm Sequence | 2,419 |
null | [] | [
{
"code": "python-driverpython",
"text": "G’day MongoDB Community members!It has been just over four months since we officially launched the MongoDB Developer Community Forums and there have been a lot of great discussions.We’ve made a few small category/tag adjustments along the way to improve your experience, but have recently taken a more holistic look at the categories, tags, and discussion to better fit current usage.Here’s what has changed:“Drivers, ODMs, and Connectors” has been split into two categories: Drivers & ODMs and Connectors & Integrations. Connectors & Integrations includes discussions on Kafka, Spark, the MongoDB Connector for BI, and other integrations or use cases like ETL.Tags referencing a programming language used to have -driver suffixes (for example, python-driver is now python). We’ve dropped the suffix to be more consistent with usage: discussions tagged with a programming language may be about drivers, SDKs, or other development questions. Any previously used tag names will redirect to their new equivalent.We’ve added a new Developer Tools category for discussions about the MongoDB Shell, Compass, VS Code extension, and similar topics.The “MongoDB Cloud” category has been renamed to MongoDB Atlas and there is a new top-level category for MongoDB Charts. Past discussions about MongoDB Stitch have been moved into the MongoDB Realm category.The existing “Getting Started with MongoDB” has been renamed to Other MongoDB Topics and moved lower on the homepage category list. This category is for discussions that don’t fit into other categories.We’ll also be adding some relevant resource links to the “About …” descriptions in each category and pinning those posts so they are easy to find. For example: About the Developer Tools category.Please let us know what you think of the changes, or if you have any additional suggestions.Thanks,The MongoDB Community Team",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Forum category/tag adjustments June 2020 | 2020-06-25T05:10:37.414Z | Forum category/tag adjustments June 2020 | 3,492 |
[
"charts",
"on-premises"
] | [
{
"code": "",
"text": "HiI would like to know when you plan to release a new version for MongoDB Charts (On-Prem) (last release was 19-12-1 December 16, 2019) and if in the new release you will include the nice feature: Dashboard Filtering:659×988 73.8 KBThank you.All the bestRui",
"username": "Rui_Ribeiro"
},
{
"code": "",
"text": "Hi @Rui_Ribeiro! Currently the team’s focus is entirely on the cloud version of Charts, and we don’t have a planned date for updating the on-prem version. We’ll post a note here when we have any updated information.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Charts - Dashboard Filtering on (On-Prem) Version | 2020-06-24T18:30:44.156Z | MongoDB Charts - Dashboard Filtering on (On-Prem) Version | 3,332 |
|
null | [
"mongoose-odm",
"connecting"
] | [
{
"code": "console.log(\"mongo db connection\", err);\n",
"text": "Hi,\nNew to Mongo environment, I am trying to build an express app with mongo using mongoose:After requiring mongoose, I use the following:\"\"mongoose.connect(uri,{ useNewUrlParser: true, useUnifiedTopology: true },err => {});“”However, after running my app.js I get the following error:\n{Error: querySrv ETIMEOUT _mongodb._tcp.cluster0-rsadq.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (dns.js:197:19)\nerrno: ‘ETIMEOUT’,\ncode: ‘ETIMEOUT’,\nsyscall: ‘querySrv’,\nhostname: ‘_mongodb._tcp.cluster0-rsadq.mongodb.net’ }Note that I already whitelisted my IP and that I use the standard uri format where I replaced test with the db nama as well as password…Thank you!",
"username": "sam23"
},
{
"code": "",
"text": "querySrv ETIMEOUT _mongodb._tcp.cluster0Are you able to connect by shell?\nMake sure no firewall/antivirus/vpn blocking your request",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hello @sam23 wellcome to the community!As Ramachandra mentioned: can you please check if you can connect via the mongo shell to make sure that you can reach the database? In this MongoDB Doc you find further information how to connect. In case you get stuck feel free to post further information.Since you start with MongoDB and want to use Mongoose I like to point you also to this great article Do You Need Mongoose When Developing Node.js and MongoDB Applications from @adoCheers,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Thank you for your recommandations. I will try to switch to mongo over mongoose. Concerning shell connection, as you excepected I have a problem:C:\\Users\\hp>mongo\nMongoDB shell version v4.2.6\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n2020-06-23T16:48:50.665+0000 E QUERY [js] Error: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Aucune connexion n�a pu �tre �tablie car l�ordinateur cible l�a express�ment refus�e. :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n2020-06-23T16:48:50.676+0000 F - [main] exception: connect failed\n2020-06-23T16:48:50.677+0000 E - [main] exiting with code 1What do you suggest for resolving this problem?",
"username": "sam23"
},
{
"code": "mongo 'mongodb+srv://cluster0-rsadq.mongodb.net'\n",
"text": "When you type mongo, you are trying to connect locally.Error connecting to 127.0.0.1:27017Since your cluster is cluster0-rsadq.mongodb.net, you should tryCheck the usage on the first link provided by @michael_hoeller, to see how to supply your credentials.",
"username": "steevej"
},
{
"code": "",
"text": "Finally, I get this error:DNSHostNotFound: Failed to look up service “”:Cette op├®ration sÔÇÖest termin├®e car le d├®lai dÔÇÖattente a expir├®.\ntry ‘C:\\mongodb\\bin\\mongo.exe --help’ for more information",
"username": "sam23"
},
{
"code": "",
"text": "I followed the docs but this time I get this :C:\\Users\\hp>mongo ‘mongodb+srv://cluster0-rsadq.mongodb.net’ -u my_username -p password\nMongoDB shell version v4.2.6\n2020-06-23T20:52:01.514+0000 F - [main] exception: No digits\n2020-06-23T20:52:01.516+0000 E - [main] exiting with code 1Note that my_username and password were replaced and I get the same log",
"username": "sam23"
},
{
"code": "",
"text": "Use normal double or single quotes. Yours are new web version.",
"username": "steevej"
},
{
"code": "",
"text": "That didn’t change anything, I keep getting an error related to: DNSHostNotFound: Failed to look up service",
"username": "sam23"
},
{
"code": "",
"text": "But your earlier post gives different error with SRV string(no digits)Did your try with doble quotes as Steeve suggested?or create a test user and share pwd for us to checkI tried your string with some id/pwd\nIt gave authentication erroror try long form of connect string which has names of all 3 nodes of your cluster with replicaset name\nIt is possible your are using a DNS resolver that does not support DNS seedlist\nAlso make sure no vpn/firewall,anti virus issues blocking your connection",
"username": "Ramachandra_Tummala"
},
{
"code": "mi01@LU30:~$ mongo 'mongodb+srv://cluster0-rsadq.mongodb.net' -u my_username -p password\nMongoDB shell version v3.6.8\nconnecting to: mongodb+srv://cluster0-rsadq.mongodb.net\n2020-06-24T07:05:20.706+0200 I NETWORK [thread1] Starting new replica set monitor for Cluster0-shard-0/cluster0-shard-00-02-rsadq.mongodb.net.:27017,cluster0-shard-00-01-rsadq.mongodb.net.:27017,cluster0-shard-00-00-rsadq.mongodb.net.:27017\n2020-06-24T07:05:20.793+0200 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to cluster0-shard-00-00-rsadq.mongodb.net.:27017 (1 connections now open to cluster0-shard-00-00-rsadq.mongodb.net.:27017 with a 5 second timeout)\n2020-06-24T07:05:20.793+0200 I NETWORK [thread1] Successfully connected to cluster0-shard-00-02-rsadq.mongodb.net.:27017 (1 connections now open to cluster0-shard-00-02-rsadq.mongodb.net.:27017 with a 5 second timeout)\n2020-06-24T07:05:20.813+0200 I NETWORK [thread1] changing hosts to Cluster0-shard-0/cluster0-shard-00-00-rsadq.mongodb.net:27017,cluster0-shard-00-01-rsadq.mongodb.net:27017,cluster0-shard-00-02-rsadq.mongodb.net:27017 from Cluster0-shard-0/cluster0-shard-00-00-rsadq.mongodb.net.:27017,cluster0-shard-00-01-rsadq.mongodb.net.:27017,cluster0-shard-00-02-rsadq.mongodb.net.:27017\n2020-06-24T07:05:20.880+0200 I NETWORK [thread1] Successfully connected to cluster0-shard-00-00-rsadq.mongodb.net:27017 (1 connections now open to cluster0-shard-00-00-rsadq.mongodb.net:27017 with a 5 second timeout)\n2020-06-24T07:05:20.887+0200 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to cluster0-shard-00-02-rsadq.mongodb.net:27017 (1 connections now open to cluster0-shard-00-02-rsadq.mongodb.net:27017 with a 5 second timeout)\n2020-06-24T07:05:20.965+0200 I NETWORK [thread1] Successfully connected to cluster0-shard-00-01-rsadq.mongodb.net:27017 (1 connections now open to cluster0-shard-00-01-rsadq.mongodb.net:27017 with a 5 second timeout)\nImplicit session: session { \"id\" : UUID(\"55b56a74-945f-47d9-923d-dabe2ab99112\") }\nMongoDB server version: 4.2.8\nWARNING: shell and server versions do not match\n2020-06-24T07:05:21.141+0200 I NETWORK [thread1] Marking host cluster0-shard-00-02-rsadq.mongodb.net:27017 as failed :: caused by :: Location8000: can't authenticate against replica set node cluster0-shard-00-02-rsadq.mongodb.net:27017: Authentication failed.\n2020-06-24T07:05:21.179+0200 I NETWORK [thread1] Marking host cluster0-shard-00-00-rsadq.mongodb.net:27017 as failed :: caused by :: Location11002: can't authenticate against replica set node cluster0-shard-00-00-rsadq.mongodb.net:27017: socket exception [CONNECT_ERROR] for cluster0-shard-00-00-rsadq.mongodb.net:27017\n2020-06-24T07:05:21.216+0200 I NETWORK [thread1] Marking host cluster0-shard-00-01-rsadq.mongodb.net:27017 as failed :: caused by :: Location11002: can't authenticate against replica set node cluster0-shard-00-01-rsadq.mongodb.net:27017: socket exception [CONNECT_ERROR] for cluster0-shard-00-01-rsadq.mongodb.net:27017\n2020-06-24T07:05:21.259+0200 I NETWORK [thread1] Marking host cluster0-shard-00-02-rsadq.mongodb.net:27017 as failed :: caused by :: Location8000: can't authenticate against replica set node cluster0-shard-00-02-rsadq.mongodb.net:27017: Authentication failed.\n2020-06-24T07:05:21.259+0200 E QUERY [thread1] Error: can't authenticate against replica set node cluster0-shard-00-02-rsadq.mongodb.net:27017: Authentication failed. 
:\nDB.prototype._authOrThrow@src/mongo/shell/db.js:1608:20\n@(auth):6:1\n@(auth):1:2\nexception: login failed\nmi01@LU30:~$\n",
"text": "Hello @sam23the connection string seems to be ok, quote or double quote should not matter. I can reach your DB, but not authenticate since I used my_username / passwordAs Ramachandra noted, this seems to be more of an infrastructure problem.It is possible your are using a DNS resolver that does not support DNS seedlist\nAlso make sure no vpn/firewall,anti virus issues blocking your connection",
"username": "michael_hoeller"
},
{
"code": "",
"text": "You could try using google’s DNS servers at 8.8.8.8 and 8.8.4.4.",
"username": "steevej"
},
{
"code": "",
"text": "I think I am close to establishing the connection: I verified whitelisted IPs, removed antivirus but this time:connecting to: mongodb://cluster0-rsadq.mongodb.net:27017/?compressors=disabled&gssapiServiceName=mongodb*** It looks like this is a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.2020-06-24T15:45:21.281+0000 E QUERY [js] Error: couldn’t connect to server cluster0-rsadq.mongodb.net:27017, connection attempt failed: HostNotFound: Could not find address for cluster0-rsadq.mongodb.net:27017: SocketException: H�te inconnu. :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n2020-06-24T15:45:21.296+0000 F - [main] exception: connect failed\n2020-06-24T15:45:21.296+0000 E - [main] exiting with code 1I get a Host Not Found error!By the way thank you for your indications, I learned a lot of concepts by looking up these issues.",
"username": "sam23"
},
{
"code": ";; ANSWER SECTION:\ncluster0-rsadq.mongodb.net. 59 IN TXT \"authSource=admin&replicaSet=Cluster0-shard-0\"\n;; ANSWER SECTION:\n_mongodb._tcp.cluster0-rsadq.mongodb.net. 59 IN SRV 0 0 27017 cluster0-shard-00-00-rsadq.mongodb.net.\n_mongodb._tcp.cluster0-rsadq.mongodb.net. 59 IN SRV 0 0 27017 cluster0-shard-00-01-rsadq.mongodb.net.\n_mongodb._tcp.cluster0-rsadq.mongodb.net. 59 IN SRV 0 0 27017 cluster0-shard-00-02-rsadq.mongodb.net.\n",
"text": "The URI you are using is wrong. You are using part old style and part SRV. The DNS name cluster9-rsadq.mongodb.net does not correspond to a host. It correspond to a cluster. It has the following DNS entry:So your hosts are:To use SRV, remove the port number and use mongodb+srv:// rather than mongodb:// like it was given in the examples.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Beginner issue with database connect | 2020-06-23T01:20:56.489Z | Beginner issue with database connect | 11,051 |
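Assembled from the SRV and TXT answers quoted in the thread, the long-form (non-SRV) connection string for that cluster, useful when the local DNS resolver cannot handle seedlists, would be roughly the following; the username and the `test` database are placeholders:

```
mongo "mongodb://cluster0-shard-00-00-rsadq.mongodb.net:27017,cluster0-shard-00-01-rsadq.mongodb.net:27017,cluster0-shard-00-02-rsadq.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin" -u my_username -p
```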
null | [] | [
{
"code": "",
"text": "Hi everybody!\nI want to show you a problem:\nin RDBMS i would like to do this querySELECT PERS_ID FROM table WHERE type=‘A’ and PERS_ID not in (SELECT PERS_ID from TABLE where type=‘B’)how can i do this query in mongo?I have\n{pers_id: “1”, type: “A”}\n{pers_id: “2”, type: “A”}\n{pers_id: “2”, type: “B”}\n{pers_id: “3”, type: “A”}\n{pers_id: “4”, type: “B”}\nand i want to see as result{pers_id: “1”}\n{pers_id: “3”}How can i do??",
"username": "Annsa_Lisa"
},
{
"code": "db.your_collection.find({ type: 'A' }, { _id: false, pers_id: true });\ndb.your_collection.aggregate([\n {\n $match: {\n type: 'A', \n }, \n },\n {\n $project {\n _id: false, \n pers_id: true,\n }, \n },\n]);\n",
"text": "Hello, @Annsa_Lisa, welcome to the community!\nYou can achieve the desired result with .find() commandOr using .aggregate() command:",
"username": "slava"
},
{
"code": "",
"text": "I don’t think that is what I want…\nI want to show collections where the pers_id with type A doesn’t have any other document of type B with the same pers_id.\nCan you help me?\nThanks",
"username": "Annsa_Lisa"
},
{
"code": "",
"text": "I would $group by pers_id using $addToSet the type in a types array and then $match out documents that have type B in the types array.",
"username": "steevej"
},
{
"code": "SELECT PERS_ID FROM table WHERE type=‘A’db.selfjoin.aggregate([\n { '$group': {\n '_id': '$pers_id',\n 'type': { '$addToSet': '$type' }\n }\n },{ \n '$match': { 'type': { $ne: 'B' } } \n}]);\n// MongoDB Playground\n// Select the database to use.\nuse('test');\n\n// The drop() command destroys all data from a collection.\ndb.selfjoin.drop();\n\n// Insert a few documents into the selfjoin collection.\ndb.selfjoin.insertMany([\n { 'pers_id' : '1', 'type' : 'A' },\n { 'pers_id' : '2', 'type' : 'A' },\n { 'pers_id' : '2', 'type' : 'B' },\n { 'pers_id' : '3', 'type' : 'A' },\n { 'pers_id' : '4', 'type' : 'B' },\n]);\n\n// Run an aggregation \nconst aggregation = [\n { '$group': {\n '_id': '$pers_id',\n 'type': { '$addToSet': '$type' }\n }\n },{ \n '$match': { 'type': { $ne: 'B' } } \n}];\n\ndb.selfjoin.aggregate(aggregation);\n",
"text": "Hello @slavayour statement will be the transformation of:\nSELECT PERS_ID FROM table WHERE type=‘A’@Annsa_Lisa is looking for all pers_id with type A but not Bthis can be archived as @Ramachandra_Tummala already wrote.Here is the code for it:In case you use VSCode as editor you can, or may already have, add the mongodb plugin which was introduced here from @Massimiliano_Marcon and run the attached code as playground:Cheers,\nMichael",
"username": "michael_hoeller"
}
] | How to transform a SQL self join to a MongoDB query | 2020-06-24T11:00:15.308Z | How to transform a SQL self join to a MongoDB query | 10,117 |
null | [] | [
{
"code": "",
"text": "We are looking forward to getting started with Mongo DB Realm in our production app.\nIs there any documentation for users using the Legacy Realm Cloud to migrate to the Mongo DB Realm?",
"username": "Enoooo"
},
{
"code": "",
"text": "We don’t have a guide available right now but it is something I am looking to build with MongoDB support. The thing to keep in mind here is that the sync versions are incompatible and any clients that are upgraded need to re-download their data from the new MongoDB Realm sync. Because of this you will need to move the Realm data in Realm Cloud and insert it as documents into MongoDB Atlas. Then enable syncThe basic steps would be:",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thank you for the explanation.\nUntil the guide is published, I will try each steps.\nIf any questions, I’ll ask them again on the forum.",
"username": "Enoooo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to migrate from a legacy realm cloud to a MongoDB Realm | 2020-06-21T12:38:12.865Z | How to migrate from a legacy realm cloud to a MongoDB Realm | 2,574 |
null | [
"replication"
] | [
{
"code": " **\"optimeDate\" : ISODate(\"2020-10-03T12:32:22Z\"),**\n",
"text": "Hi,I would like to ask, why we are seeing different durable and optime entries.\nTt is ahead of time. For where this entries are been taken. How can I find that it is coming.I also ried with overiting from config file using timeStampFormat: iso8601-local, but it is same**\t\t\t“optimeDurableDate” : ISODate(“2020-10-03T12:32:22Z”),**\n“lastHeartbeat” : ISODate(“2020-06-20T00:56:36.126Z”),\n“lastHeartbeatRecv” : ISODate(“2020-06-20T00:56:36.302Z”),@Doug_Duncan if you have any insights to this, it is very helpful.",
"username": "Aayushi_Mangal"
},
{
"code": "",
"text": "Hi @Aayushi_Mangal I can’t say that I’ve ever seen this before. Have you checked the system time on all of your replica set servers? It would seem that one of them has the wrong system date. If it does, I would recommend making sure your system has a NTP client running to keep times in sync.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @Doug_Duncan,Thank you for your reply.Yes, i have checked all the servers date with “date” command, and it is all same in IST, can you please recommend any other command to check for in particular.",
"username": "Aayushi_Mangal"
},
{
"code": "",
"text": "Also, we are using IST instead of UTC, do we need any other changes also at server side, or mongoside",
"username": "Aayushi_Mangal"
},
{
"code": "optimeDateoptimeDate",
"text": "Hi @Aayushi_Mangal unfortunately I’m not sure how the optimeDate could be 3.5 months ahead if the system dates are correct. Are the log files all showing the correct timestamps?I wonder if the date was wrong at some point in time and then got corrected. Is the optimeDate value increasing or is it staying at 2020-10-03T12:32:22Z?@kevinadi can you think of anything that would cause an issue like this? I’m sure you’ve seen a lot more interesting MongoDB issues than I have. ",
"username": "Doug_Duncan"
},
{
"code": "rs.status()rs.printReplicationInfo()mongo",
"text": "Hi Aayushi,I believe @Doug_Duncan was correct in that at some point the server time was accidentally moved forward, then was fixed.Just to double check, could you post the output of rs.status() and rs.printReplicationInfo() from the mongo shell? Please also post your MongoDB version.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "020-06-20T05:55:38.364+0530 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host hostname based on ismaster reply: { hosts: [ \"hostname\", \"hostname\", \"hostname\" ], setName: \"test\", setVersion: 5, ismaster: false, secondary: true, primary: \"hostname\", me: \"hostname\", lastWrite: { opTime: { ts: Timestamp(1601728342, 18363), t: 5 }, lastWriteDate: new Date(1601728342000), majorityOpTime: { ts: Timestamp(1601728342, 18363), t: 5 }, majorityWriteDate: new Date(1601728342000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1592612738364), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ \"snappy\" ], ok: 1.0, operationTime: Timestamp(1601728342, 18363), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1601728342, 18363), $configServerState: { opTime: { ts: Timestamp(1592609481, 2), t: 31 } }, $clusterTime: { clusterTime: Timestamp(1601728342, 18371), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }\n",
"text": "Hello @kevinadi,Yes that seems issue related to NTP or cluster time. Can you help me to confirm from where it takes that value from the server. Like any particular existing file from where it refers or map that value.Does it anyway belongs to keyfilebeing used in cluster, since if keyfile is different even at mongo, the cluster will not communicate at all, but here cluster communicated, but found that date.Here is the log, where I found first that is the issue, where it is taking 3rd oct 2020 date:Mongodb version: 4.0Other details I will not be able as it is in sync now, and I am unable to simulate the case.",
"username": "Aayushi_Mangal"
},
{
"code": "optimeDateoptimeDurableDateoptimeDateopTime: { ts: Timestamp(1601728342, 18363) ...\nTimestamp(..., ...)> new Date(1601728342000)\nISODate(\"2020-10-03T12:32:22Z\")\noptimeDate",
"text": "Hi Aayushi,Can you help me to confirm from where it takes that value from the server.The optimeDate & optimeDurableDate are from the oplog. It is the timestamp of the latest recorded operation in the oplog, and used to sort events in the oplog in chronological order even though your server time is inaccurate. In reality, it is an implementation of Lamport clock to provide oplog event ordering.Thus, it is not a problem if optimeDate is further ahead from the real date.The log line you posted is not the actual event where the server time drifted 4 months ahead as can be seen in this part of the log:The first part of the Timestamp(..., ...) number is a date:which is exactly the date in your earlier post.The second number signifies that there are already 18363 write operations already happened at that point (see Timestamp specification), so the clock drifted way earlier.Does it anyway belongs to keyfilebeing used in clusterNo it has nothing to do with keyfiles.You can simulate this behaviour by starting a 3-node replica set, change the system clock of the primary forward, do some writes on the primary, then set the system clock back to the correct time. You will see that you arrive at exactly this situation.Unfortunately this situation is not reversible, e.g. you cannot set optimeDate back to the current date as that would render the oplog unable to determine the order of operations anymore. Having said that, this situation will fix itself in 4 months time, assuming no more clock jumps happen in the meantime Best regards,\nKevin",
"username": "kevinadi"
},
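If it helps to verify the explanation above, a minimal mongo shell sketch (reusing the timestamp from the quoted log, run against a replica set member) would be:

```javascript
// The first component of Timestamp(1601728342, 18363) is seconds since the
// Unix epoch, so converting it shows the wall-clock date it encodes:
new Date(1601728342 * 1000);   // ISODate("2020-10-03T12:32:22Z")

// The newest oplog entry carries that timestamp; a reverse natural-order
// read of the oplog shows it without needing any index:
db.getSiblingDB("local").oplog.rs.find({}, { ts: 1, ns: 1, op: 1 })
  .sort({ $natural: -1 })
  .limit(1);

// rs.printReplicationInfo() summarises the first/last oplog event times,
// as in the outputs quoted earlier in this thread.
rs.printReplicationInfo();
```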
{
"code": "",
"text": "Hi @kevinadi,Thank you for your detailed responce, that is helpful.\nI was able to simulate the case, but yes it was not causing any problem. Lat time when I faced this issue, i was not able to insert data from mongos to shard, and my logs are full of with this error messges:D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceledBut now, when i changed system time and tried to simulate the case, i found that optime is changed, but i was able to insert into the data.",
"username": "Aayushi_Mangal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Replication Oplog is ahead of time | 2020-06-20T05:38:45.480Z | Replication Oplog is ahead of time | 3,673 |
null | [
"atlas",
"atlas-triggers"
] | [
{
"code": "",
"text": "Is there a re-drive mechanism for MongoDB triggers in Atlas? I can not find it in Atlas UI, neither can find documentation about itExample:My trigger function is calling a service, if the service is down at that time I can not find a way to re-drive these failed attempts later.AWS Lambda for example, has a Dead Letter Queue so you can re-drive the failed events later if needed.There should be some mechanism that stores failed events for a period of times and some retry logic like:\n-X times every 10 minutes, if all attempts failed, send events to S3 or an SQS, and you should be able to manually retry them later or ignore them",
"username": "Franklin_Rivero"
},
{
"code": "",
"text": "No - there’s not a similar retry feature built-in to the Atlas Trigger facility. You can code this by inspecting the trigger state using a function. In the trigger configuration, you can specify the function name, or code for a function to be executed as a result of the trigger operation. That being said, there is a restart configuration for triggers that get stuck in a suspended state - https://docs.atlas.mongodb.com/triggers/#restart-a-suspended-trigger - but I don’t think that’s what you’re after.",
"username": "Michael_Lynn"
}
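To illustrate the 'code it yourself' suggestion above, here is a hedged sketch of an Atlas Trigger function that retries the downstream call and parks failures for manual re-drive. The service URL, collection names and retry counts are assumptions, not built-in behaviour:

```javascript
exports = async function (changeEvent) {
  const maxAttempts = 3;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      // Replace with the real downstream call (HTTP service, function, etc.)
      await context.http.post({
        url: "https://example.com/my-service",
        body: JSON.stringify(changeEvent.fullDocument),
        headers: { "Content-Type": ["application/json"] }
      });
      return; // success, nothing more to do
    } catch (err) {
      if (attempt === maxAttempts) {
        // Park the failed event in a "dead letter" collection so it can be
        // replayed (re-driven) manually later.
        await context.services
          .get("mongodb-atlas")
          .db("ops")
          .collection("trigger_dead_letters")
          .insertOne({ failedAt: new Date(), error: String(err), event: changeEvent });
      }
    }
  }
};
```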
] | Is there a retry mechanism for Atlas trigger functions? | 2020-06-23T18:06:05.873Z | Is there a retry mechanism for Atlas trigger functions? | 2,771 |
null | [] | [
{
"code": "",
"text": "I’m going to be attempting to set up queries to multiple mongoDB servers to query a collection which has different values but the same schema and was wondering if anyone has done this or if there’s some standard method for approaching this.Our use case is that I’m working with health data and the model for keeping the data secured is by having a private DB and a public DB for each institute in our consortium. This way each member can control what data is shown publicly to the other consortium members due to data sensitivity. This means that the data will be structured in the same way on each system but contain different data. So what we need to do then is query the DBs and work with the collated results.What we we’re thinking of doing is opening a connection to each DB, preform the query, and then collate the results for additional processing. Is their a way that mongoDB has that would handle this for us? (ie provide multiple connections and then it handles the query and collation) or do we have to do it on our end?",
"username": "Kim_Ng"
},
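As a rough illustration of the 'connect to each DB and collate' approach described above, a Node.js sketch follows; the connection strings and namespace names are placeholders:

```javascript
const { MongoClient } = require("mongodb");

async function collatedSamples(filter) {
  const uris = [
    "mongodb://institute1.example.net/healthdb",   // private data
    "mongodb://institute2.example.net/healthdb"    // public data
  ];

  const perServer = await Promise.all(
    uris.map(async (uri) => {
      const client = new MongoClient(uri);
      await client.connect();
      try {
        // Same schema everywhere, so the same query works on every server.
        return await client.db().collection("samples").find(filter).toArray();
      } finally {
        await client.close();
      }
    })
  );

  // Collate: a flat concatenation is enough when the schemas match.
  return perServer.flat();
}
```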
{
"code": "mongos",
"text": "Hi @Kim_Ng,This is essentially what happens with sharding. A proxy layer (mongos) sits between the application and MongoDB replica sets, then when a query is performed it will execute and collate the results.You can cleverly pick a shard key that distributes the data to whichever shard (or replica set) where you’d like to store the data.Hope that helps!JustinPS. Field level encryption may be useful to you. I suggest taking a look: https://docs.mongodb.com/drivers/use-cases/client-side-field-level-encryption-guide",
"username": "Justin"
},
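One concrete way to 'pick where the data lives', as mentioned above, is zone sharding; a hedged shell sketch with made-up shard, zone and database names:

```javascript
sh.enableSharding("healthdb");
sh.shardCollection("healthdb.samples", { institute: 1, _id: 1 });

// Associate each shard with a zone, then pin each institute's key range
// to its zone so its documents stay on its own shard.
sh.addShardToZone("shard-institute-1", "institute1");
sh.addShardToZone("shard-institute-2", "institute2");
sh.updateZoneKeyRange(
  "healthdb.samples",
  { institute: "institute_1", _id: MinKey },
  { institute: "institute_1", _id: MaxKey },
  "institute1"
);
sh.updateZoneKeyRange(
  "healthdb.samples",
  { institute: "institute_2", _id: MinKey },
  { institute: "institute_2", _id: MaxKey },
  "institute2"
);
```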
{
"code": "",
"text": "Hi @Justin,Thanks for the reply.I had a look at sharding before and it was my impression that it probably didn’t work for our use case as sharding applied to collections accross servers is read/writable to all who have access on that collection. In our case we want to ensure data control for public and private data within each institute. So only that institute would modify it’s own public data. So if we had a collection called samples then in our current understanding we’d have:Query for samples at institute 1: samples against servers instutute_1_private and instutute_2_public\nQuery for samples at institute 2: samples against servers instutute_2_private and instutute_1_publicMy colleague thinks we may be able to get around this if we utilize both union and sharding so that full data control would still be in place. Following the idea on the last example we’d have:Query for samples at institute 1: samples_private union samples_public_institute_2 against servers instutute_1 and instutute_2\nQuery for samples at institute 1: samples_private union samples_public_institute_1 against servers instutute_1 and instutute_2If this seems like it may be possible but requires more in depth information perhaps we can also be put into contact with a paid consultant to flesh out the details of this viability of this.As for the field level encryption we’re looking into that as well as a way to layer on additional data protection, I think it’ll be very useful.",
"username": "Kim_Ng"
},
{
"code": "",
"text": "Sharding doesn’t change the access strategies and restrictions placed on your underlying collections by the application. We can definitely dive into this more if you’re interested in a paid consulting engagement. I’m a consulting engineer and we work with requirements like this often.Have you worked with a MongoDB Account Executive in the past? He/she can help look at consulting options.Thanks,Justin",
"username": "Justin"
},
{
"code": "",
"text": "Hi @Justin,I ran it by my two fellow devs on the project and with the data controls we want in place it makes sense for us to pursue the first strategy and not the sharded approach (namely to limit access with DB credentials). We’ll likely also put an API to control the queries and what can affect the DBs more so.With this we don’t plan on contacting a consultant currently but may look into it again in the future (haven’t done it before). Thanks for this info and I’ll google how to contact a MongoDB Account Executive when we need in the future.",
"username": "Kim_Ng"
}
] | Querying multiple mongoDB servers to do a query on a collection | 2020-06-17T07:51:25.723Z | Querying multiple mongoDB servers to do a query on a collection | 4,804 |
null | [
"connecting",
"spring-data-odm"
] | [
{
"code": " MongoCredential credential = MongoCredential.createCredential(\"username\", \"database1\", \"password\".toCharArray());\n MongoClientSettings settings = MongoClientSettings.builder()\n .credential(credential)\n .retryWrites(true)\n .applyToConnectionPoolSettings(builder ->\n builder.maxConnectionIdleTime(5000, TimeUnit.MILLISECONDS))\n .applyToClusterSettings(builder -> {\n builder.hosts(Arrays.asList(\n new ServerAddress(\"mongastreamlistener-shard-00-00-ja2vb.mongodb.net\", 27017),\n new ServerAddress(\"mongastreamlistener-shard-00-01-ja2vb.mongodb.net\", 27017),\n new ServerAddress(\"mongastreamlistener-shard-00-02-ja2vb.mongodb.net\", 27017)\n ));\n })\n\n .build();\nat com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:568) ~[mongodb-driver-core-4.0.4.jar:na]\nat com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:447) ~[mongodb-driver-core-4.0.4.jar:na]\nat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:298) ~[mongodb-driver-core-4.0.4.jar:na]\nat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:258) ~[mongodb-driver-core-4.0.4.jar:na]\nat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83) ~[mongodb-driver-core-4.0.4.jar:na]\nat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33) ~[mongodb-driver-core-4.0.4.jar:na]\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:103) ~[mongodb-driver-core-4.0.4.jar:na]\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:60) ~[mongodb-driver-core-4.0.4.jar:na]\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongodb-driver-core-4.0.4.jar:na]\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongodb-driver-core-4.0.4.jar:na]\nat java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]\n",
"text": "I have tried passing the setting the connection spring in the application.properties → spring.data.mongodb.uri=I am getting the below errors:Caused by: java.net.SocketException: Connection reset",
"username": "Tom_Marler"
},
{
"code": "mongo",
"text": "Hello Tom,Are you able to connect to your cluster from mongo shell?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Haven’t tried. I am able to connect to the cluster with python. I can perform CRUD operations fine with python without any issues. The issue I am having is connecting to the Cluster with Spring, seems like a SSL issue, but I am not forsure?",
"username": "Tom_Marler"
},
{
"code": "",
"text": "Which version of the driver are you using?If using maven, please provide Mongo related elements from your pom.xml.",
"username": "steevej"
},
{
"code": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n\txsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\">\n\t<modelVersion>4.0.0</modelVersion>\n\t<parent>\n\t\t<groupId>org.springframework.boot</groupId>\n\t\t<artifactId>spring-boot-starter-parent</artifactId>\n\t\t<version>2.3.1.RELEASE</version>\n\t\t<relativePath/> <!-- lookup parent from repository -->\n\t</parent>\n\t<groupId>com.patriotech</groupId>\n\t<artifactId>mongabackend</artifactId>\n\t<version>0.0.1-SNAPSHOT</version>\n\t<name>mongabackend</name>\n\t<description>Monga Social</description>\n\t<properties>\n\t\t<java.version>1.8</java.version>\n\t</properties>\n\t<dependencies>\n\t\t<dependency>\n\t\t\t<groupId>org.springframework.boot</groupId>\n\t\t\t<artifactId>spring-boot-starter-data-mongodb</artifactId>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>org.springframework.boot</groupId>\n\t\t\t<artifactId>spring-boot-starter-test</artifactId>\n\t\t\t<scope>test</scope>\n\t\t\t<exclusions>\n\t\t\t\t<exclusion>\n\t\t\t\t\t<groupId>org.junit.vintage</groupId>\n\t\t\t\t\t<artifactId>junit-vintage-engine</artifactId>\n\t\t\t\t</exclusion>\n\t\t\t</exclusions>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>org.springframework.boot</groupId>\n\t\t\t<artifactId>spring-boot-starter-web</artifactId>\n\t\t</dependency>\n\t</dependencies>\n\t<build>\n\t\t<plugins>\n\t\t\t<plugin>\n\t\t\t\t<groupId>org.springframework.boot</groupId>\n\t\t\t\t<artifactId>spring-boot-maven-plugin</artifactId>\n\t\t\t</plugin>\n\t\t</plugins>\n\t</build>\n</project>\n",
"text": "pom.xml",
"username": "Tom_Marler"
},
{
"code": "",
"text": "The driver version is 4.0.4",
"username": "Tom_Marler"
},
{
"code": "",
"text": "I added this to the POM.xml\n\norg.mongodb\nmongodb-driver-sync\n3.8.1\nKnow I am getting this error:\nServlet.service() for servlet [dispatcherServlet] in context with path threw exception [Handler dispatch failed; nested exception is java.lang.NoClassDefFoundError: com/mongodb/connection/DefaultClusterFactory] with root cause java.lang.ClassNotFoundException: com.mongodb.connection.DefaultClusterFactory",
"username": "Tom_Marler"
},
{
"code": "POM.xml",
"text": "I am able to connect to the cluster with python.Please post the code you have used to connect from PyMongo.EDIT ADD: The POM.xml is good.",
"username": "Prasad_Saya"
},
{
"code": "localhostMongoTemplateMongoClient MongoClient mongoClient = MongoClients.create(\n MongoClientSettings.builder()\n .applyToClusterSettings(builder ->\n builder.hosts(Arrays.asList(\n new ServerAddress(\"localhost\", 30001),\n new ServerAddress(\"localhost\", 30002),\n new ServerAddress(\"localhost\", 30003))))\n .build());",
"text": "@Tom_MarlerI can connect to database on a replica-set on localhost and query using MongoTemplate API. It is a Spring Boot application (2.3.1) with similar Spring configuration and Java Driver (4.0.4).The MongoClient is created using the following:",
"username": "Prasad_Saya"
},
{
"code": "MongoClientURI uri = new MongoClientURI( \"mongodb+srv://UserName:[email protected]/database1?retryWrites=true&connectTimeoutMS=5000\" ) ;\nMongoClient client = new MongoClient( uri ) ;\n",
"text": "Could you try with com.mongodb.MongoClientURI and the SRV string?",
"username": "steevej"
}
] | Problems connecting to Atlas with Spring Data: connection reset | 2020-06-23T01:20:47.276Z | Problems connecting to Atlas with Spring Data: connection reset | 7,546 |
null | [] | [
{
"code": "",
"text": "I need a database to handle about 3.5 million new devices positions every day, today I’m using mysql, but the having performance issues on reports with many positons, I’m considering to change for a mongodb database, should I consider use sharding or not?",
"username": "Cassiano_Cussioli_Si"
},
{
"code": "",
"text": "Hello @Cassiano_Cussioli_Si welcome to the community!If sharding is appropriate for your new deployment, can’t be answered at this point, there are many points to consider. If your dataset fits on a single server, you should begin with an unsharded deployment, while your dataset is small sharding may provides little advantage.The story will start to think about a good schema design. In case you move the SQL normalized Data Model 1:1 to MongoDB you will not have much fun or benefit.\nYou can find further information on the Transitioning from Relational Databases to MongoDB in the linked blog post. Please note also the links at the bottom of this post, and the referenced migration guide.Since you mention that the current main problem is reporting run time. One option with MongoDB is to have a replicaset with one, or more hidden secondary servers which are dedicated for special tasks e.g. analytics, searches, … the key factor with them is that they can have individual indexes to support their special tasks. This as a starter, please follow this link to find out more about hidden Replicat Set Members.Sharding is a method to horizontal scale your load when vertical scaling gets inefficient.Horizontal Scaling involves dividing the system dataset and load over multiple servers, adding additional servers to increase capacity as required. While the overall speed or capacity of a single machine may not be high, each machine handles a subset of the overall workload, potentially providing better efficiency than a single high-speed high-capacity server. The other side of the coin is increased complexity.I’d recommend to start without sharding, find and tune your schema and work from there.Shadring provides advantages, when your use case fits, as like:Reads / Writes: MongoDB distributes the read and write workload across the shards in the sharded cluster, allowing each shard to process a subset of cluster operations. For queries that include the shard key or the prefix of a compound shard key, mongos (the routing server) can target the query at a specific shard. These targeted operations are generally more efficient than broadcasting to every shard in the cluster.Storage Capacity: Sharding distributes data across the shards in the cluster, allowing each shard to contain a subset of the total cluster data. As the data set grows, additional shards increase the storage capacity of the cluster.High Availability: A sharded cluster can continue to perform partial read / write operations even if one or more shards are unavailable. While the subset of data on the unavailable shards cannot be accessed during the downtime, reads or writes directed at the available shards can still succeed.Hope that helps\nMichael",
"username": "michael_hoeller"
}
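For the hidden reporting secondary mentioned above, the documented reconfiguration looks roughly like this in the mongo shell (assuming the analytics node is members[2] of your replica set):

```javascript
cfg = rs.conf();
cfg.members[2].priority = 0;   // never eligible to become primary
cfg.members[2].hidden = true;  // invisible to normal application reads
rs.reconfig(cfg);
// Reporting jobs then connect directly to that member, which can carry
// extra indexes dedicated to the reports without affecting production load.
```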
] | Sharding or not? | 2020-06-23T23:19:11.160Z | Sharding or not? | 2,064 |
null | [
"capacity-planning"
] | [
{
"code": "",
"text": "Please help me to get below limitsWhat the limitation on maximumdatabase size,\ncollection size,\ndocument size [ 16GB upper limit ]With Latest MongoDB Version.",
"username": "Nagarajan_Palaniappa"
},
{
"code": "ulimit",
"text": "Hi @Nagarajan_Palaniappa,For general information, please see MongoDB Limits and Thresholds in the MongoDB documentation. You’ll note that this doesn’t specifically mention a limitation on the number of databases or collections, but I’ll explain why below.The maximum document size in MongoDB is currently 16MB, but if you are approaching that I would definitely give serious consideration to whether your data model can be improved. For some related commentary, see: Use case for storing pages of text like articles as key:value pairs - #4 by Stennie_X.The theoretical limits for maximum database and collection size are much higher than the practical limits imposed by filesystem, O/S, and system resources relative to workload. You can add server resources and tune for vertical scaling, but when your workload exceeds the resources of a single server or replica set you can also scale horizontally (partitioning data across multiple servers) using sharding.Practical scaling involves more considerations than the size of data and is highly dependent on your workload, deployment topology, and resources. For example: the amount of data you need to actively work with relative to your total data stored, the volume of read/write operations, and expectations around performance metrics/SLA are all factors.For some examples of deployments scaling across different dimensions, please see MongoDB at Scale. There are deployments with petabytes of data, billions of operations per day, and billions of documents. However, distributed deployments at that scale have significantly different resources than you may have getting started in a laptop or shared server environment.I believe the theoretical limits are probably imposed by use of unsigned 64-bit integers (so perhaps 2^64 - 1 collections or databases), but the first soft limits you may encounter on most operating systems will be around available and open file handles (for example, ulimit Settings for Linux/UNIX-like environments). RAM, CPU, and I/O resources may also be limiting factors depending on your workload. The maximum size or number of files for a directory or filesystem will also vary depending on your filesystem options.If you are managing your own server deployments, I strongly recommend reviewing the Operations Checklist, Production Notes, and Security Checklist relevant to your version of MongoDB and deployment topology.Regards,\nStennie",
"username": "Stennie_X"
},
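A couple of quick shell checks related to the limits discussed above (collection names are placeholders):

```javascript
// BSON size of a single document -- it must stay under the 16 MB limit.
Object.bsonsize(db.mycollection.findOne());

// Storage, index and document counts for one collection and the database.
db.mycollection.stats();
db.stats();
```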
{
"code": "",
"text": "Thankyou Stennie for the details… This helps…!",
"username": "Nagarajan_Palaniappa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Database and collection limitations | 2020-06-23T10:12:42.875Z | Database and collection limitations | 8,485 |
[
"atlas",
"field-encryption"
] | [
{
"code": "mongocryptd",
"text": "I know this might be an interesting question but I along with others are having difficulty getting CSFLE working with Atlas, all of the guides show working examples with a locally hosted instance. Connection errors generally seam to originate while trying to perform the mongocryptd handshake. Before I put any more dev time into getting this to work I would like to confirm that is will work with Atlas…I had assumed it would however I can’t seam to show any guides that reference it. If anyone has a working connection config with Atlas it would sure save us some time.Thanks,David“MongoError: Unable to connect to mongocryptd, please make sure it is running or in your PATH for auto-spawn”",
"username": "David_Stewart"
},
{
"code": "mongocryptdmongocryptdmongocryptd",
"text": "Hi @David_Stewart,Yes, Client-Side Field Level Encryption is supported with MongoDB Atlas v4.2 clusters.For Automatic Encryption methods, the official MongoDB 4.2-compatible drivers require access to the mongocryptd process on the application/client host machine. The 4.2-compatible drivers by default search for the mongocryptd process in the system PATH.“MongoError: Unable to connect to mongocryptd , please make sure it is running or in your PATH for auto-spawn”The error message indicates that you may not have mongocryptd installed and/or available in the application system PATH. See mongocryptd installation for more information. You may also find the Encryption Components diagram that illustrates the relationships between the components useful.Also, please ensure that you’re using the official v4.2 compatible driver and versions. See Driver Compatibility Table for more information.You can follow Client-Side Field Level Encryption Guide for an introduction on how to implement automatic CSFLE. The guide contains example and code snippets in Java, Node.JS and Python.Regards,\nWan",
"username": "wan"
},
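If mongocryptd is installed but not on the PATH, the 4.2-compatible drivers can be pointed at it explicitly. A hedged Node.js sketch follows; the connection string, key vault namespace, demo local key and binary path are placeholders:

```javascript
const crypto = require("crypto");
const { MongoClient } = require("mongodb");

const localMasterKey = crypto.randomBytes(96); // demo key only, not for production

const client = new MongoClient(process.env.ATLAS_URI, {
  autoEncryption: {
    keyVaultNamespace: "encryption.__keyVault",
    kmsProviders: { local: { key: localMasterKey } },
    extraOptions: {
      // Tell the driver exactly where the mongocryptd binary lives.
      mongocryptdSpawnPath: "/usr/local/bin/mongocryptd"
    }
  }
});
```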
{
"code": "",
"text": "Wan,Thanks for the clarification. Looks like we will need it install this process in our docker file. Do you know of a good example that for a Node deployment that we can reference where it will just install the process, not the entire mongoDB server library?David",
"username": "David_Stewart"
},
{
"code": "mongocryptdmongodb-enterprise-cryptdmongodb-enterprise-cryptdmongodb-enterprise",
"text": "Hi David,Per the mongocryptd installation guide that Wan mentioned, there is a mongodb-enterprise-cryptd package available for the same Linux systems supported for MongoDB Enterprise.You can use the instructions to Install MongoDB Enterprise on Linux as a reference for steps to add to your Docker image with the Node deployment, but use the more specific mongodb-enterprise-cryptd package instead of mongodb-enterprise.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks Stennie,I will report back, thanks for the help everyone!",
"username": "David_Stewart"
},
{
"code": "Dockerfilemongodb-enterprise-cryptd",
"text": "Hi @David_Stewart,Do you know of a good example that for a Node deployment that we can reference where it will just install the process, not the entire mongoDB server library?Please have a look at this Dockerfile example: github.com/sindbach/field-level-encryption-docker that only installs mongodb-enterprise-cryptd package.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Wan this is GREAT thanks for posting!David",
"username": "David_Stewart"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is Client-Side Field Level Encryption supported with Atlas? | 2020-06-21T22:01:00.081Z | Is Client-Side Field Level Encryption supported with Atlas? | 6,518 |
|
null | [
"atlas-search"
] | [
{
"code": "{\n \"detail\": \"Unexpected error.\",\n \"error\": 500,\n \"errorCode\": \"UNEXPECTED_ERROR\",\n \"parameters\": [],\n \"reason\": \"Internal Server Error\"\n}\n",
"text": "I can’t delete a Full text index on a collection. It is stuck forever in “Build in progress”/“Delete in progress” state. If I try to delete it via API, it returns following error:Any ideas how can I resolve the issue? Thanks!",
"username": "Olyasik"
},
{
"code": "",
"text": "Welcome to the community @Olyasik!For operational issues with Atlas you are best contacting support for assistance.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Full Text Search index is stuck during creation/deletion | 2020-06-23T20:39:43.114Z | Full Text Search index is stuck during creation/deletion | 1,841 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi all!I have an AWS EC2-hosted MongoDB Community Edition ReplicaSet consisting of a primary and 4 secondary replicas and a separate MongoDB Atlas cluster. The Op-Log for both is sized at 16GB. For Atlas, when I run rs.printReplicationInfo(), I see the following:\nconfigured oplog size: 16999MB\nlog length start to end: 87384secs (24.27hrs)\noplog first event time: Wed Jun 17 2020 13:05:27 GMT-0400 (Eastern Daylight Time)\noplog last event time: Thu Jun 18 2020 13:21:51 GMT-0400 (Eastern Daylight Time)\nnow: Thu Jun 18 2020 13:21:51 GMT-0400 (Eastern Daylight Time)For the AWS hosted ReplicaSet, I see:\nconfigured oplog size: 16384MB\nlog length start to end: 1368secs (0.38hrs)\noplog first event time: Thu Jun 18 2020 12:58:50 GMT-0400 (Eastern Daylight Time)\noplog last event time: Thu Jun 18 2020 13:21:38 GMT-0400 (Eastern Daylight Time)\nnow: Thu Jun 18 2020 13:21:38 GMT-0400 (Eastern Daylight Time)I’m curious as to why the log length for the AWS ReplicaSet Op-Log contains so much less activity than the Atlas hosted ReplicaSet. Any ideas? From what our Developers tell me, the Atlas instances should be experiencing a much higher rate of CRUD operations than the AWS instances. I’m fairly new to Mongo so ti doesn’t take much to baffle me, but this just seems weird.",
"username": "Gary_Hampson"
},
{
"code": "mongorestoremongoimport",
"text": "I’m curious as to why the log length for the AWS ReplicaSet Op-Log contains so much less activity than the Atlas hosted ReplicaSet. Any ideas? From what our Developers tell me, the Atlas instances should be experiencing a much higher rate of CRUD operations than the AWS instances.Hi Gary,The log length (or oplog duration) is based on the difference in times between the first and last events in the oplog.I would start by asking your developers how they are measuring CRUD operations in each environment and ensure they are comparing similar metrics rather than an instinctive “prod should be busier”. There may be unexpected tasks or load testing running in your AWS environment.If you are certain the CRUD workload is similar, a few possible differences to consider are:You could also try to compare differences in the oplog entries (grouping by operation type or namespace), but I would be cautious on doing so in a production environment. The oplog does not have (or support) secondary indexes, so arbitrary queries will result in a collection scan and possible performance impact.For detailed querying & analysis I would dump and restore oplogs into a test environment.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "mongorestoremongoimport",
"text": "Hi Stennie,I’ve confirmed with the Dev Team as well as the other DBAs that there are no mongorestore or mongoimport activities happening on the AWS-hosted instance. I’ve also confirmed that they are both on version 3.6.18. From an application logic perspective, there is no difference in code, apart from the connection string settings pointing to the two different SRV records.When you say “grouping by operation type or namespace”, would you be willing to provide an example? As I said in my initial post, I’m fairly new to working with Mongo and have much to learn, so any info you might be willing to share would be greatly appreciated.Kind regards,\nGary Hampson",
"username": "Gary_Hampson"
},
{
"code": "localoplog.rsmtoolsdb/repl/oplog_entry.idl{\n\t\"ts\" : Timestamp(1592606630, 3),\n\t\"t\" : NumberLong(1),\n\t\"h\" : NumberLong(\"-3140717664474930715\"),\n\t\"v\" : 2,\n\t\"op\" : \"i\",\n\t\"ns\" : \"test.foo\",\n\t\"ui\" : UUID(\"7abe0fbb-7416-4401-92ff-327e6250410e\"),\n\t\"wall\" : ISODate(\"2020-06-19T22:43:50.235Z\"),\n\t\"o\" : {\n\t\t\"_id\" : ObjectId(\"5eed3fa6e795017f20b360b5\")\n\t}\n}\ndb.getSiblingDB('local').oplog.rs.aggregate([\n { $group: {\n _id: { namespace: \"$ns\", operation: \"$op\" },\n count: { $sum: 1 }\n }}\n])\n{ \"_id\" : { \"namespace\" : \"config.system.sessions\", \"operation\" : \"i\" }, \"count\" : 1 }\n{ \"_id\" : { \"namespace\" : \"test.foo\", \"operation\" : \"d\" }, \"count\" : 1 }\n{ \"_id\" : { \"namespace\" : \"test.foo\", \"operation\" : \"i\" }, \"count\" : 2 }\n{ \"_id\" : { \"namespace\" : \"test.bar\", \"operation\" : \"i\" }, \"count\" : 4 }\n{ \"_id\" : { \"namespace\" : \"config.$cmd\", \"operation\" : \"c\" }, \"count\" : 3 }\n{ \"_id\" : { \"namespace\" : \"admin.system.keys\", \"operation\" : \"i\" }, \"count\" : 2 }\n{ \"_id\" : { \"namespace\" : \"test.$cmd\", \"operation\" : \"c\" }, \"count\" : 2 }\n{ \"_id\" : { \"namespace\" : \"admin.$cmd\", \"operation\" : \"c\" }, \"count\" : 1 }\n{ \"_id\" : { \"namespace\" : \"\", \"operation\" : \"n\" }, \"count\" : 76 }\nconfig.system.sessionstest.footest.baritest.fooducn",
"text": "When you say “ grouping by operation type or namespace ”, would you be willing to provide an example? As I said in my initial post, I’m fairly new to working with Mongo and have much to learn, so any info you might be willing to share would be greatly appreciated.Hi Gary,The replication oplog is a capped collection in the local database called oplog.rs. It can be queried like any other MongoDB collection, with the important caveat I noted earlier about not supporting secondary indexes. If you are using an Atlas shared tier deployment (M0, M2, M5) you will not have direct access to query the oplog, but since you mention a 16GB oplog that must be a dedicated cluster.I asked how your developers are measuring CRUD operations because these sorts of insights are generally more straightforward using a monitoring tool which captures and plots activity metrics over time. For example, with your Atlas cluster you can look at charts like “Opcounters”, “Oplog GB/Hour”, and “Replication Oplog Window”. A significant change or difference in activity should be easier to spot visually and you can also configure alerts based on conditions.If you don’t have a monitoring solution for your AWS deployment (or if you do, but don’t have comparable charts), I suggest trying MongoDB Cloud Manager. This provides similar charts & alerts to Atlas which should help identify variations between your deployment workloads. Another approach you can take is log forensics using something like mtools or Keyhole.However, since your specific question is about oplog consumption the most definitive source of information will be the oplog data. The oplog format is generally considered internal for replication and the majority of users don’t dig into the details. You can find more context by peeking at the MongoDB source code on GitHub: db/repl/oplog_entry.idl describes the oplog format in MongoDB 3.6.The oplog format varies slightly between versions of MongoDB but the general document format looks like:Most of these fields are only interesting for internal replication, but for your specific question you might want to aggregate across a few dimensions like namespace and operation (I renamed these fields in my aggregation grouping for clarity):Sample output from a 3.6 replica set I just created:For the oplog period covered in this output, you can see there was:If you have a largely idle replica set, the no-op entries will likely have the highest count but not be an interesting difference in terms of oplog duration (a more active deployment will have larger document writes instead of no-ops).Given your description of equivalent versions with quite different oplog durations, my guess is that you have a higher number of insert/update/delete operations than expected or the content of those operations is different between environments (for example, you are using retryable writes on AWS or a different driver version).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,This query was very helpful, so thank you very much for that. In looking at the query’s output, I still see the Atlas instance having a much larger count of CRUD operations when compared to the AWS-hosted instance, so I’m confused as to why with the same 16GB oplog sizing, the AWS-hosted instance still has a much shorter oplog duration. Any other ideas?OPLOG1429×685 213 KBKind regards,\nGary Hampson",
"username": "Gary_Hampson"
},
{
"code": "",
"text": "oplog first event time: Thu Jun 18 2020 12:58:50 GMT-0400 (Eastern Daylight Time)Hello,\nThis might sound like a trivial question, but i just want to clarify as i cannot think of anything else.\nIs it possible that the AWS cluster is actually built/instantiated on Thu Jun 18 2020 12:58:50 GMT-0400 (Eastern Daylight Time)?Thanks\nR",
"username": "errythroidd"
},
{
"code": "",
"text": "Hi Rohit,The AWS-hosted instance has been around for over a year. The only reason that this is being brought up is that we had a hardware issue that necessitated rebuilding a secondary node in another data center in another region, and when doing the initial sync for that new node, we ran into issues where the oplog was cycling over preventing the reseed to complete.Kind regards,\nGary Hampson",
"username": "Gary_Hampson"
},
{
"code": "",
"text": "Hi @Gary_HampsonAre these cluster getting similar data?If the AWS inserts/update documents are much lager than the Atlas then that could account for the discrepancy in the oplog length.",
"username": "chris"
},
{
"code": "",
"text": "Hi @chrisAs far as I can see from looking at the oplog data, and in speaking with the developers, the type of data and size of each insert/update being written is roughly the same. The amount of operations is different, as I mentioned above. The Atlas instance is receiving a much greater number of operations.Kind regards,\nGary Hampson",
"username": "Gary_Hampson"
},
{
"code": "",
"text": "I think you may have to analyze the oplog further, this doesn’t add up. Less operations but more oplog consumed on the AWS cluster?",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | OpLog sizing and log length | 2020-06-18T18:30:32.048Z | OpLog sizing and log length | 5,161 |
null | [] | [
{
"code": "",
"text": "I am confused and want to know is MongoDB Realm in beta or got a stable release, if yes, what is the current version of MongoDB Realm. Also, is MongoDB Stitch deprecated. I like realm very much and want to use it for my next apps. I want to use Realm for web React Apps, but I don’t see the Realm is beta but I definately see realm-js in beta (in github). I mostly want to use Realm for making GraphQl servers, is that stable for long term. Also, the Auth in realm for JS is provided by realm-js, is that also stable.",
"username": "Saurabh_Gupta"
},
{
"code": "",
"text": "Hi Saurabh,You can use either the realm-web SDK or the legacy stitch SDKs at the moment. The realm-web SDK is in Beta and will eventually replace the Stitch Web SDK, which we plan to sunset towards the end of the year. If you are planning on implementing Authentication using anonymous, email/password, or API key, the realm-web SDK already supports these.You can also use our GraphQL service right now with any GraphQL client in your React App. This is now GA now and should be stable.",
"username": "Sumedha_Mehta1"
}
] | MongoDB realm version confusion | 2020-06-23T05:13:34.598Z | MongoDB realm version confusion | 1,467 |
null | [
"stitch"
] | [
{
"code": "",
"text": "Hi everyone,\ni hope i’m not misusing this forum. A mongoDB employee pointed me here to try solve my issue.I am currently using Mongo Stitch with Realms to provide a basic backend for a web app i developed.\nEverything is done and working properly on Android Browsers or Maijor PC browser.The only problem is, on Safari (i tested it on iPad 7 and iPhone 8 ) the login does not work.I found that it waits forever on this call:await client.auth.loginWithCredential(new stitch.UserPasswordCredential(email, password));The client var is inited with: const client = stitch.Stitch.initializeDefaultAppClient(APP_NAME);It seems that the Stitch client manages to set the client.auth.user.id, but it does not arrive to the point were it sets the “__stitch” variables in the localStorage.Has anyone experienced this?\nThe code should be fine on my side, since it works properly on Android/Windows…Is it a Stitch bug?I appreciate any support,\nThanks.",
"username": "Emanuele_Serrao"
},
{
"code": "",
"text": "In case anyone has this issue, for me was fixed by removing the custom data from the realm app.Instead i now get it using a findOne as secondary step right after login.",
"username": "Emanuele_Serrao"
}
] | Help. Login Stuck on IOS, Safari | 2020-06-22T20:32:07.982Z | Help. Login Stuck on IOS, Safari | 2,839 |
null | [] | [
{
"code": "",
"text": "The only way is to access via mongo shell. Mongo Compass can’t access them. Is there any better way to view the hundred or thousands of users in a better way or GUI? Because it is really ridiculous to see 100+ users in the command shell.",
"username": "why_npnt"
},
{
"code": "mongodb.getUsers()filter",
"text": "Hi @why_npnt,To help provide relevant suggestions, please provide some more information about your use case and the output or interaction you’d like to achieve:Are you trying to export or scroll through hundred of users?Do you need to see all user information or just a subset of the fields?What specific version of MongoDB server are you using?Are you using a self-hosted deployment or a managed service (for example, MongoDB Atlas)?I expect you may want to count or filter a long list of users using some criteria rather than paging through a very long list. In a mongo 4.0+ shell the db.getUsers() helper accepts a filter condition to limit output to users matching the filter criteria.Regards,\nStennie",
"username": "Stennie_X"
},
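For reference, the filtering mentioned above looks roughly like this in a 4.0+ shell (the filter values are only examples):

```javascript
// Users on the current database that have a particular role:
db.getUsers({ filter: { "roles.role": "readWrite" } });

// Users whose name matches a pattern (e.g. throwaway test accounts):
db.getUsers({ filter: { user: /^test_/ } });

// A raw count of all defined users across the deployment:
db.getSiblingDB("admin").system.users.countDocuments({});
```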
{
"code": "",
"text": "filter is not required. user input registration is random via test. overview is required.",
"username": "why_npnt"
}
] | Regarding mongodb's users in system.users | 2020-06-23T08:25:01.333Z | Regarding mongodb’s users in system.users | 1,209 |
null | [
"aggregation"
] | [
{
"code": "{_id: '1234\",\nname: 'big account'\n}\n{_id:'6789',\nname: 'small_account'\n{_id: 'abcd',\naccount: '1234'}\n{_id: \"qwer\",\naccount: '1234'}\n{_id: 'zyxw',\naccount: '6789'\ntotal size of documents in members matching pipeline { $match: { $and: [ { site: { $eq: ObjectId('xxxxxxxxx') } }, {} ] } } exceeds maximum document size\n{$match: { \n _id : {\n $nin: {\n \"xxxxxxxxxxx\"\n }\n }\n }\n}\n",
"text": "I have a relatively small dataset, 10k documents, and need to ‘join’ 2 collections (accounts and members).\naccounts:membersThe lookup stage of the aggregation pipeline is unable to complete in the view I’ve created throwing the following error:My question is this: as a first step in the aggregation pipeline, I wanted to get rid of this document to see if the rest of the collection would work :The doc appears to have been filtered out, when I test by trying to match to it later in the pipeline without success, but the view still throws the same error with the same objectId, which makes me think it is still being called in the $lookup stage.I could use some help figuring out how to get this view to work and figure out how this doc that has been filtered out continues to jam up the pipeline.",
"username": "James_Hall"
},
{
"code": "",
"text": "This was driving me crazy, but it turns out that modifying a view, and then clicking ‘update view’ does not actually save the updates. I had to delete the view, iterate, resave another view, to check if it worked.Hope that this helps some one.",
"username": "James_Hall"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Total size of documents in members matching pipeline exceeds maximum document size | 2020-06-23T08:25:05.866Z | Total size of documents in members matching pipeline exceeds maximum document size | 4,289 |
null | [] | [
{
"code": "",
"text": "",
"username": "Sakshi_Saxena"
},
{
"code": "",
"text": "test db is a default DB which comes with mongodb\nIt will not have any collections\nSwitch your db to video or some other db and run your command\nuse video\nshow collectons",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Sakshi_Saxena,I hope you found @Ramachandra_37567’s response helpful. Please feel free to get back to us if you have any doubts.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Here’s what i did to get the results!\nimage1366×768 58.2 KB",
"username": "Sakshi_Saxena"
},
{
"code": "",
"text": "Hi @Sakshi_Saxena,Good job ",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Cannot view schema upon executing command: show collections | 2020-06-21T21:23:00.413Z | Cannot view schema upon executing command: show collections | 1,476 |
[] | [
{
"code": "const removedPost = await eventArrayModel.update({resource: req.params.teamMember}, {$pull: { array : {_id: ObjectId(\"5ef0a884c09b8e9ff01c8007\")}}});\n",
"text": "I’m trying to remove specific sub document by it’s _id. In eventArrayModel there are plenty of documents. I’m looking for match for origin of sub document by field resource . This is what i come up with, but it doesn’t work. Any ideas ?image1301×616 32 KB",
"username": "Mikolaj_Deszcz"
},
{
"code": "// Node.js example\nconst bson = require('bson');\nconst teamMemberId = new bson.ObjectId(req.params.teamMember);\nconst removeId = new bsonObjectId(<id>);\nawait eventArrayModel.update(\n { resource: teamMemberId }, \n { \n $pull: { \n array : { _id: removeId }\n }\n }\n);",
"text": "The syntax is OK.\nBut, notice, that ‘req.params.teamMember’ param is of string type, but ‘_id’ that you are trying to match is of type ObjectId. You cannot match ObjectId value with a string. Nothing matched - nothing updated.\nYou need to convert teamMember to ObjectId to do a successful match:",
"username": "slava"
},
{
"code": "const removedPost = await eventArrayModel.update({resource: req.params.teamMember}, {$pull: { array : {_id: ObjectId(\"5ef0a884c09b8e9ff01c8007\")}}});\ndb.test.update( \n { 'array.resource': req.params.teamMember }, \n { $pull: { array: { _id: ObjectId(...) } } } \n)\n\ndb.test.update( \n { }, \n { $pull: { array: { _id: ObjectId(...), resource: req.params.teamMember } } }\n}",
"text": "I’m trying to remove specific sub document by it’s _id. In eventArrayModel there are plenty of documents. I’m looking for match for origin of sub document by field resource . This is what i come up with, but it doesn’t work. Any ideas ?You can try any of these two update queries:",
"username": "Prasad_Saya"
}
] | MongoDB removing subdocument issue | 2020-06-22T20:33:45.955Z | MongoDB removing subdocument issue | 2,933 |
|
null | [] | [
{
"code": "",
"text": "Hi There,How we can decide the wiedTiger cache size based on the server memory and total DB size.\nE.g -\nWe have 800GB mongodb database size and 32GB memory. So what should be optimal size of wiredTiger cache size?Note - Workload in mix or generic and this is for new deploymentThanks,\nKailas",
"username": "kailas_pawar"
},
{
"code": "",
"text": "Hi,There is not a straightforward answer, or even a single answer, to your question unfortunately. This is because the answer depends on your specific use case, and your specific use case will be very different from another person’s.Generally, you want to optimize based on your “working set”. It’s the part of your data that you access frequently. Note that the term “working set” is not unique to MongoDB, and is basically universal across all databases.Here’s a couple of good explanations on what working set is:In general, you would know the size of your working set by experimentation and expected production load of the database.Should you discover that your working set exceeds your RAM after some experiments, then increasing RAM is the only reliable method to increase performance.Typically, though, the default settings of the WiredTiger cache should be workable for the vast majority of use cases.Best regards,\nKevin",
"username": "kevinadi"
},
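One practical way to see how well the working set fits the cache on a running node is to watch the WiredTiger cache statistics; a hedged shell sketch (statistic names come from serverStatus and can vary slightly by version):

```javascript
var cache = db.serverStatus().wiredTiger.cache;
print("configured max bytes  :", cache["maximum bytes configured"]);
print("bytes currently held  :", cache["bytes currently in the cache"]);
print("pages read into cache :", cache["pages read into cache"]);
print("pages evicted (clean) :", cache["unmodified pages evicted"]);
```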
{
"code": "",
"text": "Thanks Kevin.! Got it.",
"username": "kailas_pawar"
}
] | How to decide wiredTiger cache size bases on total server memory and MongoDB total size | 2020-05-31T08:27:36.626Z | How to decide wiredTiger cache size bases on total server memory and MongoDB total size | 2,545 |
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "Trying to use database triggers to bump a specific document field into another document leaving a reference when the size get too large for the original document. Best we can find issues inline",
"username": "paul_N_A"
},
{
"code": "",
"text": "Hi Paul,Assuming you are referring to the size of an array being too big, and not your document.However, this may need a little more thought into how you’re modeling your data - you can learn more about it in the data modeling university course,You may also want to use “references” to create a one-to-many relationship with your data if that fits your needs.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Thanks @Sumedha_Mehta1 Actually it’s not an array, it would be an object.Imagine a userDoc. Then the user uploads a photo and we put it into the userDoc so we can get both the photo info and userDoc back in one single request. We want to know the like counts of the pic so we have that too, but if a power user posts an image all of a sudden we blow out the userDoc (size) adding all 100k plus user ID’s to the userDoc. So generally we know after a few hundred that it’s going to be popular, so want to pre-emptively put the photo fields and uesrID object of those who liked it, into it’s own doc leaving the new doc reference in the userDoc. Until the next photo comes along. Putting the picture into it’s own doc 100% of the time needlessly when the userDoc can easily handle a few hundred (or much more) likes would create a secondary doc lookup, in most scenarios, needlessly. Thanks!",
"username": "paul_N_A"
}
] | Using Database Triggers to Update a Single Field | 2020-06-22T20:32:52.756Z | Using Database Triggers to Update a Single Field | 2,954 |
null | [
"ruby"
] | [
{
"code": "An unexpected error occurred! {:error=>#<Mongo::Error::NoServerAvailable: No server is available matching preference: #<Mongo::ServerSelector::Primary:0x2012 tag_sets=[] max_staleness=nil> using server_selection_timeout=30 and local_threshold=0.015>, :backtrace=>[\"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/mongo-2.4.3/lib/mongo/server_selector/selectable.rb:115:in next_primary'\", \"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/mongo-2.4.3/lib/mongo/bulk_write.rb:58:in write_with_retry'\", \"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/mongo-2.4.3/lib/mongo/bulk_write.rb:56:in bulk_write'\", \"/usr/share/logstash/vendor/local_gems/20d7ca77/logstash-output-mongodb-3.1.0/lib/logstash/outputs/mongodb.rb:73:in each'\", \"/usr/share/logstash/vendor/local_gems/20d7ca77/logstash-output-mongodb-3.1.0/lib/logstash/outputs/mongodb.rb:65:in synchronize'\", \"/usr/share/logstash/vendor/local_gems/20d7ca77/logstash-output-mongodb-3.1.0/lib/logstash/outputs/mongodb.rb:64:in loop'\", \"/usr/share/logstash/vendor/local_gems/20d7ca77/logstash-output-mongodb-3.1.0/lib/logstash/outputs/mongodb.rb:62:in ",
"text": "I am trying to do Bulk write API call through Ruby to MongoDB by passing list of “update_one” operations, but I am getting following error while doing the API call.An unexpected error occurred! {:error=>#<Mongo::Error::NoServerAvailable: No server is available matching preference: #<Mongo::ServerSelector::Primary:0x2012 tag_sets=[] max_staleness=nil> using server_selection_timeout=30 and local_threshold=0.015>, :backtrace=>[\"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/mongo-2.4.3/lib/mongo/server_selector/selectable.rb:115:in select_server’\", “/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/mongo-2.4.3/lib/mongo/cluster.rb:231:in next_primary'\", \"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/mongo-2.4.3/lib/mongo/bulk_write.rb:58:in block in execute’”, “/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/mongo-2.4.3/lib/mongo/retryable.rb:104:in write_with_retry'\", \"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/mongo-2.4.3/lib/mongo/bulk_write.rb:56:in execute’”, “/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/mongo-2.4.3/lib/mongo/collection.rb:401:in bulk_write'\", \"/usr/share/logstash/vendor/local_gems/20d7ca77/logstash-output-mongodb-3.1.0/lib/logstash/outputs/mongodb.rb:73:in block in register’”, “org/jruby/RubyHash.java:1343:in each'\", \"/usr/share/logstash/vendor/local_gems/20d7ca77/logstash-output-mongodb-3.1.0/lib/logstash/outputs/mongodb.rb:65:in block in register’”, “org/jruby/ext/thread/Mutex.java:148:in synchronize'\", \"/usr/share/logstash/vendor/local_gems/20d7ca77/logstash-output-mongodb-3.1.0/lib/logstash/outputs/mongodb.rb:64:in block in register’”, “org/jruby/RubyKernel.java:1292:in loop'\", \"/usr/share/logstash/vendor/local_gems/20d7ca77/logstash-output-mongodb-3.1.0/lib/logstash/outputs/mongodb.rb:62:in block in register’”`Can somebody help me figure out what could be the problem?",
"username": "Gaurav_Aradhye"
},
{
"code": "",
"text": "An unexpected error occurred! {:error=>#<Mongo::Error::NoServerAvailable: No server is available matching preferencePlease check this link\nhttps://jira.mongodb.org/browse/MONGOID-4513",
"username": "Ramachandra_Tummala"
}
] | Mongo::Error::NoServerAvailable while making bulk write call | 2020-06-23T01:20:51.406Z | Mongo::Error::NoServerAvailable while making bulk write call | 4,627 |
null | [] | [
{
"code": "",
"text": "The Realm adapter Class: Adapter seems to create a local realm file, much like a (react-native) client would, when accessing realm. However,the client side realm api supports encrypting this local realm file e.g. using the configuration object Class: Realm. However, no such configuration seems to exist for the adapter API. I’m wondering what the steps are to ensure the encryption of the adapter and server-side realm?",
"username": "Salman_Mohammadi"
},
{
"code": "",
"text": "@Salman_Mohammadi Correct there is no built-in encryption API for the Adapter API, you will need to roll your own, or use disk encryption from another provider.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hi @Ian_ward, thanks for the response! I noticed the MongoDB-Realm integrated product is rolled out with Sync Atlas Device Sync | MongoDB. I’m wondering if this will be encrypted, or have the option to be encrypted with the same security as ROS?",
"username": "Salman_Mohammadi"
},
{
"code": "",
"text": "The new product is encrypted as part of Atlas encryption but the Adapter API has been removed from the realm-js SDK as part of the new product. Instead you should leverage Realm triggers -",
"username": "Ian_Ward"
}
] | Realm Adapter Encryption | 2020-06-19T20:04:40.851Z | Realm Adapter Encryption | 1,395 |
null | [] | [
{
"code": "",
"text": "Hi,I am looking for a procedure to turn off some shards from a Sharded Cluster.\nI have done some research but as far as I can tell there is no clear procedure.\nI understand that I can run removeShard, but if something fails or gets deleted it could be an issue.\nI prefer first to set the primary to a specific server and then start removing shards.\nIs there a procedure for that ?Thanks",
"username": "Daniel_Benedykt"
},
{
"code": "",
"text": "Welcome @Daniel_BenedyktI am looking for a procedure to turn off some shards from a Sharded Cluster.\nI have done some research but as far as I can tell there is no clear procedure.Its a pretty well defined procedure:It covers moving the sharded collections, jumbo chunks and what to do if the shard is a primary shard.\nIt has clear warnings where you should take care.I understand that I can run removeShard, but if something fails or gets deleted it could be an issue.Can you expand on this.I prefer first to set the primary to a specific server and then start removing shards.The procedure specifies the moving of primary shard for unsharded collections to occur after the sharded collections have been drained.",
"username": "chris"
},
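For completeness, the commands behind the documented procedure referenced above look like this (shard and database names are placeholders):

```javascript
// Start draining the shard; re-running the same command reports progress
// in the "remaining" section of the output.
db.adminCommand({ removeShard: "shardToRetire" });

// Once the sharded collections have drained, move any databases whose
// primary shard is the one being removed:
db.adminCommand({ movePrimary: "someDatabase", to: "shardToKeep" });

// A final removeShard call completes the removal when nothing remains.
db.adminCommand({ removeShard: "shardToRetire" });
```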
{
"code": "",
"text": "Thanks for pointing me to the procedure. I couldn’t find it.",
"username": "Daniel_Benedykt"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Sharded Cluster, procedure to turn off some shard | 2020-06-22T13:13:21.042Z | Sharded Cluster, procedure to turn off some shard | 1,604 |
null | [
"data-modeling"
] | [
{
"code": "Users can read and write their own data{\n name: \"Mongo\" // user defined\n teams: [ // defined by some administrator\n \"team-1\",\n \"contributors\"\n ]\n}\n",
"text": "HiI’ve never used realm before, but I’ve built a lot of APIs and used a lot of RDBMS and am a certified Mongo developer. I’ve really liked the Realm presentations from Mongo live. But I’m keen on doing modelling well. So I’m just wondering, should data modelling be done differently when working with Mongo Realm? I’m just working through some of the docs, when you define role and permission on a collection and have the templates like Users can read and write their own data. Does that mean that you can’t have data elements that an administrator would manage within that collection. Thus, you need to separate your data more by user case? For example I have this document structure, that I use in a REST API, where the application holds all the business logic and can manage how certain admins can give access to certain functionality.But looking at the available templates for Realm, maybe this should be broken out? As you wouldn’t want a user to decide that he can see/perform actions to something an administrator. Am I wrong in this regard?Thanks ",
"username": "Jonny"
},
{
"code": "",
"text": "I guess to a further extent, are there any recommendations on guides/videos/tutorials for working with Realm, for people who are technical, but never worked in this space, maybe a MongoDB University course planned?",
"username": "Jonny"
},
{
"code": "",
"text": "@Jonny If you are talking about Realm Sync for mobile then yes you will generally want to split your data out more into separate documents because a document only lives in a single partition and permissions are defined per-partition per-user. Right now, you cannot add or remove fields within a document based on permissions although we are interested in improving this in the future. You can take a look at this Partitioning document here:\nhttps://docs.mongodb.com/realm/sync/partitioning/",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_WardThanks for the reply and getting back to me, apologies in my delay. I did see this link and had a read, so I’m glad I got to the correct docs. Having read it, I think I get the way to manage the realms, so i’ll have a go and report back when I get chanceThanks ",
"username": "Jonny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Data modelling for Realm | 2020-06-10T23:46:41.734Z | Data modelling for Realm | 1,777 |
null | [] | [
{
"code": "",
"text": "Hello,As we go through Documentation, we read following for iOS and android:MongoDB Realm iOS SDKThe MongoDB Realm iOS SDK enables client applications on the iOS, macOS, tvOS, and watchOS platforms to access data stored in local realms and interact with MongoDB Realm services like Functions, MongoDB Data Access (coming soon), and authentication. The iOS SDK supports both Swift and Objective-C applications.MongoDB Realm Android SDKThe MongoDB Realm Android SDK enables client applications on the Android platform to access data stored in local realms and interact with MongoDB Realm services like Functions, MongoDB Data Access, and authentication. The Android SDK supports both Java and Kotlin Android applications.What is the meaning for “MongoDB Data Access (coming soon)” for iOS.Thanks.",
"username": "Vishal_Deshai"
},
{
"code": "",
"text": "@Vishal_Deshai It’s for calling Remote MongoDB functions that used to be part of the old Stitch SDK like so:\nhttp://stitch-sdks.s3-website-us-east-1.amazonaws.com/stitch-sdks/swift/6/Classes/RemoteMongoReadOperation.htmlYou can still use the Realm API’s to write data and have it sync to MongoDB today.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward we are using MongoDB realm iOS Swift SDK for sync.",
"username": "Vishal_Deshai"
},
{
"code": "",
"text": "MongoDB Data AccessWhat Does it mean “MongoDB Data Access” coming soon feature??",
"username": "Vishal_Deshai"
},
{
"code": "",
"text": "@Vishal_Deshai it’s the above linked api doc I put in my response.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward Does it mean to sync collection through StitchSDK?",
"username": "Vishal_Deshai"
}
] | Is iOS Support Sync? | 2020-06-22T13:54:37.315Z | Is iOS Support Sync? | 1,636 |
null | [] | [
{
"code": "",
"text": "Can Realm be used for Local State Management in our React apps?",
"username": "Ivan_Jeremic"
},
{
"code": "",
"text": "@Ivan_Jeremic Not at this time, we would recommend using our GraphQL API with an Apollo client library or similar",
"username": "Ian_Ward"
  }
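A rough sketch of the recommended approach (not taken from the thread; the endpoint URL pattern, app ID and access token below are placeholders, and the token would normally come from logging a user in with a Realm SDK):

// Hypothetical Apollo Client setup against a Realm app's GraphQL endpoint.
import { ApolloClient, HttpLink, InMemoryCache } from "@apollo/client";

const accessToken = "<valid-user-access-token>"; // obtained after authenticating the user
const client = new ApolloClient({
  link: new HttpLink({
    uri: "https://realm.mongodb.com/api/client/v2.0/app/<your-app-id>/graphql", // placeholder app ID
    headers: { Authorization: `Bearer ${accessToken}` },
  }),
  cache: new InMemoryCache(),
});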
] | Can Realm be used for Local State Management in our React apps? | 2020-06-22T15:38:19.631Z | Can Realm be used for Local State Management in our React apps? | 1,499 |
null | [] | [
{
"code": "",
"text": "Hi, i was using https://docs.realm.io/sync/backend-integration/mssql-data-connector for mssql connector. But now i figure out realm is updated to mongoDB real. i am confused now how to connect MSSQL to client realm database on android. please help with it.",
"username": "Gouravdeep_Singh"
},
{
"code": "",
"text": "@Gouravdeep_Singh The MSSQL data connector is no longer being distributed so you would need to build your own connector between MongoDB and MSSQL. I believe there is a pre-built connector that you can leverage on Confluent’s Kafka Cloud",
"username": "Ian_Ward"
}
] | Realm MSSQL connector | 2020-06-22T10:28:28.930Z | Realm MSSQL connector | 1,732 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Is .NET MongoDb Realm SDK on the roadmap?What is the ETA?Will it be possible to create uwp apps (or other Windows apps) with MongoDb Realm?",
"username": "Alexei_Vinidiktov"
},
{
"code": "",
"text": "@Alexei_Vinidiktov We want to bring our .NET SDK to MongoDB Realm, but it is going to take more time than for other SDKs - because before the merger Stitch didn’t have a .NET SDK, and because the .NET team is currently understaffed.Yes the idea would be that you would be able to use this to create UWP apps - Definitely.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Did the original realm team quit?We need the .net version with frozen objects as well for some blazor server development.",
"username": "Void"
},
{
"code": "",
"text": "No they did not, they are actually some of our more competent engineers so we were able to move them to different projects that had a higher priority during our product launch. We are looking to rebuild the team as evidenced by our hiring of more .NET developers, see here:https://www.mongodb.com/careers/jobs/2200885This process takes time though so thank you for being patient.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I see ",
"username": "Void"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm SDK for .NET? | 2020-06-17T09:24:23.633Z | Realm SDK for .NET? | 2,219 |
null | [] | [
{
"code": "So, can anyone translate it to java ? ;)",
"text": "Hi, I need to know how to get all documents which field contains specified string in JAVA.\nI know in Mongo shell it look like this: ```\ndb.users.findOne({“username” : /.son./i});",
"username": "Zdziszkee_N_A"
},
{
"code": "String patternStr = \".son.\";\nPattern pattern = Pattern.compile(patternStr, Pattern.CASE_INSENSITIVE);\nBson filter = Filters.regex(\"username\", pattern);\ncollection.find(filter);\n",
"text": "With MongoDB Java driver use the find method - see MongoDB Java Find filter Examples. To build a regex query use the find filter Filters#regex method.The code can be like this:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "I did it just like u did, It works most the time but when I have string in DB for example “example string” and I try to look for “string” it will not find anything. Probably space mess it up.",
"username": "Zdziszkee_N_A"
},
{
"code": ".?String patternStr = \".son.?\";",
"text": "It is more about Regular Expressions (not Java programming). The character . (dot) essentially matches any character . To make the dot optional append a ? . The question mark makes the preceding token in the regular expression optional.String patternStr = \".son.?\";",
"username": "Prasad_Saya"
},
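For comparison with the shell query quoted at the start of the thread, a plain substring regex (with no surrounding dots) avoids the problem with spaces entirely; this is only a mongo shell sketch rather than Java:

// Case-insensitive "contains" match: finds "example string" when searching for "string".
db.users.find({ username: { $regex: "string", $options: "i" } })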
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Checking if a field contains a string | 2020-06-20T09:08:59.852Z | Checking if a field contains a string | 12,803 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "2020-06-12 16:39:22.331146+0700 MongoRealmApp1[70396:1756850] dnssd_clientstub ConnectToServer: connect() failed path:/var/run/mDNSResponder Socket:5 Err:-1 Errno:1 Operation not permitted\n\n2020-06-12 16:39:22.335923+0700 MongoRealmApp1[70396:1756850] [] nw_resolver_create_dns_service_locked [C1] DNSServiceCreateDelegateConnection failed: ServiceNotRunning(-65563)\n\n2020-06-12 16:39:22.336530+0700 MongoRealmApp1[70396:1756850] Connection 1: received failure notification\n\n2020-06-12 16:39:22.336595+0700 MongoRealmApp1[70396:1756850] Connection 1: failed to connect 10:-72000, reason -1\n\n2020-06-12 16:39:22.336637+0700 MongoRealmApp1[70396:1756850] Connection 1: encountered error(10:-72000)\n\n2020-06-12 16:39:22.345752+0700 MongoRealmApp1[70396:1756850] Task <C6E8F1BF-F3E6-4BCA-A2FA-CB664D2B4EE0>.<1> HTTP load failed, 0/0 bytes (error code: -1003 [10:-72000])\n\n2020-06-12 16:39:22.352272+0700 MongoRealmApp1[70396:1756849] Task <C6E8F1BF-F3E6-4BCA-A2FA-CB664D2B4EE0>.<1> finished with error [-1003] Error Domain=NSURLErrorDomain Code=-1003 \"A server with the specified hostname could not be found.\" UserInfo={_kCFStreamErrorCodeKey=-72000, NSUnderlyingError=0x600000c7fb10 {Error Domain=kCFErrorDomainCFNetwork Code=-1003 \"(null)\" UserInfo={_kCFStreamErrorCodeKey=-72000, _kCFStreamErrorDomainKey=10}}, _NSURLErrorFailingURLSessionTaskErrorKey=LocalDataTask <C6E8F1BF-F3E6-4BCA-A2FA-CB664D2B4EE0>.<1>, _NSURLErrorRelatedURLSessionTaskErrorKey=(\n\n\"LocalDataTask <C6E8F1BF-F3E6-4BCA-A2FA-CB664D2B4EE0>.<1>\"\n\n), NSLocalizedDescription=A server with the specified hostname could not be found., NSErrorFailingURLStringKey=https://stitch.mongodb.com/api/client/v2.0/app/testappone-ncerb/location, NSErrorFailingURLKey=https://stitch.mongodb.com/api/client/v2.0/app/testappone-ncerb/location, _kCFStreamErrorDomainKey=10}\n\nLogin failed: Error Domain=realm::app::JSONError Code=2 \"[json.exception.parse_error.101] parse error at line 1, column 1: syntax error while parsing value - invalid literal; last read: 'A'\" UserInfo={realm::app::JSONError=malformed json, NSLocalizedDescription=[json.exception.parse_error.101] parse error at line 1, column 1: syntax error while parsing value - invalid literal; last read: 'A'}\n",
"text": "I’ve been trying to get a macOS app work with MongoRealm Sync.I’ve set up the backend, I’ve tried creating a user and logging in from an iOS client.It worked.Now I’m trying to make the same code work on a macOS client.When I’m logging a user in I’m getting the following error message:",
"username": "Alexei_Vinidiktov"
},
{
"code": "",
"text": "@Alexei_Vinidiktov Can you share the version of RealmSwift you are using, The version of macOS, and a snippet of code showing how you are trying to login?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Alexei_VinidiktovWe you able to go through the entire process of Creating a Realm App via the Atlas console of the MongoDB website? You said the iOS app worked so I assume you were but we’re having some challenges with that so just checking.",
"username": "Jay"
},
{
"code": "struct Constants {\n // Set this to your Realm App ID found in the Realm UI.\n static let REALM_APP_ID = \"<my app id>\"\n}\nlet app = RealmApp(id: Constants.REALM_APP_ID)\n\napp.login(withCredential: AppCredentials(username: \"test\", password: \"123456\")) { [weak self](user, error) in\n DispatchQueue.main.sync {\n guard error == nil else {\n print(\"Login failed: \\(error!)\");\n return\n }\n\n print(\"Login succeeded!\");\n }\n };",
"text": "macOS 10.15.4\nRealmSwift 10.0.0-beta.2\nThe following code works fine in my iOS app, but it gives an error on macOS. Registering a user also works on IOS, but not on macOS.",
"username": "Alexei_Vinidiktov"
},
{
"code": "",
"text": "We you able to go through the entire process of Creating a Realm App via the Atlas console of the MongoDB website?\nYes.You said the iOS app worked so I assume you were but we’re having some challenges with that so just checking.\nYes. My iOS app is able to register and login users.The same code results in an error on macOS.I’m just starting with MongoDb Realm. I’m experimenting with iOS and macOS.",
"username": "Alexei_Vinidiktov"
},
{
"code": "",
"text": "I’ve figured it out. I had to add the “Outgoing connections (client)” capability in the Signing and Capabilities -> App Sandbox section of my Xcode project.So now logging users in works on macOS.",
"username": "Alexei_Vinidiktov"
},
{
"code": "",
"text": "worksWhat Code you have writeen for sync the collection data?Thanks anyway",
"username": "Vishal_Deshai"
}
] | Does MongoDB Realm Beta Sync work on macOS clients? | 2020-06-12T09:44:40.971Z | Does MongoDB Realm Beta Sync work on macOS clients? | 2,637 |
null | [
"mongoose-odm"
] | [
{
"code": "",
"text": "What is the difference between findOneAndDelete and findOneAndRemove and which method must be used to delete document.",
"username": "sudeep_gujju"
},
{
"code": "",
"text": "findOneAndRemoveThere is no such method in MongoDB. See the list of delete methods.To be short:",
"username": "slava"
},
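A small shell sketch of the practical difference (collection and filters are hypothetical): deleteOne() only reports how many documents were removed, while findOneAndDelete() hands back the removed document itself:

db.users.deleteOne({ name: "Alice" })
// { "acknowledged" : true, "deletedCount" : 1 }

db.users.findOneAndDelete({ name: "Bob" })
// returns the deleted document, e.g. { "_id" : ..., "name" : "Bob" }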
{
"code": "",
"text": "Like I wrote in the MongoDB University forum findOneAndRemove() is mongoose. So if you use mongoose, use findOneAndRemove() and if you use mongo use findOndAndDelete().",
"username": "steevej"
}
] | Delete and remove | 2020-06-22T13:07:48.975Z | Delete and remove | 4,258 |
null | [] | [
{
"code": "",
"text": "I have just installed MongoDB Compass for the first time but it can not pass \" Activating Plugins\" stage.\nPlease help so I can progress on my course",
"username": "Wellington_Majora"
},
{
"code": "",
"text": "What is the version you are installing\nWhich browser you are using\nMake sure no firewall,anti virus,vpn issues preventing installPlease check this link",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Wellington_Majora,Any update on this What is the version you are installing\nWhich browser you are using\nMake sure no firewall,anti virus,vpn issues preventing installPlease check this link~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "I have tried but dont seem to be able to solve this problem. I have tried reinstalling and it successfully installs. I can launch the app and it initialises, loads preferences, load plug ins but then becomes stuck at activating plugins. See attachedMongoCompass1905×1012 14.6 KB",
"username": "Wellington_Majora"
},
{
"code": "",
"text": "Hi @Wellington_Majora,Please share the following details :Name and version of your operating systemVersion of Compass that you have installedAlso, have you tried installing any older version of Compass ?~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "My OS is Windows 10 Home EditionMongoDB Compass 1.21.2My assignment was due today but I have not been able to complete it. Can I get an extension?",
"username": "Wellington_Majora"
},
{
"code": "",
"text": "Hi @Wellington_Majora,This Also, have you tried installing any older version of Compass ?Please try an older version of Compass 1.20.0.My assignment was due today but I have not been able to complete it. Can I get an extension?Unfortunately, you cannot get any extension. You can enrol in the next offering of M001.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "C:\\Windows\\System32\\wbem\n",
"text": "@Wellington_Majora you can just add this path to your pathThis worked well for me! Using win10",
"username": "Sakshi_Saxena"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Compass not moving past "Activating Plugins" | 2020-06-07T14:03:47.931Z | MongoDB Compass not moving past “Activating Plugins” | 1,637 |
null | [] | [
{
"code": "{\n\n \"name\": \"mean-course\",\n\n \"version\": \"0.0.0\",\n\n \"scripts\": {\n\n \"ng\": \"ng\",\n\n \"start\": \"ng serve\",\n\n \"build\": \"ng build\",\n\n \"test\": \"ng test\",\n\n \"lint\": \"ng lint\",\n\n \"e2e\": \"ng e2e\",\n\n \"serve\": \"nodemon server.js\"\n\n },\n[nodemon] restarting due to changes...\n[nodemon] starting `node server.js`\nMongooseServerSelectionError: bad auth Authentication failed.\n at NativeConnection.Connection.openUri (D:\\twt\\mean-cource\\mean-course\\node_modules\\mongoose\\lib\\connection.js:826:32)\n at Mongoose.connect (D:\\twt\\mean-cource\\mean-course\\node_modules\\mongoose\\lib\\index.js:335:15)\n at Object.<anonymous> (D:\\twt\\mean-cource\\mean-course\\backend\\app.js:21:10)\n at Module._compile (internal/modules/cjs/loader.js:1138:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1158:10)\n at Module.load (internal/modules/cjs/loader.js:986:32)\n at Function.Module._load (internal/modules/cjs/loader.js:879:14)\n at Module.require (internal/modules/cjs/loader.js:1026:19)\n at require (internal/modules/cjs/helpers.js:72:18)\n at Object.<anonymous> (D:\\twt\\mean-cource\\mean-course\\server.js:1:13)\n at Module._compile (internal/modules/cjs/loader.js:1138:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1158:10)\n at Module.load (internal/modules/cjs/loader.js:986:32)\n at Function.Module._load (internal/modules/cjs/loader.js:879:14)\n at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)\n at internal/main/run_main_module.js:17:47 {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map {\n 'cluster0-shard-00-01-scb3m.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-02-scb3m.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-00-scb3m.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: null\n }\n}\nConnection Failed..\n",
"text": "Hi,below the code of my package.json file.When i start to run server using Commond : npm run servethen server started and then failed like below this.Can please help on this.",
"username": "Sunil_Malviya"
},
{
"code": "",
"text": "Are you able to connect to your mongodb from shell?\nmongo “your_connection_string”\nCheck your env file for any URI issues",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "yes, but after connected, its failed in a 5 second something",
"username": "Sunil_Malviya"
},
{
"code": "",
"text": "So your connection from shell failed in few seconds?\nWith what errorDid you try npx serve",
"username": "Ramachandra_Tummala"
},
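“bad auth Authentication failed” usually points at the credentials in the connection string. As a sketch only (the database user, password, and database name are placeholders, and the password must be URL-encoded if it contains special characters), the mongoose connection would look roughly like this:

const mongoose = require("mongoose");

// Placeholder credentials - use a database user created in Atlas, not the Atlas account login.
const uri = "mongodb+srv://<dbUser>:<urlEncodedPassword>@cluster0-scb3m.mongodb.net/<dbName>?retryWrites=true&w=majority";

mongoose.connect(uri, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log("Connected to Atlas"))
  .catch(err => console.error("Connection Failed..", err));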
{
"code": "node server.js",
"text": "I tried but and everything done in my App, but due to this i stuck, its not showing error.\njust showing after few second like Connection Faild.\nPlease help on this.[nodemon] restarting due to changes…\n[nodemon] starting node server.js\nConnection Failed…",
"username": "Sunil_Malviya"
},
{
"code": "",
"text": "Where are you running mongodb? locally, on your computer, or remotely, like on Atlas or a server somewhere?",
"username": "Michael_Lynn"
}
] | Error: while run server..Connection Failed | 2020-06-17T16:56:52.501Z | Error: while run server..Connection Failed | 3,779 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "I use Mongodb 3.6 + .net driver (MongoDb.Driver 2.10) to manage our data. Recenyly, we’ve noticed that our services (background) consume a lot of memory. After anaylyzing a dump, it turned out that there’s a mongo object called BsonChunkPool that always consumes around 0.5 GB of memory. Is it normal ? I cannot really find any valuable documentation about this type, and what it actually does. Can anyone help ?",
"username": "Maciej_Pakulski"
},
{
"code": "BsonChunkPool",
"text": "Hi @Maciej_Pakulski, and welcome to the forum!I cannot really find any valuable documentation about this type, and what it actually does. Can anyone help ?BsonChunkPool exists to manage the recycle of memory buffers (chunks) to ease the amount of work the garbage collector has to do. Please see the comment on CSHARP-3054 for more information.Best regards,\nWan.",
"username": "wan"
}
] | Large size of BsonChunkPool | 2020-06-05T19:02:32.406Z | Large size of BsonChunkPool | 2,150 |
null | [
"dot-net"
] | [
{
"code": "public class Inbox : TDocument\n{\n public string Name {get;set;}\n public List<MongoDBRef> WorkOrdersRef {get;set;}\n public List<InboxRule> Rules {get;set;}\n\n [BsonIgnore] public List<WorkOrder> WorkOrders {get;set;}\n}\n\n\nInboxRule.cs\n\npublic InboxRule : TDocument\n{\n public MongoDBRef ServiceRef {get;set;}\n public MongoDBRef WorkOrderStatusRef {get;set;}\n\n\n [BsonIgnore] public Service Service {get;set;}\n [BsonIgnore] public WorkOrderStatus WorkOrderStatus {get;set;}\n}\n",
"text": "Hello,\ni want to know how to do complicated nested lookup in c#see this code for example:\nInbox.cshow to perform nested lookup or join with one call to return an array of Inboxes with its Rules and the Rules other objects such as Service and WorkOrderStatus",
"username": "Moataz_Al-HANASH"
},
{
"code": "RulesInboxInboxInboxMongoDBRef_id",
"text": "Hi @Moataz_Al-HANASH , welcome!how to perform nested lookup or join with one call to return an array of Inboxes with its Rules and the Rules other objects such as Service and WorkOrderStatusBased on the class mapping, it looks like Rules is an embedded list inside of Inbox. You should be able to query any Inbox or multiple Inbox and be able to retrieve the Rules as well.Also you may find $lookup aggregation stage a useful resource.If you still have further questions, it would be useful to provide:Note: I noticed that you have MongoDBRef as type, if that refers to DBRefs depending on the use case you may find a manual reference of _id would be easier to use. See also database references.Regards,\nWan.",
"username": "wan"
},
{
"code": "JoinLINQWorkOrdersWorkOrdersRefJoinLINQInboxRuleServiceWorkOrderStatus[BsonIgnore]InboxRules",
"text": "furthermore :A)How to perform Join using LINQ or ‘Aggregation’ to fill the property WorkOrders based on the references in the property WorkOrdersRefB)How to perform Join using LINQ or ‘Aggregation’ to fill the properties of the embedded document InboxRule known as Service and WorkOrderStatusOr simply, how to populate the [BsonIgnore] -properties for an Inbox object and it’s embedded document property known as Rules",
"username": "Moataz_Al-HANASH"
},
{
"code": "RulesInboxServiceWorkOrderStatusRulesInboxWorkOrderRulesServiceWorkOrderStatusInboxInbox {\n WorkOrders : [\n {\n WorkOrderId : \"123\",\n ...\n }\n {\n WorkOrderId : \"445\",\n ...\n }\n ],\n Rules : [\n { \n Service : {\n Name : \"Service 1\"\n } ,\n WorkOrderStatus : {\n Name : \"Status 1\"\n } \n }\n { \n Service : {\n Name : \"Service 2\"\n } ,\n WorkOrderStatus : {\n Name : \"Status 2\"\n } \n }\n ]\n}\n_dbContext.DbSet<Inbox>().Aggregate()\n .Lookup(\"WorkOrder\", inboxKeyToWorkOrders, woKey, navProp) // to fill WorkOrders property in Inbox\n .Unwind(a => a.Rules, new AggregateUnwindOptions<InboxRule>() { PreserveNullAndEmptyArrays = true })\n .Lookup(\"Service\", inboxRuleKeyToService, serviceId, inboxRuleNavPropService) // to fill Service property in InboxRule\n .Lookup(\"WorkOrderStatus\", inboxRuleKeyToStatus, statusId, inboxRuleNavPropStatus) // to fill WorkOrderStatus property in InboxRule\n .ToList();\nInbox",
"text": "Thanks for our replay.that is right , Rules are embedded inside of Inbox object\nbut if you notice Service and WorkOrderStatus are NOT embedded inside Rules , i want to retrieve an Inbox object joined with WorkOrder and the embedded Rules should also be joined with Service and WorkOrderStatus and all mapped back to Inbox objectso basiclly im looking for something like this:i tried this in C# to get the results needed but failed:but the problem is the above code does not return list of Inbox , it returns list of InboxRulehow to perform nested lookup and map results to Inbox object ?Iam using 2.10.4 MongoDB .NET/C# driver\nalso using Atlas 4.2.6 as my MongoDb serverThanks in advance",
"username": "Moataz_Al-HANASH"
},
{
"code": "",
"text": "@wan\ni added a replay sir",
"username": "Moataz_Al-HANASH"
},
{
"code": "InboxWorkOrderServiceWorkOrderStatus",
"text": "Hi @Moataz_Al-HANASH ,Based on your C# code snippet, there are 4 collections involved (Inbox, WorkOrder, Service, and WorkOrderStatus), and you’re trying to perform multiple lookups to combine them.Could you clarify the question by providing example documents for the collections ?\nAlso, what does the current output document that you’re getting ?Regards,\nWan.",
"username": "wan"
},
{
"code": " {\n _id : ObjectId('3213asdaadda'),\n Name : 'InboxName',\n WorkOrdersRef : [\n 0:ObjectId('1'),\n 1:ObjectId('2'),\n 2:ObjectId('3'),\n ],\n Rules : [\n {\n ServiceRef: ObjectId('2134'),\n WorkOrderStatusRef: ObjectId('213asd4'), \n },\n {\n ServiceRef: ObjectId('21341523'),\n WorkOrderStatusRef: ObjectId('2112131134'), \n }\n ]\n }\n {\n _id:Object('1'),\n ... *some other not related to the question fields and arrays*\n }\n {\n _id:Object('2'),\n ... *some other not related to the question fields and arrays*\n }\n {\n _id:Object('3'),\n ... *some other not related to the question fields and arrays*\n }\n {\n _id : ObjectId('12313'),\n Name : 'Initiated'\n },\n{\n _id : ObjectId('12313'),\n Name : 'Closed'\n}\n",
"text": "@wanThis is the example documentsInbox DocumentWorkOrder document:WorkOrderStatusas for second question , i dont get a result yet",
"username": "Moataz_Al-HANASH"
},
{
"code": "inboxworkorderworkorderstatusvar collection = database.GetCollection<Inbox>(\"inbox\"); \nvar docs = collection.Aggregate()\n .Lookup(\"workorder\", \"WorkOrdersRef\", \"_id\", \"WorkOrders\")\n .Unwind(\"Rules\")\n .Lookup(\"workorderstatus\", \"Rules.WorkOrderStatusRef\", \"_id\", \"Rules.WorkOrderStatus\")\n .ToList();\n[BsonIgnore]InboxRulesLookupUnwindBsonDocument",
"text": "Hi @Moataz_Al-HANASH,Given the example document for inbox, workorder, and workorderstatus collections, you could perform an aggregation as below example:how to populate the [BsonIgnore] -properties for an Inbox object and it’s embedded document property known as RulesThe use of Lookup and Unwind will change the class shape, I’d recommend to either define a new class that matches the result/output type, or just use BsonDocument .Regards,\nWan.",
"username": "wan"
},
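For readers who prefer the shell, the C# pipeline above corresponds roughly to the aggregation below. The collection names follow the ones used in this thread, the trailing $group (which folds the unwound Rules back into one document per Inbox) is an assumption about the desired output shape, and note that each $lookup produces an array field:

db.inbox.aggregate([
  { $lookup: { from: "workorder", localField: "WorkOrdersRef", foreignField: "_id", as: "WorkOrders" } },
  { $unwind: { path: "$Rules", preserveNullAndEmptyArrays: true } },
  { $lookup: { from: "service", localField: "Rules.ServiceRef", foreignField: "_id", as: "Rules.Service" } },
  { $lookup: { from: "workorderstatus", localField: "Rules.WorkOrderStatusRef", foreignField: "_id", as: "Rules.WorkOrderStatus" } },
  { $group: { _id: "$_id", Name: { $first: "$Name" }, WorkOrders: { $first: "$WorkOrders" }, Rules: { $push: "$Rules" } } }
])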
{
"code": "",
"text": "@wan\nThanks for your replaycan you please provide me a way to map the result to ‘Inbox’ class ?",
"username": "Moataz_Al-HANASH"
}
] | Nested lookup 'join' using .net driver | 2020-06-14T07:10:26.645Z | Nested lookup ‘join’ using .net driver | 10,268 |
[
"charts"
] | [
{
"code": "",
"text": "Hi we have restructured our data. Now when I want to open my data source on charts it gives me the following error “there was an error when loading the data source fields”. The collection shows up on compass and i can query it in the shell. The collection currently contains 510 fields , is there a maximum amount that can be pulled into charts? Attached is a screenshot..\nToo add to this my collection size is 36MB , is there a limit on what collection sizes can be?And is the limit only for mongo charts?",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Hi @johan_potgieter -There shouldn’t be any practical limit on the number of fields we show in the field panel. Also 36MB is not a large collection. For very large collections there can be a risk of timeouts when getting the data for the chart, but the field sample runs on a small number of documents so it should always work.Unfortunately there’s a bug in the current version of Charts where we don’t show the error message when the field sampling fails. This makes it hard to diagnose. We’ll have a new release out next week which fixes this, but in the meantime you can find the error if you open your browser’s dev tools and use the Network tab. You should see a red “call” entry with the error message in the response body. Let me know if you’re able to find this or if you need further help.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Thanks @tomhollander. I ran it again now and attached is the error message i get.Mongodb1430×737 67.7 KB",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Through some trial and error it seems to me that the error occurs when i reach close to a 1000 fields in a collection. Although further experimentation is required. Any advise will be appreciated.",
"username": "johan_potgieter"
},
{
"code": "call",
"text": "Can you click the red call entry and see the details of the response?",
"username": "tomhollander"
},
{
"code": "",
"text": "Sure attached is screenshot of all the info.md11232×597 31 KB",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Second Part.md21232×612 74.8 KB",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Third Part.md31230×453 40.5 KB",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Sorry can you click the Response tab? That’s where the error message will be.",
"username": "tomhollander"
},
{
"code": "",
"text": "And sorry then another question. Is there a way to subtract 2 arrays that are the same length from one another and produce the value in another array?",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Ah I found it. Here is the error.\n{“error”:“(Location16820) Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.”,“error_code”:“MongoDBError”,“link”:“App Services”}",
"username": "johan_potgieter"
},
{
"code": "chartsMonoRepl:PRIMARY> db.arraydiff.find()\n{ \"_id\" : ObjectId(\"5eefd7841994dbe8050da7b8\"), \"a\" : [ 10, 20, 30, 40, 50 ], \"b\" : [ 1, 2, 3, 4, 5 ] }\nchartsMonoRepl:PRIMARY> db.arraydiff.aggregate([\n... {\n... $addFields: {\n... diff: {\n... $map: {\n... input: {\n... $zip: {inputs: [\"$a\", \"$b\"]}\n... },\n... as: \"el\",\n... in: {\n... $subtract: [\n... {$arrayElemAt: [\"$$el\", 0]},\n... {$arrayElemAt: [\"$$el\", 1]}\n... ]\n... }\n... }\n... }\n... }\n... }\n... ]);\n{ \"_id\" : ObjectId(\"5eefd7841994dbe8050da7b8\"), \"a\" : [ 10, 20, 30, 40, 50 ], \"b\" : [ 1, 2, 3, 4, 5 ], \"diff\" : [ 9, 18, 27, 36, 45 ] }\n",
"text": "Hi @johan_potgieter -Regarding your array question, assuming the two arrays you want to subtract from each other are in different fields in the same document, you could use the following approach:Thanks for sending the full error message for your other problem. I’m still looking into why this is happening, but as you suggested it may be to do with having such a large number of fields. I’ll let you know when I have any further info on this.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Great i will give it a try. Thanks for the support Tom.",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Works perfectly. Thanks so much.",
"username": "johan_potgieter"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Charts error loading data source fields | 2020-06-17T12:35:21.938Z | MongoDB Charts error loading data source fields | 6,530 |
|
null | [
"server"
] | [
{
"code": "",
"text": "Hello All,Hope you’re doing good…I would like to know how to build mongodb with source code. We have a requirement to install mongodb from source code instead of RPM’s . I downloaded source code from Try MongoDB Atlas Products | MongoDBPlease help me out in this regard.Thanks,\nSatya",
"username": "satya_dommeti"
},
{
"code": "",
"text": "Hi @satya_dommeti and welcome to the community!I would recommend taking a look at the Building MongoDB docs from the public MongoDB GitHub repo.Best of luck.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @Doug_Duncan Thank you for your response. The document you shared is very helpful.I am trying build mongodb from source code but getting the below error. It would be great if you help me in this regard.scons: Reading SConscript files …\nscons: running with args /usr/local/bin/python3.8 buildscripts/scons.py install-mongod\nInvalid MONGO_VERSION ‘’, or could not derive from version.json or git metadata. Please add a conforming MONGO_VERSION=x.y.z[-extra] as an argument to SConsOS version :\nRed Hat Enterprise Linux Server release 7.6 (Maipo)Thanks,\nSatya",
"username": "satya_dommeti"
},
{
"code": "mongod$ git clone https://github.com/mongodb/mongo.git\n\n$ cd mongo\n\n$ git checkout r4.2.8\n\n$ pip3 install -r etc/pip/compile-requirements.txt\n\n$ python3 buildscripts/scons.py mongod\n... 25 minutes later ...\nInstall file: \"build/opt/mongo/mongod\" as \"mongod\"\nscons: done building targets.\n\n$ build/opt/mongo/mongod --version\ndb version v4.2.8\ngit version: 43d25964249164d76d5e04dd6cf38f6111e21f5f\nallocator: system\nmodules: none\nbuild environment:\n distarch: x86_64\n target_arch: x86_64 \nmongodinstall-mongodcould not derive from version.json or git metadatagit clone",
"text": "Hi Satya,I tried to compile mongod using the documentation pointed at by @Doug_Duncan on my Mac, and it seems straightforward. Here’s exactly what I did:Seems to be working just fine. One caveat is to check out the build instructions for the version you wanted to build, since for version 4.2.8 that I tried, the build target is called mongod. For the master branch, it’s called install-mongod.The error you posted: could not derive from version.json or git metadata seems to indicate that you did not do a git clone of the mongo repository, so it doesn’t know which version to build. You might want to check if the step-by-step method using git clone like I did above works first before trying anything else, to confirm that you have the required tools.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "x.y.z",
"text": "I would like to know how to build mongodb with source code. We have a requirement to install mongodb from source code instead of RPM’s .Hi Satya,Please provide some more detail on your requirements:What version of MongoDB Server (x.y.z) are you trying to build?Build requirements and instructions may vary between major server releases, so make sure you are following relevant instructions and have met all of the prerequisites (versions of C++ compiler, Python, and SCons are especially important).What is your motivation for building from source?If you aren’t modifying source code, there may be an alternative approach to suggest (for example, offline installation of RPMs for an airgapped server). If you are trying to build a version of MongoDB server that doesn’t have prebuilt binaries for RHEL 7, that would also be helpful context.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "x.y.z$ python3.8 buildscripts/scons.py install-mongod MONGO_VERSION=4.2.3 \n$ python3.8 buildscripts/scons.py install-mongod MONGO_VERSION=4.2.3\nscons: Reading SConscript files ...\nscons: running with args /usr/local/bin/python3.8 buildscripts/scons.py install-mongod MONGO_VERSION=4.2.3\nscons version: 3.0.4\npython version: 3 8 3 'final' 0\nCC is gcc\ngcc found in $PATH at /opt/rh/devtoolset-8/root/usr/bin/gcc\nCXX is g++\ng++ found in $PATH at /opt/rh/devtoolset-8/root/usr/bin/g++\nChecking whether the C compiler works... yes\nChecking whether the C++ compiler works... yes\nChecking that the C++ compiler can link a C++ program... yes\nChecking if C++ compiler \"g++\" is GCC... yes\nChecking if C compiler \"gcc\" is GCC... yes\nDetected a x86_64 processor\nChecking if target OS linux is supported by the toolchain... yes\n**Checking if C compiler is GCC 8.2 or newer...no**\n**Checking if C++ compiler is GCC 8.2 or newer...no**\nERROR: Refusing to build with compiler that does not meet requirements\nSee /home/mongoadm/mongo-r4.2.3/build/scons/config.log for details\n$ tail /home/mongoadm/mongo-r4.2.3/build/scons/config.log\n |int main(int argc, char* argv[]) {\n | return 0;\n |}\n |\nCompiling build/scons/opt/sconf_temp/conftest_12.o\nbuild/scons/opt/sconf_temp/conftest_12.cpp:7:2: error: #error GCC 8.2 or newer is required to build MongoDB\n #error GCC 8.2 or newer is required to build MongoDB\n ^\nscons: Configure: no\n$/opt/rh/devtoolset-8/root/usr/bin/gcc --version\ngcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)\n\n$/opt/rh/devtoolset-8/root/usr/bin/g++ --version\ng++ (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)\n\ncat /etc/redhat-release\nRed Hat Enterprise Linux Server release 7.6 (Maipo)\npython3.8 buildscripts/scons.py install-mongod MONGO_VERSION=4.2.3 --CC=/opt/rh/devtoolset-8/root/usr/bin/gcc --CXX=/opt/rh/devtoolset-8/root/usr/bin/g++ --disable-warnings-as-errors\n",
"text": "Hi @Stennie_X Thanks for your reply. please find my inline commentsWhat version of MongoDB Server ( x.y.z ) are you trying to build?I’m trying to build MongoDB server 4.2.3. The error which I posted is resolved after adding a parameter MONGO_VERSION=4.2.3 as below.But it is refusing to build with compiler that does not meet requirement.It is showing that GCC 8.2 or newer is required to build MongoDB. But I already installed GCC 8.3 versionI tried below command also but no luckWhat is your motivation for building from source?This is my client requirement to provide security. Client don’t want to install mongodb with RPM’sRegards,\nSatya",
"username": "satya_dommeti"
},
{
"code": "docs/building.mdinstall-mongodmastermongodgccg++--",
"text": "Hi Satya,Please make sure you are following the 4.2 version of Building MongoDB instructions (which should be docs/building.md in your source checkout).The SCons install-mongod target used in your output is for a newer server release branch (4.4 or master) and should be mongod for 4.2.It looks like you have an appropriate version of gcc/g++ installed, but perhaps there are conflicting versions in your path.–CC=/opt/rh/devtoolset-8/root/usr/bin/gcc --CXX=/opt/rh/devtoolset-8/root/usr/bin/g++These options should be set as environment variables, not as parameters. Try including these in your SCons command line without the leading --:CC=/opt/rh/devtoolset-8/root/usr/bin/gcc CXX=/opt/rh/devtoolset-8/root/usr/bin/g++This is my client requirement to provide security.I noticed you are building MongoDB 4.2.3. The latest 4.2 release is currently 4.2.8 (see MongoDB 4.2 Minor Releases). I strongly recommend staying current with minor releases for the latest security and stability fixes.If your client is concerned about security and stability, the official packages should provide more assurance on that front than building from source. All builds are extensively tested via our public (and open source) Evergreen CI and published packages are signed for verification.If you build from source, there may be variations in your build environment that cause unexpected issues.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to install mongodb from source code | 2020-06-16T16:29:16.152Z | How to install mongodb from source code | 18,105 |
null | [] | [
{
"code": "",
"text": "Any there any specific issues for deploying MongoDB on btrfs? I am using SSD for storage.I’ve read in docs that XFS is the recommended one. But I’m also planning on having a snapshot backup feature in future, or might even have to extend across multiple disks.Will there be any performance issues in case of btrfs?",
"username": "Dushyant_Bangal"
},
{
"code": "",
"text": "Follow the docs and use XFS. In many databases it is by default more performant. In my experience(own benchmarks ~mongo 3.4) the benefits of btrfs and zfs are not worth the performance trade off.For snapshots you can use XFS on top of LVM.Specifically for backups I have previously deployed a hidden, non-voting member on ZFS as part of a replica set.",
"username": "chris"
},
{
"code": "",
"text": "My workload will mainly consist of write and read operations. There will be rarely any update operations. Will btrfs still have adverse effect of performance?",
"username": "Dushyant_Bangal"
},
{
"code": "",
"text": "Hi,I would still recommend XFS since that’s what’s been extensively tested and proved to be the best solution. Other filesystems may still work fine, but there could be some hidden caveats or performance implications, so that the drawback may outweighs the benefits, as @chris had pointed out.As one example, there is an old ticket that identified performance issues with Ext4 with WiredTiger: SERVER-18314. There might be similar (undiscovered) issues with btrfs.Having said that, if you think that btrfs serves your use case a lot better than XFS and you’re willing to take the chance, you might want to test the configuration extensively before deploying it in production to minimize any surprises.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "storage.dbPathnodatacow",
"text": "Welcome to the community @Dushyant_Bangal!Btrfs is not widely used or tested with MongoDB. An important consideration is that by default Btrfs is a Copy On Write (COW) filesystem which will not be ideal for use cases with frequently updated data files (for example, databases and VMs). Fragmentation can lead to significant storage wear which will reduce the lifespan of your storage device.As mentioned on the Btrfs Gotchas wiki page:Files with a lot of random writes can become heavily fragmented (10000+ extents) causing thrashing on HDDs and excessive multi-second spikes of CPU load on systems with an SSD or large amount a RAMThe consequences of write activity over time are difficult to predict, but you could try simulating your predicted workload. It sounds like you may be planning for an insert-only workload, which should be less problematic for Btrfs than a workload including updates and deletes.If you do plan on using Btrfs, I would definitely look into tuning Btrfs mount options for the volume hosting your MongoDB storage.dbPath (for example, possibly using the nodatacow option for a workload involving updates & deletes). Btrfs also has some features like checksums and compression which duplicate work performed in MongoDB’s WiredTiger storage engine by default, so if performance is a consideration you probably don’t want to have overlapping features enabled.It would be excellent if you can share any Btrfs experience with the community so there is more information on what does (or doesn’t) work well with MongoDB. Btrfs tuning experience for other databases should also be relevant.For a production environment I would strongly recommend using XFS for the broadest experience and support. You can use LVM (Logical Volume Manager) with XFS for more operational flexibility, including creating logical volumes spanning multiple physical disks and taking consistent snapshots.Regards,\nStennie",
"username": "Stennie_X"
}
] | MongoDB with btrfs | 2020-06-21T08:58:18.385Z | MongoDB with btrfs | 3,480 |
null | [
"dot-net",
"transactions"
] | [
{
"code": "var client = new MongoClient(...);\nvar database = client.GetDatabase(\"db1\");\nvar session = client.StartSession();\nvar options = new TransactionOptions(ReadConcern.Local, ReadPreference.Primary, WriteConcern.W1);\nsession.StartTransaction(options);\nvar collection = database.GetCollection<ConnectionEntity>(\"connections\");\nvar newConnection = new ConnectionEntity() \n{\n Id = \"1234\",\n Name = \"test1234\"\n};\ncollection.InsertOne(newConnection);\nsession.AbortTransaction();\n",
"text": "I’m currently experimenting with transactions and experiencing some issues which raised the following questions:My implementation with the latest MongoDB.Driver (2.10.4) looks as follows:I expected that the new document should not be inserted in the first place or should be removed after calling AbortTransaction.Any hints would be much appreciated.",
"username": "niklr"
},
{
"code": "InsertOne()InsertOne()",
"text": "Hi,I believe you need to use the other InsertOne() that uses client session. If you’re using the InsertOne() method without specifying client session, the insert was performed outside of the session/transaction, so it won’t be rolled back since the session/transaction didn’t know about it.There are some code examples using various languages (including C#) in the Transactions page.Best regards,\nKevin",
"username": "kevinadi"
},
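The same point in mongo shell terms (a sketch that assumes a replica set and reuses the db1/connections names from the question): only writes issued through the session's own handles take part in the transaction, so aborting rolls them back:

var session = db.getMongo().startSession();
session.startTransaction();

// Collection handle obtained FROM the session - this insert is part of the transaction.
var connections = session.getDatabase("db1").getCollection("connections");
connections.insertOne({ _id: "1234", Name: "test1234" });

session.abortTransaction();  // the insert above is rolled back
session.endSession();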
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Aborting a transaction does not rollback an inserted document | 2020-06-20T13:37:34.824Z | Aborting a transaction does not rollback an inserted document | 6,853 |
null | [
"replication",
"installation"
] | [
{
"code": "I added remaining nodes already using mongod -f \"mongod_2.conf\"\nbut still facing issue to add remaining 2 nodes.\nplease help ASAP\nThank you",
"text": "Unable to deploy replica set in lab of m103 . everything is working fine until logging in as admin but while adding new node its throwing error as ```\nNewReplicaSetConfigurationIncompatible",
"username": "Charan_Narukulla"
},
{
"code": "",
"text": "NewReplicaSetConfigurationIncompatibleIs this related to mongodb university course lab?\nPlease post it on https://discourse.university.mongodb.comPlease check your config files for all 3 nodes\nYou may be having different names for replicaSetName",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Deploying replica set issue | 2020-06-20T21:54:10.260Z | Deploying replica set issue | 2,063 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi,I have created a replicaset of 3 database nodes and one arbiter.\nWhere 2 database nodes in Production environment and DR arbiter and another secondary DB node. Arbiter also holds a vote in my configuration,\nAs if one node fails,\nin order to promote one secondary as a primary, there must be at least 3 votes.\nIs my configuration is wrong?\n4 voting members including arbiter is wrong or correct?\nPlease help?",
"username": "Kasun_Magedaragama"
},
{
"code": "MongoDB write concernwrite concern",
"text": "Hello @Kasun_Magedaragama ,\nand welcome to the community. We’re glad to have you join us and we look forward to your contributions.4 voting members including arbiter is wrong or correct?An even number of voting members is always a bad choice for elections. You generally add a cheap (non dataholding) arbiter to archive an odd number of voting members - you can look at an arbiter as an tie-breaker. The MongoDB Documentation provides further detail–\nBeside this you should also keep the MongoDB write concern in mind.A write concern of zero means that the application doesn’t wait for any acknowledgments. The write might succeed or fail. The application doesn’t really care. It only checks that it can connect to the node successfully.The default write concern is one . That means the application waits for an acknowledgment from a single member of the replica set, specifically, the primary. This is a baseline guarantee of success.Write concerns greater than one increase the number of acknowledgments to include one or more secondary members. Higher levels of write concern correspond to a stronger guarantee of write durability.Majority is a keyword that translates to a majority of replica set members. Divide the number of members by two and round up. So this three-member replica set has a majority of two. A five-member replica set would have a majority of three-- so on and so forth. The nice thing with majority is that you don’t have to update your write concern if you increase the size of your replica set.Hope this helps\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Dear Michael,Thanks a lot for you response. What we thought is, if one DB node goes down,\nIn order to elect a Primary, there should be remaining 3 nodes right?\nThat means 2 DB nodes + Arbiter?Correct me if im wrong",
"username": "Kasun_Magedaragama"
},
{
"code": "",
"text": "Hello @Kasun_Magedaragamasorry for the late response, unfortunately work kept me busy.I struggle with a clear answer as well. If your primary member goes down when running in a standard configuration (odd number of hosts) then you run into the risk of a tie when voting. Maybe @Stennie_X can add more information why an odd number is the recommended setup.My understanding of the voting process is the following:The primary node is the first point of contact for any client communicating to the database. Even if secondaries go down, the client will continue communicating with the node acting as primary until the primary is unavailable.Elections take place whenever there’s a change in topology. Reconfiguring a replica set will always trigger an election that may or may not elect a new primary. But you will definitely see a new primary elected in two cases:The method to figure out which secondary will run for election begins with priority and whichever node has the latest copy of the data. Let’s say every node in your set has the same priority, which is the default. And this node has the latest copy of the data. So it’s going to run for election, and then automatically vote for itself. Then it’s going to ask the other two node(s) for support in the election. The response should be: you have a pretty recent copy of the data, you seem like a good candidate. Then they’ll pledge their support as well. This node will be elected primary.There is also the very slim possibility that two nodes run for election simultaneously. But in a replica set with an odd number of nodes, this doesn’t matter.These two nodes are both going to run, which means they’re both going to vote for themselves. And then this node is going to essentially decide which one of these nodes becomes primary by virtue of a tiebreaker.\nThis becomes a problem when we have an even number of voting members in a set.If two secondaries are running for election simultaneously and there are an even number of remaining nodes in the set, there’s a possibility that they split the vote and there’s a tie. Now a tie is not the end of the world, because the nodes will just start over and hold another election. The problem with repeating elections over and over is that any applications accessing the data will have to pause all activity and wait until a primary is elected. An even number of nodes increases the chances an election has to be repeated, so we generally try to keep an odd number in our replica sets.Another important aspect of elections is the priority assigned to each node in a set. Priority is essentially the likelihood that a node will become the primary during an election. The default primary for a node is 1, and any node with priority 1 or higher can be elected primary. You can increase the priority of a node if you want it to be more likely at this node becomes primary. But changing this value alone does not guarantee that.\nYou can also set the priority of node to be 0 if you never want that node to become primary. A priority 0 node can still vote in elections, but it can’t run for election.Michael",
"username": "michael_hoeller"
},
{
"code": "floor(voting_members_in_cluster / 2 ) +1",
"text": "To elect a primary a majority of voting members must vote for the candidate.I believe majority is expressed as floor(voting_members_in_cluster / 2 ) +1. ( I cannot find a reference for this in the documentation right now)In the case an even number sized cluster is split(with 2 nodes partitioned on each side) it is impossible to gain a majority and hence no candidate elected for primary.",
"username": "chris"
},
{
"code": "",
"text": "HiI am aware of that rule but in a slightly different context:MongoDB write concern is an acknowledgment mechanism that developers can add to write operations.\nHigher levels of acknowledgment produce a stronger durability guarantee. Durability means that the write has propagated to the number of replica set member nodes specified in the write concern.Majority here is defined as a simple majority of replica set members. So divide by two, and round up.\nTaken from M103: Basic Cluster AdministrationI was under the same assumption as you. But I found nowhere a statement that an election is blocked with an even number of members and I doubt that it would.Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "In the case an even number sized cluster is split(with 2 nodes partitioned on each side) it is impossible to gain a majority and hence no candidate elected for primary.@chris you are correct. If you have a four node replicaset with two nodes each in two different data centers and there is a network partition making it so you cannot get a majority vote count, then you will be left with no PRIMARY server and four SECONDARY servers. This is why it is recommended to have an odd number of voting members with a majority of nodes in the PRIMARY data center, or even better, spread across three or more data centers if possible.",
"username": "Doug_Duncan"
},
{
"code": "SECONDARYmajoritywtimeoutfloor(voting_members_in_cluster / 2 ) +1rs.status()majorityVoteCountwriteMajorityCount",
"text": "Welcome to the community @Kasun_Magedaragama! 4 voting members including arbiter is wrong or correct? As @michael_hoeller mentioned, an odd number of voting members is recommended. The addition of an arbiter to your 3 member deployment adds operational risk without providing any benefit.Primaries are elected (and sustained) based on a consensus vote from a strict majority (>50%) of configured voting members. The strict majority requirement is to avoid situations that might otherwise allow more than one primary (for example, a network partition separating voting members equally between data centres).If you add additional members to a replica set, there should be some motivation such as increasing data redundancy or improving fault tolerance. Adding an arbiter to a 3 member replica set does not contribute to either of those aspects.With 3 voting members, the strict majority required to elect a primary is 2 votes which means there is a fault tolerance of 1 member that can be unavailable. If a majority of voting members aren’t available, a primary cannot be elected (or sustained) and all data-bearing members will transition to SECONDARY state.With 4 voting members, the strict majority required to elect a primary is 3 votes which means there is still a fault tolerance of 1 despite the added member. There is also no improvement in data redundancy, since an arbiter only participates in elections.However, if the 4th member is an arbiter this introduces some potential operational complications when the replica set is running in a degraded state (elected primary with one data-bearing member unavailable):An arbiter contributes to the voting majority for a replica set election but cannot contribute to acknowledgement of write operations (since an arbiter doesn’t write any data).If you want to avoid potential rollback of replica set writes, a majority write concern is recommended. However, a majority write concern cannot be acknowledged if your replica set currently only has a voting majority (using an arbiter) rather than a write majority. Operations with majority write concern will either block indefinitely (default behaviour) or time out (if you have specified the wtimeout option).Cache pressure will be increased because more data will be pinned in cache waiting for the majority commit point to advance. Depending on your workload, this can cause significant problems if your replica set is in degraded state. There is a startup warning for Primary-Secondary-Arbiter (PSA) deployments which also would apply to your PSSA scenario in MongoDB 4.4 and earlier: Disable Read Concern Majority. For MongoDB 5.0+, please see Mitigate Performance Issues with PSA Replica Set.Typically, no.An arbiter can still be useful if you understand the operational caveats and are willing to compromise robustness for cost savings. You generally add a cheap (non dataholding) arbiter to archive an odd number of voting members - you can look at an arbiter as an tie-breaker. Considering an arbiter as a tie-breaker is the best possible interpretation, but adding an arbiter does not have the same benefits as a secondary.Where possible I would strongly favour using a data-bearing secondary over an arbiter for a more robust production deployment. What we thought is, if one DB node goes down, In order to elect a Primary, there should be remaining 3 nodes right? The required voting majority is based on the configured number of replica set members, not on the number that are currently healthy. 
The voting majority for a replica set with 4 voting members is always 3 votes.Think of replication as analogous to RAID storage: your configuration determines the level of data redundancy, performance, and fault tolerance. If there is an issue with availability of one (or more) of your replica set members, the replica set will run in a degraded mode which allows continued write availability (assuming you still have a quorum of healthy voting members to sustain a primary) and read availability (as long as at least one data-bearing member is online). If your primary member goes down when running in a standard configuration (odd number of hosts) then you run into the risk of a tie when voting. Explanations around replica set configuration are often reductive to try to provide more straightforward understanding.The election algorithm can handle an even number of voting members (for example, this is the scenario when you have a 3 member replica set with 1 member down). There are other factors that influence elections including member priority and freshness, so repeated tie-breaking scenarios should not be a concern. However, you generally want to remove any potential speed bumps for your normal deployment state (“all members healthy”). A configuration with an odd-number of voting members is also easier to rationalise when considering scenarios for data redundancy and fault tolerance.MongoDB’s historical v0 election protocol (default in MongoDB 3.0 and earlier) only supported one election at a time, so any potential ties had a more significant impact on elapsed time to reach consensus. The modern v1 election protocol (default in MongoDB 3.2+) supports multiple concurrent elections for faster consensus. If you want to learn more, there’s a relevant talk from MongoDB World 2015: Distributed Consensus in MongoDB. I believe majority is expressed as floor(voting_members_in_cluster / 2 ) +1 . ( I cannot find a reference for this in the documentation right now) That is the correct determination for voting majority, but write majority is based on data-bearing members. Ideally those calculations should be the same for a deployment, but arbiters and members with special configuration (eg delayed secondaries) will have consequences for write acknowledgements.In MongoDB 4.2.1+, the rs.status() output now has explicit majorityVoteCount and writeMajorityCount calculations to remove any uncertainty.Regards,\nStennie",
"username": "Stennie_X"
},
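To make the write-majority caveat above concrete, here is a small shell sketch (the collection name is made up): in a 4-member set that includes an arbiter, w: "majority" needs 3 acknowledgements from data-bearing members, so with one data-bearing member down the write either blocks or, with wtimeout set, errors out:

db.orders.insertOne(
  { item: "test" },
  { writeConcern: { w: "majority", wtimeout: 5000 } }  // times out after 5s if a majority cannot acknowledge
)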
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Replica set with 3 DB Nodes and 1 Arbiter | 2020-06-18T02:39:23.878Z | Replica set with 3 DB Nodes and 1 Arbiter | 10,529 |
null | [] | [
{
"code": "",
"text": "Are there any plugins to accelerate queries and aggregation pipeline through GPU ?orIs there a way to tell MongoDB to uses SIMD CPU vector instructions ?",
"username": "Piyush_Katariya"
},
{
"code": "",
"text": "Welcome to the community @Piyush_Katariya,The answer to both of your questions is currently no.A relevant feature suggestion to watch/upvote is SERVER-36151: GPU offloading, but I see you’ve already commented there.The first comment on that issue is still accurate:In the most abstract sense, of course we’d like to leverage the available GPU resources in some fashion. It makes the most sense for heavy OLAP/Analytical queries – but the biggest internal feature we’d need to add first is support for parallel query processing. Then the single request can be divided up into N parts and executed in parallel by multiple threads, subsystems, or even [remote] processes. Then building on that, we could determine which parts would naturally perform well on GPU’s – things like aggregations, sorting, grouping, etc. – and send that portion of the work to the GPU’s using something like the CUDA API.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Hardware Parallelism | 2020-06-20T05:17:22.994Z | Hardware Parallelism | 2,305 |
null | [
"queries"
] | [
{
"code": "",
"text": "Hi, I need to apply 2 filters for quering data from my db - by String field and boolean field.\nfor example I want to get 20 documents with with fields: boolean: true string:“someString” how I can achieve that ?",
"username": "Zdziszkee_N_A"
},
{
"code": "",
"text": "You can use the $and operator.",
"username": "Prasad_Saya"
},
{
"code": "db.collection.find({boolean: true, string:\"someString\"}).limit(20)",
"text": "There is no real need to break out the $and as it is implicit.db.collection.find({boolean: true, string:\"someString\"}).limit(20)",
"username": "chris"
},
{
"code": "mongoANDAND$andmongonode",
"text": "Hi @Zdziszkee_N_A,Please see Query Documents in the MongoDB manual for examples of queries in the mongo shell, Compass, and drivers.The relevant example for your question is Specify AND Conditions:db.foo.find( { boolean: true, string: ‘someString’}).limit(20)You can use the $and operator.A compound query is implicitly an AND query. You only need to use $and when specifying multiple conditions on the same field name (otherwise repeated field names would overwrite each other in the query object in most languages):For example, try this in a mongo shell or node interpreter:foo = {string: ‘someString’, string: ‘someOther’}In JavaScript the last value of a duplicated property will be the only value set:{ “string” : “someOther” }Regards,\nStennie",
"username": "Stennie_X"
},
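A quick mongo shell sketch of the overwrite behaviour described above, using a hypothetical "items" collection; the explicit $and form keeps both conditions when they target the same field:

// The second "qty" key silently overwrites the first one in the query object:
db.items.find({ qty: { $gt: 10 }, qty: { $lt: 5 } });   // effectively just { qty: { $lt: 5 } }
// Explicit $and preserves both conditions on the same field:
db.items.find({ $and: [ { qty: { $gt: 10 } }, { qty: { $lt: 5 } } ] });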
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Query document with string and boolean value | 2020-06-20T13:25:44.527Z | Query document with string and boolean value | 13,947 |
null | [
"mongodb-shell"
] | [
{
"code": "PS C:\\Users\\ma> mongo\nMongoDB shell version v4.2.7\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"f8fb8503-9db8-44f3-be51-81ebb7b0f0cd\") }\nMongoDB server version: 4.2.7\nServer has startup warnings:\n2020-06-07T16:24:10.577+0800 I CONTROL [initandlisten]\n2020-06-07T16:24:10.578+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2020-06-07T16:24:10.578+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2020-06-07T16:24:10.582+0800 I CONTROL [initandlisten]\n---\nEnable MongoDB's free cloud-based monitoring service, which will then receive and display\nmetrics about your deployment (disk utilization, CPU, operation statistics, etc).\n\nThe monitoring data will be available on a MongoDB website with a unique URL accessible to you\nand anyone you share the URL with. MongoDB may use this information to make product\nimprovements and to suggest MongoDB products and deployment options to you.\n\nTo enable free monitoring, run the following command: db.enableFreeMonitoring()\nTo permanently disable this reminder, run the following command: db.disableFreeMonitoring()\n---\n\n> Object.entries(this)\n2020-06-20T16:23:42.059+0800 E QUERY [js] uncaught exception: Error: Unknown Error Code: tojson :\nget@build/opt/mongo/shell/error_codes.js:35:23\ntojsonObject@src/mongo/shell/types.js:657:9\ntojson@src/mongo/shell/types.js:629:21\nArray.tojson@src/mongo/shell/types.js:197:23\ntojsonObject@src/mongo/shell/types.js:663:16\ntojson@src/mongo/shell/types.js:629:21\nArray.tojson@src/mongo/shell/types.js:197:23\ntojsonObject@src/mongo/shell/types.js:663:16\ntojson@src/mongo/shell/types.js:629:21\nshellPrintHelper@src/mongo/shell/utils.js:636:15\n@(shell2):1:1\n> Object.getOwnPropertyDescriptors(this)\n2020-06-20T16:23:43.389+0800 E QUERY [js] uncaught exception: Error: Unknown Error Code: tojson :\nget@build/opt/mongo/shell/error_codes.js:35:23\ntojsonObject@src/mongo/shell/types.js:657:9\ntojson@src/mongo/shell/types.js:629:21\ntojsonObject@src/mongo/shell/types.js:699:57\ntojson@src/mongo/shell/types.js:629:21\ntojsonObject@src/mongo/shell/types.js:699:57\ntojson@src/mongo/shell/types.js:629:21\nshellPrintHelper@src/mongo/shell/utils.js:636:15\n@(shell2):1:1\n>\n",
"text": "",
"username": "masx200_masx200"
},
{
"code": "mongothisthisWindowglobalthisdbmongomongo",
"text": "Hi,Unlike a browser or Node.js environment, the mongo shell does not have a default globally scoped object for this so your invocations aren’t expected to provide any meaningful results. In a browser environment the global this refers to the Window object and in Node the global object.Your function calls will work as expected if you pass this within the context of a function scope or provide a valid global object like db:Object.entries(db)\nObject.getOwnPropertyDescriptors(db)Is there a particular problem you are trying to solve in the mongo shell with your approach or are you just curious about the difference in behaviours? The mongo shell provides a more limited environment than a web browser or Node runtime.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "shellthisvar entries = Object.entries(this)\nprint(entries)",
"text": "I guess you can do that in a browser console. In the shell the following prints the this object:",
"username": "Prasad_Saya"
}
] | Mongo shell behaviour bug? | 2020-06-20T08:24:27.468Z | Mongo shell behaviour bug? | 2,524 |
null | [] | [
{
"code": "2020-06-03T01:09:07.489591+00:00 heroku[web.1]: State changed from starting to up\n2020-06-03T01:09:08.389099+00:00 heroku[router]: at=info method=GET path=\"/parse/health\" host=warm-earth-97740.herokuapp.com request_id=512b98e5-7673-4f2e-a5ea-403065f5f700 fwd=\"34.227.29.87\" dyno=web.1 connect=0ms service=16ms status=200 bytes=639 protocol=https\n2020-06-03T01:09:37.327864+00:00 app[web.1]: MongoServerSelectionError: Authentication failed.\n2020-06-03T01:09:37.327872+00:00 app[web.1]: at Timeout._onTimeout (/app/node_modules/mongodb/lib/core/sdam/topology.js:430:30)\n2020-06-03T01:09:37.327873+00:00 app[web.1]: at listOnTimeout (internal/timers.js:549:17)\n2020-06-03T01:09:37.327873+00:00 app[web.1]: at processTimers (internal/timers.js:492:7) {\n2020-06-03T01:09:37.327875+00:00 app[web.1]: reason: TopologyDescription {\n2020-06-03T01:09:37.327876+00:00 app[web.1]: type: 'ReplicaSetNoPrimary',\n2020-06-03T01:09:37.327877+00:00 app[web.1]: setName: null,\n2020-06-03T01:09:37.327877+00:00 app[web.1]: maxSetVersion: null,\n2020-06-03T01:09:37.327878+00:00 app[web.1]: maxElectionId: null,\n2020-06-03T01:09:37.327878+00:00 app[web.1]: servers: Map(2) {\n2020-06-03T01:09:37.327879+00:00 app[web.1]: 'ds015044-a0.mlab.com:15044' => [ServerDescription],\n2020-06-03T01:09:37.327879+00:00 app[web.1]: 'ds015044-a1.mlab.com:15044' => [ServerDescription]\n2020-06-03T01:09:37.327880+00:00 app[web.1]: },\n2020-06-03T01:09:37.327880+00:00 app[web.1]: stale: false,\n2020-06-03T01:09:37.327880+00:00 app[web.1]: compatible: true,\n2020-06-03T01:09:37.327881+00:00 app[web.1]: compatibilityError: null,\n2020-06-03T01:09:37.327881+00:00 app[web.1]: logicalSessionTimeoutMinutes: null,\n2020-06-03T01:09:37.327882+00:00 app[web.1]: heartbeatFrequencyMS: 10000,\n2020-06-03T01:09:37.327882+00:00 app[web.1]: localThresholdMS: 15,\n2020-06-03T01:09:37.327882+00:00 app[web.1]: commonWireVersion: null\n2020-06-03T01:09:37.327883+00:00 app[web.1]: }\n2020-06-03T01:09:37.327883+00:00 app[web.1]: }\n2020-06-03T01:09:37.339659+00:00 app[web.1]: npm ERR! code ELIFECYCLE\n2020-06-03T01:09:37.339931+00:00 app[web.1]: npm ERR! errno 1\n2020-06-03T01:09:37.340761+00:00 app[web.1]: npm ERR! [email protected] start: `node ./bin/parse-server -- lib/conf.json`\n2020-06-03T01:09:37.340850+00:00 app[web.1]: npm ERR! Exit status 1\n2020-06-03T01:09:37.340957+00:00 app[web.1]: npm ERR!\n2020-06-03T01:09:37.341032+00:00 app[web.1]: npm ERR! Failed at the [email protected] start script.\n2020-06-03T01:09:37.341107+00:00 app[web.1]: npm ERR! This is probably not a problem with npm. There is likely additional logging output above.\n2020-06-03\n",
"text": "i migrated a dev app and a prod app from mlab and both migrations success on not very large DBs.connect to both OK using mongo shell ( 3.6 )but both failing on app connectionsIm more concerned about fix to the prod app ( heroku , node webapp with parse-server )stdout from heroku log below:",
"username": "Robert_Rowntree"
},
{
"code": "",
"text": "Hi Robert,The mLab Support team ([email protected]) has reached out to you to help you with this issue in mLab Support ticket #185922. Please check your inbox for an email from [email protected] look forward to helping figure this one out in that context\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Any chance you could share how you fixed this? I have the same issue. Thanks.",
"username": "Santiago_Prieto"
}
] | Migrate from mlab on parse server error on app connect | 2020-06-03T00:00:19.256Z | Migrate from mlab on parse server error on app connect | 2,103 |
null | [] | [
{
"code": "",
"text": "I am trying to deploy a prisma server to heroku via prisma cloud. however I have notived that only postgres and mysql database are supported .I am using mongodb atlas database. any help please.",
"username": "youssef_jarid"
},
{
"code": "",
"text": "Welcome to the MongoDB community @youssef_jarid!Prisma 1 has some support for MongoDB but is no longer actively maintained.MongoDB support for Prisma 2 is still in development. Please watch and upvote this issue in their GitHub repo: MongoDB support for Prisma 2 #1277.Per recent discussion on this GitHub issue, MongoDB support is actively in progress but they do not have a timeline yet:Hey everyone, thanks so much for your interest in MongoDB support! We are indeed already working on supporting MongoDB, but as of now it’s unfortunately unclear when we’ll be able to release the first version. Please keep watching this GitHub issue since we’ll keep posting updates here.Regards,\nStennie",
"username": "Stennie_X"
}
] | Deploy Prisma server with MongoDB Atlas database | 2020-06-19T20:15:50.067Z | Deploy Prisma server with MongoDB Atlas database | 2,926 |
null | [
"queries"
] | [
{
"code": "",
"text": "I am relatively new to MongoDb as well as RDBMS.I understand that, in SQL, even though I don’t have data for the field, the field will be available. Whereas in MongoDB if the data is not there, the field won’t be there.Out of curiosity, is there a SQL Query Equivalent to the below MongoDb query?db.goods.find({“product” :{ $exists : true }});",
"username": "David_King"
},
{
"code": "$exists",
"text": "Welcome to the community @David_King!Out of curiosity, is there a SQL Query Equivalent to the below MongoDb query?In SQL a field will always exist, but the value may be nullable. The closest equivalent to an $exists query would be:SELECT * FROM goods where product IS NOT NULLFor a general guide to roughly equivalent SQL queries, the MongoDB manual has some helpful reference pages:Typically you will not use identical data models in MongoDB vs RDBMS. I included more on this in a discussion yesterday, so please read my related post for more details:This is a great use case for MongoDB, but I would encourage you to think about how your data model might be adjusted to take advantage of MongoDB’s indexing and flexible schema rather than doing a direct 1:1 translation of an existing SQL schema. You could start with a direct translation, but this typically misses out on some benefits like easier querying and better performance.Regards,\nStennie",
"username": "Stennie_X"
},
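A small mongo shell sketch (with hypothetical goods documents) showing where the two forms differ: $exists: true still matches a field that is present but set to null, while a $ne: null filter behaves more like SQL's IS NOT NULL.

db.goods.insertMany([
  { sku: 1, product: "widget" },
  { sku: 2, product: null },
  { sku: 3 }
]);
db.goods.find({ product: { $exists: true } });  // matches sku 1 and sku 2
db.goods.find({ product: { $ne: null } });      // matches sku 1 only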
{
"code": "",
"text": "@Stennie_X, thanks for the explanation.",
"username": "David_King"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB query equivalent in SQL | 2020-06-19T03:42:50.046Z | MongoDB query equivalent in SQL | 2,874 |
[
"node-js"
] | [
{
"code": "",
"text": "I am building a Vue app.\nI followed thisI get this error:\n\nimage1640×424 28.9 KB\n",
"username": "Fred_Kufner"
},
{
"code": "",
"text": "Hi Fred – We don’t believe this should be the case and were not able to reproduce. Did you take any steps outside of what is outlined in that documentation page?",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Video of steps: - YouTube",
"username": "Fred_Kufner"
},
{
"code": "",
"text": "So the instructions for actually installing Realm for React Native lists the exact same command.\nHere is the link :https://docs.mongodb.com/realm/react-native/install/\nOn this page it has the same instructions: npm install --save [email protected] am going to assume the NON React instructions should be different. Can anyone provide them?",
"username": "Fred_Kufner"
},
{
"code": "",
"text": "@Fred_Kufner What are you trying to do? Vue.js is a frontend framework, are you trying to call MongoDB Realm functionality from a browser? If so you need to take a look at this tutorial:We have a warning here on what library to use for which platform:",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward The Web SDK does not work for Vue.\nhttps://www.mongodb.com/community/forums/t/realm-web-sdk-and-vuejs/5527",
"username": "Mellorad"
},
{
"code": "",
"text": "Understood - have you used the old stitch web sdk in a Vue.js application before? If so you can continue to do so - it appears it will work:A front-end Vue.JS app that demonstrates integration with MongoDB Stitch - GitHub - mrlynn/mongodb-stitch-vue-example: A front-end Vue.JS app that demonstrates integration with MongoDB Stitch",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks Ian, Michael. (I recently watched your video “Create a Data API in 10 Minutes with MongoDB Stitch”, it was helpful)I see that example and its helpful. What is the call to login with API Key instead of using AnonymousCredential()?",
"username": "Fred_Kufner"
},
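If the legacy Stitch browser SDK is used as discussed above, an API key login would presumably look something like the sketch below (the app ID and server API key are placeholders, not values from this thread):

import { Stitch, UserApiKeyCredential } from 'mongodb-stitch-browser-sdk';

const client = Stitch.initializeDefaultAppClient('your-app-id');         // placeholder app ID
client.auth
  .loginWithCredential(new UserApiKeyCredential('your-server-api-key'))  // placeholder key
  .then(user => console.log('logged in as', user.id))
  .catch(err => console.error('login failed', err));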
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm for Node SDK requires AWS and React-native? | 2020-06-12T15:53:53.024Z | Realm for Node SDK requires AWS and React-native? | 3,286 |
|
null | [] | [
{
"code": "",
"text": "Hello!\nThis is a good night to have a good dream.I wonder how to backup part of documents when the number of documents reaches the number I set?I searched, But I just find “How to create capped collection”.\nThis just deletes old documents when the collection reaches maximum number of documents.All I need is these.\nWhen the number of documents in a collection reaches the number I set,Help me ",
"username": "DongHyun_Lee"
},
{
"code": "",
"text": "Hi,Backup part of documents in a collection.Do you mean backup part of the collection?As I understand it, you wanted to do something like a capped collection. But instead of deleting the old documents, you want to move them somewhere else. Is this correct?If this is not correct, could you provide some examples of what you have in mind?Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "No, not the back up part.I was just saying that, I need to know how to back-up some part of my document when the no of document in my collection reaches certain limit. (No of document. Not the size )So, basically I am trying to back-up some no of documents when i have no of documents more than i need in order not to have huge file size. ( and delete the documents that were backed-up in the original collection of course )Thnx in advance",
"username": "DongHyun_Lee"
},
{
"code": "",
"text": "You can use Change Streams.This will let your application watch the collection, that is the number of documents in the collection, and when the number increases a previously set limit, a process is started to backup (or write to another collection) a selected number of documents (based upon some criteria you have).",
"username": "Prasad_Saya"
},
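A minimal mongo shell sketch of that idea (requires a replica set; the "events" collection name and the 100,000-document limit are made up for illustration):

var limit = 100000;
var watchCursor = db.events.watch([ { $match: { operationType: "insert" } } ]);
while (!watchCursor.isExhausted()) {
  if (watchCursor.hasNext()) {
    watchCursor.next();
    if (db.events.countDocuments({}) > limit) {
      print("limit exceeded - run the backup/move routine here");
    }
  }
}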
{
"code": "cron",
"text": "Hi @DongHyun_Lee,For self-managed deployments, using change streams (as suggested by @Prasad_Saya) is certainly one approach. However, do consider the potential impact of triggering a count every document is inserted or updated.A more efficient approach would be to write your own scheduled task that runs periodically and exports documents according to your expiry rules before removing them. You can schedule the task (using O/S scheduling tools like cron) to run during off-peak hours on a suitable frequency (twice daily, daily, every 3 days, weekly, …) to minimise impact on a production deployment.If you happen to be using MongoDB Atlas (or might consider doing so), we recently added a new Atlas Online Archive beta feature which archives data greater than an expiry date (based on rules you configure) into more cost-effective S3 storage. With Online Archive and Atlas Data Lake you can continue to query both live and archived data.Regards,\nStennie",
"username": "Stennie_X"
},
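A rough mongo shell sketch of such a scheduled task (the collection names and the 100,000-document cap are hypothetical; the oldest documents are selected by _id, which embeds an insertion timestamp for ObjectId values):

var cap = 100000;
var excess = db.events.countDocuments({}) - cap;
if (excess > 0) {
  var oldest = db.events.find({}).sort({ _id: 1 }).limit(excess).toArray();
  db.events_archive.insertMany(oldest);   // back up the oldest documents
  db.events.deleteMany({ _id: { $in: oldest.map(function (d) { return d._id; }) } });
}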
{
"code": "countDocuments",
"text": "For self-managed deployments, using change streams (as suggested by @Prasad_Saya) is certainly one approach. However, do consider the potential impact of triggering a count every document is inserted or updated.Yes, the countDocuments query can take time, for each insert.The document counting can be tracked within the application, for example, a variable can be used (and the variable value can be persisted, once in every n number of documents) . Also, application servers have mechanisms to persist state (variable value) in the event of application failures.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How can I backup part of documents when the number of documents reaches the number I set? | 2020-06-17T12:44:15.780Z | How can I backup part of documents when the number of documents reaches the number I set? | 2,068 |
null | [] | [
{
"code": "",
"text": "Hi Team,one of the team are using Oracle server and they need to fetch data from MongoDB. As RHEL 7 is supported for installing MongoDB drivers, and the one they use is RHEL 6.9. So drivers are not installed and dblink was not created. They are looking for other options to access this MongoDB server with from their oracle server.Please let me know if there are any other options to access Mongodb server from Oracle server.",
"username": "Mamatha_M"
},
{
"code": "",
"text": "Hi,Can you provide more information on what you are trying to connect to MongoDB with (driver/tools and version)? Also, what is your MongoDB server version and deployment type (standalone, replica, or sharded cluster)?You should be able to install any of our supported drivers in RHEL 6, but it sounds like you may be using some sort of tool or integration for Oracle.Regards,\nStennie",
"username": "Stennie_X"
}
] | How to fetch data of MongoDB from Oracle server | 2020-06-19T08:56:55.954Z | How to fetch data of MongoDB from Oracle server | 1,541 |
null | [
"aggregation"
] | [
{
"code": "db.test01.aggregate([{$unwind:\"$ordDoc.custOrderItems\"},\n {$match:{\"ordDoc.custOrderItems.custOrdSubItems.prodId\":\"VU0074\"}},\n {$project:{\"ordDoc.custOrderItems.custOrdSubItems.prodId\":1}}]).pretty()\n{\n\t\"_id\" : \"ORN-178450914676\",\n{\n \"ordDoc\":{\n \"custOrdItems\":{\n \"custOrdSubItems\":[\n {\n \"prodId\":\"VU0091\"\n },\n {\n \"prodId\":\"VU0074\"\n },\n {\n \"prodId\":\"VU0081\"\n },\n {\n \"prodId\":\"VU0033\"\n },\n {\n \"prodId\": \" \"\n },\n {\n \"prodId\":\"VU0038\"\n }\n ]\n }\n }\n}\ndb.test01.aggregate([{$unwind:\"$ordDoc.custOrderItems\"},{$unwind:\"$ordDoc.custOrderItems.custOrderSubItems\"},{$match:{\"ordDoc.custOrderItems.custOrderSubItems.prodId\":\"VU0074\"}},{$project:{\"ordDoc.custOrderItems.custOrderSubItems.prodtId\":1}}]).pretty() \n{\n\t\"_id\" : \"ORN-12345678900096\",\n\t\"ordDocument\" : {\n\t\t\"custOrderItems\" : {\n\t\t\t\"custOrderSubItems\" : {\n\t\t\t\t\"prodId\" : \"VU0074\"\"\n\t\t\t}\n\t\t}\n\t}\n},\n",
"text": "Hi Team,I have a collection with 5000k documents with multiple nested arrays.I want to update the prodId key from old to new value.Some of the documents have only 1 prodId value and other documents have one or more multiple prodId values and few of the documents have no prodId value.output:output:I feel the 2nd method of querying is correct for this type of nested arrays. Correct me if I am wrong and based on this how to update one single prodId in all the documents where that prodId is available?for example how to update prodId value from VU0074 to AC0067 available in all the 5000k documents?Regards\nMam",
"username": "Mamatha_M"
},
{
"code": "",
"text": "Welcome to the community @Mamatha_M !There are 2-3 errors on your queries.But infact you don’t need the Aggregation Framework for a problem like this. You rather need the updateMany() command. We are typically requesting a data schema with an attribute pattern.I simplified your work, I found the solution. First, you didn’t specify if in your “custOrdSubItems” array there could be several sub-documents with the same value for the “prodId” field. So there are actually 2 solutions.db.TestCollection.updateMany({“ordDoc.custOrdItems.custOrdSubItems.prodId”: “VU0074”},\n{$set: {“ordDoc.custOrdItems.custOrdSubItems.$.prodId”: “newValue”}})db.TestCollection.updateMany({},\n{$set: {“ordDoc.custOrdItems.custOrdSubItems.$[element].prodId”: “newValue”}},\n{arrayFilters: [{“element.prodId”: “VU0074”}],multi: true})More elegant than an aggregation query, right ? If you have any questions don’t hesitate.",
"username": "Gaetan_MORLET"
},
{
"code": "$(update)",
"text": "for example how to update prodId value from VU0074 to AC0067 available in all the 5000k documents?The solution 2 by @Gaetan_MORLET works fine.If you are sure , really sure , that there are no duplicates as explained above you can use the $(update) operator. If there are duplicates in the array it will just update the first subdocument.This is about the solution 1. This will not work. The $(update) operator cannot be applied for nested arrays. From the documentation:Nested Arrays\nThe positional $ operator cannot be used for queries which traverse more than one array, such as queries that traverse arrays nested within other arrays, because the replacement for the $ placeholder is a single value",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "db.TestCollection.updateMany({“ordDoc.custOrdItems.custOrdSubItems.prodId”: “VU0074”},\n{set: {“ordDoc.custOrdItems.custOrdSubItems…prodId”: “newValue”}})I will try the 2nd query.Also can i get the find query for both these options?",
"username": "Mamatha_M"
},
{
"code": "$[ ]$[<someId>]db.collection.updateMany(\n { \"ordDoc.custOrdItems.custOrdSubItems.prodId\": \"AC0074\" },\n { $set: { \"ordDoc.custOrdItems.$[].custOrdSubItems.$[e].prodId\" : \"AC0067\" } },\n {\n arrayFilters: [ { \"e.prodId\": \"AC0074\"} ]\n }\n)\n$$[]$[<someId>]$[<someId>]arrayFilters",
"text": "Correction about the update operation:db.TestCollection.updateMany({},\n{set: {“ordDoc.custOrdItems.custOrdSubItems.[element].prodId”: “newValue”}},\n{arrayFilters: [{“element.prodId”: “VU0074”}],multi: true})I did miss something in the update solution 2 by @Gaetan_MORLET . The correct way to do it is as follows. Note the usage of the $[ ] and the $[<someId>] array update operators.Note on Array Update Operators:The latter two operators must be used with update operations on nested arrays. The $[<someId>] is used in conjunction with the arrayFilters update method option.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "db.TestCollection.updateMany({“ordDoc.custOrdItems.custOrdSubItems.prodId”: “VU0074”},\n{set: {“ordDoc.custOrdItems.custOrdSubItems…prodId”: “newValue”}})I will try the 2nd query.Also can i get the find query for both these options?You can just do that to find the documents to change.db.collection.find({“ordDoc.custOrdItems.custOrdSubItems.prodId”: “AC0074”})Can you also send a sample document from your collection @Mamatha_M ? We just have the return of the aggregation queries for the moment. It will allow us to see more clearly.",
"username": "Gaetan_MORLET"
},
{
"code": "db.test01.aggregate([{$unwind:\"$ordDoc.custOrderItems\"},\n {$match:{\"ordDoc.custOrderItems.custOrdSubItems.prodId\":\"VU0074\"}},\n {$project:{\"ordDoc.custOrderItems.custOrdSubItems.prodId\":1}}]).pretty()\n{\n \"ordDoc\":{\n \"custOrdItems\":{\n \"custOrdSubItems\":[\n {\n \"prodId\":\"VU0091\"\n },\n {\n \"prodId\":\"VU0074\"\n },\n ...\n ]\n }\n}\ncustOrdSubItems$unwind{$unwind:\"$ordDoc.custOrderItems\"}ordDoc.custOrderItems$unwind\"ordDoc.custOrderItems\"\"ordDoc.custOrderItems.custOrdSubItems\"",
"text": "@Gaetan_MORLET If you see the following first query and output in the original post by @Mamatha_M :Note the output has an array field custOrdSubItems. This is after the aggregation’s $unwind stage: {$unwind:\"$ordDoc.custOrderItems\"}. This clearly tells that the ordDoc.custOrderItems is an array (the $unwind is applied only on arrays).So, the structure of the document has nested arrays. The outer array \"ordDoc.custOrderItems\" and the inner array \"ordDoc.custOrderItems.custOrdSubItems\" - and that is a nested array.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Arf, I was editing my post \nHere is what I wanted to write:I thought the custOrdItems field was an object … so we didn’t have nested arrays.\nBut the output of the first query + the unwind, we can effectively think that it’s rather an array.You are absolutely right. I based myself too much on the result of the query.",
"username": "Gaetan_MORLET"
},
{
"code": "",
"text": "i want to know where i can share as it is a big document.",
"username": "Mamatha_M"
},
{
"code": "",
"text": "You can share your document via pastebin.com if necessary @Mamatha_M.",
"username": "Gaetan_MORLET"
},
{
"code": "",
"text": "Hi Prasad,The find query which i had provided was returning almost 944 docs from that collection.\nThe update query which provided also updated 944 docs.\nI guess it works but still i trying the same approach on the other nested arrays.\nNow i would like to understand what it the function of this [] , [e] actually does?\nand in arrayFilters u have provided e.prodId --> that point i didnt get it as I am new to mongodb. Can you please help me here?",
"username": "Mamatha_M"
},
{
"code": "custOrdItemsordDoc\"ordDoc.custOrdItems\"\"ordDoc.custOrdItems.custOrdSubItems\"{\n \"ordDoc\": {\n \"custOrdItems\": [\n \"custOrdSubItems\":[\n {\n \"prodId\": \"VU0091\"\n },\n {\n \"prodId\": \"VU0074\"\n },\n ...\n ]\n ]\n }\n}\n\"ordDoc.custOrdItems.custOrdSubItems\"{ \"prodId\": \"VU0074\" }prodIdupdateManyarrayFiltersarrayFiltersprodIdcustOrdItemsdb.collection.updateMany(\n { \"ordDoc.custOrdItems.custOrdSubItems.prodId\": \"AC0074\" },\n { $set: { \"ordDoc.custOrdItems.$[].custOrdSubItems.$[e].prodId\" : \"AC0067\" } },\n {\n arrayFilters: [ { \"e.prodId\": \"AC0074\"} ]\n }\n)\n$set\"ordDoc.custOrdItems.$[].custOrdSubItems.$[e].prodId\"$[]$[e]$[]ordDoc.custOrdItems.$[]custOrdSubItems.$[e].prodIde$[e]arrayFiltersesubItem",
"text": "Assume your input document is as following. There is an outer array custOrdItems within the ordDoc sub-document; the array’s path is \"ordDoc.custOrdItems\". Then, there is the array \"ordDoc.custOrdItems.custOrdSubItems\", and this is referred as nested array (an array within an array, the inner array).Now, you want to update the nested array \"ordDoc.custOrdItems.custOrdSubItems\"'s element, which is a sub-document { \"prodId\": \"VU0074\" }. The update has the condition that this sub-document’s prodId field value must be “VU0074”, and this is to be updated to a new value.The update operations on nested arrays with condition use the updateMany method’s option arrayFilters. The arrayFilters specifies the condition by which the nested array is to be updated - in this case the condition is that the prodId's value must be equal to “VU0074”. And, what about the condition for the outer array custOrdItems? There are no conditions there, and it means all the elements of this array are updateable.The update statement:The $set update operator updates a field’s value. In this case, we have to update the inner array’s field value based upon the condition (as discussed above). The update field’s path is specified as \"ordDoc.custOrdItems.$[].custOrdSubItems.$[e].prodId\".The two array update operators, the $[] and the $[e] are used here. The $[] operator is used when there is no condition on the array element, in this case the outer array. The ordDoc.custOrdItems.$[] portion of the update field path says that.The remaining part of the path, custOrdSubItems.$[e].prodId says that the inner array element’s field to be updated. The e of the $[e] specifies the sub-document’s field for the condition, which is used with the arrayFilters. The e can be of any name (it is user defined, it could be subItem for example). The string “AC0067” is the new value to be updated with.Please go thru the documentation (the links I had provided earlier), and browse the details and the examples within.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks for the detailed explanation.",
"username": "Mamatha_M"
},
{
"code": "",
"text": "if i want to create an index on prodId – what index should go in and how should i create the index for arrays?",
"username": "Mamatha_M"
},
{
"code": "",
"text": "Array field values are indexed using Multikey Indexes.",
"username": "Prasad_Saya"
},
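For example, a regular single-field index on the nested array path is enough; MongoDB automatically builds it as a multikey index because the path passes through arrays:

db.test01.createIndex({ "ordDoc.custOrdItems.custOrdSubItems.prodId": 1 })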
{
"code": "db.test01.aggregate([{$unwind:\"$ordDoc.custOrderItems\"},\n{$unwind:\"$ordDoc.custOrderItems.custOrdSubItems\"},\n{$unwind:\"$ordDoc.custOrderItems.custOrdSubItems.prodAtt\"},\n{$unwind:\"$ordDoc.custOrderItems.custOrdSubItems.prodAtt.prices\"},\n{$unwind:\"$ordDoc.custOrderItems.custOrdSubItems.prodAtt.prices.Key\"},\n{$match:{\"ordDoc.custOrderItems.custOrdSubItems.prodAtt.prices.Key\":\"Technology=NONE\"}},\n{$project:{\"ordDoc.custOrderItems.custOrdSubItems.prodAtt.prices.Key\":1}}]).pretty()\n/* 1 */\n{\n\t\"_id\" : \"ORN-1628755216489\",\n\t\"ordDoc\" : {\n\t\t\"custOrderItems\" : {\n\t\t\t\"custOrderSubItems\" : {\n\t\t\t\t\"prodAtt\" : {\n\t\t\t\t\t\"prices\" : {\n\t\t\t\t\t\t\"Key\" : \"Technology=NONE\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n},\n\n/* 2 */\n{\n\t\"_id\" : \"ORN-3091717461503\",\n\t\"ordDoc\" : {\n\t\t\"custOrderItems\" : {\n\t\t\t\"custOrderSubItems\" : {\n\t\t\t\t\"prodAtt\" : {\n\t\t\t\t\t\"prices\" : {\n\t\t\t\t\t\t\"Key\" : \"Technology=NONE\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n},\n",
"text": "Hi Prasad,Nested arrays for the below aggregate query I am not able to update the key value to a newer value.I have put the script here and json format output. Can you pls help me how to update?JSON:",
"username": "Mamatha_M"
},
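If each of custOrderItems, custOrdSubItems, prodAtt and prices is an array (which the repeated $unwind stages suggest), the same $[] / $[<identifier>] pattern shown earlier should extend to this deeper nesting; a sketch might look like the following (the new value "Technology=NEW" is only a placeholder):

db.test01.updateMany(
  { "ordDoc.custOrderItems.custOrdSubItems.prodAtt.prices.Key": "Technology=NONE" },
  { $set: { "ordDoc.custOrderItems.$[].custOrdSubItems.$[].prodAtt.$[].prices.$[e].Key": "Technology=NEW" } },
  { arrayFilters: [ { "e.Key": "Technology=NONE" } ] }
)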
{
"code": "",
"text": "Team ,Any updates for the above ask?",
"username": "Mamatha_M"
},
{
"code": "",
"text": "Any updates for the above ask?",
"username": "Mamatha_M"
}
] | Nested arrays how to do query and update? | 2020-04-26T08:42:34.281Z | Nested arrays how to do query and update? | 25,742 |
null | [
"dot-net"
] | [
{
"code": " var MatchStage = new BsonDocument(\"$match\", new BsonDocument{\n \t{\"Parent\", BsonNull.Value},\n });\n var GraphLookupStage = new BsonDocument(\"$graphLookup\", new BsonDocument{\n \t{\"from\", \"entries\"},\n \t{ \"startWith\", \"$_id\" },\n \t{ \"connectFromField\", \"_id\"},\n \t{ \"connectToField\", \"Parent.ID\" },\n \t{ \"as\", \"children\" },\n \t{ \"depthField\", \"route\" }\n });\n var document = collection.Aggregate<BsonDocument>(new BsonDocument[] {\n \tMatchStage,\n \tGraphLookupStage\n }).ToList();\n foreach (var val in document)\n {\n \tif (val[\"children\"].Count == 0)\n \t{\n \t\tConsole.WriteLine(\"True\");\n \t} else {\n \t\tConsole.WriteLine(\"False\");\n \t}\n };\n[\n\t{\n\t\t\"_id\" : UUID(\"234350d9-775c-4fa2-9a1e-54bd7861d511\"),\n\t\t\"test\" : \"parent\",\n\t\t\"children\" : [\n\t\t\t{\n\t\t\t\t\"_id\" : UUID(\"75b1b366-697c-4690-b216-ca91151f61a6\"),\n\t\t\t\t\"Parent\" : {\n\t\t\t\t\t\"ID\" : UUID(\"234350d9-775c-4fa2-9a1e-54bd7861d511\"),\n\t\t\t\t\t\"Slot\" : 1\n\t\t\t\t},\n\t\t\t\t\"test\" : \"child2\",\n\t\t\t\t\"route\" : NumberLong(0)\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"_id\" : UUID(\"ba567231-443f-412b-af57-ae8a134c57e1\"),\n\t\t\t\t\"Parent\" : {\n\t\t\t\t\t\"ID\" : UUID(\"234350d9-775c-4fa2-9a1e-54bd7861d511\"),\n\t\t\t\t\t\"Slot\" : 1\n\t\t\t\t},\n\t\t\t\t\"test\" : \"child3\",\n\t\t\t\t\"route\" : NumberLong(0)\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"_id\" : UUID(\"4f80a73e-31eb-44bb-8426-68ee58b6b0fc\"),\n\t\t\t\t\"Parent\" : {\n\t\t\t\t\t\"ID\" : UUID(\"234350d9-775c-4fa2-9a1e-54bd7861d511\"),\n\t\t\t\t\t\"Slot\" : 1\n\t\t\t\t},\n\t\t\t\t\"test\" : \"child1\",\n\t\t\t\t\"route\" : NumberLong(0)\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"_id\" : UUID(\"7bd5efe0-707a-4c4d-9265-8d65d9c406b4\"),\n\t\t\t\t\"Parent\" : {\n\t\t\t\t\t\"ID\" : UUID(\"75b1b366-697c-4690-b216-ca91151f61a6\"),\n\t\t\t\t\t\"Slot\" : 1\n\t\t\t\t},\n\t\t\t\t\"test\" : \"test2\",\n\t\t\t\t\"route\" : NumberLong(1)\n\t\t\t}\n\t\t]\n\t},\n\t{\n\t\"_id\" : UUID(\"f1745729-7862-42a1-8545-cf6a243c9bd2\"),\n\t\"test\" : \"test2\",\n\t\"children\" : [\n\n\t]\n\t}\n]\n",
"text": "I just want to prephase this by saying that this could just be a PEBCAK but I’m not sure so that’s why I’m posting here. I’m trying to convert the results of a GraphLookup to a full hierarchy in C#. To check if an array is empty I’m trying to run this:Here is the data I’m getting out:From what I can see I should be getting 1 “True” and 1 “False” in console. Instead I get an error. Where have I screwed up?",
"username": "Lukas_Friman"
},
{
"code": "val[\"children\"]BsonArrayBsonValueBsonArrayval[\"children\"].AsBsonArray.Count\n",
"text": "Hi @Lukas_Friman,From what I can see I should be getting 1 “True” and 1 “False” in console. Instead I get an error. Where have I screwed up?This is because val[\"children\"] is an instance type of BsonValue and not BsonArray. You could convert BsonValue to BsonArray using AsBsonArray property.For example:Regards,\nWan.",
"username": "wan"
}
] | Can't access Count property from BsonArray with C# driver | 2020-06-18T16:14:22.636Z | Can’t access Count property from BsonArray with C# driver | 4,697 |
null | [] | [
{
"code": " 2020-05-24T09:17:06.391+0800 I CONTROL [conn80520] *** unhandled exception (access violation) at 0x00007FF75ED3A255, terminating\n 2020-05-24T09:17:06.391+0800 I CONTROL [conn80520] *** access violation was a write to 0x000000ED77E30000\n 2020-05-24T09:17:06.391+0800 I CONTROL [conn80520] *** stack trace for unhandled exception:\n 2020-05-24T09:17:06.496+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x1eb865\n 2020-05-24T09:17:06.496+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x1ebb26\n 2020-05-24T09:17:06.496+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x1f3917\n 2020-05-24T09:17:06.496+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x1f3076\n 2020-05-24T09:17:06.496+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x1f33a4\n 2020-05-24T09:17:06.496+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x659fbd\n 2020-05-24T09:17:06.497+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x65a221\n 2020-05-24T09:17:06.497+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x67ecf3\n 2020-05-24T09:17:06.497+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x60265a\n 2020-05-24T09:17:06.497+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x601cce\n 2020-05-24T09:17:06.497+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x60474e\n 2020-05-24T09:17:06.497+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x604d21\n 2020-05-24T09:17:06.498+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x61641e\n 2020-05-24T09:17:06.498+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x616e45\n 2020-05-24T09:17:06.498+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x5fd351\n 2020-05-24T09:17:06.498+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x62d984\n 2020-05-24T09:17:06.498+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0xf30c\n 2020-05-24T09:17:06.498+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.498+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.498+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.498+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.498+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.499+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.499+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.499+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.499+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.499+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.499+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.499+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.499+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.499+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x6a52be\n 2020-05-24T09:17:06.499+0800 I CONTROL [conn80520] mongod.exe index_collator_extension+0x161256\n 2020-05-24T09:17:06.500+0800 I CONTROL [conn80520] mongod.exe ???\n 2020-05-24T09:17:06.500+0800 I CONTROL [conn80520] MSVCP120.dll ???\n 2020-05-24T09:17:06.500+0800 I CONTROL [conn80520] MSVCR120.dll ???\n 2020-05-24T09:17:06.500+0800 I CONTROL [conn80520] MSVCR120.dll ???\n 2020-05-24T09:17:06.500+0800 I CONTROL [conn80520] KERNEL32.dll ???\n 2020-05-24T09:17:06.500+0800 I - [conn80520] \n 2020-05-24T09:17:06.501+0800 I CONTROL [conn80520] writing minidump 
diagnostic file C:\\Program Files\\MongoDB\\Server\\3.2020-05-24T01-17-06.mdmp\n 2020-05-24T09:17:06.503+0800 I CONTROL [conn80520] failed to create minidump : errno:-2147024888 Not enough storage is available to process this command.\n 2020-05-24T09:17:06.503+0800 I CONTROL [conn80520] *** immediate exit due to unhandled exception\n 2020-05-24T10:13:30.126+0800 I CONTROL [main] ***** SERVER RESTARTED *****\n",
"text": "Hello there,We’ve encountered a sudden termination of MongoDB, and would require manual restart;\nchecked that there was an error message -“2020-05-24T09:17:06.391+0800 I CONTROL [conn80520] *** unhandled exception (access violation) at 0x00007FF75ED3A255, terminating”mongoDB 3.2.9Details error log as below:Is there any insight can be shared with us?Thank you so much!!!",
"username": "Marco_Chou"
},
{
"code": "",
"text": "mongoDB 3.2.9Hi,MongoDB 3.2.9 was released in August, 2016 and the 3.2 series reached end of life in 2018. My first suggestion would be to upgrade to the final 3.2.22 release, as there have been many bug fixes as well as security and stability improvements since 3.2.9. Minor releases (like 3.2.x) do not include any backward-breaking compatibility changes.I would definitely plan upgrading to a supported release series (currently MongoDB 3.6 or newer) as no further fixes or support are provided for End-of-Life releases.From your log output it appears at least one of your volumes may be low on free storage space:failed to create minidump : errno:-2147024888 Not enough storage is available to process this command.I’m not sure if low storage space led to your issue, but you should look for for further details in the log lines preceding the unhandled exception.The other provided log lines don’t provide any further useful context.Regards,\nStennie",
"username": "Stennie_X"
}
] | Unhandled exception (access violation) at 0x00007FF75ED3A255 | 2020-06-18T08:54:07.082Z | Unhandled exception (access violation) at 0x00007FF75ED3A255 | 3,367 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "HiThank you for your great effort.\nIs there any generic method to CreateOrUpdateIndex ?\nCreate if the index (CreateIndexModel) not exists\nUpdate if the index options changed\nDo nothing if nothing changedFrom googling I found an old workaround that not working with the latest driver version:",
"username": "Mordechai_Ben_Zechar"
},
{
"code": "",
"text": "Hi @Mordechai_Ben_Zechar, and welcome to the forum,Is there any generic method to CreateOrUpdateIndex ?As of current version (v2.11), there is no a built-in method as you have described in MongoDB .NET/C# driver. Although there are methods that you should be able to build upon to create the logic.These methods are:Having said the above if you’re intending to introduce automatic indexing mechanism, please note that although indexes can improve query performances, indexes may also present some operational considerations. See Operational Considerations for Indexes for more information.You may also find the following resources useful:Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thank you for your answer.\nAre you planned to add a generic function in the upcoming versions?",
"username": "Mordechai_Ben_Zechar"
},
{
"code": "",
"text": "HI @Mordechai_Ben_Zechar,Are you planned to add a generic function in the upcoming versions?Currently there is no plan to do add a wrapper method.Please note that index options can not be updated in place. Please consider the impact of dropping and re-creating indexes for a production environment. See also Index Build On Populated Collections.Regards,\nWan.",
"username": "wan"
}
] | CreateOrUpdateIndex in C# driver | 2020-06-16T10:44:13.369Z | CreateOrUpdateIndex in C# driver | 3,907 |
null | [
"atlas"
] | [
{
"code": "",
"text": "Hi,I upgraded my Mongo Atlas Cluster from M0 to M2. After the upgrade I can’t connect to my database any more. Every time i want to connect to my database the Error: “could not find config for cluster0-shard-00-00-xxxxx.mongodb.net” occours.Did I missed something?Thanks in advance\nAndreas",
"username": "Andreas_Sauer"
},
{
"code": "",
"text": "Update: The problem has solved it by itself. I have done nothing and now I can reconnect to my cluster. Had somebody something simular to that?",
"username": "Andreas_Sauer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error Can't find config after upgrading Cluster Tier | 2020-06-18T18:30:25.391Z | Error Can’t find config after upgrading Cluster Tier | 2,300 |
null | [] | [
{
"code": "var defaultRealmPath: URL!\nvar realm: Realm!\n\noverride func setUp() {\n super.setUp()\n \n let path = Bundle.main.path(forResource: \"count\", ofType: \"realm\") ?? \"\"\n let realmURL = URL(fileURLWithPath: path)\n \n _ = try! FileManager.default.replaceItemAt(Realm.Configuration.defaultConfiguration.fileURL!, withItemAt: realmURL)\n defaultRealmPath = Realm.Configuration.defaultConfiguration.fileURL!\n \n let config = Realm.Configuration(fileURL: defaultRealmPath)\n realm = try! Realm(configuration: config)\n}\n\noverride func tearDown() {\n super.tearDown()\n \n try! FileManager.default.removeItem(at: Realm.Configuration.defaultConfiguration.fileURL!)\n realm = nil\n}\n\nfunc test_scan10012_willIncreaseCount() {\n \n XCTAssertEqual(realm.objects(LineItem.self).count, 40)\n \n let countSession = CountSession()\n CountTests.productScanned(\"10012\", countSession: countSession)\n\n XCTAssertEqual(realm.objects(LineItem.self).count, 41)\n}\n",
"text": "Goof day,Anyone here knows how I can access Realm objects in XCTest? I have replaced the default realm file with a pre-populated one. I can see that the data is there through Realm Studio but everytime I access the objects, it’s nil.Here’s what I got so far. Where LineItem object count is always 0.class CountTests: XCTestCase {}",
"username": "Lie-an"
},
{
"code": "",
"text": "@Lie-an We use XCTest extensively in our RealmSwift binding - you can see a ton of examples here:6ac939e5966c2fb9ba800940711f9030d7ff5d52/RealmSwift/TestsRealm is a mobile database: a replacement for Core Data & SQLite - realm-swift/RealmSwift/Tests at 6ac939e5966c2fb9ba800940711f9030d7ff5d52 · realm/realm-swift",
"username": "Ian_Ward"
}
] | Testing Realm with XCTest (Swift) | 2020-06-18T04:00:04.333Z | Testing Realm with XCTest (Swift) | 2,143 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "I have a WPF project using .NetCore. I installed Windows Application Packaging Project to pack it as UWP app, then Realm package throw this error :System.TypeInitializationException: ‘The type initializer for ‘Realms.SharedRealmHandle’ threw an exception.’\nDllNotFoundException: Unable to load DLL ‘realm-wrappers’: The specified module could not be found. (Exception from HRESULT: 0x8007007E)",
"username": "Nguyen_Dinh_Tam"
},
{
"code": "",
"text": "I’m sorry I cannot comment because I do not know the version details of the platform you are trying to install on, the version of realm, or how you are trying to install.",
"username": "Ian_Ward"
}
] | (.Net) Error in a project using Windows Application Packaging Project | 2020-06-18T05:01:07.934Z | (.Net) Error in a project using Windows Application Packaging Project | 2,186 |
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "We are using Realm db with standard tier, 20 GB of bandwidth/ Monthly for our Mobile app content sync. Our current Realm db size is approximately 400 MB. If 5000/10,000 concurrent users use our Mobile app with standard tier we would like to understand the implications :If 5000 concurrent users use our Mobile app overshooting 20 GB of bandwidth. What happens to data sync and Realm db connection?If 10, 000 concurrent users use our Mobile app overshooting 20 GB of bandwidth. What happens to data sync and Realm db connection?If 5000 concurrent users use our Mobile app is within 20 GB of bandwidth. What happens to data sync and Realm db connection?If 10, 000 concurrent users use our Mobile app is within 20 GB of bandwidth. What happens to data sync and Realm db connection?What all issues we may run into if we exceed 20 GB of bandwidth with standard tier.Thanks!\nShruthi.S",
"username": "Shruthi_S"
},
{
"code": "",
"text": "@Shruthi_S This sounds like the legacy Realm Cloud, I’d encourage you take a look at our newly launched MongoDB Realm platform which integrates with MongoDB Atlas. The MongoDB Realm pricing also scales with you as your app grows and has a substantial free tier - https://docs.mongodb.com/realm/billing/To answer your question though those tiers do not affect each other, ie. increasing the amount of concurrent users does not decrease the amount of bandwidth. Your performance mileage will vary with your usage pattern and there is no guarantee of scalability. I’d encourage you to take a look at MongoDB Realm as all future performance improvements will be going into this platform.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian - What are the implications migrating from Realmdb to Mongo Realm db?Will it need implementation and code changes to migrate from Realm db to Mongo Realmdb?What’s the pricing and bandwidth details for Mongo Realm db standard tier ?Once the standard tier bandwidth is reached does Mongo realm db auto scale?Thanks!\nShruthi.S",
"username": "Shruthi_S"
},
{
"code": "",
"text": "@Shruthi_S",
"username": "Ian_Ward"
}
] | Realm Cloud with standard tier implications for 5000/10000 concurrent users | 2020-06-17T16:55:31.131Z | Realm Cloud with standard tier implications for 5000/10000 concurrent users | 2,788 |
null | [] | [
{
"code": "",
"text": "I’m unable to connect to host. The GUI is asking for a connection string (SRV OR STANDARD). Please provide the connection string.",
"username": "Deepika_15237"
},
{
"code": "mongodb://m001-student:[email protected]:27017/?authSource=admin&replicaSet=Cluster0-shard-0&readPreference=primaryPreferred&appname=MongoDB%20Compass&ssl=true",
"text": "Hi @Deepika_15237,Please paste this connection string in the SRV OR Standard text field for connecting to the class atlas cluster.Alternatively you can also fill in the individual fields. Please click on Fill in connection fields individually and follow the instructions mentioned in the Lecture : Connecting to MongoDB Using Compass.Hope it helps!If you have any other query then please feel free to get back to us.Thanks,\nShubham Ranjan\nCurriculum Support Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Hello,\nI am trying to connect to compass using the values described in the course ware and I am getting this errorconnect ECONNREFUSED 52.4.238.74:27017I have uploaded the screenshot of the screen. Pls adviseThanks\nSwateeError1920×1080 171 KB\nError21920×1080 154 KB",
"username": "Swatee_Jain"
},
{
"code": "",
"text": "It looks like you might be behind a corporate firewall or VPN that prevents outbound connection.Check with you system admin.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Swatee_Jain,In addition to @steevej-1495,Did you try to use connection string as well ? it’s better and efficient.mongodb+srv://m001-student:[email protected]/test~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Thank you for your suggestions. As suggested it is due to corporate firewall. I also tried connection string with same error. I was able to connect on my home laptop.Thanks\nSwatee",
"username": "Swatee_Jain"
},
{
"code": "",
"text": "",
"username": "Shubham_Ranjan"
}
] | Connecting to Comapss using connection string | 2019-12-10T13:18:37.852Z | Connecting to Comapss using connection string | 2,120 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi All,We have Oracle Star Schema. Can we migrate this Star Schema into Mongo DB ?We have 1 Fact table and 15 dimension tables and almost all queries perform inner joins. Data in these tables are very huge.Any pointers to blog and/or experience is highly appreciated.",
"username": "M_D"
},
{
"code": "{\n _id,\n totalA,\n totalB,\n dimentionA: {\n _id,\n dataA1,\n dataA2,\n ....\n },\n dimentionB: {\n _id,\n dataB1,\n dataB2,\n ....\n }\n}\n",
"text": "‘Table of rows’ in SQL Databases equivalent to ‘Collection of documents’ in MongoDB. And Star Data Model can be represented as document with nested properties, like this:To put simply, your fact document will contain dimension documents as nested objects.\nWith this structure, you will not need to use any joins ($lookups), because every dimension is already joined to the fact data.This scheme will be a perfect fit, only if the total size of such document will not exceed 16MB.Check this OracleDB and MongoDB Comparison for more detailed overview.",
"username": "slava"
}
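Because the dimensions are embedded, a fact query can filter on dimension attributes directly, with no join stage. For example (the "facts" collection name and the value are hypothetical; field names follow the sketch above):

db.facts.find(
  { "dimentionA.dataA1": "some value" },        // filter by a dimension attribute
  { totalA: 1, totalB: 1, "dimentionB._id": 1 }
)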
] | Can I use Mongo DB for Star Schema type of Data Model? | 2020-06-16T16:28:50.273Z | Can I use Mongo DB for Star Schema type of Data Model? | 4,973 |
null | [
"aggregation"
] | [
{
"code": " {\n \"name\": \"name 1\",\n \"type\": 50,\n \"qty\": 1\n },\n {\n \"name\": \"name 2\",\n \"type\": 52,\n \"qty\": 2\n },\n {\n \"name\": \"name 3\",\n \"type\": 50,\n \"qty\": 3\n },\n {\n \"name\": \"name 4\",\n \"type\": 52,\n \"qty\": 4\n },\n {\n \"name\": \"name 5\",\n \"type\": 50,\n \"qty\": 5\n }\ndb.example.aggregate(\n {\n $match: {\"type\": 50}\n },\n {\n $group: {\n _id: null,\n total: { $sum: \"$qty\" },\n count: { $sum: 1 }\n }\n },\n {\n $project: {\"_id\": 0, total: 1, count: 1}\n }\n)\ndb.example.aggregate(\n {\n $match: {\"type\": 50}\n }\n)\n{ \"total\" : 9, \"count\" : 3 }\n { \"_id\" : ObjectId(\"5eea8e09279c3208b5132694\"), \"name\" : \"name 1\", \"type\" : 50, \"qty\" : 1 }\n{ \"_id\" : ObjectId(\"5eea8e09279c3208b5132696\"), \"name\" : \"name 3\", \"type\" : 50, \"qty\" : 3 }\n{ \"_id\" : ObjectId(\"5eea8e09279c3208b5132698\"), \"name\" : \"name 5\", \"type\" : 50, \"qty\" : 5 }\n",
"text": "Hi everyone - got a bit of a question\nCollection data -and have 2 queries - sum and countand detail matching on some criteriaIs there any way of combining the 2 queries together so the sum and count and details appear together in the output? like sothanks\nSteve",
"username": "Steve_Potts"
},
{
"code": "$facetsumAndCountdocumentsdb.example.aggregate([\n { $match: { \"type\": 50 } },\n { $facet: {\n \"sumAndCount\": [\n { $group: {\n _id: null,\n total: { $sum: \"$qty\" },\n count: { $sum: 1 }\n }},\n { $project: {\n \"_id\": 0,\n total: 1,\n count: 1\n }}\n ],\n \"documents\" : [\n { $match: { \"type\": 50 } }\n ],\n }}\n]).pretty()\n{\n\t\"sumAndCount\" : [\n\t\t{\n\t\t\t\"total\" : 9,\n\t\t\t\"count\" : 3\n\t\t}\n\t],\n\t\"documents\" : [\n\t\t{\n\t\t\t\"_id\" : ObjectId(\"5eeaa8a79ffc9476b2ccbd7b\"),\n\t\t\t\"name\" : \"name 1\",\n\t\t\t\"type\" : 50,\n\t\t\t\"qty\" : 1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : ObjectId(\"5eeaa8a79ffc9476b2ccbd7d\"),\n\t\t\t\"name\" : \"name 3\",\n\t\t\t\"type\" : 50,\n\t\t\t\"qty\" : 3\n\t\t},\n\t\t{\n\t\t\t\"_id\" : ObjectId(\"5eeaa8a79ffc9476b2ccbd7f\"),\n\t\t\t\"name\" : \"name 5\",\n\t\t\t\"type\" : 50,\n\t\t\t\"qty\" : 5\n\t\t}\n\t]\n}\n$match",
"text": "Welcome to the community @Steve_Potts!Is there any way of combining the 2 queries together so the sum and count and details appear together in the output?If you want to perform aggregrations for the same input documents across several dimensions, you can use a $facet stage containing multiple sub-pipelines for processing.For example, converting your two aggregation pipelines into sumAndCount and documents facets:Returns:Since a facet sub-pipeline cannot be empty, I repeated the initial $match expression to return the documents.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "{\n $group: {\n _id: null,\n total: { $sum: \"$qty\" },\n count: { $sum: 1 },\n list: { $push: '$$CURRENT' },\n }\n},\n",
"text": "You can preserve your $match results by pushing every doc into array using $group stage:",
"username": "slava"
},
{
"code": "",
"text": "Thanks just the job - was trying to avoid putting details into an array but on reflection it is probably the better way of passing the data back to the application",
"username": "Steve_Potts"
},
{
"code": "",
"text": "Thanks for the welcome - i had completely forgotten about $facet will look into that - comparatively new to Mongodb converting from microsoft SQL - will chose the other answer as the solution because my match criteria and the other pipeline commands are a bit complicated in my really world example - thanks again",
"username": "Steve_Potts"
},
{
"code": "",
"text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Putting the results of 2 queries together in the returned data | 2020-06-17T23:06:04.710Z | Putting the results of 2 queries together in the returned data | 6,443 |
null | [
"dot-net",
"field-encryption"
] | [
{
"code": "",
"text": "I have setup CSFLE using the C Sharp driver and was able to encrypt a field in a document. I am now trying to read that document using the same driver.When I execute\nvar clientEncryption = new ClientEncryption(clientEncryptionSettings);\nit fails with the following error\nSystem.IO.FileNotFoundException: ‘Could not find: mongocrypt.dll –\nTried: C:\\Users\\brian.bernholtz\\AppData\\Local\\Temp\\Temporary ASP.NET Files\\vs\\a629dec2\\a52e73f9\\assembly\\dl3\\451c2076\\009d89cc_8b9ad501....\\x64\\native\\windows\\mongocrypt.dll,C:\\Users\\brian.bernholtz\\AppData\\Local\\Temp\\Temporary ASP.NET Files\\vs\\a629dec2\\a52e73f9\\assembly\\dl3\\451c2076\\009d89cc_8b9ad501\\mongocrypt.dll’The mongocrypt.dll file is clearly in my BIN directories. I even tried physically copying the file from my local BIN dir to the temp location specified and it still bombs out with same error.Any thoughts on why I cannot find this DLL?",
"username": "Brian_Bernholtz"
},
{
"code": "",
"text": "I should note, I have tried it with version 2.10 and 2.10.4 with the same result. 2.9 does not have the mongocrypt.dll dependency and doesnt seem to support CSFLE",
"username": "Brian_Bernholtz"
},
{
"code": ".csprojMongoDB.LibmongocryptWeb.config<configuration>\n <system.web>\n <hostingEnvironment shadowCopyBinAssemblies=\"false\" />\n </system.web>\n</configuration>\n",
"text": "Hi @Brian_Bernholtz, welcome!Any thoughts on why I cannot find this DLL?It’s been a while since you posted this question, have you found a solution yet ?Could you provide the .csproj file for the application? Are you importing MongoDB.Libmongocrypt ?Also, would you be able to try to disable shadow copying ? For example in your Web.config:Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | C Sharp Driver - Cannot find mongocrypt.dll | 2020-05-28T16:05:34.319Z | C Sharp Driver - Cannot find mongocrypt.dll | 3,876 |
null | [
"replication"
] | [
{
"code": "",
"text": "I would like to understand what does this actually mean:Rollback ID is 1Can we match this with some rollback or something?",
"username": "Aayushi_Mangal"
},
{
"code": "",
"text": "Hi,What’s the context of the error? When did you see this message, and could you post:Best regards,\nKevin",
"username": "kevinadi"
}
] | MongoDB rollback | 2020-06-16T07:08:34.593Z | MongoDB rollback | 1,358 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I learned in the amazing Mongo University Data Modeling course that typically a many-to-many relationship is modeled as two collections, with an array of references to one collection in the other collection.I’m facing a design problem, and I would like to get the opinion of some people with more experience than me. I need to make a good argument for my plan (laid out below) because someone else working on this project is suggesting a lookup table, essentially. He comes from a SQL background - so I feel in some ways that it’s one of those “if you’re a hammer, everything looks like a nail” scenarios. But I may also be wrong!Consider two collections: one that stores Users and one that stores Clubs. A Club may have many Users (and must have at least one), and a User may belong to many Clubs.Because a Club can assign permissions to its Users, and within the scope of the Club, those permissions will need to be easily queryable, I decided to store User references in Club documents.However, it is also crucial that all of a User’s Clubs are easily queried. So after some consideration, I decided to take the hit on data redundancy and also store an array of Club references in User documents. Now, when I need all of a User’s Clubs, I query on the User ‘Clubs’ field, and when I need all of a Club’s Users, I query on the Club’s ‘Users’ field. From my (inexperienced) point of view, I’m trading the cost of each User’s array of Club _id’s (which I think are 8B/ObjectId) for improved querying.I guess my partner wants to store { UserId, ClubId, Permissions } documents in a new join collection. It just feels icky and too “SQL-y” to me. But otherwise, I can’t think of a really good reason not to do it this way, other than the fact that now I’ve got to make two queries or a $lookup for either mentioned query - every time. I know that if I ever choose to shard either collection, that I won’t be able to use it as the joined collection in a lookup - so I’m weary to go that route.Thanks for any help.",
"username": "Michael_Jay"
},
{
"code": "{\"name\":\"Alice\", clubs:[\"Foo\"]}\n{\"name\":\"Bob\", clubs:[\"Bar\"]}\n{\"name\":\"Chuck\", clubs:[\"Foo\", \"Bar\"]}\n{\"club_id\":\"Foo\"}\n{\"club_id\":\"Bar\"}\nFoodb.clubs.aggregate([\n {\"$match\":{\"club_id\":\"Foo\"}}, \n {\"$lookup\":{\n \"from\":\"users\", \n \"let\": {\"cid\":\"$club_id\"}, \n \"pipeline\":[\n {\"$match\":{\"$expr\":{\"$in\":[\"$$cid\", \"$clubs\"]}}}], \n \"as\":\"members\"}\n }\n]); \n",
"text": "Hi @Michael_Jay,It’s been a while since you posted this question, have you found an answer yet ?Generally data modelling is a broad topic to discuss, this is due to many factors that may affect the design decision. One major factor is the application requirements, knowing how the application is going to interact with the database. With MongoDB flexible schema characteristic, developers can focus on the application design and let the database design conform for the benefit of the application. See also Building With Patterns: SummaryBecause a Club can assign permissions to its Users, and within the scope of the Club, those permissions will need to be easily queryable, I decided to store User references in Club documents.Depending on the use case, if you’re storing user references as an array in a club document you may have a very large array. Also, updating users within those array may not be a straight forward process.One alternative is to keep the reference of a club in users document. You can still utilise $lookup to query all users that belong to a club.For example, if you have Users documents as below:Example of Clubs documents as below:Example to find all users for club Foo :You may also find Data Model Examples and Patterns a useful reference.Regards,\nWan.",
"username": "wan"
}
] | Data modeling question on an M:N relationship | 2020-05-19T09:18:11.413Z | Data modeling question on an M:N relationship | 2,580 |
null | [
"app-services-user-auth"
] | [
{
"code": "https://stitch.mongodb.com/api/client/v2.0/app/client-app-id/auth/providers/local-userpass/register",
"text": "I am signing up users using the https://stitch.mongodb.com/api/client/v2.0/app/client-app-id/auth/providers/local-userpass/register endpoint from my server, which takes email & password on a POST request. However, it does not return the user ID. Is there any way I could get the user ID at this point? I would like to then create a user document with that ID to store additional fields from my signup form.Currently, I have to wait until the user logs in and then redirect them to a form to finish getting the rest of the info, which is not ideal.",
"username": "Matt_Jennings"
},
{
"code": "",
"text": "Hi Matt –This is because the user is not technically created until they are confirmed and log-in.\nAfter the user is registered (and depending on what your confirmation process is) you could simply send the credentials to the login endpoint (i.e. taking the login action on the user’s behalf). For creating the user document, a confirmation function (which will have the user ID) or a trigger on user creation are both options.Hope that helps!",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Ahh, I see. So that would require disabling the built-in email confirmations in order to log-in on their behalf right away. I was hoping I could keep Mongo’s email confirmation process, but it’s not the end of the world to make my own.",
"username": "Matt_Jennings"
}
] | Retrieving user ID for email/password authentication at signup | 2020-06-17T20:36:16.236Z | Retrieving user ID for email/password authentication at signup | 2,654 |
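A rough sketch of the server-side flow described in this thread (register, then log in on the user's behalf to obtain the user ID) might look like the following. The login endpoint path and the response field names are assumptions based on the register endpoint shown above, and confirmation must be disabled or automatic for this to work in one step.

```js
// Hypothetical sketch using node-fetch; the app ID placeholder mirrors the
// register endpoint mentioned in the thread, and response fields are assumed.
const fetch = require("node-fetch");
const base = "https://stitch.mongodb.com/api/client/v2.0/app/client-app-id/auth/providers/local-userpass";

async function registerAndGetUserId(email, password) {
  // 1. Register the pending user
  await fetch(`${base}/register`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email, password }),
  });

  // 2. Log in on the user's behalf; the user is created on first login
  const res = await fetch(`${base}/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username: email, password }),
  });
  const body = await res.json();
  return body.user_id; // assumed field name; use it to create the profile document
}
```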
null | [] | [
{
"code": "",
"text": "Hi! I joined this community after learning about it for the first time in one of the MongoDB Live sessions. I hope I can contribute to this community.BTW, I hope you don’t mind me asking this question here: it has been suggested to me a couple of times I should learn RDMS theory before diving into MongoDB (I have learned a semester’s worth of RDMS about five years ago if I remember correctly). I’ve been using MongoDB for a year now (mostly basic querying but this year and the next will be different), but I feel self-conscious about my lack of RDMS knowledge. At what point in learning RDMS is sufficient to work with MongoDB?",
"username": "nisajthani"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @nisajthani!At what point in learning RDMS is sufficient to work with MongoDB?RDBMS knowledge/theory is not required to work with MongoDB, and for some learners MongoDB may be the first database system they use. My overall recommendation would be to learn more about the database management system(s) you plan to use, so you can best take advantage of the features and strengths each offers.Some fundamental database concepts such as indexing are common across different database implementations, but others (such as schema design patterns and query optimisation) will have different considerations given MongoDB’s more flexible schema support. Even though all database systems support some form of indexing for data retrieval, available index types (compound, geospatial, text, …) and data size limitations will vary.MongoDB is a distributed database including automatic failover, so more advanced deployment approaches like replication and sharding will also differ from the RDBMS equivalent (depending on which RDBMS system(s) you are using as a reference). For one example comparison, see Comparing MongoDB vs PostgreSQL.The free online courses at MongoDB University are a great starting point for MongoDB knowledge, and courses like M001 (MongoDB Basics) do not presume you have any database experience. There are learning paths with recommended courses for either Developer or Database Administrator topics.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @nisajthani,Welcome to the community! Glad to hear you heard about us from .live. What was your favorite session during the show?Cheers,Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "I mostly attended the beginner-level session and I loved them all but if I have to choose a favorite it would be ‘Beginner MongoDB Mistakes to Avoid’.",
"username": "nisajthani"
},
{
"code": "",
"text": "Hi @nisajthani,Thanks for sharing your favourite session from MongoDB.live! I haven’t seen @Eric_Reid’s talk yet, but I’m queuing that up on YouTube now: Silverback Notes: Beginner MongoDB Mistakes to Avoid.FYI, there’s also a discussion topic on the forums where you might find some interesting session suggestions or want to add your own: Thank you for two great days of sessions at MongoDB.live!.I added some of my own session highlights in that thread, but there are lots of great talks I haven’t seen yet.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "The Silverback is thrilled that you liked his talk!",
"username": "Eric_Reid"
}
] | Greetings! I am from Southeast Asia | 2020-06-16T03:40:11.716Z | Greetings! I am from Southeast Asia | 4,243 |
null | [
"mongodb-live-2020"
] | [
{
"code": "",
"text": "Hello,I’d like to say thank you for two days packed with sessions, Breakout and Community Cafe Sessions have been great (these which I attended). Some session have been in very small groups, but this made the conversation much more personal and direct, there was one with were we have been only in three. This could be a pattern for the next time something like “birds of a feather” sessions.\nThe session from John Page was: first technically very interesting and second: great to control the largorobot in his house. It is still online you can drive it around https://largobot.com/ (John gave permission to share the link)\nThe chats in general have been ok, it was a little bit unorganized.\nPerfectly well worked the “ask the expert” sessions.The event was worth spending the time! A virtual event can not replace a real event but this one was was very well done.Thanks!\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "I’ll have to wait for the recordings. I had every intention of sitting in on some live sessions, but unfortunately work had other plans for me. ",
"username": "Doug_Duncan"
},
{
"code": "watchFull Agendaon demand",
"text": "Hello @Doug_Duncanyou can visit MongoDB World 2022 | June 7-9, 2022 | MongoDB login, move to watch then Full Agenda there you can view all recorded sessions by clicking on demand.Cheers,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Thanks Michael! Im used to having to wait for a week or two but I guess having this streamed lived would mean that it would be available that much quicker.I’m going though the talks now.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @michael_hoeller,Thank you for your feedback and enthusiasm! It was nice to have you join the Community Cafe sessions my team was involved with during Sydney hours (and ask some great questions!).Shifting from the planned in-person event to an online conference on short notice (while everyone was working from home!) was a huge challenge but we’re pleased with the overall outcome. Although we didn’t get to meet in person this year, the online format did enable more global participation (without jetlag!) and high quality recordings of sessions. The event platform we used did have a few unexpected rough edges, but we’ll take those learnings into consideration for future events.You can now view all of the recorded MongoDB.live content on-demand. Higher quality videos have been posted on MongoDB’s YouTube channel and there are playlists for different content themes from the agenda (“What’s New”, “Performance & Scaling”, etc).You can also continue to login (or register) via the MongoDB World event site for the original agenda and on-demand view of sessions. The archived event site includes some bonus Ask the Experts Panel sessions that may not be on YouTube.Note: live event-specific features like Chat and the Help Desk are no longer active or monitored.What are everyone’s session highlights? I haven’t had a chance to see many of the talks yet (and there are soooo many interesting ones to catch up on), but some talks I’ve really appreciated from my colleagues at MongoDB so far are:A Complete Methodology of Data Modelling for MongoDB (@Daniel_Coupal) - start here if you are new to data modelling in MongoDB.Advanced Schema Design Patterns (@Justin, @Daniel_Coupal) - great insight into schema design from engineers on our Consulting and Education teams.What’s new in MQL 4.4 (@Asya_Kamsky) - introduction to new features in the MongoDB Query Language in MongoDB 4.4 (and some existing features you may not be aware of).What’s new with MongoDB Charts? (@tomhollander, @Scott_Sidwell ) - tour of new features developed by the Sydney-based Charts team.Let’s .explain() Performance (@Christopher_Harris) - the third annual instalment of Chris’ query performance tips & tricks series includes a secret about explain plans and a novel metaphor for exploring query performance.Are Transactions Right For You? (@Sheeri_Cabral) - insight into the pros and cons of transactions with one of my favourite guest appearances at MongoDB.live.How MongoDB uses MongoDB at Scale (@Annie_Black) - how we troubleshoot performance problems that happen at scale for MongoDB’s Evergreen CI system (over a petabyte of data with 10,000 queries/second and 5k hosts up at peak load!).Impact of Available IOPS On Your Database Performance (@Alek_Antoniewicz) - demystifying IOPS (Input/Output Operations Per Second)Realm Scalable Sync Design (@Shane_McAllister, @Ian_Ward) - a grand tour of Realm SyncYou may notice more than a few familiar names from community forum discussion .There are also great talks sharing experience and expertise from the broader MongoDB community. Interesting ones I’ve seen so far have included:Migrating Heavy Cron Jobs to MongoDB Realm Triggers and Worker Functions (@shrey_batra)JOINs and Aggregations Using Real-Time Indexing on MongoDB Atlas (@Shruti_Bhat, Dhruba Borthakur)Rules for Disruptors - Making, Managing, and Scaling the Case for Change with MongoDB (@Jeff_Needham)Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Related feedback from @nisajthani:I mostly attended the beginner-level session and I loved them all but if I have to choose a favorite it would be ‘Beginner MongoDB Mistakes to Avoid’.If you want to catch up on this talk, it is @Eric_Reid’s Silverback Notes: Beginner MongoDB Mistakes to Avoid.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Thank you for two great days of sessions at MongoDB.live! | 2020-06-11T20:36:49.622Z | Thank you for two great days of sessions at MongoDB.live! | 5,093 |
null | [] | [
{
"code": "",
"text": "Does realm support installation with Swift Project Manager?",
"username": "Jon_Fabris"
},
{
"code": "",
"text": "@Jon_Fabris Not yet - we are working on it now though",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Support for Swift Project Manager | 2020-06-17T18:21:59.106Z | Support for Swift Project Manager | 1,452 |
null | [
"realm-web"
] | [
{
"code": "Uncaught TypeError: Cannot use 'in' operator to search for 'node' in undefined",
"text": "Using a fresh/default Vue application created by the Vue-CLI and installing Realm web-sdk will break the application with console errors:\nUncaught TypeError: Cannot use 'in' operator to search for 'node' in undefinedThis is also the case with a Svelte app too, although it breaks at compile-time and not in the browser.@kraenhansen\n@Drew_DiPalma",
"username": "Mellorad"
},
{
"code": "",
"text": "Hi Mellorad,This might answer your questionYou can also track progress of our Realm-Web SDK here - Merge Stitch and Realm SDKs - Realm Web - phase 2 · Issue #2826 · realm/realm-js · GitHubIn the meantime, our legacy stitch browser SDK will work with applications Vue.JS. You can see download and usage documentation here",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm web-sdk and VueJS | 2020-06-16T16:28:25.496Z | Realm web-sdk and VueJS | 4,545 |
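For anyone following the interim suggestion in this thread, a minimal sketch of wiring the legacy Stitch browser SDK into a Vue app might look like this; the app ID and the use of anonymous authentication are assumptions for the example.

```js
// main.js – hypothetical Vue entry point using the legacy Stitch browser SDK
import Vue from "vue";
import App from "./App.vue";
import { Stitch, AnonymousCredential } from "mongodb-stitch-browser-sdk";

const stitchClient = Stitch.initializeDefaultAppClient("your-app-id"); // assumed app ID

stitchClient.auth
  .loginWithCredential(new AnonymousCredential())
  .then(user => console.log("Logged in as", user.id));

Vue.prototype.$stitch = stitchClient; // make the client available in components

new Vue({ render: h => h(App) }).$mount("#app");
```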
null | [
"compass"
] | [
{
"code": "",
"text": "Hi!\nI can’f find tab Schema in MongoDB Compass Community.I installed mongodb-win32-x86_64-2012plus-4.2.7-signed.msiI use this connection.\nhttps://university.mongodb.com/mercury/M001/2020_June_9/chapter/Chapter_0_Setup/lesson/594d8f1e8c07c3a9b60bdfb3/lecture\nThanks forr help",
"username": "Gabriele_Tolomei"
},
{
"code": "",
"text": "Compass Community edition will not have this featurePlease use Stable version",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | I can'f find tab Schema | 2020-06-17T16:58:22.380Z | I can’f find tab Schema | 1,627 |
null | [
"security"
] | [
{
"code": "",
"text": "Hi All,Application user wants they should have privilege to change their own user password. I have created user and shared password with respective user. Due to security reason they want to change the password. Please suggest.",
"username": "Pabitra_Kumar_Roy"
},
{
"code": "",
"text": "You have to create a role with changeOwnPassword privilege and assign to the usersPlease check this link",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | User password change | 2020-06-17T04:36:57.782Z | User password change | 1,376 |
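To make the role suggestion in this thread concrete, a sketch of creating and granting such a role in the mongo shell follows, mirroring the pattern in the linked documentation; the user name is an assumption for the example.

```js
// Run against the admin database
const admin = db.getSiblingDB("admin");

// Role that only allows users to change their own password and custom data
admin.createRole({
  role: "changeOwnPasswordCustomDataRole",
  privileges: [
    {
      resource: { db: "", collection: "" },
      actions: ["changeOwnPassword", "changeOwnCustomData"]
    }
  ],
  roles: []
});

// Grant it to an existing application user alongside their other roles
admin.grantRolesToUser("appUser", [
  { role: "changeOwnPasswordCustomDataRole", db: "admin" }
]);
```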
null | [
"compass"
] | [
{
"code": "",
"text": "I have been able to test the performance using the compass Performance tab, but now I would like to share that information with my coworkers, is there a way to do this? I can’t seem to find a share/export/dump button, is this data stored somewhere anyways? Thanks.",
"username": "Jenaro_Calvino"
},
{
"code": "",
"text": "That is currently not possible in Compass and the data is not stored, it is only displayed live from the server.You can create a feature request here: Compass: Top (247 ideas) – MongoDB Feedback Engine.",
"username": "Massimiliano_Marcon"
}
] | Export performance data from MongoDB Compass | 2020-06-17T14:05:23.300Z | Export performance data from MongoDB Compass | 1,782 |
null | [
"upgrading"
] | [
{
"code": "",
"text": "We have recently been having a lot of struggle finding a good way to automatically upgrade an existing mongo database across multiple major versions. Due to time constraints the short term emergency solution seems to be to pin the MongoDB version. But this is not a long term solution. Are there any thoughts or plans regarding MongoDB’s ability to move across multiple major versions in one update? The system we are working on is a distributed system, with alot of servers involved, so manually stepping through rolling updates is not an option. Nor can one assume all the servers are on the same version at the same time, or that all servers are passing through all versions along the way (i.e. a server could be updated from a previous version to the next version without stepping through the current version, if this makes any sense).Any thoughts or suggestions are much appreciated.",
"username": "LostNotFound"
},
{
"code": "",
"text": "Hi @LostNotFound,Upgrading can definitely be a challenge, particularly if you have multiple successive major versions to upgrade through.Rather than performing manual upgrades or relying on O/S packages (which often don’t have a smooth upgrade path) I would use MongoDB-aware tooling. Upgrading a distributed database deployment with minimal downtime requires some coordination, but is certainly possible to automate.There are three main options developed by MongoDB:MongoDB Cloud Manager - monitoring, automation, and backup for self-hosted deployments. Cloud Manager uses an agent-based approach and is a freemium service (basic monitoring is free, automation requires a per server subscription, and backup is pay for use). If cloud-based agents are an option for your deployment, there’s a 30-day trial if you want to try out the automation features. Cloud Manager automation can handle upgrading or downgrading the versions of deployments, although typically you have to be starting from a supported (non End-of-Life) version of MongoDB server.MongoDB Ops Manager - the on-premise version of Cloud Manager, which is part of a MongoDB Enterprise Advanced subscription. With Ops Manager you are responsible for setting up and managing all infrastructure for on-premise management of automation, backup, and monitoring.MongoDB Atlas - fully managed MongoDB-as-a-service with upgrades managed via the Atlas UI or API. Atlas also has a live import feature which can be handy for migrating from an older version of MongoDB to a supported version. For example, you can live migrate from MongoDB 2.6 to 4.2.There some other solutions that build on the above APIs, such as the Enterprise Operator for Kubernetes and the Atlas Open Service Broker. There are also various third party tooling & scripts which may be helpful depending on your environment and the types of deployments you manage.What types of deployments are you managing (standalone, replica set, or sharded cluster) and how are you currently automating upgrades? Are you using any automation or orchestration tooling (for example: Ansible, Chef, Puppet, Salt, Terraform, … )?Regards,\nStennie",
"username": "Stennie_X"
}
] | Automated upgrading of MongoDB deployments | 2020-06-17T13:14:51.431Z | Automated upgrading of MongoDB deployments | 2,646 |
null | [
"upgrading"
] | [
{
"code": "",
"text": "I am new to MongoDB devops, and I’m having issues upgrading multiple servers from Ubuntu 16.04 to 18.04. Doing the upgrade breaks the Mongo databases present on the system. The servers are running the Ubuntu official versions of MongoDB, 2.6.10 on 16.04 and 3.6.3 after the upgrade. As far as I understand the way to fix it is to downgrade mongo and to the rolling updates.This does not feel like the intended way to upgrade the systems, and I have so far been unable to find a propper way to do it. Is it really the intended result to have to downgrade Mongo manually and upgrade step by step?I appreciate any suggestions, thank you in advance.Related bug on Launchpad: https://answers.launchpad.net/ubuntu/+source/mongodb/+question/691008",
"username": "LostNotFound"
},
{
"code": "mongodbmongodb-orgmongodbmongodb",
"text": "From Mongo Ubuntu Install.ImportantThe mongodb package provided by Ubuntu is not maintained by MongoDB Inc. and conflicts with the official mongodb-org package. If you have already installed the mongodb package on your Ubuntu system, you must first uninstall the mongodb package before proceeding with these instructions.As far as I understand the way to fix it is to downgrade mongo and to the rolling updates.This is the best supported way to upgrade your database to a supported version. The documentation is in each releases, upgrade notes. There are important steps along the wayMy advise is to also remove the Ubuntu repository version of mongodb and install from the MongoDB repository.You are also well advised to look at your client drivers as this many major versions include depreciations along the way.",
"username": "chris"
},
{
"code": "",
"text": "Thank you for replying Chris. I understand the Ubuntu package is not maintained by MongoDB Inc, do you have any idea why it is this way?For this project other third party repositories is not an option, so I will have to rely on what is available from Ubuntu. We also need to find a way to upgrade these systems automatically but I will create a new post for that.",
"username": "LostNotFound"
},
{
"code": "apt upgradedo-release-upgrade",
"text": "Hi @LostNotFound while I can’t state why Ubuntu maintains their own packages for MongoDB I will state that for a production system it’s probably best to pin the MongoDB version so any type of update is not going to upgrade MongoDB before you’re ready to do that.There are notes in the documentation that shows how you would pin a version in Ubuntu. This will keep MongoDB from getting updated when running apt upgrade. I would hope that it would stop when you ran do-release-upgrade as well, but I’ve not tried that before to verify. A system wouldn’t know that there are specific steps to follow for upgrading packages and will just install the newest versions of installed software unless told not to.The other option you have, and this is what I’ve done in the past, is to manually install MongoDB without a package manager. This requires more work as you’d have to manually add the user/groups, build out the directories and write your own service file among other things as well as perform the upgrades and any changes that might be needed.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.",
"username": "system"
},
{
"code": "",
"text": "Thank you for your suggestion Doug, pinning seems to be what we have to do in the short term. As for the long run I suspect we will have to move away from MongoDB, or at least rely a lot less on it compared to now.",
"username": "LostNotFound"
}
] | Upgrading Ubuntu breaks MongoDB | 2020-06-08T13:33:32.932Z | Upgrading Ubuntu breaks MongoDB | 3,000 |
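Related to the pinning suggestion in this thread, a sketch of holding packages on Ubuntu might look like the following; the exact package names depend on whether you run the Ubuntu-maintained mongodb package or the official mongodb-org packages, so adjust accordingly.

```sh
# Hold the official MongoDB packages so apt upgrades don't touch them
sudo apt-mark hold mongodb-org mongodb-org-server mongodb-org-shell mongodb-org-mongos mongodb-org-tools

# ...or, if using the Ubuntu-maintained packages instead
sudo apt-mark hold mongodb mongodb-server mongodb-clients

# Release the hold when you are ready to perform a planned, stepwise upgrade
sudo apt-mark unhold mongodb-org mongodb-org-server
```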
null | [
"charts"
] | [
{
"code": "",
"text": "Hi im new to mongo and used Qliksense and a Sql database in the past. My question is about using the calculated fields in Charts. At the moment it is very limited so it makes it difficult for me to display what I want on the charts and to create new fields. I would like to know if all the aggregation operators can be used in the calculated field and if not when you create an aggregation pipeline that displays what you need , how do you save that into a field in the data , to reuse and manipulate in other charts.\nSorry if my question is unclear , still very new to this.",
"username": "johan_potgieter"
},
{
"code": "(sensor.temp - 32)*5/9{ $multiply: [ \"$price\", 0.075 ] }$addFields",
"text": "Hi @johan_potgieter -Calculated fields in Charts can use one of two syntax:All calculated fields become an $addFields stage in the final pipeline. If you want to do something more complex like pre-group the data, you can do this by entering full aggregation stages in the query bar rather than use calculated fields.Let me know if you have any more questions.\nTom",
"username": "tomhollander"
},
{
"code": "",
"text": "Thanks Tom this helped. I am doing the mongodb university courses know. Our company switched from an sql database but we were also using Qlik to display our charts. We want to use mongo charts now , so was wondering is there a way that links to videos can be added to the charts for certain events. So if i click on part part of a pie chart one video plays and another when i click on a different part?",
"username": "johan_potgieter"
},
{
"code": "",
"text": "That’s not possible today, but we are planning our next set of interactivity features so I’ll see if we can do something like this.",
"username": "tomhollander"
},
{
"code": "",
"text": "Thanks that will be great. Will more kinds of charts also be released , like a spider chart and gauge? Then the other thing I would like to know is , do you plan to make it possible to do cross filtering between charts?",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Gauge chart is already available (in cloud-hosted Charts). We are also planning a spider/radar chart in upcoming months. Cross-filtering is on our plan but will take us a bit longer to get there.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Calculated field in Charts | 2020-04-29T12:56:46.063Z | Calculated field in Charts | 5,778 |
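As an illustration of the point in this thread about pre-grouping data in the query bar (rather than in a calculated field), something along these lines can be pasted into the chart's query bar; the collection fields used here are invented for the example.

```js
// Hypothetical query-bar pipeline: pre-aggregate before the chart encodings run
[
  { $match: { status: "complete" } },
  { $group: { _id: "$make", totalSales: { $sum: "$price" }, count: { $sum: 1 } } }
]
```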
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I’ve been using MySQL since 2003.\nI am now learning nodejs which uses mongodb (on cloud.mongodb.com and not a self-hosted one) as the database.\nI understand mongodb is for storing key:value pair data.\nI have an app which stores a lot of TEXT data (articles) into the MySQL table.",
"username": "anjanesh"
},
{
"code": "",
"text": "",
"username": "slava"
},
{
"code": "",
"text": "",
"username": "anjanesh"
},
{
"code": "FULLTEXT",
"text": "Welcome to the community @anjanesh!I understand mongodb is for storing key:value pair data.MongoDB stores structured data in documents. Key/value is an extremely simplified view as values can include more complex types like embedded documents and arrays.Is mongodb suitable to storing a page full of text (articles) as a value for a key ?You may choose to store an article or large text blog as a single value, but typically this is not the best approach if you also want to provide a search interface. For example, you would normally want to distinguish title, author, and other metadata from the body of an article. For efficient searching, you also want to consider how to index and prioritise different aspects of your content.For a great introduction to MongoDB data patterns, I suggest reviewing Building with Patterns: A Summary and taking the free online course M320: Data Modelling at MongoDB University. The latest session of M320 just started this week and you have until August 18 to complete the course.Is search fast ? (for searching strings in text)Search speed depends on several factors including how you’ve modelled your data, what sort of searches you are trying to perform, and the resources of your deployment. For example, if you are trying to perform case-insensitive regular expression matches against large text blobs, performance is unlikely to be acceptable because this will be a resource-intensive scan through all of your documents.If you have basic text search requirements, MongoDB has a standard Text Search feature which is analogous to a MySQL FULLTEXT index.If you have more complex text search requirements, definitely look into using Atlas Search which is available for MongoDB 4.2+ Atlas clusters.If you need suggestions for improving search performance or your data model, I suggest starting a new topic with an example of your documents, indexes, and typical search queries. Please provide specific details and examples in order to get relevant advice.Is the data compressed on mongodb or stored as plain-text ?All modern versions of MongoDB compress data and indexes by default. Storage compression was optional in MongoDB 3.0, but available if you changed the storage engine to WiredTiger (which has been the default storage engine since 3.2).Yes. but there is a limitation of 16MB per document (article).The limit of 16MB per document represents a significant amount of text. For example, this is about three times as much as The Complete Works of William Shakespeare in text format (ref: Project Gutenberg). If your document sizes are approaching 16MB I would give careful consideration to whether there is a more efficient schema design for your use case.Search is good. If you want it to be fast, then use MongoDB with some text-search engine, like ElasticSearch.Atlas Search integrates Apache Lucene, which is the same same search library that Elastic builds on. Atlas Search has been in beta for the last year, but is now officially Generally Available (GA) as of early June.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you for your comprehensive reply Stennie. This is really helpful.One of my to-do apps is a word database which is available from WordNet (version 2) from :A complete set of MySQL batch script files that create and load the entire WordNet 2.0 data file set. WordNet 2.0 data files in MySQL data format.\nin MySQL which is 380MB in size (says phpMyAdmin).\nI intend to export this to json and push to mongodb.Is this is a good use-case for mongodb ?",
"username": "anjanesh"
},
{
"code": "$lookup",
"text": "One of my to-do apps is a word database which is available from WordNet (version 2)\n…\nIs this is a good use-case for mongodb ?Hi @anjanesh,This is a great use case for MongoDB, but I would encourage you to think about how your data model might be adjusted to take advantage of MongoDB’s indexing and flexible schema rather than doing a direct 1:1 translation of an existing SQL schema. You could start with a direct translation, but this typically misses out on some benefits like easier querying and better performance.A general difference in approach with MongoDB is that you should think about how your data will commonly be used rather than how the data will be stored. This is the opposite of traditional RBDMS data model design, where you first design a highly normalised schema and then work out how to write and optimise your queries.For example, if your word application is built around finding synonyms and antonyms, it might make sense to combine related data in a single MongoDB collection instead of requiring multiple queries or $lookup aggregation to join data. You originally mentioned searching strings in text, so I’m guessing there is a specific subset of data (and type of searching) that you’d like to optimise.The resources I suggested earlier will be helpful, and you should also check out some of the talks from our recent MongoDB.live conference. I’ve highlighted some of the interesting talks I’ve seen so far in another forum topic (see: MongoDB.live session highlights) and the first two talks happen to be about data modelling:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you Stennie for your reply - I think I’ll go through the tutorials you’ve mentioned and then revisit the application side of it using NodeJS + Express + React.",
"username": "anjanesh"
},
{
"code": "",
"text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Use case for storing pages of text like articles as key:value pairs | 2020-06-16T12:16:27.710Z | Use case for storing pages of text like articles as key:value pairs | 7,970 |
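To make the text-search suggestion in this thread concrete, a minimal sketch with the standard text index might look like this; the collection and field names are assumptions for the example.

```js
// Index title and body for text search, weighting the title higher
db.articles.createIndex(
  { title: "text", body: "text" },
  { weights: { title: 10, body: 1 }, default_language: "english" }
)

// Find articles mentioning a word or phrase, best matches first
db.articles.find(
  { $text: { $search: "synonym" } },
  { score: { $meta: "textScore" } }
).sort({ score: { $meta: "textScore" } })
```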
null | [] | [
{
"code": "",
"text": "I’ve been following along the Tools Overview video and stuck on how to create a Stitch application. I don’t see where you can create one in Atlas? It might be because I’m on a different version. Could someone point me in the right direction? Thank you!",
"username": "Caleb_Pan"
},
{
"code": "",
"text": "This is outside the scope of the M001 course.This post is better suited to MongoDB Developer Community Forums.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Caleb_Pan,Please have a look at our documentation. There are great tutorials that you can follow.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | How to set up Stitch application? | 2020-06-16T17:33:47.088Z | How to set up Stitch application? | 1,045 |
null | [
"java"
] | [
{
"code": "package de.tudo.ls14.aqua.smarthome.dao;\n\nimport com.mongodb.ConnectionString;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.client.*;\n\nimport static com.mongodb.client.model.Filters.eq;\n\nimport com.mongodb.client.model.FindOneAndReplaceOptions;\nimport com.mongodb.client.model.ReturnDocument;\nimport de.tudo.ls14.aqua.smarthome.model.Device;\nimport de.tudo.ls14.aqua.smarthome.model.Household;\nimport de.tudo.ls14.aqua.smarthome.model.User;\nimport org.bson.Document;\nimport org.bson.UuidRepresentation;\nimport org.bson.codecs.configuration.CodecRegistry;\nimport org.bson.codecs.pojo.PojoCodecProvider;\nimport org.springframework.stereotype.Repository;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.UUID;\nimport java.util.logging.Level;\nimport java.util.logging.Logger;\n\nimport static org.bson.codecs.configuration.CodecRegistries.fromProviders;\nimport static org.bson.codecs.configuration.CodecRegistries.fromRegistries;\n\n@Repository(\"Mongodao\")\npublic class MongoDao {\n final MongoClient mongoClient;\n final MongoDatabase mongoDatabase;\n final MongoCollection<User> userCollection;\n final MongoCollection<Household> householdCollection;\n final MongoCollection<Device> deviceCollection;\n\n\n public MongoDao() {\n String password = System.getProperty(\"password\");//Passwort aus den VM options\n Logger.getLogger(\"org.mongodb.driver\").setLevel(Level.ALL);\n ConnectionString connectionString = new ConnectionString(\"someConnectionString\");\n CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());\n CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);\n MongoClientSettings clientSettings = MongoClientSettings.builder()\n .uuidRepresentation(UuidRepresentation.STANDARD)\n .applyConnectionString(connectionString)\n .codecRegistry(codecRegistry)\n .build();\n\n mongoClient = MongoClients.create(clientSettings);\n\n mongoDatabase = mongoClient.getDatabase(\"ProjektDB\");\n userCollection = mongoDatabase.getCollection(\"userCollection\", User.class);\n householdCollection = mongoDatabase.getCollection(\"householdCollection\", Household.class);\n deviceCollection = mongoDatabase.getCollection(\"deviceCollection\", Device.class);\n }\n\n public User getUserById(UUID id) {\n return userCollection.find(eq(\"id\", id)).first();\n }\n\n public Household getHouseholdById(UUID id) {\n return householdCollection.find(eq(\"id\", id)).first();\n }\n\n public Device getDeviceById(UUID id) {\n return deviceCollection.find(eq(\"id\", id)).first();\n }\n\n public int addHousehold(Household household) {\n householdCollection.insertOne(household);\n return 1;\n }\n\n public int addUser(User user) {\n userCollection.insertOne(user);\n System.out.println(\">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> added:\"+user.toString());\n return 1;\n }\n\n public int addDevice(Device device) {\n deviceCollection.insertOne(device);\n return 1;\n }\n\n public User updateUserById(User user) {\n Document filterByUserId = new Document(\"_id\", user.get_id());\n FindOneAndReplaceOptions returnDocAfterReplace = new FindOneAndReplaceOptions().returnDocument(ReturnDocument.AFTER);\n return userCollection.findOneAndReplace(filterByUserId, user, returnDocAfterReplace);\n }\n\n //nur zum testen\n public List<User> getAllUsers() {\n MongoCursor<User> cursor = userCollection.find().iterator();\n List<User> userList = new ArrayList<>();\n try{\n while(cursor.hasNext()){\n 
userList.add(cursor.next());\n }\n } finally {\n cursor.close();\n }\n return userList;\n }\n}\npackage de.tudo.ls14.aqua.smarthome.model;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.UUID;\n\npublic class User {\n private String name;\n private UUID _id;\n private List<UUID> households;\n private String email;\n\n public User(){\n name = null;\n _id = null;\n households=null;\n email = null;\n }\n\n public User(UUID _id, String name, List<UUID> households, String email) {\n this._id = _id;\n this.name = name;\n this.households = households;\n this.email = email;\n }\n\n public String getName() {\n return name;\n }\n\n public User setName(String name) {\n this.name = name;\n return this;\n }\n\n public UUID get_id() {\n return _id;\n }\n\n public User set_id(UUID _id) {\n this._id = _id;\n return this;\n }\n\n public List<UUID> getHouseholds() {\n return households;\n }\n\n public User setHouseholds(List<UUID> households) {\n this.households = households;\n return this;\n }\n\n public String getEmail() {\n return email;\n }\n\n public User setEmail(String email) {\n this.email = email;\n return this;\n }\n\n @Override\n public String toString() {\n return \"User{\" +\n \"name='\" + name + '\\'' +\n \", _id=\" + _id +\n \", households=\" + households +\n \", email='\" + email + '\\'' +\n '}';\n }\n}\n",
"text": "Hi, I have problems setting my primary key ‘_id’ as UUID.\nMy DAO code:I do not alter my POJO.\nMy POJO:The _id field is UUID, but when i open atlas, it gives every user an Object_id.\nHow can i fix that?",
"username": "Bruno_Steffen"
},
{
"code": "",
"text": "Hello, Bruno! Welcome to the community! \nIn order we could help you, please provide:",
"username": "slava"
},
{
"code": "",
"text": "Sorry about the big picture, can’t edit the post. There must’ve been a problem because I used a certain symbol",
"username": "Bruno_Steffen"
},
{
"code": "{\n \"name\": \"Bla Mcblub\",\n \"_id\": null,\n \"households\": [\n \"c0777af0-7a56-414b-a1dc-5e9ed567d4a7\",\n \"76d2635c-fe63-42d2-b6d0-866e7cd750e8\"\n ],\n \"email\": \"[email protected]\"\n}\n",
"text": "sure,\nWhen I use postman I get:As you can see the households perfectly hold the UUID, but the _id doesn’t.\nIn atlas it looks like this:{“_id”:{“$oid”:“5ee4ea278324f83031afbda3”},“email\":\"[email protected]”,“households”:[{“$binary”:{“base64”:“wHd68HpWQUuh3F6e1WfUpw==”,“subType”:“04”}},{“$binary”:{“base64”:“dtJjXP5jQtK20IZufNdQ6A==”,“subType”:“04”}}],“name”:“Bla Mcblub”}Again, there is clearly an objectid for the _id field which I don’t want and apparently postman can’t read that.Just for the fun of it here a printout of the toString right before i push the object into the DB:added:User{name=‘TotallyLegitName’, _id=05c17b25-4c78-49d8-bec0-12a4111f823e, households=[7acc98ef-30f0-4db4-a5df-e1faa6e38b85, 44b5499f-f00f-4e81-8d2b-a284a0c65436], email=‘[email protected]’}I do actually have one idea of why it may occur. I might have posted two different users with the same _id at time point by accident, thus mongo wasn’t happy and maybe decided to do smthg about it (as every _id needs to be singular as im understanding it). But what to do about that?",
"username": "Bruno_Steffen"
}
] | UUID as primary key | 2020-06-16T11:32:47.766Z | UUID as primary key | 5,154 |
null | [
"spring-data-odm"
] | [
{
"code": "",
"text": "Any annotation available for wildcard index ?",
"username": "Subhashree_Parthasar"
},
{
"code": "",
"text": "Hi @Subhashree_Parthasar, welcome to the forum!I’d assume that you’re referring to the use of Spring Data MongoDB on Spring Boot. Wildcard index is currently not supported in Spring Data MongoDB. You could up vote or add yourself as a watcher for DATAMONGO-2368 to receive notifications on it.If you have further questions about Spring Data MongoDB, I’d suggest to post on StackOverflow: spring-data-mongodb to reach wider audience with the expertise.Regards,\nWan.",
"username": "wan"
}
] | Wildcard index support in spring boot java | 2020-06-15T13:55:33.790Z | Wildcard index support in spring boot java | 2,897 |
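Since the annotation is not available yet, one possible workaround is to create the wildcard index outside of Spring's mapping annotations, for example directly in the mongo shell (wildcard indexes require MongoDB 4.2+); the collection name below is an assumption.

```js
// Wildcard index over all fields of the documents in the collection
db.products.createIndex({ "$**": 1 })

// Or restrict it to the fields under a specific subdocument
db.products.createIndex({ "attributes.$**": 1 })
```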
null | [
"replication"
] | [
{
"code": "",
"text": "Can some one Please confirm. Is it possible/support SRDF with MongoDB…",
"username": "Nagarajan_Palaniappa"
},
{
"code": "",
"text": "Welcome to the community @Nagarajan_Palaniappa!MongoDB has built-in support for replication and failover which is supported through our official drivers and specifications. You can start with a replica set (the minimum recommended production deployment) and horizontally scale with a sharded cluster (each shard is backed by a replica set).Storage-level replication like EMC’s SRDF happens at a lower level and in theory should be transparent to applications like a database server. However, this is not a configuration we directly test or support. For best performance we would typically suggest local storage with SSD or NVMe rather than SAN or network storage.For general recommendations, please see Disk and Storage Systems in the MongoDB production notes.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you Stennie for the clarification…!",
"username": "Nagarajan_Palaniappa"
},
{
"code": "",
"text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Will MongoDB supports EMC SRDF | 2020-06-17T03:16:54.960Z | Will MongoDB supports EMC SRDF | 2,339 |
null | [
"java",
"legacy-realm-cloud"
] | [
{
"code": "{\n \"assessmentId\": \"njjskdsk\",\n \"_partition\": \"snja\",\n \"answers\": [\n {\n \"question_id\": \"52fdff13-2dfc-4d6d-85b5-f9116ad30f6f2\",\n \"visible\": true,\n \"fieldValue\": [\n {\n \"option_id\": \"qwer\",\n \"textArea\": \"qwer\",\n \"subOptions\": [\n \"dgd\",\n \"fdgfd\",\n \"ffdg\"\n ],\n \"trial\": 1\n }\n ]\n }\n ]\n}\n {\n \"title\": \"AssessmentAnswer\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"_partition\": {\n \"bsonType\": \"string\"\n },\n \"answers\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"fieldValue\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"option_id\": {\n \"bsonType\": \"string\"\n },\n \"subOptions\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"textArea\": {\n \"bsonType\": \"string\"\n },\n \"trial\": {\n \"bsonType\": \"int\"\n }\n }\n }\n },\n \"question_id\": {\n \"bsonType\": \"string\"\n },\n \"visible\": {\n \"bsonType\": \"bool\"\n }\n }\n }\n },\n \"assessmentId\": {\n \"bsonType\": \"string\"\n }\n }\n }\n",
"text": "Hi, i am using MongoDB realm client SDK in android in java language. I followed github https://github.com/mongodb-university/realm-tutorial link for demo first. It was working fine. After that i created new project with some complex schema, then it shows following error:E/REALM_JAVA: Session Error[wss://realm.mongodb.com/]: BAD_CHANGESET(realm::sync::ProtocolError:212): Bad changeset (UPLOAD)Following is detail of json which i want to store in mongoDB collection and its schema:Json:Schema:And i am also using Data Models from Sync tab of Realm UI.\nplease help me with it, where i am wrong.",
"username": "Gouravdeep_Singh"
},
{
"code": "",
"text": "@Gouravdeep_Singh If you are changing around your schema on the client side and then attempting to reconnect later you will want to wipe your emulator state because there is local state that may cause this issue.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Android BAD_CHANGESET(realm::sync::ProtocolError:212): Bad changeset (UPLOAD) Error | 2020-06-16T10:44:27.282Z | Android BAD_CHANGESET(realm::sync::ProtocolError:212): Bad changeset (UPLOAD) Error | 3,807 |
null | [
"atlas-functions",
"app-services-user-auth"
] | [
{
"code": "",
"text": "Currently, I am working on a Custom function authentication.\nCould anyone help me how to call the function from my client as we have for the anonymous, emailPassword in https://docs.mongodb.com/realm/react-native/authenticate/",
"username": "Amiya_Panigrahi"
},
{
"code": "",
"text": "@Amiya_Panigrahi You should be able to use this method:\nhttps://docs.mongodb.com/realm-sdks/js/10.0.0-beta.6/Realm.Credentials.html#.function",
"username": "Ian_Ward"
}
] | Call to a Custom Auth Function | 2020-06-13T07:36:43.044Z | Call to a Custom Auth Function | 1,408 |
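For reference, a minimal sketch of logging in with the custom-function provider from the linked SDK page could look like this; the payload shape is whatever your authentication function expects, so the field used here is an assumption.

```js
// Realm JS SDK (v10 beta) – custom function login sketch
const credentials = Realm.Credentials.function({ username: "someUser" }); // payload is app-defined
const user = await app.logIn(credentials);
console.log("Logged in with id:", user.id);
```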
null | [
"atlas-functions"
] | [
{
"code": "[\n {\n \"_id\": {\n \"carId\": \"5e90838738eff556f0fa48d0\",\n \"carMake\": \"BMW\"\n },\n \"count\": {\n \"$numberLong\": \"2\"\n }\n }\n]\n[\n {\n car: { \"carId\": \"5e90838738eff556f0fa48d0\", \"carMake\": \"BMW\" },\n count: 2\n }\n]\n const summary = cars.aggregate([\n { $match: { accountId: accountId } }, \n { $group : { _id:{carId:\"$carId\", carMake:\"$carMake\"}, \n count: {\"$sum\":1}}}\n ])\n return summary;\n",
"text": "Hello,I’ve written a function with aggregation in Realm and am getting an output that looks like this:I’m trying to getI am not sure if by default Real return EJSON so\nI tried EJSON.parse(summary) and get an error.\nAny idea how to format the aggregation?here is a snippet of the function",
"username": "Herb_Ramos"
},
{
"code": "",
"text": "Hi Herb\nI have the same issue while running the function.\nAs per my understanding “return” always modifies the object to EJOSN format.You can see the actual result by console.log after parsing instead of “return”",
"username": "Amiya_Panigrahi"
},
{
"code": "",
"text": "Thanks for the reply Amiya.\nIndeed, JSON.stringify(summary) is what I had to do to correct the issue.",
"username": "Herb_Ramos"
}
] | Realm function JSON ouput | 2020-06-13T23:19:11.033Z | Realm function JSON ouput | 2,869 |
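A sketch of the fix discussed in this thread, applied inside the Realm function itself, might look like the following; the database name and the need for toArray() are assumptions about the service API rather than details confirmed in the thread.

```js
exports = async function(accountId) {
  const cars = context.services.get("mongodb-atlas").db("mydb").collection("cars");

  const summary = await cars.aggregate([
    { $match: { accountId: accountId } },
    { $group: { _id: { carId: "$carId", carMake: "$carMake" }, count: { $sum: 1 } } }
  ]).toArray();

  // Convert EJSON values (e.g. {"$numberLong": "2"}) into plain JSON before returning
  return JSON.parse(JSON.stringify(summary));
};
```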